
Apache Calcite

Apache Calcite is a dynamic data management framework.

It was formerly called Optiq.

Getting Calcite

To run Apache Calcite, you can either download and build the source code from GitHub, or download a release and build from that.

Pre-built jars are in the Apache maven repository with the following Maven coordinates:
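For example, the core module can be declared as a Maven dependency along the following lines (the version shown is illustrative; check the repository for the current release):

    <dependency>
      <groupId>org.apache.calcite</groupId>
      <artifactId>calcite-core</artifactId>
      <version>1.0.0-incubating</version>
    </dependency>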



Calcite makes data anywhere, of any format, look like a database. For example, you can execute a complex ANSI-standard SQL statement on in-memory collections:

public static class HrSchema {
  public final Employee[] emps = ... ;
  public final Department[] depts = ...;
}

Properties info = new Properties();
info.setProperty("lex", "JAVA");
Connection connection =
    DriverManager.getConnection("jdbc:calcite:", info);
OptiqConnection optiqConnection =
    connection.unwrap(OptiqConnection.class);
ReflectiveSchema.create(
    optiqConnection.getRootSchema(), "hr", new HrSchema());
Statement statement = optiqConnection.createStatement();
ResultSet resultSet = statement.executeQuery(
    "select d.deptno, min(e.empid)\n"
    + "from hr.emps as e\n"
    + "join hr.depts as d\n"
    + "  on e.deptno = d.deptno\n"
    + "group by d.deptno\n"
    + "having count(*) > 1");

Where is the database? There is no database. The connection is completely empty until ReflectiveSchema.create registers a Java object as a schema and its collection fields emps and depts as tables.

Calcite does not want to own data; it does not even have a favorite data format. This example used in-memory data sets, and processed them using operators such as groupBy and join from the linq4j library. But Calcite can also process data in other formats, such as JDBC. In the first example, replace

    ReflectiveSchema.create(
        optiqConnection.getRootSchema(), "hr", new HrSchema());

with

    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setUrl("jdbc:mysql://localhost"); // point at your JDBC database
    JdbcSchema.create(optiqConnection, dataSource, rootSchema, "hr", "");

and Calcite will execute the same query in JDBC. To the application, the data and API are the same, but behind the scenes the implementation is very different. Calcite uses optimizer rules to push the JOIN and GROUP BY operations to the source database.
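For instance, for the query above, the JDBC adapter can send a single statement like the following to the back-end database rather than fetching raw rows and aggregating locally (the exact SQL generated depends on the planner and the target dialect):

    SELECT "d"."deptno", MIN("e"."empid")
    FROM "hr"."emps" AS "e"
    JOIN "hr"."depts" AS "d" ON "e"."deptno" = "d"."deptno"
    GROUP BY "d"."deptno"
    HAVING COUNT(*) > 1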

In-memory and JDBC are just two familiar examples. Calcite can handle any data source and data format. To add a data source, you need to write an adapter that tells Calcite what collections in the data source it should consider “tables”.

For more advanced integration, you can write optimizer rules. Optimizer rules allow Calcite to access data of a new format, allow you to register new operators (such as a better join algorithm), and allow Calcite to optimize how queries are translated to operators. Calcite will combine your rules and operators with built-in rules and operators, apply cost-based optimization, and generate an efficient plan.

Non-JDBC access

Calcite also allows front-ends other than SQL/JDBC. For example, you can execute queries in linq4j:

final OptiqConnection connection = ...;
ParameterExpression c = Expressions.parameter(Customer.class, "c");
for (Customer customer
    : connection.getRootSchema()
        .getTable("customer", Customer.class)
        .asQueryable()
        .where(
            Expressions.lambda(
                Expressions.lessThan(
                    Expressions.field(c, "customer_id"),
                    Expressions.constant(5)),
                c))) {
  ...
}

Linq4j understands the full query parse tree, and the Linq4j query provider for Calcite invokes Calcite as a query optimizer. If the customer table comes from a JDBC database (based on this code fragment, we really can't tell), then the optimal plan is to send the query

SELECT *
FROM "customer"
WHERE "customer_id" < 5

to the JDBC data source.

Writing an adapter

The optiq-csv project provides a CSV adapter that is fully functional for use in applications, yet simple enough to serve as a good template if you are writing your own adapter.

See the optiq-csv tutorial for information on using optiq-csv and writing adapters.

See the HOWTO for more information about using other adapters, and about using Calcite in general.


Status

The following features are complete.

  • Query parser, validator and optimizer
  • Support for reading models in JSON format
  • Many standard functions and aggregate functions
  • JDBC queries against Linq4j and JDBC back-ends
  • Linq4j front-end
  • SQL features: SELECT, FROM (including JOIN syntax), WHERE, GROUP BY (and aggregate functions including COUNT(DISTINCT ...)), HAVING, ORDER BY (including NULLS FIRST/LAST), set operations (UNION, INTERSECT, MINUS), sub-queries (including correlated sub-queries), windowed aggregates, LIMIT (syntax as Postgres)
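As an illustration, a single statement can combine several of the features listed above (table and column names here are hypothetical):

    SELECT deptno, COUNT(DISTINCT empid) AS headcount
    FROM emps
    WHERE deptno IN (SELECT deptno FROM depts)  -- sub-query
    GROUP BY deptno
    HAVING COUNT(*) > 1
    ORDER BY headcount DESC NULLS LAST
    LIMIT 5                                     -- Postgres-style LIMIT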

For more details, see the Reference guide.


Drivers

  • JDBC driver

Adapters
  • Apache Drill adapter
  • Cascading adapter (Lingual)
  • CSV adapter (optiq-csv)
  • JDBC adapter (part of calcite-core)
  • MongoDB adapter (calcite-mongodb)
  • Spark adapter (calcite-spark)
  • Splunk adapter (calcite-splunk)
  • Eclipse Memory Analyzer (MAT) adapter (mat-calcite-plugin)

More information

Pre-Apache resources

These resources, which we used when Calcite was called Optiq and before it joined the Apache Incubator, are for reference only. They may be out of date. Please don't post to the old mailing list or try to subscribe to it.


  • How to integrate Splunk with any data solution (Splunk User Conference, 2012)
  • Drill / SQL / Optiq (2013)
  • SQL on Big Data using Optiq (2013)
  • SQL Now! (NoSQL Now! conference, 2013)
  • Cost-based optimization in Hive (video) (Hadoop Summit, 2014)
  • Discardable, in-memory materialized query for Hadoop (video) (Hadoop Summit, 2014)
  • Cost-based optimization in Hive 0.14 (Seattle, 2014)


Apache Calcite is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.