---
layout: page
title: "Generic JDBC Interpreter for Apache Zeppelin"
description: "Generic JDBC Interpreter lets you create a JDBC connection to any data source. You can use Postgres, MySql, MariaDB, Redshift, Apache Hive, Presto/Trino, Impala, Apache Phoenix, Apache Drill and Apache Tajo using JDBC interpreter."
group: interpreter
---

{% include JB/setup %}

# Generic JDBC Interpreter for Apache Zeppelin

## Overview

The JDBC interpreter lets you create a JDBC connection to any data source seamlessly.

Inserts, Updates, and Upserts are applied immediately after running each statement.

So far, it has been tested with the databases covered in the Examples section below.

If you are using a database that is not in that list, please feel free to share your use case. It will help improve the functionality of the JDBC interpreter.

## Create a new JDBC Interpreter

First, click the `+ Create` button at the top-right corner of the interpreter settings page.

Fill the `Interpreter name` field with whatever you want to use as the alias (e.g. `mysql`, `mysql2`, `hive`, `redshift`, etc.). Note that this alias will be used as `%interpreter_name` to call the interpreter in a paragraph. Then select `jdbc` as the Interpreter group.

The default driver of the JDBC interpreter is set to PostgreSQL; Zeppelin ships with the PostgreSQL driver jar, so you don't need to add any dependencies (e.g. the artifact name or the path of the PostgreSQL driver jar) for a PostgreSQL connection. The JDBC interpreter properties are defined by default as below.
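A minimal sketch of the key default properties (exact values may vary by Zeppelin version):

```
default.driver    org.postgresql.Driver
default.url       jdbc:postgresql://localhost:5432/
default.user      gpadmin
```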

If you want to connect to other databases such as MySQL, Redshift, or Hive, you need to edit these property values. You can also use Credentials for JDBC authentication: if the `default.user` and `default.password` properties are deleted (using the X button) in the interpreter settings page, the JDBC interpreter will take the account information from Credentials.

The example below is for a MySQL connection.
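A sketch of the edited property values (host, port, and credentials are placeholders):

```
default.driver    com.mysql.jdbc.Driver
default.url       jdbc:mysql://localhost:3306/
default.user      mysql_user
default.password  mysql_password
```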

The last step is the Dependency Setting. Since Zeppelin includes only the PostgreSQL driver jar by default, you need to add each driver's Maven coordinates or the JDBC driver's jar file path for the other databases.
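For example, for MySQL you could add the connector artifact under Dependencies (the version below is only illustrative):

```
mysql:mysql-connector-java:5.1.38
```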

That's it. You can find more JDBC connection setting examples (MySQL, MariaDB, Redshift, Apache Hive, Presto/Trino, Impala, Apache Phoenix, and Apache Tajo) in the Examples section below.

## JDBC Interpreter Datasource Pool Configuration

The JDBC interpreter uses connection pooling and lets you tune the pool per interpreter. For example, you can configure `default.validationQuery='select 1'` and `default.testOnBorrow=true` in the interpreter configuration to avoid the "Invalid SessionHandle" runtime error caused by session timeouts when connecting to HiveServer2 through the JDBC interpreter.

The JDBC interpreter supports the following database connection pool configurations:
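A sketch of commonly used pool properties; these map onto Apache Commons DBCP settings, so check the DBCP documentation and your Zeppelin version for the authoritative list and defaults:

```
default.validationQuery                 SQL used to validate connections (e.g. select 1)
default.testOnBorrow                    Validate a connection before borrowing it from the pool
default.testOnCreate                    Validate a connection after it is created
default.testOnReturn                    Validate a connection when it is returned to the pool
default.testWhileIdle                   Validate idle connections in the background
default.timeBetweenEvictionRunsMillis   Interval between idle-connection eviction runs
```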

## More properties

There are more JDBC interpreter properties that you can specify, like below.
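A few of them, with brief descriptions (see the Zeppelin JDBC interpreter reference for the full list):

```
common.max_count               Maximum number of SQL result rows to display
zeppelin.jdbc.auth.type        Authentication type: SIMPLE or KERBEROS
zeppelin.jdbc.principal        Kerberos principal of the client
zeppelin.jdbc.keytab.location  Path to the client's Kerberos keytab file
zeppelin.jdbc.interpolation    Enable ZeppelinContext object interpolation (default: false)
```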

You can also add more properties by using this method. For example, if a connection needs a `schema` parameter, you would add a property as follows:
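A hypothetical entry (both the name and the value are placeholders for your setup):

```
name:  default.schema
value: schema_name
```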

## How to use

### Run the paragraph with JDBC interpreter

To test whether your database and Zeppelin are connected successfully, type `%jdbc_interpreter_name` (e.g. `%mysql`) at the top of a paragraph and run `show databases`.

```sql
%jdbc_interpreter_name
show databases
```

If the paragraph is FINISHED without any errors, a new paragraph will be automatically added after the previous one with `%jdbc_interpreter_name`, so you don't need to type this prefix in every paragraph's header.

### Multiple SQL statements

You can write multiple SQL statements in one paragraph; just separate them with semicolons, e.g.

```sql
%jdbc_interpreter_name
USE zeppelin_demo;

CREATE TABLE pet (name VARCHAR(20), owner VARCHAR(20),
       species VARCHAR(20), sex CHAR(1), birth DATE, death DATE);
```

### SQL Comment

Two kinds of SQL comments are supported:

* single-line comments, starting with `--`
* multi-line comments, wrapped in `/* ... */`

```sql
%jdbc_interpreter_name

-- single line comment
show tables;

/* multiple
   line
   comment */
select * from test_1;
```

### Apply Zeppelin Dynamic Forms

You can leverage Zeppelin Dynamic Form inside your queries. You can use both the text input and select form parametrization features.
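For example, a query might combine a text-input form and a select form like this (the table, column, and form names are placeholders; `${name=default}` creates a text input and `${name=default,opt1|opt2}` creates a select):

```sql
%jdbc_interpreter_name
select * from ${table=orders}
where status = '${status=open,open|closed|pending}'
limit ${limit=10}
```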

### Run SQL Continuously

By default, the SQL statements in a paragraph are executed only once. But you can run them continuously by specifying the local property `refreshInterval` (unit: milliseconds), so that the statements are re-executed every `refreshInterval` milliseconds. This is useful when the data in your database is updated continuously by an external system, as it lets you build a dynamic dashboard in Zeppelin.

e.g. Here we query a MySQL database that is updated continuously by an external system.
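A minimal sketch (the interpreter alias, interval, and table are placeholders):

```sql
%mysql(refreshInterval=3000)
select count(*) from events
```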

### Usage precode

You can set a precode for each data source. The precode runs once, when the connection is opened.

#### Properties

An example of interpreter settings for two data sources, each of which has its own precode parameter.
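A sketch of the two settings, with illustrative precode values: the default `jdbc` interpreter pointing at PostgreSQL, and a `mysql` interpreter.

```
# %jdbc (PostgreSQL)
default.precode   set search_path='test_path'

# %mysql
default.precode   set @v='some value'
```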

#### Usage

Test the execution of the precode for each data source.

```sql
%jdbc
show search_path
```

This returns the value of `search_path`, which is set in the default `jdbc` (PostgreSQL) interpreter's `default.precode`.

```sql
%mysql
select @v
```

This returns the value of `@v`, which is set in the `mysql` interpreter's `default.precode`.

## Examples

Here are some examples you can refer to. Beyond the connectors below, you can connect to any database, as long as it can be configured with its JDBC driver.

### Postgres

#### Properties
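A minimal sketch (host, port, and credentials are placeholders):

```
default.driver    org.postgresql.Driver
default.url       jdbc:postgresql://localhost:5432/
default.user      postgres_user
default.password  postgres_password
```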

Postgres JDBC Driver Docs

#### Dependencies

Maven Repository: org.postgresql:postgresql

### MySQL

#### Properties
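The same sketch as in the MySQL example above (host, port, and credentials are placeholders):

```
default.driver    com.mysql.jdbc.Driver
default.url       jdbc:mysql://localhost:3306/
default.user      mysql_user
default.password  mysql_password
```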

MySQL JDBC Driver Docs

#### Dependencies

Maven Repository: mysql:mysql-connector-java

### MariaDB

#### Properties
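A minimal sketch (host, port, and credentials are placeholders):

```
default.driver    org.mariadb.jdbc.Driver
default.url       jdbc:mariadb://localhost:3306/
default.user      maria_user
default.password  maria_password
```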

MariaDB JDBC Driver Docs

#### Dependencies

Maven Repository: org.mariadb.jdbc:mariadb-java-client

### Redshift

#### Properties
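A minimal sketch (the cluster endpoint, database, and credentials are placeholders):

```
default.driver    com.amazon.redshift.jdbc42.Driver
default.url       jdbc:redshift://your-cluster.us-east-1.redshift.amazonaws.com:5439/dev
default.user      redshift_user
default.password  redshift_password
```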

AWS Redshift JDBC Driver Docs

#### Dependencies

Maven Repository: com.amazonaws:aws-java-sdk-redshift

### Apache Hive

Zeppelin connects to HiveServer2 to run Hive SQL via the Hive JDBC driver. There are two ways of connecting to Hive:

* Connect to Hive without KERBEROS
* Connect to Hive with KERBEROS

Each case requires different settings.

#### Connect to Hive without KERBEROS

In this scenario, you need at least the following settings. By default, the Hive job runs as the user given in `default.user`. Refer to Impersonation below if you want the Hive job to run as the Zeppelin login user when authentication is enabled.
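A minimal sketch (host, port, and user are placeholders):

```
default.driver   org.apache.hive.jdbc.HiveDriver
default.url      jdbc:hive2://localhost:10000
default.user     hive_user
```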

#### Connect to Hive with KERBEROS

In this scenario, you need at least the following settings. By default, the Hive job runs as the user of the client principal (`zeppelin.jdbc.principal`). Refer to Impersonation below if you want the Hive job to run as the Zeppelin login user when authentication is enabled.
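A minimal sketch (host, server principal, client principal, and keytab path are placeholders):

```
default.driver                 org.apache.hive.jdbc.HiveDriver
default.url                    jdbc:hive2://localhost:10000;principal=hive/_HOST@EXAMPLE.COM
zeppelin.jdbc.auth.type        KERBEROS
zeppelin.jdbc.principal        zeppelin@EXAMPLE.COM
zeppelin.jdbc.keytab.location  /path/to/zeppelin.keytab
```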

Maven Repository: org.apache.hive:hive-jdbc

#### Impersonation

When the Zeppelin server is running with authentication enabled, the interpreter can utilize Hive's user proxy feature, i.e. send an extra parameter for creating and running a session ("hive.server2.proxy.user": "${loggedInUser}"). This is particularly useful when multiple users share a notebook.

To enable this, set the following:

* `default.proxy.user.property` to `hive.server2.proxy.user`

See User Impersonation in the interpreter documentation for more information.

#### Sample configuration
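A sketch of the relevant properties when impersonation is enabled (values are placeholders):

```
default.url                  jdbc:hive2://localhost:10000
default.proxy.user.property  hive.server2.proxy.user
```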

### Presto/Trino

#### Properties
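A minimal sketch for each engine (host, port, catalog, and user are placeholders; the driver class depends on which engine and driver version you use):

```
# Trino
default.driver   io.trino.jdbc.TrinoDriver
default.url      jdbc:trino://localhost:8080/hive
default.user     trino_user

# Presto
default.driver   com.facebook.presto.jdbc.PrestoDriver
default.url      jdbc:presto://localhost:8080/hive
default.user     presto_user
```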

Trino JDBC Driver Docs
Presto JDBC Driver Docs

#### Dependencies
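The corresponding driver artifacts are published as `io.trino:trino-jdbc` (Trino) and `com.facebook.presto:presto-jdbc` (Presto); pick the version matching your server.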

### Impala

#### Properties
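A minimal sketch using the Hive JDBC driver against Impala's HiveServer2-compatible port (host and port are placeholders):

```
default.driver   org.apache.hive.jdbc.HiveDriver
default.url      jdbc:hive2://localhost:21050/;auth=noSasl
default.user     impala_user
```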


Impala JDBC Driver Docs

#### Dependencies

### Apache Phoenix

Phoenix supports thick and thin connection types:

* the thick client connects directly to ZooKeeper and the HBase cluster;
* the thin client connects through the Phoenix Query Server.

Use the appropriate `default.driver`, `default.url`, and dependency artifact for your connection type.

#### Thick client connection

##### Properties
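A minimal sketch (the ZooKeeper quorum and HBase root znode are placeholders):

```
default.driver   org.apache.phoenix.jdbc.PhoenixDriver
default.url      jdbc:phoenix:localhost:2181:/hbase
```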
##### Dependencies

Maven Repository: org.apache.phoenix:phoenix-core

#### Thin client connection

##### Properties
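A minimal sketch (the Phoenix Query Server address is a placeholder):

```
default.driver   org.apache.phoenix.queryserver.client.Driver
default.url      jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF
```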
##### Dependencies

Before adding one of the dependencies below, check your Phoenix version first.

Maven Repository: org.apache.phoenix:phoenix-queryserver-client

### Apache Tajo

#### Properties
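A minimal sketch (host, port, and database are placeholders):

```
default.driver   org.apache.tajo.jdbc.TajoDriver
default.url      jdbc:tajo://localhost:26002/default
```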

Apache Tajo JDBC Driver Docs

#### Dependencies

Maven Repository: org.apache.tajo:tajo-jdbc

## Object Interpolation

The JDBC interpreter also supports interpolation of ZeppelinContext objects into the paragraph text. The following example shows one use of this facility:

In a Scala cell:

```scala
z.put("country_code", "KR")
// ...
```

In a later JDBC cell:

```sql
%jdbc_interpreter_name
select * from patents_list where
priority_country = '{country_code}' and filing_date like '2015-%'
```

Object interpolation is disabled by default. It can be enabled for all instances of the JDBC interpreter by setting the property `zeppelin.jdbc.interpolation` to `true` (see More properties above). More details of this feature can be found in the Spark interpreter documentation under Zeppelin-Context.

## Bug reporting

If you find a bug while using the JDBC interpreter, please create a JIRA ticket.