{% include JB/setup %}
Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. Apache Spark is supported in Zeppelin with the Spark interpreter group, which consists of the five interpreters below.
The Spark interpreter can be configured with properties provided by Zeppelin. You can also set other Spark properties which are not listed in the table. For a list of additional properties, refer to Spark Available Properties.
Without any configuration, the Spark interpreter works out of the box in local mode. But if you want to connect to your Spark cluster, you'll need to follow the two simple steps below.
In `conf/zeppelin-env.sh`, export the `SPARK_HOME` environment variable with your Spark installation path.

For example,

```bash
export SPARK_HOME=/usr/lib/spark
```
You can optionally set more environment variables:

```bash
# set hadoop conf dir
export HADOOP_CONF_DIR=/usr/lib/hadoop

# set options to pass spark-submit command
export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.2.0"

# extra classpath. e.g. set classpath for hive-site.xml
export ZEPPELIN_INTP_CLASSPATH_OVERRIDES=/etc/hive/conf
```
For Windows, ensure you have `winutils.exe` in `%HADOOP_HOME%\bin`. Please see Problems running Hadoop on Windows for the details.
After starting Zeppelin, go to the Interpreter menu and edit the `master` property in your Spark interpreter setting. The value may vary depending on your Spark cluster deployment type.
For example, the `master` property might take one of the following values (host names and ports below are placeholders):
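```
local[*]                  # run Spark locally with as many worker threads as cores
spark://masterhost:7077   # connect to a standalone Spark cluster
yarn-client               # run on YARN in client mode
yarn-cluster              # run on YARN in cluster mode (supported from Zeppelin 0.8.0)
mesos://masterhost:5050   # connect to a Mesos cluster
```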
That's it. In this way, Zeppelin will work with any version of Spark and any deployment type without rebuilding Zeppelin. For further information about Spark & Zeppelin version compatibility, please refer to the "Available Interpreters" section in the Zeppelin download page.
Note that without exporting `SPARK_HOME`, Zeppelin runs in local mode with the included version of Spark. The included version may vary depending on the build profile.
Zeppelin supports both yarn client and yarn cluster mode (yarn cluster mode is supported from 0.8.0). For yarn mode, you must specify `SPARK_HOME` and `HADOOP_CONF_DIR`. You can specify them either in `zeppelin-env.sh` or in the interpreter setting page. Specifying them in `zeppelin-env.sh` means you can use only one version of Spark and Hadoop, while specifying them in the interpreter setting page means you can use multiple versions of Spark and Hadoop in one Zeppelin instance.
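For example, a per-interpreter yarn setup configured entirely on the interpreter setting page might look like the following properties (the paths are placeholders):

```
SPARK_HOME        /opt/spark
HADOOP_CONF_DIR   /etc/hadoop/conf
master            yarn-cluster
```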
There is a new version of SparkInterpreter with better Spark support and code completion starting from Zeppelin 0.8.0. It is enabled by default, but you can still use the old version of SparkInterpreter by setting `zeppelin.spark.useNew` to `false` in its interpreter setting.
SparkContext, SQLContext and ZeppelinContext are automatically created and exposed as the variables `sc`, `sqlContext` and `z`, respectively, in the Scala, Python and R environments. Starting from 0.6.1, SparkSession is available as the variable `spark` when you are using Spark 2.x.
Note that the Scala, Python and R environments share the same SparkContext, SQLContext and ZeppelinContext instance.
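As a quick illustration (a minimal sketch, assuming Spark 2.x so that `spark` is present), a Scala paragraph can use these variables directly:

```scala
%spark
// these variables are injected by the interpreter; no setup code is needed
println(sc.version)                      // SparkContext
println(sqlContext.getClass.getName)     // SQLContext
val df = spark.range(5).toDF("id")       // SparkSession (Spark 2.x only)
df.show()
```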
There are two kinds of properties that are passed to SparkConf:

* Standard Spark properties (prefixed with `spark.`), e.g. `spark.executor.memory` will be passed to SparkConf as-is.
* Non-standard properties (prefixed with `zeppelin.spark.`), e.g. for `zeppelin.spark.property_1`, `property_1` will be passed to SparkConf (the `zeppelin.spark.` prefix is stripped).
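As an illustrative sketch, assuming `spark.executor.memory=2g` and `zeppelin.spark.property_1=value_1` were added in the interpreter setting (both names and values here are hypothetical), you can check the resulting SparkConf:

```scala
%spark
// the spark.-prefixed property is passed through unchanged
println(sc.getConf.get("spark.executor.memory"))   // 2g

// the zeppelin.spark. prefix is stripped before being passed to SparkConf
println(sc.getConf.get("property_1"))              // value_1
```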
There are two ways to load external libraries in the Spark interpreter. The first is using the interpreter setting menu, and the second is loading Spark properties.
Please see Dependency Management for the details.
Once `SPARK_HOME` is set in `conf/zeppelin-env.sh`, Zeppelin uses `spark-submit` as the Spark interpreter runner. `spark-submit` supports two ways to load configurations. The first is command line options such as `--master`; Zeppelin can pass these options to `spark-submit` by exporting `SPARK_SUBMIT_OPTIONS` in `conf/zeppelin-env.sh`. The second is reading configuration options from `SPARK_HOME/conf/spark-defaults.conf`. The Spark properties that users can set to distribute libraries are `spark.jars`, `spark.jars.packages` and `spark.files`.
Here are a few examples:
`SPARK_SUBMIT_OPTIONS` in `conf/zeppelin-env.sh`:

```bash
export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.2.0 --jars /path/mylib1.jar,/path/mylib2.jar --files /path/mylib1.py,/path/mylib2.zip,/path/mylib3.egg"
```
`SPARK_HOME/conf/spark-defaults.conf`:

```
spark.jars            /path/mylib1.jar,/path/mylib2.jar
spark.jars.packages   com.databricks:spark-csv_2.10:1.2.0
spark.files           /path/mylib1.py,/path/mylib2.egg,/path/mylib3.zip
```
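Once a package such as spark-csv above is on the classpath, it can be used from a paragraph. The path below is a placeholder, and this is the Spark 1.x style; in Spark 2.x the built-in `csv` format can be used instead:

```scala
%spark
// read a CSV file with the com.databricks:spark-csv package loaded above
val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .load("/path/to/some.csv")   // hypothetical path
df.printSchema()
```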
Note: the `%spark.dep` interpreter loads libraries to `%spark` and `%spark.pyspark` but not to the `%spark.sql` interpreter, so we recommend you use the first option instead.
When your code requires an external library, instead of doing a download/copy/restart of Zeppelin, you can easily do the following jobs using the `%spark.dep` interpreter.
The dep interpreter leverages the Scala environment, so you can write any Scala code here. Note that the `%spark.dep` interpreter should be used before `%spark`, `%spark.pyspark` and `%spark.sql`.
Here are the usages:
```scala
%spark.dep
z.reset() // clean up previously added artifact and repository

// add maven repository
z.addRepo("RepoName").url("RepoURL")

// add maven snapshot repository
z.addRepo("RepoName").url("RepoURL").snapshot()

// add credentials for private maven repository
z.addRepo("RepoName").url("RepoURL").username("username").password("password")

// add artifact from filesystem
z.load("/path/to.jar")

// add artifact from maven repository, with no dependency
z.load("groupId:artifactId:version").excludeAll()

// add artifact recursively
z.load("groupId:artifactId:version")

// add artifact recursively except comma separated GroupID:ArtifactId list
z.load("groupId:artifactId:version").exclude("groupId:artifactId,groupId:artifactId, ...")

// exclude with pattern
z.load("groupId:artifactId:version").exclude(*)
z.load("groupId:artifactId:version").exclude("groupId:artifactId:*")
z.load("groupId:artifactId:version").exclude("groupId:*")

// local() skips adding artifact to spark clusters (skipping sc.addJar())
z.load("groupId:artifactId:version").local()
```
Zeppelin automatically injects `ZeppelinContext` as the variable `z` in your Scala/Python environment. `ZeppelinContext` provides some additional functions and utilities. See Zeppelin-Context for more details.
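For example, a small sketch of two commonly used helpers (see the Zeppelin-Context documentation for the authoritative list):

```scala
%spark
// render a DataFrame with Zeppelin's built-in table/chart visualization
val df = spark.range(3).toDF("id")
z.show(df)

// share a value with other paragraphs in the same note
z.put("myKey", 42)
println(z.get("myKey"))
```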
Both the `python` and `pyspark` interpreters have built-in support for inline visualization using matplotlib, a popular plotting library for Python. More details can be found in the Python interpreter documentation, since matplotlib support is identical. More advanced interactive plotting can be done with pyspark by utilizing Zeppelin's built-in Angular Display System.
By default, each SQL statement runs sequentially in `%spark.sql`. But you can run them concurrently with the following setup.

1. Set `zeppelin.spark.concurrentSQL` to `true` to enable the concurrent SQL feature; underneath, Zeppelin will switch Spark to the fair scheduler. Also set `zeppelin.spark.concurrentSQL.max` to control the max number of SQL statements running concurrently.
2. Create `fairscheduler.xml` under your `SPARK_CONF_DIR` to configure pools; check the official Spark doc Configuring Pool Properties (see the sketch after this list).
3. Set the pool as a paragraph property, e.g. `%spark(pool=pool1) sql statement`.
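A minimal `fairscheduler.xml` sketch, following the Spark "Configuring Pool Properties" documentation (the pool name `pool1` and the values are examples only):

```xml
<?xml version="1.0"?>
<allocations>
  <pool name="pool1">
    <!-- FAIR so that statements submitted to this pool share resources -->
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>2</minShare>
  </pool>
</allocations>
```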
This feature is available for all versions of Scala Spark and PySpark. For SparkR, it is only available starting from 2.3.0.
You can choose one of the `shared`, `scoped` and `isolated` options when you configure the Spark interpreter. In `scoped` mode (experimental), the Spark interpreter creates a separate Scala compiler per notebook but shares a single SparkContext. In `isolated` mode, it creates a separate SparkContext per notebook.
By default, Zeppelin uses IPython in `pyspark` when IPython is available; otherwise it falls back to the original PySpark implementation. If you don't want to use IPython, you can set `zeppelin.pyspark.useIPython` to `false` in the interpreter setting. For the IPython features, you can refer to the Python Interpreter documentation.
Logical setup with Zeppelin, Kerberos Key Distribution Center (KDC), and Spark on YARN:
On the server where Zeppelin is installed, install the Kerberos client modules and configuration, `krb5.conf`. This is to make the server communicate with the KDC.
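A minimal `krb5.conf` sketch, with a placeholder realm and KDC host (your Kerberos administrator supplies the real values):

```
[libdefaults]
  default_realm = EXAMPLE.COM

[realms]
  EXAMPLE.COM = {
    kdc = kdc.example.com
    admin_server = kdc.example.com
  }
```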
Set `SPARK_HOME` in `[ZEPPELIN_HOME]/conf/zeppelin-env.sh` to use spark-submit (additionally, you might have to set `export HADOOP_CONF_DIR=/etc/hadoop/conf`).
Add the two properties below to the Spark configuration (`[SPARK_HOME]/conf/spark-defaults.conf`):

```
spark.yarn.principal
spark.yarn.keytab
```
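For example, with a hypothetical principal and keytab path:

```
spark.yarn.principal   zeppelin@EXAMPLE.COM
spark.yarn.keytab      /etc/security/keytabs/zeppelin.keytab
```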
NOTE: If you do not have permission to access the above spark-defaults.conf file, you can optionally add the above lines to the Spark interpreter setting through the Interpreter tab in the Zeppelin UI.