---
layout: page
title: "Spark Interpreter Group"
description: ""
group: manual
---

{% include JB/setup %}

## Spark Interpreter

Apache Spark is supported in Zeppelin with the Spark interpreter group, which consists of four interpreters: `%spark` (Scala), `%pyspark` (Python), `%sql` (Spark SQL) and `%dep` (dependency loading).



## Configuration

Without any configuration, the Spark interpreter works out of the box in local mode. But if you want to connect to your Spark cluster, you'll need to follow two simple steps.

### 1. Export SPARK_HOME

In `conf/zeppelin-env.sh`, export the `SPARK_HOME` environment variable with your Spark installation path.

For example:

```bash
export SPARK_HOME=/usr/lib/spark
```

You can optionally export `HADOOP_CONF_DIR` and `SPARK_SUBMIT_OPTIONS`:

```bash
export HADOOP_CONF_DIR=/usr/lib/hadoop
export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.2.0"
```

### 2. Set master in Interpreter menu

After starting Zeppelin, go to the Interpreter menu and edit the `master` property in your Spark interpreter setting. The value may vary depending on your Spark cluster deployment type.

For example:

* `local[*]` in local mode
* `spark://master:7077` in a standalone cluster
* `yarn-client` in Yarn client mode
* `mesos://host:5050` in a Mesos cluster

Note that without exporting `SPARK_HOME`, Zeppelin runs in local mode with the included version of Spark. The included version may vary depending on the build profile.



## SparkContext, SQLContext, ZeppelinContext

SparkContext, SQLContext and ZeppelinContext are automatically created and exposed as variables named `sc`, `sqlContext` and `z`, respectively, in both the Scala and Python environments.

Note that the Scala and Python environments share the same SparkContext, SQLContext and ZeppelinContext instances.
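For instance, a `%spark` paragraph can use these variables directly, without any imports or initialization. A minimal sketch (the JSON path and the `people` table name are assumptions for illustration):

```scala
%spark
// sc and sqlContext are already created by Zeppelin
val rdd = sc.parallelize(1 to 100)
println(rdd.sum())

// the shared SQLContext can read data and register tables
// that are then also visible to %sql and %pyspark paragraphs
val df = sqlContext.read.json("/tmp/people.json")   // hypothetical path
df.registerTempTable("people")
```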



## Dependency Management

There are two ways to load external libraries in the Spark interpreter: through the `%dep` interpreter, or through Spark properties.

### 1. Dynamic Dependency Loading via %dep interpreter

When your code requires an external library, instead of downloading it, copying it and restarting Zeppelin, you can do the following with the `%dep` interpreter:

* Load libraries recursively from a Maven repository
* Load libraries from the local filesystem
* Add an additional Maven repository
* Automatically add libraries to the Spark cluster (this can be turned off)

The `%dep` interpreter leverages the Scala environment, so you can write any Scala code here. Note that `%dep` should be used before `%spark`, `%pyspark` and `%sql`.

Here are some usage examples:

```scala
%dep
z.reset() // clean up previously added artifacts and repositories

// add maven repository
z.addRepo("RepoName").url("RepoURL")

// add maven snapshot repository
z.addRepo("RepoName").url("RepoURL").snapshot()

// add credentials for private maven repository
z.addRepo("RepoName").url("RepoURL").username("username").password("password")

// add artifact from filesystem
z.load("/path/to.jar")

// add artifact from maven repository, with no dependency
z.load("groupId:artifactId:version").excludeAll()

// add artifact recursively
z.load("groupId:artifactId:version")

// add artifact recursively except comma separated GroupId:ArtifactId list
z.load("groupId:artifactId:version").exclude("groupId:artifactId,groupId:artifactId, ...")

// exclude with pattern
z.load("groupId:artifactId:version").exclude("*")
z.load("groupId:artifactId:version").exclude("groupId:artifactId:*")
z.load("groupId:artifactId:version").exclude("groupId:*")

// local() skips adding artifact to spark clusters (skipping sc.addJar())
z.load("groupId:artifactId:version").local()
```
### 2. Loading libraries via Spark properties

Alternatively, you can distribute libraries by setting Spark properties, either by exporting `SPARK_SUBMIT_OPTIONS` in `conf/zeppelin-env.sh`:

```bash
export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.2.0 --jars /path/mylib1.jar,/path/mylib2.jar --files /path/mylib1.py,/path/mylib2.zip,/path/mylib3.egg"
```

or by setting the corresponding properties in `SPARK_HOME/conf/spark-defaults.conf`:

```
spark.jars              /path/mylib1.jar,/path/mylib2.jar
spark.jars.packages     com.databricks:spark-csv_2.10:1.2.0
spark.files             /path/mylib1.py,/path/mylib2.egg,/path/mylib3.zip
```
    

## ZeppelinContext

Zeppelin automatically injects ZeppelinContext as the variable `z` in your Scala and Python environments. ZeppelinContext provides some additional functions and utilities.

ZeppelinContext extends a map and is shared between the Scala and Python environments, so you can put an object from Scala and read it from Python, and vice versa.

Put an object from Scala:

```scala
%spark
val myObject = ...
z.put("objName", myObject)
```

Get the object from Python:

```python
%pyspark
myObject = z.get("objName")
```

ZeppelinContext also provides functions for creating forms. In the Scala and Python environments, you can create forms programmatically.

```scala
%spark
/* Create text input form */
z.input("formName")

/* Create text input form with default value */
z.input("formName", "defaultValue")

/* Create select form */
z.select("formName", Seq(("option1", "option1DisplayName"),
                         ("option2", "option2DisplayName")))

/* Create select form with default value */
z.select("formName", "option1", Seq(("option1", "option1DisplayName"),
                                    ("option2", "option2DisplayName")))
```

In the SQL environment, you can create forms using a simple template.

```sql
%sql
select * from ${table=defaultTableName} where text like '%${search}%'
```

To learn more about dynamic forms, check out Dynamic Form.