Apache Toree (Incubating)



The main goal of Toree is to provide the foundation for interactive applications to connect to and use Apache Spark.


Toree provides an interface that allows clients to interact with a Spark cluster. Clients can send libraries and snippets of code that are interpreted and run against a preconfigured Spark context. These snippets can do a variety of things:

  1. Define and run Spark jobs of all kinds
  2. Collect results from Spark and push them to the client
  3. Load necessary dependencies for the running code
  4. Start and monitor a stream
  5. ...

The main supported language is Scala, but it is also capable of processing both Python and R. It implements the latest Jupyter message protocol (5.0), so it can easily plug into the latest releases of Jupyter/IPython (3.2.x+ and 4.x+) for quick, interactive data exploration.


This project is currently not fully compliant with Apache release policy, as it includes a runtime dependency licensed under LGPL v3 (plus a static linking exception). An effort to re-license that dependency is underway (https://github.com/zeromq/jeromq/issues/327).

Try It

A version of Toree is deployed as part of the Try Jupyter! site. Select Scala 2.10.4 (Spark 1.4.1) under the New dropdown. Note that this version only supports Scala.


Develop

This project uses make as the entry point for build, test, and packaging. It supports two modes: local and vagrant. The default is local, and all commands (e.g. sbt) will be run locally on your machine. This means you need to install sbt, Jupyter/IPython, and the other development requirements locally. The second mode uses Vagrant to simplify the development experience: all commands are sent to a Vagrant box that has all necessary dependencies pre-installed. To run in vagrant mode, run export USE_VAGRANT=true.
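The mode toggle described above is just an environment variable read by the Makefile; a minimal sketch of switching modes for the current shell session:

```shell
# Default (local) mode: make targets run directly on your machine.
# Switch to vagrant mode for the rest of this shell session:
export USE_VAGRANT=true

# Subsequent targets (make dev, make test, ...) are now forwarded
# to the Vagrant box. Verify the toggle is set:
echo "USE_VAGRANT=$USE_VAGRANT"
```

To switch back to local mode, unset the variable (`unset USE_VAGRANT`) or start a fresh shell.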

To build and interact with Toree using Jupyter, run

make dev

This will start a Jupyter notebook server. In local mode it will be accessible at http://localhost:8888; in vagrant mode, use the Vagrant box's address instead. From here you can create notebooks that use Toree configured for Spark local mode.

Tests can be run by doing make test.

NOTE: Do not use sbt directly.

Build & Package

To build and package up Toree, run

make release

This results in two packages:

  • ./dist/toree-<VERSION>-binary-release.tar.gz is a simple package that contains the Toree JAR and an executable.
  • ./dist/toree-<VERSION>.tar.gz is a pip-installable package that adds Toree as a Jupyter kernel.

NOTE: make release uses docker. Please refer to docker installation instructions for your system. USE_VAGRANT is not supported by this make target.
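As a sketch of what to do with those artifacts, the second package can be installed with pip and registered with Jupyter. The version string below is a hypothetical placeholder; substitute whatever make release actually produced under ./dist:

```shell
# Hypothetical version string -- replace with the real one from ./dist.
VERSION=0.1.0.dev1

# Install the pip-installable package built above, then register
# Toree as a Jupyter kernel.
pip install ./dist/toree-${VERSION}.tar.gz
jupyter toree install
```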

Run Examples

To play with the example notebooks, run

make jupyter

A notebook server will be launched in a Docker container with Toree and some other dependencies installed. Refer to your Docker setup for the IP address. The notebook server will be at http://<ip>:8888/.
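If you run Docker through Docker Machine rather than natively, the host address can be looked up from the CLI. A sketch ("default" is the conventional machine name and an assumption here; substitute your own):

```shell
# Print the IP address of the Docker Machine VM hosting the container.
docker-machine ip default
# Then browse to http://<that-ip>:8888/
```

On a native Docker install, the notebook is typically reachable at http://localhost:8888/ instead.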


To install the latest pre-release of Toree from PyPI and register it as a Jupyter kernel, run

pip install --pre toree
jupyter toree install
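The installer accepts a few options for customizing the kernel. The flags below are assumptions taken from the installer's help output; verify them against jupyter toree install --help on your version:

```shell
# A sketch of a customized install. Assumed flags:
#   --user          install into the per-user kernel directory
#   --spark_home    point the kernel at a local Spark distribution
#   --interpreters  which language kernels to register
jupyter toree install --user --spark_home=/opt/spark --interpreters=Scala,PySpark

# Confirm the kernels are registered:
jupyter kernelspec list
```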

Reporting Issues

Refer to the project's JIRA issue tracker to browse existing issues and open new ones.


You can reach us through Gitter or our mailing list.


We are working on publishing binary releases of Toree soon. As part of our move into Apache Incubator, Toree will start a new version sequence starting at 0.1.

Our goal is to keep master up to date with the latest version of Spark. When new versions of Spark require specific code changes to Toree, we will branch out older Spark version support.

As it stands, we maintain several branches for legacy versions of Spark. The table below shows what is available now.

Branch | Apache Spark Version

Please note that for the most part, new features will mainly be added to the master branch.


We are working on porting our documentation into Apache. For the time being, you can refer to this Wiki and our Getting Started guide. You may also visit our website.