commit 119911ba00c52afdadb0404ac1fc140bd5e0ff64
author:    Sean Welleck <wellecks@gmail.com>  Tue Apr 21 20:27:25 2015 -0400
committer: Sean Welleck <wellecks@gmail.com>  Tue Apr 21 20:27:25 2015 -0400
tree:      cabe7509009dc255e2cb5a1ba1f6a7b7f167c5d8
parent:    e82f48f18c2b9b15ba9079a5cc2b6ecef630bfa1
parent:    7f72db0e189617e5fc2ef29c12a3b5be6f38d07a

Merge pull request #105 from ibm-et/SwitchJVMOptOrder

Switch order of log level and env jvm options.
The Spark Kernel has one main goal: to provide the foundation for interactive applications to connect to and use Apache Spark.
The kernel provides several key features for applications:

- Define and run Spark Tasks
- Collect Results without a Datastore
  - Send execution results and streaming data back via the Spark Kernel to your applications
  - Use the Comm API (an abstraction of the IPython protocol) for more detailed data communication and synchronization between your applications and the Spark Kernel
- Host and Manage Applications Separately from Apache Spark
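To make the Comm API idea concrete, here is a minimal sketch of the `comm_open` message content defined by the IPython protocol (plain Python with no Jupyter dependency; the `my_app` target name and the payload are hypothetical illustrations, not part of the Spark Kernel itself):

```python
import uuid

def comm_open_content(target_name, data):
    """Build the content of an IPython-protocol comm_open message.

    A comm is identified by a comm_id (chosen by the side that opens it) and a
    target_name that the peer uses to route the message to a registered handler;
    `data` carries arbitrary application-specific JSON.
    """
    return {
        "comm_id": uuid.uuid4().hex,  # unique id shared by both sides of the comm
        "target_name": target_name,   # handler name registered on the peer
        "data": data,                 # application payload
    }

# Example: open a comm to a hypothetical "my_app" target with an initial payload.
content = comm_open_content("my_app", {"progress": 0})
print(content["target_name"])
```

Once the comm is open, both sides can exchange `comm_msg` messages with the same `comm_id`, which is what gives applications a two-way channel through the kernel without a datastore.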
The project intends to provide applications with the ability to send both packaged jars and code snippets. As it implements the latest IPython message protocol (5.0), the Spark Kernel can easily plug into the 3.x branch of IPython for quick, interactive data exploration. The Spark Kernel strives to be extensible, providing a pluggable interface for developers to add their own functionality.
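The protocol compatibility mentioned above comes down to a small wire format; the following is a hedged sketch (plain Python, no ZeroMQ or Jupyter dependency) of the version-5.0 message header that the kernel and an IPython 3.x frontend exchange, per the IPython messaging specification:

```python
import datetime
import uuid

def make_header(msg_type, session, username="kernel"):
    """Build a version-5.0 IPython message header.

    Every message (execute_request, comm_open, stream, ...) carries a header
    with these six fields; replies reference the request via a separate
    parent_header (not shown here).
    """
    return {
        "msg_id": uuid.uuid4().hex,                      # unique per message
        "session": session,                              # unique per connection/session
        "username": username,
        "date": datetime.datetime.utcnow().isoformat(),  # ISO 8601 timestamp
        "msg_type": msg_type,                            # e.g. "execute_request"
        "version": "5.0",                                # protocol version
    }

header = make_header("execute_request", session=uuid.uuid4().hex)
print(header["msg_type"], header["version"])
```

Because the header shape is fixed by the protocol rather than by any one kernel, a frontend that speaks version 5.0 can talk to the Spark Kernel the same way it talks to the stock IPython kernel.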
If you are new to the Spark Kernel, please see the Getting Started section.
For more information, please visit the Spark Kernel wiki.
For bug reporting and feature requests, please visit the Spark Kernel issue list.