<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<title>Spark Overview - Spark 0.9.0 Documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="css/bootstrap.min.css">
<style>
body {
padding-top: 60px;
padding-bottom: 40px;
}
</style>
<meta name="viewport" content="width=device-width">
<link rel="stylesheet" href="css/bootstrap-responsive.min.css">
<link rel="stylesheet" href="css/main.css">
<script src="js/vendor/modernizr-2.6.1-respond-1.1.0.min.js"></script>
<link rel="stylesheet" href="css/pygments-default.css">
<!-- Google analytics script -->
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-32518208-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</head>
<body>
<!--[if lt IE 7]>
<p class="chromeframe">You are using an outdated browser. <a href="http://browsehappy.com/">Upgrade your browser today</a> or <a href="http://www.google.com/chromeframe/?redirect=true">install Google Chrome Frame</a> to better experience this site.</p>
<![endif]-->
<!-- This code is taken from http://twitter.github.com/bootstrap/examples/hero.html -->
<div class="navbar navbar-fixed-top" id="topbar">
<div class="navbar-inner">
<div class="container">
<div class="brand"><a href="index.html">
<img src="img/spark-logo-hd.png" style="height:50px;"/></a><span class="version">0.9.0</span>
</div>
<ul class="nav">
<!--TODO(andyk): Add class="active" attribute to li some how.-->
<li><a href="index.html">Overview</a></li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">Programming Guides<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="quick-start.html">Quick Start</a></li>
<li><a href="scala-programming-guide.html">Spark in Scala</a></li>
<li><a href="java-programming-guide.html">Spark in Java</a></li>
<li><a href="python-programming-guide.html">Spark in Python</a></li>
<li class="divider"></li>
<li><a href="streaming-programming-guide.html">Spark Streaming</a></li>
<li><a href="mllib-guide.html">MLlib (Machine Learning)</a></li>
<li><a href="bagel-programming-guide.html">Bagel (Pregel on Spark)</a></li>
<li><a href="graphx-programming-guide.html">GraphX (Graph Processing)</a></li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">API Docs<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="api/core/index.html#org.apache.spark.package">Spark Core for Java/Scala</a></li>
<li><a href="api/pyspark/index.html">Spark Core for Python</a></li>
<li class="divider"></li>
<li><a href="api/streaming/index.html#org.apache.spark.streaming.package">Spark Streaming</a></li>
<li><a href="api/mllib/index.html#org.apache.spark.mllib.package">MLlib (Machine Learning)</a></li>
<li><a href="api/bagel/index.html#org.apache.spark.bagel.package">Bagel (Pregel on Spark)</a></li>
<li><a href="api/graphx/index.html#org.apache.spark.graphx.package">GraphX (Graph Processing)</a></li>
<li class="divider"></li>
<li class="dropdown-submenu">
<a tabindex="-1" href="#">External Data Sources</a>
<ul class="dropdown-menu">
<li><a href="api/external/kafka/index.html#org.apache.spark.streaming.kafka.KafkaUtils$">Kafka</a></li>
<li><a href="api/external/flume/index.html#org.apache.spark.streaming.flume.FlumeUtils$">Flume</a></li>
<li><a href="api/external/twitter/index.html#org.apache.spark.streaming.twitter.TwitterUtils$">Twitter</a></li>
<li><a href="api/external/zeromq/index.html#org.apache.spark.streaming.zeromq.ZeroMQUtils$">ZeroMQ</a></li>
<li><a href="api/external/mqtt/index.html#org.apache.spark.streaming.mqtt.MQTTUtils$">MQTT</a></li>
</ul>
</li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">Deploying<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="cluster-overview.html">Overview</a></li>
<li><a href="ec2-scripts.html">Amazon EC2</a></li>
<li><a href="spark-standalone.html">Standalone Mode</a></li>
<li><a href="running-on-mesos.html">Mesos</a></li>
<li><a href="running-on-yarn.html">YARN</a></li>
</ul>
</li>
<li class="dropdown">
<a href="api.html" class="dropdown-toggle" data-toggle="dropdown">More<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="configuration.html">Configuration</a></li>
<li><a href="monitoring.html">Monitoring</a></li>
<li><a href="tuning.html">Tuning Guide</a></li>
<li><a href="hadoop-third-party-distributions.html">Running with CDH/HDP</a></li>
<li><a href="hardware-provisioning.html">Hardware Provisioning</a></li>
<li><a href="job-scheduling.html">Job Scheduling</a></li>
<li class="divider"></li>
<li><a href="building-with-maven.html">Building Spark with Maven</a></li>
<li><a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">Contributing to Spark</a></li>
</ul>
</li>
</ul>
<!--<p class="navbar-text pull-right"><span class="version-text">v0.9.0</span></p>-->
</div>
</div>
</div>
<div class="container" id="content">
<h1 class="title">Spark Overview</h1>
<p>Apache Spark is a fast and general-purpose cluster computing system.
It provides high-level APIs in <a href="scala-programming-guide.html">Scala</a>, <a href="java-programming-guide.html">Java</a>, and <a href="python-programming-guide.html">Python</a> that make parallel jobs easy to write, and an optimized engine that supports general computation graphs.
It also supports a rich set of higher-level tools including <a href="http://shark.cs.berkeley.edu">Shark</a> (Hive on Spark), <a href="mllib-guide.html">MLlib</a> for machine learning, <a href="graphx-programming-guide.html">GraphX</a> for graph processing, and <a href="streaming-programming-guide.html">Spark Streaming</a>.</p>
<h1 id="downloading">Downloading</h1>
<p>Get Spark by visiting the <a href="http://spark.apache.org/downloads.html">downloads page</a> of the Apache Spark site. This documentation is for Spark version 0.9.0-incubating.</p>
<p>Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). All you need to run it is to have <code>java</code> installed on your system <code>PATH</code>, or the <code>JAVA_HOME</code> environment variable pointing to a Java installation.</p>
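<p>As a quick sanity check you can verify the Java setup from a terminal (a sketch; the <code>JAVA_HOME</code> path below is only an illustrative example and should be adjusted for your system):</p>
<pre><code># Verify that a JVM is reachable on the PATH
java -version

# Alternatively, point JAVA_HOME at a Java installation (example path; adjust for your system)
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk
</code></pre>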
<h1 id="building">Building</h1>
<p>Spark uses the <a href="http://www.scala-sbt.org">Simple Build Tool (sbt)</a>, which is bundled with it. To compile the code, go into the top-level Spark directory and run</p>
<pre><code>sbt/sbt assembly
</code></pre>
<p>For its Scala API, Spark 0.9.0-incubating depends on Scala 2.10. If you write applications in Scala, you will need to use a compatible Scala version (e.g. 2.10.X) &#8211; newer major versions may not work. You can get the right version of Scala from <a href="http://www.scala-lang.org/download/">scala-lang.org</a>.</p>
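<p>If you are unsure which Scala version is installed, you can check it from the command line (assuming <code>scala</code> is on your <code>PATH</code>):</p>
<pre><code># Print the installed Scala version; it should report 2.10.x for Spark 0.9.0
scala -version
</code></pre>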
<h1 id="running-the-examples-and-shell">Running the Examples and Shell</h1>
<p>Spark comes with several sample programs in the <code>examples</code> directory.
To run one of the samples, use <code>./bin/run-example &lt;class&gt; &lt;params&gt;</code> in the top-level Spark directory
(the <code>bin/run-example</code> script sets up the appropriate paths and launches that program).
For example, try <code>./bin/run-example org.apache.spark.examples.SparkPi local</code>.
Each example prints usage help when run with no parameters.</p>
<p>Note that all of the sample programs take a <code>&lt;master&gt;</code> parameter specifying the cluster URL
to connect to. This can be a <a href="scala-programming-guide.html#master-urls">URL for a distributed cluster</a>,
or <code>local</code> to run locally with one thread, or <code>local[N]</code> to run locally with N threads. You should start by using
<code>local</code> for testing.</p>
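<p>For instance, to run the same <code>SparkPi</code> example locally with four worker threads instead of one:</p>
<pre><code>./bin/run-example org.apache.spark.examples.SparkPi local[4]
</code></pre>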
<p>Finally, you can run Spark interactively through modified versions of the Scala shell (<code>./bin/spark-shell</code>) or
Python interpreter (<code>./bin/pyspark</code>). These are a great way to learn the framework.</p>
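<p>For example, a minimal interactive session in the Scala shell might look like the following (a sketch; the shell creates a <code>SparkContext</code> for you as the variable <code>sc</code>, and the data here is purely illustrative):</p>
<pre><code>$ ./bin/spark-shell
scala&gt; val data = sc.parallelize(1 to 1000)   // distribute a local collection as an RDD
scala&gt; data.filter(_ % 2 == 0).count()        // count the even numbers; returns 500
</code></pre>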
<h1 id="launching-on-a-cluster">Launching on a Cluster</h1>
<p>The Spark <a href="cluster-overview.html">cluster mode overview</a> explains the key concepts in running on a cluster.
Spark can run by itself, or on top of several existing cluster managers. It currently provides several
options for deployment:</p>
<ul>
<li><a href="ec2-scripts.html">Amazon EC2</a>: our EC2 scripts let you launch a cluster in about 5 minutes</li>
<li><a href="spark-standalone.html">Standalone Deploy Mode</a>: simplest way to deploy Spark on a private cluster</li>
<li><a href="running-on-mesos.html">Apache Mesos</a></li>
<li><a href="running-on-yarn.html">Hadoop YARN</a></li>
</ul>
<h1 id="a-note-about-hadoop-versions">A Note About Hadoop Versions</h1>
<p>Spark uses the Hadoop-client library to talk to HDFS and other Hadoop-supported
storage systems. Because the HDFS protocol has changed in different versions of
Hadoop, you must build Spark against the same version that your cluster uses.
By default, Spark links to Hadoop 1.0.4. You can change this by setting the
<code>SPARK_HADOOP_VERSION</code> variable when compiling:</p>
<pre><code>SPARK_HADOOP_VERSION=2.2.0 sbt/sbt assembly
</code></pre>
<p>In addition, if you wish to run Spark on <a href="running-on-yarn.html">YARN</a>, set
<code>SPARK_YARN</code> to <code>true</code>:</p>
<pre><code>SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly
</code></pre>
<p>Note that on Windows, you need to set the environment variables on separate lines, e.g., <code>set SPARK_HADOOP_VERSION=1.2.1</code>.</p>
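<p>For example, the YARN build shown above would be set up like this in a Windows command prompt (a sketch; run the sbt assembly command afterwards as before):</p>
<pre><code>rem Set each variable on its own line, then run the sbt assembly command as shown above
set SPARK_HADOOP_VERSION=2.0.5-alpha
set SPARK_YARN=true
</code></pre>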
<p>For this version of Spark (0.9.0-incubating), Hadoop 2.2.x (or newer) users will have to build Spark and publish it locally. See <a href="running-on-yarn.html">Launching Spark on YARN</a>. This is needed because Hadoop 2.2 has non-backward-compatible API changes.</p>
<h1 id="where-to-go-from-here">Where to Go from Here</h1>
<p><strong>Programming guides:</strong></p>
<ul>
<li><a href="quick-start.html">Quick Start</a>: a quick introduction to the Spark API; start here!</li>
<li><a href="scala-programming-guide.html">Spark Programming Guide</a>: an overview of Spark concepts, and details on the Scala API
<ul>
<li><a href="java-programming-guide.html">Java Programming Guide</a>: using Spark from Java</li>
<li><a href="python-programming-guide.html">Python Programming Guide</a>: using Spark from Python</li>
</ul>
</li>
<li><a href="streaming-programming-guide.html">Spark Streaming</a>: Spark&#8217;s API for processing data streams</li>
<li><a href="mllib-guide.html">MLlib (Machine Learning)</a>: Spark&#8217;s built-in machine learning library</li>
<li><a href="bagel-programming-guide.html">Bagel (Pregel on Spark)</a>: simple graph processing model</li>
<li><a href="graphx-programming-guide.html">GraphX (Graphs on Spark)</a>: Spark&#8217;s new API for graphs</li>
</ul>
<p><strong>API Docs:</strong></p>
<ul>
<li><a href="api/core/index.html">Spark for Java/Scala (Scaladoc)</a></li>
<li><a href="api/pyspark/index.html">Spark for Python (Epydoc)</a></li>
<li><a href="api/streaming/index.html">Spark Streaming for Java/Scala (Scaladoc)</a></li>
<li><a href="api/mllib/index.html">MLlib (Machine Learning) for Java/Scala (Scaladoc)</a></li>
<li><a href="api/bagel/index.html">Bagel (Pregel on Spark) for Scala (Scaladoc)</a></li>
<li><a href="api/graphx/index.html">GraphX (Graphs on Spark) for Scala (Scaladoc)</a></li>
</ul>
<p><strong>Deployment guides:</strong></p>
<ul>
<li><a href="cluster-overview.html">Cluster Overview</a>: overview of concepts and components when running on a cluster</li>
<li><a href="ec2-scripts.html">Amazon EC2</a>: scripts that let you launch a cluster on EC2 in about 5 minutes</li>
<li><a href="spark-standalone.html">Standalone Deploy Mode</a>: launch a standalone cluster quickly without a third-party cluster manager</li>
<li><a href="running-on-mesos.html">Mesos</a>: deploy a private cluster using
<a href="http://mesos.apache.org">Apache Mesos</a></li>
<li><a href="running-on-yarn.html">YARN</a>: deploy Spark on top of Hadoop NextGen (YARN)</li>
</ul>
<p><strong>Other documents:</strong></p>
<ul>
<li><a href="configuration.html">Configuration</a>: customize Spark via its configuration system</li>
<li><a href="tuning.html">Tuning Guide</a>: best practices to optimize performance and memory use</li>
<li><a href="hardware-provisioning.html">Hardware Provisioning</a>: recommendations for cluster hardware</li>
<li><a href="job-scheduling.html">Job Scheduling</a>: scheduling resources across and within Spark applications</li>
<li><a href="building-with-maven.html">Building Spark with Maven</a>: build Spark using the Maven system</li>
<li><a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">Contributing to Spark</a></li>
</ul>
<p><strong>External resources:</strong></p>
<ul>
<li><a href="http://spark.apache.org">Spark Homepage</a></li>
<li><a href="http://shark.cs.berkeley.edu">Shark</a>: Apache Hive over Spark</li>
<li><a href="http://spark.apache.org/mailing-lists.html">Mailing Lists</a>: ask questions about Spark here</li>
<li><a href="http://ampcamp.berkeley.edu/">AMP Camps</a>: a series of training camps at UC Berkeley that featured talks and
exercises about Spark, Shark, Mesos, and more. <a href="http://ampcamp.berkeley.edu/agenda-2012">Videos</a>,
<a href="http://ampcamp.berkeley.edu/agenda-2012">slides</a> and <a href="http://ampcamp.berkeley.edu/exercises-2012">exercises</a> are
available online for free.</li>
<li><a href="http://spark.apache.org/examples.html">Code Examples</a>: more are also available in the <a href="https://github.com/apache/spark/tree/master/examples/src/main/scala/">examples subfolder</a> of Spark</li>
<li><a href="http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf">Paper Describing Spark</a></li>
<li><a href="http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-259.pdf">Paper Describing Spark Streaming</a></li>
</ul>
<h1 id="community">Community</h1>
<p>To get help using Spark or keep up with Spark development, sign up for the <a href="http://spark.apache.org/mailing-lists.html">user mailing list</a>.</p>
<p>If you&#8217;re in the San Francisco Bay Area, there&#8217;s a regular <a href="http://www.meetup.com/spark-users/">Spark meetup</a> every few weeks. Come by to meet the developers and other users.</p>
<p>Finally, if you&#8217;d like to contribute code to Spark, read <a href="contributing-to-spark.html">how to contribute</a>.</p>
</div> <!-- /container -->
<script src="js/vendor/jquery-1.8.0.min.js"></script>
<script src="js/vendor/bootstrap.min.js"></script>
<script src="js/main.js"></script>
<!-- A script to fix internal hash links because we have an overlapping top bar.
Based on https://github.com/twitter/bootstrap/issues/193#issuecomment-2281510 -->
<script>
$(function() {
function maybeScrollToHash() {
if (window.location.hash && $(window.location.hash).length) {
var newTop = $(window.location.hash).offset().top - $('#topbar').height() - 5;
$(window).scrollTop(newTop);
}
}
$(window).bind('hashchange', function() {
maybeScrollToHash();
});
// Scroll now too in case we had opened the page on a hash, but wait 1 ms because some browsers
// will try to do *their* initial scroll after running the onReady handler.
setTimeout(function() { maybeScrollToHash(); }, 1);
});
</script>
</body>
</html>