| <!DOCTYPE html> |
| <!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]--> |
| <!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]--> |
| <!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]--> |
| <!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]--> |
| <head> |
| <meta charset="utf-8"> |
| <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> |
| <title>Running Spark on EC2 - Spark 1.4.0 Documentation</title> |
| |
| |
| |
| |
| <link rel="stylesheet" href="css/bootstrap.min.css"> |
| <style> |
| body { |
| padding-top: 60px; |
| padding-bottom: 40px; |
| } |
| </style> |
| <meta name="viewport" content="width=device-width"> |
| <link rel="stylesheet" href="css/bootstrap-responsive.min.css"> |
| <link rel="stylesheet" href="css/main.css"> |
| |
| <script src="js/vendor/modernizr-2.6.1-respond-1.1.0.min.js"></script> |
| |
| <link rel="stylesheet" href="css/pygments-default.css"> |
| |
| |
| |
| </head> |
| <body> |
| <!--[if lt IE 7]> |
| <p class="chromeframe">You are using an outdated browser. <a href="http://browsehappy.com/">Upgrade your browser today</a> or <a href="http://www.google.com/chromeframe/?redirect=true">install Google Chrome Frame</a> to better experience this site.</p> |
| <![endif]--> |
| |
| <!-- This code is taken from http://twitter.github.com/bootstrap/examples/hero.html --> |
| |
| <div class="navbar navbar-fixed-top" id="topbar"> |
| <div class="navbar-inner"> |
| <div class="container"> |
| <div class="brand"><a href="index.html"> |
| <img src="img/spark-logo-hd.png" style="height:50px;"/></a><span class="version">1.4.0</span> |
| </div> |
| <ul class="nav"> |
                        <!--TODO(andyk): Add class="active" attribute to li somehow.-->
| <li><a href="index.html">Overview</a></li> |
| |
| <li class="dropdown"> |
| <a href="#" class="dropdown-toggle" data-toggle="dropdown">Programming Guides<b class="caret"></b></a> |
| <ul class="dropdown-menu"> |
| <li><a href="quick-start.html">Quick Start</a></li> |
| <li><a href="programming-guide.html">Spark Programming Guide</a></li> |
| <li class="divider"></li> |
| <li><a href="streaming-programming-guide.html">Spark Streaming</a></li> |
| <li><a href="sql-programming-guide.html">DataFrames and SQL</a></li> |
| <li><a href="mllib-guide.html">MLlib (Machine Learning)</a></li> |
| <li><a href="graphx-programming-guide.html">GraphX (Graph Processing)</a></li> |
| <li><a href="bagel-programming-guide.html">Bagel (Pregel on Spark)</a></li> |
| <li><a href="sparkr.html">SparkR (R on Spark)</a></li> |
| </ul> |
| </li> |
| |
| <li class="dropdown"> |
| <a href="#" class="dropdown-toggle" data-toggle="dropdown">API Docs<b class="caret"></b></a> |
| <ul class="dropdown-menu"> |
| <li><a href="api/scala/index.html#org.apache.spark.package">Scala</a></li> |
| <li><a href="api/java/index.html">Java</a></li> |
| <li><a href="api/python/index.html">Python</a></li> |
| <li><a href="api/R/index.html">R</a></li> |
| </ul> |
| </li> |
| |
| <li class="dropdown"> |
| <a href="#" class="dropdown-toggle" data-toggle="dropdown">Deploying<b class="caret"></b></a> |
| <ul class="dropdown-menu"> |
| <li><a href="cluster-overview.html">Overview</a></li> |
| <li><a href="submitting-applications.html">Submitting Applications</a></li> |
| <li class="divider"></li> |
| <li><a href="spark-standalone.html">Spark Standalone</a></li> |
| <li><a href="running-on-mesos.html">Mesos</a></li> |
| <li><a href="running-on-yarn.html">YARN</a></li> |
| <li class="divider"></li> |
| <li><a href="ec2-scripts.html">Amazon EC2</a></li> |
| </ul> |
| </li> |
| |
| <li class="dropdown"> |
| <a href="api.html" class="dropdown-toggle" data-toggle="dropdown">More<b class="caret"></b></a> |
| <ul class="dropdown-menu"> |
| <li><a href="configuration.html">Configuration</a></li> |
| <li><a href="monitoring.html">Monitoring</a></li> |
| <li><a href="tuning.html">Tuning Guide</a></li> |
| <li><a href="job-scheduling.html">Job Scheduling</a></li> |
| <li><a href="security.html">Security</a></li> |
| <li><a href="hardware-provisioning.html">Hardware Provisioning</a></li> |
| <li><a href="hadoop-third-party-distributions.html">3<sup>rd</sup>-Party Hadoop Distros</a></li> |
| <li class="divider"></li> |
| <li><a href="building-spark.html">Building Spark</a></li> |
| <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">Contributing to Spark</a></li> |
| <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Supplemental+Spark+Projects">Supplemental Projects</a></li> |
| </ul> |
| </li> |
| </ul> |
| <!--<p class="navbar-text pull-right"><span class="version-text">v1.4.0</span></p>--> |
| </div> |
| </div> |
| </div> |
| |
| <div class="container" id="content"> |
| |
| <h1 class="title">Running Spark on EC2</h1> |
| |
| |
| <p>The <code>spark-ec2</code> script, located in Spark’s <code>ec2</code> directory, allows you |
| to launch, manage and shut down Spark clusters on Amazon EC2. It automatically |
| sets up Spark and HDFS on the cluster for you. This guide describes |
| how to use <code>spark-ec2</code> to launch clusters, how to run jobs on them, and how |
| to shut them down. It assumes you’ve already signed up for an EC2 account |
| on the <a href="http://aws.amazon.com/">Amazon Web Services site</a>.</p> |
| |
| <p><code>spark-ec2</code> is designed to manage multiple named clusters. You can |
| launch a new cluster (telling the script its size and giving it a name), |
| shutdown an existing cluster, or log into a cluster. Each cluster is |
| identified by placing its machines into EC2 security groups whose names |
| are derived from the name of the cluster. For example, a cluster named |
| <code>test</code> will contain a master node in a security group called |
| <code>test-master</code>, and a number of slave nodes in a security group called |
| <code>test-slaves</code>. The <code>spark-ec2</code> script will create these security groups |
| for you based on the cluster name you request. You can also use them to |
| identify machines belonging to each cluster in the Amazon EC2 Console.</p> |
| |
| <h1 id="before-you-start">Before You Start</h1> |
| |
| <ul> |
| <li>Create an Amazon EC2 key pair for yourself. This can be done by |
| logging into your Amazon Web Services account through the <a href="http://aws.amazon.com/console/">AWS |
| console</a>, clicking Key Pairs on the |
| left sidebar, and creating and downloading a key. Make sure that you |
| set the permissions for the private key file to <code>600</code> (i.e. only you |
| can read and write it) so that <code>ssh</code> will work.</li> |
| <li>Whenever you want to use the <code>spark-ec2</code> script, set the environment |
| variables <code>AWS_ACCESS_KEY_ID</code> and <code>AWS_SECRET_ACCESS_KEY</code> to your |
| Amazon EC2 access key ID and secret access key. These can be |
| obtained from the <a href="http://aws.amazon.com/">AWS homepage</a> by clicking |
  Account > Security Credentials > Access Credentials. A short shell sketch of both steps follows this list.</li>
| </ul> |
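<p>For example, a minimal shell sketch of the two steps above, assuming the key pair is
named <code>awskey</code> and was downloaded to <code>awskey.pem</code> (the credential
values are the same placeholders used in the launch example below):</p>

<pre><code class="language-bash"># Restrict the private key so that ssh will accept it (owner read/write only).
chmod 600 awskey.pem

# Export placeholder credentials for the current shell session; substitute the
# values from your own AWS Security Credentials page.
export AWS_ACCESS_KEY_ID=ABCDEFG1234567890123
export AWS_SECRET_ACCESS_KEY=AaBbCcDdEeFGgHhIiJjKkLlMmNnOoPpQqRrSsTtU
</code></pre>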
| |
| <h1 id="launching-a-cluster">Launching a Cluster</h1> |
| |
| <ul> |
| <li>Go into the <code>ec2</code> directory in the release of Spark you downloaded.</li> |
| <li> |
| <p>Run |
| <code>./spark-ec2 -k <keypair> -i <key-file> -s <num-slaves> launch <cluster-name></code>, |
| where <code><keypair></code> is the name of your EC2 key pair (that you gave it |
| when you created it), <code><key-file></code> is the private key file for your |
| key pair, <code><num-slaves></code> is the number of slave nodes to launch (try |
| 1 at first), and <code><cluster-name></code> is the name to give to your |
| cluster.</p> |
| |
| <p>For example:</p> |
| |
    <pre><code class="language-bash">export AWS_SECRET_ACCESS_KEY=AaBbCcDdEeFGgHhIiJjKkLlMmNnOoPpQqRrSsTtU
export AWS_ACCESS_KEY_ID=ABCDEFG1234567890123
./spark-ec2 --key-pair=awskey --identity-file=awskey.pem --region=us-west-1 --zone=us-west-1a launch my-spark-cluster
</code></pre>
| </li> |
| <li>After everything launches, check that the cluster scheduler is up and sees |
| all the slaves by going to its web UI, which will be printed at the end of |
| the script (typically <code>http://<master-hostname>:8080</code>).</li> |
| </ul> |
| |
<p>You can also run <code>./spark-ec2 --help</code> to see more usage options. The
following options are worth pointing out; a combined example follows the list:</p>
| |
| <ul> |
| <li><code>--instance-type=<instance-type></code> can be used to specify an EC2 |
| instance type to use. For now, the script only supports 64-bit instance |
| types, and the default type is <code>m1.large</code> (which has 2 cores and 7.5 GB |
| RAM). Refer to the Amazon pages about <a href="http://aws.amazon.com/ec2/instance-types">EC2 instance |
| types</a> and <a href="http://aws.amazon.com/ec2/#pricing">EC2 |
| pricing</a> for information about other |
| instance types.</li> |
| <li><code>--region=<ec2-region></code> specifies an EC2 region in which to launch |
| instances. The default region is <code>us-east-1</code>.</li> |
| <li><code>--zone=<ec2-zone></code> can be used to specify an EC2 availability zone |
| to launch instances in. Sometimes, you will get an error because there |
| is not enough capacity in one zone, and you should try to launch in |
| another.</li> |
| <li><code>--ebs-vol-size=<GB></code> will attach an EBS volume with a given amount |
| of space to each node so that you can have a persistent HDFS cluster |
| on your nodes across cluster restarts (see below).</li> |
| <li><code>--spot-price=<price></code> will launch the worker nodes as |
| <a href="http://aws.amazon.com/ec2/spot-instances/">Spot Instances</a>, |
| bidding for the given maximum price (in dollars).</li> |
| <li><code>--spark-version=<version></code> will pre-load the cluster with the |
| specified version of Spark. The <code><version></code> can be a version number |
| (e.g. “0.7.3”) or a specific git hash. By default, a recent |
| version will be used.</li> |
| <li><code>--spark-git-repo=<repository url></code> will let you run a custom version of |
| Spark that is built from the given git repository. By default, the |
| <a href="https://github.com/apache/spark">Apache Github mirror</a> will be used. |
  When using a custom Spark version, <code>--spark-version</code> must be set to a git
  commit hash, such as <code>317e114</code>, instead of a version number.</li>
  <li>If one of your launches fails (for example, because you do not have the right
  permissions on your private key file), you can run <code>launch</code> with the
  <code>--resume</code> option to restart the setup process on an existing cluster.</li>
| </ul> |
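<p>For instance, a sketch combining several of these options (the instance type, volume
size, spot price, and slave count below are arbitrary illustrations, not recommendations):</p>

<pre><code class="language-bash"># Launch 2 slaves on m1.xlarge spot instances, each with a 100 GB EBS volume
# for the persistent HDFS, pre-loaded with Spark 1.4.0.
./spark-ec2 --key-pair=awskey --identity-file=awskey.pem \
  --instance-type=m1.xlarge --ebs-vol-size=100 \
  --spot-price=0.10 --spark-version=1.4.0 \
  -s 2 launch my-spark-cluster
</code></pre>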
| |
| <h1 id="launching-a-cluster-in-a-vpc">Launching a Cluster in a VPC</h1> |
| |
| <ul> |
| <li> |
| <p>Run |
| <code>./spark-ec2 -k <keypair> -i <key-file> -s <num-slaves> --vpc-id=<vpc-id> --subnet-id=<subnet-id> launch <cluster-name></code>, |
| where <code><keypair></code> is the name of your EC2 key pair (that you gave it |
| when you created it), <code><key-file></code> is the private key file for your |
| key pair, <code><num-slaves></code> is the number of slave nodes to launch (try |
1 at first), <code><vpc-id></code> is the ID of your VPC, <code><subnet-id></code> is the
ID of your subnet, and <code><cluster-name></code> is the name to give to your
| cluster.</p> |
| |
| <p>For example:</p> |
| |
    <pre><code class="language-bash">export AWS_SECRET_ACCESS_KEY=AaBbCcDdEeFGgHhIiJjKkLlMmNnOoPpQqRrSsTtU
export AWS_ACCESS_KEY_ID=ABCDEFG1234567890123
./spark-ec2 --key-pair=awskey --identity-file=awskey.pem --region=us-west-1 --zone=us-west-1a --vpc-id=vpc-a28d24c7 --subnet-id=subnet-4eb27b39 --spark-version=1.1.0 launch my-spark-cluster
</code></pre>
| </li> |
| </ul> |
| |
| <h1 id="running-applications">Running Applications</h1> |
| |
| <ul> |
| <li>Go into the <code>ec2</code> directory in the release of Spark you downloaded.</li> |
| <li>Run <code>./spark-ec2 -k <keypair> -i <key-file> login <cluster-name></code> to |
| SSH into the cluster, where <code><keypair></code> and <code><key-file></code> are as |
| above. (This is just for convenience; you could also use |
| the EC2 console.)</li> |
| <li>To deploy code or data within your cluster, you can log in and use the |
| provided script <code>~/spark-ec2/copy-dir</code>, which, |
| given a directory path, RSYNCs it to the same location on all the slaves.</li> |
| <li>If your application needs to access large datasets, the fastest way to do |
| that is to load them from Amazon S3 or an Amazon EBS device into an |
| instance of the Hadoop Distributed File System (HDFS) on your nodes. |
  The <code>spark-ec2</code> script already sets up an HDFS instance for you. It’s
| installed in <code>/root/ephemeral-hdfs</code>, and can be accessed using the |
| <code>bin/hadoop</code> script in that directory. Note that the data in this |
| HDFS goes away when you stop and restart a machine.</li> |
| <li>There is also a <em>persistent HDFS</em> instance in |
| <code>/root/persistent-hdfs</code> that will keep data across cluster restarts. |
  Typically each node has relatively little persistent storage space
| (about 3 GB), but you can use the <code>--ebs-vol-size</code> option to |
| <code>spark-ec2</code> to attach a persistent EBS volume to each node for |
| storing the persistent HDFS.</li> |
  <li>Finally, if you get errors while running your application, look at the slaves’ logs
  for that application in the scheduler’s work directory (<code>/root/spark/work</code>). You can
  also view the status of the cluster using the web UI: <code>http://<master-hostname>:8080</code>.
  A short session sketch covering several of these steps follows this list.</li>
| </ul> |
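<p>A minimal session sketch, reusing the key pair and cluster name from the launch example
above (the application directory <code>/root/my-app</code> is hypothetical):</p>

<pre><code class="language-bash"># SSH into the master node.
./spark-ec2 -k awskey -i awskey.pem login my-spark-cluster

# On the master: copy a directory of job code to the same path on every slave.
~/spark-ec2/copy-dir /root/my-app

# Interact with the ephemeral HDFS instance (its data is lost when machines are
# stopped and restarted).
/root/ephemeral-hdfs/bin/hadoop fs -ls /
</code></pre>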
| |
| <h1 id="configuration">Configuration</h1> |
| |
| <p>You can edit <code>/root/spark/conf/spark-env.sh</code> on each machine to set Spark configuration options, such |
| as JVM options. This file needs to be copied to <strong>every machine</strong> to reflect the change. The easiest way to |
| do this is to use a script we provide called <code>copy-dir</code>. First edit your <code>spark-env.sh</code> file on the master, |
| then run <code>~/spark-ec2/copy-dir /root/spark/conf</code> to RSYNC it to all the workers.</p> |
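<p>For instance, a short sketch of that workflow run on the master (the
<code>SPARK_WORKER_MEMORY</code> setting is just an arbitrary example of a
<code>spark-env.sh</code> option):</p>

<pre><code class="language-bash"># On the master: add (or edit) an option in spark-env.sh.
echo "export SPARK_WORKER_MEMORY=6g" >> /root/spark/conf/spark-env.sh

# Push the updated conf directory to the same location on all the workers.
~/spark-ec2/copy-dir /root/spark/conf
</code></pre>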
| |
| <p>The <a href="configuration.html">configuration guide</a> describes the available configuration options.</p> |
| |
| <h1 id="terminating-a-cluster">Terminating a Cluster</h1> |
| |
| <p><strong><em>Note that there is no way to recover data on EC2 nodes after shutting |
| them down! Make sure you have copied everything important off the nodes |
| before stopping them.</em></strong></p> |
| |
| <ul> |
| <li>Go into the <code>ec2</code> directory in the release of Spark you downloaded.</li> |
| <li>Run <code>./spark-ec2 destroy <cluster-name></code>.</li> |
| </ul> |
| |
| <h1 id="pausing-and-restarting-clusters">Pausing and Restarting Clusters</h1> |
| |
| <p>The <code>spark-ec2</code> script also supports pausing a cluster. In this case, |
| the VMs are stopped but not terminated, so they |
| <strong><em>lose all data on ephemeral disks</em></strong> but keep the data in their |
| root partitions and their <code>persistent-hdfs</code>. Stopped machines will not |
| cost you any EC2 cycles, but <strong><em>will</em></strong> continue to cost money for EBS |
| storage.</p> |
| |
| <ul> |
| <li>To stop one of your clusters, go into the <code>ec2</code> directory and run |
| <code>./spark-ec2 --region=<ec2-region> stop <cluster-name></code>.</li> |
  <li>To restart it later, run
  <code>./spark-ec2 -i <key-file> --region=<ec2-region> start <cluster-name></code> (see the example after this list).</li>
| <li>To ultimately destroy the cluster and stop consuming EBS space, run |
| <code>./spark-ec2 --region=<ec2-region> destroy <cluster-name></code> as described in the previous |
| section.</li> |
| </ul> |
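<p>For example, filling in the placeholders with the cluster launched earlier:</p>

<pre><code class="language-bash"># Stop (pause) the cluster; root partitions and persistent-hdfs are preserved.
./spark-ec2 --region=us-west-1 stop my-spark-cluster

# Later, bring the same machines back up.
./spark-ec2 -i awskey.pem --region=us-west-1 start my-spark-cluster
</code></pre>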
| |
| <h1 id="limitations">Limitations</h1> |
| |
| <ul> |
| <li>Support for “cluster compute” nodes is limited – there’s no way to specify a |
| locality group. However, you can launch slave nodes in your |
| <code><clusterName>-slaves</code> group manually and then use <code>spark-ec2 launch |
| --resume</code> to start a cluster with them.</li> |
| </ul> |
| |
| <p>If you have a patch or suggestion for one of these limitations, feel free to |
| <a href="contributing-to-spark.html">contribute</a> it!</p> |
| |
| <h1 id="accessing-data-in-s3">Accessing Data in S3</h1> |
| |
| <p>Spark’s file interface allows it to process data in Amazon S3 using the same URI formats that are supported for Hadoop. You can specify a path in S3 as input through a URI of the form <code>s3n://<bucket>/path</code>. To provide AWS credentials for S3 access, launch the Spark cluster with the option <code>--copy-aws-credentials</code>. Full instructions on S3 access using the Hadoop input libraries can be found on the <a href="http://wiki.apache.org/hadoop/AmazonS3">Hadoop S3 page</a>.</p> |
| |
| <p>In addition to using a single input file, you can also use a directory of files as input by simply giving the path to the directory.</p> |
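<p>As a rough sketch, assuming a hypothetical bucket and input file, and using one of the
bundled examples simply to illustrate passing an S3 URI to a job:</p>

<pre><code class="language-bash"># Launch with AWS credentials copied into the cluster's Hadoop configuration,
# then log in to the master.
./spark-ec2 -k awskey -i awskey.pem --copy-aws-credentials -s 1 launch my-spark-cluster
./spark-ec2 -k awskey -i awskey.pem login my-spark-cluster

# On the master: any Spark job can read the S3 path directly; here the bundled
# PageRank example reads a (hypothetical) edge-list file and runs 10 iterations.
/root/spark/bin/run-example SparkPageRank s3n://my-bucket/path/to/edges.txt 10
</code></pre>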
| |
| |
| </div> <!-- /container --> |
| |
| <script src="js/vendor/jquery-1.8.0.min.js"></script> |
| <script src="js/vendor/bootstrap.min.js"></script> |
| <script src="js/main.js"></script> |
| |
| <!-- MathJax Section --> |
| <script type="text/x-mathjax-config"> |
| MathJax.Hub.Config({ |
| TeX: { equationNumbers: { autoNumber: "AMS" } } |
| }); |
| </script> |
| <script> |
| // Note that we load MathJax this way to work with local file (file://), HTTP and HTTPS. |
| // We could use "//cdn.mathjax...", but that won't support "file://". |
| (function(d, script) { |
| script = d.createElement('script'); |
| script.type = 'text/javascript'; |
| script.async = true; |
| script.onload = function(){ |
| MathJax.Hub.Config({ |
| tex2jax: { |
| inlineMath: [ ["$", "$"], ["\\\\(","\\\\)"] ], |
| displayMath: [ ["$$","$$"], ["\\[", "\\]"] ], |
| processEscapes: true, |
| skipTags: ['script', 'noscript', 'style', 'textarea', 'pre'] |
| } |
| }); |
| }; |
| script.src = ('https:' == document.location.protocol ? 'https://' : 'http://') + |
| 'cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML'; |
| d.getElementsByTagName('head')[0].appendChild(script); |
| }(document)); |
| </script> |
| </body> |
| </html> |