---
layout: global
title: Running Spark on Mesos
license: |
  Licensed to the Apache Software Foundation (ASF) under one or more contributor license
  agreements. See the NOTICE file distributed with this work for additional information
  regarding copyright ownership. The ASF licenses this file to You under the Apache License,
  Version 2.0 (the "License"); you may not use this file except in compliance with the
  License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0
---
Note: Apache Mesos support is deprecated as of Apache Spark 3.2.0. It will be removed in a future version.
Spark can run on hardware clusters managed by Apache Mesos.
The advantages of deploying Spark with Mesos include:

- dynamic partitioning between Spark and other frameworks
- scalable partitioning between multiple instances of Spark
Security features like authentication are not enabled by default. When deploying a cluster that is open to the internet or an untrusted network, it's important to secure access to the cluster to prevent unauthorized applications from running on the cluster. Please see Spark Security and the specific security sections in this doc before running Spark.
In a standalone cluster deployment, the cluster manager in the below diagram is a Spark master instance. When using Mesos, the Mesos master replaces the Spark master as the cluster manager.
Now when a driver creates a job and starts issuing tasks for scheduling, Mesos determines what machines handle what tasks. Because it takes into account other frameworks when scheduling these many short-lived tasks, multiple frameworks can coexist on the same cluster without resorting to a static partitioning of resources.
To get started, follow the steps below to install Mesos and deploy Spark jobs via Mesos.
Spark {{site.SPARK_VERSION}} is designed for use with Mesos {{site.MESOS_VERSION}} or newer and does not require any special patches of Mesos. File and environment-based secrets support requires Mesos 1.3.0 or newer.
If you already have a Mesos cluster running, you can skip this Mesos installation step.
Otherwise, installing Mesos for Spark is no different than installing Mesos for use by other frameworks. You can install Mesos either from source or using prebuilt packages.
To install Apache Mesos from source, download a Mesos release from an Apache mirror and follow the Mesos Getting Started page for compiling and installing it.
Note: If you want to run Mesos without installing it into the default paths on your system (e.g., if you lack administrative privileges to install it), pass the `--prefix` option to `configure` to tell it where to install. For example, pass `--prefix=/home/me/mesos`. By default the prefix is `/usr/local`.
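A minimal build-from-source sequence, assuming a standard autotools build and an illustrative user-writable prefix, might look like this:

{% highlight bash %}
# Run from an unpacked Mesos source directory; the prefix below is illustrative.
./configure --prefix=/home/me/mesos
make
make install   # installs under /home/me/mesos instead of /usr/local
{% endhighlight %}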
The Apache Mesos project only publishes source releases, not binary packages. But other third party projects publish binary releases that may be helpful in setting Mesos up.
One of those is Mesosphere. To install Mesos using the binary releases provided by Mesosphere:
The Mesosphere installation documents suggest setting up ZooKeeper to handle Mesos master failover, but Mesos can be run without ZooKeeper using a single master as well.
To verify that the Mesos cluster is ready for Spark, navigate to the Mesos master web UI at port `:5050` and confirm that all expected machines are present in the agents tab.
To use Mesos from Spark, you need a Spark binary package available in a place accessible by Mesos, and a Spark driver program configured to connect to Mesos.
Alternatively, you can also install Spark in the same location on all the Mesos agents, and configure `spark.mesos.executor.home` (defaults to `SPARK_HOME`) to point to that location.
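For instance, if Spark is unpacked under the same path on every agent, a submission might point at it explicitly (the path below is illustrative):

{% highlight bash %}
# Assumes Spark is installed at /opt/spark on every Mesos agent.
./bin/spark-submit \
  --master mesos://host:5050 \
  --conf spark.mesos.executor.home=/opt/spark \
  --class org.apache.spark.examples.SparkPi \
  http://path/to/examples.jar 100
{% endhighlight %}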
When Mesos Framework authentication is enabled it is necessary to provide a principal and secret by which to authenticate Spark to Mesos. Each Spark job will register with Mesos as a separate framework.
Depending on your deployment environment you may wish to create a single set of framework credentials that are shared across all users or create framework credentials for each user. Creating and managing framework credentials should be done following the Mesos Authentication documentation.
Framework credentials may be specified in a variety of ways depending on your deployment environment and security requirements. The simplest way is to specify the `spark.mesos.principal` and `spark.mesos.secret` values directly in your Spark configuration. Alternatively, you may specify these values indirectly by instead specifying `spark.mesos.principal.file` and `spark.mesos.secret.file`; these settings point to files containing the principal and secret. These files must be plaintext files in UTF-8 encoding. Combined with appropriate file ownership and mode/ACLs this provides a more secure way to specify these credentials.
Additionally, if you prefer environment variables, you can specify all of the above via environment variables instead. The environment variable names are simply the configuration settings uppercased with `.` replaced with `_`, e.g. `SPARK_MESOS_PRINCIPAL`.
Please note that if you specify multiple ways to obtain the credentials then the following preference order applies. Spark will use the first valid value found and any subsequent values are ignored:

1. `spark.mesos.principal` configuration setting
2. `SPARK_MESOS_PRINCIPAL` environment variable
3. `spark.mesos.principal.file` configuration setting
4. `SPARK_MESOS_PRINCIPAL_FILE` environment variable

An equivalent order applies for the secret. Essentially we prefer the configuration to be specified directly rather than indirectly by files, and we prefer that configuration settings are used over environment variables.
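As an illustration, credentials could be supplied either directly on the command line or via files; the principal, secret, and file paths below are placeholders:

{% highlight bash %}
# Credentials passed directly (values are placeholders).
./bin/spark-submit \
  --master mesos://host:5050 \
  --conf spark.mesos.principal=spark-framework \
  --conf spark.mesos.secret=some-secret \
  --class org.apache.spark.examples.SparkPi \
  http://path/to/examples.jar 100

# Or indirectly via files (UTF-8 plaintext, readable only by the submitting user).
./bin/spark-submit \
  --master mesos://host:5050 \
  --conf spark.mesos.principal.file=/etc/spark/mesos.principal \
  --conf spark.mesos.secret.file=/etc/spark/mesos.secret \
  --class org.apache.spark.examples.SparkPi \
  http://path/to/examples.jar 100
{% endhighlight %}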
If you want to deploy a Spark application into a Mesos cluster that is running in secure mode, there are some environment variables that need to be set.
- `LIBPROCESS_SSL_ENABLED=true` enables SSL communication
- `LIBPROCESS_SSL_VERIFY_CERT=false` disables verification of the SSL certificate
- `LIBPROCESS_SSL_KEY_FILE=pathToKeyFile.key` path to the key file
- `LIBPROCESS_SSL_CERT_FILE=pathToCRTFile.crt` the certificate file to be used

All options can be found at http://mesos.apache.org/documentation/latest/ssl/
Then submission happens as described in Client mode or Cluster mode below.
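For example, the variables might be exported in the shell (or in `spark-env.sh`) before submitting; the key and certificate paths below are placeholders:

{% highlight bash %}
# Placeholders: point these at your actual key and certificate files.
export LIBPROCESS_SSL_ENABLED=true
export LIBPROCESS_SSL_VERIFY_CERT=false
export LIBPROCESS_SSL_KEY_FILE=/etc/ssl/private/spark-mesos.key
export LIBPROCESS_SSL_CERT_FILE=/etc/ssl/certs/spark-mesos.crt

./bin/spark-shell --master mesos://host:5050
{% endhighlight %}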
When Mesos runs a task on a Mesos agent for the first time, that agent must have a Spark binary package for running the Spark Mesos executor backend. The Spark package can be hosted at any Hadoop-accessible URI, including HTTP via `http://`, Amazon Simple Storage Service via `s3n://`, or HDFS via `hdfs://`.
To use a precompiled package, download a Spark binary package from the Spark download page and upload it somewhere reachable by the Mesos agents (HDFS, HTTP, or S3). To host on HDFS, use the Hadoop fs put command: `hadoop fs -put spark-{{site.SPARK_VERSION}}.tar.gz /path/to/spark-{{site.SPARK_VERSION}}.tar.gz`
Or, if you are using a custom-compiled version of Spark, you will need to create a package using the `dev/make-distribution.sh` script included in a Spark source tarball/checkout: `./dev/make-distribution.sh --tgz`.

The master URLs for Mesos are in the form `mesos://host:5050` for a single-master Mesos cluster, or `mesos://zk://host1:2181,host2:2181,host3:2181/mesos` for a multi-master Mesos cluster using ZooKeeper.
In client mode, a Spark Mesos framework is launched directly on the client machine and waits for the driver output.
The driver needs some configuration in `spark-env.sh` to interact properly with Mesos:

1. In `spark-env.sh` set some environment variables:
   * `export MESOS_NATIVE_JAVA_LIBRARY=<path to libmesos.so>`. This path is typically `<prefix>/lib/libmesos.so` where the prefix is `/usr/local` by default. See Mesos installation instructions above. On Mac OS X, the library is called `libmesos.dylib` instead of `libmesos.so`.
   * `export SPARK_EXECUTOR_URI=<URL of spark-{{site.SPARK_VERSION}}.tar.gz uploaded above>`.
2. Also set `spark.executor.uri` to `<URL of spark-{{site.SPARK_VERSION}}.tar.gz>`.

Now when starting a Spark application against the cluster, pass a `mesos://` URL as the master when creating a `SparkContext`. For example:
{% highlight scala %}
val conf = new SparkConf()
  .setMaster("mesos://HOST:5050")
  .setAppName("My app")
  .set("spark.executor.uri", "<path to spark-{{site.SPARK_VERSION}}.tar.gz uploaded above>")
val sc = new SparkContext(conf)
{% endhighlight %}
(You can also use `spark-submit` and configure `spark.executor.uri` in the `conf/spark-defaults.conf` file.)
When running a shell, the `spark.executor.uri` parameter is inherited from `SPARK_EXECUTOR_URI`, so it does not need to be redundantly passed in as a system property.
{% highlight bash %}
./bin/spark-shell --master mesos://host:5050
{% endhighlight %}
Spark on Mesos also supports cluster mode, where the driver is launched in the cluster and the client can find the results of the driver from the Mesos Web UI.
To use cluster mode, you must start the `MesosClusterDispatcher` in your cluster via the `sbin/start-mesos-dispatcher.sh` script, passing in the Mesos master URL (e.g.: `mesos://host:5050`). This starts the `MesosClusterDispatcher` as a daemon running on the host. Note that the `MesosClusterDispatcher` does not support authentication. You should ensure that all network access to it is protected (port 7077 by default).
By setting the Mesos proxy config property `spark.mesos.proxy.baseURL` (requires Mesos version 1.4 or later), e.g. `--conf spark.mesos.proxy.baseURL=http://localhost:5050`, when launching the dispatcher, the Mesos sandbox URI for each driver is added to the Mesos dispatcher UI.
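Putting this together, a dispatcher might be started like this (the host name is a placeholder for your Mesos master):

{% highlight bash %}
# The host name below is a placeholder for your Mesos master.
./sbin/start-mesos-dispatcher.sh \
  --master mesos://mesos-master.example.com:5050 \
  --conf spark.mesos.proxy.baseURL=http://mesos-master.example.com:5050
{% endhighlight %}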
If you would like to run the `MesosClusterDispatcher` with Marathon, you need to run the `MesosClusterDispatcher` in the foreground (i.e.: `./bin/spark-class org.apache.spark.deploy.mesos.MesosClusterDispatcher`). Note that the `MesosClusterDispatcher` does not yet support multiple instances for HA.
The `MesosClusterDispatcher` also supports writing recovery state into ZooKeeper. This will allow the `MesosClusterDispatcher` to recover all submitted and running containers on relaunch. In order to enable this recovery mode, you can set `SPARK_DAEMON_JAVA_OPTS` in `spark-env` by configuring `spark.deploy.recoveryMode` and related `spark.deploy.zookeeper.*` configurations. For more information about these configurations please refer to the configurations doc.
You can also specify any additional jars required by the `MesosClusterDispatcher` in the classpath by setting the environment variable `SPARK_DAEMON_CLASSPATH` in `spark-env`.
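For instance, the relevant lines in `conf/spark-env.sh` could look like the following; the ZooKeeper addresses and jar path are placeholders:

{% highlight bash %}
# conf/spark-env.sh -- ZooKeeper addresses and extra-jar path are placeholders.
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=zk1.example.com:2181,zk2.example.com:2181"
export SPARK_DAEMON_CLASSPATH="/opt/spark-extra-jars/*"
{% endhighlight %}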
From the client, you can submit a job to a Mesos cluster by running `spark-submit` and specifying the master URL to the URL of the `MesosClusterDispatcher` (e.g.: `mesos://dispatcher:7077`). You can view driver statuses on the Spark cluster Web UI.
For example:

{% highlight bash %}
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master mesos://207.184.161.138:7077 \
  --deploy-mode cluster \
  --supervise \
  --executor-memory 20G \
  --total-executor-cores 100 \
  http://path/to/examples.jar \
  1000
{% endhighlight %}
Note that jars or python files that are passed to spark-submit should be URIs reachable by Mesos agents, as the Spark driver doesn't automatically upload local jars.
Spark can run over Mesos in two modes: “coarse-grained” (default) and “fine-grained” (deprecated).
In “coarse-grained” mode, each Spark executor runs as a single Mesos task. Spark executors are sized according to the following configuration variables:
* Executor memory: `spark.executor.memory`
* Executor cores: `spark.executor.cores`
* Number of executors: `spark.cores.max`/`spark.executor.cores`
Please see the Spark Configuration page for details and default values.
Executors are brought up eagerly when the application starts, until `spark.cores.max` is reached. If you don't set `spark.cores.max`, the Spark application will consume all resources offered to it by Mesos, so be sure to set this variable in any sort of multi-tenant cluster, including one which runs multiple concurrent Spark applications.
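As a sketch, an application on a shared cluster could cap its footprint like this; the sizes are illustrative:

{% highlight bash %}
# Illustrative sizing: at most 24 cores total, 4 cores and 8 GiB per executor,
# i.e. up to 24/4 = 6 executors for this application.
./bin/spark-submit \
  --master mesos://host:5050 \
  --conf spark.cores.max=24 \
  --conf spark.executor.cores=4 \
  --conf spark.executor.memory=8g \
  --class org.apache.spark.examples.SparkPi \
  http://path/to/examples.jar 1000
{% endhighlight %}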
The scheduler will start executors round-robin on the offers Mesos gives it, but there are no spread guarantees, as Mesos does not provide such guarantees on the offer stream.
In this mode Spark executors will honor port allocation if such is provided by the user. Specifically, if the user defines `spark.blockManager.port` in the Spark configuration, the Mesos scheduler will check the available offers for a valid port range containing the port numbers. If no such range is available it will not launch any task. If no restriction is imposed on port numbers by the user, ephemeral ports are used as usual. This port-honouring implementation implies one task per host if the user defines a port. In the future, network isolation shall be supported.
The benefit of coarse-grained mode is much lower startup overhead, but at the cost of reserving Mesos resources for the complete duration of the application. To configure your job to dynamically adjust to its resource requirements, look into Dynamic Allocation.
NOTE: Fine-grained mode is deprecated as of Spark 2.0.0. Consider using Dynamic Allocation for some of the benefits. For a full explanation see SPARK-11857
In “fine-grained” mode, each Spark task inside the Spark executor runs as a separate Mesos task. This allows multiple instances of Spark (and other frameworks) to share cores at a very fine granularity, where each application gets more or fewer cores as it ramps up and down, but it comes with an additional overhead in launching each task. This mode may be inappropriate for low-latency requirements like interactive queries or serving web requests.
Note that while Spark tasks in fine-grained will relinquish cores as they terminate, they will not relinquish memory, as the JVM does not give memory back to the Operating System. Neither will executors terminate when they're idle.
To run in fine-grained mode, set the `spark.mesos.coarse` property to false in your SparkConf:
{% highlight scala %}
conf.set("spark.mesos.coarse", "false")
{% endhighlight %}
You may also make use of `spark.mesos.constraints` to set attribute-based constraints on Mesos resource offers. By default, all resource offers will be accepted.
{% highlight scala %}
conf.set("spark.mesos.constraints", "os:centos7;us-east-1:false")
{% endhighlight %}
For example, if `spark.mesos.constraints` is set to `os:centos7;us-east-1:false`, then the resource offers will be checked to see if they meet both of these constraints, and only then will they be accepted to start new executors.
To constrain where driver tasks are run, use `spark.mesos.driver.constraints`.
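For example, a cluster-mode submission might constrain both executors and the driver; the attribute names and values follow the example above and are illustrative:

{% highlight bash %}
# Attribute names and values are illustrative.
./bin/spark-submit \
  --master mesos://dispatcher:7077 \
  --deploy-mode cluster \
  --conf spark.mesos.constraints="os:centos7;us-east-1:false" \
  --conf spark.mesos.driver.constraints="os:centos7" \
  --class org.apache.spark.examples.SparkPi \
  http://path/to/examples.jar 100
{% endhighlight %}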
Spark can make use of a Mesos Docker containerizer by setting the property `spark.mesos.executor.docker.image` in your SparkConf.
The Docker image used must have an appropriate version of Spark already part of the image, or you can have Mesos download Spark via the usual methods.
Requires Mesos version 0.20.1 or later.
Note that by default Mesos agents will not pull the image if it already exists on the agent. If you use mutable image tags you can set `spark.mesos.executor.docker.forcePullImage` to `true` in order to force the agent to always pull the image before running the executor. Force pulling images is only available in Mesos version 0.22 and above.
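A submission using a Docker image might look like this; the image name is a placeholder for an image that already contains a compatible Spark installation:

{% highlight bash %}
# The image name is a placeholder; the image must bundle a compatible Spark.
./bin/spark-submit \
  --master mesos://host:5050 \
  --conf spark.mesos.executor.docker.image=example/spark:{{site.SPARK_VERSION}} \
  --conf spark.mesos.executor.docker.forcePullImage=true \
  --class org.apache.spark.examples.SparkPi \
  http://path/to/examples.jar 100
{% endhighlight %}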
You can run Spark and Mesos alongside your existing Hadoop cluster by just launching them as separate services on the machines. To access Hadoop data from Spark, a full `hdfs://` URL is required (typically `hdfs://<namenode>:9000/path`, but you can find the right URL on your Hadoop Namenode web UI).
In addition, it is possible to also run Hadoop MapReduce on Mesos for better resource isolation and sharing between the two. In this case, Mesos will act as a unified scheduler that assigns cores to either Hadoop or Spark, as opposed to having them share resources via the Linux scheduler on each node. Please refer to Hadoop on Mesos.
In either case, HDFS runs separately from Hadoop MapReduce, without being scheduled through Mesos.
Mesos supports dynamic allocation only with coarse-grained mode, which can resize the number of executors based on statistics of the application. For general information, see Dynamic Resource Allocation.
The External Shuffle Service to use is the Mesos Shuffle Service. It provides shuffle data cleanup functionality on top of the Shuffle Service since Mesos doesn't yet support notifying another framework's termination. To launch it, run `$SPARK_HOME/sbin/start-mesos-shuffle-service.sh` on all agent nodes, with `spark.shuffle.service.enabled` set to `true`.
This can also be achieved through Marathon, using a unique host constraint, and the following command: `./bin/spark-class org.apache.spark.deploy.mesos.MesosExternalShuffleService`.
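A minimal sketch of the application-side settings, assuming the Mesos shuffle service is already running on every agent node:

{% highlight bash %}
# Assumes start-mesos-shuffle-service.sh has been run on all agent nodes.
./bin/spark-submit \
  --master mesos://host:5050 \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  --class org.apache.spark.examples.SparkPi \
  http://path/to/examples.jar 1000
{% endhighlight %}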
See the configuration page for information on Spark configurations. The following configs are specific for Spark on Mesos.
* Custom parameters for the Docker containerizer are given in the form `key1=val1,key2=val2,key3=val3`.
* Docker volumes are given in the form `[host_path:]container_path[:ro|:rw]`.
* Secrets can be passed by value or by reference. (1) To specify a secret by value, set the `spark.mesos.[driver|executor].secret.values` property, e.g. `spark.mesos.driver.secret.values=guessme`. (2) To specify a secret that has been placed in a secret store by reference, specify its name within the secret store by setting the `spark.mesos.[driver|executor].secret.names` property. For example, to make a secret password named "password" in a secret store available to the driver process, set `spark.mesos.driver.secret.names=password`. Note: to use a secret store, make sure one has been integrated with Mesos via a custom [SecretResolver module](http://mesos.apache.org/documentation/latest/secrets/). To specify multiple secrets, provide a comma-separated list: `spark.mesos.driver.secret.values=guessme,passwd123` or `spark.mesos.driver.secret.names=password1,password2`.
* Secrets are exposed to the driver or executors either as environment variables or as files. (1) To make an environment-based secret, set the `spark.mesos.[driver|executor].secret.envkeys` property, e.g. `spark.mesos.driver.secret.envkeys=PASSWORD`. (2) To make a file-based secret, set the `spark.mesos.[driver|executor].secret.filenames` property; the secret will appear in the contents of a file with the given file name in the driver or executors. For example, to make a secret password available in a file named "pwdfile" in the driver process, set `spark.mesos.driver.secret.filenames=pwdfile`. Paths are relative to the container's work directory; absolute paths must already exist. Note: file-based secrets require a custom [SecretResolver module](http://mesos.apache.org/documentation/latest/secrets/). To specify env vars or file names corresponding to multiple secrets, provide a comma-separated list: `spark.mesos.driver.secret.envkeys=PASSWORD1,PASSWORD2` or `spark.mesos.driver.secret.filenames=pwdfile1,pwdfile2`.
* To submit drivers to a particular dispatcher queue with a particular role, set, for example, `spark.mesos.dispatcher.queue=URGENT` and `spark.mesos.role=URGENT`.
* Network labels are given in the form `key1:val1,key2:val2`. See [the Mesos CNI docs](http://mesos.apache.org/documentation/latest/cni/#mesos-meta-data-to-cni-plugins) for more details.
A few places to look during debugging:

* The Mesos master web UI on port `:5050`
* The Mesos logs, in `/var/log/mesos` by default

And common pitfalls:

* Make sure the Spark binary package is reachable by the agents at the `http://`, `hdfs://` or `s3n://` URL you gave