Spark provides three locations to configure the system:

* Spark properties, which control most application settings and are set separately for each application.
* Environment variables, which set per-machine settings such as the IP address, through the `conf/spark-env.sh` script on each node.
* Logging, which can be configured through `log4j.properties`.

Spark properties control most application settings and are configured separately for each application. The preferred way to set them is by passing a `SparkConf` object to your `SparkContext` constructor. Alternatively, Spark will also load them from Java system properties, for compatibility with older versions of Spark.
`SparkConf` lets you configure most of the common properties to initialize a cluster (e.g., master URL and application name), as well as arbitrary key-value pairs through the `set()` method. For example, we could initialize an application as follows:
{% highlight scala %}
val conf = new SparkConf()
             .setMaster("local")
             .setAppName("My application")
             .set("spark.executor.memory", "1g")
val sc = new SparkContext(conf)
{% endhighlight %}
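The Java system property route mentioned above can be sketched the same way; this assumes that a `SparkConf` created with its default constructor picks up any JVM property whose name starts with `spark.` (whether set via `-D` flags or `System.setProperty`) before the context is created:

{% highlight scala %}
import org.apache.spark.{SparkConf, SparkContext}

// Legacy style: set the property on the JVM before building the SparkConf,
// for example here or with -Dspark.executor.memory=1g on the command line.
System.setProperty("spark.executor.memory", "1g")

// The default SparkConf constructor loads existing "spark.*" system properties.
val conf = new SparkConf()
             .setMaster("local")
             .setAppName("My application")
val sc = new SparkContext(conf)
{% endhighlight %}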
Most of the properties control internal settings that have reasonable default values. However, there are at least five properties that you will commonly want to control:
Apart from these, the following properties are also available, and may be useful in some situations:
The application web UI at `http://<driver>:4040` lists Spark properties in the "Environment" tab. This is a useful place to check that your properties have been set correctly.
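As a complement to the web UI, you can also dump the effective configuration from code; this sketch assumes your Spark version exposes `SparkContext.getConf` and `SparkConf.toDebugString`:

{% highlight scala %}
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setMaster("local").setAppName("Config check")
val sc = new SparkContext(conf)

// getConf returns a copy of the context's configuration; toDebugString
// renders it one "key=value" pair per line for easy inspection.
println(sc.getConf.toDebugString)
{% endhighlight %}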
Certain Spark settings can be configured through environment variables, which are read from the `conf/spark-env.sh` script in the directory where Spark is installed (or `conf/spark-env.cmd` on Windows). These variables are meant for machine-specific settings, such as library search paths. While Spark properties can also be set there through `SPARK_JAVA_OPTS`, for per-application settings we recommend setting these properties within the application instead of in `spark-env.sh`, so that different applications can use different settings.
Note that `conf/spark-env.sh` does not exist by default when Spark is installed. However, you can copy `conf/spark-env.sh.template` to create it. Make sure you make the copy executable.
The following variables can be set in `spark-env.sh`:

* `JAVA_HOME`, the location where Java is installed (if it's not on your default `PATH`)
* `PYSPARK_PYTHON`, the Python binary to use for PySpark
* `SPARK_LOCAL_IP`, to configure which IP address of the machine to bind to.
* `SPARK_LIBRARY_PATH`, to add search directories for native libraries.
* `SPARK_CLASSPATH`, to add elements to Spark's classpath that you want to be present for all applications. Note that applications can also add dependencies for themselves through `SparkContext.addJar` -- we recommend doing that when possible.
* `SPARK_JAVA_OPTS`, to add JVM options. This includes Java options like garbage collector settings and any system properties that you'd like to pass with `-D`. One use case is to set some Spark properties differently on this machine, e.g., `-Dspark.local.dir=/disk1,/disk2`.

Since `spark-env.sh` is a shell script, some of these can be set programmatically -- for example, you might compute `SPARK_LOCAL_IP` by looking up the IP of a specific network interface.
Spark uses log4j for logging. You can configure it by adding a `log4j.properties` file in the `conf` directory. One way to start is to copy the existing `log4j.properties.template` located there.
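If you just want to change verbosity from inside an application rather than by editing `log4j.properties`, a minimal sketch using the log4j 1.x API directly is below; the `org.apache.spark` logger name is an assumption about which namespace you want to quiet:

{% highlight scala %}
import org.apache.log4j.{Level, Logger}

// Quiet Spark's own classes while leaving application logging untouched.
// "org.apache.spark" is the assumed logger namespace -- adjust as needed.
Logger.getLogger("org.apache.spark").setLevel(Level.WARN)

// Or raise the threshold for everything at once via the root logger.
Logger.getRootLogger.setLevel(Level.WARN)
{% endhighlight %}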