---
layout: global
title: Spark Configuration
---

Spark provides three main locations to configure the system:

  • Java system properties, which control internal configuration parameters and can be set either programmatically (by calling System.setProperty before creating a SparkContext) or through JVM arguments.
  • Environment variables for configuring per-machine settings such as the IP address, which can be set in the conf/spark-env.sh script.
  • Logging configuration, which is done through log4j.properties.

# System Properties

To set a system property for configuring Spark, you need to either pass it with a -D flag to the JVM (for example java -Dspark.cores.max=5 MyProgram) or call System.setProperty in your code before creating your Spark context, as follows:

{% highlight scala %}
System.setProperty("spark.cores.max", "5")
val sc = new SparkContext(...)
{% endhighlight %}

Most of the configurable system properties control internal settings that have reasonable default values. However, there are at least five properties that you will commonly want to control:
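
As a sketch, two properties that already appear on this page, spark.cores.max and spark.local.dir, can be set in the same way before the SparkContext is created; the values and master URL below are only placeholders:

{% highlight scala %}
// Sketch only: assumes SparkContext is imported for your Spark release and uses
// placeholder values for illustration.
System.setProperty("spark.cores.max", "5")              // cap the CPU cores the application requests across the cluster
System.setProperty("spark.local.dir", "/disk1,/disk2")  // scratch space for map output files and RDDs stored on disk
val sc = new SparkContext("local", "My App")
{% endhighlight %}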

Apart from these, the following properties are also available, and may be useful in some situations:

# Environment Variables

Certain Spark settings can also be configured through environment variables, which are read from the conf/spark-env.sh script in the directory where Spark is installed (or conf/spark-env.cmd on Windows). These variables are meant for machine-specific settings, such as library search paths. While Java system properties can also be set here, we recommend setting application-level properties within the application itself rather than in spark-env.sh, so that different applications can use different settings.

Note that conf/spark-env.sh does not exist by default when Spark is installed. However, you can create it by copying conf/spark-env.sh.template; make sure the copy is executable.

The following variables can be set in spark-env.sh:

  • JAVA_HOME, the location where Java is installed (if it's not on your default PATH)
  • PYSPARK_PYTHON, the Python binary to use for PySpark
  • SPARK_LOCAL_IP, to configure which IP address of the machine to bind to.
  • SPARK_LIBRARY_PATH, to add search directories for native libraries.
  • SPARK_CLASSPATH, to add elements to Spark's classpath that you want to be present for all applications. Note that applications can also add dependencies for themselves through SparkContext.addJar -- we recommend doing that when possible (see the sketch after this list).
  • SPARK_JAVA_OPTS, to add JVM options. This includes Java options like garbage collector settings and any system properties that you'd like to pass with -D (e.g., -Dspark.local.dir=/disk1,/disk2).
  • Options for the Spark standalone cluster scripts, such as number of cores to use on each machine and maximum memory.
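
To illustrate the SparkContext.addJar alternative mentioned in the SPARK_CLASSPATH item above, here is a minimal sketch; the jar path and master URL are placeholders:

{% highlight scala %}
// Sketch only: attach an application-specific dependency to this job rather than
// putting it on SPARK_CLASSPATH for every application. The path is a placeholder.
val sc = new SparkContext("local", "My App")
sc.addJar("/path/to/my-dependency.jar")  // distributed to the workers for this application only
{% endhighlight %}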

Since spark-env.sh is a shell script, some of these can be set programmatically -- for example, you might compute SPARK_LOCAL_IP by looking up the IP of a specific network interface.

# Configuring Logging

Spark uses log4j for logging. You can configure it by adding a log4j.properties file in the conf directory. One way to start is to copy the existing log4j.properties.template located there.
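
As a supplement to the file-based approach above (not a replacement for it), the log4j API itself can adjust logging levels from application code, which can be handy for quick experiments:

{% highlight scala %}
// Supplementary sketch: raise the root logger's threshold at runtime via the log4j API.
// For persistent settings, edit conf/log4j.properties as described above.
import org.apache.log4j.{Level, Logger}

Logger.getRootLogger.setLevel(Level.WARN)  // suppress Spark's INFO-level output
{% endhighlight %}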