There are several ways to monitor Spark applications: web UIs, metrics, and external instrumentation.
Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application. This includes:

- A list of scheduler stages and tasks
- A summary of RDD sizes and memory usage
- Environmental information
- Information about the running executors

You can access this interface by simply opening `http://<driver-node>:4040` in a web browser. If multiple SparkContexts are running on the same host, they will bind to successive ports beginning with 4040 (4041, 4042, etc.).
Note that this information is only available for the duration of the application by default. To view the web UI after the fact, set `spark.eventLog.enabled` to `true` before starting the application. This configures Spark to log Spark events that encode the information displayed in the UI to persisted storage.
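For example, event logging can be turned on in `conf/spark-defaults.conf`; the log directory below is a hypothetical path, set via the `spark.eventLog.dir` property:

```properties
# Persist Spark events so the UI can be reconstructed after the application finishes
spark.eventLog.enabled   true
# Hypothetical shared location for event logs (any HDFS or local path works)
spark.eventLog.dir       hdfs://namenode:8021/spark-logs
```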
Spark's Standalone Mode cluster manager also has its own web UI. If an application has logged events over the course of its lifetime, then the Standalone master's web UI will automatically re-render the application's UI after the application has finished.

If Spark is run on Mesos or YARN, it is still possible to reconstruct the UI of a finished application through Spark's history server, provided that the application's event logs exist. You can start the history server by executing:
```bash
./sbin/start-history-server.sh <base-logging-directory>
```
The base logging directory must be supplied, and should contain sub-directories, each of which represents an application's event logs. This creates a web interface at `http://<server-url>:18080` by default. The history server can be configured through environment variables such as `SPARK_HISTORY_OPTS` and through `spark.history.*` configuration properties.
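For example, daemon options can be passed through `SPARK_HISTORY_OPTS` before launching the server; the port override and directory below are illustrative:

```bash
export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18081"   # hypothetical port override
./sbin/start-history-server.sh /var/log/spark-events        # hypothetical base logging directory
```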
Note that in all of these UIs, the tables are sortable by clicking their headers, making it easy to identify slow tasks, data skew, etc.
Note that the history server only displays completed Spark jobs. One way to signal the completion of a Spark job is to stop the SparkContext explicitly (`sc.stop()`), or in Python to use the `with SparkContext() as sc:` construct to handle SparkContext setup and teardown, which still shows the job history in the UI.
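As a minimal sketch of the Python context-manager form (the application name is an arbitrary illustration):

```python
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("history-example")  # hypothetical application name

# Exiting the with-block calls sc.stop(), marking the application as
# completed so the history server will display it.
with SparkContext(conf=conf) as sc:
    print(sc.parallelize(range(100)).sum())
```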
Spark has a configurable metrics system based on the Coda Hale Metrics Library. This allows users to report Spark metrics to a variety of sinks including HTTP, JMX, and CSV files. The metrics system is configured via a configuration file that Spark expects to be present at `$SPARK_HOME/conf/metrics.properties`. A custom file location can be specified via the `spark.metrics.conf` configuration property. Spark's metrics are decoupled into different instances corresponding to Spark components. Within each instance, you can configure a set of sinks to which metrics are reported. The following instances are currently supported:
- `master`: The Spark standalone master process.
- `applications`: A component within the master which reports on various applications.
- `worker`: A Spark standalone worker process.
- `executor`: A Spark executor.
- `driver`: The Spark driver process (the process in which your SparkContext is created).

Each instance can report to zero or more sinks. Sinks are contained in the `org.apache.spark.metrics.sink` package:
- `ConsoleSink`: Logs metrics information to the console.
- `CSVSink`: Exports metrics data to CSV files at regular intervals.
- `JmxSink`: Registers metrics for viewing in a JMX console.
- `MetricsServlet`: Adds a servlet within the existing Spark UI to serve metrics data as JSON data.
- `GraphiteSink`: Sends metrics to a Graphite node.

Spark also supports a Ganglia sink which is not included in the default build due to licensing restrictions:
- `GangliaSink`: Sends metrics to a Ganglia node or multicast group.

To install the `GangliaSink` you'll need to perform a custom build of Spark. Note that by embedding this library you will include LGPL-licensed code in your Spark package. For sbt users, set the `SPARK_GANGLIA_LGPL` environment variable before building. For Maven users, enable the `-Pspark-ganglia-lgpl` profile. In addition to modifying the cluster's Spark build, user applications will need to link to the `spark-ganglia-lgpl` artifact.
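As an illustrative sketch (the exact invocation depends on your Spark version and build tooling), a Maven build with the profile from the text enabled might look like:

```bash
# Build Spark with the LGPL Ganglia sink included; -DskipTests is just a convenience
mvn -Pspark-ganglia-lgpl -DskipTests clean package
```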
The syntax of the metrics configuration file is defined in an example configuration file, `$SPARK_HOME/conf/metrics.properties.template`.
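As a sketch of that syntax, the entries below would enable `ConsoleSink` for all instances and `CSVSink` for the worker instance only; the period and directory values are illustrative:

```properties
# Format: [instance].sink.[sink_name].[option] = value
# Enable ConsoleSink for all instances ("*"), polling every 10 seconds
*.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink
*.sink.console.period=10
*.sink.console.unit=seconds

# Enable CSVSink for the worker instance only (illustrative output directory)
worker.sink.csv.class=org.apache.spark.metrics.sink.CSVSink
worker.sink.csv.directory=/tmp/spark-metrics
```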
Several external tools can be used to help profile the performance of Spark jobs. JVM utilities such as `jstack` for providing stack traces, `jmap` for creating heap dumps, `jstat` for reporting time-series statistics, and `jconsole` for visually exploring various JVM properties are useful for those comfortable with JVM internals.
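For instance, the commands below inspect a running executor JVM; `<pid>` stands for the executor's process ID, which can be found with `jps`:

```bash
jstack <pid>              # dump stack traces for every thread in the JVM
jmap -histo <pid>         # print a histogram of heap object counts and sizes
jstat -gcutil <pid> 1000  # report GC utilization every 1000 ms
```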