:py:mod:`airflow.providers.apache.spark.hooks.spark_submit`
===========================================================
.. py:module:: airflow.providers.apache.spark.hooks.spark_submit
Module Contents
---------------
Classes
~~~~~~~
.. autoapisummary::
airflow.providers.apache.spark.hooks.spark_submit.SparkSubmitHook
.. py:class:: SparkSubmitHook(conf = None, conn_id = 'spark_default', files = None, py_files = None, archives = None, driver_class_path = None, jars = None, java_class = None, packages = None, exclude_packages = None, repositories = None, total_executor_cores = None, executor_cores = None, executor_memory = None, driver_memory = None, keytab = None, principal = None, proxy_user = None, name = 'default-name', num_executors = None, status_poll_interval = 1, application_args = None, env_vars = None, verbose = False, spark_binary = None)
Bases: :py:obj:`airflow.hooks.base.BaseHook`, :py:obj:`airflow.utils.log.logging_mixin.LoggingMixin`
This hook is a wrapper around the spark-submit binary to kick off a spark-submit job.
It requires that the "spark-submit" binary is in the PATH or that the spark_home is
supplied.
:param conf: Arbitrary Spark configuration properties
:param conn_id: The :ref:`spark connection id <howto/connection:spark>` as configured
in Airflow administration. When an invalid conn_id is supplied, the hook defaults
to yarn.
:param files: Upload additional files to the executor running the job, separated by a
comma. Files will be placed in the working directory of each executor.
For example, serialized objects.
:param py_files: Additional Python files used by the job; can be .zip, .egg or .py.
:param archives: Archives that spark should unzip (and possibly tag with #ALIAS) into
the application working directory.
:param driver_class_path: Additional, driver-specific, classpath settings.
:param jars: Submit additional jars to upload and place them on the executor classpath.
:param java_class: The main class of the Java application
:param packages: Comma-separated list of maven coordinates of jars to include on the
driver and executor classpaths
:param exclude_packages: Comma-separated list of maven coordinates of jars to exclude
while resolving the dependencies provided in 'packages'
:param repositories: Comma-separated list of additional remote repositories to search
for the maven coordinates given with 'packages'
:param total_executor_cores: (Standalone & Mesos only) Total cores for all executors
(Default: all the available cores on the worker)
:param executor_cores: (Standalone, YARN and Kubernetes only) Number of cores per
executor (Default: 2)
:param executor_memory: Memory per executor (e.g. 1000M, 2G) (Default: 1G)
:param driver_memory: Memory allocated to the driver (e.g. 1000M, 2G) (Default: 1G)
:param keytab: Full path to the file that contains the keytab
:param principal: The name of the kerberos principal used for keytab
:param proxy_user: User to impersonate when submitting the application
:param name: Name of the job (default: default-name)
:param num_executors: Number of executors to launch
:param status_poll_interval: Seconds to wait between polls of driver status in cluster
mode (Default: 1)
:param application_args: Arguments for the application being submitted
:param env_vars: Environment variables for spark-submit. Also supported in
yarn and k8s modes.
:param verbose: Whether to pass the verbose flag to spark-submit process for debugging
:param spark_binary: The command to use for spark submit.
Some distros may use spark2-submit.
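A minimal usage sketch, assuming a configured ``spark_default`` connection; the
application path, main class, and package coordinate below are placeholders.

.. code-block:: python

   from airflow.providers.apache.spark.hooks.spark_submit import SparkSubmitHook

   # Placeholder paths and coordinates for illustration only.
   hook = SparkSubmitHook(
       conn_id="spark_default",
       name="example-spark-job",
       java_class="org.example.MyApp",
       packages="org.apache.spark:spark-avro_2.12:3.3.0",
       executor_cores=2,
       executor_memory="2G",
       num_executors=4,
       application_args=["--date", "2022-01-01"],
       verbose=True,
   )

   # Builds the spark-submit command from the arguments above and runs it.
   hook.submit(application="/opt/jobs/my-app.jar")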
.. py:attribute:: conn_name_attr
:annotation: = conn_id
.. py:attribute:: default_conn_name
:annotation: = spark_default
.. py:attribute:: conn_type
:annotation: = spark
.. py:attribute:: hook_name
:annotation: = Spark
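For illustration, a sketch of a connection object matching these attributes; the host,
port, and extra values shown are assumptions that depend on the target Spark deployment.

.. code-block:: python

   from airflow.models.connection import Connection

   # Hypothetical connection for conn_type "spark" under the default
   # connection name "spark_default"; host, port, and extras are placeholders.
   spark_conn = Connection(
       conn_id="spark_default",
       conn_type="spark",
       host="spark://spark-master",
       port=7077,
       extra='{"queue": "default", "deploy-mode": "cluster"}',
   )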
.. py:method:: get_ui_field_behaviour()
:staticmethod:
Returns custom field behaviour
.. py:method:: get_conn(self)
Returns connection for the hook.
.. py:method:: submit(self, application = '', **kwargs)
Execute the spark-submit job via subprocess.Popen.
:param application: Submitted application, jar or py file
:param kwargs: extra arguments to Popen (see subprocess.Popen)
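A sketch of calling ``submit`` directly; the application path and working directory are
placeholders, and any extra keyword arguments are forwarded to ``subprocess.Popen``.

.. code-block:: python

   hook = SparkSubmitHook(conn_id="spark_default")

   # Keyword arguments other than "application" (e.g. cwd) are passed
   # through to subprocess.Popen.
   hook.submit(application="/opt/jobs/wordcount.py", cwd="/opt/jobs")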
.. py:method:: on_kill(self)
Kill the running spark-submit command.