:py:mod:`airflow.providers.apache.spark.operators.spark_submit`
===============================================================

.. py:module:: airflow.providers.apache.spark.operators.spark_submit


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   airflow.providers.apache.spark.operators.spark_submit.SparkSubmitOperator



.. py:class:: SparkSubmitOperator(*, application = '', conf = None, conn_id = 'spark_default', files = None, py_files = None, archives = None, driver_class_path = None, jars = None, java_class = None, packages = None, exclude_packages = None, repositories = None, total_executor_cores = None, executor_cores = None, executor_memory = None, driver_memory = None, keytab = None, principal = None, proxy_user = None, name = 'arrow-spark', num_executors = None, status_poll_interval = 1, application_args = None, env_vars = None, verbose = False, spark_binary = None, **kwargs)

   Bases: :py:obj:`airflow.models.BaseOperator`

   This operator wraps the ``spark-submit`` binary to kick off a spark-submit job.
   It requires that the ``spark-submit`` binary is in the ``PATH`` or that
   ``spark-home`` is set in the extra field of the connection.

   .. seealso::
      For more information on how to use this operator, take a look at the guide:
      :ref:`howto/operator:SparkSubmitOperator`

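   As a rough sketch of the connection requirement above (the ``yarn`` master and the
   ``deploy-mode`` extra are illustrative; verify the exact extra key names against
   your provider version's connection documentation), a Spark connection can be
   supplied through an environment variable that Airflow resolves as
   ``AIRFLOW_CONN_<CONN_ID>``:

   .. code-block:: python

       import os

       # Hypothetical example: a "spark" connection pointing at a yarn master,
       # with deploy-mode passed as a connection extra via the URI query string.
       os.environ["AIRFLOW_CONN_SPARK_DEFAULT"] = "spark://yarn?deploy-mode=cluster"
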
   :param application: The application that is submitted as a job, either jar or py file. (templated)
   :param conf: Arbitrary Spark configuration properties (templated)
   :param conn_id: The :ref:`spark connection id <howto/connection:spark>` as configured
       in Airflow administration. When an invalid connection_id is supplied, it will default to yarn.
   :param files: Upload additional files to the executor running the job, separated by a
       comma. Files will be placed in the working directory of each executor.
       For example, serialized objects. (templated)
   :param py_files: Additional python files used by the job, can be .zip, .egg or .py. (templated)
   :param archives: Archives that spark should unzip (and possibly tag with #ALIAS) into
       the application working directory.
   :param driver_class_path: Additional, driver-specific, classpath settings. (templated)
   :param jars: Submit additional jars to upload and place them in the executor classpath. (templated)
   :param java_class: The main class of the Java application
   :param packages: Comma-separated list of maven coordinates of jars to include on the
       driver and executor classpaths. (templated)
   :param exclude_packages: Comma-separated list of maven coordinates of jars to exclude
       while resolving the dependencies provided in 'packages' (templated)
   :param repositories: Comma-separated list of additional remote repositories to search
       for the maven coordinates given with 'packages'
   :param total_executor_cores: (Standalone & Mesos only) Total cores for all executors
       (Default: all the available cores on the worker)
   :param executor_cores: (Standalone & YARN only) Number of cores per executor (Default: 2)
   :param executor_memory: Memory per executor (e.g. 1000M, 2G) (Default: 1G)
   :param driver_memory: Memory allocated to the driver (e.g. 1000M, 2G) (Default: 1G)
   :param keytab: Full path to the file that contains the keytab (templated)
   :param principal: The name of the kerberos principal used for the keytab (templated)
   :param proxy_user: User to impersonate when submitting the application (templated)
   :param name: Name of the job (default: arrow-spark). (templated)
   :param num_executors: Number of executors to launch
   :param status_poll_interval: Seconds to wait between polls of driver status in cluster
       mode (Default: 1)
   :param application_args: Arguments for the application being submitted (templated)
   :param env_vars: Environment variables for spark-submit; also supported in yarn
       and Kubernetes modes. (templated)
   :param verbose: Whether to pass the verbose flag to the spark-submit process for debugging
   :param spark_binary: The command to use for spark-submit; some distributions may
       use ``spark2-submit``.

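   A minimal usage sketch (the DAG id, schedule, and application path below are
   illustrative placeholders, not values prescribed by this operator):

   .. code-block:: python

       from datetime import datetime

       from airflow import DAG
       from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

       with DAG(
           dag_id="spark_pi_example",  # hypothetical DAG id
           start_date=datetime(2021, 1, 1),
           schedule_interval=None,
       ) as dag:
           submit_job = SparkSubmitOperator(
               task_id="submit_spark_pi",
               application="/opt/spark/examples/src/main/python/pi.py",  # placeholder path
               conn_id="spark_default",
               executor_cores=2,
               executor_memory="2G",
               application_args=["10"],  # forwarded to the application after its path
               verbose=True,
           )
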
   .. py:attribute:: template_fields
      :annotation: :Sequence[str] = ['_application', '_conf', '_files', '_py_files', '_jars', '_driver_class_path', '_packages',...



   .. py:attribute:: ui_color



   .. py:method:: execute(context)

      Call the SparkSubmitHook to run the provided Spark job.

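      For ad-hoc testing outside a scheduled run, the method can be invoked directly.
      This is a sketch under the assumption that ``execute`` does not inspect the
      context here; a real task run receives a fully populated ``Context`` from Airflow:

      .. code-block:: python

          op = SparkSubmitOperator(
              task_id="adhoc_submit",
              application="/path/to/app.py",  # placeholder
          )
          op.execute(context={})  # blocks until the spark-submit process exits
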
   .. py:method:: on_kill()

      Override this method to clean up subprocesses when a task instance
      gets killed. Any use of the threading, subprocess or multiprocessing
      module within an operator needs to be cleaned up, or it will leave
      ghost processes behind.

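      As a hedged illustration of that contract (``MyShellOperator`` and its command
      are hypothetical, not part of this module), a custom operator that spawns a
      subprocess should keep a handle for ``on_kill`` to terminate:

      .. code-block:: python

          import signal
          import subprocess

          from airflow.models import BaseOperator


          class MyShellOperator(BaseOperator):
              def execute(self, context):
                  # Keep a handle so on_kill() can reach the child process.
                  self._proc = subprocess.Popen(["sleep", "3600"])
                  self._proc.wait()

              def on_kill(self):
                  # Without this, the child process would outlive the killed task.
                  if getattr(self, "_proc", None):
                      self._proc.send_signal(signal.SIGTERM)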