:py:mod:`airflow.providers.microsoft.azure.operators.synapse`
=============================================================
.. py:module:: airflow.providers.microsoft.azure.operators.synapse
Module Contents
---------------
Classes
~~~~~~~
.. autoapisummary::
airflow.providers.microsoft.azure.operators.synapse.AzureSynapseRunSparkBatchOperator
.. py:class:: AzureSynapseRunSparkBatchOperator(*, azure_synapse_conn_id = AzureSynapseHook.default_conn_name, wait_for_termination = True, spark_pool = '', payload, timeout = 60 * 60 * 24 * 7, check_interval = 60, **kwargs)
Bases: :py:obj:`airflow.models.BaseOperator`
Executes a Spark job on Azure Synapse.
.. seealso::
For more information on how to use this operator, take a look at the guide:
:ref:`howto/operator:AzureSynapseRunSparkBatchOperator`
:param azure_synapse_conn_id: The connection identifier for connecting to Azure Synapse.
:param wait_for_termination: Flag to wait on a job run's termination.
:param spark_pool: The target Synapse Spark pool used to submit the job.
:param payload: Livy-compatible payload representing the Spark job that the user wants to submit.
:param timeout: Time in seconds to wait for a job to reach a terminal status for non-asynchronous
waits. Used only if ``wait_for_termination`` is True.
:param check_interval: Time in seconds to check on a job run's status for non-asynchronous waits.
Used only if ``wait_for_termination`` is True.
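A minimal usage sketch (not taken from the provider docs): the connection id, Spark pool name, storage path, and all payload values below are placeholders, and the payload is shown as a plain Livy-style dictionary per the parameter description above.
.. code-block:: python

   from datetime import datetime

   from airflow import DAG
   from airflow.providers.microsoft.azure.operators.synapse import (
       AzureSynapseRunSparkBatchOperator,
   )

   # Illustrative Livy-style payload; the file path, resource sizes, and every
   # other value here are placeholders, not values from the provider docs.
   SPARK_JOB_PAYLOAD = {
       "name": "wordcount-job",
       "file": "abfss://container@account.dfs.core.windows.net/wordcount.py",
       "args": ["input.txt", "output/"],
       "conf": {"spark.dynamicAllocation.enabled": "false"},
       "numExecutors": 2,
       "executorCores": 4,
       "executorMemory": "8g",
       "driverCores": 4,
       "driverMemory": "8g",
   }

   with DAG(
       dag_id="example_synapse_spark_job",  # placeholder DAG id
       start_date=datetime(2022, 1, 1),
       schedule_interval=None,
       catchup=False,
   ) as dag:
       run_spark_job = AzureSynapseRunSparkBatchOperator(
           task_id="run_spark_job",
           azure_synapse_conn_id="azure_synapse_default",  # hypothetical connection id
           spark_pool="mysparkpool",  # placeholder Spark pool name
           payload=SPARK_JOB_PAYLOAD,
           wait_for_termination=True,
           check_interval=60,
       )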
.. py:attribute:: template_fields
:annotation: :Sequence[str] = ['azure_synapse_conn_id', 'spark_pool']
.. py:attribute:: template_fields_renderers
.. py:attribute:: ui_color
:annotation: = #0678d4
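Because ``azure_synapse_conn_id`` and ``spark_pool`` are template fields, both accept Jinja expressions that are rendered at runtime. A small, hypothetical sketch (the Airflow Variable name is a placeholder, and ``SPARK_JOB_PAYLOAD`` reuses the payload sketch above):
.. code-block:: python

   # Hypothetical templated usage: "synapse_spark_pool" is a placeholder
   # Airflow Variable name, resolved when the task instance is rendered.
   run_templated = AzureSynapseRunSparkBatchOperator(
       task_id="run_spark_job_templated",
       spark_pool="{{ var.value.synapse_spark_pool }}",
       payload=SPARK_JOB_PAYLOAD,
   )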
.. py:method:: execute(context)
This is the main method to derive when creating an operator.
Context is the same dictionary used when rendering Jinja templates.
Refer to get_template_context for more context.
.. py:method:: on_kill()
Override this method to clean up subprocesses when a task instance
gets killed. Any use of the threading, subprocess or multiprocessing
modules within an operator needs to be cleaned up, or it will leave
ghost processes behind.