:py:mod:`airflow.providers.google.cloud.triggers.dataproc`
==========================================================

.. py:module:: airflow.providers.google.cloud.triggers.dataproc

.. autoapi-nested-parse::

   This module contains Google Dataproc triggers.

Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   airflow.providers.google.cloud.triggers.dataproc.DataprocBaseTrigger

.. py:class:: DataprocBaseTrigger(job_id, region, project_id = None, gcp_conn_id = 'google_cloud_default', impersonation_chain = None, delegate_to = None, polling_interval_seconds = 30)

   Bases: :py:obj:`airflow.triggers.base.BaseTrigger`

   Trigger that periodically polls information from the Dataproc API to verify job status.
   Implementation leverages asynchronous transport.
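
   For example, a deferrable operator can hand off waiting to this trigger
   via ``BaseOperator.defer`` (a minimal sketch; ``execute_complete`` is an
   assumed callback method name on the operator, not part of this module):

   .. code-block:: python

      self.defer(
          trigger=DataprocBaseTrigger(
              job_id=job_id,
              region=self.region,
              project_id=self.project_id,
              gcp_conn_id=self.gcp_conn_id,
              polling_interval_seconds=10,
          ),
          method_name="execute_complete",
      )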

   .. py:method:: serialize()

      Returns the information needed to reconstruct this Trigger.

      :return: Tuple of (class path, keyword arguments needed to re-instantiate).
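
      In practice this is typically the trigger's dotted class path plus its
      constructor kwargs. A minimal sketch, assuming the serialized fields
      mirror the constructor above (the exact tuple the provider emits may
      differ):

      .. code-block:: python

         from typing import Any


         def serialize(self) -> tuple[str, dict[str, Any]]:
             # Class path first, then kwargs sufficient to re-instantiate.
             return (
                 "airflow.providers.google.cloud.triggers.dataproc.DataprocBaseTrigger",
                 {
                     "job_id": self.job_id,
                     "region": self.region,
                     "project_id": self.project_id,
                     "gcp_conn_id": self.gcp_conn_id,
                     "impersonation_chain": self.impersonation_chain,
                     "delegate_to": self.delegate_to,
                     "polling_interval_seconds": self.polling_interval_seconds,
                 },
             )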

   .. py:method:: run()
      :async:

      Runs the trigger in an asynchronous context.

      The trigger should yield an Event whenever it wants to fire off
      an event, and return None if it is finished. Single-event triggers
      should thus yield and then immediately return.

      If it yields, it is likely that it will be resumed very quickly,
      but it may not be (e.g. if the workload is being moved to another
      triggerer process, or a multi-event trigger was being used for a
      single-event task deferral).

      In either case, Trigger classes should assume they will be persisted,
      and then rely on ``cleanup()`` being called when they are no longer needed.
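
      The poll-and-fire loop described above, sketched for a Dataproc job
      trigger (``self.get_async_hook()`` and the terminal-state check are
      illustrative assumptions, not the provider's exact code):

      .. code-block:: python

         import asyncio

         from airflow.triggers.base import TriggerEvent


         async def run(self):
             # Poll the Dataproc API until the job reaches a terminal state,
             # then fire a single event and return (single-event trigger).
             while True:
                 job = await self.get_async_hook().get_job(  # assumed hook helper
                     job_id=self.job_id, region=self.region, project_id=self.project_id
                 )
                 state = job.status.state.name
                 if state in ("DONE", "ERROR", "CANCELLED"):
                     yield TriggerEvent({"job_id": self.job_id, "job_state": state})
                     return
                 await asyncio.sleep(self.polling_interval_seconds)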