:py:mod:`airflow.models.trigger`
================================
.. py:module:: airflow.models.trigger
Module Contents
---------------
Classes
~~~~~~~
.. autoapisummary::
airflow.models.trigger.Trigger
.. py:class:: Trigger(classpath: str, kwargs: Dict[str, Any], created_date: Optional[datetime.datetime] = None)
Bases: :py:obj:`airflow.models.base.Base`
Triggers are workloads that run in an asynchronous event loop shared with
other Triggers, and fire off events that will unpause deferred Tasks,
start linked DAGs, etc.
They are persisted into the database and then re-hydrated into a
"triggerer" process, where many are run at once. We model it so that
there is a many-to-one relationship between Task and Trigger, for future
deduplication logic to use.
Rows will be evicted from the database when the triggerer detects no
active Tasks/DAGs using them. Events are not stored in the database;
when an Event is fired, the triggerer will directly push its data to the
appropriate Task/DAG.
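
In normal operation these rows are created by Airflow itself when a task defers, but the persisted form is easy to sketch. The ``classpath``/``kwargs`` values below are illustrative only (they reference ``DateTimeTrigger``, which is not part of this module):

.. code-block:: python

    import datetime

    from airflow.models.trigger import Trigger
    from airflow.utils.session import create_session

    # The persisted form is just an importable classpath plus the keyword
    # arguments needed to re-instantiate the trigger inside the triggerer.
    row = Trigger(
        classpath="airflow.triggers.temporal.DateTimeTrigger",
        kwargs={"moment": datetime.datetime(2021, 10, 1, tzinfo=datetime.timezone.utc)},
    )

    with create_session() as session:
        session.add(row)  # a triggerer process will later claim and run this row
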
.. py:attribute:: __tablename__
:annotation: = trigger
.. py:attribute:: id
.. py:attribute:: classpath
.. py:attribute:: kwargs
.. py:attribute:: created_date
.. py:attribute:: triggerer_id
.. py:method:: from_object(cls, trigger: airflow.triggers.base.BaseTrigger)
:classmethod:
Alternative constructor that creates a Trigger row directly from an
instance of :py:class:`~airflow.triggers.base.BaseTrigger`.
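
A minimal sketch, assuming ``DateTimeTrigger`` from :py:mod:`airflow.triggers.temporal` is available; ``from_object`` fills ``classpath`` and ``kwargs`` from the trigger's ``serialize()`` output:

.. code-block:: python

    import pendulum

    from airflow.models.trigger import Trigger
    from airflow.triggers.temporal import DateTimeTrigger

    trigger = DateTimeTrigger(moment=pendulum.datetime(2021, 10, 1, tz="UTC"))
    row = Trigger.from_object(trigger)

    # The row stores the trigger's serialize() output:
    #   row.classpath -> "airflow.triggers.temporal.DateTimeTrigger"
    #   row.kwargs    -> {"moment": <the aware datetime above>}
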
.. py:method:: bulk_fetch(cls, ids: List[int], session=None) -> Dict[int, Trigger]
:classmethod:
Fetches the Triggers with the given IDs and returns a dict mapping
ID -> Trigger instance.
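
For illustration (the IDs are hypothetical), with a session supplied explicitly:

.. code-block:: python

    from airflow.models.trigger import Trigger
    from airflow.utils.session import create_session

    with create_session() as session:
        # Only IDs that actually have a row appear in the result.
        triggers = Trigger.bulk_fetch([1, 2, 3], session=session)
        for trigger_id, row in triggers.items():
            print(trigger_id, row.classpath)
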
.. py:method:: clean_unused(cls, session=None)
:classmethod:
Deletes all triggers that have no tasks/DAGs dependent on them
(triggers have a one-to-many relationship to both)
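
This housekeeping step is normally run by the triggerer itself; a manual call is just:

.. code-block:: python

    from airflow.models.trigger import Trigger
    from airflow.utils.session import create_session

    with create_session() as session:
        # Drop trigger rows that no deferred task instance or DAG still references.
        Trigger.clean_unused(session=session)
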
.. py:method:: submit_event(cls, trigger_id, event, session=None)
:classmethod:
Takes an event produced by the trigger with the given ID and resumes all
task instances that are deferred on that trigger.
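
The triggerer calls this when one of its triggers yields an event; the trigger ID and payload below are hypothetical:

.. code-block:: python

    from airflow.models.trigger import Trigger
    from airflow.triggers.base import TriggerEvent
    from airflow.utils.session import create_session

    with create_session() as session:
        # Resume every task instance deferred on trigger 42, handing it the
        # event payload via its next_kwargs.
        Trigger.submit_event(
            trigger_id=42,
            event=TriggerEvent({"status": "done"}),
            session=session,
        )
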
.. py:method:: submit_failure(cls, trigger_id, exc=None, session=None)
:classmethod:
Called when a trigger has failed unexpectedly, and we need to mark
everything that depended on it as failed. Notably, we have to actually
run the failure code from a worker as it may have linked callbacks, so
hilariously we have to re-schedule the task instances to a worker just
so they can then fail.
To achieve this we use a special __fail__ value for next_method that the
runtime code understands as immediate-fail, and we pack the error into
next_kwargs.
TODO: Once we have shifted callback (and email) handling to run on
workers as first-class concepts, we can run the failure code here
in-process, but we can't do that right now.
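
A sketch of a manual call (the trigger ID and exception are hypothetical); the affected task instances are rescheduled only so a worker can raise the packed error and mark them failed:

.. code-block:: python

    from airflow.models.trigger import Trigger
    from airflow.utils.session import create_session

    with create_session() as session:
        # Every task instance deferred on trigger 42 gets next_method="__fail__"
        # and is rescheduled, so a worker runs it just to fail it.
        Trigger.submit_failure(
            trigger_id=42,
            exc=RuntimeError("trigger exited unexpectedly"),
            session=session,
        )
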
.. py:method:: ids_for_triggerer(cls, triggerer_id, session=None)
:classmethod:
Retrieves the IDs of all triggers assigned to the given triggerer.
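
For example (the triggerer ID is hypothetical; in practice it is the running triggerer job's own ID):

.. code-block:: python

    from airflow.models.trigger import Trigger
    from airflow.utils.session import create_session

    with create_session() as session:
        # IDs of the trigger rows currently assigned to triggerer instance 1.
        trigger_ids = Trigger.ids_for_triggerer(triggerer_id=1, session=session)
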
.. py:method:: assign_unassigned(cls, triggerer_id, capacity, session=None)
:classmethod:
Takes a triggerer_id and the capacity for that triggerer and assigns unassigned
triggers until that capacity is reached, or there are no more unassigned triggers.
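
A sketch with an illustrative capacity; in practice the triggerer passes its own ID and its configured capacity:

.. code-block:: python

    from airflow.models.trigger import Trigger
    from airflow.utils.session import create_session

    with create_session() as session:
        # Claim up to 1000 currently unassigned trigger rows for triggerer 1.
        Trigger.assign_unassigned(triggerer_id=1, capacity=1000, session=session)
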