| :mod:`airflow.models` |
| ===================== |
| |
| .. py:module:: airflow.models |
| |
| .. autoapi-nested-parse:: |
| |
| Airflow models |
| |
| |
| |
| Submodules |
| ---------- |
| .. toctree:: |
| :titlesonly: |
| :maxdepth: 1 |
| |
| base/index.rst |
| baseoperator/index.rst |
| chart/index.rst |
| connection/index.rst |
| crypto/index.rst |
| dag/index.rst |
| dagbag/index.rst |
| dagcode/index.rst |
| dagpickle/index.rst |
| dagrun/index.rst |
| errors/index.rst |
| knownevent/index.rst |
| kubernetes/index.rst |
| log/index.rst |
| pool/index.rst |
| renderedtifields/index.rst |
| serialized_dag/index.rst |
| skipmixin/index.rst |
| slamiss/index.rst |
| taskfail/index.rst |
| taskinstance/index.rst |
| taskreschedule/index.rst |
| user/index.rst |
| variable/index.rst |
| xcom/index.rst |
| |
| |
| Package Contents |
| ---------------- |
| |
| .. data:: ID_LEN |
| :annotation: = 250 |
| |
| |
| |
| .. data:: Base |
| :annotation: :Any |
| |
| |
| |
| .. py:class:: BaseOperator(task_id, owner=conf.get('operators', 'DEFAULT_OWNER'), email=None, email_on_retry=True, email_on_failure=True, retries=conf.getint('core', 'default_task_retries', fallback=0), retry_delay=timedelta(seconds=300), retry_exponential_backoff=False, max_retry_delay=None, start_date=None, end_date=None, schedule_interval=None, depends_on_past=False, wait_for_downstream=False, dag=None, params=None, default_args=None, priority_weight=1, weight_rule=WeightRule.DOWNSTREAM, queue=conf.get('celery', 'default_queue'), pool=Pool.DEFAULT_POOL_NAME, pool_slots=1, sla=None, execution_timeout=None, on_failure_callback=None, on_success_callback=None, on_retry_callback=None, trigger_rule=TriggerRule.ALL_SUCCESS, resources=None, run_as_user=None, task_concurrency=None, executor_config=None, do_xcom_push=True, inlets=None, outlets=None, *args, **kwargs) |
| |
| Bases: :class:`airflow.utils.log.logging_mixin.LoggingMixin` |
| |
| Abstract base class for all operators. Since operators create objects that |
| become nodes in the dag, BaseOperator contains many recursive methods for |
| dag crawling behavior. To derive from this class, you are expected to override |
| the constructor as well as the 'execute' method. |
| |
| Operators derived from this class should perform or trigger certain tasks |
| synchronously (wait for completion). Examples of operators include an |
| operator that runs a Pig job (PigOperator), a sensor operator that |
| waits for a partition to land in Hive (HiveSensorOperator), or one that |
| moves data from Hive to MySQL (Hive2MySqlOperator). Instances of these |
| operators (tasks) target specific operations, running specific scripts, |
| functions or data transfers. |
| |
| This class is abstract and shouldn't be instantiated. Instantiating a |
| class derived from this one results in the creation of a task object, |
| which ultimately becomes a node in DAG objects. Task dependencies should |
| be set by using the set_upstream and/or set_downstream methods. |
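| |
| **Example** (illustrative sketch, not part of the API): a minimal custom |
| operator deriving from BaseOperator; ``HelloOperator`` and its ``name`` |
| parameter are hypothetical :: |
| |
|     from airflow.models import BaseOperator |
|     from airflow.utils.decorators import apply_defaults |
| |
|     class HelloOperator(BaseOperator): |
|         template_fields = ['name']  # rendered with Jinja before execute() |
| |
|         @apply_defaults |
|         def __init__(self, name, *args, **kwargs): |
|             super(HelloOperator, self).__init__(*args, **kwargs) |
|             self.name = name |
| |
|         def execute(self, context): |
|             # the return value is pushed to XCom when do_xcom_push=True |
|             self.log.info('Hello %s', self.name) |
|             return self.name |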
| |
| :param task_id: a unique, meaningful id for the task |
| :type task_id: str |
| :param owner: the owner of the task, using the unix username is recommended |
| :type owner: str |
| :param email: the 'to' email address(es) used in email alerts. This can be a |
| single email or multiple ones. Multiple addresses can be specified as a |
| comma or semi-colon separated string or by passing a list of strings. |
| :type email: str or list[str] |
| :param email_on_retry: Indicates whether email alerts should be sent when a |
| task is retried |
| :type email_on_retry: bool |
| :param email_on_failure: Indicates whether email alerts should be sent when |
| a task failed |
| :type email_on_failure: bool |
| :param retries: the number of retries that should be performed before |
| failing the task |
| :type retries: int |
| :param retry_delay: delay between retries |
| :type retry_delay: datetime.timedelta |
| :param retry_exponential_backoff: allow progressive longer waits between |
| retries by using exponential backoff algorithm on retry delay (delay |
| will be converted into seconds) |
| :type retry_exponential_backoff: bool |
| :param max_retry_delay: maximum delay interval between retries |
| :type max_retry_delay: datetime.timedelta |
| :param start_date: The ``start_date`` for the task, determines |
| the ``execution_date`` for the first task instance. The best practice |
| is to have the start_date rounded |
| to your DAG's ``schedule_interval``. Daily jobs have their start_date |
| some day at 00:00:00, hourly jobs have their start_date at 00:00 |
| of a specific hour. Note that Airflow simply looks at the latest |
| ``execution_date`` and adds the ``schedule_interval`` to determine |
| the next ``execution_date``. It is also very important |
| to note that different tasks' dependencies |
| need to line up in time. If task A depends on task B and their |
| start_dates are offset in a way that their execution_dates don't line |
| up, A's dependencies will never be met. If you are looking to delay |
| a task, for example running a daily task at 2AM, look into the |
| ``TimeSensor`` and ``TimeDeltaSensor``. We advise against using |
| dynamic ``start_date`` and recommend using fixed ones. Read the |
| FAQ entry about start_date for more information. |
| :type start_date: datetime.datetime |
| :param end_date: if specified, the scheduler won't go beyond this date |
| :type end_date: datetime.datetime |
| :param depends_on_past: when set to true, task instances will run |
| sequentially, relying on the previous task instance's run having |
| succeeded. The task instance for the start_date is allowed to run. |
| :type depends_on_past: bool |
| :param wait_for_downstream: when set to true, an instance of task |
| X will wait for tasks immediately downstream of the previous instance |
| of task X to finish successfully before it runs. This is useful if the |
| different instances of a task X alter the same asset, and this asset |
| is used by tasks downstream of task X. Note that depends_on_past |
| is forced to True wherever wait_for_downstream is used. Also note that |
| only tasks *immediately* downstream of the previous task instance are waited |
| for; the statuses of any tasks further downstream are ignored. |
| :type wait_for_downstream: bool |
| :param dag: a reference to the dag the task is attached to (if any) |
| :type dag: airflow.models.DAG |
| :param priority_weight: priority weight of this task against other tasks. |
| This allows the executor to trigger higher priority tasks before |
| others when things get backed up. Set priority_weight as a higher |
| number for more important tasks. |
| :type priority_weight: int |
| :param weight_rule: weighting method used for the effective total |
| priority weight of the task. Options are: |
| ``{ downstream | upstream | absolute }``, default is ``downstream``. |
| When set to ``downstream`` the effective weight of the task is the |
| aggregate sum of all downstream descendants. As a result, upstream |
| tasks will have higher weight and will be scheduled more aggressively |
| when using positive weight values. This is useful when you have |
| multiple dag run instances and desire to have all upstream tasks to |
| complete for all runs before each dag can continue processing |
| downstream tasks. When set to ``upstream`` the effective weight is the |
| aggregate sum of all upstream ancestors. This is the opposite where |
| downstream tasks have higher weight and will be scheduled more |
| aggressively when using positive weight values. This is useful when you |
| have multiple dag run instances and prefer to have each dag complete |
| before starting upstream tasks of other dags. When set to |
| ``absolute``, the effective weight is the exact ``priority_weight`` |
| specified without additional weighting. You may want to do this when |
| you know exactly what priority weight each task should have. |
| Additionally, when set to ``absolute``, there is a bonus effect of |
| significantly speeding up the task creation process for very large |
| DAGs. Options can be set as a string or using the constants defined in |
| the static class ``airflow.utils.WeightRule`` |
| :type weight_rule: str |
| :param queue: which queue to target when running this job. Not |
| all executors implement queue management; the CeleryExecutor, |
| for example, does support targeting specific queues. |
| :type queue: str |
| :param pool: the slot pool this task should run in; slot pools are a |
| way to limit concurrency for certain tasks |
| :type pool: str |
| :param pool_slots: the number of pool slots this task should use (>= 1). |
| Values less than 1 are not allowed. |
| :type pool_slots: int |
| :param sla: time by which the job is expected to succeed. Note that |
| this represents the ``timedelta`` after the period is closed. For |
| example if you set an SLA of 1 hour, the scheduler would send an email |
| soon after 1:00AM on the ``2016-01-02`` if the ``2016-01-01`` instance |
| has not succeeded yet. |
| The scheduler pays special attention to jobs with an SLA and |
| sends alert emails for SLA misses. SLA misses are also recorded |
| in the database for future reference. All tasks that share the same |
| SLA time get bundled in a single email, sent soon after that time. |
| SLA notifications are sent once and only once for each task instance. |
| :type sla: datetime.timedelta |
| :param execution_timeout: max time allowed for the execution of |
| this task instance, if it goes beyond it will raise and fail. |
| :type execution_timeout: datetime.timedelta |
| :param on_failure_callback: a function to be called when a task instance |
| of this task fails. A context dictionary is passed as a single |
| parameter to this function. The context contains references to objects |
| related to the task instance and is documented under the macros |
| section of the API. |
| :type on_failure_callback: callable |
| :param on_retry_callback: much like the ``on_failure_callback`` except |
| that it is executed when retries occur. |
| :type on_retry_callback: callable |
| :param on_success_callback: much like the ``on_failure_callback`` except |
| that it is executed when the task succeeds. |
| :type on_success_callback: callable |
| :param trigger_rule: defines the rule by which dependencies are applied |
| for the task to get triggered. Options are: |
| ``{ all_success | all_failed | all_done | one_success | |
| one_failed | none_failed | none_failed_or_skipped | none_skipped | dummy }`` |
| default is ``all_success``. Options can be set as string or |
| using the constants defined in the static class |
| ``airflow.utils.TriggerRule`` |
| :type trigger_rule: str |
| :param resources: A map of resource parameter names (the argument names of the |
| Resources constructor) to their values. |
| :type resources: dict |
| :param run_as_user: unix username to impersonate while running the task |
| :type run_as_user: str |
| :param task_concurrency: When set, a task will be able to limit the concurrent |
| runs across execution_dates |
| :type task_concurrency: int |
| :param executor_config: Additional task-level configuration parameters that are |
| interpreted by a specific executor. Parameters are namespaced by the name of |
| the executor. |
| |
| **Example**: to run this task in a specific docker container through |
| the KubernetesExecutor :: |
| |
| MyOperator(..., |
| executor_config={ |
| "KubernetesExecutor": |
| {"image": "myCustomDockerImage"} |
| } |
| ) |
| |
| :type executor_config: dict |
| :param do_xcom_push: if True, an XCom is pushed containing the Operator's |
| result |
| :type do_xcom_push: bool |
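| |
| **Example** (illustrative): several of the parameters above combined on a |
| concrete operator; the task id, bash command and the ``dag`` object in |
| scope are hypothetical :: |
| |
|     from datetime import timedelta |
|     from airflow.operators.bash_operator import BashOperator |
| |
|     extract = BashOperator( |
|         task_id='extract', |
|         bash_command='echo extracting', |
|         retries=3, |
|         retry_delay=timedelta(minutes=5), |
|         retry_exponential_backoff=True, |
|         sla=timedelta(hours=1), |
|         trigger_rule='all_success', |
|         dag=dag,  # assumes a DAG named ``dag`` is in scope |
|     ) |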
| |
| .. attribute:: template_fields |
| :annotation: :Iterable[str] = [] |
| |
| |
| |
| .. attribute:: template_ext |
| :annotation: :Iterable[str] = [] |
| |
| |
| |
| .. attribute:: ui_color |
| :annotation: = #fff |
| |
| |
| |
| .. attribute:: ui_fgcolor |
| :annotation: = #000 |
| |
| |
| |
| .. attribute:: pool |
| :annotation: :str = |
| |
| |
| |
| .. attribute:: _base_operator_shallow_copy_attrs |
| :annotation: :Iterable[str] = ['user_defined_macros', 'user_defined_filters', 'params', '_log'] |
| |
| |
| |
| .. attribute:: shallow_copy_attrs |
| :annotation: :Iterable[str] = [] |
| |
| |
| |
| .. attribute:: operator_extra_links |
| :annotation: :Iterable['BaseOperatorLink'] = [] |
| |
| |
| |
| .. attribute:: __serialized_fields |
| :annotation: :Optional[FrozenSet[str]] |
| |
| |
| |
| .. attribute:: _comps |
| |
| |
| |
| |
| .. attribute:: dag |
| |
| |
| Returns the Operator's DAG if set, otherwise raises an error |
| |
| |
| .. attribute:: dag_id |
| |
| |
| Returns the dag id if the task has one, otherwise an ad hoc id of the form ``'adhoc_' + owner`` |
| |
| |
| .. attribute:: deps |
| |
| |
| Returns the list of dependencies for the operator. These differ from execution |
| context dependencies in that they are specific to tasks and can be |
| extended/overridden by subclasses. |
| |
| |
| .. attribute:: schedule_interval |
| |
| |
| The schedule interval of the DAG always wins over individual tasks so |
| that tasks within a DAG always line up. The task still needs a |
| schedule_interval as it may not be attached to a DAG. |
| |
| |
| .. attribute:: priority_weight_total |
| |
| |
| Total priority weight for the task. It might include all upstream or downstream |
| tasks, depending on the weight rule. |
| |
| - WeightRule.ABSOLUTE - only own weight |
| - WeightRule.DOWNSTREAM - adds priority weight of all downstream tasks |
| - WeightRule.UPSTREAM - adds priority weight of all upstream tasks |
| |
| |
| .. attribute:: upstream_list |
| |
| |
| @property: list of tasks directly upstream |
| |
| |
| .. attribute:: upstream_task_ids |
| |
| |
| @property: list of ids of tasks directly upstream |
| |
| |
| .. attribute:: downstream_list |
| |
| |
| @property: list of tasks directly downstream |
| |
| |
| .. attribute:: downstream_task_ids |
| |
| |
| @property: list of ids of tasks directly downstream |
| |
| |
| .. attribute:: task_type |
| |
| |
| @property: type of the task |
| |
| |
| |
| .. method:: __eq__(self, other) |
| |
| |
| |
| |
| .. method:: __ne__(self, other) |
| |
| |
| |
| |
| .. method:: __lt__(self, other) |
| |
| |
| |
| |
| .. method:: __hash__(self) |
| |
| |
| |
| |
| .. method:: __rshift__(self, other) |
| |
| Implements Self >> Other == self.set_downstream(other) |
| |
| If "Other" is a DAG, the DAG is assigned to the Operator. |
| |
| |
| |
| |
| .. method:: __lshift__(self, other) |
| |
| Implements Self << Other == self.set_upstream(other) |
| |
| If "Other" is a DAG, the DAG is assigned to the Operator. |
| |
| |
| |
| |
| .. method:: __rrshift__(self, other) |
| |
| Called for [DAG] >> [Operator] because DAGs don't have |
| __rshift__ operators. |
| |
| |
| |
| |
| .. method:: __rlshift__(self, other) |
| |
| Called for [DAG] << [Operator] because DAGs don't have |
| __lshift__ operators. |
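| |
| **Example** (illustrative): composing tasks with the bitshift operators; |
| ``op1``, ``op2`` and ``op3`` are assumed to be already-instantiated |
| operators :: |
| |
|     op1 >> op2            # same as op1.set_downstream(op2) |
|     op1 >> [op2, op3]     # fan out to a list of tasks |
|     op3 << op1            # same as op3.set_upstream(op1) |
|     dag >> op1            # assigns op1 to the DAG via __rrshift__ |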
| |
| |
| |
| |
| .. method:: has_dag(self) |
| |
| Returns True if the Operator has been assigned to a DAG. |
| |
| |
| |
| |
| .. method:: operator_extra_link_dict(self) |
| |
| Returns dictionary of all extra links for the operator |
| |
| |
| |
| |
| .. method:: global_operator_extra_link_dict(self) |
| |
| Returns dictionary of all global extra links |
| |
| |
| |
| |
| .. method:: pre_execute(self, context) |
| |
| This hook is triggered right before self.execute() is called. |
| |
| |
| |
| |
| .. method:: execute(self, context) |
| |
| This is the main method to derive when creating an operator. |
| Context is the same dictionary used when rendering jinja templates. |
| |
| Refer to get_template_context for more context. |
| |
| |
| |
| |
| .. method:: post_execute(self, context, result=None) |
| |
| This hook is triggered right after self.execute() is called. |
| It is passed the execution context and any results returned by the |
| operator. |
| |
| |
| |
| |
| .. method:: on_kill(self) |
| |
| Override this method to cleanup subprocesses when a task instance |
| gets killed. Any use of the threading, subprocess or multiprocessing |
| module within an operator needs to be cleaned up or it will leave |
| ghost processes behind. |
| |
| |
| |
| |
| .. method:: __deepcopy__(self, memo) |
| |
| Hack sorting double chained task lists by task_id to avoid hitting |
| max_depth on deepcopy operations. |
| |
| |
| |
| |
| .. method:: __getstate__(self) |
| |
| |
| |
| |
| .. method:: __setstate__(self, state) |
| |
| |
| |
| |
| .. method:: render_template_fields(self, context, jinja_env=None) |
| |
| Template all attributes listed in template_fields. Note this operation is irreversible. |
| |
| :param context: Dict with values to apply on content |
| :type context: dict |
| :param jinja_env: Jinja environment |
| :type jinja_env: jinja2.Environment |
| |
| |
| |
| |
| .. method:: _do_render_template_fields(self, parent, template_fields, context, jinja_env, seen_oids) |
| |
| |
| |
| |
| .. method:: render_template(self, content, context, jinja_env=None, seen_oids=None) |
| |
| Render a templated string. The content can be a collection holding multiple templated strings and will |
| be templated recursively. |
| |
| :param content: Content to template. Only strings can be templated (may be inside collection). |
| :type content: Any |
| :param context: Dict with values to apply on templated content |
| :type context: dict |
| :param jinja_env: Jinja environment. Can be provided to avoid re-creating Jinja environments during |
| recursion. |
| :type jinja_env: jinja2.Environment |
| :param seen_oids: template fields already rendered (to avoid RecursionError on circular dependencies) |
| :type seen_oids: set |
| :return: Templated content |
| |
| |
| |
| |
| .. method:: _render_nested_template_fields(self, content, context, jinja_env, seen_oids) |
| |
| |
| |
| |
| .. method:: get_template_env(self) |
| |
| Fetch a Jinja template environment from the DAG or instantiate empty environment if no DAG. |
| |
| |
| |
| |
| .. method:: prepare_template(self) |
| |
| Hook that is triggered after the templated fields get replaced |
| by their content. If you need your operator to alter the |
| content of the file before the template is rendered, |
| it should override this method to do so. |
| |
| |
| |
| |
| .. method:: resolve_template_files(self) |
| |
| |
| |
| |
| .. method:: clear(self, start_date=None, end_date=None, upstream=False, downstream=False, session=None) |
| |
| Clears the state of task instances associated with the task, following |
| the parameters specified. |
| |
| |
| |
| |
| .. method:: get_task_instances(self, start_date=None, end_date=None, session=None) |
| |
| Get the set of task instances related to this task for a specific date |
| range. |
| |
| |
| |
| |
| .. method:: get_flat_relative_ids(self, upstream=False, found_descendants=None) |
| |
| Get a flat list of relatives' ids, either upstream or downstream. |
| |
| |
| |
| |
| .. method:: get_flat_relatives(self, upstream=False) |
| |
| Get a flat list of relatives, either upstream or downstream. |
| |
| |
| |
| |
| .. method:: run(self, start_date=None, end_date=None, ignore_first_depends_on_past=False, ignore_ti_state=False, mark_success=False) |
| |
| Run a set of task instances for a date range. |
| |
| |
| |
| |
| .. method:: dry_run(self) |
| |
| Performs a dry run for the operator - just renders the template fields. |
| |
| |
| |
| |
| .. method:: get_direct_relative_ids(self, upstream=False) |
| |
| Get the direct relative ids to the current task, upstream or |
| downstream. |
| |
| |
| |
| |
| .. method:: get_direct_relatives(self, upstream=False) |
| |
| Get the direct relatives to the current task, upstream or |
| downstream. |
| |
| |
| |
| |
| .. method:: __repr__(self) |
| |
| |
| |
| |
| .. method:: add_only_new(self, item_set, item) |
| |
| Adds only new items to the item set |
| |
| |
| |
| |
| .. method:: _set_relatives(self, task_or_task_list, upstream=False) |
| |
| Sets relatives for the task. |
| |
| |
| |
| |
| .. method:: set_downstream(self, task_or_task_list) |
| |
| Set a task or a task list to be directly downstream from the current |
| task. |
| |
| |
| |
| |
| .. method:: set_upstream(self, task_or_task_list) |
| |
| Set a task or a task list to be directly upstream from the current |
| task. |
| |
| |
| |
| |
| .. method:: xcom_push(self, context, key, value, execution_date=None) |
| |
| See TaskInstance.xcom_push() |
| |
| |
| |
| |
| .. method:: xcom_pull(self, context, task_ids=None, dag_id=None, key=XCOM_RETURN_KEY, include_prior_dates=None) |
| |
| See TaskInstance.xcom_pull() |
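| |
| **Example** (illustrative): pushing and pulling XComs from inside an |
| operator's ``execute``; the key and upstream task id are hypothetical :: |
| |
|     def execute(self, context): |
|         # push a value under an explicit key... |
|         self.xcom_push(context, key='row_count', value=42) |
|         # ...and pull what an upstream task pushed |
|         count = self.xcom_pull(context, task_ids='extract', key='row_count') |
|         return count |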
| |
| |
| |
| |
| .. method:: extra_links(self) |
| |
| @property: extra links for the task. |
| |
| |
| |
| |
| .. method:: get_extra_links(self, dttm, link_name) |
| |
| For an operator, gets the URL that the external links specified in |
| `extra_links` should point to. |
| |
| :raise ValueError: The error message of a ValueError will be passed on through to |
| the frontend to show up as a tooltip on the disabled link |
| :param dttm: The datetime parsed execution date for the URL being searched for |
| :param link_name: The name of the link we're looking for the URL for. Should be |
| one of the options specified in `extra_links` |
| :return: A URL |
| |
| |
| |
| |
| .. classmethod:: get_serialized_fields(cls) |
| |
| Stringified DAGs and operators contain exactly these fields. |
| |
| |
| |
| |
| .. py:class:: BaseOperatorLink |
| |
| Abstract base class that defines how we get an operator link. |
| |
| .. attribute:: __metaclass__ |
| |
| |
| |
| |
| .. attribute:: operators |
| :annotation: :ClassVar[List[Type[BaseOperator]]] = [] |
| |
| This property will be used by Airflow Plugins to find the Operators to which you want |
| to assign this Operator Link |
| |
| :return: List of operator classes used by tasks for which you want to create the extra link |
| |
| |
| .. attribute:: name |
| |
| |
| Name of the link. This will be the button name on the task UI. |
| |
| :return: link name |
| |
| |
| |
| .. method:: get_link(self, operator, dttm) |
| |
| Link to external system. |
| |
| :param operator: airflow operator |
| :param dttm: datetime |
| :return: link to external system |
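| |
| **Example** (illustrative): a minimal operator link; the class name, link |
| name and URL are hypothetical :: |
| |
|     from airflow.models import BaseOperatorLink |
|     from airflow.operators.bash_operator import BashOperator |
| |
|     class EchoDocsLink(BaseOperatorLink): |
|         operators = [BashOperator]  # operators this link is attached to |
|         name = 'Docs' |
| |
|         def get_link(self, operator, dttm): |
|             return 'https://example.com/docs/{}'.format(operator.task_id) |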
| |
| |
| |
| |
| .. py:class:: Connection(conn_id=None, conn_type=None, host=None, login=None, password=None, schema=None, port=None, extra=None, uri=None) |
| |
| Bases: :class:`airflow.models.base.Base`, :class:`airflow.LoggingMixin` |
| |
| Placeholder to store connection information for different database |
| instances. The idea here is that scripts use references to database |
| instances (conn_id) instead of hard-coding hostnames, logins and |
| passwords when using operators or hooks. |
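| |
| **Example** (illustrative): building a Connection from a URI; the conn_id |
| and credentials are hypothetical :: |
| |
|     from airflow.models import Connection |
| |
|     conn = Connection( |
|         conn_id='my_postgres', |
|         uri='postgres://user:pass@localhost:5432/mydb', |
|     ) |
|     print(conn.conn_type, conn.host, conn.port, conn.schema) |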
| |
| .. attribute:: __tablename__ |
| :annotation: = connection |
| |
| |
| |
| .. attribute:: id |
| |
| |
| |
| |
| .. attribute:: conn_id |
| |
| |
| |
| |
| .. attribute:: conn_type |
| |
| |
| |
| |
| .. attribute:: host |
| |
| |
| |
| |
| .. attribute:: schema |
| |
| |
| |
| |
| .. attribute:: login |
| |
| |
| |
| |
| .. attribute:: _password |
| |
| |
| |
| |
| .. attribute:: port |
| |
| |
| |
| |
| .. attribute:: is_encrypted |
| |
| |
| |
| |
| .. attribute:: is_extra_encrypted |
| |
| |
| |
| |
| .. attribute:: _extra |
| |
| |
| |
| |
| .. attribute:: _types |
| :annotation: = [['docker', 'Docker Registry'], ['fs', 'File (path)'], ['ftp', 'FTP'], ['google_cloud_platform', 'Google Cloud Platform'], ['hdfs', 'HDFS'], ['http', 'HTTP'], ['pig_cli', 'Pig Client Wrapper'], ['hive_cli', 'Hive Client Wrapper'], ['hive_metastore', 'Hive Metastore Thrift'], ['hiveserver2', 'Hive Server 2 Thrift'], ['jdbc', 'Jdbc Connection'], ['jenkins', 'Jenkins'], ['mysql', 'MySQL'], ['postgres', 'Postgres'], ['oracle', 'Oracle'], ['vertica', 'Vertica'], ['presto', 'Presto'], ['s3', 'S3'], ['samba', 'Samba'], ['sqlite', 'Sqlite'], ['ssh', 'SSH'], ['cloudant', 'IBM Cloudant'], ['mssql', 'Microsoft SQL Server'], ['mesos_framework-id', 'Mesos Framework ID'], ['jira', 'JIRA'], ['redis', 'Redis'], ['wasb', 'Azure Blob Storage'], ['databricks', 'Databricks'], ['aws', 'Amazon Web Services'], ['emr', 'Elastic MapReduce'], ['snowflake', 'Snowflake'], ['segment', 'Segment'], ['azure_data_lake', 'Azure Data Lake'], ['azure_container_instances', 'Azure Container Instances'], ['azure_cosmos', 'Azure CosmosDB'], ['cassandra', 'Cassandra'], ['qubole', 'Qubole'], ['mongo', 'MongoDB'], ['gcpcloudsql', 'Google Cloud SQL'], ['grpc', 'GRPC Connection'], ['yandexcloud', 'Yandex Cloud'], ['spark', 'Spark']] |
| |
| |
| |
| .. attribute:: password |
| |
| |
| |
| |
| .. attribute:: extra |
| |
| |
| |
| |
| .. attribute:: extra_dejson |
| |
| |
| Returns the extra property by deserializing json. |
| |
| |
| |
| .. method:: parse_from_uri(self, uri) |
| |
| |
| |
| |
| .. method:: get_uri(self) |
| |
| |
| |
| |
| .. method:: get_password(self) |
| |
| |
| |
| |
| .. method:: set_password(self, value) |
| |
| |
| |
| |
| .. method:: get_extra(self) |
| |
| |
| |
| |
| .. method:: set_extra(self, value) |
| |
| |
| |
| |
| .. method:: rotate_fernet_key(self) |
| |
| |
| |
| |
| .. method:: get_hook(self) |
| |
| |
| |
| |
| .. method:: __repr__(self) |
| |
| |
| |
| |
| .. method:: log_info(self) |
| |
| |
| |
| |
| .. method:: debug_info(self) |
| |
| |
| |
| |
| .. py:class:: DAG(dag_id, description=None, schedule_interval=timedelta(days=1), start_date=None, end_date=None, full_filepath=None, template_searchpath=None, template_undefined=None, user_defined_macros=None, user_defined_filters=None, default_args=None, concurrency=conf.getint('core', 'dag_concurrency'), max_active_runs=conf.getint('core', 'max_active_runs_per_dag'), dagrun_timeout=None, sla_miss_callback=None, default_view=None, orientation=conf.get('webserver', 'dag_orientation'), catchup=conf.getboolean('scheduler', 'catchup_by_default'), on_success_callback=None, on_failure_callback=None, doc_md=None, params=None, access_control=None, is_paused_upon_creation=None, jinja_environment_kwargs=None, tags=None) |
| |
| Bases: :class:`airflow.dag.base_dag.BaseDag`, :class:`airflow.utils.log.logging_mixin.LoggingMixin` |
| |
| A dag (directed acyclic graph) is a collection of tasks with directional |
| dependencies. A dag also has a schedule, a start date and an end date |
| (optional). For each schedule (say daily or hourly), the DAG needs to run |
| each individual task as its dependencies are met. Certain tasks have |
| the property of depending on their own past, meaning that they can't run |
| until their previous schedule (and upstream tasks) are completed. |
| |
| DAGs essentially act as namespaces for tasks. A task_id can only be |
| added once to a DAG. |
| |
| :param dag_id: The id of the DAG |
| :type dag_id: str |
| :param description: The description for the DAG to e.g. be shown on the webserver |
| :type description: str |
| :param schedule_interval: Defines how often the DAG runs; this |
| timedelta object gets added to your latest task instance's |
| execution_date to figure out the next schedule |
| :type schedule_interval: datetime.timedelta or |
| dateutil.relativedelta.relativedelta or str that acts as a cron |
| expression |
| :param start_date: The timestamp from which the scheduler will |
| attempt to backfill |
| :type start_date: datetime.datetime |
| :param end_date: A date beyond which your DAG won't run, leave to None |
| for open ended scheduling |
| :type end_date: datetime.datetime |
| :param template_searchpath: This list of folders (non-relative) |
| defines where jinja will look for your templates. Order matters. |
| Note that jinja/airflow includes the path of your DAG file by |
| default |
| :type template_searchpath: str or list[str] |
| :param template_undefined: Template undefined type. |
| :type template_undefined: jinja2.Undefined |
| :param user_defined_macros: a dictionary of macros that will be exposed |
| in your jinja templates. For example, passing ``dict(foo='bar')`` |
| to this argument allows you to use ``{{ foo }}`` in all jinja |
| templates related to this DAG. Note that you can pass any |
| type of object here. |
| :type user_defined_macros: dict |
| :param user_defined_filters: a dictionary of filters that will be exposed |
| in your jinja templates. For example, passing |
| ``dict(hello=lambda name: 'Hello %s' % name)`` to this argument allows |
| you to use ``{{ 'world' | hello }}`` in all jinja templates related to |
| this DAG. |
| :type user_defined_filters: dict |
| :param default_args: A dictionary of default parameters to be used |
| as constructor keyword parameters when initialising operators. |
| Note that explicit arguments in an operator's call take precedence |
| over the values defined here, meaning that if your dict contains |
| `'depends_on_past': True` here and the operator's call passes |
| `'depends_on_past': False`, the actual value will be `False`. |
| :type default_args: dict |
| :param params: a dictionary of DAG level parameters that are made |
| accessible in templates, namespaced under `params`. These |
| params can be overridden at the task level. |
| :type params: dict |
| :param concurrency: the number of task instances allowed to run |
| concurrently |
| :type concurrency: int |
| :param max_active_runs: maximum number of active DAG runs, beyond this |
| number of DAG runs in a running state, the scheduler won't create |
| new active DAG runs |
| :type max_active_runs: int |
| :param dagrun_timeout: specify how long a DagRun should be up before |
| timing out / failing, so that new DagRuns can be created. The timeout |
| is only enforced for scheduled DagRuns, and only once the |
| number of active DagRuns equals max_active_runs. |
| :type dagrun_timeout: datetime.timedelta |
| :param sla_miss_callback: specify a function to call when reporting SLA |
| timeouts. |
| :type sla_miss_callback: types.FunctionType |
| :param default_view: Specify DAG default view (tree, graph, duration, |
| gantt, landing_times) |
| :type default_view: str |
| :param orientation: Specify DAG orientation in graph view (LR, TB, RL, BT) |
| :type orientation: str |
| :param catchup: Perform scheduler catchup (or only run latest)? Defaults to True |
| :type catchup: bool |
| :param on_failure_callback: A function to be called when a DagRun of this dag fails. |
| A context dictionary is passed as a single parameter to this function. |
| :type on_failure_callback: callable |
| :param on_success_callback: Much like the ``on_failure_callback`` except |
| that it is executed when the dag succeeds. |
| :type on_success_callback: callable |
| :param access_control: Specify optional DAG-level permissions, e.g., |
| "{'role1': {'can_dag_read'}, 'role2': {'can_dag_read', 'can_dag_edit'}}" |
| :type access_control: dict |
| :param is_paused_upon_creation: Specifies if the dag is paused when created for the first time. |
| If the dag exists already, this flag will be ignored. If this optional parameter |
| is not specified, the global config setting will be used. |
| :type is_paused_upon_creation: bool or None |
| :param jinja_environment_kwargs: additional configuration options to be passed to Jinja |
| ``Environment`` for template rendering |
| |
| **Example**: to prevent Jinja from removing a trailing newline from template strings :: |
| |
| DAG(dag_id='my-dag', |
| jinja_environment_kwargs={ |
| 'keep_trailing_newline': True, |
| # some other jinja2 Environment options here |
| } |
| ) |
| |
| **See**: `Jinja Environment documentation |
| <https://jinja.palletsprojects.com/en/master/api/#jinja2.Environment>`_ |
| |
| :type jinja_environment_kwargs: dict |
| :param tags: List of tags to help filtering DAGS in the UI. |
| :type tags: List[str] |
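| |
| **Example** (illustrative): defining a DAG as a context manager so that |
| tasks created inside the ``with`` block are attached automatically; the |
| dag id, dates and task ids are hypothetical :: |
| |
|     from datetime import datetime |
|     from airflow.models import DAG |
|     from airflow.operators.dummy_operator import DummyOperator |
| |
|     default_args = {'owner': 'airflow', 'retries': 1} |
| |
|     with DAG(dag_id='example_dag', |
|              schedule_interval='@daily', |
|              start_date=datetime(2020, 1, 1), |
|              default_args=default_args, |
|              catchup=False, |
|              tags=['example']) as dag: |
|         start = DummyOperator(task_id='start') |
|         end = DummyOperator(task_id='end') |
|         start >> end |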
| |
| .. attribute:: _comps |
| |
| |
| |
| |
| .. attribute:: __serialized_fields |
| :annotation: :Optional[FrozenSet[str]] |
| |
| |
| |
| .. attribute:: dag_id |
| |
| |
| |
| |
| .. attribute:: full_filepath |
| |
| |
| |
| |
| .. attribute:: concurrency |
| |
| |
| |
| |
| .. attribute:: access_control |
| |
| |
| |
| |
| .. attribute:: description |
| |
| |
| |
| |
| .. attribute:: description_unicode |
| |
| |
| |
| |
| .. attribute:: pickle_id |
| |
| |
| |
| |
| .. attribute:: tasks |
| |
| |
| |
| |
| .. attribute:: task_ids |
| |
| |
| |
| |
| .. attribute:: filepath |
| |
| |
| File location of where the DAG object is instantiated. |
| |
| |
| .. attribute:: folder |
| |
| |
| Folder location of where the DAG object is instantiated. |
| |
| |
| .. attribute:: owner |
| |
| |
| Return list of all owners found in DAG tasks. |
| |
| :return: Comma separated list of owners in DAG tasks |
| :rtype: str |
| |
| |
| .. attribute:: allow_future_exec_dates |
| |
| |
| |
| |
| .. attribute:: concurrency_reached |
| |
| |
| Returns a boolean indicating whether the concurrency limit for this DAG |
| has been reached |
| |
| |
| .. attribute:: is_paused |
| |
| |
| Returns a boolean indicating whether this DAG is paused |
| |
| |
| .. attribute:: normalized_schedule_interval |
| |
| |
| Returns Normalized Schedule Interval. This is used internally by the Scheduler to |
| schedule DAGs. |
| |
| 1. Converts Cron Preset to a Cron Expression (e.g. ``@monthly`` to ``0 0 1 * *``) |
| 2. If Schedule Interval is "@once" returns ``None`` |
| 3. Otherwise returns schedule_interval |
| |
| |
| .. attribute:: latest_execution_date |
| |
| |
| Returns the latest date for which at least one dag run exists |
| |
| |
| .. attribute:: subdags |
| |
| |
| Returns a list of the subdag objects associated with this DAG |
| |
| |
| .. attribute:: roots |
| |
| |
| Return nodes with no parents. These are first to execute and are called roots or root nodes. |
| |
| |
| .. attribute:: leaves |
| |
| |
| Return nodes with no children. These are last to execute and are called leaves or leaf nodes. |
| |
| |
| |
| .. method:: __repr__(self) |
| |
| |
| |
| |
| .. method:: __eq__(self, other) |
| |
| |
| |
| |
| .. method:: __ne__(self, other) |
| |
| |
| |
| |
| .. method:: __lt__(self, other) |
| |
| |
| |
| |
| .. method:: __hash__(self) |
| |
| |
| |
| |
| .. method:: __enter__(self) |
| |
| |
| |
| |
| .. method:: __exit__(self, _type, _value, _tb) |
| |
| |
| |
| |
| .. method:: get_default_view(self) |
| |
| This exists only for backward-compatible jinja2 templates |
| |
| |
| |
| |
| .. method:: date_range(self, start_date, num=None, end_date=timezone.utcnow()) |
| |
| |
| |
| |
| .. method:: is_fixed_time_schedule(self) |
| |
| Figures out if the DAG schedule has a fixed time (e.g. 3 AM). |
| |
| :return: True if the schedule has a fixed time, False if not. |
| |
| |
| |
| |
| .. method:: following_schedule(self, dttm) |
| |
| Calculates the following schedule for this dag in UTC. |
| |
| :param dttm: utc datetime |
| :return: utc datetime |
| |
| |
| |
| |
| .. method:: previous_schedule(self, dttm) |
| |
| Calculates the previous schedule for this dag in UTC |
| |
| :param dttm: utc datetime |
| :return: utc datetime |
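| |
| **Example** (illustrative): for a ``dag`` with ``schedule_interval='@daily'``, |
| the schedules around a given datetime; the expected values in the comments |
| assume that daily schedule :: |
| |
|     from airflow.utils import timezone |
| |
|     dttm = timezone.datetime(2020, 1, 1) |
|     dag.following_schedule(dttm)  # 2020-01-02T00:00:00+00:00 |
|     dag.previous_schedule(dttm)   # 2019-12-31T00:00:00+00:00 |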
| |
| |
| |
| |
| .. method:: get_run_dates(self, start_date, end_date=None) |
| |
| Returns a list of dates in the interval received as parameters, using this |
| dag's schedule interval. Returned dates can be used as execution dates. |
| |
| :param start_date: the start date of the interval |
| :type start_date: datetime |
| :param end_date: the end date of the interval, defaults to timezone.utcnow() |
| :type end_date: datetime |
| :return: a list of dates within the interval following the dag's schedule |
| :rtype: list |
| |
| |
| |
| |
| .. method:: normalize_schedule(self, dttm) |
| |
| Returns dttm + interval unless dttm is the first interval, in which case it returns dttm |
| |
| |
| |
| |
| .. method:: get_last_dagrun(self, session=None, include_externally_triggered=False) |
| |
| |
| |
| |
| .. method:: _get_concurrency_reached(self, session=None) |
| |
| |
| |
| |
| .. method:: _get_is_paused(self, session=None) |
| |
| |
| |
| |
| .. method:: handle_callback(self, dagrun, success=True, reason=None, session=None) |
| |
| Triggers the appropriate callback depending on the value of success, namely the |
| on_failure_callback or on_success_callback. This method gets the context of a |
| single TaskInstance that is part of this DagRun and passes that to the callable along |
| with a 'reason', primarily to differentiate DagRun failures. |
| |
| .. note:: The logs end up in |
| ``$AIRFLOW_HOME/logs/scheduler/latest/PROJECT/DAG_FILE.py.log`` |
| |
| :param dagrun: DagRun object |
| :param success: Flag to specify if failure or success callback should be called |
| :param reason: Completion reason |
| :param session: Database session |
| |
| |
| |
| |
| .. method:: get_active_runs(self) |
| |
| Returns a list of execution dates of the currently running dag runs |
| |
| :return: List of execution dates |
| |
| |
| |
| |
| .. method:: get_num_active_runs(self, external_trigger=None, session=None) |
| |
| Returns the number of active "running" dag runs |
| |
| :param external_trigger: True for externally triggered active dag runs |
| :type external_trigger: bool |
| :param session: |
| :return: number greater than 0 for active dag runs |
| |
| |
| |
| |
| .. method:: get_dagrun(self, execution_date, session=None) |
| |
| Returns the dag run for a given execution date if it exists, otherwise |
| None. |
| |
| :param execution_date: The execution date of the DagRun to find. |
| :param session: |
| :return: The DagRun if found, otherwise None. |
| |
| |
| |
| |
| .. method:: get_dagruns_between(self, start_date, end_date, session=None) |
| |
| Returns the list of dag runs between start_date (inclusive) and end_date (inclusive). |
| |
| :param start_date: The starting execution date of the DagRun to find. |
| :param end_date: The ending execution date of the DagRun to find. |
| :param session: |
| :return: The list of DagRuns found. |
| |
| |
| |
| |
| .. method:: _get_latest_execution_date(self, session=None) |
| |
| |
| |
| |
| .. method:: resolve_template_files(self) |
| |
| |
| |
| |
| .. method:: get_template_env(self) |
| |
| Build a Jinja2 environment. |
| |
| |
| |
| |
| .. method:: set_dependency(self, upstream_task_id, downstream_task_id) |
| |
| Simple utility method to set dependency between two tasks that |
| already have been added to the DAG using add_task() |
| |
| |
| |
| |
| .. method:: get_task_instances(self, start_date=None, end_date=None, state=None, session=None) |
| |
| |
| |
| |
| .. method:: topological_sort(self) |
| |
| Sorts tasks in topological order, such that a task comes after any of its |
| upstream dependencies. |
| |
| Heavily inspired by: |
| http://blog.jupo.org/2012/04/06/topological-sorting-acyclic-directed-graphs/ |
| |
| :return: list of tasks in topological order |
| |
| |
| |
| |
| .. method:: set_dag_runs_state(self, state=State.RUNNING, session=None, start_date=None, end_date=None) |
| |
| |
| |
| |
| .. method:: clear(self, start_date=None, end_date=None, only_failed=False, only_running=False, confirm_prompt=False, include_subdags=True, include_parentdag=True, reset_dag_runs=True, dry_run=False, session=None, get_tis=False, recursion_depth=0, max_recursion_depth=None, dag_bag=None) |
| |
| Clears a set of task instances associated with the current dag for |
| a specified date range. |
| |
| :param start_date: The minimum execution_date to clear |
| :type start_date: datetime.datetime or None |
| :param end_date: The maximum execution_date to clear |
| :type end_date: datetime.datetime or None |
| :param only_failed: Only clear failed tasks |
| :type only_failed: bool |
| :param only_running: Only clear running tasks. |
| :type only_running: bool |
| :param confirm_prompt: Ask for confirmation |
| :type confirm_prompt: bool |
| :param include_subdags: Clear tasks in subdags and clear external tasks |
| indicated by ExternalTaskMarker |
| :type include_subdags: bool |
| :param include_parentdag: Clear tasks in the parent dag of the subdag. |
| :type include_parentdag: bool |
| :param reset_dag_runs: Set state of dag runs to RUNNING |
| :type reset_dag_runs: bool |
| :param dry_run: Find the tasks to clear but don't clear them. |
| :type dry_run: bool |
| :param session: The sqlalchemy session to use |
| :type session: sqlalchemy.orm.session.Session |
| :param get_tis: Return the sqlalchemy query for finding the TaskInstance without clearing the tasks |
| :type get_tis: bool |
| :param recursion_depth: The recursion depth of nested calls to DAG.clear(). |
| :type recursion_depth: int |
| :param max_recursion_depth: The maximum recursion depth allowed. This is determined by the |
| first encountered ExternalTaskMarker. Default is None indicating no ExternalTaskMarker |
| has been encountered. |
| :type max_recursion_depth: int |
| :param dag_bag: The DagBag used to find the dags |
| :type dag_bag: airflow.models.dagbag.DagBag |
| |
| |
| |
| |
| .. classmethod:: clear_dags(cls, dags, start_date=None, end_date=None, only_failed=False, only_running=False, confirm_prompt=False, include_subdags=True, include_parentdag=False, reset_dag_runs=True, dry_run=False) |
| |
| |
| |
| |
| .. method:: __deepcopy__(self, memo) |
| |
| |
| |
| |
| .. method:: sub_dag(self, task_regex, include_downstream=False, include_upstream=True) |
| |
| Returns a subset of the current dag as a deep copy of the current dag |
| based on a regex that should match one or many tasks, and includes |
| upstream and downstream neighbours based on the flag passed. |
| |
| |
| |
| |
| .. method:: has_task(self, task_id) |
| |
| |
| |
| |
| .. method:: get_task(self, task_id) |
| |
| |
| |
| |
| .. method:: pickle_info(self) |
| |
| |
| |
| |
| .. method:: pickle(self, session=None) |
| |
| |
| |
| |
| .. method:: tree_view(self) |
| |
| Print an ASCII tree representation of the DAG. |
| |
| |
| |
| |
| .. method:: add_task(self, task) |
| |
| Add a task to the DAG |
| |
| :param task: the task you want to add |
| :type task: task |
| |
| |
| |
| |
| .. method:: add_tasks(self, tasks) |
| |
| Add a list of tasks to the DAG |
| |
| :param tasks: a list of tasks you want to add |
| :type tasks: list of tasks |
| |
| |
| |
| |
| .. method:: run(self, start_date=None, end_date=None, mark_success=False, local=False, executor=None, donot_pickle=conf.getboolean('core', 'donot_pickle'), ignore_task_deps=False, ignore_first_depends_on_past=False, pool=None, delay_on_limit_secs=1.0, verbose=False, conf=None, rerun_failed_tasks=False, run_backwards=False) |
| |
| Runs the DAG. |
| |
| :param start_date: the start date of the range to run |
| :type start_date: datetime.datetime |
| :param end_date: the end date of the range to run |
| :type end_date: datetime.datetime |
| :param mark_success: True to mark jobs as succeeded without running them |
| :type mark_success: bool |
| :param local: True to run the tasks using the LocalExecutor |
| :type local: bool |
| :param executor: The executor instance to run the tasks |
| :type executor: airflow.executor.BaseExecutor |
| :param donot_pickle: True to avoid pickling the DAG object and sending it to workers |
| :type donot_pickle: bool |
| :param ignore_task_deps: True to skip upstream tasks |
| :type ignore_task_deps: bool |
| :param ignore_first_depends_on_past: True to ignore depends_on_past |
| dependencies for the first set of tasks only |
| :type ignore_first_depends_on_past: bool |
| :param pool: Resource pool to use |
| :type pool: str |
| :param delay_on_limit_secs: Time in seconds to wait before next attempt to run |
| dag run when max_active_runs limit has been reached |
| :type delay_on_limit_secs: float |
| :param verbose: Make logging output more verbose |
| :type verbose: bool |
| :param conf: user defined dictionary passed from CLI |
| :type conf: dict |
| :param rerun_failed_tasks: |
| :type: bool |
| :param run_backwards: |
| :type: bool |
| |
| |
| |
| |
| .. method:: cli(self) |
| |
| Exposes a CLI specific to this DAG |
| |
| |
| |
| |
| .. method:: create_dagrun(self, run_id, state, execution_date=None, start_date=None, external_trigger=False, conf=None, session=None) |
| |
| Creates a dag run from this dag including the tasks associated with this dag. |
| Returns the dag run. |
| |
| :param run_id: defines the run id for this dag run |
| :type run_id: str |
| :param execution_date: the execution date of this dag run |
| :type execution_date: datetime.datetime |
| :param state: the state of the dag run |
| :type state: airflow.utils.state.State |
| :param start_date: the date this dag run should be evaluated |
| :type start_date: datetime |
| :param external_trigger: whether this dag run is externally triggered |
| :type external_trigger: bool |
| :param conf: Dict containing configuration/parameters to pass to the DAG |
| :type conf: dict |
| :param session: database session |
| :type session: sqlalchemy.orm.session.Session |
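| |
| **Example** (illustrative): manually creating a run on an existing ``dag`` |
| object; the run id and date are hypothetical :: |
| |
|     from airflow.utils import timezone |
|     from airflow.utils.state import State |
| |
|     dag.create_dagrun( |
|         run_id='manual__2020-01-01T00:00:00', |
|         state=State.RUNNING, |
|         execution_date=timezone.datetime(2020, 1, 1), |
|         external_trigger=True, |
|     ) |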
| |
| |
| |
| |
| .. method:: sync_to_db(self, owner=None, sync_time=None, session=None) |
| |
| Save attributes about this DAG to the DB. Note that this method |
| can be called for both DAGs and SubDAGs. A SubDag is actually a |
| SubDagOperator. |
| |
| :param dag: the DAG object to save to the DB |
| :type dag: airflow.models.DAG |
| :param sync_time: The time that the DAG should be marked as sync'ed |
| :type sync_time: datetime |
| :return: None |
| |
| |
| |
| |
| .. method:: get_dagtags(self, session=None) |
| |
| Creates a list of DagTags; if one is missing from the DB, it will be inserted. |
| |
| :return: The DagTag list. |
| :rtype: list |
| |
| |
| |
| |
| .. staticmethod:: deactivate_unknown_dags(active_dag_ids, session=None) |
| |
| Given a list of known DAGs, deactivate any other DAGs that are |
| marked as active in the ORM |
| |
| :param active_dag_ids: list of DAG IDs that are active |
| :type active_dag_ids: list[unicode] |
| :return: None |
| |
| |
| |
| |
| .. staticmethod:: deactivate_stale_dags(expiration_date, session=None) |
| |
| Deactivate any DAGs that were last touched by the scheduler before |
| the expiration date. These DAGs were likely deleted. |
| |
| :param expiration_date: set inactive DAGs that were touched before this |
| time |
| :type expiration_date: datetime |
| :return: None |
| |
| |
| |
| |
| .. staticmethod:: get_num_task_instances(dag_id, task_ids=None, states=None, session=None) |
| |
| Returns the number of task instances in the given DAG. |
| |
| :param session: ORM session |
| :param dag_id: ID of the DAG to get the task concurrency of |
| :type dag_id: unicode |
| :param task_ids: A list of valid task IDs for the given DAG |
| :type task_ids: list[unicode] |
| :param states: A list of states to filter by if supplied |
| :type states: list[state] |
| :return: The number of running tasks |
| :rtype: int |
| |
| |
| |
| |
| .. method:: test_cycle(self) |
| |
| Check to see if there are any cycles in the DAG. Returns False if no cycle found, |
| otherwise raises exception. |
| |
| |
| |
| |
| .. method:: _test_cycle_helper(self, visit_map, task_id) |
| |
| Checks if a cycle exists from the input task using DFS traversal |
| |
| |
| |
| |
| .. classmethod:: get_serialized_fields(cls) |
| |
| Stringified DAGs and operators contain exactly these fields. |
| |
| |
| |
| |
| .. py:class:: DagModel |
| |
| Bases: :class:`airflow.models.base.Base` |
| |
| .. attribute:: __tablename__ |
| :annotation: = dag |
| |
| These items are stored in the database for state-related information |
| |
| |
| .. attribute:: dag_id |
| |
| |
| |
| |
| .. attribute:: root_dag_id |
| |
| |
| |
| |
| .. attribute:: is_paused_at_creation |
| |
| |
| |
| |
| .. attribute:: is_paused |
| |
| |
| |
| |
| .. attribute:: is_subdag |
| |
| |
| |
| |
| .. attribute:: is_active |
| |
| |
| |
| |
| .. attribute:: last_scheduler_run |
| |
| |
| |
| |
| .. attribute:: last_pickled |
| |
| |
| |
| |
| .. attribute:: last_expired |
| |
| |
| |
| |
| .. attribute:: scheduler_lock |
| |
| |
| |
| |
| .. attribute:: pickle_id |
| |
| |
| |
| |
| .. attribute:: fileloc |
| |
| |
| |
| |
| .. attribute:: owners |
| |
| |
| |
| |
| .. attribute:: description |
| |
| |
| |
| |
| .. attribute:: default_view |
| |
| |
| |
| |
| .. attribute:: schedule_interval |
| |
| |
| |
| |
| .. attribute:: tags |
| |
| |
| |
| |
| .. attribute:: __table_args__ |
| |
| |
| |
| |
| .. attribute:: timezone |
| |
| |
| |
| |
| .. attribute:: safe_dag_id |
| |
| |
| |
| |
| |
| .. method:: __repr__(self) |
| |
| |
| |
| |
| .. staticmethod:: get_dagmodel(dag_id, session=None) |
| |
| |
| |
| |
| .. classmethod:: get_current(cls, dag_id, session=None) |
| |
| |
| |
| |
| .. method:: get_default_view(self) |
| |
| |
| |
| |
| .. method:: get_last_dagrun(self, session=None, include_externally_triggered=False) |
| |
| |
| |
| |
| .. staticmethod:: get_paused_dag_ids(dag_ids, session) |
| |
| Given a list of dag_ids, get a set of Paused Dag Ids |
| |
| :param dag_ids: List of Dag ids |
| :param session: ORM Session |
| :return: Paused Dag_ids |
| |
| |
| |
| |
| .. method:: get_dag(self, store_serialized_dags=False) |
| |
| Creates a dagbag to load and return a DAG. |
| Calling it from UI should set store_serialized_dags = STORE_SERIALIZED_DAGS. |
| There may be a delay for scheduler to write serialized DAG into database, |
| loads from file in this case. |
| FIXME: remove it when webserver does not access to DAG folder in future. |
| |
| |
| |
| |
| .. method:: create_dagrun(self, run_id, state, execution_date, start_date=None, external_trigger=False, conf=None, session=None) |
| |
| Creates a dag run from this dag including the tasks associated with this dag. |
| Returns the dag run. |
| |
| :param run_id: defines the run id for this dag run |
| :type run_id: str |
| :param execution_date: the execution date of this dag run |
| :type execution_date: datetime.datetime |
| :param state: the state of the dag run |
| :type state: airflow.utils.state.State |
| :param start_date: the date this dag run should be evaluated |
| :type start_date: datetime.datetime |
| :param external_trigger: whether this dag run is externally triggered |
| :type external_trigger: bool |
| :param session: database session |
| :type session: sqlalchemy.orm.session.Session |
| |
| |
| |
| |
| .. method:: set_is_paused(self, is_paused, including_subdags=True, store_serialized_dags=False, session=None) |
| |
| Pause/Un-pause a DAG. |
| |
| :param is_paused: Is the DAG paused |
| :param including_subdags: whether to include the DAG's subdags |
| :param store_serialized_dags: whether to serialize DAGs and store them in the DB |
| :param session: session |
| |
| |
| |
| |
| .. classmethod:: deactivate_deleted_dags(cls, alive_dag_filelocs, session=None) |
| |
| Set ``is_active=False`` on the DAGs for which the DAG files have been removed. |
| Additionally change ``is_active=False`` to ``True`` if the DAG file exists. |
| |
| :param alive_dag_filelocs: file paths of alive DAGs |
| :param session: ORM Session |
| |
| |
| |
| |
| .. py:class:: DagTag |
| |
| Bases: :class:`airflow.models.base.Base` |
| |
| A tag name per dag, to allow quick filtering in the DAG view. |
| |
| .. attribute:: __tablename__ |
| :annotation: = dag_tag |
| |
| |
| |
| .. attribute:: name |
| |
| |
| |
| |
| .. attribute:: dag_id |
| |
| |
| |
| |
| |
| .. method:: __repr__(self) |
| |
| |
| |
| |
| .. py:class:: DagBag(dag_folder=None, executor=None, include_examples=conf.getboolean('core', 'LOAD_EXAMPLES'), safe_mode=conf.getboolean('core', 'DAG_DISCOVERY_SAFE_MODE'), store_serialized_dags=False) |
| |
| Bases: :class:`airflow.dag.base_dag.BaseDagBag`, :class:`airflow.utils.log.logging_mixin.LoggingMixin` |
| |
| A dagbag is a collection of dags, parsed out of a folder tree, that has |
| high-level configuration settings, like what database to use as a backend |
| and what executor to use to fire off tasks. This makes it easier to run |
| distinct environments for, say, production and development, tests, or for |
| different teams or security profiles. What would have been system-level |
| settings are now dagbag-level so that one system can run multiple, |
| independent settings sets. |
| |
| :param dag_folder: the folder to scan to find DAGs |
| :type dag_folder: unicode |
| :param executor: the executor to use when executing task instances |
| in this DagBag |
| :param include_examples: whether to include the examples that ship |
| with airflow or not |
| :type include_examples: bool |
| :param has_logged: an instance boolean that gets flipped from False to True after a |
| file has been skipped. This is to prevent overloading the user with logging |
| messages about skipped files. Therefore a skipped file is logged only |
| once per DagBag. |
| :param store_serialized_dags: Read DAGs from DB if store_serialized_dags is ``True``. |
| If ``False`` DAGs are read from python files. |
| :type store_serialized_dags: bool |
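| |
| **Example** (illustrative): loading a folder of DAG files and fetching one |
| by id; the path and dag id are hypothetical :: |
| |
|     from airflow.models import DagBag |
| |
|     dagbag = DagBag(dag_folder='/path/to/dags', include_examples=False) |
|     print(dagbag.size(), 'dags loaded') |
|     dag = dagbag.get_dag('example_dag') |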
| |
| .. attribute:: CYCLE_NEW |
| :annotation: = 0 |
| |
| |
| |
| .. attribute:: CYCLE_IN_PROGRESS |
| :annotation: = 1 |
| |
| |
| |
| .. attribute:: CYCLE_DONE |
| :annotation: = 2 |
| |
| |
| |
| .. attribute:: DAGBAG_IMPORT_TIMEOUT |
| |
| |
| |
| |
| .. attribute:: UNIT_TEST_MODE |
| |
| |
| |
| |
| .. attribute:: SCHEDULER_ZOMBIE_TASK_THRESHOLD |
| |
| |
| |
| |
| .. attribute:: dag_ids |
| |
| |
| |
| |
| |
| .. method:: size(self) |
| |
| :return: the number of dags contained in this dagbag |
| |
| |
| |
| |
| .. method:: get_dag(self, dag_id) |
| |
| Gets the DAG out of the dictionary, and refreshes it if expired |
| |
| :param dag_id: DAG Id |
| :type dag_id: str |
| |
| |
| |
| |
| .. method:: _add_dag_from_db(self, dag_id) |
| |
| Add DAG to DagBag from DB |
| |
| |
| |
| |
| .. method:: process_file(self, filepath, only_if_updated=True, safe_mode=True) |
| |
| Given a path to a python module or zip file, this method imports |
| the module and looks for dag objects within it. |
| |
| |
| |
| |
| .. method:: kill_zombies(self, zombies, session=None) |
| |
| Fail given zombie tasks, which are tasks that haven't |
| had a heartbeat for too long, in the current DagBag. |
| |
| :param zombies: zombie task instances to kill. |
| :type zombies: airflow.utils.dag_processing.SimpleTaskInstance |
| :param session: DB session. |
| :type session: sqlalchemy.orm.session.Session |
| |
| |
| |
| |
| .. method:: bag_dag(self, dag, parent_dag, root_dag) |
| |
| Adds the DAG into the bag, recursing into sub dags. |
| Throws AirflowDagCycleException if a cycle is detected in this dag or its subdags. |
| |
| |
| |
| |
| .. method:: collect_dags(self, dag_folder=None, only_if_updated=True, include_examples=conf.getboolean('core', 'LOAD_EXAMPLES'), safe_mode=conf.getboolean('core', 'DAG_DISCOVERY_SAFE_MODE')) |
| |
| Given a file path or a folder, this method looks for python modules, |
| imports them and adds them to the dagbag collection. |
| |
| Note that if a ``.airflowignore`` file is found while processing |
| the directory, it will behave much like a ``.gitignore``, |
| ignoring files that match any of the regex patterns specified |
| in the file. |
| |
| **Note**: The patterns in .airflowignore are treated as |
| un-anchored regexes, not shell-like glob patterns. |
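| |
| **Example** (illustrative): an ``.airflowignore`` containing two un-anchored |
| regex patterns; any file whose path matches either pattern is skipped :: |
| |
|     tenant_1/.* |
|     .*_backup\.py |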
| |
| |
| |
| |
| .. method:: collect_dags_from_db(self) |
| |
| Collects DAGs from database. |
| |
| |
| |
| |
| .. method:: dagbag_report(self) |
| |
| Prints a report around DagBag loading stats |
| |
| |
| |
| |
| .. py:class:: DagPickle(dag) |
| |
| Bases: :class:`airflow.models.base.Base` |
| |
| Dags can originate from different places (user repos, master repo, ...) |
| and also get executed in different places (different executors). This |
| object represents a version of a DAG and becomes a source of truth for |
| a BackfillJob execution. A pickle is a native python serialized object, |
| and in this case gets stored in the database for the duration of the job. |
| |
| The executors pick up the DagPickle id and read the dag definition from |
| the database. |
| |
| .. attribute:: id |
| |
| |
| |
| |
| .. attribute:: pickle |
| |
| |
| |
| |
| .. attribute:: created_dttm |
| |
| |
| |
| |
| .. attribute:: pickle_hash |
| |
| |
| |
| |
| .. attribute:: __tablename__ |
| :annotation: = dag_pickle |
| |
| |
| |
| |
| .. py:class:: DagRun |
| |
| Bases: :class:`airflow.models.base.Base`, :class:`airflow.utils.log.logging_mixin.LoggingMixin` |
| |
| DagRun describes an instance of a Dag. It can be created |
| by the scheduler (for regular runs) or by an external trigger. |
| |
| .. attribute:: __tablename__ |
| :annotation: = dag_run |
| |
| |
| |
| .. attribute:: ID_PREFIX |
| :annotation: = scheduled__ |
| |
| |
| |
| .. attribute:: ID_FORMAT_PREFIX |
| |
| |
| |
| |
| .. attribute:: id |
| |
| |
| |
| |
| .. attribute:: dag_id |
| |
| |
| |
| |
| .. attribute:: execution_date |
| |
| |
| |
| |
| .. attribute:: start_date |
| |
| |
| |
| |
| .. attribute:: end_date |
| |
| |
| |
| |
| .. attribute:: _state |
| |
| |
| |
| |
| .. attribute:: run_id |
| |
| |
| |
| |
| .. attribute:: external_trigger |
| |
| |
| |
| |
| .. attribute:: conf |
| |
| |
| |
| |
| .. attribute:: dag |
| |
| |
| |
| |
| .. attribute:: __table_args__ |
| |
| |
| |
| |
| .. attribute:: state |
| |
| |
| |
| |
| .. attribute:: is_backfill |
| |
| |
| |
| |
| |
| .. method:: __repr__(self) |
| |
| |
| |
| |
| .. method:: get_state(self) |
| |
| |
| |
| |
| .. method:: set_state(self, state) |
| |
| |
| |
| |
| .. classmethod:: id_for_date(cls, date, prefix=ID_FORMAT_PREFIX) |
| |
| |
| |
| |
| .. method:: refresh_from_db(self, session=None) |
| |
| Reloads the current dagrun from the database |
| |
| :param session: database session |
| |
| |
| |
| |
| .. staticmethod:: find(dag_id=None, run_id=None, execution_date=None, state=None, external_trigger=None, no_backfills=False, session=None) |
| |
| Returns a set of dag runs for the given search criteria. |
| |
| :param dag_id: the dag_id to find dag runs for
| :type dag_id: str or list[str]
| :param run_id: defines the run id for this dag run
| :type run_id: str |
| :param execution_date: the execution date |
| :type execution_date: datetime.datetime |
| :param state: the state of the dag run |
| :type state: str |
| :param external_trigger: whether this dag run is externally triggered |
| :type external_trigger: bool |
| :param no_backfills: return no backfills (True), return all (False). |
| Defaults to False |
| :type no_backfills: bool |
| :param session: database session |
| :type session: sqlalchemy.orm.session.Session |
| |
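| A minimal usage sketch (the dag_id is hypothetical):
|
| .. code-block:: python
|
|     from airflow.models import DagRun
|     from airflow.utils.state import State
|
|     # Find all failed, non-backfill runs of a hypothetical DAG.
|     failed_runs = DagRun.find(dag_id='example_dag', state=State.FAILED,
|                               no_backfills=True)
|     for run in failed_runs:
|         print(run.run_id, run.execution_date)
|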
| |
| |
| |
| .. method:: get_task_instances(self, state=None, session=None) |
| |
| Returns the task instances for this dag run |
| |
| |
| |
| |
| .. method:: get_task_instance(self, task_id, session=None) |
| |
| Returns the task instance specified by task_id for this dag run |
| |
| :param task_id: the task id |
| |
| |
| |
| |
| .. method:: get_dag(self) |
| |
| Returns the Dag associated with this DagRun. |
| |
| :return: DAG |
| |
| |
| |
| |
| .. method:: get_previous_dagrun(self, state=None, session=None) |
| |
| The previous DagRun, if there is one |
| |
| |
| |
| |
| .. method:: get_previous_scheduled_dagrun(self, session=None) |
| |
| The previous SCHEDULED DagRun, if there is one
| |
| |
| |
| |
| .. method:: update_state(self, session=None) |
| |
| Determines the overall state of the DagRun based on the state |
| of its TaskInstances. |
| |
| :return: ready_tis: the tis that can be scheduled in the current loop
| :rtype: list[airflow.models.TaskInstance]
| |
| |
| |
| |
| .. method:: _get_ready_tis(self, scheduleable_tasks, finished_tasks, session) |
| |
| |
| |
| |
| .. method:: _are_premature_tis(self, unfinished_tasks, finished_tasks, session) |
| |
| |
| |
| |
| .. method:: _emit_true_scheduling_delay_stats_for_finished_state(self, finished_tis) |
| |
| This is a helper method to emit the true scheduling delay stat, which is
| defined as the time when the first task in the DAG starts running minus the
| expected DagRun start datetime. It is used in the update_state method when
| the state of the DagRun is updated to a completed status (either success or
| failure). The method finds the first started task within the DAG, calculates
| the expected DagRun start time (based on dag.execution_date and
| dag.schedule_interval), and subtracts the two values to get the delay. The
| emitted data may contain outliers (e.g. when the first task was cleared, so
| the second task's start_date is used), but these can be filtered out on the
| stats side through the dashboard tooling built on top.
| Note, the stat will only be emitted if the DagRun is a scheduler triggered one
| (i.e. external_trigger is False).
| |
| |
| |
| |
| .. method:: _emit_duration_stats_for_finished_state(self) |
| |
| |
| |
| |
| .. method:: verify_integrity(self, session=None) |
| |
| Verifies the DagRun by checking for removed tasks or tasks that are not in the
| database yet. It will set the task state to removed or add the task if required.
| |
| |
| |
| |
| .. staticmethod:: get_run(session, dag_id, execution_date) |
| |
| :param dag_id: DAG ID |
| :type dag_id: str
| :param execution_date: execution date |
| :type execution_date: datetime |
| :return: DagRun corresponding to the given dag_id and execution date |
| if one exists. None otherwise. |
| :rtype: airflow.models.DagRun |
| |
| |
| |
| |
| .. classmethod:: get_latest_runs(cls, session) |
| |
| Returns the latest DagRun for each DAG. |
| |
| |
| |
| |
| .. py:class:: ImportError |
| |
| Bases: :class:`airflow.models.base.Base` |
| |
| .. attribute:: __tablename__ |
| :annotation: = import_error |
| |
| |
| |
| .. attribute:: id |
| |
| |
| |
| |
| .. attribute:: timestamp |
| |
| |
| |
| |
| .. attribute:: filename |
| |
| |
| |
| |
| .. attribute:: stacktrace |
| |
| |
| |
| |
| |
| .. py:class:: Log(event, task_instance, owner=None, extra=None, **kwargs) |
| |
| Bases: :class:`airflow.models.base.Base` |
| |
| Used to actively log events to the database |
| |
| .. attribute:: __tablename__ |
| :annotation: = log |
| |
| |
| |
| .. attribute:: id |
| |
| |
| |
| |
| .. attribute:: dttm |
| |
| |
| |
| |
| .. attribute:: dag_id |
| |
| |
| |
| |
| .. attribute:: task_id |
| |
| |
| |
| |
| .. attribute:: event |
| |
| |
| |
| |
| .. attribute:: execution_date |
| |
| |
| |
| |
| .. attribute:: owner |
| |
| |
| |
| |
| .. attribute:: extra |
| |
| |
| |
| |
| .. attribute:: __table_args__ |
| |
| |
| |
| |
| |
| .. py:class:: Pool |
| |
| Bases: :class:`airflow.models.base.Base` |
| |
| .. attribute:: __tablename__ |
| :annotation: = slot_pool |
| |
| |
| |
| .. attribute:: id |
| |
| |
| |
| |
| .. attribute:: pool |
| |
| |
| |
| |
| .. attribute:: slots |
| |
| |
| |
| |
| .. attribute:: description |
| |
| |
| |
| |
| .. attribute:: DEFAULT_POOL_NAME |
| :annotation: = default_pool |
| |
| |
| |
| |
| .. method:: __repr__(self) |
| |
| |
| |
| |
| .. staticmethod:: get_pool(pool_name, session=None) |
| |
| |
| |
| |
| .. staticmethod:: get_default_pool(session=None) |
| |
| |
| |
| |
| .. method:: to_json(self) |
| |
| |
| |
| |
| .. method:: occupied_slots(self, session) |
| |
| Returns the number of slots used by running/queued tasks at the moment. |
| |
| |
| |
| |
| .. method:: used_slots(self, session) |
| |
| Returns the number of slots used by running tasks at the moment. |
| |
| |
| |
| |
| .. method:: queued_slots(self, session) |
| |
| Returns the number of slots used by queued tasks at the moment. |
| |
| |
| |
| |
| .. method:: open_slots(self, session) |
| |
| Returns the number of slots open at the moment |
| |
| |
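| A sketch of inspecting pool capacity, assuming a session obtained via
| ``create_session``:
|
| .. code-block:: python
|
|     from airflow.models import Pool
|     from airflow.utils.db import create_session
|
|     with create_session() as session:
|         pool = Pool.get_pool(Pool.DEFAULT_POOL_NAME, session=session)
|         # total slots vs. slots taken by running/queued tasks vs. open slots
|         print(pool.slots, pool.occupied_slots(session), pool.open_slots(session))
|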
| |
| |
| .. py:class:: RenderedTaskInstanceFields(ti, render_templates=True) |
| |
| Bases: :class:`airflow.models.base.Base` |
| |
| Save Rendered Template Fields |
| |
| .. attribute:: __tablename__ |
| :annotation: = rendered_task_instance_fields |
| |
| |
| |
| .. attribute:: dag_id |
| |
| |
| |
| |
| .. attribute:: task_id |
| |
| |
| |
| |
| .. attribute:: execution_date |
| |
| |
| |
| |
| .. attribute:: rendered_fields |
| |
| |
| |
| |
| |
| .. method:: __repr__(self) |
| |
| |
| |
| |
| .. classmethod:: get_templated_fields(cls, ti, session=None) |
| |
| Get templated field for a TaskInstance from the RenderedTaskInstanceFields |
| table. |
| |
| :param ti: Task Instance |
| :param session: SqlAlchemy Session |
| :return: Rendered Templated TI field |
| |
| |
| |
| |
| .. method:: write(self, session=None) |
| |
| Write instance to database |
| |
| :param session: SqlAlchemy Session |
| |
| |
| |
| |
| .. classmethod:: delete_old_records(cls, task_id, dag_id, num_to_keep=conf.getint('core', 'max_num_rendered_ti_fields_per_task', fallback=0), session=None) |
| |
| Keeps only the last ``num_to_keep`` records for a task, deleting the rest
| |
| :param task_id: Task ID |
| :param dag_id: Dag ID |
| :param num_to_keep: Number of Records to keep |
| :param session: SqlAlchemy Session |
| |
| |
| |
| |
| .. py:class:: SkipMixin |
| |
| Bases: :class:`airflow.utils.log.logging_mixin.LoggingMixin` |
| |
| |
| .. method:: _set_state_to_skipped(self, dag_run, execution_date, tasks, session) |
| |
| Used internally to set the state of task instances from the same dag run to skipped.
| |
| |
| |
| |
| .. method:: skip(self, dag_run, execution_date, tasks, session=None) |
| |
| Sets task instances from the same dag run to the skipped state.
| |
| If this instance has a `task_id` attribute, store the list of skipped task IDs to XCom |
| so that NotPreviouslySkippedDep knows these tasks should be skipped when they |
| are cleared. |
| |
| :param dag_run: the DagRun for which to set the tasks to skipped |
| :param execution_date: execution_date |
| :param tasks: tasks to skip (not task_ids) |
| :param session: db session to use |
| |
| |
| |
| |
| .. method:: skip_all_except(self, ti, branch_task_ids) |
| |
| This method implements the logic for a branching operator; given a single |
| task ID or list of task IDs to follow, this skips all other tasks |
| immediately downstream of this operator. |
| |
| branch_task_ids is stored to XCom so that NotPreviouslySkippedDep knows skipped tasks or |
| newly added tasks should be skipped when they are cleared. |
| |
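| A hedged sketch of how a custom branching operator might use this method
| (the operator and the followed task id are hypothetical):
|
| .. code-block:: python
|
|     from airflow.models import BaseOperator, SkipMixin
|
|     class MyBranchOperator(BaseOperator, SkipMixin):
|         """Hypothetical operator: follow one branch, skip the rest."""
|
|         def execute(self, context):
|             # Everything immediately downstream of this operator that is
|             # not in branch_task_ids gets set to skipped.
|             branch_task_ids = 'process_weekday'  # or a list of task ids
|             self.skip_all_except(context['ti'], branch_task_ids)
|             return branch_task_ids
|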
| |
| |
| |
| .. py:class:: SlaMiss |
| |
| Bases: :class:`airflow.models.base.Base` |
| |
| Model that stores a history of the SLAs that have been missed.
| It is used to keep track of SLA failures over time and to avoid
| double-triggering alert emails.
| |
| .. attribute:: __tablename__ |
| :annotation: = sla_miss |
| |
| |
| |
| .. attribute:: task_id |
| |
| |
| |
| |
| .. attribute:: dag_id |
| |
| |
| |
| |
| .. attribute:: execution_date |
| |
| |
| |
| |
| .. attribute:: email_sent |
| |
| |
| |
| |
| .. attribute:: timestamp |
| |
| |
| |
| |
| .. attribute:: description |
| |
| |
| |
| |
| .. attribute:: notification_sent |
| |
| |
| |
| |
| .. attribute:: __table_args__ |
| |
| |
| |
| |
| |
| .. method:: __repr__(self) |
| |
| |
| |
| |
| .. py:class:: TaskFail(task, execution_date, start_date, end_date) |
| |
| Bases: :class:`airflow.models.base.Base` |
| |
| TaskFail tracks the failed run durations of each task instance. |
| |
| .. attribute:: __tablename__ |
| :annotation: = task_fail |
| |
| |
| |
| .. attribute:: id |
| |
| |
| |
| |
| .. attribute:: task_id |
| |
| |
| |
| |
| .. attribute:: dag_id |
| |
| |
| |
| |
| .. attribute:: execution_date |
| |
| |
| |
| |
| .. attribute:: start_date |
| |
| |
| |
| |
| .. attribute:: end_date |
| |
| |
| |
| |
| .. attribute:: duration |
| |
| |
| |
| |
| .. attribute:: __table_args__ |
| |
| |
| |
| |
| |
| .. py:class:: TaskInstance(task, execution_date, state=None) |
| |
| Bases: :class:`airflow.models.base.Base`, :class:`airflow.utils.log.logging_mixin.LoggingMixin` |
| |
| Task instances store the state of a task instance. This table is the |
| authority and single source of truth around what tasks have run and the |
| state they are in. |
| |
| The SqlAlchemy model doesn't have a SqlAlchemy foreign key to the task or |
| dag model deliberately to have more control over transactions. |
| |
| Database transactions on this table should guard against double triggers and
| any confusion around what task instances are or aren't ready to run,
| even while multiple schedulers may be firing task instances.
| |
| .. attribute:: __tablename__ |
| :annotation: = task_instance |
| |
| |
| |
| .. attribute:: task_id |
| |
| |
| |
| |
| .. attribute:: dag_id |
| |
| |
| |
| |
| .. attribute:: execution_date |
| |
| |
| |
| |
| .. attribute:: start_date |
| |
| |
| |
| |
| .. attribute:: end_date |
| |
| |
| |
| |
| .. attribute:: duration |
| |
| |
| |
| |
| .. attribute:: state |
| |
| |
| |
| |
| .. attribute:: _try_number |
| |
| |
| |
| |
| .. attribute:: max_tries |
| |
| |
| |
| |
| .. attribute:: hostname |
| |
| |
| |
| |
| .. attribute:: unixname |
| |
| |
| |
| |
| .. attribute:: job_id |
| |
| |
| |
| |
| .. attribute:: pool |
| |
| |
| |
| |
| .. attribute:: pool_slots |
| |
| |
| |
| |
| .. attribute:: queue |
| |
| |
| |
| |
| .. attribute:: priority_weight |
| |
| |
| |
| |
| .. attribute:: operator |
| |
| |
| |
| |
| .. attribute:: queued_dttm |
| |
| |
| |
| |
| .. attribute:: pid |
| |
| |
| |
| |
| .. attribute:: executor_config |
| |
| |
| |
| |
| .. attribute:: __table_args__ |
| |
| |
| |
| |
| .. attribute:: try_number |
| |
| |
| Return the try number that this task instance will have when it is actually
| run.
| |
| If the TI is currently running, this will match the column in the |
| database, in all other cases this will be incremented. |
| |
| |
| .. attribute:: prev_attempted_tries |
| |
| |
| Based on this instance's try_number, this will calculate |
| the number of previously attempted tries, defaulting to 0. |
| |
| |
| .. attribute:: next_try_number |
| |
| |
| |
| |
| .. attribute:: log_filepath |
| |
| |
| |
| |
| .. attribute:: log_url |
| |
| |
| |
| |
| .. attribute:: mark_success_url |
| |
| |
| |
| |
| .. attribute:: key |
| |
| |
| Returns a tuple that identifies the task instance uniquely |
| |
| |
| .. attribute:: is_premature |
| |
| |
| Returns whether a task is in UP_FOR_RETRY state and its retry interval |
| has elapsed. |
| |
| |
| .. attribute:: previous_ti |
| |
| |
| The task instance for the task that ran before this task instance. |
| |
| |
| .. attribute:: previous_ti_success |
| |
| |
| The ti from the prior successful dag run for this task, by execution date.
| |
| |
| .. attribute:: previous_execution_date_success |
| |
| |
| The execution date from property previous_ti_success. |
| |
| |
| .. attribute:: previous_start_date_success |
| |
| |
| The start date from property previous_ti_success. |
| |
| |
| |
| .. method:: init_on_load(self) |
| |
| Initialize the attributes that aren't stored in the DB. |
| |
| |
| |
| |
| .. method:: command(self, mark_success=False, ignore_all_deps=False, ignore_depends_on_past=False, ignore_task_deps=False, ignore_ti_state=False, local=False, pickle_id=None, raw=False, job_id=None, pool=None, cfg_path=None) |
| |
| Returns a command that can be executed anywhere airflow is
| installed. This command is part of the message sent to executors by |
| the orchestrator. |
| |
| |
| |
| |
| .. method:: command_as_list(self, mark_success=False, ignore_all_deps=False, ignore_task_deps=False, ignore_depends_on_past=False, ignore_ti_state=False, local=False, pickle_id=None, raw=False, job_id=None, pool=None, cfg_path=None) |
| |
| Returns a command that can be executed anywhere airflow is
| installed. This command is part of the message sent to executors by |
| the orchestrator. |
| |
| |
| |
| |
| .. staticmethod:: generate_command(dag_id, task_id, execution_date, mark_success=False, ignore_all_deps=False, ignore_depends_on_past=False, ignore_task_deps=False, ignore_ti_state=False, local=False, pickle_id=None, file_path=None, raw=False, job_id=None, pool=None, cfg_path=None) |
| |
| Generates the shell command required to execute this task instance. |
| |
| :param dag_id: DAG ID |
| :type dag_id: str
| :param task_id: Task ID |
| :type task_id: str
| :param execution_date: Execution date for the task |
| :type execution_date: datetime.datetime |
| :param mark_success: Whether to mark the task as successful |
| :type mark_success: bool |
| :param ignore_all_deps: Ignore all ignorable dependencies. |
| Overrides the other ignore_* parameters. |
| :type ignore_all_deps: bool |
| :param ignore_depends_on_past: Ignore depends_on_past parameter of DAGs |
| (e.g. for Backfills) |
| :type ignore_depends_on_past: bool |
| :param ignore_task_deps: Ignore task-specific dependencies such as depends_on_past |
| and trigger rule |
| :type ignore_task_deps: bool |
| :param ignore_ti_state: Ignore the task instance's previous failure/success |
| :type ignore_ti_state: bool |
| :param local: Whether to run the task locally |
| :type local: bool |
| :param pickle_id: If the DAG was serialized to the DB, the ID |
| associated with the pickled DAG |
| :type pickle_id: str
| :param file_path: path to the file containing the DAG definition |
| :param raw: raw mode (needs more details) |
| :param job_id: job ID (needs more details) |
| :param pool: the Airflow pool that the task should run in |
| :type pool: str
| :param cfg_path: the path to the configuration file
| :type cfg_path: str
| :return: shell command that can be used to run the task instance |
| |
| |
| |
| |
| .. method:: current_state(self, session=None) |
| |
| Get the very latest state from the database. If a session is passed,
| it is used and the state lookup becomes part of that session;
| otherwise a new session is used.
| |
| |
| |
| |
| .. method:: error(self, session=None) |
| |
| Forces the task instance's state to FAILED in the database. |
| |
| |
| |
| |
| .. method:: refresh_from_db(self, session=None, lock_for_update=False) |
| |
| Refreshes the task instance from the database based on the primary key |
| |
| :param lock_for_update: if True, indicates that the database should |
| lock the TaskInstance (issuing a FOR UPDATE clause) until the |
| session is committed. |
| |
| |
| |
| |
| .. method:: refresh_from_task(self, task, pool_override=None) |
| |
| Copy common attributes from the given task. |
| |
| :param task: The task object to copy from |
| :type task: airflow.models.BaseOperator |
| :param pool_override: Use the pool_override instead of task's pool |
| :type pool_override: str |
| |
| |
| |
| |
| .. method:: clear_xcom_data(self, session=None) |
| |
| Clears all XCom data from the database for the task instance |
| |
| |
| |
| |
| .. method:: set_state(self, state, session=None, commit=True) |
| |
| |
| |
| |
| .. method:: are_dependents_done(self, session=None) |
| |
| Checks whether the dependents of this task instance have all succeeded. |
| This is meant to be used by wait_for_downstream. |
| |
| This is useful when you do not want to start processing the next |
| schedule of a task until the dependents are done. For instance, |
| if the task DROPs and recreates a table. |
| |
| |
| |
| |
| .. method:: _get_previous_ti(self, state=None, session=None) |
| |
| |
| |
| |
| .. method:: are_dependencies_met(self, dep_context=None, session=None, verbose=False) |
| |
| Returns whether or not all the conditions are met for this task instance to be run |
| given the context for the dependencies (e.g. a task instance being force run from |
| the UI will ignore some dependencies). |
| |
| :param dep_context: The execution context that determines the dependencies that |
| should be evaluated. |
| :type dep_context: DepContext |
| :param session: database session |
| :type session: sqlalchemy.orm.session.Session |
| :param verbose: whether to log details of failed dependencies at
| info or debug log level
| :type verbose: bool |
| |
| |
| |
| |
| .. method:: get_failed_dep_statuses(self, dep_context=None, session=None) |
| |
| |
| |
| |
| .. method:: __repr__(self) |
| |
| |
| |
| |
| .. method:: next_retry_datetime(self) |
| |
| Get the datetime of the next retry if the task instance fails. For exponential
| backoff, retry_delay is used as the base and is converted to seconds.
| |
| |
| |
| |
| .. method:: ready_for_retry(self) |
| |
| Checks whether the task instance is in the right state and timeframe
| to be retried.
| |
| |
| |
| |
| .. method:: pool_full(self, session) |
| |
| Returns a boolean indicating whether the slot pool has room for this
| task to run.
| |
| |
| |
| |
| .. method:: get_dagrun(self, session) |
| |
| Returns the DagRun for this TaskInstance |
| |
| :param session: |
| :return: DagRun |
| |
| |
| |
| |
| .. method:: _check_and_change_state_before_execution(self, verbose=True, ignore_all_deps=False, ignore_depends_on_past=False, ignore_task_deps=False, ignore_ti_state=False, mark_success=False, test_mode=False, job_id=None, pool=None, session=None) |
| |
| Checks dependencies and then sets state to RUNNING if they are met. Returns |
| True if and only if state is set to RUNNING, which implies that task should be |
| executed, in preparation for _run_raw_task |
| |
| :param verbose: whether to turn on more verbose logging |
| :type verbose: bool |
| :param ignore_all_deps: Ignore all of the non-critical dependencies, just runs |
| :type ignore_all_deps: bool |
| :param ignore_depends_on_past: Ignore depends_on_past DAG attribute |
| :type ignore_depends_on_past: bool |
| :param ignore_task_deps: Don't check the dependencies of this TI's task |
| :type ignore_task_deps: bool |
| :param ignore_ti_state: Disregards previous task instance state |
| :type ignore_ti_state: bool |
| :param mark_success: Don't run the task, mark its state as success |
| :type mark_success: bool |
| :param test_mode: Doesn't record success or failure in the DB |
| :type test_mode: bool |
| :param pool: specifies the pool to use to run the task instance |
| :type pool: str |
| :return: whether the state was changed to running or not |
| :rtype: bool |
| |
| |
| |
| |
| .. method:: _run_raw_task(self, mark_success=False, test_mode=False, job_id=None, pool=None, session=None) |
| |
| Immediately runs the task (without checking or changing db state |
| before execution) and then sets the appropriate final state after |
| completion and runs any post-execute callbacks. Meant to be called |
| only after another function changes the state to running. |
| |
| :param mark_success: Don't run the task, mark its state as success |
| :type mark_success: bool |
| :param test_mode: Doesn't record success or failure in the DB |
| :type test_mode: bool |
| :param pool: specifies the pool to use to run the task instance |
| :type pool: str |
| |
| |
| |
| |
| .. method:: run(self, verbose=True, ignore_all_deps=False, ignore_depends_on_past=False, ignore_task_deps=False, ignore_ti_state=False, mark_success=False, test_mode=False, job_id=None, pool=None, session=None) |
| |
| |
| |
| |
| .. method:: dry_run(self) |
| |
| |
| |
| |
| .. method:: _handle_reschedule(self, actual_start_date, reschedule_exception, test_mode=False, context=None, session=None) |
| |
| |
| |
| |
| .. method:: handle_failure(self, error, test_mode=None, context=None, force_fail=False, session=None) |
| |
| |
| |
| |
| .. method:: is_eligible_to_retry(self) |
| |
| Returns whether the task instance is eligible for retry.
| |
| |
| |
| |
| .. method:: _safe_date(self, date_attr, fmt) |
| |
| |
| |
| |
| .. method:: get_template_context(self, session=None) |
| |
| |
| |
| |
| .. method:: get_rendered_template_fields(self) |
| |
| Fetch rendered template fields from the DB if serialization is enabled;
| otherwise just render the templates.
| |
| |
| |
| |
| .. method:: overwrite_params_with_dag_run_conf(self, params, dag_run) |
| |
| |
| |
| |
| .. method:: render_templates(self, context=None) |
| |
| Render templates in the operator fields. |
| |
| |
| |
| |
| .. method:: email_alert(self, exception) |
| |
| |
| |
| |
| .. method:: set_duration(self) |
| |
| |
| |
| |
| .. method:: xcom_push(self, key, value, execution_date=None) |
| |
| Make an XCom available for tasks to pull. |
| |
| :param key: A key for the XCom |
| :type key: str |
| :param value: A value for the XCom. The value is pickled and stored |
| in the database. |
| :type value: any pickleable object |
| :param execution_date: if provided, the XCom will not be visible until |
| this date. This can be used, for example, to send a message to a |
| task on a future date without it being immediately visible. |
| :type execution_date: datetime |
| |
| |
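| For illustration, a push from inside an operator's ``execute`` (the key and
| value are hypothetical):
|
| .. code-block:: python
|
|     def execute(self, context):
|         # Make 'row_count' available to downstream tasks.
|         context['ti'].xcom_push(key='row_count', value=42)
|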
| |
| |
| .. method:: xcom_pull(self, task_ids=None, dag_id=None, key=XCOM_RETURN_KEY, include_prior_dates=False) |
| |
| Pull XComs that optionally meet certain criteria. |
| |
| The default value for `key` limits the search to XComs |
| that were returned by other tasks (as opposed to those that were pushed |
| manually). To remove this filter, pass key=None (or any desired value). |
| |
| If a single task_id string is provided, the result is the value of the |
| most recent matching XCom from that task_id. If multiple task_ids are |
| provided, a tuple of matching values is returned. None is returned |
| whenever no matches are found. |
| |
| :param key: A key for the XCom. If provided, only XComs with matching |
| keys will be returned. The default key is 'return_value', also |
| available as a constant XCOM_RETURN_KEY. This key is automatically |
| given to XComs returned by tasks (as opposed to being pushed |
| manually). To remove the filter, pass key=None. |
| :type key: str |
| :param task_ids: Only XComs from tasks with matching ids will be |
| pulled. Can pass None to remove the filter. |
| :type task_ids: str or iterable of strings (representing task_ids) |
| :param dag_id: If provided, only pulls XComs from this DAG. |
| If None (default), the DAG of the calling task is used. |
| :type dag_id: str |
| :param include_prior_dates: If False, only XComs from the current |
| execution_date are returned. If True, XComs from previous dates |
| are returned as well. |
| :type include_prior_dates: bool |
| |
| |
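| And a matching pull from a downstream task (the task id and key are
| hypothetical):
|
| .. code-block:: python
|
|     def execute(self, context):
|         ti = context['ti']
|         # Pull the explicitly pushed value from an upstream task.
|         row_count = ti.xcom_pull(task_ids='extract_rows', key='row_count')
|         # With the default key, this returns whatever 'extract_rows' returned.
|         returned = ti.xcom_pull(task_ids='extract_rows')
|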
| |
| |
| .. method:: get_num_running_task_instances(self, session) |
| |
| |
| |
| |
| .. method:: init_run_context(self, raw=False) |
| |
| Sets the log context. |
| |
| |
| |
| |
| .. function:: clear_task_instances(tis, session, activate_dag_runs=True, dag=None) |
| Clears a set of task instances, but makes sure the running ones |
| get killed. |
| |
| :param tis: a list of task instances |
| :param session: current session |
| :param activate_dag_runs: flag to check for active dag run |
| :param dag: DAG object |
| |
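| A hedged sketch of clearing one DAG run's task instances (assumes ``dag``
| and ``execution_date`` are already in scope):
|
| .. code-block:: python
|
|     from airflow.models import TaskInstance, clear_task_instances
|     from airflow.utils.db import create_session
|
|     with create_session() as session:
|         tis = session.query(TaskInstance).filter(
|             TaskInstance.dag_id == dag.dag_id,
|             TaskInstance.execution_date == execution_date,
|         ).all()
|         clear_task_instances(tis, session, dag=dag)
|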
| |
| .. py:class:: TaskReschedule(task, execution_date, try_number, start_date, end_date, reschedule_date) |
| |
| Bases: :class:`airflow.models.base.Base` |
| |
| TaskReschedule tracks rescheduled task instances. |
| |
| .. attribute:: __tablename__ |
| :annotation: = task_reschedule |
| |
| |
| |
| .. attribute:: id |
| |
| |
| |
| |
| .. attribute:: task_id |
| |
| |
| |
| |
| .. attribute:: dag_id |
| |
| |
| |
| |
| .. attribute:: execution_date |
| |
| |
| |
| |
| .. attribute:: try_number |
| |
| |
| |
| |
| .. attribute:: start_date |
| |
| |
| |
| |
| .. attribute:: end_date |
| |
| |
| |
| |
| .. attribute:: duration |
| |
| |
| |
| |
| .. attribute:: reschedule_date |
| |
| |
| |
| |
| .. attribute:: __table_args__ |
| |
| |
| |
| |
| |
| .. staticmethod:: find_for_task_instance(task_instance, session) |
| |
| Returns all task reschedules for the task instance and try number, |
| in ascending order. |
| |
| :param task_instance: the task instance to find task reschedules for |
| :type task_instance: airflow.models.TaskInstance |
| |
| |
| |
| |
| .. py:class:: Variable |
| |
| Bases: :class:`airflow.models.base.Base`, :class:`airflow.utils.log.logging_mixin.LoggingMixin` |
| |
| .. attribute:: __tablename__ |
| :annotation: = variable |
| |
| |
| |
| .. attribute:: __NO_DEFAULT_SENTINEL |
| |
| |
| |
| |
| .. attribute:: id |
| |
| |
| |
| |
| .. attribute:: key |
| |
| |
| |
| |
| .. attribute:: _val |
| |
| |
| |
| |
| .. attribute:: is_encrypted |
| |
| |
| |
| |
| .. attribute:: val |
| |
| |
| |
| |
| |
| .. method:: __repr__(self) |
| |
| |
| |
| |
| .. method:: get_val(self) |
| |
| |
| |
| |
| .. method:: set_val(self, value) |
| |
| |
| |
| |
| .. classmethod:: setdefault(cls, key, default, deserialize_json=False) |
| |
| Like a Python builtin dict object, setdefault returns the current value |
| for a key, and if it isn't there, stores the default value and returns it. |
| |
| :param key: Dict key for this Variable |
| :type key: str |
| :param default: Default value to set and return if the variable |
| isn't already in the DB |
| :type default: Mixed |
| :param deserialize_json: Store this as a JSON encoded value in the DB |
| and un-encode it when retrieving a value |
| :return: Mixed |
| |
| |
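| For illustration (the key and default are hypothetical):
|
| .. code-block:: python
|
|     from airflow.models import Variable
|
|     # Returns the stored value if 'api_config' already exists; otherwise
|     # stores the default (JSON-encoded in the DB) and returns it.
|     config = Variable.setdefault('api_config', {'retries': 3},
|                                  deserialize_json=True)
|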
| |
| |
| .. classmethod:: get(cls, key, default_var=__NO_DEFAULT_SENTINEL, deserialize_json=False, session=None) |
| |
| |
| |
| |
| .. classmethod:: set(cls, key, value, serialize_json=False, session=None) |
| |
| |
| |
| |
| .. classmethod:: delete(cls, key, session=None) |
| |
| |
| |
| |
| .. method:: rotate_fernet_key(self) |
| |
| |
| |
| |
| .. data:: XCOM_RETURN_KEY |
| :annotation: = return_value |
| |
| |
| |
| .. data:: XCom |
| |
| |
| |
| |
| .. py:class:: KnownEvent |
| |
| Bases: :class:`airflow.models.base.Base` |
| |
| .. attribute:: __tablename__ |
| :annotation: = known_event |
| |
| |
| |
| .. attribute:: id |
| |
| |
| |
| |
| .. attribute:: label |
| |
| |
| |
| |
| .. attribute:: start_date |
| |
| |
| |
| |
| .. attribute:: end_date |
| |
| |
| |
| |
| .. attribute:: user_id |
| |
| |
| |
| |
| .. attribute:: known_event_type_id |
| |
| |
| |
| |
| .. attribute:: reported_by |
| |
| |
| |
| |
| .. attribute:: event_type |
| |
| |
| |
| |
| .. attribute:: description |
| |
| |
| |
| |
| |
| .. method:: __repr__(self) |
| |
| |
| |
| |
| .. py:class:: KnownEventType |
| |
| Bases: :class:`airflow.models.base.Base` |
| |
| .. attribute:: __tablename__ |
| :annotation: = known_event_type |
| |
| |
| |
| .. attribute:: id |
| |
| |
| |
| |
| .. attribute:: know_event_type |
| |
| |
| |
| |
| |
| .. method:: __repr__(self) |
| |
| |
| |
| |
| .. py:class:: User |
| |
| Bases: :class:`airflow.models.base.Base` |
| |
| .. attribute:: __tablename__ |
| :annotation: = users |
| |
| |
| |
| .. attribute:: id |
| |
| |
| |
| |
| .. attribute:: username |
| |
| |
| |
| |
| .. attribute:: email |
| |
| |
| |
| |
| .. attribute:: superuser |
| |
| |
| |
| |
| |
| .. method:: __repr__(self) |
| |
| |
| |
| |
| .. method:: get_id(self) |
| |
| |
| |
| |
| .. method:: is_superuser(self) |
| |
| |
| |
| |
| .. py:class:: Chart |
| |
| Bases: :class:`airflow.models.base.Base` |
| |
| .. attribute:: __tablename__ |
| :annotation: = chart |
| |
| |
| |
| .. attribute:: id |
| |
| |
| |
| |
| .. attribute:: label |
| |
| |
| |
| |
| .. attribute:: conn_id |
| |
| |
| |
| |
| .. attribute:: user_id |
| |
| |
| |
| |
| .. attribute:: chart_type |
| |
| |
| |
| |
| .. attribute:: sql_layout |
| |
| |
| |
| |
| .. attribute:: sql |
| |
| |
| |
| |
| .. attribute:: y_log_scale |
| |
| |
| |
| |
| .. attribute:: show_datatable |
| |
| |
| |
| |
| .. attribute:: show_sql |
| |
| |
| |
| |
| .. attribute:: height |
| |
| |
| |
| |
| .. attribute:: default_params |
| |
| |
| |
| |
| .. attribute:: owner |
| |
| |
| |
| |
| .. attribute:: x_is_date |
| |
| |
| |
| |
| .. attribute:: iteration_no |
| |
| |
| |
| |
| .. attribute:: last_modified |
| |
| |
| |
| |
| |
| .. method:: __repr__(self) |
| |
| |
| |
| |