:py:mod:`airflow.executors.kubernetes_executor`
===============================================
.. py:module:: airflow.executors.kubernetes_executor
.. autoapi-nested-parse::
KubernetesExecutor
.. seealso::
For more information on how the KubernetesExecutor works, take a look at the guide:
:ref:`executor:KubernetesExecutor`
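As a quick orientation (this is a sketch, not the full setup described in the guide above), the executor is typically enabled in ``airflow.cfg``; the exact section holding ``pod_template_file`` varies between Airflow versions, so treat the section names below as illustrative:

```ini
[core]
executor = KubernetesExecutor

[kubernetes]
# Optional: a base pod spec used as the template for every task pod.
pod_template_file = /path/to/pod_template.yaml
```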
Module Contents
---------------
Classes
~~~~~~~
.. autoapisummary::
airflow.executors.kubernetes_executor.ResourceVersion
airflow.executors.kubernetes_executor.KubernetesJobWatcher
airflow.executors.kubernetes_executor.AirflowKubernetesScheduler
airflow.executors.kubernetes_executor.KubernetesExecutor
Functions
~~~~~~~~~
.. autoapisummary::
airflow.executors.kubernetes_executor.get_base_pod_from_template
Attributes
~~~~~~~~~~
.. autoapisummary::
airflow.executors.kubernetes_executor.KubernetesJobType
airflow.executors.kubernetes_executor.KubernetesResultsType
airflow.executors.kubernetes_executor.KubernetesWatchType
.. py:data:: KubernetesJobType
.. py:data:: KubernetesResultsType
.. py:data:: KubernetesWatchType
.. py:class:: ResourceVersion
Singleton for tracking resourceVersion from Kubernetes
.. py:attribute:: resource_version
:annotation: = 0
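The singleton behavior can be sketched in a few lines; this is an illustrative stand-in, not the executor's actual implementation, showing why every call site sees the same last-observed ``resourceVersion``:

```python
class ResourceVersionSketch:
    """Minimal singleton sketch: all instantiations return one shared
    object, so the last-seen resourceVersion survives across call sites."""

    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.resource_version = "0"
        return cls._instance
```

Because ``__new__`` always hands back the same object, updating the version in one place (e.g. the watcher) is immediately visible everywhere else.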
.. py:class:: KubernetesJobWatcher(namespace: Optional[str], multi_namespace_mode: bool, watcher_queue: Queue[KubernetesWatchType], resource_version: Optional[str], scheduler_job_id: Optional[str], kube_config: kubernetes.client.Configuration)
Bases: :py:obj:`multiprocessing.Process`, :py:obj:`airflow.utils.log.logging_mixin.LoggingMixin`
Watches for Kubernetes jobs
.. py:method:: run(self) -> None
Performs the watch, processing pod events from the Kubernetes API
.. py:method:: process_error(self, event: Any) -> str
Process error response
.. py:method:: process_status(self, pod_id: str, namespace: str, status: str, annotations: Dict[str, str], resource_version: str, event: Any) -> None
Process status response
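The essence of status processing is mapping a raw pod phase plus the watch event type onto a task outcome. The sketch below is hypothetical (the names and return values are illustrative, not the executor's real state constants), but it captures the decision structure:

```python
from typing import Optional

def map_pod_status(status: str, event_type: str) -> Optional[str]:
    """Hypothetical mapping from a pod phase and watch event type to a
    task outcome. Returns None when there is nothing to report yet."""
    if event_type == "DELETED":
        return "failed"      # pod vanished before reporting completion
    if status == "Succeeded":
        return "success"
    if status == "Failed":
        return "failed"
    return None              # Pending / Running: keep watching
```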
.. py:class:: AirflowKubernetesScheduler(kube_config: Any, task_queue: Queue[KubernetesJobType], result_queue: Queue[KubernetesResultsType], kube_client: kubernetes.client.CoreV1Api, scheduler_job_id: str)
Bases: :py:obj:`airflow.utils.log.logging_mixin.LoggingMixin`
Airflow Scheduler for Kubernetes
.. py:method:: run_pod_async(self, pod: kubernetes.client.models.V1Pod, **kwargs)
Runs a pod asynchronously
.. py:method:: run_next(self, next_job: KubernetesJobType) -> None
The run_next command checks the task_queue for any un-run jobs.
It then creates a unique job id, launches that job in the cluster,
and stores the relevant info in the current_jobs map so the job's
status can be tracked.
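The flow just described can be sketched as plain Python. Everything here is illustrative: ``launch`` stands in for the real pod-creation call, and the job-id scheme is an assumption, not the executor's actual naming:

```python
import uuid
from queue import Queue, Empty

def run_next_sketch(task_queue: Queue, current_jobs: dict, launch) -> None:
    """Illustrative run_next flow: pop an un-run job, assign it a
    unique id, launch it, and track it so a later sync can poll it."""
    try:
        job = task_queue.get_nowait()
    except Empty:
        return                       # nothing queued; come back later
    job_id = uuid.uuid4().hex        # unique id for this launch
    launch(job_id, job)              # create the pod in the cluster
    current_jobs[job_id] = job       # remember it for status tracking
```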
.. py:method:: delete_pod(self, pod_id: str, namespace: str) -> None
Deletes a pod
.. py:method:: sync(self) -> None
The sync function checks the status of all currently running Kubernetes jobs.
If a job is completed, its status is placed in the result queue to
be sent back to the scheduler.
.. py:method:: process_watcher_task(self, task: KubernetesWatchType) -> None
Processes a task received from the watcher.
.. py:method:: terminate(self) -> None
Terminates the watcher.
.. py:function:: get_base_pod_from_template(pod_template_file: Optional[str], kube_config: Any) -> kubernetes.client.models.V1Pod
Reads either the pod_template_file set in the executor_config or the base pod_template_file
set in airflow.cfg to craft a "base pod" that will be used by the KubernetesExecutor.
:param pod_template_file: absolute path to a pod_template_file.yaml or None
:param kube_config: The KubeConfig class generated by airflow that contains all kube metadata
:return: a V1Pod that can be used as the base pod for k8s tasks
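The precedence rule in this description can be sketched as a small helper. The function and parameter names below are hypothetical, chosen only to illustrate the fallback order, and the real function returns a full ``V1Pod`` rather than a path:

```python
def choose_pod_template(executor_config_path=None, cfg_default_path=None):
    """Illustrative precedence: a template supplied via executor_config
    wins; otherwise fall back to the airflow.cfg default."""
    if executor_config_path:
        return executor_config_path
    if cfg_default_path:
        return cfg_default_path
    raise ValueError("no pod_template_file configured")
```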
.. py:class:: KubernetesExecutor
Bases: :py:obj:`airflow.executors.base_executor.BaseExecutor`, :py:obj:`airflow.utils.log.logging_mixin.LoggingMixin`
Executor for Kubernetes
.. py:method:: clear_not_launched_queued_tasks(self, session=None) -> None
Tasks can end up in a "Queued" state through either the executor being
abruptly shut down (leaving a non-empty task_queue on this executor)
or when a rescheduled/deferred operator comes back up for execution
(with the same try_number) before the pod of its previous incarnation
has been fully removed (we think).
This method checks each of those tasks to see if the corresponding pod
is around, and if not, and there's no matching entry in our own
task_queue, marks it for re-execution.
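The reconciliation check described above reduces to simple set logic; this sketch treats tasks and pods as opaque keys (the real method queries the metadata database and the Kubernetes API):

```python
def find_stuck_queued(queued_tasks, live_pods, local_queue):
    """Sketch of the check above: a queued task with no corresponding
    pod and no entry in this executor's own task_queue is presumed
    lost and should be reset for re-execution."""
    return [
        t for t in queued_tasks
        if t not in live_pods and t not in local_queue
    ]
```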
.. py:method:: start(self) -> None
Starts the executor
.. py:method:: execute_async(self, key: airflow.models.taskinstance.TaskInstanceKey, command: airflow.executors.base_executor.CommandType, queue: Optional[str] = None, executor_config: Optional[Any] = None) -> None
Executes a task asynchronously
.. py:method:: sync(self) -> None
Synchronize task state.
.. py:method:: try_adopt_task_instances(self, tis: List[airflow.models.taskinstance.TaskInstance]) -> List[airflow.models.taskinstance.TaskInstance]
Try to adopt running task instances that have been abandoned by a SchedulerJob dying.
Anything that is not adopted will be cleared by the scheduler (and then become eligible for
re-scheduling)
:return: any TaskInstances that were unable to be adopted
:rtype: list[airflow.models.TaskInstance]
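The adoption flow splits naturally into two outcomes. The sketch below is a simplified stand-in (``patch_pod`` and ``pods_by_key`` are hypothetical placeholders for the relabeling call and the pod lookup):

```python
def try_adopt_sketch(task_instances, patch_pod, pods_by_key):
    """Illustrative adoption flow: if an orphaned task's pod still
    exists, patch it so this scheduler's watcher monitors it; tasks
    without a pod cannot be adopted and are returned for clearing."""
    not_adopted = []
    for ti in task_instances:
        pod = pods_by_key.get(ti)
        if pod is not None:
            patch_pod(pod)           # relabel so our watcher sees it
        else:
            not_adopted.append(ti)   # scheduler will clear this one
    return not_adopted
```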
.. py:method:: adopt_launched_task(self, kube_client: kubernetes.client.CoreV1Api, pod: kubernetes.client.models.V1Pod, pod_ids: Dict[airflow.models.taskinstance.TaskInstanceKey, kubernetes.client.models.V1Pod]) -> None
Patch existing pod so that the current KubernetesJobWatcher can monitor it via label selectors
:param kube_client: kubernetes client for speaking to kube API
:param pod: V1Pod spec that we will patch with new label
:param pod_ids: pod_ids we expect to patch.
.. py:method:: end(self) -> None
Called when the executor shuts down
.. py:method:: terminate(self)
Terminates the executor; currently a no-op.