:py:mod:`tests.system.providers.databricks.example_databricks_sql`
==================================================================

.. py:module:: tests.system.providers.databricks.example_databricks_sql

.. autoapi-nested-parse::

   This is an example DAG which uses the DatabricksSubmitRunOperator.
   In this example, we create two tasks which execute sequentially.
   The first task runs a notebook at the workspace path "/test",
   and the second task runs a JAR uploaded to DBFS. Both tasks use
   new clusters.

   Because we have set a downstream dependency on the notebook task,
   the Spark JAR task will NOT run until the notebook task completes
   successfully.

   A run is considered successful if it finishes with a result_state of
   "SUCCESS". For more information about run states, refer to
   https://docs.databricks.com/api/latest/jobs.html#runstate
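
The DAG described above can be sketched as follows. This is a minimal,
hypothetical example, not this module's actual source: the cluster spec,
the DBFS JAR path, and the main class name are placeholders to be replaced
with values valid for your workspace.

.. code-block:: python

   from datetime import datetime

   from airflow import DAG
   from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

   # Placeholder cluster spec; choose a spark_version and node_type_id
   # that are valid for your workspace.
   new_cluster = {
       "spark_version": "9.1.x-scala2.12",
       "node_type_id": "r3.xlarge",
       "num_workers": 8,
   }

   with DAG(
       dag_id="example_databricks_operator",
       start_date=datetime(2021, 1, 1),
       schedule_interval=None,
       catchup=False,
   ) as dag:
       # First task: run a notebook at the workspace path "/test" on a new cluster.
       notebook_task = DatabricksSubmitRunOperator(
           task_id="notebook_task",
           new_cluster=new_cluster,
           notebook_task={"notebook_path": "/test"},
       )
       # Second task: run a JAR previously uploaded to DBFS, also on a new cluster.
       spark_jar_task = DatabricksSubmitRunOperator(
           task_id="spark_jar_task",
           new_cluster=new_cluster,
           spark_jar_task={"main_class_name": "com.example.MainClass"},
           libraries=[{"jar": "dbfs:/tmp/my-app.jar"}],
       )
       # Downstream dependency: the JAR task runs only after the notebook succeeds.
       notebook_task >> spark_jar_task
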
Module Contents
---------------

.. py:data:: ENV_ID

.. py:data:: DAG_ID
   :annotation: = example_databricks_sql_operator

.. py:data:: connection_id
   :annotation: = my_connection

.. py:data:: test_run
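
Given the module name and the ``connection_id`` value above, a task in this
example presumably issues SQL through ``DatabricksSqlOperator``. The sketch
below is an assumption rather than the module's source; the endpoint name,
table, and query are hypothetical placeholders.

.. code-block:: python

   from datetime import datetime

   from airflow import DAG
   from airflow.providers.databricks.operators.databricks_sql import DatabricksSqlOperator

   connection_id = "my_connection"

   with DAG(
       dag_id="example_databricks_sql_operator",
       start_date=datetime(2021, 1, 1),
       schedule_interval=None,
       catchup=False,
   ) as dag:
       # Hypothetical SQL task: run a query on a Databricks SQL endpoint
       # via the connection named above.
       select_data = DatabricksSqlOperator(
           task_id="select_data",
           databricks_conn_id=connection_id,
           sql_endpoint_name="my_endpoint",  # placeholder endpoint name
           sql="SELECT * FROM my_table LIMIT 10",  # placeholder query
       )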