:py:mod:`airflow.providers.apache.spark.hooks.spark_jdbc_script`
================================================================
.. py:module:: airflow.providers.apache.spark.hooks.spark_jdbc_script


Module Contents
---------------

Functions
~~~~~~~~~

.. autoapisummary::

   airflow.providers.apache.spark.hooks.spark_jdbc_script.set_common_options
   airflow.providers.apache.spark.hooks.spark_jdbc_script.spark_write_to_jdbc
   airflow.providers.apache.spark.hooks.spark_jdbc_script.spark_read_from_jdbc


Attributes
~~~~~~~~~~

.. autoapisummary::

   airflow.providers.apache.spark.hooks.spark_jdbc_script.SPARK_WRITE_TO_JDBC
   airflow.providers.apache.spark.hooks.spark_jdbc_script.SPARK_READ_FROM_JDBC
.. py:data:: SPARK_WRITE_TO_JDBC
   :annotation: :str = spark_to_jdbc


.. py:data:: SPARK_READ_FROM_JDBC
   :annotation: :str = jdbc_to_spark
.. py:function:: set_common_options(spark_source, url = 'localhost:5432', jdbc_table = 'default.default', user = 'root', password = 'root', driver = 'driver')

   Set the common JDBC connection options on a Spark source and return it.

   :param spark_source: Spark source, i.e. a Spark reader or writer
   :param url: JDBC connection URL
   :param jdbc_table: JDBC table name
   :param user: JDBC user name
   :param password: JDBC password
   :param driver: JDBC driver class name
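Judging from the signature and parameter list, ``set_common_options`` presumably chains ``.format('jdbc')`` and ``.option(...)`` calls onto the reader or writer, following PySpark's builder API. A minimal sketch below uses a stub source in place of a real ``DataFrameReader``/``DataFrameWriter``; the stub and the exact chaining are illustrative assumptions, not part of the module:

```python
class _StubSource:
    """Stand-in for a PySpark DataFrameReader/DataFrameWriter:
    records the format and options chained onto it."""

    def __init__(self):
        self.source_format = None
        self.options = {}

    def format(self, fmt):
        self.source_format = fmt
        return self

    def option(self, key, value):
        self.options[key] = value
        return self


def set_common_options(spark_source, url='localhost:5432',
                       jdbc_table='default.default', user='root',
                       password='root', driver='driver'):
    """Apply the shared JDBC options (URL, table, credentials, driver)
    to a Spark reader or writer and return it (sketch)."""
    return (spark_source.format('jdbc')
            .option('url', url)
            .option('dbtable', jdbc_table)
            .option('user', user)
            .option('password', password)
            .option('driver', driver))


source = set_common_options(_StubSource(),
                            url='jdbc:postgresql://db:5432/app',
                            jdbc_table='public.orders', user='etl',
                            password='secret',
                            driver='org.postgresql.Driver')
print(source.source_format)       # jdbc
print(source.options['dbtable'])  # public.orders
```

Because both readers and writers expose the same ``format``/``option`` builder methods in PySpark, one helper can configure either side of the transfer.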
.. py:function:: spark_write_to_jdbc(spark_session, url, user, password, metastore_table, jdbc_table, driver, truncate, save_mode, batch_size, num_partitions, create_table_column_types)

   Transfer data from Spark to a JDBC source.
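The write-specific parameters above map onto Spark's documented JDBC writer options. A pure-Python sketch of that mapping follows; the helper itself is illustrative and not part of the module, but the option names (``batchsize``, ``numPartitions``, ``truncate``, ``createTableColumnTypes``) come from Spark's JDBC data-source documentation:

```python
def jdbc_write_options(truncate=None, batch_size=None, num_partitions=None,
                       create_table_column_types=None):
    """Assemble the optional JDBC writer options, skipping unset ones.
    Option names follow Spark's JDBC data-source documentation."""
    options = {}
    if truncate is not None:
        # truncate the existing table instead of dropping and recreating it
        options['truncate'] = truncate
    if batch_size:
        # rows inserted per JDBC round trip
        options['batchsize'] = batch_size
    if num_partitions:
        # upper bound on concurrent JDBC connections used for the write
        options['numPartitions'] = num_partitions
    if create_table_column_types:
        # column type overrides used when Spark creates the target table
        options['createTableColumnTypes'] = create_table_column_types
    return options


print(jdbc_write_options(truncate=True, batch_size=1000))
# {'truncate': True, 'batchsize': 1000}
```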
.. py:function:: spark_read_from_jdbc(spark_session, url, user, password, metastore_table, jdbc_table, driver, save_mode, save_format, fetch_size, num_partitions, partition_column, lower_bound, upper_bound)

   Transfer data from a JDBC source to Spark.
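The read side has its own option mapping: ``fetch_size`` corresponds to Spark's ``fetchsize``, and partitioned reads require ``partitionColumn``, ``lowerBound``, and ``upperBound`` together, per Spark's JDBC data-source documentation. A sketch of that mapping (the helper is illustrative, not part of the module):

```python
def jdbc_read_options(fetch_size=None, num_partitions=None,
                      partition_column=None, lower_bound=None,
                      upper_bound=None):
    """Assemble the optional JDBC reader options, skipping unset ones.
    Option names follow Spark's JDBC data-source documentation."""
    options = {}
    if fetch_size:
        # rows fetched per JDBC round trip
        options['fetchsize'] = fetch_size
    if num_partitions:
        # upper bound on concurrent JDBC connections used for the read
        options['numPartitions'] = num_partitions
    # Spark requires the three partitioning options to be set together
    if (partition_column is not None and lower_bound is not None
            and upper_bound is not None):
        options['partitionColumn'] = partition_column
        options['lowerBound'] = lower_bound
        options['upperBound'] = upper_bound
    return options


# A partition column without bounds is ignored, matching the all-or-nothing rule
print(jdbc_read_options(fetch_size=500, partition_column='id'))
# {'fetchsize': 500}
```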