:py:mod:`airflow.providers.apache.hive.transfers.s3_to_hive`
============================================================
.. py:module:: airflow.providers.apache.hive.transfers.s3_to_hive
.. autoapi-nested-parse::
This module contains an operator to move data from an S3 bucket to Hive.
Module Contents
---------------
Classes
~~~~~~~
.. autoapisummary::
airflow.providers.apache.hive.transfers.s3_to_hive.S3ToHiveOperator
.. py:class:: S3ToHiveOperator(*, s3_key, field_dict, hive_table, delimiter = ',', create = True, recreate = False, partition = None, headers = False, check_headers = False, wildcard_match = False, aws_conn_id = 'aws_default', verify = None, hive_cli_conn_id = 'hive_cli_default', input_compressed = False, tblproperties = None, select_expression = None, **kwargs)
Bases: :py:obj:`airflow.models.BaseOperator`
Moves data from S3 to Hive. The operator downloads a file from S3,
stores it locally, and then loads it into a Hive table.
If the ``create`` or ``recreate`` arguments are set to ``True``,
``CREATE TABLE`` and ``DROP TABLE`` statements are generated.
Hive data types are inferred from the cursor's metadata.
Note that the table generated in Hive uses ``STORED AS textfile``,
which isn't the most efficient serialization format. If a
large amount of data is loaded and/or if the table gets
queried considerably, you may want to use this operator only to
stage the data into a temporary table before loading it into its
final destination using a ``HiveOperator``.
:param s3_key: The key to be retrieved from S3. (templated)
:param field_dict: A dictionary of the field names in the file
as keys and their Hive types as values
:param hive_table: target Hive table, use dot notation to target a
specific database. (templated)
:param delimiter: field delimiter in the file
:param create: whether to create the table if it doesn't exist
:param recreate: whether to drop and recreate the table at every
execution
:param partition: target partition as a dict of partition columns
and values. (templated)
:param headers: whether the file contains column names on the first
line
:param check_headers: whether the column names on the first line should be
checked against the keys of field_dict
:param wildcard_match: whether the s3_key should be interpreted as a Unix
wildcard pattern
:param aws_conn_id: source S3 connection
:param verify: Whether or not to verify SSL certificates for the S3 connection.
By default, SSL certificates are verified.
You can provide the following values:
- ``False``: do not validate SSL certificates. SSL will still be used
(unless use_ssl is False), but SSL certificates will not be
verified.
- ``path/to/cert/bundle.pem``: A filename of the CA cert bundle to use.
You can specify this argument if you want to use a different
CA cert bundle than the one used by botocore.
:param hive_cli_conn_id: Reference to the
:ref:`Hive CLI connection id <howto/connection:hive_cli>`.
:param input_compressed: Boolean to determine if file decompression is
required to process headers
:param tblproperties: TBLPROPERTIES of the hive table being created
:param select_expression: S3 Select expression
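A minimal usage sketch, assuming the operator is configured inside a DAG. The DAG id, bucket, key, table name, field types, and partition values below are placeholders, not part of this provider's documentation; the connection ids shown are simply the defaults made explicit.

.. code-block:: python

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.apache.hive.transfers.s3_to_hive import S3ToHiveOperator

    with DAG(
        dag_id="example_s3_to_hive",  # hypothetical DAG id
        start_date=datetime(2022, 1, 1),
        schedule_interval=None,
        catchup=False,
    ):
        stage_orders = S3ToHiveOperator(
            task_id="stage_orders",
            s3_key="s3://my-bucket/exports/orders_{{ ds }}.csv",  # templated; placeholder bucket/key
            field_dict={"order_id": "BIGINT", "customer": "STRING", "total": "DOUBLE"},
            hive_table="staging.orders",  # dot notation targets a specific database
            delimiter=",",
            headers=True,
            check_headers=True,  # compare the header row against field_dict keys
            recreate=True,  # drop and recreate the staging table on every run
            partition={"ds": "{{ ds }}"},  # templated partition spec
            aws_conn_id="aws_default",
            hive_cli_conn_id="hive_cli_default",
        )

As the note above suggests, a follow-up ``HiveOperator`` task would typically move the staged data from this textfile table into its final, more efficiently stored destination.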
.. py:attribute:: template_fields
:annotation: :Sequence[str] = ['s3_key', 'partition', 'hive_table']
.. py:attribute:: template_ext
:annotation: :Sequence[str] = []
.. py:attribute:: ui_color
:annotation: = #a0e08c
.. py:method:: execute(self, context)
This is the main method to derive when creating an operator.
Context is the same dictionary used when rendering Jinja templates.
Refer to get_template_context for more context.
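The ``execute`` contract described here applies to any ``BaseOperator`` subclass. A minimal illustrative sketch follows; the ``MyStagingOperator`` class and its logging are hypothetical and not part of this provider.

.. code-block:: python

    from airflow.models import BaseOperator


    class MyStagingOperator(BaseOperator):
        def __init__(self, *, table: str, **kwargs):
            super().__init__(**kwargs)
            self.table = table

        def execute(self, context):
            # ``context`` is the same dictionary available to Jinja templates,
            # e.g. ``context["ds"]`` holds the logical date stamp.
            self.log.info("Staging table %s for %s", self.table, context["ds"])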