:mod:`airflow.contrib.operators.cassandra_to_gcs`
==================================================

.. py:module:: airflow.contrib.operators.cassandra_to_gcs

.. autoapi-nested-parse::

   This module contains an operator for copying
   data from Cassandra to Google Cloud Storage in JSON format.


Module Contents
---------------

.. py:class:: CassandraToGoogleCloudStorageOperator(cql, bucket, filename, schema_filename=None, approx_max_file_size_bytes=1900000000, gzip=False, cassandra_conn_id='cassandra_default', google_cloud_storage_conn_id='google_cloud_default', delegate_to=None, *args, **kwargs)

   Bases: :class:`airflow.models.BaseOperator`

   Copy data from Cassandra to Google Cloud Storage in JSON format.

   Note: Arrays of arrays are not supported.

   :param cql: The CQL to execute on the Cassandra table.
   :type cql: str
   :param bucket: The bucket to upload to.
   :type bucket: str
   :param filename: The filename to use as the object name when uploading
       to Google Cloud Storage. A {} should be specified in the filename
       to allow the operator to inject file numbers in cases where the
       file is split due to size.
   :type filename: str
   :param schema_filename: If set, the filename to use as the object name
       when uploading a .json file containing the BigQuery schema fields
       for the table that was dumped from Cassandra.
   :type schema_filename: str
   :param approx_max_file_size_bytes: This operator supports the ability
       to split large table dumps into multiple files (see notes in the
       filename param docs above). This param allows developers to specify the
       file size of the splits. Check https://cloud.google.com/storage/quotas
       to see the maximum allowed file size for a single object.
   :type approx_max_file_size_bytes: int
   :param cassandra_conn_id: Reference to a specific Cassandra hook.
   :type cassandra_conn_id: str
   :param gzip: Option to compress the file being uploaded.
   :type gzip: bool
   :param google_cloud_storage_conn_id: Reference to a specific Google
       Cloud Storage hook.
   :type google_cloud_storage_conn_id: str
   :param delegate_to: The account to impersonate, if any. For this to
       work, the service account making the request must have domain-wide
       delegation enabled.
   :type delegate_to: str
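
   A minimal usage sketch (the DAG id, task id, CQL, bucket, and object names
   below are illustrative placeholders, not values taken from the Airflow
   source):

   .. code-block:: python

       from datetime import datetime

       from airflow import DAG
       from airflow.contrib.operators.cassandra_to_gcs import (
           CassandraToGoogleCloudStorageOperator,
       )

       with DAG('cassandra_to_gcs_example',            # hypothetical DAG id
                start_date=datetime(2020, 1, 1),
                schedule_interval=None) as dag:
           export_events = CassandraToGoogleCloudStorageOperator(
               task_id='export_events',                 # hypothetical task id
               cql='SELECT * FROM my_keyspace.events',  # hypothetical CQL
               bucket='my-export-bucket',               # hypothetical bucket
               # {} is replaced with a file counter when the dump is split
               # into several objects because of approx_max_file_size_bytes.
               filename='exports/events/part-{}.json',
               schema_filename='exports/events/schema.json',
               cassandra_conn_id='cassandra_default',
               google_cloud_storage_conn_id='google_cloud_default',
           )

   The object written to ``schema_filename`` can then be handed to a BigQuery
   load job so the exported JSON lands in a table with matching column types.
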

   .. attribute:: template_fields
      :annotation: = ['cql', 'bucket', 'filename', 'schema_filename']


   .. attribute:: template_ext
      :annotation: = ['.cql']


   .. attribute:: ui_color
      :annotation: = #a0e08c


   .. attribute:: CQL_TYPE_MAP

   .. method:: execute(self, context)


   .. method:: _query_cassandra(self)

      Queries Cassandra and returns a cursor to the results.


   .. method:: _write_local_data_files(self, cursor)

      Takes a cursor, and writes results to a local file.

      :return: A dictionary where keys are filenames to be used as object
          names in GCS, and values are file handles to local files that
          contain the data for the GCS objects.
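
      As a rough sketch of this contract (not the operator's actual code; the
      helper name and arguments below are made up for illustration), the
      result rows could be serialized and split like this:

      .. code-block:: python

          import json
          from tempfile import NamedTemporaryFile

          def write_split_json(rows, filename_template, max_bytes):
              """Write rows as newline-delimited JSON, starting a new local
              file whenever the current one grows past max_bytes."""
              file_no = 0
              handle = NamedTemporaryFile(delete=True)
              files = {filename_template.format(file_no): handle}
              for row in rows:
                  handle.write(json.dumps(row, sort_keys=True).encode('utf-8'))
                  handle.write(b'\n')
                  if handle.tell() >= max_bytes:
                      file_no += 1
                      handle = NamedTemporaryFile(delete=True)
                      files[filename_template.format(file_no)] = handle
              return files

      A later step (``_upload_to_gcs`` below) is what actually copies these
      temporary files to the bucket.
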

   .. method:: _write_local_schema_file(self, cursor)

      Takes a cursor, and writes the BigQuery schema for the results to a
      local file system.

      :return: A dictionary where the key is a filename to be used as an
          object name in GCS, and the value is a file handle to a local file
          that contains the BigQuery schema fields in .json format.
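
      For illustration only (the column names and types below are invented,
      and the exact modes depend on the Cassandra column definitions), the
      schema file holds a JSON list of BigQuery field definitions of roughly
      this shape:

      .. code-block:: python

          schema_fields = [
              {"name": "id", "type": "STRING", "mode": "NULLABLE"},
              {"name": "tags", "type": "STRING", "mode": "REPEATED"},
              {
                  "name": "address",  # e.g. a Cassandra user defined type
                  "type": "RECORD",
                  "mode": "NULLABLE",
                  "fields": [
                      {"name": "street", "type": "STRING", "mode": "NULLABLE"},
                      {"name": "zip_code", "type": "INTEGER", "mode": "NULLABLE"},
                  ],
              },
          ]
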

   .. method:: _upload_to_gcs(self, files_to_upload)


   .. classmethod:: generate_data_dict(cls, names, values)


   .. classmethod:: convert_value(cls, name, value)


   .. classmethod:: convert_array_types(cls, name, value)

   .. classmethod:: convert_user_type(cls, name, value)

      Converts a user type to a RECORD that contains n fields, where n is the
      number of attributes. Each element in the user type class will be
      converted to its corresponding data type in BigQuery.
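
      For illustration only (the type and field names below are invented), a
      value of a user defined type such as ``CREATE TYPE address (street text,
      zip_code int)`` stored in a column named ``address`` ends up as a nested
      object in the exported JSON:

      .. code-block:: python

          row_fragment = {
              "address": {
                  "street": "221B Baker Street",
                  "zip_code": 12345,
              },
          }
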

   .. classmethod:: convert_tuple_type(cls, name, value)

      Converts a tuple to a RECORD that contains n fields; each field will be
      converted to its corresponding data type in BigQuery and named
      'field_<index>', where the index is determined by the order of the
      tuple elements defined in Cassandra.
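
      For illustration only, a ``tuple<text, int>`` value ``('abc', 7)``
      stored in a column named ``pair`` becomes a RECORD whose fields are
      named by position:

      .. code-block:: python

          row_fragment = {
              "pair": {
                  "field_0": "abc",
                  "field_1": 7,
              },
          }
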

   .. classmethod:: convert_map_type(cls, name, value)

      Converts a map to a repeated RECORD that contains two fields, 'key' and
      'value'; each will be converted to its corresponding data type in
      BigQuery.
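
      For illustration only, a ``map<text, int>`` value ``{'a': 1, 'b': 2}``
      stored in a column named ``counts`` is exported as a list of key/value
      pairs:

      .. code-block:: python

          row_fragment = {
              "counts": [
                  {"key": "a", "value": 1},
                  {"key": "b", "value": 2},
              ],
          }
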

   .. classmethod:: generate_schema_dict(cls, name, type)


   .. classmethod:: get_bq_fields(cls, name, type)


   .. classmethod:: is_simple_type(cls, type)


   .. classmethod:: is_array_type(cls, type)


   .. classmethod:: is_record_type(cls, type)


   .. classmethod:: get_bq_type(cls, type)


   .. classmethod:: get_bq_mode(cls, type)