---
title: Firehose
weight: 5
type: docs
aliases:
  - /dev/table/connectors/firehose.html
---

# Amazon Kinesis Data Firehose SQL Connector

{{< label "Sink: Batch" >}} {{< label "Sink: Streaming Append Mode" >}}

The Kinesis Data Firehose connector allows for writing data into Amazon Kinesis Data Firehose (KDF).

## Dependencies

{{< sql_connector_download_table "firehose" 4.1.0 >}}

## How to create a Kinesis Data Firehose table

Follow the instructions from the Amazon Kinesis Data Firehose Developer Guide to set up a Kinesis Data Firehose delivery stream. The following example shows how to create a table backed by a Kinesis Data Firehose delivery stream with the minimum required options:

```sql
CREATE TABLE FirehoseTable (
  `user_id` BIGINT,
  `item_id` BIGINT,
  `category_id` BIGINT,
  `behavior` STRING
)
WITH (
  'connector' = 'firehose',
  'delivery-stream' = 'user_behavior',
  'aws.region' = 'us-east-2',
  'format' = 'csv'
);
```

## Connector Options

## Authorization

Make sure to create an appropriate IAM policy to allow reading from and writing to the Kinesis Data Firehose delivery stream.

## Authentication

Depending on your deployment, you would choose a different Credentials Provider to allow access to Kinesis Data Firehose. By default, the `AUTO` Credentials Provider is used. If the access key ID and secret key are set in the deployment configuration, this results in using the `BASIC` provider.
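
For example, a minimal sketch of selecting the `BASIC` provider explicitly. It assumes the `aws.credentials.basic.*` option keys used by the Flink AWS connectors; the stream name and credential values are placeholders:

```sql
-- Sketch: explicitly selecting the BASIC provider with static credentials.
-- The aws.credentials.basic.* keys follow the naming used by the Flink AWS
-- connectors; the stream name and key values below are placeholders.
CREATE TABLE FirehoseTableBasicAuth (
  `user_id` BIGINT,
  `behavior` STRING
)
WITH (
  'connector' = 'firehose',
  'delivery-stream' = 'user_behavior',
  'aws.region' = 'us-east-2',
  'aws.credentials.provider' = 'BASIC',
  'aws.credentials.basic.accesskeyid' = '<your-access-key-id>',
  'aws.credentials.basic.secretkey' = '<your-secret-key>',
  'format' = 'csv'
);
```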

A specific `AWSCredentialsProvider` can be optionally set using the `aws.credentials.provider` setting. Supported values are:

- `AUTO` - Use the default AWS Credentials Provider chain that searches for credentials in the following order: `ENV_VARS`, `SYS_PROPS`, `WEB_IDENTITY_TOKEN`, `PROFILE`, and the EC2/ECS credentials provider.
- `BASIC` - Use access key ID and secret key supplied as configuration.
- `ENV_VAR` - Use the `AWS_ACCESS_KEY_ID` & `AWS_SECRET_ACCESS_KEY` environment variables.
- `SYS_PROP` - Use the Java system properties `aws.accessKeyId` and `aws.secretKey`.
- `PROFILE` - Use an AWS credentials profile to create the AWS credentials.
- `ASSUME_ROLE` - Create AWS credentials by assuming a role. The credentials for assuming the role must be supplied (see the sketch after this list).
- `WEB_IDENTITY_TOKEN` - Create AWS credentials by assuming a role using Web Identity Token.
- `CUSTOM` - Provide a custom class that implements the interface `AWSCredentialsProvider` and has a constructor `MyCustomClass(java.util.Properties config)`. All connector properties will be passed down to this custom credential provider class via the constructor.
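
As a sketch of the `ASSUME_ROLE` variant, the table below assumes the `aws.credentials.role.*` option keys used by the Flink AWS connectors; the role ARN and session name are placeholders:

```sql
-- Sketch: authenticating by assuming a role. The aws.credentials.role.*
-- keys follow the naming used by the Flink AWS connectors; the ARN and
-- session name below are placeholders.
CREATE TABLE FirehoseTableAssumeRole (
  `user_id` BIGINT,
  `behavior` STRING
)
WITH (
  'connector' = 'firehose',
  'delivery-stream' = 'user_behavior',
  'aws.region' = 'us-east-2',
  'aws.credentials.provider' = 'ASSUME_ROLE',
  'aws.credentials.role.arn' = 'arn:aws:iam::123456789012:role/firehose-writer',
  'aws.credentials.role.sessionName' = 'firehose-sql-session',
  'format' = 'csv'
);
```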

## Data Type Mapping

Kinesis Data Firehose stores records as Base64-encoded binary data objects, so it doesn't have a notion of internal record structure. Instead, Kinesis Data Firehose records are deserialized and serialized by formats, e.g. `'avro'`, `'csv'`, or `'json'`. To determine the data type of the messages in your Kinesis Data Firehose backed tables, pick a suitable Flink format with the `format` keyword. Please refer to the [Formats]({{< ref "docs/connectors/table/formats/overview" >}}) pages for more details.
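
For instance, a minimal sketch using the built-in `'json'` format, where JSON fields (including nested objects) are mapped onto the declared Flink SQL column types; the table and stream names are placeholders:

```sql
-- Sketch: the 'json' format deserializes/serializes each Firehose record,
-- mapping JSON fields onto the declared Flink SQL column types.
CREATE TABLE FirehoseJsonTable (
  `user_id` BIGINT,
  `payload` ROW<item_id BIGINT, quantity INT>,  -- nested JSON object
  `ts` TIMESTAMP(3)
)
WITH (
  'connector' = 'firehose',
  'delivery-stream' = 'user_orders',            -- placeholder stream name
  'aws.region' = 'us-east-2',
  'format' = 'json'
);
```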

## Notice

The current implementation of the Kinesis Data Firehose SQL connector only supports Kinesis Data Firehose backed sinks and doesn't provide an implementation for source queries. Queries similar to:

```sql
SELECT * FROM FirehoseTable;
```

should result in an error similar to:

```
Connector firehose can only be used as a sink. It cannot be used as a source.
```
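
Writing into the table is supported. As a sketch, assuming the `FirehoseTable` defined above and a readable table with a matching schema (here a placeholder named `source_table`):

```sql
-- Sketch: the connector works as a sink, so INSERT INTO is supported.
-- `source_table` is a placeholder for any readable table with a
-- matching schema.
INSERT INTO FirehoseTable
SELECT `user_id`, `item_id`, `category_id`, `behavior`
FROM source_table;
```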

{{< top >}}