This repository contains the official Apache Flink Kafka connector.
Apache Flink is an open source stream processing framework with powerful stream- and batch-processing capabilities.
Learn more about Flink at https://flink.apache.org/
```shell
git clone https://github.com/apache/flink-connector-kafka.git
cd flink-connector-kafka
mvn clean package -DskipTests
```
The resulting jars can be found in the `target` directory of the respective module.
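Instead of building the connector locally, it can also be pulled into a Maven project as a dependency. A minimal sketch of the declaration is shown below; the version element is a placeholder and should be replaced with the connector release that matches your Flink version:

```xml
<!-- Sketch of a Maven dependency declaration for the Kafka connector.
     Replace the version placeholder with a released connector version
     compatible with your Flink distribution. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version><!-- connector version matching your Flink release --></version>
</dependency>
```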
The Flink committers use IntelliJ IDEA to develop the Flink codebase. We recommend IntelliJ IDEA for developing projects that involve Scala code.
Minimal requirements for an IDE are:

- Support for Java and Scala (also mixed projects)
- Support for Maven with Java and Scala
The IntelliJ IDE supports Maven out of the box and offers a plugin for Scala development.
Check out our Setting up IntelliJ guide for details.
Don’t hesitate to ask!
Contact the developers and community on the mailing lists if you need any help.
Open an issue if you find a bug in Flink.
The documentation of Apache Flink is located on the website https://flink.apache.org or in the `docs/` directory of the source code.
This is an active open-source project. We are always open to people who want to use the system or contribute to it. Contact us if you are looking for implementation tasks that fit your skills. The How to Contribute guide on the Flink website describes how to contribute to Apache Flink.
Apache Flink is an open source project of The Apache Software Foundation (ASF). The Apache Flink project originated from the Stratosphere research project.