commit 5322767b70615e65d1e6c98d56582028c1f20f96
author: liuzhi <371684521@qq.com> Wed Nov 27 17:44:21 2019 +0800
committer: Jacky Li <jacky.likun@qq.com> Wed Nov 27 23:50:25 2019 +0800
tree: 460a31e2417ea6c160ae384ec3b963af1e5cd292
parent: 030f711d6c99bc3cdaf3fd491a5e02f66a87b1d8
[CARBONDATA-3557] Support writing Flink streaming data to Carbon

The write process is:
1. For every checkpoint in each Flink task, write data to the local file system via StreamingFileSink and the Carbon SDK;
2. Copy the local carbon data file to the carbon data store system, such as HDFS or S3;
3. Generate and write a metadata file and a success file to the ${tablePath}/Metadata/stage folder as a commit.

This closes #3421
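The three-step commit flow above can be sketched in plain Java. This is a minimal illustration using `java.nio.file` in place of Flink's StreamingFileSink, the Carbon SDK, and a real HDFS/S3 store; all class names, file names, and the metadata layout here are assumptions for demonstration, not CarbonData's actual implementation.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch of the checkpoint-commit flow described in the
// commit message. Local directories stand in for the Flink task's
// working directory and the remote carbon data store.
public class StageCommitSketch {

    public static Path commit(Path localWorkDir, Path tablePath, byte[] rowData)
            throws IOException {
        // Step 1: on checkpoint, write the data to the local file system
        // (in the real flow this is done by StreamingFileSink + Carbon SDK).
        Path localDataFile = localWorkDir.resolve("part-0-0.carbondata");
        Files.write(localDataFile, rowData);

        // Step 2: copy the local carbon data file to the data store
        // (a local directory stands in for HDFS or S3 here).
        Files.createDirectories(tablePath);
        Path storeDataFile = tablePath.resolve(localDataFile.getFileName());
        Files.copy(localDataFile, storeDataFile, StandardCopyOption.REPLACE_EXISTING);

        // Step 3: write a stage metadata file plus a success file under
        // ${tablePath}/Metadata/stage to mark the commit.
        Path stageDir = tablePath.resolve("Metadata").resolve("stage");
        Files.createDirectories(stageDir);
        String stageId = String.valueOf(System.currentTimeMillis());
        Files.write(stageDir.resolve(stageId),
                ("{\"files\": [\"" + storeDataFile.getFileName() + "\"]}").getBytes());
        Files.createFile(stageDir.resolve(stageId + ".success"));
        return stageDir;
    }

    public static void main(String[] args) throws IOException {
        Path work = Files.createTempDirectory("local-work");
        Path table = Files.createTempDirectory("table-path");
        Path stageDir = commit(work, table, "a,b,c\n1,2,3\n".getBytes());
        System.out.println(Files.exists(stageDir) ? "committed" : "failed");
    }
}
```

Writing the success file only after the data file has been copied is what makes the stage act as an atomic commit marker: a reader that sees the success file knows the corresponding data is fully in the store.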
Apache CarbonData is an indexed columnar data store solution for fast analytics on big data platforms, e.g. Apache Hadoop, Apache Spark, etc.
You can find the latest CarbonData documentation and learn more at: http://carbondata.apache.org
The CarbonData file format is a columnar store in HDFS. It has the capabilities of a modern columnar format, such as splittability, compression schemes, and complex data types, and it also has the following unique features:
CarbonData is built using Apache Maven; to build CarbonData, follow the project's build instructions.
This is an active open source project for everyone, and we are always open to people who want to use this system or contribute to it. This guide describes how to contribute to CarbonData.
To get involved in CarbonData:
Apache CarbonData is an open source project of The Apache Software Foundation (ASF).