| field | value | date |
|---|---|---|
| commit | 45e84e58bf6235393653c8e2c3d85a3c27c7872c | |
| author | QiangCai \<qiangcai@qq.com\> | Fri Dec 27 19:56:27 2019 +0800 |
| committer | Jacky Li \<jacky.likun@qq.com\> | Mon Dec 30 12:30:17 2019 +0800 |
| tree | 03fe751c8c262e3e47c033a8803dc138c52a1497 | |
| parent | b0bdab2597dd658eceaea0b87672c76e06eaf340 | |
[CARBONDATA-3641] Refactor data loading for partition table

[Background] Currently, CarbonData implements only Hadoop commit algorithm version 1, which generates too many segment files during loading, as well as too many small data files and index files.

[Modification]
1. Implement the Carbon commit algorithm, avoiding moving data files and index files.
2. Generate the final segment file directly.
3. Optimize global_sort to avoid the small-files issue.
4. Support complex data types in partition tables (non-partition columns).

This closes #3535
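The difference between the two commit strategies can be sketched in plain Java. This is a hypothetical illustration, not CarbonData's actual classes: `commitByRename` mimics Hadoop commit algorithm v1, where each task writes into a temporary directory and commit moves every file into the final segment directory (one rename per file), while `commitBySegmentFile` mimics the direct-write approach, where data files are already in their final location and commit only writes a single segment file listing them. All names here (`CommitSketch`, `segment_0.metadata`) are made up for the sketch.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

// Hypothetical sketch contrasting rename-based commit (Hadoop algorithm v1)
// with a direct-write commit that records files in one segment file.
public class CommitSketch {

    // v1-style: every file under the task's temporary directory is moved
    // into the final segment directory at commit time (many renames).
    static List<Path> commitByRename(Path tempDir, Path segmentDir) throws IOException {
        Files.createDirectories(segmentDir);
        List<Path> moved = new ArrayList<>();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(tempDir)) {
            for (Path f : files) {
                Path target = segmentDir.resolve(f.getFileName());
                Files.move(f, target, StandardCopyOption.REPLACE_EXISTING);
                moved.add(target);
            }
        }
        return moved;
    }

    // Direct-write style: data files are written straight to the segment
    // directory; commit only writes one segment file naming them, so no
    // data or index file is ever moved.
    static Path commitBySegmentFile(Path segmentDir, List<String> dataFiles) throws IOException {
        Files.createDirectories(segmentDir);
        Path segmentFile = segmentDir.resolve("segment_0.metadata"); // illustrative name
        Files.write(segmentFile, dataFiles);
        return segmentFile;
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("commit-sketch");
        Path temp = Files.createDirectories(root.resolve("_temporary"));
        Path segment = root.resolve("Fact/Part0/Segment_0");

        Files.write(temp.resolve("part-0.carbondata"), List.of("rows"));
        List<Path> moved = commitByRename(temp, segment);
        System.out.println("renamed files: " + moved.size());

        Path segFile = commitBySegmentFile(segment, List.of("part-0.carbondata"));
        System.out.println("segment file: " + segFile.getFileName());
    }
}
```

With many tasks and partitions, the rename-based path scales with the number of files, whereas the segment-file path does a constant amount of commit work, which is the motivation stated in the commit message.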
Apache CarbonData is an indexed columnar data store solution for fast analytics on big data platforms, e.g. Apache Hadoop, Apache Spark, etc.
You can find the latest CarbonData documentation and learn more at: http://carbondata.apache.org
The CarbonData file format is a columnar store in HDFS. It has many of the features a modern columnar format has, such as splittability, compression schemes, and complex data types, and CarbonData has the following unique features:
CarbonData is built using Apache Maven. To build CarbonData:
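The build command itself is missing from this excerpt; a typical Maven invocation looks roughly like the following. The profile flag is an assumption here, since the supported Spark profiles vary by CarbonData version, so check the project's build documentation before running it.

```shell
# Build from the source tree, skipping tests; the Spark profile shown
# is an example and may differ for your CarbonData version.
mvn -DskipTests -Pspark-2.4 clean package
```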
This is an active open source project for everyone, and we are always open to people who want to use this system or contribute to it. This guide introduces how to contribute to CarbonData.
To get involved in CarbonData:
Apache CarbonData is an open source project of The Apache Software Foundation (ASF).