commit | ab003979d9f67f7b9183768eb2d799a872d3561e |
---|---|
author | akashrn5 &lt;akashnilugal@gmail.com&gt; Fri May 29 00:08:25 2020 +0530 |
committer | QiangCai &lt;qiangcai@qq.com&gt; Wed Aug 26 15:12:57 2020 +0800 |
tree | 2d2d91d6c98d0a43b549f44d4fb27c3db1a95d26 |
parent | 1ccfb9be3fab8dfc2556541872632471d733518a |
[CARBONDATA-3929] Improve CDC performance

Why is this PR needed?

This PR improves CDC merge performance. CDC is currently very slow for full outer joins and slow in normal cases. The identified pain points are:

1. We currently write the intermediate delete data in the Carbon columnar format and then do a full scan over it. Since this data is only intermediate, the full scan, compression, and columnar encoding are all wasted, time-consuming work.
2. The full outer join case is very slow.
3. When we insert new data into new segments, we follow the old insert flow with the converter step.
4. Since we write the intermediate data in Carbon format, we use coalesce to limit the partitions to the number of active executors.

What changes were proposed in this PR?

The following improvements are made:

1. Write the intermediate data in a faster row format, Avro.
2. Use bucketing on the join column and repartition the DataFrame before performing the join, which avoids the shuffle on one side; the shuffle is the major time-consuming part of the join.
3. Move the insert path to the new flow without the converter step.
4. Remove the coalesce so that all resources can be used to write the intermediate Avro data faster.

Performance results

Data size: 2 GB target table data, 230 MB source table data.
Inner join case: around 17000+ deleted rows, ~70400 updated rows.
Full outer join case: 2 million target rows, 0.2 million source rows, ~70400 rows updated and some deleted.

Join Type | Old 1st query (sec) | Old 2nd query (sec) | New 1st query (sec) | New 2nd query (sec)
---|---|---|---|---
Inner Join | 20 | 9.6 | 14 | 4.6
Full Outer Join | 43 | 17.8 | 26 | 7.7

Does this PR introduce any user interface change?

No

Is any new testcase added?

Yes

This closes #3856
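The bucketing idea in point 2 above can be illustrated outside Spark: if both sides of a join are partitioned by the hash of the join key, matching keys always land in the same bucket, so each bucket pair can be joined independently with no cross-bucket data movement (the "shuffle" the PR avoids). A minimal, self-contained sketch of that idea; the function and field names are hypothetical and this is not CarbonData's actual implementation:

```python
from collections import defaultdict

def bucket_by_key(rows, key, n_buckets):
    # Partition rows into n_buckets using the hash of the join key,
    # mirroring hash-partitioning/bucketing in a distributed join.
    buckets = [defaultdict(list) for _ in range(n_buckets)]
    for row in rows:
        b = hash(row[key]) % n_buckets
        buckets[b][row[key]].append(row)
    return buckets

def bucketed_inner_join(left, right, key, n_buckets=4):
    # Because both sides use the same hash function and bucket count,
    # a given key can only appear in one bucket on each side, so the
    # join proceeds bucket-by-bucket with no cross-bucket shuffle.
    left_buckets = bucket_by_key(left, key, n_buckets)
    right_buckets = bucket_by_key(right, key, n_buckets)
    out = []
    for lbkt, rbkt in zip(left_buckets, right_buckets):
        for k, lrows in lbkt.items():
            for lrow in lrows:
                for rrow in rbkt.get(k, []):
                    out.append({**lrow, **rrow})
    return out

# Tiny example: a "target" table joined with CDC "source" rows on id.
target = [{"id": 1, "val": "a"}, {"id": 2, "val": "b"}]
source = [{"id": 2, "delta": "B"}, {"id": 3, "delta": "C"}]
print(bucketed_inner_join(target, source, "id"))
# → [{'id': 2, 'val': 'b', 'delta': 'B'}]
```

In Spark terms, pre-partitioning both DataFrames on the join column lets the planner skip the exchange on at least one side, which is where the PR reports the bulk of its join-time savings.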
Apache CarbonData is an indexed columnar data store solution for fast analytics on big data platforms, e.g. Apache Hadoop, Apache Spark, etc.
You can find the latest CarbonData document and learn more at: http://carbondata.apache.org
The CarbonData file format is a columnar store in HDFS. It has many features that a modern columnar format has, such as splittable files, compression schemes, complex data types, etc., and CarbonData has the following unique features:
CarbonData is built using Apache Maven. To build CarbonData:
Some features are marked as experimental because the syntax/implementation might change in the future.
This is an active open source project for everyone, and we are always open to people who want to use this system or contribute to it. This document introduces how to contribute to CarbonData.
To get involved in CarbonData:
Apache CarbonData is an open source project of The Apache Software Foundation (ASF).