[CARBONDATA-3929] Improve CDC performance

Why is this PR needed?
This PR improves CDC merge performance. CDC is currently very slow for full outer joins and slow in normal cases. The identified pain points are:
1. The intermediate delete data is written in the carbon (columnar) format and then read back with a full scan. For intermediate data, the full scan, compression, and columnar encoding are all time-consuming.
2. The full outer join case is very slow.
3. When inserting new data into new segments, the old insert flow with the converter step is used.
4. Because the intermediate data is written in carbon format, coalesce is used to limit the number of partitions to the number of active executors.

What changes were proposed in this PR?
The following improvements are proposed:
1. Write the intermediate data in a faster row format such as Avro.
2. Use bucketing on the join column and repartition the DataFrame before performing the join, which avoids the shuffle on one side; shuffle is the major time-consuming part of the join.
3. Move inserts to the new flow, without the converter step.
4. Remove coalesce and use the available resources to write the intermediate Avro data faster.
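The bucketing idea in point 2 can be illustrated in plain Python: if both sides of a join are partitioned by the same hash of the join key, each pair of partitions can be joined independently, with no cross-partition shuffle. The helper names below are hypothetical and only sketch the concept; they are not CarbonData or Spark code.

```python
from collections import defaultdict

def hash_partition(rows, key, n):
    # Bucket rows by a hash of the join key, mimicking bucketed storage.
    parts = [[] for _ in range(n)]
    for row in rows:
        parts[hash(row[key]) % n].append(row)
    return parts

def copartitioned_join(left, right, key, n=4):
    # Because both sides are bucketed identically, matching keys always
    # land in the same bucket, so each bucket pair joins independently.
    out = []
    for lpart, rpart in zip(hash_partition(left, key, n),
                            hash_partition(right, key, n)):
        index = defaultdict(list)
        for r in rpart:
            index[r[key]].append(r)
        for l in lpart:
            for r in index[l[key]]:
                out.append({**l, **r})
    return out
```

In Spark terms, writing one side bucketed (or repartitioning both sides on the join column) lets the planner skip the exchange on that side, which is where the PR claims the savings.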

Performance results
Data size: 2 GB target table, 230 MB source table.
Inner join case: around 17,000+ deleted rows, ~70,400 updated rows.
Full outer join case: 2 million target rows, 0.2 million source rows, ~70,400 rows updated and some deleted.

Join Type       | Old 1st query (sec) | Old 2nd query (sec) | New 1st query (sec) | New 2nd query (sec)
Inner Join      | 20                  | 9.6                 | 14                  | 4.6
Full Outer Join | 43                  | 17.8                | 26                  | 7.7

Does this PR introduce any user interface change?
No

Is any new testcase added?
Yes

This closes #3856
README.md

Apache CarbonData is an indexed columnar data store solution for fast analytics on big data platforms, e.g. Apache Hadoop, Apache Spark, etc.

You can find the latest CarbonData document and learn more at: http://carbondata.apache.org

CarbonData cwiki

Features

The CarbonData file format is a columnar store in HDFS. It has many of the features of a modern columnar format, such as splittable files, compression schemes, and complex data types, and it has the following unique features:

  • Stores data along with index: this significantly accelerates query performance and reduces I/O scans and CPU usage when the query has filters. The CarbonData index consists of multiple levels of indices; a processing framework can leverage this index to reduce the number of tasks it needs to schedule and process, and it can also skip-scan at a finer granularity (called a blocklet) during task-side scanning instead of scanning the whole file.
  • Operable encoded data: by supporting efficient compression and global encoding schemes, queries can run directly on compressed/encoded data, and the data is converted only just before the results are returned to the user ("late materialization").
  • Supports various use cases with a single data format: e.g. interactive OLAP-style queries, sequential access (big scans), and random access (narrow scans).
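The blocklet-level skip-scan described above can be sketched with a per-blocklet min/max index: any blocklet whose value range cannot intersect the filter is skipped entirely. The `Blocklet` class and pruning logic here are illustrative only, not the actual CarbonData structures.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Blocklet:
    # Per-blocklet min/max statistics for one column, as an index would store.
    min_val: int
    max_val: int
    rows: List[int]

def prune(blocklets, lo, hi):
    # Skip any blocklet whose [min_val, max_val] range cannot intersect
    # the filter [lo, hi], so only a fraction of the file is scanned.
    return [b for b in blocklets if not (b.max_val < lo or b.min_val > hi)]

def scan(blocklets, lo, hi):
    # Full predicate evaluation runs only on the surviving blocklets.
    return [v for b in prune(blocklets, lo, hi) for v in b.rows if lo <= v <= hi]
```

With sorted data and tight min/max ranges, most blocklets are eliminated before any row is touched, which is where the I/O and CPU savings come from.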

Building CarbonData

CarbonData is built using Apache Maven; refer to the project's build guide for the exact commands and profiles.

Online Documentation

Experimental Features

Some features are marked as experimental because the syntax/implementation might change in the future.

  1. Hybrid format table using Add Segment.
  2. Accelerating performance using MV on parquet/orc.
  3. Merge API for Spark DataFrame.
  4. Hive write for non-transactional table.

Integration

Other Technical Material

Fork and Contribute

This is an active open source project for everyone, and we are always open to people who want to use this system or contribute to it. This guide introduces how to contribute to CarbonData.

Contact us

To get involved in CarbonData:

About

Apache CarbonData is an open source project of The Apache Software Foundation (ASF).