[CARBONDATA-3721][CARBONDATA-3590] Optimize Bucket Table

Why is this PR needed?

Support bucket tables consistent with Spark to improve join performance by avoiding shuffle on the bucket column. At the same time, fix bugs in loading, compaction, and querying of bucket tables.

What changes were proposed in this PR?

Support bucket tables consistent with Spark to improve join performance by avoiding shuffle on the bucket column, and fix related bugs.

1. For create table, the DDL supports both TBLPROPERTIES and Hive-style CLUSTERED BY syntax (see the first sketch after this list).
2. For loading, fix problems that occur when a bucket column is specified, so that rows are clustered into different files based on the bucket column.
3. For query, the hash implementation must either stay stable for a given value or match the one used by Parquet tables, so that joins between different bucket tables return correct results; the hash implementation is configurable (see the join sketch after this list).
4. For compaction, group the block files by bucket id so that compacted data is also hashed into different carbondata files; otherwise the query flow, which groups files by bucket number, would place all compacted data into one partition, the join result would mismatch, and performance would degrade badly.
5. For tests, add 19 test cases in TableBucketingTestCase.
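
The two DDL forms from item 1 can be sketched as below. This is a minimal, hypothetical example: the table and column names are made up, the SparkSession is assumed to be already configured for CarbonData, and the TBLPROPERTIES keys ('BUCKET_NUMBER', 'BUCKET_COLUMNS') follow the CarbonData DDL documentation, so check the exact spelling for your version.

```scala
import org.apache.spark.sql.SparkSession

object BucketTableDdlSketch {
  def main(args: Array[String]): Unit = {
    // Assumes a SparkSession already set up with the CarbonData extensions.
    val spark = SparkSession.builder().appName("bucket-ddl-sketch").getOrCreate()

    // Form 1: bucket info via table properties. The property names
    // 'BUCKET_NUMBER' / 'BUCKET_COLUMNS' are assumed here; check the DDL
    // documentation of your CarbonData version for the exact spelling.
    spark.sql(
      """CREATE TABLE IF NOT EXISTS sales_tblprops (
        |  id INT, name STRING, city STRING)
        |STORED AS carbondata
        |TBLPROPERTIES ('BUCKET_NUMBER'='4', 'BUCKET_COLUMNS'='name')""".stripMargin)

    // Form 2: Hive-style CLUSTERED BY syntax, which this change also accepts.
    spark.sql(
      """CREATE TABLE IF NOT EXISTS sales_clustered (
        |  id INT, name STRING, city STRING)
        |CLUSTERED BY (name) INTO 4 BUCKETS
        |STORED AS carbondata""".stripMargin)

    spark.stop()
  }
}
```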
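For items 3 and 4, the point of keeping the hash stable is that a join between two tables bucketed on the same column can skip the shuffle. A rough way to check this, assuming the two hypothetical tables created above and a setup where Spark recognizes the CarbonData bucketing, is to look for an Exchange node in the physical plan:

```scala
import org.apache.spark.sql.SparkSession

object BucketJoinSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("bucket-join-sketch").getOrCreate()

    // Disable broadcast joins so a sort-merge join is planned and the
    // presence or absence of a shuffle is visible in the plan.
    spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")

    // Join two tables that are bucketed the same way on the join key.
    val joined = spark.sql(
      """SELECT a.id, a.name, b.city
        |FROM sales_tblprops a
        |JOIN sales_clustered b ON a.name = b.name""".stripMargin)

    // Inspect the physical plan. If the bucket hashing on both sides is
    // consistent, no "Exchange" (shuffle) node is expected on either side;
    // this expectation depends on the CarbonData and Spark versions in use.
    val plan = joined.queryExecution.executedPlan.toString()
    println(plan)
    assert(!plan.contains("Exchange"), "join on bucket column should not shuffle")

    spark.stop()
  }
}
```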

Does this PR introduce any user interface change?
No

Is any new testcase added?
Yes

This closes #3637
38 files changed
README.md

Apache CarbonData is an indexed columnar data store solution for fast analytics on big data platforms, e.g. Apache Hadoop, Apache Spark, etc.

You can find the latest CarbonData documentation and learn more at: http://carbondata.apache.org

CarbonData cwiki

Features

The CarbonData file format is a columnar store in HDFS. It has many features that a modern columnar format has, such as splittability, compression schemes, and complex data types, and it also has the following unique features:

  • Stores data along with index: this can significantly accelerate query performance and reduce I/O scans and CPU usage when the query contains filters. The CarbonData index consists of multiple levels of indices; a processing framework can leverage this index to reduce the tasks it needs to schedule and process, and it can also do skip scans at a finer-grained unit (called a blocklet) during task-side scanning instead of scanning the whole file.
  • Operable encoded data: by supporting efficient compression and global encoding schemes, CarbonData can query on compressed/encoded data, and the data is converted just before returning the results to the users, which is "late materialization".
  • Supports various use cases with one single data format: e.g. interactive OLAP-style query, sequential access (big scan), and random access (narrow scan).

Building CarbonData

CarbonData is built using Apache Maven.

Online Documentation

Integration

Other Technical Material

Fork and Contribute

This is an active open source project for everyone, and we are always open to people who want to use this system or contribute to it. The contribution guide describes how to contribute to CarbonData.

Contact us

To get involved in CarbonData:

About

Apache CarbonData is an open source project of The Apache Software Foundation (ASF).