[CARBONDATA-3830] Support reading Array and Struct of all primitive types from Presto

Why is this PR needed?
Currently, Presto cannot read stores containing complex data types. Sometimes it returns empty results and sometimes it throws an exception.

What changes were proposed in this PR?
Supported all 13 complex primitive types (including binary; refer to the test case added) with non-nested array and struct data types.

Supported complex types in the direct vector filling flow:
Currently, the Spark integration of CarbonData uses row-level filling for complex types instead of vector filling. But Presto supports only vector reading, so complex types need to be supported in vector filling; a sketch of the idea follows.
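A minimal sketch of direct vector filling for an ARRAY<INT> column, using plain arrays as stand-ins for CarbonColumnVector instances (all names here are illustrative, not CarbonData's actual API): the decoded page is copied into the child vector in one bulk pass, and per-row offsets record where each row's array starts, instead of materializing a row object per value.

```java
// Illustrative sketch only: arrays stand in for CarbonColumnVector instances.
public final class ArrayVectorFillSketch {

  // Fill an ARRAY<INT> column: record where each row's array starts, then
  // copy all element data into the child vector in one bulk pass.
  public static void fillArrayVector(int[] flattenedElements, int[] elementCountPerRow,
      int[] childVector, int[] rowOffsets) {
    int offset = 0;
    for (int row = 0; row < elementCountPerRow.length; row++) {
      rowOffsets[row] = offset;            // row's elements start here
      offset += elementCountPerRow[row];
    }
    // Single bulk copy of the page data: no per-row object materialization.
    System.arraycopy(flattenedElements, 0, childVector, 0, flattenedElements.length);
  }
}
```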

Supported complex primitive vector handling in the DIRECT_COMPRESS and ADAPTIVE_CODEC flows:
The encoding of every complex primitive type is either DIRECT_COMPRESS or ADAPTIVE_CODEC; a legacy encoding is never used. Because of this, vector filling for string and varchar (with/without local dictionary), binary, and date needs to be handled in DIRECT_COMPRESS. The parent column also comes as DIRECT_COMPRESS, and the data is extracted from the parent column page here.
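As a rough illustration of the extra type dispatch this requires, here is a hedged sketch. The page layout assumed below (length-prefixed bytes for string/binary, int day-offsets for date) and all class names are assumptions for illustration, not CarbonData's actual decoder API:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the per-type handling a DIRECT_COMPRESS decoder
// needs for complex primitive children that have no legacy-encoding path.
public final class DirectCompressFillSketch {
  enum PrimitiveType { STRING, BINARY, DATE }

  public static void fill(ByteBuffer page, int rowCount, PrimitiveType type,
      Object[] vector /* stand-in for a CarbonColumnVector */) {
    for (int row = 0; row < rowCount; row++) {
      switch (type) {
        case STRING: {
          int len = page.getInt();         // assumed length-prefixed layout
          byte[] bytes = new byte[len];
          page.get(bytes);
          vector[row] = new String(bytes, StandardCharsets.UTF_8);
          break;
        }
        case BINARY: {
          int len = page.getInt();
          byte[] bytes = new byte[len];
          page.get(bytes);
          vector[row] = bytes;             // raw bytes, no decoding
          break;
        }
        case DATE:
          vector[row] = page.getInt();     // assumed int day-offset storage
          break;
      }
    }
  }
}
```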

Supported a vector stack in the complex column's vectorInfo to store all the children vectors.

Keep a list of children vectors inside CarbonColumnVectorImpl.java.
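A minimal sketch of that bookkeeping, with illustrative names rather than the exact fields and methods of CarbonColumnVectorImpl:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the children-vector bookkeeping described above.
public class ComplexColumnVectorSketch {
  // One child vector per element type: ARRAY has one child,
  // STRUCT has one child per field.
  private final List<ComplexColumnVectorSketch> childrenVector = new ArrayList<>();

  public void addChildVector(ComplexColumnVectorSketch child) {
    childrenVector.add(child);
  }

  public List<ComplexColumnVectorSketch> getChildrenVector() {
    return childrenVector;
  }
}
```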

Support ComplexStreamReader to fill Presto ROW (struct) blocks and ARRAY blocks.
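A sketch of the block assembly, assuming the Presto SPI is on the classpath. ArrayBlock.fromElementBlock and RowBlock.fromFieldBlocks are Presto SPI factory methods, but the package name varies across Presto versions (e.g. com.facebook.presto.common.block vs io.prestosql.spi.block), so treat the imports and the class name here as assumptions:

```java
// Package name is version-dependent; shown here for prestodb.
import com.facebook.presto.common.block.ArrayBlock;
import com.facebook.presto.common.block.Block;
import com.facebook.presto.common.block.RowBlock;

import java.util.Optional;

public final class ComplexBlockSketch {
  // ARRAY: one element Block plus offsets; offsets has positionCount + 1
  // entries, and row i owns elements [offsets[i], offsets[i + 1]).
  static Block arrayBlock(int positionCount, boolean[] isNull, int[] offsets, Block elements) {
    return ArrayBlock.fromElementBlock(positionCount, Optional.of(isNull), offsets, elements);
  }

  // ROW (struct): one Block per field, all with the same position count.
  static Block rowBlock(int positionCount, boolean[] isNull, Block[] fields) {
    return RowBlock.fromFieldBlocks(positionCount, Optional.of(isNull), fields);
  }
}
```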

Handle null value filling by wrapping children vectors with ColumnarVectorWrapperDirect.
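An illustrative sketch of the wrapping idea (names are stand-ins, not ColumnarVectorWrapperDirect's actual API): the wrapper records nulls alongside the child data so the parent ROW/ARRAY block builder can emit null positions later.

```java
// Hypothetical null-aware wrapper around a child vector.
public class NullAwareVectorWrapperSketch {
  private final Object[] child;  // stand-in for the wrapped child vector
  private final boolean[] isNull;

  public NullAwareVectorWrapperSketch(Object[] child) {
    this.child = child;
    this.isNull = new boolean[child.length];
  }

  public void putNull(int rowId) {
    isNull[rowId] = true;        // mark, so block builders can emit a null position
  }

  public void putObject(int rowId, Object value) {
    child[rowId] = value;
  }

  public boolean isNullAt(int rowId) {
    return isNull[rowId];
  }
}
```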

Limitations / next work:
Some pending TODOs are:

Local dictionary needs to be handled for string/varchar columns, as the DIRECT_COMPRESS flow does not have that handling
Support can be added for maps of all primitive types
Support can be added for multilevel nested arrays and structs

Does this PR introduce any user interface change?
No

Is any new testcase added?
Yes [added test cases for all 13 primitive types with array and struct, null values, and more than one page of data]

This closes #3887

Co-authored-by: akkio-97 <akshay.nuthala@gmail.com>
README.md

Apache CarbonData is an indexed columnar data store solution for fast analytics on big data platforms, e.g. Apache Hadoop, Apache Spark, etc.

You can find the latest CarbonData document and learn more at: http://carbondata.apache.org

CarbonData cwiki

Features

The CarbonData file format is a columnar store in HDFS. It has many features that a modern columnar format has, such as splittability, compression schemes, and complex data types, and CarbonData has the following unique features:

  • Stores data along with index: this can significantly accelerate query performance and reduce I/O scans and CPU usage when there are filters in the query. The CarbonData index consists of multiple levels of indices; a processing framework can leverage this index to reduce the tasks it needs to schedule and process, and it can also do skip scans at a finer-grained unit (called a blocklet) during task-side scanning instead of scanning the whole file.
  • Operable encoded data: by supporting efficient compression and global encoding schemes, CarbonData can query directly on compressed/encoded data; the data is converted just before returning the results to the users, which is "late materialization".
  • Support for various use cases with one single data format: e.g. interactive OLAP-style query, sequential access (big scan), and random access (narrow scan).

Building CarbonData

CarbonData is built using Apache Maven. To build CarbonData, refer to the build documentation in the repository.

Online Documentation

Experimental Features

Some features are marked as experimental because the syntax/implementation might change in the future.

  1. Hybrid format table using Add Segment.
  2. Accelerating performance using MV on Parquet/ORC.
  3. Merge API for Spark DataFrame.
  4. Hive write for non-transactional table.

Integration

Other Technical Material

Fork and Contribute

This is an active open source project for everyone, and we are always open to people who want to use this system or contribute to it. This guide document introduces how to contribute to CarbonData.

Contact us

To get involved in CarbonData:

About

Apache CarbonData is an open source project of The Apache Software Foundation (ASF).