commit 4b594686071185b910f6dcc7e51708b43e62d541
author: ajantha-bhat <ajanthabhat@gmail.com>  Thu Apr 30 09:26:10 2020 +0530
committer: QiangCai <qiangcai@qq.com>  Thu Apr 30 17:28:34 2020 +0800
tree: f6617ef4ce99db69fe5e9f1115a58ff73dfde917
parent: 9929f3a30e6f544b8696e780fb448179abcdc6b0
[CARBONDATA-3788] Fix insert failure during global sort with huge data in new insert flow

Why is this PR needed?
Spark reuses the InternalRow in the global sort partition flow with huge data, because the RDD of InternalRow is persisted for global sort.

What changes were proposed in this PR?
Make a copy of the InternalRow and work on the copy before the last transform in the global sort partition flow. This was already done for the insert stage command (which also uses global sort partition).

This closes #3732
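The failure mode described above can be reproduced without Spark. The sketch below (all names hypothetical, not CarbonData code) models how Spark may hand out one reused mutable row per `next()` call: if such rows are collected for persistence without a copy, every cached entry aliases the same buffer, which is why the fix copies each InternalRow before the persisted transform.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Minimal sketch of the row-reuse pitfall: an iterator that reuses one
// mutable buffer per element, the way Spark reuses an InternalRow.
public class RowReuseSketch {
  // Hypothetical stand-in for an iterator of reused InternalRows.
  static Iterator<int[]> reusingIterator(final int[] values) {
    final int[] buffer = new int[1]; // one shared buffer, mutated in place
    return new Iterator<int[]>() {
      int i = 0;
      public boolean hasNext() { return i < values.length; }
      public int[] next() { buffer[0] = values[i++]; return buffer; }
    };
  }

  public static void main(String[] args) {
    // Persisting without a copy: every entry aliases the same buffer,
    // so all cached "rows" show the last value written.
    List<int[]> broken = new ArrayList<>();
    Iterator<int[]> it = reusingIterator(new int[] {1, 2, 3});
    while (it.hasNext()) broken.add(it.next());
    System.out.println(broken.get(0)[0]); // prints 3, not 1

    // The fix: copy each row before persisting it
    // (analogous to calling InternalRow.copy() in the PR).
    List<int[]> fixed = new ArrayList<>();
    it = reusingIterator(new int[] {1, 2, 3});
    while (it.hasNext()) fixed.add(it.next().clone());
    System.out.println(fixed.get(0)[0]); // prints 1
  }
}
```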
Apache CarbonData is an indexed columnar data store solution for fast analytics on big data platforms, e.g. Apache Hadoop, Apache Spark, etc.
You can find the latest CarbonData documentation and learn more at: http://carbondata.apache.org
The CarbonData file format is a columnar store in HDFS. It has many features that a modern columnar format has, such as splittable files, compression schemes, complex data types, etc., and CarbonData has the following unique features:
CarbonData is built using Apache Maven.
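As a sketch, a Maven build of the project typically looks like the following; the exact profiles and flags vary by CarbonData version, so check the project's build documentation before running it.

```shell
# Illustrative only: build the project, skipping tests for speed.
# Profile names depend on the CarbonData version being built.
mvn -DskipTests clean package
```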
This is an active open source project for everyone, and we are always open to people who want to use this system or contribute to it. This guide describes how to contribute to CarbonData.
To get involved in CarbonData:
Apache CarbonData is an open source project of The Apache Software Foundation (ASF).