commit 85264e751fbd4be80446df006466efe3c3248537
author: Nihal ojha <nihalnitn97@gmail.com>  Mon Jul 27 17:10:04 2020 +0530
committer: akashrn5 <akashnilugal@gmail.com>  Fri Aug 21 18:20:28 2020 +0530
tree: bca56a373e3937e2e99ca206db27ccf886451b5a
parent: cc59e74d9026048bd36335db30611f1bcd32fd9b
[CARBONDATA-3928] Handle strings longer than 32000 bytes as bad records

Why is this PR needed?
Currently, when a string's length exceeds 32000 bytes, the load fails. Such a record should be handled as a bad record instead, and the load should not fail.

What changes were proposed in this PR?
Instead of throwing an exception for strings longer than 32000 bytes, the value is now handled as a bad record and the load does not fail.

Does this PR introduce any user interface change?
No.

Is any new testcase added?
Yes.

This closes #3865
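The change described above can be sketched as a simple length check that routes oversized values to bad-record handling instead of aborting the load. This is an illustrative sketch only; the class and method names below are hypothetical and do not reflect CarbonData's actual internal API, and the 32000-byte limit is taken from the commit message.

```java
import java.nio.charset.StandardCharsets;

public class StringLengthCheck {
    // Maximum allowed byte length for a string column value,
    // per the limit stated in the commit message.
    static final int MAX_STRING_BYTES = 32000;

    // Returns true when the value should be routed to bad-record
    // handling (rather than throwing and failing the whole load).
    static boolean isBadRecord(String value) {
        return value != null
            && value.getBytes(StandardCharsets.UTF_8).length > MAX_STRING_BYTES;
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 32001; i++) {
            sb.append('x');
        }
        System.out.println(isBadRecord("hello"));      // false: within limit
        System.out.println(isBadRecord(sb.toString())); // true: exceeds 32000 bytes
    }
}
```

Note the check counts bytes, not characters: a multi-byte UTF-8 string can exceed the limit with fewer than 32000 characters.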
Apache CarbonData is an indexed columnar data store solution for fast analytics on big data platforms, e.g. Apache Hadoop, Apache Spark, etc.
You can find the latest CarbonData documentation and learn more at: http://carbondata.apache.org
The CarbonData file format is a columnar store in HDFS. It has many features of a modern columnar format, such as splittability, compression schemes, and complex data types, and CarbonData has the following unique features:
CarbonData is built using Apache Maven. To build CarbonData:
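A typical Maven build invocation looks like the following; check the project's build documentation for the exact command and supported profile flags (e.g. Spark version), as they may differ by release.

```shell
# Build CarbonData, skipping tests for a faster build
mvn -DskipTests clean package
```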
Some features are marked as experimental because the syntax/implementation might change in the future.
This is an active open source project for everyone, and we are always open to people who want to use this system or contribute to it. This guide describes how to contribute to CarbonData.
To get involved in CarbonData:
Apache CarbonData is an open source project of The Apache Software Foundation (ASF).