commit    00f64c6f32710a0e2beddfdb6403b2faa879e031
author    IceMimosa <chk19940609@gmail.com>    Tue Jan 07 13:24:57 2020 +0800
committer ajantha-bhat <ajanthabhat@gmail.com> Wed Apr 08 12:38:32 2020 +0530
tree      f41650f09fdfaad50b48166dad64f80b1b1d04ab
parent    89369613e1374489ed63e0ab358244c348b63918
[CARBONDATA-3565] Fix complex binary data broken issue when loading dataframe data

Why is this PR needed?
When binary data is written with DataOutputStream#writeDouble (or similar methods) and then loaded into a table via a Spark DataFrame (SQL), the data is corrupted (EF BF BD) when read back out.

What changes were proposed in this PR?
If the data is already a byte[], there is no need to convert it to a String and decode it back to byte[] again.

Does this PR introduce any user interface change?
No

Is any new testcase added?
Yes

This closes #3430
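The corruption described above can be reproduced outside CarbonData. The following standalone sketch (not the project's actual code path) shows why round-tripping raw bytes through a String breaks them: the 8 bytes produced by DataOutputStream#writeDouble are generally not valid UTF-8, so decoding replaces invalid sequences with U+FFFD, which re-encodes as EF BF BD.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class BinaryRoundTrip {
    public static void main(String[] args) throws Exception {
        // Write a double as raw bytes, as a user might before loading a table.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new DataOutputStream(bos).writeDouble(0.1);
        byte[] original = bos.toByteArray(); // 8 bytes, not valid UTF-8

        // The buggy path: byte[] -> String -> byte[]. Invalid UTF-8 sequences
        // are replaced with U+FFFD (encoded as EF BF BD), losing the data.
        byte[] roundTripped = new String(original, StandardCharsets.UTF_8)
                .getBytes(StandardCharsets.UTF_8);

        System.out.println(Arrays.equals(original, roundTripped)); // prints "false"
    }
}
```

The fix in this PR follows from the same observation: when the input is already a byte[], pass it through untouched instead of decoding and re-encoding it.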
Apache CarbonData is an indexed columnar data store solution for fast analytics on big data platforms such as Apache Hadoop and Apache Spark.
You can find the latest CarbonData documentation and learn more at: http://carbondata.apache.org
The CarbonData file format is a columnar store in HDFS. It has many features that a modern columnar format has, such as being splittable, supporting compression and complex data types, etc., and CarbonData has the following unique features:
CarbonData is built using Apache Maven.
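A minimal build sketch is shown below. The repository URL is the project's GitHub mirror; the exact Maven profiles (e.g. for a specific Spark version) are an assumption here and should be taken from the project's own build documentation.

```shell
# Clone the source (GitHub mirror of the Apache repository).
git clone https://github.com/apache/carbondata.git
cd carbondata

# Build, skipping tests; add the profile flags your environment needs
# (see the project's build docs for the supported Spark profiles).
mvn clean package -DskipTests
```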
This is an active open source project for everyone, and we are always open to people who want to use this system or contribute to it. This guide introduces how to contribute to CarbonData.
To get involved in CarbonData:
Apache CarbonData is an open source project of The Apache Software Foundation (ASF).