commit 1d552c5c4b6e88477335f021b9934385254be9db
author sychen <sychen@ctrip.com>  Mon Apr 22 01:04:52 2024 -0700
committer Dongjoon Hyun <dongjoon@apache.org>  Mon Apr 22 01:05:23 2024 -0700
tree 617c31612b93cdfc8dfbb7c5c0dbecd6a3f3b576
parent 2c3cbca49b7ff0809cc9778177436e6004ac6e77
ORC-1696: Fix ClassCastException when reading avro decimal type in benchmark

### What changes were proposed in this pull request?

This PR aims to fix `ClassCastException` when reading the avro decimal type in benchmark.

### Why are the changes needed?

ORC-1191 cast `object` to `double`, but the object's actual type is `ByteBuffer`, which causes the scan to fail.

```bash
java -jar core/target/orc-benchmarks-core-*-uber.jar scan data
```

```java
Exception in thread "main" java.lang.ClassCastException: class java.nio.HeapByteBuffer cannot be cast to class java.lang.Double (java.nio.HeapByteBuffer and java.lang.Double are in module java.base of loader 'bootstrap')
	at org.apache.orc.bench.core.convert.avro.AvroReader$DecimalConverter.convert(AvroReader.java:204)
	at org.apache.orc.bench.core.convert.avro.AvroReader.nextBatch(AvroReader.java:69)
	at org.apache.orc.bench.core.convert.ScanVariants.run(ScanVariants.java:92)
	at org.apache.orc.bench.core.Driver.main(Driver.java:64)
```

### How was this patch tested?

Local test:

```bash
java -jar core/target/orc-benchmarks-core-*-uber.jar scan data
```

Output:

```
data/generated/taxi/avro.snappy rows: 22758236 batches: 22225
```

### Was this patch authored or co-authored using generative AI tooling?

No

Closes #1898 from cxzl25/ORC-1696.

Authored-by: sychen <sychen@ctrip.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(cherry picked from commit d4f13dc284fc12b7ff109493652473faec8724d3)
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
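The root cause is that Avro delivers a decimal logical value as a `ByteBuffer` holding the two's-complement bytes of the unscaled value, so the benchmark reader must decode it rather than cast it to `Double`. Below is a minimal sketch of that decoding, assuming Avro's standard bytes/fixed decimal representation; `decodeAvroDecimal` is an illustrative helper, not the actual patch:

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.nio.ByteBuffer;

public final class AvroDecimalDecode {
  // Avro stores a decimal logical type as the two's-complement bytes of its
  // unscaled value; the scale comes from the schema, not from the payload.
  static BigDecimal decodeAvroDecimal(ByteBuffer buffer, int scale) {
    byte[] bytes = new byte[buffer.remaining()];
    buffer.duplicate().get(bytes); // copy without moving the buffer's position
    return new BigDecimal(new BigInteger(bytes), scale);
  }

  public static void main(String[] args) {
    // Unscaled value 123456 with scale 2 represents 1234.56.
    ByteBuffer raw = ByteBuffer.wrap(BigInteger.valueOf(123456).toByteArray());
    System.out.println(decodeAvroDecimal(raw, 2)); // prints 1234.56
  }
}
```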
ORC is a self-describing, type-aware columnar file format designed for Hadoop workloads. It is optimized for large streaming reads, but with integrated support for finding required rows quickly. Storing data in a columnar format lets the reader read, decompress, and process only the values that are required for the current query. Because ORC files are type-aware, the writer chooses the most appropriate encoding for the type and builds an internal index as the file is written. Predicate pushdown uses those indexes to determine which stripes in a file need to be read for a particular query, and the row indexes can narrow the search to a particular set of 10,000 rows. ORC supports the complete set of types in Hive, including the complex types: structs, lists, maps, and unions.
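To illustrate predicate pushdown from the Java API, a reader can attach a `SearchArgument` so that stripes and row groups whose statistics rule out a match are skipped entirely. A minimal sketch, assuming a file `example.orc` with a LONG column named `fare` (the file and column names are hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
import org.apache.hadoop.hive.ql.io.sarg.PredicateLeaf;
import org.apache.hadoop.hive.ql.io.sarg.SearchArgument;
import org.apache.hadoop.hive.ql.io.sarg.SearchArgumentFactory;
import org.apache.orc.OrcFile;
import org.apache.orc.Reader;
import org.apache.orc.RecordReader;

public final class PushdownExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Reader reader = OrcFile.createReader(new Path("example.orc"),
        OrcFile.readerOptions(conf));

    // Row groups whose index statistics show "fare < 100" is impossible
    // are skipped without being read or decompressed.
    SearchArgument sarg = SearchArgumentFactory.newBuilder()
        .startAnd()
        .lessThan("fare", PredicateLeaf.Type.LONG, 100L)
        .end()
        .build();

    RecordReader rows = reader.rows(
        reader.options().searchArgument(sarg, new String[]{"fare"}));
    VectorizedRowBatch batch = reader.getSchema().createRowBatch();
    while (rows.nextBatch(batch)) {
      // process batch.size rows per iteration
    }
    rows.close();
  }
}
```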
This project includes both a Java library and a C++ library for reading and writing the Optimized Row Columnar (ORC) file format. The C++ and Java libraries are completely independent of each other and will each read all versions of ORC files.
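For a feel of the Java side, here is a hedged sketch of writing a small file with the vectorized writer API; the file name `my-file.orc` and the `struct<x:int,y:string>` schema are placeholders:

```java
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
import org.apache.orc.OrcFile;
import org.apache.orc.TypeDescription;
import org.apache.orc.Writer;

public final class WriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    TypeDescription schema = TypeDescription.fromString("struct<x:int,y:string>");
    Writer writer = OrcFile.createWriter(new Path("my-file.orc"),
        OrcFile.writerOptions(conf).setSchema(schema));

    VectorizedRowBatch batch = schema.createRowBatch();
    LongColumnVector x = (LongColumnVector) batch.cols[0];
    BytesColumnVector y = (BytesColumnVector) batch.cols[1];
    for (int r = 0; r < 10_000; ++r) {
      int row = batch.size++;
      x.vector[row] = r;
      y.setVal(row, ("row " + r).getBytes(StandardCharsets.UTF_8));
      if (batch.size == batch.getMaxSize()) { // flush each full batch
        writer.addRowBatch(batch);
        batch.reset();
      }
    }
    if (batch.size != 0) {
      writer.addRowBatch(batch); // flush the partial final batch
    }
    writer.close();
  }
}
```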
Releases:
The current build status:
Bug tracking: Apache Jira
The subdirectories are:
To build a release version with debug information:
```bash
% mkdir build
% cd build
% cmake ..
% make package
% make test-out
```
To build a debug version:
```bash
% mkdir build
% cd build
% cmake .. -DCMAKE_BUILD_TYPE=DEBUG
% make package
% make test-out
```
To build a release version without debug information:
```bash
% mkdir build
% cd build
% cmake .. -DCMAKE_BUILD_TYPE=RELEASE
% make package
% make test-out
```
To build only the Java library:
```bash
% cd java
% ./mvnw package
```
To build only the C++ library:
```bash
% mkdir build
% cd build
% cmake .. -DBUILD_JAVA=OFF
% make package
% make test-out
```