commit ac7fe07d436b000c853ab77695adec9359a9cc72
author    Owen O'Malley <omalley@apache.org>  Fri Jul 08 18:04:09 2016 -0700
committer Owen O'Malley <omalley@apache.org>  Fri Jul 08 18:04:09 2016 -0700
tree      b17e32bee023da5cdba4d84f70dfe7abe98275ad
parent    1b5544f7eb1d046e3d59449c3de3a608453f41d5

Update version after ORC 1.1.2 release.

Signed-off-by: Owen O'Malley <omalley@apache.org>
ORC is a self-describing, type-aware columnar file format designed for Hadoop workloads. It is optimized for large streaming reads, but with integrated support for finding required rows quickly. Storing data in a columnar format lets the reader read, decompress, and process only the values that are required for the current query. Because ORC files are type-aware, the writer chooses the most appropriate encoding for the type and builds an internal index as the file is written. Predicate pushdown uses those indexes to determine which stripes in a file need to be read for a particular query, and the row indexes can narrow the search to a particular set of 10,000 rows. ORC supports the complete set of types in Hive, including the complex types: structs, lists, maps, and unions.
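The pruning described above can be illustrated with a small sketch. This is not the ORC API: `GroupStats` and `groupsToRead` are hypothetical names standing in for ORC's per-row-group min/max statistics, showing how a reader can skip whole groups of rows whose value range cannot match a predicate.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative only -- not the ORC API. A toy index with per-group
 *  min/max statistics, showing how predicate pushdown skips groups
 *  of rows without reading or decompressing them. */
public class MinMaxPruning {
    // Hypothetical stand-in for one index entry covering a group of rows.
    record GroupStats(int min, int max) {}

    /** Return the indices of row groups whose [min, max] range could
     *  contain the predicate value; every other group is skipped. */
    static List<Integer> groupsToRead(List<GroupStats> index, int value) {
        List<Integer> selected = new ArrayList<>();
        for (int i = 0; i < index.size(); i++) {
            GroupStats s = index.get(i);
            if (value >= s.min() && value <= s.max()) {
                selected.add(i);
            }
        }
        return selected;
    }

    public static void main(String[] args) {
        List<GroupStats> index = List.of(
            new GroupStats(0, 9_999),       // group 0
            new GroupStats(10_000, 19_999), // group 1
            new GroupStats(20_000, 29_999)  // group 2
        );
        // A query like "WHERE id = 12345" only needs group 1.
        System.out.println(groupsToRead(index, 12_345)); // prints [1]
    }
}
```

In the real format the statistics are richer (per-column min/max, null counts, optional bloom filters) and the same idea is applied first at the stripe level, then within a stripe at the row-index level.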
This project includes a Java library for reading and writing the Optimized Row Columnar (ORC) file format, and a C++ library for reading it. The C++ and Java libraries are completely independent of each other, and each will read all versions of ORC files.
To build a release version with debug information:
```shell
% mkdir build
% cd build
% cmake ..
% make package
% make test-out
```
To build a debug version:
```shell
% mkdir build
% cd build
% cmake .. -DCMAKE_BUILD_TYPE=DEBUG
% make package
% make test-out
```
To build a release version without debug information:
```shell
% mkdir build
% cd build
% cmake .. -DCMAKE_BUILD_TYPE=RELEASE
% make package
% make test-out
```
To build only the Java library:
```shell
% cd java
% mvn package
```
To build only the C++ library:
```shell
% mkdir build
% cd build
% cmake .. -DBUILD_JAVA=OFF
% make package
% make test-out
```