| commit | 486433f56fa3a63326b2313e0d19396e1be41b6c | |
|---|---|---|
| author | Aliaksei Sandryhaila \<aliaksei.sandryhaila@hp.com\> | Mon Jul 06 10:15:29 2015 -0700 |
| committer | Owen O'Malley \<omalley@apache.org\> | Mon Jul 06 14:27:52 2015 -0700 |
| tree | 0c4a21fe42b4ea86bd08271b499c403d21578ff6 | |
| parent | 388fb8e9034d3e6d0e21bd3cc8297e304c66c1f9 | |
ORC-18. Replaced Buffer with DataBuffer<char> and converted InputStream::read() method to posix style. (asandryh via omalley)
ORC is a self-describing, type-aware columnar file format designed for Hadoop workloads. It is optimized for large streaming reads, but with integrated support for finding required rows quickly. Storing data in a columnar format lets the reader read, decompress, and process only the values that are required for the current query. Because ORC files are type-aware, the writer chooses the most appropriate encoding for the type and builds an internal index as the file is written. Predicate pushdown uses those indexes to determine which stripes in a file need to be read for a particular query, and the row indexes can narrow the search to a particular set of 10,000 rows. ORC supports the complete set of types in Hive, including the complex types: structs, lists, maps, and unions.
This library allows C++ programs to read and write the Optimized Row Columnar (ORC) file format.
To compile:

```shell
export TZ=America/Los_Angeles
mkdir build
cd build
cmake ..
make
make test-out
```