| commit | b08a17c469a0980c718b9f24d46c94c03caae88a | |
|---|---|---|
| author | Yiqun Zhang <guiyanakuang@gmail.com> | Fri Dec 03 15:29:58 2021 +0800 |
| committer | Dongjoon Hyun <dongjoon@apache.org> | Thu Dec 02 23:30:40 2021 -0800 |
| tree | f254f2813c59bdb9ac0484eb0dfa6a37655f1fa7 | |
| parent | 3f1e57cf1cebe58027c1bd48c09eef4e9717a9e3 | |
ORC-1053: Fix the time zone offset precision used when the convert tool converts `LocalDateTime` to `Timestamp`, which was not consistent with ORC's internal default precision (#967)
### What changes were proposed in this pull request?
```java
// Offset computed by the convert tool (zone rules for the given date): 17762 seconds
int toolOffset = ((LocalDateTime) temporalAccessor)
    .atZone(TimeZone.getTimeZone("America/New_York").toZoneId())
    .getOffset().getTotalSeconds();
// Offset computed internally by ORC (the zone's raw offset): 18000 seconds
int orcInternalOffset = TimeZone.getTimeZone("America/New_York").getRawOffset() / 1000;
```
This PR changes the `LocalDateTime`-to-`Timestamp` conversion so that the time zone offset precision used by the convert tool matches ORC's internal precision.
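To see the mismatch in isolation, here is a minimal, self-contained sketch (not code from the patch; the class name and the pre-1883 example date are illustrative). For very old dates, the zone rules for `America/New_York` yield its local mean time offset, while ORC internally applies the zone's raw offset:

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.util.TimeZone;

public class OffsetMismatchSketch {
    public static void main(String[] args) {
        TimeZone tz = TimeZone.getTimeZone("America/New_York");
        // A date old enough that the zone rules fall back to local mean time.
        LocalDateTime ldt = LocalDateTime.of(1, 1, 1, 0, 0, 0);
        // Rule-based offset: -17762 seconds (-4:56:02, local mean time).
        int ruleOffset = ldt.atZone(tz.toZoneId()).getOffset().getTotalSeconds();
        // Raw offset, as used by ORC internals: -18000 seconds (-5:00).
        int rawOffset = tz.getRawOffset() / 1000;
        // Converting with the raw offset keeps the result consistent with ORC.
        Timestamp ts = Timestamp.from(ldt.toInstant(ZoneOffset.ofTotalSeconds(rawOffset)));
        System.out.println(ruleOffset + " vs " + rawOffset + " -> " + ts);
    }
}
```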
### Why are the changes needed?
This avoids inconsistencies between the data produced by the convert tool and the expected data.
### How was this patch tested?
Added an issue-specific unit test.
(cherry picked from commit e787b8b78555f11c93ee89181189c88d55a8bbdc)
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
ORC is a self-describing type-aware columnar file format designed for Hadoop workloads. It is optimized for large streaming reads, but with integrated support for finding required rows quickly. Storing data in a columnar format lets the reader read, decompress, and process only the values that are required for the current query. Because ORC files are type-aware, the writer chooses the most appropriate encoding for the type and builds an internal index as the file is written. Predicate pushdown uses those indexes to determine which stripes in a file need to be read for a particular query, and the row indexes can narrow the search to a particular set of 10,000 rows. ORC supports the complete set of types in Hive, including the complex types: structs, lists, maps, and unions.
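As a rough illustration of predicate pushdown through the Java API (the file name `data.orc` and the `bigint` column `x` are assumptions for this example, not anything from the text), the reader can be handed a search argument so it skips stripes and 10,000-row groups whose index statistics cannot match:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
import org.apache.hadoop.hive.ql.io.sarg.PredicateLeaf;
import org.apache.hadoop.hive.ql.io.sarg.SearchArgumentFactory;
import org.apache.orc.OrcFile;
import org.apache.orc.Reader;
import org.apache.orc.RecordReader;

public class PushdownSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Reader reader = OrcFile.createReader(new Path("data.orc"),
        OrcFile.readerOptions(conf));
    // Ask the reader to skip stripes/row groups where x < 100 is impossible.
    Reader.Options options = reader.options().searchArgument(
        SearchArgumentFactory.newBuilder()
            .startAnd()
            .lessThan("x", PredicateLeaf.Type.LONG, 100L)
            .end()
            .build(),
        new String[]{"x"});
    RecordReader rows = reader.rows(options);
    VectorizedRowBatch batch = reader.getSchema().createRowBatch();
    while (rows.nextBatch(batch)) {
      // Process the batch.size candidate rows here.
    }
    rows.close();
  }
}
```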
This project includes both a Java library and a C++ library for reading and writing the Optimized Row Columnar (ORC) file format. The C++ and Java libraries are completely independent of each other and will each read all versions of ORC files. But the C++ library only writes the original (Hive 0.11) version of ORC files, and will be extended in the future.
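On the Java side, a minimal writer sketch (the file name `sketch.orc` and the single-column schema are assumptions for illustration) shows the library picking the encoding and building the index as row batches are appended:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
import org.apache.orc.OrcFile;
import org.apache.orc.TypeDescription;
import org.apache.orc.Writer;

public class WriteSketch {
  public static void main(String[] args) throws Exception {
    TypeDescription schema = TypeDescription.fromString("struct<x:bigint>");
    Writer writer = OrcFile.createWriter(new Path("sketch.orc"),
        OrcFile.writerOptions(new Configuration()).setSchema(schema));
    VectorizedRowBatch batch = schema.createRowBatch();
    LongColumnVector x = (LongColumnVector) batch.cols[0];
    for (long i = 0; i < 10_000; i++) {
      x.vector[batch.size++] = i;
      // Flush the batch whenever it fills up.
      if (batch.size == batch.getMaxSize()) {
        writer.addRowBatch(batch);
        batch.reset();
      }
    }
    if (batch.size > 0) {
      writer.addRowBatch(batch);
    }
    writer.close();
  }
}
```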
Bug tracking: Apache Jira
The subdirectories include `c++`, which holds the C++ library, and `java`, which holds the Java library.
To build a release version with debug information:
```shell
% mkdir build
% cd build
% cmake ..
% make package
% make test-out
```
To build a debug version:
```shell
% mkdir build
% cd build
% cmake .. -DCMAKE_BUILD_TYPE=DEBUG
% make package
% make test-out
```
To build a release version without debug information:
```shell
% mkdir build
% cd build
% cmake .. -DCMAKE_BUILD_TYPE=RELEASE
% make package
% make test-out
```
To build only the Java library:
```shell
% cd java
% ./mvnw package
```
To build only the C++ library:
```shell
% mkdir build
% cd build
% cmake .. -DBUILD_JAVA=OFF
% make package
% make test-out
```