ORC-1053: Fix `LocalDateTime` to `Timestamp` conversion in the convert tool so that the time zone offset precision is consistent with ORC's internal default precision (#967)

### What changes were proposed in this pull request?

```java
// offset computed by the convert tool: 17762
int toolOffset = ((LocalDateTime) temporalAccessor)
    .atZone(TimeZone.getTimeZone("America/New_York").toZoneId())
    .getOffset().getTotalSeconds();

// offset computed internally by ORC: 18000
int orcInternalOffset = TimeZone.getTimeZone("America/New_York").getRawOffset() / 1000;
```

This PR modifies the implementation of the `LocalDateTime` to `Timestamp` conversion so that the time zone offset used during conversion is consistent with the offset ORC computes internally.
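To illustrate the discrepancy (this is a standalone sketch, not the actual patch; the date chosen here is an assumption to reproduce the mismatch): for instants before the 1883 standardization of US time zones, `java.time` zone rules fall back to local mean time, so `getTotalSeconds()` and `getRawOffset()` disagree. The magnitudes match the 17762 vs 18000 quoted above; `getTotalSeconds()` reports them as negative for zones west of UTC.

```java
import java.time.LocalDateTime;
import java.util.TimeZone;

public class OffsetMismatchDemo {
    public static void main(String[] args) {
        TimeZone tz = TimeZone.getTimeZone("America/New_York");
        // A pre-1883 date: java.time zone rules use local mean time (LMT) here.
        LocalDateTime ldt = LocalDateTime.of(1800, 1, 1, 0, 0);

        // Offset from the zone rules, as the convert tool computed it
        // (LMT for America/New_York is -4:56:02, i.e. -17762 seconds).
        int toolOffset = ldt.atZone(tz.toZoneId()).getOffset().getTotalSeconds();

        // Raw offset, as ORC computes it internally (-5:00, i.e. -18000 seconds).
        int orcInternalOffset = tz.getRawOffset() / 1000;

        System.out.println("tool offset: " + toolOffset);                // -17762
        System.out.println("orc internal offset: " + orcInternalOffset); // -18000
    }
}
```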

### Why are the changes needed?
Avoids inconsistencies between the data converted by the convert tools and the data users expect.

### How was this patch tested?
Added a unit test covering this issue.

(cherry picked from commit e787b8b78555f11c93ee89181189c88d55a8bbdc)
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2 files changed

Apache ORC

ORC is a self-describing type-aware columnar file format designed for Hadoop workloads. It is optimized for large streaming reads, but with integrated support for finding required rows quickly. Storing data in a columnar format lets the reader read, decompress, and process only the values that are required for the current query. Because ORC files are type-aware, the writer chooses the most appropriate encoding for the type and builds an internal index as the file is written. Predicate pushdown uses those indexes to determine which stripes in a file need to be read for a particular query and the row indexes can narrow the search to a particular set of 10,000 rows. ORC supports the complete set of types in Hive, including the complex types: structs, lists, maps, and unions.

ORC File Library

This project includes both a Java library and a C++ library for reading and writing the Optimized Row Columnar (ORC) file format. The C++ and Java libraries are completely independent of each other and will each read all versions of ORC files. But the C++ library only writes the original (Hive 0.11) version of ORC files, and will be extended in the future.

Releases:

  • Latest: Apache ORC releases
  • Maven Central
  • Downloads: Apache ORC downloads

The current build status:

  • Main branch build status
  • Pull request build status

Bug tracking: Apache Jira

The subdirectories are:

  • c++ - the c++ reader and writer
  • cmake_modules - the cmake modules
  • docker - docker scripts to build and test on various linuxes
  • examples - various ORC example files that are used to test compatibility
  • java - the java reader and writer
  • proto - the protocol buffer definition for the ORC metadata
  • site - the website and documentation
  • tools - the c++ tools for reading and inspecting ORC files

Building

  • Install Java 1.8 or higher
  • Install Maven 3.6.3 or higher
  • Install CMake

To build a release version with debug information:

```shell
% mkdir build
% cd build
% cmake ..
% make package
% make test-out
```

To build a debug version:

```shell
% mkdir build
% cd build
% cmake .. -DCMAKE_BUILD_TYPE=DEBUG
% make package
% make test-out
```

To build a release version without debug information:

```shell
% mkdir build
% cd build
% cmake .. -DCMAKE_BUILD_TYPE=RELEASE
% make package
% make test-out
```

To build only the Java library:

```shell
% cd java
% ./mvnw package
```

To build only the C++ library:

```shell
% mkdir build
% cd build
% cmake .. -DBUILD_JAVA=OFF
% make package
% make test-out
```