IMPALA-8525: preads should use hdfsPreadFully rather than hdfsPread

Modifies HdfsFileReader so that it calls hdfsPreadFully instead of
hdfsPread. hdfsPreadFully is a new libhdfs API introduced by HDFS-14564
(Add libhdfs APIs for readFully; add readFully to
ByteBufferPositionedReadable). hdfsPreadFully improves the performance
of preads, especially when reading data from S3. The major difference
between the two calls is that hdfsPreadFully is guaranteed to read all
of the requested bytes, whereas hdfsPread may read fewer bytes than
requested.
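
The contract difference shows up directly at the call site. Below is a
minimal C++ sketch against the libhdfs C API; the wrapper function is
illustrative rather than Impala code, and assumes the HDFS-14564
convention that hdfsPreadFully returns 0 on success and -1 on error:

    #include <hdfs.h>  // libhdfs C API: hdfsPread, hdfsPreadFully

    // hdfsPread may return fewer bytes than requested, so a caller that
    // needs the whole range has to loop:
    int PreadWholeRange(hdfsFS fs, hdfsFile file, tOffset pos, char* buf,
                        tSize len) {
      tSize total = 0;
      while (total < len) {
        tSize n = hdfsPread(fs, file, pos + total, buf + total, len - total);
        if (n <= 0) return -1;  // error or unexpected end of file
        total += n;
      }
      return 0;
    }

    // hdfsPreadFully reads the entire range in one call or fails:
    //   if (hdfsPreadFully(fs, file, pos, buf, len) != 0) { /* error */ }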

hdfsPreadFully reduces the number of JNI array allocations needed when
reading data from S3. When any read method in libhdfs is called, the
method allocates a Java array whose size is equal to the amount of data
requested. The issue is that Java's InputStream#read only guarantees
that it will read up to the amount of data requested, which can cause a
libhdfs read request to allocate a large Java array that the read only
partially fills. PositionedReadable#readFully, on the other hand,
guarantees that all requested data will be read, preventing any
unnecessary JNI array allocations.
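
For illustration, the following is roughly the shape of the JNI work
behind a libhdfs pread. This is a simplified sketch, not the actual
libhdfs source; the stream object and method ID are assumed to be
resolved already, and exception handling is omitted:

    #include <jni.h>

    // The native side must allocate a Java byte[] as large as the
    // request before the Java stream can fill it.
    jint PreadSketch(JNIEnv* env, jobject stream, jmethodID read_mid,
                     jlong pos, char* buffer, jint length) {
      jbyteArray jbuf = env->NewByteArray(length);  // sized to the request
      if (jbuf == nullptr) return -1;
      // An InputStream-style read may return fewer than 'length' bytes,
      // leaving most of 'jbuf' unused; a readFully-style call always
      // fills it, so the allocation is never wasted.
      jint n = env->CallIntMethod(stream, read_mid, pos, jbuf, 0, length);
      if (n > 0) {
        env->GetByteArrayRegion(jbuf, 0, n,
                                reinterpret_cast<jbyte*>(buffer));
      }
      env->DeleteLocalRef(jbuf);
      return n;
    }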

hdfsPreadFully improves the effectiveness of
fs.s3a.experimental.input.fadvise=RANDOM (HADOOP-13203). S3A recommends
setting fadvise=RANDOM for random reads, which are common in Impala
when reading Parquet or ORC files. With fadvise=RANDOM, the HTTP GET
request that reads the S3 data requests only the bytes bounded by the
parameters of the current read call (e.g. for
'read(long position, ..., int length)' it requests 'length' bytes). The
chunk-size optimization in HdfsFileReader therefore hurts performance
under fadvise=RANDOM, because each HTTP GET requests only 'chunk-size'
bytes at a time; this is why the patch removes the chunk-size
optimization as well. hdfsPreadFully helps here because all the data in
the scan range is requested by a single HTTP GET.
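
To make the request counts concrete (the sizes here are hypothetical,
chosen only for easy arithmetic): under fadvise=RANDOM, an 8 MB scan
range read through a 128 KB chunked loop costs 8 MB / 128 KB = 64 HTTP
GETs, while a single hdfsPreadFully call costs one. A sketch, not the
actual HdfsFileReader code:

    #include <algorithm>
    #include <hdfs.h>

    // Old approach (sketch): one hdfsPread, and thus one ranged HTTP GET
    // under fadvise=RANDOM, per chunk: 8 MB / 128 KB = 64 GETs.
    void ReadScanRangeChunked(hdfsFS fs, hdfsFile file, tOffset start,
                              tSize len, tSize chunk, char* buf) {
      tSize off = 0;
      while (off < len) {
        tSize n = hdfsPread(fs, file, start + off, buf + off,
                            std::min(chunk, len - off));
        if (n <= 0) return;  // error handling elided
        off += n;
      }
    }

    // New approach (sketch): the whole scan range in one call => one GET.
    void ReadScanRangeFully(hdfsFS fs, hdfsFile file, tOffset start,
                            tSize len, char* buf) {
      if (hdfsPreadFully(fs, file, start, buf, len) != 0) {
        // handle error
      }
    }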

Since hdfsPreadFully improves S3 read performance, this patch enables
preads for S3A files by default. Even with fadvise=SEQUENTIAL,
hdfsPreadFully still improves performance because it avoids unnecessary
JNI allocation overhead.

The chunk-size optimization (added in
https://gerrit.cloudera.org/#/c/63/) is no longer necessary after this
patch, since hdfsPreadFully prevents any unnecessary array allocations.
Furthermore, the optimization was likely added to work around overhead
that has since been fixed by HDFS-14285.

Fixes a bug introduced by IMPALA-8884 where the
'impala-server.io-mgr.queue-$i.read-size' statistics were updated with
the chunk size passed to HdfsFileReader::ReadFromPosInternal, which is
not necessarily the amount of data actually read.
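
A minimal sketch of the metric fix, with hypothetical names standing in
for the actual Impala code:

    #include <cstdint>

    // Stand-in for the IoMgr read-size histogram update; illustrative only.
    static void UpdateReadSizeMetric(int64_t bytes) { (void)bytes; }

    static void OnReadCompleted(int64_t bytes_requested, int64_t bytes_read) {
      // Before the fix the metric recorded 'bytes_requested' (the chunk
      // size passed to ReadFromPosInternal), even for short reads.
      // After the fix it records what was actually read:
      UpdateReadSizeMetric(bytes_read);
      (void)bytes_requested;  // no longer used for the metric
    }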

Testing:
* Ran core tests
* Ran core tests on S3
* Ad-hoc functional and performance testing on ABFS; no perf regression
observed; planning to further investigate the interaction between
hdfsPreadFully + ABFS in a future JIRA

Change-Id: I29ea34897096bc790abdeb98073a47f1c4c10feb
Reviewed-on: http://gerrit.cloudera.org:8080/14635
Reviewed-by: Sahil Takiar <stakiar@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>

README.md

Welcome to Impala

Lightning-fast, distributed SQL queries for petabytes of data stored in Apache Hadoop clusters.

Impala is a modern, massively distributed, massively parallel C++ query engine that lets you analyze, transform and combine data from a variety of data sources. Highlights include:

  • Best of breed performance and scalability.
  • Support for data stored in HDFS, Apache HBase and Amazon S3.
  • Wide analytic SQL support, including window functions and subqueries.
  • On-the-fly code generation using LLVM to generate CPU-efficient code tailored specifically to each individual query.
  • Support for the most commonly-used Hadoop file formats, including Apache Parquet.
  • Apache-licensed, 100% open source.

More about Impala

To learn more about Impala as a business user, or to try Impala live or in a VM, please visit the Impala homepage.

If you are interested in contributing to Impala as a developer, or learning more about Impala's internals and architecture, visit the Impala wiki.

Supported Platforms

Impala only supports Linux at the moment.

Export Control Notice

This distribution uses cryptographic software and may be subject to export controls. Please refer to EXPORT_CONTROL.md for more information.

Build Instructions

See bin/bootstrap_build.sh.

Detailed Build Notes

Impala can be built with pre-built components or components downloaded from S3. The components needed to build Impala are Apache Hadoop, Hive, HBase, and Sentry. If you need to manually override the locations or versions of these components, you can do so through the environment variables and scripts listed below.

Scripts and directories

| Location | Purpose |
|----------|---------|
| bin/impala-config.sh | This script must be sourced to set up all environment variables properly to allow other scripts to work |
| bin/impala-config-local.sh | A script can be created in this location to set local overrides for any environment variables |
| bin/impala-config-branch.sh | A version of the above that can be checked into a branch for convenience. |
| bin/bootstrap_build.sh | A helper script to bootstrap some of the build requirements. |
| bin/bootstrap_development.sh | A helper script to bootstrap a developer environment. Please read it before using. |
| be/build/ | Impala build output goes here. |
| be/generated-sources/ | Thrift and other generated source will be found here. |
Build Related Variables

| Environment variable | Default value | Description |
|----------------------|---------------|-------------|
| IMPALA_HOME | | Top level Impala directory |
| IMPALA_TOOLCHAIN | "${IMPALA_HOME}/toolchain" | Native toolchain directory (for compilers, libraries, etc.) |
| SKIP_TOOLCHAIN_BOOTSTRAP | "false" | Skips downloading the toolchain and any Python dependencies if "true" |
| CDH_BUILD_NUMBER | | Identifier to indicate the CDH build number |
| CDH_COMPONENTS_HOME | "${IMPALA_HOME}/toolchain/cdh_components-${CDH_BUILD_NUMBER}" | Location of the CDH components within the toolchain. |
| CDH_MAJOR_VERSION | "5" | Identifier used to uniqueify paths for potentially incompatible component builds. |
| IMPALA_CONFIG_SOURCED | "1" | Set by ${IMPALA_HOME}/bin/impala-config.sh (internal use) |
| JAVA_HOME | "/usr/lib/jvm/${JAVA_VERSION}" | Used to locate Java |
| JAVA_VERSION | "java-7-oracle-amd64" | Can override to set a local Java version. |
| JAVA | "${JAVA_HOME}/bin/java" | Java binary location. |
| CLASSPATH | | See bin/set-classpath.sh for details. |
| PYTHONPATH | | Will be changed to include: "${IMPALA_HOME}/shell/gen-py" "${IMPALA_HOME}/testdata" "${THRIFT_HOME}/python/lib/python2.7/site-packages" "${HIVE_HOME}/lib/py" "${IMPALA_HOME}/shell/ext-py/prettytable-0.7.1/dist/prettytable-0.7.1" "${IMPALA_HOME}/shell/ext-py/sasl-0.1.1/dist/sasl-0.1.1-py2.7-linux-x "${IMPALA_HOME}/shell/ext-py/sqlparse-0.1.19/dist/sqlparse-0.1.19-py2 |
Source Directories for Impala

| Environment variable | Default value | Description |
|----------------------|---------------|-------------|
| IMPALA_BE_DIR | "${IMPALA_HOME}/be" | Backend directory. Build output is also stored here. |
| IMPALA_FE_DIR | "${IMPALA_HOME}/fe" | Frontend directory |
| IMPALA_COMMON_DIR | "${IMPALA_HOME}/common" | Common code (thrift, function registry) |
Various Compilation Settings

| Environment variable | Default value | Description |
|----------------------|---------------|-------------|
| IMPALA_BUILD_THREADS | "8" or set to number of processors by default. | Used for make -j and distcc -j settings. |
| IMPALA_MAKE_FLAGS | "" | Any extra settings to pass to make. Also used when copying udfs / udas into HDFS. |
| USE_SYSTEM_GCC | "0" | If set to any other value, directs cmake to not set GCC_ROOT, CMAKE_C_COMPILER, CMAKE_CXX_COMPILER, as well as setting TOOLCHAIN_LINK_FLAGS |
| IMPALA_CXX_COMPILER | "default" | Used by cmake (cmake_modules/toolchain and clang_toolchain.cmake) to select gcc / clang |
| USE_GOLD_LINKER | "true" | Directs backend cmake to use gold. |
| IS_OSX | "false" | (Experimental) currently only used to disable Kudu. |
Dependencies

| Environment variable | Default value | Description |
|----------------------|---------------|-------------|
| HADOOP_HOME | "${CDH_COMPONENTS_HOME}/hadoop-${IMPALA_HADOOP_VERSION}/" | Used to locate Hadoop |
| HADOOP_INCLUDE_DIR | "${HADOOP_HOME}/include" | For 'hdfs.h' |
| HADOOP_LIB_DIR | "${HADOOP_HOME}/lib" | For 'libhdfs.a' or 'libhdfs.so' |
| HIVE_HOME | "${CDH_COMPONENTS_HOME}/hive-${IMPALA_HIVE_VERSION}/" | |
| HBASE_HOME | "${CDH_COMPONENTS_HOME}/hbase-${IMPALA_HBASE_VERSION}/" | |
| SENTRY_HOME | "${CDH_COMPONENTS_HOME}/sentry-${IMPALA_SENTRY_VERSION}/" | Used to set up test data |
| THRIFT_HOME | "${IMPALA_TOOLCHAIN}/thrift-${IMPALA_THRIFT_VERSION}" | |