commit | 89b9c93c7ac5f3eb19977290ba5115547120a0a3 | |
---|---|---|
author | Sahil Takiar <takiar.sahil@gmail.com> | Fri Nov 01 12:39:39 2019 -0700 |
committer | Impala Public Jenkins <impala-public-jenkins@cloudera.com> | Wed Nov 20 00:44:13 2019 +0000 |
tree | 8c11fbc7d09b743cd2b8663bd01007a94986c2f0 | |
parent | 66322f27e36f3c322bfa78726e9742ff110e96fd | |
IMPALA-8525: preads should use hdfsPreadFully rather than hdfsPread

Modifies HdfsFileReader so that it calls hdfsPreadFully instead of hdfsPread. hdfsPreadFully is a new libhdfs API introduced by HDFS-14564 (Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable). hdfsPreadFully improves the performance of preads, especially when reading data from S3. The major difference between the two calls is that hdfsPreadFully is guaranteed to read all of the requested bytes, whereas hdfsPread is only guaranteed to read up to the number of requested bytes.

hdfsPreadFully reduces the number of JNI array allocations needed when reading data from S3. When any read method in libhdfs is called, the method allocates a Java array whose size is equal to the amount of data requested. The issue is that Java's InputStream#read only guarantees that it will read up to the amount of data requested, so a libhdfs read request can allocate a large Java array that the read only partially fills. PositionedReadable#readFully, on the other hand, guarantees that all requested data will be read, preventing any unnecessary JNI array allocations.

hdfsPreadFully also improves the effectiveness of fs.s3a.experimental.input.fadvise=RANDOM (HADOOP-13203). S3A recommends setting fadvise=RANDOM when doing random reads, which is common in Impala when reading Parquet or ORC files. fadvise=RANDOM causes the HTTP GET request that reads the S3 data to request only the data bounded by the parameters of the current read request (e.g. for 'read(long position, ..., int length)' it requests 'length' bytes). The chunk-size optimization in HdfsFileReader hurts performance when fadvise=RANDOM because each HTTP GET request will only ask for 'chunk-size' bytes at a time, which is why this patch removes the chunk-size optimization as well. hdfsPreadFully helps here because all the data in the scan range is requested by a single HTTP GET request.

Since hdfsPreadFully improves S3 read performance, this patch enables preads for S3A files by default. Even if fadvise=SEQUENTIAL, hdfsPreadFully still improves performance since it avoids unnecessary JNI allocation overhead.

The chunk-size optimization (added in https://gerrit.cloudera.org/#/c/63/) is no longer necessary after this patch because hdfsPreadFully prevents any unnecessary array allocations. Furthermore, it is likely the chunk-size optimization was added to work around overhead that was since fixed by HDFS-14285.

This patch also fixes a bug from IMPALA-8884 where the 'impala-server.io-mgr.queue-$i.read-size' statistics were updated with the chunk-size passed to HdfsFileReader::ReadFromPosInternal, which is not necessarily the amount of data actually read.

Testing:
* Ran core tests
* Ran core tests on S3
* Ad-hoc functional and performance testing on ABFS; no perf regression observed; planning to further investigate the interaction between hdfsPreadFully + ABFS in a future JIRA

Change-Id: I29ea34897096bc790abdeb98073a47f1c4c10feb
Reviewed-on: http://gerrit.cloudera.org:8080/14635
Reviewed-by: Sahil Takiar <stakiar@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Lightning-fast, distributed SQL queries for petabytes of data stored in Apache Hadoop clusters.
Impala is a modern, massively-distributed, massively-parallel C++ query engine that lets you analyze, transform, and combine data from a variety of data sources.
To learn more about Impala as a business user, or to try Impala live or in a VM, please visit the Impala homepage.
If you are interested in contributing to Impala as a developer, or learning more about Impala's internals and architecture, visit the Impala wiki.
Impala only supports Linux at the moment.
This distribution uses cryptographic software and may be subject to export controls. Please refer to EXPORT_CONTROL.md for more information.
See bin/bootstrap_build.sh.
Impala can be built with pre-built components or components downloaded from S3. The components needed to build Impala are Apache Hadoop, Hive, HBase, and Sentry. If you need to manually override the locations or versions of these components, you can do so through the environment variables and scripts listed below.
Location | Purpose |
---|---|
bin/impala-config.sh | This script must be sourced to set up all environment variables properly so that other scripts work |
bin/impala-config-local.sh | A script can be created in this location to set local overrides for any environment variables |
bin/impala-config-branch.sh | A version of the above that can be checked into a branch for convenience. |
bin/bootstrap_build.sh | A helper script to bootstrap some of the build requirements. |
bin/bootstrap_development.sh | A helper script to bootstrap a developer environment. Please read it before using. |
be/build/ | Impala build output goes here. |
be/generated-sources/ | Thrift and other generated source will be found here. |
Environment variable | Default value | Description |
---|---|---|
IMPALA_HOME | Top level Impala directory | |
IMPALA_TOOLCHAIN | “${IMPALA_HOME}/toolchain” | Native toolchain directory (for compilers, libraries, etc.) |
SKIP_TOOLCHAIN_BOOTSTRAP | “false” | Skips downloading the toolchain and any Python dependencies if “true” |
CDH_BUILD_NUMBER | | Identifier to indicate the CDH build number |
CDH_COMPONENTS_HOME | “${IMPALA_HOME}/toolchain/cdh_components-${CDH_BUILD_NUMBER}” | Location of the CDH components within the toolchain. |
CDH_MAJOR_VERSION | “5” | Identifier used to uniqueify paths for potentially incompatible component builds. |
IMPALA_CONFIG_SOURCED | “1” | Set by ${IMPALA_HOME}/bin/impala-config.sh (internal use) |
JAVA_HOME | “/usr/lib/jvm/${JAVA_VERSION}” | Used to locate Java |
JAVA_VERSION | “java-7-oracle-amd64” | Can override to set a local Java version. |
JAVA | “${JAVA_HOME}/bin/java” | Java binary location. |
CLASSPATH | See bin/set-classpath.sh for details. | |
PYTHONPATH | Will be changed to include: “${IMPALA_HOME}/shell/gen-py” “${IMPALA_HOME}/testdata” “${THRIFT_HOME}/python/lib/python2.7/site-packages” “${HIVE_HOME}/lib/py” “${IMPALA_HOME}/shell/ext-py/prettytable-0.7.1/dist/prettytable-0.7.1” "${IMPALA_HOME}/shell/ext-py/sasl-0.1.1/dist/sasl-0.1.1-py2.7-linux-x "${IMPALA_HOME}/shell/ext-py/sqlparse-0.1.19/dist/sqlparse-0.1.19-py2 |
Environment variable | Default value | Description |
---|---|---|
IMPALA_BE_DIR | “${IMPALA_HOME}/be” | Backend directory. Build output is also stored here. |
IMPALA_FE_DIR | “${IMPALA_HOME}/fe” | Frontend directory |
IMPALA_COMMON_DIR | “${IMPALA_HOME}/common” | Common code (thrift, function registry) |
Environment variable | Default value | Description |
---|---|---|
IMPALA_BUILD_THREADS | Number of processors by default, otherwise “8”. | Used for make -j and distcc -j settings. |
IMPALA_MAKE_FLAGS | "" | Any extra settings to pass to make. Also used when copying udfs / udas into HDFS. |
USE_SYSTEM_GCC | “0” | If set to any value other than “0”, directs cmake not to set GCC_ROOT, CMAKE_C_COMPILER, CMAKE_CXX_COMPILER, or TOOLCHAIN_LINK_FLAGS |
IMPALA_CXX_COMPILER | “default” | Used by cmake (cmake_modules/toolchain and clang_toolchain.cmake) to select gcc / clang |
USE_GOLD_LINKER | “true” | Directs backend cmake to use gold. |
IS_OSX | “false” | (Experimental) currently only used to disable Kudu. |
Environment variable | Default value | Description |
---|---|---|
HADOOP_HOME | “${CDH_COMPONENTS_HOME}/hadoop-${IMPALA_HADOOP_VERSION}/” | Used to locate Hadoop |
HADOOP_INCLUDE_DIR | “${HADOOP_HOME}/include” | For ‘hdfs.h’ |
HADOOP_LIB_DIR | “${HADOOP_HOME}/lib” | For ‘libhdfs.a’ or ‘libhdfs.so’ |
HIVE_HOME | “${CDH_COMPONENTS_HOME}/hive-${IMPALA_HIVE_VERSION}/” | |
HBASE_HOME | “${CDH_COMPONENTS_HOME}/hbase-${IMPALA_HBASE_VERSION}/” | |
SENTRY_HOME | “${CDH_COMPONENTS_HOME}/sentry-${IMPALA_SENTRY_VERSION}/” | Used to setup test data |
THRIFT_HOME | “${IMPALA_TOOLCHAIN}/thrift-${IMPALA_THRIFT_VERSION}” | |