commit | 65198faa3beeea13aec905f8cda8f644e99af960 | |
---|---|---|
author | Jiawei Wang <jiawei.wang@cloudera.com> | Fri Nov 01 01:36:56 2019 -0500 |
committer | Impala Public Jenkins <impala-public-jenkins@cloudera.com> | Fri Nov 22 11:10:06 2019 +0000 |
tree | 4ef66c59f99b7f24a6af693f87b6f15e838c1c73 | |
parent | d747cc3646511531d6ae4479ec17012c58a596f4 | |
IMPALA-9110: Add table loading time break-down metrics for HdfsTable

A. Problem:
Catalog table loading currently only records the total loading time. We need break-down times, i.e. more detailed time recording for each loading function. Also, table schema loading is not counted in load-duration, so we need additional metrics for that.

B. Solution:
We added "hms-load-tbl-schema", "load-duration.all-column-stats", "load-duration.all-partitions.total-time", and "load-duration.all-partitions.file-metadata". We also log the loadValidWriteIdList() time, so table loading now has a more detailed time breakdown. The table loading time metrics for HDFS tables form the following hierarchy:

- Table Schema Loading
- Table Metadata Loading
  - total time
  - all column stats loading time
  - ValidWriteIds loading time
  - all partitions loading time
    - total time
    - file metadata loading time
  - storage metadata loading time (standalone metric)

1. Table Schema Loading
   * Meaning: the time for HMS to fetch the table object plus the actual schema loading time. Normally the code path is msClient.getHiveClient().getTable(dbName, tblName).
   * Metric: hms-load-tbl-schema

2. Table Metadata Loading -- total time
   * Meaning: the time to load all the table metadata. The code path is HdfsTable.load().
   * Metric: load-duration.total-time

2.1 Table Metadata Loading -- all column stats
   * Meaning: load all column stats; this is part of table metadata loading. The code path is HdfsTable.loadAllColumnStats().
   * Metric: load-duration.all-column-stats

2.2 Table Metadata Loading -- loadValidWriteIdList
   * Meaning: fetch ValidWriteIds from HMS. The code path is HdfsTable.loadValidWriteIdList().
   * Metric: none recorded for this one; a debug log is generated instead.

2.3 Table Metadata Loading -- storage metadata loading (standalone metric)
   * Meaning: the amount of time spent on file-system operations against the underlying storage layer during metadata loading.
   * Metric: renamed to load-duration.storage-metadata. This metric was introduced by IMPALA-7322.

2.4 Table Metadata Loading -- load all partitions
   * Meaning: the time to load all partitions, including fetching all partitions from HMS and loading them. The code paths are MetaStoreUtil.fetchAllPartitions() and HdfsTable.loadAllPartitions().
   * Metric: load-duration.all-partitions

2.4.1 Table Metadata Loading -- load all partitions -- load file metadata
   * Meaning: the file metadata loading for all partitions (this is part of 2.4). The code path is loadFileMetadataForPartitions() inside loadAllPartitions().
   * Metric: load-duration.all-partitions.file-metadata

C. Extras in this commit:
1. Add PrintUtils.printTimeNs for pretty-printing time in the frontend.
2. Add an explanation for the table loading manager.

D. Tests:
1. Add unit tests for the PrintUtils.printTime() function.
2. Manually describe tables and verify that the table loading metrics are correct.

Change-Id: I5381f9316df588b2004876c6cd9fb7e674085b10
Reviewed-on: http://gerrit.cloudera.org:8080/14611
Reviewed-by: Vihang Karajgaonkar <vihang@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
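The phase hierarchy above can be mimicked with a minimal shell sketch. The metric names below come from this commit; the timing helper itself is purely illustrative (Impala records these timers in its Java frontend, not in shell):

```shell
# Toy phase timer: runs a command and reports its duration under a
# metric name, mirroring the load-duration.* hierarchy.
phase() {
  metric="$1"; shift
  start=$(date +%s%N)                      # GNU date, nanosecond resolution
  "$@"                                     # run the phase
  end=$(date +%s%N)
  echo "${metric}: $(( (end - start) / 1000000 )) ms"
}

# 'true' stands in for the real loading work of each phase.
phase "hms-load-tbl-schema" true
phase "load-duration.all-column-stats" true
phase "load-duration.all-partitions.file-metadata" true
```

Each line of output pairs one metric name with the elapsed wall-clock time of its phase, which is essentially what the new break-down metrics expose per table.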
Lightning-fast, distributed SQL queries for petabytes of data stored in Apache Hadoop clusters.
Impala is a modern, massively-distributed, massively-parallel C++ query engine that lets you analyze, transform, and combine data from a variety of data sources.
To learn more about Impala as a business user, or to try Impala live or in a VM, please visit the Impala homepage.
If you are interested in contributing to Impala as a developer, or learning more about Impala's internals and architecture, visit the Impala wiki.
Impala only supports Linux at the moment.
This distribution uses cryptographic software and may be subject to export controls. Please refer to EXPORT_CONTROL.md for more information.
See bin/bootstrap_build.sh.
Impala can be built with pre-built components or components downloaded from S3. The components needed to build Impala are Apache Hadoop, Hive, HBase, and Sentry. If you need to manually override the locations or versions of these components, you can do so through the environment variables and scripts listed below.
Location | Purpose |
---|---|
bin/impala-config.sh | This script must be sourced to set up all environment variables properly so that other scripts work. |
bin/impala-config-local.sh | A script can be created in this location to set local overrides for any environment variables. |
bin/impala-config-branch.sh | A version of the above that can be checked into a branch for convenience. |
bin/bootstrap_build.sh | A helper script to bootstrap some of the build requirements. |
bin/bootstrap_development.sh | A helper script to bootstrap a developer environment. Please read it before using. |
be/build/ | Impala build output goes here. |
be/generated-sources/ | Thrift and other generated source will be found here. |
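As a sketch of how the local-override hook can be used, a hypothetical bin/impala-config-local.sh might contain the following. The variable names come from the tables below; the values are examples only, and it is assumed that bin/impala-config.sh picks this file up when it exists:

```shell
# bin/impala-config-local.sh -- hypothetical local overrides, assumed to be
# read by bin/impala-config.sh when present. Values are examples only.
export SKIP_TOOLCHAIN_BOOTSTRAP=true   # reuse an already-downloaded toolchain
export IMPALA_CXX_COMPILER=clang       # prefer clang over the default
```

Keeping overrides in this file (rather than editing bin/impala-config.sh) means they survive branch switches and rebases.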
Environment variable | Default value | Description |
---|---|---|
IMPALA_HOME | | Top level Impala directory. |
IMPALA_TOOLCHAIN | "${IMPALA_HOME}/toolchain" | Native toolchain directory (for compilers, libraries, etc.). |
SKIP_TOOLCHAIN_BOOTSTRAP | "false" | Skips downloading the toolchain and any Python dependencies if "true". |
CDH_BUILD_NUMBER | | Identifier indicating the CDH build number. |
CDH_COMPONENTS_HOME | "${IMPALA_HOME}/toolchain/cdh_components-${CDH_BUILD_NUMBER}" | Location of the CDH components within the toolchain. |
CDH_MAJOR_VERSION | "5" | Identifier used to uniqueify paths for potentially incompatible component builds. |
IMPALA_CONFIG_SOURCED | "1" | Set by ${IMPALA_HOME}/bin/impala-config.sh (internal use). |
JAVA_HOME | "/usr/lib/jvm/${JAVA_VERSION}" | Used to locate Java. |
JAVA_VERSION | "java-7-oracle-amd64" | Can be overridden to set a local Java version. |
JAVA | "${JAVA_HOME}/bin/java" | Java binary location. |
CLASSPATH | | See bin/set-classpath.sh for details. |
PYTHONPATH | | Will be changed to include: "${IMPALA_HOME}/shell/gen-py", "${IMPALA_HOME}/testdata", "${THRIFT_HOME}/python/lib/python2.7/site-packages", "${HIVE_HOME}/lib/py", "${IMPALA_HOME}/shell/ext-py/prettytable-0.7.1/dist/prettytable-0.7.1", "${IMPALA_HOME}/shell/ext-py/sasl-0.1.1/dist/sasl-0.1.1-py2.7-linux-x, "${IMPALA_HOME}/shell/ext-py/sqlparse-0.1.19/dist/sqlparse-0.1.19-py2 |
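For instance, the Java-related variables above could be overridden together in a local config. The version below is hypothetical; substitute whatever JVM is actually installed:

```shell
# Hypothetical local Java override: the three variables are derived from
# one another exactly as in the defaults table above.
export JAVA_VERSION=java-8-openjdk-amd64
export JAVA_HOME="/usr/lib/jvm/${JAVA_VERSION}"
export JAVA="${JAVA_HOME}/bin/java"
echo "$JAVA"   # prints /usr/lib/jvm/java-8-openjdk-amd64/bin/java
```

Overriding only JAVA_VERSION is usually enough, since JAVA_HOME and JAVA are derived from it by default.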
Environment variable | Default value | Description |
---|---|---|
IMPALA_BE_DIR | "${IMPALA_HOME}/be" | Backend directory. Build output is also stored here. |
IMPALA_FE_DIR | "${IMPALA_HOME}/fe" | Frontend directory. |
IMPALA_COMMON_DIR | "${IMPALA_HOME}/common" | Common code (thrift, function registry). |
Environment variable | Default value | Description |
---|---|---|
IMPALA_BUILD_THREADS | The number of processors, or "8" by default | Used for make -j and distcc -j settings. |
IMPALA_MAKE_FLAGS | "" | Any extra settings to pass to make. Also used when copying UDFs/UDAs into HDFS. |
USE_SYSTEM_GCC | "0" | If set to any other value, directs cmake not to set GCC_ROOT, CMAKE_C_COMPILER, CMAKE_CXX_COMPILER, or TOOLCHAIN_LINK_FLAGS. |
IMPALA_CXX_COMPILER | "default" | Used by cmake (cmake_modules/toolchain and clang_toolchain.cmake) to select gcc/clang. |
USE_GOLD_LINKER | "true" | Directs the backend cmake to use gold. |
IS_OSX | "false" | (Experimental) currently only used to disable Kudu. |
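To make the build-flag defaults concrete, here is a small, hypothetical override pinning the parallelism explicitly (whether this is useful depends on your machine; `nproc` is a GNU coreutils tool, which fits the Linux-only support noted above):

```shell
# Illustrative build-related overrides for a local config.
export IMPALA_BUILD_THREADS="$(nproc)"   # match make -j to available cores
export IMPALA_MAKE_FLAGS=""              # no extra settings passed to make
echo "IMPALA_BUILD_THREADS=${IMPALA_BUILD_THREADS}"
```

Lowering IMPALA_BUILD_THREADS below the core count can help on memory-constrained machines, since each compiler job can use substantial RAM.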
Environment variable | Default value | Description |
---|---|---|
HADOOP_HOME | "${CDH_COMPONENTS_HOME}/hadoop-${IMPALA_HADOOP_VERSION}/" | Used to locate Hadoop. |
HADOOP_INCLUDE_DIR | "${HADOOP_HOME}/include" | For 'hdfs.h'. |
HADOOP_LIB_DIR | "${HADOOP_HOME}/lib" | For 'libhdfs.a' or 'libhdfs.so'. |
HIVE_HOME | "${CDH_COMPONENTS_HOME}/hive-${IMPALA_HIVE_VERSION}/" | |
HBASE_HOME | "${CDH_COMPONENTS_HOME}/hbase-${IMPALA_HBASE_VERSION}/" | |
SENTRY_HOME | "${CDH_COMPONENTS_HOME}/sentry-${IMPALA_SENTRY_VERSION}/" | Used to set up test data. |
THRIFT_HOME | "${IMPALA_TOOLCHAIN}/thrift-${IMPALA_THRIFT_VERSION}" | |