IMPALA-14583: Support partial RPC dispatch for Iceberg tables

This patch extends IMPALA-11402 to support partial RPC dispatch for
Iceberg tables in local catalog mode. IMPALA-11402 added support for
HDFS partitioned tables, where catalogd can truncate the response of
getPartialCatalogObject at partition boundaries when the file count
exceeds catalog_partial_fetch_max_files.

For Iceberg tables, the file list is not organized by partitions but
stored as a flat list of data and delete files. This patch implements
offset-based pagination to allow catalogd to truncate the response at
any point in the file list, not just at partition boundaries.

Implementation details:
- Added an iceberg_file_offset field to the TTableInfoSelector thrift
  struct.
- IcebergContentFileStore.toThriftPartial() supports pagination with
  offset and limit parameters.
- IcebergContentFileStore uses a reverse lookup table
  (icebergFileOffsetToContentFile_) for efficient offset-based access
  to files.
- IcebergTable.getPartialInfo() enforces the file limit configured by
  catalog_partial_fetch_max_files (reusing the flag from IMPALA-11402).
- CatalogdMetaProvider.loadIcebergTableWithRetry() implements the retry
  loop on the coordinator side, sending follow-up requests with
  incremented offsets until all files are fetched.
- The coordinator detects catalog version changes between requests and
  throws InconsistentMetadataFetchException to trigger query replanning.

Key differences from IMPALA-11402:
- Offset-based pagination instead of partition-based, so the response
  can be split anywhere.
- A single flat file list instead of per-partition file lists.
- Works with both data files and delete files (Iceberg v2).

Tests:
- Added two custom-cluster tests in TestAllowIncompleteData:
  * test_incomplete_iceberg_file_list: 150 data files with limit=100
  * test_iceberg_with_delete_files: 60+ data+delete files with limit=50
- Both tests verify partial fetch across multiple requests and check
  for proper log messages on truncation warnings and request counts.

Change-Id: I7f2c058b7cc8efc15bac9fe0e91baadbb7b92cbb
Reviewed-on: http://gerrit.cloudera.org:8080/24041
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
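The coordinator-side retry loop described in the commit message can be sketched as follows. This is a minimal, self-contained simulation, not Impala's actual code: `PartialResponse`, `FakeCatalogd`, and `loadAllFiles` are hypothetical stand-ins for the real RPC types, and only the names taken from the commit message (`getPartialCatalogObject`, `catalog_partial_fetch_max_files`, `loadIcebergTableWithRetry`, `InconsistentMetadataFetchException`) reflect the actual patch.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the coordinator-side retry loop (modeled on
// CatalogdMetaProvider.loadIcebergTableWithRetry). All classes here are
// invented for illustration; the real implementation uses thrift RPCs.
public class IcebergPartialFetchSketch {

  /** Minimal stand-in for one partial getPartialCatalogObject response. */
  static final class PartialResponse {
    final List<String> files;   // one page of content file paths
    final long catalogVersion;  // catalog version the page was served from
    final boolean truncated;    // true if more files remain in catalogd
    PartialResponse(List<String> files, long catalogVersion, boolean truncated) {
      this.files = files;
      this.catalogVersion = catalogVersion;
      this.truncated = truncated;
    }
  }

  /** Fake catalogd: serves pages of at most 'limit' files from a flat list. */
  static final class FakeCatalogd {
    final List<String> allFiles;
    final long version;
    final int limit;  // plays the role of catalog_partial_fetch_max_files
    FakeCatalogd(List<String> allFiles, long version, int limit) {
      this.allFiles = allFiles;
      this.version = version;
      this.limit = limit;
    }
    PartialResponse getPartialCatalogObject(int icebergFileOffset) {
      int end = Math.min(icebergFileOffset + limit, allFiles.size());
      List<String> page = new ArrayList<>(allFiles.subList(icebergFileOffset, end));
      return new PartialResponse(page, version, end < allFiles.size());
    }
  }

  /** Fetches the full file list with follow-up requests at incremented offsets. */
  static List<String> loadAllFiles(FakeCatalogd catalogd) {
    List<String> files = new ArrayList<>();
    long firstVersion = -1;
    int offset = 0;
    while (true) {
      PartialResponse resp = catalogd.getPartialCatalogObject(offset);
      if (firstVersion == -1) {
        firstVersion = resp.catalogVersion;
      } else if (resp.catalogVersion != firstVersion) {
        // In Impala this would be InconsistentMetadataFetchException,
        // causing the query to be replanned against the new version.
        throw new IllegalStateException("catalog version changed mid-fetch");
      }
      files.addAll(resp.files);
      if (!resp.truncated) return files;
      offset += resp.files.size();  // next request starts where this one stopped
    }
  }

  public static void main(String[] args) {
    // Mirrors the test_incomplete_iceberg_file_list scenario:
    // 150 data files with a per-response limit of 100 (two requests).
    List<String> all = new ArrayList<>();
    for (int i = 0; i < 150; i++) all.add("data-file-" + i + ".parquet");
    FakeCatalogd catalogd = new FakeCatalogd(all, 42L, 100);
    List<String> fetched = loadAllFiles(catalogd);
    System.out.println(fetched.size());
  }
}
```

The design point illustrated here is why the version check is needed: because the flat file list can change between the first and second RPC, a page fetched at a newer catalog version could overlap or skip files relative to the first page, so any version change aborts the fetch rather than risking an inconsistent file list.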
Lightning-fast, distributed SQL queries for petabytes of data stored in open data and table formats.
Impala is a modern, massively distributed, massively parallel C++ query engine that lets you analyze, transform, and combine data from a variety of data sources.
The fastest way to try Impala is the quickstart Docker container. You can run queries and process data sets in Impala on a single machine without installing dependencies. The container can automatically load test data sets into Apache Kudu and Apache Parquet formats, so you can start working with Apache Impala SQL within minutes.
To learn more about Impala as a user or administrator, or to try Impala, please visit the Impala homepage. Detailed documentation for administrators and users is available at Apache Impala documentation.
If you are interested in contributing to Impala as a developer, or learning more about Impala's internals and architecture, visit the Impala wiki.
Impala only supports Linux at the moment. Impala supports x86_64 and has experimental support for arm64 (as of Impala 4.0). Impala Requirements contains more detailed information on the minimum CPU requirements.
Impala runs on Linux systems only. Other distributions, e.g. SLES12, may also work but are not tested by the community.
This distribution uses cryptographic software and may be subject to export controls. Please refer to EXPORT_CONTROL.md for more information.
See Impala's developer documentation to get started.
The detailed build notes include more information on the project layout and the build.