commit    b18999fe0975a0795c4b10243d7c5465ebef9ed3
author    Daniel Becker <daniel.becker@cloudera.com>    Fri Mar 01 17:57:51 2024 +0100
committer Impala Public Jenkins <impala-public-jenkins@cloudera.com>    Fri Mar 08 19:48:33 2024 +0000
tree      9db38826bb6811a4393c336b1c24b6bcb4376934
parent    ca3fe6d6af6f5216f75bea26d6e90cf5cc816efc
IMPALA-12845: Crash with DESCRIBE on a complex type from an Iceberg table

A DESCRIBE statement on a complex column contained in an Iceberg table
runs into a DCHECK and crashes Impala. An example with an array:

  describe functional_parquet.iceberg_resolution_test_external.phone

Note that this also happens with Iceberg metadata tables, for example:

  describe functional_parquet.iceberg_query_metadata.\
      entries.readable_metrics;

With non-Iceberg tables there is no error.

The problem is that for Iceberg tables the DESCRIBE statement returns four
columns: "name", "type", "comment" and "nullable" (only Iceberg and Kudu
tables have "nullable"). However, the DESCRIBE statement response for
complex types contains only the first three columns, i.e. there is no
column for "nullable". But because the table is an Iceberg table, the
'metadata_' field of HS2ColumnarResultSet is still populated with four
columns. The DCHECK in HS2ColumnarResultSet::AddOneRow() expects the
number of columns to be the same in the DESCRIBE statement response and in
the 'metadata_' field.

This commit solves the problem by adding the "nullable" column to the
'metadata_' field only if the target of the DESCRIBE statement is a table,
not a complex type. Note that Kudu tables do not support complex types, so
this issue does not arise there.

This change also addresses a minor issue: DescribeTableStmt::analyze() did
not check whether the statement was already analyzed and did not set the
'analyzer_' field, which would indicate that analysis had already been
done. This is now corrected.

Testing:
 - added tests in describe-path.test for arrays, maps and structs from
   regular Iceberg tables and metadata tables.

Change-Id: I5eda21a41167cc1fda183aa16fd6276a6a16f5d3
Reviewed-on: http://gerrit.cloudera.org:8080/21105
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Lightning-fast, distributed SQL queries for petabytes of data stored in open data and table formats.
Impala is a modern, massively distributed, massively parallel C++ query engine that lets you analyze, transform and combine data from a variety of data sources:
The fastest way to try out Impala is the quickstart Docker container. It lets you run queries and process data sets on a single machine without installing dependencies, can automatically load test data sets into Apache Kudu and Apache Parquet formats, and gets you working with Apache Impala SQL within minutes.
To learn more about Impala as a user or administrator, or to try Impala, please visit the Impala homepage. Detailed documentation for administrators and users is available at Apache Impala documentation.
If you are interested in contributing to Impala as a developer, or learning more about Impala's internals and architecture, visit the Impala wiki.
Impala currently supports only Linux. It runs on x86_64 and has experimental support for arm64 (as of Impala 4.0). Impala Requirements contains more detailed information on the minimum CPU requirements.
Impala runs on Linux systems only. The supported distros are
Other systems, e.g. SLES12, may also be supported but are not tested by the community.
This distribution uses cryptographic software and may be subject to export controls. Please refer to EXPORT_CONTROL.md for more information.
See Impala's developer documentation to get started.
The detailed build notes contain information on the project layout and the build.