commit 72732da9d8d9211b9dc374af42340c56968fb86a
Author:    Daniel Becker <daniel.becker@cloudera.com>  Mon Feb 05 15:55:12 2024 +0100
Committer: Daniel Becker <daniel.becker@cloudera.com>  Tue Apr 02 09:58:37 2024 +0000
Tree:      43905772f789517d70225a8f890a2dde3e4dc13c
Parent:    23a14a249c783a9dfb38586052513c6455ce3dad
IMPALA-12609: Implement SHOW METADATA TABLES IN statement to list Iceberg Metadata tables

After this change, the new SHOW METADATA TABLES IN statement can be used to list all the available metadata tables of an Iceberg table. Note that in contrast to querying the contents of Iceberg metadata tables, this does not require fully qualified paths, e.g. both

  SHOW METADATA TABLES IN functional_parquet.iceberg_query_metadata;

and

  USE functional_parquet;
  SHOW METADATA TABLES IN iceberg_query_metadata;

work.

The available metadata tables are the same for all Iceberg tables, corresponding to the values of the enum "org.apache.iceberg.MetadataTableType", so there is actually no need to pass the name of the regular table for which the metadata table list is requested through Thrift. This change, however, does send the table name, because this way
 - if we add support for metadata tables for other table formats, the table name/path will be necessary to determine the correct list of metadata tables;
 - we could later add support for different authorisation policies for individual tables;
 - we can also check, at the point of generating the list of metadata tables, that the table is an Iceberg table.

Testing:
 - added and updated tests in ParserTest, AnalyzeDDLTest, ToSqlTest and AuthorizationStmtTest
 - added a custom cluster test in test_authorization.py
 - added functional tests in iceberg-metadata-tables.test

Change-Id: Ide10ccf10fc0abf5c270119ba7092c67e712ec49
Reviewed-on: http://gerrit.cloudera.org:8080/21026
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Reviewed-by: Zoltan Borok-Nagy <boroknagyz@cloudera.com>
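As a sketch of how the feature might be used end to end, the statements below first list the metadata tables of an Iceberg table and then query one of them. The database and table names (my_db, my_iceberg_tbl) are hypothetical, and the "snapshots" metadata table name is taken from the org.apache.iceberg.MetadataTableType enum that the commit message references:

```sql
-- Assumes an existing Iceberg table my_db.my_iceberg_tbl (hypothetical names).
-- With the current database set, an unqualified table name is enough:
USE my_db;
SHOW METADATA TABLES IN my_iceberg_tbl;

-- A fully qualified name works as well:
SHOW METADATA TABLES IN my_db.my_iceberg_tbl;

-- Querying the contents of a metadata table, by contrast,
-- requires the fully qualified path:
SELECT * FROM my_db.my_iceberg_tbl.snapshots;
```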
Lightning-fast, distributed SQL queries for petabytes of data stored in open data and table formats.
Impala is a modern, massively distributed, massively parallel C++ query engine that lets you analyze, transform and combine data from a variety of data sources.
The fastest way to try out Impala is the quickstart Docker container. You can try out running queries and processing data sets in Impala on a single machine without installing dependencies. It can automatically load test data sets into Apache Kudu and Apache Parquet formats, and you can start playing around with Apache Impala SQL within minutes.
To learn more about Impala as a user or administrator, or to try Impala, please visit the Impala homepage. Detailed documentation for administrators and users is available at Apache Impala documentation.
If you are interested in contributing to Impala as a developer, or learning more about Impala's internals and architecture, visit the Impala wiki.
Impala only supports Linux at the moment. Impala supports x86_64 and has experimental support for arm64 (as of Impala 4.0). Impala Requirements contains more detailed information on the minimum CPU requirements.
Impala runs on Linux systems only. The supported distros are:
Other systems, e.g. SLES12, may also be supported but are not tested by the community.
This distribution uses cryptographic software and may be subject to export controls. Please refer to EXPORT_CONTROL.md for more information.
See Impala's developer documentation to get started.
The detailed build notes have more information on the project layout and build.