commit 73171cb7164573349bd53a996a51bb7058b778e0
author:    Gabor Kaszab <gaborkaszab@cloudera.com>  Wed Feb 07 12:33:04 2024 +0100
committer: Impala Public Jenkins <impala-public-jenkins@cloudera.com>  Thu Mar 28 13:57:07 2024 +0000
tree   2cd2d5ae3b94a1a0e18df0851ff83748603d4f89
parent 3e4fdeece1735de85c17155dda626e8f28af0092
IMPALA-12729: Allow creating primary keys for Iceberg tables

Some writer engines (Flink, NiFi) use identifier-field-ids from the
Iceberg schema to identify the columns to be written into equality
delete files. Until now, Impala was not able to populate
identifier-field-ids. This patch introduces support for not-enforced
primary keys for Iceberg tables; the primary key is used to set
identifier-field-ids during Iceberg schema creation.

Example syntax:

    CREATE TABLE ice_tbl (
      i int NOT NULL,
      j int,
      s string NOT NULL,
      primary key(i, s) not enforced)
    PARTITIONED BY SPEC (truncate(10, s))
    STORED AS ICEBERG;

There are some constraints on primary keys (PK), following the
behavior of Flink:
- Only NOT NULL columns can be part of the PK.
- A PK cannot be declared at the column-definition level, e.g.
  'i int NOT NULL PRIMARY KEY'.
- If the table is partitioned, the partition columns have to be part
  of the PK.
- Float and double columns are not allowed in the PK.
- A column that is part of the PK cannot be dropped.

Testing:
- New E2E tests added for different table creation scenarios.
- Manual test using NiFi to write into a table with a PK.

Change-Id: I7bea787acdabd8cb04661f4ddb5c3309af0364a6
Reviewed-on: http://gerrit.cloudera.org:8080/21149
Reviewed-by: Daniel Becker <daniel.becker@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
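As a sketch of how the constraints behave in practice, the following statements would be rejected under the rules described in the commit message (table and column names here are illustrative, not from the patch):

```sql
-- Rejected: PK declared at the column-definition level.
CREATE TABLE ice_bad1 (i int NOT NULL PRIMARY KEY)
STORED AS ICEBERG;

-- Rejected: nullable column 'j' cannot be part of the PK.
CREATE TABLE ice_bad2 (
  i int NOT NULL,
  j int,
  primary key(i, j) not enforced)
STORED AS ICEBERG;

-- Rejected: partition column 's' must be part of the PK.
CREATE TABLE ice_bad3 (
  i int NOT NULL,
  s string NOT NULL,
  primary key(i) not enforced)
PARTITIONED BY SPEC (truncate(10, s))
STORED AS ICEBERG;
```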
Lightning-fast, distributed SQL queries for petabytes of data stored in open data and table formats.
Impala is a modern, massively-distributed, massively-parallel, C++ query engine that lets you analyze, transform, and combine data from a variety of data sources.
The fastest way to try Impala is the quickstart Docker container, which lets you run queries and process data sets on a single machine without installing dependencies. It can automatically load test data sets in Apache Kudu and Apache Parquet formats, so you can start experimenting with Apache Impala SQL within minutes.
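A quickstart of this kind typically boils down to starting a container and pointing a client at it. The image name and tag below are assumptions for illustration only; follow the official quickstart instructions on the Impala homepage for the exact commands.

```shell
# NOTE: 'apache/impala:quickstart' is a hypothetical image:tag for illustration.
# Port 21050 is Impala's HiveServer2 (client protocol) port.
docker run -d --name impala-quickstart -p 21050:21050 apache/impala:quickstart

# Connect with impala-shell once the container is up:
impala-shell --protocol=hs2 -i localhost:21050
```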
To learn more about Impala as a user or administrator, or to try Impala, please visit the Impala homepage. Detailed documentation for administrators and users is available at Apache Impala documentation.
If you are interested in contributing to Impala as a developer, or learning more about Impala's internals and architecture, visit the Impala wiki.
Impala only supports Linux at the moment. Impala supports x86_64 and has experimental support for arm64 (as of Impala 4.0). Impala Requirements contains more detailed information on the minimum CPU requirements.
The supported distros are:
Other systems, e.g. SLES12, may also be supported but are not tested by the community.
This distribution uses cryptographic software and may be subject to export controls. Please refer to EXPORT_CONTROL.md for more information.
See Impala's developer documentation to get started.
The detailed build notes contain more information on the project layout and build.