commit | d3c3ae41c4aeb2dec9f55dacb3dfc357d16713a3 | |
---|---|---|
author | Zoltan Borok-Nagy <boroknagyz@cloudera.com> | Wed Nov 23 16:06:28 2022 +0100 |
committer | Impala Public Jenkins <impala-public-jenkins@cloudera.com> | Mon Nov 28 17:35:24 2022 +0000 |
tree | d92c30c409779adddcde0d5e0d3fed4d68d32980 | |
parent | 16190b4f77a86ef008bf28334dcca50e7c498556 | |
IMPALA-11740: Incorrect results for partitioned Iceberg V2 tables when runtime filters are applied

If an Iceberg V2 table is partitioned and contains delete files, then a query that involves runtime filters on the partition columns returns an empty result set. E.g.:

select count(*) from store_sales, date_dim
where d_date_sk = ss_sold_date_sk and d_moy=2 and d_year=1998;

In the above query store_sales is partitioned by ss_sold_date_sk, which will be filtered by runtime filters created by the JOIN. If store_sales has delete files, then the above query returns an empty result set.

The problem is that we are invoking PartitionPassesFilters() on these Iceberg tables. It is usually a no-op for Iceberg tables, as the template tuple is NULL. But when we have virtual columns, a template tuple has been created in HdfsScanPlanNode::InitTemplateTuple. For Iceberg tables this template tuple is incomplete, i.e. it doesn't have the partition values set. This means the filters evaluate to false and the files get filtered out, hence the query produces an empty result set.

With this patch we don't invoke PartitionPassesFilters() on Iceberg tables; only the Iceberg-specific IcebergPartitionPassesFilters() gets invoked. Also added DCHECKs to ensure this.

Testing:
* e2e tests added

Change-Id: I43f3e0a4df7c1ba6d8ea61410b570d8cf7b31ad3
Reviewed-on: http://gerrit.cloudera.org:8080/19274
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Lightning-fast, distributed SQL queries for petabytes of data stored in Apache Hadoop clusters.
Impala is a modern, massively-distributed, massively-parallel, C++ query engine that lets you analyze, transform and combine data from a variety of data sources:
The fastest way to try out Impala is the quickstart Docker container. It lets you run queries and process data sets in Impala on a single machine without installing dependencies, can automatically load test data sets into Apache Kudu and Apache Parquet formats, and has you running Apache Impala SQL within minutes.
To learn more about Impala as a user or administrator, or to try Impala, please visit the Impala homepage. Detailed documentation for administrators and users is available at Apache Impala documentation.
If you are interested in contributing to Impala as a developer, or learning more about Impala's internals and architecture, visit the Impala wiki.
Impala only supports Linux at the moment. It supports x86_64 and has experimental support for arm64 (as of Impala 4.0). Impala Requirements contains more detailed information on the minimum CPU requirements.
Impala runs on Linux systems only. The supported distros are
Other systems, e.g. SLES12, may also be supported but are not tested by the community.
This distribution uses cryptographic software and may be subject to export controls. Please refer to EXPORT_CONTROL.md for more information.
See Impala's developer documentation to get started.
Detailed build notes has some detailed information on the project layout and build.