HIVE-24957 HIVE-24999: Inefficient & wrong CBO plans in the presence of subqueries (Stamatis Zampetakis, reviewed by Krisztian Kasa)

* HIVE-24999: HiveSubQueryRemoveRule generates invalid plan for IN subquery with correlations

1. Add a workaround for CALCITE-4574 in HiveRelBuilder to avoid generating
invalid plans (filters with references to columns that do not exist).

2. Adapt HiveRelDecorrelator based on new plans generated by HiveSubQueryRemoveRule

2a. Remove the getNewForOldInputRef workaround that was needed due to the
invalid plans.

2b. Adapt input references based on the new input operator (frame) inside
the decorrelateInputWithValueGenerator method.

2c. Refactor DecorrelateRexShuttle#visitCall to improve readability and
cover a few more corner cases.

3. Add subquery_in_invalid_intermediate_plan.q with problematic plans
relevant for this case (see the query sketch after this list).

4. Add CBO explain plans in queries related to masking since they are
easier to read and compare. There are a few plan regressions that
will be fixed by HIVE-24957.
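
For illustration, a minimal sketch (not copied from the new test file) of the
kind of correlated IN subquery that HiveSubQueryRemoveRule has to rewrite; the
table and column names are hypothetical:

    -- Hypothetical schema, only to illustrate the query shape.
    CREATE TABLE emp (id INT, dept_id INT, salary INT);
    CREATE TABLE dept (id INT, min_salary INT);

    -- IN subquery whose inner query is correlated with the outer query
    -- through the reference to e.dept_id.
    SELECT e.id
    FROM emp e
    WHERE e.salary IN (SELECT d.min_salary
                       FROM dept d
                       WHERE d.id = e.dept_id);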

* HIVE-24957: Wrong results when subquery has COALESCE in correlation predicate

1. Add plan transformations before starting the core RelDecorrelator logic
to bring the plan into an equivalent but more convenient form that can be
decorrelated into more efficient and correct plans.

2. Adapt HiveRelDecorrelator#decorrelateInputWithValueGenerator to avoid
creating a value generator for correlations that are already satisfied by
the input.

3. With the changes above, many plans with subqueries become more efficient
since the value generator is no longer necessary and is dropped.

4. Add subquery_complex_correlation_predicates.q, which includes queries
that produce wrong results without the new transformations (see the query
sketch after this list).

5. Add CBO plans in a few queries since they are easier to read and to
reason about in terms of correctness and efficiency.
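
For illustration, a minimal sketch of a correlated subquery whose correlation
predicate goes through COALESCE, similar in shape to the cases targeted here;
the tables and columns are hypothetical and not copied from the new test file:

    -- Hypothetical schema, only to illustrate the query shape.
    CREATE TABLE author (a_id INT, fname STRING, lname STRING);
    CREATE TABLE book (b_id INT, title STRING, a_id INT);

    -- The correlation predicate wraps the inner column in COALESCE before
    -- comparing it to the correlated outer column a.a_id.
    SELECT a.fname, a.lname
    FROM author a
    WHERE NOT EXISTS (SELECT 1
                      FROM book b
                      WHERE COALESCE(b.a_id, -1) = a.a_id);
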
30 files changed
README.md

Apache Hive (TM)


The Apache Hive (TM) data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Built on top of Apache Hadoop (TM), it provides:

  • Tools to enable easy access to data via SQL, thus enabling data warehousing tasks such as extract/transform/load (ETL), reporting, and data analysis

  • A mechanism to impose structure on a variety of data formats

  • Access to files stored either directly in Apache HDFS (TM) or in other data storage systems such as Apache HBase (TM)

  • Query execution using Apache Hadoop MapReduce, Apache Tez or Apache Spark frameworks.

Hive provides standard SQL functionality, including many of the later SQL:2003 and SQL:2011 features for analytics. These include OLAP functions, subqueries, common table expressions, and more. Hive's SQL can also be extended with user code via user defined functions (UDFs), user defined aggregates (UDAFs), and user defined table functions (UDTFs).
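
As a minimal sketch of the UDF extension point, the statements below register a hypothetical Java class shipped in a jar as a permanent function; the class name, jar location, and the employees table are assumptions, not part of the Hive distribution:

    -- Register a hypothetical UDF implemented in Java and packaged in a jar.
    CREATE FUNCTION my_upper AS 'com.example.hive.udf.MyUpper'
      USING JAR 'hdfs:///user/hive/udfs/my-udfs.jar';

    -- Use it like any built-in function (employees is a hypothetical table).
    SELECT my_upper(name) FROM employees;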

Hive users have a choice of 3 runtimes when executing SQL queries. Users can choose between Apache Hadoop MapReduce, Apache Tez or Apache Spark frameworks as their execution backend. MapReduce is a mature framework that is proven at large scales. However, MapReduce is a purely batch framework, and queries using it may experience higher latencies (tens of seconds), even over small datasets. Apache Tez is designed for interactive query, and has substantially reduced overheads versus MapReduce. Apache Spark is a cluster computing framework that's built outside of MapReduce, but on top of HDFS, with a notion of a composable and transformable distributed collection of items called a Resilient Distributed Dataset (RDD), which allows processing and analysis without the traditional intermediate stages that MapReduce introduces.

Users are free to switch back and forth between these frameworks at any time. In each case, Hive is best suited for use cases where the amount of data processed is large enough to require a distributed system.
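
For example, the execution engine is typically chosen per session via the hive.execution.engine property; the sketch below assumes the selected engine is installed and configured on the cluster, and uses a hypothetical table:

    -- Switch the current session to Tez; other accepted values are mr and spark.
    SET hive.execution.engine=tez;

    -- Subsequent queries in this session run on the chosen engine.
    SELECT COUNT(*) FROM employees;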

Hive is not designed for online transaction processing. It is best used for traditional data warehousing tasks. Hive is designed to maximize scalability (scale out with more machines added dynamically to the Hadoop cluster), performance, extensibility, fault-tolerance, and loose-coupling with its input formats.

General Info

For the latest information about Hive, please visit our website at:

http://hive.apache.org/

Getting Started

Requirements

Java

Hive Version    Java Version
Hive 1.0        Java 6
Hive 1.1        Java 6
Hive 1.2        Java 7
Hive 2.x        Java 7
Hive 3.x        Java 8
Hive 4.x        Java 8

Hadoop

  • Hadoop 1.x, 2.x
  • Hadoop 3.x (Hive 3.x)

Upgrading from older versions of Hive

  • Hive includes changes to the MetaStore schema. If you are upgrading from an earlier version of Hive it is imperative that you upgrade the MetaStore schema by running the appropriate schema upgrade scripts located in the scripts/metastore/upgrade directory.

  • We have provided upgrade scripts for MySQL, PostgreSQL, Oracle, Microsoft SQL Server, and Derby databases. If you are using a different database for your MetaStore you will need to provide your own upgrade script.

Useful mailing lists

  1. user@hive.apache.org - To discuss and ask usage questions. Send an empty email to user-subscribe@hive.apache.org in order to subscribe to this mailing list.

  2. dev@hive.apache.org - For discussions about code, design and features. Send an empty email to dev-subscribe@hive.apache.org in order to subscribe to this mailing list.

  3. commits@hive.apache.org - In order to monitor commits to the source repository. Send an empty email to commits-subscribe@hive.apache.org in order to subscribe to this mailing list.