commit | f396676b09d9dd706b4bff4e1c1d999f9f3b1d2f |
---|---|
author | Stamatis Zampetakis <zabetak@gmail.com>, Thu Dec 07 17:56:08 2023 +0100 |
committer | Stamatis Zampetakis <zabetak@gmail.com>, Fri Dec 08 13:32:36 2023 +0100 |
tree | db7d4af071df404ef68ca38b1a7e1d2ea7312c03 |
parent | 223241548054b95ec3ec497a855a5f602c419e74 |
HIVE-27658: Error resolving join keys during conversion to dynamic partition hashjoin (Stamatis Zampetakis reviewed by Denys Kuzmenko)

Sometimes, when the compiler attempts to convert a Join to a Dynamic Partition HashJoin (DPHJ) and certain assumptions about the shape of the plan do not hold, a SemanticException is thrown. The DPHJ is a performance optimization, so there is no reason to raise a fatal error when the conversion cannot be performed. It is preferable to simply skip the conversion and use a regular join instead of completely blocking the query. The `MapJoinProcessor.getMapJoinDesc` method already returns null in certain cases, so it is safe to add another exit condition.

Overview of changes:

1. Return null when join key resolution fails and simply skip the conversion to DPHJ.
2. Log a warning instead of throwing a fatal SemanticException.
3. Enrich the error message with more information to improve diagnosability.

Bringing the plan into a shape that allows the DPHJ conversion is still meaningful, but it can be tracked independently in other tickets.

Closes apache/hive#4930
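To make the intent concrete, here is a minimal sketch of the pattern the fix follows: bail out with null and a warning instead of throwing. The class, method signature, and message text below are simplified illustrations, not the actual `MapJoinProcessor.getMapJoinDesc` code in Hive; logging goes through SLF4J, which Hive already uses.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Illustrative sketch of the "skip instead of fail" pattern described above.
 * The types and names are simplified placeholders, not the real Hive classes.
 */
final class DphjConversionSketch {
  private static final Logger LOG = LoggerFactory.getLogger(DphjConversionSketch.class);

  /** Placeholder for the descriptor that would drive the DPHJ conversion. */
  static final class MapJoinDescriptor {
  }

  /**
   * Returns a descriptor for the dynamic partition hash join, or null when the
   * join keys cannot be resolved, so the caller keeps the regular join.
   */
  static MapJoinDescriptor tryBuildMapJoinDesc(String joinName, Object resolvedJoinKeys) {
    if (resolvedJoinKeys == null) {
      // Previously this situation surfaced as a fatal SemanticException; now we
      // log a warning with enough context to diagnose the plan shape and move on.
      LOG.warn("Cannot resolve join keys for {}; skipping conversion to dynamic"
          + " partition hash join and keeping the regular join", joinName);
      return null;
    }
    return new MapJoinDescriptor();
  }
}
```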
The Apache Hive (TM) data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Built on top of Apache Hadoop (TM), it provides:
Tools to enable easy access to data via SQL, thus supporting data warehousing tasks such as extract/transform/load (ETL), reporting, and data analysis
A mechanism to impose structure on a variety of data formats
Access to files stored either directly in Apache HDFS (TM) or in other data storage systems such as Apache HBase (TM)
Query execution using Apache Hadoop MapReduce or Apache Tez frameworks.
Hive provides standard SQL functionality, including many of the later SQL:2003 and SQL:2011 features for analytics. These include OLAP functions, subqueries, common table expressions, and more. Hive's SQL can also be extended with user code via user-defined functions (UDFs), user-defined aggregates (UDAFs), and user-defined table functions (UDTFs); a minimal UDF sketch follows below.
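To illustrate the UDF extension point, below is a minimal sketch of a scalar UDF built on the classic `org.apache.hadoop.hive.ql.exec.UDF` base class from hive-exec; the package, class, and function names are arbitrary examples, not part of Hive.

```java
package com.example.hive.udf; // example package name, not part of Hive

import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

/** Returns the lower-cased form of its string argument, or null on null input. */
@Description(name = "my_lower", value = "_FUNC_(str) - lowercases str")
public final class MyLowerUDF extends UDF {
  public Text evaluate(final Text s) {
    if (s == null) {
      return null;
    }
    return new Text(s.toString().toLowerCase());
  }
}
```

Once packaged into a jar and added to the session with `ADD JAR`, such a class can be registered with `CREATE TEMPORARY FUNCTION my_lower AS 'com.example.hive.udf.MyLowerUDF';` and then used like any built-in function; newer Hive versions also offer the GenericUDF API for finer control over argument types.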
Hive users have a choice of runtimes when executing SQL queries: they can use either the Apache Hadoop MapReduce or the Apache Tez framework as their execution backend. MapReduce is a mature framework that is proven at large scales. However, MapReduce is a purely batch framework, and queries using it may experience higher latencies (tens of seconds), even over small datasets. Apache Tez is designed for interactive queries and has substantially lower overheads than MapReduce.
Users are free to switch back and forth between these frameworks at any time. In each case, Hive is best suited for use cases where the amount of data processed is large enough to require a distributed system.
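As a concrete sketch of switching engines, the following example sets the `hive.execution.engine` session property over JDBC before running a query. The HiveServer2 URL and the queried table name are placeholder assumptions for this sketch, and the Hive JDBC driver (hive-jdbc) is assumed to be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/**
 * Sketch: choose the execution engine per session via hive.execution.engine
 * (values include "mr" and "tez"). The URL and table name are placeholders.
 */
public class SwitchEngineExample {
  public static void main(String[] args) throws Exception {
    // Assumed local HiveServer2 endpoint; adjust host, port, and database as needed.
    String url = "jdbc:hive2://localhost:10000/default";
    try (Connection conn = DriverManager.getConnection(url);
         Statement stmt = conn.createStatement()) {
      // Run the query on Tez for lower latency ...
      stmt.execute("SET hive.execution.engine=tez");
      printCount(stmt);
      // ... then switch the same session back to MapReduce.
      stmt.execute("SET hive.execution.engine=mr");
      printCount(stmt);
    }
  }

  private static void printCount(Statement stmt) throws Exception {
    // "web_logs" is a hypothetical table used only for illustration.
    try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM web_logs")) {
      while (rs.next()) {
        System.out.println("row count: " + rs.getLong(1));
      }
    }
  }
}
```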
Hive is not designed for online transaction processing. It is best used for traditional data warehousing tasks. Hive is designed to maximize scalability (scale out with more machines added dynamically to the Hadoop cluster), performance, extensibility, fault-tolerance, and loose-coupling with its input formats.
For the latest information about Hive, please visit our website at https://hive.apache.org/. Other useful links:
Installation Instructions and a quick tutorial: https://cwiki.apache.org/confluence/display/Hive/GettingStarted
Instructions to build Hive from source: https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-BuildingHivefromSource
A longer tutorial that covers more features of HiveQL: https://cwiki.apache.org/confluence/display/Hive/Tutorial
The HiveQL Language Manual: https://cwiki.apache.org/confluence/display/Hive/LanguageManual
Hive Version | Java Version |
---|---|
Hive 1.0 | Java 6 |
Hive 1.1 | Java 6 |
Hive 1.2 | Java 7 |
Hive 2.x | Java 7 |
Hive 3.x | Java 8 |
Hive 4.x | Java 8 |
Hive includes changes to the MetaStore schema. If you are upgrading from an earlier version of Hive, it is imperative that you upgrade the MetaStore schema by running the appropriate schema upgrade scripts located in the scripts/metastore/upgrade directory.
We have provided upgrade scripts for MySQL, PostgreSQL, Oracle, Microsoft SQL Server, and Derby databases. If you are using a different database for your MetaStore, you will need to provide your own upgrade script.
user@hive.apache.org - To discuss and ask usage questions. Send an empty email to user-subscribe@hive.apache.org in order to subscribe to this mailing list.
dev@hive.apache.org - For discussions about code, design and features. Send an empty email to dev-subscribe@hive.apache.org in order to subscribe to this mailing list.
commits@hive.apache.org - To monitor commits to the source repository. Send an empty email to commits-subscribe@hive.apache.org in order to subscribe to this mailing list.