<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
<concept id="common">
<title>Reusable Text, Paragraphs, List Items, and Other Elements for Impala</title>
<conbody>
<p>
All the elements in this file with IDs are intended to be conref'ed elsewhere. Practically
all of the conref'ed elements for the Impala docs are in this file, to avoid questions of
when it's safe to remove or move something in any of the 'main' files, and avoid having to
      change conref references as a result.
</p>
<p>
This file defines some dummy subheadings as section elements, just for self-documentation.
Using sections instead of nested concepts lets all the conref links point to a very simple
name pattern, '#common/id_within_the_file', rather than a 3-part reference with an
intervening, variable concept ID.
</p>
<section id="concepts">
<title>Conceptual Content</title>
<p>
Overview and conceptual information for Impala as a whole.
</p>
<!-- Reconcile the 'advantages' and 'benefits' elements; be mindful of where each is used. -->
<p id="impala_advantages">
The following are some of the key advantages of Impala:
<ul>
<li>
Impala integrates with the existing <keyword keyref="hadoop_distro"/> ecosystem,
meaning data can be stored, shared, and accessed using the various solutions
included with <keyword keyref="hadoop_distro"/>. This also avoids data silos and
minimizes expensive data movement.
</li>
<li>
Impala provides access to data stored in <keyword keyref="hadoop_distro"/> without
requiring the Java skills required for MapReduce jobs. Impala can access data
directly from the HDFS file system. Impala also provides a SQL front-end to access
data in the HBase database system, <ph rev="2.2.0">or in the Amazon Simple Storage
System (S3)</ph>.
</li>
<li>
Impala returns results typically within seconds or a few minutes, rather than the
many minutes or hours that are often required for Hive queries to complete.
</li>
<li>
Impala is pioneering the use of the Parquet file format, a columnar storage layout
that is optimized for large-scale queries typical in data warehouse scenarios.
</li>
</ul>
</p>
<p id="impala_benefits">
Impala provides:
<ul>
<li>
Familiar SQL interface that data scientists and analysts already know.
</li>
<li>
Ability to query high volumes of data (<q>big data</q>) in Apache Hadoop.
</li>
<li>
Distributed queries in a cluster environment, for convenient scaling and to make use
of cost-effective commodity hardware.
</li>
<li>
Ability to share data files between different components with no copy or
export/import step; for example, to write with Pig, transform with Hive and query
with Impala. Impala can read from and write to Hive tables, enabling simple data
interchange using Impala for analytics on Hive-produced data.
</li>
<li>
Single system for big data processing and analytics, so customers can avoid costly
modeling and ETL just for analytics.
</li>
</ul>
</p>
</section>
<section id="authz">
<title>Authorization Content</title>
      <p>
        Material related to Sentry and Ranger security, intended to be reused between Hive and
        Impala. Because most of this material will probably be multi-paragraph or involve
        subheadings, some of it might need to be represented as nested topics at the end of this
        file.
      </p>
<p id="privileges_objects">
The table below lists the minimum level of privileges and the scope required to execute
SQL statements in <keyword keyref="impala30_full"/> and higher. The following notations
are used:
<ul>
<li>The <b>SERVER</b> resource type in Ranger implies all databases,
all tables, all columns, all UDFs, and all URIs.</li>
<li>
<b>ANY</b> denotes the <codeph>SELECT</codeph>, <codeph>INSERT</codeph>,
<codeph>CREATE</codeph>, <codeph>ALTER</codeph>, <codeph>DROP</codeph>,
<b><i>or</i></b> <codeph>REFRESH</codeph> privilege.
</li>
<li>
<b>ALL</b> privilege denotes the <codeph>SELECT</codeph>, <codeph>INSERT</codeph>,
<codeph>CREATE</codeph>, <codeph>ALTER</codeph>, <codeph>DROP</codeph>,
<b><i>and</i></b> <codeph>REFRESH</codeph> privileges.
</li>
<li>
The owner of an object effectively has the ALL privilege on the object.
</li>
<li>
            A scope refers to the specific level in the object hierarchy at which the privilege
            is granted. The parent levels of the specified scope are implicitly supported. For
            example, if a privilege is listed with the <codeph>TABLE</codeph> scope, the same
            privilege granted on <codeph>DATABASE</codeph> or <codeph>SERVER</codeph> also
            allows the user to execute the specified SQL statement.
</li>
</ul>
<table id="sentry_privileges_objects_tab" frame="all" colsep="1"
rowsep="1">
<tgroup cols="3">
<colspec colnum="1" colname="col1"/>
<colspec colnum="2" colname="col2"/>
<colspec colnum="3" colname="col3"/>
<tbody>
<row>
<entry>
<b>SQL Statement</b>
</entry>
<entry>
<b>Privileges</b>
</entry>
<entry>
<b>Object Type / </b><p><b>Resource Type</b></p></entry>
</row>
<row>
<entry>
SELECT
</entry>
<entry>
SELECT
</entry>
<entry> TABLE</entry>
</row>
<row>
<entry>
WITH SELECT
</entry>
<entry>
SELECT
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
EXPLAIN SELECT
</entry>
<entry>
SELECT
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
INSERT
</entry>
<entry>
INSERT
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
EXPLAIN INSERT
</entry>
<entry>
INSERT
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
TRUNCATE
</entry>
<entry>
INSERT
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
LOAD
</entry>
<entry>
INSERT
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry/>
<entry>
ALL
</entry>
<entry>
URI
</entry>
</row>
<row>
<entry>
CREATE DATABASE
</entry>
<entry>
CREATE
</entry>
<entry>
SERVER
</entry>
</row>
<row>
<entry>
CREATE DATABASE LOCATION
</entry>
<entry>
CREATE
</entry>
<entry>
SERVER
</entry>
</row>
<row>
<entry/>
<entry>
ALL
</entry>
<entry>
URI
</entry>
</row>
<row>
<entry>
CREATE TABLE
</entry>
<entry>
CREATE
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry>
CREATE TABLE LIKE
</entry>
<entry>
CREATE
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry/>
<entry>
SELECT, INSERT, <b><i>or</i></b> REFRESH
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
CREATE TABLE AS SELECT
</entry>
<entry>
CREATE
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry/>
<entry>
INSERT
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry/>
<entry>
SELECT
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
EXPLAIN CREATE TABLE AS SELECT
</entry>
<entry>
CREATE
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry/>
<entry>
INSERT
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry/>
<entry>
SELECT
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
CREATE TABLE LOCATION
</entry>
<entry>
CREATE
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry/>
<entry>
ALL
</entry>
<entry>
URI
</entry>
</row>
<row>
<entry>
CREATE VIEW
</entry>
<entry>
CREATE
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry/>
<entry>
SELECT
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
ALTER DATABASE SET OWNER
</entry>
<entry>
ALL WITH GRANT
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry>
ALTER TABLE
</entry>
<entry>
ALTER
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
ALTER TABLE SET LOCATION
</entry>
<entry>
ALTER
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry/>
<entry>
ALL
</entry>
<entry>
URI
</entry>
</row>
<row>
<entry>
ALTER TABLE RENAME
</entry>
<entry>
CREATE
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry/>
<entry>
ALL
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
ALTER TABLE SET OWNER
</entry>
<entry>
ALL WITH GRANT
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
ALTER VIEW
</entry>
<entry>
ALTER
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry/>
<entry>
SELECT
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
ALTER VIEW RENAME
</entry>
<entry>
CREATE
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry/>
<entry>
ALL
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
ALTER VIEW SET OWNER
</entry>
<entry>
ALL WITH GRANT
</entry>
<entry>
VIEW
</entry>
</row>
<row>
<entry>
DROP DATABASE
</entry>
<entry>
DROP
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry>
DROP TABLE
</entry>
<entry>
DROP
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
DROP VIEW
</entry>
<entry>
DROP
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
CREATE FUNCTION
</entry>
<entry>
CREATE
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry/>
<entry>
ALL
</entry>
<entry>
URI
</entry>
</row>
<row>
<entry>
DROP FUNCTION
</entry>
<entry>
DROP
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry>
COMPUTE STATS
</entry>
<entry>
ALTER and SELECT
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
DROP STATS
</entry>
<entry>
ALTER
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
INVALIDATE METADATA
</entry>
<entry>
REFRESH
</entry>
<entry>
SERVER
</entry>
</row>
<row>
<entry>
INVALIDATE METADATA &lt;table>
</entry>
<entry>
REFRESH
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
REFRESH &lt;table>
</entry>
<entry>
REFRESH
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
REFRESH AUTHORIZATION
</entry>
<entry>
REFRESH
</entry>
<entry>
SERVER
</entry>
</row>
<row>
<entry>
REFRESH FUNCTIONS
</entry>
<entry>
REFRESH
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry>
COMMENT ON DATABASE
</entry>
<entry>
ALTER
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry>
COMMENT ON TABLE
</entry>
<entry>
ALTER
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
COMMENT ON VIEW
</entry>
<entry>
ALTER
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
COMMENT ON COLUMN
</entry>
<entry>
ALTER
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
DESCRIBE DATABASE
</entry>
<entry>
SELECT, INSERT, <b><i>or</i></b> REFRESH
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry>
DESCRIBE &lt;table/view>
</entry>
<entry>
SELECT, INSERT, <b><i>or</i></b> REFRESH
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
                If the user has the SELECT privilege at the COLUMN level, only the columns
                that the user has access to are shown.
</entry>
<entry>
SELECT
</entry>
<entry>
COLUMN
</entry>
</row>
<row>
<entry>
USE
</entry>
<entry>
ANY
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
SHOW DATABASES
</entry>
<entry>
ANY
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
SHOW TABLES
</entry>
<entry>
ANY
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
SHOW FUNCTIONS
</entry>
<entry>
SELECT, INSERT, <b><i>or</i></b> REFRESH
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry>
SHOW PARTITIONS
</entry>
<entry>
SELECT, INSERT, <b><i>or</i></b> REFRESH
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
SHOW TABLE STATS
</entry>
<entry>
SELECT, INSERT, <b><i>or</i></b> REFRESH
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
SHOW COLUMN STATS
</entry>
<entry>
SELECT, INSERT, <b><i>or</i></b> REFRESH
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
SHOW FILES
</entry>
<entry>
SELECT, INSERT, <b><i>or</i></b> REFRESH
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
SHOW CREATE TABLE
</entry>
<entry>
SELECT, INSERT, <b><i>or</i></b> REFRESH
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
SHOW CREATE VIEW
</entry>
<entry>
SELECT, INSERT, <b><i>or</i></b> REFRESH
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
SHOW CREATE FUNCTION
</entry>
<entry>
SELECT, INSERT, <b><i>or</i></b> REFRESH
</entry>
<entry>
DATABASE
</entry>
</row>
<row>
<entry>
SHOW RANGE PARTITIONS (Kudu only)
</entry>
<entry>
SELECT, INSERT, <b><i>or</i></b> REFRESH
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
UPDATE (Kudu only)
</entry>
<entry>
ALL
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
EXPLAIN UPDATE (Kudu only)
</entry>
<entry>
ALL
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
UPSERT (Kudu only)
</entry>
<entry>
ALL
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
WITH UPSERT (Kudu only)
</entry>
<entry>
ALL
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
EXPLAIN UPSERT (Kudu only)
</entry>
<entry>
ALL
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
DELETE (Kudu only)
</entry>
<entry>
ALL
</entry>
<entry>
TABLE
</entry>
</row>
<row>
<entry>
EXPLAIN DELETE (Kudu only)
</entry>
<entry>
ALL
</entry>
<entry>
TABLE
</entry>
</row>
</tbody>
</tgroup>
</table>
</p>
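      <p>
        For illustration, the following sketch shows how privileges like those in the table
        above might be granted. The role, database, and table names are hypothetical, and assume
        that the role has already been created and assigned to a group:
      </p>
      <codeblock>-- Hypothetical role and object names, for illustration only.
-- Lets members of analyst_role run SELECT (and EXPLAIN SELECT) against one table.
GRANT SELECT ON TABLE sales_db.orders TO ROLE analyst_role;
-- Lets the same role create new tables in the database.
GRANT CREATE ON DATABASE sales_db TO ROLE analyst_role;</codeblock>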
<p rev="IMPALA-2660" id="auth_to_local_instructions">
In <keyword keyref="impala26_full"/> and higher, Impala recognizes the
<codeph>auth_to_local</codeph> setting, specified through the HDFS configuration setting
<codeph>hadoop.security.auth_to_local</codeph>. This feature is disabled by default, to
avoid an unexpected change in security-related behavior. To enable it:
<ul>
<li>
<p>
Specify <codeph>&#8209;&#8209;load_auth_to_local_rules=true</codeph> in the
<cmdname>impalad</cmdname> and <cmdname>catalogd</cmdname> configuration settings.
</p>
</li>
</ul>
</p>
<note id="authentication_vs_authorization">
Regardless of the authentication mechanism used, Impala always creates HDFS directories
and data files owned by the same user (typically <codeph>impala</codeph>). To implement
user-level access to different databases, tables, columns, partitions, and so on, use
the Sentry authorization feature, as explained in
<xref href="../topics/impala_authorization.xml#authorization"/>.
</note>
<!-- Contrived nesting needed to allow <ph> with ID to be reused inside the <title> of a conref. -->
<p>
<b><ph id="title_sentry_debug">Debugging Failed Sentry Authorization Requests</ph></b>
</p>
<p id="sentry_debug">
Sentry logs all facts that lead up to authorization decisions at the debug level. If you
do not understand why Sentry is denying access, the best way to debug is to temporarily
turn on debug logging:
<ul>
<li>
Add <codeph>log4j.logger.org.apache.sentry=DEBUG</codeph> to the
<filepath>log4j.properties</filepath> file on each host in the cluster, in the
appropriate configuration directory for each service.
</li>
</ul>
Specifically, look for exceptions and messages such as:
<codeblock xml:space="preserve">FilePermission server..., RequestPermission server...., result [true|false]</codeblock>
which indicate each evaluation Sentry makes. The <codeph>FilePermission</codeph> is from
the policy file, while <codeph>RequestPermission</codeph> is the privilege required for
the query. A <codeph>RequestPermission</codeph> will iterate over all appropriate
<codeph>FilePermission</codeph> settings until a match is found. If no matching
privilege is found, Sentry returns <codeph>false</codeph> indicating <q>Access
Denied</q>.
</p>
</section>
<section id="restrictions">
<title>Restrictions and Limitations</title>
<p>
Potential misunderstandings for people familiar with other database systems. Currently
not referenced anywhere, because they were only conref'ed from the FAQ page.
</p>
<p id="string_concatenation">
With Impala, you use the built-in <codeph>CONCAT()</codeph> function to concatenate two,
three, or more strings:
<codeblock xml:space="preserve">select concat('some prefix: ', col1) from t1;
select concat('abc','mno','xyz');</codeblock>
Impala does not currently support operators for string concatenation, such as
<codeph>||</codeph> as seen in some other database systems.
</p>
<p id="column_aliases" rev="IMPALA-6415 IMPALA-5191">
You can specify column aliases with or without the <codeph>AS</codeph> keyword, and with
no quotation marks, single quotation marks, or double quotation marks. Some kind of
quotation marks are required if the column alias contains any spaces or other
problematic characters. The alias text is displayed in the
<cmdname>impala-shell</cmdname> output as all-lowercase. For example:
<codeblock xml:space="preserve">[localhost:21000] &gt; select c1 First_Column from t;
[localhost:21000] &gt; select c1 as First_Column from t;
+--------------+
| first_column |
+--------------+
...
[localhost:21000] &gt; select c1 'First Column' from t;
[localhost:21000] &gt; select c1 as 'First Column' from t;
+--------------+
| first column |
+--------------+
...
[localhost:21000] &gt; select c1 "First Column" from t;
[localhost:21000] &gt; select c1 as "First Column" from t;
+--------------+
| first column |
+--------------+
...</codeblock>
From Impala 3.0, the alias substitution logic in the <codeph>GROUP BY</codeph>,
<codeph>HAVING</codeph>, and <codeph>ORDER BY</codeph> clauses has become more
consistent with standard SQL behavior, as follows. Aliases are now only legal at the top
level, and not in subexpressions. The following statements are allowed:
<codeblock>
SELECT int_col / 2 AS x
FROM t
GROUP BY x;
SELECT int_col / 2 AS x
FROM t
ORDER BY x;
SELECT NOT bool_col AS nb
FROM t
GROUP BY nb
HAVING nb;
</codeblock>
And the following statements are NOT allowed:
<codeblock>
SELECT int_col / 2 AS x
FROM t
GROUP BY x / 2;
SELECT int_col / 2 AS x
FROM t
ORDER BY -x;
SELECT int_col / 2 AS x
FROM t
GROUP BY x
HAVING x > 3;
</codeblock>
</p>
<p id="column_ordinals" rev="IMPALA-6415 IMPALA-5191"> You can refer to
<codeph>SELECT</codeph>-list items by their ordinal position. Impala
supports ordinals in the <codeph>GROUP BY</codeph>,
<codeph>HAVING</codeph>, and <codeph>ORDER BY</codeph> clauses. From
Impala 3.0, ordinals can only be used at the top level. For example, the
following statements are allowed:
<codeblock>
SELECT int_col / 2, sum(x)
FROM t
GROUP BY 1;
SELECT int_col / 2
FROM t
ORDER BY 1;
SELECT NOT bool_col
FROM t
GROUP BY 1
HAVING 1;
</codeblock>
Numbers in subexpressions are not interpreted as ordinals:
<codeblock>
SELECT int_col / 2, sum(x)
FROM t
GROUP BY 1 * 2;
-- The above parses OK, however GROUP BY 1 * 2 has no effect.
SELECT int_col / 2
FROM t
ORDER BY 1 + 2;
-- The above parses OK, however ORDER BY 1 + 2 has no effect.
SELECT NOT bool_col
FROM t
GROUP BY 1
HAVING not 1;
-- The above raises an error at parse-time.
</codeblock>
</p>
<p id="temp_tables">
Currently, Impala does not support temporary tables. Some other database systems have a
class of <q>lightweight</q> tables that are held only in memory and/or that are only
accessible by one connection and disappear when the session ends. In Impala, creating
new databases is a relatively lightweight operation, so as an alternative, you could
create a database with a unique name and use <codeph>CREATE TABLE LIKE</codeph>,
<codeph>CREATE TABLE AS SELECT</codeph>, and <codeph>INSERT</codeph> statements to
        create a table in that database to hold the result set of a query, for use in subsequent
queries. When finished, issue a <codeph>DROP TABLE</codeph> statement followed by
<codeph>DROP DATABASE</codeph>.
</p>
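      <p>
        The following is a minimal sketch of that workaround; the database and table names are
        hypothetical:
      </p>
      <codeblock>-- Hypothetical names; use a uniquely named database as a scratch area.
create database scratch_session42;
create table scratch_session42.top_customers as
  select customer_id, sum(amount) as total
  from sales.orders
  group by customer_id
  order by total desc
  limit 100;
-- Run follow-up queries against the intermediate result...
select count(*) from scratch_session42.top_customers;
-- ...then clean up when finished.
drop table scratch_session42.top_customers;
drop database scratch_session42;</codeblock>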
</section>
<section id="standards">
<title>Blurbs About Standards Compliance</title>
<p>
The following blurbs simplify the process of flagging which SQL standard various
features were first introduced in. The wording and the tagging can be modified by
editing one central instance of each blurb. Not extensively used yet, just here and
there in the SQL Language Reference section.
</p>
<p id="sql1986">
<!-- No Wikipedia page for SQL-1986, so no link. -->
<b>Standards compliance:</b> Introduced in SQL-1986.
</p>
<p id="sql1989">
<!-- No Wikipedia page for SQL-1989, so no link. -->
<b>Standards compliance:</b> Introduced in SQL-1989.
</p>
<p id="sql1992">
<b>Standards compliance:</b> Introduced in
<xref href="http://en.wikipedia.org/wiki/SQL-92" scope="external" format="html">SQL-1992</xref>.
</p>
<p id="sql1999">
<b>Standards compliance:</b> Introduced in
<xref href="http://en.wikipedia.org/wiki/SQL:1999" scope="external" format="html">SQL:1999</xref>.
</p>
<p id="sql2003">
<b>Standards compliance:</b> Introduced in
<xref href="http://en.wikipedia.org/wiki/SQL:2003" scope="external" format="html">SQL:2003</xref>.
</p>
<p id="sql2008">
<b>Standards compliance:</b> Introduced in
<xref href="http://en.wikipedia.org/wiki/SQL:2008" scope="external" format="html">SQL:2008</xref>.
</p>
<p id="sql2011">
<b>Standards compliance:</b> Introduced in
<xref href="http://en.wikipedia.org/wiki/SQL:2011" scope="external" format="html">SQL:2011</xref>.
</p>
<p id="hiveql">
<b>Standards compliance:</b> Extension first introduced in HiveQL.
</p>
<p id="impalaql">
<b>Standards compliance:</b> Extension first introduced in Impala.
</p>
</section>
<section id="refresh_invalidate">
<title>Background Info for REFRESH, INVALIDATE METADATA, and General Metadata Discussion</title>
<p id="invalidate_then_refresh" rev="DOCS-1013">
Because <codeph>REFRESH <varname>table_name</varname></codeph> only works for tables
that the current Impala node is already aware of, when you create a new table in the
Hive shell, enter <codeph>INVALIDATE METADATA <varname>new_table</varname></codeph>
before you can see the new table in <cmdname>impala-shell</cmdname>. Once the table is
known by Impala, you can issue <codeph>REFRESH <varname>table_name</varname></codeph>
after you add data files for that table.
</p>
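      <p>
        For example, with a hypothetical table created through Hive:
      </p>
      <codeblock>-- In the Hive shell (hypothetical table name):
--   CREATE TABLE new_table (x INT);
-- In impala-shell, make the newly created table visible to Impala:
invalidate metadata new_table;
-- Once Impala knows about the table, pick up newly added data files with:
refresh new_table;</codeblock>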
<p id="refresh_vs_invalidate">
<codeph>INVALIDATE METADATA</codeph> and <codeph>REFRESH</codeph> are counterparts:
<ul>
<li>
            <codeph>INVALIDATE METADATA</codeph> is an asynchronous operation that simply
discards the loaded metadata from the catalog and coordinator caches. After that
operation, the catalog and all the Impala coordinators only know about the existence
of databases and tables and nothing more. Metadata loading for tables is triggered
by any subsequent queries.
</li>
<li>
<codeph>REFRESH</codeph> reloads the metadata synchronously.
<codeph>REFRESH</codeph> is more lightweight than doing a full metadata load after a
            table has been invalidated. <codeph>REFRESH</codeph> cannot detect changes in block
            locations triggered by operations like the HDFS balancer, which can cause remote
            reads during query execution, with negative performance implications.
</li>
</ul>
</p>
</section>
<section id="sql_ref">
<title>SQL Language Reference Snippets</title>
<p>
These reusable chunks were taken from conrefs originally in
<filepath>ciiu_langref_sql.xml</filepath>. Or they are primarily used in new SQL syntax
topics underneath that parent topic.
</p>
<p id="tablesample_caveat" rev="IMPALA-5309">
The <codeph>TABLESAMPLE</codeph> clause of the <codeph>SELECT</codeph> statement does
not apply to a table reference derived from a view, a subquery, or anything other than a
real base table. This clause only works for tables backed by HDFS or HDFS-like data
files, therefore it does not apply to Kudu or HBase tables.
</p>
<p id="boolean_functions_vs_expressions" rev="2.11.0 IMPALA-1767">
In <keyword keyref="impala211_full"/> and higher, you can use the operators <codeph>IS
[NOT] TRUE</codeph> and <codeph>IS [NOT] FALSE</codeph> as equivalents for the built-in
functions <codeph>ISTRUE()</codeph>, <codeph>ISNOTTRUE()</codeph>,
<codeph>ISFALSE()</codeph>, and <codeph>ISNOTFALSE()</codeph>.
</p>
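      <p>
        For example, the following pairs of queries are equivalent (the table and column names
        are hypothetical):
      </p>
      <codeblock>-- Hypothetical table with a BOOLEAN column.
select count(*) from t1 where bool_col is true;
select count(*) from t1 where istrue(bool_col);
select count(*) from t1 where bool_col is not false;
select count(*) from t1 where isnotfalse(bool_col);</codeblock>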
<p id="base64_charset">
The set of characters that can be generated as output from
<codeph>BASE64ENCODE()</codeph>, or specified in the argument string to
<codeph>BASE64DECODE()</codeph>, are the ASCII uppercase and lowercase letters (A-Z,
a-z), digits (0-9), and the punctuation characters <codeph>+</codeph>,
<codeph>/</codeph>, and <codeph>=</codeph>.
</p>
<p id="base64_error_handling">
If the argument string to <codeph>BASE64DECODE()</codeph> does not represent a valid
base64-encoded value, subject to the constraints of the Impala implementation such as
the allowed character set, the function returns <codeph>NULL</codeph>.
</p>
<p id="base64_use_cases">
The functions <codeph>BASE64ENCODE()</codeph> and <codeph>BASE64DECODE()</codeph> are
typically used in combination, to store in an Impala table string data that is
problematic to store or transmit. For example, you could use these functions to store
string data that uses an encoding other than UTF-8, or to transform the values in
contexts that require ASCII values, such as for partition key columns. Keep in mind that
base64-encoded values produce different results for string functions such as
<codeph>LENGTH()</codeph>, <codeph>MAX()</codeph>, and <codeph>MIN()</codeph> than when
those functions are called with the unencoded string values.
</p>
<p id="base64_alignment">
All return values produced by <codeph>BASE64ENCODE()</codeph> are a multiple of 4 bytes
in length. All argument values supplied to <codeph>BASE64DECODE()</codeph> must also be
a multiple of 4 bytes in length. If a base64-encoded value would otherwise have a
different length, it can be padded with trailing <codeph>=</codeph> characters to reach
a length that is a multiple of 4 bytes.
</p>
<p id="base64_examples">
The following examples show how to use <codeph>BASE64ENCODE()</codeph> and
<codeph>BASE64DECODE()</codeph> together to store and retrieve string values:
<codeblock>
-- An arbitrary string can be encoded in base 64.
-- The length of the output is a multiple of 4 bytes,
-- padded with trailing = characters if necessary.
select base64encode('hello world') as encoded,
length(base64encode('hello world')) as length;
+------------------+--------+
| encoded | length |
+------------------+--------+
| aGVsbG8gd29ybGQ= | 16 |
+------------------+--------+
-- Passing an encoded value to base64decode() produces
-- the original value.
select base64decode('aGVsbG8gd29ybGQ=') as decoded;
+-------------+
| decoded |
+-------------+
| hello world |
+-------------+
</codeblock>
These examples demonstrate incorrect encoded values that produce <codeph>NULL</codeph>
return values when decoded:
<codeblock>
-- The input value to base64decode() must be a multiple of 4 bytes.
-- In this case, leaving off the trailing = padding character
-- produces a NULL return value.
select base64decode('aGVsbG8gd29ybGQ') as decoded;
+---------+
| decoded |
+---------+
| NULL |
+---------+
WARNINGS: UDF WARNING: Invalid base64 string; input length is 15,
which is not a multiple of 4.
-- The input to base64decode() can only contain certain characters.
-- The $ character in this case causes a NULL return value.
select base64decode('abc$');
+----------------------+
| base64decode('abc$') |
+----------------------+
| NULL |
+----------------------+
WARNINGS: UDF WARNING: Could not base64 decode input in space 4; actual output length 0
</codeblock>
These examples demonstrate <q>round-tripping</q> of an original string to an encoded
string, and back again. This technique is applicable if the original source is in an
unknown encoding, or if some intermediate processing stage might cause national
characters to be misrepresented:
<codeblock>
select 'circumflex accents: â, ê, î, ô, û' as original,
base64encode('circumflex accents: â, ê, î, ô, û') as encoded;
+-----------------------------------+------------------------------------------------------+
| original | encoded |
+-----------------------------------+------------------------------------------------------+
| circumflex accents: â, ê, î, ô, û | Y2lyY3VtZmxleCBhY2NlbnRzOiDDoiwgw6osIMOuLCDDtCwgw7s= |
+-----------------------------------+------------------------------------------------------+
select base64encode('circumflex accents: â, ê, î, ô, û') as encoded,
base64decode(base64encode('circumflex accents: â, ê, î, ô, û')) as decoded;
+------------------------------------------------------+-----------------------------------+
| encoded | decoded |
+------------------------------------------------------+-----------------------------------+
| Y2lyY3VtZmxleCBhY2NlbnRzOiDDoiwgw6osIMOuLCDDtCwgw7s= | circumflex accents: â, ê, î, ô, û |
+------------------------------------------------------+-----------------------------------+
</codeblock>
</p>
<codeblock id="parquet_fallback_schema_resolution_example"><![CDATA[
create database schema_evolution;
use schema_evolution;
create table t1 (c1 int, c2 boolean, c3 string, c4 timestamp)
stored as parquet;
insert into t1 values
(1, true, 'yes', now()),
(2, false, 'no', now() + interval 1 day);
select * from t1;
+----+-------+-----+-------------------------------+
| c1 | c2 | c3 | c4 |
+----+-------+-----+-------------------------------+
| 1 | true | yes | 2016-06-28 14:53:26.554369000 |
| 2 | false | no | 2016-06-29 14:53:26.554369000 |
+----+-------+-----+-------------------------------+
desc formatted t1;
...
| Location: | /user/hive/warehouse/schema_evolution.db/t1 |
...
-- Hypothetical definition of T2, consistent with the results shown below:
-- it declares only 2 of the columns, in a different order.
create table t2 (c4 timestamp, c2 boolean) stored as parquet;
-- Make T2 have the same data file as in T1, including 2
-- unused columns and column order different than T2 expects.
load data inpath '/user/hive/warehouse/schema_evolution.db/t1'
into table t2;
+----------------------------------------------------------+
| summary |
+----------------------------------------------------------+
| Loaded 1 file(s). Total files in destination location: 1 |
+----------------------------------------------------------+
-- 'position' is the default setting.
-- Impala cannot read the Parquet file if the column order does not match.
set PARQUET_FALLBACK_SCHEMA_RESOLUTION=position;
PARQUET_FALLBACK_SCHEMA_RESOLUTION set to position
select * from t2;
WARNINGS:
File 'schema_evolution.db/t2/45331705_data.0.parq'
has an incompatible Parquet schema for column 'schema_evolution.t2.c4'.
Column type: TIMESTAMP, Parquet schema: optional int32 c1 [i:0 d:1 r:0]
File 'schema_evolution.db/t2/45331705_data.0.parq'
has an incompatible Parquet schema for column 'schema_evolution.t2.c4'.
Column type: TIMESTAMP, Parquet schema: optional int32 c1 [i:0 d:1 r:0]
-- With the 'name' setting, Impala can read the Parquet data files
-- despite mismatching column order.
set PARQUET_FALLBACK_SCHEMA_RESOLUTION=name;
PARQUET_FALLBACK_SCHEMA_RESOLUTION set to name
select * from t2;
+-------------------------------+-------+
| c4 | c2 |
+-------------------------------+-------+
| 2016-06-28 14:53:26.554369000 | true |
| 2016-06-29 14:53:26.554369000 | false |
+-------------------------------+-------+
]]>
</codeblock>
<note rev="IMPALA-3334" id="one_but_not_true">
In <keyword keyref="impala250"/>, only the value 1 enables the option, and the value
<codeph>true</codeph> is not recognized. This limitation is tracked by the issue
<xref keyref="IMPALA-3334">IMPALA-3334</xref>, which shows the releases where the
problem is fixed.
</note>
<p rev="IMPALA-3732" id="avro_2gb_strings">
The Avro specification allows string values up to 2**64 bytes in length. Impala queries
for Avro tables use 32-bit integers to hold string lengths. In
<keyword keyref="impala25_full"/> and higher, Impala truncates <codeph>CHAR</codeph> and
<codeph>VARCHAR</codeph> values in Avro tables to (2**31)-1 bytes. If a query encounters
a <codeph>STRING</codeph> value longer than (2**31)-1 bytes in an Avro table, the query
fails. In earlier releases, encountering such long values in an Avro table could cause a
crash.
</p>
<p rev="2.6.0 IMPALA-3369" id="set_column_stats_example">
You specify a case-insensitive symbolic name for the kind of statistics:
<codeph>numDVs</codeph>, <codeph>numNulls</codeph>, <codeph>avgSize</codeph>,
<codeph>maxSize</codeph>. The key names and values are both quoted. This operation
applies to an entire table, not a specific partition. For example:
<codeblock>
create table t1 (x int, s string);
insert into t1 values (1, 'one'), (2, 'two'), (2, 'deux');
show column stats t1;
+--------+--------+------------------+--------+----------+----------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+--------+--------+------------------+--------+----------+----------+
| x | INT | -1 | -1 | 4 | 4 |
| s | STRING | -1 | -1 | -1 | -1 |
+--------+--------+------------------+--------+----------+----------+
alter table t1 set column stats x ('numDVs'='2','numNulls'='0');
alter table t1 set column stats s ('numdvs'='3','maxsize'='4');
show column stats t1;
+--------+--------+------------------+--------+----------+----------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+--------+--------+------------------+--------+----------+----------+
| x | INT | 2 | 0 | 4 | 4 |
| s | STRING | 3 | -1 | 4 | -1 |
+--------+--------+------------------+--------+----------+----------+
</codeblock>
</p>
<codeblock id="set_numrows_example">create table analysis_data stored as parquet as select * from raw_data;
Inserted 1000000000 rows in 181.98s
compute stats analysis_data;
insert into analysis_data select * from smaller_table_we_forgot_before;
Inserted 1000000 rows in 15.32s
-- Now there are 1001000000 rows. We can update this single data point in the stats.
alter table analysis_data set tblproperties('numRows'='1001000000', 'STATS_GENERATED_VIA_STATS_TASK'='true');</codeblock>
<codeblock id="set_numrows_partitioned_example">-- If the table originally contained 1 million rows, and we add another partition with 30 thousand rows,
-- change the numRows property for the partition and the overall table.
alter table partitioned_data partition(year=2009, month=4) set tblproperties ('numRows'='30000', 'STATS_GENERATED_VIA_STATS_TASK'='true');
alter table partitioned_data set tblproperties ('numRows'='1030000', 'STATS_GENERATED_VIA_STATS_TASK'='true');</codeblock>
<p id="int_overflow_behavior">
Impala does not return column overflows as <codeph>NULL</codeph>, so that customers can
      distinguish between <codeph>NULL</codeph> data and overflow conditions, as they can with
      traditional database systems. Impala returns the largest or smallest
value in the range for the type. For example, valid values for a
<codeph>tinyint</codeph> range from -128 to 127. In Impala, a <codeph>tinyint</codeph>
with a value of -200 returns -128 rather than <codeph>NULL</codeph>. A
<codeph>tinyint</codeph> with a value of 200 returns 127.
</p>
<p rev="2.5.0" id="partition_key_optimization">
If you frequently run aggregate functions such as <codeph>MIN()</codeph>,
<codeph>MAX()</codeph>, and <codeph>COUNT(DISTINCT)</codeph> on partition key columns,
consider enabling the <codeph>OPTIMIZE_PARTITION_KEY_SCANS</codeph> query option, which
optimizes such queries. This feature is available in <keyword keyref="impala25_full"/>
and higher. See <xref href="../topics/impala_optimize_partition_key_scans.xml"/> for the
kinds of queries that this option applies to, and slight differences in how partitions
are evaluated when this query option is enabled.
</p>
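      <p>
        A minimal sketch, assuming a hypothetical table partitioned by a <codeph>YEAR</codeph>
        column:
      </p>
      <codeblock>set optimize_partition_key_scans=1;
-- Hypothetical partitioned table. With the option enabled, aggregates on the
-- partition key columns can be answered from partition metadata rather than
-- by scanning the data files.
select min(year), max(year), count(distinct year) from partitioned_sales;</codeblock>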
<p id="live_reporting_details">
The output from this query option is printed to standard error. The output is only
displayed in interactive mode, that is, not when the <codeph>-q</codeph> or
<codeph>-f</codeph> options are used.
</p>
<p id="live_progress_live_summary_asciinema">
To see how the <codeph>LIVE_PROGRESS</codeph> and <codeph>LIVE_SUMMARY</codeph> query
options work in real time, see
<xref href="https://asciinema.org/a/1rv7qippo0fe7h5k1b6k4nexk" scope="external" format="html">this
animated demo</xref>.
</p>
<p rev="2.5.0" id="runtime_filter_mode_blurb">
Because the runtime filtering feature is enabled by default only for local processing,
the other filtering-related query options have the greatest effect when used in
combination with the setting <codeph>RUNTIME_FILTER_MODE=GLOBAL</codeph>.
</p>
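      <p>
        For example, a sketch that enables global filtering before running a join query (the
        table names and the wait time are illustrative):
      </p>
      <codeblock>set runtime_filter_mode=GLOBAL;
set runtime_filter_wait_time_ms=10000;
-- Hypothetical fact and dimension tables. With GLOBAL mode, filters built from
-- the dimension table can be published to scans of the fact table on other hosts.
select count(*) from big_fact f join small_dim d on f.dim_id = d.id
where d.region = 'EU';</codeblock>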
<note id="square_bracket_hint_caveat" rev="IMPALA-2522">
The square bracket style of hint is now deprecated and might be removed in a future
release. For that reason, any newly added hints are not available with the square
bracket syntax.
</note>
<p rev="2.5.0" id="runtime_filtering_option_caveat">
Because the runtime filtering feature applies mainly to resource-intensive and
long-running queries, only adjust this query option when tuning long-running queries
involving some combination of large partitioned tables and joins involving large tables.
</p>
<p rev="2.3.0" id="impala_shell_progress_reports_compute_stats_caveat">
The <codeph>LIVE_PROGRESS</codeph> and <codeph>LIVE_SUMMARY</codeph> query options
currently do not produce any output during <codeph>COMPUTE STATS</codeph> operations.
</p>
<!-- This is a shorter version of the similar 'caveat' text. This shorter one can be reused more easily in various places. -->
<p rev="2.3.0" id="impala_shell_progress_reports_shell_only_blurb">
The <codeph>LIVE_PROGRESS</codeph> and <codeph>LIVE_SUMMARY</codeph> query options only
apply inside the <cmdname>impala-shell</cmdname> interpreter. You cannot use them with
the <codeph>SET</codeph> statement from a JDBC or ODBC application.
</p>
<p id="impala_shell_progress_reports_shell_only_caveat">
Because the <codeph>LIVE_PROGRESS</codeph> and <codeph>LIVE_SUMMARY</codeph> query
options are available only within the <cmdname>impala-shell</cmdname> interpreter:
<ul>
<li>
<p>
You cannot change these query options through the SQL <codeph>SET</codeph>
statement using the JDBC or ODBC interfaces. The <codeph>SET</codeph> command in
<cmdname>impala-shell</cmdname> recognizes these names as shell-only options.
</p>
</li>
<li>
<p>
Be careful when using <cmdname>impala-shell</cmdname> on a
pre-<keyword keyref="impala23"/> system to connect to a system running
<keyword keyref="impala23"/> or higher. The older <cmdname>impala-shell</cmdname>
does not recognize these query option names. Upgrade
<cmdname>impala-shell</cmdname> on the systems where you intend to use these query
options.
</p>
</li>
<li>
<p>
Likewise, the <cmdname>impala-shell</cmdname> command relies on some information
only available in <keyword keyref="impala23_full"/> and higher to prepare live
progress reports and query summaries. The <codeph>LIVE_PROGRESS</codeph> and
<codeph>LIVE_SUMMARY</codeph> query options have no effect when
<cmdname>impala-shell</cmdname> connects to a cluster running an older version of
Impala.
</p>
</li>
</ul>
</p>
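      <p>
        For example, within the <cmdname>impala-shell</cmdname> interpreter (the table name is
        hypothetical):
      </p>
      <codeblock>[localhost:21000] &gt; set live_progress=true;
[localhost:21000] &gt; set live_summary=true;
[localhost:21000] &gt; select count(*) from big_table;  -- progress bar and summary appear here</codeblock>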
<!-- Same example used in both CREATE DATABASE and DROP DATABASE. -->
<codeblock id="create_drop_db_example">create database first_db;
use first_db;
create table t1 (x int);
create database second_db;
use second_db;
-- Each database has its own namespace for tables.
-- You can reuse the same table names in each database.
create table t1 (s string);
create database temp;
-- You can either USE a database after creating it,
-- or qualify all references to the table name with the name of the database.
-- Here, tables T2 and T3 are both created in the TEMP database.
create table temp.t2 (x int, y int);
use temp;
create table t3 (s string);
-- You cannot drop a database while it is selected by the USE statement.
drop database temp;
<i>ERROR: AnalysisException: Cannot drop current default database: temp</i>
-- The always-available database 'default' is a convenient one to USE
-- before dropping a database you created.
use default;
-- Before dropping a database, first drop all the tables inside it,
<ph rev="2.3.0">-- or in <keyword keyref="impala23_full"/> and higher use the CASCADE clause.</ph>
drop database temp;
ERROR: ImpalaRuntimeException: Error making 'dropDatabase' RPC to Hive Metastore:
CAUSED BY: InvalidOperationException: Database temp is not empty
show tables in temp;
+------+
| name |
+------+
| t2   |
| t3   |
+------+
<ph rev="2.3.0">-- <keyword keyref="impala23_full"/> and higher:</ph>
<ph rev="2.3.0">drop database temp cascade;</ph>
-- Earlier releases:
drop table temp.t2;
drop table temp.t3;
drop database temp;
</codeblock>
<p id="cast_convenience_fn_example">
This example shows how to use the <codeph>castto*()</codeph> functions as an equivalent
to <codeph>CAST(<varname>value</varname> AS <varname>type</varname>)</codeph>
expressions.
</p>
<p id="cast_convenience_fn_usage">
      <b>Usage notes:</b> A convenience function to skip the
      <codeph>CAST(<varname>value</varname> AS <varname>type</varname>)</codeph> SQL syntax, for example when
programmatically generating SQL statements where a regular function call might be easier
to construct.
</p>
<p rev="2.3.0" id="current_timezone_tip">
To determine the time zone of the server you are connected to, in
<keyword keyref="impala23_full"/> and higher you can call the
<codeph>timeofday()</codeph> function, which includes the time zone specifier in its
return value. Remember that with cloud computing, the server you interact with might be
in a different time zone than you are, or different sessions might connect to servers in
different time zones, or a cluster might include servers in more than one time zone.
</p>
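      <p>
        For example (the exact output depends on the server's locale and time zone):
      </p>
      <codeblock>select timeofday();
-- Returns a string such as: Tue Jun 28 14:53:26 2016 PDT</codeblock>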
<p rev="2.2.0" id="timezone_conversion_caveat">
The way this function deals with time zones when converting to or from
<codeph>TIMESTAMP</codeph> values is affected by the
<codeph>&#8209;&#8209;use_local_tz_for_unix_timestamp_conversions</codeph> startup flag
for the <cmdname>impalad</cmdname> daemon. See
<xref
href="../topics/impala_timestamp.xml#timestamp"/> for details about how
Impala handles time zone considerations for the <codeph>TIMESTAMP</codeph> data type.
</p>
<p rev="2.6.0 IMPALA-3558" id="s3_drop_table_purge"> For best
compatibility with the S3 write support in <keyword
keyref="impala26_full"/> and higher: <ul>
<li> Use native Hadoop techniques to create data files in S3 for
querying through Impala. </li>
<li> Use the <codeph>PURGE</codeph> clause of <codeph>DROP
TABLE</codeph> when dropping internal (managed) tables. </li>
</ul> By default, when you drop an internal (managed) table, the data
files are moved to the HDFS trashcan. This operation is expensive for
tables that reside on the Amazon S3 object store. Therefore, for S3
tables, prefer to use <codeph>DROP TABLE <varname>table_name</varname>
PURGE</codeph> rather than the default <codeph>DROP TABLE</codeph>
statement. The <codeph>PURGE</codeph> clause makes Impala delete the
data files immediately, skipping the HDFS trashcan. For the
<codeph>PURGE</codeph> clause to work effectively, you must originally
create the data files on S3 using one of the tools from the Hadoop
ecosystem, such as <codeph>hadoop fs -cp</codeph>, or
<codeph>INSERT</codeph> in Impala or Hive. </p>
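      <p>
        For example, with a hypothetical S3-backed managed table:
      </p>
      <codeblock>-- Hypothetical table name. The PURGE clause deletes the data files immediately
-- instead of moving them to the HDFS trashcan.
drop table s3_sales.order_history purge;</codeblock>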
<p rev="2.11.0 IMPALA-4252" id="filter_option_bloom_only">
This query option affects only Bloom filters, not the min/max filters that are applied
to Kudu tables. Therefore, it does not affect the performance of queries against Kudu
tables.
</p>
<p rev="2.6.0 IMPALA-1878" id="s3_dml_performance"> Because of differences
between S3 and traditional filesystems, DML operations for S3 tables can
take longer than for tables on HDFS. For example, both the <codeph>LOAD
DATA</codeph> statement and the final stage of the
<codeph>INSERT</codeph> and <codeph>CREATE TABLE AS SELECT</codeph>
statements involve moving files from one directory to another. (In the
case of <codeph>INSERT</codeph> and <codeph>CREATE TABLE AS
SELECT</codeph>, the files are moved from a temporary staging
directory to the final destination directory.) Because S3 does not
support a <q>rename</q> operation for existing objects, in these cases
Impala actually copies the data files from one location to another and
then removes the original files. In <keyword keyref="impala26_full"/>,
the <codeph>S3_SKIP_INSERT_STAGING</codeph> query option provides a way
to speed up <codeph>INSERT</codeph> statements for S3 tables and
partitions, with the tradeoff that a problem during statement execution
could leave data in an inconsistent state. It does not apply to
<codeph>INSERT OVERWRITE</codeph> or <codeph>LOAD DATA</codeph>
statements. See <xref
href="../topics/impala_s3_skip_insert_staging.xml#s3_skip_insert_staging"
>S3_SKIP_INSERT_STAGING Query Option</xref> for details. </p>
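      <p>
        A minimal sketch of using that query option (the table names are hypothetical, and keep
        in mind the inconsistent-state tradeoff described above):
      </p>
      <codeblock>set s3_skip_insert_staging=true;
-- Hypothetical S3-backed destination table and staging table.
insert into s3_sales.order_history partition (year=2016)
  select * from staging_orders;</codeblock>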
<p id="adls_block_splitting" rev="IMPALA-5383">
Because ADLS does not expose the block sizes of data files the way HDFS does, any Impala
<codeph>INSERT</codeph> or <codeph>CREATE TABLE AS SELECT</codeph> statements use the
<codeph>PARQUET_FILE_SIZE</codeph> query option setting to define the size of Parquet
data files. (Using a large block size is more important for Parquet tables than for
tables that use other file formats.)
</p>
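      <p>
        For example, to aim for 256 MB Parquet data files when writing to a hypothetical
        ADLS-backed table:
      </p>
      <codeblock>set parquet_file_size=256m;
-- Hypothetical destination and source tables residing on ADLS.
insert overwrite adls_db.events_parquet select * from events_staging;</codeblock>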
<p rev="2.6.0 IMPALA-3453" id="s3_block_splitting">
In <keyword keyref="impala26_full"/> and higher, Impala queries are optimized for files
stored in Amazon S3. For Impala tables that use the file formats Parquet, ORC, RCFile,
SequenceFile, Avro, and uncompressed text, the setting
<codeph>fs.s3a.block.size</codeph> in the <filepath>core-site.xml</filepath>
configuration file determines how Impala divides the I/O work of reading the data files.
This configuration setting is specified in bytes. By default, this value is 33554432 (32
MB), meaning that Impala parallelizes S3 read operations on the files as if they were
made up of 32 MB blocks. For example, if your S3 queries primarily access Parquet files
written by MapReduce or Hive, increase <codeph>fs.s3a.block.size</codeph> to 134217728
(128 MB) to match the row group size of those files. If most S3 queries involve Parquet
files written by Impala, increase <codeph>fs.s3a.block.size</codeph> to 268435456 (256
MB) to match the row group size produced by Impala.
</p>
<note rev="2.6.0 IMPALA-1878" id="s3_production" type="important">
<p>
In <keyword keyref="impala26_full"/> and higher, Impala supports both queries
(<codeph>SELECT</codeph>) and DML (<codeph>INSERT</codeph>, <codeph>LOAD
DATA</codeph>, <codeph>CREATE TABLE AS SELECT</codeph>) for data residing on Amazon
S3. With the inclusion of write support,
<!-- and configuration settings for more secure S3 key management, -->
the Impala support for S3 is now considered ready for production use.
</p>
</note>
<note rev="2.2.0" id="s3_caveat" type="important">
<p>
Impala query support for Amazon S3 is included in <keyword keyref="impala22_full"/>,
but is not supported or recommended for production use in this version.
</p>
</note>
<p rev="2.6.0 IMPALA-1878" id="s3_ddl">
In <keyword keyref="impala26_full"/> and higher, Impala DDL statements such as
<codeph>CREATE DATABASE</codeph>, <codeph>CREATE TABLE</codeph>, <codeph>DROP DATABASE
CASCADE</codeph>, <codeph>DROP TABLE</codeph>, and <codeph>ALTER TABLE [ADD|DROP]
PARTITION</codeph> can create or remove folders as needed in the Amazon S3 system. Prior
to <keyword keyref="impala26_full"/>, you had to create folders yourself and point
Impala database, tables, or partitions at them, and manually remove folders when no
longer needed. See <xref href="../topics/impala_s3.xml#s3"/> for details about reading
and writing S3 data with Impala.
</p>
<p rev="2.9.0 IMPALA-5333" id="adls_dml">
In <keyword
keyref="impala29_full"/> and higher, the Impala DML statements
(<codeph>INSERT</codeph>, <codeph>LOAD DATA</codeph>, and <codeph>CREATE TABLE AS
SELECT</codeph>) can write data into a table or partition that resides in the Azure Data
Lake Store (ADLS). ADLS Gen2 is supported in <keyword keyref="impala31"/> and higher.
</p>
<p rev="2.9.0 IMPALA-5333">
      In the <codeph>CREATE TABLE</codeph> or <codeph>ALTER TABLE</codeph> statements, specify
the ADLS location for tables and partitions with the <codeph>adl://</codeph> prefix for
ADLS Gen1 and <codeph>abfs://</codeph> or <codeph>abfss://</codeph> for ADLS Gen2 in the
<codeph>LOCATION</codeph> attribute.
</p>
<p rev="2.9.0 IMPALA-5333" id="adls_dml_end">
If you bring data into ADLS using the normal ADLS transfer mechanisms instead of Impala
DML statements, issue a <codeph>REFRESH</codeph> statement for the table before using
Impala to query the ADLS data.
</p>
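      <p>
        For example, a sketch with hypothetical account, container, database, and table names:
      </p>
      <codeblock>-- ADLS Gen1 location:
create table adls_db.events (x int, s string)
  location 'adl://myaccount.azuredatalakestore.net/user/impala/events';
-- ADLS Gen2 location:
create table adls_db.events_gen2 (x int, s string)
  location 'abfss://mycontainer@myaccount.dfs.core.windows.net/user/impala/events_gen2';</codeblock>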
<p rev="2.6.0 IMPALA-1878" id="s3_dml"> In <keyword keyref="impala26_full"
/> and higher, the Impala DML statements (<codeph>INSERT</codeph>,
<codeph>LOAD DATA</codeph>, and <codeph>CREATE TABLE AS
SELECT</codeph>) can write data into a table or partition that resides
in S3. The syntax of the DML statements is the same as for any other
tables, because the S3 location for tables and partitions is specified
by an <codeph>s3a://</codeph> prefix in the <codeph>LOCATION</codeph>
attribute of <codeph>CREATE TABLE</codeph> or <codeph>ALTER
TABLE</codeph> statements. If you bring data into S3 using the normal
S3 transfer mechanisms instead of Impala DML statements, issue a
<codeph>REFRESH</codeph> statement for the table before using Impala
to query the S3 data. </p>
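      <p>
        For example, with hypothetical bucket, database, and table names:
      </p>
      <codeblock>create table s3_db.events (x int, s string)
  location 's3a://impala-demo-bucket/datasets/events';
-- DML statements use the same syntax as for any other table.
insert into s3_db.events values (1, 'one');
-- After files are added through S3 tools outside of Impala:
refresh s3_db.events;</codeblock>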
<p rev="2.2.0" id="s3_metadata">
Impala caches metadata for tables where the data resides in the Amazon Simple Storage
Service (S3), and the <codeph>REFRESH</codeph> and <codeph>INVALIDATE METADATA</codeph>
statements are supported for the S3 tables. In particular, issue a
<codeph>REFRESH</codeph> for a table after adding or removing files in the associated S3
data directory. See <xref
href="../topics/impala_s3.xml#s3"/> for details
about working with S3 tables.
</p>
<p id="y2k38" rev="2.2.0">
In Impala 2.2.0 and higher, built-in functions that accept or return integers
representing <codeph>TIMESTAMP</codeph> values use the <codeph>BIGINT</codeph> type for
parameters and return values, rather than <codeph>INT</codeph>. This change lets the
date and time functions avoid an overflow error that would otherwise occur on January
19th, 2038 (known as the
<xref
href="http://en.wikipedia.org/wiki/Year_2038_problem" scope="external"
format="html"><q>Year
2038 problem</q> or <q>Y2K38 problem</q></xref>). This change affects the
<codeph>FROM_UNIXTIME()</codeph> and <codeph>UNIX_TIMESTAMP()</codeph> functions. You
might need to change application code that interacts with these functions, change the
types of columns that store the return values, or add <codeph>CAST()</codeph> calls to
SQL statements that call these functions.
</p>
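      <p>
        For example, a value well past January 2038 works because the argument is treated as
        <codeph>BIGINT</codeph>:
      </p>
      <codeblock>-- 2,500,000,000 seconds past the epoch falls in the year 2049,
-- beyond the 32-bit limit of January 19th, 2038.
select from_unixtime(2500000000);</codeblock>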
<p id="timestamp_conversions">
Impala automatically converts <codeph>STRING</codeph> literals of the correct format
into <codeph>TIMESTAMP</codeph> values. Timestamp values are accepted in the format
<codeph>'yyyy‑MM‑dd HH:mm:ss.SSSSSS'</codeph>, and can consist of just the date, or
just the time, with or without the fractional second portion. For example, you can
specify <codeph>TIMESTAMP</codeph> values such as <codeph>'1966‑07‑30'</codeph>,
<codeph>'08:30:00'</codeph>, or <codeph>'1985‑09‑25 17:45:30.005'</codeph>.
</p>
<p>
Leading zeroes are not required in the numbers representing the date component, such as
month and date, or the time component, such as hour, minute, and second. For example,
Impala accepts both <codeph>'2018‑1‑1 01:02:03'</codeph> and
<codeph>'2018‑01‑01 1:2:3'</codeph> as valid.
</p>
<p>
In <codeph>STRING</codeph> to <codeph>TIMESTAMP</codeph> conversions, leading and
trailing white spaces, such as a space, a tab, a newline, or a carriage return, are
ignored. For example, Impala treats the following as equivalent:
'1999‑12‑01 01:02:03 ', ' 1999‑12‑01 01:02:03',
'1999‑12‑01 01:02:03\r\n\t'.
</p>
<p id="cast_string_to_timestamp">
When you convert or cast a <codeph>STRING</codeph> literal to
<codeph>TIMESTAMP</codeph>, you can use the following separators between the date part
and the time part:
<ul>
<li>
<p>
One or more space characters
</p>
<p>
Example: <codeph>CAST('2001-01-09 01:05:01' AS TIMESTAMP)</codeph>
</p>
</li>
<li>
<p>
The character “T”
</p>
<p>
Example: <codeph>CAST('2001-01-09T01:05:01' AS TIMESTAMP)</codeph>
</p>
</li>
</ul>
</p>
<p>
<ph id="cast_int_to_timestamp"> Casting an integer or floating-point value
<codeph>N</codeph> to <codeph>TIMESTAMP</codeph> produces a value that is
<codeph>N</codeph> seconds past the start of the epoch date (January 1, 1970). By
default, the result value represents a date and time in the UTC time zone. If the
setting <codeph>&#8209;&#8209;use_local_tz_for_unix_timestamp_conversions=true</codeph>
is in effect, the resulting <codeph>TIMESTAMP</codeph> represents a date and time in the
local time zone. </ph>
</p>
<p id="redaction_yes" rev="2.2.0">
If these statements in your environment contain sensitive literal values such as credit
card numbers or tax identifiers, Impala can redact this sensitive information when
displaying the statements in log files and other administrative contexts. See
<xref keyref="sg_redaction"/> for details.
</p>
<p id="cs_or_cis">
For a particular table, use either <codeph>COMPUTE STATS</codeph> or <codeph>COMPUTE
INCREMENTAL STATS</codeph>, but never combine the two or alternate between them. If you
switch from <codeph>COMPUTE STATS</codeph> to <codeph>COMPUTE INCREMENTAL STATS</codeph>
during the lifetime of a table, or vice versa, drop all statistics by running
<codeph>DROP STATS</codeph> before making the switch.
</p>
<p id="incremental_stats_after_full">
When you run <codeph>COMPUTE INCREMENTAL STATS</codeph> on a table for the first time,
the statistics are computed again from scratch regardless of whether the table already
has statistics. Therefore, expect a one-time resource-intensive operation for scanning
the entire table when running <codeph>COMPUTE INCREMENTAL STATS</codeph> for the first
time on a given table.
</p>
<p id="incremental_stats_caveats">
In Impala 3.0 and lower, approximately 400 bytes of metadata per column per partition
        are needed for caching. Tables with a large number of partitions and many columns can
        add up to significant memory overhead, because the metadata must be cached on the
<cmdname>catalogd</cmdname> host and on every <cmdname>impalad</cmdname> host that is
eligible to be a coordinator. If this metadata for all tables exceeds 2 GB, you might
experience service downtime. In Impala 3.1 and higher, the issue was alleviated with an
improved handling of incremental stats.
</p>
<p id="incremental_partition_spec">
The <codeph>PARTITION</codeph> clause is only allowed in combination with the
<codeph>INCREMENTAL</codeph> clause. It is optional for <codeph>COMPUTE INCREMENTAL
STATS</codeph>, and required for <codeph>DROP INCREMENTAL STATS</codeph>. Whenever you
specify partitions through the <codeph>PARTITION
(<varname>partition_spec</varname>)</codeph> clause in a <codeph>COMPUTE INCREMENTAL
STATS</codeph> or <codeph>DROP INCREMENTAL STATS</codeph> statement, you must include
all the partitioning columns in the specification, and specify constant values for all
the partition key columns.
</p>
<codeblock id="compute_stats_walkthrough">-- Initially the table has no incremental stats, as indicated
-- 'false' under Incremental stats.
show table stats item_partitioned;
+-------------+-------+--------+----------+--------------+---------+------------------
| i_category | #Rows | #Files | Size | Bytes Cached | Format | Incremental stats
+-------------+-------+--------+----------+--------------+---------+------------------
| Books | -1 | 1 | 223.74KB | NOT CACHED | PARQUET | false
| Children | -1 | 1 | 230.05KB | NOT CACHED | PARQUET | false
| Electronics | -1 | 1 | 232.67KB | NOT CACHED | PARQUET | false
| Home | -1 | 1 | 232.56KB | NOT CACHED | PARQUET | false
| Jewelry | -1 | 1 | 223.72KB | NOT CACHED | PARQUET | false
| Men | -1 | 1 | 231.25KB | NOT CACHED | PARQUET | false
| Music | -1 | 1 | 237.90KB | NOT CACHED | PARQUET | false
| Shoes | -1 | 1 | 234.90KB | NOT CACHED | PARQUET | false
| Sports | -1 | 1 | 227.97KB | NOT CACHED | PARQUET | false
| Women | -1 | 1 | 226.27KB | NOT CACHED | PARQUET | false
| Total | -1 | 10 | 2.25MB | 0B | |
+-------------+-------+--------+----------+--------------+---------+------------------
-- After the first COMPUTE INCREMENTAL STATS,
-- all partitions have stats. The first
-- COMPUTE INCREMENTAL STATS scans the whole
-- table, discarding any previous stats from
-- a traditional COMPUTE STATS statement.
compute incremental stats item_partitioned;
+-------------------------------------------+
| summary |
+-------------------------------------------+
| Updated 10 partition(s) and 21 column(s). |
+-------------------------------------------+
show table stats item_partitioned;
+-------------+-------+--------+----------+--------------+---------+------------------
| i_category | #Rows | #Files | Size | Bytes Cached | Format | Incremental stats
+-------------+-------+--------+----------+--------------+---------+------------------
| Books | 1733 | 1 | 223.74KB | NOT CACHED | PARQUET | true
| Children | 1786 | 1 | 230.05KB | NOT CACHED | PARQUET | true
| Electronics | 1812 | 1 | 232.67KB | NOT CACHED | PARQUET | true
| Home | 1807 | 1 | 232.56KB | NOT CACHED | PARQUET | true
| Jewelry | 1740 | 1 | 223.72KB | NOT CACHED | PARQUET | true
| Men | 1811 | 1 | 231.25KB | NOT CACHED | PARQUET | true
| Music | 1860 | 1 | 237.90KB | NOT CACHED | PARQUET | true
| Shoes | 1835 | 1 | 234.90KB | NOT CACHED | PARQUET | true
| Sports | 1783 | 1 | 227.97KB | NOT CACHED | PARQUET | true
| Women | 1790 | 1 | 226.27KB | NOT CACHED | PARQUET | true
| Total | 17957 | 10 | 2.25MB | 0B | |
+-------------+-------+--------+----------+--------------+---------+------------------
-- Add a new partition...
alter table item_partitioned add partition (i_category='Camping');
-- Add or replace files in HDFS outside of Impala,
-- rendering the stats for a partition obsolete.
!import_data_into_sports_partition.sh
refresh item_partitioned;
drop incremental stats item_partitioned partition (i_category='Sports');
-- Now some partitions have incremental stats
-- and some do not.
show table stats item_partitioned;
+-------------+-------+--------+----------+--------------+---------+------------------
| i_category  | #Rows | #Files | Size     | Bytes Cached | Format  | Incremental stats
+-------------+-------+--------+----------+--------------+---------+------------------
| Books       | 1733  | 1      | 223.74KB | NOT CACHED   | PARQUET | true
| Camping     | -1    | 1      | 408.02KB | NOT CACHED   | PARQUET | false
| Children    | 1786  | 1      | 230.05KB | NOT CACHED   | PARQUET | true
| Electronics | 1812  | 1      | 232.67KB | NOT CACHED   | PARQUET | true
| Home        | 1807  | 1      | 232.56KB | NOT CACHED   | PARQUET | true
| Jewelry     | 1740  | 1      | 223.72KB | NOT CACHED   | PARQUET | true
| Men         | 1811  | 1      | 231.25KB | NOT CACHED   | PARQUET | true
| Music       | 1860  | 1      | 237.90KB | NOT CACHED   | PARQUET | true
| Shoes       | 1835  | 1      | 234.90KB | NOT CACHED   | PARQUET | true
| Sports      | -1    | 1      | 227.97KB | NOT CACHED   | PARQUET | false
| Women       | 1790  | 1      | 226.27KB | NOT CACHED   | PARQUET | true
| Total       | 17957 | 11     | 2.65MB   | 0B           |         |
+-------------+-------+--------+----------+--------------+---------+------------------
-- After another COMPUTE INCREMENTAL STATS,
-- all partitions have incremental stats, and only the 2
-- partitions without incremental stats were scanned.
compute incremental stats item_partitioned;
+------------------------------------------+
| summary |
+------------------------------------------+
| Updated 2 partition(s) and 21 column(s). |
+------------------------------------------+
show table stats item_partitioned;
+-------------+-------+--------+----------+--------------+---------+------------------
| i_category  | #Rows | #Files | Size     | Bytes Cached | Format  | Incremental stats
+-------------+-------+--------+----------+--------------+---------+------------------
| Books       | 1733  | 1      | 223.74KB | NOT CACHED   | PARQUET | true
| Camping     | 5328  | 1      | 408.02KB | NOT CACHED   | PARQUET | true
| Children    | 1786  | 1      | 230.05KB | NOT CACHED   | PARQUET | true
| Electronics | 1812  | 1      | 232.67KB | NOT CACHED   | PARQUET | true
| Home        | 1807  | 1      | 232.56KB | NOT CACHED   | PARQUET | true
| Jewelry     | 1740  | 1      | 223.72KB | NOT CACHED   | PARQUET | true
| Men         | 1811  | 1      | 231.25KB | NOT CACHED   | PARQUET | true
| Music       | 1860  | 1      | 237.90KB | NOT CACHED   | PARQUET | true
| Shoes       | 1835  | 1      | 234.90KB | NOT CACHED   | PARQUET | true
| Sports      | 1783  | 1      | 227.97KB | NOT CACHED   | PARQUET | true
| Women       | 1790  | 1      | 226.27KB | NOT CACHED   | PARQUET | true
| Total       | 17957 | 11     | 2.65MB   | 0B           |         |
+-------------+-------+--------+----------+--------------+---------+------------------
</codeblock>
<p id="udf_persistence_restriction" rev="2.5.0 IMPALA-1748">
In <keyword keyref="impala25_full"/> and higher, Impala UDFs and UDAs written in C++ are
persisted in the metastore database. Java UDFs are also persisted, if they were created
with the new <codeph>CREATE FUNCTION</codeph> syntax for Java UDFs, where the Java
function argument and return types are omitted. Java-based UDFs created with the old
<codeph>CREATE FUNCTION</codeph> syntax do not persist across restarts because they are
held in the memory of the <cmdname>catalogd</cmdname> daemon. Until you re-create such
Java UDFs using the new <codeph>CREATE FUNCTION</codeph> syntax, you must reload those
Java-based UDFs by running the original <codeph>CREATE FUNCTION</codeph> statements
again each time you restart the <cmdname>catalogd</cmdname> daemon. Prior to
<keyword keyref="impala25_full"/> the requirement to reload functions after a restart
applied to both C++ and Java functions.
</p>
<p rev="2.9.0 IMPALA-5259" id="refresh_functions_tip">
In <keyword keyref="impala29_full"/> and higher, you can refresh the user-defined
functions (UDFs) that Impala recognizes, at the database level, by running the
<codeph>REFRESH FUNCTIONS</codeph> statement with the database name as an argument.
Java-based UDFs can be added to the metastore database through Hive <codeph>CREATE
FUNCTION</codeph> statements, and made visible to Impala by subsequently running
<codeph>REFRESH FUNCTIONS</codeph>. For example:
<codeblock>CREATE DATABASE shared_udfs;
USE shared_udfs;
...use CREATE FUNCTION statements in Hive to create some Java-based UDFs
that Impala is not initially aware of...
REFRESH FUNCTIONS shared_udfs;
SELECT udf_created_by_hive(c1) FROM ...
</codeblock>
</p>
<p id="current_user_caveat" rev="">
The Hive <codeph>current_user()</codeph> function cannot be called from a Java UDF
through Impala.
</p>
<note id="add_partition_set_location">
If you are creating a partition for the first time and specifying its location, for
maximum efficiency, use a single <codeph>ALTER TABLE</codeph> statement including both
the <codeph>ADD PARTITION</codeph> and <codeph>LOCATION</codeph> clauses, rather than
separate statements with <codeph>ADD PARTITION</codeph> and <codeph>SET
LOCATION</codeph> clauses.
</note>
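<p id="add_partition_set_location_example">
For example, for a hypothetical table <codeph>T1</codeph> partitioned by a
<codeph>YEAR</codeph> column, the single-statement form looks like the following. (The HDFS
path is illustrative only.)
<codeblock xml:space="preserve">-- Preferred: add the partition and set its location in one statement.
ALTER TABLE t1 ADD PARTITION (year=2020) LOCATION '/user/impala/t1/year=2020';

-- Less efficient: separate ADD PARTITION and SET LOCATION statements.
ALTER TABLE t1 ADD PARTITION (year=2020);
ALTER TABLE t1 PARTITION (year=2020) SET LOCATION '/user/impala/t1/year=2020';
</codeblock>
</p>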
<p id="insert_hidden_work_directory">
The <codeph>INSERT</codeph> statement has always left behind a hidden work directory
inside the data directory of the table. Formerly, this hidden work directory was named
<filepath>.impala_insert_staging</filepath>. In Impala 2.0.1 and later, this directory
name is changed to <filepath>_impala_insert_staging</filepath>. (While HDFS tools are
expected to treat names beginning with either an underscore or a dot as hidden, in practice
names beginning with an underscore are more widely supported.) If you have any scripts,
cleanup jobs, and so on that rely on the name of this work directory, adjust them to use
the new name.
</p>
<p id="check_internal_external_table">
To see whether a table is internal or external, and its associated HDFS location, issue
the statement <codeph>DESCRIBE FORMATTED <varname>table_name</varname></codeph>. The
<codeph>Table Type</codeph> field displays <codeph>MANAGED_TABLE</codeph> for internal
tables and <codeph>EXTERNAL_TABLE</codeph> for external tables. The
<codeph>Location</codeph> field displays the path of the table directory as an HDFS URI.
</p>
<p id="switch_internal_external_table"> You can switch a table from
internal to external, or from external to internal, by using the
<codeph>ALTER TABLE</codeph> statement:
<codeblock xml:space="preserve">
-- Switch a table from internal to external.
ALTER TABLE <varname>table_name</varname> SET TBLPROPERTIES('EXTERNAL'='TRUE');
-- Switch a table from external to internal.
ALTER TABLE <varname>table_name</varname> SET TBLPROPERTIES('EXTERNAL'='FALSE');
</codeblock>If
the Kudu service is integrated with the Hive Metastore, the above
operations are not supported.</p>
<!-- The data to show sensible output from these queries is in the TPC-DS schema 'CUSTOMER' table.
If you want to show real output, add a LIMIT 5 or similar clause to each query to avoid
too-long output. -->
<codeblock id="regexp_rlike_examples" xml:space="preserve">-- Find all customers whose first name starts with 'J', followed by 0 or more of any character.
select c_first_name, c_last_name from customer where c_first_name regexp '^J.*';
select c_first_name, c_last_name from customer where c_first_name rlike '^J.*';
-- Find 'Macdonald', where the first 'a' is optional and the 'D' can be upper- or lowercase.
-- The ^...$ are required, to match the start and end of the value.
select c_first_name, c_last_name from customer where c_last_name regexp '^Ma?c[Dd]onald$';
select c_first_name, c_last_name from customer where c_last_name rlike '^Ma?c[Dd]onald$';
-- Match multiple character sequences, either 'Mac' or 'Mc'.
select c_first_name, c_last_name from customer where c_last_name regexp '^(Mac|Mc)donald$';
select c_first_name, c_last_name from customer where c_last_name rlike '^(Mac|Mc)donald$';
-- Find names starting with 'S', then one or more vowels, then 'r', then any other characters.
-- Matches 'Searcy', 'Sorenson', 'Sauer'.
select c_first_name, c_last_name from customer where c_last_name regexp '^S[aeiou]+r.*$';
select c_first_name, c_last_name from customer where c_last_name rlike '^S[aeiou]+r.*$';
-- Find names that end with 2 or more vowels: letters from the set a,e,i,o,u.
select c_first_name, c_last_name from customer where c_last_name regexp '.*[aeiou]{2,}$';
select c_first_name, c_last_name from customer where c_last_name rlike '.*[aeiou]{2,}$';
-- You can use letter ranges in the [] blocks, for example to find names starting with A, B, or C.
select c_first_name, c_last_name from customer where c_last_name regexp '^[A-C].*';
select c_first_name, c_last_name from customer where c_last_name rlike '^[A-C].*';
-- If you are not sure about case, leading/trailing spaces, and so on, you can process the
-- column using string functions first.
select c_first_name, c_last_name from customer where lower(trim(c_last_name)) regexp '^de.*';
select c_first_name, c_last_name from customer where lower(trim(c_last_name)) rlike '^de.*';
</codeblock>
<p id="case_insensitive_comparisons_tip" rev="2.5.0 IMPALA-1787">
In <keyword keyref="impala25_full"/> and higher, you can simplify queries that use many
<codeph>UPPER()</codeph> and <codeph>LOWER()</codeph> calls to do case-insensitive
comparisons, by using the <codeph>ILIKE</codeph> or <codeph>IREGEXP</codeph> operators
instead. See <xref href="../topics/impala_operators.xml#ilike"/> and
<xref href="../topics/impala_operators.xml#iregexp"/> for details.
</p>
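<p id="case_insensitive_comparisons_example">
For example, the following queries, based on the <codeph>CUSTOMER</codeph> table used in the
regular expression examples, show case-insensitive matches; the <codeph>ILIKE</codeph> and
<codeph>IREGEXP</codeph> forms avoid the extra function calls:
<codeblock xml:space="preserve">-- Case-insensitive match using LOWER().
select c_first_name, c_last_name from customer where lower(c_last_name) like 'mac%';
-- The same match using ILIKE.
select c_first_name, c_last_name from customer where c_last_name ilike 'mac%';
-- A case-insensitive regular expression match using IREGEXP.
select c_first_name, c_last_name from customer where c_last_name iregexp '^mac';
</codeblock>
</p>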
<p id="show_security">
When authorization is enabled, the output of the <codeph>SHOW</codeph> statement only
shows those objects for which you have the privilege to view. If you believe an object
exists but you cannot see it in the <codeph>SHOW</codeph> output, check with the system
administrator whether you need to be granted a new privilege for that object. See
<xref href="../topics/impala_authorization.xml#authorization"/> for how to set up
authorization and add privileges for specific objects.
</p>
<p id="infinity_and_nan" rev="IMPALA-3267">
Infinity and NaN can be specified in text data files as <codeph>inf</codeph> and
<codeph>nan</codeph> respectively, and Impala interprets them as these special values.
They can also be produced by certain arithmetic expressions; for example,
<codeph>1/0</codeph> returns <codeph>Infinity</codeph> and <codeph>pow(-1, 0.5)</codeph>
returns <codeph>NaN</codeph>. Or you can cast the literal values, such as
<codeph>CAST('nan' AS DOUBLE)</codeph> or <codeph>CAST('inf' AS DOUBLE)</codeph>.
</p>
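<p id="infinity_and_nan_example">
For example, the following queries produce the special values by casting string literals, and
check for them with the <codeph>IS_INF()</codeph> and <codeph>IS_NAN()</codeph> built-in
functions:
<codeblock xml:space="preserve">select cast('inf' as double) as infinity, cast('nan' as double) as not_a_number;
select is_inf(cast('inf' as double)), is_nan(cast('nan' as double));
</codeblock>
</p>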
<p rev="2.0.0" id="user_kerberized">
In Impala 2.0 and later, <codeph>user()</codeph> returns the full Kerberos principal
string, such as <codeph>user@example.com</codeph>, in a Kerberized environment.
</p>
<p id="vm_overcommit_memory_intro">
On a kerberized cluster with high memory utilization, <cmdname>kinit</cmdname> commands
executed after every <codeph>'kerberos_reinit_interval'</codeph> may cause out-of-memory
errors, because executing the command involves a fork of the Impala process. The error
looks similar to the following:
<codeblock><![CDATA[
Failed to obtain Kerberos ticket for principal: <varname>principal_details</varname>
Failed to execute shell cmd: 'kinit -k -t <varname>keytab_details</varname>',
error was: Error(12): Cannot allocate memory
]]>
</codeblock>
</p>
<p id="vm_overcommit_memory_start">
The following command changes the <codeph>vm.overcommit_memory</codeph> setting
immediately on a running host. However, this setting is reset when the host is
restarted.
<codeblock><![CDATA[
echo 1 > /proc/sys/vm/overcommit_memory
]]>
</codeblock>
</p>
<p>
To change the setting in a persistent way, add the following line to the
<filepath>/etc/sysctl.conf</filepath> file:
<codeblock><![CDATA[
vm.overcommit_memory=1
]]>
</codeblock>
</p>
<p id="vm_overcommit_memory_end">
Then run <codeph>sysctl -p</codeph>. No reboot is needed.
</p>
<ul>
<li id="grant_revoke_single">
Currently, each Impala <codeph>GRANT</codeph> or <codeph>REVOKE</codeph> statement can
only grant or revoke a single privilege to or from a single role.
</li>
</ul>
<p id="blobs_are_strings">
All data in <codeph>CHAR</codeph> and <codeph>VARCHAR</codeph> columns must be in a
character encoding that is compatible with UTF-8. If you have binary data from another
database system (that is, a BLOB type), use a <codeph>STRING</codeph> column to hold it.
</p>
<!-- The codeblock is nested inside this paragraph, so the intro text
and the code get conref'ed as a unit. -->
<p id="create_drop_view_examples">
The following example creates a series of views and then drops them. These examples
illustrate how views are associated with a particular database, and both the view
definitions and the view names for <codeph>CREATE VIEW</codeph> and <codeph>DROP
VIEW</codeph> can refer to a view in the current database or a fully qualified view
name.
<codeblock xml:space="preserve">
-- Create and drop a view in the current database.
CREATE VIEW few_rows_from_t1 AS SELECT * FROM t1 LIMIT 10;
DROP VIEW few_rows_from_t1;
-- Create and drop a view referencing a table in a different database.
CREATE VIEW table_from_other_db AS SELECT x FROM db1.foo WHERE x IS NOT NULL;
DROP VIEW table_from_other_db;
USE db1;
-- Create a view in a different database.
CREATE VIEW db2.v1 AS SELECT * FROM db2.foo;
-- Switch into the other database and drop the view.
USE db2;
DROP VIEW v1;
USE db1;
-- Create a view in a different database.
CREATE VIEW db2.v1 AS SELECT * FROM db2.foo;
-- Drop a view in the other database.
DROP VIEW db2.v1;
</codeblock>
</p>
<p id="char_varchar_cast_from_string">
For <codeph>INSERT</codeph> operations into <codeph>CHAR</codeph> or
<codeph>VARCHAR</codeph> columns, you must cast all <codeph>STRING</codeph> literals or
expressions returning <codeph>STRING</codeph> to a <codeph>CHAR</codeph> or
<codeph>VARCHAR</codeph> type with the appropriate length.
</p>
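<p id="char_varchar_cast_example">
For example, with a hypothetical table declared as shown, each inserted value is cast to the
column type and length:
<codeblock xml:space="preserve">create table char_varchar_demo (c char(5), v varchar(20));
insert into char_varchar_demo values
  (cast('abc' as char(5)), cast('hello world' as varchar(20)));
</codeblock>
</p>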
<p id="length_demo" rev="IMPALA-6391 IMPALA-2172">
The following example demonstrates how <codeph>length()</codeph> and
<codeph>char_length()</codeph> sometimes produce the same result, and sometimes produce
different results depending on the type of the argument and the presence of trailing
spaces for <codeph>CHAR</codeph> values. The <codeph>S</codeph> and <codeph>C</codeph>
values are displayed with enclosing quotation marks to show any trailing spaces.
<codeblock id="length_demo_example">create table length_demo (s string, c char(5));
insert into length_demo values
('a',cast('a' as char(5))),
('abc',cast('abc' as char(5))),
('hello',cast('hello' as char(5)));
select concat('"',s,'"') as s, concat('"',c,'"') as c,
length(s), length(c),
char_length(s), char_length(c)
from length_demo;
+---------+---------+-----------+-----------+----------------+----------------+
| s       | c       | length(s) | length(c) | char_length(s) | char_length(c) |
+---------+---------+-----------+-----------+----------------+----------------+
| "a"     | "a    " | 1         | 1         | 1              | 5              |
| "abc"   | "abc  " | 3         | 3         | 3              | 5              |
| "hello" | "hello" | 5         | 5         | 5              | 5              |
+---------+---------+-----------+-----------+----------------+----------------+
</codeblock>
</p>
<p rev="2.0.0" id="subquery_no_limit">
Correlated subqueries used in <codeph>EXISTS</codeph> and <codeph>IN</codeph> operators
cannot include a <codeph>LIMIT</codeph> clause.
</p>
<p id="avro_no_timestamp">
Currently, Avro tables cannot contain <codeph>TIMESTAMP</codeph> columns. If you need to
store date and time values in Avro tables, as a workaround you can use a
<codeph>STRING</codeph> representation of the values, convert the values to
<codeph>BIGINT</codeph> with the <codeph>UNIX_TIMESTAMP()</codeph> function, or create
separate numeric columns for individual date and time fields using the
<codeph>EXTRACT()</codeph> function.
</p>
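<p id="avro_timestamp_workaround_example">
For example, the following query shows the workaround representations for a single
<codeph>TIMESTAMP</codeph> value, using a string, a <codeph>BIGINT</codeph> count of seconds
since the epoch, and separate numeric date fields:
<codeblock xml:space="preserve">select cast(now() as string) as ts_as_string,
  unix_timestamp(now()) as ts_as_bigint,
  extract(now(), 'year') as ts_year,
  extract(now(), 'month') as ts_month;
</codeblock>
</p>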
<p id="zero_length_strings">
<b>Zero-length strings:</b> For purposes of clauses such as <codeph>DISTINCT</codeph>
and <codeph>GROUP BY</codeph>, Impala considers zero-length strings
(<codeph>""</codeph>), <codeph>NULL</codeph>, and space to all be different values.
</p>
<p rev="2.5.0 IMPALA-3054" id="spill_to_disk_vs_dynamic_partition_pruning">
When the spill-to-disk feature is activated for a join node within a query, Impala does
not produce any runtime filters for that join operation on that host. Other join nodes
within the query are not affected.
</p>
<codeblock id="simple_dpp_example">
CREATE TABLE yy (s STRING) PARTITIONED BY (year INT);
INSERT INTO yy PARTITION (year) VALUES ('1999', 1999), ('2000', 2000),
('2001', 2001), ('2010', 2010), ('2018', 2018);
COMPUTE STATS yy;
CREATE TABLE yy2 (s STRING, year INT);
INSERT INTO yy2 VALUES ('1999', 1999), ('2000', 2000), ('2001', 2001);
COMPUTE STATS yy2;
-- The following query reads an unknown number of partitions, whose key values
-- are only known at run time. The <b>runtime filters</b> line shows the
-- information used in query fragment 02 to decide which partitions to skip.
EXPLAIN SELECT s FROM yy WHERE year IN (SELECT year FROM yy2);
+------------------------------------------+
| PLAN-ROOT SINK                           |
| |                                        |
| 04:EXCHANGE [UNPARTITIONED]              |
| |                                        |
| 02:HASH JOIN [LEFT SEMI JOIN, BROADCAST] |
| |  hash predicates: year = year          |
| |  <b>runtime filters: RF000 &lt;- year</b>        |
| |                                        |
| |--03:EXCHANGE [BROADCAST]               |
| |  |                                     |
| |  01:SCAN HDFS [default.yy2]            |
| |     partitions=1/1 files=1 size=620B   |
| |                                        |
| 00:SCAN HDFS [default.yy]                |
|    <b>partitions=5/5</b> files=5 size=1.71KB    |
|    runtime filters: RF000 -> year        |
+------------------------------------------+
SELECT s FROM yy WHERE year IN (SELECT year FROM yy2); -- Returns 3 rows from yy
PROFILE;
</codeblock>
<p id="order_by_scratch_dir"> By default, intermediate files used during
large sort, join, aggregation, or analytic function operations are
stored in the directory <filepath>/tmp/impala-scratch</filepath>, and
these intermediate files are removed when the operation finishes. You
can specify a different location by starting the
<cmdname>impalad</cmdname> daemon with the
<codeph>&#8209;&#8209;scratch_dirs="<varname>path_to_directory</varname>"</codeph>
configuration option. </p>
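<p id="scratch_dirs_example">
For example, the following illustrative command line points the scratch directory at a
dedicated drive. (In a managed deployment, you typically set this option through your cluster
management tool rather than directly on the command line.)
<codeblock xml:space="preserve">impalad --scratch_dirs="/data1/impala-scratch" ...
</codeblock>
</p>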
<p id="order_by_view_restriction">
An <codeph>ORDER BY</codeph> clause without an additional <codeph>LIMIT</codeph> clause
is ignored in any view definition. If you need to sort the entire result set from a
view, use an <codeph>ORDER BY</codeph> clause in the <codeph>SELECT</codeph> statement
that queries the view. You can still make a simple <q>top 10</q> report by combining the
<codeph>ORDER BY</codeph> and <codeph>LIMIT</codeph> clauses in the same view
definition:
<codeblock xml:space="preserve">[localhost:21000] &gt; create table unsorted (x bigint);
[localhost:21000] &gt; insert into unsorted values (1), (9), (3), (7), (5), (8), (4), (6), (2);
[localhost:21000] &gt; create view sorted_view as select x from unsorted order by x;
[localhost:21000] &gt; select x from sorted_view; -- ORDER BY clause in view has no effect.
+---+
| x |
+---+
| 1 |
| 9 |
| 3 |
| 7 |
| 5 |
| 8 |
| 4 |
| 6 |
| 2 |
+---+
[localhost:21000] &gt; select x from sorted_view order by x; -- View query requires ORDER BY at outermost level.
+---+
| x |
+---+
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
| 6 |
| 7 |
| 8 |
| 9 |
+---+
[localhost:21000] &gt; create view top_3_view as select x from unsorted order by x limit 3;
[localhost:21000] &gt; select x from top_3_view; -- ORDER BY and LIMIT together in view definition are preserved.
+---+
| x |
+---+
| 1 |
| 2 |
| 3 |
+---+
</codeblock>
</p>
<p id="precision_scale_example">
The following examples demonstrate how to check the precision and scale of numeric
literals or other numeric expressions. Impala represents numeric literals in the
smallest appropriate type. For example, 5 is a <codeph>TINYINT</codeph> value; because the
<codeph>TINYINT</codeph> range is -128 to 127, 3 decimal digits are needed to represent the
entire range, and because 5 is an integer there are no fractional digits. 1.333 is
interpreted as a <codeph>DECIMAL</codeph> value, with 4 digits total and 3 digits after the
decimal point.
<codeblock xml:space="preserve">[localhost:21000] &gt; select precision(5), scale(5);
+--------------+----------+
| precision(5) | scale(5) |
+--------------+----------+
| 3            | 0        |
+--------------+----------+
[localhost:21000] &gt; select precision(1.333), scale(1.333);
+------------------+--------------+
| precision(1.333) | scale(1.333) |
+------------------+--------------+
| 4                | 3            |
+------------------+--------------+
[localhost:21000] &gt; with t1 as
( select cast(12.34 as decimal(20,2)) x union select cast(1 as decimal(8,6)) x )
select precision(x), scale(x) from t1 limit 1;
+--------------+----------+
| precision(x) | scale(x) |
+--------------+----------+
| 24           | 6        |
+--------------+----------+
</codeblock>
</p>
<!-- These 'type_' entries are for query options, where the type doesn't match up exactly with an Impala data type. -->
<p id="type_boolean">
<b>Type:</b> Boolean; recognized values are 1 and 0, or <codeph>true</codeph> and
<codeph>false</codeph>; any other value is interpreted as <codeph>false</codeph>
</p>
<p id="type_string">
<b>Type:</b> string
</p>
<p id="type_integer">
<b>Type:</b> integer
</p>
<p id="default_blurb">
<b>Default:</b>
</p>
<p id="default_false">
<b>Default:</b> <codeph>false</codeph>
</p>
<p id="default_0">
<b>Default:</b> <codeph>0</codeph>
</p>
<p id="default_false_0">
<b>Default:</b> <codeph>false</codeph> (shown as 0 in output of <codeph>SET</codeph>
statement)
</p>
<p id="default_true_1">
<b>Default:</b> <codeph>true</codeph> (shown as 1 in output of <codeph>SET</codeph>
statement)
</p>
<p id="units_blurb">
<b>Units:</b> A numeric argument represents a size in bytes; you can also use a suffix
of <codeph>m</codeph> or <codeph>mb</codeph> for megabytes, or <codeph>g</codeph> or
<codeph>gb</codeph> for gigabytes. If you specify a value in an unrecognized format,
subsequent queries fail with an error.
</p>
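<p id="units_example">
For example, for a query option that accepts a byte value, such as
<codeph>MEM_LIMIT</codeph>, all of the following formats are accepted:
<codeblock xml:space="preserve">set mem_limit=2000000000;
set mem_limit=2000m;
set mem_limit=2g;
</codeblock>
</p>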
<p id="odd_return_type_string">
Currently, the return value is always a <codeph>STRING</codeph>. The return type is
subject to change in future releases. Always use <codeph>CAST()</codeph> to convert the
result to whichever data type is appropriate for your computations.
</p>
<p rev="2.0.0" id="former_odd_return_type_string">
<b>Return type:</b> <codeph>DOUBLE</codeph> in Impala 2.0 and higher;
<codeph>STRING</codeph> in earlier releases
</p>
<p id="for_compatibility_only">
<b>Usage notes:</b> Primarily for compatibility with code containing industry extensions
to SQL.
</p>
<p id="return_type_boolean">
<b>Return type:</b> <codeph>BOOLEAN</codeph>
</p>
<p id="return_type_double">
<b>Return type:</b> <codeph>DOUBLE</codeph>
</p>
<p id="return_type_same">
<b>Return type:</b> Same as the input value
</p>
<p id="return_type_same_except_string">
<b>Return type:</b> Same as the input value, except for <codeph>CHAR</codeph> and
<codeph>VARCHAR</codeph> arguments which produce a <codeph>STRING</codeph> result
</p>
<p id="builtins_db">
Impala includes another predefined database, <codeph>_impala_builtins</codeph>, that
serves as the location for the
<xref href="../topics/impala_functions.xml#builtins">built-in functions</xref>. To see
the built-in functions, use a statement like the following:
<codeblock xml:space="preserve">show functions in _impala_builtins;
show functions in _impala_builtins like '*<varname>substring</varname>*';
</codeblock>
</p>
<p id="sum_double">
Because arithmetic on <codeph>FLOAT</codeph> and <codeph>DOUBLE</codeph> columns
uses high-performance hardware instructions, and because distributed queries can perform these
operations in a different order for each query, results can vary slightly for aggregate
function calls such as <codeph>SUM()</codeph> and <codeph>AVG()</codeph> for
<codeph>FLOAT</codeph> and <codeph>DOUBLE</codeph> columns, particularly on large data
sets where millions or billions of values are summed or averaged. For perfect
consistency and repeatability, use the <codeph>DECIMAL</codeph> data type for such
operations instead of <codeph>FLOAT</codeph> or <codeph>DOUBLE</codeph>.
</p>
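<p id="sum_double_decimal_example">
For example, with a hypothetical table holding monetary amounts, declaring the column as
<codeph>DECIMAL</codeph> makes the aggregate results exact and repeatable:
<codeblock xml:space="preserve">create table sales_exact (id bigint, amount decimal(18,4)) stored as parquet;
-- SUM() and AVG() on the DECIMAL column return the same result on every run.
select sum(amount), avg(amount) from sales_exact;
</codeblock>
</p>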
<p id="float_double_decimal_caveat">
The inability to exactly represent certain floating-point values means that
<codeph>DECIMAL</codeph> is sometimes a better choice than <codeph>DOUBLE</codeph> or
<codeph>FLOAT</codeph> when precision is critical, particularly when transferring data
from other database systems that use different representations or file formats.
</p>
<p rev="" id="hive_column_stats_caveat">
If you run the Hive statement <codeph>ANALYZE TABLE COMPUTE STATISTICS FOR
COLUMNS</codeph>, Impala can only use the resulting column statistics if the table is
unpartitioned. Impala cannot use Hive-generated column statistics for a partitioned
table.
</p>
<p id="datetime_function_chaining">
<codeph>UNIX_TIMESTAMP()</codeph> and <codeph>FROM_UNIXTIME()</codeph> are often used in
combination to convert a <codeph>TIMESTAMP</codeph> value into a particular string
format. For example:
<codeblock xml:space="preserve">SELECT FROM_UNIXTIME(UNIX_TIMESTAMP(NOW() + interval 3 days),
'yyyy/MM/dd HH:mm') AS yyyy_mm_dd_hh_mm;
+------------------+
| yyyy_mm_dd_hh_mm |
+------------------+
| 2016/06/03 11:38 |
+------------------+
</codeblock>
</p>
<p rev="1.4.0 obwl" id="insert_sort_blurb">
<b>Sorting considerations:</b> Although you can specify an <codeph>ORDER BY</codeph>
clause in an <codeph>INSERT ... SELECT</codeph> statement, any <codeph>ORDER BY</codeph>
clause is ignored and the results are not necessarily sorted. An <codeph>INSERT ...
SELECT</codeph> operation potentially creates many different data files, prepared by
different executor Impala daemons, and therefore the notion of the data being stored in
sorted order is impractical.
</p>
<p rev="1.4.0" id="create_table_like_view">
Prior to Impala 1.4.0, it was not possible to use the <codeph>CREATE TABLE LIKE
<varname>view_name</varname></codeph> syntax. In Impala 1.4.0 and higher, you can create
a table with the same column definitions as a view using the <codeph>CREATE TABLE
LIKE</codeph> technique. Although <codeph>CREATE TABLE LIKE</codeph> normally inherits
the file format of the original table, a view has no underlying file format, so
<codeph>CREATE TABLE LIKE <varname>view_name</varname></codeph> produces a text table by
default. To specify a different file format, include a <codeph>STORED AS
<varname>file_format</varname></codeph> clause at the end of the <codeph>CREATE TABLE
LIKE</codeph> statement.
</p>
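<p id="create_table_like_view_example">
For example, with a hypothetical view <codeph>V1</codeph>:
<codeblock xml:space="preserve">-- The new table uses the text format, because a view has no underlying file format.
create table t2 like v1;
-- Specify the desired file format explicitly.
create table t3 like v1 stored as parquet;
</codeblock>
</p>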
<note rev="1.4.0" id="compute_stats_nulls">
Prior to Impala 1.4.0, <codeph>COMPUTE STATS</codeph> counted the number of
<codeph>NULL</codeph> values in each column and recorded that figure in the metastore
database. Because Impala does not currently use the <codeph>NULL</codeph> count during
query planning, Impala 1.4.0 and higher speeds up the <codeph>COMPUTE STATS</codeph>
statement by skipping this <codeph>NULL</codeph> counting.
</note>
<p id="regular_expression_whole_string">
The regular expression must match the entire value, not just occur somewhere inside it.
Use <codeph>.*</codeph> at the beginning, the end, or both if you only need to match
characters anywhere in the middle. Thus, the <codeph>^</codeph> and <codeph>$</codeph>
atoms are often redundant, although you might already have them in your expression
strings that you reuse from elsewhere.
</p>
<p rev="1.3.1" id="regexp_matching">
In Impala 1.3.1 and higher, the <codeph>REGEXP</codeph> and <codeph>RLIKE</codeph>
operators match when the regular expression matches any part of the target
string, as if the regular expression were enclosed on each side by
<codeph>.*</codeph>. See <xref href="../topics/impala_operators.xml#regexp"/> for
examples. Previously, these operators only succeeded when the regular expression matched
the entire target string. This change improves compatibility with the regular expression
support for popular database systems. There is no change to the behavior of the
<codeph>regexp_extract()</codeph> and <codeph>regexp_replace()</codeph> built-in
functions.
</p>
<p rev="1.3.1" id="insert_inherit_permissions">
By default, if an <codeph>INSERT</codeph> statement creates any new subdirectories
underneath a partitioned table, those subdirectories are assigned default HDFS
permissions for the <codeph>impala</codeph> user. To make each subdirectory have the
same permissions as its parent directory in HDFS, specify the
<codeph>&#8209;&#8209;insert_inherit_permissions</codeph> startup option for the
<cmdname>impalad</cmdname> daemon.
</p>
<p>
<ph id="union_all_vs_union">Prefer <codeph>UNION ALL</codeph> over
<codeph>UNION</codeph> when you know the data sets are disjoint or duplicate values are
not a problem; <codeph>UNION ALL</codeph> is more efficient because it avoids
materializing and sorting the entire result set to eliminate duplicate values.</ph>
</p>
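<p id="union_all_vs_union_example">
For example, with hypothetical tables <codeph>T1</codeph> and <codeph>T2</codeph>:
<codeblock xml:space="preserve">-- Removes duplicate rows, which requires extra processing.
select x from t1 union select x from t2;
-- Keeps any duplicate rows, avoiding that overhead.
select x from t1 union all select x from t2;
</codeblock>
</p>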
<note id="thorn">
The <codeph>CREATE TABLE</codeph> clauses <codeph>FIELDS TERMINATED BY</codeph>,
<codeph>ESCAPED BY</codeph>, and <codeph>LINES TERMINATED BY</codeph> have special rules
for the string literal used for their argument, because they all require a single
character. You can use a regular character surrounded by single or double quotation
marks, an octal sequence such as <codeph>'\054'</codeph> (representing a comma), or an
integer in the range '-127'..'128' (with quotation marks but no backslash), which is
interpreted as a single-byte ASCII character. Negative values are subtracted from 256;
for example, <codeph>FIELDS TERMINATED BY '-2'</codeph> sets the field delimiter to
ASCII code 254, the <q>Icelandic Thorn</q> character used as a delimiter by some data
formats.
</note>
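<p id="thorn_example">
For example, the following illustrative statements create text tables whose fields are
separated by a comma (specified as an octal sequence) and by ASCII code 254 (specified as a
negative integer):
<codeblock xml:space="preserve">create table csv_comma (c1 int, c2 string)
  row format delimited fields terminated by '\054';
create table csv_thorn (c1 int, c2 string)
  row format delimited fields terminated by '-2';
</codeblock>
</p>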
<!--The following caveats no longer apply starting in 3.2. Will remove when confirmed. AR-->
<p id="sqoop_blurb" audience="hidden" rev="3.2">
<b>Sqoop considerations:</b>
</p>
<p id="sqoop_timestamp_caveat" rev="3.2" audience="hidden">
If you use Sqoop to convert RDBMS data to Parquet, be careful with interpreting any
resulting values from <codeph>DATE</codeph>, <codeph>DATETIME</codeph>, or
<codeph>TIMESTAMP</codeph> columns. The underlying values are represented as the Parquet
<codeph>INT64</codeph> type, which is represented as <codeph>BIGINT</codeph> in the
Impala table. The Parquet values represent the time in milliseconds, while Impala
interprets <codeph>BIGINT</codeph> as the time in seconds. Therefore, if you have a
<codeph>BIGINT</codeph> column in a Parquet table that was imported this way from Sqoop,
divide the values by 1000 when interpreting as the <codeph>TIMESTAMP</codeph> type.
</p>
<p id="command_line_blurb">
<b>Command-line equivalent:</b>
</p>
<p rev="2.3.0" id="complex_types_blurb">
<b>Complex type considerations:</b>
</p>
<p id="complex_types_combo">
Because complex types are often used in combination, for example an
<codeph>ARRAY</codeph> of <codeph>STRUCT</codeph> elements, if you are unfamiliar with
the Impala complex types, start with
<xref href="../topics/impala_complex_types.xml#complex_types"/> for background
information and usage examples.
</p>
<p id="complex_types_short_intro">
In <keyword keyref="impala23_full"/> and higher, Impala supports the complex types
<codeph>ARRAY</codeph>, <codeph>STRUCT</codeph>, and <codeph>MAP</codeph>. In
<keyword
keyref="impala32_full"/> and higher, Impala also supports these
complex types in ORC. See
<xref
href="../topics/impala_complex_types.xml#complex_types"/> for details.
These complex types are currently supported only in the Parquet and ORC file formats.
Because Impala has better performance on Parquet than ORC, if you plan to use complex
types, become familiar with the performance and storage aspects of Parquet first.
</p>
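<p id="complex_types_ddl_example">
For example, the following illustrative table definition uses all three complex types with a
Parquet table:
<codeblock xml:space="preserve">create table complex_demo
(
  id bigint,
  tags array&lt;string&gt;,
  address struct&lt;street:string, city:string&gt;,
  metrics map&lt;string, double&gt;
)
stored as parquet;
</codeblock>
</p>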
<ul id="complex_types_restrictions">
<li>
<p>
Columns with this data type can only be used in tables or partitions with the
Parquet or ORC file format.
</p>
</li>
<li>
<p>
Columns with this data type cannot be used as partition key columns in a partitioned
table.
</p>
</li>
<li>
<p>
The <codeph>COMPUTE STATS</codeph> statement does not produce any statistics for
columns of this data type.
</p>
</li>
<li rev="">
<p id="complex_types_max_length">
The maximum length of the column definition for any complex type, including
declarations for any nested types, is 4000 characters.
</p>
</li>
<li>
<p>
See <xref href="../topics/impala_complex_types.xml#complex_types_limits"/> for a
full list of limitations and associated guidelines about complex type columns.
</p>
</li>
</ul>
<p rev="2.3.0" id="complex_types_partitioning">
Partitioned tables can contain complex type columns. All the partition key columns must
be scalar types.
</p>
<p rev="2.3.0" id="complex_types_describe">
You can pass a multi-part qualified name to <codeph>DESCRIBE</codeph> to specify an
<codeph>ARRAY</codeph>, <codeph>STRUCT</codeph>, or <codeph>MAP</codeph> column and
visualize its structure as if it were a table. For example, if table <codeph>T1</codeph>
contains an <codeph>ARRAY</codeph> column <codeph>A1</codeph>, you could issue the
statement <codeph>DESCRIBE t1.a1</codeph>. If table <codeph>T1</codeph> contained a
<codeph>STRUCT</codeph> column <codeph>S1</codeph>, and a field <codeph>F1</codeph>
within the <codeph>STRUCT</codeph> was a <codeph>MAP</codeph>, you could issue the
statement <codeph>DESCRIBE t1.s1.f1</codeph>. An <codeph>ARRAY</codeph> is shown as a
two-column table, with <codeph>ITEM</codeph> and <codeph>POS</codeph> columns. A
<codeph>STRUCT</codeph> is shown as a table with each field representing a column in the
table. A <codeph>MAP</codeph> is shown as a two-column table, with <codeph>KEY</codeph>
and <codeph>VALUE</codeph> columns.
</p>
<note id="complex_type_schema_pointer">
Many of the complex type examples refer to tables such as <codeph>CUSTOMER</codeph> and
<codeph>REGION</codeph> adapted from the tables used in the TPC-H benchmark. See
<xref href="../topics/impala_complex_types.xml#complex_sample_schema"/> for the table
definitions.
</note>
<p rev="2.3.0" id="complex_types_unsupported_filetype">
<b>Complex type considerations:</b> Although you can create tables in this file format
using the complex types (<codeph>ARRAY</codeph>, <codeph>STRUCT</codeph>, and
<codeph>MAP</codeph>) available in <keyword keyref="impala23_full"/> and higher,
currently, Impala can query these types only in Parquet tables. <ph rev="IMPALA-2844">
The one exception to the preceding rule is <codeph>COUNT(*)</codeph> queries on RCFile
tables that include complex types. Such queries are allowed in
<keyword keyref="impala26_full"/> and higher. </ph>
</p>
<p rev="2.3.0" id="complex_types_caveat_no_operator">
You cannot refer to a column with a complex data type (<codeph>ARRAY</codeph>,
<codeph>STRUCT</codeph>, or <codeph>MAP</codeph>) directly in an operator. You can apply
operators only to scalar values that make up a complex type (the fields of a
<codeph>STRUCT</codeph>, the items of an <codeph>ARRAY</codeph>, or the key or value
portion of a <codeph>MAP</codeph>) as part of a join query that refers to the scalar
value using the appropriate dot notation or <codeph>ITEM</codeph>, <codeph>KEY</codeph>,
or <codeph>VALUE</codeph> pseudocolumn names.
</p>
<p rev="2.3.0" id="udfs_no_complex_types">
Currently, Impala UDFs cannot accept arguments or return values of the Impala complex
types (<codeph>STRUCT</codeph>, <codeph>ARRAY</codeph>, or <codeph>MAP</codeph>).
</p>
<p rev="2.3.0" id="complex_types_read_only">
Impala currently cannot write new data files containing complex type columns. Therefore,
although the <codeph>SELECT</codeph> statement works for queries involving complex type
columns, you cannot use a statement form that writes data to complex type columns, such
as <codeph>CREATE TABLE AS SELECT</codeph> or <codeph>INSERT ... SELECT</codeph>. To
create data files containing complex type data, use the Hive <codeph>INSERT</codeph>
statement, or another ETL mechanism such as MapReduce jobs, Spark jobs, Pig, and so on.
</p>
<p rev="2.3.0" id="complex_types_views">
For tables containing complex type columns (<codeph>ARRAY</codeph>,
<codeph>STRUCT</codeph>, or <codeph>MAP</codeph>), you typically use join queries to
refer to the complex values. You can use views to hide the join notation, making such
tables seem like traditional denormalized tables, and making those tables queryable by
business intelligence tools that do not have built-in support for those complex types.
See <xref href="../topics/impala_complex_types.xml#complex_types_views"/> for details.
</p>
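<p id="complex_types_views_example">
For example, using the <codeph>REGION</codeph> table shown in the aggregation example below, a
view can hide the join notation needed to reach the nested <codeph>ARRAY</codeph> elements:
<codeblock xml:space="preserve">create view region_nations as
  select r_name, r_nations.item.n_name as n_name
  from region, region.r_nations as r_nations;
-- The view can be queried like a flat two-column table.
select r_name, n_name from region_nations order by r_name, n_name;
</codeblock>
</p>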
<p rev="2.3.0" id="complex_types_views_caveat">
Because you cannot directly issue <codeph>SELECT <varname>col_name</varname></codeph>
against a column of complex type, you cannot use a view or a <codeph>WITH</codeph>
clause to <q>rename</q> a column by selecting it with a column alias.
</p>
<p rev="2.3.0" id="complex_types_aggregation_explanation">
To access a column with a complex type (<codeph>ARRAY</codeph>, <codeph>STRUCT</codeph>,
or <codeph>MAP</codeph>) in an aggregation function, you unpack the individual elements
using join notation in the query, and then apply the function to the final scalar item,
field, key, or value at the bottom of any nested type hierarchy in the column. See
<xref href="../topics/impala_complex_types.xml#complex_types"/> for details about using
complex types in Impala.
</p>
<p rev="2.3.0" id="complex_types_aggregation_example">
The following example demonstrates calls to several aggregation functions using values
from a column containing nested complex types (an <codeph>ARRAY</codeph> of
<codeph>STRUCT</codeph> items). The array is unpacked inside the query using join
notation. The array elements are referenced using the <codeph>ITEM</codeph>
pseudocolumn, and the structure fields inside the array elements are referenced using
dot notation. Numeric values such as <codeph>SUM()</codeph> and <codeph>AVG()</codeph>
are computed using the numeric <codeph>R_NATIONKEY</codeph> field, and the
general-purpose <codeph>MAX()</codeph> and <codeph>MIN()</codeph> values are computed
from the string <codeph>N_NAME</codeph> field.
<codeblock>describe region;
+-------------+-------------------------+---------+
| name        | type                    | comment |
+-------------+-------------------------+---------+
| r_regionkey | smallint                |         |
| r_name      | string                  |         |
| r_comment   | string                  |         |
| r_nations   | array&lt;struct&lt;           |         |
|             |   n_nationkey:smallint, |         |
|             |   n_name:string,        |         |
|             |   n_comment:string      |         |
|             | &gt;&gt;                      |         |
+-------------+-------------------------+---------+
select r_name, r_nations.item.n_nationkey
from region, region.r_nations as r_nations
order by r_name, r_nations.item.n_nationkey;
+-------------+------------------+
| r_name      | item.n_nationkey |
+-------------+------------------+
| AFRICA      | 0                |
| AFRICA      | 5                |
| AFRICA      | 14               |
| AFRICA      | 15               |
| AFRICA      | 16               |
| AMERICA     | 1                |
| AMERICA     | 2                |
| AMERICA     | 3                |
| AMERICA     | 17               |
| AMERICA     | 24               |
| ASIA        | 8                |
| ASIA        | 9                |
| ASIA        | 12               |
| ASIA        | 18               |
| ASIA        | 21               |
| EUROPE      | 6                |
| EUROPE      | 7                |
| EUROPE      | 19               |
| EUROPE      | 22               |
| EUROPE      | 23               |
| MIDDLE EAST | 4                |
| MIDDLE EAST | 10               |
| MIDDLE EAST | 11               |
| MIDDLE EAST | 13               |
| MIDDLE EAST | 20               |
+-------------+------------------+
select
r_name,
count(r_nations.item.n_nationkey) as count,
sum(r_nations.item.n_nationkey) as sum,
avg(r_nations.item.n_nationkey) as avg,
min(r_nations.item.n_name) as minimum,
max(r_nations.item.n_name) as maximum,
ndv(r_nations.item.n_nationkey) as distinct_vals
from
region, region.r_nations as r_nations
group by r_name
order by r_name;
+-------------+-------+-----+------+-----------+----------------+---------------+
| r_name      | count | sum | avg  | minimum   | maximum        | distinct_vals |
+-------------+-------+-----+------+-----------+----------------+---------------+
| AFRICA      | 5     | 50  | 10   | ALGERIA   | MOZAMBIQUE     | 5             |
| AMERICA     | 5     | 47  | 9.4  | ARGENTINA | UNITED STATES  | 5             |
| ASIA        | 5     | 68  | 13.6 | CHINA     | VIETNAM        | 5             |
| EUROPE      | 5     | 77  | 15.4 | FRANCE    | UNITED KINGDOM | 5             |
| MIDDLE EAST | 5     | 58  | 11.6 | EGYPT     | SAUDI ARABIA   | 5             |
+-------------+-------+-----+------+-----------+----------------+---------------+
</codeblock>
</p>
<p id="hive_blurb">
<b>Hive considerations:</b>
</p>
<p rev="" id="permissions_blurb">
<b>HDFS permissions:</b>
</p>
<p rev="" id="permissions_blurb_no">
<b>HDFS permissions:</b> This statement does not touch any HDFS files or directories,
therefore no HDFS permissions are required.
</p>
<p id="security_blurb">
<b>Security considerations:</b>
</p>
<p id="performance_blurb">
<b>Performance considerations:</b>
</p>
<p id="conversion_blurb">
<b>Casting and conversions:</b>
</p>
<p id="related_info">
<b>Related information:</b>
</p>
<p id="related_tasks">
<b>Related tasks:</b>
</p>
<p id="related_options">
<b>Related startup options:</b>
</p>
<p id="restrictions_blurb">
<b>Restrictions:</b>
</p>
<p rev="2.0.0" id="restrictions_sliding_window">
<b>Restrictions:</b> In Impala 2.0 and higher, this function can be used as an analytic
function, but with restrictions on any window clause. For <codeph>MAX()</codeph> and
<codeph>MIN()</codeph>, the window clause is only allowed if the start bound is
<codeph>UNBOUNDED PRECEDING</codeph>.
</p>
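<p id="sliding_window_example">
For example, the following analytic call is allowed because the window starts with
<codeph>UNBOUNDED PRECEDING</codeph>; it uses the <codeph>UNSORTED</codeph> table from the
view examples earlier in this file:
<codeblock xml:space="preserve">select x,
  max(x) over (order by x rows between unbounded preceding and current row) as running_max
from unsorted;
</codeblock>
</p>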
<!-- This blurb has been superceded by analytic_not_allowed_caveat. Consider removing it if it turns out never to be needed. -->
<p rev="2.0.0" id="restrictions_non_analytic">
<b>Restrictions:</b> This function cannot be used as an analytic function; it does not
currently support the <codeph>OVER()</codeph> clause.
</p>
<p id="compatibility_blurb">
<b>Compatibility:</b>
</p>
<p id="null_blurb">
<b>NULL considerations:</b>
</p>
<p id="udf_blurb">
<b>UDF considerations:</b>
</p>
<p id="udf_blurb_no">
<b>UDF considerations:</b> This type cannot be used for the argument or return type of a
user-defined function (UDF) or user-defined aggregate function (UDA).
</p>
<p id="view_blurb">
<b>Considerations for views:</b>
</p>
<p id="null_bad_numeric_cast">
<b>NULL considerations:</b> Casting any non-numeric value to this type produces a
<codeph>NULL</codeph> value.
</p>
<p id="null_bad_timestamp_cast">
<b>NULL considerations:</b> Casting any unrecognized <codeph>STRING</codeph> value to
this type produces a <codeph>NULL</codeph> value.
</p>
<p id="null_null_arguments">
<b>NULL considerations:</b> An expression of this type produces a <codeph>NULL</codeph>
value if any argument of the expression is <codeph>NULL</codeph>.
</p>
<p id="privileges_blurb">
<b>Required privileges:</b>
</p>
<p id="parquet_blurb">
<b>Parquet considerations:</b>
</p>
<!-- Github project for parquet-tools: https://github.com/Parquet/parquet-mr/tree/master/parquet-tools -->
<p id="parquet_tools_blurb">
To examine the internal structure and data of Parquet files, you can use the
<cmdname>parquet-tools</cmdname> command. Make sure this command is in your
<codeph>$PATH</codeph>. (Typically, it is symlinked from <filepath>/usr/bin</filepath>;
sometimes, depending on your installation setup, you might need to locate it under an
alternative <codeph>bin</codeph> directory.) The arguments to this command let you
perform operations such as:
<ul>
<li>
<codeph>cat</codeph>: Print a file's contents to standard output. In
<keyword keyref="impala23_full"/> and higher, you can use the <codeph>-j</codeph>
option to output JSON.
</li>
<li>
<codeph>head</codeph>: Print the first few records of a file to standard output.
</li>
<li>
<codeph>schema</codeph>: Print the Parquet schema for the file.
</li>
<li>
<codeph>meta</codeph>: Print the file footer metadata, including key-value
properties (like Avro schema), compression ratios, encodings, compression used, and
row group information.
</li>
<li>
<codeph>dump</codeph>: Print all data and metadata.
</li>
</ul>
Use <codeph>parquet-tools -h</codeph> to see usage information for all the arguments.
Here are some examples showing <cmdname>parquet-tools</cmdname> usage:
<codeblock><![CDATA[
$ # Be careful doing this for a big file! Use parquet-tools head to be safe.
$ parquet-tools cat sample.parq
year = 1992
month = 1
day = 2
dayofweek = 4
dep_time = 748
crs_dep_time = 750
arr_time = 851
crs_arr_time = 846
carrier = US
flight_num = 53
actual_elapsed_time = 63
crs_elapsed_time = 56
arrdelay = 5
depdelay = -2
origin = CMH
dest = IND
distance = 182
cancelled = 0
diverted = 0
year = 1992
month = 1
day = 3
...
]]>
</codeblock>
<codeblock><![CDATA[
$ parquet-tools head -n 2 sample.parq
year = 1992
month = 1
day = 2
dayofweek = 4
dep_time = 748
crs_dep_time = 750
arr_time = 851
crs_arr_time = 846
carrier = US
flight_num = 53
actual_elapsed_time = 63
crs_elapsed_time = 56
arrdelay = 5