PREHOOK: query: CREATE TABLE test_table_n4(key int, value string) CLUSTERED BY (key) INTO 3 BUCKETS
PREHOOK: type: CREATETABLE
PREHOOK: Output: database:default
PREHOOK: Output: default@test_table_n4
POSTHOOK: query: CREATE TABLE test_table_n4(key int, value string) CLUSTERED BY (key) INTO 3 BUCKETS
POSTHOOK: type: CREATETABLE
POSTHOOK: Output: database:default
POSTHOOK: Output: default@test_table_n4
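The bucketing metadata recorded for the new table can be confirmed with standard HiveQL; a minimal sketch, assuming an active session on the default database:

-- DESCRIBE FORMATTED reports Num Buckets: 3 and Bucket Columns: [key]
DESCRIBE FORMATTED test_table_n4;
-- SHOW CREATE TABLE echoes back the CLUSTERED BY (key) INTO 3 BUCKETS clause
SHOW CREATE TABLE test_table_n4;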
PREHOOK: query: explain extended insert overwrite table test_table_n4
select * from src
PREHOOK: type: QUERY
PREHOOK: Input: default@src
PREHOOK: Output: default@test_table_n4
POSTHOOK: query: explain extended insert overwrite table test_table_n4
select * from src
POSTHOOK: type: QUERY
POSTHOOK: Input: default@src
POSTHOOK: Output: default@test_table_n4
OPTIMIZED SQL: SELECT `key`, `value`
FROM `default`.`src`
STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-2 depends on stages: Stage-1
  Stage-0 depends on stages: Stage-2
  Stage-3 depends on stages: Stage-0

STAGE PLANS:
  Stage: Stage-1
    Tez
#### A masked pattern was here ####
      Edges:
        Reducer 2 <- Map 1 (SIMPLE_EDGE)
#### A masked pattern was here ####
      Vertices:
        Map 1
            Map Operator Tree:
                TableScan
                  alias: src
                  Statistics: Num rows: 500 Data size: 89000 Basic stats: COMPLETE Column stats: COMPLETE
                  GatherStats: false
                  Select Operator
                    expressions: UDFToInteger(key) (type: int), value (type: string)
                    outputColumnNames: _col0, _col1
                    Statistics: Num rows: 500 Data size: 47500 Basic stats: COMPLETE Column stats: COMPLETE
                    Reduce Output Operator
                      bucketingVersion: 2
                      key expressions: _col0 (type: int)
                      null sort order: a
                      numBuckets: -1
                      sort order: +
                      Map-reduce partition columns: _col0 (type: int)
                      Statistics: Num rows: 500 Data size: 47500 Basic stats: COMPLETE Column stats: COMPLETE
                      tag: -1
                      value expressions: _col1 (type: string)
                      auto parallelism: false
            Execution mode: llap
            LLAP IO: all inputs
            Path -> Alias:
#### A masked pattern was here ####
            Path -> Partition:
#### A masked pattern was here ####
                Partition
                  base file name: src
                  input format: org.apache.hadoop.mapred.TextInputFormat
                  output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                  properties:
                    bucket_count -1
                    bucketing_version 2
                    column.name.delimiter ,
                    columns key,value
                    columns.types string:string
#### A masked pattern was here ####
                    name default.src
                    serialization.format 1
                    serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
                  serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

                    input format: org.apache.hadoop.mapred.TextInputFormat
                    output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                    properties:
                      bucketing_version 2
                      column.name.delimiter ,
                      columns key,value
                      columns.comments 'default','default'
                      columns.types string:string
#### A masked pattern was here ####
                      name default.src
                      serialization.format 1
                      serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
                    serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
                    name: default.src
                  name: default.src
            Truncated Path -> Alias:
              /src [src]
        Reducer 2
            Execution mode: llap
            Needs Tagging: false
            Reduce Operator Tree:
              Select Operator
                expressions: KEY.reducesinkkey0 (type: int), VALUE._col0 (type: string)
                outputColumnNames: _col0, _col1
                Statistics: Num rows: 500 Data size: 47500 Basic stats: COMPLETE Column stats: COMPLETE
                File Output Operator
                  bucketingVersion: 2
                  compressed: false
                  GlobalTableId: 1
#### A masked pattern was here ####
                  NumFilesPerFileSink: 3
                  Statistics: Num rows: 500 Data size: 47500 Basic stats: COMPLETE Column stats: COMPLETE
#### A masked pattern was here ####
                  table:
                      input format: org.apache.hadoop.mapred.TextInputFormat
                      output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                      properties:
                        bucket_count 3
                        bucket_field_name key
                        bucketing_version 2
                        column.name.delimiter ,
                        columns key,value
                        columns.comments 
                        columns.types int:string
#### A masked pattern was here ####
                        name default.test_table_n4
                        serialization.format 1
                        serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
                      serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
                      name: default.test_table_n4
                  TotalFiles: 3
                  GatherStats: true
                  MultiFileSpray: true
                Select Operator
                  expressions: _col0 (type: int), _col1 (type: string)
                  outputColumnNames: key, value
                  Statistics: Num rows: 500 Data size: 47500 Basic stats: COMPLETE Column stats: COMPLETE
                  Group By Operator
                    aggregations: min(key), max(key), count(1), count(key), compute_bit_vector(key, 'hll'), max(length(value)), avg(COALESCE(length(value),0)), count(value), compute_bit_vector(value, 'hll')
                    mode: complete
                    outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8
                    Statistics: Num rows: 1 Data size: 332 Basic stats: COMPLETE Column stats: COMPLETE
                    Select Operator
                      expressions: 'LONG' (type: string), UDFToLong(_col0) (type: bigint), UDFToLong(_col1) (type: bigint), (_col2 - _col3) (type: bigint), COALESCE(ndv_compute_bit_vector(_col4),0) (type: bigint), _col4 (type: binary), 'STRING' (type: string), UDFToLong(COALESCE(_col5,0)) (type: bigint), COALESCE(_col6,0) (type: double), (_col2 - _col7) (type: bigint), COALESCE(ndv_compute_bit_vector(_col8),0) (type: bigint), _col8 (type: binary)
                      outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11
                      Statistics: Num rows: 1 Data size: 530 Basic stats: COMPLETE Column stats: COMPLETE
                      File Output Operator
                        bucketingVersion: 2
                        compressed: false
                        GlobalTableId: 0
#### A masked pattern was here ####
                        NumFilesPerFileSink: 1
                        Statistics: Num rows: 1 Data size: 530 Basic stats: COMPLETE Column stats: COMPLETE
#### A masked pattern was here ####
                        table:
                            input format: org.apache.hadoop.mapred.SequenceFileInputFormat
                            output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
                            properties:
                              bucketing_version -1
                              columns _col0,_col1,_col2,_col3,_col4,_col5,_col6,_col7,_col8,_col9,_col10,_col11
                              columns.types string:bigint:bigint:bigint:bigint:binary:string:bigint:double:bigint:bigint:binary
                              escape.delim \
                              hive.serialization.extend.additional.nesting.levels true
                              serialization.escape.crlf true
                              serialization.format 1
                              serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
                            serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
                        TotalFiles: 1
                        GatherStats: false
                        MultiFileSpray: false

  Stage: Stage-2
    Dependency Collection

  Stage: Stage-0
    Move Operator
      tables:
          replace: true
#### A masked pattern was here ####
          table:
              input format: org.apache.hadoop.mapred.TextInputFormat
              output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
              properties:
                bucket_count 3
                bucket_field_name key
                bucketing_version 2
                column.name.delimiter ,
                columns key,value
                columns.comments 
                columns.types int:string
#### A masked pattern was here ####
                name default.test_table_n4
                serialization.format 1
                serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
              serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
              name: default.test_table_n4

  Stage: Stage-3
    Stats Work
      Basic Stats Work:
#### A masked pattern was here ####
      Column Stats Desc:
          Columns: key, value
          Column Types: int, string
          Table: default.test_table_n4
          Is Table Level Stats: true
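In the plan above, the File Output Operator for default.test_table_n4 carries TotalFiles: 3 and MultiFileSpray: true, i.e. each file sink sprays rows across up to three bucket files (NumFilesPerFileSink: 3). Once the insert below runs, the bucket files can be listed from a Hive session; a minimal sketch, assuming the default warehouse location (the real path is masked in the plan):

-- expect three bucket files, e.g. 000000_0, 000001_0, 000002_0
dfs -ls /user/hive/warehouse/test_table_n4;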
PREHOOK: query: insert overwrite table test_table_n4
select * from src
PREHOOK: type: QUERY
PREHOOK: Input: default@src
PREHOOK: Output: default@test_table_n4
POSTHOOK: query: insert overwrite table test_table_n4
select * from src
POSTHOOK: type: QUERY
POSTHOOK: Input: default@src
POSTHOOK: Output: default@test_table_n4
POSTHOOK: Lineage: test_table_n4.key EXPRESSION [(src)src.FieldSchema(name:key, type:string, comment:default), ]
POSTHOOK: Lineage: test_table_n4.value SIMPLE [(src)src.FieldSchema(name:value, type:string, comment:default), ]
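With the data loaded, a single bucket can be read back via bucket sampling; a minimal sketch using standard HiveQL TABLESAMPLE (row counts depend on the contents of src):

-- reads only the first of the three bucket files, because the sampling
-- expression matches the table's bucketing (3 buckets ON key)
SELECT COUNT(*) FROM test_table_n4 TABLESAMPLE(BUCKET 1 OUT OF 3 ON key);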
PREHOOK: query: drop table test_table_n4
PREHOOK: type: DROPTABLE
PREHOOK: Input: default@test_table_n4
PREHOOK: Output: default@test_table_n4
POSTHOOK: query: drop table test_table_n4
POSTHOOK: type: DROPTABLE
POSTHOOK: Input: default@test_table_n4
POSTHOOK: Output: default@test_table_n4