PREHOOK: query: CREATE TABLE dest1_n11(key INT, value STRING) STORED AS TEXTFILE
PREHOOK: type: CREATETABLE
PREHOOK: Output: database:default
PREHOOK: Output: default@dest1_n11
POSTHOOK: query: CREATE TABLE dest1_n11(key INT, value STRING) STORED AS TEXTFILE
POSTHOOK: type: CREATETABLE
POSTHOOK: Output: database:default
POSTHOOK: Output: default@dest1_n11
PREHOOK: query: explain
FROM srcpart src1 JOIN src src2 ON (src1.key = src2.key)
INSERT OVERWRITE TABLE dest1_n11 SELECT src1.key, src2.value
where (src1.ds = '2008-04-08' or src1.ds = '2008-04-09') and (src1.hr = '12' or src1.hr = '11')
PREHOOK: type: QUERY
POSTHOOK: query: explain
FROM srcpart src1 JOIN src src2 ON (src1.key = src2.key)
INSERT OVERWRITE TABLE dest1_n11 SELECT src1.key, src2.value
where (src1.ds = '2008-04-08' or src1.ds = '2008-04-09') and (src1.hr = '12' or src1.hr = '11')
POSTHOOK: type: QUERY
STAGE DEPENDENCIES:
  Stage-6 is a root stage
  Stage-5 depends on stages: Stage-6
  Stage-0 depends on stages: Stage-5
  Stage-2 depends on stages: Stage-0, Stage-3
  Stage-3 depends on stages: Stage-5

STAGE PLANS:
  Stage: Stage-6
    Map Reduce Local Work
      Alias -> Map Local Tables:
        $hdt$_1:src2
          Fetch Operator
            limit: -1
      Alias -> Map Local Operator Tree:
        $hdt$_1:src2
          TableScan
            alias: src2
            Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
            Filter Operator
              predicate: key is not null (type: boolean)
              Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
              Select Operator
                expressions: key (type: string), value (type: string)
                outputColumnNames: _col0, _col1
                Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
                HashTable Sink Operator
                  keys:
                    0 _col0 (type: string)
                    1 _col0 (type: string)

  Stage: Stage-5
    Map Reduce
      Map Operator Tree:
          TableScan
            alias: src1
            Statistics: Num rows: 2000 Data size: 21248 Basic stats: COMPLETE Column stats: NONE
            Filter Operator
              predicate: key is not null (type: boolean)
              Statistics: Num rows: 2000 Data size: 21248 Basic stats: COMPLETE Column stats: NONE
              Select Operator
                expressions: key (type: string)
                outputColumnNames: _col0
                Statistics: Num rows: 2000 Data size: 21248 Basic stats: COMPLETE Column stats: NONE
                Map Join Operator
                  condition map:
                       Inner Join 0 to 1
                  keys:
                    0 _col0 (type: string)
                    1 _col0 (type: string)
                  outputColumnNames: _col0, _col4
                  Statistics: Num rows: 2200 Data size: 23372 Basic stats: COMPLETE Column stats: NONE
                  Select Operator
                    expressions: UDFToInteger(_col0) (type: int), _col4 (type: string)
                    outputColumnNames: _col0, _col1
                    Statistics: Num rows: 2200 Data size: 23372 Basic stats: COMPLETE Column stats: NONE
                    File Output Operator
                      compressed: false
                      Statistics: Num rows: 2200 Data size: 23372 Basic stats: COMPLETE Column stats: NONE
                      table:
                          input format: org.apache.hadoop.mapred.TextInputFormat
                          output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                          serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
                          name: default.dest1_n11
                    Select Operator
                      expressions: _col0 (type: int), _col1 (type: string)
                      outputColumnNames: key, value
                      Statistics: Num rows: 2200 Data size: 23372 Basic stats: COMPLETE Column stats: NONE
                      Group By Operator
                        aggregations: compute_stats(key, 'hll'), compute_stats(value, 'hll')
                        mode: hash
                        outputColumnNames: _col0, _col1
                        Statistics: Num rows: 1 Data size: 864 Basic stats: COMPLETE Column stats: NONE
                        File Output Operator
                          compressed: false
                          table:
                              input format: org.apache.hadoop.mapred.SequenceFileInputFormat
                              output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
                              serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
      Local Work:
        Map Reduce Local Work

  Stage: Stage-0
    Move Operator
      tables:
          replace: true
          table:
              input format: org.apache.hadoop.mapred.TextInputFormat
              output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
              serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
              name: default.dest1_n11

  Stage: Stage-2
    Stats Work
      Basic Stats Work:
      Column Stats Desc:
          Columns: key, value
          Column Types: int, string
          Table: default.dest1_n11

  Stage: Stage-3
    Map Reduce
      Map Operator Tree:
          TableScan
            Reduce Output Operator
              sort order: 
              Statistics: Num rows: 1 Data size: 864 Basic stats: COMPLETE Column stats: NONE
              value expressions: _col0 (type: struct<columntype:string,min:bigint,max:bigint,countnulls:bigint,bitvector:binary>), _col1 (type: struct<columntype:string,maxlength:bigint,sumlength:bigint,count:bigint,countnulls:bigint,bitvector:binary>)
      Execution mode: vectorized
      Reduce Operator Tree:
        Group By Operator
          aggregations: compute_stats(VALUE._col0), compute_stats(VALUE._col1)
          mode: mergepartial
          outputColumnNames: _col0, _col1
          Statistics: Num rows: 1 Data size: 880 Basic stats: COMPLETE Column stats: NONE
          File Output Operator
            compressed: false
            Statistics: Num rows: 1 Data size: 880 Basic stats: COMPLETE Column stats: NONE
            table:
                input format: org.apache.hadoop.mapred.SequenceFileInputFormat
                output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
                serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
PREHOOK: query: FROM srcpart src1 JOIN src src2 ON (src1.key = src2.key)
INSERT OVERWRITE TABLE dest1_n11 SELECT src1.key, src2.value
where (src1.ds = '2008-04-08' or src1.ds = '2008-04-09') and (src1.hr = '12' or src1.hr = '11')
PREHOOK: type: QUERY
PREHOOK: Input: default@src
PREHOOK: Input: default@srcpart
PREHOOK: Input: default@srcpart@ds=2008-04-08/hr=11
PREHOOK: Input: default@srcpart@ds=2008-04-08/hr=12
PREHOOK: Input: default@srcpart@ds=2008-04-09/hr=11
PREHOOK: Input: default@srcpart@ds=2008-04-09/hr=12
PREHOOK: Output: default@dest1_n11
POSTHOOK: query: FROM srcpart src1 JOIN src src2 ON (src1.key = src2.key)
INSERT OVERWRITE TABLE dest1_n11 SELECT src1.key, src2.value
where (src1.ds = '2008-04-08' or src1.ds = '2008-04-09') and (src1.hr = '12' or src1.hr = '11')
POSTHOOK: type: QUERY
POSTHOOK: Input: default@src
POSTHOOK: Input: default@srcpart
POSTHOOK: Input: default@srcpart@ds=2008-04-08/hr=11
POSTHOOK: Input: default@srcpart@ds=2008-04-08/hr=12
POSTHOOK: Input: default@srcpart@ds=2008-04-09/hr=11
POSTHOOK: Input: default@srcpart@ds=2008-04-09/hr=12
POSTHOOK: Output: default@dest1_n11
POSTHOOK: Lineage: dest1_n11.key EXPRESSION [(srcpart)src1.FieldSchema(name:key, type:string, comment:default), ]
POSTHOOK: Lineage: dest1_n11.value SIMPLE [(src)src2.FieldSchema(name:value, type:string, comment:default), ]
PREHOOK: query: SELECT sum(hash(dest1_n11.key,dest1_n11.value)) FROM dest1_n11
PREHOOK: type: QUERY
PREHOOK: Input: default@dest1_n11
#### A masked pattern was here ####
POSTHOOK: query: SELECT sum(hash(dest1_n11.key,dest1_n11.value)) FROM dest1_n11
POSTHOOK: type: QUERY
POSTHOOK: Input: default@dest1_n11
#### A masked pattern was here ####
407444119660
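
For reference, a minimal HiveQL sketch of the statements this golden file captures is shown below. It assumes the standard src and srcpart sample tables of the Hive test harness are already loaded, and that map-join conversion is enabled (e.g. via SET hive.auto.convert.join=true); without it the optimizer would not produce the Stage-6 local work with the HashTable Sink Operator seen in the plan above.

-- Sketch only; not part of the recorded hook output.
CREATE TABLE dest1_n11(key INT, value STRING) STORED AS TEXTFILE;

EXPLAIN
FROM srcpart src1 JOIN src src2 ON (src1.key = src2.key)
INSERT OVERWRITE TABLE dest1_n11 SELECT src1.key, src2.value
where (src1.ds = '2008-04-08' or src1.ds = '2008-04-09') and (src1.hr = '12' or src1.hr = '11');

FROM srcpart src1 JOIN src src2 ON (src1.key = src2.key)
INSERT OVERWRITE TABLE dest1_n11 SELECT src1.key, src2.value
where (src1.ds = '2008-04-08' or src1.ds = '2008-04-09') and (src1.hr = '12' or src1.hr = '11');

-- Checksum over the loaded rows, matching the final query of the test.
SELECT sum(hash(dest1_n11.key, dest1_n11.value)) FROM dest1_n11;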