<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">
<document>
<header>
<title>Built In Functions</title>
</header>
<body>
<section id="built-in-functions">
<title>Introduction</title>
<p>
Pig comes with a set of built-in functions (the eval, load/store, math, string, bag and tuple functions). Two main properties differentiate built-in functions from <a href="udf.html">user defined functions</a> (UDFs). First, built-in functions don't need to be registered because Pig knows where they are. Second, built-in functions don't need to be qualified when they are used because Pig knows where to find them.
</p>
</section>
<!-- ================================================================== -->
<!-- DYNAMIC INVOKERS -->
<section id="dynamic-invokers">
<title>Dynamic Invokers</title>
<p>Often you may need to use a simple function that is already provided by standard Java libraries, but for which a <a href="udf.html">user defined function</a> (UDF) has not been written. Dynamic invokers allow you to refer to Java functions without having to wrap them in custom UDFs, at the cost of doing some Java reflection on every function call.
</p>
<source>
...
DEFINE UrlDecode InvokeForString('java.net.URLDecoder.decode', 'String String');
encoded_strings = LOAD 'encoded_strings.txt' as (encoded:chararray);
decoded_strings = FOREACH encoded_strings GENERATE UrlDecode(encoded, 'UTF-8');
...
</source>
<p>Currently, dynamic invokers can be used for any static function that: </p>
<ul>
<li>Accepts no arguments or accepts some combination of strings, ints, longs, doubles, floats, or arrays of these same types </li>
<li>Returns a string, an int, a long, a double, or a float</li>
</ul>
<p>Only primitives can be used for numbers; no capital-letter numeric classes can be used as arguments. Depending on the return type, a specific kind of invoker must be used: InvokeForString, InvokeForInt, InvokeForLong, InvokeForDouble, or InvokeForFloat. </p>
<p>The <a href="basic.html#define">DEFINE</a> statement is used to bind a keyword to a Java method, as above. The first argument to the InvokeFor* constructor is the full path to the desired method. The second argument is a space-delimited, ordered list of the classes of the method arguments; it can be omitted, or given as an empty string, if the method takes no arguments. Valid class names are string, long, float, double, and int. Invokers can also work with array arguments, represented in Pig as DataBags of single-tuple elements; simply refer to string[], for example. Class names are not case sensitive. </p>
<p>The ability to use invokers on methods that take array arguments makes methods like those in org.apache.commons.math.stat.StatUtils available (for processing the results of grouping your datasets, for example). This is helpful, but a word of caution: the resulting UDF will not be optimized for Hadoop, and the very significant benefits one gains from implementing the Algebraic and Accumulator interfaces are lost here. Be careful if you use invokers this way.</p>
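<p>As an illustrative sketch of the array-argument case (the file name and field names here are hypothetical, and this assumes the Apache Commons Math jar has been registered and is on the classpath), one might invoke StatUtils.mean on grouped data like this:</p>
<source>
-- double[] arguments are passed as bags of single-field tuples
DEFINE Mean InvokeForDouble('org.apache.commons.math.stat.StatUtils.mean', 'double[]');
measurements = LOAD 'measurements.txt' AS (city:chararray, temp:double);
grouped = GROUP measurements BY city;
-- measurements.temp is a bag of single-field double tuples, matching double[]
avg_temps = FOREACH grouped GENERATE group, Mean(measurements.temp);
</source>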
</section>
<!-- ======================================================== -->
<!-- EVAL FUNCTIONS -->
<section id="eval-functions">
<title>Eval Functions</title>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="avg">
<title>AVG</title>
<p>Computes the average of the numeric values in a single-column bag. </p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>AVG(expression)</p>
</td>
</tr>
</table>
</section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>Any expression whose result is a bag. The elements of the bag should be data type int, long, float, double, bigdecimal, biginteger or bytearray.</p>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>Use the AVG function to compute the average of the numeric values in a single-column bag.
AVG requires a preceding GROUP ALL statement for global averages and a GROUP BY statement for group averages.</p>
<p>The AVG function ignores NULL values. </p>
</section>
<section>
<title>Example</title>
<p>In this example the average GPA for each student is computed (see the <a href="basic.html#group">GROUP</a> operator for information about the field names in relation B).</p>
<source>
A = LOAD 'student.txt' AS (name:chararray, term:chararray, gpa:float);
DUMP A;
(John,fl,3.9F)
(John,wt,3.7F)
(John,sp,4.0F)
(John,sm,3.8F)
(Mary,fl,3.8F)
(Mary,wt,3.9F)
(Mary,sp,4.0F)
(Mary,sm,4.0F)
B = GROUP A BY name;
DUMP B;
(John,{(John,fl,3.9F),(John,wt,3.7F),(John,sp,4.0F),(John,sm,3.8F)})
(Mary,{(Mary,fl,3.8F),(Mary,wt,3.9F),(Mary,sp,4.0F),(Mary,sm,4.0F)})
C = FOREACH B GENERATE A.name, AVG(A.gpa);
DUMP C;
({(John),(John),(John),(John)},3.850000023841858)
({(Mary),(Mary),(Mary),(Mary)},3.925000011920929)
</source>
</section>
<section>
<title>Types Tables</title>
<table>
<tr>
<td>
<p></p>
</td>
<td>
<p>int </p>
</td>
<td>
<p>long </p>
</td>
<td>
<p>float </p>
</td>
<td>
<p>double </p>
</td>
<td>
<p>bigdecimal </p>
</td>
<td>
<p>biginteger </p>
</td>
<td>
<p>chararray </p>
</td>
<td>
<p>bytearray </p>
</td>
</tr>
<tr>
<td>
<p>AVG </p>
</td>
<td>
<p>double </p>
</td>
<td>
<p>double </p>
</td>
<td>
<p>double </p>
</td>
<td>
<p>double </p>
</td>
<td>
<p>bigdecimal *</p>
</td>
<td>
<p>bigdecimal *</p>
</td>
<td>
<p>error </p>
</td>
<td>
<p>cast as double </p>
</td>
</tr>
</table>
<p>* Average values for datatypes bigdecimal and biginteger have precision setting <a href="http://docs.oracle.com/javase/7/docs/api/java/math/MathContext.html#DECIMAL128">java.math.MathContext.DECIMAL128</a>.</p>
</section></section>
<!-- ======================================================== -->
<section id="bagtostring">
<title>BagToString</title>
<p>Concatenate the elements of a Bag into a chararray string, placing an optional delimiter between each value.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>BagToString(vals:bag [, delimiter:chararray])</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td><p>vals</p></td>
<td><p>A bag of arbitrary values. They will each be cast to chararray if they are not already.</p></td>
</tr>
<tr>
<td><p>delimiter</p></td>
<td><p>A chararray value to place between elements of the bag; defaults to underscore <code>'_'</code>.</p></td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>BagToString creates a single string from the elements of a bag, similar to SQL's <code>GROUP_CONCAT</code> function. Keep in mind the following:</p>
<ul>
<li>Bags can be of arbitrary size, while strings in Java cannot: you will either exhaust available memory or exceed the maximum number of characters (about 2 billion). One of the worst features a production job can have is thresholding behavior: everything will seem fine until the data size of your largest bag grows from nearly-too-big to just-barely-too-big.</li>
<li>Bags are disordered unless you explicitly apply a nested <code>ORDER BY</code> operation as demonstrated below. A nested <code>FOREACH</code> will preserve ordering, letting you order by one combination of fields then project out just the values you'd like to concatenate.</li>
<li>The default string conversion is applied to each element. If the bag's contents are not atoms (tuple, map, etc.), this may not be what you want. Use a nested <code>FOREACH</code> to format values and then compose them with BagToString as shown below.</li>
</ul>
<p>Examples:</p>
<table>
<tr><th>vals</th> <th>delimiter</th> <th>BagToString(vals, delimiter)</th> <th>Notes</th> </tr>
<tr> <td><code>{('BOS'),('NYA'),('BAL')}</code></td> <td><code></code></td> <td><code>BOS_NYA_BAL</code></td> <td>If only one argument is given, the field is delimited with underscore characters</td></tr>
<tr> <td><code>{('BOS'),('NYA'),('BAL')}</code></td> <td><code>'|'</code></td> <td><code>BOS|NYA|BAL</code></td> <td>But you can supply your own delimiter</td></tr>
<tr> <td><code>{('BOS'),('NYA'),('BAL')}</code></td> <td><code>''</code></td> <td><code>BOSNYABAL</code></td> <td>Use an explicit empty string to just smush everything together</td></tr>
<tr> <td><code>{(1),(2),(3)}</code></td> <td><code>'|'</code></td> <td><code>1|2|3</code></td> <td>Elements are type-converted for you (but see examples below)</td></tr>
</table>
</section>
<section>
<title>Examples</title>
<p>Simple delimited strings are simple:</p>
<source>
team_parks = LOAD 'team_parks' AS (team_id:chararray, park_id:chararray, years:bag{(year_id:int)});
-- BOS BOS07 {(1995),(1997),(1996),(1998),(1999)}
-- NYA NYC16 {(1995),(1999),(1998),(1997),(1996)}
-- NYA NYC17 {(1998)}
-- SDN HON01 {(1997)}
-- SDN MNT01 {(1996),(1999)}
-- SDN SAN01 {(1999),(1997),(1998),(1995),(1996)}
team_parkslist = FOREACH (GROUP team_parks BY team_id) GENERATE
group AS team_id, BagToString(team_parks.park_id, ';');
-- BOS BOS07
-- NYA NYC17;NYC16
-- SDN SAN01;MNT01;HON01
</source>
<p>The default handling of complex elements works, but probably isn't what you want.</p>
<source>
team_parkyearsugly = FOREACH (GROUP team_parks BY team_id) GENERATE
group AS team_id,
BagToString(team_parks.(park_id, years));
-- BOS BOS07_{(1995),(1997),(1996),(1998),(1999)}
-- NYA NYC17_{(1998)}_NYC16_{(1995),(1999),(1998),(1997),(1996)}
-- SDN SAN01_{(1999),(1997),(1998),(1995),(1996)}_MNT01_{(1996),(1999)}_HON01_{(1997)}
</source>
<p>Instead, assemble it in pieces. In step 2, we sort on one field but process another; it remains in the sorted order.</p>
<source>
team_park_yearslist = FOREACH team_parks {
years_o = ORDER years BY year_id;
GENERATE team_id, park_id, SIZE(years_o) AS n_years, BagToString(years_o, '/') AS yearslist;
};
team_parkyearslist = FOREACH (GROUP team_park_yearslist BY team_id) {
tpy_o = ORDER team_park_yearslist BY n_years DESC, park_id ASC;
tpy_f = FOREACH tpy_o GENERATE CONCAT(park_id, ':', yearslist);
GENERATE group AS team_id, BagToString(tpy_f, ';');
};
-- BOS BOS07:1995/1996/1997/1998/1999
-- NYA NYC16:1995/1996/1997/1998/1999;NYC17:1998
-- SDN SAN01:1995/1996/1997/1998/1999;MNT01:1996/1999;HON01:1997
</source>
</section>
</section>
<section id="bagtotuple">
<title>BagToTuple</title>
<p>Un-nests the elements of a bag into a tuple.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>BagToTuple(expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression with data type bag.</p>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>BagToTuple creates a tuple from the elements of a bag. It removes only
the first level of nesting; it does not recursively un-nest nested bags.
Unlike FLATTEN, BagToTuple will not generate multiple output records per
input record.
</p>
</section>
<section>
<title>Examples</title>
<p>In this example, a bag containing tuples with one field is converted to a tuple.</p>
<source>
A = LOAD 'bag_data' AS (B1:bag{T1:tuple(f1:chararray)});
DUMP A;
({('a'),('b'),('c')})
({('d'),('e'),('f')})
X = FOREACH A GENERATE BagToTuple(B1);
DUMP X;
(('a','b','c'))
(('d','e','f'))
</source>
<p>In this example, a bag containing tuples with two fields is converted to a tuple.</p>
<source>
A = LOAD 'bag_data' AS (B1:bag{T1:tuple(f1:int,f2:int)});
DUMP A;
({(4,1),(7,8),(4,9)})
({(5,8),(4,3),(3,8)})
X = FOREACH A GENERATE BagToTuple(B1);
DUMP X;
((4,1,7,8,4,9))
((5,8,4,3,3,8))
</source>
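<p>To illustrate the contrast with FLATTEN noted in the Usage section, a sketch using the same data as the previous example: FLATTEN produces one output record per bag element, while BagToTuple produces a single record per input record.</p>
<source>
Y = FOREACH A GENERATE FLATTEN(B1);
DUMP Y;
(4,1)
(7,8)
(4,9)
(5,8)
(4,3)
(3,8)
</source>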
</section>
</section>
<section id="bloom">
<title>Bloom</title>
<p>Bloom filters are a common way to select a limited set of records before
moving data for a join or other heavy weight operation.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>BuildBloom(String hashType, String mode, String vectorSize, String nbHash)</p>
</td>
</tr>
<tr>
<td>
<p>Bloom(String filename)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td><p>hashtype</p></td>
<td><p>The type of hash function to use. Valid values for the hash functions are 'jenkins' and 'murmur'.</p></td>
</tr>
<tr>
<td><p>mode</p></td>
<td><p>Will be ignored, though by convention it should be "fixed" or "fixedsize"</p></td>
</tr>
<tr>
<td><p>vectorSize</p></td>
<td><p>The number of bits in the bloom filter.</p></td>
</tr>
<tr>
<td><p>nbHash</p></td>
<td><p>The number of hash functions used in constructing the bloom filter.</p></td>
</tr>
<tr>
<td><p>filename</p></td>
<td><p>File containing the serialized Bloom filter.</p></td>
</tr>
</table>
<p>See <a href="http://en.wikipedia.org/wiki/Bloom_filter">Bloom Filter</a> for
a discussion of how to select the number of bits and the number of hash
functions.
</p>
</section>
<section>
<title>Usage</title>
<p>Bloom filters are a common way to select a limited set of records before
moving data for a join or other heavy weight operation. For example, if
one wanted to join a very large data set L with a smaller set S, and it
was known that the number of keys in L that will match with S is small,
building a bloom filter on S and then applying it to L before the join
can greatly reduce the number of records from L that have to be moved
from the map to the reduce, thus speeding the join.
</p>
<p>The implementation uses Hadoop's bloom filters
(org.apache.hadoop.util.bloom.BloomFilter) internally.
</p>
</section>
<section>
<title>Examples</title>
<source>
define bb BuildBloom('jenkins', 'fixed', '128', '3');
small = load 'S' as (x, y, z);
grpd = group small all;
fltrd = foreach grpd generate bb(small.x);
store fltrd into 'mybloom';
exec;
define bloom Bloom('mybloom');
large = load 'L' as (a, b, c);
flarge = filter large by bloom(a);
joined = join small by x, flarge by a;
store joined into 'results';
</source>
</section>
</section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="concat">
<title>CONCAT</title>
<p>Concatenates two or more expressions of identical type.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>CONCAT (expression, expression, [...expression])</p>
</td>
</tr>
</table>
</section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>Any expression.</p>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>Use the CONCAT function to concatenate two or more expressions. The result values of the expressions must have identical types.</p>
<p>If any subexpression is null, the resulting expression is null.</p>
</section>
<section>
<title>Example</title>
<p>In this example, fields f1, an underscore string literal, f2 and f3 are concatenated.</p>
<source>
A = LOAD 'data' as (f1:chararray, f2:chararray, f3:chararray);
DUMP A;
(apache,open,source)
(hadoop,map,reduce)
(pig,pig,latin)
X = FOREACH A GENERATE CONCAT(f1, '_', f2, f3);
DUMP X;
(apache_opensource)
(hadoop_mapreduce)
(pig_piglatin)
</source>
</section>
</section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="count">
<title>COUNT</title>
<p>Computes the number of elements in a bag. </p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>COUNT(expression) </p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression with data type bag.</p>
</td>
</tr>
</table></section>
<section>
<title>Usage</title>
<p>Use the COUNT function to compute the number of elements in a bag.
COUNT requires a preceding GROUP ALL statement for global counts and a GROUP BY statement for group counts.</p>
<p>
The COUNT function follows SQL semantics with respect to nulls:
a tuple in the bag will not be counted if the first field in that tuple is NULL.
If you want to include NULL values in the count computation, use
<a href="#count-star">COUNT_STAR</a>.
</p>
<p>
Note: You cannot use the tuple designator (*) with COUNT; that is, COUNT(*) will not work.
</p>
</section>
<section>
<title>Example</title>
<p>In this example the tuples in the bag are counted (see the <a href="basic.html#group">GROUP</a> operator for information about the field names in relation B).</p>
<source>
A = LOAD 'data' AS (f1:int,f2:int,f3:int);
DUMP A;
(1,2,3)
(4,2,1)
(8,3,4)
(4,3,3)
(7,2,5)
(8,4,3)
B = GROUP A BY f1;
DUMP B;
(1,{(1,2,3)})
(4,{(4,2,1),(4,3,3)})
(7,{(7,2,5)})
(8,{(8,3,4),(8,4,3)})
X = FOREACH B GENERATE COUNT(A);
DUMP X;
(1L)
(2L)
(1L)
(2L)
</source>
</section>
<section>
<title>Types Tables</title>
<table>
<tr>
<td>
<p></p>
</td>
<td>
<p>int </p>
</td>
<td>
<p>long </p>
</td>
<td>
<p>float </p>
</td>
<td>
<p>double </p>
</td>
<td>
<p>chararray </p>
</td>
<td>
<p>bytearray </p>
</td>
</tr>
<tr>
<td>
<p>COUNT </p>
</td>
<td>
<p>long </p>
</td>
<td>
<p>long </p>
</td>
<td>
<p>long </p>
</td>
<td>
<p>long </p>
</td>
<td>
<p>long </p>
</td>
<td>
<p>long </p>
</td>
</tr>
</table>
</section></section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="count-star">
<title>COUNT_STAR</title>
<p>Computes the number of elements in a bag. </p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>COUNT_STAR(expression)  </p>
</td>
</tr>
</table>
</section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression with data type bag.</p>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>Use the COUNT_STAR function to compute the number of elements in a bag.
COUNT_STAR requires a preceding GROUP ALL statement for global counts and a GROUP BY statement for group counts.</p>
<p>COUNT_STAR includes NULL values in the count computation
(unlike <a href="#count">COUNT</a>, which ignores NULL values).
</p>
</section>
<section>
<title>Example</title>
<p>In this example COUNT_STAR is used to count the tuples in a bag.</p>
<source>
X = FOREACH B GENERATE COUNT_STAR(A);
</source>
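<p>To make the difference from COUNT concrete, a sketch using hypothetical data with a null value (recall that COUNT skips a tuple whose first field is null, while COUNT_STAR counts it):</p>
<source>
A = LOAD 'data' AS (f1:int);
DUMP A;
(1)
()
(3)
B = GROUP A ALL;
X = FOREACH B GENERATE COUNT(A), COUNT_STAR(A);
</source>
<p>Here COUNT(A) would yield 2L while COUNT_STAR(A) would yield 3L, because the tuple with the null field is skipped by COUNT but counted by COUNT_STAR.</p>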
</section>
</section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="diff">
<title>DIFF</title>
<p>Compares two fields in a tuple.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>DIFF (expression, expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression with any data type.</p>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>The DIFF function takes two bags as arguments and compares them.
Any tuples that are in one bag but not the other are returned in a bag.
If the bags match, an empty bag is returned. If the fields are not bags
then they will be wrapped in tuples and returned in a bag if they do not match,
or an empty bag will be returned if the two records match. The implementation
assumes that both bags being passed to the DIFF function will fit entirely
into memory simultaneously. If this is not the case the UDF will still function
but it will be VERY slow.</p>
</section>
<section>
<title>Example</title>
<p>In this example DIFF compares the tuples in two bags.</p>
<source>
A = LOAD 'bag_data' AS (B1:bag{T1:tuple(t1:int,t2:int)},B2:bag{T2:tuple(f1:int,f2:int)});
DUMP A;
({(8,9),(0,1)},{(8,9),(1,1)})
({(2,3),(4,5)},{(2,3),(4,5)})
({(6,7),(3,7)},{(2,2),(3,7)})
DESCRIBE A;
A: {B1: {T1: (t1: int,t2: int)},B2: {T2: (f1: int,f2: int)}}
X = FOREACH A GENERATE DIFF(B1,B2);
DUMP X;
({(0,1),(1,1)})
({})
({(6,7),(2,2)})
</source>
</section></section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="isempty">
<title>IsEmpty</title>
<p>Checks if a bag or map is empty.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>IsEmpty(expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression with any data type.</p>
</td>
</tr>
</table></section>
<section>
<title>Usage</title>
<p>The IsEmpty function checks if a bag or map is empty (has no data). The function can be used to filter data.</p></section>
<section>
<title>Example</title>
<p>In this example all students with an SSN but no name are located.</p>
<source>
SSN = load 'ssn.txt' using PigStorage() as (ssn:long);
SSN_NAME = load 'students.txt' using PigStorage() as (ssn:long, name:chararray);
/* do a cogroup of SSN with SSN_Name */
X = COGROUP SSN by ssn, SSN_NAME by ssn;
/* only keep those ssn's for which there is no name */
Y = filter X by IsEmpty(SSN_NAME);
</source>
</section></section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="max">
<title>MAX</title>
<p>Computes the maximum of the numeric values or chararrays in a single-column bag. MAX requires a preceding GROUP ALL statement for global maximums and a GROUP BY statement for group maximums.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>MAX(expression)        </p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression with data types int, long, float, double, bigdecimal, biginteger, chararray, datetime or bytearray.</p>
</td>
</tr>
</table></section>
<section>
<title>Usage</title>
<p>Use the MAX function to compute the maximum of the numeric values or chararrays in a single-column bag.</p>
<p>The MAX function ignores NULL values.</p>
</section>
<section>
<title>Example</title>
<p>In this example the maximum GPA for all terms is computed for each student (see the GROUP operator for information about the field names in relation B).</p>
<source>
A = LOAD 'student' AS (name:chararray, session:chararray, gpa:float);
DUMP A;
(John,fl,3.9F)
(John,wt,3.7F)
(John,sp,4.0F)
(John,sm,3.8F)
(Mary,fl,3.8F)
(Mary,wt,3.9F)
(Mary,sp,4.0F)
(Mary,sm,4.0F)
B = GROUP A BY name;
DUMP B;
(John,{(John,fl,3.9F),(John,wt,3.7F),(John,sp,4.0F),(John,sm,3.8F)})
(Mary,{(Mary,fl,3.8F),(Mary,wt,3.9F),(Mary,sp,4.0F),(Mary,sm,4.0F)})
X = FOREACH B GENERATE group, MAX(A.gpa);
DUMP X;
(John,4.0F)
(Mary,4.0F)
</source>
</section>
<section>
<title>Types Tables</title>
<table>
<tr>
<td>
<p></p>
</td>
<td>
<p>int </p>
</td>
<td>
<p>long </p>
</td>
<td>
<p>float </p>
</td>
<td>
<p>double </p>
</td>
<td>
<p>bigdecimal </p>
</td>
<td>
<p>biginteger </p>
</td>
<td>
<p>chararray </p>
</td>
<td>
<p>datetime </p>
</td>
<td>
<p>bytearray </p>
</td>
</tr>
<tr>
<td>
<p>MAX </p>
</td>
<td>
<p>int </p>
</td>
<td>
<p>long </p>
</td>
<td>
<p>float </p>
</td>
<td>
<p>double </p>
</td>
<td>
<p>bigdecimal </p>
</td>
<td>
<p>biginteger </p>
</td>
<td>
<p>chararray </p>
</td>
<td>
<p>datetime </p>
</td>
<td>
<p>cast as double</p>
</td>
</tr>
</table>
</section></section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="min">
<title>MIN</title>
<p>Computes the minimum of the numeric values or chararrays in a single-column bag. MIN requires a preceding GROUP ALL statement for global minimums and a GROUP BY statement for group minimums.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>MIN(expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression with data types int, long, float, double, bigdecimal, biginteger, chararray, datetime or bytearray.</p>
</td>
</tr>
</table></section>
<section>
<title>Usage</title>
<p>Use the MIN function to compute the minimum of a set of numeric values or chararrays in a single-column bag.</p>
<p>The MIN function ignores NULL values.</p>
</section>
<section>
<title>Example</title>
<p>In this example the minimum GPA for all terms is computed for each student (see the GROUP operator for information about the field names in relation B).</p>
<source>
A = LOAD 'student' AS (name:chararray, session:chararray, gpa:float);
DUMP A;
(John,fl,3.9F)
(John,wt,3.7F)
(John,sp,4.0F)
(John,sm,3.8F)
(Mary,fl,3.8F)
(Mary,wt,3.9F)
(Mary,sp,4.0F)
(Mary,sm,4.0F)
B = GROUP A BY name;
DUMP B;
(John,{(John,fl,3.9F),(John,wt,3.7F),(John,sp,4.0F),(John,sm,3.8F)})
(Mary,{(Mary,fl,3.8F),(Mary,wt,3.9F),(Mary,sp,4.0F),(Mary,sm,4.0F)})
X = FOREACH B GENERATE group, MIN(A.gpa);
DUMP X;
(John,3.7F)
(Mary,3.8F)
</source>
</section>
<section>
<title>Types Tables</title>
<table>
<tr>
<td>
<p></p>
</td>
<td>
<p>int </p>
</td>
<td>
<p>long </p>
</td>
<td>
<p>float </p>
</td>
<td>
<p>double </p>
</td>
<td>
<p>bigdecimal </p>
</td>
<td>
<p>biginteger </p>
</td>
<td>
<p>chararray </p>
</td>
<td>
<p>datetime </p>
</td>
<td>
<p>bytearray </p>
</td>
</tr>
<tr>
<td>
<p>MIN </p>
</td>
<td>
<p>int </p>
</td>
<td>
<p>long </p>
</td>
<td>
<p>float </p>
</td>
<td>
<p>double </p>
</td>
<td>
<p>bigdecimal </p>
</td>
<td>
<p>biginteger </p>
</td>
<td>
<p>chararray </p>
</td>
<td>
<p>datetime </p>
</td>
<td>
<p>cast as double</p>
</td>
</tr>
</table>
</section></section>
<section id="plucktuple">
<title>PluckTuple</title>
<p>Allows the user to specify a string prefix or regex pattern, and then filter for the columns in a relation that begin with that prefix or match that regex pattern. Optionally, include the flag 'false' to instead keep the columns that do not match the prefix or regex pattern.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>DEFINE pluck PluckTuple(expression1)</p>
<p>DEFINE pluck PluckTuple(expression1,expression3)</p>
<p>pluck(expression2)</p>
</td>
</tr>
</table>
</section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression1</p>
</td>
<td>
<p>A prefix to pluck by or a regex pattern to pluck by</p>
</td>
</tr>
<tr>
<td>
<p>expression2</p>
</td>
<td>
<p>The fields to apply the pluck to, usually '*'</p>
</td>
</tr>
<tr>
<td>
<p>expression3</p>
</td>
<td>
<p>A boolean flag to indicate whether to include or exclude matching columns</p>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>Example:</p>
<source>
a = load 'a' as (x, y);
b = load 'b' as (x, y);
c = join a by x, b by x;
DEFINE pluck PluckTuple('a::');
d = foreach c generate FLATTEN(pluck(*));
describe c;
c: {a::x: bytearray,a::y: bytearray,b::x: bytearray,b::y: bytearray}
describe d;
d: {plucked::a::x: bytearray,plucked::a::y: bytearray}
DEFINE pluckNegative PluckTuple('a::','false');
d = foreach c generate FLATTEN(pluckNegative(*));
describe d;
d: {plucked::b::x: bytearray,plucked::b::y: bytearray}
</source>
</section>
</section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="size">
<title>SIZE</title>
<p>Computes the number of elements based on any Pig data type. </p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>SIZE(expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression with any data type.</p>
</td>
</tr>
</table></section>
<section>
<title>Usage</title>
<p>Use the SIZE function to compute the number of elements based on the data type (see the Types Tables below).
SIZE includes NULL values in the size computation. SIZE is not algebraic.</p>
<p>If the tested object is null, the SIZE function returns null.</p>
</section>
<section>
<title>Example</title>
<p>In this example the number of characters in the first field is computed.</p>
<source>
A = LOAD 'data' as (f1:chararray, f2:chararray, f3:chararray);
DUMP A;
(apache,open,source)
(hadoop,map,reduce)
(pig,pig,latin)
X = FOREACH A GENERATE SIZE(f1);
DUMP X;
(6L)
(6L)
(3L)
</source>
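<p>A further sketch with hypothetical data: applied to a bag, SIZE returns the number of tuples the bag contains (see the Types Tables below).</p>
<source>
B = LOAD 'bag_data' AS (B1:bag{T1:tuple(f1:chararray)});
DUMP B;
({('a'),('b'),('c')})
({('d'),('e')})
Y = FOREACH B GENERATE SIZE(B1);
DUMP Y;
(3L)
(2L)
</source>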
</section>
<section>
<title>Types Tables</title>
<table>
<tr>
<td>
<p>int </p>
</td>
<td>
<p>returns 1 </p>
</td>
</tr>
<tr>
<td>
<p>long </p>
</td>
<td>
<p>returns 1 </p>
</td>
</tr>
<tr>
<td>
<p>float </p>
</td>
<td>
<p>returns 1 </p>
</td>
</tr>
<tr>
<td>
<p>double </p>
</td>
<td>
<p>returns 1 </p>
</td>
</tr>
<tr>
<td>
<p>chararray </p>
</td>
<td>
<p>returns number of characters in the array </p>
</td>
</tr>
<tr>
<td>
<p>bytearray </p>
</td>
<td>
<p>returns number of bytes in the array </p>
</td>
</tr>
<tr>
<td>
<p>tuple </p>
</td>
<td>
<p>returns number of fields in the tuple</p>
</td>
</tr>
<tr>
<td>
<p>bag </p>
</td>
<td>
<p>returns number of tuples in bag </p>
</td>
</tr>
<tr>
<td>
<p>map </p>
</td>
<td>
<p>returns number of key/value pairs in map </p>
</td>
</tr>
</table></section></section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="subtract">
<title>SUBTRACT</title>
<p>Bag subtraction: SUBTRACT(bag1, bag2) returns a bag composed of the elements of bag1 that are not in bag2.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>SUBTRACT(expression, expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression with data type bag.</p>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>SUBTRACT takes two bags as arguments and returns a new bag composed of the tuples of the first bag that are not in the second bag.</p>
<p>Null bag arguments are replaced by empty bags.<br></br>If the arguments are not bags, an IOException is thrown.</p>
<p>The implementation assumes that both bags being passed to the SUBTRACT function will fit <strong>entirely
into memory</strong> simultaneously; if this is not the case, SUBTRACT will still function but will be <strong>very</strong> slow.</p>
</section>
<section>
<title>Example</title>
<p>In this example, SUBTRACT creates a new bag composed of B1 elements that are not in B2.</p>
<source>
A = LOAD 'bag_data' AS (B1:bag{T1:tuple(t1:int,t2:int)},B2:bag{T2:tuple(f1:int,f2:int)});
DUMP A;
({(8,9),(0,1),(1,2)},{(8,9),(1,1)})
({(2,3),(4,5)},{(2,3),(4,5)})
({(6,7),(3,7),(3,7)},{(2,2),(3,7)})
DESCRIBE A;
A: {B1: {T1: (t1: int,t2: int)},B2: {T2: (f1: int,f2: int)}}
X = FOREACH A GENERATE SUBTRACT(B1,B2);
DUMP X;
({(0,1),(1,2)})
({})
({(6,7)})
</source>
</section></section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="sum">
<title>SUM</title>
<p>Computes the sum of the numeric values in a single-column bag. SUM requires a preceding GROUP ALL statement for global sums and a GROUP BY statement for group sums.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>SUM(expression)        </p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression with data types int, long, float, double, bigdecimal, biginteger or bytearray cast as double.</p>
</td>
</tr>
</table></section>
<section>
<title>Usage</title>
<p>Use the SUM function to compute the sum of a set of numeric values in a single-column bag.</p>
<p>The SUM function ignores NULL values.</p>
</section>
<section>
<title>Example</title>
<p>In this example the number of pets is computed. (see the GROUP operator for information about the field names in relation B).</p>
<source>
A = LOAD 'data' AS (owner:chararray, pet_type:chararray, pet_num:int);
DUMP A;
(Alice,turtle,1)
(Alice,goldfish,5)
(Alice,cat,2)
(Bob,dog,2)
(Bob,cat,2)
B = GROUP A BY owner;
DUMP B;
(Alice,{(Alice,turtle,1),(Alice,goldfish,5),(Alice,cat,2)})
(Bob,{(Bob,dog,2),(Bob,cat,2)})
X = FOREACH B GENERATE group, SUM(A.pet_num);
DUMP X;
(Alice,8L)
(Bob,4L)
</source>
</section>
<section>
<title>Types Tables</title>
<table>
<tr>
<td>
<p></p>
</td>
<td>
<p>int </p>
</td>
<td>
<p>long </p>
</td>
<td>
<p>float </p>
</td>
<td>
<p>double </p>
</td>
<td>
<p>bigdecimal </p>
</td>
<td>
<p>biginteger </p>
</td>
<td>
<p>chararray </p>
</td>
<td>
<p>bytearray </p>
</td>
</tr>
<tr>
<td>
<p>SUM </p>
</td>
<td>
<p>long </p>
</td>
<td>
<p>long </p>
</td>
<td>
<p>double </p>
</td>
<td>
<p>double </p>
</td>
<td>
<p>bigdecimal </p>
</td>
<td>
<p>biginteger </p>
</td>
<td>
<p>error </p>
</td>
<td>
<p>cast as double </p>
</td>
</tr>
</table>
</section></section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="in">
<title>IN</title>
<p>The IN operator lets you test whether an expression matches any value in a list of values. Use it to reduce the need for multiple OR conditions.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>IN (expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression with data types chararray, int, long, float, double, bigdecimal, biginteger or bytearray.</p>
</td>
</tr>
</table></section>
<section>
<title>Usage</title>
<p>The IN operator lets you test whether an expression matches any value in a list of values. Use it to reduce the need for multiple OR conditions.</p>
</section>
<section>
<title>Example</title>
<p>In this example we retain only the records with IDs 4 and 6.</p>
<source>
A = load 'data' using PigStorage(',') AS (id:int, first:chararray, last:chararray, gender:chararray);
DUMP A;
(1,Christine,Romero,Female)
(2,Sara,Hansen,Female)
(3,Albert,Rogers,Male)
(4,Kimberly,Morrison,Female)
(5,Eugene,Baker,Male)
(6,Ann,Alexander,Female)
(7,Kathleen,Reed,Female)
(8,Todd,Scott,Male)
(9,Sharon,Mccoy,Female)
(10,Evelyn,Rice,Female)
X = FILTER A BY id IN (4, 6);
DUMP X;
(4,Kimberly,Morrison,Female)
(6,Ann,Alexander,Female)
</source>
</section>
<p>In this example, we pass a biginteger field and use the NOT operator to negate the IN clause, filtering out the listed values.</p>
<source>
A = load 'data' using PigStorage(',') AS (id:biginteger, first:chararray, last:chararray, gender:chararray);
X = FILTER A BY NOT id IN (1, 3, 5, 7, 9);
DUMP X;
(2,Sara,Hansen,Female)
(4,Kimberly,Morrison,Female)
(6,Ann,Alexander,Female)
(8,Todd,Scott,Male)
(10,Evelyn,Rice,Female)
</source>
</section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="tokenize">
<title>TOKENIZE</title>
<p>Splits a string and outputs a bag of words. </p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>TOKENIZE(expression [, 'field_delimiter'])        </p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression with data type chararray.</p>
</td>
</tr>
<tr>
<td>
<p>'field_delimiter'</p>
</td>
<td>
<p>An optional field delimiter (in single quotes).</p>
<p>If field_delimiter is null or not passed, the following will be used as delimiters: space [ ], double quote [ " ], comma [ , ], parentheses [ () ], star [ * ].</p>
</td>
</tr>
</table></section>
<section>
<title>Usage</title>
<p>Use the TOKENIZE function to split a string of words (all words in a single tuple) into a bag of words (each word in a single tuple). </p>
</section>
<section>
<title>Example</title>
<p>In this example the strings in each row are split.</p>
<source>
A = LOAD 'data' AS (f1:chararray);
DUMP A;
(Here is the first string.)
(Here is the second string.)
(Here is the third string.)
X = FOREACH A GENERATE TOKENIZE(f1);
DUMP X;
({(Here),(is),(the),(first),(string.)})
({(Here),(is),(the),(second),(string.)})
({(Here),(is),(the),(third),(string.)})
</source>
<p>In this example a field delimiter is specified.</p>
<source>
A = LOAD 'data' AS (f1:chararray);
B = FOREACH A GENERATE TOKENIZE (f1,'||');
DUMP B;
</source>
</section></section></section>
<!-- ======================================================================== -->
<section id="load-store-functions">
<title>Load/Store Functions</title>
<p>Load/store functions determine how data goes into Pig and comes out of Pig.
Pig provides a set of built-in load/store functions, described in the sections below.
You can also write your own load/store functions (see <a href="udf.html">User Defined Functions</a>).</p>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="handling-compression">
<title>Handling Compression</title>
<p>Support for compression is determined by the load/store function. PigStorage and TextLoader support gzip and bzip compression for both read (load) and write (store). BinStorage does not support compression.</p>
<p>To work with gzip compressed files, input/output files need to have a .gz extension. Gzipped files cannot be split across multiple maps; this means that the number of maps created is equal to the number of part files in the input location.</p>
<source>
A = load 'myinput.gz';
store A into 'myoutput.gz';
</source>
<p>To work with bzip compressed files, the input/output files need to have a .bz or .bz2 extension. Because the compression is block-oriented, bzipped files can be split across multiple maps.</p>
<source>
A = load 'myinput.bz';
store A into 'myoutput.bz';
</source>
<p>Note: PigStorage and TextLoader correctly read compressed files as long as they are NOT CONCATENATED bz/bz2 FILES generated in this manner: </p>
<ul>
<li>
<p>cat *.bz > text/concat.bz </p>
</li>
<li>
<p>cat *.bz2 > text/concat.bz2</p>
</li>
</ul>
<p></p>
<p>If you use concatenated bzip files with your Pig jobs, you will NOT see a failure but the results will be INCORRECT.</p>
<p></p>
</section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="binstorage">
<title>BinStorage</title>
<p>Loads and stores data in machine-readable format.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>BinStorage()        </p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>none</p>
</td>
<td>
<p>no parameters</p>
</td>
</tr>
</table></section>
<section>
<title>Usage</title>
<p>Pig uses BinStorage to load and store the temporary data that is generated between multiple MapReduce jobs.</p>
<ul>
<li>BinStorage works with data that is represented on disk in machine-readable format.
BinStorage does NOT support <a href="#handling-compression">compression</a>.</li>
<li>BinStorage supports multiple locations (files, directories, globs) as input.</li>
</ul>
<p></p>
<p>Occasionally, users use BinStorage to store their own data. However, because BinStorage is a proprietary binary format, the original data is never in BinStorage - it is always a derivation of some other data.</p>
<p>We have seen several examples of users doing something like this:</p>
<source>
a = load 'b.txt' as (id, f);
b = group a by id;
store b into 'g' using BinStorage();
</source>
<p>And then later:</p>
<source>
a = load 'g/part*' using BinStorage() as (id, d:bag{t:(v, s)});
b = foreach a generate (double)id, flatten(d);
dump b;
</source>
<p>There is a problem with this sequence of events. The first script does not define data types and, as the result, the data is stored as a bytearray and a bag with a tuple that contains two bytearrays. The second script attempts to cast the bytearray to double; however, since the data originated from a different loader, it has no way to know the format of the bytearray or how to cast it to a different type. To solve this problem, Pig:</p>
<ul>
<li>Sends an error message when the second script is executed: "ERROR 1118: Cannot cast bytes loaded from BinStorage. Please provide a custom converter."</li>
<li id="custom-converter">Allows you to use a custom converter to perform the casting. <br></br>
<source>
a = load 'g/part*' using BinStorage('Utf8StorageConverter') as (id, d:bag{t:(v, s)});
b = foreach a generate (double)id, flatten(d);
dump b;
</source>
</li>
</ul>
</section>
<section>
<title>Examples</title>
<p>In this example BinStorage is used with the LOAD and STORE functions.</p>
<source>
A = LOAD 'data' USING BinStorage();
STORE X into 'output' USING BinStorage();
</source>
<p>In this example BinStorage is used to load multiple locations.</p>
<source>
A = LOAD 'input1.bin, input2.bin' USING BinStorage();
</source>
<p>BinStorage does not track data lineage. When Pig uses BinStorage to move data between MapReduce jobs, Pig can figure out the correct cast function to use and apply it. However, as shown in the example below, when you store data using BinStorage and then use a separate Pig Latin script to read the data (thus losing the type information), it is your responsibility to correctly cast the data before storing it using BinStorage.
</p>
<source>
raw = load 'sampledata' using BinStorage() as (col1,col2, col3);
--filter out null columns
A = filter raw by col1#'bcookie' is not null;
B = foreach A generate col1#'bcookie' as reqcolumn;
describe B;
--B: {reqcolumn: bytearray}
X = limit B 5;
dump X;
(36co9b55onr8s)
(36co9b55onr8s)
(36hilul5oo1q1)
(36hilul5oo1q1)
(36l4cj15ooa8a)
B = foreach A generate (chararray)col1#'bcookie' as convertedcol;
describe B;
--B: {convertedcol: chararray}
X = limit B 5;
dump X;
()
()
()
()
()
</source>
</section>
</section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="jsonloadstore">
<title>JsonLoader, JsonStorage</title>
<p>Load or store JSON data.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>JsonLoader( ['schema'] ) </p>
</td>
</tr>
<tr>
<td>
<p>JsonStorage( ) </p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>schema</p>
</td>
<td>
<p>An optional Pig schema, in single quotes.</p>
</td>
</tr>
</table></section>
<section>
<title>Usage</title>
<p>Use JsonLoader to load JSON data. </p>
<p>Use JsonStorage to store JSON data.</p>
<p>Note that there is no concept of a delimiter in JsonLoader or JsonStorage; the data is encoded in standard JSON format. JsonLoader optionally takes a schema as its constructor argument.</p>
</section>
<section>
<title>Examples</title>
<p>In this example data is loaded with a schema. </p>
<source>
a = load 'a.json' using JsonLoader('a0:int,a1:{(a10:int,a11:chararray)},a2:(a20:double,a21:bytearray),a3:[chararray]');
</source>
<p>In this example data is loaded without a schema; it assumes there is a .pig_schema (produced by JsonStorage) in the input directory. </p>
<source>
a = load 'a.json' using JsonLoader();
</source>
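<p>In this example data is stored as JSON. JsonStorage takes no arguments; it also writes a .pig_schema file to the output directory, which a later JsonLoader() can use.</p>
<source>
store a into 'output' using JsonStorage();
</source>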
</section></section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="pigdump">
<title>PigDump</title>
<p>Stores data in UTF-8 format.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>PigDump()        </p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>none</p>
</td>
<td>
<p>no parameters</p>
</td>
</tr>
</table></section>
<section>
<title>Usage</title>
<p>PigDump stores data as tuples in human-readable UTF-8 format. </p></section>
<section>
<title>Example</title>
<p>In this example PigDump is used with the STORE function.</p>
<source>
STORE X INTO 'output' USING PigDump();
</source>
</section></section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="pigstorage">
<title>PigStorage</title>
<p>Loads and stores data as structured text files.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>PigStorage( [field_delimiter] , ['options'] ) </p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p id="field-delimiter">field_delimiter</p>
</td>
<td>
<p>The default field delimiter is tab ('\t'). </p>
<p>You can specify other characters as field delimiters; however, be sure to encase the characters in single quotes.</p>
</td>
</tr>
<tr>
<td>
<p id="pigstorage-options">'options'</p>
</td>
<td>
<p>A string that contains space-separated options ('optionA optionB optionC')</p>
<p>Currently supported options are:</p>
<ul>
<li>('schema') - Stores the schema of the relation using a hidden JSON file.</li>
<li>('noschema') - Ignores a stored schema during the load.</li>
<li>('tagsource') - (deprecated; use tagPath instead) Adds as the first column the input file of the record.</li>
<li>('tagPath') - Adds as the first column the input path of the record.</li>
<li>('tagFile') - Adds as the first column the input file name of the record.</li>
</ul>
</td>
</tr>
</table></section>
<section>
<title>Usage</title>
<p>PigStorage is the default function used by Pig to load/store the data. PigStorage supports structured text files (in human-readable UTF-8 format) in compressed or uncompressed form (see <a href="#handling-compression">Handling Compression</a>). All Pig <a href="basic.html#data-types">data types</a> (both simple and complex) can be read/written using this function. The input data to the load can be a file, a directory or a glob.</p>
<p><strong>Load/Store Statements</strong></p>
<p>Load statements – PigStorage expects data to be formatted using field delimiters, either the tab character ('\t') or other specified character.</p>
<p>Store statements – PigStorage outputs data using field delimiters, either the tab character ('\t') or other specified character, and the line feed record delimiter ('\n'). </p>
<p><strong>Field/Record Delimiters</strong></p>
<p>Field Delimiters – For load and store statements the default field delimiter is the tab character ('\t'). You can use other characters as field delimiters, but separators such as ^A or Ctrl-A should be represented in Unicode (\u0001) using UTF-16 encoding (see Wikipedia <a href="http://en.wikipedia.org/wiki/ASCII">ASCII</a>, <a href="http://en.wikipedia.org/wiki/Unicode">Unicode</a>, and <a href="http://en.wikipedia.org/wiki/UTF-16">UTF-16</a>).</p>
<p>Record Delimiters – For load statements Pig interprets the line feed ( '\n' ), carriage return ( '\r' or CTRL-M) and combined CR + LF ( '\r\n' ) characters as record delimiters (do not use these characters as field delimiters). For store statements Pig uses the line feed ('\n') character as the record delimiter.</p>
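<p>For example, to load data whose fields are separated by the Ctrl-A character, the delimiter can be given in Unicode escape form:</p>
<source>
A = LOAD 'data' USING PigStorage('\u0001');
</source>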
<p><strong>Schemas</strong></p>
<p>If the schema option is specified, a hidden ".pig_schema" file is created in the output directory when storing data. It is used by PigStorage (with or without -schema) during loading to determine the field names and types of the data without the need for the user to explicitly provide the schema in an AS clause, unless <code>noschema</code> is specified. No attempt is made to merge conflicting schemas during loading; the first schema encountered during a file system scan is used. </p>
<p>Additionally, if the schema option is specified, a ".pig_headers" file is created in the output directory. This file simply lists the delimited aliases. This is intended to make export to tools that can read files with header lines easier (just cat the header to your data). </p>
<p>If the schema option is NOT specified, a schema will not be written when storing data.</p>
<p>If the noschema option is NOT specified, and a schema is found, it gets loaded when loading data.</p>
<p>Note that regardless of whether or not you store the schema, you always need to specify the correct delimiter to read your data. If you store using delimiter "#" and then load using the default delimiter, your data will not be parsed correctly.</p>
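<p>As a sketch of the schema option (file and alias names here are illustrative), storing with the schema option writes the hidden schema file, so a later load can omit the AS clause:</p>
<source>
STORE A INTO 'output' USING PigStorage('\t', '-schema');
B = LOAD 'output' USING PigStorage('\t');
DESCRIBE B;  -- field names and types come from the hidden .pig_schema file
</source>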
<p><strong>Record Provenance</strong></p>
<p>If the tagPath or tagFile option is specified, PigStorage will add a pseudo-column, INPUT_FILE_PATH or INPUT_FILE_NAME respectively, to the beginning of each record. As the names suggest, it holds the path or name of the input file containing that particular record. Note that tagsource is deprecated.</p>
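<p>For example (the input path here is hypothetical):</p>
<source>
A = LOAD 'input' USING PigStorage(',', '-tagPath');
-- each tuple now begins with the INPUT_FILE_PATH pseudo-column
</source>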
<p><strong>Complex Data Types</strong></p>
<p>The formats for complex data types are shown here:</p>
<ul>
<li><a href="basic.html#tuple">Tuple</a>: enclosed by (), items separated by ","
<ul>
<li>Non-empty tuple: (item1,item2,item3)</li>
<li>Empty tuple is valid: ()</li>
</ul>
</li>
<li><a href="basic.html#bag">Bag</a>: enclosed by {}, tuples separated by ","
<ul>
<li>Non-empty bag: {(tuple1),(tuple2),(tuple3)}</li>
<li>Empty bag is valid: {}</li>
</ul>
</li>
<li><a href="basic.html#map">Map</a>: enclosed by [], items separated by ",", key and value separated by "#"
<ul>
<li>Non-empty map: [key1#value1,key2#value2]</li>
<li>Empty map is valid: []</li>
</ul>
</li>
</ul>
<p>If the load statement specifies a schema, Pig will convert the complex types according to the schema. If conversion fails, the affected item will be null (see <a href="basic.html#nulls">Nulls and Pig Latin</a>). </p>
</section>
<section>
<title>Examples</title>
<p>In this example PigStorage expects input.txt to contain tab-separated fields and newline-separated records. The statements are equivalent.</p>
<source>
A = LOAD 'student' USING PigStorage('\t') AS (name: chararray, age:int, gpa: float);
A = LOAD 'student' AS (name: chararray, age:int, gpa: float);
</source>
<p>In this example PigStorage stores the contents of X into files with fields that are delimited with an asterisk ( * ). The STORE statement specifies that the files will be located in a directory named output and that the files will be named part-nnnnn (for example, part-00000).</p>
<source>
STORE X INTO 'output' USING PigStorage('*');
</source>
<p>In this example, PigStorage loads data with a complex data type: a bag of tuples, each containing a map and a double.</p>
<source>
a = load '1.txt' as (a0:{t:(m:map[int],d:double)});
{([foo#1,bar#2],34.0),([white#3,yellow#4],45.0)} : valid
{([foo#badint],baddouble)} : conversion fail for badint/baddouble, get {([foo#],)}
{} : valid, empty bag
</source>
</section>
</section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="textloader">
<title>TextLoader</title>
<p>Loads unstructured data in UTF-8 format.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>TextLoader()</p>
</td>
</tr>
</table>
</section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>none</p>
</td>
<td>
<p>no parameters</p>
</td>
</tr>
</table></section>
<section>
<title>Usage</title>
<p>TextLoader works with unstructured data in UTF8 format. Each resulting tuple contains a single field with one line of input text. TextLoader also supports <a href="#handling-compression">compression</a>.</p>
<p>Currently, TextLoader support for compression is limited.</p>
<p>TextLoader cannot be used to store data.</p>
</section>
<section>
<title>Example</title>
<p>In this example TextLoader is used with the LOAD function.</p>
<source>
A = LOAD 'data' USING TextLoader();
</source>
</section></section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="HBaseStorage">
<title>HBaseStorage</title>
<p>Loads and stores data from an HBase table.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>HBaseStorage('columns', ['options'])</p>
</td>
</tr>
</table>
</section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>columns</p>
</td>
<td>
<p>A list of qualified HBase columns to read data from or store data to.
The column family name and column qualifier are separated by a colon (:).
Only the columns used in the Pig script need to be specified. Columns are specified
in one of three different ways as described below.</p>
<ul>
<li>Explicitly specify a column family and column qualifier (e.g., user_info:id). This
will produce a scalar in the resultant tuple.</li>
<li>Specify a column family and a portion of column qualifier name as a prefix followed
by an asterisk (i.e., user_info:address_*). This approach is used to read one or
more columns from the same column family with a matching descriptor prefix.
The datatype for this field will be a map of column descriptor name to field value.
Note that combining this style of prefix with a long list of fully qualified
column descriptor names could cause performance degradation on the HBase scan.
This will produce a Pig map in the resultant tuple with column descriptors as keys.</li>
<li>Specify all the columns of a column family using the column family name followed
by an asterisk (i.e., user_info:*). This will produce a Pig map in the resultant
tuple with column descriptors as keys.</li>
</ul>
</td>
</tr>
<tr>
<td>
<p>'options'</p>
</td>
<td>
<p>A string that contains space-separated options (&lsquo;-optionA=valueA -optionB=valueB -optionC=valueC&rsquo;)</p>
<p>Currently supported options are:</p>
<ul>
<li>-loadKey=(true|false) Load the row key as the first value in every tuple
returned from HBase (default=false)</li>
<li>-gt=minKeyVal Return rows with a rowKey greater than minKeyVal</li>
<li>-lt=maxKeyVal Return rows with a rowKey less than maxKeyVal</li>
<li>-regex=regex Return rows with a rowKey that matches this regex</li>
<li>-gte=minKeyVal Return rows with a rowKey greater than or equal to minKeyVal</li>
<li>-lte=maxKeyVal Return rows with a rowKey less than or equal to maxKeyVal</li>
<li>-limit=numRowsPerRegion Max number of rows to retrieve per region</li>
<li>-caching=numRows Number of rows to cache (faster scans, more memory)</li>
<li>-delim=delimiter Column delimiter in columns list (default is whitespace)</li>
<li>-ignoreWhitespace=(true|false) When delim is set to something other than
whitespace, ignore spaces when parsing column list (default=true)</li>
<li>-caster=(HBaseBinaryConverter|Utf8StorageConverter) Class name of Caster to use
to convert values (default=Utf8StorageConverter). The default caster can be
overridden with the pig.hbase.caster config param. Casters must implement LoadStoreCaster.</li>
<li>-noWAL=(true|false) During storage, sets the write ahead to false for faster
loading into HBase (default=false). To be used with extreme caution since this
could result in data loss (see <a href="http://hbase.apache.org/book.html#perf.hbase.client.putwal">http://hbase.apache.org/book.html#perf.hbase.client.putwal</a>).</li>
<li>-minTimestamp=timestamp Return cell values that have a creation timestamp
greater or equal to this value</li>
<li>-maxTimestamp=timestamp Return cell values that have a creation timestamp
less than this value</li>
<li>-timestamp=timestamp Return cell values that have a creation timestamp equal to
this value</li>
<li>-includeTimestamp=(true|false) On store, include the timestamp after the rowkey (rowkey, timestamp, ...)</li>
<li>-includeTombstone=(true|false) On store, include a tombstone marker after the rowkey and the timestamp, if included (rowkey, [timestamp,] tombstone, ...)</li>
</ul>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>HBaseStorage stores and loads data from HBase. The function takes two arguments. The first
argument is a space-separated list of columns. The second, optional argument is a
space-separated list of options. Column syntax and available options are listed above.
Note that HBaseStorage always disables split combination.</p>
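<p>For example, the key-range and caching options from the table above can be combined in the options string (the table name and row-key values here are hypothetical):</p>
<source>
raw = LOAD 'hbase://SomeTableName'
      USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
      'info:first_name', '-loadKey=true -gte=user_0100 -lt=user_0200 -caching=500');
</source>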
</section>
<section>
<title>Load Example</title>
<p>In this example HBaseStorage is used with the LOAD function with an explicit schema.</p>
<source>
raw = LOAD 'hbase://SomeTableName'
USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
'info:first_name info:last_name tags:work_* info:*', '-loadKey=true -limit=5') AS
(id:bytearray, first_name:chararray, last_name:chararray, tags_map:map[], info_map:map[]);
</source>
<p>The datatypes of the columns are declared with the "AS" clause. The first_name and last_name
columns are specified as fully qualified column names with a chararray datatype. The third
specification of tags:work_* requests a set of columns in the tags column family that begin
with "work_". There can be zero, one or more columns of that type in the HBase table. The
type is specified as tags_map:map[]. This indicates that the set of column values returned
will be accessed as a map, where the key is the column name and the value is the cell value
of the column. The fourth column specification is also a map of column descriptors to cell
values.</p>
<p>When the type of the column is specified as a map in the "AS" clause, the map keys are the
column descriptor names and the data type is chararray. The datatype of the columns values can
be declared explicitly as shown in the examples below:</p>
<ul>
<li>tags_map[chararray] - In this case, the column values are all declared to be of type chararray</li>
<li>tags_map[int] - In this case, the column values are all declared to be of type int.</li>
</ul>
</section>
<section>
<title>Store Example</title>
<p>In this example HBaseStorage is used to store a relation into HBase.</p>
<source>
A = LOAD 'hdfs_users' AS (id:bytearray, first_name:chararray, last_name:chararray);
STORE A INTO 'hbase://users_table' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
'info:first_name info:last_name');
</source>
<p>In the example above relation A is loaded from HDFS and stored in HBase. Note that the schema
of relation A is a tuple of size 3, but only two column descriptor names are passed to the
HBaseStorage constructor. This is because the first entry in the tuple is used as the HBase
rowKey.</p>
</section>
</section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="AvroStorage">
<title>AvroStorage</title>
<p>Loads and stores data from Avro files.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>AvroStorage(['schema|record name'], ['options'])</p>
</td>
</tr>
</table>
</section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>schema</p>
</td>
<td>
<p>A JSON string specifying the Avro schema for the input. You may specify an explicit schema
when storing data or when loading data. When you manually provide a schema, Pig
will use the provided schema for serialization and deserialization. This means that
you can provide an explicit schema when saving data to simplify the output (for example
by removing nullable unions), or rename fields. This also means that you can provide
an explicit schema when reading data to only read a subset of the fields in each record.</p>
<p>See
<a href="http://avro.apache.org/docs/current/spec.html"> the Apache Avro Documentation</a>
for more details on how to specify a valid schema.</p>
</td>
</tr>
<tr>
<td>
<p>record name</p>
</td>
<td>
<p>When storing a bag of tuples with AvroStorage, if you do not want to specify
the full schema, you may specify the Avro record name instead. (AvroStorage will
determine that the argument isn't a valid schema definition and use it as the
record name instead.)</p>
</td>
</tr>
<tr>
<td>
<p>'options'</p>
</td>
<td>
<p>A string that contains space-separated options (&lsquo;-optionA valueA -optionB valueB -optionC &rsquo;)</p>
<p>Currently supported options are:</p>
<ul>
<li>-namespace nameSpace or -n nameSpace Explicitly specify the namespace
field in Avro records when storing data</li>
<li>-schemafile schemaFile or -f schemaFile Specify the input (or output) schema from
an external file. Pig assumes that the file is located on the default filesystem,
but you may use an explicit URL to unambiguously specify the location. (For example, if
the data was on your local file system in /stuff/schemafile.avsc, you
could specify "-f file:///stuff/schemafile.avsc" to specify the location. If the
data was on HDFS under /yourdirectory/schemafile.avsc, you could specify
"-f hdfs:///yourdirectory/schemafile.avsc"). Pig expects this to be a
text file containing a valid Avro schema.</li>
<li>-examplefile exampleFile or -e exampleFile Specify the input (or output)
schema using another Avro file as an example. Pig assumes that the file is located on the default filesystem,
but you may use an explicit URL to specify the location. Pig
expects this to be an Avro data file.</li>
<li>-allowrecursive or -r Specify whether to allow recursive schema definitions (the
default is to throw an exception if Pig encounters a recursive schema). When
reading objects with recursive definitions, Pig will translate Avro records to
schema-less tuples; the Pig Schema for the object may not match the data exactly.</li>
<li>-doublecolons or -d Specify how to handle Pig schemas that contain double-colons
when writing data in Avro format. (When you join two bags in Pig, Pig will automatically
label the fields in the output Tuples with names that contain double-colons). If
you select this option, AvroStorage will translate names with double colons into
names with double underscores. </li>
</ul>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>AvroStorage stores and loads data from Avro files. Often, you can load and
store data using AvroStorage without knowing much about the Avro serialization format.
AvroStorage will attempt to automatically translate a Pig schema and Pig data to Avro data,
or Avro data to Pig data.</p>
<p>By default, when you use AvroStorage to load data, AvroStorage will use a depth-first search to
find a valid Avro file on the input path, then use the schema from that file to load the
data. When you use AvroStorage to store data, AvroStorage will attempt to translate the
Pig schema to an equivalent Avro schema. You can manually specify the schema by providing
an explicit schema in Pig, loading a schema from an external schema file, or explicitly telling
Pig to read the schema from a specific Avro file.</p>
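<p>For example, to load data using a schema kept in an external schema file, pass the -f option in the options string and leave the schema argument empty (the data and schema paths here are hypothetical):</p>
<source>
a = LOAD 'mydata' USING AvroStorage('', '-f hdfs:///schemas/myschema.avsc');
</source>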
<p>To compress your output with AvroStorage, you need to use the correct Avro properties for compression.
For example, to enable compression using deflate level 5, you would specify</p>
<source>
SET avro.output.codec 'deflate'
SET avro.mapred.deflate.level 5
</source>
<p>Valid values for avro.output.codec include deflate, snappy, and null.</p>
<p>There are a few key differences between Avro and Pig data, and in some cases
it helps to understand the differences between the Avro and Pig data models.
Before writing Pig data to Avro (or creating Avro files to use in Pig), keep in
mind that there might not be an equivalent Avro Schema for every Pig Schema (and
vice versa):</p>
<ul>
<li><strong>Recursive schema definitions</strong> You cannot define schemas recursively in Pig,
but you can define schemas recursively in Avro.</li>
<li><strong>Allowed characters</strong> Pig schemas may sometimes contain characters like colons (":")
that are illegal in Avro names.</li>
<li><strong>Unions</strong> In Avro, you can define an object that may be one of several different
types (including complex types such as records). In Pig, you cannot.</li>
<li><strong>Enums</strong> Avro allows you to define enums to efficiently and abstractly
represent categorical variables, but Pig does not.</li>
<li><strong>Fixed Length Byte Arrays</strong> Avro allows you to define fixed length byte arrays,
but Pig does not.</li>
<li><strong>Nullable values</strong> In Pig, all types are nullable. In Avro, they are not. </li>
</ul>
<p>Here is how AvroStorage translates Pig values to Avro:</p>
<table>
<tr>
<td></td>
<td>Original Pig Type</td>
<td>Translated Avro Type</td>
</tr>
<tr>
<td>Integers</td>
<td>int</td>
<td>["int","null"]</td>
</tr>
<tr>
<td>Longs</td>
<td>long</td>
<td>["long,"null"]</td>
</tr>
<tr>
<td>Floats</td>
<td>float</td>
<td>["float","null"]</td>
</tr>
<tr>
<td>Doubles</td>
<td>double</td>
<td>["double","null"]</td>
</tr>
<tr>
<td>Strings</td>
<td>chararray</td>
<td>["string","null"]</td>
</tr>
<tr>
<td>Bytes</td>
<td>bytearray</td>
<td>["bytes","null"]</td>
</tr>
<tr>
<td>Booleans</td>
<td>boolean</td>
<td>["boolean","null"]</td>
</tr>
<tr>
<td>Tuples</td>
<td>tuple</td>
<td>The Pig tuple schema will be translated to a union of an Avro record with an equivalent
schema and null.</td>
</tr>
<tr>
<td>Bags of Tuples</td>
<td>bag</td>
<td>The Pig bag schema will be translated to a union of an array of records with an equivalent
schema and null.</td>
</tr>
<tr>
<td>Maps</td>
<td>map</td>
<td>The Pig map schema will be translated to a union of a map of records with an equivalent
schema and null.</td>
</tr>
</table>
<p>Here is how AvroStorage translates Avro values to Pig:</p>
<table>
<tr>
<td></td>
<td>Original Avro Types</td>
<td>Translated Pig Type</td>
</tr>
<tr>
<td>Integers</td>
<td>["int","null"] or "int"</td>
<td>int</td>
</tr>
<tr>
<td>Longs</td>
<td>["long,"null"] or "long"</td>
<td>long</td>
</tr>
<tr>
<td>Floats</td>
<td>["float","null"] or "float"</td>
<td>float</td>
</tr>
<tr>
<td>Doubles</td>
<td>["double","null"] or "double"</td>
<td>double</td>
</tr>
<tr>
<td>Strings</td>
<td>["string","null"] or "string"</td>
<td>chararray</td>
</tr>
<tr>
<td>Enums</td>
<td>Either an enum or a union of an enum and null</td>
<td>chararray</td>
</tr>
<tr>
<td>Bytes</td>
<td>["bytes","null"] or "bytes"</td>
<td>bytearray</td>
</tr>
<tr>
<td>Fixed</td>
<td>Either a fixed length byte array, or a union of a fixed length array and null</td>
<td>bytearray</td>
</tr>
<tr>
<td>Booleans</td>
<td>["boolean","null"] or "boolean"</td>
<td>boolean</td>
</tr>
<tr>
<td>Tuples</td>
<td>Either a record type, or a union of a record and null</td>
<td>tuple</td>
</tr>
<tr>
<td>Bags of Tuples</td>
<td>Either an array, or a union of an array and null</td>
<td>bag</td>
</tr>
<tr>
<td>Maps</td>
<td>Either a map, or a union of a map and null</td>
<td>map</td>
</tr>
</table>
<p> In many cases, AvroStorage will automatically translate your data correctly, and you will not
need to provide any additional information. But sometimes it may be convenient to
manually provide a schema to AvroStorage. See the example sections below for examples
of manually specifying a schema with AvroStorage.
</p>
</section>
<section>
<title>Load Examples</title>
<p>Suppose that you were provided with a file of Avro data (located in 'stuff')
with the following schema:</p>
<source>
{"type" : "record",
"name" : "stuff",
"fields" : [
{"name" : "label", "type" : "string"},
{"name" : "value", "type" : "int"},
{"name" : "marketingPlans", "type" : ["string", "bytearray", "null"]}
]
}
</source>
<p>Additionally, suppose that you don't need the value of the field "marketingPlans."
(That's a good thing, because AvroStorage doesn't know how to translate that Avro schema
to a Pig schema.) To load only the fields "label" and "value" into Pig, you can
manually specify the schema passed to AvroStorage:</p>
<source>
measurements = LOAD 'stuff' USING AvroStorage(
'{"type":"record","name":"measurement","fields":[{"name":"label","type":"string"},{"name":"value","type":"int"}]}'
);
</source>
</section>
<section>
<title>Store Examples</title>
<p>Suppose that you are saving a bag called measurements with the schema:</p>
<source>
measurements:{measurement:(label:chararray,value:int)}
</source>
<p>To store this bag into a file called "measurements", you can use a statement like:</p>
<source>
STORE measurements INTO 'measurements' USING AvroStorage('measurement');
</source>
<p>AvroStorage will translate this to the Avro schema</p>
<source>
{"type":"record",
"name":"measurement",
"fields" : [
{"name" : "label", "type" : ["string", "null"]},
{"name" : "value", "type" : ["int", "null"]}
]
}
</source>
<p>But suppose that you knew that the label and value fields would never be null. You could
define a more precise schema manually using a statement like:</p>
<source>
STORE measurements INTO 'measurements' USING AvroStorage(
'{"type":"record","name":"measurement","fields":[{"name":"label","type":"string"},{"name":"value","type":"int"}]}'
);
</source>
</section>
</section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="TrevniStorage">
<title>TrevniStorage</title>
<p>Loads and stores data from Trevni files.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>TrevniStorage(['schema|record name'], ['options'])</p>
</td>
</tr>
</table>
</section>
<p>Trevni is a column-oriented storage format that is part of the Apache Avro project. Trevni is
closely related to Avro.</p>
<p>Likewise, TrevniStorage is very closely related to AvroStorage, and shares the same options as
AvroStorage. See <a href="#AvroStorage">AvroStorage</a> for a detailed description of the
arguments for TrevniStorage.</p>
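<p>As a minimal sketch (the paths here are hypothetical), loading and storing Trevni data
looks just like the AvroStorage usage shown above:</p>
<source>
-- load a Trevni file and write it back out in Trevni format
measurements = LOAD 'measurements.trevni' USING TrevniStorage();
STORE measurements INTO 'measurements_out' USING TrevniStorage();
</source>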
</section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="AccumuloStorage">
<title>AccumuloStorage</title>
<p>Loads or stores data from an Accumulo table. The first element in a Tuple is equivalent to the "row"
from the Accumulo Key, while the columns in that row can be grouped in various static or wildcarded
ways. Basic wildcarding functionality exists to group column families/qualifiers into a Map for
LOADs, or to serialize a Map into some group of column families or qualifiers on STOREs.
</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>AccumuloStorage(['columns'[, 'options']])</p>
</td>
</tr>
</table>
</section>
<section>
<title>Arguments</title>
<table>
<tr>
<td>
<p>'columns'</p>
</td>
<td>
<p>A comma-separated list of "columns" to read data from or write data to.
Each of these columns can be considered one of three different types:
</p>
<ol>
<li>Literal</li>
<li>Column family prefix</li>
<li>Column qualifier prefix</li>
</ol>
<p><strong>Literal:</strong> this is the simplest specification
which is a colon-delimited string that maps to a column family and column
qualifier. This will read/write a simple scalar from/to Accumulo.
</p>
<p><strong>Column family prefix:</strong> When reading data, this
will fetch data from Accumulo Key-Values in the current row whose column family match the
given prefix. This will result in a Map being placed into the Tuple. When writing
data, a Map is also expected at the given offset in the Tuple whose Keys will be
appended to the column family prefix, an empty column qualifier is used, and the Map
value will be placed in the Accumulo Value. A column family prefix may also be a lone
asterisk (*), in which case the Map key will be equivalent to the Accumulo column family.
</p>
<p><strong>Column qualifier prefix:</strong> Similar to the column
family prefix except it operates on the column qualifier. On reads, Accumulo Key-Values
in the same row that match the given column family and column qualifier prefix will be
placed into a single Map. On writes, the provided column family from the column specification
will be used, the Map key will be appended to the column qualifier provided in the specification,
and the Map Value will be the Accumulo Value.
</p>
<p>When "columns" is not provided or is a blank String, it is treated equivalently to "*".
This is to say that when a column specification string is not provided, for reads, all columns
in the given Accumulo row will be placed into a single Map (with the Map keys being colon
delimited to preserve the column family/qualifier from Accumulo). For writes, the Map keys
will be placed into the column family and the column qualifier will be empty.
</p>
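<p>For example, the specification string below (the column names are hypothetical) combines all
three types: "location:zip" is a literal, "carrier*" is a column family prefix, and
"reviews:transportation*" is a column qualifier prefix:</p>
<source>
location:zip,carrier*,reviews:transportation*
</source>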
</td>
</tr>
<tr>
<td>
<p>'options'</p>
</td>
<td>
<p>A string that contains space-separated options ("optionA valueA -optionB valueB -optionC valueC")</p>
<p>The currently supported options are:</p>
<ul>
<li>(-c|--caster) LoadStoreCasterImpl An implementation of a LoadStoreCaster to use when serializing types into Accumulo,
usually AccumuloBinaryConverter or UTF8StringConverter; defaults to UTF8StringConverter.
</li>
</li>
<li>(-auths|--authorizations) auth1,auth2... A comma-separated list of Accumulo authorizations to use when reading
data from Accumulo. Defaults to the empty set of authorizations (none).
</li>
<li>(-s|--start) start_row The Accumulo row to begin reading from, inclusive</li>
<li>(-e|--end) end_row The Accumulo row to read until, inclusive</li>
<li>(-buff|--mutation-buffer-size) num_bytes The number of bytes to buffer when writing data to Accumulo. A higher
value requires more memory</li>
<li>(-wt|--write-threads) num_threads The number of threads used to write data to Accumulo.</li>
<li>(-ml|--max-latency) milliseconds Maximum time in milliseconds before data is flushed to Accumulo.</li>
<li>(-sep|--separator) str The separator character used when parsing the column specification, defaults to comma (,)</li>
<li>(-iw|--ignore-whitespace) (true|false) Should whitespace be stripped from the column specification, defaults to true</li>
</ul>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>AccumuloStorage has the functionality to store or fetch data from Accumulo. Its goal is to provide
a simple, widely applicable table schema compatible with Pig's API. Each Tuple contains some subset
of the columns stored within one row of the Accumulo table, which depends on the columns provided
as an argument to the function. If '*' is provided, all columns in the table will be returned. The
second argument provides options that control how data is read from or written to Accumulo.</p>
<p>When invoking Pig Scripts that use AccumuloStorage, it's important to ensure that Pig has the Accumulo
jars on its classpath. This is easily achieved using the ACCUMULO_HOME environment variable.
</p>
<source>
PIG_CLASSPATH="$ACCUMULO_HOME/lib/*:$PIG_CLASSPATH" pig my_script.pig
</source>
</section>
<section>
<title>Load Example</title>
<p>It is simple to fetch all columns for the airport codes that fall between Boston (BOS) and
San Francisco (SFO) and that can be viewed with the 'auth1' and/or 'auth2' Accumulo authorizations.</p>
<source>
raw = LOAD 'accumulo://airports?instance=accumulo&amp;user=root&amp;password=passwd&amp;zookeepers=localhost'
USING org.apache.pig.backend.hadoop.accumulo.AccumuloStorage(
'*', '-auths auth1,auth2 -s BOS -e SFO') AS
(code:chararray, all_columns:map[]);
</source>
<p>The datatypes of the columns are declared with the "AS" clause. In this example, the row key,
which is the unique airport code, is assigned to the "code" variable, while all of the other
columns are placed into the map. When there is a non-empty column qualifier, the key in that
map will contain a colon separating the portion of the key that came from the column family
from the portion that came from the column qualifier. The Accumulo Value is placed in the Map value.</p>
<p>Often, it is neither necessary nor desirable (for performance reasons) to fetch all columns.</p>
<source>
raw = LOAD 'accumulo://airports?instance=accumulo&amp;user=root&amp;password=passwd&amp;zookeepers=localhost'
USING org.apache.pig.backend.hadoop.accumulo.AccumuloStorage(
'name,building:num_terminals,carrier*,reviews:transportation*') AS
(code:chararray, name:bytearray, carrier_map:map[], transportation_reviews_map:map[]);
</source>
<p>An asterisk can be used when requesting columns to group a collection of columns into a single
Map instead of enumerating each column.</p>
</section>
<section>
<title>Store Example</title>
<p>Data can be easily stored into Accumulo.</p>
<source>
A = LOAD 'flights.txt' AS (id:chararray, carrier_name:chararray, src_airport:chararray, dest_airport:chararray, tail_number:int);
STORE A INTO 'accumulo://flights?instance=accumulo&amp;user=root&amp;password=passwd&amp;zookeepers=localhost' USING
org.apache.pig.backend.hadoop.accumulo.AccumuloStorage('carrier_name,src_airport,dest_airport,tail_number');
</source>
<p>Here, we read the file 'flights.txt' out of HDFS and store the results into the relation A.
We extract a unique ID for the flight, its source and destination airports, and the tail number
from the given file. When STOREing back into Accumulo, we specify the column specifications (in
this case, just column families). Note that only four columns are provided, because the first
element in the Tuple is used as the row in Accumulo.
</p>
</section>
</section>
<section id="OrcStorage">
<title>OrcStorage</title>
<p>Loads data from or stores data to Orc files.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>OrcStorage(['options'])</p>
</td>
</tr>
</table>
</section>
<section>
<title>Options</title>
<table>
<tr>
<td>
<p>A string that contains space-separated options (&lsquo;-optionA valueA -optionB valueB -optionC valueC&rsquo;). The current options apply only to the STORE operation, not to LOAD.</p>
<p>Currently supported options are:</p>
<ul>
<li>--stripeSize or -s Set the stripe size for the file. Default is 268435456 (256 MB).</li>
<li>--rowIndexStride or -r Set the distance between entries in the row index. Default is 10000.</li>
<li>--bufferSize or -b Set the size of the memory buffers used for compressing and storing the stripe in memory. Default is 262144 (256K).</li>
<li>--blockPadding or -p Sets whether the HDFS blocks are padded to prevent stripes from straddling blocks. Default is true.</li>
<li>--compress or -c Sets the generic compression that is used to compress the data. Valid codecs are: NONE, ZLIB, SNAPPY, LZO. Default is ZLIB.</li>
<li>--keepSingleFieldTuple or -k Sets whether to keep a Tuple(struct) schema inside a Bag(array) even if the tuple only contains a single field. Default is false.</li>
<li>--version or -v Sets the version of the file that will be written</li>
</ul>
</td>
</tr>
</table>
</section>
<section>
<title>Example</title>
<p>OrcStorage as a StoreFunc.</p>
<source>
A = LOAD 'student.txt' as (name:chararray, age:int, gpa:double);
store A into 'student.orc' using OrcStorage('-c SNAPPY'); -- store student.txt into student.orc with SNAPPY compression
</source>
<p>OrcStorage as a LoadFunc.</p>
<source>
A = LOAD 'student.orc' USING OrcStorage();
describe A; -- See the schema of student.orc
B = filter A by age &gt; 25 and gpa &lt; 3; -- filter condition will be pushed up to loader
dump B; -- dump the content of student.orc
</source>
</section>
<section>
<title>Data types</title>
<p>Most Orc data types have a one-to-one mapping to Pig data types. The exceptions are:</p>
<p>Loader side:</p>
<ul>
<li>Orc STRING/CHAR/VARCHAR all map to Pig chararray</li>
<li>Orc BYTE/BINARY all map to Pig bytearray</li>
<li>Orc TIMESTAMP/DATE all map to Pig datetime</li>
<li>Orc DECIMAL maps to Pig bigdecimal</li>
</ul>
<p>Storer side:</p>
<ul>
<li>Pig chararray maps to Orc STRING</li>
<li>Pig datetime maps to Orc TIMESTAMP</li>
<li>Pig bigdecimal/biginteger all map to Orc DECIMAL</li>
<li>Pig bytearray maps to Orc BINARY</li>
</ul>
</section>
<section>
<title>Predicate pushdown</title>
<p>If there is a filter statement right after OrcStorage, Pig will push the filter condition to the loader.
OrcStorage will entirely skip any file/stripe/row group that cannot satisfy the condition. For a file/stripe/row group
that contains data satisfying the filter condition, OrcStorage will load it, and Pig will evaluate the filter condition
again to remove any remaining records that do not satisfy it.</p>
<p>OrcStorage predicate pushdown currently supports all primitive data types but none of the complex data types. For example, a map condition
cannot be pushed into OrcStorage:</p>
<source>
A = LOAD 'student.orc' USING OrcStorage();
B = filter A by info#'age' &gt; 25; -- a map condition cannot be pushed to OrcStorage
dump B;
</source>
<p>Currently, the following expressions in a filter condition are supported by OrcStorage predicate pushdown: &gt;, &gt;=, &lt;, &lt;=, ==, !=, between, in, and, or, not. The unsupported expressions are: is null, is not null, matches.</p>
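<p>By contrast, a filter on primitive-typed fields, like the one below (reusing the age and gpa
fields from the earlier example), can be pushed to the loader:</p>
<source>
A = LOAD 'student.orc' USING OrcStorage();
B = filter A by age &gt;= 20 and gpa &gt; 3.5; -- primitive conditions are pushed to OrcStorage
dump B;
</source>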
</section>
</section>
</section>
<!-- ======================================================== -->
<!-- ======================================================== -->
<!-- Math Functions -->
<section id="math-functions">
<title>Math Functions</title>
<p>For general information about these functions, see the <a href="http://docs.oracle.com/javase/6/docs/api/">Java API Specification</a>,
<a href="http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html">Class Math</a>. Note the following:</p>
<ul>
<li>
<p>Pig function names are case sensitive and UPPER CASE.</p>
</li>
<li>
<p>Pig may process results differently than as stated in the Java API Specification:</p>
<ul>
<li>
<p>If the result value is null or empty, Pig returns null.</p>
</li>
<li>
<p>If the result value is not a number (NaN), Pig returns null.</p>
</li>
<li>
<p>If Pig is unable to process the expression, Pig throws an exception.</p>
</li>
</ul>
</li>
</ul>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="abs">
<title>ABS</title>
<p>Returns the absolute value of an expression.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>ABS(expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>Any expression whose result is type int, long, float, or double.</p>
</td>
</tr>
</table></section>
<section>
<title>Usage</title>
<p>
Use the ABS function to return the absolute value of an expression.
If the value is not negative (x &#8805; 0), it is returned unchanged. If the value is negative (x &lt; 0), its negation is returned.
</p>
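<p>For example, assuming a relation A with a numeric field x:</p>
<source>
A = LOAD 'data' AS (x:int);
B = FOREACH A GENERATE ABS(x); -- e.g., -5 becomes 5
</source>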
</section>
</section>
<!-- ++++++++++++++++++++++++++++++++++++++++++++++ -->
<section id="acos">
<title>ACOS</title>
<p>Returns the arc cosine of an expression.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>ACOS(expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression whose result is type double.</p>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>
Use the ACOS function to return the arc cosine of an expression.
</p>
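<p>For example, assuming a relation A with a double field x in the range [-1, 1]:</p>
<source>
A = LOAD 'data' AS (x:double);
B = FOREACH A GENERATE ACOS(x); -- returns the angle in radians
</source>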
</section>
</section>
<!-- ======================================================== -->
<section id="asin">
<title>ASIN</title>
<p>Returns the arc sine of an expression.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>ASIN(expression)</p>
</td>
</tr>
</table>
</section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression whose result is type double.</p>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>
Use the ASIN function to return the arc sine of an expression.
</p>
</section>
</section>
<!-- ======================================================== -->
<section id="atan">
<title>ATAN</title>
<p>Returns the arc tangent of an expression.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>ATAN(expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression whose result is type double.</p>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>
Use the ATAN function to return the arc tangent of an expression.
</p>
</section>
</section>
<!-- ======================================================== -->
<section id="cbrt">
<title>CBRT</title>
<p>Returns the cube root of an expression.</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>CBRT(expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression whose result is type double.</p>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>
Use the CBRT function to return the cube root of an expression.
</p>
</section>
</section>
<!-- ======================================================== -->
<section id="ceil">
<title>CEIL</title>
<p>Returns the value of an expression rounded up to the nearest integer.
</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>CEIL(expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression whose result is type double.</p>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>
Use the CEIL function to return the value of an expression rounded up to the nearest integer.
This function never decreases the result value.
</p>
<table>
<tr>
<td>
<p>x</p>
</td>
<td>
<p>CEIL(x)</p>
</td>
</tr>
<tr>
<td>
<p> 4.6</p>
</td>
<td>
<p> 5</p>
</td>
</tr>
<tr>
<td>
<p> 3.5</p>
</td>
<td>
<p> 4</p>
</td>
</tr>
<tr>
<td>
<p> 2.4</p>
</td>
<td>
<p> 3</p>
</td>
</tr>
<tr>
<td>
<p>1.0</p>
</td>
<td>
<p>1</p>
</td>
</tr>
<tr>
<td>
<p>-1.0</p>
</td>
<td>
<p>-1</p>
</td>
</tr>
<tr>
<td>
<p>-2.4</p>
</td>
<td>
<p>-2</p>
</td>
</tr>
<tr>
<td>
<p>-3.5</p>
</td>
<td>
<p>-3</p>
</td>
</tr>
<tr>
<td>
<p>-4.6</p>
</td>
<td>
<p>-4</p>
</td>
</tr>
</table>
</section>
</section>
<!-- ======================================================== -->
<section id="cos">
<title>COS</title>
<p>Returns the trigonometric cosine of an expression.
</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>COS(expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression (angle) whose result is type double.</p>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>
Use the COS function to return the trigonometric cosine of an expression.
</p>
</section>
</section>
<!-- ======================================================== -->
<section id="cosh">
<title>COSH</title>
<p>Returns the hyperbolic cosine of an expression.
</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>COSH(expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression whose result is type double.</p>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>
Use the COSH function to return the hyperbolic cosine of an expression.
</p>
</section>
</section>
<!-- ======================================================== -->
<section id="exp">
<title>EXP</title>
<p>Returns Euler's number e raised to the power of x.
</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>EXP(expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression whose result is type double.</p>
</td>
</tr>
</table>
</section>
<section>
<title>Usage</title>
<p>
Use the EXP function to return the value of Euler's number e raised to the power of x (where x is the result value of the expression).
</p>
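<p>For example, assuming a relation A with a double field x:</p>
<source>
A = LOAD 'data' AS (x:double);
B = FOREACH A GENERATE EXP(x); -- e raised to the power of x
</source>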
</section>
</section>
<!-- ======================================================== -->
<section id="floor">
<title>FLOOR</title>
<p>Returns the value of an expression rounded down to the nearest integer.
</p>
<section>
<title>Syntax</title>
<table>
<tr>
<td>
<p>FLOOR(expression)</p>
</td>
</tr>
</table></section>
<section>
<title>Terms</title>
<table>
<tr>
<td>
<p>expression</p>
</td>
<td>
<p>An expression whose result is type double.</p>
</td>