---
title: "User-defined Functions"
nav-parent_id: tableapi
nav-pos: 50
---

User-defined functions are an important feature, because they significantly extend the expressiveness of queries.

* This will be replaced by the TOC
{:toc}

## Register User-Defined Functions

In most cases, a user-defined function must be registered before it can be used in a query. It is not necessary to register functions for the Scala Table API.

Functions are registered at the TableEnvironment by calling a registerFunction() method. When a user-defined function is registered, it is inserted into the function catalog of the TableEnvironment such that the Table API or SQL parser can recognize and properly translate it.

Please find detailed examples of how to register and how to call each type of user-defined function (ScalarFunction, TableFunction, and AggregateFunction) in the following subsections.

{% top %}

## Scalar Functions

If a required scalar function is not contained in the built-in functions, it is possible to define custom, user-defined scalar functions for both the Table API and SQL. A user-defined scalar function maps zero, one, or multiple scalar values to a new scalar value.

In order to define a scalar function one has to extend the base class ScalarFunction in org.apache.flink.table.functions and implement (one or more) evaluation methods. The behavior of a scalar function is determined by the evaluation method. An evaluation method must be declared public and named eval. The parameter types and return type of the evaluation method also determine the parameter and return types of the scalar function. Evaluation methods can be overloaded by implementing multiple methods named eval. They can also support variable arguments, such as eval(String... strs).
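
For instance, a minimal sketch of an overloaded scalar function with a variable-argument evaluation method could look as follows (the class name ConcatFunction and the concrete signatures are purely illustrative):

{% highlight java %}
import org.apache.flink.table.functions.ScalarFunction;

// Illustrative only: one scalar function exposing several eval() signatures.
public class ConcatFunction extends ScalarFunction {

  // overloaded evaluation method for a single integer
  public String eval(int i) {
    return String.valueOf(i);
  }

  // variable-argument evaluation method, as mentioned above
  public String eval(String... strs) {
    return String.join("|", strs);
  }
}
{% endhighlight %}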

The following example shows how to define your own hash code function, register it in the TableEnvironment, and call it in a query. Note that you can configure your scalar function via a constructor before it is registered:

{% highlight java %}
public class HashCode extends ScalarFunction {

  private int factor;

  public HashCode(int factor) {
    this.factor = factor;
  }

  public int eval(String s) {
    return s.hashCode() * factor;
  }
}

BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

// register the function
tableEnv.registerFunction("hashCode", new HashCode(10));

// use the function in Java Table API
myTable.select("string, string.hashCode(), hashCode(string)");

// use the function in SQL API
tableEnv.sqlQuery("SELECT string, HASHCODE(string) FROM MyTable");
{% endhighlight %}

{% highlight scala %}
val tableEnv = TableEnvironment.getTableEnvironment(env)

// use the function in Scala Table API
val hashCode = new HashCode(10)
myTable.select('string, hashCode('string))

// register and use the function in SQL
tableEnv.registerFunction("hashCode", new HashCode(10))
tableEnv.sqlQuery("SELECT string, HASHCODE(string) FROM MyTable")
{% endhighlight %}

By default the result type of an evaluation method is determined by Flink's type extraction facilities. This is sufficient for basic types or simple POJOs but might be wrong for more complex, custom, or composite types. In these cases, the DataType of the result can be manually defined by overriding ScalarFunction#getResultType().

The following advanced example shows a function that takes the internal timestamp representation and also returns the internal timestamp representation as a long value. By overriding ScalarFunction#getResultType() we define that the returned long value should be interpreted as DataTypes.TIMESTAMP by the code generation.

{% highlight java %}
public static class TimestampModifier extends ScalarFunction {

  // eval(long): long implementation omitted here

  public DataType getResultType(Object[] arguments, Class[] argTypes) {
    return DataTypes.TIMESTAMP;
  }
}
{% endhighlight %}

{% highlight scala %}
object TimestampModifier extends ScalarFunction {

  // eval(t: Long): Long implementation omitted here

  override def getResultType(arguments: Array[Object], signature: Array[Class[_]]): DataType = {
    DataTypes.TIMESTAMP
  }
}
{% endhighlight %}

{% top %}

## Table Functions

Similar to a user-defined scalar function, a user-defined table function takes zero, one, or multiple scalar values as input parameters. However in contrast to a scalar function, it can return an arbitrary number of rows as output instead of a single value. The returned rows may consist of one or more columns.

In order to define a table function one has to extend the base class TableFunction in org.apache.flink.table.functions and implement (one or more) evaluation methods. The behavior of a table function is determined by its evaluation methods. An evaluation method must be declared public and named eval. The TableFunction can be overloaded by implementing multiple methods named eval. The parameter types of the evaluation methods determine all valid parameters of the table function. Evaluation methods can also support variable arguments, such as eval(String... strs). The type of the returned table is determined by the generic type of TableFunction. Evaluation methods emit output rows using the protected collect(T) method.

In the Table API, a table function is used with .join(Table) or .leftOuterJoin(Table). The join operator (cross) joins each row from the outer table (table on the left of the operator) with all rows produced by the table-valued function (which is on the right side of the operator). The leftOuterJoin operator joins each row from the outer table (table on the left of the operator) with all rows produced by the table-valued function (which is on the right side of the operator) and preserves outer rows for which the table function returns an empty table. In SQL use LATERAL TABLE(<TableFunction>) with CROSS JOIN and LEFT JOIN with an ON TRUE join condition (see examples below).

The following example shows how to define a table function, register it in the TableEnvironment, and call it in a query. Note that you can configure your table function via a constructor before it is registered:

{% highlight java %}
// The generic type "Tuple2<String, Integer>" determines the schema of the returned table as (String, Integer).
public class Split extends TableFunction<Tuple2<String, Integer>> {

    private String separator;

    public Split(String separator) {
        this.separator = separator;
    }

    public void eval(String str) {
        for (String s : str.split(separator)) {
            // use collect(...) to emit a row
            collect(new Tuple2<String, Integer>(s, s.length()));
        }
    }
}

BatchTableEnvironment tableEnv = TableEnvironment.getBatchTableEnvironment(env);
// or
// StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
Table myTable = ...    // table schema: [a: String]

// Register the function.
tableEnv.registerFunction("split", new Split("#"));

// Use the table function in the Java Table API. "as" specifies the field names of the table.
myTable.join(new Table(tableEnv, "split(a) as (word, length)"))
    .select("a, word, length");
myTable.leftOuterJoin(new Table(tableEnv, "split(a) as (word, length)"))
    .select("a, word, length");

// Use the table function in SQL with LATERAL and TABLE keywords.
// CROSS JOIN a table function (equivalent to "join" in Table API).
tableEnv.sqlQuery("SELECT a, word, length FROM MyTable, LATERAL TABLE(split(a)) as T(word, length)");
// LEFT JOIN a table function (equivalent to "leftOuterJoin" in Table API).
tableEnv.sqlQuery("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL TABLE(split(a)) as T(word, length) ON TRUE");
{% endhighlight %}

{% highlight scala %}
val tableEnv = TableEnvironment.getBatchTableEnvironment(env)
// or
// val tableEnv = TableEnvironment.getTableEnvironment(env)
val myTable = ...    // table schema: [a: String]

// Use the table function in the Scala Table API (Note: No registration required in Scala Table API).
val split = new Split("#")
// "as" specifies the field names of the generated table.
myTable.join(split('a) as ('word, 'length)).select('a, 'word, 'length)
myTable.leftOuterJoin(split('a) as ('word, 'length)).select('a, 'word, 'length)

// Register the table function to use it in SQL queries.
tableEnv.registerFunction("split", new Split("#"))

// Use the table function in SQL with LATERAL and TABLE keywords.
// CROSS JOIN a table function (equivalent to "join" in Table API)
tableEnv.sqlQuery("SELECT a, word, length FROM MyTable, LATERAL TABLE(split(a)) as T(word, length)")
// LEFT JOIN a table function (equivalent to "leftOuterJoin" in Table API)
tableEnv.sqlQuery("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL TABLE(split(a)) as T(word, length) ON TRUE")
{% endhighlight %}

IMPORTANT: Do not implement TableFunction as a Scala object. A Scala object is a singleton and will cause concurrency issues.

Please note that POJO types do not have a deterministic field order. Therefore, you cannot rename the fields of a POJO returned by a table function using AS.

By default the result type of a TableFunction is determined by Flink’s automatic type extraction facilities. This works well for basic types and simple POJOs but might be wrong for more complex, custom, or composite types. In such a case, the type of the result can be manually specified by overriding TableFunction#getResultType() which returns its DataType.

The following example shows a TableFunction that returns a Row type, which requires an explicit result type. We define the returned table type by overriding TableFunction#getResultType().

{% highlight java %}
public class CustomTypeSplit extends TableFunction<Row> {

    // eval(String) emits Row instances via collect(...); omitted here for brevity

    @Override
    public DataType getResultType() {
        return DataTypes.createRowType(DataTypes.STRING, DataTypes.INT);
    }
}
{% endhighlight %}

{% highlight scala %}
class CustomTypeSplit extends TableFunction[Row] {

  // eval(str: String) emits Row instances via collect(...); omitted here for brevity

  override def getResultType: DataType = {
    DataTypes.createRowType(DataTypes.STRING, DataTypes.INT)
  }
}
{% endhighlight %}

{% top %}

## Aggregation Functions

User-Defined Aggregate Functions (UDAGGs) aggregate a table (one or more rows with one or more attributes) to a scalar value.

Consider the following example of an aggregation. Assume you have a table that contains data about beverages. The table consists of three columns (id, name, and price) and 5 rows. Imagine you need to find the highest price of all beverages in the table, i.e., perform a max() aggregation. You would need to check each of the 5 rows, and the result would be a single numeric value.
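
As a purely illustrative sketch of this scenario, the beverages table and the expected result could look as follows:

| id | name  | price |
|----|-------|-------|
| 1  | Latte | 6     |
| 2  | Milk  | 3     |
| 3  | Breve | 5     |
| 4  | Mocha | 8     |
| 5  | Tea   | 4     |

A max() aggregation over the price column scans all 5 rows and produces the single value 8.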

User-defined aggregation functions are implemented by extending the AggregateFunction class or DeclarativeAggregateFunction class.

### AggregateFunction

An AggregateFunction works as follows. First, it needs an accumulator, which is the data structure that holds the intermediate result of the aggregation. An empty accumulator is created by calling the createAccumulator() method of the AggregateFunction. Subsequently, the accumulate() method of the function is called for each input row to update the accumulator. Once all rows have been processed, the getValue() method of the function is called to compute and return the final result.

The following methods are mandatory for each AggregateFunction:

- createAccumulator()
- accumulate()
- getValue()
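
The following minimal sketch only illustrates this call order. The driver loop is not Flink runtime code, and the LongSum function is an assumption (the package path is taken from the base class mentioned later in this section):

{% highlight java %}
import java.util.Arrays;
import java.util.List;
import org.apache.flink.table.api.functions.AggregateFunction;

// Hypothetical minimal aggregate that sums long values; the accumulator is a one-element array.
class LongSum extends AggregateFunction<Long, long[]> {

  @Override
  public long[] createAccumulator() {
    return new long[] {0L};
  }

  public void accumulate(long[] acc, long value) {
    acc[0] += value;
  }

  @Override
  public Long getValue(long[] acc) {
    return acc[0];
  }
}

// Conceptual driver that mimics how the system invokes the three mandatory methods.
class AggregateLifecycleSketch {
  public static void main(String[] args) {
    LongSum agg = new LongSum();
    long[] acc = agg.createAccumulator();   // 1. create an empty accumulator
    List<Long> input = Arrays.asList(3L, 5L, 7L);
    for (long v : input) {
      agg.accumulate(acc, v);               // 2. update the accumulator for each input row
    }
    System.out.println(agg.getValue(acc));  // 3. compute the final result: 15
  }
}
{% endhighlight %}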

Flink’s type extraction facilities can fail to identify complex data types, e.g., if they are not basic types or simple POJOs. So similar to ScalarFunction and TableFunction, AggregateFunction provides methods to specify the DataType of the result type (through AggregateFunction#getResultType()) and the type of the accumulator (through AggregateFunction#getAccumulatorType()).

Besides the above methods, there are a few contracted methods that can be optionally implemented. While some of these methods allow the system to execute queries more efficiently, others are mandatory for certain use cases. For instance, the merge() method is mandatory if the aggregation function should be applied in the context of a session group window (the accumulators of two session windows need to be joined when a row is observed that "connects" them).

The following methods of AggregateFunction are required depending on the use case:

- retract() is required for aggregations on bounded OVER windows.
- merge() is required for many batch aggregations and session window aggregations.
- resetAccumulator() is required for many batch aggregations.

All methods of AggregateFunction must be declared public, not static, and named exactly as mentioned above. The methods createAccumulator, getValue, getResultType, and getAccumulatorType are defined in the AggregateFunction abstract class, while the others are contracted methods. In order to define an aggregate function, one has to extend the base class org.apache.flink.table.api.functions.AggregateFunction and implement one (or more) accumulate methods. The accumulate method can be overloaded with different parameter types and supports variable arguments.

Detailed documentation for all methods of AggregateFunction is given below.

{% highlight java %}
public abstract class AggregateFunction<T, ACC> extends UserDefinedFunction {

  /**
    * Creates and initializes the accumulator for this AggregateFunction.
    *
    * @return the accumulator with the initial value
    */
  public ACC createAccumulator(); // MANDATORY

  /**
    * Processes the input values and updates the provided accumulator instance. The method
    * accumulate can be overloaded with different custom types and arguments. An AggregateFunction
    * requires at least one accumulate() method.
    *
    * @param accumulator           the accumulator which contains the current aggregated results
    * @param [user defined inputs] the input value (usually obtained from newly arrived data)
    */
  public void accumulate(ACC accumulator, [user defined inputs]); // MANDATORY

  /**
    * Retracts the input values from the accumulator instance. The current design assumes the
    * inputs are the values that have been previously accumulated. The method retract can be
    * overloaded with different custom types and arguments. This method must be implemented for
    * bounded OVER aggregates on datastreams.
    *
    * @param accumulator           the accumulator which contains the current aggregated results
    * @param [user defined inputs] the input value (usually obtained from newly arrived data)
    */
  public void retract(ACC accumulator, [user defined inputs]); // OPTIONAL

  /**
    * Merges a group of accumulator instances into one accumulator instance. This method must be
    * implemented for datastream session window grouping aggregates and dataset grouping aggregates.
    *
    * @param accumulator the accumulator which will keep the merged aggregate results. It should
    *                    be noted that the accumulator may contain the previous aggregated
    *                    results. Therefore, the user should not replace or clean this instance
    *                    in the custom merge method.
    * @param its         a java.lang.Iterable pointing to a group of accumulators that will be
    *                    merged.
    */
  public void merge(ACC accumulator, java.lang.Iterable<ACC> its); // OPTIONAL

  /**
    * Called every time an aggregation result should be materialized.
    * The returned value could be either an early and incomplete result
    * (periodically emitted as data arrives) or the final result of the aggregation.
    *
    * @param accumulator the accumulator which contains the current aggregated results
    * @return the aggregation result
    */
  public T getValue(ACC accumulator); // MANDATORY

  /**
    * Resets the accumulator for this AggregateFunction. This method must be implemented for
    * dataset grouping aggregates.
    *
    * @param accumulator the accumulator which needs to be reset
    */
  public void resetAccumulator(ACC accumulator); // OPTIONAL

  /**
    * Returns true if this AggregateFunction can only be applied in an OVER window.
    *
    * @return true if the AggregateFunction requires an OVER window, false otherwise
    */
  public Boolean requiresOver = false; // PRE-DEFINED

  /**
    * Returns the DataType of the AggregateFunction's result.
    *
    * @return The DataType of the AggregateFunction's result or null if the result type
    *         should be automatically inferred.
    */
  public DataType getResultType = null; // PRE-DEFINED

  /**
    * Returns the DataType of the AggregateFunction's accumulator.
    *
    * @return The DataType of the AggregateFunction's accumulator or null if the
    *         accumulator type should be automatically inferred.
    */
  public DataType getAccumulatorType = null; // PRE-DEFINED
}
{% endhighlight %}

{% highlight scala %}
abstract class AggregateFunction[T, ACC] extends UserDefinedFunction {

  /**
    * Creates and initializes the accumulator for this [[AggregateFunction]].
    *
    * @return the accumulator with the initial value
    */
  def createAccumulator(): ACC // MANDATORY

  /**
    * Processes the input values and updates the provided accumulator instance. The method
    * accumulate can be overloaded with different custom types and arguments. An AggregateFunction
    * requires at least one accumulate() method.
    *
    * @param accumulator           the accumulator which contains the current aggregated results
    * @param [user defined inputs] the input value (usually obtained from newly arrived data)
    */
  def accumulate(accumulator: ACC, [user defined inputs]): Unit // MANDATORY

  /**
    * Retracts the input values from the accumulator instance. The current design assumes the
    * inputs are the values that have been previously accumulated. The method retract can be
    * overloaded with different custom types and arguments. This method must be implemented for
    * bounded OVER aggregates on datastreams.
    *
    * @param accumulator           the accumulator which contains the current aggregated results
    * @param [user defined inputs] the input value (usually obtained from newly arrived data)
    */
  def retract(accumulator: ACC, [user defined inputs]): Unit // OPTIONAL

  /**
    * Merges a group of accumulator instances into one accumulator instance. This method must be
    * implemented for datastream session window grouping aggregates and dataset grouping aggregates.
    *
    * @param accumulator the accumulator which will keep the merged aggregate results. It should
    *                    be noted that the accumulator may contain the previous aggregated
    *                    results. Therefore, the user should not replace or clean this instance
    *                    in the custom merge method.
    * @param its         a [[java.lang.Iterable]] pointing to a group of accumulators that will
    *                    be merged.
    */
  def merge(accumulator: ACC, its: java.lang.Iterable[ACC]): Unit // OPTIONAL

  /**
    * Called every time an aggregation result should be materialized.
    * The returned value could be either an early and incomplete result
    * (periodically emitted as data arrives) or the final result of the aggregation.
    *
    * @param accumulator the accumulator which contains the current aggregated results
    * @return the aggregation result
    */
  def getValue(accumulator: ACC): T // MANDATORY

  /**
    * Resets the accumulator for this [[AggregateFunction]]. This method must be implemented for
    * dataset grouping aggregates.
    *
    * @param accumulator the accumulator which needs to be reset
    */
  def resetAccumulator(accumulator: ACC): Unit // OPTIONAL

  /**
    * Returns true if this AggregateFunction can only be applied in an OVER window.
    *
    * @return true if the AggregateFunction requires an OVER window, false otherwise
    */
  def requiresOver: Boolean = false // PRE-DEFINED

  /**
    * Returns the DataType of the AggregateFunction's result.
    *
    * @return The DataType of the AggregateFunction's result or null if the result type
    *         should be automatically inferred.
    */
  def getResultType: DataType = null // PRE-DEFINED

  /**
    * Returns the DataType of the AggregateFunction's accumulator.
    *
    * @return The DataType of the AggregateFunction's accumulator or null if the
    *         accumulator type should be automatically inferred.
    */
  def getAccumulatorType: DataType = null // PRE-DEFINED
}
{% endhighlight %}

The following example shows how to

- define an AggregateFunction that calculates the weighted average on a given column,
- register the function in the TableEnvironment, and
- use the function in a query.

To calculate a weighted average value, the accumulator needs to store the weighted sum and count of all the data that has been accumulated. In our example we define a class WeightedAvgAccum to be the accumulator. Accumulators are automatically backed up by Flink's checkpointing mechanism and restored in case of a failure to ensure exactly-once semantics.

The accumulate() method of our WeightedAvg AggregateFunction has three inputs. The first one is the WeightedAvgAccum accumulator; the other two are user-defined inputs: the input value iValue and the weight of the input iWeight. Although the retract(), merge(), and resetAccumulator() methods are not mandatory for most aggregation types, we provide them below as examples. Please note that we used Java primitive types and defined getResultType() and getAccumulatorType() methods in the Scala example because Flink type extraction does not work very well for Scala types.

{% highlight java %}
/**
 * Accumulator for WeightedAvg.
 */
public static class WeightedAvgAccum {
    public long sum = 0;
    public int count = 0;
}

/**
 * Weighted Average user-defined aggregate function.
 */
public static class WeightedAvg extends AggregateFunction<Long, WeightedAvgAccum> {

    @Override
    public WeightedAvgAccum createAccumulator() {
        return new WeightedAvgAccum();
    }

    @Override
    public Long getValue(WeightedAvgAccum acc) {
        if (acc.count == 0) {
            return null;
        } else {
            return acc.sum / acc.count;
        }
    }

    public void accumulate(WeightedAvgAccum acc, long iValue, int iWeight) {
        acc.sum += iValue * iWeight;
        acc.count += iWeight;
    }

    public void retract(WeightedAvgAccum acc, long iValue, int iWeight) {
        acc.sum -= iValue * iWeight;
        acc.count -= iWeight;
    }

    public void merge(WeightedAvgAccum acc, Iterable<WeightedAvgAccum> it) {
        Iterator<WeightedAvgAccum> iter = it.iterator();
        while (iter.hasNext()) {
            WeightedAvgAccum a = iter.next();
            acc.count += a.count;
            acc.sum += a.sum;
        }
    }

    public void resetAccumulator(WeightedAvgAccum acc) {
        acc.count = 0;
        acc.sum = 0L;
    }
}

// register function
StreamTableEnvironment tEnv = ...
tEnv.registerFunction("wAvg", new WeightedAvg());

// use function
tEnv.sqlQuery("SELECT user, wAvg(points, level) AS avgPoints FROM userScores GROUP BY user");
{% endhighlight %}

{% highlight scala %}
/**
 * Accumulator for WeightedAvg.
 */
class WeightedAvgAccum extends JTuple1[JLong, JInteger] {
  sum = 0L
  count = 0
}

/**
 * Weighted Average user-defined aggregate function.
 */
class WeightedAvg extends AggregateFunction[JLong, WeightedAvgAccum] {

  override def createAccumulator(): WeightedAvgAccum = {
    new WeightedAvgAccum
  }

  override def getValue(acc: WeightedAvgAccum): JLong = {
    if (acc.count == 0) {
      null
    } else {
      acc.sum / acc.count
    }
  }

  def accumulate(acc: WeightedAvgAccum, iValue: JLong, iWeight: JInteger): Unit = {
    acc.sum += iValue * iWeight
    acc.count += iWeight
  }

  def retract(acc: WeightedAvgAccum, iValue: JLong, iWeight: JInteger): Unit = {
    acc.sum -= iValue * iWeight
    acc.count -= iWeight
  }

  def merge(acc: WeightedAvgAccum, it: java.lang.Iterable[WeightedAvgAccum]): Unit = {
    val iter = it.iterator()
    while (iter.hasNext) {
      val a = iter.next()
      acc.count += a.count
      acc.sum += a.sum
    }
  }

  def resetAccumulator(acc: WeightedAvgAccum): Unit = {
    acc.count = 0
    acc.sum = 0L
  }

  override def getAccumulatorType: DataType = {
    DataTypes.of(new TupleTypeInfo(classOf[WeightedAvgAccum], Types.LONG, Types.INT))
  }

  override def getResultType: DataType = DataTypes.LONG
}

// register function
val tEnv: StreamTableEnvironment = ???
tEnv.registerFunction("wAvg", new WeightedAvg())

// use function
tEnv.sqlQuery("SELECT user, wAvg(points, level) AS avgPoints FROM userScores GROUP BY user")
{% endhighlight %}

### DeclarativeAggregateFunction

A DeclarativeAggregateFunction is expressed in terms of Table API Expressions, which are used when generating the code of the aggregation operator for a query.

When implementing a new expression-based aggregate function, you should first decide how many operands your function will have by implementing the inputCount method. You can then use the operands field to reference the operands, e.g., operands(0), operands(1). Next, declare all aggregation buffer attributes by implementing aggBufferAttributes. The aggregation buffer caches the intermediate state of the aggregation, similar to the accumulator of an AggregateFunction. Declare each buffer attribute as an UnresolvedAggBufferReference and make sure its name is unique within the function. Implement initialValuesExpressions to define the initial values of the buffers and accumulateExpressions to update the buffers with the input operands. mergeExpressions is used to merge the buffers of partial aggregates, and getValueExpression returns the final value of the aggregation, like getValue in AggregateFunction.

Detailed documentation for all methods of DeclarativeAggregateFunction is given below.

{% highlight scala %}
abstract class DeclarativeAggregateFunction extends UserDefinedFunction {

  /**
    * How many inputs your function will deal with.
    */
  def inputCount: Int

  /**
    * All fields of the aggregate buffer.
    */
  def aggBufferAttributes: Seq[UnresolvedAggBufferReference]

  /**
    * The result type of the function.
    */
  def getResultType: InternalType

  /**
    * Expressions for initializing empty aggregation buffers.
    */
  def initialValuesExpressions: Seq[Expression]

  /**
    * Expressions for accumulating the mutable aggregation buffer based on an input row.
    */
  def accumulateExpressions: Seq[Expression]

  /**
    * Expressions for retracting the mutable aggregation buffer based on an input row.
    */
  def retractExpressions: Seq[Expression] = ???

  /**
    * A sequence of expressions for merging two aggregation buffers together. When defining these
    * expressions, you can use the syntax attributeName.left and attributeName.right to refer
    * to the attributes corresponding to each of the buffers being merged (this magic is enabled
    * by the [[RichAggregateBufferAttribute]] implicit class).
    */
  def mergeExpressions: Seq[Expression]

  /**
    * An expression which returns the final value for this aggregate function.
    */
  def getValueExpression: Expression
}
{% endhighlight %}

The following is an example of a DeclarativeAggregateFunction that implements the same functionality as the WeightedAvg example above.

{% highlight scala %}
class WeightedAvg extends DeclarativeAggregateFunction {

  protected lazy val sum = UnresolvedAggBufferReference("sum", DataTypes.LONG)
  protected lazy val count = UnresolvedAggBufferReference("count", DataTypes.INT)

  override def inputCount = 2

  override def aggBufferAttributes = Seq(sum, count)

  override def getResultType = DataTypes.DOUBLE

  override def initialValuesExpressions = Seq(Literal(0L), Literal(0L))

  override def accumulateExpressions = Seq(
    /* sum = */ IsNull(operands(0)) ? (sum, sum + operands(0) * operands(1)),
    /* count = */ IsNull(operands(0)) ? (count, count + operands(1))
  )

  override def retractExpressions: Seq[Expression] = Seq(
    /* sum = */ IsNull(operands(0)) ? (sum, sum - operands(0) * operands(1)),
    /* count = */ IsNull(operands(0)) ? (count, count - operands(1))
  )

  override def mergeExpressions = Seq(
    /* sum = */ sum.left + sum.right,
    /* count = */ count.left + count.right
  )

  override def getValueExpression = If(count === Literal(0L), Null(getResultType), sum / count)
}

// register function
val tEnv: StreamTableEnvironment = ???
tEnv.registerFunction("wAvg", new WeightedAvg())

// use function
tEnv.sqlQuery("SELECT user, wAvg(points, level) AS avgPoints FROM userScores GROUP BY user")
{% endhighlight %}

### AggregateFunction vs. DeclarativeAggregateFunction

1. In a DeclarativeAggregateFunction, Expressions are used to declare the aggregation logic, so the built-in scalar functions of the Flink Table API can be reused. This also makes it possible for the optimizer to eliminate duplicate computation across different aggregate functions.
2. In an AggregateFunction, a DataView can be used to store complex accumulator state. A DataView is persisted in state and can therefore cache a large amount of data, which is needed by aggregations such as a distinct count (see the sketch below).
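
A rough sketch of the DataView idea for a distinct count follows. It assumes the MapView class and import paths of upstream Flink and a hypothetical DistinctCount function; treat it as an illustration rather than a ready-to-use implementation:

{% highlight java %}
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.api.dataview.MapView;
import org.apache.flink.table.functions.AggregateFunction;

// Hypothetical distinct-count aggregate built on a MapView-backed accumulator.
public class DistinctCount extends AggregateFunction<Long, DistinctCount.DistinctCountAccum> {

  // The MapView is backed by state in streaming queries, so it can hold many
  // distinct keys without materializing them all on the heap.
  public static class DistinctCountAccum {
    public MapView<String, Boolean> seen = new MapView<>(Types.STRING, Types.BOOLEAN);
    public long count = 0L;
  }

  @Override
  public DistinctCountAccum createAccumulator() {
    return new DistinctCountAccum();
  }

  public void accumulate(DistinctCountAccum acc, String value) throws Exception {
    if (value != null && !acc.seen.contains(value)) {
      acc.seen.put(value, true);
      acc.count += 1;
    }
  }

  @Override
  public Long getValue(DistinctCountAccum acc) {
    return acc.count;
  }
}
{% endhighlight %}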

{% top %}

## Best Practices for Implementing UDFs

The Table API and SQL code generation internally tries to work with primitive values as much as possible. A user-defined function can introduce significant overhead through object creation, casting, and (un)boxing. Therefore, it is highly recommended to declare parameters and result types as primitive types instead of their boxed classes. DataTypes.DATE and DataTypes.TIME can also be represented as int. DataTypes.TIMESTAMP can be represented as long.
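
As a small, hypothetical sketch of this guideline, the following scalar function declares both its parameter and its result as primitive long values (the class and method names are illustrative):

{% highlight java %}
import org.apache.flink.table.functions.ScalarFunction;

// Illustrative only: primitive parameter and result types avoid object creation and (un)boxing.
public class MillisOfDay extends ScalarFunction {

  // a timestamp passed as a primitive long (milliseconds since epoch),
  // result returned as a primitive long as well
  public long eval(long timestampMillis) {
    return timestampMillis % (24 * 60 * 60 * 1000L);
  }
}
{% endhighlight %}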

We recommend writing user-defined functions in Java instead of Scala, as Scala types pose a challenge for Flink's type extractor.

{% top %}

## Integrating UDFs with the Runtime

Sometimes it might be necessary for a user-defined function to get global runtime information or do some setup/clean-up work before the actual evaluation. User-defined functions provide open() and close() methods that can be overridden and provide functionality similar to the methods in RichFunction of the DataStream API.

The open() method is called once before the evaluation method. The close() method is called after the last call to the evaluation method.

The open() method provides a FunctionContext that contains information about the context in which user-defined functions are executed, such as the metric group, the distributed cache files, or the global job parameters.

The following information can be obtained by calling the corresponding methods of FunctionContext:

| Method                              | Description                                                |
| ----------------------------------- | ---------------------------------------------------------- |
| getMetricGroup()                    | Metric group for this parallel subtask.                    |
| getCachedFile(name)                 | Local temporary file copy of a distributed cache file.     |
| getJobParameter(name, defaultValue) | Global job parameter value associated with the given key.  |
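
Beyond job parameters, open() can, for example, use the metric group to register custom metrics. The following is a minimal, hypothetical sketch; the function, the counter name, and the import paths (taken from upstream Flink) are assumptions:

{% highlight java %}
import org.apache.flink.metrics.Counter;
import org.apache.flink.table.functions.FunctionContext;
import org.apache.flink.table.functions.ScalarFunction;

// Hypothetical scalar function that counts how often it is evaluated.
public class CountedUpper extends ScalarFunction {

  private transient Counter evalCounter;

  @Override
  public void open(FunctionContext context) throws Exception {
    // register a counter in this subtask's metric group
    evalCounter = context.getMetricGroup().counter("evalCalls");
  }

  public String eval(String s) {
    evalCounter.inc();
    return s == null ? null : s.toUpperCase();
  }
}
{% endhighlight %}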

The following example snippet shows how to use FunctionContext in a scalar function for accessing a global job parameter:

{% highlight java %}
public static class HashCode extends ScalarFunction {

    private int factor = 0;

    @Override
    public void open(FunctionContext context) throws Exception {
        // access the "hashcode_factor" parameter
        // "12" would be the default value if the parameter does not exist
        factor = Integer.valueOf(context.getJobParameter("hashcode_factor", "12"));
    }

    public int eval(String s) {
        return s.hashCode() * factor;
    }
}

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tableEnv = TableEnvironment.getBatchTableEnvironment(env);
// or
// StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

// set job parameter
Configuration conf = new Configuration();
conf.setString("hashcode_factor", "31");
env.getConfig().setGlobalJobParameters(conf);

// register the function
tableEnv.registerFunction("hashCode", new HashCode());

// use the function in Java Table API
myTable.select("string, string.hashCode(), hashCode(string)");

// use the function in SQL
tableEnv.sqlQuery("SELECT string, HASHCODE(string) FROM MyTable");
{% endhighlight %}

{% highlight scala %}
object hashCode extends ScalarFunction {

  var hashcode_factor = 12

  override def open(context: FunctionContext): Unit = {
    // access the "hashcode_factor" parameter
    // "12" would be the default value if the parameter does not exist
    hashcode_factor = context.getJobParameter("hashcode_factor", "12").toInt
  }

  def eval(s: String): Int = {
    s.hashCode() * hashcode_factor
  }
}

val tableEnv = TableEnvironment.getTableEnvironment(env)

// use the function in Scala Table API
myTable.select('string, hashCode('string))

// register and use the function in SQL
tableEnv.registerFunction("hashCode", hashCode)
tableEnv.sqlQuery("SELECT string, HASHCODE(string) FROM MyTable")
{% endhighlight %}

{% top %}