Flink exposes a metric system that allows gathering and exposing metrics to external systems.
You can access the metric system from any user function that extends [RichFunction]({{ site.baseurl }}/dev/api_concepts.html#rich-functions) by calling getRuntimeContext().getMetricGroup(). This method returns a MetricGroup object on which you can create and register new metrics.
Flink supports Counters, Gauges, Histograms and Meters.
A Counter is used to count something. The current value can be in- or decremented using inc()/inc(long n) or dec()/dec(long n). You can create and register a Counter by calling counter(String name) on a MetricGroup.
{% highlight java %}
public class MyMapper extends RichMapFunction<String, Integer> {
  private Counter counter;

  @Override
  public void open(Configuration config) {
    this.counter = getRuntimeContext()
      .getMetricGroup()
      .counter("myCounter");
  }

  @Override
  public Integer map(String value) throws Exception {
    this.counter.inc();
    return value.length();
  }
}
{% endhighlight %}
Alternatively, you can use your own Counter implementation:
{% highlight java %}
public class MyMapper extends RichMapFunction<String, Integer> {
  private Counter counter;

  @Override
  public void open(Configuration config) {
    this.counter = getRuntimeContext()
      .getMetricGroup()
      .counter("myCustomCounter", new CustomCounter());
  }

  @Override
  public Integer map(String value) throws Exception {
    this.counter.inc();
    return value.length();
  }
}
{% endhighlight %}
A Gauge provides a value of any type on demand. In order to use a Gauge you must first create a class that implements the org.apache.flink.metrics.Gauge interface. There is no restriction for the type of the returned value. You can register a gauge by calling gauge(String name, Gauge gauge) on a MetricGroup.
{% highlight java %}
public class MyMapper extends RichMapFunction<String, Integer> {
  private int valueToExpose;

  @Override
  public void open(Configuration config) {
    getRuntimeContext()
      .getMetricGroup()
      .gauge("MyGauge", new Gauge<Integer>() {
        @Override
        public Integer getValue() {
          return valueToExpose;
        }
      });
  }

  @Override
  public Integer map(String value) throws Exception {
    valueToExpose = value.length();
    return valueToExpose;
  }
}
{% endhighlight %}
{% highlight scala %}
class MyMapper extends RichMapFunction[String, Int] {
  val valueToExpose = 5

  override def open(parameters: Configuration): Unit = {
    getRuntimeContext()
      .getMetricGroup()
      .gauge[Int, ScalaGauge[Int]]("MyGauge", ScalaGauge[Int]( () => valueToExpose ) )
  }
  ...
}
{% endhighlight %}
Note that reporters will turn the exposed object into a String, which means that a meaningful toString() implementation is required.
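For example, a minimal sketch of a value class suitable for exposure through a gauge (the MyStats class and its fields are hypothetical, for illustration only) could look like this:

{% highlight java %}
// Hypothetical value class exposed through a gauge; since reporters
// stringify the gauge value, toString() determines what gets reported.
public class MyStats {
  private final long hits;
  private final long misses;

  public MyStats(long hits, long misses) {
    this.hits = hits;
    this.misses = misses;
  }

  @Override
  public String toString() {
    return "hits=" + hits + ",misses=" + misses;
  }
}
{% endhighlight %}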
A Histogram measures the distribution of long values. You can register one by calling histogram(String name, Histogram histogram) on a MetricGroup.
{% highlight java %}
public class MyMapper extends RichMapFunction<Long, Integer> {
  private Histogram histogram;

  @Override
  public void open(Configuration config) {
    this.histogram = getRuntimeContext()
      .getMetricGroup()
      .histogram("myHistogram", new MyHistogram());
  }

  @Override
  public Integer map(Long value) throws Exception {
    this.histogram.update(value);
    return value.intValue();
  }
}
{% endhighlight %}
Flink does not provide a default implementation for Histogram, but offers a {% gh_link flink-metrics/flink-metrics-dropwizard/src/main/java/org/apache/flink/dropwizard/metrics/DropwizardHistogramWrapper.java "Wrapper" %} that allows usage of Codahale/DropWizard histograms. To use this wrapper add the following dependency in your pom.xml:

{% highlight xml %}
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-metrics-dropwizard</artifactId>
  <version>{{site.version}}</version>
</dependency>
{% endhighlight %}
You can then register a Codahale/DropWizard histogram like this:
{% highlight java %}
public class MyMapper extends RichMapFunction<Long, Integer> {
  private Histogram histogram;

  @Override
  public void open(Configuration config) {
    com.codahale.metrics.Histogram dropwizardHistogram =
      new com.codahale.metrics.Histogram(new SlidingWindowReservoir(500));

    this.histogram = getRuntimeContext()
      .getMetricGroup()
      .histogram("myHistogram", new DropwizardHistogramWrapper(dropwizardHistogram));
  }

  @Override
  public Integer map(Long value) throws Exception {
    this.histogram.update(value);
    return value.intValue();
  }
}
{% endhighlight %}
A Meter measures an average throughput. An occurrence of an event can be registered with the markEvent() method. The occurrence of multiple events at the same time can be registered with the markEvent(long n) method. You can register a meter by calling meter(String name, Meter meter) on a MetricGroup.
{% highlight java %}
public class MyMapper extends RichMapFunction<Long, Integer> {
  private Meter meter;

  @Override
  public void open(Configuration config) {
    this.meter = getRuntimeContext()
      .getMetricGroup()
      .meter("myMeter", new MyMeter());
  }

  @Override
  public Integer map(Long value) throws Exception {
    this.meter.markEvent();
    return value.intValue();
  }
}
{% endhighlight %}
Flink offers a {% gh_link flink-metrics/flink-metrics-dropwizard/src/main/java/org/apache/flink/dropwizard/metrics/DropwizardMeterWrapper.java "Wrapper" %} that allows usage of Codahale/DropWizard meters. To use this wrapper add the following dependency in your pom.xml:

{% highlight xml %}
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-metrics-dropwizard</artifactId>
  <version>{{site.version}}</version>
</dependency>
{% endhighlight %}
You can then register a Codahale/DropWizard meter like this:
{% highlight java %}
public class MyMapper extends RichMapFunction<Long, Integer> {
  private Meter meter;

  @Override
  public void open(Configuration config) {
    com.codahale.metrics.Meter dropwizardMeter = new com.codahale.metrics.Meter();

    this.meter = getRuntimeContext()
      .getMetricGroup()
      .meter("myMeter", new DropwizardMeterWrapper(dropwizardMeter));
  }

  @Override
  public Integer map(Long value) throws Exception {
    this.meter.markEvent();
    return value.intValue();
  }
}
{% endhighlight %}
Every metric is assigned an identifier under which it will be reported, based on 3 components: the user-provided name when registering the metric, an optional user-defined scope, and a system-provided scope. For example, if A.B is the system scope, C.D the user scope and E the name, then the identifier for the metric will be A.B.C.D.E.
You can configure which delimiter to use for the identifier (default: .) by setting the metrics.scope.delimiter key in conf/flink-conf.yaml.
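For example, the following (purely illustrative) setting would turn the identifier above into A-B-C-D-E:

{% highlight yaml %}
metrics.scope.delimiter: "-"
{% endhighlight %}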
You can define a user scope by calling either MetricGroup#addGroup(String name) or MetricGroup#addGroup(int name).
{% highlight java %}
counter = getRuntimeContext()
  .getMetricGroup()
  .addGroup("MyMetrics")
  .counter("myCounter");
{% endhighlight %}
The system scope contains context information about the metric, for example in which task it was registered or what job that task belongs to.
Which context information should be included can be configured by setting the following keys in conf/flink-conf.yaml. Each of these keys expects a format string that may contain constants (e.g. “taskmanager”) and variables (e.g. “<task_id>”) which will be replaced at runtime.

- metrics.scope.jm
  - Default: <host>.jobmanager
  - Applied to all metrics that were scoped to a job manager.
- metrics.scope.jm.job
  - Default: <host>.jobmanager.<job_name>
  - Applied to all metrics that were scoped to a job manager and job.
- metrics.scope.tm
  - Default: <host>.taskmanager.<tm_id>
  - Applied to all metrics that were scoped to a task manager.
- metrics.scope.tm.job
  - Default: <host>.taskmanager.<tm_id>.<job_name>
  - Applied to all metrics that were scoped to a task manager and job.
- metrics.scope.task
  - Default: <host>.taskmanager.<tm_id>.<job_name>.<task_name>.<subtask_index>
  - Applied to all metrics that were scoped to a task.
- metrics.scope.operator
  - Default: <host>.taskmanager.<tm_id>.<job_name>.<operator_name>.<subtask_index>
  - Applied to all metrics that were scoped to an operator.

There are no restrictions on the number or order of variables. Variables are case sensitive.
The default scope for operator metrics will result in an identifier akin to localhost.taskmanager.1234.MyJob.MyOperator.0.MyMetric.
If you also want to include the task name but omit the task manager information you can specify the following format:
{% highlight yaml %}
metrics.scope.operator: <host>.<job_name>.<task_name>.<operator_name>.<subtask_index>
{% endhighlight %}
This could create the identifier localhost.MyJob.MySource_->_MyOperator.MyOperator.0.MyMetric.
Note that for this format string an identifier clash can occur should the same job be run multiple times concurrently, which can lead to inconsistent metric data. As such it is advised either to use format strings that provide a certain degree of uniqueness by including IDs (e.g. <job_id>) or to assign unique names to jobs and operators.
Metrics can be exposed to an external system by configuring one or several reporters in conf/flink-conf.yaml. These reporters will be instantiated on each job and task manager when they are started.
- metrics.reporters: The list of named reporters.
- metrics.reporter.<name>.<config>: Generic setting <config> for the reporter named <name>.
- metrics.reporter.<name>.class: The reporter class to use for the reporter named <name>.
- metrics.reporter.<name>.interval: The reporter interval to use for the reporter named <name>.
- metrics.reporter.<name>.scope.delimiter: The delimiter to use for the identifier for the reporter named <name> (defaults to the value of metrics.scope.delimiter).

All reporters must at least have the class property; some allow specifying a reporting interval. Below, we will list more settings specific to each reporter.
Example reporter configuration that specifies multiple reporters:
{% highlight yaml %}
metrics.reporters: my_jmx_reporter,my_other_reporter

metrics.reporter.my_jmx_reporter.class: org.apache.flink.metrics.jmx.JMXReporter
metrics.reporter.my_jmx_reporter.port: 9020-9040

metrics.reporter.my_other_reporter.class: org.apache.flink.metrics.graphite.GraphiteReporter
metrics.reporter.my_other_reporter.host: 192.168.1.1
metrics.reporter.my_other_reporter.port: 10000
{% endhighlight %}
Important: The jar containing the reporter must be accessible when Flink is started; place it in the /lib folder.
You can write your own Reporter by implementing the org.apache.flink.metrics.reporter.MetricReporter interface. If the Reporter should send out reports regularly you have to implement the Scheduled interface as well.
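For illustration, a minimal sketch of a scheduled reporter could look like the following; the class name MyConsoleReporter and its console output are made up for this example:

{% highlight java %}
import org.apache.flink.metrics.Counter;
import org.apache.flink.metrics.Metric;
import org.apache.flink.metrics.MetricConfig;
import org.apache.flink.metrics.MetricGroup;
import org.apache.flink.metrics.reporter.MetricReporter;
import org.apache.flink.metrics.reporter.Scheduled;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MyConsoleReporter implements MetricReporter, Scheduled {
  // metrics registered with this reporter, keyed by their identifier
  private final Map<String, Metric> metrics = new ConcurrentHashMap<>();

  @Override
  public void open(MetricConfig config) {
    // read reporter-specific settings here, if any
  }

  @Override
  public void close() {
    metrics.clear();
  }

  @Override
  public void notifyOfAddedMetric(Metric metric, String metricName, MetricGroup group) {
    metrics.put(group.getMetricIdentifier(metricName), metric);
  }

  @Override
  public void notifyOfRemovedMetric(Metric metric, String metricName, MetricGroup group) {
    metrics.remove(group.getMetricIdentifier(metricName));
  }

  @Override
  public void report() {
    // called regularly at the configured interval; here we only print counters
    for (Map.Entry<String, Metric> entry : metrics.entrySet()) {
      if (entry.getValue() instanceof Counter) {
        System.out.println(entry.getKey() + ": " + ((Counter) entry.getValue()).getCount());
      }
    }
  }
}
{% endhighlight %}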
The following sections list the supported reporters.
#### JMX (org.apache.flink.metrics.jmx.JMXReporter)

You don't have to include an additional dependency since the JMX reporter is available by default but not activated.
Parameters:
- port - (optional) the port on which JMX listens for connections. This can also be a port range. When a range is specified the actual port is shown in the relevant job or task manager log. If this setting is set Flink will start an extra JMX connector for the given port/range. Metrics are always available on the default local JMX interface.

Example configuration:
{% highlight yaml %}
metrics.reporters: jmx
metrics.reporter.jmx.class: org.apache.flink.metrics.jmx.JMXReporter
metrics.reporter.jmx.port: 8789
{% endhighlight %}
Metrics exposed through JMX are identified by a domain and a list of key-properties, which together form the object name.
The domain always begins with org.apache.flink followed by a generalized metric identifier. In contrast to the usual identifier it is not affected by scope-formats, does not contain any variables and is constant across jobs. An example for such a domain would be org.apache.flink.job.task.numBytesOut.
The key-property list contains the values for all variables, regardless of configured scope formats, that are associated with a given metric. An example for such a list would be host=localhost,job_name=MyJob,task_name=MyTask.
The domain thus identifies a metric class, while the key-property list identifies one (or multiple) instances of that metric.
#### Ganglia (org.apache.flink.metrics.ganglia.GangliaReporter)

In order to use this reporter you must copy /opt/flink-metrics-ganglia-{{site.version}}.jar into the /lib folder of your Flink distribution.
Parameters:
- host - the gmond host address configured under udp_recv_channel.bind in gmond.conf
- port - the gmond port configured under udp_recv_channel.port in gmond.conf
- tmax - soft limit for how long an old metric should be retained
- dmax - hard limit for how long an old metric should be retained
- ttl - time-to-live for transmitted UDP packets
- addressingMode - UDP addressing mode to use (UNICAST/MULTICAST)

Example configuration:
{% highlight yaml %}
metrics.reporters: gang
metrics.reporter.gang.class: org.apache.flink.metrics.ganglia.GangliaReporter
metrics.reporter.gang.host: localhost
metrics.reporter.gang.port: 8649
metrics.reporter.gang.tmax: 60
metrics.reporter.gang.dmax: 0
metrics.reporter.gang.ttl: 1
metrics.reporter.gang.addressingMode: MULTICAST
{% endhighlight %}
#### Graphite (org.apache.flink.metrics.graphite.GraphiteReporter)

In order to use this reporter you must copy /opt/flink-metrics-graphite-{{site.version}}.jar into the /lib folder of your Flink distribution.
Parameters:
- host - the Graphite server host
- port - the Graphite server port
- protocol - protocol to use (TCP/UDP)

Example configuration:
{% highlight yaml %}
metrics.reporters: grph
metrics.reporter.grph.class: org.apache.flink.metrics.graphite.GraphiteReporter
metrics.reporter.grph.host: localhost
metrics.reporter.grph.port: 2003
metrics.reporter.grph.protocol: TCP
{% endhighlight %}
#### StatsD (org.apache.flink.metrics.statsd.StatsDReporter)

In order to use this reporter you must copy /opt/flink-metrics-statsd-{{site.version}}.jar into the /lib folder of your Flink distribution.
Parameters:
- host - the StatsD server host
- port - the StatsD server port

Example configuration:
{% highlight yaml %}
metrics.reporters: stsd
metrics.reporter.stsd.class: org.apache.flink.metrics.statsd.StatsDReporter
metrics.reporter.stsd.host: localhost
metrics.reporter.stsd.port: 8125
{% endhighlight %}
#### Datadog (org.apache.flink.metrics.datadog.DatadogHttpReporter)

In order to use this reporter you must copy /opt/flink-metrics-datadog-{{site.version}}.jar into the /lib folder of your Flink distribution.
Note that any variables in Flink metrics, such as <host>, <job_name>, <tm_id>, <subtask_index>, <task_name>, and <operator_name>, will be sent to Datadog as tags. The tags will look like host:localhost and job_name:myjobname.
Parameters:
- apikey - the Datadog API key
- tags - (optional) the global tags that will be applied to metrics when sending to Datadog. Tags should be separated by comma only.

Example configuration:
{% highlight yaml %}
metrics.reporters: dghttp
metrics.reporter.dghttp.class: org.apache.flink.metrics.datadog.DatadogHttpReporter
metrics.reporter.dghttp.apikey: xxx
metrics.reporter.dghttp.tags: myflinkapp,prod
{% endhighlight %}
By default Flink gathers several metrics that provide deep insights into the current state. This section is a reference of all these metrics.
The tables below generally feature 4 columns:

- The “Scope” column describes which scope format is used to generate the system scope. For example, if the cell contains “Operator” then the scope format for “metrics.scope.operator” is used. If the cell contains multiple values, separated by a slash, then the metrics are reported multiple times for different entities, like for both job- and taskmanagers.
- The (optional) “Infix” column describes which infix is appended to the system scope.
- The “Metrics” column lists the names of all metrics that are registered for the given scope and infix.
- The “Description” column provides information as to what a given metric is measuring.
Note that all dots in the infix/metric name columns are still subject to the “metrics.scope.delimiter” setting.
Thus, in order to infer the metric identifier:

1. Take the scope format based on the “Scope” column.
2. Append the value in the “Infix” column, if present, accounting for the “metrics.scope.delimiter” setting.
3. Append the metric name.
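For example, assuming a task manager scope format of <host>.taskmanager.<tm_id>, an infix of Status.JVM and a metric named CPU.Load, this would yield the identifier localhost.taskmanager.1234.Status.JVM.CPU.Load.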
Flink allows tracking the latency of records traveling through the system. To enable latency tracking, the latencyTrackingInterval (in milliseconds) has to be set to a positive value in the ExecutionConfig.
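A minimal sketch of enabling it for a streaming job (the interval of 1000 ms is an arbitrary example value):

{% highlight java %}
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// emit a LatencyMarker from every source every 1000 ms
env.getConfig().setLatencyTrackingInterval(1000L);
{% endhighlight %}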
At the latencyTrackingInterval, the sources will periodically emit a special record, called a LatencyMarker. The marker contains a timestamp from the time when the record has been emitted at the sources. Latency markers cannot overtake regular user records; thus, if records are queuing up in front of an operator, this queuing time will add to the latency tracked by the marker.
Note that the latency markers do not account for the time user records spend in operators, as the markers bypass them. In particular, the markers do not account for the time records spend in, for example, window buffers. Only if operators are not able to accept new records, and the records therefore queue up, will the latency measured by the markers reflect that.
All intermediate operators keep a list of the last n latencies from each source to compute a latency distribution. The sink operators keep a list per source and per parallel source instance to allow detecting latency issues caused by individual machines.
Currently, Flink assumes that the clocks of all machines in the cluster are in sync. We recommend setting up an automated clock synchronisation service (like NTP) to avoid false latency results.
Metrics that were gathered for each task or operator can also be visualized in the Dashboard. On the main page for a job, select the Metrics tab. After selecting one of the tasks in the top graph you can select metrics to display using the Add Metric drop-down menu.
Task metrics are listed as <subtask_index>.<metric_name>, operator metrics as <subtask_index>.<operator_name>.<metric_name>.

Each metric will be visualized as a separate graph, with the x-axis representing time and the y-axis the measured value. All graphs are automatically updated every 10 seconds, and continue to do so when navigating to another page.
There is no limit on the number of visualized metrics; however, only numeric metrics can be visualized.
{% top %}