| commit | ab0015dfc6c9a176c04277bc09b930ecedd602cc | |
|---|---|---|
| author | Piotr Nowojski <piotr.nowojski@gmail.com> | Wed Jun 12 16:24:21 2019 +0200 |
| committer | Nico Kruber <nico@ververica.com> | Fri Jun 05 11:02:25 2020 +0200 |
| tree | 74f584298fb66f3e1c21c63dd646d16a5de720e3 | |
| parent | 9e3833a5f8a21eca8684aece48c37da2f20e1854 | |
[FLINK-12818] Improve stability of twoInputMapSink benchmark

This is an attempt to improve the stability of the twoInputMapSink benchmark. Local tests show that, for some unknown reason, without a 1 ms buffer timeout some JVM forks are much slower than others, making results unstable.
This repository contains sets of micro benchmarks designed to run on a single machine, to help Apache Flink's developers assess the performance implications of their changes.
The main methods defined in the various classes (test cases) use the JMH micro-benchmark suite to define runners that execute those test cases. You can execute the default benchmark suite (which takes ~1 hour) at once:
```
mvn -Dflink.version=1.5.0 clean install exec:exec
```
There is also a separate benchmark suite for state backends; you can execute it (which also takes ~1 hour) with the command below:
```
mvn -Dflink.version=1.5.0 clean package exec:exec \
  -Dexec.executable=java \
  -Dexec.args="-jar target/benchmarks.jar -rf csv org.apache.flink.state.benchmark.*"
```
If you want to execute just one benchmark, the best approach is to execute the selected main function manually. There are two main ways:

- From your IDE (hint: there is a JMH plugin for IntelliJ IDEA).
- From the command line, using a command like the one below. The Flink version can be set via the flink.version property; its default value is defined in pom.xml.
```
mvn -Dflink.version=1.5.0 clean package exec:exec \
  -Dexec.executable=java \
  -Dexec.args="-jar target/benchmarks.jar <benchmark_class>"
```
You can also run each benchmark once (with only one fork and one iteration) for testing, using the command below:
```
mvn test -P test
```
The recommended code structure is to define all benchmarks in Apache Flink and only wrap them here, in this repository, in executor classes.

This structure is a consequence of using the GPLv2-licensed JMH library for the actual execution of the benchmarks. Ideally, we would prefer to have all of the code moved to Apache Flink.
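The wrapper pattern described above can be sketched as follows. This is a minimal, hypothetical illustration using only the standard library (the class and method names are invented, and the real executor classes would carry JMH's `@Benchmark` annotation and start a JMH `Runner`): the measured logic lives in Apache Flink, while this repository only contributes a thin delegating wrapper.

```java
// Hypothetical sketch of the recommended split: benchmark logic lives in
// Apache Flink, and this repository contributes only a thin executor wrapper.
// Names are illustrative, not actual repository code.

// --- would live in Apache Flink (Apache 2.0 licensed) ---
class SumBenchmarkLogic {
    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += i; // the workload being measured
        }
        return total;
    }
}

// --- would live in this repository (thin executor wrapper) ---
public class SumBenchmarkExecutor {
    // In the real repository this method would be annotated with @Benchmark
    // and main() would start a JMH Runner; here we simply delegate.
    public static long benchmarkSum() {
        return SumBenchmarkLogic.sum(1_000);
    }

    public static void main(String[] args) {
        System.out.println(benchmarkSum()); // prints 499500
    }
}
```

Keeping the wrapper this thin means a licensing change or JMH upgrade only touches this repository, while the benchmarked logic evolves together with Flink itself.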
Regarding naming the benchmark methods, there is one important thing to keep in mind. When uploading the results to the codespeed web UI, the uploader uses just the benchmark's method name, combined with its parameters, to generate the benchmark's visible name in the UI. Because of that, it is important to keep method names unique and self-explanatory, and to avoid overly generic names (like benchmark, test, ...). Good examples of benchmark method names are networkThroughput and sessionWindow.

Please attach the results of your benchmarks.
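To see why generic method names collide, the naming behaviour can be sketched like this. The `visibleName` rule below is a hypothetical stand-in for the uploader's actual logic, shown only to illustrate that nothing but the method name and parameters distinguishes benchmarks in the UI:

```java
// Illustrative sketch (not the actual uploader code): the codespeed uploader
// derives the visible benchmark name only from the method name plus its
// parameters, so two methods named "benchmark" in different classes collide.
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class BenchmarkNameSketch {
    // Hypothetical name-building rule: "methodName(key1=value1, ...)".
    static String visibleName(String methodName, Map<String, String> params) {
        if (params.isEmpty()) {
            return methodName;
        }
        String joined = params.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(", "));
        return methodName + "(" + joined + ")";
    }

    public static void main(String[] args) {
        Map<String, String> params = new TreeMap<>();
        params.put("channels", "100");
        params.put("flushTimeout", "1ms");
        // A descriptive name like "networkThroughput" stays unique in the UI,
        // while a generic "benchmark" would be indistinguishable across classes.
        System.out.println(visibleName("networkThroughput", params));
    }
}
```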