This repository contains a set of micro-benchmarks designed to run on a single machine, to help Apache Flink's developers assess the performance implications of their changes.
The main methods defined in the various classes (test cases) use the JMH micro-benchmark suite to define runners that execute those test cases. You can execute the default benchmark suite (which takes ~1 hour) all at once:
mvn clean install exec:exec
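For context, such runner main methods follow the standard JMH pattern. Below is a minimal, self-contained sketch; the class and benchmark names are hypothetical, not actual classes from this repository:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class ExampleBenchmarkExecutor {

    // A hypothetical benchmark method; in this repository the actual
    // measurement logic is defined in Apache Flink (see below).
    @Benchmark
    public long sumBaseline() {
        long sum = 0;
        for (int i = 0; i < 1_000; i++) {
            sum += i;
        }
        return sum; // return the result so JMH prevents dead-code elimination
    }

    public static void main(String[] args) throws Exception {
        // Run only the benchmarks whose names match the include pattern.
        Options options = new OptionsBuilder()
                .include(ExampleBenchmarkExecutor.class.getSimpleName())
                .build();
        new Runner(options).run();
    }
}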
There is also a separate benchmark suite for state backends; you can execute it (which also takes ~1 hour) with the following command:
mvn clean package exec:exec \
    -Dbenchmarks="org.apache.flink.state.benchmark.*"
If you want to execute just one benchmark, the best approach is to execute the selected main function manually. There are two main ways to do this:
From your IDE (hint: there is a plugin for IntelliJ IDEA). Remember to select the desired flink.version; the default value for the property is defined in pom.xml.
From the command line, using a command like:
mvn -Dflink.version=<FLINK_VERSION> clean package exec:exec \
    -Dbenchmarks="<benchmark_class>"
An example Flink version is -Dflink.version=1.12-SNAPSHOT.
You can also run each benchmark just once (with only one fork and one iteration) for testing, with the command below:
mvn test -P test
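For reference, a run with a single fork and a single iteration corresponds to ordinary JMH runner options; here is a minimal sketch of an equivalent programmatic configuration (the class name and include pattern are hypothetical, and other JMH settings are left at their defaults):

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class SingleShotTestRun {
    public static void main(String[] args) throws Exception {
        // Run each matching benchmark once: a single fork and a single
        // measurement iteration, as the "test" profile does.
        Options options = new OptionsBuilder()
                .include(".*")  // hypothetical: select all benchmarks
                .forks(1)
                .measurementIterations(1)
                .build();
        new Runner(options).run();
    }
}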
The recent addition of OpenSSL-based benchmarks requires one of two netty-tcnative flavors to be active:

A dynamically linked flavor (the default), which relies on OpenSSL libraries available on your system.

A statically linked flavor, selected with mvn -Dnetty-tcnative.flavor=static, which requires flink-shaded-netty-tcnative-static in the version from pom.xml. This module is not provided by Apache Flink by default due to licensing issues (see https://issues.apache.org/jira/browse/LEGAL-393), but it can be generated from inside a corresponding flink-shaded source checkout via:

mvn clean install -Pinclude-netty-tcnative-static -pl flink-shaded-netty-tcnative-static

If neither option works, the OpenSSL benchmarks will fail, but that should not influence any other benchmarks.
The recommended code structure is to define all benchmarks in Apache Flink and only wrap them into executor classes here, in this repository.
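As a rough illustration of that structure, here is a hedged Java sketch; RecordSerializationBenchmark stands in for a benchmark class that would live in Apache Flink, and all names are hypothetical:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.OptionsBuilder;

// Stand-in for a benchmark class that would be defined in Apache Flink;
// in this sketch it only simulates some work.
class RecordSerializationBenchmark {
    long run() {
        long checksum = 0;
        for (int i = 0; i < 1_000; i++) {
            checksum += i;
        }
        return checksum;
    }
}

// The executor class lives in this repository and only wraps the
// Flink-side benchmark logic.
@State(Scope.Thread)
public class SerializationBenchmarkExecutor {

    private final RecordSerializationBenchmark benchmark =
            new RecordSerializationBenchmark();

    @Benchmark
    public long serializeRecords() {
        return benchmark.run();
    }

    public static void main(String[] args) throws Exception {
        new Runner(new OptionsBuilder()
                .include(SerializationBenchmarkExecutor.class.getSimpleName())
                .build()).run();
    }
}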
Regarding the naming of benchmark methods, there is one important consideration: when results are uploaded to the codespeed web UI, the uploader uses just the benchmark's method name combined with its parameters to generate the benchmark's visible name in the UI. Because of that, it is important to choose method names that are unique across all benchmarks and meaningful on their own.
Good examples of how to name benchmark methods are shown in the sketch below.
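The following Java sketch is for illustration only; the class, method, and parameter names are hypothetical, not taken from this repository. The idea is that a name like networkThroughputPerSecond stays unique and self-describing in the UI, while a generic name like benchmark1 would not.

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class NamingExample {

    // Parameter values become part of the visible benchmark name in the UI.
    @Param({"100", "1000"})
    public int recordsPerInvocation;

    // Uploaded under a name derived from the method name plus parameters,
    // e.g. roughly "networkThroughputPerSecond.100": unique and meaningful
    // even without the class name (hypothetical name and format).
    @Benchmark
    public long networkThroughputPerSecond() {
        long checksum = 0;
        for (int i = 0; i < recordsPerInvocation; i++) {
            checksum += i;
        }
        return checksum;
    }
}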
When submitting changes, please attach the results of your benchmarks.