\u001B[1mSYNOPSIS\u001B[0m
${project.description}
Original Maven URL:
\u001B[33mmvn:${pkgGroupId}/${pkgArtifactId}/${pkgVersion}\u001B[0m
\u001B[1mDESCRIPTION\u001B[0m
Profiling is an art, an art that is very hard to master. Writing micro-benchmarks is one of the many tools
available to programmers today. Micro-benchmarks are simple benchmarks that rarely involve complicated deployments
and are often used to test specific parts of an application. They are also characterized by the use of wall-clock
blocks: start a clock, run the code, stop the clock and report the result.
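For example, the naive wall-clock pattern looks like this in plain Java (a generic illustration of the
start/run/stop/report cycle, not Japex-specific code):

    public class WallClockBenchmark {
        public static void main(String[] args) {
            long start = System.nanoTime();            // start the clock
            StringBuilder sb = new StringBuilder();    // code under test (placeholder work)
            for (int i = 0; i < 1_000_000; i++) {
                sb.append('x');
            }
            long elapsed = System.nanoTime() - start;  // stop the clock
            System.out.println("Elapsed: " + elapsed + " ns");  // report the result
        }
    }

Everything outside the two nanoTime() calls is a choice the author makes, which is exactly why results
from ad-hoc harnesses like this are hard to compare.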
The lack of common tools for writing micro-benchmarks makes comparing results published by different people
impractical. Questions such as "When did you start/stop the clock?" or "What exactly did you include in the
wall-clock block?" or "Are you reporting latency or throughput?" often arise.
Japex is a simple yet powerful tool to write Java-based micro-benchmarks. It started as a simple project primarily
aimed at testing XML and Fast Infoset performance, but has evolved into a rather sophisticated framework with
support for XML and HTML output as well as various types of charts for displaying the results. It is similar in
spirit to JUnit in that it factors out most of the repetitive programming logic that is necessary to write
micro-benchmarks. This logic includes loading and initializing multiple drivers, warming up the VM, forking multiple
threads, timing the inner loop, etc. One of the key design goals for Japex was extensibility. Via the use of a
simple model of input and output parameters, it is possible to write micro-benchmarks to test practically anything.
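As a rough sketch of that model, a Japex driver is a class that extends the framework's driver base class
and supplies only the code to be timed. The class and method names below follow the com.sun.japex API as
commonly documented, but the parameter name "inputText" is hypothetical and the signatures should be
verified against the actual release:

    import com.sun.japex.JapexDriverBase;
    import com.sun.japex.TestCase;

    public class StringConcatDriver extends JapexDriverBase {

        private String input;

        @Override
        public void prepare(TestCase testCase) {
            // Read an input parameter declared in the test-suite configuration
            // ("inputText" is a hypothetical parameter name).
            input = testCase.getParam("inputText");
        }

        @Override
        public void run(TestCase testCase) {
            // Only this method is timed; Japex itself handles VM warm-up,
            // thread forking, iteration counts and result reporting.
            String s = "";
            for (int i = 0; i < 100; i++) {
                s += input;
            }
        }
    }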
\u001B[1mSEE ALSO\u001B[0m
\u001B[36mhttp://japex.java.net/\u001B[0m