# Benchmarks

In this document, we describe the benchmark types used for the runtime: engine micro benchmarks (section [Engine JMH microbenchmarks](#engine-jmh-microbenchmarks)), standard library benchmarks (section [Standard library benchmarks](#standard-library-benchmarks)), and how and where the results are stored and visualized (section [Visualization](#visualization)).

To track the performance of the engine, we use [JMH](https://openjdk.org/projects/code-tools/jmh/). There are two types of benchmarks:

- **Engine micro benchmarks** located directly in the `runtime` SBT project. These benchmarks are written in Java and measure the performance of specific parts of the engine.
- **Standard library benchmarks** located in the `test/Benchmarks` Enso project. These benchmarks, along with their harness code, are written entirely in Enso.

## Engine JMH microbenchmarks

These benchmarks are written in Java and are used to measure the performance of specific parts of the engine. The sources are located in the `runtime` SBT project, under the `src/bench` source directory.

### Running the benchmarks

To run the benchmarks, use the `bench` or `benchOnly` command: `bench` runs all the benchmarks, while `benchOnly` runs a single benchmark specified by its fully qualified name. The parameters for these benchmarks are hard-coded inside JMH annotations in the source files. To change, e.g., the number of measurement iterations, modify the parameter of the `@Measurement` annotation.
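For illustration, here is a minimal, hypothetical JMH benchmark showing where those hard-coded parameters live; the package, class, and method names are made up and do not correspond to any real engine benchmark:

```java
// Hypothetical example; real engine benchmarks live under the runtime
// project's src/bench sources.
package org.enso.example.bench;

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.infra.Blackhole;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Warmup(iterations = 3, time = 1)
@Measurement(iterations = 5, time = 1) // change measurement iterations here
@Fork(1) // set to 0 to run in-process, e.g. when debugging
@State(Scope.Benchmark)
public class ExampleBenchmark {

  @Benchmark
  public void sumIntegers(Blackhole blackhole) {
    long sum = 0;
    for (int i = 0; i < 1_000; i++) {
      sum += i;
    }
    // Consuming the result prevents the JIT from eliminating the loop.
    blackhole.consume(sum);
  }
}
```

With such a class on the benchmark source path, `benchOnly org.enso.example.bench.ExampleBenchmark` would run only this benchmark.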

### Debugging the benchmarks

Currently, the best way to debug a benchmark is to set its `@Fork` annotation to 0 and run the `withDebug` command like this:

```
withDebug --debugger benchOnly -- <fully qualified benchmark name>
```
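For example, with the hypothetical `ExampleBenchmark` class sketched above, the full invocation would be:

```
withDebug --debugger benchOnly -- org.enso.example.bench.ExampleBenchmark
```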

## Standard library benchmarks

Unlike the engine micro benchmarks, these benchmarks are written entirely in Enso and located in the `test/Benchmarks` Enso project. There are two ways to run them:

### Running standalone

A single source file in the project may contain multiple benchmark definitions. If the source file defines a `main` method, we can evaluate it the same way as any other Enso source file, for example via `runEngineDistribution --in-project test/Benchmarks --run <source file>`. The harness within the project is not meant for sophisticated benchmarking, but rather for quick local evaluation. See the documentation of the `Bench.measure` method for more details. For a more sophisticated approach, run the benchmarks via the JMH launcher.
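For example (the source file path below is illustrative; substitute an actual file from the project):

```
runEngineDistribution --in-project test/Benchmarks --run test/Benchmarks/src/Vector/Operations.enso
```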

### Running via JMH launcher

The JMH launcher is located in the `std-bits/benchmarks` directory, as the `std-benchmarks` SBT project. It is a single Java class with a `main` method that just delegates to the standard JMH launcher and therefore supports all the command line options of the standard launcher. For a full summary of the options, either see the JMH source code or run the launcher with the `-h` option.

The `std-benchmarks` SBT project supports the `bench` and `benchOnly` commands, which work the same as in the `runtime` project, except that the benchmark name does not have to be a fully qualified name; a regular expression suffices. To access the full flexibility of the JMH launcher, run it via `Bench/run`; for example, to see the help message: `Bench/run -h`. As another example, you can run all the benchmarks that have "New_Vector" in their name, with 3-second warmup iterations and 2 measurement iterations, with `Bench/run -w 3 -i 2 New_Vector`.
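Put together, a session in the sbt shell might look like this (the prompts and the `project` switch are illustrative):

```
sbt:enso> project std-benchmarks
sbt:std-benchmarks> Bench/run -h
sbt:std-benchmarks> Bench/run -w 3 -i 2 New_Vector
```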

Whenever you add or remove benchmarks in the `test/Benchmarks` project, the generated JMH sources need to be recompiled with `Bench/clean; Bench/compile`. You do not need to recompile the `std-benchmarks` project if you only modify the bodies of existing benchmarks.
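For example, after adding a new benchmark to `test/Benchmarks`:

```
sbt:std-benchmarks> Bench/clean; Bench/compile
```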

## Visualization

The benchmarks are invoked as a daily GitHub Action, which can also be triggered manually on a specific branch. The results are kept in the artifacts produced by the action runs. The `tools/performance/engine-benchmarks` directory contains a simple Python script for collecting and processing the results; see the README in that directory for more information about how to run it. This script is invoked regularly on a private machine, and the results are published at https://enso-org.github.io/engine-benchmark-results/.