Running Benchmarks

The bench.sh script at the root of the repo is the top-level driver for running benchmarks. It runs the requested benchmarks and then creates a report from the results using the bench-show package. Run bench.sh --help to see the available options.
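
For example, to list the script's options:

$ ./bench.sh --help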

Quick start

Run these commands from the root of the repo.

To run the default benchmarks:

$ ./bench.sh

To run all benchmarks:

$ ./bench.sh --benchmarks all

To run linear and linear-async benchmarks:

$ ./bench.sh --benchmarks "linear linear-async"

To run only the base benchmark suite, and within it only the benchmarks whose names are prefixed with StreamD (anything after a -- is passed on to gauge):

$ ./bench.sh --benchmarks base -- StreamD

Comparing benchmarks

To compare two sets of results, first run the benchmarks at the baseline commit:

$ ./bench.sh

Then check out the commit that you want to compare against the baseline and run the script with the --append option; the report will show the comparison against the baseline:

$ ./bench.sh --append

--append simply adds the next set of results to the same results file. You can keep appending more results and all of them will be compared against the baseline.
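
Putting the two steps together, an append-based comparison might look like this sketch (the branch names master and my-feature are only placeholders, and the linear suite is picked as an example):

$ git checkout master          # baseline commit (placeholder name)
$ ./bench.sh --benchmarks linear
$ git checkout my-feature      # candidate commit (placeholder name)
$ ./bench.sh --benchmarks linear --append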

You can use --compare to compare the previous commit with the head commit:

$ ./bench.sh --compare

To compare the head commit with some other base commit:

$ ./bench.sh --compare --base d918833

To compare two arbitrary commits:

$ ./bench.sh --compare --base d918833 --candidate 38aa5f2

Note that the above may not always work because the script and the benchmarks themselves might have changed across the commits. The --append method is a more reliable way to compare.

Available Benchmarks

The benchmark names that you can use when running bench.sh:

  • base: measures the raw operations of the basic streams StreamD and StreamK

  • linear: measures the non-monadic operations of serial streams

  • linear-async: measures the non-monadic operations of concurrent streams

  • linear-rate: measures the rate limiting operations

  • nested: measures the monadic operations of all streams

  • all: runs all of the above benchmarks
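
For instance, to run just the linear and nested suites from the list above in one invocation:

$ ./bench.sh --benchmarks "linear nested"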

Reporting without measuring

You can use the --no-measure option to report the already measured results from the benchmark results file. A results file may collect an arbitrary number of results by running with --append multiple times. Each benchmark has its own results file; for example, the results file for the linear benchmark is charts/linear/results.csv.
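
For example, a sketch of regenerating the report for the linear benchmark from its existing results file, without re-running the measurements (here --benchmarks is assumed to select which suite's results file is reported):

$ ./bench.sh --benchmarks linear --no-measure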

You can also manually edit the file to remove a set of results, or to append results saved earlier or taken from some other results file. After editing, run bench.sh with the --no-measure option to see the reports corresponding to the edited results.