Running Benchmarks

The bench.sh script at the root of the repo is the top-level driver for running benchmarks. It runs the requested benchmarks and then creates a report from the results using the bench-show package. Run bench.sh --help to see the available options.

Quick start

Run these commands from the root of the repo.

To run the default benchmarks:

$ ./bench.sh

To run all benchmarks:

$ ./bench.sh --benchmarks all

To run linear and linear-async benchmarks:

$ ./bench.sh --benchmarks "linear linear-async"

To run only the base benchmark suite, restricted to the benchmarks whose names are prefixed with StreamD (anything after a -- is passed through to gauge):

$ ./bench.sh --benchmarks base -- StreamD

Comparing benchmarks

To compare two sets of results, first run the benchmarks at the baseline commit:

$ ./bench.sh

Then, at the commit you want to compare against the baseline, run with the --append option. The report will show the comparison with the baseline:

$ ./bench.sh --append

The --append option simply adds the next set of results to the same results file. You can keep appending more result sets, and each of them will be compared with the baseline.
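The accumulation of result sets in one file can be pictured with a minimal sketch. The CSV layout below is purely illustrative (a single Name,Mean pair per set); the real file written by gauge/bench-show has more columns:

```shell
# Illustrative only: simulate one baseline run plus two --append runs
# accumulating in a single results file. The CSV columns here are a
# simplified stand-in for the real bench-show format.
results=$(mktemp)

# Baseline run writes the first set of results.
printf 'Name,Mean\ntoList,1.20\n' > "$results"

# Each subsequent --append run adds another set to the same file;
# the report then compares every later set against the first one.
printf 'Name,Mean\ntoList,1.05\n' >> "$results"
printf 'Name,Mean\ntoList,0.98\n' >> "$results"

# Three sets of results now live in one file.
count=$(grep -c '^toList,' "$results")
echo "$count result sets recorded"

rm -f "$results"
```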

You can use --compare to compare the previous commit with the head commit:

$ ./bench.sh --compare

To compare the head commit with some other base commit:

$ ./bench.sh --compare --base d918833

To compare two arbitrary commits:

$ ./bench.sh --compare --base d918833 --candidate 38aa5f2

Note that the above may not always work, because the script and the benchmarks themselves might have changed across the commits. The --append method is the more reliable way to compare.

Available Benchmarks

The benchmark names that you can use when running bench.sh:

  • base: a benchmark that measures the raw operations of the basic streams StreamD and StreamK.

  • linear: measures the non-monadic operations of serial streams

  • linear-async: measures the non-monadic operations of concurrent streams

  • linear-rate: measures the rate limiting operations

  • nested: measures the monadic operations of all streams

  • all: runs all of the above benchmarks

Reporting without measuring

You can use the --no-measure option to generate reports from results that have already been measured and stored in the benchmark results file. A results file can accumulate an arbitrary number of result sets through repeated runs with --append. Each benchmark has its own results file; for example, the linear benchmark stores its results in charts/linear/results.csv.

You can also edit the file manually, either to remove a set of results or to append results saved earlier or taken from some other results file. After editing, run bench.sh with the --no-measure option to regenerate the reports from the edited results.
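As a sketch of such manual editing, the snippet below drops one result line from a results file before a --no-measure run. The CSV layout is a simplified assumption; check the actual charts/&lt;benchmark&gt;/results.csv for the real column format before editing it:

```shell
# Illustrative only: remove one result line from a results file while
# keeping the header. The single Name,Mean column pair is a simplified
# stand-in for the real results.csv format.
src=$(mktemp); dst=$(mktemp)
printf 'Name,Mean\ntoList,1.20\ntoList,1.05\ntoList,0.98\n' > "$src"

# Keep the header (line 1) and every line except the unwanted result.
awk 'NR == 1 || $0 != "toList,1.05"' "$src" > "$dst"

remaining=$(grep -c '^toList,' "$dst")
echo "$remaining result lines remain"

rm -f "$src" "$dst"
```

After replacing the real results file with the edited copy, bench.sh --no-measure would report only the remaining result sets.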