LaTeX in README

Bodigrim 2023-07-05 21:22:38 +01:00
parent 8b9f5850d1
commit 6496998c66
3 changed files with 16 additions and 30 deletions


@@ -171,15 +171,15 @@ use `--time-mode` command-line option or set it locally via `TimeMode` option.
Here is a procedure used by `tasty-bench` to measure execution time:
-1. Set _n_ ← 1.
-2. Measure execution time _tₙ_ of _n_ iterations
-   and execution time _t₂ₙ_ of _2n_ iterations.
-3. Find _t_ which minimizes deviation of (_nt_, _2nt_) from (_tₙ_, _t₂ₙ_),
-   namely _t_ ← (_tₙ_ + _2t₂ₙ_) / _5n_.
+1. Set $n \leftarrow 1$.
+2. Measure execution time $t_n$ of $n$ iterations
+   and execution time $t_{2n}$ of $2n$ iterations.
+3. Find $t$ which minimizes deviation of $(nt,2nt)$ from $(t_n,t_{2n})$,
+   namely $t \leftarrow (t_n + 2t_{2n}) / 5n$.
4. If deviation is small enough (see `--stdev` below)
or time is running out soon (see `--timeout` below),
-   return _t_ as a mean execution time.
-5. Otherwise set _n_ ← _2n_ and jump back to Step 2.
+   return $t$ as a mean execution time.
+5. Otherwise set $n \leftarrow 2n$ and jump back to Step 2.
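
For reference, the update in step 3 is simply the ordinary least-squares fit of the two measurements; a short derivation sketch (an aside, not part of the diff itself):

```latex
% Least-squares fit of (n t, 2n t) to the measurements (t_n, t_{2n}).
\begin{align*}
  f(t)  &= (nt - t_n)^2 + (2nt - t_{2n})^2 \\
  f'(t) &= 2n(nt - t_n) + 4n(2nt - t_{2n}) = 0 \\
        &\Longrightarrow\; 10 n^2 t = 2n\,t_n + 4n\,t_{2n}
         \;\Longrightarrow\; t = \frac{t_n + 2 t_{2n}}{5n}.
\end{align*}
```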
This is roughly similar to the linear regression approach which `criterion` takes,
but we fit only the last two points. This allows us to simplify away all heavy-weight
@@ -187,7 +187,7 @@ statistical analysis. More importantly, earlier measurements,
which are presumably shorter and noisier, do not affect overall result.
This is in contrast to `criterion`, which fits all measurements and
is biased to use more data points corresponding to shorter runs
-(it employs _n_ ← _1.05n_ progression).
+(it employs $n \leftarrow 1.05n$ progression).
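
To make the loop concrete, here is a minimal Haskell sketch of the procedure above (an illustration, not tasty-bench's actual code: `timeN` and `measureLoop` are hypothetical names, the acceptance test is simplified, and the `--timeout` check is omitted):

```haskell
import Control.Monad (replicateM_)
import System.CPUTime (getCPUTime)

-- Time n runs of an action via getCPUTime, converting picoseconds
-- to seconds. A crude stand-in for the real measurement machinery.
timeN :: IO () -> Int -> IO Double
timeN act n = do
  start <- getCPUTime
  replicateM_ n act
  end <- getCPUTime
  pure (fromIntegral (end - start) * 1e-12)

-- The doubling loop: measure n and 2n iterations, fit t by least
-- squares, return it once the fit is tight enough, else double n.
measureLoop
  :: Double             -- ^ target relative deviation, e.g. 0.05
  -> (Int -> IO Double) -- ^ time n iterations, in seconds
  -> IO Double          -- ^ estimated time of one iteration
measureLoop targetStdev runIters = go 1
  where
    go n = do
      tn  <- runIters n
      t2n <- runIters (2 * n)
      let n'  = fromIntegral n
          -- Step 3: the t minimizing the deviation of (n t, 2n t)
          -- from (tn, t2n).
          t   = (tn + 2 * t2n) / (5 * n')
          -- Simplified acceptance test: worst absolute deviation
          -- of the two points from the fitted line.
          dev = max (abs (n' * t - tn)) (abs (2 * n' * t - t2n))
      if dev <= targetStdev * n' * t
        then pure t     -- Step 4: report t as the mean time.
        else go (2 * n) -- Step 5: double n and remeasure.

main :: IO ()
main = do
  -- Benchmark a trivial action; a real benchmark would force a
  -- freshly built value each iteration to defeat sharing.
  t <- measureLoop 0.05 (timeN (pure ()))
  putStrLn ("Estimated time per iteration: " ++ show t ++ " s")
```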
Mean time and its deviation do not say much about the
distribution of individual timings. E.g., imagine a computation which


@@ -57,20 +57,6 @@ for word in getCPUTime TimeMode RTSStats allocated_bytes copied_bytes max_mem_in
sed -i '' "s/@$word@/'$word'/g" README.haddock
done
-sed -i '' 's:/n/ ← /1.05n/:\\( n \\leftarrow 1.05n \\):g' README.haddock
-sed -i '' 's:/n/ ← /2n/:\\( n \\leftarrow 2n \\):g' README.haddock
-sed -i '' 's:/t/ ← (/tₙ/ + /2t₂ₙ/) \\/ /5n/:\\( t \\leftarrow (t_n + 2t_{2n}) / 5n \\):g' README.haddock
-sed -i '' 's:(/nt/, /2nt/):\\( (nt, 2nt) \\):g' README.haddock
-sed -i '' 's:from (/tₙ/,:from:g' README.haddock
-sed -i '' 's:/t₂ₙ/),:\\( (t_n, t_{2n}) \\),:g' README.haddock
-sed -i '' 's:/n/ ← 1:\\( n \\leftarrow 1 \\):g' README.haddock
-sed -i '' 's:/tₙ/:\\( t_n \\):g' README.haddock
-sed -i '' 's:/n/:\\( n \\):g' README.haddock
-sed -i '' 's:/t₂ₙ/:\\( t_{2n} \\):g' README.haddock
-sed -i '' 's:/2n/:\\( 2n \\):g' README.haddock
-sed -i '' 's:/t/:\\( t \\):g' README.haddock
sed -i '' "s;<<https://hackage.haskell.org/package/tasty-bench/src/example.svg Plotting>>;![Plotting](example.svg);g" README.haddock
sed -i '' "s/^Plotting$//g" README.haddock


@@ -151,15 +151,15 @@ option.
Here is a procedure used by @tasty-bench@ to measure execution time:
-1. Set \( n \leftarrow 1 \).
-2. Measure execution time \( t_n \) of \( n \) iterations and execution time
-   \( t_{2n} \) of \( 2n \) iterations.
-3. Find \( t \) which minimizes deviation of \( (nt, 2nt) \) from
-   \( (t_n, t_{2n}) \), namely \( t \leftarrow (t_n + 2t_{2n}) / 5n \).
+1. Set \(n \leftarrow 1\).
+2. Measure execution time \(t_n\) of \(n\) iterations and execution
+   time \(t_{2n}\) of \(2n\) iterations.
+3. Find \(t\) which minimizes deviation of \((nt,2nt)\) from
+   \((t_n,t_{2n})\), namely \(t \leftarrow (t_n + 2t_{2n}) / 5n\).
4. If deviation is small enough (see @--stdev@ below) or time is
-   running out soon (see @--timeout@ below), return \( t \) as a mean
+   running out soon (see @--timeout@ below), return \(t\) as a mean
execution time.
-5. Otherwise set \( n \leftarrow 2n \) and jump back to Step 2.
+5. Otherwise set \(n \leftarrow 2n\) and jump back to Step 2.
This is roughly similar to the linear regression approach which
@criterion@ takes, but we fit only the last two points. This allows us to
@@ -167,7 +167,7 @@ simplify away all heavy-weight statistical analysis. More importantly,
earlier measurements, which are presumably shorter and noisier, do not
affect overall result. This is in contrast to @criterion@, which fits
all measurements and is biased to use more data points corresponding to
-shorter runs (it employs \( n \leftarrow 1.05n \) progression).
+shorter runs (it employs \(n \leftarrow 1.05n\) progression).
Mean time and its deviation do not say much about the distribution of
individual timings. E.g., imagine a computation which (according to a