- Do the same with `bindWith`, `concatMapBy`, `concatMapIterateWith`,
`concatMapTreeWith`.
- `concatMapLoopWith` and `concatMapTreeYieldLeavesWith` are not updated
in the same way because they necessarily need the combining function to
be polymorphic.
- Update Changelog to reflect this change.
- `take`, `takeEQ`, `takeGE` now work when the count is <0.
- `sliceSepByMax` behaviour is now more in line with what the
documentation says in `Streamly.Internal.Data.Parser`.
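The negative-count fix for `take` above can be modeled on plain lists. A minimal sketch, where `takeNonNeg` is a hypothetical helper illustrating the intended semantics, not the actual streamly combinator (which operates on streams and folds):

```haskell
-- Hypothetical list-based model of the fixed behaviour:
-- a count < 0 now behaves like 0 instead of misbehaving.
takeNonNeg :: Int -> [a] -> [a]
takeNonNeg n xs
    | n <= 0    = []
    | otherwise = take n xs

main :: IO ()
main = do
    print (takeNonNeg (-1) [1, 2, 3 :: Int])  -- []
    print (takeNonNeg 2 [1, 2, 3 :: Int])     -- [1,2]
```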
In quick mode we use --include-first-iter anyway, so we should not run many
iterations; one iteration is enough, since allocations do not change across
iterations. We can use this for a super quick comparison of benchmarks.
The benchmarks are now in a FileSystem.Handle module corresponding to the
source module of the same name. They are also arranged by space complexity so
that we can apply RTS memory restrictions when running. In addition, the
longer benchmarks now use a shorter file.
"allocated" is much more stable for regression comparisons as it stays the same
whereas "time" varies based on various factors like cpu frequency, other things
running on the computer, context switches etc.
bytesCopied is a measure of long lived data being retained across GCs, which is
also a good measure of performance.
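Both counters are exposed programmatically through GHC's `GHC.Stats` module (the program must be run with `+RTS -T` for the stats to be populated). A minimal sketch that reads the two metrics; `summarize` is a hypothetical formatting helper, not part of any library:

```haskell
import Data.Word (Word64)
import GHC.Stats (RTSStats (..), getRTSStats, getRTSStatsEnabled)

-- Hypothetical helper: render the two metrics we care about.
summarize :: Word64 -> Word64 -> String
summarize alloc copied =
    "allocated=" ++ show alloc ++ " copied=" ++ show copied

main :: IO ()
main = do
    enabled <- getRTSStatsEnabled
    if enabled
        then do
            s <- getRTSStats
            -- allocated_bytes: total bytes allocated (stable across runs)
            -- copied_bytes: bytes copied during GC, a proxy for long-lived
            -- data retained across collections
            putStrLn (summarize (allocated_bytes s) (copied_bytes s))
        else putStrLn "RTS stats not enabled; run with +RTS -T"
```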
* UndecidableInstances -- Don't want to use unless indicated
* TypeFamilies -- Causes large regressions and improvements
* UnboxedTuples -- interferes with (#.)
Yield => Partial
Skip => Continue
Stop => Done
The new names are hopefully more intuitive. They also decouple the names from
the corresponding stream constructors so that there is no confusion about
mixing the two intuitions.
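The renamed constructors can be pictured with a step type along these lines. This is a simplified sketch: the actual definitions in `Streamly.Internal.Data.Fold` and `Streamly.Internal.Data.Parser` carry more information (e.g. backtracking counts), and `runStep` is a hypothetical toy driver:

```haskell
-- Simplified sketch of a fold/parser step type using the new names:
--   Partial  (was Yield): produced a new state, can accept more input
--   Continue (was Skip):  no progress on output yet, keep going
--   Done     (was Stop):  finished with a result
data Step s b
    = Partial s
    | Continue s
    | Done b

-- Hypothetical driver running a step function over a list.
runStep :: (s -> a -> Step s b) -> (s -> b) -> s -> [a] -> b
runStep step extract = go
  where
    go s []       = extract s
    go s (x : xs) = case step s x of
        Partial s'  -> go s' xs
        Continue s' -> go s' xs
        Done b      -> b

main :: IO ()
main = print (runStep sumTo10 id 0 [3, 4, 5, 6 :: Int])  -- 12
  where
    -- Stop as soon as the running sum reaches 10.
    sumTo10 s x
        | s + x >= 10 = Done (s + x)
        | otherwise   = Partial (s + x)
```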
Fixes #559
Some of the benchmarks were an order of magnitude off due to missing INLINE
pragmas on type class operations. Now all of them are within reasonable
limits. Benchmarks affected for serial streams:
* Functor, Applicative, Monad, transformers
We need to do a similar exercise for other types of streams and for
folds/parsers as well.
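The underlying fix is adding INLINE pragmas to the class method definitions in each instance. A minimal sketch on a hypothetical `Stream` newtype (not streamly's actual type):

```haskell
-- Hypothetical newtype illustrating an INLINE pragma on a type class
-- method. Without the pragma, GHC may not inline fmap across modules,
-- blocking stream fusion and costing an order of magnitude in benchmarks.
newtype Stream a = Stream { unStream :: [a] }

instance Functor Stream where
    {-# INLINE fmap #-}
    fmap f (Stream xs) = Stream (map f xs)

main :: IO ()
main = print (unStream (fmap (+ 1) (Stream [1, 2, 3 :: Int])))  -- [2,3,4]
```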
* Add 3 interesting cases for each concatMap case
* For mapM, map concurrently on a serial stream so that we measure only the
  concurrency overhead of mapM and not concurrent generation plus mapM
* For Async streams add some benchmarks involving the `async` combinator.
* Add a benchmark for `foldrS`