The names did not match the definition of the type. This happened because the
order of the arguments was changed at some point but the names were not
updated accordingly.
Use tail recursion in the worker loops. Run the work item under the saved
monadic state and restore the state after the work is done and before the next
work item is picked.
Benchmarks:
* asyncly/unfoldrM maxThreads 1
* wAsyncly/unfoldrM maxThreads 1
* aheadly/unfoldrM maxThreads 1
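The save/restore discipline in the worker loop can be sketched as follows. This is a minimal, hypothetical illustration with an `IORef` standing in for the monadic state; the library's actual workers operate on the SVar's work queue.

```haskell
import Data.IORef

-- The monadic state is modeled here as a plain Int in an IORef.
type State = Int

-- Tail-recursive worker loop: save the state before running a work item,
-- run the item (which may modify the state), and restore the saved state
-- before picking up the next item so one item's changes do not leak into
-- the next. The recursive call is in tail position, so the loop runs in
-- constant stack space.
workerLoop :: IORef State -> [IO ()] -> IO ()
workerLoop _ [] = return ()
workerLoop ref (work : rest) = do
    saved <- readIORef ref   -- save the current monadic state
    work                     -- run the work item under the saved state
    writeIORef ref saved     -- restore the state for the next item
    workerLoop ref rest      -- tail call
```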
When executing a stream in an ahead composition, where multiple streams have
been composed using `ahead`, we were executing each element of the stream with
the fresh state from the SVar at the fork point. This fix changes it so that
the fresh state from the SVar is used only at the start of stream execution
and not for each element of the stream. When yielding subsequent elements we
have to carry forward any state changes made while yielding the previous
elements.
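The difference can be illustrated with a toy step function that produces elements from a state. The names are hypothetical; the library's internal stream and SVar types are more involved.

```haskell
-- A step function yields an element and an updated state, or ends.
type Step s a = s -> Maybe (a, s)

-- Earlier (buggy) behavior: every element is generated from the fresh
-- state captured at the fork point, discarding updates made by earlier
-- yields.
drainReset :: s -> Step s a -> [a]
drainReset s0 step = go s0
  where
    go s = case step s of
        Nothing       -> []
        Just (a, _s') -> a : go s0  -- resets to the fork-point state

-- Fixed behavior: the state updated while yielding the previous element
-- is carried forward to the next one.
drainThread :: s -> Step s a -> [a]
drainThread s0 step = go s0
  where
    go s = case step s of
        Nothing      -> []
        Just (a, s') -> a : go s'   -- threads the updated state
```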
The "after", "finally" and "bracket" combinators did not run the "cleanup"
handler when the stream was only partially evaluated, e.g. using a lazy right
fold (foldrM); "Streamly.Prelude.head" is an example of such a fold. Since we
run the cleanup action when the stream stops, the action is not run if the
stream is not fully drained.
In the new implementation, we use a GC hook to run the cleanup action if the
stream is garbage collected before finishing. This takes care of the lazy
right fold cases mentioned above.
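The mechanism can be sketched with `mkWeakIORef` from base. This is a hypothetical standalone demo; the library attaches the hook to its own internal stream state.

```haskell
import Control.Concurrent (threadDelay)
import Data.IORef
import System.Mem (performMajorGC)
import System.Mem.Weak (mkWeakIORef)

main :: IO ()
main = do
    cleanedUp <- newIORef False
    -- The "resource" stands in for the stream state; the finalizer is the
    -- cleanup action that must run even if the stream is never drained.
    do resource <- newIORef ()
       _ <- mkWeakIORef resource (writeIORef cleanedUp True)
       _ <- readIORef resource  -- partially use the resource, then drop it
       return ()
    performMajorGC        -- trigger collection (for the demo only)
    threadDelay 100000    -- finalizers run asynchronously; give them time
    readIORef cleanedUp >>= print
```

Note that finalizers run at the GC's discretion, so this cleanup path can be delayed arbitrarily; the normal path of running the handler when the stream ends is still the prompt one.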
This is to make it the same as the other benchmarks. We had to change the size
of the extreme concatMapWith benchmark to keep it within reasonable time
limits.
* We were testing two extreme cases; add a middle case as well, where the
  outer and inner streams are of equal size.
* Enable some pure benchmarks as well.
* Move the zip benchmarks into a separate group: they are scalable (memory
  consumption does not increase with stream size) whereas the parallel
  benchmarks are not.
Streaming benchmarks take constant memory whereas the buffering ones take
memory proportional to the stream size, so we would like to test them
separately. For streaming benchmarks we can use a very large stream size to
make sure there is no space leak and we keep running in constant memory for a
long time.
`gauge --measure-with` forks a child process for each benchmark. The options
passed to the main gauge process are not passed to the child process, so we
use an environment variable to set the stream size and pass it on to the
child process.
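The hand-off can be sketched like this. The variable name `STREAM_SIZE` is an assumption for illustration, not necessarily what the benchmark driver actually uses.

```haskell
import System.Environment (lookupEnv, setEnv)

-- Read the stream size from the environment, falling back to a default.
-- A child process forked by `gauge --measure-with` inherits the parent's
-- environment, so a variable set in the parent is visible here.
getStreamSize :: Int -> IO Int
getStreamSize defaultSize = maybe defaultSize read <$> lookupEnv "STREAM_SIZE"

main :: IO ()
main = do
    setEnv "STREAM_SIZE" "100000"  -- done once in the parent process
    size <- getStreamSize 10000    -- done in each child process
    print size
```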
* Document the precise behavior; some changes were made relative to the
  earlier behavior.
* Make some changes to the implementation according to the (newly) documented
  behavior.
* takeByTime: perform the time check before generating the element so that we
  do not drop an element after generating it.
* takeByTime now yields at least one element if the duration is non-zero.
* dropByTime does not check the time after the drop duration is over.
* Add inspection tests.
* Make the tests use shorter durations; the earlier tests took too long.
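The before-generation time check can be sketched over a list of monadic actions. This is a hypothetical list-based helper, not the library's stream combinator.

```haskell
import GHC.Clock (getMonotonicTime)

-- Run actions until the given duration (in seconds) elapses. The clock is
-- checked *before* running each action, so no element is generated only
-- to be dropped; and since essentially zero time has elapsed at the first
-- check, at least one element is produced whenever the duration is
-- non-zero.
takeByTimeSketch :: Double -> [IO a] -> IO [a]
takeByTimeSketch duration actions = do
    start <- getMonotonicTime
    let go [] = return []
        go (m : ms) = do
            now <- getMonotonicTime
            if now - start >= duration
                then return []
                else do
                    x <- m
                    (x :) <$> go ms
    go actions
```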
Use a --stream-size option to accept the stream size. The option's position on
the command line is no longer fixed, so it can be specified anywhere; the
remaining arguments are used by gauge.
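One way to sketch this (a hypothetical helper; the benchmark driver's actual parsing may differ): pull `--stream-size` out of the argument list wherever it occurs and hand the rest to gauge, e.g. via `System.Environment.withArgs`.

```haskell
-- Extract "--stream-size N" from anywhere in the argument list, returning
-- the parsed size (or a default) together with the remaining arguments,
-- which can then be passed on to gauge.
extractStreamSize :: Int -> [String] -> (Int, [String])
extractStreamSize def ("--stream-size" : n : rest) =
    let (_, rest') = extractStreamSize def rest in (read n, rest')
extractStreamSize def (a : rest) =
    let (sz, rest') = extractStreamSize def rest in (sz, a : rest')
extractStreamSize def [] = (def, [])

main :: IO ()
main = print (extractStreamSize 100000 ["--quick", "--stream-size", "5000", "foo"])
```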
- We should try to match the version bounds of the core libraries with the
  minimum and maximum versions of GHC that we support. This would allow users
  to easily use the library together with another one that uses the GHC API,
  preventing possible version conflicts.
* pollCounts: poll the element count in another thread
* delayPost: introduce a delay in polling
* rollingMap: compute the diff of successive elements
These combinators can be used to compute and report the element
processing rate of a stream.
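For example, the rate computation boils down to diffing successive polled counts. Below is a list-based sketch of the rolling diff; the library's `rollingMap` works on streams and the argument order of its function may differ.

```haskell
-- Apply a function to each pair of successive elements. With cumulative
-- counts polled at a fixed interval, diffing successive counts gives the
-- number of elements processed per interval, i.e. the processing rate.
rollingMapSketch :: (a -> a -> b) -> [a] -> [b]
rollingMapSketch f xs = zipWith f xs (drop 1 xs)

main :: IO ()
main =
    -- cumulative counts polled once per second -> elements per second
    print (rollingMapSketch (\prev cur -> cur - prev) [0, 10, 25, 45 :: Int])
```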