This is the initial version of stream parsing. It implements a "Parse"
type, some parsers based on that type, applicative composition, and operations
to run parsers (parse) and to run parsers over chunks of a stream (parseChunks).
Parsers are an extension of the Folds in Streamly.Data.Fold: they add
backtracking and failure capabilities to folds.
Operations like splitOn, which splits a stream on a predicate, can now be
expressed by applying a parser repeatedly to a stream. For example, line
splitting can be done using parsers, and performs as fast as the fastest custom
code for line splitting.
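As a rough illustration of the idea, a line splitter built by applying a parser repeatedly might look like the sketch below. This is a hedged sketch, not the code from this change: the module names (Streamly.Data.Parser, Streamly.Data.Stream) and the functions used (Parser.takeWhile, Parser.satisfy, Stream.parseMany) follow later streamly releases and may not match the API exactly as it was introduced here.

```haskell
-- Sketch only; module and function names are assumptions based on later
-- streamly releases and may differ from the API at the time of this change.
import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Data.Parser as Parser
import qualified Streamly.Data.Stream as Stream

-- One line: characters up to a newline, with the terminating newline
-- consumed. Built from fold-based parsers via applicative composition.
lineP :: Monad m => Parser.Parser Char m String
lineP = Parser.takeWhile (/= '\n') Fold.toList <* Parser.satisfy (== '\n')

-- Applying the parser repeatedly splits the character stream into lines.
-- splitLines :: Monad m => Stream.Stream m Char
--            -> Stream.Stream m (Either Parser.ParseError String)
splitLines = Stream.parseMany lineP
```

For simplicity the sketch assumes every line is newline-terminated; a real line parser would also accept a final unterminated line.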
This lets us limit the heap to a reasonable value in tests.
Also move the chunksOf benchmarks out of the DEVBUILD flag, as they do not
depend on a well-formed text file.
The "after", "finally" and "bracket" combinators did not run the "cleanup"
handler when the stream was only partially evaluated lazily, e.g. using the
lazy right fold (foldrM); "Streamly.Prelude.head" is an example of such a fold.
Since we run the cleanup action when the stream Stops, the action is never run
if the stream is not fully drained.
In the new implementation, we use a GC hook to run the cleanup action if the
stream gets garbage collected before finishing. This takes care of the lazy
right fold cases mentioned above.
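To make the failure mode concrete, here is a hedged sketch using combinator names from Streamly.Prelude of roughly this era; exact signatures and class constraints may differ in this version.

```haskell
-- Sketch only; assumes Streamly.Prelude exports head, finally and fromList
-- with roughly these shapes, as in streamly 0.7/0.8-era releases.
import qualified Streamly.Prelude as S

demo :: IO ()
demo = do
    -- 'S.head' lazily evaluates only enough of the stream to produce one
    -- element, so the stream never reaches its Stop. Previously the
    -- "cleanup" action below would therefore never run; with the GC hook it
    -- runs when the partially evaluated stream is garbage collected.
    _ <- S.head (S.finally (putStrLn "cleanup") (S.fromList [1 .. 100 :: Int]))
    return ()
```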
The change to the StreamK/Type.serial implementation makes this test fail
because the IsStream dictionary is still around. However, there is not much
difference in the actual benchmark results, and the serial implementation
change brings much better results in other benchmarks.