Commit Graph

15 Commits

Author SHA1 Message Date
Harendra Kumar
633c55eb31 rename splitParse/parseMany, concatParse/parseIterate 2020-05-11 21:18:09 +05:30
Harendra Kumar
684d2786eb Add concatParse to chain parsers on a stream
Add mconcatTo, update fold docs
Add fold docs/snippets using Monoids.
Add benchmark for sum using foldMap
Add concatParse benchmark
Add splitParseTill, update docs
2020-05-11 20:54:22 +05:30
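The "sum using foldMap" benchmark mentioned above refers to summing by mapping
each element into a Monoid and combining the results. A minimal, self-contained
sketch of that idea on plain lists (not the streamly Fold API; the exact fold
names at this commit are not shown here):

    import Data.Monoid (Sum (..))

    -- Sum by mapping every element into the Sum monoid and combining them.
    sumViaFoldMap :: Num a => [a] -> a
    sumViaFoldMap = getSum . foldMap Sum

    main :: IO ()
    main = print (sumViaFoldMap [1 .. 10 :: Int])  -- prints 55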
Harendra Kumar
a03e2872a6 rename Parser module to ParserD
The top-level module "Parser" will use both ParserK and ParserD.
2020-04-24 00:30:01 +05:30
Harendra Kumar
f51bbb25bd Rename some parsers and add some TBD and comments 2020-04-16 23:11:49 +05:30
Harendra Kumar
fc24e44be1 fix ci to build benchmarks as well 2020-04-16 18:21:04 +05:30
Harendra Kumar
202c33d6e8 Add Alternative instance, combinators and parsers
parsers: peek, satisfy, eof, yield, yieldM, die, dieM
combinators: shortest, longest, alt, many, some, manyTill
2020-04-14 23:34:08 +05:30
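Below is a small, self-contained toy model of the parser primitives named in
this commit (peek, satisfy, eof, and alt via the Alternative instance). It only
illustrates what these mean on a stream of elements; it is not the streamly
implementation, whose types and signatures differ.

    import Control.Applicative (Alternative (..))

    -- Toy list-based parser: Nothing models failure (die), Just carries the
    -- result and the remaining input.
    newtype P a b = P { runP :: [a] -> Maybe (b, [a]) }

    instance Functor (P a) where
        fmap f (P p) = P $ fmap (\(b, rest) -> (f b, rest)) . p

    instance Applicative (P a) where
        pure b = P $ \s -> Just (b, s)
        P pf <*> P pb = P $ \s -> do
            (f, s1) <- pf s
            (b, s2) <- pb s1
            Just (f b, s2)

    instance Alternative (P a) where
        empty = P (const Nothing)                       -- like die
        P p <|> P q = P $ \s -> maybe (q s) Just (p s)  -- like alt: on failure, backtrack and try q

    peek :: P a a                    -- look at the next element without consuming it
    peek = P $ \s -> case s of { x : _ -> Just (x, s); [] -> Nothing }

    satisfy :: (a -> Bool) -> P a a  -- consume one element satisfying the predicate
    satisfy f = P $ \s -> case s of { x : xs | f x -> Just (x, xs); _ -> Nothing }

    eof :: P a ()                    -- succeed only at end of input
    eof = P $ \s -> if null s then Just ((), s) else Nothing

With an Alternative instance in place, many and some come for free from the
default class methods.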
Harendra Kumar
f1eb9a18c2 rename parseChunks to splitParse 2020-04-09 04:19:00 +05:30
Harendra Kumar
b9c461f7b9 rename some parsers, add some parsers
rename endOn etc to sepBy etc.
add sepByMax
add unimplemented skeletons for wordBy, groupBy
2020-04-09 04:19:00 +05:30
Harendra Kumar
136bf79bab Rename Parse type and Step constructors
Parse => Parser
Keep => Yield
Back => Skip
2020-04-09 04:06:57 +05:30
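As a rough orientation for the rename above, a hypothetical shape for such a
step type with the new constructor names; the fields and the terminal
constructor shown are assumptions for illustration only, not the definitions
in this commit:

    -- Hypothetical sketch, not the actual streamly definition: a parser step
    -- either yields progress with a new state, skips (possibly moving the
    -- input position back for backtracking), or stops with a result.
    data Step s b
        = Yield Int s
        | Skip Int s
        | Stop b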
Harendra Kumar
2723c15d36 Implement stream parsing
This is the initial version of stream parsing. It implements a "Parse"
type, some parsers based on that type, applicative composition, operations to
run parsers (parse) and to run parsers over chunks of a stream (parseChunks).

Parsers are an extension of the Folds in Streamly.Data.Fold: they add
backtracking and failure capabilities to folds.

Operations like splitOn, which splits a stream on a predicate, can now be
expressed by applying a parser repeatedly on a stream. For example, line
splitting can be done using parsers, and the parsers are as fast as the
fastest custom code for line splitting.
2020-04-09 04:06:57 +05:30
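A self-contained sketch of the "parser applied repeatedly" idea described in
the message above, modelled on plain lists rather than the streamly types: a
take-till-newline step, applied repeatedly over the remaining input (which is
what parseChunks does over a stream), splits the input into lines.

    -- Take elements until the predicate matches, dropping the matching element.
    takeTill :: (a -> Bool) -> [a] -> ([a], [a])
    takeTill p xs = let (pre, rest) = break p xs in (pre, drop 1 rest)

    -- Apply a parsing step repeatedly until the input is exhausted.
    parseRepeatedly :: ([a] -> (b, [a])) -> [a] -> [b]
    parseRepeatedly step = go
      where
        go [] = []
        go xs = let (b, rest) = step xs in b : go rest

    splitLines :: String -> [String]
    splitLines = parseRepeatedly (takeTill (== '\n'))

    main :: IO ()
    main = print (splitLines "hello\nworld")  -- ["hello","world"]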
Harendra Kumar
eccf24a4b1 Add singletonM, update some docs 2020-02-26 13:44:25 +05:30
Harendra Kumar
4f4964afca Use less memory for single chunk chunksOf benchmark
So that we can limit the heap to a reasonable value in tests.

Also move the chunksOf benchmarks out of the DEVBUILD flag as they do not
depend on a well-formed text file.
2020-02-14 17:50:58 +05:30
Harendra Kumar
60cee489ae Add exception handling combinators with guaranteed cleanup
The "after", "finally" and "bracket" combinators did not run the "cleanup"
handler in case the stream is lazily partially evaluated e.g. using the lazy
right fold (foldrM), "Streamly.Prelude.head" is an example of such a fold.

Since we run the cleanup action when the stream Stops, the action won't be run
if the stream is not fully drained.

In the new implementation, we use a GC hook to run the cleanup action if the
stream is garbage collected before finishing. This takes care of the lazy
right fold cases mentioned above.
2020-01-18 15:02:34 +05:30
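A hedged usage sketch of the scenario this commit fixes: bracket acquires a
resource, the consumer demands only the head of the stream, and the cleanup
action must still run (after this commit, via the GC hook) even though the
stream is never fully drained. The names follow Streamly.Prelude as mentioned
in the message; the exact signatures at this commit may differ.

    import qualified Streamly.Prelude as S

    main :: IO ()
    main = do
        -- Only the first element is demanded, so the stream is never drained;
        -- before this fix the "cleanup" handler below could be skipped.
        r <- S.head $ S.bracket
                (putStrLn "acquire" >> return "resource")
                (\_ -> putStrLn "cleanup")
                (\_ -> S.fromList [1 :: Int, 2, 3])
        print r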
Harendra Kumar
617a9e738d disable inspection test for splitOnSeq
The change to the StreamK/Type.serial implementation makes this test fail
because the IsStream dictionary is still around. However, there is not much
difference in the actual benchmark results, and the serial implementation
change yields much better results in other benchmarks.
2019-12-17 16:43:30 +05:30
adithyaov
ea875a020e Move Benchmark* to benchmark/ + Rm benchmark flag 2019-12-17 13:00:09 +05:30