High performance, concurrent functional programming abstractions

Streamly


Streaming Concurrently

Haskell lists express pure computations using composable stream operations like :, unfold, map, filter, zip and fold. Streamly extends this data flow programming model of pure lists to lists of concurrent monadic computations (streams) using the same primitives.

Streamly expresses concurrency using these same list primitives and other standard, well-known abstractions, so you do not need to know any low-level notions of concurrency like threads, locking or synchronization. Concurrency is automatically scaled up or down based on the needs of the application, so we can say goodbye to managing thread pools and the associated sizing issues as well. This is true, fearless and declarative concurrency. Streamly can be thought of as concurrent monadic lists; if you know Haskell lists then you already know how to use streamly.

Where to use streamly?

Everywhere. The answer to this question is similar to the answer to "Where do I use Haskell lists?". Streamly generalizes lists to monadic streams, and the IO monad to non-deterministic stream composition with concurrency. The IO monad becomes a special case of streamly: if we use single element streams, the behavior of streamly is identical to the IO monad, and IO code can be replaced with streamly by just prefixing IO actions with liftIO, without any loss of performance. Pure lists too become a special case of streamly: if we use the Identity monad, streams turn into pure lists. Non-concurrent programs are likewise a special case of concurrent ones: by just adding a combinator, a non-concurrent program becomes concurrent.

We can say that streamly is a superset of lists and IO, with built-in concurrency. If you want to write a program that involves IO, concurrent or not, then you can just use streamly as the base monad; heck, you could even use streamly for pure computations, as streamly performs on par with pure lists or vector. In fact, streamly is better than lists for appending: it appends much faster than lists, so you do not need difference lists for that.

If you need convincing about the streaming or data flow programming paradigm itself, then try to answer this question: why do we use lists? It boils down to why we use functional programming in the first place. Haskell is successful in enforcing this for pure computations, but not for monadic computations. In the absence of a standard, easy-to-use, or enforced data flow programming library for monadic computations, and with the IO monad providing an escape hatch to an imperative model, we just love to fall into the imperative trap.
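Since streamly streams generalize lists, the same pipeline can be written against pure lists and against streams and yields the same result. The following is a minimal sketch, assuming the streamly 0.5-era API used throughout this README (S.fromFoldable, S.filter, S.toList); the names pureResult and streamResult are illustrative:

```haskell
import Streamly
import qualified Streamly.Prelude as S

-- The same even-doubling pipeline, first as a pure list, then as a stream.
pureResult :: [Int]
pureResult = map (* 2) (filter even [1 .. 10])

streamResult :: IO [Int]
streamResult =
    S.toList $ serially $ fmap (* 2) $ S.filter even $ S.fromFoldable [1 .. 10]

main :: IO ()
main = do
    xs <- streamResult
    print (xs == pureResult)
```

Swapping serially for aheadly or asyncly in a pipeline like this is all it takes to evaluate its monadic actions concurrently.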

Show me an example

Here is some IO monad code to list a directory recursively:

import Control.Monad.IO.Class (liftIO)
import Path.IO (listDir, getCurrentDir) -- from path-io package

listDirRecursive = getCurrentDir >>= readdir
  where
    readdir dir = do
      (dirs, files) <- listDir dir
      liftIO $ mapM_ putStrLn
             $ map show dirs ++ map show files
      foldMap readdir dirs

This is your usual IO monad code, with no streamly-specific code whatsoever. This is how you can run it:

main :: IO ()
main = listDirRecursive

And this is how you can run exactly the same code using streamly with lookahead style concurrency; the only difference is that this time multiple directories are read concurrently:

import Streamly (runStream, aheadly)

main :: IO ()
main = runStream $ aheadly $ listDirRecursive

Isn't that magical? What's going on here? Streamly does not introduce any new abstractions; it just uses standard abstractions like Semigroup or Monoid to combine monadic streams concurrently, the way lists combine a sequence of pure values non-concurrently. Therefore, the foldMap in the code above turns into a concurrent monoidal composition of a stream of readdir computations.

How does it perform?

Providing monadic streaming and high level declarative concurrency does not mean that streamly compromises on performance in any way. The non-concurrent performance of streamly competes with lists and the vector library. The concurrent performance is as good as it gets; see the concurrency benchmarks for detailed performance results and a comparison with the async package.

The following chart shows a summary of the cost of key streaming operations processing a million elements. The timings for streamly and vector are in the 600-700 microseconds range and therefore can barely be seen in the graph. For more details, see streaming benchmarks.

[Chart: Streaming Operations at a Glance]

Streaming Pipelines

The following snippet provides a simple stream composition example that reads numbers from stdin, prints the squares of even numbers, and exits if an even number greater than 9 is entered.

import Streamly
import qualified Streamly.Prelude as S
import Data.Function ((&))

main = runStream $
       S.repeatM getLine
     & fmap read
     & S.filter even
     & S.takeWhile (<= 9)
     & fmap (\x -> x * x)
     & S.mapM print

Unlike pipes or conduit, and like vector and streaming, streamly composes stream data instead of stream processors (functions). A stream is just like a list and is explicitly passed around to functions that process it. Therefore, no special operator is needed to join stages in a streaming pipeline; the standard function application ($) or reverse function application (&) operator is enough. Combinators are provided in Streamly.Prelude to transform or fold streams.

Concurrent Stream Generation

Monadic construction and generation functions, e.g. consM, unfoldrM, replicateM, repeatM, iterateM and fromFoldableM, work concurrently when used with an appropriate stream type combinator (e.g. asyncly, aheadly or parallely).

The following code finishes in 3 seconds (6 seconds when serial):

> let p n = threadDelay (n * 1000000) >> return n
> S.toList $ aheadly $ p 3 |: p 2 |: p 1 |: S.nil
[3,2,1]

> S.toList $ parallely $ p 3 |: p 2 |: p 1 |: S.nil
[1,2,3]

The following finishes in 10 seconds (100 seconds when serial):

runStream $ asyncly $ S.replicateM 10 $ p 10

Concurrent Streaming Pipelines

Use |& or |$ to apply stream processing functions concurrently. The following example prints a "hello" every second; if you use & instead of |& you will see that the delay doubles to 2 seconds instead because of serial application.

main = runStream $
      S.repeatM (threadDelay 1000000 >> return "hello")
   |& S.mapM (\x -> threadDelay 1000000 >> putStrLn x)
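The |$ operator mentioned above is the concurrent analogue of plain function application ($), just as |& is the concurrent analogue of (&). As a sketch under the same assumptions as the snippet above (the streamly 0.5-era API), the same pipeline can be written consumer-first:

```haskell
import Streamly
import qualified Streamly.Prelude as S
import Control.Concurrent (threadDelay)

-- The consumer stage is applied concurrently to the producer stream:
-- both stages run in parallel, so a "hello" is printed every second.
main :: IO ()
main = runStream $
       S.mapM (\x -> threadDelay 1000000 >> putStrLn x)
    |$ S.repeatM (threadDelay 1000000 >> return "hello")
```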

Mapping Concurrently

We can use mapM or sequence functions concurrently on a stream.

> let p n = threadDelay (n * 1000000) >> return n
> runStream $ aheadly $ S.mapM (\x -> p 1 >> print x) (serially $ S.repeatM (p 1))

Serial and Concurrent Merging

Semigroup and Monoid instances can be used to fold streams serially or concurrently. In the following example we compose ten actions in the stream, each with a delay of 1 to 10 seconds, respectively. Since all the actions are concurrent we see one output printed every second:

import Streamly
import qualified Streamly.Prelude as S
import Control.Concurrent (threadDelay)

main = runStream $ parallely $ foldMap delay [1..10]
  where delay n = S.yieldM $ threadDelay (n * 1000000) >> print n

Streams can be combined together in many ways. We provide some examples below, see the tutorial for more ways. We use the following delay function in the examples to demonstrate the concurrency aspects:

import Streamly
import qualified Streamly.Prelude as S
import Control.Concurrent

delay n = S.yieldM $ do
    threadDelay (n * 1000000)
    tid <- myThreadId
    putStrLn (show tid ++ ": Delay " ++ show n)

Serial

main = runStream $ delay 3 <> delay 2 <> delay 1
ThreadId 36: Delay 3
ThreadId 36: Delay 2
ThreadId 36: Delay 1

Parallel

main = runStream . parallely $ delay 3 <> delay 2 <> delay 1
ThreadId 42: Delay 1
ThreadId 41: Delay 2
ThreadId 40: Delay 3

Nested Loops (aka List Transformer)

The monad instance composes like a list monad.

import Streamly
import qualified Streamly.Prelude as S

loops = do
    x <- S.fromFoldable [1,2]
    y <- S.fromFoldable [3,4]
    S.yieldM $ putStrLn $ show (x, y)

main = runStream loops
(1,3)
(1,4)
(2,3)
(2,4)

Concurrent Nested Loops

To run the above code with lookahead style concurrency, where each iteration in the loop can run concurrently while the results are still presented in the same order as serial execution:

main = runStream $ aheadly $ loops

To run it with depth first concurrency, yielding results asynchronously as they become available (deep async composition):

main = runStream $ asyncly $ loops

To run it with breadth first concurrency, yielding results asynchronously (wide async composition):

main = runStream $ wAsyncly $ loops

The above streams provide lazy/demand-driven concurrency which is automatically scaled as per demand and is controlled/bounded so that it can be used on infinite streams. The following combinator provides strict, unbounded concurrency irrespective of demand:

main = runStream $ parallely $ loops

To run it serially but interleaving the outer and inner loop iterations (breadth first serial):

main = runStream $ wSerially $ loops

Magical Concurrency

Streams can perform semigroup (<>) and monadic bind (>>=) operations concurrently using combinators like asyncly and parallely. For example, to concurrently generate squares of a stream of numbers and then concurrently sum the square roots of all combinations of two streams:

import Streamly
import qualified Streamly.Prelude as S

main = do
    s <- S.sum $ asyncly $ do
        -- Each square is performed concurrently, (<>) is concurrent
        x2 <- foldMap (\x -> return $ x * x) [1..100]
        y2 <- foldMap (\y -> return $ y * y) [1..100]
        -- Each addition is performed concurrently, monadic bind is concurrent
        return $ sqrt (x2 + y2)
    print s

The concurrency facilities provided by streamly can be compared with OpenMP and Cilk but with a more declarative expression.

Rate Limiting

For bounded concurrent streams, stream yield rate can be specified. For example, to print hello once every second you can simply write this:

import Streamly
import qualified Streamly.Prelude as S

main = runStream $ asyncly $ avgRate 1 $ S.repeatM $ putStrLn "hello"

For some practical uses of rate control, see AcidRain.hs and CirclingSquare.hs. The concurrency of the stream is automatically adjusted to match the specified rate. Rate control works precisely even at throughputs as high as millions of yields per second. For more sophisticated rate control, see the haddock documentation.
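Besides avgRate, the library's 0.5 release also lists rate, minRate, maxRate and constRate combinators: maxRate only caps the rate, while constRate pins it. A bounded sketch (assuming those combinators and the same API as the snippet above) could look like this:

```haskell
import Streamly
import qualified Streamly.Prelude as S

-- Pin the yield rate to exactly 2 per second: six "hello"s are
-- printed over roughly three seconds, then the program exits.
main :: IO ()
main = runStream $ asyncly $ constRate 2 $ S.replicateM 6 (putStrLn "hello")
```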

Reactive Programming (FRP)

Streamly is also a foundation for first-class reactive programming, by virtue of integrating concurrency and streaming. See AcidRain.hs for a console based FRP game example and CirclingSquare.hs for an SDL based animation example.

Conclusion

Streamly, short for streaming concurrently, provides monadic streams with a simple API, almost identical to standard lists, and built-in support for concurrency. By using stream-style combinators, streams can be generated, merged, chained, mapped, zipped, and consumed concurrently, providing a generalized high level programming framework that unifies streaming and concurrency. Controlled concurrency allows even infinite streams to be evaluated concurrently. Concurrency is auto-scaled based on feedback from the stream consumer. The programmer does not have to be aware of threads, locking or synchronization to write scalable concurrent programs.

Streamly is a programmer-first library, designed to be useful and friendly to programmers for solving practical problems in a simple and concise manner. Some key points in favor of streamly are:

  • Simplicity: Simple list-like streaming API; if you know how to use lists then you know how to use streamly. This library is built with simplicity and ease of use as a design goal.
  • Concurrency: Simple, powerful, and scalable concurrency. Concurrency is built in and not intrusive: concurrent programs are written exactly the same way as non-concurrent ones.
  • Generality: Unifies functionality provided by several disparate packages (streaming, concurrency, list transformer, logic programming, reactive programming) in a concise API.
  • Performance: Streamly is designed for high performance. It employs stream fusion optimizations for the best possible performance. Serial performance is equivalent to the venerable vector library in most cases and even better in some. Concurrent performance is unbeatable. See streaming-benchmarks for a comparison of popular streaming libraries on micro-benchmarks.

The basic streaming functionality of streamly is equivalent to that provided by streaming libraries like vector, streaming, pipes, and conduit. In addition to providing streaming functionality, streamly subsumes the functionality of list transformer libraries like pipes or list-t, and also the logic programming library logict. On the concurrency side, it subsumes the functionality of the async package, and provides even higher level concurrent composition. Because it supports streaming with concurrency we can write FRP applications similar in concept to Yampa or reflex.

Further Reading

For more information, see:

Contributing

The code is available under BSD-3 license on github. Join the gitter chat channel for discussions. You can find some of the todo items on the github wiki. Please ask on the gitter channel or contact the maintainer directly for more details on each item. All contributions are welcome!

This library was originally inspired by the transient package authored by Alberto G. Corona.