Fix some easy-to-make typos

This commit is contained in:
Brian Wignall 2019-11-25 13:50:32 -05:00 committed by Harendra Kumar
parent 50b7824d2e
commit c0a93a4033
15 changed files with 21 additions and 21 deletions

@@ -191,7 +191,7 @@ folds where there is no incoming state, we start with the initial state
1. When we are building a concurrent stream that needs to share the same `SVar`
we pass the incoming state as is.
-2. In all other cases we must not share the SVar and everytime we pass on the
+2. In all other cases we must not share the SVar and every time we pass on the
state to run a stream we must use `rstState` to reset the `SVar` in the
state.

@@ -355,7 +355,7 @@ concurrently using this.
* Add `iterate`, `iterateM` stream operations
### Bug Fixes
-* Fixed a bug that casued unexpected behavior when `pure` was used to inject
+* Fixed a bug that caused unexpected behavior when `pure` was used to inject
values in Applicative composition of `ZipStream` and `ZipAsync` types.
## 0.1.1

@@ -21,7 +21,7 @@ print_help () {
echo "commit is generated, in the 'charts' directory."
echo "Use --base and --candidate to select the commits to compare."
echo
-echo "Any arguments after a '--' are passed directly to guage"
+echo "Any arguments after a '--' are passed directly to gauge"
exit
}
@@ -86,7 +86,7 @@ build_report_progs() {
# We run the benchmarks in isolation in a separate process so that different
# benchmarks do not interfere with other. To enable that we need to pass the
-# benchmark exe path to guage as an argument. Unfortunately it cannot find its
+# benchmark exe path to gauge as an argument. Unfortunately it cannot find its
# own path currently.
# The path is dependent on the architecture and cabal version.

@@ -1,6 +1,6 @@
# Internal vs External
-We keep all modules exposed to faciliate convenient exposure of experimental
+We keep all modules exposed to facilitate convenient exposure of experimental
APIs and constructors to users. It allows users of the library to experiment
much more easily and carry a caveat that these APIs can change in future
without notice. Since everything is exposed, maintainers do not have to think

@@ -139,7 +139,7 @@ null-terminated UTF-8 encoded string.
### Error recovery
-It is sometimes desireable to recover from errors when decoding strings
+It is sometimes desirable to recover from errors when decoding strings
that are supposed to be UTF-8 encoded. Programmers should be aware that
this can negatively affect the security properties of their application.
A common recovery method is to replace malformed sequences with a
@@ -155,8 +155,8 @@ aswell.
The following code implements one such recovery strategy. When an
unexpected byte is encountered, the sequence up to that point will be
-replaced and, if the error occured in the middle of a sequence, will
-retry the byte as if it occured at the beginning of a string. Note that
+replaced and, if the error occurred in the middle of a sequence, will
+retry the byte as if it occurred at the beginning of a string. Note that
the decode function detects errors as early as possible, so the sequence
`0xED 0xA0 0x80` would result in three replacement characters.
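The recovery strategy described in this hunk can be sketched as a small standalone decoder. This is an illustrative reimplementation, not the document's actual code; `decodeLenient` and `replacementChar` are hypothetical names. The first continuation byte is range-checked against the leading byte, so errors are detected as early as possible, matching the behavior described for `0xED 0xA0 0x80`.

```haskell
import Data.Bits ((.&.), (.|.), shiftL)
import Data.Char (chr)
import Data.Word (Word8)

replacementChar :: Char
replacementChar = '\xFFFD'

-- Decode a byte list, replacing malformed input with U+FFFD. On an
-- unexpected byte, the bytes consumed so far yield one replacement and
-- the offending byte is retried as the start of a new sequence.
decodeLenient :: [Word8] -> String
decodeLenient = go
  where
    go [] = []
    go (b:bs)
      | b <= 0x7F              = chr (fromIntegral b) : go bs
      | b >= 0xC2 && b <= 0xDF = cont 1 0x80 0xBF (fromIntegral b .&. 0x1F) bs
      | b == 0xE0              = cont 2 0xA0 0xBF (fromIntegral b .&. 0x0F) bs
      | b >= 0xE1 && b <= 0xEC = cont 2 0x80 0xBF (fromIntegral b .&. 0x0F) bs
      | b == 0xED              = cont 2 0x80 0x9F (fromIntegral b .&. 0x0F) bs
      | b >= 0xEE && b <= 0xEF = cont 2 0x80 0xBF (fromIntegral b .&. 0x0F) bs
      | b == 0xF0              = cont 3 0x90 0xBF (fromIntegral b .&. 0x07) bs
      | b >= 0xF1 && b <= 0xF3 = cont 3 0x80 0xBF (fromIntegral b .&. 0x07) bs
      | b == 0xF4              = cont 3 0x80 0x8F (fromIntegral b .&. 0x07) bs
      | otherwise              = replacementChar : go bs  -- stray continuation or invalid lead

    -- n continuation bytes remain; (lo, hi) bounds the next one, which
    -- is how a surrogate prefix like 0xED 0xA0 is rejected at byte two.
    cont :: Int -> Word8 -> Word8 -> Int -> [Word8] -> String
    cont 0 _ _ acc bs = chr acc : go bs
    cont n lo hi acc (b:bs)
      | b >= lo && b <= hi =
          cont (n - 1) 0x80 0xBF ((acc `shiftL` 6) .|. (fromIntegral b .&. 0x3F)) bs
    cont _ _ _ _ bs = replacementChar : go bs  -- error: retry offending bytes as new input
```

With this sketch, `decodeLenient [0xED, 0xA0, 0x80]` produces three replacement characters, as the passage states.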

@@ -69,7 +69,7 @@ mainMaybeBelow = do
-- of non-determinism below.
--
-- Note that this is redundant configuration as the same behavior can be
--- acheived with just streamly, using mzero.
+-- achieved with just streamly, using mzero.
--
getSequenceMaybeAbove :: (IsStream t, MonadIO (t m)) => MaybeT (t m) ()
getSequenceMaybeAbove = do

@@ -236,7 +236,7 @@ import qualified Streamly.Prelude as P
import qualified Streamly.Internal.Prelude as IP
import qualified Streamly.Streams.StreamK as K
--- XXX provide good succinct examples of pipelining, merging, splitting ect.
+-- XXX provide good succinct examples of pipelining, merging, splitting etc.
-- below.
--
-- $streams

@@ -190,7 +190,7 @@ module Streamly.Data.Fold
-- ...
-- @
--
--- To compute the average of numbers in a stream without going throught he
+-- To compute the average of numbers in a stream without going through the
-- stream twice:
--
-- >>> let avg = (/) <$> FL.sum <*> fmap fromIntegral FL.length
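The single-pass `avg` in the doc comment above relies on an Applicative instance for folds. A minimal sketch of how that works (this is illustrative, not Streamly's actual implementation; `Fold`, `sumF`, `lengthF`, and `foldList` are hypothetical names):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A left fold: a step function, an initial state, and a finalizer.
data Fold a b = forall s. Fold (s -> a -> s) s (s -> b)

instance Functor (Fold a) where
  fmap f (Fold step begin done) = Fold step begin (f . done)

instance Applicative (Fold a) where
  pure b = Fold (\s _ -> s) () (const b)
  Fold stepL beginL doneL <*> Fold stepR beginR doneR =
    Fold
      (\(sL, sR) a -> (stepL sL a, stepR sR a))  -- feed each input to both folds
      (beginL, beginR)
      (\(sL, sR) -> doneL sL (doneR sR))

sumF :: Num a => Fold a a
sumF = Fold (+) 0 id

lengthF :: Fold a Int
lengthF = Fold (\n _ -> n + 1) 0 id

-- Run a fold over a list in a single pass.
foldList :: Fold a b -> [a] -> b
foldList (Fold step begin done) = done . foldl step begin

-- Average as an applicative composition of sum and length.
avg :: Fold Double Double
avg = (/) <$> sumF <*> fmap fromIntegral lengthF
```

Because `<*>` pairs the two states and feeds every input to both step functions, `sumF` and `lengthF` run over the stream exactly once.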

@@ -141,7 +141,7 @@ module Streamly.Internal.Data.Pipe
-- ...
-- @
--
--- To compute the average of numbers in a stream without going throught he
+-- To compute the average of numbers in a stream without going through the
-- stream twice:
--
-- >>> let avg = (/) <$> FL.sum <*> fmap fromIntegral FL.length
@@ -1805,7 +1805,7 @@ classifyKeepAliveChunks spanout = classifyChunksBy spanout True
-- All the input elements belonging to a session are collected using the fold
-- @f@. The session key and the fold result are emitted in the output stream
-- when the session is purged either via the session close event or via the
--- session liftime timeout.
+-- session lifetime timeout.
--
-- @since 0.7.0
{-# INLINABLE classifySessionsBy #-}

@@ -1674,7 +1674,7 @@ estimateWorkers workerLimit svarYields gainLossYields
-- maxWorkerLatency.
--
let
-- How many workers do we need to acheive the required rate?
+-- How many workers do we need to achieve the required rate?
--
-- When the workers are IO bound we can increase the throughput by
-- increasing the number of workers as long as the IO device has enough
@@ -1691,7 +1691,7 @@ estimateWorkers workerLimit svarYields gainLossYields
-- use that to determine the max rate of workers, and also take the CPU
-- bandwidth into account. We can also discover the IO bandwidth if we
-- know that we are not CPU bound, then how much steady state rate are
--- we able to acheive. Design tests for CPU bound and IO bound cases.
+-- we able to achieve. Design tests for CPU bound and IO bound cases.
-- Calculate how many yields are we ahead or behind to match the exact
-- required rate. Based on that we increase or decrease the effective
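The worker-estimation arithmetic these comments describe can be sketched roughly as follows. This is a hedged illustration only; `requiredWorkers` and its parameters are hypothetical names, not the actual `estimateWorkers` implementation, and rates are assumed to be in yields per second:

```haskell
-- How many workers are needed to achieve the target rate, given the
-- measured steady-state rate of a single worker, capped by the
-- configured worker limit and never below one worker.
requiredWorkers :: Double -> Double -> Int -> Int
requiredWorkers targetRate perWorkerRate workerLimit
  | perWorkerRate <= 0 = workerLimit  -- no measurement yet: use the cap
  | otherwise =
      min workerLimit (max 1 (ceiling (targetRate / perWorkerRate)))
```

For IO-bound workers this scales roughly linearly until the device saturates; for CPU-bound workers the per-worker rate itself drops as workers exceed the core count, which is why the comments suggest measuring steady-state rate rather than assuming it.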

@@ -22,7 +22,7 @@
-- session consisting of multiple reads and writes to the handle, these APIs
-- are one shot read or write APIs. These APIs open the file handle, perform
-- the requested operation and close the handle. Thease are safer compared to
--- the handle based APIs as there is no possiblity of a file descriptor
+-- the handle based APIs as there is no possibility of a file descriptor
-- leakage.
--
-- > import qualified Streamly.Internal.FileSystem.File as File

@@ -2077,7 +2077,7 @@ insertBy cmp x m = fromStreamS $ S.insertBy cmp x (toStreamS m)
-- Deleting
------------------------------------------------------------------------------
--- | Deletes the first occurence of the element in the stream that satisfies
+-- | Deletes the first occurrence of the element in the stream that satisfies
-- the given equality predicate.
--
-- @
@@ -3644,7 +3644,7 @@ classifyKeepAliveChunks spanout = classifyChunksBy spanout True
-- All the input elements belonging to a session are collected using the fold
-- @f@. The session key and the fold result are emitted in the output stream
-- when the session is purged either via the session close event or via the
--- session liftime timeout.
+-- session lifetime timeout.
--
-- @since 0.7.0
{-# INLINABLE classifySessionsBy #-}

@@ -781,7 +781,7 @@ import Streamly.Internal.Prelude
-- left fold reconstructs in a LIFO style, thereby reversing the order of
-- elements..
-- 3. A right fold has termination control and therefore can terminate early
--- without going throught the entire input, a left fold cannot terminate
+-- without going through the entire input, a left fold cannot terminate
-- without consuming all of its input. For example, a right fold
-- implementation of 'or' can terminate as soon as it finds the first 'True'
-- element, whereas a left fold would necessarily go through the entire input
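Point 3 in the comment above is easy to demonstrate: `foldr` can stop consuming input because its step function receives the rest of the fold lazily, so an `or` built from it terminates even on an infinite input once it sees `True`:

```haskell
-- Right-fold 'or': (||) ignores its second argument when the first is
-- True, so the lazily-built rest of the fold is never forced.
orRight :: [Bool] -> Bool
orRight = foldr (||) False
```

`orRight (True : repeat False)` returns `True` immediately; the equivalent left fold would loop forever consuming the infinite tail.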

@@ -308,7 +308,7 @@ instance IsStream WSerialT where
------------------------------------------------------------------------------
-- Additionally we can have m elements yield from the first stream and n
--- elements yeilding from the second stream. We can also have time slicing
+-- elements yielding from the second stream. We can also have time slicing
-- variants of positional interleaving, e.g. run first stream for m seconds and
-- run the second stream for n seconds.
--
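The m-from-one, n-from-the-other interleaving sketched in this comment could look roughly like the following on lists. This is only an illustration of the idea, not a Streamly API; `interleaveMN` is a hypothetical name and assumes m, n >= 1:

```haskell
-- Yield m elements from the first list, then n from the second,
-- repeating until one side is exhausted, at which point the remainder
-- of the other side is yielded as-is.
interleaveMN :: Int -> Int -> [a] -> [a] -> [a]
interleaveMN m n = go
  where
    go [] ys = ys
    go xs [] = xs
    go xs ys = take m xs ++ take n ys ++ go (drop m xs) (drop n ys)
```

A time-sliced variant would replace the element counts with deadlines, switching streams when the slice expires rather than after a fixed number of yields.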

@@ -579,7 +579,7 @@ enumerateFromThenIntegral from next =
-- 9007199254740992 + 2 :: Double => 9.007199254740994e15
-- Instead we accumulate the increment counter and compute the increment
--- everytime before adding it to the starting number.
+-- every time before adding it to the starting number.
--
-- This works for Integrals as well as floating point numbers, but
-- enumerateFromStepIntegral is faster for integrals.
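The accumulated-counter technique this comment describes can be sketched as below. This is an illustrative reimplementation, not the library's actual code; `enumFromThenNum` is a hypothetical name:

```haskell
-- Instead of repeatedly adding the step to a running value (which lets
-- floating-point rounding error accumulate across iterations), each
-- element is computed as from + i * step, so every element carries at
-- most the rounding error of a single multiply-add.
enumFromThenNum :: Num a => a -> a -> Int -> [a]
enumFromThenNum from next n =
  [ from + fromIntegral i * step | i <- [0 .. n - 1] ]
  where
    step = next - from
```

As the comment notes, this works for both Integral and Fractional element types, though a dedicated integral enumeration avoids the multiplication and is faster.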