Changes in version 2.1.0.0
- Add a new 'w' parameter to 'GenHaxl' to allow arbitrary writes during a computation. These writes are stored as a running log in the Env, and are not memoized. This allows users to extract information from a Haxl computation even if it throws. Our advice is to limit these writes to monitoring and debugging logs.
- A 'WriteTree' constructor maintains the log of writes inside the Environment. It is defined this way to allow O(1) mappend.
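As an illustration of the new 'w' parameter, a minimal sketch (it assumes the 'tellWrite' and 'runHaxlWithWrites' entry points from this release; check the exact names and signatures against the Haddock docs before relying on them):

```haskell
import Haxl.Core

-- A computation whose write log is String; it records two entries.
logged :: GenHaxl u String Int
logged = do
  tellWrite "starting"                  -- append to the running write log
  let x = 6 * 7
  tellWrite ("result = " ++ show x)
  return x

main :: IO ()
main = do
  env0 <- initEnv stateEmpty ()         -- no data sources, unit user env
  (x, writes) <- runHaxlWithWrites env0 logged
  print x                               -- the computation's result
  print writes                          -- the accumulated writes
```

Because the writes live in the Env rather than in the result, they can still be recovered when the computation throws.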
Changes in version 2.0.1.1
- Support for GHC 8.6.1
- Bugfixes
Changes in version 2.0.1.0
- Exported MemoVar from Haxl.Core.Memo
- Updated the facebook example
- Fixed some links in the documentation
- Bumped some version bounds
Changes in version 2.0.0.0
- Completely rewritten internals to support arbitrarily overlapping I/O and computation. Haxl no longer runs batches of I/O in "rounds", waiting for all the I/O to complete before resuming the computation. In Haxl 2, we can spawn I/O that returns results in the background, and computation fragments are resumed when the values they depend on are available. See tests/FullyAsyncTest.hs for an example.
- A new 'PerformFetch' constructor supports the new concurrency features: 'BackgroundFetch'. The data source is expected to call 'putResult' in the background on each 'BlockedFetch' when its result is ready.
- There is a generic 'DataSource' implementation in 'Haxl.DataSource.ConcurrentIO' for performing each I/O operation in a separate thread.
- Lots of cleanup and refactoring of the APIs.
- License changed from BSD+PATENTS to plain BSD3.
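A hedged sketch of how a data source's fetch function might use 'BackgroundFetch' (the request type 'MyReq' and the I/O action 'runMyReq' are hypothetical; only 'BackgroundFetch', 'BlockedFetch', 'PerformFetch', and 'putSuccess' are Haxl names):

```haskell
import Control.Concurrent (forkIO)
import Control.Monad (forM_, void)
import Haxl.Core

-- The fetch function of a hypothetical DataSource instance.
backgroundFetch :: State MyReq -> Flags -> u -> PerformFetch MyReq
backgroundFetch _state _flags _userEnv =
  BackgroundFetch $ \blockedFetches ->
    forM_ blockedFetches $ \(BlockedFetch req rvar) ->
      -- Spawn the I/O and return immediately; the scheduler keeps
      -- running other computation fragments in the meantime.
      void $ forkIO $ do
        result <- runMyReq req       -- hypothetical blocking I/O
        putSuccess rvar result       -- hand the result back to Haxl
```

The key difference from 'SyncFetch' is that the function returns before the results are ready; each result is delivered asynchronously via its 'ResultVar'.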
Changes in version 0.5.1.0
- 'pAnd' and 'pOr' were added
- 'asyncFetchAcquireRelease' was added
- 'cacheResultWithShow' was exposed
- GHC 8.2.1 compatibility
Changes in version 0.5.0.0
- Rename 'Show1' to 'ShowP' (#62)
Changes in version 0.3.0.0
- Some performance improvements, including avoiding quadratic slowdown with left-associated binds.
- Documentation cleanup; Haxl.Core is the single entry point for the core and engine docs.
- (>>) is now defined to be (*>), and therefore no longer forces sequencing. This can have surprising consequences if you are using Haxl with side-effecting data sources, so watch out!
- New function withEnv, for running a sub-computation in a local Env.
- Add a higher-level memoization API; see 'memo'.
- Show is no longer required for keys in cachedComputation.
- Exceptions now have Eq instances.
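To illustrate the (>>) change above: with (>>) = (*>), Haxl is free to batch the requests on both sides together, so a side effect on the left is no longer guaranteed to land before the right-hand side runs. A hedged sketch (the request constructor 'WriteValue' is hypothetical; 'uncachedRequest' is a real Haxl entry point, and the signatures use the current two-parameter 'GenHaxl' style):

```haskell
-- Both requests may be issued in the same batch; order not guaranteed:
surprising :: GenHaxl u w ()
surprising =
  uncachedRequest (WriteValue "a") >>
  uncachedRequest (WriteValue "b")

-- Monadic bind demands the first result before issuing the second,
-- so this version does force sequencing:
sequenced :: GenHaxl u w ()
sequenced = do
  _ <- uncachedRequest (WriteValue "a")
  uncachedRequest (WriteValue "b")
```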