There are two major changes in this patch. The first is that sequence maps
now have special representations for point updates, of the sort produced by
the 'update' and 'updateEnd' primitives. These updates are stored in a
finite map, rather than as a functional-update thunk using lambdas; this
reduces memory usage and improves the time efficiency of sequences defined
by long chains of updates.
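As a rough sketch of the idea (names invented here, not the actual
internals), the representation consults a finite map of point updates
before falling back to the underlying sequence:

    import qualified Data.IntMap as IntMap

    -- Hypothetical sketch: updates live in an IntMap rather than in a
    -- stack of closures, so a long chain of updates costs one map entry
    -- each instead of one thunk each.
    data SeqMap a = SeqMap
      { seqUpdates :: IntMap.IntMap a   -- point updates from 'update'
      , seqBase    :: Int -> a          -- the underlying sequence
      }

    lookupSeq :: SeqMap a -> Int -> a
    lookupSeq (SeqMap upds base) i = IntMap.findWithDefault (base i) i upds

    updateSeq :: SeqMap a -> Int -> a -> SeqMap a
    updateSeq sm i v = sm { seqUpdates = IntMap.insert i v (seqUpdates sm) }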
The second change is that the caching policy for sequences is changed
to retain all previously-calculated values. This is a change from the
previous LRU policy, which retained only a small finite number of previous
values. Benchmarking showed that unbounded memoization outperformed both an
LRU cache and an adaptive caching strategy in essentially all cases. The
potential downside is that we may retain values
longer than necessary. If this becomes a problem, we may need to revisit
this decision.
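A minimal sketch of the new policy (assuming the usual IORef-plus-Map
shape for a memo table; this is not the actual implementation):

    import Data.IORef (newIORef, readIORef, atomicModifyIORef')
    import qualified Data.Map as Map

    -- Once an element is computed it is cached forever; nothing is
    -- evicted, unlike the previous LRU cache.
    memoSeq :: (Integer -> IO a) -> IO (Integer -> IO a)
    memoSeq f = do
      cache <- newIORef Map.empty
      return $ \i -> do
        m <- readIORef cache
        case Map.lookup i m of
          Just v  -> return v
          Nothing -> do
            v <- f i
            atomicModifyIORef' cache (\m' -> (Map.insert i v m', ()))
            return v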
Using a larger nursery greatly increases mutator productivity, and thus
provides significant speedups on some examples. The tradeoff is the
potential for long GC pause times.
This is probably an acceptable tradeoff for a tool like Cryptol,
despite the potential for unresponsiveness at the REPL.
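For reference, GHC's nursery size is controlled by the RTS `-A` flag; the
value below is purely illustrative, not a setting taken from this change:

    cryptol +RTS -A64m -RTS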
Fix some minor conflicts in the test suite.
Conflicts:
tests/issues/issue002.icry.fails
tests/issues/issue148.icry.stdout
tests/issues/issue198.icry.stdout
tests/issues/issue214.icry.stdout
tests/issues/issue290v2.icry.stdout
tests/issues/issue312.icry.fails
The major change in this patch is to add a new 'WordValue' type, which is
always used to represent finite sequences of bits. A word value is either a
packed word or a sequence of lazy bits. The 'VSeq' constructor, in
contrast, is now never used for a finite sequence of
bits.
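Schematically, the new representation looks something like the following
(constructor names invented for illustration; they are not the actual
source):

    -- 'Eval' stands in for the evaluation monad in which bits are
    -- computed lazily.
    newtype Eval a = Eval { runEval :: IO a }

    -- A finite sequence of bits is either already packed into an
    -- integer, or a list of individually lazy bits, but never a
    -- 'VSeq' of bit values.
    data WordValue
      = PackedWord !Integer !Integer     -- bit width, packed value
      | LazyBits   !Integer [Eval Bool]  -- bit width, one thunk per bit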
This avoids certain thorny problems that arise when trying to faithfully
implement the lazy semantics of Cryptol. We no longer have to commit to
representing a value of type '[n]' as a packed or an unpacked word until
relatively late. This allows us to perform type-directed
eta-expansion at every recursive call before we know how words will be represented.
This patch fixes all the outstanding strictness bugs that I know of.
Unfortunately, we seem to lose some ground on performance. The new evaluator
is now about 5% slower than the old one on the AES benchmark, and quite a bit
slower on the SHA1 benchmark. Fortunately, the use of LRU caches for
memoization of sequences seems to keep heap usage at manageable levels, so
programs generally complete, even if they take a long time.
The definitions added in #299 cause a regression in Prelude typechecking
performance. Until we sort out the performance, we'll keep these
definitions in the module `Cryptol::Extras`.
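Until then, the definitions should remain available via an ordinary
module import:

    import Cryptol::Extras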
We're only using this package for GHC 7.8 compatibility, which will end
once GHC 8.0 is released. For now, just pin the dependency to the older
version to avoid import errors.
This is to set up improvements to the cryptol-server, and therefore to the
pycryptol interface.
This patch also fixes a regression in pretty-printing output caused by a
previous error in the fixity of the `<>` operator.
This notably adds a `MonadBaseControl` instance for the `REPL` monad so
that we can catch asynchronous exceptions in the middle of `REPL`
computations. Since this is a generalization of Haskeline's exception
support, the orphan instance for the console executable is now defined in
terms of this new instance.
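As a generic sketch of why the instance matters (this is illustrative
code, not the actual `REPL` definitions): `catch` from lifted-base works
in any such monad, so asynchronous exceptions can be handled
mid-computation:

    {-# LANGUAGE ScopedTypeVariables #-}
    import Control.Exception (SomeException)
    import Control.Exception.Lifted (catch)
    import Control.Monad.IO.Class (MonadIO, liftIO)
    import Control.Monad.Trans.Control (MonadBaseControl)

    -- Runs an action, recovering from any exception (including
    -- asynchronous ones delivered mid-action) without leaving the monad.
    guarded :: (MonadBaseControl IO m, MonadIO m) => m () -> m ()
    guarded body = body `catch` \(e :: SomeException) ->
      liftIO (putStrLn ("caught: " ++ show e))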
This removes the `self-contained` flag, since it is fine for all builds
to have the baked-in prelude as a last resort. Tinkerers can still
change the prelude as long as it's in the search path.
Also removes some unnecessary extra prelude loading by the Cryptol server,
by means of a new command.
The benchmark suite no longer uses the `iteSolver` option.
Added a dependency on `deepseq-generics` so that the `NFData` instances
work correctly in Stackage LTS 2. Versions of `deepseq` before 1.4 have
a default instance that amounts to WHNF rather than NF, so benchmarks
were being measured incorrectly.
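For illustration (the data type here is invented), deepseq-generics
derives a traversal that forces the full structure regardless of the
deepseq version:

    {-# LANGUAGE DeriveGeneric #-}
    import GHC.Generics (Generic)
    import Control.DeepSeq (NFData(..))
    import Control.DeepSeq.Generics (genericRnf)

    data Expr = Lit Integer | Add Expr Expr
      deriving Generic

    -- With deepseq < 1.4 the default method is 'rnf a = seq a ()',
    -- which stops at WHNF; genericRnf forces the entire tree.
    instance NFData Expr where
      rnf = genericRnf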
Attempt to normalize equality constraints involving a single unification
variable into a form where that variable is alone on one side of the equation.
Applying this normalization before attempting to simplify goals seems to clean
up some cases that the solver wasn't able to deal with.
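A miniature of the intended rewrite (types and names invented for
illustration): an equation like 2 + ?a == 10 is turned into ?a == 8, so
the unification variable stands alone on one side:

    -- Hypothetical sketch of the normalization step.
    data Ty = TVar String    -- unification variable
            | TNum Integer   -- numeric literal
            | TAdd Ty Ty     -- sum of types
      deriving (Eq, Show)

    normalize :: Ty -> Ty -> Maybe (String, Ty)
    normalize (TAdd (TNum k) (TVar v)) (TNum m)
      | k <= m  = Just (v, TNum (m - k))  -- k + ?v == m  ~>  ?v == m - k
    normalize (TVar v) t = Just (v, t)    -- already in solved form
    normalize _ _ = Nothing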