For each discovered transfer refinement, the Discovery process must be
restarted from scratch for the target function in question. This
implies that subsequent refinements of the same nominal function must
use the newer version, and that all previous refinements must be
re-applied each time the function Discovery process is repeated.
These changes ensure that both of those requirements are satisfied
during the refinement iterations.
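The iteration can be pictured with the following toy sketch (the names and types here are hypothetical stand-ins, not the actual macaw-refinement API): discovery restarts from a fresh result, and every previously discovered refinement is folded back over it in order.

```haskell
import Data.List (foldl')

-- Toy stand-ins for the real macaw types (hypothetical, for illustration).
type FnInfo = [String]              -- a function's discovered blocks
type Refinement = FnInfo -> FnInfo  -- a resolved transfer target, as a rewrite

-- Restart discovery from scratch, then re-apply all prior refinements
-- in the order they were discovered.
rediscoverWith :: FnInfo -> [Refinement] -> FnInfo
rediscoverWith freshDiscovery refinements =
  foldl' (flip ($)) freshDiscovery refinements
```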
After the SMT evaluation has identified possible targets for a
previously unknown transfer TermStmt, updateDiscovery updates the
DiscoveryState (using a locally supplied TermStmt rewriter) to resolve
the transfer targets.
The caller (e.g. macaw-refinement) can provide an additional Rewriter
operation that operates on the TermStmts of blocks (typically those
for which a previous Discovery was unable to determine a transfer
target). A new entrypoint ('addDiscoveredFunctionBlockTargets') allows
this additional rewriter to be supplied when updating an existing
DiscoveryState.
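The shape of that entrypoint might look roughly as follows; this is an assumed signature for illustration only, and the real declaration in macaw may differ in its arguments and type parameters.

```haskell
-- Hypothetical sketch of the new entrypoint's shape (not the actual
-- signature): given an existing DiscoveryState, a function's discovery
-- info, and the caller's TermStmt rewriter, produce an updated state
-- with the resolved transfer targets applied.
addDiscoveredFunctionBlockTargets
  :: DiscoveryState arch
  -> DiscoveryFunInfo arch ids
  -> (TermStmt arch ids -> Maybe (TermStmt arch ids))  -- caller's rewriter
  -> DiscoveryState arch
```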
Some TermStmt rewrites and other elements might generate new blocks as
part of the Rewrite operation (e.g. adding a new 'Branch' TermStmt),
so this change allows the rewrite context to update the generated
block labels and to collect these newly generated blocks for inclusion
in the results passed to the parser.
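A minimal model of that bookkeeping, with hypothetical names rather than the real rewrite-context type, is a context that hands out fresh labels and accumulates the blocks created along the way:

```haskell
-- Toy rewrite context (hypothetical, for illustration only).
data Block = Block { blockLabel :: Int, blockBody :: String }

data RewriteCtx = RewriteCtx
  { nextLabel :: Int      -- next fresh block label to hand out
  , newBlocks :: [Block]  -- blocks generated during the rewrite
  }

-- Allocate a fresh label and record the generated block so that it can
-- later be included in the results passed to the parser.
emitBlock :: String -> RewriteCtx -> (Int, RewriteCtx)
emitBlock body ctx =
  let lbl = nextLabel ctx
  in ( lbl
     , ctx { nextLabel = lbl + 1
           , newBlocks = Block lbl body : newBlocks ctx } )
```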
Currently, SMT solving uses only the first block of the first path,
which is not correct, but the framework for providing the full list of
paths is now in place.
The previous implementation invoked the SMT solver at the top level,
for prototyping purposes. This version moves the SMT solver invocation
to its intended location in the algorithm, where the path is
successively extended and solutions are compared to identify the
"best" refinement solution.
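The intended search can be sketched as follows; the solver query and the "best" comparison are placeholders (hypothetical, not the actual macaw-refinement code), but the structure — solve at each successive extension of the path and keep the best result — matches the description above.

```haskell
import Data.List (inits)
import Data.Maybe (mapMaybe)

type Block = Int
type Path = [Block]
type Solution = Int

-- Stand-in for an SMT query over one path prefix (hypothetical).
solvePrefix :: Path -> Maybe Solution
solvePrefix [] = Nothing
solvePrefix p  = Just (sum p)

-- Successively extend the path (modeled here as taking its prefixes),
-- solve each extension, and keep the "best" solution seen; "best" is
-- modeled as the minimum, purely as a placeholder comparison.
bestRefinement :: Path -> Maybe Solution
bestRefinement path =
  case mapMaybe solvePrefix (inits path) of
    [] -> Nothing
    ss -> Just (minimum ss)
```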
This means far fewer instructions (and hence fewer registers), and in
turn much less heap space. Peak memory usage is cut in half when
running Brittle on a PPC64 executable with the standard library.