The TOC parser no longer requires a Memory object, making it easier to
instantiate in derived tools (where the TOC parser needs to be used before a
Memory is available). To do this, we use MemAddr as the base type for the TOC
instead of MemSegmentOff.
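A minimal sketch of the new shape, using a simplified stand-in for macaw's
MemAddr (the real type is parameterized by the pointer width); the point is
only that the table can be built and queried from raw addresses, with no
Memory in sight:

```haskell
module TOCSketch where

import           Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map
import           Data.Word (Word64)

-- Simplified stand-in for macaw's MemAddr: a raw absolute address.  No
-- Memory object is needed to construct or compare it.
newtype MemAddr = MemAddr Word64
  deriving (Eq, Ord, Show)

-- The TOC maps a function entry point to its TOC base pointer value.
newtype TOC = TOC (Map MemAddr MemAddr)

-- Build a TOC from (function entry, TOC base) pairs parsed out of the binary.
mkTOC :: [(MemAddr, MemAddr)] -> TOC
mkTOC = TOC . Map.fromList

-- Look up the TOC base pointer for a function entry point.
lookupTOC :: TOC -> MemAddr -> Maybe MemAddr
lookupTOC (TOC m) addr = Map.lookup addr m
```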
The recursive simplifier could exhibit exponential behavior when a nested tree
of irreducible terms accumulated; the recursive calls quickly blew up execution
times.
The fix was to remove the recursive calls from the simplifier and instead
incrementally simplify expressions to constants as they are added (via the
addExpr function). This simplifies as much as the recursive approach did, but
more efficiently. The change required exporting the simplifyApp function.
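Roughly the idea, over a toy expression type rather than the real macaw
Value/App types (the names here only loosely mirror addExpr and simplifyApp):

```haskell
module AddExprSketch where

import Data.Word (Word64)

-- Tiny stand-in expression language.
data Expr
  = Lit Word64
  | App App
  deriving (Show)

data App
  = Add Expr Expr
  | Mul Expr Expr
  deriving (Show)

-- Simplify a single application whose children are already in simplified
-- form.  This is the non-recursive core that gets exported.
simplifyApp :: App -> Maybe Expr
simplifyApp app =
  case app of
    Add (Lit a) (Lit b) -> Just (Lit (a + b))
    Mul (Lit a) (Lit b) -> Just (Lit (a * b))
    _                   -> Nothing

-- Instead of recursively re-simplifying whole trees after the fact, fold
-- constants at the moment each application is constructed.  Because the
-- children were built the same way, one non-recursive step suffices.
addExpr :: App -> Expr
addExpr app =
  case simplifyApp app of
    Just e  -> e
    Nothing -> App app
```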
This code now pulls all of the function addresses from the TOC as entry points
for the code discovery search. This lets us trivially find code reachable via
indirect calls, as the function pointer discovery heuristic doesn't seem to be
well-suited to PowerPC. I'd like to push on that, but it seems like a good
start for now.
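A rough sketch of the seeding step (discoverFrom and the types here are
hypothetical stand-ins, not macaw's actual discovery API):

```haskell
module SeedDiscoverySketch where

import           Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map
import           Data.Word (Word64)
import           Numeric (showHex)

-- Stand-ins for the TOC; the real driver hands the entry points to macaw's
-- discovery code rather than to this toy 'discoverFrom'.
type MemAddr = Word64
newtype TOC = TOC (Map MemAddr MemAddr)

-- Every key in the TOC is the entry point of some function in the binary.
tocEntryPoints :: TOC -> [MemAddr]
tocEntryPoints (TOC m) = Map.keys m

-- Hypothetical discovery driver: seed the search with every function
-- address in the TOC, so functions reachable only through indirect calls
-- are still found even without a working function pointer heuristic.
discoverFrom :: [MemAddr] -> IO ()
discoverFrom = mapM_ (\a -> putStrLn ("exploring function at 0x" ++ showHex a ""))
```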
The code pointer discovery in macaw can't handle this case because we never
write the code pointers into memory - we only read them. We really need a way
to tell macaw about code pointers.
The easy workaround is to pull all of the function entry points out of the TOC
and just seed the macaw search with them, but it would be nice to be able to
identify them from first principles.
This change memoizes the translations of SimpleBuilder expression fragments,
which lets us restore the sharing in the semantics formulas: the generator now
re-uses shared sub-expressions automatically. This produces less Haskell code,
yielding better code density and fewer terms constructed at run time, and it
also reduces compile times.
It seems to cut the size of the generated TH code by about half. It also
generates less deeply-nested Haskell code, making the resulting TH splices
human-readable.
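A simplified sketch of the memoization, assuming only that each expression
carries a unique identifier (the real SimpleBuilder expressions carry nonces);
the "generated code" here is just strings standing in for TH:

```haskell
module MemoTranslateSketch where

import           Control.Monad.State (State, get, modify, execState)
import           Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

-- Stand-in for a SimpleBuilder expression: each node has a unique id.
data Expr = Expr { exprId :: Int, exprNode :: Node }

data Node
  = Var String
  | Add Expr Expr

-- State: the memo table from expression id to the name of its binding,
-- plus the bindings emitted so far (standing in for generated code).
type S = (Map Int String, [(String, String)])

-- Translate an expression, emitting each shared fragment exactly once and
-- re-using its binding everywhere else it occurs.
translate :: Expr -> State S String
translate e = do
  (memo, _) <- get
  case Map.lookup (exprId e) memo of
    Just name -> pure name                -- already translated: re-use it
    Nothing -> do
      body <- case exprNode e of
        Var v   -> pure v
        Add a b -> do
          a' <- translate a
          b' <- translate b
          pure ("bvAdd " ++ a' ++ " " ++ b')
      let name = "e" ++ show (exprId e)
      modify (\(m, bs) -> (Map.insert (exprId e) name m, bs ++ [(name, body)]))
      pure name

-- The shared sub-expression 'x' is translated once and referenced twice,
-- instead of being duplicated in the generated code.
example :: [(String, String)]
example = snd (execState (translate expr) (Map.empty, []))
  where
    x    = Expr 1 (Var "x")
    expr = Expr 2 (Add x x)
-- => [("e1","x"),("e2","bvAdd e1 e1")]
```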
This adds a test that runs code discovery over a large-ish binary to check
coverage. It currently fails due to unsupported instructions (expected). The
test will guide priorities for implementing new semantics.
This helper now also simplifies constants, which is very useful when
simplifying the instruction pointer. That is required by the rest of macaw,
which expects the IP values it wants to explore to be fully reduced.
The current heuristic isn't great, but it is probably okay for now: it just
checks whether the value in the LNK register is an address plus four. Something
more precise would require knowing the address of the next instruction, but we
can't get that from the IP, which has already been changed by the call.
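A minimal sketch of that check, over a stand-in value type rather than the
constructors the real code pattern-matches on:

```haskell
module ReturnAddrHeuristicSketch where

import Data.Word (Word64)

-- Simplified stand-in for the symbolic values the real code inspects.
data Value
  = Literal Word64           -- a concrete bitvector value
  | AddrPlus Word64 Word64   -- an address plus a constant offset
  deriving (Show)

-- A write to the link register looks like a call if the value stored in
-- LNK is some instruction address plus 4 (the size of a PowerPC
-- instruction), i.e. a plausible return address.  We cannot compare
-- against the address of the next instruction directly, because the IP
-- has already been redirected by the call itself.
looksLikeCall :: Value -> Bool
looksLikeCall lnkValue =
  case lnkValue of
    AddrPlus _base 4 -> True
    _                -> False
```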
The semantics of each instruction are atomic updates over the register state.
Prior to this commit, updates were not atomic: changes to register values were
visible to later register definitions, which caused a huge number of problems.
Now we take a snapshot of the register state at the beginning of the
instruction and read all of the values we need from that snapshot, so updates
are isolated from one another.
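The shape of the fix, as a small sketch with a simplified register state (a
map from register names to machine words):

```haskell
module AtomicUpdateSketch where

import           Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map
import           Data.Word (Word64)

type Reg      = String
type RegState = Map Reg Word64

-- An instruction's semantics as a set of register definitions, each
-- computed as a function of the *pre-instruction* state.
type Semantics = [(Reg, RegState -> Word64)]

-- Evaluate every definition against a snapshot of the incoming state, then
-- apply all of the writes.  Because each right-hand side reads only from
-- the snapshot, no definition can observe another definition's update.
stepInstruction :: Semantics -> RegState -> RegState
stepInstruction defs pre =
  let snapshot = pre                                 -- read-only copy
      updates  = [ (r, f snapshot) | (r, f) <- defs ]
  in foldr (\(r, v) st -> Map.insert r v st) pre updates

-- Example: a swap-like pair of definitions gives the intended result only
-- because both right-hand sides read from the snapshot.
example :: RegState
example = stepInstruction
  [ ("r3", \s -> s Map.! "r4")
  , ("r4", \s -> s Map.! "r3") ]
  (Map.fromList [("r3", 1), ("r4", 2)])
-- => fromList [("r3",2),("r4",1)]
```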
My understanding of how macaw splits up blocks was incorrect when I wrote the
test initially. Macaw doesn't split blocks just because a jump happens to land
in the middle of the block, so the middle block in this example is actually a
few instructions longer.
The simplifier now recursively traverses its arguments. This isn't great from
an efficiency perspective, but we need it to be able to simplify instruction
pointers computed from relative jumps (which involve some sign extensions and
shifts).
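A rough sketch of why the recursion matters, over a toy expression type with
just enough constructors to model a relative branch target:

```haskell
module RecursiveSimplifySketch where

import Data.Bits (shiftL)
import Data.Int (Int16)
import Data.Word (Word64)

-- Stand-in expression language: sign-extend a 16-bit displacement, shift
-- it, and add it to the current address.
data Expr
  = Lit Word64
  | SExt16 Int16            -- sign-extend a 16-bit immediate to 64 bits
  | Shl Expr Int
  | Add Expr Expr
  deriving (Show)

-- Recursively simplify arguments first, then fold the resulting constants.
-- This is what lets a relative-jump target collapse to a single literal.
simplify :: Expr -> Expr
simplify e =
  case e of
    Lit w    -> Lit w
    SExt16 i -> Lit (fromIntegral i)      -- two's-complement sign extension
    Shl x n  ->
      case simplify x of
        Lit w -> Lit (w `shiftL` n)
        x'    -> Shl x' n
    Add x y  ->
      case (simplify x, simplify y) of
        (Lit a, Lit b) -> Lit (a + b)
        (x', y')       -> Add x' y'

-- Example: a branch at 0x1000 with a displacement of 0x10 words reduces to
-- the literal target address.
target :: Expr
target = simplify (Add (Lit 0x1000) (Shl (SExt16 0x10) 2))
-- => Lit 0x1040
```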
The simplified values are new IP values to explore, and the code consuming
them expects BV literals (i.e., expressions reduced all the way down to
concrete values).
The simplifier isn't currently powerful enough to simplify everything we throw
at it, but this is at least the right place to apply it. If we don't simplify
here, the core of macaw won't know how to follow the IP changes and will miss
blocks.
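To illustrate the consumer side (a hypothetical stand-in, not macaw's real
API): only IP values reduced to literals can be followed, so anything left
symbolic means a missed block:

```haskell
module FollowIPSketch where

import Data.Word (Word64)

-- Stand-in for a (possibly symbolic) IP value.
data Value
  = BVLit Word64       -- a fully reduced bitvector literal
  | Symbolic String    -- anything the simplifier could not reduce
  deriving (Show)

-- Hypothetical consumer of new IP values: a target is only queued for
-- exploration when the IP has been reduced all the way to a literal.
-- Anything still symbolic is dropped, which is why simplification has to
-- happen before the value reaches this point.
nextTargets :: [Value] -> [Word64]
nextTargets vs = [ addr | BVLit addr <- vs ]
```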