It is now (optionally) pure via the MonadThrow class. It also exposes a new
binary format repr, which currently only has constructors for ELF containers.
The generic binary loading interface is instantiated once for each
architecture/binary container pair. This isn't great, but there is enough
custom work in each setting to justify it.
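A rough sketch of the shape this takes (all names here are illustrative stand-ins, not the actual API):

    {-# LANGUAGE GADTs #-}
    {-# LANGUAGE KindSignatures #-}
    {-# LANGUAGE MultiParamTypeClasses #-}

    import Control.Monad.Catch (MonadThrow)
    import Data.Kind (Type)

    -- The format repr currently has a single constructor, for ELF containers;
    -- the loader class is instantiated once per architecture/container pair and
    -- reports failures via MonadThrow, so callers can remain pure.
    data Elf64

    data BinaryRepr (fmt :: Type) where
      Elf64Repr :: BinaryRepr Elf64

    class BinaryLoader arch fmt where
      loadBinary :: MonadThrow m => BinaryRepr fmt -> FilePath -> m (LoadedBinary arch fmt)

    -- Placeholder result type for the sketch.
    data LoadedBinary arch fmt = LoadedBinary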
The binary loading interface isn't finished yet and needs to learn some additional operations to support relocation. It already supports extra information that is architecture-specific and binary-container-format-specific (which those operations will have to use on a per-format basis).
On the PowerPC side, the Table of Contents (TOC) is now architecture-specific
information constructed by the loader (currently from ELF binaries). The new
TOC data type is in place to support this more easily (the old format was just a
function).
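A minimal sketch of what such a data type might look like (illustrative only; the real type in macaw-ppc is richer and keyed on macaw address types):

    import qualified Data.Map.Strict as Map
    import Data.Word (Word64)

    -- The TOC as a real data type instead of a bare function: keys are function
    -- entry addresses, values are the TOC pointer (r2 value) that should be in
    -- effect on entry to that function.
    newtype TOC = TOC (Map.Map Word64 Word64)

    -- Recovers the old function-typed interface: look up the TOC pointer for a
    -- function entry point, if we know about it.
    lookupTOC :: TOC -> Word64 -> Maybe Word64
    lookupTOC (TOC m) addr = Map.lookup addr m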
This change is in the core generator monad and is applied in the PowerPC backend. It also includes some macaw updates (which required a new elf-edit version).
We now test to ensure that no blocks end in a classification failure (or a disassembly failure). Previously, many blocks were not classified, which caused problems downstream. This required changes to macaw core in two places:
1. The simplifier needed some additional rules to remove some redundant
constructions that threw off the abstract interpretation of values. This was
particularly an issue while reading return values off of the stack in
PowerPC.
2. The abstract interpretation was extended to handle more operations (e.g., shiftl); a sketch of the kind of transfer function involved follows this list.
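A toy version of the second change, assuming a drastically simplified abstract domain (macaw's real domain and transfer functions are much richer):

    import Data.Bits (shiftL)
    import Data.Word (Word64)

    -- A value is either a known constant or unknown.
    data AbsVal = Known Word64 | Top
      deriving (Show, Eq)

    -- Transfer function for a left shift: if both the value and the shift
    -- amount are known, the result is known; otherwise we give up.
    absShiftL :: AbsVal -> AbsVal -> AbsVal
    absShiftL (Known v) (Known n) = Known (v `shiftL` fromIntegral n)
    absShiftL _ _                 = Top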
We need special treatment of the return, as the low two bits are cleared on
PowerPC, so we can't just rely on pattern matching against the ReturnAddr in the
IP register.
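As a self-contained illustration of the bit-level behavior (not how macaw represents abstract return addresses), any comparison against a recorded return address has to mask the low two bits first:

    import Data.Bits ((.&.), complement)
    import Data.Word (Word64)

    -- PowerPC clears the low two bits of the value used as a branch target, so
    -- a candidate return target only lines up with the recorded return address
    -- after masking both sides.
    clearLowTwoBits :: Word64 -> Word64
    clearLowTwoBits addr = addr .&. complement 0x3

    matchesReturn :: Word64 -> Word64 -> Bool
    matchesReturn candidate recorded =
      clearLowTwoBits candidate == clearLowTwoBits recorded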
identifyReturn was previously unused because the macaw Discovery code performed this test inline, but some architectures have different return semantics, so identifyReturn is now used by the Discovery process. This implements the return discovery that should be sufficient for PPC.
Recent changes in macaw(-base) mean that we split blocks more aggressively. The
old expected outputs were conservative - these new values are much more in line
with intuitive expectation (with more aggressive splitting of blocks and less
code duplication between blocks).
Pass the operand and architecture types explicitly and, instead of generating

    case opcode of
      ADD -> case operands of
        Just (GPR gpr0 :< Nil) ->
          SSA-semantics

generate:

    let opc_ADD operands = case operands of
          Just (GPR gpr0 :< Nil) ->
            SSA-semantics
    in case opcode of
         ADD -> opc_ADD operands
This provides better encapsulation for the individual operands and
more specific control over the types (at the cost of a pair of
additional type specifications in the call). This also seems to
reduce memory consumption by about half.
The system call instructions TRAP and SC were updating the IP twice, which led
to skipping instructions. The IP increment for these instructions was already
handled in the abstract interpretation of arch-specific terminators.
Macaw has removed all floating point expression types, so we duplicate those as
arch-specific functions for PowerPC until the more general floating point
support is ready.
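The rough shape of those functions (a simplified stand-in; the real macaw-ppc type is parameterized over macaw's value functor and has many more constructors):

    {-# LANGUAGE GADTs #-}

    -- With the floating point expression types gone from macaw's core, PowerPC
    -- carries its FP operations as arch-specific functions instead.  'v' stands
    -- in for macaw's value type.
    data PPCFPFn v where
      FPAdd :: v -> v -> PPCFPFn v
      FPSub :: v -> v -> PPCFPFn v
      FPMul :: v -> v -> PPCFPFn v
      FPDiv :: v -> v -> PPCFPFn v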
The old method involved providing the TH code with a list of match expressions. This made it very difficult to inspect the arguments of instructions. The new approach
has the architecture backend provide a function that gets the first opportunity
to process instructions, which is much more flexible. This commit also includes
support for a number of cache hint instructions that use the new features.
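The hook has roughly this shape (an approximation; the real type in the TH machinery differs):

    import Language.Haskell.TH (ExpQ)

    -- The architecture backend supplies a function that is consulted before the
    -- generic table-driven translation.  Returning Just gives the backend full
    -- control over how that instruction is translated (this is how the cache
    -- hint instructions are handled); Nothing falls back to the generated
    -- semantics.  'instr' stands in for the dismantle instruction type.
    type InstructionOverride instr = instr -> Maybe ExpQ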
The semantics for many of the vector instructions are incomplete and just set
the target register to undefined. This is enough for code discovery (for now).
This code was mostly architecture independent already, so this commit moves it to the macaw-semmc module so that it can be shared with the ARM backend. I still plan to move the main TH module that performs the SimpleBuilder-to-macaw translation, but that requires a few other changes first.
The TOC parser no longer requires a Memory object, making it easier to actually instantiate it in derived tools (where the TOC parser needs to be used before a Memory is available). To do this, we use MemAddr as the base type for the TOC instead of MemSegmentOff.
The recursive simplifier could exhibit exponential behavior in cases where a nested tree of irreducible terms accumulated. The recursive calls quickly blew up execution times.
The fix was to remove the recursive calls from the simplifier and instead incrementally simplify expressions to constants as they are added (via the addExpr function). This simplifies as much as the recursive case, but more efficiently. The change required exporting the simplifyApp function.
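A self-contained sketch of the strategy (the toy types here stand in for macaw's App/Value structure, and the real simplifyApp is far more capable):

    -- Toy expression type.
    data Expr = Lit Integer
              | Add Expr Expr
              | Shl Expr Int

    -- Simplify a single node without recursing into sub-terms.  Because every
    -- expression goes through addExpr as it is built, its sub-terms have
    -- already been simplified by the time we see them, so one non-recursive
    -- step here is as strong as the old recursive pass without the exponential
    -- blowup.
    simplifyOnce :: Expr -> Expr
    simplifyOnce e =
      case e of
        Add (Lit a) (Lit b) -> Lit (a + b)
        Shl (Lit a) n       -> Lit (a * 2 ^ n)
        _                   -> e

    -- The addExpr-style entry point: constant folding happens incrementally,
    -- at the moment each expression is added.
    addExpr :: Expr -> Expr
    addExpr = simplifyOnce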
This code now pulls all of the function addresses from the TOC as entry points
for the code discovery search. This lets us trivially find code reachable via
indirect calls, as the function pointer discovery heuristic doesn't seem to be
well-suited to PowerPC. I'd like to push on that, but it seems like a good
start for now.
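Assuming the TOC is represented as a map keyed by function entry address (as sketched earlier), seeding discovery is just enumerating its keys:

    import qualified Data.Map.Strict as Map
    import Data.Word (Word64)

    -- Every function entry address recorded in the TOC becomes a seed for the
    -- code discovery search, which makes functions reached only through
    -- indirect calls visible without a pointer-discovery heuristic.
    discoveryEntryPoints :: Map.Map Word64 Word64 -> [Word64]
    discoveryEntryPoints = Map.keys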
The code pointer discovery in macaw can't handle this case because we never
write the code pointers into memory - we only read them. We really need a way
to tell macaw about code pointers.
The easy workaround is to pull all of the function entry points out of the TOC
and just seed the macaw search with them, but it would be nice to be able to
identify them from first principles.
This change now memoizes translations of SimpleBuilder expression fragments,
which allows us to restore the sharing in semantics formulas. The generator
re-uses shared sub-expressions automatically now. This generates less Haskell
code, yielding better code density and fewer terms constructed at run time. It
also reduces compile times.
It seems to cut the size of the generated TH code by about half. It also
generates less deeply-nested Haskell code, making the resulting TH splices human
readable.
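The mechanism is roughly a cache keyed on the identity of each sub-expression (illustrative: the real code keys on SimpleBuilder nonces and produces Template Haskell expressions rather than strings):

    import qualified Data.Map.Strict as Map
    import Control.Monad.State.Strict (State, gets, modify')

    -- Toy stand-ins: expressions carry a unique identifier (playing the role
    -- of a SimpleBuilder nonce), and "translation" produces a String.
    data Expr = Expr { exprId :: Int, exprBody :: ExprBody }
    data ExprBody = Var String | Add Expr Expr

    type Cache = Map.Map Int String

    -- Translate an expression, re-using the translation of any sub-expression
    -- already seen.  Shared sub-terms are translated (and emitted) exactly
    -- once, which is what restores the sharing present in the semantics
    -- formulas.
    translate :: Expr -> State Cache String
    translate e = do
      cached <- gets (Map.lookup (exprId e))
      case cached of
        Just t  -> pure t
        Nothing -> do
          t <- case exprBody e of
                 Var name -> pure name
                 Add a b  -> do
                   ta <- translate a
                   tb <- translate b
                   pure ("(" ++ ta ++ " + " ++ tb ++ ")")
          modify' (Map.insert (exprId e) t)
          pure t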
It runs code discovery over a large-ish binary to test coverage. We currently
fail due to unsupported instructions (expected). This test will guide
priorities on implementing new semantics.
This helper additionally simplifies constants. This is very useful for simplifying the instruction pointer, which the rest of macaw requires: IP values it wants to explore must be fully reduced.
The current heuristic isn't great, but is probably okay for now. It just checks
to see if the LNK register is an address plus four. Something more precise
would require knowing the address of the next instruction, but we can't get that
from the IP, which has already been changed due to the call.
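In spirit, the check is something like the following (a self-contained illustration; the real code works over macaw's abstract register state):

    import Data.Word (Word64)

    -- Accept a block terminator as a call when the link register holds "some
    -- known code address plus four".  'isCodeAddr' stands in for the real
    -- memory/abstract-value query.
    looksLikeCall :: (Word64 -> Bool) -> Word64 -> Bool
    looksLikeCall isCodeAddr lnkValue =
      lnkValue >= 4 && isCodeAddr (lnkValue - 4)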
The semantics of each instruction are atomic updates over the register state.
Prior to this commit, changes were not atomic and updates to register values
were visible to later register definitions, which caused a huge number of
problems. Now, we take a snapshot of the register state at the beginning of the
instruction and read all values we need from that snapshot. This way, updates
are isolated from one another.
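Schematically (a self-contained sketch; macaw's generator state is richer and the updates are macaw statements rather than pure functions):

    import qualified Data.Map.Strict as Map
    import Data.Word (Word64)

    type RegName  = String
    type RegState = Map.Map RegName Word64

    -- One instruction's semantics as a list of (register, function-of-input)
    -- updates.  Every right-hand side reads from the snapshot taken at the
    -- start of the instruction, never from registers written earlier in the
    -- same instruction, so the updates behave atomically.
    applyInstruction :: [(RegName, RegState -> Word64)] -> RegState -> RegState
    applyInstruction updates snapshot =
      foldr (\(reg, f) st -> Map.insert reg (f snapshot) st) snapshot updates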
My understanding of how macaw splits up blocks was incorrect when I wrote the
test initially. Macaw doesn't split blocks just because a jump happens to land
in the middle of the block, so the middle block in this example is actually a
few instructions longer.
It now recursively traverses its arguments. This isn't great from an efficiency
perspective, but we need it to be able to simplify instruction pointers computed
from relative jumps (which involve some sign extensions and shifts).
These values are new values of the IP to explore, and the code consuming these
values expects them to be BV literals (i.e., simplified from expressions to
values).
The simplifier isn't currently powerful enough to simplify everything we throw
at it, but this is at least the right place to apply it. If we don't simplify
here, the core of macaw won't know how to follow the IP changes and will miss
blocks.
These operations generate a lot of code, so it is helpful to factor them out and
reduce the burden on the type checker. Factoring these two definitions out cuts
the generated code nearly in half.
The change is actually in the semantics (semmc), where we were extracting the wrong part of the 128-bit vector registers to operate on. Many operations were being simplified to zero, which manifested as unused fprc registers.