This more or less follows the rules from ICFP '10 "Scrapping your
inefficient engine" in that it generates new partially evaluated
definitions by evaluating with the statically known arguments, and
caches the results so that they can be reused if the evaluator
encounters them again. Some polish is still needed:
- proper tests
- nice display of residual programs (which will help with proper tests!)
- sensible handling of partial functions (probably a small limit on how
much to evaluate, and failing if the limit is reached?)
- checking whether evaluation has made progress, and failing if not
Once the above works, we're in a position to start specialising
interfaces, which is the main point of this.
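For illustration (a hypothetical example, not code in this patch):
specialising a call against its statically known arguments produces a
residual definition, which is then cached for reuse.

    pow : Nat -> Double -> Double
    pow Z     x = 1
    pow (S k) x = x * pow k x

    -- Specialising 'pow 3' (the exponent is statically known) generates
    -- and caches a residual definition roughly like:
    pow3 : Double -> Double
    pow3 x = x * (x * (x * 1))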
Useful in partial evaluation: if we erase the inferrable arguments in an
argument to a partially evaluated definition, we can make the resulting
specialisation more general. It might also be useful later in
interactive editing, to decide what to display.
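For instance (illustrative only, not from this patch): in a call such as
the one below, the implicit length is inferrable from the vector
argument, so erasing it in the argument to the specialised definition
lets one specialisation cover every length rather than one per concrete
n.

    import Data.Vect

    vmap : (a -> b) -> Vect n a -> Vect n b
    vmap f xs = map f xs
    -- In 'vmap someStaticFunction xs', the implicit {n} is determined
    -- by xs, so it can be erased when building the specialised call.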
Flag definitions as partial evaluation definitions, in which case the
RHS gets evaluated under certain limits (not yet defined...), pausing at
any new specialisable calls to recursively specialise.
Still to do: set appropriate reduction limits depending on
partiality/totality of functions concerned, and introduce transformation
rules for rewriting by already known and cached specialisations.
Generate the initial definition of a specialised function, by applying
the function to its static arguments.
Next step: evaluate the RHS on elaboration.
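Concretely (schematic, with hypothetical names): for a function whose
first argument is static, the generated starting point is just the
application to the known argument; evaluating that RHS is the next step.

    f : Nat -> List a -> List a
    f n xs = xs                    -- body irrelevant to the illustration

    -- Specialising 'f 3' initially generates a definition of the form:
    f_3 : List a -> List a
    f_3 xs = f 3 xs
    -- Evaluating the RHS then produces the residual, specialised code.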
This isn't strictly right yet (there should be no newline on the last
line if there was no newline in the input file), but it puts the
behaviour back to what it was before fGetLine was changed to discard the
newline.
This needs revisiting...
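Something like this hypothetical helper (not the actual code) captures
the current behaviour and the caveat:

    -- Rejoin lines read with fGetLine, which now discards the '\n':
    joinLines : List String -> String
    joinLines []        = ""
    joinLines [l]       = l ++ "\n"  -- wrong if the input had no final newline
    joinLines (l :: ls) = l ++ "\n" ++ joinLines ls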
Two new intermediate representations: ANF, in which everything is
applied to variables only, and VMCode, in which everything is reduced to
a list of assignments.
I don't plan to add a defunctionalisation step. I've tested this with a
quick hack of a C backend, though I don't intend to commit that (not
here at least) because, even though it basically works, it's Very Slow.
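Roughly (schematic, not the compiler's actual IR syntax):

    -- Source-level application with compound arguments:
    --   f (g x) (h y)
    -- After ANF, every argument is a variable, introduced by lets:
    --   let v1 = g x in
    --   let v2 = h y in
    --   f v1 v2
    -- After VMCode, the body is flattened into a list of assignments:
    --   r1 := CALL g [x]; r2 := CALL h [y]; r0 := CALL f [r1, r2]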
Solving later arguments first means that they can refer to solutions of
earlier arguments. This isn't really the best solution for ensuring that
metavariables refer to things defined earlier, but it helps, and it does
fix #304.
Instead of the previous unreliable hack, actually look for occurrences
of the solved metavariable. Block if there are any, and fail if there
are any under a constructor. I had expected this to hurt performance
quite a bit, but it seems it doesn't.
This was prompted by #304, which is now partly fixed, but there's still
a refinement to unification needed to fix it fully (and at least it
doesn't segfault due to the cycle now!)
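Schematically, the unification behaviour is now:

    -- ?m =? S ?m   -- ?m occurs under a constructor: fail (would be cyclic)
    -- ?m =? f ?m   -- ?m occurs, but not under a constructor: block
    -- ?m =? S n    -- no occurrence of ?m: solve as before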
If it's a newtype with an erased WorldVal, turn any case blocks into
non-inlinable let bindings, which is more accurate than the fiddly thing
it was doing.
Also better usage calculation in the inliner (but we still can't inline
lets, because we don't yet know which 'case' functions are guaranteed
safe to inline, i.e. which ones don't duplicate variables anywhere.
That's coming!)
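Schematically (not the exact compiler output), the rewrite on such a
case block is:

    -- A case on a newtype-like constructor whose only other argument is
    -- the erased world token, e.g.
    --   case op x w of MkIORes r w' => k r
    -- is now turned into a non-inlinable let binding of the result:
    --   let r = op x w in k r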
A bug found via --debug-elab-check (very handy, but off by default
because it's occasionally bad for performance). The multiplicity of a
lambda needs to be checked against the expected multiplicity, not the
given multiplicity. Oops!
If you have any (\0 x => e) or similar, you may find you need to fix a
type error after this update...
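For example (illustrative), a \0 lambda checked against an ordinary,
unrestricted function type may now need fixing:

    id0 : a -> a
    id0 = \0 x => x
    -- the \0 binder's multiplicity is now checked against the expected
    -- (unrestricted) multiplicity of 'a -> a', so this is a type error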
We no longer need to do this since we now expand fromInteger. And, we
didn't even need to do it fully, because we never needed to normalise
the pattern variable types. This is a big win on code which has a lot of
type level computation.
Now only retry the constraint if at least one of the blocking holes has
some new information (either that it has been flagged invertible, or
that it is now defined). This can lead to big performance gains on
programs which use dependent types heavily.
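Schematically (hypothetical names, not the actual implementation), the
retry condition amounts to:

    record Hole where
      constructor MkHole
      nowDefined      : Bool
      newlyInvertible : Bool

    -- retry a postponed constraint only if one of its blocking holes
    -- has gained new information since it was postponed
    shouldRetry : List Hole -> Bool
    shouldRetry = any (\h => nowDefined h || newlyInvertible h)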