This slows things down a bit because to find the holes and give them the
right multiplicities, we need to normalise all the arguments which might
have been metavariables. Maybe we should skip this if we're not using
anything linear, for efficiency?
'As' patterns are handled by deciding which side of the 'as' is
considered 'used'. In case blocks, that should be the variable name, but
in general it should be the pattern, so IAs now has a flag to say which
one.
Need to record which holes are still to be solved (not counting
elaborations where there are user-defined holes) and check at the end
that they are now solved.
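Something like this, as a toy Haskell sketch (names and representation
are made up; the real store lives in the unification state):

    import qualified Data.Map as Map
    import Data.Map (Map)

    data HoleKind = MachineGenerated | UserDefined deriving Show

    -- A hole is either still open (Nothing) or solved with some term
    -- (represented as a string in this toy version).
    type Holes = Map String (HoleKind, Maybe String)

    addHole :: String -> HoleKind -> Holes -> Holes
    addHole n k = Map.insert n (k, Nothing)

    solveHole :: String -> String -> Holes -> Holes
    solveHole n tm = Map.adjust (\(k, _) -> (k, Just tm)) n

    -- At the end of elaboration, any machine-generated hole that is
    -- still open is an error; user-defined holes may remain.
    unsolved :: Holes -> [String]
    unsolved hs = [ n | (n, (MachineGenerated, Nothing)) <- Map.toList hs ]

    main :: IO ()
    main = do
      let hs = solveHole "?a" "S Z"
             $ addHole "?a" MachineGenerated
             $ addHole "?b" MachineGenerated
             $ addHole "?user" UserDefined Map.empty
      print (unsolved hs)   -- ["?b"] is still to be solved
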
There was a check on evaluating lets which was in Blodwen but which I
hadn't yet added to the normaliser! Also, normalisation needs to reduce
'as' patterns for unification, but not when reducing finished LHS and
argument terms. This is a bit of a hack (but then, so is the
implementation of 'as' patterns in general...).
So, when we're checking a nested expression, we have the as pattern as a
let bound variable (so that it has the necessary computational force)
but when we compile we just pass it as an ordinary argument, then it
gets the desired behaviour in case trees.
It's not quite there yet, though, because the treatment of 'as' patterns
isn't quite right and the slightly hacky approach we're taking might not
be the best. Rethinking now...
This is not quite an occurs check (there's no guarantee anyone will ask
for normalisation after all) but it catches the error which will result
if there's a cyclic metavariable solution.
My plan is to implement the occurs check by going through all the
metavariables at the end of elaboration of an expression, so that we
don't do too much work checking half-finished solutions.
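A rough Haskell sketch of the check, over a made-up term type (this is
just the idea, not the actual code):

    import qualified Data.Map as Map
    import Data.Map (Map)

    -- Toy term language: metavariables, applications, constructors.
    data Term = Meta Int | App Term Term | Con String deriving Show

    type Solutions = Map Int Term

    -- Does metavariable m occur in a term, expanding any solved metas it
    -- mentions (each at most once, so cycles can't make us loop)?
    occurs :: Solutions -> Int -> Term -> Bool
    occurs sols m = go []
      where
        go seen (Meta n)
          | n == m        = True
          | n `elem` seen = False
          | otherwise     = maybe False (go (n : seen)) (Map.lookup n sols)
        go seen (App f a) = go seen f || go seen a
        go _    (Con _)   = False

    -- Run the check over every solution once elaboration of the
    -- expression is done, not on each half-finished solution on the way.
    cyclicMetas :: Solutions -> [Int]
    cyclicMetas sols = [ m | (m, tm) <- Map.toList sols, occurs sols m tm ]

    main :: IO ()
    main = print (cyclicMetas (Map.fromList
              [ (0, App (Con "S") (Meta 1))
              , (1, App (Con "S") (Meta 0)) ]))   -- both cyclic: [0,1]
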
Changed nested names (and case blocks) to store the resolved name as the
outer name, rather than the unresolved name, otherwise we'll have issues
when loading from TTC
This is for tracking locally defined names, so that we can expand their
applications to the full globally lifted name. At the moment, they don't
do anything, but since it's a change throughout the elaborator it
probably deserves its own commit.
Since the NF might refer to hole names, and those hole names might be
possible to evaluate now, we'll need to recalculate the expected type's
normal form before rerunning the delayed elaborator
Works by running all possible elaborators and checking that exactly one
succeeds. Still to do: pruning the list of elaborators by target type,
dealing with 'UniqueDefault', checking that delaying on failure works as
it should.
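Roughly the shape of it, as a toy Haskell sketch (made-up names, and a
plain Either standing in for the real elaborator monad):

    import Data.Either (rights)

    -- One hypothetical elaborator alternative: an error or a result.
    type Elab a = Either String a

    -- Run every alternative and insist that exactly one succeeds.
    -- (No pruning by target type, no 'UniqueDefault' handling yet.)
    exactlyOne :: [Elab a] -> Elab a
    exactlyOne alts =
      case rights alts of
        [res] -> Right res
        []    -> Left "no alternative succeeded"
        _     -> Left "ambiguous: more than one alternative succeeded"

    main :: IO ()
    main = do
      print (exactlyOne [Left "nope", Right "Nat literal"] :: Elab String)
      print (exactlyOne [Right "Nat", Right "Integer"]     :: Elab String)
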
When we encounter them, note that they're a binding as normal, but also
record the thing they expand to. Then bind as a PLet, and convert that
to a Let on the RHS so it has computational force. The case tree
compiler knows about as patterns, so they get compiled to use the
appropriate name on the rhs (rather than a let).
Search on syntax rather than values, since this makes it easier to deal
with local variables accurately, and since we need to go back to syntax
anyway. We ensure that target types are normalised before search.
We'll need this for interface resolution - build up applications of fst
and snd as we go deeper. This also adds options for controlling names of
pairs, proofs, primitives, etc.
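Roughly the idea for the nested-pair projections, as a toy Haskell
sketch (made-up types and names):

    -- Toy nested pair types, just to show the projection idea.
    data Ty = Pair Ty Ty | Atom String deriving Show

    -- A projection built out of fst/snd applications, outermost first.
    data Proj = Here | Fst Proj | Snd Proj deriving Show

    -- Every component of a nested pair, together with the fst/snd
    -- application that reaches it from the outermost pair.
    components :: Ty -> [(Proj, Ty)]
    components t@(Atom _) = [(Here, t)]
    components (Pair l r) =
      [ (Fst p, a) | (p, a) <- components l ] ++
      [ (Snd p, a) | (p, a) <- components r ]

    -- Render a projection as the term it stands for, applied to a name.
    render :: Proj -> String -> String
    render Here    x = x
    render (Fst p) x = render p ("fst (" ++ x ++ ")")
    render (Snd p) x = render p ("snd (" ++ x ++ ")")

    main :: IO ()
    main = mapM_ (\(p, a) -> putStrLn (render p "env" ++ " : " ++ show a))
                 (components (Pair (Atom "Show a")
                                   (Pair (Atom "Eq a") (Atom "Ord a"))))
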
This is for updates we make while branching in an elaborator - we don't
want to update the context immediately, because if the branch fails,
we'll need to revert, so for safety we'll note the updates we're going
to make and only complete the update when we're back at the top level.
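Toy Haskell sketch of the 'note now, commit later' idea (made-up names;
a Map stands in for the real context):

    import qualified Data.Map as Map
    import Data.Map (Map)

    type Context = Map String String     -- toy context: name -> definition
    type Updates = [(String, String)]    -- updates noted while branching

    -- Inside a branch we only note the updates we intend to make...
    noteUpdate :: String -> String -> Updates -> Updates
    noteUpdate n def us = (n, def) : us

    -- ...and only apply them to the real context once the branch has
    -- succeeded and we're back at the top level. A failed branch just
    -- drops its update list, so nothing needs reverting.
    commitUpdates :: Updates -> Context -> Context
    commitUpdates us ctxt = foldr (uncurry Map.insert) ctxt us

    main :: IO ()
    main = do
      let ctxt    = Map.fromList [("plus", "<definition>")]
          pending = noteUpdate "?h0" "S Z" []
      print (commitUpdates pending ctxt)
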
This only searches for lambdas and locals so far, and doesn't take into
account that it shouldn't search for holes, but it does set up the basic
mechanism.
Since they're saved, they need to be unique across different input
files, so put them in a namespace. We might also add more metavariables
in the REPL, so save the name of the next variable in the current
namespace.
Only save the parts of terms and definitions which are needed for
typechecking and evaluation (so, for example, we don't need types for
machine generated metavariables, don't need types on lambdas, etc).
Added Core.Directory (more or less directly from Blodwen) and now
writing out TTCs from the test Main, but it's still not writing out
definitions because we're not yet making a note of the names which need
saving (this will be the defined names and any holes introduced).
Also it's clear from logging that holes/guesses/constraints aren't being
cleared so this needs to be fixed next.
Mostly this is being copied from Blodwen, with an additional case for
dealing with 'as' patterns more nicely. Still some bits to do to finish
it off though.
We'll use fresh names for the variables in the environment, then
recalculate the local variables as we build the case tree (we don't know
how many we'll need in advance, especially given 'as' patterns)
Also implement 'subst' for this (it may turn out to have other uses too)
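A toy Haskell sketch of 'subst' over a named term language (made-up
constructors; the real one works over well-scoped terms, so this is
only the idea):

    -- Toy term language with named variables, enough to show 'subst'.
    data Term = Var String | App Term Term | Lam String Term
      deriving Show

    -- Substitute 'tm' for the variable 'x'. Since we only substitute for
    -- machine-generated fresh names, we don't worry about capture here.
    subst :: String -> Term -> Term -> Term
    subst x tm (Var y)
      | x == y           = tm
      | otherwise        = Var y
    subst x tm (App f a) = App (subst x tm f) (subst x tm a)
    subst x tm (Lam y sc)
      | x == y           = Lam y sc            -- shadowed: stop here
      | otherwise        = Lam y (subst x tm sc)

    main :: IO ()
    main = print (subst "pat0" (Var "x")
                    (App (Var "pat0") (Lam "y" (Var "pat0"))))
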
We do this because, working left to right, if there's an implicit
argument we won't necessarily know the types of later arguments before
elaborating them. But, since we want to have type-directed
disambiguation of names, it's better to learn the values of those
implicits from the return types than from the elaboration of the later
arguments, because then we'll be able to use that type to help
disambiguate later arguments if necessary.
Since the context is mutable state, we need to record changes (that's
basically the variables we've solved and the next name id to use) and
undo them one by one. So this will make backtracking potentially a
little bit expensive, but at least the updates are constant time!
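A toy Haskell sketch of the undo log idea (made-up names; an IORef
stands in for the real mutable context):

    import Data.IORef
    import qualified Data.Map as Map
    import Data.Map (Map)

    -- Toy mutable context: solved metavariables and the next name id.
    data Ctxt = Ctxt { solutions :: Map Int String, nextName :: Int }
      deriving Show

    -- One recorded change that we know how to undo.
    data Change = Solved Int      -- metavariable we solved (undo: unsolve)
                | NextName Int    -- previous next-name id (undo: restore)

    -- Solving a hole and minting a name both record what they changed.
    solve :: IORef Ctxt -> IORef [Change] -> Int -> String -> IO ()
    solve ctxt changes m tm = do
      modifyIORef ctxt (\c -> c { solutions = Map.insert m tm (solutions c) })
      modifyIORef changes (Solved m :)

    freshName :: IORef Ctxt -> IORef [Change] -> IO Int
    freshName ctxt changes = do
      c <- readIORef ctxt
      writeIORef ctxt (c { nextName = nextName c + 1 })
      modifyIORef changes (NextName (nextName c) :)
      pure (nextName c)

    -- Backtracking: undo the recorded changes one by one, newest first.
    undoAll :: IORef Ctxt -> [Change] -> IO ()
    undoAll ctxt = mapM_ undo
      where
        undo (Solved m)   =
          modifyIORef ctxt (\c -> c { solutions = Map.delete m (solutions c) })
        undo (NextName n) =
          modifyIORef ctxt (\c -> c { nextName = n })

    main :: IO ()
    main = do
      ctxt    <- newIORef (Ctxt Map.empty 0)
      changes <- newIORef []
      _ <- freshName ctxt changes
      solve ctxt changes 0 "Z"
      readIORef changes >>= undoAll ctxt   -- branch failed: revert it all
      readIORef ctxt >>= print             -- back to the original context
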
If an argument's type isn't known (i.e. it's still a metavariable) make
a hole for it, and delay its elaboration until the other arguments have
been elaborated.
This is to help type-directed disambiguation (so we know as much as
possible about the target type before elaborating an argument).
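Toy Haskell sketch of the ordering (made-up names, and a Maybe standing
in for 'is the type still a metavariable?'):

    import Data.List (partition)
    import Data.Maybe (isJust)

    data Arg = Arg { argName :: String, knownType :: Maybe String }
      deriving Show

    -- Elaborate arguments whose types are already known first; arguments
    -- whose type is still a metavariable get a hole and are delayed to a
    -- second pass, by which point elaborating the others may have solved
    -- that metavariable.
    elabOrder :: [Arg] -> [Arg]
    elabOrder args = now ++ delayed
      where (now, delayed) = partition (isJust . knownType) args

    main :: IO ()
    main = print (map argName (elabOrder [ Arg "x" Nothing
                                         , Arg "y" (Just "Nat")
                                         , Arg "z" (Just "Bool") ]))
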
Tricky bit is the 'NameRefs'. When we load a file back in, we load a
mapping from resolved ids in the file to their full names, and
immediately resolve them in the current context so that we have the
correct ids after loading.
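Toy Haskell sketch of the remapping (made-up names; the context here is
pared down to just the name-to-id table):

    import qualified Data.Map as Map
    import Data.Map (Map)

    type Name = String

    -- What gets saved alongside the definitions: a table from the
    -- resolved ids used in that file to the full names they stand for.
    type NameRefs = [(Int, Name)]

    -- The current context assigns its own ids to full names.
    data Ctxt = Ctxt { byName :: Map Name Int, nextId :: Int }

    -- Resolve a name in the current context, adding it if it's new.
    resolve :: Name -> Ctxt -> (Int, Ctxt)
    resolve n c =
      case Map.lookup n (byName c) of
        Just i  -> (i, c)
        Nothing -> ( nextId c
                   , Ctxt (Map.insert n (nextId c) (byName c)) (nextId c + 1))

    -- On loading a file, immediately translate its ids to ids that are
    -- correct in the current context.
    remap :: NameRefs -> Ctxt -> (Map Int Int, Ctxt)
    remap refs c0 = foldl step (Map.empty, c0) refs
      where
        step (tbl, c) (fileId, n) =
          let (here, c') = resolve n c
          in (Map.insert fileId here tbl, c')

    main :: IO ()
    main = do
      let ctxt = Ctxt (Map.fromList [("Prelude.Nat", 0)]) 1
          (tbl, _) = remap [(0, "Main.plus"), (1, "Prelude.Nat")] ctxt
      print tbl   -- file id 0 -> 1 (new here), file id 1 -> 0 (known)
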
This checks lambdas where the type isn't immediately a pi binder. It
first normalises; if that doesn't work, it just guesses and unifies with
the expected type.
Hole list represented as a map now for quicker deletion of solved holes.
Unification returns whether it solved any holes as part of its result,
so we know if it's worth going back to solve constraints if progress has
been made.
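Toy Haskell sketch of the 'retry while progress' loop (made-up names;
constraints here are just the names of the holes they'd solve):

    import qualified Data.Map as Map
    import Data.Map (Map)

    -- Holes kept in a map so that deleting a solved hole is cheap.
    type Holes = Map String ()

    -- Unification reports whether it managed to solve any holes.
    data UnifyResult = UnifyResult { holesSolved :: Bool, remaining :: Holes }

    -- Toy 'unify one constraint': solves the named hole if it's open.
    solveConstraint :: String -> Holes -> UnifyResult
    solveConstraint h hs
      | Map.member h hs = UnifyResult True (Map.delete h hs)
      | otherwise       = UnifyResult False hs

    -- Keep retrying the postponed constraints, but only as long as the
    -- last pass actually made progress (solved at least one hole).
    retryAll :: [String] -> Holes -> Holes
    retryAll cs hs
      | progress  = retryAll cs hs'
      | otherwise = hs'
      where
        (progress, hs') = foldl step (False, hs) cs
        step (p, acc) c =
          let r = solveConstraint c acc
          in (p || holesSolved r, remaining r)

    main :: IO ()
    main = print (Map.keys (retryAll ["?a", "?b"]
                     (Map.fromList [("?a", ()), ("?c", ())])))
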
Changed 'check' so that it returns the type as a 'Glued', which is a
pair of a term and its normalised form (or, more accurately, the process
to generate each in Core). Sometimes the elaborator needs Term,
sometimes it needs NF; this way we can return both but only actually
generate it when it's needed.
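A rough Haskell sketch of the idea, using plain laziness where the real
thing uses Core computations (made-up names):

    -- Toy stand-ins for the real Term and NF types.
    newtype Term = Term String deriving Show
    newtype NF   = NF   String deriving Show

    -- A 'Glued' type: the term form and the normalised form. Thanks to
    -- laziness, whichever side the caller never forces is never computed.
    data Glued = MkGlue { getTerm :: Term, getNF :: NF }

    -- Building a Glued costs nothing up front; normalisation only
    -- happens if someone actually asks for the NF.
    glueType :: String -> Glued
    glueType src = MkGlue (Term src) (NF (expensiveNormalise src))
      where expensiveNormalise s = "whnf(" ++ s ++ ")"  -- stands in for
                                                        -- real evaluation
    main :: IO ()
    main = do
      let ty = glueType "plus x y"
      print (getTerm ty)     -- only the Term side is forced here
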
So far, mostly done applications, binders and variables. Unification is
still to do.
Better to have the elaborator across multiple files, for ease of
navigation. Also abandon the idea of local unification variables -
we'll keep them global, and try let binding them on leaving a scope.
Also removed 'MV' as a special name type, and added a Meta constructor
in TT which is applied to exactly the right number of arguments for the
environment the meta was constructed in.
It's possible the local unification UCtxt may be more trouble than it's
worth! Instead perhaps we can try let binding unification solutions where
possible on leaving the scope in which they're introduced.
Unification based only on constructor arguments and matching - anything
else fails. Idea is to solve 'easy' local unification problems quickly,
without having to create a global metavar applied to the current context.
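Toy Haskell sketch of the 'easy cases only' unifier (made-up names; no
occurs check and no handling of already-solved variables):

    import qualified Data.Map as Map
    import Data.Map (Map)

    -- Toy values: local unification variables and constructor forms.
    data Val = UVar Int | VCon String [Val] deriving Show

    type Subst = Map Int Val

    -- Only two easy cases: a bare unification variable on either side,
    -- or the same constructor on both sides (then unify the arguments
    -- pairwise). Anything else fails and would fall back to the general
    -- machinery.
    unify :: Val -> Val -> Subst -> Maybe Subst
    unify (UVar i) v sub = Just (Map.insert i v sub)
    unify v (UVar i) sub = Just (Map.insert i v sub)
    unify (VCon c xs) (VCon d ys) sub
      | c == d && length xs == length ys = unifyAll (zip xs ys) sub
      | otherwise                        = Nothing

    unifyAll :: [(Val, Val)] -> Subst -> Maybe Subst
    unifyAll [] sub              = Just sub
    unifyAll ((x, y) : rest) sub = unify x y sub >>= unifyAll rest

    main :: IO ()
    main = print (unify (VCon "S" [UVar 0])
                        (VCon "S" [VCon "Z" []]) Map.empty)
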