This gives useful information for expression search, because we can add
lambdas while we're still building the environment, and start looking at
locals after that.
It turns out we need a new kind of map, mapping possibly ambiguous names
to values. This is what 'Context' does but we can only have one of them
because it resolves names to a position in an array and it'd be
confusing to have more than one index per name.
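Roughly the shape of the thing, as a sketch (the names here are made up for illustration, not the actual structure added): a map keyed on the user-visible root of a name, where each key can carry several fully qualified candidates.

    import Data.Maybe
    import Data.SortedMap

    -- Hypothetical sketch: each root name maps to every (namespace, value)
    -- pair that shares it, so a lookup can return more than one candidate.
    record AmbigMap a where
      constructor MkAmbigMap
      entries : SortedMap String (List (List String, a))

    lookupAmbig : String -> AmbigMap a -> List (List String, a)
    lookupAmbig n m = fromMaybe [] (lookup n (entries m))

    addName : (ns : List String) -> String -> a -> AmbigMap a -> AmbigMap a
    addName ns n v m
        = MkAmbigMap (insert n ((ns, v) :: lookupAmbig n m) (entries m))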
Mostly direct from Blodwen (some minor modifications to deal with the new
way of going into a new scope in the elaborator as well as the usual
bits dealing with name lookup and Glued terms)
And process them on loading. We record that hints need saving out when
adding them, and clear that list unless we happen to be reexporting the
thing we've just read (import public).
Like Idris 1, these are implicitly added on encountering a repeated name
or a non-constructor application. Unlike Idris 1 (and Blodwen) they are
checked by unification rather than matching (which means in particular
that function argument names can't be bound in dot patterns) which is
slightly less expressive, but better overall because matching is
potentially more error prone.
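For example (just an illustration of the behaviour, not code from the patch): the first argument pattern below is a non-constructor application, so it becomes a dot pattern and is checked by unification against what matching on MkSquare forces it to be.

    data Square : Nat -> Type where
      MkSquare : (k : Nat) -> Square (k * k)

    -- 'k * k' is not a constructor form, so it's implicitly dotted;
    -- 'k' is bound by the MkSquare pattern, not by the dot pattern
    root : (n : Nat) -> Square n -> Nat
    root (k * k) (MkSquare k) = k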
Slight change of plan: instead of having special names, add Lazy, Inf,
Delay and Force as keywords and elaborate them specially.
Correspondingly, add DelayCase for case trees. Given that implicit
laziness is important, it seems better to do it this way than to keep
checking whether the name we're working with is one of the special ones.
This turns out to make implicit laziness much easier, because the
unifier can flag whether it had to go under a 'Delayed' to succeed, and
report that back to the elaborator which can then insert the necessary
coercion.
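For instance, with the usual lazy if-then-else (the names here are illustrative), all the Delay and Force annotations are inserted by the elaborator:

    myIf : Bool -> Lazy a -> Lazy a -> a
    myIf True  t _ = t      -- t : Lazy a, result : a, so a Force is inserted
    myIf False _ e = e

    describe : List Nat -> String
    describe xs = myIf (null xs) "empty" ("length " ++ show (length xs))
                  -- both branch arguments get an implicit Delay here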
This shortcuts ambiguity checking by only keeping the things which have
the target type. Given a set of ambiguous applications, the rules are:
- keep any which return the expected target type (a concrete match), or
a polymorphic type (a possible match)
- if there are any concrete matches, drop any possible match which has
the '%allow_overloads' flag set (in practice this will be interface
methods, but other things can be flagged).
- if there are no matches, keep everything so that the error messages
show everything that went wrong.
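In code, the pruning step amounts to something like this (hypothetical types, not the elaborator's own):

    import Data.List

    data MatchKind = Concrete | Possible Bool | NoMatch  -- Bool: %allow_overloads set?

    isConcrete : (tm, MatchKind) -> Maybe tm
    isConcrete (t, Concrete) = Just t
    isConcrete _ = Nothing

    isPossible : (haveConcrete : Bool) -> (tm, MatchKind) -> Maybe tm
    isPossible haveConcrete (t, Possible overloadable)
        = if haveConcrete && overloadable then Nothing else Just t
    isPossible _ _ = Nothing

    pruneByTarget : List (tm, MatchKind) -> List tm
    pruneByTarget cands =
        let concrete = mapMaybe isConcrete cands
            possible = mapMaybe (isPossible (not (null concrete))) cands
        in case concrete ++ possible of
                [] => map fst cands  -- nothing matched: keep all for error reporting
                ok => ok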
Most notably, when elaborating a deferred argument, if the hole
standing for the argument is still a hole, fill it in directly rather
than going via unification. This prevents some needless evaluation.
This slows things down a bit because to find the holes and give them the
right multiplicities, we need to normalise all the arguments which might
have been metavariables. Maybe we should skip this if we're not using
anything linear, for efficiency?
As patterns are handled by deciding which side of the as is considered
'used'. In case blocks, that should be the variable name, but in general
it should be the pattern, so IAs now has a flag to say which one.
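For reference, the two situations look like this at the surface level (an illustration only, not from the patch):

    -- 'whole@(x :: rest)' in a top-level clause, 'ys@(y :: _)' in a case block
    dedupHead : List Nat -> List Nat
    dedupHead whole@(x :: rest)
        = case rest of
               ys@(y :: _) => if x == y then ys else whole
               [] => whole
    dedupHead [] = []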
Need to record which holes are still to be solved (not counting
elaborations where there are user defined holes) and check at the end
that they are now solved.
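The check itself is simple, something like the following sketch (String stands in for the compiler's Name type, and isSolved is assumed to consult the unification state):

    checkHolesSolved : (isSolved : String -> Bool) -> (recorded : List String)
                       -> Either (List String) ()
    checkHolesSolved isSolved recorded
        = case filter (not . isSolved) recorded of
               [] => Right ()
               unsolved => Left unsolved  -- these still need solutions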
There was a check on evaluating lets which was in Blodwen but which I hadn't
added to the normaliser yet! Also, normalisation needs to reduce as
patterns for unification, but not when reducing finished LHS and
argument terms. This is a bit of a hack (but then, so is the
implementation of as patterns in general...).
So, when we're checking a nested expression, we have the as pattern as a
let bound variable (so that it has the necessary computational force)
but when we compile we just pass it as an ordinary argument, then it
gets the desired behaviour in case trees.
It's not quite there yet, though, because the treatment of 'as' patterns
isn't quite right and the slightly hacky approach we're taking might not
be the best. Rethinking now...
This is not quite an occurs check (there's no guarantee anyone will ask
for normalisation after all) but it catches the error which will result
if there's a cyclic metavariable solution.
My plan is to implement the occurs check by going through all the
metavariables at the end of elaboration of an expression, so that we
don't do too much work checking half-finished solutions.
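On a toy term type (not the compiler's own TT), the check amounts to: a metavariable's solution is cyclic if that metavariable occurs anywhere inside it.

    data Tm = Meta String | Ref String | App Tm Tm | Lam String Tm

    occurs : String -> Tm -> Bool
    occurs m (Meta n)  = m == n
    occurs m (Ref _)   = False
    occurs m (App f a) = occurs m f || occurs m a
    occurs m (Lam _ b) = occurs m b

    -- run over every (metavariable, solution) pair once the expression
    -- has been elaborated
    cyclicSolutions : List (String, Tm) -> List String
    cyclicSolutions sols
        = map fst (filter (\p => occurs (fst p) (snd p)) sols)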