This required a small change to auto implicit search (and I'm still not
sure about this). Search now proceeds right to left over the arguments,
because solving later arguments may resolve earlier ones by unification;
this happens in particular when chasing parent interfaces (which may have
fewer parameters).
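A hypothetical sketch of the kind of situation this is aimed at (the
Monad/MonadState names are purely illustrative, not from the test suite):

    -- 'Monad m' is a parent of 'MonadState s m' and has fewer parameters.
    interface Monad m => MonadState s m where
      get : m s
      put : s -> m ()

    -- Searching the constraints right to left means the MonadState Nat m
    -- search can fix m, after which the Monad m constraint (reached when
    -- chasing the parent) is resolved by unification.
    incr : (Monad m, MonadState Nat m) => m ()
    incr = get >>= \n => put (n + 1)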
At least on Linux, \r needs to be in single quotes as an argument to tr,
or it removes every 'r' instead! Hopefully it also works this way on
Windows...
Now supports with applications on the RHS when auto implicits are
involved. Auto implicit bound names in patterns now become searches on
the rhs in a with-application (I should write this construct up properly
in a paper some time!)
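Roughly the kind of definition this enables (an illustrative sketch, not
one of the new tests): the Ord constraint bound by the pattern becomes a
search on the rhs when the with application is built.

    minList : Ord a => List a -> Maybe a
    minList [] = Nothing
    minList (x :: xs) with (minList xs)
      minList (x :: xs) | Nothing = Just x
      minList (x :: xs) | Just m  = Just (if x < m then x else m)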
Elaborate via either === (homogeneous equality) or ~=~ (heterogeneous
equality), both of which are synonyms for Equal. This recovers the Idris
1 behaviour that equality is homogeneous by default, to reduce the need
for type annotations, but falls back to heterogeneous if that doesn't work.
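For example, using only prelude names (a small sketch):

    -- '=' is tried as the homogeneous reading first, so no annotation
    -- is needed here:
    zeroPlus : (n : Nat) -> 0 + n = n
    zeroPlus n = Refl

    -- The synonyms can also be written directly; both stand for Equal:
    zeroPlusHet : (n : Nat) -> 0 + n ~=~ n
    zeroPlusHet n = Refl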
There's a bit of a trade-off here. It would be better to report the
ambiguity, but this would lead to a need for (I think) excessive
precision in types, which would impact usability. It will always take the
leftmost interface.
Chapter 7 tests added.
Idris 1 will fill in the last metavariables by matching rather than
unification, as a convenience. I still think this is okay, even if it's
a bit hacky, because it's a huge convenience and doesn't affect other
unification problems.
Also abstract over lets in guesses, as in delayed elaborators, to
avoid any difficulties during linearity checking and to make sure that let
bound things don't get re-evaluated.
This is enough to get the Chapter 6 TypeDD tests working
Allow matching rather than unification, as long as it doesn't solve any
metavariables in the process. Along the way I noticed a potential
unification bug: forgetting to update whether holes are solved when
unifying argument lists.
This was left over from Blodwen (where it was also wrong :)) but the way
we apply metavariables now means we don't need to do anything fancy when
unelaborating them for pretty printing.
This has shown up a problem with 'case' which is hard to fix - since it
works by generating a function with the appropriate type, it's hard to
ensure that a let binding's computational behaviour is propagated while
maintaining appropriate dependencies between arguments and keeping the
let so that it only evaluates once. So, I've disabled the computational
behaviour of 'let' inside case blocks. I hope this isn't a big
inconvenience (there are workarounds if it's ever needed, anyway).
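A hypothetical sketch of what the restriction touches, and the workaround
of keeping the let outside the case block:

    -- The Refl relies on the let having computational force (m reduces
    -- to S n). Inside a case block that reduction is no longer
    -- available, so lift the let (or the proof) out of the case.
    succPair : (n : Nat) -> (m : Nat ** m = S n)
    succPair n = let m = S n in (m ** Refl)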
Need to add by full name, due to ordering of loading (the name it's
attached to may not be resolved yet!). This doesn't seem to cause any
performance problems but we can revisit if it does.
Don't use the type of a scrutinee to restrict possible patterns, because
it might have been refined by a Rig0 argument that has a missing case.
Instead, generate all the possible cases and check that the generated
ones are impossible (there's no obvious change in performance).
Small change needed to fix one - assume that given implicits of the
form x@_ arise from types. It's a bit of a hack, but I don't think
there's any need for anything more complicated.
Only valid if unifying the pattern at the end doesn't solve any
metavariables. Also, when elaborating applications of fromInteger etc. to
constants on the LHS, we need to be in expression mode, then reduce the
result later.
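For instance (an ordinary example, not from the test suite), a literal
pattern goes through fromInteger in expression mode and the result is
reduced afterwards so it can be matched:

    isThree : Nat -> Bool
    isThree 3 = True
    isThree _ = False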
This was a slight difference from Blodwen that wasn't accounted for -
there might be lets in the nested environment, so when building the
expanded application type, make sure we go under them.
This wasn't necessary before, since we always inlined, but since we can
now postpone things longer and don't always inline until much later, we
need to know what names everything refers to earlier.
We can't refine by a name in Rig0 because we can't assume it covers all
possibilities, so only refine by such names at the end, at which point we
check that there's only one case left (otherwise we have to match on it,
which is not allowed for an erased thing).
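A rough sketch of the kind of definition this concerns (assuming
Data.Vect and the 0 multiplicity annotation):

    import Data.Vect

    -- 'n' is erased, so it is never matched on directly; coverage comes
    -- from splitting the Vect argument, and only at the end is it
    -- checked that a single case remains for the erased index.
    head' : {0 n : Nat} -> Vect (S n) a -> a
    head' (x :: _) = x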
With the --yaffle flag, you get the old behaviour, which is to invoke the
checker for the core theory (and all the tests are updated appropriately
for this).
This gives useful information for expression search, because we can add
lambdas while we're still building the environment, and start looking at
locals after that.
Mostly direct from Blodwen (some minor modifications to deal with the new
way of going into a new scope in the elaborator, as well as the usual
bits dealing with name lookup and Glued terms).
And process them on loading. We record that hints need saving out when
adding them, and clear that list unless we happen to be reexporting the
thing we've just read (import public).
Like Idris 1, these are implicitly added on encountering a repeated name
or a non-constructor application. Unlike Idris 1 (and Blodwen), they are
checked by unification rather than matching (which means in particular
that function argument names can't be bound in dot patterns). This is
slightly less expressive, but better overall because matching is
potentially more error prone.
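A standard forced-pattern example (illustrative only); the first argument
is a non-constructor application, so it is implicitly dotted and checked
by unification against the type of MkSquare:

    data Square : Nat -> Type where
      MkSquare : (n : Nat) -> Square (n * n)

    -- 'n' is bound by the constructor pattern, not by the dot pattern.
    unsquare : (m : Nat) -> Square m -> Nat
    unsquare (n * n) (MkSquare n) = n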
Slight change of plan: instead of having special names, add Lazy, Inf,
Delay and Force as keywords and elaborate them specially.
Correspondingly, add DelayCase for case trees. Given that implicit
laziness is important, it seems better to do it this way than to
constantly check whether the name we're working with is one of the
special ones.
This turns out to make implicit laziness much easier, because the
unifier can flag whether it had to go under a 'Delayed' to succeed, and
report that back to the elaborator which can then insert the necessary
coercion.
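A small sketch of what the inserted coercions buy (illustrative names):

    -- Both branches are Lazy; callers pass ordinary expressions and the
    -- elaborator inserts Delay around them (and Force at the use site
    -- on the rhs) when the unifier reports it went under a Delayed.
    myIf : Bool -> Lazy a -> Lazy a -> a
    myIf True  t _ = t
    myIf False _ e = e

    useIt : Nat
    useIt = myIf True 1 2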
Most notably, when elaborating a deferred argument, if the hole
standing for the argument is still a hole, fill it in directly rather
than going via unification. This prevents some needless evaluation.
This slows things down a bit because to find the holes and give them the
right multiplicities, we need to normalise all the arguments which might
have been metavariables. Maybe we should skip this if we're not using
anything linear, for efficiency?
As patterns are handled by deciding which side of the as is considered
'used'. In case blocks, that should be the variable name, but in general
it should be the pattern, so IAs now has a flag to say which one.
Need to record which holes are still to be solved (not counting
elaborations where there are user defined holes) and check at the end
that they are now solved.
There was a check on evaluating lets which was in Blodwen but I hadn't
added to the normaliser yet! Also, normalisation needs to reduce as
patterns for unification, but not when reducing finished LHS and
argument terms. This is a bit of a hack (but then, so is the
implementation of as patterns in general...).
So, when we're checking a nested expression, we have the as pattern as a
let bound variable (so that it has the necessary computational force)
but when we compile we just pass it as an ordinary argument, then it
gets the desired behaviour in case trees.
It's not quite there yet, though, because the treatment of 'as' patterns
isn't quite right and the slightly hacky approach we're taking might not
be the best. Rethinking now...
Changed nested names (and case blocks) to store the resolved name as the
outer name, rather than the unresolved name; otherwise we'll have issues
when loading from TTC.
Since the NF might refer to hole names, and those hole names might now be
possible to evaluate, we'll need to recalculate the expected type's
normal form before rerunning the delayed elaborator.
Works by running all possible elaborators and checking that exactly one
succeeds. Still to do: pruning the list of elaborators by target type,
dealing with 'UniqueDefault', checking that delaying on failure works as
it should.
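For instance, the overloaded list syntax is resolved this way (assuming
Data.Vect is available):

    import Data.Vect

    -- Both List and Vect provide Nil and (::); each candidate
    -- elaborator is run and exactly one must succeed against the
    -- expected type.
    three : Vect 3 Nat
    three = [1, 2, 3]

    asList : List Nat
    asList = [1, 2, 3]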
When we encounter them, note that they're a binding as normal, but also
record the thing they expand to. Then bind as a PLet, and convert that
to a Let on the RHS so it has computational force. The case tree
compiler knows about as patterns, so they get compiled to use the
appropriate name on the rhs (rather than a let).
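A small example of the surface behaviour (illustrative, not from the tests):

    -- 'whole' names the entire pattern; it elaborates as a let bound to
    -- (x :: xs) so it has computational force on the rhs, but the case
    -- tree compiler just passes the matched argument under that name.
    tails' : List a -> List (List a)
    tails' [] = [[]]
    tails' whole@(x :: xs) = whole :: tails' xs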