Checking the let expression in full can break sharing when unifying the
types, and it's unnecessary because we've already checked that the type of
the scope unifies with the expected type.
Fixes #63
For some reason, running idris2 _without_ an input file fails, which throws
this test into a spin. We start the interpreter in IDE mode on a socket,
loading an empty module without the Prelude, and then request loading a
file that does not require the Prelude.
Elaborate the scope of a let without the computational behaviour,
meaning that `let x = v in e` is equivalent to `(\x => e) v`. This makes
things more consistent (in that let bindings already don't propagate
inside case or with blocks) at the cost of not being able to rely on the
computational behaviour in types. More importantly, it removes a
significant potential source of slowness.
Fixes #58
If you need the computational behaviour, you can use a local function
definition instead.
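For illustration, a minimal sketch of the difference (assuming Data.Vect and
replicate are available; names made up):

    -- With computational let this elaborated, because n reduced to 3 in the
    -- type of the scope; with this change it no longer does:
    --   noLonger : Vect 3 Nat
    --   noLonger = let n = 3 in replicate n 0
    -- A local definition still reduces during type checking, so this works:
    works : Vect 3 Nat
    works = replicate n 0
      where
        n : Nat
        n = 3
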
Fixes #42. If we don't do this, the name is treated in the same way as
an unbound implicit, which is not what we want, so update with the
method applied to the parameters.
We were only doing implicits, so add auto implicits too. It's slightly
tricky, because we might also have implicits given of the form @{x}
which stands for the next auto implicit.
Fixes #50
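As a rough sketch of the form now handled (made-up names): @{s} on the left
binds the next auto implicit, and @{s} in an application passes it on
explicitly:

    -- The Show constraint is the next auto implicit, bound here as s:
    showIt : Show a => a -> String
    showIt @{s} x = show @{s} x
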
Just like all other pi-bound things, if m is an unbound implicit and we
have m ?x = m y as a unification problem, we can conclude ?x = y because
it has to be true for all ms.
This was implemented in Blodwen but I hadn't got around to it yet for
Idris2... fortunately it's a bit easier in Idris2!
Fixes #44
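A small sketch of the kind of problem this lets us solve (made-up names):
p is an unbound implicit, so unifying p ?n against p (f x) now solves ?n:

    -- The hole in 'mk _' has type p ?n, the expected type is p (f x), and
    -- since p is pi-bound the only solution is ?n = f x.
    apply : ((n : Nat) -> p n) -> (f : Nat -> Nat) -> (x : Nat) -> p (f x)
    apply mk f x = mk _
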
We were only checking parameters, meaning that there were potential
clashes leading to confusing behaviour, and meaning that it was somehow
relevant what the names were in the interface!
Now by marking a method as multiplicity 0, we can explicitly say that
it's compile time only, so we can use it to compute types based on other
erased things - see tests/idris2/interface008 for a small example.
This fixes #8 - at least in that it allows the interface to be expressed
properly now, although the multiplicity annotations mean that
unfortunately it can't be compatible with Idris 1.
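A rough sketch of the idea (made-up names, not the interface008 test itself),
assuming the quantity is written before the method name as for ordinary
definitions:

    -- Elem is compile time only, so index can mention it in its type
    -- without Elem existing at runtime.
    interface Container c where
      0 Elem : Type
      index : c -> Nat -> Maybe Elem
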
A local variable can't be applied to itself when searching (otherwise,
for example, we could end up trying something like id id id id id id etc
forever). So remove it from the environment before searching for its
arguments.
This and the previous patch fix #24. (Or, at least, the minimised cases
reported as part of it!)
We can't begin a search until we know what we're searching for! For some
reason I forgot to add this case, and without it the search space can
explode, or we might find an answer too soon and commit to the wrong
thing!
Fixes #36
This means that even if the relevant parameters aren't used by a method
body, the method can still see what the implicits are (though they will
be 0 multiplicity).
This is relevant to #8, but doesn't really fix it because we still need
a way of saying that methods are 0 multiplicity.
This is part of what we used to have in Enum but I think it's better to
separate the two. Added implementations for Nat, and anything in
Integral/Ord/Neg, so that we get range syntax (at least when it's
implemented) for the most useful cases.
This required a small change to auto implicit search (and I'm still not
sure about this). Now search arguments right to left, because solving
later arguments may resolve earlier arguments by unification and this
can happen in particular when chasing parent interfaces (which may have
fewer parameters).
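For reference, a sketch of the goal (range syntax itself isn't wired up to
this interface yet at this point):

    natRange : List Nat
    natRange = [1..5]      -- via the Nat implementation

    intRange : List Integer
    intRange = [10,8..0]   -- via the Integral/Ord/Neg implementations
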
At least on Linux, \r needs to be in single quotes as an argument to tr
or it removes all the 'r's instead! Hopefully it also works this way on
Windows...
Now supports with applications on the RHS when auto implicits are
involved. Auto implicit bound names in patterns now become searches on
the rhs in a with-application (I should write this construct up properly
in a paper some time!)
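As a rough sketch of the shape involved (made-up example): a with block in a
function with an auto implicit constraint, where the constraint is bound in
each with clause's pattern and has to be found again by search on the right
hand sides:

    showAll : Show a => List a -> List String
    showAll [] = []
    showAll (x :: xs) with (show x)
      showAll (x :: xs) | s = s :: showAll xs
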
Elaborate via either === (homogeneous equality) or ~=~ (heterogeneous
equality) both of which are synonyms for Equal. This is to get the Idris
1 behaviour that equality is homogeneous by default to reduce the need
for type annotations, but heterogeneous if that doesn't work.
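A small sketch of how the two synonyms read (both are just Equal underneath):

    -- Homogeneous: both sides must share a single type.
    homogEx : (x : Nat) -> x === x
    homogEx x = Refl

    -- Heterogeneous: the sides may have different types.
    heterogEx : (x : Nat) -> x ~=~ x
    heterogEx x = Refl
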
There's a bit of a trade off here. It would be better to report the
ambiguity but this would lead to a need for (I think) excessive
precision in types which would impact usability. It will always take the
leftmost interface.
Chapter 7 tests added.
Idris 1 will fill in the last metavariables by matching rather than
unification, as a convenience. I still think this is okay, even if it's
a bit hacky, because it's a huge convenience and doesn't affect other
unification problems.
Also abstract over lets in guesses, like in delayed elaborators, to
avoid any difficulties when linearity checking and to make sure that let
bound things don't get reevaluated.
This is enough to get the Chapter 6 TypeDD tests working
Allow matching rather than unification, as long as it doesn't solve any
metavariables on the way. I also noticed a potential unification bug: we
were forgetting to update whether holes are solved when unifying argument
lists.
This was left over from Blodwen (where it was also wrong :)) but the way
we apply metavariables now means we don't need to do anything fancy when
unelaborating them for pretty printing.
This has shown up a problem with 'case' which is hard to fix - since it
works by generating a function with the appropriate type, it's hard to
ensure that let bindings' computational behaviour is propagated while
maintaining appropriate dependencies between arguments and keeping the
let so that it only evaluates once. So, I've disabled the computational
behaviour of 'let' inside case blocks. I hope this isn't a big
inconvenience (there are workarounds if it's ever needed, anyway).
Need to add by full name, due to ordering of loading (the name it's
attached to may not be resolved yet!). This doesn't seem to cause any
performance problems but we can revisit if it does.
Don't use the type of a scrutinee to restrict possible patterns, because
it might have been refined by a Rig0 argument that has a missing case.
Instead, generate all the possible cases and check that the generated
ones are impossible (there's no obvious change in performance)
Small change needed to fix one - assume given implicits which are of the
form x@_ arise from types. It's a bit of a hack but I don't think
there's any need for anything more complicated.
Only valid if unifying the pattern at the end doesn't solve any
metavariables. Also when elaborating applications of fromInteger etc to
constants on the LHS we need to be in expression mode, then reduce the
result later.
This was a slight difference from Blodwen that wasn't accounted for -
there might be lets in the nested environment, so when building the
expanded application type, make sure we go under them
This wasn't necessary before, since we always inlined, but since we can
now postpone things longer and don't always inline until much later, we
need to know what names everything refers to earlier.
We can't refine by a name in Rig0 because we can't assume it covers all
possibilities, so only refine by them at the end, at which point we
check that there's only one case left (otherwise we have to match on it,
which is not allowed for an erased thing)
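A sketch of the user-visible effect (assuming Data.Vect): an erased argument
can only be matched when the other patterns force it, leaving exactly one
case:

    -- Each match on the erased n is forced by the vector pattern, so a
    -- single case remains by the time n is refined.
    vlen : (0 n : Nat) -> Vect n a -> Nat
    vlen Z     []        = Z
    vlen (S k) (_ :: xs) = S (vlen k xs)
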
With the --yaffle flag, you get the old behaviour which is to invoke the
checker for the core theory (and all the tests are updated appropriately
for this).
This gives useful information for expression search, because we can add
lambdas while we're still building the environment, and start looking at
locals after that.
Mostly direct from Blodwen (some minor modifications to deal with new
way of going into a new scope in the elaborator as well as the usual
bits dealing with name lookup and Glued terms)
And process them on loading. We record that hints need saving out when
adding them, and clear that list unless we happen to be reexporting the
thing we've just read (import public).
Like Idris 1, these are implicitly added on encountering a repeated name
or a non-constructor application. Unlike Idris 1 (and Blodwen) they are
checked by unification rather than matching (which means in particular
that function argument names can't be bound in dot patterns) which is
slightly less expressive, but better overall because matching is
potentially more error prone.
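A minimal sketch of when one gets inserted (made-up types): the repeated n
is treated as a dot pattern and checked by unification against the index:

    data Sing : Nat -> Type where
      MkSing : (n : Nat) -> Sing n

    -- The n inside MkSing repeats the outer n, so it becomes a dot pattern
    -- rather than a fresh binding.
    unsing : (n : Nat) -> Sing n -> Nat
    unsing n (MkSing n) = n
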
Slight change of plan: instead of having special names, add Lazy, Inf,
Delay and Force as keywords and elaborate them specially.
Correspondingly, add DelayCase for case trees. Given that implicit
laziness is important, it seems better to do it this way than to keep
checking whether the name we're working with is one of the special names.
This turns out to make implicit laziness much easier, because the
unifier can flag whether it had to go under a 'Delayed' to succeed, and
report that back to the elaborator which can then insert the necessary
coercion.
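A sketch of what the inserted coercions buy us (made-up names): neither
Delay nor Force has to be written by hand:

    -- Arguments are implicitly Delayed at the call site, and the chosen
    -- branch is implicitly Forced when it's returned at type a.
    myIf : Bool -> Lazy a -> Lazy a -> a
    myIf True  t _ = t
    myIf False _ e = e

    useIt : Nat
    useIt = myIf True 1 2
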
Most notably, when elaborating a deferred argument, if the hole
standing for the argument is still a hole, fill it in directly rather
than going via unification. This prevents some needless evaluation.
This slows things down a bit because to find the holes and give them the
right multiplicities, we need to normalise all the arguments which might
have been metavariables. Maybe we should skip this if we're not using
anything linear, for efficiency?
As patterns are handled by deciding which side of the as is considered
'used'. In case blocks, that should be the variable name, but in general
it should be the pattern, so IAs now has a flag to say which one.
Need to record which holes are still to be solved (not counting
elaborations where there are user defined holes) and check at the end
that they are now solved.
There was a check on evaluating lets which was in Blodwen but I hadn't
added to the normaliser yet! Also, normalisation needs to reduce as
patterns for unification, but not when reducing finished LHS and
argument terms. This is a bit of a hack (but then, so is the
implementation of as patterns in general...).
So, when we're checking a nested expression, we have the as pattern as a
let bound variable (so that it has the necessary computational force)
but when we compile we just pass it as an ordinary argument, then it
gets the desired behaviour in case trees.
It's not quite there yet, though, because the treatment of 'as' patterns
isn't quite right and the slightly hacky approach we're taking might not
be the best. Rethinking now...
Changed nested names (and case blocks) to store the resolved name as the
outer name, rather than the unresolved name, otherwise we'll have issues
when loading from TTC
Since the NF might refer to hole names, and those hole names might be
possible to evaluate now, we'll need to recalculate the expected type's
normal form before rerunning the delayed elaborator
Works by running all possible elaborators and checking that exactly one
succeeds. Still to do: pruning the list of elaborators by target type,
dealing with 'UniqueDefault', checking that delaying on failure works as
it should.
When we encounter them, note that they're a binding as normal, but also
record the thing they expand to. Then bind as a PLet, and convert that
to a Let on the RHS so it has computational force. The case tree
compiler knows about as patterns, so they get compiled to use the
appropriate name on the rhs (rather than a let).
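For reference, a small example of the construct (made-up names): whole
stands for the matched _ :: xs on the right hand side:

    -- whole is elaborated as a PLet, so it has computational force on the
    -- rhs; the case tree compiler then uses the appropriate name directly
    -- rather than emitting a let.
    tails : List a -> List (List a)
    tails [] = [[]]
    tails whole@(_ :: xs) = whole :: tails xs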