The prelude interfaces that have default definitions for all of
their fields are declared total so that users are forced to think
about meeting the minimal requirements for an implementation to be
valid.
* Add field for universe level to TType
This doesn't do anything yet, other than introduce new universe
variables whenever we introduce a new type, but it's the first step
towards checking the universe hierarchy. The next step is to add constraints
when checking pi, unifying/converting types, and when adding data
constructors.
* TTC version increment
Thought I'd done this, but apparently I didn't save the file. Oops!
* Add structure for universe constraints
* Fix display of ambiguity errors
We need to store the Context in errors at the point where the error
occurs, or we might get some nonsense in the message. There are still a
couple of places in Error where we don't do this right. This fixes one
of them, and improves a few messages in the process.
* deprecate Data.Nat.Order.decideLTE
* Add properties for LTE/GTE that produce the difference.
* remove deprecated function now that it is available in the base library.
* remove two deprecated lines.
* remove module deprecated since v0.4.0
* fix prelude reference to renamed primitive.
* finish removing Data.Num.Implementations
* remove deprecated dirEntry function.
* remove deprecated fastAppend. Update CHANGELOG.
* replace fastAppend in test case
* replace fastAppend uses in compiler.
* remove new properties that weren't actually very new.
* Implemented %noinline (usage sketch below)
* Removed trailing spaces.
* Added missing case in Reify FnOpt
* Added error message when both %inline and %noinline are set.
* Added test.
* Changed from perror to error
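A hedged sketch of the %noinline pragma described in this group of commits, assuming it is placed like %inline, directly before the declaration it applies to (the function below is made up):
```idris
-- hypothetical example: ask the compiler never to inline this definition
%noinline
addTwice : Nat -> Nat
addTwice n = n + n
```
As noted above, marking the same definition with both %inline and %noinline is now reported as an error.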
* Fix casts in scheme evaluator
We really need test cases for all the primitives before we can use this
evaluator properly. Also test cases that run inside an environment,
which are a bit harder to construct.
* Add the cast fixes to racket support code
* More racket compile time evaluation fixes
We had the chez version of some primitives in the ct-support file. We
need a full set of tests for the primitives here too...
* Normalise types fully at the REPL
It was a bit odd that we only normalised the scope of function types and
not the arguments, and I can't remember the reason for that, if there
even was one.
* Better way of using nf_metavars_threshold
If a term is getting big and probably needs normalising, we now have a
sizeLimit flag in quote, so we can use that instead of checking the size
afterwards. This is a handy heuristic for speeding up unification when
there's a term with lots of suspended computation. Fixes #1991
For error reporting purposes it's better to have an (approximate)
virtual location for code that was introduced by the elaborator
than to have an `EmptyFC` that does not help.
* so much experimentation
* tests that show preliminary evidence the new stuff is working.
* small amount of cleanup
* more cleanup of various troubleshooting code.
* new test case added.
* only log unreachable indices if there are any.
* when traversing deeper simply skip over defaults since they have already been reviewed.
* Remove fallback clause that the changes in this PR correctly identified as unreachable.
* tidying up more.
* move some common functions to a new Core.Case.Util module.
* refer to case builder and case tree under new parent module.
* update imports to look for CaseTree in new submodule.
* update api ipkg
* remove unneeded application operators.
* remove or comment out unreachable default clauses caught by the changes in this PR.
* a bit of code documentation and renaming for clarity.
* bump previous version in CI.
* fix API usage of Util module.
* Add issue 1079 test cases.
* forgot to add new test cases file.
* remove commented-out lines by request of RefC author.
* Use a SortedSet instead of nubbing a list.
* update new case tree import.
* Update src/Core/Case/Util.idr
Co-authored-by: G. Allais <guillaume.allais@ens-lyon.org>
* remove function with nothing to offer above and beyond a differently named copy of the same code.
* replace a large tuple with a record; discover not all of the tuple's fields were needed.
* fix shadowing warning.
Co-authored-by: G. Allais <guillaume.allais@ens-lyon.org>
This is for compiled evaluation at compile time, for full normalisation. You can try it by setting the evaluation mode to scheme (that is, `:set eval scheme` at the REPL). It's certainly an order of magnitude faster than the standard evaluator, based on my playing around with it, although still quite a bit slower than compilation for various reasons, including:
* It has to evaluate under binders, and therefore deal with blocked symbols
* It has to maintain enough information to be able to read back a Term from the evaluated scheme object, which means retaining things like types and other metadata
* We can't do a lot of the optimisations we'd do for runtime evaluation, particularly setting things up so that we don't need to do arity checking
Also added a new option, evaltiming (set with `:set evaltiming`), to display how long evaluation itself takes, which is handy for checking performance.
I also don't think we should aim to replace the standard evaluator, in general, at least not for a while, because that will involve rewriting a lot of things and working out how to make it work as Call By Name (which is clearly possible, but fiddly).
Still, it's going to be interesting to experiment with it! I think it will be a good idea to use it for elaborator reflection and type providers when we eventually get around to implementing them.
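For reference, trying it out looks roughly like this (a hypothetical REPL session; the expression and its output are elided):
```
Main> :set eval scheme
Main> :set evaltiming
Main> <some expression with a lot of compile time evaluation>
```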
Original commit details:
* Add ability to evaluate open terms via Scheme
Still lots of polish and more formal testing to do here before we can
use it in practice, but you can still use ':scheme <term>' at the REPL
to evaluate an expression by compiling to scheme then reading back the
result.
Also added 'evaltiming' option at the REPL, which, when set, displays
how long normalisation takes (doesn't count resugaring, just the
normalisation step).
* Add scheme evaluation mode
The behaviour differs when evaluating everything vs only evaluating visible
things. We want the latter when type checking, the former at the REPL.
* Bring support.rkt up to date
A couple of missing things required for interfacing with scheme objects
* More Scheme readback machinery
We need these things in the next version so that the next-but-one
version can have a scheme evaluator!
* Add top level interface to scheme based normaliser
Also check it's available - currently chez only - and revert to the
default slow normaliser if it's not.
* Bring Context up to date with changes in main
* Now need Idris 0.5.0 to build
* Add SNF type for scheme values
This will allow us to incrementally evaluate under lambdas, which will
be useful for elaborator reflection and type providers.
* Add Quote for scheme evaluator
So, we can now get a weak head normal form, and evaluate the scope of
a binder when we have an argument to plug in, or just quote back the
whole thing.
* Add new 'scheme' evaluator mode at the REPL
Replacing the temporary 'TmpScheme', this is a better way to try out the
scheme based evaluator
* Fix name generation for new UN format
* Add scheme evaluator support to Racket
* Add another scheme eval test
With metavariables this time
* evaltiming now times execution too
This was handy for finding out the difference between the scheme-based
evaluator and compilation. Compilation was something like 20 times
faster in my little test, so that'd be about 400-500 times faster than the
standard evaluator. Ouch!
* Fix whitespace errors
* Error handling when trying to evaluate Scheme
Even though the `Comment` aspect is not (currently) supported
in the IDEMode, this is crucial to get proper highlighting in
Katla's LaTeX & HTML backends.
* Version increment to 0.5.1
This is to remove the requirement on Chez >9.5
* Disable -Xcheck-hashes, at least for the moment
If we're going to have this as an option, we need to have a portable way
of finding a sha256sum command. At the moment, we might find a command,
but different versions accept different options. We should at least
allow setting it via an environment variable, and we certainly shouldn't
fail if running the command fails.
* Update bootstrap code ready for 0.5.1 release
* Abandon auto search on undefined name
These might arise from names in other modules that haven't been
imported. But it's going to be an error whatever, so give up straight
away. Fixes #1925
* Fix typo
* Fix test source
* Record possible cause when we can't solve a goal
Normally, it's just because we searched and failed. But maybe sometimes,
it's because there's an undefined name, in which case, we can include
this in the error message.
This is good to record because it means we don't abandon elaboration at
the wrong time! Say, if a search fails due to an undefined name, but it
was only in one branch of an ambiguous elaboration.
* Add necessary arguments for perf009 test
* Update version numbers and bootstrap scheme
* Use wall clock time for search timeouts
That was always the intention in any case, rather than the process time.
Instead of having UN & RF (& Hole in the near future & maybe even
more later e.g. operator names) we have a single UN constructor
that takes a UserName instead of a String.
UserName is (for now)
```idris
data UserName : Type where
  Basic : String -> UserName      -- default name constructor e.g. map
  Field : String -> UserName      -- field accessor e.g. .fst
  Underscore : UserName           -- no name e.g. _
```
This is extracted from the draft PR #1852 which is too big to easily
debug. Once this is working, I can go back to it.
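As a rough sketch of what the change looks like at use sites (assuming the compiler's `Name` type and the new `UN : UserName -> Name` constructor, e.g. via the idris2api package; the concrete names are just examples):
```idris
import Core.Name   -- from the idris2api package (assumption)

-- names previously built as UN "map" or RF "fst" now go via UserName
exampleNames : List Name
exampleNames = [ UN (Basic "map")   -- an ordinary user-written name
               , UN (Field "fst")   -- a record field accessor, previously RF "fst"
               , UN Underscore      -- the "no name" case
               ]
```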
* Abandon ambiguity resolution on undefined name
This has finally annoyed me enough to do something about it. If we get a
"no such variable" error there's no point exploring other branches.
* Removes spaces from test file
One day I'll update the linter to ignore test files. We should really
accept literally anything as a possibility for test files, even if
removing the spaces is tidier.
* Reset context before throwing in 'successful'
Although I don't think this is strictly necessary for a fatal error, we
should still for the sake of tidiness reset the state when backtracking.
* Move Context into its own file
Just the core definition - this is so that we have access to it in
Core.Core, for inclusion in error messages, to save normalisation of
terms in errors until we actually show them.
* Normalise errors on display, not when they arise
This can save a lot of time in ambiguity resolution if the errors are
complicated, because the errors might never be displayed if it's in an
abandoned branch.
This involves lifting 'Context' out of Core.Context, because we need to
store it in Error, which is needed by Core, which in turn is needed in
Core.Context.
Also moved a couple of test cases from ttimp to idris2, so that the
errors get rendered properly and won't need updating unnecessarily. In
fact all of the ttimp tests - which were just part of the initial
scaffolding - are probably now subsumed by the idris2 tests.
* Add new coverage001 test files
If they don't, we can't turn them into patterns to match on, and we end
up looping. Possibly we could throw a different and maybe more
informative error instead of just making an unmatchable pattern.
Fixes #1895
* [ breaking ] remove parsing of dangling binders
It used to be the case that
```
ID : Type -> Type
ID a = a
test : ID (a : Type) -> a -> a
test = \ a, x => x
```
and
```
head : List $ a -> Maybe a
head [] = Nothing
head (x :: _) = Just x
```
were accepted but these are now rejected because:
* `ID (a : Type) -> a -> a` is parsed as `(ID (a : Type)) -> a -> a`
* `List $ a -> Maybe a` is parsed as `List (a -> Maybe a)`
Similarly if you want to use a lambda / rewrite / let expression as
part of the last argument of an application, the use of `$` or parens
is now mandatory.
This should hopefully allow us to make progress on #1703
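A hedged sketch of the fix for code like the second example above: make the intended grouping explicit instead of relying on the old dangling parse (assuming the usual intent for `head`):
```idris
-- previously written as `head : List $ a -> Maybe a`
head : List a -> Maybe a
head [] = Nothing
head (x :: _) = Just x
```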
* Skip forced arguments in conversion check
This isn't always safe - we have to have also checked the type of the
things we're converting - but in the place where it is safe it can be a
pretty significant saving.
* Use Closures, not NF, in Binders for normal forms
This means we don't need to reduce argument types during unification if
we don't need to. Also, we now try to avoid reducing closures during
unification if they are unified with a metavariable. Together, this
saves a huge amount of unnecessary evaluation in programs that do a lot
of compile time evaluation.
* Didn't mean to update idris2api.ipkg
* Fix dodgy merge
* Stub for future 'identity' optimisation
I plan to add this later, but I'm using it for now for
NaturalToInteger and IntegerToNatural
* Refactor `%builtin`
fixes #1799
- automatically optimise all Natural-shaped things
- NaturalToInteger and IntegerToNatural now use
new `Identity` flag (internal use only for now)
which signals the function is identity at runtime
* Use NaturalToInteger and IntegerToNatural for Nat and Fin
Also define show fin in terms of finToInteger, for speed
* Fix name handling for %builtin
* [ tests ] fixes + #1799
* remove %builtin from libs
Add back after next version
* Use resolved names where convenient
* Add `break` in each case alt in js backend
Fixes #1795
* Remove some unneeded `break`s
* linter
* Follow @stefan-hoeck's advice
This is neater
Note: I renamed breakAfterAssignment because it's too much work to type
* [ test ] Test for #1795
* cleanup: remove unneeded vcat
I can't make sense of this code: it seems to try to convert the
case function corresponding to `let (a, b) = f n in ...` into a
different case function where `f n` and `(a, b)` have been unified.
But if `f n` is a bona fide stuck computation, there's no chance of
this happening.
Just turning this off solves #1782 and only breaks one function
in the whole of the idris2 repo (I would have expected our current
termination oracle to be too weak to detect it as valid anyway!)
and one in frex (which, again, should not have been seen as terminating).
Also fixes #1460
In the `MkFix : f (Fix f) -> Fix f` example, using `Erased` for `f`
makes the type reduce to `[__] (Fix [__]) -> Fix [__]` and because
`[__] e1 ... en` computes to `[__]`, we end up with `[__] -> Fix [__]`
which does not reference `Fix` anymore.
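For concreteness, the `MkFix` example written out as a declaration (a sketch of the shape of type being discussed):
```idris
data Fix : (Type -> Type) -> Type where
  MkFix : f (Fix f) -> Fix f
```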
In theory argument elaboration order doesn't matter, but in practice we
sometimes make choices for performance reasons, like helping with
disambiguation by knowing the target type.
This was kind of messy, now we can more clearly see what's going on.
Also, more importantly, it gives us a bit more control which we
sometimes need. For example, if we choose to go right to left for
some performance heuristic it might turn out we don't have enough
information yet, in which case we need to delay and try again later.
Fixes #1743
The `if then else` syntax expects a block for the `then` and `else`
parts. Before this patch, the token `InterpEnd` was not a valid
follow up token to end a block. This adds `InterpEnd` as a closing
token for blocks, allowing `if then else` in interpolation slices
without additional parens.
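A small example of the kind of expression this now allows (a made-up function, using the standard `\{...}` interpolation syntax):
```idris
greeting : Bool -> String
greeting morning = "Good \{if morning then "morning" else "evening"}!"
```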
Instead of having an arbitrary-looking priority number, record explicit
reasons for the delay, which helps order them sensibly when rerunning
them. Mostly this allows us to choose which ones to rerun, where it
helps, and to order things to get better error messages.
`testInDir dir ...` lists all directories in `dir` which contain
`run` files, and such directories are considered tests.
This is done to make test addition/maintenance cheaper.
Convert some test directories to `testInDir`, but not all of them
because
* some directories are listed in several test groups
* other directories have some tests disabled
The `.` and `..` entries returned by `readdir` are a legacy of the Unix API;
we don't really need them. Most APIs do not return them (for example Java
`Files.newDirectoryStream` or Python `os.listdir`).
* add `nextDirEntry`, which returns `Maybe String`, so `Nothing` at the end
of the directory, unlike `dirEntry`, which returns an unspecified error at
the end of the directory (see the sketch after this list)
* `dirEntry` is deprecated now, but not removed because the compiler depends on it
* the native implementation of `dirEntry` is patched to explicitly reset `errno`
before the `readdir` call: without it, the end of the directory and an error
were indistinguishable
* test added
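A hedged sketch of looping over a directory with the new function (assuming `nextDirEntry : HasIO io => Directory -> io (Either FileError (Maybe String))` in `System.Directory`; error handling kept minimal):
```idris
import System.Directory

-- collect all entries until the end of the directory is reached
listAll : Directory -> IO (List String)
listAll dir = do
  Right (Just entry) <- nextDirEntry dir
    | Right Nothing => pure []   -- end of directory, no bogus error
    | Left _ => pure []          -- or report the error
  map (entry ::) (listAll dir)
```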
* Add trailing newline on non-empty list in unlines
There are several reasons to do that:
* a line in a text file is something which ends with a newline,
and the last line is not special
* `unlines []` should be different from `unlines [""]`
* `unlines (a ++ b) = unlines a ++ unlines b`
* Haskell does it
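A quick sketch of the resulting behaviour (assuming `unlines` from `Data.String`; these are the values implied by the points above):
```idris
import Data.String

unlinesExamples : List Bool
unlinesExamples = [ unlines []         == ""
                  , unlines [""]       == "\n"
                  , unlines ["a", "b"] == "a\nb\n"
                  ]
```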
* Change lines function behaviour
* Propagate 'do qualification' to inner bangs and comprehensions
* Minor
* Remove banner in test
* Move tests from reg045 to reg047
* Move mbNS from Desugar.idr to Name.idr, renaming it to mbApplyNS
The operating system stores pending signals as a set of flags, not a counter.
So sending two signals to a process may result in one or two signal
handler invocations. Queueing signals inside Idris could give users a
false sense that signals are queued, while they are not.
In particular, a test for signals could not work reliably for that
reason.
Also, in practice we usually don't need more than one signal
event.
This is a follow-up to #1660. CC @mattpolzin
The 'with' type and application need to treat the parameters with the
same plicity, but the application has just always treated them as
explicit since it never looked. It's easiest just to make them all
explicit, since this isn't a user-visible type. Fixes #1695.
While the discussion about how to refactor the test framework is not
finished (#1654), make this change: move `rm -rf build` to the
beginning of the test. For these reasons:
* it is useful to inspect the contents of the `build` directory
especially after the test failure
* if build crashes mid-test (e.g. process killed), next run should
not be affected by the `build` directory from the previous run
* add `strerror` function
* move `getErrno` to `System.Errno`
* use `strerror` in `Show FileError`
* on node there's no access to `strerror`, so `strerror` just converts the number to a string
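A hedged usage sketch of the new module (assuming `getErrno : HasIO io => io Int` and `strerror : Int -> String`, per the list above):
```idris
import System.Errno

-- print a human-readable message for the most recent C-level error
reportLastError : IO ()
reportLastError = do
  code <- getErrno
  putStrLn (strerror code)
```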
Ideally we'd have a complete incremental build in CI, but that could be
a bit fiddly to set up at the moment (updating bootstrap code might make
it easier). This tests that the basic facilities work, though - there's
a lot that can go wrong even in a small test like this; trust me, I have
made those mistakes :).
This involves making 'unelab' aware of nested names so that it can
remove the parameters from names in the current block. It's a bit of a
hacky solution, but it is also the easiest one.
Ideally we'd build the getter types directly, rather than using unelab,
but that's one to save for another time.
Fixes #1482
Convert the `App.Control.Exception` interface to an alias for `HasErr`.
Probably the `Exception` interface needs to be deprecated or removed.
Note that a similar problem exists with `PrimIO` calling `PrimIO, Exception`;
that also needs to be fixed.
Fix this scenario:
```
throwBoth : Has [Exception String, Exception Int] es => App es ()
throwOne : Has [Exception Int] es => App es Int
throwOne {es} = handle {err = String} {e = es} throwBoth (\r => pure 1) (\e => pure 3)
```
With this commit it works, before this commit it failed with:
```
Error: While processing right hand side of throwOne. Can't find an implementation for Exception Int (String :: es).
TestException.idr:8:48--8:57
|
8 | throwOne {es} = handle {err = String} {e = es} throwBoth (\r => pure 1) (\e => pure 3)
| ^^^^^^^^^
```
`#pragma once` has been supported by all compilers for the last ten years.
It's better to use it instead of include guards (which use different styles
in different files).
If it's solved by unification, expression search should just print the
unified value. In fact it almost did this, but wasn't reducing the holes
so the result was being rendered incorrectly.
This is set to 1 second by default. Usually if it hasn't found a result
by then, it never will, but given that we find the first batch of
results then sort them, the timeout also stops us fruitlessly searching
for more solutions.
Hopefully 1s is more than enough for CI too. There is a mechanism to
change the timeout (%search_timeout) so if it turns out that CI needs
longer in some cases, we can increase it there.
I haven't documented this yet, but proof/definition search needs
documenting in general. I'll get to that.
The timer mechanism may also be useful elsewhere - I'm considering it
for ambiguity warnings, because the ambiguity depth limit isn't working
very well for that.
The option is hidden behind a flag (`-Xcheck-hashes`) so that by default `touch`ing
a file is enough to get it recompiled.
Co-authored-by: Ben Hormann <benhormann@users.noreply.github.com>
We already did this, but missed a few cases due to the way arguments are
elaborated. So now, when checking an LHS, we don't allow LHS argument
types to be inferred from the pattern, but rather they must be inferred
from elsewhere. To do this, we keep track of the constraints which would
be solved when inferring the type, and make sure they don't solve any
new metavariables. Fixes #1510, and also now gets the error location
right as a bonus!
Because it relies on the source file that I've just fixed for the
linter. I think I've now spent more time pleasing the linter than fixing
the actual bug...
We need to fully evaluate, not just the public export names; otherwise
we don't pattern match properly and potentially generate catch-all
patterns we don't mean to.
Fixes #1537
Why:
* To implement robust cross-project go-to-definition in LSP,
i.e. you can jump to the definition of any global name coming
from library dependencies, as well as from the local project files.
What it does:
* Modify `FC`s to carry `ModuleIdent` for .idr sources,
file name for .ipkg sources or nothing for interactive runs.
* Add `--install-with-src` to install the source code alongside
the ttc binaries. The source is installed into the same directory
as the corresponding ttc file. Installed sources are made read-only.
* As we install the sources pinned to the related ttc files we gain
the versioning of sources for free.
Need to use the 8-byte version of Ints for the shortcut version of
reading TTCs too, otherwise we'll read the wrong hash and build things
unnecessarily.
* Add utility functions to treat All as a heterogeneous container
* Distinguish RefC Int and Bits types
* Change RefC Integers to be arbitrary precision
* Add RefC Bits maths operations
* Make RefC div and mod Euclidean
* Add RefC bit-ops tests
* Add RefC integer comparison tests
* Add RefC IntN support
This saves a lot of unnecessary exploring of size change graphs, which
can get over the top quite quickly if there's complex mutual
definitions, or even just a single function with an interesting variety
of recursive calls.
Fixes #1365, fixes #1277, fixes #645
- Fix off-by-one error in String reverse
- Correct order of arguments in strSubstr
- Actually use start index of strSubstr
- Reduce memory usage of strSubstr in case of overrunning string end
- Add fastPack/fastUnpack/fastConcat
- Use unsigned chars for character comparisons
- Fix generated C character encodings
We do this during desugaring because elaboration may insert valid
`?` values on the LHS (e.g. when elaborating things that cannot be
pattern-matched on and should be checked to be forced).
Where 'small' means they don't refer to other metavariables, except
right at the top level, and they don't go beyond a certain small depth,
arrived at by experimenting.
We already did a bit of this, but only for depth 0. The effect of this
is that we don't need to save out lots of metavariables, so ttc loading
is faster. This takes about 8s off the Idris build time!
We stored them as equations between terms, I think because terms are
easy to re-evaluate with new information, and because I thought we might
want to save them out. It's not usually a problem to do that. However...
Going back and forth between terms and values can be expensive if
we're stuck in the middle of a complicated unification problem, the like
of which can turn up a lot if your types are complicated. So, we need to
be able to handle this.
Now store the postponed problems as NF, rather than Term, and add a
function to resume evaluating an NF with an updated context.
* Banners for test pools
* Summary with the list of failing tests at the end
* Option to write the list of failing tests to a file
* Option to read the list of tests to run from a file
* Using these two latest features to add a new make target to rerun precisely the tests that failed last time
It has always bothered me that 'False' got mapped to tag 1 and 'True'
got mapped to tag 0. This doesn't change much in practice (except that
perhaps a code generator might notice some useful things in intToBool)
but I'm changing it now anyway. Also added a couple of inlinings of
boolean operations.
This saves a small amount of allocation, especially since we never
actually look at the tag in a record. We can use null? for Nothing just
like for Nil.