We've always just used 0, which isn't correct if the function is going
to be used in a runtime pattern match. Now calculate correctly so that
we're explicit about which type level variables are used at runtime.
This might cause some programs to fail to compile, if they use functions
that calculate Pi types. The solution is to make those functions
explicitly 0 multiplicity. If that doesn't work, you may have been
accidentally trying to use compile-time only data at run time!
Fixes #1163
When bootstrapping, we're building things without packages being
available, so we can't expect to find them when looking for
dependencies. So, we find them another way, with an environment
variable. This flag is to tell Idris not to worry about missing
dependencies in this situation.
We also need to update the bootstrapping code, to deal with the new
version number format and new flag in the ipkg files for the libraries.
I think it's still safe to build from the previous version though - let's
see if CI agrees!
Packages are now installed in a directory with their version number.
On adding a package directory, we now look in a local 'depends'
directory first (to allow packages to be installed locally to another
project) before the global install directory.
Dependencies can have version bounds (details yet to be implemented) and
we pick the package with the highest version number that matches.
This is an alternative to using `fastUnpack` and `fastPack` to work
on lists of characters.
Using this to refactor the lexer and benchmarking the resulting
compiler on building idris2 shows it's 3 to 5s slower than the
current implementation that goes via `List Char`.
This may be backend-dependent so I still push this to contrib,
with the plan of running further benchmarks in the future.
`countFrom` must have been made public accidentally:
* it is defined in the ranges section of the file, not the stream section
* it is used only in `Range` implementation
* the same function `iterate` is defined in `Data.Stream`
```
countFrom start next
```
is the same as
```
iterate next start
```
Added several functions for `Dec`. The set of functions and names
is chosen to be consistent with `Maybe`:
* `isNothing` -> `isNo`
* `isJust` -> `isYes`
* `IsJust` -> `IsYes`
* `isItJust` -> `isItYes`
This is a follow-up to #942
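As a rough sketch of the shape these helpers take (the actual definitions
in the library may differ in detail):
```
-- Boolean views on Dec, mirroring isJust/isNothing for Maybe.
isYes : Dec a -> Bool
isYes (Yes _) = True
isYes (No _)  = False

isNo : Dec a -> Bool
isNo = not . isYes

-- A proof that a Dec value is a Yes, mirroring IsJust for Maybe.
data IsYes : Dec a -> Type where
  ItIsYes : IsYes (Yes prf)
```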
Ideally, liftIO would always be linear, but that has lots of knock-on
effects for other monads which we might want to put in HasIO, now that
subtyping is gone. We'll have to revisit this when we have some kind of
multiplicity polymorphism.
Snoc adds an element at the end of the vector. The main use case
for such a function is to get the expected type signature
`Vect n a -> a -> Vect (S n) a` instead of the
`Vect n a -> a -> Vect (n + 1) a` you get by using `++ [x]`.
Snoc gets its name from `cons` by reversing the letters, indicating
that the element is tacked on at the end rather than the beginning.
`append` would also be a suitable name.
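A minimal sketch of such a function (the library version may differ in
name or argument order):
```
import Data.Vect

-- Add an element at the end, giving the (S n) length in the type directly.
snoc : Vect n a -> a -> Vect (S n) a
snoc []        y = [y]
snoc (x :: xs) y = x :: snoc xs y
```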
It's disappointing to have to do this, but I think necessary because
various issue reports have shown it to be unsound (at least as far as
inference goes) and, at the very least, confusing. This patch brings us
back to the basic rules of QTT.
On the one hand, this makes the 1 multiplicity less useful, because it
means we can't flag arguments as being used exactly once which would be
useful for optimisation purposes as well as precision in the type. On
the other hand, it removes some complexity (and a hack) from
unification, and has the advantage of being correct! Also, I still
consider the 1 multiplicity an experiment.
We can still do interesting things like protocol state tracking, which
is my primary motivation at least.
Ideally, if the 1 multiplicity is going to be more generally useful,
we'll need some kind of way of doing multiplicity polymorphism in the
future. I don't think subtyping is the way (I've pretty much always come
to regret adding some form of subtyping).
Fixes #73 (and maybe some others).
This is done to allow the `Data.*` modules for datatypes declared in the
prelude to import modules that contain their own definitions of `DecEq`
(i.e. modules for datatypes declared in `base`).
This also changes the return type of `char` and `string`. They
previously returned `()`, they now return `Char` and `String`
respectively.
Signed-off-by: Alex Humphreys <alex.humphreys@here.com>
Rather than translating the constraints to a Dybjer-Setzer IR code
we can produce an ad-hoc definition of a `Domain` that we will be
able to make runtime irrelevant.
This means that compiled code will never need to construct a proof
that a value is in the domain of the function: it will simply run
the function!
Refactor the DIY equational reasoning library to be a bit more like
the generic pre-order reasoning library:
Change the `...` notation into a constructor for a new `Step` datatype.
This seems to help Idris disambiguate between the two kinds of
reasoning when they're used in the same file (e.g., frex).
Co-authored-by: Ohad Kammar <ohad.kammar@ed.ac.uk>
Division Theorem. For every natural number `x` and positive natural
number `n`, there is a unique decomposition:
`x = q*n + r`
with `q`, `r` natural and `r < n`.
`q` is the quotient and `r` is the remainder when dividing `x` by `n`.
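As an illustrative sketch only, one way to phrase the decomposition in
Idris (the names and packaging in the actual development may differ):
```
import Data.Nat

-- A decomposition of x by a positive divisor n: quotient, remainder,
-- the bound on the remainder, and the defining equation.
-- (Uniqueness of the decomposition would be a separate lemma.)
record DivMod (x : Nat) (n : Nat) where
  constructor MkDivMod
  quotient      : Nat
  remainder     : Nat
  remainderLTN  : remainder `LT` n
  decomposition : x = quotient * n + remainder
```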
This commit adds a proof for this fact, in case
we want to reason about modular arithmetic (for example, when dealing
with binary representations). A future, more systematic, development could
perhaps follow @clayrat's (Idris 1) port of Coq's binary arithmetic:
https://github.com/sbp/idris-bi/blob/master/src/Data/Bin/DivMod.idr
https://github.com/sbp/idris-bi/blob/master/src/Data/Biz/DivMod.idr
https://github.com/sbp/idris-bi/blob/master/src/Data/BizMod2/DivMod.idr
In the process, it bulks up the stdlib with:
+ a generic PreorderReasoning module for arbitrary preorders,
analogous to the equational reasoning module
+ some missing facts about Nat operations.
+ Refactor some Nat order properties using a 'reflect' function
Co-authored-by: Ohad Kammar <ohad.kammar@ed.ac.uk>
Co-authored-by: G. Allais <guillaume.allais@ens-lyon.org>
This mirrors the (>>) found in Haskell, which is the same as (>>=), except the
second computation (on the right hand side) takes no arguments, and the result
of the first computation is discarded. This is a trivial implementation written
in terms of (>>=).
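A minimal sketch of such a definition (the actual Prelude version may
differ, for example by taking the second computation lazily):
```
infixl 1 >>

-- Sequence two computations, discarding the result of the first.
(>>) : Monad m => m a -> m b -> m b
ma >> mb = ma >>= \_ => mb
```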
The Network.Socket.Data code previously used hardcoded constants manually read
from auto-generated C source code, however these constants are specific to
Linux. The original code looked like this:
```
export
ToCode SocketFamily where
  -- Don't know how to read a constant value from C code in idris2...
  -- gotta to hardcode those for now
  toCode AF_UNSPEC = 0 -- unsafePerformIO (cMacro "#AF_UNSPEC" Int)
  toCode AF_UNIX = 1
  toCode AF_INET = 2
  toCode AF_INET6 = 10
```
The AF_INET6 constant is correct on my Debian 10 laptop:
```
molly on flywheel ~> grep -rE '^#define AF_INET6' /usr/include
/usr/include/x86_64-linux-gnu/bits/socket.h:#define AF_INET6 PF_INET6
molly on flywheel ~> grep -rE '^#define PF_INET6' /usr/include
/usr/include/x86_64-linux-gnu/bits/socket.h:#define PF_INET6 10 /* IP version 6. */
```
However, this is not the case on an OpenBSD machine:
```
spanner# grep -rE '^#define[[:space:]]+AF_INET6' /usr/include
/usr/include/sys/socket.h:#define AF_INET6 24 /* IPv6 */
```
This commit adds accessor functions to the C runtime support library for
retrieving the values of these macros as they appear in the system libc header
files, which can then be invoked using the normal C FFI machinery.
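The shape of the approach, as a hedged sketch; the C function name, support
library name, and Idris binding below are illustrative rather than the exact
ones added:
```
-- On the C side, an accessor simply returns the macro's value, e.g.
--   int idrnet_af_inet6(void) { return AF_INET6; }
-- which is then bound via the normal C FFI:
%foreign "C:idrnet_af_inet6, libidris2_support"
prim__af_inet6 : PrimIO Int

af_inet6 : IO Int
af_inet6 = primIO prim__af_inet6
```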
Useful items for combining multiple comparisons, e.g.
`sortBy (comparing length <+> compare)`
to sort a list of lists by length and then lexicographically.
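A hedged usage example (assuming `comparing` and the relevant `Semigroup`
instances are in scope), here applied to strings:
```
import Data.List

-- Sort by length first, breaking ties lexicographically.
byLengthThenLex : List String -> List String
byLengthThenLex = sortBy (comparing length <+> compare)

-- byLengthThenLex ["bb", "a", "ab", "c"] == ["a", "c", "ab", "bb"]
```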
Broaden which Names can be reflected and reified.
I did not add the Names I wasn't sure how to test, but have put in
placeholders that produce clearer error messages.
Nipping this historical artifact in the bud before it takes root. It's often
useful to be able to `map` directly over the result of a StateT computation,
and due to how Functor works this is harder when the tuple is
(a, state) rather than (state, a).
* [contrib] Add misc libraries to contrib
Expose some `private` functions in libs/base that I needed, and whose
visibility seems to have been forgotten.
I'd appreciate a code review, especially to tell me I'm
re-implementing something that's already elsewhere in the library
Mostly extending existing functionality:
* `Data/Void.idr`: add some utility functions for manipulating absurdity.
* `Decidable/Decidable/Extra.idr`: add support for double negation elimination in decidable relations
* `Data/Fun/Extra.idr`:
+ add `application` (total and partial) for n-ary functions
+ add (slightly) dependent versions of these operations
* `Decidable/Order/Strict.idr`: a strict preorder is what you get when
you remove the diagonal from a preorder. For example, `<` is the
strict order associated with `<=` over `Nat` (see the sketch below).
Analogous to `Decidable.Order`. The proof search mechanism struggled
a bit, so I had to hack it --- sorry.
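A small self-contained illustration of that picture for `Nat`, using only
lemmas I believe are in `Data.Nat` (a sketch, not the library's own code):
```
import Data.Nat

-- LT implies LTE: the strict order sits inside the preorder...
ltImpliesLTE : LT m n -> LTE m n
ltImpliesLTE p = lteSuccLeft p

-- ...and LT is irreflexive: the diagonal really has been removed.
ltIrreflexive : Not (LT n n)
ltIrreflexive (LTESucc p) = ltIrreflexive p
```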
Eventually we should move `Data.Fun.Extra.Pointwise` to `Data.Vect.Quantifiers` in base
but we don't have any interesting uses for it at the moment so it's not
urgent.
Co-authored by @gallais
Until now namespaces were stored as (reversed) lists of strings.
It led to:
* confusing code where we work on the representation rather than say
what we mean (e.g. using `isSuffixOf` to mean `isParentOf`)
* potentially introducing errors by not respecting the invariant cf.
bug report #616 (but also name generation in the scheme backend
although that did not lead to bugs as it was self-consistent AFAICT)
* ad-hoc code to circumvent overlapping interface implementations when
showing / pretty-printing namespaces
This introduces a Namespace newtype containing non-empty lists of
strings. Nested namespaces are still stored in reverse order but the
exposed interface aims to support programming by saying what we mean
(`isParentOf`, `isApproximationOf`, `X <.> Y` computes to `X.Y`, etc.)
irrespective of the underlying representation.
Main change
===========
The main change is to the type of functions dealing with an untouched
segment of the local scope, e.g.
```
weak : {outer, inner : _} -> (ns : List Name) ->
       tm (outer ++ inner) -> tm (outer ++ ns ++ inner)
```
Instead we now write
```
weak : SizeOf ns -> tm (outer ++ inner) -> tm (outer ++ ns ++ inner)
```
meaning that we do not need the values of `outer`, `inner` and `ns`
at runtime. Instead we only demand a `SizeOf ns` which is a `Nat`
together with an (erased) proof that `ns` is of that length.
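A sketch of the idea (the actual definition in the compiler differs in
detail): a `Nat` paired with an erased proof that `ns` has that length, so
only the number survives at runtime.
```
data SizeOf : List a -> Type where
  MkSizeOf : (size : Nat) -> (0 prf : length ns = size) -> SizeOf ns
```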
Other modifications
===================
Quadratic behaviour
-------------------
A side effect of this refactor is the removal of two sources of
quadratic behaviour. They typically arise in a situation where
work is done on a scope of the form
```
outer ++ done ++ ns ++ inner
```
When `ns` is non-empty, some work is performed and then the variable
is moved to the pile of things we are `done` with. This gives rise to
recursive calls of the form `f done` -> `f (done ++ [v])`, leading
to a cost quadratic in the size of `ns`.
Now that we only care about `SizeOf done`, the recursive call is
(once all the runtime irrelevant content is erased) of the form
`f n` -> `f (S n)`!
More runtime irrelevance
------------------------
In some places we used to rely on a list of names `vars` being
available. However, once we only care about the length of `vars`,
the fact that it is not available is no longer a limitation.
For instance a `SizeOf vars` can be reconstructed from an environment
assigning values to `vars` even if `vars` is irrelevant. Indeed the
size of the environment is the same as that of `vars`.
For lack of a better place, I've put it in `Syntax.PreorderReasoning`.
These equations are natural in equational reasoning, but less so when
rewriting, which is why they live there.
For Void and Either
This is because I ended up using them elsewhere, so why not include them
in the stdlib. Also expose the left/rightInjective functions, which are
used in the DecEq proofs.
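A hedged sketch of what those injectivity lemmas look like (the names match
the text above, but the library versions may differ in detail):
```
-- Constructor injectivity for Either, as used in its DecEq proof.
leftInjective : Left x = Left y -> x = y
leftInjective Refl = Refl

rightInjective : Right x = Right y -> x = y
rightInjective Refl = Refl
```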
In a 'Bind', normalise the result of the first action, rather than
quoting the HNF. This improves performance since the HNF could be quite
big when quoted back.
Ideally, we wouldn't have to quote and unquote here, and we can probably
achieve this by tinkering with the evaluator.
This has an unfortunate effect on the reflection002 test, in that the
"typed template Idris" example now evaluates too much. But, I think the
overall performance is too important for the primary motivation
behind elaborator reflection. I will return to this!
This is partly to tidy things up, but also a good test for 'import as'.
Requires some internal changes since there are parts of reflection,
unelaboration and a compiler hack that rely on where things are in the
Prelude.
This is helpful when defining auto-implicits of the form:
```
pairEqF : DecEq a
       => (thisX, x, y : a)
       -> {prfRefl : Equal x thisX}
       -> (prfEq : decEq x thisX = Yes prfRefl)
       => Pair a a
pairEqF thisX x y {prfRefl} {prfEq} = MkPair x y
```
Before this fix, auto-implicit search would fail to find `Refl` for
`prfRefl`. With the fix, the solution is found, which means we can avoid
having to write the following:
```
pairEqF' : DecEq a
        => (thisX, x, y : a)
        -> (prfEq : decEq x thisX = Yes (the (Equal x x) Refl))
        => Pair a a
pairEqF' thisX x y {prfEq} = MkPair x y
```
This didn't cause a problem before as it was likely just ignored by the C
function. According to Edwin the extra argument is a leftover from when this
was a pure scheme call.
There is an argument that, for import public, this should be automatic
(that is, the publicly imported things should be reexported with the
parent namespace). I decided not to do this, because another use of
import public which we do a lot in the Idris 2 code base is purely as a
convenience, where we still expect things to be in their original
namespace.
Also, where there's a choice between things being explicit and implicit,
I prefer to err on the side of explicit now.
So, if what you really want in an API is to reexport, you can do that,
but explicitly.
The ports are rather straightforward and I have purposefully written
the documentation to be beginner friendly.
Note, I have diverged from Idris 1 over the naming of the projection
functions to make them consistent with `Pair` and `DPair`.
Apparently the concrete syntax is controversial ("{apples}"
vs. "$oranges"), so I'm just adding a simple DYI version until we
agree on a concrete syntax
This involves updating the bootstrap code since it needs a fix in
interface resolution. It shouldn't affect the Idris2-boot build though,
since it's in the libraries not the Idris2 source.
It's worth delaying in case doing some more work (and some more
interface resolution) can make the type more concrete. This makes
test linear011 work more smoothly, and will help with this sort of
program in general.
A better way, later, would be to try elaborating and delay only if the
type is not yet known. We have the facilities, but it's too fiddly for
me to want to implement it right now...
We need to check below top level too, since there could be holes that
we're happy to resolve by searching. The linearity test added
illustrates a place where this is needed.
Doing this is awkward, for a number of reasons, including 'pure' not
being linear in its argument - there's no guarantee it'll be used
linearly after all - and lack of multiplicity polymorphism. Fortunately
the user doesn't have to know about the ugliness underneath!
We can look at ways to make it less horrible later :). For now, this is
starting to look like something that allows us to write linear protocols
without too much extra machinery on top of IO.
Condition variables with a timeout didn't work in Chez, so changed to a
consistent meaning of the timeout (microseconds). Also fix linearity of
unsafePerformIO.
Following a fairly detailed discussion on slack, the feeling is
generally that it's better to have a single interface. While precision
is nice, it doesn't appear to buy us anything here. If that turns out to
be wrong, or limiting somehow, we can revisit it later. Also:
- it's easier for backend authors if the type of IO operations is
slightly less restrictive. For example, if it's in HasIO, that limits
alternative implementations, which might be awkward for some
alternative back ends.
- it's one less extra detail to learn. This is minor, but there needs to
be a clear advantage if there's more detail to learn.
- It is difficult to think of an underlying type that can't have a Monad
instance (I have personally never encountered one - if one turns out
to exist, again, we can revisit!)
Backed by Data.IOArray. Also moved the array external primitives to a
separate module Data.IOArray.Prims, since the next step is to add a
linear bounded array type where the bounds checks are done at compile
time, so we'll want to read and write without bounds checks.
Having unsolved holes in a 'core' library unnecessarily pollutes the list of holes shown to the user.
Thus, having unfilled holes in a 'core' library is not right.
These constructs can be re-added once the holes have been filled in.
I'm playing with some linear structures and finding these useful a lot,
so good to have a consistent syntax for it. '#' is chosen because it's
short, looks a bit like a cross if you look at it from the right angle
(!) and so as not to clash with '@@' in preorder reasoning syntax.
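An illustrative use of the new notation, assuming this is the constructor
for the linear pair type `LPair`:
```
-- Build and take apart a linear pair with '#'; each component is used
-- exactly once.
swapLPair : (1 p : LPair a b) -> LPair b a
swapLPair (x # y) = y # x
```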
Meaning that the FFI is aware of it, so you can send arbitrary byte data
to foreign calls. Fixes #209
This means that we no longer need the hacky way of reading and writing
binary data via scheme, so can have a more general interface for reading
and writing buffer data in files.
It will also enable more interesting high level interfaces to binary
data, with C calls being used where necessary.
Note that the Buffer primitives are unsafe! They always have been, of
course... so perhaps (later) they should have 'unsafe' as part of their
name and better high level safe interfaces on top.
This requires updating the scheme to support Buffer as an FFI primitive,
but shouldn't affect Idris2-boot which loads buffers its own way.
This involves new primitives GCPtr and GCAnyPtr which are pointer types
that have finalisers attached. The finalisers are run when the
associated pointer goes out of scope.
In the test, I am assuming that the GC will only be called once, right
at the end. Otherwise, the output isn't guaranteed to be deterministic!
Let's see how this assumption holds...
This is currently Chez only. I think it'll be easy enough to add to
the Racket and Gambit back ends too.
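As a hedged sketch of how a finaliser might be attached, assuming a
primitive along the lines of `onCollectAny` (that name and signature are my
assumption, not taken from this change):
```
-- Assumed primitive: onCollectAny : AnyPtr -> (AnyPtr -> IO ()) -> IO GCAnyPtr
-- The callback runs when the garbage collector reclaims the pointer.
attachFinaliser : AnyPtr -> IO GCAnyPtr
attachFinaliser raw = onCollectAny raw (\p => putStrLn "finaliser ran")
```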
The old way only worked by chance, because the argument order happens to
be the same in all cases. I noticed due to some experiments elsewhere
with different ways of elaborating case, which broke that assumption.
The meaning of the list of Vars is actually the opposite of what it was
taken to be... fortunately, the performance works out roughly the same.
Also this way is (arguably) simpler, which is usually a good sign.
Allows quoting a term back to a TTImp. Test reflection007 shows one
possible use for this, building a reflected, type safe, representation
of an expression.
So the type of Elab now gives the expected type that's being elaborated
to, meaning that we can run 'check' in the middle of scripts and use the
result.
Get the names of local variables, and add the ability to look up their
types.
When we get a reflected TTImp, either checking the Goal or looking up a
type, it's not impossible that there'll be some repeated binder names,
so also make sure binders are unique relative to the current context.
Ideally we'd also rename things in the environment to guarantee that all
names are unique, but we don't yet.
(This would be much easier if reflected terms were typed such that they
were well scoped, but that would also make reflection harder to use.)
Including appropriate casts, and Num/Eq/Ord/Show implementations.
Also includes new primitives in Data.Buffer, and calls to foreign
functions in C as 'unsigned'.
If available (sometimes, say for a top level expression, the type might
need inferring, so there'll be no goal available). Also add the ability to log
the current goal, or indeed any term.
Still all they can do is check and log. Now scripts must return
something of type TT, which is in practice a TTImp that goes to the
elaborator for final checking.
Add %runElab and start on scripts, although all they can do so far is
check a term. This does give us, sort of, "template Idris" (as
demonstrated in test reflection002).
Can't export a type which refers to a private name. This has caught a
couple of visibility errors in the libraries, code and tests, so they've
been updated too.
Now all common console IO functions available from the
prelude are available through the `Control.App.Console`
interface.
Added:
- putChar
- getChar
- getCharLn
- print
- printLn
Renamed:
- getStr to getLine
Don't get too excited yet - I want this in so that it doesn't get too
out of sync, but I still have to think about exactly how it's going to
work in practice.
This means it abstracts over the value syntactically, rather than by
value, and can significantly speed up elaboration where large types are
involved, at a cost of being less general. Try it if "with" is slow.
There are more flags we want on with (well, at least one: "proof")
The required totality of interface methods is now only affected if
there's an explicit modifier on the method. This allows us to set
%default total on the Prelude, which is a good thing to do anyway,
without also requiring that every implementation of the interface in the
prelude has to be total, which would potentially be a pain.
Another good effect is that it speeds up totality checking elsewhere
because totality checking is done lazily, and so with the total flag set
we know in advance that prelude functions are total.
If we're going to use assert_total, we might as well not have the
distinction between empty and non empty results, which only makes the
code harder to follow.
Also there's some kind of issue which seems to mean the token list can't
be erased if we do that. I will investigate if I can make a smaller
example.
This was taking too long, and adding too many things, because it was
going too deep in the name of having everything accessible at the REPL
and for the compiler. So, it's done a bit differently now, only chasing
everything on a "full" load (i.e., final load at the REPL)
This has some effects:
+ As systems get bigger, load time gets better (on my machine, checking
Idris.Main now takes 52s from scratch, down from 76s)
+ You might find import errors that you didn't previously get, because
things were being imported that shouldn't have been. The new way is
correct!
An unfortunate effect is that sometimes you end up getting "undefined
name" errors even if you didn't explicitly use the name, because
sometimes a module uses a name from another module in a type, which then
gets exported, and eventually needs to be reduced. This mostly happens
because there is a compile time check that should be done which I
haven't implemented yet. That is, public export definitions should only
be allowed to use names that are also public export. I'll get to this
soon.
Still a couple of things to resolve in coverage and totality checking
before we can switch on %default, so don't expect quite the right
behaviour just yet. More progress though!
Also working on this has caught a few totality errors in the Idris 2
code base that Idris 1 missed... so these are fixed on the way.
For the same behaviour as Idris 1, the primitive cast should return 0 if
the integer is out of bounds. (We should probably drop the Cast
implementation though, since ideally they won't be lossy in general, but
that's an issue for another time...)
All the tests pass in racket now, for me.
Racket appears to have a different notion of current directory than the
system does, so we need to tell it which directory we think we're in
when reading and writing bytevectors using the scheme file functions.
As in Idris 1 - if an interface declares a method as total or covering,
then all implementations have to satisfy that.
Temporarily turn off %default directive again, at least until totality
checking works as it should (this is probably better than removing it
from various places because I'll forget to put them back)
This means temporarily removing some '%default total' from the libraries
and tests, since the input for codata checking isn't right yet (it needs
appropriate use of inlining)
Need to pass the LD_LIBRARY_PATH all the way through or racket doesn't
know where to look. I really don't know why it doesn't work to just set
it at the top level in the script, but it didn't (on my Mac, at least).