* Add utility functions to treat All as a heterogeneous container
* Distinguish RefC Int and Bits types
* Change RefC Integers to be arbitrary precision
* Add RefC Bits maths operations
* Make RefC div and mod Euclidean (see the sketch after this list)
* Add RefC bit-ops tests
* Add RefC integer comparison tests
* Add RefC IntN support
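For reference, a minimal sketch of the Euclidean convention this implements,
written in Idris rather than the RefC C code, and assuming a truncating
`div`/`mod` to build on (the names below are illustrative only):
```
-- Sketch only: illustrates the Euclidean convention, where the
-- remainder is always non-negative (for n /= 0):
--   x == n * euclidDiv x n + euclidMod x n
--   0 <= euclidMod x n < abs n
euclidMod : Integer -> Integer -> Integer
euclidMod x n =
  let r = x `mod` n in       -- assuming `mod` truncates towards zero
  if r < 0 then r + abs n else r

euclidDiv : Integer -> Integer -> Integer
euclidDiv x n = (x - euclidMod x n) `div` n
```
For example, `euclidDiv (-7) 2` is `-4` and `euclidMod (-7) 2` is `1`, where
truncating division would give `-3` and `-1`.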
We should really get these automatically, but until we do, add the
flags. Chez seems to spot these anyway, but again it makes the generated
code easier to look at, and it removes some needless abstraction over
interfaces at runtime.
We've had these for a while, used for interface specialisation, but
they're not yet used anywhere else or properly documented. We should
document them soon, but for now, it's a useful performance boost to
always use the fast versions of pack/unpack/concat at runtime.
Also moves a couple to the prelude, to ensure that the fast versions are
defined in the same place as the 'normal' version so that the
transformation will always fire (that is, no need to import Data.String
for the transformation to work).
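As a rough sketch of the kind of rewrite rule involved (the actual rule names
and definitions in the Prelude may differ):
```
-- Sketch only: rewrite calls to the 'normal' function into the
-- primitive-backed fast version at runtime.
%transform "fastPack" pack xs = fastPack xs
```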
This behaves much better than the `Not (x = Z)` we used earlier, both
for proof search and for the coverage checker realising we don't need
to consider the Z case.
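As a rough sketch of the difference (the datatype name here follows the usual
pattern and may not be exactly what the library uses):
```
-- Old style: refute the Z case with a negation proof, which callers
-- have to construct by hand:
--   pred' : (x : Nat) -> Not (x = Z) -> Nat

-- New style: a datatype with no constructor for Z, so `auto` proof
-- search can build the witness and the Z case is visibly impossible.
data IsSucc : Nat -> Type where
  ItIsSucc : IsSucc (S n)

pred' : (x : Nat) -> {auto 0 ok : IsSucc x} -> Nat
pred' (S k) = k
```
`pred' 3` now elaborates without any explicit proof, and no clause is needed
for `Z` because `IsSucc Z` has no constructors.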
Ideally, liftIO would always be linear, but that has lots of knock-on
effects for other monads which we might want to put in HasIO, now that
subtyping is gone. We'll have to revisit this when we have some kind of
multiplicity polymorphism.
Snoc adds an element at the end of the vector. The main use case
for such a function is to get the expected type signature
`Vect n a -> a -> Vect (S n) a` instead of the
`Vect n a -> a -> Vect (n + 1) a` you get by using `++ [x]`.
Snoc gets its name from `cons` by reversing the letters, indicating
that the element is tacked on at the end rather than the beginning.
`append` would also be a suitable name.
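For reference, a sketch of the function with that signature:
```
import Data.Vect

-- Add an element at the end; the length index grows by S.
snoc : Vect n a -> a -> Vect (S n) a
snoc []        y = [y]
snoc (x :: xs) y = x :: snoc xs y
```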
It's disappointing to have to do this, but I think necessary because
various issue reports have shown it to be unsound (at least as far as
inference goes) and, at the very least, confusing. This patch brings us
back to the basic rules of QTT.
On the one hand, this makes the 1 multiplicity less useful, because it
means we can't flag arguments as being used exactly once which would be
useful for optimisation purposes as well as precision in the type. On
the other hand, it removes some complexity (and a hack) from
unification, and has the advantage of being correct! Also, I still
consider the 1 multiplicity an experiment.
We can still do interesting things like protocol state tracking, which
is my primary motivation at least.
Ideally, if the 1 multiplicity is going to be more generally useful,
we'll need some kind of way of doing multiplicity polymorphism in the
future. I don't think subtyping is the way (I've pretty much always come
to regret adding some form of subtyping).
Fixes #73 (and maybe some others).
Division Theorem. For every natural number `x` and positive natural
number `n`, there is a unique decomposition:
`x = q*n + r`
with `q`, `r` natural and `r < n`.
`q` is the quotient when dividing `x` by `n`
`r` is the remainder when dividing `x` by `n`.
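As a sketch, the statement (minus uniqueness, which would be a separate
lemma) can be packaged as a record; the names below are illustrative and may
not match the library:
```
import Data.Nat

record DivisionTheorem (x : Nat) (n : Nat) where
  constructor MkDivisionTheorem
  quotient   : Nat
  remainder  : Nat
  decompose  : x = quotient * n + remainder
  remBounded : LT remainder n   -- also forces n to be positive
```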
This commit adds a proof for this fact, in case
we want to reason about modular arithmetic (for example, when dealing
with binary representations). A future, more systematic, development could
perhaps follow @clayrat's (Idris 1) port of Coq's binary arithmetic:
https://github.com/sbp/idris-bi/blob/master/src/Data/Bin/DivMod.idr
https://github.com/sbp/idris-bi/blob/master/src/Data/Biz/DivMod.idr
https://github.com/sbp/idris-bi/blob/master/src/Data/BizMod2/DivMod.idr
In the process, it bulks up the stdlib with:
+ a generic PreorderReasoning module for arbitrary preorders,
analogous to the equational reasoning module
+ some missing facts about Nat operations.
+ Refactor some Nat order properties using a 'reflect' function
Co-authored-by: Ohad Kammar <ohad.kammar@ed.ac.uk>
Co-authored-by: G. Allais <guillaume.allais@ens-lyon.org>
* [contrib] Add misc libraries to contrib
Expose some `private` functions in libs/base that I needed and whose
visibility seems to have been forgotten.
I'd appreciate a code review, especially to tell me I'm
re-implementing something that's already elsewhere in the library
Mostly extending existing functionality:
* `Data/Void.idr`: add some utility functions for manipulating absurdity.
* `Decidable/Decidable/Extra.idr`: add support for double negation elimination in decidable relations
* `Data/Fun/Extra.idr`:
+ add `application` (total and partial) for n-ary functions
+ add (slightly) dependent versions of these operations
* `Decidable/Order/Strict.idr`: a strict preorder is what you get when
you remove the diagonal from a preorder. For example, `<` is the strict
preorder associated with `<=` over `Nat` (see the sketch after this
list). Analogous to `Decidable.Order`. The proof search mechanism
struggled a bit, so I had to hack it --- sorry.
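As a rough sketch of the interface this module provides (the method names
here are illustrative and may not match the module exactly):
```
-- A strict preorder is transitive and irreflexive.
interface StrictPreorder t (spo : t -> t -> Type) where
  transitive  : (a, b, c : t) -> spo a b -> spo b c -> spo a c
  irreflexive : (a : t) -> Not (spo a a)
```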
Eventually we should move `Data.Fun.Extra.Pointwise` to `Data.Vect.Quantifiers` in base
but we don't have any interesting uses for it at the moment so it's not
urgent.
Co-authored by @gallais
Until now namespaces were stored as (reversed) lists of strings.
It led to:
* confusing code where we work on the representation rather than say
what we mean (e.g. using `isSuffixOf` to mean `isParentOf`)
* potentially introducing errors by not respecting the invariant (cf.
bug report #616, but also name generation in the scheme backend,
although that did not lead to bugs as it was self-consistent AFAICT)
* ad-hoc code to circumvent overlapping interface implementations when
showing / pretty-printing namespaces
This introduces a `Namespace` newtype containing a non-empty list of
strings. Nested namespaces are still stored in reverse order but the
exposed interface aims to support programming by saying what we mean
(`isParentOf`, `isApproximationOf`, `X <.> Y` computes to `X.Y`, etc.)
irrespective of the underlying representation.
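A rough sketch of the idea (the definitions below are illustrative, not the
exact compiler API, and the real newtype wraps a non-empty list rather than a
plain `List String`):
```
import Data.List

infixl 5 <.>

-- Segments are stored in reverse order, but users of the type never
-- need to know that.
data Namespace = MkNS (List String)

-- `X <.> Y` computes to `X.Y`
(<.>) : Namespace -> Namespace -> Namespace
MkNS xs <.> MkNS ys = MkNS (ys ++ xs)

-- Say what we mean, rather than exposing `isSuffixOf` on raw lists.
isParentOf : Namespace -> Namespace -> Bool
isParentOf (MkNS parent) (MkNS child) = parent `isSuffixOf` child
```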
Main change
===========
The main change is to the type of functions dealing with an untouched
segment of the local scope, e.g.
```
weak : {outer, inner : _} -> (ns : List Name) ->
tm (outer ++ inner) -> tm (outer ++ ns ++ inner)
```
Instead we now write
```
weak : SizeOf ns -> tm (outer ++ inner) -> tm (outer ++ ns ++ inner)
```
meaning that we do not need the values of `outer`, `inner` and `ns`
at runtime. Instead we only demand a `SizeOf ns` which is a `Nat`
together with an (erased) proof that `ns` is of that length.
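A sketch of what `SizeOf` looks like (simplified; the definitions in the
compiler may differ in detail):
```
-- A Nat together with an erased proof that it is the length of xs.
data HasLength : Nat -> List a -> Type where
  Z : HasLength Z []
  S : HasLength n xs -> HasLength (S n) (x :: xs)

record SizeOf {a : Type} (xs : List a) where
  constructor MkSizeOf
  size        : Nat
  0 hasLength : HasLength size xs
```
A `SizeOf` can be extended in constant time (increment the `Nat`, extend the
erased proof), which is what removes the quadratic behaviour described below.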
Other modifications
===================
Quadratic behaviour
-------------------
A side effect of this refactor is the removal of two sources of
quadratic behaviour. They typically arise in a situation where
work is done on a scope of the form
```
outer ++ done ++ ns ++ inner
```
When `ns` is non-empty, some work is performed and then the variable
is moved to the pile of things we are `done` with. This leads to
recursive calls of the form `f done` -> `f (done ++ [v])`, giving a
cost quadratic in the size of `ns`.
Now that we only care about `SizeOf done`, the recursive call is
(once all the runtime irrelevant content is erased) of the form
`f n` -> `f (S n)`!
More runtime irrelevance
------------------------
In some places we used to rely on a list of names `vars` being
available. However once we only care about the length of `vars`,
the fact it is not available is not a limitation.
For instance a `SizeOf vars` can be reconstructed from an environment
assigning values to `vars` even if `vars` is irrelevant. Indeed the
size of the environment is the same as that of `vars`.
For Void and Either
This is because I ended up using them elsewhere, so why not include them in the stdlib.
Also expose the left/rightInjective functions, as they are used in the DecEq proofs.
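As a sketch of the shape of those lemmas (the library versions may quantify
their arguments differently):
```
leftInjective : the (Either a b) (Left x) = the (Either a b) (Left y) ->
                x = y
leftInjective Refl = Refl

rightInjective : the (Either a b) (Right x) = the (Either a b) (Right y) ->
                 x = y
rightInjective Refl = Refl
```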
The ports are rather straightforward and I have purposefully written
the documentation to be beginner friendly.
Note that I have diverged from Idris 1 over the naming of the projection
functions to make them consistent with `Pair` and `DPair`.
Following a fairly detailed discussion on slack, the feeling is
generally that it's better to have a single interface. While precision
is nice, it doesn't appear to buy us anything here. If that turns out to
be wrong, or limiting somehow, we can revisit it later. Also:
- it's easier for backend authors if the type of IO operations is
slightly less restrictive. For example, if it's in HasIO, that limits
alternative implementations, which might be awkward for some
alternative back ends.
- it's one less extra detail to learn. This is minor, but there needs to
be a clear advantage if there's more detail to learn.
- It is difficult to think of an underlying type that can't have a Monad
instance (I have personally never encountered one - if one turns out
to exist, again, we can revisit!)
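The upshot is roughly the following shape (sketch only; the actual Prelude
interface may differ, for instance in the multiplicity of `liftIO`'s
argument):
```
-- A single interface, with Monad as a prerequisite, instead of a more
-- fine-grained hierarchy.
interface Monad io => HasIO io where
  liftIO : IO a -> io a
```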
Backed by Data.IOArray. Also moved the array external primitives to a
separate module Data.IOArray.Prims, since the next step is to add a
linear bounded array type where the bounds checks are done at compile
time, so we'll want to read and write without bounds checks.
Having unsolved holes in a 'core' library unnecessarily pollutes the
list of holes shown to the user, so leaving them unfilled there is not right.
These constructs can be re-added once the holes have been filled in.
Meaning that the FFI is aware of it, so you can send arbitrary byte data
to foreign calls. Fixes #209
This means that we no longer need the hacky way of reading and writing
binary data via scheme, so can have a more general interface for reading
and writing buffer data in files.
It will also enable more interesting high level interfaces to binary
data, with C calls being used where necessary.
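For example, something along these lines becomes possible (the C function and
library name here are hypothetical):
```
import Data.Buffer

-- Hypothetical C function that consumes the buffer's bytes.
%foreign "C:process_bytes,libexample"
prim__processBytes : Buffer -> Int -> PrimIO Int

processBytes : HasIO io => Buffer -> Int -> io Int
processBytes buf len = primIO (prim__processBytes buf len)
```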
Note that the Buffer primitives are unsafe! They always have been, of
course... so perhaps (later) they should have 'unsafe' as part of their
name and better high level safe interfaces on top.
This requires updating the scheme to support Buffer as an FFI primitive,
but shouldn't affect Idris2-boot which loads buffers its own way.
The old way only worked by chance, because the argument order happens to
be the same in all cases. I noticed due to some experiments elsewhere
with different ways of elaborating case, which broke that assumption.
The meaning of the list of Vars is actually the opposite of what it was
taken to be... fortunately, the performance works out roughly the same.
Also this way is (arguably) simpler, which is usually a good sign.
Including appropriate casts, and Num/Eq/Ord/Show implementations.
Also includes new primitives in Data.Buffer, and calls to foreign
functions in C as 'unsigned'.
This means it abstracts over the value syntactically, rather than by
value, and can significantly speed up elaboration where large types are
involved, at a cost of being less general. Try it if "with" is slow.
There are more flags we want on with (well, at least one: "proof")
This was taking too long, and adding too many things, because it was
going too deep in the name of having everything accessible at the REPL
and for the compiler. So, it's done a bit differently now, only chasing
everything on a "full" load (i.e., final load at the REPL)
This has some effects:
+ As systems get bigger, load time gets better (on my machine, checking
Idris.Main now takes 52s from scratch, down from 76s)
+ You might find import errors that you didn't previously get, because
things were being imported that shouldn't have been. The new way is
correct!
An unfortunate effect is that sometimes you end up getting "undefined
name" errors even if you didn't explicitly use the name, because
sometimes a module uses a name from another module in a type, which then
gets exported, and eventually needs to be reduced. This mostly happens
because there is a compile time check that should be done which I
haven't implemented yet. That is, public export definitions should only
be allowed to use names that are also public export. I'll get to this
soon.
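To sketch the kind of situation (modules and names here are hypothetical):
```
-- A.idr
module A

public export
Small : Type
Small = Nat

-- B.idr
module B

import A            -- note: not `import public A`

public export
zero : Small        -- B's exported definition mentions A.Small
zero = 0

-- C.idr
module C

import B            -- C never mentions A or Small itself

test : Nat
test = zero         -- checking this needs to reduce `Small` from A,
                    -- which is no longer chased in automatically
```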
Still a couple of things to resolve in coverage and totality checking
before we can switch on %default, so don't expect quite the right
behaviour just yet. More progress though!
Also working on this has caught a few totality errors in the Idris 2
code base that Idris 1 missed... so these are fixed on the way.