This is for compiled evaluation at compile time, for full normalisation. You can try it by setting the evaluation mode to scheme (that is, `:set eval scheme` at the REPL). It's certainly an order of magnitude faster than the standard evaluator, based on my playing around with it, although still quite a bit slower than compilation for various reasons, including:
* It has to evaluate under binders, and therefore deal with blocked symbols
* It has to maintain enough information to be able to read back a Term from the evaluated scheme object, which means retaining things like types and other metadata
* We can't do a lot of the optimisations we'd do for runtime evaluation, particularly setting things up so that we don't need to do arity checking
Also added a new option `evaltiming` (set with `:set evaltiming`) to display how long evaluation itself takes, which is handy for checking performance.
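For example, a quick REPL session might look like this (the result is real; the exact timing line Idris prints when `evaltiming` is set is omitted here rather than guessed at):

```
Main> :set eval scheme
Main> :set evaltiming
Main> sum [1..10000]
50005000
```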
I also don't think we should aim to replace the standard evaluator, in general, at least not for a while, because that will involve rewriting a lot of things and working out how to make it work as Call By Name (which is clearly possible, but fiddly).
Still, it's going to be interesting to experiment with it! I think it will be a good idea to use it for elaborator reflection and type providers when we eventually get around to implementing them.
Original commit details:
* Add ability to evaluate open terms via Scheme
Still lots of polish and more formal testing to do here before we can
use it in practice, but you can still use ':scheme <term>' at the REPL
to evaluate an expression by compiling to scheme then reading back the
result.
Also added 'evaltiming' option at the REPL, which, when set, displays
how long normalisation takes (doesn't count resugaring, just the
normalisation step).
* Add scheme evaluation mode
The behaviour differs when evaluating everything vs only evaluating visible things.
We want the latter when type checking, the former at the REPL.
* Bring support.rkt up to date
A couple of missing things required for interfacing with scheme objects
* More Scheme readback machinery
We need these things in the next version so that the next-but-one
version can have a scheme evaluator!
* Add top level interface to scheme based normaliser
Also check it's available - currently Chez only - and revert to the
default slow normaliser if it's not.
* Bring Context up to date with changes in main
* Now need Idris 0.5.0 to build
* Add SNF type for scheme values
This will allow us to incrementally evaluate under lambdas, which will
be useful for elaborator reflection and type providers.
* Add Quote for scheme evaluator
So, we can now get a weak head normal form, and evaluate the scope of
a binder when we have an argument to plug in, or just quote back the
whole thing.
* Add new 'scheme' evaluator mode at the REPL
Replacing the temporary 'TmpScheme', this is a better way to try out the
scheme based evaluator
* Fix name generation for new UN format
* Add scheme evaluator support to Racket
* Add another scheme eval test
With metavariables this time
* evaltiming now times execution too
This was handy for finding out the difference between the scheme based
evaluator and compilation. Compilation was something like 20 times
faster in my little test, so that'd be about 400-500 times faster than the
standard evaluator. Ouch!
* Fix whitespace errors
* Error handling when trying to evaluate Scheme
This is step 0 in a plan to use the scheme evaluator to evaluate Idris
expressions at compile time. As a proof of concept, I've got this
working for a toy language here: https://github.com/edwinb/SchemeEval
We won't be able to do anything interesting with this in Idris itself
until the next release because it involves updating the bootstrap code
and adding the ability to pass 'Integer' to foreign calls, which really
should have been allowed anyway since it's for a backend to decide what
it can cope with, not Idris itself.
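As a sketch of the kind of declaration that passing `Integer` to foreign calls unblocks (the specifier and names below are made up for illustration, not taken from the Idris libraries):

```idris
-- Hypothetical foreign function taking Integer arguments; it's now up to the
-- back end to decide whether it can marshal Integer, rather than Idris
-- rejecting the declaration up front.
%foreign "scheme:my-big-add"
prim__bigAdd : Integer -> Integer -> Integer
```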
This is to be able to eventually refactor/extend the `system` function: to be
able to specify a directory, environment variables, arguments as an array,
etc. Ideally it should be something like Rust's
[`std::process::Command`](https://doc.rust-lang.org/std/process/struct.Command.html).
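To make the goal concrete, here is a minimal sketch of the sort of interface that would allow, using hypothetical names (nothing below exists in the Idris libraries yet):

```idris
-- Hypothetical process-spawning description, loosely modelled on Rust's
-- std::process::Command; all names here are illustrative only.
record Command where
  constructor MkCommand
  executable : String
  args       : List String             -- arguments passed as an array, not a shell string
  workingDir : Maybe String            -- run in a specific directory
  env        : List (String, String)   -- extra environment variables

-- Usage might then look something like:
--   runCommand (MkCommand "ls" ["-l"] (Just "/tmp") [])
```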
This also involves adding a flag to constructors and case alternatives
in CExp which says whether it's a NIL or CONS. Currently, we only do this
for Prelude.List, which already has an effect, but soon I'll extend this
to work for all list-shaped things, rather than it being hard coded. We
could also imagine flagging other shapes (enumerations especially) for
code generators to handle as they see fit.
This will require code generators to be updated to recognise the new
ConInfo flag, although they can simply ignore it.
Bootstrap code also updated, because we don't currently have a way of
having separate support.ss/rkt for the bootstrap and normal builds!
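A rough sketch of the shape of the ConInfo flag described above (not the exact definition in the compiler sources, just the idea):

```idris
-- Illustrative only: a hint attached to constructor applications and case
-- alternatives in CExp, which a code generator may act on or ignore.
data ConInfo = DATACON   -- an ordinary data constructor
             | NIL       -- the 'empty' constructor of a list-shaped type
             | CONS      -- the 'cons' constructor of a list-shaped type
```

A Scheme back end could then, for example, compile NIL/CONS-flagged constructors straight to native Scheme lists rather than its generic constructor representation.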
This adds new `Int8`, `Int16`, `Int32` and `Int64` data types
to the compiler, thus working towards properly specified integer
types as discussed in #1048.
In addition, the following changes / corrections are made:
* Support casts from `Char`, `String`, and `Double` to all integer
types (and back). This fixes #1270.
* Make sure that all casts to limited-precision integers are properly
bounds checked (this was not the case so far for casts from `String`
and `Double` to `Int`)
* Add a thorough set of tests to make sure all bounds checks work
correctly for all supported casts and arithmetic operations
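For instance, assuming the corresponding `Cast` implementations are in scope (illustrative, not lifted from the test suite), code along these lines now goes through:

```idris
small : Int8
small = 100

-- Converting between the fixed-width types, and casting from Char, String,
-- and Double, is bounds checked as described above.
widened : Int64
widened = cast small

fromChar : Int32
fromChar = cast 'A'
```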
Condition variables with a timeout in Chez didn't work, so this has been
changed to a consistent meaning of the timeout (microseconds). Also fixes the
linearity of unsafePerformIO.
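A small sketch of what this looks like from Idris, assuming System.Concurrency's mutex/condition API (treat the exact signatures as assumptions; the point is that the timeout argument is in microseconds everywhere):

```idris
import System.Concurrency

-- Wait on a condition for at most half a second (500000 microseconds).
waitHalfSecond : Condition -> Mutex -> IO ()
waitHalfSecond cond lock = do
  mutexAcquire lock
  conditionWaitTimeout cond lock 500000
  mutexRelease lock
```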
This means that the FFI is aware of Buffer, so you can send arbitrary byte data
to foreign calls. Fixes #209
This means that we no longer need the hacky way of reading and writing
binary data via scheme, so can have a more general interface for reading
and writing buffer data in files.
It will also enable more interesting high level interfaces to binary
data, with C calls being used where necessary.
Note that the Buffer primitives are unsafe! They always have been, of
course... so perhaps (later) they should have 'unsafe' as part of their
names, with better high level safe interfaces on top.
This requires updating the scheme to support Buffer as an FFI primitive,
but shouldn't affect Idris2-boot which loads buffers its own way.
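For example, a foreign declaration can now take a Buffer directly (the C function and library names here are made up; only Buffer as an FFI argument type is the point):

```idris
import Data.Buffer

-- Hypothetical C binding that consumes raw byte data from a Buffer.
%foreign "C:process_bytes, libexample"
prim__processBytes : Buffer -> (len : Int) -> PrimIO Int
```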
This involves new primitives GCPtr and GCAnyPtr which are pointer types
that have finalisers attached. The finalisers are run when the
associated pointer goes out of scope.
In the test, I am assuming that the GC will only be called once, right
at the end. Otherwise, the output isn't guaranteed to be deterministic!
Let's see how this assumption holds...
This is currently Chez only. I think it'll be easy enough to add to
the Racket and Gambit back ends too.
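A sketch of how a finalised pointer might be created, using the Prelude's `onCollectAny` (the foreign bindings below are hypothetical, purely for illustration):

```idris
-- Hypothetical C allocator and finaliser.
%foreign "C:make_thing, libexample"
prim__makeThing : PrimIO AnyPtr

%foreign "C:free_thing, libexample"
prim__freeThing : AnyPtr -> PrimIO ()

-- Wrap the raw pointer as a GCAnyPtr; the finaliser runs once the GC
-- decides the pointer is no longer reachable.
makeManagedThing : IO GCAnyPtr
makeManagedThing = do
  raw <- primIO prim__makeThing
  onCollectAny raw (\p => primIO (prim__freeThing p))
```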
Including appropriate casts, and Num/Eq/Ord/Show implementations.
Also includes new primitives in Data.Buffer, and calls to foreign
functions in C as 'unsigned'.
For the same behaviour as Idris 1, the primitive cast should return 0 if
the integer is out of bounds. (We should probably drop the Cast
implementations though, since ideally they won't be lossy in general, but
that's an issue for another time...)
All the tests pass in Racket now, for me.
Racket appears to have a different notion of the current directory than the
system does, so we need to tell it which directory we think we're in
when reading and writing bytevectors using the scheme file functions,
since they might be... This is especially likely for module hashes, and
if we don't get it right, the Racket runtime might fail to write the
buffer. This makes the code buildable with the Racket back end.