* Restore --version for CLI, include git revision
* Use git revision from nix during the build
* Update flake.nix
---------
Co-authored-by: Viktor Kleen <viktor.kleen@tweag.io>
For example, in Kubernetes deployments it is common practice to package
multiple YAML documents into a single file. For processing such files
with Nickel we need to support importing multiple YAML documents from
a single file. The concern with this is: how does such a file fit into
Nickel's data model? Importing it as an array of Nickel values felt most
reasonable to me. At the same time, a YAML file containing a single
document should still be transparently serialized as that document, not
as a single element array.
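For illustration, a minimal sketch of what this looks like from the Nickel side (the file name and its contents are hypothetical):
```nickel
# manifests.yaml (hypothetical) contains two YAML documents separated by `---`.
# Importing it yields an array with one element per document, while a
# single-document YAML file still imports as that document directly.
let manifests = import "manifests.yaml" in
std.array.length manifests  # => 2
```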
* Add linker directive for *-darwin to fix PyO3
Without these flags I get linking errors like this when building from
nixpkgs:
= note: Undefined symbols for architecture arm64:
"_PyExc_ConnectionResetError", referenced from:
pyo3::type_object::PyTypeInfo::type_object::hd73bc9c256c52717 in libpyo3-c59aeb8b74bb57e0.rlib(pyo3-c59aeb8b74bb57e0.pyo3.e60578ba-cgu.0.rcgu.o)
See: https://pyo3.rs/main/building_and_distribution#macos
* Move from cargo `config.toml` options to a build script for pyckel
* Use the same revision of tree-sitter-nickel as topiary does
---------
Co-authored-by: Viktor Kleen <viktor.kleen@tweag.io>
* Add test
* Fix test
* Convert paths to absolute before normalization
The path normalization function doesn't work on paths starting with '..'.
One fix is to make paths absolute.
* Add comment
* Actually persist output files for formatting
All the code for properly formatting files in-place was there, but
in the original PR I forgot to persist the output file. As a result,
formatting works as a pipeline from stdin to stdout, but when
an output file is specified, the formatted Nickel code is silently
discarded.
With this change we actually persist the formatted result.
* Add a rudimentary CLI integration test to make sure output files are created
* Ignore missing field definition errors during documentation extraction
My first implementation of a shallow evaluation mode for extracting
documentation tried to evaluate every record field that has a value to
WHNF, and then reported any error it encountered. But we actually
expect to hit `EvalError::MissingFieldDef` in normal partial
configurations, because presumably an undefined field is intended to
be used recursively somewhere. With this change, we report all errors
except for `MissingFieldDef`.
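As an illustration, here is a small, hypothetical partial configuration of the kind this accommodates: `host` carries metadata but no definition, and shallowly evaluating `url` would hit `MissingFieldDef`:
```nickel
{
  # documented but not yet defined; expected to be provided by a later merge
  host | doc "The hostname to bind to" | String,
  # refers to `host` recursively, so forcing this field hits `MissingFieldDef`
  url = "https://%{host}/api",
}
```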
* Add a comment describing the error recovery rationale
* Proper type instantiation discipline
This commit is the last missing piece (at least for the time being)
toward a reasonable bidirectional typechecking algorithm that properly
handles higher-rank types.
Instantiation of polymorphic types has been ad-hoc and full of issues.
Simple programs such as:
```nickel
let r : { id : forall a. a -> a } = { id = fun r => r } in
r.id "x"
```
or
```nickel
let eval : forall a. (forall b. b -> b) -> a -> a = fun f x => f x in
(eval (fun x => x) 1) : Number
```
wouldn't typecheck. The previously introduced variable levels were a
prerequisite for changing that.
This commit builds on variable levels to implement a proper
instantiation discipline, following the specification of the Nickel type
system located in this repository and its original inspiration, [A
Quick Look at Impredicativity]() (although type instantiation is
currently predicative, the general structure of the bidirectional
type system is similar). Doing so, we also move code between `check` and
`infer` to better reflect the specification: `infer` is no longer a
wrapper around `check` + unification, but actually implements the
inference rules (function application, primop application, variables and annotations).
Instantiation is taken care of when switching from infer mode to check
mode within `subsumption`, and at function application.
* Add test for unsound generalization
And fix the error expectation for `TypecheckError::VarLevelMismatch`
that still had the old longer name `VariableLevelMismatch`.
* Update core/tests/integration/main.rs
Co-authored-by: Viktor Kleen <viktor.kleen@tweag.io>
* Update core/src/typecheck/mod.rs
* Update core/src/typecheck/mod.rs
* Post-rebase fixup + clippy warnings
* Formatting of Nickel test files
---------
Co-authored-by: Viktor Kleen <viktor.kleen@tweag.io>
* Types -> Type, types -> typ
`type` is a reserved keyword in Rust, which is why we've been adding an
`s` to both uppercase `Type` and lowercase `type`, as in `crate::types::Types`.
However:
1. uppercase `Type` is actually valid, only lowercase `type` clashes
with the Rust keyword.
2. Adding an `s` has caused confusion, because there's no easy way to
tell if a struct field `types` means there are several types, or if
it's just a work-around to avoid the keyword.
In consequence, this commit renames lowercase `types` to `typ` (and in
particular the module `types` is now `typ`), and uppercase `Types`
to `Type` (so what was previously `types::Types` is now `typ::Type`).
* Formatting
* Put unify::(e,r)rows_add in dedicated trait
The free-standing functions `erows_add` and `rrows_add` are very
similar. Their names and documentation were also misleading: what these
functions do is remove a row rather than add a new one.
This refactoring puts them in their own trait, `RemoveRow`, and updates
the documentation accordingly. Doing so, we also get rid of an optional
value used for the `UnsatConstr` error, which was unwrapped at some point anyway.
* Reword comment
* Error on serializing very large numbers
Instead of blindly using the nearest `f64` for all Nickel `Number`s when
serializing, with this change we only use nearest rounding when a number
is in the range `[f64::MIN, f64::MAX]`. Otherwise we trigger a newly
introduced serialization error.
This stops the previous, arguably surprising, behavior where for example
`1e400` got serialized as `1.7976931348623157e308`. While technically
this is the nearest `f64` to `1e400`, one wouldn't expect serialization
to produce a value that is off by a factor of about `1e92`.
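A quick illustration (the exact error message is omitted; what matters is that export now fails instead of silently rounding):
```nickel
# Hypothetical config: exporting this previously produced
# 1.7976931348623157e308 for `too_big`; with this change, serialization
# fails with an error because 1e400 doesn't fit in an f64.
{ too_big = 1e400 }
```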
Fixes #1457
* Add tests
* Move parts of typecheck into new typecheck::unif
The typecheck module is several kLoC. This PR creates a new submodule,
`typecheck::unif`, which focuses on unification internals, so as to make
those modules more digestible.
* Fix broken link in doc
* Introduce variable levels
Typechecking higher-rank polymorphic types involves an additional
discipline on unification variables, to avoid something similar to
unsound generalization in ML. Morally, unification variables should keep
track of all the rigid type variables introduced before them, and refuse
to unify with rigid type variables that are not in this list.
In practice, it's sufficient and more efficient to just keep track of a
variable level, which is incremented each time a polymorphic type is
instantiated by the check rule with rigid type variables. Each
unification variable remembers its level, and can only unify with rigid
type variables of a smaller or equal level.
This commit introduces the machinery for tracking and threading those
variable levels throughout typechecking, but doesn't put them to use
yet. In particular there is currently no additional check performed when
unifying type variables.
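As a rough sketch, the kind of program such a check is meant to reject looks like the following (this exact program is illustrative, not the test added later in this series):
```nickel
# Illustrative only: the unification variable standing for `x` is created
# before the rigid variable `a` introduced by the inner annotation. Unifying
# the two would let `a` escape its scope, which the level check prevents.
let f : _ = fun x => (x : forall a. a -> a) in f
```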
* Add cached var levels data to GenericUnifType
This commit continues the implementation of variable levels. This adds
level-related metadata to unification types to avoid eager traversal to
update variable levels, following Didier Remy's "Efficient and
insightful generalization" algorithm.
This commit doesn't update the new variable levels of types correctly
yet, but merely adds the infrastructure to store them and updates the
code to use the new types.
* Fix NLS after adding cached variable levels
* Add trait to compute var levels upper bound
* Correctly propagate variable levels through type constructors
* Formatting
* Add levels to record rows unification variables
* Add init_level to erows unif var
* Add var_levels_data to enum erows
* Implement missing var_level_max
* SubstXXX -> one subst trait
Get rid of three variations on a `Subst` trait, to keep only one with a
template parameter. Introduce a substitution method to be implemented
which, in addition to substituting variables with something, also bubbles
up new variable level data, as it might have been updated by
substitution.
This commit doesn't actually implement the new substitution method yet,
leaving a trail of `todo!()`s to be filled in by follow-up commits.
* Implement new subst trait method
* Fix typo introducing infinite type bug
* Implement level update for type unification variables
* Improve/fix implementation of (forcing) variable level update
* Fix panic indexing in the wrong table
* Add simple test for variable levels
* Fix bound checking missing some unsound generalization
* Add test to guard against unsound generalization
* Implement var level updates for enum and record rows
* Introduce a proper type error for var level mismatch
* Improve specific var level mismatch error, fix tests in consequence
* Add variable level checks for (enum/record) rows
* Add tests for rows unification var level checks
* Fix various warnings, clean up TODOs and remove unused functions
* Remove notes on variable levels
Adding those notes is postponed to a follow-up pull request.
* Remove TODO comments
* Various renaming and doc fixes/improvements
* Apply suggestions from code review
Co-authored-by: Viktor Kleen <viktor.kleen@tweag.io>
* Update core/src/typecheck/mod.rs
Co-authored-by: Viktor Kleen <viktor.kleen@tweag.io>
* Fix links in rustdoc
---------
Co-authored-by: Viktor Kleen <viktor.kleen@tweag.io>
We defined various `assert_xxx` macro helpers in the early days for
tests. The only reason to use macros over simple functions was to report,
when an assertion fails, the location of the caller in the failing test
rather than the code inside the helper. Since then we discovered the
`#[track_caller]` annotation, which makes it possible to do the same
thing with bare functions. This commit changes the remaining such macros
to bare functions.
* Implement `nickel format` PoC as a topiary wrapper
Improve error reporting for `nickel format`
nix build with topiary
Use upstream topiary query export
* Pin wasm-bindgen version
* Disable `format` feature in nickelStatic for now
* Remove thiserror dependency and make tempfile optional
* Update Cargo.lock
* Disable `nickel format` by default
* Add a Types variant to Term
* Mark Types as non-atomic
* Update core/src/term/mod.rs
Co-authored-by: Yann Hamdaoui <yann.hamdaoui@tweag.io>
* Only allow without_pos in tests
* Review comments
* Add a comment
* Improve typechecking of types in contract position
* Add a test for the new error
* Update core/src/eval/mod.rs
Co-authored-by: Yann Hamdaoui <yann.hamdaoui@tweag.io>
* Improve comment
---------
Co-authored-by: Yann Hamdaoui <yann.hamdaoui@tweag.io>
* Improve :query interface
Previously, the query command of the REPL would accept a first argument
that could be a record access chain, followed by a second one which
could also be a field path. This means that, for querying a given path
`foo.bar.baz.blo`, there are 3 different but equivalent query
invocations:
- :query foo bar.baz.blo
- :query foo.bar baz.blo
- :query foo.bar.baz blo
Moreover, `:query foo.bar.baz.blo` without a second argument wouldn't
fail, but would pretend that there is no metadata, even when there is.
This commit simplifies the overall interface of the query command by
accepting just one argument, which is a field path.
* Update core/src/program.rs
Co-authored-by: Viktor Kleen <viktor.kleen@tweag.io>
* Update core/src/repl/mod.rs
Co-authored-by: Viktor Kleen <viktor.kleen@tweag.io>
---------
Co-authored-by: Viktor Kleen <viktor.kleen@tweag.io>
* First draft on polymorphic contracts and records
* Try to handle polarity flipping in arrow types correctly
* Update the note with a description of the issue and our process of resolving it
* Describe the problem in #1161 in more detail
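For orientation, a minimal sketch (illustrative only, not taken from the note itself) of the kind of program where arrow polarity matters for a polymorphic contract:
```nickel
# Illustrative only: the `(a -> a)` argument of `apply` sits in negative
# position, so the polarity of the polymorphic contract flips there. Getting
# this flipping right in the presence of records is what the note discusses.
let apply | forall a. (a -> a) -> a -> a = fun f x => f x in
apply (fun x => x) 1
```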
---------
Co-authored-by: Viktor Kleen <viktor.kleen@tweag.io>
An unintended and unnoticed side effect of reorganizing the workspace
crates seems to have been that crane puts the built static binary in a
subdirectory. Previously the derivation output was the static executable
directly. This throws off the release artifacts GitHub action.
* Add `term::make::static_access`
Add a helper function to the `term::make` module to construct chains of
static record accesses.
* Provide a variant accepting an arbitrary `RichTerm` as the record
* Remove `term::make::static_access_`
* Version bump: 1.0.0 -> 1.1.0
* Add release notes for 1.1
* Bump version - part 2, reset to 0.1 for new crates
* Add missing README to nickel-lang-cli
* Update Cargo.lock, make nickel-wasm-repl's version consistent
* Update releasing instructions for new crate layout
* Fix typo in RELEASES.md
* Update Cargo.lock
* StringEndMismatch -> StringDelimiterMismatch
* Remove unneeded lexer hack
We introduced a hack in the lexer because of a Logos issue about
backtracking. Pre 1.0, the problem was related to sequences like:
```nickel
let x = "string" in m%%"some "%%{x}"%%m
```
Here, `"%%{x}` had to be interpreted as a double quote `"` followed by
an interpolation sequence `%%{`. The problem has been that backtracking
is required, because up to `{`, we don't know yet if we are lexing an
end delimiter `"%%m` or `"` followed by interpolation sequence. So we
added another token for this "ambiguous" (not literally) sequence, the
`CandidateQuoteInterpolation`. However, now that that the final `m` is
not required, a sequence of the form `"%%{` is either an end delimiter
followed by `{` (if there's as many `%` as in the opening delimiter), or
otherwise a verbatim `"%%{` (if there's less `%` than in the opening
delimiter). This can never be a double quote followed by an interpolated
sequence.
Thus, we can get rid of the ad-hoc `CandidateQuoteInterpolation` and
related special treatments, which makes the lexer simpler. This also get
rid of regression tests, because this subtelty had caused several issues
up to know, but they're now moot since the 1.0 syntax.
* Fix dangling reference in documentation
* Lexer refactoring
Refactors the lexer for a better and cleaner separation between the
different lexer modes, making it more readable and statically forbidding
more invalid states. Also breaks the previously huge `Lexer::next` into
separate methods depending on the lexer mode.
* Fix comments
* Update core/src/parser/lexer.rs
Co-authored-by: Viktor Kleen <viktor.kleen@tweag.io>
* Update core/src/parser/lexer.rs
---------
Co-authored-by: Viktor Kleen <viktor.kleen@tweag.io>
* Add install info using Homebrew on macOS
* Update README.md
Fix markdownlint-cli not being happy
* Add two examples: imports and foreach pattern
* Changes according to review suggestions
* Format new Nickel examples
* Add missing test annotations to examples
* Fix failing example annotations
* Make sure the test TOML parser doesn't choke
---------
Co-authored-by: Yann Hamdaoui <yann.hamdaoui@gmail.com>
Co-authored-by: Yann Hamdaoui <yann.hamdaoui@tweag.io>
Co-authored-by: Viktor Kleen <vkleen+github@17220103.de>
* Add lsp-harness support for modifying files
* Fix a few related import issues.
- record imports correctly even if the imported file is already cached.
- avoid infinite recursion with circular imports. This previously
worked only because the imports weren't correctly recorded.
- track reverse-dependencies also, and invalidate the LSP correctly.
* Small consistency cleanup
* Make better use of the typechecking and transforming states
* Remove boilerplate