Final tweaks

regnat 2017-08-21 17:12:00 +02:00
parent a5603de793
commit 4362020a2b
18 changed files with 106 additions and 147 deletions


@ -36,7 +36,7 @@
°{ x = e; $\cdots$ }.x° &\leadsto °e° \\
°{ x = e; $\cdots$ }.x or e'° &\leadsto °e° \\
°{ $x_1$ = $e_1$; $\cdots$; $x_n$ = $e_n$ }.x or e'° &\leadsto °e'°
\quad if $x \notin \{ $x_1$, \cdots, $x_n$ \}$ \\
\quad if $x \notin \{ x_1, \cdots, x_n \}$ \\
°Head(Cons($e_1$, $e_2$))° &\leadsto °$e_1$° \\
°Tail(Cons($e_1$, $e_2$))° &\leadsto °$e_2$° \\
°e : τ° &\leadsto °e°


@ -48,12 +48,12 @@ aspects:
- \texttt{nixpkgs}, the Nix package collection, is today several hundred
thousand lines of code, with some really complex parts.
The absence of typing makes every non-trivial modification particularly
complicated and dangerous.
complicated and dangerous;
- Partly because of its very fast growth, the project suffers from a huge lack of
documentation. In particular, a lot of complex internal abstractions are used
but not documented at all.
A type-system would reduce the impact of this problem.
A type-system would reduce the impact of this problem;
- Nix explores some really interesting ideas by applying to system management
some principles coming from programming. Adding a type-system to it opens new
@ -66,11 +66,11 @@ The language presents many characteristics that constrain a type-system for it:
- It is possible to know at runtime the type of a value. This functionality is
used a lot in practice and requires a notion of union types to be useful, as
we will see.
we will see;
- The fields of the records may have dynamically defined labels (i.e., labels
which are defined as the result of the evaluation of an arbitrary expression
provided it evaluates to a string).
provided it evaluates to a string);
- The language has existed for ten years without a type-system. This
naturally led to the introduction of several programming patterns that are hard to


@ -13,9 +13,9 @@ This language is a lazily-evaluated lambda calculus, with some additions,
namely:
- Constants, let-bindings (always recursive), some (hardcoded) infix operators
and `if` constructs.
and `if` constructs;
- Lists, constructed by the `[<expr> $\cdots$ <expr>]` syntax.
- Lists, constructed by the `[<expr> $\cdots$ <expr>]` syntax;
- Records, defined by the `{ <record-field>; ... <record-field>; }` syntax.
The labels of record fields may be dynamically defined as the result of
@ -26,7 +26,7 @@ namely:
Records may be recursively defined (using the `rec` keyword), in which case,
fields may depend on one another.
For example, the expression `rec { x = 1; y = x; }` is equivalent to
`{ x = 1; y = 1; }`.
`{ x = 1; y = 1; }`;
- A syntax for accessing record fields, of the form `<expr>.<access-path>`.
As for the definition of record literals, the field names may be arbitrary
@ -37,22 +37,25 @@ namely:
For example, `{ x = { y = 1; }; }.x.y` evaluates to `1`,
`{ x = { y = 1; }; }.x.z or 2` evaluates to `2` and `{ x = { y = 2; }; }.y`
raises an error.
raises an error;
- Lambda-abstractions can be defined with patterns.
- Lambda-abstractions, which can be defined with patterns.
Patterns only exist for records and are of the form
`{ <pattern-field>, $\cdots$, <pattern-field> }`
or
`{ <pattern-field>, $\cdots$, <pattern-field>, "..." }`, with the
`{ <pattern-field>, $\cdots$, <pattern-field>, .. }`, with the
`<pattern-field>` construct of the form `<ident>` or `<ident> ? <expr>`
(the latter specifying a default value in case the field is absent).
The `..` at the end of the pattern indicates that this pattern is *open*,
which means that it will accept a record with more fields than the ones that
are written.
Contrary to most languages where the capture variable may be different
from the name of the field (for example in OCaml, a pattern matching a
record would be of the form `{ x = var1; y = var2; }` and the
pattern `{ x; y }` is nothing but syntactic sugar for `{ x = x; y = y }`),
Nix requires the name of the capture variable to be the same as the name of
the field.
the field;
- Type annotations (absent in the actual language but added for this work).
These annotations have been added as they are required by the typechecker.
@ -60,7 +63,7 @@ namely:
the gradual types (noted `<τ>`).
The meaning of the types will be presented in \Cref{sec:typing}.
The constructions `<R>` and `<ρ>` represents type regular exepressions
The constructions `<R>` and `<ρ>` represent type regular expressions
which will be presented in \Cref{typing::structures::listes}.
The construction `<constant>` (for a type) represents singleton types: for
@ -69,7 +72,7 @@ namely:
In addition to these syntactic constructions, a lot of the expressiveness of
the language resides in some predefined functions.
For example, some functions perform advanced operations on records, like
the `attNames` function, which when applied to a record returns the list of the
the `attrNames` function, which when applied to a record returns the list of the
labels of this record (as strings).
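For illustration (this snippet is ours, not taken from the report; in actual
Nix the function is exposed as `builtins.attrNames`):

```nix
# attrNames returns the labels of a record, sorted, as strings.
builtins.attrNames { y = 2; x = 1; }
# => [ "x" "y" ]
```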
Another important class of functions is the set of functions that discriminate
over a type, that is functions such as `isInt`, `isString`, `isBool`, and so on,


@ -17,14 +17,13 @@ type-system. In particular :
`λx. if isInt x then x==1 else not x` can be typed (with type
`Int $\vee$ Bool -> Bool`).
Several existing type-systems already try to satisfy these requirements.
The most well-known one is probably the type-system of Typed Racket (@FH08), which
Several existing type-systems already try to satisfy these requirements. The
most well-known one is probably the type-system of Typed Racket (@FH08), which
already implements at an industrial level most of what is needed in the context
of the Scheme language.
The approach we choose here is based on the work of Alain Frisch and Giuseppe
Castagna on set-theoretic types (@Fri04 and @Cas15), with extensions brought by
Kim Nguyễn (@phdkim) and Giuseppe Castagna and Victor Lanvin (@CL17).
The approach we choose here is based on the work of @Fri04 and @Cas15 on
set-theoretic types, with extensions brought by @phdkim and @CL17.
The system is based on a set-theoretic interpretation of types as sets of
values, which provides a natural interpretation for union, intersection or


@ -1,73 +0,0 @@
This works makes several technical contributions. In particular:
- the compilation of if constructs into typecases to account for occurrence
typing,
- the addition of bidirectional typing to set-theoretic type-systems.
In particular, a technique to propagate type informations through the
syntactic tree (and in particular through lambdas thanks to the $\mathcal{A}$
function which decomposes a function type into the set of its toplevel
arrows),
- a new way of typing records with dynamic labels extending the formalism of
@Fri04 with static records,
- an extension of the gradual type-system of @CL17 with records,
- \vdots
But besides these technical aspects, the most important contribution of
this work is, by far, that it brings together and integrates into a unique
system five different typing techniques that hitherto lived in
isolation one from the other, that is:
1. gradual typing
2. occurrence typing
3. set-theoretic types
4. bidirectional typing techniques
5. dynamic records
The important distinctive aspect that characterizes our study is that
this integration was studied not for some *ad hoc* toy/idealized
academic language, but for an existing programming language with an important
community of programmers, with thousands of lines of existing code,
and, last but surely not least, which was designed not having types in
mind, far from that. The choice of the language dictated the adoption
of the five characteristics above: gradual typing was the solution we
chose to inject the flexibility needed to accept already existing code
that would not fit standard static typing discipline; occurrence typing was
needed to account for common usage patterns in which programmers use
distinct piece of codes according to dynamic type-checks of some
expressions; set-theoretic types were chosen because they provide both
intersection types (we need them to precisely type overloaded
functions which, typically, appear when occurrence typing is
performed on a parameter of a function) and union types (we need them to
give maximum flexibility for occurrence typing, which can typed by
calculating the least upper bound of the different alternatives);
bidirectional typing was adopted to allow the programmer to specify
overloaded types for functions via a simple explicit annotation,
without resorting to the heavy annotation that characterise functions
in CDuce; dynamic records were forced on us by the insanely great
flexibility that Nix designer have decided to give to their language.
Choosing an existing language also forced us to privilege practical
aspects over theoretical ones with the drawbacks and the advantages that this
choice implies. The main drawback is that we gave up having a system that is
formally proved to be sound. For instance, we could have designed a type system
in which record field selections are statically ensured always to succeed, but
this would have meant to reject nearly all programs that use dynamic labels
(which are many); likewise, gradual typing is used to shunt the type system
rather than to insert explicit casts that dynamically check the soundness of
programs as @ST06 do: in that sense we completely adhere to the philosophy of
Typed Racked that wants to annotate and document existing code but not modify
(or, worse, reject) any of it.
The implementation of this type system is the most important practical
contribution of this work. To this and to the design of how practically
integrate types in Nix we dedicated an important part of the time spent on this
internship.


@ -1,5 +1,4 @@
\newcommand{\cduce}{CDuce}
Instead of implementing our own implementation of types and of the subtyping
Instead of writing our own implementation of types and of the subtyping
algorithm, we reused those of the implementation of the \cduce{}
language [@Ball03].
@ -8,8 +7,8 @@ features a type-system based on the set-theoretic types of @Fri04 (on which
this work is itself based).
Although its type-system is obviously not the same as the one presented here,
its types form a superset of ours (with the exception of the gradual type
which is absent in \cduce{} but may be encoded as a distinguished type constant),
so we could reuse its representation.
which is absent in \cduce{} but may be encoded), so we could reuse its
representation.
We also wanted to reuse its subtyping algorithm (since implementing it would
have taken way too much time), but this was more difficult, as it implements a


@ -45,9 +45,9 @@ will not.
It is possible to get more flexibility by recognizing that the notion of a
predicate on a type `t` is a function of type
`(t -> true) AND ($\lnot$t -> false)`.
We can thus modify the Nix-light language by replacing the typecase and
replacing it with an if construct, and replace the typecase rules by the
following ones:
We can thus modify the Nix-light language by removing the typecase and
replacing it with an if construct, and replace the associated typing rules by
the following ones:
\begin{mathpar}
\inferrule{
@ -113,10 +113,10 @@ Otherwise, if its arguments evaluates to a value `v`, it evaluates to
As long as we do not try to track exceptions, a reasonable rule for this is:
\begin{displaymath}
\inferrule{Γ \vdash e : τ}{%
Γ \vdash \operatorname{tryEval}(e) : \{ success = \text{Bool}; value = \text{Bool} \vee τ \}}
\inferrule{Γ \tIC e : τ}{%
Γ \tIC \operatorname{tryEval}(e) : \{ success = \text{Bool}; value = \text{false} \vee τ \}}
\end{displaymath}
(note that in the presence of polymorphism, it would have been enough to type
`tryEval` as a function of type
`$\forall \alpha$. $\alpha$ -> { success = Bool; value = $\alpha$ | Bool}`).
`$\forall \alpha$. $\alpha$ -> { success = Bool; value = $\alpha$ $\vee$ false}`).
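As a concrete, hedged illustration of this rule, here is how `tryEval` behaves
in actual Nix, where it is exposed as `builtins.tryEval` (the snippet is ours,
not taken from the report):

```nix
# builtins.tryEval catches errors raised by throw (and failed assertions),
# returning false as the value on failure, matching the type false ∨ τ.
[
  (builtins.tryEval (throw "boom"))  # { success = false; value = false; }
  (builtins.tryEval (1 + 2))         # { success = true;  value = 3; }
]
```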


@ -90,7 +90,7 @@ We now extends our core calculus with the two builtins data structures of Nix
\label{sec:implementation}
\input{out/generated/implem/intro.tex}
## CDuce and subtyping
## \cduce{} and subtyping
\input{out/generated/implem/cduce.tex}
## Adaptation of the type-system


@ -87,6 +87,7 @@
\catcode`λ=13
\catcode`ρ=13
\catcode`Γ=13
\newcommand{\cduce}{$\mathbb{C}$Duce}
\newcommand{λ}{\ensuremath{\lambda}}
\newcommand{σ}{\ensuremath{\sigma}}
\newcommand{τ}{\ensuremath{\tau}}


@ -47,7 +47,7 @@ The compilation of records is slightly complex, for two reasons:
For example, the record `{ DOLLAR{e1}.x = 1; DOLLAR{e2}.y = 2; }` may be
either of the form `{ z = { x = 1; y = 2; }; }` or
`{ z1 = { x = 1; }; z2 = { y = 2; }; }` depending on whether `e1` and `e2`
evaluate to the same value or not.
evaluate to the same value `z` or not.
It is then impossible to compile this exactly.
The solution we use here is to always assume that they are different.
@ -90,4 +90,5 @@ be compiled to the Nix-light expression
`(x = x tin Int) ? x : ((x = x tin Bool) ? x : false)`, on which we will be
able to perform occurrence typing for `x`.
This pre-processing can be done for any boolean combination of expressions in
the `if` clause thanks to the rewriting rules of \Cref{nix-light::preprocessing}
the `if` clause thanks to the rewriting rules of
\Cref{nix-light::preprocessing}.


@ -6,8 +6,8 @@ which require a special treatment from the typechecker.
The differences are:
- The `if` construct is replaced by the more general "typecase" construct of the
form `(x = e0 tin t) ? e1 : e2` (which evaluates to `e1` if `e0` evaluates to
a value of type `t` and to `e2` otherwise).
form `(x = e0 tin t) ? e1 : e2` (which evaluates to `e1[x:=e0]` if `e0`
evaluates to a value of type `t` and to `e2[x:=e0]` otherwise).
The general case `if e0 then e1 else e2` will be compiled to
`(x = (e0 : Bool) tin true) ? e1 : e2` (where `x` is a fresh variable),
whereas particular cases such as `if isInt x then e1 else e2` will be compiled
@ -52,7 +52,7 @@ pairwise distinct.
##### Pattern-matching
Pattern-matching in Nix-light has a rather classical semantics for a lazy
language, with the simplification that as patterns are not recursive, the
language, with the simplification that, as patterns are not recursive, the
argument is either not evaluated at all, or evaluated to head normal form.
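A small Nix example of this laziness (ours, not taken from the report):

```nix
# Matching forces the argument only to head normal form: the record and its
# labels are evaluated, but the field values are left untouched.
({ x }: 1) { x = throw "never forced"; }   # evaluates to 1
# ({ x }: 1) (throw "forced")              # would fail: the argument must be
                                           # evaluated to head normal form
```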
\newcommand{\var}{\mathcal{V}}
@ -62,11 +62,11 @@ $\var(r)$) as $\var(x) = \var(x:τ) = x$.
We extend this definition to a record pattern field `l` (of the form `r` or
`r?c` where `c` is a constant) by stating that $\var(r ? c) = \var(r)$.
In what follows, `l` denotes a record-pattern field of the form `r` or `r?c`
(where `c` is a constant).
(where `c` is a constant and `r` a variable pattern).
For a pattern `p` and a value `v` (resp. an expression `e`), we note
$\sfrac{p}{v}$ the substitution generated by the matching of `v` (resp. `e`)
against `p`.
$\sfrac{p}{v}$ (resp. $\sfrac{p}{e}$) the substitution generated by the
matching of `v` (resp. `e`) against `p`.
It is defined in \Cref{nix-light::pattern-matching}.
The basic idea from which all the rules derive is that the matching of any
expression `e` against a variable pattern will produce the substitution `x :=


@ -18,13 +18,14 @@ the type `{ x = Int }`.
Thanks to the use of overloaded functions, we can even give a useful typing to
more complex applications. If `f` is of type
`(Int -> Bool) AND (String -> String)` (i.e., `f` is an overloaded function
which is a function of type `Int -> Bool` and as a function of type
which is both a function of type `Int -> Bool` and a function of type
`String -> String`), we can give the type `{ x = Bool; y = String }` to the
expression `mapAttr f { x = 1; y = "foo"; }`.
But these two expressions correspond to two incompatible types for `mapAttr`
(namely `(Int -> Int) -> { x = Int } -> { x = Int }` and
`(Int -> Bool AND String -> String) -> { x = Int; y = String } -> { x = Bool; y = String }`.
`(Int -> Bool AND String -> String) -> { x = Int; y = String } -> { x = Bool; y = String }`).
Because of this, it is quite hard to reason about Nix.
Thus, we decided not to work on Nix itself, but on a simpler language, to which
Nix (or at least a large enough subset of it) can be compiled.
Thus, we decided not to work on Nix itself, but on a simpler language,
Nix-light, to which Nix (or at least a large enough subset of it) can be
compiled.


@ -86,6 +86,15 @@ month = {September}
year = {1998}
}
@InProceedings{DN13,
author = {Joshua Dunfield and Neelakantan R. Krishnaswami},
title = {Complete and Easy Bidirectional Typechecking for Higher-Rank Polymorphism},
booktitle = {Int'l Conf. Functional Programming},
month = sep,
year = {2013},
note = {\url{arXiv:1306.6032 [cs.PL]}}
}
@phdthesis{phdeelco,
author={Eelco Dolstra},
title={The Purely Functional Software Deployment Model},


@ -5,8 +5,8 @@ apply concepts coming from the world of programing
languages (functional ones in particular) to package management.
This approach solves in a very elegant way many problems encountered by
conventional package managers (Apt, Pacman, Yum, \ldots).
The state of a machine is (apart from some irreducible mutable state)
entirely described by the result of the evaluation of an expression in a
The state of a machine managed by Nix is (apart from some irreducible mutable
state) entirely described by the result of the evaluation of an expression in a
(purely functional) specialized language also called Nix.
## Problem studied {-}
@ -33,20 +33,20 @@ This works makes several technical contributions. In particular:
occurrence typing,
- an improved definition of bidirectional typing for set-theoretic type-systems.
In particular, a technique to propagate type information through the
syntactic tree (and in particular through lambdas)
syntax tree (especially through lambdas),
- an extension of the gradual type-system of @CL17 with records,
- a new way of typing records with dynamic labels that extends the formalism of
@Fri04 with static records.
- a new way of typing records with dynamic labels that extends the formalism
established by @Fri04 in the case of static records.
But besides these technical aspects, the most important contribution of
this work is, by far, that it brings together and integrates into a single
system five different typing techniques that hitherto lived in
isolation from one another, that is:
1. gradual typing [@ST06][@CL17]
1. gradual typing [@ST06; @CL17]
2. occurrence typing [@FH08]
3. set-theoretic types [@Fri04]
4. bidirectional typing techniques [@HP98]
4. bidirectional typing techniques [@HP98; @DN13]
5. dynamic records
The important distinctive aspect that characterizes our study is that this
@ -67,9 +67,9 @@ flexibility to occurrence typing, which can typed by calculating the least
upper bound of the different alternatives); *bidirectional typing* was adopted
to allow the programmer to specify overloaded types for functions via a simple
explicit annotation, without resorting to the heavy annotation that
characterise functions in CDuce; *dynamic records* were forced on us by the
insanely great flexibility that Nix designers have decided to give to their
language.
characterizes functions in most similar type-systems; *dynamic records* were
forced on us by the insanely great flexibility that Nix designers have decided
to give to their language.
Choosing an existing language also forced us to privilege practical
aspects over theoretical ones with the drawbacks and the advantages that this
@ -117,3 +117,10 @@ An annoying lack that would deserve to be filled is the absence of polymorphism
in the type system. This was essentially due to the fact that the gradual
typing system of @CL17 was set in a monomorphic type-system. However, a recent
extension by @Call18 adds polymorphism to this system, which may be used here.
#### Note on the language {-}
A large part of this document is intended to be presented at the conference
\<Programming\> 2018^[http://2018.programming-conference.org/], which is
why it is written in English.


@ -6,7 +6,7 @@ consisting of a lambda-calculus with typecase (i.e., we exclude lists and
records and all the related operations which will be dealt with in
\Cref{typing::structures}).
The type-system is divided into two parts: an "inference" system and a "check"
The type-system is divided into two parts: an *inference* system and a *check*
one.
The first one corresponds to classical bottom-up type inference while the
second one is a top-down system which does not do any inference but only tries
@ -128,8 +128,8 @@ then $\A(\tau)$ is defined as
where $\sqcup$ is itself defined as
\begin{displaymath}
\{ σ_i \rightarrow \tau_i \| i \in I \} \sqcup \{ σ_j \rightarrow \tau_j \| j \in J \} =
\{ (σ_i \wedge σ_j) \rightarrow (\tau_i \vee \tau_j) \| i \in I, j \in J \}
\{ σ_i \rightarrow \tau_i | i \in I \} \sqcup \{ σ_j \rightarrow \tau_j | j \in J \} =
\{ (σ_i \wedge σ_j) \rightarrow (\tau_i \vee \tau_j) | i \in I, j \in J \}
\end{displaymath}
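For instance (a worked instance of ours, not taken from the report), with two
singleton sets of arrows:
\begin{displaymath}
\{ \text{Int} \rightarrow \text{Bool} \} \sqcup \{ (\text{Int} \vee \text{String}) \rightarrow \text{Bool} \}
= \{ \text{Int} \rightarrow \text{Bool} \}
\end{displaymath}
since $\text{Int} \wedge (\text{Int} \vee \text{String}) = \text{Int}$ and
$\text{Bool} \vee \text{Bool} = \text{Bool}$. If we read $\sqcup$ as combining
the toplevel arrows of the two members of a union of function types, this says
that a function known only to have one of these two types can still safely be
applied to an integer and yields a boolean.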
@Fri04 shows that $τ$ can always be expressed in the form of the


@ -9,7 +9,7 @@ types].
The regular expressions `ρ` describe the types of the elements of the list the
same way as a textual regular expression describes a family of character
sequences.
For example, `[ Int* ]` is the type of (possibly empty )lists whose elements
For example, `[ Int* ]` is the type of (possibly empty) lists whose elements
are of type `Int`, `[ Bool Int* ]` the type of lists containing a boolean and
then an arbitrary number of integers, and `[ (Bool+ | (Int String*)) Int? ]` is
the type of lists that contain either at least one boolean, or an integer


@ -10,14 +10,14 @@ In Nix, there are two main uses of records
- Static records: This is the classical use of records as they are used in
statically typed languages: some predefined fields contain various pieces of
information about a structure.
In this setting, the labels are all statically defined as litteral
strings^[For some shameful reasons, it sometimes happens that those are
defined as a finite union of literal strings, so the rules should also support
this use-case] and the different fields are likely to contain values of
different types.
In this setting, the labels are all statically defined as expressions whose
type is a singleton string^[For some shameful reasons, it sometimes
happens that their type is a finite union of singleton strings, but the rules
are easy to extend to this use-case] and the different fields are likely to
contain values of different types.
- Dynamic maps: In this case, the record is used to store arbitrary key-value associations
(where the key is a string).
- Dynamic maps: In this case, the record is used to store arbitrary key-value
associations (where the key is a string).
The labels here may be defined as arbitrary expressions; however, most of the
time the values will all be of the same type.
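To make the two usages concrete, here is a small Nix sketch (ours, not taken
from the report; `builtins.listToAttrs` is a standard Nix primitive that
builds a record from a list of name/value pairs):

```nix
{
  # Static record: fixed labels, fields of heterogeneous types.
  static = { pname = "hello"; version = 2; doCheck = true; };

  # Dynamic map: labels computed at evaluation time, values all of one type.
  dynamic = builtins.listToAttrs
    (map (u: { name = u; value = "/home/" + u; }) [ "alice" "bob" ]);
  # => { alice = "/home/alice"; bob = "/home/bob"; }
}
```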
@ -31,10 +31,15 @@ requirements of lazy evaluation and gradual typing).
We assume the existence of a distinguished constant type `$\undefr$` which
represents an absent field in a record type.
We write `{ $x_1$ = $τ_1$; $\cdots$; $x_n$ = $τ_n$; }` as a shorthand for
`{ $x_1$ = $τ_1$; $\cdots$; $x_n$ = $τ_n$; _ = $τ$ }` is the type of a record
whose fields `$x_1$`, $\cdots$, `$x_n$` are respectively of type `$τ_1$`,
$\cdots$, `$τ_n$` and whose other fields are of type `$τ$` (any of these types
may be equal to, or contain, the special `$\undefr$` constant, meaning that the
corresponding field is not defined in the record).
We write `{ $x_1$ = $τ_1$; $\cdots$; $x_n$ = $τ_n$ }` as a shorthand for
`{ $x_1$ = $τ_1$; $\cdots$; $x_n$ = $τ_n$; _ = $\undefr$ }` and
`{ $x_1$ = $τ_1$; $\cdots$; $x_n$ = $τ_n$; .. }` for
`{ $x_1$ = $τ_1$; $\cdots$; $x_n$ = $τ_n$; _ = Any }`.
`{ $x_1$ = $τ_1$; $\cdots$; $x_n$ = $τ_n$; _ = Any $\vee$ $\undefr$ }`.
The former corresponds to a *closed* record type (i.e., a record of this type
will have *exactly* the listed fields with the given types) while the latter
corresponds to an *open* record type (i.e., a record of this type will have *at
@ -65,9 +70,8 @@ We have two rules for the literal records, corresponding to the two use-cases
of records:
- The *RFinite* rule handles the case of static records.
It applies when the labels have singleton types (or a finite union of
singleton types) and corresponds to the classical typing rule for records in
absence of dynamic labels.
It applies when the labels have singleton types and corresponds to the
classical typing rule for records in the absence of dynamic labels.
- The *IRInfinite* and *CRInfinite* rules handle the case of dynamic maps. In
this case, we do not try to track all the elements of the record; we just
@ -114,15 +118,21 @@ There are in fact three possible cases in this formula:
type $τ_x$. The type of the result may be $τ_x$ or $τ_2(x)$.
This definition enjoys a natural extension to arbitrary record types (i.e.,
subtypes of `{..}`), as shown by @Cas15 (a record type can be expressed as an
subtypes of `{..}`), as shown by @Cas15: a record type can be expressed as a
union of atomic record types, and we extend the $+$ operator by stating that
\begin{displaymath}
\left(\bigvee\limits_{i \in I} τ_i\right) + \left(\bigvee\limits_{j \in J} τ_j\right) =
\bigvee\limits_{i \in I,j \in J} (τ_i + τ_j)
\end{displaymath}
This effectively allows us to type the merge of two expressions whose types
are arbitrary record types.
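For instance (our own worked instance, assuming the expected behaviour of $+$
on atomic records with disjoint labels and using the shorthands of the
previous section), distributing $+$ over the union gives
\begin{displaymath}
\left( \{ x = \text{Int} \} \vee \{ x = \text{String} \} \right) + \{ y = \text{Bool} \}
= \{ x = \text{Int}; y = \text{Bool} \} \vee \{ x = \text{String}; y = \text{Bool} \}
\end{displaymath}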
#### Field access
We now define the typing of expressions of the form `e1.e2` or `e1.e2 or e3`
(i.e., the access of a field of a record).
\newcommand{\defr}{\operatorname{def}}
For a record type $τ = \{ x_1 = τ_1; \cdots; x_n = τ_n; \_ = τ_0 \}$, we
denote $τ_0$ by $\defr(τ)$.
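For example (an illustration of ours, using the shorthands above), $\defr$
maps the closed type `{ x = Int }` to `$\undefr$` and the open type
`{ x = Int; .. }` to `Any $\vee$ $\undefr$`.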


@ -1,6 +1,6 @@
### Presentation of the types
The types are inspired by the set-theoretic types used in the CDuce language
The types are inspired by the set-theoretic types used in the \cduce{} language
and described by @Fri04.
Their syntax is given alongside Nix-light's grammar in
\Cref{nix-light::grammar}.
@ -38,12 +38,12 @@ problems on a lazy semantic.
The reason for this is that the interpretation assumes fully evaluated values,
while a lazy language manipulates possibly non-evaluated expressions, whose
evaluation may never terminate if forced. In particular, the `Empty` type behaves
differently: there exists no value of this type (so its interpretation as set
differently: there exists no value of this type (so its interpretation as a set
is naturally the empty set), but there may be expressions of this type, which
themselves may appear inside values in a lazy setting.
For example, the type `{ x = Empty }` would be not inhabited in a strict semantic
(so it would itself be a subtype of every other type, the same way as `Empty`
itself), whereas in a lazy semantic such as the one of Nix-light, it is
For example, the type `{ x = Empty }` would not be inhabited in a strict
semantics (so it would itself be a subtype of every other type, the same way as
`Empty` itself), whereas in a lazy semantics such as that of Nix-light, it is
inhabited for example by `{ x = let y = y in y; }`.
It is possible to modify this interpretation to take this difference into
@ -58,3 +58,5 @@ interpretation of @Fri04. Here, it becomes $\left(\llbracket A \rrbracket \cup
This new interpretation (which is described by @CP17) entails a new subtyping
algorithm that fits the requirements of a lazy semantics.
It is this relation (that we note $\subtype$) that we use and extend to the
gradual subtyping relation $\subtypeG$.