Merge pull request #3997 from roc-lang/lint_markdown

Lint all markdown files once, guided by markdown-cli2
This commit is contained in:
Jan Van Bruggen 2022-09-09 09:27:31 -06:00 committed by GitHub
commit aec2baefe3
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
27 changed files with 620 additions and 545 deletions

View File

@ -15,22 +15,28 @@ On Macos and Linux, we highly recommend Using [nix](https://nixos.org/download.h
If you are running ArchLinux or a derivative like Manjaro, you'll need to run `sudo sysctl -w kernel.unprivileged_userns_clone=1` before installing nix.
Install nix (not necessary on NixOS):
- If you are using WSL (Windows subsystem for Linux):
```
```sh
sh <(curl -L https://nixos.org/nix/install) --no-daemon
```
- For everything else:
```
```sh
sh <(curl -L https://nixos.org/nix/install) --daemon
```
Open a new terminal and install nixFlakes in your environment:
```
```sh
nix-env -iA nixpkgs.nixFlakes
```
Edit either `~/.config/nix/nix.conf` or `/etc/nix/nix.conf` and add:
```
```text
experimental-features = nix-command flakes
```
@ -40,9 +46,11 @@ If you don't know how to do this, restarting your computer will also do the job.
#### Usage
Now with nix set up, you just need to run one command from the roc project root directory:
```
```sh
nix develop
```
You should be in a shell with everything needed to build already installed.
Use `cargo run help` to see all subcommands.
To use the `repl` subcommand, execute `cargo run repl`.
@ -61,7 +69,8 @@ The editor is a :construction:WIP:construction: and not ready yet to replace you
`cargo run edit` should work on NixOS and MacOS. If you use Linux x86_64, follow the instructions below.
If you're not already in a nix shell, execute `nix develop` at the root of the repo folder and then execute:
```
```sh
nixVulkanIntel cargo run edit
```
@ -74,16 +83,16 @@ That will help us improve this document for everyone who reads it in the future!
To build the compiler, you need these installed:
* [Zig](https://ziglang.org/), see below for version
* `libxkbcommon` - macOS seems to have it already; on Ubuntu or Debian you can get it with `apt-get install libxkbcommon-dev`
* On Debian/Ubuntu `sudo apt-get install pkg-config`
* LLVM, see below for version
* [rust](https://rustup.rs/)
* Also run `cargo install bindgen` after installing rust. You may need to open a new terminal.
- [Zig](https://ziglang.org/), see below for version
- `libxkbcommon` - macOS seems to have it already; on Ubuntu or Debian you can get it with `apt-get install libxkbcommon-dev`
- On Debian/Ubuntu `sudo apt-get install pkg-config`
- LLVM, see below for version
- [rust](https://rustup.rs/)
- Also run `cargo install bindgen` after installing rust. You may need to open a new terminal.
To run the test suite (via `cargo test`), you additionally need to install:
* [`valgrind`](https://www.valgrind.org/) (needs special treatment to [install on macOS](https://stackoverflow.com/a/61359781)
- [`valgrind`](https://www.valgrind.org/) (needs special treatment to [install on macOS](https://stackoverflow.com/a/61359781))
Alternatively, you can use `cargo test --no-fail-fast` or `cargo test -p specific_tests` to skip over the valgrind failures & tests.
For debugging LLVM IR, we use [DebugIR](https://github.com/vaivaswatha/debugir). This dependency is only required to build with the `--debug` flag, and for normal development you should be fine without it.
@ -92,7 +101,7 @@ For debugging LLVM IR, we use [DebugIR](https://github.com/vaivaswatha/debugir).
You may see an error like this during builds:
```
```text
/usr/bin/ld: cannot find -lxcb-render
/usr/bin/ld: cannot find -lxcb-shape
/usr/bin/ld: cannot find -lxcb-xfixes
@ -100,16 +109,18 @@ You may see an error like this during builds:
If so, you can fix it like so:
```
```sh
sudo apt-get install libxcb-render0-dev libxcb-shape0-dev libxcb-xfixes0-dev
```
### Zig
**version: 0.9.1**
For any OS, you can use [`zigup`](https://github.com/marler8997/zigup) to manage zig installations.
If you prefer a package manager, you can try the following:
- For macOS, you can install with `brew install zig`
- For Ubuntu, you can use Snap: install with `snap install zig --classic --beta`
- For other systems, check out this [page](https://github.com/ziglang/zig/wiki/Install-Zig-from-a-Package-Manager)
@ -119,18 +130,21 @@ If you want to install it manually, you can also download Zig directly [here](ht
> WINDOWS NOTE: when you unpack the Zig archive on Windows, the result is nested in an extra directory. The instructions on the zig website will seem not to work. So, double-check that the path to the zig executable does not include the same directory name twice.
### LLVM
**version: 13.0.x**
For macOS, you can install LLVM 13 with `brew install llvm@13` and then add
`$(brew --prefix llvm@13)/bin` to your `PATH`. You can confirm this worked by
running `llc --version` - it should mention "LLVM version 13.0.0" at the top.
You may also need to manually specify a prefix env var like so:
```
```sh
export LLVM_SYS_130_PREFIX=/usr/local/opt/llvm@13
```
For Ubuntu and Debian:
```
```sh
sudo apt -y install lsb-release software-properties-common gnupg
wget https://apt.llvm.org/llvm.sh
chmod +x llvm.sh
@ -140,11 +154,11 @@ chmod +x llvm.sh
If you use this script, you'll need to add `clang` to your `PATH`.
By default, the script installs it as `clang-13`. You can address this with symlinks like so:
```
```sh
sudo ln -s /usr/bin/clang-13 /usr/bin/clang
```
There are also alternative installation options at http://releases.llvm.org/download.html
There are also alternative installation options at <http://releases.llvm.org/download.html>
[Troubleshooting](#troubleshooting)
@ -164,10 +178,12 @@ On Ubuntu, running `sudo apt install pkg-config cmake libx11-dev` fixed this.
If you encounter `cannot find -lz` run `sudo apt install zlib1g-dev`.
If you encounter:
```
```text
error: No suitable version of LLVM was found system-wide or pointed
to by LLVM_SYS_130_PREFIX.
```
Add `export LLVM_SYS_130_PREFIX=/usr/lib/llvm-13` to your `~/.bashrc` or equivalent file for your shell.
### LLVM installation on macOS
@ -176,7 +192,7 @@ If installing LLVM fails, it might help to run `sudo xcode-select -r` before ins
It might also be useful to add these exports to your shell:
```
```sh
export LDFLAGS="-L/usr/local/opt/llvm/lib -Wl,-rpath,/usr/local/opt/llvm/lib"
export CPPFLAGS="-I/usr/local/opt/llvm/include"
```
@ -191,12 +207,15 @@ The official LLVM pre-built binaries for Windows lack features that roc needs. I
1. [Download 7-zip](https://www.7-zip.org/) to be able to extract this archive.
1. Extract the 7z file to where you want to permanently keep the folder. We recommend you pick a path without any spaces in it.
1. In powershell, set the `LLVM_SYS_130_PREFIX` environment variable (check [here](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_environment_variables?view=powershell-7.2#saving-environment-variables-with-the-system-control-panel) to make this a permanent environment variable):
```
```text
<# ! Replace YOUR_USERNAME ! #>
$env:LLVM_SYS_130_PREFIX = 'C:\Users\YOUR_USERNAME\Downloads\LLVM-13.0.0-win64'
```
1. Add the LLVM `bin` directory to your `PATH` to prevent issue #3952:
```
```text
<# ! Replace YOUR_USERNAME ! #>
[Environment]::SetEnvironmentVariable(
"Path",
@ -223,7 +242,7 @@ makes build times a lot faster, and I highly recommend it.
Create `~/.cargo/config.toml` if it does not exist and add this to it:
```
```toml
[build]
# Link with lld, per https://github.com/rust-lang/rust/issues/39915#issuecomment-538049306
# Use target-cpu=native, per https://deterministic.space/high-performance-rust.html

View File

@ -8,20 +8,20 @@ In the interest of fostering an open and welcoming environment, we as participan
Examples of behavior that contributes to creating a positive environment include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Kindly giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Kindly giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
- Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address, without their explicit permission
* Telling others to be less sensitive, or that they should not feel hurt or offended by something
- The use of sexualized language or imagery, and sexual attention or advances of any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email address, without their explicit permission
- Telling others to be less sensitive, or that they should not feel hurt or offended by something
## Enforcement Responsibilities
@ -41,4 +41,4 @@ Moderators who do not follow or enforce the Code of Conduct in good faith may fa
## Attribution
This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at <https://www.contributor-covenant.org/version/1/4/code-of-conduct.html>

View File

@ -11,14 +11,17 @@ Check [Building from source](BUILDING_FROM_SOURCE.md) for instructions.
## Running Tests
Most contributors execute the following commands before pushing their code:
```
```sh
cargo test
cargo fmt --all -- --check
cargo clippy --workspace --tests -- --deny warnings
```
Execute `cargo fmt --all` to fix the formatting.
## Contribution Tips
- If you've never made a pull request on GitHub before, [this](https://www.freecodecamp.org/news/how-to-make-your-first-pull-request-on-github-3/) will be a good place to start.
- Create an issue if the purpose of a struct/field/type/function/... is not immediately clear from its name or nearby comments.
- You can find good first issues [here](https://github.com/roc-lang/roc/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).
@ -30,7 +33,8 @@ Execute `cargo fmt --all` to fix the formatting.
2. [Make a key to sign your commits.](https://docs.github.com/en/authentication/managing-commit-signature-verification/generating-a-new-gpg-key).
3. [Configure git to use your key.](https://docs.github.com/en/authentication/managing-commit-signature-verification/telling-git-about-your-signing-key)
4. Make git sign your commits automatically:
```
```sh
git config --global commit.gpgsign true
```

FAQ.md
View File

@ -29,7 +29,7 @@ fantastical, and it has incredible potential for puns. Here are some different w
Fun fact: "roc" translates to 鹏 in Chinese, [which means](https://www.mdbg.net/chinese/dictionary?page=worddict&wdrst=0&wdqb=%E9%B9%8F) "a large fabulous bird."
# Why make a new editor instead of making an LSP plugin for VSCode, Vim or Emacs?
## Why make a new editor instead of making an LSP plugin for VSCode, Vim or Emacs?
The Roc editor is one of the key areas where we want to innovate. Constraining ourselves to a plugin for existing editors would severely limit our possibilities for innovation.
@ -475,4 +475,4 @@ The split of Rust for the compiler and Zig for the standard library has worked w
## Why is the website so basic?
We have a very basic website on purpose, it helps set expectations that roc is a work in progress and not ready yet for a first release.
We have a very basic website on purpose; it helps set expectations that roc is a work in progress and not ready yet for a first release.

View File

@ -1,6 +1,7 @@
# Work in progress!
Roc is not ready for a 0.1 release yet, but we do have:
- [**installation** guide](https://github.com/roc-lang/roc/tree/main/getting_started)
- [**tutorial**](https://github.com/roc-lang/roc/blob/main/TUTORIAL.md)
- [frequently asked questions](https://github.com/roc-lang/roc/blob/main/FAQ.md)
@ -8,7 +9,7 @@ Roc is not ready for a 0.1 release yet, but we do have:
If you'd like to get involved in contributing to the language, the Zulip chat is also the best place to get help with [good first issues](https://github.com/roc-lang/roc/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).
# Sponsors
## Sponsors
We are very grateful to our sponsors [NoRedInk](https://www.noredink.com/), [rwx](https://www.rwx.com), and [Tweede golf](https://tweedegolf.nl/en).

View File

@ -15,13 +15,13 @@ Learn how to install roc on your machine [here](https://github.com/roc-lang/roc/
Let's start by getting acquainted with Roc's Read Eval Print Loop, or REPL for
short. Run this in a terminal:
```
```sh
$ roc repl
```
You should see this:
```
```sh
The rockin' roc repl
```
@ -131,13 +131,13 @@ main = Stdout.line "I'm a Roc application!"
Try running this with:
```
```sh
$ roc Hello.roc
```
You should see this:
```
```sh
I'm a Roc application!
```
@ -157,7 +157,7 @@ total = Num.toStr (birds + iguanas)
Now if you run `roc Hello.roc`, you should see this:
```
```sh
There are 5 animals.
```
@ -165,6 +165,7 @@ There are 5 animals.
short - namely, `main`, `birds`, `iguanas`, and `total`.
A definition names an expression.
- The first def assigns the name `main` to the expression `Stdout.line "I have \(numDefs) definitions."`. The `Stdout.line` function takes a string and prints it as a line to [`stdout`] (the terminal's standard output device).
- The next two defs assign the names `birds` and `iguanas` to the expressions `3` and `2`.
- The last def assigns the name `total` to the expression `Num.toStr (birds + iguanas)`.
@ -231,8 +232,9 @@ addAndStringify = \num1, num2 ->
```
We did two things here:
* We introduced a local def named `sum`, and set it equal to `num1 + num2`. Because we defined `sum` inside `addAndStringify`, it will not be accessible outside that function.
* We added an `if` / `then` / `else` conditional to return either `""` or `Num.toStr sum` depending on whether `sum == 0`.
- We introduced a local def named `sum`, and set it equal to `num1 + num2`. Because we defined `sum` inside `addAndStringify`, it will not be accessible outside that function.
- We added an `if` / `then` / `else` conditional to return either `""` or `Num.toStr sum` depending on whether `sum == 0`.
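Spelled out in full, a reconstruction based on the two bullets above (the diff hunk only shows the function's first line, so treat this as a sketch) would look roughly like this:

```coffee
addAndStringify = \num1, num2 ->
    sum = num1 + num2

    if sum == 0 then
        ""
    else
        Num.toStr sum
```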
Of note, we couldn't have done `total = num1 + num2` because that would be
redefining `total` in the global scope, and defs can't be redefined. (However, we could use the name
@ -399,8 +401,8 @@ fromOriginal = { original & birds: 4, iguanas: 3 }
The `fromScratch` and `fromOriginal` records are equal, although they're assembled in
different ways.
* `fromScratch` was built using the same record syntax we've been using up to this point.
* `fromOriginal` created a new record using the contents of `original` as defaults for fields that it didn't specify after the `&`.
- `fromScratch` was built using the same record syntax we've been using up to this point.
- `fromOriginal` created a new record using the contents of `original` as defaults for fields that it didn't specify after the `&`.
Note that when we do this, the fields you're overriding must all be present on the original record,
and their values must have the same type as the corresponding values in the original record.
@ -701,6 +703,7 @@ in the list returns `True`:
List.any [1, 2, 3] Num.isOdd
# returns True because 1 and 3 are odd
```
```coffee
List.any [1, 2, 3] Num.isNegative
# returns False because none of these is negative
@ -712,6 +715,7 @@ There's also `List.all` which only returns `True` if all the elements in the lis
List.all [1, 2, 3] Num.isOdd
# returns False because 2 is not odd
```
```coffee
List.all [1, 2, 3] Num.isPositive
# returns True because all of these are positive
@ -791,6 +795,7 @@ For example, what do each of these return?
```coffee
List.get ["a", "b", "c"] 1
```
```coffee
List.get ["a", "b", "c"] 100
```
@ -864,6 +869,7 @@ functions where argument order matters. For example, these two uses of `List.app
```coffee
List.append ["a", "b", "c"] "d"
```
```coffee
["a", "b", "c"]
|> List.append "d"
@ -875,9 +881,11 @@ sugar for `Num.div a b`:
```coffee
first / second
```
```coffee
Num.div first second
```
```coffee
first
|> Num.div second
@ -1020,9 +1028,11 @@ What we want is something like one of these:
```coffee
reverse : List elem -> List elem
```
```coffee
reverse : List value -> List value
```
```coffee
reverse : List a -> List a
```
@ -1083,9 +1093,9 @@ the lowest `U16` would be zero (since it always is for unsigned integers), and t
Choosing a size depends on your performance needs and the range of numbers you want to represent. Consider:
* Larger integer sizes can represent a wider range of numbers. If you absolutely need to represent numbers in a certain range, make sure to pick an integer size that can hold them!
* Smaller integer sizes take up less memory. These savings rarely matters in variables and function arguments, but the sizes of integers that you use in data structures can add up. This can also affect whether those data structures fit in [cache lines](https://en.wikipedia.org/wiki/CPU_cache#Cache_performance), which can easily be a performance bottleneck.
* Certain processors work faster on some numeric sizes than others. There isn't even a general rule like "larger numeric sizes run slower" (or the reverse, for that matter) that applies to all processors. In fact, if the CPU is taking too long to run numeric calculations, you may find a performance improvement by experimenting with numeric sizes that are larger than otherwise necessary. However, in practice, doing this typically degrades overall performance, so be careful to measure properly!
- Larger integer sizes can represent a wider range of numbers. If you absolutely need to represent numbers in a certain range, make sure to pick an integer size that can hold them!
- Smaller integer sizes take up less memory. These savings rarely matter in variables and function arguments, but the sizes of integers that you use in data structures can add up. This can also affect whether those data structures fit in [cache lines](https://en.wikipedia.org/wiki/CPU_cache#Cache_performance), which can easily be a performance bottleneck.
- Certain processors work faster on some numeric sizes than others. There isn't even a general rule like "larger numeric sizes run slower" (or the reverse, for that matter) that applies to all processors. In fact, if the CPU is taking too long to run numeric calculations, you may find a performance improvement by experimenting with numeric sizes that are larger than otherwise necessary. However, in practice, doing this typically degrades overall performance, so be careful to measure properly!
Here are the different fixed-size integer types that Roc supports:
@ -1126,9 +1136,9 @@ As such, it's very important to design your integer operations not to exceed the
Roc has three fractional types:
* `F32`, a 32-bit [floating-point number](https://en.wikipedia.org/wiki/IEEE_754)
* `F64`, a 64-bit [floating-point number](https://en.wikipedia.org/wiki/IEEE_754)
* `Dec`, a 128-bit decimal [fixed-point number](https://en.wikipedia.org/wiki/Fixed-point_arithmetic)
- `F32`, a 32-bit [floating-point number](https://en.wikipedia.org/wiki/IEEE_754)
- `F64`, a 64-bit [floating-point number](https://en.wikipedia.org/wiki/IEEE_754)
- `Dec`, a 128-bit decimal [fixed-point number](https://en.wikipedia.org/wiki/Fixed-point_arithmetic)
These are different from integers in that they can represent numbers with fractional components,
such as 1.5 and -0.123.
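For instance, using the two example values from the sentence above (the names `ratio` and `balance` are made up for illustration):

```coffee
ratio : F64
ratio = 1.5

balance : Dec
balance = -0.123
```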
@ -1193,18 +1203,21 @@ and also `Num.cos 1` and have them all work as expected; the number literal `1`
you can pass number literals to functions expecting even more constrained types, like `I32` or `F64`.
### Typed Number Literals
When writing a number literal in Roc you can specify the numeric type as a suffix of the literal.
`1u8` specifies `1` as an unsigned 8-bit integer, `5i32` specifies `5` as a signed 32-bit integer, etc.
The full list of possible suffixes includes:
`i8`, `u8`, `i16`, `u16`, `i32`, `u32`, `i64`, `u64`, `i128`, `u128`, `nat`, `f32`, `f64`, `dec`
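A brief illustration of the suffixes just listed (the names are hypothetical):

```coffee
small = 1u8   # 1 as an unsigned 8-bit integer (U8)
medium = 5i32 # 5 as a signed 32-bit integer (I32)
big = 5u64    # 5 as an unsigned 64-bit integer (U64)
```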
### Hexadecimal Integer Literals
Integer literals can be written in hexadecimal form by prefixing with `0x` followed by hexadecimal characters.
`0xFE` evaluates to decimal `254`
The integer type can be specified as a suffix to the hexadecimal literal,
so `0xC8u8` evaluates to decimal `200` as an unsigned 8-bit integer.
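For example (the names are hypothetical):

```coffee
x = 0xFE   # decimal 254, numeric type inferred as usual
y = 0xC8u8 # decimal 200, pinned to U8 by the suffix
```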
### Binary Integer Literals
Integer literals can be written in binary form by prefixing with `0b` followed by the 1's and 0's representing
each bit. `0b0000_1000` evaluates to decimal `8`
The integer type can be specified as a suffix to the binary literal,
@ -1235,8 +1248,8 @@ Roc compiler. That's why they're called "builtins!"
Besides being built into the compiler, the builtin modules are different from other modules in that:
* They are always imported. You never need to add them to `imports`.
* All their types are imported unqualified automatically. So you never need to write `Num.Nat`, because it's as if the `Num` module was imported using `imports [Num.{ Nat }]` (and the same for all the other types in the `Num` module).
- They are always imported. You never need to add them to `imports`.
- All their types are imported unqualified automatically. So you never need to write `Num.Nat`, because it's as if the `Num` module was imported using `imports [Num.{ Nat }]` (and the same for all the other types in the `Num` module).
## The app module header
@ -1295,8 +1308,6 @@ this `imports` line tells the Roc compiler that when we call `Stdout.line`, it
should look for that `line` function in the `Stdout` module of the
`examples/interactive/cli-platform/main.roc` package.
# Building a Command-Line Interface (CLI)
## Tasks
Tasks are technically not part of the Roc language, but they're very common in
@ -1304,10 +1315,10 @@ platforms. Let's use the CLI platform in `examples/interactive/cli-platform/main
In the CLI platform, we have four operations we can do:
* Write a string to the console
* Read a string from user input
* Write a string to a file
* Read a string from a file
- Write a string to the console
- Read a string from user input
- Write a string to a file
- Read a string from a file
We'll use these four operations to learn about tasks.
@ -1513,22 +1524,24 @@ main =
```
This way, it reads like a series of instructions:
1. First, run the `Stdout.line` task and await its completion. Ignore its output (hence the underscore in `_ <-`)
2. Next, run the `Stdin.line` task and await its completion. Name its output `text`.
3. Finally, run the `Stdout.line` task again, using the `text` value we got from the `Stdin.line` effect.
Some important things to note about backpassing and `await`:
* `await` is not a language keyword in Roc! It's referring to the `Task.await` function, which we imported unqualified by writing `Task.{ await }` in our module imports. (That said, it is playing a similar role here to the `await` keyword in languages that have `async`/`await` keywords, even though in this case it's a function instead of a special keyword.)
* Backpassing syntax does not need to be used with `await` in particular. It can be used with any function.
* Roc's compiler treats functions defined with backpassing exactly the same way as functions defined the other way. The only difference between `\text ->` and `text <-` is how they look, so feel free to use whichever looks nicer to you!
# Appendix: Advanced Concepts
- `await` is not a language keyword in Roc! It's referring to the `Task.await` function, which we imported unqualified by writing `Task.{ await }` in our module imports. (That said, it is playing a similar role here to the `await` keyword in languages that have `async`/`await` keywords, even though in this case it's a function instead of a special keyword.)
- Backpassing syntax does not need to be used with `await` in particular. It can be used with any function.
- Roc's compiler treats functions defined with backpassing exactly the same way as functions defined the other way. The only difference between `\text ->` and `text <-` is how they look, so feel free to use whichever looks nicer to you!
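To illustrate that last point, here is a sketch of the same task written both ways (`greet1` and `greet2` are made-up names; both assume the `Stdin`, `Stdout`, and `Task` imports used earlier in this section):

```coffee
# Passing a lambda to Task.await directly:
greet1 =
    Task.await Stdin.line \text ->
        Stdout.line "You just entered: \(text)"

# The same thing written with backpassing:
greet2 =
    text <- Task.await Stdin.line
    Stdout.line "You just entered: \(text)"
```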
## Appendix: Advanced Concepts
Here are some concepts you likely won't need as a beginner, but may want to know about eventually.
This is listed as an appendix rather than the main tutorial, to emphasize that it's totally fine
to stop reading here and go build things!
## Open Records and Closed Records
### Open Records and Closed Records
Let's say I write a function which takes a record with a `firstName`
and `lastName` field, and puts them together with a space in between:
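The function body itself falls outside this diff hunk; a minimal sketch of such a function (assuming the name `fullName` used in the calls below) could look like:

```coffee
fullName = \user ->
    "\(user.firstName) \(user.lastName)"
```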
@ -1542,9 +1555,9 @@ I can pass this function a record that has more fields than just
`firstName` and `lastName`, as long as it has *at least* both of those fields
(and both of them are strings). So any of these calls would work:
* `fullName { firstName: "Sam", lastName: "Sample" }`
* `fullName { firstName: "Sam", lastName: "Sample", email: "blah@example.com" }`
* `fullName { age: 5, firstName: "Sam", things: 3, lastName: "Sample", role: Admin }`
- `fullName { firstName: "Sam", lastName: "Sample" }`
- `fullName { firstName: "Sam", lastName: "Sample", email: "blah@example.com" }`
- `fullName { age: 5, firstName: "Sam", things: 3, lastName: "Sample", role: Admin }`
This `user` argument is an *open record* - that is, a description of a minimum set of fields
on a record, and their types. When a function takes an open record as an argument,
@ -1586,7 +1599,7 @@ a closed record by putting a `{}` as the type variable (so for example, `{ email
`{ email : Str }`). In practice, closed records are basically always written without the `{}` on the end,
but later on we'll see a situation where putting types other than `*` in that spot can be useful.
## Constrained Records
### Constrained Records
The type variable can also be a named type variable, like so:
@ -1597,9 +1610,10 @@ addHttps = \record ->
```
This function uses *constrained records* in its type. The annotation is saying:
* This function takes a record which has at least a `url` field, and possibly others
* That `url` field has the type `Str`
* It returns a record of exactly the same type as the one it was given
- This function takes a record which has at least a `url` field, and possibly others
- That `url` field has the type `Str`
- It returns a record of exactly the same type as the one it was given
So if we give this function a record with five fields, it will return a record with those
same five fields. The only requirement is that one of those fields must be `url : Str`.
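Putting those three bullets together, the annotation would look roughly like this (the body is a sketch; only the type is pinned down by the text above):

```coffee
addHttps : { url : Str }a -> { url : Str }a
addHttps = \record ->
    { record & url: "https://\(record.url)" }
```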
@ -1621,7 +1635,7 @@ field of that record. So if you passed it a record that was not guaranteed to ha
present (such as an `{ a : Str, b : Bool }*` record, which only guarantees that the fields `a` and `b` are present),
the function might try to access a `c` field at runtime that did not exist!
## Type Variables in Record Annotations
### Type Variables in Record Annotations
You can add type annotations to make record types less flexible than what the compiler infers, but not more
flexible. For example, you can use an annotation to tell the compiler to treat a record as closed when it would
@ -1739,7 +1753,7 @@ prevent the compiler from raising an error that would have revealed the mistake.
That said, this is a useful technique to know about if you want to (for example) make a record
type that accumulates more and more fields as it progresses through a series of operations.
## Open and Closed Tag Unions
### Open and Closed Tag Unions
Just like how Roc has open records and closed records, it also has open and closed tag unions.
@ -1791,7 +1805,7 @@ the implementation actually handles.
> accept `example : [Foo Str, Bar Bool] -> Bool` as the type annotation, even though the catch-all branch
> would permit the more flexible `example : [Foo Str, Bar Bool]* -> Bool` annotation instead.
## Combining Open Unions
### Combining Open Unions
When we make a new record, it's inferred to be a closed record. For example, in `foo { a: "hi" }`,
the type of `{ a: "hi" }` is inferred to be `{ a : Str }`. In contrast, when we make a new tag, it's inferred
@ -1836,14 +1850,14 @@ the tags in the open union you're providing.
So if I have an `[Ok Str]*` value, I can pass it to functions with any of these types (among others):
* `[Ok Str]* -> Bool`
* `[Ok Str] -> Bool`
* `[Ok Str, Err Bool]* -> Bool`
* `[Ok Str, Err Bool] -> Bool`
* `[Ok Str, Err Bool, Whatever]* -> Bool`
* `[Ok Str, Err Bool, Whatever] -> Bool`
* `Result Str Bool -> Bool`
* `[Err Bool, Whatever]* -> Bool`
- `[Ok Str]* -> Bool`
- `[Ok Str] -> Bool`
- `[Ok Str, Err Bool]* -> Bool`
- `[Ok Str, Err Bool] -> Bool`
- `[Ok Str, Err Bool, Whatever]* -> Bool`
- `[Ok Str, Err Bool, Whatever] -> Bool`
- `Result Str Bool -> Bool`
- `[Err Bool, Whatever]* -> Bool`
That last one works because a function accepting an open union can accept any unrecognized tag, including
`Ok Str` - even though it is not mentioned as one of the tags in `[Err Bool, Whatever]*`! Remember, when
@ -1867,12 +1881,12 @@ a catch-all `_ ->` branch, it might not know what to do with an `Ok Str` if it r
In summary, here's a way to think about the difference between open unions in a value you have, compared to a value you're accepting:
* If you *have* a closed union, that means it has all the tags it ever will, and can't accumulate more.
* If you *have* an open union, that means it can accumulate more tags through conditional branches.
* If you *accept* a closed union, that means you only have to handle the possibilities listed in the union.
* If you *accept* an open union, that means you have to handle the possibility that it has a tag you can't know about.
- If you *have* a closed union, that means it has all the tags it ever will, and can't accumulate more.
- If you *have* an open union, that means it can accumulate more tags through conditional branches.
- If you *accept* a closed union, that means you only have to handle the possibilities listed in the union.
- If you *accept* an open union, that means you have to handle the possibility that it has a tag you can't know about.
## Type Variables in Tag Unions
### Type Variables in Tag Unions
Earlier we saw these two examples, one with an open tag union and the other with a closed one:
@ -1918,8 +1932,8 @@ the `Foo Str` and `Bar Bool` we already know about).
If we removed the type annotation from `example` above, Roc's compiler would infer the same type anyway.
This may be surprising if you look closely at the body of the function, because:
* The return type includes `Foo Str`, but no branch explicitly returns `Foo`. Couldn't the return type be `[Bar Bool]a` instead?
* The argument type includes `Bar Bool` even though we never look at `Bar`'s payload. Couldn't the argument type be inferred to be `Bar *` instead of `Bar Bool`, since we never look at it?
- The return type includes `Foo Str`, but no branch explicitly returns `Foo`. Couldn't the return type be `[Bar Bool]a` instead?
- The argument type includes `Bar Bool` even though we never look at `Bar`'s payload. Couldn't the argument type be inferred to be `Bar *` instead of `Bar Bool`, since we never look at it?
The reason it has this type is the `other -> other` branch. Take a look at that branch, and ask this question:
"What is the type of `other`?" There has to be exactly one answer! It can't be the case that `other` has one
@ -1936,11 +1950,11 @@ be equivalent.
> Also just like with records, you can use this to compose tag union type aliases. For example, you can write
> `NetworkError : [Timeout, Disconnected]` and then `Problem : [InvalidInput, UnknownFormat]NetworkError`
## Phantom Types
### Phantom Types
[This part of the tutorial has not been written yet. Coming soon!]
## Operator Desugaring Table
### Operator Desugaring Table
Here are various Roc expressions involving operators, and what they desugar to.
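A few representative rows, restating operators already covered earlier in this tutorial (the full table follows in the file itself):

```coffee
first + second          # desugars to Num.add first second
first / second          # desugars to Num.div first second
first |> Num.div second # desugars to Num.div first second
```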

View File

@ -2,16 +2,19 @@
# Running the benchmarks
Install cargo criterion:
```
```sh
cargo install cargo-criterion
```
To prevent stack overflow on the `CFold` benchmark:
```
```sh
ulimit -s unlimited
```
In the `cli` folder execute:
```
```sh
cargo criterion
```
```

View File

@ -1,6 +1,8 @@
# The Roc Compiler
Here's how the compiler is laid out.
# Parsing
## Parsing
The main goal of parsing is to take a plain old String (such as the contents of a .roc source file read from the filesystem) and translate that String into an `Expr` value.
@ -45,7 +47,7 @@ This is gibberish to the parser, so it will produce an error rather than an `Exp
Roc's parser is implemented using the [`marwes/combine`](http://github.com/marwes/combine-language/) crate.
# Evaluating
## Evaluating
One of the useful things we can do with an `Expr` is to evaluate it.
@ -123,7 +125,7 @@ If a function is "small enough" it's probably worth inlining too.
## Fusion
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/deforestation-short-cut.pdf
<https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/deforestation-short-cut.pdf>
Basic approach:
@ -139,9 +141,9 @@ Advanced approach:
Express operations like map and filter in terms of toStream and fromStream, to unlock more deforestation.
More info here:
https://wiki.haskell.org/GHC_optimisations#Fusion
<https://wiki.haskell.org/GHC_optimisations#Fusion>
# Getting started with the code
## Getting started with the code
The compiler contains a lot of code! If you're new to the project it can be hard to know where to start. It's useful to have some sort of "main entry point", or at least a "good place to start" for each of the main phases.
@ -172,7 +174,7 @@ ask the compiler to emit debug information during various stages of compilation.
There are some goals for more sophisticated debugging tools:
- A nicer unification debugger, see https://github.com/roc-lang/roc/issues/2486.
- A nicer unification debugger, see <https://github.com/roc-lang/roc/issues/2486>.
Any interest in helping out here is greatly appreciated.
### General Tips

View File

@ -21,16 +21,17 @@ Towards the bottom of `symbol.rs` there is a `define_builtins!` macro being used
Some of these have `#` inside their name (`first#list`, `#lt` ..). This is a trick we are doing to hide implementation details from Roc programmers. To a Roc programmer, a name with `#` in it is invalid, because `#` means everything after it is parsed as a comment. We are constructing these functions manually, so we are circumventing the parsing step and don't have such restrictions. We get to make functions and values with `#`, which as a consequence are not accessible to Roc programmers. Roc programmers simply cannot reference them.
But we can use these values, and some of them are necessary for implementing builtins. For example, `List.get` returns tags, and it is not easy for us to create tags when composing LLVM. What is easier, however, is:
- ..writing `List.#getUnsafe` that has the dangerous signature of `List elem, Nat -> elem` in LLVM
- ..writing `List elem, Nat -> Result elem [OutOfBounds]*` in a type-safe way that uses `getUnsafe` internally, only after checking that the `elem` at the `Nat` index exists (sketched below)
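As a Roc-level sketch of that pattern (the real definition lives inside the compiler; `getUnsafe` here stands in for the hidden `List.#getUnsafe`):

```coffee
get : List elem, Nat -> Result elem [OutOfBounds]*
get = \list, index ->
    if index < List.len list then
        Ok (getUnsafe list index) # safe only because we just checked the index
    else
        Err OutOfBounds
```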
### can/src/builtins.rs
Right at the top of this module is a function called `builtin_defs`. All this is doing is mapping the `Symbol` defined in `module/src/symbol.rs` to its implementation. Some of the builtins are quite complex, such as `list_get`. What makes `list_get` complex is that it returns tags, and in order to return tags it first has to defer to lower-level functions via an if statement.
Let's look at `List.repeat : elem, Nat -> List elem`, which is more straightforward and points directly to its lower-level implementation:
```
```rust
fn list_repeat(symbol: Symbol, var_store: &mut VarStore) -> Def {
let elem_var = var_store.fresh();
let len_var = var_store.fresh();
@ -54,30 +55,42 @@ fn list_repeat(symbol: Symbol, var_store: &mut VarStore) -> Def {
)
}
```
In these builtin definitions you will need to allocate for and list the arguments. For `List.repeat`, the arguments are the `elem_var` and the `len_var`. So in both the `body` and `defn` we list these arguments in a vector, with the `Symbol::ARG_1` and` Symvol::ARG_2` designating which argument is which.
In these builtin definitions you will need to allocate for and list the arguments. For `List.repeat`, the arguments are the `elem_var` and the `len_var`. So in both the `body` and `defn` we list these arguments in a vector, with the `Symbol::ARG_1` and `Symbol::ARG_2` designating which argument is which.
Since `List.repeat` is implemented entirely as low-level functions, its `body` is a `RunLowLevel`, and the `op` is `LowLevel::ListRepeat`. Let's talk about `LowLevel` in the next section.
## Connecting the definition to the implementation
### module/src/low_level.rs
This `LowLevel` thing connects the builtin defined in this module to its implementation. It's referenced in `can/src/builtins.rs` and it is used in `gen/src/llvm/build.rs`.
## Bottom level LLVM values and functions
### gen/src/llvm/build.rs
This is where bottom-level functions that need to be written as LLVM are created. If the function leads to a tag, that's a good sign it should not be written here in `build.rs`. If it's simple fundamental stuff like `INT_ADD` then it certainly should be written here.
## Letting the compiler know these functions exist
### builtins/src/std.rs
It's one thing to actually write these functions, it's _another_ thing to let the Roc compiler know they exist as part of the standard library. You have to tell the compiler "Hey, this function exists, and it has this type signature". That happens in `std.rs`.
## Specifying how we pass args to the function
### builtins/mono/src/borrow.rs
After we have all of this, we need to specify if the arguments we're passing are owned, borrowed or irrelevant. Towards the bottom of this file, add a new case for your builtin and specify each arg. Be sure to read the comment, as it explains this in more detail.
## Testing it
### solve/tests/solve_expr.rs
To make sure that Roc is properly inferring the type of the new builtin, add a test to this file similar to:
```
```rust
#[test]
fn atan() {
infer_eq_without_problem(
@ -90,19 +103,23 @@ fn atan() {
);
}
```
But replace `Num.atan` and the type signature with the new builtin.
### test_gen/test/*.rs
In this directory, there are a couple files like `gen_num.rs`, `gen_str.rs`, etc. For the `Str` module builtins, put the test in `gen_str.rs`, etc. Find the one for the new builtin, and add a test like:
```
```rust
#[test]
fn atan() {
assert_evals_to!("Num.atan 10", 1.4711276743037347, f64);
}
```
But replace `Num.atan`, the return value, and the return type with your new builtin.
# Mistakes that are easy to make!!
## Mistakes that are easy to make!!
When implementing a new builtin, it is often easy to copy and paste the implementation for an existing builtin. This can take you quite far since many builtins are very similar, but it also risks forgetting to change one small part of what you copied and pasted and losing a lot of time later on when you can't figure out why things don't work. So, speaking from experience, even if you are copying an existing builtin, try to implement it manually without copying and pasting. Two recent instances of this (as of September 7th, 2020):

View File

@ -3,6 +3,7 @@
## Adding a bitcode builtin
To add a builtin:
1. Add the function to the relevant module. For `Num` builtins, use `src/num.zig`; for `Str` builtins, use `src/str.zig`; and so on. **For anything you add, you must add tests for it!** Not only does this make the builtins more maintainable, it's the easiest way to test these functions in Zig. To run the tests, run: `zig build test`
2. Make sure the function is public with the `pub` keyword and uses the C calling convention. This is really easy, just add `pub` and `callconv(.C)` to the function declaration like so: `pub fn atan(num: f64) callconv(.C) f64 { ... }`
3. In `src/main.zig`, export the function. This is also organized by module. For example, for a `Num` function find the `Num` section and add: `comptime { exportNumFn(num.atan, "atan"); }`. The first argument is the function, the second is the name of it in LLVM.

View File

@ -81,14 +81,13 @@ WebAssembly functions can have any number of local variables. They are declared
In this backend, each symbol in the Mono IR gets one WebAssembly local. To illustrate, let's translate a simple Roc example to WebAssembly text format.
The WebAssembly code below is completely unoptimised and uses far more locals than necessary. But that does help to illustrate the concept of locals.
```
```coffee
app "test" provides [main] to "./platform"
main =
1 + 2 + 4
```
### Direct translation of Mono IR
The Mono IR contains two functions, `Num.add` and `main`, so we generate two corresponding WebAssembly functions.
@ -97,7 +96,7 @@ The code ends up being quite bloated, with lots of `local.set` and `local.get` i
I've added comments on each line to show what is on the stack and in the locals at each point in the program.
```
```text
(func (;0;) (param i64 i64) (result i64) ; declare function index 0 (Num.add) with two i64 parameters and an i64 result
local.get 0 ; load param 0 stack=[param0]
local.get 1 ; load param 1 stack=[param0, param1]
@ -127,7 +126,7 @@ I've added comments on each line to show what is on the stack and in the locals
This code doesn't actually require any locals at all.
(It also doesn't need the `return` instructions, but that's less of a problem.)
```
```text
(func (;0;) (param i64 i64) (result i64)
local.get 0
local.get 1
@ -154,7 +153,7 @@ When the `WasmBackend` generates code for a `Let` statement, it can "label" the
In practice it should be very common for values to appear on the VM stack in the right order, because in the Mono IR, statements occur in dependency order! We should only generate locals when the dependency graph is a little more complicated, and we actually need them.
```
```text
┌─────────────────┐ ┌─────────────┐
│ │ │ │
│ ├─────► Storage ├──────┐
@ -234,12 +233,14 @@ We implement a few linking operations in the Wasm backend. The most important ar
In the host .wasm file, `roc__mainForHost_1_exposed` is defined as a Wasm Import, as if it were an external JavaScript function. But when we link the host and app, we need to make it an internal function instead.
There are a few important facts to note about the Wasm binary format:
- Function calls refer to the callee by its function index in the file.
- If we move a function from one index to another, all of its call sites need to be updated. So we want to minimise this to make linking fast.
- If we _remove_ a function, then all functions above it will implicitly have their indices shifted down by 1! This is not good for speed. We should try to _swap_ rather than remove.
- JavaScript imports always get the lower indices.
With that background, here are the linking steps for a single app function that gets called by the host:
- Remove `roc__mainForHost_1_exposed` from the imports, updating all call sites to the new index, which is somewhere in the app.
- Swap the _last_ JavaScript import into the slot where `roc__mainForHost_1_exposed` was, updating all of its call sites in the host.
- Insert an internally-defined dummy function at the index where the last JavaScript import used to be.

View File

@ -1,6 +1,8 @@
# fuzz
To set up fuzzing, you will need to install cargo-fuzz and run with Rust nightly:
```
```sh
$ cargo install cargo-fuzz
$ cargo +nightly fuzz run -j<cores> <target> -- -dict=dict.txt
```

View File

@ -2,14 +2,14 @@
Ayaz Hafiz
# Summary
## Summary
This document describes how polymorphic lambda sets are specialized and resolved
in the compiler's type solver. It's derived from the original document at https://rwx.notion.site/Ambient-Lambda-Set-Specialization-50e0208a39844ad096626f4143a6394e.
in the compiler's type solver. It's derived from the original document at <https://rwx.notion.site/Ambient-Lambda-Set-Specialization-50e0208a39844ad096626f4143a6394e>.
TL;DR: lambda sets are resolved by unifying their ambient arrow types in a “bottom-up” fashion.
# Background
## Background
In this section I'll explain how lambda sets and specialization lambda sets work today, mostly from the ground up. I'll gloss over a few details and assume an understanding of type unification. The background will leave us with a direct presentation of the current specialization lambda set unification algorithm, and its limitation.
@ -90,7 +90,7 @@ f (@Foo {})
The unification trace for the call `f (@Foo {})` proceeds as follows. I use `'tN`, where `N` is a number, to represent fresh unbound type variables. Since `f` is a generalized type, `a'` is the fresh type “based on `a`" created for a particular usage of `f`.
```
```text
typeof f
~ Foo -'t1-> 't2
=>
@ -102,14 +102,14 @@ The unification trace for the call `f (@Foo {})` proceeds as follows. I use `'tN
Now that the specialization lambdas' type variables point to concrete types, we can resolve the concrete lambdas of `Foo:hashThunk:1` and `Foo:hashThunk:2`. Cool! Let's do that. We know that
```
```text
hashThunk = \@Foo {} -> \{} -> 1
#^^^^^^^^ Foo -[[Foo#hashThunk]]-> \{} -[[lam2]]-> U64
```
So `Foo:hashThunk:1` is `[[Foo#hashThunk]]` and `Foo:hashThunk:2` is `[[lam2]]`. Applying that to the type of `f` we get the trace
```
```text
Foo -[[zeroHash] + Foo:hashThunk:1]-> ({} -[[lam1] + Foo:hashThunk:2]-> U64)
<specialization time>
Foo:hashThunk:1 -> [[Foo#hashThunk]]
@ -120,7 +120,7 @@ So `Foo:hashThunk:1` is `[[Foo#hashThunk]]` and `Foo:hashThunk:2` is `[[lam2]]`.
Great, so now we know our options to dispatch `f` in the call `f (@Foo {})`, and the code-generator will insert tags appropriately for the specialization definition of `f` where `a = Foo` knowing the concrete lambda symbols.
# The Problem
## The Problem
This technique for lambda set resolution is all well and good when the specialized lambda sets are monomorphic, that is, they contain only concrete lambdas. So far in our development of the end-to-end compilation model that's been the case, and when it wasn't, there's been enough ambient information to coerce the specializations to be monomorphic.
@ -160,7 +160,7 @@ Suppose we have the call
With the present specialization technique, unification proceeds as follows:
```
```text
== solve (f (@Fo {})) ==
typeof f
~ Fo -'t1-> 't2
@ -194,25 +194,25 @@ But in key bit 2, we see that we know what we want `b''` to be! We want it to be
So where did we go wrong? Well, our problem is that we never saw that `b'` and `b''` should really be the same type variable. If only we knew that in this specialization `b'` and `b''` are the same instantiation, we'd be all good.
# A Solution
## A Solution
I'll now explain the best way I've thought of for us to solve this problem. If you see a better way, please let me know! I'm not sure I love this solution, but I do like it a lot more than some other naive approaches.
Okay, so first we'll enumerate some terminology, and the exact algorithm. Then we'll illustrate the algorithm with some examples; my hope is this will help explain why it must proceed in the way it does. We'll see that the algorithm depends on a few key invariants; I'll discuss them and their consequences along the way. Finally, we'll discuss a couple of details regarding the algorithm not directly related to its operation, but important to recognize. I hope then, you will tell me where I have gone wrong, or where you see a better opportunity to do things.
## The algorithm
### The algorithm
### Some definitions
#### Some definitions
- **The region invariant.** Previously we discussed the “region” of a lambda set in a specialization function definition. The way regions are assigned in the compiler follows a very specific ordering and holds an invariant we'll call the “region invariant”. First, let's define a procedure for creating function types and assigning regions:
```
```text
Type = \region ->
(Type_atom, region)
| Type_function region
Type_function = \region ->
let left_type, new_region = Type (region + 1)
let left_type, new_region = Type (region + 1)
let right_type, new_region = Type (new_region)
let func_type = left_type -[Lambda region]-> right_type
(func_type, new_region)
@ -220,7 +220,7 @@ Okay, so first well enumerate some terminology, and the exact algorithm. Then
This procedure would create functions that look like the trees below (abbreviating `L=Lambda`, `a=atom`):
```
```text
-[L 1]->
a a
@ -247,7 +247,7 @@ Okay, so first well enumerate some terminology, and the exact algorithm. Then
- `uls_of_var`. A look aside table of the unspecialized lambda sets (uls) depending on a variable. For example, in `a -[[] + a:f:1]-> (b -[[] + a:f:2]-> {})`, there would be a mapping of `a => { [[] + a:f:1]; [[] + a:f:2] }`. When `a` gets instantiated with a concrete type, we know that these lambda sets are ready to be resolved.
### Explicit Description
#### Explicit Description
The algorithm concerns what happens during lambda-set-specialization time. You may want to read it now, but it's also helpful to first look at the intuition below, then the examples, then revisit the explicit algorithm description.
@ -265,13 +265,13 @@ Suppose a type variable `a` with `uls_of_var` mapping `uls_a = {l1, ... ln}` has
1. For example, `(b -[[] + b:g:1]-> {})` if `C:f:r=Fo:f:2`, running on example from above.
3. Unify `t_f1 ~ t_f2`.
### Intuition
#### Intuition
The intuition is that we walk up the function type being specialized, starting from the leaves. Along the way we pick up bound type variables from both the function type being specialized, and the specialization type. The region invariant makes sure we thread bound variables through an increasingly larger scope.
## Some Examples
### Some Examples
### The motivating example
#### The motivating example
Recall the program from our problem statement
@ -303,7 +303,7 @@ With our algorithm, the call
has unification proceed as follows:
```
```text
== solve (f (@Fo {})) ==
typeof f
~ Fo -'t1-> 't2
@ -313,9 +313,9 @@ has unification proceed as follows:
=> Fo -[[] + Fo:f:1]-> (b' -[[] + Fo:f:2]-> {})
<specialization time>
step 1:
uls_Fo = { [[] + Fo:f:1], [[] + Fo:f:2] }
uls_Fo = { [[] + Fo:f:1], [[] + Fo:f:2] }
step 2 (sort):
uls_Fo' = { [[] + Fo:f:2], [[] + Fo:f:1] }
uls_Fo' = { [[] + Fo:f:2], [[] + Fo:f:1] }
step 3:
1. iteration: [[] + Fo:f:2]
b' -[[]]-> {} (t_f1 after removing Fo:f:2)
@ -342,7 +342,7 @@ has unification proceed as follows:
uls_Go = { [[] + Go:g:1] }
step 2 (sort):
uls_Go' = { [[] + Go:g:1] }
step 3:
step 3:
1. iteration: [[] + Go:g:1]
Go -[[]]-> {} (t_f1 after removing Go:g:1)
~ Go -[[Go#g]]-> {}
@ -356,36 +356,36 @@ f : Fo -[[Fo#f]]-> (Go -[[Go#g]]-> {})
There we go. Weve recovered the specialization type of the second lambda set to `Go#g`, as we wanted.
### The motivating example, in the presence of let-generalization
#### The motivating example, in the presence of let-generalization
Suppose instead we let-generalized the motivating example, so it was a program like
```
```coffee
h = f (@Fo {})
h (@Go {})
```
`h` still gets resolved correctly in this case. It's basically the same unification trace as above, except that after we find out that
```
```text
typeof f = Fo -[[Fo#f]]-> (b''' -[[] + b''':g:1]-> {})
```
we see that `h` has type
```
```text
b''' -[[] + b''':g:1]-> {}
```
We generalize this to
```
```text
h : c -[[] + c:g:1]-> {}
```
Then, the call `h (@Go {})` has the trace
```
```text
=== solve h (@Go {}) ===
typeof h
~ Go -'t1-> 't2
@ -398,7 +398,7 @@ Then, the call `h (@Go {})` has the trace
uls_Go = { [[] + Go:g:1] }
step 2 (sort):
uls_Go' = { [[] + Go:g:1] }
step 3:
step 3:
1. iteration: [[] + Go:g:1]
Go -[[]]-> {} (t_f1 after removing Go:g:1)
~ Go -[[Go#g]]-> {}
@ -406,7 +406,7 @@ Then, the call `h (@Go {})` has the trace
=> Go -[[Go#g]]-> {}
```
### Bindings on the right side of an arrow
#### Bindings on the right side of an arrow
This continues to work if instead of a type variable being bound on the left side of an arrow, it is bound on the right side. Let's see what that looks like. Consider
@ -429,7 +429,7 @@ g = \{} -> @Go {}
This is symmetrical to the first example we ran through. I can include a trace if you all would like, though it could be helpful to go through yourself and see that it would work.
### Deep specializations and captures
#### Deep specializations and captures
Alright, bear with me, this is a long and contrived one, but it demonstrates how this works in the presence of polymorphic captures (it's “nothing special”), and more importantly, why the bottom-up unification is important.
Here is the call we're going to trace:
Let's get to it.
```
```text
=== solve (f (@Fo {}) (@Go {})) ===
typeof f
~ Fo, Go -'t1-> 't2
@ -497,8 +497,8 @@ Lets get to it.
uls_Go = { [[] + Go:g:2] }
step 2:
uls_Go = { [[] + Go:g:2] } (sorted)
step_3:
1. iteration: [[] + Go:g:2]
step_3:
1. iteration: [[] + Go:g:2]
{} -[[]]-> {} (t_f1 after removing Go:g:2)
~ {} -[[lamG]]-> {}
= {} -[[lamG]]-> {}
@ -512,7 +512,7 @@ Look at that! Resolved the capture, and all the lambdas.
Notice that in the first `<specialization time>` trace, had we not sorted the `Fo:f:_` specialization lambdas in descending order of region, we would have resolved `Fo:f:3` last, and not bound the specialized `[[] + b':g:2]` to any `b'` variable. Intuitively, that's because the variable we need to bind it to occurs in the most ambient function type of all those specialization lambdas: the one at `[[] + Fo:f:1]`.
## An important requirement
### An important requirement
There is one invariant I have left implicit in this construction, that may not hold in general. (Maybe I left others that you noticed that don't hold - let me know!) That invariant is that any type variable in a signature is bound on either the left or right-hand side of an arrow.
@ -547,7 +547,7 @@ How do we make this a type error? A couple options have been considered, but we
1. One approach, suggested by Richard, is to sort abilities into strongly-connected components and see if there is any zig-zag chain of member signatures in an SCC where an ability-bound type variable doesn't escape through the front or back. We can observe two things: (1) such SCCs can only exist within a single module because Roc doesn't have (source-level) circular dependencies and (2) we only need to examine pairs of functions that have at least one type variable only appearing on one side of an arrow. That means the worst-case performance of this analysis is quadratic in the number of ability members in a module. The downside of this approach is that it would reject some uses of abilities that can be resolved and code-generated by the compiler.
2. Another approach is to check whether generalized variables in a let-bound definition's body escaped out the front or back of the let-generalized definition's type (and **not** in a lambda set, for the reasons described above). This admits some programs that would be illegal with the other analysis but can't be performed until typechecking. As for performance, note that new unbound type variables in a body can only be introduced by using a let-generalized symbol that is polymorphic. Those variables would need to be checked, so the performance of this approach on a per-module basis is linear in the number of let-generalized symbols used in the module (assuming the number of generalized variables returned is a constant factor).
## A Property thats lost, and how we can hold on to it
### A Property thats lost, and how we can hold on to it
One question I asked myself was, does this still ensure lambda sets can vary over multiple able type parameters? At first, I believed the answer was yes — however, this may not hold and be sound. For example, consider
@ -573,13 +573,13 @@ f = \flag, a, b, c ->
The first branch has type (`a` has generalized type `a'`)
```
```text
c'' -[[] + a':j:2]-> {}
```
The second branch has type (`b` has generalized type `b'`)
```
```text
c''' -[[] + b':j:2]-> {}
```
@ -587,7 +587,7 @@ So now, how do we unify this? Well, following the construction above, we must un
Well, one idea is that during normal type unification, we simply take the union of unspecialized lambda sets with **disjoint** variables. In the case above, we would get `c' -[[] + a':j:2 + b':j:2]` (supposing `c` has type `c'`). During lambda set compaction, when we unify ambient types, choose one non-concrete type to unify with. Since we're maintaining the invariant that each generalized type variable appears at least once on one side of an arrow, eventually you will have picked up all type variables in unspecialized lambda sets.
```
```text
=== monomorphize (f A (@C {}) (@D {}) (@E {})) ===
(inside f, solving `it`:)
@ -633,7 +633,7 @@ it : E -[[lamE]]-> {}
The disjointness is important - we want to unify unspecialized lambdas whose type variables are equivalent. For example,
```
```coffee
f = \flag, a, c ->
it = when flag is
A -> j a
@ -643,27 +643,27 @@ f = \flag, a, c ->
Should produce `it` having generalized type
```
```text
c' -[[] + a':j:2]-> {}
```
and not
```
```text
c' -[[] + a':j:2 + a':j:2]-> {}
```
For now, we will not try to preserve this property, and instead unify all type variables with the same member/region in a lambda set. We can improve the status of this over time.
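As a rough illustration of this fallback (a hypothetical Rust sketch; the `Uls` type, its field layout, and `merge_uls` are invented for this note and are not the compiler's actual representation), merging two unspecialized lambda sets would keep one entry per member/region and conceptually unify the variables of the entries it collapses:
```rust
use std::collections::BTreeMap;

// Hypothetical stand-in for one unspecialized lambda `var:member:region`.
#[derive(Clone, Copy)]
struct Uls {
    var: u32,    // type variable the lambda is waiting on (e.g. `a'`)
    member: u32, // ability member symbol (e.g. `j`)
    region: u8,  // region within the member's signature (e.g. `2`)
}

// Union two unspecialized lambda sets, keeping one entry per (member, region).
// The type variables of collapsed entries would be unified with each other, so
// `a':j:2 + a':j:2` (and, under this fallback, even `a':j:2 + b':j:2`) becomes a single entry.
fn merge_uls(left: &[Uls], right: &[Uls]) -> Vec<Uls> {
    let mut merged: BTreeMap<(u32, u8), Uls> = BTreeMap::new();
    for uls in left.iter().chain(right.iter()) {
        merged.entry((uls.member, uls.region)).or_insert(*uls);
    }
    merged.into_values().collect()
}
```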
# Conclusion
## Conclusion
Will this work? I think so, but I don't know. In the sense that I am sure it will work for some of the problems we are dealing with today, but there may be even more interactions that aren't clear to us until further down the road.
Obviously, this is not a rigorous study of this problem. We are making several assumptions, and I have not proved any of the properties I claim. However, the intuition makes sense to me, predicated on the “type variables escape either the front or back of a type” invariant, and this is the only approach that really makes sense to me while only being a little bit complicated. Let me know what you think.
# Appendix
## Appendix
## Optimization: only the lowest-region ambient function type is needed
### Optimization: only the lowest-region ambient function type is needed
You may have observed that step 1 and step 2 of the algorithm are somewhat overkill - really, it seems you only need the lowest-numbered region's directly ambient function type to unify the specialization with. That's because, by the region invariant, the lowest region's ambient function would contain every other region's ambient function.
@ -675,7 +675,7 @@ Type = \region ->
| Type_function region
Type_function = \region ->
let left_type = Type (region * 2)
let left_type = Type (region * 2)
let right_type = Type (region * 2 + 1)
let func_type = left_type -[Lambda region]-> right_type
func_type
@ -683,7 +683,7 @@ Type_function = \region ->
Which produces a tree like
```
```text
-[L 1]->
-[L 2]-> -[L 3]->
-[L 4]-> -[L 5]-> -[L 6]-> -[L 7]->
@ -694,6 +694,6 @@ Now, given a set of `uls` sorted in increasing order of region, you can remove a
Then, when running the algorithm, you must remove unspecialized lambdas of the form `C:f:_` from **all** lambda sets nested inside the directly ambient function, not just from the directly ambient function's own lambda set. This will still be cheaper than unifying deeper lambda sets, but may be an inconvenience.
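To make the containment relation concrete: with the numbering above, a function at region `r` has its argument and return types at regions `2r` and `2r + 1`, so one region's ambient function contains another's exactly when the former is an ancestor of the latter in this binary tree. A small Rust sketch of that check (illustrative only, not compiler code):
```rust
// Does the ambient function at region `outer` contain the one at region `inner`,
// given the binary numbering above (region r's argument/return sit at 2r and 2r + 1)?
fn contains(outer: u64, inner: u64) -> bool {
    let mut r = inner;
    while r > outer {
        r /= 2; // step up to the parent region
    }
    r == outer
}

fn main() {
    assert!(contains(1, 7)); // region 1 is the root, so it contains every region
    assert!(contains(3, 13)); // 13 -> 6 -> 3
    assert!(!contains(2, 3)); // regions 2 and 3 are siblings
}
```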
## Testing Strategies
### Testing Strategies
- Quickcheck - the shape of functions we care about is quite clearly defined. Basically just create a bunch of let-bound functions, polymorphic over able variables, use them in an expression that evaluates monomorphically, and check that everything in the monomorphic expression is resolved.

View File

@ -17,8 +17,8 @@ On a 64-bit system, this `struct` would take up 16B in memory. On a 32-bit syste
Here's what the fields mean:
* `pointer` is the memory address of the heap-allocated memory containing the `Bool` elements. For an empty list, the pointer is null (that is, 0).
* `length` is the number of `Bool` elements in the list. For an empty list, this is also 0.
- `pointer` is the memory address of the heap-allocated memory containing the `Bool` elements. For an empty list, the pointer is null (that is, 0).
- `length` is the number of `Bool` elements in the list. For an empty list, this is also 0.
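As a rough sketch of that layout (hypothetical Rust; the field names are illustrative and this is not the compiler's actual definition), on a 64-bit target the struct looks like:
```rust
// Hypothetical sketch of the 16B list struct described above (64-bit target).
#[repr(C)]
struct RocList {
    pointer: *const u8, // address of the heap-allocated elements; null for an empty list
    length: usize,      // number of elements; 0 for an empty list
}
```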
## Nonempty list
@ -28,7 +28,7 @@ First we'd have the `struct` above, with both `length` and `capacity` set to 2.
Here's how that heap memory would be laid out on a 64-bit system. It's a total of 48 bytes.
```
```text
|------16B------|------16B------|---8B---|---8B---|
string #1 string #2 refcount unused
```
@ -96,7 +96,7 @@ Some lists may end up beginning with excess capacity due to memory alignment req
This means the list `[True, True, False]` would have a memory layout like this:
```
```text
|--------------8B--------------|--1B--|--1B--|--1B--|-----5B-----|
either refcount or capacity bool1 bool2 bool3 unused
```
@ -115,8 +115,8 @@ We use a very simple system to distinguish the two: if the high bit in that `usi
This has a couple of implications:
* `capacity` actually has a maximum of `isize::MAX`, not `usize::MAX` - because if its high bit flips to 1, then now suddenly it's considered a refcount by the host. As it happens, capacity's maximum value is naturally `isize::MAX` anyway, so there's no downside here.
* `refcount` actually begins at `isize::MIN` and increments towards 0, rather than beginning at 0 and incrementing towards a larger number. When a decrement instruction is executed and the refcount is `isize::MIN`, it gets freed instead. Since all refcounts do is count up and down, and they only ever compare the refcount to a fixed constant, there's no real performance cost to comparing to `isize::MIN` instead of to 0. So this has no significant downside either.
- `capacity` actually has a maximum of `isize::MAX`, not `usize::MAX` - because if its high bit flips to 1, then now suddenly it's considered a refcount by the host. As it happens, capacity's maximum value is naturally `isize::MAX` anyway, so there's no downside here.
- `refcount` actually begins at `isize::MIN` and increments towards 0, rather than beginning at 0 and incrementing towards a larger number. When a decrement instruction is executed and the refcount is `isize::MIN`, it gets freed instead. Since all refcounts do is count up and down, and they only ever compare the refcount to a fixed constant, there's no real performance cost to comparing to `isize::MIN` instead of to 0. So this has no significant downside either.
Using this representation, hosts can trivially distinguish any list they receive as being either refcounted or having a capacity value, without any runtime cost in either the refcounted case or the capacity case.
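As a hedged sketch of that discrimination (hypothetical Rust, not the actual host-side API; the names are invented), a host could classify the `usize` stored in front of the elements like this:
```rust
// Classify the usize stored on the heap in front of the elements as either a
// refcount or a capacity, using its high bit as described above.
enum HeapWord {
    Refcount(isize), // starts at isize::MIN and counts up toward 0; a decrement at isize::MIN frees the allocation
    Capacity(usize), // at most isize::MAX, so its high bit is always 0
}

fn classify(word: usize) -> HeapWord {
    // High bit set <=> negative when reinterpreted as isize <=> it's a refcount.
    if (word as isize) < 0 {
        HeapWord::Refcount(word as isize)
    } else {
        HeapWord::Capacity(word)
    }
}
```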
@ -146,14 +146,14 @@ Whenever a list grows, it will grow in-place if it's Unique and there is enough
Strings have several things in common with lists:
* They are a `2 * usize` struct, sometimes with a non-null pointer to some heap memory
* They have a length and a capacity, and they can grow in basically the same way
* They are reference counted in basically the same way
- They are a `2 * usize` struct, sometimes with a non-null pointer to some heap memory
- They have a length and a capacity, and they can grow in basically the same way
- They are reference counted in basically the same way
However, they also have two things going on that lists do not:
* The Small String Optimization
* Literals stored in read-only memory
- The Small String Optimization
- Literals stored in read-only memory
## The Small String Optimization
@ -181,7 +181,7 @@ Using this strategy with a [conditional move instruction](https://stackoverflow.
Thus, the layout of a small string on a 64-bit big-endian architecture would be:
```
```text
|-----------usize length field----------|-----------usize pointer field---------|
|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|
len 'R' 'i' 'c' 'h' 'a' 'r' 'd' ' ' 'F' 'e' 'l' 'd' 'm' 'a' 'n'
@ -196,7 +196,7 @@ that would make an `isize` either negative or positive) will actually be the `us
That means we'd have to swap the order of the struct's length and pointer fields. Here's how the string `"Roc string"` would be stored on a little-endian system:
```
```text
|-----------usize pointer field---------|-----------usize length field----------|
|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|-1B-|
'R' 'o' 'c' ' ' 's' 't' 'r' 'i' 'n' 'g' 0 0 0 0 0 len

View File

@ -11,13 +11,13 @@ test-gen-wasm = "test -p test_gen --no-default-features --features gen-wasm"
So we can run:
```
```sh
cargo test-gen-llvm
```
to run the gen tests with the LLVM backend. To filter tests, append a filter like so:
```
```sh
> cargo test-gen-wasm wasm_str::small
Finished test [unoptimized + debuginfo] target(s) in 0.13s
Running src/tests.rs (target/debug/deps/test_gen-b4ad63a9dd50f050)

View File

@ -1,6 +1,4 @@
## :construction: Work In Progress :construction:
# :construction: Work In Progress :construction:
The editor is a work in progress; only a limited subset of Roc expressions is currently supported.
@ -11,7 +9,7 @@ Unlike most editors, we use projectional or structural editing to edit the [Abst
- Install the compiler, see [here](../BUILDING_FROM_SOURCE.md).
- Run the following from the roc folder:
```
```sh
cargo run edit
```
@ -28,6 +26,7 @@ Make sure to [create an issue](https://github.com/roc-lang/roc/issues/new/choose
## Inspiration
We thank the following open source projects in particular for inspiring us when designing the Roc editor:
- [learn-wgpu](https://github.com/sotrh/learn-wgpu)
- [rgx](https://github.com/cloudhead/rgx)
- [elm-editor](https://github.com/jxxcarlson/elm-editor)
@ -40,22 +39,23 @@ This debug view shows important data structures that can be found in `editor/src
Add or delete some code to see how these data structures are updated.
From roc to render:
- `./roc edit` or `cargo run edit` is first handled in `cli/src/main.rs`, from there the editor's launch function is called (`editor/src/editor/main.rs`).
- In `run_event_loop` we initialize the winit window, wgpu, the editor's model (`EdModel`), and start the rendering loop.
- The `ed_model` is filled in part with data obtained by loading and typechecking the roc file with the same function (`load_and_typecheck`) that is used by the compiler.
- `ed_model` also contains an `EdModule`, which holds the parsed abstract syntax tree (AST).
- In the `init_model` function:
+ The AST is converted into a tree of `MarkupNode`. The different types of `MarkupNode` are similar to the elements/nodes in HTML. A line of roc code is represented as a nested `MarkupNode` containing mostly text `MarkupNode`s. The line `foo = "bar"` is represented as
- The AST is converted into a tree of `MarkupNode`. The different types of `MarkupNode` are similar to the elements/nodes in HTML. A line of roc code is represented as a nested `MarkupNode` containing mostly text `MarkupNode`s. The line `foo = "bar"` is represented as
three text `MarkupNode`s, representing `foo`, ` = ` and `bar`. Multiple lines of roc code are represented as nested `MarkupNode`s that contain other nested `MarkupNode`s.
+ `CodeLines` holds a `Vec` of `String`, each line of code is a `String`. When saving the file, the content of `CodeLines` is written to disk.
+ `GridNodeMap` maps every position of a char of roc code to a `MarkNodeId`, for easy interaction with the caret.
- `CodeLines` holds a `Vec` of `String`; each line of code is a `String`. When saving the file, the content of `CodeLines` is written to disk.
- `GridNodeMap` maps every position of a char of roc code to a `MarkNodeId`, for easy interaction with the caret (see the sketch after this list).
- Back in `editor/src/editor/main.rs` we convert the `EdModel` to `RenderedWgpu` by calling `model_to_wgpu`.
- The `RenderedWgpu` is passed to the `glyph_brush` to draw the characters (glyphs) on the screen.
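As a simplified sketch of the structures named above (hypothetical Rust; the real definitions in the `code_markup`, `ast`, and editor crates carry much more information):
```rust
type MarkNodeId = usize;

// A line of roc code becomes a nested node whose children are mostly text leaves.
enum MarkupNode {
    Nested { children: Vec<MarkNodeId> },
    Text { content: String },
}

// One String per line of code; written to disk when the file is saved.
struct CodeLines {
    lines: Vec<String>,
}

// Maps every (line, column) position of a character to the markup node it came
// from, so the caret can be linked back to the markup tree.
struct GridNodeMap {
    lines: Vec<Vec<MarkNodeId>>,
}
```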
### Important files
To understand how the editor works it is useful to know the most important files:
- editor/src/editor/main.rs
- editor/src/editor/mvc/ed_update.rs
- editor/src/editor/mvc/ed_model.rs
@ -64,6 +64,7 @@ To understand how the editor works it is useful to know the most important files
- editor/src/editor/render_debug.rs
Important folders/files outside the editor folder:
- code_markup/src/markup/convert
- code_markup/src/markup/nodes.rs
- ast/src/lang/core/def
@ -71,7 +72,6 @@ Important folders/files outside the editor folder:
- ast/src/lang/core/ast.rs
- ast/src/lang/env.rs
## Contributing
We welcome new contributors :heart: and are happy to help you get started.

View File

@ -1,7 +1,7 @@
(For background, [this talk](https://youtu.be/ZnYa99QoznE?t=4790) has an overview of the design goals for the editor.)
# Editor Ideas
(For background, [this talk](https://youtu.be/ZnYa99QoznE?t=4790) has an overview of the design goals for the editor.)
Here are some ideas and interesting resources for the editor. Feel free to make a PR to add more!
## Sources of Potential Inspiration
@ -14,48 +14,49 @@ Nice collection of research on innovative editors, [link](https://futureofcoding
(Or possibly module-specific integrations, type-specific integrations, etc.)
* [What FP can learn from Smalltalk](https://youtu.be/baxtyeFVn3w) by [Aditya Siram](https://github.com/deech)
* [Moldable development](https://youtu.be/Pot9GnHFOVU) by [Tudor Gîrba](https://github.com/girba)
* [Unity game engine](https://unity.com/)
* Scripts can expose values as text inputs, sliders, checkboxes, etc or even generate custom graphical inputs
* Drag-n-drop game objects and component into script interfaces
* [How to Visualize Data Structures in VS Code](https://addyosmani.com/blog/visualize-data-structures-vscode/)
- [What FP can learn from Smalltalk](https://youtu.be/baxtyeFVn3w) by [Aditya Siram](https://github.com/deech)
- [Moldable development](https://youtu.be/Pot9GnHFOVU) by [Tudor Gîrba](https://github.com/girba)
- [Unity game engine](https://unity.com/)
- Scripts can expose values as text inputs, sliders, checkboxes, etc or even generate custom graphical inputs
- Drag-n-drop game objects and components into script interfaces
- [How to Visualize Data Structures in VS Code](https://addyosmani.com/blog/visualize-data-structures-vscode/)
### Live Interactivity
* [Up and Down the Ladder of Abstraction](http://worrydream.com/LadderOfAbstraction/) by [Bret Victor](http://worrydream.com/)
* [7 Bret Victor talks](https://www.youtube.com/watch?v=PUv66718DII&list=PLS4RYH2XfpAmswi1WDU6lwwggruEZrlPH)
* [Against the Current](https://youtu.be/WT2CMS0MxJ0) by [Chris Granger](https://github.com/ibdknox/)
* [Sketch-n-Sketch: Interactive SVG Programming with Direct Manipulation](https://youtu.be/YuGVC8VqXz0) by [Ravi Chugh](http://people.cs.uchicago.edu/~rchugh/)
* [Xi](https://xi-editor.io/) modern text editor with concurrent editing (related to [Druid](https://github.com/linebender/druid))
* [Self](https://selflanguage.org/) programming language
* [Primitive](https://primitive.io/) code exploration in Virtual Reality
* [Luna](https://www.luna-lang.org/) language for interactive data processing and visualization
* [Hazel Livelits](https://hazel.org/papers/livelits-paper.pdf) interactive plugins, see GIF's [here](https://twitter.com/disconcision/status/1408155781120376833).
* [Thorough review](https://drossbucket.com/2021/06/30/hacker-news-folk-wisdom-on-visual-programming/) of pros and cons of text versus visual programming.
- [Up and Down the Ladder of Abstraction](http://worrydream.com/LadderOfAbstraction/) by [Bret Victor](http://worrydream.com/)
- [7 Bret Victor talks](https://www.youtube.com/watch?v=PUv66718DII&list=PLS4RYH2XfpAmswi1WDU6lwwggruEZrlPH)
- [Against the Current](https://youtu.be/WT2CMS0MxJ0) by [Chris Granger](https://github.com/ibdknox/)
- [Sketch-n-Sketch: Interactive SVG Programming with Direct Manipulation](https://youtu.be/YuGVC8VqXz0) by [Ravi Chugh](http://people.cs.uchicago.edu/~rchugh/)
- [Xi](https://xi-editor.io/) modern text editor with concurrent editing (related to [Druid](https://github.com/linebender/druid))
- [Self](https://selflanguage.org/) programming language
- [Primitive](https://primitive.io/) code exploration in Virtual Reality
- [Luna](https://www.luna-lang.org/) language for interactive data processing and visualization
- [Hazel Livelits](https://hazel.org/papers/livelits-paper.pdf) interactive plugins, see GIFs [here](https://twitter.com/disconcision/status/1408155781120376833).
- [Thorough review](https://drossbucket.com/2021/06/30/hacker-news-folk-wisdom-on-visual-programming/) of pros and cons of text versus visual programming.
### Good error messages
* [https://twitter.com/firstdrafthell/status/1427364851593224197/photo/1] very clean error message layout
* If the user explicitly allows it, we can keep record of which errors take a long time to fix. This way we know where to focus our efforts for improving error messages.
- [https://twitter.com/firstdrafthell/status/1427364851593224197/photo/1] very clean error message layout
- If the user explicitly allows it, we can keep a record of which errors take a long time to fix. This way we know where to focus our efforts for improving error messages.
### Debugging
* [VS code debug visualization](https://marketplace.visualstudio.com/items?itemName=hediet.debug-visualizer)
* [Algorithm visualization for javascript](https://algorithm-visualizer.org)
* [godbolt.org Compiler Explorer](https://godbolt.org/)
* [whitebox debug visualization](https://vimeo.com/483795097)
* [Hest](https://ivanish.ca/hest-time-travel/) tool for making highly interactive simulations.
* [replit](https://replit.com/) collaborative browser based IDE.
* [paper](https://openreview.net/pdf?id=SJeqs6EFvB) on finding and fixing bugs automatically.
* [specialized editors that can be embedded in main editor](https://elliot.website/editor/)
* Say you have a failing test that used to work, it would be very valuable to see all code that was changed that was used only by that test.
- [VS code debug visualization](https://marketplace.visualstudio.com/items?itemName=hediet.debug-visualizer)
- [Algorithm visualization for javascript](https://algorithm-visualizer.org)
- [godbolt.org Compiler Explorer](https://godbolt.org/)
- [whitebox debug visualization](https://vimeo.com/483795097)
- [Hest](https://ivanish.ca/hest-time-travel/) tool for making highly interactive simulations.
- [replit](https://replit.com/) collaborative browser based IDE.
- [paper](https://openreview.net/pdf?id=SJeqs6EFvB) on finding and fixing bugs automatically.
- [specialized editors that can be embedded in main editor](https://elliot.website/editor/)
- Say you have a failing test that used to work, it would be very valuable to see all code that was changed that was used only by that test.
e.g. you have a test `calculate_sum_test` that only uses the function `add`; when the test fails you should be able to see a diff showing only what changed for the function `add`. It would also be great to have a diff of [expression values](https://homepages.cwi.nl/~storm/livelit/images/bret.png) Bret Victor style. An ambitious project would be to suggest or automatically try fixes based on these diffs.
* I think it could be possible to create a minimal reproduction of a program / block of code / code used by a single test. So for a failing unit test I would expect it to extract imports, the platform, types and functions that are necessary to run only that unit test and put them in a standalone roc project. This would be useful for sharing bugs with library+application authors and colleagues, for profiling or debugging with all "clutter" removed.
* Ability to share program state at a breakpoint with someone else.
* For debugging we should aim for maximal useful observability. For example Rust's enum values can not be easily viewed in the CodeLLDB debugger, you actually need to call a print method that does pattern matching to be able to view useful information.
* We previuously discussed recording full traces of programs so they do not have to be re-run multiple times in the debugging process. We should encourage roc developers to experiment with creating debugging representations of this AST+"execution trace", it could lead to some cool stuff.
* We previuously mentioned showing expression values next to the code. I think when debugging it would be valuable to focus more on these valuas/data. A possible way to do this would be to create scrollable view(without need to jump between files) of inputs and outputs of user defined functions. Clicking on a function could then show the code with the expression values side by side. Having a good overview of how the values change could make it easy to find where exactly things go wrong.
- I think it could be possible to create a minimal reproduction of a program / block of code / code used by a single test. So for a failing unit test I would expect it to extract imports, the platform, types and functions that are necessary to run only that unit test and put them in a standalone roc project. This would be useful for sharing bugs with library+application authors and colleagues, for profiling or debugging with all "clutter" removed.
- Ability to share program state at a breakpoint with someone else.
- For debugging we should aim for maximal useful observability. For example, Rust's enum values cannot be easily viewed in the CodeLLDB debugger; you actually need to call a print method that does pattern matching to be able to view useful information.
- We previously discussed recording full traces of programs so they do not have to be re-run multiple times in the debugging process. We should encourage roc developers to experiment with creating debugging representations of this AST+"execution trace"; it could lead to some cool stuff.
- We previously mentioned showing expression values next to the code. I think when debugging it would be valuable to focus more on these values/data. A possible way to do this would be to create a scrollable view (without the need to jump between files) of inputs and outputs of user-defined functions. Clicking on a function could then show the code with the expression values side by side. Having a good overview of how the values change could make it easy to find where exactly things go wrong.
- (Machine learning) algorithms to extract and show useful information from debug values.
- Ability to mark e.g. a specific record field for tracking (filter out the noise) that is being repeatedly updated throughout the program.
- Ability to collapse/fold debug output coming from a specific line.
@ -65,255 +66,253 @@ e.g. you have a test `calculate_sum_test` that only uses the function `add`, whe
- VR debugging: render massive curved screen with rectangle showing code (and expression values) for every function in call stack.
### Testing
- [Wallaby.js](https://wallabyjs.com/) could serve as inspiration for live gutters showing tested / untested / passing / failing code based on tests, combined with time travel debugging (inline runtime values / inline error reports / inline code coverage); could be useful for debugging as well
### Cool regular editors
* [Helix](https://github.com/helix-editor/helix) modal (terminal, for now) editor in rust. Good UX.
* [Kakoune](https://kakoune.org/) editor with advanced text selection and manipulation features.
- [Helix](https://github.com/helix-editor/helix) modal (terminal, for now) editor in rust. Good UX.
- [Kakoune](https://kakoune.org/) editor with advanced text selection and manipulation features.
### Structured Editing
* [Greenfoot](https://www.youtube.com/watch?v=uUVA7nTh0XY)
* [Deuce](http://ravichugh.github.io/sketch-n-sketch/) (videos on the right) by [Ravi Chugh](http://people.cs.uchicago.edu/~rchugh/) and others
* [Fructure: A Structured Editing Engine in Racket](https://youtu.be/CnbVCNIh1NA) by Andrew Blinn
* [Hazel: A Live FP Environment with Typed Holes](https://youtu.be/UkDSL0U9ndQ) by [Cyrus Omar](https://web.eecs.umich.edu/~comar/)
* [Dark Demo](https://youtu.be/QgimI2SnpTQ) by [Ellen Chisa](https://twitter.com/ellenchisa)
* [Introduction to JetBrains MPS](https://youtu.be/JoyzxjgVlQw) by [Kolja Dummann](https://www.youtube.com/channel/UCq_mWDvKdXYJJzBmXkci17w)
* [Eve](http://witheve.com/)
* code editor as prose writer
* live preview
* possible inspiration for live interactivity as well
* [Unreal Engine 4](https://www.unrealengine.com/en-US/)
* [Blueprints](https://docs.unrealengine.com/en-US/Engine/Blueprints/index.html) visual scripting (not suggesting visual scripting for Roc)
- [Greenfoot](https://www.youtube.com/watch?v=uUVA7nTh0XY)
- [Deuce](http://ravichugh.github.io/sketch-n-sketch/) (videos on the right) by [Ravi Chugh](http://people.cs.uchicago.edu/~rchugh/) and others
- [Fructure: A Structured Editing Engine in Racket](https://youtu.be/CnbVCNIh1NA) by Andrew Blinn
- [Hazel: A Live FP Environment with Typed Holes](https://youtu.be/UkDSL0U9ndQ) by [Cyrus Omar](https://web.eecs.umich.edu/~comar/)
- [Dark Demo](https://youtu.be/QgimI2SnpTQ) by [Ellen Chisa](https://twitter.com/ellenchisa)
- [Introduction to JetBrains MPS](https://youtu.be/JoyzxjgVlQw) by [Kolja Dummann](https://www.youtube.com/channel/UCq_mWDvKdXYJJzBmXkci17w)
- [Eve](http://witheve.com/)
- code editor as prose writer
- live preview
- possible inspiration for live interactivity as well
- [Unreal Engine 4](https://www.unrealengine.com/en-US/)
- [Blueprints](https://docs.unrealengine.com/en-US/Engine/Blueprints/index.html) visual scripting (not suggesting visual scripting for Roc)
* [Live Programing](https://www.microsoft.com/en-us/research/project/live-programming/?from=http%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fprojects%2Fliveprogramming%2Ftypography.aspx#!publications) by [Microsoft Research] it contains many interesting research papers.
* [Math Inspector](https://mathinspector.com/), [github](https://github.com/MathInspector/MathInspector)
* [Lamdu](http://www.lamdu.org/) live functional programming.
* [Sourcetrail](https://www.sourcetrail.com/) nice tree-like source explorer.
* [Unisonweb](https://www.unisonweb.org), definition based [editor](https://twitter.com/shojberg/status/1364666092598288385) as opposed to file based.
* [Utopia](https://utopia.app/) integrated design and development environment for React. Design and code update each other, in real time.
* [Paredit](https://calva.io/paredit/) structural clojure editing, navigation and selection. [Another overview](http://danmidwood.com/content/2014/11/21/animated-paredit.html)
- [Live Programming](https://www.microsoft.com/en-us/research/project/live-programming/?from=http%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fprojects%2Fliveprogramming%2Ftypography.aspx#!publications) by [Microsoft Research]; it contains many interesting research papers.
- [Math Inspector](https://mathinspector.com/), [github](https://github.com/MathInspector/MathInspector)
- [Lamdu](http://www.lamdu.org/) live functional programming.
- [Sourcetrail](https://www.sourcetrail.com/) nice tree-like source explorer.
- [Unisonweb](https://www.unisonweb.org), definition based [editor](https://twitter.com/shojberg/status/1364666092598288385) as opposed to file based.
- [Utopia](https://utopia.app/) integrated design and development environment for React. Design and code update each other, in real time.
- [Paredit](https://calva.io/paredit/) structural clojure editing, navigation and selection. [Another overview](http://danmidwood.com/content/2014/11/21/animated-paredit.html)
### Project exploration
* Tree view or circle view (like Github Next) of project where exposed values and functions can be seen on hover.
- Tree view or circle view (like Github Next) of project where exposed values and functions can be seen on hover.
#### Inspiration
* [Github Next](https://next.github.com/projects/repo-visualization) each file and folder is visualised as a circle: the circles color is the type of file, and the circles size represents the size of the file. Sidenote, a cool addition to this might be to use heatmap colors for the circles; circles for files that have had lots of commits could be more red, files with few commits would be blue.
* [AppMap](https://appland.com/docs/appmap-overview.html) records code execution traces, collecting information about how your code works and what it does. Then it presents this information as interactive diagrams that you can search and navigate. In the diagrams, you can see exactly how functions, web services, data stores, security, I/O, and dependent services all work together when application code runs.
- [Github Next](https://next.github.com/projects/repo-visualization) each file and folder is visualised as a circle: the circle's color is the type of file, and the circle's size represents the size of the file. Sidenote, a cool addition to this might be to use heatmap colors for the circles; circles for files that have had lots of commits could be more red, files with few commits would be blue.
- [AppMap](https://appland.com/docs/appmap-overview.html) records code execution traces, collecting information about how your code works and what it does. Then it presents this information as interactive diagrams that you can search and navigate. In the diagrams, you can see exactly how functions, web services, data stores, security, I/O, and dependent services all work together when application code runs.
### Voice Interaction Related
* We should label as many things as possible and expose jumps to those labels as shortkeys.
* Update without user interaction. e.g. autosave.
* Could be efficient way to communicate with smart assistant.
* You don't have to remember complex keyboard shortcuts if you can describe actions to execute them. Examples:
* Add latest datetime package to dependencies.
* Generate unit test for this function.
* Show edit history for this function.
* Adjusting settings: switch to light theme, increase font size...
* Use (context specific) voice command state machine to assist Machine Learning voice recognition model.
* Nice special use case: using voice to code while on treadmill desk.
* Use word embeddings to find most similar voice command to recorded input in vector space.
- We should label as many things as possible and expose jumps to those labels as shortkeys.
- Update without user interaction. e.g. autosave.
- Could be efficient way to communicate with smart assistant.
- You don't have to remember complex keyboard shortcuts if you can describe actions to execute them. Examples:
- Add latest datetime package to dependencies.
- Generate unit test for this function.
- Show edit history for this function.
- Adjusting settings: switch to light theme, increase font size...
- Use (context specific) voice command state machine to assist Machine Learning voice recognition model.
- Nice special use case: using voice to code while on treadmill desk.
- Use word embeddings to find most similar voice command to recorded input in vector space.
#### Useful voice commands
* clear all breakpoints
* increase/decrease font size
* switch to dark/light/high-contrast mode
* open/go to file "Main"(fuzzy matching)
* go to function "foo"
* go to definition
* show all references(uses) of this function/type/...
* show history timeline of this function/file
* show recent projects
* generate unit test for this function
* generate unit test for this function based on debug trace (input and output is recorded and used in test)
* who wrote this line (git blame integration)
* search documentation of library X for Foo
* show example of how to use library function Foo
* open google/github/duckduckgo search for error...
* show editor plugins for library X
* commands to control log filtering
* collaps all arms of when
* "complex" filtered search: search for all occurrences of `"#` but ignore all like `"#,`
* color this debug print orange
* remove unused imports
- clear all breakpoints
- increase/decrease font size
- switch to dark/light/high-contrast mode
- open/go to file "Main"(fuzzy matching)
- go to function "foo"
- go to definition
- show all references(uses) of this function/type/...
- show history timeline of this function/file
- show recent projects
- generate unit test for this function
- generate unit test for this function based on debug trace (input and output is recorded and used in test)
- who wrote this line (git blame integration)
- search documentation of library X for Foo
- show example of how to use library function Foo
- open google/github/duckduckgo search for error...
- show editor plugins for library X
- commands to control log filtering
- collapse all arms of when
- "complex" filtered search: search for all occurrences of `"#` but ignore all like `"#,`
- color this debug print orange
- remove unused imports
#### Inspiration
* Voice control and eye tracking with [Talon](https://github.com/Gauteab/talon-tree-sitter-service)
* [Seminar about programming by voice](https://www.youtube.com/watch?v=G8B71MbA9u4)
* [Talon voice commands in elm](https://github.com/Gauteab/talon-tree-sitter-service)
* Mozilla DeepSpeech model runs fast, works pretty well for actions but would need additional training for code input.
- Voice control and eye tracking with [Talon](https://github.com/Gauteab/talon-tree-sitter-service)
- [Seminar about programming by voice](https://www.youtube.com/watch?v=G8B71MbA9u4)
- [Talon voice commands in elm](https://github.com/Gauteab/talon-tree-sitter-service)
- Mozilla DeepSpeech model runs fast, works pretty well for actions but would need additional training for code input.
Possible to reuse [Mozilla common voice](https://github.com/common-voice/common-voice) for creating more "spoken code" data.
* [Voice Attack](https://voiceattack.com/) voice recognition for apps and games.
- [Voice Attack](https://voiceattack.com/) voice recognition for apps and games.
### Beginner-focused Features
* Show Roc cheat sheet on start-up.
* Plugin that translates short pieces of code from another programming language to Roc. [Relevant research](https://www.youtube.com/watch?v=xTzFJIknh7E). Someone who only knows the R language could get started with Roc with less friction if they could quickly define a list R style (`lst <- c(1,2,3)`) and get it translated to Roc.
* Being able to asses or ask the user for the amount of experience they have with Roc would be a valuable feature for recommending plugins, editor tips, recommending tutorials, automated error search (e.g searching common beginner errors first), ... .
* Adjust UI based on beginner/novice/expert?
- Show Roc cheat sheet on start-up.
- Plugin that translates short pieces of code from another programming language to Roc. [Relevant research](https://www.youtube.com/watch?v=xTzFJIknh7E). Someone who only knows the R language could get started with Roc with less friction if they could quickly define a list R style (`lst <- c(1,2,3)`) and get it translated to Roc.
- Being able to assess or ask the user for the amount of experience they have with Roc would be a valuable feature for recommending plugins, editor tips, recommending tutorials, automated error search (e.g. searching common beginner errors first), ... .
- Adjust UI based on beginner/novice/expert?
### Productivity features
* When refactoring;
- Cutting and pasting code to a new file should automatically add imports to the new file and delete them from the old file.
- Ability to link e.g. variable name in comments to actual variable name. Comment is automatically updated when variable name is changed.
- When updating dependencies with breaking changes; show similar diffs from github projects that have successfully updated that dependency.
- AST backed renaming, changing variable/function/type name should change it all over the codebase.
* Automatically create all "arms" when pattern matching after entering `when var is` based on the type.
- All `when ... is` should be updated if the type is changed, e.g. adding Indigo to the Color type should add an arm everywhere where `when color is` is used.
* When a function is called like `foo(false)`, the name of the boolean argument should be shown automatically; `foo(`*is_active:*`false)`. This should be done for booleans and numbers.
* Suggest automatically creating a function if the compiler says it does not exist.
* Integrated search:
* Searchbar for examples/docs. With permission search strings could be shared with the platform/package authors so they know exactly what their users are struggling with.
* Show productivity/feature tips on startup. Show link to page with all tips. Allow not seeing tips next time.
* Search friendly editor docs inside the editor. Offer to send search string to Roc maintainers when no results, or if no results were clicked.
* File history timeline view. Show timeline with commits that changed this file, the number of lines added and deleted as well as which user made the changes. Arrow navigation should allow you to quickly view different versions of the file.
* Suggested quick fixes should be directly visible and clickable. Not like in vs code where you put the caret on an error until a lightbulb appears in the margin which you have to click for the fixes to appear, after which you click to apply the fix you want :( . You should be able to apply suggestions in rapid succession. e.g. if you copy some roc code from the internet you should be able to apply 5 import suggestions quickly.
* Regex-like find and substitution based on plain english description and example (replacement). i.e. replace all `[` between double quotes with `{`. [Inspiration](https://alexmoltzau.medium.com/english-to-regex-thanks-to-gpt-3-13f03b68236e).
* Show productivity tips based on behavior. i.e. if the user is scrolling through the error bar and clicking on the next error several times, show a tip with "go to next error" shortcut.
* Command to "benchmark this function" or "benchmark this test" with flamegraph and execution time per line.
* Instead of going to definition and having to navigate back and forth between files, show an editable view inside the current file. See [this video](https://www.youtube.com/watch?v=EenznqbW5w8)
* When encountering an unexpected error in the user's program we show a button at the bottom to start an automated search on this error. The search would:
* look for similar errors in github issues of the relevant libraries
* search stackoverflow questions
* search a local history of previously encountered errors and fixes
* search through a database of our zullip questions
* ...
* smart insert: press a shortcut and enter a plain english description of a code snippet you need. Examples: "convert string to list of chars", "sort list of records by field foo descending", "plot this list with date on x-axis"...
* After the user has refactored code to be simpler, try finding other places in the code base where the same simplification can be made.
* Show most commonly changed settings on first run so new users can quickly customize their experience. Keeping record of changed settings should be opt-in.
* Detection of multiple people within same company/team working on same code at the same time (opt-in).
* Autocorrect likely typos for stuff like `-<` when not in string.
* If multiple functions are available for import, use function were types would match in insetion position.
* Recommend imports based on imports in other files in same project.
* Machine Learning model to determine confidence in a possiblte auto import. Automatically add the importt if confidence is very high.
* Ability to print logs in different color depending on which file they come from.
* Clicking on a log print should take you to the exact line of code that called the log function
* When detecting that the user is repeating a transformation such as replacing a string in a text manually, offer to do the replacement for all occurrences in this string/function/file/workspace.
* Auto remove unused imports? Perhaps save the removed imports on a scratchpad for easy re-enabling.
* It should be easy to toggle search and replace to apply to the whole project.
* Taking into account the eye position with eye tracking could make commands very powerful/accurate. e.g.: make `Num *` a `List (Num *)`, use eye position to determine which `Num *`.
* Feature to automatically minimize visibility(exposing values/functions/...) based on usage in tests. Suggested changes can be shown to the user for fine-grained control.
* Locally record file/function navigation behavior to offer suggestions where to navigate next. With user permission, this navigation behavior can be shared with their team so that e.g. new members get offered useful suggestions on navigating to the next relevant file.
* Intelligent search: "search this folder for <term>", "search all tests for <term>"
* Show some kind of warning if path str in code does not exist locally.
* repl on panic/error: ability to inspect all values and try executing some things at the location of the error.
* show values in memory on panic/error
* automatic clustering of (text) search results in groups by similarity
* fill screen with little windows of clustered search results
* clustering of examples similar to current code
* ability to easily screenshot a subwindow -> create static duplicate of subwindow
* Show references is a common editor feature, often I only want to see non-recursive references in the case of a recursive function.
* ability to add error you were stuck on but have now solved to error database, to help others in the future.
* For quick navigation and good overview: whole file should be shown as folded tree showing only top level defs. Hovering with mouse should allow you to show and traverse the branches, with a click to keep this view. See also ginkowriter.
* clicking on any output should take you to the place in the code where that output was printed and/or calculated.
* ability to edit printed output in such a way that the appropriate changes are made in the code that produced it. Example: edit json key in output-> code is changed to print this new key.
- When refactoring:
- Cutting and pasting code to a new file should automatically add imports to the new file and delete them from the old file.
- Ability to link e.g. variable name in comments to actual variable name. Comment is automatically updated when variable name is changed.
- When updating dependencies with breaking changes; show similar diffs from github projects that have successfully updated that dependency.
- AST backed renaming, changing variable/function/type name should change it all over the codebase.
- Automatically create all "arms" when pattern matching after entering `when var is` based on the type.
- All `when ... is` should be updated if the type is changed, e.g. adding Indigo to the Color type should add an arm everywhere where `when color is` is used.
- When a function is called like `foo(false)`, the name of the boolean argument should be shown automatically; `foo(`*is_active:*`false)`. This should be done for booleans and numbers.
- Suggest automatically creating a function if the compiler says it does not exist.
- Integrated search:
- Searchbar for examples/docs. With permission search strings could be shared with the platform/package authors so they know exactly what their users are struggling with.
- Show productivity/feature tips on startup. Show link to page with all tips. Allow not seeing tips next time.
- Search friendly editor docs inside the editor. Offer to send search string to Roc maintainers when no results, or if no results were clicked.
- File history timeline view. Show timeline with commits that changed this file, the number of lines added and deleted as well as which user made the changes. Arrow navigation should allow you to quickly view different versions of the file.
- Suggested quick fixes should be directly visible and clickable. Not like in vs code where you put the caret on an error until a lightbulb appears in the margin which you have to click for the fixes to appear, after which you click to apply the fix you want :( . You should be able to apply suggestions in rapid succession. e.g. if you copy some roc code from the internet you should be able to apply 5 import suggestions quickly.
- Regex-like find and substitution based on plain english description and example (replacement). i.e. replace all `[` between double quotes with `{`. [Inspiration](https://alexmoltzau.medium.com/english-to-regex-thanks-to-gpt-3-13f03b68236e).
- Show productivity tips based on behavior. i.e. if the user is scrolling through the error bar and clicking on the next error several times, show a tip with "go to next error" shortcut.
- Command to "benchmark this function" or "benchmark this test" with flamegraph and execution time per line.
- Instead of going to definition and having to navigate back and forth between files, show an editable view inside the current file. See [this video](https://www.youtube.com/watch?v=EenznqbW5w8)
- When encountering an unexpected error in the user's program we show a button at the bottom to start an automated search on this error. The search would:
- look for similar errors in github issues of the relevant libraries
- search stackoverflow questions
- search a local history of previously encountered errors and fixes
- search through a database of our Zulip questions
- ...
- smart insert: press a shortcut and enter a plain english description of a code snippet you need. Examples: "convert string to list of chars", "sort list of records by field foo descending", "plot this list with date on x-axis"...
- After the user has refactored code to be simpler, try finding other places in the code base where the same simplification can be made.
- Show most commonly changed settings on first run so new users can quickly customize their experience. Keeping record of changed settings should be opt-in.
- Detection of multiple people within same company/team working on same code at the same time (opt-in).
- Autocorrect likely typos for stuff like `-<` when not in string.
- If multiple functions are available for import, use the function whose types would match in the insertion position.
- Recommend imports based on imports in other files in same project.
- Machine Learning model to determine confidence in a possible auto import. Automatically add the import if confidence is very high.
- Ability to print logs in different color depending on which file they come from.
- Clicking on a log print should take you to the exact line of code that called the log function
- When detecting that the user is repeating a transformation such as replacing a string in a text manually, offer to do the replacement for all occurrences in this string/function/file/workspace.
- Auto remove unused imports? Perhaps save the removed imports on a scratchpad for easy re-enabling.
- It should be easy to toggle search and replace to apply to the whole project.
- Taking into account the eye position with eye tracking could make commands very powerful/accurate. e.g.: make `Num *` a `List (Num *)`, use eye position to determine which `Num *`.
- Feature to automatically minimize visibility (exposing values/functions/...) based on usage in tests. Suggested changes can be shown to the user for fine-grained control.
- Locally record file/function navigation behavior to offer suggestions where to navigate next. With user permission, this navigation behavior can be shared with their team so that e.g. new members get offered useful suggestions on navigating to the next relevant file.
- Intelligent search: "search this folder for <term>", "search all tests for <term>"
- Show some kind of warning if path str in code does not exist locally.
- repl on panic/error: ability to inspect all values and try executing some things at the location of the error.
- show values in memory on panic/error
- automatic clustering of (text) search results in groups by similarity
- fill screen with little windows of clustered search results
- clustering of examples similar to current code
- ability to easily screenshot a subwindow -> create static duplicate of subwindow
- Show references is a common editor feature, often I only want to see non-recursive references in the case of a recursive function.
- ability to add error you were stuck on but have now solved to error database, to help others in the future.
- For quick navigation and good overview: whole file should be shown as folded tree showing only top level defs. Hovering with mouse should allow you to show and traverse the branches, with a click to keep this view. See also ginkowriter.
- clicking on any output should take you to the place in the code where that output was printed and/or calculated.
- ability to edit printed output in such a way that the appropriate changes are made in the code that produced it. Example: edit json key in output-> code is changed to print this new key.
#### Autocomplete
- Use more space for autocomplete options:
* Multiple columns. Columns could have different sources, i.e. middle column suggests based on current folder, left column on whole project, right column on github.
* show cell with completion + auto import suggestion
- Multiple columns. Columns could have different sources, i.e. middle column suggests based on current folder, left column on whole project, right column on github.
- show cell with completion + auto import suggestion
- Webcam based eye tracking for quick selection.
- Machine Learning:
* GPT-3 can generate correct python functions based on a comment describing the functionality, video [here](https://www.youtube.com/watch?v=utuz7wBGjKM). It's possible that training a model using ast's may lead to better results than text based models.
- GPT-3 can generate correct python functions based on a comment describing the functionality, video [here](https://www.youtube.com/watch?v=utuz7wBGjKM). It's possible that training a model using ast's may lead to better results than text based models.
- Current autocomplete lacks flow, moving through suggestions with arrows is slow. Being able to code by weaving together autocomplete suggestions laid out in rows using eye tracking, that could flow.
- It's possible that with strong static types, pure functions and a good search algorithm we can develop a more reliable autocomplete than one with machine learning.
- When ranking autocomplete suggestions, take into account how new a function is. Newly created functions are likely to be used soon.
#### Productivity Inspiration
* [Kite](https://www.kite.com/) AI autocomplete and doc viewer.
* [Tabnine](https://www.tabnine.com/) AI autocomplete.
* [Codota](https://www.codota.com) AI autocomplete and example searching.
* [Github copilot](https://copilot.github.com/) AI autocomplete.
* [Aroma](https://ai.facebook.com/blog/aroma-ml-for-code-recommendation) showing examples similar to current code.
* [MISM](https://arxiv.org/abs/2006.05265) neural network based code similarity scoring.
* [Inquisitive code editor](https://web.eecs.utk.edu/~azh/blog/inquisitivecodeeditor.html) Interactive bug detection with doc+test generation.
* [NextJournal](https://nextjournal.com/joe-loco/command-bar?token=DpU6ewNQnLhYtVkwhs9GeX) Discoverable commands and shortcuts.
* [Code Ribbon](https://web.eecs.utk.edu/~azh/blog/coderibbon.html) fast navigation between files. Feature suggestion: top and down are filled with suggested files, whereas left and right are manually filled.
* [Automatic data transformation based on examples](https://youtu.be/Ej91F1fpmEw). Feature suggestion: use in combination with voice commands: e.g. "only keep time from list of datetimes".
* [Codesee](https://www.codesee.io/) code base visualization.
* [Loopy](https://dl.acm.org/doi/10.1145/3485530?sid=SCITRUS) interactive program synthesis.
* [bracket guides](https://mobile.twitter.com/elyktrix/status/1461380028609048576)
- [Kite](https://www.kite.com/) AI autocomplete and doc viewer.
- [Tabnine](https://www.tabnine.com/) AI autocomplete.
- [Codota](https://www.codota.com) AI autocomplete and example searching.
- [Github copilot](https://copilot.github.com/) AI autocomplete.
- [Aroma](https://ai.facebook.com/blog/aroma-ml-for-code-recommendation) showing examples similar to current code.
- [MISM](https://arxiv.org/abs/2006.05265) neural network based code similarity scoring.
- [Inquisitive code editor](https://web.eecs.utk.edu/~azh/blog/inquisitivecodeeditor.html) Interactive bug detection with doc+test generation.
- [NextJournal](https://nextjournal.com/joe-loco/command-bar?token=DpU6ewNQnLhYtVkwhs9GeX) Discoverable commands and shortcuts.
- [Code Ribbon](https://web.eecs.utk.edu/~azh/blog/coderibbon.html) fast navigation between files. Feature suggestion: top and down are filled with suggested files, whereas left and right are manually filled.
- [Automatic data transformation based on examples](https://youtu.be/Ej91F1fpmEw). Feature suggestion: use in combination with voice commands: e.g. "only keep time from list of datetimes".
- [Codesee](https://www.codesee.io/) code base visualization.
- [Loopy](https://dl.acm.org/doi/10.1145/3485530?sid=SCITRUS) interactive program synthesis.
- [bracket guides](https://mobile.twitter.com/elyktrix/status/1461380028609048576)
### Non-Code Related Inspiration
* [Scrivner](https://www.literatureandlatte.com/scrivener/overview) writing app for novelists, screenwriters, and more
* Word processors (Word, Google Docs, etc)
* Comments that are parallel to the text of the document.
* Comments can act as discussions and not just statements.
* Easy tooling around adding tables and other stylised text
* Excel and Google Sheets
* Not sure, maybe something they do well that we (code editors) could learn from
- [Scrivener](https://www.literatureandlatte.com/scrivener/overview) writing app for novelists, screenwriters, and more
- Word processors (Word, Google Docs, etc)
- Comments that are parallel to the text of the document.
- Comments can act as discussions and not just statements.
- Easy tooling around adding tables and other stylised text
- Excel and Google Sheets
- Not sure, maybe something they do well that we (code editors) could learn from
## Machine Learning Ideas
* Ability to record all changes to abstract syntax tree with user permission.
* I think it is possible to create powerful automatic error resolution by having a dataset available of ast's with a specific error and the subsequent transformation that fixed the error.
* Users with large private code bases could (re)train a publicly available error recovery model to experience benefits without having to share their code.
* It could be useful to a user who is creating a function to show them the most similar function (type signature, name, comment) in a public+their private database. Say I was using a web framework and I just created a function that has a multipart form as argument, it would be great to have an example instantly available.
* A simpler start for this idea without user data gathering: how the user a code snippet that is most similar to what they are currently writing. Snippets can be aggregated from examples, tests, docstrings at zero cost to the package/platform authors.
* See [codata](https://www.codota.com/code/java/classes/okhttp3.OkHttpClient) for inspiration on a snippet/example finder.
* Fuzzy natural language based setting adjustment in search bar or with voice input: increase font size, enable autosave, switch to light theme...
* Detect deviation of best practices, example case: alert developer when they are defining a color inline (rgb(30,30,30)) while all colors have been previously imported from a single file. See also [Codota](https://www.codota.com).
* It would be valuable to record the user's interactions with the editor when debugging as well as the AST. On enough data we could train a model to perform a bunch of debugging steps and show values of the most important variables in relation to the bug. Having assistance in finding the exact code that causes the problem could be super valuable. There could be sensitive data, so it should only be recorded and or shared for open source codebases with permissive licenses and with explicit user permission.
* To allow for more privacy; data gathering can be kept only local or only shared within a team/company. Say we offer the ability to save the changes made after an error occurred. Another developer in the company who encounters this error could be notified someone has previously encountered this error along with their changes made after the error. Optionally, the first developer's name can be shown (only within team/company) so the second developer can quickly ask for help.
- Ability to record all changes to abstract syntax tree with user permission.
- I think it is possible to create powerful automatic error resolution by having a dataset available of ast's with a specific error and the subsequent transformation that fixed the error.
- Users with large private code bases could (re)train a publicly available error recovery model to experience benefits without having to share their code.
- It could be useful to a user who is creating a function to show them the most similar function (type signature, name, comment) in a public+their private database. Say I was using a web framework and I just created a function that has a multipart form as argument, it would be great to have an example instantly available.
- A simpler start for this idea without user data gathering: show the user a code snippet that is most similar to what they are currently writing. Snippets can be aggregated from examples, tests, and docstrings at zero cost to the package/platform authors.
- See [codata](https://www.codota.com/code/java/classes/okhttp3.OkHttpClient) for inspiration on a snippet/example finder.
- Fuzzy natural language based setting adjustment in search bar or with voice input: increase font size, enable autosave, switch to light theme...
- Detect deviation of best practices, example case: alert developer when they are defining a color inline (rgb(30,30,30)) while all colors have been previously imported from a single file. See also [Codota](https://www.codota.com).
- It would be valuable to record the user's interactions with the editor when debugging as well as the AST. On enough data we could train a model to perform a bunch of debugging steps and show values of the most important variables in relation to the bug. Having assistance in finding the exact code that causes the problem could be super valuable. There could be sensitive data, so it should only be recorded and or shared for open source codebases with permissive licenses and with explicit user permission.
- To allow for more privacy; data gathering can be kept only local or only shared within a team/company. Say we offer the ability to save the changes made after an error occurred. Another developer in the company who encounters this error could be notified someone has previously encountered this error along with their changes made after the error. Optionally, the first developer's name can be shown (only within team/company) so the second developer can quickly ask for help.
## Testing
* From Google Docs' comments, adding tests in a similar manner, where they exist in the same "document" but parallel to the code being written
* Makes sense for unit tests, keeps the test close to the source
* Doesn't necessarily make sense for integration or e2e testing
* Maybe easier to manually trigger a test related to exactly what code you're writing
* Ability to generate unit tests for a selected function in context menu
* A table should appear to enter input and expected output pairs quickly
* Ability to "record" unit tests
* Select a function to record.
* Do a normal run, and save the input and output of the selected function.
* Generate a unit test with that input-output pair
* [vitest](https://twitter.com/antfu7/status/1468233216939245579) only runs tests that could possibly have changed (because the code they test/use has changed)
* Ability to show in sidebar if code is tested by a test. Clicking on the test in the sidebar should bring you to that test.
- From Google Docs' comments, adding tests in a similar manner, where they exist in the same "document" but parallel to the code being written
- Makes sense for unit tests, keeps the test close to the source
- Doesn't necessarily make sense for integration or e2e testing
- Maybe easier to manually trigger a test related to exactly what code you're writing
- Ability to generate unit tests for a selected function in context menu
- A table should appear to enter input and expected output pairs quickly
- Ability to "record" unit tests
- Select a function to record.
- Do a normal run, and save the input and output of the selected function.
- Generate a unit test with that input-output pair (see the sketch after this list)
- [vitest](https://twitter.com/antfu7/status/1468233216939245579) only runs tests that could possibly have changed (because the code they test/use has changed)
- Ability to show in sidebar if code is tested by a test. Clicking on the test in the sidebar should bring you to that test.
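As a rough sketch of what such a recorded or generated test could look like (assuming Roc's top-level `expect` syntax for tests; `add` is a made-up example function):

```coffee
# hypothetical function selected for test recording
add = \a, b -> a + b

# input-output pairs captured during a normal run become expectations
expect add 1 2 == 3
expect add 0 5 == 5
```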
### Inspiration
* [Haskell language server plugin](https://github.com/haskell/haskell-language-server/blob/master/plugins/hls-eval-plugin/README.md) evaluates code in comments, to test and document functions and to quickly evaluate small expressions.
* [Hazel live test](https://mobile.twitter.com/disconcision/status/1459933500656730112)
- [Haskell language server plugin](https://github.com/haskell/haskell-language-server/blob/master/plugins/hls-eval-plugin/README.md) evaluates code in comments, to test and document functions and to quickly evaluate small expressions.
- [Hazel live test](https://mobile.twitter.com/disconcision/status/1459933500656730112)
## Documentation
* Ability to see module as it would be presented on a package website.
* Modern editors may guide developers to the source code too easily.
- Ability to see module as it would be presented on a package website.
- Modern editors may guide developers to the source code too easily.
The API and documentation are meant to interface with humans.
* [DocC](https://developer.apple.com/videos/play/wwdc2021/10166/) neat documentation approach for Swift.
* Make it easy to ask for/add examples and suggest improvements to a project's docs.
* A library should have a cheat sheet with its most used/important docs summarized.
* With explicit user permission, anonymously track viewing statistics for documentation. Can be used to show most important documentation, report pain points to library authors.
* Easy side-by-side docs for multiple versions of library.
* Ability to add questions and answers to library documentation
- [DocC](https://developer.apple.com/videos/play/wwdc2021/10166/) neat documentation approach for Swift.
- Make it easy to ask for/add examples and suggest improvements to a project's docs.
- A library should have a cheat sheet with its most used/important docs summarized.
- With explicit user permission, anonymously track viewing statistics for documentation. Can be used to show most important documentation, report pain points to library authors.
- Easy side-by-side docs for multiple versions of library.
- Ability to add questions and answers to library documentation
## Tutorials
* Inclusion of step-by-step tutorials in Roc libraries, platforms or business-specific code.
* Having to set up your own website for a tutorial can be a lot of work; making it easy to create quality tutorials would make for a more delightful experience.
- Inclusion of step-by-step tutorials in Roc libraries, platforms or business-specific code.
- Having to set up your own website for a tutorial can be a lot of work; making it easy to create quality tutorials would make for a more delightful experience.
## General Plugin Ideas
### Ideas
* Plugin to translate Linux commands like `curl` to Roc code
* Plugin to view diff between two texts
* Plugin to present codebase to new developer or walk co-worker through a problem. Records sequence of filenames and line numbers.
* A Logbook plugin. I've found that writing down steps and thoughts when you're implementing or debugging something can be really useful for later.
- Plugin to translate Linux commands like `curl` to Roc code
- Plugin to view diff between two texts
- Plugin to present codebase to new developer or walk co-worker through a problem. Records sequence of filenames and line numbers.
- A Logbook plugin. I've found that writing down steps and thoughts when you're implementing or debugging something can be really useful for later.
If we make an integrated terminal, we can automatically add executed commands to this logbook. This plugin could have a publish button so you can produce useful "blogs" for others with minimal effort.
### Inspiration
@ -328,14 +327,13 @@ If we make an integrated terminal, we can automatically add executed commands to
- [10x editor](http://www.10xeditor.com/) IDE/Editor targeted at the professional developer with an emphasis on performance and scalability.
## Positive feedback
- It's nice to enhance the feeling of reward after completing a task; this increases motivation.
- Great for tutorials and the first run of the editor.
- Suggestions of occasions for positive feedback:
- Being able to compile successfully after starting out with more than X errors.
- Making a test succeed after repeated failures.
- Positive feedback could be delivered with messages and/or animations. Animations could include fireworks, flying roc logo birds, sounds...
- The intensity of the message/animation could be increased based on the duration/difficulty of the task.
- Suggest searching for help or taking a break after being stuck on test/compile errors... for some time. A search could be done for group chats for relevant libraries.
@ -355,8 +353,8 @@ If we make an integrated terminal, we can automatically add executed commands to
Thoughts and ideas, possibly taken from the above inspirations or separate from them.
* ACCESSIBILITY === EMPATHY
* Visual impairments
- ACCESSIBILITY === EMPATHY
- Visual impairments
Offering a no-animation mode addresses one of the more benign cognitive disabilities, but it is a really important baseline for people with a sensitive nervous system.
Insensitivity to certain or all colors.
Need for high contrast.
@ -367,34 +365,34 @@ Thoughts and ideas possibly taken from above inspirations or separate.
Imagine that when the user doesn't want to rely on shining rendered pixels on the screen for feedback from the machine, we offer an acoustic room simulation: by moving the "stick", either with the mouse or with the arrow keys, we bump into one of the objects, and that produces a contextually appropriate sound (clean) *ding*
At each level of abstraction the sounds can get deeper, so that when you type letters you feel like you are playing with sand (soft) *shh*. We would need help from a sound engineer here, but imagine moving down, which can be a voice-triggered command for the motion impaired: you hear a (soft) *pup* and the name of the module, and then you have options and commands appropriate for that module. They could map to those basic 4 buttons that we trained the user on, and the user could shortcut all the soft talk with the click of a button. Think of the satisfaction when you can skip the dialog of a game and get straight into the action. (X) Open functions! Each function would make a sound and say its name, unless you press search and start searching for a specific function inside the module; if you want one you select it or move to the next.
- Related idea: Playing sounds in rapid succession for different expressions in your program might be a high throughput alternative to stepping through your code line by line. I'd bet you quickly learn what your program should sound like. The difference in throughput would be even larger for those who need to rely on voice transcription.
* Motor impairments
- Motor impairments
[rant]BACKS OF CODERS ARE NOT HEALTHY! We need to change that![/neverstop]
Too much mouse waving and sitting for too long is bad for humans.
The keyboard is a basic accessibility tool, but
even a keyboard is optional; some people's hands are too shaky even for a keyboard.
They rely on eye tracking to move the mouse cursor around.
If we employ _some_ voice recognition functions, we could offer the same interface as consoles, where 4+2 buttons and a directional pad would suffice.
If we employ *some* voice recognition functions, we could offer the same interface as consoles, where 4+2 buttons and a directional pad would suffice.
That is 10 phrases that need to be pulled through as many translations as possible, so people don't have to pretend that they are from Maine or Texas to get voice recognition to work. Believe me, I have been there with Apple's Siri :D That is why we would have 10 phrases for movement, management, and the most basic syntax.
* Builtin fonts that can be read more easily by those with dyslexia.
- Builtin fonts that can be read more easily by those with dyslexia.
* Nice backtraces that highlight important information
* Ability to show import connections within a project visually
* This could be done by drawing connections between files or functions in the tree view. This would make it easier for people to get their bearings in new big projects.
* Connections could also be drawn between functions that call each other in the tree view. The connections could be animated to show the execution flow of the program.
* Ability to inline statements contained in called functions into the callee function for debugging.
* The value of expressions can be shown at the end of the line like in the [Inventing on Principle talk](https://youtu.be/8QiPFmIMxFc?t=1181)
* This would give a clear overview of the execution and should make it easy to pinpoint the line where the bug originates.
* That specific line can then be right clicked to go to the actual function.
* Having to jump around between different functions and files is unnecessary and makes it difficult to see the forest through the trees.
* "Error mode" where the editor jumps you to the next error
* Similar in theory to diff tools that jump you to the next merge conflict
* Dependency recommendation
* Command to change the file to put all exposed functions at the top of the file, private functions below. Another alternative: the ability to show a "file explorer" that shows exposed functions first, followed by private functions.
* We could provide a more expansive explanation in errors that can benefit from it. This explanation could be folded (shown on click) by default in the editor.
* Code coverage visualization: allow displaying code in a different color when it is covered by a test.
* Make "maximal privacy version" of editor available for download, next to regular version. This version would not be capable of sharing any usage/user data.
* Live code view with wasm editor. This saves bandwidth when pairing.
* [Gingkowriter](https://gingkowriter.com/) structured writing app.
* Performance improvement recommendation: show if code is eligible for tail call optimization or can use in-place mutation.
- Nice backtraces that highlight important information
- Ability to show import connections within a project visually
- This could be done by drawing connections between files or functions in the tree view. This would make it easier for people to get their bearings in new big projects.
- Connections could also be drawn between functions that call each other in the tree view. The connections could be animated to show the execution flow of the program.
- Ability to inline statements contained in called functions into the callee function for debugging.
- The value of expressions can be shown at the end of the line like in the [Inventing on Principle talk](https://youtu.be/8QiPFmIMxFc?t=1181)
- This would give a clear overview of the execution and should make it easy to pinpoint the line where the bug originates.
- That specific line can then be right clicked to go to the actual function.
- Having to jump around between different functions and files is unnecessary and makes it difficult to see the forest through the trees.
- "Error mode" where the editor jumps you to the next error
- Similar in theory to diff tools that jump you to the next merge conflict
- Dependency recommendation
- Command to change the file to put all exposed functions at the top of the file, private functions below. Another alternative: the ability to show a "file explorer" that shows exposed functions first, followed by private functions.
- We could provide a more expansive explanation in errors that can benefit from it. This explanation could be folded (shown on click) by default in the editor.
- Code coverage visualization: allow displaying code in a different color when it is covered by a test.
- Make "maximal privacy version" of editor available for download, next to regular version. This version would not be capable of sharing any usage/user data.
- Live code view with wasm editor. This saves bandwidth when pairing.
- [Gingkowriter](https://gingkowriter.com/) structured writing app.
- Performance improvement recommendation: show if code is eligible for tail call optimization or can use in-place mutation.

View File

@ -1,3 +1,5 @@
# Snippet ideas
I think snippet insertion would make for an awesome demo that shows off the potential of the editor, and a basic version would not be that difficult to implement.
By snippet insertion I mean the following:
@ -26,22 +28,20 @@ The CC0 license seems like a good fit for the snippets.
Fuzzy matching should be used to suggest the closest match, so if the user types the snippet command `empty Map`, we should suggest `empty Dict`.
# Snippet ideas
## Pure Text Snippets
Pure text snippets are not templates and do not contain typed holes.
Angle brackets are used when subvariants should be created, e.g. <collection> means this pure text snippet should be created for all Roc collections such as Dict, Set, List...
- command: empty <collection>
+ example: empty dict >> `{::}`
- example: empty dict >> `{::}`
- command: <common algorithm>
+ example: sieve of Eratosthenes >> `inserts function for sieve of Eratosthenes`
+ common algorithms: sieve of Eratosthenes, greatest common divisor, prime factorisation, A* path finding, Dijkstra's algorithm, Breadth First Search...
- example: sieve of Eratosthenes >> `inserts function for sieve of Eratosthenes`
- common algorithms: sieve of Eratosthenes, greatest common divisor, prime factorisation, A* path finding, Dijkstra's algorithm, Breadth First Search...
- command: current date/datetime
+ example: current datetime >> `now <- Time.now\n`
- example: current datetime >> `now <- Time.now\n`
- command: list range 1 to 5
+ example: [1, 2, 3, 4, 5]
- example: [1, 2, 3, 4, 5]
- command: use commandline args
- command: post/get/put request
- command: extract float(s)/number/email addresses from string. regex match float/number/email address/...
@ -54,27 +54,28 @@ Fish hooks are used when subvariants should be created e.g.: <collection> means
Snippets are inserted based on the type of the value on which the cursor is located.
- command: <all builtins for current type>
+ example:
* We have the cursor like this `people|`
* User presses snippet shortcut or dot key
* We show a list with all builtin functions for the List type
* User chooses contains
* We change code to `List.contains people |Blank`
- example:
- We have the cursor like this `people|`
- User presses snippet shortcut or dot key
- We show a list with all builtin functions for the List type
- User chooses contains
- We change code to `List.contains people |Blank`
- command: Str to chars/charlist
## Snippets with Typed Holes
- command: sort ^List *^ (by ^Record Field^) {ascending/descending}
+ example: sort people by age descending >> ...
- example: sort people by age descending >> ...
- command: escape url
+ example: >> `percEncodedString = Url.percentEncode ^String^`
- example: >> `percEncodedString = Url.percentEncode ^String^`
- command: list files in directory
+ example: >>
- example: >>
```
path <- File.pathFromStr ^String^
dirContents <- File.enumerateDir path
```
- command: remove/create file
- command: read/write from file
- command: concatenate strings
@ -85,26 +86,27 @@ Snippets are inserted based on type of value on which the cursor is located.
- command: reverse string
- command: lambda/anonymous function
- we should auto-create typed hole commands for all builtins.
+ example: List has builtins reverse, repeat, len... generated snippet commands should be:
* reverse list > List.reverse ^List *^
* repeat list > List.repeat ^elem^ ^Nat^
* len list (fuzzy matches should be length of list)
- example: List has builtins reverse, repeat, len... generated snippet commands should be:
- reverse list > List.reverse ^List *^
- repeat list > List.repeat ^elem^ ^Nat^
- len list (fuzzy matches should be length of list)
- append element to list
# fuzzy matching
## fuzzy matching
some pairs for fuzzy matching unit tests:
- hashmap > Dict
- map > map (function), Dict
- for > map, mapWithIndex, walk, walkBackwards, zip
- apply/for yield > map
- fold > walk, walkBackwards
- foldl > walkBackwards
- foldr > walk
- head > takeFirst
- filter > keepIf
# Inspiration
- hashmap > Dict
- map > map (function), Dict
- for > map, mapWithIndex, walk, walkBackwards, zip
- apply/for yield > map
- fold > walk, walkBackwards
- foldl > walkBackwards
- foldr > walk
- head > takeFirst
- filter > keepIf
## Inspiration
- [grepper](https://www.codegrepper.com/) snippet collection that embeds in google search results. See also this [collection of common questions](https://www.codegrepper.com/code-examples/rust).
- [github copilot](https://copilot.github.com/) snippet generation with machine learning

View File

@ -2,4 +2,4 @@
# Where are the tests?
We have a lot of tests at the end of source files; this allows us to test functions that are not exposed by the editor itself.
`editor/mvc/ed_update.rs` and `editor/ui/text/big_text_area.rs` have many important tests.

View File

@ -5,11 +5,13 @@
### 1. Build the Roc website
For a minimal build (when just working on the web REPL)
```bash
cp -r www/public www/build
```
Or, for a full build (with std lib documentation, downloadable source code, etc.)
```bash
www/build.sh
```
@ -36,7 +38,8 @@ python3 -m http.server
```
### 3. Open your browser
You should be able to find the Roc REPL at http://127.0.0.1:8000/repl (or whatever port your web server mentioned when it started up.)
You should be able to find the Roc REPL at <http://127.0.0.1:8000/repl> (or whatever port your web server mentioned when it started up.)
**Warning:** This is work in progress! Not all language features are implemented yet, error messages don't look nice yet, up/down arrows don't work for history, etc.

View File

@ -1,9 +1,12 @@
# devtools
To make rust-analyzer and other vscode extensions work well, you want them to use the same rustc, glibc, zig... as specified in the roc nix flake.
The easiest way to do this is to use another flake for all your dev tools that takes the roc flake as an input.
Use the flake in this folder for your editor of choice as a starting template. If your editor is not listed, feel free to make a PR and add your flake.
Further steps:
1. Copy the flake for your favorite editor to a new folder outside of the roc repo folder.
1. Run `git init` in the new folder.
1. Rename the copied flake to `flake.nix`.
@ -20,11 +23,13 @@ I recommend creating a git repository to save this custom flake.
If you use lorri or direnv it is possible to load the dev flake instead of the roc flake.
For lorri:
1. copy the `shell.nix` at the root of this repo to the folder containing your dev tools flake.
1. edit `.envrc` to contain:
```
```sh
eval "$(lorri direnv --shell-file path-to-your-dev-flake-folder/shell.nix)"
```
```
## Extensions
@ -38,6 +43,7 @@ If your extension is not available on nix, you can add them [from the vscode mar
Instead of running `code` in the last step, you can use the `--extensions-dir` flag, which allows you to install extensions using the vscode GUI.
On MacOS or Linux:
```
```sh
code --extensions-dir="$HOME/.vscode/extensions"
```

View File

@ -9,9 +9,8 @@ roc run
This will run `main.roc` because, unless you explicitly give it a filename, `roc run`
defaults to running a file named `main.roc`. Other `roc` commands (like `roc build`, `roc test`, and so on) also default to `main.roc` unless you explicitly give them a filename.
# About this example
## About this example
This uses a very simple platform which does nothing more than printing the string you give it.
The line `main = "Hello, World!\n"` sets this string to be `"Hello, World!"` with a newline at the end, and the lines `packages { pf: "platform/main.roc" }` and `provides [main] to pf` specify that the `platform/` directory contains this app's platform.

View File

@ -9,7 +9,7 @@ roc run
This will run `main.roc` because, unless you explicitly give it a filename, `roc run`
defaults to running a file named `main.roc`. Other `roc` commands (like `roc build`, `roc test`, and so on) also default to `main.roc` unless you explicitly give them a filename.
# About this example
## About this example
This uses a very simple platform which does nothing more than printing the string you give it.

View File

@ -21,7 +21,7 @@ npm install -g http-server
http-server
```
Now open your browser at http://localhost:8080
Now open your browser at <http://localhost:8080>
## Design Notes

View File

@ -16,7 +16,7 @@ Make sure you have the right versions of Ruby and Clang especially! This example
First, `cd` into this directory and run this in your terminal:
```
```sh
roc build --lib
```
@ -26,7 +26,7 @@ This compiles your Roc code into a binary library in the current directory. The
Next, run this: (remember that you need Ruby 2.7.6 or higher - otherwise later steps will fail!)
```
```sh
ruby extconf.rb
```
@ -36,7 +36,7 @@ This generates a `Makefile`. (There are only two Roc-specific lines in `extconf.
Finally, run this:
```
```sh
make
```
@ -46,7 +46,7 @@ This uses the `Makefile` generated earlier to take the compiled Roc library and
You can now try this out in Ruby's REPL (`irb`), like so:
```
```sh
$ irb
irb(main):001:0> require_relative 'demo'
Ruby just required Roc. Let's get READY TO ROC.
@ -59,13 +59,13 @@ irb(main):002:0> RocStuff::hello 'Hello, World'
To rebuild after changing either the `demo.c` file or any `.roc` files, run:
```
```sh
roc build --lib && make -B
```
The `-B` flag is necessary when you only change .roc files, because otherwise `make` thinks there's no work to do and doesn't bother rebuilding.
# About this example
## About this example
This was created by following a [tutorial on Ruby C extensions](https://silverhammermba.github.io/emberb/c/) and [some documentation](https://github.com/ruby/ruby/blob/master/doc/extension.rdoc#label-Prepare+extconf.rb) (along with [more nicely formatted, but potentially out-of-date docs](https://docs.ruby-lang.org/en/2.4.0/extension_rdoc.html)).

View File

@ -125,6 +125,7 @@ to destructure variants inline in function declarations, like in these two examp
```elm
\(UserId id1) (UserId id2) ->
```
```elm
\(UserId id) ->
```
@ -137,6 +138,7 @@ You can write the above like so in Roc:
```elm
\UserId id1, UserId id2 ->
```
```elm
\UserId id ->
```
@ -214,13 +216,13 @@ Closed record annotations look the same as they do in Elm, e.g.
In Elm:
```
```elm
{ a | name : Str, email : Str } -> Str
```
In Roc:
```
```coffee
{ name : Str, email : Str }* -> Str
```
@ -356,8 +358,8 @@ ergonomics of destructuring mean this wouldn't be a good fit for data modeling.
Roc's pattern matching conditionals work about the same as they do in Elm.
Here are two differences:
* Roc uses the syntax `when`...`is` instead of `case`...`of`
* In Roc, you can use `|` to handle multiple patterns in the same way
- Roc uses the syntax `when`...`is` instead of `case`...`of`
- In Roc, you can use `|` to handle multiple patterns in the same way
For example:
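Here is a minimal sketch (the `colorName` function and its tags are hypothetical):

```coffee
colorName = \color ->
    when color is
        Red | Orange | Yellow -> "warm"
        Green | Blue -> "cool"
        _ -> "other"
```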
@ -398,9 +400,9 @@ This is the biggest semantic difference between Roc and Elm.
Let's start with the motivation. Suppose I'm using a platform for making a
web server, and I want to:
* Read some data from a file
* Send an HTTP request containing some of the data from the file
* Write some data to a file containing some of the data from the HTTP response
- Read some data from a file
- Send an HTTP request containing some of the data from the file
- Write some data to a file containing some of the data from the HTTP response
Assuming I'm writing this on a Roc platform which has a `Task`-based API,
and that `Task.await` is like Elm's `Task.andThen` but with the arguments
@ -524,7 +526,7 @@ the type of the union it goes in.
Here are some examples of using tags in a REPL:
```
```coffee
> True
True : [True]*
@ -599,8 +601,8 @@ tagToStr = \tag ->
Each of these type annotations involves a *tag union* - a collection of tags bracketed by `[` and `]`.
* The type `[Foo, Bar Str]` is a **closed** tag union.
* The type `[Foo]*` is an **open** tag union.
- The type `[Foo, Bar Str]` is a **closed** tag union.
- The type `[Foo]*` is an **open** tag union.
You can pass `x` to `tagToStr` because an open tag union is type-compatible with
any closed tag union which contains its tags (in this case, the `Foo` tag). You can also
@ -688,8 +690,8 @@ includes in its union."
## Opaque Types
In Elm, you can choose to expose (or not) custom types' constructors in order to create [opaque types](http://sporto.github.io/elm-patterns/advanced/opaque-types.html).
Since Roc's _tags_ can be constructed in any module without importing anything, Roc has a separate
_opaque type_ language feature to enable information hiding.
Since Roc's *tags* can be constructed in any module without importing anything, Roc has a separate
*opaque type* language feature to enable information hiding.
As an example, suppose I define these inside the `Username` module:
@ -1011,6 +1013,7 @@ list =
num + 1
```
Both snippets are calling `List.map` passing `numbers` as the first argument,
and a `\num -> num + 1` function for the other argument.
@ -1123,8 +1126,8 @@ Like Elm, Roc organizes numbers into integers and floating-point numbers.
However, Roc breaks them down even further. For example, Roc has two different
sizes of float types to choose from:
* `F64` - a 64-bit [IEEE 754 binary floating point number](https://en.wikipedia.org/wiki/IEEE_754#Binary)
* `F32` - a 32-bit [IEEE 754 binary floating point number](https://en.wikipedia.org/wiki/IEEE_754#Binary)
- `F64` - a 64-bit [IEEE 754 binary floating point number](https://en.wikipedia.org/wiki/IEEE_754#Binary)
- `F32` - a 32-bit [IEEE 754 binary floating point number](https://en.wikipedia.org/wiki/IEEE_754#Binary)
Both types are desirable in different situations. For example, when doing
simulations, the precision of the `F64` type is desirable. On the other hand,
@ -1141,11 +1144,11 @@ them take longer than they do with floats.
Similarly to how there are different sizes of floating point numbers,
there are also different sizes of integer to choose from:
* `I8`
* `I16`
* `I32`
* `I64`
* `I128`
- `I8`
- `I16`
- `I32`
- `I64`
- `I128`
Roc also has *unsigned* integers which are never negative. They are
`U8`, `U16`, `U32`, `U64`, `U128`, and `Nat`.
@ -1156,9 +1159,9 @@ target (for example, WebAssembly) at runtime it will be the same as `U32` instea
`Nat` comes up most often with collection lengths and indexing into collections.
For example:
* `List.len : List * -> Nat`
* `List.get : List elem, Nat -> Result elem [OutOfBounds]*`
* `List.set : List elem, Nat, elem -> List elem`
- `List.len : List * -> Nat`
- `List.get : List elem, Nat -> Result elem [OutOfBounds]*`
- `List.set : List elem, Nat, elem -> List elem`
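A minimal sketch of these in use (the `scores` list is just an example value):

```coffee
scores = [10, 20, 30]

howMany = List.len scores        # 3 (a Nat)
second = List.get scores 1       # Ok 20
replaced = List.set scores 0 99  # [99, 20, 30]
```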
As with floats, which integer type to use depends on the values you want to support
as well as your performance needs. For example, raw sequences of bytes are typically
@ -1177,10 +1180,10 @@ This accepts any of the numeric types discussed above, from `I128` to `F32`
to `D64` and everything in between. This is because those are all type aliases
for `Num` types. For example:
* `I64` is a type alias for `Num (Integer Signed64)`
* `U8` is a type alias for `Num (Integer Unsigned8)`
* `F32` is a type alias for `Num (Fraction Binary32)`
* `Dec` is a type alias for `Num (Fraction Decimal)`
- `I64` is a type alias for `Num (Integer Signed64)`
- `U8` is a type alias for `Num (Integer Unsigned8)`
- `F32` is a type alias for `Num (Fraction Binary32)`
- `Dec` is a type alias for `Num (Fraction Decimal)`
(Those types like `Integer`, `Fraction`, and `Signed64` are all defined like `Never`;
you can never instantiate one. They are used only as phantom types.)
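As a rough sketch, a function annotated with `Num a` accepts values of any of these aliases (`double` is a hypothetical example, not a builtin):

```coffee
double : Num a -> Num a
double = \n -> n + n

doubledInt = double 21    # works for an integer
doubledFrac = double 0.5  # works for a fraction
```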
@ -1235,9 +1238,9 @@ If you put these into a hypothetical Roc REPL, here's what you'd see:
`comparable`, `appendable`, and `number` don't exist in Roc.
* `number` is replaced by `Num`, as described previously.
* `appendable` is only used in Elm for the `(++)` operator, and Roc doesn't have that operator.
* `comparable` is used in Elm for comparison operators (like `<` and such), plus `List.sort`, `Dict`, and `Set`. Roc's comparison operators (like `<`) only accept numbers; `"foo" < "bar"` is valid Elm, but will not compile in Roc. Roc's dictionaries and sets are hashmaps behind the scenes (rather than ordered trees), so their keys need to be hashable but not necessarily comparable.
- `number` is replaced by `Num`, as described previously.
- `appendable` is only used in Elm for the `(++)` operator, and Roc doesn't have that operator.
- `comparable` is used in Elm for comparison operators (like `<` and such), plus `List.sort`, `Dict`, and `Set`. Roc's comparison operators (like `<`) only accept numbers; `"foo" < "bar"` is valid Elm, but will not compile in Roc. Roc's dictionaries and sets are hashmaps behind the scenes (rather than ordered trees), so their keys need to be hashable but not necessarily comparable.
That said, Roc's `Dict` and `Set` do have a restriction on their keys, just not `comparable`.
See the section on Abilities in [the tutorial](TUTORIAL.md) for details.
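A tiny sketch of the comparison-operator difference mentioned above (the commented-out line shows what Roc would reject):

```coffee
smaller = 1 < 2    # fine: comparison operators accept numbers
# "foo" < "bar"    # would not compile in Roc: `<` only accepts numbers
```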
@ -1246,23 +1249,23 @@ See the section on Abilities in [the tutorial](TUTORIAL.md) for details.
`elm/core` has these modules:
* `Array`
* `Basics`
* `Bitwise`
* `Char`
* `Debug`
* `Dict`
* `List`
* `Maybe`
* `Platform`
* `Platform.Cmd`
* `Platform.Sub`
* `Process`
* `Result`
* `Set`
* `String`
* `Task`
* `Tuple`
- `Array`
- `Basics`
- `Bitwise`
- `Char`
- `Debug`
- `Dict`
- `List`
- `Maybe`
- `Platform`
- `Platform.Cmd`
- `Platform.Sub`
- `Process`
- `Result`
- `Set`
- `String`
- `Task`
- `Tuple`
In Roc, the standard library is not a standalone package. It is baked into the compiler,
and you can't upgrade it independently of a compiler release; whatever version of
@ -1272,25 +1275,25 @@ possible to ship Roc's standard library as a separate package!)
Roc's standard library has these modules:
* `Str`
* `Bool`
* `Num`
* `List`
* `Dict`
* `Set`
* `Result`
- `Str`
- `Bool`
- `Num`
- `List`
- `Dict`
- `Set`
- `Result`
Some differences to note:
* All these standard modules are imported by default into every module. They also expose all their types (e.g. `Bool`, `List`, `Result`) but they do not expose any values - not even `negate` or `not`. (`True`, `False`, `Ok`, and `Err` are all tags, so they do not need to be exposed; they are globally available regardless!)
* In Roc it's called `Str` instead of `String`.
* `List` refers to something more like Elm's `Array`, as noted earlier.
* No `Char`. This is by design. What most people think of as a "character" is a rendered glyph. However, rendered glyphs are comprised of [grapheme clusters](https://stackoverflow.com/a/27331885), which are a variable number of Unicode code points - and there's no upper bound on how many code points there can be in a single cluster. In a world of emoji, I think this makes `Char` error-prone and it's better to have `Str` be the only first-class unit. For convenience when working with unicode code points (e.g. for performance-critical tasks like parsing), the single-quote syntax is sugar for the corresponding `U32` code point - for example, writing `'鹏'` is exactly the same as writing `40527`. Like Rust, you get a compiler error if you put something in single quotes that's not a valid [Unicode scalar value](http://www.unicode.org/glossary/#unicode_scalar_value).
* No `Basics`. You use everything from the standard library fully-qualified; e.g. `Bool.not` or `Num.negate` or `Num.ceiling`. There is no `Never` because `[]` already serves that purpose. (Roc's standard library doesn't include an equivalent of `Basics.never`, but it's one line of code and anyone can implement it: `never = \a -> never a`.)
* No `Tuple`. Roc doesn't have tuple syntax. As a convention, `Pair` can be used to represent tuples (e.g. `List.zip : List a, List b -> List [Pair a b]*`), but this comes up infrequently compared to languages that have dedicated syntax for it.
* No `Task`. By design, platform authors implement `Task` (or don't; it's up to them) - it's not something that really *could* be usefully present in Roc's standard library.
* No `Process`, `Platform`, `Cmd`, or `Sub` - similarly to `Task`, these are things platform authors would include, or not.
* No `Maybe`. This is by design. If a function returns a potential error, use `Result` with an error type that uses a zero-arg tag to describe what went wrong. (For example, `List.first : List a -> Result a [ListWasEmpty]*` instead of `List.first : List a -> Maybe a`.) If you want to have a record field be optional, use an Optional Record Field directly (see earlier). If you want to describe something that's neither an operation that can fail nor an optional field, use a more descriptive tag - e.g. for a nullable JSON decoder, instead of `nullable : Decoder a -> Decoder (Maybe a)`, make a self-documenting API like `nullable : Decoder a -> Decoder [Null, NonNull a]*`.
- All these standard modules are imported by default into every module. They also expose all their types (e.g. `Bool`, `List`, `Result`) but they do not expose any values - not even `negate` or `not`. (`True`, `False`, `Ok`, and `Err` are all tags, so they do not need to be exposed; they are globally available regardless!)
- In Roc it's called `Str` instead of `String`.
- `List` refers to something more like Elm's `Array`, as noted earlier.
- No `Char`. This is by design. What most people think of as a "character" is a rendered glyph. However, rendered glyphs are comprised of [grapheme clusters](https://stackoverflow.com/a/27331885), which are a variable number of Unicode code points - and there's no upper bound on how many code points there can be in a single cluster. In a world of emoji, I think this makes `Char` error-prone and it's better to have `Str` be the only first-class unit. For convenience when working with unicode code points (e.g. for performance-critical tasks like parsing), the single-quote syntax is sugar for the corresponding `U32` code point - for example, writing `'鹏'` is exactly the same as writing `40527`. Like Rust, you get a compiler error if you put something in single quotes that's not a valid [Unicode scalar value](http://www.unicode.org/glossary/#unicode_scalar_value).
- No `Basics`. You use everything from the standard library fully-qualified; e.g. `Bool.not` or `Num.negate` or `Num.ceiling`. There is no `Never` because `[]` already serves that purpose. (Roc's standard library doesn't include an equivalent of `Basics.never`, but it's one line of code and anyone can implement it: `never = \a -> never a`.)
- No `Tuple`. Roc doesn't have tuple syntax. As a convention, `Pair` can be used to represent tuples (e.g. `List.zip : List a, List b -> List [Pair a b]*`), but this comes up infrequently compared to languages that have dedicated syntax for it.
- No `Task`. By design, platform authors implement `Task` (or don't; it's up to them) - it's not something that really *could* be usefully present in Roc's standard library.
- No `Process`, `Platform`, `Cmd`, or `Sub` - similarly to `Task`, these are things platform authors would include, or not.
- No `Maybe`. This is by design. If a function returns a potential error, use `Result` with an error type that uses a zero-arg tag to describe what went wrong. (For example, `List.first : List a -> Result a [ListWasEmpty]*` instead of `List.first : List a -> Maybe a`.) If you want to have a record field be optional, use an Optional Record Field directly (see earlier). If you want to describe something that's neither an operation that can fail nor an optional field, use a more descriptive tag - e.g. for a nullable JSON decoder, instead of `nullable : Decoder a -> Decoder (Maybe a)`, make a self-documenting API like `nullable : Decoder a -> Decoder [Null, NonNull a]*`.
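As a rough sketch of the `Result`-based style described above (assuming a hypothetical `names : List Str`):

```coffee
greeting =
    when List.first names is
        Ok name -> "Hello, \(name)!"
        Err ListWasEmpty -> "Nobody here."
```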
## Operator Desugaring Table

View File

@ -1,6 +1,6 @@
# www.roc-lang.org
How to update www.roc-lang.org:
## How to update site
- create a new branch, for example `update-www` based on `www`
- pull `main` into `update-www`