Adds support for appending to an existing Excel table.
# Important Notes
- Renamed `Column_Mapping` to `Column_Name_Mapping`
- Changed new type name to `Map_Column`
- Added last modified time and creation time to `File`.
Implement generation of Java AST types from the Rust AST type definitions, with support for deserializing, in Java, syntax trees created in Rust.
### New Libraries
#### `enso-reflect`
Implements a `#[derive(Reflect)]` macro to enable runtime analysis of datatypes. The macro interface includes helper attributes; **the Rust types and the `reflect` attributes applied to them fully determine the Java types** ultimately produced (by `enso-metamodel`). This is the most important API, as it is used in the subject crates (`enso-parser`, and dependencies with types used in the AST). [Module docs](https://github.com/enso-org/enso/blob/wip/kw/parser/ast-transpiler/lib/rust/reflect/macros/src/lib.rs).
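For orientation, a minimal sketch of how a crate might opt its types into reflection. The derive itself comes from `enso-reflect`; the import path and the concrete types below are assumptions for illustration, not the real parser AST.
```
// Assumed import path; the real crate layout may differ.
use enso_reflect::Reflect;

// Illustrative types only; the real AST lives in `enso-parser` and its dependencies.
#[derive(Reflect)]
struct Tree {
    span: Span,
    variant: Variant,
}

#[derive(Reflect)]
struct Span {
    left_offset: u32,
    code_length: u32,
}

#[derive(Reflect)]
enum Variant {
    Ident { token: String },
    Number { token: String },
}
```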
#### `enso-metamodel`
Provides models of the data models of Rust, Java, and Meta (a highly-abstracted, language-independent model; I have previously referred to it as the "generic representation", but that term was overloaded).
The high-level interface consists of operations on data models and conversions between them. For example, the only operations needed by [the binary that drives datatype transpilation](https://github.com/enso-org/enso/blob/wip/kw/parser/ast-transpiler/lib/rust/parser/generate-java/src/main.rs) are: `rust::to_meta`, `java::from_meta`, `java::transform::optional_to_null`, `java::to_syntax`.
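As a rough sketch, the driver's pipeline amounts to the following; only the four operation names come from the crate, while the signatures, intermediate types, and package name are assumptions.
```
// Hypothetical sketch of the transpilation driver (not the crate's exact API).
fn generate_java(rust_model: rust::TypeGraph) -> String {
    let meta_model = rust::to_meta(rust_model);          // Rust data model -> Meta model
    let mut java_model = java::from_meta(&meta_model);   // Meta model -> Java data model
    java::transform::optional_to_null(&mut java_model);  // represent Option<T> as nullable T
    java::to_syntax(&java_model, "org.enso.syntax2")     // Java data model -> Java source text
}
```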
The low-level interface consists of direct usage of the datatypes; this is used by [the module that implements some serialization overrides](https://github.com/enso-org/enso/blob/wip/kw/parser/ast-transpiler/lib/rust/parser/generate-java/src/serialization.rs) (so that the Java interface to `Code` references can produce `String`s on demand based on serialized offset/length pairs). The serialization override mechanism is based on customizing, not replacing, the generated deserialization methods, so as to be as robust as possible to changes in the Rust source or in the transpilation process.
### Important Notes
- Rust/Java serialization is exhaustively tested for structural compatibility. A function [`metamodel::meta::serialization::testcases`](https://github.com/enso-org/enso/blob/wip/kw/parser/ast-transpiler/lib/rust/metamodel/src/meta/serialization.rs) uses `reflect`-derived data to generate serialized representations of ASTs to use as test cases. Its should-accept cases cover every type a tree can contain; it also produces a representative set of should-reject cases. A Rust `#[test]` confirms that these cases are accepted/rejected as expected, and generated Java tests (see Binaries below) check the generated Java deserialization code against the same test cases.
- Deserializing `Code` is untested. The mechanism is in place (in Rust, we serialize only the offset/length of the `Cow`; in Java, during deserialization we obtain a context object holding a buffer for all string data; the accessor generated in Java uses the buffer and the offset/length to return `String`s), but it will be easier to test once we have implemented actually parsing something and instantiating the `Cow`s with source code. (A sketch of the Rust side of this scheme follows these notes.)
- `#[tagged_enum]` [now supports](https://github.com/enso-org/enso/blob/wip/kw/parser/ast-transpiler/lib/rust/shapely/macros/src/tagged_enum.rs#L36-L51) control over what is done with container-level attributes; they can be applied to the container and variants (default), only to the container, or only to variants.
- Generation of `sealed` classes is supported, but currently disabled by `TARGET_VERSION` in `metamodel::java::syntax` so that tests don't require Java 15 to run. (The same logic is run either way; there is a shallow difference in output.)
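Returning to the `Code` note above, a minimal sketch of the Rust side of the offset/length scheme (not the actual parser code), assuming the `Cow` borrows from the original source buffer:
```
use std::borrow::Cow;

// Stand-in for the parser's `Code` reference; the real type differs.
struct Code<'s> {
    repr: Cow<'s, str>,
}

// Serialize only the (offset, length) of the text within `source`; the Java
// deserializer later resolves the pair against a shared source buffer held by
// its context object and builds the `String` on demand.
fn serialize_code(code: &Code<'_>, source: &str, out: &mut Vec<u8>) {
    // Assumes `repr` borrows from `source`; a real implementation must also
    // handle the owned case.
    let offset = code.repr.as_ptr() as usize - source.as_ptr() as usize;
    out.extend_from_slice(&(offset as u32).to_le_bytes());
    out.extend_from_slice(&(code.repr.len() as u32).to_le_bytes());
}
```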
### Binaries
The `enso-parser-generate-java` crate defines several binaries:
- `enso-parser-generate-java`: Performs the transpilation; after integration, this will be invoked by the build script.
- `java-tests`: Generates the Java code that tests format deserialization; after integration this command will be invoked by the build script, and its Java output compiled and run during testing.
- `graph-rust`/`graph-meta`/`graph-java`: Produce GraphViz representations of data models in different typesystems; these are for developing and understanding model transformations.
Until integration, a **script regenerates the Java and runs the format tests: `./tools/parser_generate_java.sh`**. The generated code can be browsed in `target/generated_java`.
If a node created by the user gets placed off-screen, the screen's camera is panned to make the node visible.
https://www.pivotaltracker.com/story/show/181188687
#### Visuals
A screencast showing a number of node creation scenarios when the camera is panned to the newly created node, including when zoomed out.
https://user-images.githubusercontent.com/273837/177169716-50a12b0a-c742-4b01-9766-56206e7938b9.mov
# Important Notes
- The camera is also panned if the node is only partially visible, or if there is not enough free space visible around the node. The specific amount of free space that needs to be visible around a newly created node is configured in the theme.
- If the screen area is so small that the node cannot fully fit in it (either horizontally or vertically), showing the left and top boundaries of the node's area takes priority over showing the corresponding opposite edges (see the sketch below).
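A one-dimensional sketch of that rule (illustrative only, not the IDE's actual code); the same computation would be applied per axis:
```
// Compute a camera shift that brings the node interval [node_min, node_max],
// padded by `margin`, into the viewport [view_min, view_max]. If the padded
// node does not fit, keeping its left/top edge visible wins.
fn pan_amount(view_min: f32, view_max: f32, node_min: f32, node_max: f32, margin: f32) -> f32 {
    let target_min = node_min - margin;
    let target_max = node_max + margin;
    if target_min < view_min {
        target_min - view_min // pan towards the left/top edge
    } else if target_max > view_max {
        // Pan towards the right/bottom edge, but never push the left/top edge
        // of the padded node out of view.
        (target_max - view_max).min(target_min - view_min)
    } else {
        0.0 // already fully visible with the required free space
    }
}
```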
Initial work restructuring the `Database.connect` API:
- New SQLite API with support for InMemory.
- Updated PostgreSQL API with SSL and Client Certificate Support.
- Updated Redshift API.
# Important Notes
Follow up tasks:
- PostgreSQL SSL additional testing.
- Driver version updating.
- `.pgpass` support.
Implement simple placeholder icons for all entry kinds supported in the Suggestion Database. The icons are planned to be used in the Component Browser as default icons for entries. This is intended to allow visually distinguishing different entry kinds.
The following additional fixes and tweaks are applied:
- Icons previously using only 1 color from the theme now use the color provided through shape parameters instead.
- The `data_science` and `network` icons now use only the 2 colors provided through shape parameters.
- The `join` icon has its shape and colors modified and uses only the 2 colors provided through shape parameters.
- The demo scene now parametrizes icon shapes using colors from the Component Browser Design Doc.
https://www.pivotaltracker.com/story/show/182584322
#### Visuals
Original contents of the demo scene before the PR:
<img width="2197" alt="x-orig" src="https://user-images.githubusercontent.com/273837/176669422-ee2e14c7-9ef4-42fd-acb7-ae3be6b68587.png">
Final contents of the demo scene after the PR:
<img width="2201" alt="x2-final" src="https://user-images.githubusercontent.com/273837/176668720-6f1685fd-f7e6-44d7-85f5-f6a6d6789644.png">
Make the local scope components available to the Component Browser through `component::List` (previously known as "Hierarchical Action List"). (See the [Local Scope Section in the Design Doc](https://github.com/enso-org/design/blob/main/epics/component-browser/design.md#local-scope-section) for more details.)
https://www.pivotaltracker.com/story/show/181869186
# Important Notes
- Note: the PR description does not include a screencast because the changes in this PR have no visual effect at this moment. The result of these changes would only be observable in the Component Browser, which is not merged yet. As described in the task's Acceptance Criteria, the changes in this PR are currently covered only by tests.
- The `component::group::Data::new_empty_visible` constructor was renamed to `from_name_and_id` and changed to set the `visible` field to a default value. All known code paths appear to call the `update_sorting_and_visibility` method before checking the value of the `visible` field, so setting the value to `true` when creating the object does not seem needed.
[ci no changelog needed]
Truffle has used [MultiTier compilation](190e0b2bb7) by default since 21.1.0. That means every `RootNode` is compiled twice. However, benchmarks only care about peak performance. Let's restore the previous behavior and compile only once, after profiling in the interpreter.
# Important Notes
This change does not influence peak performance. Just the number of IGV graphs produced from benchmarks when running:
```
enso$ sbt "project runtime" "withDebug --dumpGraphs benchOnly -- AtomBenchmark"
```
is cut in half.
Try the following Enso program:
```
main =
    here.debug

foreign js debug = """
    debugger;
```
It crashes the engine with an exception:
```
Execution finished with an error: java.lang.ClassCastException: class com.oracle.truffle.js.runtime.builtins.JSFunctionObject$Unbound cannot be cast to class org.enso.interpreter.runtime.callable.CallerInfo (com.oracle.truffle.js.runtime.builtins.JSFunctionObject$Unbound and org.enso.interpreter.runtime.callable.CallerInfo are in unnamed module of loader com.oracle.graalvm.locator.GraalVMLocator$GuestLangToolsLoader @55cb6996)
at <java> org.enso.interpreter.runtime.callable.function.Function$ArgumentsHelper.getCallerInfo(Function.java:352)
at <java> org.enso.interpreter.instrument.ReplDebuggerInstrument$ReplExecutionEventNodeImpl.onEnter(ReplDebuggerInstrument.java:179)
at <java> org.graalvm.truffle/com.oracle.truffle.api.instrumentation.ProbeNode$EventProviderChainNode.innerOnEnter(ProbeNode.java:1397)
at <java> org.graalvm.truffle/com.oracle.truffle.api.instrumentation.ProbeNode$EventChainNode.onEnter(ProbeNode.java:912)
at <java> org.graalvm.truffle/com.oracle.truffle.api.instrumentation.ProbeNode.onEnter(ProbeNode.java:216)
at <java> com.oracle.truffle.js.nodes.JavaScriptNodeWrapper.execute(JavaScriptNodeWrapper.java:44)
at <java> com.oracle.truffle.js.nodes.control.DiscardResultNode.execute(DiscardResultNode.java:88)
at <java> com.oracle.truffle.js.nodes.function.FunctionBodyNode.execute(FunctionBodyNode.java:73)
at <java> com.oracle.truffle.js.nodes.JavaScriptNodeWrapper.execute(JavaScriptNodeWrapper.java:45)
at <java> com.oracle.truffle.js.nodes.function.FunctionRootNode.executeInRealm(FunctionRootNode.java:150)
at <java> com.oracle.truffle.js.runtime.JavaScriptRealmBoundaryRootNode.execute(JavaScriptRealmBoundaryRootNode.java:93)
at <js> <js> poly_enso_eval(Unknown)
at <epb> <epb> null(Unknown)
at <enso> Prg.debug(Prg.enso:27-28)
```
`AtomBenchmarks` have been broken since the introduction of the [micro distribution](https://github.com/enso-org/enso/pull/3531). The micro distribution doesn't contain `Range`, so one cannot use the `1.up_to` method.
# Important Notes
I have rewritten the Enso code to use a manual `generator`. The results of the benchmark seem comparable. Executed as:
```
sbt:runtime> benchOnly AtomBenchmarks
```
I was modifying `Date_Spec.enso` today and made a mistake. When executing it, I could see the error, but the process got stuck...
```
enso/test/Tests/src/Data/Time$ ../../../../built-distribution/enso-engine-0.0.0-dev-linux-amd64/enso-0.0.0-dev/bin/enso --run Date_Spec.enso In module Date_Spec:
Compiler encountered warnings:
Date_Spec.enso[14:29-14:37]: Unused function argument parseDate.
Compiler encountered errors:
Date_Spec.enso[18:13-18:20]: Variable `debugger` is not defined.
Exception in thread "main" Compilation aborted due to errors.
at org.graalvm.sdk/org.graalvm.polyglot.Value.invokeMember(Value.java:932)
at org.enso.polyglot.Module.getAssociatedConstructor(Module.scala:19)
at org.enso.runner.Main$.runMain(Main.scala:755)
at org.enso.runner.Main$.runSingleFile(Main.scala:695)
at org.enso.runner.Main$.run(Main.scala:582)
at org.enso.runner.Main$.main(Main.scala:1031)
at org.enso.runner.Main.main(Main.scala)
^C
```
...and I had to kill it with Ctrl-C. This PR fixes the problem by moving the `getAssociatedConstructor` call into a `try` block, catching the exception, and exiting properly.
This PR adds sources for Enso language support in IGV (and NetBeans). The support is based on a TextMate grammar shown in the editor and on registration of the Enso language so IGV can find it. This PR also adds a new GitHub Actions workflow file to build the project using Maven.
- Remove `from_xls` and `from_xlsx`.
- Add `headers` support to `File_Format.Excel`.
- Altered default read for Excel to be the first sheet.
- Altered behavior so that single cells grow down and right when reading a sheet.
- Altered `Excel_Range` so it knows whether an address refers to a single cell or a 1x1 range.
# Important Notes
- Renamed `Range` to `Cell_Range` to avoid a name clash.
A semi-manual s/this/self applied to the whole standard library.
Related to https://www.pivotaltracker.com/story/show/182328601
In the compiler, promoted the use of constants instead of hardcoded `this`/`self` whenever possible.
# Important Notes
The PR **does not** require an explicit `self` parameter declaration for methods, as this part of the design is still under consideration.
This PR merges the existing variants of `LiteralNode` (`Integer`, `BigInteger`, `Decimal`, `Text`) into a single `LiteralNode`. It adds a `PatchableLiteralNode` variant (with a non-`final` `value` field) and uses `Node.replace` to modify the AST to be patchable. With this change, one can remove the `UnwindHelper` workaround, as `IdExecutionInstrument` now sees _patched_ return values without any tricks.
Changelog:
- Describe the `EditKind` optional parameter that allows enabling engine-side optimizations, like value updates
- Describe the `text/openBuffer` message required for the lazy visualizations feature
This introduces a tiny alternative to our stdlib that can be used for testing the interpreter. There are two main advantages of such a solution:
1. Performance: on my machine, `runtime-with-instruments/test` drops from 146s to 65s, while `runtime/test` drops from 165s to 51s. >6 mins total becoming <2 mins total is awesome. This alone means I'll drink less coffee during these breaks and will be healthier.
2. Better separation of concerns: I am currently working on a feature that breaks _all_ Enso code. The dependency of the interpreter tests on the stdlib means I have no means of incremental testing; ALL of the stdlib must compile. This is horrible, rendered my work impossible, and resulted in this PR.
[Task link](https://www.pivotaltracker.com/story/show/182194574).
[ci no changelog needed]
This PR implements a new selection box that will replace the old (not really working) one in the component browser. The old selection box didn't work well with the headers of the component groups, so we were forced to adopt a considerably more involved implementation.
The new implementation duplicates some visual components and places them in a separate layer. Then, a rectangular mask cuts off everything that is not "selected". This way:
- We have more control over what the selected entries should look like.
- We can easily support the multi-layer structure of the component groups with headers.
- We avoid problems with nested masks that our renderer doesn't support at the moment.
To be more precise, we duplicate the following:
- The background of the component group, which becomes the "fill" of the selection.
- Entry text and icons, so we can alter them easily.
- The header background and header text. By placing them in separate scene layers we ensure the correct rendering order.
https://user-images.githubusercontent.com/6566674/173657899-1067f538-4329-44f9-9dc2-78c8a4708b5a.mp4
# Important Notes
- This PR implements the base of our future selection mechanism; selecting entries with the mouse and keyboard still has several issues that will be fixed in future tasks.
- The scrolling behavior will also be improved in future tasks. Right now we only restrict the selection box position so that it never leaves the borders of the component group.
- I added a new function to `add` shapes to new layers in a non-exclusive way (we had only `add_exclusive` before). I have no idea how we managed without this feature until now, even though we mention it a lot in the docs.
- The demo scene restricts the position of the selection box for one-column component groups but does not for the wide component group.
Put information about Virtual Component Groups in the Hierarchical Action List.
https://www.pivotaltracker.com/story/show/181865548
# Important Notes
- This PR implements the subtask 2 of 2 in the ["Virtual Component Groups in the Hierarchical Action List" task](https://www.pivotaltracker.com/story/show/181865548).
- Note: the PR description does not include a screencast because the changes in this PR have no visual effect at this moment. The result of these changes would only be observable in the Component Browser, which is not merged yet. As described in the task's Acceptance Criteria, the changes in this PR are currently covered only by tests.
- Manual integration testing with the Engine showed that a response to an `executionContext/getComponentGroups` request is non-empty only after an `executionContext/executionComplete` message is received by the IDE. (See also [a discussion on Discord](https://discord.com/channels/401396655599124480/983669600854106112).) This was not known by the IDE team or documented before, and the existing code was modified to take this into account. The protocol docs are expected to be updated by the Engine team.
- A list of component groups looked up in the suggestion database is cached in the node searcher as an optimization.
[ci no changelog needed]
This change makes sure that the Runtime configuration of `runtime` is listed
as a dependency of `runtime-with-instruments`.
That way `buildEngineDistribution`, which indirectly depends on
`runtime-with-instruments/assembly`, triggers compilation of std-bits
if necessary.
# Important Notes
Minor adjustments for a problem introduced in https://github.com/enso-org/enso/pull/3509
Remove a `Symbol` from the `SymbolRegistry` when its `SpriteSystem` is dropped.
This fixes the remaining buffer leak (after #3504) in https://www.pivotaltracker.com/story/show/181943457
# Important Notes
- The `SymbolRegistry` now assigns unique `SymbolId`s, so that we can tell if a `SymbolId` refers to a `Symbol` that has already been unregistered (this shouldn't happen, but it's not statically obvious that it doesn't, so if it occurs we shouldn't misbehave). See the sketch after these notes.
- Also fix a bug in how `buffer_count` was tracked (we were decrementing more than incrementing!).
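A minimal sketch of the never-reused-id scheme described above (the real `SymbolRegistry` manages EnsoGL symbols, not strings; the types here are stand-ins):
```
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct SymbolId(u64);

#[derive(Default)]
struct SymbolRegistry {
    next_id: u64,
    symbols: HashMap<SymbolId, String>, // stand-in for the real `Symbol` type
}

impl SymbolRegistry {
    fn register(&mut self, symbol: String) -> SymbolId {
        let id = SymbolId(self.next_id);
        self.next_id += 1; // ids are monotonically increasing, never reused
        self.symbols.insert(id, symbol);
        id
    }

    fn unregister(&mut self, id: SymbolId) {
        self.symbols.remove(&id);
    }

    // `None` signals a stale id, i.e. a `Symbol` that was already unregistered.
    fn lookup(&self, id: SymbolId) -> Option<&String> {
        self.symbols.get(&id)
    }
}
```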
- Removed `select` method.
- Removed `group` method.
- Removed `Aggregate_Table` type.
- Removed `Order_Rule` type.
- Removed `sort` method from Table.
- Expanded comments on `order_by`.
- Updated comment on `aggregate` on Database.
- Updated Visualisation to use new APIs.
- Updated Data Science examples to use new APIs.
- Moved Examples test out of Tests to own test.
# Important Notes
Need to get Examples_Tests added to CI.
Implements https://www.pivotaltracker.com/story/show/182309559
This task implements common scaffolding for `Table.write`, so that the particular implementations for the Delimited and Excel file formats can be done in parallel.
* Fixes a regression for watching the IDE in the dev profile.
* Adds support for passing cargo-watch options.
* General improvements and cleanups around watch commands.
New plan to [fix the `sbt` build](https://www.pivotaltracker.com/n/projects/2539304/stories/182209126) and its annoying warning:
```
log.error(
"Truffle Instrumentation is not up to date, " +
"which will lead to runtime errors\n" +
"Fixes have been applied to ensure consistent Instrumentation state, " +
"but compilation has to be triggered again.\n" +
"Please re-run the previous command.\n" +
"(If this for some reason fails, " +
s"please do a clean build of the $projectName project)"
)
```
Since it is hard to fix `sbt` incremental compilation, let's restructure our project sources so that each `@TruffleInstrument` and `@TruffleLanguage` registration is in an individual compilation unit. Each such unit is either compiled or not compiled as a whole; that eliminates the `sbt` incremental compilation issues without addressing them in `sbt` itself.
fa2cf6a33ec4a5b2e3370e1b22c2b5f712286a75 is the first step: it introduces `IdExecutionService` and moves all of the `IdExecutionInstrument` API up to that interface. The rest of the `runtime` project then depends only on `IdExecutionService`. This refactoring allows us to move `IdExecutionInstrument` out of the `runtime` project into an independent compilation unit.
Auto-generate all builtin methods for the builtin `File` type from method signatures.
Similarly, for `ManagedResource` and `Warning`.
Additionally, support for specializations for overloaded and non-overloaded methods is added.
Coverage can be tracked by the number of hard-coded builtin classes that are now deleted.
## Important notes
Notice how `type File` now lacks the `prim_file` field and how we were able to get rid of all of those
propagating method calls without writing a single builtin node class.
Similarly, `ManagedResource` and `Warning` are now builtins and the `Prim_Warnings` stub is now gone.
### Pull Request Description
Using the new tooling (#3491), I investigated the **performance / compile-time tradeoff** of different codegen options for release mode builds. By scripting the testing procedure, I was able to explore many possible combinations of options, which is important because their interactions (on both application performance and build time) are complex. I found **two candidate profiles** that offer specific advantages over the current `release` settings (`baseline`):
- `thin16`: Supports incremental compiles in 1/3 the time of `baseline` in common cases. Application runs about 2% slower than `baseline`.
- `fat1-O4`: Application performs 13% better than `baseline`. Compile time is almost 3x `baseline`, and non-incremental.
(See key in first chart for the settings defining these profiles.)
We can build faster or run faster, though not in the same build. Because the effect sizes are large enough to be impactful to developer and user experience, respectively, I think we should consider having it both ways. We could **split the `release` profile** into two profiles to serve different purposes:
- `release`: A profile that supports fast developer iteration, while offering realistic performance.
- `production`: A maximally-optimized profile, for nightly builds and actual releases.
Since `wasm-pack` doesn't currently support custom profiles (rustwasm/wasm-pack#1111), we can't use a Cargo profile for `production`; however, we can implement our own profile by overriding rustc flags.
### Performance details
![perf](https://user-images.githubusercontent.com/1047859/170788530-ab6d7910-5253-4a2b-b432-8bfa0b4735ba.png)
As you can see, `thin16` is slightly slower than `baseline`; `fat1-O4` is dramatically faster.
<details>
<summary>Methodology (click to show)</summary>
I developed a procedure for benchmarking "whole application" performance, using the new "open project" workflow (which opens the IDE and loads a complex project), and some statistical analysis to account for variance. To gather this data:
Build the application with profiling:
`./run.sh ide build --profiling-level=debug`
Run the `open_project` workflow repeatedly:
`for i in $(seq 0 9); do dist/ide/linux-unpacked/enso --entry-point profile --workflow open_project --save-profile open_project_thin16_${i}.json; done`
For each profile recorded, take the new `total_self_time` output of the `intervals` tool; gather into CSV:
`echo $(for i in $(seq 0 9); do target/rust/debug/intervals < open_project_thin16_${i}.json | tail -n1 | awk '{print $2}'; done)`
(Note that the output of intervals should not be considered stable; this command may need modification in the future. Eventually it would be nice to support formatted outputs...)
The data is ready to graph. I used the `boxplot` method of the [seaborn](https://seaborn.pydata.org/index.html) package, in order to show the distribution of data.
</details>
#### Build times
![thin16](https://user-images.githubusercontent.com/1047859/170788539-1578e41b-bc30-4f30-9b71-0b0181322fa5.png)
In the case of changing a file in `enso-prelude`, with the current `baseline` settings rebuilding takes over 3 minutes. With the `thin16` settings, the same rebuild completes in 40 seconds.
(To gather this data on different hardware or in the future, just run the new `bench-build.sh` script for each case to be measured.)
Implemented the `order_by` function with support for all modes of operation.
Added support for case-insensitive natural ordering.
# Important Notes
- Improved `MultiValueIndex`/`MultiValueKey` to not create loads of arrays.
- Adjusted the hash code for `MultiValueKey` to use a simple algorithm.
- Added `Text_Utils.compare_normalized_ignoring_case` to allow for case-insensitive comparisons.
- Fixed issues with `ObjectComparator` and added some unit tests for it.
[ci no changelog needed]
This PR implements a new helper for the future Component Browser: `component_group::multi::Wrapper`. It propagates FRP events from multiple component groups and ensures that only a single component group is focused at all times (a simplified sketch of this invariant is included at the end of these notes).
See the updated component group demo scene (the console log shows propagated FRP events from all component groups):
https://user-images.githubusercontent.com/6566674/172359141-8ea6f1ba-e357-4c1b-852a-adb4d5207e03.mp4
- Fixed the `define_endpoints_2!` macro: FRP endpoints for `focus` events weren't connected properly.
- List View now uses an overlay shape to catch mouse events; this allows a much easier implementation of `is_header_selected` in the component group.
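For illustration, the single-focus invariant the wrapper enforces can be reduced to the following sketch (the real implementation is FRP-based; the types here are stand-ins, not the actual `multi::Wrapper` API):
```
// Toy model: focusing one group defocuses all others, so at most one
// component group is focused at any time.
struct Group {
    name: String,
    focused: bool,
}

struct Wrapper {
    groups: Vec<Group>,
}

impl Wrapper {
    fn focus(&mut self, index: usize) {
        for (i, group) in self.groups.iter_mut().enumerate() {
            group.focused = i == index;
        }
    }

    fn focused(&self) -> Option<&Group> {
        self.groups.iter().find(|g| g.focused)
    }
}
```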