Since the cloud dashboard is now embedded in the Enso repository, this PR adds owners for the dashboard libraries. It also makes @ somebody1234 the owner of the ESLint-specific files.
Changelog:
- add the results of the `scalafmtSbt` command, reformatting the build files.
- add the `scalafmtSbtCheck` check to the CI formatting workflow
Fixing the Node.js dependencies, as:
```bash
enso/tools/enso4igv$ mvn clean install -Pvsix
```
was broken. Also making sure the VSIX build is part of the _actions workflow_.
# Important Notes
To reproduce/verify, remove `enso/tools/enso4igv/node_modules` first and try to build.
Upgrading to GraalVM 22.3.0.
# Important Notes
- Removed all deprecated `FrameSlot` usages and replaced them with frame indexes (plain integers).
- Added more information to `AliasAnalysis` so that it also gathers these indexes.
- Added the quick-build mode option to `native-image` as the default for non-release builds.
- `graaljs` and `native-image` should now be downloaded automatically via `gu`, as dependencies.
- Removed the `engine-runner-native` project - the native image is now built straight from `engine-runner`.
- We used to have `engine-runner-native` without `sqldf` on the classpath as a workaround for an internal native-image bug.
- Fixed the Chrome inspector integration so that it shows values of local variables both for the current stack frame and for caller stack frames.
- There are still many issues with debugging in general; for example, when there is a polyglot value among local variables, a `NullPointerException` is thrown and no values are displayed.
- Removed some deprecated `native-image` options.
- Removed some deprecated Truffle API method calls.
This PR updates the build script:
* fixed an issue where the program version check was not triggering properly;
* improved the `git-clean` command to correctly clear Scala artifacts;
* added a `run.ps1` wrapper to the build script that works better with PowerShell than `run.cmd`;
* increased timeouts to work around failures in macOS nightly builds;
* replaced deprecated GitHub Actions APIs (set-output) with their new equivalents;
* worked around an issue with electron-builder (python2 lookup) on newer macOS runner images;
* completed the GUI and backend dispatches to the cloud;
* the release workflow now allows creating RC releases.
Each builtin `makeFunction` method currently has to be registered for reflection. This PR adds the registration of `SliceVectorMethodGen` to make the following example work again:
```bash
enso$ sbt engine-runner-native/buildNativeImage
enso$ ./runner --run engine/runner-native/src/test/resources/Factorial.enso 6
```
# Important Notes
This is a regression caused by #3724. Once the above code is executed in CI, we'll discover such breakages before integration.
This PR adds three new labels to interact with CI systems:
* `CI: Keep up to date` tells the Mergify bot to keep the PR up to date with `develop` but not merge it.
* `CI: No changelog needed` is meant to replace the old `[ci no changelog needed]` marker.
* `CI: Clean build required` marks the PR as incompatible with the main development branch and ensures that runners are cleaned before and after running the jobs related to this PR.
Note that currently only the first label is supported; the others will follow with the build script update. However, they need to be merged beforehand so that I can develop and test them.
This PR reenables code signing and notarization on macOS.
[ci no changelog needed]
# Important Notes
* electron-builder has been bumped, mostly to avoid the missing Python issue. A workaround for a regression in the Windows installer is provided as a patch.
* added the globals polyfill plugin to fix an issue with missing types like `Buffer` that was affecting nightly releases;
* fixed exit code propagation for Windows build script wrapper;
* bumped the build script and refreshed the generated workflows.
Includes https://github.com/enso-org/ci-build/pull/8
This PR reenables code signing on Windows.
Each Windows package built on CI should now be signed.
Additionally, some refactoring was done around the electron-builder config, so it is easier to use outside the build script and offers more configuration options.
Execution of `sbt runtime/bench` doesn't seem to be part of the gate. As such, a change to the Enso language syntax, standard libraries, runtime & co. can break the benchmark suite without being noticed. Integrating such a PR causes unnecessary disruption to others using the benchmarks.
Let's make sure that verification of the benchmarks (e.g. that they compile and execute without error) is part of the CI.
# Important Notes
Currently the gate will fail. The fix is being prepared in a parallel PR - #3639. When the two PRs are combined, the gate will succeed again.
This PR replaces webpack with esbuild as our bundler.
The change leads to an out-of-the-box ~5x improvement in bundling times, reducing the latency of watch-based workflows.
Along with this, a new development server (with live-reload capability) has been introduced to support the watch command.
[ci no changelog needed]
### Important Notes
* the workflow for checking docs has been removed because it was using an outdated prettier version and caused trouble, while the same check is performed in a better way by the GUI/Lint job;
* introduced a little more TypeScript in the scripts in place of JS, usually with minimal changes.
Implements generation of Java AST types from the Rust AST type definitions, with support for deserializing, in Java, syntax trees created in Rust.
### New Libraries
#### `enso-reflect`
Implements a `#[derive(Reflect)]` macro to enable runtime analysis of datatypes. The macro interface includes helper attributes; **the Rust types and the `reflect` attributes applied to them fully determine the Java types** ultimately produced (by `enso-metamodel`). This is the most important API, as it is used in the subject crates (`enso-parser`, and dependencies with types used in the AST). [Module docs](https://github.com/enso-org/enso/blob/wip/kw/parser/ast-transpiler/lib/rust/reflect/macros/src/lib.rs).
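As a rough illustration, deriving `Reflect` on an AST type might look like the sketch below; the types and fields are invented for illustration, and any helper attributes beyond the plain derive are omitted.
```rust
// Hypothetical AST types annotated for reflection; the field layout is made up
// for illustration, only `#[derive(Reflect)]` itself comes from this PR.
use enso_reflect::Reflect;

#[derive(Reflect)]
pub struct Token {
    pub left_offset: u32,
    pub code:        String,
}

#[derive(Reflect)]
pub enum Tree {
    Ident { token: Token },
    App { func: Box<Tree>, arg: Box<Tree> },
}
```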
#### `enso-metamodel`
Provides models of data models in Rust/Java/Meta (a highly abstracted, language-independent model; I have referred to it before as the "generic representation", but that was an overloaded term).
The high-level interface consists of operations on data models, and between them. For example, the only operations needed by [the binary that drives datatype transpilation](https://github.com/enso-org/enso/blob/wip/kw/parser/ast-transpiler/lib/rust/parser/generate-java/src/main.rs) are: `rust::to_meta`, `java::from_meta`, `java::transform::optional_to_null`, `java::to_syntax`.
The low-level interface consists of direct usage of the datatypes; this is used by [the module that implements some serialization overrides](https://github.com/enso-org/enso/blob/wip/kw/parser/ast-transpiler/lib/rust/parser/generate-java/src/serialization.rs) (so that the Java interface to `Code` references can produce `String`s on demand based on serialized offset/length pairs). The serialization override mechanism is based on customizing, not replacing, the generated deserialization methods, so as to be as robust as possible to changes in the Rust source or in the transpilation process.
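For orientation, here is a hedged sketch of how those high-level operations might compose in the driver binary; the four operation names come from the list above, while the signatures, intermediate values, and the package name are assumptions.
```rust
// Hedged sketch of the transpilation pipeline; only the four operation names
// are taken from the description above, everything else is a guess.
use enso_metamodel::{java, rust};
use enso_reflect::Reflect;

fn main() {
    // Reflect the Rust AST root type and lift it into the language-independent
    // "meta" model.
    let (meta, _rust_to_meta) = rust::to_meta(enso_parser::syntax::Tree::reflect());
    // Lower the meta model to a Java data model and normalize optional fields.
    let mut java = java::from_meta(&meta);
    java::transform::optional_to_null(&mut java);
    // Render the Java model as source text (the package name is a placeholder).
    print!("{}", java::to_syntax(&java, "org.enso.syntax2"));
}
```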
### Important Notes
- Rust/Java serialization is exhaustively tested for structural compatibility. A function [`metamodel::meta::serialization::testcases`](https://github.com/enso-org/enso/blob/wip/kw/parser/ast-transpiler/lib/rust/metamodel/src/meta/serialization.rs) uses `reflect`-derived data to generate serialized representations of ASTs to use as test cases; its should-accept cases cover every type a tree can contain, and it also produces a representative set of should-reject cases (a test sketch follows after this list). A Rust `#[test]` confirms that these cases are accepted/rejected as expected, and generated Java tests (see Binaries below) check the generated Java deserialization code against the same test cases.
- Deserializing `Code` is untested. The mechanism is in place (in Rust, we serialize only the offset/length of the `Cow`; in Java, during deserialization we obtain a context object holding a buffer for all string data; the accessor generated in Java uses the buffer and the offset/length to return `String`s), but it will be easier to test once we have implemented actually parsing something and instantiating the `Cow`s with source code.
- `#[tagged_enum]` [now supports](https://github.com/enso-org/enso/blob/wip/kw/parser/ast-transpiler/lib/rust/shapely/macros/src/tagged_enum.rs#L36-L51) control over what is done with container-level attributes; they can be applied to the container and variants (default), only to the container, or only to variants.
- Generation of `sealed` classes is supported, but currently disabled by `TARGET_VERSION` in `metamodel::java::syntax` so that tests don't require Java 15 to run. (The same logic is run either way; there is a shallow difference in output.)
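A rough shape of the Rust side of that exhaustive check; `testcases()` is named in the notes above, but the structure of its result and the `deserialize_tree` entry point are placeholders for the derived deserializer.
```rust
// Hedged sketch only: the `accept`/`reject` fields and `deserialize_tree` are
// placeholders; only `testcases()` itself is named in the notes above.
#[test]
fn accept_and_reject_cases() {
    let cases = enso_metamodel::meta::serialization::testcases();
    for case in &cases.accept {
        // Every should-accept case must deserialize into a syntax tree.
        assert!(deserialize_tree(case).is_ok());
    }
    for case in &cases.reject {
        // Every should-reject case must be refused by the deserializer.
        assert!(deserialize_tree(case).is_err());
    }
}
```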
### Binaries
The `enso-parser-generate-java` crate defines several binaries:
- `enso-parser-generate-java`: Performs the transpilation; after integration, this will be invoked by the build script.
- `java-tests`: Generates the Java code that tests format deserialization; after integration this command will be invoked by the build script, and its Java output compiled and run during testing.
- `graph-rust`/`graph-meta`/`graph-java`: Produce GraphViz representations of data models in different typesystems; these are for developing and understanding model transformations.
Until integration, a **script regenerates the Java and runs the format tests: `./tools/parser_generate_java.sh`**. The generated code can be browsed in `target/generated_java`.
This PR adds sources for Enso language support in IGV (and NetBeans). The support is based on a TextMate grammar shown in the editor and on registration of the Enso language so that IGV can find it. This PR also adds a new GitHub Actions workflow file to build the project using Maven.
- Removed `select` method.
- Removed `group` method.
- Removed `Aggregate_Table` type.
- Removed `Order_Rule` type.
- Removed `sort` method from Table.
- Expanded comments on `order_by`.
- Updated the comment on `aggregate` in Database.
- Updated Visualisation to use the new APIs.
- Updated the Data Science examples to use the new APIs.
- Moved the Examples tests out of Tests into their own test project.
# Important Notes
Need to get Examples_Tests added to CI.
### Pull Request Description
Using the new tooling (#3491), I investigated the **performance / compile-time tradeoff** of different codegen options for release mode builds. By scripting the testing procedure, I was able to explore many possible combinations of options, which is important because their interactions (on both application performance and build time) are complex. I found **two candidate profiles** that offer specific advantages over the current `release` settings (`baseline`):
- `thin16`: Supports incremental compiles in 1/3 the time of `baseline` in common cases. Application runs about 2% slower than `baseline`.
- `fat1-O4`: Application performs 13% better than `baseline`. Compile time is almost 3x `baseline`, and non-incremental.
(See key in first chart for the settings defining these profiles.)
We can build faster or run faster, though not in the same build. Because the effect sizes are large enough to be impactful to developer and user experience, respectively, I think we should consider having it both ways. We could **split the `release` profile** into two profiles to serve different purposes:
- `release`: A profile that supports fast developer iteration, while offering realistic performance.
- `production`: A maximally-optimized profile, for nightly builds and actual releases.
Since `wasm-pack` doesn't currently support custom profiles (rustwasm/wasm-pack#1111), we can't use a Cargo profile for `production`; however, we can implement our own profile by overriding rustc flags.
### Performance details
![perf](https://user-images.githubusercontent.com/1047859/170788530-ab6d7910-5253-4a2b-b432-8bfa0b4735ba.png)
As you can see, `thin16` is slightly slower than `baseline`; `fat1-O4` is dramatically faster.
<details>
<summary>Methodology (click to show)</summary>
I developed a procedure for benchmarking "whole application" performance, using the new "open project" workflow (which opens the IDE and loads a complex project), and some statistical analysis to account for variance. To gather this data:
Build the application with profiling:
`./run.sh ide build --profiling-level=debug`
Run the `open_project` workflow repeatedly:
`for i in $(seq 0 9); do dist/ide/linux-unpacked/enso --entry-point profile --workflow open_project --save-profile open_project_thin16_${i}.json; done`
For each profile recorded, take the new `total_self_time` output of the `intervals` tool; gather into CSV:
`echo $(for i in $(seq 0 9); do target/rust/debug/intervals < open_project_thin16_${i}.json | tail -n1 | awk '{print $2}'; done)`
(Note that the output of intervals should not be considered stable; this command may need modification in the future. Eventually it would be nice to support formatted outputs...)
The data is ready to graph. I used the `boxplot` method of the [seaborn](https://seaborn.pydata.org/index.html) package, in order to show the distribution of data.
</details>
#### Build times
![thin16](https://user-images.githubusercontent.com/1047859/170788539-1578e41b-bc30-4f30-9b71-0b0181322fa5.png)
In the case of changing a file in `enso-prelude`, with the current `baseline` settings rebuilding takes over 3 minutes. With the `thin16` settings, the same rebuild completes in 40 seconds.
(To gather this data on different hardware or in the future, just run the new `bench-build.sh` script for each case to be measured.)
- Change dev profile settings. Improves build performance; will not affect anything else. Details below.
- Introduce script for benchmarking various incremental builds. Usage is explained in the script comments.
- Add a line to `intervals` showing total main-thread CPU work logged in a profile; this can be used to compare the results of optimizations (I'll be starting a discussion informed by that data separately; this change just enables the tooling to report it).
* The bash entry point was renamed `run.sh` -> `run`. Thanks to that `./run` works both on Linux and Windows with PowerShell (sadly not on CMD).
* Everyone's favorite checks for WASM size and program versions are back. These can be disabled through `--wasm-size-limit=0` and `--skip-version-check` respectively. WASM size limit is stored in `build-config.yaml`.
* Improved diagnostics for the case when a downloaded CI run artifact archive cannot be extracted.
* Added GH API authentication to the build script calls on CI. This should fix the macOS build failures that were occurring from time to time. (Actually they were due to the runner being GitHub-hosted, not really an OS-specific issue by itself.)
* If a GH API Personal Access Token is provided, it will be validated up front. Otherwise it is difficult to tell later whether a failure was caused by a wrong PAT or by some other issue.
* Renamed `clean` to `git-clean` as per suggestion, to reduce the risk of a user accidentally deleting unstaged work.
* Whitelisted dependabot in the changelog checks, so PRs created by it are mergeable.
* Fixed an issue where wasm-pack-action (third party) randomly failed to recognize the latest version of wasm-pack (on macOS runners), leading to failed builds.
* Build logs can be filtered using the `ENSO_BUILD_LOG` environment variable (see the sketch after this list). See https://docs.rs/tracing-subscriber/0.3.11/tracing_subscriber/struct.EnvFilter.html#directives for the supported syntax.
* Improved the help for the ci-run source, to make it clear that a PAT token is required and what scope is expected there.
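The directive syntax is the standard `tracing-subscriber` `EnvFilter` one. A minimal sketch of how such a variable can be consumed, assuming the build script wires up logging roughly this way (only the `ENSO_BUILD_LOG` name comes from this PR):
```rust
// Minimal sketch, assuming logging is initialized roughly like this; only the
// `ENSO_BUILD_LOG` variable name is taken from the PR description above.
use tracing_subscriber::EnvFilter;

fn init_logging() {
    // E.g. `ENSO_BUILD_LOG="warn,enso_build=debug"` silences most crates while
    // keeping detailed logs from the build script itself.
    let filter = EnvFilter::try_from_env("ENSO_BUILD_LOG")
        .unwrap_or_else(|_| EnvFilter::new("info"));
    tracing_subscriber::fmt().with_env_filter(filter).init();
}
```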
Also, JS parts were updated with some cleanups and fixes following the changes made when introducing the build script.
Before, when running Enso with `-ea`, some assertions were broken and the interpreter would not start.
This PR fixes two very minor bugs that were the cause of this - now we can successfully run Enso with `-ea`, to test that any assertions in Truffle or in our own libraries are indeed satisfied.
Additionally, this PR adds a setting to SBT that ensures that IntelliJ uses the right language level (Java 17) for our projects.
Implements infrastructure for new aggregations in the Database. It comes with only some basic aggregations and limited error-handling. More aggregations and problem handling will be added in subsequent PRs.
# Important Notes
This introduces basic aggregations using our existing codegen and sets up our testing infrastructure so that the database backends can run the same aggregate tests as the in-memory backend.
Many aggregations are not yet implemented - they will be added in subsequent tasks.
There are some TODOs left - they will be addressed in the next tasks.
* Profiling application details
Add enough profiling to account for every missed frame during startup.
See https://www.pivotaltracker.com/story/show/181499507
* Build ActiveInterval hierarchy in profiler_data
* update doctests / await_!
* docs/formatting/naming
* more graph modes
* increase WASM size
Due to new render-profile-flamegraph scene. We should remove these from the main release WASM blob one way or another.
* lint
* fix a test
* Organization (feedback)
* Add @wdanilo to Cargo.lock CODEOWNERS
As discussed after my previous PR got stuck waiting for Cargo.lock review.
* fix doctests
* Update docs. Removed a limitation.
* Creating a new node with the (+) button (#3278)
[The Task](https://www.pivotaltracker.com/story/show/180887253)
A new (+) button appeared in the bottom-left corner. It may be clicked to open the searcher in the middle of the scene, as an alternative to the Tab key.
https://user-images.githubusercontent.com/3919101/154514279-7972ed6a-0203-47cb-9a09-82dba948cf2f.mp4
* The `window_control_buttons::common` module was extracted into a separate crate, `ensogl-component-button`, almost without change.
* This includes a major refactoring of node adding in the Graph Editor. The whole responsibility for adding new nodes (and starting their editing) was moved to the Graph Editor - the Project View only reacts to GE events to show the searcher properly.
* The status bar was moved from the bottom-left corner to the middle-top of the scene. It does not collide with the (+) button, and plays a "notification" role anyway.
* The `interface` debug scene was buggy. The problem was with one expression's span tree; after replacing it, the scene works.
* I've removed the "new searcher" API, as it is completely outdated.
* I've changed the code owners of the integration tests to the GUI team, as it is the team writing most of the integration tests (in Rust).
* Fix regression #181528359
* Add docs & remove unused function
* Fix & enable native Rust tests
* Fix formatting
Co-authored-by: Adam Obuchowicz <adam.obuchowicz@enso.org>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
[ci no changelog needed]
This PR reverts commit [0836ce741d](0836ce741d) because of the following regression:
To reproduce:
1. Open a default project.
2. Without doing anything else, cmd + click on any node to edit it.
3. Abort editing by pressing escape.
4. The top-most node disappears (it is actually removed from the scene).
If you start editing the bottom node, you will also see a visible regression in the node searcher's position.
See thread https://discord.com/channels/401396655599124480/950730235719065620/950731247909478410 for details.
API for storing metadata.
See: https://www.pivotaltracker.com/story/show/181149277
# Important Notes
**New APIs**:
- Storing metadata is implemented with `profiler::MetadataLogger` (a short producer-side sketch follows below).
- A full metadata storage/retrieval example is in [the top-level doctests](https://github.com/enso-org/enso/blob/wip/kw/profiling-metadata-api/lib/rust/profiler/data/src/lib.rs) for profiler::data, a crate which implements an API for profiling data consumers (it abstracts away the low-level details of the event log, and checks its invariants in the process) [after review of this new API here I'll open a PR to add it to the design doc].
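A minimal sketch of the producer side, assuming a `new`/`log` style interface on `MetadataLogger` and an invented payload type:
```rust
// Hedged sketch: the payload type is invented, and the exact constructor and
// logging method names on `MetadataLogger` are assumptions.
use serde::Serialize;

#[derive(Serialize)]
struct MouseEvent { x: f32, y: f32 }

fn example() {
    // One logger per metadata kind; the label identifies the kind in the log.
    let mouse_logger = enso_profiler::MetadataLogger::new("MouseEvent");
    // Each call appends an entry to the profiling event log.
    mouse_logger.log(MouseEvent { x: 12.0, y: 34.0 });
}
```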
**Implementation**:
- `profiler::Event` is parameterized by a metadata type, so that different types of metadata can be dependency-injected into it.
- A data consumer defines its metadata type as an enum of all the kinds of metadata it is interested in (a sketch follows after these notes).
- Producing the metadata enum is accomplished without defining its type (which would require dependencies from around the app): a `MetadataLogger` internally uses a serialization helper, `Variant`, to serialize its variant of the metadata enum without knowledge of the other possible variants.
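On the consumer side, the metadata enum described above might look roughly like this; the variant names are invented, and only the pattern of one enum listing the kinds of interest comes from the notes.
```rust
// Hedged sketch of a consumer-side metadata enum; the variants are invented.
use serde::Deserialize;

#[derive(Deserialize)]
enum Metadata {
    MouseEvent { x: f32, y: f32 },
    RenderStats { fps: f32 },
}
```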
**Performance impact**: still in the low ns/measurement range, comparable to pushing to a vec.
*Note*: `LocalVecBuilder` is currently present under the name `Log`, which is accurate but probably too overloaded. I'd like to find the right name for it, document it with examples, and move it to its own crate under data-structures, but I don't want that to hold up this PR.
[The Task](https://www.pivotaltracker.com/story/show/180887253)
A new (+) button appeared in the bottom-left corner. It may be clicked to open the searcher in the middle of the scene, as an alternative to the Tab key.
https://user-images.githubusercontent.com/3919101/154514279-7972ed6a-0203-47cb-9a09-82dba948cf2f.mp4
# Important Notes
* The `window_control_buttons::common` module was extracted into a separate crate, `ensogl-component-button`, almost without change.
* This includes a major refactoring of node adding in the Graph Editor. The whole responsibility for adding new nodes (and starting their editing) was moved to the Graph Editor - the Project View only reacts to GE events to show the searcher properly.
* The status bar was moved from the bottom-left corner to the middle-top of the scene. It does not collide with the (+) button, and plays a "notification" role anyway.
* The `interface` debug scene was buggy. The problem was with one expression's span tree; after replacing it, the scene works.
* I've removed the "new searcher" API, as it is completely outdated.
* I've changed the code owners of the integration tests to the GUI team, as it is the team writing most of the integration tests (in Rust).