Now that #9815 has landed, we can finally bump `electron-builder` to the latest release. As this brings python3 support out of the box, the runtime-bump workaround on macOS runners can be removed.
This PR introduces a new installer and uninstaller for the Windows platform.
Both are written in Rust and compiled to a single executable. The executable has no dependencies (other than what is included in Windows) and links the C++ runtime statically where needed.
The change is motivated by numerous issues with the `electron-builder`-generated installers. The new installer should behave better, avoid issues with long paths, and unblock the `electron-builder` upgrade (which will significantly simplify the workflow definitions).
To build an installer, one needs to provide the unpacked application (generated by `electron-builder`) and the `electron-builder` configuration (with a few minor extensions). Code signing is also supported.
GitHub made arm64 runners generally available and changed the `macos-latest` label to point to them.
The runner architecture is coupled to the GH-hosted runner's OS version: `macos-13` is the last image that runs on x64.
This PR essentially brings back the previous behavior by explicitly requesting that all our x64 macOS jobs run on `macos-12` (as before).
We should eventually migrate to `macos-13` for x64 macOS and `macos-14`/`macos-latest` for arm64 macOS. However, that currently leads to issues with `npm install` getting stuck, so it should probably be reattempted after the CI rework.
- #9779 introduced (incorrect) detection to determine when to inject the Google Analytics tag. Instead, the tag should be injected by CI, because sending Google Analytics events is undesirable in development mode.
# Important Notes
None
This PR bumps the FlatBuffers version used by the backend to `24.3.25` (the latest version as of now).
Since the newer FlatBuffers releases come with prebuilt binaries for all platforms we target, we can simplify the build process by downloading the required `flatc` binary directly from the official FlatBuffers GitHub release page. This allows us to remove the dependency on `conda`, which was previously the only reliable way to get the outdated `flatc`.
The `conda` setup has been removed from the CI steps, and the relevant code has been removed from the build script.
The FlatBuffers version is no longer hard-coded in the Rust build script; it is inferred from the `build.sbt` definition (similarly to GraalVM).
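A minimal sketch of that inference, assuming the version is declared as a quoted literal in `build.sbt` (the variable name, parsing, and release-asset name below are illustrative, not the exact build-script code):

```rust
use std::fs;

/// Extracts the FlatBuffers version from a `build.sbt` line such as
/// `val flatbuffersVersion = "24.3.25"` (the variable name is an assumption).
fn flatbuffers_version(build_sbt: &str) -> Option<String> {
    build_sbt
        .lines()
        .find(|line| line.contains("flatbuffersVersion"))?
        .split('"')
        .nth(1)
        .map(str::to_owned)
}

fn main() {
    let build_sbt = fs::read_to_string("build.sbt").expect("cannot read build.sbt");
    let version = flatbuffers_version(&build_sbt).expect("version not found");
    // Prebuilt `flatc` binaries are attached to the official GitHub releases;
    // the asset name below is illustrative and varies per platform.
    println!(
        "https://github.com/google/flatbuffers/releases/download/v{version}/Linux.flatc.binary.g++-13.zip"
    );
}
```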
# Important Notes
This does not affect the GUI binary protocol implementation.
While I initially wanted to update it as well, this turned out to be fairly non-trivial.
The generated TS code has multiple issues and was significantly refactored by hand, so it cannot simply be regenerated. Work to address this problem is left as [a future task](https://github.com/enso-org/enso/issues/9658).
As the FlatBuffers binary protocol is guaranteed to be compatible between versions (unlike the generated sources), there should be no adverse effects from bumping `flatc` only on the backend side.
This PR adds a new file to the release: `assets.json`, which describes the assets available in the release.
The purpose is to have one persistent link, `https://github.com/enso-org/enso/releases/latest/download/assets.json`, that always provides the current download links and can be consumed by the website.
The release template has also been updated to use the same assets information source, rather than duplicating the artifact names.
Additionally, a release-validation step was added, so CI can alert us if one of the expected assets is missing.
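The exact schema of `assets.json` is defined by the release tooling; the sketch below is a hypothetical illustration of how such a file could be assembled, with made-up field and asset names:

```rust
use serde::Serialize;

/// Hypothetical shape of one entry in `assets.json`; the real schema is
/// defined by the release tooling and may differ.
#[derive(Serialize)]
struct Asset {
    name: String,
    url: String,
}

fn main() -> serde_json::Result<()> {
    let assets = vec![Asset {
        // Illustrative asset; real entries are generated from the release.
        name: "enso-bundle-linux-x86_64.tar.gz".into(),
        url: "https://github.com/enso-org/enso/releases/download/<tag>/<asset>".into(),
    }];
    println!("{}", serde_json::to_string_pretty(&assets)?);
    Ok(())
}
```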
This PR introduces [a new workflow — nightly checks](https://github.com/enso-org/enso/actions/workflows/nightly-tests.yml). It consists of the whole array of Backend checks:
* build check, Scala tests and Standard Library tests;
* covers both Community and Oracle (Enterprise) GraalVM editions (Linux-only);
* includes checks on the aarch64 macOS runner.
We do not want to run these checks on each PR due to limited runner capacity. By running them nightly, we can still catch any issues that arise on the `develop` branch.
# Important Notes
* [ ] Before merging, this requires updating the GH required checks list.
The `sysinfo` crate now returns bytes, not kilobytes.
This changed in `sysinfo` version `0.26.0`; I previously missed it while bumping the CI code dependencies.
The effects were not drastic, as the fast and slow paths were meant to be generally equivalent.
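A minimal illustration of the unit change, written against a recent `sysinfo` release (where the methods are inherent on `System`):

```rust
use sysinfo::System;

fn main() {
    let mut sys = System::new();
    sys.refresh_memory();
    // Since sysinfo 0.26, this value is in bytes; code that still treated it
    // as kilobytes would overestimate memory by a factor of 1024.
    let total_bytes = sys.total_memory();
    println!("total memory: {} MiB", total_bytes / (1024 * 1024));
}
```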
Removes a number of Rust crates that we no longer need but that added significant install, build, and test time to the Rust parser.
Most significantly, this removes `enso-web` and `enso-shapely` and gets rid of many no-longer-necessary `#![feature]`s. Two still-used proc macros were moved from shapely to prelude. The last remaining usage of `web-sys` is within the logger (`console.log`), but we may actually want to keep that one.
- Closes #9284
- Now our tests run without the default `AWS_` config, thus ensuring that the tested setups work in a clean environment.
- In the end, more complicated logic was needed for bucket access: apparently the AWS SDK only allows certain operations on a bucket when the client is connected to that bucket's region, so bucket-region detection had to be implemented (see the sketch after this list).
- Added `AWS_Region` widget based on autoscoping.
- Fixed `AWS_Credential.profile_names` crashing if no AWS config was found; it now returns no profiles in that case. Added a regression test.
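The actual region detection lives in Enso's AWS library; as an illustration of the underlying idea only (not the code in this PR), here is a sketch using the Rust AWS SDK's `GetBucketLocation`:

```rust
use aws_sdk_s3::Client;

/// Detects the region a bucket lives in, so that subsequent operations can be
/// issued by a client configured for that region. Illustrative sketch only.
async fn bucket_region(client: &Client, bucket: &str) -> Option<String> {
    let resp = client
        .get_bucket_location()
        .bucket(bucket)
        .send()
        .await
        .ok()?;
    // A missing location constraint historically denotes us-east-1.
    match resp.location_constraint() {
        Some(constraint) => Some(constraint.as_str().to_owned()),
        None => Some("us-east-1".to_owned()),
    }
}
```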
This PR updates the Rust toolchain to a recent nightly.
Most of the changes fix newly added warnings and adjust feature flags. The formatter also changed its behavior slightly, causing some whitespace changes.
Other points:
* Changed the debug level of the `buildscript` profile to `line-tables-only` — this should improve the build times and space usage somewhat.
* Moved the lint configuration to the workspace `Cargo.toml` definition. Adjusted the formatter appropriately.
* Removed auto-generated IntelliJ run configurations, as they are not useful anymore.
* Added to our codebase a few trivial stdlib nightly functions that were removed.
* Bumped many dependencies but still not all:
* `clap` bump encountered https://github.com/clap-rs/clap/issues/5407 — for now the warnings were silenced by the lint config.
* `octocrab` — our fork diverged too far from the original; it needs more refactoring.
* `derivative` — unmaintained, with no updated version despite introducing warnings in the generated code. There is no direct replacement.
This PR removes the `enso-pack` (`ensogl-pack`) crate.
It keeps the `enso-runner` JS package, as it is used for CLI argument parsing and logging. The runner should probably be refactored (and possibly removed altogether).
# Important Notes
I've temporarily extracted `enso-runner` to the `lib/js` directory, as I wanted to avoid keeping a pure JS library under `lib/rust`. Attempts at integrating it with `app/ide-desktop` and family caused too much trouble for this PR. The expectation is that the package will be removed or moved elsewhere soon anyway.
If some benchmark fails in dry-run (`compileOnly`) mode, the whole process now exits with a non-zero return code. This also fixes the failing engine compiler benchmarks.
# Important Notes
Manually added failure:
```diff
diff --git a/engine/runtime-benchmarks/src/main/java/org/enso/interpreter/bench/benchmarks/semantic/ArrayProxyBenchmarks.java b/engine/runtime-benchmarks/src/main/java/org/enso/interpreter/bench/benchmarks/semantic/ArrayProxyBenchmarks.java
index c8d86cecc..f9f4d7cbc 100644
--- a/engine/runtime-benchmarks/src/main/java/org/enso/interpreter/bench/benchmarks/semantic/ArrayProxyBenchmarks.java
+++ b/engine/runtime-benchmarks/src/main/java/org/enso/interpreter/bench/benchmarks/semantic/ArrayProxyBenchmarks.java
@@ -95,7 +95,8 @@ public class ArrayProxyBenchmarks {
@Benchmark
public void sumOverComputingProxy(Blackhole matter) {
- performBenchmark(matter);
+ //performBenchmark(matter);
+ throw new AssertionError("My error");
}
@Benchmark
```
Running it with `sbt "-Dbench.compileOnly=true runtime-benchmarks/benchOnly org.enso.interpreter.bench.benchmarks.semantic.ArrayProxyBenchmarks.sumOverComputingProxy"` fails with:
```
[info] Running benchmarks [org.enso.interpreter.bench.benchmarks.semantic.ArrayProxyBenchmarks.sumOverComputingProxy] in compileOnly mode
[info] # JMH version: 1.36
[info] # VM version: JDK 21.0.2, Java HotSpot(TM) 64-Bit Server VM, 21.0.2+13-LTS-jvmci-23.1-b30
[info] # VM invoker: /home/pavel/.sdkman/candidates/java/21.0.2-graal/bin/java
[info] # VM options: -XX:ThreadPriorityPolicy=1 -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCIProduct -XX:-UnlockExperimentalVMOptions -Dslf4j.provider=org.slf4j.nop.NOPServiceProvider -Dbench.compileOnly=true --module-path=/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/sdk/nativeimage/23.1.2/nativeimage-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/sdk/word/23.1.2/word-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/sdk/jniutils/23.1.2/jniutils-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/sdk/collections/23.1.2/collections-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/polyglot/polyglot/23.1.2/polyglot-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/truffle/truffle-api/23.1.2/truffle-api-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/truffle/truffle-runtime/23.1.2/truffle-runtime-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/truffle/truffle-compiler/23.1.2/truffle-compiler-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/js/js-language/23.1.2/js-language-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/regex/regex/23.1.2/regex-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/shadowed/icu4j/23.1.2/icu4j-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/python/python-language/23.1.2/python-language-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/python/python-resources/23.1.2/python-resources-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/bouncycastle/bcutil-jdk18on/1.76/bcutil-jdk18on-1.76.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/bouncycastle/bcpkix-jdk18on/1.76/bcpkix-jdk18on-1.76.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/bouncycastle/bcprov-jdk18on/1.76/bcprov-jdk18on-1.76.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/llvm/llvm-api/23.1.2/llvm-api-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/truffle/truffle-nfi/23.1.2/truffle-nfi-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/truffle/truffle-nfi-libffi/23.1.2/truffle-nfi-libffi-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/tools/profiler-tool/23.1.2/profiler-tool-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/graalvm/shadowed/json/23.1.2/json-23.1.2.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/tukaani/xz/1.9/xz-1.9.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/slf4j/slf4j-api/2.0.9/slf4j-api-2.0.9.jar:/home/pavel/.cache/coursier/v1/https/repo1.maven.org/maven2/org/slf4j/slf4j-nop/2.0.9/slf4j-nop-2.0.9.jar:/home/pavel/dev/enso/runtime.jar --add-modules=org.enso.runtime --add-exports=org.slf4j.nop/org.slf4j.nop=org.slf4j
[info] # Blackhole mode: compiler (auto-detected, use -Djmh.blackhole.autoDetect=false to disable)
[info] # Warmup: <none>
[info] # Measurement: 1 iterations, 1 s each
[info] # Timeout: 10 min per iteration
[info] # Threads: 1 thread, will synchronize iterations
[info] # Benchmark mode: Average time, time/op
[info] # Benchmark: org.enso.interpreter.bench.benchmarks.semantic.ArrayProxyBenchmarks.sumOverComputingProxy
[info] # Run progress: 0.00% complete, ETA 00:00:01
[info] # Fork: N/A, test runs in the host VM
[info] # *** WARNING: Non-forked runs may silently omit JVM options, mess up profilers, disable compiler hints, etc. ***
[info] # *** WARNING: Use non-forked runs only for debugging purposes, not for actual performance runs. ***
[error] SLF4J: Attempting to load provider "org.slf4j.nop.NOPServiceProvider" specified via "slf4j.provider" system property
[info] Iteration 1: <failure>
[info] java.lang.AssertionError: My error
[info] at org.enso.interpreter.bench.benchmarks.semantic.ArrayProxyBenchmarks.sumOverComputingProxy(ArrayProxyBenchmarks.java:99)
[info] at org.enso.interpreter.bench.benchmarks.semantic.jmh_generated.ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.sumOverComputingProxy_avgt_jmhStub(ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.java:232)
[info] at org.enso.interpreter.bench.benchmarks.semantic.jmh_generated.ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.sumOverComputingProxy_AverageTime(ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.java:173)
[info] at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
[info] at java.base/java.lang.reflect.Method.invoke(Method.java:580)
[info] at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:475)
[info] at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:458)
[info] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
[info] at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
[info] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
[info] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
[info] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
[error] Benchmark run failed: Benchmark caught the exception
[info] at java.base/java.lang.Thread.run(Thread.java:1583)
[error] org.openjdk.jmh.runner.RunnerException: Benchmark caught the exception
[error] at org.openjdk.jmh.runner.Runner.runBenchmarks(Runner.java:575)
[error] at org.openjdk.jmh.runner.Runner.internalRun(Runner.java:310)
[error] at org.openjdk.jmh.runner.Runner.run(Runner.java:209)
[error] at org.enso.interpreter.bench.BenchmarksRunner.runCompileOnly(BenchmarksRunner.java:93)
[error] at org.enso.interpreter.bench.BenchmarksRunner.run(BenchmarksRunner.java:36)
[error] at org.enso.interpreter.bench.benchmarks.RuntimeBenchmarksRunner.main(RuntimeBenchmarksRunner.java:8)
[error] Caused by: org.openjdk.jmh.runner.BenchmarkException: Benchmark error during the run
[error] at org.openjdk.jmh.runner.BenchmarkHandler.runIteration(BenchmarkHandler.java:424)
[error] at org.openjdk.jmh.runner.BaseRunner.runBenchmark(BaseRunner.java:281)
[error] at org.openjdk.jmh.runner.BaseRunner.runBenchmark(BaseRunner.java:233)
[error] at org.openjdk.jmh.runner.BaseRunner.doSingle(BaseRunner.java:138)
[error] at org.openjdk.jmh.runner.BaseRunner.runBenchmarksEmbedded(BaseRunner.java:110)
[error] at org.openjdk.jmh.runner.Runner.runBenchmarks(Runner.java:555)
[error] ... 5 more
[error] Suppressed: java.lang.AssertionError: My error
[error] at org.enso.interpreter.bench.benchmarks.semantic.ArrayProxyBenchmarks.sumOverComputingProxy(ArrayProxyBenchmarks.java:99)
[error] at org.enso.interpreter.bench.benchmarks.semantic.jmh_generated.ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.sumOverComputingProxy_avgt_jmhStub(ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.java:232)
[error] at org.enso.interpreter.bench.benchmarks.semantic.jmh_generated.ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.sumOverComputingProxy_AverageTime(ArrayProxyBenchmarks_sumOverComputingProxy_jmhTest.java:173)
[error] at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
[error] at java.base/java.lang.reflect.Method.invoke(Method.java:580)
[error] at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:475)
[error] at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:458)
[error] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
[error] at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
[error] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
[error] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
[error] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
[error] at java.base/java.lang.Thread.run(Thread.java:1583)
[error] Nonzero exit code returned from runner: 1
[error] (Compile / run) Nonzero exit code returned from runner: 1
[error] Total time: 5 s, completed Mar 13, 2024, 12:49:59 PM
```
This PR:
* removes much of the logic related to building and packaging GUI1;
* makes `./run gui` and `./run ide` work with the new GUI;
* renames numerous references to "gui2" or the "new gui" to simply "gui", and likewise for "ide".
Adds an `Oracle GraalVM` configuration for some backend jobs. So far, the `Oracle GraalVM` jobs run only on Linux; the old jobs use `GraalVM CE`.
### Important Notes
- The JDK to download and use is deduced from the `JAVA_VENDOR` environment variable; by default, `GraalVM CE` is used (see the sketch after this list).
- sbt can be started with both GraalVM CE and Oracle GraalVM without any warnings.
- If you try to start sbt with a JDK from a different vendor but the same Java version, a warning is printed.
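A hedged sketch of how such a vendor deduction could look in the Rust build-script code (the accepted values and names below are assumptions, not the actual implementation):

```rust
/// Chooses the JDK distribution based on `JAVA_VENDOR`, defaulting to
/// GraalVM CE. The accepted values here are illustrative.
fn jdk_vendor() -> &'static str {
    match std::env::var("JAVA_VENDOR").as_deref() {
        Ok("Oracle") => "Oracle GraalVM",
        _ => "GraalVM CE",
    }
}
```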
Current list of jobs in the `Engine CI` workflow (these jobs are visible on this PR because they are scheduled to run on every PR):
- Engine (GraalVM CE) (linux, x86_64)
- Engine (GraalVM CE) (macos, x86_64)
- Engine (GraalVM CE) (windows, x86_64)
- **Engine (Oracle GraalVM) (linux, x86_64)**
- Scala Tests (GraalVM CE) (linux, x86_64)
- Scala Tests (GraalVM CE) (macos, x86_64)
- Scala Tests (GraalVM CE) (windows, x86_64)
- **Scala Tests (Oracle GraalVM) (linux, x86_64)**
- Standard Library Tests (GraalVM CE) (linux, x86_64)
- Standard Library Tests (GraalVM CE) (macos, x86_64)
- Standard Library Tests (GraalVM CE) (windows, x86_64)
- **Standard Library Tests (Oracle GraalVM) (linux, x86_64)**
- Verify License Packages (linux, x86_64)
Benchmark Engine workflow (not visible on this PR; it cannot be scheduled manually yet):
- Benchmark Engine (GraalVM CE)
- **Benchmark Engine (Oracle GraalVM)**
Benchmark Standard Libraries workflow (not visible on this PR; it cannot be scheduled manually yet):
- Benchmark Standard Libraries (GraalVM CE)
- **Benchmark Standard Libraries (Oracle GraalVM)**
Removed the `enso-types` crate, which had only one reference, in an unused part of the code. Removed some unused dependencies from `Cargo.toml` files.
# Important Notes
CI has a similar hiccup to the one before; please disregard it in this review for now.
- Closes https://github.com/enso-org/cloud-v2/issues/866
- Remove *all* references to client keys and API base URLs from the codebase.
- The app can still be built by external contributors. *However*, the cloud backend (among some other things) will be completely disabled, as the required keys and base URLs will be missing.
- Add an entry to `.gitignore` to allow `*.env` files in `app/ide-desktop/lib/dashboard/`
# Important Notes
- Tested (no `.env`; `.env` with prod backend; `.pbuchu.env`) on:
- `npm run dev` in `app/ide-desktop/lib/dashboard/`
- `./run ide build`
- `./run ide2 build`
- `./run gui watch`
Removes the old GUI1 code base and reduces the Rust code footprint by removing unused code.
# Important Notes
Updates build scripts and reformats part of the codebase with the autoformatter.
This PR contains the first set of changes for the Engine CI. The changes are small and effectively consist of:
1. Splitting out `verifyLicensePackages`. It now runs only on Linux. There are hardly any time benefits, as the actual job cost is dominated by the overhead of spinning up a new job — but it is not expensive in the big picture.
2. Splitting the Scala tests into a separate job. This is probably the biggest "atomic" piece of work we have.
3. Splitting the Standard Library tests into a separate job.
The time is nicely split across the jobs now. The last run has:
* 27 min for Scala tests;
* 25 min for Standard Library tests;
* 24 min for the "rest": the old job containing everything that has not been split.
While the total CPU time has increased (as the jobs no longer share a build context), the wall time has decreased significantly. Previously we had ~1 hour of wall time for the old monolithic job, so we are getting more than a 2x speedup.
The Scala tests job, now the slowest of the new jobs, is comparable with the native Rust tests, which are the slowest job across all CI checks (and those should improve once the old GUI is gone).
The PR is pretty minimal. Several future improvements can be made:
* Reorganizing and splitting other "heavy" jobs, like the native image generation.
* Reusing the built Engine distribution. However, this is probably a lower priority than I initially thought.
* Building the package takes only several minutes, so duplicating this work is not that expensive.
* The package is OS-specific.
* Scala tests don't really benefit from it; they'd need far more compilation artifacts.
It'd make sense to reuse the distribution if we, for example, decided to split out more jobs that actually benefit from it, like the Standard Library tests.
* Reusing the Rust build script binary.
* As our self-hosted runners reuse their environment, we effectively get this for free there, especially since the Rust part of the codebase changes relatively rarely.
* It is, however, a significant cost on GitHub-hosted runners, affecting our macOS jobs. Reusing the binary does not save wall time for jobs that run in parallel (as we have enough runners), but if we introduce job dependencies that force sequential execution on macOS, reusing it would become a significant win.
This PR allows requesting a clean build when triggering the workflow through manual dispatch.
Previously this was possible only by creating a PR and adding the label to it.
After investigating some errors, I found another two missing awaits in our tests. Because these are so easy to overlook, I added a lint rule that fails on unhandled promises (for e2e tests only).
Also re-enabled HTML reports, this time with traces, to allow closer investigation of any future failure. @mwu-tow added code for uploading them to GH.
- Closes #9120
- Reorders the CI steps to run the license check last (so that it cannot prevent the tests, which are more important, from running)
- Tries to reword the warnings to be clearer
- Adds some CSS to the report to more clearly indicate which elements can be clicked.
In PR #8953, in commit ba0a69de6e, I introduced argument files for `native-image` invocations. In this PR, let's upload these argfiles as artifacts on GH, so that we can inspect them later.
CI currently compiles the standard library manually. However, this became redundant once `buildEngineDistribution` started doing the very same thing.
Because of that, we ended up compiling the libraries twice.
* Use a glob pattern to discover stdlib tests (rather than a hardcoded list); see the sketch after this list.
* Don't fail the CI check immediately after a failing Scala test.
* Remove the meta test suite tests.
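A hypothetical sketch of the glob-based discovery using the `glob` crate; the actual pattern and directory layout used by the build script may differ:

```rust
fn main() {
    // Discover stdlib test projects by pattern instead of a hardcoded list;
    // the path pattern below is illustrative.
    for entry in glob::glob("test/*_Tests/package.yaml").expect("invalid glob pattern") {
        match entry {
            Ok(path) => println!("discovered test project: {}", path.display()),
            Err(e) => eprintln!("skipping unreadable entry: {e}"),
        }
    }
}
```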
# Important Notes
The meta test suite tests are removed following a discussion with @radeusgd. In short, they were failing anyway and were supposed to be rewritten (probably using a different technology, like JUnit). The current code will be a useful reference, but it doesn't have to be kept on the repository head. The relevant information and references shall be added to the task.
This has been observed to be the most randomly failing part of the Rust build scripts.
This adds several retries to the patching of the artifact size (which finalizes the upload).
Additional diagnostics were added so we can observe whether the retries actually help, and so we can better understand the issue if this is not enough to fix it.
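A generic sketch of the retry-with-backoff idea applied here; the attempt count, delays, and logging are illustrative rather than the build script's actual values:

```rust
use std::time::Duration;

/// Retries a fallible async operation with exponential backoff, logging each
/// failure so CI logs show whether the retries are actually helping.
async fn with_retries<T, E, Fut>(mut attempt: impl FnMut() -> Fut) -> Result<T, E>
where
    E: std::fmt::Display,
    Fut: std::future::Future<Output = Result<T, E>>,
{
    const MAX_ATTEMPTS: u32 = 5;
    let mut delay = Duration::from_secs(1);
    for i in 1..=MAX_ATTEMPTS {
        match attempt().await {
            Ok(value) => return Ok(value),
            Err(error) if i < MAX_ATTEMPTS => {
                eprintln!("patching artifact size failed (attempt {i}): {error}");
                tokio::time::sleep(delay).await;
                delay *= 2;
            }
            Err(error) => return Err(error),
        }
    }
    unreachable!("the loop always returns before exhausting attempts")
}
```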
This PR adds a native aarch64 target to our release process.
It also includes a refactoring of the workflow generation and minor tweaks:
* removing some workarounds in the generated action code that are no longer needed;
* some harmless version bumps;
* enabling cleaning unconditionally for release builds.
- Closes #8911
- Add dashboard unit tests to GUI2 CI
- Add dashboard E2E tests to GUI2 CI
- Fix (minor) issues in dashboard unit tests
# Important Notes
None
The `clean` CI steps are now always run for benchmarking jobs: we run the full `./run git-clean` before and after the benchmarks. Benchmarks take long enough that the savings from skipping cleaning are negligible.
### Important Notes
This PR brings a partial refactoring of the workflow-generating code, which was very dirty. I'll build on this further soon when adding proper aarch64 macOS support.
Also, some minor tweaks to the generation were made:
* not writing `always() &&` twice;
* running only the latter cleaning step for canceled jobs.
Refactors `test/Table_Test` to use the builder API. The builder API lives in a new library called `Test_New`, which sits alongside the old `Test` library. Follow-up PRs will migrate the rest of the tests; meanwhile, let's keep both libraries and merge them after the last PR.
# Important Notes
- For a brief introduction into the new API, see **Prototype 1** section in https://github.com/enso-org/enso/pull/8622#issuecomment-1889706168
- When executing all the tests, the behavior should be the same as with the old library, with one exception: if the `ENSO_TEST_ANSI_COLORS` env var is set, the output is more colorful than it used to be.
Replaces our port-finding code with the `portpicker` crate.
We expect this to greatly reduce the possibility of race conditions: the port is picked at random, so ports won't collide as easily when we use the routine more than once.
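A minimal sketch of the crate's use (the actual call sites in our code differ):

```rust
fn main() {
    // `pick_unused_port` probes randomly chosen ports and returns one that
    // could actually be bound, making collisions between concurrent callers
    // much less likely than with a sequential scan.
    let port: u16 = portpicker::pick_unused_port().expect("no free ports");
    println!("picked port {port}");
}
```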
This is required because `index.ts` was renamed to `entrypoint.ts` (in #8587) to avoid colliding with the main export of the `enso-dashboard` module, which is used by `gui2` as `dashboard.run()`; the rename caused the issue.
Build script changes are also included, thanks to @mwu-tow.
# Important Notes
This should only affect projects opened against the remote (cloud) backend, so to test it, open a project against the cloud backend.