This PR fixes several issues that appeared when CI jobs ran on PRs created from repository forks:
* electron-builder on Windows and macOS now properly recognizes that the signing secrets are missing and does not attempt to sign the artifacts (see the sketch below);
* similarly, the S3 library tests were fixed;
* the test reporter step is now skipped, as it does not support forks.
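For illustration, the gating boils down to something like the following sketch. The variable names are electron-builder's conventional signing variables, not necessarily the exact secrets this repository uses; GitHub does not expose repository secrets to workflow runs triggered from forks, so their absence is how a build can detect a fork PR.

```rust
// A minimal sketch of the gating idea, assuming electron-builder's
// conventional signing variables (CSC_LINK / CSC_KEY_PASSWORD); the real
// secret names in this repository may differ.
fn signing_enabled() -> bool {
    std::env::var("CSC_LINK").is_ok() && std::env::var("CSC_KEY_PASSWORD").is_ok()
}

fn main() {
    if signing_enabled() {
        println!("Signing secrets present; artifacts will be signed.");
    } else {
        println!("Signing secrets missing (likely a fork PR); skipping signing.");
    }
}
```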
GitHub made arm64 runners generally available and changed the `macos-latest` label to point to them.
The runner architecture is coupled with the GitHub-hosted runner's OS version: `macos-13` is the last image that runs on x64.
This PR essentially brings back the previous behavior by explicitly requesting that all our x64 macOS jobs run on `macos-12` (as before).
We should eventually migrate to `macos-13` for x64 macOS and `macos-14`/`macos-latest` for arm64 macOS. However, that currently leads to issues with `npm install` getting stuck, so it should probably be reattempted after the CI rework.
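A hedged sketch of the pinning, with hypothetical names standing in for the actual workflow-generation code:

```rust
// Hypothetical sketch of how the workflow generator can pick runner images
// now that `macos-latest` resolves to arm64 machines.
#[derive(Clone, Copy)]
enum Arch {
    X86_64,
    Aarch64,
}

fn macos_runner_label(arch: Arch) -> &'static str {
    match arch {
        // Pin x64 jobs to an Intel image explicitly; this restores the
        // behavior from before the `macos-latest` switch.
        Arch::X86_64 => "macos-12",
        // arm64 jobs would target `macos-14` (the new `macos-latest`).
        Arch::Aarch64 => "macos-14",
    }
}
```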
This PR bumps the FlatBuffers version used by the backend to `24.3.25` (the latest version at the time of writing).
Since the newer FlatBuffers releases come with prebuilt binaries for all platforms we target, we can simplify the build process by downloading the required `flatc` binary from the official FlatBuffers GitHub release page. This allows us to remove the dependency on `conda`, which was previously the only reliable way to obtain the outdated `flatc`.
The `conda` setup has been removed from the CI steps, and the corresponding code has been removed from the build script.
The FlatBuffers version is no longer hard-coded in the Rust build script; it is inferred from the `build.sbt` definition (similarly to GraalVM), as sketched below.
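A sketch of the new provisioning logic, under an assumed `build.sbt` convention and hypothetical helper names (the actual build script may differ):

```rust
use std::fs;

// Assumed convention: build.sbt defines the version in a line such as
//   val flatbuffersVersion = "24.3.25"
fn flatbuffers_version() -> Option<String> {
    let build_sbt = fs::read_to_string("build.sbt").ok()?;
    build_sbt.lines().find_map(|line| {
        let rest = line.trim().strip_prefix("val flatbuffersVersion")?;
        rest.split('"').nth(1).map(str::to_owned)
    })
}

// The prebuilt `flatc` archives are published on the official release page;
// `asset` is the platform-specific archive name as listed there.
fn flatc_download_url(version: &str, asset: &str) -> String {
    format!("https://github.com/google/flatbuffers/releases/download/v{version}/{asset}")
}
```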
# Important Notes
This does not affect the GUI binary protocol implementation.
While I initially wanted to update it, that turned out to be fairly non-trivial.
Because of multiple issues with the generated TS code, it had been significantly refactored by hand, so it cannot be updated automatically. Addressing this problem is left as [a future task](https://github.com/enso-org/enso/issues/9658).
As the FlatBuffers binary protocol is guaranteed to be compatible between versions (unlike the generated sources), there should be no adverse effects from bumping `flatc` only on the backend side.
This PR introduces [a new workflow — nightly checks](https://github.com/enso-org/enso/actions/workflows/nightly-tests.yml). It consists of the whole array of backend checks:
* the build check, Scala tests, and Standard Library tests;
* both the Community and Oracle (Enterprise) GraalVM editions (the latter Linux-only);
* checks on the Aarch64 macOS runner.
We do not want to run these checks on each PR due to limited runner capacity. By running them nightly, we can still catch any issues that arise on the `develop` branch (see the sketch below).
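For illustration only (hypothetical types; the real workflow is produced by the build script's workflow generator), the difference is just the trigger:

```rust
/// Hypothetical sketch: the nightly workflow runs on a cron schedule
/// instead of on every PR, trading feedback latency for runner capacity.
struct Workflow {
    name: &'static str,
    /// `Some(cron)` for scheduled runs, `None` for per-PR triggers.
    schedule: Option<&'static str>,
}

fn nightly_checks() -> Workflow {
    Workflow {
        name: "Nightly Tests",
        schedule: Some("0 3 * * *"), // the time is an assumption, in UTC
    }
}
```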
# Important Notes
* [ ] Before merging, this requires updating the GH required checks list.
- Closes #9284
- Our tests now run without the default `AWS_` config, ensuring that the tested setups work in a clean environment.
- More complicated logic turned out to be needed for bucket access: apparently, the AWS SDK allows some bucket operations only if the client is connected to the bucket's region. Thus, detection of bucket regions had to be implemented (see the sketch after this list).
- Added an `AWS_Region` widget based on autoscoping.
- Fixed `AWS_Credential.profile_names` crashing if no AWS config was found; it now returns no profiles in that case. Added a regression test.
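A minimal sketch of the region-detection idea, written with the Rust AWS SDK for illustration (the actual implementation lives in the Enso standard library):

```rust
use aws_sdk_s3::Client;

// S3 permits many bucket operations only through a client configured for
// the bucket's own region, so we first ask where the bucket lives.
async fn bucket_region(client: &Client, bucket: &str) -> Result<String, aws_sdk_s3::Error> {
    let resp = client.get_bucket_location().bucket(bucket).send().await?;
    // Quirk: buckets in us-east-1 report an empty location constraint.
    Ok(resp
        .location_constraint()
        .map(|c| c.as_str().to_owned())
        .filter(|s| !s.is_empty())
        .unwrap_or_else(|| "us-east-1".to_owned()))
}
```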
Adds an `Oracle GraalVM` configuration for some backend jobs. `Oracle GraalVM` jobs run only on Linux so far; the existing jobs use `GraalVM CE`.
### Important Notes
- The JDK to download and use is deduced from the `JAVA_VENDOR` environment variable. By default, `GraalVM CE` is used (see the sketch after these notes).
- sbt can be started with both GraalVM CE and Oracle GraalVM without any warnings.
- If you try to start sbt with a JDK from a different vendor but the same Java version, a warning is printed.
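A hedged sketch of the selection (the accepted values of `JAVA_VENDOR` are illustrative here, not necessarily the exact strings CI uses):

```rust
#[derive(Clone, Copy, Debug)]
enum JavaVendor {
    GraalVmCe,
    OracleGraalVm,
}

fn java_vendor_from_env() -> JavaVendor {
    // The value strings are assumptions for this sketch.
    match std::env::var("JAVA_VENDOR").as_deref() {
        Ok("oracle-graalvm") => JavaVendor::OracleGraalVm,
        // GraalVM CE is the default when the variable is unset or unknown.
        _ => JavaVendor::GraalVmCe,
    }
}
```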
Current list of jobs in the `Engine CI` workflow (these jobs are visible on this PR because they are scheduled to run on every PR):
- Engine (GraalVM CE) (linux, x86_64)
- Engine (GraalVM CE) (macos, x86_64)
- Engine (GraalVM CE) (windows, x86_64)
- **Engine (Oracle GraalVM) (linux, x86_64)**
- Scala Tests (GraalVM CE) (linux, x86_64)
- Scala Tests (GraalVM CE) (macos, x86_64)
- Scala Tests (GraalVM CE) (windows, x86_64)
- **Scala Tests (Oracle GraalVM) (linux, x86_64)**
- Standard Library Tests (GraalVM CE) (linux, x86_64)
- Standard Library Tests (GraalVM CE) (macos, x86_64)
- Standard Library Tests (GraalVM CE) (windows, x86_64)
- **Standard Library Tests (Oracle GraalVM) (linux, x86_64)**
- Verify License Packages (linux, x86_64)
Benchmark Engine workflow (not visible on this PR; it cannot be scheduled manually yet):
- Benchmark Engine (GraalVM CE)
- **Benchmark Engine (Oracle GraalVM)**
Benchmark Standard Libraries workflow (not visible on this PR; it cannot be scheduled manually yet):
- Benchmark Standard Libraries (GraalVM CE)
- **Benchmark Standard Libraries (Oracle GraalVM)**
Removed the `enso-types` crate, which had only one reference in an unused part of the code. Also removed some unused dependencies from `Cargo.toml` files.
# Important Notes
CI has a similar hiccup as before; please disregard it in the review for now.
I have created a PR with the first set of changes for the Engine CI. The changes are small and effectively consist of:
1. Splitting the `verifyLicensePackages` check into its own job, run only on Linux. There are hardly any time savings, as the actual job cost is dominated by the overhead of spinning up a new job, but it is not expensive in the big picture.
2. Splitting the Scala Tests into a separate job. This is probably the biggest "atomic" piece of work we have.
3. Splitting the Standard Library Tests into a separate job.
The time is nicely split across the jobs now. The last run has:
* 27 min for Scala tests;
* 25 min for Standard Library tests;
* 24 min for the "rest": the old job containing everything that has not been split.
While total CPU time has increased (as the jobs no longer effectively reuse the same build context), wall time has decreased significantly. Previously the monolithic job took about an hour of wall time; the slowest job now takes 27 minutes, so we are getting a more than 2x speedup.
The Scala tests job, now the slowest of the three, is currently comparable with the native Rust tests, which are the slowest job across all CI checks (and should improve once the old GUI is gone).
The PR is pretty minimal. Several future improvements can be made:
* Reorganizing and splitting other "heavy" jobs, like the native image generation.
* Reusing the built Engine distribution. However, this is probably a lower priority than I initially thought.
  * Building the package takes several minutes, so duplicating this work is not that expensive.
  * The package is OS-specific.
  * Scala tests don't really benefit from it; they'd need far more compilation artifacts.
  * It'd make sense to reuse the distribution if we, for example, decided to split more jobs that actually benefit from it, like the Standard Library tests.
* Reusing the Rust build script binary.
  * As our self-hosted runners reuse the environment, we effectively get this for free, especially since the Rust part of the codebase changes less frequently.
  * This is, however, a significant cost on the GitHub-hosted runners, which affects our macOS jobs. Reusing the binary does not save wall time for jobs that run in parallel (as we have enough runners), but if we introduce job dependencies that force sequential execution of jobs on macOS, reusing it would become important.
This PR allows requesting a clean build when triggering the workflow through manual dispatch.
Previously this was possible only by creating a PR and adding the respective label to it.
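A hedged sketch of the resulting decision logic (the environment-variable names are hypothetical):

```rust
// Hypothetical sketch: a clean build can now be requested either via the PR
// label or via a boolean input of the manual workflow dispatch, which CI
// would forward to the build script as an environment variable.
fn clean_build_requested() -> bool {
    let from_label = std::env::var("ENSO_CLEAN_LABEL").is_ok();
    let from_dispatch = std::env::var("ENSO_CLEAN_INPUT")
        .map(|v| v == "true")
        .unwrap_or(false);
    from_label || from_dispatch
}
```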
This PR adds a native aarch64 target to our release process.
It also includes a refactoring of the workflow generation and minor tweaks:
* removing some workarounds in the generated action code that are no longer needed;
* some harmless version bumps;
* enabling cleaning unconditionally for release builds.
The `clean` CI steps now always run for benchmarking jobs. We run the full `./run git-clean` before and after benchmarks. Benchmarks take long enough that any savings from not cleaning are negligible.
### Important Notes
This PR brings a partial refactoring of the workflow-generating code, which was very dirty. I'll build on this soon when adding proper aarch64 macOS support.
Also, some minor tweaks to the generation were made:
* not writing `always() &&` twice (sketched below);
* running only the latter cleaning step for canceled jobs.
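The first tweak boils down to a helper like this (hypothetical name, not the actual generator API):

```rust
// Prefix a step condition with `always() &&` only when it is not already
// there, so the generated expression never repeats the prefix.
fn with_always(condition: &str) -> String {
    if condition.trim_start().starts_with("always()") {
        condition.to_string()
    } else {
        format!("always() && {condition}")
    }
}
```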
Upgrade to GraalVM JDK 21.
```
> java -version
openjdk version "21" 2023-09-19
OpenJDK Runtime Environment GraalVM CE 21+35.1 (build 21+35-jvmci-23.1-b15)
OpenJDK 64-Bit Server VM GraalVM CE 21+35.1 (build 21+35-jvmci-23.1-b15, mixed mode, sharing)
```
With SDKMAN, install it with `sdk install java 21-graalce`.
# Important Notes
- After this PR, one can theoretically run Enso with any JRE of version at least 21.
- Removed the `sbt bootstrap` hack and all the other build-time hacks related to handling the GraalVM distribution.
- `project-manager` remains backward compatible: it can open older engines with their runtimes. New engines no longer require a separate runtime to be downloaded.
- sbt does not support compilation of `module-info.java` files in mixed projects - https://github.com/sbt/sbt/issues/3368
  - This means that we can have `module-info.java` files only in Java-only projects.
  - Anyway, we need just a single `module-info.class` in the resulting `runtime.jar` fat jar.
- `runtime.jar` is assembled in `runtime-with-instruments` with a custom merge strategy (the `sbt-assembly` plugin). Caching is disabled for custom merge strategies, which means that re-assembly of `runtime.jar` will be more frequent.
- The engine distribution contains multiple JAR archives (modules) in the `component` directory, along with `runner/runner.jar`, which is hidden inside a nested directory.
- The new entry point to the engine runner is [EngineRunnerBootLoader](https://github.com/enso-org/enso/pull/7991/files#diff-9ab172d0566c18456472aeb95c4345f47e2db3965e77e29c11694d3a9333a2aa), which contains a custom ClassLoader that makes sure that everything that does not have to be loaded from a module is loaded from `runner.jar`, which is not a module.
- The new command line for launching the engine runner is in [distribution/bin/enso](https://github.com/enso-org/enso/pull/7991/files#diff-0b66983403b2c329febc7381cd23d45871d4d555ce98dd040d4d1e879c8f3725).
- The [newest version of Frgaal](https://repo1.maven.org/maven2/org/frgaal/compiler/20.0.1/) (20.0.1) does not recognize the `--source 21` option, only `--source 20`.
This PR updates the build script:
* fixed an issue where the program version check was not triggering properly;
* improved the `git-clean` command to correctly clear Scala artifacts;
* added a `run.ps1` wrapper to the build script that works better with PowerShell than `run.cmd`;
* increased timeouts to work around failures in macOS nightly builds;
* replaced deprecated GitHub Actions APIs (`set-output`) with their new equivalents;
* added a workaround for an electron-builder issue (python2 lookup) on newer macOS runner images;
* completed the GUI and backend dispatches to cloud;
* the release workflow now allows creating RC releases;
* added a polyfill globals plugin to fix an issue with missing types (like `Buffer`) that was affecting nightly releases;
* fixed exit code propagation for the Windows build script wrapper;
* bumped the build script and refreshed the generated workflows.
Includes https://github.com/enso-org/ci-build/pull/8
This PR replaces webpack with esbuild as our bundler.
The change leads to an out-of-the-box ~5x improvement in bundling times, reducing latency in watch-based workflows.
Along with this, a new development server (with live-reload capability) has been introduced to support the watch command.
[ci no changelog needed]
### Important Notes
* The workflow for checking docs has been removed, because it used an outdated prettier version and caused trouble, while the same check is performed in a better way by the GUI/Lint job.
* Introduced a little more TypeScript in the scripts in place of JS, usually with minimal changes.