# enso/.github/workflows/scala-new.yml

# This file is auto-generated. Do not edit it manually!
# Edit the enso_build::ci_gen module instead and run `cargo run --package enso-build-ci-gen`.
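# Triggers: pushes to `develop`, every pull request, and manual dispatch
# (where the `clean_build_required` input can force a clean build).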
name: Engine CI
on:
push:
branches:
- develop
pull_request: {}
workflow_dispatch:
inputs:
clean_build_required:
description: Clean before and after the run.
required: false
type: boolean
default: false
jobs:
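  # Cancels previously started runs of this workflow for the same ref. Skipped on
  # `develop`, so pushes to the default branch always run to completion.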
enso-build-ci-gen-job-cancel-workflow-linux-amd64:
name: Cancel Previous Runs
if: github.ref != 'refs/heads/develop'
runs-on:
- ubuntu-latest
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.12.1
with:
access_token: ${{ github.token }}
permissions:
actions: write
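  # Engine check jobs: one per platform, each running `./run backend ci-check`;
  # the GraalVM edition in use is indicated by the job-level GRAAL_EDITION variable.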
enso-build-ci-gen-job-ci-check-backend-graal-vm-ce-linux-amd64:
name: Engine (GraalVM CE) (linux, amd64)
runs-on:
- self-hosted
- Linux
steps:
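      # wasm-pack is only installed on GitHub-hosted runners; self-hosted runners
      # are assumed to have the required tooling provisioned already.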
- if: startsWith(runner.name, 'GitHub Actions') || startsWith(runner.name, 'Hosted Agent')
name: Installing wasm-pack
uses: jetli/wasm-pack-action@v0.4.0
with:
version: v0.10.2
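      # Re-exports the Actions runtime token, URL and retention period as ordinary
      # environment variables (so later steps can reach the Artifact API) and logs
      # the workflow context.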
- name: Expose Artifact API and context information.
uses: actions/github-script@v7
with:
script: "\n core.exportVariable(\"ACTIONS_RUNTIME_TOKEN\", process.env[\"ACTIONS_RUNTIME_TOKEN\"])\n core.exportVariable(\"ACTIONS_RUNTIME_URL\", process.env[\"ACTIONS_RUNTIME_URL\"])\n core.exportVariable(\"GITHUB_RETENTION_DAYS\", process.env[\"GITHUB_RETENTION_DAYS\"])\n console.log(context)\n "
- name: Checking out the repository
uses: actions/checkout@v4
with:
clean: false
submodules: recursive
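      # Bootstraps the `./run` build script; if the first invocation fails,
      # retries once after a full `git clean -ffdx`.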
- name: Build Script Setup
run: ./run --help || (git clean -ffdx && ./run --help)
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
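      # The clean steps run only when the PR carries the 'CI: Clean build required'
      # label or the `clean_build_required` workflow input is set.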
- if: "(contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean before
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
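      # The main check for this job: build the backend and run its CI checks.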
- run: ./run backend ci-check
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
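      # On failure, list the working tree contents to aid debugging.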
- if: failure() && runner.os == 'Windows'
name: List files if failed (Windows)
run: Get-ChildItem -Force -Recurse
- if: failure() && runner.os != 'Windows'
name: List files if failed (non-Windows)
run: ls -lAR
- if: "(always()) && (contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean after
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
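  # Job-level env: GRAAL_EDITION selects the GraalVM edition used by the build
  # (GraalVM CE here; Oracle GraalVM variants are defined further down the file).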
env:
GRAAL_EDITION: GraalVM CE
enso-build-ci-gen-job-ci-check-backend-graal-vm-ce-macos-amd64:
name: Engine (GraalVM CE) (macos, amd64)
runs-on:
- macos-12
steps:
- if: startsWith(runner.name, 'GitHub Actions') || startsWith(runner.name, 'Hosted Agent')
name: Installing wasm-pack
uses: jetli/wasm-pack-action@v0.4.0
with:
version: v0.10.2
- name: Expose Artifact API and context information.
uses: actions/github-script@v7
with:
script: "\n core.exportVariable(\"ACTIONS_RUNTIME_TOKEN\", process.env[\"ACTIONS_RUNTIME_TOKEN\"])\n core.exportVariable(\"ACTIONS_RUNTIME_URL\", process.env[\"ACTIONS_RUNTIME_URL\"])\n core.exportVariable(\"GITHUB_RETENTION_DAYS\", process.env[\"GITHUB_RETENTION_DAYS\"])\n console.log(context)\n "
- name: Checking out the repository
uses: actions/checkout@v4
with:
clean: false
submodules: recursive
- name: Build Script Setup
run: ./run --help || (git clean -ffdx && ./run --help)
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: "(contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean before
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- run: ./run backend ci-check
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: failure() && runner.os == 'Windows'
name: List files if failed (Windows)
run: Get-ChildItem -Force -Recurse
- if: failure() && runner.os != 'Windows'
name: List files if failed (non-Windows)
run: ls -lAR
- if: "(always()) && (contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean after
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
env:
GRAAL_EDITION: GraalVM CE
enso-build-ci-gen-job-ci-check-backend-graal-vm-ce-windows-amd64:
name: Engine (GraalVM CE) (windows, amd64)
runs-on:
- self-hosted
- Windows
steps:
- if: startsWith(runner.name, 'GitHub Actions') || startsWith(runner.name, 'Hosted Agent')
name: Installing wasm-pack
uses: jetli/wasm-pack-action@v0.4.0
with:
version: v0.10.2
- name: Expose Artifact API and context information.
uses: actions/github-script@v7
with:
script: "\n core.exportVariable(\"ACTIONS_RUNTIME_TOKEN\", process.env[\"ACTIONS_RUNTIME_TOKEN\"])\n core.exportVariable(\"ACTIONS_RUNTIME_URL\", process.env[\"ACTIONS_RUNTIME_URL\"])\n core.exportVariable(\"GITHUB_RETENTION_DAYS\", process.env[\"GITHUB_RETENTION_DAYS\"])\n console.log(context)\n "
- name: Checking out the repository
uses: actions/checkout@v4
with:
clean: false
submodules: recursive
- name: Build Script Setup
run: ./run --help || (git clean -ffdx && ./run --help)
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: "(contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean before
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- run: ./run backend ci-check
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: failure() && runner.os == 'Windows'
name: List files if failed (Windows)
run: Get-ChildItem -Force -Recurse
- if: failure() && runner.os != 'Windows'
name: List files if failed (non-Windows)
run: ls -lAR
- if: "(always()) && (contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean after
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
env:
GRAAL_EDITION: GraalVM CE
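  # JVM test jobs: run the JVM test suites via `./run backend test jvm` and
  # publish the JUnit XML results with dorny/test-reporter.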
enso-build-ci-gen-job-jvm-tests-graal-vm-ce-linux-amd64:
name: JVM Tests (GraalVM CE) (linux, amd64)
runs-on:
- self-hosted
- Linux
steps:
- if: startsWith(runner.name, 'GitHub Actions') || startsWith(runner.name, 'Hosted Agent')
name: Installing wasm-pack
uses: jetli/wasm-pack-action@v0.4.0
with:
version: v0.10.2
- name: Expose Artifact API and context information.
uses: actions/github-script@v7
with:
script: "\n core.exportVariable(\"ACTIONS_RUNTIME_TOKEN\", process.env[\"ACTIONS_RUNTIME_TOKEN\"])\n core.exportVariable(\"ACTIONS_RUNTIME_URL\", process.env[\"ACTIONS_RUNTIME_URL\"])\n core.exportVariable(\"GITHUB_RETENTION_DAYS\", process.env[\"GITHUB_RETENTION_DAYS\"])\n console.log(context)\n "
- name: Checking out the repository
uses: actions/checkout@v4
with:
clean: false
submodules: recursive
- name: Build Script Setup
run: ./run --help || (git clean -ffdx && ./run --help)
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: "(contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean before
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- run: ./run backend test jvm
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
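      # Results are published only for branches of this repository; PRs from forks
      # do not get the `checks: write` permission the reporter needs.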
- if: (success() || failure()) && github.event.pull_request.head.repo.full_name == github.repository
name: Engine Test Reporter
uses: dorny/test-reporter@v1
with:
max-annotations: 50
name: Engine Tests Report (GraalVM CE, linux, amd64)
path: ${{ env.ENSO_TEST_JUNIT_DIR }}/*.xml
path-replace-backslashes: true
reporter: java-junit
- if: failure() && runner.os == 'Windows'
name: List files if failed (Windows)
run: Get-ChildItem -Force -Recurse
- if: failure() && runner.os != 'Windows'
name: List files if failed (non-Windows)
run: ls -lAR
- if: "(always()) && (contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean after
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
env:
GRAAL_EDITION: GraalVM CE
permissions:
checks: write
enso-build-ci-gen-job-jvm-tests-graal-vm-ce-macos-amd64:
name: JVM Tests (GraalVM CE) (macos, amd64)
runs-on:
- macos-12
steps:
- if: startsWith(runner.name, 'GitHub Actions') || startsWith(runner.name, 'Hosted Agent')
name: Installing wasm-pack
uses: jetli/wasm-pack-action@v0.4.0
with:
version: v0.10.2
- name: Expose Artifact API and context information.
uses: actions/github-script@v7
with:
script: "\n core.exportVariable(\"ACTIONS_RUNTIME_TOKEN\", process.env[\"ACTIONS_RUNTIME_TOKEN\"])\n core.exportVariable(\"ACTIONS_RUNTIME_URL\", process.env[\"ACTIONS_RUNTIME_URL\"])\n core.exportVariable(\"GITHUB_RETENTION_DAYS\", process.env[\"GITHUB_RETENTION_DAYS\"])\n console.log(context)\n "
- name: Checking out the repository
uses: actions/checkout@v4
with:
clean: false
submodules: recursive
- name: Build Script Setup
run: ./run --help || (git clean -ffdx && ./run --help)
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: "(contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean before
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- run: ./run backend test jvm
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: (success() || failure()) && github.event.pull_request.head.repo.full_name == github.repository
name: Engine Test Reporter
uses: dorny/test-reporter@v1
with:
max-annotations: 50
name: Engine Tests Report (GraalVM CE, macos, amd64)
path: ${{ env.ENSO_TEST_JUNIT_DIR }}/*.xml
path-replace-backslashes: true
reporter: java-junit
- if: failure() && runner.os == 'Windows'
name: List files if failed (Windows)
run: Get-ChildItem -Force -Recurse
- if: failure() && runner.os != 'Windows'
name: List files if failed (non-Windows)
run: ls -lAR
- if: "(always()) && (contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean after
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
env:
GRAAL_EDITION: GraalVM CE
permissions:
checks: write
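# JVM Tests job (GraalVM CE, self-hosted Windows runner): runs `./run backend test jvm` and,
# for pull requests coming from this repository, publishes JUnit results via dorny/test-reporter.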
enso-build-ci-gen-job-jvm-tests-graal-vm-ce-windows-amd64:
name: JVM Tests (GraalVM CE) (windows, amd64)
runs-on:
- self-hosted
- Windows
steps:
- if: startsWith(runner.name, 'GitHub Actions') || startsWith(runner.name, 'Hosted Agent')
name: Installing wasm-pack
uses: jetli/wasm-pack-action@v0.4.0
with:
version: v0.10.2
- name: Expose Artifact API and context information.
uses: actions/github-script@v7
with:
script: "\n core.exportVariable(\"ACTIONS_RUNTIME_TOKEN\", process.env[\"ACTIONS_RUNTIME_TOKEN\"])\n core.exportVariable(\"ACTIONS_RUNTIME_URL\", process.env[\"ACTIONS_RUNTIME_URL\"])\n core.exportVariable(\"GITHUB_RETENTION_DAYS\", process.env[\"GITHUB_RETENTION_DAYS\"])\n console.log(context)\n "
- name: Checking out the repository
uses: actions/checkout@v4
with:
clean: false
submodules: recursive
- name: Build Script Setup
run: ./run --help || (git clean -ffdx && ./run --help)
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: "(contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean before
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- run: ./run backend test jvm
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: (success() || failure()) && github.event.pull_request.head.repo.full_name == github.repository
name: Engine Test Reporter
uses: dorny/test-reporter@v1
with:
max-annotations: 50
name: Engine Tests Report (GraalVM CE, windows, amd64)
path: ${{ env.ENSO_TEST_JUNIT_DIR }}/*.xml
path-replace-backslashes: true
reporter: java-junit
- if: failure() && runner.os == 'Windows'
name: List files if failed (Windows)
run: Get-ChildItem -Force -Recurse
- if: failure() && runner.os != 'Windows'
name: List files if failed (non-Windows)
run: ls -lAR
- if: "(always()) && (contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean after
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
env:
GRAAL_EDITION: GraalVM CE
permissions:
checks: write
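# Standard Library Tests job (GraalVM CE, self-hosted Linux runner): runs
# `./run backend test standard-library` with the ENSO_LIB_S3_* AWS credentials exposed to the
# test step, then publishes JUnit results for pull requests coming from this repository.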
enso-build-ci-gen-job-standard-library-tests-graal-vm-ce-linux-amd64:
name: Standard Library Tests (GraalVM CE) (linux, amd64)
runs-on:
- self-hosted
- Linux
steps:
- if: startsWith(runner.name, 'GitHub Actions') || startsWith(runner.name, 'Hosted Agent')
name: Installing wasm-pack
uses: jetli/wasm-pack-action@v0.4.0
with:
version: v0.10.2
- name: Expose Artifact API and context information.
uses: actions/github-script@v7
with:
script: "\n core.exportVariable(\"ACTIONS_RUNTIME_TOKEN\", process.env[\"ACTIONS_RUNTIME_TOKEN\"])\n core.exportVariable(\"ACTIONS_RUNTIME_URL\", process.env[\"ACTIONS_RUNTIME_URL\"])\n core.exportVariable(\"GITHUB_RETENTION_DAYS\", process.env[\"GITHUB_RETENTION_DAYS\"])\n console.log(context)\n "
- name: Checking out the repository
uses: actions/checkout@v4
with:
clean: false
submodules: recursive
- name: Build Script Setup
run: ./run --help || (git clean -ffdx && ./run --help)
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: "(contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean before
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- run: ./run backend test standard-library
env:
ENSO_LIB_S3_AWS_ACCESS_KEY_ID: ${{ secrets.ENSO_LIB_S3_AWS_ACCESS_KEY_ID }}
ENSO_LIB_S3_AWS_REGION: ${{ secrets.ENSO_LIB_S3_AWS_REGION }}
ENSO_LIB_S3_AWS_SECRET_ACCESS_KEY: ${{ secrets.ENSO_LIB_S3_AWS_SECRET_ACCESS_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: (success() || failure()) && github.event.pull_request.head.repo.full_name == github.repository
name: Standard Library Test Reporter
uses: dorny/test-reporter@v1
with:
max-annotations: 50
name: Standard Library Tests Report (GraalVM CE, linux, amd64)
path: ${{ env.ENSO_TEST_JUNIT_DIR }}/*/*.xml
path-replace-backslashes: true
reporter: java-junit
- if: failure() && runner.os == 'Windows'
name: List files if failed (Windows)
run: Get-ChildItem -Force -Recurse
- if: failure() && runner.os != 'Windows'
name: List files if failed (non-Windows)
run: ls -lAR
- if: "(always()) && (contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean after
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
env:
GRAAL_EDITION: GraalVM CE
permissions:
checks: write
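# Standard Library Tests job (GraalVM CE, GitHub-hosted macos-12 runner): same steps as the
# Linux job above; the wasm-pack install step only runs on hosted runners (see its `if:` guard).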
enso-build-ci-gen-job-standard-library-tests-graal-vm-ce-macos-amd64:
name: Standard Library Tests (GraalVM CE) (macos, amd64)
runs-on:
- macos-12
steps:
- if: startsWith(runner.name, 'GitHub Actions') || startsWith(runner.name, 'Hosted Agent')
name: Installing wasm-pack
uses: jetli/wasm-pack-action@v0.4.0
with:
version: v0.10.2
- name: Expose Artifact API and context information.
uses: actions/github-script@v7
with:
script: "\n core.exportVariable(\"ACTIONS_RUNTIME_TOKEN\", process.env[\"ACTIONS_RUNTIME_TOKEN\"])\n core.exportVariable(\"ACTIONS_RUNTIME_URL\", process.env[\"ACTIONS_RUNTIME_URL\"])\n core.exportVariable(\"GITHUB_RETENTION_DAYS\", process.env[\"GITHUB_RETENTION_DAYS\"])\n console.log(context)\n "
- name: Checking out the repository
uses: actions/checkout@v4
with:
clean: false
submodules: recursive
- name: Build Script Setup
run: ./run --help || (git clean -ffdx && ./run --help)
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: "(contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean before
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- run: ./run backend test standard-library
env:
ENSO_LIB_S3_AWS_ACCESS_KEY_ID: ${{ secrets.ENSO_LIB_S3_AWS_ACCESS_KEY_ID }}
ENSO_LIB_S3_AWS_REGION: ${{ secrets.ENSO_LIB_S3_AWS_REGION }}
ENSO_LIB_S3_AWS_SECRET_ACCESS_KEY: ${{ secrets.ENSO_LIB_S3_AWS_SECRET_ACCESS_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: (success() || failure()) && github.event.pull_request.head.repo.full_name == github.repository
name: Standard Library Test Reporter
uses: dorny/test-reporter@v1
with:
max-annotations: 50
name: Standard Library Tests Report (GraalVM CE, macos, amd64)
path: ${{ env.ENSO_TEST_JUNIT_DIR }}/*/*.xml
path-replace-backslashes: true
reporter: java-junit
- if: failure() && runner.os == 'Windows'
name: List files if failed (Windows)
run: Get-ChildItem -Force -Recurse
- if: failure() && runner.os != 'Windows'
name: List files if failed (non-Windows)
run: ls -lAR
- if: "(always()) && (contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean after
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
env:
GRAAL_EDITION: GraalVM CE
permissions:
checks: write
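# Standard Library Tests job (GraalVM CE, self-hosted Windows runner): mirrors the Linux job
# above apart from the runner labels and the report name.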
enso-build-ci-gen-job-standard-library-tests-graal-vm-ce-windows-amd64:
name: Standard Library Tests (GraalVM CE) (windows, amd64)
runs-on:
- self-hosted
- Windows
steps:
- if: startsWith(runner.name, 'GitHub Actions') || startsWith(runner.name, 'Hosted Agent')
name: Installing wasm-pack
uses: jetli/wasm-pack-action@v0.4.0
with:
version: v0.10.2
- name: Expose Artifact API and context information.
uses: actions/github-script@v7
with:
script: "\n core.exportVariable(\"ACTIONS_RUNTIME_TOKEN\", process.env[\"ACTIONS_RUNTIME_TOKEN\"])\n core.exportVariable(\"ACTIONS_RUNTIME_URL\", process.env[\"ACTIONS_RUNTIME_URL\"])\n core.exportVariable(\"GITHUB_RETENTION_DAYS\", process.env[\"GITHUB_RETENTION_DAYS\"])\n console.log(context)\n "
- name: Checking out the repository
uses: actions/checkout@v4
with:
clean: false
submodules: recursive
- name: Build Script Setup
run: ./run --help || (git clean -ffdx && ./run --help)
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: "(contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean before
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- run: ./run backend test standard-library
env:
ENSO_LIB_S3_AWS_ACCESS_KEY_ID: ${{ secrets.ENSO_LIB_S3_AWS_ACCESS_KEY_ID }}
ENSO_LIB_S3_AWS_REGION: ${{ secrets.ENSO_LIB_S3_AWS_REGION }}
ENSO_LIB_S3_AWS_SECRET_ACCESS_KEY: ${{ secrets.ENSO_LIB_S3_AWS_SECRET_ACCESS_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: (success() || failure()) && github.event.pull_request.head.repo.full_name == github.repository
name: Standard Library Test Reporter
uses: dorny/test-reporter@v1
with:
max-annotations: 50
name: Standard Library Tests Report (GraalVM CE, windows, amd64)
path: ${{ env.ENSO_TEST_JUNIT_DIR }}/*/*.xml
path-replace-backslashes: true
reporter: java-junit
- if: failure() && runner.os == 'Windows'
name: List files if failed (Windows)
run: Get-ChildItem -Force -Recurse
- if: failure() && runner.os != 'Windows'
name: List files if failed (non-Windows)
run: ls -lAR
- if: "(always()) && (contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean after
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
env:
GRAAL_EDITION: GraalVM CE
permissions:
checks: write
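# Verify License Packages job (self-hosted Linux runner): invokes the sbt task
# `verifyLicensePackages` through the build script (`./run backend sbt '--' verifyLicensePackages`).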
enso-build-ci-gen-job-verify-license-packages-linux-amd64:
name: Verify License Packages (linux, amd64)
runs-on:
- self-hosted
- Linux
steps:
- if: startsWith(runner.name, 'GitHub Actions') || startsWith(runner.name, 'Hosted Agent')
name: Installing wasm-pack
uses: jetli/wasm-pack-action@v0.4.0
with:
version: v0.10.2
- name: Expose Artifact API and context information.
uses: actions/github-script@v7
with:
script: "\n core.exportVariable(\"ACTIONS_RUNTIME_TOKEN\", process.env[\"ACTIONS_RUNTIME_TOKEN\"])\n core.exportVariable(\"ACTIONS_RUNTIME_URL\", process.env[\"ACTIONS_RUNTIME_URL\"])\n core.exportVariable(\"GITHUB_RETENTION_DAYS\", process.env[\"GITHUB_RETENTION_DAYS\"])\n console.log(context)\n "
- name: Checking out the repository
uses: actions/checkout@v4
with:
clean: false
submodules: recursive
- name: Build Script Setup
run: ./run --help || (git clean -ffdx && ./run --help)
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: "(contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean before
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- run: ./run backend sbt '--' verifyLicensePackages
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- if: failure() && runner.os == 'Windows'
name: List files if failed (Windows)
run: Get-ChildItem -Force -Recurse
- if: failure() && runner.os != 'Windows'
name: List files if failed (non-Windows)
run: ls -lAR
- if: "(always()) && (contains(github.event.pull_request.labels.*.name, 'CI: Clean build required') || inputs.clean_build_required)"
name: Clean after
run: ./run git-clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
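# Trailing `env` block: judging by the generator's layout this appears to be a workflow-wide
# default, and ENSO_BUILD_SKIP_VERSION_CHECK presumably tells the build script to skip its
# version check.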
env:
ENSO_BUILD_SKIP_VERSION_CHECK: "true"