# twenty/.github/workflows/ci-front.yaml

name: CI Front
on:
  push:
    branches:
      - main
  pull_request:

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
jobs:
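  # front-sb-build builds Storybook once, without a scope --configuration, so the
  # front-sb-test jobs below can reuse the Nx storybook:build task cache instead of
  # rebuilding Storybook per scope (see PR #5451).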
  front-sb-build:
    timeout-minutes: 30
    runs-on: ubuntu-latest
    env:
      REACT_APP_SERVER_BASE_URL: http://localhost:3000
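      # Setting this to 0 lets Nx reuse local cache entries produced on other machines
      # (e.g. ones restored by the task-cache action below) instead of rejecting them.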
      NX_REJECT_UNKNOWN_LOCAL_CACHE: 0
    steps:
      - name: Cancel Previous Runs
        uses: styfle/cancel-workflow-action@0.11.0
        with:
          access_token: ${{ github.token }}
      - name: Fetch local actions
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
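      # The changed-files check gates every later step: when none of the listed paths
      # changed, the job effectively becomes a no-op.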
      - name: Check for changed files
        id: changed-files
        uses: tj-actions/changed-files@v11
        with:
          files: |
            package.json
            packages/twenty-front/**
            packages/twenty-ui/**
      - name: Skip if no relevant changes
        if: steps.changed-files.outputs.any_changed == 'false'
        run: echo "No relevant changes. Skipping CI."
      - name: Install dependencies
        if: steps.changed-files.outputs.any_changed == 'true'
        uses: ./.github/workflows/actions/yarn-install
      - name: Diagnostic disk space issue
        if: steps.changed-files.outputs.any_changed == 'true'
        run: df -h
      - name: Front / Restore Storybook Task Cache
        if: steps.changed-files.outputs.any_changed == 'true'
        uses: ./.github/workflows/actions/task-cache
        with:
          tag: scope:frontend
          tasks: storybook:build
      - name: Front / Write .env
        if: steps.changed-files.outputs.any_changed == 'true'
        run: npx nx reset:env twenty-front
      - name: Front / Build storybook
        if: steps.changed-files.outputs.any_changed == 'true'
        run: npx nx storybook:build twenty-front
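
  # front-sb-test shards the Storybook tests across the "pages" and "modules" scopes and
  # restores the storybook:build cache produced by front-sb-build, so it only serves the
  # prebuilt static Storybook and runs the tests.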
  front-sb-test:
    timeout-minutes: 30
    runs-on: shipfox-8vcpu-ubuntu-2204
    needs: front-sb-build
    strategy:
      matrix:
        storybook_scope: [pages, modules]
    env:
      REACT_APP_SERVER_BASE_URL: http://localhost:3000
      NX_REJECT_UNKNOWN_LOCAL_CACHE: 0
    steps:
      - name: Fetch local actions
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Check for changed files
        id: changed-files
        uses: tj-actions/changed-files@v11
        with:
          files: |
            packages/twenty-front/**
      - name: Skip if no relevant changes
        if: steps.changed-files.outputs.any_changed == 'false'
        run: echo "No relevant changes. Skipping CI."
      - name: Install dependencies
        if: steps.changed-files.outputs.any_changed == 'true'
        uses: ./.github/workflows/actions/yarn-install
      - name: Install Playwright
        if: steps.changed-files.outputs.any_changed == 'true'
        run: cd packages/twenty-front && npx playwright install
      - name: Front / Restore Storybook Task Cache
        if: steps.changed-files.outputs.any_changed == 'true'
        uses: ./.github/workflows/actions/task-cache
        with:
          tag: scope:frontend
          tasks: storybook:build
      - name: Front / Write .env
        if: steps.changed-files.outputs.any_changed == 'true'
        run: npx nx reset:env twenty-front
      - name: Run storybook tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: npx nx storybook:serve-and-test:static twenty-front --configuration=${{ matrix.storybook_scope }}
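
  # front-sb-test-performance runs only the profiling stories (suffixed .perf.stories.tsx),
  # which wrap components in repeated React Profiler passes and fail if the p95 render time
  # exceeds the story's threshold (see PR #5341).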
  front-sb-test-performance:
    timeout-minutes: 30
    runs-on: shipfox-8vcpu-ubuntu-2204
    env:
      REACT_APP_SERVER_BASE_URL: http://localhost:3000
      NX_REJECT_UNKNOWN_LOCAL_CACHE: 0
    steps:
      - name: Fetch local actions
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Check for changed files
        id: changed-files
        uses: tj-actions/changed-files@v11
        with:
          files: |
            packages/twenty-front/**
      - name: Skip if no relevant changes
        if: steps.changed-files.outputs.any_changed == 'false'
        run: echo "No relevant changes. Skipping CI."
      - name: Install dependencies
        if: steps.changed-files.outputs.any_changed == 'true'
        uses: ./.github/workflows/actions/yarn-install
      - name: Install Playwright
        if: steps.changed-files.outputs.any_changed == 'true'
        run: cd packages/twenty-front && npx playwright install
      - name: Front / Write .env
        if: steps.changed-files.outputs.any_changed == 'true'
        run: npx nx reset:env twenty-front
      - name: Run storybook tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: npx nx run twenty-front:storybook:serve-and-test:static:performance
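
  # front-chromatic-deployment publishes the Storybook build to Chromatic. It runs on every
  # push to main, and on pull requests only when they carry the "run-chromatic" label.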
  front-chromatic-deployment:
    timeout-minutes: 30
    if: contains(github.event.pull_request.labels.*.name, 'run-chromatic') || github.event_name == 'push'
    needs: front-sb-build
    runs-on: ubuntu-latest
    env:
      REACT_APP_SERVER_BASE_URL: http://127.0.0.1:3000
      CHROMATIC_PROJECT_TOKEN: ${{ secrets.CHROMATIC_PROJECT_TOKEN }}
      NX_REJECT_UNKNOWN_LOCAL_CACHE: 0
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Check for changed files
        id: changed-files
        uses: tj-actions/changed-files@v11
        with:
          files: |
            packages/twenty-front/**
      - name: Skip if no relevant changes
        if: steps.changed-files.outputs.any_changed == 'false'
        run: echo "No relevant changes. Skipping CI."
      - name: Install dependencies
        if: steps.changed-files.outputs.any_changed == 'true'
        uses: ./.github/workflows/actions/yarn-install
      - name: Front / Restore Storybook Task Cache
        if: steps.changed-files.outputs.any_changed == 'true'
        uses: ./.github/workflows/actions/task-cache
        with:
          tag: scope:frontend
          tasks: storybook:build
      - name: Front / Write .env
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          cd packages/twenty-front
          touch .env
          # .env files are parsed as KEY=VALUE pairs, so write the variable with "=".
          echo "REACT_APP_SERVER_BASE_URL=$REACT_APP_SERVER_BASE_URL" >> .env
      - name: Publish to Chromatic
        if: steps.changed-files.outputs.any_changed == 'true'
        run: npx nx run twenty-front:chromatic:ci
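
  # front-task fans out into lint, typecheck, and test. Each task runs only on Nx projects
  # tagged scope:frontend that are affected by the change, via the repo's nx-affected action.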
  front-task:
    timeout-minutes: 30
    runs-on: ubuntu-latest
    env:
      NX_REJECT_UNKNOWN_LOCAL_CACHE: 0
    strategy:
      matrix:
        task: [lint, typecheck, test]
    steps:
      - name: Cancel Previous Runs
        uses: styfle/cancel-workflow-action@0.11.0
        with:
          access_token: ${{ github.token }}
      - name: Fetch custom Github Actions and base branch history
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Check for changed files
        id: changed-files
        uses: tj-actions/changed-files@v11
        with:
          files: |
            packages/twenty-front/**
      - name: Skip if no relevant changes
        if: steps.changed-files.outputs.any_changed == 'false'
        run: echo "No relevant changes. Skipping CI."
      - name: Install dependencies
        if: steps.changed-files.outputs.any_changed == 'true'
        uses: ./.github/workflows/actions/yarn-install
      - name: Front / Restore ${{ matrix.task }} task cache
        if: steps.changed-files.outputs.any_changed == 'true'
        uses: ./.github/workflows/actions/task-cache
        with:
          tag: scope:frontend
          tasks: ${{ matrix.task }}
      - name: Reset .env
        if: steps.changed-files.outputs.any_changed == 'true'
        uses: ./.github/workflows/actions/nx-affected
        with:
          tag: scope:frontend
          tasks: reset:env
      - name: Run ${{ matrix.task }} task
        if: steps.changed-files.outputs.any_changed == 'true'
        uses: ./.github/workflows/actions/nx-affected
        with:
          tag: scope:frontend
          tasks: ${{ matrix.task }}