This PR introduces a profiling feature for our Storybook tests. It also adds a new CI job, `front-sb-test-performance`, that only runs stories suffixed with `.perf.stories.tsx`.

## How it works

The feature wraps any component in an array of React Profiler components that run the tests many times, so that the measured average render time is as reproducible as possible. It is used simply by calling the new `getProfilingStory` util.

Internally it creates a defined number of test runs, separated by an arbitrary waiting time so the CPU can settle and give more stable results. It also performs 3 warm-up and 3 finishing runs, because the first and last renders are always a bit erratic, so only the runs in between are measured.

On the UI side, it displays a table of results:

<img width="515" alt="image" src="https://github.com/twentyhq/twenty/assets/26528466/273d2d91-26da-437a-890e-778cb6c1f993">

On the programmatic side, it stores the results in a div that can then be parsed by the Storybook play function to assert against a defined threshold:

```tsx
play: async ({ canvasElement }) => {
  await findByTestId(
    canvasElement,
    'profiling-session-finished',
    {},
    { timeout: 60000 },
  );

  const profilingReport = getProfilingReportFromDocument(canvasElement);

  if (!isDefined(profilingReport)) {
    return;
  }

  const p95result = profilingReport?.total.p95;

  expect(
    p95result,
    `Component render time is more than p95 threshold (${p95ThresholdInMs}ms)`,
  ).toBeLessThan(p95ThresholdInMs);
},
```
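For illustration, here is a minimal sketch of the underlying idea, not the actual `getProfilingStory` implementation: React's built-in Profiler reports an `actualDuration` for each committed render, and aggregating those durations across repeated runs gives a more stable measurement. The wrapper name, the `getP95` helper, and the aggregation details below are assumptions.

```tsx
import React, { Profiler, ProfilerOnRenderCallback } from 'react';

// Collected render durations, in milliseconds.
const durations: number[] = [];

// React calls onRender after each committed render of the profiled subtree;
// actualDuration is the time spent rendering that commit.
const onRender: ProfilerOnRenderCallback = (_id, _phase, actualDuration) => {
  durations.push(actualDuration);
};

// Hypothetical wrapper: renders the component under test inside a Profiler.
export const ProfiledComponent = ({
  children,
}: {
  children: React.ReactNode;
}) => (
  <Profiler id="profiling-run" onRender={onRender}>
    {children}
  </Profiler>
);

// After all runs, the warm-up and finishing runs would be dropped and the
// remaining durations aggregated (average, p95, ...) into the report.
export const getP95 = (values: number[]): number | undefined => {
  if (values.length === 0) return undefined;
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95));
  return sorted[index];
};
```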