PR Summary:
1. Added the `Twenty Shared` package to centralize utilities, as mentioned
in #8942
2. Optimized `getImageAbsoluteURI.ts` to handle edge cases
![image](https://github.com/user-attachments/assets/c72a3061-6eba-46b8-85ac-869f06bf23c0)
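For context on the second point, here is the kind of edge-case handling meant (an illustrative sketch only; the real signature and URL-joining rules in the shared package may differ):

```tsx
// Illustrative sketch — not the actual implementation in the shared package.
export const getImageAbsoluteURI = (
  imageUrl: string | null | undefined,
  serverBaseUrl: string,
): string | null => {
  // Edge case: missing or empty value.
  if (!imageUrl) {
    return null;
  }

  // Edge case: the URL is already absolute, return it untouched.
  if (imageUrl.startsWith('http://') || imageUrl.startsWith('https://')) {
    return imageUrl;
  }

  // Otherwise join the relative path with the server base URL,
  // avoiding duplicated or missing slashes.
  const base = serverBaseUrl.replace(/\/$/, '');
  const path = imageUrl.startsWith('/') ? imageUrl : `/${imageUrl}`;
  return `${base}${path}`;
};
```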
---------
Co-authored-by: Antoine Moreaux <moreaux.antoine@gmail.com>
Co-authored-by: Charles Bochet <charles@twenty.com>
We will remove the `twenty-postgres` image that was used for local
development and only use `twenty-postgres-spilo` (which we use in prod),
bringing the development environment closer to prod and avoiding having
to maintain two images.
Instead of provisioning the superuser after the db initialization, we
directly rely on the superuser provided by Spilo for simplicity. We also
introduce a change that tries to create the right database (`default` or
`test`) based on the context.
How to test:
```
docker build -t twentycrm/twenty-postgres-spilo:latest -f ./packages/twenty-docker/twenty-postgres-spilo/Dockerfile .
docker images --no-trunc | grep twenty-postgres-spilo
# Run Postgres on Docker:
docker run \
--name twenty_pg \
-e PGUSER_SUPERUSER=twenty \
-e PGPASSWORD_SUPERUSER=twenty \
-e ALLOW_NOSSL=true \
-v twenty_db_data:/home/postgres/pgdata \
-p 5432:5432 \
REPLACE_WITH_IMAGE_ID
```
First step of https://github.com/twentyhq/twenty/issues/6868
Adds `min`/`max` aggregate queries for DATETIME fields and
`min`/`max`/`avg`/`sum` aggregate queries for NUMBER fields.
(Count distinct operations and composite fields such as CURRENCY
will be handled in a future PR.)
<img width="1422" alt="Capture d’écran 2024-11-06 à 15 48 46"
src="https://github.com/user-attachments/assets/4bcdece0-ad3e-4536-9720-fe4044a36719">
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
Co-authored-by: Weiko <corentin@twenty.com>
Run the CI integration tests in sync mode,
add the option to run them without a db reset,
and clean up all the useless integration tests.
---------
Co-authored-by: guillim <guillaume@twenty.com>
https://github.com/twentyhq/private-issues/issues/75
**TLDR**
Add the twenty-tinybird package, which contains the latest Tinybird
datasources and pipes used in analytics.
This new version of the API endpoints, pipes, and datasources is
inspired by the implementation of dub.co
(https://github.com/dubinc/dub/blob/main/packages/tinybird/).
It contains webhook analytics, serverless function duration analytics,
and serverless function error count analytics.
**In order to test**
- Follow the instructions in the README.md in twenty-tinybird, using your
admin token from twenty_analytics_playground, if you want to modify them.
- For a better experience, add the Tinybird extension for VS Code.
- If you want more info about datasources and pipes, please read the
Tinybird docs.
---------
Co-authored-by: Félix Malfait <felix@twenty.com>
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
This PR was created by [GitStart](https://gitstart.com/) to address the
requirements from this ticket:
[TWNTY-7539](https://clients.gitstart.com/twenty/5449/tickets/TWNTY-7539).
---
### Description
- Move the utilities/dimensions from twenty-front to twenty-ui and
update imports

Fixes twentyhq/private-issues#79
---------
Co-authored-by: gitstart-twenty <gitstart-twenty@users.noreply.github.com>
Co-authored-by: Charles Bochet <charles@twenty.com>
TLDR:
Secure the connection between Tinybird and Twenty using JWT when
accessing datasources from Tinybird.
Solves:
https://github.com/twentyhq/private-issues/issues/73
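Under the hood, the idea is that the server signs a short-lived JWT scoped to the relevant Tinybird resources. A minimal sketch of what that could look like (assuming the `jsonwebtoken` package and Tinybird's documented JWT payload shape; the helper name and scope values are illustrative, not the exact implementation):

```tsx
import { sign } from 'jsonwebtoken';

// Illustrative only: sign a token limited to reading a single pipe,
// using the workspace admin token as the signing secret.
export const buildTinybirdJwt = (
  tinybirdWorkspaceId: string,
  tinybirdAdminToken: string,
  pipeName: string,
): string =>
  sign(
    {
      workspace_id: tinybirdWorkspaceId,
      name: `twenty_analytics_${pipeName}`,
      scopes: [{ type: 'PIPES:READ', resource: pipeName }],
    },
    tinybirdAdminToken,
    { expiresIn: '1h' },
  );
```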
In order to test:
1. Set ANALYTICS_ENABLED to true
2. Set TINYBIRD_JWT_TOKEN to the ADMIN token from the workspace
twenty_analytics_playground
3. Set TINYBIRD_JWT_TOKEN to the datasource token or your admin token from
the twenty_analytics_playground workspace
4. Create a webhook in Twenty and set which events it needs to track
5. Run twenty-worker in order to make the webhooks work
6. Perform some actions in order to populate the data
7. Go to Settings > Webhook > your webhook; the statistics section
should be displayed
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
This Pull Request addresses the need to optimize our Continuous
Integration (CI) workflows for Playwright tests and release processes.
The changes aim to reduce unnecessary resource usage by conditionally
executing jobs based on relevant file changes, implemented with the
https://github.com/tj-actions/changed-files step.
## Change log
- Updated `ci-test-docker-compose.yaml` and `ci-chrome-extension.yaml` to
check for changed files before running tests.
- Updated `ci-front.yaml`, `ci-utils.yaml`, `ci-website.yaml`, and
`ci-server.yaml` to check for changed files before running tests.
- Enhanced `playwright.yml` to skip unnecessary tests based on file
changes.
### Summary
This PR introduces several integration tests, a mix of manually written
tests and those generated using the `generate-integration-tests` Python
script located in the `scripts` folder.
### Tests Added:
- **Authentication tests**: Validating login, registration, and token
handling.
- **FindMany queries**: Fetching multiple records for all existing
entities that do not require input arguments.
### How the Integration Tests Work:
- A `setupTest` function is called during the Jest test run. This
function initializes a test instance of the application and exposes it
on a dedicated port.
- Since tests are executed in isolated workers, they do not have direct
access to the in-memory app instance. Instead, the tests query the
application through the exposed port.
- A static accessToken is used; it has a very long expiration time
(365 years), so it effectively never expires.
- The queries are executed, and the results are validated against
expected outcomes (see the sketch below).
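A minimal sketch of what such a test looks like (the port, token constant, and query below are illustrative placeholders, not the actual values used in the setup):

```tsx
import request from 'supertest';

// Illustrative values — the real port, token, and queries live in the test setup.
const APP_URL = 'http://localhost:3000';
const ACCESS_TOKEN = 'static-long-lived-access-token';

describe('people resolver (integration)', () => {
  it('finds many people', async () => {
    const response = await request(APP_URL)
      .post('/graphql')
      .set('Authorization', `Bearer ${ACCESS_TOKEN}`)
      .send({
        query: `
          query People {
            people(first: 10) {
              edges {
                node {
                  id
                }
              }
            }
          }
        `,
      })
      .expect(200);

    // Validate the result shape against the expected outcome.
    expect(response.body.errors).toBeUndefined();
    expect(response.body.data.people.edges.length).toBeGreaterThan(0);
  });
});
```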
### Current State and Next Steps:
- These tests currently run using the existing development seed data. We
plan to introduce more comprehensive test data using `faker` to improve
coverage.
- At the moment, the only mutation tests implemented are for
authentication. Future updates should include broader mutation testing
for other entities.
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
Continuation of #6644
Now the Chromium browser is used in workspace tests instead of Firefox,
and screenshots after each test are properly saved in one folder whether
run from the IDE or from the terminal using the `yarn test:e2e` command.
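For reference, the relevant Playwright settings could look roughly like this (a sketch under assumptions; the output directory name and options are not copied from the repo):

```tsx
// playwright.config.ts — illustrative sketch only.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  // Keep all artifacts (screenshots, traces) in a single folder,
  // whether tests are launched from the IDE or the terminal.
  outputDir: './test-results',
  use: {
    screenshot: 'on',
  },
  projects: [
    {
      // Chromium instead of Firefox.
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
  ],
});
```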
In this PR, I'm simplifying storybook setup:
1) Remove the build --test configuration that prevents autodocs. We are not
using autodocs at all (the dev experience is not good enough), so I have
completely disabled it.
2) Clarify `serve` vs `test` vs `serve-and-test` configurations
After this PR:
- you can serve storybook in two modes: `npx nx run
twenty-front:storybook:serve:dev` and `npx nx run
twenty-front:storybook:serve:static`
- you can run tests against an already served storybook (this is useful
in dev so you don't have to rebuild every time to run tests): `npx nx run
twenty-front:storybook:test`
- you can combine both: `npx nx run
twenty-front:storybook:serve-and-test:static`
Here is a fix for https://github.com/twentyhq/twenty/issues/5163
I tested it on another repo and it should work as intended this time
around.
I've created two workflows:
- One that creates a PR
- A second that watches for PR merges with specific labels
- I also check the author of the PR to make sure it was created by a bot
You can now disable the creation of a draft release
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
- Fixes #5504
- Fixes #5503
- Return 404 when the page does not exist
- Modified the footer in order to align it properly
- Removed "noticed something to change" in each table of content
- Fixed the URLs of the edit module
- Added the edit module to Developers
- Fixed header style on the REST API page.
- Edited the README to point to Developers
- Fixed selected state when clicking on sidebar elements
---------
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
- Created a congratulations bot:
<img width="939" alt="Screenshot 2024-05-14 at 12 47 13"
src="https://github.com/twentyhq/twenty/assets/102751374/5138515f-fe4d-4c6d-9c7a-0240accbfca9">
- Modified OG image
- Added png extension to OG image route
To be noted: the bot will not work until the new API route is
deployed. Please check the OG image with the Cloudflare cache.
---------
Co-authored-by: Ady Beraud <a.beraud96@gmail.com>
TL;DR:
- removed `--configuration={args.scope}` from `storybook:static:test`
for the `storybook:static` part, as it was making `front-sb-test` jobs
in CI not reuse the cache from the `front-sb-build` job and re-build
storybook every time.
- replaced it with a new `test` configuration which optimizes storybook
build for tests and builds storybook 2x faster.
## Fix storybook:build cache usage in CI
`storybook:static:test` executes two scripts in parallel:
1. `storybook:static`, which depends on `storybook:build`
1.a. it builds storybook first with `storybook:build`, the output
directory is `storybook-static`.
1.b. then it launches an `http-server`, using what has been built in
`storybook-static`
2. `storybook:test` to execute tests (needs the storybook http-server to
be running)
When passing `--configuration=pages` or `--configuration=modules` to
`storybook:static` from step 1, those configurations are passed to the
`storybook:build` script from step 1.a as well.
But for Nx, `storybook:build` and `storybook:build --configuration=pages`
(or `modules`) are not the same command, so one does not reuse
the cache of the other, because they could output completely different
things.
As `front-sb-test` jobs are passing `--configuration={args.scope}` to
`storybook:static`, the cache of the previously executed
`storybook:build` (from `front-sb-build`) is not reused and therefore
each job re-builds Storybook with its own scope, which increases CI
time.
### Solution
- Removed scope configurations from `storybook:static` and
`storybook:build` scripts to avoid confusion.
- `storybook:test` and `storybook:dev` can keep scope configurations as
they can be useful and this doesn't impact storybook build cache in CI.
### Improve Storybook build time for testing
Added the `test` configuration to `storybook:build` and
`storybook:static` which makes Storybook build time 2x faster. It
disables addons that slow down build time and are not used in tests.
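As an illustration of the idea (a sketch under assumptions — the flag name and addon list are not the ones actually used in the repo), a Storybook `main.ts` can drop heavy addons when building for tests:

```tsx
// .storybook/main.ts — illustrative sketch only.
import type { StorybookConfig } from '@storybook/react-vite';

// Hypothetical flag set by the `test` configuration.
const isTestBuild = process.env.STORYBOOK_TEST_BUILD === 'true';

const config: StorybookConfig = {
  stories: ['../src/**/*.stories.@(ts|tsx)'],
  addons: [
    '@storybook/addon-essentials',
    // Docs/visual addons slow the build down and are not needed by tests.
    ...(isTestBuild ? [] : ['@storybook/addon-a11y', 'storybook-addon-pseudo-states']),
  ],
};

export default config;
```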
This PR introduces a Profiling feature for our Storybook tests.
It also implements a new CI job, front-sb-test-performance, that only
runs stories suffixed with `.perf.stories.tsx`.
## How it works
It allows wrapping any component in an array of React Profiler
components that run the tests many times, to get the most reproducible
average render time possible.
It is used by simply calling the new `getProfilingStory` util.
Internally it creates a defined number of tests, separated by an
arbitrary waiting time to let the CPU give more stable results.
It does 3 warm-up and 3 finishing runs, because the first and last
renders are always a bit erratic, so we only measure the runs in between.
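The underlying mechanism is roughly the following (a sketch of the idea only, not the actual `getProfilingStory` implementation):

```tsx
// Sketch: wrap the component under test in React.Profiler and collect render durations.
import React, { Profiler, ReactNode } from 'react';

export const ProfilingWrapper = ({
  id,
  children,
  onDuration,
}: {
  id: string;
  children: ReactNode;
  onDuration: (durationMs: number) => void;
}) => (
  <Profiler
    id={id}
    // actualDuration is the time spent rendering this committed update.
    onRender={(_id, _phase, actualDuration) => onDuration(actualDuration)}
  >
    {children}
  </Profiler>
);
```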
On the UI side it shows a table of results:
<img width="515" alt="image"
src="https://github.com/twentyhq/twenty/assets/26528466/273d2d91-26da-437a-890e-778cb6c1f993">
On the programmatic side, it stores the result in a div that can then be
parsed by the Storybook play function, to assert against a defined threshold.
```tsx
play: async ({ canvasElement }) => {
  // Wait for the profiling session to complete (up to 60s).
  await findByTestId(
    canvasElement,
    'profiling-session-finished',
    {},
    { timeout: 60000 },
  );

  const profilingReport = getProfilingReportFromDocument(canvasElement);

  if (!isDefined(profilingReport)) {
    return;
  }

  // Assert that the p95 render time stays under the configured threshold.
  const p95result = profilingReport?.total.p95;

  expect(
    p95result,
    `Component render time is more than p95 threshold (${p95ThresholdInMs}ms)`,
  ).toBeLessThan(p95ThresholdInMs);
},
```