## Context
We recently enabled the option to bypass SSL certificate authority
validation when establishing a connection to PostgreSQL. Previously, if
this validation failed, the connection would fall back to unencrypted
traffic; now it stays encrypted even when the SSL certificate check
fails. In the process, we overlooked a few DataSource setups, which
prompted a review of how DataSources are created in our code.
## Current State
Our DataSource initialization is distributed as follows:
- **Database folder**: Contains the 'core', 'metadata', and 'raw'
DataSources. The 'core' and 'metadata' DataSources manage migrations and
static resolver calls to the database. The 'raw' DataSource is used in
scripts and commands that need to handle both.
- **typeorm.service.ts file**: These DataSources facilitate
multi-schema connections.
## Vision for Discussion
- **SystemSchema (formerly core) DataSource**: Manages system schema
migrations and system resolvers/repos. The 'core' schema will be renamed
to 'system' as the Core API will include parts of the system and
workspace schemas.
- **MetadataSchema DataSource**: Handles metadata schema migrations and
metadata API resolvers/repos.
- **(Dynamic) WorkspaceSchema DataSource**: Will be used in the Twenty
ORM to access a specific workspace schema.
We currently do not support cross-schema joins, so maintaining these
DataSources separately should be feasible. Core API resolvers will
select the appropriate DataSource based on the field context.
- **To be discussed**: The potential need for an AdminDataSource (akin
to 'Raw'), which would be used in commands, setup scripts, and the admin
panel to connect to any database schema without loading any model. This
DataSource should be reserved for cases where utilizing metadata,
system, or workspace entities is impractical.
## In This PR
- Ensuring all existing DataSources are compliant with the SSL update
(sketched after this list).
- Introducing RawDataSource to eliminate the need for declaring new
DataSource() instances in commands.
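A minimal sketch of the SSL option this refers to, assuming TypeORM DataSource options and node-postgres semantics; the env variable names are illustrative, not the exact codebase configuration:

```typescript
import { DataSource } from 'typeorm';

// Hedged sketch: keep TLS enabled while skipping certificate-authority
// validation, instead of falling back to an unencrypted connection.
export const rawDataSource = new DataSource({
  type: 'postgres',
  url: process.env.PG_DATABASE_URL, // illustrative env variable name
  ssl:
    process.env.PG_SSL_ALLOW_SELF_SIGNED === 'true' // illustrative flag
      ? { rejectUnauthorized: false } // encrypted, but CA check is bypassed
      : undefined,
});
```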
## Context
@lucasbordeau introduced a new Yoga plugin that allows us to cache our
requests (👏), see https://github.com/twentyhq/twenty/pull/5189
I'm simply updating the implementation to allow us to use different
cache storage types such as Redis.
Also adding a check so the cache is not used for operations other than
ObjectMetadataItems. A sketch of both changes follows.
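A minimal sketch of the two changes, assuming an illustrative `CacheStorage` shape and driver selection (neither is the exact codebase API):

```typescript
// Hedged sketch: pluggable cache storage plus a guard so that only the
// ObjectMetadataItems operation is cached. Names here are illustrative.
interface CacheStorage {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
}

// In-memory driver; a Redis driver would implement the same interface.
const createMemoryStorage = (): CacheStorage => {
  const store = new Map<string, string>();

  return {
    get: async (key) => store.get(key),
    set: async (key, value) => {
      store.set(key, value);
    },
  };
};

// Only the metadata query is cacheable; everything else bypasses the cache.
const shouldCacheOperation = (operationName?: string | null): boolean =>
  operationName === 'ObjectMetadataItems';
```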
## Test
Locally, the first call takes 340ms; the second takes 30ms with 'redis'
and 13ms with 'memory'.
In this PR I'm introducing a simple custom graphql-yoga plugin to create
a caching mechanism specific to our metadata.
The cache key is made of the workspace id + the workspace cache version;
with this, the cache is automatically invalidated each time a change is
made to the workspace metadata. A sketch of the key follows.
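A minimal sketch of the key composition (the function name is illustrative):

```typescript
// Hedged sketch: because the workspace cache version is bumped on every
// metadata change, including it in the key invalidates stale entries for free.
const buildMetadataCacheKey = (
  workspaceId: string,
  workspaceCacheVersion: string,
): string => `${workspaceId}:${workspaceCacheVersion}`;
```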
New strategy:
- Add a settings field on FieldMetadata. It contains a boolean isIdField
and, for numbers, a precision (see the sketch after this list).
- If isIdField is set, the GraphQL scalar returned will be a GraphQL ID.
This will allow the app to work even for ids that are not uuids.
- Remove the global dateScalar and numberScalar modes. These were not
used.
- Set limit as an Integer.
- Check manually in query-runner mutations that we send a valid id.
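A minimal sketch of the settings shape and the scalar switch; `FieldMetadataSettings` and `resolveScalar` are illustrative names, not the exact codebase identifiers:

```typescript
import { GraphQLID, GraphQLScalarType } from 'graphql';

// Hedged sketch of the settings field; real field names may differ.
type FieldMetadataSettings = {
  isIdField?: boolean;
  precision?: number; // only meaningful for number fields
};

// When the field is flagged as an id, expose it as GraphQL ID so that
// non-uuid ids keep working.
const resolveScalar = (
  settings: FieldMetadataSettings,
  defaultScalar: GraphQLScalarType,
): GraphQLScalarType => (settings.isIdField ? GraphQLID : defaultScalar);
```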
Todo left:
- remove WorkspaceBuildSchemaOptions since this is not used anymore.
Will do in another PR
---------
Co-authored-by: Thomas Trompette <thomast@twenty.com>
Co-authored-by: Weiko <corentin@twenty.com>
## Description
This PR adds a captcha to the login form. One can use any one of three
captcha vendors:
1. Google Recaptcha -
https://developers.google.com/recaptcha/docs/v3#programmatically_invoke_the_challenge
2. HCaptcha -
https://docs.hcaptcha.com/invisible#programmatically-invoke-the-challenge
3. Turnstile -
https://developers.cloudflare.com/turnstile/get-started/client-side-rendering/#execution-modes
### Issue
- #3546
### Environment variables
1. `CAPTCHA_DRIVER` - `google-recaptcha` | `hcaptcha` | `turnstile`
2. `CAPTCHA_SITE_KEY` - site key
3. `CAPTCHA_SECRET_KEY` - secret key
### Engineering choices
1. If only some of the above env variables are provided, the backend
generates an error:
<img width="990" alt="image"
src="https://github.com/twentyhq/twenty/assets/60139930/9fb00fab-9261-4ff3-b23e-2c2e06f1bf89">
Please note that the login/signup form will keep working as expected.
2. I'm using a Captcha guard that intercepts the request. If
`captchaToken` is present in the body and all env variables are set, the
captcha token is verified by the backend through the service.
3. One can use this guard on any resolver to protect it with the captcha
(a minimal sketch follows this list).
4. On the frontend, two hooks, `useGenerateCaptchaToken` and
`useInsertCaptchaScript`, are created. `useInsertCaptchaScript` adds the
respective captcha JS script to the frontend. `useGenerateCaptchaToken`
returns a function that one can use to trigger captcha token generation
programmatically. This allows one to generate a token while keeping the
captcha invisible.
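A minimal sketch of such a guard, assuming a NestJS `CanActivate` and an illustrative `CaptchaService` stub (the real service would call the configured vendor's verify endpoint):

```typescript
import {
  BadRequestException,
  CanActivate,
  ExecutionContext,
  Injectable,
} from '@nestjs/common';

// Illustrative stub: the real implementation would call the vendor's verify
// endpoint with CAPTCHA_SECRET_KEY.
@Injectable()
export class CaptchaService {
  async validate(token: string): Promise<boolean> {
    return token.length > 0;
  }
}

@Injectable()
export class CaptchaGuard implements CanActivate {
  constructor(private readonly captchaService: CaptchaService) {}

  async canActivate(context: ExecutionContext): Promise<boolean> {
    // For GraphQL resolvers the token would come from the GqlExecutionContext
    // args; reading the HTTP body keeps the sketch short.
    const request = context.switchToHttp().getRequest();
    const captchaToken: string | undefined = request.body?.captchaToken;

    // No token present (e.g. captcha not configured): let the request through.
    if (!captchaToken) {
      return true;
    }

    const isValid = await this.captchaService.validate(captchaToken);

    if (!isValid) {
      throw new BadRequestException('Invalid captcha token');
    }

    return true;
  }
}
```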
### Note
This PR contains some changes in unrelated files (indentation, spacing,
quote style, etc.) because I ran "yarn nx fmt:fix twenty-front" and
"yarn nx lint twenty-front -- --fix".
### Screenshots
<img width="869" alt="image"
src="https://github.com/twentyhq/twenty/assets/60139930/a75f5677-9b66-47f7-9730-4ec916073f8c">
---------
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
Co-authored-by: Charles Bochet <charles@twenty.com>
In this PR:
- Follow up on #5170, as we did not take into account users who are not
logged in.
- Only apply the throttler on root fields to avoid performance overhead
(see the sketch below).
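A minimal sketch of a root-field check, using the standard `GraphQLResolveInfo` path; how the throttler itself is wired is not shown here:

```typescript
import { GraphQLResolveInfo } from 'graphql';

// A field is a root field when its resolve path has no parent segment,
// so the throttler can skip nested fields entirely.
const isRootField = (info: GraphQLResolveInfo): boolean =>
  info.path.prev === undefined;
```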
In this PR I'm introducing a new patch on the @graphql-yoga/nestjs
package.
This patch overrides a previous patch that was made to compute the
conditional schema on each request.
Here we use a cache map to compute it only once per workspace schema
cache version (sketched below).
This allows us to have sub-100ms query times.
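A minimal sketch of the cache map, assuming illustrative names (`getSchema`, `buildSchema`):

```typescript
import { GraphQLSchema } from 'graphql';

// Hedged sketch: one schema build per workspace cache version instead of one
// per request.
const schemaCache = new Map<string, GraphQLSchema>();

async function getSchema(
  workspaceId: string,
  cacheVersion: string,
  buildSchema: () => Promise<GraphQLSchema>,
): Promise<GraphQLSchema> {
  const key = `${workspaceId}:${cacheVersion}`;
  const cached = schemaCache.get(key);

  if (cached) {
    return cached; // skip the expensive conditional schema computation
  }

  const schema = await buildSchema();
  schemaCache.set(key, schema);

  return schema;
}
```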
An error was recently introduced in the sync of fieldMetadata. This PR
fixes it.
Additionally, we are enabling emails for trialing and past_due
workspaces. There is ongoing work to introduce a more robust
activationStatus on workspace.
While trying to migrate a workspace from 0.3.3 to 0.10.0, we faced an
issue with the script that migrates the default-value format.
This PR fixes it.
We really need to add tests on this part ;)
Previously we had to create a separate API key to give the Chrome
extension access so we could make calls to the DB. This PR includes
logic to initiate an OAuth flow with the PKCE method, which redirects to
the `Authorise` screen to grant access to server tokens (a PKCE sketch
follows the list below).
Implemented in this PR:
1. Make `redirectUrl` a non-nullable parameter.
2. Add `NODE_ENV` to the environment variable service.
3. New env variable `CHROME_EXTENSION_REDIRECT_URL` on the server side.
4. Strict checks for `redirectUrl`.
5. Try/catch blocks on utils DB query methods.
6. Refactor the Apollo Client to handle the `unauthorized` condition.
7. Input field to enter the server URL (for self-hosting).
8. State to show the user whether it's already connected.
9. Show an error if the OAuth flow is cancelled by the user.
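A minimal sketch of PKCE verifier/challenge generation (S256) with the Web Crypto API, for illustration only; the extension's actual code is not reproduced here:

```typescript
// Base64url-encode raw bytes (no padding), as PKCE requires.
const base64UrlEncode = (bytes: Uint8Array): string =>
  btoa(String.fromCharCode(...bytes))
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');

// Generate the code_verifier and its S256 code_challenge.
export const generatePkcePair = async (): Promise<{
  codeVerifier: string;
  codeChallenge: string;
}> => {
  const verifierBytes = crypto.getRandomValues(new Uint8Array(32));
  const codeVerifier = base64UrlEncode(verifierBytes);

  const digest = await crypto.subtle.digest(
    'SHA-256',
    new TextEncoder().encode(codeVerifier),
  );
  const codeChallenge = base64UrlEncode(new Uint8Array(digest));

  return { codeVerifier, codeChallenge };
};
```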
Follow-up PR:
- Renew-token logic
---------
Co-authored-by: Félix Malfait <felix@twenty.com>
Refactored the code to introduce two different concepts:
- AuditLogs (immutable, raw data)
- TimelineActivities (user-friendly, transformed data)
Still some work needed:
- Add message, files, calendar events to timeline (~2 hours if done
naively)
- Refactor repositories to try to abstract concepts where we can (tbd,
wait for Twenty ORM)
- Introduce ability to display child timelines on parent timeline with
filtering (~2 days)
- Improve UI: add links to open note/task, improve diff display, etc.
(half a day)
- Decide the path forward for Task vs. Notes: either introduce a new
field type "Record Type" and start going in that direction, or split
into two objects?
- Trigger updates when a field is changed (will be solved by real-time /
websockets: 2 weeks)
- Integrate behavioral events (1 day for POC, 1 week for
clean/documented)
<img width="1248" alt="Screenshot 2024-04-12 at 09 24 49"
src="https://github.com/twentyhq/twenty/assets/6399865/9428db1a-ab2b-492c-8b0b-d4d9a36e81fa">
When the distant table does not have an id column, syncing does not
work.
Today the check is only made after the foreign table has been created
locally. We should do it first, so we avoid creating a foreign table
that fails right after (see the sketch below).
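A minimal sketch of the up-front check, with illustrative names:

```typescript
// Hedged sketch: validate the distant table before creating the local foreign
// table, instead of discovering the missing column afterwards.
function assertHasIdColumn(distantTableColumns: { name: string }[]): void {
  const hasIdColumn = distantTableColumns.some(
    (column) => column.name === 'id',
  );

  if (!hasIdColumn) {
    throw new Error('Remote table synchronization requires an `id` column.');
  }
}
```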
Co-authored-by: Thomas Trompette <thomast@twenty.com>
## Context
We have recently added an event listener to create audit logs on object
updates. However, we have only created the structure (relations on event
standard objects) for Company, Person, Opportunity, and custom objects.
There is a larger effort in #4936 to refactor this.
For now, we are disabling audit logging on all other objects.
## How
Add the @IsNotAuditLogged() decorator on all standard objects except
Company, Person, and Opportunity (a minimal sketch follows).
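A minimal sketch of how such a decorator could work, assuming a reflect-metadata flag read by the event listener; the actual implementation in the codebase may differ:

```typescript
import 'reflect-metadata';

const IS_NOT_AUDIT_LOGGED = Symbol('isNotAuditLogged');

// Hedged sketch: a class decorator that flags an object as excluded from
// audit logging.
function IsNotAuditLogged(): ClassDecorator {
  return (target) => {
    Reflect.defineMetadata(IS_NOT_AUDIT_LOGGED, true, target);
  };
}

@IsNotAuditLogged()
class TaskObject {}

// The audit-log event listener can then skip flagged objects.
const isExcluded =
  Reflect.getMetadata(IS_NOT_AUDIT_LOGGED, TaskObject) === true;
console.log(isExcluded); // true
```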
## Context
We recently introduced this verification, but we didn't take into
account self-hosted instances that might not use billing.
## Test
Tested locally with (the implied rule is sketched after the list):
- a new workspace and a new account
- an existing workspace with a new account, billing not enabled, and
status incomplete => OK
- an existing workspace with a new account, billing enabled, and status
incomplete => NOK
- an existing workspace with a new account, billing enabled, and status
active => OK
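A minimal sketch of the rule this test matrix implies, with illustrative names; this is an inference from the tests above, not the exact codebase check:

```typescript
type SubscriptionStatus = 'active' | 'trialing' | 'past_due' | 'incomplete';

// Hedged sketch: skip the subscription check entirely when billing is not
// enabled (e.g. self-hosting); otherwise reject incomplete subscriptions.
const canAccessWorkspace = (
  isBillingEnabled: boolean,
  status: SubscriptionStatus,
): boolean => (isBillingEnabled ? status !== 'incomplete' : true);
```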
We will require a remote table entity to map the distant table name to
the local foreign table name.
Introducing the entity:
- new source of truth to know whether a table is synced or not
- created synchronously at the same time as the metadata and the foreign
table
Adding a few more changes:
- exceptions rather than errors, so the user can see them
- the `pluralize` library, which will allow us to stop adding the
`Remote` suffix to names (see the sketch below)
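A minimal sketch of the `pluralize` usage; the variable names are illustrative:

```typescript
import pluralize from 'pluralize';

// Hedged sketch: derive distinct singular/plural names for a remote table
// instead of disambiguating with a `Remote` suffix.
const distantTableName = 'company';

const nameSingular = pluralize.singular(distantTableName); // 'company'
const namePlural = pluralize.plural(distantTableName); // 'companies'
```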
---------
Co-authored-by: Thomas Trompette <thomast@twenty.com>
## Context
- Rename remaining V2 services.
- Delete messages in the DB when the Gmail history tells us they've been
deleted. I removed the logic where we stored those ids in a cache, since
it was a bit overkill: we don't need to query Gmail and can use those
ids directly. The strategy is to delete the message channel message
association of the current channel, not the message or the thread, since
they can still be linked to other channels. However, we will then need
to call the threadCleaner service on the workspace to remove orphan
threads and non-associated messages (a sketch of this strategy follows).
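A minimal sketch of the strategy, with illustrative repository and service names standing in for the actual codebase:

```typescript
// The association repository and thread cleaner are illustrative stand-ins.
interface MessageChannelMessageAssociationRepository {
  deleteByChannelAndExternalIds(
    messageChannelId: string,
    messageExternalIds: string[],
  ): Promise<void>;
}

interface ThreadCleanerService {
  cleanWorkspaceThreads(workspaceId: string): Promise<void>;
}

export async function handleGmailDeletions(
  deletedMessageExternalIds: string[],
  messageChannelId: string,
  workspaceId: string,
  associationRepository: MessageChannelMessageAssociationRepository,
  threadCleaner: ThreadCleanerService,
): Promise<void> {
  // Delete only the association with the current channel: the message and its
  // thread may still be linked to other channels.
  await associationRepository.deleteByChannelAndExternalIds(
    messageChannelId,
    deletedMessageExternalIds,
  );

  // Remove orphan threads and messages no longer associated with any channel.
  await threadCleaner.cleanWorkspaceThreads(workspaceId);
}
```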
Note: deletion for full-sync is a bit tricky because we would need the
full list of message ids to compare with the DB and make sure we don't
over-delete. Currently, to save memory, we don't keep a variable that
holds all ids; we flush it after each page. The easier solution would be
to wipe everything before each full sync, but that's probably not great
for the user experience if they are currently manipulating messages,
since a full-sync can happen without user intervention (e.g. if a
partial sync fails due to the historyId being invalidated by Google for
some reason).
## Context
The full-sync job was enqueued within a transaction, which means it
could be executed before the transaction was committed, i.e. while the
connected account had not been created yet.
This PR re-arranges the code a bit to avoid this.
cc @bosiraphael, thanks for flagging this!
## Context
A recent PR introduced a verifyTransientToken call inside the
GoogleAPIsProviderEnabledGuard guard. It is used to extract the
workspaceId from the token. This works fine for the first call sent to
Google; however, the callback goes through the same guard, which causes
an issue because the transientToken is missing from the callback.
IMHO the same guard shouldn't be used by the callback, but for the time
being I'm adding a check to prevent using the feature flag when the
transientToken is absent. In fact, it is present in the request, just
not under the same key. Because the scope is only relevant for the first
call, I'm simply adding a check there.
## Context
Currently the calendar scope is bound to an env variable. We want to
roll out this feature to some users, so this PR adds a check on the
existing IS_CALENDAR_ENABLED feature flag (see the sketch below).
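A minimal sketch of scope selection based on the flag; the function name is illustrative and the exact scopes used by the app may differ:

```typescript
// Hedged sketch: include the calendar scope only for workspaces with the
// IS_CALENDAR_ENABLED feature flag active.
const getGoogleApiScopes = (isCalendarEnabled: boolean): string[] => [
  'https://www.googleapis.com/auth/gmail.readonly',
  ...(isCalendarEnabled
    ? ['https://www.googleapis.com/auth/calendar.readonly']
    : []),
];
```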