## Bug Description
We are facing a bug when reCAPTCHA is enabled.
To reproduce:
- Create your recaptcha: https://www.google.com/recaptcha/about/
- Update your server .env with the following variables:
```
CAPTCHA_SECRET_KEY=REPLACE_ME
CAPTCHA_SITE_KEY=REPLACE_ME
CAPTCHA_DRIVER=google-recaptcha
```
- Go to the login page, enter an existing user email and hit 'Reset your
password'.
- Add a console.log in emailPasswordResetLink in auth.resolver.ts to get
the token that would be sent by email if you don't have the mailer set up.
- Browse: /reset-password/{passwordToken}
- Update the password:
<img width="1446" alt="image"
src="https://github.com/user-attachments/assets/dd5b077f-293e-451a-8630-22d24ac66c42">
- See that the token is invalid
You should see two calls in your browser's network tab: a successful one
to update the password and another to log you in. This second call
(Challenge) does not have the captcha token provided, but it should.
## Fix
- Refresh the captcha token on page load
- Provide it to the Challenge GraphQL call
This PR fixes a few bugs on TwentyORM:
- fix many-to-one relations that were not properly queried
- fix many-to-one relations that were not properly parsed
- compute the datasource (or use the cached one) at run-time instead of
using the injected one, which could be outdated
We still need to refactor it for simplicity: the APIs feel too complex
and we have too many cache layers. The relation computation part is also
very complex and bug-prone.
### Description
This PR introduces a custom ESLint rule named
`inject-workspace-repository`. The purpose of this rule is to enforce
naming conventions for files and classes that use the
`@InjectWorkspaceRepository` decorator or include services ending with
`WorkspaceService` in their constructors.
### Rule Overview
The new ESLint rule checks for the following conditions:
1. **File Naming**:
- Only files ending with `.service.ts` or `.workspace-service.ts` are
checked.
- If a file contains a class using the `@InjectWorkspaceRepository`
decorator or a service ending with `WorkspaceService` in the
constructor, the file name must end with `.workspace-service.ts`.
2. **Class Naming**:
- Classes that use the `@InjectWorkspaceRepository` decorator or include
services ending with `WorkspaceService` in their constructors must have
names that end with `WorkspaceService`.
### How It Works
The rule inspects each TypeScript file to ensure that the naming
conventions are adhered to. It specifically looks for:
- Constructor parameters with the `@InjectWorkspaceRepository`
decorator.
- Constructor parameters with a type annotation ending with
`WorkspaceService`.
When such parameters are found, it checks the class name and the file
name to ensure they conform to the expected patterns.
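For reference, a condensed sketch of how such a rule can be written with the `@typescript-eslint/utils` helpers is shown below. The rule name and report messages come from this description, but the code itself (including the docs URL) is an illustrative assumption rather than the actual implementation.
```typescript
import { ESLintUtils, TSESTree } from '@typescript-eslint/utils';

const createRule = ESLintUtils.RuleCreator((name) => `https://example.com/rules/${name}`); // hypothetical docs URL

export const rule = createRule({
  name: 'inject-workspace-repository',
  meta: {
    type: 'suggestion',
    docs: { description: 'Enforce WorkspaceService naming conventions' },
    messages: {
      invalidClassName: "Class name should end with 'WorkspaceService'.",
      invalidFileName: "File name should end with '.workspace-service.ts'.",
    },
    schema: [],
  },
  defaultOptions: [],
  create(context) {
    const fileName = context.getFilename();

    // Only .service.ts / .workspace-service.ts files are checked.
    if (!fileName.endsWith('.service.ts') && !fileName.endsWith('.workspace-service.ts')) {
      return {};
    }

    return {
      ClassDeclaration(node: TSESTree.ClassDeclaration) {
        const ctor = node.body.body.find(
          (member): member is TSESTree.MethodDefinition =>
            member.type === 'MethodDefinition' && member.kind === 'constructor',
        );

        // A constructor parameter qualifies if it carries @InjectWorkspaceRepository
        // or is typed as a *WorkspaceService.
        const usesWorkspaceRepository = ctor?.value.params.some((param) => {
          if (param.type !== 'TSParameterProperty') return false;

          const hasDecorator = (param.decorators ?? []).some(
            (decorator) =>
              decorator.expression.type === 'CallExpression' &&
              decorator.expression.callee.type === 'Identifier' &&
              decorator.expression.callee.name === 'InjectWorkspaceRepository',
          );

          const type =
            param.parameter.type === 'Identifier'
              ? param.parameter.typeAnnotation?.typeAnnotation
              : undefined;
          const hasServiceType =
            type?.type === 'TSTypeReference' &&
            type.typeName.type === 'Identifier' &&
            type.typeName.name.endsWith('WorkspaceService');

          return hasDecorator || hasServiceType;
        });

        if (!usesWorkspaceRepository) return;

        if (!node.id?.name.endsWith('WorkspaceService')) {
          context.report({ node, messageId: 'invalidClassName' });
        }
        if (!fileName.endsWith('.workspace-service.ts')) {
          context.report({ node, messageId: 'invalidFileName' });
        }
      },
    };
  },
});
```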
### Example Code
#### Valid Cases
1. **Correct File and Class Name with Decorator**:
```typescript
// Filename: my.workspace-service.ts
class MyWorkspaceService {
  constructor(@InjectWorkspaceRepository() private repository) {}
}
```
2. **Service Dependency**:
```typescript
// Filename: another.workspace-service.ts
class AnotherWorkspaceService {
  constructor(private myWorkspaceService: MyWorkspaceService) {}
}
```
#### Invalid Cases
1. **Incorrect Class Name**:
```typescript
// Filename: my.workspace-service.ts
class MyService {
  constructor(@InjectWorkspaceRepository() private repository) {}
}
// Error: Class name should end with 'WorkspaceService'.
```
2. **Incorrect File Name**:
```typescript
// Filename: my.service.ts
class MyWorkspaceService {
  constructor(@InjectWorkspaceRepository() private repository) {}
}
// Error: File name should end with '.workspace-service.ts'.
```
3. **Incorrect File and Class Name**:
```typescript
// Filename: my.service.ts
class MyService {
  constructor(@InjectWorkspaceRepository() private repository) {}
}
// Error: Class name should end with 'WorkspaceService'.
// Error: File name should end with '.workspace-service.ts'.
```
4. **Incorrect File Type**:
```typescript
// Filename: another.service.ts
class AnotherService {
  constructor(private myWorkspaceService: MyWorkspaceService) {}
}
// Error: Class name should end with 'WorkspaceService'.
// Error: File name should end with '.workspace-service.ts'.
```
5. **Incorrect Class Name with Dependency**:
```typescript
// Filename: another.workspace-service.ts
class AnotherService {
  constructor(private myWorkspaceService: MyWorkspaceService) {}
}
// Error: Class name should end with 'WorkspaceService'.
```
### First step
This rule is only a warning for now; once we have migrated all the code
that needs to be migrated, we will move it from `warn` to `error`.
Fix #6309
Co-authored-by: Charles Bochet <charles@twenty.com>
### Overview
This PR builds upon #5153, adding the ability to get a repository for
custom objects. The `entitySchema` is now generated for both standard
and custom objects based on metadata stored in the database instead of
the decorated `WorkspaceEntity` in the code. This change ensures that
standard objects with custom fields and relations are supported, as well
as custom objects.
### Implementation Details
#### Key Changes:
- **Dynamic Schema Generation:** The `entitySchema` for standard and
custom objects is now dynamically generated from the metadata stored in
the database. This shift allows for greater flexibility and
adaptability, particularly for standard objects with custom fields and
relations.
- **Custom Object Repository Retrieval:** A repository for a custom
object can be retrieved using `TwentyORMManager` based on the object's
name. Here's an example of how this can be achieved:
```typescript
const repository = await this.twentyORMManager.getRepository('custom');
/*
 * The `repository` variable will be typed as follows, ensuring that standard
 * fields and relations are properly typed:
 * const repository: WorkspaceRepository<CustomWorkspaceEntity & {
 *   [key: string]: any;
 * }>
 */
const res = await repository.find({});
```
Fix #6179
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
Co-authored-by: Weiko <corentin@twenty.com>
## Context
An object should always have a labelIdentifier (at the very least its
primary key). If the associated field is deleted by a user, it will break
the app. Ideally we should handle that at the DB level, but we don't have
a FK for this column yet.
In the meantime, I'm adding a validation check in the backend. Note that
this is already handled on the FE side, since the "archive/delete"
buttons don't appear for such fields and you need to reassign the label
identifier to another field first, which is the desired behaviour.
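As a rough illustration of what the backend check amounts to (the `labelIdentifierFieldMetadataId` property and the function name are assumptions for this sketch, not necessarily the actual code):
```typescript
// Illustrative sketch only: reject deleting or archiving a field that is
// currently used as the object's label identifier.
const assertFieldIsNotLabelIdentifier = (
  fieldMetadataId: string,
  objectMetadata: { labelIdentifierFieldMetadataId: string | null }, // assumed shape
): void => {
  if (objectMetadata.labelIdentifierFieldMetadataId === fieldMetadataId) {
    throw new Error(
      'Cannot delete a field used as the label identifier; reassign it to another field first.',
    );
  }
};
```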
## Context
LabelIdentifier and ImageIdentifier are metadata info attached to
objectMetadata that are used to display a record in a more readable way.
Those columns point to existing fields that are part of the object.
For example, for a relation picker of a person, we will show a record
using the "name" labelIdentifier and the "avatarUrl" imageIdentifier.
<img width="215" alt="Screenshot 2024-07-11 at 18 45 51"
src="https://github.com/twentyhq/twenty/assets/1834158/488f8294-0d7c-4209-b763-2499716ef29d">
Currently, the FE has specific logic for the company and people objects,
and we have a way to update this value via the API for custom objects,
but the code is not flexible enough to change it for other standard objects.
This PR updates the WorkspaceEntity API so we can now provide the
labelIdentifier and imageIdentifier in the WorkspaceEntity decorator.
Example:
```typescript
@WorkspaceEntity({
  standardId: STANDARD_OBJECT_IDS.activity,
  namePlural: 'activities',
  labelSingular: 'Activity',
  labelPlural: 'Activities',
  description: 'An activity',
  icon: 'IconCheckbox',
  labelIdentifierStandardId: ACTIVITY_STANDARD_FIELD_IDS.title,
})
@WorkspaceIsSystem()
export class ActivityWorkspaceEntity extends BaseWorkspaceEntity {
  @WorkspaceField({
    standardId: ACTIVITY_STANDARD_FIELD_IDS.title,
    type: FieldMetadataType.TEXT,
    label: 'Title',
    description: 'Activity title',
    icon: 'IconNotes',
  })
  title: string;

  // ...
```
## Context
We've created a yoga (GraphQL server) hook that catches requests and
caches them when needed. In practice, we use it on the "objects" query
because it is often queried by the FE and should never return something
different unless the schema has been intentionally changed by the user
when editing their data model (updating objects, fields, etc.).
The issue here is that we always cache the response regardless of its
result, even when it fails. This PR fixes that behaviour by only caching
the query response if it is successful.
I'm also fixing the cache key: the signature lets callers pass multiple
operations, but the cache key was not taking this into account (we always
use it with only one operation, but we might have issues in the future
because one operation's response could overwrite the cached response of
another). Now the cache key contains the name of the operation as well.
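A minimal sketch of the two changes (names are illustrative, not the actual hook code): skip caching when the execution result carries errors, and include the operation name in the cache key.
```typescript
import { createHash } from 'crypto';

// Hypothetical key format: workspace + operation name + hash of the query text.
const computeCacheKey = (workspaceId: string, operationName: string, query: string) =>
  `graphql:${workspaceId}:${operationName}:${createHash('sha256').update(query).digest('hex')}`;

const maybeCacheResponse = async (
  cacheStorage: { set: (key: string, value: unknown) => Promise<void> },
  cacheKey: string,
  executionResult: { errors?: readonly unknown[] },
) => {
  // Only successful responses are cached; failed ones must not be replayed.
  if (executionResult.errors?.length) {
    return;
  }
  await cacheStorage.set(cacheKey, executionResult);
};
```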
## Test
Tested locally by manually throwing an error in the JWT auth guard.
Service exceptions are not caught when the endpoint comes from an
auto-resolver.
We want to remove auto-resolvers, but that requires implementing
pagination ourselves.
As a quick fix, here are interceptors that will trigger the exception
handler.
I had a hard time making it generic, so I ended up adding one interceptor
for each since this is not supposed to stay.
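For illustration, one of these interceptors can look roughly like this; it is a sketch assuming a `convertExceptionToGraphQLError`-style helper, and the class name is hypothetical.
```typescript
import { CallHandler, ExecutionContext, Injectable, NestInterceptor } from '@nestjs/common';
import { Observable } from 'rxjs';
import { catchError } from 'rxjs/operators';

// Hypothetical helper that maps service exceptions to GraphQL errors.
declare function convertExceptionToGraphQLError(exception: unknown): Error;

@Injectable()
export class MetadataApiExceptionInterceptor implements NestInterceptor {
  intercept(_context: ExecutionContext, next: CallHandler): Observable<unknown> {
    return next.handle().pipe(
      // Route exceptions thrown by services into the exception handler instead
      // of letting them escape the auto-generated resolver as generic 500s.
      catchError((error) => {
        throw convertExceptionToGraphQLError(error);
      }),
    );
  }
}
```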
- Refactor connected account module
- Move blocklist into its own module
- Move contact-creation-manager into its own module
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
We call convertExceptionToGraphQLError in the exception handler for HTTP
exceptions, but we don't take into account those that already are GraphQL
errors, so the logic of convertExceptionToGraphQLError falls back to a 500.
Now, if the exception is a BaseGraphqlError (a custom GraphQL error we
throw in the code), we rethrow it directly.
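In pseudo-form, the fix amounts to something like this (a sketch reusing the names mentioned above; the declarations exist only to keep the snippet self-contained):
```typescript
// Names taken from the description above; ambient declarations for the sketch only.
declare class BaseGraphqlError extends Error {}
declare function convertExceptionToGraphQLError(exception: Error): Error;

const handleHttpException = (exception: Error): never => {
  // Custom GraphQL errors thrown in the code are already well-formed: rethrow as-is.
  if (exception instanceof BaseGraphqlError) {
    throw exception;
  }
  // Everything else goes through the usual conversion.
  throw convertExceptionToGraphQLError(exception);
};
```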
BEFORE
<img width="957" alt="Screenshot 2024-07-12 at 15 33 03"
src="https://github.com/user-attachments/assets/22ddae13-4996-4ad3-8f86-dd17c2922ca8">
AFTER
<img width="923" alt="Screenshot 2024-07-12 at 15 32 01"
src="https://github.com/user-attachments/assets/d3d6db93-6d28-495c-a4b4-ba4e47d45abd">
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
This is the first step of Link field type deprecation
(https://github.com/twentyhq/twenty/issues/5909).
Forbid creation of the Link field type in the product and the API.
Updating to this type is not a concern, as we do not allow updating a
field's type.
In the longer term, we want to improve the efficiency and reliability of
the sync-metadata command, by choosing an error handling strategy and
paying greater attention to health checks.
In the meantime, this PR adds an option to run the sync-metadata command
on all active workspaces at once.
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
Closes #5735.
The field probability on opportunity will:
- stop being created for new workspaces (after this PR is merged)
- have "isCustom" value set to true and be displayed as such in the
settings (after this PR is merged + sync-metadata is run on workspace)
- still show in the views (all the time)
This field is deprecated as a standard field but not replaced by another
one, so we are not adding the `(deprecated)` suffix in the label.
Add a new command to delete the objectMetadataId fieldMetadata that has a
wrong standard-id. This is because we have fixed the missing
objectMetadataId column, but one already exists in fieldMetadata with the
wrong table. We should run this command before running sync-metadata.
Introduced a new module and command to run all the commands associated
with the upgrade to 0.22. Not entirely sure about this structure, but
ideally we would like to have only one command for version upgrades, so
this is a first step.
## Context
We want to add an index on our foreign keys since PG does not do it for
us. An index can sometimes be expensive and not always meaningful
depending on usage, but in our case we decided to apply an index to every
foreign key.
```typescript
@WorkspaceIndex()
@WorkspaceJoinColumn('author')
authorId: string;
```
This syntax is valid, but since we want to apply it to every join column,
I've decided to update the code of WorkspaceJoinColumn so that it
registers a new index at the same time, which is less error-prone.
Note: we had a bug in index name generation since Postgres index names
are unique per schema, not per table; the object metadata id (hashed) has
been added to the formula that generates the index name.
## Test
Sync metadata. We have 45 join columns per workspace as of today, so we
should see 45 rows inside the IndexMetadata table.
Inserts into the AuditLog table are all failing due to the
objectMetadataId column missing.
The FieldMetadata was sharing the same standard-id with another one
(objectName), so it was skipped during the comparison step of
sync-metadata.
Running sync-metadata again should fix this issue. Note that this column
is non-nullable, so if the table contains existing records, it will fail.
However, since the inserts were failing, I'm assuming the table is empty
anyway.
- Refactor calendar modules and some messaging modules to better
organize them by business rules and decouple them
- Work toward a common architecture for the different calendar providers
by introducing interfaces for the drivers
- Modify cron job to use the new sync statuses and stages
We have recently decided that boolean fields should only accept a truthy
or falsy value, with users deciding on a default value at creation.
This command helps clean the existing data (a rough sketch follows the
list) by:
1. updating all boolean field default values from null to false
2. updating all boolean field values on records from null to false
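A rough sketch of what the two steps amount to (repository, datasource, and table/column names are hypothetical; the actual command iterates over workspaces and boolean fields):
```typescript
import { DataSource, IsNull, Repository } from 'typeorm';

// Assumed minimal shape of the field metadata used in this sketch.
type FieldMetadata = { type: string; defaultValue: boolean | null };

const cleanBooleanField = async (
  fieldMetadataRepository: Repository<FieldMetadata>,
  workspaceDataSource: DataSource,
  target: { schemaName: string; tableName: string; columnName: string },
) => {
  // 1. Default values: null -> false on boolean field metadata.
  await fieldMetadataRepository.update(
    { type: 'BOOLEAN', defaultValue: IsNull() },
    { defaultValue: false },
  );

  // 2. Record values: null -> false for the boolean column of a workspace table.
  await workspaceDataSource.query(
    `UPDATE "${target.schemaName}"."${target.tableName}" SET "${target.columnName}" = false WHERE "${target.columnName}" IS NULL`,
  );
};
```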
---------
Co-authored-by: Weiko <corentin@twenty.com>
Fixes #6032.
Postgres has a limit on identifier length (table, column, and enum names)
of 63 bytes.
Let's limit the metadata names that will be converted to identifiers
(object names, field names, relation names, enum values) to 63 chars.
For the sake of simplicity, on the FE we will limit the input length of
labels.
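A minimal sketch of the kind of check this implies (constant and function names are illustrative, not the actual validator):
```typescript
// Postgres truncates identifiers longer than 63 bytes, so reject metadata names
// that would exceed that limit once converted to table/column/enum identifiers.
const IDENTIFIER_MAX_LENGTH = 63;

export const validateMetadataNameLengthOrThrow = (name: string): void => {
  if (name.length > IDENTIFIER_MAX_LENGTH) {
    throw new Error(
      `Name "${name}" exceeds the ${IDENTIFIER_MAX_LENGTH}-character limit imposed by Postgres identifiers`,
    );
  }
};
```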
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
Added:
- An "Ask AI" command to the command menu.
- A simple GraphQL resolver that converts the user's question into a
relevant SQL query using an LLM, runs the query, and returns the result.
<img width="428" alt="Screenshot 2024-06-09 at 20 53 09"
src="https://github.com/twentyhq/twenty/assets/171685816/57127f37-d4a6-498d-b253-733ffa0d209f">
No security concerns have been addressed; this is only a proof of concept
and not intended to be enabled in production.
All changes are behind a feature flag called `IS_ASK_AI_ENABLED`.
---------
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
This PR was initially meant to fix the issue related to ticket #5004;
after some testing, it seems that changing the name of a relation
actually works properly. If we rename the `ONE-TO-MANY` side, the only
thing that gets updated is the FieldMetadata, as the `joinColumn` is
stored on the opposite object.
For `MANY-TO-ONE` relations, the `joinColumn` migration is properly
generated. We need to take care that if we rename one side of a relation,
sometimes the opposite side doesn't have `inverseSideFieldKey`
implemented and uses the name of the opposite object by default, so this
will throw an error as the field can't be found in the object.
---------
Co-authored-by: Marie <51697796+ijreilly@users.noreply.github.com>
An error was introduced in the calendar cron job because we tried to
inject the workspace context inside the calendarChannelRepository where
we didn't have access to that context.
We have recently deprecated subscriptionStatus on workspace to replace it
with a check on an existing subscription (+ the freeAccess featureFlag),
but the logic was not properly implemented.
Closes #5748
- Create feature flag
- Add scope `https://www.googleapis.com/auth/profile.emails.read` when
connecting an account
- Get email aliases with the Google People API, store them in
connectedAccount, and refresh them before each message-import
- Update the contact creation logic accordingly
- Refactor
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
Exception classes for each metadata module + a handler to map them to
GraphQL errors.
TODO left:
- find a way to call the handler on auto-resolver NestJS queries
(probably interceptors)
- discuss what should be done for pre-hook errors
- discuss what should be done for Unauthorized exceptions
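Roughly, the pattern looks like this (a sketch: the module, exception codes, and GraphQL error classes are illustrative, not the actual ones):
```typescript
// Per-module exception with a code, plus a handler mapping it to GraphQL errors.
export enum ObjectMetadataExceptionCode {
  OBJECT_METADATA_NOT_FOUND = 'OBJECT_METADATA_NOT_FOUND',
  INVALID_OBJECT_INPUT = 'INVALID_OBJECT_INPUT',
}

export class ObjectMetadataException extends Error {
  constructor(
    message: string,
    public readonly code: ObjectMetadataExceptionCode,
  ) {
    super(message);
  }
}

// Hypothetical GraphQL error classes used by the handler.
declare class NotFoundError extends Error {}
declare class UserInputError extends Error {}

export const objectMetadataGraphqlApiExceptionHandler = (error: Error): never => {
  if (error instanceof ObjectMetadataException) {
    switch (error.code) {
      case ObjectMetadataExceptionCode.OBJECT_METADATA_NOT_FOUND:
        throw new NotFoundError(error.message);
      case ObjectMetadataExceptionCode.INVALID_OBJECT_INPUT:
        throw new UserInputError(error.message);
    }
  }
  throw error;
};
```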
This PR fixes an issue with the `IsNull()` find operator applied on a
one-to-many relation, which is not supported by TypeORM.
We can instead filter by an empty array to retrieve objects with empty
relations.
Querying workspaceMembers may be slow, which leads to a wrong
setNextOnboardingStatus value, so we added a resolved field on workspace
to get workspaceMemberCount directly.
- move the front-end `onboardingStatus` computation to the server side
- add logic to `useSetNextOnboardingStatus`
- update some missing redirections in
`usePageChangeEffectNavigateLocation`
- separate subscriptionStatus from onboardingStatus
- Put error handling outside of `refreshAndSaveAccessToken`
- return after failing to refresh access token in
`processMessageBatchImport`
- remove unnecessary token refresh in `processMessageListFetch`
This PR introduces a new decorator named `@WorkspaceJoinColumn`. Its goal
is to manually declare the join columns inside the workspace entities, so
we don't have to rely on the `ObjectRecord` type.
This decorator can be used that way:
```typescript
@WorkspaceRelation({
  standardId: ACTIVITY_TARGET_STANDARD_FIELD_IDS.company,
  type: RelationMetadataType.MANY_TO_ONE,
  label: 'Company',
  description: 'ActivityTarget company',
  icon: 'IconBuildingSkyscraper',
  inverseSideTarget: () => CompanyWorkspaceEntity,
  inverseSideFieldKey: 'activityTargets',
})
@WorkspaceIsNullable()
company: Relation<CompanyWorkspaceEntity> | null;

// The argument is the name of the relation above
@WorkspaceJoinColumn('company')
companyId: string | null;
```
This PR introduces an `upsert` parameter (alongside the existing `data`
param) for the `createOne` and `createMany` mutations.
When upsert is set to `true`, the function will look for records with the
same id if an id was passed. If no id was passed, it will leverage the
existing duplicate-check mechanism to find a duplicate. If a record is
found, the function will perform an update instead of a create.
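Conceptually, the decision flow is the following; this is a sketch with hypothetical repository/service shapes, not the actual resolver code.
```typescript
type RecordData = { id?: string } & Record<string, unknown>;

// Sketch of the upsert decision flow for createOne.
const createOneWithUpsert = async (
  data: RecordData,
  upsert: boolean,
  repository: {
    findOneById: (id: string) => Promise<RecordData | null>;
    create: (data: RecordData) => Promise<RecordData>;
    update: (id: string, data: RecordData) => Promise<RecordData>;
  },
  duplicateService: { findDuplicate: (data: RecordData) => Promise<RecordData | null> },
): Promise<RecordData> => {
  if (!upsert) {
    return repository.create(data);
  }

  // Prefer an explicit id match; otherwise fall back to the duplicate-check mechanism.
  const existingRecord = data.id
    ? await repository.findOneById(data.id)
    : await duplicateService.findDuplicate(data);

  if (existingRecord?.id) {
    return repository.update(existingRecord.id, data);
  }

  return repository.create(data);
};
```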
Unfortunately, I had to remove some nice tests that existed on the args
factory. Those tests were mostly testing the duplication-rule generation
logic, but through a GraphQL angle. Since I moved the duplication-rule
logic to a dedicated service, keeping the tests while mocking the service
wouldn't really test anything useful. The right path would be to create
new tests for this service that compare the JSON output rather than the
GraphQL output, but I chose not to work on this as it's equivalent to
rewriting the tests from scratch and I have other competing priorities.
#### Overview
This PR introduces a new API for dynamically registering and executing
pre and post query hooks in the Workspace Query Hook system using the
`@WorkspaceQueryHook` decorator. This approach eliminates the need for
manual provider registration and fixes the issue of an `undefined` or
`null` repository when using `@InjectWorkspaceRepository`.
#### New API
**Define a Hook**
Use the `@WorkspaceQueryHook` decorator to define pre or post hooks:
```typescript
@WorkspaceQueryHook({
  key: `calendarEvent.findMany`,
  scope: Scope.REQUEST,
})
export class CalendarEventFindManyPreQueryHook implements WorkspaceQueryHookInstance {
  async execute(
    userId: string,
    workspaceId: string,
    payload: FindManyResolverArgs,
  ): Promise<void> {
    if (!payload?.filter?.id?.eq) {
      throw new BadRequestException('id filter is required');
    }

    // Implement hook logic here
  }
}
```
This API simplifies the registration and execution of query hooks,
providing a more flexible and maintainable approach.
---------
Co-authored-by: Weiko <corentin@twenty.com>
As per title!
Also, I'm removing an incorrect piece of logic in the enum migration
runner that handled the case where we have no defaultValue but a
non-nullable column, which is not a valid business case.
Our FE tests are red, which is a threat to code quality. I'm adding a few
unit tests to improve the coverage and lowering the lines coverage
threshold a bit.
## Context
Our Flexible Schema engine dynamically generates entities/tables/APIs for
us but was not flexible enough to build indexes in the DB. With more and
more features involving heavy queries, such as Messaging, we are now
adding a new WorkspaceIndex() decorator for our standard objects (custom
objects will come later). This decorator gives the workspace sync-metadata
manager enough information to generate the proper migrations that will
create or drop indexes on demand.
To be aligned with the rest of the engine, we are adding two new tables,
IndexMetadata and IndexFieldMetadata, that will store the info about our
indexes.
## Implementation
```typescript
@WorkspaceEntity({
  standardId: STANDARD_OBJECT_IDS.person,
  namePlural: 'people',
  labelSingular: 'Person',
  labelPlural: 'People',
  description: 'A person',
  icon: 'IconUser',
})
export class PersonWorkspaceEntity extends BaseWorkspaceEntity {
  @WorkspaceField({
    standardId: PERSON_STANDARD_FIELD_IDS.email,
    type: FieldMetadataType.EMAIL,
    label: 'Email',
    description: 'Contact’s Email',
    icon: 'IconMail',
  })
  @WorkspaceIndex()
  email: string;
```
By simply adding the WorkspaceIndex decorator, the sync-metadata command
will create a new index for that column.
We can also add composite indexes; note that the order is important for
PSQL.
```typescript
@WorkspaceEntity({
  standardId: STANDARD_OBJECT_IDS.person,
  namePlural: 'people',
  labelSingular: 'Person',
  labelPlural: 'People',
  description: 'A person',
  icon: 'IconUser',
})
@WorkspaceIndex(['phone', 'email'])
export class PersonWorkspaceEntity extends BaseWorkspaceEntity {
```
Currently, composite fields and relation fields are not handled by
@WorkspaceIndex(), so you will need to use this notation instead:
```typescript
@WorkspaceIndex(['companyId', 'nameFirstName'])
export class PersonWorkspaceEntity extends BaseWorkspaceEntity {
```
<img width="700" alt="Screenshot 2024-06-21 at 15 15 45"
src="https://github.com/twentyhq/twenty/assets/1834158/ac6da1d9-d315-40a4-9ba6-6ab9ae4709d4">
Next step: we might need to implement more complex index expressions,
which is why we have an expression column in IndexMetadata.
Here is what I had in mind for the decorator, still open to discussion:
```typescript
@WorkspaceIndex(['nameFirstName', 'nameLastName'], { expression: "$1 || ' ' || $2"})
export class PersonWorkspaceEntity extends BaseWorkspaceEntity {
```
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
This PR is replacing and removing all the raw queries and repositories
with the new `TwentyORM` and injection system using
`@InjectWorkspaceRepository`.
Some logic that was contained inside repositories has been moved to the
services.
In this PR, we're only replacing repositories for the calendar feature.
---------
Co-authored-by: Weiko <corentin@twenty.com>
Co-authored-by: bosiraphael <raphael.bosi@gmail.com>
Co-authored-by: Charles Bochet <charles@twenty.com>
- Remove filters from the metadata REST API
- Add limit, before and after parameters for metadata
- Remove update from metadata relations
- Fix typing issue
- Fix naming
- Fix before parameter
---------
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
Filtering relations is not allowed
(see `packages/twenty-server/src/engine/metadata-modules/relation-metadata/dtos/relation-metadata.dto.ts`),
so we remove filtering for findMany on relations.
We also fixed some bugs in the result structure and the metadata OpenAPI
schema.
### Overview
This PR introduces significant enhancements to the MessageQueue module
by integrating `@Processor`, `@Process`, and `@InjectMessageQueue`
decorators. These changes streamline the process of defining and
managing queue processors and job handlers, and also allow for
request-scoped handlers, improving compatibility with services that rely
on scoped providers like TwentyORM repositories.
### Key Features
1. **Decorator-based Job Handling**: Use `@Processor` and `@Process`
decorators to define job handlers declaratively.
2. **Request Scope Support**: Job handlers can be scoped per request,
enhancing integration with request-scoped services.
### Usage
#### Defining Processors and Job Handlers
The `@Processor` decorator is used to define a class that processes jobs
for a specific queue. The `@Process` decorator is applied to methods
within this class to define specific job handlers.
##### Example 1: Specific Job Handlers
```typescript
import { Processor, Process, InjectMessageQueue } from 'src/engine/integrations/message-queue';

@Processor('taskQueue')
export class TaskProcessor {
  @Process('taskA')
  async handleTaskA(job: { id: string, data: any }) {
    console.log(`Handling task A with data:`, job.data);
    // Logic for task A
  }

  @Process('taskB')
  async handleTaskB(job: { id: string, data: any }) {
    console.log(`Handling task B with data:`, job.data);
    // Logic for task B
  }
}
```
In the example above, `TaskProcessor` is responsible for processing jobs
in the `taskQueue`. The `handleTaskA` method will only be called for
jobs with the name `taskA`, while `handleTaskB` will be called for
`taskB` jobs.
##### Example 2: General Job Handler
```typescript
import { Processor, Process, InjectMessageQueue } from 'src/engine/integrations/message-queue';

@Processor('generalQueue')
export class GeneralProcessor {
  @Process()
  async handleAnyJob(job: { id: string, name: string, data: any }) {
    console.log(`Handling job ${job.name} with data:`, job.data);
    // Logic for any job
  }
}
```
In this example, `GeneralProcessor` handles all jobs in the
`generalQueue`, regardless of the job name. The `handleAnyJob` method
will be invoked for every job added to the `generalQueue`.
#### Adding Jobs to a Queue
You can use the `@InjectMessageQueue` decorator to inject a queue into a
service and add jobs to it.
##### Example:
```typescript
import { Injectable } from '@nestjs/common';
import { InjectMessageQueue, MessageQueue } from 'src/engine/integrations/message-queue';

@Injectable()
export class TaskService {
  constructor(
    @InjectMessageQueue('taskQueue') private readonly taskQueue: MessageQueue,
  ) {}

  async addTaskA(data: any) {
    await this.taskQueue.add('taskA', data);
  }

  async addTaskB(data: any) {
    await this.taskQueue.add('taskB', data);
  }
}
```
In this example, `TaskService` adds jobs to the `taskQueue`. The
`addTaskA` and `addTaskB` methods add jobs named `taskA` and `taskB`,
respectively, to the queue.
#### Using Scoped Job Handlers
To utilize request-scoped job handlers, specify the scope in the
`@Processor` decorator. This is particularly useful for services that
use scoped repositories like those in TwentyORM.
##### Example:
```typescript
import { Processor, Process, InjectMessageQueue, Scope } from 'src/engine/integrations/message-queue';

@Processor({ name: 'scopedQueue', scope: Scope.REQUEST })
export class ScopedTaskProcessor {
  @Process('scopedTask')
  async handleScopedTask(job: { id: string, data: any }) {
    console.log(`Handling scoped task with data:`, job.data);
    // Logic for scoped task, which might use request-scoped services
  }
}
```
Here, the `ScopedTaskProcessor` is associated with `scopedQueue` and
operates with request scope. This setup is essential when the job
handler relies on services that need to be instantiated per request,
such as scoped repositories.
### Migration Notes
- **Decorators**: Refactor job handlers to use `@Processor` and
`@Process` decorators.
- **Request Scope**: Utilize the scope option in `@Processor` if your
job handlers depend on request-scoped services.
Fix #5628
---------
Co-authored-by: Weiko <corentin@twenty.com>
- Improve the REST API by introducing startingAfter/endingBefore (we
previously had lastCursor) and moving pageInfo/totalCount outside of the
data object.
- Fix the broken GraphQL playground on the website
- Improve analytics by sending the server URL
Add a new util called `resolveAbsolutePath` to allow providing an
absolute path for environment variables like `STORAGE_LOCAL_PATH`.
If the path in the env starts with `/`, we won't prefix it with
`process.cwd()`.
We're also using a static path for the old `db_initialized` file, now
named `db_status`, and we stop using the env variable for this file, as
it shouldn't be stored in the `STORAGE_LOCAL_PATH`.
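A minimal sketch of what such a util can look like (an assumption based on the description above, not necessarily the actual implementation):
```typescript
import { join } from 'path';

// Paths starting with "/" are treated as absolute and returned as-is;
// anything else is resolved relative to the current working directory.
export const resolveAbsolutePath = (path: string): string => {
  if (path.startsWith('/')) {
    return path;
  }

  return join(process.cwd(), path);
};
```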
Fix #4794
---------
Co-authored-by: Quentin Galliano <qgalliano@gmail.com>
In `messaging-gmail-messages-import.service`, we were refreshing the
access token before each query but we were passing the old access token
to `fetchAllMessages`.
I modified the function to query the updated connectedAccount with the
new access token.
This will solve the 401 errors we were getting in production.
* Remove relations where they cannot be used
* Remove the duplicated schema for findMany
* Reuse the schema for the Relation variant to reduce the size of the
sent JSON object
Closes #5778
---------
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
Co-authored-by: martmull <martmull@hotmail.fr>
In this PR, I'm doing two things:
- refresh the connectedAccount token on message-list-fetch. It's
currently only refreshed during messages-import. However, the
messages-import stage is only triggered if new messages are detected
(which could take days or weeks depending on the messageChannel
activity). We should also refresh it while trying to fetch the list.
- handle the unhandled Gmail error code 500 with reason "backendError".
These can occur on Gmail's side; in this case, we just retry later.
In this PR, I'm mainly doing the following:
- uniformizing messaging-messages-import and
messaging-message-list-fetch behaviors (cron job and job)
- improving the performance of these cron jobs by not triggering the jobs
if the stage is not relevant
- making sure these jobs have the same signature (workspaceId +
messageChannelId)
First step for creating credentials for database proxy.
In the next PRs:
- When calling the endpoint, create a Postgres database on the proxy server
- Set up a user on the database using postgresCredentials
- Build a remote server on the DB to access workspace data
Updating select field options was failing if we deleted an option that
was used by at least one row: the former code would not update the value
to null but left it at the no-longer-allowed value.
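The intended clean-up is roughly the following (an illustrative sketch; schema/table/column names are placeholders and the real code runs inside the field-update flow):
```typescript
import { DataSource } from 'typeorm';

// Before the option is dropped from the enum, reset rows still using it to NULL.
const resetDeletedOptionValues = async (
  workspaceDataSource: DataSource,
  target: { schemaName: string; tableName: string; columnName: string },
  removedOptionValue: string,
) => {
  await workspaceDataSource.query(
    `UPDATE "${target.schemaName}"."${target.tableName}" SET "${target.columnName}" = NULL WHERE "${target.columnName}" = $1`,
    [removedOptionValue],
  );
};
```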
- Rename syncSubStatus to syncStage
- Rename ongoingSyncStartedAt to syncStageStartedAt
- Remove throttlePauseUntil from db and compute it with
syncStageStartedAt and throttleFailureCount
- Removing the existing listener that was backfilling created records
without a position
- Switching to a job that backfills all objects within the workspace
- Adapting `FIND_BY_POSITION` so it can fetch objects without a position.
Previously we needed to input a number
- Refactoring the record position factory and the record position query
factory
- Overriding the position if not present during createMany
To avoid assigning the same position to all records in createMany, the
logic is (sketched below):
- if inserted last, use last position + arg index + 1
- if inserted first, use first position - arg index - 1
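In code, the index-offset logic above can be sketched as follows (a hypothetical helper, not the actual factory):
```typescript
// Each record in the createMany payload gets a distinct position derived from
// the current first/last position in the table and its index in the payload.
const computeRecordPosition = (
  insertAt: 'first' | 'last',
  boundaryPosition: number, // current first or last position in the table
  argIndex: number, // index of the record within the createMany arguments
): number =>
  insertAt === 'last'
    ? boundaryPosition + argIndex + 1
    : boundaryPosition - argIndex - 1;
```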
Our exception handler has to filter out some errors/exceptions so they
are not captured by the ExceptionHandlerDriver (logged or sent to Sentry,
for example). This is done for HTTP errors in the 4xx range, and it also
makes sure they are converted back to GraphQL validation errors.
However, GraphQL validation errors that are already managed by Yoga (with
schema validation) should also be filtered out; this PR fixes that
behaviour.
In this PR, I'm refactoring the messaging module into smaller pieces that
each have **ONE** responsibility: import messages, clean messages, handle
message participant creation, instead of having ~30 modules (1 per
service, job, cron, ...). This is mandatory to start introducing drivers
(gmail, office365, ...) IMO. It is too difficult to enforce common
interfaces when we have too many of them (30 modules...). Not all modules
should be exposed.
Right now, we have services that are almost functions:
do-that-and-this.service.ts / do-that-and-this.module.ts.
I believe we should have something more organized at a high level, and it
does not matter that much if we have a bit of duplicated code.
Note that the proposal is not fully implemented in the current PR, which
has only focused on the messaging folder (the biggest part).
Here is the high level proposal:
- connected-account: token-refresher
- blocklist
- messaging: message-importer, message-cleaner, message-participants,
... (right now I'm keeping a big messaging-common but this will
disappear see below)
- calendar: calendar-importer, calendar-cleaner, ...
Consequences:
1) It's OK to re-implement some things several times. Example:
- error handling in connected-account, messaging, and calendar instead of
trying to unify them. They are actually different kinds of error
handling. The only thing that might be in common is the GmailError =>
CommonError parsing, and I'm not even sure it makes a lot of sense, as
these 3 APIs might actually have different formats
- auto-creation. Calendar and Messaging could actually have different
rules
2) **We should not have circular dependencies:**
- I believe this was the reason why we had so many modules: to be able to
cherry-pick the ones we wanted and avoid circular deps. This is not the
right approach IMO; we need to architect the whole messaging domain by
defining high-level blocks that won't have circular dependencies by
design. If we encounter one, we should rethink and break the block in a
way that makes sense.
- ex: connected-account.resolver is not in the same module as
token-refresher. ==> connected-account.resolver => message-importer (as
we trigger a full sync job when we connect an account) => token-refresher
(as we refresh the token on message import). connected-account.resolver
and token-refresher are both in the connected-account folder but should
be in different modules; otherwise it's a circular dependency. It does
not mean that we should create 1 module per service as was done before.
In a nutshell: the code needs to be thought of in terms of
responsibilities and in a way that enforces high-level interfaces (and
avoids circular dependencies).
Bonus: as you can see, this change also removes a lot of code because of
the removal of many .module.ts files (and also because I'm removing the
sync scripts v2 feature flag and deleting old code).
Bonus: I have prefixed service names with Messaging to improve the dev
experience. GmailErrorHandler could be different between
MessagingGmailErrorHandler and CalendarGmailErrorHandler, for instance.
Query read timeouts happen when a remote server is not available. They
break:
- the remote server show page
- the record table page of imported remote tables
This PR catches the exception so it does not go to Sentry in either case.
Also did two renamings.
- Add missing `excludedOperations` in
`packages/twenty-server/src/engine/middlewares/graphql-hydrate-request-from-token.middleware.ts`
- Update the generated GraphQL file
- Add missing redirection to index after password update
Closes #5062.
Refactoring tables list to avoid rendering all toggles on each sync or
schema update while using fresh data:
- introducing id for RemoteTables in apollo cache
- manually updating the cache for the record that was updated after a
sync or schema update instead of fetching all tables again
Remote object id columns are not removed anymore when a remote object is
unsynced.
This is because we do not use relations anymore; we only created the id
field. So the current behavior, which was implemented for custom objects
to retrieve the fields to delete, does not work.
Since remote object relations are really different, I extracted the logic
from `objectMetadataService`. It now handles only the relations for
custom object creation and deletion (this part should be extracted as
well).
I created a new remote table relation service that will:
- fetch object metadata linked to remotes (favorites, activityTargets...)
- look for columns based on the remote object name
- delete the fields and columns
Closes #5069, back-end part.
And:
- do not display the schemaPendingUpdates status on remote server lists,
as this call would become too costly if there are dozens of servers
- (refacto) create foreignTableService
After this is merged, we will be able to delete remoteTable's
availableTables column.
Fix a bug introduced in [this
PR](https://github.com/twentyhq/twenty/pull/5254/files).
When a subscription is created, we need to create the subscription
record, but #5254 returns early if no subscription exists, so the
subscription can never be created at all.
This PR fixes that.
For remotes, we will only create the foreign key, without the relation
metadata. The expected behavior will be:
- it is possible to create an activity, but the remote object will not be
displayed in the relations of the activity
- the remote objects should not be available in the search for relations
Also switched the number settings to an enum, since we now have to handle
the `BigInt` case.
---------
Co-authored-by: Thomas Trompette <thomast@twenty.com>