- Remove filters from metadata REST API
- Add limit, before and after parameters for metadata
- Remove update from metadata relations
- Fix typing issue
- Fix naming
- Fix before parameter
---------
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
Filtering relations is not allowed
(see `packages/twenty-server/src/engine/metadata-modules/relation-metadata/dtos/relation-metadata.dto.ts`),
so we removed filtering from the findMany relation endpoint.
We also fixed some bugs in the result structure and in the metadata OpenAPI schema.
### Overview
This PR introduces significant enhancements to the MessageQueue module
by integrating `@Processor`, `@Process`, and `@InjectMessageQueue`
decorators. These changes streamline the process of defining and
managing queue processors and job handlers, and also allow for
request-scoped handlers, improving compatibility with services that rely
on scoped providers like TwentyORM repositories.
### Key Features
1. **Decorator-based Job Handling**: Use `@Processor` and `@Process`
decorators to define job handlers declaratively.
2. **Request Scope Support**: Job handlers can be scoped per request,
enhancing integration with request-scoped services.
### Usage
#### Defining Processors and Job Handlers
The `@Processor` decorator is used to define a class that processes jobs
for a specific queue. The `@Process` decorator is applied to methods
within this class to define specific job handlers.
##### Example 1: Specific Job Handlers
```typescript
import { Processor, Process, InjectMessageQueue } from 'src/engine/integrations/message-queue';

@Processor('taskQueue')
export class TaskProcessor {
  @Process('taskA')
  async handleTaskA(job: { id: string, data: any }) {
    console.log(`Handling task A with data:`, job.data);
    // Logic for task A
  }

  @Process('taskB')
  async handleTaskB(job: { id: string, data: any }) {
    console.log(`Handling task B with data:`, job.data);
    // Logic for task B
  }
}
```
In the example above, `TaskProcessor` is responsible for processing jobs
in the `taskQueue`. The `handleTaskA` method will only be called for
jobs with the name `taskA`, while `handleTaskB` will be called for
`taskB` jobs.
##### Example 2: General Job Handler
```typescript
import { Processor, Process, InjectMessageQueue } from 'src/engine/integrations/message-queue';

@Processor('generalQueue')
export class GeneralProcessor {
  @Process()
  async handleAnyJob(job: { id: string, name: string, data: any }) {
    console.log(`Handling job ${job.name} with data:`, job.data);
    // Logic for any job
  }
}
```
In this example, `GeneralProcessor` handles all jobs in the
`generalQueue`, regardless of the job name. The `handleAnyJob` method
will be invoked for every job added to the `generalQueue`.
#### Adding Jobs to a Queue
You can use the `@InjectMessageQueue` decorator to inject a queue into a
service and add jobs to it.
##### Example:
```typescript
import { Injectable } from '@nestjs/common';

import { InjectMessageQueue, MessageQueue } from 'src/engine/integrations/message-queue';

@Injectable()
export class TaskService {
  constructor(
    @InjectMessageQueue('taskQueue') private readonly taskQueue: MessageQueue,
  ) {}

  async addTaskA(data: any) {
    await this.taskQueue.add('taskA', data);
  }

  async addTaskB(data: any) {
    await this.taskQueue.add('taskB', data);
  }
}
```
In this example, `TaskService` adds jobs to the `taskQueue`. The
`addTaskA` and `addTaskB` methods add jobs named `taskA` and `taskB`,
respectively, to the queue.
#### Using Scoped Job Handlers
To utilize request-scoped job handlers, specify the scope in the
`@Processor` decorator. This is particularly useful for services that
use scoped repositories like those in TwentyORM.
##### Example:
```typescript
import { Processor, Process, InjectMessageQueue, Scope } from 'src/engine/integrations/message-queue';

@Processor({ name: 'scopedQueue', scope: Scope.REQUEST })
export class ScopedTaskProcessor {
  @Process('scopedTask')
  async handleScopedTask(job: { id: string, data: any }) {
    console.log(`Handling scoped task with data:`, job.data);
    // Logic for scoped task, which might use request-scoped services
  }
}
```
Here, the `ScopedTaskProcessor` is associated with `scopedQueue` and
operates with request scope. This setup is essential when the job
handler relies on services that need to be instantiated per request,
such as scoped repositories.
### Migration Notes
- **Decorators**: Refactor job handlers to use `@Processor` and
`@Process` decorators.
- **Request Scope**: Utilize the scope option in `@Processor` if your
job handlers depend on request-scoped services.
Fix #5628
---------
Co-authored-by: Weiko <corentin@twenty.com>
- Improve the REST API by introducing startingAfter/endingBefore (we
previously had lastCursor), and moving pageInfo/totalCount outside of
the data object (see the sketch after this list).
- Fix broken GraphQL playground on the website
- Improve analytics by sending the server URL
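For illustration, here is a rough sketch of the updated response envelope and cursor parameters. Only `startingAfter`, `endingBefore`, `pageInfo`, and `totalCount` come from the change above; the `pageInfo` fields and the example endpoint path are assumptions.
```typescript
// Hypothetical illustration of the updated REST response shape.
type PageInfo = {
  startCursor?: string;
  endCursor?: string;
  hasNextPage?: boolean;
};

type RestFindManyResponse<T> = {
  data: T[];          // records only; pagination metadata is no longer nested here
  pageInfo: PageInfo; // moved outside of the data object
  totalCount: number; // moved outside of the data object
};

// Cursor pagination now uses startingAfter / endingBefore instead of lastCursor:
// GET /rest/companies?limit=20&startingAfter=<cursor>
// GET /rest/companies?limit=20&endingBefore=<cursor>
```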
Add a new util called `resolveAbsolutePath` to allow providing an absolute
path for environment variables like `STORAGE_LOCAL_PATH`.
If the path in the env starts with `/`, we do not prefix it with
`process.cwd()`.
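A minimal sketch of the described behavior; the actual util in the server codebase may differ in signature and location.
```typescript
import { join } from 'path';

export const resolveAbsolutePath = (path: string): string => {
  // Paths that already start with '/' are treated as absolute and returned as-is.
  if (path.startsWith('/')) {
    return path;
  }

  // Relative paths are resolved from the current working directory.
  return join(process.cwd(), path);
};
```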
We also use a static path for the old `db_initialized` file, now
named `db_status`, and stop using the env variable for this file as it
shouldn't be stored in the `STORAGE_LOCAL_PATH`.
Fix #4794
---------
Co-authored-by: Quentin Galliano <qgalliano@gmail.com>
In `messaging-gmail-messages-import.service`, we were refreshing the
access token before each query but we were passing the old access token
to `fetchAllMessages`.
I modified the function to query the updated connectedAccount with the
new access token.
This will solve the 401 errors we were getting in production.
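For illustration, a hedged sketch of the fix's shape; `refreshAccessToken`, `findConnectedAccountById`, and the standalone function below are hypothetical stand-ins, not the exact APIs of `messaging-gmail-messages-import.service`.
```typescript
// Illustrative sketch: refresh the token, then re-query the connected account
// so the *new* access token is what gets passed to fetchAllMessages.
type ConnectedAccount = { id: string; accessToken: string };

async function importMessagesForAccount(
  connectedAccountId: string,
  refreshAccessToken: (id: string) => Promise<void>,
  findConnectedAccountById: (id: string) => Promise<ConnectedAccount>,
  fetchAllMessages: (accessToken: string) => Promise<unknown[]>,
) {
  await refreshAccessToken(connectedAccountId);

  // Re-query the connected account instead of reusing the stale in-memory one
  // that was loaded before the refresh.
  const refreshedAccount = await findConnectedAccountById(connectedAccountId);

  return fetchAllMessages(refreshedAccount.accessToken);
}
```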
* Remove relations where they cannot be used
* Remove duplicated schema for findMany
* Reuse schema for the Relation variant to reduce the size of the sent JSON object
closes #5778
---------
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
Co-authored-by: martmull <martmull@hotmail.fr>
In this PR, I'm doing 2 things:
- Refresh the connectedAccount token on message-list-fetch. It is currently
only refreshed during the messages-import, but the messages-import
stage is only triggered if new messages are detected (which could take
days or weeks depending on the messageChannel activity). We should also
refresh it while trying to fetch the list.
- Handle the previously unhandled Gmail error code 500 with reason "backendError".
These can occur on Gmail's side; in this case, we just retry later (see the sketch below).
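For illustration, a sketch of how such a transient Gmail error could be detected and surfaced as retryable; the error shape and class names are assumptions, not the actual service code.
```typescript
// Illustrative only: treat Gmail 500 "backendError" as a transient failure.
class TemporaryGmailError extends Error {}

type GmailApiError = {
  code?: number;
  errors?: { reason?: string }[];
};

function handleGmailError(error: GmailApiError): never {
  const reason = error.errors?.[0]?.reason;

  if (error.code === 500 && reason === 'backendError') {
    // Transient failure on Gmail's side: surface it as retryable so the
    // message-list-fetch job is re-scheduled instead of failing the sync.
    throw new TemporaryGmailError('Gmail backendError, retrying later');
  }

  throw new Error(`Unhandled Gmail error: ${error.code} ${reason ?? ''}`);
}
```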
In this PR, I'm mainly doing the following:
- uniformizing the messaging-messages-import and
messaging-message-list-fetch behaviors (cron job and job)
- improving the performance of these cron jobs by not triggering the jobs
if the stage is not relevant
- making sure these jobs have the same signature (workspaceId +
messageChannelId), as sketched below
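As a sketch, the shared job payload would look roughly like this; the real job data types may carry more fields, and the queue/job names in the comment are illustrative.
```typescript
// Illustrative only: the common payload both the messages-import and
// message-list-fetch jobs receive after this change.
export type MessagingJobData = {
  workspaceId: string;
  messageChannelId: string;
};

// e.g. both cron jobs enqueue their job with the same shape:
// messageQueue.add('messaging-messages-import', { workspaceId, messageChannelId });
// messageQueue.add('messaging-message-list-fetch', { workspaceId, messageChannelId });
```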
First step for creating credentials for the database proxy.
In the next PRs:
- When calling the endpoint, create a Postgres database on the proxy server
- Set up a user on the database using postgresCredentials
- Build a remote server on the DB to access workspace data
Updating select field options was failing if we deleted an option that
was used in at least one row: the former code would not update the value to
null but left it at the no-longer-allowed value.
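A minimal sketch of the intended behavior, with illustrative names: compute which option values were removed, then reset the matching rows to null.
```typescript
// Illustrative only: determine which select option values no longer exist
// after the field update.
type SelectOption = { value: string };

function valuesToNullify(
  oldOptions: SelectOption[],
  newOptions: SelectOption[],
): string[] {
  const remaining = new Set(newOptions.map((option) => option.value));

  return oldOptions
    .map((option) => option.value)
    .filter((value) => !remaining.has(value));
}

// An update query then sets the field to NULL for every row whose current
// value is in valuesToNullify(oldOptions, newOptions).
```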
- Rename syncSubStatus to syncStage
- Rename ongoingSyncStartedAt to syncStageStartedAt
- Remove throttlePauseUntil from the DB and compute it from
syncStageStartedAt and throttleFailureCount (see the sketch after this list)
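One possible way to derive the pause window now that it is no longer stored, assuming an exponential backoff based on `throttleFailureCount`; the base delay and the formula are assumptions, not the actual implementation.
```typescript
// Assumption: 1 minute base pause, doubled per recorded throttle failure.
const BASE_PAUSE_MS = 60 * 1000;

function computeThrottlePauseUntil(
  syncStageStartedAt: Date,
  throttleFailureCount: number,
): Date {
  const pauseMs = BASE_PAUSE_MS * Math.pow(2, throttleFailureCount);

  return new Date(syncStageStartedAt.getTime() + pauseMs);
}
```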
- Remove the existing listener that was backfilling created records
without a position
- Switch to a job that backfills all objects within a workspace
- Adapt `FIND_BY_POSITION` so it can fetch objects without a position.
Previously we needed to input a number
- Refactor the record position factory and the record position query factory
- Override the position if not present during createMany
To avoid assigning the same position to all records in createMany, the
logic is (see the sketch below):
- if inserted last, use last position + arg index + 1
- if inserted first, use first position - arg index - 1
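A short sketch of that position logic; the function and parameter names are illustrative.
```typescript
// Compute a distinct position for the record at index `argIndex` of a
// createMany call, given the current first/last positions in the table.
function computePosition(
  insertFirst: boolean,
  argIndex: number,
  firstPosition: number,
  lastPosition: number,
): number {
  // Inserting at the end: walk forward from the current last position.
  if (!insertFirst) {
    return lastPosition + argIndex + 1;
  }

  // Inserting at the beginning: walk backward from the current first position.
  return firstPosition - argIndex - 1;
}

// With lastPosition = 10, three records inserted last get positions 11, 12, 13,
// so records created in the same call never share a position.
```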
Our exception handler has to filter out some errors/exceptions so they
are not caught by the ExceptionHandlerDriver (logged or sent to Sentry, for
example). This is done for HTTP errors in the 4xx range, and we also make
sure they are converted back to GraphQL validation errors.
However, GraphQL validation errors that are already managed by Yoga
(with schema validation) should also be filtered out; this PR fixes
that behaviour.
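A rough sketch of the filtering rule, assuming errors carry an HTTP status code and a GraphQL error code in their extensions; names and shapes here are illustrative, not the actual handler.
```typescript
import { GraphQLError } from 'graphql';

// Illustrative only: decide whether an error should reach the
// ExceptionHandlerDriver (logging, Sentry, ...).
function shouldReportToDriver(error: GraphQLError): boolean {
  const statusCode = (
    error.extensions?.response as { statusCode?: number } | undefined
  )?.statusCode;

  // 4xx HTTP errors are converted back to GraphQL validation errors and skipped.
  if (statusCode !== undefined && statusCode >= 400 && statusCode < 500) {
    return false;
  }

  // Schema-validation errors already managed by Yoga should also be skipped.
  if (error.extensions?.code === 'GRAPHQL_VALIDATION_FAILED') {
    return false;
  }

  return true;
}
```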
In this PR, I'm refactoring the messaging module into smaller pieces
that have **ONE** responsibility: import messages, clean messages,
handle message participant creation, instead of having ~30 modules (1
per service, job, cron, ...). This is mandatory to start introducing
drivers (Gmail, Office365, ...) IMO. It is too difficult to enforce
common interfaces as we have too many interfaces (30 modules...). Not
all modules should be exposed.
Right now, we have services that are almost functions:
do-that-and-this.service.ts / do-that-and-this.module.ts.
I believe we should have something more organized at a high level, and it
does not matter that much if we have a bit of code duplication.
Note that the proposal is not fully implemented in the current PR, which
has only focused on the messaging folder (the biggest part).
Here is the high level proposal:
- connected-account: token-refresher
- blocklist
- messaging: message-importer, message-cleaner, message-participants,
... (right now I'm keeping a big messaging-common but this will
disappear, see below)
- calendar: calendar-importer, calendar-cleaner, ...
Consequences:
1) It's OK to re-implement some things several times. Example:
- error handling in connected-account, messaging, and calendar instead
of trying to unify it. They actually require different error handling. The only
thing that might be shared is the GmailError => CommonError parsing,
and I'm not even sure it makes a lot of sense as these 3 APIs might
actually have different formats
- auto-creation. Calendar and Messaging could actually have different
rules
2) **We should not have circular dependencies:**
- I believe this was the reason why we had so many modules: to be able
to cherry-pick the ones we wanted and avoid circular deps. This is not the
right approach IMO; we need to architect the whole messaging by defining
high-level blocks that won't have circular dependencies by design. If we
encounter one, we should rethink and break the block in a way that makes
sense.
- ex: connected-account.resolver is not in the same module as
token-refresher. ==> connected-account.resolver => message-importer (as
we trigger a full sync job when we connect an account) => token-refresher
(as we refresh the token on message import).
connected-account.resolver and token-refresher are both in the connected-account
folder but should be in different modules. Otherwise it's a circular
dependency. It does not mean that we should create 1 module per service
as was done before.
In a nutshell: the code needs to be thought of in terms of responsibilities
and in a way that enforces high-level interfaces (and avoids circular
dependencies).
Bonus: as you can see, this PR is also removing a lot of code because of
the removal of many .module.ts files (and also because I'm removing the sync
scripts v2 feature flag and removing old code).
Bonus: I have prefixed service names with `Messaging` to improve dev XP.
A GmailErrorHandler could differ between MessagingGmailErrorHandler
and CalendarGmailErrorHandler, for instance.