Provides queries for fetching metadata from, and inserting metadata into, the metadata database that do not assume there is a `resource_version` column. This means they will work when migrating to/from older versions.
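As a rough illustration of the idea (a minimal sketch using `postgresql-simple`; the table and column names shown are assumptions of this sketch, not necessarily the exact catalog layout), a fetch query that never references `resource_version` keeps working against both old and new catalog layouts:
```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (Value)
import Database.PostgreSQL.Simple (Connection, Only (..), query_)

-- Select only the metadata column, so the query succeeds whether or not the
-- table has gained a resource_version column yet.
fetchMetadata :: Connection -> IO (Maybe Value)
fetchMetadata conn = do
  rows <- query_ conn "SELECT metadata FROM hdb_catalog.hdb_metadata"
  pure $ case rows of
    [Only md] -> Just md
    _         -> Nothing
```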
Co-authored-by: Lyndon Maydwell <92299+sordina@users.noreply.github.com>
GitOrigin-RevId: dac636d530524082c5a13ae0f016a2d4ced16f7f
Add optimistic concurrency control to the `replace_metadata` call.
Prevents users from submitting out-of-date metadata to metadata-mutating APIs.
See https://github.com/hasura/graphql-engine-mono/issues/472 for details.
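The gist of the check, as a minimal sketch (types and field names here are simplified stand-ins, not the engine's actual definitions): the caller submits the `resource_version` it last read, and the request is rejected if that version no longer matches the stored one.
```haskell
import Data.Aeson (Value)

type Metadata = Value  -- stand-in for the real Metadata type

-- Hypothetical argument shape: the client may include the resource_version
-- it read; omitting it skips the concurrency check.
data ReplaceMetadataArgs = ReplaceMetadataArgs
  { providedVersion :: Maybe Int
  , newMetadata     :: Metadata
  }

-- Compare the supplied version against the currently stored one and refuse
-- out-of-date writes, so the client can refetch and retry.
checkResourceVersion :: Int -> Maybe Int -> Either String ()
checkResourceVersion _       Nothing = Right ()
checkResourceVersion current (Just supplied)
  | supplied == current = Right ()
  | otherwise =
      Left $ "resource version mismatch: expected " ++ show current
          ++ ", but request was built against " ++ show supplied
```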
GitOrigin-RevId: 5f220f347a3eba288a9098b01e9913ffd7e38166
The metadata storage implementation for graphql-engine-multitenant.
- It uses a centralized PG database to store metadata of all tenants (instead of per tenant database)
- Similarly, it uses a single schema-sync listener thread per MT worker (instead of a listener thread per tenant), although the processor thread is still spawned per tenant
- Two new flags are introduced: `--metadataDatabaseUrl` and the optional `--metadataDatabaseRetries`
Internally, a "metadata mode" is introduced to indicate an external/managed store vs a store managed by each pro-server.
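Conceptually (constructor names below are assumptions of this sketch, not the real ones), the mode is just a flag the worker consults when deciding where metadata lives:
```haskell
-- Illustrative only: distinguishes a centrally managed metadata store from
-- the store each pro-server manages on its own.
data MetadataMode
  = MetadataModeManagedExternally  -- central metadata DB, schema managed outside the worker
  | MetadataModePerServer          -- each pro-server owns its metadata store
  deriving (Eq, Show)
```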
To run:
- obtain the schema file (located at `pro/server/res/cloud/metadata_db_schema.sql`)
- apply the schema on a PG database
- set the `--metadataDatabaseUrl` flag to point to the above database
- run the MT executable
The schema (and its migrations) for the metadata db is managed outside the MT worker.
### New metadata
The following is the new portion added to `Metadata`:
```yaml
version: 3
metrics_config:
  analyze_query_variables: true
  analyze_response_body: false
api_limits:
  disabled: false
  depth_limit:
    global: 5
    per_role:
      user: 7
      editor: 9
  rate_limit:
    per_role:
      user:
        unique_params:
          - x-hasura-user-id
          - x-hasura-team-id
        max_reqs_per_min: 20
    global:
      unique_params: IP
      max_reqs_per_min: 10
```
- In Pro, the code around fetching/updating/syncing pro-config is removed
- That also means `hdb_pro_catalog`, which kept the config cache, is no longer required; hence `hdb_pro_catalog` is also removed
- The required config comes from metadata / schema cache
### New Metadata APIs
- `set_api_limits`
- `remove_api_limits`
- `set_metrics_config`
- `remove_metrics_config`
#### `set_api_limits`
```yaml
type: set_api_limits
args:
  disabled: false
  depth_limit:
    global: 5
    per_role:
      user: 7
      editor: 9
  rate_limit:
    per_role:
      anonymous:
        max_reqs_per_min: 10
        unique_params: "ip"
      editor:
        max_reqs_per_min: 30
        unique_params:
          - x-hasura-user-id
      user:
        unique_params:
          - x-hasura-user-id
          - x-hasura-team-id
        max_reqs_per_min: 20
    global:
      unique_params: IP
      max_reqs_per_min: 10
```
#### `remove_api_limits`
```yaml
type: remove_api_limits
args: {}
```
#### `set_metrics_config`
```yaml
type: set_metrics_config
args:
  analyze_query_variables: true
  analyze_response_body: false
```
#### `remove_metrics_config`
```yaml
type: remove_metrics_config
args: {}
```
#### TODO
- [x] on-prem pro implementation for `MonadMetadataStorage`
- [x] move the project config from Lux to pro metadata (PR: #379)
- [ ] console changes for pro config/api limits, subscription workers (cc @soorajshankar @beerose)
- [x] address other minor TODOs
  - [x] TxIso for `MonadSourceResolver`
  - [x] enable EKG connection pool metrics
  - [x] add logging of connection info when sources are added?
  - [x] confirm if the `buildReason` for schema cache is correct
- [ ] testing
- [x] 1.3 -> 1.4 cloud migration script (#465; PR: #508)
  - [x] one-time migration of existing metadata from users' db to centralized PG
  - [x] one-time migration of pro project config + api limits + regression tests from metrics API to metadata
- [ ] integrate with infra team (WIP - cc @hgiasac)
- [x] benchmark with 1000+ tenants + each tenant making read/update metadata query every second (PR: https://github.com/hasura/graphql-engine-mono/pull/411)
- [ ] benchmark with a few tenants having large metadata (100+ tables etc.)
- [ ] when a user moves regions (https://github.com/hasura/lux/issues/1717)
  - [ ] metadata has to be migrated from one regional PG to another
  - [ ] migrate metrics data as well?
- [ ] operation logs
- [ ] regression test runs
- [ ] find a way to share the schema files with the infra team
Co-authored-by: Naveen Naidu <30195193+Naveenaidu@users.noreply.github.com>
GitOrigin-RevId: 39e8361f2c0e96e0f9e8f8fb45e6cc14857f31f1
This PR generalizes a bunch of metadata structures.
Most importantly, it changes `SourceCache` to hold existentially quantified values:
```haskell
data BackendSourceInfo =
  forall b. Backend b => BackendSourceInfo (SourceInfo b)

type SourceCache = HashMap SourceName BackendSourceInfo
```
This changes a *lot* of things throughout the code. For now, all code using the schema cache explicitly casts sources to Postgres, meaning that if any non-Postgres `SourceInfo` makes it to the cache, it'll be ignored.
That means that after this PR is submitted, we can split work between two different aspects:
- creating `SourceInfo` for other backends
- handling those other sources down the line
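For context, here is a self-contained sketch of the casting pattern described above (with greatly simplified stand-ins for `Backend` and `SourceInfo`; the real class carries many more constraints), showing how an existentially wrapped source can be narrowed back to a concrete backend such as Postgres:
```haskell
{-# LANGUAGE ExistentialQuantification #-}

import Data.Typeable (Typeable, cast)
import qualified Data.Map.Strict as Map

-- Simplified stand-ins for the real types.
class Typeable b => Backend b
data Postgres = Postgres
instance Backend Postgres

newtype SourceInfo b = SourceInfo { siName :: String }

data BackendSourceInfo = forall b. Backend b => BackendSourceInfo (SourceInfo b)

type SourceName  = String
type SourceCache = Map.Map SourceName BackendSourceInfo

-- Narrow an existentially wrapped source back to a concrete backend;
-- returns Nothing if the cached source belongs to a different backend.
unsafeSourceInfo :: Backend b => BackendSourceInfo -> Maybe (SourceInfo b)
unsafeSourceInfo (BackendSourceInfo si) = cast si

main :: IO ()
main = do
  let cache = Map.fromList
        [("default", BackendSourceInfo (SourceInfo "default" :: SourceInfo Postgres))]
      mPg = Map.lookup "default" cache >>= unsafeSourceInfo :: Maybe (SourceInfo Postgres)
  putStrLn $ case mPg of
    Just si -> "resolved Postgres source: " ++ siName si
    Nothing -> "source in cache is not a Postgres source"
```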
GitOrigin-RevId: fb9ea00f32e840fc33c5467896fb1dfa5283ab42
Fixes https://github.com/hasura/graphql-engine/issues/5704 by checking, for aggregate fields, whether we are handling a numeric aggregation.
This PR also adds type information to `ColFld` such that we know the type of the field.
This is the second attempt. See #319 for a less invasive approach. @nicuveo suggested type information might be useful, and since it wasn't hard to add, I think this version is better as well.
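As a loose illustration of why the type information helps (names and types below are simplified stand-ins, not the engine's actual definitions), the aggregate translation can now reject a numeric aggregate over a non-numeric column up front:
```haskell
-- Simplified stand-ins; the real ColFld carries backend column types.
data ColumnType = ColInteger | ColNumeric | ColText | ColBoolean
  deriving (Eq, Show)

data ColFld = ColFld
  { cfColumn :: String      -- column name
  , cfType   :: ColumnType  -- the newly carried type information
  } deriving (Show)

isNumericType :: ColumnType -> Bool
isNumericType t = t `elem` [ColInteger, ColNumeric]

-- Refuse e.g. sum/avg over a text column instead of emitting invalid SQL.
validateNumericAggregate :: String -> ColFld -> Either String ColFld
validateNumericAggregate op fld
  | isNumericType (cfType fld) = Right fld
  | otherwise =
      Left $ op ++ " cannot be applied to non-numeric column " ++ cfColumn fld
```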
GitOrigin-RevId: aa6a259fd5debe9466df6302839ddbbd0ea659b5
This PR is a combination of the following other PRs:
- #169: move HasHttpManager out of RQL.Types
- #170: move UserInfoM to Hasura.Session
- #179: delete dead code from RQL.Types
- #180: move event related code to EventTrigger
GitOrigin-RevId: d97608d7945f2c7a0a37e307369983653eb62eb1
Earlier, while creating the event trigger's internal postgres trigger, we used to get the name of the table from the `TG_TABLE_NAME` special trigger variable. Using this with normal tables works fine, but it breaks when the parent table is partitioned because we associate the ET configuration in the schema only with the original table (as it should be).
In this PR, we supply the table name and schema name through template variables instead of using `TG_TABLE_NAME` and `TG_TABLE_SCHEMA`, so that event triggers work with a partitioned table as well.
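A rough sketch of the difference (the SQL shape below is illustrative, not the engine's actual trigger template): the generated trigger body receives the parent table's schema and name as interpolated values rather than reading them from `TG_TABLE_SCHEMA` / `TG_TABLE_NAME`, which would report the partition's name when the trigger fires on a partition.
```haskell
-- Build the payload fragment of a trigger body with the known (parent)
-- schema/table names baked in, instead of referring to TG_TABLE_* variables.
mkPayloadSql :: String -> String -> String
mkPayloadSql schemaName tableName =
  "SELECT json_build_object("
    ++ "'schema', " ++ quoteLit schemaName ++ ", "
    ++ "'table', "  ++ quoteLit tableName  ++ ")"
  where
    -- naive single-quote escaping, for illustration only
    quoteLit s = "'" ++ concatMap escape s ++ "'"
    escape '\'' = "''"
    escape c    = [c]
```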
TODO:
- [x] Changelog
- [x] unit test (ET on a partitioned table)
GitOrigin-RevId: 556376881a85525300dcf64da0611ee9ad387eb0
This is an incremental PR towards https://github.com/hasura/graphql-engine/pull/5797
Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com>
GitOrigin-RevId: a6cb8c239b2ff840a0095e78845f682af0e588a9
* server: fix handling of empty array values in relationships of set_custom_types
* add a test for dropping the relationship
GitOrigin-RevId: abff138b0021647a1cb6c6c041c2ba53c1ff9b77
Generalize `TableCoreInfoRM`, `TableCoreCacheRT`, and some table metadata data types; generalize `fromPGCol` to `fromCol`; generalize some schema cache functions; and prepare some enum schema cache code for generalization.
GitOrigin-RevId: a65112bc1688e00fd707d27af087cb2585961da2
An incremental PR towards https://github.com/hasura/graphql-engine/pull/5797
- Expands `MonadMetadataStorage` with operations related to async actions and setting/updating metadata
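An illustrative (not verbatim) sketch of what such an expansion can look like; the method names and signatures below are assumptions of this sketch, not the class's actual interface:
```haskell
import Data.Aeson (Value)

type Metadata   = Value  -- stand-in
type ActionName = String
type ActionId   = Int

-- Hypothetical interface: metadata get/set plus async-action bookkeeping,
-- all routed through whatever storage backs the metadata.
class Monad m => MonadMetadataStorage m where
  fetchMetadata       :: m Metadata
  setMetadata         :: Metadata -> m ()
  insertAsyncActionOp :: ActionName -> Value -> m ActionId
  fetchActionResponse :: ActionId -> m (Maybe Value)
  setActionResponse   :: ActionId -> Value -> m ()
```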
GitOrigin-RevId: 53386b7b2d007e162050b826d0708897f0b4c8f6
This PR makes a bunch of schema generation code in `Hasura.GraphQL.Schema` backend-agnostic, by moving the backend-specific parts into a new `BackendSchema` type class. This way, the schema generation code can be reused for other backends, simply by implementing new instances of the `BackendSchema` type class.
This work is now in a state where the schema generators are sufficiently generic to accept the implementation of a new backend. That means that we can start exposing MS SQL schema. Execution is not implemented yet, of course.
The branch currently does not support computed fields or Relay. This is, in a sense, intentional: computed field support is normally baked into the schema generation (through the `fieldSelection` schema generator), and so this branch shows a programming technique that allows us to expose certain GraphQL schema depending on backend support. We can write support for computed fields and Relay at a later stage.
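As a schematic sketch of that technique (greatly simplified; the real `BackendSchema` class has different methods and signatures), backend-specific capabilities sit behind a class and the generic schema code consults them to decide which fields to expose:
```haskell
{-# LANGUAGE TypeFamilies #-}

-- Simplified stand-in for the real class: backend-specific pieces of schema
-- generation live behind a type class.
class BackendSchema b where
  type ScalarType b
  scalarTypeName         :: ScalarType b -> String  -- how the backend names its scalars
  supportsComputedFields :: proxy b -> Bool         -- capability flag consulted by generic code

-- Generic schema code, written once for all backends: include computed-field
-- related schema only when the backend claims support for it.
tableFieldNames :: BackendSchema b => proxy b -> [String] -> [String]
tableFieldNames p columnFields =
  columnFields ++ (if supportsComputedFields p then ["computed_fields"] else [])
```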
Co-authored-by: Antoine Leblanc <antoine@hasura.io>
GitOrigin-RevId: df369fc3d189cbda1b931d31678e9450a6601314