event catalog:
- `hdb_catalog` is no longer automatically created
- catalog initialisation is done during the schema cache build, using `ArrowCache` so it is only run in response to a change to the set of event triggers
event queue:
- `processEventQueue` thread is prevented from starting when `HASURA_GRAPHQL_EVENTS_FETCH_INTERVAL=0`
- `processEventQueue` thread only processes sources for which at least one event trigger exists in some table in the source
Co-authored-by: Anon Ray <616387+ecthiender@users.noreply.github.com>
GitOrigin-RevId: 73f256465d62490cd2b59dcd074718679993d4fe
This essentially restores the original code from c425b554b8
(https://github.com/hasura/graphql-engine/pull/4013). Prior to this
commit we would slurp messages as fast as possible from the database
(one thing c425b55 fixed).
Another thing broken as a consequence of the same logic was the
`removeEventFromLockedEvents` logic, which unlocks in-flight events
(breaking at-least-once delivery).
Some archeology, post-c425b55:
- cc8e2ccc erroneously attempted to refactor using `bracket`, resulting
in the same slurp-all-events behavior (since we don't ever wait for
processEvent to complete)
- at some point event processing within a batch was made serial; this was
reported as a bug. See: https://github.com/hasura/graphql-engine/issues/5189
- in 0ef52292b5 (which I approved...) an `async` is added, again
causing the same issue...
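For reference, a minimal sketch (hypothetical names, not the actual implementation) of the loop shape that avoids the slurping behavior: events within a batch are processed concurrently, but the loop waits for the whole batch before fetching again.
```
import Control.Concurrent (threadDelay)
import Control.Concurrent.Async (mapConcurrently_)
import Control.Monad (forever)

-- Sketch only: fetch a batch, process it concurrently, and crucially
-- wait for the entire batch to finish before sleeping and fetching
-- again. Spawning unawaited asyncs here is exactly what reintroduced
-- the slurp-all-events behavior.
eventLoop :: IO [e] -> (e -> IO ()) -> Int -> IO a
eventLoop fetchBatch processEvent fetchIntervalMicros = forever $ do
  events <- fetchBatch
  mapConcurrently_ processEvent events
  threadDelay fetchIntervalMicros
```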
GitOrigin-RevId: d8cbaab385267a4c3f1f173e268a385265980fb1
Aggregate fields - both the table and the relationship fields - haven't been enabled until now. This PR enables the aggregate fields and fixes an issue with root aggregate fields.
Co-authored-by: Chris Done <11019+chrisdone@users.noreply.github.com>
GitOrigin-RevId: a6d7ed9b45cda6af659a57576a8623c725a7372f
This claws back ~7min from integration tests (run serially, as with `dev.sh test --integration`).
Further improvements would do well to focus on optimizing metadata operations, as `setup` dominates.
GitOrigin-RevId: 76637d6fa953c2404627c4391447a05bf09355fa
Removing `schemaSyncDisable` flag and interpreting `schemaPollInterval` of `0` as disabling schema sync.
This change brings the convention in line with how action and other intervals are used to disable processes.
There is an opportunity to abstract the notion of an optional interval similar to how actions uses `AsyncActionsFetchInterval`.
This can be used for the following fields of ServeOptions, with RawServeOptions having a milliseconds value where `0` is interpreted as disable.
OptionalInterval:
```
-- | Sleep time interval for activities
data OptionalInterval
  = Skip                   -- ^ No polling
  | Interval !Milliseconds -- ^ Interval time
  deriving (Show, Eq)
```
ServeOptions:
```
data ServeOptions impl
  = ServeOptions
    { ...
    , soEventsFetchInterval       :: !OptionalInterval
    , soAsyncActionsFetchInterval :: !OptionalInterval
    , soSchemaPollInterval        :: !OptionalInterval
    , ...
    }
```
Rather than encoding a `Maybe OptionalInterval` in RawServeOptions, instead a `Maybe Milliseconds` can be used to more directly express the input format, with the ServeOptions constructor interpreting `0` as `Skip`.
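A minimal sketch of that interpretation (the helper name is hypothetical; `Milliseconds` is simplified to `Int` here):
```
{-# LANGUAGE LambdaCase #-}

type Milliseconds = Int

data OptionalInterval
  = Skip                   -- ^ No polling
  | Interval !Milliseconds -- ^ Interval time
  deriving (Show, Eq)

-- Hypothetical helper: an unset raw value falls back to the default
-- interval, and 0 disables the process entirely.
mkOptionalInterval :: Milliseconds -> Maybe Milliseconds -> OptionalInterval
mkOptionalInterval defaultMs = \case
  Nothing -> Interval defaultMs
  Just 0  -> Skip
  Just ms -> Interval ms
```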
Current inconsistencies:
* `soEventsFetchInterval` has no value interpreted as disabling the fetches
* `soAsyncActionsFetchInterval` uses an `OptionalInterval` analog in `RawServeOptions` instead of `Milliseconds`
* `soSchemaPollInterval` currently uses `Milliseconds` directly in `ServeOptions`
GitOrigin-RevId: 3cda1656ae39ae95ba142512ed4e123d6ffeb7fe
Modifying schema-sync implementation to use polling for OSS/Pro. Invalidations are now propagated via the `hdb_catalog.hdb_schema_notifications` table in OSS/Pro. Pattern followed is now a Listener/Processor split with Cloud listening for changes via a LISTEN/NOTIFY channel and OSS polling for resource version changes in the metadata table. See issue #460 for more details.
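A rough sketch of the polling half of that split (all names hypothetical):
```
import Control.Concurrent (threadDelay)
import Control.Monad (forever, when)
import Data.IORef

-- Sketch only: poll the metadata store's resource version and hand off
-- to the processor when it advances; Cloud replaces this loop with a
-- LISTEN/NOTIFY channel.
pollListener :: IO Integer -> (Integer -> IO ()) -> Int -> IO a
pollListener fetchResourceVersion notifyProcessor intervalMicros = do
  seen <- newIORef 0
  forever $ do
    rv   <- fetchResourceVersion
    prev <- readIORef seen
    when (rv > prev) $ writeIORef seen rv >> notifyProcessor rv
    threadDelay intervalMicros
```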
GitOrigin-RevId: 48434426df02e006f4ec328c0d5cd5b30183db25
Multi source support had limited the availability of async action queries in subscriptions. This PR
adds support for async action query subscriptions with a new implementation. Also addresses https://github.com/hasura/graphql-engine/issues/6460.
GitOrigin-RevId: 5ddc321073d224f287dc4b86ce2239ff55190b36
* add `use_prepared_statements` option to add_pg_source API
* update CHANGELOG.md
* change default value for 'use_prepared_statements' to False
* update CHANGELOG.md
GitOrigin-RevId: 6f5b90991f4a8c03a51e5829d2650771bb0e29c1
While debugging issues with HLS, Reed Mullanix noticed that we don't use relative paths. This leads to problems when using HLS + Emacs due to a bug in `lsp-mode` which prevents it from finding the correct project root.
However, it is still a good practice to use relative paths in TH for other reasons, including being able to import these modules in GHCI.
This PR should make it so HLS-1.0 & emacs provide type inference, imports, etc., in all modules in our codebase.
GitOrigin-RevId: 5f53b9a7ccf46df1ea7be94ff0a5c6ec861f4ead
Fixes https://github.com/hasura/graphql-engine-mono/issues/712
Main point of interest: the `Hasura.SQL.Backend` module.
This PR creates an `Exists` type, indexed by the type index and the packed constraint, while hiding all of its complexity by not exporting the constructor.
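A minimal sketch of the shape of such a type (the real module hides the constructor and provides eliminators):
```
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE RankNTypes #-}

import Data.Kind (Constraint, Type)

-- Sketch only: an existential indexed by a constraint @c@ and a type
-- constructor @f@; the packed type @b@ is hidden, and @c b@ is the
-- only capability consumers get.
data Exists (c :: Type -> Constraint) (f :: Type -> Type) where
  Exists :: c b => f b -> Exists c f

-- Elimination is only possible through a rank-2 continuation:
runExists :: (forall b. c b => f b -> r) -> Exists c f -> r
runExists k (Exists x) = k x
```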
Existential constructors/types which are no longer (directly) existential:
- [X] BackendSourceInfo :: BackendSourceInfo
- [x] BackendSourceMetadata :: BackendSourceMetadata
- [x] MOSourceObjId :: MetadataObjId
- [x] SOSourceObj :: SchemaObjId
- [x] RFDB :: RootField
- [x] LQP :: LiveQueryPlan
- [x] ExecStepDB :: ExecutionStep
This PR also removes ALL usages of `Typeable.cast` from our codebase. We still need to derive `Typeable` in a few places in order to be able to derive `Data` in one place. I have not dug deeper to see why this is needed.
GitOrigin-RevId: bb47e957192e4bb0af4c4116aee7bb92f7983445
Previously invalid REST endpoints would throw errors during schema cache build.
This PR changes the validation to instead add to the inconsistent metadata objects in order to allow use of `allow_inconsistent_metadata` with inconsistent REST endpoints.
All non-fatal endpoint definition errors are returned as inconsistent metadata warnings/errors depending on the use of `allow_inconsistent_metadata`. The endpoints with issues are then created and return informational runtime errors when they are called.
Console impact when creating endpoints is that error messages now refer to metadata inconsistencies rather than the REST feature at the top level:
![image](https://user-images.githubusercontent.com/92299/109911843-ede9ec00-7cfe-11eb-9c55-7cf924d662a6.png)
<img width="969" alt="image" src="https://user-images.githubusercontent.com/92299/110258597-8336fa00-7ff7-11eb-872c-bfca945aa0e8.png">
Note: Conflicting endpoints generate one error per conflicting set of endpoints, due to the implementation of `groupInconsistentMetadataById` and `imObjectIds`. This keeps error messages terse, but may cause problems if assumptions are made elsewhere about `imObjectIds`.
Related to https://github.com/hasura/graphql-engine-mono/pull/473 (Allow Inconsistent Metadata (v2) #473 (Merged))
---
### Kodiak commit message
Changes the validation to use inconsistent metadata objects for REST endpoint issues.
#### Commit title
Inconsistent metadata for REST endpoints
GitOrigin-RevId: b9de971208e9bb0a319c57df8dace44cb115ff66
fixes #3868
docker image - `hasura/graphql-engine:inherited-roles-preview-48b73a2de`
Note:
To be able to use the inherited roles feature, the graphql-engine should be started with the env variable `HASURA_GRAPHQL_EXPERIMENTAL_FEATURES` set to `inherited_roles`.
Introduction
------------
This PR implements the idea of multiple roles as presented in this [paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/FGALanguageICDE07.pdf). The multiple roles feature in this PR can be used via inherited roles. An inherited role is a role which can be created by combining multiple singular roles. For example, if there are two roles `author` and `editor` configured in the graphql-engine, then we can create an inherited role named `combined_author_editor`, which combines the select permissions of the `author` and `editor` roles; GraphQL queries can then be made using `combined_author_editor`.
How are select permissions of different roles combined?
------------------------------------------------------------
A select permission includes 5 things:
1. Columns accessible to the role
2. Row selection filter
3. Limit
4. Allow aggregation
5. Scalar computed fields accessible to the role
Suppose there are two roles: `role1` gives access to the `address` column with row filter `P1`, and `role2` gives access to both the `address` and the `phone` columns with row filter `P2`. We create a new role `combined_roles` which combines `role1` and `role2`.
Let's say the following GraphQL query is queried with the `combined_roles` role.
```graphql
query {
  employees {
    address
    phone
  }
}
```
This will translate to the following SQL query:
```sql
select
  (case when (P1 or P2) then address else null end) as address,
  (case when P2 then phone else null end) as phone
from employee
where (P1 or P2)
```
The other parameters of the select permission will be combined in the following manner:
1. Limit - Minimum of the limits will be the limit of the inherited role
2. Allow aggregations - If any of the roles allows aggregation, then the inherited role will allow aggregation
3. Scalar computed fields - same as table column fields, as in the above example
APIs for inherited roles:
----------------------
1. `add_inherited_role`
`add_inherited_role` is the [metadata API](https://hasura.io/docs/1.0/graphql/core/api-reference/index.html#schema-metadata-api) to create a new inherited role. It accepts two arguments:
`role_name`: the name of the inherited role to be added (String)
`role_set`: list of roles that need to be combined (Array of Strings)
Example:
```json
{
  "type": "add_inherited_role",
  "args": {
    "role_name": "combined_user",
    "role_set": [
      "user",
      "user1"
    ]
  }
}
```
After adding the inherited role, it can be used just like any other role.
Note:
An inherited role can only be created with non-inherited/singular roles.
2. `drop_inherited_role`
The `drop_inherited_role` API accepts the name of the inherited role and drops it from the metadata. It accepts a single argument:
`role_name`: name of the inherited role to be dropped
Example:
```json
{
  "type": "drop_inherited_role",
  "args": {
    "role_name": "combined_user"
  }
}
```
Metadata
---------
The derived roles metadata will be included under the `experimental_features` key while exporting the metadata.
```json
{
  "experimental_features": {
    "derived_roles": [
      {
        "role_name": "manager_is_employee_too",
        "role_set": [
          "employee",
          "manager"
        ]
      }
    ]
  }
}
```
Scope
------
Only postgres queries and subscriptions are supported in this PR.
Important points:
-----------------
1. All columns exposed to an inherited role will be marked as `nullable`; this is done so that cell-value nullification can be performed.
TODOs
-------
- [ ] Tests
  - [ ] Test a GraphQL query running with an inherited role without enabling inherited roles in experimental features
  - [ ] Tests for aggregate queries, limit, computed fields, functions, subscriptions (?)
  - [ ] Introspection test with an inherited role (nullability changes in an inherited role)
- [ ] Docs
- [ ] Changelog
Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
GitOrigin-RevId: 3b8ee1e11f5ceca80fe294f8c074d42fbccfec63
- [x] **Event Triggers Metrics**
  - [x] Distribution of size of event trigger fetches / number of events fetched in the last event trigger fetch
  - [x] Event Triggers: Number of event trigger HTTP workers in process
  - [x] Event Triggers: Avg event trigger lock time (if an event has been fetched but not processed because no HTTP worker is free)
#### Sample response
The metrics can be viewed from the `/dev/ekg` endpoint
```json
{
  "num_events_fetched": {
    "max": 0,
    "mean": 0,
    "count": 1,
    "min": 0,
    "variance": null,
    "type": "d",
    "sum": 0
  },
  "num_event_trigger_http_workers": {
    "type": "g",
    "val": 0
  },
  "event_lock_time": {
    "max": 0,
    "mean": 0,
    "count": 0,
    "min": 0,
    "variance": 0,
    "type": "d",
    "sum": 0
  }
}
```
#### Todo
- [ ] Group similar metrics together (e.g. group all the metrics related to event triggers - how do we do it?)
Closes: https://github.com/hasura/graphql-engine-mono/issues/202
GitOrigin-RevId: bada11d871272b04c8a09d006d9d037a8464a472
Added a note on existentials. I plan to create a subsequent PR with a note on how we use the singletons trick to recover the type inside an existential.
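As a teaser, a minimal sketch of that trick (toy names, not our actual types): pattern matching on a singleton tag refines the type hidden in the existential.
```
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

-- Toy example only: a singleton GADT ties a runtime tag to a
-- type-level index.
data BackendKind = Postgres | MSSQL

data SBackend (b :: BackendKind) where
  SPostgres :: SBackend 'Postgres
  SMSSQL    :: SBackend 'MSSQL

-- An existential packing a value indexed by an unknown backend:
data Some f where
  Some :: SBackend b -> f b -> Some f

-- Matching on the singleton recovers the index:
onPostgres :: Some f -> Maybe (f 'Postgres)
onPostgres (Some SPostgres x) = Just x
onPostgres _                  = Nothing
```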
GitOrigin-RevId: 1f227d859dcc384b4ac7e103053f643f879827d1
When adding a rest endpoint that references a query with multiple definitions, throw an error.
Needed a second PR for this as Kodiak merged the prior PR before this commit was added.
---
### Kodiak commit message
Adding check for multiple query definitions for rest endpoints.
GitOrigin-RevId: 6e3977a210e29f143b61282fbb37c93eb5b9d73c
Resolves Issues:
* https://github.com/hasura/graphql-engine-mono/issues/658 - Invalid Slashes
* https://github.com/hasura/graphql-engine-mono/issues/628 - Subscriptions
Implementation:
* Moved some logic from Endpoint.hs to allow reuse of the logic that splits a URL into PathSegments.
* Additional validation steps alongside checking for overlapping routes
* Logging potential misuse of GET for mutations
Future Work:
* [ ] GET is allowed for mutations (ignore / log a warning for now)
* [ ] Add information to scInconsistentObjs instead of raising errors directly
TODO:
* [x] Duplicate variable segments with the same name in the location should not be allowed
* [x] We should throw an error on trailing and leading slashes and URLs which contain empty segments
* [x] Endpoints can be created using subscriptions, but the error only shows at query time
* [x] Tests
---
### Kodiak commit message
Prohibit Invalid slashes, duplicate variables, subscriptions for REST endpoints.
GitOrigin-RevId: 86c0d4af97984c8afd02699e6071e9c1658710b8
Provides queries for fetching and inserting metadata into the database that do not assume there is a `resource_version` column. This means they will work when migrating to/from older versions.
Co-authored-by: Lyndon Maydwell <92299+sordina@users.noreply.github.com>
GitOrigin-RevId: dac636d530524082c5a13ae0f016a2d4ced16f7f
Add optimistic concurrency control to the `replace_metadata` call.
Prevents users from submitting out-of-date metadata to metadata-mutating APIs.
See https://github.com/hasura/graphql-engine-mono/issues/472 for details.
GitOrigin-RevId: 5f220f347a3eba288a9098b01e9913ffd7e38166
* server: use a leaky bucket algorithm for bytes-per-second cache rate limiting (a sketch of the idea follows this commit list)
* Use evalsha properly
* Adds redis cache limit parameters to PoliciesConfig
* Loads Leaky Bucket Script On Server Start
* Adds more redis logging and moves cache update into lua script
* reverts setex in lua and adds notes
* Refactors cacheStore and adds max TTL and cache size limits
* Filter session vars in cache key
* WIP
* parens
* cache-clear-handler POC implementation
* cache-clear-handler POC implementation
* Pro projectId used as cache key
* POC working!
* prefixing query-response keys in redis
* Add cacheClearer to RedisScripts
* Partial implementation of cacheClearer from scripts record
* updating tests
* [automated] stylish-haskell commit
* Adds query lookup with metrics script
* Adds missing module and lua script from last commit
* Changes redis script module structure to match cache clearing branch
* minor change to lua script
* cleaning up cache clearing
* generalising JsonLog
* [automated] stylish-haskell commit
* Draft Cache Metrics Endpoint
* Adds Cache Metrics Handler
* Adds hook handler module
* Missed HandlerHook module in last commit
* glob
* Fixes redis mget bug
* Removes cache totals and changes dashes to colons in metric cache keys
* Adds query param to cache clear endpoint for deleting specific keys
* Adds query param to cache clear endpoint for deleting specific keys
* Cache Metrics on query families rather than queries
* Replace Set with nub
* Base16 Redis Hashes
* Query Family Redis Keys With Roles
* response headers for cache keys
* fixing bug in family key by excluding operation name; using hash for response header instead of entire key
* Adds query family to redis cache keys and cache clear endpoint
* Fixes queryfamily hash bug
* Moves cache endpoints to /pro
* Moved cache clear to POST
* Refactors cache clear function
* Fixes query family format bug
* Adds query cache tests and optional --redis-url flag to python test suite
* Adds session variable cache test
* Update pro changelog
* adding documentation for additional caching features
* more docs
* clearing up units of leaky bucket params
* Adds comments to leaky bucket script
* removes old todo
* Fixes session variable filtering to work with new query rootfield
* more advanced defaulting behaviour for bucket rate and capacity.
* Updates Docs
* Moves Role into QueryFamily hash
* Use Aeson for Cache Clear endpoint response
* Moves trace to bracket the leaky bucket script
* Misc review tweaks
* Adds sum type for cache clear query params
* Hardcodes RedisReplyLog log level
* Update docs/graphql/cloud/response-caching.rst
Co-authored-by: Phil Freeman <phil@hasura.io>
* new prose for rate limiting docs
* [automated] stylish-haskell commit
* make rootToSessVarPreds total
* [automated] stylish-haskell commit
* Fixes out of scope error
* Renamed _acRedis to _acCacheStore
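For context, a minimal sketch of the leaky bucket idea referenced in the first commit above (the actual implementation lives in a Redis Lua script; names and state handling here are hypothetical):
```
import Data.IORef
import Data.Time.Clock.POSIX (getPOSIXTime)

-- Sketch only: the bucket drains at @rate@ bytes/second and holds at
-- most @capacity@ bytes; a request of @n@ bytes is allowed iff it fits
-- after draining.
data Bucket = Bucket
  { bucketLevel    :: !Double -- ^ bytes currently in the bucket
  , bucketLastSeen :: !Double -- ^ POSIX time of the last update
  }

allowBytes :: Double -> Double -> IORef Bucket -> Double -> IO Bool
allowBytes rate capacity ref n = do
  now <- realToFrac <$> getPOSIXTime
  Bucket level seen <- readIORef ref
  let drained = max 0 (level - rate * (now - seen))
  if drained + n <= capacity
    then True  <$ writeIORef ref (Bucket (drained + n) now)
    else False <$ writeIORef ref (Bucket drained now)
```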
Co-authored-by: Solomon Bothwell <ssbothwell@gmail.com>
Co-authored-by: Lyndon Maydwell <lyndon@sordina.net>
Co-authored-by: David Overton <david@hasura.io>
Co-authored-by: Stylish Haskell Bot <stylish-haskell@users.noreply.github.com>
Co-authored-by: Lyndon Maydwell <lyndon@hasura.io>
GitOrigin-RevId: dda5c1a3f902967b3d78310f950541a55fabb1b0
* Stop shutdown handler retaining the whole serveCtx
This might look like quite a strange way to write the function but it's
the only way I could get GHC to not capture `serveCtx` in the shutdown
handler.
Fixes the metadata issue in #344
* Force argumentNames
The arguments list is often empty so we end up with a lot of duplicate
thunks if this value is not forced.
* Increase sharing in nullableType and nonNullableType
The previous definitions would lead to increased allocation, as they would
destroy any previously created sharing. The new definitions only allocate
a fresh constructor if the value is changed.
* Add memoization for field parsers
It was observed in #344 that many parsers were not being memoised, which
led to an increase in memory usage. This patch generalises memoisation so
that it works for FieldParsers as well as normal Parsers.
There can still be substantial improvement made by also memoising
InputFieldParsers, but that is left for future work.
Co-authored-by: Antoine Leblanc <antoine@hasura.io>
* [automated] stylish-haskell commit
* changelog
Co-authored-by: Phil Freeman <paf31@cantab.net>
Co-authored-by: Antoine Leblanc <antoine@hasura.io>
Co-authored-by: Stylish Haskell Bot <stylish-haskell@users.noreply.github.com>
Co-authored-by: Phil Freeman <phil@hasura.io>
GitOrigin-RevId: 36255f77a47cf283ea61df9d6a4f9138d4e5834c
The metadata storage implementation for graphql-engine-multitenant.
- It uses a centralized PG database to store the metadata of all tenants (instead of a per-tenant database)
- Similarly, it uses a single schema-sync listener thread per MT worker instead of a listener thread per tenant (although the processor thread is still spawned per tenant)
- 2 new flags are introduced - `--metadataDatabaseUrl` and (optional) `--metadataDatabaseRetries`
Internally, a "metadata mode" is introduced to indicate an external/managed store vs a store managed by each pro-server.
To run:
- obtain the schema file (located at `pro/server/res/cloud/metadata_db_schema.sql`)
- apply the schema on a PG database
- set the `--metadataDatabaseUrl` flag to point to the above database
- run the MT executable
The schema (and its migrations) for the metadata db is managed outside the MT worker.
### New metadata
The following is the new portion of `Metadata` added :
```yaml
version: 3
metrics_config:
  analyze_query_variables: true
  analyze_response_body: false
api_limits:
  disabled: false
  depth_limit:
    global: 5
    per_role:
      user: 7
      editor: 9
  rate_limit:
    per_role:
      user:
        unique_params:
          - x-hasura-user-id
          - x-hasura-team-id
        max_reqs_per_min: 20
    global:
      unique_params: IP
      max_reqs_per_min: 10
```
- In Pro, the code around fetching/updating/syncing pro-config is removed
- That also means `hdb_pro_catalog`, which kept the config cache, is no longer required; hence `hdb_pro_catalog` is also removed
- The required config comes from metadata / schema cache
### New Metadata APIs
- `set_api_limits`
- `remove_api_limits`
- `set_metrics_config`
- `remove_metrics_config`
#### `set_api_limits`
```yaml
type: set_api_limits
args:
  disabled: false
  depth_limit:
    global: 5
    per_role:
      user: 7
      editor: 9
  rate_limit:
    per_role:
      anonymous:
        max_reqs_per_min: 10
        unique_params: "ip"
      editor:
        max_reqs_per_min: 30
        unique_params:
          - x-hasura-user-id
      user:
        unique_params:
          - x-hasura-user-id
          - x-hasura-team-id
        max_reqs_per_min: 20
    global:
      unique_params: IP
      max_reqs_per_min: 10
```
#### `remove_api_limits`
```yaml
type: remove_api_limits
args: {}
```
#### `set_metrics_config`
```yaml
type: set_metrics_config
args:
  analyze_query_variables: true
  analyze_response_body: false
```
#### `remove_metrics_config`
```yaml
type: remove_metrics_config
args: {}
```
#### TODO
- [x] on-prem pro implementation for `MonadMetadataStorage`
- [x] move the project config from Lux to pro metadata (PR: #379)
- [ ] console changes for pro config/api limits, subscription workers (cc @soorajshankar @beerose)
- [x] address other minor TODOs
- [x] TxIso for `MonadSourceResolver`
- [x] enable EKG connection pool metrics
- [x] add logging of connection info when sources are added?
- [x] confirm if the `buildReason` for schema cache is correct
- [ ] testing
- [x] 1.3 -> 1.4 cloud migration script (#465; PR: #508)
- [x] one-time migration of existing metadata from users' db to centralized PG
- [x] one-time migration of pro project config + api limits + regression tests from metrics API to metadata
- [ ] integrate with infra team (WIP - cc @hgiasac)
- [x] benchmark with 1000+ tenants + each tenant making read/update metadata query every second (PR: https://github.com/hasura/graphql-engine-mono/pull/411)
- [ ] benchmark with few tenants having large metadata (100+ tables etc.)
- [ ] when user moves regions (https://github.com/hasura/lux/issues/1717)
- [ ] metadata has to be migrated from one regional PG to another
- [ ] migrate metrics data as well?
- [ ] operation logs
- [ ] regression test runs
- [ ] find a way to share the schema files with the infra team
Co-authored-by: Naveen Naidu <30195193+Naveenaidu@users.noreply.github.com>
GitOrigin-RevId: 39e8361f2c0e96e0f9e8f8fb45e6cc14857f31f1
fixes https://github.com/hasura/graphql-engine/issues/6449
A while back we added [support for customizing JWT claims](https://github.com/hasura/graphql-engine/pull/3575), which enabled mapping a session variable to any value within the unregistered claims; but, as reported in #6449, users weren't able to map the `x-hasura-user-id` session variable to the `sub` standard JWT claim.
This PR fixes the above issue by allowing mapping session variables to standard JWT claims as well.
GitOrigin-RevId: d3e63d7580adac55eb212e0a1ecf7c33f5b3ac4b
This PR generalizes a bunch of metadata structures.
Most importantly, it changes `SourceCache` to hold existentially quantified values:
```
data BackendSourceInfo =
  forall b. Backend b => BackendSourceInfo (SourceInfo b)

type SourceCache = HashMap SourceName BackendSourceInfo
```
This changes a *lot* of things throughout the code. For now, all code using the schema cache explicitly casts sources to Postgres, meaning that if any non-Postgres `SourceInfo` makes it to the cache, it'll be ignored.
That means that after this PR is submitted, we can split work between two different aspects:
- creating `SourceInfo` for other backends
- handling those other sources down the line
GitOrigin-RevId: fb9ea00f32e840fc33c5467896fb1dfa5283ab42
### Description
Our Prelude provides the very convenient `hoistMaybe :: Maybe b -> MaybeT m b`. This PR adds hlint rules to replace uses of `MaybeT $ pure $ x` with the cleaner `hoistMaybe x`, and rules to specifically replace `MaybeT $ pure Nothing` with `empty`.
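A small before/after of what the rules rewrite (`hoistMaybe` is also exported by `Control.Monad.Trans.Maybe` in recent versions of transformers):
```
import Control.Applicative (empty)
import Control.Monad.Trans.Maybe (MaybeT (..), hoistMaybe)

-- Flagged by the new rule:
before :: Monad m => Maybe a -> MaybeT m a
before x = MaybeT $ pure x

-- Suggested replacement:
after :: Monad m => Maybe a -> MaybeT m a
after = hoistMaybe

-- And `MaybeT $ pure Nothing` becomes simply:
nothing :: Monad m => MaybeT m a
nothing = empty
```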
GitOrigin-RevId: 7254f4954e34e4d7ca972dc7c12073d3ab8cb0b8
Fixes https://github.com/hasura/graphql-engine/issues/5704 by checking, for aggregate fields whether we are handling a numeric aggregation.
This PR also adds type information to `ColFld` such that we know the type of the field.
This is the second attempt. See #319 for a less invasive approach. @nicuveo suggested type information might be useful, and since it wasn't hard to add, I think this version is better as well.
GitOrigin-RevId: aa6a259fd5debe9466df6302839ddbbd0ea659b5
Earlier (pre catalog separation), the remote schema permissions were in `/v1/query`. This PR moves them to `/v1/metadata`.
GitOrigin-RevId: cb39d9df4cc2288f67231504e3a7909f2f8df4da
fixes https://github.com/hasura/graphql-engine/issues/2109
This PR accepts a new config `allowed_skew` in the JWT config to provide some leeway while comparing the JWT expiry time.
GitOrigin-RevId: ef50cf77d8e2780478685096ed13794b5c4c9de4
Fixes https://github.com/hasura/graphql-engine/issues/6385
In v1.3.4-beta.2, the SQL generated for a query action containing a relationship and configured with permissions parsed the `x-hasura-user-id` session variable value through the `hasura.user` postgres setting, instead of passing the session variables as a JSON object to the query as in v1.3.3.
This PR fixes the SQL generation to match that of v1.3.3.
Co-authored-by: Rakesh Emmadi <12475069+rakeshkky@users.noreply.github.com>
GitOrigin-RevId: 838ba812f89b51df7fcead81b9d3c2885dfa39b4
This PR is a combination of the following other PRs:
- #169: move HasHttpManager out of RQL.Types
- #170: move UserInfoM to Hasura.Session
- #179: delete dead code from RQL.Types
- #180: move event related code to EventTrigger
GitOrigin-RevId: d97608d7945f2c7a0a37e307369983653eb62eb1
### Description
This PR updates the graphql schema to be backend-agnostic. To do so, it also moves the definition of operators to `BackendSchema`, to be specified differently per backend.
GitOrigin-RevId: 35b9d2d1bff93fb68b872d6ab0d3d12ec12c1b93
Earlier, while creating the event trigger's internal postgres trigger, we used to get the name of the table from the `TG_TABLE_NAME` special trigger variable. Using this with normal tables works fine, but it breaks when the parent table is partitioned because we associate the ET configuration in the schema only with the original table (as it should be).
In this PR, we supply the table name and schema name through template variables instead of using `TG_TABLE_NAME` and `TG_TABLE_SCHEMA`, so that event triggers work with a partitioned table as well.
TODO:
- [x] Changelog
- [x] unit test (ET on partition table)
GitOrigin-RevId: 556376881a85525300dcf64da0611ee9ad387eb0
This is an incremental PR towards https://github.com/hasura/graphql-engine/pull/5797
Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com>
GitOrigin-RevId: a6cb8c239b2ff840a0095e78845f682af0e588a9
* server: fix handling of empty array values in relationships of set_custom_types
* add a test for dropping the relationship
GitOrigin-RevId: abff138b0021647a1cb6c6c041c2ba53c1ff9b77
* Remove unused ExitCode constructors
* Simplify shutdown logic
* Update server/src-lib/Hasura/App.hs
Co-authored-by: Brandon Simmons <brandon@hasura.io>
* WIP: fix zombie thread issue
* Use forkCodensity for the schema sync thread
* Use forkCodensity for the oauthTokenUpdateWorker
* Use forkCodensity for the schema update processor thread
* Add deprecation notice
* Logger threads use Codensity
* Add the MonadFix instance for Codensity to get log-sender thread logs
* Move outIdleGC out to the top level, WIP
* Update forkImmortal function for more logging info
* add back the idle GC to Pro
* setupAuth
* use ImmortalThreadLog
* Fix tests
* Add another finally block
* loud warnings
* Change log level
* hlint
* Finalize the logger in the correct place
* Add ManagedT
* Update server/src-lib/Hasura/Server/Auth.hs
Co-authored-by: Brandon Simmons <brandon@hasura.io>
* Comments etc.
Co-authored-by: Brandon Simmons <brandon@hasura.io>
Co-authored-by: Naveen Naidu <naveennaidu479@gmail.com>
GitOrigin-RevId: 156065c5c3ace0e13d1997daef6921cc2e9f641c
Generalize TableCoreInfoRM, TableCoreCacheRT, and some table metadata data types; generalize fromPGCol to fromCol; generalize some schema cache functions; prepare some enum schema cache code for generalization
GitOrigin-RevId: a65112bc1688e00fd707d27af087cb2585961da2
An incremental PR towards https://github.com/hasura/graphql-engine/pull/5797
- Expands `MonadMetadataStorage` with operations related to async actions and setting/updating metadata
GitOrigin-RevId: 53386b7b2d007e162050b826d0708897f0b4c8f6
Since PDV, introspection queries are parsed by a certain kind of reflection where during the GraphQL schema generation, we collect all GraphQL types used during schema generation to generate answers to introspection queries. This has a great advantage, namely that we don't need to keep track of which types are being used in our schema, as this information is extracted after the fact.
But what happens when we encounter two types with the same name in the GraphQL schema? Well, they had better be the same, otherwise we likely made a programming error. So what do we do when we *do* encounter a conflict? So far, we've been throwing a rather generic error message, namely `found conflicting definitions for <typename> when collecting types from the schema`. It does not specify what the conflict is, or how it arose. In fact, I'm a little bit hesitant to output more information about what the conflict is, because we support many different kinds of GraphQL types, and these can have disagreements in many different ways. It'd be a bit tiring (not to mention error-prone) to spell this out explicitly for all types. And, in any case, at the moment our equality checks for types are incorrect anyway, as we are avoiding implementing a certain recursive equality checking algorithm.
As it turns out, type conflicts arise not just due to programming errors, but also arise naturally under certain configurations. @codingkarthik encountered an interesting case recently where adding a specific remote and a single unrelated database table would result in a conflict in our Relay schema. It was not readily visible how this conflict arose: this took significant engineering effort.
This adds stack traces to type collection, so that we can inform the user where the type conflict is taking place. The origin of the above conflict can easily be spotted using this PR. Here's a sample error message:
```
Found conflicting definitions for "PageInfo". The definition at "mutation_root.UpdateUser.favourites.anime.edges.node.characters.pageInfo" differs from the definition at "query_root.test_connection.pageInfo"
```
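A toy sketch of the underlying technique (simplified types, hypothetical names): thread the traversal path through collection and report both paths on a mismatch.
```
import qualified Data.Map.Strict as Map

-- Sketch only: @def@ stands in for a full type definition.
insertType
  :: (Eq def, Show def)
  => [String]                        -- ^ path from the root to this use
  -> String                          -- ^ type name
  -> def                             -- ^ definition found at this use
  -> Map.Map String (def, [String])  -- ^ types collected so far
  -> Either String (Map.Map String (def, [String]))
insertType path name def seen =
  case Map.lookup name seen of
    Just (def', path')
      | def' /= def ->
          Left $ "Found conflicting definitions for " <> show name
              <> ". The definition at " <> show path
              <> " differs from the definition at " <> show path'
    _ -> Right (Map.insert name (def, path) seen)
```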
Co-authored-by: Antoine Leblanc <antoine@hasura.io>
GitOrigin-RevId: d4c01c243022d8570b3c057b168a61c3033244ff
* Add update_by_pk test with more interesting permissions.
* Add new delete test with more interesting permissions.
* Fix byPk operations not providing all columns.
* Empty commit to unlock ci.
GitOrigin-RevId: eb1ca2cc4c59f421bc5fedf49bef91aa4bbdca94
This PR makes a bunch of schema generation code in Hasura.GraphQL.Schema backend-agnostic, by moving the backend-specific parts into a new BackendSchema type class. This way, the schema generation code can be reused for other backends, simply by implementing new instances of the BackendSchema type class.
This work is now in a state where the schema generators are sufficiently generic to accept the implementation of a new backend. That means that we can start exposing MS SQL schema. Execution is not implemented yet, of course.
The branch currently does not support computed fields or Relay. This is, in a sense, intentional: computed field support is normally baked into the schema generation (through the fieldSelection schema generator), and so this branch shows a programming technique that allows us to expose certain GraphQL schema depending on backend support. We can write support for computed fields and Relay at a later stage.
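A self-contained sketch of the technique (toy class and methods, not the real `BackendSchema` interface): backend-specific pieces become class methods, so the generic generators are written once against the class.
```
{-# LANGUAGE TypeFamilies #-}

-- Toy stand-in for the real class: each backend supplies its column
-- representation and its comparison operators.
class BackendSchema b where
  type Column b
  columnName    :: b -> Column b -> String
  operatorNames :: b -> [String]

-- A "generic schema generator" reusable across backends:
fieldDoc :: BackendSchema b => b -> Column b -> String
fieldDoc b col =
  columnName b col <> " supports " <> unwords (operatorNames b)

data Postgres = Postgres

instance BackendSchema Postgres where
  type Column Postgres = String
  columnName _    = id
  operatorNames _ = ["_eq", "_neq", "_lt", "_gt"]
```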
Co-authored-by: Antoine Leblanc <antoine@hasura.io>
GitOrigin-RevId: df369fc3d189cbda1b931d31678e9450a6601314
* Use local graphql-engine, WIP
* Implement cacheLookup and cacheStore
* Fix the key generation code, deduplicate
* changelog
* More deduplication
* Run tests from this repo
* remove changelog
Co-authored-by: Anon Ray <rayanon004@gmail.com>
GitOrigin-RevId: 80d10c4dfdcadbe9f49ec50fe67aa981202c98c9
Accept a new server flag `--websocket-keepalive` to control the
websocket keep-alive interval
Co-authored-by: Auke Booij <auke@hasura.io>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* use only required session variables in multiplexed queries for subscriptions
This will reduce the load on Postgres when the result of a subscription
is not dependent on the session variables of the request
* add DerivingVia to the project wide extension list
* expose a more specific function to filter session variables
* improve documentation of session variables of a cohort
Co-Authored-By: Alexis King <lexi.lambda@gmail.com>
* fix bad rebase
* add test for checking only required session variables are used to make query
Co-authored-by: Alexis King <lexi.lambda@gmail.com>
Co-authored-by: Karthikeyan Chinnakonda <karthikeyan@hasura.io>
* add support for joining to remote interface and union fields
* add test for making a remote relationship with a Union type
* add CHANGELOG
* remove the pretty-simple library
* remove unused imports from Remote.hs
* Apply suggestions from code review
Co-authored-by: Brandon Simmons <brandon.m.simmons@gmail.com>
* support remote relationships against enum fields as well
* update CHANGELOG
Co-authored-by: Brandon Simmons <brandon.m.simmons@gmail.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
This issue was very tricky to track down, but fortunately easy to fix.
The interaction here is subtle enough that it’s difficult to put into
English what would go wrong in what circumstances, but the new unit test
captures precisely that interaction to ensure it remains fixed.
Add a backend type extension parameter to some RQL types, following the ideas of the paper "Trees that grow" (Najd & Jones 2016)
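A minimal sketch of the pattern (toy types): the extension parameter indexes the AST, and type families select backend-specific payloads per node.
```
{-# LANGUAGE TypeFamilies #-}

-- Toy example of "Trees that grow": the tree is indexed by a backend
-- @b@, and each extensible node delegates its payload to a type family.
type family XColumn b

data BoolExp b
  = BoolAnd [BoolExp b]
  | BoolNot (BoolExp b)
  | BoolColumn (XColumn b) -- backend-specific leaf

-- A backend instantiates the family with its own column type:
data Postgres

type instance XColumn Postgres = String -- toy payload
```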
Co-authored-by: Antoine Leblanc <antoine@hasura.io>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* [skip ci] use the args while making the fieldParser
* modify the execution part of the remote queries
* parse union queries deeply
* add test for remote schema field validation
* add tests for validating remote query arguments
Co-authored-by: Auke Booij <auke@hasura.io>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>