## Description ✍️
This PR adds support for generating query params directly from a kriti template, which can also be used to flatten a list of parameter arguments.
### Changes in the Metadata API
Previously, the `query_params` key inside `request_transform` took an object of key/value pairs, where the key is the query parameter name and the value is either the literal parameter value or a kriti template that resolves to it.
With this PR, we give the user more freedom: `query_params` can now also take a string, which is treated as a kriti template that generates the complete query string. This change needs to be incorporated into the console and the CLI metadata import/export as well.
- [x] CLI: Compatible, no changes required
- [ ] Console
## Changelog ✍️
__Component__ : server
__Type__: feature
__Product__: community-edition
### Short Changelog
use kriti template to generate query param from list of arguments
### Related Issues ✍
https://hasurahq.atlassian.net/browse/GS-243
### Solution and Design ✍
We use a kriti template to generate the complete query parameter string.
| Query Template | Output |
|---|---|
| `{{ concat ([concat({{ range _, x := [\"apple\", \"banana\"] }} \"tags={{x}}&\" {{ end }}), \"flag=smthng\"]) }}`| `tags=apple&tags=banana&flag=smthng` |
| `{{ concat ([\"tags=\", concat({{ range _, x := $body.input }} \"{{x}},\" {{ end }})]) }}` | `tags=apple%2Cbanana%2C` |
### Steps to test and verify ✍
- start HGE and make the following request to `http://localhost:8080/v1/metadata`:
```json
{
  "type": "test_webhook_transform",
  "args": {
    "webhook_url": "http://localhost:3000",
    "body": {
      "action": {
        "name": "actionName"
      },
      "input": ["apple", "banana"]
    },
    "request_transform": {
      "version": 2,
      "url": "{{$base_url}}",
      "query_params": "{{ concat ([concat({{ range _, x := $body.input }} \"tags={{x}}&\" {{ end }}), \"flag=smthng\"]) }}",
      "template_engine": "Kriti"
    }
  }
}
```
- you should receive the following as output:
```json
{
  "body": {
    "action": {
      "name": "actionName"
    },
    "input": [
      "apple",
      "banana"
    ]
  },
  "headers": [],
  "method": "GET",
  "webhook_url": "http://localhost:3000?tags=apple&tags=banana&flag=smthng"
}
```
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6961
Co-authored-by: Tirumarai Selvan <8663570+tirumaraiselvan@users.noreply.github.com>
GitOrigin-RevId: 712ba038f03009edc3e8eb0435e723304943399a
## Description ✍️
This PR introduces a new feature to enable/disable event triggers during logical replication of table data for PostgreSQL and MS-SQL data sources. We introduce a new field `trigger_on_replication` in the `*_create_event_trigger` metadata API. By default, event triggers will not fire during logical data replication.
## Changelog ✍️
__Component__ : server
__Type__: feature
__Product__: community-edition
### Short Changelog
Add option to enable/disable event triggers on logically replicated tables
### Related Issues ✍
https://github.com/hasura/graphql-engine/issues/8814
https://hasurahq.atlassian.net/browse/GS-252
### Solution and Design
- By default, triggers do **not** fire when the session mode is `replica` in Postgres, so if `triggerOnReplication` is set to `true` for an event trigger, we run the query `ALTER TABLE #{tableTxt} ENABLE ALWAYS TRIGGER #{triggerNameTxt};` so that the trigger always fires, irrespective of the `session_replication_role`.
- By default, triggers do fire during replication in MS-SQL, so if `triggerOnReplication` is set to `false` for an event trigger, we add a `NOT FOR REPLICATION` clause to the SQL when the trigger is created/altered; this sets `is_not_for_replication` to `true` for the trigger, so it does not fire during logical replication (see the sketch below).
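As a rough illustration only, here is a minimal sketch of how the flag could translate into backend-specific DDL. The helper names (`pgEnableTriggerSql`, `mssqlReplicationClause`) are hypothetical and not the actual HGE functions, which live in the backend-specific event-trigger modules:
```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)

-- Hypothetical helpers, for illustration only.
-- Postgres: triggers are skipped when session_replication_role = 'replica',
-- unless the trigger is marked ENABLE ALWAYS.
pgEnableTriggerSql :: Bool -> Text -> Text -> Maybe Text
pgEnableTriggerSql triggerOnReplication tableTxt triggerNameTxt
  | triggerOnReplication =
      Just ("ALTER TABLE " <> tableTxt <> " ENABLE ALWAYS TRIGGER " <> triggerNameTxt <> ";")
  | otherwise = Nothing -- keep the Postgres default: don't fire on replica sessions

-- MS-SQL: triggers fire during replication unless created NOT FOR REPLICATION.
mssqlReplicationClause :: Bool -> Text
mssqlReplicationClause triggerOnReplication
  | triggerOnReplication = "" -- keep the MS-SQL default: fire during replication
  | otherwise = "NOT FOR REPLICATION"
```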
### Steps to test and verify ✍
- Run hspec integration tests for HGE
## Server checklist ✍
### Metadata ✍
Does this PR add a new Metadata feature?
- ✅ Yes
  - Does `export_metadata`/`replace_metadata` support the new metadata added?
- ✅
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6953
Co-authored-by: Puru Gupta <32328846+purugupta99@users.noreply.github.com>
Co-authored-by: Sean Park-Ross <94021366+seanparkross@users.noreply.github.com>
GitOrigin-RevId: 92731328a2bbdcad2302c829f26f9acb33c36135
Mostly trying to avoid tricky `Arrows` syntax, and unnecessary use of the `Hasura.Incremental` framework.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6997
GitOrigin-RevId: 9a2f5883e7e29af164e1581049ae003afec2cbe4
I encountered this dead code while doing other things: it's a type class with a single method which is never called. Deleting the type class allows us to simplify `TableCoreCacheRT` and `TableCacheRT`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7075
GitOrigin-RevId: 121320349c478a93717b0706037553d8406cbfa9
This increases the speed of `create_query_collection` and `add_collection_to_allowlist` by a factor ~~10~~ 65, by caching the in-memory GraphQL schema. This speedup also applies more broadly to Metadata changes relating to:
- allowlists
- query collections
- cron triggers
- REST endpoints
- API limits
- metrics config
- GraphQL introspection options
- TLS allow lists
- OpenTelemetry
When is construction of the in-memory GraphQL schema cached between Metadata operations?
Before this PR, **never**! It's rebuilt fully, for every role, on every Metadata operation.
However, there are many Metadata operations that don't influence the GraphQL schema. So we should be caching its construction.
The `Hasura.Incremental` framework allows us to cache such constructions: whenever we have an arrow `Rule m a b`, where `a` is the input to the arrow and `b` the output, we can use the `Inc.cache` combinator to obtain a new arrow which is only re-executed when the input `a` changes in a material way. To detect such changes, `a` needs an `Eq` instance. (Before hasura/graphql-engine-mono#6877, this was a `Cacheable` type class which has now been removed.)
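As a toy, self-contained stand-in for that idea (not the real `Hasura.Incremental` API, which works on arrows rather than `IO` functions), one can cache a computation keyed on its input:
```haskell
import Data.IORef (newIORef, readIORef, writeIORef)

-- Re-run an expensive function only when its input, compared with Eq,
-- differs from the previous run; otherwise reuse the previous output.
cacheByInput :: Eq a => (a -> IO b) -> IO (a -> IO b)
cacheByInput f = do
  ref <- newIORef Nothing
  pure $ \a -> do
    prev <- readIORef ref
    case prev of
      Just (a', b) | a' == a -> pure b -- input unchanged: reuse cached output
      _ -> do
        b <- f a -- input changed (or first run): recompute and remember
        writeIORef ref (Just (a, b))
        pure b
```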
We can't simply apply `Inc.cache` to the "Steps 3 and 4" in `buildSchemaCacheRule`, because the inputs (components of `BuildOutputs` such as `SourceCache`) don't have an `Eq` instance.
So the changes to `buildSchemaCacheRule` restructure the code so that the input to "Step 1", namely the Metadata, can be used as a caching key instead, so that `Inc.cache` can be applied to the whole sequence of steps.
That works to cache construction of the GraphQL schema, but it means that now only those Metadata operations that _don't_ influence any of the products of steps 1-4 can use a cached build of the GraphQL schema. The most important intermediate product is `BuildOutputs`. So now the exercise becomes to minimize the amount of stuff stored in `BuildOutputs`, so that as many Metadata operations as possible can be handled outside of the codepath that produces a GraphQL schema.
Per hasura/graphql-engine-mono#6609, the `BuildOutputs` structure is too big, and stores things unnecessarily. Refer to the PR description there for reasoning - the same logic applies to this PR, and simply goes a few steps further. In doing so, it can benefit from hasura/graphql-engine-mono#6765, which allows us to verify at compile time that certain Schema Cache building steps _don't_ generate "Metadata dependencies". If a certain Metadata dependency is never generated, we don't need to handle that case in `deleteMetadataObject`. Thus such intermediate products don't need to be passed through `resolveDependencies`, and thus they don't need to be stored in `BuildOutputs`, and thus their rebuild won't trigger a GraphQL schema rebuild.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6613
GitOrigin-RevId: 27d2e69d3461bd4c32f08febef9995c0369fab3a
What is the `Cacheable` type class about?
```haskell
class Eq a => Cacheable a where
unchanged :: Accesses -> a -> a -> Bool
default unchanged :: (Generic a, GCacheable (Rep a)) => Accesses -> a -> a -> Bool
unchanged accesses a b = gunchanged (from a) (from b) accesses
```
Its only method is an alternative to `(==)`. The added value of `unchanged` (and the additional `Accesses` argument) arises _only_ for one type, namely `Dependency`. Indeed, the `Cacheable (Dependency a)` instance is non-trivial, whereas every other `Cacheable` instance is completely boilerplate (and indeed either generated from `Generic`, or simply `unchanged _ = (==)`). The `Cacheable (Dependency a)` instance is the only one where the `Accesses` argument is not just passed onwards.
The only callsite of the `unchanged` method is in the `ArrowCache (Rule m)` method. That is to say that the `Cacheable` type class is used to decide when we can re-use parts of the schema cache between Metadata operations.
So what is the `Cacheable (Dependency a)` instance about? Normally, the output of a `Rule m a b` is re-used when the new input (of type `a`) is equal to the old one. But sometimes, that's too coarse: it might be that a certain `Rule m a b` only depends on a small part of its input of type `a`. A `Dependency` allows us to spell out what parts of `a` are being depended on, and these parts are recorded as values of types `Access a` in the state `Accesses`.
If the input `a` changes, but not in a way that touches the recorded `Accesses`, then the output `b` of that rule can be re-used without recomputing.
So now you understand _why_ we're passing `Accesses` to the `unchanged` method: `unchanged` is an equality check in disguise that just needs some additional context.
But we don't need to pass `Accesses` as a function argument. We can use the `reflection` package to pass it as type-level context. So the core of this PR is that we change the instance declaration from
```haskell
instance (Cacheable a) => Cacheable (Dependency a) where
```
to
```haskell
instance (Given Accesses, Eq a) => Eq (Dependency a) where
```
and use `(==)` instead of `unchanged`.
If you haven't seen `reflection` before: it's like a `MonadReader`, but it doesn't require a `Monad`.
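For readers new to `reflection`, a tiny self-contained example of the `give`/`given` pattern (not HGE code):
```haskell
{-# LANGUAGE FlexibleContexts #-}

import Data.Reflection (Given, give, given)

-- 'give' installs a value as type-level context; 'given' reads it back,
-- much like runReader/ask, but without a monad.
greeting :: Given String => String
greeting = "Hello, " ++ given

main :: IO ()
main = putStrLn (give "world" greeting) -- prints "Hello, world"
```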
In order to pass the current `Accesses` value, instead of simply passing the `Accesses` as a function argument, we need to instantiate the `Given Accesses` context. We use the `give` method from the `reflection` package for that.
```haskell
give :: forall r. Accesses -> (Given Accesses => r) -> r
unchanged :: (Given Accesses => Eq a) => Accesses -> a -> a -> Bool
unchanged accesses a b = give accesses (a == b)
```
With these three components in place, we can delete the `Cacheable` type class entirely.
The remainder of this PR is just to remove the `Cacheable` type class and its instances.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6877
GitOrigin-RevId: 7125f5e11d856e7672ab810a23d5bf5ad176e77f
- Avoid a few banana brackets `(| ... |)`, often by just using local `let` bindings
- Use proper `Arrows` syntax rather than helpers like `>->`
- Use monadic `do` syntax instead of `Arrows` syntax where possible
- Avoid `traverseA @Maybe`, in favor of a `case`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6751
GitOrigin-RevId: c07b22a1a259db6d135486ec71a716705e280717
`CollectedInfo` was just an awkward sum type. By using an explicit `Either` instead, we can guarantee at the type level that certain methods only write inconsistencies, or only write dependencies. This is useful, because if we can guarantee that no dependencies are written, then we don't need to run `resolveDependencies` on that part of the Metadata. In other words, we can keep it out of `BuildOutputs`, which greatly benefits performance - see e.g. hasura/graphql-engine-mono#6613.
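A minimal sketch of the shape of this change, with placeholder types standing in for the real definitions (the actual `CIDependency` constructor carries more fields):
```haskell
-- Placeholder types, for illustration only.
newtype InconsistentMetadata = InconsistentMetadata String
newtype SchemaDependency = SchemaDependency String

-- Before: one sum type that every cache-building step could emit.
data CollectedInfo
  = CIInconsistency InconsistentMetadata
  | CIDependency SchemaDependency

-- After: an explicit Either. A step whose writer type is fixed to the 'Left'
-- side provably never records dependencies, so its results need neither
-- 'resolveDependencies' nor a slot in 'BuildOutputs'.
type CollectedInfo' = Either InconsistentMetadata SchemaDependency
```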
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6765
GitOrigin-RevId: 9ce099d2eee2278dbb6e5bea72063e4b6e064b35
A bunch of configurations are retrieved from the Metadata, then stored in the `BuildOutputs` structure, only to then be forwarded to the `SchemaCache`, with extremely little processing in between.
So this simplifies the build pipeline for some parts of the metadata: just construct those things from `Metadata` directly, and store them in the `SchemaCache` without any intermediate container.
Why did we have the detour via `BuildOutputs` in the first place? Parts of the Metadata (codified by `MetadataObjId`) can generate _metadata inconsistencies_ and/or _schema dependencies_, which are related.
- Metadata inconsistencies are warnings that we show to the user, indicating that there's something wrong with their configuration, and they have to fix it.
- Schema dependencies are an internal mechanism that allow us to build a consistent view of the world. For instance, if we have a relationship from DB tables `books` to `authors`, but the `authors` table is inconsistent (e.g. it doesn't exist in the DB), then we have schema dependencies indicating that. The job of `resolveDependencies` is to then drop the relationship, so that we can at least generate a legal GraphQL schema for `books`.
If we never generate a schema dependency for a certain fragment of Metadata, then there is no reason to call `resolveDependencies` on it, and so there is no reason to store it in `BuildOutputs`.
---
The starting point that allows this refactor is to apply Metadata defaults before it reaches `buildAndCollectInfo`, so that metadata-with-defaults can be used elsewhere.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6609
GitOrigin-RevId: df0c4a7ff9451e10e02a40bf26304b26584ba483
This upgrades the version of Ormolu required by the HGE repository to v0.5.0.1, and reformats all code accordingly.
Ormolu v0.5 reformats code that uses infix operators. This is mostly useful, adding newlines and indentation to make it clear which operators are applied first, but in some cases, it's unpleasant. To make this easier on the eyes, I had to do the following:
* Add a few fixity declarations (search for `infix`)
* Add parentheses to make precedence clear, allowing Ormolu to keep everything on one line
* Rename `relevantEq` to `(==~)` in #6651 and set it to `infix 4`
* Add a few _.ormolu_ files (thanks to @hallettj for helping me get started), mostly for Autodocodec operators that don't have explicit fixity declarations
In general, I think these changes are quite reasonable. They mostly affect indentation.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6675
GitOrigin-RevId: cd47d87f1d089fb0bc9dcbbe7798dbceedcd7d83
The main aim of the PR is:
1. To set up a module structure for the 'remote-schemas' package.
2. To move parts of the remote schema codebase into the new module structure to validate it.
## Notes to the reviewer
Why a PR with large-ish diff?
1. We've been making progress on the MM project but we don't yet know how long it is going to take us to get to the first milestone. To understand this better, we need to figure out the unknowns as soon as possible. Hence I've taken a stab at the first two items in the [end-state](https://gist.github.com/0x777/ca2bdc4284d21c3eec153b51dea255c9) document to figure out the unknowns. Unsurprisingly, there are a bunch of issues that we haven't discussed earlier. These are documented in the 'open questions' section.
1. The diff is large, but it is only code being moved around, and I've added a section that documents how things are moved. In addition, there are a fair number of PR comments to help with the review process.
## Changes in the PR
### Module structure
Sets up the module structure as follows:
```
Hasura/
RemoteSchema/
Metadata/
Types.hs
SchemaCache/
Types.hs
Permission.hs
RemoteRelationship.hs
Build.hs
MetadataAPI/
Types.hs
Execute.hs
```
### 1. Types representing metadata are moved
Types that capture metadata information (currently scattered across several RQL modules) are moved into `Hasura.RemoteSchema.Metadata.Types`.
- This new module only depends on very 'core' modules such as `Hasura.Session` for the notion of roles and `Hasura.Incremental` for the `Cacheable` typeclass.
- The requirement on database modules is avoided by generalizing the remote schema metadata to accept an arbitrary 'r' for a remote relationship definition.
### 2. SchemaCache related types and build logic have been moved
Types that represent remote schemas information in SchemaCache are moved into `Hasura.RemoteSchema.SchemaCache.Types`.
Similar to `H.RS.Metadata.Types`, this module depends on 'core' modules except for `Hasura.GraphQL.Parser.Variable`. It has something to do with remote relationships but I haven't spent time looking into it. The validation of 'remote relationships to remote schema' is also something that needs to be looked at.
Rips out the logic that builds remote schema's SchemaCache information from the monolithic `buildSchemaCacheRule` and moves it into `Hasura.RemoteSchema.SchemaCache.Build`. Further, the `.SchemaCache.Permission` and `.SchemaCache.RemoteRelationship` have been created from existing modules that capture schema cache building logic for those two components.
This was a fair amount of work. On main, the remote schema's SchemaCache information is currently built in two phases: in the first phase, 'permissions' and 'remote relationships' are ignored, and in the second phase they are filled in.
While remote relationships can only be resolved after partially resolving sources and other remote schemas, the same isn't true for permissions. Further, most of the work that is done to resolve remote relationships can be moved to the first phase so that the second phase can be a very simple traversal.
This is the approach that was taken: resolve permissions, and as much of the remote relationships information as possible, in the first phase.
### 3. Metadata APIs related types and build logic have been moved
The types that represent remote schema related metadata APIs and the execution logic have been moved to `Hasura.RemoteSchema.MetadataAPI.Types` and `.Execute` modules respectively.
## Open questions:
1. `Hasura.RemoteSchema.Metadata.Types` is so called because I was hoping that all of the metadata-related APIs of remote schemas could be brought in at `Hasura.RemoteSchema.Metadata.API`. However, as the metadata APIs depended on functions from the `SchemaCache` module (see [1](ceba6d6226/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs (L55)) and [2](ceba6d6226/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs (L91))), it made more sense to create a separate top-level module for `MetadataAPI`s.
Maybe we can just have `Hasura.RemoteSchema.Metadata` and get rid of the extra nesting or have `Hasura.RemoteSchema.Metadata.{Core,Permission,RemoteRelationship}` if we want to break them down further.
1. `buildRemoteSchemas` in `H.RS.SchemaCache.Build` has the following type:
```haskell
buildRemoteSchemas ::
( ArrowChoice arr,
Inc.ArrowDistribute arr,
ArrowWriter (Seq CollectedInfo) arr,
Inc.ArrowCache m arr,
MonadIO m,
HasHttpManagerM m,
Inc.Cacheable remoteRelationshipDefinition,
ToJSON remoteRelationshipDefinition,
MonadError QErr m
) =>
Env.Environment ->
( (Inc.Dependency (HashMap RemoteSchemaName Inc.InvalidationKey), OrderedRoles),
[RemoteSchemaMetadataG remoteRelationshipDefinition]
)
`arr` HashMap RemoteSchemaName (PartiallyResolvedRemoteSchemaCtxG remoteRelationshipDefinition, MetadataObject)
```
Note the dependence on `CollectedInfo` which is defined as
```haskell
data CollectedInfo
= CIInconsistency InconsistentMetadata
| CIDependency
MetadataObject
-- ^ for error reporting on missing dependencies
SchemaObjId
SchemaDependency
deriving (Eq)
```
This pretty much means that the remote schema code is dependent on types from databases, actions, and so on.
How do we fix this? Maybe introduce a typeclass such as `ArrowCollectRemoteSchemaDependencies` which is defined in `Hasura.RemoteSchema` and then implemented in graphql-engine?
1. The dependency on `buildSchemaCacheFor` in `.MetadataAPI.Execute` which has the following signature:
```haskell
buildSchemaCacheFor ::
(QErrM m, CacheRWM m, MetadataM m) =>
MetadataObjId ->
MetadataModifier ->
```
This can be easily resolved if we restrict what the metadata APIs are allowed to do. Currently, they have unfettered access to modify the SchemaCache (the `CacheRWM` constraint):
```haskell
runAddRemoteSchema ::
( QErrM m,
CacheRWM m,
MonadIO m,
HasHttpManagerM m,
MetadataM m,
Tracing.MonadTrace m
) =>
Env.Environment ->
AddRemoteSchemaQuery ->
m EncJSON
```
If we instead restrict the remote schema APIs to only modify remote schema metadata (while still having access to the remote schemas part of the schema cache), this dependency is completely removed:
```haskell
runAddRemoteSchema ::
( QErrM m,
MonadIO m,
HasHttpManagerM m,
MonadReader RemoteSchemasSchemaCache m,
MonadState RemoteSchemaMetadata m,
Tracing.MonadTrace m
) =>
Env.Environment ->
AddRemoteSchemaQuery ->
  m RemoteSchemaMetadataObjId
```
The idea is that the core graphql-engine would call these functions and then call
`buildSchemaCacheFor`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6291
GitOrigin-RevId: 51357148c6404afe70219afa71bd1d59bdf4ffc6
- Remove a few unnecessary helper functions
- Delete kind annotations
- Bring GHC warnings and language extensions more in line with those of the `graphql-engine` library
- Constrain unconstrained dependency on `hasql-pool`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6251
GitOrigin-RevId: 10c2530f007f70cf1464cec36566ee2264589881
- Remove `onJust` in favor of the more general `for_`
- Remove `withJust` which was used only once
- Remove `hashNub` in favor of `Ord`-based `uniques`
- Simplify some of the implementations in `Hasura.Prelude`
- Add `hlint` hint from `maybe True` to `all`, and `maybe False` to `any`
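A small illustration of the last bullet's hints, relying on the `Foldable` instance for `Maybe`:
```haskell
-- maybe True even mx   ==>   all even mx   (True for Nothing)
-- maybe False even mx  ==>   any even mx   (False for Nothing)
hasOnlyEvenValue :: Maybe Int -> Bool
hasOnlyEvenValue = all even

hasEvenValue :: Maybe Int -> Bool
hasEvenValue = any even
```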
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6173
GitOrigin-RevId: 2c6ebbe2d04f60071d2a53a2d43c6d62dbc4b84e
This PR is the result of running the following commands:
```bash
$ git grep -l '".* : "' -- '*.hs' | xargs sed -i -E 's/(".*) : "/\1: "/'
$ scripts/dev.sh test --integration --accept
```
Also manually fixed a few tests and docs
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6148
GitOrigin-RevId: cf8b87605d41d9ce86613a41ac5fd18691f5a641
If we don't do this, we might end up applying metadata with a stale schema cache.
Following the principle of least surprise, replacing the metadata should probably compute inconsistencies with regards to the actual state of the database.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6026
GitOrigin-RevId: ff7469d7d9857c8a9f517d5d0b6f1ecf463621b3
Result of executing the following commands:
```shell
# replace "as Q" imports with "as PG" (in retrospect this didn't need a regex)
git grep -lE 'as Q($|[^a-zA-Z])' -- '*.hs' | xargs sed -i -E 's/as Q($|[^a-zA-Z])/as PG\1/'
# replace " Q." with " PG."
git grep -lE ' Q\.' -- '*.hs' | xargs sed -i 's/ Q\./ PG./g'
# replace "(Q." with "(PG."
git grep -lE '\(Q\.' -- '*.hs' | xargs sed -i 's/(Q\./(PG./g'
# ditto, but for [, |, { and !
git grep -lE '\[Q\.' -- '*.hs' | xargs sed -i 's/\[Q\./\[PG./g'
git grep -l '|Q\.' -- '*.hs' | xargs sed -i 's/|Q\./|PG./g'
git grep -l '{Q\.' -- '*.hs' | xargs sed -i 's/{Q\./{PG./g'
git grep -l '!Q\.' -- '*.hs' | xargs sed -i 's/!Q\./!PG./g'
```
(Doing the `grep -l` before the `sed`, instead of `sed` on the entire codebase, reduces the number of `mtime` updates, and so reduces how many times a file gets recompiled while checking intermediate results.)
Finally, I manually removed a broken and unused `Arbitrary` instance in `Hasura.RQL.Network`. (It used an `import Test.QuickCheck.Arbitrary as Q` statement, which was erroneously caught by the first find-replace command.)
After this PR, `Q` is no longer used as an import qualifier. That was not the goal of this PR, but perhaps it's a useful fact for future efforts.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5933
GitOrigin-RevId: 8c84c59d57789111d40f5d3322c5a885dcfbf40e
### Description
This PR attempts to fix several issues with source customization as it relates to remote relationships. There were several issues regarding casing: at the relationship border, we didn't properly set the target source's case, we didn't have access to the list of supported features to decide whether the feature was allowed or not, and we didn't have access to the global default.
However, all of that information is available when we build the schema cache, as we do resolve the case of some elements such as function names: we can therefore resolve source information at the same time, and simplify both the root of the schema and the remote relationship border.
To do this, this PR introduces a new type, `ResolvedSourceCustomization`, to be used in the Schema Cache, as opposed to the metadata's `SourceCustomization`, following a pattern established by a lot of other types.
### Remaining work and open questions
One major point of confusion: it seems to me that we didn't set the case at all across remote relationships, which would suggest we would use the case of the LHS source across the subset of the RHS one that is accessible through the remote relationship, which would in turn "corrupt" the parser cache and might result in the wrong case being used for that source later on. Is that assessment correct, and was I right to fix it?
Another one is that we seem not to be using the local case of the RHS to name the field in an object relationship; unless I'm mistaken we only use it for array relationships? Is that intentional?
This PR is also missing tests that would showcase the difference, and a changelog entry. To my knowledge, all the tests of this feature are in the python test suite; this could be the opportunity to move them to the hspec suite, but this might be a considerable amount of work?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5619
GitOrigin-RevId: 51a81b713a74575e82d9f96b51633f158ce3a47b
### Description
This PR moves some strictness annotations to a concrete use site, rather than putting `seq` in an helper function.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5436
GitOrigin-RevId: 1f279e05333ab80167ad2e18d09b8792eddc52c3
This introduces an `ErrorMessage` newtype which wraps `Text` in a manner which is designed to be easy to construct, and difficult to deconstruct.
It provides functionality similar to `Data.Text.Extended`, but designed _only_ for error messages. Error messages are constructed through `fromString`, concatenation, or the `toErrorValue` function, which is designed to be overridden for all meaningful domain types that might show up in an error message. Notably, there are not and should never be instances of `ToErrorValue` for `String`, `Text`, `Int`, etc. This is so that we correctly represent the value in a way that is specific to its type. For example, all `Name` values (from the _graphql-parser-hs_ library) are single-quoted now; no exceptions.
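A minimal sketch of the idea, with simplified definitions that are not the actual module contents:
```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE OverloadedStrings #-}

import Data.String (IsString)
import Data.Text (Text)

-- Easy to construct (fromString, (<>)), deliberately awkward to deconstruct.
newtype ErrorMessage = ErrorMessage {fromErrorMessage :: Text}
  deriving (Semigroup, IsString)

-- One instance per meaningful domain type; intentionally no instances for
-- Text, String, Int, etc., so each value is rendered in a type-specific way.
class ToErrorValue a where
  toErrorValue :: a -> ErrorMessage

newtype TableName = TableName Text

instance ToErrorValue TableName where
  toErrorValue (TableName t) = ErrorMessage ("'" <> t <> "'")
```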
I have mostly had to add `instance ToErrorValue` for various backend types (and also add newtypes where necessary). Some of these are not strictly necessary for this changeset, as I had bigger aspirations when I started. These aspirations have been tempered by trying and failing twice.
As such, in this changeset, I have started by introducing this type to the `parseError` and `parseErrorWith` functions. In the future, I would like to extend this to the `QErr` record and the various `throwError` functions, but this is a much larger task and should probably be done in stages.
For now, `toErrorMessage` and `fromErrorMessage` are provided for conversion to and from `Text`, but the intent is to stop exporting these once all error messages are converted to the new type.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5018
GitOrigin-RevId: 84b37e238992e4312255a87ca44f41af65e2d89a
This moves `MkTypename` and `NamingCase` into their own modules, with the intent of reducing the scope of the schema parsers code, and trying to reduce imports of large modules when small ones will do.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4978
GitOrigin-RevId: 19541257fe010035390f6183a4eaa37bae0d3ca1
### Description
This PR removes the need for the `SourceCache` when building the schema for the actions. To do so, it changes the way we represent custom types in the source cache. Instead of trying to reuse the same `ObjectTypeDefinition` and `TypeRelationship`, we now have separate `AnnotatedObjectType` and `AnnotatedRelationship` types. When building them, at schema cache building time, we persist all the relevant source information, so that it's all available at schema building time.
This PR makes no attempt at re-using `RemoteRelationship` primitives, to avoid having to change the way async action queries are executed, and to avoid having to make complicated changes to how we parse and represent those relationships.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4813
GitOrigin-RevId: 3cc65c5a043c8d3da5f7214eed40c558c4349327
Pretty much all quasi-quoted names in the server code base have ended up in `Hasura.GraphQL.Parser.Constants`. I'm now finding this unpleasant for two reasons:
1. I would like to factor out the parser code into its own Cabal package, and I don't want to have to expose all these names.
2. Most of them really have nothing to do with the parsers.
In order to remedy this, I have:
1. moved the names used by parser code to `Hasura.GraphQL.Parser.DirectiveName`, as they're all related to directives;
2. moved `Hasura.GraphQL.Parser.Constants` to `Hasura.Name`, changing the qualified import name from `G` to `Name`;
3. moved names only used in tests to the appropriate test case;
4. removed unused items from `Hasura.Name`; and
5. grouped related names.
Most of the changes are simply changing `G` to `Name`, which I find much more meaningful.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4777
GitOrigin-RevId: a77aa0aee137b2b5e6faec94495d3a9fbfa1348b
### Description
This small clean-up PR makes one further step towards backend-agnostic actions: it makes all the code parsing custom types backend agnostic. Surprisingly, this could be done *without* the need to finish generalizing the column parser. The remaining sore point is async queries, that still target Postgres explicitly.
In theory, this is enough to start allowing non-Postgres scalars in custom types. In practice, however:
- no other backend exposes scalars in a way that would allow users to do that as of this PR;
- we currently have no strategy to avoid / detect scalar collisions across backends.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4691
GitOrigin-RevId: bfe63fb131e306663d4406697ce23c02736566c5
When a user changes request options for a custom action through the console, the console sends a request to the server to test the webhook transform options and show a preview of the result. If the action uses an environment variable in its webhook URL, and there is no mock value for that variable in the action's sample context, then the user will see an error. This change expands the error message to explain what caused the error, and how to fix it.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4556
GitOrigin-RevId: 75b19bae17aac982c2bdfbd4417bd55923889f2f
## Description
Following on from #4572, this removes more dead code as identified by Weeder. Comments and thoughts similarly welcome!
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4587
GitOrigin-RevId: 73aa6a5a2833ee41d29b71fcd0a72ed19822ca73
(Work here originally done by awjchen, rebased and fixed up for merge by
jberryman)
This is part of a merge train towards GHC 9.2 compatibility. The main
issue is the use of the new abstract `KeyMap` in 2.0. See:
https://hackage.haskell.org/package/aeson-2.0.3.0/changelog
Alex's original work is here:
#4305
BEHAVIOR CHANGE NOTE: This change causes a different arbitrary ordering
of serialized Json, for example during metadata export. CLI users care
about this in particular, and so we need to call it out as a _behavior
change_ as we did in v2.5.0. The good news though is that after this
change ordering should be more stable (alphabetical key order).
See: https://hasurahq.slack.com/archives/C01M20G1YRW/p1654012632634389
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4611
Co-authored-by: awjchen <13142944+awjchen@users.noreply.github.com>
GitOrigin-RevId: 700265162c782739b2bb88300ee3cda3819b2e87
### Description
This PR is a first step in a series of cleanups of action relationships. This first step does not contain any behavioral change, and it simply reorganizes / prunes / rearranges / documents the code. Mainly:
- it divides some files in RQL.Types between metadata types, schema cache types, execution types;
- it renames some types for consistency;
- it minimizes exports and prunes unnecessary types;
- it moves some types in places where they make more sense;
- it replaces uses of `DMap BackendTag` with `BackendMap`.
Most of the "movement" within files re-organizes declarations in a "top-down" fashion, by moving all TH splices to the end of the file, which avoids order or declarations mattering.
### Optional list types
One main type change this PR makes is a replacement of variant list types in `CustomTypes.hs`; we had `Maybe [a]`, or sometimes `Maybe (NonEmpty a)`. This PR harmonizes all of them to `[a]`, as most of the code would use them as such, by doing `fromMaybe []` or `maybe [] toList`.
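A hypothetical before/after to illustrate (type and field names made up, not taken from `CustomTypes.hs`):
```haskell
import Data.Maybe (fromMaybe)

-- Before: an optional list field that every consumer immediately defaulted.
newtype ObjectTypeBefore = ObjectTypeBefore {otbArguments :: Maybe [String]}

argumentsBefore :: ObjectTypeBefore -> [String]
argumentsBefore = fromMaybe [] . otbArguments

-- After: a plain list, where "absent" is simply the empty list.
newtype ObjectTypeAfter = ObjectTypeAfter {otaArguments :: [String]}
```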
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4613
GitOrigin-RevId: bc624e10df587eba862ff27a5e8021b32d0d78a2
### Description
There were several functions in `GraphQL.Schema.Common` that were unrelated to the schema building process, and were about metadata manipulation or dependency computation. Having those functions in the schema part of the code forces several places in the code to depend on the schema code, despite being completely unrelated.
This PR moves those functions where they make sense: alongside similar functions in `RQL.Types.*`, and rewrites `getRemoteDependencies` for clarity (it was using the term "indirect dependency" in a way that was inconsistent with the rest of the code).
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4568
GitOrigin-RevId: 948a18cebbb337a8bb6367c1f2d2ef5628209d96
### Description
Several places in the code used `a /= []`, which is inelegant. To my surprise, hlint did not warn about this, despite the fact that it forces an `Eq` instance on the elements. This PR replaces all occurrences of that pattern with `not (null a)` and adds a lint warning for it.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4569
GitOrigin-RevId: 6471e75ade9e71e5d583a0dac7815c01870c696b
By generalizing the instances, they can be written as attached instance derivations, rather than standalone ones.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4518
GitOrigin-RevId: 7a387911cf6ad46fe6acd36648275d6c2c68ffe3
Previously, these were represented with a HashMap, but supposedly that map can never be empty. Now, it uses NEHashMap, which carries the non-empty invariant behind a smart constructor.
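A minimal sketch of the smart-constructor idea, with a simplified API (the real `NEHashMap` module is richer):
```haskell
import qualified Data.HashMap.Strict as HM

-- The constructor is kept abstract; building from a plain map goes through
-- 'fromHashMap', which rejects the empty map, so every value of this type is
-- non-empty by construction.
newtype NEHashMap k v = NEHashMap (HM.HashMap k v)

fromHashMap :: HM.HashMap k v -> Maybe (NEHashMap k v)
fromHashMap m
  | HM.null m = Nothing
  | otherwise = Just (NEHashMap m)
```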
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4481
GitOrigin-RevId: 93ad9aaa9354f25a1ba10e8207ae19614e1e439e
### Description
As part of the cache building process, we create / update / migrate the catalog that each DB uses as a place to store event trigger information. The function that decides how this should be done was doing an explicit `case ... of` on the backend tag, instead of delegating to one of the backend classes. The downsides of this is that:
- it adds a "friction point" where the backend matters in the core of the engine, which is otherwise written to be almost entirely backend-agnostic
- it creates imports from deep in the engine to the `Backends`, which we try to restrict to a very small set of clearly identified files (the `Instances` files)
- it is currently implemented using a "catch all" default case, which might not always be correct for new backends
This PR makes the catalog updating process a part of `BackendMetadata`, and cleans the corresponding schema cache code.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4457
GitOrigin-RevId: 592f0eaa97a7c38f4e6d4400e1d2353aab12c97e
## Description
As the name suggests, `DML.Internal` contains internal implementation details of RQL's DML. However, a lot of unrelated parts of the codebase still use some of the code it contains. This PR fixes this, and removes all imports of `RQL.DML.Internal` from outside of `RQL.DML`. Most of the time, this involves moving a function out of `DML.Internal` to an underlying module (see `getRolePermInfo`) or moving a function _back_ into it (see `checkRetCols`).
This PR also clarifies the situation with `withTyAnn` and `withTypeAnn` a bit by renaming the former to `withScalarTypeAnn` and moving them together. Worth noting: there might be a bug lurking in that function, as it doesn't seem to use the proper type annotations for some extension types!
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4380
GitOrigin-RevId: c8ae5b4e8378fefc0bcccf778d97813df727d3cb
## Description
This PR removes `RQL.Types`, which was now only re-exporting a bunch of unrelated modules.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4363
GitOrigin-RevId: 894f29a19bff70b3dad8abc5d9858434d5065417
## Description
This small PR moves all functions in `RQL.Types.hs` to better locations. Most `askX` functions are moved alongside the `unsafe` functions they use. Several other functions are moved closer to their call site. `MetadataM` is moved alongside `Metadata`. This PR also documents the `ask` functions.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4355
GitOrigin-RevId: 0498a7e8f98e7a94af911dd375cad84ace7ddffa
### Description
`HasSystemDefined` is defined in `RQL.Types`, but only used in one place, `LegacyCatalog`, to avoid passing a boolean around. It is easily replaced by an ad-hoc `ReaderT`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4337
GitOrigin-RevId: 649d758bb2b18b39533429dda5ab71afde62fb53
### Description
Small PR that moves code out of `RQL.Types.hs`. Specifically, it moves `HasServerConfigCtx` to where `ServerConfigCtx` is defined. This removes code from `RQL.Types`, makes the dependency on `Server.Types` more explicit, and will make some further cleanups easier.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4336
GitOrigin-RevId: 95bb3467d741763892c4e68a38760497157ba1aa
## Description
Some of the documentation/organizational changes I was putting into the suggestions for #3624 were a bit too convoluted for GitHub's suggestion interface, so I'm putting them here instead.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3910
Co-authored-by: Solomon <24038+solomon-b@users.noreply.github.com>
GitOrigin-RevId: 06e0cb08bd18e7f8b21452df0697cfd80bc56fde
### Motivation
#2338 introduced a way to validate REST queries against the metadata after a change, to properly report any inconsistency that would emerge from a change in the underlying structure of our schema. However, the way this was done was quite complex and error-prone. Namely: we would use the generated schema parsers to statically execute an introspection query, similar to the one we use for remote schemas, then parse the resulting bytestring as if it were coming from a remote schema.
This led to several issues: the code was using remote schema primitives, and was associated with remote schema code, despite being unrelated, which led to absurd situations like creating fake `Variable`s whose type was also their name. A lot of the code had to deal with the fact that we might fail to re-parse our own schema. Additionally, some of it was dead code that, for some reason, GHC did not warn about. But more fundamentally, this architecture decision creates a dependency between unrelated pieces of the engine: modifying the internal processing of root fields or the introspection of remote schemas now risks impacting the unrelated `OpenAPI` feature.
### Description
This PR decouples that process from the remote schema introspection logic and from the execution engine by making `Analyse` and `OpenAPI` work on the generic `G.SchemaIntrospection` instead. To accomplish this, it:
- adds `GraphQL.Parser.Schema.Convert`, to convert from our "live" schema back to a flat `SchemaIntrospection`
- persists in the schema cache the `admin` introspection generated when building the schema, and uses it both for validation and for generating the `OpenAPI`.
### Known issues and limitations
This adds a bit of memory pressure to the engine, as we persist the entire schema in the schema cache. This might be acceptable in the short-term, but we have several potential ideas going forward should this be a problem:
- cache the result of `Analyze`: when it becomes possible to build the `OpenAPI` purely with the result of `Analyze` without any additional schema information, then we could cache that instead, reducing the footprint
- caching the `OpenAPI`: if it doesn't need to change every time the endpoint is queried, then it should be possible to cache the entire `OpenAPI` object instead of the schema
- cache a copy of the `FieldParsers` used to generate the schema: as those are persisted through the GraphQL `Context`, and are the only input required to generate the `Schema`, making them accessible in the schema cache would allow us to have the exact same feature with no additional memory cost, at the price of a slightly slower and more complicated process (need to rebuild the `Schema` every time we query the OpenAPI endpoint)
- cache nothing at all, and rebuild the admin schema from scratch every time.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3962
Co-authored-by: paritosh-08 <85472423+paritosh-08@users.noreply.github.com>
GitOrigin-RevId: a8b9808170b231fdf6787983b4a9ed286cde27e0
### Description
This is it! This PR enables the Metadata API for remote relationships from remote schemas, adds tests, ~~adds documentation~~, adds an entry to the Changelog. This is the release PR that enables the feature.
### Checklist
- [ ] Tests:
- [x] RS-to-Postgres (high level)
- [x] RS-to-RS (high level)
- [x] From RS specifically (testing for edge cases)
- [x] Metadata API tests
- [ ] Unit testing the actual engine?
- [x] Changelog entry
- [ ] Documentation?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3974
Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
Co-authored-by: Vishnu Bharathi <4211715+scriptnull@users.noreply.github.com>
Co-authored-by: jkachmar <8461423+jkachmar@users.noreply.github.com>
GitOrigin-RevId: c9aebf12e6eebef8d264ea831a327b968d4be9d2
### Description
This small PR improves the representation of an endpoint method from `Text` to an enum of the supported methods. Additionally, it cleans some of the instances defined on surrounding types (such as Postgres-specific instances on Endpoint types).
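A plausible shape for such an enum, with a constructor list that is illustrative rather than taken from the PR:
```haskell
import Data.Text (Text)
import qualified Data.Text as T

-- Representing the endpoint method as a closed enum instead of free-form Text.
data EndpointMethod = GET | POST | PUT | DELETE | PATCH
  deriving (Show, Eq, Ord, Enum, Bounded)

renderMethod :: EndpointMethod -> Text
renderMethod = T.pack . show
```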
Due to a name conflict, this makes `RQL.Types.Endpoint` impossible to re-export from `RQL.Types`, which in turn forces several other modules to import it explicitly, which I think is fine since we want to ultimately get rid of `RQL.Types`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3965
GitOrigin-RevId: 33869007d0d818ddf486fb61d1f6099f9dad7570
…rmance
It makes sense to try to utilize multiple threads for metadata
operations since we expect them to come one at a time (and likely at
lower load periods anyway).
As noted, although we build roles in parallel now, the admin role is
still a bottleneck. For replace_metadata on huge_schema, on my machine
I get:
BEFORE: 22.7 sec
AFTER: 13.5 sec
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3911
GitOrigin-RevId: 4d4ee6ac8b5506603e70e4fc666a3aacc054d493
### Description
Several libraries define `catMaybes` as `mapMaybe id`. We had it defined in `Data.HashMap.Strict.Extended` already. This small PR also defines it in `Extended` modules for other containers and replaces every occurrence of `mapMaybe id` accordingly.
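For instance, the `HashMap` variant amounts to the following (sketch):
```haskell
import qualified Data.HashMap.Strict as HM

-- 'catMaybes' for a container is just 'mapMaybe id', here specialized to HashMap.
catMaybes :: HM.HashMap k (Maybe v) -> HM.HashMap k v
catMaybes = HM.mapMaybe id
```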
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3884
GitOrigin-RevId: d222a2ca2f4eb9b725b20450a62a626d3886dbf4