## Description
This PR updates the JWK refresh thread to poll every second, instead of the previous behaviour where the thread slept until the expiry time given by the `Cache-Control`/`Expires` response headers.
## Motivation
As part of dynamically updating environment variables on cloud without restarting user projects, we want a mechanism that makes HGE aware of any changes in the user configuration by updating a shared variable that the relevant threads/core functionality can read before executing.
This requires the threads to poll, so that any change in the user config is picked up before they run and the appropriate behaviour is triggered. The JWK update thread used to sleep for the duration given by the `Cache-Control` or `Expires` headers, which left it unaware of any changes to the user config during that period, hence requiring a restart to propagate the new changes.
To solve this, the JWK update thread now polls every second for a change in `AuthMode` (read from a shared variable in subsequent changes implementing the dynamic env var update feature) and updates the JWK accordingly, so that it never uses stale configuration and works without an HGE restart.
### Related Issues
https://hasurahq.atlassian.net/browse/GS-300
### Solution and Design
- We store the expiry time in the `JWTCtx`
- On every poll, check whether the current time exceeds the expiry time; if so, call the JWK URL to fetch a new JWK and expiry.
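A minimal sketch of that loop, with simplified stand-in types (`JwkCtx`, `fetchJwk`, and the field names are illustrative, not the engine's actual `JWTCtx` API):
```haskell
import Control.Concurrent (threadDelay)
import Control.Monad (forever, when)
import Data.IORef (IORef, readIORef, writeIORef)
import Data.Time.Clock (UTCTime, getCurrentTime)

-- Stand-in for the engine's JWK set type.
newtype JwkSet = JwkSet [String]

-- Simplified context: the expiry derived from Cache-Control/Expires is kept
-- alongside the key set (the real field lives in JWTCtx).
data JwkCtx = JwkCtx
  { jcExpiry :: Maybe UTCTime
  , jcKeySet :: JwkSet
  }

-- Poll every second; refetch only once the stored expiry has passed.
jwkRefreshLoop :: IORef JwkCtx -> IO (JwkSet, Maybe UTCTime) -> IO ()
jwkRefreshLoop ctxRef fetchJwk = forever $ do
  now <- getCurrentTime
  ctx <- readIORef ctxRef
  when (maybe True (<= now) (jcExpiry ctx)) $ do
    (keySet, expiry) <- fetchJwk
    writeIORef ctxRef (JwkCtx expiry keySet)
  threadDelay 1000000 -- one second between polls, so config changes are seen promptly
```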
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7177
Co-authored-by: Krushan Bauva <31391329+krushanbauva@users.noreply.github.com>
Co-authored-by: Anon Ray <616387+ecthiender@users.noreply.github.com>
GitOrigin-RevId: bc1e44a8c3823d7554167a7f01c3ce085646cedb
Codecs for event triggers, including webhook transforms. These are not hooked into the higher-up table metadata codec yet because some backend implementations implement event triggers with `error` which causes an error when codecs are evaluated. I plan to follow up with another PR to resolve that.
Ticket: https://hasurahq.atlassian.net/browse/GDC-585
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7237
GitOrigin-RevId: 8ce40fe6fedcf8b109d6ca50a505333df855a8ce
context: This is foundation work, before we change how the server chooses to compress or not
part of effort: #5518
-----
Prior to this change it was difficult to understand how the functionality in this module related to the semantics of Accept-Encoding. We also didn't correctly handle directives with qvalues.
After this change certain technical infelicities are called out without modifying the behavior of the server; for instance we continue to fall back to identity (no compression) in the case where technically we're supposed to return 406, and we also continue to treat `*` conservatively as meaning “use no compression”.
The only externally visible change is that `gzip;q=x.y` now results in a gzipped response.
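To illustrate the qvalue syntax involved, here is a minimal parsing sketch (purely illustrative; not this module's implementation):
```haskell
import Data.Text (Text)
import qualified Data.Text as T
import qualified Data.Text.Read as T

-- Split an Accept-Encoding header into (coding, qvalue) pairs, defaulting a
-- missing qvalue to 1.0 per RFC 7231.
parseAcceptEncoding :: Text -> [(Text, Double)]
parseAcceptEncoding header =
  [ parseDirective directive | directive <- T.splitOn "," header ]
  where
    parseDirective directive =
      case map T.strip (T.splitOn ";" directive) of
        (coding : params) -> (coding, qValueFrom params)
        [] -> ("", 1.0)
    qValueFrom params =
      case [v | p <- params, Just v <- [T.stripPrefix "q=" p]] of
        (v : _) | Right (q, _) <- T.double v -> q
        _ -> 1.0

-- parseAcceptEncoding "gzip;q=0.5, identity" == [("gzip", 0.5), ("identity", 1.0)]
```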
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7213
GitOrigin-RevId: 1910ffd70d29f1ab8825c601f1bd998be70ceeeb
## Description
This PR adds support for generating query params directly from a Kriti template, which can also be used to flatten a list of parameter arguments.
### Changes in the Metadata API
Earlier, the `query_params` key inside `request_transform` took an object of key/value pairs, where the key is the query parameter name and the value is either the parameter's value or a Kriti template that resolves to it.
With this PR, the user has more freedom to generate the complete query string using a Kriti template: `query_params` can now also take a string, which is treated as a Kriti template. This change needs to be incorporated into the console and CLI metadata import/export as well.
- [x] CLI: Compatible, no changes required
- [ ] Console
## Changelog
__Component__ : server
__Type__: feature
__Product__: community-edition
### Short Changelog
use a Kriti template to generate query params from a list of arguments
### Related Issues
https://hasurahq.atlassian.net/browse/GS-243
### Solution and Design
We use a kriti template to generate the complete query parameter string.
| Query Template | Output |
|---|---|
| `{{ concat ([concat({{ range _, x := ["apple", "banana"] }} "tags={{x}}&" {{ end }}), "flag=smthng"]) }}` | `tags=apple&tags=banana&flag=smthng` |
| `{{ concat (["tags=", concat({{ range _, x := $body.input }} "{{x}}," {{ end }})]) }}` | `tags=apple%2Cbanana%2C` |
### Steps to test and verify
- start HGE and make the following request to `http://localhost:8080/v1/metadata`:
```json
{
"type": "test_webhook_transform",
"args": {
"webhook_url": "http://localhost:3000",
"body": {
"action": {
"name": "actionName"
},
"input": ["apple", "banana"]
},
"request_transform": {
"version": 2,
"url": "{{$base_url}}",
"query_params": "{{ concat ([concat({{ range _, x := $body.input }} \"tags={{x}}&\" {{ end }}), \"flag=smthng\"]) }}",
"template_engine": "Kriti"
}
}
}
```
- you should receive the following as output:
```json
{
"body": {
"action": {
"name": "actionName"
},
"input": [
"apple",
"banana"
]
},
"headers": [],
"method": "GET",
"webhook_url": "http://localhost:3000?tags=apple&tags=banana&flag=smthng"
}
```
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6961
Co-authored-by: Tirumarai Selvan <8663570+tirumaraiselvan@users.noreply.github.com>
GitOrigin-RevId: 712ba038f03009edc3e8eb0435e723304943399a
We currently have a fairly intricate way of running our PostgreSQL and MSSQL integration tests (not the API tests). By splitting them out, we can simplify this a lot. Most prominently, we can rely on Cabal to be our argument parser instead of writing our own.
We can also simplify how they're run in CI. They are currently (weirdly) run alongside the Python integration tests. This breaks them out into their own jobs for better visibility, and to avoid conflating the two.
The changes are as follows:
- The "unit" tests that rely on a running PostgreSQL database are extracted out to a new test directory so they can be run separately.
- Most of the `Main` module comes with them.
- We now refer to these as "integration" tests instead.
- Likewise for the "unit" tests that rely on a running MS SQL Server database. These are a little simpler and we can use `hspec-discover`, with a `SpecHook` to extract the connection string from an environment variable.
- Henceforth, these are the MS SQL Server integration tests.
- New CI jobs have been added for each of these.
- There wasn't actually a job for the MS SQL Server integration tests. It's pretty amazing they still run well.
- The "haskell-tests" CI job, which used to run the PostgreSQL integration tests, has been removed.
- The makefiles and contributing guide have been updated to run these.
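A hedged sketch of the `SpecHook` approach from the MS SQL Server item above (the module layout and environment variable name are assumptions, not necessarily what the repository uses):
```haskell
module SpecHook (hook) where

import System.Environment (lookupEnv)
import Test.Hspec

-- Read the connection string once and feed it to every discovered spec
-- (the discovered specs would then have type `SpecWith String`).
hook :: SpecWith String -> Spec
hook = beforeAll $ do
  maybeConnString <- lookupEnv "HASURA_MSSQL_CONN_STRING"
  maybe (fail "HASURA_MSSQL_CONN_STRING is not set") pure maybeConnString
```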
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6912
GitOrigin-RevId: 67bbe2941bba31793f63d04a9a693779d4463ee1
### Description
This monster of a PR took way too long. As the title suggests, it reduces the schema context carried in the readers to the very strict minimum. In practice, that means that to build a source, we only require:
- the global `SchemaContext`
- the global `SchemaOptions` (soon to be renamed `SchemaSourceOptions`)
- that source's `SourceInfo`
Furthermore, _we no longer carry "default" customization options throughout the schema_. All customization information is extracted from the `SourceInfo`, when required. This prevents an entire category of bugs we had previously encountered, such as parts of the code using uninitialized / unupdated customization info.
In turn, this meant that we could remove the explicit threading of the `SourceInfo` throughout the schema, since it is now always available through the reader context.
Finally, this meant making a few adjustments to relay and actions as well, such as the introduction of a new separate "context" for actions, and a change to how we create some of the action-specific postgres scalar parsers.
I'll highlight with review comments the areas of interest.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6709
GitOrigin-RevId: ea80fddcb24e2513779dd04b0b700a55f0028dd1
`CollectedInfo` was just an awkward sum type. By using an explicit `Either` instead, we can guarantee at the type level that certain methods only write inconsistencies, or only write dependencies. This is useful, because if we can guarantee that no dependencies are written, then we don't need to run `resolveDependencies` on that part of the Metadata. In other words, we can keep it out of `BuildOutputs`, which greatly benefits performance - see e.g. hasura/graphql-engine-mono#6613.
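A minimal sketch of the idea in plain `Writer` terms (the names are simplified stand-ins; the real code collects these values through arrow-based writers):
```haskell
import Control.Monad.Writer (MonadWriter, tell)
import Data.Sequence (Seq, singleton)

-- Hypothetical, simplified stand-ins for the real metadata types.
newtype Inconsistency = Inconsistency String
newtype Dependency = Dependency String

-- A function polymorphic in the Right side can, by construction, only ever
-- emit inconsistencies...
recordInconsistency :: MonadWriter (Seq (Either Inconsistency dep)) m => Inconsistency -> m ()
recordInconsistency = tell . singleton . Left

-- ...and one polymorphic in the Left side can only emit dependencies, which
-- is exactly the type-level guarantee described above.
recordDependency :: MonadWriter (Seq (Either inc Dependency)) m => Dependency -> m ()
recordDependency = tell . singleton . Right
```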
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6765
GitOrigin-RevId: 9ce099d2eee2278dbb6e5bea72063e4b6e064b35
This upgrades the version of Ormolu required by the HGE repository to v0.5.0.1, and reformats all code accordingly.
Ormolu v0.5 reformats code that uses infix operators. This is mostly useful, adding newlines and indentation to make it clear which operators are applied first, but in some cases, it's unpleasant. To make this easier on the eyes, I had to do the following:
* Add a few fixity declarations (search for `infix`)
* Add parentheses to make precedence clear, allowing Ormolu to keep everything on one line
* Rename `relevantEq` to `(==~)` in #6651 and set it to `infix 4`
* Add a few _.ormolu_ files (thanks to @hallettj for helping me get started), mostly for Autodocodec operators that don't have explicit fixity declarations
In general, I think these changes are quite reasonable. They mostly affect indentation.
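For illustration, the kind of fixity declaration involved (the operator body here is a stand-in; the real `(==~)` implements "relevant equality" in the engine):
```haskell
-- Declare the operator's fixity to match (==), so operator chains are
-- unambiguous to format.
infix 4 ==~

(==~) :: Eq a => a -> a -> Bool
(==~) = (==) -- stand-in body for the example
```
Ormolu uses fixity information to decide how to break operator chains across lines; declaring a fixity like this is what lets it keep such chains on one line.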
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6675
GitOrigin-RevId: cd47d87f1d089fb0bc9dcbbe7798dbceedcd7d83
## Description
This PR allows DC agents to define custom aggregate functions for their scalar types.
### Related Issues
GDC-189
### Solution and Design
We added a new property `aggregate_functions` to the scalar types capabilities. This allows the agent author to specify a set of aggregate functions supported by each scalar type, along with the function's result type.
During GraphQL schema generation, the custom aggregate functions are available via a new method `getCustomAggregateOperators` on the `Backend` type class.
Custom functions are merged with the builtin aggregate functions when building GraphQL schemas for table aggregate fields and for `order_by` operators on array relations.
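A rough sketch of the shape this describes, with simplified stand-in types (the real capabilities and the `Backend` class method use the engine's own type families, so none of these names are the actual API):
```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.HashMap.Strict (HashMap)
import qualified Data.HashMap.Strict as HashMap
import Data.Text (Text)

-- Illustrative aliases; the agent API uses richer types for these.
type ScalarTypeName = Text
type AggregateFunctionName = Text

-- For each scalar type, each custom aggregate function maps to its result type.
type AggregateFunctions = HashMap AggregateFunctionName ScalarTypeName

exampleCapabilities :: HashMap ScalarTypeName AggregateFunctions
exampleCapabilities =
  HashMap.fromList
    [ ("Number", HashMap.fromList [("sum", "Number"), ("max", "Number")])
    , ("String", HashMap.fromList [("longest", "String")])
    ]
```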
### Steps to test and verify
- Codec tests for aggregate function capabilities have been added to the unit tests.
- Some custom aggregate operators have been added to the reference agent and are used in a new test in `api-tests`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6199
GitOrigin-RevId: e9c0d1617af93847c1493671fdbb794f573bde0c
This installs the ODBC Driver 18 for SQL Server in all our shipped Docker images, and updates our tests and documentation accordingly.
This version supports arm64, and therefore can run natively (or via Docker) on macOS on aarch64.
`msodbcsql17` is still installed in production-targeted Docker images so that users do not _have_ to migrate to the new driver.
Nix expressions are packaged for the new driver, as it is not yet available in nixpkgs.
In this version, [the default encryption setting was changed from "no" to "yes"](https://techcommunity.microsoft.com/t5/sql-server-blog/odbc-driver-18-0-for-sql-server-released/ba-p/3169228). In addition, "mandatory" and "optional" were added as synonyms for "yes" and "no" respectively.
I have therefore modified all connection strings in tests to specify `Encrypt=optional` (and changed some from `Encrypt=no`). I chose "optional" rather than "no" because I feel it's more honest; these connection strings will work with or without an encrypted connection.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6241
GitOrigin-RevId: 959f88dd1f271ef06a3616bc46b358f364f6cdfd
The main aim of the PR is:
1. To set up a module structure for 'remote-schemas' package.
2. Move parts of the remote schema codebase into the new module structure to validate it.
## Notes to the reviewer
Why a PR with large-ish diff?
1. We've been making progress on the MM project but we don't yet know how long it is going to take us to get to the first milestone. To understand this better, we need to figure out the unknowns as soon as possible. Hence I've taken a stab at the first two items in the [end-state](https://gist.github.com/0x777/ca2bdc4284d21c3eec153b51dea255c9) document to figure out the unknowns. Unsurprisingly, there are a bunch of issues that we haven't discussed earlier. These are documented in the 'open questions' section.
1. The diff is large, but it is only code moved around, and I've added a section that documents how things are moved. In addition, there are a fair number of PR comments to help with the review process.
## Changes in the PR
### Module structure
Sets up the module structure as follows:
```
Hasura/
RemoteSchema/
Metadata/
Types.hs
SchemaCache/
Types.hs
Permission.hs
RemoteRelationship.hs
Build.hs
MetadataAPI/
Types.hs
Execute.hs
```
### 1. Types representing metadata are moved
Types that capture metadata information (currently scattered across several RQL modules) are moved into `Hasura.RemoteSchema.Metadata.Types`.
- This new module only depends on very 'core' modules such as
`Hasura.Session` for the notion of roles and `Hasura.Incremental` for `Cacheable` typeclass.
- The requirement on database modules is avoided by generalizing the remote schemas metadata to accept an arbitrary 'r' for a remote relationship
definition.
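A hedged sketch of that parameterization (field names and the simplified `RemoteSchemaName` are illustrative; the real type carries more fields):
```haskell
import Data.Text (Text)

newtype RemoteSchemaName = RemoteSchemaName Text
  deriving (Eq, Show)

-- Generic in the remote relationship definition 'r', so this module needs no
-- database-specific types; graphql-engine instantiates 'r' later.
data RemoteSchemaMetadataG r = RemoteSchemaMetadataG
  { _rsmName :: RemoteSchemaName
  , _rsmRemoteRelationships :: [r]
  }
  deriving (Eq, Show)
```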
### 2. SchemaCache related types and build logic have been moved
Types that represent remote schemas information in SchemaCache are moved into `Hasura.RemoteSchema.SchemaCache.Types`.
Similar to `H.RS.Metadata.Types`, this module depends on 'core' modules except for `Hasura.GraphQL.Parser.Variable`. It has something to do with remote relationships but I haven't spent time looking into it. The validation of 'remote relationships to remote schema' is also something that needs to be looked at.
Rips out the logic that builds remote schema's SchemaCache information from the monolithic `buildSchemaCacheRule` and moves it into `Hasura.RemoteSchema.SchemaCache.Build`. Further, the `.SchemaCache.Permission` and `.SchemaCache.RemoteRelationship` have been created from existing modules that capture schema cache building logic for those two components.
This was a fair amount of work. On main, currently remote schema's SchemaCache information is built in two phases - in the first phase, 'permissions' and 'remote relationships' are ignored and in the second phase they are filled in.
While remote relationships can only be resolved after partially resolving sources and other remote schemas, the same isn't true for permissions. Further, most of the work that is done to resolve remote relationships can be moved to the first phase so that the second phase can be a very simple traversal.
This is the approach that was taken - resolve permissions, and as much of the remote relationships information as possible, in the first phase.
### 3. Metadata APIs related types and build logic have been moved
The types that represent remote schema related metadata APIs and the execution logic have been moved to `Hasura.RemoteSchema.MetadataAPI.Types` and `.Execute` modules respectively.
## Open questions:
1. `Hasura.RemoteSchema.Metadata.Types` is so called because I was hoping that all of the metadata related APIs of remote schema can be brought in at `Hasura.RemoteSchema.Metadata.API`. However, as metadata APIs depended on functions from the `SchemaCache` module (see [1](ceba6d6226/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs (L55)) and [2](ceba6d6226/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs (L91))), it made more sense to create a separate top-level module for `MetadataAPI`s.
Maybe we can just have `Hasura.RemoteSchema.Metadata` and get rid of the extra nesting or have `Hasura.RemoteSchema.Metadata.{Core,Permission,RemoteRelationship}` if we want to break them down further.
1. `buildRemoteSchemas` in `H.RS.SchemaCache.Build` has the following type:
```haskell
buildRemoteSchemas ::
( ArrowChoice arr,
Inc.ArrowDistribute arr,
ArrowWriter (Seq CollectedInfo) arr,
Inc.ArrowCache m arr,
MonadIO m,
HasHttpManagerM m,
Inc.Cacheable remoteRelationshipDefinition,
ToJSON remoteRelationshipDefinition,
MonadError QErr m
) =>
Env.Environment ->
( (Inc.Dependency (HashMap RemoteSchemaName Inc.InvalidationKey), OrderedRoles),
[RemoteSchemaMetadataG remoteRelationshipDefinition]
)
`arr` HashMap RemoteSchemaName (PartiallyResolvedRemoteSchemaCtxG remoteRelationshipDefinition, MetadataObject)
```
Note the dependence on `CollectedInfo` which is defined as
```haskell
data CollectedInfo
= CIInconsistency InconsistentMetadata
| CIDependency
MetadataObject
-- ^ for error reporting on missing dependencies
SchemaObjId
SchemaDependency
deriving (Eq)
```
This pretty much means that the remote schema code depends on types from databases, actions, and so on.
How do we fix this? Maybe introduce a typeclass such as `ArrowCollectRemoteSchemaDependencies` which is defined in `Hasura.RemoteSchema` and then implemented in graphql-engine?
1. The dependency on `buildSchemaCacheFor` in `.MetadataAPI.Execute` which has the following signature:
```haskell
buildSchemaCacheFor ::
(QErrM m, CacheRWM m, MetadataM m) =>
MetadataObjId ->
MetadataModifier ->
m ()
```
This can be easily resolved if we restrict what the metadata APIs are allowed to do. Currently, they have unfettered access to modify the SchemaCache (the `CacheRWM` constraint):
```haskell
runAddRemoteSchema ::
( QErrM m,
CacheRWM m,
MonadIO m,
HasHttpManagerM m,
MetadataM m,
Tracing.MonadTrace m
) =>
Env.Environment ->
AddRemoteSchemaQuery ->
m EncJSON
```
If this is instead changed so that remote schema APIs can only modify remote schema metadata (while still having access to the remote schemas part of the schema cache), this dependency is completely removed.
```haskell
runAddRemoteSchema ::
( QErrM m,
MonadIO m,
HasHttpManagerM m,
MonadReader RemoteSchemasSchemaCache m,
MonadState RemoteSchemaMetadata m,
Tracing.MonadTrace m
) =>
Env.Environment ->
AddRemoteSchemaQuery ->
m RemoteSchemaMetadataObjId
```
The idea is that the core graphql-engine would call these functions and then call
`buildSchemaCacheFor`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6291
GitOrigin-RevId: 51357148c6404afe70219afa71bd1d59bdf4ffc6
This upgrades CI and anyone using Nix to HLint v3.4.1.
If you're not using Nix, this doesn't actually _do_ anything on your
local machine; it's just a suggestion.
It also applies a bunch of simple HLint refactors, using
`make lint-hs-fix`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6324
GitOrigin-RevId: de8267e4909d6dcd3f83543188517f3aaeebc5f3
I didn't track why these were left behind. Presumably GHC 9.2 has an improved redundant constraint checker, so that explains a few. Otherwise, perhaps code got refactored along the way.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6256
GitOrigin-RevId: b6275edf3e867f8e33bdec533ce9932381d36bbb
- Remove `onJust` in favor of the more general `for_`
- Remove `withJust` which was used only once
- Remove `hashNub` in favor of `Ord`-based `uniques`
- Simplify some of the implementations in `Hasura.Prelude`
- Add `hlint` hint from `maybe True` to `all`, and `maybe False` to `any`
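The equivalences behind these removals and hints, illustrated with toy functions (this relies only on the `Foldable` instance for `Maybe`, not on engine code):
```haskell
import Data.Foldable (for_)

-- onJust m f ran an action only when m was Just; for_ already does that:
printIfPresent :: Maybe Int -> IO ()
printIfPresent m = for_ m print

-- The new hlint hints rely on the Foldable instance for Maybe:
allEven :: Maybe Int -> Bool
allEven = all even   -- equivalent to: maybe True even

anyEven :: Maybe Int -> Bool
anyEven = any even   -- equivalent to: maybe False even
```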
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6173
GitOrigin-RevId: 2c6ebbe2d04f60071d2a53a2d43c6d62dbc4b84e
Result of executing the following commands:
```shell
# replace "as Q" imports with "as PG" (in retrospect this didn't need a regex)
git grep -lE 'as Q($|[^a-zA-Z])' -- '*.hs' | xargs sed -i -E 's/as Q($|[^a-zA-Z])/as PG\1/'
# replace " Q." with " PG."
git grep -lE ' Q\.' -- '*.hs' | xargs sed -i 's/ Q\./ PG./g'
# replace "(Q." with "(PG."
git grep -lE '\(Q\.' -- '*.hs' | xargs sed -i 's/(Q\./(PG./g'
# ditto, but for [, |, { and !
git grep -lE '\[Q\.' -- '*.hs' | xargs sed -i 's/\[Q\./\[PG./g'
git grep -l '|Q\.' -- '*.hs' | xargs sed -i 's/|Q\./|PG./g'
git grep -l '{Q\.' -- '*.hs' | xargs sed -i 's/{Q\./{PG./g'
git grep -l '!Q\.' -- '*.hs' | xargs sed -i 's/!Q\./!PG./g'
```
(Doing the `grep -l` before the `sed`, instead of `sed` on the entire codebase, reduces the number of `mtime` updates, and so reduces how many times a file gets recompiled while checking intermediate results.)
Finally, I manually removed a broken and unused `Arbitrary` instance in `Hasura.RQL.Network`. (It used an `import Test.QuickCheck.Arbitrary as Q` statement, which was erroneously caught by the first find-replace command.)
After this PR, `Q` is no longer used as an import qualifier. That was not the goal of this PR, but perhaps it's a useful fact for future efforts.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5933
GitOrigin-RevId: 8c84c59d57789111d40f5d3322c5a885dcfbf40e
### Description
This PR attempts to fix several issues with source customization as it relates to remote relationships. There were several issues regarding casing: at the relationship border, we didn't properly set the target source's case, we didn't have access to the list of supported features to decide whether the feature was allowed or not, and we didn't have access to the global default.
However, all of that information is available when we build the schema cache, as we do resolve the case of some elements such as function names: we can therefore resolve source information at the same time, and simplify both the root of the schema and the remote relationship border.
To do this, this PR introduces a new type, `ResolvedSourceCustomization`, to be used in the Schema Cache, as opposed to the metadata's `SourceCustomization`, following a pattern established by a lot of other types.
### Remaining work and open questions
One major point of confusion: it seems to me that we didn't set the case at all across remote relationships, which would suggest that we used the case of the LHS source for the subset of the RHS source that is accessible through the remote relationship, which in turn would "corrupt" the parser cache and might result in the wrong case being used for that source later on. Is that assessment correct, and was I right to fix it?
Another one is that we seem not to be using the local case of the RHS to name the field in an object relationship; unless I'm mistaken we only use it for array relationships? Is that intentional?
This PR is also missing tests that would showcase the difference, and a changelog entry. To my knowledge, all the tests of this feature are in the python test suite; this could be the opportunity to move them to the hspec suite, but this might be a considerable amount of work?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5619
GitOrigin-RevId: 51a81b713a74575e82d9f96b51633f158ce3a47b
This abstracts `CircularT`'s test cases to work against "any" memoizer, and then runs them against `MemoizeT` as well.
Surprisingly (or not), this works without issue; `MemoizeT` passes all tests with a couple of extra instances.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5780
GitOrigin-RevId: 461880caf9220dc3f52d622a22e8b8bcd594e404
### Description
This PR changes all the schema code to operate in a specific `SchemaT` monad, rather than in an arbitrary `m` monad. `SchemaT` is intended to be used opaquely with `runSourceSchema` and `runRemoteSchema`. The main goal of this is to allow a different reader context per part of the schema: this PR also minimizes the contexts. This means that we no longer require `SchemaOptions` when building remote schemas' schema, and this PR therefore removes a lot of dummy / placeholder values accordingly.
### Performance and stacking
This PR has been through several iterations. #5339 was the original version, that accomplished the same thing by stacking readers on top of the stack at every remote relationship boundary. This raised performance concerns, and @0x777 confirmed with an ad-hoc test that in some extreme cases we could see up to a 10% performance impact. This version, while more verbose, allows us to unstack / re-stack the readers, and avoid that problem. #5517 adds a new benchmark set to be able to automatically measure this on every PR.
### Remaining work
- [x] a comment (or perhaps even a Note?) should be added to `SchemaT`
- [x] we probably want for #5517 to be merged first so that we can confirm the lack of performance penalty
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5458
GitOrigin-RevId: e06b83d90da475f745b838f1fd8f8b4d9d3f4b10
This removes string interpolation from quasiquoted literals. We only use
this in one place and it's totally unnecessary.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5750
GitOrigin-RevId: 3493a11db6347332e7e3721a7dca616947505be6
This PR expands the OpenAPI specification generated for metadata to include separate definitions for `SourceMetadata` for each native database type, and for DataConnector.
For the most part the changes add `HasCodec` implementations, and don't modify existing code otherwise.
The generated OpenAPI spec can be used to generate TypeScript definitions that distinguish different source metadata types based on the value of the `kind` property. There is a problem: because the specified `kind` value for a data connector source is any string, when TypeScript gets a source with a `kind` value of, say, `"postgres"`, it cannot unambiguously determine whether the source is postgres, or a data connector. For example,
```ts
function consumeSourceMetadata(source: SourceMetadata) {
if (source.kind === "postgres" || source.kind === "pg") {
// At this point TypeScript infers that `source` is either an instance
// of `PostgresSourceMetadata`, or `DataconnectorSourceMetadata`. It
// can't narrow further.
source
}
if (source.kind === "something else") {
// TypeScript infers that this `source` must be an instance of
// `DataconnectorSourceMetadata` because `source.kind` does not match
// any of the other options.
source
}
}
```
The simplest way I can think of to fix this would be to add a boolean property to the `SourceMetadata` type along the lines of `isNative` or `isDataConnector`. This could be a field that only exists in serialized data, like the metadata version field. The combination of one of the native database names for `kind`, and a true value for `isNative` would be enough for TypeScript to unambiguously distinguish the source kinds.
But note that in the current state TypeScript is able to reference the short `"pg"` name correctly!
~~Tests are not passing yet due to some discrepancies in DTO serialization vs existing Metadata serialization. I'm working on that.~~
The placeholders that I used for table and function metadata are not compatible with the ordered JSON serialization in use. I think the best solution is to write compatible codecs for those types in another PR. For now I have disabled some DTO tests for this PR.
Here are the generated [OpenAPI spec](https://github.com/hasura/graphql-engine-mono/files/9397333/openapi.tar.gz) based on these changes, and the generated [TypeScript client code](https://github.com/hasura/graphql-engine-mono/files/9397339/client-typescript.tar.gz) based on that spec.
Ticket: [MM-66](https://hasurahq.atlassian.net/browse/MM-66)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5582
GitOrigin-RevId: e1446191c6c832879db04f129daa397a3be03f62
It's about time.
To do this I had to check a few more boxes.
* I copied the flags from `graphql-engine.cabal` to the libraries in `server/lib`.
* I moved `Cacheable` instances of schema parser types beside the typeclass declaration.
* I removed imports of `Hasura.Prelude` from the tests, and rewrote them accordingly.
* I copied the `TestMonad` parse monad into `server/src-test/Hasura/GraphQL/Schema/RemoteTest.hs`, which was using it. I think this could be done with the real thing, but I tried replacing it with constraints and it messed with my head somewhat.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5311
GitOrigin-RevId: ebebcc50a16f2d517b7f730fe72410827ca3e86c
Followup to hasura/graphql-engine-mono#4713.
The `memoizeOn` method, part of `MonadSchema`, originally had the following type:
```haskell
memoizeOn
:: (HasCallStack, Ord a, Typeable a, Typeable b, Typeable k)
=> TH.Name
-> a
-> m (Parser k n b)
-> m (Parser k n b)
```
The reason for operating on `Parser`s specifically was that the `MonadSchema` effect would additionally initialize certain `Unique` values, which appear (nested) in the type of `Parser`.
hasura/graphql-engine-mono#518 changed the type of `memoizeOn`, to additionally allow memoizing `FieldParser`s. These also contained a `Unique` value, which was similarly initialized by the `MonadSchema` effect. The new type of `memoizeOn` was as follows:
```haskell
memoizeOn
:: forall p d a b
. (HasCallStack, HasDefinition (p n b) d, Ord a, Typeable p, Typeable a, Typeable b)
=> TH.Name
-> a
-> m (p n b)
-> m (p n b)
```
Note the type `p n b` of the value being memoized: by choosing `p` to be either `Parser k` or `FieldParser`, both can be memoized. Also note the new `HasDefinition (p n b) d` constraint, which provided a `Lens` for accessing the `Unique` value to be initialized.
A quick simplification is that the `HasCallStack` constraint has never been used by any code. This was realized in hasura/graphql-engine-mono#4713, by removing that constraint.
hasura/graphql-engine-mono#2980 removed the `Unique` value from our GraphQL-related types entirely, as their original purpose was never truly realized. One part of removing `Unique` consisted of dropping the `HasDefinition (p n b) d` constraint from `memoizeOn`.
What I didn't realize at the time was that this meant that the type of `memoizeOn` could be generalized and simplified much further. This PR finally implements that generalization. The new type is as follows:
```haskell
memoizeOn ::
forall a p.
(Ord a, Typeable a, Typeable p) =>
TH.Name ->
a ->
m p ->
m p
```
This change has a couple of consequences.
1. While constructing the schema, we often output `Maybe (Parser ...)`, to model that the existence of certain pieces of GraphQL schema sometimes depends on the permissions that a certain role has. The previous versions of `memoizeOn` were not able to handle this, as the only thing they could memoize was fully-defined (if not yet fully-evaluated) `(Field)Parser`s. This much more general API _would_ allow memoizing `Maybe (Parser ...)`s. However, we probably have to be continue being cautious with this: if we blindly memoize all `Maybe (Parser ...)`s, the resulting code may never be able to decide whether the value is `Just` or `Nothing` - i.e. it never commits to the existence-or-not of a GraphQL schema fragment. This would manifest as a non-well-founded knot tying, and this would get reported as an error by the implementation of `memoizeOn`.
tl;dr: This generalization _technically_ allows for memoizing `Maybe` values, but we probably still want to avoid doing so.
For this reason, the PR adds a specialized version of `memoizeOn` to `Hasura.GraphQL.Schema.Parser`.
2. There is no longer any need to connect the `MonadSchema` knot-tying effect with the `MonadParse` effect. In fact, after this PR, the `memoizeOn` method is completely GraphQL-agnostic, and so we implement hasura/graphql-engine-mono#4726, separating `memoizeOn` from `MonadParse` entirely - `memoizeOn` can be defined and implemented as a general Haskell typeclass method.
Since `MonadSchema` has been made into a single-type-parameter type class, it has been renamed to something more general, namely `MonadMemoize`. Its only task is to memoize arbitrary `Typeable p` objects under a combined key consisting of a `TH.Name` and a `Typeable a`.
Also for this reason, the new `MonadMemoize` has been moved to the more general `Control.Monad.Memoize`.
3. After this change, it's somewhat clearer what `memoizeOn` does: it memoizes an arbitrary value of a `Typeable` type. The only thing that needs to be understood in its implementation is how the manual blackholing works. There is no more semantic interaction with _any_ GraphQL code.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4725
Co-authored-by: Daniel Harvey <4729125+danieljharvey@users.noreply.github.com>
GitOrigin-RevId: 089fa2e82c2ce29da76850e994eabb1e261f9c92
### Description
By definition, root fields are at the root of the schema: only functions that craft root fields need to know about how to customize the name of root fields. However, the presence of `Has MkRootFieldName` in `MonadBuildSchemaBase` meant that the entirety of the schema building code was implicitly aware of / capable of altering root field names.
This PR removes this constraint, and moves it to the functions that do craft root fields. This has several upsides:
- it makes it more explicit where root fields are being crafted
- it prevents functions that should not use this from mistakenly applying it to non-root fields
- it simplifies the shared schema context
### Future work
- can we maybe pass this as an argument, instead of making it a required part of the context?
- ~~AFAICT, we only ever use `mempty` for it: is this actually dead code that we should actually just remove altogether?~~
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5235
GitOrigin-RevId: 4268751f3ab87ae8e03b6fe9e1efa1b096200027
Introduces a new function, `metadataToDTO`, that converts a `Metadata` value to a `MetadataV3` DTO value. This is the next step in the alternative serialization path for metadata that comes with a generated OpenAPI specification.
This PR carves up the existing `metadataToOrdJSON` function so that helpers previously embedded in the `where` block of that function can also be used in the implementation of `metadataToDTO`. If I did everything correctly `metadataToOrdJSON` should behave exactly as before.
In a followup PR I will move the extracted helpers to a new submodule, `Hasura.RQL.Types.Metadata.Serialization`, since they add up to several hundred lines of code.
I'm breaking up #5184 into smaller PRs, and this is the second PR in that effort. This PR is stacked on #5210.
The tracking issue is https://hasurahq.atlassian.net/browse/MM-35
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5211
GitOrigin-RevId: 2596ed5312d7b1232c47ae1d08a51d8ead11fcb8
### Description
I am not 100% sure about this PR; while I think the code is better this way, I'm willing to be convinced otherwise.
In short, this PR moves the `RoleName` field into the `SchemaContext`, instead of being a nebulous `Has RoleName` constraint on the reader monad. The major upside of this is that it makes it an explicit named field, rather than something that must be given as part of a tuple of arguments when calling `runReader`.
However, the downside is that it breaks the helper permissions functions of `Schema.Table`, which relied on `Has RoleName r`. This PR makes the choice of passing the role name explicitly to all of those functions, which in turn means first explicitly fetching the role name in a lot of places. It makes it more explicit when a schema building block relies on the role name, but is a bit verbose...
### Alternatives
Some alternatives worth considering:
- attempting something like `Has context r, Has RoleName context`, which would allow them to be independent from the context but still fetch the role name from the reader, but might require type annotations to not be ambiguous
- keeping the permission functions the same, with `Has RoleName r`, and introducing a bunch of newtypes instead of using tuples to explicitly implement all the required `Has` instances
- changing the permission functions to `Has SchemaContext r`, since they are functions used only to build the schema, and therefore may be allowed to be tied to the context.
What do y'all think?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5073
GitOrigin-RevId: 8fd09fafb54905a4d115ef30842d35da0c3db5d2
In the process of decoupling the schema parsers from the GraphQL Engine, we need to remove dependencies on `Hasura.Base.Error`.
First of all, we have avoided using `QErr` in schema parsers code, instead returning a more appropriate data type which can be converted to a `Hasura.Base.Error.QErr` later.
Secondly, we create a new `ParseErrorCode` type to represent parse failure types, which are then converted to a `Hasura.Base.Error.Code` later.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5181
GitOrigin-RevId: 8655e26adb1e7d5e3d552c77a8a403f987b53467
Updates to the latest version of autodocodec and uses the new features, in particular `discriminatedUnionCodec`.
This allows us to remove the `ValueWrapper*` types and `sumTypeCodec`. Sum types are now encoded as discriminated unions.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5155
GitOrigin-RevId: 20bfdc12b28d35db354c4a149b9175fab0b2b7d2
This is now the sole in-universe dependency of the schema parsers. As
such, we need to extract it as a library before we can extract the
schema parsers as a library.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5202
GitOrigin-RevId: fbe571855768e56dc8b8e259b8efe900de3ecc54
Makes the init specs shorter by using `shouldBe` instead of `shouldSatisfy` wherever possible. This also makes test failures more expressive.
It also simplifies boolean logic in most places, following HLint warnings. These changes brought to you by `hlint --refactor`, which is basically magic.
I have left some redundancy in the boolean logic for clarity, along with the appropriate HLint suppressions.
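A toy example of the substitution (the hspec combinators are real; the spec itself is illustrative):
```haskell
import Test.Hspec

spec :: Spec
spec = describe "example" $
  it "reports the actual value on failure" $ do
    -- shouldBe shows "expected: ... but got: ..." when it fails,
    -- whereas shouldSatisfy can only report that the predicate failed:
    (1 + 1 :: Int) `shouldBe` 2
    -- previously written as: (1 + 1 :: Int) `shouldSatisfy` (== 2)
```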
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5087
GitOrigin-RevId: 52bf3626be2615e6a32a0fc0e8be19cca31ee4ad
This introduces an `ErrorMessage` newtype which wraps `Text` in a manner which is designed to be easy to construct, and difficult to deconstruct.
It provides functionality similar to `Data.Text.Extended`, but designed _only_ for error messages. Error messages are constructed through `fromString`, concatenation, or the `toErrorValue` function, which is designed to be overridden for all meaningful domain types that might show up in an error message. Notably, there are not and should never be instances of `ToErrorValue` for `String`, `Text`, `Int`, etc. This is so that we correctly represent the value in a way that is specific to its type. For example, all `Name` values (from the _graphql-parser-hs_ library) are single-quoted now; no exceptions.
I have mostly had to add `instance ToErrorValue` for various backend types (and also add newtypes where necessary). Some of these are not strictly necessary for this changeset, as I had bigger aspirations when I started. These aspirations have been tempered by trying and failing twice.
As such, in this changeset, I have started by introducing this type to the `parseError` and `parseErrorWith` functions. In the future, I would like to extend this to the `QErr` record and the various `throwError` functions, but this is a much larger task and should probably be done in stages.
For now, `toErrorMessage` and `fromErrorMessage` are provided for conversion to and from `Text`, but the intent is to stop exporting these once all error messages are converted to the new type.
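A minimal sketch of that design (simplified; not the engine's exact definitions, which constrain construction further):
```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE OverloadedStrings #-}

import Data.String (IsString)
import Data.Text (Text)

-- Easy to construct (string literals, concatenation, toErrorValue), with no
-- general way to get the Text back out.
newtype ErrorMessage = ErrorMessage Text
  deriving (IsString, Semigroup)

class ToErrorValue a where
  toErrorValue :: a -> ErrorMessage

-- e.g. GraphQL names are always rendered single-quoted.
newtype GraphQLName = GraphQLName Text

instance ToErrorValue GraphQLName where
  toErrorValue (GraphQLName n) = ErrorMessage ("'" <> n <> "'")

fieldNotFound :: GraphQLName -> ErrorMessage
fieldNotFound name = "field " <> toErrorValue name <> " not found"
```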
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5018
GitOrigin-RevId: 84b37e238992e4312255a87ca44f41af65e2d89a
This removes the one remaining instance of `unsafeMkName` in production
code, uses `G.name` where possible in tests, and ignores instances where
it's not possible (such as `instance Arbitrary G.Name`).
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5074
GitOrigin-RevId: d8049edf1f1bc2ef25f34874ef5bd5a5934bd33d
### Description
A trivial PR, extracted out of #4936, that removes remote schema permissions from the schema context, as they are only ever used at the top level: whether or not we need to use remote schema permissions is not something that impacts _how_ we build the schema, but whether some parts of the schema should be built at all, and therefore doesn't need to be accessible throughout the build process.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5050
GitOrigin-RevId: 734673370393d5640ad753222982baf2698f6d8f