This upgrades the version of Ormolu required by the HGE repository to v0.5.0.1, and reformats all code accordingly.
Ormolu v0.5 reformats code that uses infix operators. This is mostly useful, adding newlines and indentation to make it clear which operators are applied first, but in some cases, it's unpleasant. To make this easier on the eyes, I had to do the following:
* Add a few fixity declarations (search for `infix`)
* Add parentheses to make precedence clear, allowing Ormolu to keep everything on one line
* Rename `relevantEq` to `(==~)` in #6651 and set it to `infix 4` (see the sketch after this list)
* Add a few _.ormolu_ files (thanks to @hallettj for helping me get started), mostly for Autodocodec operators that don't have explicit fixity declarations
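As a concrete illustration of the fixity changes, here is a minimal sketch (a hypothetical module, not the real definition from #6651) of declaring a fixity for a renamed operator:

```haskell
module Example where

-- Declaring a fixity tells Ormolu (and GHC) how tightly the operator
-- binds, so chains like `a ==~ b && c ==~ d` need no extra parentheses.
infix 4 ==~

-- | "Relevant equality": a stand-in for the real definition.
(==~) :: Eq a => a -> a -> Bool
(==~) = (==)
```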
In general, I think these changes are quite reasonable. They mostly affect indentation.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6675
GitOrigin-RevId: cd47d87f1d089fb0bc9dcbbe7798dbceedcd7d83
## Description
This PR allows DC agents to define custom aggregate functions for their scalar types.
### Related Issues
GDC-189
### Solution and Design
We added a new property `aggregate_functions` to the scalar types capabilities. This allows the agent author to specify a set of aggregate functions supported by each scalar type, along with the function's result type.
During GraphQL schema generation, the custom aggregate functions are available via a new method `getCustomAggregateOperators` on the `Backend` type class.
Custom functions are merged with the builtin aggregate functions when building GraphQL schemas for table aggregate fields and for `order_by` operators on array relations.
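To make the shape concrete, here is a minimal self-contained sketch of the merging step; all names here (`ScalarType`, `FunctionName`, `mergeAggregates`) are illustrative, not the engine's real identifiers:

```haskell
import Data.HashMap.Strict (HashMap)
import qualified Data.HashMap.Strict as HashMap
import Data.Text (Text)

type ScalarType = Text
type FunctionName = Text

-- Per scalar type: the aggregate functions it supports, each with its
-- result type (this mirrors the new `aggregate_functions` capability).
type AggregateFunctions = HashMap ScalarType (HashMap FunctionName ScalarType)

-- Merge custom functions with the builtins for one scalar type; here the
-- builtin definition wins on a name clash (an arbitrary choice for the
-- sketch).
mergeAggregates ::
  HashMap FunctionName ScalarType ->
  HashMap FunctionName ScalarType ->
  HashMap FunctionName ScalarType
mergeAggregates builtins custom = HashMap.union builtins custom
```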
### Steps to test and verify
- Codec tests for aggregate function capabilities have been added to the unit tests.
- Some custom aggregate operators have been added to the reference agent and are used in a new test in `api-tests`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6199
GitOrigin-RevId: e9c0d1617af93847c1493671fdbb794f573bde0c
The main aim of this PR is:
1. To set up a module structure for the 'remote-schemas' package.
2. To move parts of the remote schema codebase into the new module structure to validate it.
## Notes to the reviewer
Why a PR with a large-ish diff?
1. We've been making progress on the MM project, but we don't yet know how long it is going to take us to get to the first milestone. To understand this better, we need to figure out the unknowns as soon as possible. Hence I've taken a stab at the first two items in the [end-state](https://gist.github.com/0x777/ca2bdc4284d21c3eec153b51dea255c9) document to figure out the unknowns. Unsurprisingly, there are a bunch of issues that we haven't discussed earlier. These are documented in the 'open questions' section.
1. The diff is large, but it is mostly code being moved around, and I've added a section that documents how things were moved. In addition, there are a fair number of PR comments to help with the review process.
## Changes in the PR
### Module structure
Sets up the module structure as follows:
```
Hasura/
RemoteSchema/
Metadata/
Types.hs
SchemaCache/
Types.hs
Permission.hs
RemoteRelationship.hs
Build.hs
MetadataAPI/
Types.hs
Execute.hs
```
### 1. Types representing metadata are moved
Types that capture metadata information (currently scattered across several RQL modules) are moved into `Hasura.RemoteSchema.Metadata.Types`.
- This new module only depends on very 'core' modules such as `Hasura.Session` for the notion of roles and `Hasura.Incremental` for the `Cacheable` typeclass.
- The requirement on database modules is avoided by generalizing the remote schema metadata to accept an arbitrary 'r' for a remote relationship definition (see the sketch below).
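A sketch of that generalization, with invented field names:

```haskell
import Data.Text (Text)

newtype RemoteSchemaName = RemoteSchemaName Text

-- 'r' stands for the remote relationship definition, so this type never
-- has to mention database-specific types; graphql-engine instantiates
-- 'r' with its concrete definition.
data RemoteSchemaMetadataG r = RemoteSchemaMetadataG
  { _rsmName :: RemoteSchemaName,
    _rsmRemoteRelationships :: [r]
  }
```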
### 2. SchemaCache related types and build logic have been moved
Types that represent remote schemas information in SchemaCache are moved into `Hasura.RemoteSchema.SchemaCache.Types`.
Similar to `H.RS.Metadata.Types`, this module depends only on 'core' modules, except for `Hasura.GraphQL.Parser.Variable`. That module has something to do with remote relationships, but I haven't spent time looking into it. The validation of 'remote relationships to remote schemas' is also something that needs to be looked at.
This change rips out the logic that builds the remote schema's SchemaCache information from the monolithic `buildSchemaCacheRule` and moves it into `Hasura.RemoteSchema.SchemaCache.Build`. Further, the `.SchemaCache.Permission` and `.SchemaCache.RemoteRelationship` modules have been created from existing modules that capture the schema cache building logic for those two components.
This was a fair amount of work. On main, a remote schema's SchemaCache information is currently built in two phases: in the first phase, 'permissions' and 'remote relationships' are ignored, and in the second phase they are filled in.
While remote relationships can only be resolved after partially resolving sources and other remote schemas, the same isn't true for permissions. Further, most of the work done to resolve remote relationships can be moved to the first phase, so that the second phase becomes a very simple traversal.
This is the approach that was taken: resolve permissions, and as much of the remote relationship information as possible, in the first phase.
### 3. Metadata APIs related types and build logic have been moved
The types that represent remote schema related metadata APIs and the execution logic have been moved to `Hasura.RemoteSchema.MetadataAPI.Types` and `.Execute` modules respectively.
## Open questions:
1. `Hasura.RemoteSchema.Metadata.Types` is so called because I was hoping that all of the metadata-related APIs of remote schemas could be brought in at `Hasura.RemoteSchema.Metadata.API`. However, as the metadata APIs depended on functions from the `SchemaCache` module (see [1](ceba6d6226/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs (L55)) and [2](ceba6d6226/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs (L91))), it made more sense to create a separate top-level module for `MetadataAPI`s.
Maybe we can just have `Hasura.RemoteSchema.Metadata` and get rid of the extra nesting or have `Hasura.RemoteSchema.Metadata.{Core,Permission,RemoteRelationship}` if we want to break them down further.
1. `buildRemoteSchemas` in `H.RS.SchemaCache.Build` has the following type:
```haskell
buildRemoteSchemas ::
( ArrowChoice arr,
Inc.ArrowDistribute arr,
ArrowWriter (Seq CollectedInfo) arr,
Inc.ArrowCache m arr,
MonadIO m,
HasHttpManagerM m,
Inc.Cacheable remoteRelationshipDefinition,
ToJSON remoteRelationshipDefinition,
MonadError QErr m
) =>
Env.Environment ->
( (Inc.Dependency (HashMap RemoteSchemaName Inc.InvalidationKey), OrderedRoles),
[RemoteSchemaMetadataG remoteRelationshipDefinition]
)
`arr` HashMap RemoteSchemaName (PartiallyResolvedRemoteSchemaCtxG remoteRelationshipDefinition, MetadataObject)
```
Note the dependence on `CollectedInfo`, which is defined as:
```haskell
data CollectedInfo
= CIInconsistency InconsistentMetadata
| CIDependency
MetadataObject
-- ^ for error reporting on missing dependencies
SchemaObjId
SchemaDependency
deriving (Eq)
```
This pretty much means that the remote schema code depends on types from databases, actions, and so on.
How do we fix this? Maybe introduce a typeclass such as `ArrowCollectRemoteSchemaDependencies` which is defined in `Hasura.RemoteSchema` and then implemented in graphql-engine?
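One possible shape for that class, as a self-contained sketch (the placeholder types stand in for the real ones defined in graphql-engine):

```haskell
module CollectDeps where

-- Placeholders for the engine's real types.
data MetadataObject = MetadataObject
data SchemaObjId = SchemaObjId
data SchemaDependency = SchemaDependency

-- The remote-schema package would program against this class; the engine
-- would provide an instance that records the dependency in its
-- CollectedInfo writer.
class ArrowCollectRemoteSchemaDependencies arr where
  collectDependency :: arr (MetadataObject, SchemaObjId, SchemaDependency) ()
```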
1. The dependency on `buildSchemaCacheFor` in `.MetadataAPI.Execute`, which has the following signature:
```haskell
buildSchemaCacheFor ::
(QErrM m, CacheRWM m, MetadataM m) =>
MetadataObjId ->
  MetadataModifier ->
  m ()
```
This can be easily resolved if we restrict what the metadata APIs are allowed to do. Currently, they have unfettered access to modify the SchemaCache (via the `CacheRWM` constraint):
```haskell
runAddRemoteSchema ::
( QErrM m,
CacheRWM m,
MonadIO m,
HasHttpManagerM m,
MetadataM m,
Tracing.MonadTrace m
) =>
Env.Environment ->
AddRemoteSchemaQuery ->
m EncJSON
```
If this is instead changed so that remote schema APIs can only modify remote schema metadata (while retaining access to the remote schemas part of the schema cache), this dependency is completely removed:
```haskell
runAddRemoteSchema ::
( QErrM m,
MonadIO m,
HasHttpManagerM m,
MonadReader RemoteSchemasSchemaCache m,
MonadState RemoteSchemaMetadata m,
Tracing.MonadTrace m
) =>
Env.Environment ->
AddRemoteSchemaQuery ->
  m RemoteSchemaMetadataObjId
```
The idea is that the core graphql-engine would call these functions and then call
`buildSchemaCacheFor`.
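As a toy rendering of that flow (all types invented), the engine would thread its state through the restricted API and trigger the rebuild itself:

```haskell
module Glue where

import Control.Monad.State (State, runState)

data RemoteSchemaMetadata = RemoteSchemaMetadata
data MetadataObjId = MetadataObjId

-- Stand-in for the restricted API: it can only touch remote schema
-- metadata, returning the id of the object it created.
addRemoteSchema :: State RemoteSchemaMetadata MetadataObjId
addRemoteSchema = pure MetadataObjId

-- Stand-in for the engine's cache rebuild.
buildSchemaCacheFor :: MetadataObjId -> RemoteSchemaMetadata -> IO ()
buildSchemaCacheFor _ _ = putStrLn "rebuilding schema cache"

main :: IO ()
main = do
  let (objId, newMetadata) = runState addRemoteSchema RemoteSchemaMetadata
  buildSchemaCacheFor objId newMetadata
```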
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6291
GitOrigin-RevId: 51357148c6404afe70219afa71bd1d59bdf4ffc6
## Description
This PR fixes hasura/graphql-engine#8345: when creating the final representation of a remote relationship to a remote schema (a `RemoteJoin`), we would mistakenly label ALL join fields in the selection set as being relevant to that one relationship: if there was more than one remote relationship to process in that selection set, each would be labelled with the union of all their join fields. The problem with this error is that, when processing remote relationships, we correctly ignore all the ones for which at least one join key is null. Consequently, this error would result in us ignoring remote relationships for which an _unrelated_ join key was null, resulting in that data missing from the final JSON result.
This PR simply ensures that the aggregation of fields that are passed to `createRemoteJoin` is pruned to only contain the fields relevant to the join being created. This is a very small change, and the bulk of this PR is the regression tests.
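The essence of the fix can be sketched as a filter over the collected join fields (names invented, not the actual `createRemoteJoin` internals):

```haskell
import Data.HashMap.Strict (HashMap)
import qualified Data.HashMap.Strict as HashMap
import Data.HashSet (HashSet)
import qualified Data.HashSet as HashSet
import Data.Text (Text)

-- Keep only the join fields that the relationship being built actually
-- references; previously the union of all relationships' join fields
-- was passed through unfiltered.
pruneJoinFields :: HashSet Text -> HashMap Text v -> HashMap Text v
pruneJoinFields relevant =
  HashMap.filterWithKey (\fieldName _ -> fieldName `HashSet.member` relevant)
```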
## Changelog
__Component__ : server
__Type__: bugfix
__Product__: community-edition
### Short Changelog
fix remote relationship to remote schema sometimes being erroneously null when multiple relationships are defined on the same table / graphql object ([#8345](https://github.com/hasura/graphql-engine/issues/8345))
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6420
GitOrigin-RevId: eb54462724b007f80b674dcf234adf6d9cfaaf79
This upgrades CI and anyone using Nix to HLint v3.4.1.
If you're not using Nix, this doesn't actually _do_ anything on your
local machine; it's just a suggestion.
It also applies a bunch of simple HLint refactors, using
`make lint-hs-fix`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6324
GitOrigin-RevId: de8267e4909d6dcd3f83543188517f3aaeebc5f3
- Remove a few unnecessary helper functions
- Delete kind annotations
- Bring GHC warnings and language extensions more in line with those of the `graphql-engine` library
- Constrain unconstrained dependency on `hasql-pool`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6251
GitOrigin-RevId: 10c2530f007f70cf1464cec36566ee2264589881
- Remove `onJust` in favor of the more general `for_`
- Remove `withJust` which was used only once
- Remove `hashNub` in favor of `Ord`-based `uniques`
- Simplify some of the implementations in `Hasura.Prelude`
- Add `hlint` hints rewriting `maybe True` to `all`, and `maybe False` to `any` (see the example below)
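For illustration, the rewrites the new hints suggest (these are equivalent thanks to the `Foldable` instance for `Maybe`):

```haskell
ageIsValid :: Maybe Int -> Bool
ageIsValid = all (>= 0) -- previously: maybe True (>= 0)

hasPositive :: Maybe Int -> Bool
hasPositive = any (> 0) -- previously: maybe False (> 0)
```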
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6173
GitOrigin-RevId: 2c6ebbe2d04f60071d2a53a2d43c6d62dbc4b84e
This PR is the result of running the following commands:
```bash
$ git grep -l '".* : "' -- '*.hs' | xargs sed -i -E 's/(".*) : "/\1: "/'
$ scripts/dev.sh test --integration --accept
```
I also manually fixed a few tests and docs.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6148
GitOrigin-RevId: cf8b87605d41d9ce86613a41ac5fd18691f5a641
Result of executing the following commands:
```shell
# replace "as Q" imports with "as PG" (in retrospect this didn't need a regex)
git grep -lE 'as Q($|[^a-zA-Z])' -- '*.hs' | xargs sed -i -E 's/as Q($|[^a-zA-Z])/as PG\1/'
# replace " Q." with " PG."
git grep -lE ' Q\.' -- '*.hs' | xargs sed -i 's/ Q\./ PG./g'
# replace "(Q." with "(PG."
git grep -lE '\(Q\.' -- '*.hs' | xargs sed -i 's/(Q\./(PG./g'
# ditto, but for [, |, { and !
git grep -lE '\[Q\.' -- '*.hs' | xargs sed -i 's/\[Q\./\[PG./g'
git grep -l '|Q\.' -- '*.hs' | xargs sed -i 's/|Q\./|PG./g'
git grep -l '{Q\.' -- '*.hs' | xargs sed -i 's/{Q\./{PG./g'
git grep -l '!Q\.' -- '*.hs' | xargs sed -i 's/!Q\./!PG./g'
```
(Doing the `grep -l` before the `sed`, instead of `sed` on the entire codebase, reduces the number of `mtime` updates, and so reduces how many times a file gets recompiled while checking intermediate results.)
Finally, I manually removed a broken and unused `Arbitrary` instance in `Hasura.RQL.Network`. (It used an `import Test.QuickCheck.Arbitrary as Q` statement, which was erroneously caught by the first find-replace command.)
After this PR, `Q` is no longer used as an import qualifier. That was not the goal of this PR, but perhaps it's a useful fact for future efforts.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5933
GitOrigin-RevId: 8c84c59d57789111d40f5d3322c5a885dcfbf40e
The intent is to generalize `columnParser` to the point where it is the same across all backends, and then remove the interface in favor of a single implementation.
This extracts out `enumParser` and `possiblyNullable` as the two main areas that differ across backends. We may split `possiblyNullable` further so that we can extract some of that logic out into a common function too.
With these changes, the various `columnParser` implementations become semantically equivalent. They still do different things, and so reconciling them will require further changes.
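A toy of the intended shape, with invented types (the names `columnParser`, `enumParser`, and `possiblyNullable` mirror the prose, but the real signatures differ):

```haskell
newtype Parser a = Parser {runParser :: String -> Maybe a}

-- Backend-specific piece: whether a null literal is accepted.
possiblyNullable :: Bool -> Parser a -> Parser (Maybe a)
possiblyNullable nullable (Parser p) = Parser $ \s ->
  if nullable && s == "null"
    then Just Nothing
    else Just <$> p s

-- Backend-specific piece: how enum values parse.
enumParser :: [String] -> Parser String
enumParser values = Parser $ \s ->
  if s `elem` values then Just s else Nothing

-- The shared core that all backends could eventually use.
columnParser :: Bool -> [String] -> Parser (Maybe String)
columnParser nullable enumValues =
  possiblyNullable nullable (enumParser enumValues)
```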
Co-Authored-By: Antoine Leblanc <antoine@hasura.io>
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5841
GitOrigin-RevId: eec1770931eed5d72da70c97d7d0f00e33fa15d2
### Description
This PR attempts to fix several issues with source customization as it relates to remote relationships. There were several issues regarding casing: at the relationship border, we didn't properly set the target source's case, we didn't have access to the list of supported features to decide whether the feature was allowed or not, and we didn't have access to the global default.
However, all of that information is available when we build the schema cache, as we do resolve the case of some elements such as function names: we can therefore resolve source information at the same time, and simplify both the root of the schema and the remote relationship border.
To do this, this PR introduces a new type, `ResolvedSourceCustomization`, to be used in the Schema Cache, as opposed to the metadata's `SourceCustomization`, following a pattern established by a lot of other types.
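The pattern, sketched with invented fields: the metadata type keeps the user's input optional, while the resolved type has all defaults applied at cache-build time:

```haskell
import Data.Maybe (fromMaybe)

data Casing = SnakeCase | CamelCase

-- As written by the user in metadata; everything optional.
newtype SourceCustomization = SourceCustomization
  {_scCasing :: Maybe Casing}

-- As stored in the schema cache; defaults already applied.
newtype ResolvedSourceCustomization = ResolvedSourceCustomization
  {_rscCasing :: Casing}

resolveSourceCustomization :: Casing -> SourceCustomization -> ResolvedSourceCustomization
resolveSourceCustomization globalDefault (SourceCustomization mCasing) =
  ResolvedSourceCustomization (fromMaybe globalDefault mCasing)
```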
### Remaining work and open questions
One major point of confusion: it seems to me that we didn't set the case at all across remote relationships, which would suggest we would use the case of the LHS source across the subset of the RHS one that is accessible through the remote relationship, which would in turn "corrupt" the parser cache and might result in the wrong case being used for that source later on. Is that assessment correct, and was I right to fix it?
Another one is that we seem not to be using the local case of the RHS to name the field in an object relationship; unless I'm mistaken we only use it for array relationships? Is that intentional?
This PR is also missing tests that would showcase the difference, and a changelog entry. To my knowledge, all the tests of this feature are in the python test suite; this could be the opportunity to move them to the hspec suite, but that might be a considerable amount of work.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5619
GitOrigin-RevId: 51a81b713a74575e82d9f96b51633f158ce3a47b
### Description
This PR changes all the schema code to operate in a specific `SchemaT` monad, rather than in an arbitrary `m` monad. `SchemaT` is intended to be used opaquely with `runSourceSchema` and `runRemoteSchema`. The main goal of this is to allow a different reader context per part of the schema: this PR also minimizes the contexts. This means that we no longer require `SchemaOptions` when building remote schemas' schema, and this PR therefore removes a lot of dummy / placeholder values accordingly.
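A toy model of "one opaque monad, a different reader context per part of the schema" (types invented):

```haskell
import Control.Monad.Reader (ReaderT, runReaderT)

data SourceContext = SourceContext {_scOptions :: String}

-- Note: no SchemaOptions equivalent needed on the remote side.
data RemoteContext = RemoteContext

newtype SchemaT r m a = SchemaT (ReaderT r m a)

-- Each entry point fixes its own context; callers never see the guts.
runSourceSchema :: SourceContext -> SchemaT SourceContext m a -> m a
runSourceSchema ctx (SchemaT m) = runReaderT m ctx

runRemoteSchema :: RemoteContext -> SchemaT RemoteContext m a -> m a
runRemoteSchema ctx (SchemaT m) = runReaderT m ctx
```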
### Performance and stacking
This PR has been through several iterations. #5339 was the original version, that accomplished the same thing by stacking readers on top of the stack at every remote relationship boundary. This raised performance concerns, and @0x777 confirmed with an ad-hoc test that in some extreme cases we could see up to a 10% performance impact. This version, while more verbose, allows us to unstack / re-stack the readers, and avoid that problem. #5517 adds a new benchmark set to be able to automatically measure this on every PR.
### Remaining work
- [x] a comment (or perhaps even a Note?) should be added to `SchemaT`
- [x] we probably want for #5517 to be merged first so that we can confirm the lack of performance penalty
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5458
GitOrigin-RevId: e06b83d90da475f745b838f1fd8f8b4d9d3f4b10
### Description
This PR is a first step towards having a dedicated reader context per schema block. It adds the required Reader instance, and switches from a `SchemaT ReaderT` stack to a `ReaderT SchemaT` stack. Furthermore, it cleans up / harmonizes some of the top-level schema building functions.
Sources and remotes are now built each within their own run of `runReaderT`: for now, the reader context is the same in both cases, meaning no special care is required at the boundary of remote relationships.
Actions are explicitly run with the source context for now; we could envision creating a third and distinct context for them.
This PR is expected to be a no-op.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5300
GitOrigin-RevId: a014e5b3504eb4ef740c820d305d6d2695f622f7
It's about time.
To do this I had to check a few more boxes.
* I copied the flags from `graphql-engine.cabal` to the libraries in `server/lib`.
* I moved `Cacheable` instances of schema parser types beside the typeclass declaration.
* I removed imports of `Hasura.Prelude` from the tests, and rewrote them accordingly.
* I copied the `TestMonad` parse monad into `server/src-test/Hasura/GraphQL/Schema/RemoteTest.hs`, which was using it. I think this could be done with the real thing, but I tried replacing it with constraints and it messed with my head somewhat.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5311
GitOrigin-RevId: ebebcc50a16f2d517b7f730fe72410827ca3e86c