The simplification will allow us to avoid a few `MonadError QErr m` constraints in the future - this effect can be created locally instead of reusing a global one.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8729
GitOrigin-RevId: 851e28b1f5bfe4c47da43fa324714a941ef25c57
## Description
This change adds support for nested object fields in HGE IR and Schema Cache, the Data Connectors backend and API, and the MongoDB agent.
### Data Connector API changes
- The `/schema` endpoint response now includes an optional set of GraphQL type definitions. Table column types can refer to these definitions by name.
- Queries can now include a new field type `object` which contains a column name and a nested query. This allows querying into a nested object within a field.
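As a rough sketch of the shape involved, the new field type might be modeled along these lines (hypothetical Haskell names for illustration only; the actual definitions live in the Data Connector API types):
```haskell
import Data.HashMap.Strict (HashMap)
import Data.Text (Text)

-- Hedged sketch; names are illustrative, not the actual dc-api definitions.
newtype ColumnName = ColumnName Text
newtype ScalarType = ScalarType Text

-- A query selects named fields (and may carry pagination, filters, etc.).
data Query = Query
  { qFields :: HashMap Text Field
  , qLimit  :: Maybe Int
  }

data Field
  = ColumnField ColumnName ScalarType
  -- The new 'object' field type: a column name paired with a nested query
  -- that selects fields from the object stored in that column.
  | ObjectField ColumnName Query
```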
### MongoDB agent changes
- Add support for querying into nested documents using the new `object` field type.
### HGE changes
- The `Backend` type class has a new type family `XNestedObjects b` which controls whether or not a backend supports querying into nested objects. This is currently enabled only for the `DataConnector` backend.
- For backends that support nested objects, the `FieldInfo` type gets a new constructor `FINestedObject`, and the `AnnFieldG` type gets a new constructor `AFNestedObject` (see the sketch after this list).
- If the DC `/schema` endpoint returns any custom GraphQL type definitions, they are stored in the `TableInfo` for each table in the source.
- During schema cache building, the function `addNonColumnFields` will check whether any column types match custom GraphQL object types stored in the `TableInfo`. If so, they are converted into `FINestedObject` instead of `FIColumn` in the `FieldInfoMap`.
- When building the `FieldParser`s from `FieldInfo` (function `fieldSelection`), any `FINestedObject` fields are converted into nested object parsers returning `AFNestedObject`.
- The `DataConnector` query planner converts `AFNestedObject` fields into `object` field types in the query sent to the agent.
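Putting the pieces above together, a minimal sketch of the HGE-side shape (simplified signatures; the real definitions carry more parameters and constraints):
```haskell
{-# LANGUAGE TypeFamilies #-}

import Data.Kind (Type)

-- Hedged sketch; the real 'Backend' class has many more members.
class Backend (b :: Type) where
  -- Instantiated to an inhabited witness type for backends that support
  -- nested objects (currently only 'DataConnector'), and to 'Void' otherwise.
  type XNestedObjects b :: Type

data AnnColumnField b v      -- placeholder
data NestedObjectSelectG b v -- placeholder

-- The new constructor carries the support witness, so an 'AFNestedObject'
-- value can only be built for backends where 'XNestedObjects b' is inhabited.
data AnnFieldG b v
  = AFColumn (AnnColumnField b v)
  | AFNestedObject (XNestedObjects b) (NestedObjectSelectG b v)
```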
## Limitations
### HGE not yet implemented:
- Support for nested arrays
- Support for nested objects/arrays in mutations
- Support for nested objects/arrays in order-by
- Support for filters (`where`) in nested objects/arrays
- Support for adding custom GraphQL types via track table metadata API
- Support for interface and union types
- Tests for nested objects
### Mongo agent not yet implemented:
- Generate nested object types from validation schema
- Support for aggregates
- Support for order-by
- Configure agent port
- Build agent in CI
- Agent tests for nested objects and the MongoDB agent
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7844
GitOrigin-RevId: aec9ec1e4216293286a68f9b1af6f3f5317db423
This PR is on top of #7789.
### Description
This PR entirely rewrites the API of the Tracing library, to make `interpTraceT` a thing of the past. Before this change, we ran traces by sticking a `TraceT` on top of whatever we were doing. This had several major drawbacks:
- we were carrying a bunch of `TraceT` across the codebase, and the entire codebase had to know about it
- we needed to carry a second class constraint around (`HasReporterM`) to be able to run all of those traces
- we kept having to do stack rewriting with `interpTraceT`, which went from inconvenient to horrible
- we had to declare several behavioral instances on `TraceT m`
This PR rewrites all of `Tracing` using a more conventional model: there is ONE `TraceT` at the bottom of the stack, and there is an associated class constraint, `MonadTrace`: any part of the code that happens to satisfy `MonadTrace` is able to create new traces. We NEVER have to do stack rewriting, `interpTraceT` is gone, and `TraceT` and `Reporter` become implementation details that 99% of the code is blissfully unaware of: code that needs to do tracing only needs to declare that the monad in which it operates implements `MonadTrace`.
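To make the model concrete, here is a minimal sketch of the intended shape (simplified signatures; the real class and interpreter carry more methods and span metadata):
```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Trans.Reader (ReaderT)
import Data.Text (Text)

data TraceContext -- placeholder: reporter, current trace and span ids, ...

-- Any monad that can create spans; 99% of the code only ever sees this class.
class Monad m => MonadTrace m where
  newSpan :: Text -> m a -> m a

-- The ONE concrete interpreter, kept at the bottom of the stack.
newtype TraceT m a = TraceT (ReaderT TraceContext m a)
  deriving (Functor, Applicative, Monad)

instance Monad m => MonadTrace (TraceT m) where
  newSpan _name (TraceT act) = TraceT act -- elided: record timing, report span
```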
In doing so, this PR revealed **several bugs in the codebase**: places where we were expecting to trace something, but due to the default instance of `HasReporterM IO` we would actually not do anything. This PR also splits the code of `Tracing` into more byte-sized modules, with the goal of potentially moving to `server/lib` down the line.
### Remaining work
This PR is a draft; what's left to do is:
- [x] make Pro compile; I haven't updated `HasuraPro/Main` yet
- [x] document Tracing by writing a note that explains how to use the library, and the meaning of "reporter", "trace" and "span", as well as the pitfalls
- [x] discuss some of the trade-offs in the implementation, which is why I'm opening this PR already despite it not fully building yet
- [x] it depends on #7789 being merged first
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7791
GitOrigin-RevId: cadd32d039134c93ddbf364599a2f4dd988adea8
## Description
### I want to speak to the `Manager`
Oh boy. This PR is both fairly straightforward and overreaching, so let's break it down.
For most network access, we need a [`HTTP.Manager`](https://hackage.haskell.org/package/http-client-0.1.0.0/docs/Network-HTTP-Client-Manager.html). It is created only once, at the top level, when starting the engine, and is then threaded through the application to wherever we need to make a network call. As of main, the way we do this is not standardized: most of the GraphQL execution code passes it "manually" as a function argument throughout the code. We also have a custom monad constraint, `HasHttpManagerM`, that describes a monad's ability to provide a manager. And, finally, several parts of the code store the manager in some kind of argument structure, such as `RunT`'s `RunCtx`.
This PR's first goal is to harmonize all of this: we always create the manager at the root, and we already have it when we do our very first `runReaderT`. Wouldn't it make sense for the rest of the code to not manually pass it anywhere, to not store it anywhere, but to always rely on the current monad providing it? This is, in short, what this PR does: it implements a constraint on the base monads, so that they provide the manager, and removes most explicit passing from the code.
### First come, first served
One way this PR goes a tiny bit further than "just" doing the aforementioned harmonization is that it starts the process of implementing the "Services oriented architecture" roughly outlined in this [draft document](https://docs.google.com/document/d/1FAigqrST0juU1WcT4HIxJxe1iEBwTuBZodTaeUvsKqQ/edit?usp=sharing). Instead of using the existing `HasHttpManagerM`, this PR revamps it into the `ProvidesNetwork` service.
The idea is, again, that we should make all "external" dependencies of the engine, all things that the core of the engine doesn't care about, a "service". This allows us to define clear APIs for features, to choose different implementations based on which version of the engine we're running, and to harmonize our many scattered monadic constraints. This is why this service is called "Network": we can refine it, moving forward, to be the constraint that defines how all network communication is to operate, instead of relying on disparate class constraints or hardcoded decisions. A comment in the code clarifies this intent.
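Sketched out, the service amounts to little more than the following (a hedged simplification; the real class may carry more):
```haskell
import Network.HTTP.Client (Manager)

-- Hedged sketch: the base monad provides the manager, so nothing else needs
-- to pass it as an argument or store it in a context structure.
class Monad m => ProvidesNetwork m where
  askHTTPManager :: m Manager
```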
### Side-effects? In my Haskell?
This PR also unavoidably touches some other aspects of the codebase. One such example: it introduces `Hasura.App.AppContext`, named after `HasuraPro.Context.AppContext`: a name for the reader structure at the base level. It also transforms `Handler` from a type alias to a newtype, as `Handler` is where we actually enforce HTTP limits; but without `Handler` being a distinct type, any code path could simply do a `runExceptT $ runReader` and forget to enforce them.
(As a rule of thumb, I am starting to consider any straggling `runReaderT` or `runExceptT` as a code smell: we should not stack / unstack monads haphazardly, and every layer should be an opaque `newtype` with a corresponding run function.)
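As an illustration of that rule of thumb, a hypothetical opaque layer looks like this (sketch only; the real `Handler` differs):
```haskell
import Control.Monad.Trans.Except (ExceptT, runExceptT)
import Control.Monad.Trans.Reader (ReaderT, runReaderT)

data HandlerCtx -- placeholder
data QErr       -- placeholder

-- Opaque layer: by not exporting the constructor, enforcing HTTP limits
-- cannot be bypassed with an ad-hoc 'runExceptT $ runReader'.
newtype Handler m a = Handler (ReaderT HandlerCtx (ExceptT QErr m) a)

runHandler :: HandlerCtx -> Handler m a -> m (Either QErr a)
runHandler ctx (Handler m) = runExceptT (runReaderT m ctx)
```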
## Further work
In several places, I have left TODOs where I encountered things that suggest further unrelated cleanups. I'll write down the follow-up steps, either in the aforementioned document or on Slack. But, in short, at a glance, in approximate order, we could:
- delete `ExecutionCtx` as it is only a subset of `ServerCtx`, and remove one more `runReaderT` call
- delete `ServerConfigCtx` as it is only a subset of `ServerCtx`, and remove it from `RunCtx`
- remove `ServerCtx` from `HandlerCtx`, and make it part of `AppContext`, or even make it the `AppContext` altogether (since, at least for the OSS version, `AppContext` is again only a subset)
- remove `CacheBuildParams` and `CacheBuild` altogether, as they're just a distinct stack that is a `ReaderT` on top of `IO` that contains, you guessed it, the same thing as `ServerCtx`
- move `RunT` out of `RQL.Types` and rename it, since after the previous cleanups **it only contains `UserInfo`**; it could be bundled with the authentication service, made a small implementation detail in `Hasura.Server.Auth`
- rename `PGMetadaStorageT` to something a bit more accurate, such as `App`, and enforce its IO base
This would significantly simplify our complex stack. From there, or in parallel, we can start turning existing dependencies into services. For the purpose of supporting the read replicas entitlement, we could move `MonadResolveSource` to a `SourceResolver` service, as attempted in #7653, and transform `UserAuthenticationM` into an `Authentication` service.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7736
GitOrigin-RevId: 68cce710eb9e7d752bda1ba0c49541d24df8209f
### Description
Each backend executes queries against the database in a slightly different stack: Postgres uses its own `TxET`, MSSQL uses a variant of it, BigQuery is simply in `ExceptT QErr IO`... To accommodate those variations, we had originally introduced an `ExecutionMonad b` type family in `BackendExecute`, allowing each backend to describe its own stack. It was then up to that backend's `BackendTransport` instance to implement running said stack, and converting the result back into our main app monad.
However, this was not without complications. `TraceT` was one of them: as it usually needs to be at the top of the stack, converting from one stack to the other implied the use of `interpTraceT`, which is quite monstrous. Furthermore, as part of the Entitlement Services work, we're trying to move to a "Services" architecture in which the entire engine runs in one base monad that delegates features and dependencies to monad constraints; as a result, we'd like to minimize the number of different monad stacks we have to maintain and translate to and from in the codebase.
To improve things, this PR changes `ExecutionMonad b` from an _absolute_ stack to a _relative_ one: i.e.: what needs to be stacked on top of our base monad for the execution. In `Transport`, we then only need to pop the top of the stack, and voila. This greatly simplifies the implementation of the backends, as there's no longer any need to do any stack transformation: MySQL's implementation becomes a `runIdentityT`! This also removes most mentions of `TraceT` from the execution code since it's no longer required: we can rely on the base monad's existing `MonadTrace` constraint.
To continue encapsulating monadic actions in `DBStepInfo` and avoid threading a bunch of `forall`s all over the place, this PR introduces a small local helper: `OnBaseMonad`. The only downside of all this is that it requires adding a `MonadBaseControl IO m` constraint all over the place: previously, we would run directly on `IO` and lift, and would therefore not need to bring that constraint all the way.
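For reference, a hedged sketch of the helper (the real constraint set on the base monad is richer than plain `Monad`):
```haskell
{-# LANGUAGE RankNTypes #-}

-- The action stored in 'DBStepInfo' stays polymorphic in the base monad;
-- wrapping it in a newtype avoids threading 'forall's through signatures.
newtype OnBaseMonad t a = OnBaseMonad
  { runOnBaseMonad :: forall m. Monad m => t m a }
```
Running a step then just picks the concrete base monad and pops the backend's relative transformer (e.g. `runIdentityT` for MySQL).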
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7789
GitOrigin-RevId: e9b2e431c5c47fa9851abf87545c0415ff6d1a12
Basic MongoDB agent. This is intended as a starting point for playing with nested documents in a MongoDB backend. Currently supports basic queries with projections, where expressions, limit, and offset. No support for joins, aggregates, or mutations.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7840
GitOrigin-RevId: 3f03b8416c95acf2b68da1db56cbe36a513a4bde
Hooks up event trigger codecs from #7237. This required fixing a problem where some backend types implemented `defaultTriggerOnReplication` with `error`, which caused the server to crash when evaluating those for default values in codecs. The changes here add a type family to `Backend` called `XEventTriggers` that signals backend support for event triggers, and change the type of `defaultTriggerOnReplication` from `TriggerOnReplication` to `Maybe (XEventTriggers b, TriggerOnReplication)` so that it can only be implemented with a `Just` value if `XEventTriggers b` is inhabited. This emulates some existing type families in `Backend`. (Thanks to @daniel-chambers for this suggestion!)
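A hedged sketch of the pattern (names simplified; the real `Backend` class and `TriggerOnReplication` differ in detail):
```haskell
{-# LANGUAGE TypeFamilies #-}

import Data.Kind (Type)

data TriggerOnReplication = TriggerEnabled | TriggerDisabled -- sketch only

class Backend (b :: Type) where
  -- 'Void' for backends without event trigger support, so the 'Just' case of
  -- 'defaultTriggerOnReplication' is unconstructible for them.
  type XEventTriggers b :: Type
  defaultTriggerOnReplication :: Maybe (XEventTriggers b, TriggerOnReplication)
```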
I used the implementation of `defaultTriggerOnReplication` as a signal for event triggers support to prune the Metadata API so that event trigger fields will not appear in the OpenAPI spec for backend types that do not support event triggers. The codec version of the API will also not emit or accept those fields for those backend types. I think I could use `Typeable` to test whether `XEventTriggers` is `Void` instead of testing whether `defaultTriggerOnReplication` is `Nothing`. But the codec implementation will crash anyway if `defaultTriggerOnReplication` is `Nothing`.
I checked to make sure that graphql-engine-pro still compiles.
Ticket: https://hasurahq.atlassian.net/browse/GDC-521
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7295
GitOrigin-RevId: 2b2dd44291513266107ca25cf330319bf53a8b66
## Description
There is a bug in the metadata defaults code; see [the original PR](https://github.com/hasura/graphql-engine-mono/pull/6286).
Steps to reproduce this issue:
* Start a new HGE project
* Start HGE with a defaults argument: `HASURA_GRAPHQL_LOG_LEVEL=debug cabal run exe:graphql-engine -- serve --enable-console --console-assets-dir=./console/static/dist --metadata-defaults='{"backend_configs": {"dataconnector": {"mongo": {"display_name": "BONGOBB", "uri": "http://localhost:8123"}}}}'`
* Add a source (doesn't need to be related to the defaults)
* Export metadata
* See that the defaults are present in the exported metadata
## Related Issues
* Github Issue: https://github.com/hasura/graphql-engine/issues/9237
* Jira: https://hasurahq.atlassian.net/browse/GDC-647
* Original PR: https://github.com/hasura/graphql-engine-mono/pull/6286
## Solution
* The test for whether defaults should be included for metadata API operations has been extended to check for updates
* Metadata inconsistencies have been hidden for `/capabilities` calls on startup
## TODO
* [x] Fix bug
* [x] Write tests
* [x] OSS Metadata Migration to correct persisted data - `server/src-rsr/migrations/47_to_48.sql`
* [x] Cloud Metadata Migration - `pro/server/res/cloud/migrations/6_to_7.sql`
* [x] Bump Catalog Version - `server/src-rsr/catalog_version.txt`
* [x] Update Catalog Versions - `server/src-rsr/catalog_versions.txt` (This will be done by Infra when creating a release)
* [x] Log connection error as it occurs *(Already being logged. Requires `--enabled-log-types startup,webhook-log,websocket-log,http-log,data-connector-log`)*
* [x] Don't mark metadata inconsistencies for this call.
## Questions
* [ ] Does the `pro/server/res/cloud/migrations/6_to_7.sql` cover the cloud scenarios?
* [ ] Should we have `SET search_path` in migrations?
* [x] What should be in `server/src-rsr/catalog_versions.txt`?
## Testing
To test the solution locally run:
> docker compose up -d
and
> cabal run -- exe:api-tests --skip BigQuery --skip SQLServer --skip '/Test.API.Explain/Postgres/'
## Solution
In `runMetadataQuery` in `server/src-lib/Hasura/Server/API/Metadata.hs`:
```diff
- if (exportsMetadata _rqlMetadata)
+ if (exportsMetadata _rqlMetadata || queryModifiesMetadata _rqlMetadata)
```
This ensures that defaults aren't present in operations that serialise metadata.
Note: You might think that `X_add_source` would need the defaults to be present to add a source that references the defaults, but since the resolution occurs in the schema-cache building phase, the defaults can be excluded for the metadata modifications required for `X_add_source`.
In addition to the code-change, a metadata migration has been introduced in order to clean up serialised defaults.
The following scenarios need to be considered for both OSS and Cloud:
* The user has not had defaults serialised
* The user has had the defaults serialised and no other backends configured
* The user has had the defaults serialised and has also configured other backends
We want to remove as much of the metadata as possible without any user-specified data and this should be reflected in migration `server/src-rsr/migrations/47_to_48.sql`.
## Server checklist
### Catalog upgrade
Does this PR change Hasura Catalog version?
- ✅ Yes
### Metadata
Does this PR add a new Metadata feature?
- ✅ No
### GraphQL
- ✅ No new GraphQL schema is generated
### Breaking changes
- ✅ No Breaking changes
## Changelog
__Component__ : server
__Type__: bugfix
__Product__: community-edition
### Short Changelog
Fixes a metadata defaults serialization bug and introduces a metadata migration to correct data that has been persisted due to the bug.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7034
GitOrigin-RevId: ad7d4f748397a1a607f2c0c886bf0fbbc3f873f2
## Description ✍️
This PR introduces a new feature to enable/disable event triggers during logical replication of table data for PostgreSQL and MS-SQL data sources. We introduce a new field `trigger_on_replication` in the `*_create_event_trigger` metadata API. By default, event triggers will not fire during logical data replication.
## Changelog ✍️
__Component__ : server
__Type__: feature
__Product__: community-edition
### Short Changelog
Add option to enable/disable event triggers on logically replicated tables
### Related Issues ✍
https://github.com/hasura/graphql-engine/issues/8814
https://hasurahq.atlassian.net/browse/GS-252
### Solution and Design
- By default, triggers do **not** fire when the session mode is `replica` in Postgres, so if `triggerOnReplication` is set to `true` for an event trigger, we run the query `ALTER TABLE #{tableTxt} ENABLE ALWAYS TRIGGER #{triggerNameTxt};` so that the trigger always fires, irrespective of the `session_replication_role` (see the sketch after this list)
- By default, triggers do fire in case of replication in MS-SQL, so if `triggerOnReplication` is set to `false` for an event trigger, we add a `NOT FOR REPLICATION` clause to the SQL when the trigger is created/altered; this sets `is_not_for_replication` for the trigger to `true`, so it does not fire during logical replication.
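A hedged sketch of the Postgres side of this logic (hypothetical helper; the actual SQL generation lives in the Postgres DDL code):
```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)

-- Hypothetical helper: choose the trigger firing mode based on the
-- 'trigger_on_replication' setting.
mkAlterTriggerSql :: Text -> Text -> Bool -> Text
mkAlterTriggerSql table trigger triggerOnReplication
  | triggerOnReplication =
      -- fire regardless of session_replication_role
      "ALTER TABLE " <> table <> " ENABLE ALWAYS TRIGGER " <> trigger <> ";"
  | otherwise =
      -- default mode: fires for "origin" and "local", not for "replica"
      "ALTER TABLE " <> table <> " ENABLE TRIGGER " <> trigger <> ";"
```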
### Steps to test and verify ✍
- Run hspec integration tests for HGE
## Server checklist ✍
### Metadata ✍
Does this PR add a new Metadata feature?
- ✅ Yes
- Does `export_metadata`/`replace_metadata` support the new metadata added?
- ✅
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6953
Co-authored-by: Puru Gupta <32328846+purugupta99@users.noreply.github.com>
Co-authored-by: Sean Park-Ross <94021366+seanparkross@users.noreply.github.com>
GitOrigin-RevId: 92731328a2bbdcad2302c829f26f9acb33c36135
What is the `Cacheable` type class about?
```haskell
class Eq a => Cacheable a where
unchanged :: Accesses -> a -> a -> Bool
default unchanged :: (Generic a, GCacheable (Rep a)) => Accesses -> a -> a -> Bool
unchanged accesses a b = gunchanged (from a) (from b) accesses
```
Its only method is an alternative to `(==)`. The added value of `unchanged` (and the additional `Accesses` argument) arises _only_ for one type, namely `Dependency`. Indeed, the `Cacheable (Dependency a)` instance is non-trivial, whereas every other `Cacheable` instance is completely boilerplate (and indeed either generated from `Generic`, or simply `unchanged _ = (==)`). The `Cacheable (Dependency a)` instance is the only one where the `Accesses` argument is not just passed onwards.
The only call site of the `unchanged` method is in the `ArrowCache (Rule m)` instance. That is to say that the `Cacheable` type class is used to decide when we can re-use parts of the schema cache between Metadata operations.
So what is the `Cacheable (Dependency a)` instance about? Normally, the output of a `Rule m a b` is re-used when the new input (of type `a`) is equal to the old one. But sometimes, that's too coarse: it might be that a certain `Rule m a b` only depends on a small part of its input of type `a`. A `Dependency` allows us to spell out what parts of `a` are being depended on, and these parts are recorded as values of types `Access a` in the state `Accesses`.
If the input `a` changes, but not in a way that touches the recorded `Accesses`, then the output `b` of that rule can be re-used without recomputing.
So now you understand _why_ we're passing `Accesses` to the `unchanged` method: `unchanged` is an equality check in disguise that just needs some additional context.
But we don't need to pass `Accesses` as a function argument. We can use the `reflection` package to pass it as type-level context. So the core of this PR is that we change the instance declaration from
```haskell
instance (Cacheable a) => Cacheable (Dependency a) where
```
to
```haskell
instance (Given Accesses, Eq a) => Eq (Dependency a) where
```
and use `(==)` instead of `unchanged`.
If you haven't seen `reflection` before: it's like a `MonadReader`, but it doesn't require a `Monad`.
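For reference, the relevant part of its interface (from `Data.Reflection`) is tiny:
```haskell
-- Recover a value from type-level context, and locally provide that context.
class Given a where
  given :: a

give :: forall a r. a -> (Given a => r) -> r
```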
In order to pass the current `Accesses` value, instead of simply passing the `Accesses` as a function argument, we need to instantiate the `Given Accesses` context. We use the `give` function from the `reflection` package for that.
```haskell
give :: forall r. Accesses -> (Given Accesses => r) -> r
unchanged :: (Given Accesses => Eq a) => Accesses -> a -> a -> Bool
unchanged accesses a b = give accesses (a == b)
```
With these three components in place, we can delete the `Cacheable` type class entirely.
The remainder of this PR is just to remove the `Cacheable` type class and its instances.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6877
GitOrigin-RevId: 7125f5e11d856e7672ab810a23d5bf5ad176e77f
### Description
This monster of a PR took way too long. As the title suggests, it reduces the schema context carried in the readers to the strict minimum. In practice, that means that to build a source, we only require:
- the global `SchemaContext`
- the global `SchemaOptions` (soon to be renamed `SchemaSourceOptions`)
- that source's `SourceInfo`
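In other words, the per-source reader context boils down to something like this (a hypothetical sketch; real names and fields differ):
```haskell
data SchemaContext -- placeholder: global, source-independent information
data SchemaOptions -- placeholder: global schema-generation options
data SourceInfo b  -- placeholder: everything known about one source

-- Hedged sketch: all that is required to build one source's schema.
data SourceSchemaContext b = SourceSchemaContext
  { sscContext :: SchemaContext
  , sscOptions :: SchemaOptions
  , sscSource  :: SourceInfo b
  }
```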
Furthermore, _we no longer carry "default" customization options throughout the schema_. All customization information is extracted from the `SourceInfo`, when required. This prevents an entire category of bugs we had previously encountered, such as parts of the code using uninitialized or stale customization info.
In turn, this meant that we could remove the explicit threading of the `SourceInfo` throughout the schema, since it is now always available through the reader context.
Finally, this meant making a few adjustments to relay and actions as well, such as the introduction of a new separate "context" for actions, and a change to how we create some of the action-specific postgres scalar parsers.
I'll highlight with review comments the areas of interest.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6709
GitOrigin-RevId: ea80fddcb24e2513779dd04b0b700a55f0028dd1