Commit Graph

32 Commits

Author SHA1 Message Date
Naveen Naidu
a893b1b906 server, multitenant: model usage logs
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/9744
Co-authored-by: pranshi06 <85474619+pranshi06@users.noreply.github.com>
Co-authored-by: paritosh-08 <85472423+paritosh-08@users.noreply.github.com>
GitOrigin-RevId: 270755e88fd17f8fd949ac06d31e408202078544
2023-10-03 05:17:39 +00:00
Philip Lykke Carlsen
a3655b0f76 refac: Add sampled feature flags to dynamic schema config
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/10199
GitOrigin-RevId: c9cc446daf4ac8d754faa6875199a4f625642646
2023-08-25 09:56:59 +00:00
paritosh-08
318634c7a1 server/schema-registry: fix metadata resource version -1 for catalog sync
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/9864
GitOrigin-RevId: 55340364d138ccb311e3f0e927ef2fd1486a125b
2023-07-17 08:12:31 +00:00
Tom Harding
f8002894c9 Remove some TemplateHaskell from Hasura.RemoteSchema
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/9689
GitOrigin-RevId: 6ac85d35faa211713c58e695efc996b9f930baf1
2023-06-28 12:28:56 +00:00
Puru Gupta
f2fe9cfe3b server: add support for header resolution from env vars
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/9509
GitOrigin-RevId: 818f747422c5444fcb55419729ad58d74b890d52
2023-06-23 08:39:28 +00:00
Anon Ray
c23320054d server: emit logs when stored introspection is used
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/9472
GitOrigin-RevId: 9be2eaa9ebc5f39a7bb44a1fc2340acbd4074bcd
2023-06-12 08:53:05 +00:00
Rakesh Emmadi
427ca18e85 server: collect remote schema and database introspections while building schema cache
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/9297
GitOrigin-RevId: 143f50be2eba382d129669e26ef3a7eb24c921ca
2023-06-01 16:34:31 +00:00
Tom Harding
e0c0043e76 Upgrade Ormolu to 0.7.0.0
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/9284
GitOrigin-RevId: 2f2cf2ad01900a54e4bdb970205ac0ef313c7e00
2023-05-24 13:53:53 +00:00
Rakesh Emmadi
79d2e20e69 server: use stored introspection only when upstream remote schema unreachable
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/9074
GitOrigin-RevId: e0dcdbcab0691e680fa7c8e0df4afe38aa2a7504
2023-05-16 17:03:10 +00:00
Tom Harding
b6799f0882 Import InsOrdHashMap, not OMap, OM, Map, HM, ...
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8946
GitOrigin-RevId: 434e7c335bc69119020dd35761c7d4539bc51ff8
2023-04-27 07:43:22 +00:00
Tom Harding
7e334e08a4 Import HashMap, not HM, Map, M...
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8947
GitOrigin-RevId: 18e52c928e1df535579e2077b4af6c2ce92bdcef
2023-04-26 15:43:44 +00:00
Daniel Harvey
8cf134dad1 Split Hasura.RQL.DDL.Headers to put types in Hasura.RQL.Types.Headers
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8873
GitOrigin-RevId: 566cb4271f0eb27e6688c2e0fbc26711bdf8baa9
2023-04-24 16:45:55 +00:00
Tom Harding
f8ae944dbc Move Hasura.GraphQL.Schema.Options to Hasura.RQL.Types.Options
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8877
GitOrigin-RevId: 8be82f60a57cd9582d6980a6dea2f34c7b0c13c1
2023-04-24 15:18:56 +00:00
Tom Harding
1698f9dd91 Extract RoleName from Hasura.Session, move it into Hasura.RQL.Types.Roles
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8856
Co-authored-by: Daniel Harvey <4729125+danieljharvey@users.noreply.github.com>
GitOrigin-RevId: 38ad67de9b3d765c4eb50943dd52b8fc32317540
2023-04-24 08:51:58 +00:00
Auke Booij
6f78d25932 chore(server): simplify bindErrorA and use more broadly
The simplification will allow us to avoid a few `MonadError QErr m` constraints in the future - this effect can be created locally instead of reusing a global one.
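A minimal sketch of the idea, using a hypothetical validation function rather than the engine's code: the error effect is introduced and discharged locally with `ExceptT`, so callers don't need to provide a `MonadError` constraint.

```haskell
import Control.Monad.Except (runExceptT, throwError)

-- Hypothetical function, illustration only: the error capability exists
-- only inside this definition and is discharged before returning.
validatePositive :: (Monad m) => Int -> m (Either String Int)
validatePositive n = runExceptT $
  if n < 0
    then throwError "negative input"
    else pure n
```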

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8729
GitOrigin-RevId: 851e28b1f5bfe4c47da43fa324714a941ef25c57
2023-04-12 15:51:37 +00:00
Antoine Leblanc
306162f477 Remove ServerConfigCtx.
### Description

This PR removes `ServerConfigCtx` and `HasServerConfigCtx`. Instead, it favours different approaches:
- when the code was only using one field, it passes that field explicitly (usually `SQLGenCtx` or `CheckFeatureFlag`)
- when the code was using several fields, but in only one function, it inlines them
- for the cache build, it introduces `CacheStaticConfig` and `CacheDynamicConfig`, which are subsets of `AppEnv` and `AppContext` respectively

The main goal of this is to help with the modularization of the engine: as `ServerConfigCtx` had fields whose types were imported from several unrelated parts of the engine, using it tied together parts of the engine that should not be aware of one another (such as tying together `Hasura.LogicalModel` and `Hasura.GraphQL.Schema`).

The bulk of this PR is a change to the cache build, as a follow-up to #8509: instead of giving the entire `ServerConfigCtx` as an incremental rule argument, we only give the new `CacheDynamicConfig` struct, which has fewer fields. The other required fields, which were coming from the `AppEnv`, are now given via the `HasCacheStaticConfig` constraint, which is a "subset" of `HasAppEnv`.

(Some further work could include moving `StringifyNumbers` out of `GraphQL.Schema.Options`, given how it is used all across the codebase, including in `RQL.DML`.)
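A rough sketch of the split described above; the field names here are hypothetical placeholders, not the engine's actual fields:

```haskell
-- Values fixed at startup (drawn from AppEnv) vs. values that can change
-- while the server runs (drawn from AppContext). Field names are made up
-- for illustration.
data CacheStaticConfig = CacheStaticConfig
  { cscCheckFeatureFlag :: String -> Bool
  , cscMaxCacheSize     :: Int
  }

data CacheDynamicConfig = CacheDynamicConfig
  { cdcStringifyNumbers     :: Bool
  , cdcExperimentalFeatures :: [String]
  }

-- The cache build asks for the static part through a small capability
-- class instead of receiving the whole ServerConfigCtx.
class (Monad m) => HasCacheStaticConfig m where
  askCacheStaticConfig :: m CacheStaticConfig
```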

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8513
GitOrigin-RevId: 818cbcd71494e3cd946b06adbb02ca328a8a298e
2023-04-04 16:01:42 +00:00
Jesse Hallett
bd9f93eaef server: codecs for backend configs
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8269
GitOrigin-RevId: 34330f383ca82fb159842a171a763c178b462788
2023-03-30 15:53:55 +00:00
Auke Booij
29f0660dee chore(server): remove some unused function arguments
These didn't trigger GHC warnings because their names start with an underscore.
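For illustration (not the engine's code): with `-Wall`, GHC warns about the unused `unusedArg` below but stays silent about `_ignored`, because a leading underscore marks a binding as intentionally unused.

```haskell
-- Illustration only: -Wunused-matches flags `unusedArg`, not `_ignored`.
scale :: Int -> Int -> Int -> Int
scale factor _ignored unusedArg = factor * 10
```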

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7954
GitOrigin-RevId: 6898b165f073e70aad06e1a2aa5f703ac385f9ed
2023-03-17 15:51:33 +00:00
Antoine Leblanc
cf531b05cb Rewrite Tracing to allow for only one TraceT in the entire stack.
This PR is on top of #7789.

### Description

This PR entirely rewrites the API of the Tracing library, to make `interpTraceT` a thing of the past. Before this change, we ran traces by sticking a `TraceT` on top of whatever we were doing. This had several major drawbacks:
- we were carrying a bunch of `TraceT` across the codebase, and the entire codebase had to know about it
- we needed to carry a second class constraint around (`HasReporterM`) to be able to run all of those traces
- we kept having to do stack rewriting with `interpTraceT`, which went from inconvenient to horrible
- we had to declare several behavioral instances on `TraceT m`

This PR rewrites all of `Tracing` using a more conventional model: there is ONE `TraceT` at the bottom of the stack, and there is an associated class constraint `MonadTrace`: any part of the code that happens to satisfy `MonadTrace` is able to create new traces. We NEVER have to do stack rewriting, `interpTraceT` is gone, and `TraceT` and `Reporter` become implementation details that 99% of the code is blissfully unaware of: code that needs to do tracing only needs to declare that the monad in which it operates implements `MonadTrace`.

In doing so, this PR revealed **several bugs in the codebase**: places where we were expecting to trace something, but due to the default instance of `HasReporterM IO` we would actually not do anything. This PR also splits the code of `Tracing` into more bite-sized modules, with the goal of potentially moving to `server/lib` down the line.
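To make the target shape concrete, here is a heavily simplified sketch of that model; the types and signatures are illustrative only, not the engine's actual API.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Reader (ReaderT, local)

-- Hypothetical, simplified context: just the stack of open span names.
newtype TraceContext = TraceContext [String]

-- Any monad that can open spans; most of the codebase only sees this.
class (Monad m) => MonadTrace m where
  newSpan :: String -> m a -> m a

-- The single concrete implementation, sitting at the bottom of the stack.
newtype TraceT m a = TraceT (ReaderT TraceContext m a)
  deriving (Functor, Applicative, Monad)

instance (Monad m) => MonadTrace (TraceT m) where
  newSpan name (TraceT action) =
    TraceT (local (\(TraceContext spans) -> TraceContext (name : spans)) action)
```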

### Remaining work

This PR is a draft; what's left to do is:
- [x] make Pro compile; I haven't updated `HasuraPro/Main` yet
- [x] document Tracing by writing a note that explains how to use the library, and the meaning of "reporter", "trace" and "span", as well as the pitfalls
- [x] discuss some of the trade-offs in the implementation, which is why I'm opening this PR already despite it not fully building yet
- [x] it depends on #7789 being merged first

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7791
GitOrigin-RevId: cadd32d039134c93ddbf364599a2f4dd988adea8
2023-03-13 17:38:39 +00:00
Antoine Leblanc
6e574f1bbe harmonize network manager handling
## Description

### I want to speak to the `Manager`

Oh boy. This PR is both fairly straightforward and overreaching, so let's break it down.

For most network access, we need an [`HTTP.Manager`](https://hackage.haskell.org/package/http-client-0.1.0.0/docs/Network-HTTP-Client-Manager.html). It is created only once, at the top level, when starting the engine, and is then threaded through the application to wherever we need to make a network call. As of main, the way we do this is not standardized: most of the GraphQL execution code passes it "manually" as a function argument throughout the code. We also have a custom monad constraint, `HasHttpManagerM`, that describes a monad's ability to provide a manager. And, finally, several parts of the code store the manager in some kind of argument structure, such as `RunT`'s `RunCtx`.

This PR's first goal is to harmonize all of this: we always create the manager at the root, and we already have it when we do our very first `runReaderT`. Wouldn't it make sense for the rest of the code to not manually pass it anywhere, to not store it anywhere, but to always rely on the current monad providing it? This is, in short, what this PR does: it implements a constraint on the base monads, so that they provide the manager, and removes most explicit passing from the code.

### First come, first served

One way this PR goes a tiny bit further than "just" doing the aforementioned harmonization is that it starts the process of implementing the "Services oriented architecture" roughly outlined in this [draft document](https://docs.google.com/document/d/1FAigqrST0juU1WcT4HIxJxe1iEBwTuBZodTaeUvsKqQ/edit?usp=sharing). Instead of using the existing `HasHttpManagerM`, this PR revamps it into the `ProvidesNetwork` service.

The idea is, again, that we should make all "external" dependencies of the engine, all things that the core of the engine doesn't care about, a "service". This allows us to define clear APIs for features, to choose different implementations based on which version of the engine we're running, and to harmonize our many scattered monadic constraints... Which is why this service is called "Network": we can refine it, moving forward, to be the constraint that defines how all network communication is to operate, instead of relying on disparate class constraints or hardcoded decisions. A comment in the code clarifies this intent.
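As an illustration, here is a minimal sketch of what such a service constraint can look like; the method name is assumed for the sketch, not necessarily the engine's exact definition:

```haskell
import Network.HTTP.Client (Manager)

-- The Manager is created once at the root of the program; anything below
-- asks the ambient monad for it instead of threading it as an argument.
-- (Hypothetical method name, illustration only.)
class (Monad m) => ProvidesNetwork m where
  askHTTPManager :: m Manager
```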

### Side-effects? In my Haskell?

This PR also unavoidably touches some other aspects of the codebase. One such example: it introduces `Hasura.App.AppContext`, named after `HasuraPro.Context.AppContext`: a name for the reader structure at the base level. It also transforms `Handler` from a type alias to a newtype, as `Handler` is where we actually enforce HTTP limits; but without `Handler` being a distinct type, any code path could simply do a `runExceptT $ runReader` and forget to enforce them.

(As a rule of thumb, I am starting to consider any straggling `runReaderT` or `runExceptT` as a code smell: we should not stack / unstack monads haphazardly, and every layer should be an opaque `newtype` with a corresponding run function.)
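To illustrate that rule of thumb, here is a minimal sketch of an opaque newtype layer with a dedicated run function; the context and error types are hypothetical placeholders, not the engine's real ones.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Except (ExceptT, runExceptT)
import Control.Monad.Reader (ReaderT, runReaderT)

-- Hypothetical stand-ins for the real context and error types.
data HandlerCtx   = HandlerCtx
data HandlerError = HandlerError String

-- Because Handler is an opaque newtype, stray `runExceptT . runReaderT`
-- calls can no longer peel the layer off; everything must go through
-- runHandler, the one place where limits and other invariants live.
newtype Handler a = Handler (ReaderT HandlerCtx (ExceptT HandlerError IO) a)
  deriving (Functor, Applicative, Monad)

runHandler :: HandlerCtx -> Handler a -> IO (Either HandlerError a)
runHandler ctx (Handler m) = runExceptT (runReaderT m ctx)
```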

## Further work

In several places, I have left TODOs where I encountered things that suggest we should do further, unrelated cleanups. I'll write down the follow-up steps, either in the aforementioned document or on Slack. But, in short, at a glance, in approximate order, we could:

- delete `ExecutionCtx` as it is only a subset of `ServerCtx`, and remove one more `runReaderT` call
- delete `ServerConfigCtx` as it is only a subset of `ServerCtx`, and remove it from `RunCtx`
- remove `ServerCtx` from `HandlerCtx`, and make it part of `AppContext`, or even make it the `AppContext` altogether (since, at least for the OSS version, `AppContext` is again only a subset)
- remove `CacheBuildParams` and `CacheBuild` altogether, as they're just a distinct stack that is a `ReaderT` on top of `IO` that contains, you guessed it, the same thing as `ServerCtx`
- move `RunT` out of `RQL.Types` and rename it, since after the previous cleanups **it only contains `UserInfo`**; it could be bundled with the authentication service, made a small implementation detail in `Hasura.Server.Auth`
- rename `PGMetadaStorageT` to something a bit more accurate, such as `App`, and enforce its IO base

This would significantly simplify our complex stack. From there, or in parallel, we can start moving existing dependencies as Services. For the purpose of supporting the read replicas entitlement, we could move `MonadResolveSource` to a `SourceResolver` service, as attempted in #7653, and transform `UserAuthenticationM` into an `Authentication` service.

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7736
GitOrigin-RevId: 68cce710eb9e7d752bda1ba0c49541d24df8209f
2023-02-22 15:55:54 +00:00
awjchen
12fdac004f server: fix tracing bug where some errors prevent spans from being emitted
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7450
GitOrigin-RevId: 23f6c9cfea8e7ca64b39866d15d2e6187aaaa0d9
2023-01-25 03:38:21 +00:00
Auke Booij
83ea4a254d server: plumb StoredIntrospection while building the Schema Cache
We'd like to be able to build a Schema Cache from only serializable data. We already have Metadata. The data that's missing to build a Schema Cache is referred to as "stored introspection", and this includes:
- DB introspection
- User-defined enum values (i.e. contents of specific DB tables)
- Remote schema introspection

This PR introduces a new `StoredIntrospection` container that holds that data, and plumbs it through to the right parts of the schema cache building process, so that stored introspection can be used as a substitute for fresh introspection requests against live data sources.

The serialization of `StoredIntrospection` is intended to be straightforward: just take the serialized source introspection results, and put them in an appropriate JSON object. Though I don't think that this PR achieves that entirely.

In order for `StoredIntrospection` to be deserializable (through `aeson` instances), while keeping the required code changes low, this piggy-backs off of the `ResolvedSource` data type. `ResolvedSource` is _almost_ exactly what we want, and _almost_ deserializable, so this PR brings it across the finish line by moving a few things out of that type, and adding a `FromJSON (RawFunctionInfo b)` context to the `Backend` type class.
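As a rough illustration of the container's shape (field names and payload types here are guesses for the sketch, not the engine's actual definitions):

```haskell
{-# LANGUAGE DeriveGeneric #-}

import Data.Aeson (FromJSON, ToJSON, Value)
import Data.HashMap.Strict (HashMap)
import Data.Text (Text)
import GHC.Generics (Generic)

-- Hypothetical stand-ins for the engine's name types.
type SourceName = Text
type RemoteSchemaName = Text

-- A serialisable bundle of everything the cache build would otherwise
-- fetch from live sources; payloads are kept as raw JSON in this sketch.
data StoredIntrospection = StoredIntrospection
  { siBackendIntrospection :: HashMap SourceName Value
  , siRemotes              :: HashMap RemoteSchemaName Value
  }
  deriving (Generic)

instance FromJSON StoredIntrospection

instance ToJSON StoredIntrospection
```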

[PLAT-270]: https://hasurahq.atlassian.net/browse/PLAT-270?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
[PLAT-276]: https://hasurahq.atlassian.net/browse/PLAT-276?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7053
GitOrigin-RevId: 5001b4ea086195cb5e65886747eac2a0a657b64c
2023-01-20 14:52:36 +00:00
awjchen
ee78e32c6e server: implement trace sampling
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7300
GitOrigin-RevId: d96d7fa5aaf0c1e71d1c4c0fa8f0162abce39e18
2022-12-22 19:48:51 +00:00
Jesse Hallett
c265e303f6 server: codecs for remote schemas metadata
These codecs should fully cover the `remote_schemas` property of the Metadata type.

Ticket: [GDC-522](https://hasurahq.atlassian.net/browse/GDC-522)

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6812
GitOrigin-RevId: 1b256f6829486295957c232b92ff184bd9a86469
2022-12-15 17:39:22 +00:00
Auke Booij
512340b864 Collect Metadata dependencies in a Sequence rather than a list
Dependencies seem to get concatenated very often, so let's use a data structure that supports efficient concatenation.
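For illustration (not the engine's code), a tiny example of why `Seq` suits this access pattern:

```haskell
import Data.Sequence (Seq, (><))
import qualified Data.Sequence as Seq

-- Illustration only: (><) on Seq runs in O(log(min(n, m))), whereas
-- repeatedly appending lists with (++) costs O(n) in the left argument
-- each time, which degrades badly when many small dependency collections
-- are concatenated together.
allDeps :: Seq Int
allDeps = Seq.fromList [1, 2, 3] >< Seq.fromList [4, 5, 6]
```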

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7050
GitOrigin-RevId: 6331963f99f17d1b908a6038318d8c4834cf4dd7
2022-11-30 18:13:31 +00:00
Auke Booij
cca0b6e81a Further schema cache cleanups
Mostly trying to avoid tricky `Arrows` syntax, and unnecessary use of the `Hasura.Incremental` framework.

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6997
GitOrigin-RevId: 9a2f5883e7e29af164e1581049ae003afec2cbe4
2022-11-29 01:02:09 +00:00
Auke Booij
cdac24c79f server: delete the Cacheable type class in favor of Eq
What is the `Cacheable` type class about?
```haskell
class Eq a => Cacheable a where
  unchanged :: Accesses -> a -> a -> Bool
  default unchanged :: (Generic a, GCacheable (Rep a)) => Accesses -> a -> a -> Bool
  unchanged accesses a b = gunchanged (from a) (from b) accesses
```
Its only method is an alternative to `(==)`. The added value of `unchanged` (and the additional `Accesses` argument) arises _only_ for one type, namely `Dependency`. Indeed, the `Cacheable (Dependency a)` instance is non-trivial, whereas every other `Cacheable` instance is completely boilerplate (and indeed either generated from `Generic`, or simply `unchanged _ = (==)`). The `Cacheable (Dependency a)` instance is the only one where the `Accesses` argument is not just passed onwards.

The only callsite of the `unchanged` method is in the `ArrowCache (Rule m)` instance. That is to say that the `Cacheable` type class is used to decide when we can re-use parts of the schema cache between Metadata operations.

So what is the `Cacheable (Dependency a)` instance about? Normally, the output of a `Rule m a b` is re-used when the new input (of type `a`) is equal to the old one. But sometimes, that's too coarse: it might be that a certain `Rule m a b` only depends on a small part of its input of type `a`. A `Dependency` allows us to spell out what parts of `a` are being depended on, and these parts are recorded as values of types `Access a` in the state `Accesses`.

If the input `a` changes, but not in a way that touches the recorded `Accesses`, then the output `b` of that rule can be re-used without recomputing.

So now you understand _why_ we're passing `Accesses` to the `unchanged` method: `unchanged` is an equality check in disguise that just needs some additional context.

But we don't need to pass `Accesses` as a function argument. We can use the `reflection` package to pass it as type-level context. So the core of this PR is that we change the instance declaration from
```haskell
instance (Cacheable a) => Cacheable (Dependency a) where
```
to
```haskell
instance (Given Accesses, Eq a) => Eq (Dependency a) where
```
and use `(==)` instead of `unchanged`.

If you haven't seen `reflection` before: it's like a `MonadReader`, but it doesn't require a `Monad`.
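A tiny, self-contained illustration of the `reflection` package, unrelated to the engine's types (it assumes only that the `reflection` package is available): `give` installs a value as type-level context, and `given` retrieves it without any explicit function argument.

```haskell
{-# LANGUAGE FlexibleContexts #-}

import Data.Reflection (Given, give, given)

-- Illustration only: the "ambient" String is provided via the Given
-- context rather than as a function argument.
greet :: (Given String) => String -> String
greet name = "[" ++ given ++ "] hello, " ++ name

main :: IO ()
main = putStrLn (give "INFO" (greet "world")) -- prints "[INFO] hello, world"
```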

In order to pass the current `Accesses` value, instead of simply passing the `Accesses` as a function argument, we need to instantiate the `Given Accesses` context. We use the `give` method from the `reflection` package for that.
```haskell
give :: forall r. Accesses -> (Given Accesses => r) -> r

unchanged :: (Given Accesses => Eq a) => Accesses -> a -> a -> Bool
unchanged accesses a b = give accesses (a == b)
```
With these three components in place, we can delete the `Cacheable` type class entirely.

The remainder of this PR is just to remove the `Cacheable` type class and its instances.

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6877
GitOrigin-RevId: 7125f5e11d856e7672ab810a23d5bf5ad176e77f
2022-11-21 16:35:37 +00:00
Auke Booij
5b93014ee8 Make Schema Cache building code slightly more readable
- Avoid a few banana brackets `(| ... |)`, often by just using local `let` bindings
- Use proper `Arrows` syntax rather than helpers like `>->`
- Use monadic `do` syntax instead of `Arrows` syntax where possible
- Avoid `traverseA @Maybe`, in favor of a `case`

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6751
GitOrigin-RevId: c07b22a1a259db6d135486ec71a716705e280717
2022-11-15 20:14:22 +00:00
Auke Booij
6ac67a5566 Allow collecting metadata dependencies and inconsistencies separately
`CollectedInfo` was just an awkward sum type. By using an explicit `Either` instead, we can guarantee at the type level that certain methods only write inconsistencies, or only write dependencies. This is useful, because if we can guarantee that no dependencies are written, then we don't need to run `resolveDependencies` on that part of the Metadata. In other words, we can keep it out of `BuildOutputs`, which greatly benefits performance - see e.g. hasura/graphql-engine-mono#6613.
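A small sketch of the idea with hypothetical stand-in types: a collector whose result is polymorphic in the dependency side can, by its type alone, promise that it only writes inconsistencies.

```haskell
import Data.Sequence (Seq)
import qualified Data.Sequence as Seq

-- Hypothetical, simplified stand-in for the real metadata types.
newtype Inconsistency = Inconsistency String deriving (Show)

-- Because the result is polymorphic in `dep`, the signature itself
-- guarantees this check only ever reports inconsistencies (Left), never
-- dependencies (Right), so no dependency resolution is needed on it.
checkNames :: [String] -> Seq (Either Inconsistency dep)
checkNames = foldMap check
  where
    check name
      | null name = Seq.singleton (Left (Inconsistency "empty name"))
      | otherwise = mempty
```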

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6765
GitOrigin-RevId: 9ce099d2eee2278dbb6e5bea72063e4b6e064b35
2022-11-15 17:00:11 +00:00
Jesse Hallett
de6e5d71b0 server: codecs for table, function, and remote relationship types
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6608
GitOrigin-RevId: 9c91edec06e49b8d44960e322459ba7a05f26b5a
2022-11-04 21:44:41 +00:00
Samir Talwar
342391f39d Upgrade Ormolu to v0.5.
This upgrades the version of Ormolu required by the HGE repository to v0.5.0.1, and reformats all code accordingly.

Ormolu v0.5 reformats code that uses infix operators. This is mostly useful, adding newlines and indentation to make it clear which operators are applied first, but in some cases, it's unpleasant. To make this easier on the eyes, I had to do the following:

* Add a few fixity declarations (search for `infix`)
* Add parentheses to make precedence clear, allowing Ormolu to keep everything on one line
* Rename `relevantEq` to `(==~)` in #6651 and set it to `infix 4`
* Add a few _.ormolu_ files (thanks to @hallettj for helping me get started), mostly for Autodocodec operators that don't have explicit fixity declarations

In general, I think these changes are quite reasonable. They mostly affect indentation.
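For a concrete, hypothetical example of the first bullet: a fixity declaration tells Ormolu how tightly an operator binds, which in turn decides how chains of it get laid out.

```haskell
module Pretty where

-- Hypothetical operator, illustration only: declaring its fixity gives
-- the formatter the precedence information it otherwise has to guess.
infixr 6 <+>

(<+>) :: String -> String -> String
a <+> b = a ++ " " ++ b

banner :: String
banner = "hasura" <+> "graphql" <+> "engine"
```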

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6675
GitOrigin-RevId: cd47d87f1d089fb0bc9dcbbe7798dbceedcd7d83
2022-11-02 20:55:13 +00:00
Vamshi Surabhi
a01d1188f2 scaffolding for remote-schemas module
The main aims of the PR are:

1. To set up a module structure for 'remote-schemas' package.
2. Move parts of the remote schema codebase into the new module structure to validate it.

## Notes to the reviewer

Why a PR with large-ish diff?

1. We've been making progress on the MM project but we don't yet know how long it is going to take us to get to the first milestone. To understand this better, we need to figure out the unknowns as soon as possible. Hence I've taken a stab at the first two items in the [end-state](https://gist.github.com/0x777/ca2bdc4284d21c3eec153b51dea255c9) document to figure out the unknowns. Unsurprisingly, there are a bunch of issues that we haven't discussed earlier. These are documented in the 'open questions' section.

1. The diff is large, but it is only code being moved around, and I've added a section that documents how things are moved. In addition, there are a fair number of PR comments to help with the review process.

## Changes in the PR

### Module structure

Sets up the module structure as follows:

```
Hasura/
  RemoteSchema/
    Metadata/
      Types.hs
    SchemaCache/
      Types.hs
      Permission.hs
      RemoteRelationship.hs
      Build.hs
    MetadataAPI/
      Types.hs
      Execute.hs
```

### 1. Types representing metadata are moved

Types that capture metadata information (currently scattered across several RQL modules) are moved into `Hasura.RemoteSchema.Metadata.Types`.

- This new module only depends on very 'core' modules such as
  `Hasura.Session` for the notion of roles and `Hasura.Incremental` for the `Cacheable` typeclass.

- The requirement on database modules is avoided by generalizing the remote schemas metadata to accept an arbitrary `r` for a remote relationship definition (see the sketch after this list).
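A rough sketch of that generalisation, with hypothetical fields:

```haskell
import Data.Text (Text)

-- Hypothetical fields, illustration only: keeping the remote relationship
-- definition as a type parameter `r` means this module never has to import
-- the database-specific definition types.
data RemoteSchemaMetadataG r = RemoteSchemaMetadataG
  { rsmName                :: Text
  , rsmUrl                 :: Text
  , rsmRemoteRelationships :: [r]
  }
```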

### 2. SchemaCache related types and build logic have been moved

Types that represent remote schemas information in SchemaCache are moved into `Hasura.RemoteSchema.SchemaCache.Types`.

Similar to `H.RS.Metadata.Types`, this module depends only on 'core' modules, except for `Hasura.GraphQL.Parser.Variable`. It has something to do with remote relationships, but I haven't spent time looking into it. The validation of 'remote relationships to remote schema' is also something that needs to be looked at.

Rips out the logic that builds the remote schema's SchemaCache information from the monolithic `buildSchemaCacheRule` and moves it into `Hasura.RemoteSchema.SchemaCache.Build`. Further, the `.SchemaCache.Permission` and `.SchemaCache.RemoteRelationship` modules have been created from existing modules that capture schema cache building logic for those two components.

This was a fair amount of work. On main, a remote schema's SchemaCache information is currently built in two phases: in the first phase, 'permissions' and 'remote relationships' are ignored, and in the second phase they are filled in.

While remote relationships can only be resolved after partially resolving sources and other remote schemas, the same isn't true for permissions. Further, most of the work that is done to resolve remote relationships can be moved to the first phase so that the second phase can be a very simple traversal.

This is the approach that was taken: resolve permissions, and as much of the remote relationships information as possible, in the first phase.

### 3. Metadata APIs related types and build logic have been moved

The types that represent remote schema related metadata APIs and the execution logic have been moved to `Hasura.RemoteSchema.MetadataAPI.Types` and `.Execute` modules respectively.

## Open questions:

1. `Hasura.RemoteSchema.Metadata.Types` is so called because I was hoping that all of the metadata-related APIs of remote schemas could be brought in at `Hasura.RemoteSchema.Metadata.API`. However, as metadata APIs depended on functions from the `SchemaCache` module (see [1](ceba6d6226/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs (L55)) and [2](ceba6d6226/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs (L91))), it made more sense to create a separate top-level module for `MetadataAPI`s.

   Maybe we can just have `Hasura.RemoteSchema.Metadata` and get rid of the extra nesting or have `Hasura.RemoteSchema.Metadata.{Core,Permission,RemoteRelationship}` if we want to break them down further.

1. `buildRemoteSchemas` in `H.RS.SchemaCache.Build` has the following type:

   ```haskell
   buildRemoteSchemas ::
     ( ArrowChoice arr,
       Inc.ArrowDistribute arr,
       ArrowWriter (Seq CollectedInfo) arr,
       Inc.ArrowCache m arr,
       MonadIO m,
       HasHttpManagerM m,
       Inc.Cacheable remoteRelationshipDefinition,
       ToJSON remoteRelationshipDefinition,
       MonadError QErr m
     ) =>
     Env.Environment ->
     ( (Inc.Dependency (HashMap RemoteSchemaName Inc.InvalidationKey), OrderedRoles),
       [RemoteSchemaMetadataG remoteRelationshipDefinition]
     )
       `arr` HashMap RemoteSchemaName (PartiallyResolvedRemoteSchemaCtxG remoteRelationshipDefinition, MetadataObject)
   ```

   Note the dependence on `CollectedInfo` which is defined as

   ```haskell
   data CollectedInfo
     = CIInconsistency InconsistentMetadata
     | CIDependency
         MetadataObject
         -- ^ for error reporting on missing dependencies
         SchemaObjId
         SchemaDependency
     deriving (Eq)
   ```

   This pretty much means that the remote schemas module depends on types from databases, actions, and so on.

   How do we fix this? Maybe introduce a typeclass such as `ArrowCollectRemoteSchemaDependencies` which is defined in `Hasura.RemoteSchema` and then implemented in graphql-engine?

1. The dependency on `buildSchemaCacheFor` in `.MetadataAPI.Execute`, which has the following signature:

   ```haskell
   buildSchemaCacheFor ::
     (QErrM m, CacheRWM m, MetadataM m) =>
     MetadataObjId ->
     MetadataModifier ->
   ```

   This can be easily resolved if we restrict what the metadata APIs are allowed to do. Currently, they operate with unfettered access to modify the SchemaCache (the `CacheRWM` constraint):

   ```haskell
   runAddRemoteSchema ::
     ( QErrM m,
       CacheRWM m,
       MonadIO m,
       HasHttpManagerM m,
       MetadataM m,
       Tracing.MonadTrace m
     ) =>
     Env.Environment ->
     AddRemoteSchemaQuery ->
     m EncJSON
   ```

   If this is instead changed so that remote schema APIs can only modify remote schema metadata (while still having access to the remote schemas part of the schema cache), this dependency is completely removed.

   ```haskell
   runAddRemoteSchema ::
     ( QErrM m,
       MonadIO m,
       HasHttpManagerM m,
       MonadReader RemoteSchemasSchemaCache m,
       MonadState RemoteSchemaMetadata m,
       Tracing.MonadTrace m
     ) =>
     Env.Environment ->
     AddRemoteSchemaQuery ->
     m RemoteSchemaMetadataObjId
   ```

   The idea is that the core graphql-engine would call these functions and then call
   `buildSchemaCacheFor`.

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6291
GitOrigin-RevId: 51357148c6404afe70219afa71bd1d59bdf4ffc6
2022-10-21 03:15:04 +00:00