2021-06-11 06:26:50 +03:00
|
|
|
module Hasura.GraphQL.Execute.RemoteJoin.Join
|
|
|
|
( processRemoteJoins,
|
2022-03-10 18:25:25 +03:00
|
|
|
foldJoinTreeWith,
|
2020-05-27 18:02:58 +03:00
|
|
|
)
|
|
|
|
where
|
|
|
|
|
Enable remote joins from remote schemas in the execution engine.
### Description
This PR adds the ability to perform remote joins from remote schemas in the engine. To do so, we alter the definition of an `ExecutionStep` targeting a remote schema: the `ExecStepRemote` constructor now expects a `Maybe RemoteJoins`. This new argument is used when processing the execution step, in the transport layer (either `Transport.HTTP` or `Transport.WebSocket`).
For this `Maybe RemoteJoins` to be extracted from a parsed query, this PR also extends the `Execute.RemoteJoin.Collect` module, to implement "collection" from a selection set. Not only do those new functions extract the remote joins, but they also apply all necessary transformations to the selection sets (such as inserting the necessary "phantom" fields used as join keys).
Finally, in `Execute.RemoteJoin.Join`, we make two changes: first, we now always look for nested remote joins, regardless of whether the join we just performed went to a source or a remote schema; and second, we adapt our join tree logic to the special cases that were added to deal with remote server edge cases.
Additionally, this PR refactors / cleans / documents `Execute.RemoteJoin.RemoteServer`. This is not required as part of this change and could be moved to a separate PR if needed (a similar cleanup of `Join` is done independently in #3894). It also introduces a draft of a new documentation page for this project, that will be refined in the release PR that ships the feature (either #3069 or a copy of it).
While this PR extends the engine, it doesn't plug such relationships in the schema, meaning that, as of this PR, the new code paths in `Join` are technically unreachable. Adding the corresponding schema code and, ultimately, enabling the metadata API will be done in subsequent PRs.
### Keeping track of concrete type names
The main change this PR makes to the existing `Join` code is to handle a new reserved field we sometimes use when targeting remote servers: the `__hasura_internal_typename` field. In short, a GraphQL selection set can sometimes "branch" based on the concrete "runtime type" of the object on which the selection happens:
```graphql
query {
author(id: 53478) {
... on Writer {
name
articles {
title
}
}
... on Artist {
name
articles {
title
}
}
}
}
```
If both of those `articles` are remote joins, we need to be able, when we get the answer, to differentiate between the two different cases. We do this by asking for `__typename`, to be able to decide if we're in the `Writer` or the `Artist` branch of the query.
To avoid further processing / customization of results, we only insert this `__hasura_internal_typename: __typename` field in the query in the case of unions and interfaces, AND only if we have the guarantee that we will be processing the request as part of the remote joins "folding": that is, if there's any remote join in this branch of the tree. Otherwise, we don't insert the field, and we leave that part of the response untouched.
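Concretely, for the example query above, the request sent to the remote server would look something like the following (a sketch only: the exact field placement, and the phantom join-key fields that replace the `articles` remote joins, depend on the collection logic):
```graphql
query {
  author(id: 53478) {
    ... on Writer {
      __hasura_internal_typename: __typename
      name
    }
    ... on Artist {
      __hasura_internal_typename: __typename
      name
    }
  }
}
```
When folding the joins back into the response, the value of `__hasura_internal_typename` tells us which branch's remote joins apply, and the field is stripped before the response is returned.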
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3810
GitOrigin-RevId: 89aaf16274d68e26ad3730b80c2d2fdc2896b96c
2022-03-09 06:17:28 +03:00
|
|
|
import Control.Lens (view, _3)
|
Allow backend execution to happen on the base app monad.
### Description
Each Backend executes queries against the database in a slightly different stack: Postgres uses its own `TxET`, MSSQL uses a variant of it, BigQuery is simply in `ExceptT QErr IO`... To accommodate those variations, we had originally introduced an `ExecutionMonad b` type family in `BackendExecute`, allowing each backend to describe its own stack. It was then up to that backend's `BackendTransport` instance to implement running said stack, and converting the result back into our main app monad.
However, this was not without complications: `TraceT` is one of them: as it usually needs to be at the top of the stack, converting from one stack to the other implies the use of `interpTraceT`, which is quite monstrous. Furthermore, as part of the Entitlement Services work, we're trying to move to a "Services" architecture in which the entire engine runs in one base monad, that delegates features and dependencies to monad constraints; and as a result we'd like to minimize the number of different monad stacks we have to maintain and translate from and to in the codebase.
To improve things, this PR changes `ExecutionMonad b` from an _absolute_ stack to a _relative_ one: i.e.: what needs to be stacked on top of our base monad for the execution. In `Transport`, we then only need to pop the top of the stack, and voila. This greatly simplifies the implementation of the backends, as there's no longer any need to do any stack transformation: MySQL's implementation becomes a `runIdentityT`! This also removes most mentions of `TraceT` from the execution code since it's no longer required: we can rely on the base monad's existing `MonadTrace` constraint.
To continue encapsulating monadic actions in `DBStepInfo` and avoid threading a bunch of `forall` all over the place, this PR introduces a small local helper: `OnBaseMonad`. The one downside of all this is that it requires adding a `MonadBaseControl IO m` constraint all over the place: previously, we would run directly on `IO` and lift, and would therefore not need to bring that constraint all the way.
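The absolute-versus-relative distinction can be illustrated with a minimal sketch (hypothetical instances, not the actual definitions from the codebase):
```haskell
-- Before: 'ExecutionMonad b' named an *absolute* stack, which the backend's
-- Transport instance had to run fully and convert back into the app monad
-- (e.g. via interpTraceT when TraceT was involved):
--
--   type instance ExecutionMonad 'MSSQL = ExceptT QErr (TraceT IO)
--
-- After: it names a *relative* stack, i.e. only the transformers to put on
-- top of the base monad 'm'; Transport merely pops that top layer. A backend
-- that needs nothing extra uses IdentityT, and running it is runIdentityT:
--
--   type instance ExecutionMonad 'MySQL = IdentityT
```
Since the base monad already carries `MonadTrace`, tracing no longer needs to appear in the per-backend stack at all.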
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7789
GitOrigin-RevId: e9b2e431c5c47fa9851abf87545c0415ff6d1a12
2023-02-09 17:38:33 +03:00
|
|
|
import Control.Monad.Trans.Control
|
2021-09-22 13:43:05 +03:00
|
|
|
import Data.Aeson.Ordered qualified as JO
|
2022-03-10 18:25:25 +03:00
|
|
|
import Data.ByteString.Lazy qualified as BL
|
2021-08-06 16:39:00 +03:00
|
|
|
import Data.Environment qualified as Env
|
2023-04-26 18:42:13 +03:00
|
|
|
import Data.HashMap.Strict.Extended qualified as HashMap
|
2023-04-27 10:41:55 +03:00
|
|
|
import Data.HashMap.Strict.InsOrd qualified as InsOrdHashMap
|
2022-03-01 19:03:23 +03:00
|
|
|
import Data.HashMap.Strict.NonEmpty qualified as NEMap
|
2021-08-06 16:39:00 +03:00
|
|
|
import Data.HashSet qualified as HS
|
2022-07-29 17:52:02 +03:00
|
|
|
import Data.IntMap.Strict qualified as IntMap
|
2021-08-06 16:39:00 +03:00
|
|
|
import Data.Text qualified as T
|
2023-10-03 08:15:07 +03:00
|
|
|
import Data.Text.Extended (ToTxt (..))
|
2021-08-06 16:39:00 +03:00
|
|
|
import Data.Tuple (swap)
|
2023-04-05 11:57:19 +03:00
|
|
|
import Hasura.Backends.DataConnector.Agent.Client (AgentLicenseKey)
|
2021-05-11 18:18:31 +03:00
|
|
|
import Hasura.Base.Error
|
2023-04-05 11:57:19 +03:00
|
|
|
import Hasura.CredentialCache
|
2020-10-29 19:58:13 +03:00
|
|
|
import Hasura.EncJSON
|
2021-09-22 13:43:05 +03:00
|
|
|
import Hasura.GraphQL.Execute.Backend qualified as EB
|
|
|
|
import Hasura.GraphQL.Execute.Instances ()
|
2021-08-06 16:39:00 +03:00
|
|
|
import Hasura.GraphQL.Execute.RemoteJoin.RemoteSchema qualified as RS
|
2022-03-10 18:25:25 +03:00
|
|
|
import Hasura.GraphQL.Execute.RemoteJoin.Source qualified as S
|
2021-06-11 06:26:50 +03:00
|
|
|
import Hasura.GraphQL.Execute.RemoteJoin.Types
|
2023-03-15 16:05:17 +03:00
|
|
|
import Hasura.GraphQL.Logging (MonadExecutionLog, MonadQueryLog, statsToAnyBackend)
|
2022-03-09 06:17:28 +03:00
|
|
|
import Hasura.GraphQL.RemoteServer (execRemoteGQ)
|
2021-09-22 13:43:05 +03:00
|
|
|
import Hasura.GraphQL.Transport.Backend qualified as TB
|
2023-01-25 10:12:53 +03:00
|
|
|
import Hasura.GraphQL.Transport.HTTP.Protocol (GQLReqOutgoing, GQLReqUnparsed, _grOperationName, _unOperationName)
|
2021-09-22 13:43:05 +03:00
|
|
|
import Hasura.GraphQL.Transport.Instances ()
|
2021-08-06 16:39:00 +03:00
|
|
|
import Hasura.Logging qualified as L
|
2020-05-27 18:02:58 +03:00
|
|
|
import Hasura.Prelude
|
2023-03-31 00:18:11 +03:00
|
|
|
import Hasura.QueryTags
|
2023-10-03 08:15:07 +03:00
|
|
|
import Hasura.RQL.IR.ModelInformation (ModelInfoPart (..), ModelOperationType (ModelOperationType), ModelType (ModelTypeRemoteSchema))
|
2022-04-27 16:57:28 +03:00
|
|
|
import Hasura.RQL.Types.Common
|
scaffolding for remote-schemas module
The main aim of the PR is:
1. To set up a module structure for 'remote-schemas' package.
2. Move parts of the remote schema codebase into the new module structure to validate it.
## Notes to the reviewer
Why a PR with large-ish diff?
1. We've been making progress on the MM project but we don't yet know how long it is going to take us to get to the first milestone. To understand this better, we need to figure out the unknowns as soon as possible. Hence I've taken a stab at the first two items in the [end-state](https://gist.github.com/0x777/ca2bdc4284d21c3eec153b51dea255c9) document to figure out the unknowns. Unsurprisingly, there are a bunch of issues that we haven't discussed earlier. These are documented in the 'open questions' section.
1. The diff is large, but it is mostly code being moved around, and I've added a section that documents how things are moved. In addition, there are a fair number of PR comments to help with the review process.
## Changes in the PR
### Module structure
Sets up the module structure as follows:
```
Hasura/
RemoteSchema/
Metadata/
Types.hs
SchemaCache/
Types.hs
Permission.hs
RemoteRelationship.hs
Build.hs
MetadataAPI/
Types.hs
Execute.hs
```
### 1. Types representing metadata are moved
Types that capture metadata information (currently scattered across several RQL modules) are moved into `Hasura.RemoteSchema.Metadata.Types`.
- This new module only depends on very 'core' modules such as
`Hasura.Session` for the notion of roles and `Hasura.Incremental` for the `Cacheable` typeclass.
- The requirement on database modules is avoided by generalizing the remote schemas metadata to accept an arbitrary 'r' for a remote relationship
definition.
### 2. SchemaCache related types and build logic have been moved
Types that represent remote schemas information in SchemaCache are moved into `Hasura.RemoteSchema.SchemaCache.Types`.
Similar to `H.RS.Metadata.Types`, this module depends on 'core' modules except for `Hasura.GraphQL.Parser.Variable`. It has something to do with remote relationships but I haven't spent time looking into it. The validation of 'remote relationships to remote schema' is also something that needs to be looked at.
Rips out the logic that builds remote schema's SchemaCache information from the monolithic `buildSchemaCacheRule` and moves it into `Hasura.RemoteSchema.SchemaCache.Build`. Further, the `.SchemaCache.Permission` and `.SchemaCache.RemoteRelationship` have been created from existing modules that capture schema cache building logic for those two components.
This was a fair amount of work. Currently on main, remote schema SchemaCache information is built in two phases: in the first phase, 'permissions' and 'remote relationships' are ignored, and in the second phase they are filled in.
While remote relationships can only be resolved after partially resolving sources and other remote schemas, the same isn't true for permissions. Further, most of the work that is done to resolve remote relationships can be moved to the first phase so that the second phase can be a very simple traversal.
This is the approach that was taken - resolve permissions, and as much of the remote relationships information as possible, in the first phase.
### 3. Metadata APIs related types and build logic have been moved
The types that represent remote schema related metadata APIs and the execution logic have been moved to `Hasura.RemoteSchema.MetadataAPI.Types` and `.Execute` modules respectively.
## Open questions:
1. `Hasura.RemoteSchema.Metadata.Types` is so called because I was hoping that all of the metadata related APIs of remote schema can be brought in at `Hasura.RemoteSchema.Metadata.API`. However, as metadata APIs depended on functions from the `SchemaCache` module (see [1](https://github.com/hasura/graphql-engine-mono/blob/ceba6d62264603ee5d279814677b29bcc43ecaea/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs#L55) and [2](https://github.com/hasura/graphql-engine-mono/blob/ceba6d62264603ee5d279814677b29bcc43ecaea/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs#L91)), it made more sense to create a separate top-level module for `MetadataAPI`s.
Maybe we can just have `Hasura.RemoteSchema.Metadata` and get rid of the extra nesting or have `Hasura.RemoteSchema.Metadata.{Core,Permission,RemoteRelationship}` if we want to break them down further.
1. `buildRemoteSchemas` in `H.RS.SchemaCache.Build` has the following type:
```haskell
buildRemoteSchemas ::
( ArrowChoice arr,
Inc.ArrowDistribute arr,
ArrowWriter (Seq CollectedInfo) arr,
Inc.ArrowCache m arr,
MonadIO m,
HasHttpManagerM m,
Inc.Cacheable remoteRelationshipDefinition,
ToJSON remoteRelationshipDefinition,
MonadError QErr m
) =>
Env.Environment ->
( (Inc.Dependency (HashMap RemoteSchemaName Inc.InvalidationKey), OrderedRoles),
[RemoteSchemaMetadataG remoteRelationshipDefinition]
)
`arr` HashMap RemoteSchemaName (PartiallyResolvedRemoteSchemaCtxG remoteRelationshipDefinition, MetadataObject)
```
Note the dependence on `CollectedInfo` which is defined as
```haskell
data CollectedInfo
= CIInconsistency InconsistentMetadata
| CIDependency
MetadataObject
-- ^ for error reporting on missing dependencies
SchemaObjId
SchemaDependency
deriving (Eq)
```
This pretty much means that remote schemas depend on types from databases, actions, ....
How do we fix this? Maybe introduce a typeclass such as `ArrowCollectRemoteSchemaDependencies` which is defined in `Hasura.RemoteSchema` and then implemented in graphql-engine?
1. The dependency on `buildSchemaCacheFor` in `.MetadataAPI.Execute` which has the following signature:
```haskell
buildSchemaCacheFor ::
(QErrM m, CacheRWM m, MetadataM m) =>
MetadataObjId ->
MetadataModifier ->
```
This can be easily resolved if we restrict what the metadata APIs are allowed to do. Currently, they have unfettered access to modify the SchemaCache (the `CacheRWM` constraint):
```haskell
runAddRemoteSchema ::
( QErrM m,
CacheRWM m,
MonadIO m,
HasHttpManagerM m,
MetadataM m,
Tracing.MonadTrace m
) =>
Env.Environment ->
AddRemoteSchemaQuery ->
m EncJSON
```
If we instead restrict remote schema APIs to only modifying remote schema metadata (while retaining read access to the remote schemas part of the schema cache), this dependency is completely removed:
```haskell
runAddRemoteSchema ::
( QErrM m,
MonadIO m,
HasHttpManagerM m,
MonadReader RemoteSchemasSchemaCache m,
MonadState RemoteSchemaMetadata m,
Tracing.MonadTrace m
) =>
Env.Environment ->
AddRemoteSchemaQuery ->
m RemoteSchemaMetadataObjId
```
The idea is that the core graphql-engine would call these functions and then call
`buildSchemaCacheFor`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6291
GitOrigin-RevId: 51357148c6404afe70219afa71bd1d59bdf4ffc6
2022-10-21 06:13:07 +03:00
|
|
|
import Hasura.RemoteSchema.SchemaCache
|
2021-09-22 13:43:05 +03:00
|
|
|
import Hasura.SQL.AnyBackend qualified as AB
|
2023-10-03 08:15:07 +03:00
|
|
|
import Hasura.Server.Types (MonadGetPolicies, RequestId)
|
harmonize network manager handling
## Description
### I want to speak to the `Manager`
Oh boy. This PR is both fairly straightforward and overreaching, so let's break it down.
For most network access, we need a [`HTTP.Manager`](https://hackage.haskell.org/package/http-client-0.1.0.0/docs/Network-HTTP-Client-Manager.html). It is created only once, at the top level, when starting the engine, and is then threaded through the application to wherever we need to make a network call. As of main, the way we do this is not standardized: most of the GraphQL execution code passes it "manually" as a function argument throughout the code. We also have a custom monad constraint, `HasHttpManagerM`, that describes a monad's ability to provide a manager. And, finally, several parts of the code store the manager in some kind of argument structure, such as `RunT`'s `RunCtx`.
This PR's first goal is to harmonize all of this: we always create the manager at the root, and we already have it when we do our very first `runReaderT`. Wouldn't it make sense for the rest of the code to not manually pass it anywhere, to not store it anywhere, but to always rely on the current monad providing it? This is, in short, what this PR does: it implements a constraint on the base monads, so that they provide the manager, and removes most explicit passing from the code.
### First come, first served
One way this PR goes a tiny bit further than "just" doing the aforementioned harmonization is that it starts the process of implementing the "Services oriented architecture" roughly outlined in this [draft document](https://docs.google.com/document/d/1FAigqrST0juU1WcT4HIxJxe1iEBwTuBZodTaeUvsKqQ/edit?usp=sharing). Instead of using the existing `HasHttpManagerM`, this PR revamps it into the `ProvidesNetwork` service.
The idea is, again, that we should make all "external" dependencies of the engine, all things that the core of the engine doesn't care about, a "service". This allows us to define clear APIs for features, to choose different implementations based on which version of the engine we're running, harmonizes our many scattered monadic constraints... Which is why this service is called "Network": we can refine it, moving forward, to be the constraint that defines how all network communication is to operate, instead of relying on disparate class constraints or hardcoded decisions. A comment in the code clarifies this intent.
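As a sketch, such a service boils down to a small constraint that the base monad implements once at the root (hypothetical signature, shown for illustration only):
```haskell
import qualified Network.HTTP.Client as HTTP

-- A "Network" service: the base monad provides the shared HTTP.Manager,
-- instead of the manager being passed as an argument or stored in reader
-- structures throughout the execution code.
class (Monad m) => ProvidesNetwork m where
  askHTTPManager :: m HTTP.Manager
```
Any code that needs to make a network call then simply adds `ProvidesNetwork m` to its constraints and asks the monad for the manager.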
### Side-effects? In my Haskell?
This PR also unavoidably touches some other aspects of the codebase. One such example: it introduces `Hasura.App.AppContext`, named after `HasuraPro.Context.AppContext`: a name for the reader structure at the base level. It also transforms `Handler` from a type alias to a newtype, as `Handler` is where we actually enforce HTTP limits; but without `Handler` being a distinct type, any code path could simply do a `runExceptT $ runReader` and forget to enforce them.
(As a rule of thumb, I am starting to consider any straggling `runReaderT` or `runExceptT` as a code smell: we should not stack / unstack monads haphazardly, and every layer should be an opaque `newtype` with a corresponding run function.)
## Further work
In several places, I have left TODOs where I have encountered things that suggest that we should do further unrelated cleanups. I'll write down the follow-up steps, either in the aforementioned document or on slack. But, in short, at a glance, in approximate order, we could:
- delete `ExecutionCtx` as it is only a subset of `ServerCtx`, and remove one more `runReaderT` call
- delete `ServerConfigCtx` as it is only a subset of `ServerCtx`, and remove it from `RunCtx`
- remove `ServerCtx` from `HandlerCtx`, and make it part of `AppContext`, or even make it the `AppContext` altogether (since, at least for the OSS version, `AppContext` is there again only a subset)
- remove `CacheBuildParams` and `CacheBuild` altogether, as they're just a distinct stack that is a `ReaderT` on top of `IO` that contains, you guessed it, the same thing as `ServerCtx`
- move `RunT` out of `RQL.Types` and rename it, since after the previous cleanups **it only contains `UserInfo`**; it could be bundled with the authentication service, made a small implementation detail in `Hasura.Server.Auth`
- rename `PGMetadaStorageT` to something a bit more accurate, such as `App`, and enforce its IO base
This would significantly simplify our complex stack. From there, or in parallel, we can start moving existing dependencies as Services. For the purpose of supporting read replicas entitlement, we could move `MonadResolveSource` to a `SourceResolver` service, as attempted in #7653, and transform `UserAuthenticationM` into an `Authentication` service.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7736
GitOrigin-RevId: 68cce710eb9e7d752bda1ba0c49541d24df8209f
2023-02-22 18:53:52 +03:00
|
|
|
import Hasura.Services.Network
|
2020-10-29 19:58:13 +03:00
|
|
|
import Hasura.Session
|
2021-08-06 16:39:00 +03:00
|
|
|
import Hasura.Tracing qualified as Tracing
|
2023-01-25 10:12:53 +03:00
|
|
|
import Language.GraphQL.Draft.Syntax qualified as G
|
2022-02-16 10:08:51 +03:00
|
|
|
import Network.HTTP.Types qualified as HTTP
|
2020-05-27 18:02:58 +03:00
|
|
|
|
2022-03-10 18:25:25 +03:00
|
|
|
-------------------------------------------------------------------------------
|
2021-08-06 16:39:00 +03:00
|
|
|
|
2022-03-10 18:25:25 +03:00
|
|
|
-- | Process all remote joins, recursively.
|
|
|
|
--
|
|
|
|
-- Given the result of the first step of an execution and its associated remote
|
|
|
|
-- joins, process all joins recursively to build the resulting JSON object.
|
|
|
|
--
|
|
|
|
-- This function is a thin wrapper around 'processRemoteJoinsWith', and starts
|
|
|
|
-- the join tree traversal process by re-parsing the 'EncJSON' value into an
|
|
|
|
-- introspectable JSON 'Value', and "injects" the required functions to process
|
|
|
|
-- each join over the network.
|
2020-11-12 12:25:48 +03:00
|
|
|
processRemoteJoins ::
|
2022-03-10 18:25:25 +03:00
|
|
|
forall m.
|
2021-10-13 19:38:56 +03:00
|
|
|
( MonadError QErr m,
|
2020-11-12 12:25:48 +03:00
|
|
|
MonadIO m,
|
2023-02-09 17:38:33 +03:00
|
|
|
MonadBaseControl IO m,
|
2023-03-31 00:18:11 +03:00
|
|
|
MonadQueryTags m,
|
2021-09-22 13:43:05 +03:00
|
|
|
MonadQueryLog m,
|
2023-03-15 16:05:17 +03:00
|
|
|
MonadExecutionLog m,
|
2023-02-22 18:53:52 +03:00
|
|
|
    Tracing.MonadTrace m,
    ProvidesNetwork m,
    MonadGetPolicies m
  ) =>
  RequestId ->
  L.Logger L.Hasura ->
  Maybe (CredentialCache AgentLicenseKey) ->
  Env.Environment ->
  [HTTP.Header] ->
  UserInfo ->
  EncJSON ->
  Maybe RemoteJoins ->
  GQLReqUnparsed ->
  Tracing.HttpPropagator ->
  m (EncJSON, [ModelInfoPart])
processRemoteJoins requestId logger agentLicenseKey env requestHeaders userInfo lhs maybeJoinTree gqlreq tracesPropagator =
  Tracing.newSpan "Process remote joins" $ forRemoteJoins maybeJoinTree (lhs, []) \joinTree -> do
    lhsParsed <-
      JO.eitherDecode (encJToLBS lhs)
        `onLeft` (throw500 . T.pack)
    (jsonResult, modelInfoList) <-
      foldJoinTreeWith
        callSource
        callRemoteServer
        userInfo
        (Identity lhsParsed)
        joinTree
        requestHeaders
        (_unOperationName <$> _grOperationName gqlreq)
    pure (encJFromOrderedValue $ runIdentity jsonResult, modelInfoList)
  where
    -- How to process a source join call over the network.
    callSource ::
      -- Generated information about the step
      AB.AnyBackend S.SourceJoinCall ->
      -- Resulting JSON object, as a 'ByteString'.
      m BL.ByteString
    callSource sourceJoinCall =
      AB.dispatchAnyBackend @TB.BackendTransport sourceJoinCall \(S.SourceJoinCall {..} :: S.SourceJoinCall b) -> do
        response <-
          TB.runDBQuery @b
            requestId
            gqlreq
            _sjcRootFieldAlias
            userInfo
            logger
            agentLicenseKey
            _sjcSourceConfig
            (fmap (statsToAnyBackend @b) (EB.dbsiAction _sjcStepInfo))
            (EB.dbsiPreparedQuery _sjcStepInfo)
            (EB.dbsiResolvedConnectionTemplate _sjcStepInfo)
        pure $ encJToLBS $ snd response

    -- How to process a remote schema join call over the network.
    callRemoteServer ::
      -- Information about the remote schema
      ValidatedRemoteSchemaDef ->
      -- Generated GraphQL request
      GQLReqOutgoing ->
      -- Resulting JSON object, as a 'ByteString'.
      m BL.ByteString
    callRemoteServer remoteSchemaInfo request =
      fmap (view _3)
        $ execRemoteGQ env tracesPropagator userInfo requestHeaders remoteSchemaInfo request

-- | Fold the join tree.
--
-- This function takes as an argument the functions that will be used to do the
-- actual network calls; this allows this function not to require 'MonadIO',
-- allowing it to be used in tests.
foldJoinTreeWith ::
  ( MonadError QErr m,
    MonadQueryTags m,
    Traversable f,
    Tracing.MonadTrace m,
    MonadIO m,
    MonadGetPolicies m
  ) =>
  -- | How to process a call to a source.
  (AB.AnyBackend S.SourceJoinCall -> m BL.ByteString) ->
  -- | How to process a call to a remote schema.
  (ValidatedRemoteSchemaDef -> GQLReqOutgoing -> m BL.ByteString) ->
  -- | User information.
  UserInfo ->
  -- | Initial accumulator; the LHS of this join tree.
  (f JO.Value) ->
  RemoteJoins ->
  [HTTP.Header] ->
  Maybe G.Name ->
  m (f JO.Value, [ModelInfoPart])
foldJoinTreeWith callSource callRemoteSchema userInfo lhs joinTree reqHeaders operationName = do
  (compositeValue, joins) <- collectJoinArguments (assignJoinIds joinTree) lhs
  joinIndices <- fmap catMaybes
    $ for joins
    $ \JoinArguments {..} -> do
      let joinArguments = IntMap.fromList $ map swap $ HashMap.toList _jalArguments
      (previousStep, modelInfo') <- case _jalJoin of
        RemoteJoinRemoteSchema remoteSchemaJoin childJoinTree -> do
          let remoteSchemaInfo = rsDef $ _rsjRemoteSchema remoteSchemaJoin
          maybeJoinIndex <- RS.makeRemoteSchemaJoinCall (callRemoteSchema remoteSchemaInfo) userInfo remoteSchemaJoin _jalFieldName joinArguments
          let remoteSchemaModel = ModelInfoPart (toTxt $ _vrsdName remoteSchemaInfo) ModelTypeRemoteSchema Nothing Nothing (ModelOperationType G.OperationTypeQuery)
          pure (fmap (childJoinTree,) maybeJoinIndex, Just [remoteSchemaModel])
        RemoteJoinSource sourceJoin childJoinTree -> do
          maybeJoinIndex <- S.makeSourceJoinCall callSource userInfo sourceJoin _jalFieldName joinArguments reqHeaders operationName
          pure (fmap (childJoinTree,) $ fst <$> maybeJoinIndex, snd <$> maybeJoinIndex)
      result <- for previousStep $ \(childJoinTree, joinIndex) -> do
        forRemoteJoins childJoinTree (joinIndex, []) $ \childRemoteJoins -> do
          (results, modelInfo) <-
            foldJoinTreeWith
              callSource
              callRemoteSchema
              userInfo
              (IntMap.elems joinIndex)
              childRemoteJoins
              reqHeaders
              operationName
          pure (IntMap.fromAscList $ zip (IntMap.keys joinIndex) results, modelInfo)
      pure $ fmap (\(iMap, newModelInfo) -> (iMap, newModelInfo <> fromMaybe [] modelInfo')) result
  let (key, compositeValue') = unzip (IntMap.toList joinIndices)
      (intMap, model) = unzip compositeValue'
      joinIndices' = IntMap.fromList $ zip key intMap
      modelInfoList = concat model
  Tracing.newSpan "Join remote join results"
    $ (,modelInfoList)
    <$> joinResults joinIndices' compositeValue

-------------------------------------------------------------------------------

-- | Simple convenience wrapper around @Maybe RemoteJoins@.
forRemoteJoins ::
  (Applicative f) =>
  Maybe RemoteJoins ->
  a ->
  (RemoteJoins -> f a) ->
  f a
forRemoteJoins remoteJoins onNoJoins f =
  maybe (pure onNoJoins) f remoteJoins
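-- A sketch of 'forRemoteJoins' in action, with @f@ instantiated to 'Maybe'
-- and a hypothetical @someJoins :: RemoteJoins@ (not a real binding in this
-- module):
--
-- > forRemoteJoins Nothing "lhs" (\_ -> Just "joined")          ==  Just "lhs"
-- > forRemoteJoins (Just someJoins) "lhs" (\_ -> Just "joined") ==  Just "joined"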
-- | When traversing a response's JSON, wherever the join columns of a remote
-- join are expected, we want to collect these arguments.
--
-- However, looking up by a remote join's definition to collect these arguments
-- does not work because we don't have an 'Ord' or a 'Hashable' instance (it
-- would be a bit of work).
--
-- So this assigns each remote join a unique integer ID by using just the 'Eq'
-- instance. This ID can then be used for the collection of arguments (which
-- should also be faster).
--
-- TODO(nicuveo): https://github.com/hasura/graphql-engine-mono/issues/3891.
assignJoinIds :: JoinTree RemoteJoin -> JoinTree (JoinCallId, RemoteJoin)
assignJoinIds joinTree =
  evalState (traverse assignId joinTree) (0, [])
  where
    assignId ::
      RemoteJoin ->
      State (JoinCallId, [(JoinCallId, RemoteJoin)]) (JoinCallId, RemoteJoin)
    assignId remoteJoin = do
      (joinCallId, joinIds) <- get
      let mJoinId = joinIds & find \(_, j) -> j == remoteJoin
      mJoinId `onNothing` do
        put (joinCallId + 1, (joinCallId, remoteJoin) : joinIds)
        pure (joinCallId, remoteJoin)
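-- As a sketch (informal notation, for hypothetical joins @a@ and @b@ with
-- @a == a@): traversing a tree whose leaves are @[a, b, a]@ annotates them as
-- @[(0, a), (1, b), (0, a)]@, i.e. equal joins share one 'JoinCallId'.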
collectJoinArguments ::
  forall f m.
  (MonadError QErr m, Traversable f) =>
  JoinTree (JoinCallId, RemoteJoin) ->
  f JO.Value ->
  m (f (CompositeValue ReplacementToken), IntMap.IntMap JoinArguments)
collectJoinArguments joinTree lhs = do
  result <- flip runStateT (0, mempty) $ traverse (traverseValue joinTree) lhs
  -- Discard the 'JoinArgumentId' from the intermediate state transformation.
  pure $ second snd result
  where
    getReplacementToken ::
      IntMap.Key ->
      RemoteJoin ->
      JoinArgument ->
      FieldName ->
      StateT
        (JoinArgumentId, IntMap.IntMap JoinArguments)
        m
        ReplacementToken
    getReplacementToken joinId remoteJoin argument fieldName = do
      (counter, joins) <- get
      case IntMap.lookup joinId joins of
        -- XXX: We're making an explicit decision to ignore the existing
        -- 'fieldName' and replace it with the argument provided to this
        -- function.
        --
        -- This needs to be tested so we can verify that the result of this
        -- function call is reasonable.
        Just (JoinArguments _remoteJoin arguments _fieldName) ->
          case HashMap.lookup argument arguments of
            Just argumentId -> pure $ ReplacementToken joinId argumentId
            Nothing -> addNewArgument counter joins arguments
        Nothing -> addNewArgument counter joins mempty
      where
        addNewArgument counter joins arguments = do
          let argumentId = counter
              newArguments =
                JoinArguments
                  remoteJoin
                  (HashMap.insert argument argumentId arguments)
                  fieldName
          put (counter + 1, IntMap.insert joinId newArguments joins)
          pure $ ReplacementToken joinId argumentId
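    -- A sketch of the bookkeeping above, for a hypothetical join @j@ with
    -- arguments @argA@ and @argB@ under join ID 7 (the 'JoinArgumentId'
    -- counter is shared across all joins):
    --
    -- > getReplacementToken 7 j argA f  -- fresh pair: ReplacementToken 7 0
    -- > getReplacementToken 7 j argA f  -- seen before: ReplacementToken 7 0
    -- > getReplacementToken 7 j argB f  -- new argument: ReplacementToken 7 1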
    traverseValue ::
      JoinTree (IntMap.Key, RemoteJoin) ->
      JO.Value ->
      StateT
        (JoinArgumentId, IntMap.IntMap JoinArguments)
        m
        (CompositeValue ReplacementToken)
    traverseValue joinTree_ = \case
      -- 'JO.Null' is a special case of scalar value here, which indicates that
      -- the previous step did not return enough data for us to continue
      -- traversing down this path.
      --
      -- This can occur in the following cases:
      -- * Permission errors; when the user joins on a value they are not
      --   allowed to access
      -- * Queries with remote sources that resolve to null, for example:
      --   {
      --     q {
      --       user_by_pk() {
      --         id
      --         name
      --         r {
      --         }
      --         address {
      --           r_geo {
      --           }
      --         }
      --       }
      --     }
      --   }
      JO.Null -> pure $ CVOrdValue JO.Null
      JO.Object object -> CVObject <$> traverseObject joinTree_ object
      JO.Array array -> CVObjectArray <$> mapM (traverseValue joinTree_) (toList array)
      _ -> throw500 "found a scalar value when traversing with a non-empty join tree"

    traverseObject ::
      JoinTree (IntMap.Key, RemoteJoin) ->
      JO.Object ->
      StateT
        (JoinArgumentId, IntMap.IntMap JoinArguments)
        m
        (InsOrdHashMap Text (CompositeValue ReplacementToken))
    traverseObject joinTree_ object = do
      let joinTreeNodes = unJoinTree joinTree_
          phantomFields =
            HS.fromList
              $ map getFieldNameTxt
              $ concatMap (getPhantomFields . snd)
              $ toList joinTree_
      -- If we need the typename to disambiguate branches in the join tree, it
      -- will be present in the answer as a placeholder internal field.
      --
      -- We currently have no way of checking whether we explicitly requested
      -- that field, and it would be possible for a malicious user to attempt to
      -- spoof that value by explicitly requesting a value they control.
      -- However, there's no actual risk: we only use that value for lookups
      -- inside the join tree, and if we didn't request this field, the keys in
      -- the join tree map will explicitly require a typename NOT to be
      -- provided. Meaning that any spoofing attempt will just, at worst, result
      -- in remote joins not being performed.
      --
      -- We always remove that key from the resulting object.
      joinTypeName <- case JO.lookup "__hasura_internal_typename" object of
        Nothing -> pure Nothing
        Just (JO.String typename) -> pure $ Just typename
        Just value -> throw500 $ "The reserved __hasura_internal_typename field contains an unexpected value: " <> tshow value

      -- During this traversal, we assume that the remote join column has some
      -- placeholder value in the response. If this weren't present, it would
      -- involve a lot more book-keeping to preserve the order of the original
      -- selection set in the response.
      compositeObject <- for (JO.toList object) $ \(fieldName, value_) ->
        (fieldName,) <$> case NEMap.lookup (QualifiedFieldName joinTypeName fieldName) joinTreeNodes of
          Just (Leaf (joinId, remoteJoin)) -> do
            joinArgument <- forM (getJoinColumnMapping remoteJoin) $ \alias -> do
              let aliasTxt = getFieldNameTxt $ getAliasFieldName alias
              onNothing (JO.lookup aliasTxt object)
                $ throw500
                $ "a join column is missing from the response: "
                <> aliasTxt
            if HashMap.null (HashMap.filter (== JO.Null) joinArgument)
              then
                Just
                  . CVFromRemote
                  <$> getReplacementToken joinId remoteJoin (JoinArgument joinArgument) (FieldName fieldName)
              else -- we do not join with the remote field if any of the leaves of
              -- the join argument are null
                pure $ Just $ CVOrdValue JO.Null
          Just (Tree joinSubTree) ->
            Just <$> traverseValue joinSubTree value_
          Nothing ->
|
### Keeping track of concrete type names
The main change this PR makes to the existing `Join` code is to handle a new reserved field we sometimes use when targeting remote servers: the `__hasura_internal_typename` field. In short, a GraphQL selection set can sometimes "branch" based on the concrete "runtime type" of the object on which the selection happens:
```graphql
query {
author(id: 53478) {
... on Writer {
name
articles {
title
}
}
... on Artist {
name
articles {
title
}
}
}
}
```
If both of those `articles` are remote joins, we need to be able, when we get the answer, to differentiate between the two different cases. We do this by asking for `__typename`, to be able to decide if we're in the `Writer` or the `Artist` branch of the query.
To avoid further processing / customization of results, we only insert this `__hasura_internal_typename: __typename` field in the query in the case of unions and interfaces, AND only if we have the guarantee that we will be processing the request as part of the remote joins "folding": that is, if there's at least one remote join in this branch of the tree. Otherwise, we don't insert the field, and we leave that part of the response untouched.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3810
GitOrigin-RevId: 89aaf16274d68e26ad3730b80c2d2fdc2896b96c
2022-03-09 06:17:28 +03:00
          if HS.member fieldName phantomFields || fieldName == "__hasura_internal_typename"
            then pure Nothing
            else pure $ Just $ CVOrdValue value_

    pure
      . InsOrdHashMap.fromList
      $
      -- filter out the Nothings
      mapMaybe sequenceA compositeObject
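The null-argument rule above can be sketched in isolation. A minimal, self-contained model of just that branch — using `Data.Map` with a toy `Value` type in place of `JO.Value` and the real join machinery, so the names here are illustrative only:

```haskell
import qualified Data.Map.Strict as Map

-- Toy stand-in for JO.Value.
data Value = VNull | VString String | VNumber Int
  deriving (Show, Eq)

-- Mirrors the branch above: perform the join only when no join column is
-- null; otherwise short-circuit (Nothing), so the joined field is rendered
-- as JSON null instead of issuing a remote call.
joinOrNull :: Map.Map String Value -> Maybe (Map.Map String Value)
joinOrNull joinArgument
  | Map.null (Map.filter (== VNull) joinArgument) = Just joinArgument
  | otherwise = Nothing
```

For instance, `joinOrNull (Map.fromList [("author_id", VNull)])` is `Nothing`: one null leaf in the join argument suppresses the whole remote call.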
joinResults ::
  forall f m.
  (MonadError QErr m, Traversable f) =>
  IntMap.IntMap (IntMap.IntMap JO.Value) ->
  f (CompositeValue ReplacementToken) ->
  m (f JO.Value)
joinResults remoteResults compositeValues = do
  traverse (fmap compositeValueToJSON . traverse replaceToken) compositeValues
  where
    replaceToken :: ReplacementToken -> m JO.Value
    replaceToken (ReplacementToken joinCallId argumentId) = do
      joinCallResults <-
        onNothing (IntMap.lookup joinCallId remoteResults)
          $ throw500
          $ "couldn't find results for the join with id: "
          <> tshow joinCallId
      onNothing (IntMap.lookup argumentId joinCallResults)
        $ throw500
        $ "couldn't find a value for argument id in the join results: "
        <> tshow (argumentId, joinCallId)
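The two-level lookup in `replaceToken` can be illustrated standalone. A sketch using `Either String` in place of `MonadError QErr m` and plain `String` payloads in place of `JO.Value` (the helper `note` is introduced here for the example, it is not part of the module):

```haskell
import qualified Data.IntMap.Strict as IntMap

-- Simplified replaceToken: resolve a (call id, argument id) token against
-- the batched remote results, failing with a descriptive error if either
-- level of the lookup misses.
replaceToken ::
  IntMap.IntMap (IntMap.IntMap String) ->
  (Int, Int) ->
  Either String String
replaceToken remoteResults (joinCallId, argumentId) = do
  joinCallResults <-
    note ("couldn't find results for the join with id: " <> show joinCallId)
      (IntMap.lookup joinCallId remoteResults)
  note ("couldn't find a value for argument id in the join results: " <> show (argumentId, joinCallId))
    (IntMap.lookup argumentId joinCallResults)
  where
    note err = maybe (Left err) Right
```

The outer map is keyed by join call, the inner one by argument; both identifiers come from the `ReplacementToken` planted during traversal.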
-------------------------------------------------------------------------------
type CompositeObject a = InsOrdHashMap.InsOrdHashMap Text (CompositeValue a)

-- | A hybrid JSON value representation which captures the context of a
-- remote join field in its type parameter.
data CompositeValue a
  = CVOrdValue !JO.Value
  | CVObject !(CompositeObject a)
  | CVObjectArray ![CompositeValue a]
  | CVFromRemote !a
  deriving (Show, Eq, Functor, Foldable, Traversable)

compositeValueToJSON :: CompositeValue JO.Value -> JO.Value
compositeValueToJSON = \case
  CVOrdValue v -> v
  CVObject obj -> JO.object $ InsOrdHashMap.toList $ InsOrdHashMap.map compositeValueToJSON obj
  CVObjectArray vals -> JO.array $ map compositeValueToJSON vals
  CVFromRemote v -> v

-- | A token used to uniquely identify the results within a join call that
-- are associated with a particular argument.
data ReplacementToken = ReplacementToken
  { -- | Unique identifier for a remote join call.
    _rtCallId :: !JoinCallId,
    -- | Unique identifier for an argument to some remote join.
    _rtArgumentId :: !JoinArgumentId
  }
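Because `CompositeValue` derives `Traversable`, resolving remote slots and collapsing the tree compose as `fmap collapse . traverse resolve`, which is exactly the shape of `joinResults`. A self-contained sketch with `String` leaves instead of `JO.Value` and a toy renderer instead of `JO.object`/`JO.array` (all names here are illustrative, not the module's):

```haskell
{-# LANGUAGE DeriveFoldable #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE DeriveTraversable #-}
{-# LANGUAGE LambdaCase #-}

import Data.List (intercalate)

-- Toy mirror of CompositeValue, with String leaves in place of JO.Value
-- and an association list in place of InsOrdHashMap.
data CV a
  = COrd String
  | CObj [(String, CV a)]
  | CArr [CV a]
  | CRemote a
  deriving (Show, Eq, Functor, Foldable, Traversable)

-- Collapse a fully-resolved tree, analogous to compositeValueToJSON.
render :: CV String -> String
render = \case
  COrd v -> v
  CObj kvs -> "{" <> intercalate "," [k <> ":" <> render v | (k, v) <- kvs] <> "}"
  CArr vs -> "[" <> intercalate "," (map render vs) <> "]"
  CRemote v -> v
```

Here `render <$> traverse resolve tree` (for some `resolve :: token -> Either err String`) plays the role of `fmap compositeValueToJSON . traverse replaceToken` above.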