module Hasura.GraphQL.Execute.Backend
  ( BackendExecute (..),
    DBStepInfo (..),
    ActionResult (..),
    withNoStatistics,
    ExecutionPlan,
    ExecutionStep (..),
    ExplainPlan (..),

Allow backend execution to happen on the base app monad.

### Description

Each backend executes queries against the database in a slightly different stack: Postgres uses its own `TxET`, MSSQL uses a variant of it, BigQuery simply runs in `ExceptT QErr IO`... To accommodate those variations, we had originally introduced an `ExecutionMonad b` type family in `BackendExecute`, allowing each backend to describe its own stack. It was then up to that backend's `BackendTransport` instance to implement running said stack and converting the result back into our main app monad.

However, this was not without complications. `TraceT` is one of them: as it usually needs to be at the top of the stack, converting from one stack to the other implies the use of `interpTraceT`, which is quite monstrous. Furthermore, as part of the Entitlement Services work, we're trying to move to a "Services" architecture in which the entire engine runs in one base monad that delegates features and dependencies to monad constraints; as a result, we'd like to minimize the number of different monad stacks we have to maintain and translate to and from in the codebase.

To improve things, this PR changes `ExecutionMonad b` from an _absolute_ stack to a _relative_ one: that is, what needs to be stacked on top of our base monad for the execution. In `Transport`, we then only need to pop the top of the stack, and voilà. This greatly simplifies the implementation of the backends, as there's no longer any need for stack transformations: MySQL's implementation becomes a `runIdentityT`! This also removes most mentions of `TraceT` from the execution code, since it's no longer required: we can rely on the base monad's existing `MonadTrace` constraint.

To continue encapsulating monadic actions in `DBStepInfo` and avoid threading a bunch of `forall`s all over the place, this PR introduces a small local helper: `OnBaseMonad`. The one downside of all this is that it requires adding a `MonadBaseControl IO m` constraint all over the place: previously, we would run directly in `IO` and lift, and would therefore not need to bring that constraint all the way.

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7789
GitOrigin-RevId: e9b2e431c5c47fa9851abf87545c0415ff6d1a12
2023-02-09 17:38:33 +03:00
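The "relative stack" idea above can be sketched in a few lines. This is an illustration only: `OnBase` and `popIdentity` are hypothetical stand-ins for the real `OnBaseMonad` defined later in this module and for a backend transport's runner; only `IdentityT`/`runIdentityT` are real library definitions.

```haskell
{-# LANGUAGE RankNTypes #-}

import Control.Monad.Trans.Identity (IdentityT, runIdentityT)

-- A step stores a computation relative to an arbitrary base monad @m@:
-- only the transformer @t@ stacked on top is backend-specific.
newtype OnBase t a = OnBase {runOnBase :: forall m. (Monad m) => t m a}

-- A backend whose execution stack adds nothing on top of the base monad
-- (the MySQL case mentioned above) runs a step by peeling off IdentityT.
popIdentity :: (Monad m) => OnBase IdentityT a -> m a
popIdentity step = runIdentityT (runOnBase step)
```

Under the old scheme, the same backend would instead have had to translate a whole concrete stack back into the app monad.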
    OnBaseMonad (..),
    convertRemoteSourceRelationship,
  )
where

import Control.Monad.Trans.Control (MonadBaseControl)
import Data.Aeson qualified as J
import Data.Aeson.Casing qualified as J
import Data.Aeson.Ordered qualified as JO
import Data.Kind (Type)
import Data.Text.Extended
import Data.Text.NonEmpty (mkNonEmptyTextUnsafe)
import Hasura.Base.Error
import Hasura.EncJSON
import Hasura.GraphQL.Execute.Action.Types (ActionExecutionPlan)
import Hasura.GraphQL.Execute.RemoteJoin.Types
import Hasura.GraphQL.Execute.Subscription.Plan
import Hasura.GraphQL.Namespace (RootFieldAlias, RootFieldMap)
import Hasura.GraphQL.Transport.HTTP.Protocol qualified as GH
import Hasura.Prelude
import Hasura.QueryTags
import Hasura.RQL.IR
import Hasura.RQL.Types.Action
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.BackendType
import Hasura.RQL.Types.Column (ColumnType, fromCol)
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.Relationships.Local (Nullable (..))
import Hasura.RQL.Types.ResultCustomization
import Hasura.RQL.Types.Schema.Options qualified as Options

scaffolding for remote-schemas module

The main aim of the PR is:

1. To set up a module structure for the 'remote-schemas' package.
2. To move parts of the remote schema codebase into the new module structure to validate it.

## Notes to the reviewer

Why a PR with a large-ish diff?

1. We've been making progress on the MM project but we don't yet know how long it is going to take us to get to the first milestone. To understand this better, we need to figure out the unknowns as soon as possible. Hence I've taken a stab at the first two items in the [end-state](https://gist.github.com/0x777/ca2bdc4284d21c3eec153b51dea255c9) document to figure out the unknowns. Unsurprisingly, there are a bunch of issues that we haven't discussed earlier. These are documented in the 'open questions' section.
1. The diff is large but that is only code moved around, and I've added a section that documents how things are moved. In addition, there are a fair number of PR comments to help with the review process.

## Changes in the PR

### Module structure

Sets up the module structure as follows:

```
Hasura/
  RemoteSchema/
    Metadata/
      Types.hs
    SchemaCache/
      Types.hs
      Permission.hs
      RemoteRelationship.hs
      Build.hs
    MetadataAPI/
      Types.hs
      Execute.hs
```

### 1. Types representing metadata are moved

Types that capture metadata information (currently scattered across several RQL modules) are moved into `Hasura.RemoteSchema.Metadata.Types`.

- This new module only depends on very 'core' modules such as `Hasura.Session` for the notion of roles and `Hasura.Incremental` for the `Cacheable` typeclass.
- The requirement on database modules is avoided by generalizing the remote schemas metadata to accept an arbitrary 'r' for a remote relationship definition.

### 2. SchemaCache related types and build logic have been moved

Types that represent remote schema information in SchemaCache are moved into `Hasura.RemoteSchema.SchemaCache.Types`.

Similar to `H.RS.Metadata.Types`, this module depends on 'core' modules except for `Hasura.GraphQL.Parser.Variable`. It has something to do with remote relationships, but I haven't spent time looking into it. The validation of 'remote relationships to remote schema' is also something that needs to be looked at.

Rips out the logic that builds the remote schema's SchemaCache information from the monolithic `buildSchemaCacheRule` and moves it into `Hasura.RemoteSchema.SchemaCache.Build`. Further, the `.SchemaCache.Permission` and `.SchemaCache.RemoteRelationship` modules have been created from existing modules that capture schema cache building logic for those two components.

This was a fair amount of work. On main, remote schema SchemaCache information is currently built in two phases: in the first phase, 'permissions' and 'remote relationships' are ignored, and in the second phase they are filled in.

While remote relationships can only be resolved after partially resolving sources and other remote schemas, the same isn't true for permissions. Further, most of the work that is done to resolve remote relationships can be moved to the first phase so that the second phase can be a very simple traversal.

This is the approach that was taken: resolve permissions and as much of the remote relationships information as possible in the first phase.

### 3. Metadata APIs related types and build logic have been moved

The types that represent remote schema related metadata APIs and the execution logic have been moved to the `Hasura.RemoteSchema.MetadataAPI.Types` and `.Execute` modules respectively.

## Open questions:

1. `Hasura.RemoteSchema.Metadata.Types` is so called because I was hoping that all of the metadata related APIs of remote schemas could be brought in at `Hasura.RemoteSchema.Metadata.API`. However, as metadata APIs depended on functions from the `SchemaCache` module (see [1](https://github.com/hasura/graphql-engine-mono/blob/ceba6d62264603ee5d279814677b29bcc43ecaea/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs#L55) and [2](https://github.com/hasura/graphql-engine-mono/blob/ceba6d62264603ee5d279814677b29bcc43ecaea/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs#L91)), it made more sense to create a separate top-level module for `MetadataAPI`s.
   Maybe we can just have `Hasura.RemoteSchema.Metadata` and get rid of the extra nesting, or have `Hasura.RemoteSchema.Metadata.{Core,Permission,RemoteRelationship}` if we want to break them down further.
1. `buildRemoteSchemas` in `H.RS.SchemaCache.Build` has the following type:

```haskell
buildRemoteSchemas ::
  ( ArrowChoice arr,
    Inc.ArrowDistribute arr,
    ArrowWriter (Seq CollectedInfo) arr,
    Inc.ArrowCache m arr,
    MonadIO m,
    HasHttpManagerM m,
    Inc.Cacheable remoteRelationshipDefinition,
    ToJSON remoteRelationshipDefinition,
    MonadError QErr m
  ) =>
  Env.Environment ->
  ( (Inc.Dependency (HashMap RemoteSchemaName Inc.InvalidationKey), OrderedRoles),
    [RemoteSchemaMetadataG remoteRelationshipDefinition]
  )
    `arr` HashMap RemoteSchemaName (PartiallyResolvedRemoteSchemaCtxG remoteRelationshipDefinition, MetadataObject)
```

Note the dependence on `CollectedInfo`, which is defined as

```haskell
data CollectedInfo
  = CIInconsistency InconsistentMetadata
  | CIDependency
      MetadataObject
      -- ^ for error reporting on missing dependencies
      SchemaObjId
      SchemaDependency
  deriving (Eq)
```

This pretty much means that remote schemas depend on types from databases, actions, ....
How do we fix this? Maybe introduce a typeclass such as `ArrowCollectRemoteSchemaDependencies`, which is defined in `Hasura.RemoteSchema` and then implemented in graphql-engine?
1. The dependency on `buildSchemaCacheFor` in `.MetadataAPI.Execute`, which has the following signature:

```haskell
buildSchemaCacheFor ::
  (QErrM m, CacheRWM m, MetadataM m) =>
  MetadataObjId ->
  MetadataModifier ->
```

This can be easily resolved if we restrict what the metadata APIs are allowed to do. Currently, they have unfettered access to modify the SchemaCache (the `CacheRWM` constraint):

```haskell
runAddRemoteSchema ::
  ( QErrM m,
    CacheRWM m,
    MonadIO m,
    HasHttpManagerM m,
    MetadataM m,
    Tracing.MonadTrace m
  ) =>
  Env.Environment ->
  AddRemoteSchemaQuery ->
  m EncJSON
```

If this is instead changed to restrict remote schema APIs to modifying only remote schema metadata (while retaining access to the remote schemas part of the schema cache), this dependency is completely removed:

```haskell
runAddRemoteSchema ::
  ( QErrM m,
    MonadIO m,
    HasHttpManagerM m,
    MonadReader RemoteSchemasSchemaCache m,
    MonadState RemoteSchemaMetadata m,
    Tracing.MonadTrace m
  ) =>
  Env.Environment ->
  AddRemoteSchemaQuery ->
  m RemoteSchemeMetadataObjId
```

The idea is that the core graphql-engine would call these functions and then call `buildSchemaCacheFor`.

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6291
GitOrigin-RevId: 51357148c6404afe70219afa71bd1d59bdf4ffc6
2022-10-21 06:13:07 +03:00
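The restriction proposed in the last open question — read-only access to the remote schema slice of the cache, write access only to remote schema metadata — could be modeled as a tiny state-passing type. This is a hypothetical sketch: every name below is illustrative and none of it is real graphql-engine code.

```haskell
-- Placeholder types standing in for the real cache and metadata slices.
data RemoteSchemasCache = RemoteSchemasCache
data RemoteSchemasMetadata = RemoteSchemasMetadata

-- A restricted API action: it can read the cache (MonadReader in the
-- proposal) and modify only the remote schema metadata (MonadState).
newtype RestrictedAPI a = RestrictedAPI
  { runRestrictedAPI ::
      RemoteSchemasCache -> RemoteSchemasMetadata -> (a, RemoteSchemasMetadata)
  }
```

Core graphql-engine would run such an action, persist the returned metadata, and then invoke `buildSchemaCacheFor` itself, keeping `CacheRWM` out of the remote-schemas package entirely.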

import Hasura.RemoteSchema.SchemaCache
import Hasura.SQL.AnyBackend qualified as AB
import Hasura.Session
import Hasura.Tracing (MonadTrace)
import Language.GraphQL.Draft.Syntax qualified as G
import Network.HTTP.Types qualified as HTTP

-- | This typeclass encapsulates how a given backend translates a root field into an execution
-- plan. For now, each root field maps to one execution step, but in the future, when we have
-- a client-side dataloader, each root field might translate into a multi-step plan.
class
  ( Backend b,
    ToTxt (MultiplexedQuery b),
    Show (ResolvedConnectionTemplate b),
    Eq (ResolvedConnectionTemplate b),
    Hashable (ResolvedConnectionTemplate b)
  ) =>
  BackendExecute (b :: BackendType)
  where
  -- generated query information
  type PreparedQuery b :: Type
  type MultiplexedQuery b :: Type
  type ExecutionMonad b :: (Type -> Type) -> (Type -> Type)

  -- execution plan generation
  mkDBQueryPlan ::
    forall m.
    ( MonadError QErr m,
      MonadQueryTags m,
      MonadReader QueryTagsComment m
    ) =>
    UserInfo ->
    SourceName ->
    SourceConfig b ->
    QueryDB b Void (UnpreparedValue b) ->
    [HTTP.Header] ->
    Maybe G.Name ->
    m (DBStepInfo b)

  mkDBMutationPlan ::
    forall m.
    ( MonadError QErr m,
      MonadQueryTags m,
      MonadReader QueryTagsComment m
    ) =>
    UserInfo ->
    Options.StringifyNumbers ->
    SourceName ->
    SourceConfig b ->
    MutationDB b Void (UnpreparedValue b) ->
    [HTTP.Header] ->
    Maybe G.Name ->
    m (DBStepInfo b)

  mkLiveQuerySubscriptionPlan ::
    forall m.
    ( MonadError QErr m,
      MonadIO m,
      MonadBaseControl IO m,
      MonadReader QueryTagsComment m
    ) =>
    UserInfo ->
    SourceName ->
    SourceConfig b ->
    Maybe G.Name ->
    RootFieldMap (QueryDB b Void (UnpreparedValue b)) ->
    [HTTP.Header] ->
    Maybe G.Name ->
    m (SubscriptionQueryPlan b (MultiplexedQuery b))

  mkDBStreamingSubscriptionPlan ::
    forall m.
    ( MonadError QErr m,
      MonadIO m,
      MonadBaseControl IO m,
      MonadReader QueryTagsComment m
    ) =>
    UserInfo ->
    SourceName ->
    SourceConfig b ->
    (RootFieldAlias, QueryDB b Void (UnpreparedValue b)) ->
    [HTTP.Header] ->
    Maybe G.Name ->
    m (SubscriptionQueryPlan b (MultiplexedQuery b))

  mkDBQueryExplain ::
    forall m.
    (MonadError QErr m) =>
    RootFieldAlias ->
    UserInfo ->
    SourceName ->
    SourceConfig b ->
    QueryDB b Void (UnpreparedValue b) ->
    [HTTP.Header] ->
    Maybe G.Name ->
    m (AB.AnyBackend DBStepInfo)

  mkSubscriptionExplain ::
    ( MonadError QErr m,
      MonadIO m,
      MonadBaseControl IO m
    ) =>
    SubscriptionQueryPlan b (MultiplexedQuery b) ->
    m SubscriptionQueryPlanExplanation

  mkDBRemoteRelationshipPlan ::
    forall m.
    ( MonadError QErr m,
      MonadQueryTags m
    ) =>
    UserInfo ->
    SourceName ->
    SourceConfig b ->
    -- | List of json objects, each of which becomes a row of the table.
    NonEmpty J.Object ->
    -- | The above objects have this schema.
    HashMap FieldName (Column b, ScalarType b) ->
    -- | This is a field name from the lhs that *has* to be selected in the
    -- response along with the relationship. It is populated in
    -- `Hasura.GraphQL.Execute.RemoteJoin.Join.processRemoteJoins_`, and
    -- the function `convertRemoteSourceRelationship` below assumes it
    -- to be returned as either a number or a string with a number in it.
    FieldName ->
    (FieldName, SourceRelationshipSelection b Void UnpreparedValue) ->
    [HTTP.Header] ->
    Maybe G.Name ->
    Options.StringifyNumbers ->
    m (DBStepInfo b)

-- | This is a helper function to convert a remote source's relationship to a
-- normal relationship to a temporary table. This function can be used to
-- implement the `executeRemoteRelationship` function in databases which support
-- constructing a temporary table for a list of json objects.
convertRemoteSourceRelationship ::
  forall b.
  (Backend b) =>
  -- | Join columns for the relationship
  HashMap (Column b) (Column b) ->
  -- | The LHS of the join: this is the expression which selects from json
  -- objects
  SelectFromG b (UnpreparedValue b) ->
  -- | This is the __argument__ id column that needs to be added to the response.
  -- It is used by the remote joins processing logic to convert the
  -- response from upstream to join indices.
  Column b ->
  -- | This is the type of the __argument__ id column
  ColumnType b ->
  -- | The relationship column and its name (how it should be selected in the
  -- response)
  (FieldName, SourceRelationshipSelection b Void UnpreparedValue) ->
  Options.StringifyNumbers ->
  QueryDB b Void (UnpreparedValue b)
convertRemoteSourceRelationship
  columnMapping
  selectFrom
  argumentIdColumn
  argumentIdColumnType
  (relationshipName, relationship)
  stringifyNumbers =
    QDBMultipleRows simpleSelect
    where
      -- TODO: FieldName should also have been a wrapper around NonEmptyText
      relName = RelName $ mkNonEmptyTextUnsafe $ getFieldNameTxt relationshipName

      relationshipField = case relationship of
        SourceRelationshipObject s ->
          AFObjectRelation $ AnnRelationSelectG relName columnMapping Nullable s
        SourceRelationshipArray s ->
          AFArrayRelation $ ASSimple $ AnnRelationSelectG relName columnMapping Nullable s
        SourceRelationshipArrayAggregate s ->
          AFArrayRelation $ ASAggregate $ AnnRelationSelectG relName columnMapping Nullable s

      argumentIdField =
        ( fromCol @b argumentIdColumn,
          AFColumn
            $ AnnColumnField
              { _acfColumn = argumentIdColumn,
                _acfType = argumentIdColumnType,
                _acfAsText = False,
                _acfArguments = Nothing,
                _acfCaseBoolExpression = Nothing
              }
        )

      simpleSelect =
        AnnSelectG
          { _asnFields = [argumentIdField, (relationshipName, relationshipField)],
            _asnFrom = selectFrom,
            _asnPerm = TablePerm annBoolExpTrue Nothing,
            _asnArgs = noSelectArgs,
            _asnStrfyNum = stringifyNumbers,
            _asnNamingConvention = Nothing
          }

data DBStepInfo b = DBStepInfo
  { dbsiSourceName :: SourceName,
    dbsiSourceConfig :: SourceConfig b,
    dbsiPreparedQuery :: Maybe (PreparedQuery b),
    dbsiAction :: OnBaseMonad (ExecutionMonad b) (ActionResult b),
    dbsiResolvedConnectionTemplate :: ResolvedConnectionTemplate b
  }

data ActionResult b = ActionResult
  { arStatistics :: Maybe (ExecutionStatistics b),
    arResult :: EncJSON
  }

-- | Lift a result from the database into an 'ActionResult'.
withNoStatistics :: EncJSON -> ActionResult b
withNoStatistics arResult = ActionResult {arStatistics = Nothing, arResult}

-- | Provides an abstraction over the base monad in which a computation runs.
--
-- Given a transformer @t@ and a type @a@, @OnBaseMonad t a@ represents a
-- computation of type @t m a@, for any base monad @m@. This allows 'DBStepInfo'
-- to store a backend-specific computation, using a backend-specific monad
-- transformer, on top of the base app monad, without 'DBStepInfo' needing to
-- know about the base monad @m@.
--
-- However, this kind of type erasure forces us to bundle all of the constraints
-- on the base monad @m@ here. The constraints here are the union of the
-- constraints required across all backends. If it were possible to express
-- constraint functions of the form @(Type -> Type) -> Constraint@ at the type
-- level, we could make the list of constraints a type family in
-- 'BackendExecute', allowing each backend to specify its own specific
-- constraints; and we could then provide the list of constraints as an
-- additional argument to @OnBaseMonad@, pushing the requirement to implement
-- the union of all constraints to the base execution functions.
--
-- All backends require @MonadError QErr@ to report errors, and 'MonadIO' to be
-- able to communicate over the network. Most of them require 'MonadTrace' to
-- be able to create new spans as part of the execution, and several use
-- @MonadBaseControl IO@ to use 'try' in their error handling.
newtype OnBaseMonad t a = OnBaseMonad
  { runOnBaseMonad :: forall m. (Functor (t m), MonadIO m, MonadBaseControl IO m, MonadTrace m, MonadError QErr m) => t m a
|
Allow backend execution to happen on the base app monad.
### Description
Each Backend executes queries against the database in a slightly different stack: Postgres uses its own `TXeT`, MSSQL uses a variant of it, BigQuery is simply in `ExceptT QErr IO`... To accommodate those variations, we had originally introduced an `ExecutionMonad b` type family in `BackendExecute`, allowing each backend to describe its own stack. It was then up to that backend's `BackendTransport` instance to implement running said stack, and converting the result back into our main app monad.
However, this was not without complications: `TraceT` is one of them: as it usually needs to be on the top of the stack, converting from one stack to the other implies the use `interpTraceT`, which is quite monstrous. Furthermore, as part of the Entitlement Services work, we're trying to move to a "Services" architecture in which the entire engine runs in one base monad, that delegates features and dependencies to monad constraints; and as a result we'd like to minimize the number of different monad stacks we have to maintain and translate from and to in the codebase.
To improve things, this PR changes `ExecutionMonad b` from an _absolute_ stack to a _relative_ one: i.e.: what needs to be stacked on top of our base monad for the execution. In `Transport`, we then only need to pop the top of the stack, and voila. This greatly simplifies the implementation of the backends, as there's no longer any need to do any stack transformation: MySQL's implementation becomes a `runIdentityT`! This also removes most mentions of `TraceT` from the execution code since it's no longer required: we can rely on the base monad's existing `MonadTrace` constraint.
To continue encapsulating monadic actions in `DBStepInfo` and avoid threading a bunch of `forall` all over the place, this PR introduces a small local helper: `OnBaseMonad`. One only downside of all this is that this requires adding `MonadBaseControl IO m` constraint all over the place: previously, we would run directly on `IO` and lift, and would therefore not need to bring that constraint all the way.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7789
GitOrigin-RevId: e9b2e431c5c47fa9851abf87545c0415ff6d1a12
2023-02-09 17:38:33 +03:00
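The relative-stack idea described in this commit message can be sketched in a few lines. The code below is illustrative only: `OnBase`, `runOnBase`, and `trivialStep` are stand-in names, not the engine's actual API. The wrapper stores a computation in some transformer `t` layered over *any* base monad `m`; the transport layer later instantiates `m` and pops the transformer off.

```haskell
{-# LANGUAGE RankNTypes #-}

import Control.Monad.Trans.Identity (IdentityT (..))

-- "t on top of whatever base monad the engine runs in".
newtype OnBase t a = OnBase {runOnBase :: forall m. (Monad m, Functor (t m)) => t m a}

-- Results can be mapped over without committing to a base monad,
-- mirroring the Functor instance defined in this module.
instance Functor (OnBase t) where
  fmap f (OnBase xs) = OnBase (fmap f xs)

-- A backend that needs no extra structure uses IdentityT, which is how
-- "MySQL's implementation becomes a runIdentityT".
trivialStep :: a -> OnBase IdentityT a
trivialStep x = OnBase (IdentityT (pure x))

-- The transport layer "pops the top of the stack", landing in the base monad.
runTrivial :: Monad m => OnBase IdentityT a -> m a
runTrivial step = runIdentityT (runOnBase step)

main :: IO ()
main = do
  n <- runTrivial (fmap (+ 1) (trivialStep (41 :: Int)))
  print n -- prints 42
```

Because the `forall m` lives inside the newtype, callers never name a concrete stack; they only need the constraints (`MonadBaseControl IO m`, etc. in the real code) available at the point where they finally run the step.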
}

instance Functor (OnBaseMonad t) where
  fmap f (OnBaseMonad xs) = OnBaseMonad (fmap f xs)
-- | The result of an explain query: for a given root field (denoted by its name), the generated
-- SQL query, and the detailed explanation obtained from the database (if any). We mostly use this
-- type as an intermediary step, and immediately transform any value we obtain into an equivalent
-- JSON representation.
data ExplainPlan = ExplainPlan
  { _fpField :: !RootFieldAlias,
    _fpSql :: !(Maybe Text),
    _fpPlan :: !(Maybe [Text])
  }
  deriving (Show, Eq, Generic)

instance J.ToJSON ExplainPlan where
  toJSON = J.genericToJSON $ J.aesonPrefix J.camelCase
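The `aesonPrefix camelCase` options above strip the `_fp` prefix and lower-case the first word, so `_fpField`, `_fpSql`, and `_fpPlan` serialize under the keys `"field"`, `"sql"`, and `"plan"`. A tiny pure sketch of that key transformation (illustrative only, not the actual `aeson-casing` implementation):

```haskell
import Data.Char (isUpper, toLower)

-- Drop leading underscores and the lowercase prefix, then lower-case the
-- first remaining character: "_fpField" -> "field".
fieldLabel :: String -> String
fieldLabel lbl = case dropWhile (not . isUpper) (dropWhile (== '_') lbl) of
  [] -> lbl -- no uppercase found: leave the label untouched
  (c : cs) -> toLower c : cs

main :: IO ()
main = mapM_ (putStrLn . fieldLabel) ["_fpField", "_fpSql", "_fpPlan"]
-- prints: field, sql, plan
```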
-- | One execution step in processing a GraphQL query (e.g. one root field).
data ExecutionStep where
  -- | A query to execute against the database
  ExecStepDB ::
    HTTP.ResponseHeaders ->
    AB.AnyBackend DBStepInfo ->
    Maybe RemoteJoins ->
    ExecutionStep
  -- | Execute an action
  ExecStepAction ::
    ActionExecutionPlan ->
    ActionsInfo ->
    Maybe RemoteJoins ->
    ExecutionStep
  -- | A GraphQL query to execute against a remote schema
  ExecStepRemote ::
    !RemoteSchemaInfo ->
    !ResultCustomizer ->
    !GH.GQLReqOutgoing ->
Enable remote joins from remote schemas in the execution engine.
### Description
This PR adds the ability to perform remote joins from remote schemas in the engine. To do so, we alter the definition of an `ExecutionStep` targeting a remote schema: the `ExecStepRemote` constructor now expects a `Maybe RemoteJoins`. This new argument is used when processing the execution step, in the transport layer (either `Transport.HTTP` or `Transport.WebSocket`).
For this `Maybe RemoteJoins` to be extracted from a parsed query, this PR also extends the `Execute.RemoteJoin.Collect` module, to implement "collection" from a selection set. Not only do those new functions extract the remote joins, but they also apply all necessary transformations to the selection sets (such as inserting the necessary "phantom" fields used as join keys).
Finally in `Execute.RemoteJoin.Join`, we make two changes. First, we now always look for nested remote joins, regardless of whether the join we just performed went to a source or a remote schema; and second we adapt our join tree logic according to the special cases that were added to deal with remote server edge cases.
Additionally, this PR refactors / cleans / documents `Execute.RemoteJoin.RemoteServer`. This is not required as part of this change and could be moved to a separate PR if needed (a similar cleanup of `Join` is done independently in #3894). It also introduces a draft of a new documentation page for this project, that will be refined in the release PR that ships the feature (either #3069 or a copy of it).
While this PR extends the engine, it doesn't plug such relationships in the schema, meaning that, as of this PR, the new code paths in `Join` are technically unreachable. Adding the corresponding schema code and, ultimately, enabling the metadata API will be done in subsequent PRs.
### Keeping track of concrete type names
The main change this PR makes to the existing `Join` code is to handle a new reserved field we sometimes use when targeting remote servers: the `__hasura_internal_typename` field. In short, a GraphQL selection set can sometimes "branch" based on the concrete "runtime type" of the object on which the selection happens:
```graphql
query {
  author(id: 53478) {
    ... on Writer {
      name
      articles {
        title
      }
    }
    ... on Artist {
      name
      articles {
        title
      }
    }
  }
}
```
If both of those `articles` are remote joins, we need to be able, when we get the answer, to differentiate between the two different cases. We do this by asking for `__typename`, to be able to decide if we're in the `Writer` or the `Artist` branch of the query.
To avoid further processing / customization of results, we only insert this `__hasura_internal_typename: __typename` field in the query in the case of unions or interfaces AND if we have the guarantee that we will be processing the request as part of the remote joins "folding": that is, if there's any remote join in this branch of the tree. Otherwise, we don't insert the field, and we leave that part of the response untouched.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3810
GitOrigin-RevId: 89aaf16274d68e26ad3730b80c2d2fdc2896b96c
2022-03-09 06:17:28 +03:00
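The dispatch described in this commit message can be sketched as a lookup on the reserved typename key. Everything below is illustrative (the map of joins per type and the `String`-valued response object are stand-ins, not the engine's real join-tree types): once `__hasura_internal_typename` is present in a response object, the join-folding code can pick which fragment's remote joins apply.

```haskell
import qualified Data.Map.Strict as M

type ResponseObject = M.Map String String

-- Remote joins registered per concrete runtime type (both branches of the
-- example query above join on "articles").
joinsByType :: M.Map String [String]
joinsByType = M.fromList [("Writer", ["articles"]), ("Artist", ["articles"])]

-- Branch on the reserved alias: no typename means this branch of the
-- response had no remote joins and is left untouched.
remoteJoinsFor :: ResponseObject -> [String]
remoteJoinsFor obj =
  case M.lookup "__hasura_internal_typename" obj of
    Just ty -> M.findWithDefault [] ty joinsByType
    Nothing -> []

main :: IO ()
main =
  print (remoteJoinsFor (M.fromList [("__hasura_internal_typename", "Writer"), ("name", "ann")]))
-- prints ["articles"]
```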
    Maybe RemoteJoins ->
    ExecutionStep
  -- | Output a plain JSON object
  ExecStepRaw ::
    JO.Value ->
    ExecutionStep
  ExecStepMulti ::
    [ExecutionStep] ->
    ExecutionStep

-- | The series of steps that need to be executed for a given query. For now, those steps are all
-- independent. In the future, when we implement a client-side dataloader and generalized joins,
-- this will need to be changed into an annotated tree.
type ExecutionPlan = RootFieldMap ExecutionStep
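The "independent steps" shape of `ExecutionPlan` can be modeled with a plain map from root-field name to step. This is a simplified stand-in, not the engine's types: `Step` and its `String`/`[Step]` payloads replace the real constructor arguments, and `Data.Map` replaces `RootFieldMap`.

```haskell
{-# LANGUAGE GADTs #-}

import qualified Data.Map.Strict as M

-- A cut-down ExecutionStep: one constructor per execution target.
data Step where
  StepDB :: String -> Step
  StepRemote :: String -> Step
  StepRaw :: String -> Step
  StepMulti :: [Step] -> Step

-- One independent step per root field, mirroring ExecutionPlan.
type Plan = M.Map String Step

-- Each step runs on its own; StepMulti concatenates its sub-steps' results.
execute :: Step -> [String]
execute (StepDB q) = ["db: " ++ q]
execute (StepRemote q) = ["remote: " ++ q]
execute (StepRaw v) = ["raw: " ++ v]
execute (StepMulti ss) = concatMap execute ss

main :: IO ()
main = print (fmap execute plan)
  where
    plan :: Plan
    plan = M.fromList [("users", StepDB "select 1"), ("version", StepRaw "\"v2\"")]
```

Because the steps are independent, `fmap execute` over the plan is all the orchestration needed; an annotated tree (as the comment above anticipates) would replace the flat map once steps can depend on each other.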