Mirror of https://github.com/hasura/graphql-engine.git, synced 2024-12-17 12:31:52 +03:00
6e574f1bbe
## Description

### I want to speak to the `Manager`

Oh boy. This PR is both fairly straightforward and overreaching, so let's break it down.

For most network access, we need a [`HTTP.Manager`](https://hackage.haskell.org/package/http-client-0.1.0.0/docs/Network-HTTP-Client-Manager.html). It is created only once, at the top level, when starting the engine, and is then threaded through the application to wherever we need to make a network call. As of main, the way we do this is not standardized: most of the GraphQL execution code passes it "manually" as a function argument throughout the code. We also have a custom monad constraint, `HasHttpManagerM`, that describes a monad's ability to provide a manager. And, finally, several parts of the code store the manager in some kind of argument structure, such as `RunT`'s `RunCtx`.

This PR's first goal is to harmonize all of this: we always create the manager at the root, and we already have it when we do our very first `runReaderT`. Wouldn't it make sense for the rest of the code to not manually pass it anywhere, to not store it anywhere, but to always rely on the current monad providing it? This is, in short, what this PR does: it implements a constraint on the base monads, so that they provide the manager, and removes most explicit passing from the code.

### First come, first served

One way this PR goes a tiny bit further than "just" doing the aforementioned harmonization is that it starts the process of implementing the "Services oriented architecture" roughly outlined in this [draft document](https://docs.google.com/document/d/1FAigqrST0juU1WcT4HIxJxe1iEBwTuBZodTaeUvsKqQ/edit?usp=sharing). Instead of using the existing `HasHTTPManagerM`, this PR revamps it into the `ProvidesNetwork` service.

The idea is, again, that we should make all "external" dependencies of the engine, all things that the core of the engine doesn't care about, a "service". This allows us to define clear APIs for features, to choose different implementations based on which version of the engine we're running, and to harmonize our many scattered monadic constraints. Which is why this service is called "Network": we can refine it, moving forward, to be the constraint that defines how all network communication is to operate, instead of relying on disparate class constraints or hardcoded decisions. A comment in the code clarifies this intent.

### Side-effects? In my Haskell?

This PR also unavoidably touches some other aspects of the codebase. One such example: it introduces `Hasura.App.AppContext`, named after `HasuraPro.Context.AppContext`: a name for the reader structure at the base level. It also transforms `Handler` from a type alias to a newtype, as `Handler` is where we actually enforce HTTP limits; but without `Handler` being a distinct type, any code path could simply do a `runExceptT $ runReader` and forget to enforce them. (As a rule of thumb, I am starting to consider any straggling `runReaderT` or `runExceptT` as a code smell: we should not stack / unstack monads haphazardly, and every layer should be an opaque `newtype` with a corresponding run function.)
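To make the two ideas above concrete, here is a minimal, self-contained sketch of what a `ProvidesNetwork`-style service class and an opaque `Handler` newtype can look like. The class and method names (`ProvidesNetwork`, `askHTTPManager`) match their use in the module shown further down, but the module name and the shapes of `HandlerCtx`, `QErr`, `hcManager`, and `runHandler` are simplified, hypothetical stand-ins for illustration, not the engine's actual definitions:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- A minimal sketch only; names and shapes below are illustrative, not the
-- engine's real types.
module ProvidesNetworkSketch where

import Control.Monad.Except (ExceptT, runExceptT)
import Control.Monad.IO.Class (MonadIO)
import Control.Monad.Reader (ReaderT, asks, runReaderT)
import qualified Network.HTTP.Client as HTTP

-- The "Network" service: the base monad, rather than every call site,
-- is responsible for providing the shared HTTP manager.
class Monad m => ProvidesNetwork m where
  askHTTPManager :: m HTTP.Manager

-- Hypothetical stand-ins for the real reader context and error type.
data HandlerCtx = HandlerCtx {hcManager :: HTTP.Manager}

newtype QErr = QErr String

-- 'Handler' as an opaque newtype rather than a type alias: the only way in
-- or out is 'runHandler', so request-level policies (such as HTTP limits)
-- can be enforced in exactly one place.
newtype Handler m a = Handler (ReaderT HandlerCtx (ExceptT QErr m) a)
  deriving (Functor, Applicative, Monad, MonadIO)

instance Monad m => ProvidesNetwork (Handler m) where
  askHTTPManager = Handler (asks hcManager)

runHandler :: HandlerCtx -> Handler m a -> m (Either QErr a)
runHandler ctx (Handler action) = runExceptT (runReaderT action ctx)
```

With this shape, code that needs the manager only states a `ProvidesNetwork m` constraint and calls `askHTTPManager`, and the only way to discharge the `Handler` layers is through `runHandler`, which is where any limits would be enforced.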
## Further work

In several places, I have left TODOs where I have encountered things that suggest we should do further, unrelated cleanups. I'll write down the follow-up steps, either in the aforementioned document or on Slack. But, in short, at a glance, in approximate order, we could:
- delete `ExecutionCtx`, as it is only a subset of `ServerCtx`, and remove one more `runReaderT` call
- delete `ServerConfigCtx`, as it is only a subset of `ServerCtx`, and remove it from `RunCtx`
- remove `ServerCtx` from `HandlerCtx`, and make it part of `AppContext`, or even make it the `AppContext` altogether (since, at least for the OSS version, `AppContext` is, there again, only a subset)
- remove `CacheBuildParams` and `CacheBuild` altogether, as they're just a distinct stack that is a `ReaderT` on top of `IO` that contains, you guessed it, the same thing as `ServerCtx`
- move `RunT` out of `RQL.Types` and rename it, since after the previous cleanups **it only contains `UserInfo`**; it could be bundled with the authentication service, made a small implementation detail in `Hasura.Server.Auth`
- rename `PGMetadataStorageT` to something a bit more accurate, such as `App`, and enforce its `IO` base

This would significantly simplify our complex stack. From there, or in parallel, we can start moving existing dependencies to services. For the purpose of supporting the read replicas entitlement, we could move `MonadResolveSource` to a `SourceResolver` service, as attempted in #7653, and transform `UserAuthenticationM` into an `Authentication` service.

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7736
GitOrigin-RevId: 68cce710eb9e7d752bda1ba0c49541d24df8209f
161 lines
6.6 KiB
Haskell
module Hasura.GraphQL.Execute.Mutation
  ( convertMutationSelectionSet,
  )
where

import Data.Environment qualified as Env
import Data.HashMap.Strict qualified as Map
import Data.HashMap.Strict.InsOrd qualified as OMap
import Data.Tagged qualified as Tagged
import Hasura.Base.Error
import Hasura.GraphQL.Context
import Hasura.GraphQL.Execute.Action
import Hasura.GraphQL.Execute.Backend
import Hasura.GraphQL.Execute.Common
import Hasura.GraphQL.Execute.Instances ()
import Hasura.GraphQL.Execute.Remote
import Hasura.GraphQL.Execute.RemoteJoin.Collect qualified as RJ
import Hasura.GraphQL.Execute.Resolve
import Hasura.GraphQL.Namespace
import Hasura.GraphQL.ParameterizedQueryHash
import Hasura.GraphQL.Parser.Directives
import Hasura.GraphQL.Schema.Parser (runParse, toQErr)
import Hasura.GraphQL.Transport.HTTP.Protocol qualified as GH
import Hasura.Logging qualified as L
import Hasura.Metadata.Class
import Hasura.Prelude
import Hasura.QueryTags
import Hasura.RQL.IR
import Hasura.RQL.Types.Action
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.GraphqlSchemaIntrospection
import Hasura.RQL.Types.QueryTags
import Hasura.SQL.AnyBackend qualified as AB
import Hasura.Server.Prometheus (PrometheusMetrics (..))
import Hasura.Server.Types (RequestId (..))
import Hasura.Services
import Hasura.Session
import Hasura.Tracing qualified as Tracing
import Language.GraphQL.Draft.Syntax qualified as G
import Network.HTTP.Types qualified as HTTP

convertMutationAction ::
  ( MonadIO m,
    MonadError QErr m,
    MonadMetadataStorage m,
    ProvidesNetwork m
  ) =>
  Env.Environment ->
  L.Logger L.Hasura ->
  PrometheusMetrics ->
  UserInfo ->
  HTTP.RequestHeaders ->
  Maybe GH.GQLQueryText ->
  ActionMutation Void ->
  m ActionExecutionPlan
convertMutationAction env logger prometheusMetrics userInfo reqHeaders gqlQueryText action = do
  httpManager <- askHTTPManager
  case action of
    AMSync s ->
      pure $ AEPSync $ resolveActionExecution httpManager env logger prometheusMetrics userInfo s actionExecContext gqlQueryText
    AMAsync s ->
      AEPAsyncMutation <$> resolveActionMutationAsync s reqHeaders userSession
  where
    userSession = _uiSession userInfo
    actionExecContext = ActionExecContext reqHeaders (_uiSession userInfo)

convertMutationSelectionSet ::
  forall m.
  ( Tracing.MonadTrace m,
    MonadIO m,
    MonadError QErr m,
    MonadMetadataStorage m,
    MonadGQLExecutionCheck m,
    MonadQueryTags m,
    ProvidesNetwork m
  ) =>
  Env.Environment ->
  L.Logger L.Hasura ->
  PrometheusMetrics ->
  GQLContext ->
  SQLGenCtx ->
  UserInfo ->
  HTTP.RequestHeaders ->
  [G.Directive G.Name] ->
  G.SelectionSet G.NoFragments G.Name ->
  [G.VariableDefinition] ->
  GH.GQLReqUnparsed ->
  SetGraphqlIntrospectionOptions ->
  RequestId ->
  -- | Graphql Operation Name
  Maybe G.Name ->
  m (ExecutionPlan, ParameterizedQueryHash)
convertMutationSelectionSet
  env
  logger
  prometheusMetrics
  gqlContext
  SQLGenCtx {stringifyNum}
  userInfo
  reqHeaders
  directives
  fields
  varDefs
  gqlUnparsed
  introspectionDisabledRoles
  reqId
  maybeOperationName = do
    mutationParser <-
      onNothing (gqlMutationParser gqlContext) $
        throw400 ValidationFailed "no mutations exist"

    (resolvedDirectives, resolvedSelSet) <- resolveVariables varDefs (fromMaybe Map.empty (GH._grVariables gqlUnparsed)) directives fields
    -- Parse the GraphQL query into the RQL AST
    unpreparedQueries ::
      RootFieldMap (MutationRootField UnpreparedValue) <-
      liftEither $ mutationParser resolvedSelSet

    -- Process directives on the mutation
    _dirMap <- toQErr $ runParse (parseDirectives customDirectives (G.DLExecutable G.EDLMUTATION) resolvedDirectives)

    let parameterizedQueryHash = calculateParameterizedQueryHash resolvedSelSet

        resolveExecutionSteps rootFieldName rootFieldUnpreparedValue = do
          case rootFieldUnpreparedValue of
            RFDB sourceName exists ->
              AB.dispatchAnyBackend @BackendExecute
                exists
                \(SourceConfigWith (sourceConfig :: SourceConfig b) queryTagsConfig (MDBR db)) -> do
                  let mReqId =
                        case _qtcOmitRequestId <$> queryTagsConfig of
                          -- we include the request id only if a user explicitly wishes for it to be included.
                          Just False -> Just reqId
                          _ -> Nothing
                      mutationQueryTagsAttributes = encodeQueryTags $ QTMutation $ MutationMetadata mReqId maybeOperationName rootFieldName parameterizedQueryHash
                      queryTagsComment = Tagged.untag $ createQueryTags @m mutationQueryTagsAttributes queryTagsConfig
                      (noRelsDBAST, remoteJoins) = RJ.getRemoteJoinsMutationDB db
                  dbStepInfo <- flip runReaderT queryTagsComment $ mkDBMutationPlan @b userInfo env stringifyNum sourceName sourceConfig noRelsDBAST reqHeaders maybeOperationName
                  pure $ ExecStepDB [] (AB.mkAnyBackend dbStepInfo) remoteJoins
            RFRemote remoteField -> do
              RemoteSchemaRootField remoteSchemaInfo resultCustomizer resolvedRemoteField <- runVariableCache $ resolveRemoteField userInfo remoteField
              let (noRelsRemoteField, remoteJoins) = RJ.getRemoteJoinsGraphQLField resolvedRemoteField
              pure $
                buildExecStepRemote remoteSchemaInfo resultCustomizer G.OperationTypeMutation noRelsRemoteField remoteJoins (GH._grOperationName gqlUnparsed)
            RFAction action -> do
              let (noRelsDBAST, remoteJoins) = RJ.getRemoteJoinsActionMutation action
              (actionName, _fch) <- pure $ case noRelsDBAST of
                AMSync s -> (_aaeName s, _aaeForwardClientHeaders s)
                AMAsync s -> (_aamaName s, _aamaForwardClientHeaders s)
              plan <- convertMutationAction env logger prometheusMetrics userInfo reqHeaders (Just (GH._grQuery gqlUnparsed)) noRelsDBAST
              pure $ ExecStepAction plan (ActionsInfo actionName _fch) remoteJoins -- `_fch` represents the `forward_client_headers` option from the action
              -- definition which is currently being ignored for actions that are mutations
            RFRaw customFieldVal -> flip onLeft throwError =<< executeIntrospection userInfo customFieldVal introspectionDisabledRoles
            RFMulti lst -> do
              allSteps <- traverse (resolveExecutionSteps rootFieldName) lst
              pure $ ExecStepMulti allSteps

    -- Transform the RQL AST into a prepared SQL query
    txs <- flip OMap.traverseWithKey unpreparedQueries $ resolveExecutionSteps
    return (txs, parameterizedQueryHash)