harmonize network manager handling

## Description

### I want to speak to the `Manager`

Oh boy. This PR is both fairly straightforward and overreaching, so let's break it down.

For most network access, we need a [`HTTP.Manager`](https://hackage.haskell.org/package/http-client-0.1.0.0/docs/Network-HTTP-Client-Manager.html). It is created only once, at the top level, when starting the engine, and is then threaded through the application to wherever we need to make a network call. As of main, the way we do this is not standardized: most of the GraphQL execution code passes it "manually" as a function argument throughout the code. We also have a custom monad constraint, `HasHttpManagerM`, that describes a monad's ability to provide a manager. And, finally, several parts of the code store the manager in some kind of argument structure, such as `RunT`'s `RunCtx`.

This PR's first goal is to harmonize all of this: we always create the manager at the root, and we already have it when we do our very first `runReaderT`. Wouldn't it make sense for the rest of the code to not manually pass it anywhere, to not store it anywhere, but to always rely on the current monad providing it? This is, in short, what this PR does: it implements a constraint on the base monads, so that they provide the manager, and removes most explicit passing from the code.

### First come, first served

One way this PR goes a tiny bit further than "just" doing the aforementioned harmonization is that it starts the process of implementing the "Services oriented architecture" roughly outlined in this [draft document](https://docs.google.com/document/d/1FAigqrST0juU1WcT4HIxJxe1iEBwTuBZodTaeUvsKqQ/edit?usp=sharing). Rather than keep the existing `HasHttpManagerM`, this PR revamps it into the `ProvidesNetwork` service.

The idea is, again, that we should make all "external" dependencies of the engine, all things that the core of the engine doesn't care about, a "service". This allows us to define clear APIs for features, to choose different implementations based on which version of the engine we're running, and to harmonize our many scattered monadic constraints. This is why this service is called "Network": we can refine it, moving forward, into the constraint that defines how all network communication operates, instead of relying on disparate class constraints or hardcoded decisions. A comment in the code clarifies this intent.
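Concretely, the service boils down to a small class with an `askHTTPManager` method, and the base monad provides the instance. The sketch below is a self-contained approximation: a dummy `Manager` type stands in for `HTTP.Manager`, and `AppContext` / `App` are simplified stand-ins for the real `AppContext` / `PGMetadataStorageAppT`.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Reader

-- Stand-in for 'HTTP.Manager', so the sketch compiles on its own.
data Manager = Manager deriving (Show)

-- The service constraint: any monad that can provide the manager.
class Monad m => ProvidesNetwork m where
  askHTTPManager :: m Manager

-- Illustrative base monad: a plain reader over an app context.
newtype AppContext = AppContext {_acHTTPManager :: Manager}

newtype App a = App (ReaderT AppContext IO a)
  deriving (Functor, Applicative, Monad, MonadReader AppContext)

instance ProvidesNetwork App where
  askHTTPManager = asks _acHTTPManager

-- Code deep in the stack no longer takes a 'Manager' argument: it just
-- adds the constraint and asks for the manager when it needs it.
makeNetworkCall :: ProvidesNetwork m => m Manager
makeNetworkCall = askHTTPManager

runApp :: AppContext -> App a -> IO a
runApp ctx (App m) = runReaderT m ctx
```

The payoff is that intermediate functions between the root and the call site drop both the `HTTP.Manager` argument and any storage of it in context records.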

### Side-effects? In my Haskell?

This PR also unavoidably touches some other aspects of the codebase. One such example: it introduces `Hasura.App.AppContext`, named after `HasuraPro.Context.AppContext`: a name for the reader structure at the base level. It also transforms `Handler` from a type alias to a newtype, as `Handler` is where we actually enforce HTTP limits; but without `Handler` being a distinct type, any code path could simply do a `runExceptT $ runReader` and forget to enforce them.

(As a rule of thumb, I am starting to consider any straggling `runReaderT` or `runExceptT` as a code smell: we should not stack / unstack monads haphazardly, and every layer should be an opaque `newtype` with a corresponding run function.)
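The pattern looks roughly like this; the field names and the body-size check below are hypothetical, standing in for whatever HTTP limits `Handler` actually enforces:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Except
import Control.Monad.Reader

-- Hypothetical context; the real 'HandlerCtx' carries much more.
newtype HandlerCtx = HandlerCtx {hcMaxBodySize :: Int}

-- As an opaque newtype, 'Handler' cannot be rebuilt by a stray
-- 'runExceptT $ runReaderT' elsewhere, so enforcement cannot be skipped.
newtype Handler a = Handler (ReaderT HandlerCtx (ExceptT String IO) a)
  deriving (Functor, Applicative, Monad, MonadReader HandlerCtx, MonadError String)

-- The single run function is the one place where the limit is checked.
runHandler :: HandlerCtx -> Int -> Handler a -> IO (Either String a)
runHandler ctx bodySize (Handler m)
  | bodySize > hcMaxBodySize ctx = pure (Left "request body too large")
  | otherwise = runExceptT (runReaderT m ctx)
```

With the constructor kept out of the export list, `runHandler` becomes the only door in or out of the layer, which is exactly the property the type alias failed to give us.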

## Further work

In several places, I have left TODOs where I encountered things that suggest further unrelated cleanups. I'll write down the follow-up steps, either in the aforementioned document or on Slack. In short, in approximate order, we could:

- delete `ExecutionCtx` as it is only a subset of `ServerCtx`, and remove one more `runReaderT` call
- delete `ServerConfigCtx` as it is only a subset of `ServerCtx`, and remove it from `RunCtx`
- remove `ServerCtx` from `HandlerCtx`, and make it part of `AppContext`, or even make it the `AppContext` altogether (since, at least in the OSS version, `AppContext` is again only a subset)
- remove `CacheBuildParams` and `CacheBuild` altogether, as they're just a distinct stack that is a `ReaderT` on top of `IO` that contains, you guessed it, the same thing as `ServerCtx`
- move `RunT` out of `RQL.Types` and rename it, since after the previous cleanups **it only contains `UserInfo`**; it could be bundled with the authentication service, made a small implementation detail in `Hasura.Server.Auth`
- rename `PGMetadataStorageAppT` to something a bit more accurate, such as `App`, and enforce its `IO` base

This would significantly simplify our complex stack. From there, or in parallel, we can start turning existing dependencies into services. For the purpose of supporting the read replicas entitlement, we could move `MonadResolveSource` to a `SourceResolver` service, as attempted in #7653, and transform `UserAuthenticationM` into an `Authentication` service.
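As a sketch of what such a move could look like, here is a `SourceResolver`-style service in miniature. All names and types here are illustrative stand-ins, not the engine's actual `MonadResolveSource` API: the point is only the shape, a constraint the core depends on, with each edition supplying its own instance.

```haskell
{-# LANGUAGE FlexibleInstances #-}

import Control.Monad.Reader

-- Illustrative types only; the engine's real source types differ.
newtype SourceName = SourceName String deriving (Eq, Show)
newtype SourceConfig = SourceConfig String deriving (Eq, Show)

type SourceResolver = SourceName -> Either String SourceConfig

-- The would-be service: the core engine depends only on this constraint.
class Monad m => ProvidesSourceResolver m where
  askSourceResolver :: m SourceResolver

-- A minimal OSS-style implementation over a plain reader; the Pro edition
-- would provide a different instance for its own base monad.
instance ProvidesSourceResolver (Reader SourceResolver) where
  askSourceResolver = ask

-- Core code resolves sources through the service, never a concrete resolver.
resolveSource :: ProvidesSourceResolver m => SourceName -> m (Either String SourceConfig)
resolveSource name = do
  resolver <- askSourceResolver
  pure (resolver name)
```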

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7736
GitOrigin-RevId: 68cce710eb9e7d752bda1ba0c49541d24df8209f
Author: Antoine Leblanc, 2023-02-22 15:53:52 +00:00 (committed by hasura-bot)
Parent: 3423e53480
Commit: 6e574f1bbe
38 changed files with 410 additions and 291 deletions

```diff
@@ -756,6 +756,8 @@ library
     , Hasura.Server.Migrate.Internal
     , Hasura.Server.Auth.JWT.Internal
     , Hasura.Server.Auth.JWT.Logging
+    , Hasura.Services
+    , Hasura.Services.Network
     , Hasura.RemoteSchema.Metadata.Base
     , Hasura.RemoteSchema.Metadata.Customization
     , Hasura.RemoteSchema.Metadata.Permission
@@ -987,7 +989,6 @@ library
     , Hasura.Tracing.TraceId
     , Hasura.QueryTags
     , Network.HTTP.Client.Transformable
-    , Network.HTTP.Client.Manager
     , Network.HTTP.Client.DynamicTlsPermissions
     , Network.HTTP.Client.Restricted
     , Network.HTTP.Client.Blocklisting
```

```diff
@@ -317,15 +317,20 @@ runApp serveOptions = do
   env <- Env.getEnvironment
   initTime <- liftIO getCurrentTime
   globalCtx <- App.initGlobalCtx env metadataDbUrl rci
-  (ekgStore, serverMetrics) <- liftIO do
+  (ekgStore, serverMetrics) <-
+    liftIO $ do
       store <- EKG.newStore @TestMetricsSpec
-      serverMetrics <- liftIO . createServerMetrics $ EKG.subset ServerSubset store
+      serverMetrics <-
+        liftIO $ createServerMetrics $ EKG.subset ServerSubset store
       pure (EKG.subset EKG.emptyOf store, serverMetrics)
   prometheusMetrics <- makeDummyPrometheusMetrics
-  let managedServerCtx = App.initialiseServerCtx env globalCtx serveOptions Nothing serverMetrics prometheusMetrics sampleAlways (FeatureFlag.checkFeatureFlag env)
+  let featureFlag = FeatureFlag.checkFeatureFlag env
+      managedServerCtx = App.initialiseServerCtx env globalCtx serveOptions Nothing serverMetrics prometheusMetrics sampleAlways featureFlag
   runManagedT managedServerCtx \serverCtx@ServerCtx {..} -> do
     let Loggers _ _ pgLogger = scLoggers
-    flip App.runPGMetadataStorageAppT (scMetadataDbPool, pgLogger) . lowerManagedT $
+        appContext = App.AppContext scManager pgLogger scMetadataDbPool
+    flip App.runPGMetadataStorageAppT appContext $
+      lowerManagedT $
         App.runHGEServer
           (const $ pure ())
           env
@@ -334,7 +339,7 @@ runApp serveOptions = do
           initTime
           Nothing
           ekgStore
-          (FeatureFlag.checkFeatureFlag env)
+          featureFlag

 -- | Used only for 'runApp' above.
 data TestMetricsSpec name metricType tags
```

```diff
@@ -1,11 +1,9 @@
 module Main (main) where

 import Data.Aeson.Encode.Pretty (encodePretty)
-import Data.ByteString.Lazy qualified as LBS
+import Data.ByteString.Lazy.Char8 qualified as LBS
 import Hasura.Server.MetadataOpenAPI (metadataOpenAPI)
 import Prelude

 main :: IO ()
-main = do
-  LBS.putStr $ encodePretty metadataOpenAPI
-  putStrLn ""
+main = LBS.putStrLn $ encodePretty metadataOpenAPI
```

```diff
@@ -97,7 +97,10 @@ runApp env (HGEOptions rci metadataDbUrl hgeCmd) = do
         C.forkImmortal "ourIdleGC" logger $
           GC.ourIdleGC logger (seconds 0.3) (seconds 10) (seconds 60)

-      flip runPGMetadataStorageAppT (scMetadataDbPool, pgLogger) . lowerManagedT $ do
+      -- TODO: why don't we just run a reader with ServerCtx from here?
+      -- the AppContext doesn't add any new information
+      let appContext = AppContext scManager pgLogger scMetadataDbPool
+      flip runPGMetadataStorageAppT appContext . lowerManagedT $ do
         runHGEServer (const $ pure ()) env serveOptions serverCtx initTime Nothing ekgStore (FeatureFlag.checkFeatureFlag env)
     HCExport -> do
       GlobalCtx {..} <- initGlobalCtx env metadataDbUrl rci
```

```diff
@@ -3,11 +3,16 @@
 {-# LANGUAGE UndecidableInstances #-}
 {-# LANGUAGE ViewPatterns #-}

--- | Imported by 'server/src-exec/Main.hs'.
+-- | Defines the CE version of the engine.
+--
+-- This module contains everything that is required to run the community edition
+-- of the engine: the base application monad and the implementation of all its
+-- behaviour classes.
 module Hasura.App
   ( ExitCode (AuthConfigurationError, DatabaseMigrationError, DowngradeProcessError, MetadataCleanError, MetadataExportError, SchemaCacheInitError),
     ExitException (ExitException),
     GlobalCtx (..),
+    AppContext (..),
     PGMetadataStorageAppT (runPGMetadataStorageAppT),
     accessDeniedErrMsg,
     flushLogger,
@@ -140,13 +145,13 @@ import Hasura.Server.SchemaUpdate
 import Hasura.Server.Telemetry
 import Hasura.Server.Types
 import Hasura.Server.Version
+import Hasura.Services
 import Hasura.Session
 import Hasura.ShutdownLatch
 import Hasura.Tracing qualified as Tracing
 import Network.HTTP.Client qualified as HTTP
 import Network.HTTP.Client.Blocklisting (Blocklist)
 import Network.HTTP.Client.CreateManager (mkHttpManager)
-import Network.HTTP.Client.Manager (HasHttpManagerM (..))
 import Network.Wai (Application)
 import Network.Wai.Handler.Warp qualified as Warp
 import Options.Applicative
@@ -267,8 +272,17 @@ initGlobalCtx env metadataDbUrl defaultPgConnInfo = do
       let mdConnInfo = mkConnInfoFromMDb mdUrl
       mkGlobalCtx mdConnInfo (Just (dbUrl, srcConnInfo))

+-- | Base application context.
+--
+-- This defines all base information required to run the engine.
+data AppContext = AppContext
+  { _acHTTPManager :: HTTP.Manager,
+    _acPGLogger :: PG.PGLogger,
+    _acPGPool :: PG.PGPool
+  }
+
 -- | An application with Postgres database as a metadata storage
-newtype PGMetadataStorageAppT m a = PGMetadataStorageAppT {runPGMetadataStorageAppT :: (PG.PGPool, PG.PGLogger) -> m a}
+newtype PGMetadataStorageAppT m a = PGMetadataStorageAppT {runPGMetadataStorageAppT :: AppContext -> m a}
   deriving
     ( Functor,
       Applicative,
@@ -278,17 +292,19 @@ newtype PGMetadataStorageAppT m a = PGMetadataStorageAppT {runPGMetadataStorageA
       MonadCatch,
       MonadThrow,
       MonadMask,
-      HasHttpManagerM,
       HasServerConfigCtx,
-      MonadReader (PG.PGPool, PG.PGLogger),
+      MonadReader AppContext,
       MonadBase b,
       MonadBaseControl b
     )
-    via (ReaderT (PG.PGPool, PG.PGLogger) m)
+    via (ReaderT AppContext m)
   deriving
     ( MonadTrans
     )
-    via (ReaderT (PG.PGPool, PG.PGLogger))
+    via (ReaderT AppContext)

+instance Monad m => ProvidesNetwork (PGMetadataStorageAppT m) where
+  askHTTPManager = asks _acHTTPManager
+
 resolvePostgresConnInfo ::
   (MonadIO m) => Env.Environment -> UrlConf -> Maybe Int -> m PG.ConnInfo
@@ -596,7 +612,8 @@ runHGEServer ::
     MonadMetadataStorageQueryAPI m,
     MonadResolveSource m,
     EB.MonadQueryTags m,
-    MonadEventLogCleanup m
+    MonadEventLogCleanup m,
+    ProvidesHasuraServices m
   ) =>
   (ServerCtx -> Spock.SpockT m ()) ->
   Env.Environment ->
@@ -687,7 +704,8 @@ mkHGEServer ::
     MonadMetadataStorageQueryAPI m,
     MonadResolveSource m,
     EB.MonadQueryTags m,
-    MonadEventLogCleanup m
+    MonadEventLogCleanup m,
+    ProvidesHasuraServices m
   ) =>
   (ServerCtx -> Spock.SpockT m ()) ->
   Env.Environment ->
@@ -765,7 +783,6 @@ mkHGEServer setupHook env ServeOptions {..} serverCtx@ServerCtx {..} ekgStore ch
   _ <-
     startSchemaSyncProcessorThread
       logger
-      scManager
       scMetaVersionRef
       cacheRef
       scInstanceId
@@ -1006,7 +1023,6 @@ mkHGEServer setupHook env ServeOptions {..} serverCtx@ServerCtx {..} ekgStore ch
       logger
       (getSchemaCache cacheRef)
      (leActionEvents lockedEventsCtx)
-      scManager
      scPrometheusMetrics
      sleepTime
      Nothing
@@ -1137,7 +1153,7 @@ instance (MonadIO m) => WS.MonadWSLog (PGMetadataStorageAppT m) where
   logWSLog logger = unLogger logger

 instance (Monad m) => MonadResolveSource (PGMetadataStorageAppT m) where
-  getPGSourceResolver = mkPgSourceResolver <$> asks snd
+  getPGSourceResolver = mkPgSourceResolver <$> asks _acPGLogger
   getMSSQLSourceResolver = return mkMSSQLSourceResolver

 instance (Monad m) => EB.MonadQueryTags (PGMetadataStorageAppT m) where
@@ -1153,7 +1169,7 @@ runInSeparateTx ::
   PG.TxE QErr a ->
   PGMetadataStorageAppT m (Either QErr a)
 runInSeparateTx tx = do
-  pool <- asks fst
+  pool <- asks _acPGPool
   liftIO $ runExceptT $ PG.runTx pool (PG.RepeatableRead, Nothing) tx

 notifySchemaCacheSyncTx :: MetadataResourceVersion -> InstanceId -> CacheInvalidations -> PG.TxE QErr ()
```

```diff
@@ -46,11 +46,11 @@ import Hasura.SQL.Backend (BackendSourceKind (..), BackendType (..))
 import Hasura.SQL.Types (CollectableType (..))
 import Hasura.Server.Migrate.Version (SourceCatalogMigrationState (..))
 import Hasura.Server.Utils qualified as HSU
+import Hasura.Services.Network
 import Hasura.Session (SessionVariable, mkSessionVariable)
 import Hasura.Tracing (ignoreTraceT)
 import Language.GraphQL.Draft.Syntax qualified as GQL
 import Network.HTTP.Client qualified as HTTP
-import Network.HTTP.Client.Manager
 import Servant.Client.Core.HasClient ((//))
 import Servant.Client.Generic (genericClient)
 import Witch qualified
@@ -82,7 +82,7 @@ resolveBackendInfo' ::
     ArrowWriter (Seq (Either InconsistentMetadata MetadataDependency)) arr,
     MonadIO m,
     MonadBaseControl IO m,
-    HasHttpManagerM m
+    ProvidesNetwork m
   ) =>
   Logger Hasura ->
   (Inc.Dependency (Maybe (HashMap DC.DataConnectorName Inc.InvalidationKey)), InsOrdHashMap DC.DataConnectorName DC.DataConnectorOptions) `arr` HashMap DC.DataConnectorName DC.DataConnectorInfo
@@ -103,12 +103,12 @@ resolveBackendInfo' logger = proc (invalidationKeys, optionsMap) -> do
     ArrowWriter (Seq (Either InconsistentMetadata MetadataDependency)) arr,
     MonadIO m,
     MonadBaseControl IO m,
-    HasHttpManagerM m
+    ProvidesNetwork m
   ) =>
   (Inc.Dependency (Maybe (HashMap DC.DataConnectorName Inc.InvalidationKey)), DC.DataConnectorName, DC.DataConnectorOptions) `arr` Maybe DC.DataConnectorInfo
 getDataConnectorCapabilitiesIfNeeded = Inc.cache proc (invalidationKeys, dataConnectorName, dataConnectorOptions) -> do
   let metadataObj = MetadataObject (MODataConnectorAgent dataConnectorName) $ J.toJSON dataConnectorName
-  httpMgr <- bindA -< askHttpManager
+  httpMgr <- bindA -< askHTTPManager
   Inc.dependOn -< Inc.selectMaybeD (Inc.ConstS dataConnectorName) invalidationKeys
   (|
     withRecordInconsistency
```

```diff
@@ -59,19 +59,21 @@ import Hasura.SQL.Backend
 import Hasura.Server.Init qualified as Init
 import Hasura.Server.Prometheus (PrometheusMetrics)
 import Hasura.Server.Types (ReadOnlyMode (..), RequestId (..))
+import Hasura.Services
 import Hasura.Session
 import Hasura.Tracing qualified as Tracing
 import Language.GraphQL.Draft.Syntax qualified as G
-import Network.HTTP.Client qualified as HTTP
 import Network.HTTP.Types qualified as HTTP

 -- | Execution context
+--
+-- TODO: can this be deduplicated with Run? is there anything in here that isn't
+-- already in the stack?
 data ExecutionCtx = ExecutionCtx
   { _ecxLogger :: L.Logger L.Hasura,
     _ecxSqlGenCtx :: SQLGenCtx,
     _ecxSchemaCache :: SchemaCache,
     _ecxSchemaCacheVer :: SchemaCacheVer,
-    _ecxHttpManager :: HTTP.Manager,
     _ecxEnableAllowList :: Init.AllowListStatus,
     _ecxReadOnlyMode :: ReadOnlyMode,
     _ecxPrometheusMetrics :: PrometheusMetrics
@@ -310,6 +312,8 @@ checkQueryInAllowlist allowListStatus allowlistMode userInfo req schemaCache =

 -- | Construct a 'ResolvedExecutionPlan' from a 'GQLReqParsed' and a
 -- bunch of metadata.
+--
+-- Labelling it as inlineable fixed a performance regression on GHC 8.10.7.
 {-# INLINEABLE getResolvedExecPlan #-}
 getResolvedExecPlan ::
   forall m.
@@ -319,7 +323,8 @@ getResolvedExecPlan ::
     MonadBaseControl IO m,
     Tracing.MonadTrace m,
     EC.MonadGQLExecutionCheck m,
-    EB.MonadQueryTags m
+    EB.MonadQueryTags m,
+    ProvidesNetwork m
   ) =>
   Env.Environment ->
   L.Logger L.Hasura ->
@@ -330,7 +335,6 @@ getResolvedExecPlan ::
   SchemaCache ->
   SchemaCacheVer ->
   ET.GraphQLQueryType ->
-  HTTP.Manager ->
   [HTTP.Header] ->
   GQLReqUnparsed ->
   SingleOperation -> -- the first step of the execution plan
@@ -347,7 +351,6 @@ getResolvedExecPlan
   sc
   _scVer
   queryType
-  httpManager
   reqHeaders
   reqUnparsed
   queryParts -- the first step of the execution plan
@@ -366,7 +369,6 @@ getResolvedExecPlan
           prometheusMetrics
           gCtx
           userInfo
-          httpManager
           reqHeaders
           directives
           inlinedSelSet
@@ -387,7 +389,6 @@ getResolvedExecPlan
           gCtx
           sqlGenCtx
           userInfo
-          httpManager
           reqHeaders
           directives
           inlinedSelSet
```

```diff
@@ -79,6 +79,7 @@ import Hasura.Server.Utils
   ( mkClientHeadersForward,
     mkSetCookieHeaders,
   )
+import Hasura.Services.Network
 import Hasura.Session
 import Hasura.Tracing qualified as Tracing
 import Language.GraphQL.Draft.Syntax qualified as G
@@ -137,6 +138,7 @@ asSingleRowJsonResp query args =

 -- | Synchronously execute webhook handler and resolve response to action "output"
 resolveActionExecution ::
+  HTTP.Manager ->
   Env.Environment ->
   L.Logger L.Hasura ->
   PrometheusMetrics ->
@@ -145,7 +147,7 @@ resolveActionExecution ::
   ActionExecContext ->
   Maybe GQLQueryText ->
   ActionExecution
-resolveActionExecution env logger prometheusMetrics _userInfo IR.AnnActionExecution {..} ActionExecContext {..} gqlQueryText =
+resolveActionExecution httpManager env logger prometheusMetrics _userInfo IR.AnnActionExecution {..} ActionExecContext {..} gqlQueryText =
   ActionExecution $ first (encJFromOrderedValue . makeActionResponseNoRelations _aaeFields _aaeOutputType _aaeOutputFields True) <$> runWebhook
   where
     handlerPayload = ActionWebhookPayload (ActionContext _aaeName) _aecSessionVariables _aaePayload gqlQueryText
@@ -154,10 +156,11 @@ resolveActionExecution env logger prometheusMetrics _userInfo IR.AnnActionExecut
       (MonadIO m, MonadError QErr m, Tracing.MonadTrace m) =>
       m (ActionWebhookResponse, HTTP.ResponseHeaders)
     runWebhook =
+      -- TODO: do we need to add the logger as a reader? can't we just give it as an argument?
       flip runReaderT logger $
         callWebhook
           env
-          _aecManager
+          httpManager
           prometheusMetrics
           _aaeOutputType
           _aaeOutputFields
@@ -430,18 +433,18 @@ asyncActionsProcessor ::
     MonadBaseControl IO m,
     LA.Forall (LA.Pure m),
     Tracing.HasReporter m,
-    MonadMetadataStorage m
+    MonadMetadataStorage m,
+    ProvidesNetwork m
   ) =>
   Env.Environment ->
   L.Logger L.Hasura ->
   IO SchemaCache ->
   STM.TVar (Set LockedActionEventId) ->
-  HTTP.Manager ->
   PrometheusMetrics ->
   Milliseconds ->
   Maybe GH.GQLQueryText ->
   m (Forever m)
-asyncActionsProcessor env logger getSCFromRef' lockedActionEvents httpManager prometheusMetrics sleepTime gqlQueryText =
+asyncActionsProcessor env logger getSCFromRef' lockedActionEvents prometheusMetrics sleepTime gqlQueryText =
   return $
     Forever () $
       const $ do
@@ -467,6 +470,7 @@ asyncActionsProcessor env logger getSCFromRef' lockedActionEvents httpManager pr
   where
     callHandler :: ActionCache -> ActionLogItem -> m ()
    callHandler actionCache actionLogItem = Tracing.runTraceT Tracing.sampleAlways "async actions processor" do
+      httpManager <- askHTTPManager
       let ActionLogItem
             actionId
             actionName
@@ -488,6 +492,7 @@ asyncActionsProcessor env logger getSCFromRef' lockedActionEvents httpManager pr
           metadataResponseTransform = _adResponseTransform definition
       eitherRes <-
         runExceptT $
+          -- TODO: do we need to add the logger as a reader? can't we just give it as an argument?
           flip runReaderT logger $
             callWebhook
               env
```

```diff
@@ -34,34 +34,36 @@ import Hasura.RQL.Types.QueryTags
 import Hasura.SQL.AnyBackend qualified as AB
 import Hasura.Server.Prometheus (PrometheusMetrics (..))
 import Hasura.Server.Types (RequestId (..))
+import Hasura.Services
 import Hasura.Session
 import Hasura.Tracing qualified as Tracing
 import Language.GraphQL.Draft.Syntax qualified as G
-import Network.HTTP.Client qualified as HTTP
 import Network.HTTP.Types qualified as HTTP

 convertMutationAction ::
   ( MonadIO m,
     MonadError QErr m,
-    MonadMetadataStorage m
+    MonadMetadataStorage m,
+    ProvidesNetwork m
   ) =>
   Env.Environment ->
   L.Logger L.Hasura ->
   PrometheusMetrics ->
   UserInfo ->
-  HTTP.Manager ->
   HTTP.RequestHeaders ->
   Maybe GH.GQLQueryText ->
   ActionMutation Void ->
   m ActionExecutionPlan
-convertMutationAction env logger prometheusMetrics userInfo manager reqHeaders gqlQueryText = \case
+convertMutationAction env logger prometheusMetrics userInfo reqHeaders gqlQueryText action = do
+  httpManager <- askHTTPManager
+  case action of
     AMSync s ->
-      pure $ AEPSync $ resolveActionExecution env logger prometheusMetrics userInfo s actionExecContext gqlQueryText
+      pure $ AEPSync $ resolveActionExecution httpManager env logger prometheusMetrics userInfo s actionExecContext gqlQueryText
     AMAsync s ->
       AEPAsyncMutation <$> resolveActionMutationAsync s reqHeaders userSession
   where
     userSession = _uiSession userInfo
-    actionExecContext = ActionExecContext manager reqHeaders $ _uiSession userInfo
+    actionExecContext = ActionExecContext reqHeaders (_uiSession userInfo)

 convertMutationSelectionSet ::
   forall m.
@@ -70,7 +72,8 @@ convertMutationSelectionSet ::
     MonadError QErr m,
     MonadMetadataStorage m,
     MonadGQLExecutionCheck m,
-    MonadQueryTags m
+    MonadQueryTags m,
+    ProvidesNetwork m
   ) =>
   Env.Environment ->
   L.Logger L.Hasura ->
@@ -78,7 +81,6 @@ convertMutationSelectionSet ::
   GQLContext ->
   SQLGenCtx ->
   UserInfo ->
-  HTTP.Manager ->
   HTTP.RequestHeaders ->
   [G.Directive G.Name] ->
   G.SelectionSet G.NoFragments G.Name ->
@@ -96,7 +98,6 @@ convertMutationSelectionSet
   gqlContext
   SQLGenCtx {stringifyNum}
   userInfo
-  manager
   reqHeaders
   directives
   fields
@@ -146,7 +147,7 @@ convertMutationSelectionSet
         (actionName, _fch) <- pure $ case noRelsDBAST of
           AMSync s -> (_aaeName s, _aaeForwardClientHeaders s)
           AMAsync s -> (_aamaName s, _aamaForwardClientHeaders s)
-        plan <- convertMutationAction env logger prometheusMetrics userInfo manager reqHeaders (Just (GH._grQuery gqlUnparsed)) noRelsDBAST
+        plan <- convertMutationAction env logger prometheusMetrics userInfo reqHeaders (Just (GH._grQuery gqlUnparsed)) noRelsDBAST
         pure $ ExecStepAction plan (ActionsInfo actionName _fch) remoteJoins -- `_fch` represents the `forward_client_headers` option from the action
           -- definition which is currently being ignored for actions that are mutations
       RFRaw customFieldVal -> flip onLeft throwError =<< executeIntrospection userInfo customFieldVal introspectionDisabledRoles
```

View File

@ -34,9 +34,9 @@ import Hasura.RQL.Types.QueryTags
import Hasura.SQL.AnyBackend qualified as AB import Hasura.SQL.AnyBackend qualified as AB
import Hasura.Server.Prometheus (PrometheusMetrics (..)) import Hasura.Server.Prometheus (PrometheusMetrics (..))
import Hasura.Server.Types (RequestId (..)) import Hasura.Server.Types (RequestId (..))
import Hasura.Services.Network
import Hasura.Session import Hasura.Session
import Language.GraphQL.Draft.Syntax qualified as G import Language.GraphQL.Draft.Syntax qualified as G
import Network.HTTP.Client qualified as HTTP
import Network.HTTP.Types qualified as HTTP import Network.HTTP.Types qualified as HTTP
parseGraphQLQuery :: parseGraphQLQuery ::
@@ -61,14 +61,14 @@ convertQuerySelSet ::
forall m.
( MonadError QErr m,
MonadGQLExecutionCheck m,
-MonadQueryTags m
+MonadQueryTags m,
+ProvidesNetwork m
) =>
Env.Environment ->
L.Logger L.Hasura ->
PrometheusMetrics ->
GQLContext ->
UserInfo ->
-HTTP.Manager ->
HTTP.RequestHeaders ->
[G.Directive G.Name] ->
G.SelectionSet G.NoFragments G.Name ->
@@ -85,7 +85,6 @@ convertQuerySelSet
prometheusMetrics
gqlContext
userInfo
-manager
reqHeaders
directives
fields
@@ -127,9 +126,23 @@ convertQuerySelSet
let (noRelsRemoteField, remoteJoins) = RJ.getRemoteJoinsGraphQLField remoteField
pure $ buildExecStepRemote remoteSchemaInfo resultCustomizer G.OperationTypeQuery noRelsRemoteField remoteJoins (GH._grOperationName gqlUnparsed)
RFAction action -> do
+httpManager <- askHTTPManager
let (noRelsDBAST, remoteJoins) = RJ.getRemoteJoinsActionQuery action
(actionExecution, actionName, fch) <- pure $ case noRelsDBAST of
-AQQuery s -> (AEPSync $ resolveActionExecution env logger prometheusMetrics userInfo s (ActionExecContext manager reqHeaders (_uiSession userInfo)) (Just (GH._grQuery gqlUnparsed)), _aaeName s, _aaeForwardClientHeaders s)
+AQQuery s ->
+( AEPSync $
+resolveActionExecution
+httpManager
+env
+logger
+prometheusMetrics
+userInfo
+s
+(ActionExecContext reqHeaders (_uiSession userInfo))
+(Just (GH._grQuery gqlUnparsed)),
+_aaeName s,
+_aaeForwardClientHeaders s
+)
AQAsync s -> (AEPAsyncQuery $ AsyncActionQueryExecutionPlan (_aaaqActionId s) $ resolveAsyncActionQuery userInfo s, _aaaqName s, _aaaqForwardClientHeaders s)
pure $ ExecStepAction actionExecution (ActionsInfo actionName fch) remoteJoins
RFRaw r -> flip onLeft throwError =<< executeIntrospection userInfo r introspectionDisabledRoles
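These hunks replace the explicit `HTTP.Manager` parameter with the `ProvidesNetwork m` constraint and its `askHTTPManager` method. As a rough sketch of the idea (using a stand-in `Manager` type so the snippet is self-contained; the real class lives in `Hasura.Services.Network` and may differ in its details):

```haskell
import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)

-- Stand-in for Network.HTTP.Client's Manager, so this sketch has no
-- external dependencies; the engine uses the real HTTP.Manager.
data Manager = Manager

-- The "Network" service: any monad that can hand out the
-- process-wide connection manager.
class Monad m => ProvidesNetwork m where
  askHTTPManager :: m Manager

-- The manager is created once at startup and stored in the base
-- monad's reader environment; everything downstream just asks for it.
instance ProvidesNetwork (ReaderT Manager IO) where
  askHTTPManager = ask
```

Any function that previously took an `HTTP.Manager ->` argument can then drop it and add `ProvidesNetwork m` to its constraint list instead, which is exactly the shape of the signature changes above.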

View File

@@ -34,10 +34,10 @@ import Hasura.RQL.Types.Common
import Hasura.RemoteSchema.SchemaCache
import Hasura.SQL.AnyBackend qualified as AB
import Hasura.Server.Types (RequestId)
+import Hasura.Services.Network
import Hasura.Session
import Hasura.Tracing qualified as Tracing
import Language.GraphQL.Draft.Syntax qualified as G
-import Network.HTTP.Client qualified as HTTP
import Network.HTTP.Types qualified as HTTP
-------------------------------------------------------------------------------
@@ -58,19 +58,19 @@ processRemoteJoins ::
MonadBaseControl IO m,
EB.MonadQueryTags m,
MonadQueryLog m,
-Tracing.MonadTrace m
+Tracing.MonadTrace m,
+ProvidesNetwork m
) =>
RequestId ->
L.Logger L.Hasura ->
Env.Environment ->
-HTTP.Manager ->
[HTTP.Header] ->
UserInfo ->
EncJSON ->
Maybe RemoteJoins ->
GQLReqUnparsed ->
m EncJSON
-processRemoteJoins requestId logger env manager requestHeaders userInfo lhs maybeJoinTree gqlreq =
+processRemoteJoins requestId logger env requestHeaders userInfo lhs maybeJoinTree gqlreq =
forRemoteJoins maybeJoinTree lhs \joinTree -> do
lhsParsed <-
JO.eitherDecode (encJToLBS lhs)
@@ -117,7 +117,7 @@ processRemoteJoins requestId logger env manager requestHeaders userInfo lhs mayb
m BL.ByteString
callRemoteServer remoteSchemaInfo request =
fmap (view _3) $
-execRemoteGQ env manager userInfo requestHeaders remoteSchemaInfo request
+execRemoteGQ env userInfo requestHeaders remoteSchemaInfo request
-- | Fold the join tree.
--

View File

@@ -36,6 +36,7 @@ import Hasura.RQL.Types.Common
import Hasura.RemoteSchema.Metadata
import Hasura.RemoteSchema.SchemaCache.Types
import Hasura.Server.Utils
+import Hasura.Services.Network
import Hasura.Session
import Hasura.Tracing qualified as Tracing
import Language.GraphQL.Draft.Parser qualified as G
@@ -53,15 +54,14 @@ import Network.Wreq qualified as Wreq
-- and also is called by schema cache rebuilding code in "Hasura.RQL.DDL.Schema.Cache".
fetchRemoteSchema ::
forall m.
-(MonadIO m, MonadError QErr m, Tracing.MonadTrace m) =>
+(MonadIO m, MonadError QErr m, Tracing.MonadTrace m, ProvidesNetwork m) =>
Env.Environment ->
-HTTP.Manager ->
RemoteSchemaName ->
ValidatedRemoteSchemaDef ->
m (IntrospectionResult, BL.ByteString, RemoteSchemaInfo)
-fetchRemoteSchema env manager _rscName rsDef = do
+fetchRemoteSchema env _rscName rsDef = do
(_, _, rawIntrospectionResult) <-
-execRemoteGQ env manager adminUserInfo [] rsDef introspectionQuery
+execRemoteGQ env adminUserInfo [] rsDef introspectionQuery
(ir, rsi) <- stitchRemoteSchema rawIntrospectionResult _rscName rsDef
-- The 'rawIntrospectionResult' contains the 'Bytestring' response of
-- the introspection result of the remote server. We store this in the
@@ -128,10 +128,10 @@ stitchRemoteSchema rawIntrospectionResult _rscName rsDef@ValidatedRemoteSchemaDe
execRemoteGQ ::
( MonadIO m,
MonadError QErr m,
-Tracing.MonadTrace m
+Tracing.MonadTrace m,
+ProvidesNetwork m
) =>
Env.Environment ->
-HTTP.Manager ->
UserInfo ->
[HTTP.Header] ->
ValidatedRemoteSchemaDef ->
@@ -139,7 +139,7 @@ execRemoteGQ ::
-- | Returns the response body and headers, along with the time taken for the
-- HTTP request to complete
m (DiffTime, [HTTP.Header], BL.ByteString)
-execRemoteGQ env manager userInfo reqHdrs rsdef gqlReq@GQLReq {..} = do
+execRemoteGQ env userInfo reqHdrs rsdef gqlReq@GQLReq {..} = do
let gqlReqUnparsed = renderGQLReqOutgoing gqlReq
when (G._todType _grQuery == G.OperationTypeSubscription) $
@@ -163,6 +163,7 @@ execRemoteGQ env manager userInfo reqHdrs rsdef gqlReq@GQLReq {..} = do
& set HTTP.body (Just $ J.encode gqlReqUnparsed)
& set HTTP.timeout (HTTP.responseTimeoutMicro (timeout * 1000000))
+manager <- askHTTPManager
Tracing.tracedHttpRequest req \req' -> do
(time, res) <- withElapsedTime $ liftIO $ try $ HTTP.performRequest req' manager
resp <- onLeft res (throwRemoteSchemaHttp webhookEnvRecord)
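The hunk above shows the pattern used throughout this PR: rather than receiving the manager as an argument, `execRemoteGQ` now asks the ambient monad for it immediately before performing the request. A simplified before/after sketch (stand-in types and hypothetical function names, not the engine's actual code):

```haskell
import Control.Monad.IO.Class (MonadIO, liftIO)
import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)

-- Stand-ins so the sketch compiles without http-client.
data Manager = Manager
data Request = Request
newtype Response = Response String

performRequest :: Request -> Manager -> IO Response
performRequest _ _ = pure (Response "200 OK")

class Monad m => ProvidesNetwork m where
  askHTTPManager :: m Manager

instance ProvidesNetwork (ReaderT Manager IO) where
  askHTTPManager = ask

-- Before: every caller has to thread the manager down explicitly.
callBefore :: Manager -> Request -> IO Response
callBefore manager req = performRequest req manager

-- After: the manager is fetched at the point of use, and the
-- requirement shows up in the constraint instead of the arguments.
callAfter :: (MonadIO m, ProvidesNetwork m) => Request -> m Response
callAfter req = do
  manager <- askHTTPManager
  liftIO (performRequest req manager)
```

The design win is that intermediate callers such as `processRemoteJoins` no longer mention the manager at all; only the function that actually performs the request depends on the service.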

View File

@@ -82,11 +82,11 @@ import Hasura.Server.Prometheus
)
import Hasura.Server.Telemetry.Counters qualified as Telem
import Hasura.Server.Types (RequestId)
+import Hasura.Services.Network
import Hasura.Session
import Hasura.Tracing (MonadTrace, TraceT, trace)
import Hasura.Tracing qualified as Tracing
import Language.GraphQL.Draft.Syntax qualified as G
-import Network.HTTP.Client qualified as HTTP
import Network.HTTP.Types qualified as HTTP
import Network.Wai.Extended qualified as Wai
import System.Metrics.Prometheus.Counter qualified as Prometheus.Counter
@@ -307,7 +307,8 @@ runGQ ::
MonadExecuteQuery m,
MonadMetadataStorage m,
EB.MonadQueryTags m,
-HasResourceLimits m
+HasResourceLimits m,
+ProvidesNetwork m
) =>
Env.Environment ->
L.Logger L.Hasura ->
@@ -319,7 +320,7 @@ runGQ ::
GQLReqUnparsed ->
m (GQLQueryOperationSuccessLog, HttpResponse (Maybe GQResponse, EncJSON))
runGQ env logger reqId userInfo ipAddress reqHeaders queryType reqUnparsed = do
-E.ExecutionCtx _ sqlGenCtx sc scVer httpManager enableAL readOnlyMode prometheusMetrics <- ask
+E.ExecutionCtx _ sqlGenCtx sc scVer enableAL readOnlyMode prometheusMetrics <- ask
let gqlMetrics = pmGraphQLRequestMetrics prometheusMetrics
(totalTime, (response, parameterizedQueryHash, gqlOpType)) <- withElapsedTime $ do
@@ -351,7 +352,6 @@ runGQ env logger reqId userInfo ipAddress reqHeaders queryType reqUnparsed = do
sc
scVer
queryType
-httpManager
reqHeaders
reqUnparsed
queryParts
@@ -359,7 +359,7 @@ runGQ env logger reqId userInfo ipAddress reqHeaders queryType reqUnparsed = do
reqId
-- 4. Execute the execution plan producing a 'AnnotatedResponse'.
-response <- executePlan httpManager reqParsed runLimits execPlan
+response <- executePlan reqParsed runLimits execPlan
return (response, parameterizedQueryHash, gqlOpType)
-- 5. Record telemetry
@@ -382,12 +382,11 @@ runGQ env logger reqId userInfo ipAddress reqHeaders queryType reqUnparsed = do
forWithKey = flip OMap.traverseWithKey
executePlan ::
-HTTP.Manager ->
GQLReqParsed ->
(m AnnotatedResponse -> m AnnotatedResponse) ->
E.ResolvedExecutionPlan ->
m AnnotatedResponse
-executePlan httpManager reqParsed runLimits execPlan = case execPlan of
+executePlan reqParsed runLimits execPlan = case execPlan of
E.QueryExecutionPlan queryPlans asts dirMap -> trace "Query" $ do
-- Attempt to lookup a cached response in the query cache.
-- 'keyedLookup' is a monadic action possibly returning a cache hit.
@@ -408,7 +407,8 @@ runGQ env logger reqId userInfo ipAddress reqHeaders queryType reqUnparsed = do
-- If we get a cache miss, we must run the query against the graphql engine.
Nothing -> runLimits $ do
-- 1. 'traverse' the 'ExecutionPlan' executing every step.
-conclusion <- runExceptT $ forWithKey queryPlans $ executeQueryStep httpManager
+-- TODO: can this be a `catch` rather than a `runExceptT`?
+conclusion <- runExceptT $ forWithKey queryPlans executeQueryStep
-- 2. Construct an 'AnnotatedResponse' from the results of all steps in the 'ExecutionPlan'.
result <- buildResponseFromParts Telem.Query conclusion cachingHeaders
let response@(HttpResponse responseData _) = arResponse result
@@ -435,6 +435,7 @@ runGQ env logger reqId userInfo ipAddress reqHeaders queryType reqUnparsed = do
-- we are in the aforementioned case; we circumvent the normal process
Just (sourceConfig, resolvedConnectionTemplate, pgMutations) -> do
res <-
+-- TODO: can this be a `catch` rather than a `runExceptT`?
runExceptT $
doQErr $
runPGMutationTransaction reqId reqUnparsed userInfo logger sourceConfig resolvedConnectionTemplate pgMutations
@@ -453,17 +454,17 @@ runGQ env logger reqId userInfo ipAddress reqHeaders queryType reqUnparsed = do
-- we are not in the transaction case; proceeding normally
Nothing -> do
-conclusion <- runExceptT $ forWithKey mutationPlans $ executeMutationStep httpManager
+-- TODO: can this be a `catch` rather than a `runExceptT`?
+conclusion <- runExceptT $ forWithKey mutationPlans executeMutationStep
buildResponseFromParts Telem.Mutation conclusion []
E.SubscriptionExecutionPlan _sub ->
throw400 UnexpectedPayload "subscriptions are not supported over HTTP, use websockets instead"
executeQueryStep ::
-HTTP.Manager ->
RootFieldAlias ->
EB.ExecutionStep ->
ExceptT (Either GQExecError QErr) m AnnotatedResponsePart
-executeQueryStep httpManager fieldName = \case
+executeQueryStep fieldName = \case
E.ExecStepDB _headers exists remoteJoins -> doQErr $ do
(telemTimeIO_DT, resp) <-
AB.dispatchAnyBackend @BackendTransport
@@ -471,17 +472,17 @@ runGQ env logger reqId userInfo ipAddress reqHeaders queryType reqUnparsed = do
\(EB.DBStepInfo _ sourceConfig genSql tx resolvedConnectionTemplate :: EB.DBStepInfo b) ->
runDBQuery @b reqId reqUnparsed fieldName userInfo logger sourceConfig tx genSql resolvedConnectionTemplate
finalResponse <-
-RJ.processRemoteJoins reqId logger env httpManager reqHeaders userInfo resp remoteJoins reqUnparsed
+RJ.processRemoteJoins reqId logger env reqHeaders userInfo resp remoteJoins reqUnparsed
pure $ AnnotatedResponsePart telemTimeIO_DT Telem.Local finalResponse []
E.ExecStepRemote rsi resultCustomizer gqlReq remoteJoins -> do
logQueryLog logger $ QueryLog reqUnparsed Nothing reqId QueryLogKindRemoteSchema
-runRemoteGQ httpManager fieldName rsi resultCustomizer gqlReq remoteJoins
+runRemoteGQ fieldName rsi resultCustomizer gqlReq remoteJoins
E.ExecStepAction aep _ remoteJoins -> do
logQueryLog logger $ QueryLog reqUnparsed Nothing reqId QueryLogKindAction
(time, resp) <- doQErr $ do
(time, (resp, _)) <- EA.runActionExecution userInfo aep
finalResponse <-
-RJ.processRemoteJoins reqId logger env httpManager reqHeaders userInfo resp remoteJoins reqUnparsed
+RJ.processRemoteJoins reqId logger env reqHeaders userInfo resp remoteJoins reqUnparsed
pure (time, finalResponse)
pure $ AnnotatedResponsePart time Telem.Empty resp []
E.ExecStepRaw json -> do
@@ -489,15 +490,14 @@ runGQ env logger reqId userInfo ipAddress reqHeaders queryType reqUnparsed = do
buildRaw json
-- For `ExecStepMulti`, execute all steps and then concat them in a list
E.ExecStepMulti lst -> do
-_all <- traverse (executeQueryStep httpManager fieldName) lst
+_all <- traverse (executeQueryStep fieldName) lst
pure $ AnnotatedResponsePart 0 Telem.Local (encJFromList (map arpResponse _all)) []
executeMutationStep ::
-HTTP.Manager ->
RootFieldAlias ->
EB.ExecutionStep ->
ExceptT (Either GQExecError QErr) m AnnotatedResponsePart
-executeMutationStep httpManager fieldName = \case
+executeMutationStep fieldName = \case
E.ExecStepDB responseHeaders exists remoteJoins -> doQErr $ do
(telemTimeIO_DT, resp) <-
AB.dispatchAnyBackend @BackendTransport
@@ -505,17 +505,17 @@ runGQ env logger reqId userInfo ipAddress reqHeaders queryType reqUnparsed = do
\(EB.DBStepInfo _ sourceConfig genSql tx resolvedConnectionTemplate :: EB.DBStepInfo b) ->
runDBMutation @b reqId reqUnparsed fieldName userInfo logger sourceConfig tx genSql resolvedConnectionTemplate
finalResponse <-
-RJ.processRemoteJoins reqId logger env httpManager reqHeaders userInfo resp remoteJoins reqUnparsed
+RJ.processRemoteJoins reqId logger env reqHeaders userInfo resp remoteJoins reqUnparsed
pure $ AnnotatedResponsePart telemTimeIO_DT Telem.Local finalResponse responseHeaders
E.ExecStepRemote rsi resultCustomizer gqlReq remoteJoins -> do
logQueryLog logger $ QueryLog reqUnparsed Nothing reqId QueryLogKindRemoteSchema
-runRemoteGQ httpManager fieldName rsi resultCustomizer gqlReq remoteJoins
+runRemoteGQ fieldName rsi resultCustomizer gqlReq remoteJoins
E.ExecStepAction aep _ remoteJoins -> do
logQueryLog logger $ QueryLog reqUnparsed Nothing reqId QueryLogKindAction
(time, (resp, hdrs)) <- doQErr $ do
(time, (resp, hdrs)) <- EA.runActionExecution userInfo aep
finalResponse <-
-RJ.processRemoteJoins reqId logger env httpManager reqHeaders userInfo resp remoteJoins reqUnparsed
+RJ.processRemoteJoins reqId logger env reqHeaders userInfo resp remoteJoins reqUnparsed
pure (time, (finalResponse, hdrs))
pure $ AnnotatedResponsePart time Telem.Empty resp $ fromMaybe [] hdrs
E.ExecStepRaw json -> do
@@ -523,12 +523,12 @@ runGQ env logger reqId userInfo ipAddress reqHeaders queryType reqUnparsed = do
buildRaw json
-- For `ExecStepMulti`, execute all steps and then concat them in a list
E.ExecStepMulti lst -> do
-_all <- traverse (executeQueryStep httpManager fieldName) lst
+_all <- traverse (executeQueryStep fieldName) lst
pure $ AnnotatedResponsePart 0 Telem.Local (encJFromList (map arpResponse _all)) []
-runRemoteGQ httpManager fieldName rsi resultCustomizer gqlReq remoteJoins = do
+runRemoteGQ fieldName rsi resultCustomizer gqlReq remoteJoins = do
(telemTimeIO_DT, remoteResponseHeaders, resp) <-
-doQErr $ E.execRemoteGQ env httpManager userInfo reqHeaders (rsDef rsi) gqlReq
+doQErr $ E.execRemoteGQ env userInfo reqHeaders (rsDef rsi) gqlReq
value <- extractFieldFromResponse fieldName resultCustomizer resp
finalResponse <-
doQErr $
@@ -536,7 +536,6 @@ runGQ env logger reqId userInfo ipAddress reqHeaders queryType reqUnparsed = do
reqId
logger
env
-httpManager
reqHeaders
userInfo
-- TODO: avoid encode and decode here
@@ -736,7 +735,8 @@ runGQBatched ::
MonadExecuteQuery m,
MonadMetadataStorage m,
EB.MonadQueryTags m,
-HasResourceLimits m
+HasResourceLimits m,
+ProvidesNetwork m
) =>
Env.Environment ->
L.Logger L.Hasura ->

View File

@@ -43,6 +43,7 @@ import Hasura.Server.Prometheus
incWebsocketConnections,
)
import Hasura.Server.Types (ReadOnlyMode)
+import Hasura.Services.Network
import Hasura.Tracing qualified as Tracing
import Network.HTTP.Client qualified as HTTP
import Network.WebSockets qualified as WS
@@ -60,7 +61,8 @@ createWSServerApp ::
MonadExecuteQuery m,
MonadMetadataStorage m,
EB.MonadQueryTags m,
-HasResourceLimits m
+HasResourceLimits m,
+ProvidesNetwork m
) =>
Env.Environment ->
HashSet (L.EngineLogType L.Hasura) ->

View File

@@ -88,6 +88,7 @@ import Hasura.Server.Prometheus
)
import Hasura.Server.Telemetry.Counters qualified as Telem
import Hasura.Server.Types (RequestId, getRequestId)
+import Hasura.Services.Network
import Hasura.Session
import Hasura.Tracing qualified as Tracing
import Language.GraphQL.Draft.Syntax (Name (..))
@@ -408,7 +409,8 @@ onStart ::
MC.MonadBaseControl IO m,
MonadMetadataStorage m,
EB.MonadQueryTags m,
-HasResourceLimits m
+HasResourceLimits m,
+ProvidesNetwork m
) =>
Env.Environment ->
HashSet (L.EngineLogType L.Hasura) ->
@@ -467,7 +469,6 @@ onStart env enabledLogTypes serverEnv wsConn shouldCaptureVariables (StartMsg op
sc
scVer
queryType
-httpMgr
reqHdrs
q
queryParts
@@ -526,7 +527,7 @@ onStart env enabledLogTypes serverEnv wsConn shouldCaptureVariables (StartMsg op
genSql
resolvedConnectionTemplate
finalResponse <-
-RJ.processRemoteJoins requestId logger env httpMgr reqHdrs userInfo resp remoteJoins q
+RJ.processRemoteJoins requestId logger env reqHdrs userInfo resp remoteJoins q
pure $ AnnotatedResponsePart telemTimeIO_DT Telem.Local finalResponse []
E.ExecStepRemote rsi resultCustomizer gqlReq remoteJoins -> do
logQueryLog logger $ QueryLog q Nothing requestId QueryLogKindRemoteSchema
@@ -536,7 +537,7 @@ onStart env enabledLogTypes serverEnv wsConn shouldCaptureVariables (StartMsg op
(time, (resp, _)) <- doQErr $ do
(time, (resp, hdrs)) <- EA.runActionExecution userInfo actionExecPlan
finalResponse <-
-RJ.processRemoteJoins requestId logger env httpMgr reqHdrs userInfo resp remoteJoins q
+RJ.processRemoteJoins requestId logger env reqHdrs userInfo resp remoteJoins q
pure (time, (finalResponse, hdrs))
pure $ AnnotatedResponsePart time Telem.Empty resp []
E.ExecStepRaw json -> do
@@ -604,14 +605,14 @@ onStart env enabledLogTypes serverEnv wsConn shouldCaptureVariables (StartMsg op
genSql
resolvedConnectionTemplate
finalResponse <-
-RJ.processRemoteJoins requestId logger env httpMgr reqHdrs userInfo resp remoteJoins q
+RJ.processRemoteJoins requestId logger env reqHdrs userInfo resp remoteJoins q
pure $ AnnotatedResponsePart telemTimeIO_DT Telem.Local finalResponse []
E.ExecStepAction actionExecPlan _ remoteJoins -> do
logQueryLog logger $ QueryLog q Nothing requestId QueryLogKindAction
(time, (resp, hdrs)) <- doQErr $ do
(time, (resp, hdrs)) <- EA.runActionExecution userInfo actionExecPlan
finalResponse <-
-RJ.processRemoteJoins requestId logger env httpMgr reqHdrs userInfo resp remoteJoins q
+RJ.processRemoteJoins requestId logger env reqHdrs userInfo resp remoteJoins q
pure (time, (finalResponse, hdrs))
pure $ AnnotatedResponsePart time Telem.Empty resp $ fromMaybe [] hdrs
E.ExecStepRemote rsi resultCustomizer gqlReq remoteJoins -> do
@@ -768,7 +769,7 @@ onStart env enabledLogTypes serverEnv wsConn shouldCaptureVariables (StartMsg op
runRemoteGQ requestId reqUnparsed fieldName userInfo reqHdrs rsi resultCustomizer gqlReq remoteJoins = do
(telemTimeIO_DT, _respHdrs, resp) <-
doQErr $
-E.execRemoteGQ env httpMgr userInfo reqHdrs (rsDef rsi) gqlReq
+E.execRemoteGQ env userInfo reqHdrs (rsDef rsi) gqlReq
value <- mapExceptT lift $ extractFieldFromResponse fieldName resultCustomizer resp
finalResponse <-
doQErr $
@@ -776,7 +777,6 @@ onStart env enabledLogTypes serverEnv wsConn shouldCaptureVariables (StartMsg op
requestId
logger
env
-httpMgr
reqHdrs
userInfo
-- TODO: avoid encode and decode here
@@ -789,7 +789,7 @@ onStart env enabledLogTypes serverEnv wsConn shouldCaptureVariables (StartMsg op
logger
subscriptionsState
getSchemaCache
-httpMgr
+_
_
sqlGenCtx
readOnlyMode
@@ -1004,7 +1004,8 @@ onMessage ::
MC.MonadBaseControl IO m,
MonadMetadataStorage m,
EB.MonadQueryTags m,
-HasResourceLimits m
+HasResourceLimits m,
+ProvidesNetwork m
) =>
Env.Environment ->
HashSet (L.EngineLogType L.Hasura) ->

View File

@@ -1,5 +1,3 @@
-{-# LANGUAGE UndecidableInstances #-}
-- | This module has type class and types which implements the Metadata Storage Abstraction
module Hasura.Metadata.Class
( SchemaSyncEventProcessResult (..),

View File

@@ -36,8 +36,8 @@ import Hasura.RQL.Types.Metadata qualified as Metadata
import Hasura.RQL.Types.SchemaCache.Build qualified as SC.Build
import Hasura.SQL.Backend qualified as Backend
import Hasura.SQL.BackendMap qualified as BackendMap
+import Hasura.Services.Network
import Hasura.Tracing (ignoreTraceT)
-import Network.HTTP.Client.Manager qualified as HTTP
import Servant.Client qualified as Servant
import Servant.Client.Core.HasClient ((//))
import Servant.Client.Generic (genericClient)
@@ -81,13 +81,13 @@ instance ToJSON DCAddAgent where
-- | Insert a new Data Connector Agent into Metadata.
runAddDataConnectorAgent ::
( Metadata.MetadataM m,
+ProvidesNetwork m,
SC.Build.CacheRWM m,
Has (L.Logger L.Hasura) r,
MonadReader r m,
MonadBaseControl IO m,
MonadError Error.QErr m,
-MonadIO m,
+MonadIO m
-HTTP.HasHttpManagerM m
) =>
DCAddAgent ->
m EncJSON
@ -123,16 +123,16 @@ data Availability = Available | NotAvailable Error.QErr
-- | Check DC Agent availability by checking its Capabilities endpoint. -- | Check DC Agent availability by checking its Capabilities endpoint.
checkAgentAvailability :: checkAgentAvailability ::
( Has (L.Logger L.Hasura) r, ( ProvidesNetwork m,
Has (L.Logger L.Hasura) r,
MonadReader r m, MonadReader r m,
MonadIO m, MonadIO m,
MonadBaseControl IO m, MonadBaseControl IO m
HTTP.HasHttpManagerM m
) => ) =>
Servant.BaseUrl -> Servant.BaseUrl ->
m Availability m Availability
checkAgentAvailability url = do checkAgentAvailability url = do
manager <- HTTP.askHttpManager manager <- askHTTPManager
logger <- asks getter logger <- asks getter
res <- runExceptT $ do res <- runExceptT $ do
capabilitiesU <- (ignoreTraceT . flip runAgentClientT (AgentClientContext logger url manager Nothing) $ genericClient @API.Routes // API._capabilities) capabilitiesU <- (ignoreTraceT . flip runAgentClientT (AgentClientContext logger url manager Nothing) $ genericClient @API.Routes // API._capabilities)
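The `ProvidesNetwork` class consumed above is not itself shown in this diff. The following is a minimal, self-contained sketch of what such a service could look like; the `Manager` stand-in type, `AppM`, and `ping` are illustrative assumptions, not Hasura's actual code (the real class lives in `Hasura.Services.Network` and hands out http-client's `Manager`):

```haskell
{-# LANGUAGE FlexibleInstances #-}

import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Reader (ReaderT (..), ask)

-- Stand-in for http-client's 'Network.HTTP.Client.Manager', so the sketch
-- runs with base + transformers alone.
data Manager = Manager

-- The service: any monad that can hand out the process-wide HTTP manager.
class (Monad m) => ProvidesNetwork m where
  askHTTPManager :: m Manager

-- The base monad carries the manager in its environment; it is created once
-- at startup and never threaded through function arguments again.
type AppM = ReaderT Manager IO

instance ProvidesNetwork (ReaderT Manager IO) where
  askHTTPManager = ask

-- A consumer states only the capability it needs, not where it comes from.
ping :: (ProvidesNetwork m) => m String
ping = do
  _manager <- askHTTPManager -- would be handed to the actual HTTP call
  pure "request issued via the shared manager"

main :: IO ()
main = do
  let manager = Manager -- real code would call e.g. a manager constructor here
  runReaderT (ping >>= liftIO . putStrLn) manager :: IO ()
```

The payoff is visible in `checkAgentAvailability` above: the manager no longer appears in the signature at all, only the `ProvidesNetwork m` constraint does.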


```diff
@@ -90,10 +90,10 @@ import Hasura.SQL.BackendMap qualified as BackendMap
 import Hasura.SQL.Tag
 import Hasura.Server.Migrate.Version
 import Hasura.Server.Types
+import Hasura.Services
 import Hasura.Session
 import Hasura.Tracing qualified as Tracing
 import Language.GraphQL.Draft.Syntax qualified as G
-import Network.HTTP.Client.Manager (HasHttpManagerM (..))
 
 {- Note [Roles Inheritance]
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -167,13 +167,13 @@ newtype CacheRWT m a
       MonadReader r,
       MonadError e,
       UserInfoM,
-      HasHttpManagerM,
       MonadMetadataStorage,
       MonadMetadataStorageQueryAPI,
       Tracing.MonadTrace,
       HasServerConfigCtx,
       MonadBase b,
-      MonadBaseControl b
+      MonadBaseControl b,
+      ProvidesNetwork
     )
 
 instance (MonadEventLogCleanup m) => MonadEventLogCleanup (CacheRWT m) where
@@ -198,7 +198,7 @@ instance (Monad m) => CacheRM (CacheRWT m) where
 instance
   ( MonadIO m,
     MonadError QErr m,
-    HasHttpManagerM m,
+    ProvidesNetwork m,
     MonadResolveSource m,
     HasServerConfigCtx m
   ) =>
@@ -306,7 +306,7 @@ buildSchemaCacheRule ::
     MonadBaseControl IO m,
     MonadError QErr m,
     MonadReader BuildReason m,
-    HasHttpManagerM m,
+    ProvidesNetwork m,
     MonadResolveSource m,
     HasServerConfigCtx m
   ) =>
@@ -440,7 +440,7 @@ buildSchemaCacheRule logger env = proc (metadataNoDefaults, invalidationKeys, st
     ArrowWriter (Seq (Either InconsistentMetadata MetadataDependency)) arr,
     MonadIO m,
     MonadBaseControl IO m,
-    HasHttpManagerM m
+    ProvidesNetwork m
   ) =>
   (BackendConfigWrapper b, Inc.Dependency (BackendMap BackendInvalidationKeysWrapper)) `arr` BackendCache
 resolveBackendInfo' = proc (backendConfigWrapper, backendInvalidationMap) -> do
@@ -458,7 +458,7 @@ buildSchemaCacheRule logger env = proc (metadataNoDefaults, invalidationKeys, st
     ArrowWriter (Seq (Either InconsistentMetadata MetadataDependency)) arr,
     MonadIO m,
     MonadBaseControl IO m,
-    HasHttpManagerM m
+    ProvidesNetwork m
   ) =>
   (Inc.Dependency (BackendMap BackendInvalidationKeysWrapper), [AB.AnyBackend BackendConfigWrapper]) `arr` BackendCache
 resolveBackendCache = proc (backendInvalidationMap, backendConfigs) -> do
@@ -478,7 +478,7 @@ buildSchemaCacheRule logger env = proc (metadataNoDefaults, invalidationKeys, st
     MonadIO m,
     MonadBaseControl IO m,
     MonadResolveSource m,
-    HasHttpManagerM m,
+    ProvidesNetwork m,
     BackendMetadata b
   ) =>
   ( Inc.Dependency (HashMap SourceName Inc.InvalidationKey),
@@ -490,7 +490,11 @@ buildSchemaCacheRule logger env = proc (metadataNoDefaults, invalidationKeys, st
     `arr` Maybe (SourceConfig b)
 tryGetSourceConfig = Inc.cache proc (invalidationKeys, sourceName, sourceConfig, backendKind, backendInfo) -> do
   let metadataObj = MetadataObject (MOSource sourceName) $ toJSON sourceName
-  httpMgr <- bindA -< askHttpManager
+  -- TODO: if we make all of 'resolveSourceConfig' a Service, we could
+  -- delegate to it the responsibility of extracting the HTTP manager, and
+  -- avoid having to thread 'ProvidesNetwork' throughout the cache building
+  -- code.
+  httpMgr <- bindA -< askHTTPManager
   Inc.dependOn -< Inc.selectKeyD sourceName invalidationKeys
   (|
     withRecordInconsistency
@@ -506,7 +510,7 @@ buildSchemaCacheRule logger env = proc (metadataNoDefaults, invalidationKeys, st
     MonadIO m,
     MonadBaseControl IO m,
     MonadResolveSource m,
-    HasHttpManagerM m,
+    ProvidesNetwork m,
     BackendMetadata b
   ) =>
   ( Inc.Dependency (HashMap SourceName Inc.InvalidationKey),
@@ -711,7 +715,7 @@ buildSchemaCacheRule logger env = proc (metadataNoDefaults, invalidationKeys, st
     MonadError QErr m,
     MonadReader BuildReason m,
     MonadBaseControl IO m,
-    HasHttpManagerM m,
+    ProvidesNetwork m,
     HasServerConfigCtx m,
     MonadResolveSource m
   ) =>
```


```diff
@@ -78,8 +78,8 @@ import Hasura.SQL.Backend
 import Hasura.SQL.BackendMap (BackendMap)
 import Hasura.SQL.BackendMap qualified as BackendMap
 import Hasura.Server.Types
+import Hasura.Services
 import Hasura.Session
-import Network.HTTP.Client.Manager (HasHttpManagerM (..))
 import Network.HTTP.Client.Transformable qualified as HTTP
 
 newtype BackendInvalidationKeysWrapper (b :: BackendType) = BackendInvalidationKeysWrapper
@@ -272,8 +272,8 @@ newtype CacheBuild a = CacheBuild (ReaderT CacheBuildParams (ExceptT QErr IO) a)
       MonadBaseControl IO
     )
 
-instance HasHttpManagerM CacheBuild where
-  askHttpManager = asks _cbpManager
+instance ProvidesNetwork CacheBuild where
+  askHTTPManager = asks _cbpManager
 
 instance HasServerConfigCtx CacheBuild where
   askServerConfigCtx = asks _cbpServerConfigCtx
@@ -295,16 +295,16 @@ runCacheBuild params (CacheBuild m) = do
 runCacheBuildM ::
   ( MonadIO m,
     MonadError QErr m,
-    HasHttpManagerM m,
     HasServerConfigCtx m,
-    MonadResolveSource m
+    MonadResolveSource m,
+    ProvidesNetwork m
   ) =>
   CacheBuild a ->
   m a
 runCacheBuildM m = do
   params <-
     CacheBuildParams
-      <$> askHttpManager
+      <$> askHTTPManager
       <*> getPGSourceResolver
      <*> getMSSQLSourceResolver
       <*> askServerConfigCtx
```


```diff
@@ -77,8 +77,9 @@ import Hasura.SQL.Backend
 import Hasura.SQL.Backend qualified as Backend
 import Hasura.SQL.BackendMap qualified as BackendMap
 import Hasura.Server.Logging (MetadataLog (..))
+import Hasura.Services
 import Hasura.Tracing qualified as Tracing
-import Network.HTTP.Client.Manager qualified as HTTP.Manager
+import Network.HTTP.Client qualified as HTTP
 import Servant.API (Union)
 import Servant.Client (BaseUrl, (//))
 import Servant.Client.Generic qualified as Servant.Client
@@ -356,12 +357,12 @@ instance FromJSON GetSourceTables where
 runGetSourceTables ::
   ( CacheRM m,
     Has (L.Logger L.Hasura) r,
-    HTTP.Manager.HasHttpManagerM m,
     MonadReader r m,
     MonadError Error.QErr m,
     Metadata.MetadataM m,
     MonadIO m,
-    MonadBaseControl IO m
+    MonadBaseControl IO m,
+    ProvidesNetwork m
   ) =>
   Env.Environment ->
   GetSourceTables ->
@@ -378,7 +379,7 @@ runGetSourceTables env GetSourceTables {..} = do
   case _smKind of
     Backend.DataConnectorKind dcName -> do
       logger :: L.Logger L.Hasura <- asks getter
-      manager <- HTTP.Manager.askHttpManager
+      manager <- askHTTPManager
       let timeout = DC.Types.timeout _smConfiguration
       DC.Types.DataConnectorOptions {..} <- lookupDataConnectorOptions dcName bmap
@@ -408,7 +409,7 @@ instance FromJSON GetTableInfo where
 runGetTableInfo ::
   ( CacheRM m,
     Has (L.Logger L.Hasura) r,
-    HTTP.Manager.HasHttpManagerM m,
+    ProvidesNetwork m,
     MonadReader r m,
     MonadError Error.QErr m,
     Metadata.MetadataM m,
@@ -430,7 +431,7 @@ runGetTableInfo env GetTableInfo {..} = do
   case _smKind of
     Backend.DataConnectorKind dcName -> do
       logger :: L.Logger L.Hasura <- asks getter
-      manager <- HTTP.Manager.askHttpManager
+      manager <- askHTTPManager
       let timeout = DC.Types.timeout _smConfiguration
       DC.Types.DataConnectorOptions {..} <- lookupDataConnectorOptions dcName bmap
@@ -459,7 +460,7 @@ lookupDataConnectorOptions dcName bmap =
 querySourceSchema ::
   (MonadIO m, MonadBaseControl IO m, MonadError QErr m) =>
   L.Logger L.Hasura ->
-  HTTP.Manager.Manager ->
+  HTTP.Manager ->
   Maybe DC.Types.SourceTimeout ->
   BaseUrl ->
   SourceName ->
```


```diff
@@ -79,7 +79,6 @@ import Hasura.RQL.Types.CustomTypes
 import Hasura.RQL.Types.Eventing (EventId (..))
 import Hasura.Session
 import Language.GraphQL.Draft.Syntax qualified as G
-import Network.HTTP.Client qualified as HTTP
 import Network.HTTP.Types qualified as HTTP
 import PostgreSQL.Binary.Encoding qualified as PE
@@ -289,8 +288,7 @@ newtype ActionPermissionInfo = ActionPermissionInfo
 -- GraphQL.Execute.
 data ActionExecContext = ActionExecContext
-  { _aecManager :: HTTP.Manager,
-    _aecHeaders :: HTTP.RequestHeaders,
+  { _aecHeaders :: HTTP.RequestHeaders,
     _aecSessionVariables :: SessionVariables
   }
```


```diff
@@ -31,8 +31,8 @@ import Hasura.RQL.Types.Table
 import Hasura.SQL.Backend
 import Hasura.SQL.Types
 import Hasura.Server.Migrate.Version
+import Hasura.Services.Network
 import Network.HTTP.Client qualified as HTTP
-import Network.HTTP.Client.Manager (HasHttpManagerM)
 
 class
   ( Backend b,
@@ -76,7 +76,7 @@ class
     ArrowWriter (Seq (Either InconsistentMetadata MetadataDependency)) arr,
     MonadIO m,
     MonadBaseControl IO m,
-    HasHttpManagerM m
+    ProvidesNetwork m
   ) =>
   Logger Hasura ->
   (Inc.Dependency (Maybe (BackendInvalidationKeys b)), BackendConfig b) `arr` BackendInfo b
```


```diff
@@ -7,19 +7,19 @@ module Hasura.RQL.Types.Run
   )
 where
 
+import Control.Monad.Trans
 import Control.Monad.Trans.Control (MonadBaseControl)
 import Hasura.Metadata.Class
 import Hasura.Prelude
 import Hasura.RQL.DDL.EventTrigger (MonadEventLogCleanup (..))
 import Hasura.RQL.Types.Source
 import Hasura.Server.Types
+import Hasura.Services
 import Hasura.Session
 import Hasura.Tracing qualified as Tracing
-import Network.HTTP.Client.Manager qualified as HTTP
 
 data RunCtx = RunCtx
   { _rcUserInfo :: UserInfo,
-    _rcHttpMgr :: HTTP.Manager,
     _rcServerConfigCtx :: ServerConfigCtx
   }
@@ -35,7 +35,8 @@ newtype RunT m a = RunT {unRunT :: ReaderT RunCtx m a}
       MonadBase b,
       MonadBaseControl b,
       MonadMetadataStorage,
-      MonadMetadataStorageQueryAPI
+      MonadMetadataStorageQueryAPI,
+      ProvidesNetwork
     )
 
 instance MonadTrans RunT where
@@ -44,9 +45,6 @@ instance MonadTrans RunT where
 instance (Monad m) => UserInfoM (RunT m) where
   askUserInfo = asks _rcUserInfo
 
-instance (Monad m) => HTTP.HasHttpManagerM (RunT m) where
-  askHttpManager = asks _rcHttpMgr
-
 instance (Monad m) => HasServerConfigCtx (RunT m) where
   askServerConfigCtx = asks _rcServerConfigCtx
```
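The `RunT` change above is the pattern repeated across this PR: a newtype no longer stores the manager in its own context (`_rcHttpMgr` is gone) and instead derives the capability from the monad underneath. A minimal, self-contained sketch of that deriving trick; `Manager`, the simplified non-transformer `RunT`, and `runRunT` are illustrative stand-ins, not Hasura's actual definitions:

```haskell
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Trans.Reader (ReaderT (..), ask)

-- Stand-in for http-client's 'Manager'; the class mirrors the
-- 'ProvidesNetwork' service introduced by this PR.
data Manager = Manager

class (Monad m) => ProvidesNetwork m where
  askHTTPManager :: m Manager

-- The base monad provides the manager from its environment.
instance ProvidesNetwork (ReaderT Manager IO) where
  askHTTPManager = ask

-- Analogue of 'RunT': rather than carrying its own copy of the manager,
-- the wrapper simply derives 'ProvidesNetwork' through the newtype.
newtype RunT a = RunT (ReaderT Manager IO a)
  deriving (Functor, Applicative, Monad, ProvidesNetwork)

runRunT :: Manager -> RunT a -> IO a
runRunT manager (RunT action) = runReaderT action manager

main :: IO ()
main = do
  Manager <- runRunT Manager askHTTPManager
  putStrLn "RunT got the manager from its base monad"
```

Since the instance is coerced from the base monad, there is no hand-written lifting left to maintain, which is exactly why the explicit `HasHttpManagerM (RunT m)` instance could be deleted.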


```diff
@@ -58,11 +58,11 @@ import Hasura.RQL.Types.QueryCollection
 import Hasura.RQL.Types.SchemaCache
 import Hasura.RemoteSchema.Metadata (RemoteSchemaName)
 import Hasura.Server.Types
+import Hasura.Services.Network
 import Hasura.Session
 import Hasura.Tracing (TraceT)
 import Hasura.Tracing qualified as Tracing
 import Language.GraphQL.Draft.Syntax qualified as G
-import Network.HTTP.Client.Manager (HasHttpManagerM (..))
 
 -- * Inconsistencies
@@ -237,16 +237,14 @@ newtype MetadataT m a = MetadataT {unMetadataT :: StateT Metadata m a}
       MFunctor,
       Tracing.MonadTrace,
       MonadBase b,
-      MonadBaseControl b
+      MonadBaseControl b,
+      ProvidesNetwork
     )
 
 instance (Monad m) => MetadataM (MetadataT m) where
   getMetadata = MetadataT get
   putMetadata = MetadataT . put
 
-instance (HasHttpManagerM m) => HasHttpManagerM (MetadataT m) where
-  askHttpManager = lift askHttpManager
-
 instance (UserInfoM m) => UserInfoM (MetadataT m) where
   askUserInfo = lift askUserInfo
```


```diff
@@ -33,9 +33,9 @@ import Hasura.RQL.Types.SchemaCacheTypes
 import Hasura.RemoteSchema.Metadata
 import Hasura.RemoteSchema.SchemaCache.Build (addRemoteSchemaP2Setup)
 import Hasura.RemoteSchema.SchemaCache.Types
+import Hasura.Services
 import Hasura.Session
 import Hasura.Tracing qualified as Tracing
-import Network.HTTP.Client.Manager (HasHttpManagerM (..))
 
 -- | The payload for 'add_remote_schema', and a component of 'Metadata'.
 data AddRemoteSchemaQuery = AddRemoteSchemaQuery
@@ -62,7 +62,7 @@ runAddRemoteSchema ::
   ( QErrM m,
     CacheRWM m,
     MonadIO m,
-    HasHttpManagerM m,
+    ProvidesNetwork m,
     MetadataM m,
     Tracing.MonadTrace m
   ) =>
@@ -163,7 +163,7 @@ runUpdateRemoteSchema ::
   ( QErrM m,
     CacheRWM m,
     MonadIO m,
-    HasHttpManagerM m,
+    ProvidesNetwork m,
     MetadataM m,
     Tracing.MonadTrace m
   ) =>
@@ -196,9 +196,8 @@ runUpdateRemoteSchema env (AddRemoteSchemaQuery name defn comment) = do
     ( (isJust metadataRMSchemaURL && isJust currentRMSchemaURL && metadataRMSchemaURL == currentRMSchemaURL)
         || (isJust metadataRMSchemaURLFromEnv && isJust currentRMSchemaURLFromEnv && metadataRMSchemaURLFromEnv == currentRMSchemaURLFromEnv)
     )
-    $ do
-      httpMgr <- askHttpManager
-      void $ fetchRemoteSchema env httpMgr name rsi
+    $ void
+    $ fetchRemoteSchema env name rsi
   -- This will throw an error if the new schema fetched in incompatible
   -- with the existing permissions and relations
```


```diff
@@ -28,9 +28,9 @@ import Hasura.RQL.Types.SchemaCache.Build
 import Hasura.RemoteSchema.Metadata
 import Hasura.RemoteSchema.SchemaCache.Permission (resolveRoleBasedRemoteSchema)
 import Hasura.RemoteSchema.SchemaCache.Types
+import Hasura.Services
 import Hasura.Session
 import Hasura.Tracing qualified as Tracing
-import Network.HTTP.Client.Manager (HasHttpManagerM (..))
 
 -- Resolves a user specified `RemoteSchemaMetadata` into information rich `RemoteSchemaCtx`
 -- However, given the nature of remote relationships, we cannot fully 'resolve' them, so
@@ -42,10 +42,10 @@ buildRemoteSchemas ::
     Inc.ArrowCache m arr,
     MonadIO m,
     MonadBaseControl IO m,
-    HasHttpManagerM m,
     Eq remoteRelationshipDefinition,
     ToJSON remoteRelationshipDefinition,
-    MonadError QErr m
+    MonadError QErr m,
+    ProvidesNetwork m
   ) =>
   Env.Environment ->
   ( (Inc.Dependency (HashMap RemoteSchemaName Inc.InvalidationKey), OrderedRoles, Maybe (HashMap RemoteSchemaName BL.ByteString)),
@@ -170,12 +170,11 @@ buildRemoteSchemaPermissions = proc ((remoteSchemaName, originalIntrospection, o
       in MetadataObject objectId $ toJSON defn
 
 addRemoteSchemaP2Setup ::
-  (QErrM m, MonadIO m, HasHttpManagerM m, Tracing.MonadTrace m) =>
+  (QErrM m, MonadIO m, ProvidesNetwork m, Tracing.MonadTrace m) =>
   Env.Environment ->
   RemoteSchemaName ->
   RemoteSchemaDef ->
   m (IntrospectionResult, BL.ByteString, RemoteSchemaInfo)
 addRemoteSchemaP2Setup env name def = do
   rsi <- validateRemoteSchemaDef env def
-  httpMgr <- askHttpManager
-  fetchRemoteSchema env httpMgr name rsi
+  fetchRemoteSchema env name rsi
```


```diff
@@ -80,9 +80,9 @@ import Hasura.Server.Logging (SchemaSyncLog (..), SchemaSyncThreadType (TTMetada
 import Hasura.Server.SchemaCacheRef
 import Hasura.Server.Types
 import Hasura.Server.Utils (APIVersion (..))
+import Hasura.Services
 import Hasura.Session
 import Hasura.Tracing qualified as Tracing
-import Network.HTTP.Client.Manager qualified as HTTP
 
 data RQLMetadataV1
   = -- Sources
@@ -383,18 +383,18 @@ runMetadataQuery ::
     Tracing.MonadTrace m,
     MonadMetadataStorageQueryAPI m,
     MonadResolveSource m,
-    MonadEventLogCleanup m
+    MonadEventLogCleanup m,
+    ProvidesHasuraServices m
   ) =>
   Env.Environment ->
   L.Logger L.Hasura ->
   InstanceId ->
   UserInfo ->
-  HTTP.Manager ->
   ServerConfigCtx ->
   SchemaCacheRef ->
   RQLMetadata ->
   m (EncJSON, RebuildableSchemaCache)
-runMetadataQuery env logger instanceId userInfo httpManager serverConfigCtx schemaCacheRef RQLMetadata {..} = do
+runMetadataQuery env logger instanceId userInfo serverConfigCtx schemaCacheRef RQLMetadata {..} = do
   schemaCache <- liftIO $ fst <$> readSchemaCacheRef schemaCacheRef
   (metadata, currentResourceVersion) <- Tracing.trace "fetchMetadata" $ liftEitherM fetchMetadata
   let exportsMetadata = \case
@@ -425,7 +425,7 @@ runMetadataQuery env logger instanceId userInfo httpManager serverConfigCtx sche
           & flip runReaderT logger
           & runMetadataT metadata metadataDefaults
           & runCacheRWT schemaCache
-          & peelRun (RunCtx userInfo httpManager serverConfigCtx)
+          & peelRun (RunCtx userInfo serverConfigCtx)
   -- set modified metadata in storage
   if queryModifiesMetadata _rqlMetadata
     then case (_sccMaintenanceMode serverConfigCtx, _sccReadOnlyMode serverConfigCtx) of
@@ -457,7 +457,7 @@ runMetadataQuery env logger instanceId userInfo httpManager serverConfigCtx sche
           Tracing.trace "setMetadataResourceVersionInSchemaCache" $
             setMetadataResourceVersionInSchemaCache newResourceVersion
               & runCacheRWT modSchemaCache
-              & peelRun (RunCtx userInfo httpManager serverConfigCtx)
+              & peelRun (RunCtx userInfo serverConfigCtx)
         pure (r, modSchemaCache')
       (MaintenanceModeEnabled (), ReadOnlyModeDisabled) ->
@@ -593,14 +593,14 @@ runMetadataQueryM ::
     CacheRWM m,
     Tracing.MonadTrace m,
     UserInfoM m,
-    HTTP.HasHttpManagerM m,
     MetadataM m,
     MonadMetadataStorageQueryAPI m,
     HasServerConfigCtx m,
     MonadReader r m,
     Has (L.Logger L.Hasura) r,
     MonadError QErr m,
-    MonadEventLogCleanup m
+    MonadEventLogCleanup m,
+    ProvidesHasuraServices m
   ) =>
   Env.Environment ->
   MetadataResourceVersion ->
@@ -624,14 +624,14 @@ runMetadataQueryV1M ::
     CacheRWM m,
     Tracing.MonadTrace m,
     UserInfoM m,
-    HTTP.HasHttpManagerM m,
     MetadataM m,
     MonadMetadataStorageQueryAPI m,
     HasServerConfigCtx m,
     MonadReader r m,
     Has (L.Logger L.Hasura) r,
     MonadError QErr m,
-    MonadEventLogCleanup m
+    MonadEventLogCleanup m,
+    ProvidesHasuraServices m
   ) =>
   Env.Environment ->
   MetadataResourceVersion ->
```


```diff
@@ -56,10 +56,9 @@ import Hasura.RemoteSchema.MetadataAPI
 import Hasura.SQL.Backend
 import Hasura.Server.Types
 import Hasura.Server.Utils
+import Hasura.Services
 import Hasura.Session
 import Hasura.Tracing qualified as Tracing
-import Network.HTTP.Client qualified as HTTP
-import Network.HTTP.Client.Manager (HasHttpManagerM (..))
 
 data RQLQueryV1
   = RQAddExistingTableOrView !(TrackTable ('Postgres 'Vanilla))
@@ -182,18 +181,18 @@ runQuery ::
     MonadMetadataStorageQueryAPI m,
     MonadResolveSource m,
     MonadQueryTags m,
-    MonadEventLogCleanup m
+    MonadEventLogCleanup m,
+    ProvidesHasuraServices m
   ) =>
   Env.Environment ->
   L.Logger L.Hasura ->
   InstanceId ->
   UserInfo ->
   RebuildableSchemaCache ->
-  HTTP.Manager ->
   ServerConfigCtx ->
   RQLQuery ->
   m (EncJSON, RebuildableSchemaCache)
-runQuery env logger instanceId userInfo sc hMgr serverConfigCtx query = do
+runQuery env logger instanceId userInfo sc serverConfigCtx query = do
   when ((_sccReadOnlyMode serverConfigCtx == ReadOnlyModeEnabled) && queryModifiesUserDB query) $
     throw400 NotSupported "Cannot run write queries when read-only mode is enabled"
@@ -216,7 +215,7 @@ runQuery env logger instanceId userInfo sc hMgr serverConfigCtx query = do
     pure (js, rsc, ci, meta)
   withReload currentResourceVersion result
   where
-    runCtx = RunCtx userInfo hMgr serverConfigCtx
+    runCtx = RunCtx userInfo serverConfigCtx
 
     withReload currentResourceVersion (result, updatedCache, invalidations, updatedMetadata) = do
       when (queryModifiesSchemaCache query) $ do
@@ -394,7 +393,6 @@ runQueryM ::
     UserInfoM m,
     MonadBaseControl IO m,
     MonadIO m,
-    HasHttpManagerM m,
     HasServerConfigCtx m,
     Tracing.MonadTrace m,
     MetadataM m,
@@ -403,7 +401,8 @@ runQueryM ::
     MonadReader r m,
     MonadError QErr m,
     Has (L.Logger L.Hasura) r,
-    MonadEventLogCleanup m
+    MonadEventLogCleanup m,
+    ProvidesHasuraServices m
   ) =>
   Env.Environment ->
   RQLQuery ->
```


```diff
@@ -46,10 +46,10 @@ import Hasura.RQL.Types.SchemaCache.Build
 import Hasura.RQL.Types.Source
 import Hasura.SQL.Backend
 import Hasura.Server.Types
+import Hasura.Services
 import Hasura.Session
 import Hasura.Tracing qualified as Tracing
 import Language.GraphQL.Draft.Syntax qualified as GQL
-import Network.HTTP.Client qualified as HTTP
 
 data RQLQuery
   = RQInsert !InsertQuery
@@ -108,17 +108,17 @@ runQuery ::
     Tracing.MonadTrace m,
     MonadMetadataStorage m,
     MonadResolveSource m,
-    MonadQueryTags m
+    MonadQueryTags m,
+    ProvidesHasuraServices m
   ) =>
   Env.Environment ->
   InstanceId ->
   UserInfo ->
   RebuildableSchemaCache ->
-  HTTP.Manager ->
   ServerConfigCtx ->
   RQLQuery ->
   m (EncJSON, RebuildableSchemaCache)
-runQuery env instanceId userInfo schemaCache httpManager serverConfigCtx rqlQuery = do
+runQuery env instanceId userInfo schemaCache serverConfigCtx rqlQuery = do
   when ((_sccReadOnlyMode serverConfigCtx == ReadOnlyModeEnabled) && queryModifiesUserDB rqlQuery) $
     throw400 NotSupported "Cannot run write queries when read-only mode is enabled"
@@ -134,7 +134,7 @@ runQuery env instanceId userInfo schemaCache httpManager serverConfigCtx rqlQuer
     pure (js, rsc, ci, meta)
   withReload currentResourceVersion result
   where
-    runCtx = RunCtx userInfo httpManager serverConfigCtx
+    runCtx = RunCtx userInfo serverConfigCtx
 
     withReload currentResourceVersion (result, updatedCache, invalidations, updatedMetadata) = do
       when (queryModifiesSchema rqlQuery) $ do
```


@ -1,4 +1,5 @@
{-# LANGUAGE CPP #-} {-# LANGUAGE CPP #-}
{-# LANGUAGE UndecidableInstances #-}
module Hasura.Server.App module Hasura.Server.App
( APIResp (JSONResp, RawResp), ( APIResp (JSONResp, RawResp),
@ -100,6 +101,7 @@ import Hasura.Server.SchemaCacheRef
import Hasura.Server.Types import Hasura.Server.Types
import Hasura.Server.Utils import Hasura.Server.Utils
import Hasura.Server.Version import Hasura.Server.Version
import Hasura.Services
import Hasura.Session import Hasura.Session
import Hasura.ShutdownLatch import Hasura.ShutdownLatch
import Hasura.Tracing (MonadTrace) import Hasura.Tracing (MonadTrace)
@ -165,7 +167,40 @@ data HandlerCtx = HandlerCtx
hcSourceIpAddress :: !Wai.IpAddress hcSourceIpAddress :: !Wai.IpAddress
} }
type Handler m = ReaderT HandlerCtx (ExceptT QErr m) newtype Handler m a = Handler (ReaderT HandlerCtx (ExceptT QErr m) a)
deriving
( Functor,
Applicative,
Monad,
MonadIO,
MonadFix,
MonadBase b,
MonadBaseControl b,
MonadReader HandlerCtx,
MonadError QErr,
-- Tracing.HasReporter,
Tracing.MonadTrace,
HasResourceLimits,
MonadResolveSource,
HasServerConfigCtx,
E.MonadGQLExecutionCheck,
MonadEventLogCleanup,
MonadQueryLog,
EB.MonadQueryTags,
GH.MonadExecuteQuery,
MonadMetadataApiAuthorization,
MonadMetadataStorage,
MonadMetadataStorageQueryAPI,
ProvidesNetwork
)
instance MonadTrans Handler where
lift = Handler . lift . lift
runHandler :: (HasResourceLimits m, MonadBaseControl IO m) => HandlerCtx -> Handler m a -> m (Either QErr a)
runHandler ctx (Handler r) = do
handlerLimit <- askHTTPHandlerLimit
runExceptT $ flip runReaderT ctx $ runResourceLimits handlerLimit r
data APIResp data APIResp
= JSONResp !(HttpResponse EncJSON) = JSONResp !(HttpResponse EncJSON)
```diff
@@ -311,7 +346,6 @@ mkSpockAction serverCtx@ServerCtx {..} qErrEncoder qErrModifier apiHandler = do
     (requestId, headers) <- getRequestId origHeaders
     tracingCtx <- liftIO $ Tracing.extractB3HttpContext headers
-    handlerLimit <- lift askHTTPHandlerLimit

     let runTraceT ::
           forall m1 a1.
@@ -323,14 +357,6 @@ mkSpockAction serverCtx@ServerCtx {..} qErrEncoder qErrModifier apiHandler = do
             scTraceSamplingPolicy
             (fromString (B8.unpack pathInfo))

-        runHandler ::
-          MonadBaseControl IO m2 =>
-          HandlerCtx ->
-          ReaderT HandlerCtx (ExceptT QErr m2) a2 ->
-          m2 (Either QErr a2)
-        runHandler handlerCtx handler =
-          runExceptT $ flip runReaderT handlerCtx $ runResourceLimits handlerLimit $ handler
-
     getInfo parsedRequest = do
       authenticationResp <- lift (resolveUserInfo (_lsLogger scLoggers) scManager headers scAuthMode parsedRequest)
       authInfo <- onLeft authenticationResp (logErrorAndResp Nothing requestId req (reqBody, Nothing) False origHeaders (ExtraUserInfo Nothing) . qErrModifier)
@@ -430,7 +456,8 @@ v1QueryHandler ::
     MonadMetadataStorageQueryAPI m,
     MonadResolveSource m,
     EB.MonadQueryTags m,
-    MonadEventLogCleanup m
+    MonadEventLogCleanup m,
+    ProvidesNetwork m
   ) =>
   RQLQuery ->
   m (HttpResponse EncJSON)
@@ -446,7 +473,6 @@ v1QueryHandler query = do
   scRef <- asks (scCacheRef . hcServerCtx)
   metadataDefaults <- asks (scMetadataDefaults . hcServerCtx)
   schemaCache <- liftIO $ fst <$> readSchemaCacheRef scRef
-  httpMgr <- asks (scManager . hcServerCtx)
   sqlGenCtx <- asks (scSQLGenCtx . hcServerCtx)
   instanceId <- asks (scInstanceId . hcServerCtx)
   env <- asks (scEnvironment . hcServerCtx)
@@ -476,7 +502,6 @@ v1QueryHandler query = do
           instanceId
           userInfo
           schemaCache
-          httpMgr
           serverConfigCtx
           query
@@ -489,7 +514,8 @@ v1MetadataHandler ::
     MonadMetadataStorageQueryAPI m,
     MonadResolveSource m,
     MonadMetadataApiAuthorization m,
-    MonadEventLogCleanup m
+    MonadEventLogCleanup m,
+    ProvidesNetwork m
   ) =>
   RQLMetadata ->
   m (HttpResponse EncJSON)
@@ -497,7 +523,6 @@ v1MetadataHandler query = Tracing.trace "Metadata" $ do
   (liftEitherM . authorizeV1MetadataApi query) =<< ask
   userInfo <- asks hcUser
   scRef <- asks (scCacheRef . hcServerCtx)
-  httpMgr <- asks (scManager . hcServerCtx)
   _sccSQLGenCtx <- asks (scSQLGenCtx . hcServerCtx)
   env <- asks (scEnvironment . hcServerCtx)
   instanceId <- asks (scInstanceId . hcServerCtx)
@@ -522,7 +547,6 @@ v1MetadataHandler query = Tracing.trace "Metadata" $ do
           logger
           instanceId
           userInfo
-          httpMgr
           serverConfigCtx
           scRef
           query
@@ -537,7 +561,8 @@ v2QueryHandler ::
     MonadReader HandlerCtx m,
     MonadMetadataStorage m,
     MonadResolveSource m,
-    EB.MonadQueryTags m
+    EB.MonadQueryTags m,
+    ProvidesNetwork m
   ) =>
   V2Q.RQLQuery ->
   m (HttpResponse EncJSON)
@@ -555,7 +580,6 @@ v2QueryHandler query = Tracing.trace "v2 Query" $ do
   userInfo <- asks hcUser
   scRef <- asks (scCacheRef . hcServerCtx)
   schemaCache <- liftIO $ fst <$> readSchemaCacheRef scRef
-  httpMgr <- asks (scManager . hcServerCtx)
   sqlGenCtx <- asks (scSQLGenCtx . hcServerCtx)
   instanceId <- asks (scInstanceId . hcServerCtx)
   env <- asks (scEnvironment . hcServerCtx)
@@ -581,7 +605,7 @@ v2QueryHandler query = Tracing.trace "v2 Query" $ do
           defaultMetadata
           checkFeatureFlag
-  V2Q.runQuery env instanceId userInfo schemaCache httpMgr serverConfigCtx query
+  V2Q.runQuery env instanceId userInfo schemaCache serverConfigCtx query

 v1Alpha1GQHandler ::
   ( MonadIO m,
@@ -594,7 +618,8 @@ v1Alpha1GQHandler ::
     MonadReader HandlerCtx m,
     MonadMetadataStorage m,
     EB.MonadQueryTags m,
-    HasResourceLimits m
+    HasResourceLimits m,
+    ProvidesNetwork m
   ) =>
   E.GraphQLQueryType ->
   GH.GQLBatchedReqs (GH.GQLReq GH.GQLQueryText) ->
@@ -619,7 +644,6 @@ mkExecutionContext ::
   ) =>
   m E.ExecutionCtx
 mkExecutionContext = do
-  manager <- asks (scManager . hcServerCtx)
   scRef <- asks (scCacheRef . hcServerCtx)
   (sc, scVer) <- liftIO $ readSchemaCacheRef scRef
   sqlGenCtx <- asks (scSQLGenCtx . hcServerCtx)
@@ -627,7 +651,7 @@ mkExecutionContext = do
   logger <- asks (_lsLogger . scLoggers . hcServerCtx)
   readOnlyMode <- asks (scEnableReadOnlyMode . hcServerCtx)
   prometheusMetrics <- asks (scPrometheusMetrics . hcServerCtx)
-  pure $ E.ExecutionCtx logger sqlGenCtx (lastBuiltSchemaCache sc) scVer manager enableAL readOnlyMode prometheusMetrics
+  pure $ E.ExecutionCtx logger sqlGenCtx (lastBuiltSchemaCache sc) scVer enableAL readOnlyMode prometheusMetrics

 v1GQHandler ::
   ( MonadIO m,
@@ -640,7 +664,8 @@ v1GQHandler ::
     MonadReader HandlerCtx m,
     MonadMetadataStorage m,
     EB.MonadQueryTags m,
-    HasResourceLimits m
+    HasResourceLimits m,
+    ProvidesNetwork m
   ) =>
   GH.GQLBatchedReqs (GH.GQLReq GH.GQLQueryText) ->
   m (HttpLogGraphQLInfo, HttpResponse EncJSON)
@@ -657,7 +682,8 @@ v1GQRelayHandler ::
     MonadReader HandlerCtx m,
     MonadMetadataStorage m,
     EB.MonadQueryTags m,
-    HasResourceLimits m
+    HasResourceLimits m,
+    ProvidesNetwork m
   ) =>
   GH.GQLBatchedReqs (GH.GQLReq GH.GQLQueryText) ->
   m (HttpLogGraphQLInfo, HttpResponse EncJSON)
@@ -749,7 +775,13 @@ renderHtmlTemplate template jVal =
 -- | Default implementation of the 'MonadConfigApiHandler'
 configApiGetHandler ::
   forall m.
-  (MonadIO m, MonadBaseControl IO m, UserAuthentication (Tracing.TraceT m), HttpLog m, Tracing.HasReporter m, HasResourceLimits m) =>
+  ( MonadIO m,
+    MonadBaseControl IO m,
+    UserAuthentication (Tracing.TraceT m),
+    HttpLog m,
+    Tracing.HasReporter m,
+    HasResourceLimits m
+  ) =>
   ServerCtx ->
   Maybe Text ->
   Spock.SpockCtxT () m ()
@@ -802,7 +834,8 @@ mkWaiApp ::
     MonadMetadataStorageQueryAPI m,
     MonadResolveSource m,
     EB.MonadQueryTags m,
-    MonadEventLogCleanup m
+    MonadEventLogCleanup m,
+    ProvidesNetwork m
   ) =>
   (ServerCtx -> Spock.SpockT m ()) ->
   -- | Set of environment variables for reference in UIs
@@ -840,15 +873,14 @@ mkWaiApp
   wsConnInitTimeout
   ekgStore = do
     let getSchemaCache' = first lastBuiltSchemaCache <$> readSchemaCacheRef schemaCacheRef
+        corsPolicy = mkDefaultCorsPolicy corsCfg

-    let corsPolicy = mkDefaultCorsPolicy corsCfg
+    httpManager <- askHTTPManager
     wsServerEnv <-
       WS.createWSServerEnv
         (_lsLogger scLoggers)
         scSubscriptionState
         getSchemaCache'
-        scManager
+        httpManager
         corsPolicy
         scSQLGenCtx
         scEnableReadOnlyMode
@@ -890,7 +922,8 @@ httpApp ::
     HasResourceLimits m,
     MonadResolveSource m,
     EB.MonadQueryTags m,
-    MonadEventLogCleanup m
+    MonadEventLogCleanup m,
+    ProvidesNetwork m
   ) =>
   (ServerCtx -> Spock.SpockT m ()) ->
   CorsConfig ->
@@ -970,7 +1003,8 @@ httpApp setupHook corsCfg serverCtx consoleStatus consoleAssetsDir consoleSentry
         GH.MonadExecuteQuery n,
         MonadMetadataStorage n,
         EB.MonadQueryTags n,
-        HasResourceLimits n
+        HasResourceLimits n,
+        ProvidesNetwork n
       ) =>
       RestRequest Spock.SpockMethod ->
      Handler (Tracing.TraceT n) (HttpLogGraphQLInfo, APIResp)
@@ -1138,7 +1172,14 @@ httpApp setupHook corsCfg serverCtx consoleStatus consoleAssetsDir consoleSentry
     spockAction ::
       forall a n.
-      (FromJSON a, MonadIO n, MonadBaseControl IO n, UserAuthentication (Tracing.TraceT n), HttpLog n, Tracing.HasReporter n, HasResourceLimits n) =>
+      ( FromJSON a,
+        MonadIO n,
+        MonadBaseControl IO n,
+        UserAuthentication (Tracing.TraceT n),
+        HttpLog n,
+        Tracing.HasReporter n,
+        HasResourceLimits n
+      ) =>
      (Bool -> QErr -> Value) ->
      (QErr -> QErr) ->
      APIHandler (Tracing.TraceT n) a ->
```

View File

```diff
@@ -32,6 +32,7 @@ import Hasura.Server.Limits
 import Hasura.Server.Logging
 import Hasura.Server.Name qualified as Name
 import Hasura.Server.Types
+import Hasura.Services.Network
 import Hasura.Session
 import Hasura.Tracing qualified as Tracing
 import Language.GraphQL.Draft.Syntax qualified as G
@@ -100,7 +101,8 @@ runCustomEndpoint ::
     GH.MonadExecuteQuery m,
     MonadMetadataStorage m,
     EB.MonadQueryTags m,
-    HasResourceLimits m
+    HasResourceLimits m,
+    ProvidesNetwork m
   ) =>
   Env.Environment ->
   E.ExecutionCtx ->
```

View File

```diff
@@ -39,8 +39,8 @@ import Hasura.Server.SchemaCacheRef
     withSchemaCacheUpdate,
   )
 import Hasura.Server.Types
+import Hasura.Services
 import Hasura.Session
-import Network.HTTP.Client qualified as HTTP
 import Refined (NonNegative, Refined, unrefine)

 data ThreadError
@@ -138,10 +138,10 @@ startSchemaSyncListenerThread logger pool instanceId interval metaVersionRef = d
 startSchemaSyncProcessorThread ::
   ( C.ForkableMonadIO m,
     MonadMetadataStorage m,
-    MonadResolveSource m
+    MonadResolveSource m,
+    ProvidesHasuraServices m
   ) =>
   Logger Hasura ->
-  HTTP.Manager ->
   STM.TMVar MetadataResourceVersion ->
   SchemaCacheRef ->
   InstanceId ->
@@ -150,7 +150,6 @@ startSchemaSyncProcessorThread ::
   ManagedT m Immortal.Thread
 startSchemaSyncProcessorThread
   logger
-  httpMgr
   schemaSyncEventRef
   cacheRef
   instanceId
@@ -159,7 +158,7 @@ startSchemaSyncProcessorThread
     -- Start processor thread
     processorThread <-
       C.forkManagedT "SchemeUpdate.processor" logger $
-        processor logger httpMgr schemaSyncEventRef cacheRef instanceId serverConfigCtx logTVar
+        processor logger schemaSyncEventRef cacheRef instanceId serverConfigCtx logTVar
     logThreadStarted logger instanceId TTProcessor processorThread
     pure processorThread
@@ -251,10 +250,10 @@ processor ::
   forall m void.
   ( C.ForkableMonadIO m,
     MonadMetadataStorage m,
-    MonadResolveSource m
+    MonadResolveSource m,
+    ProvidesHasuraServices m
   ) =>
   Logger Hasura ->
-  HTTP.Manager ->
   STM.TMVar MetadataResourceVersion ->
   SchemaCacheRef ->
   InstanceId ->
@@ -263,28 +262,24 @@ processor ::
   m void
 processor
   logger
-  httpMgr
   metaVersionRef
   cacheRef
   instanceId
   serverConfigCtx
   logTVar = forever $ do
     metaVersion <- liftIO $ STM.atomically $ STM.takeTMVar metaVersionRef
-    respErr <-
-      runExceptT $
-        refreshSchemaCache metaVersion instanceId logger httpMgr cacheRef TTProcessor serverConfigCtx logTVar
+    respErr <- refreshSchemaCache metaVersion instanceId logger cacheRef TTProcessor serverConfigCtx logTVar
     onLeft respErr (logError logger TTProcessor . TEQueryError)

 refreshSchemaCache ::
   ( MonadIO m,
     MonadBaseControl IO m,
     MonadMetadataStorage m,
-    MonadResolveSource m
+    MonadResolveSource m,
+    ProvidesNetwork m
   ) =>
   MetadataResourceVersion ->
   InstanceId ->
   Logger Hasura ->
-  HTTP.Manager ->
   SchemaCacheRef ->
   SchemaSyncThreadType ->
   ServerConfigCtx ->
@@ -294,7 +289,6 @@ refreshSchemaCache
   resourceVersion
   instanceId
   logger
-  httpManager
   cacheRef
   threadType
   serverConfigCtx
@@ -368,7 +362,7 @@ refreshSchemaCache
         pure (msg, cache)
       onLeft respErr (logError logger threadType . TEQueryError)
       where
-        runCtx = RunCtx adminUserInfo httpManager serverConfigCtx
+        runCtx = RunCtx adminUserInfo serverConfigCtx

 logInfo :: (MonadIO m) => Logger Hasura -> SchemaSyncThreadType -> Value -> m ()
 logInfo logger threadType val =
```

View File

@ -0,0 +1,22 @@
module Hasura.Services (module Services, ProvidesHasuraServices) where
import Hasura.Services.Network as Services
{- Note [Services]
Different editions of the GraphQL Engine use the same common core, but provide
different features. To avoid having logic deep within the common core that
decides whether a feature is active or not, we favour an "injection" approach:
the core of the engine delegates the implementation details of features /
external dependencies to class constraints, and it's the role of the top-level
caller to implement those constraints.
Those services are implemented on the base monad on which we run the engine. See
'PGMetadataStorageT' in Hasura/App.
-}
-- | A constraint alias that groups all services together.
type ProvidesHasuraServices m =
( ProvidesNetwork m
)
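The constraint alias is designed to grow: as more external dependencies become services, they get added to the tuple, and call sites that already say `ProvidesHasuraServices` need no change. A small self-contained illustration of the pattern, where `ProvidesLogging` is a hypothetical second service and `refreshRemoteSchemas` an illustrative consumer:

```haskell
{-# LANGUAGE ConstraintKinds #-}

-- Two toy services; 'ProvidesLogging' is hypothetical, standing in for
-- whatever dependency gets carved out of the core next.
class Monad m => ProvidesNetwork m where
  askHTTPManager :: m String -- 'String' stands in for 'HTTP.Manager'

class Monad m => ProvidesLogging m where
  logMsg :: String -> m ()

-- The alias groups every service; growing it here is invisible to callers.
type ProvidesHasuraServices m =
  ( ProvidesNetwork m,
    ProvidesLogging m
  )

-- Core code states one constraint instead of enumerating each service:
refreshRemoteSchemas :: ProvidesHasuraServices m => m String
refreshRemoteSchemas = do
  mgr <- askHTTPManager
  logMsg ("refreshing via " <> mgr)
  pure mgr

-- The top-level caller implements the services on the base monad:
instance ProvidesNetwork IO where askHTTPManager = pure "manager"

instance ProvidesLogging IO where logMsg = putStrLn
```

`ConstraintKinds` is what lets a plain `type` synonym stand for a tuple of class constraints.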

View File

@ -0,0 +1,37 @@
-- | Network service provider.
--
-- This module defines a Service (see Note [Services]) that provides access to
-- the network; for now, that only means providing a HTTP Manager. This is
-- consequentlt a simple analogue to `(MonadReader r m, Has Manager r)`, but
-- could be updated to either encompass other network utilities, or to provide a
-- more restricted interface if deemed useful. Alternatively this could be
-- removed altogether if all network calls were to be hidden behind more
-- specific services.
module Hasura.Services.Network
( ProvidesNetwork (..),
)
where
import Hasura.Prelude
import Hasura.Tracing
import Network.HTTP.Client qualified as HTTP
--------------------------------------------------------------------------------
class Monad m => ProvidesNetwork m where
askHTTPManager :: m HTTP.Manager
instance ProvidesNetwork m => ProvidesNetwork (ReaderT r m) where
askHTTPManager = lift askHTTPManager
instance (Monoid w, ProvidesNetwork m) => ProvidesNetwork (WriterT w m) where
askHTTPManager = lift askHTTPManager
instance ProvidesNetwork m => ProvidesNetwork (StateT s m) where
askHTTPManager = lift askHTTPManager
instance ProvidesNetwork m => ProvidesNetwork (ExceptT e m) where
askHTTPManager = lift askHTTPManager
instance ProvidesNetwork m => ProvidesNetwork (TraceT m) where
askHTTPManager = lift askHTTPManager
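How a base monad wires this up: the manager is created once at the root, stored in the reader environment, and core code only ever asks the monad for it. A runnable sketch under stated assumptions — `AppEnv`/`AppM` are illustrative names, and a stand-in `Manager` type replaces `HTTP.Manager` so the example runs without http-client:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Illustrative base-monad implementation of the network service.
-- 'Manager' is a stand-in for 'HTTP.Manager'; 'AppEnv' and 'AppM' are
-- hypothetical names, not the engine's real types.
import Control.Monad.Reader

newtype Manager = Manager {managerLabel :: String}

class Monad m => ProvidesNetwork m where
  askHTTPManager :: m Manager

-- The environment built once at startup, holding the one global manager.
data AppEnv = AppEnv {appHttpManager :: Manager}

newtype AppM a = AppM (ReaderT AppEnv IO a)
  deriving (Functor, Applicative, Monad, MonadIO)

instance ProvidesNetwork AppM where
  askHTTPManager = AppM (asks appHttpManager)

runAppM :: AppEnv -> AppM a -> IO a
runAppM env (AppM r) = runReaderT r env

-- Core code neither stores nor threads the manager explicitly:
describeCall :: ProvidesNetwork m => m String
describeCall = do
  mgr <- askHTTPManager
  pure ("issuing request with " <> managerLabel mgr)
```

Swapping editions then means swapping the base monad's instance, not touching any call site.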

View File

```diff
@@ -46,7 +46,6 @@ import Hasura.Tracing.TraceId
     traceIdFromHex,
     traceIdToHex,
   )
-import Network.HTTP.Client.Manager (HasHttpManagerM (..))
 import Network.HTTP.Client.Transformable qualified as HTTP
 import Refined (Positive, Refined, unrefine)
 import System.Random.Stateful qualified as Random
@@ -190,7 +189,18 @@ sampleOneInN denominator
 -- | The 'TraceT' monad transformer adds the ability to keep track of
 -- the current trace context.
 newtype TraceT m a = TraceT {unTraceT :: ReaderT TraceTEnv m a}
-  deriving (Functor, Applicative, Monad, MonadIO, MonadFix, MonadMask, MonadCatch, MonadThrow, MonadBase b, MonadBaseControl b)
+  deriving
+    ( Functor,
+      Applicative,
+      Monad,
+      MonadIO,
+      MonadFix,
+      MonadMask,
+      MonadCatch,
+      MonadThrow,
+      MonadBase b,
+      MonadBaseControl b
+    )

 instance MonadTrans TraceT where
   lift = TraceT . lift
@@ -206,9 +216,6 @@ instance MonadReader r m => MonadReader r (TraceT m) where
   ask = TraceT $ lift ask
   local f m = TraceT $ mapReaderT (local f) (unTraceT m)

-instance (HasHttpManagerM m) => HasHttpManagerM (TraceT m) where
-  askHttpManager = lift askHttpManager
-
 -- | Run an action in the 'TraceT' monad transformer.
 -- 'runTraceT' delimits a new trace with its root span, and the arguments
 -- specify a name and metadata for that span.
```

View File

@ -1,26 +0,0 @@
module Network.HTTP.Client.Manager
( HasHttpManagerM (..),
HTTP.Manager,
)
where
import Control.Monad.Except
import Control.Monad.Reader
import Control.Monad.State.Strict
import Control.Monad.Writer.Strict
import Network.HTTP.Client as HTTP
class (Monad m) => HasHttpManagerM m where
askHttpManager :: m HTTP.Manager
instance (HasHttpManagerM m) => HasHttpManagerM (ExceptT e m) where
askHttpManager = lift askHttpManager
instance (HasHttpManagerM m) => HasHttpManagerM (ReaderT r m) where
askHttpManager = lift askHttpManager
instance (HasHttpManagerM m) => HasHttpManagerM (StateT s m) where
askHttpManager = lift askHttpManager
instance (Monoid w, HasHttpManagerM m) => HasHttpManagerM (WriterT w m) where
askHttpManager = lift askHttpManager

View File

```diff
@@ -12,7 +12,8 @@ import Data.Time.Clock (getCurrentTime)
 import Data.URL.Template
 import Database.PG.Query qualified as PG
 import Hasura.App
-  ( PGMetadataStorageAppT (..),
+  ( AppContext (..),
+    PGMetadataStorageAppT (..),
     mkMSSQLSourceResolver,
     mkPgSourceResolver,
   )
@@ -91,12 +92,12 @@ main = do
         emptyMetadataDefaults
         (FF.checkFeatureFlag mempty)
       cacheBuildParams = CacheBuildParams httpManager (mkPgSourceResolver print) mkMSSQLSourceResolver serverConfigCtx
-      pgLogger = print
+      appContext = AppContext httpManager print pgPool

     run :: ExceptT QErr (PGMetadataStorageAppT CacheBuild) a -> IO a
     run =
       runExceptT
-        >>> flip runPGMetadataStorageAppT (pgPool, pgLogger)
+        >>> flip runPGMetadataStorageAppT appContext
         >>> runCacheBuild cacheBuildParams
         >>> runExceptT
         >=> flip onLeft printErrJExit
```

View File

```diff
@@ -30,8 +30,8 @@ import Hasura.Server.API.PGDump
 import Hasura.Server.Init (DowngradeOptions (..))
 import Hasura.Server.Migrate
 import Hasura.Server.Types
+import Hasura.Services.Network
 import Hasura.Session
-import Network.HTTP.Client.Manager qualified as HTTP
 import Test.Hspec.Core.Spec
 import Test.Hspec.Expectations.Lifted
@@ -48,10 +48,10 @@ newtype CacheRefT m a = CacheRefT {runCacheRefT :: MVar RebuildableSchemaCache -
       MonadBaseControl b,
       MonadTx,
       UserInfoM,
-      HTTP.HasHttpManagerM,
       HasServerConfigCtx,
       MonadMetadataStorage,
-      MonadMetadataStorageQueryAPI
+      MonadMetadataStorageQueryAPI,
+      ProvidesNetwork
     )
     via (ReaderT (MVar RebuildableSchemaCache) m)
@@ -74,9 +74,9 @@ instance
   ( MonadIO m,
     MonadBaseControl IO m,
     MonadError QErr m,
-    HTTP.HasHttpManagerM m,
     MonadResolveSource m,
-    HasServerConfigCtx m
+    HasServerConfigCtx m,
+    ProvidesNetwork m
   ) =>
   CacheRWM (CacheRefT m)
   where
@@ -104,11 +104,11 @@ suite ::
   ( MonadIO m,
     MonadError QErr m,
     MonadBaseControl IO m,
-    HTTP.HasHttpManagerM m,
     HasServerConfigCtx m,
     MonadResolveSource m,
     MonadMetadataStorageQueryAPI m,
-    MonadEventLogCleanup m
+    MonadEventLogCleanup m,
+    ProvidesNetwork m
   ) =>
   PostgresConnConfiguration ->
   PGExecCtx ->
```