graphql-engine/server/src-lib/Hasura/Server/SchemaUpdate.hs

module Hasura.Server.SchemaUpdate
  ( startSchemaSyncListenerThread,
    startSchemaSyncProcessorThread,
    SchemaSyncThreadType (..),
  )
where

import Control.Concurrent.Extended qualified as C
import Control.Concurrent.STM qualified as STM
import Control.Immortal qualified as Immortal
import Control.Monad.Loops qualified as L
import Control.Monad.Trans.Control (MonadBaseControl)
import Control.Monad.Trans.Managed (ManagedT)
import Data.Aeson
import Data.Aeson.Casing
import Data.HashMap.Strict qualified as HashMap
import Data.HashSet qualified as HS
import Data.Text qualified as T
import Database.PG.Query qualified as PG
import Hasura.App.State
import Hasura.Base.Error
import Hasura.Logging
import Hasura.Metadata.Class
import Hasura.Prelude
import Hasura.RQL.DDL.Schema (runCacheRWT)
import Hasura.RQL.DDL.Schema.Cache.Config
import Hasura.RQL.DDL.Schema.Catalog
import Hasura.RQL.Types.BackendType (BackendType (..))
import Hasura.RQL.Types.SchemaCache
import Hasura.RQL.Types.SchemaCache.Build
import Hasura.RQL.Types.Source
import Hasura.SQL.BackendMap qualified as BackendMap
import Hasura.Server.AppStateRef
  ( AppStateRef,
    getAppContext,
    getRebuildableSchemaCacheWithVersion,
    withSchemaCacheUpdate,
  )
import Hasura.Server.Logging
import Hasura.Server.Types
import Hasura.Services
import Refined (NonNegative, Refined, unrefine)
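
-- | Errors that the schema sync threads may report in their logs.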
data ThreadError
  = TEPayloadParse !Text
  | TEQueryError !QErr
  deriving (Generic)
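
-- With the options below, constructor tags drop the "TE" prefix and are
-- snake-cased, e.g. @TEPayloadParse "bad json"@ encodes (illustratively) as
-- @{"type": "payload_parse", "info": "bad json"}@.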
instance ToJSON ThreadError where
  toJSON =
    genericToJSON
      defaultOptions
        { constructorTagModifier = snakeCase . drop 2,
          sumEncoding = TaggedObject "type" "info"
        }
  toEncoding =
    genericToEncoding
      defaultOptions
        { constructorTagModifier = snakeCase . drop 2,
          sumEncoding = TaggedObject "type" "info"
        }

logThreadStarted ::
  (MonadIO m) =>
  Logger Hasura ->
  InstanceId ->
  SchemaSyncThreadType ->
  Immortal.Thread ->
  m ()
logThreadStarted logger instanceId threadType thread =
  let msg = tshow threadType <> " thread started"
   in unLogger logger
        $ StartupLog LevelInfo "schema-sync"
        $ object
          [ "instance_id" .= getInstanceId instanceId,
            "thread_id" .= show (Immortal.threadId thread),
            "message" .= msg
          ]

{- Note [Schema Cache Sync]
~~~~~~~~~~~~~~~~~~~~~~~~~~~

When multiple graphql-engine instances are serving off the same metadata
storage, each instance should keep its schema cache in sync with the latest
metadata. All instances should somehow communicate with each other whenever a
request modifies the metadata.

We track the metadata schema version in postgres and poll for this value in a
thread. When the schema version has changed, the instance will update its
local metadata schema and remove any invalidated schema cache data.

The following steps take place when an API request is made to update metadata:

1. After handling the request, we insert the new metadata schema json into a
   postgres table along with a schema version.

2. On startup, before initialising the schema cache, an async thread is
   invoked to continuously poll the Postgres notifications table for the
   latest metadata schema version. The schema version is pushed to a shared
   `TMVar`.

3. Before starting the API server, another async thread is invoked to process
   events pushed by the listener thread via the `TMVar`. If the instance's
   schema version is not current with the freshly updated TMVar version, then
   we update the local metadata.

Why do we need two threads, when we could capture and reload the schema cache
in a single thread? A single async thread would have to be invoked after
initialising the schema cache, and we might lose events published after
schema cache init but before the thread is invoked; the schema cache would
then be out of sync with the metadata. So we use two threads: one starts
listening before schema cache init, and the other processes events after it.

What happens if the listen connection to Postgres is lost? The listener
thread keeps trying to re-establish the connection to Postgres every second.
Once the connection is established, it pushes an @'SSEListenStart' event with
the time. We cannot be sure about metadata-modifying requests made in the
meantime, so we reload the schema cache unconditionally if listening started
after the schema cache init start time.
-}
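
-- A rough wiring sketch (illustrative, not the actual call sites, which live
-- in the engine's startup code; it assumes the TMVar created here is the one
-- stored in 'AppEnv' as 'appEnvMetadataVersionRef'):
--
-- > metaVersionRef <- liftIO STM.newEmptyTMVarIO
-- > _listener <- startSchemaSyncListenerThread logger pool instanceId interval metaVersionRef
-- > -- ... initialise the schema cache ...
-- > _processor <- startSchemaSyncProcessorThread appStateRef logTVar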

-- | An async thread which listens to Postgres notify to enable schema syncing
-- See Note [Schema Cache Sync]
startSchemaSyncListenerThread ::
  (C.ForkableMonadIO m) =>
  Logger Hasura ->
  PG.PGPool ->
  InstanceId ->
  Refined NonNegative Milliseconds ->
  STM.TMVar MetadataResourceVersion ->
  ManagedT m (Immortal.Thread)
startSchemaSyncListenerThread logger pool instanceId interval metaVersionRef = do
  -- Start listener thread
  listenerThread <-
    C.forkManagedT "SchemeUpdate.listener" logger
      $ listener logger pool metaVersionRef (unrefine interval)
  logThreadStarted logger instanceId TTListener listenerThread
  pure listenerThread

-- | An async thread which processes the schema sync events
-- See Note [Schema Cache Sync]
startSchemaSyncProcessorThread ::
  ( C.ForkableMonadIO m,
    HasAppEnv m,
    HasCacheStaticConfig m,
    MonadMetadataStorage m,
    MonadResolveSource m,
    ProvidesNetwork m
  ) =>
  AppStateRef impl ->
  STM.TVar Bool ->
  ManagedT m Immortal.Thread
startSchemaSyncProcessorThread appStateRef logTVar = do
  AppEnv {..} <- lift askAppEnv
  let logger = _lsLogger appEnvLoggers
  -- Start processor thread
  processorThread <-
    C.forkManagedT "SchemeUpdate.processor" logger
      $ processor appEnvMetadataVersionRef appStateRef logTVar
  logThreadStarted logger appEnvInstanceId TTProcessor processorThread
  pure processorThread

-- TODO: This is also defined in multitenant, consider putting it in a library somewhere
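-- | Overwrite the TMVar unconditionally: any pending value is drained first,
-- so the write never blocks and the var always holds the most recent value
-- (e.g. @forcePut v 1 >> forcePut v 2@ leaves @2@ in @v@).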
forcePut :: STM.TMVar a -> a -> IO ()
forcePut v a = STM.atomically $ STM.tryTakeTMVar v >> STM.putTMVar v a
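
-- | Fetch the latest 'MetadataResourceVersion' from the catalog and, on
-- success, force-push it into the shared TMVar for the processor thread.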
schemaVersionCheckHandler ::
  PG.PGPool -> STM.TMVar MetadataResourceVersion -> IO (Either QErr ())
schemaVersionCheckHandler pool metaVersionRef =
  runExceptT
    ( PG.runTx pool (PG.RepeatableRead, Nothing)
        $ fetchMetadataResourceVersionFromCatalog
    )
    >>= \case
      Right version -> Right <$> forcePut metaVersionRef version
      Left err -> pure $ Left err

data ErrorState = ErrorState
  { _esLastErrorSeen :: !(Maybe QErr),
    _esLastMetadataVersion :: !(Maybe MetadataResourceVersion)
  }
  deriving (Eq)

-- NOTE: The ErrorState type is to be used mainly for the `listener` method below.
-- This will help prevent logging the same error with the same MetadataResourceVersion
-- multiple times consecutively. When the `listener` is in an error state, we don't
-- log the next error until the resource version has changed/updated.
defaultErrorState :: ErrorState
defaultErrorState = ErrorState Nothing Nothing

-- | NOTE: this can be updated to use lenses
updateErrorInState :: ErrorState -> QErr -> MetadataResourceVersion -> ErrorState
updateErrorInState es qerr mrv =
  es
    { _esLastErrorSeen = Just qerr,
      _esLastMetadataVersion = Just mrv
    }

isInErrorState :: ErrorState -> Bool
isInErrorState es =
  (isJust . _esLastErrorSeen) es && (isJust . _esLastMetadataVersion) es
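
-- | Log an error only when it is new, i.e. when either the error itself or
-- the metadata resource version differs from the last one recorded in the
-- 'ErrorState'.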
toLogError :: ErrorState -> QErr -> MetadataResourceVersion -> Bool
toLogError es qerr mrv = not $ isQErrLastSeen || isMetadataResourceVersionLastSeen
  where
    isQErrLastSeen = case _esLastErrorSeen es of
      Just lErrS -> lErrS == qerr
      Nothing -> False
    isMetadataResourceVersionLastSeen = case _esLastMetadataVersion es of
      Just lMRV -> lMRV == mrv
      Nothing -> False

-- | An IO action that listens to postgres for events and pushes them to a Queue, in a loop forever.
listener ::
  (MonadIO m) =>
  Logger Hasura ->
  PG.PGPool ->
  STM.TMVar MetadataResourceVersion ->
  Milliseconds ->
  m void
listener logger pool metaVersionRef interval = L.iterateM_ listenerLoop defaultErrorState
  where
    listenerLoop errorState = do
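      -- Grab the last pushed version, if the processor has not already
      -- consumed it, to tag the error logs below; on success the check
      -- handler re-pushes the latest version for the processor.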
      mrv <- liftIO $ STM.atomically $ STM.tryTakeTMVar metaVersionRef
      resp <- liftIO $ schemaVersionCheckHandler pool metaVersionRef
      let metadataVersion = fromMaybe initialResourceVersion mrv
      nextErr <- case resp of
        Left respErr -> do
          if toLogError errorState respErr metadataVersion
            then do
              logError logger TTListener $ TEQueryError respErr
              logInfo logger TTListener $ object ["metadataResourceVersion" .= toJSON metadataVersion]
              pure $ updateErrorInState errorState respErr metadataVersion
            else pure errorState
        Right _ -> do
          when (isInErrorState errorState)
            $ logInfo logger TTListener
            $ object ["message" .= ("SchemaSync Restored..." :: Text)]
          pure defaultErrorState
      liftIO $ C.sleep $ milliseconds interval
      pure nextErr

-- | An IO action that processes events from Queue, in a loop forever.
processor ::
  forall m void impl.
  ( C.ForkableMonadIO m,
    HasAppEnv m,
    HasCacheStaticConfig m,
    MonadMetadataStorage m,
    MonadResolveSource m,
    ProvidesNetwork m
  ) =>
  STM.TMVar MetadataResourceVersion ->
  AppStateRef impl ->
  STM.TVar Bool ->
  m void
processor
  metaVersionRef
  appStateRef
  logTVar = forever do
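    -- Block until the listener pushes a new resource version, then refresh.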
    metaVersion <- liftIO $ STM.atomically $ STM.takeTMVar metaVersionRef
    refreshSchemaCache metaVersion appStateRef TTProcessor logTVar
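
-- | Fetch the latest metadata from storage and rebuild the schema cache when
-- the engine's resource version is behind the one we were notified about.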
refreshSchemaCache ::
  ( MonadIO m,
    MonadBaseControl IO m,
    HasAppEnv m,
    HasCacheStaticConfig m,
    MonadMetadataStorage m,
    MonadResolveSource m,
    ProvidesNetwork m
  ) =>
  MetadataResourceVersion ->
  AppStateRef impl ->
  SchemaSyncThreadType ->
  STM.TVar Bool ->
  m ()
refreshSchemaCache
  resourceVersion
  appStateRef
  threadType
  logTVar = do
    AppEnv {..} <- askAppEnv
    let logger = _lsLogger appEnvLoggers
    respErr <- runExceptT
      $ withSchemaCacheUpdate appStateRef logger (Just logTVar)
      $ do
        rebuildableCache <- liftIO $ getRebuildableSchemaCacheWithVersion appStateRef
        appContext <- liftIO $ getAppContext appStateRef
        let dynamicConfig = buildCacheDynamicConfig appContext
        -- the instance which triggered the schema sync event would have stored
        -- the source introspection, hence we can ignore it here
        (msg, cache, _, _sourcesIntrospection, _schemaRegistryAction) <-
          runCacheRWT dynamicConfig rebuildableCache $ do
            schemaCache <- askSchemaCache
            let engineResourceVersion = scMetadataResourceVersion schemaCache
            unless (engineResourceVersion == resourceVersion) $ do
              logInfo logger threadType
                $ String
                $ T.unwords
                  [ "Received metadata resource version:",
                    showMetadataResourceVersion resourceVersion <> ",",
                    "different from the current engine resource version:",
                    showMetadataResourceVersion engineResourceVersion <> ".",
                    "Trying to update the schema cache."
                  ]
              MetadataWithResourceVersion metadata latestResourceVersion <- liftEitherM fetchMetadata
              logInfo logger threadType
                $ String
                $ T.unwords
                  [ "Fetched metadata with resource version:",
                    showMetadataResourceVersion latestResourceVersion
                  ]
              notifications <- liftEitherM $ fetchMetadataNotifications engineResourceVersion appEnvInstanceId
              case notifications of
                [] -> do
                  logInfo logger threadType
                    $ String
                    $ T.unwords
                      [ "Fetched metadata notifications and received no notifications. Not updating the schema cache.",
                        "Only setting resource version:",
                        showMetadataResourceVersion latestResourceVersion,
                        "in schema cache"
                      ]
                  setMetadataResourceVersionInSchemaCache latestResourceVersion
                _ -> do
                  logInfo logger threadType
                    $ String "Fetched metadata notifications and received some notifications. Updating the schema cache."
                  let cacheInvalidations =
                        if any ((== (engineResourceVersion + 1)) . fst) notifications
                          then -- If (engineResourceVersion + 1) is in the list of notifications
                          -- then we know that we haven't missed any.
                            mconcat $ snd <$> notifications
                          else -- Otherwise we may have missed some notifications, so we need
                          -- to invalidate the whole cache.
                            CacheInvalidations
                              { ciMetadata = True,
                                ciRemoteSchemas = HS.fromList $ getAllRemoteSchemas schemaCache,
                                ciSources = HS.fromList $ HashMap.keys $ scSources schemaCache,
                                ciDataConnectors =
                                  maybe mempty (HS.fromList . HashMap.keys . unBackendInfoWrapper)
                                    $ BackendMap.lookup @'DataConnector
                                    $ scBackendCache schemaCache
                              }
                  buildSchemaCacheWithOptions CatalogSync cacheInvalidations metadata (Just latestResourceVersion)
                  setMetadataResourceVersionInSchemaCache latestResourceVersion
                  logInfo logger threadType
                    $ String
                    $ "Schema cache updated with resource version: "
                    <> showMetadataResourceVersion latestResourceVersion
        pure (msg, cache)
    onLeft respErr (logError logger threadType . TEQueryError)

logInfo :: (MonadIO m) => Logger Hasura -> SchemaSyncThreadType -> Value -> m ()
logInfo logger threadType val =
  unLogger logger
    $ SchemaSyncLog LevelInfo threadType val

logError :: (MonadIO m, ToJSON a) => Logger Hasura -> SchemaSyncThreadType -> a -> m ()
logError logger threadType err =
  unLogger logger
    $ SchemaSyncLog LevelError threadType
    $ object ["error" .= toJSON err]