graphql-engine/server/test-postgres/Test/Hasura/Server/MigrateSuite.hs
Antoine Leblanc 6e574f1bbe harmonize network manager handling
## Description

### I want to speak to the `Manager`

Oh boy. This PR is both fairly straightforward and overreaching, so let's break it down.

For most network access, we need a [`HTTP.Manager`](https://hackage.haskell.org/package/http-client-0.1.0.0/docs/Network-HTTP-Client-Manager.html). It is created only once, at the top level, when starting the engine, and is then threaded through the application to wherever we need to make a network call. As of main, the way we do this is not standardized: most of the GraphQL execution code passes it "manually" as a function argument throughout the code. We also have a custom monad constraint, `HasHttpManagerM`, that describes a monad's ability to provide a manager. And, finally, several parts of the code store the manager in some kind of argument structure, such as `RunT`'s `RunCtx`.

This PR's first goal is to harmonize all of this: we always create the manager at the root, and we already have it when we do our very first `runReaderT`. Wouldn't it make sense for the rest of the code to not manually pass it anywhere, to not store it anywhere, but to always rely on the current monad providing it? This is, in short, what this PR does: it implements a constraint on the base monads, so that they provide the manager, and removes most explicit passing from the code.
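
To make that concrete, here is a minimal, self-contained sketch of the pattern, written outside the real codebase: a service class that the root application monad satisfies from its reader environment, so call sites only declare a constraint instead of taking a `Manager` argument. The class and method names mirror `ProvidesNetwork` / `askHTTPManager`, but treat everything else (`App`, `ping`, `runApp`) as illustrative assumptions, not the engine's actual types.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

module NetworkServiceSketch where

import Control.Monad.IO.Class (MonadIO, liftIO)
import Control.Monad.Reader (MonadReader, ReaderT, ask, runReaderT)
import Network.HTTP.Client (Manager, defaultManagerSettings, httpLbs, newManager, parseRequest, responseStatus)
import Network.HTTP.Types.Status (statusCode)

-- The service: "this monad can hand me the process-wide manager".
-- The method name approximates what Hasura.Services.Network exposes.
class (Monad m) => ProvidesNetwork m where
  askHTTPManager :: m Manager

-- The application monad created at the root already carries the manager in
-- its reader environment, so it can provide it without any manual threading.
newtype App a = App (ReaderT Manager IO a)
  deriving (Functor, Applicative, Monad, MonadIO, MonadReader Manager)

instance ProvidesNetwork App where
  askHTTPManager = ask

-- Call sites no longer take a 'Manager' argument: they only state the
-- constraint, and the base monad supplies the implementation.
ping :: (ProvidesNetwork m, MonadIO m) => String -> m Int
ping url = do
  manager <- askHTTPManager
  request <- liftIO (parseRequest url)
  response <- liftIO (httpLbs request manager)
  pure (statusCode (responseStatus response))

-- The manager is created exactly once, when the application monad is run.
runApp :: App a -> IO a
runApp (App action) = do
  manager <- newManager defaultManagerSettings
  runReaderT action manager
```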

### First come, first served

This PR goes a bit further than that harmonization: it starts the process of implementing the "services-oriented architecture" roughly outlined in this [draft document](https://docs.google.com/document/d/1FAigqrST0juU1WcT4HIxJxe1iEBwTuBZodTaeUvsKqQ/edit?usp=sharing). Instead of keeping the existing `HasHttpManagerM`, this PR revamps it into the `ProvidesNetwork` service.

The idea, again, is to turn every "external" dependency of the engine, everything the core of the engine doesn't care about, into a "service". This lets us define clear APIs for features, choose different implementations depending on which version of the engine we're running, and harmonize our many scattered monadic constraints. That is why this service is called "Network": moving forward, we can refine it into the constraint that defines how all network communication operates, instead of relying on disparate class constraints or hardcoded decisions. A comment in the code clarifies this intent.
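
Continuing the hypothetical sketch above, the pay-off of phrasing this as a class is that only the base monad has to provide a real implementation; every transformer layer simply lifts. That is what lets different editions of the engine (or the test suite) plug in different behaviour without the core code noticing. These instances are illustrative, not copied from the codebase:

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Except (ExceptT)
import Control.Monad.Trans.Reader (ReaderT)

-- Transformer layers never implement the service themselves: they delegate
-- to the monad underneath, so the implementation is chosen exactly once,
-- at the root of the stack.
instance (ProvidesNetwork m) => ProvidesNetwork (ReaderT r m) where
  askHTTPManager = lift askHTTPManager

instance (ProvidesNetwork m) => ProvidesNetwork (ExceptT e m) where
  askHTTPManager = lift askHTTPManager
```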

### Side-effects? In my Haskell?

This PR also unavoidably touches other aspects of the codebase. One such example: it introduces `Hasura.App.AppContext` (named after `HasuraPro.Context.AppContext`), a name for the reader structure at the base level. It also transforms `Handler` from a type alias into a newtype: `Handler` is where we actually enforce HTTP limits, but as long as it wasn't a distinct type, any code path could simply do a `runExceptT $ runReaderT` and forget to enforce them.

(As a rule of thumb, I am starting to consider any straggling `runReaderT` or `runExceptT` as a code smell: we should not stack and unstack monads haphazardly, and every layer should be an opaque `newtype` with a corresponding run function.)
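
Here is a minimal sketch of why the newtype matters, under invented types: `HandlerCtx`, `ApiError`, `hcMaxBodySize`, and the limit check itself are hypothetical stand-ins for illustration, not the engine's actual definitions. The point is that once the constructor is hidden, the only way to run a `Handler` is through the one function that performs the enforcement.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

module HandlerSketch where

import Control.Monad.Except (ExceptT, runExceptT)
import Control.Monad.Reader (ReaderT, runReaderT)

-- Hypothetical stand-ins for the real context and error types.
newtype HandlerCtx = HandlerCtx {hcMaxBodySize :: Int}

data ApiError = PayloadTooLarge | OtherError String
  deriving (Show)

-- As a type alias, nothing stops a code path from unwrapping the stack with
-- a stray 'runExceptT' / 'runReaderT' and skipping the limit checks. As a
-- newtype whose constructor is not exported, the only way in or out is
-- 'runHandler', which is exactly where the enforcement lives.
newtype Handler m a = Handler (ReaderT HandlerCtx (ExceptT ApiError m) a)
  deriving (Functor, Applicative, Monad)

runHandler :: (Monad m) => HandlerCtx -> Int -> Handler m a -> m (Either ApiError a)
runHandler ctx requestSize (Handler action)
  | requestSize > hcMaxBodySize ctx = pure (Left PayloadTooLarge)
  | otherwise = runExceptT (runReaderT action ctx)
```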

## Further work

In several places, I have left TODOs where I encountered things that suggest further, unrelated cleanups. I'll write down the follow-up steps, either in the aforementioned document or on Slack. In short, and in approximate order, we could:

- delete `ExecutionCtx` as it is only a subset of `ServerCtx`, and remove one more `runReaderT` call
- delete `ServerConfigCtx` as it is only a subset of `ServerCtx`, and remove it from `RunCtx`
- remove `ServerCtx` from `HandlerCtx`, and make it part of `AppContext`, or even make it the `AppContext` altogether (since, at least in the OSS version, `AppContext` is again only a subset of it)
- remove `CacheBuildParams` and `CacheBuild` altogether, as they're just a distinct stack, a `ReaderT` on top of `IO`, that contains, you guessed it, the same things as `ServerCtx`
- move `RunT` out of `RQL.Types` and rename it, since after the previous cleanups **it only contains `UserInfo`**; it could be bundled with the authentication service and made a small implementation detail of `Hasura.Server.Auth`
- rename `PGMetadataStorageT` to something more accurate, such as `App`, and enforce its `IO` base

This would significantly simplify our complex stack. From there, or in parallel, we can start turning existing dependencies into services. For the purpose of supporting the read replicas entitlement, we could move `MonadResolveSource` to a `SourceResolver` service, as attempted in #7653, and transform `UserAuthenticationM` into an `Authentication` service.

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7736
GitOrigin-RevId: 68cce710eb9e7d752bda1ba0c49541d24df8209f
2023-02-22 15:55:54 +00:00


{-# LANGUAGE UndecidableInstances #-}
module Test.Hasura.Server.MigrateSuite (CacheRefT (..), suite) where
import Control.Concurrent.MVar.Lifted
import Control.Monad.Morph
import Control.Monad.Trans.Control (MonadBaseControl)
import Control.Natural ((:~>) (..))
import Data.Aeson (encode)
import Data.ByteString.Lazy.UTF8 qualified as LBS
import Data.Environment qualified as Env
import Data.Time.Clock (getCurrentTime)
import Database.PG.Query qualified as PG
import Hasura.Backends.Postgres.Connection
import Hasura.Base.Error
import Hasura.EncJSON
import Hasura.Logging
import Hasura.Metadata.Class
import Hasura.Prelude
import Hasura.RQL.DDL.EventTrigger (MonadEventLogCleanup (..))
import Hasura.RQL.DDL.Metadata (ClearMetadata (..), runClearMetadata)
import Hasura.RQL.DDL.Schema
import Hasura.RQL.DDL.Schema.Cache.Common
import Hasura.RQL.DDL.Schema.LegacyCatalog (recreateSystemMetadata)
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.SchemaCache
import Hasura.RQL.Types.SchemaCache.Build
import Hasura.RQL.Types.Source
import Hasura.Server.API.PGDump
import Hasura.Server.Init (DowngradeOptions (..))
import Hasura.Server.Migrate
import Hasura.Server.Types
import Hasura.Services.Network
import Hasura.Session
import Test.Hspec.Core.Spec
import Test.Hspec.Expectations.Lifted
-- -- NOTE: downgrade test disabled for now (see #5273)

-- | A test-only transformer that gives the underlying monad access to a
-- mutable 'RebuildableSchemaCache', deriving all of its capabilities via an
-- equivalent 'ReaderT' stack.
newtype CacheRefT m a = CacheRefT {runCacheRefT :: MVar RebuildableSchemaCache -> m a}
  deriving
    ( Functor,
      Applicative,
      Monad,
      MonadIO,
      MonadError e,
      MonadBase b,
      MonadBaseControl b,
      MonadTx,
      UserInfoM,
      HasServerConfigCtx,
      MonadMetadataStorage,
      MonadMetadataStorageQueryAPI,
      ProvidesNetwork
    )
    via (ReaderT (MVar RebuildableSchemaCache) m)

instance MonadTrans CacheRefT where
  lift = CacheRefT . const

instance MFunctor CacheRefT where
  hoist f (CacheRefT m) = CacheRefT (f . m)

-- instance (MonadBase IO m) => TableCoreInfoRM 'Postgres (CacheRefT m)

instance (MonadBase IO m) => CacheRM (CacheRefT m) where
  askSchemaCache = CacheRefT (fmap lastBuiltSchemaCache . readMVar)

instance (MonadEventLogCleanup m) => MonadEventLogCleanup (CacheRefT m) where
  runLogCleaner conf = lift $ runLogCleaner conf
  generateCleanupSchedules sourceInfo triggerName cleanupConfig = lift $ generateCleanupSchedules sourceInfo triggerName cleanupConfig
  updateTriggerCleanupSchedules logger oldSources newSources schemaCache = lift $ updateTriggerCleanupSchedules logger oldSources newSources schemaCache

instance
  ( MonadIO m,
    MonadBaseControl IO m,
    MonadError QErr m,
    MonadResolveSource m,
    HasServerConfigCtx m,
    ProvidesNetwork m
  ) =>
  CacheRWM (CacheRefT m)
  where
  buildSchemaCacheWithOptions reason invalidations metadata =
    CacheRefT $ flip modifyMVar \schemaCache -> do
      ((), cache, _) <- runCacheRWT schemaCache (buildSchemaCacheWithOptions reason invalidations metadata)
      pure (cache, ())

  setMetadataResourceVersionInSchemaCache resourceVersion =
    CacheRefT $ flip modifyMVar \schemaCache -> do
      ((), cache, _) <- runCacheRWT schemaCache (setMetadataResourceVersionInSchemaCache resourceVersion)
      pure (cache, ())

instance Example (MetadataT (CacheRefT m) ()) where
  type Arg (MetadataT (CacheRefT m) ()) = MetadataT (CacheRefT m) :~> IO
  evaluateExample m params action = evaluateExample (action (\x -> x $$ m)) params ($ ())

type SpecWithCache m = SpecWith (MetadataT (CacheRefT m) :~> IO)

singleTransaction :: MetadataT (CacheRefT m) () -> MetadataT (CacheRefT m) ()
singleTransaction = id

suite ::
  forall m.
  ( MonadIO m,
    MonadError QErr m,
    MonadBaseControl IO m,
    HasServerConfigCtx m,
    MonadResolveSource m,
    MonadMetadataStorageQueryAPI m,
    MonadEventLogCleanup m,
    ProvidesNetwork m
  ) =>
  PostgresConnConfiguration ->
  PGExecCtx ->
  PG.ConnInfo ->
  SpecWithCache m
suite srcConfig pgExecCtx pgConnInfo = do
  let logger :: Logger Hasura = Logger $ \l -> do
        let (logLevel, logType :: EngineLogType Hasura, logDetail) = toEngineLog l
        t <- liftIO $ getFormattedTime Nothing
        liftIO $ putStrLn $ LBS.toString $ encode $ EngineLog t logLevel logType logDetail

      migrateCatalogAndBuildCache env time = do
        (migrationResult, metadata) <- runTx' pgExecCtx $ migrateCatalog (Just srcConfig) (ExtensionsSchema "public") MaintenanceModeDisabled time
        (,migrationResult) <$> runCacheBuildM (buildRebuildableSchemaCache logger env metadata)

      dropAndInit env time = lift $
        CacheRefT $ flip modifyMVar \_ ->
          (runTx' pgExecCtx dropHdbCatalogSchema) *> (migrateCatalogAndBuildCache env time)

      downgradeTo v = runTx' pgExecCtx . downgradeCatalog (Just srcConfig) DowngradeOptions {dgoDryRun = False, dgoTargetVersion = v}
describe "migrateCatalog" $ do
it "initializes the catalog" $ singleTransaction do
env <- liftIO Env.getEnvironment
time <- liftIO getCurrentTime
dropAndInit env time `shouldReturn` MRInitialized
it "is idempotent" \(NT transact) -> do
let dumpSchema = execPGDump (PGDumpReqBody defaultSource ["--schema-only"] False) pgConnInfo
env <- Env.getEnvironment
time <- getCurrentTime
transact (dropAndInit env time) `shouldReturn` MRInitialized
firstDump <- transact dumpSchema
transact (dropAndInit env time) `shouldReturn` MRInitialized
secondDump <- transact dumpSchema
secondDump `shouldBe` firstDump
it "supports upgrades after downgrade to version 12" \(NT transact) -> do
let upgradeToLatest env time = lift $
CacheRefT $ flip modifyMVar \_ ->
migrateCatalogAndBuildCache env time
env <- Env.getEnvironment
time <- getCurrentTime
transact (dropAndInit env time) `shouldReturn` MRInitialized
downgradeResult <- (transact . lift) (downgradeTo "12" time)
downgradeResult `shouldSatisfy` \case
MRMigrated {} -> True
_ -> False
transact (upgradeToLatest env time) `shouldReturn` MRMigrated "12"
-- -- NOTE: this has been problematic in CI and we're not quite sure how to
-- -- make this work reliably given the way we do releases and create
-- -- beta tags and so on. Phil and Alexis are okay just commenting
-- -- this until we need revisit. See #5273:
-- it "supports downgrades for every Git tag" $ singleTransaction do
-- gitOutput <- liftIO $ readProcess "git" ["log", "--no-walk", "--tags", "--pretty=%D"] ""
-- let filterOldest = filter (not . isPrefixOf "v1.0.0-alpha")
-- extractTagName = Safe.headMay . splitOn ", " <=< stripPrefix "tag: "
-- supportedDowngrades = sort (map fst downgradeShortcuts)
-- gitTags = (sort . filterOldest . mapMaybe extractTagName . tail . lines) gitOutput
-- for_ gitTags \t ->
-- t `shouldSatisfy` (`elem` supportedDowngrades)
describe "recreateSystemMetadata" $ do
let dumpMetadata = execPGDump (PGDumpReqBody defaultSource ["--schema=hdb_catalog"] False) pgConnInfo
it "is idempotent" \(NT transact) -> do
env <- Env.getEnvironment
time <- getCurrentTime
transact (dropAndInit env time) `shouldReturn` MRInitialized
-- Downgrade to catalog version before metadata separation
downgradeResult <- (transact . lift) (downgradeTo "42" time)
downgradeResult `shouldSatisfy` \case
MRMigrated {} -> True
_ -> False
firstDump <- transact dumpMetadata
transact (runTx' pgExecCtx recreateSystemMetadata)
secondDump <- transact dumpMetadata
secondDump `shouldBe` firstDump
it "does not create any objects affected by ClearMetadata" \(NT transact) -> do
env <- Env.getEnvironment
time <- getCurrentTime
transact (dropAndInit env time) `shouldReturn` MRInitialized
firstDump <- transact dumpMetadata
encJToBS <$> transact (flip runReaderT logger $ runClearMetadata ClearMetadata) `shouldReturn` encJToBS successMsg
secondDump <- transact dumpMetadata
secondDump `shouldBe` firstDump

runTx' ::
  (MonadError QErr m, MonadIO m, MonadBaseControl IO m) =>
  PGExecCtx ->
  PG.TxET QErr m a ->
  m a
runTx' pgExecCtx = liftEitherM . runExceptT . _pecRunTx pgExecCtx (PGExecCtxInfo (Tx PG.ReadWrite Nothing) InternalRawQuery)