{-# LANGUAGE QuasiQuotes #-}
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE ViewPatterns #-}
harmonize network manager handling
## Description
### I want to speak to the `Manager`
Oh boy. This PR is both fairly straightforward and overreaching, so let's break it down.
For most network access, we need a [`HTTP.Manager`](https://hackage.haskell.org/package/http-client-0.1.0.0/docs/Network-HTTP-Client-Manager.html). It is created only once, at the top level, when starting the engine, and is then threaded through the application to wherever we need to make a network call. As of main, the way we do this is not standardized: most of the GraphQL execution code passes it "manually" as a function argument throughout the code. We also have a custom monad constraint, `HasHttpManagerM`, that describes a monad's ability to provide a manager. And, finally, several parts of the code store the manager in some kind of argument structure, such as `RunT`'s `RunCtx`.
This PR's first goal is to harmonize all of this: we always create the manager at the root, and we already have it when we do our very first `runReaderT`. Wouldn't it make sense for the rest of the code to not manually pass it anywhere, to not store it anywhere, but to always rely on the current monad providing it? This is, in short, what this PR does: it implements a constraint on the base monads, so that they provide the manager, and removes most explicit passing from the code.
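The shape of that constraint-based approach can be sketched in miniature. All names below are illustrative stand-ins, not the engine's actual types: the real service class lives in `Hasura.Services` and provides a `Network.HTTP.Client.Manager`.

```haskell
-- A stand-in for Network.HTTP.Client.Manager.
newtype Manager = Manager {managerName :: String}

-- The constraint: any monad that satisfies it can produce the manager.
class Monad m => ProvidesNetwork m where
  askHTTPManager :: m Manager

-- A toy base monad that carries the manager created once at the root.
newtype App a = App {runApp :: Manager -> IO a}

instance Functor App where
  fmap f (App g) = App (fmap f . g)

instance Applicative App where
  pure x = App (\_ -> pure x)
  App f <*> App x = App (\mgr -> f mgr <*> x mgr)

instance Monad App where
  App x >>= f = App (\mgr -> x mgr >>= \a -> runApp (f a) mgr)

instance ProvidesNetwork App where
  askHTTPManager = App pure

-- Deep inside the call tree: no explicit manager argument anywhere.
doNetworkCall :: ProvidesNetwork m => m String
doNetworkCall = managerName <$> askHTTPManager

main :: IO ()
main = do
  result <- runApp doNetworkCall (Manager "root-manager")
  putStrLn result
```

The point is that `doNetworkCall` never mentions the manager in its signature; swapping the base monad (say, OSS vs Pro) only requires a different instance, not a change to every call site.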
### First come, first served
One way this PR goes a tiny bit further than "just" doing the aforementioned harmonization is that it starts the process of implementing the "Services oriented architecture" roughly outlined in this [draft document](https://docs.google.com/document/d/1FAigqrST0juU1WcT4HIxJxe1iEBwTuBZodTaeUvsKqQ/edit?usp=sharing). Instead of using the existing `HasHttpManagerM`, this PR revamps it into the `ProvidesNetwork` service.
The idea is, again, that we should make all "external" dependencies of the engine, all things that the core of the engine doesn't care about, a "service". This allows us to define clear APIs for features, to choose different implementations based on which version of the engine we're running, harmonizes our many scattered monadic constraints... Which is why this service is called "Network": we can refine it, moving forward, to be the constraint that defines how all network communication is to operate, instead of relying on disparate class constraints or hardcoded decisions. A comment in the code clarifies this intent.
### Side-effects? In my Haskell?
This PR also unavoidably touches some other aspects of the codebase. One such example: it introduces `Hasura.App.AppContext`, named after `HasuraPro.Context.AppContext`: a name for the reader structure at the base level. It also transforms `Handler` from a type alias to a newtype, as `Handler` is where we actually enforce HTTP limits; but without `Handler` being a distinct type, any code path could simply do a `runExceptT $ runReader` and forget to enforce them.
(As a rule of thumb, I am starting to consider any straggling `runReaderT` or `runExceptT` as a code smell: we should not stack / unstack monads haphazardly, and every layer should be an opaque `newtype` with a corresponding run function.)
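A minimal, self-contained sketch of that rule of thumb, with invented names (the real `Handler` wraps a `ReaderT`/`ExceptT` stack over `HandlerCtx`): the newtype's constructor stays private to the module, so the only way to run a layer is through the one run function that enforces the check.

```haskell
-- Illustrative context: a fake "HTTP limit" to enforce.
data Ctx = Ctx {maxBodySize :: Int}

-- The opaque layer. In a real module, the constructor is NOT exported,
-- so callers cannot sidestep runHandler with an ad-hoc runExceptT.
newtype Handler a = Handler (Ctx -> IO (Either String a))

-- The single run function: limits are enforced exactly once, here.
runHandler :: Ctx -> Int -> Handler a -> IO (Either String a)
runHandler ctx bodySize (Handler f)
  | bodySize > maxBodySize ctx = pure (Left "request too large")
  | otherwise = f ctx

okHandler :: Handler String
okHandler = Handler (\_ -> pure (Right "handled"))

main :: IO ()
main = do
  r1 <- runHandler (Ctx 1024) 100 okHandler
  print r1
  r2 <- runHandler (Ctx 1024) 4096 okHandler
  print r2
```

With a bare `ReaderT`/`ExceptT` alias, nothing stops a code path from unstacking the monad itself and skipping the size check; the opaque newtype makes that impossible by construction.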
## Further work
In several places, I have left TODOs where I encountered things that suggest we should do further unrelated cleanups. I'll write down the follow-up steps, either in the aforementioned document or on Slack. But, in short, at a glance, in approximate order, we could:
- delete `ExecutionCtx` as it is only a subset of `ServerCtx`, and remove one more `runReaderT` call
- delete `ServerConfigCtx` as it is only a subset of `ServerCtx`, and remove it from `RunCtx`
- remove `ServerCtx` from `HandlerCtx`, and make it part of `AppContext`, or even make it the `AppContext` altogether (since, at least for the OSS version, `AppContext` is there again only a subset)
- remove `CacheBuildParams` and `CacheBuild` altogether, as they're just a distinct stack that is a `ReaderT` on top of `IO` that contains, you guessed it, the same thing as `ServerCtx`
- move `RunT` out of `RQL.Types` and rename it, since after the previous cleanups **it only contains `UserInfo`**; it could be bundled with the authentication service, made a small implementation detail in `Hasura.Server.Auth`
- rename `PGMetadaStorageT` to something a bit more accurate, such as `App`, and enforce its IO base
This would significantly simplify our complex stack. From there, or in parallel, we can start moving existing dependencies to services. For the purpose of supporting read replicas entitlement, we could move `MonadResolveSource` to a `SourceResolver` service, as attempted in #7653, and transform `UserAuthenticationM` into an `Authentication` service.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7736
GitOrigin-RevId: 68cce710eb9e7d752bda1ba0c49541d24df8209f
-- | Defines the CE version of the engine.
--
-- This module contains everything that is required to run the community edition
-- of the engine: the base application monad and the implementation of all its
-- behaviour classes.
module Hasura.App
  ( -- * top-level error handling
    ExitCode (..),
    ExitException (..),
    throwErrExit,
    throwErrJExit,
    accessDeniedErrMsg,

    -- * printing helpers
    printJSON,

    -- * logging
    mkLoggers,
    mkPGLogger,
    flushLogger,

    -- * basic connection info
    BasicConnectionInfo (..),
    initMetadataConnectionInfo,
    initBasicConnectionInfo,
    resolvePostgresConnInfo,

    -- * app init
    initialiseAppEnv,
    initialiseAppContext,
    migrateCatalogAndFetchMetadata,
    buildFirstSchemaCache,
    initSubscriptionsState,
    initLockedEventsCtx,

    -- * app monad
    AppM,
    runAppM,

    -- * misc
    getCatalogStateTx,
    updateJwkCtxThread,
    notifySchemaCacheSyncTx,
    parseArgs,
    runHGEServer,
    setCatalogStateTx,
    mkHGEServer,
    mkPgSourceResolver,
    mkMSSQLSourceResolver,
  )
where
import Control.Concurrent.Async.Lifted.Safe qualified as LA
import Control.Concurrent.Extended (sleep)
import Control.Concurrent.Extended qualified as C
import Control.Concurrent.STM
import Control.Concurrent.STM qualified as STM
import Control.Exception (bracket_, throwIO)
import Control.Monad.Catch
  ( Exception,
    MonadCatch,
    MonadMask,
    MonadThrow,
    onException,
  )
import Control.Monad.Morph (hoist)
import Control.Monad.Stateless
import Control.Monad.Trans.Control (MonadBaseControl (..))
import Control.Monad.Trans.Managed (ManagedT (..), allocate, allocate_)
import Control.Retry qualified as Retry
import Data.Aeson qualified as A
import Data.ByteString.Char8 qualified as BC
import Data.ByteString.Lazy qualified as BL
import Data.ByteString.Lazy.Char8 qualified as BLC
import Data.Environment qualified as Env
import Data.FileEmbed (makeRelativeToProject)
import Data.HashMap.Strict qualified as HM
import Data.Set.NonEmpty qualified as NE
import Data.Text qualified as T
import Data.Time.Clock (UTCTime)
import Data.Time.Clock qualified as Clock
import Database.MSSQL.Pool qualified as MSPool
Import `pg-client-hs` as `PG`
Result of executing the following commands:
```shell
# replace "as Q" imports with "as PG" (in retrospect this didn't need a regex)
git grep -lE 'as Q($|[^a-zA-Z])' -- '*.hs' | xargs sed -i -E 's/as Q($|[^a-zA-Z])/as PG\1/'
# replace " Q." with " PG."
git grep -lE ' Q\.' -- '*.hs' | xargs sed -i 's/ Q\./ PG./g'
# replace "(Q." with "(PG."
git grep -lE '\(Q\.' -- '*.hs' | xargs sed -i 's/(Q\./(PG./g'
# ditto, but for [, |, { and !
git grep -lE '\[Q\.' -- '*.hs' | xargs sed -i 's/\[Q\./\[PG./g'
git grep -l '|Q\.' -- '*.hs' | xargs sed -i 's/|Q\./|PG./g'
git grep -l '{Q\.' -- '*.hs' | xargs sed -i 's/{Q\./{PG./g'
git grep -l '!Q\.' -- '*.hs' | xargs sed -i 's/!Q\./!PG./g'
```
(Doing the `grep -l` before the `sed`, instead of `sed` on the entire codebase, reduces the number of `mtime` updates, and so reduces how many times a file gets recompiled while checking intermediate results.)
Finally, I manually removed a broken and unused `Arbitrary` instance in `Hasura.RQL.Network`. (It used an `import Test.QuickCheck.Arbitrary as Q` statement, which was erroneously caught by the first find-replace command.)
After this PR, `Q` is no longer used as an import qualifier. That was not the goal of this PR, but perhaps it's a useful fact for future efforts.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5933
GitOrigin-RevId: 8c84c59d57789111d40f5d3322c5a885dcfbf40e
import Database.PG.Query qualified as PG
import GHC.AssertNF.CPP
import Hasura.App.State
import Hasura.Backends.MSSQL.Connection
import Hasura.Backends.Postgres.Connection
import Hasura.Base.Error
import Hasura.Eventing.Common
import Hasura.Eventing.EventTrigger
import Hasura.Eventing.ScheduledTrigger
import Hasura.GraphQL.Execute
  ( ExecutionStep (..),
    MonadGQLExecutionCheck (..),
    checkQueryInAllowlist,
  )
import Hasura.GraphQL.Execute.Action
import Hasura.GraphQL.Execute.Action.Subscription
import Hasura.GraphQL.Execute.Backend qualified as EB
import Hasura.GraphQL.Execute.Subscription.Poll qualified as ES
import Hasura.GraphQL.Execute.Subscription.State qualified as ES
import Hasura.GraphQL.Logging (MonadExecutionLog (..), MonadQueryLog (..))
import Hasura.GraphQL.Transport.HTTP
  ( CacheStoreSuccess (CacheStoreSkipped),
    MonadExecuteQuery (..),
  )
import Hasura.GraphQL.Transport.HTTP.Protocol (toParsed)
import Hasura.GraphQL.Transport.WSServerApp qualified as WS
import Hasura.GraphQL.Transport.WebSocket.Server qualified as WS
import Hasura.Logging
import Hasura.Metadata.Class
import Hasura.PingSources
import Hasura.Prelude
import Hasura.QueryTags
import Hasura.RQL.DDL.ApiLimit (MonadGetApiTimeLimit (..))
import Hasura.RQL.DDL.EventTrigger (MonadEventLogCleanup (..))
import Hasura.RQL.DDL.Schema.Cache
import Hasura.RQL.DDL.Schema.Cache.Common
import Hasura.RQL.DDL.Schema.Catalog
import Hasura.RQL.Types.Allowlist
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.Eventing.Backend
import Hasura.RQL.Types.Metadata
import Hasura.RQL.Types.Network
import Hasura.RQL.Types.ResizePool
import Hasura.RQL.Types.SchemaCache
import Hasura.RQL.Types.SchemaCache.Build
import Hasura.RQL.Types.Source
import Hasura.SQL.AnyBackend qualified as AB
import Hasura.SQL.Backend
import Hasura.Server.API.Query (requiresAdmin)
import Hasura.Server.App
import Hasura.Server.AppStateRef
import Hasura.Server.Auth
import Hasura.Server.CheckUpdates (checkForUpdates)
import Hasura.Server.Init
import Hasura.Server.Limits
import Hasura.Server.Logging
import Hasura.Server.Metrics (ServerMetrics (..))
import Hasura.Server.Migrate (migrateCatalog)
import Hasura.Server.Prometheus
  ( PrometheusMetrics (..),
    decWarpThreads,
    incWarpThreads,
  )
import Hasura.Server.SchemaUpdate
import Hasura.Server.Telemetry
import Hasura.Server.Types
import Hasura.Server.Version
import Hasura.Services
import Hasura.Session
import Hasura.ShutdownLatch
Rewrite `Tracing` to allow for only one `TraceT` in the entire stack.
This PR is on top of #7789.
### Description
This PR entirely rewrites the API of the Tracing library, to make `interpTraceT` a thing of the past. Before this change, we ran traces by sticking a `TraceT` on top of whatever we were doing. This had several major drawbacks:
- we were carrying a bunch of `TraceT` across the codebase, and the entire codebase had to know about it
- we needed to carry a second class constraint around (`HasReporterM`) to be able to run all of those traces
- we kept having to do stack rewriting with `interpTraceT`, which went from inconvenient to horrible
- we had to declare several behavioral instances on `TraceT m`
This PR rewrites all of `Tracing` using a more conventional model: there is ONE `TraceT` at the bottom of the stack, and there is an associated class constraint `MonadTrace`: any part of the code that happens to satisfy `MonadTrace` is able to create new traces. We NEVER have to do stack rewriting, `interpTraceT` is gone, and `TraceT` and `Reporter` become implementation details that 99% of the code is blissfully unaware of: code that needs to do tracing only needs to declare that the monad in which it operates implements `MonadTrace`.
In doing so, this PR revealed **several bugs in the codebase**: places where we were expecting to trace something, but due to the default instance of `HasReporterM IO` we would actually not do anything. This PR also splits the code of `Tracing` in more byte-sized modules, with the goal of potentially moving to `server/lib` down the line.
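The model can be illustrated with a toy version. The real `MonadTrace` in `Hasura.Tracing` carries span metadata and reporters; this sketch only shows the shape: one concrete carrier at the base of the stack, and a constraint everywhere else.

```haskell
import Data.IORef

-- The constraint that 99% of the code sees.
class Monad m => MonadTrace m where
  newSpan :: String -> m a -> m a

-- The ONE concrete carrier at the bottom of the stack. Here it just
-- records span names in an IORef; the real one talks to a reporter.
newtype TraceT a = TraceT {runTraceT :: IORef [String] -> IO a}

instance Functor TraceT where
  fmap f (TraceT g) = TraceT (fmap f . g)

instance Applicative TraceT where
  pure x = TraceT (\_ -> pure x)
  TraceT f <*> TraceT x = TraceT (\r -> f r <*> x r)

instance Monad TraceT where
  TraceT x >>= f = TraceT (\r -> x r >>= \a -> runTraceT (f a) r)

instance MonadTrace TraceT where
  newSpan name action = TraceT $ \r -> do
    modifyIORef r (++ [name]) -- "report" the span
    runTraceT action r

-- Code deep in the stack needs only the constraint, never TraceT itself.
business :: MonadTrace m => m Int
business = newSpan "resolve" (newSpan "execute" (pure 42))

main :: IO ()
main = do
  r <- newIORef []
  n <- runTraceT business r
  spans <- readIORef r
  print n
  print spans
```

Because `business` only demands `MonadTrace m`, no stack rewriting (`interpTraceT`-style) is ever needed to call it from a different monad; any base monad with an instance will do.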
### Remaining work
This PR is a draft; what's left to do is:
- [x] make Pro compile; I haven't updated `HasuraPro/Main` yet
- [x] document Tracing by writing a note that explains how to use the library, and the meaning of "reporter", "trace" and "span", as well as the pitfalls
- [x] discuss some of the trade-offs in the implementation, which is why I'm opening this PR already despite it not fully building yet
- [x] it depends on #7789 being merged first
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7791
GitOrigin-RevId: cadd32d039134c93ddbf364599a2f4dd988adea8
import Hasura.Tracing
import Network.HTTP.Client qualified as HTTP
server: http ip blocklist (closes #2449)
## Description
This PR is in reference to #2449 (support IP blacklisting for multitenant)
*RFC Update: Add support for IPv6 blocking*
### Solution and Design
Using [http-client-restricted](https://hackage.haskell.org/package/http-client-restricted) package, we're creating the HTTP manager with restricting capabilities. The IPs can be supplied from the CLI arguments as `--ipv4BlocklistCidrs cidr1, cidr2...` or `--disableDefaultIPv4Blocklist` for a default IP list. The new manager will block all requests to the provided CIDRs.
We are extracting the error message string to show the end-user that given IP is blocked from being set as a webhook. There are 2 ways to extract the error message "connection to IP address is blocked". Given below are the responses from event trigger to a blocked IP for these implementations:
- 6d74fde316f61e246c861befcca5059d33972fa7 - We return the error message string as a HTTPErr(HOther) from `Hasura/Eventing/HTTP.hs`.
```
{
"data": {
"message": "blocked connection to private IP address "
},
"version": "2",
"type": "client_error"
}
```
- 88e17456345cbb449a5ecd4877c84c9f319dbc25 - We case-match on `HTTPExceptionContent` for `InternalException` in `Hasura/HTTP.hs` and extract the error message string from it. (This is implemented as it handles all the cases where the pro engine makes webhook requests.)
```
{
"data": {
"message": {
"type": "http_exception",
"message": "blocked connection to private IP address ",
"request": {
"secure": false,
"path": "/webhook",
"responseTimeout": "ResponseTimeoutMicro 60000000",
"queryString": "",
"method": "POST",
"requestHeaders": {
"Content-Type": "application/json",
"X-B3-ParentSpanId": "5ae6573edb2a6b36",
"X-B3-TraceId": "29ea7bd6de6ebb8f",
"X-B3-SpanId": "303137d9f1d4f341",
"User-Agent": "hasura-graphql-engine/cerebushttp-ip-blacklist-a793a0e41-dirty"
},
"host": "139.59.90.109",
"port": 8000
}
}
},
"version": "2",
"type": "client_error"
}
```
### Steps to test and verify
The restricted IPs can be used as webhooks in event triggers, and Hasura will return an error message in response.
### Limitations, known bugs & workarounds
- The `http-client-restricted` has a needlessly complex interface, and puts effort into implementing proxy support which we don't want, so we've inlined a stripped down version.
- Performance constraint: As the blocking is checked for each request, if a long list of blocked CIDRs is supplied, iterating through all of them is not what we would prefer. Using trie is suggested to overcome this. (Added to RFC)
- Calls to Lux endpoints are inconsistent: We use either the http manager from the ProServeCtx which is unrestricted, or the http manager from the ServeCtx which is restricted (the latter through the instances for MonadMetadataApiAuthorization and UserAuthentication). (The failure scenario here would be: cloud sets PRO_ENDPOINT to something that resolves to an internal address, and then restricted requests to those endpoints fail, causing auth to fail on user requests. This is about HTTP requests to lux auth endpoints.)
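The linear scan that the performance note above refers to looks roughly like this (IPv4 addresses as `Word32`, every name invented for illustration; the actual check lives in the inlined `http-client-restricted` code):

```haskell
import Data.Bits
import Data.Word

-- A CIDR block: base address plus prefix length, e.g. 10.0.0.0/8.
data CIDR = CIDR {cidrBase :: Word32, cidrBits :: Int}

-- Does an address fall inside a CIDR block?
matches :: Word32 -> CIDR -> Bool
matches ip (CIDR base bits) =
  let mask = if bits == 0 then 0 else complement 0 `shiftL` (32 - bits)
   in ip .&. mask == base .&. mask

-- The O(n) scan the RFC suggests replacing with a trie:
-- every outgoing request checks the address against every blocked CIDR.
isBlocked :: [CIDR] -> Word32 -> Bool
isBlocked blocklist ip = any (matches ip) blocklist

main :: IO ()
main = do
  let tenSlash8 = CIDR (10 `shiftL` 24) 8 -- 10.0.0.0/8
  print (isBlocked [tenSlash8] ((10 `shiftL` 24) .|. 66051)) -- 10.1.2.3
  print (isBlocked [tenSlash8] 134744072) -- 8.8.8.8
```

A /0 check degenerates to a zero mask (matches everything), and note `/0`'s mask is special-cased because `shiftL` by the full word width is not a meaningful shift; a prefix trie would make the lookup logarithmic in address bits instead of linear in the number of CIDRs.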
## Changelog
- ✅ `CHANGELOG.md` is updated with user-facing content relevant to this PR.
## Affected components
- ✅ Server
- ✅ Tests
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3186
Co-authored-by: Robert <132113+robx@users.noreply.github.com>
GitOrigin-RevId: 5bd2de2d028bc416b02c99e996c7bebce56fb1e7
import Network.HTTP.Client.CreateManager (mkHttpManager)
import Network.Wai (Application)
import Network.Wai.Handler.Warp qualified as Warp
import Options.Applicative
import Refined (unrefine)
import System.Log.FastLogger qualified as FL
import System.Metrics qualified as EKG
import System.Metrics.Gauge qualified as EKG.Gauge
import Text.Mustache.Compile qualified as M
import Web.Spock.Core qualified as Spock
--------------------------------------------------------------------------------
-- Error handling (move to another module!)

data ExitCode
  = -- these are used during server initialization:
    InvalidEnvironmentVariableOptionsError
  | InvalidDatabaseConnectionParamsError
  | AuthConfigurationError
  | DatabaseMigrationError
  | -- | used by MT because it initialises the schema cache only
    -- these are used in app/Main.hs:
    SchemaCacheInitError
  | MetadataExportError
  | MetadataCleanError
  | DowngradeProcessError
  deriving (Show)

data ExitException = ExitException
  { eeCode :: !ExitCode,
    eeMessage :: !BC.ByteString
  }
  deriving (Show)

instance Exception ExitException

throwErrExit :: (MonadIO m) => forall a. ExitCode -> String -> m a
throwErrExit reason = liftIO . throwIO . ExitException reason . BC.pack

throwErrJExit :: (A.ToJSON a, MonadIO m) => forall b. ExitCode -> a -> m b
throwErrJExit reason = liftIO . throwIO . ExitException reason . BLC.toStrict . A.encode

accessDeniedErrMsg :: Text
accessDeniedErrMsg = "restricted access : admin only"

--------------------------------------------------------------------------------
-- Printing helpers (move to another module!)

printJSON :: (A.ToJSON a, MonadIO m) => a -> m ()
printJSON = liftIO . BLC.putStrLn . A.encode
--------------------------------------------------------------------------------
-- Logging

mkPGLogger :: Logger Hasura -> PG.PGLogger
mkPGLogger (Logger logger) (PG.PLERetryMsg msg) = logger $ PGLog LevelWarn msg

-- | Create all loggers based on the set of enabled logs and chosen log level.
mkLoggers ::
  (MonadIO m, MonadBaseControl IO m) =>
  HashSet (EngineLogType Hasura) ->
  LogLevel ->
  ManagedT m Loggers
mkLoggers enabledLogs logLevel = do
  loggerCtx <- mkLoggerCtx (defaultLoggerSettings True logLevel) enabledLogs
  let logger = mkLogger loggerCtx
      pgLogger = mkPGLogger logger
  pure $ Loggers loggerCtx logger pgLogger

-- | If an exception is thrown, and the process exits without the log buffer
-- being flushed, we will be missing some log lines (see:
-- https://github.com/hasura/graphql-engine/issues/4772). This function forces a
-- flush of the buffer.
flushLogger :: MonadIO m => LoggerCtx impl -> m ()
flushLogger = liftIO . FL.flushLogStr . _lcLoggerSet
--------------------------------------------------------------------------------
-- Basic connection info

-- | Basic information required to connect to the metadata DB, and to the
-- default Postgres DB if any.
data BasicConnectionInfo = BasicConnectionInfo
  { -- | metadata db connection info
    bciMetadataConnInfo :: PG.ConnInfo,
    -- | default postgres connection info, if any
    bciDefaultPostgres :: Maybe PostgresConnConfiguration
  }

-- | Only create the metadata connection info.
--
-- Like 'initBasicConnectionInfo', it prioritizes @--metadata-database-url@, and
-- falls back to @--database-url@ otherwise.
--
-- !!! This function throws a fatal error if the @--database-url@ cannot be
-- !!! resolved.
initMetadataConnectionInfo ::
  (MonadIO m) =>
  Env.Environment ->
  -- | metadata DB URL (--metadata-database-url)
  Maybe String ->
  -- | user's DB URL (--database-url)
  PostgresConnInfo (Maybe UrlConf) ->
  m PG.ConnInfo
initMetadataConnectionInfo env metadataDbURL dbURL =
  fmap bciMetadataConnInfo $
    initBasicConnectionInfo
      env
      metadataDbURL
      dbURL
      Nothing -- ignored
      False -- ignored
      PG.ReadCommitted -- ignored

-- | Create a 'BasicConnectionInfo' based on the given options.
--
-- The default postgres connection is only created when the @--database-url@
-- option is given. If the @--metadata-database-url@ isn't given, the
-- @--database-url@ will be used for the metadata connection.
--
-- All arguments related to the default postgres connection are ignored if the
-- @--database-url@ is missing.
--
-- !!! This function throws a fatal error if the @--database-url@ cannot be
-- !!! resolved.
initBasicConnectionInfo ::
  (MonadIO m) =>
  Env.Environment ->
  -- | metadata DB URL (--metadata-database-url)
  Maybe String ->
  -- | user's DB URL (--database-url)
  PostgresConnInfo (Maybe UrlConf) ->
  -- | pool settings of the default PG connection
  Maybe PostgresPoolSettings ->
  -- | whether the default PG config should use prepared statements
  Bool ->
  -- | default transaction isolation level
  PG.TxIsolation ->
  m BasicConnectionInfo
initBasicConnectionInfo
  env
  metadataDbUrl
  (PostgresConnInfo dbUrlConf maybeRetries)
  poolSettings
  usePreparedStatements
  isolationLevel =
    case (metadataDbUrl, dbUrlConf) of
      (Nothing, Nothing) ->
        throwErrExit
          InvalidDatabaseConnectionParamsError
          "Fatal Error: Either of --metadata-database-url or --database-url option expected"
      -- If no metadata storage is specified, use the default database as
      -- metadata storage.
      (Nothing, Just srcURL) -> do
        srcConnInfo <- resolvePostgresConnInfo env srcURL maybeRetries
        pure $ BasicConnectionInfo srcConnInfo (Just $ mkSourceConfig srcURL)
      (Just mdURL, Nothing) ->
        pure $ BasicConnectionInfo (mkConnInfoFromMDB mdURL) Nothing
      (Just mdURL, Just srcURL) -> do
        _srcConnInfo <- resolvePostgresConnInfo env srcURL maybeRetries
        pure $ BasicConnectionInfo (mkConnInfoFromMDB mdURL) (Just $ mkSourceConfig srcURL)
    where
      mkConnInfoFromMDB mdbURL =
        PG.ConnInfo
          { PG.ciRetries = fromMaybe 1 maybeRetries,
            PG.ciDetails = PG.CDDatabaseURI $ txtToBs $ T.pack mdbURL
          }
      mkSourceConfig srcURL =
        PostgresConnConfiguration
          { _pccConnectionInfo =
              PostgresSourceConnInfo
                { _psciDatabaseUrl = srcURL,
                  _psciPoolSettings = poolSettings,
                  _psciUsePreparedStatements = usePreparedStatements,
                  _psciIsolationLevel = isolationLevel,
                  _psciSslConfiguration = Nothing
                },
            _pccReadReplicas = Nothing,
            _pccExtensionsSchema = defaultPostgresExtensionsSchema,
            _pccConnectionTemplate = Nothing,
            _pccConnectionSet = mempty
          }

-- | Creates a 'PG.ConnInfo' from a 'UrlConf' parameter.
--
-- !!! throws a fatal error if the configuration is invalid
resolvePostgresConnInfo ::
  (MonadIO m) =>
  Env.Environment ->
  UrlConf ->
  Maybe Int ->
  m PG.ConnInfo
resolvePostgresConnInfo env dbUrlConf (fromMaybe 1 -> retries) = do
  dbUrlText <-
    resolveUrlConf env dbUrlConf `onLeft` \err ->
      liftIO (throwErrJExit InvalidDatabaseConnectionParamsError err)
  pure $ PG.ConnInfo retries $ PG.CDDatabaseURI $ txtToBs dbUrlText
2023-03-27 13:25:55 +03:00
--------------------------------------------------------------------------------
-- App init

-- | The initialisation of the app is split into several functions, for clarity;
-- but there are several pieces of information that need to be threaded across
-- those initialisation functions. This small data structure groups together all
-- such pieces of information that are required throughout the initialisation,
-- but that aren't needed in the rest of the application.
data AppInit = AppInit
  { aiTLSAllowListRef :: TLSAllowListRef Hasura,
    aiMetadataWithResourceVersion :: MetadataWithResourceVersion
  }

-- | Initializes or migrates the catalog, creates the 'AppEnv' required to
-- start the server, and also creates the 'AppInit' that needs to be threaded
-- along the init code.
--
-- For historical reasons, this function performs a few additional startup tasks
-- that are not required to create the 'AppEnv', such as starting background
-- processes and logging startup information. All of those are flagged with a
-- comment marking them as a side effect.
initialiseAppEnv ::
  (C.ForkableMonadIO m, MonadCatch m) =>
  Env.Environment ->
  BasicConnectionInfo ->
  ServeOptions Hasura ->
  Maybe ES.SubscriptionPostPollHook ->
  ServerMetrics ->
  PrometheusMetrics ->
  SamplingPolicy ->
  ManagedT m (AppInit, AppEnv)
initialiseAppEnv env BasicConnectionInfo {..} serveOptions@ServeOptions {..} liveQueryHook serverMetrics prometheusMetrics traceSamplingPolicy = do
  loggers@(Loggers loggerCtx logger pgLogger) <- mkLoggers soEnabledLogTypes soLogLevel

  -- SIDE EFFECT: print a warning if no admin secret is set.
  when (null soAdminSecret) $
    unLogger
      logger
      StartupLog
        { slLogLevel = LevelWarn,
          slKind = "no_admin_secret",
          slInfo = A.toJSON ("WARNING: No admin secret provided" :: Text)
        }

  -- SIDE EFFECT: log all server options.
  unLogger logger $ serveOptsToLog serveOptions

  -- SIDE EFFECT: log metadata postgres connection info.
  unLogger logger $ connInfoToLog bciMetadataConnInfo

  -- Generate the instance id.
  instanceId <- liftIO generateInstanceId

  -- Init metadata db pool.
  metadataDbPool <-
    allocate
      (liftIO $ PG.initPGPool bciMetadataConnInfo soConnParams pgLogger)
      (liftIO . PG.destroyPGPool)

  -- Migrate the catalog and fetch the metadata.
  metadataWithVersion <-
    lift $
      flip onException (flushLogger loggerCtx) $
        migrateCatalogAndFetchMetadata
          logger
          metadataDbPool
          bciDefaultPostgres
          soEnableMaintenanceMode
          soExtensionsSchema

  -- Create the TLSAllowListRef and the HTTP Manager.
  let metadata = _mwrvMetadata metadataWithVersion
  tlsAllowListRef <- liftIO $ createTLSAllowListRef $ networkTlsAllowlist $ _metaNetwork metadata
  httpManager <- liftIO $ mkHttpManager (readTLSAllowList tlsAllowListRef) mempty
  -- Start a background thread for listening to schema sync events from other
  -- server instances (an interval of 0 indicates that no schema sync is
  -- required). Logs whether the thread is started or not, and with what
  -- interval.
  -- TODO: extract into a separate init function.
  metaVersionRef <- liftIO $ STM.newEmptyTMVarIO
  case soSchemaPollInterval of
    Skip -> unLogger logger $ mkGenericLog @Text LevelInfo "schema-sync" "Schema sync disabled"
    Interval interval -> do
      unLogger logger $ mkGenericLog @String LevelInfo "schema-sync" ("Schema sync enabled. Polling at " <> show interval)
      void $ startSchemaSyncListenerThread logger metadataDbPool instanceId interval metaVersionRef

  -- Generate the shutdown latch.
  latch <- liftIO newShutdownLatch

  -- Generate subscription state.
  subscriptionsState <- liftIO $ initSubscriptionsState logger liveQueryHook

  -- Generate the event triggers' shared state.
  lockedEventsCtx <- liftIO $ initLockedEventsCtx

  pure
    ( AppInit
        { aiTLSAllowListRef = tlsAllowListRef,
          aiMetadataWithResourceVersion = metadataWithVersion
        },
      AppEnv
        { appEnvPort = soPort,
          appEnvHost = soHost,
          appEnvMetadataDbPool = metadataDbPool,
          appEnvManager = httpManager,
          appEnvLoggers = loggers,
          appEnvMetadataVersionRef = metaVersionRef,
          appEnvInstanceId = instanceId,
          appEnvEnableMaintenanceMode = soEnableMaintenanceMode,
          appEnvLoggingSettings = LoggingSettings soEnabledLogTypes soEnableMetadataQueryLogging,
          appEnvEventingMode = soEventingMode,
          appEnvEnableReadOnlyMode = soReadOnlyMode,
          appEnvServerMetrics = serverMetrics,
          appEnvShutdownLatch = latch,
          appEnvMetaVersionRef = metaVersionRef,
          appEnvPrometheusMetrics = prometheusMetrics,
          appEnvTraceSamplingPolicy = traceSamplingPolicy,
          appEnvSubscriptionState = subscriptionsState,
          appEnvLockedEventsCtx = lockedEventsCtx,
          appEnvConnParams = soConnParams,
          appEnvTxIso = soTxIso,
          appEnvConsoleAssetsDir = soConsoleAssetsDir,
          appEnvConsoleSentryDsn = soConsoleSentryDsn,
          appEnvConnectionOptions = soConnectionOptions,
          appEnvWebSocketKeepAlive = soWebSocketKeepAlive,
          appEnvWebSocketConnectionInitTimeout = soWebSocketConnectionInitTimeout,
          appEnvGracefulShutdownTimeout = soGracefulShutdownTimeout,
          appEnvCheckFeatureFlag = CheckFeatureFlag $ checkFeatureFlag env,
          appEnvSchemaPollInterval = soSchemaPollInterval
        }
    )

-- | Initializes the 'AppContext' and returns a corresponding 'AppStateRef'.
--
-- This function is meant to be run in the app monad, which provides the
-- 'AppEnv'.
initialiseAppContext ::
  (C.ForkableMonadIO m, HasAppEnv m) =>
  Env.Environment ->
  ServeOptions Hasura ->
  AppInit ->
  m (AppStateRef Hasura)
initialiseAppContext env serveOptions@ServeOptions {..} AppInit {..} = do
  AppEnv {..} <- askAppEnv
  let Loggers _ logger pgLogger = appEnvLoggers
      sqlGenCtx = initSQLGenCtx soExperimentalFeatures soStringifyNum soDangerousBooleanCollapse
      serverConfigCtx =
        ServerConfigCtx
          soInferFunctionPermissions
          soEnableRemoteSchemaPermissions
          sqlGenCtx
          soEnableMaintenanceMode
          soExperimentalFeatures
          soEventingMode
          soReadOnlyMode
          soDefaultNamingConvention
          soMetadataDefaults
          (CheckFeatureFlag $ checkFeatureFlag env)
          soApolloFederationStatus

  -- Create the schema cache.
  rebuildableSchemaCache <-
    buildFirstSchemaCache
      env
      logger
      serverConfigCtx
      (mkPgSourceResolver pgLogger)
      mkMSSQLSourceResolver
      aiMetadataWithResourceVersion
      appEnvManager

  -- Build the RebuildableAppContext.
  -- (See note [Hasura Application State].)
  rebuildableAppCtxE <- liftIO $ runExceptT (buildRebuildableAppContext (logger, appEnvManager) serveOptions env)
  !rebuildableAppCtx <- onLeft rebuildableAppCtxE $ \e -> throwErrExit InvalidEnvironmentVariableOptionsError $ T.unpack $ qeError e

  -- Initialise the 'AppStateRef' from 'RebuildableSchemaCacheRef' and 'RebuildableAppContext'.
  initialiseAppStateRef aiTLSAllowListRef appEnvServerMetrics rebuildableSchemaCache rebuildableAppCtx

-- | Runs catalogue migration, and returns the metadata that was fetched.
--
-- On success, this function logs the result of the migration; on failure, it
-- logs a 'catalog_migrate' error and throws a fatal error.
migrateCatalogAndFetchMetadata ::
  (MonadIO m, MonadBaseControl IO m) =>
  Logger Hasura ->
  PG.PGPool ->
  Maybe (SourceConnConfiguration ('Postgres 'Vanilla)) ->
  MaintenanceMode () ->
  ExtensionsSchema ->
  m MetadataWithResourceVersion
migrateCatalogAndFetchMetadata
  logger
  pool
  defaultSourceConfig
  maintenanceMode
  extensionsSchema = do
    -- TODO: should we allow the migration to happen during maintenance mode?
    -- Allowing this can be a sanity check, to see if the hdb_catalog in the
    -- DB has been set correctly.
    currentTime <- liftIO Clock.getCurrentTime
    result <-
      runExceptT $
        PG.runTx pool (PG.Serializable, Just PG.ReadWrite) $
          migrateCatalog
            defaultSourceConfig
            extensionsSchema
            maintenanceMode
            currentTime
    case result of
      Left err -> do
        unLogger
          logger
          StartupLog
            { slLogLevel = LevelError,
              slKind = "catalog_migrate",
              slInfo = A.toJSON err
            }
        liftIO (throwErrJExit DatabaseMigrationError err)
      Right (migrationResult, metadataWithVersion) -> do
        unLogger logger migrationResult
        pure metadataWithVersion

-- | Builds the original 'RebuildableSchemaCache'.
--
-- On error, it logs a 'catalog_migrate' error and throws a fatal error. This
-- misnomer is intentional: it preserves a previous behaviour of the code and
-- avoids a breaking change.
buildFirstSchemaCache ::
  (MonadIO m) =>
  Env.Environment ->
  Logger Hasura ->
  ServerConfigCtx ->
  SourceResolver ('Postgres 'Vanilla) ->
  SourceResolver ('MSSQL) ->
  MetadataWithResourceVersion ->
  HTTP.Manager ->
  m RebuildableSchemaCache
buildFirstSchemaCache
  env
  logger
  serverConfigCtx
  pgSourceResolver
  mssqlSourceResolver
  metadataWithVersion
  httpManager = do
    let cacheBuildParams = CacheBuildParams httpManager pgSourceResolver mssqlSourceResolver
        buildReason = CatalogSync
    result <-
      runExceptT $
        runCacheBuild cacheBuildParams $
          buildRebuildableSchemaCacheWithReason buildReason logger env metadataWithVersion serverConfigCtx
    result `onLeft` \err -> do
      -- TODO: we used to bundle the first schema cache build with the catalog
      -- migration, using the same error handler for both, meaning that an
      -- error in the first schema cache build would be reported as
      -- follows. Changing this will be a breaking change.
      unLogger
        logger
        StartupLog
          { slLogLevel = LevelError,
            slKind = "catalog_migrate",
            slInfo = A.toJSON err
          }
      liftIO (throwErrJExit DatabaseMigrationError err)

initSubscriptionsState ::
  Logger Hasura ->
  Maybe ES.SubscriptionPostPollHook ->
  IO ES.SubscriptionsState
initSubscriptionsState logger liveQueryHook = ES.initSubscriptionsState postPollHook
  where
    postPollHook = fromMaybe (ES.defaultSubscriptionPostPollHook logger) liveQueryHook

initLockedEventsCtx :: IO LockedEventsCtx
initLockedEventsCtx =
  liftM4
    LockedEventsCtx
    (STM.newTVarIO mempty)
    (STM.newTVarIO mempty)
    (STM.newTVarIO mempty)
    (STM.newTVarIO mempty)
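The `liftM4` call in `initLockedEventsCtx` is just the monadic spelling of applying a four-argument constructor to four `IO` actions. A minimal, self-contained sketch of the equivalence — `Ctx` below is an illustrative stand-in for `LockedEventsCtx`, not engine code:

```haskell
-- Hedged sketch (illustrative names only): both definitions run the four
-- actions left to right and apply the constructor to their results.
import Control.Concurrent.STM (TVar, newTVarIO)
import Control.Monad (liftM4)

data Ctx = Ctx (TVar [Int]) (TVar [Int]) (TVar [Int]) (TVar [Int])

initCtxMonadic :: IO Ctx
initCtxMonadic = liftM4 Ctx (newTVarIO []) (newTVarIO []) (newTVarIO []) (newTVarIO [])

initCtxApplicative :: IO Ctx
initCtxApplicative = Ctx <$> newTVarIO [] <*> newTVarIO [] <*> newTVarIO [] <*> newTVarIO []
```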

--------------------------------------------------------------------------------
-- App monad

-- | The base app monad of the CE engine.
newtype AppM a = AppM (ReaderT AppEnv (TraceT IO) a)
  deriving newtype
    ( Functor,
      Applicative,
      Monad,
      MonadIO,
      MonadFix,
      MonadCatch,
      MonadThrow,
      MonadMask,
      MonadReader AppEnv,
      MonadBase IO,
      MonadBaseControl IO
    )

runAppM :: AppEnv -> AppM a -> IO a
runAppM c (AppM a) = ignoreTraceT $ runReaderT a c
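The `AppM` stack above, together with the `HasAppEnv` and `ProvidesNetwork` instances that follow, implements a reader-based service pattern: one newtype over `ReaderT`, plus capability classes whose instances simply read from the environment. A minimal, self-contained sketch under illustrative names — `DemoM`, `Env`, and the stand-in `Manager` are not engine code:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Hedged sketch of the service pattern (illustrative names only).
import Control.Monad.Reader (MonadIO, MonadReader, ReaderT, asks, runReaderT)

-- stand-in for Network.HTTP.Client.Manager
data Manager = Manager

newtype Env = Env {envManager :: Manager}

-- one newtype over ReaderT, mirroring AppM's shape
newtype DemoM a = DemoM (ReaderT Env IO a)
  deriving (Functor, Applicative, Monad, MonadIO, MonadReader Env)

-- the "service" class: any monad that can provide a manager
class ProvidesNetworkDemo m where
  askManagerDemo :: m Manager

instance ProvidesNetworkDemo DemoM where
  askManagerDemo = asks envManager

runDemoM :: Env -> DemoM a -> IO a
runDemoM env (DemoM a) = runReaderT a env
```

With this shape, code that needs network access only declares a `ProvidesNetworkDemo m` constraint and never threads the manager explicitly — the harmonization this module adopts via `ProvidesNetwork`.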

instance HasAppEnv AppM where
  askAppEnv = ask

instance MonadTrace AppM where
  newTraceWith c p n (AppM a) = AppM $ newTraceWith c p n a
  newSpanWith i n (AppM a) = AppM $ newSpanWith i n a
  currentContext = AppM currentContext
  attachMetadata = AppM . attachMetadata

instance ProvidesNetwork AppM where
  askHTTPManager = asks appEnvManager

instance HasResourceLimits AppM where
  askHTTPHandlerLimit = pure $ ResourceLimits id
  askGraphqlOperationLimit _ _ _ = pure $ ResourceLimits id

instance HttpLog AppM where
  type ExtraHttpLogMetadata AppM = ()

  emptyExtraHttpLogMetadata = ()

  buildExtraHttpLogMetadata _ _ = ()

  logHttpError logger loggingSettings userInfoM reqId waiReq req qErr headers _ =
    unLogger logger $
      mkHttpLog $
        mkHttpErrorLogContext userInfoM loggingSettings reqId waiReq req qErr Nothing Nothing headers

  logHttpSuccess logger loggingSettings userInfoM reqId waiReq reqBody response compressedResponse qTime cType headers (CommonHttpLogMetadata rb batchQueryOpLogs, ()) =
    unLogger logger $
      mkHttpLog $
        mkHttpAccessLogContext userInfoM loggingSettings reqId waiReq reqBody (BL.length response) compressedResponse qTime cType headers rb batchQueryOpLogs

instance MonadExecuteQuery AppM where
  cacheLookup _ _ _ _ = pure $ Right ([], Nothing)
  cacheStore _ _ _ = pure $ Right (Right CacheStoreSkipped)

instance UserAuthentication AppM where
  resolveUserInfo logger manager headers authMode reqs =
    runExceptT $ do
      (a, b, c) <- getUserInfoWithExpTime logger manager headers authMode reqs
      pure (a, b, c, ExtraUserInfo Nothing)

instance MonadMetadataApiAuthorization AppM where
  authorizeV1QueryApi query handlerCtx = runExceptT do
    let currRole = _uiRole $ hcUser handlerCtx
    when (requiresAdmin query && currRole /= adminRoleName) $
      withPathK "args" $
        throw400 AccessDenied accessDeniedErrMsg

  authorizeV1MetadataApi _ handlerCtx = runExceptT do
    let currRole = _uiRole $ hcUser handlerCtx
    when (currRole /= adminRoleName) $
      withPathK "args" $
        throw400 AccessDenied accessDeniedErrMsg

  authorizeV2QueryApi _ handlerCtx = runExceptT do
    let currRole = _uiRole $ hcUser handlerCtx
    when (currRole /= adminRoleName) $
      withPathK "args" $
        throw400 AccessDenied accessDeniedErrMsg

instance ConsoleRenderer AppM where
  renderConsole path authMode enableTelemetry consoleAssetsDir consoleSentryDsn =
    return $ mkConsoleHTML path authMode enableTelemetry consoleAssetsDir consoleSentryDsn

instance MonadVersionAPIWithExtraData AppM where
  -- we always default to CE as the `server_type` in this codebase
  getExtraDataForVersionAPI = return ["server_type" A..= ("ce" :: Text)]

instance MonadGQLExecutionCheck AppM where
  checkGQLExecution userInfo _ enableAL sc query _ = runExceptT $ do
    req <- toParsed query
    checkQueryInAllowlist enableAL AllowlistModeGlobalOnly userInfo req sc
    return req

  executeIntrospection _ introspectionQuery _ =
    pure $ Right $ ExecStepRaw introspectionQuery

  checkGQLBatchedReqs _ _ _ _ = runExceptT $ pure ()

instance MonadConfigApiHandler AppM where
  runConfigApiHandler = configApiGetHandler

instance MonadQueryLog AppM where
  logQueryLog logger = unLogger logger

instance MonadExecutionLog AppM where
  logExecutionLog logger = unLogger logger

instance WS.MonadWSLog AppM where
  logWSLog logger = unLogger logger

instance MonadResolveSource AppM where
  getPGSourceResolver = asks (mkPgSourceResolver . _lsPgLogger . appEnvLoggers)
  getMSSQLSourceResolver = return mkMSSQLSourceResolver

instance EB.MonadQueryTags AppM where
  createQueryTags _attributes _qtSourceConfig = return emptyQueryTagsComment

instance MonadEventLogCleanup AppM where
  runLogCleaner _ = pure $ throw400 NotSupported "Event log cleanup feature is enterprise edition only"
  generateCleanupSchedules _ _ _ = pure $ Right ()
  updateTriggerCleanupSchedules _ _ _ _ = pure $ Right ()

instance MonadGetApiTimeLimit AppM where
  runGetApiTimeLimit = pure Nothing
-- | Each function in the type class is executed in a totally separate transaction.
instance MonadMetadataStorage AppM where
  fetchMetadataResourceVersion = runInSeparateTx fetchMetadataResourceVersionFromCatalog
  fetchMetadata = runInSeparateTx fetchMetadataAndResourceVersionFromCatalog
  fetchMetadataNotifications a b = runInSeparateTx $ fetchMetadataNotificationsFromCatalog a b
  setMetadata r = runInSeparateTx . setMetadataInCatalog r
  notifySchemaCacheSync a b c = runInSeparateTx $ notifySchemaCacheSyncTx a b c
  getCatalogState = runInSeparateTx getCatalogStateTx
  setCatalogState a b = runInSeparateTx $ setCatalogStateTx a b

  getMetadataDbUid = runInSeparateTx getDbId
  checkMetadataStorageHealth = runInSeparateTx checkDbConnection

  getDeprivedCronTriggerStats = runInSeparateTx . getDeprivedCronTriggerStatsTx
  getScheduledEventsForDelivery = runInSeparateTx getScheduledEventsForDeliveryTx
  insertCronEvents = runInSeparateTx . insertCronEventsTx
  insertOneOffScheduledEvent = runInSeparateTx . insertOneOffScheduledEventTx
  insertScheduledEventInvocation a b = runInSeparateTx $ insertInvocationTx a b
  setScheduledEventOp a b c = runInSeparateTx $ setScheduledEventOpTx a b c
  unlockScheduledEvents a b = runInSeparateTx $ unlockScheduledEventsTx a b
  unlockAllLockedScheduledEvents = runInSeparateTx unlockAllLockedScheduledEventsTx
  clearFutureCronEvents = runInSeparateTx . dropFutureCronEventsTx
  getOneOffScheduledEvents a b c = runInSeparateTx $ getOneOffScheduledEventsTx a b c
  getCronEvents a b c d = runInSeparateTx $ getCronEventsTx a b c d
  getScheduledEventInvocations a = runInSeparateTx $ getScheduledEventInvocationsTx a
  deleteScheduledEvent a b = runInSeparateTx $ deleteScheduledEventTx a b

  insertAction a b c d = runInSeparateTx $ insertActionTx a b c d
  fetchUndeliveredActionEvents = runInSeparateTx fetchUndeliveredActionEventsTx
  setActionStatus a b = runInSeparateTx $ setActionStatusTx a b
  fetchActionResponse = runInSeparateTx . fetchActionResponseTx
  clearActionData = runInSeparateTx . clearActionDataTx
  setProcessingActionLogsToPending = runInSeparateTx . setProcessingActionLogsToPendingTx

instance MonadMetadataStorageQueryAPI AppM
--------------------------------------------------------------------------------
-- misc
-- TODO(SOLOMON): Move into `Hasura.Server.Init`. Unable to do so
-- currently due to `throwErrExit`.

-- | Parse cli arguments to graphql-engine executable.
parseArgs :: EnabledLogTypes impl => Env.Environment -> IO (HGEOptions (ServeOptions impl))
parseArgs env = do
  rawHGEOpts <- execParser opts
  let eitherOpts = runWithEnv (Env.toList env) $ mkHGEOptions rawHGEOpts
  onLeft eitherOpts $ throwErrExit InvalidEnvironmentVariableOptionsError
  where
    opts =
      info
        (helper <*> parseHgeOpts)
        ( fullDesc
            <> header "Hasura GraphQL Engine: Blazing fast, instant realtime GraphQL APIs on your DB with fine grained access control, also trigger webhooks on database events."
            <> footerDoc (Just mainCmdFooter)
        )
-- | Core logic to fork a poller thread to update the JWK based on the
-- expiry time specified in @Expires@ header or @Cache-Control@ header
updateJwkCtxThread ::
  (C.ForkableMonadIO m) =>
  IO AppContext ->
  HTTP.Manager ->
  Logger Hasura ->
  m Void
updateJwkCtxThread getAppCtx httpManager logger = forever $ do
  authMode <- liftIO $ acAuthMode <$> getAppCtx
  updateJwkCtx authMode httpManager logger
  liftIO $ sleep $ seconds 1
-- | Event triggers live in the user's DB and other events
-- (cron, one-off and async actions)
-- live in the metadata DB, so we need a way to differentiate the
-- type of shutdown action
data ShutdownAction
  = EventTriggerShutdownAction (IO ())
  | MetadataDBShutdownAction (ExceptT QErr IO ())
-- | This function acts as the entrypoint for the graphql-engine webserver.
--
-- Note: at the exit of this function, or in case of a graceful server shutdown
-- (SIGTERM, or more generally, whenever the shutdown latch is set), we need to
-- make absolutely sure that we clean up any resources which were allocated during
-- server setup. In the case of a multitenant process, failure to do so can lead to
-- resource leaks.
--
-- To track these resources, we use the ManagedT monad, and attach finalizers at
-- the same point in the code where we allocate resources. If you fork a new
-- long-lived thread, or create a connection pool, or allocate any other
-- long-lived resource, make sure to pair the allocator with its finalizer.
-- There are plenty of examples throughout the code. For example, see
-- 'C.forkManagedT'.
--
-- Note also: the order in which the finalizers run can be important. Specifically,
-- we want the finalizers for the logger threads to run last, so that we retain as
-- many "thread stopping" log messages as possible. The order in which the
-- finalizers are run is determined by the order in which they are introduced in
-- the code.
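--
-- A hedged sketch of the allocator/finalizer pairing described above
-- (assuming the 'allocate' combinator from the ManagedT library; the names
-- 'createPool' and 'destroyPool' are hypothetical placeholders):
--
-- > -- inside a ManagedT block:
-- > pool <- allocate
-- >   (liftIO createPool)            -- allocator, runs during setup
-- >   (liftIO . destroyPool)         -- finalizer, runs at exit or on shutdown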
{- HLINT ignore runHGEServer "Avoid lambda" -}
{- HLINT ignore runHGEServer "Use withAsync" -}
runHGEServer ::
  forall m impl.
  ( MonadIO m,
    MonadFix m,
    MonadMask m,
    MonadStateless IO m,
    LA.Forall (LA.Pure m),
    UserAuthentication m,
    HttpLog m,
    HasAppEnv m,
    ConsoleRenderer m,
    MonadVersionAPIWithExtraData m,
    MonadMetadataApiAuthorization m,
    MonadGQLExecutionCheck m,
    MonadConfigApiHandler m,
    MonadQueryLog m,
    MonadExecutionLog m,
    WS.MonadWSLog m,
    MonadExecuteQuery m,
    HasResourceLimits m,
    MonadMetadataStorageQueryAPI m,
    MonadResolveSource m,
    EB.MonadQueryTags m,
    MonadEventLogCleanup m,
    ProvidesHasuraServices m,
    MonadTrace m,
    MonadGetApiTimeLimit m
  ) =>
  (AppStateRef impl -> Spock.SpockT m ()) ->
  AppStateRef impl ->
  -- | start time
  UTCTime ->
  -- | A hook which can be called to indicate when the server is started successfully
  Maybe (IO ()) ->
  EKG.Store EKG.EmptyMetrics ->
  ManagedT m ()
runHGEServer setupHook appStateRef initTime startupStatusHook ekgStore = do
  AppEnv {..} <- lift askAppEnv
  waiApplication <- mkHGEServer setupHook appStateRef ekgStore

  let logger = _lsLogger appEnvLoggers

  -- `startupStatusHook`: add `Service started successfully` message to config_status
  -- table when a tenant starts up in multitenant
  let warpSettings :: Warp.Settings
      warpSettings =
        Warp.setPort (_getPort appEnvPort)
          . Warp.setHost appEnvHost
          . Warp.setGracefulShutdownTimeout (Just 30) -- 30s graceful shutdown
          . Warp.setInstallShutdownHandler shutdownHandler
          . Warp.setBeforeMainLoop (for_ startupStatusHook id)
          . setForkIOWithMetrics
          $ Warp.defaultSettings

      setForkIOWithMetrics :: Warp.Settings -> Warp.Settings
      setForkIOWithMetrics = Warp.setFork \f -> do
        void $
          C.forkIOWithUnmask
            ( \unmask ->
                bracket_
                  ( do
                      EKG.Gauge.inc (smWarpThreads appEnvServerMetrics)
                      incWarpThreads (pmConnections appEnvPrometheusMetrics)
                  )
                  ( do
                      EKG.Gauge.dec (smWarpThreads appEnvServerMetrics)
                      decWarpThreads (pmConnections appEnvPrometheusMetrics)
                  )
                  (f unmask)
            )

      shutdownHandler :: IO () -> IO ()
      shutdownHandler closeSocket =
        LA.link =<< LA.async do
          waitForShutdown appEnvShutdownLatch
          unLogger logger $ mkGenericLog @Text LevelInfo "server" "gracefully shutting down server"
          closeSocket

  finishTime <- liftIO Clock.getCurrentTime
  let apiInitTime = realToFrac $ Clock.diffUTCTime finishTime initTime
  unLogger logger $
    mkGenericLog LevelInfo "server" $
      StartupTimeInfo "starting API server" apiInitTime

  -- Here we block until the shutdown latch 'MVar' is filled, and then
  -- shut down the server. Once this blocking call returns, we'll tidy up
  -- any resources using the finalizers attached using 'ManagedT' above.
  -- Structuring things using the shutdown latch in this way lets us decide
  -- elsewhere exactly how we want to control shutdown.
  liftIO $ Warp.runSettings warpSettings waiApplication
-- | Part of a factorization of 'runHGEServer' to expose the constructed WAI
-- application for testing purposes. See 'runHGEServer' for documentation.
mkHGEServer ::
2023-03-17 13:29:07 +03:00
forall m impl .
2021-10-20 23:01:22 +03:00
( MonadIO m ,
Rewrite OpenAPI
### Description
This PR rewrites OpenAPI to be more idiomatic. Some noteworthy changes:
- we accumulate all required information during the Analyze phase, to avoid having to do a single lookup in the schema cache during the OpenAPI generation phase (we now only need the schema cache as input to run the analysis)
- we no longer build intermediary endpoint information and aggregate it, we directly build the the `PathItem` for each endpoint; additionally, that means we no longer have to assume that different methods have the same metadata
- we no longer have to first declare types, then craft references: we do everything in one step
- we now properly deal with nullability by treating "typeName" and "typeName!" as different
- we add a bunch of additional fields in the generated "schema", such as title
- we do now support enum values in both input and output positions
- checking whether the request body is required is now performed on the fly rather than by introspecting the generated schema
- the methods in the file are sorted by topic
### Controversial point
However, this PR creates some additional complexity, that we might not want to keep. The main complexity is _knot-tying_: to avoid lookups when generating the OpenAPI, it builds an actual graph of input types, which means that we need something similar to (but simpler than) `MonadSchema`, to avoid infinite recursions when analyzing the input types of a query. To do this, this PR introduces `CircularT`, a lesser `SchemaT` that aims at avoiding ever having to reinvent this particular wheel ever again.
### Remaining work
- [x] fix existing tests (they are all failing due to some of the schema changes)
- [ ] add tests to cover the new features:
- [x] tests for `CircularT`
- [ ] tests for enums in output schemas
- [x] extract / document `CircularT` if we wish to keep it
- [x] add more comments to `OpenAPI`
- [x] have a second look at `buildVariableSchema`
- [x] fix all missing diagnostics in `Analyze`
- [x] add a Changelog entry?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4654
Co-authored-by: David Overton <7734777+dmoverton@users.noreply.github.com>
GitOrigin-RevId: f4a9191f22dfcc1dccefd6a52f5c586b6ad17172
2022-06-30 15:55:56 +03:00
MonadFix m ,
2021-10-20 23:01:22 +03:00
MonadMask m ,
MonadStateless IO m ,
LA . Forall ( LA . Pure m ) ,
Rewrite `Tracing` to allow for only one `TraceT` in the entire stack.
This PR is on top of #7789.
### Description
This PR entirely rewrites the API of the Tracing library, to make `interpTraceT` a thing of the past. Before this change, we ran traces by sticking a `TraceT` on top of whatever we were doing. This had several major drawbacks:
- we were carrying a bunch of `TraceT` across the codebase, and the entire codebase had to know about it
- we needed to carry a second class constraint around (`HasReporterM`) to be able to run all of those traces
- we kept having to do stack rewriting with `interpTraceT`, which went from inconvenient to horrible
- we had to declare several behavioral instances on `TraceT m`
This PR rewrite all of `Tracing` using a more conventional model: there is ONE `TraceT` at the bottom of the stack, and there is an associated class constraint `MonadTrace`: any part of the code that happens to satisfy `MonadTrace` is able to create new traces. We NEVER have to do stack rewriting, `interpTraceT` is gone, and `TraceT` and `Reporter` become implementation details that 99% of the code is blissfully unaware of: code that needs to do tracing only needs to declare that the monad in which it operates implements `MonadTrace`.
In doing so, this PR revealed **several bugs in the codebase**: places where we were expecting to trace something, but due to the default instance of `HasReporterM IO` we would actually not do anything. This PR also splits the code of `Tracing` in more byte-sized modules, with the goal of potentially moving to `server/lib` down the line.
### Remaining work
This PR is a draft; what's left to do is:
- [x] make Pro compile; i haven't updated `HasuraPro/Main` yet
- [x] document Tracing by writing a note that explains how to use the library, and the meaning of "reporter", "trace" and "span", as well as the pitfalls
- [x] discuss some of the trade-offs in the implementation, which is why i'm opening this PR already despite it not fully building yet
- [x] it depends on #7789 being merged first
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7791
GitOrigin-RevId: cadd32d039134c93ddbf364599a2f4dd988adea8
2023-03-13 20:37:16 +03:00
    UserAuthentication m,
    HttpLog m,
Clean `AppEnv` and `AppContext` passing, remove `RunT`, reduce `ServerConfigCtx` uses
## Description
This PR does several different things that happen to overlap; the most important being:
- it removes `RunT`: it was redundant in places where we already had `Handler`, and only used in one other place, `SchemaUpdate`, for which a local `SchemaUpdateT` is more than enough;
- it reduces the number of places where we create a `ServerConfigCtx`, since now `HasServerConfigCtx` can be implemented directly by `SchemaUpdateT` and `Handler` based on the full `AppContext`;
- it drastically reduces the number of arguments we pass around in the app init code, by introducing `HasAppEnv`;
- it simplifies `HandlerCtx` to reduce duplication
In doing so, this change paves the way towards removing `ServerConfigCtx`, since there are only very few places where we construct it: we can now introduce smaller classes than `HasServerConfigCtx`, that expose only a relevant subset of fields, and implement them where we now implement `HasServerConfigCtx`.
This PR is loosely based on ideas in #8337, which are no longer applicable due to the changes introduced in #8159. A challenge of this PR was the postgres tests, which were running in `PGMetadataStorageAppT CacheBuild` :scream_cat:
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8392
GitOrigin-RevId: b90c1359066d20dbea329c87762ccdd1217b4d69
2023-03-21 13:44:21 +03:00
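The `HasAppEnv` idea described above can be sketched as follows. This is a hypothetical illustration, not the engine's actual types: instead of passing a dozen init-time values as separate function arguments, they are bundled in one record, and any monad used in the init code exposes that record through a single class method (`askAppEnv`, as used by `mkHGEServer` below). The field names here are invented for the example.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE RecordWildCards #-}

import Control.Monad.Reader

-- All init-time values bundled in one record (fields illustrative).
data AppEnv = AppEnv
  { appEnvPort :: Int,
    appEnvInstanceId :: String
  }

-- Any monad in the init code can hand out the environment.
class Monad m => HasAppEnv m where
  askAppEnv :: m AppEnv

newtype AppInitM a = AppInitM {runAppInitM :: ReaderT AppEnv IO a}
  deriving (Functor, Applicative, Monad)

instance HasAppEnv AppInitM where
  askAppEnv = AppInitM ask

-- Previously this would have taken the port and instance id as two separate
-- arguments; now the environment is ambient.
describeServer :: HasAppEnv m => m String
describeServer = do
  AppEnv {..} <- askAppEnv
  pure (appEnvInstanceId <> ":" <> show appEnvPort)
```

A `RecordWildCards` bind such as `AppEnv {..} <- askAppEnv` then brings every field into scope at once, which is exactly the pattern the server code below relies on.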
    HasAppEnv m,
    ConsoleRenderer m,
    MonadVersionAPIWithExtraData m,
    MonadMetadataApiAuthorization m,
    MonadGQLExecutionCheck m,
    MonadConfigApiHandler m,
    MonadQueryLog m,
    MonadExecutionLog m,
    WS.MonadWSLog m,
    MonadExecuteQuery m,
    HasResourceLimits m,
    MonadMetadataStorageQueryAPI m,
    MonadResolveSource m,
    EB.MonadQueryTags m,
harmonize network manager handling
## Description
### I want to speak to the `Manager`
Oh boy. This PR is both fairly straightforward and overreaching, so let's break it down.
For most network access, we need a [`HTTP.Manager`](https://hackage.haskell.org/package/http-client-0.1.0.0/docs/Network-HTTP-Client-Manager.html). It is created only once, at the top level, when starting the engine, and is then threaded through the application to wherever we need to make a network call. As of main, the way we do this is not standardized: most of the GraphQL execution code passes it "manually" as a function argument throughout the code. We also have a custom monad constraint, `HasHttpManagerM`, that describes a monad's ability to provide a manager. And, finally, several parts of the code store the manager in some kind of argument structure, such as `RunT`'s `RunCtx`.
This PR's first goal is to harmonize all of this: we always create the manager at the root, and we already have it when we do our very first `runReaderT`. Wouldn't it make sense for the rest of the code to not manually pass it anywhere, to not store it anywhere, but to always rely on the current monad providing it? This is, in short, what this PR does: it implements a constraint on the base monads, so that they provide the manager, and removes most explicit passing from the code.
### First come, first served
One way this PR goes a tiny bit further than "just" doing the aforementioned harmonization is that it starts the process of implementing the "services-oriented architecture" roughly outlined in this [draft document](https://docs.google.com/document/d/1FAigqrST0juU1WcT4HIxJxe1iEBwTuBZodTaeUvsKqQ/edit?usp=sharing). Instead of using the existing `HasHTTPManagerM`, this PR revamps it into the `ProvidesNetwork` service.
The idea is, again, that we should make all "external" dependencies of the engine, all things that the core of the engine doesn't care about, a "service". This allows us to define clear APIs for features, to choose different implementations based on which version of the engine we're running, and to harmonize our many scattered monadic constraints... which is why this service is called "Network": we can refine it, moving forward, to be the constraint that defines how all network communication is to operate, instead of relying on disparate class constraints or hardcoded decisions. A comment in the code clarifies this intent.
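The service pattern described above can be sketched like this. `ProvidesNetwork` is the PR's name for the service; the method name, the `Manager` stand-in, and the `AppM` shape below are illustrative assumptions, not the engine's actual definitions. The manager is created once at the root, stored in the base environment, and everything else asks for it through a constraint instead of threading it by hand.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Reader

-- Stand-in for the real HTTP.Manager from http-client.
data Manager = Manager {managerName :: String}

-- The "Network" service: the monad's ability to provide a manager.
class Monad m => ProvidesNetwork m where
  askHTTPManager :: m Manager

-- The base monad holds the manager created once at startup.
newtype AppM a = AppM {runAppM :: ReaderT Manager IO a}
  deriving (Functor, Applicative, Monad)

instance ProvidesNetwork AppM where
  askHTTPManager = AppM ask

-- A "network call" deep in the code no longer takes a Manager argument.
callRemote :: ProvidesNetwork m => m String
callRemote = managerName <$> askHTTPManager
```

Swapping implementations per engine version then amounts to writing a different `ProvidesNetwork` instance for a different base monad, with `callRemote` untouched.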
### Side-effects? In my Haskell?
This PR also unavoidably touches some other aspects of the codebase. One such example: it introduces `Hasura.App.AppContext`, named after `HasuraPro.Context.AppContext`: a name for the reader structure at the base level. It also transforms `Handler` from a type alias into a newtype, as `Handler` is where we actually enforce HTTP limits; without `Handler` being a distinct type, any code path could simply do a `runExceptT $ runReader` and forget to enforce them.
(As a rule of thumb, I am starting to consider any straggling `runReaderT` or `runExceptT` as a code smell: we should not stack / unstack monads haphazardly, and every layer should be an opaque `newtype` with a corresponding run function.)
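The "opaque newtype with a run function" rule of thumb can be illustrated as follows. All names here are invented for the example, not the engine's actual `Handler`: the newtype hides its `ReaderT`/`ExceptT` internals, and because the constructor is the only way in, the run function is the only way out, so the limit check cannot be skipped by an ad-hoc `runExceptT $ runReaderT`.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Except
import Control.Monad.Reader

-- Illustrative context carrying a limit to enforce.
data HandlerCtx = HandlerCtx {hcRequestLimit :: Int}

-- Opaque layer: callers cannot peel off the ReaderT/ExceptT by hand.
newtype Handler a = Handler (ReaderT HandlerCtx (ExceptT String IO) a)
  deriving (Functor, Applicative, Monad)

-- The ONLY way to run a Handler: the limit is enforced here, unconditionally.
runHandler :: HandlerCtx -> Int -> Handler a -> IO (Either String a)
runHandler ctx requestSize (Handler h)
  | requestSize > hcRequestLimit ctx = pure (Left "request too large")
  | otherwise = runExceptT (runReaderT h ctx)
```

With a type alias instead of the newtype, nothing would stop a code path from unstacking the transformers directly and bypassing the guard in `runHandler`.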
## Further work
In several places, I have left TODOs where I encountered things that suggest we should do further, unrelated cleanups. I'll write down the follow-up steps, either in the aforementioned document or on Slack. But, in short, at a glance, in approximate order, we could:
- delete `ExecutionCtx`, as it is only a subset of `ServerCtx`, and remove one more `runReaderT` call
- delete `ServerConfigCtx`, as it is only a subset of `ServerCtx`, and remove it from `RunCtx`
- remove `ServerCtx` from `HandlerCtx`, and make it part of `AppContext`, or even make it the `AppContext` altogether (since, at least for the OSS version, `AppContext` is, again, only a subset)
- remove `CacheBuildParams` and `CacheBuild` altogether, as they're just a distinct stack that is a `ReaderT` on top of `IO` that contains, you guessed it, the same thing as `ServerCtx`
- move `RunT` out of `RQL.Types` and rename it, since after the previous cleanups **it only contains `UserInfo`**; it could be bundled with the authentication service, made a small implementation detail in `Hasura.Server.Auth`
- rename `PGMetadataStorageT` to something a bit more accurate, such as `App`, and enforce its IO base
This would significantly simplify our complex stack. From there, or in parallel, we can start moving existing dependencies to services. For the purpose of supporting the read replicas entitlement, we could move `MonadResolveSource` to a `SourceResolver` service, as attempted in #7653, and transform `UserAuthenticationM` into an `Authentication` service.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7736
GitOrigin-RevId: 68cce710eb9e7d752bda1ba0c49541d24df8209f
2023-02-22 18:53:52 +03:00
    MonadEventLogCleanup m,
    ProvidesHasuraServices m,
    MonadTrace m,
    MonadGetApiTimeLimit m
  ) =>
  (AppStateRef impl -> Spock.SpockT m ()) ->
  AppStateRef impl ->
  EKG.Store EKG.EmptyMetrics ->
  ManagedT m Application
mkHGEServer setupHook appStateRef ekgStore = do
  -- Comment this to enable expensive assertions from "GHC.AssertNF". These
  -- will log lines to STDOUT containing "not in normal form". In the future we
  -- could try to integrate this into our tests. For now this is a development
  -- tool.
  --
  -- NOTE: be sure to compile WITHOUT code coverage, for this to work properly.
  liftIO disableAssertNF

  AppEnv {..} <- lift askAppEnv
  let Loggers loggerCtx logger _ = appEnvLoggers
  wsServerEnv <-
    WS.createWSServerEnv
      (_lsLogger appEnvLoggers)
      appEnvSubscriptionState
      appStateRef
      appEnvManager
      appEnvEnableReadOnlyMode
      appEnvWebSocketKeepAlive
      appEnvServerMetrics
      appEnvPrometheusMetrics
      appEnvTraceSamplingPolicy

  HasuraApp app actionSubState stopWsServer <-
    lift $
      flip onException (flushLogger loggerCtx) $
        mkWaiApp
          setupHook
          appStateRef
          ekgStore
          wsServerEnv
  -- Log Warning if deprecated environment variables are used
  sources <- scSources <$> liftIO (getSchemaCache appStateRef)
  -- TODO: naveen: send IO to logDeprecatedEnvVars
  AppContext {..} <- liftIO $ getAppContext appStateRef
  liftIO $ logDeprecatedEnvVars logger acEnvironment sources

  -- log inconsistent schema objects
  inconsObjs <- scInconsistentObjs <$> liftIO (getSchemaCache appStateRef)
  liftIO $ logInconsistentMetadata logger inconsObjs

  -- NOTE: `newLogTVar` is being used to make sure that the metadata logger runs only once
  -- while logging errors or any `inconsistent_metadata` logs.
  newLogTVar <- liftIO $ STM.newTVarIO False
  -- Start a background thread for processing schema sync events present in the '_sscSyncEventRef'
  _ <- startSchemaSyncProcessorThread appStateRef newLogTVar

  case appEnvEventingMode of
    EventingEnabled -> do
      startEventTriggerPollerThread logger appEnvLockedEventsCtx
      startAsyncActionsPollerThread logger appEnvLockedEventsCtx actionSubState
      -- Create logger for logging the statistics of fetched cron triggers
      fetchedCronTriggerStatsLogger <-
        allocate
          (createFetchedCronTriggerStatsLogger logger)
          (closeFetchedCronTriggersStatsLogger logger)

      -- start a background thread to create new cron events
      _cronEventsThread <-
        C.forkManagedT "runCronEventsGenerator" logger $
          runCronEventsGenerator logger fetchedCronTriggerStatsLogger (getSchemaCache appStateRef)

      startScheduledEventsPollerThread logger appEnvLockedEventsCtx
    EventingDisabled ->
      unLogger logger $ mkGenericLog @Text LevelInfo "server" "starting in eventing disabled mode"
  -- start a background thread to check for updates
  _updateThread <-
    C.forkManagedT "checkForUpdates" logger $
      liftIO $
        checkForUpdates loggerCtx appEnvManager

  -- Start a background thread for source pings
  _sourcePingPoller <-
    C.forkManagedT "sourcePingPoller" logger $ do
      let pingLog =
            unLogger logger . mkGenericLog @String LevelInfo "sources-ping"
      liftIO
        ( runPingSources
            acEnvironment
            pingLog
            (scSourcePingConfig <$> getSchemaCache appStateRef)
        )
  -- start a background thread for telemetry
  _telemetryThread <-
    if isTelemetryEnabled acEnableTelemetry
      then do
        lift . unLogger logger $ mkGenericLog @Text LevelInfo "telemetry" telemetryNotice

        dbUid <-
          getMetadataDbUid `onLeftM` throwErrJExit DatabaseMigrationError
        pgVersion <-
          liftIO (runExceptT $ PG.runTx appEnvMetadataDbPool (PG.ReadCommitted, Nothing) $ getPgVersion)
            `onLeftM` throwErrJExit DatabaseMigrationError

        telemetryThread <-
          C.forkManagedT "runTelemetry" logger $
            liftIO $
              runTelemetry logger appEnvManager (getSchemaCache appStateRef) dbUid appEnvInstanceId pgVersion acExperimentalFeatures

        return $ Just telemetryThread
      else return Nothing
  -- forking a dedicated polling thread to dynamically get the latest JWK settings
  -- set by the user and update the JWK accordingly. This will help in applying the
  -- updates without restarting HGE.
  _ <-
    C.forkManagedT "update JWK" logger $
      updateJwkCtxThread (getAppContext appStateRef) appEnvManager logger

  -- These cleanup actions are not directly associated with any
  -- resource, but we still need to make sure we clean them up here.
  allocate_ (pure ()) (liftIO stopWsServer)

  pure app
  where
    isRetryRequired _ resp = do
      return $ case resp of
        Right _ -> False
        Left err -> qeCode err == ConcurrentUpdate

    prepareScheduledEvents (Logger logger) = do
      liftIO $ logger $ mkGenericLog @Text LevelInfo "scheduled_triggers" "preparing data"
      res <- Retry.retrying Retry.retryPolicyDefault isRetryRequired (return unlockAllLockedScheduledEvents)
      onLeft res (\err -> logger $ mkGenericLog @String LevelError "scheduled_triggers" (show $ qeError err))
    getProcessingScheduledEventsCount :: LockedEventsCtx -> IO Int
    getProcessingScheduledEventsCount LockedEventsCtx {..} = do
      processingCronEvents <- readTVarIO leCronEvents
      processingOneOffEvents <- readTVarIO leOneOffEvents
      return $ length processingOneOffEvents + length processingCronEvents
    shutdownEventTriggerEvents ::
      [BackendSourceInfo] ->
      Logger Hasura ->
      LockedEventsCtx ->
      IO ()
    shutdownEventTriggerEvents sources (Logger logger) LockedEventsCtx {..} = do
      -- TODO: is this correct?
      -- event triggers should be tied to the life cycle of a source
      lockedEvents <- readTVarIO leEvents
      forM_ sources $ \backendSourceInfo -> do
        AB.dispatchAnyBackend @BackendEventTrigger backendSourceInfo \(SourceInfo sourceName _ _ _ _ sourceConfig _ _ :: SourceInfo b) -> do
          let sourceNameText = sourceNameToText sourceName
          logger $ mkGenericLog LevelInfo "event_triggers" $ "unlocking events of source: " <> sourceNameText
          for_ (HM.lookup sourceName lockedEvents) $ \sourceLockedEvents -> do
            -- No need to execute unlockEventsTx when events are not present
            for_ (NE.nonEmptySet sourceLockedEvents) $ \nonEmptyLockedEvents -> do
              res <- Retry.retrying Retry.retryPolicyDefault isRetryRequired (return $ unlockEventsInSource @b sourceConfig nonEmptyLockedEvents)
              case res of
                Left err ->
                  logger $
                    mkGenericLog LevelWarn "event_trigger" $
                      "Error while unlocking event trigger events of source: " <> sourceNameText <> " error: " <> showQErr err
                Right count ->
                  logger $
                    mkGenericLog LevelInfo "event_trigger" $
                      tshow count <> " events of source " <> sourceNameText <> " were successfully unlocked"
    shutdownAsyncActions ::
      LockedEventsCtx ->
      ExceptT QErr m ()
    shutdownAsyncActions lockedEventsCtx = do
      lockedActionEvents <- liftIO $ readTVarIO $ leActionEvents lockedEventsCtx
      liftEitherM $ setProcessingActionLogsToPending (LockedActionIdArray $ toList lockedActionEvents)
    -- This helper function does a couple of things:
    --
    -- 1. When the value of the `graceful-shutdown-timeout` > 0, we poll the
    --    in-flight events queue we maintain, using `processingEventsCountAction`
    --    to get the number of in-flight processing events: in the case of
    --    actions, the actions which are in 'processing' state; in the case of
    --    scheduled events, the events which are in 'locked' state. The
    --    in-flight events queue is polled every 5 seconds until either the
    --    graceful shutdown time is exhausted or the number of in-flight
    --    processing events is 0.
    -- 2. After step 1, we unlock all the events which the current
    --    graphql-engine instance attempted to process and which are still in
    --    the processing state. For actions, this means setting their status to
    --    'pending'; for scheduled events, the status is set to 'unlocked'.
    waitForProcessingAction ::
      Logger Hasura ->
      String ->
      IO Int ->
      ShutdownAction ->
      Seconds ->
      IO ()
    waitForProcessingAction l@(Logger logger) actionType processingEventsCountAction' shutdownAction maxTimeout
      | maxTimeout <= 0 = do
          case shutdownAction of
            EventTriggerShutdownAction userDBShutdownAction -> userDBShutdownAction
            MetadataDBShutdownAction metadataDBShutdownAction ->
              runExceptT metadataDBShutdownAction >>= \case
                Left err ->
                  logger $
                    mkGenericLog LevelWarn (T.pack actionType) $
                      "Error while unlocking the processing "
                        <> tshow actionType
                        <> " err - "
                        <> showQErr err
                Right () -> pure ()
      | otherwise = do
          processingEventsCount <- processingEventsCountAction'
          if processingEventsCount == 0
            then
              logger $
                mkGenericLog @Text LevelInfo (T.pack actionType) $
                  "All in-flight events have finished processing"
            else do
              C.sleep 5 -- sleep for 5 seconds and then repeat
              waitForProcessingAction l actionType processingEventsCountAction' shutdownAction (maxTimeout - Seconds 5)
    startEventTriggerPollerThread logger lockedEventsCtx = do
      AppEnv {..} <- lift askAppEnv
      schemaCache <- liftIO $ getSchemaCache appStateRef
      let allSources = HM.elems $ scSources schemaCache
      activeEventProcessingThreads <- liftIO $ newTVarIO 0
      -- Initialise the event processing thread
      let eventsGracefulShutdownAction =
            waitForProcessingAction
              logger
              "event_triggers"
              (length <$> readTVarIO (leEvents lockedEventsCtx))
              (EventTriggerShutdownAction (shutdownEventTriggerEvents allSources logger lockedEventsCtx))
              (unrefine appEnvGracefulShutdownTimeout)

      -- Create logger for logging the statistics of events fetched
      fetchedEventsStatsLogger <-
        allocate
          (createFetchedEventsStatsLogger logger)
          (closeFetchedEventsStatsLogger logger)

      unLogger logger $ mkGenericLog @Text LevelInfo "event_triggers" "starting workers"
      void
        $ C.forkManagedTWithGracefulShutdown
          "processEventQueue"
          logger
          (C.ThreadShutdown (liftIO eventsGracefulShutdownAction))
        $ processEventQueue
          logger
          fetchedEventsStatsLogger
          appEnvManager
          (getSchemaCache appStateRef)
          (acEventEngineCtx <$> getAppContext appStateRef)
          activeEventProcessingThreads
          lockedEventsCtx
          appEnvServerMetrics
          (pmEventTriggerMetrics appEnvPrometheusMetrics)
          appEnvEnableMaintenanceMode
    startAsyncActionsPollerThread logger lockedEventsCtx actionSubState = do
      AppEnv {..} <- lift askAppEnv
      AppContext {..} <- liftIO $ getAppContext appStateRef

      -- start a background thread to handle async actions
      case acAsyncActionsFetchInterval of
        Skip -> pure () -- Don't start the poller thread
        Interval (unrefine -> sleepTime) -> do
          let label = "asyncActionsProcessor"
              asyncActionGracefulShutdownAction =
                ( liftWithStateless \lowerIO ->
                    ( waitForProcessingAction
                        logger
                        "async_actions"
                        (length <$> readTVarIO (leActionEvents lockedEventsCtx))
                        (MetadataDBShutdownAction (hoist lowerIO (shutdownAsyncActions lockedEventsCtx)))
                        (unrefine appEnvGracefulShutdownTimeout)
                    )
                )
          void
            $ C.forkManagedTWithGracefulShutdown
              label
              logger
              (C.ThreadShutdown asyncActionGracefulShutdownAction)
            $ asyncActionsProcessor
              -- TODO: puru: send IO hook for acEnvironment
              acEnvironment
              logger
              (getSchemaCache appStateRef)
              (leActionEvents lockedEventsCtx)
              appEnvPrometheusMetrics
              sleepTime
              Nothing

      -- start a background thread to handle async action live queries
      void $
        C.forkManagedT "asyncActionSubscriptionsProcessor" logger $
          asyncActionSubscriptionsProcessor actionSubState
    startScheduledEventsPollerThread logger lockedEventsCtx = do
      AppEnv {..} <- lift askAppEnv
      AppContext {..} <- liftIO $ getAppContext appStateRef

      -- prepare scheduled triggers
      lift $ prepareScheduledEvents logger

      -- Create logger for logging the statistics of scheduled events fetched
      scheduledEventsStatsLogger <-
        allocate
          (createFetchedScheduledEventsStatsLogger logger)
          (closeFetchedScheduledEventsStatsLogger logger)

      -- start a background thread to deliver the scheduled events
      let scheduledEventsGracefulShutdownAction =
            ( liftWithStateless \lowerIO ->
                ( waitForProcessingAction
                    logger
                    "scheduled_events"
                    (getProcessingScheduledEventsCount lockedEventsCtx)
                    (MetadataDBShutdownAction (liftEitherM $ hoist lowerIO unlockAllLockedScheduledEvents))
                    (unrefine appEnvGracefulShutdownTimeout)
                )
            )
      void
        $ C.forkManagedTWithGracefulShutdown
          "processScheduledTriggers"
          logger
          (C.ThreadShutdown scheduledEventsGracefulShutdownAction)
        $ processScheduledTriggers
          -- TODO: puru: send IO hook for acEnvironment
          acEnvironment
          logger
          scheduledEventsStatsLogger
          appEnvManager
          (pmScheduledTriggerMetrics appEnvPrometheusMetrics)
          (getSchemaCache appStateRef)
          lockedEventsCtx
Import `pg-client-hs` as `PG`
Result of executing the following commands:
```shell
# replace "as Q" imports with "as PG" (in retrospect this didn't need a regex)
git grep -lE 'as Q($|[^a-zA-Z])' -- '*.hs' | xargs sed -i -E 's/as Q($|[^a-zA-Z])/as PG\1/'
# replace " Q." with " PG."
git grep -lE ' Q\.' -- '*.hs' | xargs sed -i 's/ Q\./ PG./g'
# replace "(Q." with "(PG."
git grep -lE '\(Q\.' -- '*.hs' | xargs sed -i 's/(Q\./(PG./g'
# ditto, but for [, |, { and !
git grep -lE '\[Q\.' -- '*.hs' | xargs sed -i 's/\[Q\./\[PG./g'
git grep -l '|Q\.' -- '*.hs' | xargs sed -i 's/|Q\./|PG./g'
git grep -l '{Q\.' -- '*.hs' | xargs sed -i 's/{Q\./{PG./g'
git grep -l '!Q\.' -- '*.hs' | xargs sed -i 's/!Q\./!PG./g'
```
(Doing the `grep -l` before the `sed`, instead of running `sed` on the entire codebase, reduces the number of `mtime` updates, and so reduces how many times a file gets recompiled while checking intermediate results.)
Finally, I manually removed a broken and unused `Arbitrary` instance in `Hasura.RQL.Network`. (It used an `import Test.QuickCheck.Arbitrary as Q` statement, which was erroneously caught by the first find-replace command.)
After this PR, `Q` is no longer used as an import qualifier. That was not the goal of this PR, but perhaps it's a useful fact for future efforts.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5933
GitOrigin-RevId: 8c84c59d57789111d40f5d3322c5a885dcfbf40e
2022-09-20 22:54:43 +03:00
    runInSeparateTx ::
      PG.TxE QErr a ->
      AppM (Either QErr a)
    runInSeparateTx tx = do
      pool <- asks appEnvMetadataDbPool
      liftIO $ runExceptT $ PG.runTx pool (PG.RepeatableRead, Nothing) tx
    notifySchemaCacheSyncTx :: MetadataResourceVersion -> InstanceId -> CacheInvalidations -> PG.TxE QErr ()
    notifySchemaCacheSyncTx (MetadataResourceVersion resourceVersion) instanceId invalidations = do
      PG.Discard () <-
        PG.withQE
          defaultTxErrorHandler
      [PG.sql|
        INSERT INTO hdb_catalog.hdb_schema_notifications
          (id, notification, resource_version, instance_id)
        VALUES (1, $1::json, $2, $3::uuid)
        ON CONFLICT (id) DO UPDATE SET
          notification = $1::json,
          resource_version = $2,
          instance_id = $3::uuid
      |]
      (PG.ViaJSON invalidations, resourceVersion, instanceId)
      True
  pure ()
getCatalogStateTx :: PG.TxE QErr CatalogState
getCatalogStateTx =
  mkCatalogState . PG.getRow
    <$> PG.withQE
defaultTxErrorHandler
      [PG.sql|
        SELECT hasura_uuid::text, cli_state::json, console_state::json
        FROM hdb_catalog.hdb_version
      |]
      ()
      False
  where
    mkCatalogState (dbId, PG.ViaJSON cliState, PG.ViaJSON consoleState) =
CatalogState dbId cliState consoleState
setCatalogStateTx :: CatalogStateType -> A.Value -> PG.TxE QErr ()
setCatalogStateTx stateTy stateValue =
case stateTy of
CSTCli ->
      PG.unitQE
defaultTxErrorHandler
        [PG.sql|
          UPDATE hdb_catalog.hdb_version
          SET cli_state = $1
        |]
        (Identity $ PG.ViaJSON stateValue)
        False
CSTConsole ->
      PG.unitQE
defaultTxErrorHandler
        [PG.sql|
          UPDATE hdb_catalog.hdb_version
          SET console_state = $1
        |]
        (Identity $ PG.ViaJSON stateValue)
        False
--- helper functions ---
mkConsoleHTML :: Text -> AuthMode -> TelemetryStatus -> Maybe Text -> Maybe Text -> Either String Text
mkConsoleHTML path authMode enableTelemetry consoleAssetsDir consoleSentryDsn =
renderHtmlTemplate consoleTmplt $
-- variables required to render the template
    A.object
      [ "isAdminSecretSet" A..= isAdminSecretSet authMode,
        "consolePath" A..= consolePath,
        "enableTelemetry" A..= boolToText (isTelemetryEnabled enableTelemetry),
        "cdnAssets" A..= boolToText (isNothing consoleAssetsDir),
        "consoleSentryDsn" A..= fromMaybe "" consoleSentryDsn,
        "assetsVersion" A..= consoleAssetsVersion,
        "serverVersion" A..= currentVersion
      ]
where
    consolePath = case path of
      "" -> "/console"
      r -> "/console/" <> r
    consoleTmplt = $(makeRelativeToProject "src-rsr/console.html" >>= M.embedSingleTemplate)
telemetryNotice :: Text
telemetryNotice =
  "Help us improve Hasura! The graphql-engine server collects anonymized "
    <> "usage stats which allows us to keep improving Hasura at warp speed. "
    <> "To read more or opt-out, visit https://hasura.io/docs/latest/graphql/core/guides/telemetry.html"
Clean metadata arguments
## Description
Thanks to #1664, the Metadata API types no longer require a `ToJSON` instance. This PR follows up with a cleanup of the types of the arguments to the metadata API:
- whenever possible, it moves those argument types to where they're used (RQL.DDL.*)
- it removes all unrequired instances (mostly `ToJSON`)
This PR does not attempt to do it for _all_ such argument types. For some of the metadata operations, the type used to describe the argument to the API and used to represent the value in the metadata are one and the same (like for `CreateEndpoint`). Sometimes, the two types are intertwined in complex ways (`RemoteRelationship` and `RemoteRelationshipDef`). In the spirit of only doing uncontroversial cleaning work, this PR only moves types that are not used outside of RQL.DDL.
Furthermore, this is a small step towards separating the different types all jumbled together in RQL.Types.
## Notes
This PR also improves several `FromJSON` instances to make use of `withObject`, and to use a human readable string instead of a type name in error messages whenever possible. For instance:
- before: `expected Object for Object, but encountered X`
after: `expected Object for add computed field, but encountered X`
- before: `Expecting an object for update query`
after: `expected Object for update query, but encountered X`
This PR also renames `CreateFunctionPermission` to `FunctionPermissionArgument`, to remove the quite surprising `type DropFunctionPermission = CreateFunctionPermission`.
This PR also deletes some dead code, mostly in RQL.DML.
This PR also moves a PG-specific source resolving function from DDL.Schema.Source to the only place where it is used: App.hs.
https://github.com/hasura/graphql-engine-mono/pull/1844
GitOrigin-RevId: a594521194bb7fe6a111b02a9e099896f9fed59c
2021-07-27 13:41:42 +03:00
mkPgSourceResolver :: PG.PGLogger -> SourceResolver ('Postgres 'Vanilla)
mkPgSourceResolver pgLogger env _ config = runExceptT do
let PostgresSourceConnInfo urlConf poolSettings allowPrepare isoLevel _ = _pccConnectionInfo config
-- If the user does not provide values for the pool settings, then use the default values
  let (maxConns, idleTimeout, retries) = getDefaultPGPoolSettingIfNotExists poolSettings defaultPostgresPoolSettings
urlText <- resolveUrlConf env urlConf
  let connInfo = PG.ConnInfo retries $ PG.CDDatabaseURI $ txtToBs urlText
connParams =
        PG.defaultConnParams
          { PG.cpIdleTime = idleTimeout,
            PG.cpConns = maxConns,
            PG.cpAllowPrepare = allowPrepare,
            PG.cpMbLifetime = _ppsConnectionLifetime =<< poolSettings,
            PG.cpTimeout = _ppsPoolTimeout =<< poolSettings
}
  pgPool <- liftIO $ PG.initPGPool connInfo connParams pgLogger
let pgExecCtx = mkPGExecCtx isoLevel pgPool NeverResizePool
  pure $ PGSourceConfig pgExecCtx connInfo Nothing mempty (_pccExtensionsSchema config) mempty ConnTemplate_NotApplicable
mkMSSQLSourceResolver :: SourceResolver ('MSSQL)
mkMSSQLSourceResolver env _name (MSSQLConnConfiguration connInfo _) = runExceptT do
  let MSSQLConnectionInfo iConnString MSSQLPoolSettings {..} = connInfo
      connOptions =
        MSPool.ConnectionOptions
          { _coConnections = fromMaybe defaultMSSQLMaxConnections _mpsMaxConnections,
            _coStripes = 1,
            _coIdleTime = _mpsIdleTimeout
          }
  (connString, mssqlPool) <- createMSSQLPool iConnString connOptions env
let mssqlExecCtx = mkMSSQLExecCtx mssqlPool NeverResizePool
pure $ MSSQLSourceConfig connString mssqlExecCtx