{-# LANGUAGE QuasiQuotes #-}
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE ViewPatterns #-}

harmonize network manager handling
## Description
### I want to speak to the `Manager`
Oh boy. This PR is both fairly straightforward and overreaching, so let's break it down.
For most network access, we need a [`HTTP.Manager`](https://hackage.haskell.org/package/http-client-0.1.0.0/docs/Network-HTTP-Client-Manager.html). It is created only once, at the top level, when starting the engine, and is then threaded through the application to wherever we need to make a network call. As of main, the way we do this is not standardized: most of the GraphQL execution code passes it "manually" as a function argument throughout the code. We also have a custom monad constraint, `HasHttpManagerM`, that describes a monad's ability to provide a manager. And, finally, several parts of the code store the manager in some kind of argument structure, such as `RunT`'s `RunCtx`.
This PR's first goal is to harmonize all of this: we always create the manager at the root, and we already have it when we do our very first `runReaderT`. Wouldn't it make sense for the rest of the code to not manually pass it anywhere, to not store it anywhere, but to always rely on the current monad providing it? This is, in short, what this PR does: it implements a constraint on the base monads, so that they provide the manager, and removes most explicit passing from the code.
### First come, first served
One way this PR goes a tiny bit further than "just" doing the aforementioned harmonization is that it starts the process of implementing the "Services oriented architecture" roughly outlined in this [draft document](https://docs.google.com/document/d/1FAigqrST0juU1WcT4HIxJxe1iEBwTuBZodTaeUvsKqQ/edit?usp=sharing). Instead of using the existing `HasHTTPManagerM`, this PR revamps it into the `ProvidesNetwork` service.
The idea is, again, that we should make all "external" dependencies of the engine, all things that the core of the engine doesn't care about, a "service". This allows us to define clear APIs for features, to choose different implementations based on which version of the engine we're running, and to harmonize our many scattered monadic constraints... This is why this service is called "Network": we can refine it, moving forward, to be the constraint that defines how all network communication is to operate, instead of relying on disparate class constraints or hardcoded decisions. A comment in the code clarifies this intent.
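To make that concrete, here is a minimal sketch of what such a service class amounts to (the real definition lives in `Hasura.Services` and may differ; only the `askHTTPManager` method name is taken from the code in this PR, the rest is illustrative):

```haskell
-- A sketch of the service class: the only capability it exposes is
-- access to the process-wide HTTP manager.
class Monad m => ProvidesNetwork m where
  askHTTPManager :: m HTTP.Manager
```

Code that needs network access then asks for a `ProvidesNetwork m` constraint instead of threading an explicit `HTTP.Manager` argument.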
### Side-effects? In my Haskell?
This PR also unavoidably touches some other aspects of the codebase. One such example: it introduces `Hasura.App.AppContext`, named after `HasuraPro.Context.AppContext`: a name for the reader structure at the base level. It also transforms `Handler` from a type alias to a newtype, as `Handler` is where we actually enforce HTTP limits; but without `Handler` being a distinct type, any code path could simply do a `runExceptT $ runReader` and forget to enforce them.
(As a rule of thumb, I am starting to consider any straggling `runReaderT` or `runExceptT` as a code smell: we should not stack / unstack monads haphazardly, and every layer should be an opaque `newtype` with a corresponding run function.)
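As an illustration of that rule of thumb, here is a hypothetical sketch of the shape this gives `Handler` (the context and error types are stand-ins, not the PR's exact definition):

```haskell
-- With the constructor kept private, the only way to discharge a
-- Handler is through a run function, which is where limits are applied.
newtype Handler m a = Handler (ReaderT HandlerCtx (ExceptT QErr m) a)

runHandler :: HandlerCtx -> Handler m a -> m (Either QErr a)
runHandler ctx (Handler h) =
  -- enforce HTTP limits here, in exactly one place
  runExceptT (runReaderT h ctx)
```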
## Further work
In several places, I have left TODOs where I encountered things that suggest we should do further unrelated cleanups. I'll write down the follow-up steps, either in the aforementioned document or on Slack. But, in short, at a glance, in approximate order, we could:
- delete `ExecutionCtx` as it is only a subset of `ServerCtx`, and remove one more `runReaderT` call
- delete `ServerConfigCtx` as it is only a subset of `ServerCtx`, and remove it from `RunCtx`
- remove `ServerCtx` from `HandlerCtx`, and make it part of `AppContext`, or even make it the `AppContext` altogether (since, at least for the OSS version, `AppContext` is, here again, only a subset)
- remove `CacheBuildParams` and `CacheBuild` altogether, as they're just a distinct stack that is a `ReaderT` on top of `IO` that contains, you guessed it, the same thing as `ServerCtx`
- move `RunT` out of `RQL.Types` and rename it, since after the previous cleanups **it only contains `UserInfo`**; it could be bundled with the authentication service, made a small implementation detail in `Hasura.Server.Auth`
- rename `PGMetadataStorageT` to something a bit more accurate, such as `App`, and enforce its IO base (see the sketch after this list)
This would significantly simplify our complex stack. From there, or in parallel, we can start moving existing dependencies to services. For the purpose of supporting read replicas entitlement, we could move `MonadResolveSource` to a `SourceResolver` service, as attempted in #7653, and transform `UserAuthenticationM` into an `Authentication` service.
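For the last bullet, the end state could hypothetically look like the following (a sketch of the suggested rename, not code in this PR):

```haskell
-- Hypothetical target shape: one opaque base monad, with an enforced
-- IO base and a single run function.
newtype App a = App (ReaderT AppContext IO a)
  deriving newtype (Functor, Applicative, Monad, MonadIO)

runApp :: AppContext -> App a -> IO a
runApp ctx (App m) = runReaderT m ctx
```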
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7736
GitOrigin-RevId: 68cce710eb9e7d752bda1ba0c49541d24df8209f
-- | Defines the CE version of the engine.
--
-- This module contains everything that is required to run the community edition
-- of the engine: the base application monad and the implementation of all its
-- behaviour classes.
module Hasura.App
  ( ExitCode (AuthConfigurationError, DatabaseMigrationError, DowngradeProcessError, MetadataCleanError, MetadataExportError, SchemaCacheInitError),
    ExitException (ExitException),
    GlobalCtx (..),
    AppContext (..),
    PGMetadataStorageAppT (runPGMetadataStorageAppT),
    accessDeniedErrMsg,
    flushLogger,
    getCatalogStateTx,
    initGlobalCtx,
    initAuthMode,
    initialiseAppContext,
    initialiseContext,
    initSubscriptionsState,
    initLockedEventsCtx,
    initSQLGenCtx,
    mkResponseInternalErrorsConfig,
    migrateCatalogSchema,
    mkLoggers,
    mkPGLogger,
    notifySchemaCacheSyncTx,
    parseArgs,
    throwErrExit,
    throwErrJExit,
    printJSON,
    printYaml,
    readTlsAllowlist,
    resolvePostgresConnInfo,
    runHGEServer,
    setCatalogStateTx,

    -- * Exported for testing
    mkHGEServer,
    mkPgSourceResolver,
    mkMSSQLSourceResolver,
  )
where

import Control.Concurrent.Async.Lifted.Safe qualified as LA
import Control.Concurrent.Extended qualified as C
import Control.Concurrent.STM qualified as STM
import Control.Concurrent.STM.TVar (readTVarIO)
import Control.Exception (bracket_, throwIO)
import Control.Monad.Catch
  ( Exception,
    MonadCatch,
    MonadMask,
    MonadThrow,
    onException,
  )
import Control.Monad.Morph (hoist)
import Control.Monad.STM (atomically)
import Control.Monad.Stateless
import Control.Monad.Trans.Control (MonadBaseControl (..))
import Control.Monad.Trans.Managed (ManagedT (..), allocate, allocate_)
import Control.Retry qualified as Retry
import Data.Aeson qualified as A
import Data.ByteString.Char8 qualified as BC
import Data.ByteString.Lazy qualified as BL
import Data.ByteString.Lazy.Char8 qualified as BLC
import Data.Environment qualified as Env
import Data.FileEmbed (makeRelativeToProject)
import Data.HashMap.Strict qualified as HM
import Data.Set.NonEmpty qualified as NE
import Data.Text qualified as T
import Data.Time.Clock (UTCTime)
import Data.Time.Clock qualified as Clock
import Data.Yaml qualified as Y
import Database.MSSQL.Pool qualified as MSPool
Import `pg-client-hs` as `PG`
Result of executing the following commands:
```shell
# replace "as Q" imports with "as PG" (in retrospect this didn't need a regex)
git grep -lE 'as Q($|[^a-zA-Z])' -- '*.hs' | xargs sed -i -E 's/as Q($|[^a-zA-Z])/as PG\1/'
# replace " Q." with " PG."
git grep -lE ' Q\.' -- '*.hs' | xargs sed -i 's/ Q\./ PG./g'
# replace "(Q." with "(PG."
git grep -lE '\(Q\.' -- '*.hs' | xargs sed -i 's/(Q\./(PG./g'
# ditto, but for [, |, { and !
git grep -lE '\[Q\.' -- '*.hs' | xargs sed -i 's/\[Q\./\[PG./g'
git grep -l '|Q\.' -- '*.hs' | xargs sed -i 's/|Q\./|PG./g'
git grep -l '{Q\.' -- '*.hs' | xargs sed -i 's/{Q\./{PG./g'
git grep -l '!Q\.' -- '*.hs' | xargs sed -i 's/!Q\./!PG./g'
```
(Doing the `grep -l` before the `sed`, instead of `sed` on the entire codebase, reduces the number of `mtime` updates, and so reduces how many times a file gets recompiled while checking intermediate results.)
Finally, I manually removed a broken and unused `Arbitrary` instance in `Hasura.RQL.Network`. (It used an `import Test.QuickCheck.Arbitrary as Q` statement, which was erroneously caught by the first find-replace command.)
After this PR, `Q` is no longer used as an import qualifier. That was not the goal of this PR, but perhaps it's a useful fact for future efforts.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5933
GitOrigin-RevId: 8c84c59d57789111d40f5d3322c5a885dcfbf40e
import Database.PG.Query qualified as PG
import Database.PG.Query qualified as Q
import GHC.AssertNF.CPP
import Hasura.App.State
import Hasura.Backends.MSSQL.Connection
import Hasura.Backends.Postgres.Connection
import Hasura.Base.Error
import Hasura.Eventing.Common
import Hasura.Eventing.EventTrigger
import Hasura.Eventing.ScheduledTrigger
import Hasura.GraphQL.Execute
  ( ExecutionStep (..),
    MonadGQLExecutionCheck (..),
    checkQueryInAllowlist,
  )
import Hasura.GraphQL.Execute.Action
import Hasura.GraphQL.Execute.Action.Subscription
import Hasura.GraphQL.Execute.Backend qualified as EB
import Hasura.GraphQL.Execute.Subscription.Poll qualified as ES
import Hasura.GraphQL.Execute.Subscription.State qualified as ES
import Hasura.GraphQL.Logging (MonadQueryLog (..))
import Hasura.GraphQL.Schema.Options qualified as Options
import Hasura.GraphQL.Transport.HTTP
  ( CacheStoreSuccess (CacheStoreSkipped),
    MonadExecuteQuery (..),
  )
import Hasura.GraphQL.Transport.HTTP.Protocol (toParsed)
import Hasura.GraphQL.Transport.WebSocket.Server qualified as WS
import Hasura.Logging
import Hasura.Metadata.Class
import Hasura.PingSources
import Hasura.Prelude
import Hasura.QueryTags
import Hasura.RQL.DDL.EventTrigger (MonadEventLogCleanup (..))
import Hasura.RQL.DDL.Schema.Cache
import Hasura.RQL.DDL.Schema.Cache.Common
import Hasura.RQL.DDL.Schema.Catalog
import Hasura.RQL.Types.Allowlist
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.Eventing.Backend
import Hasura.RQL.Types.Metadata
import Hasura.RQL.Types.Network
import Hasura.RQL.Types.ResizePool
import Hasura.RQL.Types.SchemaCache
import Hasura.RQL.Types.SchemaCache.Build
import Hasura.RQL.Types.Source
import Hasura.SQL.AnyBackend qualified as AB
import Hasura.SQL.Backend
import Hasura.Server.API.Query (requiresAdmin)
import Hasura.Server.App
import Hasura.Server.Auth
import Hasura.Server.CheckUpdates (checkForUpdates)
import Hasura.Server.Init
import Hasura.Server.Limits
import Hasura.Server.Logging
import Hasura.Server.Metrics (ServerMetrics (..))
import Hasura.Server.Migrate (migrateCatalog)
import Hasura.Server.Prometheus
  ( PrometheusMetrics (..),
    decWarpThreads,
    incWarpThreads,
  )
import Hasura.Server.SchemaCacheRef
  ( SchemaCacheRef,
    getSchemaCache,
    initialiseSchemaCacheRef,
    logInconsistentMetadata,
  )
import Hasura.Server.SchemaUpdate
import Hasura.Server.Telemetry
import Hasura.Server.Types
import Hasura.Server.Version
import Hasura.Services
import Hasura.Session
import Hasura.ShutdownLatch
import Hasura.Tracing qualified as Tracing
import Network.HTTP.Client qualified as HTTP
import Network.HTTP.Client.Blocklisting (Blocklist)
server: http ip blocklist (closes #2449)
## Description
This PR is in reference to #2449 (support IP blacklisting for multitenant)
*RFC Update: Add support for IPv6 blocking*
### Solution and Design
Using the [http-client-restricted](https://hackage.haskell.org/package/http-client-restricted) package, we create the HTTP manager with restriction capabilities. The IPs can be supplied via the CLI arguments `--ipv4BlocklistCidrs cidr1, cidr2...`, or `--disableDefaultIPv4Blocklist` for a default IP list. The new manager will block all requests to the provided CIDRs.
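As a rough illustration of the per-request check (a toy sketch with made-up types; the inlined implementation differs):

```haskell
import Data.Bits (complement, shiftL, (.&.))
import Data.Word (Word32)

-- A CIDR block: base address plus prefix length, e.g. 10.0.0.0/8.
data Cidr = Cidr {cidrBase :: Word32, cidrBits :: Int}

-- An address matches a CIDR when the two agree on the prefix bits.
matchesCidr :: Word32 -> Cidr -> Bool
matchesCidr ip (Cidr base bits) =
  let mask = complement 0 `shiftL` (32 - bits)
   in ip .&. mask == base .&. mask

-- Reject a connection when its resolved address falls in any blocked range.
isBlocked :: [Cidr] -> Word32 -> Bool
isBlocked blocklist ip = any (matchesCidr ip) blocklist
```

The linear scan in `isBlocked` is exactly the per-request cost that the trie suggestion under "Limitations" below would address.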
We extract the error message string to show the end user that the given IP is blocked from being set as a webhook. There are two ways to extract the "connection to IP address is blocked" error message. Given below are the responses from an event trigger to a blocked IP for each implementation:
- 6d74fde316f61e246c861befcca5059d33972fa7 - We return the error message string as an `HTTPErr (HOther)` from `Hasura/Eventing/HTTP.hs`.
```
{
"data": {
"message": "blocked connection to private IP address "
},
"version": "2",
"type": "client_error"
}
```
- 88e17456345cbb449a5ecd4877c84c9f319dbc25 - We pattern match on `HTTPExceptionContent` for `InternalException` in `Hasura/HTTP.hs` and extract the error message string from it. (This is the implemented approach, as it handles all the cases where the pro engine makes webhook requests; a sketch of the extraction follows the response examples below.)
```
{
"data": {
"message": {
"type": "http_exception",
"message": "blocked connection to private IP address ",
"request": {
"secure": false,
"path": "/webhook",
"responseTimeout": "ResponseTimeoutMicro 60000000",
"queryString": "",
"method": "POST",
"requestHeaders": {
"Content-Type": "application/json",
"X-B3-ParentSpanId": "5ae6573edb2a6b36",
"X-B3-TraceId": "29ea7bd6de6ebb8f",
"X-B3-SpanId": "303137d9f1d4f341",
"User-Agent": "hasura-graphql-engine/cerebushttp-ip-blacklist-a793a0e41-dirty"
},
"host": "139.59.90.109",
"port": 8000
}
}
},
"version": "2",
"type": "client_error"
}
```
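A sketch of that extraction, using only http-client's public `HttpExceptionRequest` and `InternalException` constructors (the real code in `Hasura/HTTP.hs` differs):

```haskell
import Control.Exception (displayException)
import Network.HTTP.Client qualified as HTTP

-- Pull a printable message out of an InternalException, which is where
-- the restricted manager surfaces "blocked connection" errors.
internalErrMessage :: HTTP.HttpException -> Maybe String
internalErrMessage (HTTP.HttpExceptionRequest _ (HTTP.InternalException e)) =
  Just (displayException e)
internalErrMessage _ = Nothing
```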
### Steps to test and verify
The restricted IPs can be used as webhooks in event triggers, and Hasura will return an error message in response.
### Limitations, known bugs & workarounds
- The `http-client-restricted` has a needlessly complex interface, and puts effort into implementing proxy support which we don't want, so we've inlined a stripped down version.
- Performance constraint: As the blocking is checked for each request, if a long list of blocked CIDRs is supplied, iterating through all of them is not what we would prefer. Using trie is suggested to overcome this. (Added to RFC)
- Calls to Lux endpoints are inconsistent: we use either the HTTP manager from the `ProServeCtx`, which is unrestricted, or the HTTP manager from the `ServeCtx`, which is restricted (the latter through the instances for `MonadMetadataApiAuthorization` and `UserAuthentication`). (The failure scenario here would be: cloud sets `PRO_ENDPOINT` to something that resolves to an internal address, and then restricted requests to those endpoints fail, causing auth to fail on user requests. This is about HTTP requests to Lux auth endpoints.)
## Changelog
- ✅ `CHANGELOG.md` is updated with user-facing content relevant to this PR.
## Affected components
- ✅ Server
- ✅ Tests
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3186
Co-authored-by: Robert <132113+robx@users.noreply.github.com>
GitOrigin-RevId: 5bd2de2d028bc416b02c99e996c7bebce56fb1e7
import Network.HTTP.Client.CreateManager (mkHttpManager)
import Network.Wai (Application)
import Network.Wai.Handler.Warp qualified as Warp
import Options.Applicative
import Refined (unrefine)
import System.Log.FastLogger qualified as FL
import System.Metrics qualified as EKG
import System.Metrics.Gauge qualified as EKG.Gauge
import Text.Mustache.Compile qualified as M
import Web.Spock.Core qualified as Spock

data ExitCode
  = -- these are used during server initialization:
    InvalidEnvironmentVariableOptionsError
  | InvalidDatabaseConnectionParamsError
  | AuthConfigurationError
  | EventSubSystemError
  | DatabaseMigrationError
  | -- | used by MT because it initialises the schema cache only
    -- these are used in app/Main.hs:
    SchemaCacheInitError
  | MetadataExportError
  | MetadataCleanError
  | ExecuteProcessError
  | DowngradeProcessError
  deriving (Show)

data ExitException = ExitException
  { eeCode :: !ExitCode,
    eeMessage :: !BC.ByteString
  }
  deriving (Show)

instance Exception ExitException

throwErrExit :: (MonadIO m) => forall a. ExitCode -> String -> m a
throwErrExit reason = liftIO . throwIO . ExitException reason . BC.pack

throwErrJExit :: (A.ToJSON a, MonadIO m) => forall b. ExitCode -> a -> m b
throwErrJExit reason = liftIO . throwIO . ExitException reason . BLC.toStrict . A.encode

--------------------------------------------------------------------------------
-- TODO(SOLOMON): Move into `Hasura.Server.Init`. Unable to do so
-- currently due to `throwErrExit`.

-- | Parse CLI arguments to the graphql-engine executable.
parseArgs :: EnabledLogTypes impl => Env.Environment -> IO (HGEOptions (ServeOptions impl))
parseArgs env = do
  rawHGEOpts <- execParser opts
  let eitherOpts = runWithEnv (Env.toList env) $ mkHGEOptions rawHGEOpts
  onLeft eitherOpts $ throwErrExit InvalidEnvironmentVariableOptionsError
  where
    opts =
      info
        (helper <*> parseHgeOpts)
        ( fullDesc
            <> header "Hasura GraphQL Engine: Blazing fast, instant realtime GraphQL APIs on your DB with fine grained access control, also trigger webhooks on database events."
            <> footerDoc (Just mainCmdFooter)
        )

--------------------------------------------------------------------------------

printJSON :: (A.ToJSON a, MonadIO m) => a -> m ()
printJSON = liftIO . BLC.putStrLn . A.encode

printYaml :: (A.ToJSON a, MonadIO m) => a -> m ()
printYaml = liftIO . BC.putStrLn . Y.encode
mkPGLogger :: Logger Hasura -> PG.PGLogger
mkPGLogger (Logger logger) (PG.PLERetryMsg msg) =
  logger $ PGLog LevelWarn msg

-- | Context required for all graphql-engine CLI commands
data GlobalCtx = GlobalCtx
  { _gcMetadataDbConnInfo :: !PG.ConnInfo,
    -- | --database-url option, @'UrlConf' is required to construct default source configuration
    -- and optional retries
    _gcDefaultPostgresConnInfo :: !(Maybe (UrlConf, PG.ConnInfo), Maybe Int)
  }

readTlsAllowlist :: SchemaCacheRef -> IO [TlsAllow]
readTlsAllowlist scRef = scTlsAllowlist <$> getSchemaCache scRef

initGlobalCtx ::
  (MonadIO m) =>
  Env.Environment ->
  -- | the metadata DB URL
  Maybe String ->
  -- | the user's DB URL
  PostgresConnInfo (Maybe UrlConf) ->
  m GlobalCtx
initGlobalCtx env metadataDbUrl defaultPgConnInfo = do
  let PostgresConnInfo dbUrlConf maybeRetries = defaultPgConnInfo
      mkConnInfoFromSource dbUrl = do
        resolvePostgresConnInfo env dbUrl maybeRetries

      mkConnInfoFromMDb mdbUrl =
        let retries = fromMaybe 1 maybeRetries
         in (PG.ConnInfo retries . PG.CDDatabaseURI . txtToBs . T.pack) mdbUrl

      mkGlobalCtx mdbConnInfo sourceConnInfo =
        pure $ GlobalCtx mdbConnInfo (sourceConnInfo, maybeRetries)

  case (metadataDbUrl, dbUrlConf) of
    (Nothing, Nothing) ->
      throwErrExit
        InvalidDatabaseConnectionParamsError
        "Fatal Error: Either of --metadata-database-url or --database-url option expected"
    -- If no metadata storage is specified, use the default database as
    -- metadata storage
    (Nothing, Just dbUrl) -> do
      connInfo <- mkConnInfoFromSource dbUrl
      mkGlobalCtx connInfo $ Just (dbUrl, connInfo)
    (Just mdUrl, Nothing) -> do
      let mdConnInfo = mkConnInfoFromMDb mdUrl
      mkGlobalCtx mdConnInfo Nothing
    (Just mdUrl, Just dbUrl) -> do
      srcConnInfo <- mkConnInfoFromSource dbUrl
      let mdConnInfo = mkConnInfoFromMDb mdUrl
      mkGlobalCtx mdConnInfo (Just (dbUrl, srcConnInfo))

-- | An application with a Postgres database as its metadata storage.
newtype PGMetadataStorageAppT m a = PGMetadataStorageAppT {runPGMetadataStorageAppT :: (AppContext, AppEnv) -> m a}
  deriving
    ( Functor,
      Applicative,
      Monad,
      MonadIO,
Rewrite OpenAPI
### Description
This PR rewrites OpenAPI to be more idiomatic. Some noteworthy changes:
- we accumulate all required information during the Analyze phase, to avoid having to do even a single lookup in the schema cache during the OpenAPI generation phase (we now only need the schema cache as input to run the analysis)
- we no longer build intermediary endpoint information and aggregate it; we directly build the `PathItem` for each endpoint; additionally, that means we no longer have to assume that different methods have the same metadata
- we no longer have to first declare types, then craft references: we do everything in one step
- we now properly deal with nullability by treating "typeName" and "typeName!" as different
- we add a bunch of additional fields in the generated "schema", such as title
- we do now support enum values in both input and output positions
- checking whether the request body is required is now performed on the fly rather than by introspecting the generated schema
- the methods in the file are sorted by topic
### Controversial point
However, this PR creates some additional complexity, that we might not want to keep. The main complexity is _knot-tying_: to avoid lookups when generating the OpenAPI, it builds an actual graph of input types, which means that we need something similar to (but simpler than) `MonadSchema`, to avoid infinite recursions when analyzing the input types of a query. To do this, this PR introduces `CircularT`, a lesser `SchemaT` that aims at avoiding ever having to reinvent this particular wheel ever again.
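To make the knot-tying concern concrete, here is a toy sketch of the termination problem and the memoization idea (hypothetical types; the real `CircularT` is more general):

```haskell
import Control.Monad.State (State, gets, modify)
import Data.Set (Set)
import Data.Set qualified as Set

-- A possibly mutually recursive input-type graph, as built during Analyze.
data InputType = InputType {itName :: String, itFields :: [InputType]}

-- A generated schema either inlines an object or references one by name.
data Schema = Ref String | Object String [Schema]

-- Track the names already being analyzed; on re-entry, emit a reference
-- instead of recursing, so cyclic graphs terminate.
analyze :: InputType -> State (Set String) Schema
analyze (InputType name fields) = do
  seen <- gets (Set.member name)
  if seen
    then pure (Ref name)
    else do
      modify (Set.insert name)
      Object name <$> traverse analyze fields
```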
### Remaining work
- [x] fix existing tests (they are all failing due to some of the schema changes)
- [ ] add tests to cover the new features:
- [x] tests for `CircularT`
- [ ] tests for enums in output schemas
- [x] extract / document `CircularT` if we wish to keep it
- [x] add more comments to `OpenAPI`
- [x] have a second look at `buildVariableSchema`
- [x] fix all missing diagnostics in `Analyze`
- [x] add a Changelog entry?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4654
Co-authored-by: David Overton <7734777+dmoverton@users.noreply.github.com>
GitOrigin-RevId: f4a9191f22dfcc1dccefd6a52f5c586b6ad17172
      MonadFix,
      MonadCatch,
      MonadThrow,
      MonadMask,
      HasServerConfigCtx,
      MonadReader (AppContext, AppEnv),
      MonadBase b,
      MonadBaseControl b
    )
    via (ReaderT (AppContext, AppEnv) m)
  deriving
    (MonadTrans)
    via (ReaderT (AppContext, AppEnv))

instance Monad m => ProvidesNetwork (PGMetadataStorageAppT m) where
  askHTTPManager = appEnvManager <$> asks snd

resolvePostgresConnInfo ::
  (MonadIO m) => Env.Environment -> UrlConf -> Maybe Int -> m PG.ConnInfo
resolvePostgresConnInfo env dbUrlConf maybeRetries = do
  dbUrlText <-
    runExcept (resolveUrlConf env dbUrlConf) `onLeft` \err ->
      liftIO (throwErrJExit InvalidDatabaseConnectionParamsError err)
  pure $ PG.ConnInfo retries $ PG.CDDatabaseURI $ txtToBs dbUrlText
  where
    retries = fromMaybe 1 maybeRetries

initAuthMode ::
  (C.ForkableMonadIO m, Tracing.HasReporter m) =>
  HashSet AdminSecretHash ->
  Maybe AuthHook ->
  [JWTConfig] ->
  Maybe RoleName ->
  HTTP.Manager ->
  Logger Hasura ->
  m AuthMode
initAuthMode adminSecret authHook jwtSecret unAuthRole httpManager logger = do
  authModeRes <-
    runExceptT $
      setupAuthMode
        adminSecret
        authHook
        jwtSecret
        unAuthRole
        logger
        httpManager
  authMode <- onLeft authModeRes (throwErrExit AuthConfigurationError . T.unpack)
  -- fork a dedicated polling thread to dynamically get the latest JWK settings
  -- set by the user and update the JWK accordingly. This will help in applying
  -- the updates without restarting HGE.
  _ <- C.forkImmortal "update JWK" logger $ updateJwkCtx authMode httpManager logger
  return authMode

initSubscriptionsState ::
  Logger Hasura ->
  Maybe ES.SubscriptionPostPollHook ->
  IO ES.SubscriptionsState
initSubscriptionsState logger liveQueryHook = ES.initSubscriptionsState postPollHook
  where
    postPollHook = fromMaybe (ES.defaultSubscriptionPostPollHook logger) liveQueryHook

initLockedEventsCtx :: IO LockedEventsCtx
initLockedEventsCtx = LockedEventsCtx <$> STM.newTVarIO mempty <*> STM.newTVarIO mempty <*> STM.newTVarIO mempty <*> STM.newTVarIO mempty

mkResponseInternalErrorsConfig :: AdminInternalErrorsStatus -> DevModeStatus -> ResponseInternalErrorsConfig
mkResponseInternalErrorsConfig adminInternalErrors devMode = do
  if
      | isDevModeEnabled devMode -> InternalErrorsAllRequests
      | isAdminInternalErrorsEnabled adminInternalErrors -> InternalErrorsAdminOnly
      | otherwise -> InternalErrorsDisabled

initSQLGenCtx :: Options.StringifyNumbers -> Options.DangerouslyCollapseBooleans -> HashSet ExperimentalFeature -> SQLGenCtx
initSQLGenCtx stringifyNum dangerousBooleanCollapse experimentalFeatures =
  SQLGenCtx
    stringifyNum
    dangerousBooleanCollapse
    optimizePermissionFilters
    bigqueryStringNumericInput
  where
    optimizePermissionFilters
      | EFOptimizePermissionFilters `elem` experimentalFeatures = Options.OptimizePermissionFilters
      | otherwise = Options.Don'tOptimizePermissionFilters
    bigqueryStringNumericInput
      | EFBigQueryStringNumericInput `elem` experimentalFeatures = Options.EnableBigQueryStringNumericInput
      | otherwise = Options.DisableBigQueryStringNumericInput

initialiseAppContext :: (MonadIO m) => HTTP.Manager -> ServeOptions impl -> Env.Environment -> SchemaCacheRef -> Logger Hasura -> SQLGenCtx -> m AppContext
initialiseAppContext httpManager ServeOptions {..} env schemaCacheRef logger sqlGenCtx = do
  authMode <- liftIO $ initAuthMode soAdminSecret soAuthHook soJwtSecret soUnAuthRole httpManager logger
  let appCtx =
        AppContext
          { acCacheRef = schemaCacheRef,
            acAuthMode = authMode,
            acSQLGenCtx = sqlGenCtx,
            acEnabledAPIs = soEnabledAPIs,
            acEnableAllowlist = soEnableAllowList,
            acResponseInternalErrorsConfig = mkResponseInternalErrorsConfig soAdminInternalErrors soDevMode,
            acEnvironment = env,
            acRemoteSchemaPermsCtx = soEnableRemoteSchemaPermissions,
            acFunctionPermsCtx = soInferFunctionPermissions,
            acExperimentalFeatures = soExperimentalFeatures,
            acDefaultNamingConvention = soDefaultNamingConvention,
            acMetadataDefaults = soMetadataDefaults,
            acLiveQueryOptions = soLiveQueryOpts,
            acStreamQueryOptions = soStreamingQueryOpts,
            acCorsConfig = soCorsConfig,
            acConsoleStatus = soConsoleStatus,
            acEnableTelemetry = soEnableTelemetry,
            acEventsHttpPoolSize = soEventsHttpPoolSize,
            acEventsFetchInterval = soEventsFetchInterval,
            acAsyncActionsFetchInterval = soAsyncActionsFetchInterval,
            acSchemaPollInterval = soSchemaPollInterval,
            acEventsFetchBatchSize = soEventsFetchBatchSize
          }
  return appCtx

-- | Initializes or migrates the catalog and returns the context required to start the server.
initialiseContext ::
  (C.ForkableMonadIO m, MonadCatch m) =>
  Env.Environment ->
  GlobalCtx ->
  ServeOptions Hasura ->
  Maybe ES.SubscriptionPostPollHook ->
  ServerMetrics ->
  PrometheusMetrics ->
  Tracing.SamplingPolicy ->
  ManagedT m (AppContext, AppEnv)
initialiseContext env GlobalCtx {..} serveOptions@ServeOptions {..} liveQueryHook serverMetrics prometheusMetrics traceSamplingPolicy = do
  instanceId <- liftIO generateInstanceId
  latch <- liftIO newShutdownLatch
  loggers@(Loggers loggerCtx logger pgLogger) <- mkLoggers soEnabledLogTypes soLogLevel
  when (null soAdminSecret) $ do
    let errMsg :: Text
        errMsg = "WARNING: No admin secret provided"
    unLogger logger $
      StartupLog
        { slLogLevel = LevelWarn,
          slKind = "no_admin_secret",
          slInfo = A.toJSON errMsg
        }
  -- log serve options
  unLogger logger $ serveOptsToLog serveOptions

  -- log postgres connection info
  unLogger logger $ connInfoToLog _gcMetadataDbConnInfo

  metadataDbPool <-
    allocate
      (liftIO $ PG.initPGPool _gcMetadataDbConnInfo soConnParams pgLogger)
      (liftIO . PG.destroyPGPool)

  let maybeDefaultSourceConfig =
        fst _gcDefaultPostgresConnInfo <&> \(dbUrlConf, _) ->
          let connSettings =
                PostgresPoolSettings
                  { _ppsMaxConnections = Just $ Q.cpConns soConnParams,
                    _ppsTotalMaxConnections = Nothing,
                    _ppsIdleTimeout = Just $ Q.cpIdleTime soConnParams,
                    _ppsRetries = snd _gcDefaultPostgresConnInfo <|> Just 1,
                    _ppsPoolTimeout = PG.cpTimeout soConnParams,
                    _ppsConnectionLifetime = PG.cpMbLifetime soConnParams
2020-12-28 15:56:00 +03:00
                  }
              sourceConnInfo = PostgresSourceConnInfo dbUrlConf (Just connSettings) (PG.cpAllowPrepare soConnParams) soTxIso Nothing
2023-01-25 10:12:53 +03:00
           in PostgresConnConfiguration sourceConnInfo Nothing defaultPostgresExtensionsSchema Nothing mempty
2023-02-28 12:09:31 +03:00
      sqlGenCtx = initSQLGenCtx soStringifyNum soDangerousBooleanCollapse soExperimentalFeatures
2023-02-24 21:09:36 +03:00
      checkFeatureFlag' = checkFeatureFlag env
      serverConfigCtx =
[Preview] Inherited roles for postgres read queries
fixes #3868
docker image - `hasura/graphql-engine:inherited-roles-preview-48b73a2de`
Note:
To be able to use the inherited roles feature, the graphql-engine should be started with the env variable `HASURA_GRAPHQL_EXPERIMENTAL_FEATURES` set to `inherited_roles`.
Introduction
------------
This PR implements the idea of multiple roles as presented in this [paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/FGALanguageICDE07.pdf). The multiple roles feature in this PR can be used via inherited roles. An inherited role is a role which can be created by combining multiple singular roles. For example, if there are two roles `author` and `editor` configured in the graphql-engine, then we can create an inherited role named `combined_author_editor`, which combines the select permissions of the `author` and `editor` roles; GraphQL queries can then be made using `combined_author_editor`.
How are select permissions of different roles combined?
------------------------------------------------------------
A select permission includes 5 things:
1. Columns accessible to the role
2. Row selection filter
3. Limit
4. Allow aggregation
5. Scalar computed fields accessible to the role
Suppose there are two roles, `role1` gives access to the `address` column with row filter `P1` and `role2` gives access to both the `address` and the `phone` column with row filter `P2` and we create a new role `combined_roles` which combines `role1` and `role2`.
Let's say the following GraphQL query is queried with the `combined_roles` role.
```graphql
query {
employees {
address
phone
}
}
```
This will translate to the following SQL query:
```sql
select
(case when (P1 or P2) then address else null end) as address,
(case when P2 then phone else null end) as phone
from employee
where (P1 or P2)
```
The other parameters of the select permission will be combined in the following manner (a sketch follows this list):
1. Limit - the minimum of the limits will be the limit of the inherited role
2. Allow aggregations - if any of the roles allows aggregation, then the inherited role will allow aggregation
3. Scalar computed fields - same as table column fields, as in the above example
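To make the combination rules concrete, here is a minimal, self-contained Haskell sketch of the merge (the types and function below are hypothetical simplifications for illustration, not the engine's actual implementation):
```haskell
import Control.Applicative (liftA2)
import Data.List (union)

-- Hypothetical, simplified stand-ins for the engine's permission types.
data BoolExp = Perm String | BoolOr BoolExp BoolExp deriving (Show)

data SelectPermission = SelectPermission
  { spColumns :: [String], -- columns accessible to the role
    spFilter :: BoolExp, -- row selection filter
    spLimit :: Maybe Int, -- optional row limit
    spAllowAgg :: Bool -- whether aggregation is allowed
  }
  deriving (Show)

-- Combine two select permissions following the rules described above.
combine :: SelectPermission -> SelectPermission -> SelectPermission
combine a b =
  SelectPermission
    { spColumns = spColumns a `union` spColumns b, -- union of accessible columns
      spFilter = BoolOr (spFilter a) (spFilter b), -- P1 OR P2
      -- minimum of the limits when both roles set one (treating a role
      -- without a limit as unlimited is an assumption of this sketch)
      spLimit = liftA2 min (spLimit a) (spLimit b),
      spAllowAgg = spAllowAgg a || spAllowAgg b -- any one role allowing it suffices
    }
```
For the `address`/`phone` example above, `spFilter` becomes `BoolOr P1 P2`, which is exactly the `(P1 or P2)` predicate in the generated SQL.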
APIs for inherited roles:
----------------------
1. `add_inherited_role`
`add_inherited_role` is the [metadata API](https://hasura.io/docs/1.0/graphql/core/api-reference/index.html#schema-metadata-api) to create a new inherited role. It accepts two arguments:
`role_name`: the name of the inherited role to be added (String)
`role_set`: list of roles that need to be combined (Array of Strings)
Example:
```json
{
"type": "add_inherited_role",
"args": {
"role_name":"combined_user",
"role_set":[
"user",
"user1"
]
}
}
```
After adding the inherited role, it can be used just like the singular roles described earlier.
Note:
An inherited role can only be created with non-inherited/singular roles.
2. `drop_inherited_role`
The `drop_inherited_role` API accepts the name of the inherited role and drops it from the metadata. It accepts a single argument:
`role_name`: name of the inherited role to be dropped
Example:
```json
{
"type": "drop_inherited_role",
"args": {
"role_name":"combined_user"
}
}
```
Metadata
---------
The derived roles metadata will be included under the `experimental_features` key while exporting the metadata.
```json
{
"experimental_features": {
"derived_roles": [
{
"role_name": "manager_is_employee_too",
"role_set": [
"employee",
"manager"
]
}
]
}
}
```
Scope
------
Only postgres queries and subscriptions are supported in this PR.
Important points:
-----------------
1. All columns exposed to an inherited role will be marked as `nullable`; this is done so that cell-value nullification can be performed.
TODOs
-------
- [ ] Tests
- [ ] Test a GraphQL query running with an inherited role without enabling inherited roles in experimental features
- [ ] Tests for aggregate queries, limit, computed fields, functions, subscriptions (?)
- [ ] Introspection test with an inherited role (nullability changes in an inherited role)
- [ ] Docs
- [ ] Changelog
Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
GitOrigin-RevId: 3b8ee1e11f5ceca80fe294f8c074d42fbccfec63
2021-03-08 14:14:13 +03:00
        ServerConfigCtx
          soInferFunctionPermissions
          soEnableRemoteSchemaPermissions
          sqlGenCtx
          soEnableMaintenanceMode
          soExperimentalFeatures
2021-11-30 15:31:27 +03:00
          soEventingMode
2021-12-08 09:26:46 +03:00
          soReadOnlyMode
2022-05-26 14:54:30 +03:00
          soDefaultNamingConvention
2022-10-20 15:45:31 +03:00
          soMetadataDefaults
2023-02-24 21:09:36 +03:00
          checkFeatureFlag'
2021-02-18 19:46:14 +03:00
2022-03-14 21:31:46 +03:00
  rebuildableSchemaCache <-
2020-12-28 15:56:00 +03:00
    lift . flip onException (flushLogger loggerCtx) $
2021-08-06 20:05:17 +03:00
      migrateCatalogSchema
        env
        logger
        metadataDbPool
        maybeDefaultSourceConfig
2022-09-27 12:24:19 +03:00
        mempty
2021-02-18 19:46:14 +03:00
        serverConfigCtx
        (mkPgSourceResolver pgLogger)
2022-01-04 14:53:50 +03:00
        mkMSSQLSourceResolver
2022-08-09 14:42:12 +03:00
        soExtensionsSchema
2021-04-06 06:25:02 +03:00
  -- Start a background thread for listening to schema sync events from other server instances.
  metaVersionRef <- liftIO $ STM.newEmptyTMVarIO
2021-04-07 12:59:48 +03:00
  -- An interval of 0 indicates that no schema sync is required
  case soSchemaPollInterval of
2022-11-29 04:00:28 +03:00
    Skip -> unLogger logger $ mkGenericLog @Text LevelInfo "schema-sync" "Schema sync disabled"
2022-08-17 04:07:44 +03:00
    Interval interval -> do
2022-11-29 04:00:28 +03:00
      unLogger logger $ mkGenericLog @String LevelInfo "schema-sync" ("Schema sync enabled. Polling at " <> show interval)
2022-08-17 04:07:44 +03:00
      void $ startSchemaSyncListenerThread logger metadataDbPool instanceId interval metaVersionRef
2021-04-06 06:25:02 +03:00
2022-03-09 01:59:28 +03:00
  schemaCacheRef <- initialiseSchemaCacheRef serverMetrics rebuildableSchemaCache
2021-02-11 20:54:25 +03:00
server: http ip blocklist (closes #2449)
## Description
This PR is in reference to #2449 (support IP blacklisting for multitenant)
*RFC Update: Add support for IPv6 blocking*
### Solution and Design
Using the [http-client-restricted](https://hackage.haskell.org/package/http-client-restricted) package, we're creating the HTTP manager with restricting capabilities. The IPs to block can be supplied as CLI arguments (`--ipv4BlocklistCidrs cidr1, cidr2...`), and the built-in default IP list is controlled via `--disableDefaultIPv4Blocklist`. The new manager will block all requests to the provided CIDRs.
We are extracting the error message string to show the end-user that the given IP is blocked from being set as a webhook. There are 2 ways to extract the error message "connection to IP address is blocked". Given below are the responses from an event trigger to a blocked IP for these implementations:
- 6d74fde316f61e246c861befcca5059d33972fa7 - We return the error message string as a HTTPErr(HOther) from `Hasura/Eventing/HTTP.hs`.
```
{
"data": {
"message": "blocked connection to private IP address "
},
"version": "2",
"type": "client_error"
}
```
- 88e17456345cbb449a5ecd4877c84c9f319dbc25 - We case match on HTTPExceptionContent for InternalException in `Hasura/HTTP.hs` and extract the error message string from it. (This is implemented because it handles all the cases where the pro engine makes webhook requests.)
```
{
"data": {
"message": {
"type": "http_exception",
"message": "blocked connection to private IP address ",
"request": {
"secure": false,
"path": "/webhook",
"responseTimeout": "ResponseTimeoutMicro 60000000",
"queryString": "",
"method": "POST",
"requestHeaders": {
"Content-Type": "application/json",
"X-B3-ParentSpanId": "5ae6573edb2a6b36",
"X-B3-TraceId": "29ea7bd6de6ebb8f",
"X-B3-SpanId": "303137d9f1d4f341",
"User-Agent": "hasura-graphql-engine/cerebushttp-ip-blacklist-a793a0e41-dirty"
},
"host": "139.59.90.109",
"port": 8000
}
}
},
"version": "2",
"type": "client_error"
}
```
### Steps to test and verify
To verify, use one of the restricted IPs as a webhook in an event trigger; hasura will return an error message in response.
### Limitations, known bugs & workarounds
- The `http-client-restricted` package has a needlessly complex interface, and puts effort into implementing proxy support which we don't want, so we've inlined a stripped-down version.
- Performance constraint: as the block check runs for each request, iterating through a long list of blocked CIDRs is not what we would prefer (see the sketch below); using a trie is suggested to overcome this. (Added to RFC)
- Calls to Lux endpoints are inconsistent: we use either the http manager from the ProServeCtx, which is unrestricted, or the http manager from the ServeCtx, which is restricted (the latter through the instances for MonadMetadataApiAuthorization and UserAuthentication). (The failure scenario here would be: cloud sets PRO_ENDPOINT to something that resolves to an internal address, and then restricted requests to those endpoints fail, causing auth to fail on user requests. This is about HTTP requests to lux auth endpoints.)
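To illustrate the per-request check, here is a minimal Haskell sketch of IPv4 CIDR matching (hypothetical types and names; the actual check lives in our inlined, stripped-down copy of `http-client-restricted`):
```haskell
import Data.Bits (complement, shiftL, (.&.))
import Data.Word (Word32)

-- An IPv4 CIDR block: base address and prefix length, e.g. 10.0.0.0/8.
data Cidr = Cidr {cidrBase :: Word32, cidrPrefix :: Int}

-- Does the address fall inside the CIDR block?
inCidr :: Word32 -> Cidr -> Bool
inCidr addr (Cidr base prefix) =
  let mask = if prefix == 0 then 0 else complement 0 `shiftL` (32 - prefix) :: Word32
   in addr .&. mask == base .&. mask

-- The linear scan over the blocklist performed on every request; this is the
-- performance concern noted above, which a trie would address.
isBlocked :: [Cidr] -> Word32 -> Bool
isBlocked blocklist addr = any (inCidr addr) blocklist
```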
## Changelog
- ✅ `CHANGELOG.md` is updated with user-facing content relevant to this PR.
## Affected components
- ✅ Server
- ✅ Tests
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3186
Co-authored-by: Robert <132113+robx@users.noreply.github.com>
GitOrigin-RevId: 5bd2de2d028bc416b02c99e996c7bebce56fb1e7
2022-02-25 16:29:55 +03:00
  srvMgr <- liftIO $ mkHttpManager (readTlsAllowlist schemaCacheRef) mempty
2021-08-24 10:36:32 +03:00
2023-02-24 21:09:36 +03:00
  subscriptionsState <- liftIO $ initSubscriptionsState logger liveQueryHook
  lockedEventsCtx <- liftIO initLockedEventsCtx
  appCtx <- liftIO $ initialiseAppContext srvMgr serveOptions env schemaCacheRef logger sqlGenCtx
  let appEnv =
        AppEnv
2023-02-28 12:09:31 +03:00
          { appEnvPort = soPort,
            appEnvHost = soHost,
            appEnvMetadataDbPool = metadataDbPool,
2023-02-24 21:09:36 +03:00
            appEnvManager = srvMgr,
            appEnvLoggers = loggers,
            appEnvMetadataVersionRef = metaVersionRef,
            appEnvInstanceId = instanceId,
            appEnvEnableMaintenanceMode = soEnableMaintenanceMode,
            appEnvLoggingSettings = LoggingSettings soEnabledLogTypes soEnableMetadataQueryLogging,
            appEnvEventingMode = soEventingMode,
            appEnvEnableReadOnlyMode = soReadOnlyMode,
            appEnvServerMetrics = serverMetrics,
            appEnvShutdownLatch = latch,
            appEnvMetaVersionRef = metaVersionRef,
            appEnvPrometheusMetrics = prometheusMetrics,
            appEnvTraceSamplingPolicy = traceSamplingPolicy,
            appEnvSubscriptionState = subscriptionsState,
            appEnvLockedEventsCtx = lockedEventsCtx,
2023-02-28 12:09:31 +03:00
            appEnvConnParams = soConnParams,
            appEnvTxIso = soTxIso,
            appEnvConsoleAssetsDir = soConsoleAssetsDir,
            appEnvConsoleSentryDsn = soConsoleSentryDsn,
            appEnvConnectionOptions = soConnectionOptions,
            appEnvWebSocketKeepAlive = soWebSocketKeepAlive,
            appEnvWebSocketConnectionInitTimeout = soWebSocketConnectionInitTimeout,
            appEnvGracefulShutdownTimeout = soGracefulShutdownTimeout,
2023-02-24 21:09:36 +03:00
            appEnvCheckFeatureFlag = checkFeatureFlag'
          }
  pure (appCtx, appEnv)
2021-09-24 01:56:37 +03:00
2020-11-24 09:10:04 +03:00
mkLoggers ::
2020-12-21 21:56:00 +03:00
  (MonadIO m, MonadBaseControl IO m) =>
  HashSet (EngineLogType Hasura) ->
  LogLevel ->
  ManagedT m Loggers
2020-11-24 09:10:04 +03:00
mkLoggers enabledLogs logLevel = do
2020-12-21 21:56:00 +03:00
  loggerCtx <- mkLoggerCtx (defaultLoggerSettings True logLevel) enabledLogs
2020-11-24 09:10:04 +03:00
  let logger = mkLogger loggerCtx
      pgLogger = mkPGLogger logger
  return $ Loggers loggerCtx logger pgLogger
2019-11-26 15:14:21 +03:00
2020-06-19 09:42:32 +03:00
-- | helper function to initialize or migrate the @hdb_catalog@ schema (used by pro as well)
2021-01-29 08:48:17 +03:00
migrateCatalogSchema ::
2021-10-13 19:38:56 +03:00
  (MonadIO m, MonadBaseControl IO m) =>
2021-02-13 03:05:23 +03:00
  Env.Environment ->
2021-08-24 10:36:32 +03:00
  Logger Hasura ->
  PG.PGPool ->
2021-08-24 10:36:32 +03:00
  Maybe (SourceConnConfiguration ('Postgres 'Vanilla)) ->
2022-09-27 12:24:19 +03:00
  Blocklist ->
2021-08-24 10:36:32 +03:00
  ServerConfigCtx ->
2022-01-04 14:53:50 +03:00
  SourceResolver ('Postgres 'Vanilla) ->
  SourceResolver ('MSSQL) ->
2022-08-09 14:42:12 +03:00
  ExtensionsSchema ->
2022-03-14 21:31:46 +03:00
  m RebuildableSchemaCache
2020-06-19 09:42:32 +03:00
migrateCatalogSchema
2021-08-24 10:36:32 +03:00
  env
  logger
2021-01-29 08:48:17 +03:00
  pool
  defaultSourceConfig
2022-09-27 12:24:19 +03:00
  blockList
2021-08-06 20:05:17 +03:00
  serverConfigCtx
2022-01-04 14:53:50 +03:00
  pgSourceResolver
2022-08-09 14:42:12 +03:00
  mssqlSourceResolver
  extensionsSchema = do
2020-12-28 15:56:00 +03:00
    initialiseResult <- runExceptT $ do
2021-02-18 19:46:14 +03:00
      -- TODO: should we allow the migration to happen during maintenance mode?
      -- Allowing this can be a sanity check, to see if the hdb_catalog in the
      -- DB has been set correctly
2022-03-14 21:31:46 +03:00
      currentTime <- liftIO Clock.getCurrentTime
2021-02-18 19:46:14 +03:00
      (migrationResult, metadata) <-
        PG.runTx pool (PG.Serializable, Just PG.ReadWrite) $
2021-02-18 19:46:14 +03:00
          migrateCatalog
            defaultSourceConfig
2022-08-09 14:42:12 +03:00
            extensionsSchema
2021-02-18 19:46:14 +03:00
            (_sccMaintenanceMode serverConfigCtx)
            currentTime
2022-09-27 12:24:19 +03:00
      let tlsAllowList = networkTlsAllowlist $ _metaNetwork metadata
      httpManager <- liftIO $ mkHttpManager (pure tlsAllowList) blockList
2021-02-18 19:46:14 +03:00
      let cacheBuildParams =
2022-01-04 14:53:50 +03:00
            CacheBuildParams httpManager pgSourceResolver mssqlSourceResolver serverConfigCtx
2021-11-10 17:34:22 +03:00
          buildReason = CatalogSync
2020-12-28 15:56:00 +03:00
      schemaCache <-
        runCacheBuild cacheBuildParams $
2021-11-09 17:21:48 +03:00
          buildRebuildableSchemaCacheWithReason buildReason logger env metadata
2020-12-28 15:56:00 +03:00
      pure (migrationResult, schemaCache)
2021-09-24 01:56:37 +03:00
2020-11-24 09:10:04 +03:00
    (migrationResult, schemaCache) <-
2020-06-19 09:42:32 +03:00
      initialiseResult `onLeft` \err -> do
        unLogger
          logger
          StartupLog
            { slLogLevel = LevelError,
2020-08-05 13:23:14 +03:00
              slKind = "catalog_migrate",
2020-06-19 09:42:32 +03:00
              slInfo = A.toJSON err
            }
2022-03-14 21:31:46 +03:00
        liftIO (throwErrJExit DatabaseMigrationError err)
2020-06-19 09:42:32 +03:00
    unLogger logger migrationResult
2022-03-14 21:31:46 +03:00
    pure schemaCache
2020-04-01 18:14:26 +03:00
2021-05-14 12:38:37 +03:00
-- | Event triggers live in the user's DB and other events
-- (cron, one-off and async actions)
-- live in the metadata DB, so we need a way to differentiate the
-- type of shutdown action
data ShutdownAction
  = EventTriggerShutdownAction (IO ())
2023-02-03 04:03:23 +03:00
  | MetadataDBShutdownAction (ExceptT QErr IO ())
2021-05-14 12:38:37 +03:00
2020-06-19 09:42:32 +03:00
-- | If an exception is encountered, flush the log buffer and rethrow.
-- If we do not flush the log buffer on exception, then log lines
-- may be missed.
-- See: https://github.com/hasura/graphql-engine/issues/4772
2020-07-14 22:00:58 +03:00
flushLogger :: MonadIO m => LoggerCtx impl -> m ()
flushLogger = liftIO . FL.flushLogStr . _lcLoggerSet
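-- A typical use, as seen elsewhere in this module ('action' here is a
-- placeholder for whatever computation needs the flush-on-exception guard):
--
-- > lift . flip onException (flushLogger loggerCtx) $ action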
2020-06-19 09:42:32 +03:00
2020-12-21 21:56:00 +03:00
-- | This function acts as the entrypoint for the graphql-engine webserver.
--
-- Note: at the exit of this function, or in case of a graceful server shutdown
-- (SIGTERM, or more generally, whenever the shutdown latch is set), we need to
-- make absolutely sure that we clean up any resources which were allocated during
-- server setup. In the case of a multitenant process, failure to do so can lead to
2020-12-28 15:56:00 +03:00
-- resource leaks.
2020-12-21 21:56:00 +03:00
--
-- To track these resources, we use the ManagedT monad, and attach finalizers at
-- the same point in the code where we allocate resources. If you fork a new
-- long-lived thread, or create a connection pool, or allocate any other
-- long-lived resource, make sure to pair the allocator with its finalizer.
-- There are plenty of examples throughout the code. For example, see
-- 'C.forkManagedT'.
--
-- Note also: the order in which the finalizers run can be important. Specifically,
-- we want the finalizers for the logger threads to run last, so that we retain as
-- many "thread stopping" log messages as possible. The order in which the
-- finalizers are run is determined by the order in which they are introduced in the
-- code.
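--
-- As a small illustration of that pattern (the resource and the names
-- 'openResource' / 'closeResource' are hypothetical, but the shape matches
-- the 'allocate' calls used throughout this module):
--
-- > resource <- allocate (liftIO openResource) (liftIO . closeResource)
--
-- The second argument is the finalizer, registered at the same point where
-- the resource is allocated, and run when the 'ManagedT' computation winds
-- down, following the ordering described above.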
2021-09-24 01:56:37 +03:00
2020-12-02 09:16:05 +03:00
{- HLINT ignore runHGEServer "Avoid lambda" -}
2023-02-20 20:41:55 +03:00
{- HLINT ignore runHGEServer "Use withAsync" -}
2019-11-26 15:14:21 +03:00
runHGEServer ::
2023-02-28 12:09:31 +03:00
  forall m.
2021-10-13 19:38:56 +03:00
  ( MonadIO m,
Rewrite OpenAPI
### Description
This PR rewrites OpenAPI to be more idiomatic. Some noteworthy changes:
- we accumulate all required information during the Analyze phase, to avoid having to do a single lookup in the schema cache during the OpenAPI generation phase (we now only need the schema cache as input to run the analysis)
- we no longer build intermediary endpoint information and aggregate it, we directly build the `PathItem` for each endpoint; additionally, that means we no longer have to assume that different methods have the same metadata
- we no longer have to first declare types, then craft references: we do everything in one step
- we now properly deal with nullability by treating "typeName" and "typeName!" as different
- we add a bunch of additional fields in the generated "schema", such as title
- we do now support enum values in both input and output positions
- checking whether the request body is required is now performed on the fly rather than by introspecting the generated schema
- the methods in the file are sorted by topic
### Controversial point
However, this PR creates some additional complexity, that we might not want to keep. The main complexity is _knot-tying_: to avoid lookups when generating the OpenAPI, it builds an actual graph of input types, which means that we need something similar to (but simpler than) `MonadSchema`, to avoid infinite recursions when analyzing the input types of a query. To do this, this PR introduces `CircularT`, a lesser `SchemaT` that aims to avoid ever having to reinvent this particular wheel again.
### Remaining work
- [x] fix existing tests (they are all failing due to some of the schema changes)
- [ ] add tests to cover the new features:
- [x] tests for `CircularT`
- [ ] tests for enums in output schemas
- [x] extract / document `CircularT` if we wish to keep it
- [x] add more comments to `OpenAPI`
- [x] have a second look at `buildVariableSchema`
- [x] fix all missing diagnostics in `Analyze`
- [x] add a Changelog entry?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4654
Co-authored-by: David Overton <7734777+dmoverton@users.noreply.github.com>
GitOrigin-RevId: f4a9191f22dfcc1dccefd6a52f5c586b6ad17172
2022-06-30 15:55:56 +03:00
    MonadFix m,
2020-07-14 22:00:58 +03:00
    MonadMask m,
2019-11-26 15:14:21 +03:00
    MonadStateless IO m,
2020-06-16 18:23:06 +03:00
    LA.Forall (LA.Pure m),
2020-07-15 13:40:48 +03:00
    UserAuthentication (Tracing.TraceT m),
2019-11-26 15:14:21 +03:00
    HttpLog m,
    ConsoleRenderer m,
2022-12-07 14:28:58 +03:00
    MonadVersionAPIWithExtraData m,
2021-01-07 12:04:22 +03:00
    MonadMetadataApiAuthorization m,
2020-06-16 18:23:06 +03:00
    MonadGQLExecutionCheck m,
    MonadConfigApiHandler m,
2020-07-14 22:00:58 +03:00
    MonadQueryLog m,
2020-06-19 09:42:32 +03:00
    WS.MonadWSLog m,
2020-07-15 13:40:48 +03:00
    MonadExecuteQuery m,
    Tracing.HasReporter m,
2020-12-03 07:06:22 +03:00
    HasResourceLimits m,
2023-02-03 04:03:23 +03:00
    MonadMetadataStorageQueryAPI m,
2020-12-28 15:56:00 +03:00
    MonadResolveSource m,
2022-09-09 11:26:44 +03:00
    EB.MonadQueryTags m,
    MonadEventLogCleanup m,
    ProvidesHasuraServices m
2019-11-26 15:14:21 +03:00
  ) =>
2023-02-24 21:09:36 +03:00
  (AppContext -> Spock.SpockT m ()) ->
  AppContext ->
  AppEnv ->
2019-11-26 15:14:21 +03:00
  -- | start time
  UTCTime ->
2022-08-05 14:50:11 +03:00
  -- | A hook which can be called to indicate when the server is started successfully
  Maybe (IO ()) ->
2023-01-06 12:33:13 +03:00
  EKG.Store EKG.EmptyMetrics ->
2020-12-21 21:56:00 +03:00
  ManagedT m ()
2023-02-28 12:09:31 +03:00
runHGEServer setupHook appCtx appEnv@AppEnv {..} initTime startupStatusHook ekgStore = do
2021-10-20 23:01:22 +03:00
  waiApplication <-
2023-02-28 12:09:31 +03:00
    mkHGEServer setupHook appCtx appEnv ekgStore
2021-10-20 23:01:22 +03:00
2023-02-24 21:09:36 +03:00
  let logger = _lsLogger appEnvLoggers
2022-08-05 14:50:11 +03:00
  -- `startupStatusHook`: add `Service started successfully` message to config_status
  -- table when a tenant starts up in multitenant
2021-10-20 23:01:22 +03:00
  let warpSettings :: Warp.Settings
      warpSettings =
2023-02-28 12:09:31 +03:00
        Warp.setPort (_getPort appEnvPort)
          . Warp.setHost appEnvHost
2021-10-20 23:01:22 +03:00
          . Warp.setGracefulShutdownTimeout (Just 30) -- 30s graceful shutdown
          . Warp.setInstallShutdownHandler shutdownHandler
2022-10-04 00:49:32 +03:00
          . Warp.setBeforeMainLoop (for_ startupStatusHook id)
2021-10-20 23:01:22 +03:00
          . setForkIOWithMetrics
          $ Warp.defaultSettings
      setForkIOWithMetrics :: Warp.Settings -> Warp.Settings
      setForkIOWithMetrics = Warp.setFork \f -> do
        void $
          C.forkIOWithUnmask
            ( \unmask ->
                bracket_
2022-07-24 00:18:01 +03:00
                  ( do
2023-02-24 21:09:36 +03:00
                      EKG.Gauge.inc (smWarpThreads appEnvServerMetrics)
                      incWarpThreads (pmConnections appEnvPrometheusMetrics)
2022-07-24 00:18:01 +03:00
                  )
                  ( do
2023-02-24 21:09:36 +03:00
                      EKG.Gauge.dec (smWarpThreads appEnvServerMetrics)
                      decWarpThreads (pmConnections appEnvPrometheusMetrics)
2022-07-24 00:18:01 +03:00
                  )
2021-10-20 23:01:22 +03:00
                  (f unmask)
            )
      shutdownHandler :: IO () -> IO ()
      shutdownHandler closeSocket =
        LA.link =<< LA.async do
2023-02-24 21:09:36 +03:00
          waitForShutdown appEnvShutdownLatch
2022-11-29 04:00:28 +03:00
          unLogger logger $ mkGenericLog @Text LevelInfo "server" "gracefully shutting down server"
2021-10-20 23:01:22 +03:00
          closeSocket
2022-10-20 11:29:14 +03:00
  finishTime <- liftIO Clock.getCurrentTime
  let apiInitTime = realToFrac $ Clock.diffUTCTime finishTime initTime
  unLogger logger $
    mkGenericLog LevelInfo "server" $
      StartupTimeInfo "starting API server" apiInitTime
2021-10-20 23:01:22 +03:00
  -- Here we block until the shutdown latch 'MVar' is filled, and then
  -- shut down the server. Once this blocking call returns, we'll tidy up
  -- any resources using the finalizers attached using 'ManagedT' above.
  -- Structuring things using the shutdown latch in this way lets us decide
  -- elsewhere exactly how we want to control shutdown.
  liftIO $ Warp.runSettings warpSettings waiApplication
-- | Part of a factorization of 'runHGEServer' to expose the constructed WAI
-- application for testing purposes. See 'runHGEServer' for documentation.
mkHGEServer ::
2023-02-28 12:09:31 +03:00
  forall m.
2021-10-20 23:01:22 +03:00
  ( MonadIO m,
    MonadFix m,
2021-10-20 23:01:22 +03:00
    MonadMask m,
    MonadStateless IO m,
    LA.Forall (LA.Pure m),
    UserAuthentication (Tracing.TraceT m),
    HttpLog m,
    ConsoleRenderer m,
2022-12-07 14:28:58 +03:00
    MonadVersionAPIWithExtraData m,
2021-10-20 23:01:22 +03:00
    MonadMetadataApiAuthorization m,
    MonadGQLExecutionCheck m,
    MonadConfigApiHandler m,
    MonadQueryLog m,
    WS.MonadWSLog m,
    MonadExecuteQuery m,
    Tracing.HasReporter m,
    HasResourceLimits m,
2023-02-03 04:03:23 +03:00
    MonadMetadataStorageQueryAPI m,
2021-10-20 23:01:22 +03:00
    MonadResolveSource m,
2022-09-09 11:26:44 +03:00
    EB.MonadQueryTags m,
    MonadEventLogCleanup m,
    ProvidesHasuraServices m
2021-10-20 23:01:22 +03:00
  ) =>
2023-02-24 21:09:36 +03:00
  (AppContext -> Spock.SpockT m ()) ->
  AppContext ->
  AppEnv ->
2021-10-20 23:01:22 +03:00
  EKG.Store EKG.EmptyMetrics ->
  ManagedT m Application
2023-02-28 12:09:31 +03:00
mkHGEServer setupHook appCtx@AppContext {..} appEnv@AppEnv {..} ekgStore = do
2020-06-16 20:44:59 +03:00
  -- Comment this to enable expensive assertions from "GHC.AssertNF". These
  -- will log lines to STDOUT containing "not in normal form". In the future we
  -- could try to integrate this into our tests. For now this is a development
  -- tool.
2020-03-18 04:31:22 +03:00
  --
  -- NOTE: be sure to compile WITHOUT code coverage, for this to work properly.
  liftIO disableAssertNF
2023-02-28 12:09:31 +03:00
  let Loggers loggerCtx logger _ = appEnvLoggers
2023-01-06 09:39:10 +03:00
2021-03-31 13:39:01 +03:00
  HasuraApp app cacheRef actionSubState stopWsServer <-
    lift $
      flip onException (flushLogger loggerCtx) $
2021-02-13 03:05:23 +03:00
        mkWaiApp
          setupHook
2023-02-24 21:09:36 +03:00
          appCtx
          appEnv
2023-01-06 12:33:13 +03:00
          ekgStore
2021-02-18 19:46:14 +03:00
2023-01-06 12:33:13 +03:00
  -- Init ServerConfigCtx
2021-02-18 19:46:14 +03:00
  let serverConfigCtx =
        ServerConfigCtx
2023-02-28 12:09:31 +03:00
          acFunctionPermsCtx
          acRemoteSchemaPermsCtx
          acSQLGenCtx
          appEnvEnableMaintenanceMode
          acExperimentalFeatures
          appEnvEventingMode
          appEnvEnableReadOnlyMode
          acDefaultNamingConvention
          acMetadataDefaults
2023-02-24 21:09:36 +03:00
          appEnvCheckFeatureFlag
2019-11-26 15:14:21 +03:00
2021-05-25 13:49:59 +03:00
  -- Log a warning if deprecated environment variables are used
2022-03-09 01:59:28 +03:00
  sources <- scSources <$> liftIO (getSchemaCache cacheRef)
2023-02-28 12:09:31 +03:00
  liftIO $ logDeprecatedEnvVars logger acEnvironment sources
2021-05-25 13:49:59 +03:00
2019-11-26 15:14:21 +03:00
  -- log inconsistent schema objects
2022-03-09 01:59:28 +03:00
  inconsObjs <- scInconsistentObjs <$> liftIO (getSchemaCache cacheRef)
  liftIO $ logInconsistentMetadata logger inconsObjs
2019-11-26 15:14:21 +03:00
2021-07-27 08:41:16 +03:00
  -- NOTE: `newLogTVar` is being used to make sure that the metadata logger runs only once
  -- while logging errors or any `inconsistent_metadata` logs.
  newLogTVar <- liftIO $ STM.newTVarIO False
2021-11-30 15:31:27 +03:00
2020-11-24 09:10:04 +03:00
  -- Start a background thread for processing schema sync events present in the '_sscSyncEventRef'
2021-04-06 06:25:02 +03:00
  _ <-
    startSchemaSyncProcessorThread
      logger
2023-02-24 21:09:36 +03:00
      appEnvMetaVersionRef
2021-04-06 06:25:02 +03:00
      cacheRef
2023-02-24 21:09:36 +03:00
      appEnvInstanceId
2021-07-27 08:41:16 +03:00
      serverConfigCtx
      newLogTVar
2021-09-24 01:56:37 +03:00
2023-02-28 12:09:31 +03:00
  case appEnvEventingMode of
2021-11-30 15:31:27 +03:00
    EventingEnabled -> do
2023-02-24 21:09:36 +03:00
      startEventTriggerPollerThread logger appEnvLockedEventsCtx cacheRef
      startAsyncActionsPollerThread logger appEnvLockedEventsCtx cacheRef actionSubState
2021-05-14 12:38:37 +03:00
2023-02-07 15:22:08 +03:00
      -- Create logger for logging the statistics of fetched cron triggers
      fetchedCronTriggerStatsLogger <-
        allocate
          (createFetchedCronTriggerStatsLogger logger)
          (closeFetchedCronTriggersStatsLogger logger)
2021-11-30 15:31:27 +03:00
      -- start a background thread to create new cron events
      _cronEventsThread <-
        C.forkManagedT "runCronEventsGenerator" logger $
2023-02-07 15:22:08 +03:00
          runCronEventsGenerator logger fetchedCronTriggerStatsLogger (getSchemaCache cacheRef)
2021-09-24 01:56:37 +03:00
2023-02-24 21:09:36 +03:00
      startScheduledEventsPollerThread logger appEnvLockedEventsCtx cacheRef
2021-11-30 15:31:27 +03:00
    EventingDisabled ->
2022-11-29 04:00:28 +03:00
      unLogger logger $ mkGenericLog @Text LevelInfo "server" "starting in eventing disabled mode"
2020-05-13 15:33:16 +03:00
2019-11-26 15:14:21 +03:00
  -- start a background thread to check for updates
2021-05-14 12:38:37 +03:00
  _updateThread <-
    C.forkManagedT "checkForUpdates" logger $
      liftIO $
2023-02-24 21:09:36 +03:00
        checkForUpdates loggerCtx appEnvManager
2019-11-26 15:14:21 +03:00
2022-11-23 19:40:21 +03:00
  -- Start a background thread for source pings
  _sourcePingPoller <-
    C.forkManagedT "sourcePingPoller" logger $ do
      let pingLog =
2022-11-29 04:00:28 +03:00
            unLogger logger . mkGenericLog @String LevelInfo "sources-ping"
2022-11-23 19:40:21 +03:00
      liftIO
        ( runPingSources
2023-02-28 12:09:31 +03:00
            acEnvironment
2022-11-23 19:40:21 +03:00
            pingLog
            (scSourcePingConfig <$> getSchemaCache cacheRef)
        )
2019-11-26 15:14:21 +03:00
  -- start a background thread for telemetry
2020-12-21 21:56:00 +03:00
  _telemetryThread <-
2023-02-28 12:09:31 +03:00
    if isTelemetryEnabled acEnableTelemetry
2020-07-30 05:34:50 +03:00
      then do
2022-11-29 04:00:28 +03:00
        lift . unLogger logger $ mkGenericLog @Text LevelInfo "telemetry" telemetryNotice
2021-09-24 01:56:37 +03:00
2022-03-14 21:31:46 +03:00
        dbUid <-
2023-02-03 04:03:23 +03:00
          getMetadataDbUid `onLeftM` throwErrJExit DatabaseMigrationError
2022-03-14 21:31:46 +03:00
        pgVersion <-
2023-02-24 21:09:36 +03:00
          liftIO (runExceptT $ PG.runTx appEnvMetadataDbPool (PG.ReadCommitted, Nothing) $ getPgVersion)
2023-02-03 04:03:23 +03:00
            `onLeftM` throwErrJExit DatabaseMigrationError
2021-09-24 01:56:37 +03:00
2021-05-14 12:38:37 +03:00
        telemetryThread <-
          C.forkManagedT "runTelemetry" logger $
2022-03-14 21:31:46 +03:00
            liftIO $
2023-02-28 12:09:31 +03:00
              runTelemetry logger appEnvManager (getSchemaCache cacheRef) dbUid appEnvInstanceId pgVersion acExperimentalFeatures
2020-07-30 05:34:50 +03:00
        return $ Just telemetryThread
      else return Nothing
2021-10-20 23:01:22 +03:00
  -- These cleanup actions are not directly associated with any
  -- resource, but we still need to make sure we clean them up here.
  allocate_ (pure ()) (liftIO stopWsServer)
2021-09-24 01:56:37 +03:00
2021-10-20 23:01:22 +03:00
  pure app
2019-11-26 15:14:21 +03:00
  where
2022-06-07 10:08:53 +03:00
    isRetryRequired _ resp = do
      return $ case resp of
        Right _ -> False
        Left err -> qeCode err == ConcurrentUpdate
2020-11-25 13:56:44 +03:00
    prepareScheduledEvents (Logger logger) = do
2022-11-29 04:00:28 +03:00
      liftIO $ logger $ mkGenericLog @Text LevelInfo "scheduled_triggers" "preparing data"
2023-02-03 04:03:23 +03:00
      res <- Retry.retrying Retry.retryPolicyDefault isRetryRequired (return unlockAllLockedScheduledEvents)
2022-11-29 04:00:28 +03:00
      onLeft res (\err -> logger $ mkGenericLog @String LevelError "scheduled_triggers" (show $ qeError err))
2020-07-02 14:57:09 +03:00
2021-05-14 12:38:37 +03:00
    getProcessingScheduledEventsCount :: LockedEventsCtx -> IO Int
    getProcessingScheduledEventsCount LockedEventsCtx {..} = do
      processingCronEvents <- readTVarIO leCronEvents
      processingOneOffEvents <- readTVarIO leOneOffEvents
      return $ length processingOneOffEvents + length processingCronEvents
2021-09-24 01:56:37 +03:00
2021-05-14 12:38:37 +03:00
    shutdownEventTriggerEvents ::
2021-09-20 10:34:59 +03:00
      [BackendSourceInfo] ->
2020-07-02 14:57:09 +03:00
      Logger Hasura ->
      LockedEventsCtx ->
      IO ()
2021-09-20 10:34:59 +03:00
    shutdownEventTriggerEvents sources (Logger logger) LockedEventsCtx {..} = do
2021-05-14 12:38:37 +03:00
      -- TODO: is this correct?
      -- event triggers should be tied to the life cycle of a source
2021-09-20 10:34:59 +03:00
      lockedEvents <- readTVarIO leEvents
      forM_ sources $ \backendSourceInfo -> do
2023-01-16 20:19:45 +03:00
        AB.dispatchAnyBackend @BackendEventTrigger backendSourceInfo \(SourceInfo sourceName _ _ _ sourceConfig _ _ :: SourceInfo b) -> do
2022-07-01 14:47:20 +03:00
          let sourceNameText = sourceNameToText sourceName
          logger $ mkGenericLog LevelInfo "event_triggers" $ "unlocking events of source: " <> sourceNameText
2022-10-04 00:49:32 +03:00
          for_ (HM.lookup sourceName lockedEvents) $ \sourceLockedEvents -> do
2022-06-30 14:26:10 +03:00
            -- No need to execute unlockEventsTx when events are not present
2022-10-04 00:49:32 +03:00
            for_ (NE.nonEmptySet sourceLockedEvents) $ \nonEmptyLockedEvents -> do
2022-06-30 14:26:10 +03:00
              res <- Retry.retrying Retry.retryPolicyDefault isRetryRequired (return $ unlockEventsInSource @b sourceConfig nonEmptyLockedEvents)
              case res of
                Left err ->
                  logger $
2022-07-01 14:47:20 +03:00
                    mkGenericLog LevelWarn "event_trigger" $
                      "Error while unlocking event trigger events of source: " <> sourceNameText <> " error: " <> showQErr err
2022-06-30 14:26:10 +03:00
                Right count ->
                  logger $
2022-07-01 14:47:20 +03:00
                    mkGenericLog LevelInfo "event_trigger" $
                      tshow count <> " events of source " <> sourceNameText <> " were successfully unlocked"
2021-09-24 01:56:37 +03:00
2021-05-14 12:38:37 +03:00
    shutdownAsyncActions ::
      LockedEventsCtx ->
2023-02-03 04:03:23 +03:00
      ExceptT QErr m ()
2021-05-14 12:38:37 +03:00
    shutdownAsyncActions lockedEventsCtx = do
      lockedActionEvents <- liftIO $ readTVarIO $ leActionEvents lockedEventsCtx
2023-02-03 04:03:23 +03:00
      liftEitherM $ setProcessingActionLogsToPending (LockedActionIdArray $ toList lockedActionEvents)
2020-07-02 14:57:09 +03:00
2021-05-14 12:38:37 +03:00
    -- This is a helper function that does a couple of things:
    --
    -- 1. When the value of the `graceful-shutdown-timeout` > 0, we poll
    --    the in-flight events queue we maintain, using `processingEventsCountAction`
    --    to get the number of in-flight processing events: in the case of actions,
    --    these are the actions in 'processing' state, and for scheduled events,
    --    the events in 'locked' state. The in-flight events queue is polled
    --    every 5 seconds until either the graceful shutdown time is exhausted
    --    or the number of in-flight processing events reaches 0.
    -- 2. After step 1, we unlock all the events that the current graphql-engine
    --    instance attempted to process and that are still in the processing
    --    state. For actions, this means setting their status back to 'pending';
    --    for scheduled events, the status is set to 'unlocked'.
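    --
    -- For example (an illustrative invocation, mirroring the call made for the
    -- event-trigger poller below): with a graceful shutdown timeout of 60
    -- seconds, the queue is polled at most 12 times, 5 seconds apart, before
    -- the shutdown action is forced:
    --
    -- > waitForProcessingAction logger "event_triggers" countAction shutdownAction (Seconds 60)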
waitForProcessingAction ::
  Logger Hasura ->
  String ->
  IO Int ->
  ShutdownAction ->
  Seconds ->
  IO ()
waitForProcessingAction l@(Logger logger) actionType processingEventsCountAction' shutdownAction maxTimeout
  | maxTimeout <= 0 = do
      case shutdownAction of
        EventTriggerShutdownAction userDBShutdownAction -> userDBShutdownAction
        MetadataDBShutdownAction metadataDBShutdownAction ->
          runExceptT metadataDBShutdownAction >>= \case
            Left err ->
              logger $
                mkGenericLog LevelWarn (T.pack actionType) $
                  "Error while unlocking the processing "
                    <> tshow actionType
                    <> " err - "
                    <> showQErr err
            Right () -> pure ()
  | otherwise = do
      processingEventsCount <- processingEventsCountAction'
      if processingEventsCount == 0
        then
          logger $
            mkGenericLog @Text LevelInfo (T.pack actionType) $
              "All in-flight events have finished processing"
        else do
          C.sleep 5 -- sleep for 5 seconds and then repeat
          waitForProcessingAction l actionType processingEventsCountAction' shutdownAction (maxTimeout - Seconds 5)
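
-- For intuition, a hedged, self-contained sketch of the same
-- poll-until-drained-or-timeout pattern, using only 'base' (the names
-- 'waitOrShutdown' and 'countInFlight' are illustrative, not from this module):
--
-- > import Control.Concurrent (threadDelay)
-- >
-- > -- Poll 'countInFlight' every 5 seconds until it reports 0 or the timeout
-- > -- (in seconds) is exhausted; only in the latter case run the shutdown action.
-- > waitOrShutdown :: IO Int -> IO () -> Int -> IO ()
-- > waitOrShutdown countInFlight shutdown timeoutSecs
-- >   | timeoutSecs <= 0 = shutdown
-- >   | otherwise = do
-- >       n <- countInFlight
-- >       if n == 0
-- >         then putStrLn "all in-flight events drained"
-- >         else do
-- >           threadDelay (5 * 1000000) -- 5 seconds, in microseconds
-- >           waitOrShutdown countInFlight shutdown (timeoutSecs - 5)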
startEventTriggerPollerThread logger lockedEventsCtx cacheRef = do
  schemaCache <- liftIO $ getSchemaCache cacheRef
  let maxEventThreads = unrefine acEventsHttpPoolSize
      fetchInterval = milliseconds $ unrefine acEventsFetchInterval
      allSources = HM.elems $ scSources schemaCache

  unless (unrefine acEventsFetchBatchSize == 0 || fetchInterval == 0) $ do
    -- Don't start the events poller thread when fetchBatchSize or fetchInterval is 0
    -- prepare event triggers data
    eventEngineCtx <- liftIO $ atomically $ initEventEngineCtx maxEventThreads fetchInterval acEventsFetchBatchSize
    let eventsGracefulShutdownAction =
          waitForProcessingAction
            logger
            "event_triggers"
            (length <$> readTVarIO (leEvents lockedEventsCtx))
            (EventTriggerShutdownAction (shutdownEventTriggerEvents allSources logger lockedEventsCtx))
            (unrefine appEnvGracefulShutdownTimeout)

    -- Create logger for logging the statistics of events fetched
    fetchedEventsStatsLogger <-
      allocate
        (createFetchedEventsStatsLogger logger)
        (closeFetchedEventsStatsLogger logger)

    unLogger logger $ mkGenericLog @Text LevelInfo "event_triggers" "starting workers"
    void
      $ C.forkManagedTWithGracefulShutdown
        "processEventQueue"
        logger
        (C.ThreadShutdown (liftIO eventsGracefulShutdownAction))
      $ processEventQueue
        logger
        fetchedEventsStatsLogger
        appEnvManager
        (getSchemaCache cacheRef)
        eventEngineCtx
        lockedEventsCtx
        appEnvServerMetrics
        (pmEventTriggerMetrics appEnvPrometheusMetrics)
        appEnvEnableMaintenanceMode

startAsyncActionsPollerThread logger lockedEventsCtx cacheRef actionSubState = do
  -- start a background thread to handle async actions
  case acAsyncActionsFetchInterval of
    Skip -> pure () -- Don't start the poller thread
    Interval (unrefine -> sleepTime) -> do
      let label = "asyncActionsProcessor"
          asyncActionGracefulShutdownAction =
            ( liftWithStateless \lowerIO ->
                ( waitForProcessingAction
                    logger
                    "async_actions"
                    (length <$> readTVarIO (leActionEvents lockedEventsCtx))
                    (MetadataDBShutdownAction (hoist lowerIO (shutdownAsyncActions lockedEventsCtx)))
                    (unrefine appEnvGracefulShutdownTimeout)
                )
            )
      void
        $ C.forkManagedTWithGracefulShutdown
          label
          logger
          (C.ThreadShutdown asyncActionGracefulShutdownAction)
        $ asyncActionsProcessor
          acEnvironment
          logger
          (getSchemaCache cacheRef)
          (leActionEvents lockedEventsCtx)
          appEnvPrometheusMetrics
          sleepTime
          Nothing
      -- start a background thread to handle async action live queries
      void $
        C.forkManagedT "asyncActionSubscriptionsProcessor" logger $
          asyncActionSubscriptionsProcessor actionSubState

startScheduledEventsPollerThread logger lockedEventsCtx cacheRef = do
  -- prepare scheduled triggers
  lift $ prepareScheduledEvents logger

  -- Create logger for logging the statistics of scheduled events fetched
  scheduledEventsStatsLogger <-
    allocate
      (createFetchedScheduledEventsStatsLogger logger)
      (closeFetchedScheduledEventsStatsLogger logger)

  -- start a background thread to deliver the scheduled events
  -- _scheduledEventsThread <- do
  let scheduledEventsGracefulShutdownAction =
        ( liftWithStateless \lowerIO ->
            ( waitForProcessingAction
                logger
                "scheduled_events"
                (getProcessingScheduledEventsCount lockedEventsCtx)
                (MetadataDBShutdownAction (liftEitherM $ hoist lowerIO unlockAllLockedScheduledEvents))
                (unrefine appEnvGracefulShutdownTimeout)
            )
        )
  void
    $ C.forkManagedTWithGracefulShutdown
      "processScheduledTriggers"
      logger
      (C.ThreadShutdown scheduledEventsGracefulShutdownAction)
    $ processScheduledTriggers
      acEnvironment
      logger
      scheduledEventsStatsLogger
      appEnvManager
      appEnvPrometheusMetrics
      (getSchemaCache cacheRef)
      lockedEventsCtx

instance (Monad m) => Tracing.HasReporter (PGMetadataStorageAppT m)

instance (Monad m) => HasResourceLimits (PGMetadataStorageAppT m) where
  askHTTPHandlerLimit = pure $ ResourceLimits id
  askGraphqlOperationLimit _ _ _ = pure $ ResourceLimits id

instance (MonadIO m) => HttpLog (PGMetadataStorageAppT m) where
  type ExtraHttpLogMetadata (PGMetadataStorageAppT m) = ()

  emptyExtraHttpLogMetadata = ()

  buildExtraHttpLogMetadata _ _ = ()

  logHttpError logger loggingSettings userInfoM reqId waiReq req qErr headers _ =
    unLogger logger $
      mkHttpLog $
        mkHttpErrorLogContext userInfoM loggingSettings reqId waiReq req qErr Nothing Nothing headers

  logHttpSuccess logger loggingSettings userInfoM reqId waiReq reqBody response compressedResponse qTime cType headers (CommonHttpLogMetadata rb batchQueryOpLogs, ()) =
    unLogger logger $
      mkHttpLog $
        mkHttpAccessLogContext userInfoM loggingSettings reqId waiReq reqBody (BL.length response) compressedResponse qTime cType headers rb batchQueryOpLogs

instance (Monad m) => MonadExecuteQuery (PGMetadataStorageAppT m) where
  cacheLookup _ _ _ _ = pure ([], Nothing)
  cacheStore _ _ _ = pure (Right CacheStoreSkipped)

instance (MonadIO m, MonadBaseControl IO m) => UserAuthentication (Tracing.TraceT (PGMetadataStorageAppT m)) where
  resolveUserInfo logger manager headers authMode reqs =
    runExceptT $ do
      (a, b, c) <- getUserInfoWithExpTime logger manager headers authMode reqs
      pure (a, b, c, ExtraUserInfo Nothing)

accessDeniedErrMsg :: Text
accessDeniedErrMsg =
  "restricted access : admin only"

instance (Monad m) => MonadMetadataApiAuthorization (PGMetadataStorageAppT m) where
  authorizeV1QueryApi query handlerCtx = runExceptT do
    let currRole = _uiRole $ hcUser handlerCtx
    when (requiresAdmin query && currRole /= adminRoleName) $
      withPathK "args" $
        throw400 AccessDenied accessDeniedErrMsg

  authorizeV1MetadataApi _ handlerCtx = runExceptT do
    let currRole = _uiRole $ hcUser handlerCtx
    when (currRole /= adminRoleName) $
      withPathK "args" $
        throw400 AccessDenied accessDeniedErrMsg

  authorizeV2QueryApi _ handlerCtx = runExceptT do
    let currRole = _uiRole $ hcUser handlerCtx
    when (currRole /= adminRoleName) $
      withPathK "args" $
        throw400 AccessDenied accessDeniedErrMsg

instance (Monad m) => ConsoleRenderer (PGMetadataStorageAppT m) where
  renderConsole path authMode enableTelemetry consoleAssetsDir consoleSentryDsn =
    return $ mkConsoleHTML path authMode enableTelemetry consoleAssetsDir consoleSentryDsn

instance (Monad m) => MonadVersionAPIWithExtraData (PGMetadataStorageAppT m) where
  getExtraDataForVersionAPI = return []

instance (Monad m) => MonadGQLExecutionCheck (PGMetadataStorageAppT m) where
  checkGQLExecution userInfo _ enableAL sc query _ = runExceptT $ do
    req <- toParsed query
    checkQueryInAllowlist enableAL AllowlistModeGlobalOnly userInfo req sc
    return req

  executeIntrospection _ introspectionQuery _ =
    pure $ Right $ ExecStepRaw introspectionQuery

  checkGQLBatchedReqs _ _ _ _ = runExceptT $ pure ()

instance (MonadIO m, MonadBaseControl IO m) => MonadConfigApiHandler (PGMetadataStorageAppT m) where
  runConfigApiHandler = configApiGetHandler

instance (MonadIO m) => MonadQueryLog (PGMetadataStorageAppT m) where
  logQueryLog logger = unLogger logger

instance (MonadIO m) => WS.MonadWSLog (PGMetadataStorageAppT m) where
  logWSLog logger = unLogger logger

instance (Monad m) => MonadResolveSource (PGMetadataStorageAppT m) where
  getPGSourceResolver = (mkPgSourceResolver . _lsPgLogger . appEnvLoggers) <$> asks snd
  getMSSQLSourceResolver = return mkMSSQLSourceResolver

instance (Monad m) => EB.MonadQueryTags (PGMetadataStorageAppT m) where
  createQueryTags _attributes _qtSourceConfig = return emptyQueryTagsComment

instance (Monad m) => MonadEventLogCleanup (PGMetadataStorageAppT m) where
  runLogCleaner _ = pure $ throw400 NotSupported "Event log cleanup feature is enterprise edition only"
  generateCleanupSchedules _ _ _ = pure $ Right ()
  updateTriggerCleanupSchedules _ _ _ _ = pure $ Right ()

runInSeparateTx ::
  (MonadIO m) =>
  PG.TxE QErr a ->
  PGMetadataStorageAppT m (Either QErr a)
runInSeparateTx tx = do
  pool <- appEnvMetadataDbPool <$> asks snd
  liftIO $ runExceptT $ PG.runTx pool (PG.RepeatableRead, Nothing) tx
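
-- A hedged usage sketch (illustrative, not from this module): every
-- 'MonadMetadataStorage' method below reduces to this shape, so each call runs
-- in its own repeatable-read transaction against the metadata database.
-- 'countPendingEvents' is a hypothetical transaction of type 'PG.TxE QErr Int':
--
-- > do
-- >   result <- runInSeparateTx countPendingEvents
-- >   liftIO $ case result of
-- >     Left qErr -> putStrLn $ "metadata db error: " <> T.unpack (showQErr qErr)
-- >     Right n -> putStrLn $ show n <> " pending events"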

notifySchemaCacheSyncTx :: MetadataResourceVersion -> InstanceId -> CacheInvalidations -> PG.TxE QErr ()
notifySchemaCacheSyncTx (MetadataResourceVersion resourceVersion) instanceId invalidations = do
  PG.Discard () <-
    PG.withQE
      defaultTxErrorHandler
      [PG.sql|
        INSERT INTO hdb_catalog.hdb_schema_notifications (id, notification, resource_version, instance_id)
        VALUES (1, $1::json, $2, $3::uuid)
        ON CONFLICT (id) DO UPDATE SET
          notification = $1::json,
          resource_version = $2,
          instance_id = $3::uuid
      |]
      (PG.ViaJSON invalidations, resourceVersion, instanceId)
      True
  pure ()
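
-- Note: the upsert above maintains a single row (id = 1) in
-- 'hdb_catalog.hdb_schema_notifications', so listeners only ever have to watch
-- one row. A hedged sketch of the kind of query a listener might poll (the
-- actual listener is not part of this module):
--
-- > SELECT notification, resource_version, instance_id
-- >   FROM hdb_catalog.hdb_schema_notifications
-- >  WHERE id = 1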

getCatalogStateTx :: PG.TxE QErr CatalogState
getCatalogStateTx =
  mkCatalogState . PG.getRow
    <$> PG.withQE
      defaultTxErrorHandler
      [PG.sql|
        SELECT hasura_uuid::text, cli_state::json, console_state::json
        FROM hdb_catalog.hdb_version
      |]
      ()
      False
  where
    mkCatalogState (dbId, PG.ViaJSON cliState, PG.ViaJSON consoleState) =
      CatalogState dbId cliState consoleState

setCatalogStateTx :: CatalogStateType -> A.Value -> PG.TxE QErr ()
setCatalogStateTx stateTy stateValue =
  case stateTy of
    CSTCli ->
      PG.unitQE
        defaultTxErrorHandler
        [PG.sql|
          UPDATE hdb_catalog.hdb_version
          SET cli_state = $1
        |]
        (Identity $ PG.ViaJSON stateValue)
        False
    CSTConsole ->
      PG.unitQE
        defaultTxErrorHandler
        [PG.sql|
          UPDATE hdb_catalog.hdb_version
          SET console_state = $1
        |]
        (Identity $ PG.ViaJSON stateValue)
        False

-- | Each function in the type class is executed in a totally separate transaction.
instance (MonadIO m) => MonadMetadataStorage (PGMetadataStorageAppT m) where
  fetchMetadataResourceVersion = runInSeparateTx fetchMetadataResourceVersionFromCatalog
  fetchMetadata = runInSeparateTx fetchMetadataAndResourceVersionFromCatalog
  fetchMetadataNotifications a b = runInSeparateTx $ fetchMetadataNotificationsFromCatalog a b
  setMetadata r = runInSeparateTx . setMetadataInCatalog r
  notifySchemaCacheSync a b c = runInSeparateTx $ notifySchemaCacheSyncTx a b c
  getCatalogState = runInSeparateTx getCatalogStateTx
  setCatalogState a b = runInSeparateTx $ setCatalogStateTx a b

  getMetadataDbUid = runInSeparateTx getDbId
  checkMetadataStorageHealth = runInSeparateTx checkDbConnection

  getDeprivedCronTriggerStats = runInSeparateTx . getDeprivedCronTriggerStatsTx
  getScheduledEventsForDelivery = runInSeparateTx getScheduledEventsForDeliveryTx
  insertCronEvents = runInSeparateTx . insertCronEventsTx
  insertOneOffScheduledEvent = runInSeparateTx . insertOneOffScheduledEventTx
  insertScheduledEventInvocation a b = runInSeparateTx $ insertInvocationTx a b
  setScheduledEventOp a b c = runInSeparateTx $ setScheduledEventOpTx a b c
  unlockScheduledEvents a b = runInSeparateTx $ unlockScheduledEventsTx a b
  unlockAllLockedScheduledEvents = runInSeparateTx unlockAllLockedScheduledEventsTx
  clearFutureCronEvents = runInSeparateTx . dropFutureCronEventsTx
  getOneOffScheduledEvents a b c = runInSeparateTx $ getOneOffScheduledEventsTx a b c
  getCronEvents a b c d = runInSeparateTx $ getCronEventsTx a b c d
  getScheduledEventInvocations a = runInSeparateTx $ getScheduledEventInvocationsTx a
  deleteScheduledEvent a b = runInSeparateTx $ deleteScheduledEventTx a b

  insertAction a b c d = runInSeparateTx $ insertActionTx a b c d
  fetchUndeliveredActionEvents = runInSeparateTx fetchUndeliveredActionEventsTx
  setActionStatus a b = runInSeparateTx $ setActionStatusTx a b
  fetchActionResponse = runInSeparateTx . fetchActionResponseTx
  clearActionData = runInSeparateTx . clearActionDataTx
  setProcessingActionLogsToPending = runInSeparateTx . setProcessingActionLogsToPendingTx

instance (MonadIO m) => MonadMetadataStorageQueryAPI (PGMetadataStorageAppT m)

--- helper functions ---

mkConsoleHTML :: Text -> AuthMode -> TelemetryStatus -> Maybe Text -> Maybe Text -> Either String Text
mkConsoleHTML path authMode enableTelemetry consoleAssetsDir consoleSentryDsn =
  renderHtmlTemplate consoleTmplt $
    -- variables required to render the template
    A.object
      [ "isAdminSecretSet" A..= isAdminSecretSet authMode,
        "consolePath" A..= consolePath,
        "enableTelemetry" A..= boolToText (isTelemetryEnabled enableTelemetry),
        "cdnAssets" A..= boolToText (isNothing consoleAssetsDir),
        "consoleSentryDsn" A..= fromMaybe "" consoleSentryDsn,
        "assetsVersion" A..= consoleAssetsVersion,
        "serverVersion" A..= currentVersion
      ]
  where
    consolePath = case path of
      "" -> "/console"
      r -> "/console/" <> r

    consoleTmplt = $(makeRelativeToProject "src-rsr/console.html" >>= M.embedSingleTemplate)

telemetryNotice :: Text
telemetryNotice =
  "Help us improve Hasura! The graphql-engine server collects anonymized "
    <> "usage stats which allows us to keep improving Hasura at warp speed. "
    <> "To read more or opt-out, visit https://hasura.io/docs/latest/graphql/core/guides/telemetry.html"

mkPgSourceResolver :: PG.PGLogger -> SourceResolver ('Postgres 'Vanilla)
mkPgSourceResolver pgLogger env _ config = runExceptT do
  let PostgresSourceConnInfo urlConf poolSettings allowPrepare isoLevel _ = _pccConnectionInfo config
  -- If the user does not provide values for the pool settings, then use the default values
  let (maxConns, idleTimeout, retries) = getDefaultPGPoolSettingIfNotExists poolSettings defaultPostgresPoolSettings
  urlText <- resolveUrlConf env urlConf
  let connInfo = PG.ConnInfo retries $ PG.CDDatabaseURI $ txtToBs urlText
      connParams =
        PG.defaultConnParams
          { PG.cpIdleTime = idleTimeout,
            PG.cpConns = maxConns,
            PG.cpAllowPrepare = allowPrepare,
            PG.cpMbLifetime = _ppsConnectionLifetime =<< poolSettings,
            PG.cpTimeout = _ppsPoolTimeout =<< poolSettings
          }
  pgPool <- liftIO $ PG.initPGPool connInfo connParams pgLogger
  let pgExecCtx = mkPGExecCtx isoLevel pgPool NeverResizePool
  pure $ PGSourceConfig pgExecCtx connInfo Nothing mempty (_pccExtensionsSchema config) mempty Nothing

mkMSSQLSourceResolver :: SourceResolver ('MSSQL)
mkMSSQLSourceResolver env _name (MSSQLConnConfiguration connInfo _) = runExceptT do
  let MSSQLConnectionInfo iConnString MSSQLPoolSettings {..} = connInfo
      connOptions =
        MSPool.ConnectionOptions
          { _coConnections = fromMaybe defaultMSSQLMaxConnections _mpsMaxConnections,
            _coStripes = 1,
            _coIdleTime = _mpsIdleTimeout
          }
  (connString, mssqlPool) <- createMSSQLPool iConnString connOptions env
  let mssqlExecCtx = mkMSSQLExecCtx mssqlPool NeverResizePool
  pure $ MSSQLSourceConfig connString mssqlExecCtx