graphql-engine/server/src-lib/Hasura/App.hs

{-# LANGUAGE QuasiQuotes #-}
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE ViewPatterns #-}
-- | Defines the CE version of the engine.
--
-- This module contains everything that is required to run the community edition
-- of the engine: the base application monad and the implementation of all its
-- behaviour classes.
module Hasura.App
  ( ExitCode (AuthConfigurationError, DatabaseMigrationError, DowngradeProcessError, MetadataCleanError, MetadataExportError, SchemaCacheInitError),
    ExitException (ExitException),
    GlobalCtx (..),
    AppContext (..),
    PGMetadataStorageAppT (runPGMetadataStorageAppT),
    accessDeniedErrMsg,
    flushLogger,
    getCatalogStateTx,
    initGlobalCtx,
    initAuthMode,
    initialiseServerCtx,
    initSubscriptionsState,
    migrateCatalogSchema,
    mkLoggers,
    mkPGLogger,
    notifySchemaCacheSyncTx,
    parseArgs,
    throwErrExit,
    throwErrJExit,
    printJSON,
    printYaml,
    readTlsAllowlist,
    resolvePostgresConnInfo,
    runHGEServer,
    setCatalogStateTx,

    -- * Exported for testing
    mkHGEServer,
    mkPgSourceResolver,
    mkMSSQLSourceResolver,
  )
where

import Control.Concurrent.Async.Lifted.Safe qualified as LA
import Control.Concurrent.Extended qualified as C
import Control.Concurrent.STM qualified as STM
import Control.Concurrent.STM.TVar (readTVarIO)
import Control.Exception (bracket_, throwIO)
import Control.Monad.Catch
  ( Exception,
    MonadCatch,
    MonadMask,
    MonadThrow,
    onException,
  )
import Control.Monad.Morph (hoist)
import Control.Monad.STM (atomically)
import Control.Monad.Stateless
import Control.Monad.Trans.Control (MonadBaseControl (..))
import Control.Monad.Trans.Managed (ManagedT (..), allocate, allocate_)
import Control.Retry qualified as Retry
import Data.Aeson qualified as A
import Data.ByteString.Char8 qualified as BC
import Data.ByteString.Lazy qualified as BL
import Data.ByteString.Lazy.Char8 qualified as BLC
import Data.Environment qualified as Env
import Data.FileEmbed (makeRelativeToProject)
import Data.HashMap.Strict qualified as HM
import Data.Set.NonEmpty qualified as NE
import Data.Text qualified as T
import Data.Time.Clock (UTCTime)
import Data.Time.Clock qualified as Clock
import Data.Yaml qualified as Y
import Database.MSSQL.Pool qualified as MSPool
import Database.PG.Query qualified as PG
import Database.PG.Query qualified as Q
import GHC.AssertNF.CPP
import Hasura.Backends.MSSQL.Connection
import Hasura.Backends.Postgres.Connection
import Hasura.Base.Error
import Hasura.Eventing.Common
import Hasura.Eventing.EventTrigger
import Hasura.Eventing.ScheduledTrigger
import Hasura.GraphQL.Execute
  ( ExecutionStep (..),
    MonadGQLExecutionCheck (..),
    checkQueryInAllowlist,
  )
import Hasura.GraphQL.Execute.Action
import Hasura.GraphQL.Execute.Action.Subscription
import Hasura.GraphQL.Execute.Backend qualified as EB
import Hasura.GraphQL.Execute.Subscription.Poll qualified as ES
import Hasura.GraphQL.Execute.Subscription.State qualified as ES
import Hasura.GraphQL.Logging (MonadQueryLog (..))
import Hasura.GraphQL.Schema.Options qualified as Options
import Hasura.GraphQL.Transport.HTTP
  ( CacheStoreSuccess (CacheStoreSkipped),
    MonadExecuteQuery (..),
  )
import Hasura.GraphQL.Transport.HTTP.Protocol (toParsed)
import Hasura.GraphQL.Transport.WebSocket.Server qualified as WS
import Hasura.Logging
import Hasura.Metadata.Class
import Hasura.PingSources
import Hasura.Prelude
import Hasura.QueryTags
import Hasura.RQL.DDL.EventTrigger (MonadEventLogCleanup (..))
import Hasura.RQL.DDL.Schema.Cache
import Hasura.RQL.DDL.Schema.Cache.Common
import Hasura.RQL.DDL.Schema.Catalog
import Hasura.RQL.Types.Allowlist
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.Eventing.Backend
import Hasura.RQL.Types.Metadata
import Hasura.RQL.Types.Network
import Hasura.RQL.Types.ResizePool
import Hasura.RQL.Types.SchemaCache
import Hasura.RQL.Types.SchemaCache.Build
import Hasura.RQL.Types.Source
import Hasura.SQL.AnyBackend qualified as AB
import Hasura.SQL.Backend
import Hasura.Server.API.Query (requiresAdmin)
import Hasura.Server.App
import Hasura.Server.Auth
import Hasura.Server.CheckUpdates (checkForUpdates)
import Hasura.Server.Init hiding (checkFeatureFlag)
import Hasura.Server.Limits
import Hasura.Server.Logging
import Hasura.Server.Metrics (ServerMetrics (..))
import Hasura.Server.Migrate (migrateCatalog)
import Hasura.Server.Prometheus
  ( PrometheusMetrics (..),
    decWarpThreads,
    incWarpThreads,
  )
import Hasura.Server.SchemaCacheRef
  ( SchemaCacheRef,
    getSchemaCache,
    initialiseSchemaCacheRef,
    logInconsistentMetadata,
  )
import Hasura.Server.SchemaUpdate
import Hasura.Server.Telemetry
import Hasura.Server.Types
import Hasura.Server.Version
import Hasura.Services
import Hasura.Session
import Hasura.ShutdownLatch
import Hasura.Tracing qualified as Tracing
import Network.HTTP.Client qualified as HTTP
import Network.HTTP.Client.Blocklisting (Blocklist)
import Network.HTTP.Client.CreateManager (mkHttpManager)
import Network.Wai (Application)
import Network.Wai.Handler.Warp qualified as Warp
import Options.Applicative
import Refined (unrefine)
import System.Log.FastLogger qualified as FL
import System.Metrics qualified as EKG
import System.Metrics.Gauge qualified as EKG.Gauge
import Text.Mustache.Compile qualified as M
import Web.Spock.Core qualified as Spock
data ExitCode
  = -- these are used during server initialization:
    InvalidEnvironmentVariableOptionsError
  | InvalidDatabaseConnectionParamsError
  | AuthConfigurationError
  | EventSubSystemError
  | DatabaseMigrationError
  | -- | used by MT because it initialises the schema cache only
    -- these are used in app/Main.hs:
    SchemaCacheInitError
  | MetadataExportError
  | MetadataCleanError
  | ExecuteProcessError
  | DowngradeProcessError
  deriving (Show)
data ExitException = ExitException
  { eeCode :: !ExitCode,
    eeMessage :: !BC.ByteString
  }
  deriving (Show)

instance Exception ExitException
throwErrExit :: (MonadIO m) => forall a. ExitCode -> String -> m a
throwErrExit reason = liftIO . throwIO . ExitException reason . BC.pack
throwErrJExit :: (A.ToJSON a, MonadIO m) => forall b. ExitCode -> a -> m b
throwErrJExit reason = liftIO . throwIO . ExitException reason . BLC.toStrict . A.encode
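
-- A minimal usage sketch (hedged, illustrative only; the call sites below are
-- hypothetical and not taken from this module). Both helpers wrap the given
-- 'ExitCode' and payload in an 'ExitException' and throw it from IO; the JSON
-- variant encodes its payload to produce the exception message bytes.
--
-- > reportBadOptions :: (MonadIO m) => m a
-- > reportBadOptions =
-- >   throwErrExit InvalidEnvironmentVariableOptionsError "could not parse options"
-- >
-- > reportFailedExport :: (A.ToJSON err, MonadIO m) => err -> m a
-- > reportFailedExport err =
-- >   -- the JSON-encoded error becomes the exception message
-- >   throwErrJExit MetadataExportError err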
--------------------------------------------------------------------------------
-- TODO(SOLOMON): Move into `Hasura.Server.Init`. Unable to do so
-- currently due to `throwErrExit`.
-- | Parse cli arguments to graphql-engine executable.
parseArgs :: EnabledLogTypes impl => Env.Environment -> IO (HGEOptions (ServeOptions impl))
parseArgs env = do
  rawHGEOpts <- execParser opts
  let eitherOpts = runWithEnv (Env.toList env) $ mkHGEOptions rawHGEOpts
  onLeft eitherOpts $ throwErrExit InvalidEnvironmentVariableOptionsError
  where
    opts =
      info
        (helper <*> parseHgeOpts)
        ( fullDesc
            <> header "Hasura GraphQL Engine: Blazing fast, instant realtime GraphQL APIs on your DB with fine grained access control, also trigger webhooks on database events."
            <> footerDoc (Just mainCmdFooter)
        )
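
-- A hedged sketch of how 'parseArgs' might be driven from an executable's
-- entry point (the @demoParseArgs@ wrapper and the choice of the OSS log type
-- 'Hasura' are assumptions for illustration, not this project's actual
-- @main@): CLI flags are parsed first, then merged with the process
-- environment via 'mkHGEOptions' / 'runWithEnv', exiting with
-- 'InvalidEnvironmentVariableOptionsError' on failure.
--
-- > demoParseArgs :: IO (HGEOptions (ServeOptions Hasura))
-- > demoParseArgs = do
-- >   env <- Env.getEnvironment
-- >   parseArgs env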
--------------------------------------------------------------------------------
printJSON :: (A.ToJSON a, MonadIO m) => a -> m ()
printJSON = liftIO . BLC.putStrLn . A.encode
printYaml :: (A.ToJSON a, MonadIO m) => a -> m ()
printYaml = liftIO . BC.putStrLn . Y.encode
mkPGLogger :: Logger Hasura -> PG.PGLogger
mkPGLogger (Logger logger) (PG.PLERetryMsg msg) =
  logger $ PGLog LevelWarn msg
-- | Context required for all graphql-engine CLI commands
data GlobalCtx = GlobalCtx
  { _gcMetadataDbConnInfo :: !PG.ConnInfo,
    -- | --database-url option, @'UrlConf' is required to construct default source configuration
    -- and optional retries
    _gcDefaultPostgresConnInfo :: !(Maybe (UrlConf, PG.ConnInfo), Maybe Int)
  }
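
-- A hedged illustration of the shape of 'GlobalCtx' (the field values are
-- made up for the example; in practice the record is built by 'initGlobalCtx'
-- below): no default Postgres source is configured, but three connection
-- retries were requested on the command line.
--
-- > exampleGlobalCtx :: PG.ConnInfo -> GlobalCtx
-- > exampleGlobalCtx metadataConnInfo =
-- >   GlobalCtx
-- >     { _gcMetadataDbConnInfo = metadataConnInfo,
-- >       _gcDefaultPostgresConnInfo = (Nothing, Just 3)
-- >     }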
readTlsAllowlist :: SchemaCacheRef -> IO [TlsAllow]
readTlsAllowlist scRef = scTlsAllowlist <$> getSchemaCache scRef
initGlobalCtx ::
  (MonadIO m) =>
  Env.Environment ->
  -- | the metadata DB URL
  Maybe String ->
  -- | the user's DB URL
  PostgresConnInfo (Maybe UrlConf) ->
  m GlobalCtx
initGlobalCtx env metadataDbUrl defaultPgConnInfo = do
  let PostgresConnInfo dbUrlConf maybeRetries = defaultPgConnInfo
      mkConnInfoFromSource dbUrl = do
        resolvePostgresConnInfo env dbUrl maybeRetries

      mkConnInfoFromMDb mdbUrl =
        let retries = fromMaybe 1 maybeRetries
         in (PG.ConnInfo retries . PG.CDDatabaseURI . txtToBs . T.pack) mdbUrl
      mkGlobalCtx mdbConnInfo sourceConnInfo =
        pure $ GlobalCtx mdbConnInfo (sourceConnInfo, maybeRetries)
case (metadataDbUrl, dbUrlConf) of
(Nothing, Nothing) ->
throwErrExit
InvalidDatabaseConnectionParamsError
"Fatal Error: Either of --metadata-database-url or --database-url option expected"
-- If no metadata storage is specified, use the default database as
-- the metadata storage
(Nothing, Just dbUrl) -> do
connInfo <- mkConnInfoFromSource dbUrl
mkGlobalCtx connInfo $ Just (dbUrl, connInfo)
(Just mdUrl, Nothing) -> do
let mdConnInfo = mkConnInfoFromMDb mdUrl
mkGlobalCtx mdConnInfo Nothing
(Just mdUrl, Just dbUrl) -> do
srcConnInfo <- mkConnInfoFromSource dbUrl
let mdConnInfo = mkConnInfoFromMDb mdUrl
mkGlobalCtx mdConnInfo (Just (dbUrl, srcConnInfo))
-- | Base application context.
--
-- This defines all base information required to run the engine.
data AppContext = AppContext
{ _acHTTPManager :: HTTP.Manager,
_acPGLogger :: PG.PGLogger,
_acPGPool :: PG.PGPool
}
-- | An application with Postgres database as a metadata storage
newtype PGMetadataStorageAppT m a = PGMetadataStorageAppT {runPGMetadataStorageAppT :: AppContext -> m a}
deriving
( Functor,
Applicative,
Monad,
MonadIO,
MonadFix,
MonadCatch,
MonadThrow,
MonadMask,
HasServerConfigCtx,
MonadReader AppContext,
MonadBase b,
MonadBaseControl b
)
via (ReaderT AppContext m)
deriving
( MonadTrans
)
via (ReaderT AppContext)
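-- Since 'PGMetadataStorageAppT' is simply a function from 'AppContext',
-- running it amounts to applying the record accessor. A minimal sketch
-- ('runApp' is a hypothetical helper, not part of this module):
--
-- > runApp :: AppContext -> PGMetadataStorageAppT m a -> m a
-- > runApp appCtx action = runPGMetadataStorageAppT action appCtx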
instance Monad m => ProvidesNetwork (PGMetadataStorageAppT m) where
askHTTPManager = asks _acHTTPManager
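-- With this instance, code that makes an outgoing HTTP call can ask the monad
-- for the manager instead of receiving it as an argument. A minimal sketch;
-- 'fetchStatus' is hypothetical, and the imports are shown only to keep the
-- example self-contained:
--
-- > import qualified Network.HTTP.Client as HTTP
-- > import Network.HTTP.Types.Status (statusCode)
-- >
-- > fetchStatus :: (MonadIO m, ProvidesNetwork m) => String -> m Int
-- > fetchStatus url = do
-- >   manager <- askHTTPManager
-- >   request <- liftIO $ HTTP.parseRequest url
-- >   response <- liftIO $ HTTP.httpLbs request manager
-- >   pure $ statusCode $ HTTP.responseStatus response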
resolvePostgresConnInfo ::
(MonadIO m) => Env.Environment -> UrlConf -> Maybe Int -> m PG.ConnInfo
resolvePostgresConnInfo env dbUrlConf maybeRetries = do
dbUrlText <-
runExcept (resolveUrlConf env dbUrlConf) `onLeft` \err ->
liftIO (throwErrJExit InvalidDatabaseConnectionParamsError err)
pure $ PG.ConnInfo retries $ PG.CDDatabaseURI $ txtToBs dbUrlText
where
retries = fromMaybe 1 maybeRetries
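-- For example (illustrative only), resolving a connection string without an
-- explicit retry count yields a 'PG.ConnInfo' with the default of 1 retry:
--
-- > connInfo <- resolvePostgresConnInfo env dbUrlConf Nothing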
initAuthMode ::
(C.ForkableMonadIO m, Tracing.HasReporter m) =>
ServeOptions impl ->
HTTP.Manager ->
Logger Hasura ->
m AuthMode
initAuthMode ServeOptions {..} httpManager logger = do
authModeRes <-
runExceptT $
setupAuthMode
soAdminSecret
soAuthHook
soJwtSecret
soUnAuthRole
logger
httpManager
authMode <- onLeft authModeRes (throwErrExit AuthConfigurationError . T.unpack)
-- Fork a dedicated polling thread to dynamically fetch the latest JWK settings
-- set by the user and update the JWK accordingly. This allows the updates to be
-- applied without restarting HGE.
_ <- C.forkImmortal "update JWK" logger $ updateJwkCtx authMode httpManager logger
return authMode
initSubscriptionsState ::
ServeOptions impl ->
Logger Hasura ->
Maybe ES.SubscriptionPostPollHook ->
IO ES.SubscriptionsState
initSubscriptionsState ServeOptions {..} logger liveQueryHook = ES.initSubscriptionsState soLiveQueryOpts soStreamingQueryOpts postPollHook
where
postPollHook = fromMaybe (ES.defaultSubscriptionPostPollHook logger) liveQueryHook
-- | Initializes or migrates the catalog and returns the context required to start the server.
initialiseServerCtx ::
(C.ForkableMonadIO m, MonadCatch m) =>
Env.Environment ->
GlobalCtx ->
ServeOptions Hasura ->
Maybe ES.SubscriptionPostPollHook ->
ServerMetrics ->
PrometheusMetrics ->
Tracing.SamplingPolicy ->
(FeatureFlag -> IO Bool) ->
ManagedT m ServerCtx
initialiseServerCtx env GlobalCtx {..} serveOptions@ServeOptions {..} liveQueryHook serverMetrics prometheusMetrics traceSamplingPolicy checkFeatureFlag = do
instanceId <- liftIO generateInstanceId
latch <- liftIO newShutdownLatch
loggers@(Loggers loggerCtx logger pgLogger) <- mkLoggers soEnabledLogTypes soLogLevel
when (null soAdminSecret) $ do
let errMsg :: Text
errMsg = "WARNING: No admin secret provided"
unLogger logger $
StartupLog
{ slLogLevel = LevelWarn,
slKind = "no_admin_secret",
slInfo = A.toJSON errMsg
}
-- log serve options
unLogger logger $ serveOptsToLog serveOptions
-- log postgres connection info
unLogger logger $ connInfoToLog _gcMetadataDbConnInfo
metadataDbPool <-
allocate
(liftIO $ PG.initPGPool _gcMetadataDbConnInfo soConnParams pgLogger)
(liftIO . PG.destroyPGPool)
let maybeDefaultSourceConfig =
fst _gcDefaultPostgresConnInfo <&> \(dbUrlConf, _) ->
let connSettings =
PostgresPoolSettings
{ _ppsMaxConnections = Just $ PG.cpConns soConnParams,
_ppsTotalMaxConnections = Nothing,
_ppsIdleTimeout = Just $ PG.cpIdleTime soConnParams,
_ppsRetries = snd _gcDefaultPostgresConnInfo <|> Just 1,
_ppsPoolTimeout = PG.cpTimeout soConnParams,
_ppsConnectionLifetime = PG.cpMbLifetime soConnParams
}
sourceConnInfo = PostgresSourceConnInfo dbUrlConf (Just connSettings) (PG.cpAllowPrepare soConnParams) soTxIso Nothing
in PostgresConnConfiguration sourceConnInfo Nothing defaultPostgresExtensionsSchema Nothing mempty
optimizePermissionFilters
| EFOptimizePermissionFilters `elem` soExperimentalFeatures = Options.OptimizePermissionFilters
| otherwise = Options.Don'tOptimizePermissionFilters
bigqueryStringNumericInput
| EFBigQueryStringNumericInput `elem` soExperimentalFeatures = Options.EnableBigQueryStringNumericInput
| otherwise = Options.DisableBigQueryStringNumericInput
sqlGenCtx = SQLGenCtx soStringifyNum soDangerousBooleanCollapse optimizePermissionFilters bigqueryStringNumericInput
let serverConfigCtx =
ServerConfigCtx
soInferFunctionPermissions
soEnableRemoteSchemaPermissions
sqlGenCtx
soEnableMaintenanceMode
soExperimentalFeatures
soEventingMode
soReadOnlyMode
soDefaultNamingConvention
soMetadataDefaults
checkFeatureFlag
rebuildableSchemaCache <-
lift . flip onException (flushLogger loggerCtx) $
migrateCatalogSchema
env
logger
metadataDbPool
maybeDefaultSourceConfig
mempty
serverConfigCtx
(mkPgSourceResolver pgLogger)
mkMSSQLSourceResolver
soExtensionsSchema
-- Start a background thread to listen for schema sync events from other server instances.
metaVersionRef <- liftIO $ STM.newEmptyTMVarIO
-- An interval of 0 indicates that no schema sync is required
case soSchemaPollInterval of
Skip -> unLogger logger $ mkGenericLog @Text LevelInfo "schema-sync" "Schema sync disabled"
Interval interval -> do
unLogger logger $ mkGenericLog @String LevelInfo "schema-sync" ("Schema sync enabled. Polling at " <> show interval)
void $ startSchemaSyncListenerThread logger metadataDbPool instanceId interval metaVersionRef
schemaCacheRef <- initialiseSchemaCacheRef serverMetrics rebuildableSchemaCache
srvMgr <- liftIO $ mkHttpManager (readTlsAllowlist schemaCacheRef) mempty
authMode <- liftIO $ initAuthMode serveOptions srvMgr logger
subscriptionsState <- liftIO $ initSubscriptionsState serveOptions logger liveQueryHook
pure $
ServerCtx
{ scLoggers = loggers,
scCacheRef = schemaCacheRef,
scAuthMode = authMode,
scManager = srvMgr,
scSQLGenCtx = sqlGenCtx,
scEnabledAPIs = soEnabledAPIs,
scInstanceId = instanceId,
scSubscriptionState = subscriptionsState,
scEnableAllowList = soEnableAllowList,
scEnvironment = env,
scResponseInternalErrorsConfig = soResponseInternalErrorsConfig,
scRemoteSchemaPermsCtx = soEnableRemoteSchemaPermissions,
scFunctionPermsCtx = soInferFunctionPermissions,
scEnableMaintenanceMode = soEnableMaintenanceMode,
scExperimentalFeatures = soExperimentalFeatures,
scLoggingSettings = LoggingSettings soEnabledLogTypes soEnableMetadataQueryLogging,
scEventingMode = soEventingMode,
scEnableReadOnlyMode = soReadOnlyMode,
scDefaultNamingConvention = soDefaultNamingConvention,
scServerMetrics = serverMetrics,
scMetadataDefaults = soMetadataDefaults,
scEnabledLogTypes = soEnabledLogTypes,
scMetadataDbPool = metadataDbPool,
scShutdownLatch = latch,
scMetaVersionRef = metaVersionRef,
scPrometheusMetrics = prometheusMetrics,
scTraceSamplingPolicy = traceSamplingPolicy,
scCheckFeatureFlag = checkFeatureFlag
}
mkLoggers ::
(MonadIO m, MonadBaseControl IO m) =>
HashSet (EngineLogType Hasura) ->
LogLevel ->
ManagedT m Loggers
mkLoggers enabledLogs logLevel = do
loggerCtx <- mkLoggerCtx (defaultLoggerSettings True logLevel) enabledLogs
let logger = mkLogger loggerCtx
pgLogger = mkPGLogger logger
return $ Loggers loggerCtx logger pgLogger
-- | helper function to initialize or migrate the @hdb_catalog@ schema (used by pro as well)
migrateCatalogSchema ::
(MonadIO m, MonadBaseControl IO m) =>
Env.Environment ->
Logger Hasura ->
PG.PGPool ->
Maybe (SourceConnConfiguration ('Postgres 'Vanilla)) ->
Blocklist ->
ServerConfigCtx ->
SourceResolver ('Postgres 'Vanilla) ->
SourceResolver ('MSSQL) ->
ExtensionsSchema ->
m RebuildableSchemaCache
migrateCatalogSchema
env
logger
pool
defaultSourceConfig
blockList
serverConfigCtx
pgSourceResolver
mssqlSourceResolver
extensionsSchema = do
initialiseResult <- runExceptT $ do
-- TODO: should we allow the migration to happen during maintenance mode?
-- Allowing this can be a sanity check, to see if the hdb_catalog in the
-- DB has been set correctly
currentTime <- liftIO Clock.getCurrentTime
(migrationResult, metadata) <-
PG.runTx pool (PG.Serializable, Just PG.ReadWrite) $
migrateCatalog
defaultSourceConfig
extensionsSchema
(_sccMaintenanceMode serverConfigCtx)
currentTime
let tlsAllowList = networkTlsAllowlist $ _metaNetwork metadata
httpManager <- liftIO $ mkHttpManager (pure tlsAllowList) blockList
let cacheBuildParams =
CacheBuildParams httpManager pgSourceResolver mssqlSourceResolver serverConfigCtx
buildReason = CatalogSync
schemaCache <-
runCacheBuild cacheBuildParams $
buildRebuildableSchemaCacheWithReason buildReason logger env metadata
pure (migrationResult, schemaCache)
(migrationResult, schemaCache) <-
initialiseResult `onLeft` \err -> do
unLogger
logger
StartupLog
{ slLogLevel = LevelError,
slKind = "catalog_migrate",
slInfo = A.toJSON err
}
liftIO (throwErrJExit DatabaseMigrationError err)
unLogger logger migrationResult
pure schemaCache
-- | Event triggers live in the user's DB, while other events
-- (cron, one-off and async actions) live in the metadata DB,
-- so we need a way to differentiate between the two types of
-- shutdown action.
data ShutdownAction
= EventTriggerShutdownAction (IO ())
| MetadataDBShutdownAction (ExceptT QErr IO ())
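-- For instance (illustrative only), a no-op cleanup for event triggers could
-- be wrapped as:
--
-- > noopEventTriggerShutdown :: ShutdownAction
-- > noopEventTriggerShutdown = EventTriggerShutdownAction (pure ())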
-- | If an exception is encountered, flush the log buffer and rethrow.
-- If we do not flush the log buffer on exception, then log lines may be missed.
-- See: https://github.com/hasura/graphql-engine/issues/4772
flushLogger :: MonadIO m => LoggerCtx impl -> m ()
flushLogger = liftIO . FL.flushLogStr . _lcLoggerSet
-- | This function acts as the entrypoint for the graphql-engine webserver.
--
-- Note: at the exit of this function, or in case of a graceful server shutdown
-- (SIGTERM, or more generally, whenever the shutdown latch is set), we need to
-- make absolutely sure that we clean up any resources which were allocated during
-- server setup. In the case of a multitenant process, failure to do so can lead to
-- resource leaks.
--
-- To track these resources, we use the ManagedT monad, and attach finalizers at
-- the same point in the code where we allocate resources. If you fork a new
-- long-lived thread, or create a connection pool, or allocate any other
-- long-lived resource, make sure to pair the allocator with its finalizer.
-- There are plenty of examples throughout the code. For example, see
-- 'C.forkManagedT'.
--
-- Note also: the order in which the finalizers run can be important. Specifically,
-- we want the finalizers for the logger threads to run last, so that we retain as
-- many "thread stopping" log messages as possible. The order in which the
-- finalizers are run is determined by the order in which they are introduced in the
-- code.
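-- As a concrete illustration of the allocator/finalizer pairing described
-- above, this is how the metadata DB pool is paired with its destructor in
-- 'initialiseServerCtx' (sketch reproduced here for reference):
--
-- > metadataDbPool <-
-- >   allocate
-- >     (liftIO $ PG.initPGPool _gcMetadataDbConnInfo soConnParams pgLogger)
-- >     (liftIO . PG.destroyPGPool)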
{- HLINT ignore runHGEServer "Avoid lambda" -}
{- HLINT ignore runHGEServer "Use withAsync" -}
runHGEServer ::
forall m impl.
( MonadIO m,
MonadFix m,
MonadMask m,
MonadStateless IO m,
LA.Forall (LA.Pure m),
UserAuthentication (Tracing.TraceT m),
HttpLog m,
ConsoleRenderer m,
MonadVersionAPIWithExtraData m,
MonadMetadataApiAuthorization m,
MonadGQLExecutionCheck m,
MonadConfigApiHandler m,
MonadQueryLog m,
WS.MonadWSLog m,
MonadExecuteQuery m,
Tracing.HasReporter m,
HasResourceLimits m,
MonadMetadataStorageQueryAPI m,
MonadResolveSource m,
EB.MonadQueryTags m,
MonadEventLogCleanup m,
ProvidesHasuraServices m
) =>
(ServerCtx -> Spock.SpockT m ()) ->
Env.Environment ->
ServeOptions impl ->
ServerCtx ->
-- | start time
UTCTime ->
-- | A hook which can be called to indicate when the server has started successfully
Maybe (IO ()) ->
EKG.Store EKG.EmptyMetrics ->
(FeatureFlag -> IO Bool) ->
ManagedT m ()
runHGEServer setupHook env serveOptions serverCtx@ServerCtx {..} initTime startupStatusHook ekgStore checkFeatureFlag = do
waiApplication <-
mkHGEServer setupHook env serveOptions serverCtx ekgStore checkFeatureFlag
let logger = _lsLogger $ scLoggers
-- `startupStatusHook`: add a `Service started successfully` message to the
-- config_status table when a tenant starts up in multitenant mode
let warpSettings :: Warp.Settings
warpSettings =
Warp.setPort (_getPort $ soPort serveOptions)
. Warp.setHost (soHost serveOptions)
. Warp.setGracefulShutdownTimeout (Just 30) -- 30s graceful shutdown
. Warp.setInstallShutdownHandler shutdownHandler
. Warp.setBeforeMainLoop (for_ startupStatusHook id)
. setForkIOWithMetrics
$ Warp.defaultSettings
setForkIOWithMetrics :: Warp.Settings -> Warp.Settings
setForkIOWithMetrics = Warp.setFork \f -> do
void $
C.forkIOWithUnmask
( \unmask ->
bracket_
( do
EKG.Gauge.inc (smWarpThreads scServerMetrics)
incWarpThreads (pmConnections scPrometheusMetrics)
)
( do
EKG.Gauge.dec (smWarpThreads scServerMetrics)
decWarpThreads (pmConnections scPrometheusMetrics)
)
(f unmask)
)
shutdownHandler :: IO () -> IO ()
shutdownHandler closeSocket =
LA.link =<< LA.async do
waitForShutdown $ scShutdownLatch
unLogger logger $ mkGenericLog @Text LevelInfo "server" "gracefully shutting down server"
closeSocket
finishTime <- liftIO Clock.getCurrentTime
let apiInitTime = realToFrac $ Clock.diffUTCTime finishTime initTime
unLogger logger $
mkGenericLog LevelInfo "server" $
StartupTimeInfo "starting API server" apiInitTime
-- Here we block until the shutdown latch 'MVar' is filled, and then
-- shut down the server. Once this blocking call returns, we'll tidy up
-- any resources using the finalizers attached using 'ManagedT' above.
-- Structuring things using the shutdown latch in this way lets us decide
-- elsewhere exactly how we want to control shutdown.
liftIO $ Warp.runSettings warpSettings waiApplication
-- | Part of a factorization of 'runHGEServer' to expose the constructed WAI
-- application for testing purposes. See 'runHGEServer' for documentation.
mkHGEServer ::
forall m impl.
( MonadIO m,
MonadFix m,
MonadMask m,
MonadStateless IO m,
LA.Forall (LA.Pure m),
UserAuthentication (Tracing.TraceT m),
HttpLog m,
ConsoleRenderer m,
MonadVersionAPIWithExtraData m,
MonadMetadataApiAuthorization m,
MonadGQLExecutionCheck m,
MonadConfigApiHandler m,
MonadQueryLog m,
WS.MonadWSLog m,
MonadExecuteQuery m,
Tracing.HasReporter m,
HasResourceLimits m,
MonadMetadataStorageQueryAPI m,
MonadResolveSource m,
EB.MonadQueryTags m,
MonadEventLogCleanup m,
ProvidesHasuraServices m
) =>
(ServerCtx -> Spock.SpockT m ()) ->
Env.Environment ->
ServeOptions impl ->
ServerCtx ->
EKG.Store EKG.EmptyMetrics ->
(FeatureFlag -> IO Bool) ->
ManagedT m Application
mkHGEServer setupHook env ServeOptions {..} serverCtx@ServerCtx {..} ekgStore checkFeatureFlag = do
-- Comment out the 'disableAssertNF' call below to enable expensive assertions from "GHC.AssertNF". These
-- will log lines to STDOUT containing "not in normal form". In the future we
-- could try to integrate this into our tests. For now this is a development
-- tool.
--
-- NOTE: be sure to compile WITHOUT code coverage, for this to work properly.
liftIO disableAssertNF
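-- Build the SQL generation options from the serve options; the experimental
-- feature flags below toggle permission-filter optimization and BigQuery
-- string-numeric input handling.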
let optimizePermissionFilters
| EFOptimizePermissionFilters `elem` soExperimentalFeatures = Options.OptimizePermissionFilters
| otherwise = Options.Don'tOptimizePermissionFilters
bigqueryStringNumericInput
| EFBigQueryStringNumericInput `elem` soExperimentalFeatures = Options.EnableBigQueryStringNumericInput
| otherwise = Options.DisableBigQueryStringNumericInput
sqlGenCtx = SQLGenCtx soStringifyNum soDangerousBooleanCollapse optimizePermissionFilters bigqueryStringNumericInput
Loggers loggerCtx logger _ = scLoggers
HasuraApp app cacheRef actionSubState stopWsServer <-
lift $
flip onException (flushLogger loggerCtx) $
mkWaiApp
setupHook
env
soCorsConfig
soConsoleStatus
soConsoleAssetsDir
soConsoleSentryDsn
soEnableTelemetry
scCacheRef
soConnectionOptions
soWebSocketKeepAlive
scEnabledLogTypes
serverCtx
soWebSocketConnectionInitTimeout
ekgStore
-- Init ServerConfigCtx
let serverConfigCtx =
ServerConfigCtx
soInferFunctionPermissions
soEnableRemoteSchemaPermissions
sqlGenCtx
soEnableMaintenanceMode
soExperimentalFeatures
soEventingMode
soReadOnlyMode
soDefaultNamingConvention
soMetadataDefaults
checkFeatureFlag
-- Log a warning if deprecated environment variables are used
sources <- scSources <$> liftIO (getSchemaCache cacheRef)
liftIO $ logDeprecatedEnvVars logger env sources
-- log inconsistent schema objects
inconsObjs <- scInconsistentObjs <$> liftIO (getSchemaCache cacheRef)
liftIO $ logInconsistentMetadata logger inconsObjs
-- NOTE: `newLogTVar` is being used to make sure that the metadata logger runs only once
-- while logging errors or any `inconsistent_metadata` logs.
newLogTVar <- liftIO $ STM.newTVarIO False
-- Start a background thread for processing schema sync event present in the '_sscSyncEventRef'
_ <-
startSchemaSyncProcessorThread
logger
scMetaVersionRef
cacheRef
scInstanceId
serverConfigCtx
newLogTVar
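-- Track the events currently locked (in flight) by this instance, one TVar per
-- category: event-trigger events, cron events, one-off scheduled events, and
-- async actions. These are unlocked again on graceful shutdown.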
lockedEventsCtx <-
liftIO $
LockedEventsCtx
<$> STM.newTVarIO mempty
<*> STM.newTVarIO mempty
<*> STM.newTVarIO mempty
<*> STM.newTVarIO mempty
case soEventingMode of
EventingEnabled -> do
startEventTriggerPollerThread logger lockedEventsCtx cacheRef
startAsyncActionsPollerThread logger lockedEventsCtx cacheRef actionSubState
-- Create logger for logging the statistics of fetched cron triggers
fetchedCronTriggerStatsLogger <-
allocate
(createFetchedCronTriggerStatsLogger logger)
(closeFetchedCronTriggersStatsLogger logger)
-- start a background thread to create new cron events
_cronEventsThread <-
C.forkManagedT "runCronEventsGenerator" logger $
runCronEventsGenerator logger fetchedCronTriggerStatsLogger (getSchemaCache cacheRef)
startScheduledEventsPollerThread logger lockedEventsCtx cacheRef
EventingDisabled ->
unLogger logger $ mkGenericLog @Text LevelInfo "server" "starting in eventing disabled mode"
-- start a background thread to check for updates
_updateThread <-
C.forkManagedT "checkForUpdates" logger $
liftIO $
checkForUpdates loggerCtx scManager
-- Start a background thread for source pings
_sourcePingPoller <-
C.forkManagedT "sourcePingPoller" logger $ do
let pingLog =
unLogger logger . mkGenericLog @String LevelInfo "sources-ping"
liftIO
( runPingSources
env
pingLog
(scSourcePingConfig <$> getSchemaCache cacheRef)
)
-- start a background thread for telemetry
_telemetryThread <-
if isTelemetryEnabled soEnableTelemetry
then do
lift . unLogger logger $ mkGenericLog @Text LevelInfo "telemetry" telemetryNotice
dbUid <-
getMetadataDbUid `onLeftM` throwErrJExit DatabaseMigrationError
pgVersion <-
liftIO (runExceptT $ PG.runTx scMetadataDbPool (PG.ReadCommitted, Nothing) $ getPgVersion)
`onLeftM` throwErrJExit DatabaseMigrationError
telemetryThread <-
C.forkManagedT "runTelemetry" logger $
liftIO $
runTelemetry logger scManager (getSchemaCache cacheRef) dbUid scInstanceId pgVersion soExperimentalFeatures
return $ Just telemetryThread
else return Nothing
-- These cleanup actions are not directly associated with any
-- resource, but we still need to make sure we clean them up here.
allocate_ (pure ()) (liftIO stopWsServer)
pure app
where
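-- Retry predicate used with 'Retry.retrying' below: retry only when the
-- metadata storage reports a concurrent update; any other error (or a
-- success) stops the retry loop.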
isRetryRequired _ resp = do
return $ case resp of
Right _ -> False
Left err -> qeCode err == ConcurrentUpdate
prepareScheduledEvents (Logger logger) = do
liftIO $ logger $ mkGenericLog @Text LevelInfo "scheduled_triggers" "preparing data"
res <- Retry.retrying Retry.retryPolicyDefault isRetryRequired (return unlockAllLockedScheduledEvents)
onLeft res (\err -> logger $ mkGenericLog @String LevelError "scheduled_triggers" (show $ qeError err))
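-- Number of scheduled (cron + one-off) events currently locked by this instance.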
getProcessingScheduledEventsCount :: LockedEventsCtx -> IO Int
getProcessingScheduledEventsCount LockedEventsCtx {..} = do
processingCronEvents <- readTVarIO leCronEvents
processingOneOffEvents <- readTVarIO leOneOffEvents
return $ length processingOneOffEvents + length processingCronEvents
shutdownEventTriggerEvents ::
[BackendSourceInfo] ->
Logger Hasura ->
LockedEventsCtx ->
IO ()
shutdownEventTriggerEvents sources (Logger logger) LockedEventsCtx {..} = do
-- TODO: is this correct?
-- event triggers should be tied to the life cycle of a source
lockedEvents <- readTVarIO leEvents
forM_ sources $ \backendSourceInfo -> do
AB.dispatchAnyBackend @BackendEventTrigger backendSourceInfo \(SourceInfo sourceName _ _ _ sourceConfig _ _ :: SourceInfo b) -> do
let sourceNameText = sourceNameToText sourceName
logger $ mkGenericLog LevelInfo "event_triggers" $ "unlocking events of source: " <> sourceNameText
for_ (HM.lookup sourceName lockedEvents) $ \sourceLockedEvents -> do
-- No need to execute unlockEventsTx when events are not present
for_ (NE.nonEmptySet sourceLockedEvents) $ \nonEmptyLockedEvents -> do
res <- Retry.retrying Retry.retryPolicyDefault isRetryRequired (return $ unlockEventsInSource @b sourceConfig nonEmptyLockedEvents)
case res of
Left err ->
logger $
mkGenericLog LevelWarn "event_trigger" $
"Error while unlocking event trigger events of source: " <> sourceNameText <> " error:" <> showQErr err
Right count ->
logger $
mkGenericLog LevelInfo "event_trigger" $
tshow count <> " events of source " <> sourceNameText <> " were successfully unlocked"
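-- Set async actions that this instance left in the 'processing' state back to
-- 'pending' so that another instance can pick them up.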
shutdownAsyncActions ::
LockedEventsCtx ->
ExceptT QErr m ()
shutdownAsyncActions lockedEventsCtx = do
lockedActionEvents <- liftIO $ readTVarIO $ leActionEvents lockedEventsCtx
liftEitherM $ setProcessingActionLogsToPending (LockedActionIdArray $ toList lockedActionEvents)
-- This helper function does a couple of things:
--
-- 1. When the value of `graceful-shutdown-timeout` > 0, we poll the in-flight
--    events queue we maintain, using `processingEventsCountAction` to get the
--    number of in-flight processing events: for actions these are the actions
--    in the 'processing' state, and for scheduled events these are the events
--    in the 'locked' state. The in-flight events queue is polled every 5
--    seconds until either the graceful shutdown time is exhausted or the
--    number of in-flight processing events reaches 0.
-- 2. After step 1, we unlock all the events that this graphql-engine instance
--    attempted to process and that are still in the processing state: actions
--    are set back to the 'pending' status, and scheduled events are set to
--    'unlocked'.
waitForProcessingAction ::
Logger Hasura ->
String ->
IO Int ->
ShutdownAction ->
Seconds ->
IO ()
waitForProcessingAction l@(Logger logger) actionType processingEventsCountAction' shutdownAction maxTimeout
| maxTimeout <= 0 = do
case shutdownAction of
EventTriggerShutdownAction userDBShutdownAction -> userDBShutdownAction
MetadataDBShutdownAction metadataDBShutdownAction ->
runExceptT metadataDBShutdownAction >>= \case
Left err ->
logger $
mkGenericLog LevelWarn (T.pack actionType) $
"Error while unlocking the processing "
<> tshow actionType
<> " err - "
<> showQErr err
Right () -> pure ()
| otherwise = do
processingEventsCount <- processingEventsCountAction'
if (processingEventsCount == 0)
then
logger $
mkGenericLog @Text LevelInfo (T.pack actionType) $
"All in-flight events have finished processing"
else do
C.sleep (5) -- sleep for 5 seconds and then repeat
waitForProcessingAction l actionType processingEventsCountAction' shutdownAction (maxTimeout - (Seconds 5))
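-- Start the event-trigger poller unless event fetching is effectively
-- disabled (fetch batch size or fetch interval of 0); its graceful-shutdown
-- action waits for in-flight events and then unlocks them.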
startEventTriggerPollerThread logger lockedEventsCtx cacheRef = do
schemaCache <- liftIO $ getSchemaCache cacheRef
let maxEventThreads = unrefine soEventsHttpPoolSize
fetchInterval = milliseconds $ unrefine soEventsFetchInterval
allSources = HM.elems $ scSources schemaCache
unless (unrefine soEventsFetchBatchSize == 0 || fetchInterval == 0) $ do
-- Don't start the events poller thread when fetchBatchSize or fetchInterval is 0
-- prepare event triggers data
eventEngineCtx <- liftIO $ atomically $ initEventEngineCtx maxEventThreads fetchInterval soEventsFetchBatchSize
let eventsGracefulShutdownAction =
waitForProcessingAction
logger
"event_triggers"
(length <$> readTVarIO (leEvents lockedEventsCtx))
(EventTriggerShutdownAction (shutdownEventTriggerEvents allSources logger lockedEventsCtx))
(unrefine soGracefulShutdownTimeout)
-- Create logger for logging the statistics of events fetched
fetchedEventsStatsLogger <-
allocate
(createFetchedEventsStatsLogger logger)
(closeFetchedEventsStatsLogger logger)
unLogger logger $ mkGenericLog @Text LevelInfo "event_triggers" "starting workers"
void
$ C.forkManagedTWithGracefulShutdown
"processEventQueue"
logger
(C.ThreadShutdown (liftIO eventsGracefulShutdownAction))
$ processEventQueue
logger
fetchedEventsStatsLogger
scManager
(getSchemaCache cacheRef)
eventEngineCtx
lockedEventsCtx
scServerMetrics
(pmEventTriggerMetrics scPrometheusMetrics)
soEnableMaintenanceMode
startAsyncActionsPollerThread logger lockedEventsCtx cacheRef actionSubState = do
-- start a background thread to handle async actions
case soAsyncActionsFetchInterval of
Skip -> pure () -- Don't start the poller thread
Interval (unrefine -> sleepTime) -> do
let label = "asyncActionsProcessor"
asyncActionGracefulShutdownAction =
( liftWithStateless \lowerIO ->
( waitForProcessingAction
logger
"async_actions"
(length <$> readTVarIO (leActionEvents lockedEventsCtx))
(MetadataDBShutdownAction (hoist lowerIO (shutdownAsyncActions lockedEventsCtx)))
(unrefine soGracefulShutdownTimeout)
)
)
void
$ C.forkManagedTWithGracefulShutdown
label
logger
(C.ThreadShutdown asyncActionGracefulShutdownAction)
$ asyncActionsProcessor
env
logger
(getSchemaCache cacheRef)
(leActionEvents lockedEventsCtx)
scPrometheusMetrics
sleepTime
Nothing
-- start a background thread to handle async action live queries
void $
C.forkManagedT "asyncActionSubscriptionsProcessor" logger $
asyncActionSubscriptionsProcessor actionSubState
startScheduledEventsPollerThread logger lockedEventsCtx cacheRef = do
-- prepare scheduled triggers
lift $ prepareScheduledEvents logger
-- Create logger for logging the statistics of scheduled events fetched
scheduledEventsStatsLogger <-
allocate
(createFetchedScheduledEventsStatsLogger logger)
(closeFetchedScheduledEventsStatsLogger logger)
-- start a background thread to deliver the scheduled events
-- _scheduledEventsThread <- do
let scheduledEventsGracefulShutdownAction =
( liftWithStateless \lowerIO ->
( waitForProcessingAction
logger
"scheduled_events"
(getProcessingScheduledEventsCount lockedEventsCtx)
(MetadataDBShutdownAction (liftEitherM $ hoist lowerIO unlockAllLockedScheduledEvents))
(unrefine soGracefulShutdownTimeout)
)
)
void
$ C.forkManagedTWithGracefulShutdown
"processScheduledTriggers"
logger
(C.ThreadShutdown scheduledEventsGracefulShutdownAction)
$ processScheduledTriggers
env
logger
scheduledEventsStatsLogger
scManager
scPrometheusMetrics
(getSchemaCache cacheRef)
lockedEventsCtx
instance (Monad m) => Tracing.HasReporter (PGMetadataStorageAppT m)
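-- The OSS build applies no resource limits: both the HTTP handler and the
-- GraphQL operation limits are the identity.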
instance (Monad m) => HasResourceLimits (PGMetadataStorageAppT m) where
askHTTPHandlerLimit = pure $ ResourceLimits id
askGraphqlOperationLimit _ _ _ = pure $ ResourceLimits id
instance (MonadIO m) => HttpLog (PGMetadataStorageAppT m) where
type ExtraHttpLogMetadata (PGMetadataStorageAppT m) = ()
emptyExtraHttpLogMetadata = ()
buildExtraHttpLogMetadata _ _ = ()
logHttpError logger loggingSettings userInfoM reqId waiReq req qErr headers _ =
unLogger logger $
mkHttpLog $
mkHttpErrorLogContext userInfoM loggingSettings reqId waiReq req qErr Nothing Nothing headers
logHttpSuccess logger loggingSettings userInfoM reqId waiReq reqBody response compressedResponse qTime cType headers (CommonHttpLogMetadata rb batchQueryOpLogs, ()) =
unLogger logger $
mkHttpLog $
mkHttpAccessLogContext userInfoM loggingSettings reqId waiReq reqBody (BL.length response) compressedResponse qTime cType headers rb batchQueryOpLogs
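-- Query response caching is not available in the OSS build: cache lookups
-- always miss and cache stores are skipped.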
instance (Monad m) => MonadExecuteQuery (PGMetadataStorageAppT m) where
cacheLookup _ _ _ _ = pure ([], Nothing)
cacheStore _ _ _ = pure (Right CacheStoreSkipped)
instance (MonadIO m, MonadBaseControl IO m) => UserAuthentication (Tracing.TraceT (PGMetadataStorageAppT m)) where
resolveUserInfo logger manager headers authMode reqs =
runExceptT $ do
(a, b, c) <- getUserInfoWithExpTime logger manager headers authMode reqs
pure $ (a, b, c, ExtraUserInfo Nothing)
accessDeniedErrMsg :: Text
accessDeniedErrMsg =
"restricted access : admin only"
instance (Monad m) => MonadMetadataApiAuthorization (PGMetadataStorageAppT m) where
authorizeV1QueryApi query handlerCtx = runExceptT do
let currRole = _uiRole $ hcUser handlerCtx
when (requiresAdmin query && currRole /= adminRoleName) $
withPathK "args" $
throw400 AccessDenied accessDeniedErrMsg
authorizeV1MetadataApi _ handlerCtx = runExceptT do
let currRole = _uiRole $ hcUser handlerCtx
when (currRole /= adminRoleName) $
withPathK "args" $
throw400 AccessDenied accessDeniedErrMsg
authorizeV2QueryApi _ handlerCtx = runExceptT do
let currRole = _uiRole $ hcUser handlerCtx
when (currRole /= adminRoleName) $
withPathK "args" $
throw400 AccessDenied accessDeniedErrMsg
instance (Monad m) => ConsoleRenderer (PGMetadataStorageAppT m) where
renderConsole path authMode enableTelemetry consoleAssetsDir consoleSentryDsn =
return $ mkConsoleHTML path authMode enableTelemetry consoleAssetsDir consoleSentryDsn
instance (Monad m) => MonadVersionAPIWithExtraData (PGMetadataStorageAppT m) where
getExtraDataForVersionAPI = return []
instance (Monad m) => MonadGQLExecutionCheck (PGMetadataStorageAppT m) where
checkGQLExecution userInfo _ enableAL sc query _ = runExceptT $ do
req <- toParsed query
checkQueryInAllowlist enableAL AllowlistModeGlobalOnly userInfo req sc
return req
executeIntrospection _ introspectionQuery _ =
pure $ Right $ ExecStepRaw introspectionQuery
checkGQLBatchedReqs _ _ _ _ = runExceptT $ pure ()
instance (MonadIO m, MonadBaseControl IO m) => MonadConfigApiHandler (PGMetadataStorageAppT m) where
runConfigApiHandler = configApiGetHandler
instance (MonadIO m) => MonadQueryLog (PGMetadataStorageAppT m) where
logQueryLog logger = unLogger logger
instance (MonadIO m) => WS.MonadWSLog (PGMetadataStorageAppT m) where
logWSLog logger = unLogger logger
instance (Monad m) => MonadResolveSource (PGMetadataStorageAppT m) where
getPGSourceResolver = mkPgSourceResolver <$> asks _acPGLogger
getMSSQLSourceResolver = return mkMSSQLSourceResolver
instance (Monad m) => EB.MonadQueryTags (PGMetadataStorageAppT m) where
createQueryTags _attributes _qtSourceConfig = return $ emptyQueryTagsComment
instance (Monad m) => MonadEventLogCleanup (PGMetadataStorageAppT m) where
runLogCleaner _ = pure $ throw400 NotSupported "Event log cleanup feature is enterprise edition only"
generateCleanupSchedules _ _ _ = pure $ Right ()
updateTriggerCleanupSchedules _ _ _ _ = pure $ Right ()
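-- | Run a metadata-database transaction in its own REPEATABLE READ
-- transaction, using the Postgres pool held in the app context. Each
-- 'MonadMetadataStorage' method below is a thin wrapper over this, e.g.
-- @getCatalogState = runInSeparateTx getCatalogStateTx@.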
runInSeparateTx ::
(MonadIO m) =>
PG.TxE QErr a ->
PGMetadataStorageAppT m (Either QErr a)
runInSeparateTx tx = do
pool <- asks _acPGPool
liftIO $ runExceptT $ PG.runTx pool (PG.RepeatableRead, Nothing) tx
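-- | Upsert the single schema-sync notification row (id = 1) with the latest
-- cache invalidations, metadata resource version, and the originating
-- instance id.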
notifySchemaCacheSyncTx :: MetadataResourceVersion -> InstanceId -> CacheInvalidations -> PG.TxE QErr ()
notifySchemaCacheSyncTx (MetadataResourceVersion resourceVersion) instanceId invalidations = do
PG.Discard () <-
PG.withQE
defaultTxErrorHandler
[PG.sql|
INSERT INTO hdb_catalog.hdb_schema_notifications(id, notification, resource_version, instance_id)
VALUES (1, $1::json, $2, $3::uuid)
ON CONFLICT (id) DO UPDATE SET
notification = $1::json,
resource_version = $2,
instance_id = $3::uuid
|]
(PG.ViaJSON invalidations, resourceVersion, instanceId)
True
pure ()
getCatalogStateTx :: PG.TxE QErr CatalogState
getCatalogStateTx =
mkCatalogState . PG.getRow
<$> PG.withQE
defaultTxErrorHandler
[PG.sql|
SELECT hasura_uuid::text, cli_state::json, console_state::json
FROM hdb_catalog.hdb_version
|]
()
False
where
mkCatalogState (dbId, PG.ViaJSON cliState, PG.ViaJSON consoleState) =
CatalogState dbId cliState consoleState
setCatalogStateTx :: CatalogStateType -> A.Value -> PG.TxE QErr ()
setCatalogStateTx stateTy stateValue =
case stateTy of
CSTCli ->
PG.unitQE
defaultTxErrorHandler
[PG.sql|
UPDATE hdb_catalog.hdb_version
SET cli_state = $1
|]
(Identity $ PG.ViaJSON stateValue)
False
CSTConsole ->
PG.unitQE
defaultTxErrorHandler
[PG.sql|
UPDATE hdb_catalog.hdb_version
SET console_state = $1
|]
(Identity $ PG.ViaJSON stateValue)
False
-- | Each function in this type class is executed in its own, separate transaction.
instance (MonadIO m) => MonadMetadataStorage (PGMetadataStorageAppT m) where
fetchMetadataResourceVersion = runInSeparateTx fetchMetadataResourceVersionFromCatalog
fetchMetadata = runInSeparateTx fetchMetadataAndResourceVersionFromCatalog
fetchMetadataNotifications a b = runInSeparateTx $ fetchMetadataNotificationsFromCatalog a b
setMetadata r = runInSeparateTx . setMetadataInCatalog r
notifySchemaCacheSync a b c = runInSeparateTx $ notifySchemaCacheSyncTx a b c
getCatalogState = runInSeparateTx getCatalogStateTx
setCatalogState a b = runInSeparateTx $ setCatalogStateTx a b
getMetadataDbUid = runInSeparateTx getDbId
checkMetadataStorageHealth = runInSeparateTx $ checkDbConnection
getDeprivedCronTriggerStats = runInSeparateTx . getDeprivedCronTriggerStatsTx
getScheduledEventsForDelivery = runInSeparateTx getScheduledEventsForDeliveryTx
insertCronEvents = runInSeparateTx . insertCronEventsTx
insertOneOffScheduledEvent = runInSeparateTx . insertOneOffScheduledEventTx
insertScheduledEventInvocation a b = runInSeparateTx $ insertInvocationTx a b
setScheduledEventOp a b c = runInSeparateTx $ setScheduledEventOpTx a b c
unlockScheduledEvents a b = runInSeparateTx $ unlockScheduledEventsTx a b
unlockAllLockedScheduledEvents = runInSeparateTx unlockAllLockedScheduledEventsTx
clearFutureCronEvents = runInSeparateTx . dropFutureCronEventsTx
getOneOffScheduledEvents a b c = runInSeparateTx $ getOneOffScheduledEventsTx a b c
getCronEvents a b c d = runInSeparateTx $ getCronEventsTx a b c d
getScheduledEventInvocations a = runInSeparateTx $ getScheduledEventInvocationsTx a
deleteScheduledEvent a b = runInSeparateTx $ deleteScheduledEventTx a b
insertAction a b c d = runInSeparateTx $ insertActionTx a b c d
fetchUndeliveredActionEvents = runInSeparateTx fetchUndeliveredActionEventsTx
setActionStatus a b = runInSeparateTx $ setActionStatusTx a b
fetchActionResponse = runInSeparateTx . fetchActionResponseTx
clearActionData = runInSeparateTx . clearActionDataTx
setProcessingActionLogsToPending = runInSeparateTx . setProcessingActionLogsToPendingTx
instance MonadIO m => MonadMetadataStorageQueryAPI (PGMetadataStorageAppT m)
--- helper functions ---
mkConsoleHTML :: Text -> AuthMode -> TelemetryStatus -> Maybe Text -> Maybe Text -> Either String Text
mkConsoleHTML path authMode enableTelemetry consoleAssetsDir consoleSentryDsn =
renderHtmlTemplate consoleTmplt $
-- variables required to render the template
A.object
[ "isAdminSecretSet" A..= isAdminSecretSet authMode,
"consolePath" A..= consolePath,
"enableTelemetry" A..= boolToText (isTelemetryEnabled enableTelemetry),
"cdnAssets" A..= boolToText (isNothing consoleAssetsDir),
"consoleSentryDsn" A..= fromMaybe "" consoleSentryDsn,
"assetsVersion" A..= consoleAssetsVersion,
"serverVersion" A..= currentVersion,
"consoleSentryDsn" A..= ("" :: Text)
]
where
consolePath = case path of
"" -> "/console"
r -> "/console/" <> r
consoleTmplt = $(makeRelativeToProject "src-rsr/console.html" >>= M.embedSingleTemplate)
telemetryNotice :: Text
telemetryNotice =
"Help us improve Hasura! The graphql-engine server collects anonymized "
<> "usage stats which allows us to keep improving Hasura at warp speed. "
<> "To read more or opt-out, visit https://hasura.io/docs/latest/graphql/core/guides/telemetry.html"
mkPgSourceResolver :: PG.PGLogger -> SourceResolver ('Postgres 'Vanilla)
mkPgSourceResolver pgLogger env _ config = runExceptT do
let PostgresSourceConnInfo urlConf poolSettings allowPrepare isoLevel _ = _pccConnectionInfo config
-- If the user does not provide values for the pool settings, then use the default values
let (maxConns, idleTimeout, retries) = getDefaultPGPoolSettingIfNotExists poolSettings defaultPostgresPoolSettings
urlText <- resolveUrlConf env urlConf
let connInfo = PG.ConnInfo retries $ PG.CDDatabaseURI $ txtToBs urlText
connParams =
PG.defaultConnParams
{ PG.cpIdleTime = idleTimeout,
PG.cpConns = maxConns,
PG.cpAllowPrepare = allowPrepare,
PG.cpMbLifetime = _ppsConnectionLifetime =<< poolSettings,
PG.cpTimeout = _ppsPoolTimeout =<< poolSettings
}
pgPool <- liftIO $ PG.initPGPool connInfo connParams pgLogger
let pgExecCtx = mkPGExecCtx isoLevel pgPool NeverResizePool
pure $ PGSourceConfig pgExecCtx connInfo Nothing mempty (_pccExtensionsSchema config) mempty Nothing
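-- | Resolve an MSSQL source: create a connection pool from the connection
-- string and the configured pool settings.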
mkMSSQLSourceResolver :: SourceResolver ('MSSQL)
mkMSSQLSourceResolver env _name (MSSQLConnConfiguration connInfo _) = runExceptT do
let MSSQLConnectionInfo iConnString MSSQLPoolSettings {..} = connInfo
connOptions =
MSPool.ConnectionOptions
{ _coConnections = fromMaybe defaultMSSQLMaxConnections _mpsMaxConnections,
_coStripes = 1,
_coIdleTime = _mpsIdleTimeout
}
(connString, mssqlPool) <- createMSSQLPool iConnString connOptions env
let mssqlExecCtx = mkMSSQLExecCtx mssqlPool NeverResizePool
pure $ MSSQLSourceConfig connString mssqlExecCtx