graphql-engine/server/src-lib/Hasura/Server/API/Metadata.hs


{-# LANGUAGE ViewPatterns #-}

-- | The RQL metadata query ('/v1/metadata')
module Hasura.Server.API.Metadata
  ( RQLMetadata,
    RQLMetadataV1 (..),
    runMetadataQuery,
  )
where

import Control.Lens (_Just)
import Control.Monad.Trans.Control (MonadBaseControl)
import Data.Aeson
import Data.Aeson.Casing
import Data.Aeson.Types qualified as A
import Data.Environment qualified as Env
import Data.Has (Has)
import Data.Text qualified as T
import Data.Text.Extended qualified as T
import GHC.Generics.Extended (constrName)
import Hasura.App.State
import Hasura.Base.Error
import Hasura.CustomReturnType.API qualified as CustomReturnType
import Hasura.EncJSON
import Hasura.Function.API qualified as Functions
import Hasura.GraphQL.Schema.Options qualified as Options
import Hasura.Logging qualified as L
import Hasura.LogicalModel.API qualified as LogicalModels
import Hasura.Metadata.Class
import Hasura.Prelude hiding (first)
import Hasura.RQL.DDL.Action
import Hasura.RQL.DDL.ApiLimit
import Hasura.RQL.DDL.ComputedField
import Hasura.RQL.DDL.ConnectionTemplate
import Hasura.RQL.DDL.CustomTypes
import Hasura.RQL.DDL.DataConnector
import Hasura.RQL.DDL.Endpoint
import Hasura.RQL.DDL.EventTrigger
import Hasura.RQL.DDL.FeatureFlag
import Hasura.RQL.DDL.GraphqlSchemaIntrospection
import Hasura.RQL.DDL.InheritedRoles
import Hasura.RQL.DDL.Metadata
import Hasura.RQL.DDL.Network
import Hasura.RQL.DDL.OpenTelemetry
import Hasura.RQL.DDL.Permission
import Hasura.RQL.DDL.QueryCollection
import Hasura.RQL.DDL.QueryTags
import Hasura.RQL.DDL.Relationship
import Hasura.RQL.DDL.Relationship.Rename
import Hasura.RQL.DDL.Relationship.Suggest
import Hasura.RQL.DDL.RemoteRelationship
import Hasura.RQL.DDL.ScheduledTrigger
import Hasura.RQL.DDL.Schema
import Hasura.RQL.DDL.Schema.Cache.Config
import Hasura.RQL.DDL.Schema.Source
import Hasura.RQL.DDL.SourceKinds
import Hasura.RQL.DDL.Webhook.Transform.Validation
import Hasura.RQL.Types.Action
import Hasura.RQL.Types.Allowlist
import Hasura.RQL.Types.ApiLimit
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.CustomTypes
import Hasura.RQL.Types.Endpoint
import Hasura.RQL.Types.EventTrigger
import Hasura.RQL.Types.Eventing.Backend
import Hasura.RQL.Types.GraphqlSchemaIntrospection
import Hasura.RQL.Types.Metadata (GetCatalogState, SetCatalogState, emptyMetadataDefaults)
import Hasura.RQL.Types.Metadata.Backend
import Hasura.RQL.Types.Network
import Hasura.RQL.Types.OpenTelemetry
import Hasura.RQL.Types.Permission
import Hasura.RQL.Types.QueryCollection
import Hasura.RQL.Types.Roles
import Hasura.RQL.Types.ScheduledTrigger
import Hasura.RQL.Types.SchemaCache
import Hasura.RQL.Types.SchemaCache.Build
import Hasura.RQL.Types.Source
import Hasura.RemoteSchema.MetadataAPI
import Hasura.SQL.AnyBackend
import Hasura.SQL.Backend
import Hasura.Server.API.Backend
import Hasura.Server.API.Instances ()
import Hasura.Server.Init.FeatureFlag (HasFeatureFlagChecker)
import Hasura.Server.Logging (SchemaSyncLog (..), SchemaSyncThreadType (TTMetadataApi))
import Hasura.Server.Types
import Hasura.Server.Utils (APIVersion (..))
import Hasura.Services
import Hasura.Session
import Hasura.Tracing qualified as Tracing

data RQLMetadataV1
  = -- Sources
    RMAddSource !(AnyBackend AddSource)
  | RMDropSource DropSource
  | RMRenameSource !RenameSource
  | RMUpdateSource !(AnyBackend UpdateSource)
  | RMListSourceKinds !ListSourceKinds
  | RMGetSourceKindCapabilities !GetSourceKindCapabilities
  | RMGetSourceTables !(AnyBackend GetSourceTables)
  | RMGetTableInfo !GetTableInfo
  | -- Tables
    RMTrackTable !(AnyBackend TrackTableV2)
  | RMUntrackTable !(AnyBackend UntrackTable)
  | RMSetTableCustomization !(AnyBackend SetTableCustomization)
  | RMSetApolloFederationConfig (AnyBackend SetApolloFederationConfig)
  | RMPgSetTableIsEnum !(AnyBackend SetTableIsEnum)
  | -- Tables permissions
    RMCreateInsertPermission !(AnyBackend (CreatePerm InsPerm))
  | RMCreateSelectPermission !(AnyBackend (CreatePerm SelPerm))
  | RMCreateUpdatePermission !(AnyBackend (CreatePerm UpdPerm))
  | RMCreateDeletePermission !(AnyBackend (CreatePerm DelPerm))
  | RMDropInsertPermission !(AnyBackend DropPerm)
  | RMDropSelectPermission !(AnyBackend DropPerm)
  | RMDropUpdatePermission !(AnyBackend DropPerm)
  | RMDropDeletePermission !(AnyBackend DropPerm)
  | RMSetPermissionComment !(AnyBackend SetPermComment)
  | -- Tables relationships
    RMCreateObjectRelationship !(AnyBackend CreateObjRel)
  | RMCreateArrayRelationship !(AnyBackend CreateArrRel)
  | RMDropRelationship !(AnyBackend DropRel)
  | RMSetRelationshipComment !(AnyBackend SetRelComment)
  | RMRenameRelationship !(AnyBackend RenameRel)
  | RMSuggestRelationships !(AnyBackend SuggestRels)
  | -- Tables remote relationships
    RMCreateRemoteRelationship !(AnyBackend CreateFromSourceRelationship)
  | RMUpdateRemoteRelationship !(AnyBackend CreateFromSourceRelationship)
  | RMDeleteRemoteRelationship !(AnyBackend DeleteFromSourceRelationship)
  | -- Functions
    RMTrackFunction !(AnyBackend Functions.TrackFunctionV2)
  | RMUntrackFunction !(AnyBackend Functions.UnTrackFunction)
  | RMSetFunctionCustomization (AnyBackend Functions.SetFunctionCustomization)
  | -- Functions permissions
    RMCreateFunctionPermission !(AnyBackend Functions.FunctionPermissionArgument)
  | RMDropFunctionPermission !(AnyBackend Functions.FunctionPermissionArgument)
  | -- Computed fields
    RMAddComputedField !(AnyBackend AddComputedField)
  | RMDropComputedField !(AnyBackend DropComputedField)
  | -- Connection template
    RMTestConnectionTemplate !(AnyBackend TestConnectionTemplate)
  | -- Logical Models
    RMGetLogicalModel !(AnyBackend LogicalModels.GetLogicalModel)
  | RMTrackLogicalModel !(AnyBackend LogicalModels.TrackLogicalModel)
  | RMUntrackLogicalModel !(AnyBackend LogicalModels.UntrackLogicalModel)
  | -- Custom types
    RMGetCustomReturnType !(AnyBackend CustomReturnType.GetCustomReturnType)
  | RMTrackCustomReturnType !(AnyBackend CustomReturnType.TrackCustomReturnType)
  | RMUntrackCustomReturnType !(AnyBackend CustomReturnType.UntrackCustomReturnType)
  | RMCreateSelectCustomReturnTypePermission !(AnyBackend (CustomReturnType.CreateCustomReturnTypePermission SelPerm))
  | RMDropSelectCustomReturnTypePermission !(AnyBackend CustomReturnType.DropCustomReturnTypePermission)
  | -- Tables event triggers
    RMCreateEventTrigger !(AnyBackend (Unvalidated1 CreateEventTriggerQuery))
  | RMDeleteEventTrigger !(AnyBackend DeleteEventTriggerQuery)
  | RMRedeliverEvent !(AnyBackend RedeliverEventQuery)
  | RMInvokeEventTrigger !(AnyBackend InvokeEventTriggerQuery)
  | RMCleanupEventTriggerLog !TriggerLogCleanupConfig
  | RMResumeEventTriggerCleanup !TriggerLogCleanupToggleConfig
  | RMPauseEventTriggerCleanup !TriggerLogCleanupToggleConfig
  | -- Remote schemas
    RMAddRemoteSchema !AddRemoteSchemaQuery
  | RMUpdateRemoteSchema !AddRemoteSchemaQuery
  | RMRemoveRemoteSchema !RemoteSchemaNameQuery
  | RMReloadRemoteSchema !RemoteSchemaNameQuery
  | RMIntrospectRemoteSchema !RemoteSchemaNameQuery
  | -- Remote schemas permissions
    RMAddRemoteSchemaPermissions !AddRemoteSchemaPermission
  | RMDropRemoteSchemaPermissions !DropRemoteSchemaPermissions
  | -- Remote Schema remote relationships
    RMCreateRemoteSchemaRemoteRelationship CreateRemoteSchemaRemoteRelationship
  | RMUpdateRemoteSchemaRemoteRelationship CreateRemoteSchemaRemoteRelationship
  | RMDeleteRemoteSchemaRemoteRelationship DeleteRemoteSchemaRemoteRelationship
  | -- Scheduled triggers
    RMCreateCronTrigger !(Unvalidated CreateCronTrigger)
  | RMDeleteCronTrigger !ScheduledTriggerName
  | RMCreateScheduledEvent !CreateScheduledEvent
  | RMDeleteScheduledEvent !DeleteScheduledEvent
  | RMGetScheduledEvents !GetScheduledEvents
  | RMGetScheduledEventInvocations !GetScheduledEventInvocations
  | RMGetCronTriggers
  | -- Actions
    RMCreateAction !(Unvalidated CreateAction)
  | RMDropAction !DropAction
  | RMUpdateAction !(Unvalidated UpdateAction)
  | RMCreateActionPermission !CreateActionPermission
  | RMDropActionPermission !DropActionPermission
  | -- Query collections, allow list related
    RMCreateQueryCollection !CreateCollection
  | RMRenameQueryCollection !RenameCollection
  | RMDropQueryCollection !DropCollection
  | RMAddQueryToCollection !AddQueryToCollection
  | RMDropQueryFromCollection !DropQueryFromCollection
  | RMAddCollectionToAllowlist !AllowlistEntry
  | RMDropCollectionFromAllowlist !DropCollectionFromAllowlist
  | RMUpdateScopeOfCollectionInAllowlist !UpdateScopeOfCollectionInAllowlist
  | -- Rest endpoints
    RMCreateRestEndpoint !CreateEndpoint
  | RMDropRestEndpoint !DropEndpoint
  | -- GraphQL Data Connectors
    RMDCAddAgent !DCAddAgent
  | RMDCDeleteAgent !DCDeleteAgent
  | -- Custom types
    RMSetCustomTypes !CustomTypes
  | -- Api limits
    RMSetApiLimits !ApiLimit
  | RMRemoveApiLimits
  | -- Metrics config
    RMSetMetricsConfig !MetricsConfig
  | RMRemoveMetricsConfig
  | -- Inherited roles
    RMAddInheritedRole !InheritedRole
  | RMDropInheritedRole !DropInheritedRole
  | -- Metadata management
    RMReplaceMetadata !ReplaceMetadata
  | RMExportMetadata !ExportMetadata
  | RMClearMetadata !ClearMetadata
  | RMReloadMetadata !ReloadMetadata
  | RMGetInconsistentMetadata !GetInconsistentMetadata
  | RMDropInconsistentMetadata !DropInconsistentMetadata
  | -- Introspection options
    RMSetGraphqlSchemaIntrospectionOptions !SetGraphqlIntrospectionOptions
  | -- Network
    RMAddHostToTLSAllowlist !AddHostToTLSAllowlist
  | RMDropHostFromTLSAllowlist !DropHostFromTLSAllowlist
  | -- QueryTags
    RMSetQueryTagsConfig !SetQueryTagsConfig
  | -- OpenTelemetry
    RMSetOpenTelemetryConfig !OpenTelemetryConfig
  | RMSetOpenTelemetryStatus !OtelStatus
  | -- Debug
    RMDumpInternalState !DumpInternalState
  | RMGetCatalogState !GetCatalogState
  | RMSetCatalogState !SetCatalogState
  | RMTestWebhookTransform !(Unvalidated TestWebhookTransform)
  | -- Feature Flags
    RMGetFeatureFlag !GetFeatureFlag
  | -- Bulk metadata queries
    RMBulk [RQLMetadataRequest]
  deriving (Generic)
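
-- A hedged example of a "bulk" request (the nested requests shown are
-- illustrative), which batches several metadata requests in a single call:
--
-- > { "type": "bulk",
-- >   "args": [ { "type": "export_metadata", "args": {} },
-- >             { "type": "remove_api_limits", "args": {} } ] }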

-- NOTE! If you add a new request type here that is read-only, make sure to
-- update 'queryModifiesMetadata'.

instance FromJSON RQLMetadataV1 where
  parseJSON = withObject "RQLMetadataV1" \o -> do
    queryType <- o .: "type"
    let args :: forall a. FromJSON a => A.Parser a
        args = o .: "args"
    case queryType of
      -- backend agnostic
      "rename_source" -> RMRenameSource <$> args
      "add_remote_schema" -> RMAddRemoteSchema <$> args
      "update_remote_schema" -> RMUpdateRemoteSchema <$> args
      "remove_remote_schema" -> RMRemoveRemoteSchema <$> args
      "reload_remote_schema" -> RMReloadRemoteSchema <$> args
      "introspect_remote_schema" -> RMIntrospectRemoteSchema <$> args
      "add_remote_schema_permissions" -> RMAddRemoteSchemaPermissions <$> args
      "drop_remote_schema_permissions" -> RMDropRemoteSchemaPermissions <$> args
      "create_remote_schema_remote_relationship" -> RMCreateRemoteSchemaRemoteRelationship <$> args
      "update_remote_schema_remote_relationship" -> RMUpdateRemoteSchemaRemoteRelationship <$> args
      "delete_remote_schema_remote_relationship" -> RMDeleteRemoteSchemaRemoteRelationship <$> args
      "cleanup_event_trigger_logs" -> RMCleanupEventTriggerLog <$> args
      "resume_event_trigger_cleanups" -> RMResumeEventTriggerCleanup <$> args
      "pause_event_trigger_cleanups" -> RMPauseEventTriggerCleanup <$> args
      "create_cron_trigger" -> RMCreateCronTrigger <$> args
      "delete_cron_trigger" -> RMDeleteCronTrigger <$> args
      "create_scheduled_event" -> RMCreateScheduledEvent <$> args
      "delete_scheduled_event" -> RMDeleteScheduledEvent <$> args
      "get_scheduled_events" -> RMGetScheduledEvents <$> args
      "get_scheduled_event_invocations" -> RMGetScheduledEventInvocations <$> args
      "get_cron_triggers" -> pure RMGetCronTriggers
      "create_action" -> RMCreateAction <$> args
      "drop_action" -> RMDropAction <$> args
      "update_action" -> RMUpdateAction <$> args
      "create_action_permission" -> RMCreateActionPermission <$> args
      "drop_action_permission" -> RMDropActionPermission <$> args
      "create_query_collection" -> RMCreateQueryCollection <$> args
      "rename_query_collection" -> RMRenameQueryCollection <$> args
      "drop_query_collection" -> RMDropQueryCollection <$> args
      "add_query_to_collection" -> RMAddQueryToCollection <$> args
      "drop_query_from_collection" -> RMDropQueryFromCollection <$> args
      "add_collection_to_allowlist" -> RMAddCollectionToAllowlist <$> args
      "drop_collection_from_allowlist" -> RMDropCollectionFromAllowlist <$> args
      "update_scope_of_collection_in_allowlist" -> RMUpdateScopeOfCollectionInAllowlist <$> args
      "create_rest_endpoint" -> RMCreateRestEndpoint <$> args
      "drop_rest_endpoint" -> RMDropRestEndpoint <$> args
      "dc_add_agent" -> RMDCAddAgent <$> args
      "dc_delete_agent" -> RMDCDeleteAgent <$> args
      "list_source_kinds" -> RMListSourceKinds <$> args
      "get_source_kind_capabilities" -> RMGetSourceKindCapabilities <$> args
      "get_table_info" -> RMGetTableInfo <$> args
      "set_custom_types" -> RMSetCustomTypes <$> args
      "set_api_limits" -> RMSetApiLimits <$> args
      "remove_api_limits" -> pure RMRemoveApiLimits
      "set_metrics_config" -> RMSetMetricsConfig <$> args
      "remove_metrics_config" -> pure RMRemoveMetricsConfig
      "add_inherited_role" -> RMAddInheritedRole <$> args
      "drop_inherited_role" -> RMDropInheritedRole <$> args
      "replace_metadata" -> RMReplaceMetadata <$> args
      "export_metadata" -> RMExportMetadata <$> args
      "clear_metadata" -> RMClearMetadata <$> args
      "reload_metadata" -> RMReloadMetadata <$> args
      "get_inconsistent_metadata" -> RMGetInconsistentMetadata <$> args
      "drop_inconsistent_metadata" -> RMDropInconsistentMetadata <$> args
      "add_host_to_tls_allowlist" -> RMAddHostToTLSAllowlist <$> args
      "drop_host_from_tls_allowlist" -> RMDropHostFromTLSAllowlist <$> args
      "dump_internal_state" -> RMDumpInternalState <$> args
      "get_catalog_state" -> RMGetCatalogState <$> args
      "set_catalog_state" -> RMSetCatalogState <$> args
      "set_graphql_schema_introspection_options" -> RMSetGraphqlSchemaIntrospectionOptions <$> args
      "test_webhook_transform" -> RMTestWebhookTransform <$> args
      "set_query_tags" -> RMSetQueryTagsConfig <$> args
      "set_opentelemetry_config" -> RMSetOpenTelemetryConfig <$> args
      "set_opentelemetry_status" -> RMSetOpenTelemetryStatus <$> args
      "get_feature_flag" -> RMGetFeatureFlag <$> args
      "bulk" -> RMBulk <$> args
      -- Backend prefixed metadata actions:
      _ -> do
        -- 1) Parse the backend source kind and metadata command:
        (backendSourceKind, cmd) <- parseQueryType queryType
        dispatchAnyBackend @BackendAPI backendSourceKind \(backendSourceKind' :: BackendSourceKind b) -> do
          -- 2) Parse the args field:
          argValue <- args
          -- 3) Attempt to run all the backend-specific command parsers
          -- against the source kind, command, and args value.
          -- NOTE: if more than one parser succeeds, the first success wins.
          command <- choice <$> sequenceA [p backendSourceKind' cmd argValue | p <- metadataV1CommandParsers @b]
          onNothing command $
            fail $
              "unknown metadata command \""
                <> T.unpack cmd
                <> "\" for backend "
                <> T.unpack (T.toTxt backendSourceKind')
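
-- A hedged sketch of the wire format this parser accepts for a
-- backend-prefixed command (payload values are illustrative only):
--
-- > { "type": "pg_add_source",
-- >   "args": { "name": "my_source", "configuration": { ... } } }
--
-- The "pg" prefix is resolved to the Postgres backend kind by
-- 'parseQueryType' below, and the "add_source" suffix is handed to that
-- backend's 'metadataV1CommandParsers'.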

-- | Parse the Metadata API action type returning a tuple of the
-- 'BackendSourceKind' and the action suffix.
--
-- For example: @"pg_add_source"@ parses as @(PostgresVanillaValue, "add_source")@
parseQueryType :: MonadFail m => Text -> m (AnyBackend BackendSourceKind, Text)
parseQueryType queryType =
  let (prefix, T.drop 1 -> cmd) = T.breakOn "_" queryType
   in (,cmd)
        <$> backendSourceKindFromText prefix
          `onNothing` fail
            ( "unknown metadata command \""
                <> T.unpack queryType
                <> "\"; \""
                <> T.unpack prefix
                <> "\" was not recognized as a valid backend name"
            )
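
-- A small GHCi sketch of the splitting step, using the documented
-- behaviour of 'Data.Text.breakOn' (the matched separator stays attached
-- to the second component):
--
-- >>> T.breakOn "_" "pg_add_source"
-- ("pg","_add_source")
--
-- The view pattern @T.drop 1 -> cmd@ then strips the leading underscore,
-- leaving @cmd = "add_source"@.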

data RQLMetadataV2
  = RMV2ReplaceMetadata !ReplaceMetadataV2
  | RMV2ExportMetadata !ExportMetadata
  deriving (Generic)

instance FromJSON RQLMetadataV2 where
  parseJSON =
    genericParseJSON $
      defaultOptions
        { constructorTagModifier = snakeCase . drop 4,
          sumEncoding = TaggedObject "type" "args"
        }
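
-- Under this generic encoding, dropping the four-character "RMV2" prefix
-- and snake-casing the remainder yields each constructor's wire tag, so
-- 'RMV2ExportMetadata' parses from:
--
-- > { "type": "export_metadata", "args": {} }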

data RQLMetadataRequest
  = RMV1 !RQLMetadataV1
  | RMV2 !RQLMetadataV2

instance FromJSON RQLMetadataRequest where
  parseJSON = withObject "RQLMetadataRequest" $ \o -> do
    version <- o .:? "version" .!= VIVersion1
    let val = Object o
    case version of
      VIVersion1 -> RMV1 <$> parseJSON val
      VIVersion2 -> RMV2 <$> parseJSON val
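
-- The optional "version" key (defaulting to 1 when absent) selects the
-- parser; the same object, including "type" and "args", is then re-parsed
-- as that version. An illustrative v1 request:
--
-- > { "version": 1, "type": "clear_metadata", "args": {} }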

-- | The payload for the @/v1/metadata@ endpoint. See:
--
-- https://hasura.io/docs/latest/graphql/core/api-reference/metadata-api/index/
data RQLMetadata = RQLMetadata
  { _rqlMetadataResourceVersion :: !(Maybe MetadataResourceVersion),
    _rqlMetadata :: !RQLMetadataRequest
  }

instance FromJSON RQLMetadata where
  parseJSON = withObject "RQLMetadata" $ \o -> do
    _rqlMetadataResourceVersion <- o .:? "resource_version"
    _rqlMetadata <- parseJSON $ Object o
    pure RQLMetadata {..}
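
-- An illustrative full payload for @/v1/metadata@ (field values are
-- hypothetical), combining the optional resource version with a versioned
-- request:
--
-- > { "resource_version": 42,
-- >   "version": 2,
-- >   "type": "replace_metadata",
-- >   "args": { "allow_inconsistent_metadata": true, "metadata": { ... } } }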

runMetadataQuery ::
  ( MonadIO m,
    MonadError QErr m,
    MonadBaseControl IO m,
    HasAppEnv m,
    HasCacheStaticConfig m,
    HasFeatureFlagChecker m,
    Tracing.MonadTrace m,
    MonadMetadataStorage m,
    MonadResolveSource m,
MonadEventLogCleanup m,
ProvidesHasuraServices m,
MonadGetApiTimeLimit m,
UserInfoM m
) =>
AppContext ->
RebuildableSchemaCache ->
RQLMetadata ->
m (EncJSON, RebuildableSchemaCache)
runMetadataQuery appContext schemaCache RQLMetadata {..} = do
appEnv@AppEnv {..} <- askAppEnv
let logger = _lsLogger appEnvLoggers
MetadataWithResourceVersion metadata currentResourceVersion <- Tracing.newSpan "fetchMetadata" $ liftEitherM fetchMetadata
let exportsMetadata = \case
RMV1 (RMExportMetadata _) -> True
RMV2 (RMV2ExportMetadata _) -> True
_ -> False
metadataDefaults =
-- Note: The following check is performed to determine if the metadata defaults can
-- be safely merged into the reader at this point.
--
-- We want to prevent two scenarios:
-- \* Exporting defaults - Contradicting the "roundtrip" principle of metadata operations
-- \* Serializing defaults into the metadata storage - Putting data into the user's hdb_catalog
--
-- While this check does have the desired effect, it relies on the fact that the only
-- operations that need access to the defaults here do not export or modify metadata.
-- If, at some point in the future, an operation needs access to the defaults and also needs
-- to export/modify metadata, then another approach will need to be taken.
--
-- Luckily, most actual need for defaults access exists within the schema cache build phase,
-- so metadata operations don't need "smarts" that require defaults access.
--
if (exportsMetadata _rqlMetadata || queryModifiesMetadata _rqlMetadata)
then emptyMetadataDefaults
else acMetadataDefaults appContext
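-- For illustration only (hypothetical request payloads): a call such as
--
-- > {"type": "export_metadata", "version": 2, "args": {}}
--
-- takes the 'emptyMetadataDefaults' branch, so the serialised output never
-- contains the configured defaults, whereas a read-only call such as
--
-- > {"type": "get_inconsistent_metadata", "args": {}}
--
-- neither exports nor modifies metadata and therefore sees
-- 'acMetadataDefaults' merged in.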
dynamicConfig <- buildCacheDynamicConfig appEnv appContext
((r, modMetadata), modSchemaCache, cacheInvalidations) <-
runMetadataQueryM
(acEnvironment appContext)
appEnvCheckFeatureFlag
(acRemoteSchemaPermsCtx appContext)
currentResourceVersion
_rqlMetadata
-- TODO: remove this straight runReaderT that provides no actual new info
& flip runReaderT logger
& runMetadataT metadata metadataDefaults
& runCacheRWT dynamicConfig schemaCache
-- set modified metadata in storage
if queryModifiesMetadata _rqlMetadata
then case (appEnvEnableMaintenanceMode, appEnvEnableReadOnlyMode) of
(MaintenanceModeDisabled, ReadOnlyModeDisabled) -> do
-- set modified metadata in storage
L.unLogger logger $
SchemaSyncLog L.LevelInfo TTMetadataApi $
String $
"Attempting to insert new metadata in storage"
newResourceVersion <-
Tracing.newSpan "setMetadata" $
liftEitherM $
setMetadata (fromMaybe currentResourceVersion _rqlMetadataResourceVersion) modMetadata
L.unLogger logger $
SchemaSyncLog L.LevelInfo TTMetadataApi $
String $
"Successfully inserted new metadata in storage with resource version: " <> showMetadataResourceVersion newResourceVersion
-- notify schema cache sync
Tracing.newSpan "notifySchemaCacheSync" $
liftEitherM $
notifySchemaCacheSync newResourceVersion appEnvInstanceId cacheInvalidations
L.unLogger logger $
SchemaSyncLog L.LevelInfo TTMetadataApi $
String $
"Inserted schema cache sync notification at resource version:" <> showMetadataResourceVersion newResourceVersion
(_, modSchemaCache', _) <-
Tracing.newSpan "setMetadataResourceVersionInSchemaCache" $
setMetadataResourceVersionInSchemaCache newResourceVersion
& runCacheRWT dynamicConfig modSchemaCache
pure (r, modSchemaCache')
(MaintenanceModeEnabled (), ReadOnlyModeDisabled) ->
throw500 "metadata cannot be modified in maintenance mode"
(MaintenanceModeDisabled, ReadOnlyModeEnabled) ->
throw400 NotSupported "metadata cannot be modified in read-only mode"
(MaintenanceModeEnabled (), ReadOnlyModeEnabled) ->
throw500 "metadata cannot be modified in maintenance mode"
else pure (r, modSchemaCache)
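-- A quick sketch of the mode matrix above (request payload hypothetical):
-- with read-only mode enabled, a modifying call such as
--
-- > {"type": "pg_track_table", "args": {"source": "default", "table": "users"}}
--
-- is rejected with a 400 "not-supported" error before anything is persisted,
-- while read-only calls like export_metadata still go through; maintenance
-- mode rejects modifications with a 500 instead.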
queryModifiesMetadata :: RQLMetadataRequest -> Bool
queryModifiesMetadata = \case
RMV1 q ->
case q of
RMRedeliverEvent _ -> False
RMInvokeEventTrigger _ -> False
RMGetInconsistentMetadata _ -> False
RMIntrospectRemoteSchema _ -> False
RMDumpInternalState _ -> False
RMSetCatalogState _ -> False
RMGetCatalogState _ -> False
RMExportMetadata _ -> False
RMGetScheduledEventInvocations _ -> False
RMGetCronTriggers -> False
RMGetScheduledEvents _ -> False
RMCreateScheduledEvent _ -> False
RMDeleteScheduledEvent _ -> False
RMTestWebhookTransform _ -> False
RMGetSourceKindCapabilities _ -> False
RMListSourceKinds _ -> False
RMGetSourceTables _ -> False
RMGetTableInfo _ -> False
RMTestConnectionTemplate _ -> False
RMSuggestRelationships _ -> False
RMGetLogicalModel _ -> False
RMTrackLogicalModel _ -> True
RMUntrackLogicalModel _ -> True
RMGetCustomReturnType _ -> False
RMTrackCustomReturnType _ -> True
RMUntrackCustomReturnType _ -> True
RMCreateSelectCustomReturnTypePermission _ -> True
RMDropSelectCustomReturnTypePermission _ -> True
RMBulk qs -> any queryModifiesMetadata qs
-- We used to assume that the fallthrough was True,
-- but it is better to be explicit here to warn when new constructors are added.
RMAddSource _ -> True
RMDropSource _ -> True
RMRenameSource _ -> True
RMUpdateSource _ -> True
RMTrackTable _ -> True
RMUntrackTable _ -> True
RMSetTableCustomization _ -> True
RMSetApolloFederationConfig _ -> True
RMPgSetTableIsEnum _ -> True
RMCreateInsertPermission _ -> True
RMCreateSelectPermission _ -> True
RMCreateUpdatePermission _ -> True
RMCreateDeletePermission _ -> True
RMDropInsertPermission _ -> True
RMDropSelectPermission _ -> True
RMDropUpdatePermission _ -> True
RMDropDeletePermission _ -> True
RMSetPermissionComment _ -> True
RMCreateObjectRelationship _ -> True
RMCreateArrayRelationship _ -> True
RMDropRelationship _ -> True
RMSetRelationshipComment _ -> True
RMRenameRelationship _ -> True
RMCreateRemoteRelationship _ -> True
RMUpdateRemoteRelationship _ -> True
RMDeleteRemoteRelationship _ -> True
RMTrackFunction _ -> True
RMUntrackFunction _ -> True
RMSetFunctionCustomization _ -> True
RMCreateFunctionPermission _ -> True
RMDropFunctionPermission _ -> True
RMAddComputedField _ -> True
RMDropComputedField _ -> True
RMCreateEventTrigger _ -> True
RMDeleteEventTrigger _ -> True
RMCleanupEventTriggerLog _ -> True
RMResumeEventTriggerCleanup _ -> True
RMPauseEventTriggerCleanup _ -> True
RMAddRemoteSchema _ -> True
RMUpdateRemoteSchema _ -> True
RMRemoveRemoteSchema _ -> True
RMReloadRemoteSchema _ -> True
RMAddRemoteSchemaPermissions _ -> True
RMDropRemoteSchemaPermissions _ -> True
RMCreateRemoteSchemaRemoteRelationship _ -> True
RMUpdateRemoteSchemaRemoteRelationship _ -> True
RMDeleteRemoteSchemaRemoteRelationship _ -> True
RMCreateCronTrigger _ -> True
RMDeleteCronTrigger _ -> True
RMCreateAction _ -> True
RMDropAction _ -> True
RMUpdateAction _ -> True
RMCreateActionPermission _ -> True
RMDropActionPermission _ -> True
RMCreateQueryCollection _ -> True
RMRenameQueryCollection _ -> True
RMDropQueryCollection _ -> True
RMAddQueryToCollection _ -> True
RMDropQueryFromCollection _ -> True
RMAddCollectionToAllowlist _ -> True
RMDropCollectionFromAllowlist _ -> True
RMUpdateScopeOfCollectionInAllowlist _ -> True
RMCreateRestEndpoint _ -> True
RMDropRestEndpoint _ -> True
RMDCAddAgent _ -> True
RMDCDeleteAgent _ -> True
RMSetCustomTypes _ -> True
RMSetApiLimits _ -> True
RMRemoveApiLimits -> True
RMSetMetricsConfig _ -> True
RMRemoveMetricsConfig -> True
RMAddInheritedRole _ -> True
RMDropInheritedRole _ -> True
RMReplaceMetadata _ -> True
RMClearMetadata _ -> True
RMReloadMetadata _ -> True
RMDropInconsistentMetadata _ -> True
RMSetGraphqlSchemaIntrospectionOptions _ -> True
RMAddHostToTLSAllowlist _ -> True
RMDropHostFromTLSAllowlist _ -> True
RMSetQueryTagsConfig _ -> True
RMSetOpenTelemetryConfig _ -> True
RMSetOpenTelemetryStatus _ -> True
RMGetFeatureFlag _ -> False
RMV2 q ->
case q of
RMV2ExportMetadata _ -> False
_ -> True
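-- Behaviour sketch, doctest-style (constructor arguments elided as "..."):
--
-- >>> queryModifiesMetadata (RMV1 (RMExportMetadata ...))
-- False
--
-- >>> queryModifiesMetadata (RMV1 (RMBulk [RMV1 RMGetCronTriggers, RMV1 (RMReloadMetadata ...)]))
-- True
--
-- A bulk modifies metadata exactly when at least one of its entries does.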
runMetadataQueryM ::
( MonadIO m,
MonadBaseControl IO m,
CacheRWM m,
Tracing.MonadTrace m,
UserInfoM m,
MetadataM m,
MonadMetadataStorage m,
MonadReader r m,
Has (L.Logger L.Hasura) r,
MonadError QErr m,
MonadEventLogCleanup m,
ProvidesHasuraServices m,
MonadGetApiTimeLimit m,
HasFeatureFlagChecker m
) =>
Env.Environment ->
CheckFeatureFlag ->
Options.RemoteSchemaPermissions ->
MetadataResourceVersion ->
RQLMetadataRequest ->
m EncJSON
runMetadataQueryM env checkFeatureFlag remoteSchemaPerms currentResourceVersion =
withPathK "args" . \case
-- NOTE: This is a good place to install tracing, since it's involved in
-- the recursive case via "bulk":
RMV1 q ->
Tracing.newSpan ("v1 " <> T.pack (constrName q)) $
runMetadataQueryV1M env checkFeatureFlag remoteSchemaPerms currentResourceVersion q
RMV2 q ->
Tracing.newSpan ("v2 " <> T.pack (constrName q)) $
runMetadataQueryV2M currentResourceVersion q
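-- Span-naming sketch (names illustrative): since "bulk" recurses through
-- 'runMetadataQueryM', each nested request opens its own span under the
-- enclosing one, e.g.
--
-- >  v1 RMBulk
-- >  |- v1 RMExportMetadata
-- >  |- v2 RMV2ReplaceMetadata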
runMetadataQueryV1M ::
forall m r.
( MonadIO m,
MonadBaseControl IO m,
CacheRWM m,
Tracing.MonadTrace m,
UserInfoM m,
MetadataM m,
MonadMetadataStorage m,
MonadReader r m,
Has (L.Logger L.Hasura) r,
MonadError QErr m,
MonadEventLogCleanup m,
ProvidesHasuraServices m,
MonadGetApiTimeLimit m,
HasFeatureFlagChecker m
) =>
Env.Environment ->
CheckFeatureFlag ->
Options.RemoteSchemaPermissions ->
MetadataResourceVersion ->
RQLMetadataV1 ->
m EncJSON
runMetadataQueryV1M env checkFeatureFlag remoteSchemaPerms currentResourceVersion = \case
RMAddSource q -> dispatchMetadata (runAddSource env) q
RMDropSource q -> runDropSource q
RMRenameSource q -> runRenameSource q
RMUpdateSource q -> dispatchMetadata runUpdateSource q
RMListSourceKinds q -> runListSourceKinds q
RMGetSourceKindCapabilities q -> runGetSourceKindCapabilities q
RMGetSourceTables q -> dispatchMetadata runGetSourceTables q
RMGetTableInfo q -> runGetTableInfo q
RMTrackTable q -> dispatchMetadata runTrackTableV2Q q
RMUntrackTable q -> dispatchMetadataAndEventTrigger runUntrackTableQ q
RMSetFunctionCustomization q -> dispatchMetadata Functions.runSetFunctionCustomization q
RMSetTableCustomization q -> dispatchMetadata runSetTableCustomization q
RMSetApolloFederationConfig q -> dispatchMetadata runSetApolloFederationConfig q
RMPgSetTableIsEnum q -> dispatchMetadata runSetExistingTableIsEnumQ q
RMCreateInsertPermission q -> dispatchMetadata runCreatePerm q
RMCreateSelectPermission q -> dispatchMetadata runCreatePerm q
RMCreateUpdatePermission q -> dispatchMetadata runCreatePerm q
RMCreateDeletePermission q -> dispatchMetadata runCreatePerm q
RMDropInsertPermission q -> dispatchMetadata (runDropPerm PTInsert) q
RMDropSelectPermission q -> dispatchMetadata (runDropPerm PTSelect) q
RMDropUpdatePermission q -> dispatchMetadata (runDropPerm PTUpdate) q
RMDropDeletePermission q -> dispatchMetadata (runDropPerm PTDelete) q
RMSetPermissionComment q -> dispatchMetadata runSetPermComment q
RMCreateObjectRelationship q -> dispatchMetadata (runCreateRelationship ObjRel . unCreateObjRel) q
RMCreateArrayRelationship q -> dispatchMetadata (runCreateRelationship ArrRel . unCreateArrRel) q
RMDropRelationship q -> dispatchMetadata runDropRel q
RMSetRelationshipComment q -> dispatchMetadata runSetRelComment q
RMRenameRelationship q -> dispatchMetadata runRenameRel q
RMSuggestRelationships q -> dispatchMetadata runSuggestRels q
RMCreateRemoteRelationship q -> dispatchMetadata runCreateRemoteRelationship q
RMUpdateRemoteRelationship q -> dispatchMetadata runUpdateRemoteRelationship q
RMDeleteRemoteRelationship q -> dispatchMetadata runDeleteRemoteRelationship q
RMTrackFunction q -> dispatchMetadata Functions.runTrackFunctionV2 q
RMUntrackFunction q -> dispatchMetadata Functions.runUntrackFunc q
RMCreateFunctionPermission q -> dispatchMetadata Functions.runCreateFunctionPermission q
RMDropFunctionPermission q -> dispatchMetadata Functions.runDropFunctionPermission q
RMAddComputedField q -> dispatchMetadata runAddComputedField q
RMDropComputedField q -> dispatchMetadata runDropComputedField q
RMTestConnectionTemplate q -> dispatchMetadata runTestConnectionTemplate q
RMGetLogicalModel q -> dispatchMetadata LogicalModels.runGetLogicalModel q
RMTrackLogicalModel q -> dispatchMetadata (LogicalModels.runTrackLogicalModel env) q
RMUntrackLogicalModel q -> dispatchMetadata LogicalModels.runUntrackLogicalModel q
RMGetCustomReturnType q -> dispatchMetadata CustomReturnType.runGetCustomReturnType q
RMTrackCustomReturnType q -> dispatchMetadata CustomReturnType.runTrackCustomReturnType q
RMUntrackCustomReturnType q -> dispatchMetadata CustomReturnType.runUntrackCustomReturnType q
RMCreateSelectCustomReturnTypePermission q -> dispatchMetadata CustomReturnType.runCreateSelectCustomReturnTypePermission q
RMDropSelectCustomReturnTypePermission q -> dispatchMetadata CustomReturnType.runDropSelectCustomReturnTypePermission q
RMCreateEventTrigger q ->
dispatchMetadataAndEventTrigger
( validateTransforms
(unUnvalidate1 . cetqRequestTransform . _Just)
(unUnvalidate1 . cetqResponseTrasnform . _Just)
(runCreateEventTriggerQuery . _unUnvalidate1)
)
q
RMDeleteEventTrigger q -> dispatchMetadataAndEventTrigger runDeleteEventTriggerQuery q
RMRedeliverEvent q -> dispatchEventTrigger runRedeliverEvent q
RMInvokeEventTrigger q -> dispatchEventTrigger runInvokeEventTrigger q
RMCleanupEventTriggerLog q -> runCleanupEventTriggerLog q
RMResumeEventTriggerCleanup q -> runEventTriggerResumeCleanup q
RMPauseEventTriggerCleanup q -> runEventTriggerPauseCleanup q
RMAddRemoteSchema q -> runAddRemoteSchema env q
RMUpdateRemoteSchema q -> runUpdateRemoteSchema env q
RMRemoveRemoteSchema q -> runRemoveRemoteSchema q
RMReloadRemoteSchema q -> runReloadRemoteSchema q
RMIntrospectRemoteSchema q -> runIntrospectRemoteSchema q
RMAddRemoteSchemaPermissions q -> runAddRemoteSchemaPermissions remoteSchemaPerms q
RMDropRemoteSchemaPermissions q -> runDropRemoteSchemaPermissions q
RMCreateRemoteSchemaRemoteRelationship q -> runCreateRemoteSchemaRemoteRelationship q
RMUpdateRemoteSchemaRemoteRelationship q -> runUpdateRemoteSchemaRemoteRelationship q
RMDeleteRemoteSchemaRemoteRelationship q -> runDeleteRemoteSchemaRemoteRelationship q
RMCreateCronTrigger q ->
validateTransforms
(unUnvalidate . cctRequestTransform . _Just)
(unUnvalidate . cctResponseTransform . _Just)
(runCreateCronTrigger . _unUnvalidate)
q
RMDeleteCronTrigger q -> runDeleteCronTrigger q
RMCreateScheduledEvent q -> runCreateScheduledEvent q
RMDeleteScheduledEvent q -> runDeleteScheduledEvent q
RMGetScheduledEvents q -> runGetScheduledEvents q
RMGetScheduledEventInvocations q -> runGetScheduledEventInvocations q
RMGetCronTriggers -> runGetCronTriggers
RMCreateAction q ->
validateTransforms
(unUnvalidate . caDefinition . adRequestTransform . _Just)
(unUnvalidate . caDefinition . adResponseTransform . _Just)
(runCreateAction . _unUnvalidate)
q
RMDropAction q -> runDropAction q
RMUpdateAction q ->
validateTransforms
(unUnvalidate . uaDefinition . adRequestTransform . _Just)
(unUnvalidate . uaDefinition . adResponseTransform . _Just)
(runUpdateAction . _unUnvalidate)
q
RMCreateActionPermission q -> runCreateActionPermission q
RMDropActionPermission q -> runDropActionPermission q
RMCreateQueryCollection q -> runCreateCollection q
RMRenameQueryCollection q -> runRenameCollection q
RMDropQueryCollection q -> runDropCollection q
RMAddQueryToCollection q -> runAddQueryToCollection q
RMDropQueryFromCollection q -> runDropQueryFromCollection q
RMAddCollectionToAllowlist q -> runAddCollectionToAllowlist q
RMDropCollectionFromAllowlist q -> runDropCollectionFromAllowlist q
RMUpdateScopeOfCollectionInAllowlist q -> runUpdateScopeOfCollectionInAllowlist q
RMCreateRestEndpoint q -> runCreateEndpoint q
RMDropRestEndpoint q -> runDropEndpoint q
RMDCAddAgent q -> runAddDataConnectorAgent q
RMDCDeleteAgent q -> runDeleteDataConnectorAgent q
RMSetCustomTypes q -> runSetCustomTypes q
RMSetApiLimits q -> runSetApiLimits q
RMRemoveApiLimits -> runRemoveApiLimits
RMSetMetricsConfig q -> runSetMetricsConfig q
RMRemoveMetricsConfig -> runRemoveMetricsConfig
RMAddInheritedRole q -> runAddInheritedRole q
RMDropInheritedRole q -> runDropInheritedRole q
RMReplaceMetadata q -> runReplaceMetadata q
RMExportMetadata q -> runExportMetadata q
RMClearMetadata q -> runClearMetadata q
RMReloadMetadata q -> runReloadMetadata q
RMGetInconsistentMetadata q -> runGetInconsistentMetadata q
RMDropInconsistentMetadata q -> runDropInconsistentMetadata q
RMSetGraphqlSchemaIntrospectionOptions q -> runSetGraphqlSchemaIntrospectionOptions q
RMAddHostToTLSAllowlist q -> runAddHostToTLSAllowlist q
RMDropHostFromTLSAllowlist q -> runDropHostFromTLSAllowlist q
RMDumpInternalState q -> runDumpInternalState q
RMGetCatalogState q -> runGetCatalogState q
RMSetCatalogState q -> runSetCatalogState q
RMTestWebhookTransform q ->
validateTransforms
(unUnvalidate . twtRequestTransformer)
(unUnvalidate . twtResponseTransformer . _Just)
(runTestWebhookTransform . _unUnvalidate)
q
RMSetQueryTagsConfig q -> runSetQueryTagsConfig q
RMSetOpenTelemetryConfig q -> runSetOpenTelemetryConfig q
RMSetOpenTelemetryStatus q -> runSetOpenTelemetryStatus q
RMGetFeatureFlag q -> runGetFeatureFlag checkFeatureFlag q
RMBulk q -> encJFromList <$> indexedMapM (runMetadataQueryM env checkFeatureFlag remoteSchemaPerms currentResourceVersion) q
where
dispatchMetadata ::
(forall b. BackendMetadata b => i b -> a) ->
AnyBackend i ->
a
dispatchMetadata f x = dispatchAnyBackend @BackendMetadata x f
dispatchEventTrigger :: (forall b. BackendEventTrigger b => i b -> a) -> AnyBackend i -> a
dispatchEventTrigger f x = dispatchAnyBackend @BackendEventTrigger x f
dispatchMetadataAndEventTrigger ::
(forall b. (BackendMetadata b, BackendEventTrigger b) => i b -> a) ->
AnyBackend i ->
a
dispatchMetadataAndEventTrigger f x = dispatchAnyBackendWithTwoConstraints @BackendMetadata @BackendEventTrigger x f
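-- Dispatch sketch (types illustrative): 'AnyBackend i' is an existential that
-- hides the backend tag, so a handler polymorphic in any 'BackendMetadata'
-- backend can be applied uniformly. For instance,
--
-- > RMAddSource q -> dispatchMetadata (runAddSource env) q
--
-- unfolds to 'dispatchAnyBackend @BackendMetadata q (runAddSource env)',
-- which selects the concrete backend (Postgres, MSSQL, ...) at runtime from
-- the tag stored inside 'q'.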
runMetadataQueryV2M ::
( MonadIO m,
CacheRWM m,
MonadBaseControl IO m,
MetadataM m,
MonadMetadataStorage m,
MonadReader r m,
Has (L.Logger L.Hasura) r,
MonadError QErr m,
MonadEventLogCleanup m,
MonadGetApiTimeLimit m
) =>
MetadataResourceVersion ->
RQLMetadataV2 ->
m EncJSON
runMetadataQueryV2M currentResourceVersion = \case
RMV2ReplaceMetadata q -> runReplaceMetadataV2 q
RMV2ExportMetadata q -> runExportMetadataV2 currentResourceVersion q
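-- An illustrative v2 request body (values hypothetical):
--
-- > {
-- >   "type": "export_metadata",
-- >   "version": 2,
-- >   "args": {}
-- > }
--
-- 'runExportMetadataV2' also reports the resource version captured at the
-- start of 'runMetadataQuery', letting clients detect concurrent metadata
-- changes between an export and a subsequent replace.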