graphql-engine/server/src-lib/Hasura/GraphQL/Schema.hs

{-# LANGUAGE QuasiQuotes #-}
{-# LANGUAGE ViewPatterns #-}
{-# LANGUAGE NoMonomorphismRestriction #-}
module Hasura.GraphQL.Schema
( buildGQLContext,
)
where
import Control.Concurrent.Extended (concurrentlyEIO, forConcurrentlyEIO)
import Control.Lens hiding (contexts)
import Control.Monad.Memoize
import Data.Aeson.Ordered qualified as JO
import Data.HashMap.Strict qualified as Map
import Data.HashMap.Strict.InsOrd qualified as OMap
import Data.HashSet qualified as Set
import Data.List.Extended (duplicates)
import Data.Text.Extended
import Data.Text.NonEmpty qualified as NT
import Hasura.Base.Error
import Hasura.Base.ErrorMessage
import Hasura.Base.ToErrorValue
import Hasura.CustomReturnType.Cache (_crtiPermissions)
import Hasura.Function.Cache
import Hasura.GraphQL.ApolloFederation
import Hasura.GraphQL.Context
import Hasura.GraphQL.Namespace
import Hasura.GraphQL.Parser.Schema.Convert (convertToSchemaIntrospection)
import Hasura.GraphQL.Schema.Backend
import Hasura.GraphQL.Schema.Common
import Hasura.GraphQL.Schema.Instances ()
import Hasura.GraphQL.Schema.Introspect
import Hasura.GraphQL.Schema.Options (SchemaOptions (..))
import Hasura.GraphQL.Schema.Options qualified as Options
import Hasura.GraphQL.Schema.Parser
( FieldParser,
Kind (..),
MonadParse,
Parser,
Schema,
)
import Hasura.GraphQL.Schema.Parser qualified as P
import Hasura.GraphQL.Schema.Postgres
import Hasura.GraphQL.Schema.Relay
import Hasura.GraphQL.Schema.Remote (buildRemoteParser)
import Hasura.GraphQL.Schema.RemoteRelationship
import Hasura.GraphQL.Schema.Table
import Hasura.GraphQL.Schema.Typename (MkTypename (..))
import Hasura.LogicalModel.Cache (LogicalModelCache, _lmiReturns)
import Hasura.Name qualified as Name
import Hasura.Prelude
import Hasura.QueryTags.Types
import Hasura.RQL.IR
import Hasura.RQL.Types.Action
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.CustomTypes
import Hasura.RQL.Types.Metadata.Object
import Hasura.RQL.Types.Permission
import Hasura.RQL.Types.Relationships.Remote
import Hasura.RQL.Types.SchemaCache hiding (askTableInfo)
import Hasura.RQL.Types.Source
import Hasura.RQL.Types.SourceCustomization as SC
import Hasura.RQL.Types.Table
import Hasura.RemoteSchema.Metadata
import Hasura.RemoteSchema.SchemaCache
import Hasura.SQL.AnyBackend qualified as AB
import Hasura.SQL.Tag (HasTag)
import Hasura.Server.Types
import Hasura.Session
import Language.GraphQL.Draft.Syntax qualified as G

-------------------------------------------------------------------------------
-- Building contexts

-- | Builds the full GraphQL context for a given query type.
--
-- A 'GQLContext' stores how an incoming request should be processed: how to
-- translate each incoming field of a request into a corresponding semantic
-- representation. There is a different one per 'Role', as each role might have
-- different permissions, and therefore not access to the same set of objects in
-- the schema.
--
-- This function takes all necessary information from the metadata, and the
-- 'GraphQLQueryType', and builds all relevant contexts: a hash map from
-- 'RoleName' to their 'GQLContext' and the "default" context for
-- unauthenticated users.
--
-- When building the schema for each role, we treat the remote schemas as
-- "second-class citizens" compared to sources; more specifically, we attempt to
-- detect whether the inclusion of a given remote schema would result in root
-- field conflicts, and only keep schemas that don't generate any. This results
-- in a partial schema being available to the users, and a better error message
-- than would arise from 'safeSelectionSet'.
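--
-- A rough caller-side sketch of how the result is shaped (the binding names
-- here are hypothetical, and role dispatch is simplified):
--
-- @
-- ((adminIntrospection, roleContexts, unauthenticated, inconsistencies), relaySchema) <-
--   buildGQLContext funcPerms rsPerms expFeatures sqlGen apolloStatus
--     sources remoteSchemas actions customTypes
-- -- a request is then resolved against the context for its role, falling
-- -- back to the unauthenticated context when the role is unknown:
-- let contextFor role = Map.lookup role roleContexts
-- @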
buildGQLContext ::
forall m.
( MonadError QErr m,
MonadIO m
) =>
Options.InferFunctionPermissions ->
Options.RemoteSchemaPermissions ->
HashSet ExperimentalFeature ->
SQLGenCtx ->
ApolloFederationStatus ->
SourceCache ->
HashMap RemoteSchemaName (RemoteSchemaCtx, MetadataObject) ->
ActionCache ->
AnnotatedCustomTypes ->
m
( -- Hasura schema
( G.SchemaIntrospection,
HashMap RoleName (RoleContext GQLContext),
GQLContext,
HashSet InconsistentMetadata
),
-- Relay schema
( HashMap RoleName (RoleContext GQLContext),
GQLContext
)
)
buildGQLContext
functionPermissions
remoteSchemaPermissions
experimentalFeatures
sqlGen
apolloFederationStatus
sources
allRemoteSchemas
allActions
customTypes = do
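    -- Collect the complete set of roles to build a schema for: every role
    -- mentioned by an action, table, or custom return type permission, plus
    -- remote schema roles when remote schema permissions are enabled, and
    -- always the admin role.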
let remoteSchemasRoles = concatMap (Map.keys . _rscPermissions . fst . snd) $ Map.toList allRemoteSchemas
actionRoles =
Set.insert adminRoleName $
Set.fromList (allActionInfos ^.. folded . aiPermissions . to Map.keys . folded)
<> Set.fromList (bool mempty remoteSchemasRoles $ remoteSchemaPermissions == Options.EnableRemoteSchemaPermissions)
allActionInfos = Map.elems allActions
allTableRoles = Set.fromList $ getTableRoles =<< Map.elems sources
allCustomReturnTypeRoles = Set.fromList $ getCustomReturnTypeRoles =<< Map.elems sources
allRoles = actionRoles <> allTableRoles <> allCustomReturnTypeRoles
contexts <-
    -- Build role contexts in parallel. We'd prefer deterministic parallelism
    -- but that isn't really achievable (see mono #3829). NOTE: the admin role
-- will still be a bottleneck here, even on huge_schema which has many
-- roles.
fmap Map.fromList $
forConcurrentlyEIO 10 (Set.toList allRoles) $ \role -> do
(role,)
<$> concurrentlyEIO
( buildRoleContext
(sqlGen, functionPermissions)
sources
allRemoteSchemas
allActionInfos
customTypes
role
remoteSchemaPermissions
experimentalFeatures
apolloFederationStatus
)
( buildRelayRoleContext
(sqlGen, functionPermissions)
sources
allActionInfos
customTypes
role
experimentalFeatures
)
let hasuraContexts = fst <$> contexts
relayContexts = snd <$> contexts
adminIntrospection <-
case Map.lookup adminRoleName hasuraContexts of
Just (_context, _errors, introspection) -> pure introspection
Nothing -> throw500 "buildGQLContext failed to build for the admin role"
(unauthenticated, unauthenticatedRemotesErrors) <- unauthenticatedContext (sqlGen, functionPermissions) sources allRemoteSchemas experimentalFeatures remoteSchemaPermissions
pure
( ( adminIntrospection,
view _1 <$> hasuraContexts,
unauthenticated,
Set.unions $ unauthenticatedRemotesErrors : (view _2 <$> Map.elems hasuraContexts)
),
( relayContexts,
-- Currently, remote schemas are exposed through Relay, but ONLY through
-- the unauthenticated role. This is probably an oversight. See
-- hasura/graphql-engine-mono#3883.
unauthenticated
)
)

-- | Translate raw engine settings ('SQLGenCtx', the function-permission
-- inference flag, and the set of experimental features) into the
-- 'SchemaOptions' used when building the schema.
buildSchemaOptions ::
(SQLGenCtx, Options.InferFunctionPermissions) ->
HashSet ExperimentalFeature ->
SchemaOptions
buildSchemaOptions
( SQLGenCtx stringifyNum dangerousBooleanCollapse optimizePermissionFilters bigqueryStringNumericInput,
functionPermsCtx
)
expFeatures =
SchemaOptions
{ soStringifyNumbers = stringifyNum,
soDangerousBooleanCollapse = dangerousBooleanCollapse,
soInferFunctionPermissions = functionPermsCtx,
soOptimizePermissionFilters = optimizePermissionFilters,
soIncludeUpdateManyFields =
if EFHideUpdateManyFields `Set.member` expFeatures
then Options.Don'tIncludeUpdateManyFields
else Options.IncludeUpdateManyFields,
soIncludeAggregationPredicates =
if EFHideAggregationPredicates `Set.member` expFeatures
then Options.Don'tIncludeAggregationPredicates
else Options.IncludeAggregationPredicates,
soIncludeStreamFields =
if EFHideStreamFields `Set.member` expFeatures
then Options.Don'tIncludeStreamFields
else Options.IncludeStreamFields,
soBigQueryStringNumericInput = bigqueryStringNumericInput
}
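
-- A minimal illustration of the flag translation above (hypothetical
-- bindings @sqlGen@ and @funcPerms@; option constructors as used elsewhere
-- in this module):
--
-- @
-- let opts = buildSchemaOptions (sqlGen, funcPerms) (Set.fromList [EFHideStreamFields])
-- -- soIncludeStreamFields opts     == Options.Don'tIncludeStreamFields
-- -- soIncludeUpdateManyFields opts == Options.IncludeUpdateManyFields
-- @
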
-- | Build the @QueryHasura@ context for a given role.
buildRoleContext ::
forall m.
(MonadError QErr m, MonadIO m) =>
(SQLGenCtx, Options.InferFunctionPermissions) ->
SourceCache ->
HashMap RemoteSchemaName (RemoteSchemaCtx, MetadataObject) ->
[ActionInfo] ->
AnnotatedCustomTypes ->
RoleName ->
Options.RemoteSchemaPermissions ->
Set.HashSet ExperimentalFeature ->
ApolloFederationStatus ->
m
( RoleContext GQLContext,
HashSet InconsistentMetadata,
G.SchemaIntrospection
)
buildRoleContext options sources remotes actions customTypes role remoteSchemaPermsCtx expFeatures apolloFederationStatus = do
let schemaOptions = buildSchemaOptions options expFeatures
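      -- Note: 'schemaContext' is defined recursively; the remote relationship
      -- field builder receives the very context being constructed, relying on
      -- laziness to tie the knot.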
schemaContext =
SchemaContext
HasuraSchema
( remoteRelationshipField
schemaContext
schemaOptions
sources
(fst <$> remotes)
remoteSchemaPermsCtx
IncludeRemoteSourceRelationship
)
role
runMemoizeT $ do
-- build all sources (`apolloFedTableParsers` contains all the parsers and
-- type names, which are eligible for the `_Entity` Union)
(sourcesQueryFields, sourcesMutationFrontendFields, sourcesMutationBackendFields, sourcesSubscriptionFields, apolloFedTableParsers) <-
fmap mconcat $ for (toList sources) \sourceInfo ->
AB.dispatchAnyBackend @BackendSchema sourceInfo $ buildSource schemaContext schemaOptions
-- build all remote schemas
-- we only keep the ones that don't result in a name conflict
(remoteSchemaFields, !remoteSchemaErrors) <-
runRemoteSchema schemaContext $
buildAndValidateRemoteSchemas remotes sourcesQueryFields sourcesMutationBackendFields role remoteSchemaPermsCtx
let remotesQueryFields = concatMap piQuery remoteSchemaFields
remotesMutationFields = concat $ mapMaybe piMutation remoteSchemaFields
remotesSubscriptionFields = concat $ mapMaybe piSubscription remoteSchemaFields
apolloQueryFields = apolloRootFields apolloFederationStatus apolloFedTableParsers
-- build all actions
-- we use the source context due to how async query relationships are implemented
(actionsQueryFields, actionsMutationFields, actionsSubscriptionFields) <-
runActionSchema schemaContext schemaOptions $
fmap mconcat $ for actions \action -> do
queryFields <- buildActionQueryFields customTypes action
mutationFields <- buildActionMutationFields customTypes action
subscriptionFields <- buildActionSubscriptionFields customTypes action
pure (queryFields, mutationFields, subscriptionFields)
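    -- Mutations are built twice: the "frontend" parser is what ordinary
    -- clients see, while the "backend" parser may additionally expose
    -- mutations guarded by backend-only permissions.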
mutationParserFrontend <-
buildMutationParser sourcesMutationFrontendFields remotesMutationFields actionsMutationFields
mutationParserBackend <-
buildMutationParser sourcesMutationBackendFields remotesMutationFields actionsMutationFields
subscriptionParser <-
buildSubscriptionParser sourcesSubscriptionFields remotesSubscriptionFields actionsSubscriptionFields
queryParserFrontend <-
buildQueryParser sourcesQueryFields apolloQueryFields remotesQueryFields actionsQueryFields mutationParserFrontend subscriptionParser
queryParserBackend <-
buildQueryParser sourcesQueryFields apolloQueryFields remotesQueryFields actionsQueryFields mutationParserBackend subscriptionParser
-- In order to catch errors early, we attempt to generate the data
-- required for introspection, which ends up doing a few correctness
-- checks in the GraphQL schema. Furthermore, we want to persist this
-- information in the case of the admin role.
!introspectionSchema <- do
result <-
throwOnConflictingDefinitions $
convertToSchemaIntrospection
<$> buildIntrospectionSchema
(P.parserType queryParserBackend)
(P.parserType <$> mutationParserBackend)
(P.parserType <$> subscriptionParser)
pure $
-- We don't need to persist the introspection schema for all the roles here.
-- TODO(nicuveo): we treat the admin role differently in this function,
-- which is a bit inelegant; we might want to refactor this function and
-- split it into several steps, so that we can make a separate function for
-- the admin role that reuses the common parts and avoid such tests.
if role == adminRoleName
then result
else G.SchemaIntrospection mempty
void . throwOnConflictingDefinitions $
buildIntrospectionSchema
(P.parserType queryParserFrontend)
(P.parserType <$> mutationParserFrontend)
(P.parserType <$> subscriptionParser)
    -- (since the caller runs this in parallel, be strict)
let !frontendContext =
GQLContext
(finalizeParser queryParserFrontend)
(finalizeParser <$> mutationParserFrontend)
(finalizeParser <$> subscriptionParser)
!backendContext =
GQLContext
(finalizeParser queryParserBackend)
(finalizeParser <$> mutationParserBackend)
(finalizeParser <$> subscriptionParser)
pure
( RoleContext frontendContext $ Just backendContext,
remoteSchemaErrors,
introspectionSchema
)
where
buildSource ::
forall b.
BackendSchema b =>
SchemaContext ->
SchemaOptions ->
SourceInfo b ->
MemoizeT
m
( [FieldParser P.Parse (NamespacedField (QueryRootField UnpreparedValue))], -- query fields
[FieldParser P.Parse (NamespacedField (MutationRootField UnpreparedValue))], -- mutation backend fields
[FieldParser P.Parse (NamespacedField (MutationRootField UnpreparedValue))], -- mutation frontend fields
[FieldParser P.Parse (NamespacedField (QueryRootField UnpreparedValue))], -- subscription fields
[(G.Name, Parser 'Output P.Parse (ApolloFederationParserFunction P.Parse))] -- apollo federation tables
)
buildSource schemaContext schemaOptions sourceInfo@(SourceInfo _ tables functions logicalModels _customReturnTypes _ _ sourceCustomization) =
runSourceSchema schemaContext schemaOptions sourceInfo do
let validFunctions = takeValidFunctions functions
validLogicalModels = takeValidLogicalModels logicalModels
validTables = takeValidTables tables
mkRootFieldName = _rscRootFields sourceCustomization
makeTypename = SC._rscTypeNames sourceCustomization
(uncustomizedQueryRootFields, uncustomizedSubscriptionRootFields, apolloFedTableParsers) <-
buildQueryAndSubscriptionFields mkRootFieldName sourceInfo validTables validFunctions validLogicalModels
(,,,,apolloFedTableParsers)
<$> customizeFields
sourceCustomization
(makeTypename <> MkTypename (<> Name.__query))
(pure uncustomizedQueryRootFields)
<*> customizeFields
sourceCustomization
(makeTypename <> MkTypename (<> Name.__mutation_frontend))
(buildMutationFields mkRootFieldName Frontend sourceInfo validTables validFunctions)
<*> customizeFields
sourceCustomization
(makeTypename <> MkTypename (<> Name.__mutation_backend))
(buildMutationFields mkRootFieldName Backend sourceInfo validTables validFunctions)
<*> customizeFields
sourceCustomization
(makeTypename <> MkTypename (<> Name.__subscription))
(pure uncustomizedSubscriptionRootFields)
buildRelayRoleContext ::
forall m.
(MonadError QErr m, MonadIO m) =>
(SQLGenCtx, Options.InferFunctionPermissions) ->
SourceCache ->
[ActionInfo] ->
AnnotatedCustomTypes ->
RoleName ->
Set.HashSet ExperimentalFeature ->
m (RoleContext GQLContext)
buildRelayRoleContext options sources actions customTypes role expFeatures = do
let schemaOptions = buildSchemaOptions options expFeatures
-- TODO: At the time of writing this, remote schema queries are not supported in relay.
-- When they are supported, we should do what `buildRoleContext` does. Since they
-- are not supported yet, we use `mempty` below for `RemoteSchemaMap`.
schemaContext =
SchemaContext
(RelaySchema $ nodeInterface sources)
-- Remote relationships aren't currently supported in Relay, due to type conflicts and
-- introspection issues such as https://github.com/hasura/graphql-engine/issues/5144.
ignoreRemoteRelationship
role
runMemoizeT do
-- build all sources, and the node root
(node, fieldsList) <- do
node <- fmap NotNamespaced <$> nodeField sources schemaContext schemaOptions
fieldsList <-
for (toList sources) \sourceInfo ->
AB.dispatchAnyBackend @BackendSchema sourceInfo (buildSource schemaContext schemaOptions)
pure (node, fieldsList)
let (queryFields, mutationFrontendFields, mutationBackendFields, subscriptionFields) = mconcat fieldsList
allQueryFields = node : queryFields
allSubscriptionFields = node : subscriptionFields
-- build all actions
-- we only build mutations in the relay schema
actionsMutationFields <-
runActionSchema schemaContext schemaOptions $
fmap concat $
traverse (buildActionMutationFields customTypes) actions
-- Remote schema mutations aren't exposed in relay because they frequently trigger
-- conflicting-definitions errors with the relay types such as `Node`, `PageInfo`, etc.
mutationParserFrontend <-
buildMutationParser mutationFrontendFields mempty actionsMutationFields
mutationParserBackend <-
buildMutationParser mutationBackendFields mempty actionsMutationFields
subscriptionParser <-
buildSubscriptionParser allSubscriptionFields [] []
queryParserFrontend <-
queryWithIntrospectionHelper allQueryFields mutationParserFrontend subscriptionParser
queryParserBackend <-
queryWithIntrospectionHelper allQueryFields mutationParserBackend subscriptionParser
-- In order to catch errors early, we attempt to generate the data
-- required for introspection, which ends up doing a few correctness
-- checks in the GraphQL schema.
void . throwOnConflictingDefinitions $
buildIntrospectionSchema
(P.parserType queryParserBackend)
(P.parserType <$> mutationParserBackend)
(P.parserType <$> subscriptionParser)
void . throwOnConflictingDefinitions $
buildIntrospectionSchema
(P.parserType queryParserFrontend)
(P.parserType <$> mutationParserFrontend)
(P.parserType <$> subscriptionParser)
let frontendContext =
GQLContext
(finalizeParser queryParserFrontend)
(finalizeParser <$> mutationParserFrontend)
(finalizeParser <$> subscriptionParser)
backendContext =
GQLContext
(finalizeParser queryParserBackend)
(finalizeParser <$> mutationParserBackend)
(finalizeParser <$> subscriptionParser)
pure $ RoleContext frontendContext $ Just backendContext
where
buildSource ::
forall b.
BackendSchema b =>
SchemaContext ->
SchemaOptions ->
SourceInfo b ->
MemoizeT
m
( [FieldParser P.Parse (NamespacedField (QueryRootField UnpreparedValue))],
[FieldParser P.Parse (NamespacedField (MutationRootField UnpreparedValue))],
[FieldParser P.Parse (NamespacedField (MutationRootField UnpreparedValue))],
[FieldParser P.Parse (NamespacedField (QueryRootField UnpreparedValue))]
)
buildSource schemaContext schemaOptions sourceInfo@(SourceInfo _ tables functions _logicalModels _customReturnTypes _ _ sourceCustomization) = do
runSourceSchema schemaContext schemaOptions sourceInfo do
let validFunctions = takeValidFunctions functions
validTables = takeValidTables tables
mkRootFieldName = _rscRootFields sourceCustomization
makeTypename = SC._rscTypeNames sourceCustomization
(uncustomizedQueryRootFields, uncustomizedSubscriptionRootFields) <-
buildRelayQueryAndSubscriptionFields mkRootFieldName sourceInfo validTables validFunctions
(,,,)
<$> customizeFields
sourceCustomization
(makeTypename <> MkTypename (<> Name.__query))
(pure uncustomizedQueryRootFields)
<*> customizeFields
sourceCustomization
(makeTypename <> MkTypename (<> Name.__mutation_frontend))
(buildMutationFields mkRootFieldName Frontend sourceInfo validTables validFunctions)
<*> customizeFields
sourceCustomization
(makeTypename <> MkTypename (<> Name.__mutation_backend))
(buildMutationFields mkRootFieldName Backend sourceInfo validTables validFunctions)
<*> customizeFields
sourceCustomization
(makeTypename <> MkTypename (<> Name.__subscription))
(pure uncustomizedSubscriptionRootFields)
-- | Builds the schema context for unauthenticated users.
--
-- This context is used whenever the user queries the engine with a role that is
-- unknown, and therefore not present in the context map. Before remote schema
-- permissions were introduced, remotes were considered to be a public entity,
-- and we therefore also allowed unknown roles to query the remotes. To
-- maintain backwards compatibility, we check whether remote schema permissions
-- are enabled; remote schemas will only be available to unauthenticated users
-- if permissions aren't enabled.
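--
-- Roughly, as an illustrative sketch (the authoritative logic is the case
-- split on the remote schema permissions setting in the body below):
--
-- > EnableRemoteSchemaPermissions  -> unauthenticated roles see no fields
-- > DisableRemoteSchemaPermissions -> unauthenticated roles see only the
-- >                                   remote schemas' fields, never sources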
unauthenticatedContext ::
forall m.
( MonadError QErr m,
MonadIO m
) =>
(SQLGenCtx, Options.InferFunctionPermissions) ->
SourceCache ->
HashMap RemoteSchemaName (RemoteSchemaCtx, MetadataObject) ->
Set.HashSet ExperimentalFeature ->
Options.RemoteSchemaPermissions ->
m (GQLContext, HashSet InconsistentMetadata)
unauthenticatedContext options sources allRemotes expFeatures remoteSchemaPermsCtx = do
let schemaOptions = buildSchemaOptions options expFeatures
fakeSchemaContext =
SchemaContext
HasuraSchema
( remoteRelationshipField
fakeSchemaContext
schemaOptions
sources
(fst <$> allRemotes)
remoteSchemaPermsCtx
-- A remote schema is available in an unauthenticated context when
-- remote schema permissions are disabled, but sources are, by default,
-- not accessible to the unauthenticated context. Therefore,
-- remote source relationship building is skipped.
ExcludeRemoteSourceRelationship
)
fakeRole
-- chosen arbitrarily to be as improbable as possible
fakeRole = mkRoleNameSafe [NT.nonEmptyTextQQ|MyNameIsOzymandiasKingOfKingsLookOnMyWorksYeMightyAndDespair|]
runMemoizeT do
(queryFields, mutationFields, subscriptionFields, remoteErrors) <- case remoteSchemaPermsCtx of
Options.EnableRemoteSchemaPermissions ->
-- Permissions are enabled, unauthenticated users have access to nothing.
pure ([], [], [], mempty)
Options.DisableRemoteSchemaPermissions -> do
-- Permissions are disabled, unauthenticated users have access to remote schemas.
(remoteFields, remoteSchemaErrors) <-
runRemoteSchema fakeSchemaContext $
buildAndValidateRemoteSchemas allRemotes [] [] fakeRole remoteSchemaPermsCtx
pure
( fmap (fmap RFRemote) <$> concatMap piQuery remoteFields,
fmap (fmap RFRemote) <$> concat (mapMaybe piMutation remoteFields),
fmap (fmap RFRemote) <$> concat (mapMaybe piSubscription remoteFields),
remoteSchemaErrors
)
mutationParser <-
whenMaybe (not $ null mutationFields) $
safeSelectionSet mutationRoot (Just $ G.Description "mutation root") mutationFields
<&> fmap (flattenNamespaces . fmap typenameToNamespacedRawRF)
subscriptionParser <-
whenMaybe (not $ null subscriptionFields) $
safeSelectionSet subscriptionRoot (Just $ G.Description "subscription root") subscriptionFields
<&> fmap (flattenNamespaces . fmap typenameToNamespacedRawRF)
queryParser <- queryWithIntrospectionHelper queryFields mutationParser Nothing
void . throwOnConflictingDefinitions $
buildIntrospectionSchema
(P.parserType queryParser)
(P.parserType <$> mutationParser)
(P.parserType <$> subscriptionParser)
pure (GQLContext (finalizeParser queryParser) (finalizeParser <$> mutationParser) (finalizeParser <$> subscriptionParser), remoteErrors)
-------------------------------------------------------------------------------
-- Building parser fields
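-- | Builds the parsers for the given remote schemas, validating each remote
-- against the sources' root fields and against the previously validated
-- remotes. Root field name conflicts are reported as inconsistent metadata,
-- accumulated in the `WriterT` layer, rather than thrown as fatal errors; a
-- conflicting remote is simply left out of the result.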
buildAndValidateRemoteSchemas ::
forall m.
( MonadError QErr m,
MonadIO m
) =>
HashMap RemoteSchemaName (RemoteSchemaCtx, MetadataObject) ->
[FieldParser P.Parse (NamespacedField (QueryRootField UnpreparedValue))] ->
[FieldParser P.Parse (NamespacedField (MutationRootField UnpreparedValue))] ->
RoleName ->
Options.RemoteSchemaPermissions ->
SchemaT
( SchemaContext,
MkTypename,
CustomizeRemoteFieldName
)
(MemoizeT m)
([RemoteSchemaParser P.Parse], HashSet InconsistentMetadata)
buildAndValidateRemoteSchemas remotes sourcesQueryFields sourcesMutationFields role remoteSchemaPermsCtx =
runWriterT $ foldlM step [] (Map.elems remotes)
where
getFieldName = P.getName . P.fDefinition
sourcesQueryFieldNames = getFieldName <$> sourcesQueryFields
sourcesMutationFieldNames = getFieldName <$> sourcesMutationFields
step validatedSchemas (remoteSchemaContext, metadataId) = do
let previousSchemasQueryFieldNames = map getFieldName $ concatMap piQuery validatedSchemas
previousSchemasMutationFieldNames = map getFieldName $ concat $ mapMaybe piMutation validatedSchemas
reportInconsistency reason = tell $ Set.singleton $ InconsistentObject reason Nothing metadataId
maybeParser <- lift $ buildRemoteSchemaParser remoteSchemaPermsCtx role remoteSchemaContext
case maybeParser of
Nothing -> pure validatedSchemas
Just remoteSchemaParser -> do
(_, inconsistencies) <- listen $ do
let newSchemaQueryFieldNames = map getFieldName $ piQuery remoteSchemaParser
newSchemaMutationFieldNames = foldMap (map getFieldName) $ piMutation remoteSchemaParser
-- First we check for conflicts in query_root:
-- - between this remote and the previous ones:
for_
(duplicates $ newSchemaQueryFieldNames <> previousSchemasQueryFieldNames)
\name -> reportInconsistency $ "Duplicate remote field " <> squote name
-- - between this remote and the sources:
for_ (duplicates $ newSchemaQueryFieldNames <> sourcesQueryFieldNames) $
\name -> reportInconsistency $ "Field cannot be overwritten by remote field " <> squote name
-- Ditto, but for mutations - i.e. with mutation_root:
unless (null newSchemaMutationFieldNames) do
-- - between this remote and the previous ones:
for_ (duplicates $ newSchemaMutationFieldNames <> previousSchemasMutationFieldNames) $
\name -> reportInconsistency $ "Duplicate remote field " <> squote name
-- - between this remote and the sources:
for_ (duplicates $ newSchemaMutationFieldNames <> sourcesMutationFieldNames) $
\name -> reportInconsistency $ "Field cannot be overwritten by remote field " <> squote name
-- No need to check for conflicts between subscription fields, since
-- remote subscriptions aren't supported yet.
-- Only add this new remote to the list if there was no error
pure $
if Set.null inconsistencies
then remoteSchemaParser : validatedSchemas
else validatedSchemas
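-- | Builds the parser for a single remote schema, provided that an
-- introspection result is available for the given role. Every resulting root
-- field is tagged with the remote schema's name as its origin.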
buildRemoteSchemaParser ::
forall m.
(MonadError QErr m, MonadIO m) =>
Options.RemoteSchemaPermissions ->
RoleName ->
RemoteSchemaCtx ->
SchemaT
( SchemaContext,
MkTypename,
CustomizeRemoteFieldName
)
(MemoizeT m)
(Maybe (RemoteSchemaParser P.Parse))
buildRemoteSchemaParser remoteSchemaPermsCtx roleName context = do
let maybeIntrospection = getIntrospectionResult remoteSchemaPermsCtx roleName context
for maybeIntrospection \introspection -> do
RemoteSchemaParser {..} <- buildRemoteParser introspection (_rscRemoteRelationships context) (_rscInfo context)
pure $ RemoteSchemaParser (setOrigin piQuery) (setOrigin <$> piMutation) (setOrigin <$> piSubscription)
where
setOrigin = fmap (P.setFieldParserOrigin (MORemoteSchema (_rscName context)))
-- | `buildQueryAndSubscriptionFields` builds the query and the subscription
-- fields of the tables tracked in the source. The query root fields and
-- the subscription root fields may differ, because a root field may be
-- enabled in `query_root_fields` but not in `subscription_root_fields` (or
-- vice versa), so we return them as two separate lists of field parsers:
-- one for the query fields and one for the subscription fields.
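--
-- For example, with hypothetical select-permission metadata (for illustration
-- only):
--
-- > query_root_fields: ["select", "select_by_pk"]
-- > subscription_root_fields: ["select"]
--
-- the table's `_by_pk` field would appear under `query_root` but not under
-- `subscription_root`.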
buildQueryAndSubscriptionFields ::
forall b r m n.
MonadBuildSchema b r m n =>
MkRootFieldName ->
SourceInfo b ->
TableCache b ->
FunctionCache b ->
LogicalModelCache b ->
SchemaT
r
m
( [P.FieldParser n (QueryRootField UnpreparedValue)],
[P.FieldParser n (SubscriptionRootField UnpreparedValue)],
[(G.Name, Parser 'Output n (ApolloFederationParserFunction n))]
)
buildQueryAndSubscriptionFields mkRootFieldName sourceInfo tables (takeExposedAs FEAQuery -> functions) logicalModels = do
roleName <- retrieve scRole
functionPermsCtx <- retrieve Options.soInferFunctionPermissions
functionSelectExpParsers <-
concat . catMaybes
<$> for (Map.toList functions) \(functionName, functionInfo) -> runMaybeT $ do
guard $
roleName == adminRoleName
|| roleName `Map.member` _fiPermissions functionInfo
|| functionPermsCtx == Options.InferFunctionPermissions
let targetTableName = _fiReturnType functionInfo
lift $ mkRFs $ buildFunctionQueryFields mkRootFieldName functionName functionInfo targetTableName
logicalModelRootFields <-
buildLogicalModelFields sourceInfo logicalModels
(tableQueryFields, tableSubscriptionFields, apolloFedTableParsers) <-
unzip3 . catMaybes
<$> for (Map.toList tables) \(tableName, tableInfo) -> runMaybeT $ do
tableIdentifierName <- getTableIdentifierName @b tableInfo
lift $ buildTableQueryAndSubscriptionFields mkRootFieldName tableName tableInfo tableIdentifierName
let tableQueryRootFields = fmap mkRF $ concat tableQueryFields
tableSubscriptionRootFields = fmap mkRF $ concat tableSubscriptionFields
pure
( tableQueryRootFields <> functionSelectExpParsers <> logicalModelRootFields,
tableSubscriptionRootFields <> functionSelectExpParsers <> logicalModelRootFields,
catMaybes apolloFedTableParsers
)
where
mkRFs = mkRootFields sourceName sourceConfig queryTagsConfig QDBR
mkRF = mkRootField sourceName sourceConfig queryTagsConfig QDBR
sourceName = _siName sourceInfo
sourceConfig = _siConfiguration sourceInfo
queryTagsConfig = _siQueryTagsConfig sourceInfo
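-- | Runs a `MaybeT` computation, replacing a `Nothing` result with `mempty`;
-- e.g. `runMaybeTmempty (guard cond *> pure xs)` yields `xs` when `cond`
-- holds and `mempty` otherwise.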
runMaybeTmempty :: (Monad m, Monoid a) => MaybeT m a -> m a
runMaybeTmempty = (`onNothingM` (pure mempty)) . runMaybeT
buildLogicalModelFields ::
forall b r m n.
MonadBuildSchema b r m n =>
SourceInfo b ->
LogicalModelCache b ->
SchemaT r m [P.FieldParser n (QueryRootField UnpreparedValue)]
buildLogicalModelFields sourceInfo logicalModels = runMaybeTmempty $ do
roleName <- retrieve scRole
map mkRF . catMaybes <$> for (Map.elems logicalModels) \logicalModel -> do
-- Only include this logical model in the schema if the current role
-- is admin, or if there is a select permission for this role. (This is
-- the broad-strokes check; later, we'll filter more granularly on
-- columns and then rows.)
guard $
roleName == adminRoleName
|| roleName `Map.member` _crtiPermissions (_lmiReturns logicalModel)
lift (buildLogicalModelRootFields logicalModel)
where
mkRF ::
FieldParser n (QueryDB b (RemoteRelationshipField UnpreparedValue) (UnpreparedValue b)) ->
FieldParser n (QueryRootField UnpreparedValue)
mkRF = mkRootField sourceName sourceConfig queryTagsConfig QDBR
sourceName = _siName sourceInfo
sourceConfig = _siConfiguration sourceInfo
queryTagsConfig = _siQueryTagsConfig sourceInfo
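-- | Relay counterpart of `buildQueryAndSubscriptionFields`: builds the
-- connection-style query and subscription fields of the source. Only tables
-- (and functions returning tables) that have a primary key are included,
-- since Relay node ids are derived from primary key values.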
buildRelayQueryAndSubscriptionFields ::
forall b r m n.
MonadBuildSchema b r m n =>
MkRootFieldName ->
SourceInfo b ->
TableCache b ->
FunctionCache b ->
SchemaT r m ([P.FieldParser n (QueryRootField UnpreparedValue)], [P.FieldParser n (SubscriptionRootField UnpreparedValue)])
buildRelayQueryAndSubscriptionFields mkRootFieldName sourceInfo tables (takeExposedAs FEAQuery -> functions) = do
roleName <- retrieve scRole
(tableConnectionQueryFields, tableConnectionSubscriptionFields) <-
unzip . catMaybes
<$> for (Map.toList tables) \(tableName, tableInfo) -> runMaybeT do
tableIdentifierName <- getTableIdentifierName @b tableInfo
SelPermInfo {..} <- hoistMaybe $ tableSelectPermissions roleName tableInfo
pkeyColumns <- hoistMaybe $ tableInfo ^? tiCoreInfo . tciPrimaryKey . _Just . pkColumns
relayRootFields <- lift $ mkRFs $ buildTableRelayQueryFields mkRootFieldName tableName tableInfo tableIdentifierName pkeyColumns
let includeRelayWhen True = Just relayRootFields
includeRelayWhen False = Nothing
pure
( includeRelayWhen (isRootFieldAllowed QRFTSelect spiAllowedQueryRootFields),
includeRelayWhen (isRootFieldAllowed SRFTSelect spiAllowedSubscriptionRootFields)
)
functionConnectionFields <- for (Map.toList functions) $ \(functionName, functionInfo) -> runMaybeT do
let returnTableName = _fiReturnType functionInfo
-- FIXME: only extract the TableInfo once to avoid redundant cache lookups
returnTableInfo <- lift $ askTableInfo returnTableName
pkeyColumns <- hoistMaybe $ returnTableInfo ^? tiCoreInfo . tciPrimaryKey . _Just . pkColumns
lift $ mkRFs $ buildFunctionRelayQueryFields mkRootFieldName functionName functionInfo returnTableName pkeyColumns
pure $
( concat $ catMaybes $ tableConnectionQueryFields <> functionConnectionFields,
concat $ catMaybes $ tableConnectionSubscriptionFields <> functionConnectionFields
)
where
mkRFs = mkRootFields sourceName sourceConfig queryTagsConfig QDBR
sourceName = _siName sourceInfo
sourceConfig = _siConfiguration sourceInfo
queryTagsConfig = _siQueryTagsConfig sourceInfo
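-- | Builds the mutation root fields of the source: the insert, update, and
-- delete fields of each table, followed by the fields of functions exposed
-- as mutations (for roles that hold the required function permission).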
buildMutationFields ::
forall b r m n.
MonadBuildSchema b r m n =>
MkRootFieldName ->
Scenario ->
SourceInfo b ->
TableCache b ->
FunctionCache b ->
SchemaT r m [P.FieldParser n (MutationRootField UnpreparedValue)]
buildMutationFields mkRootFieldName scenario sourceInfo tables (takeExposedAs FEAMutation -> functions) = do
roleName <- retrieve scRole
tableMutations <- for (Map.toList tables) \(tableName, tableInfo) -> do
tableIdentifierName <- getTableIdentifierName @b tableInfo
inserts <-
mkRFs (MDBR . MDBInsert) $ buildTableInsertMutationFields mkRootFieldName scenario tableName tableInfo tableIdentifierName
updates <-
mkRFs (MDBR . MDBUpdate) $ buildTableUpdateMutationFields scenario tableInfo tableIdentifierName
deletes <-
mkRFs (MDBR . MDBDelete) $ buildTableDeleteMutationFields mkRootFieldName scenario tableName tableInfo tableIdentifierName
pure $ concat [inserts, updates, deletes]
functionMutations <- for (Map.toList functions) \(functionName, functionInfo) -> runMaybeT $ do
let targetTableName = _fiReturnType functionInfo
-- A function exposed as a mutation must have a function permission
-- configured for the role. See Note [Function Permissions]
guard $
-- when function permissions are inferred, we don't expose the
-- mutation functions for non-admin roles. See Note [Function Permissions]
roleName == adminRoleName || roleName `Map.member` _fiPermissions functionInfo
lift $ mkRFs MDBR $ buildFunctionMutationFields mkRootFieldName functionName functionInfo targetTableName
pure $ concat $ tableMutations <> catMaybes functionMutations
where
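-- Wrap backend-specific field parsers into 'RootField's, attaching this
-- source's name, configuration, and query-tags settings.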
mkRFs :: forall a db remote action raw. (a -> db b) -> SchemaT r m [FieldParser n a] -> SchemaT r m [FieldParser n (RootField db remote action raw)]
mkRFs = mkRootFields sourceName sourceConfig queryTagsConfig
sourceName = _siName sourceInfo
sourceConfig = _siConfiguration sourceInfo
queryTagsConfig = _siQueryTagsConfig sourceInfo
----------------------------------------------------------------
-- Building root parser from fields
-- | Prepare the parser for query-type GraphQL requests, but with introspection
-- for queries, mutations and subscriptions built in.
buildQueryParser ::
forall n m.
(MonadMemoize m, MonadError QErr m, MonadParse n) =>
[P.FieldParser n (NamespacedField (QueryRootField UnpreparedValue))] ->
[P.FieldParser n (G.SchemaIntrospection -> QueryRootField UnpreparedValue)] ->
[P.FieldParser n (NamespacedField (RemoteSchemaRootField (RemoteRelationshipField UnpreparedValue) RemoteSchemaVariable))] ->
[P.FieldParser n (QueryRootField UnpreparedValue)] ->
Maybe (Parser 'Output n (RootFieldMap (MutationRootField UnpreparedValue))) ->
Maybe (Parser 'Output n (RootFieldMap (QueryRootField UnpreparedValue))) ->
m (Parser 'Output n (RootFieldMap (QueryRootField UnpreparedValue)))
buildQueryParser sourceQueryFields apolloFederationFields remoteQueryFields actionQueryFields mutationParser subscriptionParser = do
-- This method is aware of our rudimentary support for Apollo federation.
-- Apollo federation adds two fields, `_service` and `_entities`. The
-- `_service` field parser is a selection set that contains an `sdl` field.
-- The `sdl` field exposes a _serialized_ introspection of the schema. So in
-- that sense it is similar to the `__type` and `__schema` introspection
-- fields. However, a few things must be excluded from this introspection
-- data, notably the Apollo federation fields `_service` and `_entities`
-- themselves. So in this method we build a version of the introspection for
-- Apollo federation purposes.
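-- For instance, a federation gateway typically issues
--
-- > query { _service { sdl } }
--
-- and the SDL string returned must not itself mention `_service` or
-- `_entities`.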
let partialApolloQueryFP = sourceQueryFields <> fmap (fmap NotNamespaced) actionQueryFields <> fmap (fmap $ fmap RFRemote) remoteQueryFields
basicQueryPForApollo <- queryRootFromFields partialApolloQueryFP
let buildApolloIntrospection buildQRF = do
partialSchema <-
parseBuildIntrospectionSchema
(P.parserType basicQueryPForApollo)
(P.parserType <$> mutationParser)
(P.parserType <$> subscriptionParser)
pure $ NotNamespaced $ buildQRF $ convertToSchemaIntrospection partialSchema
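-- Once an Apollo federation field is selected, bind its result to the
-- ad-hoc introspection built above.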
apolloFederationFieldsWithIntrospection :: [P.FieldParser n (NamespacedField (QueryRootField UnpreparedValue))]
apolloFederationFieldsWithIntrospection = apolloFederationFields <&> (`P.bindField` buildApolloIntrospection)
allQueryFields = partialApolloQueryFP <> apolloFederationFieldsWithIntrospection
queryWithIntrospectionHelper allQueryFields mutationParser subscriptionParser
-- | Builds a @Schema@ at query parsing time
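--
-- Any conflicting type definitions detected while assembling the 'Schema'
-- surface as a parse-time 'P.ConflictingDefinitionsError' instead of a
-- startup failure.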
parseBuildIntrospectionSchema ::
MonadParse m =>
P.Type 'Output ->
Maybe (P.Type 'Output) ->
Maybe (P.Type 'Output) ->
m Schema
parseBuildIntrospectionSchema q m s =
buildIntrospectionSchema q m s `onLeft` (P.parseErrorWith P.ConflictingDefinitionsError . toErrorValue)
queryWithIntrospectionHelper ::
forall n m.
(MonadMemoize m, MonadParse n, MonadError QErr m) =>
[P.FieldParser n (NamespacedField (QueryRootField UnpreparedValue))] ->
Maybe (Parser 'Output n (RootFieldMap (MutationRootField UnpreparedValue))) ->
Maybe (Parser 'Output n (RootFieldMap (QueryRootField UnpreparedValue))) ->
m (Parser 'Output n (RootFieldMap (QueryRootField UnpreparedValue)))
queryWithIntrospectionHelper basicQueryFP mutationP subscriptionP = do
let -- Per the GraphQL spec:
-- * "The query root operation type must be provided and must be an Object type." (§3.2.1)
-- * "An Object type must define one or more fields." (§3.6, type validation)
-- Those two requirements cannot both be met when a service is mutations-only and does not
-- provide any query. In such a case, to meet both of those, we introduce a placeholder query
-- in the schema.
placeholderText = "There are no queries available to the current role. Either there are no sources or remote schemas configured, or the current role doesn't have the required permissions."
placeholderField = NotNamespaced (RFRaw $ JO.String placeholderText) <$ P.selection_ Name._no_queries_available (Just $ G.Description placeholderText) P.string
fixedQueryFP = if null basicQueryFP then [placeholderField] else basicQueryFP
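-- With the placeholder in place, the query root then looks roughly like:
--
-- > type query_root {
-- >   no_queries_available: String!
-- > }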
basicQueryP <- queryRootFromFields fixedQueryFP
let buildIntrospectionResponse printResponseFromSchema =
NotNamespaced . RFRaw . printResponseFromSchema
<$> parseBuildIntrospectionSchema
(P.parserType basicQueryP)
(P.parserType <$> mutationP)
(P.parserType <$> subscriptionP)
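-- The __schema and __type introspection fields; parsing either of them
-- yields a function that consumes the 'Schema' built ad-hoc above.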
introspection = [schema, typeIntrospection] <&> (`P.bindField` buildIntrospectionResponse)
{-# INLINE introspection #-}
partialQueryFields = fixedQueryFP ++ introspection
safeSelectionSet queryRoot Nothing partialQueryFields <&> fmap (flattenNamespaces . fmap typenameToNamespacedRawRF)
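-- | Assemble the @query_root@ parser from the given field parsers alone,
-- without adding any introspection fields.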
queryRootFromFields ::
forall n m.
(MonadError QErr m, MonadParse n) =>
[P.FieldParser n (NamespacedField (QueryRootField UnpreparedValue))] ->
m (Parser 'Output n (RootFieldMap (QueryRootField UnpreparedValue)))
queryRootFromFields fps =
safeSelectionSet queryRoot Nothing fps <&> fmap (flattenNamespaces . fmap typenameToNamespacedRawRF)
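
-- Note: unlike the mutation and subscription roots below, the query root is
-- mandatory: a valid GraphQL schema must always expose a query type, which is
-- why this returns a 'Parser' directly rather than a 'Maybe'.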
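
-- | Prepare the parser for mutations. Mutation fields from sources, remote
-- schemas, and actions are merged into a single selection set; if no such
-- fields exist, this returns 'Nothing' and no mutation root is exposed.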
buildMutationParser ::
forall n m.
(MonadMemoize m, MonadError QErr m, MonadParse n) =>
[P.FieldParser n (NamespacedField (MutationRootField UnpreparedValue))] ->
[P.FieldParser n (NamespacedField (RemoteSchemaRootField (RemoteRelationshipField UnpreparedValue) RemoteSchemaVariable))] ->
[P.FieldParser n (MutationRootField UnpreparedValue)] ->
m (Maybe (Parser 'Output n (RootFieldMap (MutationRootField UnpreparedValue))))
buildMutationParser mutationFields remoteFields actionFields = do
let mutationFieldsParser =
mutationFields
<> (fmap (fmap RFRemote) <$> remoteFields)
<> (fmap NotNamespaced <$> actionFields)
whenMaybe (not $ null mutationFieldsParser) $
safeSelectionSet mutationRoot (Just $ G.Description "mutation root") mutationFieldsParser
<&> fmap (flattenNamespaces . fmap typenameToNamespacedRawRF)
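
-- A minimal usage sketch (hypothetical caller; field-list names are
-- illustrative):
--
-- > mMutationParser <- buildMutationParser sourceFields remoteFields actionFields
-- > case mMutationParser of
-- >   Nothing     -> ... -- no mutation root in the schema at all
-- >   Just parser -> ... -- parser turns a mutation selection set
-- >                      -- into a 'RootFieldMap'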

-- | Prepare the parser for subscriptions. Every Postgres query field is
-- exposed as a subscription, along with fields to get the status of
-- asynchronous actions.
buildSubscriptionParser ::
forall n m.
(MonadMemoize m, MonadError QErr m, MonadParse n) =>
[P.FieldParser n (NamespacedField (QueryRootField UnpreparedValue))] ->
[P.FieldParser n (NamespacedField (RemoteSchemaRootField (RemoteRelationshipField UnpreparedValue) RemoteSchemaVariable))] ->
[P.FieldParser n (QueryRootField UnpreparedValue)] ->
m (Maybe (Parser 'Output n (RootFieldMap (QueryRootField UnpreparedValue))))
buildSubscriptionParser sourceSubscriptionFields remoteSubscriptionFields actionFields = do
let subscriptionFields =
sourceSubscriptionFields
          <> (fmap (fmap RFRemote) <$> remoteSubscriptionFields)
<> (fmap NotNamespaced <$> actionFields)
whenMaybe (not $ null subscriptionFields) $
safeSelectionSet subscriptionRoot Nothing subscriptionFields
<&> fmap (flattenNamespaces . fmap typenameToNamespacedRawRF)
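
-- Note that subscriptions reuse 'QueryRootField': a subscription root field is
-- structurally identical to a query root field; the difference lies in the
-- execution layer (live queries re-run it and stream results), not in the
-- schema itself.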

-------------------------------------------------------------------------------
-- Local helpers

-- | Calls 'P.safeSelectionSet' and rethrows any error as a 'QErr'.
safeSelectionSet ::
forall n m a.
(QErrM n, MonadParse m) =>
G.Name ->
Maybe G.Description ->
[FieldParser m a] ->
n (Parser 'Output m (OMap.InsOrdHashMap G.Name (P.ParsedSelection a)))
safeSelectionSet name description fields =
P.safeSelectionSet name description fields `onLeft` (throw500 . fromErrorMessage)
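
-- For example, if two parsers in the same root claim the same field name
-- (say, a remote schema root field colliding with a table's field),
-- 'P.safeSelectionSet' yields a 'Left' instead of silently building an
-- ambiguous selection set, and 'throw500' surfaces that conflict as a 'QErr'.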

-- | Apply a source's customization options to a list of its fields.
customizeFields ::
forall f n db remote action.
(Functor f, MonadParse n) =>
ResolvedSourceCustomization ->
MkTypename ->
f [FieldParser n (RootField db remote action JO.Value)] ->
f [FieldParser n (NamespacedField (RootField db remote action JO.Value))]
customizeFields ResolvedSourceCustomization {..} =
fmap . customizeNamespace _rscRootNamespace (const typenameToRawRF)
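-- The point-free definition above is equivalent to (a sketch, with arguments
-- named explicitly):
--
-- > customizeFields ResolvedSourceCustomization {..} mkTypename fields =
-- >   fmap (customizeNamespace _rscRootNamespace (const typenameToRawRF) mkTypename) fields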
-- | All the 'BackendSchema' methods produce something of the form @m
-- [FieldParser n a]@, where @a@ is something specific to what is being parsed
-- by the given method.
--
-- In order to build the complete schema, these must be homogenised and
-- annotated with query-tag data, which this function makes easy.
--
-- This function converts a single field parser; 'mkRootFields' transforms a
-- list of field parsers.
mkRootField ::
forall b n a db remote action raw.
(HasTag b, Functor n) =>
SourceName ->
SourceConfig b ->
Maybe QueryTagsConfig ->
(a -> db b) ->
FieldParser n a ->
FieldParser n (RootField db remote action raw)
mkRootField sourceName sourceConfig queryTagsConfig inj =
fmap
( RFDB sourceName
. AB.mkAnyBackend @b
. SourceConfigWith sourceConfig queryTagsConfig
. inj
)
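-- The parser's result is wrapped in 'RFDB', tagging it with its source and
-- hiding the backend-specific payload behind 'AB.mkAnyBackend', so that root
-- fields from different backends can live in the same 'RootFieldMap'.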
-- | 'mkRootFields' is 'mkRootField' applied to a list of 'FieldParser's.
mkRootFields ::
forall b m n a db remote action raw.
(HasTag b, Functor m, Functor n) =>
SourceName ->
SourceConfig b ->
Maybe QueryTagsConfig ->
(a -> db b) ->
m [FieldParser n a] ->
m [FieldParser n (RootField db remote action raw)]
mkRootFields sourceName sourceConfig queryTagsConfig inj =
fmap
( map
(mkRootField sourceName sourceConfig queryTagsConfig inj)
)
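-- A typical call site (a sketch; @buildQueryFields@ stands in for a
-- 'BackendSchema' method, and @QDBR@ for the query-root injection used when
-- building a source's schema):
--
-- > mkRootFields @b sourceName sourceConfig queryTagsConfig QDBR buildQueryFields

-- | Filter a 'FunctionCache' down to the functions exposed at the given
-- location, e.g. @takeExposedAs FEAQuery@ to keep only the functions exposed
-- as queries.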
takeExposedAs :: FunctionExposedAs -> FunctionCache b -> FunctionCache b
takeExposedAs x = Map.filter ((== x) . _fiExposedAs)
subscriptionRoot :: G.Name
subscriptionRoot = Name._subscription_root
mutationRoot :: G.Name
mutationRoot = Name._mutation_root
queryRoot :: G.Name
queryRoot = Name._query_root
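-- | Turn a 'Parser' into a 'ParserFn' by running it and converting any parse
-- failure into a 'QErr'.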
finalizeParser :: Parser 'Output P.Parse a -> ParserFn a
finalizeParser parser = P.toQErr . P.runParse . P.runParser parser
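-- | Rethrow a 'P.ConflictingDefinitions' error as an internal 'QErr'.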
throwOnConflictingDefinitions :: QErrM m => Either P.ConflictingDefinitions a -> m a
throwOnConflictingDefinitions = either (throw500 . fromErrorMessage . toErrorValue) pure
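-- | Turn a @__typename@ selection into a raw JSON string field, outside of
-- any namespace; other fields are left untouched.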
typenameToNamespacedRawRF ::
P.ParsedSelection (NamespacedField (RootField db remote action JO.Value)) ->
NamespacedField (RootField db remote action JO.Value)
typenameToNamespacedRawRF = P.handleTypename $ NotNamespaced . RFRaw . JO.String . toTxt
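-- | Turn a @__typename@ selection into a raw JSON string field; other fields
-- are left untouched.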
typenameToRawRF ::
P.ParsedSelection (RootField db remote action JO.Value) ->
RootField db remote action JO.Value
typenameToRawRF = P.handleTypename $ RFRaw . JO.String . toTxt