graphql-engine/server/src-lib/Hasura/Backends/Postgres/Instances/Schema.hs

{-# LANGUAGE ApplicativeDo #-}
{-# LANGUAGE QuasiQuotes #-}
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE UndecidableInstances #-}
{-# OPTIONS_GHC -fno-warn-orphans #-}

-- | Postgres Instances Schema
--
-- Defines a 'Hasura.GraphQL.Schema.Backend.BackendSchema' type class instance for Postgres.
module Hasura.Backends.Postgres.Instances.Schema
(
)
where
import Data.Aeson qualified as J
import Data.Aeson.Key qualified as K
import Data.Aeson.Types (JSONPathElement (..))
import Data.Has
import Data.HashMap.Strict qualified as Map
import Data.HashMap.Strict.Extended qualified as M
import Data.List.NonEmpty qualified as NE
import Data.Parser.JSONPath
import Data.Text.Casing qualified as C
import Data.Text.Extended
import Hasura.Backends.Postgres.SQL.DML as Postgres hiding (CountType, incOp)
import Hasura.Backends.Postgres.SQL.Types as Postgres hiding (FunctionName, TableName)
import Hasura.Backends.Postgres.SQL.Value as Postgres
import Hasura.Backends.Postgres.Schema.OnConflict
import Hasura.Backends.Postgres.Schema.Select
import Hasura.Backends.Postgres.Types.BoolExp
import Hasura.Backends.Postgres.Types.Column
import Hasura.Backends.Postgres.Types.Insert as PGIR
import Hasura.Backends.Postgres.Types.Update as PGIR
import Hasura.Base.Error
import Hasura.Base.ErrorMessage (toErrorMessage)
import Hasura.Base.ToErrorValue
import Hasura.GraphQL.ApolloFederation (ApolloFederationParserFunction)
import Hasura.GraphQL.Schema.Backend
( BackendSchema,
BackendTableSelectSchema,
BackendUpdateOperatorsSchema,
ComparisonExp,
MonadBuildSchema,
)
import Hasura.GraphQL.Schema.Backend qualified as BS
import Hasura.GraphQL.Schema.BoolExp
import Hasura.GraphQL.Schema.BoolExp.AggregationPredicates as Agg
import Hasura.GraphQL.Schema.Build qualified as GSB
import Hasura.GraphQL.Schema.Common
import Hasura.GraphQL.Schema.Mutation qualified as GSB
import Hasura.GraphQL.Schema.NamingCase
import Hasura.GraphQL.Schema.Options qualified as Options
import Hasura.GraphQL.Schema.Parser
( Definition,
FieldParser,
InputFieldsParser,
Kind (..),
MonadParse,
Parser,
memoize,
memoizeOn,
type (<:),
)
import Hasura.GraphQL.Schema.Parser qualified as P
import Hasura.GraphQL.Schema.Select
import Hasura.GraphQL.Schema.Update qualified as SU
import Hasura.GraphQL.Schema.Update.Batch qualified as SUB
import Hasura.LogicalModel.Schema qualified as LogicalModels
import Hasura.Name qualified as Name
import Hasura.Prelude
import Hasura.RQL.IR.BoolExp
import Hasura.RQL.IR.Root (RemoteRelationshipField)
import Hasura.RQL.IR.Root qualified as IR
import Hasura.RQL.IR.Select
( QueryDB (QDBConnection),
)
import Hasura.RQL.IR.Select qualified as IR
import Hasura.RQL.IR.Update qualified as IR
import Hasura.RQL.IR.Value qualified as IR
import Hasura.RQL.Types.Backend (Backend (..))
import Hasura.RQL.Types.Column
import Hasura.RQL.Types.Function (FunctionInfo)
import Hasura.RQL.Types.Source
import Hasura.RQL.Types.SourceCustomization
import Hasura.RQL.Types.Table (TableInfo (..), UpdPermInfo (..))
import Hasura.SQL.Backend (BackendType (Postgres), PostgresKind (Citus, Cockroach, Vanilla))
import Hasura.SQL.Types
import Language.GraphQL.Draft.Syntax qualified as G
import Language.GraphQL.Draft.Syntax.QQ qualified as G

----------------------------------------------------------------
-- BackendSchema instance

-- | This class is an implementation detail of 'BackendSchema'.
-- Some functions of 'BackendSchema' differ across different Postgres "kinds",
-- or call functions (such as those related to Relay) that have not been
-- generalized to all kinds of Postgres and still explicitly work on Vanilla
-- Postgres. This class allows each "kind" to specify its own specific
-- implementation. All common code is directly part of `BackendSchema`.
--
-- Note: Users shouldn't ever put this as a constraint. Use `BackendSchema
-- ('Postgres pgKind)` instead.
class PostgresSchema (pgKind :: PostgresKind) where
pgkBuildTableRelayQueryFields ::
forall r m n.
MonadBuildSchema ('Postgres pgKind) r m n =>
MkRootFieldName ->
TableName ('Postgres pgKind) ->
TableInfo ('Postgres pgKind) ->
C.GQLNameIdentifier ->
NESeq (ColumnInfo ('Postgres pgKind)) ->
SchemaT r m [FieldParser n (QueryDB ('Postgres pgKind) (RemoteRelationshipField IR.UnpreparedValue) (IR.UnpreparedValue ('Postgres pgKind)))]
pgkBuildFunctionRelayQueryFields ::
forall r m n.
MonadBuildSchema ('Postgres pgKind) r m n =>
MkRootFieldName ->
FunctionName ('Postgres pgKind) ->
FunctionInfo ('Postgres pgKind) ->
TableName ('Postgres pgKind) ->
NESeq (ColumnInfo ('Postgres pgKind)) ->
SchemaT r m [FieldParser n (QueryDB ('Postgres pgKind) (RemoteRelationshipField IR.UnpreparedValue) (IR.UnpreparedValue ('Postgres pgKind)))]
pgkRelayExtension ::
Maybe (XRelay ('Postgres pgKind))
pgkBuildTableQueryAndSubscriptionFields ::
forall r m n.
( MonadBuildSchema ('Postgres pgKind) r m n,
AggregationPredicatesSchema ('Postgres pgKind),
BackendTableSelectSchema ('Postgres pgKind)
) =>
MkRootFieldName ->
TableName ('Postgres pgKind) ->
TableInfo ('Postgres pgKind) ->
C.GQLNameIdentifier ->
SchemaT
r
m
( [FieldParser n (QueryDB ('Postgres pgKind) (RemoteRelationshipField IR.UnpreparedValue) (IR.UnpreparedValue ('Postgres pgKind)))],
[FieldParser n (QueryDB ('Postgres pgKind) (RemoteRelationshipField IR.UnpreparedValue) (IR.UnpreparedValue ('Postgres pgKind)))],
Maybe (G.Name, Parser 'Output n (ApolloFederationParserFunction n))
)
pgkBuildTableStreamingSubscriptionFields ::
forall r m n.
( MonadBuildSchema ('Postgres pgKind) r m n,
AggregationPredicatesSchema ('Postgres pgKind),
BackendTableSelectSchema ('Postgres pgKind)
) =>
MkRootFieldName ->
TableName ('Postgres pgKind) ->
TableInfo ('Postgres pgKind) ->
C.GQLNameIdentifier ->
SchemaT r m [FieldParser n (QueryDB ('Postgres pgKind) (RemoteRelationshipField IR.UnpreparedValue) (IR.UnpreparedValue ('Postgres pgKind)))]

instance PostgresSchema 'Vanilla where
pgkBuildTableRelayQueryFields = buildTableRelayQueryFields
pgkBuildFunctionRelayQueryFields = buildFunctionRelayQueryFields
pgkRelayExtension = Just ()
pgkBuildTableQueryAndSubscriptionFields = GSB.buildTableQueryAndSubscriptionFields
pgkBuildTableStreamingSubscriptionFields = GSB.buildTableStreamingSubscriptionFields
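
-- Relay is only implemented for vanilla Postgres: the Citus and Cockroach
-- instances below report no Relay extension and build no connection fields.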
instance PostgresSchema 'Citus where
pgkBuildTableRelayQueryFields _ _ _ _ _ = pure []
pgkBuildFunctionRelayQueryFields _ _ _ _ _ = pure []
pgkRelayExtension = Nothing
pgkBuildTableQueryAndSubscriptionFields = GSB.buildTableQueryAndSubscriptionFields
pgkBuildTableStreamingSubscriptionFields = GSB.buildTableStreamingSubscriptionFields

instance PostgresSchema 'Cockroach where
pgkBuildTableRelayQueryFields _ _ _ _ _ = pure []
pgkBuildFunctionRelayQueryFields _ _ _ _ _ = pure []
pgkRelayExtension = Nothing
pgkBuildTableQueryAndSubscriptionFields = GSB.buildTableQueryAndSubscriptionFields
pgkBuildTableStreamingSubscriptionFields = GSB.buildTableStreamingSubscriptionFields

-- postgres schema
instance (BackendSchema ('Postgres pgKind)) => AggregationPredicatesSchema ('Postgres pgKind) where
aggregationPredicatesParser = Agg.defaultAggregationPredicatesParser aggregationFunctions

-- | The aggregation functions that the Postgres variants support, used above
-- to build aggregation predicates.
aggregationFunctions :: [Agg.FunctionSignature ('Postgres pgKind)]
aggregationFunctions =
[ Agg.FunctionSignature
{ fnName = "avg",
fnGQLName = [G.name|avg|],
fnReturnType = PGDouble,
fnArguments = Agg.SingleArgument PGDouble
},
Agg.FunctionSignature
{ fnName = "bool_and",
fnGQLName = [G.name|bool_and|],
fnReturnType = PGBoolean,
fnArguments = Agg.SingleArgument PGBoolean
},
Agg.FunctionSignature
{ fnName = "bool_or",
fnGQLName = [G.name|bool_or|],
fnReturnType = PGBoolean,
fnArguments = Agg.SingleArgument PGBoolean
},
Agg.FunctionSignature
{ fnName = "count",
fnGQLName = [G.name|count|],
fnReturnType = PGInteger,
fnArguments = Agg.ArgumentsStar
},
Agg.FunctionSignature
{ fnName = "max",
fnGQLName = [G.name|max|],
fnReturnType = PGDouble,
fnArguments = Agg.SingleArgument PGDouble
},
Agg.FunctionSignature
{ fnName = "min",
fnGQLName = [G.name|min|],
fnReturnType = PGDouble,
fnArguments = Agg.SingleArgument PGDouble
},
Agg.FunctionSignature
{ fnName = "sum",
fnGQLName = [G.name|sum|],
fnReturnType = PGDouble,
fnArguments = Agg.SingleArgument PGDouble
},
Agg.FunctionSignature
{ fnName = "corr",
fnGQLName = [G.name|corr|],
fnReturnType = PGDouble,
fnArguments =
Agg.Arguments
( NE.fromList
[ Agg.ArgumentSignature
{ argType = PGDouble,
argName = [G.name|Y|]
},
Agg.ArgumentSignature
{ argType = PGDouble,
argName = [G.name|X|]
}
]
)
},
Agg.FunctionSignature
{ fnName = "covar_samp",
fnGQLName = [G.name|covar_samp|],
fnReturnType = PGDouble,
fnArguments =
Agg.Arguments
( NE.fromList
[ Agg.ArgumentSignature
{ argType = PGDouble,
argName = [G.name|Y|]
},
Agg.ArgumentSignature
{ argType = PGDouble,
argName = [G.name|X|]
}
]
)
},
Agg.FunctionSignature
{ fnName = "stddev_samp",
fnGQLName = [G.name|stddev_samp|],
fnReturnType = PGDouble,
fnArguments = Agg.SingleArgument PGDouble
},
Agg.FunctionSignature
{ fnName = "var_samp",
fnGQLName = [G.name|var_samp|],
fnReturnType = PGDouble,
fnArguments = Agg.SingleArgument PGDouble
}
]
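
-- Illustrative sketch (not itself part of the schema code): these signatures
-- feed 'Agg.defaultAggregationPredicatesParser', which generates aggregation
-- predicates for boolean expressions over related tables. For a hypothetical
-- @authors@/@articles@ relationship, the resulting syntax looks along the
-- lines of
--
-- > authors(where: {
-- >   articles_aggregate: {
-- >     avg: { arguments: "rating", predicate: { _gt: 3 } }
-- >   }
-- > })
--
-- where the argument fields offered for each function follow its
-- 'Agg.FunctionSignature' above.
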
instance
( PostgresSchema pgKind,
Backend ('Postgres pgKind)
) =>
BS.BackendTableSelectSchema ('Postgres pgKind)
where
tableArguments = defaultTableArgs
selectTable = defaultSelectTable
selectTableAggregate = defaultSelectTableAggregate
tableSelectionSet = defaultTableSelectionSet

instance
( PostgresSchema pgKind,
Backend ('Postgres pgKind)
) =>
BS.BackendCustomTypeSelectSchema ('Postgres pgKind)
where
logicalModelArguments = defaultLogicalModelArgs
logicalModelSelectionSet = defaultLogicalModelSelectionSet

instance
( Backend ('Postgres pgKind),
PostgresSchema pgKind
) =>
BackendSchema ('Postgres pgKind)
where
-- top level parsers
buildTableQueryAndSubscriptionFields = pgkBuildTableQueryAndSubscriptionFields
buildTableRelayQueryFields = pgkBuildTableRelayQueryFields
buildTableStreamingSubscriptionFields = pgkBuildTableStreamingSubscriptionFields
buildTableInsertMutationFields = GSB.buildTableInsertMutationFields backendInsertParser
buildTableUpdateMutationFields = pgkBuildTableUpdateMutationFields
buildTableDeleteMutationFields = GSB.buildTableDeleteMutationFields
buildFunctionQueryFields = buildFunctionQueryFieldsPG
buildFunctionRelayQueryFields = pgkBuildFunctionRelayQueryFields
buildFunctionMutationFields = buildFunctionMutationFieldsPG
buildLogicalModelRootFields = LogicalModels.defaultBuildLogicalModelRootFields
mkRelationshipParser = GSB.mkDefaultRelationshipParser backendInsertParser ()
-- backend extensions
relayExtension = pgkRelayExtension @pgKind
nodesAggExtension = Just ()
streamSubscriptionExtension = Just ()
-- individual components
columnParser = columnParser
enumParser = enumParser @pgKind
possiblyNullable = possiblyNullable
scalarSelectionArgumentsParser = pgScalarSelectionArgumentsParser
  -- NOTE: We don't use @orderByOperators@ directly, as that would cause memory
  -- growth; instead we use separate top-level functions. Per @jberryman on the
  -- memory growth: "This is turning a CAF into a function, and the output is
  -- likely no longer going to be shared even for the same arguments, and even
  -- though the domain is extremely small (just HasuraCase or GraphqlCase)."
orderByOperators _sourceInfo = \case
HasuraCase -> orderByOperatorsHasuraCase
GraphqlCase -> orderByOperatorsGraphqlCase
comparisonExps = comparisonExps
countTypeInput = countTypeInput
aggregateOrderByCountType = Postgres.PGInteger
computedField = computedFieldPG

instance Backend ('Postgres pgKind) => BackendUpdateOperatorsSchema ('Postgres pgKind) where
type UpdateOperators ('Postgres pgKind) = UpdateOpExpression
parseUpdateOperators = pgkParseUpdateOperators
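
-- | The Postgres-specific part of an insert mutation: the @on_conflict@
-- (upsert) argument, as provided by 'onConflictFieldParser'.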
backendInsertParser ::
forall pgKind m r n.
MonadBuildSchema ('Postgres pgKind) r m n =>
TableInfo ('Postgres pgKind) ->
SchemaT r m (InputFieldsParser n (PGIR.BackendInsert pgKind (IR.UnpreparedValue ('Postgres pgKind))))
backendInsertParser tableInfo =
fmap BackendInsert <$> onConflictFieldParser tableInfo

----------------------------------------------------------------
-- Top level parsers

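-- | Builds the Relay connection query field for a table. With the default
-- (Hasura) naming convention this surfaces as a @<table>_connection@ root
-- field; the exact name is produced by 'mkRelayConnectionField' together with
-- the source's naming customization.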
buildTableRelayQueryFields ::
forall r m n pgKind.
( MonadBuildSchema ('Postgres pgKind) r m n,
BackendTableSelectSchema ('Postgres pgKind)
) =>
MkRootFieldName ->
TableName ('Postgres pgKind) ->
TableInfo ('Postgres pgKind) ->
C.GQLNameIdentifier ->
NESeq (ColumnInfo ('Postgres pgKind)) ->
SchemaT r m [FieldParser n (QueryDB ('Postgres pgKind) (RemoteRelationshipField IR.UnpreparedValue) (IR.UnpreparedValue ('Postgres pgKind)))]
buildTableRelayQueryFields mkRootFieldName tableName tableInfo gqlName pkeyColumns = do
sourceInfo :: SourceInfo ('Postgres pgKind) <- asks getter
let customization = _siCustomization sourceInfo
tCase = _rscNamingConvention customization
fieldDesc = Just $ G.Description $ "fetch data from the table: " <>> tableName
rootFieldName = runMkRootFieldName mkRootFieldName $ applyFieldNameCaseIdentifier tCase (mkRelayConnectionField gqlName)
fmap afold $
optionalFieldParser QDBConnection $
selectTableConnection tableInfo rootFieldName fieldDesc pkeyColumns
buildFunctionRelayQueryFields ::
forall r m n pgKind.
( MonadBuildSchema ('Postgres pgKind) r m n,
BackendTableSelectSchema ('Postgres pgKind)
) =>
MkRootFieldName ->
FunctionName ('Postgres pgKind) ->
FunctionInfo ('Postgres pgKind) ->
TableName ('Postgres pgKind) ->
NESeq (ColumnInfo ('Postgres pgKind)) ->
SchemaT r m [FieldParser n (QueryDB ('Postgres pgKind) (RemoteRelationshipField IR.UnpreparedValue) (IR.UnpreparedValue ('Postgres pgKind)))]
buildFunctionRelayQueryFields mkRootFieldName functionName functionInfo tableName pkeyColumns = do
let fieldDesc = Just $ G.Description $ "execute function " <> functionName <<> " which returns " <>> tableName
fmap afold $
optionalFieldParser QDBConnection $
selectFunctionConnection mkRootFieldName functionInfo fieldDesc pkeyColumns
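
-- | Builds the update mutation root fields: the single-batch fields generated
-- by 'GSB.buildSingleBatchTableUpdateMutationFields' (e.g. @update_<table>@
-- and @update_<table>_by_pk@) plus, where available, the multi-batch
-- @update_<table>_many@ field.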
pgkBuildTableUpdateMutationFields ::
forall r m n pgKind.
(MonadBuildSchema ('Postgres pgKind) r m n, PostgresSchema pgKind) =>
Scenario ->
TableInfo ('Postgres pgKind) ->
C.GQLNameIdentifier ->
SchemaT r m [P.FieldParser n (IR.AnnotatedUpdateG ('Postgres pgKind) (IR.RemoteRelationshipField IR.UnpreparedValue) (IR.UnpreparedValue ('Postgres pgKind)))]
pgkBuildTableUpdateMutationFields scenario tableInfo gqlName = do
updateRootFields <- GSB.buildSingleBatchTableUpdateMutationFields SingleBatch scenario tableInfo gqlName
updateManyRootField <- SUB.updateTableMany MultipleBatches scenario tableInfo gqlName
pure $ updateRootFields ++ (maybeToList updateManyRootField)

----------------------------------------------------------------
-- Individual components

columnParser ::
forall pgKind r m n.
MonadBuildSchema ('Postgres pgKind) r m n =>
ColumnType ('Postgres pgKind) ->
G.Nullability ->
SchemaT r m (Parser 'Both n (IR.ValueWithOrigin (ColumnValue ('Postgres pgKind))))
columnParser columnType nullability = case columnType of
ColumnScalar scalarType -> memoizeOn 'columnParser (scalarType, nullability) do
-- We convert the value to JSON and use the FromJSON instance. This avoids
-- having two separate ways of parsing a value in the codebase, which
-- could lead to inconsistencies.
--
-- The mapping from postgres type to GraphQL scalar name is done by
-- 'mkScalarTypeName'. This is confusing, and we might want to fix it
-- later, as we will parse values differently here than how they'd be
-- parsed in other places using the same scalar name; for instance, we
-- will accept strings for postgres columns of type "Integer", despite the
-- fact that they will be represented as GraphQL ints, which otherwise do
-- not accept strings.
--
-- TODO: introduce new dedicated scalars for Postgres column types.
name <- mkScalarTypeName scalarType
let schemaType = P.TNamed P.NonNullable $ P.Definition name Nothing Nothing [] P.TIScalar
pure $
peelWithOrigin $
fmap (ColumnValue columnType) $
possiblyNullable scalarType nullability $
P.Parser
{ pType = schemaType,
pParser =
P.valueToJSON (P.toGraphQLType schemaType) >=> \case
J.Null -> P.parseError $ "unexpected null value for type " <> toErrorValue name
value ->
runAesonParser (parsePGValue scalarType) value
`onLeft` (P.parseErrorWith P.ParseFailed . toErrorMessage . qeError)
}
ColumnEnumReference (EnumReference tableName enumValues tableCustomName) ->
case nonEmpty (Map.toList enumValues) of
Just enumValuesList ->
peelWithOrigin . fmap (ColumnValue columnType)
<$> enumParser @pgKind tableName enumValuesList tableCustomName nullability
Nothing -> throw400 ValidationFailed "empty enum values"
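
-- | Builds a parser for an enum column backed by an enum table. The GraphQL
-- type is named after the table plus an enum suffix (via 'addEnumSuffix';
-- e.g. a @colors@ enum table becomes a @colors_enum@ type, subject to source
-- customization), and each row of the enum table becomes one enum value.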
enumParser ::
forall pgKind r m n.
(MonadBuildSchema ('Postgres pgKind) r m n) =>
TableName ('Postgres pgKind) ->
NonEmpty (EnumValue, EnumValueInfo) ->
Maybe G.Name ->
G.Nullability ->
SchemaT r m (Parser 'Both n (ScalarValue ('Postgres pgKind)))
enumParser tableName enumValues tableCustomName nullability = do
sourceInfo :: SourceInfo ('Postgres pgKind) <- asks getter
let customization = _siCustomization sourceInfo
tCase = _rscNamingConvention customization
tableGQLName <- liftEither (getIdentifierQualifiedObject tableName)
let name = addEnumSuffix customization tableGQLName tableCustomName
pure $ possiblyNullable PGText nullability $ P.enum name Nothing (mkEnumValue tCase <$> enumValues)
where
mkEnumValue :: NamingCase -> (EnumValue, EnumValueInfo) -> (P.Definition P.EnumValueInfo, ScalarValue ('Postgres pgKind))
mkEnumValue tCase (EnumValue value, EnumValueInfo description) =
( P.Definition (applyEnumValueCase tCase value) (G.Description <$> description) Nothing [] P.EnumValueInfo,
PGValText $ G.unName value
)
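
-- | Wraps a scalar parser to handle nullability: for a nullable column an
-- explicit or implicit @null@ is accepted and parsed to 'PGNull'; otherwise
-- the underlying parser is used unchanged.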
possiblyNullable ::
(MonadParse m, 'Input <: k) =>
ScalarType ('Postgres pgKind) ->
G.Nullability ->
Parser k m (ScalarValue ('Postgres pgKind)) ->
Parser k m (ScalarValue ('Postgres pgKind))
possiblyNullable scalarType (G.Nullability isNullable)
| isNullable = fmap (fromMaybe $ PGNull scalarType) . P.nullable
| otherwise = id
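
-- A minimal behavioral sketch (assuming a @text@ column): with
-- @G.Nullability True@, an explicit GraphQL @null@ is collapsed into the
-- backend's own null representation rather than surfacing as a 'Maybe':
--
-- > parse (possiblyNullable PGText (G.Nullability True) p) null  ~> PGNull PGText
-- > parse (possiblyNullable PGText (G.Nullability True) p) "ab"  ~> PGValText "ab"
--
-- (@parse@ and @~>@ are informal notation here, not functions in this module.)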
pgScalarSelectionArgumentsParser ::
MonadParse n =>
ColumnType ('Postgres pgKind) ->
InputFieldsParser n (Maybe (ScalarSelectionArguments ('Postgres pgKind)))
pgScalarSelectionArgumentsParser columnType
| isScalarColumnWhere Postgres.isJSONType columnType =
P.fieldOptional fieldName description P.string `P.bindFields` fmap join . traverse toColExp
| otherwise = pure Nothing
where
fieldName = Name._path
description = Just "JSON select path"
toColExp textValue = case parseJSONPath textValue of
Left err -> P.parseError $ "parse json path error: " <> toErrorMessage err
Right [] -> pure Nothing
Right jPaths -> pure $ Just $ Postgres.ColumnOp Postgres.jsonbPathOp $ Postgres.SEArray $ map elToColExp jPaths
elToColExp (Key k) = Postgres.SELit $ K.toText k
elToColExp (Index i) = Postgres.SELit $ tshow i
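
-- Informal sketch of the translation: a selection such as
--
-- > article_body(path: "$.sections[0]")
--
-- parses to @Just (ColumnOp jsonbPathOp (SEArray [SELit "sections", SELit "0"]))@,
-- i.e. the JSON path is compiled down to Postgres's @#>@ path operator; an
-- empty path yields @Nothing@. (@article_body@ is a hypothetical field name.)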
orderByOperatorsHasuraCase ::
(G.Name, NonEmpty (Definition P.EnumValueInfo, (BasicOrderType ('Postgres pgKind), NullsOrderType ('Postgres pgKind))))
orderByOperatorsHasuraCase = orderByOperators HasuraCase
orderByOperatorsGraphqlCase ::
(G.Name, NonEmpty (Definition P.EnumValueInfo, (BasicOrderType ('Postgres pgKind), NullsOrderType ('Postgres pgKind))))
orderByOperatorsGraphqlCase = orderByOperators GraphqlCase
-- | Do NOT use this function directly; use it via
-- @orderByOperatorsHasuraCase@ or @orderByOperatorsGraphqlCase@.
orderByOperators ::
NamingCase ->
(G.Name, NonEmpty (Definition P.EnumValueInfo, (BasicOrderType ('Postgres pgKind), NullsOrderType ('Postgres pgKind))))
orderByOperators tCase =
(Name._order_by,) $
NE.fromList
[ ( define (applyEnumValueCase tCase Name._asc) "in ascending order, nulls last",
(Postgres.OTAsc, Postgres.NullsLast)
),
( define (applyEnumValueCase tCase Name._asc_nulls_first) "in ascending order, nulls first",
(Postgres.OTAsc, Postgres.NullsFirst)
),
( define (applyEnumValueCase tCase Name._asc_nulls_last) "in ascending order, nulls last",
(Postgres.OTAsc, Postgres.NullsLast)
),
( define (applyEnumValueCase tCase Name._desc) "in descending order, nulls first",
(Postgres.OTDesc, Postgres.NullsFirst)
),
( define (applyEnumValueCase tCase Name._desc_nulls_first) "in descending order, nulls first",
(Postgres.OTDesc, Postgres.NullsFirst)
),
( define (applyEnumValueCase tCase Name._desc_nulls_last) "in descending order, nulls last",
(Postgres.OTDesc, Postgres.NullsLast)
)
]
where
define name desc = P.Definition name (Just desc) Nothing [] P.EnumValueInfo
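
-- For orientation: the HasuraCase variant exposes the familiar enum values
-- (@asc@, @asc_nulls_first@, @asc_nulls_last@, @desc@, @desc_nulls_first@,
-- @desc_nulls_last@), while GraphqlCase re-renders the same entries per the
-- graphql-default convention. An illustrative query, with hypothetical
-- @article@ and @published_at@ names:
--
-- > query {
-- >   article(order_by: {published_at: desc_nulls_last}) { id }
-- > }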
comparisonExps ::
forall pgKind m n r.
MonadBuildSchema ('Postgres pgKind) r m n =>
ColumnType ('Postgres pgKind) ->
SchemaT r m (Parser 'Input n [ComparisonExp ('Postgres pgKind)])
comparisonExps = memoize 'comparisonExps \columnType -> do
sourceInfo :: SourceInfo ('Postgres pgKind) <- asks getter
let customization = _siCustomization sourceInfo
tCase = _rscNamingConvention customization
-- see Note [Columns in comparison expression are never nullable]
collapseIfNull <- retrieve Options.soDangerousBooleanCollapse
-- parsers used for comparison arguments
geogInputParser <- geographyWithinDistanceInput
geomInputParser <- geometryWithinDistanceInput
ignInputParser <- intersectsGeomNbandInput
ingInputParser <- intersectsNbandGeomInput
typedParser <- columnParser columnType (G.Nullability False)
nullableTextParser <- columnParser (ColumnScalar PGText) (G.Nullability True)
textParser <- columnParser (ColumnScalar PGText) (G.Nullability False)
-- `lquery` represents a regular-expression-like pattern for matching `ltree` values.
lqueryParser <- columnParser (ColumnScalar PGLquery) (G.Nullability False)
-- `ltxtquery` represents a full-text-search-like pattern for matching `ltree` values.
ltxtqueryParser <- columnParser (ColumnScalar PGLtxtquery) (G.Nullability False)
maybeCastParser <- castExp columnType tCase
let name = applyTypeNameCaseCust tCase $ P.getName typedParser <> Name.__comparison_exp
desc =
G.Description $
"Boolean expression to compare columns of type "
<> P.getName typedParser
<<> ". All fields are combined with logical 'AND'."
textListParser = fmap IR.openValueOrigin <$> P.list textParser
columnListParser = fmap IR.openValueOrigin <$> P.list typedParser
-- Naming conventions
pure $
P.object name (Just desc) $
fmap catMaybes $
sequenceA $
concat
[ flip (maybe []) maybeCastParser $ \castParser ->
[ P.fieldOptional Name.__cast Nothing (ACast <$> castParser)
],
-- Common ops for all types
equalityOperators
tCase
collapseIfNull
(IR.mkParameter <$> typedParser)
(mkListParameter columnType <$> columnListParser),
-- Comparison ops for non Raster types
guard (isScalarColumnWhere (/= PGRaster) columnType)
*> comparisonOperators
tCase
collapseIfNull
(IR.mkParameter <$> typedParser),
-- Ops for Raster types
guard (isScalarColumnWhere (== PGRaster) columnType)
*> [ mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "intersects", "rast"]))
Nothing
(ABackendSpecific . ASTIntersectsRast . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "intersects", "nband", "geom"]))
Nothing
(ABackendSpecific . ASTIntersectsNbandGeom <$> ingInputParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "intersects", "geom", "nband"]))
Nothing
(ABackendSpecific . ASTIntersectsGeomNband <$> ignInputParser)
],
-- Ops for String like types
guard (isScalarColumnWhere isStringType columnType)
*> [ mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__like)
(Just "does the column match the given pattern")
(ALIKE . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__nlike)
(Just "does the column NOT match the given pattern")
(ANLIKE . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__ilike)
(Just "does the column match the given case-insensitive pattern")
(ABackendSpecific . AILIKE . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__nilike)
(Just "does the column NOT match the given case-insensitive pattern")
(ABackendSpecific . ANILIKE . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__similar)
(Just "does the column match the given SQL regular expression")
(ABackendSpecific . ASIMILAR . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__nsimilar)
(Just "does the column NOT match the given SQL regular expression")
(ABackendSpecific . ANSIMILAR . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__regex)
(Just "does the column match the given POSIX regular expression, case sensitive")
(ABackendSpecific . AREGEX . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__iregex)
(Just "does the column match the given POSIX regular expression, case insensitive")
(ABackendSpecific . AIREGEX . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__nregex)
(Just "does the column NOT match the given POSIX regular expression, case sensitive")
(ABackendSpecific . ANREGEX . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__niregex)
(Just "does the column NOT match the given POSIX regular expression, case insensitive")
(ABackendSpecific . ANIREGEX . IR.mkParameter <$> typedParser)
],
-- Ops for JSONB type
guard (isScalarColumnWhere (== PGJSONB) columnType)
*> [ mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__contains)
(Just "does the column contain the given json value at the top level")
(ABackendSpecific . AContains . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_contained", "in"]))
(Just "is the column contained in the given json value")
(ABackendSpecific . AContainedIn . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_has", "key"]))
(Just "does the string exist as a top-level key in the column")
(ABackendSpecific . AHasKey . IR.mkParameter <$> nullableTextParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_has", "keys", "any"]))
(Just "do any of these strings exist as top-level keys in the column")
(ABackendSpecific . AHasKeysAny . mkListLiteral (ColumnScalar PGText) <$> textListParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_has", "keys", "all"]))
(Just "do all of these strings exist as top-level keys in the column")
(ABackendSpecific . AHasKeysAll . mkListLiteral (ColumnScalar PGText) <$> textListParser)
],
-- Ops for Geography type
guard (isScalarColumnWhere (== PGGeography) columnType)
*> [ mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "intersects"]))
(Just "does the column spatially intersect the given geography value")
(ABackendSpecific . ASTIntersects . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "d", "within"]))
(Just "is the column within a given distance from the given geography value")
(ABackendSpecific . ASTDWithinGeog <$> geogInputParser)
],
-- Ops for Geometry type
guard (isScalarColumnWhere (== PGGeometry) columnType)
*> [ mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "contains"]))
(Just "does the column contain the given geometry value")
(ABackendSpecific . ASTContains . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "crosses"]))
(Just "does the column cross the given geometry value")
(ABackendSpecific . ASTCrosses . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "equals"]))
(Just "is the column equal to given geometry value (directionality is ignored)")
(ABackendSpecific . ASTEquals . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "overlaps"]))
(Just "does the column 'spatially overlap' (intersect but not completely contain) the given geometry value")
(ABackendSpecific . ASTOverlaps . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "touches"]))
(Just "does the column have atleast one point in common with the given geometry value")
(ABackendSpecific . ASTTouches . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "within"]))
(Just "is the column contained in the given geometry value")
(ABackendSpecific . ASTWithin . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "intersects"]))
(Just "does the column spatially intersect the given geometry value")
(ABackendSpecific . ASTIntersects . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "3d", "intersects"]))
(Just "does the column spatially intersect the given geometry value in 3D")
(ABackendSpecific . AST3DIntersects . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "d", "within"]))
(Just "is the column within a given distance from the given geometry value")
(ABackendSpecific . ASTDWithinGeom <$> geomInputParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "3d", "d", "within"]))
(Just "is the column within a given 3D distance from the given geometry value")
(ABackendSpecific . AST3DDWithinGeom <$> geomInputParser)
],
-- Ops for Ltree type
guard (isScalarColumnWhere (== PGLtree) columnType)
*> [ mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__ancestor)
(Just "is the left argument an ancestor of right (or equal)?")
(ABackendSpecific . AAncestor . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_ancestor", "any"]))
(Just "does array contain an ancestor of `ltree`?")
(ABackendSpecific . AAncestorAny . mkListLiteral columnType <$> columnListParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__descendant)
(Just "is the left argument a descendant of right (or equal)?")
(ABackendSpecific . ADescendant . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_descendant", "any"]))
(Just "does array contain a descendant of `ltree`?")
(ABackendSpecific . ADescendantAny . mkListLiteral columnType <$> columnListParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__matches)
(Just "does `ltree` match `lquery`?")
(ABackendSpecific . AMatches . IR.mkParameter <$> lqueryParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_matches", "any"]))
(Just "does `ltree` match any `lquery` in array?")
(ABackendSpecific . AMatchesAny . mkListLiteral (ColumnScalar PGLquery) <$> textListParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_matches", "fulltext"]))
(Just "does `ltree` match `ltxtquery`?")
(ABackendSpecific . AMatchesFulltext . IR.mkParameter <$> ltxtqueryParser)
]
]
where
mkListLiteral :: ColumnType ('Postgres pgKind) -> [ColumnValue ('Postgres pgKind)] -> IR.UnpreparedValue ('Postgres pgKind)
mkListLiteral columnType columnValues =
IR.UVLiteral $
SETyAnn
(SEArray $ txtEncoder . cvValue <$> columnValues)
(mkTypeAnn $ CollectableTypeArray $ unsafePGColumnToBackend columnType)
mkListParameter :: ColumnType ('Postgres pgKind) -> [ColumnValue ('Postgres pgKind)] -> IR.UnpreparedValue ('Postgres pgKind)
mkListParameter columnType columnValues = do
let scalarType = unsafePGColumnToBackend columnType
IR.UVParameter Nothing $
ColumnValue
(ColumnScalar $ Postgres.PGArray scalarType)
(Postgres.PGValArray $ cvValue <$> columnValues)
castExp :: ColumnType ('Postgres pgKind) -> NamingCase -> SchemaT r m (Maybe (Parser 'Input n (CastExp ('Postgres pgKind) (IR.UnpreparedValue ('Postgres pgKind)))))
castExp sourceType tCase = do
let maybeScalars = case sourceType of
ColumnScalar PGGeography -> Just (PGGeography, PGGeometry)
ColumnScalar PGGeometry -> Just (PGGeometry, PGGeography)
ColumnScalar PGJSONB -> Just (PGJSONB, PGText)
_ -> Nothing
forM maybeScalars $ \(sourceScalar, targetScalar) -> do
scalarTypeName <- C.fromAutogeneratedName <$> mkScalarTypeName sourceScalar
targetName <- mkScalarTypeName targetScalar
targetOpExps <- comparisonExps $ ColumnScalar targetScalar
let field = P.fieldOptional targetName Nothing $ (targetScalar,) <$> targetOpExps
sourceName = applyTypeNameCaseIdentifier tCase (scalarTypeName <> (C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["cast", "exp"])))
pure $ P.object sourceName Nothing $ M.fromList . maybeToList <$> field
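
-- Putting the pieces together, a sketch of the input object 'comparisonExps'
-- generates for a hypothetical @integer@ column (HasuraCase, abridged):
--
-- > input Int_comparison_exp {
-- >   _eq: Int
-- >   _gt: Int
-- >   _gte: Int
-- >   _in: [Int!]
-- >   _is_null: Boolean
-- >   _lt: Int
-- >   _lte: Int
-- >   _neq: Int
-- >   _nin: [Int!]
-- > }
--
-- The type-specific blocks above extend this core: e.g. @_like@ for text
-- columns, @_contains@ for @jsonb@, @_st_intersects@ for geometry.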
geographyWithinDistanceInput ::
forall pgKind m n r.
MonadBuildSchema ('Postgres pgKind) r m n =>
SchemaT r m (Parser 'Input n (DWithinGeogOp (IR.UnpreparedValue ('Postgres pgKind))))
geographyWithinDistanceInput = do
geographyParser <- columnParser (ColumnScalar PGGeography) (G.Nullability False)
-- FIXME
-- It doesn't make sense for this value to be nullable; it is only nullable
-- for backwards compatibility. If an explicit null value is given, it is
-- forwarded to the underlying SQL function, which in turn treats a null
-- value as an error. We can fix this by rejecting explicit null values and
-- marking this field non-nullable in a future release.
booleanParser <- columnParser (ColumnScalar PGBoolean) (G.Nullability True)
floatParser <- columnParser (ColumnScalar PGFloat) (G.Nullability False)
pure $
P.object Name._st_d_within_geography_input Nothing $
DWithinGeogOp
<$> (IR.mkParameter <$> P.field Name._distance Nothing floatParser)
<*> (IR.mkParameter <$> P.field Name._from Nothing geographyParser)
<*> (IR.mkParameter <$> P.fieldWithDefault Name._use_spheroid Nothing (G.VBoolean True) booleanParser)
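
-- Example shape of the resulting input (field names come from this module;
-- the geography literal is illustrative GeoJSON):
--
-- > _st_d_within: {
-- >   distance: 100000
-- >   from: {type: "Point", coordinates: [-122.4, 37.8]}
-- >   use_spheroid: true   # optional; defaults to true
-- > }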
geometryWithinDistanceInput ::
forall pgKind m n r.
MonadBuildSchema ('Postgres pgKind) r m n =>
SchemaT r m (Parser 'Input n (DWithinGeomOp (IR.UnpreparedValue ('Postgres pgKind))))
geometryWithinDistanceInput = do
geometryParser <- columnParser (ColumnScalar PGGeometry) (G.Nullability False)
floatParser <- columnParser (ColumnScalar PGFloat) (G.Nullability False)
pure $
P.object Name._st_d_within_input Nothing $
DWithinGeomOp
<$> (IR.mkParameter <$> P.field Name._distance Nothing floatParser)
<*> (IR.mkParameter <$> P.field Name._from Nothing geometryParser)
intersectsNbandGeomInput ::
forall pgKind m n r.
MonadBuildSchema ('Postgres pgKind) r m n =>
SchemaT r m (Parser 'Input n (STIntersectsNbandGeommin (IR.UnpreparedValue ('Postgres pgKind))))
intersectsNbandGeomInput = do
geometryParser <- columnParser (ColumnScalar PGGeometry) (G.Nullability False)
integerParser <- columnParser (ColumnScalar PGInteger) (G.Nullability False)
pure $
P.object Name._st_intersects_nband_geom_input Nothing $
STIntersectsNbandGeommin
<$> (IR.mkParameter <$> P.field Name._nband Nothing integerParser)
<*> (IR.mkParameter <$> P.field Name._geommin Nothing geometryParser)
intersectsGeomNbandInput ::
forall pgKind m n r.
MonadBuildSchema ('Postgres pgKind) r m n =>
SchemaT r m (Parser 'Input n (STIntersectsGeomminNband (IR.UnpreparedValue ('Postgres pgKind))))
intersectsGeomNbandInput = do
geometryParser <- columnParser (ColumnScalar PGGeometry) (G.Nullability False)
integerParser <- columnParser (ColumnScalar PGInteger) (G.Nullability False)
pure $
P.object Name._st_intersects_geom_nband_input Nothing $
STIntersectsGeomminNband
<$> (IR.mkParameter <$> P.field Name._geommin Nothing geometryParser)
<*> (fmap IR.mkParameter <$> P.fieldOptional Name._nband Nothing integerParser)
countTypeInput ::
MonadParse n =>
Maybe (Parser 'Both n (Column ('Postgres pgKind))) ->
InputFieldsParser n (IR.CountDistinct -> CountType ('Postgres pgKind))
countTypeInput = \case
Just columnEnum -> do
columns <- P.fieldOptional Name._columns Nothing (P.list columnEnum)
pure $ flip mkCountType columns
Nothing -> pure $ flip mkCountType Nothing
where
mkCountType :: IR.CountDistinct -> Maybe [Column ('Postgres pgKind)] -> CountType ('Postgres pgKind)
mkCountType _ Nothing = Postgres.CTStar
mkCountType IR.SelectCountDistinct (Just cols) = Postgres.CTDistinct cols
mkCountType IR.SelectCountNonDistinct (Just cols) = Postgres.CTSimple cols
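
-- Informal mapping, assuming a hypothetical @name@ column:
--
-- > count                                   ~> CTStar
-- > count(columns: [name])                  ~> CTSimple [name]
-- > count(columns: [name], distinct: true)  ~> CTDistinct [name]
--
-- (The @distinct@ flag is parsed elsewhere and arrives here as the
-- 'IR.CountDistinct' argument.)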
-- | Update operator that prepends a value to a column containing jsonb arrays.
--
-- Note: Currently this is Postgres specific because json columns have not been ported
-- to other backends yet.
prependOp ::
forall pgKind m n r.
MonadBuildSchema ('Postgres pgKind) r m n =>
SU.UpdateOperator ('Postgres pgKind) r m n (IR.UnpreparedValue ('Postgres pgKind))
prependOp = SU.UpdateOperator {..}
where
updateOperatorApplicableColumn = isScalarColumnWhere (== PGJSONB) . ciType
updateOperatorParser tableGQLName _tableName columns = do
let typedParser columnInfo =
fmap IR.mkParameter
<$> BS.columnParser
(ciType columnInfo)
(G.Nullability $ ciIsNullable columnInfo)
desc = "prepend existing jsonb value of filtered columns with new jsonb value"
SU.updateOperator
tableGQLName
(C.fromAutogeneratedName $$(G.litName "prepend"))
(C.fromAutogeneratedName $$(G.litName "_prepend"))
typedParser
columns
desc
desc
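
-- Illustrative mutation (hypothetical table @article@ with a @jsonb@ column
-- @tags@); the generated argument is named via the @_prepend@ identifier
-- above:
--
-- > mutation {
-- >   update_article(where: {id: {_eq: 1}}, _prepend: {tags: ["featured"]}) {
-- >     affected_rows
-- >   }
-- > }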
-- | Update operator that appends a value to a column containing jsonb arrays.
--
-- Note: Currently this is Postgres specific because json columns have not been ported
-- to other backends yet.
appendOp ::
forall pgKind m n r.
MonadBuildSchema ('Postgres pgKind) r m n =>
SU.UpdateOperator ('Postgres pgKind) r m n (IR.UnpreparedValue ('Postgres pgKind))
appendOp = SU.UpdateOperator {..}
where
updateOperatorApplicableColumn = isScalarColumnWhere (== PGJSONB) . ciType
updateOperatorParser tableGQLName _tableName columns = do
let typedParser columnInfo =
fmap IR.mkParameter
<$> BS.columnParser
(ciType columnInfo)
(G.Nullability $ ciIsNullable columnInfo)
desc = "append existing jsonb value of filtered columns with new jsonb value"
SU.updateOperator
tableGQLName
(C.fromAutogeneratedName $$(G.litName "append"))
(C.fromAutogeneratedName $$(G.litName "_append"))
typedParser
columns
desc
desc
-- | Update operator that deletes a value at a specified key from a column
-- containing jsonb objects.
--
-- Note: Currently this is Postgres specific because json columns have not been ported
-- to other backends yet.
deleteKeyOp ::
forall pgKind m n r.
MonadBuildSchema ('Postgres pgKind) r m n =>
SU.UpdateOperator ('Postgres pgKind) r m n (IR.UnpreparedValue ('Postgres pgKind))
deleteKeyOp = SU.UpdateOperator {..}
where
updateOperatorApplicableColumn = isScalarColumnWhere (== PGJSONB) . ciType
updateOperatorParser tableGQLName _tableName columns = do
let nullableTextParser _ = fmap IR.mkParameter <$> BS.columnParser (ColumnScalar PGText) (G.Nullability True)
desc = "delete key/value pair or string element. key/value pairs are matched based on their key value"
SU.updateOperator
tableGQLName
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["delete", "key"]))
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_delete", "key"]))
nullableTextParser
columns
desc
desc
-- | Update operator that deletes a value at a specific index from a column
-- containing jsonb arrays.
--
-- Note: Currently this is Postgres specific because json columns have not been ported
-- to other backends yet.
deleteElemOp ::
forall pgKind m n r.
MonadBuildSchema ('Postgres pgKind) r m n =>
SU.UpdateOperator ('Postgres pgKind) r m n (IR.UnpreparedValue ('Postgres pgKind))
deleteElemOp = SU.UpdateOperator {..}
where
updateOperatorApplicableColumn = isScalarColumnWhere (== PGJSONB) . ciType
updateOperatorParser tableGQLName _tableName columns = do
let nonNullableIntParser _ = fmap IR.mkParameter <$> BS.columnParser (ColumnScalar PGInteger) (G.Nullability False)
desc =
"delete the array element with specified index (negative integers count from the end). "
<> "throws an error if top level container is not an array"
SU.updateOperator
tableGQLName
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["delete", "elem"]))
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_delete", "elem"]))
nonNullableIntParser
columns
desc
desc
-- | Update operator that deletes a field at a certain path from a column
-- containing jsonb objects.
--
-- Note: Currently this is Postgres specific because json columns have not been ported
-- to other backends yet.
deleteAtPathOp ::
forall pgKind m n r.
MonadBuildSchema ('Postgres pgKind) r m n =>
SU.UpdateOperator ('Postgres pgKind) r m n [IR.UnpreparedValue ('Postgres pgKind)]
deleteAtPathOp = SU.UpdateOperator {..}
where
updateOperatorApplicableColumn = isScalarColumnWhere (== PGJSONB) . ciType
updateOperatorParser tableGQLName _tableName columns = do
let nonNullableTextListParser _ = P.list . fmap IR.mkParameter <$> BS.columnParser (ColumnScalar PGText) (G.Nullability False)
desc = "delete the field or element with specified path (for JSON arrays, negative integers count from the end)"
SU.updateOperator
tableGQLName
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["delete", "at", "path"]))
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_delete", "at", "path"]))
nonNullableTextListParser
columns
desc
desc
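
-- The three delete operators side by side (hypothetical @jsonb@ column
-- @meta@):
--
-- > _delete_key:     {meta: "author"}          # remove a top-level key
-- > _delete_elem:    {meta: 2}                 # remove array element at index 2
-- > _delete_at_path: {meta: ["author", "id"]}  # remove a nested field by path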
-- | The update operators that we support on Postgres.
pgkParseUpdateOperators ::
forall pgKind m n r.
MonadBuildSchema ('Postgres pgKind) r m n =>
TableInfo ('Postgres pgKind) ->
UpdPermInfo ('Postgres pgKind) ->
SchemaT r m (InputFieldsParser n (HashMap (Column ('Postgres pgKind)) (UpdateOpExpression (IR.UnpreparedValue ('Postgres pgKind)))))
pgkParseUpdateOperators tableInfo updatePermissions = do
SU.buildUpdateOperators
(PGIR.UpdateSet <$> SU.presetColumns updatePermissions)
[ PGIR.UpdateSet <$> SU.setOp,
PGIR.UpdateInc <$> SU.incOp,
PGIR.UpdatePrepend <$> prependOp,
PGIR.UpdateAppend <$> appendOp,
PGIR.UpdateDeleteKey <$> deleteKeyOp,
PGIR.UpdateDeleteElem <$> deleteElemOp,
PGIR.UpdateDeleteAtPath <$> deleteAtPathOp
]
tableInfo
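
-- A sketch of how these operators compose in a single mutation (hypothetical
-- table and columns); since the result is a @HashMap@ keyed by column, each
-- column resolves to exactly one 'UpdateOpExpression':
--
-- > mutation {
-- >   update_article(
-- >     where: {id: {_eq: 1}}
-- >     _set: {title: "New title"}
-- >     _inc: {view_count: 1}
-- >     _append: {meta: {reviewed: true}}
-- >   ) {
-- >     affected_rows
-- >   }
-- > }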