graphql-engine/server/src-lib/Hasura/Backends/BigQuery/Instances/Schema.hs

{-# LANGUAGE ApplicativeDo #-}
{-# LANGUAGE TemplateHaskell #-}
{-# OPTIONS_GHC -fno-warn-orphans #-}

module Hasura.Backends.BigQuery.Instances.Schema () where

import Data.Aeson qualified as J
import Data.Has
import Data.HashMap.Strict qualified as Map
import Data.List.NonEmpty qualified as NE
import Data.Text qualified as T
import Data.Text.Casing qualified as C
import Data.Text.Extended
import Hasura.Backends.BigQuery.Name
import Hasura.Backends.BigQuery.Parser.Scalars qualified as BQP
import Hasura.Backends.BigQuery.Types qualified as BigQuery
import Hasura.Base.Error
import Hasura.Base.ErrorMessage (toErrorMessage)
import Hasura.GraphQL.Schema.Backend
import Hasura.GraphQL.Schema.BoolExp
import Hasura.GraphQL.Schema.Build qualified as GSB
import Hasura.GraphQL.Schema.Common
import Hasura.GraphQL.Schema.NamingCase
import Hasura.GraphQL.Schema.Options qualified as Options
import Hasura.GraphQL.Schema.Parser
( FieldParser,
InputFieldsParser,
Kind (..),
MonadParse,
Parser,
)
import Hasura.GraphQL.Schema.Parser qualified as P
import Hasura.GraphQL.Schema.Select
import Hasura.GraphQL.Schema.Table
import Hasura.GraphQL.Schema.Typename
import Hasura.Name qualified as Name
import Hasura.Prelude
import Hasura.RQL.IR.BoolExp
import Hasura.RQL.IR.Select qualified as IR
import Hasura.RQL.IR.Value qualified as IR
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.Column
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.ComputedField
import Hasura.RQL.Types.Function
import Hasura.RQL.Types.Source (SourceInfo)
import Hasura.RQL.Types.Table
import Hasura.SQL.Backend
import Language.GraphQL.Draft.Syntax qualified as G

----------------------------------------------------------------
-- BackendSchema instance

instance BackendSchema 'BigQuery where
-- top level parsers
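  -- Relay, mutations, and SQL functions are not supported on BigQuery,
  -- so the corresponding builders below contribute no fields.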
buildTableQueryAndSubscriptionFields = GSB.buildTableQueryAndSubscriptionFields
buildTableRelayQueryFields _ _ _ _ _ _ = pure []
buildTableStreamingSubscriptionFields = GSB.buildTableStreamingSubscriptionFields
buildTableInsertMutationFields _ _ _ _ _ _ = pure []
buildTableUpdateMutationFields _ _ _ _ _ _ = pure []
buildTableDeleteMutationFields _ _ _ _ _ _ = pure []
buildFunctionQueryFields _ _ _ _ _ = pure []
buildFunctionRelayQueryFields _ _ _ _ _ _ = pure []
buildFunctionMutationFields _ _ _ _ _ = pure []
-- backend extensions
relayExtension = Nothing
nodesAggExtension = Just ()
streamSubscriptionExtension = Nothing
-- individual components
columnParser = bqColumnParser
enumParser = bqEnumParser
possiblyNullable = const bqPossiblyNullable
scalarSelectionArgumentsParser _ = pure Nothing
orderByOperators _sourceInfo = bqOrderByOperators
comparisonExps = const bqComparisonExps
countTypeInput = bqCountTypeInput
aggregateOrderByCountType = BigQuery.IntegerScalarType
  computedField = bqComputedField

instance BackendTableSelectSchema 'BigQuery where
tableArguments = defaultTableArgs
selectTable = defaultSelectTable
selectTableAggregate = defaultSelectTableAggregate
  tableSelectionSet = defaultTableSelectionSet

----------------------------------------------------------------
-- Individual components
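-- | Build a value parser for a BigQuery column: scalar columns dispatch
-- on the scalar type (optionally accepting string input for numeric
-- types), and enum references are parsed via 'bqEnumParser'.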
bqColumnParser ::
MonadBuildSchema 'BigQuery r m n =>
ColumnType 'BigQuery ->
G.Nullability ->
SchemaT r m (Parser 'Both n (IR.ValueWithOrigin (ColumnValue 'BigQuery)))
bqColumnParser columnType nullability = case columnType of
ColumnScalar scalarType -> P.memoizeOn 'bqColumnParser (columnType, nullability) do
Options.SchemaOptions {soBigQueryStringNumericInput} <- asks getter
let numericInputParser :: forall a. a -> a -> a
numericInputParser builtin custom =
case soBigQueryStringNumericInput of
Options.EnableBigQueryStringNumericInput -> custom
Options.DisableBigQueryStringNumericInput -> builtin
peelWithOrigin . fmap (ColumnValue columnType) . bqPossiblyNullable nullability
<$> case scalarType of
-- bytestrings
-- we only accept string literals
BigQuery.BytesScalarType -> pure $ BigQuery.StringValue <$> stringBased _Bytes
-- text
BigQuery.StringScalarType -> pure $ BigQuery.StringValue <$> P.string
-- floating point values
BigQuery.FloatScalarType ->
pure $
BigQuery.FloatValue
<$> numericInputParser (BigQuery.doubleToFloat64 <$> P.float) BQP.bqFloat64
BigQuery.IntegerScalarType ->
pure $
BigQuery.IntegerValue
<$> numericInputParser (BigQuery.intToInt64 . fromIntegral <$> P.int) BQP.bqInt64
BigQuery.DecimalScalarType ->
pure $
BigQuery.DecimalValue
<$> numericInputParser
(BigQuery.Decimal . BigQuery.scientificToText <$> P.scientific)
BQP.bqDecimal
BigQuery.BigDecimalScalarType ->
pure $
BigQuery.BigDecimalValue
<$> numericInputParser
(BigQuery.BigDecimal . BigQuery.scientificToText <$> P.scientific)
BQP.bqBigDecimal
-- boolean type
BigQuery.BoolScalarType -> pure $ BigQuery.BoolValue <$> P.boolean
BigQuery.DateScalarType -> pure $ BigQuery.DateValue . BigQuery.Date <$> stringBased _Date
BigQuery.TimeScalarType -> pure $ BigQuery.TimeValue . BigQuery.Time <$> stringBased _Time
BigQuery.DatetimeScalarType -> pure $ BigQuery.DatetimeValue . BigQuery.Datetime <$> stringBased _Datetime
BigQuery.GeographyScalarType ->
pure $ BigQuery.GeographyValue . BigQuery.Geography <$> throughJSON _Geography
BigQuery.TimestampScalarType ->
pure $ BigQuery.TimestampValue . BigQuery.Timestamp <$> stringBased _Timestamp
ty -> throwError $ internalError $ T.pack $ "Type currently unsupported for BigQuery: " ++ show ty
ColumnEnumReference (EnumReference tableName enumValues customTableName) ->
case nonEmpty (Map.toList enumValues) of
Just enumValuesList ->
peelWithOrigin . fmap (ColumnValue columnType) . bqPossiblyNullable nullability
<$> bqEnumParser tableName enumValuesList customTableName nullability
Nothing -> throw400 ValidationFailed "empty enum values"
where
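    -- Accept an arbitrary JSON value and parse it with the target type's
    -- Aeson instance, naming the GraphQL scalar after the given name.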
throughJSON scalarName =
let schemaType = P.TNamed P.NonNullable $ P.Definition scalarName Nothing Nothing [] P.TIScalar
in P.Parser
{ pType = schemaType,
pParser =
P.valueToJSON (P.toGraphQLType schemaType)
>=> either (P.parseErrorWith P.ParseFailed . toErrorMessage . qeError) pure . runAesonParser J.parseJSON
}
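    -- Accept only string literals, renaming 'P.string' to the given
    -- scalar name.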
stringBased :: MonadParse m => G.Name -> Parser 'Both m Text
stringBased scalarName =
P.string {P.pType = P.TNamed P.NonNullable $ P.Definition scalarName Nothing Nothing [] P.TIScalar}
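-- | Parser for a column backed by an enum table: one GraphQL enum value
-- per enum table row, each represented as a BigQuery string value.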
bqEnumParser ::
MonadBuildSchema 'BigQuery r m n =>
TableName 'BigQuery ->
NonEmpty (EnumValue, EnumValueInfo) ->
Maybe G.Name ->
G.Nullability ->
SchemaT r m (Parser 'Both n (ScalarValue 'BigQuery))
bqEnumParser tableName enumValues customTableName nullability = do
enumName <- mkEnumTypeName @'BigQuery tableName customTableName
pure $ bqPossiblyNullable nullability $ P.enum enumName Nothing (mkEnumValue <$> enumValues)
where
mkEnumValue :: (EnumValue, EnumValueInfo) -> (P.Definition P.EnumValueInfo, ScalarValue 'BigQuery)
mkEnumValue (EnumValue value, EnumValueInfo description) =
( P.Definition value (G.Description <$> description) Nothing [] P.EnumValueInfo,
BigQuery.StringValue $ G.unName value
)
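-- | Make a scalar parser match the column's nullability, mapping a
-- parsed null to 'BigQuery.NullValue'.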
bqPossiblyNullable ::
MonadParse m =>
G.Nullability ->
Parser 'Both m (ScalarValue 'BigQuery) ->
Parser 'Both m (ScalarValue 'BigQuery)
bqPossiblyNullable (G.Nullability isNullable)
| isNullable = fmap (fromMaybe BigQuery.NullValue) . P.nullable
| otherwise = id
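-- | The @order_by@ enum for BigQuery. Each direction is paired with an
-- explicit nulls placement: plain @asc@ defaults to nulls-first and plain
-- @desc@ to nulls-last, alongside the four explicit
-- @{asc,desc}_nulls_{first,last}@ variants.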
bqOrderByOperators ::
NamingCase ->
( G.Name,
NonEmpty
( P.Definition P.EnumValueInfo,
(BasicOrderType 'BigQuery, NullsOrderType 'BigQuery)
)
)
bqOrderByOperators _tCase =
(Name._order_by,) $
-- NOTE: NamingCase is not being used here as we don't support naming conventions for this DB
NE.fromList
[ ( define Name._asc "in ascending order, nulls first",
(BigQuery.AscOrder, BigQuery.NullsFirst)
),
( define Name._asc_nulls_first "in ascending order, nulls first",
(BigQuery.AscOrder, BigQuery.NullsFirst)
),
( define Name._asc_nulls_last "in ascending order, nulls last",
(BigQuery.AscOrder, BigQuery.NullsLast)
),
( define Name._desc "in descending order, nulls last",
(BigQuery.DescOrder, BigQuery.NullsLast)
),
( define Name._desc_nulls_first "in descending order, nulls first",
(BigQuery.DescOrder, BigQuery.NullsFirst)
),
( define Name._desc_nulls_last "in descending order, nulls last",
(BigQuery.DescOrder, BigQuery.NullsLast)
)
]
where
define name desc = P.Definition name (Just desc) Nothing [] P.EnumValueInfo
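-- | Build the comparison-expression input object for a BigQuery column
-- type, e.g. @String_BigQuery_comparison_exp@. Equality and ordering
-- operators are omitted for GEOGRAPHY columns (BigQuery requires
-- @ST_Equals@ instead); string and bytes columns additionally get
-- @_like@ and @_nlike@, and geography columns get the @_st_*@ operators.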
bqComparisonExps ::
forall m n r.
(MonadBuildSchema 'BigQuery r m n) =>
ColumnType 'BigQuery ->
SchemaT r m (Parser 'Input n [ComparisonExp 'BigQuery])
bqComparisonExps = P.memoize 'comparisonExps $ \columnType -> do
collapseIfNull <- retrieve Options.soDangerousBooleanCollapse
dWithinGeogOpParser <- geographyWithinDistanceInput
tCase <- asks getter
-- see Note [Columns in comparison expression are never nullable]
typedParser <- columnParser columnType (G.Nullability False)
-- textParser <- columnParser (ColumnScalar @'BigQuery BigQuery.StringScalarType) (G.Nullability False)
let name = P.getName typedParser <> Name.__BigQuery_comparison_exp
desc =
G.Description $
"Boolean expression to compare columns of type "
<> P.getName typedParser
<<> ". All fields are combined with logical 'AND'."
-- textListParser = fmap openValueOrigin <$> P.list textParser
columnListParser = fmap IR.openValueOrigin <$> P.list typedParser
mkListLiteral :: [ColumnValue 'BigQuery] -> IR.UnpreparedValue 'BigQuery
mkListLiteral =
IR.UVLiteral . BigQuery.ListExpression . fmap (BigQuery.ValueExpression . cvValue)
pure $
P.object name (Just desc) $
fmap catMaybes $
sequenceA $
concat
[ -- from https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types:
-- GEOGRAPHY comparisons are not supported. To compare GEOGRAPHY values, use ST_Equals.
guard (isScalarColumnWhere (/= BigQuery.GeographyScalarType) columnType)
*> equalityOperators
tCase
collapseIfNull
(IR.mkParameter <$> typedParser)
(mkListLiteral <$> columnListParser),
guard (isScalarColumnWhere (/= BigQuery.GeographyScalarType) columnType)
*> comparisonOperators
tCase
collapseIfNull
(IR.mkParameter <$> typedParser),
-- Ops for String type
guard (isScalarColumnWhere (== BigQuery.StringScalarType) columnType)
*> [ mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__like)
(Just "does the column match the given pattern")
(ALIKE . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__nlike)
(Just "does the column NOT match the given pattern")
(ANLIKE . IR.mkParameter <$> typedParser)
],
-- Ops for Bytes type
guard (isScalarColumnWhere (== BigQuery.BytesScalarType) columnType)
*> [ mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__like)
(Just "does the column match the given pattern")
(ALIKE . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedName Name.__nlike)
(Just "does the column NOT match the given pattern")
(ANLIKE . IR.mkParameter <$> typedParser)
],
-- Ops for Geography type
guard (isScalarColumnWhere (== BigQuery.GeographyScalarType) columnType)
*> [ mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "contains"]))
(Just "does the column contain the given geography value")
(ABackendSpecific . BigQuery.ASTContains . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "equals"]))
(Just "is the column equal to given geography value (directionality is ignored)")
(ABackendSpecific . BigQuery.ASTEquals . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "touches"]))
(Just "does the column have at least one point in common with the given geography value")
(ABackendSpecific . BigQuery.ASTTouches . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "within"]))
(Just "is the column contained in the given geography value")
(ABackendSpecific . BigQuery.ASTWithin . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "intersects"]))
(Just "does the column spatially intersect the given geography value")
(ABackendSpecific . BigQuery.ASTIntersects . IR.mkParameter <$> typedParser),
mkBoolOperator
tCase
collapseIfNull
(C.fromAutogeneratedTuple $$(G.litGQLIdentifier ["_st", "d", "within"]))
(Just "is the column within a given distance from the given geometry value")
(ABackendSpecific . BigQuery.ASTDWithin <$> dWithinGeogOpParser)
]
]
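-- | Parse the optional @columns@ argument of an aggregate count field,
-- combining it with distinctness into a BigQuery count type; without
-- columns the count falls back to @COUNT(*)@.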
bqCountTypeInput ::
MonadParse n =>
Maybe (Parser 'Both n (Column 'BigQuery)) ->
InputFieldsParser n (IR.CountDistinct -> CountType 'BigQuery)
bqCountTypeInput = \case
Just columnEnum -> do
columns <- P.fieldOptional Name._columns Nothing $ P.list columnEnum
pure $ flip mkCountType columns
Nothing -> pure $ flip mkCountType Nothing
where
mkCountType :: IR.CountDistinct -> Maybe [Column 'BigQuery] -> CountType 'BigQuery
mkCountType _ Nothing = BigQuery.StarCountable
mkCountType IR.SelectCountDistinct (Just cols) =
maybe BigQuery.StarCountable BigQuery.DistinctCountable $ nonEmpty cols
mkCountType IR.SelectCountNonDistinct (Just cols) =
maybe BigQuery.StarCountable BigQuery.NonNullFieldCountable $ nonEmpty cols
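-- | Input object for the @_st_dwithin@ operator: a required @distance@
-- and @from@ geography, plus a @use_spheroid@ flag that defaults to false.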
geographyWithinDistanceInput ::
forall m n r.
MonadBuildSchema 'BigQuery r m n =>
SchemaT r m (Parser 'Input n (DWithinGeogOp (IR.UnpreparedValue 'BigQuery)))
geographyWithinDistanceInput = do
geographyParser <- columnParser (ColumnScalar BigQuery.GeographyScalarType) (G.Nullability False)
  -- in practice, BigQuery (as of 2021-11-19) does not support TRUE as the use_spheroid parameter for ST_DWITHIN
booleanParser <- columnParser (ColumnScalar BigQuery.BoolScalarType) (G.Nullability True)
floatParser <- columnParser (ColumnScalar BigQuery.FloatScalarType) (G.Nullability False)
pure $
P.object Name._st_dwithin_input Nothing $
DWithinGeogOp <$> (IR.mkParameter <$> P.field Name._distance Nothing floatParser)
<*> (IR.mkParameter <$> P.field Name._from Nothing geographyParser)
<*> (IR.mkParameter <$> P.fieldWithDefault Name._use_spheroid Nothing (G.VBoolean False) booleanParser)
-- | Computed field parser.
bqComputedField ::
forall r m n.
MonadBuildSchema 'BigQuery r m n =>
SourceInfo 'BigQuery ->
ComputedFieldInfo 'BigQuery ->
TableName 'BigQuery ->
TableInfo 'BigQuery ->
SchemaT r m (Maybe (FieldParser n (AnnotatedField 'BigQuery)))
bqComputedField sourceName ComputedFieldInfo {..} tableName tableInfo = runMaybeT do
stringifyNumbers <- retrieve Options.soStringifyNumbers
roleName <- retrieve scRole
fieldName <- lift $ textToName $ computedFieldNameToText _cfiName
functionArgsParser <- lift $ computedFieldFunctionArgs _cfiFunction
case _cfiReturnType of
BigQuery.ReturnExistingTable returnTable -> do
returnTableInfo <- lift $ askTableInfo sourceName returnTable
returnTablePermissions <- hoistMaybe $ tableSelectPermissions roleName returnTableInfo
selectionSetParser <- MaybeT (fmap (P.multiple . P.nonNullableParser) <$> tableSelectionSet sourceName returnTableInfo)
selectArgsParser <- lift $ tableArguments sourceName returnTableInfo
let fieldArgsParser = liftA2 (,) functionArgsParser selectArgsParser
pure $
P.subselection fieldName fieldDescription fieldArgsParser selectionSetParser
<&> \((functionArgs', args), fields) ->
IR.AFComputedField _cfiXComputedFieldInfo _cfiName $
IR.CFSTable JASMultipleRows $
IR.AnnSelectG
{ IR._asnFields = fields,
IR._asnFrom = IR.FromFunction (_cffName _cfiFunction) functionArgs' Nothing,
IR._asnPerm = tablePermissionsInfo returnTablePermissions,
IR._asnArgs = args,
IR._asnStrfyNum = stringifyNumbers,
IR._asnNamingConvention = Nothing
}
BigQuery.ReturnTableSchema returnFields -> do
-- Check if the computed field is available in the select permission
selectPermissions <- hoistMaybe $ tableSelectPermissions roleName tableInfo
guard $ Map.member _cfiName $ spiComputedFields selectPermissions
objectTypeName <-
mkTypename =<< do
computedFieldGQLName <- textToName $ computedFieldNameToText _cfiName
pure $ computedFieldGQLName <> Name.__ <> Name.__fields
selectionSetParser <- do
fieldParsers <- lift $ for returnFields selectArbitraryField
        let description = G.Description $ "column fields returned by " <>> _cfiName
pure $
P.selectionSetObject objectTypeName (Just description) fieldParsers []
<&> parsedSelectionsToFields IR.AFExpression
pure $
P.subselection fieldName fieldDescription functionArgsParser selectionSetParser
<&> \(functionArgs', fields) ->
IR.AFComputedField _cfiXComputedFieldInfo _cfiName $
IR.CFSTable JASMultipleRows $
IR.AnnSelectG
{ IR._asnFields = fields,
IR._asnFrom = IR.FromFunction (_cffName _cfiFunction) functionArgs' Nothing,
IR._asnPerm = IR.noTablePermissions,
IR._asnArgs = IR.noSelectArgs,
IR._asnStrfyNum = stringifyNumbers,
IR._asnNamingConvention = Nothing
}
where
fieldDescription :: Maybe G.Description
fieldDescription = G.Description <$> _cfiDescription
selectArbitraryField ::
(BigQuery.ColumnName, G.Name, BigQuery.ScalarType) ->
SchemaT r m (FieldParser n (AnnotatedField 'BigQuery))
selectArbitraryField (columnName, graphQLName, columnType) = do
field <- columnParser @'BigQuery (ColumnScalar columnType) (G.Nullability True)
pure $
P.selection_ graphQLName Nothing field
$> IR.mkAnnColumnField columnName (ColumnScalar columnType) Nothing Nothing
computedFieldFunctionArgs ::
ComputedFieldFunction 'BigQuery ->
SchemaT r m (InputFieldsParser n (FunctionArgsExp 'BigQuery (IR.UnpreparedValue 'BigQuery)))
computedFieldFunctionArgs ComputedFieldFunction {..} = do
objectName <-
mkTypename =<< do
computedFieldGQLName <- textToName $ computedFieldNameToText _cfiName
tableGQLName <- getTableGQLName @'BigQuery tableInfo
pure $ computedFieldGQLName <> Name.__ <> tableGQLName <> Name.__args
let userInputArgs = filter (not . flip Map.member _cffComputedFieldImplicitArgs . BigQuery._faName) (toList _cffInputArgs)
argumentParsers <- sequenceA <$> forM userInputArgs parseArgument
let userArgsParser = P.object objectName Nothing argumentParsers
let fieldDesc =
G.Description $
"input parameters for computed field "
<> _cfiName <<> " defined on table " <>> tableName
argsField
| null userInputArgs = P.fieldOptional Name._args (Just fieldDesc) userArgsParser
| otherwise = Just <$> P.field Name._args (Just fieldDesc) userArgsParser
pure $
argsField `P.bindFields` \maybeInputArguments -> do
let tableColumnInputs = Map.map BigQuery.AETableColumn $ Map.mapKeys getFuncArgNameTxt _cffComputedFieldImplicitArgs
pure $ FunctionArgsExp mempty $ maybe mempty Map.fromList maybeInputArguments <> tableColumnInputs
parseArgument :: BigQuery.FunctionArgument -> SchemaT r m (InputFieldsParser n (Text, BigQuery.ArgumentExp (IR.UnpreparedValue 'BigQuery)))
parseArgument arg = do
typedParser <- columnParser (ColumnScalar $ BigQuery._faType arg) (G.Nullability False)
let argumentName = getFuncArgNameTxt $ BigQuery._faName arg
fieldName <- textToName argumentName
let argParser = P.field fieldName Nothing typedParser
      pure $ argParser `P.bindFields` \inputValue -> pure (argumentName, BigQuery.AEInput $ IR.mkParameter inputValue)