server: refactor MonadSchema into MonadMemoize

Followup to hasura/graphql-engine-mono#4713.

The `memoizeOn` method, part of `MonadSchema`, originally had the following type:
```haskell
  memoizeOn
    :: (HasCallStack, Ord a, Typeable a, Typeable b, Typeable k)
    => TH.Name
    -> a
    -> m (Parser k n b)
    -> m (Parser k n b)
```
The reason for operating on `Parser`s specifically was that the `MonadSchema` effect would additionally initialize certain `Unique` values, which appear (nested) in the type of `Parser`.

hasura/graphql-engine-mono#518 changed the type of `memoizeOn` to additionally allow memoizing `FieldParser`s. These also contained a `Unique` value, which was similarly initialized by the `MonadSchema` effect. The new type of `memoizeOn` was as follows:
```haskell
  memoizeOn
    :: forall p d a b
     . (HasCallStack, HasDefinition (p n b) d, Ord a, Typeable p, Typeable a, Typeable b)
    => TH.Name
    -> a
    -> m (p n b)
    -> m (p n b)
```

Note the type `p n b` of the value being memoized: by choosing `p` to be either `Parser k` or `FieldParser`, both can be memoized. Also note the new `HasDefinition (p n b) d` constraint, which provided a `Lens` for accessing the `Unique` value to be initialized.

A quick simplification: the `HasCallStack` constraint was never actually used by any code, so hasura/graphql-engine-mono#4713 removed it.

hasura/graphql-engine-mono#2980 removed the `Unique` value from our GraphQL-related types entirely, as their original purpose was never truly realized. One part of removing `Unique` consisted of dropping the `HasDefinition (p n b) d` constraint from `memoizeOn`.

What I didn't realize at the time was that this allowed the type of `memoizeOn` to be generalized and simplified much further. This PR finally implements that generalization. The new type is as follows:
```haskell
  memoizeOn ::
    forall a p.
    (Ord a, Typeable a, Typeable p) =>
    TH.Name ->
    a ->
    m p ->
    m p
```
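Instantiating `p` recovers both of the earlier use cases; the documentation of the new module records the two specializations as follows (eliding the `Ord`/`Typeable` constraints):

```haskell
  memoizeOn :: (MonadMemoize m, MonadParse n) => TH.Name -> a -> m (Parser n b) -> m (Parser n b)
  memoizeOn :: (MonadMemoize m, MonadParse n) => TH.Name -> a -> m (FieldParser n b) -> m (FieldParser n b)
```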

This change has a few consequences.

1. While constructing the schema, we often output `Maybe (Parser ...)`, to model that the existence of certain pieces of GraphQL schema sometimes depends on the permissions that a certain role has. The previous versions of `memoizeOn` could not handle this, as the only thing they could memoize was fully-defined (if not yet fully-evaluated) `(Field)Parser`s. This much more general API _would_ allow memoizing `Maybe (Parser ...)`s. However, we probably have to continue being cautious here: if we blindly memoize all `Maybe (Parser ...)`s, the resulting code may never be able to decide whether the value is `Just` or `Nothing` - i.e. it never commits to the existence-or-not of a GraphQL schema fragment. This would manifest as non-well-founded knot-tying, which the implementation of `memoizeOn` would report as an error.

   tl;dr: This generalization _technically_ allows for memoizing `Maybe` values, but we probably still want to avoid doing so.

   For this reason, the PR adds a specialized version of `memoizeOn` to `Hasura.GraphQL.Schema.Parser`.
2. There is no longer any need to connect the `MonadSchema` knot-tying effect with the `MonadParse` effect. In fact, after this PR, the `memoizeOn` method is completely GraphQL-agnostic, and so we implement hasura/graphql-engine-mono#4726, separating `memoizeOn` from `MonadParse` entirely - `memoizeOn` can be defined and implemented as a general Haskell typeclass method.

   Since `MonadSchema` has been made into a single-type-parameter type class, it has been renamed to something more general, namely `MonadMemoize`. Its only task is to memoize arbitrary `Typeable p` objects under a combined key consisting of a `TH.Name` and a `Typeable a`.

   Also for this reason, the new `MonadMemoize` has been moved to the more general `Control.Monad.Memoize`.
3. After this change, it's somewhat clearer what `memoizeOn` does: it memoizes an arbitrary value of a `Typeable` type. The only thing that needs to be understood in its implementation is how the manual blackholing works. There is no more semantic interaction with _any_ GraphQL code.
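   The core of that implementation can be sketched in miniature, with no GraphQL involved. The toy memoizer below (all names are illustrative, not part of this PR) follows the same scheme as `MemoizeT`: before running the builder, it caches a lazy read of a mutable cell, so recursive calls share the eventual result, and forcing that result before construction finishes hits the manual blackhole:

```haskell
{-# LANGUAGE LambdaCase #-}

import Data.IORef
import qualified Data.Map as M -- NB: the lazy Map, so cached thunks are not forced on insert
import System.IO.Unsafe (unsafeInterleaveIO)

-- A node in a cyclic graph, standing in for a memoized parser.
data Node = Node {nodeLabel :: String, nodeNext :: Node}

-- Toy analogue of 'memoizeOn': before running the builder, cache a lazy
-- read of a mutable cell; fill the cell once the builder finishes. A
-- recursive call that forces its own result too early hits the 'error',
-- which is the manual blackhole.
memoizeOn :: IORef (M.Map String Node) -> String -> IO Node -> IO Node
memoizeOn cacheRef key build = do
  cache <- readIORef cacheRef
  case M.lookup key cache of
    Just node -> pure node
    Nothing -> do
      cell <- newIORef Nothing
      lazyNode <-
        unsafeInterleaveIO $
          readIORef cell >>= \case
            Just node -> pure node
            Nothing -> error ("memoizeOn: " ++ key ++ " forced before construction finished")
      modifyIORef' cacheRef (M.insert key lazyNode)
      node <- build
      writeIORef cell (Just node)
      pure node

-- Two mutually recursive builders, like the author/article types in
-- Note [Tying the knot].
demo :: IO String
demo = do
  cacheRef <- newIORef M.empty
  let node name mkNext = memoizeOn cacheRef name (Node name <$> mkNext)
      author = node "author" article
      article = node "article" author
  a <- author
  -- Following the cycle twice lands back on the shared "author" node.
  pure (nodeLabel (nodeNext (nodeNext a)))
```

   Running `demo` builds each node exactly once; following the `author → article → author` cycle reaches the shared node rather than recursing forever. Note the use of the lazy `Data.Map`: a strict map would force the cached thunk on insert and trip the blackhole immediately.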

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4725
Co-authored-by: Daniel Harvey <4729125+danieljharvey@users.noreply.github.com>
GitOrigin-RevId: 089fa2e82c2ce29da76850e994eabb1e261f9c92
This commit is contained in:
Auke Booij 2022-08-04 15:44:14 +02:00 committed by hasura-bot
parent 1f7eed1bc5
commit 1007ea27ae
34 changed files with 371 additions and 392 deletions


@ -395,6 +395,7 @@ library
, Control.Arrow.Trans
, Control.Concurrent.Extended
, Control.Monad.Circular
, Control.Monad.Memoize
, Control.Monad.Stateless
, Control.Monad.Trans.Managed
, Control.Monad.Unique
@ -797,7 +798,6 @@ library
, Hasura.GraphQL.ParameterizedQueryHash
, Hasura.GraphQL.Parser
, Hasura.GraphQL.Parser.Class
, Hasura.GraphQL.Parser.Class.Parse
, Hasura.GraphQL.Parser.Collect
, Hasura.GraphQL.Parser.DirectiveName
, Hasura.GraphQL.Parser.Directives


@ -0,0 +1,219 @@
module Control.Monad.Memoize
( MonadMemoize (..),
memoize,
MemoizeT,
runMemoizeT,
)
where
import Control.Monad.Except
import Control.Monad.Reader (MonadReader, ReaderT, mapReaderT)
import Control.Monad.State.Strict (MonadState (..), StateT, evalStateT)
import Data.Dependent.Map (DMap)
import Data.Dependent.Map qualified as DM
import Data.Functor.Identity
import Data.GADT.Compare.Extended
import Data.IORef
import Data.Kind qualified as K
import Language.Haskell.TH qualified as TH
import System.IO.Unsafe (unsafeInterleaveIO)
import Type.Reflection (Typeable, typeRep, (:~:) (..))
import Prelude
{- Note [Tying the knot]
~~~~~~~~~~~~~~~~~~~~~~~~
GraphQL type definitions can be mutually recursive, and indeed, they quite often
are! For example, two tables that reference one another will be represented by
types such as the following:
type author {
id: Int!
name: String!
articles: [article!]!
}
type article {
id: Int!
title: String!
content: String!
author: author!
}
This doesn't cause any trouble if the schema is represented by a mapping from
type names to type definitions, but the Parser abstraction is all about avoiding
that kind of indirection to improve type safety: parsers refer to their
sub-parsers directly. This presents two problems during schema generation:
1. Schema generation needs to terminate in finite time, so we need to ensure
we don't try to eagerly construct an infinitely-large schema due to the
mutually-recursive structure.
2. To serve introspection queries, we do eventually need to construct a
mapping from names to types (a TypeMap), so we need to be able to
recursively walk the entire schema in finite time.
Solving point number 1 could be done with either laziness or sharing, but
neither of those are enough to solve point number 2, which requires /observable/
sharing. We need to construct a Parser graph that contains enough information to
detect cycles during traversal.
It may seem appealing to just use type names to detect cycles, which would allow
us to get away with using laziness rather than true sharing. Unfortunately, that
leads to two further problems:
* It's possible to end up with two different types with the same name, which
is an error and should be reported as such. Using names to break cycles
prevents us from doing that, since we have no way to check that two types
with the same name are actually the same.
* Some Parser constructors can fail: the `column` parser checks that the type
name is a valid GraphQL name, for example. This extra validation means lazy
schema construction isn't viable, since we need to eagerly build the schema
to ensure all the validation checks hold.
So we're forced to use sharing. But how do we do it? Somehow, we have to /tie
the knot/: we have to build a cyclic data structure, and some of the cycles
may be quite large. Doing all this knot-tying by hand would be incredibly
tricky, and it would require a lot of inversion of control to thread the shared
parsers around.
To avoid contorting the program, we instead implement a form of memoization. The
MonadMemoize class provides a mechanism to memoize a parser constructor function,
which allows us to get sharing mostly for free. The memoization strategy also
annotates cached parsers with a Unique that can be used to break cycles while
traversing the graph, so we get observable sharing as well. -}
class Monad m => MonadMemoize m where
-- | Memoizes a parser constructor function for the extent of a single schema
-- construction process. This is mostly useful for recursive parsers;
-- see Note [Tying the knot] for more details.
--
-- The generality of the type here allows us to use this with multiple concrete
-- parser types:
--
-- @
-- 'memoizeOn' :: ('MonadMemoize' m, MonadParse n) => 'TH.Name' -> a -> m (Parser n b) -> m (Parser n b)
-- 'memoizeOn' :: ('MonadMemoize' m, MonadParse n) => 'TH.Name' -> a -> m (FieldParser n b) -> m (FieldParser n b)
-- @
memoizeOn ::
forall a p.
(Ord a, Typeable a, Typeable p) =>
-- | A unique name used to identify the function being memoized. There isn't
-- really any metaprogramming going on here, we just use a Template Haskell
-- 'TH.Name' as a convenient source for a static, unique identifier.
TH.Name ->
-- | The value to use as the memoization key. It's the caller's
-- responsibility to ensure multiple calls to the same function don't use
-- the same key.
a ->
m p ->
m p
instance
MonadMemoize m =>
MonadMemoize (ReaderT a m)
where
memoizeOn name key = mapReaderT (memoizeOn name key)
-- | A wrapper around 'memoizeOn' that memoizes a function by using its argument
-- as the key.
memoize ::
(MonadMemoize m, Ord a, Typeable a, Typeable p) =>
TH.Name ->
(a -> m p) ->
(a -> m p)
memoize name f a = memoizeOn name a (f a)
newtype MemoizeT m a = MemoizeT
{ unMemoizeT :: StateT (DMap MemoizationKey Identity) m a
}
deriving (Functor, Applicative, Monad, MonadError e, MonadReader r)
runMemoizeT :: forall m a. Monad m => MemoizeT m a -> m a
runMemoizeT = flip evalStateT mempty . unMemoizeT
-- | see Note [MemoizeT requires MonadIO]
instance
MonadIO m =>
MonadMemoize (MemoizeT m)
where
memoizeOn name key buildParser = MemoizeT do
let parserId = MemoizationKey name key
parsersById <- get
case DM.lookup parserId parsersById of
Just (Identity parser) -> pure parser
Nothing -> do
-- We manually do eager blackholing here using a MutVar rather than
-- relying on MonadFix and ordinary thunk blackholing. Why? A few
-- reasons:
--
-- 1. We have more control. We aren't at the whims of whatever
-- MonadFix instance happens to get used.
--
-- 2. We can be more precise. GHC's lazy blackholing doesn't always
-- kick in when you'd expect.
--
-- 3. We can provide more useful error reporting if things go wrong.
-- Most usefully, we can include a HasCallStack source location.
cell <- liftIO $ newIORef Nothing
-- We use unsafeInterleaveIO here, which sounds scary, but
-- unsafeInterleaveIO is actually far more safe than unsafePerformIO.
-- unsafeInterleaveIO just defers the execution of the action until its
-- result is needed, adding some laziness.
--
-- That laziness can be dangerous if the action has side-effects, since
-- the point at which the effect is performed can be unpredictable. But
-- this action just reads, never writes, so that isn't a concern.
parserById <-
liftIO $
unsafeInterleaveIO $
readIORef cell >>= \case
Just parser -> pure $ Identity parser
Nothing ->
error $
unlines
[ "memoize: parser was forced before being fully constructed",
" parser constructor: " ++ TH.pprint name
]
put $! DM.insert parserId parserById parsersById
parser <- unMemoizeT buildParser
liftIO $ writeIORef cell (Just parser)
pure parser
{- Note [MemoizeT requires MonadIO]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The MonadMemoize instance for MemoizeT requires MonadIO, which is unsatisfying.
The only reason the constraint is needed is to implement knot-tying via IORefs
(see Note [Tying the knot] above), which really only requires the power of
ST. Alternatively, it might be possible to use the ST monad instead, but that
has not been done for historical reasons.
-}
-- | A key used to distinguish calls to 'memoize'd functions. The 'TH.Name'
-- distinguishes calls to completely different parsers, and the @a@ value
-- records the arguments.
data MemoizationKey (t :: K.Type) where
MemoizationKey :: (Ord a, Typeable a, Typeable p) => TH.Name -> a -> MemoizationKey p
instance GEq MemoizationKey where
geq
(MemoizationKey name1 (arg1 :: a1) :: MemoizationKey t1)
(MemoizationKey name2 (arg2 :: a2) :: MemoizationKey t2)
| name1 == name2,
Just Refl <- typeRep @a1 `geq` typeRep @a2,
arg1 == arg2,
Just Refl <- typeRep @t1 `geq` typeRep @t2 =
Just Refl
| otherwise = Nothing
instance GCompare MemoizationKey where
gcompare
(MemoizationKey name1 (arg1 :: a1) :: MemoizationKey t1)
(MemoizationKey name2 (arg2 :: a2) :: MemoizationKey t2) =
strengthenOrdering (compare name1 name2)
`extendGOrdering` gcompare (typeRep @a1) (typeRep @a2)
`extendGOrdering` strengthenOrdering (compare arg1 arg2)
`extendGOrdering` gcompare (typeRep @t1) (typeRep @t2)
`extendGOrdering` GEQ


@ -25,8 +25,8 @@ import Hasura.GraphQL.Schema.Parser
( FieldParser,
InputFieldsParser,
Kind (..),
MonadMemoize,
MonadParse,
MonadSchema,
Parser,
)
import Hasura.GraphQL.Schema.Parser qualified as P
@ -95,7 +95,7 @@ bqBuildTableRelayQueryFields ::
TableInfo 'BigQuery ->
C.GQLNameIdentifier ->
NESeq (ColumnInfo 'BigQuery) ->
m [a]
m [P.FieldParser n a]
bqBuildTableRelayQueryFields _mkRootFieldName _sourceName _tableName _tableInfo _gqlName _pkeyColumns =
pure []
@ -107,7 +107,7 @@ bqBuildTableInsertMutationFields ::
TableName 'BigQuery ->
TableInfo 'BigQuery ->
C.GQLNameIdentifier ->
m [a]
m [P.FieldParser n a]
bqBuildTableInsertMutationFields _mkRootFieldName _scenario _sourceName _tableName _tableInfo _gqlName =
pure []
@ -119,7 +119,7 @@ bqBuildTableUpdateMutationFields ::
TableName 'BigQuery ->
TableInfo 'BigQuery ->
C.GQLNameIdentifier ->
m [a]
m [P.FieldParser n a]
bqBuildTableUpdateMutationFields _mkRootFieldName _scenario _sourceName _tableName _tableInfo _gqlName =
pure []
@ -131,7 +131,7 @@ bqBuildTableDeleteMutationFields ::
TableName 'BigQuery ->
TableInfo 'BigQuery ->
C.GQLNameIdentifier ->
m [a]
m [P.FieldParser n a]
bqBuildTableDeleteMutationFields _mkRootFieldName _scenario _sourceName _tableName _tableInfo _gqlName =
pure []
@ -142,7 +142,7 @@ bqBuildFunctionQueryFields ::
FunctionName 'BigQuery ->
FunctionInfo 'BigQuery ->
TableName 'BigQuery ->
m [a]
m [P.FieldParser n a]
bqBuildFunctionQueryFields _ _ _ _ _ =
pure []
@ -154,7 +154,7 @@ bqBuildFunctionRelayQueryFields ::
FunctionInfo 'BigQuery ->
TableName 'BigQuery ->
NESeq (ColumnInfo 'BigQuery) ->
m [a]
m [P.FieldParser n a]
bqBuildFunctionRelayQueryFields _mkRootFieldName _sourceName _functionName _functionInfo _tableName _pkeyColumns =
pure []
@ -165,7 +165,7 @@ bqBuildFunctionMutationFields ::
FunctionName 'BigQuery ->
FunctionInfo 'BigQuery ->
TableName 'BigQuery ->
m [a]
m [P.FieldParser n a]
bqBuildFunctionMutationFields _ _ _ _ _ =
pure []
@ -173,7 +173,7 @@ bqBuildFunctionMutationFields _ _ _ _ _ =
-- Individual components
bqColumnParser ::
(MonadSchema n m, MonadError QErr m, MonadReader r m, Has MkTypename r) =>
(MonadParse n, MonadError QErr m, MonadReader r m, Has MkTypename r) =>
ColumnType 'BigQuery ->
G.Nullability ->
m (Parser 'Both n (IR.ValueWithOrigin (ColumnValue 'BigQuery)))
@ -283,7 +283,6 @@ bqComparisonExps = P.memoize 'comparisonExps $ \columnType -> do
tCase <- asks getter
-- see Note [Columns in comparison expression are never nullable]
typedParser <- columnParser columnType (G.Nullability False)
_nullableTextParser <- columnParser (ColumnScalar @'BigQuery BigQuery.StringScalarType) (G.Nullability True)
-- textParser <- columnParser (ColumnScalar @'BigQuery BigQuery.StringScalarType) (G.Nullability False)
let name = P.getName typedParser <> Name.__BigQuery_comparison_exp
desc =
@ -404,7 +403,7 @@ bqCountTypeInput = \case
geographyWithinDistanceInput ::
forall m n r.
(MonadSchema n m, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
(MonadMemoize m, MonadParse n, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
m (Parser 'Input n (DWithinGeogOp (IR.UnpreparedValue 'BigQuery)))
geographyWithinDistanceInput = do
geographyParser <- columnParser (ColumnScalar BigQuery.GeographyScalarType) (G.Nullability False)


@ -87,12 +87,12 @@ experimentalBuildTableRelayQueryFields ::
RQL.TableInfo 'DataConnector ->
GQLNameIdentifier ->
NESeq (RQL.ColumnInfo 'DataConnector) ->
m [a]
m [P.FieldParser n a]
experimentalBuildTableRelayQueryFields _mkRootFieldName _sourceName _tableName _tableInfo _gqlName _pkeyColumns =
pure []
columnParser' ::
(MonadSchema n m, MonadError QErr m) =>
(MonadParse n, MonadError QErr m) =>
RQL.ColumnType 'DataConnector ->
GQL.Nullability ->
m (P.Parser 'P.Both n (IR.ValueWithOrigin (RQL.ColumnValue 'DataConnector)))
@ -132,7 +132,8 @@ orderByOperators' RQL.SourceInfo {_siConfiguration} _tCase =
comparisonExps' ::
forall m n r.
( BackendSchema 'DataConnector,
MonadSchema n m,
P.MonadMemoize m,
MonadParse n,
MonadError QErr m,
MonadReader r m,
Has SchemaOptions r,
@ -145,15 +146,12 @@ comparisonExps' = P.memoize 'comparisonExps' $ \columnType -> do
collapseIfNull <- GS.C.retrieve Options.soDangerousBooleanCollapse
typedParser <- columnParser' columnType (GQL.Nullability False)
nullableTextParser <- columnParser' (RQL.ColumnScalar IR.S.T.String) (GQL.Nullability True)
textParser <- columnParser' (RQL.ColumnScalar IR.S.T.String) (GQL.Nullability False)
let name = P.getName typedParser <> $$(GQL.litName "_Dynamic_comparison_exp")
desc =
GQL.Description $
"Boolean expression to compare columns of type "
<> P.getName typedParser
<<> ". All fields are combined with logical 'AND'."
textListParser = fmap IR.openValueOrigin <$> P.list textParser
columnListParser = fmap IR.openValueOrigin <$> P.list typedParser
pure $
P.object name (Just desc) $


@ -30,8 +30,8 @@ import Hasura.GraphQL.Schema.Parser
( FieldParser,
InputFieldsParser,
Kind (..),
MonadMemoize,
MonadParse,
MonadSchema,
Parser,
)
import Hasura.GraphQL.Schema.Parser qualified as P
@ -106,7 +106,7 @@ msBuildTableRelayQueryFields ::
TableInfo 'MSSQL ->
C.GQLNameIdentifier ->
NESeq (ColumnInfo 'MSSQL) ->
m [a]
m [P.FieldParser n a]
msBuildTableRelayQueryFields _mkRootFieldName _sourceName _tableName _tableInfo _gqlName _pkeyColumns =
pure []
@ -162,7 +162,7 @@ msBuildFunctionQueryFields ::
FunctionName 'MSSQL ->
FunctionInfo 'MSSQL ->
TableName 'MSSQL ->
m [a]
m [P.FieldParser n a]
msBuildFunctionQueryFields _ _ _ _ _ =
pure []
@ -174,7 +174,7 @@ msBuildFunctionRelayQueryFields ::
FunctionInfo 'MSSQL ->
TableName 'MSSQL ->
NESeq (ColumnInfo 'MSSQL) ->
m [a]
m [P.FieldParser n a]
msBuildFunctionRelayQueryFields _mkRootFieldName _sourceName _functionName _functionInfo _tableName _pkeyColumns =
pure []
@ -185,7 +185,7 @@ msBuildFunctionMutationFields ::
FunctionName 'MSSQL ->
FunctionInfo 'MSSQL ->
TableName 'MSSQL ->
m [a]
m [P.FieldParser n a]
msBuildFunctionMutationFields _ _ _ _ _ =
pure []
@ -241,7 +241,7 @@ msMkRelationshipParser _sourceName _relationshipInfo = do
-- * Individual components
msColumnParser ::
(MonadSchema n m, MonadError QErr m, MonadReader r m, Has MkTypename r) =>
(MonadParse n, MonadError QErr m, MonadReader r m, Has MkTypename r) =>
ColumnType 'MSSQL ->
G.Nullability ->
m (Parser 'Both n (ValueWithOrigin (ColumnValue 'MSSQL)))
@ -340,7 +340,8 @@ msOrderByOperators _tCase =
msComparisonExps ::
forall m n r.
( BackendSchema 'MSSQL,
MonadSchema n m,
MonadMemoize m,
MonadParse n,
MonadError QErr m,
MonadReader r m,
Has SchemaOptions r,
@ -355,10 +356,7 @@ msComparisonExps = P.memoize 'comparisonExps \columnType -> do
-- parsers used for individual values
typedParser <- columnParser columnType (G.Nullability False)
_nullableTextParser <- columnParser (ColumnScalar @'MSSQL MSSQL.VarcharType) (G.Nullability True)
textParser <- columnParser (ColumnScalar @'MSSQL MSSQL.VarcharType) (G.Nullability False)
let columnListParser = fmap openValueOrigin <$> P.list typedParser
_textListParser = fmap openValueOrigin <$> P.list textParser
-- field info
let name = P.getName typedParser <> Name.__MSSQL_comparison_exp


@ -22,8 +22,8 @@ import Hasura.GraphQL.Schema.NamingCase
import Hasura.GraphQL.Schema.Parser
( InputFieldsParser,
Kind (..),
MonadMemoize,
MonadParse,
MonadSchema,
Parser,
)
import Hasura.GraphQL.Schema.Parser qualified as P
@ -100,7 +100,7 @@ buildTableRelayQueryFields' ::
TableInfo 'MySQL ->
C.GQLNameIdentifier ->
NESeq (ColumnInfo 'MySQL) ->
m [a]
m [P.FieldParser n a]
buildTableRelayQueryFields' _mkRootFieldName _sourceInfo _tableName _tableInfo _gqlName _pkeyColumns =
pure []
@ -112,7 +112,7 @@ buildTableInsertMutationFields' ::
RQL.TableName 'MySQL ->
TableInfo 'MySQL ->
C.GQLNameIdentifier ->
m [a]
m [P.FieldParser n a]
buildTableInsertMutationFields' _mkRootFieldName _scenario _sourceInfo _tableName _tableInfo _gqlName =
pure []
@ -124,7 +124,7 @@ buildTableUpdateMutationFields' ::
RQL.TableName 'MySQL ->
TableInfo 'MySQL ->
C.GQLNameIdentifier ->
m [a]
m [P.FieldParser n a]
buildTableUpdateMutationFields' _mkRootFieldName _scenario _sourceInfo _tableName _tableInfo _gqlName =
pure []
@ -136,7 +136,7 @@ buildTableDeleteMutationFields' ::
RQL.TableName 'MySQL ->
TableInfo 'MySQL ->
C.GQLNameIdentifier ->
m [a]
m [P.FieldParser n a]
buildTableDeleteMutationFields' _mkRootFieldName _scenario _sourceInfo _tableName _tableInfo _gqlName =
pure []
@ -147,7 +147,7 @@ buildFunctionQueryFields' ::
FunctionName 'MySQL ->
FunctionInfo 'MySQL ->
RQL.TableName 'MySQL ->
m [a]
m [P.FieldParser n a]
buildFunctionQueryFields' _ _ _ _ _ =
pure []
@ -159,7 +159,7 @@ buildFunctionRelayQueryFields' ::
FunctionInfo 'MySQL ->
RQL.TableName 'MySQL ->
NESeq (ColumnInfo 'MySQL) ->
m [a]
m [P.FieldParser n a]
buildFunctionRelayQueryFields' _mkRootFieldName _sourceInfo _functionName _functionInfo _tableName _pkeyColumns =
pure []
@ -170,7 +170,7 @@ buildFunctionMutationFields' ::
FunctionName 'MySQL ->
FunctionInfo 'MySQL ->
RQL.TableName 'MySQL ->
m [a]
m [P.FieldParser n a]
buildFunctionMutationFields' _ _ _ _ _ =
pure []
@ -178,7 +178,7 @@ bsParser :: MonadParse m => Parser 'Both m ByteString
bsParser = encodeUtf8 <$> P.string
columnParser' ::
(MonadSchema n m, MonadError QErr m, MonadReader r m, Has MkTypename r) =>
(MonadParse n, MonadError QErr m, MonadReader r m, Has MkTypename r) =>
ColumnType 'MySQL ->
GQL.Nullability ->
m (Parser 'Both n (ValueWithOrigin (ColumnValue 'MySQL)))
@ -265,22 +265,18 @@ orderByOperators' _tCase =
-- | TODO: Make this as thorough as the one for MSSQL/PostgreSQL
comparisonExps' ::
forall m n r.
(BackendSchema 'MySQL, MonadSchema n m, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
(BackendSchema 'MySQL, MonadMemoize m, MonadParse n, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
ColumnType 'MySQL ->
m (Parser 'Input n [ComparisonExp 'MySQL])
comparisonExps' = P.memoize 'comparisonExps $ \columnType -> do
-- see Note [Columns in comparison expression are never nullable]
typedParser <- columnParser columnType (GQL.Nullability False)
_nullableTextParser <- columnParser (ColumnScalar @'MySQL MySQL.VarChar) (GQL.Nullability True)
textParser <- columnParser (ColumnScalar @'MySQL MySQL.VarChar) (GQL.Nullability False)
let name = P.getName typedParser <> Name.__MySQL_comparison_exp
desc =
GQL.Description $
"Boolean expression to compare columns of type "
<> P.getName typedParser
<<> ". All fields are combined with logical 'AND'."
_textListParser = fmap openValueOrigin <$> P.list textParser
_columnListParser = fmap openValueOrigin <$> P.list typedParser
pure $
P.object name (Just desc) $
catMaybes


@ -52,8 +52,8 @@ import Hasura.GraphQL.Schema.Parser
FieldParser,
InputFieldsParser,
Kind (..),
MonadMemoize,
MonadParse,
MonadSchema,
Parser,
memoize,
)
@ -358,7 +358,7 @@ buildFunctionRelayQueryFields mkRootFieldName sourceName functionName functionIn
-- Individual components
columnParser ::
(MonadSchema n m, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
(MonadParse n, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
ColumnType ('Postgres pgKind) ->
G.Nullability ->
m (Parser 'Both n (IR.ValueWithOrigin (ColumnValue ('Postgres pgKind))))
@ -471,7 +471,8 @@ orderByOperators tCase =
comparisonExps ::
forall pgKind m n r.
( BackendSchema ('Postgres pgKind),
MonadSchema n m,
MonadMemoize m,
MonadParse n,
MonadError QErr m,
MonadReader r m,
Has SchemaOptions r,
@ -801,7 +802,7 @@ comparisonExps = memoize 'comparisonExps \columnType -> do
geographyWithinDistanceInput ::
forall pgKind m n r.
(MonadSchema n m, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
(MonadMemoize m, MonadParse n, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
m (Parser 'Input n (DWithinGeogOp (IR.UnpreparedValue ('Postgres pgKind))))
geographyWithinDistanceInput = do
geographyParser <- columnParser (ColumnScalar PGGeography) (G.Nullability False)
@ -821,7 +822,7 @@ geographyWithinDistanceInput = do
geometryWithinDistanceInput ::
forall pgKind m n r.
(MonadSchema n m, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
(MonadMemoize m, MonadParse n, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
m (Parser 'Input n (DWithinGeomOp (IR.UnpreparedValue ('Postgres pgKind))))
geometryWithinDistanceInput = do
geometryParser <- columnParser (ColumnScalar PGGeometry) (G.Nullability False)
@ -833,7 +834,7 @@ geometryWithinDistanceInput = do
intersectsNbandGeomInput ::
forall pgKind m n r.
(MonadSchema n m, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
(MonadMemoize m, MonadParse n, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
m (Parser 'Input n (STIntersectsNbandGeommin (IR.UnpreparedValue ('Postgres pgKind))))
intersectsNbandGeomInput = do
geometryParser <- columnParser (ColumnScalar PGGeometry) (G.Nullability False)
@ -845,7 +846,7 @@ intersectsNbandGeomInput = do
intersectsGeomNbandInput ::
forall pgKind m n r.
(MonadSchema n m, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
(MonadMemoize m, MonadParse n, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
m (Parser 'Input n (STIntersectsGeomminNband (IR.UnpreparedValue ('Postgres pgKind))))
intersectsGeomNbandInput = do
geometryParser <- columnParser (ColumnScalar PGGeometry) (G.Nullability False)
@ -880,7 +881,7 @@ prependOp ::
( BackendSchema ('Postgres pgKind),
MonadReader r m,
MonadError QErr m,
MonadSchema n m,
MonadParse n,
Has MkTypename r,
Has NamingCase r
) =>
@ -915,7 +916,7 @@ appendOp ::
( BackendSchema ('Postgres pgKind),
MonadReader r m,
MonadError QErr m,
MonadSchema n m,
MonadParse n,
Has MkTypename r,
Has NamingCase r
) =>
@ -950,7 +951,7 @@ deleteKeyOp ::
( BackendSchema ('Postgres pgKind),
MonadReader r m,
MonadError QErr m,
MonadSchema n m,
MonadParse n,
Has MkTypename r,
Has NamingCase r
) =>
@ -981,7 +982,7 @@ deleteElemOp ::
( BackendSchema ('Postgres pgKind),
MonadReader r m,
MonadError QErr m,
MonadSchema n m,
MonadParse n,
Has MkTypename r,
Has NamingCase r
) =>
@ -1014,7 +1015,7 @@ deleteAtPathOp ::
( BackendSchema ('Postgres pgKind),
MonadReader r m,
MonadError QErr m,
MonadSchema n m,
MonadParse n,
Has MkTypename r,
Has NamingCase r
) =>


@ -137,7 +137,7 @@ conflictConstraint ::
TableInfo ('Postgres pgKind) ->
m (Parser 'Both n (UniqueConstraint ('Postgres pgKind)))
conflictConstraint constraints sourceInfo tableInfo =
memoizeOn 'conflictConstraint (_siName sourceInfo, tableName) $ do
P.memoizeOn 'conflictConstraint (_siName sourceInfo, tableName) $ do
tableGQLName <- getTableGQLName tableInfo
constraintEnumValues <- for
constraints


@ -1,114 +1,29 @@
-- | Classes for monads used during schema construction and query parsing.
module Hasura.GraphQL.Parser.Class
( MonadSchema (..),
memoize,
MonadParse (..),
withPath,
( MonadParse (..),
parseError,
withPath,
)
where
import Hasura.GraphQL.Parser.Class.Parse
import Language.Haskell.TH qualified as TH
import Type.Reflection (Typeable)
import Data.Aeson qualified as Aeson
import Data.Aeson.Types qualified as Aeson
import Data.Typeable
import Hasura.Base.ErrorMessage
import Hasura.GraphQL.Parser.ErrorCode
import Prelude
{- Note [Tying the knot]
~~~~~~~~~~~~~~~~~~~~~~~~
GraphQL type definitions can be mutually recursive, and indeed, they quite often
are! For example, two tables that reference one another will be represented by
types such as the following:
-- | A class that provides functionality for parsing GraphQL queries, i.e.
-- running a fully-constructed 'Parser'.
class (Monad m, Typeable m) => MonadParse m where
withKey :: Aeson.JSONPathElement -> m a -> m a
type author {
id: Int!
name: String!
articles: [article!]!
}
-- | Not the full power of 'MonadError' because parse errors cannot be
-- caught.
parseErrorWith :: ParseErrorCode -> ErrorMessage -> m a
type article {
id: Int!
title: String!
content: String!
author: author!
}
withPath :: MonadParse m => Aeson.JSONPath -> m a -> m a
withPath path action = foldr withKey action path
This doesn't cause any trouble if the schema is represented by a mapping from
type names to type definitions, but the Parser abstraction is all about avoiding
that kind of indirection to improve type safety: parsers refer to their
sub-parsers directly. This presents two problems during schema generation:
1. Schema generation needs to terminate in finite time, so we need to ensure
we don't try to eagerly construct an infinitely-large schema due to the
mutually-recursive structure.
2. To serve introspection queries, we do eventually need to construct a
mapping from names to types (a TypeMap), so we need to be able to
recursively walk the entire schema in finite time.
Solving point number 1 could be done with either laziness or sharing, but
neither of those are enough to solve point number 2, which requires /observable/
sharing. We need to construct a Parser graph that contains enough information to
detect cycles during traversal.
It may seem appealing to just use type names to detect cycles, which would allow
us to get away with using laziness rather than true sharing. Unfortunately, that
leads to two further problems:
* It's possible to end up with two different types with the same name, which
is an error and should be reported as such. Using names to break cycles
prevents us from doing that, since we have no way to check that two types
with the same name are actually the same.
* Some Parser constructors can fail: the `column` parser checks that the type
name is a valid GraphQL name, for example. This extra validation means lazy
schema construction isn't viable, since we need to eagerly build the schema
to ensure all the validation checks hold.
So we're forced to use sharing. But how do we do it? Somehow, we have to /tie
the knot/: we have to build a cyclic data structure, and some of the cycles
may be quite large. Doing all this knot-tying by hand would be incredibly
tricky, and it would require a lot of inversion of control to thread the shared
parsers around.
To avoid contorting the program, we instead implement a form of memoization. The
MonadSchema class provides a mechanism to memoize a parser constructor function,
which allows us to get sharing mostly for free. The memoization strategy also
annotates cached parsers with a Unique that can be used to break cycles while
traversing the graph, so we get observable sharing as well. -}
-- | A class that provides functionality used when building the GraphQL schema,
-- i.e. constructing the 'Parser' graph.
class (Monad m, MonadParse n) => MonadSchema n m | m -> n where
-- | Memoizes a parser constructor function for the extent of a single schema
-- construction process. This is mostly useful for recursive parsers;
-- see Note [Tying the knot] for more details.
--
-- The generality of the type here allows us to use this with multiple concrete
-- parser types:
--
-- @
-- 'memoizeOn' :: 'MonadSchema' n m => 'TH.Name' -> a -> m (Parser n b) -> m (Parser n b)
-- 'memoizeOn' :: 'MonadSchema' n m => 'TH.Name' -> a -> m (FieldParser n b) -> m (FieldParser n b)
-- @
memoizeOn ::
forall p a b.
(Ord a, Typeable p, Typeable a, Typeable b) =>
-- | A unique name used to identify the function being memoized. There isn't
-- really any metaprogramming going on here, we just use a Template Haskell
-- 'TH.Name' as a convenient source for a static, unique identifier.
TH.Name ->
-- | The value to use as the memoization key. It's the caller's
-- responsibility to ensure multiple calls to the same function don't use
-- the same key.
a ->
m (p n b) ->
m (p n b)
-- | A wrapper around 'memoizeOn' that memoizes a function by using its argument
-- as the key.
memoize ::
(MonadSchema n m, Ord a, Typeable a, Typeable b, Typeable p) =>
TH.Name ->
(a -> m (p n b)) ->
(a -> m (p n b))
memoize name f a = memoizeOn name a (f a)
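To make the contract concrete, here is a toy analogue of `memoizeOn`/`memoize` written over `State` with a plain `Map` as the cache. This is only a sketch (the primed names are made up for the example); the real `SchemaT` implementation below additionally blackholes in-flight parsers so that recursive definitions terminate:

```haskell
import Control.Monad.State
import qualified Data.Map.Strict as M

-- Toy analogue of memoizeOn: cache results under a (function name, key)
-- pair, computing and storing the value on the first request only.
memoizeOn' :: Ord a => String -> a -> State (M.Map (String, a) b) b -> State (M.Map (String, a) b) b
memoizeOn' name key build = do
  cached <- gets (M.lookup (name, key))
  case cached of
    Just v -> pure v
    Nothing -> do
      v <- build
      modify (M.insert (name, key) v)
      pure v

-- The same wrapper as in the class: memoize a function by its argument.
memoize' :: Ord a => String -> (a -> State (M.Map (String, a) b) b) -> a -> State (M.Map (String, a) b) b
memoize' name f a = memoizeOn' name a (f a)

-- Exponential-time recursion becomes linear once memoized.
fib :: Integer -> State (M.Map (String, Integer) Integer) Integer
fib = memoize' "fib" $ \n ->
  if n < 2 then pure n else (+) <$> fib (n - 1) <*> fib (n - 2)

main :: IO ()
main = print (evalState (fib 30) M.empty)  -- prints 832040
```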
parseError :: MonadParse m => ErrorMessage -> m a
parseError = parseErrorWith ValidationFailed


@ -1,28 +0,0 @@
-- | Classes for monads used during schema construction and query parsing.
module Hasura.GraphQL.Parser.Class.Parse
( MonadParse (..),
parseError,
withPath,
)
where
import Data.Aeson qualified as Aeson
import Data.Aeson.Types qualified as Aeson
import Hasura.Base.ErrorMessage
import Hasura.GraphQL.Parser.ErrorCode
import Prelude
-- | A class that provides functionality for parsing GraphQL queries, i.e.
-- running a fully-constructed 'Parser'.
class Monad m => MonadParse m where
withKey :: Aeson.JSONPathElement -> m a -> m a
-- | Not the full power of 'MonadError' because parse errors cannot be
-- caught.
parseErrorWith :: ParseErrorCode -> ErrorMessage -> m a
withPath :: MonadParse m => Aeson.JSONPath -> m a -> m a
withPath path action = foldr withKey action path
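Since `withPath` is defined with `foldr`, the first element of the path becomes the outermost `withKey` wrapper, so error locations read in path order. A purely illustrative sketch, with strings standing in for the monadic `withKey`:

```haskell
-- Stand-in for the real withKey: wrap an inner error context in a key.
withKey :: String -> String -> String
withKey key inner = key ++ "." ++ inner

-- Same shape as the real definition: foldr preserves path order.
withPath :: [String] -> String -> String
withPath path action = foldr withKey action path

main :: IO ()
main = putStrLn (withPath ["user", "address", "zip"] "<error>")
-- prints user.address.zip.<error>
```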
parseError :: MonadParse m => ErrorMessage -> m a
parseError = parseErrorWith ValidationFailed


@ -19,7 +19,7 @@ import Data.Vector qualified as V
import Data.Void (Void, absurd)
import Hasura.Base.ErrorMessage
import Hasura.Base.ToErrorValue
import Hasura.GraphQL.Parser.Class.Parse
import Hasura.GraphQL.Parser.Class
import Hasura.GraphQL.Parser.Internal.TypeChecking
import Hasura.GraphQL.Parser.Variable
import Language.GraphQL.Draft.Syntax qualified as G


@ -34,7 +34,7 @@ import Data.Vector qualified as V
import Data.Void (Void)
import Hasura.Base.ErrorValue qualified as ErrorValue
import Hasura.Base.ToErrorValue
import Hasura.GraphQL.Parser.Class.Parse
import Hasura.GraphQL.Parser.Class
import Hasura.GraphQL.Parser.Internal.TypeChecking
import Hasura.GraphQL.Parser.Internal.Types
import Hasura.GraphQL.Parser.Schema


@ -33,7 +33,7 @@ import Data.Type.Equality
import Data.Void (Void)
import Hasura.Base.ErrorMessage
import Hasura.Base.ToErrorValue
import Hasura.GraphQL.Parser.Class.Parse
import Hasura.GraphQL.Parser.Class
import Hasura.GraphQL.Parser.Collect
import Hasura.GraphQL.Parser.Directives
import Hasura.GraphQL.Parser.Internal.Input


@ -33,7 +33,7 @@ import Data.Text qualified as Text
import Data.Text.Read (decimal)
import Data.UUID qualified as UUID
import Hasura.Base.ErrorMessage (toErrorMessage)
import Hasura.GraphQL.Parser.Class.Parse
import Hasura.GraphQL.Parser.Class
import Hasura.GraphQL.Parser.ErrorCode
import Hasura.GraphQL.Parser.Internal.Convert
import Hasura.GraphQL.Parser.Internal.TypeChecking


@ -16,7 +16,7 @@ import Data.Function (on)
import Data.Void (absurd)
import Hasura.Base.ErrorMessage
import Hasura.Base.ToErrorValue
import Hasura.GraphQL.Parser.Class.Parse
import Hasura.GraphQL.Parser.Class
import Hasura.GraphQL.Parser.Names
import Hasura.GraphQL.Parser.Variable
import Language.GraphQL.Draft.Syntax hiding (Definition)


@ -1,168 +1,22 @@
-- | Monad transformers for GraphQL schema construction and query parsing.
module Hasura.GraphQL.Parser.Monad
( SchemaT,
runSchemaT,
Parse,
( Parse,
runParse,
ParseError (..),
)
where
import Control.Monad.Except
import Control.Monad.Reader (MonadReader, ReaderT, mapReaderT)
import Control.Monad.State.Strict (MonadState (..), StateT, evalStateT)
import Data.Aeson (JSONPath)
import Data.Dependent.Map (DMap)
import Data.Dependent.Map qualified as DM
import Data.GADT.Compare.Extended
import Data.IORef
import Data.Kind qualified as K
import Data.Proxy (Proxy (..))
import Data.Functor.Identity
import Hasura.Base.ErrorMessage
import Hasura.GraphQL.Parser.Class
import Hasura.GraphQL.Parser.ErrorCode
import Language.Haskell.TH qualified as TH
import System.IO.Unsafe (unsafeInterleaveIO)
import Type.Reflection (Typeable, typeRep, (:~:) (..))
import Prelude
-- Disable custom prelude warnings in preparation for extracting this module into a separate package.
{-# ANN module ("HLint: ignore Use onLeft" :: String) #-}
-- -------------------------------------------------------------------------------------------------
-- schema construction
newtype SchemaT n m a = SchemaT
{ unSchemaT :: StateT (DMap ParserId (ParserById n)) m a
}
deriving (Functor, Applicative, Monad, MonadError e, MonadReader r)
runSchemaT :: forall m n a. Monad m => SchemaT n m a -> m a
runSchemaT = flip evalStateT mempty . unSchemaT
-- | see Note [SchemaT requires MonadIO]
instance
(MonadIO m, MonadParse n) =>
MonadSchema n (SchemaT n m)
where
memoizeOn name key buildParser = SchemaT do
let parserId = ParserId name key
parsersById <- get
case DM.lookup parserId parsersById of
Just (ParserById parser) -> pure parser
Nothing -> do
-- We manually do eager blackholing here using a MutVar rather than
-- relying on MonadFix and ordinary thunk blackholing. Why? A few
-- reasons:
--
-- 1. We have more control. We aren't at the whims of whatever
-- MonadFix instance happens to get used.
--
-- 2. We can be more precise. GHC's lazy blackholing doesn't always
-- kick in when you'd expect.
--
-- 3. We can provide more useful error reporting if things go wrong.
-- Most usefully, we can include a HasCallStack source location.
cell <- liftIO $ newIORef Nothing
-- We use unsafeInterleaveIO here, which sounds scary, but
-- unsafeInterleaveIO is actually far more safe than unsafePerformIO.
-- unsafeInterleaveIO just defers the execution of the action until its
-- result is needed, adding some laziness.
--
-- That laziness can be dangerous if the action has side-effects, since
-- the point at which the effect is performed can be unpredictable. But
-- this action just reads, never writes, so that isn't a concern.
parserById <-
liftIO $
unsafeInterleaveIO $
readIORef cell >>= \case
Just parser -> pure $ ParserById parser
Nothing ->
error $
unlines
[ "memoize: parser was forced before being fully constructed",
" parser constructor: " ++ TH.pprint name
]
put $! DM.insert parserId parserById parsersById
parser <- unSchemaT buildParser
liftIO $ writeIORef cell (Just parser)
pure parser
instance
(MonadIO m, MonadParse n) =>
MonadSchema n (ReaderT a (SchemaT n m))
where
memoizeOn name key = mapReaderT (memoizeOn name key)
{- Note [SchemaT requires MonadIO]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The MonadSchema instance for SchemaT requires MonadIO, which is unsatisfying.
The only reason the constraint is needed is to implement knot-tying via IORefs
(see Note [Tying the knot] in Hasura.GraphQL.Parser.Class), which really only
requires the power of ST. Using ST would be much nicer, since we could discharge
the burden locally, but unfortunately we also want to use MonadUnique, which
is handled by IO in the end.
This means that we need IO at the base of our monad, so to use STRefs, we'd need
a hypothetical STT transformer (i.e. a monad transformer version of ST). But
such a thing isn't safe in general, since reentrant monads like ListT or ContT
would incorrectly share state between the different threads of execution.
In theory, this can be resolved by using something like Vault (from the vault
package) to create splittable sets of variable references. That would allow
you to create a transformer with an STRef-like interface that works over any
arbitrary monad. However, while the interface would be safe, the implementation
of such an abstraction requires unsafe primitives, and to the best of my
knowledge no such transformer exists in any existing libraries.
So we decide it isn't worth the trouble and just use MonadIO. If `eff` ever pans
out, it should be able to support this more naturally, so we can fix it then. -}
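The blackholing scheme above can be reproduced in miniature without any Parser machinery. This sketch (toy `Node` type, illustrative names) ties a one-node knot the same way `memoizeOn` does, by writing the placeholder cell before running the construction:

```haskell
import Data.IORef
import System.IO.Unsafe (unsafeInterleaveIO)

-- A toy recursive type standing in for Parser: a node that references
-- another node, possibly itself.
data Node = Node {nodeName :: String, nodeNext :: Node}

-- Allocate a cell, lazily read it back, and fill it in only after
-- construction finishes; forcing the reference too early is an error.
mkSelfReferential :: String -> IO Node
mkSelfReferential name = do
  cell <- newIORef Nothing
  self <-
    unsafeInterleaveIO $
      readIORef cell
        >>= maybe (error "forced before fully constructed") pure
  let node = Node name self
  writeIORef cell (Just node)
  pure node

main :: IO ()
main = do
  node <- mkSelfReferential "query_root"
  -- The cycle is observably shared, so following it stays finite.
  putStrLn (nodeName (nodeNext (nodeNext node)))  -- prints query_root
```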
-- | A key used to distinguish calls to 'memoize'd functions. The 'TH.Name'
-- distinguishes calls to completely different parsers, and the @a@ value
-- records the arguments.
data ParserId (t :: ((K.Type -> K.Type) -> K.Type -> K.Type, K.Type)) where
ParserId :: (Ord a, Typeable p, Typeable a, Typeable b) => TH.Name -> a -> ParserId '(p, b)
instance GEq ParserId where
geq
(ParserId name1 (arg1 :: a1) :: ParserId t1)
(ParserId name2 (arg2 :: a2) :: ParserId t2)
| _ :: Proxy '(p1, b1) <- Proxy @t1,
_ :: Proxy '(p2, b2) <- Proxy @t2,
name1 == name2,
Just Refl <- typeRep @a1 `geq` typeRep @a2,
arg1 == arg2,
Just Refl <- typeRep @p1 `geq` typeRep @p2,
Just Refl <- typeRep @b1 `geq` typeRep @b2 =
Just Refl
| otherwise = Nothing
instance GCompare ParserId where
gcompare
(ParserId name1 (arg1 :: a1) :: ParserId t1)
(ParserId name2 (arg2 :: a2) :: ParserId t2)
| _ :: Proxy '(p1, b1) <- Proxy @t1,
_ :: Proxy '(p2, b2) <- Proxy @t2 =
strengthenOrdering (compare name1 name2)
`extendGOrdering` gcompare (typeRep @a1) (typeRep @a2)
`extendGOrdering` strengthenOrdering (compare arg1 arg2)
`extendGOrdering` gcompare (typeRep @p1) (typeRep @p2)
`extendGOrdering` gcompare (typeRep @b1) (typeRep @b2)
`extendGOrdering` GEQ
-- | A newtype wrapper around a 'Parser' that rearranges the type parameters
-- so that it can be indexed by a 'ParserId' in a 'DMap'.
--
-- This is really just a single newtype, but it's implemented as a data family
-- because GHC doesn't allow ordinary datatype declarations to pattern-match on
-- type parameters, and we want to match on the tuple.
data family ParserById (m :: K.Type -> K.Type) (a :: ((K.Type -> K.Type) -> K.Type -> K.Type, K.Type))
newtype instance ParserById m '(p, a) = ParserById (p m a)
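The `ParserId`/`ParserById` pairing exists so that a single `DMap` can cache parsers of many different types, with `Typeable` evidence used to recover each one safely. A much-simplified, base-only sketch of the same idea using `Data.Dynamic` (illustrative names; the real code uses dependent-map so lookups are statically typed):

```haskell
import Data.Dynamic (Dynamic, fromDynamic, toDyn)
import Data.Typeable (Typeable)
import qualified Data.Map.Strict as M

-- A heterogeneous cache: values of any Typeable type under string keys.
type Cache = M.Map String Dynamic

insertCache :: Typeable v => String -> v -> Cache -> Cache
insertCache k v = M.insert k (toDyn v)

-- Lookup succeeds only when the requested type matches the stored one,
-- the runtime analogue of ParserId's Typeable constraints.
lookupCache :: Typeable v => String -> Cache -> Maybe v
lookupCache k cache = M.lookup k cache >>= fromDynamic

main :: IO ()
main = do
  let cache = insertCache "answer" (42 :: Int) M.empty
  print (lookupCache "answer" cache :: Maybe Int)   -- Just 42
  print (lookupCache "answer" cache :: Maybe Bool)  -- Nothing (wrong type)
```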
-- -------------------------------------------------------------------------------------------------
-- query parsing


@ -142,7 +142,7 @@ sense, since scalar fields don't have selection sets!
Another issue with this representation has to do with effectful parser
constructors (such as constructors that can throw errors). These have types like
mkFooParser :: MonadSchema n m => Blah -> m (Parser k n Foo)
mkFooParser :: (MonadMemoize m, MonadParse n) => Blah -> m (Parser k n Foo)
where the parser construction is itself monadic. This causes some annoyance,
since even if mkFooParser returns a Parser of a polymorphic kind, code like this


@ -9,6 +9,7 @@ where
import Control.Arrow.Extended (left)
import Control.Exception (try)
import Control.Lens (set, (^.))
import Control.Monad.Memoize
import Data.Aeson ((.:), (.:?))
import Data.Aeson qualified as J
import Data.ByteString.Lazy qualified as BL
@ -20,7 +21,7 @@ import Data.List.Extended (duplicates)
import Data.Text qualified as T
import Data.Text.Extended (dquoteList, (<<>))
import Hasura.Base.Error
import Hasura.GraphQL.Parser.Monad (Parse, runSchemaT)
import Hasura.GraphQL.Parser.Monad (Parse)
import Hasura.GraphQL.Parser.Name qualified as GName
import Hasura.GraphQL.Schema.Common
import Hasura.GraphQL.Schema.NamingCase
@ -82,7 +83,7 @@ fetchRemoteSchema env manager _rscName rsDef@ValidatedRemoteSchemaDef {..} = do
-- quickly reject an invalid schema.
void $
flip runReaderT minimumValidContext $
runSchemaT $
runMemoizeT $
buildRemoteParser @_ @_ @Parse
_rscIntroOriginal
_rscRemoteRelationships


@ -9,6 +9,7 @@ where
import Control.Concurrent.Extended (forConcurrentlyEIO)
import Control.Lens
import Control.Monad.Memoize
import Data.Aeson.Ordered qualified as JO
import Data.Has
import Data.HashMap.Strict qualified as Map
@ -36,7 +37,6 @@ import Hasura.GraphQL.Schema.Parser
( FieldParser,
Kind (..),
MonadParse,
MonadSchema,
Parser,
Schema,
)
@ -752,7 +752,7 @@ parseBuildIntrospectionSchema q m s =
queryWithIntrospectionHelper ::
forall n m.
(MonadSchema n m, MonadError QErr m) =>
(MonadMemoize m, MonadParse n, MonadError QErr m) =>
[P.FieldParser n (NamespacedField (QueryRootField UnpreparedValue))] ->
Maybe (Parser 'Output n (RootFieldMap (MutationRootField UnpreparedValue))) ->
Maybe (Parser 'Output n (RootFieldMap (QueryRootField UnpreparedValue))) ->
@ -909,8 +909,7 @@ throwOnConflictingDefinitions :: QErrM m => Either P.ConflictingDefinitions a ->
throwOnConflictingDefinitions = either (throw500 . fromErrorMessage . toErrorValue) pure
type ConcreteSchemaT m a =
P.SchemaT
P.Parse
MemoizeT
( ReaderT
( SchemaOptions,
SchemaContext,
@ -931,7 +930,7 @@ runMonadSchema ::
m a
runMonadSchema options context m =
flip runReaderT (options, context, mempty, mempty, HasuraCase) $
P.runSchemaT m
runMemoizeT m
buildBackendSource ::
(forall b. BackendSchema b => SourceInfo b -> r) ->


@ -279,7 +279,7 @@ actionOutputFields outputType annotatedObject objectTypes = do
outputFieldParser ::
ObjectFieldDefinition (G.GType, AnnotatedObjectFieldType) ->
m (FieldParser n (AnnotatedActionField))
outputFieldParser (ObjectFieldDefinition name _ description (gType, objectFieldType)) = memoizeOn 'actionOutputFields (_aotName annotatedObject, name) do
outputFieldParser (ObjectFieldDefinition name _ description (gType, objectFieldType)) = P.memoizeOn 'actionOutputFields (_aotName annotatedObject, name) do
case objectFieldType of
AOFTScalar def ->
wrapScalar $ customScalarParser def
@ -372,7 +372,7 @@ actionInputArguments nonObjectTypeMap arguments = do
NOCTEnum def -> pure $ mkResult $ customEnumParser def
-- input objects however may recursively contain one another
NOCTInputObject (InputObjectTypeDefinition (InputObjectTypeName objectName) objectDesc inputFields) ->
mkResult <$> memoizeOn 'actionInputArguments objectName do
mkResult <$> P.memoizeOn 'actionInputArguments objectName do
inputFieldsParsers <- forM
(toList inputFields)
\(InputObjectFieldDefinition (InputObjectFieldName fieldName) fieldDesc (GraphQLType fieldType)) -> do


@ -60,8 +60,6 @@ import Hasura.SQL.Backend
import Hasura.Server.Types (StreamingSubscriptionsCtx)
import Language.GraphQL.Draft.Syntax qualified as G
-- TODO: Might it make sense to add those constraints to MonadSchema directly?
-- | Bag of constraints available to the methods of @BackendSchema@.
--
-- Note that @BackendSchema b@ is itself part of this, so a method may also
@ -211,7 +209,7 @@ class
-- individual components
columnParser ::
(MonadSchema n m, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
(MonadParse n, MonadError QErr m, MonadReader r m, Has MkTypename r, Has NamingCase r) =>
ColumnType b ->
G.Nullability -> -- TODO maybe use Hasura.GraphQL.Parser.Schema.Nullability instead?
m (Parser 'Both n (ValueWithOrigin (ColumnValue b)))


@ -54,7 +54,7 @@ boolExp ::
SourceInfo b ->
TableInfo b ->
m (Parser 'Input n (AnnBoolExp b (UnpreparedValue b)))
boolExp sourceInfo tableInfo = memoizeOn 'boolExp (_siName sourceInfo, tableName) $ do
boolExp sourceInfo tableInfo = P.memoizeOn 'boolExp (_siName sourceInfo, tableName) $ do
tableGQLName <- getTableGQLName tableInfo
name <- mkTypename $ tableGQLName <> Name.__bool_exp
let description =


@ -101,7 +101,8 @@ isHasuraSchema = \case
type MonadBuildSchemaBase r m n =
( MonadError QErr m,
MonadReader r m,
P.MonadSchema n m,
P.MonadMemoize m,
P.MonadParse n,
Has SchemaOptions r,
Has SchemaContext r,
-- TODO: make all `Has x r` explicit fields of 'SchemaContext'


@ -16,7 +16,6 @@ import Data.Has (getter)
import Data.HashMap.Strict qualified as Map
import Data.HashSet qualified as Set
import Data.Text.Extended
import Hasura.GraphQL.Parser.Class
import Hasura.GraphQL.Schema.Backend
import Hasura.GraphQL.Schema.BoolExp
import Hasura.GraphQL.Schema.Common
@ -174,7 +173,7 @@ tableFieldsInput ::
TableInfo b ->
m (Parser 'Input n (IR.AnnotatedInsertRow b (IR.UnpreparedValue b)))
tableFieldsInput sourceInfo tableInfo =
memoizeOn 'tableFieldsInput (_siName sourceInfo, tableName) do
P.memoizeOn 'tableFieldsInput (_siName sourceInfo, tableName) do
tableGQLName <- getTableGQLName tableInfo
objectFields <- traverse mkFieldParser (Map.elems allFields)
objectName <- mkTypename $ tableGQLName <> Name.__insert_input
@ -268,7 +267,7 @@ objectRelationshipInput backendInsertAction sourceInfo tableInfo = runMaybeT $ d
let permissions = getRolePermInfo roleName tableInfo
updatePerms = _permUpd permissions
insertPerms <- hoistMaybe $ _permIns permissions
lift $ memoizeOn 'objectRelationshipInput (_siName sourceInfo, tableName) do
lift $ P.memoizeOn 'objectRelationshipInput (_siName sourceInfo, tableName) do
tableGQLName <- getTableGQLName tableInfo
objectParser <- tableFieldsInput sourceInfo tableInfo
backendInsertParser <- backendInsertAction sourceInfo tableInfo
@ -302,7 +301,7 @@ arrayRelationshipInput backendInsertAction sourceInfo tableInfo = runMaybeT $ do
let permissions = getRolePermInfo roleName tableInfo
updatePerms = _permUpd permissions
insertPerms <- hoistMaybe $ _permIns permissions
lift $ memoizeOn 'arrayRelationshipInput (_siName sourceInfo, tableName) do
lift $ P.memoizeOn 'arrayRelationshipInput (_siName sourceInfo, tableName) do
tableGQLName <- getTableGQLName tableInfo
objectParser <- tableFieldsInput sourceInfo tableInfo
backendInsertParser <- backendInsertAction sourceInfo tableInfo
@ -445,7 +444,7 @@ mutationSelectionSet ::
TableInfo b ->
m (Parser 'Output n (IR.MutFldsG b (IR.RemoteRelationshipField IR.UnpreparedValue) (IR.UnpreparedValue b)))
mutationSelectionSet sourceInfo tableInfo =
memoizeOn 'mutationSelectionSet (_siName sourceInfo, tableName) do
P.memoizeOn 'mutationSelectionSet (_siName sourceInfo, tableName) do
roleName <- retrieve scRole
tableGQLName <- getTableGQLName tableInfo
returning <- runMaybeT do


@ -65,7 +65,7 @@ orderByExp ::
SourceInfo b ->
TableInfo b ->
m (Parser 'Input n [IR.AnnotatedOrderByItemG b (IR.UnpreparedValue b)])
orderByExp sourceInfo tableInfo = memoizeOn 'orderByExp (_siName sourceInfo, tableInfoName tableInfo) $ do
orderByExp sourceInfo tableInfo = P.memoizeOn 'orderByExp (_siName sourceInfo, tableInfoName tableInfo) $ do
tableGQLName <- getTableGQLName tableInfo
tCase <- asks getter
name <- mkTypename $ tableGQLName <> Name.__order_by
@ -156,7 +156,7 @@ orderByAggregation ::
SourceInfo b ->
TableInfo b ->
m (Parser 'Input n [IR.OrderByItemG b (IR.AnnotatedAggregateOrderBy b)])
orderByAggregation sourceInfo tableInfo = memoizeOn 'orderByAggregation (_siName sourceInfo, tableName) do
orderByAggregation sourceInfo tableInfo = P.memoizeOn 'orderByAggregation (_siName sourceInfo, tableName) do
-- WIP NOTE
-- there is heavy duplication between this and Select.tableAggregationFields
-- it might be worth putting some of it in common, just to avoid issues when


@ -56,6 +56,9 @@ module Hasura.GraphQL.Schema.Parser
P.ParseErrorCode (..),
toQErr,
module Hasura.GraphQL.Parser,
Memoize.MonadMemoize,
memoizeOn,
memoize,
)
where
@ -63,6 +66,8 @@ where
-- this module.
import Control.Monad.Error.Class
import Control.Monad.Memoize qualified as Memoize
import Data.Typeable
import Hasura.Base.Error
import Hasura.Base.ErrorMessage (ErrorMessage (fromErrorMessage))
import Hasura.GraphQL.Parser hiding
@ -85,6 +90,7 @@ import Hasura.GraphQL.Parser hiding
import Hasura.GraphQL.Parser qualified as P
import Hasura.Prelude
import Hasura.RQL.Types.Metadata.Object
import Language.Haskell.TH qualified as TH
type FieldParser = P.FieldParser MetadataObjId
@ -132,3 +138,30 @@ toQErr = either (throwError . parseErrorToQErr) pure
parseErrorCodeToCode P.ParseFailed = ParseFailed
parseErrorCodeToCode P.ConflictingDefinitionsError = Unexpected
parseErrorCodeToCode P.NotSupported = NotSupported
memoizeOn ::
(Memoize.MonadMemoize m, Ord a, Typeable a, Typeable p, MonadParse n, Typeable b) =>
-- | A unique name used to identify the function being memoized. There isn't
-- really any metaprogramming going on here, we just use a Template Haskell
-- 'TH.Name' as a convenient source for a static, unique identifier.
TH.Name ->
-- | The value to use as the memoization key. It's the caller's
-- responsibility to ensure multiple calls to the same function don't use
-- the same key.
a ->
-- | The value to be memoized. 'p' is intended to be either 'Parser k' or
-- 'FieldParser'.
m (p n b) ->
m (p n b)
memoizeOn = Memoize.memoizeOn
-- | A wrapper around 'memoizeOn' that memoizes a function by using its argument
-- as the key.
memoize ::
(Memoize.MonadMemoize m, Ord a, Typeable a, Typeable p, MonadParse n, Typeable b) =>
TH.Name ->
-- | A function generating something to be memoized. 'p' is intended to be
-- either 'Parser k' or 'FieldParser'.
(a -> m (p n b)) ->
(a -> m (p n b))
memoize = Memoize.memoize


@ -980,14 +980,6 @@ customizeRemoteNamespace remoteSchemaInfo@RemoteSchemaInfo {..} rootTypeName fie
in IR.RemoteSchemaRootField remoteSchemaInfo resultCustomizer $ IR.mkGraphQLField (Just alias) GName.___typename mempty mempty IR.SelectionSetNone
mkNamespaceTypename = MkTypename $ const $ runMkTypename (remoteSchemaCustomizeTypeName rsCustomizer) rootTypeName
{-
NOTE: Unused. Should we remove?
type MonadBuildRemoteSchema r m n = (MonadSchema n m, MonadError QErr m, MonadReader r m, Has MkTypename r, Has CustomizeRemoteFieldName r)
runMonadBuildRemoteSchema :: Monad m => SchemaT n (ReaderT (MkTypename, CustomizeRemoteFieldName) m) a -> m a
runMonadBuildRemoteSchema m = flip runReaderT (mempty, mempty) $ runSchemaT m
-}
withRemoteSchemaCustomization ::
forall m r a.
(MonadReader r m, Has MkTypename r, Has CustomizeRemoteFieldName r) =>


@ -15,7 +15,7 @@ import Hasura.GraphQL.Schema.Common
import Hasura.GraphQL.Schema.Instances ()
import Hasura.GraphQL.Schema.NamingCase
import Hasura.GraphQL.Schema.Options (RemoteSchemaPermissions)
import Hasura.GraphQL.Schema.Parser (FieldParser, MonadSchema)
import Hasura.GraphQL.Schema.Parser (FieldParser, MonadMemoize)
import Hasura.GraphQL.Schema.Parser qualified as P
import Hasura.GraphQL.Schema.Remote
import Hasura.GraphQL.Schema.Select
@ -146,7 +146,7 @@ remoteRelationshipToSchemaField remoteSchemaCache remoteSchemaPermissions lhsFie
foldr (modifyFieldByName . fcName) resultCustomizer $ NE.init fieldCalls
lookupNestedFieldType' ::
(MonadSchema n m, MonadError QErr m) =>
(MonadMemoize m, MonadError QErr m) =>
G.Name ->
RemoteSchemaIntrospection ->
FieldCall ->
@ -160,7 +160,7 @@ lookupNestedFieldType' parentTypeName remoteSchemaIntrospection (FieldCall fcNam
Just G.FieldDefinition {..} -> pure _fldType
lookupNestedFieldType ::
(MonadSchema n m, MonadError QErr m) =>
(MonadMemoize m, MonadError QErr m) =>
G.Name ->
RemoteSchemaIntrospection ->
NonEmpty FieldCall ->


@ -107,7 +107,7 @@ defaultSelectTable sourceInfo tableInfo fieldName description = runMaybeT do
roleName <- retrieve scRole
selectPermissions <- hoistMaybe $ tableSelectPermissions roleName tableInfo
selectionSetParser <- MaybeT $ tableSelectionList sourceInfo tableInfo
lift $ memoizeOn 'defaultSelectTable (_siName sourceInfo, tableName, fieldName) do
lift $ P.memoizeOn 'defaultSelectTable (_siName sourceInfo, tableName, fieldName) do
stringifyNumbers <- retrieve Options.soStringifyNumbers
tableArgsParser <- tableArguments sourceInfo tableInfo
pure $
@ -165,7 +165,7 @@ selectTableConnection sourceInfo tableInfo fieldName description pkeyColumns = r
xRelayInfo <- hoistMaybe $ relayExtension @b
selectPermissions <- hoistMaybe $ tableSelectPermissions roleName tableInfo
selectionSetParser <- fmap P.nonNullableParser <$> MaybeT $ tableConnectionSelectionSet sourceInfo tableInfo
lift $ memoizeOn 'selectTableConnection (_siName sourceInfo, tableName, fieldName) do
lift $ P.memoizeOn 'selectTableConnection (_siName sourceInfo, tableName, fieldName) do
stringifyNumbers <- retrieve Options.soStringifyNumbers
selectArgsParser <- tableConnectionArgs pkeyColumns sourceInfo tableInfo
pure $
@ -217,7 +217,7 @@ selectTableByPk sourceInfo tableInfo fieldName description = runMaybeT do
primaryKeys <- hoistMaybe $ fmap _pkColumns . _tciPrimaryKey . _tiCoreInfo $ tableInfo
selectionSetParser <- MaybeT $ tableSelectionSet sourceInfo tableInfo
guard $ all (\c -> ciColumn c `Map.member` spiCols selectPermissions) primaryKeys
lift $ memoizeOn 'selectTableByPk (_siName sourceInfo, tableName, fieldName) do
lift $ P.memoizeOn 'selectTableByPk (_siName sourceInfo, tableName, fieldName) do
stringifyNumbers <- retrieve Options.soStringifyNumbers
argsParser <-
sequenceA <$> for primaryKeys \columnInfo -> do
@ -272,7 +272,7 @@ defaultSelectTableAggregate sourceInfo tableInfo fieldName description = runMayb
guard $ spiAllowAgg selectPermissions
xNodesAgg <- hoistMaybe $ nodesAggExtension @b
nodesParser <- MaybeT $ tableSelectionList sourceInfo tableInfo
lift $ memoizeOn 'defaultSelectTableAggregate (_siName sourceInfo, tableName, fieldName) do
lift $ P.memoizeOn 'defaultSelectTableAggregate (_siName sourceInfo, tableName, fieldName) do
stringifyNumbers <- retrieve Options.soStringifyNumbers
tableGQLName <- getTableGQLName tableInfo
tableArgsParser <- tableArguments sourceInfo tableInfo
@ -383,7 +383,7 @@ defaultTableSelectionSet sourceInfo tableInfo = runMaybeT do
-- incomplete selection set, we fail early and return 'Nothing'. This check
-- must happen first, since we can't memoize a @Maybe Parser@.
guard $ isHasuraSchema schemaKind || isJust (relayExtension @b)
lift $ memoizeOn 'defaultTableSelectionSet (sourceName, tableName) do
lift $ P.memoizeOn 'defaultTableSelectionSet (sourceName, tableName) do
tableGQLName <- getTableGQLName tableInfo
objectTypename <- mkTypename tableGQLName
let xRelay = relayExtension @b
@ -484,7 +484,7 @@ tableConnectionSelectionSet sourceInfo tableInfo = runMaybeT do
tableGQLName <- lift $ getTableGQLName tableInfo
void $ hoistMaybe $ tableSelectPermissions roleName tableInfo
edgesParser <- MaybeT $ tableEdgesSelectionSet tableGQLName
lift $ memoizeOn 'tableConnectionSelectionSet (_siName sourceInfo, tableName) do
lift $ P.memoizeOn 'tableConnectionSelectionSet (_siName sourceInfo, tableName) do
connectionTypeName <- mkTypename $ tableGQLName <> Name._Connection
let pageInfo =
P.subselection_
@ -870,7 +870,7 @@ tableAggregationFields ::
TableInfo b ->
m (Parser 'Output n (IR.AggregateFields b))
tableAggregationFields sourceInfo tableInfo =
memoizeOn 'tableAggregationFields (_siName sourceInfo, tableInfoName tableInfo) do
P.memoizeOn 'tableAggregationFields (_siName sourceInfo, tableInfoName tableInfo) do
tableGQLName <- getTableGQLName tableInfo
tCase <- asks getter
allColumns <- tableSelectColumns sourceInfo tableInfo


@ -7,6 +7,7 @@ module Hasura.GraphQL.Schema.SubscriptionStream
)
where
import Control.Monad.Memoize
import Data.Has
import Data.List.NonEmpty qualified as NE
import Data.Text.Extended ((<>>))
@ -58,7 +59,7 @@ cursorBatchSizeArg tCase =
-- > }
cursorOrderingArgParser ::
forall n m r.
(MonadSchema n m, Has MkTypename r, Has NamingCase r, MonadReader r m) =>
(MonadMemoize m, MonadParse n, Has MkTypename r, Has NamingCase r, MonadReader r m) =>
m (Parser 'Both n CursorOrdering)
cursorOrderingArgParser = do
tCase <- asks getter
@ -84,7 +85,7 @@ cursorOrderingArgParser = do
-- > ordering: cursor_ordering
cursorOrderingArg ::
forall n m r.
(MonadSchema n m, Has MkTypename r, Has NamingCase r, MonadReader r m) =>
(MonadMemoize m, MonadParse n, Has MkTypename r, Has NamingCase r, MonadReader r m) =>
m (InputFieldsParser n (Maybe CursorOrdering))
cursorOrderingArg = do
cursorOrderingParser' <- cursorOrderingArgParser
@ -95,7 +96,7 @@ cursorOrderingArg = do
-- > column_name: column_type
streamColumnParserArg ::
forall b n m r.
(BackendSchema b, MonadSchema n m, Has MkTypename r, MonadReader r m, MonadError QErr m, Has NamingCase r) =>
(BackendSchema b, MonadMemoize m, MonadParse n, Has MkTypename r, MonadReader r m, MonadError QErr m, Has NamingCase r) =>
ColumnInfo b ->
m (InputFieldsParser n (Maybe (ColumnInfo b, ColumnValue b)))
streamColumnParserArg colInfo = do
@ -117,7 +118,7 @@ streamColumnParserArg colInfo = do
-- > }
streamColumnValueParser ::
forall b n m r.
(BackendSchema b, MonadSchema n m, Has MkTypename r, MonadReader r m, MonadError QErr m, Has NamingCase r) =>
(BackendSchema b, MonadMemoize m, MonadParse n, Has MkTypename r, MonadReader r m, MonadError QErr m, Has NamingCase r) =>
SourceInfo b ->
G.Name ->
[ColumnInfo b] ->
@ -136,7 +137,8 @@ streamColumnValueParser sourceInfo tableGQLName colInfos =
streamColumnValueParserArg ::
forall b n m r.
( BackendSchema b,
MonadSchema n m,
MonadMemoize m,
MonadParse n,
Has MkTypename r,
MonadReader r m,
MonadError QErr m,
@ -158,7 +160,7 @@ streamColumnValueParserArg sourceInfo tableGQLName colInfos = do
-- >
tableStreamColumnArg ::
forall n m r b.
(BackendSchema b, MonadSchema n m, Has MkTypename r, MonadReader r m, MonadError QErr m, Has NamingCase r) =>
(BackendSchema b, MonadMemoize m, MonadParse n, Has MkTypename r, MonadReader r m, MonadError QErr m, Has NamingCase r) =>
SourceInfo b ->
G.Name ->
[ColumnInfo b] ->


@ -193,7 +193,7 @@ setOp ::
Has MkTypename r,
Has NamingCase r,
MonadError QErr m,
P.MonadSchema n m
P.MonadParse n
) =>
UpdateOperator b m n (UnpreparedValue b)
setOp = UpdateOperator {..}
@ -220,7 +220,7 @@ incOp ::
( Backend b,
MonadReader r m,
MonadError QErr m,
P.MonadSchema n m,
P.MonadParse n,
BackendSchema b,
Has MkTypename r,
Has NamingCase r


@ -2,7 +2,7 @@ module Hasura.GraphQL.Parser.MonadParseTest (spec) where
import Data.Aeson.Internal
import Hasura.Base.ErrorMessage
import Hasura.GraphQL.Parser.Class.Parse
import Hasura.GraphQL.Parser.Class
import Hasura.GraphQL.Parser.ErrorCode
import Hasura.GraphQL.Parser.Monad
import Hasura.GraphQL.Parser.TestInstances ()


@ -4,6 +4,7 @@
module Hasura.GraphQL.Schema.RemoteTest (spec) where
import Control.Lens (Prism', prism', to, (^..), _Right)
import Control.Monad.Memoize
import Data.Aeson qualified as J
import Data.Aeson.Key qualified as K
import Data.Aeson.KeyMap qualified as KM
@ -138,7 +139,7 @@ buildQueryParsers introspection = do
RemoteSchemaParser query _ _ <-
runError $
flip runReaderT schemaInfo $
P.runSchemaT $
runMemoizeT $
buildRemoteParser introResult remoteSchemaRels remoteSchemaInfo
pure $
head query <&> \case


@ -11,12 +11,13 @@ module Test.Parser.Monad
)
where
import Control.Monad.Memoize
import Data.Aeson.Internal (JSONPathElement)
import Data.Has (Has (..))
import Data.Text qualified as T
import Hasura.Base.Error (QErr)
import Hasura.Base.ErrorMessage
import Hasura.GraphQL.Parser.Class (MonadParse (..), MonadSchema (..))
import Hasura.GraphQL.Parser.Class
import Hasura.GraphQL.Parser.ErrorCode
import Hasura.GraphQL.Schema.Common (SchemaContext (..), SchemaKind (..), ignoreRemoteRelationship)
import Hasura.GraphQL.Schema.NamingCase
@ -132,8 +133,8 @@ newtype ParserTestT a = ParserTestT (Either (IO ()) a)
deriving stock (Functor)
deriving (Applicative, Monad) via (Either (IO ()))
instance MonadSchema ParserTestT SchemaTestT where
memoizeOn :: TH.Name -> a -> SchemaTestT (p ParserTestT b) -> SchemaTestT (p ParserTestT b)
instance MonadMemoize SchemaTestT where
memoizeOn :: TH.Name -> a -> SchemaTestT p -> SchemaTestT p
memoizeOn _ _ = id
instance MonadParse ParserTestT where