Support tracking SQL functions as mutations. Closes #1514 (#34)

* Support tracking SQL functions as mutations. Closes #1514

Co-authored-by: Brandon Simmons <brandon.m.simmons@gmail.com>
Co-authored-by: Brandon Simmons <brandon@hasura.io>
GITHUB_PR_NUMBER: 6160
GITHUB_PR_URL: https://github.com/hasura/graphql-engine/pull/6160

* Update docs/graphql/core/api-reference/schema-metadata-api/custom-functions.rst

Co-authored-by: Marion Schleifer <marion@hasura.io>

* Update docs/graphql/core/schema/custom-functions.rst

Co-authored-by: Marion Schleifer <marion@hasura.io>

* Update docs/graphql/core/schema/custom-functions.rst

Co-authored-by: Marion Schleifer <marion@hasura.io>

* Update docs/graphql/core/schema/custom-functions.rst

Co-authored-by: Brandon Simmons <brandon@hasura.io>
Co-authored-by: Brandon Simmons <brandon.m.simmons@gmail.com>
Co-authored-by: Marion Schleifer <marion@hasura.io>
GitOrigin-RevId: 8fd39258641ecace6e3e9930e497b1655ad35080
hasura-bot 2020-11-18 19:04:57 +01:00
parent 1477ada1bb
commit fb902d4209
25 changed files with 794 additions and 161 deletions


@ -19,6 +19,28 @@ query {
where the articles are fetched from the database, and the weather is fetched from a remote server.
### Support tracking VOLATILE SQL functions as mutations. (closing #1514)
Previously we could only track `STABLE` or `IMMUTABLE` functions, and only as
queries. Now version 2 of `track_function` also supports tracking functions as
mutations:
```json
{
"type": "track_function",
"version": 2,
"args": {
"function": {
"schema": "public",
"name": "some_volatile_function"
},
"configuration": {
"exposed_as": "mutation"
}
}
}
```
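For example (illustrative only), `some_volatile_function` could be a side-effecting SQL function that returns `SETOF` of an already-tracked table:

```sql
-- Hypothetical sketch: must be VOLATILE and return SETOF of a tracked
-- table ("article" and its columns are assumptions, not from this PR):
CREATE FUNCTION some_volatile_function(min_rating int)
RETURNS SETOF article AS $$
  DELETE FROM article WHERE rating < min_rating RETURNING *;
$$ LANGUAGE sql VOLATILE;
```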
### Breaking changes
This release contains the [PDV refactor (#4111)](https://github.com/hasura/graphql-engine/pull/4111), a significant rewrite of the internals of the server, which did include some breaking changes:


@ -24,7 +24,7 @@ Only tracked custom functions are available for querying/mutating/subscribing da
track_function
--------------
``track_function`` is used to add a custom SQL function to the GraphQL schema.
``track_function`` is used to add a custom SQL function to the ``query`` root field of the GraphQL schema.
Also refer to the note :ref:`here <note>`.
Add an SQL function ``search_articles``:
@ -48,10 +48,12 @@ Add an SQL function ``search_articles``:
track_function v2
-----------------
Version 2 of ``track_function`` is used to add a custom SQL function to the GraphQL schema with configuration.
Version 2 of ``track_function`` is used to add a custom SQL function to the GraphQL schema.
It supports more configuration options than v1, and also supports tracking
functions as mutations.
Also refer to the note :ref:`here <note>`.
Add an SQL function called ``search_articles`` with a Hasura session argument.
Track an SQL function called ``search_articles`` with a Hasura session argument:
.. code-block:: http
@ -73,6 +75,48 @@ Add an SQL function called ``search_articles`` with a Hasura session argument.
}
}
Track ``VOLATILE`` SQL function ``reset_widget`` as a mutation, so it appears
as a top-level field under the ``mutation`` root field:
.. code-block:: http
POST /v1/query HTTP/1.1
Content-Type: application/json
X-Hasura-Role: admin
{
"type": "track_function",
"version": 2,
"args": {
"function": {
"schema": "public",
"name": "reset_widget"
},
"configuration": {
"exposed_as": "mutation"
}
}
}
If ``exposed_as`` is omitted, the function's location in the schema is inferred
from its volatility: ``VOLATILE`` functions appear under the ``mutation`` root,
and all other functions under the ``query`` and ``subscription`` roots.
In most cases you will want ``VOLATILE`` functions to be exposed only as
mutations, and ``STABLE`` or ``IMMUTABLE`` functions only as queries.
When tracking ``VOLATILE`` functions under the ``query`` root, the user needs
to guarantee that the field is idempotent and side-effect free, in the context
of the resulting GraphQL API.
One such use case might be a function that wraps a simple query and performs
some logging visible only to administrators.
.. note::
It's easy to accidentally give an SQL function the wrong volatility (or for a
function to end up with ``VOLATILE`` mistakenly, since it's the default).
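As a concrete illustration (the function and table names here are hypothetical), a function defined without an explicit volatility keyword defaults to ``VOLATILE``, even if its body only reads data:

.. code-block:: plpgsql

   -- Defaults to VOLATILE because no volatility keyword is given,
   -- even though the body only reads data:
   CREATE FUNCTION list_articles()
   RETURNS SETOF article AS $$
     SELECT * FROM article
   $$ LANGUAGE sql;

   -- Declaring the intended volatility explicitly avoids this footgun:
   ALTER FUNCTION list_articles() STABLE;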
.. _track_function_args_syntax_v2:
Args syntax
@ -110,6 +154,10 @@ Function Configuration
- false
- `String`
- Function argument which accepts session info JSON
* - exposed_as
- false
- `String`
- The part of the schema under which to expose the function: either ``"mutation"`` or ``"query"``.
.. _note:
@ -118,8 +166,8 @@ Function Configuration
Currently, only functions which satisfy the following constraints can be exposed over the GraphQL API
(*terminology from* `Postgres docs <https://www.postgresql.org/docs/current/sql-createfunction.html>`__):
- **Function behaviour**: ONLY ``STABLE`` or ``IMMUTABLE``
- **Return type**: MUST be ``SETOF <table-name>``
- **Function behaviour**: ``STABLE`` or ``IMMUTABLE`` functions may *only* be exposed as queries (i.e. with ``exposed_as: query``)
- **Return type**: MUST be ``SETOF <table-name>`` where ``<table-name>`` is already tracked
- **Argument modes**: ONLY ``IN``
.. _untrack_function:


@ -20,7 +20,7 @@ that can be used to either encapsulate some custom business logic or extend the
are also referred to as **stored procedures**.
Hasura GraphQL engine lets you expose certain types of custom functions as top level fields in the GraphQL API to allow
querying them using both ``queries`` and ``subscriptions``.
querying them as either ``queries`` or ``subscriptions``, or (for ``VOLATILE`` functions) as ``mutations``.
.. note::
@ -34,8 +34,8 @@ Supported SQL functions
Currently, only functions which satisfy the following constraints can be exposed as top level fields in the GraphQL API
(*terminology from* `Postgres docs <https://www.postgresql.org/docs/current/sql-createfunction.html>`__):
- **Function behaviour**: ONLY ``STABLE`` or ``IMMUTABLE``
- **Return type**: MUST be ``SETOF <table-name>``
- **Function behaviour**: ``STABLE`` or ``IMMUTABLE`` functions may *only* be exposed as queries (i.e. with ``exposed_as: query``)
- **Return type**: MUST be ``SETOF <table-name>`` where ``<table-name>`` is already tracked
- **Argument modes**: ONLY ``IN``
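For instance, a function satisfying all three constraints could be defined as follows (an illustrative sketch, assuming an ``article`` table that is already tracked):

.. code-block:: plpgsql

   -- STABLE, returns SETOF of a tracked table, and takes only IN arguments:
   CREATE FUNCTION search_articles(search text)
   RETURNS SETOF article AS $$
     SELECT * FROM article
     WHERE title ILIKE ('%' || search || '%')
   $$ LANGUAGE sql STABLE;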
.. _create_sql_functions:
@ -551,6 +551,30 @@ following example.
The specified session argument will not be included in the ``<function-name>_args`` input object in the GraphQL schema.
Tracking functions with side effects
************************************
You can also use the :ref:`track_function_v2 <track_function_v2>` API to track
`VOLATILE functions <https://www.postgresql.org/docs/current/xfunc-volatility.html>`__
as mutations.
Aside from showing up under the ``mutation`` root (and presumably having
side-effects), these tracked functions behave the same as described above for
``queries``.
We also permit tracking ``VOLATILE`` functions under the ``query`` root, in
which case the user needs to guarantee that the field is idempotent and
side-effect free, in the context of the resulting GraphQL API. One such use
case might be a function that wraps a simple query and performs some logging
visible only to administrators.
.. note::
It's easy to accidentally give an SQL function the wrong volatility (or for a
function to end up with ``VOLATILE`` mistakenly, since it's the default).
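A ``VOLATILE`` function suitable for tracking as a mutation might, for example, perform an update and return the affected rows. The following sketch is illustrative only and assumes an already-tracked ``widget`` table:

.. code-block:: plpgsql

   -- VOLATILE: performs a write, then returns the affected rows:
   CREATE FUNCTION reset_widget(widget_id int)
   RETURNS SETOF widget AS $$
     UPDATE widget SET count = 0 WHERE id = widget_id;
     SELECT * FROM widget WHERE id = widget_id;
   $$ LANGUAGE sql VOLATILE;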
Permissions for custom function queries
---------------------------------------


@ -443,6 +443,7 @@ library
, Hasura.GraphQL.Execute.Plan
, Hasura.GraphQL.Execute.Types
, Hasura.GraphQL.Execute.Mutation
, Hasura.GraphQL.Execute.Common
, Hasura.GraphQL.Execute.Remote
, Hasura.GraphQL.Execute.Resolve
, Hasura.GraphQL.Execute.Prepare


@ -106,6 +106,9 @@ data MutationDB (b :: BackendType) v
= MDBInsert (IR.AnnInsert b v)
| MDBUpdate (IR.AnnUpdG b v)
| MDBDelete (IR.AnnDelG b v)
| MDBFunction (IR.AnnSimpleSelG b v)
-- ^ This represents a VOLATILE function, and is AnnSimpleSelG for easy
-- re-use of non-VOLATILE function tracking code.
data ActionMutation (b :: BackendType) v
= AMSync !(RQL.AnnActionExecution b v)


@ -0,0 +1,107 @@
module Hasura.GraphQL.Execute.Common
where
-- Code shared between Hasura.GraphQL.Execute.Query and .Mutation
import Hasura.Prelude
import qualified Data.Aeson as J
import qualified Data.Environment as Env
import qualified Data.IntMap as IntMap
import qualified Database.PG.Query as Q
import qualified Network.HTTP.Client as HTTP
import qualified Network.HTTP.Types as HTTP
import qualified Hasura.Backends.Postgres.SQL.DML as S
import qualified Hasura.Backends.Postgres.Translate.Select as DS
import qualified Hasura.RQL.IR.Select as DS
import qualified Hasura.Tracing as Tracing
import Hasura.Backends.Postgres.Connection
import Hasura.Backends.Postgres.Execute.RemoteJoin
import Hasura.Backends.Postgres.SQL.Value
import Hasura.Backends.Postgres.Translate.Select (asSingleRowJsonResp)
import Hasura.EncJSON
import Hasura.GraphQL.Context
import Hasura.GraphQL.Execute.Action
import Hasura.GraphQL.Execute.Prepare
import Hasura.RQL.Types
import Hasura.Server.Version (HasVersion)
import Hasura.Session
data PreparedSql
= PreparedSql
{ _psQuery :: !Q.Query
, _psPrepArgs :: !PrepArgMap
, _psRemoteJoins :: !(Maybe (RemoteJoins 'Postgres))
}
-- | Required to log in `query-log`
instance J.ToJSON PreparedSql where
toJSON (PreparedSql q prepArgs _) =
J.object [ "query" J..= Q.getQueryText q
, "prepared_arguments" J..= fmap (pgScalarValueToJson . snd) prepArgs
]
data RootFieldPlan
= RFPPostgres !PreparedSql
| RFPActionQuery !ActionExecuteTx
instance J.ToJSON RootFieldPlan where
toJSON = \case
RFPPostgres pgPlan -> J.toJSON pgPlan
RFPActionQuery _ -> J.String "Action Execution Tx"
-- | A method for extracting profiling data from instrumented query results.
newtype ExtractProfile = ExtractProfile
{ runExtractProfile :: forall m. (MonadIO m, Tracing.MonadTrace m) => EncJSON -> m EncJSON
}
-- turn the current plan into a transaction
mkCurPlanTx
:: ( HasVersion
, MonadIO tx
, MonadTx tx
, Tracing.MonadTrace tx
)
=> Env.Environment
-> HTTP.Manager
-> [HTTP.Header]
-> UserInfo
-> (Q.Query -> Q.Query)
-> ExtractProfile
-> RootFieldPlan
-> (tx EncJSON, Maybe PreparedSql)
mkCurPlanTx env manager reqHdrs userInfo instrument ep = \case
-- generate the SQL and prepared vars or the bytestring
RFPPostgres ps@(PreparedSql q prepMap remoteJoinsM) ->
let args = withUserVars (_uiSession userInfo) prepMap
-- WARNING: this quietly assumes the intmap keys are contiguous
prepArgs = fst <$> IntMap.elems args
in (, Just ps) $ case remoteJoinsM of
Nothing -> do
Tracing.trace "Postgres" $ runExtractProfile ep =<< liftTx do
asSingleRowJsonResp (instrument q) prepArgs
Just remoteJoins ->
executeQueryWithRemoteJoins env manager reqHdrs userInfo q prepArgs remoteJoins
RFPActionQuery atx -> (atx, Nothing)
-- convert a query from an intermediate representation to... another
irToRootFieldPlan
:: PrepArgMap
-> QueryDB 'Postgres S.SQLExp -> PreparedSql
irToRootFieldPlan prepped = \case
QDBSimple s -> mkPreparedSql getRemoteJoins (DS.selectQuerySQL DS.JASMultipleRows) s
QDBPrimaryKey s -> mkPreparedSql getRemoteJoins (DS.selectQuerySQL DS.JASSingleObject) s
QDBAggregation s -> mkPreparedSql getRemoteJoinsAggregateSelect DS.selectAggregateQuerySQL s
QDBConnection s -> mkPreparedSql getRemoteJoinsConnectionSelect DS.connectionSelectQuerySQL s
where
mkPreparedSql :: (s -> (t, Maybe (RemoteJoins 'Postgres))) -> (t -> Q.Query) -> s -> PreparedSql
mkPreparedSql getJoins f simpleSel =
let (simpleSel',remoteJoins) = getJoins simpleSel
in PreparedSql (f simpleSel') prepped remoteJoins
-- | A default implementation for queries with no instrumentation
noProfile :: ExtractProfile
noProfile = ExtractProfile pure


@ -20,6 +20,7 @@ import qualified Hasura.Logging as L
import qualified Hasura.RQL.IR.Delete as IR
import qualified Hasura.RQL.IR.Insert as IR
import qualified Hasura.RQL.IR.Returning as IR
import qualified Hasura.RQL.IR.Select as IR
import qualified Hasura.RQL.IR.Update as IR
import qualified Hasura.Tracing as Tracing
@ -27,6 +28,7 @@ import Hasura.Backends.Postgres.Connection
import Hasura.EncJSON
import Hasura.GraphQL.Context
import Hasura.GraphQL.Execute.Action
import Hasura.GraphQL.Execute.Common
import Hasura.GraphQL.Execute.Insert
import Hasura.GraphQL.Execute.Prepare
import Hasura.GraphQL.Execute.Remote
@ -92,24 +94,6 @@ convertInsert env usrVars remoteJoinCtx insertOperation stringifyNum = do
validateSessionVariables expectedVariables usrVars
pure $ convertToSQLTransaction env preparedInsert remoteJoinCtx Seq.empty stringifyNum
convertMutationDB
:: ( HasVersion
, MonadIO m
, MonadError QErr m
, Tracing.MonadTrace tx
, MonadIO tx
, MonadTx tx
)
=> Env.Environment
-> SessionVariables
-> PGE.MutationRemoteJoinCtx
-> Bool
-> MutationDB 'Postgres UnpreparedValue
-> m (tx EncJSON, HTTP.ResponseHeaders)
convertMutationDB env userSession remoteJoinCtx stringifyNum = \case
MDBInsert s -> noResponseHeaders <$> convertInsert env userSession remoteJoinCtx s stringifyNum
MDBUpdate s -> noResponseHeaders <$> convertUpdate env userSession remoteJoinCtx s stringifyNum
MDBDelete s -> noResponseHeaders <$> convertDelete env userSession remoteJoinCtx s stringifyNum
noResponseHeaders :: tx EncJSON -> (tx EncJSON, HTTP.ResponseHeaders)
noResponseHeaders rTx = (rTx, [])
@ -159,7 +143,7 @@ convertMutationSelectionSet
-> [G.VariableDefinition]
-> Maybe GH.VariableValues
-> m (ExecutionPlan (tx EncJSON, HTTP.ResponseHeaders))
convertMutationSelectionSet env logger gqlContext sqlGenCtx userInfo manager reqHeaders fields varDefs varValsM = do
convertMutationSelectionSet env logger gqlContext SQLGenCtx{stringifyNum} userInfo manager reqHeaders fields varDefs varValsM = do
mutationParser <- onNothing (gqlMutationParser gqlContext) $
throw400 ValidationFailed "no mutations exist"
-- Parse the GraphQL query into the RQL AST
@ -172,7 +156,12 @@ convertMutationSelectionSet env logger gqlContext sqlGenCtx userInfo manager req
let userSession = _uiSession userInfo
remoteJoinCtx = (manager, reqHeaders, userInfo)
txs <- for unpreparedQueries \case
RFDB db -> ExecStepDB <$> convertMutationDB env userSession remoteJoinCtx (stringifyNum sqlGenCtx) db
RFDB db -> ExecStepDB . noResponseHeaders <$> case db of
MDBInsert s -> convertInsert env userSession remoteJoinCtx s stringifyNum
MDBUpdate s -> convertUpdate env userSession remoteJoinCtx s stringifyNum
MDBDelete s -> convertDelete env userSession remoteJoinCtx s stringifyNum
MDBFunction s -> convertFunction env userInfo manager reqHeaders s
RFRemote (remoteSchemaInfo, remoteField) ->
pure $ buildExecStepRemote
remoteSchemaInfo
@ -191,3 +180,32 @@ convertMutationSelectionSet env logger gqlContext sqlGenCtx userInfo manager req
-- here. It would be nice to report all of them!
ParseError{ pePath, peMessage, peCode } ->
throwError (err400 peCode peMessage){ qePath = pePath }
-- | A pared-down version of 'Query.convertQuerySelSet', for use in execution of
-- special case of SQL function mutations (see 'MDBFunction').
convertFunction
:: forall m tx .
( MonadError QErr m
, HasVersion
, MonadIO tx
, MonadTx tx
, Tracing.MonadTrace tx
)
=> Env.Environment
-> UserInfo
-> HTTP.Manager
-> HTTP.RequestHeaders
-> IR.AnnSimpleSelG 'Postgres UnpreparedValue
-- ^ VOLATILE function as 'SelectExp'
-> m (tx EncJSON)
convertFunction env userInfo manager reqHeaders unpreparedQuery = do
-- Transform the RQL AST into a prepared SQL query
(preparedQuery, PlanningSt _ _ planVals expectedVariables)
<- flip runStateT initPlanningSt
$ IR.traverseAnnSimpleSelect prepareWithPlan unpreparedQuery
validateSessionVariables expectedVariables $ _uiSession userInfo
pure $!
fst $ -- forget (Maybe PreparedSql)
mkCurPlanTx env manager reqHeaders userInfo id noProfile $
RFPPostgres $ irToRootFieldPlan planVals $ QDBSimple preparedQuery


@ -4,7 +4,6 @@ module Hasura.GraphQL.Execute.Query
-- , ReusableQueryPlan
, PreparedSql(..)
, traverseQueryRootField -- for live query planning
, irToRootFieldPlan
, parseGraphQLQuery
, MonadQueryInstrumentation(..)
@ -18,14 +17,12 @@ import qualified Data.Aeson as J
import qualified Data.Environment as Env
import qualified Data.HashMap.Strict as Map
import qualified Data.HashMap.Strict.InsOrd as OMap
import qualified Data.IntMap as IntMap
import qualified Data.Sequence.NonEmpty as NESeq
import qualified Database.PG.Query as Q
import qualified Language.GraphQL.Draft.Syntax as G
import qualified Network.HTTP.Client as HTTP
import qualified Network.HTTP.Types as HTTP
import qualified Hasura.Backends.Postgres.SQL.DML as S
import qualified Hasura.Backends.Postgres.Translate.Select as DS
import qualified Hasura.GraphQL.Transport.HTTP.Protocol as GH
import qualified Hasura.Logging as L
@ -33,12 +30,10 @@ import qualified Hasura.RQL.IR.Select as DS
import qualified Hasura.Tracing as Tracing
import Hasura.Backends.Postgres.Connection
import Hasura.Backends.Postgres.Execute.RemoteJoin
import Hasura.Backends.Postgres.SQL.Value
import Hasura.Backends.Postgres.Translate.Select (asSingleRowJsonResp)
import Hasura.EncJSON
import Hasura.GraphQL.Context
import Hasura.GraphQL.Execute.Action
import Hasura.GraphQL.Execute.Common
import Hasura.GraphQL.Execute.Prepare
import Hasura.GraphQL.Execute.Remote
import Hasura.GraphQL.Execute.Resolve
@ -48,29 +43,6 @@ import Hasura.Server.Version (HasVersion)
import Hasura.Session
data PreparedSql
= PreparedSql
{ _psQuery :: !Q.Query
, _psPrepArgs :: !PrepArgMap
, _psRemoteJoins :: !(Maybe (RemoteJoins 'Postgres))
}
-- | Required to log in `query-log`
instance J.ToJSON PreparedSql where
toJSON (PreparedSql q prepArgs _) =
J.object [ "query" J..= Q.getQueryText q
, "prepared_arguments" J..= fmap (pgScalarValueToJson . snd) prepArgs
]
data RootFieldPlan
= RFPPostgres !PreparedSql
| RFPActionQuery !ActionExecuteTx
instance J.ToJSON RootFieldPlan where
toJSON = \case
RFPPostgres pgPlan -> J.toJSON pgPlan
RFPActionQuery _ -> J.String "Action Execution Tx"
data ActionQueryPlan (b :: BackendType)
= AQPAsyncQuery !(DS.AnnSimpleSel b) -- ^ Cacheable plan
| AQPQuery !ActionExecuteTx -- ^ Non cacheable transaction
@ -113,49 +85,6 @@ actionQueryToRootFieldPlan prepped = \case
-- let prepVal = (toBinaryValue colVal, pstValue colVal)
-- return $ IntMap.insert prepNo prepVal accum
-- turn the current plan into a transaction
mkCurPlanTx
:: ( HasVersion
, MonadIO tx
, MonadTx tx
, Tracing.MonadTrace tx
)
=> Env.Environment
-> HTTP.Manager
-> [HTTP.Header]
-> UserInfo
-> (Q.Query -> Q.Query)
-> ExtractProfile
-> RootFieldPlan
-> (tx EncJSON, Maybe PreparedSql)
mkCurPlanTx env manager reqHdrs userInfo instrument ep = \case
-- generate the SQL and prepared vars or the bytestring
RFPPostgres ps@(PreparedSql q prepMap remoteJoinsM) ->
let args = withUserVars (_uiSession userInfo) prepMap
-- WARNING: this quietly assumes the intmap keys are contiguous
prepArgs = fst <$> IntMap.elems args
in (, Just ps) $ case remoteJoinsM of
Nothing -> do
Tracing.trace "Postgres" $ runExtractProfile ep =<< liftTx do
asSingleRowJsonResp (instrument q) prepArgs
Just remoteJoins ->
executeQueryWithRemoteJoins env manager reqHdrs userInfo q prepArgs remoteJoins
RFPActionQuery atx -> (atx, Nothing)
-- convert a query from an intermediate representation to... another
irToRootFieldPlan
:: PrepArgMap
-> QueryDB 'Postgres S.SQLExp -> PreparedSql
irToRootFieldPlan prepped = \case
QDBSimple s -> mkPreparedSql getRemoteJoins (DS.selectQuerySQL DS.JASMultipleRows) s
QDBPrimaryKey s -> mkPreparedSql getRemoteJoins (DS.selectQuerySQL DS.JASSingleObject) s
QDBAggregation s -> mkPreparedSql getRemoteJoinsAggregateSelect DS.selectAggregateQuerySQL s
QDBConnection s -> mkPreparedSql getRemoteJoinsConnectionSelect DS.connectionSelectQuerySQL s
where
mkPreparedSql :: (s -> (t, Maybe (RemoteJoins 'Postgres))) -> (t -> Q.Query) -> s -> PreparedSql
mkPreparedSql getJoins f simpleSel =
let (simpleSel',remoteJoins) = getJoins simpleSel
in PreparedSql (f simpleSel') prepped remoteJoins
traverseQueryRootField
:: forall f a b c d h backend
@ -189,15 +118,6 @@ parseGraphQLQuery gqlContext varDefs varValsM fields =
ParseError{ pePath, peMessage, peCode } ->
throwError (err400 peCode peMessage){ qePath = pePath }
-- | A method for extracting profiling data from instrumented query results.
newtype ExtractProfile = ExtractProfile
{ runExtractProfile :: forall m. (MonadIO m, Tracing.MonadTrace m) => EncJSON -> m EncJSON
}
-- | A default implementation for queries with no instrumentation
noProfile :: ExtractProfile
noProfile = ExtractProfile pure
-- | Monads which support query instrumentation
class Monad m => MonadQueryInstrumentation m where
-- | Returns the appropriate /instrumentation/ (if any) for a SQL query, as


@ -80,6 +80,7 @@ buildGQLContext =
adminHasuraDBContext <- bindA -<
buildFullestDBSchema queryContext allTables allFunctions allActionInfos nonObjectCustomTypes
-- TODO factor out the common function; throw500 in both cases:
queryFieldNames :: [G.Name] <- bindA -<
case P.discardNullability $ P.parserType $ fst adminHasuraDBContext of
-- It really ought to be this case; anything else is a programming error.
@ -138,11 +139,11 @@ buildRoleContext queryContext (takeValidTables -> allTables) (takeValidFunctions
runMonadSchema roleName queryContext allTables $ do
mutationParserFrontend <-
buildPGMutationFields Frontend tableNames >>=
buildMutationParser mutationRemotes allActionInfos nonObjectCustomTypes
buildMutationParser mutationRemotes allActionInfos nonObjectCustomTypes allFunctions
mutationParserBackend <-
buildPGMutationFields Backend tableNames >>=
buildMutationParser mutationRemotes allActionInfos nonObjectCustomTypes
buildMutationParser mutationRemotes allActionInfos nonObjectCustomTypes allFunctions
queryPGFields <- buildPostgresQueryFields tableNames allFunctions
subscriptionParser <- buildSubscriptionParser queryPGFields allActionInfos
@ -161,6 +162,13 @@ buildRoleContext queryContext (takeValidTables -> allTables) (takeValidFunctions
where
tableNames = Map.keysSet allTables
-- TODO why do we do these validations at this point? What does it mean to track
-- a function but not add it to the schema...?
-- Auke:
-- I believe the intention is simply to allow the console to do postgres data management
-- Karthikeyan: Yes, this is correct. We allowed this pre PDV but somehow
-- got removed in PDV. OTOH, I'm not sure how prevalent this feature
-- actually is
takeValidTables :: TableCache -> TableCache
takeValidTables = Map.filterWithKey graphQLTableFilter . Map.filter tableFilter
where
@ -171,11 +179,16 @@ takeValidTables = Map.filterWithKey graphQLTableFilter . Map.filter tableFilter
isGraphQLCompliantTableName tableName
|| (isJust . _tcCustomName . _tciCustomConfig . _tiCoreInfo $ tableInfo)
-- TODO and what about graphql-compliant function names here too?
takeValidFunctions :: FunctionCache -> [FunctionInfo]
takeValidFunctions = Map.elems . Map.filter functionFilter
where
functionFilter = not . isSystemDefined . fiSystemDefined
takeExposedAs :: FunctionExposedAs -> [FunctionInfo] -> [FunctionInfo]
takeExposedAs x = filter ((== x) . fiExposedAs)
buildFullestDBSchema
:: (MonadError QErr m, MonadIO m, MonadUnique m)
=> QueryContext -> TableCache -> FunctionCache -> [ActionInfo 'Postgres] -> NonObjectTypeMap
@ -187,7 +200,9 @@ buildFullestDBSchema queryContext (takeValidTables -> allTables) (takeValidFunct
runMonadSchema adminRoleName queryContext allTables $ do
mutationParserFrontend <-
buildPGMutationFields Frontend tableNames >>=
buildMutationParser mempty allActionInfos nonObjectCustomTypes
-- NOTE: we omit remotes here on purpose since we're trying to check name
-- clashes with remotes:
buildMutationParser mempty allActionInfos nonObjectCustomTypes allFunctions
queryPGFields <- buildPostgresQueryFields tableNames allFunctions
subscriptionParser <- buildSubscriptionParser queryPGFields allActionInfos
@ -212,11 +227,11 @@ buildRelayRoleContext queryContext (takeValidTables -> allTables) (takeValidFunc
runMonadSchema roleName queryContext allTables $ do
mutationParserFrontend <-
buildPGMutationFields Frontend tableNames >>=
buildMutationParser mutationRemotes allActionInfos nonObjectCustomTypes
buildMutationParser mutationRemotes allActionInfos nonObjectCustomTypes allFunctions
mutationParserBackend <-
buildPGMutationFields Backend tableNames >>=
buildMutationParser mutationRemotes allActionInfos nonObjectCustomTypes
buildMutationParser mutationRemotes allActionInfos nonObjectCustomTypes allFunctions
queryPGFields <- buildRelayPostgresQueryFields tableNames allFunctions
subscriptionParser <- P.safeSelectionSet subscriptionRoot Nothing queryPGFields
@ -316,7 +331,7 @@ buildPostgresQueryFields
=> HashSet QualifiedTable
-> [FunctionInfo]
-> m [P.FieldParser n (QueryRootField UnpreparedValue)]
buildPostgresQueryFields allTables allFunctions = do
buildPostgresQueryFields allTables (takeExposedAs FEAQuery -> queryFunctions) = do
tableSelectExpParsers <- for (toList allTables) \table -> do
selectPerms <- tableSelectPermissions table
customRootFields <- _tcCustomRootFields . _tciCustomConfig . _tiCoreInfo <$> askTableInfo table
@ -332,7 +347,7 @@ buildPostgresQueryFields allTables allFunctions = do
, mapMaybeFieldParser (RFDB . QDBPrimaryKey) $ selectTableByPk table (fromMaybe pkName $ _tcrfSelectByPk customRootFields) (Just pkDesc) perms
, mapMaybeFieldParser (RFDB . QDBAggregation) $ selectTableAggregate table (fromMaybe aggName $ _tcrfSelectAggregate customRootFields) (Just aggDesc) perms
]
functionSelectExpParsers <- for allFunctions \function -> do
functionSelectExpParsers <- for queryFunctions \function -> do
let targetTable = fiReturnType function
functionName = fiName function
selectPerms <- tableSelectPermissions targetTable
@ -347,12 +362,14 @@ buildPostgresQueryFields allTables allFunctions = do
]
pure $ (concat . catMaybes) (tableSelectExpParsers <> functionSelectExpParsers)
where
requiredFieldParser :: (a -> b) -> m (P.FieldParser n a) -> m (Maybe (P.FieldParser n b))
requiredFieldParser f = fmap $ Just . fmap f
mapMaybeFieldParser :: (a -> b) -> m (Maybe (P.FieldParser n a)) -> m (Maybe (P.FieldParser n b))
mapMaybeFieldParser f = fmap $ fmap $ fmap f
requiredFieldParser
:: (Functor n, Functor m)=> (a -> b) -> m (P.FieldParser n a) -> m (Maybe (P.FieldParser n b))
requiredFieldParser f = fmap $ Just . fmap f
-- | Includes remote schema fields and actions
buildActionQueryFields
:: forall m n r
@ -402,7 +419,7 @@ buildRelayPostgresQueryFields
=> HashSet QualifiedTable
-> [FunctionInfo]
-> m [P.FieldParser n (QueryRootField UnpreparedValue)]
buildRelayPostgresQueryFields allTables allFunctions = do
buildRelayPostgresQueryFields allTables (takeExposedAs FEAQuery -> queryFunctions) = do
tableConnectionFields <- for (toList allTables) $ \table -> runMaybeT do
pkeyColumns <- MaybeT $ (^? tiCoreInfo.tciPrimaryKey._Just.pkColumns)
<$> askTableInfo table
@ -412,7 +429,7 @@ buildRelayPostgresQueryFields allTables allFunctions = do
fieldDesc = Just $ G.Description $ "fetch data from the table: " <>> table
lift $ selectTableConnection table fieldName fieldDesc pkeyColumns selectPerms
functionConnectionFields <- for allFunctions $ \function -> runMaybeT do
functionConnectionFields <- for queryFunctions $ \function -> runMaybeT do
let returnTable = fiReturnType function
functionName = fiName function
pkeyColumns <- MaybeT $ (^? tiCoreInfo.tciPrimaryKey._Just.pkColumns)
@ -626,9 +643,27 @@ buildMutationParser
=> [P.FieldParser n RemoteField]
-> [ActionInfo 'Postgres]
-> NonObjectTypeMap
-> [FunctionInfo]
-- ^ all "valid" functions
-> [P.FieldParser n (MutationRootField UnpreparedValue)]
-> m (Maybe (Parser 'Output n (OMap.InsOrdHashMap G.Name (MutationRootField UnpreparedValue))))
buildMutationParser allRemotes allActions nonObjectCustomTypes pgMutationFields = do
buildMutationParser allRemotes allActions nonObjectCustomTypes
(takeExposedAs FEAMutation -> mutationFunctions) pgMutationFields = do
-- NOTE: this is basically copied from functionSelectExpParsers body
functionMutationExpParsers <- for mutationFunctions \function@FunctionInfo{..} -> do
selectPerms <- tableSelectPermissions fiReturnType
for selectPerms \perms -> do
displayName <- qualifiedObjectToName fiName
let functionDesc = G.Description $
"execute VOLATILE function " <> fiName <<> " which returns " <>> fiReturnType
catMaybes <$> sequenceA
[ requiredFieldParser (RFDB . MDBFunction) $
selectFunction function displayName (Just functionDesc) perms
-- FWIW: The equivalent of this is possible for mutations; do we want that?:
-- , mapMaybeFieldParser (RFDB . QDBAggregation) $ selectFunctionAggregate function aggName (Just aggDesc) perms
]
actionParsers <- for allActions $ \actionInfo ->
case _adType (_aiDefinition actionInfo) of
ActionMutation ActionSynchronous ->
@ -636,7 +671,12 @@ buildMutationParser allRemotes allActions nonObjectCustomTypes pgMutationFields
ActionMutation ActionAsynchronous ->
fmap (fmap (RFAction . AMAsync)) <$> actionAsyncMutation nonObjectCustomTypes actionInfo
ActionQuery -> pure Nothing
let mutationFieldsParser = pgMutationFields <> catMaybes actionParsers <> fmap (fmap RFRemote) allRemotes
let mutationFieldsParser =
pgMutationFields <>
concat (catMaybes functionMutationExpParsers) <>
catMaybes actionParsers <>
fmap (fmap RFRemote) allRemotes
if null mutationFieldsParser
then pure Nothing
else P.safeSelectionSet mutationRoot (Just $ G.Description "mutation root") mutationFieldsParser


@ -173,6 +173,9 @@ instance Arbitrary TableMetadata where
instance Arbitrary FunctionConfig where
arbitrary = genericArbitrary
instance Arbitrary FunctionExposedAs where
arbitrary = genericArbitrary
instance Arbitrary TrackFunctionV2 where
arbitrary = genericArbitrary


@ -41,7 +41,7 @@ data FunctionMeta
= FunctionMeta
{ fmOid :: !OID
, fmFunction :: !QualifiedFunction
, fmType :: !FunctionType
, fmType :: !FunctionVolatility
} deriving (Show, Eq)
$(deriveJSON (aesonDrop 2 snakeCase) ''FunctionMeta)
@ -218,7 +218,7 @@ fetchFunctionMeta =
data FunctionDiff
= FunctionDiff
{ fdDropped :: ![QualifiedFunction]
, fdAltered :: ![(QualifiedFunction, FunctionType)]
, fdAltered :: ![(QualifiedFunction, FunctionVolatility)]
} deriving (Show, Eq)
getFuncDiff :: [FunctionMeta] -> [FunctionMeta] -> FunctionDiff


@ -12,7 +12,7 @@ import qualified Data.Sequence as Seq
import qualified Data.Text as T
import qualified Database.PG.Query as Q
import Control.Lens
import Control.Lens hiding ((.=))
import Data.Aeson
import Data.Text.Extended
import Language.Haskell.TH.Syntax (Lift)
@ -57,7 +57,7 @@ data FunctionIntegrityError
| FunctionReturnNotCompositeType
| FunctionReturnNotSetof
| FunctionReturnNotSetofTable
| FunctionVolatile
| NonVolatileFunctionAsMutation
| FunctionSessionArgumentNotJSON !FunctionArgName
| FunctionInvalidSessionArgument !FunctionArgName
| FunctionInvalidArgumentNames [FunctionArgName]
@ -70,12 +70,12 @@ mkFunctionInfo
-> FunctionConfig
-> RawFunctionInfo
-> m (FunctionInfo, SchemaDependency)
mkFunctionInfo qf systemDefined config rawFuncInfo =
mkFunctionInfo qf systemDefined FunctionConfig{..} rawFuncInfo =
either (throw400 NotSupported . showErrors) pure
=<< MV.runValidateT validateFunction
where
functionArgs = mkFunctionArgs defArgsNo inpArgTyps inpArgNames
RawFunctionInfo hasVariadic funTy retSn retN retTyTyp retSet
RawFunctionInfo hasVariadic funVol retSn retN retTyTyp retSet
inpArgTyps inpArgNames defArgsNo returnsTab descM
= rawFuncInfo
returnType = QualifiedPGType retSn retN retTyTyp
@ -89,7 +89,22 @@ mkFunctionInfo qf systemDefined config rawFuncInfo =
when (retTyTyp /= PGKindComposite) $ throwValidateError FunctionReturnNotCompositeType
unless retSet $ throwValidateError FunctionReturnNotSetof
unless returnsTab $ throwValidateError FunctionReturnNotSetofTable
when (funTy == FTVOLATILE) $ throwValidateError FunctionVolatile
-- We mostly take the user at their word here and will, e.g. expose a
-- function as a query if it is marked VOLATILE (since perhaps the user
-- is using the function to do some logging, say). But this is also a
-- footgun we'll need to try to document (since `VOLATILE` is default
-- when volatility is omitted). See the original approach here:
-- https://github.com/hasura/graphql-engine/pull/5858
--
-- This is the one exception where we do some validation. We're not
-- committed to this check, and it would be backwards compatible to remove
-- it, but this seemed like an obvious case:
when (funVol /= FTVOLATILE && _fcExposedAs == Just FEAMutation) $
throwValidateError $ NonVolatileFunctionAsMutation
-- If 'exposed_as' is omitted we'll infer it from the volatility:
let exposeAs = flip fromMaybe _fcExposedAs $ case funVol of
FTVOLATILE -> FEAMutation
_ -> FEAQuery
-- validate function argument names
validateFunctionArgNames
@@ -98,7 +113,7 @@ mkFunctionInfo qf systemDefined config rawFuncInfo =
let retTable = typeToTable returnType
pure ( FunctionInfo qf systemDefined funTy inputArguments retTable descM
pure ( FunctionInfo qf systemDefined funVol exposeAs inputArguments retTable descM
, SchemaDependency (SOTable retTable) DRTable
)
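
The volatility validation and `exposed_as` inference in the hunk above can be sketched outside Haskell. `infer_exposed_as` is a hypothetical helper written for illustration, not part of the codebase; it only mirrors the logic of `mkFunctionInfo` as shown in the diff:

```python
def infer_exposed_as(volatility, exposed_as=None):
    """Sketch of mkFunctionInfo's inference: an explicit exposed_as is taken
    at the user's word, except that a non-VOLATILE function may not be
    exposed as a mutation; when exposed_as is omitted, it is inferred from
    the function's volatility."""
    if exposed_as == "mutation" and volatility != "VOLATILE":
        # corresponds to throwValidateError NonVolatileFunctionAsMutation
        raise ValueError("NonVolatileFunctionAsMutation")
    if exposed_as is not None:
        return exposed_as
    return "mutation" if volatility == "VOLATILE" else "query"
```

Note that a VOLATILE function explicitly exposed as a query is allowed (the "footgun" the comment warns about), while the reverse direction is the one validated case.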
@@ -109,7 +124,7 @@ mkFunctionInfo qf systemDefined config rawFuncInfo =
throwValidateError $ FunctionInvalidArgumentNames invalidArgs
makeInputArguments =
case _fcSessionArgument config of
case _fcSessionArgument of
Nothing -> pure $ Seq.fromList $ map IAUserProvided functionArgs
Just sessionArgName -> do
unless (any (\arg -> Just sessionArgName == faName arg) functionArgs) $
@@ -131,7 +146,9 @@ mkFunctionInfo qf systemDefined config rawFuncInfo =
FunctionReturnNotCompositeType -> "the function does not return a \"COMPOSITE\" type"
FunctionReturnNotSetof -> "the function does not return a SETOF"
FunctionReturnNotSetofTable -> "the function does not return a SETOF table"
FunctionVolatile -> "function of type \"VOLATILE\" is not supported now"
NonVolatileFunctionAsMutation ->
"the function was requested to be exposed as a mutation, but is not marked VOLATILE. " <>
"Maybe the function was given the wrong volatility when it was defined?"
FunctionSessionArgumentNotJSON argName ->
"given session argument " <> argName <<> " is not of type json"
FunctionInvalidSessionArgument argName ->
@@ -221,6 +238,9 @@ runTrackFunctionV2 (TrackFunctionV2 qf config) = do
trackFunctionP1 qf
trackFunctionP2 qf config
-- | JSON API payload for 'untrack_function':
--
-- https://hasura.io/docs/1.0/graphql/core/api-reference/schema-metadata-api/custom-functions.html#untrack-function
newtype UnTrackFunction
= UnTrackFunction
{ utfName :: QualifiedFunction }


@@ -9,6 +9,7 @@ import Control.Lens
import Data.Aeson
import Data.Aeson.Casing
import Data.Aeson.TH
import Data.Char (toLower)
import Data.Text.Extended
import Language.Haskell.TH.Syntax (Lift)
@@ -17,21 +18,22 @@ import Hasura.Incremental (Cacheable)
import Hasura.RQL.Types.Common
data FunctionType
-- | https://www.postgresql.org/docs/current/xfunc-volatility.html
data FunctionVolatility
= FTVOLATILE
| FTIMMUTABLE
| FTSTABLE
deriving (Eq, Generic)
instance NFData FunctionType
instance Cacheable FunctionType
$(deriveJSON defaultOptions{constructorTagModifier = drop 2} ''FunctionType)
instance NFData FunctionVolatility
instance Cacheable FunctionVolatility
$(deriveJSON defaultOptions{constructorTagModifier = drop 2} ''FunctionVolatility)
funcTypToTxt :: FunctionType -> Text
funcTypToTxt :: FunctionVolatility -> Text
funcTypToTxt FTVOLATILE = "VOLATILE"
funcTypToTxt FTIMMUTABLE = "IMMUTABLE"
funcTypToTxt FTSTABLE = "STABLE"
instance Show FunctionType where
instance Show FunctionVolatility where
show = T.unpack . funcTypToTxt
newtype FunctionArgName =
@@ -64,13 +66,34 @@ $(makePrisms ''InputArgument)
type FunctionInputArgument = InputArgument FunctionArg
-- | Indicates whether the user requested the corresponding function to be
-- tracked as a mutation or a query/subscription, in @track_function@.
data FunctionExposedAs = FEAQuery | FEAMutation
deriving (Show, Eq, Lift, Generic)
instance NFData FunctionExposedAs
instance Cacheable FunctionExposedAs
$(deriveJSON
defaultOptions{ sumEncoding = UntaggedValue, constructorTagModifier = map toLower . drop 3 }
''FunctionExposedAs)
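
The `constructorTagModifier` above determines the JSON tags that `exposed_as` accepts: it drops the `FEA` prefix and lowercases the rest. A minimal sketch of the same transformation (illustrative only, not code from the repo):

```python
def constructor_tag(name):
    # mirrors constructorTagModifier = map toLower . drop 3
    # e.g. "FEAQuery" -> "query", "FEAMutation" -> "mutation"
    return name[3:].lower()
```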
-- | Tracked SQL function metadata. See 'mkFunctionInfo'.
data FunctionInfo
= FunctionInfo
{ fiName :: !QualifiedFunction
, fiSystemDefined :: !SystemDefined
, fiType :: !FunctionType
, fiVolatility :: !FunctionVolatility
, fiExposedAs :: !FunctionExposedAs
-- ^ In which part of the schema should this function be exposed?
--
-- See 'mkFunctionInfo' and '_fcExposedAs'.
, fiInputArgs :: !(Seq.Seq FunctionInputArgument)
, fiReturnType :: !QualifiedTable
-- ^ NOTE: when a table is created, a new composite type of the same name is
-- automatically created; so strictly speaking this field means "the function
-- returns the composite type corresponding to this table".
, fiDescription :: !(Maybe PGDescription)
} deriving (Show, Eq)
$(deriveToJSON (aesonDrop 2 snakeCase) ''FunctionInfo)
@@ -82,17 +105,30 @@ getInputArgs =
type FunctionCache = HashMap QualifiedFunction FunctionInfo -- info of all functions
-- Metadata requests related types
-- | Tracked function configuration, and payload of the 'track_function' API call.
data FunctionConfig
= FunctionConfig
{ _fcSessionArgument :: !(Maybe FunctionArgName)
, _fcExposedAs :: !(Maybe FunctionExposedAs)
-- ^ In which top-level field should we expose this function?
--
-- The user might omit this, in which case we'll infer the location from the
-- SQL function's volatility. See 'mkFunctionInfo' or the @track_function@ API
-- docs for details of validation, etc.
} deriving (Show, Eq, Generic, Lift)
instance NFData FunctionConfig
instance Cacheable FunctionConfig
$(deriveJSON (aesonDrop 3 snakeCase){omitNothingFields = True} ''FunctionConfig)
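
Given the `aesonDrop 3 snakeCase` encoding with `omitNothingFields`, a v2 `track_function` call carrying this configuration might look like the following sketch (the function and argument names are simply the ones used in the tests later in this diff):

```python
import json

# Hypothetical payload mirroring FunctionConfig's JSON encoding (sketch);
# both configuration fields are optional and may be omitted entirely.
payload = {
    "type": "track_function",
    "version": 2,
    "args": {
        "function": {"schema": "public", "name": "add_to_score"},
        "configuration": {
            # _fcSessionArgument / _fcExposedAs after aesonDrop 3 snakeCase
            "session_argument": "hasura_session",
            "exposed_as": "mutation",
        },
    },
}
encoded = json.dumps(payload)
```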
-- | The default function config; v1 of the API implies this.
emptyFunctionConfig :: FunctionConfig
emptyFunctionConfig = FunctionConfig Nothing
emptyFunctionConfig = FunctionConfig Nothing Nothing
-- | JSON API payload for v2 of 'track_function':
--
-- https://hasura.io/docs/1.0/graphql/core/api-reference/schema-metadata-api/custom-functions.html#track-function-v2
data TrackFunctionV2
= TrackFunctionV2
{ _tfv2Function :: !QualifiedFunction
@@ -110,7 +146,7 @@ instance FromJSON TrackFunctionV2 where
data RawFunctionInfo
= RawFunctionInfo
{ rfiHasVariadic :: !Bool
, rfiFunctionType :: !FunctionType
, rfiFunctionType :: !FunctionVolatility
, rfiReturnTypeSchema :: !SchemaName
, rfiReturnTypeName :: !PGScalarType
, rfiReturnTypeType :: !PGTypeKind


@@ -104,7 +104,7 @@ module Hasura.RQL.Types.SchemaCache
, getDependentObjs
, getDependentObjsWith
, FunctionType(..)
, FunctionVolatility(..)
, FunctionArg(..)
, FunctionArgName(..)
, FunctionName(..)


@@ -0,0 +1,62 @@
- description: Test that a tracked VOLATILE function works as a mutation
url: /v1/graphql
status: 200
query:
query: |
mutation {
add_to_score(args: {search: "Bla", increment: 3}){
name
score
role_echo
}
}
response:
data:
add_to_score:
- name: Starke Blake
score: 3
role_echo: admin
- name: Bellamy Blake
score: 13
role_echo: admin
- name: Dora Black
score: 53
role_echo: admin
- description: Test that defaults in SQL function become default parameters in schema
url: /v1/graphql
status: 200
query:
# We omit 'increment' here, defaulting to 1
query: |
mutation {
add_to_score(args: {search: "Blake"}){
name
score
}
}
response:
data:
add_to_score:
- name: Starke Blake
score: 4
- name: Bellamy Blake
score: 14
- description: Sanity check that VOLATILE function didn't end up in query root too
url: /v1/graphql
# FIXME: here and elsewhere: https://github.com/hasura/graphql-engine/issues/6106
status: 200
query:
query: |
query {
add_to_score(args: {search: "Bla", increment: 3}){
name
}
}
response:
errors:
- extensions:
path: $.selectionSet.add_to_score
code: validation-failed
message: "field \"add_to_score\" not found in type: 'query_root'"


@@ -0,0 +1,60 @@
- description: Works as admin
url: /v1/graphql
status: 200
query:
query: |
mutation {
add_to_score(args: {search: "Black"}){
name
score
role_echo
}
}
response:
data:
add_to_score:
- name: Dora Black
score: 51
role_echo: admin
- description: Fails as anonymous due to permissions set up previously
headers:
X-Hasura-Role: anonymous
url: /v1/graphql
status: 200
query:
query: |
mutation {
add_to_score(args: {search: "Black"}){
name
score
role_echo
}
}
response:
# We also expect that the side-effectful function wasn't run (see below)
errors:
- extensions:
path: $.selectionSet.add_to_score.selectionSet.role_echo
code: validation-failed
message: "field \"role_echo\" not found in type: 'user'"
- description: Works with permitted columns
headers:
X-Hasura-Role: anonymous
url: /v1/graphql
status: 200
query:
query: |
mutation {
add_to_score(args: {search: "Black"}){
name
score
}
}
response:
data:
add_to_score:
- name: Dora Black
# NOTE: the function didn't run above (good):
score: 52


@@ -0,0 +1,117 @@
# Test VOLATILE SQL functions exposed as top-level fields under the mutation root field
# (#1514)
type: bulk
args:
# test functions having multiple defaults
- type: run_sql
args:
sql: |
CREATE TABLE "user" (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
score INTEGER,
/* We just return the session vars as this column from our function
* to show they're passed through properly.
*
* NOTE: with the addition of function "tracking" we probably want to
* logically be defining permissions on composite types (which might
* or might not have been created implicitly in a CREATE TABLE).
*
* See: https://github.com/hasura/graphql-engine-internal/issues/502
*/
role_echo TEXT DEFAULT ''
);
# Adds a value (defaulting to 1) to users matching 'search', returning updated
# rows and echoing the hasura session vars.
- type: run_sql
args:
sql: |
CREATE FUNCTION add_to_score(hasura_session json, search text, increment integer default 1)
RETURNS SETOF "user" AS $$
UPDATE "user"
SET score = score + increment
WHERE name ilike ('%' || search || '%')
RETURNING id,
name,
score,
/* NOTE: other fields may be added to hasura_session
* depending on the flavor of test run on CI, e.g.
* x-hasura-auth-mode: webhook, so filter just x-hasura-role
*/
hasura_session->>'x-hasura-role' AS role_echo
$$ LANGUAGE sql VOLATILE;
- type: track_table
args:
name: user
schema: public
- type: track_function
version: 2
args:
function:
schema: public
name: add_to_score
configuration:
exposed_as: mutation
session_argument: hasura_session
# We'll use this to check that permissions on "user" table are applied to the
# return set of the tracked function:
- type: create_select_permission
args:
table: user
role: anonymous
permission:
filter: {}
columns:
- name
- score
# A few unimportant functions for smoke tests
- type: run_sql
args:
sql: |
CREATE FUNCTION volatile_func1()
RETURNS SETOF "user" AS $$
SELECT * FROM "user" ORDER BY id
$$ LANGUAGE sql VOLATILE;
- type: run_sql
args:
sql: |
CREATE FUNCTION stable_func1()
RETURNS SETOF "user" AS $$
SELECT * FROM "user" ORDER BY id
$$ LANGUAGE sql STABLE;
- type: run_sql
args:
sql: |
CREATE FUNCTION volatile_func2()
RETURNS SETOF "user" AS $$
SELECT * FROM "user" ORDER BY id
$$ LANGUAGE sql VOLATILE;
- type: run_sql
args:
sql: |
CREATE FUNCTION stable_func2()
RETURNS SETOF "user" AS $$
SELECT * FROM "user" ORDER BY id
$$ LANGUAGE sql STABLE;
# Infer that the function should be a mutation from VOLATILE, if exposed_as is omitted
- version: 2
type: track_function
args:
function: volatile_func2
# Volatile function as query
- version: 2
type: track_function
args:
function: volatile_func1
configuration:
exposed_as: query


@@ -0,0 +1,8 @@
type: bulk
args:
# Drop table and function from postgres
- type: run_sql
args:
sql: |
DROP TABLE "user" cascade;
cascade: true


@@ -0,0 +1,52 @@
- description: Volatile function as query (check)
url: /v1/graphql
status: 200
query:
query: |
query {
volatile_func1 {
name
}
}
response:
data:
volatile_func1:
- name: Starke Blake
- name: Bellamy Blake
- name: Dora Black
- description: Volatile function as query (ensure not in mutation)
url: /v1/graphql
status: 200
query:
query: |
mutation {
volatile_func1 {
name
}
}
response:
errors:
- extensions:
path: $.selectionSet.volatile_func1
code: validation-failed
message: "field \"volatile_func1\" not found in type: 'mutation_root'"
# In the future we may want to return an informational warning in these cases,
# as this is a footgun
- description: Volatile function as query (check)
url: /v1/graphql
status: 200
query:
query: |
mutation {
volatile_func2 {
name
}
}
response:
data:
volatile_func2:
- name: Starke Blake
- name: Bellamy Blake
- name: Dora Black


@@ -0,0 +1,28 @@
# For now this is an error since this seems very likely to be user error, but we're
# not committed to this.
- description: non-volatile function as mutation (error, for now)
url: /v1/query
status: 400
response:
internal:
- definition:
schema: public
name: stable_func1
reason: 'in function "stable_func1": the function "stable_func1" cannot be tracked
because the function was requested to be exposed as a mutation, but is not
marked VOLATILE. Maybe the function was given the wrong volatility when it
was defined?'
type: function
path: $.args
error: 'in function "stable_func1": the function "stable_func1" cannot be tracked
because the function was requested to be exposed as a mutation, but is not marked
VOLATILE. Maybe the function was given the wrong volatility when it was defined?'
code: constraint-violation
query:
version: 2
type: track_function
args:
function: stable_func1
configuration:
exposed_as: mutation


@@ -0,0 +1,11 @@
type: bulk
args:
- type: run_sql
args:
sql: |
INSERT INTO "user" (name, score) VALUES
('Starke Blake', 0)
, ('Bellamy Blake', 10)
, ('Dora Black', 50)
;


@@ -0,0 +1,7 @@
type: bulk
args:
- type: run_sql
args:
sql: |
TRUNCATE "user";


@@ -1,18 +1,38 @@
description: Custom GraphQL query using search_posts function
url: /v1/graphql
status: 200
response:
data:
search_posts:
- title: post by hasura
content: content for post
query:
query: |
query {
search_posts(
args: {search: "hasura"}
) {
title
content
- description: Custom GraphQL query using search_posts function
url: /v1/graphql
status: 200
response:
data:
search_posts:
- title: post by hasura
content: content for post
query:
query: |
query {
search_posts(
args: {search: "hasura"}
) {
title
content
}
}
- description: ...and make sure this didn't somehow end up under the mutation root
url: /v1/graphql
status: 200
response:
errors:
- extensions:
path: $.selectionSet.search_posts
code: validation-failed
message: "field \"search_posts\" not found in type: 'mutation_root'"
query:
query: |
mutation {
search_posts(
args: {search: "hasura"}
) {
title
content
}
}
}


@@ -587,3 +587,29 @@ class TestGraphQLMutateEnums:
def test_delete_where_enum_field(self, hge_ctx, transport):
check_query_f(hge_ctx, self.dir() + '/delete_where_enum_field.yaml', transport)
# Tracking VOLATILE SQL functions as mutations, or queries (#1514)
@pytest.mark.parametrize('transport', ['http', 'websocket'])
@use_mutation_fixtures
class TestGraphQLMutationFunctions:
@classmethod
def dir(cls):
return 'queries/graphql_mutation/functions'
# basic test that functions are added in the right places in the schema:
def test_smoke(self, hge_ctx, transport):
check_query_f(hge_ctx, self.dir() + '/smoke.yaml', transport)
# separate file since this only works over http transport:
def test_smoke_errs(self, hge_ctx, transport):
check_query_f(hge_ctx, self.dir() + '/smoke_errs.yaml', 'http')
# Test tracking a VOLATILE function as top-level field of mutation root
# field, also smoke testing basic permissions on the table return type.
def test_functions_as_mutations(self, hge_ctx, transport):
check_query_f(hge_ctx, self.dir() + '/function_as_mutations.yaml', transport)
# Ensure select permissions on the corresponding SETOF table apply to
# the return set of the mutation field backed by the tracked function.
def test_functions_as_mutations_permissions(self, hge_ctx, transport):
check_query_f(hge_ctx, self.dir() + '/function_as_mutations_permissions.yaml', transport)


@@ -267,7 +267,7 @@ def validate_gql_ws_q(hge_ctx, conf, headers, retry=False, via_subscription=Fals
def validate_http_anyq(hge_ctx, url, query, headers, exp_code, exp_response):
code, resp, resp_hdrs = hge_ctx.anyq(url, query, headers)
print(headers)
assert code == exp_code, resp
assert code == exp_code, (code, exp_code, resp)
print('http resp: ', resp)
if exp_response:
return assert_graphql_resp_expected(resp, exp_response, query, resp_hdrs, hge_ctx.avoid_err_msg_checks)