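-- | Validation and execution of the RQL DML delete query against the
-- @'Postgres 'Vanilla@ backend.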
module Hasura.RQL.DML.Delete
  ( validateDeleteQWith,
    validateDeleteQ,
    AnnDelG (..),
    AnnDel,
    execDeleteQuery,
    runDelete,
  )
where

import Control.Lens ((^?))
import Control.Monad.Trans.Control (MonadBaseControl)
import Data.Aeson
import Data.Sequence qualified as DS
import Database.PG.Query qualified as PG
import Hasura.Backends.Postgres.Connection
import Hasura.Backends.Postgres.Execute.Mutation
import Hasura.Backends.Postgres.SQL.DML qualified as S
import Hasura.Backends.Postgres.Translate.Returning
import Hasura.Backends.Postgres.Types.Table
import Hasura.Base.Error
import Hasura.EncJSON
import Hasura.Prelude
import Hasura.QueryTags
import Hasura.RQL.DML.Internal
import Hasura.RQL.DML.Types
import Hasura.RQL.IR.Delete
import Hasura.RQL.Types.BackendType
import Hasura.RQL.Types.Column
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.Metadata
import Hasura.RQL.Types.SchemaCache
import Hasura.Session
import Hasura.Tracing qualified as Tracing
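
-- | Convert a 'DeleteQuery' into an annotated delete ('AnnDel'), validating it
-- against the table's metadata and the current role's permissions: the table
-- (or view) must be deletable, the role needs delete permission, and select
-- permission is required because the @where@ clause and any @returning@
-- columns are resolved against it.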
validateDeleteQWith ::
  (UserInfoM m, QErrM m, TableInfoRM ('Postgres 'Vanilla) m) =>
  SessionVariableBuilder m ->
  (ColumnType ('Postgres 'Vanilla) -> Value -> m S.SQLExp) ->
  DeleteQuery ->
  m (AnnDel ('Postgres 'Vanilla))
validateDeleteQWith
  sessVarBldr
  prepValBldr
  (DeleteQuery tableName _ rqlBE mRetCols) = do
    tableInfo <- askTableInfoSource tableName
    let coreInfo = _tiCoreInfo tableInfo

    -- If the table is a view, check that it is deletable
    mutableView
      tableName
      viIsDeletable
      (_tciViewInfo coreInfo)
      "deletable"

    -- Check if the role has delete permissions
    delPerm <- askDelPermInfo tableInfo

    -- Check if all dependent headers are present
    validateHeaders $ dpiRequiredHeaders delPerm

    -- Check if select is allowed
    selPerm <-
      modifyErr (<> selNecessaryMsg)
        $ askSelPermInfo tableInfo

    let fieldInfoMap = _tciFieldInfoMap coreInfo
        allCols = mapMaybe (^? _SCIScalarColumn) $ getCols fieldInfoMap

    -- convert the returning cols into a sql returning exp
    mAnnRetCols <- forM mRetCols $ \retCols ->
      withPathK "returning" $ checkRetCols fieldInfoMap selPerm retCols

    -- convert the where clause
    annSQLBoolExp <-
      withPathK "where"
        $ convBoolExp fieldInfoMap selPerm rqlBE sessVarBldr fieldInfoMap (valueParserWithCollectableType prepValBldr)

    resolvedDelFltr <-
      convAnnBoolExpPartialSQL sessVarBldr
        $ dpiFilter delPerm

    return
      $ AnnDel
        tableName
        (resolvedDelFltr, annSQLBoolExp)
        (mkDefaultMutFlds mAnnRetCols)
        allCols
        Nothing
  where
    selNecessaryMsg =
      "; \"delete\" is only allowed if the role "
        <> "has \"select\" permission as \"where\" can't be used "
        <> "without \"select\" permission on the table"
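
-- | Validate a 'DeleteQuery' using the current session variables and the
-- schema cache, returning the annotated delete together with the prepared
-- arguments collected while converting the @where@ clause.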
validateDeleteQ ::
  (QErrM m, UserInfoM m, CacheRM m) =>
  DeleteQuery ->
  m (AnnDel ('Postgres 'Vanilla), DS.Seq PG.PrepArg)
validateDeleteQ query = do
  let source = doSource query
  tableCache :: TableCache ('Postgres 'Vanilla) <- fold <$> askTableCache source
  flip runTableCacheRT tableCache
    $ runDMLP1T
    $ validateDeleteQWith sessVarFromCurrentSetting binRHSBuilder query
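
-- | Run a delete query: look up the query's Postgres source, validate the
-- query, and execute the generated SQL in a read-write transaction, returning
-- the mutation result as JSON.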
runDelete ::
  forall m.
  ( QErrM m,
    UserInfoM m,
    CacheRM m,
    MonadIO m,
    Tracing.MonadTrace m,
    MonadBaseControl IO m,
    MetadataM m
  ) =>
  SQLGenCtx ->
  DeleteQuery ->
  m EncJSON
runDelete sqlGen q = do
  sourceConfig <- askSourceConfig @('Postgres 'Vanilla) (doSource q)
  let strfyNum = stringifyNum sqlGen
  userInfo <- askUserInfo
  validateDeleteQ q
    >>= runTxWithCtx (_pscExecCtx sourceConfig) (Tx PG.ReadWrite Nothing) LegacyRQLQuery
    . flip runReaderT emptyQueryTagsComment
    . execDeleteQuery strfyNum Nothing userInfo