module Hasura.RQL.DDL.Schema.Cache.Dependencies
  ( resolveDependencies,
  )
where

import Control.Arrow.Extended
import Control.Lens hiding ((.=))
import Data.Aeson
import Data.HashMap.Strict.Extended qualified as M
import Data.HashMap.Strict.InsOrd qualified as OMap
import Data.HashSet qualified as HS
import Data.List (nub)
import Data.Monoid (First)
import Data.Text.Extended
import Hasura.Base.Error
import Hasura.Prelude
import Hasura.RQL.DDL.Permission.Internal (permissionIsDefined)
import Hasura.RQL.DDL.Schema.Cache.Common
import Hasura.RQL.Types.Action
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.Column
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.ComputedField
import Hasura.RQL.Types.Endpoint
import Hasura.RQL.Types.Function
import Hasura.RQL.Types.Metadata.Object
import Hasura.RQL.Types.OpenTelemetry
import Hasura.RQL.Types.Permission
import Hasura.RQL.Types.QueryCollection
import Hasura.RQL.Types.Relationships.Local
import Hasura.RQL.Types.SchemaCache
import Hasura.RQL.Types.SchemaCache.Build
import Hasura.RQL.Types.SchemaCacheTypes
import Hasura.RQL.Types.Source
import Hasura.RQL.Types.Table
import Hasura.RemoteSchema.SchemaCache (rscPermissions, rscRemoteRelationships)
import Hasura.SQL.AnyBackend qualified as AB
import Hasura.SQL.Backend
import Hasura.SQL.BackendMap qualified as BackendMap
import Language.GraphQL.Draft.Syntax qualified as G

-- | Processes collected 'MetadataDependency' values into a 'DepMap', performing integrity checking
-- to ensure the dependencies actually exist. If a dependency is missing, its transitive dependents
-- are removed from the cache, and 'InconsistentMetadata's are returned.
resolveDependencies ::
  (ArrowKleisli m arr, QErrM m) =>
  ( BuildOutputs,
    [MetadataDependency]
  )
    `arr` (BuildOutputs, [InconsistentMetadata], DepMap)
resolveDependencies = arrM \(cache, dependencies) -> do
  let dependencyMap =
        dependencies
          & M.groupOn (\case MetadataDependency _ schemaObjId _ -> schemaObjId)
          & fmap (map \case MetadataDependency metadataObject _ schemaDependency -> (metadataObject, schemaDependency))
  performIteration 0 cache [] dependencyMap
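
-- A sketch of the grouping above (all identifiers hypothetical): three collected
-- dependencies
--
-- > [ MetadataDependency relMO  objId1 dep1
-- > , MetadataDependency permMO objId1 dep2
-- > , MetadataDependency fnMO   objId2 dep3 ]
--
-- are grouped into the dependency map
--
-- > { objId1 -> [(relMO, dep1), (permMO, dep2)], objId2 -> [(fnMO, dep3)] }
--
-- mapping every schema object id to the dependents that must be pruned if that
-- object turns out to be missing.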

-- Processes dependencies using an iterative process that alternates between two steps:
--
--   1. First, pruneDanglingDependents searches for any dependencies that do not exist in the
--      current cache and removes their dependents from the dependency map, returning an
--      InconsistentMetadata for each dependent that was removed. This step does not change
--      the schema cache in any way.
--
--   2. Second, deleteMetadataObject drops the pruned dependent objects from the cache. It does
--      not alter (or consult) the dependency map, so transitive dependents are /not/ removed.
--
-- By iterating the above process until pruneDanglingDependents does not discover any new
-- inconsistencies, all missing dependencies will eventually be removed, and since dependency
-- graphs between schema objects are unlikely to be very deep, it will usually terminate in just
-- a few iterations.
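--
-- An illustrative run (all names hypothetical): if table "authors" is missing,
-- iteration 1 reports the relationship "books.author" that depends on it as
-- inconsistent and drops it from the cache and the dependency map; iteration 2
-- prunes a permission that in turn depended on that relationship; iteration 3
-- finds no new inconsistencies and terminates, returning the accumulated
-- InconsistentMetadata values.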
performIteration ::
  (QErrM m) =>
  Int ->
  BuildOutputs ->
  [InconsistentMetadata] ->
  HashMap SchemaObjId [(MetadataObject, SchemaDependency)] ->
  m (BuildOutputs, [InconsistentMetadata], DepMap)
performIteration iterationNumber cache inconsistencies dependencies = do
  let (newInconsistencies, prunedDependencies) = pruneDanglingDependents cache dependencies
  case newInconsistencies of
    [] -> pure (cache, inconsistencies, HS.fromList . map snd <$> prunedDependencies)
    _
      | iterationNumber < 100 -> do
          let inconsistentIds = nub $ concatMap imObjectIds newInconsistencies
              prunedCache = foldl' (flip deleteMetadataObject) cache inconsistentIds
              allInconsistencies = inconsistencies <> newInconsistencies
          performIteration (iterationNumber + 1) prunedCache allInconsistencies prunedDependencies
      | otherwise ->
          -- Running for 100 iterations without terminating is (hopefully) enormously unlikely
          -- unless we did something very wrong, so halt the process and abort with some
          -- debugging information.
          throwError
            (err500 Unexpected "schema dependency resolution failed to terminate")
              { qeInternal =
                  Just $
                    ExtraInternal $
                      object
                        [ "inconsistent_objects"
                            .= object
                              [ "old" .= inconsistencies,
                                "new" .= newInconsistencies
                              ],
                          "pruned_dependencies" .= (map snd <$> prunedDependencies)
                        ]
              }
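
-- | Partitions each dependent into either an 'InconsistentMetadata' (when its
-- dependency does not resolve against the current cache) or a retained
-- (object, dependency) pair. The 'traverse' runs in the
-- @([InconsistentMetadata],)@ writer-like 'Applicative', so all inconsistencies
-- are collected while the map of surviving dependents is rebuilt; entries whose
-- dependent lists become empty are filtered out.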
pruneDanglingDependents ::
  BuildOutputs ->
  HashMap SchemaObjId [(MetadataObject, SchemaDependency)] ->
  ([InconsistentMetadata], HashMap SchemaObjId [(MetadataObject, SchemaDependency)])
pruneDanglingDependents cache =
  fmap (M.filter (not . null)) . traverse do
    partitionEithers . map \(metadataObject, dependency) -> case resolveDependency dependency of
      Right () -> Right (metadataObject, dependency)
      Left errorMessage -> Left (InconsistentObject errorMessage Nothing metadataObject)
  where
    resolveDependency :: SchemaDependency -> Either Text ()
    resolveDependency (SchemaDependency objectId _) = case objectId of
      SOSource source ->
        void $
          M.lookup source (_boSources cache)
            `onNothing` Left ("no such source exists: " <>> source)
      SORemoteSchema remoteSchemaName ->
        unless (remoteSchemaName `M.member` _boRemoteSchemas cache) $
          Left $
            "remote schema " <> remoteSchemaName <<> " is not found"
      SORemoteSchemaPermission remoteSchemaName roleName -> do
        remoteSchema <-
          onNothing (M.lookup remoteSchemaName $ _boRemoteSchemas cache) $
            Left $
              "remote schema " <> remoteSchemaName <<> " is not found"
        unless (roleName `M.member` _rscPermissions (fst remoteSchema)) $
          Left $
            "no permission defined on remote schema "
              <> remoteSchemaName
              <<> " for role "
              <>> roleName
      SORemoteSchemaRemoteRelationship remoteSchemaName typeName relationshipName -> do
        remoteSchema <-
          fmap fst $
            onNothing (M.lookup remoteSchemaName $ _boRemoteSchemas cache) $
              Left $
                "remote schema " <> remoteSchemaName <<> " is not found"
        void
          $ onNothing
            (OMap.lookup typeName (_rscRemoteRelationships remoteSchema) >>= OMap.lookup relationshipName)
          $ Left
          $ "remote relationship "
            <> relationshipName
            <<> " on type "
            <> G.unName typeName
            <> " on "
            <> remoteSchemaName
            <<> " is not found"
      SOSourceObj source exists -> do
        AB.dispatchAnyBackend @Backend exists $ \sourceObjId -> do
          sourceInfo <- castSourceInfo source sourceObjId
          case sourceObjId of
            SOITable tableName -> do
              void $ resolveTable sourceInfo tableName
            SOIFunction functionName ->
              void $
                M.lookup functionName (_siFunctions sourceInfo)
                  `onNothing` Left ("function " <> functionName <<> " is not tracked")
            SOITableObj tableName tableObjectId -> do
              tableInfo <- resolveTable sourceInfo tableName
              case tableObjectId of
                TOCol columnName ->
                  void $ resolveField tableInfo (columnToFieldName tableInfo columnName) _FIColumn "column"
                TORel relName ->
                  void $ resolveField tableInfo (fromRel relName) _FIRelationship "relationship"
                TOComputedField fieldName ->
                  void $ resolveField tableInfo (fromComputedField fieldName) _FIComputedField "computed field"
                TORemoteRel fieldName ->
                  void $ resolveField tableInfo (fromRemoteRelationship fieldName) _FIRemoteRelationship "remote relationship"
                TOForeignKey constraintName -> do
                  let foreignKeys = _tciForeignKeys $ _tiCoreInfo tableInfo
                  unless (isJust $ find ((== constraintName) . _cName . _fkConstraint) foreignKeys) $
                    Left $
                      "no foreign key constraint named "
                        <> constraintName <<> " is "
                        <> "defined for table " <>> tableName
                TOPerm roleName permType -> do
                  unless (any (permissionIsDefined permType) (tableInfo ^? (tiRolePermInfoMap . ix roleName))) $
                    Left $
                      "no "
                        <> permTypeToCode permType
                        <> " permission defined on table "
                        <> tableName <<> " for role " <>> roleName
                TOTrigger triggerName ->
                  unless (M.member triggerName (_tiEventTriggerInfoMap tableInfo)) $
                    Left $
                      "no event trigger named " <> triggerName <<> " is defined for table " <>> tableName
      SORole roleName ->
        void $
          (M.lookup roleName (_boRoles cache))
            `onNothing` Left ("parent role " <> roleName <<> " does not exist")
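
    -- Downcasts the backend-erased source info to the backend 'b' recorded in
    -- the dependency. A failed cast currently surfaces as the generic "no such
    -- source found" error rather than an internal error; see the TODO below.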
    castSourceInfo ::
      (Backend b) => SourceName -> SourceObjId b -> Either Text (SourceInfo b)
    castSourceInfo sourceName _ =
      -- TODO: if the cast returns Nothing, we should throw an internal error:
      -- it means the backend type recorded for the dependency does not match
      -- the source's actual backend.
      (M.lookup sourceName (_boSources cache) >>= unsafeSourceInfo)
        `onNothing` Left ("no such source found " <>> sourceName)

    resolveTable sourceInfo tableName =
      M.lookup tableName (_siTables sourceInfo)
        `onNothing` Left ("table " <> tableName <<> " is not tracked")
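
    -- The 'TableInfo b' argument is unused at runtime; it only pins down the
    -- backend type 'b' so that 'fromCol' picks the right column-to-field-name
    -- conversion.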
    columnToFieldName :: forall b. (Backend b) => TableInfo b -> Column b -> FieldName
    columnToFieldName _ = fromCol @b

    resolveField ::
      Backend b =>
      TableInfo b ->
      FieldName ->
      Getting (First a) (FieldInfo b) a ->
      Text ->
      Either Text a
    resolveField tableInfo fieldName fieldType fieldTypeName = do
      let coreInfo = _tiCoreInfo tableInfo
          tableName = tableInfoName tableInfo
      fieldInfo <-
        M.lookup fieldName (_tciFieldInfoMap coreInfo)
          `onNothing` Left
            ("table " <> tableName <<> " has no field named " <>> fieldName)
      (fieldInfo ^? fieldType)
        `onNothing` Left
          ("field " <> fieldName <<> " of table " <> tableName <<> " is not a " <> fieldTypeName)
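
-- | Deletes a single metadata object from 'BuildOutputs'. Note that this removes
-- only the named object itself; its transitive dependents are pruned by the
-- surrounding iteration in performIteration, not here.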
deleteMetadataObject ::
  MetadataObjId -> BuildOutputs -> BuildOutputs
deleteMetadataObject = \case
  MOSource name -> boSources %~ M.delete name
  MOSourceObjId source exists -> AB.dispatchAnyBackend @Backend exists (\sourceObjId -> boSources %~ M.adjust (deleteObjId sourceObjId) source)
  MORemoteSchema name -> boRemoteSchemas %~ M.delete name
  MORemoteSchemaPermissions name role -> boRemoteSchemas . ix name . _1 . rscPermissions %~ M.delete role
  MORemoteSchemaRemoteRelationship remoteSchema typeName relationshipName ->
    boRemoteSchemas . ix remoteSchema . _1 . rscRemoteRelationships . ix typeName %~ OMap.delete relationshipName
  MOCronTrigger name -> boCronTriggers %~ M.delete name
  MOCustomTypes -> boCustomTypes %~ const mempty
  MOAction name -> boActions %~ M.delete name
  MOEndpoint name -> boEndpoints %~ M.delete name
  MOActionPermission name role -> boActions . ix name . aiPermissions %~ M.delete role
  MOInheritedRole name -> boRoles %~ M.delete name
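  -- Removing a query from a query collection can orphan a REST endpoint defined
  -- over it, so the corresponding endpoint is dropped from the outputs as well.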
  MOQueryCollectionsQuery _ lq -> \bo@BuildOutputs {..} ->
    bo
      { _boEndpoints = removeEndpointsUsingQueryCollection lq _boEndpoints
      }
  MODataConnectorAgent agentName ->
    boBackendCache
      %~ (BackendMap.modify @'DataConnector $ BackendInfoWrapper . M.delete agentName . unBackendInfoWrapper)
  MOOpenTelemetry subobject ->
    case subobject of
      OtelSubobjectExporterOtlp -> boOpenTelemetryInfo . otiExporterOtlp .~ Nothing
      OtelSubobjectBatchSpanProcessor -> boOpenTelemetryInfo . otiBatchSpanProcessor .~ Nothing
  where
    deleteObjId :: forall b. (Backend b) => SourceMetadataObjId b -> BackendSourceInfo -> BackendSourceInfo
    deleteObjId sourceObjId sourceInfo =
      maybe
        sourceInfo
        (AB.mkAnyBackend . deleteObjFn sourceObjId)
        $ unsafeSourceInfo sourceInfo

    deleteObjFn :: (Backend b) => SourceMetadataObjId b -> SourceInfo b -> SourceInfo b
    deleteObjFn = \case
      SMOTable name -> siTables %~ M.delete name
      SMOFunction name -> siFunctions %~ M.delete name
      SMOFunctionPermission functionName role ->
        siFunctions . ix functionName . fiPermissions %~ M.delete role
      SMOTableObj tableName tableObjectId ->
        siTables . ix tableName %~ case tableObjectId of
          MTORel name _ -> tiCoreInfo . tciFieldInfoMap %~ M.delete (fromRel name)
          MTOComputedField name -> tiCoreInfo . tciFieldInfoMap %~ M.delete (fromComputedField name)
          MTORemoteRelationship name -> tiCoreInfo . tciFieldInfoMap %~ M.delete (fromRemoteRelationship name)
          MTOTrigger name -> tiEventTriggerInfoMap %~ M.delete name
          MTOPerm roleName PTSelect -> tiRolePermInfoMap . ix roleName . permSel .~ Nothing
          MTOPerm roleName PTInsert -> tiRolePermInfoMap . ix roleName . permIns .~ Nothing
          MTOPerm roleName PTUpdate -> tiRolePermInfoMap . ix roleName . permUpd .~ Nothing
          MTOPerm roleName PTDelete -> tiRolePermInfoMap . ix roleName . permDel .~ Nothing
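
    -- Removes the endpoint (if any) whose definition references the given query.
    -- Note that 'find' stops at the first match, so at most one endpoint is
    -- removed per call.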
    removeEndpointsUsingQueryCollection :: ListedQuery -> HashMap EndpointName (EndpointMetadata GQLQueryWithText) -> HashMap EndpointName (EndpointMetadata GQLQueryWithText)
    removeEndpointsUsingQueryCollection lq endpointMap =
      case maybeEndpoint of
        Just (n, _) -> M.delete n endpointMap
        Nothing -> endpointMap
      where
        q = _lqQuery lq
        maybeEndpoint = find (\(_, def) -> (_edQuery . _ceDefinition) def == q) (M.toList endpointMap)