graphql-engine/server/src-lib/Hasura/RQL/DDL/Schema/Rename.hs

-- | Functions for updating the metadata (with integrity checking) to incorporate schema changes
-- discovered after applying a user-supplied SQL query. None of these functions modify the schema
-- cache, so it must be reloaded after the metadata is updated.
module Hasura.RQL.DDL.Schema.Rename
( renameTableInMetadata,
renameColumnInMetadata,
renameRelationshipInMetadata,
)
where
import Control.Lens.Combinators
import Control.Lens.Operators
import Data.Aeson
import Data.HashMap.Strict qualified as M
import Data.HashMap.Strict.InsOrd qualified as OMap
import Data.HashSet qualified as Set
import Data.Text.Extended
import Hasura.Base.Error
import Hasura.Prelude
import Hasura.RQL.DDL.Permission
import Hasura.RQL.IR.BoolExp
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.Column
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.Metadata
import Hasura.RQL.Types.Metadata.Backend
import Hasura.RQL.Types.Permission
import Hasura.RQL.Types.Relationships.Local
import Hasura.RQL.Types.Relationships.Remote
import Hasura.RQL.Types.Relationships.ToSource
import Hasura.RQL.Types.SchemaCache
import Hasura.RQL.Types.SchemaCacheTypes
import Hasura.RQL.Types.Table
import Hasura.RemoteSchema.Metadata
import Hasura.SQL.AnyBackend qualified as AB
import Hasura.SQL.Backend
import Hasura.Session
import Language.GraphQL.Draft.Syntax qualified as G
data RenameItem (b :: BackendType) a = RenameItem
{ _riTable :: TableName b,
_riOld :: a,
_riNew :: a
}
type RenameCol (b :: BackendType) = RenameItem b (Column b)
data RenameField b
= RFCol (RenameCol b)
| RFRel (RenameItem b RelName)
type RenameTable b = (TableName b, TableName b)
data Rename b
= RTable (RenameTable b)
| RField (RenameField b)
otherDeps :: QErrM m => Text -> SchemaObjId -> m ()
otherDeps errMsg d =
throw500 $
"unexpected dependency "
<> reportSchemaObj d
<> "; "
<> errMsg
-- | Replace all references to a given table name by its new name across the entire metadata.
--
-- This function will make use of the metadata dependency graph (see 'getDependentObjs') to identify
-- all places that refer to the old table name, and replace it accordingly. Most operations will
-- occur within the same source, such as table references in relationships and permissions.
-- Dependencies across sources can happen in the case of cross-source relationships.
--
-- This function will fail if it encounters a nonsensical dependency; for instance, if there's a
-- dependency from that table to a source.
--
-- For more information about the dependency graph, see 'SchemaObjId'.
renameTableInMetadata ::
forall b m.
( MonadError QErr m,
CacheRM m,
MonadWriter MetadataModifier m,
BackendMetadata b
) =>
SourceName ->
TableName b ->
TableName b ->
m ()
renameTableInMetadata source newQT oldQT = do
sc <- askSchemaCache
let allDeps =
getDependentObjs sc $
SOSourceObj source $
AB.mkAnyBackend $
SOITable @b oldQT
-- update all dependent schema objects
forM_ allDeps $ \case
-- the dependent object is a source object in the same source
sobj@(SOSourceObj depSourceName exists)
| depSourceName == source,
Just sourceObjId <- AB.unpackAnyBackend @b exists ->
case sourceObjId of
SOITableObj refQT (TORel rn) ->
updateRelDefs @b source refQT rn (oldQT, newQT)
SOITableObj refQT (TOPerm rn pt) ->
updatePermFlds @b source refQT rn pt $ RTable (oldQT, newQT)
-- A trigger's definition is not dependent on the table directly
SOITableObj _ (TOTrigger _) -> pure ()
-- A remote relationship's definition is not dependent on the table directly
SOITableObj _ (TORemoteRel _) -> pure ()
_ -> otherDeps errMsg sobj
-- the dependent object is a source object in a different source
sobj@(SOSourceObj depSourceName exists) ->
AB.dispatchAnyBackend @Backend exists \(sourceObjId :: SourceObjId b') ->
case sourceObjId of
SOITableObj tableName (TORemoteRel remoteRelationshipName) -> do
updateTableInRemoteRelationshipRHS @b' @b depSourceName tableName remoteRelationshipName (oldQT, newQT)
-- only remote relationships might create dependencies across sources
_ -> otherDeps errMsg sobj
-- any other kind of dependent object (erroneous)
d -> otherDeps errMsg d
-- Update table name in metadata
tell $
MetadataModifier $
metaSources . ix source . (toSourceMetadata @b) . smTables %~ \tables ->
flip (maybe tables) (OMap.lookup oldQT tables) $
\tableMeta -> OMap.delete oldQT $ OMap.insert newQT tableMeta {_tmTable = newQT} tables
where
errMsg = "cannot rename table " <> oldQT <<> " to " <>> newQT
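-- The metadata update above deletes the old key and re-inserts the table's
-- metadata under the new key, falling back to the unchanged map when the old
-- key is absent. A minimal, self-contained sketch of that rename step, using
-- 'Data.Map.Strict' as a stand-in for the insertion-ordered map
-- ('Data.HashMap.Strict.InsOrd') used in the real metadata:

```haskell
import qualified Data.Map.Strict as Map

-- rename a key, leaving the map unchanged when the old key is absent,
-- mirroring the 'flip (maybe tables) (OMap.lookup oldQT tables)' idiom
renameKey :: Ord k => k -> k -> Map.Map k v -> Map.Map k v
renameKey old new m =
  case Map.lookup old m of
    Nothing -> m
    Just v -> Map.insert new v (Map.delete old m)
```

-- Unlike 'Data.Map', the real InsOrd map also preserves insertion order,
-- which is why the real code threads the whole table map through the
-- insert/delete rather than rebuilding it from scratch.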
-- | Replace all references to a given column name by its new name across the entire metadata.
--
-- This function will make use of the metadata dependency graph (see 'getDependentObjs') to identify
-- all places that refer to the old column name, and replace it accordingly. Most operations will
-- occur within the same source, such as column references in relationships and permissions.
-- Dependencies across sources can happen in the case of cross-source relationships.
--
-- This function will fail if it encounters a nonsensical dependency; for instance, if there's a
-- dependency from that column to a source.
--
-- For more information about the dependency graph, see 'SchemaObjId'.
renameColumnInMetadata ::
forall b m.
( MonadError QErr m,
CacheRM m,
MonadWriter MetadataModifier m,
BackendMetadata b
) =>
Column b ->
Column b ->
SourceName ->
TableName b ->
FieldInfoMap (FieldInfo b) ->
m ()
renameColumnInMetadata oCol nCol source qt fieldInfo = do
sc <- askSchemaCache
-- Check if any relation exists with new column name
assertFldNotExists
-- Fetch dependent objects
let depObjs =
getDependentObjs sc $
SOSourceObj source $
AB.mkAnyBackend $
SOITableObj @b qt $
TOCol @b oCol
renameItem = RenameItem @b qt oCol nCol
renameFld = RFCol renameItem
-- Update dependent objects
forM_ depObjs $ \case
-- the dependent object is a source object in the same source
sobj@(SOSourceObj depSourceName exists)
| depSourceName == source,
Just sourceObjId <- AB.unpackAnyBackend @b exists ->
case sourceObjId of
SOITableObj refQT (TOPerm role pt) ->
updatePermFlds @b source refQT role pt $ RField renameFld
SOITableObj refQT (TORel rn) ->
updateColInRel @b source refQT rn renameItem
SOITableObj refQT (TOTrigger triggerName) ->
tell $
MetadataModifier $
tableMetadataSetter @b source refQT . tmEventTriggers . ix triggerName
%~ updateColumnInEventTrigger @b refQT oCol nCol qt
SOITableObj _ (TORemoteRel remoteRelName) ->
updateColInRemoteRelationshipLHS source remoteRelName renameItem
_ -> otherDeps errMsg sobj
-- the dependent object is a source object in a different source
sobj@(SOSourceObj depSourceName exists) ->
AB.dispatchAnyBackend @Backend exists \(sourceObjId :: SourceObjId b') ->
case sourceObjId of
SOITableObj tableName (TORemoteRel remoteRelationshipName) -> do
updateColInRemoteRelationshipRHS @b' @b depSourceName tableName remoteRelationshipName renameItem
-- only remote relationships might create dependencies across sources
_ -> otherDeps errMsg sobj
-- any other kind of dependent object (erroneous)
d -> otherDeps errMsg d
-- Update custom column names
possiblyUpdateCustomColumnNames @b source qt oCol nCol
where
errMsg = "cannot rename column " <> oCol <<> " to " <>> nCol
assertFldNotExists =
case M.lookup (fromCol @b oCol) fieldInfo of
Just (FIRelationship _) ->
throw400 AlreadyExists $
"cannot rename column "
<> oCol
<<> " to "
<> nCol
<<> " in table "
<> qt
<<> " as a relationship with the name already exists"
_ -> pure ()
renameRelationshipInMetadata ::
forall b m.
( MonadError QErr m,
CacheRM m,
MonadWriter MetadataModifier m,
BackendMetadata b
) =>
SourceName ->
TableName b ->
RelName ->
RelType ->
RelName ->
m ()
renameRelationshipInMetadata source qt oldRN relType newRN = do
sc <- askSchemaCache
let depObjs =
getDependentObjs sc $
SOSourceObj source $
AB.mkAnyBackend $
SOITableObj @b qt $
TORel oldRN
renameFld = RFRel $ RenameItem @b qt oldRN newRN
forM_ depObjs $ \case
sobj@(SOSourceObj _ exists) -> case AB.unpackAnyBackend @b exists of
Just (SOITableObj refQT (TOPerm role pt)) ->
updatePermFlds @b source refQT role pt $ RField renameFld
_ -> otherDeps errMsg sobj
d -> otherDeps errMsg d
tell $
MetadataModifier $
tableMetadataSetter @b source qt %~ case relType of
ObjRel -> tmObjectRelationships %~ rewriteRelationships
ArrRel -> tmArrayRelationships %~ rewriteRelationships
where
errMsg = "cannot rename relationship " <> oldRN <<> " to " <>> newRN
rewriteRelationships ::
Relationships (RelDef a) -> Relationships (RelDef a)
rewriteRelationships relationsMap =
flip (maybe relationsMap) (OMap.lookup oldRN relationsMap) $
\rd -> OMap.insert newRN rd {_rdName = newRN} $ OMap.delete oldRN relationsMap
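-- None of these functions mutate the metadata directly: each 'tell'
-- accumulates a 'MetadataModifier' (morally an endomorphism on the metadata)
-- through 'MonadWriter', and the caller applies the composed modifier once.
-- A self-contained sketch of that accumulation pattern, with a list of table
-- names standing in for the metadata (all names here are hypothetical):

```haskell
import Control.Monad.Trans.Writer (Writer, execWriter, tell)
import Data.Monoid (Endo (..))

-- stand-in for MetadataModifier: a composable metadata -> metadata update
type Modifier = Endo [String]

renameEverything :: Writer Modifier ()
renameEverything = do
  -- each step contributes one small update
  tell (Endo (map (\t -> if t == "users" then "accounts" else t)))
  tell (Endo ("audit_log" :))

-- apply every accumulated update in one pass
applyAll :: [String] -> [String]
applyAll = appEndo (execWriter renameEverything)
```

-- 'Endo' composes right-to-left, so updates told later run first; the real
-- 'MetadataModifier' Monoid composes its updates the same way.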
-- update table names in relationship definition
updateRelDefs ::
forall b m.
( MonadError QErr m,
CacheRM m,
MonadWriter MetadataModifier m,
BackendMetadata b
) =>
SourceName ->
TableName b ->
RelName ->
RenameTable b ->
m ()
updateRelDefs source qt rn renameTable = do
fim <- askTableFieldInfoMap @b source qt
ri <- askRelType fim rn ""
tell $
MetadataModifier $
tableMetadataSetter source qt %~ case riType ri of
ObjRel -> tmObjectRelationships . ix rn %~ updateObjRelDef renameTable
ArrRel -> tmArrayRelationships . ix rn %~ updateArrRelDef renameTable
where
updateObjRelDef :: RenameTable b -> ObjRelDef b -> ObjRelDef b
updateObjRelDef (oldQT, newQT) =
rdUsing %~ \case
RUFKeyOn fk -> RUFKeyOn fk
RUManual (RelManualConfig origQT rmCols rmIO) ->
let updQT = bool origQT newQT $ oldQT == origQT
in RUManual $ RelManualConfig updQT rmCols rmIO
updateArrRelDef :: RenameTable b -> ArrRelDef b -> ArrRelDef b
updateArrRelDef (oldQT, newQT) =
rdUsing %~ \case
RUFKeyOn (ArrRelUsingFKeyOn origQT c) ->
let updQT = getUpdQT origQT
in RUFKeyOn $ ArrRelUsingFKeyOn updQT c
RUManual (RelManualConfig origQT rmCols rmIO) ->
let updQT = getUpdQT origQT
in RUManual $ RelManualConfig updQT rmCols rmIO
where
getUpdQT origQT = bool origQT newQT $ oldQT == origQT
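-- Both branches above rewrite the referenced table only when it matches the
-- renamed one, via the 'bool origQT newQT (oldQT == origQT)' guard. A small
-- sketch of the same guard over a simplified, hypothetical stand-in for a
-- manual relationship config:

```haskell
import Data.Bool (bool)

-- hypothetical, simplified stand-in for RelManualConfig
data ManualConfig = ManualConfig
  { mcTable :: String,
    mcColumnMapping :: [(String, String)]
  }
  deriving (Eq, Show)

-- replace the referenced table only when it is the renamed one
renameTableInManual :: String -> String -> ManualConfig -> ManualConfig
renameTableInManual oldQT newQT cfg =
  cfg {mcTable = bool (mcTable cfg) newQT (mcTable cfg == oldQT)}
```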
-- | update fields in permissions
updatePermFlds ::
forall b m.
( MonadError QErr m,
CacheRM m,
MonadWriter MetadataModifier m,
BackendMetadata b
) =>
SourceName ->
TableName b ->
RoleName ->
PermType ->
Rename b ->
m ()
updatePermFlds source refQT rn pt rename = do
tables <- fold <$> askTableCache source
let withTables :: Reader (TableCache b) a -> a
withTables = flip runReader tables
tell $
MetadataModifier $
tableMetadataSetter source refQT %~ case pt of
PTInsert ->
tmInsertPermissions . ix rn . pdPermission %~ \insPerm ->
withTables $ updateInsPermFlds refQT rename insPerm
PTSelect ->
tmSelectPermissions . ix rn . pdPermission %~ \selPerm ->
withTables $ updateSelPermFlds refQT rename selPerm
PTUpdate ->
tmUpdatePermissions . ix rn . pdPermission %~ \updPerm ->
withTables $ updateUpdPermFlds refQT rename updPerm
PTDelete ->
tmDeletePermissions . ix rn . pdPermission %~ \delPerm ->
withTables $ updateDelPermFlds refQT rename delPerm
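-- The 'tableMetadataSetter source refQT %~ ... ix rn ...' chains above are
-- lens updates: modify-at-key, leaving everything untouched when a key is
-- absent. Without the lens library, the same shape is nested 'Map.adjust';
-- the source -> table -> role nesting below is a hypothetical simplification:

```haskell
import qualified Data.Map.Strict as Map

type Perms = Map.Map String Int -- role -> some permission payload
type Tables = Map.Map String Perms
type Sources = Map.Map String Tables

-- modify one role's permission in place; a no-op if any key is missing,
-- just like 'ix' in the lens version
adjustPerm :: String -> String -> String -> (Int -> Int) -> Sources -> Sources
adjustPerm source table role f =
  Map.adjust (Map.adjust (Map.adjust f role) table) source
```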
updateInsPermFlds ::
(MonadReader (TableCache b) m, Backend b) =>
TableName b ->
Rename b ->
PermDefPermission b InsPerm ->
m (PermDefPermission b InsPerm)
updateInsPermFlds refQT rename (InsPerm' (InsPerm chk preset cols backendOnly)) =
case rename of
RTable rt -> do
let updChk = updateTableInBoolExp rt chk
pure $ InsPerm' $ InsPerm updChk preset cols backendOnly
RField rf -> do
updChk <- updateFieldInBoolExp refQT rf chk
let updPresetM = updatePreset refQT rf <$> preset
updColsM = updateCols refQT rf <$> cols
pure $ InsPerm' $ InsPerm updChk updPresetM updColsM backendOnly
updateSelPermFlds ::
(MonadReader (TableCache b) m, Backend b) =>
TableName b ->
Rename b ->
PermDefPermission b SelPerm ->
m (PermDefPermission b SelPerm)
updateSelPermFlds refQT rename (SelPerm' (SelPerm cols fltr limit aggAllwd computedFields allowedQueryRootFieldTypes allowedSubsRootFieldTypes)) = do
case rename of
RTable rt -> do
let updFltr = updateTableInBoolExp rt fltr
pure $ SelPerm' $ SelPerm cols updFltr limit aggAllwd computedFields allowedQueryRootFieldTypes allowedSubsRootFieldTypes
RField rf -> do
updFltr <- updateFieldInBoolExp refQT rf fltr
let updCols = updateCols refQT rf cols
pure $ SelPerm' $ SelPerm updCols updFltr limit aggAllwd computedFields allowedQueryRootFieldTypes allowedSubsRootFieldTypes
updateUpdPermFlds ::
(MonadReader (TableCache b) m, Backend b) =>
TableName b ->
Rename b ->
PermDefPermission b UpdPerm ->
m (PermDefPermission b UpdPerm)
updateUpdPermFlds refQT rename (UpdPerm' (UpdPerm cols preset fltr check backendOnly)) = do
case rename of
RTable rt -> do
let updFltr = updateTableInBoolExp rt fltr
updCheck = fmap (updateTableInBoolExp rt) check
pure $ UpdPerm' $ UpdPerm cols preset updFltr updCheck backendOnly
RField rf -> do
updFltr <- updateFieldInBoolExp refQT rf fltr
updCheck <- traverse (updateFieldInBoolExp refQT rf) check
let updCols = updateCols refQT rf cols
updPresetM = updatePreset refQT rf <$> preset
pure $ UpdPerm' $ UpdPerm updCols updPresetM updFltr updCheck backendOnly
updateDelPermFlds ::
(MonadReader (TableCache b) m, Backend b) =>
TableName b ->
Rename b ->
PermDefPermission b DelPerm ->
m (PermDefPermission b DelPerm)
updateDelPermFlds refQT rename (DelPerm' (DelPerm fltr backendOnly)) = do
case rename of
RTable rt -> do
let updFltr = updateTableInBoolExp rt fltr
pure $ DelPerm' $ DelPerm updFltr backendOnly
RField rf -> do
updFltr <- updateFieldInBoolExp refQT rf fltr
pure $ DelPerm' $ DelPerm updFltr backendOnly
updatePreset ::
(Backend b) =>
TableName b ->
RenameField b ->
ColumnValues b Value ->
ColumnValues b Value
updatePreset qt rf obj =
case rf of
RFCol (RenameItem opQT oCol nCol) ->
if qt == opQT
then updatePreset' oCol nCol
else obj
_ -> obj
where
updatePreset' oCol nCol =
M.fromList updItems
where
updItems = map procObjItem $ M.toList obj
procObjItem (pgCol, v) =
let isUpdated = pgCol == oCol
updCol = bool pgCol nCol isUpdated
in (updCol, v)
updateCols ::
(Backend b) => TableName b -> RenameField b -> PermColSpec b -> PermColSpec b
updateCols qt rf permSpec =
case rf of
RFCol (RenameItem opQT oCol nCol) ->
if qt == opQT
then updateCols' oCol nCol permSpec
else permSpec
_ -> permSpec
where
updateCols' oCol nCol cols = case cols of
PCStar -> cols
PCCols c -> PCCols $
flip map c $
\col -> if col == oCol then nCol else col
updateTableInBoolExp :: (Backend b) => RenameTable b -> BoolExp b -> BoolExp b
updateTableInBoolExp (oldQT, newQT) =
over _Wrapped . transform $
(_BoolExists . geTable) %~ \rqfQT ->
if rqfQT == oldQT then newQT else rqfQT
updateFieldInBoolExp ::
(MonadReader (TableCache b) m, Backend b) =>
TableName b ->
RenameField b ->
BoolExp b ->
m (BoolExp b)
updateFieldInBoolExp qt rf be =
BoolExp
<$> case unBoolExp be of
BoolAnd exps -> BoolAnd <$> procExps exps
BoolOr exps -> BoolOr <$> procExps exps
BoolNot e -> BoolNot <$> updateBoolExp' e
BoolExists (GExists refqt wh) ->
BoolExists . GExists refqt . unBoolExp
<$> updateFieldInBoolExp refqt rf (BoolExp wh)
BoolField fld -> BoolField <$> updateColExp qt rf fld
where
procExps = mapM updateBoolExp'
updateBoolExp' =
fmap unBoolExp . updateFieldInBoolExp qt rf . BoolExp
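-- 'updateFieldInBoolExp' walks the expression structurally and switches the
-- "current table" whenever it enters an EXISTS clause, so a column is renamed
-- only where it actually refers to the renamed table. The same recursion
-- shape over a simplified, hypothetical expression type:

```haskell
data SimpleBoolExp
  = And [SimpleBoolExp]
  | Or [SimpleBoolExp]
  | Not SimpleBoolExp
  | Exists String SimpleBoolExp -- the table scope switches here
  | Field String -- a column reference in the current table
  deriving (Eq, Show)

-- rename column old -> new wherever the current table is renamedTable,
-- starting the traversal in startTable
renameFieldIn :: String -> String -> String -> String -> SimpleBoolExp -> SimpleBoolExp
renameFieldIn startTable renamedTable old new = go startTable
  where
    go t (And es) = And (map (go t) es)
    go t (Or es) = Or (map (go t) es)
    go t (Not e) = Not (go t e)
    go _ (Exists t' e) = Exists t' (go t' e) -- scope switch, as in GExists
    go t (Field f)
      | t == renamedTable && f == old = Field new
      | otherwise = Field f
```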
updateColExp ::
forall b m.
(MonadReader (TableCache b) m, Backend b) =>
TableName b ->
RenameField b ->
ColExp ->
m ColExp
updateColExp qt rf (ColExp fld val) =
ColExp updatedFld <$> updatedVal
where
updatedFld = bool fld nFld $ opQT == qt && oFld == fld
updatedVal = do
tables <- ask
let maybeFieldInfo =
M.lookup qt tables
>>= M.lookup fld . _tciFieldInfoMap . _tiCoreInfo
case maybeFieldInfo of
Nothing -> pure val
Just fi -> case fi of
FIColumn _ -> pure val
FIComputedField _ -> pure val
FIRelationship ri -> do
let remTable = riRTable ri
case decodeValue val of
Left _ -> pure val
Right be -> toJSON <$> updateFieldInBoolExp remTable rf be
FIRemoteRelationship {} -> pure val
(oFld, nFld, opQT) = case rf of
RFCol (RenameItem tn oCol nCol) -> (fromCol @b oCol, fromCol @b nCol, tn)
RFRel (RenameItem tn oRel nRel) -> (fromRel oRel, fromRel nRel, tn)
-- rename columns in relationship definitions
updateColInRel ::
forall b m.
(CacheRM m, MonadWriter MetadataModifier m, BackendMetadata b) =>
SourceName ->
TableName b ->
RelName ->
RenameCol b ->
m ()
updateColInRel source fromQT rn rnCol = do
tables <- fold <$> askTableCache @b source
let maybeRelInfo =
tables ^? ix fromQT . tiCoreInfo . tciFieldInfoMap . ix (fromRel rn) . _FIRelationship
forM_ maybeRelInfo $ \relInfo ->
let relTableName = riRTable relInfo
in tell $
MetadataModifier $
tableMetadataSetter source fromQT
%~ case riType relInfo of
ObjRel ->
tmObjectRelationships . ix rn . rdUsing
%~ updateColInObjRel fromQT relTableName rnCol
ArrRel ->
tmArrayRelationships . ix rn . rdUsing
%~ updateColInArrRel fromQT relTableName rnCol
-- | Local helper: update a column's name in the left-hand side of a remote relationship.
--
-- There are two kinds of remote relationships: remote source relationships, across sources, and
-- remote schema relationships, on remote schemas. In both cases, we maintain a mapping from the
-- source table's columns to what they should be joined against in the target; when a column is
-- renamed, those references must be renamed as well. This function handles both cases.
--
-- See 'renameColumnInMetadata'.
updateColInRemoteRelationshipLHS ::
forall b m.
( MonadError QErr m,
MonadWriter MetadataModifier m,
BackendMetadata b
) =>
SourceName ->
RelName ->
RenameCol b ->
m ()
updateColInRemoteRelationshipLHS source remoteRelationshipName (RenameItem qt oldCol newCol) = do
oldColName <- parseGraphQLName $ toTxt oldCol
newColName <- parseGraphQLName $ toTxt newCol
let oldFieldName = fromCol @b oldCol
newFieldName = fromCol @b newCol
updateSet =
Set.insert newFieldName . Set.delete oldFieldName
updateMapKey =
          -- M.mapKeys is not available in unordered-containers 0.2.13.0, so rebuild via a list
M.fromList . map (\(key, value) -> (if key == oldFieldName then newFieldName else key, value)) . M.toList
updateFieldCalls (RemoteFields fields) =
RemoteFields $
fields <&> \(FieldCall name (RemoteArguments args)) ->
FieldCall name $ RemoteArguments $ updateVariableName <$> args
updateVariableName =
fmap \v -> if v == oldColName then newColName else v
remoteRelationshipLens =
tableMetadataSetter @b source qt . tmRemoteRelationships . ix remoteRelationshipName . rrDefinition
remoteSchemaLHSModifier =
remoteRelationshipLens . _RelationshipToSchema . _2
%~ (trrdLhsFields %~ updateSet)
. (trrdRemoteField %~ updateFieldCalls)
remoteSourceLHSModifier =
remoteRelationshipLens . _RelationshipToSource . tsrdFieldMapping %~ updateMapKey
tell $ MetadataModifier $ remoteSchemaLHSModifier . remoteSourceLHSModifier
where
parseGraphQLName txt =
G.mkName txt
`onNothing` throw400 ParseFailed (txt <> " is not a valid GraphQL name")
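-- A note on 'updateMapKey' above: it renames a key in place by round-tripping
-- the map through a list. A minimal sketch of the same idea, using
-- hypothetical string keys in place of the backend's field-name type:
--
-- > renameKey :: (Eq k, Hashable k) => k -> k -> HashMap k v -> HashMap k v
-- > renameKey old new =
-- >   M.fromList . map (\(k, v) -> (if k == old then new else k, v)) . M.toList
-- >
-- > renameKey "id" "author_id" (M.fromList [("id", "x")])
-- >   -- yields M.fromList [("author_id", "x")]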
-- | Local helper: update a column's name in the right-hand side of a remote relationship.
--
-- In the case of remote _source_ relationships, the mapping from column to column needs to be
-- updated if one of the rhs columns has been renamed. A dependency is tracked from the rhs source's
-- column to the lhs source's relationship: when a rhs source's column has been renamed, this
-- function performs the corresponding update in the lhs source's relationship definition.
--
-- See 'renameColumnInMetadata'.
updateColInRemoteRelationshipRHS ::
forall source target m.
( MonadWriter MetadataModifier m,
Backend source,
Backend target
) =>
SourceName ->
TableName source ->
RelName ->
RenameCol target ->
m ()
updateColInRemoteRelationshipRHS source tableName remoteRelationshipName (RenameItem _ oldCol newCol) =
tell $
MetadataModifier $
tableMetadataSetter @source source tableName
. tmRemoteRelationships
. ix remoteRelationshipName
. rrDefinition
. _RelationshipToSource
. tsrdFieldMapping
%~ updateMapValue
where
oldFieldName = fromCol @target oldCol
newFieldName = fromCol @target newCol
updateMapValue =
fmap \value -> if value == oldFieldName then newFieldName else value
-- | Local helper: update a table's name in the right-hand side of a remote relationship.
--
-- In the case of remote _source_ relationships, the relationship definition targets a specific
-- table in the rhs source, and that reference needs to be updated if the targeted table has been
-- renamed. A dependency is tracked from the rhs source's table to the lhs source's relationship:
-- when a rhs table has been renamed, this function performs the corresponding update in the lhs
-- source's relationship definition.
--
-- See 'renameTableInMetadata'.
updateTableInRemoteRelationshipRHS ::
forall source target m.
( MonadWriter MetadataModifier m,
Backend source,
Backend target
) =>
SourceName ->
TableName source ->
RelName ->
RenameTable target ->
m ()
updateTableInRemoteRelationshipRHS source tableName remoteRelationshipName (_, newTableName) =
tell $
MetadataModifier $
tableMetadataSetter @source source tableName
. tmRemoteRelationships
. ix remoteRelationshipName
. rrDefinition
. _RelationshipToSource
. tsrdTable
.~ toJSON newTableName
updateColInObjRel ::
(Backend b) =>
TableName b ->
TableName b ->
RenameCol b ->
ObjRelUsing b ->
ObjRelUsing b
updateColInObjRel fromQT toQT rnCol = \case
RUFKeyOn c ->
RUFKeyOn $ updateRelChoice fromQT toQT rnCol c
RUManual manConfig ->
RUManual $ updateRelManualConfig fromQT toQT rnCol manConfig
updateRelChoice ::
Backend b =>
TableName b ->
TableName b ->
RenameCol b ->
ObjRelUsingChoice b ->
ObjRelUsingChoice b
updateRelChoice fromQT toQT rnCol =
\case
SameTable col -> SameTable $ getNewCol rnCol fromQT col
RemoteTable t c -> RemoteTable t (getNewCol rnCol toQT c)
updateColInArrRel ::
(Backend b) =>
TableName b ->
TableName b ->
RenameCol b ->
ArrRelUsing b ->
ArrRelUsing b
updateColInArrRel fromQT toQT rnCol = \case
RUFKeyOn (ArrRelUsingFKeyOn t c) ->
let updCol = getNewCol rnCol toQT c
in RUFKeyOn $ ArrRelUsingFKeyOn t updCol
RUManual manConfig -> RUManual $ updateRelManualConfig fromQT toQT rnCol manConfig
type ColMap b = HashMap (Column b) (Column b)
getNewCol ::
forall b f.
Backend b =>
Functor f =>
RenameCol b ->
TableName b ->
f (Column b) ->
f (Column b)
getNewCol rnCol qt cols =
if qt == opQT
then go <$> cols
else cols
where
RenameItem opQT oCol nCol = rnCol
go :: Column b -> Column b
go col
| col == oCol = nCol
| otherwise = col
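-- For intuition: 'getNewCol' rewrites a column only when the rename targets
-- the table we are looking at; otherwise the container passes through
-- untouched. A sketch with 'Identity' and string stand-ins for the backend's
-- table/column types (names are illustrative, not real metadata):
--
-- > getNewCol (RenameItem "author" "name" "full_name") "author" (Identity "name")
-- >   -- yields Identity "full_name"
-- > getNewCol (RenameItem "author" "name" "full_name") "article" (Identity "name")
-- >   -- yields Identity "name", unchanged: the rename is for another table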
updateRelManualConfig ::
forall b.
(Backend b) =>
TableName b ->
TableName b ->
RenameCol b ->
RelManualConfig b ->
RelManualConfig b
updateRelManualConfig fromQT toQT rnCol manConfig =
RelManualConfig tn (updateColMap fromQT toQT rnCol colMap) io
where
RelManualConfig tn colMap io = manConfig
updateColMap ::
forall b.
(Backend b) =>
TableName b ->
TableName b ->
RenameCol b ->
ColMap b ->
ColMap b
updateColMap fromQT toQT rnCol =
M.fromList . map (modCol fromQT *** modCol toQT) . M.toList
where
RenameItem qt oCol nCol = rnCol
modCol colQt col = if colQt == qt && col == oCol then nCol else col
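-- Since keys belong to the "from" table and values to the "to" table,
-- 'updateColMap' applies the rename to each side independently. A sketch with
-- string stand-ins (illustrative names only): renaming @"id"@ to @"uid"@ on
-- the "from" table rewrites only the keys of the mapping:
--
-- > updateColMap "author" "article" (RenameItem "author" "id" "uid")
-- >     (M.fromList [("id", "author_id")])
-- >   -- yields M.fromList [("uid", "author_id")]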
possiblyUpdateCustomColumnNames ::
forall b m.
(MonadWriter MetadataModifier m, BackendMetadata b) =>
SourceName ->
TableName b ->
Column b ->
Column b ->
m ()
possiblyUpdateCustomColumnNames source tableName oldColumn newColumn = do
tell $
MetadataModifier $
tableMetadataSetter @b source tableName . tmConfiguration . tcColumnConfig %~ swapOldColumnForNewColumn
where
swapOldColumnForNewColumn :: HashMap (Column b) columnData -> HashMap (Column b) columnData
swapOldColumnForNewColumn customColumns =
M.fromList $ (\(dbCol, val) -> (,val) $ if dbCol == oldColumn then newColumn else dbCol) <$> M.toList customColumns