[Preview] Inherited roles for postgres read queries

fixes #3868

docker image - `hasura/graphql-engine:inherited-roles-preview-48b73a2de`

Note:

To use the inherited roles feature, graphql-engine must be started with the environment variable `HASURA_GRAPHQL_EXPERIMENTAL_FEATURES` set to `inherited_roles`.

Introduction
------------

This PR implements the idea of multiple roles as presented in this [paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/FGALanguageICDE07.pdf). The multiple-roles feature in this PR is exposed via inherited roles. An inherited role is a role created by combining multiple singular roles. For example, if two roles `author` and `editor` are configured in the graphql-engine, we can create an inherited role named `combined_author_editor` which combines the select permissions of the `author` and `editor` roles; GraphQL queries can then be made using the `combined_author_editor` role.

How are the select permissions of different roles combined?
------------------------------------------------------------

A select permission includes 5 things:

1. Columns accessible to the role
2. Row selection filter
3. Limit
4. Allow aggregation
5. Scalar computed fields accessible to the role

Suppose there are two roles: `role1` gives access to the `address` column with row filter `P1`, and `role2` gives access to both the `address` and `phone` columns with row filter `P2`. We create a new role `combined_roles` which combines `role1` and `role2`.

Let's say the following GraphQL query is executed with the `combined_roles` role.

```graphql
query {
   employees {
     address
     phone
   }
}
```

This will translate to the following SQL query:

```sql
select
  (case when (P1 or P2) then address else null end) as address,
  (case when P2 then phone else null end) as phone
from employee
where (P1 or P2)
```

The other parameters of the select permission will be combined in the following manner:

1. Limit - the minimum of the limits becomes the limit of the inherited role
2. Allow aggregations - if any of the roles allows aggregation, the inherited role allows it
3. Scalar computed fields - combined in the same way as table columns, as in the above example
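The combination rules above can be sketched in a small model (illustrative only, not the engine's actual Haskell types; `SelectPermission`, `combine_permissions`, and the handling of a missing limit are assumptions made for the example):

```python
from dataclasses import dataclass
from typing import FrozenSet, Optional

@dataclass
class SelectPermission:
    # Hypothetical model of a select permission; field names are illustrative.
    columns: FrozenSet[str]      # columns accessible to the role
    row_filter: str              # boolean predicate, e.g. "P1"
    limit: Optional[int]         # None means no limit is set
    allow_aggregations: bool

def combine_permissions(p1: SelectPermission, p2: SelectPermission) -> SelectPermission:
    limits = [l for l in (p1.limit, p2.limit) if l is not None]
    return SelectPermission(
        # union of the accessible columns
        columns=p1.columns | p2.columns,
        # row filters are OR'ed together
        row_filter=f"({p1.row_filter} or {p2.row_filter})",
        # the minimum of the set limits wins; no limit if neither role sets one
        limit=min(limits) if limits else None,
        # aggregation is allowed if any role allows it
        allow_aggregations=p1.allow_aggregations or p2.allow_aggregations,
    )
```

For the `role1`/`role2` example above, the combined permission exposes both columns, filters rows with `(P1 or P2)`, and allows aggregation only if either role does.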

APIs for inherited roles
------------------------

1. `add_inherited_role`

`add_inherited_role` is the [metadata API](https://hasura.io/docs/1.0/graphql/core/api-reference/index.html#schema-metadata-api) to create a new inherited role. It accepts two arguments:

`role_name`: the name of the inherited role to be added (String)
`role_set`: the list of roles to be combined (Array of Strings)

Example:

```json
{
  "type": "add_inherited_role",
  "args": {
      "role_name":"combined_user",
      "role_set":[
          "user",
          "user1"
      ]
  }
}
```
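A request like the one above can be constructed programmatically. The sketch below only builds the request object; the `/v1/query` endpoint and the `X-Hasura-Admin-Secret` header follow the usual graphql-engine conventions and are assumptions here, so adjust them for your deployment:

```python
import json
import urllib.request

def add_inherited_role_request(hge_url: str, admin_secret: str,
                               role_name: str, role_set: list) -> urllib.request.Request:
    """Build (but do not send) an add_inherited_role metadata API request."""
    payload = {
        "type": "add_inherited_role",
        "args": {"role_name": role_name, "role_set": role_set},
    }
    return urllib.request.Request(
        url=f"{hge_url}/v1/query",          # assumed metadata API endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Hasura-Admin-Secret": admin_secret,  # assumed auth header
        },
        method="POST",
    )
```

Sending the request (e.g. with `urllib.request.urlopen`) against a running graphql-engine would create the inherited role.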

After it is added, an inherited role can be used just like a singular role.

Note:

An inherited role can only be created from non-inherited (singular) roles.

2. `drop_inherited_role`

The `drop_inherited_role` API drops an inherited role from the metadata. It accepts a single argument:

`role_name`: the name of the inherited role to be dropped

Example:

```json
{
  "type": "drop_inherited_role",
  "args": {
      "role_name":"combined_user"
  }
}
```

Metadata
---------

The inherited roles metadata will be included under the `experimental_features` key (as `derived_roles`) while exporting the metadata.

```json
{
  "experimental_features": {
    "derived_roles": [
      {
        "role_name": "manager_is_employee_too",
        "role_set": [
          "employee",
          "manager"
        ]
      }
    ]
  }
}
```

Scope
------

Only postgres queries and subscriptions are supported in this PR.

Important points:
-----------------

1. All columns exposed to an inherited role are marked as `nullable`, so that cell-value nullification can be done.
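Why the columns must be nullable can be seen in a small model of the generated `case` expressions (illustrative only; the predicates `P1`/`P2` are modelled as Python functions over a row):

```python
def run_inherited_select(rows, p1, p2):
    # Mirrors the SQL generated for the combined_roles example:
    #   select (case when (P1 or P2) then address else null end) as address,
    #          (case when P2 then phone else null end) as phone
    #   from employee where (P1 or P2)
    out = []
    for row in rows:
        if p1(row) or p2(row):               # where (P1 or P2)
            out.append({
                "address": row["address"],   # visible: the row passed (P1 or P2)
                "phone": row["phone"] if p2(row) else None,  # nullified cell
            })
    return out
```

A row that satisfies `P1` but not `P2` is returned with `phone` set to `null`, which is only representable if the column is nullable in the schema.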

TODOs
-------

- [ ] Tests
   - [ ] Test a GraphQL query running with an inherited role without enabling inherited roles in experimental features
   - [ ] Tests for aggregate queries, limit, computed fields, functions, subscriptions (?)
   - [ ] Introspection test with an inherited role (nullability changes for an inherited role)
- [ ] Docs
- [ ] Changelog

Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
GitOrigin-RevId: 3b8ee1e11f5ceca80fe294f8c074d42fbccfec63
This commit is contained in:
Karthikeyan Chinnakonda 2021-03-08 16:44:13 +05:30 committed by hasura-bot
parent 34b611ae43
commit 92026b769f
76 changed files with 1419 additions and 309 deletions


@ -21,6 +21,7 @@ ws-graphql-api-disabled
ws-metadata-api-disabled
remote-schema-permissions
function-permissions
inherited-roles
remote-schema-https
query-caching
query-logs


@ -699,6 +699,24 @@ case "$SERVER_TEST_TO_RUN" in
kill_hge_servers
;;
inherited-roles)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH EXPERIMENTAL FEATURE: INHERITED-ROLES ########>\n"
TEST_TYPE="experimental-features-inherited-roles"
export HASURA_GRAPHQL_EXPERIMENTAL_FEATURES="inherited_roles"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
run_hge_with_args serve
wait_for_port 8080
pytest -n 1 -vv --hge-urls "$HGE_URL" --pg-urls "$HASURA_GRAPHQL_DATABASE_URL" --hge-key="$HASURA_GRAPHQL_ADMIN_SECRET" --test-inherited-roles test_graphql_queries.py::TestGraphQLInheritedRoles
pytest -vv --hge-urls="$HGE_URL" --pg-urls="$HASURA_GRAPHQL_DATABASE_URL" --hge-key="$HASURA_GRAPHQL_ADMIN_SECRET" --test-inherited-roles test_graphql_mutations.py::TestGraphQLInheritedRoles
unset HASURA_GRAPHQL_EXPERIMENTAL_FEATURES
unset HASURA_GRAPHQL_ADMIN_SECRET
kill_hge_servers
;;
query-caching)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE QUERY CACHING #####################################>\n"
TEST_TYPE="query-caching"


@ -7,6 +7,7 @@
(Add entries here in the order of: server, console, cli, docs, others)
- server/mssql: support tracking and querying from views
- server: inherited roles for PG queries and subscription
- cli: add support for rest endpoints


@ -427,12 +427,14 @@ library
, Hasura.RQL.Types.SchemaCacheTypes
, Hasura.RQL.Types.Source
, Hasura.RQL.Types.Table
, Hasura.RQL.Types.InheritedRoles
, Hasura.RQL.DDL.Action
, Hasura.RQL.DDL.ApiLimit
, Hasura.RQL.DDL.ComputedField
, Hasura.RQL.DDL.CustomTypes
, Hasura.RQL.DDL.Deps
, Hasura.RQL.DDL.Endpoint
, Hasura.RQL.DDL.InheritedRoles
, Hasura.RQL.DDL.Headers
, Hasura.RQL.DDL.Metadata
, Hasura.RQL.DDL.Metadata.Generator


@ -113,7 +113,7 @@ runApp env (HGEOptionsG rci metadataDbUrl hgeCmd) = do
functionPermsCtx = FunctionPermissionsInferred
maintenanceMode = MaintenanceModeDisabled
serverConfigCtx =
ServerConfigCtx functionPermsCtx remoteSchemaPermsCtx sqlGenCtx maintenanceMode
ServerConfigCtx functionPermsCtx remoteSchemaPermsCtx sqlGenCtx maintenanceMode mempty
cacheBuildParams =
CacheBuildParams _gcHttpManager pgSourceResolver serverConfigCtx
runManagedT (mkMinimalPool _gcMetadataDbConnInfo) $ \metadataDbPool -> do


@ -7,6 +7,8 @@ module Data.HashMap.Strict.Extended
, groupOnNE
, differenceOn
, lpadZip
, mapKeys
, unionsWith
) where
import Prelude
@ -15,8 +17,8 @@ import qualified Data.Align as A
import qualified Data.Foldable as F
import Data.Function
import Data.Hashable
import Data.HashMap.Strict as M
import Data.Hashable
import Data.List.NonEmpty (NonEmpty (..))
import Data.These
@ -51,3 +53,28 @@ lpadZip left = catMaybes . flip A.alignWith left \case
This _ -> Nothing
That b -> Just (Nothing, b)
These a b -> Just (Just a, b)
-- | @'mapKeys' f s@ is the map obtained by applying @f@ to each key of @s@.
--
-- The size of the result may be smaller if @f@ maps two or more distinct
-- keys to the same new key. In this case the value at the greatest of the
-- original keys is retained.
--
-- > mapKeys (+ 1) (fromList [(5,"a"), (3,"b")]) == fromList [(4, "b"), (6, "a")]
-- > mapKeys (\ _ -> 1) (fromList [(1,"b"), (2,"a"), (3,"d"), (4,"c")]) == singleton 1 "c"
-- > mapKeys (\ _ -> 3) (fromList [(1,"b"), (2,"a"), (3,"d"), (4,"c")]) == singleton 3 "c"
--
-- copied from https://hackage.haskell.org/package/containers-0.6.4.1/docs/src/Data.Map.Internal.html#mapKeys
mapKeys :: (Ord k2, Hashable k2) => (k1 -> k2) -> HashMap k1 a -> HashMap k2 a
mapKeys f = fromList . foldrWithKey (\k x xs -> (f k, x) : xs) []
-- | The union of a list of maps, with a combining operation:
-- (@'unionsWith' f == 'Prelude.foldl' ('unionWith' f) 'empty'@).
--
-- > unionsWith (++) [(fromList [(5, "a"), (3, "b")]), (fromList [(5, "A"), (7, "C")]), (fromList [(5, "A3"), (3, "B3")])]
-- > == fromList [(3, "bB3"), (5, "aAA3"), (7, "C")]
--
-- copied from https://hackage.haskell.org/package/containers-0.6.4.1/docs/src/Data.Map.Internal.html#unionsWith
unionsWith :: (Foldable f, Hashable k, Ord k) => (a -> a -> a) -> f (HashMap k a) -> HashMap k a
unionsWith f ts
= F.foldl' (unionWith f) empty ts


@ -286,7 +286,8 @@ initialiseServeCtx env GlobalCtx{..} so@ServeOptions{..} = do
(schemaSyncListenerThread, schemaSyncEventRef) <- startSchemaSyncListenerThread metadataDbPool logger instanceId
let serverConfigCtx =
ServerConfigCtx soInferFunctionPermissions soEnableRemoteSchemaPermissions sqlGenCtx soEnableMaintenanceMode
ServerConfigCtx soInferFunctionPermissions soEnableRemoteSchemaPermissions
sqlGenCtx soEnableMaintenanceMode soExperimentalFeatures
(rebuildableSchemaCache, cacheInitStartTime) <-
lift . flip onException (flushLogger loggerCtx) $
@ -484,9 +485,14 @@ runHGEServer setupHook env ServeOptions{..} ServeCtx{..} initTime postPollHook s
soConnectionOptions
soWebsocketKeepAlive
soEnableMaintenanceMode
soExperimentalFeatures
let serverConfigCtx =
ServerConfigCtx soInferFunctionPermissions soEnableRemoteSchemaPermissions sqlGenCtx soEnableMaintenanceMode
ServerConfigCtx soInferFunctionPermissions
soEnableRemoteSchemaPermissions
sqlGenCtx
soEnableMaintenanceMode
soExperimentalFeatures
-- log inconsistent schema objects
inconsObjs <- scInconsistentObjs <$> liftIO (getSCFromRef cacheRef)


@ -115,7 +115,7 @@ fromSelectRows annSelectG = do
selectFrom <-
case from of
IR.FromTable qualifiedObject -> fromQualifiedTable qualifiedObject
IR.FromFunction _ _ _ -> refute $ pure FunctionNotSupported
IR.FromFunction {} -> refute $ pure FunctionNotSupported
Args { argsOrderBy
, argsWhere
, argsJoins
@ -153,9 +153,7 @@ fromSelectRows annSelectG = do
} = annSelectG
IR.TablePerm {_tpLimit = mPermLimit, _tpFilter = permFilter} = perm
permissionBasedTop =
case mPermLimit of
Nothing -> NoTop
Just limit -> Top limit
maybe NoTop Top mPermLimit
stringifyNumbers =
if num
then StringifyNumbers
@ -168,7 +166,7 @@ fromSelectAggregate annSelectG = do
selectFrom <-
case from of
IR.FromTable qualifiedObject -> fromQualifiedTable qualifiedObject
IR.FromFunction _ _ _ -> refute $ pure FunctionNotSupported
IR.FromFunction {} -> refute $ pure FunctionNotSupported
fieldSources <-
runReaderT (traverse fromTableAggregateFieldG fields) (fromAlias selectFrom)
filterExpression <-
@ -202,10 +200,7 @@ fromSelectAggregate annSelectG = do
, _asnStrfyNum = _num -- TODO: Do we ignore this for aggregates?
} = annSelectG
IR.TablePerm {_tpLimit = mPermLimit, _tpFilter = permFilter} = perm
permissionBasedTop =
case mPermLimit of
Nothing -> NoTop
Just limit -> Top limit
permissionBasedTop = maybe NoTop Top mPermLimit
--------------------------------------------------------------------------------
@ -583,7 +578,7 @@ fromAnnFieldsG existingJoins stringifyNumbers (IR.FieldName name, field) =
-- 'ToStringExpression' so that it's casted when being projected.
fromAnnColumnField ::
StringifyNumbers
-> IR.AnnColumnField 'MSSQL
-> IR.AnnColumnField 'MSSQL Expression
-> ReaderT EntityAlias FromIr Expression
fromAnnColumnField _stringifyNumbers annColumnField = do
fieldName <- fromPGCol pgCol


@ -165,6 +165,7 @@ deriving instance Data n => Data (Countable n)
instance NFData n => NFData (Countable n)
instance ToJSON n => ToJSON (Countable n)
instance FromJSON n => FromJSON (Countable n)
deriving instance Ord ColumnName
instance Monoid Where where
mempty = Where mempty


@ -123,7 +123,6 @@ pathToAlias path counter =
parseGraphQLName $ T.intercalate "_" (map getFieldNameTxt $ unFieldPath path)
<> "__" <> (tshow . unCounter) counter
type CompositeObject a = OMap.InsOrdHashMap Text (CompositeValue a)
-- | A hybrid JSON value representation which captures the context of remote join field in type parameter.


@ -49,7 +49,9 @@ checkPermissionRequired = \case
pgColsToSelFlds :: [ColumnInfo 'Postgres] -> [(FieldName, AnnField 'Postgres)]
pgColsToSelFlds cols =
flip map cols $
\pgColInfo -> (fromCol @'Postgres $ pgiColumn pgColInfo, mkAnnColumnField pgColInfo Nothing)
\pgColInfo -> (fromCol @'Postgres $ pgiColumn pgColInfo, mkAnnColumnField pgColInfo Nothing Nothing)
-- ^^ Nothing because mutations aren't supported
-- with inherited role
mkDefaultMutFlds :: Maybe [ColumnInfo 'Postgres] -> MutationOutput 'Postgres
mkDefaultMutFlds = MOutMultirowFields . \case


@ -33,7 +33,6 @@ import Hasura.RQL.IR.Select
import Hasura.RQL.Types hiding (Identifier)
import Hasura.SQL.Types
selectQuerySQL :: JsonAggSelect -> AnnSimpleSel 'Postgres -> Q.Query
selectQuerySQL jsonAggSelect sel =
Q.fromBuilder $ toSQL $ mkSQLSelect jsonAggSelect sel
@ -756,10 +755,31 @@ aggregateFieldsToExtractorExps sourcePrefix aggregateFields =
colAls = toIdentifier c
in (S.Alias colAls, qualCol)
{- Note: [SQL generation for inherited roles]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When a query is executed by an inherited role, each column may contain a predicate
(AnnColumnCaseBoolExp 'Postgres SQLExp) along with it. The predicate is then
converted to a BoolExp, which will be used to check if the said column should
be nullified. For example,
Suppose there are two roles, role1 gives access only to the `addr` column with
row filter P1 and role2 gives access to both addr and phone column with row
filter P2. The `OR`ing of the predicates will have already been done while
the schema has been generated. The SQL generated will look like this:
select
(case when (P1 or P2) then addr else null end) as addr,
(case when P2 then phone else null end) as phone
from employee
where (P1 or P2)
-}
processAnnFields
:: forall m . ( MonadReader Bool m
, MonadWriter (JoinTree 'Postgres) m
)
:: forall m
. ( MonadReader Bool m
, MonadWriter (JoinTree 'Postgres) m
)
=> Identifier
-> FieldName
-> SimilarArrayFields
@ -798,7 +818,19 @@ processAnnFields sourcePrefix fieldAlias similarArrFields annFields = do
processArrayRelation (mkSourcePrefixes arrRelSourcePrefix) fieldName arrRelAlias arrSel
pure $ S.mkQIdenExp arrRelSourcePrefix fieldName
AFComputedField _ (CFSScalar scalar) -> fromScalarComputedField scalar
AFComputedField _ (CFSScalar scalar caseBoolExpMaybe) -> do
computedFieldSQLExp <- fromScalarComputedField scalar
-- The computed field is conditionally outputed depending
-- on the presence of `caseBoolExpMaybe` and the value it
-- evaluates to. `caseBoolExpMaybe` will be set only in the
-- case of an inherited role.
-- See [SQL generation for inherited role]
case caseBoolExpMaybe of
Nothing -> pure computedFieldSQLExp
Just caseBoolExp ->
let boolExp = S.simplifyBoolExp $ toSQLBoolExp (S.QualifiedIdentifier baseTableIdentifier Nothing)
$ _accColCaseBoolExpField <$> caseBoolExp
in pure $ S.SECond boolExp computedFieldSQLExp S.SENull
AFComputedField _ (CFSTable selectTy sel) -> withWriteComputedFieldTableSet $ do
let computedFieldSourcePrefix =
@ -814,7 +846,7 @@ processAnnFields sourcePrefix fieldAlias similarArrFields annFields = do
)
pure $
-- posttgres ignores anything beyond 63 chars for an iden
-- postgres ignores anything beyond 63 chars for an iden
-- in this case, we'll need to use json_build_object function
-- json_build_object is slower than row_to_json hence it is only
-- used when needed
@ -829,11 +861,24 @@ processAnnFields sourcePrefix fieldAlias similarArrFields annFields = do
toRowToJsonExtr (fieldName, fieldExp) =
S.Extractor fieldExp $ Just $ S.toAlias fieldName
toSQLCol :: AnnColumnField 'Postgres -> m S.SQLExp
toSQLCol (AnnColumnField col asText colOpM) = do
baseTableIdentifier = mkBaseTableAlias sourcePrefix
toSQLCol :: AnnColumnField 'Postgres S.SQLExp -> m S.SQLExp
toSQLCol (AnnColumnField col asText colOpM caseBoolExpMaybe) = do
strfyNum <- ask
pure $ toJSONableExp strfyNum (pgiType col) asText $ withColumnOp colOpM $
S.mkQIdenExp (mkBaseTableAlias sourcePrefix) $ pgiColumn col
let sqlExpression =
withColumnOp colOpM $
S.mkQIdenExp baseTableIdentifier $ pgiColumn col
finalSQLExpression =
-- Check out [SQL generation for inherited role]
case caseBoolExpMaybe of
Nothing -> sqlExpression
Just caseBoolExp ->
let boolExp =
S.simplifyBoolExp $ toSQLBoolExp (S.QualifiedIdentifier baseTableIdentifier Nothing) $
_accColCaseBoolExpField <$> caseBoolExp
in S.SECond boolExp sqlExpression S.SENull
pure $ toJSONableExp strfyNum (pgiType col) asText finalSQLExpression
fromScalarComputedField :: ComputedFieldScalarSelect 'Postgres S.SQLExp -> m S.SQLExp
fromScalarComputedField computedFieldScalar = do


@ -108,7 +108,7 @@ getExecPlanPartial
getExecPlanPartial userInfo sc queryType req =
(getGCtx ,) <$> getQueryParts req
where
roleName = _uiRole userInfo
role = _uiRole userInfo
contextMap =
case queryType of
@ -122,7 +122,7 @@ getExecPlanPartial userInfo sc queryType req =
getGCtx :: C.GQLContext
getGCtx =
case Map.lookup roleName contextMap of
case Map.lookup role contextMap of
Nothing -> defaultContext
Just (C.RoleContext frontend backend) ->
case _uiBackendOnlyFieldAccess userInfo of


@ -337,7 +337,8 @@ resolveAsyncActionQuery userInfo annAction actionLogResponse = ActionExecution
sessionVarsColumn = (unsafePGCol "session_variables", PGJSONB)
-- TODO (from master):- Avoid using ColumnInfo
mkAnnFldFromPGCol = flip RS.mkAnnColumnField Nothing . mkPGColumnInfo
mkAnnFldFromPGCol columnInfoArgs =
RS.mkAnnColumnField (mkPGColumnInfo columnInfoArgs) Nothing Nothing
mkPGColumnInfo (column', columnType) =
ColumnInfo column' (G.unsafeMkName $ getPGColTxt column') 0 (ColumnScalar columnType) True Nothing
@ -354,7 +355,7 @@ resolveAsyncActionQuery userInfo annAction actionLogResponse = ActionExecution
-- For non-admin roles, accessing an async action's response should be allowed only for the user
-- who initiated the action through mutation. The action's response is accessible for a query/subscription
-- only when it's session variables are equal to that of action's.
in if isAdmin (_uiRole userInfo) then actionIdColumnEq
in if (adminRoleName == (_uiRole userInfo)) then actionIdColumnEq
else BoolAnd [actionIdColumnEq, sessionVarsColumnEq]


@ -216,7 +216,7 @@ transformAnnFields path fields = do
AFArrayRelation . ASConnection <$> transformArrayConnection fieldPath annRel
AFComputedField x computedField ->
AFComputedField x <$> case computedField of
CFSScalar _ -> pure computedField
CFSScalar _ _ -> pure computedField
CFSTable jas annSel -> CFSTable jas <$> transformSelect fieldPath annSel
AFRemote x rs -> pure $ AFRemote x rs
AFExpression t -> pure $ AFExpression t
@ -224,7 +224,7 @@ transformAnnFields path fields = do
case NE.nonEmpty remoteJoins of
Nothing -> pure transformedFields
Just nonEmptyRemoteJoins -> do
let phantomColumns = map (\ci -> (fromCol @b $ pgiColumn ci, AFColumn $ AnnColumnField ci False Nothing)) $
let phantomColumns = map (\ci -> (fromCol @b $ pgiColumn ci, AFColumn $ AnnColumnField ci False Nothing Nothing)) $
concatMap _rjPhantomFields remoteJoins
modify (Map.insert path nonEmptyRemoteJoins)
pure $ transformedFields <> phantomColumns


@ -133,7 +133,9 @@ explainGQLQuery
-> m EncJSON
explainGQLQuery sc (GQLExplain query userVarsRaw maybeIsRelay) = do
-- NOTE!: we will be executing what follows as though admin role. See e.g. notes in explainField:
userInfo <- mkUserInfo (URBFromSessionVariablesFallback adminRoleName) UAdminSecretSent sessionVariables
userInfo <-
mkUserInfo (URBFromSessionVariablesFallback adminRoleName) UAdminSecretSent
sessionVariables
-- we don't need to check in allow list as we consider it an admin endpoint
let takeFragment =
\case G.ExecutableDefinitionFragment f -> Just f; _ -> Nothing


@ -24,6 +24,7 @@ import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.Error
import Hasura.RQL.Types.Source
import Hasura.RQL.Types.Table
-- import Hasura.SQL.Backend
import Hasura.Session (RoleName)
@ -117,7 +118,7 @@ class (Monad m, MonadParse n) => MonadSchema n m | m -> n where
-- the same key.
-> m (p n b) -> m (p n b)
type MonadRole r m = (MonadReader r m, Has RoleName r)
type MonadRole r m = (MonadReader r m, Has RoleName r)
-- | Gets the current role the schema is being built for.
askRoleName


@ -71,24 +71,28 @@ buildGQLContext
)
buildGQLContext =
proc (queryType, sources, allRemoteSchemas, allActions, nonObjectCustomTypes) -> do
ServerConfigCtx functionPermsCtx remoteSchemaPermsCtx sqlGenCtx@(SQLGenCtx stringifyNum) _maintenanceMode <-
ServerConfigCtx functionPermsCtx remoteSchemaPermsCtx sqlGenCtx@(SQLGenCtx stringifyNum) _maintenanceMode _experimentalFeatures <-
bindA -< askServerConfigCtx
let remoteSchemasRoles = concatMap (Map.keys . _rscPermissions . fst . snd) $ Map.toList allRemoteSchemas
let allRoles = Set.insert adminRoleName $
allTableRoles
<> (allActionInfos ^.. folded.aiPermissions.to Map.keys.folded)
let nonTableRoles =
Set.insert adminRoleName $
(allActionInfos ^.. folded.aiPermissions.to Map.keys.folded)
<> Set.fromList (bool mempty remoteSchemasRoles $ remoteSchemaPermsCtx == RemoteSchemaPermsEnabled)
allActionInfos = Map.elems allActions
allTableRoles = Set.fromList $ getTableRoles =<< Map.elems sources
adminRemoteRelationshipQueryCtx =
allRemoteSchemas
<&> (\(remoteSchemaCtx, _metadataObj) ->
(_rscIntro remoteSchemaCtx, _rscParsed remoteSchemaCtx))
allRoles :: Set.HashSet RoleName
allRoles = nonTableRoles <> allTableRoles
-- The function permissions context doesn't actually matter because the
-- admin will have access to the function anyway
adminQueryContext = QueryContext stringifyNum queryType adminRemoteRelationshipQueryCtx FunctionPermissionsInferred
adminQueryContext =
QueryContext stringifyNum queryType
adminRemoteRelationshipQueryCtx FunctionPermissionsInferred
-- build the admin DB-only context so that we can check against name clashes with remotes
-- TODO: Is there a better way to check for conflicts without actually building the admin schema?
@ -119,14 +123,14 @@ buildGQLContext =
adminMutationRemotes = concatMap (concat . piMutation . snd . snd) remotes
roleContexts <- bindA -<
( Set.toMap allRoles & Map.traverseWithKey \roleName () ->
( Set.toMap allRoles & Map.traverseWithKey \role () ->
case queryType of
QueryHasura ->
buildRoleContext (sqlGenCtx, queryType, functionPermsCtx) sources allRemoteSchemas allActionInfos
nonObjectCustomTypes remotes roleName remoteSchemaPermsCtx
nonObjectCustomTypes remotes role remoteSchemaPermsCtx
QueryRelay ->
buildRelayRoleContext (sqlGenCtx, queryType, functionPermsCtx) sources allActionInfos
nonObjectCustomTypes roleName
nonObjectCustomTypes role
)
unauthenticated <- bindA -< unauthenticatedContext adminQueryRemotes adminMutationRemotes remoteSchemaPermsCtx
returnA -< (roleContexts, unauthenticated)
@ -140,11 +144,11 @@ buildRoleContext
-> RemoteSchemaPermsCtx
-> m (RoleContext GQLContext)
buildRoleContext (SQLGenCtx stringifyNum, queryType, functionPermsCtx) sources
allRemoteSchemas allActionInfos nonObjectCustomTypes remotes roleName remoteSchemaPermsCtx = do
allRemoteSchemas allActionInfos nonObjectCustomTypes remotes role remoteSchemaPermsCtx = do
roleBasedRemoteSchemas <-
if | roleName == adminRoleName -> pure remotes
| remoteSchemaPermsCtx == RemoteSchemaPermsEnabled -> buildRoleBasedRemoteSchemaParser roleName allRemoteSchemas
if | role == adminRoleName -> pure remotes
| remoteSchemaPermsCtx == RemoteSchemaPermsEnabled -> buildRoleBasedRemoteSchemaParser role allRemoteSchemas
-- when remote schema permissions are not enabled, then remote schemas
-- are a public entity which is accesible to all the roles
| otherwise -> pure remotes
@ -152,7 +156,8 @@ buildRoleContext (SQLGenCtx stringifyNum, queryType, functionPermsCtx) sources
let queryRemotes = getQueryRemotes $ snd . snd <$> roleBasedRemoteSchemas
mutationRemotes = getMutationRemotes $ snd . snd <$> roleBasedRemoteSchemas
remoteRelationshipQueryContext = Map.fromList roleBasedRemoteSchemas
roleQueryContext = QueryContext stringifyNum queryType remoteRelationshipQueryContext functionPermsCtx
roleQueryContext =
QueryContext stringifyNum queryType remoteRelationshipQueryContext functionPermsCtx
buildSource :: forall b. BackendSchema b => SourceInfo b ->
m ( [FieldParser (P.ParseT Identity) (QueryRootField UnpreparedValue)]
, [FieldParser (P.ParseT Identity) (MutationRootField UnpreparedValue)]
@ -163,7 +168,7 @@ buildRoleContext (SQLGenCtx stringifyNum, queryType, functionPermsCtx) sources
validTables = takeValidTables tables
xNodesAgg = nodesAggExtension sourceConfig
xRelay = relayExtension sourceConfig
runMonadSchema roleName roleQueryContext sources (BackendExtension @b xRelay xNodesAgg) $
runMonadSchema role roleQueryContext sources (BackendExtension @b xRelay xNodesAgg) $
(,,)
<$> buildQueryFields sourceName sourceConfig validTables validFunctions
<*> buildMutationFields Frontend sourceName sourceConfig validTables validFunctions
@ -178,7 +183,7 @@ buildRoleContext (SQLGenCtx stringifyNum, queryType, functionPermsCtx) sources
-- remotes, which are backend-agnostic.
-- In the long term, all backend-specific processing should be moved to `buildSource`, and this
-- block should be running in the schema for a `None` backend.
runMonadSchema roleName roleQueryContext sources (BackendExtension @'Postgres (Just ()) (Just ())) $ do
runMonadSchema role roleQueryContext sources (BackendExtension @'Postgres (Just ()) (Just ())) $ do
mutationParserFrontend <-
buildMutationParser mutationRemotes allActionInfos nonObjectCustomTypes mutationFrontendFields
@ -215,7 +220,7 @@ buildRelayRoleContext
-> RoleName
-> m (RoleContext GQLContext)
buildRelayRoleContext (SQLGenCtx stringifyNum, queryType, functionPermsCtx) sources
allActionInfos nonObjectCustomTypes roleName = do
allActionInfos nonObjectCustomTypes role = do
-- TODO: At the time of writing this, remote schema queries are not supported in relay.
-- When they are supported, we should get do what `buildRoleContext` does. Since, they
-- are not supported yet, we use `mempty` below for `RemoteRelationshipQueryContext`.
@ -230,7 +235,7 @@ buildRelayRoleContext (SQLGenCtx stringifyNum, queryType, functionPermsCtx) sour
validTables = takeValidTables tables
xNodesAgg = nodesAggExtension sourceConfig
xRelay = relayExtension sourceConfig
runMonadSchema roleName roleQueryContext sources (BackendExtension @b xRelay xNodesAgg) $
runMonadSchema role roleQueryContext sources (BackendExtension @b xRelay xNodesAgg) $
(,,)
<$> buildRelayQueryFields sourceName sourceConfig validTables validFunctions
<*> buildMutationFields Frontend sourceName sourceConfig validTables validFunctions
@ -244,7 +249,7 @@ buildRelayRoleContext (SQLGenCtx stringifyNum, queryType, functionPermsCtx) sour
-- remotes, which are backend-agnostic.
-- In the long term, all backend-specific processing should be moved to `buildSource`, and this
-- block should be running in the schema for a `None` backend.
runMonadSchema roleName roleQueryContext sources (BackendExtension @'Postgres (Just ()) (Just ())) $ do
runMonadSchema role roleQueryContext sources (BackendExtension @'Postgres (Just ()) (Just ())) $ do
-- Add node root field.
-- FIXME: for now this is PG-only. This isn't a problem yet since for now only PG supports relay.
-- To fix this, we'd need to first generalize `nodeField`.
@ -357,11 +362,11 @@ buildRoleBasedRemoteSchemaParser
=> RoleName
-> RemoteSchemaCache
-> m [(RemoteSchemaName, (IntrospectionResult, ParsedIntrospection))]
buildRoleBasedRemoteSchemaParser role remoteSchemaCache = do
buildRoleBasedRemoteSchemaParser roleName remoteSchemaCache = do
let remoteSchemaIntroInfos = map fst $ toList remoteSchemaCache
remoteSchemaPerms <-
for remoteSchemaIntroInfos $ \(RemoteSchemaCtx remoteSchemaName _ remoteSchemaInfo _ _ permissions) ->
for (Map.lookup role permissions) $ \introspectRes -> do
for (Map.lookup roleName permissions) $ \introspectRes -> do
(queryParsers, mutationParsers, subscriptionParsers) <-
P.runSchemaT @m @(P.ParseT Identity) $ buildRemoteParser introspectRes remoteSchemaInfo
let parsedIntrospection = ParsedIntrospection queryParsers mutationParsers subscriptionParsers
@ -488,7 +493,9 @@ buildMutationFields scenario sourceName sourceConfig tables (takeExposedAs FEAMu
if scenario == Frontend && ipiBackendOnly insertPerms
then Nothing
else Just insertPerms
lift $ buildTableInsertMutationFields sourceName sourceConfig tableName tableInfo tableGQLName insertPerms _permSel _permUpd
lift $
buildTableInsertMutationFields sourceName sourceConfig tableName tableInfo
tableGQLName insertPerms _permSel _permUpd
updates <- runMaybeT $ do
guard $ isMutable viIsUpdatable viewInfo
updatePerms <- hoistMaybe _permUpd
@ -506,7 +513,7 @@ buildMutationFields scenario sourceName sourceConfig tables (takeExposedAs FEAMu
guard $
-- when function permissions are inferred, we don't expose the
-- mutation functions for non-admin roles. See Note [Function Permissions]
roleName == adminRoleName || roleName `elem` _fiPermissions functionInfo
roleName == adminRoleName || roleName `elem` (_fiPermissions functionInfo)
lift $ buildFunctionMutationFields sourceName sourceConfig functionName functionInfo targetTable selectPerms
pure $ concat $ catMaybes $ tableMutations <> functionMutations


@ -53,8 +53,8 @@ actionExecute
-> ActionInfo
-> m (Maybe (FieldParser n (AnnActionExecution 'Postgres (UnpreparedValue 'Postgres))))
actionExecute nonObjectTypeMap actionInfo = runMaybeT do
roleName <- lift askRoleName
guard $ roleName == adminRoleName || roleName `Map.member` permissions
roleName <- askRoleName
guard $ (roleName == adminRoleName || roleName `Map.member` permissions)
let fieldName = unActionName actionName
description = G.Description <$> comment
inputArguments <- lift $ actionInputArguments nonObjectTypeMap $ _adArguments definition
@ -125,7 +125,7 @@ actionAsyncQuery
=> ActionInfo
-> m (Maybe (FieldParser n (AnnActionAsyncQuery 'Postgres (UnpreparedValue 'Postgres))))
actionAsyncQuery actionInfo = runMaybeT do
roleName <- lift askRoleName
roleName <- askRoleName
guard $ roleName == adminRoleName || roleName `Map.member` permissions
actionOutputParser <- lift $ actionOutputFields outputObject
createdAtFieldParser <-
@ -212,7 +212,7 @@ actionOutputFields annotatedObject = do
AOFTEnum def -> customEnumParser def
in bool P.nonNullableField id (G.isNullable gType) $
P.selection_ (unObjectFieldName name) description fieldParser
$> RQL.mkAnnColumnField pgColumnInfo Nothing
$> RQL.mkAnnColumnField pgColumnInfo Nothing Nothing
relationshipFieldParser
:: TypeRelationship (TableInfo 'Postgres) (ColumnInfo 'Postgres)
@ -247,7 +247,6 @@ actionOutputFields annotatedObject = do
, fmap (RQL.AFArrayRelation . RQL.ASAggregate . RQL.AnnRelationSelectG tableRelName columnMapping) <$> tableAggField
]
mkDefinitionList :: AnnotatedObjectType -> [(PGCol, ScalarType 'Postgres)]
mkDefinitionList AnnotatedObjectType{..} =
flip map (toList _otdFields) $ \ObjectFieldDefinition{..} ->


@ -19,7 +19,6 @@ import qualified Hasura.RQL.IR.Select as IR
import Hasura.GraphQL.Parser (UnpreparedValue)
import Hasura.RQL.Types
type SelectExp b = IR.AnnSimpleSelG b (UnpreparedValue b)
type AggSelectExp b = IR.AnnAggregateSelectG b (UnpreparedValue b)
type ConnectionSelectExp b = IR.ConnectionSelect b (UnpreparedValue b)


@ -34,8 +34,6 @@ import Hasura.GraphQL.Schema.Select
import Hasura.GraphQL.Schema.Table
import Hasura.RQL.Types
-- insert
-- | Construct a root field, normally called insert_tablename, that can be used to add several rows to a DB table
@ -113,6 +111,7 @@ tableFieldsInput
-> m (Parser 'Input n (IR.AnnInsObj b (UnpreparedValue b)))
tableFieldsInput table insertPerms = memoizeOn 'tableFieldsInput table do
tableGQLName <- getTableGQLName @b table
roleName <- askRoleName
allFields <- _tciFieldInfoMap . _tiCoreInfo <$> askTableInfo table
objectFields <- catMaybes <$> for (Map.elems allFields) \case
FIComputedField _ -> pure Nothing
@ -444,7 +443,7 @@ primaryKeysArguments
primaryKeysArguments table selectPerms = runMaybeT $ do
primaryKeys <- MaybeT $ _tciPrimaryKey . _tiCoreInfo <$> askTableInfo table
let columns = _pkColumns primaryKeys
guard $ all (\c -> pgiColumn c `Set.member` spiCols selectPerms) columns
guard $ all (\c -> pgiColumn c `Map.member` spiCols selectPerms) columns
lift $ fmap (BoolAnd . toList) . sequenceA <$> for columns \columnInfo -> do
field <- columnParser (pgiType columnInfo) (G.Nullability False)
pure $ BoolFld . AVCol columnInfo . pure . AEQ True . mkParameter <$>


@ -66,7 +66,6 @@ import Hasura.RQL.Types
import Hasura.Server.Utils (executeJSONPath)
import Hasura.Session
-- 1. top level selection functions
-- write a blurb?
@ -187,7 +186,7 @@ selectTableByPk
-> m (Maybe (FieldParser n (SelectExp b)))
selectTableByPk table fieldName description selectPermissions = runMaybeT do
primaryKeys <- MaybeT $ fmap _pkColumns . _tciPrimaryKey . _tiCoreInfo <$> askTableInfo table
guard $ all (\c -> pgiColumn c `Set.member` spiCols selectPermissions) primaryKeys
guard $ all (\c -> pgiColumn c `Map.member` spiCols selectPermissions) primaryKeys
lift $ memoizeOn 'selectTableByPk (table, fieldName) do
stringifyNum <- asks $ qcStringifyNum . getter
argsParser <- sequenceA <$> for primaryKeys \columnInfo -> do
@ -888,7 +887,7 @@ lookupRemoteField' fieldInfos (FieldCall fcName _) =
Just (P.Definition _ _ _ fieldInfo) -> pure fieldInfo
lookupRemoteField
:: (MonadSchema n m, MonadError QErr m, MonadRole r m)
:: (MonadSchema n m, MonadError QErr m)
=> [P.Definition P.FieldInfo]
-> NonEmpty FieldCall
-> m P.FieldInfo
@ -919,7 +918,7 @@ fieldSelection
-> FieldInfo b
-> SelPermInfo b
-> m [FieldParser n (AnnotatedField b)]
fieldSelection table maybePkeyColumns fieldInfo selectPermissions =
fieldSelection table maybePkeyColumns fieldInfo selectPermissions = do
case fieldInfo of
FIColumn columnInfo -> maybeToList <$> runMaybeT do
queryType <- asks $ qcQueryType . getter
@ -931,11 +930,34 @@ fieldSelection table maybePkeyColumns fieldInfo selectPermissions =
pure $ P.selection_ fieldName Nothing P.identifier
$> IR.AFNodeId xRelayInfo table pkeyColumns
| otherwise -> do
guard $ Set.member columnName (spiCols selectPermissions)
guard $ columnName `Map.member` (spiCols selectPermissions)
let caseBoolExp = join $ Map.lookup columnName (spiCols selectPermissions)
let caseBoolExpUnpreparedValue =
fmapAnnColumnCaseBoolExp partialSQLExpToUnpreparedValue <$> caseBoolExp
let pathArg = jsonPathArg $ pgiType columnInfo
field <- lift $ columnParser (pgiType columnInfo) (G.Nullability $ pgiIsNullable columnInfo)
-- In an inherited role, when a column is part of all the select
-- permissions which make up the inherited role, the nullability
-- of the field is determined by the nullability of the DB column;
-- otherwise it is explicitly marked as nullable, ignoring the column's
-- nullability. We do this because, for multiple roles, we execute an
-- SQL query like:
--
-- select
-- (case when (P1 or P2) then addr else null end) as addr,
-- (case when P2 then phone else null end) as phone
-- from employee
-- where (P1 or P2)
--
-- In the above example, P(n) is a predicate configured for a role
--
-- NOTE: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/FGALanguageICDE07.pdf
-- The above is the paper which talks about the idea of cell-level
-- authorization and multiple roles. The paper says that we should
-- allow the case analysis only on nullable columns.
nullability = pgiIsNullable columnInfo || isJust caseBoolExp
field <- lift $ columnParser (pgiType columnInfo) (G.Nullability nullability)
pure $ P.selection fieldName (pgiDescription columnInfo) pathArg field
<&> IR.mkAnnColumnField columnInfo
<&> IR.mkAnnColumnField columnInfo caseBoolExpUnpreparedValue
FIRelationship relationshipInfo ->
concat . maybeToList <$> relationshipField relationshipInfo
@ -1016,16 +1038,23 @@ computedFieldPG ComputedFieldInfo{..} selectPermissions = runMaybeT do
functionArgsParser <- lift $ computedFieldFunctionArgs _cfiFunction
case _cfiReturnType of
CFRScalar scalarReturnType -> do
guard $ _cfiName `Set.member` spiScalarComputedFields selectPermissions
let fieldArgsParser = do
caseBoolExpMaybe <-
onNothing
(Map.lookup _cfiName (spiScalarComputedFields selectPermissions))
(hoistMaybe Nothing)
let caseBoolExpUnpreparedValue =
fmapAnnColumnCaseBoolExp partialSQLExpToUnpreparedValue <$> caseBoolExpMaybe
fieldArgsParser = do
args <- functionArgsParser
colOp <- jsonPathArg $ ColumnScalar scalarReturnType
pure $ IR.AFComputedField _cfiXComputedFieldInfo $ IR.CFSScalar $ IR.ComputedFieldScalarSelect
{ IR._cfssFunction = _cffName _cfiFunction
, IR._cfssType = scalarReturnType
, IR._cfssColumnOp = colOp
, IR._cfssArguments = args
}
pure $ IR.AFComputedField _cfiXComputedFieldInfo
(IR.CFSScalar (IR.ComputedFieldScalarSelect
{ IR._cfssFunction = _cffName _cfiFunction
, IR._cfssType = scalarReturnType
, IR._cfssColumnOp = colOp
, IR._cfssArguments = args
})
caseBoolExpUnpreparedValue)
dummyParser <- lift $ columnParser @'Postgres (ColumnScalar scalarReturnType) (G.Nullability True)
pure $ P.selection fieldName (Just fieldDescription) fieldArgsParser dummyParser
CFRSetofTable tableName -> do
@ -1091,7 +1120,7 @@ remoteRelationshipFieldPG remoteFieldInfo = runMaybeT do
let hasuraFieldNames = Set.map (FieldName . G.unName . pgiName) hasuraFields
remoteRelationship = RemoteRelationship name source table hasuraFieldNames remoteSchemaName remoteFields
(newInpValDefns, remoteFieldParamMap) <-
if | isAdmin role ->
if | role == adminRoleName ->
-- we don't validate the remote relationship when the role is admin
-- because it was already validated when the remote relationship
-- was created


@ -7,8 +7,6 @@ module Hasura.GraphQL.Schema.Table
, tableUpdateColumnsEnum
, tablePermissions
, tableSelectPermissions
, tableUpdatePermissions
, tableDeletePermissions
, tableSelectFields
, tableColumns
, tableSelectColumns
@ -31,7 +29,6 @@ import Hasura.GraphQL.Schema.Backend
import Hasura.RQL.DML.Internal (getRolePermInfo)
import Hasura.RQL.Types
-- | Helper function to get the table GraphQL name. A table may have a
-- custom name configured with it. When the custom name exists, the GraphQL nodes
-- are generated according to the custom name. For example: Let's say,
@ -52,7 +49,6 @@ getTableGQLName tableName = do
`onNothing` tableGraphQLName @b tableName
`onLeft` throwError
-- | Table select columns enum
--
-- Parser for an enum type that matches the columns of the given
@ -128,18 +124,6 @@ tableSelectPermissions
-> m (Maybe (SelPermInfo b))
tableSelectPermissions table = (_permSel =<<) <$> tablePermissions table
tableUpdatePermissions
:: forall m n r b. (Backend b, MonadSchema n m, MonadTableInfo r m, MonadRole r m)
=> TableName b
-> m (Maybe (UpdPermInfo b))
tableUpdatePermissions table = (_permUpd =<<) <$> tablePermissions table
tableDeletePermissions
:: forall m n r b. (Backend b, MonadSchema n m, MonadTableInfo r m, MonadRole r m)
=> TableName b
-> m (Maybe (DelPermInfo b))
tableDeletePermissions table = (_permDel =<<) <$> tablePermissions table
tableSelectFields
:: forall m n r b. (Backend b, MonadSchema n m, MonadTableInfo r m, MonadRole r m)
=> TableName b
@ -150,13 +134,13 @@ tableSelectFields table permissions = do
filterM canBeSelected $ Map.elems tableFields
where
canBeSelected (FIColumn columnInfo) =
pure $ Set.member (pgiColumn columnInfo) (spiCols permissions)
pure $ Map.member (pgiColumn columnInfo) (spiCols permissions)
canBeSelected (FIRelationship relationshipInfo) =
isJust <$> tableSelectPermissions @_ @_ @_ @b (riRTable relationshipInfo)
canBeSelected (FIComputedField computedFieldInfo) =
case _cfiReturnType computedFieldInfo of
CFRScalar _ ->
pure $ Set.member (_cfiName computedFieldInfo) $ spiScalarComputedFields permissions
pure $ Map.member (_cfiName computedFieldInfo) $ spiScalarComputedFields permissions
CFRSetofTable tableName ->
isJust <$> tableSelectPermissions @_ @_ @_ @b tableName
canBeSelected (FIRemoteRelationship _) = pure True


@ -637,7 +637,8 @@ logWSEvent (L.Logger logger) wsConn wsEv = do
onConnInit
:: (HasVersion, MonadIO m, UserAuthentication (Tracing.TraceT m))
=> L.Logger L.Hasura -> H.Manager -> WSConn -> AuthMode -> Maybe ConnParams -> Tracing.TraceT m ()
=> L.Logger L.Hasura -> H.Manager -> WSConn -> AuthMode
-> Maybe ConnParams -> Tracing.TraceT m ()
onConnInit logger manager wsConn authMode connParamsM = do
-- TODO(from master): what should be the behaviour of connection_init message when a
-- connection is already initialized? Currently, we seem to be doing


@ -0,0 +1,54 @@
module Hasura.RQL.DDL.InheritedRoles
( runAddInheritedRole
, runDropInheritedRole
, dropInheritedRoleInMetadata
)
where
import Hasura.Prelude
import Data.Text.Extended
import qualified Data.HashMap.Strict.InsOrd as OMap
import Hasura.EncJSON
import Hasura.RQL.Types
import Hasura.Server.Types (ExperimentalFeature (..))
import Hasura.Session
runAddInheritedRole
:: ( MonadError QErr m
, CacheRWM m
, MetadataM m
, HasServerConfigCtx m
)
=> AddInheritedRole
-> m EncJSON
runAddInheritedRole addInheritedRoleQ@(AddInheritedRole inheritedRoleName roleSet) = do
experimentalFeatures <- _sccExperimentalFeatures <$> askServerConfigCtx
unless (EFInheritedRoles `elem` experimentalFeatures) $
throw400 ConstraintViolation $
"inherited role can only be added when inherited_roles is enabled" <>
" in the experimental features"
when (inheritedRoleName `elem` roleSet) $
throw400 InvalidParams "an inherited role name cannot be in the role combination"
buildSchemaCacheFor (MOInheritedRole inheritedRoleName)
$ MetadataModifier
$ metaInheritedRoles %~ OMap.insert inheritedRoleName addInheritedRoleQ
pure successMsg
dropInheritedRoleInMetadata :: RoleName -> MetadataModifier
dropInheritedRoleInMetadata roleName =
MetadataModifier $ metaInheritedRoles %~ OMap.delete roleName
runDropInheritedRole
:: (MonadError QErr m, CacheRWM m, MetadataM m)
=> DropInheritedRole
-> m EncJSON
runDropInheritedRole (DropInheritedRole roleName) = do
inheritedRolesMetadata <- _metaInheritedRoles <$> getMetadata
unless (roleName `OMap.member` inheritedRolesMetadata) $
throw400 NotExists $ roleName <<> " inherited role doesn't exist"
buildSchemaCacheFor (MOInheritedRole roleName) (dropInheritedRoleInMetadata roleName)
pure successMsg
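A simplified model of these two handlers, as a hypothetical sketch: `Data.Map` stands in for the `InsOrdHashMap` used by the real metadata, and all names below are illustrative only.

```haskell
import qualified Data.Map as Map

-- Hypothetical stand-ins: an inherited role maps its name to the set of
-- roles it combines; Data.Map replaces the real InsOrdHashMap.
type Role = String
type InheritedRoles = Map.Map Role [Role]

-- Mirrors runAddInheritedRole's validation: an inherited role may not
-- include itself in its own role combination.
addInheritedRole :: Role -> [Role] -> InheritedRoles -> Either String InheritedRoles
addInheritedRole name roleSet roles
  | name `elem` roleSet = Left "an inherited role name cannot be in the role combination"
  | otherwise           = Right (Map.insert name roleSet roles)

-- Mirrors runDropInheritedRole: deleting an absent role is an error.
dropInheritedRole :: Role -> InheritedRoles -> Either String InheritedRoles
dropInheritedRole name roles
  | name `Map.member` roles = Right (Map.delete name roles)
  | otherwise               = Left (name <> " inherited role doesn't exist")
```

The real handlers additionally gate on the `inherited_roles` experimental feature and rebuild the schema cache via `buildSchemaCacheFor`.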


@ -36,6 +36,7 @@ import Hasura.RQL.DDL.ComputedField
import Hasura.RQL.DDL.CustomTypes
import Hasura.RQL.DDL.Endpoint
import Hasura.RQL.DDL.EventTrigger
import Hasura.RQL.DDL.InheritedRoles
import Hasura.RQL.DDL.Permission
import Hasura.RQL.DDL.Relationship
import Hasura.RQL.DDL.RemoteRelationship
@ -46,9 +47,10 @@ import Hasura.RQL.DDL.Schema
import Hasura.EncJSON
import Hasura.RQL.DDL.Metadata.Types
import Hasura.RQL.Types
import Hasura.Server.Types (ExperimentalFeature (..))
runClearMetadata
:: (MonadIO m, CacheRWM m, MetadataM m, MonadError QErr m)
:: (MonadIO m, CacheRWM m, MetadataM m, MonadError QErr m, HasServerConfigCtx m)
=> ClearMetadata -> m EncJSON
runClearMetadata _ = do
metadata <- getMetadata
@ -84,6 +86,7 @@ runReplaceMetadata
, CacheRWM m
, MetadataM m
, MonadIO m
, HasServerConfigCtx m
)
=> ReplaceMetadata -> m EncJSON
runReplaceMetadata = \case
@ -95,6 +98,7 @@ runReplaceMetadataV1
, CacheRWM m
, MetadataM m
, MonadIO m
, HasServerConfigCtx m
)
=> ReplaceMetadataV1 -> m EncJSON
runReplaceMetadataV1 =
@ -106,9 +110,17 @@ runReplaceMetadataV2
, CacheRWM m
, MetadataM m
, MonadIO m
, HasServerConfigCtx m
)
=> ReplaceMetadataV2 -> m EncJSON
runReplaceMetadataV2 ReplaceMetadataV2{..} = do
experimentalFeatures <- _sccExperimentalFeatures <$> askServerConfigCtx
let inheritedRoles =
case _rmv2Metadata of
RMWithSources (Metadata { _metaInheritedRoles }) -> _metaInheritedRoles
RMWithoutSources _ -> mempty
when (inheritedRoles /= mempty && (EFInheritedRoles `notElem` experimentalFeatures)) $
throw400 ConstraintViolation $ "inherited_roles can only be added when it's enabled in the experimental features"
oldMetadata <- getMetadata
metadata <- case _rmv2Metadata of
RMWithSources m -> pure m
@ -123,7 +135,7 @@ runReplaceMetadataV2 ReplaceMetadataV2{..} = do
pure $ Metadata (OMap.singleton defaultSource newDefaultSourceMetadata)
_mnsRemoteSchemas _mnsQueryCollections _mnsAllowlist
_mnsCustomTypes _mnsActions _mnsCronTriggers (_metaRestEndpoints oldMetadata)
emptyApiLimit emptyMetricsConfig
emptyApiLimit emptyMetricsConfig mempty
putMetadata metadata
case _rmv2AllowInconsistentMetadata of
@ -157,18 +169,24 @@ runReplaceMetadataV2 ReplaceMetadataV2{..} = do
where
getPGTriggersMap = OMap.unions . map _tmEventTriggers . OMap.elems . _smTables
processExperimentalFeatures :: HasServerConfigCtx m => Metadata -> m Metadata
processExperimentalFeatures metadata = do
experimentalFeatures <- _sccExperimentalFeatures <$> askServerConfigCtx
let isInheritedRolesSet = EFInheritedRoles `elem` experimentalFeatures
-- export inherited roles only when inherited_roles is set in the experimental features
pure $ bool (metadata { _metaInheritedRoles = mempty }) metadata isInheritedRolesSet
runExportMetadata
:: forall m . ( QErrM m, MetadataM m)
:: forall m . ( QErrM m, MetadataM m, HasServerConfigCtx m)
=> ExportMetadata -> m EncJSON
runExportMetadata ExportMetadata{} =
AO.toEncJSON . metadataToOrdJSON <$> getMetadata
runExportMetadata ExportMetadata{} = do
AO.toEncJSON . metadataToOrdJSON <$> (getMetadata >>= processExperimentalFeatures)
runExportMetadataV2
:: forall m . ( QErrM m, MetadataM m)
:: forall m . ( QErrM m, MetadataM m, HasServerConfigCtx m)
=> MetadataResourceVersion -> ExportMetadata -> m EncJSON
runExportMetadataV2 currentResourceVersion ExportMetadata{} = do
exportMetadata <- getMetadata
exportMetadata <- processExperimentalFeatures =<< getMetadata
pure $ AO.toEncJSON $ AO.object
[ ("resource_version", AO.toOrdered currentResourceVersion)
, ("metadata", metadataToOrdJSON exportMetadata)
@ -261,6 +279,7 @@ purgeMetadataObj = \case
MOAction action -> dropActionInMetadata action -- Nothing
MOActionPermission action role -> dropActionPermissionInMetadata action role
MOCronTrigger ctName -> dropCronTriggerInMetadata ctName
MOInheritedRole role -> dropInheritedRoleInMetadata role
MOEndpoint epName -> dropEndpointInMetadata epName
runGetCatalogState


@ -49,6 +49,7 @@ genMetadata =
<*> arbitrary
<*> arbitrary
<*> arbitrary
<*> arbitrary
instance (Arbitrary k, Eq k, Hashable k, Arbitrary v) => Arbitrary (InsOrdHashMap k v) where
arbitrary = OM.fromList <$> arbitrary
@ -108,6 +109,9 @@ instance (Backend b) => Arbitrary (ArrRelUsingFKeyOn b) where
instance (Arbitrary a) => Arbitrary (PermDef a) where
arbitrary = genericArbitrary
instance Arbitrary AddInheritedRole where
arbitrary = genericArbitrary
instance (Backend b) => Arbitrary (ComputedFieldDefinition b) where
arbitrary = genericArbitrary
@ -448,7 +452,6 @@ sampleGraphQLValues = [ G.VInt 1
, G.VBoolean True
]
instance Arbitrary MetricsConfig where
arbitrary = genericArbitrary


@ -210,7 +210,7 @@ buildSelPermInfo
buildSelPermInfo source tn fieldInfoMap sp = withPathK "permission" $ do
let pgCols = convColSpec fieldInfoMap $ spColumns sp
(be, beDeps) <- withPathK "filter" $
(boolExp, boolExpDeps) <- withPathK "filter" $
procBoolExp source tn fieldInfoMap $ spFilter sp
-- check if the columns exist
@ -228,17 +228,20 @@ buildSelPermInfo source tn fieldInfoMap sp = withPathK "permission" $ do
<<> " are auto-derived from the permissions on its returning table "
<> returnTable <<> " and cannot be specified manually"
let deps = mkParentDep source tn : beDeps ++ map (mkColDep DRUntyped source tn) pgCols
let deps = mkParentDep source tn : boolExpDeps ++ map (mkColDep DRUntyped source tn) pgCols
++ map (mkComputedFieldDep DRUntyped source tn) scalarComputedFields
depHeaders = getDependentHeaders $ spFilter sp
mLimit = spLimit sp
withPathK "limit" $ mapM_ onlyPositiveInt mLimit
return ( SelPermInfo (HS.fromList pgCols) (HS.fromList computedFields)
be mLimit allowAgg depHeaders
, deps
)
let pgColsWithFilter = HM.fromList $ map (, Nothing) pgCols
scalarComputedFieldsWithFilter = HS.toMap (HS.fromList scalarComputedFields) $> Nothing
let selPermInfo =
SelPermInfo pgColsWithFilter scalarComputedFieldsWithFilter boolExp mLimit allowAgg depHeaders
return ( selPermInfo, deps )
where
allowAgg = spAllowAggregations sp
computedFields = spComputedFields sp
@ -352,26 +355,26 @@ instance (Backend b) => FromJSON (SetPermComment b) where
runSetPermComment
:: (QErrM m, CacheRWM m, MetadataM m, BackendMetadata b)
=> SetPermComment b -> m EncJSON
runSetPermComment (SetPermComment source table role permType comment) = do
runSetPermComment (SetPermComment source table roleName permType comment) = do
tableInfo <- askTabInfo source table
-- assert permission exists and return appropriate permission modifier
permModifier <- case permType of
PTInsert -> do
assertPermDefined role PAInsert tableInfo
pure $ tmInsertPermissions.ix role.pdComment .~ comment
assertPermDefined roleName PAInsert tableInfo
pure $ tmInsertPermissions.ix roleName.pdComment .~ comment
PTSelect -> do
assertPermDefined role PASelect tableInfo
pure $ tmSelectPermissions.ix role.pdComment .~ comment
assertPermDefined roleName PASelect tableInfo
pure $ tmSelectPermissions.ix roleName.pdComment .~ comment
PTUpdate -> do
assertPermDefined role PAUpdate tableInfo
pure $ tmUpdatePermissions.ix role.pdComment .~ comment
assertPermDefined roleName PAUpdate tableInfo
pure $ tmUpdatePermissions.ix roleName.pdComment .~ comment
PTDelete -> do
assertPermDefined role PADelete tableInfo
pure $ tmDeletePermissions.ix role.pdComment .~ comment
assertPermDefined roleName PADelete tableInfo
pure $ tmDeletePermissions.ix roleName.pdComment .~ comment
let metadataObject = MOSourceObjId source $
SMOTableObj table $ MTOPerm role permType
SMOTableObj table $ MTOPerm roleName permType
buildSchemaCacheFor metadataObject
$ MetadataModifier
$ tableMetadataSetter source table %~ permModifier


@ -33,15 +33,15 @@ assertPermDefined
-> PermAccessor backend a
-> TableInfo backend
-> m ()
assertPermDefined roleName pa tableInfo =
assertPermDefined role pa tableInfo =
unless (permissionIsDefined rpi pa) $ throw400 PermissionDenied $ mconcat
[ "'" <> tshow (permAccToType pa) <> "'"
, " permission on " <>> _tciName (_tiCoreInfo tableInfo)
, " for role " <>> roleName
, " for role " <>> role
, " does not exist"
]
where
rpi = M.lookup roleName $ _tiRolePermInfoMap tableInfo
rpi = M.lookup role $ _tiRolePermInfoMap tableInfo
askPermInfo
:: (Backend backend, MonadError QErr m)


@ -264,8 +264,9 @@ buildSchemaCacheRule env = proc (metadata, invalidationKeys) -> do
, DBFunctionsMetadata b
, RemoteSchemaMap
, Inc.Dependency InvalidationKeys
, InheritedRoles
) `arr` BackendSourceInfo
buildSource = proc (sourceMetadata, sourceConfig, dbTables, dbFunctions, remoteSchemaMap, invalidationKeys) -> do
buildSource = proc (sourceMetadata, sourceConfig, dbTables, dbFunctions, remoteSchemaMap, invalidationKeys, inheritedRoles) -> do
let SourceMetadata source tables functions _ = sourceMetadata
tablesMetadata = OMap.elems tables
(tableInputs, nonColumnInputs, permissions) = unzip3 $ map mkTableInputs tablesMetadata
@ -292,7 +293,9 @@ buildSchemaCacheRule env = proc (metadata, invalidationKeys) -> do
tableCache <-
(| Inc.keyed (\_ ((tableCoreInfo, permissionInputs), (_, eventTriggerConfs)) -> do
let tableFields = _tciFieldInfoMap tableCoreInfo
permissionInfos <- buildTablePermissions -< (source, tableCoreInfosDep, tableFields, permissionInputs)
permissionInfos <-
buildTablePermissions
-< (source, tableCoreInfosDep, tableFields, permissionInputs, inheritedRoles)
eventTriggerInfos <- buildTableEventTriggers -< (source, sourceConfig, tableCoreInfo, eventTriggerConfs, metadataInvalidationKey)
returnA -< TableInfo tableCoreInfo permissionInfos eventTriggerInfos
)
@ -330,15 +333,16 @@ buildSchemaCacheRule env = proc (metadata, invalidationKeys) -> do
=> ( Inc.Dependency InvalidationKeys
, HashMap RemoteSchemaName RemoteSchemaCtx
, SourceMetadata b
, InheritedRoles
) `arr` Maybe (BackendSourceInfo, DMap.DMap BackendTag ScalarSet)
buildSourceOutput = proc (invalidationKeys, remoteSchemaCtxMap, sourceMetadata) -> do
buildSourceOutput = proc (invalidationKeys, remoteSchemaCtxMap, sourceMetadata, inheritedRoles) -> do
let sourceInvalidationsKeys = Inc.selectD #_ikSources invalidationKeys
maybeResolvedSource <- resolveSourceIfNeeded -< (sourceInvalidationsKeys, sourceMetadata)
case maybeResolvedSource of
Nothing -> returnA -< Nothing
Just (ResolvedSource sourceConfig tablesMeta functionsMeta scalars) -> do
so <- buildSource -< ( sourceMetadata, sourceConfig, tablesMeta, functionsMeta
, remoteSchemaCtxMap, invalidationKeys
, remoteSchemaCtxMap, invalidationKeys, inheritedRoles
)
returnA -< Just (so, DMap.singleton (backendTag @b) $ ScalarSet scalars)
@ -351,7 +355,16 @@ buildSchemaCacheRule env = proc (metadata, invalidationKeys) -> do
=> (Metadata, Inc.Dependency InvalidationKeys) `arr` BuildOutputs
buildAndCollectInfo = proc (metadata, invalidationKeys) -> do
let Metadata sources remoteSchemas collections allowlists
customTypes actions cronTriggers endpoints apiLimits metricsConfig = metadata
customTypes actions cronTriggers endpoints apiLimits metricsConfig inheritedRoles = metadata
actionRoles = map _apmRole . _amPermissions =<< OMap.elems actions
remoteSchemaRoles = map _rspmRole . _rsmPermissions =<< OMap.elems remoteSchemas
sourceRoles =
HS.fromList $ concat $
OMap.elems sources >>= \(BackendSourceMetadata (SourceMetadata _ tables _functions _) ) -> do
table <- OMap.elems tables
pure ( OMap.keys (_tmInsertPermissions table) <> OMap.keys (_tmSelectPermissions table)
<> OMap.keys (_tmUpdatePermissions table) <> OMap.keys (_tmDeletePermissions table))
remoteSchemaPermissions =
let remoteSchemaPermsList = OMap.toList $ _rsmPermissions <$> remoteSchemas
in concat $ flip map remoteSchemaPermsList $
@ -359,6 +372,22 @@ buildSchemaCacheRule env = proc (metadata, invalidationKeys) -> do
flip map remoteSchemaPerms $ \(RemoteSchemaPermissionMetadata role defn comment) ->
AddRemoteSchemaPermissions remoteSchemaName role defn comment
)
nonInheritedRoles = sourceRoles <> HS.fromList (actionRoles <> remoteSchemaRoles)
let commonInheritedRoles = HS.intersection (HS.fromList (OMap.keys inheritedRoles)) nonInheritedRoles
bindA -< do
unless (HS.null commonInheritedRoles) $ do
throw400 AlreadyExists $
"role " <> commaSeparated (map toTxt $ toList commonInheritedRoles) <> " already exists"
for_ (toList inheritedRoles) $ \(AddInheritedRole _ roleSet) ->
for_ roleSet $ \role -> do
unless (role `elem` nonInheritedRoles) $
throw400 NotFound $ role <<> " not found. An inherited role can only be created out of existing roles"
when (role `OMap.member` inheritedRoles) $
throw400 ConstraintError $ role <<> " is an inherited role. An inherited role can only be created out of non-inherited roles"
-- remote schemas
let remoteSchemaInvalidationKeys = Inc.selectD #_ikRemoteSchemas invalidationKeys
@ -382,8 +411,8 @@ buildSchemaCacheRule env = proc (metadata, invalidationKeys) -> do
sourcesOutput <-
(| Inc.keyed (\_ (BackendSourceMetadata (sourceMetadata :: SourceMetadata b)) ->
case backendTag @b of
PostgresTag -> buildSourceOutput @arr @m -< (invalidationKeys, remoteSchemaCtxMap, sourceMetadata :: SourceMetadata 'Postgres)
MSSQLTag -> buildSourceOutput @arr @m -< (invalidationKeys, remoteSchemaCtxMap, sourceMetadata :: SourceMetadata 'MSSQL)
PostgresTag -> buildSourceOutput @arr @m -< (invalidationKeys, remoteSchemaCtxMap, sourceMetadata :: SourceMetadata 'Postgres, inheritedRoles)
MSSQLTag -> buildSourceOutput @arr @m -< (invalidationKeys, remoteSchemaCtxMap, sourceMetadata :: SourceMetadata 'MSSQL, inheritedRoles)
)
|) (M.fromList $ OMap.toList sources)
>-> (\infos -> M.catMaybes infos >- returnA)
@ -424,6 +453,8 @@ buildSchemaCacheRule env = proc (metadata, invalidationKeys) -> do
cronTriggersMap <- buildCronTriggers -< ((), OMap.elems cronTriggers)
let inheritedRolesCache = OMap.toHashMap $ fmap _adrRoleSet inheritedRoles
returnA -< BuildOutputs
{ _boSources = M.map fst sourcesOutput
, _boActions = actionCache
@ -431,6 +462,7 @@ buildSchemaCacheRule env = proc (metadata, invalidationKeys) -> do
, _boAllowlist = allowList
, _boCustomTypes = annotatedCustomTypes
, _boCronTriggers = cronTriggersMap
, _boInheritedRoles = inheritedRolesCache
, _boEndpoints = resolvedEndpoints
, _boApiLimits = apiLimits
, _boMetricsConfig = metricsConfig


@ -101,18 +101,19 @@ mkTableInputs TableMetadata{..} =
-- 'MonadWriter' side channel.
data BuildOutputs
= BuildOutputs
{ _boSources :: SourceCache
, _boActions :: !ActionCache
, _boRemoteSchemas :: !(HashMap RemoteSchemaName (RemoteSchemaCtx, MetadataObject))
{ _boSources :: SourceCache
, _boActions :: !ActionCache
, _boRemoteSchemas :: !(HashMap RemoteSchemaName (RemoteSchemaCtx, MetadataObject))
-- ^ We preserve the 'MetadataObject' from the original catalog metadata in the output so we can
-- reuse it later if we need to mark the remote schema inconsistent during GraphQL schema
-- generation (because of field conflicts).
, _boAllowlist :: !(HS.HashSet GQLQuery)
, _boCustomTypes :: !AnnotatedCustomTypes
, _boCronTriggers :: !(M.HashMap TriggerName CronTriggerInfo)
, _boEndpoints :: !(M.HashMap EndpointName (EndpointMetadata GQLQueryWithText))
, _boApiLimits :: !ApiLimit
, _boMetricsConfig :: !MetricsConfig
, _boAllowlist :: !(HS.HashSet GQLQuery)
, _boCustomTypes :: !AnnotatedCustomTypes
, _boCronTriggers :: !(M.HashMap TriggerName CronTriggerInfo)
, _boEndpoints :: !(M.HashMap EndpointName (EndpointMetadata GQLQueryWithText))
, _boApiLimits :: !ApiLimit
, _boMetricsConfig :: !MetricsConfig
, _boInheritedRoles :: !InheritedRolesCache
}
$(makeLenses ''BuildOutputs)
@ -145,7 +146,6 @@ instance HasServerConfigCtx CacheBuild where
instance MonadResolveSource CacheBuild where
getSourceResolver = asks _cbpSourceResolver
runCacheBuild
:: ( MonadIO m
, MonadError QErr m


@ -168,6 +168,7 @@ deleteMetadataObject = \case
MOAction name -> boActions %~ M.delete name
MOEndpoint name -> boEndpoints %~ M.delete name
MOActionPermission name role -> boActions.ix name.aiPermissions %~ M.delete role
MOInheritedRole name -> boInheritedRoles %~ M.delete name
where
deleteObjId :: (Backend b) => SourceMetadataObjId b -> BackendSourceInfo -> BackendSourceInfo
deleteObjId sourceObjId sourceInfo = maybe sourceInfo (BackendSourceInfo . deleteObjFn sourceObjId) $ unsafeSourceInfo sourceInfo


@ -1,4 +1,5 @@
{-# LANGUAGE Arrows #-}
{-# LANGUAGE ViewPatterns #-}
module Hasura.RQL.DDL.Schema.Cache.Permission
( buildTablePermissions
@ -8,9 +9,14 @@ module Hasura.RQL.DDL.Schema.Cache.Permission
import Hasura.Prelude
import qualified Data.HashMap.Strict as M
import qualified Data.HashMap.Strict.Extended as M
import qualified Data.HashMap.Strict.InsOrd as OMap
import qualified Data.HashSet as Set
import qualified Data.List.NonEmpty as NE
import qualified Data.Sequence as Seq
import Control.Arrow.Extended
import Data.Aeson
import Data.Text.Extended
@ -21,28 +27,163 @@ import Hasura.RQL.DDL.Permission
import Hasura.RQL.DDL.Permission.Internal
import Hasura.RQL.DDL.Schema.Cache.Common
import Hasura.RQL.Types
import Hasura.Server.Types
import Hasura.Session
{- Note: [Inherited roles architecture for postgres read queries]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Schema generation
--------------------
Schema generation for inherited roles is similar to the schema
generation for non-inherited roles. In the case of inherited roles,
we combine the `SelPermInfo`s (see `combineSelectPermInfos`) of the
inherited role's role set into a new `SelPermInfo`, which becomes
the select permission of the inherited role.
2. SQL generation
-----------------
See note [SQL generation for inherited roles]
3. Introspection
----------------
The columns accessible to an inherited role are explicitly set to
nullable irrespective of the nullability of the DB column to accommodate
cell value nullification.
-}
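As a rough illustration of point 2 above, the shape of the generated SQL can be sketched with a toy renderer. This is a hypothetical sketch, not part of the codebase: each cell is wrapped in a CASE over the filters of the roles that grant that column, and the row filter is the OR of all the roles' filters.

```haskell
import Data.List (intercalate, nub)

-- Hypothetical sketch of the cell-nullification SQL shape. 'roleFilters'
-- holds one predicate per role; 'cols' pairs a column with the predicates
-- of the roles that grant access to it.
renderInheritedSelect :: String -> [String] -> [(String, [String])] -> String
renderInheritedSelect table roleFilters cols =
     "select "
  <> intercalate ", " (map renderCol cols)
  <> " from " <> table
  <> " where " <> orAll roleFilters
  where
    -- OR together the distinct predicates
    orAll ps = "(" <> intercalate " or " (nub ps) <> ")"
    -- nullify the cell when none of the granting predicates hold
    renderCol (col, preds) =
      "(case when " <> orAll preds <> " then " <> col
        <> " else null end) as " <> col
```

For the `employee` example from the PR description, `renderInheritedSelect "employee" ["P1", "P2"] [("address", ["P1", "P2"]), ("phone", ["P2"])]` reproduces the CASE-per-cell query shown there.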
-- | This type is only used in `combineSelectPermInfos` for
-- combining select permissions efficiently
data CombinedSelPermInfo (b :: BackendType)
= CombinedSelPermInfo
{ cspiCols :: ![(M.HashMap (Column b) (Maybe (AnnColumnCaseBoolExpPartialSQL b)))]
, cspiScalarComputedFields :: ![(M.HashMap ComputedFieldName (Maybe (AnnColumnCaseBoolExpPartialSQL b)))]
, cspiFilter :: ![(AnnBoolExpPartialSQL b)]
, cspiLimit :: !(Maybe Int)
, cspiAllowAgg :: !Bool
, cspiRequiredHeaders :: !(Set.HashSet Text)
}
-- | combineSelectPermInfos combines multiple `SelPermInfo`s
-- into one `SelPermInfo`. Two `SelPermInfo`s will
-- be combined in the following manner:
--
-- 1. Columns - A `SelPermInfo` contains a hashset of the columns that are
-- accessible to the role. To combine two `SelPermInfo`s, every column in the
-- hashset is coupled with the boolean expression (filter) of its `SelPermInfo`,
-- producing a hash map per `SelPermInfo`. These hashmaps
-- are then unioned and their values are `OR`ed. When a column
-- is accessible to all the select permissions, the nullability of the column
-- is inferred from the DB column; otherwise the column is explicitly marked as
-- nullable to accommodate cell-value nullification.
-- 2. Scalar computed fields - Scalar computed fields work the same as Columns (#1)
-- 3. Filter / Boolean expression - The filters are combined using a `BoolOr`
-- 4. Limit - Limits are combined by taking the minimum of the two limits
-- 5. Allow Aggregation - Aggregation is allowed, if any of the permissions allow it.
-- 6. Request Headers - Request headers are concatenated
--
-- To maintain backwards compatibility, we handle the case of a single select
-- permission differently, i.e. we avoid emitting case statements that would
-- always evaluate to true for the columns.
--
--
combineSelectPermInfos
:: forall b
. (Backend b)
=> NE.NonEmpty (SelPermInfo b)
-> SelPermInfo b
combineSelectPermInfos (headSelPermInfo NE.:| []) = headSelPermInfo
combineSelectPermInfos selPermInfos@(headSelPermInfo NE.:| restSelPermInfos) =
let CombinedSelPermInfo {..}
= foldr combine (modifySingleSelectPerm headSelPermInfo) restSelPermInfos
mergeColumnsWithBoolExp xs
| length selPermInfos == length xs = Nothing
| otherwise = foldr combineCaseBoolExps Nothing xs
in SelPermInfo (mergeColumnsWithBoolExp <$> M.unionsAll cspiCols)
(mergeColumnsWithBoolExp <$> M.unionsAll cspiScalarComputedFields)
(BoolOr cspiFilter)
cspiLimit
cspiAllowAgg
(toList cspiRequiredHeaders)
where
modifySingleSelectPerm :: SelPermInfo b -> CombinedSelPermInfo b
modifySingleSelectPerm SelPermInfo {..} =
let columnCaseBoolExp = fmap AnnColumnCaseBoolExpField spiFilter
colsWithColCaseBoolExp = spiCols $> Just columnCaseBoolExp
scalarCompFieldsWithColCaseBoolExp = spiScalarComputedFields $> Just columnCaseBoolExp
in
CombinedSelPermInfo [colsWithColCaseBoolExp]
[scalarCompFieldsWithColCaseBoolExp]
[spiFilter]
spiLimit
spiAllowAgg
(Set.fromList spiRequiredHeaders)
combine :: SelPermInfo b -> CombinedSelPermInfo b -> CombinedSelPermInfo b
combine (modifySingleSelectPerm -> lSelPermInfo) accSelPermInfo =
CombinedSelPermInfo
{ cspiCols = (cspiCols lSelPermInfo) <> (cspiCols accSelPermInfo)
, cspiScalarComputedFields =
(cspiScalarComputedFields lSelPermInfo) <> (cspiScalarComputedFields accSelPermInfo)
, cspiFilter = (cspiFilter lSelPermInfo) <> (cspiFilter accSelPermInfo)
, cspiLimit =
case (cspiLimit lSelPermInfo, cspiLimit accSelPermInfo) of
(Nothing, Nothing) -> Nothing
(Just l, Nothing) -> Just l
(Nothing, Just r) -> Just r
(Just l , Just r) -> Just $ min l r
, cspiAllowAgg = cspiAllowAgg lSelPermInfo || cspiAllowAgg accSelPermInfo
, cspiRequiredHeaders = (cspiRequiredHeaders lSelPermInfo) <> (cspiRequiredHeaders accSelPermInfo)
}
combineCaseBoolExps l r =
case (l, r) of
(Nothing, Nothing) -> Nothing
(Just caseBoolExp, Nothing) -> Just caseBoolExp
(Nothing, Just caseBoolExp) -> Just caseBoolExp
(Just caseBoolExpL, Just caseBoolExpR) -> Just $ BoolOr [caseBoolExpL, caseBoolExpR]
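The per-field merging rules described in the comment above can be sketched in isolation. A minimal, hedged illustration (the `Perm` record and its field names are invented for this example and are not part of the PR; the real code operates on `SelPermInfo`):

```haskell
import Control.Applicative ((<|>))

-- Invented, simplified stand-in for SelPermInfo: just a row limit and an
-- aggregation flag.
data Perm = Perm { permLimit :: Maybe Int, permAllowAgg :: Bool }

combinePerm :: Perm -> Perm -> Perm
combinePerm l r = Perm
  { permLimit = case (permLimit l, permLimit r) of
      (Just a, Just b) -> Just (min a b) -- both limited: take the smaller limit
      (a, b)           -> a <|> b        -- otherwise keep whichever limit exists
  , permAllowAgg = permAllowAgg l || permAllowAgg r -- any role allowing aggregation suffices
  }
```

Filters are combined separately with `BoolOr`, as `combineSelectPermInfos` above does.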
buildTablePermissions
:: ( ArrowChoice arr, Inc.ArrowDistribute arr, Inc.ArrowCache m arr
, MonadError QErr m, ArrowWriter (Seq CollectedInfo) arr
, HasServerConfigCtx m
, BackendMetadata b)
=> ( SourceName
, Inc.Dependency (TableCoreCache b)
, FieldInfoMap (FieldInfo b)
, TablePermissionInputs b
, InheritedRoles
) `arr` (RolePermInfoMap b)
buildTablePermissions = Inc.cache proc (source, tableCache, tableFields, tablePermissions, inheritedRoles) -> do
let alignedPermissions = alignPermissions tablePermissions
table = _tpiTable tablePermissions
experimentalFeatures <- bindA -< _sccExperimentalFeatures <$> askServerConfigCtx
nonInheritedRolePermissions <-
(| Inc.keyed (\_ (insertPermission, selectPermission, updatePermission, deletePermission) -> do
insert <- buildPermission -< (tableCache, source, table, tableFields, listToMaybe insertPermission)
select <- buildPermission -< (tableCache, source, table, tableFields, listToMaybe selectPermission)
update <- buildPermission -< (tableCache, source, table, tableFields, listToMaybe updatePermission)
delete <- buildPermission -< (tableCache, source, table, tableFields, listToMaybe deletePermission)
returnA -< RolePermInfo insert select update delete)
|) alignedPermissions
-- build permissions for inherited roles only when the inherited roles feature is enabled
let inheritedRolesMap =
bool mempty (OMap.toHashMap inheritedRoles) $ EFInheritedRoles `elem` experimentalFeatures
-- see [Inherited roles architecture for postgres read queries]
inheritedRolePermissions <-
(| Inc.keyed (\_ (AddInheritedRole _ roleSet) -> do
let singleRoleSelectPerms =
map ((_permSel =<<) . (`M.lookup` nonInheritedRolePermissions)) $
toList roleSet
nonEmptySelPerms = NE.nonEmpty =<< sequenceA singleRoleSelectPerms
combinedSelPermInfo = combineSelectPermInfos <$> nonEmptySelPerms
returnA -< RolePermInfo Nothing combinedSelPermInfo Nothing Nothing)
|) inheritedRolesMap
returnA -< nonInheritedRolePermissions <> inheritedRolePermissions
where
mkMap :: [PermDef a] -> HashMap RoleName (PermDef a)
mkMap = mapFromL _pdRole
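One subtlety worth calling out: because of the `sequenceA` in `buildTablePermissions`, an inherited role receives a combined select permission only when *every* role in its role set has one. A hedged sketch of that `Maybe` semantics (the function name is illustrative):

```haskell
import Data.List.NonEmpty (NonEmpty, nonEmpty)

-- sequenceA collapses the whole list to Nothing as soon as any member
-- role lacks a select permission; nonEmpty guards against an empty set.
inheritedSelPerms :: [Maybe perm] -> Maybe (NonEmpty perm)
inheritedSelPerms perms = nonEmpty =<< sequenceA perms
```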

View File

@ -342,7 +342,8 @@ fetchMetadataFromHdbTables = liftTx do
actions <- oMapFromL _amName <$> fetchActions
MetadataNoSources fullTableMetaMap functions remoteSchemas collections
allowlist customTypes actions <$> fetchCronTriggers
where
modMetaMap l f xs = do
st <- get

View File

@ -184,7 +184,7 @@ updateRelDefs source qt rn renameTable = do
where
getUpdQT origQT = bool origQT newQT $ oldQT == origQT
-- | update fields in permissions
updatePermFlds
:: forall b m
. ( MonadError QErr m

View File

@ -148,7 +148,7 @@ convInsertQuery objsParser sessVarBldr prepFn (InsertQuery tableName _ val oC mR
-- Check if the role has insert permissions
insPerm <- askInsPermInfo tableInfo
updPerm <- askPermInfo' PAUpdate tableInfo
-- Check if all dependent headers are present
validateHeaders $ ipiRequiredHeaders insPerm
@ -181,12 +181,11 @@ convInsertQuery objsParser sessVarBldr prepFn (InsertQuery tableName _ val oC mR
updCheck <- traverse (convAnnBoolExpPartialSQL sessVarFromCurrentSetting) (upiCheck =<< updPerm)
conflictClause <- withPathK "on_conflict" $ forM oC $ \c -> do
role <- askCurRole
unless (isTabUpdatable role tableInfo) $ throw400 PermissionDenied $
"upsert is not allowed for role " <> role
<<> " since update permissions are not defined"
buildConflictClause sessVarBldr tableInfo inpCols c
return $ InsertQueryP1 tableName insCols sqlExps
conflictClause (insCheck, updCheck) mutOutput allCols
where

View File

@ -26,7 +26,6 @@ import Hasura.RQL.Types
import Hasura.SQL.Types
import Hasura.Session
newtype DMLP1T m a
= DMLP1T { unDMLP1T :: StateT (DS.Seq Q.PrepArg) m a }
deriving ( Functor, Applicative, Monad, MonadTrans
@ -43,13 +42,15 @@ mkAdminRolePermInfo ti =
where
fields = _tciFieldInfoMap ti
pgCols = map pgiColumn $ getCols fields
pgColsWithFilter = M.fromList $ map (, Nothing) pgCols
scalarComputedFields =
HS.fromList $ map _cfiName $ onlyScalarComputedFields $ getComputedFieldInfos fields
scalarComputedFields' = HS.toMap scalarComputedFields $> Nothing
tn = _tciName ti
i = InsPermInfo (HS.fromList pgCols) annBoolExpTrue M.empty False []
s = SelPermInfo pgColsWithFilter scalarComputedFields' annBoolExpTrue Nothing True []
u = UpdPermInfo (HS.fromList pgCols) tn annBoolExpTrue Nothing M.empty []
d = DelPermInfo tn annBoolExpTrue []
@ -59,21 +60,21 @@ askPermInfo'
-> TableInfo b
-> m (Maybe c)
askPermInfo' pa tableInfo = do
role <- askCurRole
return $ getPermInfoMaybe role pa tableInfo
getPermInfoMaybe
:: (Backend b) => RoleName -> PermAccessor b c -> TableInfo b -> Maybe c
getPermInfoMaybe role pa tableInfo =
getRolePermInfo role tableInfo >>= (^. permAccToLens pa)
getRolePermInfo
:: Backend b => RoleName -> TableInfo b -> Maybe (RolePermInfo b)
getRolePermInfo role tableInfo
| role == adminRoleName =
Just $ mkAdminRolePermInfo (_tiCoreInfo tableInfo)
| otherwise =
M.lookup role (_tiRolePermInfoMap tableInfo)
askPermInfo
:: (UserInfoM m, QErrM m, Backend b)
@ -124,7 +125,7 @@ verifyAsrns preds xs = indexedForM_ xs $ \a -> mapM_ ($ a) preds
checkSelOnCol :: forall m b. (UserInfoM m, QErrM m, Backend b)
=> SelPermInfo b -> Column b -> m ()
checkSelOnCol selPermInfo =
checkPermOnCol PTSelect (HS.fromList $ M.keys $ spiCols selPermInfo)
checkPermOnCol
:: (UserInfoM m, QErrM m, Backend b)
@ -133,14 +134,14 @@ checkPermOnCol
-> Column b
-> m ()
checkPermOnCol pt allowedCols col = do
role <- askCurRole
unless (HS.member col allowedCols) $
throw400 PermissionDenied $ permErrMsg role
where
permErrMsg role
| role == adminRoleName = "no such column exists : " <>> col
| otherwise = mconcat
[ "role " <>> role
, " does not have permission to "
, permTypeToCode pt <> " column " <>> col
]
@ -226,6 +227,14 @@ convAnnBoolExpPartialSQL
convAnnBoolExpPartialSQL f =
traverseAnnBoolExp (convPartialSQLExp f)
convAnnColumnCaseBoolExpPartialSQL
:: (Applicative f)
=> SessVarBldr backend f
-> AnnColumnCaseBoolExpPartialSQL backend
-> f (AnnColumnCaseBoolExp backend (SQLExpression backend))
convAnnColumnCaseBoolExpPartialSQL f =
traverseAnnColumnCaseBoolExp (convPartialSQLExp f)
convPartialSQLExp
:: (Applicative f)
=> SessVarBldr backend f

View File

@ -1,13 +1,11 @@
module Hasura.RQL.DML.Select
( runSelect
)
where
import Hasura.Prelude
import qualified Data.HashSet as HS
import qualified Data.HashMap.Strict as HM
import qualified Data.List.NonEmpty as NE
import qualified Data.Sequence as DS
import qualified Database.PG.Query as Q
@ -31,7 +29,6 @@ import Hasura.RQL.Types.Run
import Hasura.SQL.Types
import Hasura.Session
type SelectQExt b = SelectG (ExtCol b) (BoolExp b) Int
-- Columns in RQL
@ -98,7 +95,7 @@ convWildcard fieldInfoMap selPermInfo wildcard =
pgCols = map pgiColumn $ getCols fieldInfoMap
relColInfos = getRels fieldInfoMap
simpleCols = map ECSimple $ filter (`HM.member` cols) pgCols
mkRelCol wc relInfo = do
let relName = riName relInfo
@ -106,8 +103,8 @@ convWildcard fieldInfoMap selPermInfo wildcard =
relTabInfo <- fetchRelTabInfo relTab
mRelSelPerm <- askPermInfo' PASelect relTabInfo
forM mRelSelPerm $ \relSelPermInfo -> do
rExtCols <- convWildcard (_tciFieldInfoMap $ _tiCoreInfo relTabInfo) relSelPermInfo wc
return $ ECRel relName Nothing $
SelectG rExtCols Nothing Nothing Nothing Nothing
@ -118,14 +115,14 @@ resolveStar :: (UserInfoM m, QErrM m, TableInfoRM 'Postgres m)
-> SelPermInfo 'Postgres
-> SelectQ 'Postgres
-> m (SelectQExt 'Postgres)
resolveStar fim selPermInfo (SelectG selCols mWh mOb mLt mOf) = do
procOverrides <- fmap (concat . catMaybes) $ withPathK "columns" $
indexedForM selCols $ \selCol -> case selCol of
(SCStar _) -> return Nothing
_ -> Just <$> convSelCol fim selPermInfo selCol
everything <- case wildcards of
[] -> return []
_ -> convWildcard fim selPermInfo $ maximum wildcards
let extCols = unionBy equals procOverrides everything
return $ SelectG extCols mWh mOb mLt mOf
where
@ -185,15 +182,19 @@ convOrderByElem sessVarBldr (flds, spi) = \case
[ fldName <<> " is an array relationship"
," and can't be used in 'order_by'"
]
(relFim, relSelPermInfo) <- fetchRelDet (riName relInfo) (riRTable relInfo)
resolvedSelFltr <- convAnnBoolExpPartialSQL sessVarBldr $ spiFilter relSelPermInfo
AOCObjectRelation relInfo resolvedSelFltr <$>
convOrderByElem sessVarBldr (relFim, relSelPermInfo) rest
FIRemoteRelationship {} ->
throw400 UnexpectedPayload (mconcat [ fldName <<> " is a remote field" ])
convSelectQ
:: ( UserInfoM m
, QErrM m
, TableInfoRM 'Postgres m
, HasServerConfigCtx m
)
=> TableName 'Postgres
-> FieldInfoMap (FieldInfo 'Postgres) -- Table information of current table
-> SelPermInfo 'Postgres -- Additional select permission info
@ -202,12 +203,18 @@ convSelectQ
-> ValueParser 'Postgres m S.SQLExp
-> m (AnnSimpleSel 'Postgres)
convSelectQ table fieldInfoMap selPermInfo selQ sessVarBldr prepValBldr = do
-- Convert where clause
wClause <- forM (sqWhere selQ) $ \boolExp ->
withPathK "where" $
convBoolExp fieldInfoMap selPermInfo boolExp sessVarBldr prepValBldr
annFlds <- withPathK "columns" $
indexedForM (sqColumns selQ) $ \case
(ECSimple pgCol) -> do
(colInfo, caseBoolExpMaybe) <- convExtSimple fieldInfoMap selPermInfo pgCol
resolvedCaseBoolExp <-
traverse (convAnnColumnCaseBoolExpPartialSQL sessVarBldr) caseBoolExpMaybe
return (fromCol @'Postgres pgCol, mkAnnColumnField colInfo resolvedCaseBoolExp Nothing)
(ECRel relName mAlias relSelQ) -> do
annRel <- convExtRel fieldInfoMap relName mAlias
relSelQ sessVarBldr prepValBldr
@ -215,11 +222,6 @@ convSelectQ table fieldInfoMap selPermInfo selQ sessVarBldr prepValBldr = do
, either AFObjectRelation AFArrayRelation annRel
)
annOrdByML <- forM (sqOrderBy selQ) $ \(OrderByExp obItems) ->
withPathK "order_by" $ indexedForM obItems $ mapM $
convOrderByElem sessVarBldr (fieldInfoMap, selPermInfo)
@ -251,15 +253,20 @@ convExtSimple
=> FieldInfoMap (FieldInfo 'Postgres)
-> SelPermInfo 'Postgres
-> PGCol
-> m (ColumnInfo 'Postgres, Maybe (AnnColumnCaseBoolExpPartialSQL 'Postgres))
convExtSimple fieldInfoMap selPermInfo pgCol = do
checkSelOnCol selPermInfo pgCol
colInfo <- askColInfo fieldInfoMap pgCol relWhenPGErr
pure (colInfo, join $ HM.lookup pgCol (spiCols selPermInfo))
where
relWhenPGErr = "relationships have to be expanded"
convExtRel
:: ( UserInfoM m
, QErrM m
, TableInfoRM 'Postgres m
, HasServerConfigCtx m
)
=> FieldInfoMap (FieldInfo 'Postgres)
-> RelName
-> Maybe RelName
@ -297,7 +304,11 @@ convExtRel fieldInfoMap relName mAlias selQ sessVarBldr prepValBldr = do
]
convSelectQuery
:: ( UserInfoM m
, QErrM m
, TableInfoRM 'Postgres m
, HasServerConfigCtx m
)
=> SessVarBldr 'Postgres m
-> ValueParser 'Postgres m S.SQLExp
-- -> (ColumnType 'Postgres -> Value -> m S.SQLExp)

View File

@ -22,8 +22,13 @@ module Hasura.RQL.IR.BoolExp
, AnnBoolExpFld(..)
, AnnBoolExp
, AnnColumnCaseBoolExpPartialSQL
, AnnColumnCaseBoolExp
, AnnColumnCaseBoolExpField(..)
, traverseAnnBoolExp
, fmapAnnBoolExp
, traverseAnnColumnCaseBoolExp
, fmapAnnColumnCaseBoolExp
, annBoolExpTrue
, andAnnBoolExps
@ -293,7 +298,6 @@ instance (Backend b, NFData a) => NFData (OpExpG b a)
instance (Backend b, Cacheable a) => Cacheable (OpExpG b a)
instance (Backend b, Hashable a) => Hashable (OpExpG b a)
opExpDepCol :: OpExpG backend a -> Maybe (Column backend)
opExpDepCol = \case
CEQ c -> Just c
@ -382,20 +386,47 @@ instance (Backend b, NFData (ColumnInfo b), NFData a) => NFData (AnnBoolExpFld b
instance (Backend b, Cacheable (ColumnInfo b), Cacheable a) => Cacheable (AnnBoolExpFld b a)
instance (Backend b, Hashable (ColumnInfo b), Hashable a) => Hashable (AnnBoolExpFld b a)
newtype AnnColumnCaseBoolExpField (b :: BackendType) a
= AnnColumnCaseBoolExpField { _accColCaseBoolExpField :: (AnnBoolExpFld b a)}
deriving (Functor, Foldable, Traversable, Generic)
deriving instance (Backend b, Eq (ColumnInfo b), Eq a) => Eq (AnnColumnCaseBoolExpField b a)
instance (Backend b, NFData (ColumnInfo b), NFData a) => NFData (AnnColumnCaseBoolExpField b a)
instance (Backend b, Cacheable (ColumnInfo b), Cacheable a) => Cacheable (AnnColumnCaseBoolExpField b a)
instance (Backend b, Hashable (ColumnInfo b), Hashable a) => Hashable (AnnColumnCaseBoolExpField b a)
type AnnBoolExp b a
= GBoolExp b (AnnBoolExpFld b a)
type AnnColumnCaseBoolExp b a
= GBoolExp b (AnnColumnCaseBoolExpField b a)
traverseAnnBoolExpFld
:: (Applicative f)
=> (a -> f b)
-> AnnBoolExpFld backend a
-> f (AnnBoolExpFld backend b)
traverseAnnBoolExpFld f = \case
AVCol pgColInfo opExps ->
AVCol pgColInfo <$> traverse (traverse f) opExps
AVRel relInfo annBoolExp ->
AVRel relInfo <$> traverseAnnBoolExp f annBoolExp
traverseAnnBoolExp
:: (Applicative f)
=> (a -> f b)
-> AnnBoolExp backend a
-> f (AnnBoolExp backend b)
traverseAnnBoolExp f = traverse (traverseAnnBoolExpFld f)
traverseAnnColumnCaseBoolExp
:: (Applicative f)
=> (a -> f b)
-> AnnColumnCaseBoolExp backend a
-> f (AnnColumnCaseBoolExp backend b)
traverseAnnColumnCaseBoolExp f = traverse traverseColCaseBoolExp
where
traverseColCaseBoolExp (AnnColumnCaseBoolExpField annBoolExpField) =
AnnColumnCaseBoolExpField <$> traverseAnnBoolExpFld f annBoolExpField
fmapAnnBoolExp
:: (a -> b)
@ -404,6 +435,13 @@ fmapAnnBoolExp
fmapAnnBoolExp f =
runIdentity . traverseAnnBoolExp (pure . f)
fmapAnnColumnCaseBoolExp
:: (a -> b)
-> AnnColumnCaseBoolExp backend a
-> AnnColumnCaseBoolExp backend b
fmapAnnColumnCaseBoolExp f =
runIdentity . traverseAnnColumnCaseBoolExp (pure . f)
annBoolExpTrue :: AnnBoolExp backend a
annBoolExpTrue = gBoolExpTrue
@ -417,6 +455,8 @@ type AnnBoolExpSQL b = AnnBoolExp b (SQLExpression b)
type AnnBoolExpFldPartialSQL b = AnnBoolExpFld b (PartialSQLExp b)
type AnnBoolExpPartialSQL b = AnnBoolExp b (PartialSQLExp b)
type AnnColumnCaseBoolExpPartialSQL b = AnnColumnCaseBoolExp b (PartialSQLExp b)
type PreSetColsG b v = M.HashMap (Column b) v
type PreSetColsPartial b = M.HashMap (Column b) (PartialSQLExp b)
@ -436,20 +476,22 @@ instance Backend b => ToJSON (PartialSQLExp b) where
PSESQLExp e -> toJSON $ toSQLTxt e
instance Backend b => ToJSON (AnnBoolExpPartialSQL b) where
toJSON = gBoolExpToJSON annBoolExpMakeKeyValuePair
annBoolExpMakeKeyValuePair :: forall b . Backend b => AnnBoolExpFld b (PartialSQLExp b) -> (Text, Value)
annBoolExpMakeKeyValuePair = \case
AVCol pci opExps ->
( toTxt $ pgiColumn pci
, toJSON (pci, map opExpSToJSON opExps))
AVRel ri relBoolExp ->
( relNameToTxt $ riName ri
, toJSON (ri, toJSON relBoolExp))
where
opExpSToJSON :: OpExpG b (PartialSQLExp b) -> Value
opExpSToJSON = object . pure . opExpToJPair toJSON
instance Backend b => ToJSON (AnnColumnCaseBoolExpPartialSQL b) where
toJSON = gBoolExpToJSON (annBoolExpMakeKeyValuePair . _accColCaseBoolExpField)
isStaticValue :: PartialSQLExp backend -> Bool
isStaticValue = \case

View File

@ -113,7 +113,15 @@ deriving instance (Backend b, Show v) => Show (ComputedFieldScalarSelect b v)
deriving instance (Backend b, Eq v) => Eq (ComputedFieldScalarSelect b v)
data ComputedFieldSelect (b :: BackendType) v
= CFSScalar
!(ComputedFieldScalarSelect b v)
-- ^ Type containing info about the computed field
!(Maybe (AnnColumnCaseBoolExp b v))
-- ^ This expression determines whether the scalar computed field
-- should be nullified. When it is `Nothing`, the scalar computed
-- field is output as computed; when it is `Just c`, the field is
-- output when `c` evaluates to `true` and as `null` when `c`
-- evaluates to `false`
| CFSTable !JsonAggSelect !(AnnSimpleSelG b v)
traverseComputedFieldSelect
@ -121,7 +129,8 @@ traverseComputedFieldSelect
=> (v -> f w)
-> ComputedFieldSelect backend v -> f (ComputedFieldSelect backend w)
traverseComputedFieldSelect fv = \case
CFSScalar scalarSel caseBoolExpMaybe ->
CFSScalar <$> traverse fv scalarSel <*> traverse (traverseAnnColumnCaseBoolExp fv) caseBoolExpMaybe
CFSTable b tableSel -> CFSTable b <$> traverseAnnSimpleSelect fv tableSel
type Fields a = [(FieldName, a)]
@ -156,16 +165,36 @@ data ColumnOp (b :: BackendType)
deriving instance Backend b => Show (ColumnOp b)
deriving instance Backend b => Eq (ColumnOp b)
type ColumnBoolExpression b = Either (AnnBoolExpPartialSQL b) (AnnBoolExpSQL b)
data AnnColumnField (b :: BackendType) v
= AnnColumnField
{ _acfInfo :: !(ColumnInfo b)
, _acfAsText :: !Bool
-- ^ If this field is 'True', columns are explicitly casted to @text@ when fetched, which avoids
-- an issue that occurs because we dont currently have proper support for array types. See
-- https://github.com/hasura/graphql-engine/pull/3198 for more details.
, _acfOp :: !(Maybe (ColumnOp b))
, _acfCaseBoolExpression :: !(Maybe (AnnColumnCaseBoolExp b v))
-- ^ This expression determines whether the column should be
-- nullified. When it is `Nothing`, the column value is output
-- as-is; when it is `Just c`, the column is output when `c`
-- evaluates to `true` and as `null` when `c` evaluates to `false`.
}
traverseAnnColumnField
:: (Applicative f)
=> (a -> f b)
-> AnnColumnField backend a
-> f (AnnColumnField backend b)
traverseAnnColumnField f (AnnColumnField info asText op caseBoolExpMaybe) =
AnnColumnField
<$> pure info
<*> pure asText
<*> pure op
<*> (traverse (traverseAnnColumnCaseBoolExp f) caseBoolExpMaybe)
data RemoteFieldArgument
= RemoteFieldArgument
{ _rfaArgument :: !G.Name
@ -182,7 +211,7 @@ data RemoteSelect (b :: BackendType)
}
data AnnFieldG (b :: BackendType) v
= AFColumn !(AnnColumnField b v)
| AFObjectRelation !(ObjectRelationSelectG b v)
| AFArrayRelation !(ArraySelectG b v)
| AFComputedField (XComputedField b) !(ComputedFieldSelect b v)
@ -190,19 +219,25 @@ data AnnFieldG (b :: BackendType) v
| AFNodeId (XRelay b) !(TableName b) !(PrimaryKeyColumns b)
| AFExpression !Text
mkAnnColumnField
:: ColumnInfo backend
-> Maybe (AnnColumnCaseBoolExp backend v)
-> Maybe (ColumnOp backend)
-> AnnFieldG backend v
mkAnnColumnField ci caseBoolExp colOpM =
AFColumn (AnnColumnField ci False colOpM caseBoolExp)
mkAnnColumnFieldAsText
:: ColumnInfo backend
-> AnnFieldG backend v
mkAnnColumnFieldAsText ci =
AFColumn (AnnColumnField ci True Nothing Nothing)
traverseAnnField
:: (Applicative f)
=> (a -> f b) -> AnnFieldG backend a -> f (AnnFieldG backend b)
traverseAnnField f = \case
AFColumn colFld -> AFColumn <$> traverseAnnColumnField f colFld
AFObjectRelation sel -> AFObjectRelation <$> traverse (traverseAnnObjectSelect f) sel
AFArrayRelation sel -> AFArrayRelation <$> traverseArraySelect f sel
AFComputedField x sel -> AFComputedField x <$> traverseComputedFieldSelect f sel

View File

@ -11,6 +11,7 @@ module Hasura.RQL.Types
, HasSystemDefinedT
, runHasSystemDefinedT
, askSourceInfo
, askSourceConfig
, askSourceTables
@ -26,7 +27,6 @@ module Hasura.RQL.Types
, askComputedFieldInfo
, askRemoteRel
, findTable
, module R
) where
@ -53,6 +53,7 @@ import Hasura.RQL.Types.Endpoint as R
import Hasura.RQL.Types.Error as R
import Hasura.RQL.Types.EventTrigger as R
import Hasura.RQL.Types.Function as R
import Hasura.RQL.Types.InheritedRoles as R
import Hasura.RQL.Types.Metadata as R
import Hasura.RQL.Types.Metadata.Backend as R
import Hasura.RQL.Types.Metadata.Object as R
@ -267,3 +268,4 @@ askRemoteRel fieldInfoMap relName = do
(FIRemoteRelationship remoteFieldInfo) -> return remoteFieldInfo
_ ->
throw400 UnexpectedPayload "expecting a remote relationship"

View File

@ -59,6 +59,7 @@ class
, Ord (FunctionName b)
, Ord (ScalarType b)
, Ord (XRelay b)
, Ord (Column b)
, Data (TableName b)
, Data (ScalarType b)
, Data (SQLExpression b)

View File

@ -0,0 +1,26 @@
module Hasura.RQL.Types.InheritedRoles where
import Hasura.Prelude
import Data.Aeson.Casing
import Data.Aeson.TH
import qualified Data.HashSet as Set
import Hasura.Incremental (Cacheable)
import Hasura.Session
data AddInheritedRole
= AddInheritedRole
{ _adrRoleName :: !RoleName
, _adrRoleSet :: !(Set.HashSet RoleName)
} deriving (Show, Eq, Ord, Generic)
instance Hashable AddInheritedRole
$(deriveJSON (aesonDrop 4 snakeCase) ''AddInheritedRole)
instance Cacheable AddInheritedRole
newtype DropInheritedRole
= DropInheritedRole
{ _ddrRoleName :: RoleName
} deriving (Show, Eq)
$(deriveJSON (aesonDrop 4 snakeCase) ''DropInheritedRole)
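Given the `aesonDrop 4 snakeCase` derivation above, `AddInheritedRole` serializes with `role_name` and `role_set` keys. Assuming the metadata API exposes this command as `add_inherited_role` (the command name is inferred from the DDL type and is not stated in this diff), a request would look roughly like:

```json
{
  "type": "add_inherited_role",
  "args": {
    "role_name": "combined_author_editor",
    "role_set": ["author", "editor"]
  }
}
```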

View File

@ -31,6 +31,7 @@ import Hasura.RQL.Types.CustomTypes
import Hasura.RQL.Types.Endpoint
import Hasura.RQL.Types.EventTrigger
import Hasura.RQL.Types.Function
import Hasura.RQL.Types.InheritedRoles
import Hasura.RQL.Types.Metadata.Backend
import Hasura.RQL.Types.Permission
import Hasura.RQL.Types.QueryCollection
@ -232,6 +233,7 @@ type Allowlist = HSIns.InsOrdHashSet CollectionReq
type Endpoints = InsOrdHashMap EndpointName CreateEndpoint
type Actions = InsOrdHashMap ActionName ActionMetadata
type CronTriggers = InsOrdHashMap TriggerName CronTriggerMetadata
type InheritedRoles = InsOrdHashMap RoleName AddInheritedRole
data SourceMetadata b
= SourceMetadata
@ -299,6 +301,7 @@ parseNonSourcesMetadata
, CronTriggers
, ApiLimit
, MetricsConfig
, InheritedRoles
)
parseNonSourcesMetadata o = do
remoteSchemas <- parseListAsMap "remote schemas" _rsmName $
@ -313,9 +316,11 @@ parseNonSourcesMetadata o = do
apiLimits <- o .:? "api_limits" .!= emptyApiLimit
metricsConfig <- o .:? "metrics_config" .!= emptyMetricsConfig
inheritedRoles <- parseListAsMap "inherited roles" _adrRoleName $
o .:? "inherited_roles" .!= []
pure ( remoteSchemas, queryCollections, allowlist, customTypes
, actions, cronTriggers, apiLimits, metricsConfig
, actions, cronTriggers, apiLimits, metricsConfig, inheritedRoles
)
newtype MetadataResourceVersion
@ -340,6 +345,7 @@ data Metadata
, _metaRestEndpoints :: !Endpoints
, _metaApiLimits :: !ApiLimit
, _metaMetricsConfig :: !MetricsConfig
, _metaInheritedRoles :: !InheritedRoles
} deriving (Show, Eq)
$(makeLenses ''Metadata)
@ -352,14 +358,14 @@ instance FromJSON Metadata where
sources <- oMapFromL getSourceName <$> o .: "sources"
endpoints <- oMapFromL _ceName <$> o .:? "rest_endpoints" .!= []
(remoteSchemas, queryCollections, allowlist, customTypes,
actions, cronTriggers, apiLimits, metricsConfig, inheritedRoles) <- parseNonSourcesMetadata o
pure $ Metadata sources remoteSchemas queryCollections allowlist
customTypes actions cronTriggers endpoints apiLimits metricsConfig inheritedRoles
emptyMetadata :: Metadata
emptyMetadata =
Metadata mempty mempty mempty mempty emptyCustomTypes mempty mempty mempty
emptyApiLimit emptyMetricsConfig mempty
tableMetadataSetter
:: (BackendMetadata b)
@ -381,7 +387,7 @@ data MetadataNoSources
$(deriveToJSON hasuraJSON ''MetadataNoSources)
instance FromJSON MetadataNoSources where
parseJSON = withObject "MetadataNoSources" $ \o -> do
version <- o .:? "version" .!= MVVersion1
(tables, functions) <-
case version of
@ -397,7 +403,7 @@ instance FromJSON MetadataNoSources where
pure (tables, functions)
MVVersion3 -> fail "unexpected version for metadata without sources: 3"
(remoteSchemas, queryCollections, allowlist, customTypes,
actions, cronTriggers, _, _, _) <- parseNonSourcesMetadata o
pure $ MetadataNoSources tables functions remoteSchemas queryCollections
allowlist customTypes actions cronTriggers
@ -438,6 +444,7 @@ metadataToOrdJSON ( Metadata
endpoints
apiLimits
metricsConfig
inheritedRoles
) = AO.object $ [ versionPair , sourcesPair] <>
catMaybes [ remoteSchemasPair
, queryCollectionsPair
@ -448,6 +455,7 @@ metadataToOrdJSON ( Metadata
, endpointsPair
, apiLimitsPair
, metricsConfigPair
, inheritedRolesPair
]
where
versionPair = ("version", AO.toOrdered currentMetadataVersion)
@ -460,6 +468,7 @@ metadataToOrdJSON ( Metadata
else Just ("custom_types", customTypesToOrdJSON customTypes)
actionsPair = listToMaybeOrdPairSort "actions" actionMetadataToOrdJSON _amName actions
cronTriggersPair = listToMaybeOrdPairSort "cron_triggers" crontriggerQToOrdJSON ctName cronTriggers
inheritedRolesPair = listToMaybeOrdPairSort "inherited_roles" inheritedRolesQToOrdJSON _adrRoleName inheritedRoles
endpointsPair = listToMaybeOrdPairSort "rest_endpoints" AO.toOrdered _ceUrl endpoints
apiLimitsPair = if apiLimits == emptyApiLimit then Nothing
@ -610,6 +619,12 @@ metadataToOrdJSON ( Metadata
else pure ("permissions", AO.toOrdered _fmPermissions)
in AO.object $ [("function", AO.toOrdered _fmFunction)] <> confKeyPair <> permissionsKeyPair
inheritedRolesQToOrdJSON :: AddInheritedRole -> AO.Value
inheritedRolesQToOrdJSON AddInheritedRole{..} =
AO.object $ [ ("role_name", AO.toOrdered _adrRoleName)
, ("role_set", AO.toOrdered _adrRoleSet)
]
remoteSchemaQToOrdJSON :: RemoteSchemaMetadata -> AO.Value
remoteSchemaQToOrdJSON (RemoteSchemaMetadata name definition comment permissions) =
AO.object $ [ ("name", AO.toOrdered name)

View File

@ -52,6 +52,7 @@ data MetadataObjId
| MOAction !ActionName
| MOActionPermission !ActionName !RoleName
| MOCronTrigger !TriggerName
| MOInheritedRole !RoleName
| MOEndpoint !EndpointName
$(makePrisms ''MetadataObjId)
@ -65,6 +66,7 @@ instance Hashable MetadataObjId where
MOAction actionName -> hashWithSalt salt actionName
MOActionPermission actionName roleName -> hashWithSalt salt (actionName, roleName)
MOCronTrigger triggerName -> hashWithSalt salt triggerName
MOInheritedRole roleName -> hashWithSalt salt roleName
MOEndpoint endpoint -> hashWithSalt salt endpoint
instance Eq MetadataObjId where
@ -75,6 +77,7 @@ instance Eq MetadataObjId where
MOCustomTypes == MOCustomTypes = True
(MOActionPermission an1 r1) == (MOActionPermission an2 r2) = an1 == an2 && r1 == r2
(MOCronTrigger trn1) == (MOCronTrigger trn2) = trn1 == trn2
(MOInheritedRole rn1) == (MOInheritedRole rn2) = rn1 == rn2
_ == _ = False
moiTypeName :: MetadataObjId -> Text
@ -96,6 +99,7 @@ moiTypeName = \case
MOCustomTypes -> "custom_types"
MOAction _ -> "action"
MOActionPermission _ _ -> "action_permission"
MOInheritedRole _ -> "inherited_role"
MOEndpoint _ -> "endpoint"
moiName :: MetadataObjId -> Text
@ -122,6 +126,7 @@ moiName objectId = moiTypeName objectId <> " " <> case objectId of
MOCustomTypes -> "custom_types"
MOAction name -> toTxt name
MOActionPermission name roleName -> toTxt roleName <> " permission in " <> toTxt name
MOInheritedRole inheritedRoleName -> "inherited role " <> toTxt inheritedRoleName
MOEndpoint name -> toTxt name
data MetadataObject

View File

@ -20,7 +20,6 @@ import Hasura.RQL.Types.ComputedField
import Hasura.SQL.Backend
import Hasura.Session
data PermType
= PTInsert
| PTSelect

View File

@ -20,6 +20,7 @@ module Hasura.RQL.Types.SchemaCache
, TableCoreCache
, TableCache
, ActionCache
, InheritedRolesCache
, TypeRelationship(..)
, trName, trType, trRemoteTable, trFieldMapping
@ -82,12 +83,6 @@ module Hasura.RQL.Types.SchemaCache
, isPGColInfo
, RelInfo(..)
, RolePermInfo(..)
, mkRolePermInfo
, permIns
, permSel
, permUpd
, permDel
, PermAccessor(..)
, permAccToLens
, permAccToType
@ -95,8 +90,6 @@ module Hasura.RQL.Types.SchemaCache
, RolePermInfoMap
, InsPermInfo(..)
, SelPermInfo(..)
, getSelectPermissionInfoM
, UpdPermInfo(..)
, DelPermInfo(..)
, PreSetColsPartial
@ -266,6 +259,8 @@ unsafeFunctionInfo
unsafeFunctionInfo sourceName functionName cache =
M.lookup functionName =<< unsafeFunctionCache @b sourceName cache
type InheritedRolesCache = M.HashMap RoleName (HashSet RoleName)
unsafeTableCache
:: forall b. Backend b => SourceName -> SourceCache -> Maybe (TableCache b)
unsafeTableCache sourceName cache = do
@ -286,7 +281,6 @@ data SchemaCache
, scUnauthenticatedGQLContext :: !GQLContext
, scRelayContext :: !(HashMap RoleName (RoleContext GQLContext))
, scUnauthenticatedRelayContext :: !GQLContext
, scDepMap :: !DepMap
, scInconsistentObjs :: ![InconsistentMetadata]
, scCronTriggers :: !(M.HashMap TriggerName CronTriggerInfo)

View File

@ -167,8 +167,15 @@ instance Backend b => ToJSON (InsPermInfo b) where
data SelPermInfo (b :: BackendType)
= SelPermInfo
{ spiCols :: !(HS.HashSet (Column b))
, spiScalarComputedFields :: !(HS.HashSet ComputedFieldName)
{ spiCols :: !(M.HashMap (Column b) (Maybe (AnnColumnCaseBoolExpPartialSQL b)))
-- ^ HashMap of columns accessible to the role; a `Column` may be mapped to
-- an `AnnColumnCaseBoolExpPartialSQL`, which happens only in the case of an
-- inherited role; for a non-inherited role, it will be `Nothing`. This
-- bool exp determines whether the column should be nullified in a row
-- when the requisite permissions are absent.
, spiScalarComputedFields :: !(M.HashMap ComputedFieldName (Maybe (AnnColumnCaseBoolExpPartialSQL b)))
-- ^ HashMap of scalar computed fields accessible to the role, mapped to
-- `AnnColumnCaseBoolExpPartialSQL`, similar to `spiCols`
, spiFilter :: !(AnnBoolExpPartialSQL b)
, spiLimit :: !(Maybe Int)
, spiAllowAgg :: !Bool
@ -207,9 +214,6 @@ instance Backend b => Cacheable (DelPermInfo b)
instance Backend b => ToJSON (DelPermInfo b) where
toJSON = genericToJSON hasuraJSON
mkRolePermInfo :: RolePermInfo backend
mkRolePermInfo = RolePermInfo Nothing Nothing Nothing Nothing
data RolePermInfo (b :: BackendType)
= RolePermInfo
{ _permIns :: !(Maybe (InsPermInfo b))
@ -495,11 +499,6 @@ getColumnInfoM
getColumnInfoM tableInfo fieldName =
(^? _FIColumn) =<< getFieldInfoM tableInfo fieldName
getSelectPermissionInfoM
:: TableInfo b -> RoleName -> Maybe (SelPermInfo b)
getSelectPermissionInfoM tableInfo roleName =
join $ tableInfo ^? tiRolePermInfoMap.at roleName._Just.permSel
data PermAccessor (b :: BackendType) a where
PAInsert :: PermAccessor b (InsPermInfo b)
PASelect :: PermAccessor b (SelPermInfo b)
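
The reworked `SelPermInfo` above maps each column (and scalar computed field) to an optional case boolean expression. A minimal Python sketch of how those per-column expressions and the other parameters could be derived when combining parent roles — the helper name, dict shape, and predicate strings are illustrative, not the engine's API:

```python
# Hedged sketch: derive an inherited role's select permission from its parent
# roles'. Each entry in `perms` is {"columns": set, "filter": str, "allow_agg": bool}.

def combine_select_permissions(perms):
    # row filter of the inherited role: disjunction of the parents' filters
    combined_filter = " OR ".join(f"({p['filter']})" for p in perms)
    columns = {}
    for col in set().union(*(p["columns"] for p in perms)):
        granting = [p["filter"] for p in perms if col in p["columns"]]
        if len(granting) == len(perms):
            # every parent role can see the column: no case expression needed
            columns[col] = None
        else:
            # nullify the column on rows not covered by the granting roles' filters
            columns[col] = " OR ".join(f"({f})" for f in granting)
    return {
        "columns": columns,
        "filter": combined_filter,
        # aggregation is allowed if any parent role allows it
        "allow_agg": any(p["allow_agg"] for p in perms),
    }

# mirrors the head's example: role1 sees address under P1, role2 sees
# address and phone under P2
role1 = {"columns": {"address"}, "filter": "P1", "allow_agg": False}
role2 = {"columns": {"address", "phone"}, "filter": "P2", "allow_agg": True}
combined = combine_select_permissions([role1, role2])
```

This reproduces the SQL in the introduction: `address` is visible under `P1 OR P2` with no per-column case expression needed beyond the row filter, while `phone` gets a `case when P2` guard.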

View File

@ -22,6 +22,7 @@ import Hasura.RQL.DDL.ComputedField
import Hasura.RQL.DDL.CustomTypes
import Hasura.RQL.DDL.Endpoint
import Hasura.RQL.DDL.EventTrigger
import Hasura.RQL.DDL.InheritedRoles
import Hasura.RQL.DDL.Metadata
import Hasura.RQL.DDL.Permission
import Hasura.RQL.DDL.QueryCollection
@ -175,6 +176,10 @@ data RQLMetadataV1
| RMSetMetricsConfig !MetricsConfig
| RMRemoveMetricsConfig
-- inherited roles
| RMAddInheritedRole !AddInheritedRole
| RMDropInheritedRole !DropInheritedRole
-- bulk metadata queries
| RMBulk [RQLMetadataRequest]
deriving (Eq)
@ -452,6 +457,9 @@ runMetadataQueryV1M env currentResourceVersion = \case
RMSetMetricsConfig q -> runSetMetricsConfig q
RMRemoveMetricsConfig -> runRemoveMetricsConfig
RMAddInheritedRole q -> runAddInheritedRole q
RMDropInheritedRole q -> runDropInheritedRole q
RMBulk q -> encJFromList <$> indexedMapM (runMetadataQueryM env currentResourceVersion) q
runMetadataQueryV2M
@ -459,6 +467,7 @@ runMetadataQueryV2M
, CacheRWM m
, MetadataM m
, MonadMetadataStorageQueryAPI m
, HasServerConfigCtx m
)
=> MetadataResourceVersion
-> RQLMetadataV2

View File

@ -42,7 +42,7 @@ import Hasura.RQL.DML.Types
import Hasura.RQL.DML.Update
import Hasura.RQL.Types
import Hasura.RQL.Types.Run
import Hasura.Server.Types (InstanceId (..), MaintenanceMode (..))
import Hasura.Server.Types
import Hasura.Server.Utils
import Hasura.Server.Version (HasVersion)
import Hasura.Session

View File

@ -22,7 +22,7 @@ import Hasura.RQL.DML.Types
import Hasura.RQL.DML.Update
import Hasura.RQL.Types
import Hasura.RQL.Types.Run
import Hasura.Server.Types (InstanceId (..), MaintenanceMode (..))
import Hasura.Server.Types
import Hasura.Server.Version (HasVersion)
import Hasura.Session

View File

@ -115,6 +115,7 @@ data ServerCtx
, scRemoteSchemaPermsCtx :: !RemoteSchemaPermsCtx
, scFunctionPermsCtx :: !FunctionPermissionsCtx
, scEnableMaintenanceMode :: !MaintenanceMode
, scExperimentalFeatures :: !(S.HashSet ExperimentalFeature)
}
data HandlerCtx
@ -217,7 +218,7 @@ parseBody reqBody =
onlyAdmin :: (MonadError QErr m, MonadReader HandlerCtx m) => m ()
onlyAdmin = do
uRole <- asks (_uiRole . hcUser)
when (uRole /= adminRoleName) $
unless (uRole == adminRoleName) $
throw400 AccessDenied "You have to be an admin to access this endpoint"
setHeader :: MonadIO m => HTTP.Header -> Spock.ActionT m ()
@ -428,7 +429,8 @@ v1QueryHandler query = do
remoteSchemaPermsCtx <- asks (scRemoteSchemaPermsCtx . hcServerCtx)
functionPermsCtx <- asks (scFunctionPermsCtx . hcServerCtx)
maintenanceMode <- asks (scEnableMaintenanceMode . hcServerCtx)
let serverConfigCtx = ServerConfigCtx functionPermsCtx remoteSchemaPermsCtx sqlGenCtx maintenanceMode
experimentalFeatures <- asks (scExperimentalFeatures . hcServerCtx)
let serverConfigCtx = ServerConfigCtx functionPermsCtx remoteSchemaPermsCtx sqlGenCtx maintenanceMode experimentalFeatures
runQuery env instanceId userInfo schemaCache httpMgr
serverConfigCtx query
@ -455,8 +457,9 @@ v1MetadataHandler query = do
logger <- asks (scLogger . hcServerCtx)
remoteSchemaPermsCtx <- asks (scRemoteSchemaPermsCtx . hcServerCtx)
functionPermsCtx <- asks (scFunctionPermsCtx . hcServerCtx)
experimentalFeatures <- asks (scExperimentalFeatures . hcServerCtx)
maintenanceMode <- asks (scEnableMaintenanceMode . hcServerCtx)
let serverConfigCtx = ServerConfigCtx functionPermsCtx remoteSchemaPermsCtx sqlGenCtx maintenanceMode
let serverConfigCtx = ServerConfigCtx functionPermsCtx remoteSchemaPermsCtx sqlGenCtx maintenanceMode experimentalFeatures
r <- withSCUpdate scRef logger $
runMetadataQuery env instanceId userInfo httpMgr serverConfigCtx
schemaCache query
@ -488,9 +491,10 @@ v2QueryHandler query = do
instanceId <- asks (scInstanceId . hcServerCtx)
env <- asks (scEnvironment . hcServerCtx)
remoteSchemaPermsCtx <- asks (scRemoteSchemaPermsCtx . hcServerCtx)
experimentalFeatures <- asks (scExperimentalFeatures . hcServerCtx)
functionPermsCtx <- asks (scFunctionPermsCtx . hcServerCtx)
maintenanceMode <- asks (scEnableMaintenanceMode . hcServerCtx)
let serverConfigCtx = ServerConfigCtx functionPermsCtx remoteSchemaPermsCtx sqlGenCtx maintenanceMode
let serverConfigCtx = ServerConfigCtx functionPermsCtx remoteSchemaPermsCtx sqlGenCtx maintenanceMode experimentalFeatures
V2Q.runQuery env instanceId userInfo schemaCache httpMgr serverConfigCtx query
v1Alpha1GQHandler
@ -766,10 +770,13 @@ mkWaiApp
-> KeepAliveDelay
-- ^ Metadata storage connection pool
-> MaintenanceMode
-> S.HashSet ExperimentalFeature
-- ^ Set of the enabled experimental features
-> m HasuraApp
mkWaiApp setupHook env logger sqlGenCtx enableAL httpManager mode corsCfg enableConsole consoleAssetsDir
enableTelemetry instanceId apis lqOpts _ {- planCacheOptions -} responseErrorsConfig
liveQueryHook schemaCacheRef ekgStore enableRSPermsCtx functionPermsCtx connectionOptions keepAliveDelay maintenanceMode = do
liveQueryHook schemaCacheRef ekgStore enableRSPermsCtx functionPermsCtx connectionOptions keepAliveDelay
maintenanceMode experimentalFeatures = do
let getSchemaCache = first lastBuiltSchemaCache <$> readIORef (_scrCache schemaCacheRef)
@ -797,6 +804,7 @@ mkWaiApp setupHook env logger sqlGenCtx enableAL httpManager mode corsCfg enable
, scRemoteSchemaPermsCtx = enableRSPermsCtx
, scFunctionPermsCtx = functionPermsCtx
, scEnableMaintenanceMode = maintenanceMode
, scExperimentalFeatures = experimentalFeatures
}
spockApp <- liftWithStateless $ \lowerIO ->

View File

@ -1,8 +1,7 @@
{-# LANGUAGE DerivingStrategies #-}
module Hasura.Server.Auth
( getUserInfo
, getUserInfoWithExpTime
( getUserInfoWithExpTime
, AuthMode (..)
, setupAuthMode
, AdminSecretHash
@ -180,16 +179,6 @@ setupAuthMode mAdminSecretHash mWebHook mJwtSecret mUnAuthRole httpManager logge
JFEJwkParseError _ e -> throwError e
JFEExpiryParseError _ _ -> return Nothing
getUserInfo
:: (HasVersion, MonadIO m, MonadBaseControl IO m, MonadError QErr m, Tracing.MonadTrace m)
=> Logger Hasura
-> H.Manager
-> [N.Header]
-> AuthMode
-> Maybe ReqsText
-> m UserInfo
getUserInfo l m r a reqs = fst <$> getUserInfoWithExpTime l m r a reqs
-- | Authenticate the request using the headers and the configured 'AuthMode'.
getUserInfoWithExpTime
:: forall m. (HasVersion, MonadIO m, MonadBaseControl IO m, MonadError QErr m, Tracing.MonadTrace m)
@ -204,7 +193,8 @@ getUserInfoWithExpTime = getUserInfoWithExpTime_ userInfoFromAuthHook processJwt
-- Broken out for testing with mocks:
getUserInfoWithExpTime_
:: forall m _Manager _Logger_Hasura. (MonadIO m, MonadError QErr m)
=> (_Logger_Hasura -> _Manager -> AuthHook -> [N.Header] -> Maybe ReqsText -> m (UserInfo, Maybe UTCTime))
=> (_Logger_Hasura -> _Manager -> AuthHook -> [N.Header]
-> Maybe ReqsText -> m (UserInfo, Maybe UTCTime))
-- ^ mock 'userInfoFromAuthHook'
-> (JWTCtx -> [N.Header] -> Maybe RoleName -> m (UserInfo, Maybe UTCTime))
-- ^ mock 'processJwt'
@ -214,7 +204,7 @@ getUserInfoWithExpTime_
-> AuthMode
-> Maybe ReqsText
-> m (UserInfo, Maybe UTCTime)
getUserInfoWithExpTime_ userInfoFromAuthHook_ processJwt_ logger manager rawHeaders authMode reqs = case authMode of
getUserInfoWithExpTime_ userInfoFromAuthHook_ processJwt_ logger manager rawHeaders authMode reqs = case authMode of
AMNoAuth -> withNoExpTime $ mkUserInfoFallbackAdminRole UAuthNotSet

View File

@ -3,7 +3,6 @@ module Hasura.Server.Auth.WebHook
, AuthHookG (..)
, AuthHook
, userInfoFromAuthHook
, userInfoFromAuthHook'
, type ReqsText
) where
@ -60,16 +59,6 @@ hookMethod authHook = case ahType authHook of
type ReqsText = GH.GQLBatchedReqs GH.GQLQueryText
userInfoFromAuthHook'
:: forall m
. (HasVersion, MonadIO m, MonadBaseControl IO m, MonadError QErr m, Tracing.MonadTrace m)
=> Logger Hasura
-> H.Manager
-> AuthHook
-> [N.Header]
-> m (UserInfo, Maybe UTCTime)
userInfoFromAuthHook' l m h r = userInfoFromAuthHook l m h r Nothing
-- | Makes an authentication request to the given AuthHook and returns
-- UserInfo parsed from the response, plus an expiration time if one
-- was returned. Optionally passes a batch of raw GraphQL requests

View File

@ -194,6 +194,7 @@ mkServeOptions rso = do
webSocketKeepAlive <- KeepAliveDelay . fromIntegral . fromMaybe 5
<$> withEnv (rsoWebSocketKeepAlive rso) (fst webSocketKeepAliveEnv)
experimentalFeatures <- maybe mempty Set.fromList <$> withEnv (rsoExperimentalFeatures rso) (fst experimentalFeaturesEnv)
inferFunctionPerms <-
maybe FunctionPermissionsInferred (bool FunctionPermissionsManual FunctionPermissionsInferred) <$>
(withEnv (rsoInferFunctionPermissions rso) (fst inferFunctionPermsEnv))
@ -208,7 +209,7 @@ mkServeOptions rso = do
enabledLogs serverLogLevel planCacheOptions
internalErrorsConfig eventsHttpPoolSize eventsFetchInterval
logHeadersFromEnv enableRemoteSchemaPerms connectionOptions webSocketKeepAlive
inferFunctionPerms maintenanceMode
inferFunctionPerms maintenanceMode experimentalFeatures
where
#ifdef DeveloperAPIs
defaultAPIs = [METADATA,GRAPHQL,PGDUMP,CONFIG,DEVELOPER]
@ -526,6 +527,12 @@ enabledAPIsEnv =
, "Comma separated list of enabled APIs. (default: metadata,graphql,pgdump,config)"
)
experimentalFeaturesEnv :: (String, String)
experimentalFeaturesEnv =
( "HASURA_GRAPHQL_EXPERIMENTAL_FEATURES"
, "Comma separated list of experimental features. (all: inherited_roles)"
)
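
For reference, the feature is switched on either through this env var or the matching CLI flag; a hedged shell sketch (the flag form is the one registered by `parseExperimentalFeatures`):

```shell
# Enable the inherited_roles experimental feature via the env var
export HASURA_GRAPHQL_EXPERIMENTAL_FEATURES=inherited_roles

# Equivalently, when invoking the server directly:
#   graphql-engine serve --experimental-features inherited_roles
```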
consoleAssetsDirEnv :: (String, String)
consoleAssetsDirEnv =
( "HASURA_GRAPHQL_CONSOLE_ASSETS_DIR"
@ -840,6 +847,13 @@ parseEnabledAPIs = optional $
help (snd enabledAPIsEnv)
)
parseExperimentalFeatures :: Parser (Maybe [ExperimentalFeature])
parseExperimentalFeatures = optional $
option (eitherReader readExperimentalFeatures)
( long "experimental-features" <>
help (snd experimentalFeaturesEnv)
)
parseMxRefetchInt :: Parser (Maybe LQ.RefetchInterval)
parseMxRefetchInt =
optional $
@ -1031,6 +1045,7 @@ serveOptsToLog so =
, "websocket_keep_alive" J..= show (soWebsocketKeepAlive so)
, "infer_function_permissions" J..= soInferFunctionPermissions so
, "enable_maintenance_mode" J..= soEnableMaintenanceMode so
, "experimental_features" J..= soExperimentalFeatures so
]
mkGenericStrLog :: L.LogLevel -> Text -> String -> StartupLog
@ -1081,6 +1096,7 @@ serveOptionsParser =
<*> parseWebSocketKeepAlive
<*> parseInferFunctionPerms
<*> parseEnableMaintenanceMode
<*> parseExperimentalFeatures
-- | This implements the mapping between application versions
-- and catalog schema versions.

View File

@ -79,6 +79,7 @@ data RawServeOptions impl
, rsoWebSocketKeepAlive :: !(Maybe Int)
, rsoInferFunctionPermissions :: !(Maybe Bool)
, rsoEnableMaintenanceMode :: !Bool
, rsoExperimentalFeatures :: !(Maybe [ExperimentalFeature])
}
-- | @'ResponseInternalErrorsConfig' represents the encoding of the internal
@ -93,7 +94,7 @@ data ResponseInternalErrorsConfig
shouldIncludeInternal :: RoleName -> ResponseInternalErrorsConfig -> Bool
shouldIncludeInternal role = \case
InternalErrorsAllRequests -> True
InternalErrorsAdminOnly -> isAdmin role
InternalErrorsAdminOnly -> role == adminRoleName
InternalErrorsDisabled -> False
newtype KeepAliveDelay
@ -131,6 +132,7 @@ data ServeOptions impl
, soWebsocketKeepAlive :: !KeepAliveDelay
, soInferFunctionPermissions :: !FunctionPermissionsCtx
, soEnableMaintenanceMode :: !MaintenanceMode
, soExperimentalFeatures :: !(Set.HashSet ExperimentalFeature)
}
data DowngradeOptions
@ -251,6 +253,12 @@ readAPIs = mapM readAPI . T.splitOn "," . T.pack
"CONFIG" -> Right CONFIG
_ -> Left "Only expecting list of comma separated API types metadata,graphql,pgdump,developer,config"
readExperimentalFeatures :: String -> Either String [ExperimentalFeature]
readExperimentalFeatures = mapM readAPI . T.splitOn "," . T.pack
where readAPI si = case T.toLower $ T.strip si of
"inherited_roles" -> Right EFInheritedRoles
_ -> Left "Only expecting list of comma separated experimental features"
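
A rough Python equivalent of the `readExperimentalFeatures` parser above (split on `","`, strip, lowercase, validate); the function and constant names are illustrative:

```python
# Hedged sketch of the comma-separated feature-list parsing.

KNOWN_FEATURES = {"inherited_roles"}  # the only experimental feature at this point

def read_experimental_features(raw):
    features = []
    for item in raw.split(","):
        feature = item.strip().lower()
        if feature not in KNOWN_FEATURES:
            raise ValueError(
                "Only expecting list of comma separated experimental features")
        features.append(feature)
    return features
```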
readLogLevel :: String -> Either String L.LogLevel
readLogLevel s = case T.toLower $ T.strip $ T.pack s of
"debug" -> Right L.LevelDebug
@ -305,6 +313,9 @@ instance FromEnv CorsConfig where
instance FromEnv [API] where
fromEnv = readAPIs
instance FromEnv [ExperimentalFeature] where
fromEnv = readExperimentalFeatures
instance FromEnv LQ.BatchSize where
fromEnv s = do
val <- readEither s

View File

@ -281,7 +281,7 @@ migrations maybeDefaultSourceConfig dryRun maintenanceMode =
SourceMetadata defaultSource _mnsTables _mnsFunctions defaultSourceConfig
in Metadata (OMap.singleton defaultSource defaultSourceMetadata)
_mnsRemoteSchemas _mnsQueryCollections _mnsAllowlist _mnsCustomTypes _mnsActions _mnsCronTriggers mempty
emptyApiLimit emptyMetricsConfig
emptyApiLimit emptyMetricsConfig mempty
liftTx $ insertMetadataInCatalog metadataV3
from43To42 = do

View File

@ -3,7 +3,10 @@ module Hasura.Server.Types where
import Hasura.Prelude
import Data.Aeson
import Data.Aeson.Casing
import Data.Aeson.TH
import qualified Data.HashSet as Set
import qualified Database.PG.Query as Q
import qualified Network.HTTP.Types as HTTP
@ -35,6 +38,22 @@ newtype InstanceId
= InstanceId { getInstanceId :: Text }
deriving (Show, Eq, ToJSON, FromJSON, Q.FromCol, Q.ToPrepArg)
data ExperimentalFeature
= EFInheritedRoles
deriving (Show, Eq, Generic)
$(deriveFromJSON (defaultOptions { constructorTagModifier = snakeCase . drop 2})
''ExperimentalFeature)
instance Hashable ExperimentalFeature
-- TODO: when there is more than one constructor in `ExperimentalFeature`, we should
-- derive the `ToJSON` instance like we do for the `FromJSON` instance. Deriving it
-- with a single data constructor messes up the `ToJSON` instance, which is why it's
-- implemented manually here
instance ToJSON ExperimentalFeature where
toJSON = \case
EFInheritedRoles -> "inherited_roles"
data MaintenanceMode = MaintenanceModeEnabled | MaintenanceModeDisabled
deriving (Show, Eq)
@ -51,4 +70,5 @@ data ServerConfigCtx
, _sccRemoteSchemaPermsCtx :: !RemoteSchemaPermsCtx
, _sccSQLGenCtx :: !SQLGenCtx
, _sccMaintenanceMode :: !MaintenanceMode
, _sccExperimentalFeatures :: !(Set.HashSet ExperimentalFeature)
} deriving (Show, Eq)
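
The `constructorTagModifier = snakeCase . drop 2` above turns the constructor name into its JSON tag; a small Python sketch of that transformation (the regex is an assumption about how Aeson's `snakeCase` behaves):

```python
import re

def constructor_tag(name):
    # drop 2: strip the "EF" prefix from the constructor name
    stem = name[2:]
    # snakeCase: underscore before each interior capital, then lowercase
    return re.sub(r"(?<!^)(?=[A-Z])", "_", stem).lower()
```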

View File

@ -42,6 +42,7 @@ import Data.Aeson.Types (Parser, toJSONKeyText)
import Data.Text.Extended
import Data.Text.NonEmpty
import Hasura.Incremental (Cacheable)
import Hasura.RQL.Types.Error
import Hasura.Server.Utils
@ -53,12 +54,12 @@ newtype RoleName
deriving ( Show, Eq, Ord, Hashable, FromJSONKey, ToJSONKey, FromJSON
, ToJSON, Q.FromCol, Q.ToPrepArg, Generic, Arbitrary, NFData, Cacheable )
instance ToTxt RoleName where
toTxt = roleNameToTxt
roleNameToTxt :: RoleName -> Text
roleNameToTxt = unNonEmptyText . getRoleTxt
instance ToTxt RoleName where
toTxt = roleNameToTxt
mkRoleName :: Text -> Maybe RoleName
mkRoleName = fmap RoleName . mkNonEmptyText
@ -204,14 +205,13 @@ mkUserInfo roleBuild userAdminSecret sessionVariables = do
roleName <- case roleBuild of
URBFromSessionVariables -> onNothing maybeSessionRole $
throw400 InvalidParams $ userRoleHeader <> " not found in session variables"
URBFromSessionVariablesFallback role -> pure $ fromMaybe role maybeSessionRole
URBPreDetermined role -> pure role
URBFromSessionVariablesFallback roleName' -> pure $ fromMaybe roleName' maybeSessionRole
URBPreDetermined roleName' -> pure roleName'
backendOnlyFieldAccess <- getBackendOnlyFieldAccess
let modifiedSession = modifySessionVariables roleName sessionVariables
pure $ UserInfo roleName modifiedSession backendOnlyFieldAccess
where
maybeSessionRole = maybeRoleFromSessionVariables sessionVariables
-- | Add x-hasura-role header and remove admin secret headers
modifySessionVariables :: RoleName -> SessionVariables -> SessionVariables
modifySessionVariables roleName =

View File

@ -83,7 +83,6 @@ getUserInfoWithExpTimeTests = describe "getUserInfo" $ do
let ourUnauthRole = mkRoleNameE "an0nymous"
describe "started without admin secret" $ do
it "gives admin by default" $ do
mode <- setupAuthMode'E Nothing Nothing Nothing Nothing
@ -604,7 +603,6 @@ mkRoleNameE = fromMaybe (error "fixme") . mkRoleName
mkJSONPathE :: Text -> JSONPath
mkJSONPathE = either error id . parseJSONPath
newtype NoReporter a = NoReporter { runNoReporter :: IO a }
deriving newtype (Functor, Applicative, Monad, MonadIO, MonadBase IO, MonadBaseControl IO)

View File

@ -100,7 +100,7 @@ buildPostgresSpecs maybeUrlTemplate = do
let sqlGenCtx = SQLGenCtx False
maintenanceMode = MaintenanceModeDisabled
serverConfigCtx =
ServerConfigCtx FunctionPermissionsInferred RemoteSchemaPermsDisabled sqlGenCtx maintenanceMode
ServerConfigCtx FunctionPermissionsInferred RemoteSchemaPermsDisabled sqlGenCtx maintenanceMode mempty
cacheBuildParams = CacheBuildParams httpManager (mkPgSourceResolver print) serverConfigCtx
run :: CacheBuild a -> IO a

View File

@ -160,6 +160,13 @@ This option may result in test failures if the schema has to change between the
help="Flag to indicate if the graphql-engine has enabled remote schema permissions",
)
parser.addoption(
"--test-inherited-roles",
action="store_true",
default=False,
help="Flag to specify if the inherited roles tests are to be run"
)
parser.addoption(
"--redis-url",
metavar="REDIS_URL",
@ -167,6 +174,7 @@ This option may result in test failures if the schema has to change between the
default=False
)
#By default,
#1) Set default parallelism to one
#2) Set test grouping to by filename (--dist=loadfile)
@ -307,6 +315,12 @@ def functions_permissions_fixtures(hge_ctx):
pytest.skip('These tests are meant to be run with --test-function-permissions set')
return
@pytest.fixture(scope='class')
def inherited_role_fixtures(hge_ctx):
if not hge_ctx.inherited_roles_tests:
pytest.skip('These tests are meant to be run with --test-inherited-roles set')
return
@pytest.fixture(scope='class')
def scheduled_triggers_evts_webhook(request):
webhook_httpd = EvtsWebhookServer(server_address=('127.0.0.1', 5594))
@ -346,6 +360,19 @@ def per_class_tests_db_state(request, hge_ctx):
"""
yield from db_state_context(request, hge_ctx)
@pytest.fixture(scope='class')
def per_class_tests_db_state_new(request, hge_ctx):
"""
Set up the database state for select queries.
Has a class level scope, since select queries do not change database state
Expects either `dir()` method which provides the directory
with `setup.yaml` and `teardown.yaml` files
Or class variables `setup_files` and `teardown_files` that provide
the list of setup and teardown files respectively
"""
print ("per_class_tests_db_state_new")
yield from db_state_context_new(request, hge_ctx)
@pytest.fixture(scope='function')
def per_method_tests_db_state(request, hge_ctx):
"""
@ -390,6 +417,12 @@ def db_state_context(request, hge_ctx):
'teardown.yaml', True
)
def db_state_context_new(request, hge_ctx):
yield from db_context_with_schema_common_new (
request, hge_ctx, 'setup_files', 'setup.yaml', 'teardown_files',
'teardown.yaml', 'sql_schema_setup.yaml', 'sql_schema_teardown.yaml', True
)
def db_context_with_schema_common(
request, hge_ctx, setup_files_attr, setup_default_file,
teardown_files_attr, teardown_default_file, check_file_exists=True):
@ -403,6 +436,19 @@ def db_context_with_schema_common(
check_file_exists, skip_setup, skip_teardown
)
def db_context_with_schema_common_new (
request, hge_ctx, setup_files_attr, setup_default_file,
teardown_files_attr, teardown_default_file, setup_sql_file, teardown_sql_file, check_file_exists=True):
(skip_setup, skip_teardown) = [
request.config.getoption('--' + x)
for x in ['skip-schema-setup', 'skip-schema-teardown']
]
yield from db_context_common_new (
request, hge_ctx, setup_files_attr, setup_default_file, setup_sql_file,
teardown_files_attr, teardown_default_file, teardown_sql_file,
check_file_exists, skip_setup, skip_teardown
)
def db_context_common(
request, hge_ctx, setup_files_attr, setup_default_file,
teardown_files_attr, teardown_default_file,
@ -414,9 +460,26 @@ def db_context_common(
return files
setup = get_files(setup_files_attr, setup_default_file)
teardown = get_files(teardown_files_attr, teardown_default_file)
yield from setup_and_teardown(request, hge_ctx, setup, teardown, check_file_exists, skip_setup, skip_teardown)
yield from setup_and_teardown_v1q(request, hge_ctx, setup, teardown, check_file_exists, skip_setup, skip_teardown)
def setup_and_teardown(request, hge_ctx, setup_files, teardown_files, check_file_exists=True, skip_setup=False, skip_teardown=False):
def db_context_common_new(
request, hge_ctx, setup_files_attr, setup_default_file,
setup_default_sql_file,
teardown_files_attr, teardown_default_file, teardown_default_sql_file,
check_file_exists=True, skip_setup=True, skip_teardown=True ):
def get_files(attr, default_file):
files = getattr(request.cls, attr, None)
if not files:
files = os.path.join(request.cls.dir(), default_file)
return files
setup = get_files(setup_files_attr, setup_default_file)
teardown = get_files(teardown_files_attr, teardown_default_file)
setup_default_sql_file = os.path.join(request.cls.dir(), setup_default_sql_file)
teardown_default_sql_file = os.path.join(request.cls.dir(), teardown_default_sql_file)
yield from setup_and_teardown(request, hge_ctx, setup, teardown,
setup_default_sql_file, teardown_default_sql_file, check_file_exists, skip_setup, skip_teardown)
def setup_and_teardown_v1q(request, hge_ctx, setup_files, teardown_files, check_file_exists=True, skip_setup=False, skip_teardown=False):
def assert_file_exists(f):
assert os.path.isfile(f), 'Could not find file ' + f
if check_file_exists:
@ -433,6 +496,34 @@ def setup_and_teardown(request, hge_ctx, setup_files, teardown_files, check_file
if request.session.testsfailed > 0 or not skip_teardown:
run_on_elem_or_list(v1q_f, teardown_files)
def setup_and_teardown(request, hge_ctx, setup_files, teardown_files,
sql_schema_setup_file,sql_schema_teardown_file,
check_file_exists=True, skip_setup=False, skip_teardown=False):
def assert_file_exists(f):
assert os.path.isfile(f), 'Could not find file ' + f
if check_file_exists:
for o in [setup_files, teardown_files, sql_schema_setup_file, sql_schema_teardown_file]:
run_on_elem_or_list(assert_file_exists, o)
def v2q_f(f):
if os.path.isfile(f):
st_code, resp = hge_ctx.v2q_f(f)
assert st_code == 200, resp
def metadataq_f(f):
if os.path.isfile(f):
st_code, resp = hge_ctx.v1metadataq_f(f)
if st_code != 200:
# drop the sql setup, if the metadata calls fail
run_on_elem_or_list(v2q_f, sql_schema_teardown_file)
assert st_code == 200, resp
if not skip_setup:
run_on_elem_or_list(v2q_f, sql_schema_setup_file)
run_on_elem_or_list(metadataq_f, setup_files)
yield
# Teardown anyway if any of the tests have failed
if request.session.testsfailed > 0 or not skip_teardown:
run_on_elem_or_list(metadataq_f, teardown_files)
run_on_elem_or_list(v2q_f, sql_schema_teardown_file)
def run_on_elem_or_list(f, x):
if isinstance(x, str):
return [f(x)]
@ -450,3 +541,8 @@ def is_master(config):
node or not running xdist at all.
"""
return not hasattr(config, 'slaveinput')
use_inherited_roles_fixtures = pytest.mark.usefixtures(
"inherited_role_fixtures",
"per_class_tests_db_state_new"
)

View File

@ -482,6 +482,7 @@ class HGECtx:
self.hge_scale_url = config.getoption('--test-hge-scale-url')
self.avoid_err_msg_checks = config.getoption('--avoid-error-message-checks')
self.inherited_roles_tests = config.getoption('--test-inherited-roles')
self.ws_client = GQLWsClient(self, '/v1/graphql')
@ -576,6 +577,15 @@ class HGECtx:
yml = yaml.YAML()
return self.v1q(yml.load(f))
def v2q(self, q, headers = {}):
return self.execute_query(q, "/v2/query", headers)
def v2q_f(self, fn):
with open(fn) as f:
# NOTE: preserve ordering with ruamel
yml = yaml.YAML()
return self.v2q(yml.load(f))
def v1metadataq(self, q, headers = {}):
return self.execute_query(q, "/v1/metadata", headers)

View File

@ -0,0 +1,19 @@
description: Mutations are not exposed to an inherited role
url: /v1/graphql
headers:
X-Hasura-Role: inherited_user
X-Hasura-User-Id: '3'
status: 200
query:
query: |
mutation {
insert_authors(objects: {name: "J.K.Rowling", phone: "1122334455"}) {
affected_rows
}
}
response:
errors:
- extensions:
path: $
code: validation-failed
message: no mutations exist

View File

@ -0,0 +1,34 @@
type: bulk
args:
- type: pg_track_table
args:
table: authors
- type: pg_create_insert_permission
args:
table: authors
role: user
permission:
set:
id: X-Hasura-User-Id
check: {}
columns: '*'
# same permission as `user` role
- type: pg_create_insert_permission
args:
table: authors
role: user1
permission:
set:
id: X-Hasura-User-Id
check: {}
columns: '*'
- type: add_inherited_role
args:
role_name: inherited_user
role_set:
- user
- user1

View File

@ -0,0 +1,12 @@
type: bulk
args:
- type: run_sql
args:
sql: |
CREATE TABLE authors (
id serial primary key,
name text,
phone text,
email text
);

View File

@ -0,0 +1,7 @@
type: bulk
args:
- type: run_sql
args:
sql: |
DROP table authors;

View File

@ -0,0 +1,3 @@
type: drop_inherited_role
args:
role_name: inherited_user

View File

@ -0,0 +1,189 @@
- description: column not accessible to any of the roles should not be accessible to the inherited role
url: /v1/graphql
status: 200
headers:
X-Hasura-Role: manager_employee
X-Hasura-User-Id: '1'
X-Hasura-Manager-Id: '2'
response:
errors:
- extensions:
path: "$.selectionSet.employee.selectionSet.hr_remarks"
code: validation-failed
message: "field \"hr_remarks\" not found in type: 'employee'"
query:
query: |
query {
employee {
name
title
manager_remarks
hr_remarks
}
}
- description: check that when a role doesn't have access to a certain column in a row, it is returned as `null`
url: /v1/graphql
status: 200
headers:
X-Hasura-Role: manager_employee
X-Hasura-User-Id: '1'
X-Hasura-Manager-Id: '1'
response:
data:
employee:
# All the roles have access to all the columns that
# are queried, hence there are no null values
- id: 1
name: employee 1
title: Sales representative
salary: 10000
manager_remarks: good
# Only the manager role has access to this row
# and the manager can only access the name, title
# and the manager_remarks fields
- id: null
name: employee 2
title: Sales representative
salary: null
manager_remarks: good
query:
query: |
query {
employee {
id
name
title
salary
manager_remarks
}
}
- description: Check if appropriate permissions are applied to the relationship field as well
url: /v1/graphql
status: 200
headers:
X-Hasura-Role: manager_employee
X-Hasura-User-Id: '1'
X-Hasura-Manager-Id: '2'
response:
data:
employee:
- name: employee 1
title: Sales representative
salary: 10000
manager_remarks: null
manager: null
- name: employee 4
title: Sales representative
salary: null
manager_remarks: hard working
manager:
id: 2
name: sales manager 2
phone: '9977223344'
address: buena vista
query:
# The `address` field in the `manager` relationship is not accessible to the employee role
# but is accessible to the manager's role and the "employee 4"'s manager
# happens to be the manager with id = 2 (x-hasura-manager-id), hence
# the `address` column is accessible to the inherited role
query: |
query {
employee {
name
title
salary
manager_remarks
manager {
id
name
phone
address
}
}
}
- description: Allow aggregation for the inherited role if at least one of the parent roles allows aggregation
url: /v1/graphql
status: 200
headers:
X-Hasura-Role: manager_employee
X-Hasura-Manager-Id: '2'
X-Hasura-User-Id: '1'
response:
data:
employee_aggregate:
aggregate:
count: 2
sum:
salary: 25430
query:
query: |
query {
employee_aggregate {
aggregate {
count
sum {
salary
}
}
}
}
- description: computed fields should only be accessible to the inherited role for the rows it has access to
url: /v1/graphql
status: 200
headers:
X-Hasura-Role: manager_employee
X-Hasura-Manager-Id: '2'
X-Hasura-User-Id: '1'
response:
data:
employee:
- salary: 10000
yearly_salary: 120000
# The manager role doesn't have access to the yearly_salary
# computed field, so we return a null here
- salary: null
yearly_salary: null
query:
query: |
query {
employee {
salary
yearly_salary
}
}
- description:
query multiple tables using an inherited role, even when one of the roles doesn't have select permission
on one of the tables
url: /v1/graphql
status: 200
headers:
X-Hasura-Role: manager_employee
X-Hasura-Manager-Id: '2'
X-Hasura-User-Id: '1'
response:
data:
employee:
- name: employee 1
salary: 10000
- name: employee 4
salary: null
manager:
- team_id: 1
name: sales manager 2
query:
query: |
query {
employee {
name
salary
}
manager {
team_id
name
}
}
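
The null values these tests expect can be reproduced with a small Python model of the row/column logic: a row is visible if any parent role's filter matches it, and a selected column is nullified on rows where no column-granting role's filter matches. All names and predicates below are illustrative, mirroring the `manager_employee` setup (user id 1, manager id 1):

```python
# Hedged sketch of per-row column nullification for an inherited role.

def run_inherited_select(rows, roles, selected):
    """`roles` is a list of (accessible_columns, row_predicate) pairs,
    one per parent role."""
    out = []
    for row in rows:
        matching = [cols for cols, pred in roles if pred(row)]
        if not matching:
            continue  # row filter: disjunction of the parents' filters
        visible = set().union(*matching)
        # nullify columns not granted by any role whose filter matched this row
        out.append({c: (row[c] if c in visible else None) for c in selected})
    return out

employees = [
    {"id": 1, "name": "employee 1", "salary": 10000, "manager_id": 1,
     "manager_remarks": "good"},
    {"id": 2, "name": "employee 2", "salary": 14000, "manager_id": 1,
     "manager_remarks": "good"},
]
# employee role: own row only; manager role: own reports only
employee_role = ({"id", "name", "salary"}, lambda r: r["id"] == 1)
manager_role = ({"name", "manager_remarks"}, lambda r: r["manager_id"] == 1)
result = run_inherited_select(
    employees, [employee_role, manager_role],
    ["id", "salary", "manager_remarks"])
```

As in the second test case above, the second row is visible only through the manager role, so `id` and `salary` come back as `null` while `manager_remarks` is preserved.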

View File

@ -0,0 +1,102 @@
type: bulk
args:
- type: pg_track_table
args:
table: employee
- type: pg_track_table
args:
table: team
- type: pg_track_table
args:
table: manager
#Object relationship
- type: pg_create_object_relationship
args:
name: manager
table: employee
using:
foreign_key_constraint_on: manager_id
#Array relationship
- type: pg_create_array_relationship
args:
table: manager
name: employees
using:
foreign_key_constraint_on:
table: employee
column: manager_id
- type: pg_add_computed_field
args:
table: employee
name: yearly_salary
definition:
function: employee_yearly_salary
table_argument: employee_row
- type: pg_create_select_permission
args:
table: employee
role: employee
permission:
columns:
- id
- name
- salary
- title
- team_id
computed_fields:
- yearly_salary
filter:
id:
_eq: X-Hasura-User-Id
- type: pg_create_select_permission
args:
table: employee
role: manager
permission:
columns:
- name
- title
- manager_remarks
filter:
manager_id:
_eq: X-Hasura-Manager-Id
allow_aggregations: true
- type: pg_create_select_permission
args:
table: manager
role: manager
permission:
columns: '*'
allow_aggregations: true
filter:
id: X-Hasura-Manager-Id
- type: pg_create_select_permission
args:
table: manager
role: employee
permission:
columns:
- name
- phone
allow_aggregations: true
filter:
employees:
manager_id:
_eq: X-Hasura-Manager-Id
- type: add_inherited_role
args:
role_name: manager_employee
role_set:
- manager
- employee

View File

@ -0,0 +1,45 @@
type: run_sql
args:
  sql: |
    CREATE TABLE team (
      id serial primary key,
      name text
    );
    CREATE TABLE manager (
      id serial primary key,
      team_id int references team(id),
      name text,
      phone text,
      address text
    );
    CREATE TABLE employee (
      id SERIAL PRIMARY KEY,
      name text not null,
      salary int not null,
      title text,
      manager_id int references manager(id),
      team_id int references team(id),
      manager_remarks text,
      hr_remarks text
    );
    insert into team (name) values ('sales'),('HR');
    insert into manager (team_id, name, phone, address) values
      (1, 'sales manager 1', '9988998822', 'basant bihar'),
      (1, 'sales manager 2', '9977223344', 'buena vista'),
      (2, 'HR manager 1', '1122334455', 'MG Road');
    insert into employee (name, salary, title, manager_id, team_id, manager_remarks, hr_remarks)
    values
      ('employee 1', 10000, 'Sales representative', 1, 1, 'good', 'good'),
      ('employee 2', 14000, 'Sales representative', 1, 1, 'good', 'good'),
      ('employee 3', 12400, 'HR trainee', 3, 2, 'hard working', 'hard working'),
      ('employee 4', 15430, 'Sales representative', 2, 1, 'hard working', 'hard working');
    CREATE FUNCTION employee_yearly_salary(employee_row employee)
    RETURNS INT AS $$
      SELECT employee_row.salary * 12
    $$ LANGUAGE sql STABLE;
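Using the seeded rows above, the CASE/WHEN translation described in this PR can be emulated in plain Python to check one's intuition (a sketch only; the session-variable values `X-Hasura-User-Id = 1` and `X-Hasura-Manager-Id = 1` are assumed for illustration):

```python
# Column sets from the select permissions defined in this PR's test setup.
EMPLOYEE_COLS = {"id", "name", "salary", "title", "team_id"}
MANAGER_COLS = {"name", "title", "manager_remarks"}

def visible_row(row, user_id, manager_id):
    """Return the employee row as the inherited manager_employee role
    would see it, or None when neither parent role's filter matches."""
    as_employee = row["id"] == user_id            # employee: id = X-Hasura-User-Id
    as_manager = row["manager_id"] == manager_id  # manager: manager_id = X-Hasura-Manager-Id
    if not (as_employee or as_manager):
        return None  # row excluded by (P1 OR P2)
    out = {}
    for col in EMPLOYEE_COLS | MANAGER_COLS:
        # CASE WHEN <filters of roles exposing col> THEN col ELSE NULL END
        ok = (as_employee and col in EMPLOYEE_COLS) or (as_manager and col in MANAGER_COLS)
        out[col] = row[col] if ok else None
    return out
```

With those session variables, 'employee 1' is visible through both parent roles (all columns populated), 'employee 2' only through the manager role (`salary` comes back NULL), and 'employee 3' through neither.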

@ -0,0 +1,5 @@
type: run_sql
args:
  cascade: true
  sql: |
    DROP

@@ -0,0 +1,7 @@
type: run_sql
args:
  cascade: true
  sql: |
    DROP TABLE employee CASCADE;
    DROP TABLE manager;
    DROP TABLE team;

@@ -0,0 +1,3 @@
type: drop_inherited_role
args:
  role_name: manager_employee
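The `add_inherited_role`/`drop_inherited_role` commands above are ordinary metadata API calls; a hedged sketch of building such a request from Python follows (the endpoint path and admin-secret header match graphql-engine's usual `/v1/metadata` conventions, but the URL and secret here are placeholders, not part of this PR):

```python
import json
from urllib.request import Request, urlopen  # urlopen(req) would send it

def metadata_request(command, url="http://localhost:8080/v1/metadata",
                     admin_secret=None):
    """Build a POST request carrying one metadata command.
    URL and admin secret are illustrative placeholders."""
    headers = {"Content-Type": "application/json"}
    if admin_secret:
        headers["X-Hasura-Admin-Secret"] = admin_secret
    return Request(url, data=json.dumps(command).encode(), headers=headers)

add_cmd = {
    "type": "add_inherited_role",
    "args": {"role_name": "manager_employee",
             "role_set": ["manager", "employee"]},
}
drop_cmd = {
    "type": "drop_inherited_role",
    "args": {"role_name": "manager_employee"},
}
req = metadata_request(add_cmd)
```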

@@ -1,5 +1,6 @@
import pytest
from validate import check_query_f, check_query, get_conf_f
from conftest import use_inherited_roles_fixtures

# Mark that all tests in this module can be run as server upgrade tests
@@ -640,3 +641,17 @@ class TestGraphQLMutationFunctions:
        check_query_f(hge_ctx, self.dir() + '/single_row_function_as_mutation.yaml', transport)
        st_code, resp = hge_ctx.v1metadataq_f(self.dir() + '/drop_function_permission_add_to_score_by_user_id.yaml')
        assert st_code == 200, resp


@pytest.mark.parametrize('transport', ['http', 'websocket'])
@use_inherited_roles_fixtures
class TestGraphQLInheritedRoles:

    @classmethod
    def dir(cls):
        return 'queries/graphql_mutation/insert/permissions/inherited_roles'

    # This test exists as a sanity check that mutations aren't exposed
    # to an inherited role. When mutations are supported for everything, this test
    # should be removed/modified.
    def test_mutations_not_exposed_for_inherited_roles(self, hge_ctx, transport):
        check_query_f(hge_ctx, self.dir() + '/mutation_not_exposed_to_inherited_roles.yaml')

@@ -1,5 +1,6 @@
import pytest
from validate import check_query_f, check_query, get_conf_f
from conftest import use_inherited_roles_fixtures
from context import PytestConf

# Mark that all tests in this module can be run as server upgrade tests
@@ -354,6 +355,17 @@ class TestGraphqlQueryPermissions:
    def dir(cls):
        return 'queries/graphql_query/permissions'


@pytest.mark.parametrize('transport', ['http', 'websocket'])
@use_inherited_roles_fixtures
class TestGraphQLInheritedRoles:

    @classmethod
    def dir(cls):
        return 'queries/graphql_query/permissions/inherited_roles'

    def test_basic_inherited_role(self, hge_ctx, transport):
        check_query_f(hge_ctx, self.dir() + '/basic_inherited_roles.yaml')


@pytest.mark.parametrize("transport", ['http', 'websocket'])
@usefixtures('per_class_tests_db_state')