graphql-engine/server/src-lib/Hasura/GraphQL/Parser/Class.hs
Karthikeyan Chinnakonda 92026b769f [Preview] Inherited roles for postgres read queries
fixes #3868

docker image - `hasura/graphql-engine:inherited-roles-preview-48b73a2de`

Note:

To use the inherited roles feature, the graphql-engine must be started with the environment variable `HASURA_GRAPHQL_EXPERIMENTAL_FEATURES` set to `inherited_roles`.

Introduction
------------

This PR implements the idea of multiple roles as presented in this [paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/FGALanguageICDE07.pdf). The multiple roles feature in this PR is exposed via inherited roles. An inherited role is a role created by combining multiple singular roles. For example, if two roles `author` and `editor` are configured in the graphql-engine, we can create an inherited role named `combined_author_editor` that combines the select permissions of the `author` and `editor` roles, and then make GraphQL queries using `combined_author_editor`.

How are the select permissions of different roles combined?
------------------------------------------------------------

A select permission includes five things (sketched in code after this list):

1. Columns accessible to the role
2. Row selection filter
3. Limit
4. Allow aggregation
5. Scalar computed fields accessible to the role
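
Viewed as data, a select permission can be pictured as a small record. The following is a minimal Haskell sketch for illustration only; the type and field names are hypothetical, not the engine's actual ones:

```haskell
-- Illustrative stand-in for a boolean row-filter expression (the P1, P2
-- used below); not the engine's actual type.
data BoolExpr = BoolVar String | BoolOr BoolExpr BoolExpr

-- A hypothetical shape for a select permission, mirroring the five parts above.
data SelectPermission = SelectPermission
  { spColumns        :: [String]  -- 1. columns accessible to the role
  , spFilter         :: BoolExpr  -- 2. row selection filter
  , spLimit          :: Maybe Int -- 3. optional row limit (Nothing = no limit)
  , spAllowAggregate :: Bool      -- 4. whether aggregation is allowed
  , spComputedFields :: [String]  -- 5. accessible scalar computed fields
  }
```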

Suppose there are two roles: `role1` gives access to the `address` column with row filter `P1`, and `role2` gives access to both the `address` and `phone` columns with row filter `P2`. We create a new role `combined_roles` that combines `role1` and `role2`.

Let's say the following GraphQL query is executed with the `combined_roles` role.

```graphql
query {
   employees {
     address
     phone
   }
}
```

This will translate to the following SQL query:

```sql
select
  (case when (P1 or P2) then address else null end) as address,
  (case when P2 then phone else null end) as phone
from employees
where (P1 or P2)
```

The other parameters of the select permission are combined in the following manner (a code sketch follows this list):

1. Limit - the minimum of the individual roles' limits becomes the limit of the inherited role
2. Allow aggregations - if any of the roles allows aggregation, the inherited role allows aggregation
3. Scalar computed fields - combined in the same way as table columns, as in the example above
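
Continuing the hypothetical sketch above, the combination of two select permissions might be written as follows. This is an illustration of the rules just stated, not the engine's actual code; treating a missing limit as "no limit" is my reading of rule 1:

```haskell
import Control.Applicative ((<|>))
import Data.List (union)

combinePermissions :: SelectPermission -> SelectPermission -> SelectPermission
combinePermissions a b = SelectPermission
  { spColumns        = spColumns a `union` spColumns b   -- union of accessible columns
  , spFilter         = BoolOr (spFilter a) (spFilter b)  -- row filters are OR'ed: (P1 or P2)
  , spLimit          = case (spLimit a, spLimit b) of
      (Just x, Just y) -> Just (min x y)                 -- minimum of the limits
      (x, y)           -> x <|> y                        -- a lone limit wins; Nothing means no limit
  , spAllowAggregate = spAllowAggregate a || spAllowAggregate b -- any role allowing wins
  , spComputedFields = spComputedFields a `union` spComputedFields b -- same rule as columns
  }
```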

APIs for inherited roles
----------------------

1. `add_inherited_role`

`add_inherited_role` is the [metadata API](https://hasura.io/docs/1.0/graphql/core/api-reference/index.html#schema-metadata-api) to create a new inherited role. It accepts two arguments:

- `role_name`: the name of the inherited role to be added (String)
- `role_set`: the list of roles to be combined (Array of Strings)

Example:

```json
{
  "type": "add_inherited_role",
  "args": {
      "role_name":"combined_user",
      "role_set":[
          "user",
          "user1"
      ]
  }
}
```

Once added, an inherited role can be used just like any singular role.

Note:

An inherited role can only be created from non-inherited (singular) roles.

2. `drop_inherited_role`

The `drop_inherited_role` API drops the given inherited role from the metadata. It accepts a single argument:

- `role_name`: the name of the inherited role to be dropped

Example:

```json
{
  "type": "drop_inherited_role",
  "args": {
      "role_name":"combined_user"
  }
}
```

Metadata
---------

The inherited roles metadata is exported under the `derived_roles` key inside `experimental_features`.

```json
{
  "experimental_features": {
    "derived_roles": [
      {
        "role_name": "manager_is_employee_too",
        "role_set": [
          "employee",
          "manager"
        ]
      }
    ]
  }
}
```

Scope
------

Only postgres queries and subscriptions are supported in this PR.

Important points
-----------------

1. All columns exposed to an inherited role are marked as `nullable`, so that cell value nullification can be done (see the sketch below).
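
For intuition, the nullification in the SQL example above can be pictured as wrapping each column in a CASE expression. A minimal sketch, reusing the hypothetical `BoolExpr` from earlier (again, not the engine's actual SQL AST):

```haskell
-- A toy SQL expression type, for illustration only.
data SQLExpr
  = SQLColumn String                 -- a column reference
  | SQLNull                          -- the NULL literal
  | SQLCase BoolExpr SQLExpr SQLExpr -- CASE WHEN cond THEN e1 ELSE e2 END

-- Only yield the cell's value when the role's filter passes; otherwise NULL.
-- Because the ELSE branch can produce NULL, the column has to be exposed as
-- nullable in the GraphQL schema even when it is NOT NULL in the database.
nullifyCell :: BoolExpr -> String -> SQLExpr
nullifyCell perm col = SQLCase perm (SQLColumn col) SQLNull
```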

TODOs
-------

- [ ] Tests
   - [ ] Test a GraphQL query running with an inherited role without enabling inherited roles in experimental features
   - [ ] Tests for aggregate queries, limit, computed fields, functions, subscriptions (?)
   - [ ] Introspection test with an inherited role (nullability changes in an inherited role)
- [ ] Docs
- [ ] Changelog

Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
GitOrigin-RevId: 3b8ee1e11f5ceca80fe294f8c074d42fbccfec63
2021-03-08 11:15:10 +00:00

-- | Classes for monads used during schema construction and query parsing.
module Hasura.GraphQL.Parser.Class
  ( MonadParse(..)
  , parseError
  , QueryReusability(..)
  , module Hasura.GraphQL.Parser.Class
  ) where

import Hasura.Prelude

import qualified Data.HashMap.Strict as Map
import qualified Language.Haskell.TH as TH

import Data.Has
import Data.Text.Extended
import Data.Tuple.Extended
import GHC.Stack (HasCallStack)
import Type.Reflection (Typeable)

import Hasura.GraphQL.Parser.Class.Parse
import Hasura.GraphQL.Parser.Internal.Types
import Hasura.GraphQL.Parser.Schema (HasDefinition)
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.Error
import Hasura.RQL.Types.Source
import Hasura.RQL.Types.Table
-- import Hasura.SQL.Backend
import Hasura.Session (RoleName)

{- Note [Tying the knot]
~~~~~~~~~~~~~~~~~~~~~~~~
GraphQL type definitions can be mutually recursive, and indeed, they quite often
are! For example, two tables that reference one another will be represented by
types such as the following:

  type author {
    id: Int!
    name: String!
    articles: [article!]!
  }

  type article {
    id: Int!
    title: String!
    content: String!
    author: author!
  }

This doesn't cause any trouble if the schema is represented by a mapping from
type names to type definitions, but the Parser abstraction is all about avoiding
that kind of indirection to improve type safety — parsers refer to their
sub-parsers directly. This presents two problems during schema generation:

  1. Schema generation needs to terminate in finite time, so we need to ensure
     we don't try to eagerly construct an infinitely-large schema due to the
     mutually-recursive structure.

  2. To serve introspection queries, we do eventually need to construct a
     mapping from names to types (a TypeMap), so we need to be able to
     recursively walk the entire schema in finite time.

Solving point number 1 could be done with either laziness or sharing, but
neither of those are enough to solve point number 2, which requires /observable/
sharing. We need to construct a Parser graph that contains enough information to
detect cycles during traversal.

It may seem appealing to just use type names to detect cycles, which would allow
us to get away with using laziness rather than true sharing. Unfortunately, that
leads to two further problems:

  * It's possible to end up with two different types with the same name, which
    is an error and should be reported as such. Using names to break cycles
    prevents us from doing that, since we have no way to check that two types
    with the same name are actually the same.

  * Some Parser constructors can fail — the `column` parser checks that the type
    name is a valid GraphQL name, for example. This extra validation means lazy
    schema construction isn't viable, since we need to eagerly build the schema
    to ensure all the validation checks hold.

So we're forced to use sharing. But how do we do it? Somehow, we have to /tie
the knot/ — we have to build a cyclic data structure — and some of the cycles
may be quite large. Doing all this knot-tying by hand would be incredibly
tricky, and it would require a lot of inversion of control to thread the shared
parsers around.

To avoid contorting the program, we instead implement a form of memoization. The
MonadSchema class provides a mechanism to memoize a parser constructor function,
which allows us to get sharing mostly for free. The memoization strategy also
annotates cached parsers with a Unique that can be used to break cycles while
traversing the graph, so we get observable sharing as well. -}

-- | A class that provides functionality used when building the GraphQL schema,
-- i.e. constructing the 'Parser' graph.
class (Monad m, MonadParse n) => MonadSchema n m | m -> n where
  -- | Memoizes a parser constructor function for the extent of a single schema
  -- construction process. This is mostly useful for recursive parsers;
  -- see Note [Tying the knot] for more details.
  --
  -- The generality of the type here allows us to use this with multiple
  -- concrete parser types:
  --
  -- @
  -- 'memoizeOn' :: 'MonadSchema' n m => 'TH.Name' -> a -> m (Parser n b) -> m (Parser n b)
  -- 'memoizeOn' :: 'MonadSchema' n m => 'TH.Name' -> a -> m (FieldParser n b) -> m (FieldParser n b)
  -- @
  memoizeOn
    :: forall p d a b
     . (HasCallStack, HasDefinition (p n b) d, Ord a, Typeable p, Typeable a, Typeable b)
    => TH.Name
    -- ^ A unique name used to identify the function being memoized. There isn't
    -- really any metaprogramming going on here, we just use a Template Haskell
    -- 'TH.Name' as a convenient source for a static, unique identifier.
    -> a
    -- ^ The value to use as the memoization key. It's the caller's
    -- responsibility to ensure multiple calls to the same function don't use
    -- the same key.
    -> m (p n b) -> m (p n b)
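
-- As an illustration (the parser names below are hypothetical, not functions
-- defined in this codebase), the mutually recursive @author@ and @article@
-- types from Note [Tying the knot] would be built along these lines:
--
-- > authorParser :: MonadSchema n m => m (Parser 'Output n a)
-- > authorParser = memoizeOn 'authorParser () $ do
-- >   articles <- articleParser -- recursive reference to the sibling parser
-- >   ...                       -- build the selection set around it
--
-- The first call caches the parser being constructed under the key
-- @('authorParser, ())@, so the recursive call made while building
-- 'articleParser' finds the cached entry instead of recursing forever.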

type MonadRole r m = (MonadReader r m, Has RoleName r)

-- | Gets the current role the schema is being built for.
askRoleName
  :: MonadRole r m
  => m RoleName
askRoleName = asks getter

type MonadTableInfo r m = (MonadReader r m, Has SourceCache r, MonadError QErr m)

-- | Looks up table information for the given table name. This function
-- should never fail, since the schema cache construction process is
-- supposed to ensure all dependencies are resolved.
askTableInfo
  :: forall b r m. (Backend b, MonadTableInfo r m)
  => TableName b
  -> m (TableInfo b)
askTableInfo tableName = do
  tableInfo <- asks $ getTableInfo . getter
  -- This should never fail, since the schema cache construction process is
  -- supposed to ensure that all dependencies are resolved.
  tableInfo `onNothing` throw500 ("askTableInfo: no info for " <>> tableName)
  where
    getTableInfo :: SourceCache -> Maybe (TableInfo b)
    getTableInfo sc = Map.lookup tableName $ Map.unions $ mapMaybe unsafeSourceTables $ Map.elems sc

-- | A wrapper around 'memoizeOn' that memoizes a function by using its
-- argument as the key.
memoize
  :: (HasCallStack, MonadSchema n m, Ord a, Typeable a, Typeable b, Typeable k)
  => TH.Name
  -> (a -> m (Parser k n b))
  -> (a -> m (Parser k n b))
memoize name f a = memoizeOn name a (f a)

memoize2
  :: (HasCallStack, MonadSchema n m, Ord a, Ord b, Typeable a, Typeable b, Typeable c, Typeable k)
  => TH.Name
  -> (a -> b -> m (Parser k n c))
  -> (a -> b -> m (Parser k n c))
memoize2 name = curry . memoize name . uncurry

memoize3
  :: ( HasCallStack, MonadSchema n m, Ord a, Ord b, Ord c
     , Typeable a, Typeable b, Typeable c, Typeable d, Typeable k )
  => TH.Name
  -> (a -> b -> c -> m (Parser k n d))
  -> (a -> b -> c -> m (Parser k n d))
memoize3 name = curry3 . memoize name . uncurry3

memoize4
  :: ( HasCallStack, MonadSchema n m, Ord a, Ord b, Ord c, Ord d
     , Typeable a, Typeable b, Typeable c, Typeable d, Typeable e, Typeable k )
  => TH.Name
  -> (a -> b -> c -> d -> m (Parser k n e))
  -> (a -> b -> c -> d -> m (Parser k n e))
memoize4 name = curry4 . memoize name . uncurry4
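
-- As a hypothetical usage example, a parser that depends on both a source name
-- and a table name could be memoized on the pair of arguments:
--
-- > tableQueryParser = memoize2 'tableQueryParser $ \sourceName tableName -> ...
--
-- Each helper simply uncurries the function so that the argument tuple becomes
-- the 'memoizeOn' key, then curries the memoized function back.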