This commit applies ormolu to the whole Haskell code base by running `make format`.
For in-flight branches, simply merging changes from `main` will result in merge conflicts.
To avoid this, update your branch using the following instructions. Replace `<format-commit>`
by the hash of *this* commit.
$ git checkout my-feature-branch
$ git merge <format-commit>^ # and resolve conflicts normally
$ make format
$ git commit -a -m "reformat with ormolu"
$ git merge -s ours post-ormolu
https://github.com/hasura/graphql-engine-mono/pull/2404
GitOrigin-RevId: 75049f5c12f430c615eafb4c6b8e83e371e01c8e
### Description
We always build a subscription root, even when there are no possible fields. This breaks some third-party clients, as the spec does not allow empty types in the schema. This PR fixes this by changing the `buildSubscriptionParser` helper to return a `Maybe` value, and harmonizes / cleans up the places where we build the subscription root.
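As a minimal sketch of the idea, with invented, simplified stand-in types (the real `buildSubscriptionParser` operates on schema parsers, not these stand-ins): returning `Nothing` when there are no fields means no empty `subscription_root` type ever ends up in the schema.
```haskell
-- Hypothetical stand-ins for illustration only; not the engine's actual types.
data FieldDefinition = FieldDefinition String
data ObjectType = ObjectType String [FieldDefinition]

-- Only build a subscription root when there is at least one field.
buildSubscriptionRoot :: [FieldDefinition] -> Maybe ObjectType
buildSubscriptionRoot []     = Nothing
buildSubscriptionRoot fields = Just (ObjectType "subscription_root" fields)
```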
https://github.com/hasura/graphql-engine-mono/pull/2357
GitOrigin-RevId: 1aeae25e321eee957e7645c436d17e69207309fd
GJ IR changes cherry-picked from the original GJ branch. There is a separate PR for metadata changes (#1727), which can be merged independently, and there will be an upcoming PR for execution changes.
https://github.com/hasura/graphql-engine-mono/pull/1810
Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
GitOrigin-RevId: c31956af29dc9c9b75d002aba7d93c230697c5f4
### Description
This PR adds the required IR for DB to DB joins, based on @paf31 and @0x777 's `feature/db-to-db` branch.
To do so, it also refactors the IR to introduce a new type parameter, `r`, which is used to recursively construct the `v` parameter of remote QueryDBs. When collecting remote joins, we replace `r` with `Const Void`, indicating at the type level that there cannot be any leftover remote join.
Furthermore, this PR refactors IR.Select for readability, moves some code from IR.Root to IR.Select to avoid having to deal with circular dependencies, and makes it compile by adding `error` in all new cases in the execution pipeline.
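As a rough illustration of the technique (with invented types, not the engine's actual IR): the `r` parameter marks where a remote selection may still appear, so instantiating it with `Const Void` proves at the type level that no remote joins remain after collection.
```haskell
import Data.Functor.Const (Const (..))
import Data.Void (Void, absurd)

-- Hypothetical stand-ins for illustration only.
data FieldIR r v
  = ColumnField String
  | RemoteField (r v) -- a remote join, parameterised by `r`

newtype RemoteSelect v = RemoteSelect [v]

-- Before collection, remote joins are representable:
type FieldWithRemotes v = FieldIR RemoteSelect v

-- After collection, `Const Void` rules them out:
type FieldNoRemotes v = FieldIR (Const Void) v

-- Consumers of the post-collection IR can dismiss the remote case entirely:
fieldName :: FieldNoRemotes v -> String
fieldName (ColumnField name)      = name
fieldName (RemoteField (Const v)) = absurd v
```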
The diff doesn't make it clear, but most of Select.hs is actually unchanged. Declarations have just been reordered by topic, in the following order:
- type declarations
- instance declarations
- type aliases
- constructor functions
- traverse functions
https://github.com/hasura/graphql-engine-mono/pull/1580
Co-authored-by: Phil Freeman <630306+paf31@users.noreply.github.com>
GitOrigin-RevId: bbdcb4119cec8bb3fc32f1294f91b8dea0728721
Remote relationships are now supported on SQL Server and BigQuery. The major change, though, is the re-architecture of the remote join execution logic. Prior to this PR, each backend was responsible for processing the remote relationships that are part of its AST.
This is not ideal, as there is nothing specific about a remote join's execution that ties it to a backend. The only backend-specific part is whether or not the specification of the remote relationship is valid (i.e., we need to validate whether the scalars are compatible).
The approach now changes to this (a rough code sketch follows the list):
1. Before delegating the AST to the backend, we traverse it, collecting all the remote joins and modifying the AST to add join fields where needed.
1. Once the remote joins are collected from the AST, the database call is made to fetch the response. The necessary data for the remote join(s) is collected from the database's response and one or more remote schema calls are constructed as necessary.
1. The remote schema calls are then executed and the data from the database and from the remote schemas is joined to produce the final response.
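A rough, hypothetical sketch of these three phases (all names are invented for illustration; the real pipeline spans the `Execute` and `Transport` modules):
```haskell
import Data.Aeson (Value (..))

-- Placeholder types standing in for the real IR and join bookkeeping.
data QueryAST   = QueryAST
data RemoteJoin = RemoteJoin

-- Phase 1: walk the AST, cut out remote joins, and remember where they
-- must be stitched back in (adding join columns to the query as needed).
collectRemoteJoins :: QueryAST -> (QueryAST, [RemoteJoin])
collectRemoteJoins ast = (ast, []) -- placeholder body

-- Phase 2: the backend executes the pruned AST and returns its response.
executeDatabaseQuery :: QueryAST -> IO Value
executeDatabaseQuery _ = pure Null -- placeholder body

-- Phase 3: build remote schema calls from the DB response, run them, and
-- join the remote data back into the response at the recorded join sites.
executeRemoteJoins :: [RemoteJoin] -> Value -> IO Value
executeRemoteJoins _ response = pure response -- placeholder body

runQuery :: QueryAST -> IO Value
runQuery ast = do
  let (prunedAST, joins) = collectRemoteJoins ast
  dbResponse <- executeDatabaseQuery prunedAST
  executeRemoteJoins joins dbResponse
```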
### Known issues
1. Ideally the traversal of the IR to collect remote joins should return an AST which does not include remote join fields. This operation can be type safe but isn't taken up as part of the PR.
1. There is a lot of code duplication between `Transport/HTTP.hs` and `Transport/Websocket.hs` which needs to be fixed ASAP. This too hasn't been taken up by this PR.
1. The type which represents the execution plan is only modified to handle our current remote joins and as such it will have to be changed to accommodate general remote joins.
1. Use of lenses would have reduced the boilerplate code to collect remote joins from the base AST.
1. The current remote join logic assumes that the join columns of a remote relationship appear with their names in the database response. This however is incorrect as they could be aliased. This can be taken up by anyone, I've left a comment in the code.
### Notes to the reviewers
I think it is best reviewed commit by commit.
1. The first one is very straightforward.
1. The second one refactors the remote join execution logic, but other than moving things around, it doesn't change the user-facing functionality. It moves Postgres-specific parts from `Execute` to the `Backends/Postgres` module, and some IR-related code to the `Hasura.RQL.IR` module. It also simplifies various type class function signatures, as a backend doesn't have to handle remote joins anymore.
1. The third one fixes partial case matches that, for some weird reason, weren't shown as warnings before this refactor.
1. The fourth one generalizes the validation logic of remote relationships and implements the `scalarTypeGraphQLName` function on SQL Server and BigQuery, which is used by the validation logic. This enables remote relationships on BigQuery and SQL Server.
https://github.com/hasura/graphql-engine-mono/pull/1497
GitOrigin-RevId: 77dd8eed326602b16e9a8496f52f46d22b795598
This reverts the remote schema type customisation and namespacing feature temporarily as we test for certain conditions.
GitOrigin-RevId: f8ee97233da4597f703970c3998664c03582d8e7
Fixes https://github.com/hasura/graphql-engine-mono/issues/712
Main point of interest: the `Hasura.SQL.Backend` module.
This PR creates an `Exists` type, indexed by both the underlying indexed type and the packed constraint, while hiding all of its complexity by not exporting the constructor.
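A small sketch of the general shape of such an existential wrapper (names other than `Exists` are invented; the real definition is richer):
```haskell
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE RankNTypes #-}

import Data.Kind (Constraint, Type)

-- An existential wrapper indexed by a constraint `c` and an indexed type `f`.
-- The constructor would not be exported; only the helpers below would be.
data Exists (c :: Type -> Constraint) (f :: Type -> Type) where
  Exists :: c a => f a -> Exists c f

-- Pack a value, capturing the constraint dictionary.
mkExists :: c a => f a -> Exists c f
mkExists = Exists

-- Consume the wrapped value without ever learning the hidden index.
withExists :: (forall a. c a => f a -> r) -> Exists c f -> r
withExists k (Exists x) = k x
```
With the constructor kept unexported, call sites can only pack values via helpers like `mkExists` and consume them via `withExists`, so no ad-hoc casting is needed.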
Existential constructors/types which are no longer (directly) existential:
- [x] BackendSourceInfo :: BackendSourceInfo
- [x] BackendSourceMetadata :: BackendSourceMetadata
- [x] MOSourceObjId :: MetadataObjId
- [x] SOSourceObj :: SchemaObjId
- [x] RFDB :: RootField
- [x] LQP :: LiveQueryPlan
- [x] ExecStepDB :: ExecutionStep
This PR also removes ALL usages of `Typeable.cast` from our codebase. We still need to derive `Typeable` in a few places in order to be able to derive `Data` in one place. I have not dug deeper to see why this is needed.
GitOrigin-RevId: bb47e957192e4bb0af4c4116aee7bb92f7983445
fixes #3868
docker image - `hasura/graphql-engine:inherited-roles-preview-48b73a2de`
Note:
To be able to use the inherited roles feature, the graphql-engine should be started with the env variable `HASURA_GRAPHQL_EXPERIMENTAL_FEATURES` set to `inherited_roles`.
Introduction
------------
This PR implements the idea of multiple roles as presented in this [paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/FGALanguageICDE07.pdf). The multiple roles feature in this PR can be used via inherited roles. An inherited role is a role which can be created by combining multiple singular roles. For example, if there are two roles `author` and `editor` configured in the graphql-engine, then we can create an inherited role named `combined_author_editor` which will combine the select permissions of the `author` and `editor` roles; GraphQL queries can then be made using the `combined_author_editor` role.
How are the select permissions of different roles combined?
------------------------------------------------------------
A select permission includes 5 things:
1. Columns accessible to the role
2. Row selection filter
3. Limit
4. Allow aggregation
5. Scalar computed fields accessible to the role
Suppose there are two roles: `role1` gives access to the `address` column with row filter `P1`, and `role2` gives access to both the `address` and `phone` columns with row filter `P2`. We create a new role `combined_roles` which combines `role1` and `role2`.
Let's say the following GraphQL query is queried with the `combined_roles` role.
```graphql
query {
  employees {
    address
    phone
  }
}
```
This will translate to the following SQL query:
```sql
select
  (case when (P1 or P2) then address else null end) as address,
  (case when P2 then phone else null end) as phone
from employee
where (P1 or P2)
```
The other parameters of the select permission will be combined in the following manner (a code sketch follows the list):
1. Limit - Minimum of the limits will be the limit of the inherited role
2. Allow aggregations - If any of the role allows aggregation, then the inherited role will allow aggregation
3. Scalar computed fields - same as table column fields, as in the above example
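A hedged sketch of what combining two select permissions could look like (all types and field names here are invented for illustration, not the engine's actual permission types):
```haskell
import Control.Applicative ((<|>))
import Data.Set (Set)
import qualified Data.Set as Set

-- Hypothetical stand-ins for illustration only.
data BoolExpr = Filter String | BoolOr BoolExpr BoolExpr

data SelectPermission = SelectPermission
  { permColumns  :: Set String -- columns accessible to the role
  , permFilter   :: BoolExpr   -- row selection filter
  , permLimit    :: Maybe Int  -- Nothing means "no limit"
  , permAllowAgg :: Bool       -- whether aggregations are allowed
  }

combinePermissions :: SelectPermission -> SelectPermission -> SelectPermission
combinePermissions p1 p2 =
  SelectPermission
    { permColumns  = permColumns p1 `Set.union` permColumns p2
    , permFilter   = BoolOr (permFilter p1) (permFilter p2)
    , permLimit    = case (permLimit p1, permLimit p2) of
        (Just l1, Just l2) -> Just (min l1 l2) -- minimum of the limits
        (l1, l2)           -> l1 <|> l2        -- assumption: a single limit still applies
    , permAllowAgg = permAllowAgg p1 || permAllowAgg p2
    }
```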
APIs for inherited roles:
----------------------
1. `add_inherited_role`
`add_inherited_role` is the [metadata API](https://hasura.io/docs/1.0/graphql/core/api-reference/index.html#schema-metadata-api) to create a new inherited role. It accepts two arguments:
`role_name`: the name of the inherited role to be added (String)
`role_set`: list of roles that need to be combined (Array of Strings)
Example:
```json
{
  "type": "add_inherited_role",
  "args": {
    "role_name": "combined_user",
    "role_set": [
      "user",
      "user1"
    ]
  }
}
```
After adding the inherited role, it can be used just like the single roles were used earlier.
Note:
An inherited role can only be created with non-inherited/singular roles.
2. `drop_inherited_role`
The `drop_inherited_role` API accepts the name of the inherited role and drops it from the metadata. It accepts a single argument:
`role_name`: name of the inherited role to be dropped
Example:
```json
{
  "type": "drop_inherited_role",
  "args": {
    "role_name": "combined_user"
  }
}
```
Metadata
---------
The derived roles metadata will be included under the `experimental_features` key while exporting the metadata.
```json
{
  "experimental_features": {
    "derived_roles": [
      {
        "role_name": "manager_is_employee_too",
        "role_set": [
          "employee",
          "manager"
        ]
      }
    ]
  }
}
```
Scope
------
Only postgres queries and subscriptions are supported in this PR.
Important points:
-----------------
1. All columns exposed to an inherited role will be marked as `nullable`; this is done so that cell-value nullification can be performed.
TODOs
-------
- [ ] Tests
- [ ] Test a GraphQL query running with an inherited role without enabling inherited roles in experimental features
- [ ] Tests for aggregate queries, limit, computed fields, functions, subscriptions (?)
- [ ] Introspection test with an inherited role (nullability changes in an inherited role)
- [ ] Docs
- [ ] Changelog
Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
GitOrigin-RevId: 3b8ee1e11f5ceca80fe294f8c074d42fbccfec63
This PR generalizes a bunch of metadata structures.
Most importantly, it changes `SourceCache` to hold existentially quantified values:
```haskell
data BackendSourceInfo =
  forall b. Backend b => BackendSourceInfo (SourceInfo b)

type SourceCache = HashMap SourceName BackendSourceInfo
```
This changes a *lot* of things throughout the code. For now, all code using the schema cache explicitly casts sources to Postgres, meaning that if any non-Postgres `SourceInfo` makes it to the cache, it'll be ignored.
That means that after this PR is submitted, we can split work between two different aspects:
- creating `SourceInfo` for other backends
- handling those other sources down the line
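The "explicit cast to Postgres" mentioned above can be pictured roughly like this (a self-contained sketch with an invented backend tag and type names; the engine's real mechanism may differ):
```haskell
{-# LANGUAGE GADTs #-}

-- Invented stand-ins for illustration.
data Postgres
data MSSQL

data BackendTag b where
  PostgresTag :: BackendTag Postgres
  MSSQLTag    :: BackendTag MSSQL

newtype SourceInfo b = SourceInfo {siName :: String}

-- The existential wrapper carries a tag so callers can recover the backend.
data BackendSourceInfo' where
  BackendSourceInfo' :: BackendTag b -> SourceInfo b -> BackendSourceInfo'

-- Code that only knows how to handle Postgres sources ignores everything else.
asPostgres :: BackendSourceInfo' -> Maybe (SourceInfo Postgres)
asPostgres (BackendSourceInfo' PostgresTag si) = Just si
asPostgres _                                   = Nothing
```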
GitOrigin-RevId: fb9ea00f32e840fc33c5467896fb1dfa5283ab42
### Description
This PR updates the graphql schema to be backend-agnostic. To do so, it also moves the definition of operators to `BackendSchema`, to be specified differently per backend.
GitOrigin-RevId: 35b9d2d1bff93fb68b872d6ab0d3d12ec12c1b93
This is an incremental PR towards https://github.com/hasura/graphql-engine/pull/5797
Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com>
GitOrigin-RevId: a6cb8c239b2ff840a0095e78845f682af0e588a9
Since PDV, introspection queries are parsed by a certain kind of reflection: during GraphQL schema generation, we collect all GraphQL types used in the schema, and use them to generate answers to introspection queries. This has a great advantage, namely that we don't need to keep track of which types are being used in our schema, as this information is extracted after the fact.
But what happens when we encounter two types with the same name in the GraphQL schema? Well, they better be the same, otherwise we likely made a programming error. So what do we do when we *do* encounter a conflict? So far, we've been throwing a rather generic error message, namely `found conflicting definitions for <typename> when collecting types from the schema`. It does not specify what the conflict is, or how it arose. In fact, I'm a little bit hesitant to output more information about what the conflict is, because we support many different kinds of GraphQL types, and these can have disagreements in many different ways. It'd be a bit tiring (not to mention error-prone) to spell this out explicitly for all types. And, in any case, at the moment our equality checks for types are incorrect anyway, as we are avoiding implementing a certain recursive equality checking algorithm.
As it turns out, type conflicts arise not just due to programming errors, but also arise naturally under certain configurations. @codingkarthik encountered an interesting case recently where adding a specific remote and a single unrelated database table would result in a conflict in our Relay schema. It was not readily visible how this conflict arose: this took significant engineering effort.
This adds stack traces to type collection, so that we can inform the user where the type conflict is taking place. The origin of the above conflict can easily be spotted using this PR. Here's a sample error message:
```
Found conflicting definitions for "PageInfo". The definition at "mutation_root.UpdateUser.favourites.anime.edges.node.characters.pageInfo" differs from the definition at "query_root.test_connection.pageInfo"
```
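The check behind a message like the one above might look roughly like this (a simplified, hypothetical sketch; the real implementation threads the path through the type-collection machinery):
```haskell
import Data.List (intercalate)
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

-- Invented stand-ins: a type definition and the field path at which it was seen.
data TypeDef = TypeDef String deriving (Eq, Show)
type Path = [String]

collectType ::
  Path ->
  String ->
  TypeDef ->
  Map String (Path, TypeDef) ->
  Either String (Map String (Path, TypeDef))
collectType path name def seen =
  case Map.lookup name seen of
    Just (prevPath, prevDef)
      | prevDef /= def ->
          Left $
            "Found conflicting definitions for " <> show name
              <> ". The definition at " <> render path
              <> " differs from the definition at " <> render prevPath
    _ -> Right (Map.insert name (path, def) seen)
  where
    render = show . intercalate "."
```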
Co-authored-by: Antoine Leblanc <antoine@hasura.io>
GitOrigin-RevId: d4c01c243022d8570b3c057b168a61c3033244ff
This PR makes a bunch of schema generation code in `Hasura.GraphQL.Schema` backend-agnostic, by moving the backend-specific parts into a new `BackendSchema` type class. This way, the schema generation code can be reused for other backends, simply by implementing new instances of the `BackendSchema` type class.
This work is now in a state where the schema generators are sufficiently generic to accept the implementation of a new backend. That means that we can start exposing MS SQL schema. Execution is not implemented yet, of course.
The branch currently does not support computed fields or Relay. This is, in a sense, intentional: computed field support is normally baked into the schema generation (through the `fieldSelection` schema generator), and so this branch shows a programming technique that allows us to expose certain GraphQL schema depending on backend support. We can write support for computed fields and Relay at a later stage.
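The shape of the idea, in a heavily simplified and partly invented form (the real `BackendSchema` class has many more methods and works with schema parsers rather than plain strings):
```haskell
{-# LANGUAGE TypeFamilies #-}

-- Invented, simplified stand-ins for illustration.
data Postgres
data MSSQL

-- Backend-specific schema decisions live behind a type class, so the shared
-- schema generators can call them without knowing which backend they serve.
class BackendSchema b where
  data ColumnType b
  columnGraphQLName :: ColumnType b -> String

instance BackendSchema Postgres where
  data ColumnType Postgres = PGInteger | PGText
  columnGraphQLName PGInteger = "Int"
  columnGraphQLName PGText    = "String"

instance BackendSchema MSSQL where
  data ColumnType MSSQL = MSInt | MSNVarChar
  columnGraphQLName MSInt      = "Int"
  columnGraphQLName MSNVarChar = "String"
```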
Co-authored-by: Antoine Leblanc <antoine@hasura.io>
GitOrigin-RevId: df369fc3d189cbda1b931d31678e9450a6601314