Commit Graph

10 Commits

Antoine Leblanc
d91029ad51 [gardening] remove all traverse functions from RQL.IR
### Description

This PR removes all `fmapX` and `traverseX` functions from RQL.IR, favouring instead `Functor` and `Traversable` instances throughout the code. This was a relatively straightforward change, except for two small pain points: `AnnSelectG` and `AnnInsert`. Both were parametric over two types `a` and `v`, making it impossible to make them traversable functors... But it turns out that in every single use case, `a ~ f v`. Changing those types to take such an `f :: Type -> Type` as an argument instead of `a :: Type` makes it possible to make them functors.

The only small difference is for `AnnIns`, where I had to introduce one `Identity` transformation for one of the `f` parameters. This is relatively straightforward.
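
Roughly, the shape of the change looks like this (illustrative types only, not the actual RQL.IR definitions):

```haskell
{-# LANGUAGE DeriveTraversable #-}

-- Before (illustrative): parametric over an opaque 'a' and the value type 'v',
-- so there is no way to write a Functor/Traversable instance over 'v'.
data SelectBefore a v = SelectBefore
  { sbFields :: [a]        -- in practice 'a' was always instantiated at some 'f v'
  , sbLimit  :: Maybe Int
  , sbWhere  :: Maybe v
  }

-- After: take the wrapper 'f :: Type -> Type' instead of 'a :: Type', which
-- lets GHC derive the instances over 'v' directly. Where a bare value is
-- needed in the 'f' slot, 'Identity' fills it (the `AnnIns` case above).
data SelectAfter f v = SelectAfter
  { saFields :: [f v]
  , saLimit  :: Maybe Int
  , saWhere  :: Maybe v
  }
  deriving (Functor, Foldable, Traversable)
```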

### Notes

This PR also fixes the most verbose hint in the BigQuery code (`let` instead of `<- pure`).
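
For reference, the pattern behind that hint, in a made-up example (not the BigQuery code itself):

```haskell
-- Flagged: binding the result of 'pure' in a do-block...
verbose :: IO Int
verbose = do
  n <- pure (1 + 2)
  print n
  pure n

-- ...is better written as a plain 'let' binding.
concise :: IO Int
concise = do
  let n = 1 + 2
  print n
  pure n
```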

https://github.com/hasura/graphql-engine-mono/pull/1668

GitOrigin-RevId: e632263a8c559aa04aeae10dcaec915b4a81ad1a
2021-07-08 15:42:53 +00:00
Rakesh Emmadi
e567a096e6 server/postgres: support computed fields in query filters ('where' expression)
https://github.com/hasura/graphql-engine-mono/pull/1677

GitOrigin-RevId: 30175a442237f6ac4b112c652f448a635ad90dc6
2021-07-07 11:59:32 +00:00
Chris Done
614c0dab80 Bigquery/fix limit offset for array aggregates
Blocked on https://github.com/hasura/graphql-engine-mono/pull/1640.

While fiddling with BigQuery I noticed a severe issue with offset/limit for array-aggregates. I've fixed it now.

The basic problem was that I was using a query like this:

```graphql
query MyQuery {
  hasura_Artist(order_by: {artist_self_id: asc}) {
    artist_self_id
    albums_aggregate(order_by: {album_self_id: asc}, limit: 2) {
      nodes {
        album_self_id
      }
      aggregate {
        count
      }
    }
  }
}
```

Producing this SQL:

```sql
SELECT `t_Artist1`.`artist_self_id` AS `artist_self_id`,
       STRUCT(IFNULL(`aa_albums1`.`nodes`, NULL) AS `nodes`, IFNULL(`aa_albums1`.`aggregate`, STRUCT(0 AS `count`)) AS `aggregate`) AS `albums_aggregate`
FROM `hasura`.`Artist` AS `t_Artist1`
LEFT OUTER JOIN (SELECT ARRAY_AGG(STRUCT(`t_Album1`.`album_self_id` AS `album_self_id`) ORDER BY (`t_Album1`.`album_self_id`) ASC) AS `nodes`,
                        STRUCT(COUNT(*) AS `count`) AS `aggregate`,
                        `t_Album1`.`artist_other_id` AS `artist_other_id`
                 FROM (SELECT *
                       FROM `hasura`.`Album` AS `t_Album1`
                       ORDER BY (`t_Album1`.`album_self_id`) ASC NULLS FIRST
                       -- PROBLEM HERE
                       LIMIT @param0) AS `t_Album1`
                 GROUP BY `t_Album1`.`artist_other_id`)
AS `aa_albums1`
ON (`aa_albums1`.`artist_other_id` = `t_Artist1`.`artist_self_id`)
ORDER BY (`t_Artist1`.`artist_self_id`) ASC NULLS FIRST
```

Note the `LIMIT @param0` -- that is incorrect because we want to limit
per artist. Instead, we want:

```sql
SELECT `t_Artist1`.`artist_self_id` AS `artist_self_id`,
       STRUCT(IFNULL(`aa_albums1`.`nodes`, NULL) AS `nodes`, IFNULL(`aa_albums1`.`aggregate`, STRUCT(0 AS `count`)) AS `aggregate`) AS `albums_aggregate`
FROM `hasura`.`Artist` AS `t_Artist1`
LEFT OUTER JOIN (SELECT ARRAY_AGG(STRUCT(`t_Album1`.`album_self_id` AS `album_self_id`) ORDER BY (`t_Album1`.`album_self_id`) ASC) AS `nodes`,
                        STRUCT(COUNT(*) AS `count`) AS `aggregate`,
                        `t_Album1`.`artist_other_id` AS `artist_other_id`
                 FROM (SELECT *,
                            -- ADDED
                            ROW_NUMBER() OVER(PARTITION BY artist_other_id) artist_album_index
                       FROM `hasura`.`Album` AS `t_Album1`
                       ORDER BY (`t_Album1`.`album_self_id`) ASC NULLS FIRST
                       ) AS `t_Album1`
                 -- CHANGED
                 WHERE artist_album_index <= @param
                 GROUP BY `t_Album1`.`artist_other_id`)
AS `aa_albums1`
ON (`aa_albums1`.`artist_other_id` = `t_Artist1`.`artist_self_id`)
ORDER BY (`t_Artist1`.`artist_self_id`) ASC NULLS FIRST
```

The row-number filter in the WHERE clause now serves as both LIMIT and OFFSET, applied per artist. Then both the ARRAY_AGG and the COUNT are correct per artist.

I've updated the Haskell test suite to add regression tests for this. I'll push a commit for Python tests shortly; the existing tests there still pass.

This just fixes a case that we hadn't noticed.

https://github.com/hasura/graphql-engine-mono/pull/1641

GitOrigin-RevId: 49933fa5e09a9306c89565743ecccf2cb54eaa80
2021-07-06 08:29:39 +00:00
Chris Done
6dc555f9eb Bigquery/global limit
This resolves https://github.com/hasura/graphql-engine/issues/6947.

A new [`global_select_limit`](b0ab5deefe/server/tests-py/queries/graphql_query/bigquery/replace_metadata.yaml (L17)) field is supported in the BigQuery configuration.

To test global limits, we have two sources defined: the normal one (limited to 1 million) and one with a limit of 1.
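
Conceptually, the new field caps whatever limit a query asks for; a minimal sketch with hypothetical names (not the actual backend code):

```haskell
-- The limit pushed down to BigQuery never exceeds the source's configured
-- global select limit; without a per-query limit, the global one applies.
effectiveLimit :: Int -> Maybe Int -> Int
effectiveLimit globalSelectLimit perQueryLimit =
  maybe globalSelectLimit (min globalSelectLimit) perQueryLimit
```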

https://github.com/hasura/graphql-engine-mono/pull/1592

GitOrigin-RevId: 6ebcc7c1a16bc26ec36e53ae3694d36b7ce5c6e1
2021-06-25 13:36:35 +00:00
Antoine Leblanc
8a77386fcf server: IR for DB-DB joins
### Description

This PR adds the required IR for DB to DB joins, based on @paf31 and @0x777 's `feature/db-to-db` branch.

To do so, it also refactors the IR to introduce a new type parameter, `r`, which is used to recursively construct the `v` parameter of remote QueryDBs. When collecting remote joins, we replace `r` with `Const Void`, indicating at the type level that there cannot be any leftover remote join.
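
A loose illustration of that idea (made-up types, not the real IR):

```haskell
{-# LANGUAGE DeriveTraversable #-}

import Data.Functor.Const (Const)
import Data.Void (Void)

-- 'r' stands for whatever remote selection may still be nested in the query,
-- applied to the same value type 'v' as the rest of the tree.
data Field r v
  = ColumnField v
  | RemoteField (r v)
  deriving (Functor, Foldable, Traversable)

-- Once remote joins have been collected out of the AST, 'r' is instantiated
-- at 'Const Void': building a 'RemoteField' would require a 'Void', so the
-- type itself guarantees there is no leftover remote join.
type ResolvedField v = Field (Const Void) v
```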

Furthermore, this PR refactors IR.Select for readability, moves some code from IR.Root to IR.Select to avoid having to deal with circular dependencies, and makes it compile by adding `error` in all new cases in the execution pipeline.

The diff doesn't make it clear, but most of Select.hs is actually unchanged. Declarations have just been reordered by topic, in the following order:
- type declarations
- instance declarations
- type aliases
- constructor functions
- traverse functions

https://github.com/hasura/graphql-engine-mono/pull/1580

Co-authored-by: Phil Freeman <630306+paf31@users.noreply.github.com>
GitOrigin-RevId: bbdcb4119cec8bb3fc32f1294f91b8dea0728721
2021-06-17 23:13:05 +00:00
Chris Done
67a9045328 Bigquery/cleanups
A pull request for cleaning up small issues, bugs, redundancies and missing things in the BigQuery backend.

Summary:

1. Remove duplicate projection fields - BigQuery rejects these.
2. Add order_by to the test suite cases, as it was returning inconsistent results.
3. Add lots of comments in FromIr about how the dataloader approach is given support.
4. Produce the correct output structure for aggregates:
   a. Should be a singleton object for a top-level aggregate query.
   b. Should have appropriate aggregate{} and nodes{} labels.
   c. **Support for nodes** (via array_agg).
5. Smooth over support of array aggregates by removing the fields used for joining, via an explicit projection of each wanted field.

https://github.com/hasura/graphql-engine-mono/pull/1317

Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
GitOrigin-RevId: cd3899f4667770a27055f94988ef2a6d5808f1f5
2021-06-15 08:59:11 +00:00
Vamshi Surabhi
e8e4f30dd6 server: support remote relationships on SQL Server and BigQuery (#1497)
Remote relationships are now supported on SQL Server and BigQuery. The major change, though, is the re-architecture of the remote join execution logic. Prior to this PR, each backend was responsible for processing the remote relationships that are part of its AST.

This is not ideal, as there is nothing specific about a remote join's execution that ties it to a backend. The only backend-specific part is whether or not the specification of the remote relationship is valid (i.e., we'll need to validate whether the scalars are compatible).

The approach now changes to this:

1. Before delegating the AST to the backend, we traverse it and collect all the remote joins, modifying the AST to add the necessary join fields where needed.

1. Once the remote joins are collected from the AST, the database call is made to fetch the response. The necessary data for the remote join(s) is collected from the database's response and one or more remote schema calls are constructed as necessary.

1. The remote schema calls are then executed, and the data from the database and from the remote schemas is joined to produce the final response (a rough sketch of this flow follows).
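
A rough sketch of that flow, with hypothetical types and names rather than the engine's actual API:

```haskell
-- Each step is passed in as a function, since only step 2 is backend-specific.
executeWithRemoteJoins
  :: Monad m
  => (ast -> (ast, joins))             -- 1. collect remote joins, adding join fields to the AST
  -> (ast -> m dbResp)                 -- 2. run the database query
  -> (joins -> dbResp -> m [call])     --    build remote schema calls from the DB response
  -> (call -> m remoteResp)            -- 3. execute a single remote schema call
  -> ([remoteResp] -> dbResp -> resp)  --    join database and remote data
  -> ast
  -> m resp
executeWithRemoteJoins collectJoins runDB mkCalls runCall joinAll ast = do
  let (ast', joins) = collectJoins ast
  dbResponse  <- runDB ast'
  calls       <- mkCalls joins dbResponse
  remoteResps <- mapM runCall calls
  pure (joinAll remoteResps dbResponse)
```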

### Known issues

1. Ideally the traversal of the IR to collect remote joins should return an AST which does not include remote join fields. This operation can be type safe but isn't taken up as part of the PR.

1. There is a lot of code duplication between `Transport/HTTP.hs` and `Transport/Websocket.hs` which needs to be fixed ASAP. This too hasn't been taken up by this PR.

1. The type which represents the execution plan is only modified to handle our current remote joins and as such it will have to be changed to accommodate general remote joins.

1. Use of lenses would have reduced the boilerplate code to collect remote joins from the base AST.

1. The current remote join logic assumes that the join columns of a remote relationship appear with their names in the database response. This however is incorrect as they could be aliased. This can be taken up by anyone, I've left a comment in the code.

### Notes to the reviewers

I think it is best reviewed commit by commit.

1. The first one is very straightforward.

1. The second one refactors the remote join execution logic, but other than moving things around it doesn't change the user-facing functionality. It moves Postgres-specific parts from `Execute` to the `Backends/Postgres` module, and some IR-related code to the `Hasura.RQL.IR` module. It also simplifies various type class function signatures, as a backend doesn't have to handle remote joins anymore.

1. The third one fixes partial case matches that, for some weird reason, weren't shown as warnings before this refactor.

1. The fourth one generalizes the validation logic of remote relationships and implements the `scalarTypeGraphQLName` function on SQL Server and BigQuery, which is used by the validation logic (sketched below). This enables remote relationships on BigQuery and SQL Server.
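
A loose illustration of what such a mapping looks like (made-up scalar types, not the real backend definitions):

```haskell
-- Maps a backend scalar type to the name of a GraphQL scalar, so validation
-- can check that both sides of a remote relationship are compatible.
data ScalarType
  = SmallIntScalar
  | IntegerScalar
  | FloatScalar
  | StringScalar
  | GeographyScalar

scalarTypeGraphQLName :: ScalarType -> Either String String
scalarTypeGraphQLName scalar = case scalar of
  SmallIntScalar  -> Right "Int"
  IntegerScalar   -> Right "Int"
  FloatScalar     -> Right "Float"
  StringScalar    -> Right "String"
  GeographyScalar -> Left "geography has no GraphQL scalar equivalent"
```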

https://github.com/hasura/graphql-engine-mono/pull/1497

GitOrigin-RevId: 77dd8eed326602b16e9a8496f52f46d22b795598
2021-06-11 03:27:39 +00:00
Antoine Leblanc
2d8ac777b3 server: introduce new custom scalars and remove offsetParser
GitOrigin-RevId: 5db058a7ae8f57bdc7e9844fcdd94e31ce11d961
2021-06-10 16:14:21 +00:00
David Overton
ddad668f07 Fix/custom table name
GitOrigin-RevId: 5004717ac7d9e848ca186a1cdf52e375547034bf
2021-05-18 13:37:27 +00:00
Chris Done
f7a202a363 BigQuery Feature Branch
This will implement BigQuery support.

Co-authored-by: Antoine Leblanc <1618949+nicuveo@users.noreply.github.com>
Co-authored-by: Sibi Prabakaran <737477+psibi@users.noreply.github.com>
Co-authored-by: Aniket Deshpande <922486+aniketd@users.noreply.github.com>
Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
GitOrigin-RevId: 1a6ffaf34233e13e8125a5c908eaa7e32d65007b
2021-04-12 10:19:20 +00:00