## Description
To check filter expression types properly, we need the resolved `source`
for all models. Previously we did not have this, as we did all model
resolving in one go. This splits that work into stages: the existing
`models` stage resolves models and their sources, and then a new
`models_graphql` stage resolves a model's filter expressions, aggregations
and GraphQL schema. This means the follow-up PR we actually want to make
can inspect the model sources during the `models_graphql` stage.
It's massive, but it's a functional no-op, I promise.
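To make the staging concrete, here is a rough sketch of the shape of the split (type and function names are illustrative stand-ins, not the real `metadata-resolve` definitions): the first stage produces models together with their resolved sources, and the second stage can then lean on those sources when resolving filter expressions.

```rust
use std::collections::BTreeMap;

// Illustrative stand-ins for the resolved types; the real ones are richer.
struct ModelSource {
    data_connector_name: String,
}

struct Model {
    source: Option<ModelSource>,
}

struct ModelGraphql {
    filter_expression_type_name: Option<String>,
}

// Stage 1 (`models`): resolve each model and its source.
fn resolve_models(raw: BTreeMap<String, Option<String>>) -> BTreeMap<String, Model> {
    raw.into_iter()
        .map(|(model_name, connector)| {
            let source = connector.map(|data_connector_name| ModelSource { data_connector_name });
            (model_name, Model { source })
        })
        .collect()
}

// Stage 2 (`models_graphql`): resolve filter expressions and GraphQL config,
// now that the resolved source is available to check against.
fn resolve_models_graphql(models: &BTreeMap<String, Model>) -> BTreeMap<String, ModelGraphql> {
    models
        .iter()
        .map(|(model_name, model)| {
            let filter_expression_type_name = model
                .source
                .as_ref()
                .map(|source| format!("{}_bool_exp", source.data_connector_name));
            (model_name.clone(), ModelGraphql { filter_expression_type_name })
        })
        .collect()
}
```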
V3_GIT_ORIGIN_REV_ID: 6c8fbf0763fb1daa209a4c84855609e8e9372e8b
## Description
Aggregate response reshaping is where the computed aggregates are
extracted from the NDC response and rewritten into the response GraphQL
shape (usually a nested object structure). Previously this reshaping was
precomputed and stored in the internal IR for the request and then
passed through to the response reshaping code. At the time, this seemed
smarter than re-walking the GraphQL annotation structure and
reinterpreting it to know how to reshape the response, when we already
do this at IR-creation time.
However, this was too smart for its own good. 😭 Unfortunately, the
necessary bits of the IR are not available when processing an
aggregate response inside a relationship field. What _is_ available is
the GraphQL annotation structure. 🤦♂️
This PR reworks the aggregate response reshaping to solely use the
GraphQL annotation structure and removes the precomputation code and
related IR storage of it. This means the new reshaping code will be
usable for the upcoming aggregate relationships implementation. And...
it turns out that the new code is actually cleaner and simpler than the old
"clever" code anyway.
This PR does not change any functionality.
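To sketch the new approach (the annotation and function names here are invented for illustration, not the engine's actual types): the reshaping walks an annotation tree and looks each leaf up in NDC's flat aggregates map, so nothing needs to be precomputed into the IR.

```rust
use serde_json::{Map, Value};

// Hypothetical annotation shape: each aggregate output field either names the
// flat NDC aggregate to read, or lists the nested fields to build beneath it.
enum AggregateOutputAnnotation {
    Leaf { ndc_aggregate_name: String },
    Object(Vec<(String, AggregateOutputAnnotation)>),
}

// Rebuild the GraphQL-shaped nested object from NDC's flat aggregates map by
// walking the annotations alone.
fn reshape_aggregates(
    annotation: &AggregateOutputAnnotation,
    ndc_aggregates: &Map<String, Value>,
) -> Value {
    match annotation {
        AggregateOutputAnnotation::Leaf { ndc_aggregate_name } => ndc_aggregates
            .get(ndc_aggregate_name)
            .cloned()
            .unwrap_or(Value::Null),
        AggregateOutputAnnotation::Object(fields) => Value::Object(
            fields
                .iter()
                .map(|(alias, inner)| (alias.clone(), reshape_aggregates(inner, ndc_aggregates)))
                .collect(),
        ),
    }
}
```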
V3_GIT_ORIGIN_REV_ID: 59e29cdac396c094a35c1203c02a3f27ba2c4c58
## Description
This PR refactors the `output_type::object_type_fields` function, which
is a monster, and splits out the handling of relationship fields into a
separate function. That function is then itself broken down into
separate functions that each handle one of the different types of
relationship fields.
This is particularly necessary now, as logic to handle
ModelAggregateRelationshipTargets is about to get added and that would
only grow the monster function more.
The `RelationshipTarget` `metadata_resolve` type has had its enum
variant structs broken out into separate struct types so that they can
be passed around separately from the enum. This makes it easier to write
functions that deal with each enum variant separately (like
`output_type::object_type_fields`).
This PR causes no behavioural changes.
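A minimal sketch of the variant-struct extraction (field names are hypothetical): each handler takes just the struct for its variant, so the monster `match` shrinks to a dispatcher.

```rust
// Variant payloads as standalone structs that can be passed around on their own.
struct ModelRelationshipTarget {
    model_name: String,
}

struct CommandRelationshipTarget {
    command_name: String,
}

enum RelationshipTarget {
    Model(ModelRelationshipTarget),
    Command(CommandRelationshipTarget),
}

// One focused handler per variant...
fn model_relationship_field(target: &ModelRelationshipTarget) -> String {
    format!("relationship to model {}", target.model_name)
}

fn command_relationship_field(target: &CommandRelationshipTarget) -> String {
    format!("relationship to command {}", target.command_name)
}

// ...and the enum match becomes a thin dispatcher.
fn relationship_field(target: &RelationshipTarget) -> String {
    match target {
        RelationshipTarget::Model(model_target) => model_relationship_field(model_target),
        RelationshipTarget::Command(command_target) => command_relationship_field(command_target),
    }
}
```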
V3_GIT_ORIGIN_REV_ID: 4707f8b07154bd4d503f78392a3d2d8976eab339
## Description
When doing relationships across bool exps, by default we use the target
model's `where` clause bool exp. However, we allow the user to provide a
different bool exp instead, which we now pay attention to.
This is behind a feature flag, so it is a user-facing no-op.
V3_GIT_ORIGIN_REV_ID: b31157c455d56375c35db28f2418da9d41b68c46
## Description
Previously we ignored `isNull` and `logicalOperators`. Now we obey
`isNull` for scalars, and `logicalOperators` for objects.
Both `isNull` on objects and `logicalOperators` on scalars don't exist yet
as features, so we will create follow-up tickets for these.
No new tests, but I was able to make both things fail by disabling the
appropriate options:
<img width="655" alt="Screenshot 2024-06-19 at 16 59 33"
src="https://github.com/hasura/v3-engine/assets/4729125/47eb342d-49c1-4ec8-8bfa-5e3f2d286928">
<img width="758" alt="Screenshot 2024-06-19 at 16 30 08"
src="https://github.com/hasura/v3-engine/assets/4729125/5722407c-e12e-429a-ab3a-fc221a8addfa">
Forgive me, I also broke up the `stages/boolean_expressions/object`
module into a few pieces to make this easier to think about.
This is behind a feature flag, so it is a no-op.
V3_GIT_ORIGIN_REV_ID: b1273d8a9655a21adedc7c7a04a96e0cfe98cc98
## Description
PR to implement filtering and sorting on nested fields in the custom
connector.
V3_GIT_ORIGIN_REV_ID: d112ff16beabeab1aea8d2bb24f51d22b461c8cd
## Description
This PR continues on from #725 and adds the metadata resolve logic to
validate the usage of aggregates applied to relationships. The metadata
resolve logic is gated behind a new `enable_aggregate_relationships`
flag and disabled by default.
The new resolve logic lives in
`crates/metadata-resolve/src/stages/relationships/mod.rs`, however, most
of the actual logic that validates the usage of the aggregate expression
with the relationship has been reused from the model aggregate resolve
code. Subsequently, that code
(`crates/metadata-resolve/src/stages/models/aggregation.rs`) was
refactored to return a distinct error type
`ModelAggregateExpressionError` that can be composed with the
`RelationshipError` type, so the same errors can be returned with the
additional context of the relationship they were found in.
New metadata resolve error tests have been added in
`crates/metadata-resolve/tests/failing/aggregate_expression_in_relationship/*`.
Also, two new passing metadata resolve tests have been added to cover
the happy case of root field and relationship aggregate expressions:
`crates/metadata-resolve/tests/passing/aggregate_expressions/*`
JIRA: [V3ENGINE-160](https://hasurahq.atlassian.net/browse/V3ENGINE-160)
[V3ENGINE-160]:
https://hasurahq.atlassian.net/browse/V3ENGINE-160?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
V3_GIT_ORIGIN_REV_ID: 8cd7040cc9277940641d5ac239d7b34f8c1462c5
This keeps versions in one place so we can more easily ensure we upgrade
crates together.
V3_GIT_ORIGIN_REV_ID: 6a929bb6196c19a1f66a768585b669127035e9be
## Description
We cannot validate `BooleanExpressionType`s much until they are used.
This adds checks for when they are used as model `where` clauses. We
ensure the data connector in question is allowed to use nested filtering
(if we attempt any), and ensure that every scalar field has mappings
for the appropriate data connector.
Hidden behind a feature flag, so technically a no-op.
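A sketch of the shape these checks might take (types and names are invented for illustration):

```rust
use std::collections::{BTreeMap, BTreeSet};

struct DataConnectorCapabilities {
    supports_nested_filtering: bool,
}

struct BooleanExpressionScalarField {
    // Data connectors for which this field has comparison operator mappings.
    mapped_data_connectors: BTreeSet<String>,
    // Whether filtering on this field would require nested filtering support.
    is_nested: bool,
}

fn validate_where_clause(
    connector_name: &str,
    capabilities: &DataConnectorCapabilities,
    scalar_fields: &BTreeMap<String, BooleanExpressionScalarField>,
) -> Result<(), String> {
    for (field_name, field) in scalar_fields {
        if field.is_nested && !capabilities.supports_nested_filtering {
            return Err(format!(
                "field {field_name} needs nested filtering, which data connector {connector_name} does not support"
            ));
        }
        if !field.mapped_data_connectors.contains(connector_name) {
            return Err(format!(
                "field {field_name} has no mapping for data connector {connector_name}"
            ));
        }
    }
    Ok(())
}
```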
V3_GIT_ORIGIN_REV_ID: ab54f913157954e8197798febafc70012bfad9ad
### TODO
- [x] Validate the presence of arguments in the data connector schema
- [x] Add test for IR
- [x] Add test using custom connector
## Description
JIRA: https://hasurahq.atlassian.net/browse/V3ENGINE-148
This is part 2 of the field arguments work.
This adds the field arguments to the GraphQL schema and also handles
the execution part.
![image](https://github.com/hasura/v3-engine/assets/85472423/c8f55d70-6539-42ef-ab5a-91b74e5699e2)
## Changelog
### Product
- [x] community-edition
- [x] cloud
### Type
- [ ] highlight
- [x] enhancement
- [ ] bugfix
- [ ] behaviour-change
- [ ] performance-enhancement
- [ ] security-fix
### Changelog entry
Add support for field arguments
---------
Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com>
Co-authored-by: Samir Talwar <samir.talwar@hasura.io>
V3_GIT_ORIGIN_REV_ID: ea06b896feaca30419f81cd070fd4cfd608ef35b
## Description
Doing some work on the `models` section and it's a bit big and
confusing, so doing a tidy-up to get my head around it all. Splits the
large module into separate files (`ordering`, `source`, `graphql`,
`filter` etc).
The only actual changes are:
- Move filter resolving to after model source is checked, so we can
refer to it. This will be useful for the thing I actually want to do.
- Make steps like `graphql` and `source` return the values they create
rather than mutating `Model` directly.
- Move some filter checks into `filter` rather than `source`, now that we
are able to. This logic was all over the place.
Functional no-op.
V3_GIT_ORIGIN_REV_ID: 540db16d34590ed9e481c36d251ab5e1886f2c81
## Description
Because we globbed for folders, we had to put a test in the root of each
folder, which is messy. If we glob for files instead, like the `failing`
tests do, we can have a better time. Functional no-op.
V3_GIT_ORIGIN_REV_ID: 381f5a44d9ae6100198bd28a286493bc3a2ba98e
Return a `T` instead of a `Result<T, E>` when we never return an error
(`E`) case.
I also enabled some more warnings. `unnecessary_box_returns` has been
suppressed where appropriate, and `unused_async` doesn't seem to be
violated anywhere any more.
I got rid of some calls to `.unwrap()` too.
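A minimal illustration of the pattern (function names invented for the example):

```rust
// Before: the signature promises an error case that can never happen, so every
// caller has to unwrap or propagate it.
fn schema_version_old() -> Result<&'static str, std::convert::Infallible> {
    Ok("v3")
}

// After: when no code path returns `Err`, return the value directly.
fn schema_version() -> &'static str {
    "v3"
}

fn main() {
    // The old shape forces `.unwrap()` (or `?`) at every call site...
    let _old = schema_version_old().unwrap();
    // ...the new shape does not.
    let _new = schema_version();
}
```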
V3_GIT_ORIGIN_REV_ID: 015ebd05978cf8c2d87474a90e0cd4333779a761
## Description
This implements local relationships in the new `BooleanExpressionType`.
We copy all the existing tests for this and reimplement them using the
new types.
Things currently missing:
- a `BooleanExpressionType` can be specified to override the target
model's `where` clause when the relationship is defined. Currently we
ignore this; there is a follow-up ticket for it:
https://hasurahq.atlassian.net/browse/V3ENGINE-209
This work is behind a feature flag, so this is a no-op for the user.
V3_GIT_ORIGIN_REV_ID: 6ae87e1ffb2ad0ebd27c6097e8ea87aca1a4c299
## Description
This PR adds the necessary new types and properties to OpenDD to support
aggregates over relationships. This follows the definition as laid out
in the [Aggregates
RFC](https://github.com/hasura/v3-engine/blob/main/rfcs/aggregations.md#relationship).
In order to achieve this incrementally, the changes have been made to
the OpenDD types but have been hidden from the generated JSON Schema
behind the `#[opendd(hidden)]` attribute. New functionality needed to be
added to the macro to support this; see the changes in the `opendds-derive`
crate.
There are also a couple of fixes to incorrect comments in the existing
aggregate expression OpenDD types.
A future PR will bring metadata resolve logic to validate the usage of
the new OpenDD properties and types.
V3_GIT_ORIGIN_REV_ID: 75de972cf8d2115248b3bf91f51f320e8c22b1f8
We pretend we handle directives but we simply set them to an empty map,
and then don't actually use them at all.
This is completely unused code that can be removed.
V3_GIT_ORIGIN_REV_ID: a86dc43acc2de2f2d3f78f0c8ebc53ce9f5bde8c
## Description
Making the schema work for boolean expressions has been challenging as
they are no longer tied to a single data connector, so we cannot do any
checks of whether a relationship is local or remote. We defer this to
the IR step when the source data connector for a relationship is known,
so that we can generate schema for boolean expressions decoupled from
any concept of data connector.
Functional no-op.
V3_GIT_ORIGIN_REV_ID: d2923acedf92b031ba092bb83c515812c4d346f0
## Description
This PR moves all the tests that were added to
`crates/engine/tests/validate_metadata_artifacts/aggregate_expressions`
to the metadata-resolve `metadata_golden_tests`.
All the `error.txt` files got renamed to `expected_error.txt` and the
`metadata is not consistent:` bit on the front of the error got trimmed
off.
I had to change the way that `test_failing_metadata` works so that
instead of looking for directories under `failing/` it looks for
`metadata.json` files under `failing/`. This is because not every
directory has a test in it, some directories are just used for grouping
(eg `failing/aggregate_expressions/`). This doesn't change what tests
get run. The only side effect is that every test name has `_metadata`
suffixed to it (the filename). 😢 `test-each` doesn't appear to offer a
way to disable this behaviour.
V3_GIT_ORIGIN_REV_ID: dece7cd3bb4623c51050d12471d2c0990fd69bfd
Rules:
* The schema ID is set by the `id` property, falling back to `rename`.
* The schema ID is always prefixed by a URL base.
* The schema name is set by the `rename` property, falling back to `id`.
* The schema title is set by the `title` property, falling back to
`rename`, then `id`.
* If the name or title is automatically created from the ID, remove
characters that might choke a code generator, such as ' ' or '/'.
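A sketch of these rules in code (the struct, helpers and URL base are invented for illustration):

```rust
struct SchemaNaming<'a> {
    id: Option<&'a str>,
    rename: Option<&'a str>,
    title: Option<&'a str>,
}

// Assumed URL base that prefixes every schema ID.
const URL_BASE: &str = "https://example.com/schemas";

// Strip characters that might choke a code generator, such as ' ' or '/'.
fn sanitize(raw: &str) -> String {
    raw.chars()
        .filter(|c| c.is_ascii_alphanumeric() || *c == '_')
        .collect()
}

impl<'a> SchemaNaming<'a> {
    // ID: `id`, falling back to `rename`, always prefixed by the URL base.
    fn schema_id(&self) -> Option<String> {
        self.id.or(self.rename).map(|id| format!("{}/{}", URL_BASE, id))
    }

    // Name: `rename`, falling back to a sanitized `id`.
    fn schema_name(&self) -> Option<String> {
        self.rename
            .map(str::to_owned)
            .or_else(|| self.id.map(sanitize))
    }

    // Title: `title`, falling back to `rename`, then a sanitized `id`.
    fn schema_title(&self) -> Option<String> {
        self.title
            .or(self.rename)
            .map(str::to_owned)
            .or_else(|| self.id.map(sanitize))
    }
}
```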
V3_GIT_ORIGIN_REV_ID: fac46a0849c2f36642daeca2971fdce7d253256a
This is Part 3 in a stacked PR set that delivers aggregate root field
support.
* Part 1: OpenDD: https://github.com/hasura/v3-engine/pull/683
* Part 2: Metadata Resolve: https://github.com/hasura/v3-engine/pull/684
JIRA: [V3ENGINE-159](https://hasurahq.atlassian.net/browse/V3ENGINE-159)
## Description
This PR implements the GraphQL API for aggregate root fields. The
GraphQL schema matches the design in the [Aggregate and Grouping
RFC](https://github.com/hasura/v3-engine/blob/main/rfcs/aggregations.md#aggregations-walkthrough).
### Schema Generation
The main new part of the GraphQL schema generation can be found in
`crates/schema/src/aggregates.rs`. This is where we generate the new
aggregate selection types. However, the root field generation can be
found in `crates/schema/src/query_root/select_aggregate.rs`.
The new `filter_input` type generation lives in
`crates/schema/src/model_filter_input.rs`. As this type effectively
encapsulates the existing field arguments used on the Select Many root
field, the code to generate them has moved into `model_filter_input.rs`
and `select_many.rs` simply reuses the functionality from there (without
actually using the filter input type!).
### IR
The main aggregates IR generation for the aggregate root field happens
in `crates/execute/src/ir/query_root/select_aggregate.rs`. It reads all
the input arguments to the root field and then kicks the selection logic
over to `model_aggregate_selection_ir` from
`crates/execute/src/ir/model_selection.rs`.
`crates/execute/src/ir/model_selection.rs` has received some refactoring
to facilitate that new `model_aggregate_selection_ir` function; it
mostly shares functionality with the existing `model_selection_ir`,
except that it creates aggregates IR instead of fields IR.
The actual reading of the aggregate selection happens in
`crates/execute/src/ir/aggregates.rs`.
The aggregates selection IR captures the nested JSON structure of the
aggregate selection, because NDC does not return aggregates in the same
nested JSON structure as the GraphQL request. NDC takes a flat list of
aggregate operations to run. This captured nested JSON structure is used
during response rewriting to convert NDC's flat list into the nested
structure that matches the GraphQL request.
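To illustrate (with invented types; the real IR captures rather more), flattening a nested aggregate selection into NDC's flat list could assign composite keys, which the response rewriting later uses to restore the nesting:

```rust
use std::collections::BTreeMap;

enum AggregateSelection {
    Count,
    Sum { column: String },
    Nested(BTreeMap<String, AggregateSelection>),
}

// Walk the nested selection, emitting one flat NDC aggregate per leaf, keyed
// by a composite name that records where it came from.
fn flatten_aggregates(
    prefix: &str,
    selection: &BTreeMap<String, AggregateSelection>,
    out: &mut BTreeMap<String, String>,
) {
    for (alias, aggregate) in selection {
        let key = if prefix.is_empty() {
            alias.clone()
        } else {
            format!("{prefix}_{alias}")
        };
        match aggregate {
            AggregateSelection::Count => {
                out.insert(key, "count(*)".to_owned());
            }
            AggregateSelection::Sum { column } => {
                out.insert(key, format!("sum({column})"));
            }
            AggregateSelection::Nested(inner) => flatten_aggregates(&key, inner, out),
        }
    }
}
```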
The aggregate selection IR is placed onto `ModelSelection` alongside the
existing fields IR. Since both fields and aggregates can be put into the
one NDC request (even though they are not right now), this made sense.
They both translate onto one NDC `Query`. This necessitated making the
field selection optional on the `ModelSelection`
(`ModelSelection.selection`), since aggregate requests currently don't
use it.
### Planning
`crates/execute/src/plan/model_selection.rs` takes care of mapping the
aggregates into the NDC request from the generated IR.
There has been a new `ProcessResponseAs` variant added in
`crates/execute/src/plan.rs` to capture how to read and reshape an NDC
aggregates response. This is handled in
`crates/execute/src/process_response.rs` where the captured JSON
structure in the IR is used to restore NDC's flat aggregates list into
the required nested JSON output structure.
### Testing
The Custom Connector has been updated with functionality to allow
aggregates over nested object fields
(`crates/custom-connector/src/query.rs`).
New execution and introspection tests have been added to
`crates/engine/tests/execute/aggregates/` to test aggregates against
Postgres and the Custom Connector.
[V3ENGINE-159]:
https://hasurahq.atlassian.net/browse/V3ENGINE-159?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
V3_GIT_ORIGIN_REV_ID: ff47f13eaca70d10de21e102a6667110f8f8af40
This is Part 2 in a stacked PR set that delivers aggregate root field
support.
* Part 1: OpenDD: https://github.com/hasura/v3-engine/pull/683
* Part 3: GraphQL API: https://github.com/hasura/v3-engine/pull/685
JIRA: [V3ENGINE-159](https://hasurahq.atlassian.net/browse/V3ENGINE-159)
## Description
This PR implements the metadata resolve phase for this feature: it adds
support for resolving `AggregateExpression`s and validates their use
when linked to a `Model`.
The bulk of the changes can be found in:
* `crates/metadata-resolve/src/stages/aggregates/*` - This is where the
`AggregateExpression`s are resolved
* `crates/metadata-resolve/src/stages/models/mod.rs` - This is where we
validate the `AggregateExpression` specified for use in the model is
actually compatible with the model and its data connector
The `ndc-spec` version used has been lifted to the latest version that
adds support for aggregates over nested objects
(https://github.com/hasura/ndc-spec/pull/144). This necessitated changes
in the Custom Connector, but the actual functionality for aggregation
over nested objects lands in Part 3.
There are also some changes in
`crates/metadata-resolve/src/types/subgraph.rs` where the `Display`
trait for the various `Qualified<T>`, `QualifiedTypeReference`, etc
types has been reworked so that they print more cleanly, with the
subgraph being put outside the type syntax, and array types getting
formatted correctly. For example, previously an array of Varchars would
have printed as `Varchar (in subgraph default)!`, now it properly
formats as `[Varchar!]! (in subgraph default)`.
This was necessary to make useful error messages using these types.
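As a simplified, invented miniature of the change (the real `Qualified<T>` and `QualifiedTypeReference` types are more involved), the subgraph is printed once, outside the type syntax:

```rust
use std::fmt;

struct QualifiedTypeReference {
    subgraph: String,
    underlying: TypeShape,
    nullable: bool,
}

enum TypeShape {
    Named(String),
    Array(Box<QualifiedTypeReference>),
}

// Print just the type syntax, e.g. `[Varchar!]!`, with no subgraph.
fn fmt_type(type_ref: &QualifiedTypeReference, f: &mut fmt::Formatter<'_>) -> fmt::Result {
    match &type_ref.underlying {
        TypeShape::Named(name) => write!(f, "{name}")?,
        TypeShape::Array(element) => {
            write!(f, "[")?;
            fmt_type(element, f)?;
            write!(f, "]")?;
        }
    }
    if !type_ref.nullable {
        write!(f, "!")?;
    }
    Ok(())
}

impl fmt::Display for QualifiedTypeReference {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt_type(self, f)?;
        // The subgraph qualifier goes outside the type syntax:
        // `[Varchar!]! (in subgraph default)` rather than `Varchar (in subgraph default)!`.
        write!(f, " (in subgraph {})", self.subgraph)
    }
}
```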
A tonne of tests have been added in
`crates/engine/tests/validate_metadata_artifacts/aggregate_expressions`
to test every error condition of the metadata resolve process.
[V3ENGINE-159]:
https://hasurahq.atlassian.net/browse/V3ENGINE-159?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
V3_GIT_ORIGIN_REV_ID: ffd859127a3f1560707f06ef01906c9d1b183d31
This is a no-op refactor of IR-related types. It introduces a new concrete
`IR` enum with `Mutation` and `Query` variants, where each contains a
map of root fields.
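Roughly the following shape, with placeholder root-field types standing in for the real ones:

```rust
use std::collections::BTreeMap;

// Placeholders for the real query/mutation root-field IR types.
struct QueryRootField;
struct MutationRootField;

type Alias = String;

// One concrete IR value per request, rather than passing bare maps around.
enum IR {
    Query(BTreeMap<Alias, QueryRootField>),
    Mutation(BTreeMap<Alias, MutationRootField>),
}
```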
V3_GIT_ORIGIN_REV_ID: 76a197f5cf1efb34a8493c2b3356bea2bc2feb3c
## Description
This PR reverts the changes to `FieldMapping` and introduces the
`argument_mapping` in `ColumnMapping` instead.
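For illustration only, the rough shape of the change (field names and types are guesses, not the engine's actual definitions): the argument mapping now sits alongside the column mapping rather than on `FieldMapping`.

```rust
use std::collections::BTreeMap;

struct ColumnMapping {
    // The data connector column this field maps to.
    column: String,
    // Maps an OpenDD argument name to the data connector's argument name.
    argument_mapping: BTreeMap<String, String>,
}
```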
V3_GIT_ORIGIN_REV_ID: badca2416b1a1b82d1a596aa60bf7a39b3b6152a