### What
Part of the ongoing tidy-up of errors, this splits out the error types for
the boolean expression stages.
### How
Remove items from `Error` and move files around. Functional no-op.
V3_GIT_ORIGIN_REV_ID: ccf1f29600a169a3787d744c7f60e79220aef8d2
### What
To avoid confusion between the `Error` type and the `Error` trait.
### How
Import `thiserror::Error` explicitly in place.
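For illustration only (the enum and message below are made-up stand-ins, not the real definitions), one way this looks at a derive site:

```rust
// Illustrative only: the derive macro is brought into scope explicitly in the
// module that uses it, so `Error` here unambiguously means `thiserror::Error`.
use thiserror::Error;

#[derive(Debug, Error)]
pub enum GraphqlConfigError {
    #[error("GraphqlConfig metadata object was not found")]
    MissingGraphqlConfig,
}
```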
V3_GIT_ORIGIN_REV_ID: b930480927b2c64537960cfb69f2b2b30921f4fd
This PR fixes the custom connector, whose schema endpoint wasn't actually
returning the correct output (it was missing some functions, procedures,
etc.). It then updates all tests that actually talk to the custom connector
so that their DataConnectorLink contains the latest version of its
schema/capabilities.
The test update is done by a new script added to the justfile that finds and
patches all metadata JSON files, inserting the new schema and capabilities
after reading them from the custom connector running in Docker.
V3_GIT_ORIGIN_REV_ID: f1825a6f74ddcb6c01198fe4a41de6b4fc0bf533
### What
As a treat, combine all Apollo errors into one enum.
### How
Remove items from `ObjectTypesError` and `Error`.
V3_GIT_ORIGIN_REV_ID: 5a16a030b35372283490f3de7343fcfca2fadea5
### What
And don't set it except for the main application; tests and test
infrastructure do not care.
### How
We use the `VERSION` constant, populated from the `RELEASE_VERSION`
environment variable at build time.
It's now also optional so tests don't have to specify it.
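A sketch of the pattern, assuming the constant is read with `option_env!` so builds without `RELEASE_VERSION` (such as tests) simply get `None`; the real wiring may differ:

```rust
// Illustrative only: capture RELEASE_VERSION at compile time, if present.
pub const VERSION: Option<&str> = option_env!("RELEASE_VERSION");

// The main application reports it; tests just leave it unset.
pub fn reported_version() -> &'static str {
    VERSION.unwrap_or("unknown")
}
```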
V3_GIT_ORIGIN_REV_ID: 1bfc2efb060307cc9446bf07e944e107f0607ae0
### What
Update changelogs for new release.
### How
V3_GIT_ORIGIN_REV_ID: 7adfd3c2c53b912bb7c4604ebed93601a201c15e
This PR removes the usage of ndc_models in the field arguments IR and
uses the actual arguments IR instead. This change was missed in #810.
V3_GIT_ORIGIN_REV_ID: 414b4eda9724e7702b5a09ea2855b457d7ce88d6
### What
`ValueExpression` contained boolean expressions, and was used in places
where boolean expressions were not allowed.
### How
This creates a new type `ValueExpressionOrPredicate`, and uses it in the
places where boolean expressions are allowed.
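A rough sketch of the split, with simplified placeholder variants (the real definitions in metadata-resolve carry more information):

```rust
// Placeholder for the resolved predicate type used below.
pub struct ModelPredicate;

/// Usable anywhere a plain value is expected.
pub enum ValueExpression {
    Literal(serde_json::Value),
    SessionVariable(String),
}

/// Usable only where boolean expressions are allowed, e.g. argument presets.
pub enum ValueExpressionOrPredicate {
    Literal(serde_json::Value),
    SessionVariable(String),
    BooleanExpression(Box<ModelPredicate>),
}
```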
---------
Co-authored-by: Daniel Chambers <daniel@hasura.io>
V3_GIT_ORIGIN_REV_ID: c0a07c5e0096aeb4369ca7d6b5147451d1ccd14d
### What
Break down the big `Error` enum some more. This time, add all errors
from the `object_types` stage into the `ObjectTypesError` enum.
### How
Moving code around. Functional no-op.
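As a sketch of the shape this takes (the variant names here are invented for illustration), the stage returns its own error type and the big `Error` just wraps it:

```rust
use thiserror::Error;

// Hypothetical stage-local error for the object_types stage.
#[derive(Debug, Error)]
pub enum ObjectTypesError {
    #[error("unknown field {field_name} in object type {type_name}")]
    UnknownField { field_name: String, type_name: String },
}

// The top-level error keeps a single passthrough variant for the stage.
#[derive(Debug, Error)]
pub enum Error {
    #[error(transparent)]
    ObjectTypesError(#[from] ObjectTypesError),
}
```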
V3_GIT_ORIGIN_REV_ID: 154d4b40b21365c783d545a23cf18be623f2a3de
### What
In the haste of breaking things up, we realised we had put the `relay`
checks in with the `apollo` ones, when they are two separate things.
### How
Split up the `apollo` step into separate `relay` and `apollo` steps.
Functional no-op.
V3_GIT_ORIGIN_REV_ID: 8d02d3cb77b1248239be37ed61d0b87b9b734fdf
### What
We pass around a lot of `BTreeMap` and `IndexMap` types. This has two
problems:
a) every time we change the items inside, we have to manually update lots
of call sites
b) we can't attach useful behaviour to them
### How
This adds some wrappers around `object_types`, and adds a `.get()`
function with a useful default error. This error only contains the
missing `type_name`, so the idea is that it would be wrapped in another
error that would provide more context. The hope is that we can get away
from the one giant error enum, and instead have a set of smaller error
types that live alongside each resolving stage.
In isolation this PR isn't very interesting, as it's a tiny drop in the
ocean, so really it's here to say, "is this a thing we vaguely like?"
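A minimal sketch of the wrapper idea, with simplified names and a placeholder value type; the real wrapper lives alongside the `object_types` stage:

```rust
use std::collections::BTreeMap;

pub struct ObjectTypeDefinition; // placeholder for the resolved type

// Newtype wrapper instead of passing the raw BTreeMap around.
pub struct ObjectTypes(BTreeMap<String, ObjectTypeDefinition>);

// Small error carrying only the missing type name; callers wrap it with
// stage-specific context.
#[derive(Debug)]
pub struct ObjectTypeNotFoundError {
    pub type_name: String,
}

impl ObjectTypes {
    pub fn get(
        &self,
        type_name: &str,
    ) -> Result<&ObjectTypeDefinition, ObjectTypeNotFoundError> {
        self.0.get(type_name).ok_or_else(|| ObjectTypeNotFoundError {
            type_name: type_name.to_owned(),
        })
    }
}
```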
V3_GIT_ORIGIN_REV_ID: 804a703d7155ce5b929937689d5649c6571357b0
### What
Put all the errors that happen in the `data_connectors` stage into a new
type `DataConnectorError`.
### How
Moving enum members to the new type. Functional no-op.
V3_GIT_ORIGIN_REV_ID: 2e50eb561ce6b7c7fac4de4b31c9957f7ee4696c
### What
Much like https://github.com/hasura/v3-engine/pull/808, move an error to
the place it is thrown.
### How
Move files around. Functional no-op.
V3_GIT_ORIGIN_REV_ID: a61527b57662efada2c7d2049e12f103deec1e4e
### What
We have a giant error file in `metadata-resolve`, and it's really
unclear what can go wrong where. Let's improve it in some small way.
### How
Move `GraphqlConfigError` to the `graphql_config` stage, and make that
stage only return that kind of error.
Functional no-op.
---------
Co-authored-by: Samir Talwar <samir.talwar@hasura.io>
V3_GIT_ORIGIN_REV_ID: 9aa58875a24d66e6945c9ead6434fedb124f1dd8
This PR removes the usage of `ndc_models` `Expression` types from the
IR. It then adds logic to map from the new IR `FilterExpression` types
to `ndc_models` types in the plan part of the code where IR ->
ndc_models mapping is performed. The new IR types and the change to the
IR creation logic are mostly in `crates/execute/src/ir/filter.rs`.
The new IR types have some optimisations applied to them. In particular,
some basic boolean expression simplification logic is applied when
`FilterExpression::mk_and`, `mk_or`, and `mk_not` are used. This strips
out redundant ands/ors/nots, resulting in simpler boolean expressions for
connectors to process. (This logic is similar to what GDC did in Hasura v2.)
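A hedged sketch of the kind of simplification meant here, over a toy expression type rather than the real IR: nested `And`s are flattened and a single operand collapses to itself, so the constructor never builds redundant nesting.

```rust
#[derive(Debug, Clone, PartialEq)]
pub enum Expression {
    And(Vec<Expression>),
    Or(Vec<Expression>),
    Not(Box<Expression>),
    Comparison(String), // stand-in for a real comparison node
}

pub fn mk_and(operands: Vec<Expression>) -> Expression {
    // Flatten directly nested ANDs; `And([])` operands disappear entirely,
    // since an empty AND is always true.
    let mut flattened = Vec::new();
    for operand in operands {
        match operand {
            Expression::And(inner) => flattened.extend(inner),
            other => flattened.push(other),
        }
    }
    // A single remaining operand needs no wrapping AND.
    if flattened.len() == 1 {
        flattened.pop().unwrap()
    } else {
        Expression::And(flattened)
    }
}
```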
Unfortunately it turned out that arguments had JSON-serialized
`ndc_models` types embedded in them, in particular, where argument
presets had been used to set a BooleanExpression as an argument value.
The old code was simply creating an `ndc_models::Expression` and
serializing it directly to JSON. This has now been refactored to retain
the new IR `FilterExpression` until `plan` time, at which point it is
mapped into the ndc type, serialized to JSON, and embedded in the
argument value. The type changes for this can be seen in
`crates/execute/src/ir/arguments.rs`, but the real logic is actually in
`crates/execute/src/ir/permissions.rs`, with the
`make_value_from_value_expression` and
`make_argument_from_value_expression` functions.
The mapping of expression and argument IR into `ndc_models` can be found
in `crates/execute/src/plan/common.rs`.
In `metadata_resolve`, some `ndc_models` types were removed and
non-`ndc_models` types were used instead. In particular,
`ndc_models::ComparisonOperatorName` (swapped with
`DataConnectorOperatorName`) and `ndc_models::UnaryComparisonOperator`
(swapped with a new `UnaryComparisonOperator` enum type).
This PR is a functional no-op, other than the boolean expression
simplification.
V3_GIT_ORIGIN_REV_ID: 13805430fb2e2ca5653406a094d719138f0d5b51
### What
Since these code comments end up in the docs, let's make sure we point
out these old types are no longer in favour.
### How
Updating doc comments.
V3_GIT_ORIGIN_REV_ID: a21190fbf2327e1ae265c76d1f467af063f1dd64
### What
If the target data connector for a relationship did not have the
`foreach` capability, we threw an error; however, we should allow this.
### How
Check data connector names for relationship target before throwing
capability error.
V3_GIT_ORIGIN_REV_ID: 157908f0077b7c3af67a77600e5f8c9fc65df7f2
Note: This PR is stacked on #797 and should be merged after it.
This PR removes the use of ndc_models types from the arguments and order
by IR. It then shifts the logic that maps the IR to `ndc_models` into
the plan part of the code, where the IR -> ndc_models mapping is
performed. This will help isolate the IR from `ndc_models` and move us
towards having multiple IR -> ndc_models mappings, one per supported
ndc version.
This is a functional no-op PR.
V3_GIT_ORIGIN_REV_ID: 66be868ff4c8185c6190537d570d88813cb7f410
This PR replaces most of the string newtypes in the open-dds crate with
new implementations based on `SmolStr`, using a macro adapted from the
macro used in ndc-models. `SmolStr` usage should result in fewer heap
allocations, and the macro ensures all the newtypes get the same
trait implementations (such as various `From` and `Borrow` instances)
and helper impl functions (such as `as_str`).
The new macro, `str_newtype`, creates a newtype wrapper around `SmolStr`
or another type (typically `Identifier`), and writes out all the various
trait implementations. It also takes a documentation string which
ensures that every newtype has a description in JSON Schema.
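A rough sketch of what such a macro can look like; the real `str_newtype` generates many more impls and also feeds the description into the JSON Schema output:

```rust
use smol_str::SmolStr;

// Simplified illustration of a string-newtype-generating macro.
macro_rules! str_newtype {
    ($name:ident, $doc:literal) => {
        #[doc = $doc]
        #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
        pub struct $name(SmolStr);

        impl $name {
            pub fn as_str(&self) -> &str {
                self.0.as_str()
            }
        }

        impl From<&str> for $name {
            fn from(value: &str) -> Self {
                Self(SmolStr::new(value))
            }
        }

        impl std::borrow::Borrow<str> for $name {
            fn borrow(&self) -> &str {
                self.0.as_str()
            }
        }
    };
}

str_newtype!(
    CollectionName,
    "The name of the ndc collection that a model points to."
);
```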
The changes in the downstream crates are just adjusting to use the new
newtypes.
Also:
* `QueryGraphqlConfig` used a lot of `String`-typed properties; these have
been changed to the appropriate `GraphQlTypeName` and
`GraphQlFieldName` types.
* metadata-resolve had a duplicate newtype called
`ConnectorArgumentName`, which duplicated open-dds's
`DataConnectorArgumentName`. This has been removed and replaced.
* A new newtype called `CollectionName` has been added in open-dds to
represent the name of the ndc collection that a model points to. This
was previously just a string.
V3_GIT_ORIGIN_REV_ID: d5a33756ef0e110b2255478358a38e42e6ff89a2
This PR just sorts the definitions in the Open DD JSON schema file, so
that when it changes, we can get better diffs because the types won't
move around inside the file.
V3_GIT_ORIGIN_REV_ID: 80221fae1b4d6e4b98b658f9ba2f19413abf7389
### What
This PR fixes a bug with variable nullability coercion. Specifically,
providing a non-null variable for a nullable field should work, as all
non-nullable variables can be used as nullable variables via "coercion".
Related issues:
* https://hasurahq.atlassian.net/browse/V3ENGINE-243
* https://github.com/hasura/v3-e2e-testing/pull/224 (the test for this
change)
### How
I think someone did something clever with macros, so I tried to unpack
the macro to make sense of it, and in the process spotted the bug: it's
not that the location must be nullable AND the variable must not be; it's
that _if_ the variable is nullable, the location must also be nullable.
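In other words (a simplified sketch, not the actual macro code):

```rust
// Whether a variable of the given nullability can be supplied to a location
// with the given nullability. Non-nullable variables coerce to nullable
// locations; nullable variables cannot be used at non-nullable locations.
pub fn variable_usable_at_location(variable_nullable: bool, location_nullable: bool) -> bool {
    if variable_nullable {
        location_nullable
    } else {
        true
    }
}
```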
V3_GIT_ORIGIN_REV_ID: 4f4ba3fe6898220bbba9ae2867233cd762bef1cb
Update the changelog for new version
---------
Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com>
V3_GIT_ORIGIN_REV_ID: b87f16ebaa1471f010ec461be097bcfd6648c99a
This PR updated ndc-models to the latest version on main. This version
is still a 0.1.x version, but it now includes all the [new
newtypes](https://github.com/hasura/ndc-spec/pull/156) that wrap
previously stringly-typed things. For example, `ArgumentName`,
`FieldName`, etc.
This pervades the entire engine, but thankfully the changes are mostly
mechanical and repetitive. Usually you will see conversions from
`String`-typed variables into the newtypes using this sort of form:
`FieldName::from(string.as_str())`, which is the most efficient way of
copying the value (the str slice is copied). Or you will see usages of
the newtype as a raw string by `.as_str()`-ing it. Converting a newtype
into a `String` can be done with `.into()` if owned, but if referenced,
`.as_str().to_owned()` performs the clone and type conversion.
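For example (illustrative snippet; `FieldName` stands in for any of the new newtypes):

```rust
// Typical conversion shapes seen throughout the diff.
pub fn field_name_conversions(name: &str) -> (ndc_models::FieldName, String) {
    // &str -> newtype: copies the str slice.
    let field_name = ndc_models::FieldName::from(name);
    // Referenced newtype -> owned String: clone + convert.
    let back_to_string = field_name.as_str().to_owned();
    (field_name, back_to_string)
}
```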
Other changes:
* A few minor `ok_or()` usages (or similar) have been converted into lazy
error construction variants (e.g. `ok_or_else()`)
V3_GIT_ORIGIN_REV_ID: 64a371ae6197ef3be98a6f7cdc4052d654a43da0
This PR adds code that handles multiple versions of ndc-models in the
execution pipeline. Depending on whether a connector supports `v01` or
`v02` models, a different set of types is used to send and receive the
HTTP requests.
However, the engine internally still uses the latest (v02) models inside
its IR. Unfortunately, it was going to be quite traumatic to prevent the
engine from using ndc models inside the IR and during response
processing and remote joins. This means that the engine generates v02
requests, and these are downgraded into v01 requests. v01 responses are
upgraded to v02 responses before being processed by the engine.
The ndc client (`execute::ndc::client`) now only takes new wrapper enum
types (`execute::ndc::types`) that split between v01 or v02
requests/responses. Every place that carries an ndc request now carries
this type instead, which allows it to carry either a v01 or a v02
request.
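A hedged sketch of the wrapper shape (the payload types are placeholders here; the real enums in `execute::ndc::types` wrap the actual v01 and v02 ndc-models request/response types):

```rust
// Placeholder request types standing in for the versioned ndc-models types.
pub mod v01 {
    pub struct QueryRequest;
}
pub mod v02 {
    pub struct QueryRequest;
}

// The client only accepts this wrapper, so callers can hand it either version.
pub enum NdcQueryRequest {
    V01(v01::QueryRequest),
    V02(v02::QueryRequest),
}
```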
When ndc requests are created during planning, all creation goes via the
new `execute::plan::ndc_request` module. This inspects the connector's
supported version, creates the necessary request, and if needed,
downgrades it to v01.
When ndc responses are read during planning or during remote joins, they
are upgraded to v02 via helper functions defined on the types in
`execute::ndc::types`.
The upgrade/downgrade code is located in `execute::ndc::migration`. Keep
in mind the "v02" types are currently the same as the "v01" types so the
migration code is not doing much. This will change as the v02 types are
modified.
However, this approach has its drawbacks. One is that it prevents
changes to the ndc types [like
this](https://github.com/hasura/ndc-spec/pull/158) without a fair bit of
pain (see
[comment](https://github.com/hasura/ndc-spec/pull/158#issuecomment-2202127094)).
Another is that the downgrade code can fail at runtime, and it is not
immediately obvious to developers using new (but currently unused) v02
features that their feature would fail on v01, because the mapping to v01
has already been written. Another is that we're paying some (small,
probably?) performance cost by upgrading/downgrading types, because we
need to rebuild data structures.
Also:
* `execute::ndc::response` has been merged into `execute::ndc::client`,
since it was inextricably linked.
V3_GIT_ORIGIN_REV_ID: f3f36736b52058323d789c378fed06af566f39a3
### What
When generating the GraphQL schema for relationship fields in a model
filter, the engine ignored the `relation_comparisons` capability of the
data connector, and would generate the schema even for data connectors
which don't have this capability. This PR fixes that.
### How
While generating fields for the filter input type, take the relationship
capabilities into account.
The `ObjectBooleanExpressionType` and `BooleanExpressionType` objects
are quite different, hence their schema generation is also different and
is split across two functions
(`build_comparable_relationships_schema` and
`build_new_comparable_relationships_schema`). Added a check of the
relationship comparison capability to both functions.
V3_GIT_ORIGIN_REV_ID: dce2b88f7792e01e5bb390ecdb580e223ec80f01
### What
# BooleanExpressionType
A new metadata kind `BooleanExpressionType` can now be defined. These
can be used in place of `ObjectBooleanExpressionType` and
`DataConnectorScalarRepresentation`, and allow more granular control of
comparison operators and how they are used.
The old metadata types still work, but will eventually be deprecated.
```yaml
kind: BooleanExpressionType
version: v1
definition:
  name: album_bool_exp
  operand:
    object:
      type: Album
      comparableFields:
        - fieldName: AlbumId
          booleanExpressionType: pg_int_comparison_exp
        - fieldName: ArtistId
          booleanExpressionType: pg_int_comparison_exp_with_is_null
        - fieldName: Address
          booleanExpressionType: address_bool_exp
      comparableRelationships:
        - relationshipName: Artist
          booleanExpressionType: artist_bool_exp
  logicalOperators:
    enable: true
  isNull:
    enable: true
  graphql:
    typeName: app_album_bool_exp
```
```yaml
kind: BooleanExpressionType
version: v1
definition:
  name: pg_int_comparison_exp
  operand:
    scalar:
      type: Int
      comparisonOperators:
        - name: equals
          argumentType: String!
        - name: _in
          argumentType: "[String!]!"
      dataConnectorOperatorMapping:
        - dataConnectorName: postgres_db
          dataConnectorScalarType: String
          operatorMapping:
            equals: _eq
  logicalOperators:
    enable: true
  isNull:
    enable: true
  graphql:
    typeName: app_postgres_int_bool_exp
```
### How
Remove the feature flag, unhide the JsonSchema items, and fix a few
missing bits of JsonSchema that the tests didn't warn us about before.
V3_GIT_ORIGIN_REV_ID: dd3055d926fdeb7446cd57085679f2492a4358a1
### What
This is a no-op refactor that unifies the `get_base_type` and
`get_underlying_type_name` functions, whose purposes are the same.
### How
Rename the erstwhile `get_base_type` fn to `get_underlying_type_name`,
remove the original `get_underlying_type_name`, and update the rest of
the code to use the unified function.
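For reference, a simplified sketch of what such a helper does (toy type representation, not the engine's real one):

```rust
// Toy representation of a (possibly wrapped) GraphQL-style type.
pub enum TypeReference {
    Named(String),
    List(Box<TypeReference>),
    NonNull(Box<TypeReference>),
}

// Peel off list/non-null wrappers until the underlying named type is reached.
pub fn get_underlying_type_name(ty: &TypeReference) -> &str {
    match ty {
        TypeReference::Named(name) => name,
        TypeReference::List(inner) | TypeReference::NonNull(inner) => {
            get_underlying_type_name(inner)
        }
    }
}
```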
V3_GIT_ORIGIN_REV_ID: 0b0d999670641ba265fde153af9b43b4d865e215