The PR adds code that handles multiple versions of ndc-models in the
execution pipeline. Depending on whether a connector supports `v01`
or `v02` models, a different set of types is used to send and receive
HTTP requests.
However, the engine internally still uses the latest (v02) models inside
its IR. Unfortunately, it was going to be quite traumatic to prevent the
engine from using ndc models inside the IR and during response
processing and remote joins. This means that the engine always generates
v02 requests; for v01 connectors these are downgraded into v01 requests,
and v01 responses are upgraded to v02 responses before the engine
processes them.
The ndc client (`execute::ndc::client`) now only takes new wrapper enum
types (`execute::ndc::types`) that split between v01 and v02
requests/responses. Every place that carries an ndc request now carries
this type instead, which allows it to carry either a v01 or a v02
request.
When ndc requests are created during planning, all creation goes via the
new `execute::plan::ndc_request` module. This inspects the connector's
supported version, creates the necessary request, and if needed,
downgrades it to v01.
When ndc responses are read during planning or during remote joins, they
are upgraded to v02 via helper functions defined on the types in
`execute::ndc::types`.
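In rough outline, the request-side dispatch looks like this (a compilable sketch with stand-in types; the enum and function names here are illustrative, not the exact ones in the PR):

```rust
// Stand-ins for the two ndc-models crates referenced by the engine.
mod v01 { pub struct QueryRequest; }
mod v02 { pub struct QueryRequest; }

#[derive(Clone, Copy)]
pub enum NdcVersion { V01, V02 }

// Wrapper enum carried everywhere an ndc request used to be.
pub enum NdcQueryRequest {
    V01(v01::QueryRequest),
    V02(v02::QueryRequest),
}

// This mapping can fail at runtime if the request uses v02-only features.
fn downgrade_v02_query_request(_req: v02::QueryRequest) -> Result<v01::QueryRequest, String> {
    Ok(v01::QueryRequest) // field-by-field conversion in the real migration code
}

// Planning always builds a v02 request, then downgrades for v01 connectors.
pub fn make_ndc_query_request(
    request: v02::QueryRequest,
    version: NdcVersion,
) -> Result<NdcQueryRequest, String> {
    match version {
        NdcVersion::V02 => Ok(NdcQueryRequest::V02(request)),
        NdcVersion::V01 => Ok(NdcQueryRequest::V01(downgrade_v02_query_request(request)?)),
    }
}
```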
The upgrade/downgrade code is located in `execute::ndc::migration`. Keep
in mind that the "v02" types are currently the same as the "v01" types,
so the migration code is not doing much. This will change as the v02
types are modified.
However, this approach has its drawbacks. One is that it prevents
changes to the ndc types [like
this](https://github.com/hasura/ndc-spec/pull/158) without a fair bit of
pain (see
[comment](https://github.com/hasura/ndc-spec/pull/158#issuecomment-2202127094)).
Another is that the downgrade code can fail at runtime, and it is not
immediately obvious to developers adopting new (but as-yet-unused) v02
features that their feature would fail on v01 connectors, since the
mapping to v01 has already been written and nothing flags the gap at
compile time. Another is that we're paying some (probably small)
performance cost by upgrading/downgrading types, because we need to
rebuild data structures.
Also:
* `execute::ndc::response` has been merged into `execute::ndc::client`,
since it was inextricably linked.
V3_GIT_ORIGIN_REV_ID: f3f36736b52058323d789c378fed06af566f39a3
### What
When generating the GraphQL schema for relationship fields in a model
filter, the engine ignores the data connector's `relation_comparisons`
capability and would generate schema even for data connectors that don't
have it. This PR fixes that.
### How
While generating fields for the filter input type, take the relationship
capabilities into account.
The `ObjectBooleanExpressionType` and `BooleanExpressionType` objects
are quite different, so their schema generation is also different and is
split across two functions (`build_comparable_relationships_schema` and
`build_new_comparable_relationships_schema`). The relationship
comparison capability is now checked in both functions.
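The guard added in both functions amounts to something like the following (a sketch with stand-in types; the real capability types come from `ndc-models`):

```rust
// Stand-ins for the relevant slice of the ndc capabilities.
struct RelationshipCapabilities {
    relation_comparisons: Option<()>, // Some(..) when the connector supports it
}
struct Capabilities {
    relationships: Option<RelationshipCapabilities>,
}

// Only generate comparable-relationship fields in the filter input type
// when the connector declares the relation_comparisons capability.
fn supports_relation_comparisons(capabilities: &Capabilities) -> bool {
    capabilities
        .relationships
        .as_ref()
        .is_some_and(|r| r.relation_comparisons.is_some())
}
```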
V3_GIT_ORIGIN_REV_ID: dce2b88f7792e01e5bb390ecdb580e223ec80f01
### What
# BooleanExpressionType
A new metadata kind `BooleanExpressionType` can now be defined. These
can be used in place of `ObjectBooleanExpressionType` and
`DataConnectorScalarRepresentation`, and allow more granular control of
comparison operators and how they are used.
The old metadata types still work, but will eventually be deprecated.
```yaml
kind: BooleanExpressionType
version: v1
definition:
  name: album_bool_exp
  operand:
    object:
      type: Album
      comparableFields:
        - fieldName: AlbumId
          booleanExpressionType: pg_int_comparison_exp
        - fieldName: ArtistId
          booleanExpressionType: pg_int_comparison_exp_with_is_null
        - fieldName: Address
          booleanExpressionType: address_bool_exp
      comparableRelationships:
        - relationshipName: Artist
          booleanExpressionType: artist_bool_exp
  logicalOperators:
    enable: true
  isNull:
    enable: true
  graphql:
    typeName: app_album_bool_exp
```
```yaml
kind: BooleanExpressionType
version: v1
definition:
  name: pg_int_comparison_exp
  operand:
    scalar:
      type: Int
      comparisonOperators:
        - name: equals
          argumentType: String!
        - name: _in
          argumentType: "[String!]!"
      dataConnectorOperatorMapping:
        - dataConnectorName: postgres_db
          dataConnectorScalarType: String
          operatorMapping:
            equals: _eq
  logicalOperators:
    enable: true
  isNull:
    enable: true
  graphql:
    typeName: app_postgres_int_bool_exp
```
### How
Remove feature flag, unhide JsonSchema items, fix a few missing bits of
JsonSchema the tests didn't warn us about before.
V3_GIT_ORIGIN_REV_ID: dd3055d926fdeb7446cd57085679f2492a4358a1
### What
This is a no-op refactor that unifies the `get_base_type` and
`get_underlying_type_name` functions, which serve the same purpose.
### How
Rename `get_base_type` to `get_underlying_type_name` and remove the old
`get_underlying_type_name`. Update the rest of the code to use the
renamed function.
V3_GIT_ORIGIN_REV_ID: 0b0d999670641ba265fde153af9b43b4d865e215
### What
Update ndc-postgres configuration to v4, including the new
`mutationsVersion: "v4"`.
### How
- sed `s/experimental_/v2_/g`
- `ndc-postgres-cli upgrade`
- `ndc-postgres-cli update`
V3_GIT_ORIGIN_REV_ID: 67c608c2a3f1e232d5e0725fe8817672aa3dd627
This PR introduces support for multiple versions of the ndc-spec by
adding a new `VersionedSchemaAndCapabilities` enum variant under the
`DataConnectorLink` in OpenDD. This allows the capture of both ndc
v0.1.* and v0.2.* schema and capabilities.
This is achieved by referencing the `ndc-models` crate twice, once for
`v0.1.4` and once for the first commit after `v0.1.4`. That commit was
chosen to avoid actual v0.2.0 breaking changes for now, while we lay in
this multiple version support plumbing. Future PRs will use a newer
commit and adopt the breaking changes where necessary. The
`VersionedSchemaAndCapabilities::V02` variant uses the v0.2
reference of `ndc-models`.
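In outline, this looks something like the following (a sketch with stand-in modules; the real definitions import `ndc-models` twice under different names in `Cargo.toml`):

```rust
// Stand-ins for the two pinned copies of the ndc-models crate:
// one at v0.1.4, one at the first commit after it.
mod ndc_models_v01 { pub struct SchemaResponse; pub struct CapabilitiesResponse; }
mod ndc_models_v02 { pub struct SchemaResponse; pub struct CapabilitiesResponse; }

// The new OpenDD variant capturing either version's schema and capabilities.
pub enum VersionedSchemaAndCapabilities {
    V01 {
        schema: ndc_models_v01::SchemaResponse,
        capabilities: ndc_models_v01::CapabilitiesResponse,
    },
    V02 {
        schema: ndc_models_v02::SchemaResponse,
        capabilities: ndc_models_v02::CapabilitiesResponse,
    },
}
```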
Then, during metadata resolve, when we resolve the
`DataConnectorContext` from `DataConnectorLink`, we perform a migration
of v0.1 types to v0.2 types and store and use the v0.2 types during
metadata resolve. This migration is performed in the new module
`ndc_migration`. We also record the `NdcVersion` (either `V01` or `V02`)
in the `DataConnectorLink`. The `execute` crate will need to use this to
determine which version to send to the connector at runtime (to be
implemented in a future PR).
The new changes to OpenDD are hidden from the JSON Schema via a new
`UnstableFeatures` flag, and the use of the new variant is gated behind
it in metadata resolve, since we don't yet support it upstream in the
`execute` crate.
V3_GIT_ORIGIN_REV_ID: d6d8a768ea3537c0b5e620799e94d3dd1e529526
### What
Adds a new Open DD type `OrderByExpression` and defines the `ModelV2`
type, as described in
https://github.com/hasura/v3-engine/blob/main/rfcs/open-dd-expression-type-changes.md.
### How
- Added new types `OrderByExpression`, `ModelV2` and
`ModelGraphQlDefinitionV2` to the `open-dds` crate.
- Added new `UnstableFeatures` flag `enable_order_by_expressions` to
`metadata-resolve`. This is not yet used, as `metadata-resolve` does not
yet use the new Open DD types.
---------
Co-authored-by: Daniel Harvey <danieljamesharvey@gmail.com>
V3_GIT_ORIGIN_REV_ID: e861e88f293f1b380e6346de8490b96cec6f65bb
### What
Changed tests to use the new `BooleanExpressionType` until something
broke. Fortunately something broke - we need to add the scalar boolean
expression types to the data connector scalar types for various lookups.
### How
Split scalar boolean expression type resolution into its own step so
it's available earlier in the pipeline.
Loop through the resolved types and add them to the
`data_connector_scalar_types` outputs.
V3_GIT_ORIGIN_REV_ID: dbd8969c3e9e9d8db1d4a34e93aefc34bdf31421
### What
Allow engine to connect to NDCs via HTTPS.
### How
Add `cacert` to Docker image using Nix.
V3_GIT_ORIGIN_REV_ID: 52458920236f3868cc8daf18e140f8536d9bc674
This PR refactors the `DataConnectorContext` type in
`crates/metadata-resolve/src/stages/data_connectors/types.rs` to remove
the extra copy of scalar types it had and simplifies and removes the
nesting in the types.
`DataConnectorContext` used to have its own copy of `scalars` which
contained `ScalarTypeInfo`s. However, it already contains a more
complete copy of scalar types inside `inner.schema.scalar_types`! Turns
out the `scalars` copy (and `ScalarTypeInfo`) is unnecessary. The only
value it added was having the computed `ComparisonOperators` struct,
which is simply copied from there to where it really lives on
`ScalarTypeWithRepresentationInfo` during the
`data_connector_scalar_types` stage.
So I've moved `ComparisonOperators`, and the code that creates it, to
`data_connector_scalar_types` and deleted `ScalarTypeInfo`. This meant
that `DataConnectorContext` only contained `DataConnectorCoreInfo`
(pointless!), so I inlined `DataConnectorCoreInfo` into
`DataConnectorContext` to simplify things. This removed `.inner` calls
all through the code.
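In rough outline (with stand-in types and field details elided), the shape change is:

```rust
use std::collections::BTreeMap;

// Stand-ins for the real metadata-resolve types.
struct SchemaResponse { scalar_types: BTreeMap<String, ()> }
struct ScalarTypeInfo; // deleted: its only value was the computed ComparisonOperators

// Before: a wrapper layer plus a duplicate copy of the scalar types.
struct DataConnectorCoreInfo { schema: SchemaResponse }
struct DataConnectorContextBefore {
    inner: DataConnectorCoreInfo,
    scalars: BTreeMap<String, ScalarTypeInfo>, // duplicates inner.schema.scalar_types
}

// After: DataConnectorCoreInfo inlined and the duplicate removed, so `.inner`
// disappears; ComparisonOperators now lives in data_connector_scalar_types.
struct DataConnectorContextAfter { schema: SchemaResponse }
```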
These changes help my work with supporting multiple `ndc-models`
versions because it simplifies the number of places we store and deal
with scalar types.
This PR is a functional no-op.
V3_GIT_ORIGIN_REV_ID: 3beb0b07abc7ff5cfaa7e9b60eb46aec94d7ec1a
## Description
Upgrade Rust, as a treat. Functional no-op.
---------
Co-authored-by: Samir Talwar <samir@functional.computer>
V3_GIT_ORIGIN_REV_ID: 1e0014049e89b8658326c8d8f652df800c415526
### What
In https://github.com/hasura/v3-engine/pull/750 and
https://github.com/hasura/v3-engine/pull/754 we added a number of checks
for boolean expression types that can be run once we know the data
source they will be used against.
Previously these checks were only run for model `where` clauses; this
pull request moves them into a shared folder and also runs them when
boolean expression types are used as model or command arguments.
### How
Add a new `arguments` resolve step. It doesn't fit inside the usual
`commands` or `models` steps, as we need the data sources for both to
have been validated, plus access to all the resolved `relationships`
outputs.
V3_GIT_ORIGIN_REV_ID: f713659962e3f20b2c85f287b6c362fb52ffa1ed
### What
Much like https://github.com/hasura/v3-engine/pull/774, we split the big
files into separate modules. Functional no-op.
### How
Copy pasta, `just fix-local`, mostly.
V3_GIT_ORIGIN_REV_ID: ba7e40057583f98df54573c72663a4a2d2c4a4ab
### What
Tiny no-op PR to split the `command` stage of `metadata-resolve` into
smaller modules the same way `models` and others work.
### How
Copy and paste, run `just fix-local` to remove unrequired imports.
V3_GIT_ORIGIN_REV_ID: ab2846be83ba0e948ea222b42eb4fd7ffa5b3523
### What
Output all traces to stdout.
---------
Co-authored-by: Daniel Harvey <danieljamesharvey@gmail.com>
V3_GIT_ORIGIN_REV_ID: 06330076ca305a331996530ddcd4d4c13d46bd95
### What
This adds a flag, `--partial-supergraph`, which instructs the metadata
resolver to prune relationships to unknown subgraphs rather than failing
to resolve.
### How
The flag gets passed through as
`metadata_resolve::configuration::Configuration`, and known subgraphs
are now tracked in `MetadataAccessor`. If the flag is set and a
relationship target refers to an unknown subgraph, we return an empty
list of relationships instead of failing.
Some test infrastructure has been added to set configuration flags per
test.
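A sketch of the pruning behavior (names illustrative; the real flag is carried on `metadata_resolve::configuration::Configuration`):

```rust
use std::collections::HashSet;

struct Configuration {
    allow_unknown_subgraphs: bool, // set by --partial-supergraph
}

struct Relationship; // stand-in for the resolved relationship type

fn resolve_relationships(
    target_subgraph: &str,
    known_subgraphs: &HashSet<String>, // tracked in MetadataAccessor
    config: &Configuration,
) -> Result<Vec<Relationship>, String> {
    if !known_subgraphs.contains(target_subgraph) {
        return if config.allow_unknown_subgraphs {
            Ok(Vec::new()) // prune: target lives outside this partial supergraph
        } else {
            Err(format!("unknown subgraph: {target_subgraph}"))
        };
    }
    Ok(vec![Relationship]) // resolve as usual
}
```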
V3_GIT_ORIGIN_REV_ID: 6f0de2442a3bfc7c7a4c48e3dc7296dc1538cd67
### What
This PR adds the ability to include internal errors in API responses via
a command line argument `--expose-internal-errors`.
The default behavior remains not to show the contents of internal error
messages.
V3_GIT_ORIGIN_REV_ID: 11c47286d3fbceeda71df3a224853633aeea8902
### What
A scalar `BooleanExpressionType` lets us give operators fancy names.
This PR makes them actually work.
Functional no-op as this feature is behind a feature flag.
V3_GIT_ORIGIN_REV_ID: b49ef95d3d6672a1e27371fa5f4df63acd0849fc
### What
There was a report that using an argument name other than `headers` for
`DataConnectorLink.argumentPresets` doesn't work. This PR updates the
test to use a different name, and this seems to work.
### How
Update the headers argument name in the custom connector schema, and the
argument name in the `DataConnectorLink.argumentPresets` metadata.json.
V3_GIT_ORIGIN_REV_ID: 91346c01573f666e5707c0f41f4635d689bf5b98
### What
This removes an unused field. The remaining struct had just one field
and is private to the crate, so I converted it to a type alias.
### How
In order to make `rustc` highlight the issue, I reduced the number of
types we export as `pub`, changing some to `pub(crate)`.
Then I just deleted the field once the warning showed up.
V3_GIT_ORIGIN_REV_ID: 7a26e99f062ed0f2c7449e1f57bc76068f059afb
This PR fixes the version of the NDC Spec that Open DD's JSON schema
references when it uses NDC's schema and capabilities types in the
`DataConnectorLink`. OpenDD uses v0.1.4 of NDC, but its JSON schema
referenced v0.1.3. This corrects that.
V3_GIT_ORIGIN_REV_ID: bdbb417b3227861dae7835f6d3bda0d1bf935ea7
### What
Using `String` everywhere to represent subgraph identifiers is
definitely going to cause problems at some point. We can avoid this by
using `open_dds::identifier::SubgraphIdentifier` instead.
`Identifier` and `SubgraphIdentifier` were wrappers around `String`. I
have also changed them to wrap `SmolStr` instead, for better performance
as they're cloned a lot.
### How
First of all, I hid the innards of `Identifier` and
`SubgraphIdentifier`, instead exposing certain behaviors as methods.
Mostly, this means exposing `as_str()` and `to_string()`.
Then I replaced the internals with `SmolStr`. I had to change one place
where `RefCast` was used, but that was pretty much it.
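The resulting wrapper looks roughly like this (a minimal sketch, assuming the shape described above):

```rust
use smol_str::SmolStr;

// Innards hidden; behavior exposed as methods.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct Identifier(SmolStr);

impl Identifier {
    pub fn as_str(&self) -> &str {
        self.0.as_str()
    }
}

// Display gives us to_string() for free.
impl std::fmt::Display for Identifier {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str(self.as_str())
    }
}
```

Cloning a `SmolStr` is cheap (short strings are stored inline, longer ones are reference-counted), which is what makes this a win for identifiers that are cloned a lot.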
Finally, I switched out `String` for `SubgraphIdentifier` in
`QualifiedObject`. This necessitated a couple of constants for the two
"magic" subgraphs, `__globals` and `__unknown_namespace`, but was
otherwise fine.
I didn't touch `Qualified` for now.
V3_GIT_ORIGIN_REV_ID: 28664609c3173b181c3789093cb9796896642eb7
### What
The `lazy_static` macro is poorly maintained, fairly bloated, and has
been mostly superseded by
[`OnceLock`](https://doc.rust-lang.org/stable/std/sync/struct.OnceLock.html)
in the stdlib.
### How
1. I turned a couple of `static ref` values into `const`, sometimes by
creating `const fn` equivalents to other functions.
2. I inlined static behavior to construct a JSON pointer into some
tests, where we don't care too much about losing a few milliseconds.
3. For the rest, I replaced `lazy_static` with a `static OnceLock` and a
call to `OnceLock::get_or_init`.
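For reference, the third case follows the standard replacement pattern (a generic sketch, not code from the PR):

```rust
use std::sync::OnceLock;

// Before:
//
//     lazy_static! {
//         static ref GREETING: String = expensive_computation();
//     }
//
// After: a static OnceLock, initialised on first access.
fn greeting() -> &'static str {
    static GREETING: OnceLock<String> = OnceLock::new();
    GREETING.get_or_init(expensive_computation)
}

fn expensive_computation() -> String {
    "hello".to_owned()
}
```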
V3_GIT_ORIGIN_REV_ID: 18e4150a5fb24fe71f6ed77fe6178b7942405aa3
NOTE: This PR is stacked on #756 and should be shipped after that is
merged.
This PR enables the existing aggregate relationships work (see #725,
#731, #756) by default, removing the experimental flag it was previously
gated behind.
The new OpenDD schema changes that were added are also unhidden so that
they are visible in the OpenDD JSON Schema.
V3_GIT_ORIGIN_REV_ID: cfd86d8a9ea61887ccf0f1a5d08bdcc3dda59cdc
## Description
This PR implements the GraphQL schema and execution for aggregate
relationships.
In the `schema` crate, the new `model_aggregate_relationship_field`
function handles generating schema for ModelAggregateTarget
relationships. It mostly delegates the meat of its implementation to
reused logic; some refactoring has occurred to make this possible.
This involved changes in `select_many`, `select_aggregate` and
`model_arguments`. The creation of the model arguments field argument
now exists in `model_arguments` and is reused by `select_many` and
`select_aggregate`. The creation of all aggregate field arguments is now
in `select_aggregate::generate_select_aggregate_arguments`, and is then
reused when generating the aggregate relationship field. That field is
annotated with the new `RelationshipToModelAggregate` annotation.
In the `execute` crate, the logic for generating the aggregate selection
IR was moved from `select_aggregate` into `model_selection`, so that it
can be reused by the logic in `relationship` that now uses it to
generate an aggregate selection when encountering a
`RelationshipToModelAggregate` field.
Inside `relationship`, some rearranging was done so that
`build_local_model_relationship` and `build_remote_relationship` could
work with either a normal model selection IR or the new aggregate
selection IR. This necessitated moving the creation of that IR out of
those functions and into the callers, so that different callers can
create different IR (normal vs aggregate). This also reduced code
duplication.
New tests have been added to `engine` that cover aggregate relationships
and also remote joined aggregate relationships.
This PR also corrects two bugs in metadata resolve revealed by new
testing:
* The filter input field name in `GraphqlConfig` must be specified if
using an aggregate relationship
* The filter input type name defined on a `Model` must be specified if
that model is the target of an aggregate relationship. Conversely, the
filter input type name can be specified if the `Model` itself doesn't
define an aggregate but is still involved in an aggregate relationship
(this previously produced an error).
This PR completes the feature, but it is still hidden behind the
experimental flag. There will be a follow up PR to remove that and
expose the functionality by default.
JIRA: [V3ENGINE-160](https://hasurahq.atlassian.net/browse/V3ENGINE-160)
[V3ENGINE-160]:
https://hasurahq.atlassian.net/browse/V3ENGINE-160?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
V3_GIT_ORIGIN_REV_ID: d499371906f7af71a4017c7c3ae75b7693cd3fa7
### What
In order to more easily monitor and review changes to metadata
resolution, this introduces snapshot testing for both successful and
failing calls to `resolve`. I used [Insta](https://insta.rs/) for this.
### How
For tests of the failure case, we already had a text file with the
expected error, so I have turned those files into snapshot files. I
wrote a small script to move the files rather than deleting and
recreating them so I could guarantee that the contents have not changed.
(Unfortunately, Git's diff doesn't always recognise the move as a move
because Insta has added a header.)
For tests of the successful case, I added a line to snapshot the
metadata rather than discarding it.
I also rewrote the tests to use `insta::glob` so we could get rid of
`test_each`.
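The success-case tests end up looking roughly like this (a sketch; `resolve` here is a hypothetical stand-in for the real metadata-resolve entry point):

```rust
#[test]
fn test_passing_metadata() {
    // Run the test once per metadata file matching the glob.
    insta::glob!("passing/**/metadata.json", |path| {
        let metadata = std::fs::read_to_string(path).unwrap();
        // `resolve` stands in for the real entry point in metadata-resolve.
        let resolved = resolve(&metadata).expect("metadata should resolve");
        // Snapshot the resolved metadata instead of discarding it.
        insta::assert_debug_snapshot!(resolved);
    });
}
```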
V3_GIT_ORIGIN_REV_ID: 41bef4cf77bddb8d20d7c101df52ae149e8b0476
### What
Use `claims.jwt.hasura.io` instead of `https:~1~1hasura.io~1jwt~1claims`
as the JWT claims namespace, to avoid the awkward escaping. Also, update
the tests to use the new namespace.
This is related to the recent auth docs rehaul
(https://github.com/hasura/v3-docs/pull/448#discussion_r1653368661)
V3_GIT_ORIGIN_REV_ID: 42526785ebb82f96c4f92bada054a62251c9fc7c
### What
Deriving `Copy` just means we can drop a bunch of references and pass by
value.
### How
`#[derive(Clone, Copy)]` and fixing lints.
V3_GIT_ORIGIN_REV_ID: e15d323f8232755294d1f7a2c70ccf0de8a1632f
We shouldn't be replacing `TableScan`s for the `information_schema` and
`hasura` schemas. Previously we only had a check for the `hasura`
schema; this PR adds a check for `information_schema`. Queries on
`information_schema` will now work.
V3_GIT_ORIGIN_REV_ID: b25276556027b52ff940ddd3d094ea20f6fc7538
Adds a very experimental SQL interface to v3-engine for GenAI use cases.
---------
Co-authored-by: Abhinav Gupta <127770473+abhinav-hasura@users.noreply.github.com>
Co-authored-by: Gil Mizrahi <gil@gilmi.net>
Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com>
V3_GIT_ORIGIN_REV_ID: 077779ec4e7843abdffdac1ed6aa655210649b93
### What
Update the interface of `GraphQLResponse`
- make some functions public
- add some helper functions that are used in v3-engine-multitenant
V3_GIT_ORIGIN_REV_ID: 691ff8e3505f96ba9e9a2e8518023e4812319a05
## Description
Adding a test for this and fixing a few missing parts. Mostly threading
`boolean_expression_types` everywhere and adding them to our type
lookups. Behind a feature flag so this is a functional no-op.
V3_GIT_ORIGIN_REV_ID: 5fd6d5b9e06f0216e770b0715c59c0479881017f
### What
I'm not sure why this is still hidden, but it shouldn't be.
### How
We remove the flag.
V3_GIT_ORIGIN_REV_ID: 4a73e380e0daebe3370a6561bcd4056a9013410a
## Description
Previously we didn't check whether a boolean expression over strings was
used on a `String` field or a `User` field. Now we look up the types and
actually find out.
Functional no-op as behind a feature flag.
V3_GIT_ORIGIN_REV_ID: 8b7e94c4b873c49e206caa84e24c0d17c049c899
## What
This PR introduces a changelog file, `changelog.md`.
Any PR that is not simply a technical refactor should include a relevant
entry in this file.
Additionally, we simplify the pull request template. The template used
to contain a section for a changelog entry, which is now irrelevant.
V3_GIT_ORIGIN_REV_ID: 00881d86ffe87c4c0584b88b960837543dde34b7
This PR introduces the following changes to the query usage analytics
data shape:
- The `name` field in `RelationshipUsage` is just `RelationshipName`,
  without the `Qualified` wrapper. The `source` is already qualified,
  and the same qualification applies to `name`.
- The `used` field for both fields and input fields is now a list, since
  a field can use multiple OpenDD objects at a time.
  - Example: a root field can use a `Model` and a `Permission` (with
    both a filter and argument presets).
- The permission usage is revamped to express the permissions available
  in OpenDD:
  - Filter predicate: provides lists of the fields and relationships involved
  - Field presets: provides a list of the fields involved
  - Argument presets: provides a list of the arguments involved
- `GqlFieldArgument` is dropped in favor of `GqlInputField`:
  - OpenDD object usage is not specified for `GqlFieldArgument`, but an
    input argument with an object type can have a field presets
    permission. It is replaced with `GqlInputField` to allow specifying
    the permission usage.

This PR also includes a JSON schema for the data shape, with a golden
test to verify it.
V3_GIT_ORIGIN_REV_ID: f0bf9ba201471af367ef5027bc2c8b9f915994ac
## Description
According to the NDC headers pass-through spec, commands can include
headers in their responses, which the engine forwards to the client as
response headers. This PR implements that.
V3_GIT_ORIGIN_REV_ID: 4fe458db02c5dd51f4674e4e013312f8e179c087
I noticed a few extra calls to `.clone()` while working on an unrelated
refactor. I want to remove them for brevity and simplicity; I don't
expect a performance improvement.
This turns on the Clippy warning `redundant_clone`, which detects
unnecessary calls to `.clone()` (and `.to_string()`).
It is an unstable warning and so might report some false positives. If
we find any, we can suppress the warning there.
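One way to enable such a lint crate-wide (illustrative; the PR may configure it via Cargo/workspace lint settings instead):

```rust
// At the crate root: warn when a clone's source is never used again.
#![warn(clippy::redundant_clone)]
```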
V3_GIT_ORIGIN_REV_ID: a713f29cf862d6f4cb40300105c6b9f96df00676
## Description
When using boolean expression types on models, we have to check that any
relationships they define are local, as we currently do not support
remote predicates. This adds these checks in the `models_graphql` stage,
once we know about (a) models and their sources, (b) boolean
expressions, and (c) relationships.
Behind a feature flag, so strictly a no-op.
V3_GIT_ORIGIN_REV_ID: 70b2e4b316f5b8d57fa06d5492cccdddca0aaf1c
## Description
A few debug lines slipped in recently; let's make `clippy` `warn` on
those, so they are kicked out by CI. Functional no-op.
V3_GIT_ORIGIN_REV_ID: 290f6de35f9315b68811eb5f15969fb0333e9d06