This PR adds code that handles multiple versions of ndc-models during
the execution pipeline. Depending on whether a connector supports `v01`
or `v02` models, a different set of types is used to send and receive
HTTP requests.
However, the engine internally still uses the latest (v02) models inside
its IR. Unfortunately, it was going to be quite traumatic to stop the
engine from using ndc models inside the IR, during response processing,
and in remote joins. Instead, the engine always generates v02 requests;
these are downgraded into v01 requests where necessary, and v01
responses are upgraded to v02 responses before the engine processes
them.
The ndc client (`execute::ndc::client`) now only takes new wrapper enum
types (`execute::ndc::types`) that split between v01 or v02
requests/responses. Every place that carries an ndc request now carries
this type instead, which allows it to carry either a v01 or a v02
request.
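As a rough sketch (these type and variant names are illustrative
assumptions, not necessarily the exact definitions in
`execute::ndc::types`), the wrapper enums look something like this:

```rust
// Illustrative sketch only. `ndc_models_v01` and `ndc_models_v02` stand for
// the two referenced versions of the `ndc-models` crate.
pub enum NdcQueryRequest {
    V01(ndc_models_v01::QueryRequest),
    V02(ndc_models_v02::QueryRequest),
}

pub enum NdcQueryResponse {
    V01(ndc_models_v01::QueryResponse),
    V02(ndc_models_v02::QueryResponse),
}
```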
When ndc requests are created during planning, all creation goes via the
new `execute::plan::ndc_request` module. This inspects the connector's
supported version, creates the necessary request, and if needed,
downgrades it to v01.
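A hedged sketch of that dispatch (the function, type, and error names
here are assumptions):

```rust
// Sketch: build a versioned request according to the connector's supported
// ndc version, downgrading when the connector only speaks v0.1.
pub fn make_ndc_query_request(
    plan: QueryExecutionPlan,
    ndc_version: NdcVersion,
) -> Result<NdcQueryRequest, RequestError> {
    let v02_request = make_v02_query_request(plan)?;
    match ndc_version {
        NdcVersion::V02 => Ok(NdcQueryRequest::V02(v02_request)),
        // The downgrade is fallible: it fails at runtime if the request
        // uses v0.2-only features.
        NdcVersion::V01 => Ok(NdcQueryRequest::V01(
            migration::downgrade_v02_query_request(v02_request)?,
        )),
    }
}
```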
When ndc responses are read during planning or during remote joins, they
are upgraded to v02 via helper functions defined on the types in
`execute::ndc::types`.
The upgrade/downgrade code is located in `execute::ndc::migration`. Keep
in mind that the "v02" types are currently the same as the "v01" types,
so the migration code is not doing much yet. This will change as the v02
types are modified.
However, this approach has its drawbacks:
* It makes changes to the ndc types [like
this](https://github.com/hasura/ndc-spec/pull/158) quite painful (see
[comment](https://github.com/hasura/ndc-spec/pull/158#issuecomment-2202127094)).
* The downgrade code can fail at runtime, and it is not immediately
obvious to developers adopting new v02 features that those features will
fail on v01 connectors, because the mapping back to v01 has already been
written.
* We pay some (small, probably?) performance cost for
upgrading/downgrading types, because we need to rebuild data structures.
Also:
* `execute::ndc::response` has been merged into `execute::ndc::client`,
since the two were inextricably linked.
V3_GIT_ORIGIN_REV_ID: f3f36736b52058323d789c378fed06af566f39a3
This PR introduces support for multiple versions of the ndc-spec by
adding a new enum, `VersionedSchemaAndCapabilities`, under the
`DataConnectorLink` in OpenDD. This allows the capture of both ndc
v0.1.* and v0.2.* schemas and capabilities.
This is achieved by referencing the `ndc-models` crate twice, once for
`v0.1.4` and once for the first commit after `v0.1.4`. That commit was
chosen to avoid actual v0.2.0 breaking changes for now, while we lay in
this multiple version support plumbing. Future PRs will use a newer
commit and adopt the breaking changes where necessary. The
`VersionedSchemaAndCapabilities::V02` variant uses the v0.2
reference of `ndc-models`.
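Roughly, the OpenDD side looks like this (a sketch; the inner type names
are assumptions):

```rust
// Sketch: the `ndc-models` crate is referenced twice in Cargo.toml under two
// different dependency names (one pinned to v0.1.4, one to the chosen commit),
// so both sets of types can coexist in the same build.
pub enum VersionedSchemaAndCapabilities {
    V01(SchemaAndCapabilitiesV01), // wraps the v0.1 ndc-models types
    V02(SchemaAndCapabilitiesV02), // wraps the v0.2 ndc-models types
}
```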
Then, during metadata resolve, when we resolve the
`DataConnectorContext` from `DataConnectorLink`, we perform a migration
of v0.1 types to v0.2 types and store and use the v0.2 types during
metadata resolve. This migration is performed in the new module
`ndc_migration`. We also record the `NdcVersion` (either `V01` or `V02`)
in the `DataConnectorLink`. The `execute` crate will need to use this to
determine which version to send to the connector at runtime (to be
implemented in a future PR).
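The shape of the recorded version and of the migration, sketched with
helper bodies elided (the real names live in `ndc_migration` and may
differ):

```rust
// Sketch: which protocol version to speak to the connector at runtime.
pub enum NdcVersion {
    V01,
    V02,
}

// Sketch: rebuild a v0.1 schema response as v0.2 types. The two versions are
// still structurally identical, so each `migrate_*` helper (elided) is
// currently a mechanical copy.
pub fn migrate_schema_response_from_v01(
    schema: ndc_models_v01::SchemaResponse,
) -> ndc_models_v02::SchemaResponse {
    ndc_models_v02::SchemaResponse {
        scalar_types: migrate_scalar_types(schema.scalar_types),
        object_types: migrate_object_types(schema.object_types),
        collections: migrate_collections(schema.collections),
        functions: migrate_functions(schema.functions),
        procedures: migrate_procedures(schema.procedures),
    }
}
```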
The new changes to OpenDD are hidden from the JSON Schema via a new
`UnstableFeatures` flag, and the use of the new variant is gated behind
it in metadata resolve, since we don't yet support it upstream in the
`execute` crate.
V3_GIT_ORIGIN_REV_ID: d6d8a768ea3537c0b5e620799e94d3dd1e529526
### What
Output all traces to stdout.
---------
Co-authored-by: Daniel Harvey <danieljamesharvey@gmail.com>
V3_GIT_ORIGIN_REV_ID: 06330076ca305a331996530ddcd4d4c13d46bd95
### What
Using `String` everywhere to represent subgraph identifiers is
definitely going to cause problems at some point. We can avoid this by
using `open_dds::identifier::SubgraphIdentifier` instead.
`Identifier` and `SubgraphIdentifier` were wrappers around `String`. I
have also changed them to wrap `SmolStr` instead, for better performance
as they're cloned a lot.
### How
First of all, I hid the innards of `Identifier` and
`SubgraphIdentifier`, instead exposing certain behaviors as methods.
Mostly, this means exposing `as_str()` and `to_string()`.
Then I replaced the internals with `SmolStr`. I had to change one place
where `RefCast` was used, but that was pretty much it.
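The resulting shape is roughly this (a sketch, not the exact code; the
derives are guesses):

```rust
use smol_str::SmolStr;

// Sketch: the innards are hidden; `SmolStr` makes clones cheap because short
// strings are stored inline and longer ones are reference-counted.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct Identifier(SmolStr);

impl Identifier {
    pub fn as_str(&self) -> &str {
        self.0.as_str()
    }
}

// `to_string()` comes via `Display`.
impl std::fmt::Display for Identifier {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str(self.as_str())
    }
}
```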
Finally, I switched out `String` for `SubgraphIdentifier` in
`QualifiedObject`. This necessitated a couple of constants for the two
"magic" subgraphs, `__globals` and `__unknown_namespace`, but was
otherwise fine.
I didn't touch `Qualified` for now.
V3_GIT_ORIGIN_REV_ID: 28664609c3173b181c3789093cb9796896642eb7
### What
The `lazy_static` macro is poorly maintained, fairly bloated, and has
been mostly superseded by
[`OnceLock`](https://doc.rust-lang.org/stable/std/sync/struct.OnceLock.html)
in the stdlib.
### How
1. I turned a couple of `static ref` values into `const`, sometimes by
creating `const fn` equivalents to other functions.
2. I inlined static behavior to construct a JSON pointer into some
tests, where we don't care too much about losing a few milliseconds.
3. For the rest, I replaced `lazy_static` with a `static OnceLock` and a
call to `OnceLock::get_or_init`.
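The third pattern looks roughly like this:

```rust
use std::collections::HashMap;
use std::sync::OnceLock;

// Sketch: where `lazy_static!` used to declare a `static ref`, we now declare
// a `static OnceLock` and initialise it on first access.
static TABLE: OnceLock<HashMap<&'static str, u32>> = OnceLock::new();

fn table() -> &'static HashMap<&'static str, u32> {
    TABLE.get_or_init(|| {
        let mut map = HashMap::new();
        map.insert("one", 1);
        map.insert("two", 2);
        map
    })
}
```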
V3_GIT_ORIGIN_REV_ID: 18e4150a5fb24fe71f6ed77fe6178b7942405aa3
### What
In order to more easily monitor and review changes to metadata
resolution, this introduces snapshot testing for both successful and
failing calls to `resolve`. I used [Insta](https://insta.rs/) for this.
### How
For tests of the failure case, we already had a text file with the
expected error, so I have turned those files into snapshot files. I
wrote a small script to move the files rather than deleting and
recreating them so I could guarantee that the contents have not changed.
(Unfortunately, Git's diff doesn't always recognise the move as a move
because Insta has added a header.)
For tests of the successful case, I added a line to snapshot the
metadata rather than discarding it.
I also rewrote the tests to use `insta::glob` so we could get rid of
`test_each`.
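The successful-case tests now look roughly like this (a sketch assuming
Insta's `glob` feature; `resolve_metadata` is a stand-in for the real
resolve call):

```rust
#[test]
fn test_passing_metadata() {
    // Runs the closure for every matching file and snapshots each result.
    insta::glob!("passing/**/metadata.json", |path| {
        let metadata = std::fs::read_to_string(path).unwrap();
        let resolved = resolve_metadata(&metadata).expect("metadata should resolve");
        insta::assert_debug_snapshot!(resolved);
    });
}
```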
V3_GIT_ORIGIN_REV_ID: 41bef4cf77bddb8d20d7c101df52ae149e8b0476
Adds a very experimental SQL interface to v3-engine for GenAI use cases.
---------
Co-authored-by: Abhinav Gupta <127770473+abhinav-hasura@users.noreply.github.com>
Co-authored-by: Gil Mizrahi <gil@gilmi.net>
Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com>
V3_GIT_ORIGIN_REV_ID: 077779ec4e7843abdffdac1ed6aa655210649b93
This PR introduces the following changes to the query usage analytics
data shape:
- The `name` field in `RelationshipUsage` is now just `RelationshipName`,
without the `Qualified` wrapper. The `source` is already qualified, and
the same qualification applies to `name`.
- The `used` field for both fields and input fields is now a list, since
a field can use multiple OpenDD objects at a time.
  - Example: a root field can use both a `Model` and a `Permission` (with
both filter and argument presets).
- Permission usage is revamped to express the permissions available in
OpenDD:
  - Filter predicate: provides lists of the fields and relationships
involved
  - Field presets: provides a list of the fields involved
  - Argument presets: provides a list of the arguments involved
- `GqlFieldArgument` is dropped in favor of `GqlInputField`. OpenDD
object usage could not be specified for `GqlFieldArgument`, yet an input
argument with an object type can have a field presets permission;
`GqlInputField` allows specifying that permission usage (see the sketch
below).
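A rough sketch of the revised shape (all field and type names beyond
those mentioned above are assumptions):

```rust
// Sketch only; the real types in `query-usage-analytics` are richer.
pub struct RelationshipUsage {
    pub name: RelationshipName, // no longer wrapped in `Qualified`
    pub source: Qualified<CustomTypeName>,
}

pub struct GqlInputField {
    pub name: String,
    pub used: Vec<OpenddObjectUsage>, // a field can use several OpenDD objects
}
```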
This PR also includes JSON schema for the data shape with a golden test
to verify.
V3_GIT_ORIGIN_REV_ID: f0bf9ba201471af367ef5027bc2c8b9f915994ac
## Description
According to the NDC headers pass-through spec, commands can include
headers in their responses, which the engine then forwards as response
headers to the client. This PR implements that behaviour.
V3_GIT_ORIGIN_REV_ID: 4fe458db02c5dd51f4674e4e013312f8e179c087
### TODO
- [x] Validate the presence of arguments in the data connector schema
- [x] Add test for IR
- [x] Add test using custom connector
## Description
JIRA: https://hasurahq.atlassian.net/browse/V3ENGINE-148
This is part 2 of the field arguments work.
It adds field arguments to the GraphQL schema and also handles the
execution part.
![image](https://github.com/hasura/v3-engine/assets/85472423/c8f55d70-6539-42ef-ab5a-91b74e5699e2)
## Changelog
### Product
- [x] community-edition
- [x] cloud
### Type
- [x] enhancement
### Changelog entry
Add support for field arguments
---------
Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com>
Co-authored-by: Samir Talwar <samir.talwar@hasura.io>
V3_GIT_ORIGIN_REV_ID: ea06b896feaca30419f81cd070fd4cfd608ef35b
Return a `T` instead of a `Result<T, E>` when we never return an error
(`E`) case.
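For example (illustrative only, not a specific function from the
codebase):

```rust
// Before: an error type that is never constructed forces every caller to
// handle or unwrap an impossible failure.
fn schema_name_before(prefix: &str) -> Result<String, std::convert::Infallible> {
    Ok(format!("{prefix}_schema"))
}

// After: return the value directly.
fn schema_name_after(prefix: &str) -> String {
    format!("{prefix}_schema")
}
```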
I also enabled some more warnings. `unnecessary_box_returns` has been
suppressed where appropriate, and `unused_async` doesn't seem to be
violated anywhere any more.
I got rid of some calls to `.unwrap()` too.
V3_GIT_ORIGIN_REV_ID: 015ebd05978cf8c2d87474a90e0cd4333779a761
## Description
This PR moves all the tests that were added to
`crates/engine/tests/validate_metadata_artifacts/aggregate_expressions`
to the metadata-resolve `metadata_golden_tests`.
All the `error.txt` files got renamed to `expected_error.txt` and the
`metadata is not consistent:` bit on the front of the error got trimmed
off.
I had to change the way that `test_failing_metadata` works so that
instead of looking for directories under `failing/` it looks for
`metadata.json` files under `failing/`. This is because not every
directory has a test in it; some directories are just used for grouping
(eg `failing/aggregate_expressions/`). This doesn't change which tests
get run. The only side effect is that every test name has `_metadata`
suffixed to it (the filename). 😢 `test-each` doesn't appear to offer a
way to disable this behaviour.
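The file discovery now works something like this (a hedged sketch using
the `walkdir` crate purely for illustration; the actual harness drives
`test-each`, and `run_failing_metadata_test` is a hypothetical helper):

```rust
// Sketch: enumerate `metadata.json` files rather than directories, so that
// grouping-only directories are skipped.
for entry in walkdir::WalkDir::new("failing")
    .into_iter()
    .filter_map(Result::ok)
    .filter(|entry| entry.file_name() == "metadata.json")
{
    run_failing_metadata_test(entry.path()); // hypothetical helper
}
```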
V3_GIT_ORIGIN_REV_ID: dece7cd3bb4623c51050d12471d2c0990fd69bfd
This is Part 2 in a stacked PR set that delivers aggregate root field
support.
* Part 1: OpenDD: https://github.com/hasura/v3-engine/pull/683
* Part 3: GraphQL API: https://github.com/hasura/v3-engine/pull/685
JIRA: [V3ENGINE-159](https://hasurahq.atlassian.net/browse/V3ENGINE-159)
## Description
This PR implements the metadata resolve part of this feature: it adds
support for resolving `AggregateExpression`s and validates their use
when linked to a `Model`.
The bulk of the changes can be found in:
* `crates/metadata-resolve/src/stages/aggregates/*` - This is where the
`AggregateExpression`s are resolved
* `crates/metadata-resolve/src/stages/models/mod.rs` - This is where we
validate the `AggregateExpression` specified for use in the model is
actually compatible with the model and its data connector
The `ndc-spec` version used has been lifted to the latest version that
adds support for aggregates over nested objects
(https://github.com/hasura/ndc-spec/pull/144). This necessitated changes
in the Custom Connector, but actual functionality to implement
aggregation over nested objects is implemented in Part 3.
There are also some changes in
`crates/metadata-resolve/src/types/subgraph.rs` where the `Display`
trait for the various `Qualified<T>`, `QualifiedTypeReference`, etc
types has been reworked so that they print more cleanly, with the
subgraph being put outside the type syntax, and array types getting
formatted correctly. For example, previously an array of Varchars would
have printed as `Varchar (in subgraph default)!`, but now it properly
formats as `[Varchar!]! (in subgraph default)`.
This was necessary to make useful error messages using these types.
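The reworked formatting behaves roughly like this (a self-contained
sketch; the real `Qualified<T>` has different internals):

```rust
use std::fmt;

// Sketch: print the subgraph outside the type syntax, so nullability (`!`)
// and array brackets stay attached to the type itself.
struct Qualified<T> {
    subgraph: String,
    name: T,
}

impl<T: fmt::Display> fmt::Display for Qualified<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{} (in subgraph {})", self.name, self.subgraph)
    }
}
```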
A tonne of tests have been added in
`crates/engine/tests/validate_metadata_artifacts/aggregate_expressions`
to test every error condition of the metadata resolve process.
[V3ENGINE-159]:
https://hasurahq.atlassian.net/browse/V3ENGINE-159?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
V3_GIT_ORIGIN_REV_ID: ffd859127a3f1560707f06ef01906c9d1b183d31
It turns out that code generators based on JSON schemas don't always
play nicely with characters such as `' '` or `'/'`. To work around this,
when we use the schema name as a title, we replace any character that is
not alphanumeric or an underscore with an underscore (`'_'`).
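A minimal sketch of that sanitisation:

```rust
// Sketch: map every character that is not alphanumeric or '_' to '_'.
fn sanitise_title(name: &str) -> String {
    name.chars()
        .map(|c| if c.is_alphanumeric() || c == '_' { c } else { '_' })
        .collect()
}
```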
V3_GIT_ORIGIN_REV_ID: 6af9a92c84c59294ff28bae27d7421627cde6a80
## Description
When we generate a JSON schema via `schemars`, we end up with duplicate
types in the schema, which get names like `ValueExpression2`, `Role2`,
and so on. This isn't ideal, and seems to arise for two reasons:
1. The type is polymorphic, and is monomorphised in two ways, and thus
the types can't be unified.
2. The type is monomorphic, but is used inside and outside of its home
module.
The first problem was fixed previously by splitting polymorphic types,
but the second has proven to be a bit more work. This PR finally solves
the problem by introducing a new library, `jsonschema-tidying`:
* First, we search the definitions within the JSON schema for any whose
names end in a number, such as `ValueExpression2` or `MetadataV2`.
* Then, we look for types whose names match everything up to the final
numeric digits, and discard any types for whom we can't find a match (so
we keep `ValueExpression2` because `ValueExpression` exists, but discard
`MetadataV2` because `MetadataV` does not).
* Next, we remove the duplicate definition from the definitions map,
potentially breaking links in both the schema _and_ the rest of the
definitions map.
* Finally, we traverse the entirety of the tree looking for any
references to the duplicate entry, and replace them with references to
the original entry.
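A hedged sketch of that pass (the real definitions are richer than
`serde_json::Value`, and the `$ref`-rewriting traversal is elided):

```rust
use serde_json::Value;
use std::collections::BTreeMap;

// Sketch: drop definitions like `ValueExpression2` when `ValueExpression`
// exists, then point all references at the surviving definition.
fn tidy_definitions(definitions: &mut BTreeMap<String, Value>, schema: &mut Value) {
    let duplicates: Vec<(String, String)> = definitions
        .keys()
        .filter_map(|name| {
            let base = name.trim_end_matches(|c: char| c.is_ascii_digit());
            // Keep `ValueExpression2` (its base exists); discard `MetadataV2`
            // (there is no `MetadataV` definition).
            (base != name && definitions.contains_key(base))
                .then(|| (name.clone(), base.to_owned()))
        })
        .collect();
    for (duplicate, original) in duplicates {
        definitions.remove(&duplicate);
        // Traverse the schema (and, likewise, the remaining definitions),
        // replacing any `$ref` to `duplicate` with a `$ref` to `original`.
        rewrite_refs(schema, &duplicate, &original); // traversal elided
    }
}
```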
This PR has no direct user-facing change; however, it _will_ have an
effect on the docs generation code, which will hopefully result in
tidier documentation.
## Changelog
### Type
- [x] enhancement
- [x] bugfix
V3_GIT_ORIGIN_REV_ID: fe73acf7d9df0b9867852e673e53cb086e3725d3
## Description
This PR implements argument presets for `DataConnectorLink`, which can
be used to forward request headers as function/procedure arguments to
NDC. This PR implements the execution part.
**Note**: response header forwarding is not implemented yet.
V3_GIT_ORIGIN_REV_ID: ff69129aee3e3052367ca42acdec3922cbc2cb0c
## Description
This PR is a refactoring which is a functional no-op.
V3_GIT_ORIGIN_REV_ID: 625475ef23ef7fd2f72921e951fa1a957807516a
This PR refactors the schema crate to enable overriding the
interpretation of roles at runtime.
By providing a `NamespacedGetter` that ignores roles entirely (i.e.
`GDSNamespacedGetterAgnostic`) to the schema generation functions, it is
possible to derive a GraphQL schema that ignores permissions and
presets.
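A self-contained toy illustrating the idea (these names are assumptions,
not the engine's real API): schema generation asks a getter whether an
item is visible for a namespace (role), and a role-agnostic getter says
yes to everything.

```rust
struct Namespaced<T> {
    value: T,
    allowed_roles: Vec<String>,
}

trait NamespacedGetter {
    fn get<'a, T>(&self, item: &'a Namespaced<T>, role: &str) -> Option<&'a T>;
}

// Normal behaviour: respect role-based visibility.
struct RoleBasedGetter;
impl NamespacedGetter for RoleBasedGetter {
    fn get<'a, T>(&self, item: &'a Namespaced<T>, role: &str) -> Option<&'a T> {
        item.allowed_roles.iter().any(|r| r == role).then(|| &item.value)
    }
}

// Role-agnostic behaviour: every item is visible, so the derived schema
// ignores permissions and presets.
struct RoleAgnosticGetter;
impl NamespacedGetter for RoleAgnosticGetter {
    fn get<'a, T>(&self, item: &'a Namespaced<T>, _role: &str) -> Option<&'a T> {
        Some(&item.value)
    }
}
```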
This PR leaves back a few opportunities to improve code clarity, which I
hope to address in follow-up PRs.
For instance, it seems superfluous for generic code parametrised by
`<S: SchemaContext>` to carry around both an `S::Namespace` and an
`S::NamespaceGetter`, when all that code can ever do with the
`S::Namespace` value is apply the `S::NamespaceGetter` to it. Taking
this line of reasoning to its conclusion suggests that we can reduce the
complexity of `trait SchemaContext` and its associated types and kit.
---------
Co-authored-by: Samir Talwar <samir.talwar@hasura.io>
V3_GIT_ORIGIN_REV_ID: c84555fd88279582670919faa5a62bbd00b714bd
## Description
This somewhat resolves the `BooleanExpressionType` metadata type.
There is a lot that is missing, but everything is behind a feature flag,
so I want to merge what is here before moving on to getting it working
with the later parts of the pipeline. We'll start with a lot of
duplication and then work to remove as much as we can.
This is a functional no-op for users; the feature flag is not exposed to
the actual binary.
How to review this PR:
- Check that any changes to existing code are cosmetic and naming-only
to ensure this is a no-op
- Look through the new resolve step and compare it to the
[RFC](https://github.com/hasura/v3-engine/blob/main/rfcs/open-dd-expression-type-changes.md)
and see if it is doing roughly what you might expect.
- Don't get too bogged down in missing things / style stuff, there are
definitely missing things and bad style.
V3_GIT_ORIGIN_REV_ID: 24e3b3e72e62d0094db3b461cb8c6d359982755d
## Description
The expectation is that the engine should emit usages of OpenDD objects
(i.e. models, commands, relationships, permissions, fields of types,
etc.) for a GraphQL query.
This PR adds the types required to gather query usage analytics, under a
new crate, `query-usage-analytics`.
V3_GIT_ORIGIN_REV_ID: 49778c25a9019e0c8c9a2d13eaa8ba28638b8b55
## Description
Add a helper function that executes the items of an iterator
concurrently.
Add unit tests for it.
Use the helper function in query root field execution.
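A plausible shape for such a helper (a sketch assuming the `futures`
crate; the actual signature may differ):

```rust
use futures::stream::{self, StreamExt};

// Sketch: run `f` over every item with at most `limit` futures in flight at
// once, and collect the results.
async fn execute_concurrently<I, F, Fut, T>(items: I, limit: usize, f: F) -> Vec<T>
where
    I: IntoIterator,
    F: Fn(I::Item) -> Fut,
    Fut: std::future::Future<Output = T>,
{
    stream::iter(items.into_iter().map(f))
        .buffer_unordered(limit)
        .collect()
        .await
}
```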
V3_GIT_ORIGIN_REV_ID: a034291684135072cbaf957dc788a269b5a33459
## Description
We have a new `BooleanExpressionType` metadata kind. This PR adds it and
tests that it can be parsed, but hides it from generated metadata and
throws an error if one is actually used in the engine.
V3_GIT_ORIGIN_REV_ID: 036b5fd9c32475d1c5a5e5e6321fb736fe6caefa
## Description
These files had been moved to `crates/schema`; I don't know how they
ended up back here. Functional no-op.
V3_GIT_ORIGIN_REV_ID: 0cdda71476e84f06c5adab9273c08c843a6f945e
Bumps [schemars](https://github.com/GREsau/schemars) from 0.8.19 to
0.8.20.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/GREsau/schemars/releases">schemars's
releases</a>.</em></p>
<blockquote>
<h2>v0.8.20</h2>
<h3>Fixed:</h3>
<ul>
<li>Revert unintentional change in behaviour when combining
<code>default</code> and <code>required</code> attributes (<a
href="https://redirect.github.com/GREsau/schemars/issues/292">GREsau/schemars#292</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="7ecaa7feab"><code>7ecaa7f</code></a>
Revert unintentional change in behaviour when combining
<code>default</code> and `requir...</li>
<li><a
href="cf5be1b266"><code>cf5be1b</code></a>
Ignore failing test</li>
<li><a
href="449bb1a0ca"><code>449bb1a</code></a>
Add more tests for different schema settings</li>
<li>See full diff in <a
href="https://github.com/GREsau/schemars/compare/v0.8.19...v0.8.20">compare
view</a></li>
</ul>
</details>
<br />
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
V3_GIT_ORIGIN_REV_ID: 0a55128d53088b9b0c8c8f2dc5fb03de8af09413