Mirror of https://github.com/hasura/graphql-engine.git (synced 2024-12-15 09:22:43 +03:00)
554 Commits
Author | SHA1 | Message | Date
---|---|---|---
Rakesh Emmadi
|
d5909e8c48 |
Fix permission filter usage reporting in query analytics (#933)
### What Fields involved in a relationship's inner predicates were incorrectly reported as fields of the root model. This PR resolves that issue. It also fixes a bug where predicates inside `And` or `Or` were not reported. Note: no changelog entry is required, as query usage analytics are Hasura-internal and hidden from users. ### How - Use a `for` loop instead of `Iterator::map` to avoid confusion around when the lambda passed to `map` executes (more context in this [slack](https://hasurahq.slack.com/archives/C04PUMV4X16/p1722871834852519) thread). - Introduce a new struct for reporting predicate relationship fields, with a field that reports its inner filter predicate usage. V3_GIT_ORIGIN_REV_ID: 9ca23e6005ccb09f2321a2ae30ef575f99e84e06 |
||
Abhinav Gupta
|
180c1dbc59 |
Refactor SQL layer to use OpenDD query IR (#925)
As per the multiple frontends RFC: https://github.com/hasura/v3-engine/blob/vamshi/multiple-frontends/rfcs/multiple-frontends.md V3_GIT_ORIGIN_REV_ID: 07f7c5323179a62fd08717d6d49f9415da139873 |
||
Vamshi Surabhi
|
4aefdabb65 |
avoid using raw `String`s in more places (#923)
- `DataConnectorAggregationFunctionName` and `AggregateFunctionName` now use `str_newtype`. - All usages of `String`s for subgraph names are removed. (This is part of a larger effort to remove references in `execute::plan::QueryPlan`). V3_GIT_ORIGIN_REV_ID: d51f0a2335e8dabbc9efdad1d1efff285ddb74c3 |
||
Rakesh Emmadi
|
9bf0ad967f |
Query usage analytics | include deprecated info in field usage (#932)
### What When reporting query usage analytics, mention whether a field is deprecated. Note: no changelog entry is required, as the usage analytics are for Hasura-internal use. ### How OpenDD allows marking an ObjectType's field as deprecated, with an optional reason. Plumb the deprecation context through to the input/output schema annotation, and report the field usage with a `deprecated` boolean field. V3_GIT_ORIGIN_REV_ID: 430cdcf3e1ff0c43812caecb8d06a64b729665be |
||
Anon Ray
|
4d31c4b42e |
add a flag to log traces to stdout (#931)
### What Add a flag `--export-traces-stdout` to log traces to stdout; disabled by default. Command-line flag: `--export-traces-stdout`. Env var: `EXPORT_TRACES_STDOUT`. ### How Introduce a new command-line flag, make `initialize_tracing` accept a `bool`, and set up the stdout exporter based on it. V3_GIT_ORIGIN_REV_ID: f39d6f863fd2bca65ad89f1cef4b077aa9eabc5b |
||
dependabot[bot]
|
12ed058661 |
Bump regex from 1.10.5 to 1.10.6 (#930)
Bumps [regex](https://github.com/rust-lang/regex) from 1.10.5 to 1.10.6. From regex's [changelog](https://github.com/rust-lang/regex/blob/master/CHANGELOG.md): 1.10.6 (2024-08-02) is a patch release with a fix for the `unstable` crate feature that enables `std::str::Pattern` trait integration. Bug fixes: [BUG #1219](https://redirect.github.com/rust-lang/regex/issues/1219): fix the `Pattern` trait implementation as a result of nightly API breakage. |
||
dependabot[bot]
|
e1f0974a07 |
Bump mockito from 1.4.0 to 1.5.0 (#929)
Bumps [mockito](https://github.com/lipanski/mockito) from 1.4.0 to 1.5.0. From mockito's [release notes](https://github.com/lipanski/mockito/releases): 1.5.0 — **[Breaking]** [upgrade](https://redirect.github.com/lipanski/mockito/pull/198) to hyper v1. Thanks to [@tottoto](https://github.com/tottoto). |
||
dependabot[bot]
|
a7c994af3f |
Bump clap from 4.5.11 to 4.5.13 (#928)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.11 to 4.5.13. From clap's [changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md): [4.5.13] (2024-07-31) fixes: *(derive)* improve the error message when `#[flatten]`ing an optional `#[group(skip)]`; *(help)* properly wrap long subcommand descriptions in help. [4.5.12] (2024-07-31). |
||
dependabot[bot]
|
a13cd55056 |
Bump bytes from 1.6.1 to 1.7.1 (#927)
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.6.1 to 1.7.1. From bytes's [release notes](https://github.com/tokio-rs/bytes/releases): **1.7.1 (August 1, 2024)** reverts "Reuse capacity when possible in `<BytesMut as Buf>::advance` impl" (#698) due to a regression; the revert is #726. **1.7.0 (July 31, 2024)** Added: conversion from `Bytes` to `BytesMut` (#695, #710); a reclaim method without additional allocation (#686). Documented: clarify how `BytesMut::zeroed` works (#714); clarify the behavior of `Buf::chunk` (#717). Changed: change the length condition of `BytesMut::truncate`; reuse capacity when possible in `<BytesMut as Buf>::advance` (#698); improve the `must_use` suggestion of `BytesMut::split` (#699). Internal changes: use `ManuallyDrop` instead of `mem::forget` (#678); don't set `len` in `BytesMut::reserve` (#682); optimize `Bytes::copy_to_bytes` (#688); refactor `BytesMut::truncate` (#694) and `BytesMut::resize` (#696); reorder assertions in `Bytes::split_to` and `Bytes::split_off` (#689, #693); use `offset_from` in more places (#705); correct the wrong usage of `IntoIter` (#707). |
||
dependabot[bot]
|
3449b49487 |
Bump serde_with from 3.8.3 to 3.9.0 (#926)
Bumps [serde_with](https://github.com/jonasbb/serde_with) from 3.8.3 to 3.9.0. From serde_with's [release notes](https://github.com/jonasbb/serde_with/releases): v3.9.0 adds `MapSkipError` by [@johnmave126](https://github.com/johnmave126) ([#763](https://redirect.github.com/jonasbb/serde_with/issues/763)): deserialize a map and skip all elements that fail to deserialize. `MapSkipError` acts like a map (`HashMap`/`BTreeMap`), but keys or values that fail to deserialize are ignored. For formats with heterogeneously typed maps, this collects only the elements where both key and value are deserializable; it is also useful in conjunction with `#[serde(flatten)]` to ignore some entries when capturing additional fields. Example: given the JSON `"value": {"0": "v0", "5": "v5", "str": "str", "10": 2}` and the Rust field `#[serde_as(as = "MapSkipError<DisplayFromStr, _>")] value: BTreeMap<u32, String>`, only entries with a numerical key and a string value are deserialized, i.e. `{0 => "v0", 5 => "v5"}`. |
||
Karthik Venkateswaran
|
2c70bc0538 |
engine: add operation_name attribute to execute_query (#913)
### What We would like to generate operation_name-level metrics with execution latency. Currently the operation_name is part of the validate span, which isn't really doing anything, while `execute_query` is the parent span that represents the operation time. ### How This PR adds `operation_name` to the `execute_query` span. V3_GIT_ORIGIN_REV_ID: fc14d92c66b0245739d672b7570be1871243f241 |
||
Rakesh Emmadi
|
03c85f6985 |
Fix NDC relationship collection for filter predicates in nested relationship selection. (#924)
### What Fixes a bug where queries with nested relationship selection and filter predicates fail due to an issue with NDC relationship collection. ```graphql query MyQuery { Album { AlbumId Title ArtistId Tracks { AlbumId Name TrackId } } } ``` Here, a selection permission is defined on the `Tracks` model with a relationship comparison in its predicate. ### How - Previously, relationships were collected independently by traversing the IR AST, so during planning the collection of local relationships was explicitly ignored. This caused confusion and resulted in the omission of relationship collectors when planning nested selections for local relationships, leading to the issue. - In this PR, the independent collection of relationships is removed. Instead, all NDC relationships for field selection, filters, and permission predicates are collected during planning. This unifies the logic and ensures consistency. V3_GIT_ORIGIN_REV_ID: cbd5bfef7a90a7d7602061a9c733ac54b764e0d3 |
||
Daniel Harvey
|
e7462f7884 |
Tidy boolean expression schema generation (#920)
### What Trying to understand what is going on here. Still no closer, but I have added a test and made some types more specific to clarify my understanding. ### How Add introspection tests for relationships with `ObjectBooleanExpressionType`s to ensure they generate. Tried to make relationship fields disappear to recreate the build problems, but could not. Split `BooleanExpressionGraphqlConfig` and `ObjectBooleanExpressionGraphqlConfig` to make sure we're not mixing them up; we only want to use `BooleanExpressionGraphqlConfig` in `metadata_resolve`, and this ensures that. Pushed some partiality in `schema/boolean_expressions.rs` out: a function was `Option<inputs> -> Option<outputs>` and is now `inputs -> outputs`. We use `Option` a lot, and it makes reasoning about why something hasn't been added to the schema difficult. V3_GIT_ORIGIN_REV_ID: 893e6f32bfded14ea724be7eaedc519e264f4c01 |
||
dependabot[bot]
|
432c042399 |
Bump serde_json from 1.0.120 to 1.0.122 (#922)
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.120 to 1.0.122. From serde_json's [release notes](https://github.com/serde-rs/json/releases): v1.0.121 optimizes position search in the error path ([#1160](https://redirect.github.com/serde-rs/json/issues/1160), thanks [@purplesyringa](https://github.com/purplesyringa)). |
||
Daniel Chambers
|
63732fe7be |
Bug fixes around argument presets in the DataConnectorLink (#866)
This PR fixes the following bugs: - Fixes a bug where models and commands were allowed even though they did not define arguments to satisfy the underlying data connector collection/function/procedure. **UPDATE:** This only raises a warning rather than fails the build, because existing builds on staging and production have this issue. This will need to be transitioned to an error once the Compatibility Date plumbing is in place. - Fixes a bug where argument presets set in the DataConnectorLink were sent to every connector function/procedure regardless of whether the function/procedure actually declared that argument - Fixes a bug where argument presets set in the DataConnectorLink were not sent to connector collections that backed Models - Fixes a bug where the type of the argument name in the DataConnectorLink's argument presets was incorrect in the Open DD schema. It was `ArgumentName` but should have been `DataConnectorArgumentName` - Fixes a bug where the check to ensure that argument presets in the DataConnectorLink does not overlap with arguments defined on Models/Commands was comparing against the Model/Command argument name not the data connector argument name There are a number of changes that tighten things up in this PR. Firstly, the custom connector is improved so that it rejects requests with arguments of the wrong type or unexpected arguments. This causes tests that should have been failing to actually fail. Then, new tests have been added to metadata_resolve to cover the untested edge cases around data connector link argument presets. Then, metadata resolve is refactored so that the link argument presets are validated and stored on each command/model source, rather than on the DataConnectorLink. Extra validation has been added during this process to fix the above bugs. Any irrelevant argument presets to the particular command/model are dropped. 
Then, during execution, we read the presets from the command/model source instead of from the DataConnectorLink, which ensures we only send the appropriate arguments. JIRA: [V3ENGINE-290](https://hasurahq.atlassian.net/browse/V3ENGINE-290) Fixes https://linear.app/hasura/issue/APIPG-676/dataconnectorlink-argument-presets-are-always-sent-regardless-of [V3ENGINE-290]: https://hasurahq.atlassian.net/browse/V3ENGINE-290?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ V3_GIT_ORIGIN_REV_ID: dd02e52e1ff224760c5f0ed6a73a1ae56779e1f1 |
||
Daniel Chambers
|
0d37cbd71f |
Re-enable ndc version validation backwards compatibly (#916)
The validation added in #880 validated that the version in the DataConnectorLink's capabilities version matched the version specified in the schema. Unfortunately, there are existing builds with invalid capabilities versions that failed to parse. Subsequently the validation was removed in #907 to fix staging the deploy that broke. This is the unique set of errors found when deploying to staging: ``` error generating artifacts: schema build error: invalid metadata: The data connector myts (in subgraph app) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version error generating artifacts: schema build error: invalid metadata: The data connector my_ts (in subgraph app) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version error generating artifacts: schema build error: invalid metadata: The data connector mydbpg (in subgraph app) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version error generating artifacts: schema build error: invalid metadata: The data connector chinook (in subgraph app) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version error generating artifacts: schema build error: invalid metadata: The data connector clickhouse (in subgraph analytics) has an error: The version specified in the capabilities ("^0.1.1") is an invalid version: unexpected character '^' while parsing major version number error generating artifacts: schema build error: invalid metadata: The data connector chinook_link (in subgraph app) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version error generating artifacts: schema build error: invalid metadata: The data connector app_connector (in subgraph app) has an error: The version specified 
in the capabilities ("^0.1.1") is an invalid version: unexpected character '^' while parsing major version number error generating artifacts: schema build error: invalid metadata: The data connector chinook (in subgraph app) has an error: The version specified in the capabilities ("^0.1.1") is an invalid version: unexpected character '^' while parsing major version number error generating artifacts: schema build error: invalid metadata: The data connector nodejs (in subgraph app) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version error generating artifacts: schema build error: invalid metadata: The data connector db (in subgraph app) has an error: The version specified in the capabilities ("*") is an invalid version: unexpected character '*' while parsing major version number error generating artifacts: schema build error: invalid metadata: The data connector my_pg (in subgraph my_subgraph) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version error generating artifacts: schema build error: invalid metadata: The data connector mypg (in subgraph myapp) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version error generating artifacts: schema build error: invalid metadata: The data connector mypglink (in subgraph mysubgraph) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version error generating artifacts: schema build error: invalid metadata: The data connector mypg (in subgraph app2) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version error generating artifacts: schema build error: invalid metadata: The data connector test_connector (in subgraph app) has an error: The version specified in the capabilities ("") is an 
invalid version: empty string, expected a semver version ``` The invalid versions are: `""`, `"*"`, `"^0.1.1"`. This PR restores the version validation code, but for NDC v0.1.x capabilities (the only supported version right now; v0.2.x is feature-flagged off), we now accept versions that fail to parse as valid semver and instead raise an issue that gets logged as a warning. NDC v0.2.x capabilities retain the stricter behaviour and do not accept a dodgy capabilities version. This is backwards compatible because trying to use NDC v0.2.x right now produces a build error. Fixes APIPG-736 V3_GIT_ORIGIN_REV_ID: 9e9bf99123bad31e8229e8ea29343eb8aaf9786d |
||
Vamshi Surabhi
|
d41170b06a |
simplify the sql context that powers datafusion (#921)
Prior to this, a datafusion catalog provider was created from the stored sql context on every request. This PR reworks it so that this is cheap and also more maintainable, with fewer intermediate steps. There is also some work done towards supporting table-valued functions. --------- Co-authored-by: Abhinav Gupta <127770473+abhinav-hasura@users.noreply.github.com> V3_GIT_ORIGIN_REV_ID: 8c30485366969d81d2a35760962e0383ed5e488c |
||
Daniel Harvey
|
06ac3ba7bf |
Fallback to ObjectBooleanExpressionType on Model correctly (#919)
### What When no `booleanExpressionType` is specified in a `BooleanExpressionType` `comparableRelationship`, we fall back to whatever is defined for the model. However, we were ignoring old-style `ObjectBooleanExpressionType`, meaning relationship fields were disappearing. ### How Also match on `ModelExpressionType::ObjectBooleanExpressionType` when looking up leaf boolean expressions for relationships. V3_GIT_ORIGIN_REV_ID: 9a67b734679b8a1fe3d176a259ba579e127948b8 |
||
Daniel Harvey
|
c89809b02e |
Use warning/issue for nested array in bool exp to avoid breakage (#917)
### What Change an error down to an issue/warning to unbreak builds. ### How Introduce `BooleanExpressionIssue`, move the error value to it, and emit this instead. Later we'll turn it back into an error based on compatibility date. V3_GIT_ORIGIN_REV_ID: f0903cc04ea1cf328c9bf67a38d76fd670743679 |
||
Daniel Harvey
|
4b599d736d |
Remove warnings about data connector scalar representation (#918)
### What We emit a warning suggesting users deprecate `DataConnectorScalarRepresentation`; however, it still has uses outside boolean expressions, so let's not advise this until it is sensible advice. ### How Remove the warning. V3_GIT_ORIGIN_REV_ID: a95a705d121396a09a9b626237999f032e650189 |
||
Abhinav Gupta
|
fcaa344a3a |
add an OpenDD Query type (#911)
This PR adds an OpenDD Query type as proposed in the RFC here:
|
||
Rakesh Emmadi
|
7177a423da |
Support remote relationship in permission filter (#904)
Closes: https://linear.app/hasura/issue/APIPG-397/support-remote-relationship-predicates-in-permission-filters ### What Allow defining permission filters with remote relationships in their predicates. ### How - Lift the metadata-resolve restriction on remote relationships in permission predicates. - Abstract the remote relationship resolving logic in the query filter into a new function and re-use it while resolving permission filters. - Tests: a metadata build test to check the presence of the essential equal operator on source fields in the relationship mapping; ported all `select_many/relationship_predicate/*` tests to new `select_many/remote_relationship_predicate/*` tests with appropriate metadata changes. --------- Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com> V3_GIT_ORIGIN_REV_ID: 9c496ecdc9829ed626354ef85e776e1afcb0dfc7 |
||
Rakesh Emmadi
|
bbff39c6ef |
Use IndexSet instead of Vec for distinct remote predicate comparison expressions (#914)
### What This pull request optimizes the `DistinctComparisons` struct to improve the performance of storing and checking for distinct comparison predicates in remote relationship comparison expressions. ### How - Replaced `Vec` with `IndexSet` in the `DistinctComparisons` struct to leverage the average O(1) complexity of `contains` and `insert` provided by `IndexSet`. - Updated the `push` method to use `IndexSet`'s `insert` directly, which simplifies the code and improves performance. **Performance improvement:** query: ```graphql query RemoteRelationship { Album(where: {TracksRemote: {Name: {_ilike: "%B%"}}}) { Title } } ``` The `TracksRemote` predicate query yields 723 non-distinct results, which reduce to 266 unique results after deduplication. Benchmark: [graphql-bench](https://github.com/hasura/graphql-bench), autocannon requests-per-second strategy (50 rps), 10 seconds duration. Results: average latency fell from 38.99 ms before the optimization to 23.32 ms after, a decrease of approximately 40%. V3_GIT_ORIGIN_REV_ID: 17a7160b7229eb3a2fde93273d5cf05102f9b4bd |
||
Daniel Harvey
|
f1da32c28f |
Update architecture to reflect ir crate (#910)
### What Update the architecture doc to include the `ir` crate. Functional no-op. V3_GIT_ORIGIN_REV_ID: 32f47529c97c7cfe188f098f2024e159e5ab33cc |
||
Daniel Harvey
|
07f0a90332 |
Split out IR crate (#909)
### What `execute` is now the biggest crate in the engine and does a lot, so let's split it into its constituent steps. Functional no-op. ### How Split out an `ir` crate from the `execute` crate. Replace exports of entire modules with exports of specific types / functions, so consumers outside the crate talk about `ir::CommandInfo` rather than `ir::command::CommandInfo`. There is no need for other crates to know about the internal structure of this crate. V3_GIT_ORIGIN_REV_ID: 47553aec63e80af7f95e659a170a2685e9ac2ce3 |
||
Rakesh Emmadi
|
7c9c3f5859 |
no-op refactor: split plan/types.rs into separate modules (#908)
### What `plan/types.rs` has become large and unwieldy. This PR refactors its code into separate modules and removes the file. ### How - Move code from `plan/types.rs` into the existing `arguments.rs` and `filter.rs`, and the new `field.rs`, `query.rs` and `mutation.rs`. - Delete `plan/types.rs`. - Refactor code in other modules to accommodate the new structure. V3_GIT_ORIGIN_REV_ID: 0e294ca8fb4bf1d8622806f5c8b72a2bb01ccdaf |
||
Daniel Harvey
|
e006a36402 |
Skip NDC version checks for now (#907)
### What These checks are breaking artifact generation, so disabling them until we can find a safer way to introduce them. V3_GIT_ORIGIN_REV_ID: ae97c87720b67384127122ed0220383036c87bbf |
||
Daniel Harvey
|
cb72538865 |
Default to IPV6 on dev-auth-webhook (#905)
### What Making this match engine. ### How Replace `V4` with `V6` V3_GIT_ORIGIN_REV_ID: e86d118b96d41407a292f9ad4132b8ab6d06454f |
||
Philip Lykke Carlsen
|
671ea8daa4 |
Judicious relaying of untrusted baggage (#903)
### What Telemetry baggage is propagated via headers from incoming requests to a service and relayed when the service itself calls another service. However, when a service is open to the public it may not want just anyone to be able to pass it baggage. This PR adds the ability to configure the policy towards baggage relaying in the tracing-util crate. ### How When `initialize_tracing(..., propagate_caller_baggage = false)` is passed, we add to the globally defined text map propagator a derived version of the `BaggagePropagator` which cannot extract baggage from incoming requests, only inject its own context baggage into outgoing requests. V3_GIT_ORIGIN_REV_ID: af9a51c20a8fe7ae2085e8218a4f1d5e01b26ae1 |
||
Daniel Harvey
|
a95eaa4c4f |
Allow object types to be used as comparison operator arguments (#895)
### What This allows object types to be used as arguments for comparison operators. This is useful for Elasticsearch's `range` operator, which allows passing an object like `{ gt: 1, lt: 100 }` to an `integer` field in order to filter items that are greater than `1` and less than `100`. This PR has the nice side effect of dropping the requirement to use information from scalar `BooleanExpressionType`s in place of `DataConnectorScalarTypes`, which we only required because we were not looking up the comparable operator information in scalar boolean expression types correctly. ### How Previously, when using `ObjectBooleanExpressionType` and `DataConnectorScalarRepresentation`, we had no information about the argument types of comparison operators (i.e., what values should I pass to `_eq`?), and so inferred this by looking up the comparison operator in the data connector schema, then looking for a `DataConnectorScalarRepresentation` that tells us what OpenDD type that maps to. Now, with `BooleanExpressionType`, we have this information provided in OpenDD itself: ```yaml kind: BooleanExpressionType version: v1 definition: name: Int_comparison_exp operand: scalar: type: Int comparisonOperators: - name: _eq argumentType: Int! # This is an OpenDD type - name: _within argumentType: WithinInput! - name: _in argumentType: "[Int!]!" ``` Now we look up this information properly, and tighten up some validation around relationships that made us fall back to the old behaviour when the user had failed to provide a `comparableRelationship` entry. This means a) we can actually use object types as comparable operator types b) scalar boolean expression types aren't used outside the world of boolean expressions, which is a lot easier to reason about. V3_GIT_ORIGIN_REV_ID: ad5896c7f3dbf89a38e7a11ca9ae855a197211e3 |
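The lookup change described above can be sketched like this. The type and field names here are illustrative assumptions, not metadata-resolve's actual definitions; the point is that an operator's OpenDD argument type is now read directly from the scalar `BooleanExpressionType` metadata rather than inferred via the connector schema:

```rust
use std::collections::BTreeMap;

// Hypothetical, simplified stand-in for a resolved scalar
// `BooleanExpressionType`: each comparison operator maps to the OpenDD
// argument type declared in metadata.
struct ScalarBooleanExpression {
    comparison_operators: BTreeMap<String, String>,
}

// Direct lookup: "what type does `_within` take?" is answered by metadata,
// so object types like `WithinInput!` work the same way scalars do.
fn operator_argument_type<'a>(
    bool_exp: &'a ScalarBooleanExpression,
    operator: &str,
) -> Option<&'a str> {
    bool_exp.comparison_operators.get(operator).map(String::as_str)
}

fn main() {
    // Mirrors the `Int_comparison_exp` example above.
    let int_comparison_exp = ScalarBooleanExpression {
        comparison_operators: BTreeMap::from([
            ("_eq".to_string(), "Int!".to_string()),
            ("_within".to_string(), "WithinInput!".to_string()),
            ("_in".to_string(), "[Int!]!".to_string()),
        ]),
    };
    assert_eq!(
        operator_argument_type(&int_comparison_exp, "_within"),
        Some("WithinInput!")
    );
    assert_eq!(operator_argument_type(&int_comparison_exp, "_unknown"), None);
}
```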
||
dependabot[bot]
|
d61e566019 |
Bump env_logger from 0.11.3 to 0.11.5 (#896)
Bumps [env_logger](https://github.com/rust-cli/env_logger) from 0.11.3 to 0.11.5. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/rust-cli/env_logger/releases">env_logger's releases</a>.</em></p> <blockquote> <h2>v0.11.5</h2> <h2>[0.11.5] - 2024-07-25</h2> <h2>v0.11.4</h2> <h2>[0.11.4] - 2024-07-23</h2> </blockquote> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/rust-cli/env_logger/blob/main/CHANGELOG.md">env_logger's changelog</a>.</em></p> <blockquote> <h2>[0.11.5] - 2024-07-25</h2> <h2>[0.11.4] - 2024-07-23</h2> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href=" |
||
dependabot[bot]
|
9b6ed154be |
Bump thiserror from 1.0.62 to 1.0.63 (#898)
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.62 to 1.0.63. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/dtolnay/thiserror/releases">thiserror's releases</a>.</em></p> <blockquote> <h2>1.0.63</h2> <ul> <li>Documentation improvements</li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href=" |
||
dependabot[bot]
|
ff860841ec |
Bump clap from 4.5.9 to 4.5.11 (#900)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.9 to 4.5.11. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/clap-rs/clap/releases">clap's releases</a>.</em></p> <blockquote> <h2>v4.5.10</h2> <h2>[4.5.10] - 2024-07-23</h2> </blockquote> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/clap-rs/clap/blob/master/CHANGELOG.md">clap's changelog</a>.</em></p> <blockquote> <h2>[4.5.11] - 2024-07-25</h2> <h2>[4.5.10] - 2024-07-23</h2> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href=" |
||
dependabot[bot]
|
bfb9c7ded0 |
Bump tokio from 1.38.1 to 1.39.2 (#897)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.38.1 to 1.39.2. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/tokio-rs/tokio/releases">tokio's releases</a>.</em></p> <blockquote> <h2>Tokio v1.39.2</h2> <h1>1.39.2 (July 27th, 2024)</h1> <p>This release fixes a regression where the <code>select!</code> macro stopped accepting expressions that make use of temporary lifetime extension. (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6722">#6722</a>)</p> <p><a href="https://redirect.github.com/tokio-rs/tokio/issues/6722">#6722</a>: <a href="https://redirect.github.com/tokio-rs/tokio/pull/6722">tokio-rs/tokio#6722</a></p> <h2>Tokio v1.39.1</h2> <h1>1.39.1 (July 23rd, 2024)</h1> <p>This release reverts "time: avoid traversing entries in the time wheel twice" because it contains a bug. (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6715">#6715</a>)</p> <p><a href="https://redirect.github.com/tokio-rs/tokio/issues/6715">#6715</a>: <a href="https://redirect.github.com/tokio-rs/tokio/pull/6715">tokio-rs/tokio#6715</a></p> <h2>Tokio v1.39.0</h2> <h1>1.39.0 (July 23rd, 2024)</h1> <ul> <li>This release bumps the MSRV to 1.70. (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6645">#6645</a>)</li> <li>This release upgrades to mio v1. 
(<a href="https://redirect.github.com/tokio-rs/tokio/issues/6635">#6635</a>)</li> <li>This release upgrades to windows-sys v0.52 (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6154">#6154</a>)</li> </ul> <h3>Added</h3> <ul> <li>io: implement <code>AsyncSeek</code> for <code>Empty</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6663">#6663</a>)</li> <li>metrics: stabilize <code>num_alive_tasks</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6619">#6619</a>, <a href="https://redirect.github.com/tokio-rs/tokio/issues/6667">#6667</a>)</li> <li>process: add <code>Command::as_std_mut</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6608">#6608</a>)</li> <li>sync: add <code>watch::Sender::same_channel</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6637">#6637</a>)</li> <li>sync: add <code>{Receiver,UnboundedReceiver}::{sender_strong_count,sender_weak_count}</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6661">#6661</a>)</li> <li>sync: implement <code>Default</code> for <code>watch::Sender</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6626">#6626</a>)</li> <li>task: implement <code>Clone</code> for <code>AbortHandle</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6621">#6621</a>)</li> <li>task: stabilize <code>consume_budget</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6622">#6622</a>)</li> </ul> <h3>Changed</h3> <ul> <li>io: improve panic message of <code>ReadBuf::put_slice()</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6629">#6629</a>)</li> <li>io: read during write in <code>copy_bidirectional</code> and <code>copy</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6532">#6532</a>)</li> <li>runtime: replace <code>num_cpus</code> with <code>available_parallelism</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6709">#6709</a>)</li> <li>task: avoid 
stack overflow when passing large future to <code>block_on</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6692">#6692</a>)</li> <li>time: avoid traversing entries in the time wheel twice (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6584">#6584</a>)</li> <li>time: support <code>IntoFuture</code> with <code>timeout</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6666">#6666</a>)</li> <li>macros: support <code>IntoFuture</code> with <code>join!</code> and <code>select!</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6710">#6710</a>)</li> </ul> <h3>Fixed</h3> <ul> <li>docs: fix docsrs builds with the fs feature enabled (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6585">#6585</a>)</li> <li>io: only use short-read optimization on known-to-be-compatible platforms (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6668">#6668</a>)</li> <li>time: fix overflow panic when using large durations with <code>Interval</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6612">#6612</a>)</li> </ul> <h3>Added (unstable)</h3> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href=" |
||
Anon Ray
|
5b23ed53bc |
introduce AuthConfig v2, which removes role emulation (#891)
### What We have decided to remove the role emulation feature from engine altogether. More details in the RFC - https://docs.google.com/document/d/1tlS9pqRzLEotLXN_dhjFOeIgbH6zmejOdZTbkkPD-aM/edit V3_GIT_ORIGIN_REV_ID: e7cb765df5afac6c6d6a05a572a832ce9910cc0b |
||
Daniel Harvey
|
cc2373a6ad |
Add generated Elasticsearch schema to range test (#894)
### What We've had issues with `metadata-resolve` rejecting Elasticsearch schema output, so adding said output to this test. Appears to work fine, so merging it for further discussion and to improve the test case. ### How Add Elasticsearch schema to test. V3_GIT_ORIGIN_REV_ID: ea7c39ca7ab07fc18abd08eb822d2d56fc152ae6 |
||
dependabot[bot]
|
a54614431d |
Bump async-graphql-parser from 7.0.6 to 7.0.7 (#865)
Bumps [async-graphql-parser](https://github.com/async-graphql/async-graphql) from 7.0.6 to 7.0.7. <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/async-graphql/async-graphql/blob/master/CHANGELOG.md">async-graphql-parser's changelog</a>.</em></p> <blockquote> <h1>[7.0.7] 2024-07-14</h1> <ul> <li>Support raw values from serde_json <a href="https://redirect.github.com/async-graphql/async-graphql/pull/1554">#1554</a></li> <li>The custom directive <code>ARGUMENT_DEFINITION</code> is not being output at the appropriate location in SDL <a href="https://redirect.github.com/async-graphql/async-graphql/pull/1559">#1559</a></li> <li>Support for JSON extended representations of BSON ObjectId and Uuid <a href="https://redirect.github.com/async-graphql/async-graphql/pull/1542">#1542</a></li> <li>feat: get directives from SelectionField <a href="https://redirect.github.com/async-graphql/async-graphql/pull/1548">#1548</a></li> <li>Support Directives on Subscriptions <a href="https://redirect.github.com/async-graphql/async-graphql/pull/1500">#1500</a></li> <li>fix subscription err typo <a href="https://redirect.github.com/async-graphql/async-graphql/pull/1556">#1556</a></li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li>See full diff in <a href="https://github.com/async-graphql/async-graphql/commits">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=async-graphql-parser&package-manager=cargo&previous-version=7.0.6&new-version=7.0.7)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) </details> Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> V3_GIT_ORIGIN_REV_ID: 8c0d1a222a57c86cedf2f0870f6cccb0a861a3e6 |
||
Daniel Harvey
|
8bd439362b |
Update changelog links (#886)
### What Forgot to do this in the last PR. V3_GIT_ORIGIN_REV_ID: 24ec7278a63379f252d452b8ec627c934e1c534c |
||
Anon Ray
|
72289171aa |
rename NdcFieldName to NdcFieldAlias (#882)
### What We introduced a newtype around the NDC field alias, but we called it `NdcFieldName`. While in reality it is the alias of the field requested in the query. This PR changes the name to `NdcFieldAlias`. This is a no-op change V3_GIT_ORIGIN_REV_ID: 8e892c29860e93243a200b6a6291fd0a32cc6fe3 |
||
Philip Lykke Carlsen
|
4f6bde1fee |
Enable use of Otel baggage via tracing-util crate (#888)
### What Part of the point of the `tracing-util` crate is to centrally enforce usage of a single version of opentelemetry libraries. Previously we added some support for relaying baggage, but not actually for defining it. This PR exposes the crates and types necessary to add baggage to the context. V3_GIT_ORIGIN_REV_ID: 107ec652d4e812f31bbfaa362cedf44b25dc3c39 |
||
Daniel Harvey
|
3357f970e9 |
Remove old Docker based building stuff (#876)
### What We have a bunch of local development infra for building the engine inside a Docker container. This is helpful for Buildkite, which doesn't come with stuff like `cargo` preinstalled. We're not using Buildkite anymore, so let's remove it. V3_GIT_ORIGIN_REV_ID: b4b7679aab5b14081288df25d139944f160a61fe |
||
Daniel Harvey
|
42768bab3a |
Implement NoAuth mode in AuthConfig (#877)
### What We'd like to make it simpler to try out DDN, by starting with a mode that uses no auth. ### How Add a `NoAuth` `AuthConfig` mode that is configured thus: ```json "noAuth": { "role": "admin", "sessionVariables": { "x-hasura-user-id": "1" } } ``` Given the above config: - If no `x-hasura-role` is sent with a request, we run it as `admin`. - If an `x-hasura-role` header is sent and it's `admin`, it continues to work. - If any other `x-hasura-role` header is sent, an error is returned. - All other headers are ignored, and we always set `x-hasura-user-id` to `"1"`. V3_GIT_ORIGIN_REV_ID: dddcfbee9c3a31e84dfc8013de32e3a9bf31943d |
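The role rules listed above can be sketched as a small function. This is an illustrative sketch with assumed names, not the engine's actual auth code:

```rust
// Hypothetical sketch of NoAuth role resolution. `configured_role` is the
// role from the `noAuth` config ("admin" in the example above);
// `role_header` is the value of `x-hasura-role`, if one was sent.
fn resolve_noauth_role<'a>(
    configured_role: &'a str,
    role_header: Option<&'a str>,
) -> Result<&'a str, String> {
    match role_header {
        // No x-hasura-role header: run as the configured role.
        None => Ok(configured_role),
        // A matching role header continues to work.
        Some(role) if role == configured_role => Ok(role),
        // Any other role is an error.
        Some(role) => Err(format!("cannot authorize role: {role}")),
    }
}

fn main() {
    assert_eq!(resolve_noauth_role("admin", None), Ok("admin"));
    assert_eq!(resolve_noauth_role("admin", Some("admin")), Ok("admin"));
    assert!(resolve_noauth_role("admin", Some("user")).is_err());
}
```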
||
Daniel Chambers
|
f84c2f3695 |
Validate that the capabilities version matches the DataConnectorLink schema version (#880)
This PR adds validation code to `metadata_resolve` that prevents someone from putting schema/capabilities from one NDC version into a DataConnectorLink that declares a different schema version. For example: ``` kind: DataConnectorLink version: v1 definition: name: data_connector schema: version: v0.2 schema: {} capabilities: version: 0.1.5 # Not allowed for version v0.2! capabilities: {} ``` This PR has two commits. One is a refactor where we rearrange the DataConnectorError types so that the name of the data connector is captured centrally in `NamedDataConnectorError` and doesn't have to be passed around and included in every error manually. The other is the validation changes to `metadata_resolve`. Completes APIPG-705 V3_GIT_ORIGIN_REV_ID: baed571f36f4cbed824ca546128f5df360d5b298 |
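The shape of the check can be sketched as below; the names are illustrative, not `metadata_resolve`'s actual types. A `v0.1` schema must be paired with `0.1.x` capabilities, and a `v0.2` schema with `0.2.x` capabilities:

```rust
// Hypothetical schema version tag on a DataConnectorLink.
#[derive(Debug)]
enum NdcSchemaVersion {
    V01,
    V02,
}

// The capabilities version (a semver string like "0.1.5") must belong to
// the same NDC minor series as the declared schema version.
fn capabilities_matches_schema(schema: &NdcSchemaVersion, capabilities: &str) -> bool {
    match schema {
        NdcSchemaVersion::V01 => capabilities.starts_with("0.1."),
        NdcSchemaVersion::V02 => capabilities.starts_with("0.2."),
    }
}

fn main() {
    // The example from the PR description: 0.1.5 capabilities on a v0.2
    // schema is rejected.
    assert!(!capabilities_matches_schema(&NdcSchemaVersion::V02, "0.1.5"));
    assert!(capabilities_matches_schema(&NdcSchemaVersion::V02, "0.2.0"));
    assert!(capabilities_matches_schema(&NdcSchemaVersion::V01, "0.1.5"));
}
```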
||
Daniel Harvey
|
1cd8e7f599 |
Remove benchmarks (#887)
### What These take ages to run, are slowing development down and not offering the value they should. We should be benchmarking the engine, but not like this. ### How Remove benchmarks CI job from Buildkite. V3_GIT_ORIGIN_REV_ID: 30a2c9d5f6ba09f5319a07fe394db8becaa16b8e |
||
Daniel Chambers
|
8e8b9839a9 |
Make tests run over both the ndc v0.1.x and v0.2.x custom connectors (#879)
This PR updates as many tests as possible that use the custom connector so that the tests run over two versions of the custom connector: 1. The custom connector in the repo, which currently speaks `ndc_models` v0.2.x 2. The custom connector from the past (commit ), which is the last version to speak `ndc_models` v0.1.x This helps us test both the NDC v0.1.x and v0.2.x code paths. When the postgres connector upgrades to v0.2.x, we can use the same approach as in this PR to get the tests to run over multiple versions of the postgres connector too, for much better coverage. This approach with the custom connector will become less useful over time as the v0.1.x connector is not updated and will diverge in data from the v0.2.x connector. The postgres connector is likely to be longer-lasting, as it is more stable. The basic test used for `execute` integration tests is `test_execution_expectation` (in `crates/engine/tests/common.rs`) and it has been extended into a version called `test_execution_expectation_for_multiple_ndc_versions` that takes metadata on a per-NDC-version basis and then runs the test multiple times, once for each NDC version. This allows one to swap out the DataConnectorLink involved in the test to a different one that points at either the v0.1.x or v0.2.x version of the connector. The assertion is that both connectors should produce the same results, even if they talk a different version of the NDC protocol. As each version runs, we `println!` the version so that if the test fails you can look in stdout for the test and see which one was executing when it failed. Tests that use the custom connector now use `test_execution_expectation_for_multiple_ndc_versions` and run across both connector versions. Some tests could not be run across both versions, as the data between the two versions has changed. Some tests were modified to avoid the changed data so as to support running across both versions. 
Any tests that use `test_execution_expectation_legacy` don't run across both versions because those tests aren't backed by the same test implementation as `test_execution_expectation_for_multiple_ndc_versions`. Unfortunately the custom connector doesn't use the standard connector SDK, so it doesn't support `HASURA_CONNECTOR_PORT`. This means that the old connector is stuck on 8101. To work around this, I've moved the current connector port to 8102 instead. Technically we might be able to use docker to remap the ports, but then this binds us into always running the connectors in docker in order to move their ports around, so I avoided that approach. Completes APIPG-703 V3_GIT_ORIGIN_REV_ID: fb0e410ddbee0ea699815388bc63584d6ff5dd70 |
||
Daniel Harvey
|
291df666a6 |
[changelog] release v2024.07.25 (#885)
Get ready for `v2024.07.25` release, updating changelog. V3_GIT_ORIGIN_REV_ID: 4561b318ae234323c53bb8acb8b45d90aede78ab |
||
Daniel Harvey
|
cb380da086 |
Pass TraceContextResponsePropagator to set_text_map_propagator (#884)
### What In a recent engine change, we changed some of our trace context mapping to use the shared settings consistently. However, we needed to make sure we included `TraceContextResponsePropagator`, which returns the `traceresponse` header. Request from console after this fix: <img width="810" alt="Screenshot 2024-07-25 at 11 58 30" src="https://github.com/user-attachments/assets/c8e73c56-87fd-49da-a887-f91cdb6d607a"> ### How Adds `TraceContextResponsePropagator` to the global set of text map propagators. V3_GIT_ORIGIN_REV_ID: 48df6a6fe55e78a48f1dc6bf82304199a0a7e248 |
||
Anon Ray
|
fd734e061d |
human-readable NDC relationship name in NDC IR (#881)
### What An NDC query request expects relationship names which are unique across the query. Previously, we would generate relationship names of the form: ``` [{\"subgraph\":\"connector_2\",\"name\":\"Album\"},\"Tracks\"] ``` This works, but is harder to read while debugging. This PR changes it to a human-readable name like: ``` connector_2___Album__Tracks ``` This is a no-op change, apart from the relationship names in NDC query requests. ### How Instead of JSON-ifying the data structure in a tuple, create a formatted string. V3_GIT_ORIGIN_REV_ID: 3fea3bf56f1688bc1cade1ea2b3ed6eb60509cac |
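The naming change can be sketched as follows (the struct and function names are assumptions for illustration): rather than serializing a source/relationship pair to JSON, the pieces are joined with delimiters.

```rust
// Hypothetical pieces that identify a relationship within a query.
struct NdcRelationship<'a> {
    subgraph: &'a str,
    source_type: &'a str,
    relationship_name: &'a str,
}

// Join the parts with delimiters instead of JSON-encoding them, giving the
// human-readable form shown above.
fn ndc_relationship_name(rel: &NdcRelationship) -> String {
    format!(
        "{}___{}__{}",
        rel.subgraph, rel.source_type, rel.relationship_name
    )
}

fn main() {
    let rel = NdcRelationship {
        subgraph: "connector_2",
        source_type: "Album",
        relationship_name: "Tracks",
    };
    // Matches the example from the PR description.
    assert_eq!(ndc_relationship_name(&rel), "connector_2___Album__Tracks");
}
```

The delimited string stays unique across the query as long as no subgraph or type name itself contains the delimiter sequences, which the JSON form guaranteed by construction.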
||
Rakesh Emmadi
|
bf1fd4dbd9 |
[changelog] release v2024.07.24 (#875)
Update changelog for release v2024.07.24 V3_GIT_ORIGIN_REV_ID: 7a87941774635dd1fb0e98ac406a88908fa55ba4 |
||
Daniel Harvey
|
66e847bc46 |
Move "test" job to Github Actions (#872)
<!-- The PR description should answer 2 (maybe 3) important questions: --> ### What We've had our CI mixed between Github and Buildkite for a while, it's time to commit. First step is moving the "tests" step to Github Actions. <!-- What is this PR trying to accomplish (and why, if it's not obvious)? --> <!-- Consider: do we need to add a changelog entry? --> ### How This PR: - Moves the `test` step to Github Actions - Creates a new `custom_connector.Dockerfile` which builds custom connector only, more quickly. - Changes the metadata tests to use `localhost` instead of their Docker internal names (ie `custom_connector` or `postgres_connector`) - this is because the tests are being run from outside Docker now - Removes the `test` Buildkite step It does not: - Remove the code coverage or benchmarks steps from Buildkite - Tidy up `justfile` or Dockerfiles <!-- How is it trying to accomplish it (what are the implementation steps)? --> --------- Co-authored-by: Philip Lykke Carlsen <plcplc@gmail.com> V3_GIT_ORIGIN_REV_ID: a67534ebc1634a24b48d2620c45003221852e199 |