### What
Update changelog
V3_GIT_ORIGIN_REV_ID: f2da518548c31e0d8d7a6ac55008987dc9fcf8ff
### What
Forgot to add these checks when implementing the feature, so now they're
added as an issue / warning, which will be promoted to an error for new
projects.
### How
Check the data connector's capabilities at the point where a boolean
expression is used, either for model filtering or as a command argument,
and raise a warning if the required capability is missing.
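A minimal sketch of the shape of such a check, with hypothetical names
(`Capabilities`, `BooleanExpressionUsage`, and the string-based issue)
standing in for the engine's real types:

```rust
// Illustrative sketch only; these types are stand-ins, not the engine's.
struct Capabilities {
    supports_query_filtering: bool,
    supports_boolean_expression_arguments: bool,
}

enum BooleanExpressionUsage {
    ModelFilter,
    CommandArgument,
}

fn check_boolean_expression_capability(
    usage: &BooleanExpressionUsage,
    capabilities: &Capabilities,
    issues: &mut Vec<String>,
) {
    let supported = match usage {
        BooleanExpressionUsage::ModelFilter => capabilities.supports_query_filtering,
        BooleanExpressionUsage::CommandArgument => {
            capabilities.supports_boolean_expression_arguments
        }
    };
    if !supported {
        // Recorded as an issue (warning) for now; to be promoted to an error
        // for new projects.
        issues.push(
            "data connector lacks the capability required to use a boolean \
             expression here"
                .to_string(),
        );
    }
}
```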
V3_GIT_ORIGIN_REV_ID: 68ecca998f61aee9340de634223c1933613c36d4
### What
We do not support aggregating fields that have field arguments.
`AggregateExpressions` currently don't check for this and allow them.
This PR adds an error that blocks this scenario in metadata resolve.
This is unlikely to have occurred in the wild, as the CLI does not yet
generate aggregates, and field arguments are only really used in the
GraphQL connector, which uses Commands rather than Models, so aggregates
are not used with it.
### How
A new check is added in metadata resolve to block fields that have
arguments. A test is added to verify this.
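The check itself is simple; a hedged sketch, with `FieldDefinition` as
an illustrative stand-in for the resolved field type used in metadata
resolve:

```rust
// Hypothetical shape for illustration only.
struct FieldDefinition {
    name: String,
    arguments: Vec<String>, // names of field arguments, if any
}

fn check_aggregatable_field(field: &FieldDefinition) -> Result<(), String> {
    // Aggregating a field that takes field arguments is unsupported: block it.
    if !field.arguments.is_empty() {
        return Err(format!(
            "field '{}' cannot be used in an AggregateExpression because it \
             has field arguments",
            field.name
        ));
    }
    Ok(())
}
```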
V3_GIT_ORIGIN_REV_ID: f7997d1930dc6ca65cdd3e7dc32caf6ac4908767
### What
This PR is similar to #1079 in that it relaxes a build error into a
warning. If you define a `Model` with aggregates but you haven't
defined the `query.aggregate` section in your `GraphqlConfig`, then you
currently get a build error telling you to do it. This PR relaxes this
to a warning and aggregates are just omitted from the Model GraphQL API
schema.
### How
The error is converted to an issue in metadata resolve. An existing test
that expected this error has been converted into a passing test, and the
newly raised issue can be seen at the bottom of the resolved metadata
snapshot.
V3_GIT_ORIGIN_REV_ID: b864be90141e2a8940fa9a6269beb24021880636
### What
Fix a pending TODO; add a missing typecheck while resolving model
predicates.
This checks the types of literal values in the value expressions used in
various predicates.
### How
Use the `typecheck_value_expression` function to typecheck while
resolving model predicates as well.
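As a simplified illustration of what a check like
`typecheck_value_expression` does (the real signature and type model in
the engine differ; this `ValueExpression` is a stand-in):

```rust
// Simplified stand-in types, for illustration only.
enum ValueExpression {
    Literal(serde_json::Value),
    SessionVariable(String),
}

fn typecheck_value_expression(
    expected_type: &str,
    value: &ValueExpression,
) -> Result<(), String> {
    match value {
        // Only literals can be checked at metadata-resolve time.
        ValueExpression::Literal(literal) => match (expected_type, literal) {
            ("Int", serde_json::Value::Number(_))
            | ("String", serde_json::Value::String(_))
            | ("Boolean", serde_json::Value::Bool(_)) => Ok(()),
            _ => Err(format!(
                "literal {literal} does not match expected type {expected_type}"
            )),
        },
        // Session variables are only known at request time; nothing to check.
        ValueExpression::SessionVariable(_) => Ok(()),
    }
}
```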
V3_GIT_ORIGIN_REV_ID: 299aa432bc3ec98910441223eac8c8fc1a082aec
### What
If a user configures `AggregateExpressions` in their metadata with the
`graphql` section configured, but forgets to configure `aggregates` in
`GraphqlConfig`, an error is raised and the build fails. This scenario
occurs when a user has existing pre-aggregates metadata and adds
aggregates to it later.
While simply adding the required configuration to `GraphqlConfig` would
fix the error, there are cases where the `GraphqlConfig` is not in the
user's current repo, such as where they are working in a separate
subgraph repo and the `GraphqlConfig` is managed elsewhere and requires
a coordinated change.
This PR turns that error into a warning and allows a successful build.
The build will not expose aggregates in the GraphQL API, but it does
succeed, which allows the user to make progress until the
`GraphqlConfig` is updated separately.
### How
A new `AggregateExpressionIssue` type is added and the error is moved
from `AggregateExpressionError` to that type instead. The code then logs
the new issue and contributes it to the main issues collection.
The test that checked for this error (a failure test) has been converted
into a successful test, and the warning can be seen at the bottom of the
new snapshot file.
V3_GIT_ORIGIN_REV_ID: 751590c484feec4ae03f079ae6a1bc0bf867ff64
### What
- Disallow recursive types in types of table columns
### How
- As we walk down the table type constructing the corresponding Arrow
types, we keep track of a set of struct type names that we've seen.
- If we see a type name we've seen before, we don't include the current
  field:
  - If we're in a nested context, we cascade the field deletion up.
  - If we're at a top-level table column, we remove that column.
With this approach, the fields which are removed never depend on the
path the traversal takes, so we never end up with a reference to a named
struct type which in reality fetches only a subset of the total fields.
A named type always refers to the same set of fields.
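A minimal sketch of the traversal, assuming a simplified type model
(`FieldType` and the string rendering are hypothetical; the real code
constructs Arrow types). Recursion anywhere below a field cascades up to
the nearest table column, so a named struct is either rendered with all
its fields or dropped entirely:

```rust
use std::collections::HashSet;

// Hypothetical, simplified stand-in for the real column types.
enum FieldType {
    Scalar(String),
    Struct {
        name: String,
        fields: Vec<(String, FieldType)>,
    },
}

// Returns None when the type (transitively) refers back to a struct we're
// already inside; the None cascades the deletion up towards the column.
fn to_arrow_like(ty: &FieldType, seen: &mut HashSet<String>) -> Option<String> {
    match ty {
        FieldType::Scalar(name) => Some(name.clone()),
        FieldType::Struct { name, fields } => {
            if !seen.insert(name.clone()) {
                return None; // recursive reference: drop this field
            }
            let mut rendered = Vec::new();
            let mut ok = true;
            for (field_name, field_ty) in fields {
                match to_arrow_like(field_ty, seen) {
                    Some(t) => rendered.push(format!("{field_name}: {t}")),
                    None => {
                        ok = false; // cascade the deletion upwards
                        break;
                    }
                }
            }
            seen.remove(name);
            ok.then(|| format!("{name} {{ {} }}", rendered.join(", ")))
        }
    }
}

// Top-level table columns whose conversion fails are removed entirely.
fn table_columns(columns: &[(String, FieldType)]) -> Vec<(String, String)> {
    columns
        .iter()
        .filter_map(|(name, ty)| {
            to_arrow_like(ty, &mut HashSet::new()).map(|t| (name.clone(), t))
        })
        .collect()
}
```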
V3_GIT_ORIGIN_REV_ID: 270a71cd5b1d3700067abf7c771c473bbc33167e
### What
As title.
V3_GIT_ORIGIN_REV_ID: 321053fa840d1e15780bbea8881e87dafd32674c
### What
Update the span name and description around a function that handles the
predicates resolved in-engine.
V3_GIT_ORIGIN_REV_ID: 4d72c0b3e41d105f6a7e4f4357b15a8e91e923ad
### What
Just updates the changelog to include context for the compatibility
date.
V3_GIT_ORIGIN_REV_ID: 19cccc551a0841e26d48859f640a3135ce41a49f
### What
Exposes SQL constraints to datafusion's planner via the
`TableProvider::constraints()` trait method.
### How
We pull these constraints from the GraphQL configuration for the model.
This is not ideal - the data should really live at the model layer now.
But we can hopefully solve that later.
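Roughly, the idea looks like this (a sketch, not the engine's actual
code; the column indices come from wherever the unique-key metadata
lives, currently the model's GraphQL configuration, and in the real code
the value is returned from the `TableProvider::constraints()` method):

```rust
use datafusion::common::{Constraint, Constraints};

// Sketch: surface a unique-key constraint to DataFusion's planner.
fn model_constraints(unique_key_column_indices: Vec<usize>) -> Constraints {
    // DataFusion expresses constraints positionally, by column index.
    Constraints::new_unverified(vec![Constraint::Unique(unique_key_column_indices)])
}
```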
V3_GIT_ORIGIN_REV_ID: c723d895fd707e24defae971f474b5938478a82b
### What
Same as the title
V3_GIT_ORIGIN_REV_ID: e2e62c9455d0aa6a54aae10f3f15992a1ce75ecb
### What
This PR enhances relationship handling in filter predicates. When a
local relationship's underlying data connector lacks the
`relation_comparisons` capability, the predicate is handled just like a
remote relationship. This removes the hard dependence on the
`relation_comparisons` capability for relationships.
The `relation_comparisons` capability is still useful for pushing local
relationship predicates down to NDC, where they are resolved more
efficiently than handling them in the engine itself (as is done for
remote relationships).
### How
- While building the GraphQL schema for boolean expressions, don't check
  for the capability's presence; all available relationships can be
  included.
- Introduce a new enum type for the execution strategy of relationships
  in predicates, and rename existing enum variants accordingly (see the
  sketch below).
- Tests: use the `jq-rs` crate to modify the common metadata, removing
  the capability in-memory.
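A hedged guess at the shape of that enum (the real variant names may
differ):

```rust
// Illustrative sketch of the execution strategy split described above.
enum RelationshipPredicateExecutionStrategy {
    // Local relationship whose connector has `relation_comparisons`:
    // push the predicate down to NDC.
    NdcPushdown,
    // Remote relationship, or a local one without the capability:
    // resolve the predicate in the engine.
    InEngine,
}
```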
---------
Co-authored-by: Daniel Chambers <daniel@hasura.io>
V3_GIT_ORIGIN_REV_ID: ccc8a1304c80954d1c54d56b71e41a7c22b5a05d
### What
We want to use `CompatibilityConfig` to configure turning warnings into
errors after a certain date, and to make sure we don't break old builds.
This allows passing a path to an optional compatibility config file to
the engine, and scaffolds how we'll map this config to metadata resolve
options.
### How
Add a flag to `v3-engine` and parse the file if the flag is set. Tested
by adding a static test file and using it with `just watch` and `just
run`.
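A sketch of the wiring, assuming `clap` for flag parsing and a JSON
config file; the flag name and the `CompatibilityConfig` fields shown
here are illustrative, not the engine's actual schema:

```rust
use std::path::PathBuf;

use clap::Parser;
use serde::Deserialize;

#[derive(Parser)]
struct ServerOptions {
    /// Path to an optional compatibility config file (hypothetical flag name).
    #[arg(long, value_name = "PATH")]
    compatibility_config_path: Option<PathBuf>,
}

#[derive(Deserialize)]
struct CompatibilityConfig {
    date: String, // e.g. "2024-08-01"
}

fn read_compatibility_config(
    options: &ServerOptions,
) -> anyhow::Result<Option<CompatibilityConfig>> {
    match &options.compatibility_config_path {
        // Flag not set: run with default metadata-resolve options.
        None => Ok(None),
        Some(path) => {
            let raw = std::fs::read_to_string(path)?;
            Ok(Some(serde_json::from_str(&raw)?))
        }
    }
}
```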
V3_GIT_ORIGIN_REV_ID: 972c67ae29905b9ce1bb57e150f4cfcfd6a069ef
### What
We cannot support having relationship comparisons in nested object field
filters of a boolean expression. NDC does not support this natively
([slack
thread](https://hasurahq.slack.com/archives/C05HND0F6LB/p1722845028319099)).
This PR adds a metadata build check for this case and rejects such
boolean expressions.
Tests in this PR are partially based on
https://github.com/hasura/v3-engine/pull/935.
**Note:** This might break older builds that have relationship
comparisons in nested object field filter expressions. This is a rare
scenario. We will check through our schema-diff job for any failing
builds; if there are a significant number, we might hold this PR. Full
context in this slack thread -
https://hasurahq.slack.com/archives/C06P2U8U55G/p1723121764332539.
### How
- Add a metadata build check that rejects boolean expressions whose
nested object filters contain relationship comparisons.
- Raise a GraphQL API runtime internal error while building the filter
IR when a relationship comparison is found within a nested field filter.
---------
Co-authored-by: Daniel Chambers <daniel@hasura.io>
V3_GIT_ORIGIN_REV_ID: 86dce0b784c77e57bb1888125dba5b599e40f741
### What
Add a flag `--export-traces-stdout` to log traces to stdout. Default is
disabled.
Command-line flag - `--export-traces-stdout`
Env var - `EXPORT_TRACES_STDOUT`
### How
Introduce a new command line flag. Make `initialize_tracing` accept a
`bool`, and set up the stdout exporter based on that.
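A sketch of the flag wiring, assuming `clap` with its `env` feature
enabled; `initialize_tracing` here is a stub for the engine's real setup
function:

```rust
use clap::Parser;

#[derive(Parser)]
struct ServerOptions {
    /// Log traces to stdout; disabled by default.
    #[arg(long, env = "EXPORT_TRACES_STDOUT")]
    export_traces_stdout: bool,
}

fn initialize_tracing(export_traces_stdout: bool) {
    if export_traces_stdout {
        // Install a stdout span exporter alongside the usual pipeline.
    }
    // ... remaining tracing setup elided ...
}

fn main() {
    let options = ServerOptions::parse();
    initialize_tracing(options.export_traces_stdout);
}
```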
V3_GIT_ORIGIN_REV_ID: f39d6f863fd2bca65ad89f1cef4b077aa9eabc5b
### What
Fixes a bug where queries with nested relationship selection and filter
predicates fail due to an issue with NDC relationship collection.
```graphql
query MyQuery {
  Album {
    AlbumId
    Title
    ArtistId
    Tracks {
      AlbumId
      Name
      TrackId
    }
  }
}
```
A selection permission is defined on the `Tracks` model with a
relationship comparison in its predicate.
### How
- Previously, the collection of relationships occurred independently by
traversing through the IR AST. Consequently, during planning, the
collection of local relationships was explicitly ignored. This caused
confusion and resulted in the omission of relationship collectors when
planning nested selections for local relationships, leading to the
issue.
- In this PR, the independent collection of relationships is removed.
Instead, all NDC relationships for field selection, filter, and
permission predicates are now collected during planning. This unifies
the logic and ensures consistency.
V3_GIT_ORIGIN_REV_ID: cbd5bfef7a90a7d7602061a9c733ac54b764e0d3
This PR fixes the following bugs:
- Fixes a bug where models and commands were allowed even though they
did not define arguments to satisfy the underlying data connector
collection/function/procedure. **UPDATE:** This only raises a warning
rather than fails the build, because existing builds on staging and
production have this issue. This will need to be transitioned to an
error once the Compatibility Date plumbing is in place.
- Fixes a bug where argument presets set in the DataConnectorLink were
sent to every connector function/procedure regardless of whether the
function/procedure actually declared that argument
- Fixes a bug where argument presets set in the DataConnectorLink were
not sent to connector collections that backed Models
- Fixes a bug where the type of the argument name in the
DataConnectorLink's argument presets was incorrect in the Open DD
schema. It was `ArgumentName` but should have been
`DataConnectorArgumentName`
- Fixes a bug where the check to ensure that argument presets in the
DataConnectorLink does not overlap with arguments defined on
Models/Commands was comparing against the Model/Command argument name,
not the data connector argument name.
There are a number of changes that tighten things up in this PR.
Firstly, the custom connector is improved so that it rejects requests
with arguments of the wrong type or unexpected arguments. This causes
tests that should have been failing to actually fail. Then, new tests
have been added to metadata_resolve to cover the untested edge cases
around data connector link argument presets.
Then, metadata resolve is refactored so that the link argument presets
are validated and stored on each command/model source, rather than on
the DataConnectorLink. Extra validation has been added during this
process to fix the above bugs. Any argument presets irrelevant to the
particular command/model are dropped. Then, during execution, we read
the presets from the command/model source instead of from the
DataConnectorLink, which ensures we only send the appropriate arguments.
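The filtering step might look roughly like this (a hedged sketch with
simplified types, not the engine's actual code):

```rust
use std::collections::BTreeMap;

// Keep only the link-level argument presets that the backing NDC
// function/procedure/collection actually declares.
fn relevant_argument_presets(
    link_presets: &BTreeMap<String, serde_json::Value>,
    declared_arguments: &[String],
) -> BTreeMap<String, serde_json::Value> {
    link_presets
        .iter()
        .filter(|(name, _)| declared_arguments.iter().any(|arg| arg == *name))
        .map(|(name, value)| (name.clone(), value.clone()))
        .collect()
}
```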
JIRA: [V3ENGINE-290](https://hasurahq.atlassian.net/browse/V3ENGINE-290)
Fixes
https://linear.app/hasura/issue/APIPG-676/dataconnectorlink-argument-presets-are-always-sent-regardless-of
[V3ENGINE-290]:
https://hasurahq.atlassian.net/browse/V3ENGINE-290?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
V3_GIT_ORIGIN_REV_ID: dd02e52e1ff224760c5f0ed6a73a1ae56779e1f1
The validation added in #880 validated that the version in the
DataConnectorLink's capabilities version matched the version specified
in the schema. Unfortunately, there are existing builds with invalid
capabilities versions that failed to parse. Subsequently the validation
was removed in #907 to fix the staging deploy that broke.
This is the unique set of errors found when deploying to staging:
```
error generating artifacts: schema build error: invalid metadata: The data connector myts (in subgraph app) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version
error generating artifacts: schema build error: invalid metadata: The data connector my_ts (in subgraph app) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version
error generating artifacts: schema build error: invalid metadata: The data connector mydbpg (in subgraph app) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version
error generating artifacts: schema build error: invalid metadata: The data connector chinook (in subgraph app) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version
error generating artifacts: schema build error: invalid metadata: The data connector clickhouse (in subgraph analytics) has an error: The version specified in the capabilities ("^0.1.1") is an invalid version: unexpected character '^' while parsing major version number
error generating artifacts: schema build error: invalid metadata: The data connector chinook_link (in subgraph app) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version
error generating artifacts: schema build error: invalid metadata: The data connector app_connector (in subgraph app) has an error: The version specified in the capabilities ("^0.1.1") is an invalid version: unexpected character '^' while parsing major version number
error generating artifacts: schema build error: invalid metadata: The data connector chinook (in subgraph app) has an error: The version specified in the capabilities ("^0.1.1") is an invalid version: unexpected character '^' while parsing major version number
error generating artifacts: schema build error: invalid metadata: The data connector nodejs (in subgraph app) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version
error generating artifacts: schema build error: invalid metadata: The data connector db (in subgraph app) has an error: The version specified in the capabilities ("*") is an invalid version: unexpected character '*' while parsing major version number
error generating artifacts: schema build error: invalid metadata: The data connector my_pg (in subgraph my_subgraph) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version
error generating artifacts: schema build error: invalid metadata: The data connector mypg (in subgraph myapp) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version
error generating artifacts: schema build error: invalid metadata: The data connector mypglink (in subgraph mysubgraph) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version
error generating artifacts: schema build error: invalid metadata: The data connector mypg (in subgraph app2) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version
error generating artifacts: schema build error: invalid metadata: The data connector test_connector (in subgraph app) has an error: The version specified in the capabilities ("") is an invalid version: empty string, expected a semver version
```
The invalid versions are: `""`, `"*"`, `"^0.1.1"`.
This PR restores the version validation code, but for NDC v0.1.x
capabilities (the only supported version right now, v0.2.x is feature
flagged off), we now accept versions that fail to parse as a valid
semver, and instead we raise an issue that gets logged as a warning.
NDC v0.2.x capabilities retain the stricter behaviour and do not accept
a dodgy capabilities version. This is backwards compatible because
trying to use NDC v0.2.x right now produces a build error.
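A sketch of the lenient-vs-strict split, using the `semver` crate; the
surrounding types are hypothetical stand-ins for the engine's issue and
error enums:

```rust
use semver::Version;

enum NdcVersion {
    V01,
    V02,
}

fn check_capabilities_version(
    raw: &str,
    ndc_version: &NdcVersion,
    issues: &mut Vec<String>,
) -> Result<(), String> {
    match Version::parse(raw) {
        Ok(_) => Ok(()),
        Err(parse_error) => match ndc_version {
            // v0.1.x: accept, but record an issue that is logged as a warning.
            NdcVersion::V01 => {
                issues.push(format!(
                    "capabilities version {raw:?} is not valid semver: {parse_error}"
                ));
                Ok(())
            }
            // v0.2.x: keep the strict behaviour and fail the build.
            NdcVersion::V02 => Err(format!(
                "capabilities version {raw:?} is not valid semver: {parse_error}"
            )),
        },
    }
}
```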
Fixes APIPG-736
V3_GIT_ORIGIN_REV_ID: 9e9bf99123bad31e8229e8ea29343eb8aaf9786d
### What
Change an error down to an issue / warning to unbreak builds.
### How
Introduce `BooleanExpressionIssue`, move error value to it, emit this
instead. Later we'll turn this into an error based on compatibility
date.
V3_GIT_ORIGIN_REV_ID: f0903cc04ea1cf328c9bf67a38d76fd670743679
Closes:
https://linear.app/hasura/issue/APIPG-397/support-remote-relationship-predicates-in-permission-filters
### What
Allow defining permission filters with remote relationships in their
predicates.
### How
- Lift the metadata resolve restriction on remote relationships in
  permission predicates.
- Abstract out the remote relationship resolving logic in the query
  filter into a new function, and re-use it while resolving permission
  filters.
- Tests:
  - A metadata build test to check the presence of an essential equality
    operator on source fields in the relationship mapping.
  - Ported all `select_many/relationship_predicate/*` tests to new
    `select_many/remote_relationship_predicate/*` tests with appropriate
    metadata changes.
---------
Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com>
V3_GIT_ORIGIN_REV_ID: 9c496ecdc9829ed626354ef85e776e1afcb0dfc7
### What
This allows object types to be used as arguments for comparison
operators. This is useful for Elasticsearch's `range` operator, which
allows passing an object like `{ gt: 1, lt: 100 }` to an `integer` field
in order to filter items that are greater than `1` and less than `100`.
This PR has the nice side effect of dropping the requirement to use
information from scalar `BooleanExpressionType`s in place of
`DataConnectorScalarTypes`, which we only required because we were not
looking up the comparable operator information in scalar boolean
expression types correctly.
### How
Previously, when using `ObjectBooleanExpressionType` and
`DataConnectorScalarRepresentation`, we had no information about the
argument types of comparison operators (i.e., what values should I pass to
`_eq`?), and so inferred this by looking up the comparison operator in
the data connector schema, then looking for a
`DataConnectorScalarRepresentation` that tells us what OpenDD type that
maps to.
Now, with `BooleanExpressionType`, we have this information provided in
OpenDD itself:
```yaml
kind: BooleanExpressionType
version: v1
definition:
  name: Int_comparison_exp
  operand:
    scalar:
      type: Int
      comparisonOperators:
        - name: _eq
          argumentType: Int! # This is an OpenDD type
        - name: _within
          argumentType: WithinInput!
        - name: _in
          argumentType: "[Int!]!"
```
Now we look up this information properly, as well as tightening up some
validation around relationships that was making us fall back to the old
way of doing things where the user had failed to provide a
`comparableRelationship` entry.
This means:
a) we can actually use object types as comparable operator types, and
b) scalar boolean expression types aren't used outside the world of
boolean expressions, which is a lot easier to reason about.
V3_GIT_ORIGIN_REV_ID: ad5896c7f3dbf89a38e7a11ca9ae855a197211e3
### What
We have decided to remove the role emulation feature from engine
altogether.
More details in the RFC -
https://docs.google.com/document/d/1tlS9pqRzLEotLXN_dhjFOeIgbH6zmejOdZTbkkPD-aM/edit
V3_GIT_ORIGIN_REV_ID: e7cb765df5afac6c6d6a05a572a832ce9910cc0b
### What
Forgot to do this in the last PR.
V3_GIT_ORIGIN_REV_ID: 24ec7278a63379f252d452b8ec627c934e1c534c
### What
We'd like to make it simpler to try out DDN, by starting with a mode
that uses no auth.
### How
Add a `NoAuth` `AuthConfig` mode that is configured thus:
```json
"noAuth": {
"role": "admin",
"sessionVariables": {
"x-hasura-user-id": "1"
}
}
```
Given the above config:
- If no `x-hasura-role` is sent with a request, we run it as `admin`.
- If an `x-hasura-role` header is sent and it's `admin`, it continues to
work.
- If any other `x-hasura-role` header is sent, an error is raised.
- All other headers are ignored, and we always set `x-hasura-user-id` to
`1` (see the sketch below).
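A sketch of the resolution logic implied by the rules above; the
`NoAuthConfig` shape mirrors the JSON config, and the names are
illustrative rather than the engine's actual types:

```rust
use std::collections::BTreeMap;

struct NoAuthConfig {
    role: String,
    session_variables: BTreeMap<String, String>,
}

fn resolve_session(
    config: &NoAuthConfig,
    role_header: Option<&str>,
) -> Result<BTreeMap<String, String>, String> {
    match role_header {
        // No x-hasura-role, or one matching the configured role: accept.
        None => {}
        Some(role) if role == config.role => {}
        Some(other) => {
            return Err(format!("role {other} is not permitted in NoAuth mode"));
        }
    }
    // All other headers are ignored; the configured session variables
    // (e.g. x-hasura-user-id = "1") are always applied.
    let mut session = config.session_variables.clone();
    session.insert("x-hasura-role".to_string(), config.role.clone());
    Ok(session)
}
```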
V3_GIT_ORIGIN_REV_ID: dddcfbee9c3a31e84dfc8013de32e3a9bf31943d
### What
When a boolean expression is passed as an argument it was not being
translated into an `ndc_models::Expression` and so queries failed.
### How
Look up the type of an argument in `metadata-resolve`, and mark the
`ArgumentInfo` with a new `ArgumentKind`. Then, in the IR step, we use
that to work out whether to serialize the argument to JSON as before, or
translate it into an `Expression` type that will eventually be turned
into an `ndc_models::Expression`.
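Sketched with simplified stand-in types (the real `ArgumentKind`
variants and the surrounding plumbing may differ):

```rust
enum ArgumentKind {
    // Serialize the value to JSON, as before.
    Other,
    // The argument is a boolean expression: build an NDC expression instead.
    Expression,
}

enum PlannedArgument {
    Json(serde_json::Value),
    // Placeholder for what eventually becomes an `ndc_models::Expression`.
    Expression(String),
}

fn plan_argument(kind: &ArgumentKind, value: serde_json::Value) -> PlannedArgument {
    match kind {
        ArgumentKind::Other => PlannedArgument::Json(value),
        ArgumentKind::Expression => {
            // The IR step translates the boolean expression value here.
            PlannedArgument::Expression(value.to_string())
        }
    }
}
```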
V3_GIT_ORIGIN_REV_ID: 4da3ce0ae04895c33de2b6bdb6fff1018c39b3ad
### What
Sometimes our builds succeed, but we'd like to tell the user how they
could do better. This implements the simplest possible warnings system.
<img width="957" alt="Screenshot 2024-07-18 at 16 06 49"
src="https://github.com/user-attachments/assets/ff91d221-667a-43f9-bc8a-51bf4574a7b8">
### How
Warnings are printed to stdout on `v3-engine` startup, and will be
returned to the CLI via `v3-metadata-build-service`. The diff is mostly
updated snapshot tests as we're returning more from
`metadata-resolve::resolve` now.
V3_GIT_ORIGIN_REV_ID: d01520e53f49d9b594e94a4531b6a86e749875c3
### What
Previously, while generating relationship definitions for NDC, we would
ignore columns with nested selection.
This PR fixes that.
Closes https://hasurahq.atlassian.net/browse/V3ENGINE-247
### How
While matching on `FieldSelection::Column`, don't ignore it. Check
whether it contains a nested selection; if it does, call
`collect_relationships_from_nested_selection` (see the sketch below).
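A simplified sketch of the fix; the real `FieldSelection` has
relationship variants (elided here) and richer types:

```rust
enum FieldSelection {
    Column {
        column: String,
        nested_selection: Option<NestedSelection>,
    },
}

struct NestedSelection {
    fields: Vec<FieldSelection>,
}

fn collect_relationships_from_nested_selection(
    nested: &NestedSelection,
    relationships: &mut Vec<String>,
) {
    for field in &nested.fields {
        collect_relationships(field, relationships);
    }
}

fn collect_relationships(selection: &FieldSelection, relationships: &mut Vec<String>) {
    match selection {
        FieldSelection::Column {
            nested_selection, ..
        } => {
            // Previously this arm ignored the column entirely; now we recurse
            // into any nested selection so its relationships are collected too.
            if let Some(nested) = nested_selection {
                collect_relationships_from_nested_selection(nested, relationships);
            }
        }
    }
}
```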
V3_GIT_ORIGIN_REV_ID: 9db94744d8e2d35f8430bded07209ef519175205
### What
We're having issues with our deduplication of names in JSONSchema. We
would like to fix this properly, but in the short term this PR renames a
conflicting object as a quick fix.
### How
Rename `ArgumentPreset` in `open_dds::data_connectors` to
`DataConnectorArgumentPreset`.
V3_GIT_ORIGIN_REV_ID: e3eafeffe8ba4d513f9d0a09a623f101650247ea
### What
update changelog for new release
### How
:)
V3_GIT_ORIGIN_REV_ID: 6620f9923f393d190f0f2fab1aced1cff4d6aec0
Now, users can filter their queries using remote relationships in the
filter predicate. Users need to provide the relationships for comparison
in the `comparableRelationships` field of the newer
`BooleanExpressionType` OpenDD metadata.
Minimal Algorithm:
```
Relationship: ARemoteB => Model_A -> Model_B (remote NDC)
Column Mapping: (A_column_1, B_column_1), (A_column_2, B_column_2)

query:
  Model_A:
    where: ARemoteB: {B_column_3: {_eq: value}}

Step 1: Fetch the RHS column values (in the mapping) from the remote
target model:

  SELECT B_column_1, B_column_2 FROM model_b_collection WHERE B_column_3 = value;

  yields the following rows:

  [
    [(B_column_1, b_value_1), (B_column_2, b_value_2)],
    [(B_column_1, b_value_11), (B_column_2, b_value_22)],
  ]

Step 2: Using the above rows, generate the LHS column filter for the
Model_A query:

  SELECT <fields> FROM model_a_collection WHERE
    ((A_column_1 = b_value_1) AND (A_column_2 = b_value_2))
    OR ((A_column_1 = b_value_11) AND (A_column_2 = b_value_22));

The above comparison is equivalent to:

  WHERE (A_column_1, A_column_2) IN ((b_value_1, b_value_2), (b_value_11, b_value_22))
```
Sample query:
```graphql
query MyQuery {
  Track(
    where: {
      _or: [
        { AlbumRemote: { Artist: { ArtistId: { _eq: 2 } } } }
        { TrackId: { _eq: 3 } }
      ]
    }
  ) {
    TrackId
    AlbumRemote {
      Artist {
        ArtistId
        Name
      }
    }
  }
}
```
In the query above, `AlbumRemote` is a remote relationship which targets
a model backed by a different data connector.
V3_GIT_ORIGIN_REV_ID: 7aa76fcae83e1f22de460f1eef5648fb7a35c047
### What
This PR fixes an issue where relationships that target commands do not
correctly use the data connector's argument name when making the ndc
request. Instead, they use the OpenDD argument name, which is incorrect.
For metadata where the OpenDD argument name is the same as the data
connector's argument name, the code works but only coincidentally.
### How
I've updated an existing test to change the name of the command argument
to be different from the data connector's argument name. This test
failed but is now fixed by this PR, which simply looks up the data
connector argument name and uses that instead (see the sketch below).
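The lookup is essentially a map access from OpenDD argument name to
connector argument name; a hedged sketch with simplified types:

```rust
use std::collections::BTreeMap;

// `argument_mappings` stands in for the mapping stored on the resolved
// command source.
fn ndc_argument_name<'a>(
    argument_mappings: &'a BTreeMap<String, String>,
    open_dd_argument_name: &str,
) -> Result<&'a str, String> {
    argument_mappings
        .get(open_dd_argument_name)
        .map(String::as_str)
        .ok_or_else(|| {
            format!("no data connector argument mapping for {open_dd_argument_name}")
        })
}
```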
V3_GIT_ORIGIN_REV_ID: 71f1e812174c7bb9922792523129e4bcdce911ed
### What
Filtering on nested arrays doesn't work, let's make sure it's not
allowed for now.
### How
Adding a check in the `boolean_expression_types` stage.
---------
Co-authored-by: Gil Mizrahi <gil@gilmi.net>
V3_GIT_ORIGIN_REV_ID: cc08e8c24098c1fea9b6e1ee61b82ade989dd29a
### What
Update changelogs for new release.
V3_GIT_ORIGIN_REV_ID: 7adfd3c2c53b912bb7c4604ebed93601a201c15e
Update the changelog for new version
---------
Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com>
V3_GIT_ORIGIN_REV_ID: b87f16ebaa1471f010ec461be097bcfd6648c99a
### What
When generating the GraphQL schema for relationship fields in a model
filter, the engine ignores the `relation_comparisons` capability of the
data connector, and would generate schema for data connectors that don't
have this capability. This PR fixes that.
### How
While generating fields for the filter input type, take the relationship
capabilities into account.
The `ObjectBooleanExpressionType` and `BooleanExpressionType` objects
are quite different, hence their schema generation is also different and
is split across two functions (`build_comparable_relationships_schema`
and `build_new_comparable_relationships_schema`). The relationship
comparison capability check was added to both functions.
V3_GIT_ORIGIN_REV_ID: dce2b88f7792e01e5bb390ecdb580e223ec80f01
### What
# BooleanExpressionType
A new metadata kind `BooleanExpressionType` can now be defined. These
can be used in place of `ObjectBooleanExpressionType` and
`DataConnectorScalarRepresentation`, and allow more granular control of
comparison operators and how they are used.
The old metadata types still work, but will eventually be deprecated.
```yaml
kind: BooleanExpressionType
version: v1
definition:
  name: album_bool_exp
  operand:
    object:
      type: Album
      comparableFields:
        - fieldName: AlbumId
          booleanExpressionType: pg_int_comparison_exp
        - fieldName: ArtistId
          booleanExpressionType: pg_int_comparison_exp_with_is_null
        - fieldName: Address
          booleanExpressionType: address_bool_exp
      comparableRelationships:
        - relationshipName: Artist
          booleanExpressionType: artist_bool_exp
  logicalOperators:
    enable: true
  isNull:
    enable: true
  graphql:
    typeName: app_album_bool_exp
```
```yaml
kind: BooleanExpressionType
version: v1
definition:
  name: pg_int_comparison_exp
  operand:
    scalar:
      type: Int
      comparisonOperators:
        - name: equals
          argumentType: String!
        - name: _in
          argumentType: "[String!]!"
      dataConnectorOperatorMapping:
        - dataConnectorName: postgres_db
          dataConnectorScalarType: String
          operatorMapping:
            equals: _eq
  logicalOperators:
    enable: true
  isNull:
    enable: true
  graphql:
    typeName: app_postgres_int_bool_exp
```
### How
Remove feature flag, unhide JsonSchema items, fix a few missing bits of
JsonSchema the tests didn't warn us about before.
V3_GIT_ORIGIN_REV_ID: dd3055d926fdeb7446cd57085679f2492a4358a1
### What
This PR adds the ability to include internal errors in API responses via
a command line argument `--expose-internal-errors`.
The default behavior remains not to show the contents of internal error
messages.
V3_GIT_ORIGIN_REV_ID: 11c47286d3fbceeda71df3a224853633aeea8902
This PR fixes the version of the NDC Spec that Open DD's JSON schema
references when it uses NDC's schema and capabilities types in the
`DataConnectorLink`. OpenDD uses NDC 0.1.4, but the JSON schema
referenced 0.1.3. This corrects that.
V3_GIT_ORIGIN_REV_ID: bdbb417b3227861dae7835f6d3bda0d1bf935ea7
NOTE: This PR is stacked on #756 and should be shipped after that is
merged.
This PR enables the existing aggregate relationships work (see #725,
#731, #756) by default by removing the experimental flag it used to be
disabled behind.
The new OpenDD schema changes that were added are also unhidden so that
they are visible in the OpenDD JSON Schema.
V3_GIT_ORIGIN_REV_ID: cfd86d8a9ea61887ccf0f1a5d08bdcc3dda59cdc
## What
This PR introduces a changelog file, `changelog.md`.
Any PR that is not simply a technical refactor should include a relevant
entry in this file.
Additionally, we simplify the pull request template: it used to contain
a section for a changelog entry, which is now rendered irrelevant.
V3_GIT_ORIGIN_REV_ID: 00881d86ffe87c4c0584b88b960837543dde34b7