graphql-engine/v3/changelog.md

# Changelog
## [Unreleased]
### Added
### Fixed
### Changed
## [v2024.10.14]
### Added
- Added contexts to more MBS errors: when a model refers to a collection that
doesn't exist, the path to the offending reference will be reported.
### Fixed
- Fix local `docker-compose.yaml` file so that running `docker compose up`
builds the engine and serves it along with a sample schema using
`ndc-postgres` and a `postgres` database.
- Subgraph builds that have relationships to other external subgraphs can now be
run locally and no longer fail with missing subgraph errors. Subgraph builds
are marked with a new OpenDD flag and when these builds are run by the engine
relationships to unknown subgraphs are automatically pruned.
- Aggregate queries now support `__typename` introspection fields.
### Changed
- metadata-build-service POST endpoints now accept zstd (preferred) or
  gzip-encoded request bodies
- The `--partial-supergraph` command-line argument and `PARTIAL_SUPERGRAPH`
environment variable have been removed. Builds now contain an OpenDD flag that
indicates if they are subgraph builds and should be run as such.
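The zstd/gzip request-body support above can be exercised by compressing the payload client-side before POSTing it to MBS. A hedged sketch using gzip (the endpoint path and payload shape are placeholders, not the real API; the zstd path is analogous but needs a third-party binding):

```python
# Sketch: gzip-compressing a JSON request body for an MBS POST endpoint.
import gzip
import json

payload = json.dumps({"metadata": {}}).encode("utf-8")
body = gzip.compress(payload)
headers = {
    "Content-Type": "application/json",
    "Content-Encoding": "gzip",  # or "zstd", which the server prefers
}

# e.g. requests.post(mbs_url, data=body, headers=headers)
```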
## [v2024.10.02]
### Added
#### Metadata build error contexts
Contexts are being added to errors raised during the build process to allow
users to locate the source of the issue more quickly. These contexts will be
surfaced in the Build Server API responses. The first example and test bed for
developing the scaffolding is the error raised when a model refers to a
nonexistent data connector. This error will now also contain the path to the
offending data connector name.
#### Pre-response Plugin
Engine now supports calling an HTTP webhook in the pre-response execution step.
This can be used to add post-execution functionality to DDN, such as sending
the response to a logging service or sending notifications for specific
requests such as mutations.
The following is an example of the OpenDD metadata for the pre-response plugin:
```yaml
kind: LifecyclePluginHook
version: v1
definition:
  name: logging
  url:
    value: http://localhost:5001/log
  pre: response
  config:
    request:
      headers:
        additional:
          hasura-m-auth:
            value: "your-strong-m-auth-key"
      session: {}
      rawRequest:
        query: {}
        variables: {}
      rawResponse: {}
```
Similar to the pre-parse plugin, the pre-response plugin's request can be
customized using the `LifecyclePluginHook` metadata object. Currently we support
the following customizations:
- adding/removing session information
- adding new headers
- forwarding specific headers
- adding/removing graphql query and variables
- adding/removing response
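A minimal receiver for this hook could be structured as follows. This is an illustrative sketch, not an official example: the handler function, payload shape, and the idea that the hook's return body is unused are assumptions based on the description above (the hook is for post-execution logging).

```python
# Hypothetical pre-response webhook handler (illustrative sketch).
# It checks the shared auth header configured in the metadata above,
# then forwards the payload to a logging sink.
import json

EXPECTED_AUTH = "your-strong-m-auth-key"  # matches the metadata above

def handle_pre_response(headers, body):
    """Process one pre-response hook call; returns an HTTP status code.
    `body` is assumed to be the JSON the engine POSTs (session,
    rawRequest, rawResponse, per the customizations listed above)."""
    if headers.get("hasura-m-auth") != EXPECTED_AUTH:
        return 401
    payload = json.loads(body)
    # Forward the execution result to a logging sink of your choice.
    log_entry = {
        "session": payload.get("session"),
        "response": payload.get("rawResponse"),
    }
    print(log_entry)
    return 204  # response body is assumed unused for pre-response hooks
```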
### Fixed
- Fix poor performance of `process_response` for large and deeply-nested results
- Fixed issue in partial supergraph builds where a `BooleanExpressionType` that
referenced a relationship that targeted an unknown subgraph would incorrectly
produce an error rather than ignoring the relationship
- Fixed double string escaping when forwarding headers to a data connector
### Changed
- Made `args` optional for models where all arguments have presets. Previously,
  if all of a model's arguments were provided by presets, users were still
  required to pass an empty `args: {}` argument:
```graphql
query MyQuery {
  ActorsByMovieMany(args: {}) {
    actor_id
    movie_id
    name
  }
}
```
This change loosens the restriction, so now the following query is valid too:
```graphql
query MyQuery {
  ActorsByMovieMany {
    actor_id
    movie_id
    name
  }
}
```
- OpenTelemetry service name set to `ddn-engine` to avoid confusion with
  `graphql-engine`.
- Builds can no longer contain two commands with the same root field name.
Previously, one of the two commands would be chosen arbitrarily as the exposed
root field. Now, this raises a build-time error.
## [v2024.09.23]
### Fixed
- Disallow defining custom scalar types with names that conflict with built-in
types, such as `String` or `Int`.
- Fixed bug where relationships defined on a boolean expression would not take
the target subgraph into account.
- Propagate deprecation status to boolean expression relationship fields.
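The custom-scalar restriction above can be illustrated with a hypothetical metadata snippet that is now rejected (the exact surrounding fields are a sketch; only the conflicting name matters):

```yaml
kind: ScalarType
version: v1
definition:
  name: String # conflicts with the built-in String type; now a build error
  graphql:
    typeName: CustomString
```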
## [v2024.09.16]
### Fixed
- Raise a warning when nested array comparisons are used without the necessary
data connector capability. A new OpenDD flag
`require_nested_array_filtering_capability` can be used to promote this
warning to an error.
- Disallow recursive types in SQL table column types.
- Previously, if you had `AggregateExpressions` that were configured to be used
in GraphQL, or `Models` configured for aggregates in GraphQL, but you did not
set the appropriate configuration in
`GraphqlConfig.definition.query.aggregates`, the build would fail with an
error. This has been relaxed so that the build now succeeds, but warnings are
raised instead. However, the aggregates will not appear in your GraphQL API
until the `GraphqlConfig` is updated. This allows you to add
`AggregateExpressions` and configure your `Model` but update your
`GraphqlConfig` separately, which is useful if they are in separate
repositories.
- A build error is now raised if an `AggregateExpression` specifies an
`aggregatableField` that has field arguments. This is an unsupported scenario
and previously would have allowed invalid queries that omitted the required
field arguments. These queries may have failed with errors at query time.
- Add a missing typecheck of `ValueExpression` while resolving model predicates.
## [v2024.09.05]
### Added
- SQL endpoint can utilize uniqueness constraints
### Fixed
- Fix the name and description of the span resolving relationship predicates in
the Engine.
### Changed
## [v2024.09.02]
### Added
- Enhanced handling of relationships in predicates
- Filter nested arrays
- Order by nested fields
- A new GraphQL config flag `require_valid_ndc_v01_version` to promote warnings
about NDC version as errors.
#### Enhanced Handling of Relationships in Predicates
Improved support for using relationships in boolean expressions even when the
data connector lacks the `relation_comparisons` capability. This update
introduces two strategies for handling relationship predicates:
- **Data Connector Pushdown**: When the source and target connectors are the
same and the target connector supports relationship comparisons, predicates
are pushed down to the NDC (Data Connector) for more efficient processing.
This strategy optimizes query execution by leveraging the data connector's
capabilities.
- **Engine-Based Resolution**: When the data connector does not support
relationship comparisons or when dealing with relationships targeting models
from other data connectors (remote relationships), predicates are resolved
internally within the engine. This approach involves querying the target
model's field values and constructing the necessary comparison expressions.
This enhancement updates the GraphQL schema's boolean expression input types by
introducing relationship predicates. The feature is gated by a compatibility
date to ensure backward compatibility. To enable it, set the date to
`2024-09-03` or later in your DDN project's `globals/compatibility-config.hml`
file.
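Assuming the standard shape of `globals/compatibility-config.hml` (a sketch; verify the kind and fields against your DDN project):

```yaml
kind: CompatibilityConfig
date: "2024-09-03"
```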
#### Filter Nested Arrays
If `institution` is a big JSON document, and `staff` is an array of objects
inside it, we can now filter `institutions` based on matches that exist within
that array.
```graphql
query MyQuery {
  where_does_john_hughes_work: InstitutionMany(
    where: { staff: { last_name: { _eq: "Hughes" } } }
  ) {
    id
    location {
      city
      campuses
    }
  }
}
```
This query would return the details of `Chalmers University of Technology`,
where `John Hughes` is a member of staff.
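The filter semantics amount to an existential match over the nested array. A plain-Python sketch of the behavior (illustrative only; the engine compiles this to a data connector expression rather than evaluating rows itself):

```python
# Sketch of `where: { staff: { last_name: { _eq: "Hughes" } } }` semantics.
def matches_nested_array(doc, array_field, predicate):
    """True if any element of doc[array_field] satisfies the predicate."""
    return any(predicate(elem) for elem in doc.get(array_field, []))

institutions = [
    {"id": 1, "name": "Chalmers University of Technology",
     "staff": [{"last_name": "Hughes"}, {"last_name": "Smith"}]},
    {"id": 2, "name": "Other Institution",
     "staff": [{"last_name": "Jones"}]},
]

hits = [
    i for i in institutions
    if matches_nested_array(i, "staff", lambda s: s["last_name"] == "Hughes")
]
# only institution 1 matches
```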
#### Order by Nested Fields
Add support for ordering by nested fields.
Example query:
```graphql
query MyQuery {
  InstitutionMany(order_by: { location: { city: Asc } }) {
    id
    location {
      city
      campuses
    }
  }
}
```
This will order by the value of the nested field `city` within the `location`
JSONB column.
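The ordering corresponds to sorting rows by the nested key. As a plain-Python sketch (illustrative, not engine code; the engine pushes this down to the data connector):

```python
# Sketch of `order_by: { location: { city: Asc } }` over in-memory rows.
rows = [
    {"id": 2, "location": {"city": "Gothenburg"}},
    {"id": 1, "location": {"city": "Aalborg"}},
]
ordered = sorted(rows, key=lambda r: r["location"]["city"])
# Aalborg sorts before Gothenburg
```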
### Fixed
- Stack overflow error on startup. Even if the (experimental) SQL feature was
turned off, engine would try to build a SQL catalog on startup. Now it will
build an empty catalog.
### Changed
## [v2024.08.22]
### Added
#### Pre-parse Engine Plugins
Add support for pre-parse engine plugins. Engine now supports calling an HTTP
webhook in the pre-parse execution step. This can be used to add functionality
to DDN, such as an [allow list][plugin-allowlist].
The following is an example of the OpenDD metadata for the plugins:
```yaml
kind: LifecyclePluginHook
version: v1
definition:
  name: allow list
  url: http://localhost:8787
  pre: parse
  config:
    request:
      headers:
        additional:
          hasura-m-auth:
            value: "your-strong-m-auth-key"
      session: {}
      rawRequest:
        query: {}
        variables: {}
```
The pre-parse plugin hook's request can be customized using the
`LifecyclePluginHook` metadata object. Currently we support the following
customizations:
- adding/removing session information
- adding new headers
- forwarding specific headers
- adding/removing graphql query and variables
### Fixed
- Disallow model filter boolean expressions having relationship comparisons in
their nested object filters.
### Changed
[plugin-allowlist]: https://github.com/hasura/plugin-allowlist
## [v2024.08.07]
### Added
- A new CLI flag (`--export-traces-stdout`) and env var (`EXPORT_TRACES_STDOUT`)
  have been introduced to enable logging of traces to STDOUT. By default, this
  logging is disabled.
#### Remote Relationship Predicates
We have significantly enhanced our permission capabilities to support remote
relationships in filter predicates. It is important to note that the
relationship source and target models should be from the same subgraph.
**Example:** API traces are stored in a separate database. Users should only be
able to view traces of their own API requests.
```yaml
kind: ModelPermissions
version: v1
definition:
  modelName: traces
  permissions:
    - role: user
      select:
        filter:
          relationship:
            name: User
            predicate:
              fieldComparison:
                field: user_id
                operator: _eq
                value:
                  sessionVariable: x-hasura-user-id
```
In the above configuration, a permission filter is defined on the `traces`
model. The filter predicate employs the `User` remote relationship, ensuring the
`user_id` field is equal to the `x-hasura-user-id` session variable.
- New `NoAuth` mode in auth config can be used to provide a static role and
session variables to use whilst running the engine, to make getting started
easier.
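A sketch of what a `NoAuth` mode `AuthConfig` might look like (field names here are a best-guess illustration; verify against the AuthConfig schema reference):

```yaml
kind: AuthConfig
version: v2
definition:
  mode:
    noAuth:
      role: admin
      sessionVariables:
        x-hasura-user-id: "1"
```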
### Fixed
- Fixes a bug where queries with nested relationship selection and filter
predicates fail due to an issue with NDC relationship collection
- Reduce error for using nested arrays in boolean expressions to a warning to
maintain backwards compatibility
- Fix use of object types as comparison operator arguments by correctly
utilising user-provided OpenDD types.
- Fixes a bug where argument presets set in the DataConnectorLink were sent to
every connector function/procedure regardless of whether the
function/procedure actually declared that argument
- Fixes a bug where argument presets set in the DataConnectorLink were not sent
to connector collections that backed Models
- Fixes a bug where the type of the argument name in the DataConnectorLink's
argument presets was incorrect in the Open DD schema. It was `ArgumentName`
but should have been `DataConnectorArgumentName`
- Fixes a bug where the check to ensure that argument presets in the
DataConnectorLink does not overlap with arguments defined on Models/Commands
was comparing against the Model/Command argument name not the data connector
argument name
### Changed
- Introduced `AuthConfig` `v2`. This new version removes the role emulation
  feature (the `allowRoleEmulationBy` field) from the engine.
- Raise a warning when an invalid data connector capabilities version is used
  in a `DataConnectorLink`, and prevent the usage of incompatible data
  connector capabilities versions.
- Models and commands that do not define all the necessary arguments to satisfy
the underlying data connector collection/function/procedure now cause warnings
to be raised. The warnings will be turned into errors in the future.
## [v2024.07.25]
### Fixed
- Ensured `traceresponse` header is returned
## [v2024.07.24]
### Added
- The metadata resolve step now emits warnings to let users know about
soon-to-be deprecated features and suggest fixes.
### Fixed
- Fixes a bug where boolean expressions passed as arguments would not be
translated into NDC `Expression` types before being sent to the data
connector.
- Fixes a bug where relationships within nested columns would throw an internal
  error: while generating NDC relationship definitions, the engine would ignore
  columns with a nested selection.
- Renamed the `ArgumentPreset` for data connectors to
`DataConnectorArgumentPreset` to avoid ambiguity in generated JSONSchema.
### Changed
- Fixed a bug where command targeted relationships were using the data
  connector's argument name instead of the Open DD argument name when querying
  the data connector
## [v2024.07.18]
### Added
#### Remote Relationships in Query Filter
We have enhanced the GraphQL query capabilities to support array and object
relationships targeting models backed by different data connectors. This allows
you to specify remote relationships directly within the `where` expression of
your queries.
**Example:** Retrieve a list of customers who have been impacted by cancelled
orders during the current sale event. This data should be filtered based on
order logs stored in a separate data source.
```graphql
query CustomersWithFailedOrders {
Customers(
where: {
OrderLogs: {
_and: [
{ timestamp: { _gt: "2024-10-10" } }
{ status: { _eq: "cancelled" } }
]
}
}
) {
CustomerId
EmailId
OrderLogs {
OrderId
}
}
}
```
By incorporating remote relationships into the where expression, you can
seamlessly query and filter data that spans across multiple data sources, making
your GraphQL queries more versatile and powerful.
### Fixed
- Build-time check to ensure boolean expressions cannot be built over nested
array fields until these are supported.
- Fixed a bug where command targeted relationships were using the data
  connector's argument name instead of the OpenDD argument name when querying
  the data connector.
## [v2024.07.10]
### Fixed
- Fixes a bug with variable nullability coercion. Specifically, providing a
non-null variable for a nullable field should work, as all non-nullable
variables can be used as nullable variables via "coercion".
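For example, a query that declares a non-nullable variable can now pass it to a nullable argument; the field and type names below are illustrative:

```graphql
# $limit is declared non-nullable (Int!) but is supplied to a
# nullable argument (Int); coercion makes this valid, since a
# value that can never be null always satisfies a nullable input.
query ArticlesPage($limit: Int!) {
  articles(limit: $limit) {
    id
    title
  }
}
```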
- Fixes a bug where data connectors without the `foreach` capability were not
allowed to create local relationships
## [v2024.07.04]
### Added
- Query Usage Analytics - usage analytics JSON data is attached to `execute`
span using `internal.query_usage_analytics` attribute
- Added a flag, `--partial-supergraph`, which instructs the metadata resolver to
prune relationships to unknown subgraphs rather than failing to resolve
#### Boolean Expression Types
A new metadata kind `BooleanExpressionType` can now be defined. These can be
used in place of `ObjectBooleanExpressionType` and
`DataConnectorScalarRepresentation`, and allow more granular control of
comparison operators and how they are used.
```yaml
kind: BooleanExpressionType
version: v1
definition:
name: album_bool_exp
operand:
object:
type: Album
comparableFields:
- fieldName: AlbumId
booleanExpressionType: pg_int_comparison_exp
- fieldName: ArtistId
booleanExpressionType: pg_int_comparison_exp_with_is_null
- field: Address
booleanExpressionType: address_bool_exp
comparableRelationships:
- relationshipName: Artist
booleanExpressionType: artist_bool_exp
logicalOperators:
enable: true
isNull:
enable: true
graphql:
typeName: app_album_bool_exp
```
```yaml
kind: BooleanExpressionType
version: v1
definition:
name: pg_int_comparison_exp
operand:
scalar:
type: Int
comparisonOperators:
- name: equals
argumentType: String!
- name: _in
argumentType: [String!]!
dataConnectorOperatorMapping:
- dataConnectorName: postgres_db
dataConnectorScalarType: String
operatorMapping:
equals: _eq
logicalOperators:
enable: true
isNull:
enable: true
graphql:
typeName: app_postgres_int_bool_exp
```
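Once defined, the generated boolean expression type can be used in a model's `where` argument. A sketch, assuming an `Album` model exposing the fields above and a `Title` field (the comparison operator names `equals` and `_in` follow the scalar example's metadata):

```graphql
query AlbumsByArtist {
  Album(
    where: {
      _and: [
        { ArtistId: { equals: "2" } }
        { AlbumId: { _in: ["1", "2", "3"] } }
      ]
    }
  ) {
    AlbumId
    Title
  }
}
```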
- Added a flag (`--expose-internal-errors`) to toggle whether internal errors
  are exposed. ([#759](https://github.com/hasura/v3-engine/pull/759))
#### Aggregates of Array Relationships
Aggregates of array relationships can now be defined by specifying an
`aggregate` in the `Relationship`'s target. Note that this is only supported
when the target of the relationship is a `Model`. You must also specify the
`aggregateFieldName` under the `graphql` section.
```yaml
kind: Relationship
version: v1
definition:
name: invoices
sourceType: Customer
target:
model:
name: Invoice
relationshipType: Array
aggregate: # New!
aggregateExpression: Invoice_aggregate_exp
description: Aggregate of the customer's invoices
mapping:
- source:
fieldPath:
- fieldName: customerId
target:
modelField:
- fieldName: customerId
graphql: # New!
aggregateFieldName: invoicesAggregate
```
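With the metadata above, the aggregate is queryable through the configured `invoicesAggregate` field. A sketch of such a query, where the available aggregation fields (`_count`, `Total`, `_sum`) are assumptions depending on the `Invoice_aggregate_exp` definition:

```graphql
query CustomerInvoiceTotals {
  Customer {
    CustomerId
    invoicesAggregate {
      _count
      Total {
        _sum
      }
    }
  }
}
```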
- The engine can now be configured to set response headers for GraphQL
  requests, if an NDC function/procedure returns headers in its result
#### Field arguments
Field arguments can now be defined on an OpenDD `ObjectType`:
```yaml
kind: ObjectType
version: v1
definition:
name: institution
fields:
- name: id
type: Int!
- name: name
type: String!
arguments:
- name: hash
argumentType: String
- name: limit
argumentType: Int
- name: offset
argumentType: Int
graphql:
typeName: Institution
dataConnectorTypeMapping:
- dataConnectorName: custom
dataConnectorObjectType: institution
fieldMapping:
id:
column:
name: id
name:
column:
name: name
argumentMapping:
hash: hash
offset: offset
limit: limit
```
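With the mapping above, the `name` field accepts arguments in queries. A sketch, where the root field name and the argument values are illustrative:

```graphql
query InstitutionNames {
  institution {
    id
    # Arguments map to the data connector via argumentMapping above
    name(hash: "md5", limit: 10, offset: 0)
  }
}
```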
### Changed
### Fixed
- Engine now respects the `relation_comparisons` capability when generating the
  GraphQL schema for relationship fields in model filters
- The OpenDD schema for `DataConnectorLink` now references the correct version
(v0.1.4) of the NDC schema when using the NDC `CapabilitiesResponse` and
`SchemaResponse` types
## [v2024.06.13]
Initial release.
<!-- end -->
[Unreleased]: https://github.com/hasura/v3-engine/compare/v2024.10.14...HEAD
[v2024.10.14]: https://github.com/hasura/v3-engine/releases/tag/v2024.10.14
[v2024.10.02]: https://github.com/hasura/v3-engine/releases/tag/v2024.10.02
[v2024.09.23]: https://github.com/hasura/v3-engine/releases/tag/v2024.09.23
[v2024.09.16]: https://github.com/hasura/v3-engine/releases/tag/v2024.09.16
[v2024.09.05]: https://github.com/hasura/v3-engine/releases/tag/v2024.09.05
[v2024.09.02]: https://github.com/hasura/v3-engine/releases/tag/v2024.09.02
[v2024.08.07]: https://github.com/hasura/v3-engine/releases/tag/v2024.08.07
[v2024.07.25]: https://github.com/hasura/v3-engine/releases/tag/v2024.07.25
[v2024.07.24]: https://github.com/hasura/v3-engine/releases/tag/v2024.07.24
[v2024.07.18]: https://github.com/hasura/v3-engine/releases/tag/v2024.07.18
[v2024.07.10]: https://github.com/hasura/v3-engine/releases/tag/v2024.07.10
[v2024.07.04]: https://github.com/hasura/v3-engine/releases/tag/v2024.07.04
[v2024.06.13]: https://github.com/hasura/v3-engine/releases/tag/v2024.06.13