# Changelog
## [Unreleased]
### Added
### Fixed
### Changed
## [v2024.08.07]
### Added
- A new CLI flag (`--export-traces-stdout`) and env var (`EXPORT_TRACES_STDOUT`)
have been introduced to enable logging of traces to STDOUT. By default, trace
logging is disabled.
#### Remote Relationship Predicates
We have significantly enhanced our permission capabilities to support remote
relationships in filter predicates. It is important to note that the
relationship source and target models should be from the same subgraph.
**Example:** API traces are stored in a separate database. Users should only be
able to view traces of their own API requests.
```yaml
kind: ModelPermissions
version: v1
definition:
  modelName: traces
  permissions:
    - role: user
      select:
        filter:
          relationship:
            name: User
            predicate:
              fieldComparison:
                field: user_id
                operator: _eq
                value:
                  sessionVariable: x-hasura-user-id
```
In the above configuration, a permission filter is defined on the `traces`
model. The filter predicate employs the `User` remote relationship, ensuring the
`user_id` field is equal to the `x-hasura-user-id` session variable.
- New `NoAuth` mode in auth config can be used to provide a static role and
session variables to use whilst running the engine, to make getting started
easier.
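For illustration, a minimal `NoAuth` config might look like the following. This is a sketch based on `AuthConfig` `v2`; the role and session variable values are assumptions, not taken from this changelog:
```yaml
kind: AuthConfig
version: v2
definition:
  mode:
    noAuth:
      role: admin
      sessionVariables:
        x-hasura-user-id: "1"
```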
### Fixed
- Fixes a bug where queries with nested relationship selection and filter
predicates fail due to an issue with NDC relationship collection
- Reduced the error raised when using nested arrays in boolean expressions to a
warning, to maintain backwards compatibility
- Fix use of object types as comparison operator arguments by correctly
utilising user-provided OpenDD types.
- Fixes a bug where argument presets set in the DataConnectorLink were sent to
every connector function/procedure regardless of whether the
function/procedure actually declared that argument
- Fixes a bug where argument presets set in the DataConnectorLink were not sent
to connector collections that backed Models
- Fixes a bug where the type of the argument name in the DataConnectorLink's
argument presets was incorrect in the Open DD schema. It was `ArgumentName`
but should have been `DataConnectorArgumentName`
- Fixes a bug where the check to ensure that argument presets in the
DataConnectorLink does not overlap with arguments defined on Models/Commands
was comparing against the Model/Command argument name not the data connector
argument name
### Changed
- Introduced `AuthConfig` `v2`. This new version removes the engine's role
emulation feature (the `allowRoleEmulationBy` field).
- Raise a warning when an invalid data connector capabilities version is used
in a `DataConnectorLink`, and prevent the usage of incompatible data connector
capabilities versions
- Models and commands that do not define all the necessary arguments to satisfy
the underlying data connector collection/function/procedure now cause warnings
to be raised. The warnings will be turned into errors in the future.
## [v2024.07.25]
### Fixed
- Ensured `traceresponse` header is returned
## [v2024.07.24]
### Added
- The metadata resolve step now emits warnings to let users know about
soon-to-be deprecated features and suggest fixes.
### Fixed
- Fixes a bug where boolean expressions passed as arguments would not be
translated into NDC `Expression` types before being sent to the data
connector.
- Fixes a bug where relationships within nested columns would throw an internal
error. While generating NDC relationship definitions, the engine would ignore
columns with nested selection.
- Renamed the `ArgumentPreset` for data connectors to
`DataConnectorArgumentPreset` to avoid ambiguity in generated JSONSchema.
### Changed
- Fixed a bug where command-targeted relationships used the Open DD argument
name instead of the data connector's argument name when querying the data
connector
## [v2024.07.18]
### Added
#### Remote Relationships in Query Filter
We have enhanced the GraphQL query capabilities to support array and object
relationships targeting models backed by different data connectors. This allows
you to specify remote relationships directly within the `where` expression of
your queries.
**Example:** Retrieve a list of customers who have been impacted by cancelled
orders during the current sale event. This data should be filtered based on
order logs stored in a separate data source.
```graphql
query CustomersWithFailedOrders {
Customers(
where: {
OrderLogs: {
_and: [
{ timestamp: { _gt: "2024-10-10" } }
{ status: { _eq: "cancelled" } }
]
}
}
) {
CustomerId
EmailId
OrderLogs {
OrderId
}
}
}
```
By incorporating remote relationships into the where expression, you can
seamlessly query and filter data that spans across multiple data sources, making
your GraphQL queries more versatile and powerful.
### Fixed
- Build-time check to ensure boolean expressions cannot be built over nested
array fields until these are supported.
- Fixed a bug where command-targeted relationships used the OpenDD argument
name instead of the data connector's argument name when querying the data
connector.
## [v2024.07.10]
### Fixed
- Fixes a bug with variable nullability coercion. Specifically, providing a
non-null variable for a nullable field should work, as all non-nullable
variables can be used as nullable variables via "coercion".
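For example, a query like the following should succeed even though `$title` is declared non-null while the field's `_eq` argument accepts a nullable value (the model and field names here are illustrative):
```graphql
query AlbumsByTitle($title: String!) {
  Album(where: { Title: { _eq: $title } }) {
    AlbumId
    Title
  }
}
```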
- Fixes a bug where data connectors without the `foreach` capability were not
allowed to create local relationships
## [v2024.07.04]
### Added
- Query Usage Analytics - usage analytics JSON data is attached to `execute`
span using `internal.query_usage_analytics` attribute
- Added a flag, `--partial-supergraph`, which instructs the metadata resolver to
prune relationships to unknown subgraphs rather than failing to resolve them
#### Boolean Expression Types
A new metadata kind `BooleanExpressionType` can now be defined. These can be
used in place of `ObjectBooleanExpressionType` and
`DataConnectorScalarRepresentation`, and allow more granular control of
comparison operators and how they are used.
```yaml
kind: BooleanExpressionType
version: v1
definition:
  name: album_bool_exp
  operand:
    object:
      type: Album
      comparableFields:
        - fieldName: AlbumId
          booleanExpressionType: pg_int_comparison_exp
        - fieldName: ArtistId
          booleanExpressionType: pg_int_comparison_exp_with_is_null
        - fieldName: Address
          booleanExpressionType: address_bool_exp
      comparableRelationships:
        - relationshipName: Artist
          booleanExpressionType: artist_bool_exp
  logicalOperators:
    enable: true
  isNull:
    enable: true
  graphql:
    typeName: app_album_bool_exp
```
```yaml
kind: BooleanExpressionType
version: v1
definition:
  name: pg_int_comparison_exp
  operand:
    scalar:
      type: Int
      comparisonOperators:
        - name: equals
          argumentType: String!
        - name: _in
          argumentType: "[String!]!"
      dataConnectorOperatorMapping:
        - dataConnectorName: postgres_db
          dataConnectorScalarType: String
          operatorMapping:
            equals: _eq
  logicalOperators:
    enable: true
  isNull:
    enable: true
  graphql:
    typeName: app_postgres_int_bool_exp
```
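With the operator mapping above, a GraphQL query can use the OpenDD operator name `equals`, which the engine maps to the connector's `_eq`. A sketch of such a query, assuming the `Album` model exposes this boolean expression on `AlbumId`:
```graphql
query {
  Album(where: { AlbumId: { equals: 3 } }) {
    Title
  }
}
```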
- Add a flag (`--expose-internal-errors`) to toggle whether to expose internal
errors. ([#759](https://github.com/hasura/v3-engine/pull/759))
#### Aggregates of Array Relationships
Aggregates of array relationships can now be defined by specifying an
`aggregate` in the `Relationship`'s target. Note that this is only supported
when the target of the relationship is a `Model`. You must also specify the
`aggregateFieldName` under the `graphql` section.
```yaml
kind: Relationship
version: v1
definition:
  name: invoices
  sourceType: Customer
  target:
    model:
      name: Invoice
      relationshipType: Array
      aggregate: # New!
        aggregateExpression: Invoice_aggregate_exp
        description: Aggregate of the customer's invoices
  mapping:
    - source:
        fieldPath:
          - fieldName: customerId
      target:
        modelField:
          - fieldName: customerId
  graphql: # New!
    aggregateFieldName: invoicesAggregate
```
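A query against the new aggregate field might then look like this. The `_count` aggregate shown is an assumption about what `Invoice_aggregate_exp` defines; it is not specified in this changelog:
```graphql
query {
  Customer {
    CustomerId
    invoicesAggregate {
      _count
    }
  }
}
```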
- One can now configure the engine to set response headers for GraphQL requests,
if the NDC function/procedure returns headers in its result
#### Field Arguments
Field arguments can now be added to fields of an OpenDD `ObjectType`:
```yaml
kind: ObjectType
version: v1
definition:
  name: institution
  fields:
    - name: id
      type: Int!
    - name: name
      type: String!
      arguments:
        - name: hash
          argumentType: String
        - name: limit
          argumentType: Int
        - name: offset
          argumentType: Int
  graphql:
    typeName: Institution
  dataConnectorTypeMapping:
    - dataConnectorName: custom
      dataConnectorObjectType: institution
      fieldMapping:
        id:
          column:
            name: id
        name:
          column:
            name: name
          argumentMapping:
            hash: hash
            offset: offset
            limit: limit
```
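A query could then pass arguments to the field directly. The root field name and argument values below are illustrative assumptions:
```graphql
query {
  institution {
    id
    name(hash: "md5", limit: 10, offset: 0)
  }
}
```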
### Changed
### Fixed
- The engine now respects the `relation_comparisons` capability when generating
the GraphQL schema for relationship fields in model filters
- The OpenDD schema for `DataConnectorLink` now references the correct version
(v0.1.4) of the NDC schema when using the NDC `CapabilitiesResponse` and
`SchemaResponse` types
## [v2024.06.13]
Initial release.
<!-- end -->
[Unreleased]: https://github.com/hasura/v3-engine/compare/v2024.08.07...HEAD
[v2024.08.07]: https://github.com/hasura/v3-engine/releases/tag/v2024.08.07
[v2024.07.25]: https://github.com/hasura/v3-engine/releases/tag/v2024.07.25
[v2024.07.24]: https://github.com/hasura/v3-engine/releases/tag/v2024.07.24
[v2024.07.18]: https://github.com/hasura/v3-engine/releases/tag/v2024.07.18
[v2024.07.10]: https://github.com/hasura/v3-engine/releases/tag/v2024.07.10
[v2024.07.04]: https://github.com/hasura/v3-engine/releases/tag/v2024.07.04
[v2024.06.13]: https://github.com/hasura/v3-engine/releases/tag/v2024.06.13