### What
Changing all of this code in situ is going to be extremely painful, so
instead we're taking the approach of copying small sections, making them
typecheck with the new set of types, and integrating them bit by bit.
This adds a version of the execution steps in `execute`, using the new
types. The next PR will change the OpenDD pipeline to use them; since
remote predicates aren't implemented there, we can defer implementing
them in the new system.
V3_GIT_ORIGIN_REV_ID: 98d1bfa4753edf7d9a9f0b9d1e617a0a760d5862
### What
If an NDC v0.2.0 connector declares in its capabilities that it doesn't
support nested relationships, then we use the engine's remote joins
functionality to perform selection across nested relationships anyway.
Note that filtering or ordering by nested relationships is still not
supported.
### How
The function that everything uses to determine whether a relationship
should be executed via remote joins or not
(`crates/metadata-resolve/src/stages/object_relationships/mod.rs::relationship_execution_category`)
has been updated to require being told about the `FieldNestedness` of
the field, which describes whether the field is object- or array-nested.
It then looks at the capabilities and requires a remote join if the
connector does not support nested relationships.
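As a sketch, the new input and the decision it drives look something
like this (variant and parameter names are assumptions based on this
description, not the exact definitions in `metadata-resolve`):

```rust
/// Sketch of the nestedness input (names are assumptions, not the exact
/// definition in metadata-resolve).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum FieldNestedness {
    /// The relationship field is not nested inside another field.
    NotNested,
    /// The relationship field sits inside an object-nested field.
    ObjectNested,
    /// The relationship field sits inside an array-nested field.
    ArrayNested,
}

/// A nested relationship must fall back to a remote join when the
/// connector lacks the nested relationships capability.
fn requires_remote_join(
    nestedness: FieldNestedness,
    supports_nested_relationships: bool,
) -> bool {
    nestedness != FieldNestedness::NotNested && !supports_nested_relationships
}
```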
The relationship capabilities stored against relationships in
metadata_resolve are now a copy of the detailed relationship
capabilities from the data connector (see `get_relationship_capabilities`).
In the graphql IR and schema, all usages of
`relationship_execution_category` now need to tell it about the
`FieldNestedness`, so they now keep track of the nestedness of the
relationship field they're looking at.
In `crates/execute/src/plan/relationships.rs`, usages of
`relationship_execution_category` were removed; they were only being
used to double-check that a relationship already planned as local (i.e.
not a remote join) really was local. This is unnecessary, as that
determination was already made when the IR was created and does not
need to be checked again here.
To test these changes, a new test was added:
`crates/engine/tests/execute/relationships/nested/selection_no_nested_capability`.
This test needed a connector that does not support relationships so that
remote joins code paths would be exercised. To support this, the custom
connector now takes a new env var `ENABLE_RELATIONSHIP_SUPPORT` that
explicitly turns on support for relationships. A new instance of the
connector has been added to the `ci.docker-compose.yaml` with this
disabled. This new connector instance is then used to run the test.
V3_GIT_ORIGIN_REV_ID: 476f7ec0469eed698ed9bcf49f73372b6cee0c6c
### What
Let's be a bit more granular about our spans for the JSONAPI endpoint.
<img width="1508" alt="Screenshot 2024-10-29 at 16 46 32"
src="https://github.com/user-attachments/assets/97f67ae3-ac27-4dcd-84a4-083dc6ef5e67">
V3_GIT_ORIGIN_REV_ID: d5c0342c0c9dae2f50e78c7f529a54eeedb541a9
V3_GIT_ORIGIN_REV_ID: 894ba81666bb441e71b8e204b89e5732ac4f1c83
### What
Previously, we only supported String as the type that contained your
session variable value when it was provided from webhooks/JWT/NoAuth.
We then coerced that value into whatever type was actually expected
(e.g. a float) later.
However, when we added support for array-typed session variables (#1221)
we didn't actually allow you to provide a JSON array of values as a
session variable value. You had to provide a string that contained a
JSON-encoded array of values. This meant that webhooks/JWT/NoAuth had to
double JSON-encode their session variables when returning them.
This PR fixes this and makes it so that webhooks/JWT/NoAuth can return
JSON values for session variables and that JSON is respected. So if a
session variable needs to be an array of integers, they can simply
return the JSON array of integers as the value for that session
variable.
### How
Instead of holding a `SessionVariableValue` as a `String`, we now turn
it into an enum with an "unparsed" variant holding a plain string (used
when we don't receive JSON but just a string value, i.e. HTTP headers),
and a "parsed" variant holding a JSON value. When we receive session
variables from webhooks/JWT/NoAuth, we relax the restriction that they
can only return us JSON strings, and instead allow them to return JSON
values, which we put in the new `SessionVariableValue::Parsed` enum
variant. HTTP headers go into `SessionVariableValue::Unparsed`.
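As a sketch, the new enum looks something like this (the variant names
come from this description; the exact definition may differ):

```rust
/// Sketch of the enum described above; serde_json::Value is used here
/// for the parsed representation.
#[derive(Debug, Clone)]
enum SessionVariableValue {
    /// A raw string from a non-JSON source (such as an HTTP header),
    /// parsed into the expected type later.
    Unparsed(String),
    /// A JSON value returned directly by webhooks/JWT/NoAuth.
    Parsed(serde_json::Value),
}
```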
Then, when we go to get the required value from the
`SessionVariableValue` based on the desired type, we either parse it out
of the "unparsed" String, or we expect that the value is already in the
correct form in the "parsed" JSON value. This is the behaviour you will
get if JSON session variables are turned on in the Flags.
If JSON session variables are not turned on, then we expect that only
String session variables (parsed or unparsed) are provided from
headers/webhooks/JWT/NoAuth, and so we run the old logic of always
expecting a String and parsing the correct value out of it.
V3_GIT_ORIGIN_REV_ID: b6734ad5443b7d68065f91aea71386c893aa7eba
I am sorry @rakeshkky, we will merge this again once we've done a
release, I promise.
Reverts hasura/v3-engine#1254
V3_GIT_ORIGIN_REV_ID: 74236d25d4e84658717531a55d87c8d3371b553c
V3_GIT_ORIGIN_REV_ID: 3458ea1e5ebe2d7387a7e503f957e5d55f225599
Reverts hasura/v3-engine#1225
According to @danieljharvey, websockets don't currently work in
multitenant and we want to do a release using a commit after this one.
V3_GIT_ORIGIN_REV_ID: 2d9239ab3203d5acbedef1cd86644861a99c99b2
### What
- Remove `enable-subscriptions` from unstable features.
- Expose the subscriptions-related OpenDD metadata in the JSON schema.
### How
- Update the unstable-feature-related types by dropping subscriptions.
- Remove the `hidden=true` OpenDD attribute (and other related
attributes) from the subscription OpenDD metadata.
- Update the JSON schema files.
V3_GIT_ORIGIN_REV_ID: 0aa763f516d394aab2e375da0817d0e60228c9b2
### What
Make sure we trash old Postgres DB data when refreshing Docker, so that
we start with the newest data.
V3_GIT_ORIGIN_REV_ID: 25416e918431a915fece6fe2d9478fa722f6a7c5
### What
Write tests to confirm websocket connection behavior in conjunction with
the
[graphql-ws](https://github.com/enisdenjo/graphql-ws/blob/master/PROTOCOL.md)
subprotocol.
### How
Test the websocket by spinning up a server in an async tokio task,
using `tokio-tungstenite` as the websocket client (sketched below).
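A minimal sketch of that setup (the route, port handling, and server
wiring here are assumptions, not the engine's actual test code):

```rust
use tokio_tungstenite::connect_async;

#[tokio::test]
async fn websocket_connects() {
    // Spin up the server in an async tokio task on an ephemeral port.
    let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
    let addr = listener.local_addr().unwrap();
    tokio::spawn(async move {
        // Serve the engine's router on `listener` here, e.g. with axum.
        let _ = listener;
    });

    // Use tokio-tungstenite as the websocket client.
    let (_socket, response) = connect_async(format!("ws://{addr}/graphql"))
        .await
        .expect("websocket handshake failed");
    assert_eq!(response.status(), 101); // HTTP 101: Switching Protocols

    // ... then drive the graphql-ws subprotocol:
    // send `connection_init`, expect `connection_ack`, then `subscribe`.
}
```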
V3_GIT_ORIGIN_REV_ID: 32c19298b6a5b23649b22d8d820ef8d47ef1d293
### What
This will save me minutes of time a week.
V3_GIT_ORIGIN_REV_ID: d441b07fcf69b30c59902198e025669df410346c
### What
Our test metadata has a lot of objects formatted like this:
```jsonc
{
  "definition": {
    // a lot of stuff here
  },
  "version": "v1",
  "kind": "ObjectType"
}
```
This is very unhelpful when trying to read the metadata because I want
to know what kind the object is before I see the definition.
This PR adds a small jq script that reorders the properties in the
existing test JSON metadata so that they appear in the order `kind`,
`version`, `definition` (like in HML files).
This is literally just a formatting change, nothing has _actually_
changed.
### How
There's a jq script in the justfile that does this.
V3_GIT_ORIGIN_REV_ID: a56f4afee33c3074e564d9cbb50368932ee5275e
### What
Let's add a test!
(and check it still works 👍 )
<img width="405" alt="Screenshot 2024-09-10 at 13 05 01"
src="https://github.com/user-attachments/assets/6a49feaa-bd0b-4137-a9d5-a1e1336d9fa6">
### How
Glob a folder for test files, parse them and run them against the
functions.
V3_GIT_ORIGIN_REV_ID: cc3e0d8cfb6f5eaa58cf72dab97e7220a57e7673
### What
Quick quality-of-life improvement for making demos: now we can run:
```
just run ./my-demo-metadata.json
```
And this will start all the containers and run engine with the specified
file.
### How
Justfile arguments
V3_GIT_ORIGIN_REV_ID: 2dc47a06ab85d815393047714e76342fb4859db2
### What
If `institution` is a big JSON document, and `staff` is an array of
objects inside it, we can now filter `institutions` based on matches
that exist within that array.
```graphql
query MyQuery {
  where_does_john_hughes_work: InstitutionMany(
    where: { staff: { last_name: { _eq: "Hughes" } } }
  ) {
    id
    location {
      city
      campuses
    }
  }
}
```
This query would return us details of `Chalmers University of
Technology`, where `John Hughes` is a member of staff.
### How
- Record the type of fields in `metadata_resolve`
- If we find an array one, wrap the inner predicate in an `EXISTS`-type
constructor (see the sketch below).
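An illustrative sketch of that wrapping step (all types here are
simplified stand-ins, not the engine's actual IR):

```rust
// Simplified stand-ins for the resolved field types and predicate IR.
enum FieldType {
    Scalar,
    Object,
    Array(Box<FieldType>),
}

enum Expression {
    Comparison { field: String, operator: String, value: String },
    Exists { predicate: Box<Expression> },
}

// A predicate against an array of nested objects becomes an EXISTS over
// the array's elements; everything else passes through unchanged.
fn wrap_nested_predicate(field_type: &FieldType, inner: Expression) -> Expression {
    match field_type {
        FieldType::Array(_) => Expression::Exists {
            predicate: Box::new(inner),
        },
        _ => inner,
    }
}
```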
V3_GIT_ORIGIN_REV_ID: 6f7c51961f3189d7068b4ddbed9fcc821a76ae7d
### What
Put subscriptions API behind unstable feature.
V3_GIT_ORIGIN_REV_ID: 4eb454009c2e9658d899efbb81488faac3674e48
### What
Updating the various Postgres test schemas we use in execute tests is
quite hard, and most of them have been hand-modified to match certain
tests.
This meant you could not update the Postgres test SQL and propagate the
schema changes in all tests, which is frustrating.
But now we can!
### How
- Modifies the script that updates all `custom_connector` schemas to
make it work for Postgres too
- Runs it, updating all `DataConnectorLink` for `ndc-postgres` to match
the output of `/schema` from `ndc-postgres`
- Fixes lots of tests that were all using slightly hand-modified
`ndc-postgres` schema output. Mostly this involves swapping `String` and
`varchar` for `text`, and `Int` for `int4`.
V3_GIT_ORIGIN_REV_ID: d695c48d1ae04d51a17bf54786782830ddb1d683
### What
Use powerful constructs such as loops to avoid duplication in
`flake.nix`.
We are able to iterate over the list of binaries and list of target
systems and produce a list of packages from there.
### How
We generate a `targets` tree (see the `flake.nix` file comments for
details), and then generate the list of packages from there.
When using `nix run`, you'll get a package, which is usually what you
want.
The scripts that build Docker images run `nix build
'.#targets.x86_64-linux.<binary>.<arch>.docker'`, which is verbose but
explicit.
V3_GIT_ORIGIN_REV_ID: 2a13fb31a41829f9804dfdb7c1a51a9e54e0922e
### What
We want to use `CompatibilityConfig` to configure turning warnings into
errors after a certain date, and to make sure we don't break old builds.
This allows passing a path to an optional compatibility config file to
the engine, and scaffolds how we'll map this config to metadata-resolve
options.
### How
Add a flag to `v3-engine`, parse the file if flag is set. Test it by
adding static test file and using it with `just watch` and `just run`.
V3_GIT_ORIGIN_REV_ID: 972c67ae29905b9ce1bb57e150f4cfcfd6a069ef
### What
Same as with `ndc-postgres` and `ndc-postgres-multitenant`, we call a
`just` command in CI, rather than using an actions plugin.
V3_GIT_ORIGIN_REV_ID: 96daa602e4647fdcd641d79539df66a264914efc
### What
This means the format script still works even if `prettier` isn't
installed globally.
### How
If it's already installed, this is a no-op. Otherwise, this prompts the
user to download a temporary version of the package to run the command.
V3_GIT_ORIGIN_REV_ID: 6d10387a0a6d5cdc33b748c0533b7bb276d322ac
### What
We now support cloud-only crates, which are not open-sourced.
### How
Anything in `crates/cloud` will not be synced with the _graphql-engine_
repository.
In order to facilitate this, we generate and commit a
Cargo.toml/Cargo.lock pair with the cloud-only sections removed. We also
transform the justfile to remove this code.
This includes only a test repository, to ensure that nothing private is
synced.
When this is merged, it should not result in a commit to the
_graphql-engine_ repository.
V3_GIT_ORIGIN_REV_ID: 038839acdf3a97da05bbd4b6278171cc12e7cd71
### What
Try not to pollute the CI cache with the wrong thing.
### How
1. Remove the package selector as we don't care about production builds
here any more.
2. Tell the test build to not save the cache so there is no write
contention.
3. Downgrade `mockito` to v1.4 to reduce the size of the cache, because
v1.5 depends on `http` v1.
V3_GIT_ORIGIN_REV_ID: 2109c8c7db5d80e3b2c29d2949423e8faebd10b2
### What
- Added `AuthConfig` v2 config example in
`static/auth/auth_config_v2.json`
- Moved the existing `auth_config.json` to `static/auth/`
- Removed unused `pre_plugins.json`
If one wants to start the engine with a v2 of AuthConfig,
`static/auth/auth_config_v2.json` can be used.
V3_GIT_ORIGIN_REV_ID: 471f8ae43ab02c2182457804a24b8445bb41f06c
### What
We have a bunch of local development infra for building the engine
inside a Docker container. This is helpful for Buildkite, which doesn't
come with stuff like `cargo` preinstalled. We're not using Buildkite
anymore, so let's remove it.
V3_GIT_ORIGIN_REV_ID: b4b7679aab5b14081288df25d139944f160a61fe
### What
These take ages to run, slow development down, and don't offer the
value they should. We should be benchmarking the engine, but not like
this.
### How
Remove benchmarks CI job from Buildkite.
V3_GIT_ORIGIN_REV_ID: 30a2c9d5f6ba09f5319a07fe394db8becaa16b8e
This PR updates as many tests as possible that use the custom connector
so that the tests run over two versions of the custom connector:
1. The custom connector in the repo, which currently speaks `ndc_models`
v0.2.x
2. The custom connector from the past (commit ), which is the last
version to speak `ndc_models` v0.1.x
This helps us test both the NDC v0.1.x and v0.2.x code paths. When the
postgres connector upgrades to v0.2.x, we can use the same approach as
in this PR to get the tests to run over multiple versions of the
postgres connector too, for much better coverage. This approach with the
custom connector will become less useful over time as the v0.1.x
connector is not updated and will diverge in data from the v0.2.x
connector. The postgres connector is likely to be longer-lasting, as it
is more stable.
The basic test used for `execute` integration tests is
`test_execution_expectation` (in `crates/engine/tests/common.rs`) and it
has been extended into a version called
`test_execution_expectation_for_multiple_ndc_versions` that takes
metadata on a per NDC version basis and then runs the test multiple
times, once for each NDC version. This allows one to swap out the
DataConnectorLink involved in the test to a different one that points at
either the v0.1.x or v0.2.x versions of the connector. The assertion is
that both connectors should produce the same results, even if they talk
a different version of the NDC protocol. As each version runs, we
`println!` the version so that if the test fails you can look in stdout
for the test and see which one was executing when it failed.
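A hypothetical shape for the extended helper (the real one lives in
`crates/engine/tests/common.rs`; the signature and names here are
illustrative):

```rust
use std::collections::BTreeMap;

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum NdcVersion {
    V01,
    V02,
}

// Runs the test once per NDC version, swapping in the version-specific
// DataConnectorLink metadata each time.
fn test_execution_expectation_for_multiple_ndc_versions(
    test_path: &str,
    common_metadata_paths: &[&str],
    metadata_per_ndc_version: BTreeMap<NdcVersion, Vec<&str>>,
) {
    for (ndc_version, version_metadata_paths) in metadata_per_ndc_version {
        // Print the version so stdout reveals which run was executing
        // if the test fails.
        println!("Testing {test_path} with NDC version {ndc_version:?}");
        // ... combine `common_metadata_paths` with `version_metadata_paths`,
        // run the request, and assert the response matches the golden file ...
        let _ = (common_metadata_paths, version_metadata_paths); // silence unused warnings in this sketch
    }
}
```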
Tests that use the custom connector now use
`test_execution_expectation_for_multiple_ndc_versions` and run across
both connector versions. Some tests couldn't be used across both
versions, as the data between the two versions has changed. Some tests
were modified to avoid the changed data so as to support running across
both versions. Any tests that use `test_execution_expectation_legacy`
don't run across both versions because those tests aren't backed by the
same test implementation as
`test_execution_expectation_for_multiple_ndc_versions`.
Unfortunately the custom connector doesn't use the standard connector
SDK, so it doesn't support `HASURA_CONNECTOR_PORT`. This means that the
old connector is stuck on 8101. To work around this, I've moved the
current connector port to 8102 instead. Technically we might be able to
use docker to remap the ports, but then this binds us into always
running the connectors in docker in order to move their ports around, so
I avoided that approach.
Completes APIPG-703
V3_GIT_ORIGIN_REV_ID: fb0e410ddbee0ea699815388bc63584d6ff5dd70
### What
We've had our CI mixed between Github and Buildkite for a while; it's
time to commit. The first step is moving the "tests" step to Github
Actions.
### How
This PR:
- Moves the `test` step to Github Actions
- Creates a new `custom_connector.Dockerfile` which builds only the
custom connector, more quickly.
- Changes the metadata tests to use `localhost` instead of their
Docker-internal names (i.e. `custom_connector` or `postgres_connector`) -
this is because the tests are now run from outside Docker
- Removes the `test` Buildkite step
It does not:
- Remove the code coverage or benchmarks steps from Buildkite
- Tidy up `justfile` or Dockerfiles
---------
Co-authored-by: Philip Lykke Carlsen <plcplc@gmail.com>
V3_GIT_ORIGIN_REV_ID: a67534ebc1634a24b48d2620c45003221852e199
### What
This PR enables the use of OpenTelemetry Baggage. Every bit of baggage
is then replicated on every span.
The current implementation does not actually set any baggage itself - it
only relays and outputs what it's getting.
Crafting a request with a `baggage` header set:
![image](https://github.com/user-attachments/assets/f2974398-370a-4e8c-8761-692cfc5682f6)
Has it propagated (here, to `dev-auth-webhook`) and stamped onto every
span:
![image](https://github.com/user-attachments/assets/6661c41f-56be-4edd-9027-e88eb816f1e7)
### How
This PR actually makes the engine and auth-hook use the globally
specified propagators (before, they would only use an obscure,
concrete, re-exported one from opentelemetry_contrib), and adds the
`BaggagePropagator` to the list.
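A sketch of that wiring (using the `opentelemetry_sdk` propagator types;
the engine's actual setup may differ):

```rust
use opentelemetry::global;
use opentelemetry_sdk::propagation::{
    BaggagePropagator, TextMapCompositePropagator, TraceContextPropagator,
};

// Register a composite of the W3C trace-context and baggage propagators
// as the global text-map propagator.
fn init_propagators() {
    global::set_text_map_propagator(TextMapCompositePropagator::new(vec![
        Box::new(TraceContextPropagator::new()),
        Box::new(BaggagePropagator::new()),
    ]));
}
```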
It also adds a `SpanProcessor` which outputs the baggage as span
attributes. Currently it outputs all Baggage entries, but can be made
more specific in the future if we want to treat Baggage differently.
V3_GIT_ORIGIN_REV_ID: 3b5b8604b624c0b90c192e68b3b57fab7ca9b63e
This PR adds true support for ndc_models v0.2.0 to v3-engine. Note that
v0.2.0 is not finalized yet, so we're pointing at v0.2.0-rc0. The
support still comes via the migration methodology, where v0.2.x ndc
models are downgraded to v0.1.x to support backwards compatibility. In
the future we want to remove this and have the engine generate the
different versioned ndc models separately instead of performing a
migration.
The ndc_models_v01 crate reference has been bumped to the official
v0.1.5 version, which brings the newtypes to the v0.1.x version. The
ndc_models crate reference is now on v0.2.0-rc0.
The custom connector has been updated to support ndc-spec v0.2.0. All
tests that talk to the custom connector have been updated with its
latest v0.2.0 schema/capabilities.
In `metadata_resolve` the v01->v02 schema/capabilities migration code
has been updated to handle the new v0.2.0 types. This includes inferring
v0.2.0 capabilities from what was possible in v0.1.x.
In `execution`, the migration code has been updated to deal with the new
v0.1.5 newtypes and v0.2.0 types. This means there are now cases where a
downgrade is impossible and produces an error (see `NdcDowngradeError`
in `execute::ndc::migration`). A bug has also been fixed where NDC
expressions in arguments were not being serialized to the correct NDC
version.
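As a toy illustration of the downgrade-may-fail shape (the real types
live in `execute::ndc::migration`; everything below is simplified):

```rust
#[derive(Debug)]
enum NdcDowngradeError {
    Unsupported(&'static str),
}

enum ExpressionV02 {
    Equals { column: String, value: String },
    // Stand-in for a construct introduced in ndc-spec v0.2.0 that has
    // no v0.1.x equivalent.
    ArrayComparison { column: String },
}

enum ExpressionV01 {
    Equals { column: String, value: String },
}

// Downgrading succeeds for constructs that exist in both versions and
// errors for v0.2.0-only ones.
fn downgrade(expr: ExpressionV02) -> Result<ExpressionV01, NdcDowngradeError> {
    match expr {
        ExpressionV02::Equals { column, value } => {
            Ok(ExpressionV01::Equals { column, value })
        }
        ExpressionV02::ArrayComparison { .. } => Err(NdcDowngradeError::Unsupported(
            "array comparisons require NDC v0.2.x",
        )),
    }
}
```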
V3_GIT_ORIGIN_REV_ID: 5b4afcde64c307b2bd7c985c588d6c74d9623a0f
This PR fixes the custom connector, whose schema endpoint wasn't
actually returning correct output (it was missing some
functions/procedures, etc). Then it updates all tests that actually talk
to the custom connector with the latest version of its
schema/capabilities in their DataConnectorLink.
This test update is done by a new script added to the justfile that
finds and patches all metadata json files and inserts the new schema and
capabilities after reading them from the custom connector running in
docker.
V3_GIT_ORIGIN_REV_ID: f1825a6f74ddcb6c01198fe4a41de6b4fc0bf533
### What
# BooleanExpressionType
A new metadata kind `BooleanExpressionType` can now be defined. These
can be used in place of `ObjectBooleanExpressionType` and
`DataConnectorScalarRepresentation`, and allow more granular control of
comparison operators and how they are used.
The old metadata types still work, but will eventually be deprecated.
```yaml
kind: BooleanExpressionType
version: v1
definition:
  name: album_bool_exp
  operand:
    object:
      type: Album
      comparableFields:
        - fieldName: AlbumId
          booleanExpressionType: pg_int_comparison_exp
        - fieldName: ArtistId
          booleanExpressionType: pg_int_comparison_exp_with_is_null
        - field: Address
          booleanExpressionType: address_bool_exp
      comparableRelationships:
        - relationshipName: Artist
          booleanExpressionType: artist_bool_exp
  logicalOperators:
    enable: true
  isNull:
    enable: true
  graphql:
    typeName: app_album_bool_exp
```
```yaml
kind: BooleanExpressionType
version: v1
definition:
  name: pg_int_comparison_exp
  operand:
    scalar:
      type: Int
      comparisonOperators:
        - name: equals
          argumentType: String!
        - name: _in
          argumentType: "[String!]!"
      dataConnectorOperatorMapping:
        - dataConnectorName: postgres_db
          dataConnectorScalarType: String
          operatorMapping:
            equals: _eq
  logicalOperators:
    enable: true
  isNull:
    enable: true
  graphql:
    typeName: app_postgres_int_bool_exp
```
### How
Remove feature flag, unhide JsonSchema items, fix a few missing bits of
JsonSchema the tests didn't warn us about before.
V3_GIT_ORIGIN_REV_ID: dd3055d926fdeb7446cd57085679f2492a4358a1
### What
In https://github.com/hasura/v3-engine/pull/750 and
https://github.com/hasura/v3-engine/pull/754 we added a number of checks
for boolean expression types that can be run once we know the data
source they will be used against.
Previously these checks were only run for model `where` clauses; this
pull request moves them into a shared folder and also runs them when
boolean expression types are used as model or command arguments.
### How
Add a new `arguments` resolve step. It doesn't fit inside the usual
`commands` or `models` steps, as we need the data sources for both to
be validated, plus access to all the resolved `relationships` outputs.
V3_GIT_ORIGIN_REV_ID: f713659962e3f20b2c85f287b6c362fb52ffa1ed
### What
This PR adds the ability to include internal errors in API responses via
a command line argument `--expose-internal-errors`.
The default behavior remains not to show the contents of internal error
messages.
V3_GIT_ORIGIN_REV_ID: 11c47286d3fbceeda71df3a224853633aeea8902
We pretend to handle directives, but we simply set them to an empty map
and then don't actually use them at all.
This is completely unused code that can be removed.
V3_GIT_ORIGIN_REV_ID: a86dc43acc2de2f2d3f78f0c8ebc53ce9f5bde8c
## Description
We can use Nix to build Docker images, which gives us a few advantages:
1. the images will be cached a little better
2. aarch64 builds become easy
3. Samir is happy because the Nixification continues
## Changelog
### Product
- [x] community-edition
- [ ] cloud
### Type
- [ ] highlight
- [x] enhancement
- [ ] bugfix
- [ ] behaviour-change
- [ ] performance-enhancement
- [ ] security-fix
### Changelog entry
The v3 engine and dev-auth-webhook Docker images are now published for
both x86_64 (`amd64`) and aarch64 (`arm64`) architectures.
Co-Authored-By: Philip Carlsen <philip@hasura.io>
Co-Authored-By: Gil Mizrahi <gil@hasura.io>
V3_GIT_ORIGIN_REV_ID: ae6fec45dee62a21f03b5258b57d841a16542c72
## Description
We'd like to be able to test new WIP experimental features. This adds
an `UNSTABLE_FEATURES` env var / command line arg that can be passed a
comma-separated list of names.
Currently we accept `UNSTABLE_FEATURES=enable-boolean-expression-types`
but in future users could pass
`UNSTABLE_FEATURES=some-fancy-feature,other-feature,great`.
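A sketch of that parsing (type and function names here are illustrative,
not the engine's actual CLI code):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum UnstableFeature {
    EnableBooleanExpressionTypes,
}

// Split the comma-separated list and reject unknown feature names.
fn parse_unstable_features(raw: &str) -> Result<Vec<UnstableFeature>, String> {
    raw.split(',')
        .map(|name| match name.trim() {
            "enable-boolean-expression-types" => {
                Ok(UnstableFeature::EnableBooleanExpressionTypes)
            }
            other => Err(format!("unknown unstable feature: {other}")),
        })
        .collect()
}
```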
## Changelog
### Product
- [x] community-edition
- [x] cloud
### Type
- [ ] highlight
- [x] enhancement
- [ ] bugfix
- [ ] behaviour-change
- [ ] performance-enhancement
- [ ] security-fix
### Changelog entry
Add `UNSTABLE_FEATURES` environment variable
V3_GIT_ORIGIN_REV_ID: 3562c1341d4ea3a512626110dbd2b055425c1d60
Sometimes you just want to run a specific test.
The arguments are passed straight to the test runner: `cargo nextest` if
you have it installed, and `cargo test` if you do not. These accept
different arguments so the user will need to know which one they are
using.
V3_GIT_ORIGIN_REV_ID: 9cd6f9d770899e0bb3aa4537008eedba052818d7
`test_each::path` always gives us a `PathBuf` even if we don't want one,
so we need to suppress the associated warning.
V3_GIT_ORIGIN_REV_ID: 4f2bb29122df6979ad378227fb88e4632d3551f7
## Description
Following the `metadata-resolve` and `schema` crates, this splits out
`execute`, the largest folder in `engine`. Undoubtedly this could be
split further.
Functional no-op.
V3_GIT_ORIGIN_REV_ID: c272908153f78212d1f5dd58819707ac3cbcd439
## Description
We have a separate copy of the code for resolving type predicates that
we use for object types (when resolving BooleanExpressions), because the
previous code was heavily tied to the Model it used. We'd like to unify
that code again, so the first step is re-implementing relationships in
`resolve_model_predicate_with_type`.
## Changelog
### Product
- [X] community-edition
- [X] cloud
### Type
- [ ] highlight
- [X] enhancement
- [ ] bugfix
- [ ] behaviour-change
- [ ] performance-enhancement
- [ ] security-fix
### Changelog entry
Allow boolean expressions to use relationships
V3_GIT_ORIGIN_REV_ID: 6225a4ab752b71df3cdfd0982bf2107ca39f4940
## Description
We'd like to speed up creation of all these Docker images, so this adds
`dev-auth-webhook` to the Nix flake. Functional no-op.
---------
Co-authored-by: Samir Talwar <samir.talwar@hasura.io>
V3_GIT_ORIGIN_REV_ID: 384eb467b2fe7fba1644f5b4cc6224cdc043ce01
## Description
When regenerating goldenfiles, the tests' formatting goes out of order.
This adds a subsequent formatting step afterwards.
V3_GIT_ORIGIN_REV_ID: 5cddd4cc4ec6c1d684b1841be9347f1fcaa3aade
## Description
1. I've moved the architecture information we had in `CONTRIBUTING.md`
to a separate document `docs/architecture.md` so we can evolve both
separately in the future.
2. I've introduced a couple of sub-directories, `utils` and `auth`, for
supporting crates that are not the core functionality of the engine, so
that it is easier to find the most relevant crates.
New structure:
```
crates
├── auth
│   ├── dev-auth-webhook
│   ├── hasura-authn-core
│   ├── hasura-authn-jwt
│   └── hasura-authn-webhook
├── custom-connector
├── engine
├── lang-graphql
├── metadata-schema-generator
├── open-dds
└── utils
    ├── opendds-derive
    ├── recursion_limit_macro
    └── tracing-util
```
V3_GIT_ORIGIN_REV_ID: e0e9394da2fcd911f329c48107a76f8492fa304c
I found myself wanting to rewrite JSON files with `sed`. The problem is,
then I want to run a formatter over them afterwards, and this will
change the whole file, not just the area I touched.
I would like to propose the nuclear option in remedying this: format
everything now. This is a very large change that should make it easier
to keep files to a consistent format in the future.
I have chosen to use Prettier for this because (a) it has a useful
`--write` command and (b) it also does GraphQL, Markdown, YAML, etc.
I've elected to exclude two sets of files:
1. `crates/custom-connector/data/*.json`, because they are actually
multiple JSON objects, one per line, which Prettier cannot parse.
2. `crates/lang-graphql/tests/**/*.graphql`, because it contains invalid
GraphQL, and the parser is intended to work with strangely-formatted
GraphQL.
The main changes are standardizing whitespace, adding a newline at the
end of files, and putting JSON arrays on one line when they fit.
V3_GIT_ORIGIN_REV_ID: 92d4a535c34a3cc00721e8ddc6f17c5717e8ff76
## Description
In order to test things quicker, we'd like to be able to build the
custom connector and friends in Nix, and then use those containers when
running tests. The first step here is to be able to build Docker
containers in Nix, and add a CI job to ensure it still works.
Then we'll move on to publishing and using these images.
No-op build times:
<img width="336" alt="Screenshot 2024-04-25 at 15 53 56"
src="https://github.com/hasura/v3-engine/assets/4729125/47cbc0c5-6e54-4583-aa01-0528d4a21080">
Functional no-op.
V3_GIT_ORIGIN_REV_ID: 8f9d609e26cdd3b0801e61fd361c241ad504dcdf