### What
We now support cloud-only crates, which are not open-sourced.
### How
Anything in `crates/cloud` will not be synced with the _graphql-engine_
repository.
In order to facilitate this, we generate and commit a
Cargo.toml/Cargo.lock pair with the cloud-only sections removed. We also
transform the justfile to remove this code.
For now, this syncs only to a test repository, to ensure that nothing
private is synced.
When this is merged, it should not result in a commit to the
_graphql-engine_ repository.
V3_GIT_ORIGIN_REV_ID: 038839acdf3a97da05bbd4b6278171cc12e7cd71
### What
Try not to pollute the CI cache with the wrong thing.
### How
1. Remove the package selector as we don't care about production builds
here any more.
2. Tell the test build to not save the cache so there is no write
contention.
3. Downgrade `mockito` to v1.4 to reduce the size of the cache, because
v1.5 depends on `http` v1.
V3_GIT_ORIGIN_REV_ID: 2109c8c7db5d80e3b2c29d2949423e8faebd10b2
### What
- Added `AuthConfig` v2 config example in
`static/auth/auth_config_v2.json`
- Moved the existing `auth_config.json` to `static/auth/`
- Removed unused `pre_plugins.json`
To start the engine with v2 of AuthConfig, use
`static/auth/auth_config_v2.json`.
V3_GIT_ORIGIN_REV_ID: 471f8ae43ab02c2182457804a24b8445bb41f06c
### What
We have a bunch of local development infra for building the engine
inside a Docker container. This was helpful for Buildkite, which doesn't
come with things like `cargo` preinstalled. We're not using Buildkite
anymore, so let's remove it.
V3_GIT_ORIGIN_REV_ID: b4b7679aab5b14081288df25d139944f160a61fe
### What
These take ages to run, slow development down, and don't offer the value
they should. We should be benchmarking the engine, but not like this.
### How
Remove benchmarks CI job from Buildkite.
V3_GIT_ORIGIN_REV_ID: 30a2c9d5f6ba09f5319a07fe394db8becaa16b8e
This PR updates as many tests as possible that use the custom connector
so that the tests run over two versions of the custom connector:
1. The custom connector in the repo, which currently speaks `ndc_models`
v0.2.x
2. The custom connector from the past (commit ), which is the last
version to speak `ndc_models` v0.1.x
This helps us test both the NDC v0.1.x and v0.2.x code paths. When the
postgres connector upgrades to v0.2.x, we can use the same approach as
in this PR to get the tests to run over multiple versions of the
postgres connector too, for much better coverage. This approach with the
custom connector will become less useful over time as the v0.1.x
connector is not updated and will diverge in data from the v0.2.x
connector. The postgres connector is likely to be longer-lasting, as it
is more stable.
The basic test used for `execute` integration tests is
`test_execution_expectation` (in `crates/engine/tests/common.rs`) and it
has been extended into a version called
`test_execution_expectation_for_multiple_ndc_versions` that takes
metadata on a per-NDC-version basis and then runs the test multiple
times, once for each NDC version. This allows one to swap out the
DataConnectorLink involved in the test to a different one that points at
either the v0.1.x or v0.2.x versions of the connector. The assertion is
that both connectors should produce the same results, even if they talk
a different version of the NDC protocol. As each version runs, we
`println!` the version so that if the test fails you can look in stdout
for the test and see which one was executing when it failed.
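For illustration, here's a rough sketch of the shape of that helper; the
`NdcVersion` enum and `run_single_execution_test` below are stand-ins
invented for this example, and the real code lives in
`crates/engine/tests/common.rs`:

```rust
use std::collections::BTreeMap;

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum NdcVersion {
    V01,
    V02,
}

// Stand-in for the single-run golden test: load the metadata, run the
// request, compare against the expected output. Elided here.
fn run_single_execution_test(_test_path: &str, _metadata_paths: &[&str]) -> anyhow::Result<()> {
    Ok(())
}

// Run the same golden test once per NDC version, swapping in the
// version-specific metadata (which carries the DataConnectorLink pointing at
// either the v0.1.x or the v0.2.x connector).
fn test_execution_expectation_for_multiple_ndc_versions(
    test_path: &str,
    common_metadata_paths: &[&str],
    metadata_per_ndc_version: BTreeMap<NdcVersion, Vec<&'static str>>,
) -> anyhow::Result<()> {
    for (ndc_version, version_metadata_paths) in metadata_per_ndc_version {
        // Printed so that a failing run shows which NDC version was executing.
        println!("Testing with the connector speaking NDC {ndc_version:?}");
        let mut metadata_paths = common_metadata_paths.to_vec();
        metadata_paths.extend(version_metadata_paths);
        run_single_execution_test(test_path, &metadata_paths)?;
    }
    Ok(())
}
```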
Tests that use the custom connector now use
`test_execution_expectation_for_multiple_ndc_versions` and run across
both connector versions. Some tests could not be run across both
versions because the data has changed between them; other tests were
modified to avoid the changed data so that they could run across both
versions. Any tests that use `test_execution_expectation_legacy`
don't run across both versions because those tests aren't backed by the
same test implementation as
`test_execution_expectation_for_multiple_ndc_versions`.
Unfortunately the custom connector doesn't use the standard connector
SDK, so it doesn't support `HASURA_CONNECTOR_PORT`. This means that the
old connector is stuck on 8101. To work around this, I've moved the
current connector port to 8102 instead. Technically we might be able to
use Docker to remap the ports, but that would lock us into always
running the connectors in Docker in order to move their ports around, so
I avoided that approach.
Completes APIPG-703
V3_GIT_ORIGIN_REV_ID: fb0e410ddbee0ea699815388bc63584d6ff5dd70
### What
We've had our CI mixed between GitHub and Buildkite for a while; it's
time to commit. The first step is moving the "tests" step to GitHub
Actions.
### How
This PR:
- Moves the `test` step to GitHub Actions
- Creates a new `custom_connector.Dockerfile` which builds only the
custom connector, more quickly.
- Changes the metadata tests to use `localhost` instead of their Docker
internal names (i.e. `custom_connector` or `postgres_connector`); this
is because the tests are now run from outside Docker
- Removes the `test` Buildkite step
It does not:
- Remove the code coverage or benchmarks steps from Buildkite
- Tidy up `justfile` or Dockerfiles
---------
Co-authored-by: Philip Lykke Carlsen <plcplc@gmail.com>
V3_GIT_ORIGIN_REV_ID: a67534ebc1634a24b48d2620c45003221852e199
### What
This PR enables the use of Opentelemetry Baggage. Every bit of baggage
is then replicated on every span.
The current implementation does not actually set any baggage itself; it
only relays and outputs what it receives.
Crafting a request with a `baggage` header set:
![image](https://github.com/user-attachments/assets/f2974398-370a-4e8c-8761-692cfc5682f6)
Has it propagated (here, to `dev-auth-webhook`) and stamped onto every
span:
![image](https://github.com/user-attachments/assets/6661c41f-56be-4edd-9027-e88eb816f1e7)
### How
This PR actually makes the engine and auth-hook use the globally
specified propagators (before, they would only use an obscure, concrete,
re-exported one from opentelemetry_contrib), and adds the
`BaggagePropagator` to the list.
It also adds a `SpanProcessor` which outputs the baggage as span
attributes. Currently it outputs all Baggage entries, but can be made
more specific in the future if we want to treat Baggage differently.
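For reference, a minimal sketch of the two pieces, assuming the
`opentelemetry` / `opentelemetry_sdk` crates; exact trait signatures vary
a little between releases, so treat this as illustrative rather than the
actual implementation:

```rust
use opentelemetry::{baggage::BaggageExt, global, trace::Span as _, Context, KeyValue};
use opentelemetry_sdk::{
    propagation::{BaggagePropagator, TextMapCompositePropagator, TraceContextPropagator},
    trace::{Span, SpanProcessor},
};

// Register the globally specified propagators, including baggage.
fn init_propagators() {
    global::set_text_map_propagator(TextMapCompositePropagator::new(vec![
        Box::new(TraceContextPropagator::new()),
        Box::new(BaggagePropagator::new()),
    ]));
}

// A processor that stamps every baggage entry onto each span as it starts.
#[derive(Debug)]
struct BaggageSpanProcessor;

impl SpanProcessor for BaggageSpanProcessor {
    fn on_start(&self, span: &mut Span, cx: &Context) {
        for (key, (value, _metadata)) in cx.baggage() {
            span.set_attribute(KeyValue::new(key.clone(), value.clone()));
        }
    }

    fn on_end(&self, _span: opentelemetry_sdk::export::trace::SpanData) {}

    fn force_flush(&self) -> opentelemetry::trace::TraceResult<()> {
        Ok(())
    }

    fn shutdown(&self) -> opentelemetry::trace::TraceResult<()> {
        Ok(())
    }
}
```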
V3_GIT_ORIGIN_REV_ID: 3b5b8604b624c0b90c192e68b3b57fab7ca9b63e
This PR adds true support for ndc_models v0.2.0 to v3-engine. Note that
v0.2.0 is not finalized yet, so we're pointing at v0.2.0-rc0. The
support still comes via the migration methodology, where v0.2.x ndc
models are downgraded to v0.1.x to support backwards compatibility. In
the future we want to remove this and have the engine generate the
different versioned ndc models separately instead of performing a
migration.
The ndc_models_v01 crate reference has been bumped to the official
v0.1.5 version, which brings the newtypes to the v0.1.x version. The
ndc_models crate reference is now on v0.2.0-rc0.
The custom connector has been updated to support ndc-spec v0.2.0. All
tests that talk to the custom connector have been updated with its
latest v0.2.0 schema/capabilities.
In `metadata_resolve` the v01->v02 schema/capabilities migration code
has been updated to handle the new v0.2.0 types. This includes inferring
v0.2.0 capabilities from what was possible in v0.1.x.
In `execution`, the migration code has been updated to deal with the new
v0.1.5 newtypes and v0.2.0 types. This means there are now cases where a
downgrade is impossible and produces an error (see `NdcDowngradeError`
in `execute::ndc::migration`). A bug has also been fixed where NDC
expressions in arguments were not being serialized to the correct NDC
version.
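To illustrate the kind of failure this introduces, here is a purely
hypothetical sketch of such a downgrade step; only the
`NdcDowngradeError` name comes from the actual code, and the expression
types and variants are invented:

```rust
use serde_json::Value;

// Invented error variants; the real `NdcDowngradeError` lives in
// `execute::ndc::migration` and covers the actual v0.2.x-only constructs.
#[derive(Debug, thiserror::Error)]
enum NdcDowngradeError {
    #[error("this expression cannot be represented in ndc_models v0.1.x")]
    ExpressionNotSupportedInV01,
}

// Invented stand-ins for the versioned expression types.
enum V02Expression {
    Equals { column: String, value: Value },
    NewInV02Operator { column: String },
}

enum V01Expression {
    Equals { column: String, value: Value },
}

fn downgrade_expression(expr: V02Expression) -> Result<V01Expression, NdcDowngradeError> {
    match expr {
        V02Expression::Equals { column, value } => Ok(V01Expression::Equals { column, value }),
        // Anything with no v0.1.x counterpart surfaces as an error rather
        // than being silently dropped.
        V02Expression::NewInV02Operator { .. } => {
            Err(NdcDowngradeError::ExpressionNotSupportedInV01)
        }
    }
}
```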
V3_GIT_ORIGIN_REV_ID: 5b4afcde64c307b2bd7c985c588d6c74d9623a0f
This PR fixes the custom connector, whose schema endpoint wasn't
returning correct output (it was missing some functions/procedures,
etc.). It then updates all tests that actually talk to the custom
connector so that the DataConnectorLink in each one contains the latest
version of its schema/capabilities.
This test update is done by a new script added to the justfile that
finds and patches all metadata json files and inserts the new schema and
capabilities after reading them from the custom connector running in
docker.
V3_GIT_ORIGIN_REV_ID: f1825a6f74ddcb6c01198fe4a41de6b4fc0bf533
### What
# BooleanExpressionType
A new metadata kind `BooleanExpressionType` can now be defined. These
can be used in place of `ObjectBooleanExpressionType` and
`DataConnectorScalarRepresentation`, and allow more granular control of
comparison operators and how they are used.
The old metadata types still work, but will eventually be deprecated.
```yaml
kind: BooleanExpressionType
version: v1
definition:
  name: album_bool_exp
  operand:
    object:
      type: Album
      comparableFields:
        - fieldName: AlbumId
          booleanExpressionType: pg_int_comparison_exp
        - fieldName: ArtistId
          booleanExpressionType: pg_int_comparison_exp_with_is_null
        - fieldName: Address
          booleanExpressionType: address_bool_exp
      comparableRelationships:
        - relationshipName: Artist
          booleanExpressionType: artist_bool_exp
  logicalOperators:
    enable: true
  isNull:
    enable: true
  graphql:
    typeName: app_album_bool_exp
```
```yaml
kind: BooleanExpressionType
version: v1
definition:
  name: pg_int_comparison_exp
  operand:
    scalar:
      type: Int
      comparisonOperators:
        - name: equals
          argumentType: String!
        - name: _in
          argumentType: "[String!]!"
      dataConnectorOperatorMapping:
        - dataConnectorName: postgres_db
          dataConnectorScalarType: String
          operatorMapping:
            equals: _eq
  logicalOperators:
    enable: true
  isNull:
    enable: true
  graphql:
    typeName: app_postgres_int_bool_exp
```
### How
Remove feature flag, unhide JsonSchema items, fix a few missing bits of
JsonSchema the tests didn't warn us about before.
V3_GIT_ORIGIN_REV_ID: dd3055d926fdeb7446cd57085679f2492a4358a1
### What
In https://github.com/hasura/v3-engine/pull/750 and
https://github.com/hasura/v3-engine/pull/754 we added a number of checks
for boolean expression types that can be run once we know the data
source they will be used against.
Previously these checks were only applied to model `where` clauses; this
pull request moves them into a shared folder and also runs them when
boolean expressions are used as model or command arguments.
### How
Add a new `arguments` resolve step. It doesn't fit inside the usual
`commands` or `models` steps, as we need the data sources for both to
have been validated, plus access to all the resolved `relationships`
outputs.
V3_GIT_ORIGIN_REV_ID: f713659962e3f20b2c85f287b6c362fb52ffa1ed
### What
This PR adds the ability to include internal errors in API responses via
a command line argument `--expose-internal-errors`.
The default behavior remains not to show the contents of internal error
messages.
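As a rough sketch of how the flag might look, assuming a clap-based CLI
(the struct and field layout here are illustrative, not the engine's
actual options struct):

```rust
use clap::Parser;

#[derive(Parser)]
struct ServerOptions {
    /// Include the contents of internal errors in API responses.
    /// Off by default, preserving the previous behaviour.
    #[arg(long)]
    expose_internal_errors: bool,
}

fn main() {
    let options = ServerOptions::parse();
    if options.expose_internal_errors {
        println!("internal error details will be included in API responses");
    }
}
```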
V3_GIT_ORIGIN_REV_ID: 11c47286d3fbceeda71df3a224853633aeea8902
We pretend to handle directives, but we simply set them to an empty map
and then never actually use them.
This is completely unused code that can be removed.
V3_GIT_ORIGIN_REV_ID: a86dc43acc2de2f2d3f78f0c8ebc53ce9f5bde8c
## Description
We can use Nix to build Docker images, which gives us a few advantages:
1. the images will be cached a little better
2. aarch64 builds become easy
3. Samir is happy because the Nixification continues
## Changelog
- Add a changelog entry (in the "Changelog entry" section below) if the
changes
in this PR have any user-facing impact. See
[changelog
guide](https://github.com/hasura/graphql-engine-mono/wiki/Changelog-Guide).
- If no changelog is required ignore/remove this section and add a
`no-changelog-required` label to the PR.
### Product
_(Select all products this will be available in)_
- [x] community-edition
- [ ] cloud
<!-- product : end : DO NOT REMOVE -->
### Type
<!-- See changelog structure:
https://github.com/hasura/graphql-engine-mono/wiki/Changelog-Guide#structure-of-our-changelog
-->
_(Select only one. In case of multiple, choose the most appropriate)_
- [ ] highlight
- [x] enhancement
- [ ] bugfix
- [ ] behaviour-change
- [ ] performance-enhancement
- [ ] security-fix
<!-- type : end : DO NOT REMOVE -->
### Changelog entry
<!--
- Add a user understandable changelog entry
- Include all details needed to understand the change. Try including
links to docs or issues if relevant
- For Highlights start with a H4 heading (#### <entry title>)
- Get the changelog entry reviewed by your team
-->
The v3 engine and dev-auth-webhook Docker images are now published for
both x86_64 (`amd64`) and aarch64 (`arm64`) architectures.
<!-- changelog-entry : end : DO NOT REMOVE -->
<!-- changelog : end : DO NOT REMOVE -->
Co-Authored-By: Philip Carlsen <philip@hasura.io>
Co-Authored-By: Gil Mizrahi <gil@hasura.io>
V3_GIT_ORIGIN_REV_ID: ae6fec45dee62a21f03b5258b57d841a16542c72
## Description
We'd like to be able to test new WIP experimental features. This adds a
`UNSTABLE_FEATURES` env var / command line arg that can be passed a
comma-separated list of names.
Currently we accept `UNSTABLE_FEATURES=enable-boolean-expression-types`
but in future users could pass
`UNSTABLE_FEATURES=some-fancy-feature,other-feature,great`.
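A rough sketch of how such a comma-separated list might be parsed,
assuming a clap-based CLI; the struct is illustrative and only the
feature name above is real:

```rust
use clap::{Parser, ValueEnum};

#[derive(Clone, Copy, Debug, ValueEnum)]
enum UnstableFeature {
    // Rendered as `enable-boolean-expression-types` on the command line.
    EnableBooleanExpressionTypes,
}

#[derive(Parser)]
struct ServerOptions {
    /// Comma-separated list of experimental features to switch on.
    #[arg(long, env = "UNSTABLE_FEATURES", value_delimiter = ',')]
    unstable_features: Vec<UnstableFeature>,
}

fn main() {
    let options = ServerOptions::parse();
    println!("enabled unstable features: {:?}", options.unstable_features);
}
```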
## Changelog
- Add a changelog entry (in the "Changelog entry" section below) if the
changes
in this PR have any user-facing impact. See
[changelog
guide](https://github.com/hasura/graphql-engine-mono/wiki/Changelog-Guide).
- If no changelog is required ignore/remove this section and add a
`no-changelog-required` label to the PR.
### Product
_(Select all products this will be available in)_
- [x] community-edition
- [x] cloud
<!-- product : end : DO NOT REMOVE -->
### Type
<!-- See changelog structure:
https://github.com/hasura/graphql-engine-mono/wiki/Changelog-Guide#structure-of-our-changelog
-->
_(Select only one. In case of multiple, choose the most appropriate)_
- [ ] highlight
- [x] enhancement
- [ ] bugfix
- [ ] behaviour-change
- [ ] performance-enhancement
- [ ] security-fix
<!-- type : end : DO NOT REMOVE -->
### Changelog entry
<!--
- Add a user understandable changelog entry
- Include all details needed to understand the change. Try including
links to docs or issues if relevant
- For Highlights start with a H4 heading (#### <entry title>)
- Get the changelog entry reviewed by your team
-->
Add `UNSTABLE_FEATURES` environment variable
<!-- changelog-entry : end : DO NOT REMOVE -->
<!-- changelog : end : DO NOT REMOVE -->
V3_GIT_ORIGIN_REV_ID: 3562c1341d4ea3a512626110dbd2b055425c1d60
Sometimes you just want to run a specific test.
The arguments are passed straight to the test runner: `cargo nextest` if
you have it installed, and `cargo test` if you do not. These accept
different arguments so the user will need to know which one they are
using.
V3_GIT_ORIGIN_REV_ID: 9cd6f9d770899e0bb3aa4537008eedba052818d7
`test_each::path` always gives us a `PathBuf` even if we don't want one,
so we need to suppress the associated warning.
V3_GIT_ORIGIN_REV_ID: 4f2bb29122df6979ad378227fb88e4632d3551f7
## Description
Following the `metadata-resolve` and `schema` crates, this splits out
`execute`, the largest folder in `engine`. Undoubtedly this could be
split further.
Functional no-op.
V3_GIT_ORIGIN_REV_ID: c272908153f78212d1f5dd58819707ac3cbcd439
## Description
We have a separate copy of the code for resolving type predicates that
we use for object types (when resolving BooleanExpressions), because the
previous code was heavily tied to the Model it used. We'd like to unify
that code again, so the first step is re-implementing relationships in
`resolve_model_predicate_with_type`.
## Changelog
- Add a changelog entry (in the "Changelog entry" section below) if the
changes
in this PR have any user-facing impact. See
[changelog
guide](https://github.com/hasura/graphql-engine-mono/wiki/Changelog-Guide).
- If no changelog is required ignore/remove this section and add a
`no-changelog-required` label to the PR.
### Product
_(Select all products this will be available in)_
- [X] community-edition
- [X] cloud
<!-- product : end : DO NOT REMOVE -->
### Type
<!-- See changelog structure:
https://github.com/hasura/graphql-engine-mono/wiki/Changelog-Guide#structure-of-our-changelog
-->
_(Select only one. In case of multiple, choose the most appropriate)_
- [ ] highlight
- [X] enhancement
- [ ] bugfix
- [ ] behaviour-change
- [ ] performance-enhancement
- [ ] security-fix
<!-- type : end : DO NOT REMOVE -->
### Changelog entry
<!--
- Add a user understandable changelog entry
- Include all details needed to understand the change. Try including
links to docs or issues if relevant
- For Highlights start with a H4 heading (#### <entry title>)
- Get the changelog entry reviewed by your team
-->
Allow boolean expressions to use relationships
<!-- changelog-entry : end : DO NOT REMOVE -->
<!-- changelog : end : DO NOT REMOVE -->
V3_GIT_ORIGIN_REV_ID: 6225a4ab752b71df3cdfd0982bf2107ca39f4940
## Description
We'd like to speed up creation of all these Docker images, so this adds
`dev-auth-webhook` to the Nix flake. Functional no-op.
---------
Co-authored-by: Samir Talwar <samir.talwar@hasura.io>
V3_GIT_ORIGIN_REV_ID: 384eb467b2fe7fba1644f5b4cc6224cdc043ce01
## Description
When regenerating golden files, the formatting of all the test files
goes out of order. This adds a formatting step afterwards.
V3_GIT_ORIGIN_REV_ID: 5cddd4cc4ec6c1d684b1841be9347f1fcaa3aade
## Description
1. I've moved the architecture information we had in `CONTRIBUTING.md`
to a separate document `docs/architecture.md` so we can evolve both
separately in the future.
2. I've introduced a couple of subdirectories, `utils` and `auth`, for
supporting crates that are not the core functionality of the engine, so
it is easier to find the most relevant crates.
New structure:
```
crates
├── auth
│   ├── dev-auth-webhook
│   ├── hasura-authn-core
│   ├── hasura-authn-jwt
│   └── hasura-authn-webhook
├── custom-connector
├── engine
├── lang-graphql
├── metadata-schema-generator
├── open-dds
└── utils
    ├── opendds-derive
    ├── recursion_limit_macro
    └── tracing-util
```
V3_GIT_ORIGIN_REV_ID: e0e9394da2fcd911f329c48107a76f8492fa304c
I found myself wanting to rewrite JSON files with `sed`. The problem is,
then I want to run a formatter over them afterwards, and this will
change the whole file, not just the area I touched.
I would like to propose the nuclear option in remedying this: format
everything now. This is a very large change that should make it easier
to keep files to a consistent format in the future.
I have chosen to use Prettier for this because (a) it has a useful
`--write` command and (b) it also does GraphQL, Markdown, YAML, etc.
I've elected to exclude two sets of files:
1. `crates/custom-connector/data/*.json`, because they are actually
multiple JSON objects, one per line, which Prettier cannot parse.
2. `crates/lang-graphql/tests/**/*.graphql`, because those files
intentionally contain invalid GraphQL, and the parser is intended to
work with strangely-formatted GraphQL.
The main changes are standardizing whitespace, adding a newline at the
end of files, and putting JSON arrays on one line when they fit.
V3_GIT_ORIGIN_REV_ID: 92d4a535c34a3cc00721e8ddc6f17c5717e8ff76
## Description
In order to test things more quickly, we'd like to be able to build the
custom connector and friends with Nix, and then use those containers
when running tests. The first step is to be able to build Docker
containers with Nix, and to add a CI job to ensure this keeps working.
Then we'll move on to publishing and using these images.
No-op build times:
<img width="336" alt="Screenshot 2024-04-25 at 15 53 56"
src="https://github.com/hasura/v3-engine/assets/4729125/47cbc0c5-6e54-4583-aa01-0528d4a21080">
Functional no-op.
V3_GIT_ORIGIN_REV_ID: 8f9d609e26cdd3b0801e61fd361c241ad504dcdf
This splits out a `debug.Dockerfile` which makes use of out-of-band
caching to speed up builds drastically, at the expense of
reproducibility.
It is used to run tests and auxiliary test services (i.e. the custom
connector).
The new `debug.Dockerfile` marks the Cargo dependency and build caches
as Docker caches, which means they are shared between builds. This is
probably fine for local work and testing. The `Dockerfile` continues to
not use a cache like this, to guarantee that it is not polluted by extra
information, at the expense of build speed.
In addition, we build a `nextest` archive ahead of time to avoid
building tests when attempting to run them.
On my machine, a re-run of `just test` now takes seconds.
I have also sped up the `postgres` container start time by creating a
database called "finished" last, and then waiting for that to show up.
V3_GIT_ORIGIN_REV_ID: 7ef0548361987175b68a0cad44c8f2295110a1fb
This uses the `json!` macro in the JWT client code and its associated
tests, rather than embedding a string and parsing it. This will make
compilation fail if the JSON is invalid, which seems better than a
runtime/test-time failure.
There are a few cases where we were constructing a JSON string by
concatenating strings using `format!`. `json!` also handles these cases
better, as you can refer to a JSON value by its variable name too, and
it doesn't require escaping `{` and `}`.
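A small illustration of the pattern; the field names below are made up
and are not the actual claims used in the tests:

```rust
use serde_json::{json, Value};

fn claims(role: &str, allowed_roles: &[&str]) -> Value {
    let session_variables = json!({ "x-hasura-default-role": role });
    json!({
        "sub": "1234567890",
        "exp": 1_735_689_600,
        // A value can be spliced in by variable name, and there is no need to
        // escape `{` and `}` as there would be with format!.
        "claims": {
            "session": session_variables,
            "allowed-roles": allowed_roles,
        },
    })
}
```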
V3_GIT_ORIGIN_REV_ID: 4c40299b71d583efae7b0cdbabee8cff21b09bef
## Description
Previously we moved all our types around in one big bucket, meaning we
often had to check that we had the thing we wanted. This splits it up so
that dependencies are more granular and clearer.
This means that instead of passing `types` around, we'll have both
`scalar_types` and `object_types` (usually just `object_types`, though).
V3_GIT_ORIGIN_REV_ID: 6a6b8d6265b0391f8910f3d4f8932ad151453c18
## Description
Following the approach taken here:
https://github.com/hasura/ndc-postgres/pull/402
This moves the `clippy` settings into the Cargo workspace file instead
of passing them for each invocation.
We enable all pedantic settings, run `cargo clippy --fix` to auto fix a
few things, and then manually disable all other lints.
Plenty of them are worth enabling and fixing in future IMO.
---------
Co-authored-by: Samir Talwar <samir.talwar@hasura.io>
V3_GIT_ORIGIN_REV_ID: aa0e6ccb8d72a7393e14b5c58b82077a67d9cb15
## Description
Set our `ndc-postgres` connector in tests to use new mutations versions
so we can test Boolean Expressions. Also does some house-keeping, like
ensuring we pull the latest `ndc-postgres` in CI and exposing `8080`
from `ndc-postgres` to fix local dev flow.
V3_GIT_ORIGIN_REV_ID: 4c92670e9976a3f75ec31e1224079799380ef6e2
This seems appropriate now that we've stabilized the new configuration.
Of note are the configuration updates and the use of an environment
variable to specify the connection URI. This upgrade also fixes the
health checks.
Regenerating the configuration lost the table descriptions, which seems
to be because they were not present in the Chinook SQL. I have dragged
the Chinook SQL in from ndc-postgres and kept it separate from the
initialization of other tables.
The auto-generated configuration is slightly different from the
manually-created configuration in that the collection names are
singular, not plural. This means that I had to change a lot of test
metadata files too.
V3_GIT_ORIGIN_REV_ID: 2b66fd3049aaf4daeb386915ea3b64a209b1f393
## Description
When I run `cargo fmt` on my branches, it produces a bigger diff than I
want. This PR fixes that by adding `just format` / `just fmt` and adding
it to a CI job.
V3_GIT_ORIGIN_REV_ID: e31e352f27b9ad0129c3759fead051b1a8d86758
Most of our span names were in `snake_case` and usually reflected
`name_of_the_function`. This is unfriendly, so we've changed them to
match the convention used by data connectors, which use names like
"Database request" or "Waiting for connection".
It would be nice if `Execute request plan for query field` could be
`Execute request plan for query field "person"`, but it seems changing
the currently used `&'static str` to `String` causes all sorts of other
lifetime issues, so I suppose we're better off including more
information as span attributes instead. Therefore, all our span names
need to be pretty much static (or at least, dynamically chosen from a
list of static names).
Note: I have not changed the `SpanVisibility::Internal` span names for
now. Most of these reflect function names and this is pretty useful IMO.
Happy to change this later if others feel strongly, though.
V3_GIT_ORIGIN_REV_ID: f2226b2466e8592676f3f4635d483289f0e3f6aa
## Description
My memory is bad and I want to make it easy to run the `engine` to try
it out. This adds the `just run-local` command which uses the schema
from the tests (and so is very likely to continue working).
<img width="1445" alt="Screenshot 2024-03-28 at 09 44 46"
src="https://github.com/hasura/v3-engine/assets/4729125/5dc9a8d6-612e-418e-be24-ef0fefd0da99">
V3_GIT_ORIGIN_REV_ID: b5cd009b19805f5e9ed6180f68878207ece50d98
`cargo machete` is a very useful tool that figures out when you aren't
using a dependency. I have run this locally to remove unused
dependencies.
I've also added a CI job to make sure we catch these in the future.
Sometimes it reports false positives, e.g. when a dependency isn't used
directly but in macro-generated code (e.g. with `strum`). I have added
`"ignored"` clauses to the `Cargo.toml` files where appropriate.
V3_GIT_ORIGIN_REV_ID: ed015089b695cec8eeb03ce455d6dd3cd312a016