### What
By default, `rustc` splits each crate into multiple codegen units and compiles them in parallel. This is faster to build, but misses some optimisations across those unit boundaries. Let's reduce the splitting to increase runtime performance.
### How
Add settings to the `release` profile in `Cargo.toml`.
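For concreteness, a sketch of the kind of `release` profile tweak meant here; treat the exact values as an assumption rather than the committed settings:
```toml
# Sketch only: reduce the number of codegen units so LLVM can optimise
# across more of each crate at once (slower builds, faster runtime).
[profile.release]
codegen-units = 1
```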
<img width="776" alt="Screenshot 2024-08-13 at 12 59 46"
src="https://github.com/user-attachments/assets/a03389dc-80ba-4723-8ca3-36af50846324">
V3_GIT_ORIGIN_REV_ID: 44fa511024140b680b30c9abfaa48034bf0845a9
### What
We started using the `mimalloc` allocator in MBS a while ago with good results; let's use it here too.
Once `v3-engine-multitenant` is merged, we should use it there too.
### How
Import the crate and switch it on in the engine binary and in the benchmarks.
<img width="845" alt="Screenshot 2024-08-13 at 10 10 49"
src="https://github.com/user-attachments/assets/dc872668-1633-468a-86d3-51fca5be68bf">
V3_GIT_ORIGIN_REV_ID: ebad91bb57964477d0f227e341c7bd12d54f0f68
### What
Try not to pollute the CI cache with the wrong thing.
### How
1. Remove the package selector as we don't care about production builds
here any more.
2. Tell the test build to not save the cache so there is no write
contention.
3. Downgrade `mockito` to v1.4 to reduce the size of the cache, because v1.5 depends on `http` v1.
V3_GIT_ORIGIN_REV_ID: 2109c8c7db5d80e3b2c29d2949423e8faebd10b2
### What
`execute` is now the biggest crate in the engine and does a lot; let's split it into its constituent steps.
Functional no-op.
### How
Split out `ir` crate from the `execute` crate. Replace export of entire
modules with that of specific types / functions. Therefore, consumers
outside the crate talk about `ir::CommandInfo` rather than
`ir::command::CommandInfo`. There is no need for other crates to know
about the internal structure of this crate.
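A small sketch of the re-export pattern in the new crate's root:
```rust
// Sketch of the ir crate root: keep modules private and re-export only the
// items consumers need, so the crate's internal layout stays an implementation detail.
mod command {
    pub struct CommandInfo {
        pub command_name: String,
    }
}

// Consumers now write `ir::CommandInfo` instead of `ir::command::CommandInfo`.
pub use command::CommandInfo;
```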
V3_GIT_ORIGIN_REV_ID: 47553aec63e80af7f95e659a170a2685e9ac2ce3
This PR adds validation code to `metadata_resolve` that prevents someone from putting schema/capabilities from the wrong NDC version into a DataConnectorLink, i.e. declaring one schema version but supplying capabilities from a different version. For example:
```
kind: DataConnectorLink
version: v1
definition:
  name: data_connector
  schema:
    version: v0.2
    schema: {}
    capabilities:
      version: 0.1.5 # Not allowed for version v0.2!
      capabilities: {}
```
This PR has two commits. One is a refactor where we rearrange the DataConnectorError types so that the name of the data connector is captured centrally in `NamedDataConnectorError` and doesn't have to be passed around and included in every error manually. The other is
the validation changes to `metadata_resolve`.
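A rough sketch of the shape of the new check (function and error shapes here are hypothetical, not the real `metadata_resolve` types):
```rust
// Hypothetical sketch: the declared schema version must agree with the
// version reported inside the capabilities.
fn validate_ndc_versions(schema_version: &str, capabilities_version: &str) -> Result<(), String> {
    let schema_is_v02 = schema_version.starts_with("v0.2");
    let capabilities_are_v02 = capabilities_version.starts_with("0.2.");
    if schema_is_v02 == capabilities_are_v02 {
        Ok(())
    } else {
        Err(format!(
            "capabilities version {capabilities_version} is not allowed for schema version {schema_version}"
        ))
    }
}
```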
Completes APIPG-705
V3_GIT_ORIGIN_REV_ID: baed571f36f4cbed824ca546128f5df360d5b298
This PR adds true support for ndc_models v0.2.0 to v3-engine. Note that
v0.2.0 is not finalized yet, so we're pointing at v0.2.0-rc0. The
support still comes via the migration methodology, where v0.2.x ndc
models are downgraded to v0.1.x to support backwards compatibility. In
the future we want to remove this and have the engine generate the
different versioned ndc models separately instead of performing a
migration.
The ndc_models_v01 crate reference has been bumped to the official
v0.1.5 version, which brings the newtypes to the v0.1.x line. The
ndc_models crate reference is now on v0.2.0-rc0.
The custom connector has been updated to support ndc-spec v0.2.0. All
tests that talk to the custom connector have been updated with its
latest v0.2.0 schema/capabilities.
In `metadata_resolve` the v01->v02 schema/capabilities migration code
has been updated to handle the new v0.2.0 types. This includes inferring
v0.2.0 capabilities from what was possible in v0.1.x.
In `execution`, the migration code has been updated to deal with the new
v0.1.5 newtypes and v0.2.0 types. This means there are now cases where a
downgrade is impossible and produces an error (see `NdcDowngradeError`
in `execute::ndc::migration`). A bug has also been fixed where NDC
expressions in arguments were not being serialized to the correct NDC
version.
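As an illustration of the fallible downgrade, something of this shape (types and names are invented for the sketch; the real code lives in `execute::ndc::migration` and works on `ndc_models` types):
```rust
// Invented sketch: a v0.2 construct with no v0.1 equivalent cannot be downgraded.
#[derive(Debug)]
pub enum NdcDowngradeError {
    NotRepresentableInV01 { feature: &'static str },
}

pub enum AggregateFunction {
    Count,
    // Stand-in for something that only exists in ndc-spec v0.2.
    V02OnlyFunction,
}

pub fn downgrade(function: AggregateFunction) -> Result<AggregateFunction, NdcDowngradeError> {
    match function {
        AggregateFunction::V02OnlyFunction => Err(NdcDowngradeError::NotRepresentableInV01 {
            feature: "v0.2-only aggregate function",
        }),
        other => Ok(other),
    }
}
```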
V3_GIT_ORIGIN_REV_ID: 5b4afcde64c307b2bd7c985c588d6c74d9623a0f
This PR updates ndc-models to the latest version on main. This version
is still a 0.1.x version, but it now includes all the [new
newtypes](https://github.com/hasura/ndc-spec/pull/156) that wrap
previously stringly-typed things. For example, `ArgumentName`,
`FieldName`, etc.
This pervades the entire engine, but thankfully the changes are
mostly mechanical repetitive changes. Usually you will see conversions
from `String`-typed variables into the newtypes using this sort of form:
`FieldName::from(string.as_str())`, which is the most efficient way of copying the value (the str slice is copied). Or you will see usages of
the newtype as a raw string by `.as_str()`-ing it. Converting the
newtypes into a String can be done with `.into()` if owned, but if
referenced `.as_str().to_owned()` performs the clone and type
conversion.
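A compact sketch of those conversion forms, using a stand-in newtype rather than the real `ndc_models` definitions:
```rust
// Stand-in newtype to illustrate the conversion patterns described above.
#[derive(Clone, Debug)]
pub struct FieldName(String);

impl From<&str> for FieldName {
    fn from(value: &str) -> Self {
        FieldName(value.to_owned())
    }
}

impl From<FieldName> for String {
    fn from(value: FieldName) -> Self {
        value.0
    }
}

impl FieldName {
    pub fn as_str(&self) -> &str {
        &self.0
    }
}

fn conversions(string: String, owned_name: FieldName, referenced_name: &FieldName) {
    // String -> newtype: copy the str slice once.
    let name = FieldName::from(string.as_str());
    // Use the newtype as a raw string.
    let raw: &str = name.as_str();
    // Owned newtype -> String.
    let owned: String = owned_name.into();
    // Referenced newtype -> String: clone plus conversion.
    let cloned: String = referenced_name.as_str().to_owned();
    println!("{raw} {owned} {cloned}");
}
```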
Other changes:
* A few minor instances of `ok_or()` usages (or similar) have been converted into lazy error construction variants (e.g. `ok_or_else()`)
V3_GIT_ORIGIN_REV_ID: 64a371ae6197ef3be98a6f7cdc4052d654a43da0
This PR introduces support for multiple versions of the ndc-spec by
adding a new `VersionedSchemaAndCapabilities` enum variant under the
`DataConnectorLink` in OpenDD. This allows the capture of both ndc
v0.1.* and v0.2.* schema and capabilities.
This is achieved by referencing the `ndc-models` crate twice, once for
`v0.1.4` and once for the first commit after `v0.1.4`. That commit was
chosen to avoid actual v0.2.0 breaking changes for now, while we lay in
this multiple version support plumbing. Future PRs will use a newer
commit and adopt the breaking changes where necessary. The
`VersionedSchemaAndCapabilities::V02` variant uses the v0.2
reference of `ndc-models`.
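Indicatively, the new variant has roughly this shape (assuming the two `ndc-models` versions are imported under the aliases `ndc_models_v01` and `ndc_models_v02` via renamed Cargo dependencies; this is not the exact OpenDD definition):
```rust
// Indicative sketch of the versioned schema/capabilities wrapper.
pub enum VersionedSchemaAndCapabilities {
    V01 {
        schema: ndc_models_v01::SchemaResponse,
        capabilities: ndc_models_v01::CapabilitiesResponse,
    },
    V02 {
        schema: ndc_models_v02::SchemaResponse,
        capabilities: ndc_models_v02::CapabilitiesResponse,
    },
}
```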
Then, during metadata resolve, when we resolve the
`DataConnectorContext` from `DataConnectorLink`, we perform a migration
of v0.1 types to v0.2 types and store and use the v0.2 types during
metadata resolve. This migration is performed in the new module
`ndc_migration`. We also record the `NdcVersion` (either `V01` or `V02`)
in the `DataConnectorLink`. The `execute` crate will need to use this to
determine which version to send to the connector at runtime (to be
implemented in a future PR).
The new changes to OpenDD are hidden from the JSON Schema via a new
`UnstableFeatures` flag, and the use of the new variant is gated behind
it in metadata resolve, since we don't yet support it upstream in the
`execute` crate.
V3_GIT_ORIGIN_REV_ID: d6d8a768ea3537c0b5e620799e94d3dd1e529526
## Description
Upgrade Rust, as a treat. Functional no-op.
---------
Co-authored-by: Samir Talwar <samir@functional.computer>
V3_GIT_ORIGIN_REV_ID: 1e0014049e89b8658326c8d8f652df800c415526
### What
Output all traces to stdout.
---------
Co-authored-by: Daniel Harvey <danieljamesharvey@gmail.com>
V3_GIT_ORIGIN_REV_ID: 06330076ca305a331996530ddcd4d4c13d46bd95
### What
The `lazy_static` macro is poorly maintained, fairly bloated, and has
been mostly superseded by
[`OnceLock`](https://doc.rust-lang.org/stable/std/sync/struct.OnceLock.html)
in the stdlib.
### How
1. I turned a couple of `static ref` values into `const`, sometimes by
creating `const fn` equivalents to other functions.
2. I inlined the construction of a JSON pointer into some tests, where we don't care too much about losing a few milliseconds.
3. For the rest, I replaced `lazy_static` with a `static OnceLock` and a call to `OnceLock::get_or_init`, as sketched below.
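A minimal sketch of the pattern from step 3:
```rust
use std::sync::OnceLock;

// Before (lazy_static):
//     lazy_static! { static ref GREETING: String = expensive_computation(); }
// After: a static OnceLock initialised on first access.
static GREETING: OnceLock<String> = OnceLock::new();

fn greeting() -> &'static str {
    GREETING
        .get_or_init(|| "computed once, on first use".to_owned())
        .as_str()
}
```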
V3_GIT_ORIGIN_REV_ID: 18e4150a5fb24fe71f6ed77fe6178b7942405aa3
### What
In order to more easily monitor and review changes to metadata
resolution, this introduces snapshot testing for both successful and
failing calls to `resolve`. I used [Insta](https://insta.rs/) for this.
### How
For tests of the failure case, we already had a text file with the
expected error, so I have turned those files into snapshot files. I
wrote a small script to move the files rather than deleting and
recreating them so I could guarantee that the contents have not changed.
(Unfortunately, Git's diff doesn't always recognise the move as a move
because Insta has added a header.)
For tests of the successful case, I added a line to snapshot the
metadata rather than discarding it.
I also rewrote the tests to use `insta::glob` so we could get rid of
`test_each`.
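The glob-based tests end up looking roughly like this sketch (fixture paths are made up, and it assumes `insta` with the `glob` feature as a dev-dependency):
```rust
#[test]
fn resolve_metadata_snapshots() {
    insta::glob!("cases/**/metadata.json", |path| {
        let raw = std::fs::read_to_string(path).expect("read fixture");
        // In the real tests this snapshots the output (or error) of `resolve`;
        // snapshotting the input here just keeps the sketch self-contained.
        insta::assert_snapshot!(raw);
    });
}
```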
V3_GIT_ORIGIN_REV_ID: 41bef4cf77bddb8d20d7c101df52ae149e8b0476
Adds a very experimental SQL interface to v3-engine for GenAI use cases.
---------
Co-authored-by: Abhinav Gupta <127770473+abhinav-hasura@users.noreply.github.com>
Co-authored-by: Gil Mizrahi <gil@gilmi.net>
Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com>
V3_GIT_ORIGIN_REV_ID: 077779ec4e7843abdffdac1ed6aa655210649b93
I noticed a few extra calls to `.clone()` while working on an unrelated
refactor. I want to remove them for brevity and simplicity; I don't
expect a performance improvement.
This turns on the Clippy warning `redundant_clone`, which detects
unnecessary calls to `.clone()` (and `.to_string()`).
It is an unstable warning and so might report some false positives. If we find any, we can suppress the warning there.
V3_GIT_ORIGIN_REV_ID: a713f29cf862d6f4cb40300105c6b9f96df00676
## Description
A few debug lines slipped in recently; let's make `clippy` `warn` on those, so they are kicked out by CI. Functional no-op.
V3_GIT_ORIGIN_REV_ID: 290f6de35f9315b68811eb5f15969fb0333e9d06
This keeps versions in one place so we can more easily ensure we upgrade
crates together.
V3_GIT_ORIGIN_REV_ID: 6a929bb6196c19a1f66a768585b669127035e9be
Return a `T` instead of a `Result<T, E>` when we never return an error
(`E`) case.
I also enabled some more warnings. `unnecessary_box_returns` has been
suppressed where appropriate, and `unused_async` doesn't seem to be
violated anywhere any more.
I got rid of some calls to `.unwrap()` too.
V3_GIT_ORIGIN_REV_ID: 015ebd05978cf8c2d87474a90e0cd4333779a761
If a function doesn't return a value, terminate with a semicolon.
I also moved `implicit_hasher` and `return_self_not_must_use` to the
"definitely keep disabling this" list, and installed
[Bacon](https://dystroy.org/bacon/) in the Nix shell to make it easier
to run Clippy.
V3_GIT_ORIGIN_REV_ID: ffb17b42d982518aec433a1676dba0a0dd0ad95d
Calling `.to_owned()` on a reference, `.to_vec()` on a vector reference,
etc. are just synonyms for `.clone()` which are less explicit about
cloning. Let's be explicit.
This also removes some unnecessary clones.
V3_GIT_ORIGIN_REV_ID: 1bc00c4106f0346303d73e4268c89030c0ce93fc
1. Use `map_or(…, …)` instead of `.map(…).unwrap_or(…)`.
2. Use `.is_some_and(…)` instead of `.map(…).unwrap_or_default()`.
3. Nest `|` patterns where possible.
4. Be more specific about match patterns.
I found I could also simplify `typecheck_qualified_type_reference`
considerably.
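For illustration, the first two rewrites look roughly like this:
```rust
fn limits(limit: Option<usize>) -> (usize, bool) {
    // 1. `.map(…).unwrap_or(…)` becomes `.map_or(…, …)`.
    let effective = limit.map_or(100, |n| n.min(100));
    // 2. `.map(…).unwrap_or_default()` becomes `.is_some_and(…)`.
    let is_small = limit.is_some_and(|n| n < 10);
    (effective, is_small)
}
```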
V3_GIT_ORIGIN_REV_ID: 6a3b1a4c525c0187c2fdb6df0c979ca0b7b3016c
1. Always name the struct when calling `default()`.
2. Sort construction according to the definitions.
3. Approve allowing `struct_field_names` because it doesn't seem to be
helpful.
4. Enable `manual_string_new`; nothing seems to be triggering it now.
V3_GIT_ORIGIN_REV_ID: 868742114b0bf27bc3ea03cdf1e63a0f710ebe33
Rather than allowing the `cast_precision_loss` and `inline_always`
warnings everywhere, we just suppress them in the few places they're
already used.
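Concretely, that means a local `#[allow]` at each site instead of a crate-wide suppression, along these lines:
```rust
// The cast is deliberate here, so suppress the lint only for this function.
#[allow(clippy::cast_precision_loss)]
fn ratio(numerator: u64, denominator: u64) -> f64 {
    numerator as f64 / denominator as f64
}
```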
V3_GIT_ORIGIN_REV_ID: c28350a21efa029c8c6aae2a602eec2d75f42216
Just because it's fewer lines of code.
1. Invert `if`/`else` blocks with negative conditions.
2. Unwrap redundant `else` blocks.
3. Simplify a few branches to `let … else`.
4. Replace a `match` with `if let`.
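A small sketch of rewrites 3 and 4:
```rust
fn first_with_limit(words: &[String], limit: Option<usize>) -> String {
    // 3. `let … else` instead of a match that only handles one arm.
    let Some(first) = words.first() else {
        return String::new();
    };
    // 4. `if let` instead of a two-arm `match`.
    if let Some(limit) = limit {
        format!("{first} (limit {limit})")
    } else {
        first.clone()
    }
}
```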
V3_GIT_ORIGIN_REV_ID: 4f10730b688d21c1fc86a45ee5fb4adf008b3d94
Fix some warnings flagged by Clippy.
1. Elide `.into_iter()` where it's unnecessary.
2. Favor `&` over `.iter()`.
3. Use `.values()` on maps instead of discarding keys by destructuring.
4. Avoid `::from_iter(…)` in favor of `.collect()`.
I also replaced a call to `.cloned()` with `.copied()`.
V3_GIT_ORIGIN_REV_ID: 7d39665b0cd04f5bae9405c0ff5f044f57433f32
## Description
The expectation is that the engine should emit usages of OpenDD objects (i.e. models, commands, relationships, permissions, fields of types, etc.) from a GraphQL query.
This PR adds the types required to gather query usage analytics, under a new crate `query-usage-analytics`.
V3_GIT_ORIGIN_REV_ID: 49778c25a9019e0c8c9a2d13eaa8ba28638b8b55
## Description
Following `metadata-resolve` and `schema` crates, this splits out
`execute`, the largest folder in `engine`. Undoubtedly this could be
split further.
Functional no-op.
V3_GIT_ORIGIN_REV_ID: c272908153f78212d1f5dd58819707ac3cbcd439
## Description
As advertised.
Now you can run the following to browse our internal code
```
cargo doc --no-deps --document-private-items --open
```
V3_GIT_ORIGIN_REV_ID: 5b7091a00ed6148b8a91168807b07aa6a925cac9
## Description
This PR splits the GraphQL schema generation into the `schema` crate.
Functional no-op.
V3_GIT_ORIGIN_REV_ID: 4f1a91387305d88e9b5fbe4bc8df0575292cf878
## Description
Now that metadata resolve has a clear interface with the rest of the
engine, let's take it out into its own crate. This will make it easier
to maintain a strong boundary between things.
To simplify imports etc., we removed nested layers of modules, so now we
import `use metadata_resolve::Qualified` instead of `use
crate::metadata::resolved::Qualified`.
The changes in `engine` crate are all just updating imports.
Functional no-op.
V3_GIT_ORIGIN_REV_ID: fb94304f7ed8883287c18bd6870045dfd69e3fe3
## Description
1. I've moved the architecture information we had in `CONTRIBUTING.md`
to a separate document `docs/architecture.md` so we can evolve both
separately in the future.
2. I've introduced a couple of subdirectories, `utils` and `auth`, for supporting crates that are not the core functionality of the engine, so it is easier to find the most relevant crates.
New structure:
```
crates
├── auth
│   ├── dev-auth-webhook
│   ├── hasura-authn-core
│   ├── hasura-authn-jwt
│   └── hasura-authn-webhook
├── custom-connector
├── engine
├── lang-graphql
├── metadata-schema-generator
├── open-dds
└── utils
    ├── opendds-derive
    ├── recursion_limit_macro
    └── tracing-util
```
V3_GIT_ORIGIN_REV_ID: e0e9394da2fcd911f329c48107a76f8492fa304c
This shrinks the Docker image size by half.
I have also normalized the two Dockerfiles so they share a cache for
longer.
V3_GIT_ORIGIN_REV_ID: f976725b09ad2c8022a912b15cdcde55ce5a9486
When trying to reduce the number of dependencies we use in the engine, I
was blocked by a few `.clone()` calls that, on inspection, turned out to
be completely unnecessary.
I have replaced those with passing by reference, and then gone on a
pedant spree. I enabled the `needless_pass_by_value` Clippy warning and
fixed it everywhere that it highlighted. In most places, this meant
adding `&`, but I also marked some types as `Copy`, which makes
pass-by-value the right move.
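The typical change looks like this sketch (names are illustrative):
```rust
// Before: the callee took ownership it didn't need, so callers had to clone.
//     fn qualify(subgraph: String, name: String) -> String { format!("{subgraph}.{name}") }

// After: borrow instead, and the callers' clones disappear.
fn qualify(subgraph: &str, name: &str) -> String {
    format!("{subgraph}.{name}")
}
```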
In one place, I replaced calls to `async_map` with `if` and `else`, to
avoid constructing closures that capture across async boundaries. This
means I could just delete `async_map`.
V3_GIT_ORIGIN_REV_ID: 6ff71f0c553b707889d89552eff3e8c001e898cc
## Description
In https://github.com/hasura/v3-engine/pull/441 we made all our skipped
Clippy rules explicit. This enables one (pretty arbitrarily) and fixes
what comes up.
V3_GIT_ORIGIN_REV_ID: 406692a2a134cb2a6cf5785acd0ac7c5b9f90c61
## Description
This PR iterates on #459.
Rather than serving the engine metadata, it serves an arbitrary file given by the command-line argument `--introspection-metadata`.
Specifying this argument gives rise to endpoints `/metadata` and
`/metadata-hash`.
![image](https://github.com/hasura/v3-engine/assets/358550/63040f02-876a-4c29-8cf1-52a305ffff67)
Update: We only load the file in at engine startup and serve that
version. Changing the file on disk will not change what the engine
serves.
---------
Co-authored-by: Gil Mizrahi <gil@gilmi.net>
V3_GIT_ORIGIN_REV_ID: db88adb5c08c4489cc1abd5fb5236b8d5ba51b9a
## Description
Following the approach taken here:
https://github.com/hasura/ndc-postgres/pull/402
This moves the `clippy` settings into the Cargo workspace file instead
of passing them for each invocation.
We enable all pedantic settings, run `cargo clippy --fix` to auto fix a
few things, and then manually disable all other lints.
Plenty of them are worth enabling and fixing in future IMO.
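As a sketch of what workspace-level clippy configuration can look like (this uses today's `[workspace.lints]` table; whether this PR used exactly this mechanism is an assumption, and the real allow-list is much longer):
```toml
# Sketch only: each member crate opts in with `[lints] workspace = true`.
[workspace.lints.clippy]
pedantic = { level = "warn", priority = -1 }
# ...followed by an explicit `allow` for every lint we are not ready to fix yet.
module_name_repetitions = "allow"
```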
---------
Co-authored-by: Samir Talwar <samir.talwar@hasura.io>
V3_GIT_ORIGIN_REV_ID: aa0e6ccb8d72a7393e14b5c58b82077a67d9cb15
## Description
This moves all the crates into a `/crates` folder. Everything appears to
just work, thanks Cargo!
V3_GIT_ORIGIN_REV_ID: 8e3ef287b1a46cabdb4d919a50e813ab2cddf8b1