graphql-engine/v3
dependabot[bot] 2b59652a9c Bump bson from 2.11.0 to 2.12.0 (#1109)
Bumps [bson](https://github.com/mongodb/bson-rust) from 2.11.0 to
2.12.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/mongodb/bson-rust/releases">bson's
releases</a>.</em></p>
<blockquote>
<h2>v2.12.0</h2>
<p>The MongoDB Rust driver team is pleased to announce the v2.12.0
release of the <code>bson</code> crate.</p>
<h2>Highlighted Changes</h2>
<p>This release was largely driven by external contributions!</p>
<ul>
<li>An optional implementation of <code>Hash</code> and <code>Eq</code>
for the <code>Bson</code> family of types</li>
<li><code>ObjectId::from_parts</code>, allowing direct construction of
an <code>ObjectId</code> from its component values</li>
<li>Helpers for serializing
<code>Option&lt;chrono::DateTime&lt;_&gt;&gt;</code> as
<code>Option&lt;bson::DateTime&gt;</code></li>
<li>A fix for a panic when parsing specific malformed input data into a
<code>Decimal128</code></li>
</ul>
<p>We've also added optional (off by default) integration with the
<code>serde_path_to_error</code> crate, which
provides paths to the precise point of failure for deserialization of
nested data structures.</p>
<h2>Full Release Notes</h2>
<h2>New Features</h2>
<ul>
<li>RUST-2027 Impl Hash/Eq for BSON (thanks <a
href="https://github.com/NineLord"><code>@NineLord</code></a>!)</li>
<li>RUST-2017 Allow constructing an ObjectId from its parts (thanks <a
href="https://github.com/tyilo"><code>@​tyilo</code></a>!)</li>
<li>RUST-1987 Support serializing
<code>Option&lt;chrono::DateTime&lt;_&gt;&gt;</code> as
<code>Option&lt;bson::DateTime&gt;</code> (thanks <a
href="https://github.com/lazureykis"><code>@​lazureykis</code></a>!)</li>
<li>RUST-1874 Add optional integration with serde_path_to_error</li>
</ul>
<h2>Improvements</h2>
<ul>
<li>RUST-1773 Merge duplicate extjson map parsing between
OwnedOrBorrowedRawBsonVisitor and SeededVisitor</li>
</ul>
<h2>Bugfixes</h2>
<ul>
<li>RUST-2028 Fix Decimal128 panic when parsing strings w/o a char
boundary at idx 34 (thanks <a
href="https://github.com/arthurprs"><code>@​arthurprs</code></a>!)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8e0fb3b4ea"><code>8e0fb3b</code></a>
release v2.12.0 (<a
href="https://redirect.github.com/mongodb/bson-rust/issues/498">#498</a>)</li>
<li><a
href="692cd752e9"><code>692cd75</code></a>
RUST-2028 Fix Decimal128 panic when parsing strings w/o a char boundary
at id...</li>
<li><a
href="28e39259c1"><code>28e3925</code></a>
RUST-2027 Impl Hash/Eq for BSON (<a
href="https://redirect.github.com/mongodb/bson-rust/issues/495">#495</a>)</li>
<li><a
href="20c56f03e6"><code>20c56f0</code></a>
RUST-2017 Add method to construct an <code>ObjectId</code> from its
parts (<a
href="https://redirect.github.com/mongodb/bson-rust/issues/492">#492</a>)</li>
<li><a
href="a72431e486"><code>a72431e</code></a>
RUST-1874 Add optional integration with <code>serde_path_to_error</code>
(<a
href="https://redirect.github.com/mongodb/bson-rust/issues/488">#488</a>)</li>
<li><a
href="d0f5d233fd"><code>d0f5d23</code></a>
minor: update bson to clippy 1.80.0 (<a
href="https://redirect.github.com/mongodb/bson-rust/issues/487">#487</a>)</li>
<li><a
href="2e8fb00cf8"><code>2e8fb00</code></a>
RUST-1992 Factor raw bson encoding out of RawDocumentBuf (<a
href="https://redirect.github.com/mongodb/bson-rust/issues/486">#486</a>)</li>
<li><a
href="1c6e65a27a"><code>1c6e65a</code></a>
RUST-1992 Minor parsing cleanup (<a
href="https://redirect.github.com/mongodb/bson-rust/issues/485">#485</a>)</li>
<li><a
href="39d90f6c44"><code>39d90f6</code></a>
RUST-1992 Convert raw deserializer to use raw document iteration (<a
href="https://redirect.github.com/mongodb/bson-rust/issues/483">#483</a>)</li>
<li><a
href="b5541429b1"><code>b554142</code></a>
RUST-1987 Add serde helper module for
<code>Option&lt;DateTime&gt;</code> (<a
href="https://redirect.github.com/mongodb/bson-rust/issues/482">#482</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/mongodb/bson-rust/compare/v2.11.0...v2.12.0">compare
view</a></li>
</ul>
</details>
<br />

[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=bson&package-manager=cargo&previous-version=2.11.0&new-version=2.12.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)

</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
V3_GIT_ORIGIN_REV_ID: df79694bffc030604f77a4bc7be3c478595d8ee2
2024-09-16 04:50:13 +00:00
.cargo v3: open-source hasura v3 engine 2023-12-19 09:05:39 +00:00
crates jsonapi: parse sort and pagination params (#1107) 2024-09-12 15:28:47 +00:00
docs frontends/graphql crate (#1049) 2024-09-10 15:44:43 +00:00
nix Make sleep available in Docker images (#1026) 2024-08-28 12:14:08 +00:00
rfcs RFC | OpenDD changes for field arguments (#682) 2024-06-10 10:29:26 +00:00
static Generate GraphQL schema for subscriptions (#1051) 2024-09-05 10:14:28 +00:00
.dockerignore Include the dev-auth-webhook crate in the workspace. (#500) 2024-04-24 08:12:37 +00:00
.envrc Add instructions for using the Nix Flake (#547) 2024-05-10 12:27:48 +00:00
.envrc.local.example Add instructions for using the Nix Flake (#547) 2024-05-10 12:27:48 +00:00
.gitignore Set the crate properties once, to improve the Nix build. (#334) 2024-03-06 17:15:13 +00:00
.prettierignore Use "dev" as the version in development for Nix builds. (#665) 2024-06-05 08:20:49 +00:00
.prettierrc Format everything with Prettier. (#530) 2024-04-30 14:58:57 +00:00
Cargo.lock Bump bson from 2.11.0 to 2.12.0 (#1109) 2024-09-16 04:50:13 +00:00
Cargo.toml frontends/graphql crate (#1049) 2024-09-10 15:44:43 +00:00
changelog.md Disallow aggregation of fields that have arguments (#1096) 2024-09-12 13:30:37 +00:00
ci.docker-compose.yaml Remove old Docker based building stuff (#876) 2024-07-25 16:16:49 +00:00
CONTRIBUTING.md Add UNSTABLE_FEATURES env var (#652) 2024-06-03 08:50:43 +00:00
custom-connector.Dockerfile Update to Rust v1.80.1. (#1002) 2024-08-22 08:56:32 +00:00
dev-auth-webhook.Dockerfile Update to Rust v1.80.1. (#1002) 2024-08-22 08:56:32 +00:00
docker-compose.yaml remove reference agent from docker-compose (#1098) 2024-09-11 09:37:53 +00:00
Dockerfile Don't include .git folder and instead provide RELEASE_VERSION env var in Dockerfile (#1094) 2024-09-11 07:48:25 +00:00
flake.lock Update to Rust v1.80.1. (#1002) 2024-08-22 08:56:32 +00:00
flake.nix Build Cloud services with Nix. (#1019) 2024-08-27 09:25:16 +00:00
justfile Basic test framework for jsonapi (#1091) 2024-09-10 15:14:33 +00:00
README.md add AuthConfig v2 example (#937) 2024-08-07 06:51:17 +00:00
rust-toolchain.toml Update to Rust v1.80.1. (#1002) 2024-08-22 08:56:32 +00:00

Hasura GraphQL Engine V3

Docs

Hasura V3 is the API execution engine, built on the Open Data Domain Specification (OpenDD spec) and Native Data Connector Specifications (NDC spec), which powers the Hasura Data Delivery Network (DDN). The v3-engine runs against an OpenDDS metadata file and exposes a GraphQL endpoint according to the specified metadata. The v3-engine needs a data connector running alongside it to execute data-source-specific queries.

Data connectors

Hasura v3-engine does not execute queries directly. Instead, it sends an IR (an abstract, intermediate query) to NDC agents (aka data connectors). To run queries against a database, we need to run a data connector that supports that database.
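
As an illustrative sketch (not the engine's exact internal representation), a query request in the NDC wire format looks roughly like this; the collection and column names below are hypothetical, and the exact shape is defined by the NDC spec version the connector implements:

{
  "collection": "articles",
  "arguments": {},
  "collection_relationships": {},
  "query": {
    "fields": {
      "id": { "type": "column", "column": "id" },
      "title": { "type": "column", "column": "title" }
    }
  }
}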

Available data connectors are listed at the Connector Hub

For local development, we use the reference agent implementation that is a part of the NDC spec.

To start the reference agent only, you can do:

docker compose up reference_agent

Run v3-engine (with reference agent)

Building locally using cargo

Hasura v3-engine is written in Rust, hence cargo is required to build and run the v3-engine locally.

To start the v3-engine locally, we need a metadata.json file and an auth config file.

The following steps run v3-engine with the reference agent (a read-only, in-memory relational database with sample tables) and a sample metadata file, exposing a fixed GraphQL schema. This can be used to understand the build setup and the new V3 concepts.

RUST_LOG=DEBUG cargo run --release --bin engine -- \
  --metadata-path crates/open-dds/examples/reference.json \
  --authn-config-path static/auth/auth_config.json

A dev webhook implementation is provided in crates/auth/dev-auth-webhook. It exposes a POST /validate-request endpoint that converts the headers present in the incoming request into an object containing session variables. Note that only headers that start with x-hasura- will be returned in the response.

The dev webhook can be run using the following command:

docker compose up auth_hook

and point the host name auth_hook to localhost in your /etc/hosts file.
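
For example, the /etc/hosts entry can look like this:

127.0.0.1 auth_hook

Once the webhook is running, you can sanity-check it directly with curl. The sketch below is illustrative only: the port used here is an assumption, so check the auth_hook service in docker-compose.yaml for the actual port mapping.

curl -X POST http://auth_hook:3050/validate-request \
  -H 'x-hasura-role: admin' \
  -H 'x-other-header: ignored'

Per the description above, the response should be a JSON object containing only the x-hasura-* headers, e.g. {"x-hasura-role": "admin"}.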

Open http://localhost:3000 for GraphiQL.

Use the --port option to start v3-engine on a different port.

RUST_LOG=DEBUG cargo run --release --bin engine -- \
  --port 8000 --metadata-path crates/open-dds/examples/reference.json

Now, open http://localhost:8000 for GraphiQL.

Run v3-engine (with Postgres)

Building with Docker

You can also start v3-engine, along with a Postgres data connector and Jaeger for tracing, using Docker:

METADATA_PATH=crates/engine/tests/schema.json AUTHN_CONFIG_PATH=static/auth/auth_config.json docker compose up

Open http://localhost:3001 for GraphiQL, or http://localhost:4002 to view traces in Jaeger.

Note: you'll need to add {"x-hasura-role": "admin"} to the Headers section to run queries from GraphiQL.
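
For example, the equivalent request with curl looks like the following sketch (this assumes the engine serves GraphQL at the /graphql path; the query is the standard GraphQL introspection query):

curl http://localhost:3001/graphql \
  -H 'content-type: application/json' \
  -H 'x-hasura-role: admin' \
  -d '{"query": "{ __schema { queryType { name } } }"}'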

NDC Postgres is the official Hasura connector for Postgres databases. To run the V3 engine with a GraphQL API on Postgres, you need to run the NDC Postgres connector and have a metadata.json file that is authored specifically for your Postgres database and models (tables, views, functions).

The recommended way to author metadata.json for Postgres is via Hasura DDN.

Follow the Hasura DDN Guide to create a Hasura DDN project, connect your cloud or local Postgres database (Hasura DDN provides a secure tunnel mechanism to connect your local database easily), and model your GraphQL API. You can then download the authored metadata.json and use the following steps to run the GraphQL API on your local Hasura V3 engine.

Steps to run metadata with V3 engine locally

  1. Download metadata from the DDN project, using the Hasura V3 CLI:

    hasura3 build create --dry-run > ddn-metadata.json
    
  2. The following steps generate the Postgres metadata object and run the Postgres connector. These steps refer to the NDC Postgres repository:

    1. Start the Postgres connector in configuration mode (config server). A config server provides additional endpoints for database introspection and provides the schema of the database. The output of the config server will form the Postgres metadata object.

    2. Run the following command in the ndc-postgres repository:

      just run-config
      
    3. Generate the Postgres configuration using the new-configuration.sh script by running the following command (in another terminal) in the ndc-postgres repository:

      ./scripts/new-configuration.sh localhost:9100 '<postgres database url>' > pg-config.json
      
    4. Now shut down the Postgres config server and start the Postgres connector using the pg-config.json generated in the step above, by running the following command:

      Specify a different PORT for each data connector:

      PORT=8100 \
      RUST_LOG=INFO \
          cargo run --bin ndc-postgres --release -- serve --configuration pg-config.json > /tmp/ndc-postgres.log
      
    5. Fetch the schema for the data connector object by running the following command:

      curl -X GET http://localhost:8100/schema | jq . > pg-schema.json
      
    6. Finally, generate the DataConnector object (an example of the resulting pg-metadata.json is shown after these steps):

      jq --null-input --arg name 'default' --arg port '8100' --slurpfile schema pg-schema.json '{"kind":"DataConnector","version":"v2","definition":{"name":"\($name)","url":{"singleUrl":{"value":"http://localhost:\($port)"}},"schema":$schema[0]}}' > pg-metadata.json
      
  3. Now you have the NDC Postgres connector running and have obtained the Postgres metadata (pg-metadata.json), which is required for the V3 engine.

  4. In ddn-metadata.json (from step 1), replace the HasuraHubDataConnector objects with the DataConnector objects generated in the pg-metadata.json file.

  5. Remove the object for kind: AuthConfig from ddn-metadata.json, move it to a separate file auth_config.json, and remove the kind field from it.

  6. Remove the object for kind: CompatibilityConfig from ddn-metadata.json. If desired, a flags field can be added to the OSS metadata to enable the flags corresponding to that compatibility date in the DDN metadata.

  7. Finally, start the v3-engine with the modified metadata, using the following command (with the modified ddn-metadata.json and auth_config.json from step 5):

    RUST_LOG=DEBUG cargo run --release --bin engine -- \
      --metadata-path ddn-metadata.json \
      --authn-config-path auth_config.json
    

    You should now have the v3-engine up and running at http://localhost:3000.
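
For reference, the pg-metadata.json produced by the jq invocation in step 2.6 has the following shape, with the name and port taken from the command's arguments and $schema[0] replaced by the contents of pg-schema.json:

{
  "kind": "DataConnector",
  "version": "v2",
  "definition": {
    "name": "default",
    "url": { "singleUrl": { "value": "http://localhost:8100" } },
    "schema": <contents of pg-schema.json>
  }
}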

Note: We understand that these steps are not very straightforward, and we intend to continuously improve the developer experience of running OSS V3 Engine.

Running tests

To run the test suite, you need to docker login to ghcr.io first:

docker login -u <username> -p <token> ghcr.io

where <username> is your GitHub username and <token> is your GitHub PAT. The PAT needs to have the read:packages scope and Hasura SSO configured. See this for more details.

Next, run the Postgres NDC locally using docker compose up postgres_connector and point the host name postgres_connector to localhost in your /etc/hosts file.

Next, run the custom NDC locally using docker compose up custom_connector and point the host name custom_connector to localhost in your /etc/hosts file (or run cargo run --bin agent instead). Then run cargo test.
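
For example, the /etc/hosts entries for the two connectors can look like this:

127.0.0.1 postgres_connector
127.0.0.1 custom_connector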

Testing/Development with the chinook database

The crates/engine/tests/chinook directory contains the static files required to run v3-engine with the chinook database as a data connector.

To get this running, you can run the following command:

METADATA_PATH=crates/engine/tests/schema.json AUTHN_CONFIG_PATH=static/auth/auth_config.json docker compose up postgres_connector engine

Running tests with a single command

Alternatively, the tests can be run in the same Docker image as CI:

just test

Updating goldenfiles

There are some tests where we compare the output of the test against an expected golden file. If you make changes that are expected to change the golden files, you can regenerate them like this:

Locally:

  UPDATE_GOLDENFILES=1 cargo test

Docker:

  just update-golden-files

Run benchmarks

The benchmarks operate against the reference agent using the same test cases as the test suite, and need a similar setup.

To run benchmarks for the lexer, parser and validation:

cargo bench -p lang-graphql "lexer"
cargo bench -p lang-graphql "parser"
cargo bench -p lang-graphql "validation/.*"