graphql-engine/v3
Rakesh Emmadi db19cf965a subscriptions: Unhide OpenDd Metadata and lift from unstability (#1225)

### What

- Remove `enable-subscriptions` from the unstable features.
- Expose the subscription-related OpenDD metadata in the JSON schema.

### How

- Update the unstable-feature-related types by dropping subscriptions.
- Remove the `hidden = true` OpenDD attribute (and other related attributes) from the subscription OpenDD metadata.
- Update the JSON schema files.

V3_GIT_ORIGIN_REV_ID: 0aa763f516d394aab2e375da0817d0e60228c9b2
2024-10-18 09:21:13 +00:00

# Hasura GraphQL Engine V3


Hasura V3 is the API execution engine, built on the Open Data Domain Specification (OpenDD spec) and the Native Data Connector specifications (NDC spec), that powers the Hasura Data Delivery Network (DDN). The v3-engine runs against an OpenDD metadata file and exposes a GraphQL endpoint according to that metadata. The v3-engine needs a data connector running alongside it to execute data-source-specific queries.

## Data connectors

Hasura v3-engine does not execute queries directly. Instead, it sends an IR (an abstracted, intermediate query representation) to NDC agents (also known as data connectors). To run queries against a database, you need to run a data connector that supports that database.
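To make this concrete, here is a rough sketch of the kind of JSON query request an NDC agent receives; the collection and column names are hypothetical, and the NDC spec is the authoritative reference for the exact wire format:

```json
{
  "collection": "articles",
  "arguments": {},
  "query": {
    "fields": {
      "id": { "type": "column", "column": "id" },
      "title": { "type": "column", "column": "title" }
    },
    "limit": 10
  },
  "collection_relationships": {}
}
```

The engine compiles the incoming GraphQL request down to a request of this shape, and the connector translates it into the native query language of the underlying data source.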

Available data connectors are listed at the Connector Hub

For local development, we use the reference agent implementation that is a part of the NDC spec.

To start only the reference agent, run:

```sh
docker compose up reference_agent
```

## Run v3-engine (with Postgres)

### Building with Docker

You can also start v3-engine, along with a Postgres data connector and Jaeger for tracing, using Docker:

```sh
docker compose up
```

Open http://localhost:3000 for GraphiQL, or http://localhost:4002 to view traces in Jaeger.

Note: you'll need to add `{"x-hasura-role": "admin"}` to the Headers section in GraphiQL to run queries.
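With that header set, you can run queries against the models defined in your metadata. For example, assuming your metadata defines a hypothetical `artists` model, a query from GraphiQL might look like:

```graphql
query {
  artists {
    id
    name
  }
}
```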

NDC Postgres is the official Hasura connector for Postgres databases. To run the v3-engine as a GraphQL API over Postgres, you need to run the NDC Postgres connector and have a metadata.json file authored specifically for your Postgres database and models (tables, views, functions).

The recommended way to author metadata.json for Postgres is via Hasura DDN.

Follow the Hasura DDN guide to create a Hasura DDN project, connect your cloud or local Postgres database (Hasura DDN provides a secure tunnel mechanism to connect your local database easily), and model your GraphQL API. You can then download the authored metadata.json and use the following steps to run the GraphQL API on your local Hasura V3 engine.

## Running tests

To run the test suite, you need to `docker login` to ghcr.io first:

```sh
docker login -u <username> -p <token> ghcr.io
```

where `<username>` is your GitHub username and `<token>` is a GitHub personal access token (PAT). The PAT needs the `read:packages` scope and Hasura SSO configured. See this for more details.

Running `just watch` will start the Docker dependencies, build the engine, and run all the tests.

Alternatively, run the tests once with `just test`.

### Updating golden files

Some tests compare their output against an expected golden file. If you make changes that intentionally alter a golden file, you can regenerate the golden files with:

```sh
just update-golden-files
```

Some other tests use insta; review their snapshots with `cargo insta review`. If the `cargo insta` command cannot be found, install it with `cargo install cargo-insta`.

## Run benchmarks

The benchmarks run against the reference agent using the same test cases as the test suite, and need a similar setup.

To run benchmarks for the lexer, parser, and validation:

```sh
cargo bench -p lang-graphql "lexer"
cargo bench -p lang-graphql "parser"
cargo bench -p lang-graphql "validation/.*"
```