# graphql-engine/v3/justfile

set positional-arguments

default:
  just --list

build:
  cargo build --release --all-targets

format:
  cargo fmt --check
  prettier --check .
  ! command -v nix || nix fmt -- --check .

alias fmt := format

fix:
  cargo clippy --all-targets --no-deps --fix --allow-no-vcs
  cargo fmt
  just fix-format
  ! command -v nix || nix fmt

fix-format:
  prettier --write .

run-local-with-shell:
  #!/usr/bin/env bash
  cargo run --bin custom-connector | ts "custom-connector:" &
  OTLP_ENDPOINT=http://localhost:4317 \
    cargo run --bin dev-auth-webhook | ts "dev-auth-webhook:" &
  RUST_LOG=DEBUG cargo run --bin engine -- \
    --otlp-endpoint http://localhost:4317 \
    --authn-config-path static/auth/auth_config.json \
    --metadata-path crates/engine/tests/schema.json \
    --expose-internal-errors | ts "engine: " &
  wait

# start all the docker deps for running tests (not engine)
start-docker-test-deps:
  # start connectors and wait for health
  docker compose -f ci.docker-compose.yaml up --wait postgres postgres_connector custom_connector custom_connector_ndc_v01

# start all the docker run-time deps for the engine
start-docker-run-deps:
  # start auth_hook and jaeger
  docker compose up --wait auth_hook jaeger

# pull / build all docker deps
docker-refresh:
  docker compose -f ci.docker-compose.yaml pull postgres_connector
  docker compose -f ci.docker-compose.yaml build custom_connector

# stop all the docker deps
stop-docker:
  docker compose -f ci.docker-compose.yaml down -v
  docker compose down -v

# run the tests using local engine (once)
test *ARGS: start-docker-test-deps
  #!/usr/bin/env bash
  if command -v cargo-nextest; then
    COMMAND=(cargo nextest run)
  else
    COMMAND=(cargo test)
  fi
  COMMAND+=(--no-fail-fast "$@")
  echo "${COMMAND[*]}"
  "${COMMAND[@]}"

# run a watch process that runs the tests locally
watch: start-docker-test-deps start-docker-run-deps
  RUST_LOG=DEBUG \
    cargo watch -i "**/*.snap.new" \
    -x test \
    -x 'clippy --no-deps' \
    -x 'run --bin engine -- \
    --otlp-endpoint http://localhost:4317 \
    --authn-config-path static/auth/auth_config.json \
    --metadata-path crates/engine/tests/schema.json \
    --expose-internal-errors'

# check the code is fine
lint:
  cargo clippy --all-targets --no-deps
  ! command -v nix || nix flake check

# ensure we don't have unused dependencies
machete:
  cargo machete --with-metadata

# update golden tests
update-golden-files: start-docker-test-deps
  UPDATE_GOLDENFILES=1 cargo test
  just fix-format

update-custom-connector-schema-in-test-metadata: && fix-format
  #!/usr/bin/env bash
  set -e
  docker compose -f ci.docker-compose.yaml up --build --wait custom_connector
  new_capabilities=$(curl http://localhost:8102/capabilities | jq)
  new_schema=$(curl http://localhost:8102/schema | jq)
  ndc_version="v0.2"

  # Should only be tests that actually talk to the running connector and therefore must be up to date
  test_directories=(./crates/engine/tests/execute)

  find "${test_directories[@]}" -name '*.json' -print0 |
  while IFS= read -r -d '' file; do
    # Check if the file actually contains a custom connector DataConnectorLink
    if jq -e '
      (. | type == "object") and has("subgraphs") and (.subgraphs | length > 0) and (.subgraphs[] | has("objects") and (.objects | length > 0))
      and any(.subgraphs[].objects[]; .kind == "DataConnectorLink" and .definition.url.singleUrl.value == "http://localhost:8102")' "$file" >/dev/null; then
      # Update its schema, capabilities and version
      jq --argjson newCapabilities "$new_capabilities" --argjson newSchema "$new_schema" --arg ndcVersion "$ndc_version" '
        (.subgraphs[].objects[] | select(.kind == "DataConnectorLink" and .definition.url.singleUrl.value == "http://localhost:8102").definition.schema)
        |= (.capabilities = $newCapabilities | .schema = $newSchema | .version = $ndcVersion)
      ' "$file" \
        | sponge "$file"
      echo "Updated $file"
    else
      echo "Skipping $file: Does not appear to be a metadata file with a custom connector"
    fi
  done

  docker compose -f ci.docker-compose.yaml down

# run the engine using schema from tests
run: start-docker-test-deps start-docker-run-deps
  RUST_LOG=DEBUG cargo run --bin engine -- \
    --otlp-endpoint http://localhost:4317 \
    --authn-config-path static/auth/auth_config.json \
    --metadata-path crates/engine/tests/schema.json \
    --expose-internal-errors

# check the docker build works
build-docker-with-nix binary="engine":
  #!/usr/bin/env bash
  echo "$(tput bold)nix build .#{{ binary }}-docker | gunzip | docker load$(tput sgr0)"
  gunzip < "$(nix build --no-warn-dirty --no-link --print-out-paths '.#{{ binary }}-docker')" | docker load

# check the arm64 docker build works
build-aarch64-docker-with-nix binary="engine":
  #!/usr/bin/env bash
  echo "$(tput bold)nix build .#{{ binary }}-docker-aarch64-linux | gunzip | docker load$(tput sgr0)"
  gunzip < "$(nix build --no-warn-dirty --no-link --print-out-paths --system aarch64-linux '.#{{ binary }}-docker-aarch64-linux')" | docker load