graphql-engine/server/benchmarks/benchmark_sets/chinook_throughput
Brandon Simmons — commit: benchmarks: resource_calibration.sh and CI throughput benchmarks
See this earlier iteration of this work for an example of the kind of report we're producing: #7664

And related work in this repo: github.com:hasura/graphql-bench-helper

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7923
GitOrigin-RevId: 99d2a55e2fb5b55f3f33e2570cfd0bc23e448e0c
2023-04-18 16:44:54 +00:00
Files in this benchmark set:
- config.query.yaml
- dump.sql.gz
- README.md
- replace_metadata.json

This is a copy of our Chinook latency benchmarks, but for measuring maximum sustained throughput rather than latency at different load levels.

Benchmark sets like this one, which have "throughput" in the name, will be displayed differently in our regression report. We keep the same query names as in the regular Chinook benchmarks, so the two sets can be easily compared using e.g. https://hasura.github.io/graphql-bench/app/web-app/#mono-pr-1234/chinook,mono-pr-1234/chinook_throughput
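The comparison URL above follows a simple pattern: the fragment is a comma-separated list of `<pr-ref>/<benchmark-set>` entries. A minimal sketch of building such a URL (the helper name and function are hypothetical, not part of this repo; only the base URL and fragment format come from the example above):

```python
# Hypothetical helper for building a graphql-bench comparison URL.
# The fragment lists "<pr_ref>/<benchmark_set>" entries separated by commas,
# matching the example URL in this README.
BASE = "https://hasura.github.io/graphql-bench/app/web-app/"


def comparison_url(pr_ref: str, *benchmark_sets: str) -> str:
    """Build a URL comparing several benchmark sets from the same PR run."""
    fragment = ",".join(f"{pr_ref}/{s}" for s in benchmark_sets)
    return BASE + "#" + fragment


print(comparison_url("mono-pr-1234", "chinook", "chinook_throughput"))
# → https://hasura.github.io/graphql-bench/app/web-app/#mono-pr-1234/chinook,mono-pr-1234/chinook_throughput
```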

This is also used by resource_calibration.sh to obtain a peak throughput number for the purposes of understanding resource provisioning.