This is a copy of our Chinook latency benchmarks, but adapted to measure maximum sustained throughput rather than latency at different load levels.

Benchmark sets like this one, which have "throughput" in the name, are displayed differently in our regression report. We keep the same query names as in the regular Chinook benchmarks so the two sets can easily be compared, e.g. via https://hasura.github.io/graphql-bench/app/web-app/#mono-pr-1234/chinook,mono-pr-1234/chinook_throughput
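For illustration, an entry in `config.query.yaml` for a throughput run might look roughly like the sketch below. This is a minimal sketch assuming the graphql-bench config format used by the other benchmark sets; the URL, query name, GraphQL query body, and limits shown are hypothetical and may not match the actual entries in this directory.

```yaml
# Illustrative sketch only: field values here are hypothetical and may differ
# from the real config.query.yaml in this benchmark set.
url: 'http://127.0.0.1:8080/v1/graphql'
headers:
  X-Hasura-Admin-Secret: my-secret
queries:
  # Same query name as in the regular "chinook" set, so results can be
  # compared side by side in the regression report.
  - name: albums_tracks_genre_some
    tools: [k6]
    # Drive as many requests as possible for a fixed duration to find the
    # maximum sustained throughput, rather than holding a fixed request rate
    # as the latency benchmarks do.
    execution_strategy: MAX_REQUESTS_IN_DURATION
    duration: 60s
    connections: 50
    query: |
      query albums_tracks_genre_some {
        albums(where: {artist_id: {_lte: 50}}) {
          title
          tracks {
            name
            genre {
              name
            }
          }
        }
      }
```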

This set is also used by resource_calibration.sh to obtain a peak throughput number, for the purposes of understanding resource provisioning.