graphql-engine/server/benchmarks/benchmark_sets/deep_schema/config.query.yaml
Antoine Leblanc 1537bbd442 [ci] add benchmark for deep schema
### Description

This PR adds a new benchmark set named `deep_schema`, designed to replicate one very specific edge case: schemas that have deeply nested remote relationships. Our schema-building code is, in essence, "depth-first", and there are a lot of subtleties in the way we jump across remote relationship boundaries: this set will allow us to better understand the performance implications of the technical decisions we make wrt. schema building.
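For context, "deeply nested remote relationships" means selection sets that repeatedly hop from one source into another as they nest. The sketch below is purely illustrative (the table and relationship names are invented, not the actual `deep_schema` metadata); it uses the `query: |` YAML form that graphql-bench query entries use:

```yaml
# Hypothetical query shape; names are illustrative only. Each nested field
# crosses a remote relationship into another source, which is what the
# depth-first schema-building code has to traverse.
query: |
  query DeeplyNested {
    table_a {          # source 1
      remote_b {       # remote relationship into source 2
        remote_c {     # remote relationship into source 3
          remote_d {   # remote relationship into source 4
            id
          }
        }
      }
    }
  }
```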

Unlike other sets, this one does not declare any queries: for now, we are only interested in schema building, which is measured with an ad-hoc script.

### Remaining work

There are several points worth discussing wrt. this PR:
- should we make the schema larger, to make measurements more consistent?
- should we extend this idea of measuring schema build performance to other sets?
- how do we extend the report to include this new information?

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5517
GitOrigin-RevId: 9d8f4fddb9bbdca5ef85f3d22337b992acf13bce
2022-08-24 15:55:29 +00:00


# This tells graphql-bench that it's testing a hasura instance and should
# collect some additional metrics:
extended_hasura_checks: true
headers:
  X-Hasura-Admin-Secret: my-secret

# Anchors to help us DRY below; settings here may be overridden selectively
constants:
  scalars:
    # We'll measure at just two consistent load levels, which makes comparing
    # benchmarks within the same run useful.
    #
    # NOTE: a load of 500 may cause hasura to fall over on a laptop. On our
    # beefy CI benchmark runner we cannot sustain 1,000 RPS for the
    # "large_result" queries.
    - &low_load 20
    - &high_load 500

  k6_custom: &k6_custom
    tools: [k6]
    execution_strategy: CUSTOM
    settings: &settings
      # This is equivalent to wrk2's approach:
      executor: 'constant-arrival-rate'
      timeUnit: '1s'
      maxVUs: 500 # NOTE: required, else defaults to `preAllocatedVUs`
      # NOTE: ideally we'd test all of the queries with the same *number of requests*
      # but that would mean running the "low_load" queries for much longer than
      # is acceptable.
      duration: '60s'

queries: []
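Since `queries` is left empty, this config drives no load by itself; only schema building is exercised. For reference, here is a sketch of how a query entry would consume the anchors above if one were added later; the entry name and query body are hypothetical, and the layout only roughly follows the other benchmark sets:

```yaml
# Hypothetical query entry, for illustration only; not part of this set.
queries:
  - name: deep_nesting_low_load
    <<: *k6_custom        # pull in tools/execution_strategy/settings
    options:
      k6:
        rate: *low_load   # requests per second, given the '1s' timeUnit
    query: |
      query DeepNesting {
        table_a { remote_b { remote_c { id } } }
      }
```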