In f7400814792 we moved to benchmarking the Enterprise Edition pro container, but because we didn't successfully pipe the proper environment variables through to the Docker container, we were benchmarking the pro server in OSS mode. We would rather make visible the performance implications of as much functionality as possible, so this makes another attempt to run in EE mode.
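The commit message does not show the fix itself; as a rough illustration only, here is a minimal Python sketch of the kind of explicit env-var forwarding that was missing, assuming hypothetical variable names and an illustrative image tag rather than the exact ones used in CI:

```python
# Sketch: forward EE-related settings into the pro container explicitly via `-e`,
# instead of relying on the host environment. Variable names and the image tag
# below are illustrative assumptions, not the exact values used by the benchmarks.
import os
import subprocess

EE_ENV_VARS = [
    "HASURA_GRAPHQL_EE_LICENSE_KEY",   # assumed name of the licence variable
    "HASURA_GRAPHQL_ADMIN_SECRET",     # assumed admin-secret variable
]

def run_pro_container(image: str) -> None:
    cmd = ["docker", "run", "--rm", "-p", "8080:8080"]
    for var in EE_ENV_VARS:
        value = os.environ.get(var)
        if value is None:
            # Without these, the pro image silently falls back to OSS behaviour.
            raise RuntimeError(f"{var} is not set; container would start in OSS mode")
        cmd += ["-e", f"{var}={value}"]
    cmd.append(image)
    subprocess.run(cmd, check=True)

run_pro_container("hasura/graphql-engine-pro:latest")  # illustrative image tag
```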
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/10206
GitOrigin-RevId: 83f823515d82503d1d7f0bf2b19637fad537175b
There are some incremental Metadata API methods that have no good justification for taking so much time to complete. This adds some of them to the CI benchmark suite, so that we can track their performance.
I have a prototype to speed up some of these methods 10x; see hasura/graphql-engine-mono#6613.
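For context, a benchmark of this kind boils down to timing a single Metadata API request. The sketch below uses the standard `/v1/metadata` endpoint and request shape, but the specific method and arguments (`pg_track_table` on a hypothetical `authors` table) are only illustrative; the commit does not list which methods were added to the CI suite.

```python
# Time one incremental Metadata API call against a locally running graphql-engine.
import time
import requests

HASURA_URL = "http://localhost:8080"           # assumed local instance
HEADERS = {"x-hasura-admin-secret": "secret"}  # assumed admin secret

payload = {
    "type": "pg_track_table",
    "args": {"source": "default", "table": {"schema": "public", "name": "authors"}},
}

start = time.perf_counter()
resp = requests.post(f"{HASURA_URL}/v1/metadata", json=payload, headers=HEADERS)
elapsed = time.perf_counter() - start

resp.raise_for_status()
print(f"pg_track_table completed in {elapsed:.3f}s")
```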
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6627
GitOrigin-RevId: fecc7f28cae734b4acad68a63cbcdf0a2693d567
### Description
This PR adds a new benchmark set named `deep_schema`, designed to replicate one very specific edge case: schemas that have deeply nested remote relationships. Our schema-building code is, in essence, "depth-first", and there are a lot of subtleties in the way we jump across remote relationship boundaries: this set will allow us to better understand the performance implications of the technical decisions we make wrt. schema building.
This set, unlike the others, does not declare any queries: for now, we are only interested in schema building, which is tested with an ad-hoc script.
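The ad-hoc script is not part of this description; as a hedged sketch of the general idea, one way to measure schema building is to force a rebuild via `reload_metadata` and time how long it takes before an introspection query succeeds again. The endpoints are the standard ones, but everything else here is an assumption, not the actual script.

```python
# Sketch: force a schema rebuild and time it end to end.
import time
import requests

HASURA_URL = "http://localhost:8080"           # assumed local instance
HEADERS = {"x-hasura-admin-secret": "secret"}  # assumed admin secret

def rebuild_and_time_schema() -> float:
    start = time.perf_counter()
    # reload_metadata forces graphql-engine to rebuild its GraphQL schema.
    requests.post(
        f"{HASURA_URL}/v1/metadata",
        json={"type": "reload_metadata", "args": {}},
        headers=HEADERS,
    ).raise_for_status()
    # A trivial introspection query confirms the rebuilt schema is being served.
    requests.post(
        f"{HASURA_URL}/v1/graphql",
        json={"query": "{ __schema { queryType { name } } }"},
        headers=HEADERS,
    ).raise_for_status()
    return time.perf_counter() - start

print(f"schema rebuilt in {rebuild_and_time_schema():.3f}s")
```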
## Remaining work
There are several points worth discussing wrt. this PR:
- should we make the schema larger, to make measurements more consistent?
- should we extend this idea of measuring schema build performance to other sets?
- how do we extend the report to include this new information?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5517
GitOrigin-RevId: 9d8f4fddb9bbdca5ef85f3d22337b992acf13bce