This data set is meant to test nested remote relationships: it contains a large number of remote relationships, chosen so that a "depth-first" traversal of the schema crosses from one source to the other 50 times. Building the schema across remote relationships isn't easy; this set came to be because an approach we considered for using different contexts for different sources actually resulted in a performance degradation with deep schemas such as this artificially constructed one.
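For illustration, a single source-to-source remote relationship in `replace_metadata.json` looks roughly like the sketch below; the source, table, and column names here are hypothetical, and the actual data set chains many such relationships back and forth between the two sources:

```json
{
  "table": { "schema": "public", "name": "articles" },
  "remote_relationships": [
    {
      "name": "author",
      "definition": {
        "to_source": {
          "relationship_type": "object",
          "source": "other_source",
          "table": { "schema": "public", "name": "authors" },
          "field_mapping": { "author_id": "id" }
        }
      }
    }
  ]
}
```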
At the time of writing, we have no plan to use this set to benchmark queries: our main concern is the schema building time, as measured by a call to `replace_metadata`.
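That measurement amounts to timing a metadata API request whose body has the shape sketched below (posted to the server's metadata endpoint); the skeleton here is only indicative, with an empty `sources` list standing in for the much larger metadata stored in `replace_metadata.json`:

```json
{
  "type": "replace_metadata",
  "args": {
    "version": 3,
    "sources": []
  }
}
```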
A performance degradation that is made visible by this set but not by any other likely indicates a problem with the implementation of remote relationships in the schema.