In f7400814792 we moved to benchmarking the Enterprise Edition pro container, but because we didn't successfully pipe the proper environment variables through to the Docker container, we were benchmarking the pro server, but in OSS mode. We would rather have visibility into the performance implications of as much functionality as possible, so this makes another attempt to run in EE mode.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/10206
GitOrigin-RevId: 83f823515d82503d1d7f0bf2b19637fad537175b
A fix for #9447
Before this change, for the workload in the ticket, we saw memory stay at around 4 GB an hour after the workload ended. After this change we see both a lower memory high-water mark and memory coming back to baseline at the end, as we expect (after the `HASURA_GRAPHQL_PG_CONN_LIFETIME` period and a subsequent `_idleStaleReaperThread` interval elapse). Note that the numbers on any given machine may differ slightly, since the behavior depends on the glibc version and processor speed (the test case depends on the server being overloaded).
This might also help some users commenting in https://github.com/hasura/graphql-engine/issues/9592
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/9823
GitOrigin-RevId: 5650aa42d10d46c418c21686983a982d69011884
So you can do:
```
$ HASURA_GRAPHQL_EE_LICENSE_KEY=... scripts/dev.sh graphql-engine-pro
```
along with the `--prof-*` modes etc.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8894
GitOrigin-RevId: 2257749b2936cbd3230beb23e774ac92989e2fbc
Trying to reduce the nondeterminism in the allowlist benchmarks.
Thanks to @jberryman for spotting this.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/8842
GitOrigin-RevId: 1369050e4c5844538ef0852f4347d7a72dd18287
See an earlier iteration of this work for an example of the kind of report we're producing: #7664
And related work in this repo: github.com:hasura/graphql-bench-helper
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7923
GitOrigin-RevId: 99d2a55e2fb5b55f3f33e2570cfd0bc23e448e0c
Some incremental Metadata API methods take far longer to complete than there is any good justification for. This adds some of them to the CI benchmark suite, so that we can track their performance.
I have a prototype to speed up some of these methods 10x; see hasura/graphql-engine-mono#6613.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6627
GitOrigin-RevId: fecc7f28cae734b4acad68a63cbcdf0a2693d567
This introduces an adhoc operation to the `huge_schema` benchmark set, so that we can track the performance of the incremental Metadata API.
The operation untracks a table that is not referenced by anything else in the `huge_schema` metadata, so that no changes need to cascade, and then tracks it again.
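For illustration, here is a minimal sketch of what that untrack/track pair could look like as standalone calls against the metadata API. This is not the benchmark's adhoc script; the endpoint, admin secret, source name, and table name are placeholders.
```typescript
// Sketch only (not the benchmark script): untrack and re-track an
// otherwise-unreferenced table via the v1/metadata endpoint.
const url = "http://localhost:8080/v1/metadata"; // placeholder endpoint
const headers = {
  "Content-Type": "application/json",
  "X-Hasura-Admin-Secret": "myadminsecretkey", // placeholder
};

async function runMetadata(type: string, args: unknown): Promise<void> {
  const res = await fetch(url, {
    method: "POST",
    headers,
    body: JSON.stringify({ type, args }),
  });
  if (!res.ok) throw new Error(`${type} failed: ${await res.text()}`);
}

async function main(): Promise<void> {
  // The table is not referenced by any other metadata (relationships,
  // permissions, etc.), so neither call has to cascade changes.
  const args = {
    source: "default", // placeholder source name
    table: { schema: "public", name: "unreferenced_table" }, // placeholder table
  };
  await runMetadata("pg_untrack_table", args);
  await runMetadata("pg_track_table", args);
}

main().catch(console.error);
```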
Benchmarking this will be valuable for working on `Hasura.Incremental`.
Results will start showing up in the benchmark report when this is merged to `main`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6553
GitOrigin-RevId: 65dad4f7a5fe1c230c5def136640bb68f4a4aa9b
See: https://github.com/grafana/k6/issues/2685
It might be interesting to take decompression time into consideration when thinking about performance, but in general I think doing so is surprising: I wasted a lot of time trying to figure out why my optimizations to the compression codepath weren't improving things to the degree I expected.
The downside here is that we lose error reporting, so you'll need to only set
`discardResponseBodies: true` after the query has been tested.
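For reference, a minimal k6 sketch of what that looks like once the query is known to be good (`discardResponseBodies` is k6's option; the endpoint and query here are placeholders):
```typescript
// k6 script sketch: discard response bodies so decompression/handling time
// does not skew the measurement. Only enable this after the query has been
// verified, since errors in the response body will no longer be visible.
import http from "k6/http";

export const options = {
  discardResponseBodies: true,
};

export default function () {
  // Placeholder endpoint and query.
  http.post(
    "http://localhost:8080/v1/graphql",
    JSON.stringify({ query: "{ __typename }" }),
    { headers: { "Content-Type": "application/json" } }
  );
}
```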
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5940
GitOrigin-RevId: 82a589a59b93f10ffb5391e4a3190459fb6e613b
### Description
This PR adds a new benchmark set named `deep_schema`, which replicates one very specific edge case: schemas that have deeply nested remote relationships. Our schema-building code is, in essence, "depth-first", and there are a lot of subtleties in the way we jump across remote relationship boundaries: this set will allow us to better understand the performance implications of technical decisions we make with regard to schema building.
This set, unlike the others, does not declare any queries: we are, for now, only interested in schema building, which is tested with an ad-hoc script.
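The ad-hoc script itself is not reproduced here; as a rough, hypothetical illustration of the kind of measurement involved, one could time a `reload_metadata` call, which forces the server to rebuild its schema cache. The endpoint and admin secret below are placeholders.
```typescript
// Hypothetical illustration only (not the PR's ad-hoc script): time a
// metadata reload, which triggers a schema-cache rebuild, including the jumps
// across the deeply nested remote relationship boundaries in `deep_schema`.
async function timeSchemaRebuild(): Promise<void> {
  const start = Date.now();
  const res = await fetch("http://localhost:8080/v1/metadata", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Hasura-Admin-Secret": "myadminsecretkey", // placeholder
    },
    body: JSON.stringify({ type: "reload_metadata", args: {} }),
  });
  if (!res.ok) throw new Error(await res.text());
  console.log(`schema rebuild took ${Date.now() - start} ms`);
}

timeSchemaRebuild().catch(console.error);
```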
### Remaining work
There are several points worth discussing with regard to this PR:
- should we make the schema larger, to make measurements more consistent?
- should we extend this idea of measuring schema build performance to other sets?
- how do we extend the report to include this new information?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5517
GitOrigin-RevId: 9d8f4fddb9bbdca5ef85f3d22337b992acf13bce
…rking metadata operations
And add an initial benchmark for replace_metadata, to unblock some
performance improvements to that op in a PR to be merged after this.
This is an MVP just to have something in CI to reference when optimizing
metadata operations. See TODO for roadmap.
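For context, a sketch of the request shape being benchmarked (the payload below is a trivial placeholder, not the benchmark fixture):
```typescript
// Illustrative shape of a replace_metadata call, POSTed to /v1/metadata.
// The args are a complete metadata document that the server swaps in
// wholesale; the benchmark uses a much larger, real metadata document.
const replaceMetadataRequest = {
  type: "replace_metadata",
  args: {
    version: 3,
    sources: [], // placeholder: real metadata lists sources, tables, etc.
  },
};
```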
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3673
Co-authored-by: jkachmar <8461423+jkachmar@users.noreply.github.com>
GitOrigin-RevId: 968d1f92ca79c78ad90b2304d2214069bc739621
This PR upgrades some of the pinned dependencies that do not build with Python 3.10 (cffi, ruamel, py). Further, it upgrades other packages where the effort is minimal.
For the reviewers: Please review it commit by commit.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3367
GitOrigin-RevId: c5401fe289d3185a79c4d382297f86fbde139825
This exercises a different code path from regular queries and is
pretty slow. Motivated by https://github.com/hasura/graphql-engine-mono/pull/2835
We may need to break chinook up into two sets if it keeps growing or
becomes a bottleneck, but do that later.
GitOrigin-RevId: 5832aef520db85f694924e038c0f2c9a7dd17a13