mirror of
https://github.com/hasura/graphql-engine.git
synced 2024-12-15 01:12:56 +03:00
6d235be29c
Set the GHC RTS flag --disable-delayed-os-memory-return
(see https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/runtime_control.html#rts-flag---disable-delayed-os-memory-return).

Referencing canonical memory issue #3388.

This is a bit of a mystery: the flag didn't seem to have any effect in the early repros we had. But now, running an introspection query benchmark with 400 concurrent connections, I see:

- before this change: max residency ~450M
- after: ~140M

No difference in latency was observed.

...BUT: if I give graphql-engine a warmup of 10 requests over a single connection (i.e. no concurrency), both builds show a max residency of ~140M (i.e. the flag doesn't help). Also interesting: a single warmup request doesn't seem to have any effect (ending RES is still high), while 2 warmup requests get max RES down to ~180M.

I suspect many concurrent connections are spraying pinned data over a bunch of blocks, which are then not released to the OS barring memory pressure. Whatever this is, it may be thread-local or "per-capability" in some sense...
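For context, RTS flags like this are typically either baked into the binary at build time with -with-rtsopts, or passed at run time between +RTS and -RTS (the latter requires the executable to have been compiled with -rtsopts). A minimal sketch of the build-time approach, using a hypothetical cabal stanza rather than graphql-engine's actual one:

```
-- Hypothetical executable stanza; graphql-engine's real .cabal file differs.
executable graphql-engine
  main-is:     Main.hs
  ghc-options: -threaded -rtsopts
               "-with-rtsopts=-N --disable-delayed-os-memory-return"
```

With -rtsopts compiled in, the same flag can instead be supplied at launch, e.g. `graphql-engine serve +RTS --disable-delayed-os-memory-return -RTS`.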