* server: add logging for action handlers
* add changelog entry
* change action-handler log type from internal to non-internal
* fix action-handler-log name
* server: log request/response sizes for event triggers
Event triggers (and scheduled triggers) now have request/response sizes
in their logs.
* add changelog entry
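As a rough illustration of the shape of such a log (field and function names here are hypothetical, not the actual graphql-engine types), the sizes can simply be recorded alongside the rest of the invocation details:

```haskell
{-# LANGUAGE DeriveGeneric #-}

-- Hypothetical sketch of an event-trigger invocation log carrying
-- request/response sizes; not the actual graphql-engine log type.
import Data.Aeson (ToJSON)
import qualified Data.ByteString.Lazy as BL
import Data.Int (Int64)
import Data.Text (Text)
import GHC.Generics (Generic)

data EventTriggerInvocationLog = EventTriggerInvocationLog
  { etilEventId :: Text
  , etilWebhook :: Text
  , etilRequestSizeBytes :: Int64   -- bytes sent to the webhook
  , etilResponseSizeBytes :: Int64  -- bytes received back
  , etilStatus :: Int
  } deriving (Show, Generic)

instance ToJSON EventTriggerInvocationLog

-- Compute the sizes from the serialized request/response bodies.
mkInvocationLog :: Text -> Text -> BL.ByteString -> BL.ByteString -> Int -> EventTriggerInvocationLog
mkInvocationLog eventId webhook reqBody respBody status = EventTriggerInvocationLog
  { etilEventId = eventId
  , etilWebhook = webhook
  , etilRequestSizeBytes = BL.length reqBody
  , etilResponseSizeBytes = BL.length respBody
  , etilStatus = status
  }
```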
Also some minor refactoring of the bounded cache module:
- the maxBound check in `trim` was confusing and unnecessary
- consequently, `trim` was unnecessary for `lookupPure`
Also add some basic tests.
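For context, here is a much-simplified, illustrative sketch of a bounded cache with `trim` and a pure lookup; it is a toy, not the actual `Data.Cache.Bounded` module:

```haskell
import Data.Hashable (Hashable)
import qualified Data.HashMap.Strict as HM

-- Toy bounded cache (illustrative only): a map plus a capacity.
data BoundedCache k v = BoundedCache
  { bcCapacity :: Int
  , bcEntries  :: HM.HashMap k v
  }

-- Evict (arbitrarily, in this toy) until we are within capacity.
-- Comparing the size against the capacity is enough; no maxBound check needed.
trim :: (Eq k, Hashable k) => BoundedCache k v -> BoundedCache k v
trim c@(BoundedCache cap entries)
  | HM.size entries <= cap = c
  | otherwise = BoundedCache cap (HM.fromList (take cap (HM.toList entries)))

-- A pure lookup never grows the cache, so it has no reason to trim.
lookupPure :: (Eq k, Hashable k) => k -> BoundedCache k v -> Maybe v
lookupPure k = HM.lookup k . bcEntries

-- Inserts trim afterwards to maintain the size bound.
insert :: (Eq k, Hashable k) => k -> v -> BoundedCache k v -> BoundedCache k v
insert k v (BoundedCache cap entries) =
  trim (BoundedCache cap (HM.insert k v entries))
```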
* Pass environment variables around as a data structure, via @sordina
* Resolving build error
* Adding Environment passing note to changelog
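The idea, roughly, is to capture the process environment once and pass it around as a value; a minimal sketch under assumed names (not the actual `Data.Environment` module):

```haskell
import qualified Data.Map.Strict as M
import qualified System.Environment as Sys

-- Illustrative sketch only: capture the process environment once,
-- as a plain data structure that can be passed around explicitly.
newtype Environment = Environment (M.Map String String)

mkEnvironment :: IO Environment
mkEnvironment = Environment . M.fromList <$> Sys.getEnvironment

-- Callers take an Environment argument instead of reading process
-- globals (System.Environment.lookupEnv) wherever they happen to run.
lookupEnv :: Environment -> String -> Maybe String
lookupEnv (Environment env) k = M.lookup k env
```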
* Removing references to ILTPollerLog, as it seems to have been reintroduced by a bad merge
* removing commented-out imports
* Language pragmas already set by project
* Linking async thread
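For reference, linking a spawned thread with `Control.Concurrent.Async.link` means an exception in that thread is re-thrown in the parent instead of being silently swallowed. A minimal sketch of the pattern (not the graphql-engine code itself):

```haskell
import Control.Concurrent (threadDelay)
import Control.Concurrent.Async (async, link)
import Control.Monad (forever)

main :: IO ()
main = do
  -- Spawn a background worker; 'link' ensures that if it dies with an
  -- exception, that exception is re-thrown here in the parent thread.
  worker <- async $ forever $ do
    threadDelay 1000000
    putStrLn "background work"
  link worker
  -- ... the rest of the program runs with the worker linked ...
  threadDelay 5000000
```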
* Apply suggestions from code review
Use `runQueryTx` instead of `runLazyTx` for queries.
* remove the non-user facing entry in the changelog
Co-authored-by: Phil Freeman <paf31@cantab.net>
Co-authored-by: Phil Freeman <phil@hasura.io>
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
The current idle GC settings seem never to cause idle GC to trigger.
The changes here at least help memory usage to look more reasonable when
running certain benchmarks, and speculatively could partially fix some
memory leaks users have reported.
See `ourIdleGC` for details.
Referencing canonical memory issue #3388
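The general technique looks roughly like the following (an illustrative sketch; the real `ourIdleGC` differs in its details): watch an activity counter and force a major collection only when nothing has happened for a full interval.

```haskell
import Control.Concurrent (threadDelay)
import Control.Monad (forever, when)
import Data.IORef (IORef, newIORef, readIORef, writeIORef)
import System.Mem (performMajorGC)

-- Illustrative idle-GC loop, not the actual ourIdleGC implementation.
-- The request handler is expected to bump the counter on every request.
idleGCLoop :: IORef Int -> Int -> IO ()
idleGCLoop requestCounter intervalMicros = do
  lastSeen <- newIORef =<< readIORef requestCounter
  forever $ do
    threadDelay intervalMicros
    previous <- readIORef lastSeen
    current  <- readIORef requestCounter
    writeIORef lastSeen current
    -- Only collect if the counter did not move, i.e. we were idle.
    when (current == previous) performMajorGC
```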
https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/runtime_control.html#rts-flag---disable-delayed-os-memory-return
This flag is a bit of a mystery. It didn't seem to have any effect in the
early repros we had. But now, running an introspection query benchmark, I see:
Running 400 concurrent connections:
- before this change: max residency ~450M
- after: ~140M
No difference in latency was observed.
...BUT: if I give graphql-engine a warmup of 10 requests with 1
connection (i.e. no concurrency), I see both have a max residency of
~140M (i.e. the flag doesn't help).
...also interestingly: a single warmup request doesn't seem to have
any effect (ending RES is still high), but 2 requests get max RES down to
~180M.
I suspect many concurrent connections are spraying pinned data over a
bunch of blocks, which are then not released to the OS barring memory
pressure. Whatever this is, it may be thread-local or "per-capability" in
some sense...
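Incidentally, a rough way to watch these numbers from inside the process (it needs `+RTS -T`, and reports RTS-level peaks rather than RES exactly) is `GHC.Stats`; a minimal sketch:

```haskell
import Control.Monad (when)
import GHC.Stats (RTSStats (..), getRTSStats, getRTSStatsEnabled)

-- Sketch only: print peak live heap and peak memory obtained from the OS.
-- These are related to, but not the same as, the RES column in top.
printPeakMemory :: IO ()
printPeakMemory = do
  enabled <- getRTSStatsEnabled
  when enabled $ do
    stats <- getRTSStats
    putStrLn $ "max live bytes:       " <> show (max_live_bytes stats)
    putStrLn $ "max mem in use bytes: " <> show (max_mem_in_use_bytes stats)
```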
This adds a server flag, `--pg-connection-options`, that can be used to set the PostgreSQL connection parameter `extra_float_digits`. Setting it avoids loss of data on older versions of PostgreSQL, which have odd default behavior when returning float values. (fixes #5092)
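For illustration, the underlying mechanism is an ordinary libpq connection parameter; roughly like the following generic sketch using `postgresql-simple` (this is not how graphql-engine wires the flag through, and the connection details are made up):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Database.PostgreSQL.Simple (Only (..), close, connectPostgreSQL, query_)

main :: IO ()
main = do
  -- Generic sketch: ask the server for full float precision via the
  -- connection's options (connection details here are made up).
  conn <- connectPostgreSQL
    "host=localhost dbname=test options='-c extra_float_digits=3'"
  -- Without extra_float_digits, older PostgreSQL versions round float8
  -- output, which can silently lose precision.
  rows <- query_ conn "SELECT 0.1234567890123456789::float8" :: IO [Only Double]
  mapM_ (print . fromOnly) rows
  close conn
```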