This adds a server flag, --pg-connection-options, that can be used to set a PostgreSQL connection parameter. It is needed to set extra_float_digits, which avoids loss of precision on older versions of PostgreSQL, whose default behavior when returning float values drops significant digits. (fixes #5092)
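The precision issue can be illustrated in plain Python: PostgreSQL's historical default float8 text output uses 15 significant digits, while a lossless round trip of a double needs 17 (this snippet is only an illustration of the problem, not the server's code):

```python
x = 0.1 + 0.2  # a double whose exact value needs 17 significant digits

# 15 significant digits (roughly the old default, extra_float_digits = 0)
# loses information on the round trip:
assert float(f"{x:.15g}") != x

# 17 significant digits (extra_float_digits raised, or the newer
# shortest-round-trip output) round-trips exactly:
assert float(f"{x:.17g}") == x
```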
This reduces memory consumption for new idle subscriptions significantly
(see linked ticket).
The hypothesis is: we fork a lot of threads per websocket, and some of
these use slightly more than the initial 1K stack size, so the first
overflow balloons to 32K, when significantly less is required.
However: running with `+RTS -K1K -xc` did not seem to show evidence of
any overflows! So it's a mystery why this improves things.
GHC should probably also be doubling the stack buffer at each overflow
or doing something even smarter; the knobs we have aren't so helpful.
The introspection query fails with a `type info not found for xxxx` error when multiple actions are defined that reuse PG scalars. This fixes that.
* Benchmark GraphQL queries using wrk
* fix console assets dir
* Store wrk parameters as well
* Add details about storing results in Readme
* Remove files in bench-wrk while computing server shasum
* Instead of just getting maximum throughput per query per version,
create plots using wrk2 for a given set of requests per second.
The maximum throughput is used to see what values of requests per second are feasible.
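One plausible way to derive the fixed request rates passed to wrk2's -R option from the measured peak is as fractions of the maximum throughput (a hypothetical helper, not necessarily the script's actual logic):

```python
def rps_steps(max_throughput, fractions=(0.2, 0.4, 0.6, 0.8)):
    # Pick request rates safely below the peak so wrk2's constant-rate
    # latency measurement stays meaningful (no unbounded queueing).
    return [round(max_throughput * f) for f in fractions]

print(rps_steps(1000))  # [200, 400, 600, 800]
```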
* Add id for version dropdown
* Allow specifying env and args for GraphQL Engine
1) Arguments defined after -- will be passed as arguments to Hasura GraphQL Engine
2) The script will also pass its environment variables to the Hasura GraphQL Engine instances
Hasura GraphQL Engine can be run with the given environment variables and arguments as follows:
$ export HASURA_GRAPHQL_...=....
$ python3 hge_wrk_bench.py -- --hge_arg1 val1 --hge_arg2 val2 ...
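A minimal sketch of how the `--` convention above could be handled in the script (the actual implementation may differ):

```python
import sys

def split_hge_args(argv):
    # Everything after a literal '--' is passed through to graphql-engine;
    # everything before it belongs to the benchmark script itself.
    if '--' in argv:
        i = argv.index('--')
        return argv[:i], argv[i + 1:]
    return argv, []

script_args, hge_args = split_hge_args(sys.argv[1:])
# The engine would then be launched with the inherited environment, e.g.:
#   subprocess.Popen(['graphql-engine', *hge_args, 'serve'], env=os.environ)
```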
* Use matplotlib instead of plotly for figures
* Show the throughput graph as well.
It may be useful in checking for performance regressions across versions
* Support storing results in s3
Use --upload-root-uri 's3://bucket/path' to upload results under the
given path. When specified, the results, including latencies, the latency
histogram, and the test setup info, will be uploaded to the bucket.
The S3 credentials should be provided as described in the AWS boto3 documentation.
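A hedged sketch of how the `s3://bucket/path` root URI could be split and used with boto3 (the real upload code may differ; credentials are resolved by boto3's standard chain — env vars, ~/.aws/credentials, instance profile):

```python
from urllib.parse import urlparse

def parse_s3_uri(uri):
    # 's3://bucket/path' -> ('bucket', 'path')
    u = urlparse(uri)
    if u.scheme != 's3':
        raise ValueError(f"not an s3 URI: {uri}")
    return u.netloc, u.path.lstrip('/')

def upload_result(upload_root_uri, local_path, key_suffix):
    import boto3  # imported lazily; requires boto3 in requirements.txt
    bucket, prefix = parse_s3_uri(upload_root_uri)
    key = f"{prefix}/{key_suffix}" if prefix else key_suffix
    boto3.client('s3').upload_file(local_path, bucket, key)
```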
* Allow specifying a name for the test scenario
* Fix open latency uri bug
* Update wrk docker image
* Keep ylim a little higher than maximum so that the throughput plot is clearly visible
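The headroom could be computed as in this hypothetical helper (not necessarily the script's exact formula):

```python
def padded_ylim(values, pad=0.1):
    # Leave headroom above the peak so the top of the throughput
    # line isn't clipped against the axis frame.
    return 0, max(values) * (1 + pad)
```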
* Show throughput plots for multiple queries at the same time
* 1) Adjust size of dropdowns
2) Make label for requests/sec invisible when plot type is throughput
* 1) Adding boto3 to requirements.txt
2) Removing CPU Key print line
3) Adding info about the tests that will be run with wrk2
* Docker builder for wrk-websocket-server
* Make it optional to setup remote graphql-engine
* Listen on all interfaces and enable ping thread
* Add bench_scripts to wrk-websocket-server docker
* Use 127.0.0.1 instead of 'localhost' to address local hge
For some reason wrk seemed to hang trying to resolve 'localhost'.
ping was able to resolve it fine from the same container, so I'm not
sure what the deal was. Probably some local misconfiguration on my
machine, but maybe this change will also help others.
* Store latency samples in subdirectory, server_shasum just once at start, additional docs
* Add a note on running the benchmarks in the simplest way
* Add a new section on how to run benchmarks on a new linux hosted instance
Co-authored-by: Nizar Malangadan <nizar-m@users.noreply.github.com>
Co-authored-by: Brandon Simmons <brandon.m.simmons@gmail.com>
Co-authored-by: Karthikeyan Chinnakonda <karthikeyan@hasura.io>
Co-authored-by: Brandon Simmons <brandon@hasura.io>
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>