Docs: Add missing env var and update performance tuning

[DOCS-2015]: https://hasurahq.atlassian.net/browse/DOCS-2015?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/10796
GitOrigin-RevId: 1a03783abc4c77a6b95e58b84835cc61e1b40fe3
Rob Dominguez 2024-05-09 07:50:16 -05:00 committed by hasura-bot
parent a710454f9b
commit b2c7c7ab57
3 changed files with 49 additions and 71 deletions

View File

@ -24,8 +24,7 @@ Cached responses are stored for a period of time in a LRU (least-recently used)
a user-specified TTL (time-to-live) which defaults to 60 seconds.
For self-hosted Enterprise Edition, refer to the [enable caching](/caching/enterprise-caching.mdx) documentation to
configure various parameters.
## Getting started
@ -44,7 +43,6 @@ query MyCachedQuery @cached {
If the response was cached successfully, the HTTP response will include a `Cache-Control` header, whose value
(`max-age={SECONDS}`) indicates the maximum number of seconds for the returned response to remain in the cache.
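As a quick illustration, here is a minimal sketch of inspecting that header with Python's `requests`, assuming a locally running project at `http://localhost:8080/v1/graphql` with no admin secret and a hypothetical `users` table:

```python
import requests

# Hypothetical local endpoint and example table; adjust for your project.
HASURA_URL = "http://localhost:8080/v1/graphql"

query = """
query MyCachedQuery @cached {
  users {
    id
    name
  }
}
"""

response = requests.post(HASURA_URL, json={"query": query})
response.raise_for_status()

# On a successfully cached response this prints e.g. "max-age=60".
print(response.headers.get("Cache-Control"))
```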
## Controlling cache lifetime
The maximum lifetime of an entry in the cache can be controlled using the `ttl` argument to the `@cached` query
@ -62,6 +60,13 @@ query MyCachedQuery @cached(ttl: 120) {
By default, a response will be cached with a maximum lifetime of 60 seconds. The maximum allowed value is 300 seconds (5
minutes).
:::info Limits for Hasura Cloud projects
As stated above, for any Hasura Cloud project, the maximum allowed value is 300 seconds (5 minutes). Should you need a
longer cache lifetime, please [contact sales](mailto:sales@hasura.io).
:::
## Forcing the cache to refresh
The cache entry can be forced to refresh, regardless of the maximum lifetime, by using the `refresh` argument to `@cached`.
@ -78,9 +83,8 @@ query MyCachedQuery @cached(refresh: true) {
:::info Use a literal boolean value for refresh
`refresh` must be provided with a literal boolean value, not as a variable, to have the desired effect. If the value of
the `refresh` argument is provided via a GraphQL variable, there will be a cache miss, as it is considered a
different query and will generate a new cache key.
:::
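To make this caveat concrete, a hedged sketch (same hypothetical endpoint and `users` table as above): the first query refreshes the existing entry, while the second, despite passing `true`, is keyed as a different query and simply misses the cache:

```python
import requests

HASURA_URL = "http://localhost:8080/v1/graphql"  # hypothetical endpoint

# Works: `refresh` is a literal boolean, so the existing entry is refreshed.
literal_refresh = """
query MyCachedQuery @cached(refresh: true) {
  users { id name }
}
"""

# Misses instead: routing `refresh` through a variable makes this a
# different query with its own cache key, so nothing is refreshed.
variable_refresh = """
query MyCachedQuery($refresh: Boolean!) @cached(refresh: $refresh) {
  users { id name }
}
"""

requests.post(HASURA_URL, json={"query": literal_refresh})
requests.post(
    HASURA_URL,
    json={"query": variable_refresh, "variables": {"refresh": True}},
)
```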

View File

@ -222,6 +222,18 @@ for JSON encoding-decoding.
| **Default** | `false` |
| **Supported in** | CE, Enterprise Edition |
### Cache Max Entry TTL
The maximum Query Cache TTL value in seconds, defaulting to 3600 seconds (1 hour).
| | |
| ------------------- | ------------------------------------ |
| **Flag** | `--query-cache-max-ttl` |
| **Env var** | `HASURA_GRAPHQL_CACHE_MAX_ENTRY_TTL` |
| **Accepted values** | Integer |
| **Default** | `3600` |
| **Supported in** | Enterprise Edition only |
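As a rough sketch of wiring this up, assuming a self-hosted Enterprise Edition `graphql-engine` binary on the `PATH` and database credentials already exported in the environment, the ceiling could be raised at startup like so:

```python
import os
import subprocess

# Hedged sketch: assumes a `graphql-engine` binary on PATH and database
# credentials already present in the environment; raises the ceiling to 2h.
env = dict(os.environ, HASURA_GRAPHQL_CACHE_MAX_ENTRY_TTL="7200")
subprocess.run(["graphql-engine", "serve"], env=env, check=True)
```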
### Close WebSocket connections on metadata change
When metadata changes, close all WebSocket connections (with error code `1012`). This is useful when you want to ensure
@ -387,31 +399,29 @@ subgraph in an Apollo supergraph.
| **Default** | `false` |
| **Supported in** | CE, Enterprise Edition, Cloud |
### Enable Automated Persisted Queries
Enables the [Automated Persisted Queries](https://www.apollographql.com/docs/apollo-server/performance/apq/) feature.
|                     |                                           |
| ------------------- | ----------------------------------------- |
| **Flag**            | `--enable-persisted-queries`              |
| **Env var**         | `HASURA_GRAPHQL_ENABLE_PERSISTED_QUERIES` |
| **Accepted values** | Boolean                                   |
| **Default**         | `false`                                   |
| **Supported in**    | Enterprise Edition                        |
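A hedged sketch of the client side of this feature, following Apollo's APQ convention (hash-only request first, full query registered on a `PersistedQueryNotFound` error); the endpoint and `users` table are hypothetical:

```python
import hashlib
import requests

HASURA_URL = "http://localhost:8080/v1/graphql"  # hypothetical endpoint

query = "query MyQuery { users { id name } }"  # hypothetical table
digest = hashlib.sha256(query.encode("utf-8")).hexdigest()
extensions = {"persistedQuery": {"version": 1, "sha256Hash": digest}}

# First attempt: send only the hash.
body = requests.post(HASURA_URL, json={"extensions": extensions}).json()

if any(e.get("message") == "PersistedQueryNotFound"
       for e in body.get("errors") or []):
    # Cache miss: register the full query once; hash-only requests then
    # succeed until the entry's TTL expires.
    body = requests.post(
        HASURA_URL, json={"query": query, "extensions": extensions}
    ).json()

print(body)
```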
### Set Automated Persisted Queries TTL
Sets the query TTL in the cache store for Automated Persisted Queries.
|                     |                                        |
| ------------------- | -------------------------------------- |
| **Flag**            | `--persisted-queries-ttl`              |
| **Env var**         | `HASURA_GRAPHQL_PERSISTED_QUERIES_TTL` |
| **Accepted values** | Integer                                |
| **Default**         | `5` (seconds)                          |
| **Supported in**    | Enterprise Edition                     |
### Enable Error Log Level for Trigger Errors
@ -425,7 +435,6 @@ Sets the log-level as `error` for Trigger type error logs (Event Triggers, Sched
| **Default** | `false` |
| **Supported in** | CE, Enterprise Edition |
### Enable Console
Enable the Hasura Console (served by the server on `/` and `/console`).
@ -451,7 +460,6 @@ Sets the maximum cumulative length of all headers in bytes.
| **Default** | `1024*1024` (1MB) |
| **Supported in** | CE, Enterprise Edition |
### Enable High-cardinality Labels for Metrics
Enable high-cardinality labels for [Prometheus Metrics](/observability/enterprise-edition/prometheus/metrics.mdx).
@ -557,7 +565,7 @@ log types — can be found [here](/deployment/logging.mdx#log-types).
| **Env var** | `HASURA_GRAPHQL_ENABLED_LOG_TYPES` |
| **Accepted values** | String (Comma-separated) |
| **Options** | `startup`, `http-log`, `webhook-log`, `websocket-log`, `query-log`, `execution-log`, `livequery-poller-log`, `action-handler-log`, `data-connector-log`, `jwk-refresh-log`, `validate-input-log` |
| **Default** | `startup, http-log, webhook-log, websocket-log, jwk-refresh-log` |
| **Supported in** | CE, Enterprise Edition |
### Events HTTP Pool Size

View File

@ -31,53 +31,27 @@ very undersized for your system. You can read more details about the configurati
### Hasura configuration
With respect to Hasura, several environment variables and project settings can be configured to tune performance.
#### Connection pooling
Hasura provides automatic connection pools for PostgreSQL and MSSQL database connections. These pools are elastic and
will adapt based on usage and a limit you configure. You can learn more about connection pooling and how it can be
fine-tuned to help your project's performance [here](/databases/database-config/cloud-connection-pooling.mdx).
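As an illustration, a hedged sketch of pinning a pool ceiling when registering a Postgres source through the `pg_add_source` metadata API; the endpoint, admin secret, and env var name are hypothetical, and the exact `pool_settings` fields should be checked against your Hasura version:

```python
import requests

METADATA_URL = "http://localhost:8080/v1/metadata"  # hypothetical
ADMIN_SECRET = "myadminsecretkey"                   # hypothetical

# Cap the source's pool so the combined pools of all Hasura nodes stay
# safely below the Postgres server's own max_connections.
payload = {
    "type": "pg_add_source",
    "args": {
        "name": "default",
        "configuration": {
            "connection_info": {
                "database_url": {"from_env": "PG_DATABASE_URL"},
                "pool_settings": {"max_connections": 50, "idle_timeout": 180},
            }
        },
    },
}

response = requests.post(
    METADATA_URL, json=payload, headers={"x-hasura-admin-secret": ADMIN_SECRET}
)
print(response.json())
```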
For high-transaction applications, horizontal scaling with multiple GraphQL Engine clusters is the recommended best
practice.
#### `HASURA_GRAPHQL_CONNECTIONS_PER_READ_REPLICA`
With read replicas, Hasura can load balance queries across multiple databases. However, you will need to balance
connections between database nodes too. Currently, read-replica connections use a single setting for all databases;
specific values can't be configured per node. Therefore, you need to be aware of the total number of connections when
scaling Hasura to multiple nodes.
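To make "the total number of connections" concrete, a small illustrative calculation (all numbers are placeholders, not recommendations); the same arithmetic applies per replica using `HASURA_GRAPHQL_CONNECTIONS_PER_READ_REPLICA`:

```python
# Placeholder numbers only: each database (primary or replica) must keep the
# combined pools of all Hasura nodes under its own max_connections.
hasura_nodes = 3
pool_per_node = 50             # primary pool limit per Hasura node
primary_max_connections = 200  # Postgres `max_connections`, placeholder

total_primary = hasura_nodes * pool_per_node
assert total_primary <= primary_max_connections - 5, "leave headroom for other clients"
print(f"connections hitting the primary: {total_primary}")
```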
#### `HASURA_GRAPHQL_LIVE_QUERIES_MULTIPLEXED_REFETCH_INTERVAL`
@ -130,14 +104,6 @@ workloads.
Horizontal **auto-scaling** can be set up based on CPU & memory. It's advisable to start with this, monitor it for a few
days and see if there's a need to change based on your workload.
### Observability
Observability tools help us track issues, alert us to errors, and allow us to monitor performance and hardware usage. It