do not use prepared statements for mutations

Vamshi Surabhi 2020-07-01 17:44:19 +05:30
parent a78f17b79c
commit cfffade115
2 changed files with 20 additions and 3 deletions


@@ -8,6 +8,7 @@
- server: add new `--conn-lifetime` and `HASURA_GRAPHQL_PG_CONN_LIFETIME` options for expiring connections after some amount of active time (#5087)
- server: shrink libpq connection request/response buffers back to 1MB if they grow beyond 2MB, fixing leak-like behavior on active servers (#5087)
- server: disable prepared statements for mutations, as we end up with single-use objects that cause excessive memory consumption for mutation-heavy workloads (#5255)
- console: allow configuring statement timeout on console RawSQL page (close #4998) (#5045)
- docs: add note for managed databases in postgres requirements (close #1677, #3783) (#5228)
- docs: add 1-click deployment to Nhost page to the deployment guides (#5180)
@@ -119,7 +120,7 @@ Read more about the session argument for computed fields in the [docs](https://h
A new `seeds` command is introduced in the CLI; it allows managing seed migrations as SQL files
#### Creating seed
```
# create a new seed file and use editor to add SQL content
hasura seed create new_table_seed
```


@@ -72,6 +72,20 @@ mutateAndReturn (Mutation qt (cte, p) mutationOutput allCols remoteJoins strfyNum
    sqlQuery = Q.fromBuilder $ toSQL $
      mkMutationOutputExp qt allCols Nothing cte mutationOutput strfyNum
{- Note: [Prepared statements in Mutations]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The SQL statements we generate for mutations sometimes include the actual
values inline, which makes them unfit for reuse (handling relationships in
the returning clause is the source of this complexity). Further, `PGConn`
keeps an internal cache mapping each statement to a 'prepared statement id'
on Postgres. As we prepare more and more single-use SQL statements, we leak
memory in both graphql-engine and Postgres until the connection is closed.
A simple but crude fix is therefore to not use prepared statements for
mutations. The performance of insert mutations shouldn't be affected, but
update and delete mutations with complex boolean conditions **might** see
some degradation.
-}
mutateAndSel
  :: (HasVersion, MonadTx m, MonadIO m)
  => Mutation -> m EncJSON
@@ -92,7 +106,8 @@ executeMutationOutputQuery
executeMutationOutputQuery query prepArgs = \case
  Nothing ->
    runIdentity . Q.getRow
-      <$> liftTx (Q.rawQE dmlTxErrorHandler query prepArgs True)
+      -- See Note [Prepared statements in Mutations]
+      <$> liftTx (Q.rawQE dmlTxErrorHandler query prepArgs False)
  Just (remoteJoins, (httpManager, reqHeaders, userInfo)) ->
    executeQueryWithRemoteJoins httpManager reqHeaders userInfo query prepArgs remoteJoins
@@ -105,7 +120,8 @@ mutateAndFetchCols
  -> Q.TxE QErr (MutateResp TxtEncodedPGVal)
mutateAndFetchCols qt cols (cte, p) strfyNum =
  Q.getAltJ . runIdentity . Q.getRow
-    <$> Q.rawQE dmlTxErrorHandler (Q.fromBuilder sql) (toList p) True
+    -- See Note [Prepared statements in Mutations]
+    <$> Q.rawQE dmlTxErrorHandler (Q.fromBuilder sql) (toList p) False
  where
    aliasIden = Iden $ qualObjectToText qt <> "__mutation_result"
    tabFrom = FromIden aliasIden
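
The leak the note describes can be modeled outside the engine. Below is a minimal, self-contained Haskell sketch, not pg-client-hs's actual implementation (the real cache lives on `PGConn` and allocates server-side prepared statements in Postgres): each distinct SQL text claims a cache slot, so mutations whose generated SQL embeds values grow the cache on every call, while skipping preparation, as the `False` arguments above do, leaves it untouched.

```
import           Data.IORef      (IORef, modifyIORef', newIORef, readIORef)
import qualified Data.Map.Strict as Map

-- One slot per distinct SQL text, standing in for the statement cache on a
-- connection and the matching server-side prepared statement in Postgres.
type StmtCache = Map.Map String Int

-- Preparing caches the statement: rerunning identical SQL is free, but SQL
-- with values baked in is never identical twice, so every call adds a slot
-- that survives until the connection closes.
runPrepared :: IORef StmtCache -> String -> IO ()
runPrepared ref sql =
  modifyIORef' ref $ \cache ->
    Map.insertWith (\_new old -> old) sql (Map.size cache + 1) cache

-- Skipping preparation costs a parse on every call but allocates nothing
-- that outlives the call.
runUnprepared :: IORef StmtCache -> String -> IO ()
runUnprepared _ _ = pure ()

main :: IO ()
main = do
  ref <- newIORef Map.empty
  -- mutation SQL with the mutated values inlined: distinct on every call
  let mutationSQL i = "INSERT INTO t VALUES (" <> show (i :: Int) <> ")"
  mapM_ (runPrepared ref . mutationSQL) [1 .. 10000]
  Map.size <$> readIORef ref >>= print   -- 10000 slots, held indefinitely
  mapM_ (runUnprepared ref . mutationSQL) [1 .. 10000]
  Map.size <$> readIORef ref >>= print   -- still 10000: no further growth
```

Read this way, the trade-off in the note and the changelog entry is the two costs changing places: preparation trades a per-call parse for per-statement state, which pays off only when statements repeat, as parameterized query SQL does but the mutation SQL here does not.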