server/mssql: add support for creating event triggers (incremental PR #2)

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2634
Co-authored-by: Naveen Naidu <30195193+Naveenaidu@users.noreply.github.com>
GitOrigin-RevId: 29567fa336c22d1812dfcfa6294f54e74a741f0c
This commit is contained in:
Karthikeyan Chinnakonda 2022-03-03 15:22:48 +05:30 committed by hasura-bot
parent 1eb7fe5999
commit a3e6b1c741
10 changed files with 429 additions and 68 deletions

View File

@ -1,22 +1,26 @@
# Event triggers in MS-SQL
This document outlines research for supporting event triggers in an MS-SQL
database. This RFC includes only the event triggers support from a Database PoV,
which can be divided into two parts as follows:
## Generating new events
1. For supporting event triggers, we need to generate new events on a mutation,
whether the mutation is done through Hasura or not. This can be done via DML
SQL triggers, which MS-SQL supports (see [MS-SQL triggers](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-trigger-transact-sql?view=sql-server-ver15)).
2. An MS-SQL trigger is different from a postgres trigger in some ways:
* MS-SQL doesn't support triggers which fire for each row, so in the case of
mutations which affect multiple rows, only a single trigger is fired, which
will contain the data of all the rows that were affected.
* MS-SQL maintains two logical tables, namely, [`inserted` and `deleted`](https://docs.microsoft.com/en-us/sql/relational-databases/triggers/use-the-inserted-and-deleted-tables?view=sql-server-ver15).
The rows in the `inserted` table are copies of the new rows in the trigger
table and similarly the `deleted` table contains the copies of the rows
that were deleted from the trigger table.
* When there's an update transaction, the old data (before the update) will
be copied to the `deleted` table and the new data will be copied to the
`inserted` table.
3. The `data` value of the event trigger's payload should be as follows:
@ -29,16 +33,19 @@ Database PoV, which can be divided in two parts as the following:
}
```
In postgres, we could use the `row_to_json` function to convert a table row
into JSON. In SQL server, the way to convert an SQL row into JSON is
different. Say we have a table `authors`, which has three columns: `id`,
`name` and `created_at`. We can convert the rows of the table into JSON by
performing the following query:
```sql
select * from authors FOR JSON PATH;
```
which will return a single row containing a JSON array of the rows, with the
column names as the keys of each JSON object and the row values as the
values. For example:
```json
[
@ -53,15 +60,55 @@ Database PoV, which can be divided in two parts as the following:
]
```
4. Insert and delete event triggers are easier than the `update` triggers,
because in the former, we only need to format the data present in the
`inserted` and `deleted` tables and insert it into the
`hdb_catalog.event_log` table. For updates, it's not so straightforward
because we need to combine data from the `inserted` and `deleted` tables to
construct the payload as mentioned in #3.
`INSERT` event trigger definition:
```sql
CREATE OR ALTER TRIGGER hasuraAuthorsAfterInsert
ON authors
AFTER INSERT
AS
BEGIN
  DECLARE @json NVARCHAR(MAX)
  SET @json = (
    SELECT id as [data.new.id], name as [data.new.name], NULL as [data.old]
    FROM INSERTED
    FOR JSON PATH, INCLUDE_NULL_VALUES
  )
  insert into hdb_catalog.event_log (schema_name,table_name,trigger_name,payload)
  select 'dbo','authors','authors_insert', value from OPENJSON (@json)
END
```
`DELETE` event trigger definition:
```sql
CREATE OR ALTER TRIGGER hasuraAuthorsAfterDelete
ON authors
AFTER DELETE
AS
BEGIN
  DECLARE @json NVARCHAR(MAX)
  SET @json = (
    SELECT id as [data.old.id], name as [data.old.name], NULL as [data.new]
    FROM DELETED
    FOR JSON PATH, INCLUDE_NULL_VALUES
  )
  insert into hdb_catalog.event_log (schema_name,table_name,trigger_name,payload)
  select 'dbo','authors','authors_delete', value from OPENJSON (@json)
END;
```
So, the following is proposed for `UPDATE` triggers:
```sql
CREATE OR ALTER TRIGGER hasuraAuthorsAfterUpdate
ON authors
AFTER UPDATE
AS
@ -70,58 +117,86 @@ Database PoV, which can be divided in two parts as the following:
  SET @json = (
    select d.id as [data.old.id], d.name as [data.old.name], i.id as [data.new.id], i.name as [data.new.name]
    from deleted d
    JOIN inserted i on i.id = d.id -- join using the primary key of the table
    where (i.id != d.id OR i.name != d.name) -- check if there has been a value change
    FOR JSON PATH
  )
  insert into hdb_catalog.event_log (schema_name,table_name,trigger_name, payload)
  select 'dbo','authors','authors_update', value from OPENJSON (@json)
END
```
NOTE: the above will work only when a table has a primary key, which is used
to join the `deleted` and the `inserted` tables.
The triggers will be created with template string values where the values of
the tables or row expressions will be substituted before creating the
trigger, as it is done for postgres [here](https://github.com/hasura/graphql-engine-mono/blob/main/server/src-rsr/trigger.sql.shakespeare).
5. MS-SQL doesn't allow for the trigger to be created in a different schema from
the target table's schema. For example, if a table is created in the `dbo`
schema, then the trigger should also be in the `dbo` schema.
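A minimal illustration of this constraint, assuming a table `dbo.authors` and
a second schema `audit` (both names are hypothetical):
```sql
-- fails at CREATE time: MS-SQL requires the trigger's schema to match the
-- target table's schema (dbo here)
CREATE TRIGGER audit.hasuraAuthorsAfterInsert
ON dbo.authors
AFTER INSERT
AS
BEGIN
  SELECT 1
END;
```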
6. In postgres, the session variables and trace context are set in runtime
configurations, `hasura.user` and `hasura.tracecontext` respectively; this is
done by setting these values via `SET LOCAL \"hasura.user\"={\"x-hasura-user-id\":\"1\"}`.
In MS-SQL, the same can be done using [SESSION_CONTEXT](https://docs.microsoft.com/en-us/sql/t-sql/functions/session-context-transact-sql?view=sql-server-ver15)
(a sketch follows the list below).
There are some differences between the postgres and MS-SQL session contexts:
* In postgres, there's an option to localize the session context only to a
transaction (using `SET LOCAL`), but there's no way to do the same in
MS-SQL. In MS-SQL, the session context will be set for the whole session.
So, for this to work in MS-SQL, we should only have one transaction per
session (which already exists).
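A minimal sketch of setting and reading session variables via
`SESSION_CONTEXT`; the `hasura.user` key mirrors the postgres runtime
configuration, though whether the engine will use exactly this key in MS-SQL
is an assumption here:
```sql
-- set a session variable for the current session; sp_set_session_context is
-- the documented way to populate SESSION_CONTEXT
EXEC sp_set_session_context N'hasura.user', N'{"x-hasura-user-id":"1"}';

-- read it back, e.g. inside a trigger body while building the event payload;
-- SESSION_CONTEXT returns sql_variant, so cast it to text
SELECT CAST(SESSION_CONTEXT(N'hasura.user') AS NVARCHAR(MAX)) AS session_variables;
```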
7. The aim is to do as little work as possible in the source DB, i.e. the source
should only capture the `new`, `old`, `operation_type`, `session_variables`
and `tracecontext` in an event log; the JSON processing of these details will
be done by the graphql-engine during the delivery of the event.
## Fetching pending events
1. MS-SQL doesn't support a `JSON` column type; JSON is instead stored in a
column of type `NVARCHAR(MAX)`. So, we can't rely on the database to
guarantee that the value in the `payload` column is a valid JSON value.
MS-SQL does provide a function
[ISJSON](https://docs.microsoft.com/en-us/sql/t-sql/functions/isjson-transact-sql?view=sql-server-ver15)
which can be used to check if a value is valid JSON.
2. As we know, there can be multiple instances of hasura running on the same
source/database. So, we need to make sure that multiple instances do not
fetch the same rows, otherwise the same events will be processed more than
once. To solve this problem, postgres uses `FOR UPDATE SKIP LOCKED`, which,
when used in a `SELECT` query, will skip over the rows that are locked by
other transactions **without waiting**. (A sketch of such a fetch query for
MS-SQL follows the quote below.)
MS-SQL has a similar feature, [READPAST and UPDLOCK](https://docs.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-table?view=sql-server-ver15)
which is more or less like `FOR UPDATE SKIP LOCKED`. From the docs,
> READPAST is primarily used to reduce locking contention when implementing a
> work queue that uses a SQL Server table. A queue reader that uses READPAST
> skips past queue entries locked by other transactions to the next available
> queue entry, without having to wait until the other transactions release
> their locks.
> When specified in transactions operating at the SNAPSHOT isolation level,
> READPAST must be combined with other table hints that require locks, such
> as UPDLOCK and HOLDLOCK.
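A minimal sketch of how an instance could fetch its batch of pending events
with these hints, assuming the MS-SQL `event_log` carries `delivered` and
`error` flags like the postgres catalog does (the flags and the batch size
are assumptions here):
```sql
-- READPAST skips rows already locked by other hasura instances instead of
-- blocking; UPDLOCK takes update locks so two readers can't claim the same row
SELECT TOP (100) id, schema_name, table_name, trigger_name, payload
FROM hdb_catalog.event_log WITH (UPDLOCK, READPAST)
WHERE delivered = 0 AND error = 0
  -- payload is NVARCHAR(MAX), not JSON, so guard with ISJSON (see point 1)
  AND ISJSON(payload) = 1;
```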
### Server code changes
1. Support source migrations for MS-SQL sources, which will create the
`event_log` and the `event_invocation_logs` table.
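A rough sketch of the table such a migration could create; the column names
come from the trigger definitions above, while the types and the bookkeeping
columns are assumptions:
```sql
CREATE TABLE hdb_catalog.event_log (
  id UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,
  schema_name NVARCHAR(MAX) NOT NULL,
  table_name NVARCHAR(MAX) NOT NULL,
  trigger_name NVARCHAR(MAX) NOT NULL,
  payload NVARCHAR(MAX),            -- JSON stored as text, see 'Fetching pending events'
  delivered BIT NOT NULL DEFAULT 0, -- assumed, mirroring the postgres catalog
  error BIT NOT NULL DEFAULT 0,     -- assumed, mirroring the postgres catalog
  created_at DATETIME NOT NULL DEFAULT GETUTCDATE()
);
```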
2. Currently, events processing can be broken up into two steps:
1. Fetching the events from the database.
2. Processing the fetched events.
The current events processing code is postgres specific; this will need to
change to work for any backend `b`, like we have done with the
`BackendMetadata` type class. The type class proposed here will be
`BackendEventTrigger`, which will be defined in the following way:
```haskell
@ -140,14 +215,17 @@ Database PoV, which can be divided in two parts as the following:
-> m ()
```
By defining the above typeclass, in the future new backends can be easily
added just by implementing the `BackendEventTrigger` instance for those
backends.
3. The creation of event triggers in the current code is generalized for all
backends, so the error placeholders will need to be replaced with
appropriate backend-specific logic.
## Blockers
1. At the time of writing this RFC, mutations aren't yet supported in MS-SQL.
Support for mutations is needed to set the `session_variables` and the
`trace_context` in the database. This is not a hard blocker though; this can
be added incrementally after support for mutations is added.

View File

@ -380,6 +380,7 @@ library
, Hasura.Backends.MSSQL.Connection
, Hasura.Backends.MSSQL.DDL
, Hasura.Backends.MSSQL.DDL.BoolExp
, Hasura.Backends.MSSQL.DDL.EventTrigger
, Hasura.Backends.MSSQL.DDL.RunSQL
, Hasura.Backends.MSSQL.DDL.Source
, Hasura.Backends.MSSQL.DDL.Source.Version

View File

@ -0,0 +1,218 @@
module Hasura.Backends.MSSQL.DDL.EventTrigger (createTableEventTrigger) where
import Data.FileEmbed (makeRelativeToProject)
import Data.Text qualified as T
import Data.Text.Extended (commaSeparated)
import Data.Text.Lazy qualified as LT
import Database.MSSQL.Transaction (TxE, unitQueryE)
import Database.ODBC.SQLServer (rawUnescapedText)
import Hasura.Backends.MSSQL.Connection
import Hasura.Backends.MSSQL.SQL.Error qualified as HGE
import Hasura.Backends.MSSQL.Types (SchemaName (..), TableName (..))
import Hasura.Backends.MSSQL.Types.Internal (columnNameText, geoTypes)
import Hasura.Base.Error
import Hasura.Prelude
import Hasura.RQL.Types.Column
import Hasura.RQL.Types.EventTrigger
import Hasura.RQL.Types.Eventing (OpVar (..))
import Hasura.RQL.Types.Table (PrimaryKey (..))
import Hasura.SQL.Backend
import Hasura.Server.Types
import Text.Shakespeare.Text qualified as ST
createTableEventTrigger ::
(MonadIO m) =>
ServerConfigCtx ->
MSSQLSourceConfig ->
TableName ->
[ColumnInfo 'MSSQL] ->
TriggerName ->
TriggerOpsDef 'MSSQL ->
Maybe (PrimaryKey 'MSSQL (ColumnInfo 'MSSQL)) ->
m (Either QErr ())
createTableEventTrigger _serverConfigCtx (MSSQLSourceConfig _ mssqlExecCtx) table columns triggerName opsDefinition primaryKeyMaybe = do
liftIO $
runExceptT $ do
mssqlRunReadWrite mssqlExecCtx $ do
dropTriggerQ triggerName (tableSchema table)
mkAllTriggersQ triggerName table columns opsDefinition primaryKeyMaybe
newtype QualifiedTriggerName = QualifiedTriggerName {unQualifiedTriggerName :: Text}
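-- | Qualified name of the generated database trigger: the schema, a
-- @notify_hasura_@ prefix, the trigger name and the operation,
-- e.g. @dbo.notify_hasura_authors_INSERT@.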
msssqlIdenTrigger :: Ops -> SchemaName -> TriggerName -> QualifiedTriggerName
msssqlIdenTrigger op (SchemaName schemaName) triggerName =
QualifiedTriggerName $ qualifyHasuraTriggerName op $ triggerNameToTxt triggerName
where
qualifyHasuraTriggerName op' triggerName' = schemaName <> "." <> "notify_hasura_" <> triggerName' <> "_" <> tshow op'
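-- | Drops the generated INSERT, UPDATE and DELETE triggers (if they exist)
-- for the given trigger name.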
dropTriggerQ :: TriggerName -> SchemaName -> TxE QErr ()
dropTriggerQ triggerName schemaName =
mapM_
( \op ->
unitQueryE
HGE.defaultMSSQLTxErrorHandler
(rawUnescapedText $ getDropTriggerSQL op)
)
[INSERT, UPDATE, DELETE]
where
getDropTriggerSQL :: Ops -> Text
getDropTriggerSQL op =
"DROP TRIGGER IF EXISTS " <> unQualifiedTriggerName (msssqlIdenTrigger op schemaName triggerName)
mkAllTriggersQ ::
MonadMSSQLTx m =>
TriggerName ->
TableName ->
[ColumnInfo 'MSSQL] ->
TriggerOpsDef 'MSSQL ->
Maybe (PrimaryKey 'MSSQL (ColumnInfo 'MSSQL)) ->
m ()
mkAllTriggersQ triggerName tableName allCols fullSpec primaryKey = do
onJust (tdInsert fullSpec) (mkInsertTriggerQ triggerName tableName allCols)
onJust (tdDelete fullSpec) (mkDeleteTriggerQ triggerName tableName allCols)
onJust (tdUpdate fullSpec) (mkUpdateTriggerQ triggerName tableName allCols primaryKey)
getApplicableColumns :: [ColumnInfo 'MSSQL] -> SubscribeColumns 'MSSQL -> [ColumnInfo 'MSSQL]
getApplicableColumns allColumnInfos = \case
SubCStar -> allColumnInfos
SubCArray cols -> getColInfos cols allColumnInfos
-- | Currently we do not support Event Triggers on columns of Spatial data types.
-- We do this because the graphQL API for these types is currently broken
-- for MSSQL sources. Ref: https://github.com/hasura/graphql-engine-mono/issues/787
checkSpatialDataTypeColumns ::
MonadMSSQLTx m =>
[ColumnInfo 'MSSQL] ->
SubscribeOpSpec 'MSSQL ->
m ()
checkSpatialDataTypeColumns allCols (SubscribeOpSpec listenCols deliveryCols) = do
let listenColumns = getApplicableColumns allCols listenCols
deliveryColumns = getApplicableColumns allCols $ fromMaybe SubCStar deliveryCols
isGeoTypesInListenCols = any (\c -> isScalarColumnWhere isGeoType (ciType c)) listenColumns
isGeoTypesInDeliversCols = any (\c -> isScalarColumnWhere isGeoType (ciType c)) deliveryColumns
when (isGeoTypesInListenCols || isGeoTypesInDeliversCols) $
throw400 NotSupported "Event triggers for MS-SQL sources are not supported on tables having Geometry or Geography column types"
where
isGeoType = (`elem` geoTypes)
mkInsertTriggerQ ::
MonadMSSQLTx m =>
TriggerName ->
TableName ->
[ColumnInfo 'MSSQL] ->
SubscribeOpSpec 'MSSQL ->
m ()
mkInsertTriggerQ triggerName table allCols subOpSpec@(SubscribeOpSpec _listenCols deliveryCols) = do
checkSpatialDataTypeColumns allCols subOpSpec
liftMSSQLTx $ do
unitQueryE HGE.defaultMSSQLTxErrorHandler $
rawUnescapedText . LT.toStrict $ do
let deliveryColumns = getApplicableColumns allCols $ fromMaybe SubCStar deliveryCols
mkInsertTriggerQuery table triggerName deliveryColumns
mkDeleteTriggerQ ::
MonadMSSQLTx m =>
TriggerName ->
TableName ->
[ColumnInfo 'MSSQL] ->
SubscribeOpSpec 'MSSQL ->
m ()
mkDeleteTriggerQ triggerName table allCols subOpSpec@(SubscribeOpSpec _listenCols deliveryCols) = do
checkSpatialDataTypeColumns allCols subOpSpec
liftMSSQLTx $ do
unitQueryE HGE.defaultMSSQLTxErrorHandler $
rawUnescapedText . LT.toStrict $ do
let deliveryColumns = getApplicableColumns allCols $ fromMaybe SubCStar deliveryCols
mkDeleteTriggerQuery table triggerName deliveryColumns
mkUpdateTriggerQ ::
MonadMSSQLTx m =>
TriggerName ->
TableName ->
[ColumnInfo 'MSSQL] ->
Maybe (PrimaryKey 'MSSQL (ColumnInfo 'MSSQL)) ->
SubscribeOpSpec 'MSSQL ->
m ()
mkUpdateTriggerQ triggerName table allCols primaryKeyMaybe subOpSpec@(SubscribeOpSpec listenCols deliveryCols) = do
checkSpatialDataTypeColumns allCols subOpSpec
liftMSSQLTx $ do
primaryKey <- onNothing primaryKeyMaybe (throw400 NotSupported "Update event triggers for MS-SQL sources are only supported on tables with primary keys")
let deliveryColumns = getApplicableColumns allCols $ fromMaybe SubCStar deliveryCols
listenColumns = getApplicableColumns allCols listenCols
unitQueryE HGE.defaultMSSQLTxErrorHandler $
rawUnescapedText . LT.toStrict $
mkUpdateTriggerQuery table triggerName listenColumns deliveryColumns primaryKey
generateColumnTriggerAlias :: OpVar -> Maybe Text -> ColumnInfo 'MSSQL -> Text
generateColumnTriggerAlias op colPrefixMaybe colInfo =
let opText =
case op of
OLD -> "old"
NEW -> "new"
dbColNameText = columnNameText $ ciColumn colInfo
joinPrefixedDbColNameText =
-- prefix with the joining table's name
-- `id` -> `inserted.id` (prefix = 'inserted')
case colPrefixMaybe of
Just colPrefix -> colPrefix <> "." <> dbColNameText
Nothing -> dbColNameText
dbColAlias = "data" <> "." <> opText <> "." <> dbColNameText
in LT.toStrict $ [ST.stext| #{joinPrefixedDbColNameText} as [#{dbColAlias}]|]
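-- | Tables in the default @dbo@ schema are referred to by their bare name;
-- tables in any other schema are qualified with it.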
qualifyTableName :: TableName -> Text
qualifyTableName (TableName tableName (SchemaName schemaName)) =
if schemaName == "dbo"
then tableName
else schemaName <> "." <> tableName
mkInsertTriggerQuery :: TableName -> TriggerName -> [ColumnInfo 'MSSQL] -> LT.Text
mkInsertTriggerQuery table@(TableName tableName schema@(SchemaName schemaName)) triggerName columns =
let QualifiedTriggerName qualifiedTriggerName = msssqlIdenTrigger INSERT schema triggerName
triggerNameText = triggerNameToTxt triggerName
qualifiedTableName = qualifyTableName table
deliveryColsSQLExpression :: Text =
commaSeparated $ map (generateColumnTriggerAlias NEW Nothing) columns
in $(makeRelativeToProject "src-rsr/mssql_insert_trigger.sql.shakespeare" >>= ST.stextFile)
mkDeleteTriggerQuery :: TableName -> TriggerName -> [ColumnInfo 'MSSQL] -> LT.Text
mkDeleteTriggerQuery table@(TableName tableName schema@(SchemaName schemaName)) triggerName columns =
let QualifiedTriggerName qualifiedTriggerName = msssqlIdenTrigger DELETE schema triggerName
triggerNameText = triggerNameToTxt triggerName
qualifiedTableName = qualifyTableName table
deliveryColsSQLExpression :: Text = commaSeparated $ map (generateColumnTriggerAlias OLD Nothing) columns
in $(makeRelativeToProject "src-rsr/mssql_delete_trigger.sql.shakespeare" >>= ST.stextFile)
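-- | Builds the ON-clause joining the transition tables on the primary key
-- columns, e.g. @INSERTED.id = DELETED.id@.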
mkPrimaryKeyJoinExp :: Text -> Text -> [ColumnInfo 'MSSQL] -> Text
mkPrimaryKeyJoinExp lhsPrefix rhsPrefix columns =
T.intercalate " AND " $ singleColExp <$> columns
where
singleColExp colInfo =
let dbColNameText = columnNameText $ ciColumn colInfo
in LT.toStrict $ [ST.stext| #{lhsPrefix}.#{dbColNameText} = #{rhsPrefix}.#{dbColNameText} |]
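-- | Builds the WHERE-clause that restricts the update trigger to rows where a
-- listen column actually changed; an empty listen-column list produces no
-- WHERE clause, i.e. the trigger fires for every update.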
mkListenColumnsExp :: Text -> Text -> [ColumnInfo 'MSSQL] -> Text
mkListenColumnsExp _ _ [] = ""
mkListenColumnsExp lhsPrefix rhsPrefix columns =
"where " <> (T.intercalate " OR " $ singleColExp <$> columns)
where
singleColExp colInfo =
let dbColNameText = columnNameText $ ciColumn colInfo
in LT.toStrict $ [ST.stext| #{lhsPrefix}.#{dbColNameText} != #{rhsPrefix}.#{dbColNameText} |]
--
mkUpdateTriggerQuery :: TableName -> TriggerName -> [ColumnInfo 'MSSQL] -> [ColumnInfo 'MSSQL] -> PrimaryKey 'MSSQL (ColumnInfo 'MSSQL) -> LT.Text
mkUpdateTriggerQuery
table@(TableName tableName schema@(SchemaName schemaName))
triggerName
listenColumns
deliveryColumns
primaryKey =
let QualifiedTriggerName qualifiedTriggerName = msssqlIdenTrigger UPDATE schema triggerName
triggerNameText = triggerNameToTxt triggerName
qualifiedTableName = qualifyTableName table
oldDeliveryColsSQLExp :: Text = commaSeparated $ map (generateColumnTriggerAlias OLD (Just "DELETED")) deliveryColumns
newDeliveryColsSQLExp :: Text = commaSeparated $ map (generateColumnTriggerAlias NEW (Just "INSERTED")) deliveryColumns
primaryKeyJoinExp = mkPrimaryKeyJoinExp "INSERTED" "DELETED" (toList (_pkColumns primaryKey))
listenColumnExp = mkListenColumnsExp "INSERTED" "DELETED" listenColumns
in $(makeRelativeToProject "src-rsr/mssql_update_trigger.sql.shakespeare" >>= ST.stextFile)

View File

@ -41,7 +41,7 @@ import Hasura.RQL.Types.Common
import Hasura.RQL.Types.EventTrigger
import Hasura.RQL.Types.Eventing
import Hasura.RQL.Types.Source
import Hasura.RQL.Types.Table ()
import Hasura.RQL.Types.Table (PrimaryKey)
import Hasura.SQL.Backend
import Hasura.SQL.Types
import Hasura.Server.Migrate.Internal
@ -51,10 +51,6 @@ import Hasura.Session
import Hasura.Tracing qualified as Tracing
import Text.Shakespeare.Text qualified as ST
-- Corresponds to the 'OLD' and 'NEW' Postgres records; see
-- https://www.postgresql.org/docs/current/plpgsql-trigger.html
data OpVar = OLD | NEW deriving (Show)
fetchUndeliveredEvents ::
(MonadIO m, MonadError QErr m) =>
SourceConfig ('Postgres pgKind) ->
@ -189,8 +185,9 @@ createTableEventTrigger ::
[ColumnInfo ('Postgres pgKind)] ->
TriggerName ->
TriggerOpsDef ('Postgres pgKind) ->
Maybe (PrimaryKey ('Postgres pgKind) (ColumnInfo ('Postgres pgKind))) ->
m (Either QErr ())
createTableEventTrigger serverConfigCtx sourceConfig table columns triggerName opsDefinition = runPgSourceWriteTx sourceConfig $ do
createTableEventTrigger serverConfigCtx sourceConfig table columns triggerName opsDefinition _ = runPgSourceWriteTx sourceConfig $ do
-- Clean all existing triggers
liftTx $ dropTriggerQ triggerName -- executes DROP IF EXISTS.. sql
-- Create the given triggers

View File

@ -1027,8 +1027,7 @@ buildSchemaCacheRule logger env = proc (metadata, invalidationKeys) -> do
modifyErrA
( do
(info, dependencies) <- bindErrorA -< buildEventTriggerInfo @b env source table eventTriggerConf
let tableColumns = M.mapMaybe (^? _FIColumn) (_tciFieldInfoMap tableInfo)
recreateTriggerIfNeeded -< (table, M.elems tableColumns, triggerName, etcDefinition eventTriggerConf, sourceConfig, recreateEventTriggers <> reloadMetadataRecreateEventTrigger)
recreateTriggerIfNeeded -< (table, tableInfo, triggerName, etcDefinition eventTriggerConf, sourceConfig, recreateEventTriggers <> reloadMetadataRecreateEventTrigger)
recordDependencies -< (metadataObject, schemaObjectId, dependencies)
returnA -< info
)
@ -1043,7 +1042,7 @@ buildSchemaCacheRule logger env = proc (metadata, invalidationKeys) -> do
Inc.cache
proc
( tableName,
tableColumns,
tableInfo,
triggerName,
triggerDefinition,
sourceConfig,
@ -1052,6 +1051,7 @@ buildSchemaCacheRule logger env = proc (metadata, invalidationKeys) -> do
-> do
bindA
-< do
let tableColumns = M.elems $ M.mapMaybe (^? _FIColumn) (_tciFieldInfoMap tableInfo)
buildReason <- ask
serverConfigCtx <- askServerConfigCtx
let isCatalogUpdate =
@ -1072,6 +1072,7 @@ buildSchemaCacheRule logger env = proc (metadata, invalidationKeys) -> do
tableColumns
triggerName
triggerDefinition
(_tciPrimaryKey tableInfo)
buildCronTriggers ::
( ArrowChoice arr,

View File

@ -9,6 +9,7 @@ module Hasura.RQL.Types.Eventing
TriggerTypes (..),
WebhookRequest (..),
WebhookResponse (..),
OpVar (..),
invocationVersionET,
invocationVersionST,
)
@ -108,3 +109,14 @@ instance Q.ToPrepArg PGTextArray where
where
-- 25 is the OID value of TEXT, https://jdbc.postgresql.org/development/privateapi/constant-values.html
encoder = PE.array 25 . PE.dimensionArray foldl' (PE.encodingArray . PE.text_strict)
-- | Used to construct the payload of Event Trigger
--
-- OLD: Depicts the old database row value for UPDATE/DELETE trigger operations.
-- This is used to construct the 'data.old' field of the event trigger
-- payload. The value of 'data.old' is null in INSERT trigger operation.
--
-- NEW: Depicts the new database row value for INSERT/UPDATE trigger operations.
-- This is used to construct the 'data.new' field of the event trigger
-- payload. The value of 'data.new' is null in DELETE trigger operation.
data OpVar = OLD | NEW deriving (Show)

View File

@ -7,6 +7,7 @@ import Control.Monad.Trans.Control (MonadBaseControl)
import Data.Aeson
import Data.Set qualified as Set
import Data.Time.Clock qualified as Time
import Hasura.Backends.MSSQL.DDL.EventTrigger qualified as MSSQL
import Hasura.Backends.Postgres.DDL.EventTrigger qualified as PG
import Hasura.Base.Error
import Hasura.Prelude
@ -16,6 +17,7 @@ import Hasura.RQL.Types.Common
import Hasura.RQL.Types.EventTrigger
import Hasura.RQL.Types.Eventing
import Hasura.RQL.Types.Source
import Hasura.RQL.Types.Table (PrimaryKey)
import Hasura.SQL.Backend
import Hasura.Server.Types
import Hasura.Session (UserInfo)
@ -158,13 +160,20 @@ class Backend b => BackendEventTrigger (b :: BackendType) where
m (Either QErr Int)
createTableEventTrigger ::
(MonadBaseControl IO m, MonadIO m) =>
(MonadBaseControl IO m, MonadIO m, MonadError QErr m) =>
ServerConfigCtx ->
SourceConfig b ->
TableName b ->
[ColumnInfo b] ->
TriggerName ->
TriggerOpsDef b ->
-- TODO: Naveen: Find a better way to pass these extra, backend specific
-- parameters instead of adding a bunch of Maybes to the type class
-- functions.
--
-- Update event trigger on MS-SQL are only supported on tables with primary
-- keys. Hence the PrimaryKey argument below.
Maybe (PrimaryKey b (ColumnInfo b)) ->
m (Either QErr ())
instance BackendEventTrigger ('Postgres 'Vanilla) where
@ -191,7 +200,7 @@ instance BackendEventTrigger ('Postgres 'Citus) where
dropTriggerAndArchiveEvents _ _ = throw400 NotSupported "Event triggers are not supported for Citus sources"
redeliverEvent _ _ = throw400 NotSupported "Event triggers are not supported for Citus sources"
unlockEventsInSource _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for Citus sources"
createTableEventTrigger _ _ _ _ _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for Citus sources"
createTableEventTrigger _ _ _ _ _ _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for Citus sources"
instance BackendEventTrigger 'MSSQL where
insertManualEvent _ _ _ _ _ _ = throw400 NotSupported $ "Event triggers are not supported for MS-SQL sources"
@ -204,7 +213,7 @@ instance BackendEventTrigger 'MSSQL where
dropTriggerAndArchiveEvents _ _ = throw400 NotSupported "Event triggers are not supported for MS-SQL sources"
redeliverEvent _ _ = throw400 NotSupported "Event triggers are not supported for MS-SQL sources"
unlockEventsInSource _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for MS-SQL sources"
createTableEventTrigger _ _ _ _ _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for MS-SQL sources"
createTableEventTrigger = MSSQL.createTableEventTrigger
instance BackendEventTrigger 'BigQuery where
insertManualEvent _ _ _ _ _ _ = throw400 NotSupported $ "Event triggers are not supported for BigQuery sources"
@ -217,7 +226,7 @@ instance BackendEventTrigger 'BigQuery where
dropTriggerAndArchiveEvents _ _ = throw400 NotSupported "Event triggers are not supported for BigQuery sources"
redeliverEvent _ _ = throw400 NotSupported "Event triggers are not supported for BigQuery sources"
unlockEventsInSource _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for BigQuery sources"
createTableEventTrigger _ _ _ _ _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for BigQuery sources"
createTableEventTrigger _ _ _ _ _ _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for BigQuery sources"
instance BackendEventTrigger 'MySQL where
insertManualEvent _ _ _ _ _ _ = throw400 NotSupported $ "Event triggers are not supported for MySQL sources"
@ -230,7 +239,7 @@ instance BackendEventTrigger 'MySQL where
dropTriggerAndArchiveEvents _ _ = throw400 NotSupported "Event triggers are not supported for MySQL sources"
redeliverEvent _ _ = throw400 NotSupported "Event triggers are not supported for MySQL sources"
unlockEventsInSource _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for MySQL sources"
createTableEventTrigger _ _ _ _ _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for MySQL sources"
createTableEventTrigger _ _ _ _ _ _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for MySQL sources"
--------------------------------------------------------------------------------
@ -259,5 +268,5 @@ instance BackendEventTrigger 'DataWrapper where
throw400 NotSupported "Event triggers are not supported for GraphQL Data Wrappers."
unlockEventsInSource _ _ =
runExceptT $ throw400 NotSupported "Event triggers are not supported for GraphQL Data Wrappers."
createTableEventTrigger _ _ _ _ _ _ =
createTableEventTrigger _ _ _ _ _ _ _ =
runExceptT $ throw400 NotSupported "Event triggers are not supported for GraphQL Data Wrappers."

View File

@ -0,0 +1,14 @@
CREATE OR ALTER TRIGGER #{qualifiedTriggerName}
ON #{qualifiedTableName}
AFTER DELETE
AS
BEGIN
  DECLARE @json NVARCHAR(MAX)
  SET @json = (
    SELECT #{deliveryColsSQLExpression}, NULL as [data.new]
    FROM DELETED
    FOR JSON PATH, INCLUDE_NULL_VALUES
  )
  insert into hdb_catalog.event_log (schema_name,table_name,trigger_name,payload)
  select '#{schemaName}','#{tableName}','#{triggerNameText}', value from OPENJSON (@json)
END;

View File

@ -0,0 +1,14 @@
CREATE OR ALTER TRIGGER #{qualifiedTriggerName}
ON #{qualifiedTableName}
AFTER INSERT
AS
BEGIN
  DECLARE @json NVARCHAR(MAX)
  SET @json = (
    SELECT #{deliveryColsSQLExpression}, NULL as [data.old]
    FROM INSERTED
    FOR JSON PATH, INCLUDE_NULL_VALUES
  )
  insert into hdb_catalog.event_log (schema_name,table_name,trigger_name,payload)
  select '#{schemaName}','#{tableName}','#{triggerNameText}', value from OPENJSON (@json)
END;

View File

@ -0,0 +1,17 @@
CREATE OR ALTER TRIGGER #{qualifiedTriggerName}
ON #{qualifiedTableName}
AFTER UPDATE
AS
BEGIN
  DECLARE @json NVARCHAR(MAX)
  SET @json = (
    SELECT #{oldDeliveryColsSQLExp}, #{newDeliveryColsSQLExp}
    FROM DELETED
    JOIN INSERTED
    ON #{primaryKeyJoinExp}
    #{listenColumnExp}
    FOR JSON PATH
  )
  insert into hdb_catalog.event_log (schema_name,table_name,trigger_name, payload)
  select '#{schemaName}','#{tableName}','#{triggerNameText}', value from OPENJSON (@json)
END