Mirror of https://github.com/hasura/graphql-engine.git (synced 2024-12-14 17:02:49 +03:00)
server: fix error in metadata APIs with inconsistency
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6669
Co-authored-by: Tirumarai Selvan <8663570+tirumaraiselvan@users.noreply.github.com>
Co-authored-by: Sean Park-Ross <94021366+seanparkross@users.noreply.github.com>
GitOrigin-RevId: 1b004074b41ccb6512123cdb1707b39792e97927
This commit is contained in:
parent f430e5b599
commit 8dab7df169
@@ -54,3 +54,4 @@ cause undefined behaviour during run-time. This should not be a problem if the H
- [Using serverless functions](/event-triggers/serverless.mdx)
- [Event trigger samples](/event-triggers/samples.mdx)
- [Clean up event data](/event-triggers/clean-up/index.mdx)
- [Remove Event Triggers](/event-triggers/remove-event-triggers.mdx)
docs/docs/event-triggers/remove-event-triggers.mdx (new file, 122 lines)
@@ -0,0 +1,122 @@
---
sidebar_label: Removing Event Triggers
sidebar_position: 1
description: Remove Event Triggers
keywords:
  - hasura
  - docs
  - event trigger
  - cleanup
  - delete
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Thumbnail from '@site/src/components/Thumbnail';

# Removing Event Triggers

## Removing an Event Trigger via metadata API

An Event Trigger can be removed using the following metadata APIs **only when the metadata is consistent with the database**.

- **delete_event_trigger**: Refer to the [pg_delete_event_trigger](/api-reference/metadata-api/event-triggers.mdx#metadata-pg-delete-event-trigger) API to remove an Event Trigger in a Postgres source
- **untrack_table**: Refer to the [pg_untrack_table](/api-reference/metadata-api/table-view.mdx/#metadata-pg-untrack-table) API to untrack a table present in a Postgres source
- **drop_source**: Refer to the [pg_drop_source](/api-reference/metadata-api/source.mdx/#metadata-pg-drop-source) API to drop a Postgres source

The following metadata APIs can be used to remove an Event Trigger even with inconsistent metadata, although they may leave a Hasura footprint in the database:

- **replace_metadata**: Refer to the [replace_metadata](/api-reference/metadata-api/manage-metadata.mdx/#metadata-replace-metadata) API to replace the existing metadata with new metadata
- **clear_metadata**: Refer to the [clear_metadata](/api-reference/metadata-api/manage-metadata.mdx/#metadata-clear-metadata) API to clear the metadata

Refer to the following sections on cleaning up Hasura footprints manually from the database.

## Clean up Event Trigger footprints manually

When an Event Trigger is created, Hasura creates SQL triggers on the table corresponding to each operation mentioned in the Event Trigger configuration (INSERT/UPDATE/DELETE).
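
These SQL triggers follow the `notify_hasura_<event-trigger-name>_<OPERATION-NAME>` naming scheme used below, so they can be listed directly from the database. As a sketch (assuming only that naming scheme; this query is not part of the original steps):

```sql
-- List the SQL triggers created by Hasura and the tables they are attached to.
SELECT trigger_name, event_manipulation, event_object_schema, event_object_table
FROM information_schema.triggers
WHERE trigger_name LIKE 'notify_hasura_%';
```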

When an inconsistent Table/Event Trigger is removed via the `replace_metadata` API, it may leave orphaned SQL triggers in the database. The following command can be used to manually delete SQL triggers corresponding to an Event Trigger on a table:

```sql
DROP FUNCTION hdb_catalog."notify_hasura_<event-trigger-name>_<OPERATION-NAME>" CASCADE;
```

For example, to delete the SQL trigger corresponding to an Event Trigger `users_all` on a table `users` with the operation `INSERT` in the Event Trigger configuration:

```sql
DROP FUNCTION hdb_catalog."notify_hasura_users_all_INSERT" CASCADE;
```

:::info Note

The SQL trigger should be deleted for each operation mentioned in the Event Trigger configuration, i.e. INSERT/UPDATE/DELETE.

:::
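
For example, if the `users_all` Event Trigger above were configured with all three operations, the full cleanup would be (a sketch following the same pattern):

```sql
-- Assumes the `users_all` Event Trigger is configured with INSERT, UPDATE and DELETE.
DROP FUNCTION hdb_catalog."notify_hasura_users_all_INSERT" CASCADE;
DROP FUNCTION hdb_catalog."notify_hasura_users_all_UPDATE" CASCADE;
DROP FUNCTION hdb_catalog."notify_hasura_users_all_DELETE" CASCADE;
```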

## Clean up Hasura footprints from a source manually {#clean-footprints-manually}

When an inconsistent source is dropped, it may leave a Hasura footprint in the database due to Event Triggers. The following can be used to remove all footprints of Event Triggers present in a source from the database:

### Case 1: When using a different metadata database from the source database

In this case, the `hdb_metadata` table is not present in the `hdb_catalog` schema of the source.

To clean up the Hasura footprint completely, drop the `hdb_catalog` schema:

```sql
DROP SCHEMA IF EXISTS hdb_catalog CASCADE;
```
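
Optionally, the result can be verified with a quick check (a sketch, not one of the documented steps):

```sql
-- Should return zero rows once the hdb_catalog schema has been dropped.
SELECT schema_name
FROM information_schema.schemata
WHERE schema_name = 'hdb_catalog';
```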

### Case 2: When the metadata database and source database are the same

In this case, a `hdb_metadata` table is present in the `hdb_catalog` schema of the source. You may want to preserve the metadata but remove the remaining Hasura footprint, i.e. the tables created for Event Triggers and the corresponding SQL triggers.

**Step 1:** To drop the SQL triggers corresponding to the Event Triggers created, refer to the [clean up Event Trigger footprints manually](/event-triggers/remove-event-triggers.mdx/#clean-up-event-trigger-footprints-manually) section. Alternatively, the following command can be used to drop all Hasura-created SQL triggers in the source:

```sql
do $$
declare f record;
begin
  -- Find every SQL trigger created by Hasura (they follow the notify_hasura_* naming scheme)
  for f in select trigger_name, event_object_table
           from information_schema.triggers
           where trigger_name like 'notify_hasura_%'
  loop
    -- Dropping the trigger function with CASCADE also drops the trigger that uses it
    EXECUTE 'DROP FUNCTION hdb_catalog.' || QUOTE_IDENT(f.trigger_name) || ' CASCADE';
  end loop;
end;
$$;
```

**Step 2:** The following commands can be used to delete the Event Trigger tables from `hdb_catalog`:

```sql
DROP TABLE IF EXISTS hdb_catalog.hdb_source_catalog_version;
DROP FUNCTION IF EXISTS hdb_catalog.insert_event_log(text, text, text, text, json);
DROP TABLE IF EXISTS hdb_catalog.event_invocation_logs;
DROP TABLE IF EXISTS hdb_catalog.event_log;
DROP TABLE IF EXISTS hdb_catalog.hdb_event_log_cleanups;
```

:::info Note

It is recommended to perform the above steps in a single transaction.

:::
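
For instance, a minimal sketch that runs Step 1 and Step 2 in a single transaction (assuming the same object names as above):

```sql
BEGIN;
-- Step 1: drop the Hasura-created SQL triggers via their trigger functions.
do $$
declare f record;
begin
  for f in select trigger_name from information_schema.triggers
           where trigger_name like 'notify_hasura_%'
  loop
    EXECUTE 'DROP FUNCTION hdb_catalog.' || QUOTE_IDENT(f.trigger_name) || ' CASCADE';
  end loop;
end;
$$;
-- Step 2: drop the Event Trigger tables from hdb_catalog.
DROP TABLE IF EXISTS hdb_catalog.hdb_source_catalog_version;
DROP FUNCTION IF EXISTS hdb_catalog.insert_event_log(text, text, text, text, json);
DROP TABLE IF EXISTS hdb_catalog.event_invocation_logs;
DROP TABLE IF EXISTS hdb_catalog.event_log;
DROP TABLE IF EXISTS hdb_catalog.hdb_event_log_cleanups;
COMMIT;
```
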
@@ -85,6 +85,26 @@ tests opts = do
            is_consistent: true
            inconsistent_objects: []
          |]
      Postgres.dropTable testEnvironment table

  describe "replacing metadata with already present inconsistency" do
    it "drop table for an already inconsistent table with event trigger" \testEnvironment -> do
      Postgres.createTable testEnvironment table
      _ <- postMetadata testEnvironment (setupMetadataWithTableAndEventTrigger testEnvironment)
      Postgres.dropTable testEnvironment table
      postMetadata testEnvironment (replaceMetadataDropInconsistentTable testEnvironment)
        `shouldReturnYaml` [yaml|
            is_consistent: true
            inconsistent_objects: []
          |]

    it "drop source for an already inconsistent source" \testEnvironment -> do
      _ <- postMetadata testEnvironment (setupMetadataWithInconsistentSource testEnvironment)
      postMetadata testEnvironment repaceMetadataRemoveInconsistentSource
        `shouldReturnYaml` [yaml|
            is_consistent: true
            inconsistent_objects: []
          |]

reloadMetadata :: Value
reloadMetadata =
@@ -118,6 +138,96 @@ replaceMetadataWithTable testEnvironment =
    schemaName = Schema.getSchemaName testEnvironment
    tableName = Schema.tableName table

setupMetadataWithInconsistentSource :: TestEnvironment -> Value
setupMetadataWithInconsistentSource testEnvironment =
  [yaml|
    type: replace_metadata
    args:
      allow_inconsistent_metadata: true
      metadata:
        version: 3
        sources:
          - name: *sourceName
            kind: postgres
            tables:
              - table:
                  schema: *schemaName
                  name: *tableName
            configuration:
              connection_info:
                database_url: postgres://postgres:postgres@postgres:5432/non_existent_db
                pool_settings: {}
  |]
  where
    backendTypeMetadata = Maybe.fromMaybe (error "Unknown backend") $ backendTypeConfig testEnvironment
    sourceName = Fixture.backendSourceName backendTypeMetadata
    schemaName = Schema.getSchemaName testEnvironment
    tableName = Schema.tableName table

repaceMetadataRemoveInconsistentSource :: Value
repaceMetadataRemoveInconsistentSource =
  [yaml|
    type: replace_metadata
    args:
      allow_inconsistent_metadata: true
      metadata:
        version: 3
        sources: []
  |]

setupMetadataWithTableAndEventTrigger :: TestEnvironment -> Value
setupMetadataWithTableAndEventTrigger testEnvironment =
  [yaml|
    type: replace_metadata
    args:
      allow_inconsistent_metadata: true
      metadata:
        version: 3
        sources:
          - name: *sourceName
            kind: postgres
            tables:
              - table:
                  schema: *schemaName
                  name: *tableName
                event_triggers:
                  - name: foo-trigger
                    definition:
                      insert:
                        columns: '*'
                    retry_conf:
                      interval_sec: 10
                      num_retries: 0
                      timeout_sec: 60
                    webhook: https://httpbin.org/post
            configuration: *sourceConfiguration
  |]
  where
    backendTypeMetadata = Maybe.fromMaybe (error "Unknown backend") $ backendTypeConfig testEnvironment
    sourceConfiguration = Postgres.defaultSourceConfiguration testEnvironment
    sourceName = Fixture.backendSourceName backendTypeMetadata
    schemaName = Schema.getSchemaName testEnvironment
    tableName = Schema.tableName table

replaceMetadataDropInconsistentTable :: TestEnvironment -> Value
replaceMetadataDropInconsistentTable testEnvironment =
  [yaml|
    type: replace_metadata
    args:
      allow_inconsistent_metadata: true
      metadata:
        version: 3
        sources:
          - name: *sourceName
            kind: postgres
            tables: []
            configuration: *sourceConfiguration
  |]
  where
    backendTypeMetadata = Maybe.fromMaybe (error "Unknown backend") $ backendTypeConfig testEnvironment
    sourceConfiguration = Postgres.defaultSourceConfiguration testEnvironment
    sourceName = Fixture.backendSourceName backendTypeMetadata

expectedInconsistentYaml :: Maybe Text -> TestEnvironment -> Value
expectedInconsistentYaml message testEnvironment =
  [interpolateYaml|
@@ -382,20 +382,20 @@ askTabInfoFromTrigger ::
  m (TableInfo b)
askTabInfoFromTrigger sourceName triggerName = do
  schemaCache <- askSchemaCache
  getTabInfoFromSchemaCache schemaCache sourceName triggerName
  tableInfoMaybe <- getTabInfoFromSchemaCache schemaCache sourceName triggerName
  tableInfoMaybe `onNothing` throw400 NotExists errMsg
  where
    errMsg = "event trigger " <> triggerName <<> " does not exist"

getTabInfoFromSchemaCache ::
  (Backend b, QErrM m) =>
  SchemaCache ->
  SourceName ->
  TriggerName ->
  m (TableInfo b)
  m (Maybe (TableInfo b))
getTabInfoFromSchemaCache schemaCache sourceName triggerName = do
  let tabInfos = HM.elems $ fromMaybe mempty $ unsafeTableCache sourceName $ scSources schemaCache
  find (isJust . HM.lookup triggerName . _tiEventTriggerInfoMap) tabInfos
    `onNothing` throw400 NotExists errMsg
  where
    errMsg = "event trigger " <> triggerName <<> " does not exist"
  pure $ find (isJust . HM.lookup triggerName . _tiEventTriggerInfoMap) tabInfos

askEventTriggerInfo ::
  forall b m.
@@ -551,9 +551,12 @@ getTableNameFromTrigger ::
  SchemaCache ->
  SourceName ->
  TriggerName ->
  m (TableName b)
getTableNameFromTrigger schemaCache sourceName triggerName =
  (_tciName . _tiCoreInfo) <$> getTabInfoFromSchemaCache @b schemaCache sourceName triggerName
  m (Maybe (TableName b))
getTableNameFromTrigger schemaCache sourceName triggerName = do
  tableInfoMaybe <- getTabInfoFromSchemaCache @b schemaCache sourceName triggerName
  case tableInfoMaybe of
    Nothing -> pure Nothing
    Just tableInfo -> pure $ Just $ (_tciName . _tiCoreInfo) $ tableInfo

runCleanupEventTriggerLog ::
  (MonadEventLogCleanup m, MonadError QErr m) =>
@@ -621,11 +624,14 @@ toggleEventTriggerCleanupAction conf cleanupSwitch = do
      AB.dispatchAnyBackend @BackendEventTrigger backendSourceInfo \(SourceInfo {} :: SourceInfo b) -> do
        forM_ triggerNames $ \triggerName -> do
          eventTriggerInfo <- askEventTriggerInfo @b sourceName triggerName
          tableName <- getTableNameFromTrigger @b schemaCache sourceName triggerName
          cleanupConfig <-
            (etiCleanupConfig eventTriggerInfo)
              `onNothing` throw400 NotExists ("cleanup config does not exist for " <> triggerNameToTxt triggerName)
          updateCleanupStatusInMetadata @b cleanupConfig cleanupSwitch sourceName tableName triggerName
          tableNameMaybe <- getTableNameFromTrigger @b schemaCache sourceName triggerName
          case tableNameMaybe of
            Nothing -> throw400 NotExists $ "event trigger " <> triggerName <<> " does not exist"
            Just tableName -> do
              cleanupConfig <-
                (etiCleanupConfig eventTriggerInfo)
                  `onNothing` throw400 NotExists ("cleanup config does not exist for " <> triggerNameToTxt triggerName)
              updateCleanupStatusInMetadata @b cleanupConfig cleanupSwitch sourceName tableName triggerName
  pure successMsg
  where
    traverseTableHelper ::
@@ -13,6 +13,7 @@ module Hasura.RQL.DDL.Metadata
    runTestWebhookTransform,
    runSetMetricsConfig,
    runRemoveMetricsConfig,
    ShouldDeleteEventTriggerCleanupSchedules (..),
    module Hasura.RQL.DDL.Metadata.Types,
  )
where
@@ -80,8 +81,14 @@ import Hasura.RQL.Types.SourceCustomization
import Hasura.SQL.AnyBackend qualified as AB
import Hasura.SQL.Backend (BackendType (..))
import Hasura.SQL.BackendMap qualified as BackendMap
import Hasura.Server.Logging (MetadataLog (..))
import Network.HTTP.Client.Transformable qualified as HTTP

data ShouldDeleteEventTriggerCleanupSchedules
  = DeleteEventTriggerCleanupSchedules
  | DontDeleteEventTriggerCleanupSchedules
  deriving (Show, Eq)

runClearMetadata ::
  forall m r.
  ( MonadIO m,
@@ -97,16 +104,30 @@ runClearMetadata ::
  m EncJSON
runClearMetadata _ = do
  metadata <- getMetadata
  logger :: (HL.Logger HL.Hasura) <- asks getter
  -- Clean up all sources, drop hdb_catalog schema from source
  for_ (OMap.toList $ _metaSources metadata) $ \(sourceName, backendSourceMetadata) ->
    AB.dispatchAnyBackend @BackendMetadata (unBackendSourceMetadata backendSourceMetadata) \(_sourceMetadata :: SourceMetadata b) -> do
      sourceInfo <- askSourceInfo @b sourceName
      -- We do not bother dropping all dependencies on the source, because the
      -- metadata is going to be replaced with an empty metadata. And dropping the
      -- depdencies would lead to rebuilding of schema cache which is of no use here
      -- since we do not use the rebuilt schema cache. Hence, we only clean up the
      -- 'hdb_catalog' tables from the source.
      runPostDropSourceHook sourceName sourceInfo
      sourceInfoMaybe <- askSourceInfoMaybe @b sourceName
      case sourceInfoMaybe of
        Nothing ->
          HL.unLogger logger $
            MetadataLog
              HL.LevelWarn
              ( "Could not cleanup the source '"
                  <> sourceName
                  <<> "' while dropping it from the graphql-engine as it is inconsistent."
                  <> " Please consider cleaning the resources created by the graphql engine,"
                  <> " refer https://hasura.io/docs/latest/graphql/core/event-triggers/remove-event-triggers/#clean-footprints-manually "
              )
              J.Null
        Just sourceInfo ->
          -- We do not bother dropping all dependencies on the source, because the
          -- metadata is going to be replaced with an empty metadata. And dropping the
          -- depdencies would lead to rebuilding of schema cache which is of no use here
          -- since we do not use the rebuilt schema cache. Hence, we only clean up the
          -- 'hdb_catalog' tables from the source.
          runPostDropSourceHook sourceName sourceInfo

  -- We can infer whether the server is started with `--database-url` option
  -- (or corresponding env variable) by checking the existence of @'defaultSource'
@@ -133,7 +154,8 @@ runClearMetadata _ = do
              Nothing
       in emptyMetadata
            & metaSources %~ OMap.insert defaultSource emptyDefaultSource
  runReplaceMetadataV1 $ RMWithSources emptyMetadata'
  -- ShouldDeleteEventTriggerCleanupSchedules here is False because when `clear_metadata` is called, it checks the sources present in the schema-cache, and deletes `hdb_event_log_cleanups` also. Since the inconsistent source is not present in schema-cache, the process fails and it gives an sql run error.
  runReplaceMetadataV1 DontDeleteEventTriggerCleanupSchedules $ RMWithSources emptyMetadata'

{- Note [Cleanup for dropped triggers]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -162,8 +184,8 @@ runReplaceMetadata ::
  ReplaceMetadata ->
  m EncJSON
runReplaceMetadata = \case
  RMReplaceMetadataV1 v1args -> runReplaceMetadataV1 v1args
  RMReplaceMetadataV2 v2args -> runReplaceMetadataV2 v2args
  RMReplaceMetadataV1 v1args -> runReplaceMetadataV1 DeleteEventTriggerCleanupSchedules v1args
  RMReplaceMetadataV2 v2args -> runReplaceMetadataV2 DeleteEventTriggerCleanupSchedules v2args

runReplaceMetadataV1 ::
  ( CacheRWM m,
@@ -175,10 +197,11 @@ runReplaceMetadataV1 ::
    Has (HL.Logger HL.Hasura) r,
    MonadEventLogCleanup m
  ) =>
  ShouldDeleteEventTriggerCleanupSchedules ->
  ReplaceMetadataV1 ->
  m EncJSON
runReplaceMetadataV1 =
  (successMsg <$) . runReplaceMetadataV2 . ReplaceMetadataV2 NoAllowInconsistentMetadata
runReplaceMetadataV1 shouldDeleteEventTriggerCleanupSchedules =
  (successMsg <$) . (runReplaceMetadataV2 shouldDeleteEventTriggerCleanupSchedules) . ReplaceMetadataV2 NoAllowInconsistentMetadata

runReplaceMetadataV2 ::
  forall m r.
@@ -191,9 +214,10 @@ runReplaceMetadataV2 ::
    Has (HL.Logger HL.Hasura) r,
    MonadEventLogCleanup m
  ) =>
  ShouldDeleteEventTriggerCleanupSchedules ->
  ReplaceMetadataV2 ->
  m EncJSON
runReplaceMetadataV2 ReplaceMetadataV2 {..} = do
runReplaceMetadataV2 shouldDeleteEventTriggerCleanupSchedules ReplaceMetadataV2 {..} = do
  logger :: (HL.Logger HL.Hasura) <- asks getter
  -- we drop all the future cron trigger events before inserting the new metadata
  -- and re-populating future cron events below
@@ -248,8 +272,20 @@ runReplaceMetadataV2 ReplaceMetadataV2 {..} = do
          -- clean that source.
          onNothing (OMap.lookup oldSource newSources) $ do
            AB.dispatchAnyBackend @BackendMetadata (unBackendSourceMetadata oldSourceBackendMetadata) \(_oldSourceMetadata :: SourceMetadata b) -> do
              sourceInfo <- askSourceInfo @b oldSource
              runPostDropSourceHook oldSource sourceInfo
              sourceInfoMaybe <- askSourceInfoMaybe @b oldSource
              case sourceInfoMaybe of
                Nothing ->
                  HL.unLogger logger $
                    MetadataLog
                      HL.LevelWarn
                      ( "Could not cleanup the source '"
                          <> oldSource
                          <<> "' while dropping it from the graphql-engine as it is inconsistent."
                          <> " Please consider cleaning the resources created by the graphql engine,"
                          <> " refer https://hasura.io/docs/latest/graphql/core/event-triggers/remove-event-triggers/#clean-footprints-manually "
                      )
                      J.Null
                Just sourceInfo -> runPostDropSourceHook oldSource sourceInfo
            pure (BackendSourceMetadata (AB.mkAnyBackend _oldSourceMetadata))

  -- Check for duplicate trigger names in the new source metadata
@@ -260,7 +296,6 @@ runReplaceMetadataV2 ReplaceMetadataV2 {..} = do
      duplicateTriggerNamesInNewMetadata = newTriggerNames \\ (L.uniques newTriggerNames)
  unless (null duplicateTriggerNamesInNewMetadata) $ do
    throw400 NotSupported ("Event trigger with duplicate names not allowed: " <> dquoteList (map triggerNameToTxt duplicateTriggerNamesInNewMetadata))

  let cacheInvalidations =
        CacheInvalidations
          { ciMetadata = False,
@@ -280,7 +315,8 @@ runReplaceMetadataV2 ReplaceMetadataV2 {..} = do
  -- See Note [Cleanup for dropped triggers]
  dropSourceSQLTriggers logger oldSchemaCache (_metaSources oldMetadata) (_metaSources metadata)

  generateSQLTriggerCleanupSchedules (_metaSources oldMetadata) (_metaSources metadata)
  when (shouldDeleteEventTriggerCleanupSchedules == DeleteEventTriggerCleanupSchedules) $
    generateSQLTriggerCleanupSchedules logger (_metaSources oldMetadata) (_metaSources metadata)

  encJFromJValue . formatInconsistentObjs . scInconsistentObjs <$> askSchemaCache
  where
@@ -375,25 +411,61 @@ runReplaceMetadataV2 ReplaceMetadataV2 {..} = do
        -- TODO: Determine if any errors should be thrown from askSourceConfig at all if the errors are just being discarded
        return $
          flip catchError catcher do
            sourceConfig <- askSourceConfig @b source
            for_ droppedEventTriggers $
              \triggerName -> do
                tableName <- getTableNameFromTrigger @b oldSchemaCache source triggerName
                dropTriggerAndArchiveEvents @b sourceConfig triggerName tableName
            for_ (OMap.toList retainedNewTriggers) $ \(retainedNewTriggerName, retainedNewTriggerConf) ->
              case OMap.lookup retainedNewTriggerName oldTriggersMap of
                Nothing -> pure ()
                Just oldTriggerConf -> do
                  let newTriggerOps = etcDefinition retainedNewTriggerConf
                      oldTriggerOps = etcDefinition oldTriggerConf
                      isDroppedOp old new = isJust old && isNothing new
                      droppedOps =
                        [ (bool Nothing (Just INSERT) (isDroppedOp (tdInsert oldTriggerOps) (tdInsert newTriggerOps))),
                          (bool Nothing (Just UPDATE) (isDroppedOp (tdUpdate oldTriggerOps) (tdUpdate newTriggerOps))),
                          (bool Nothing (Just ET.DELETE) (isDroppedOp (tdDelete oldTriggerOps) (tdDelete newTriggerOps)))
                        ]
                  tableName <- getTableNameFromTrigger @b oldSchemaCache source retainedNewTriggerName
                  dropDanglingSQLTrigger @b sourceConfig retainedNewTriggerName tableName (Set.fromList $ catMaybes droppedOps)
            sourceConfigMaybe <- askSourceConfigMaybe @b source
            case sourceConfigMaybe of
              Nothing ->
                -- TODO: Add user facing docs on how to drop triggers manually. Issue #7104
                logger $
                  MetadataLog
                    HL.LevelWarn
                    ( "Could not drop SQL triggers present in the source '"
                        <> source
                        <<> "' as it is inconsistent."
                        <> " While creating an event trigger, Hasura creates SQL triggers on the table."
                        <> " Please refer https://hasura.io/docs/latest/graphql/core/event-triggers/remove-event-triggers/#clean-up-event-trigger-footprints-manually "
                        <> " to delete the sql triggers from the database manually."
                        <> " For more details, please refer https://hasura.io/docs/latest/graphql/core/event-triggers/index.html "
                    )
                    J.Null
              Just sourceConfig -> do
                for_ droppedEventTriggers $
                  \triggerName -> do
                    tableNameMaybe <- getTableNameFromTrigger @b oldSchemaCache source triggerName
                    case tableNameMaybe of
                      Nothing ->
                        logger $
                          MetadataLog
                            HL.LevelWarn
                            (sqlTriggerError triggerName)
                            J.Null
                      Just tableName -> dropTriggerAndArchiveEvents @b sourceConfig triggerName tableName
                for_ (OMap.toList retainedNewTriggers) $ \(retainedNewTriggerName, retainedNewTriggerConf) ->
                  case OMap.lookup retainedNewTriggerName oldTriggersMap of
                    Nothing ->
                      logger $
                        MetadataLog
                          HL.LevelWarn
                          (sqlTriggerError retainedNewTriggerName)
                          J.Null
                    Just oldTriggerConf -> do
                      let newTriggerOps = etcDefinition retainedNewTriggerConf
                          oldTriggerOps = etcDefinition oldTriggerConf
                          isDroppedOp old new = isJust old && isNothing new
                          droppedOps =
                            [ (bool Nothing (Just INSERT) (isDroppedOp (tdInsert oldTriggerOps) (tdInsert newTriggerOps))),
                              (bool Nothing (Just UPDATE) (isDroppedOp (tdUpdate oldTriggerOps) (tdUpdate newTriggerOps))),
                              (bool Nothing (Just ET.DELETE) (isDroppedOp (tdDelete oldTriggerOps) (tdDelete newTriggerOps)))
                            ]
                      tableNameMaybe <- getTableNameFromTrigger @b oldSchemaCache source retainedNewTriggerName
                      case tableNameMaybe of
                        Nothing ->
                          logger $
                            MetadataLog
                              HL.LevelWarn
                              (sqlTriggerError retainedNewTriggerName)
                              J.Null
                        Just tableName -> do
                          dropDanglingSQLTrigger @b sourceConfig retainedNewTriggerName tableName (Set.fromList $ catMaybes droppedOps)
      where
        compose ::
          SourceName ->
@@ -403,28 +475,50 @@ runReplaceMetadataV2 ReplaceMetadataV2 {..} = do
          m ()
        compose sourceName x y f = AB.composeAnyBackend @BackendEventTrigger f x y (logger $ HL.UnstructuredLog HL.LevelInfo $ SB.fromText $ "Event trigger clean up couldn't be done on the source " <> sourceName <<> " because it has changed its type")

        sqlTriggerError :: TriggerName -> Text
        sqlTriggerError triggerName =
          ( "Could not drop SQL triggers associated with event trigger '"
              <> triggerName
              <<> "'. While creating an event trigger, Hasura creates SQL triggers on the table."
              <> " Please refer https://hasura.io/docs/latest/graphql/core/event-triggers/remove-event-triggers/#clean-up-event-trigger-footprints-manually "
              <> " to delete the sql triggers from the database manually."
              <> " For more details, please refer https://hasura.io/docs/latest/graphql/core/event-triggers/index.html "
          )

    generateSQLTriggerCleanupSchedules ::
      HL.Logger HL.Hasura ->
      InsOrdHashMap SourceName BackendSourceMetadata ->
      InsOrdHashMap SourceName BackendSourceMetadata ->
      m ()
    generateSQLTriggerCleanupSchedules oldSources newSources = do
    generateSQLTriggerCleanupSchedules (HL.Logger logger) oldSources newSources = do
      -- If there are any event trigger cleanup configs with different cron then delete the older schedules
      -- generate cleanup logs for new event trigger cleanup config
      for_ (OMap.toList newSources) $ \(source, newBackendSourceMetadata) -> do
        for_ (OMap.lookup source oldSources) $ \oldBackendSourceMetadata ->
          AB.dispatchAnyBackend @BackendEventTrigger (unBackendSourceMetadata newBackendSourceMetadata) \(newSourceMetadata :: SourceMetadata b) -> do
            dispatch oldBackendSourceMetadata \oldSourceMetadata -> do
              sourceInfo@(SourceInfo _ _ _ sourceConfig _ _) <- askSourceInfo @b source
              let getEventMapWithCC sourceMeta = Map.fromList $ concatMap (getAllETWithCleanupConfigInTableMetadata . snd) $ OMap.toList $ _smTables sourceMeta
                  oldEventTriggersWithCC = getEventMapWithCC oldSourceMetadata
                  newEventTriggersWithCC = getEventMapWithCC newSourceMetadata
                  -- event triggers with cleanup config that existed in old metadata but are missing in new metadata
                  differenceMap = Map.difference oldEventTriggersWithCC newEventTriggersWithCC
              for_ (Map.toList differenceMap) $ \(triggerName, cleanupConfig) -> do
                deleteAllScheduledCleanups @b sourceConfig triggerName
                pure cleanupConfig
              for_ (Map.toList newEventTriggersWithCC) $ \(triggerName, cleanupConfig) -> do
                (`onLeft` logQErr) =<< generateCleanupSchedules (AB.mkAnyBackend sourceInfo) triggerName cleanupConfig
              sourceInfoMaybe <- askSourceInfoMaybe @b source
              case sourceInfoMaybe of
                Nothing ->
                  logger $
                    MetadataLog
                      HL.LevelWarn
                      ( "Could not cleanup the scheduled autocleanup instances present in the source '"
                          <> source
                          <<> "' as it is inconsistent"
                      )
                      J.Null
                Just sourceInfo@(SourceInfo _ _ _ sourceConfig _ _) -> do
                  let getEventMapWithCC sourceMeta = Map.fromList $ concatMap (getAllETWithCleanupConfigInTableMetadata . snd) $ OMap.toList $ _smTables sourceMeta
                      oldEventTriggersWithCC = getEventMapWithCC oldSourceMetadata
                      newEventTriggersWithCC = getEventMapWithCC newSourceMetadata
                      -- event triggers with cleanup config that existed in old metadata but are missing in new metadata
                      differenceMap = Map.difference oldEventTriggersWithCC newEventTriggersWithCC
                  for_ (Map.toList differenceMap) $ \(triggerName, cleanupConfig) -> do
                    deleteAllScheduledCleanups @b sourceConfig triggerName
                    pure cleanupConfig
                  for_ (Map.toList newEventTriggersWithCC) $ \(triggerName, cleanupConfig) -> do
                    (`onLeft` logQErr) =<< generateCleanupSchedules (AB.mkAnyBackend sourceInfo) triggerName cleanupConfig

      dispatch (BackendSourceMetadata bs) = AB.dispatchAnyBackend @BackendEventTrigger bs

@@ -14,7 +14,9 @@ module Hasura.RQL.Types.SchemaCache
    unsafeTableCache,
    unsafeTableInfo,
    askSourceInfo,
    askSourceInfoMaybe,
    askSourceConfig,
    askSourceConfigMaybe,
    askTableCache,
    askTableInfo,
    askTableCoreInfo,
@@ -294,6 +296,15 @@ askSourceInfo sourceName = do
    )
    (metadata ^. metaSources . at sourceName)

askSourceInfoMaybe ::
  forall b m.
  (CacheRM m, Backend b) =>
  SourceName ->
  m (Maybe (SourceInfo b))
askSourceInfoMaybe sourceName = do
  sources <- scSources <$> askSchemaCache
  pure (unsafeSourceInfo @b =<< M.lookup sourceName sources)

-- | Retrieves the source config for a given source name.
--
-- This function relies on 'askSourceInfo' and similarly throws an error if the
@@ -305,6 +316,14 @@ askSourceConfig ::
  m (SourceConfig b)
askSourceConfig = fmap _siConfiguration . askSourceInfo @b

askSourceConfigMaybe ::
  forall b m.
  (CacheRM m, Backend b) =>
  SourceName ->
  m (Maybe (SourceConfig b))
askSourceConfigMaybe =
  fmap (fmap _siConfiguration) . askSourceInfoMaybe @b

-- | Retrieves the table cache for a given source cache and source name.
--
-- This function must be used with a _type annotation_, such as
@@ -774,5 +774,5 @@ runMetadataQueryV2M ::
  RQLMetadataV2 ->
  m EncJSON
runMetadataQueryV2M currentResourceVersion = \case
  RMV2ReplaceMetadata q -> runReplaceMetadataV2 q
  RMV2ReplaceMetadata q -> runReplaceMetadataV2 DeleteEventTriggerCleanupSchedules q
  RMV2ExportMetadata q -> runExportMetadataV2 currentResourceVersion q
@@ -485,7 +485,7 @@ runQueryM env rq = withPathK "args" $ case rq of
  RQV2TrackTable q -> runTrackTableV2Q q
  RQV2SetTableCustomFields q -> runSetTableCustomFieldsQV2 q
  RQV2TrackFunction q -> runTrackFunctionV2 q
  RQV2ReplaceMetadata q -> runReplaceMetadataV2 q
  RQV2ReplaceMetadata q -> runReplaceMetadataV2 DeleteEventTriggerCleanupSchedules q

requiresAdmin :: RQLQuery -> Bool
requiresAdmin = \case