mirror of https://github.com/hasura/graphql-engine.git
synced 2024-12-17 20:41:49 +03:00
7e970177c1
Aka “the PDV refactor.” History is preserved on the branch 2801-graphql-schema-parser-refactor.

* [skip ci] remove stale benchmark commit from commit_diff
* [skip ci] Check for root field name conflicts between remotes
* [skip ci] Additionally check for conflicts between remotes and DB
* [skip ci] Check for conflicts in schema when tracking a table
* [skip ci] Fix equality checking in GraphQL AST
* server: fix mishandling of GeoJSON inputs in subscriptions (fix #3239) (#4551)
* Add support for multiple top-level fields in a subscription to improve testability of subscriptions
* Add an internal flag to enable multiple subscriptions
* Add missing call to withConstructorFn in live queries (fix #3239) Co-authored-by: Alexis King <lexi.lambda@gmail.com>
* Scheduled triggers (close #1914) (#3553): server: add scheduled triggers Co-authored-by: Alexis King <lexi.lambda@gmail.com> Co-authored-by: Marion Schleifer <marion@hasura.io> Co-authored-by: Karthikeyan Chinnakonda <karthikeyan@hasura.io> Co-authored-by: Aleksandra Sikora <ola.zxcvbnm@gmail.com>
* dev.sh: bump version due to addition of croniter python dependency
* server: fix an introspection query caching issue (fix #4547) (#4661) Introspection queries accept variables, but we need to make sure to also touch the variables that we ignore, so that an introspection query is marked not reusable if we are not able to build a correct query plan for it. A better solution here would be to deal with such unused variables correctly, so that more introspection queries become reusable. An even better solution would be to type-safely track *how* to reuse which variables, rather than to split the reusage marking from the planning. Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
* flush log buffer on exception in mkWaiApp (fix #4772) (#4801)
* flush log buffer on exception in mkWaiApp
* add comment to explain the introduced change
* add changelog
* allow logging details of a live query polling thread (#4959)
* changes for poller-log: add various multiplexed query info in poller-log
* minor cleanup; also fixes a bug which would return duplicate data
* Live query poller stats can now be logged. This also removes the in-memory stats that were collected about batched query execution, as the log lines, when piped into a monitoring tool, will give us better insights.
* allow poller-log to be configurable
* log minimal information in the livequery-poller-log. Other information can be retrieved from /dev/subscriptions/extended
* fix a few review comments
* avoid marshalling and unmarshalling from ByteString to EncJSON
* separate out SubscriberId and SubscriberMetadata Co-authored-by: Anon Ray <rayanon004@gmail.com>
* Don't compile in developer APIs by default
* Tighten up handling of admin secret, more docs. Store the admin secret only as a hash (sketched below) to prevent leaking the secret inadvertently, and to prevent timing attacks on the secret. NOTE: best practice for stored user passwords is a function with a tunable cost like bcrypt, but our threat model is quite different (even if we thought we could reasonably protect the secret from an attacker who could read arbitrary regions of memory), and bcrypt is far too slow (by design) to perform on each request. We'd have to rely on our (technically savvy) users to choose high-entropy passwords in any case.
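The hash-and-compare idea in the last bullet above can be sketched as follows. This is a minimal illustration assuming the cryptonite package; HashedAdminSecret, hashAdminSecret, and checkAdminSecret are hypothetical names, not the actual graphql-engine code:

import Crypto.Hash (Digest, SHA256, hashWith)
import qualified Data.ByteString.Char8 as B

-- Store only the digest, so the plaintext secret never sits in long-lived state.
newtype HashedAdminSecret = HashedAdminSecret (Digest SHA256)
  deriving (Eq)

hashAdminSecret :: B.ByteString -> HashedAdminSecret
hashAdminSecret = HashedAdminSecret . hashWith SHA256

-- Comparing fixed-length digests (cryptonite's Digest equality is
-- constant-time) avoids leaking how much of a guessed secret matched.
checkAdminSecret :: HashedAdminSecret -> B.ByteString -> Bool
checkAdminSecret stored provided = stored == hashAdminSecret provided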
Referencing #4736

* server/docs: add instructions to fix loss of float precision in PostgreSQL <= 11 (#5187) This adds a server flag, --pg-connection-options, that can be used to set a PostgreSQL connection parameter, extra_float_digits, which is needed to avoid loss of data on older versions of PostgreSQL, which have odd default behavior when returning float values. (fixes #5092)
* [skip ci] Add new commits from master to the commit diff
* [skip ci] serve default directives (skip & include) over introspection
* [skip ci] Update non-Haskell assets with the version on master
* server: refactor GQL execution check and config API (#5094) Co-authored-by: Vamshi Surabhi <vamshi@hasura.io> Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* [skip ci] fix js issues in tests by pinning dependencies version
* [skip ci] bump graphql version
* [skip ci] Add note about memory usage
* generalize query execution logic on Postgres (#5110)
* generalize PGExecCtx to support specialized functions for various operations
* fix tests compilation
* allow customising PGExecCtx when starting the web server
* server: changes catalog initialization and logging for pro customization (#5139)
* new typeclass to abstract the logic of QueryLog-ing
* abstract the logic of logging websocket-server logs; introduce a MonadWSLog typeclass
* move catalog initialization to the init step; expose a helper function to migrate the catalog; create the schema cache in initialiseCtx
* expose various modules and functions for pro
* [skip ci] cosmetic change
* [skip ci] fix test calling a mutation that does not exist
* [skip ci] minor text change
* [skip ci] refactored input values
* [skip ci] remove VString Origin
* server: fix updating of headers behaviour in the update cron trigger API and create future events immediately (#5151)
* server: fix bug to update headers in an existing cron trigger and create future events Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
* Lower stack chunk size in RTS to reduce thread STACK memory (closes #5190) This reduces memory consumption for new idle subscriptions significantly (see linked ticket). The hypothesis is: we fork a lot of threads per websocket, and some of these use slightly more than the initial 1K stack size, so the first overflow balloons to 32K, when significantly less is required. However: running with `+RTS -K1K -xc` did not seem to show evidence of any overflows! So it's a mystery why this improves things. GHC should probably also be doubling the stack buffer at each overflow or doing something even smarter; the knobs we have aren't so helpful.
* [skip ci] fix todo and schema generation for aggregate fields
* 5087 libpq pool leak (#5089) Shrink libpq buffers to 1MB before returning a connection to the pool. Closes #5087. See: https://github.com/hasura/pg-client-hs/pull/19 Also related: #3388 #4077
* bump pg-client-hs version (fixes a build issue on some environments) (#5267)
* do not use prepared statements for mutations
* server: unlock scheduled events on graceful shutdown (#4928)
* Fix buggy parsing of new --conn-lifetime flag in 2b0e3774
* [skip ci] remove cherry-picked commit from commit_diff.txt
* server: include additional fields in scheduled trigger webhook payload (#5262)
* include scheduled triggers metadata in the webhook body Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
* server: call the webhook asynchronously in event triggers (#5352)
* server: call the webhook asynchronously in event triggers
* Expose all modules in Cabal file (#5371)
* [skip ci] update commit_diff.txt
* [skip ci] fix cast exp parser & few TODOs
* [skip ci] fix remote fields arguments
* [skip ci] fix a few more TODOs; no-op refactor; move resolve/action.hs to execute/action.hs
* Pass environment variables around as a data structure, via @sordina (#5374)
* Pass environment variables around as a data structure, via @sordina
* Resolving build error
* Adding Environment passing note to changelog
* Removing references to ILTPollerLog as this seems to have been reintroduced from a bad merge
* removing commented-out imports
* Language pragmas already set by project
* Linking async thread
* Apply suggestions from code review: use `runQueryTx` instead of `runLazyTx` for queries.
* remove the non-user-facing entry in the changelog Co-authored-by: Phil Freeman <paf31@cantab.net> Co-authored-by: Phil Freeman <phil@hasura.io> Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* [skip ci] fix: restrict remote relationship field generation for hasura queries
* [skip ci] no-op refactor; move insert execution code from schema parser module
* server: call the webhook asynchronously in event triggers (#5352)
* server: call the webhook asynchronously in event triggers
* Expose all modules in Cabal file (#5371)
* [skip ci] update commit_diff.txt
* Pass environment variables around as a data structure, via @sordina (#5374)
* Pass environment variables around as a data structure, via @sordina
* Resolving build error
* Adding Environment passing note to changelog
* Removing references to ILTPollerLog as this seems to have been reintroduced from a bad merge
* removing commented-out imports
* Language pragmas already set by project
* Linking async thread
* Apply suggestions from code review: use `runQueryTx` instead of `runLazyTx` for queries.
* remove the non-user-facing entry in the changelog Co-authored-by: Phil Freeman <paf31@cantab.net> Co-authored-by: Phil Freeman <phil@hasura.io> Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* [skip ci] implement header checking. Probably closes #14 and #3659.
* server: refactor 'pollQuery' to have a hook to process 'PollDetails' (#5391) Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* update pg-client (#5421)
* [skip ci] update commit_diff
* Fix latency buckets for telemetry data (see the sketch below). These must have gotten messed up during a refactor. As a consequence almost all samples received so far fall into the single erroneous 0 to 1K seconds (originally supposed to be 1ms?) bucket. I also re-thought what the numbers should be, but these are still arbitrary and might want adjusting in the future.
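As a hedged illustration of the bucketing bug in the last bullet above (not the actual telemetry code; the boundaries here are made up), using seconds where milliseconds were intended collapses nearly every sample into the first bucket:

bucketLatencyMs :: Double -> String
bucketLatencyMs t
  | t < 1     = "<1ms"
  | t < 50    = "1ms-50ms"
  | t < 1000  = "50ms-1s"
  | otherwise = ">=1s"

-- If callers accidentally pass seconds instead of milliseconds, a 300ms
-- request arrives as 0.3 and lands in "<1ms": the single-bucket symptom
-- described above.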
* [skip ci] include the latest commit compared against master in commit_diff
* [skip ci] include new commits from master in commit_diff
* [skip ci] improve description generation
* [skip ci] sort all introspect arrays
* [skip ci] allow parsers to specify error codes
* [skip ci] fix integer and float parsing error code
* [skip ci] scalar from json errors are now parse errors
* [skip ci] fixed negative integer error message and code
* [skip ci] Re-fix nullability in relationships
* [skip ci] no-op refactor and removed a couple of FIXMEs
* [skip ci] uncomment code in 'deleteMetadataObject'
* [skip ci] Fix re-fix of nullability for relationships
* [skip ci] fix default arguments error code
* [skip ci] updated test error message. !!! WARNING !!! Since all fields accept `null`, they all are technically optional in the new schema. Meaning there's no such thing as a missing mandatory field anymore: a field that doesn't have a default value, and which therefore isn't labelled as "optional" in the schema, will be assumed to be null if it's missing, meaning it isn't possible anymore to have an error for a missing mandatory field. The only possible error is now when an optional positional argument is omitted but is not the last positional argument. (See the sketch after this list section.)
* [skip ci] cleanup of int scalar parser
* [skip ci] retro-compatibility of offset as string
* [skip ci] Remove commit from commit_diff.txt. Although strictly speaking we don't know if this would work correctly in PDV if we were to implement query plan caching, the fact is that in the theoretical case that we would have the same issue in PDV, it would probably apply not just to introspection, and the fix would be written completely differently. So this old commit is of no value to us other than the heads-up "make sure query plan caching works correctly even in the presence of unused variables", which is already part of the test suite.
* Add MonadTrace and MonadExecuteQuery abstractions (#5383)
* [skip ci] Fix accumulation of input object types. Just like object types, interface types, and union types, we have to avoid circularities when collecting input types from the GraphQL AST. Additionally, this fixes equality checks for input object types (whose fields are unordered, and hence should be compared as sets) and enum types (ditto).
* [skip ci] fix fragment error path
* [skip ci] fix node error code
* [skip ci] fix paths in insert queries
* [skip ci] fix path in objects
* [skip ci] manually alter node id path for consistency
* [skip ci] more node error fixups
* [skip ci] one last relay error message fix
* [skip ci] update commit_diff
* Propagate the trace context to event triggers (#5409)
* Propagate the trace context to event triggers
* Handle missing trace and span IDs
* Store trace context as one LOCAL
* Add migrations
* Documentation
* changelog
* Fix warnings
* Respond to code review suggestions
* Respond to code review
* Undo changelog
* Update CHANGELOG.md Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* server: log request/response sizes for event triggers (#5463)
* server: log request/response sizes for event triggers. Event triggers (and scheduled triggers) now have request/response size in their logs.
* add changelog entry
* Tracing: Simplify HTTP traced request (#5451) Remove the Inversion of Control (SuspendRequest) and simplify the tracing of HTTP Requests.
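The nullability warning above can be illustrated with a small sketch. This is a hypothetical helper in the spirit of the new field semantics, not the real parser code; it assumes aeson's Value type:

import Data.Aeson (Value (..))

-- Under the new semantics, a nullable field that is absent from the input is
-- treated as null rather than reported as a missing mandatory field.
lookupNullableField :: String -> [(String, Value)] -> Value
lookupNullableField name fields =
  case lookup name fields of
    Just v  -> v
    Nothing -> Null  -- absent, so assume null instead of failing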
Co-authored-by: Phil Freeman <phil@hasura.io>

* Attach request ID as tracing metadata (#5456)
* Propagate the trace context to event triggers
* Handle missing trace and span IDs
* Store trace context as one LOCAL
* Add migrations
* Documentation
* Include the request ID as trace metadata
* changelog
* Fix warnings
* Respond to code review suggestions
* Respond to code review
* Undo changelog
* Update CHANGELOG.md
* Typo Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* server: add logging for action handlers (#5471)
* server: add logging for action handlers
* add changelog entry
* change action-handler log type from internal to non-internal
* fix action-handler-log name
* server: pass http and websocket request to logging context (#5470)
* pass request body to logging context in all cases
* add message size logging on the websocket API; this is required by graphql-engine-pro/#416
* message size logging on websocket API. As we need to log all messages received/sent by the websocket server, it makes sense to log them as part of the websocket server event logs. Previously, messages received were logged inside the onMessage handler, and messages sent were logged only for "data" messages (as a server event log).
* fix review comments Co-authored-by: Phil Freeman <phil@hasura.io>
* server: stop eventing subsystem threads when shutting down (#5479)
* server: stop eventing subsystem threads when shutting down
* Apply suggestions from code review Co-authored-by: Karthikeyan Chinnakonda <chkarthikeyan95@gmail.com> Co-authored-by: Phil Freeman <phil@hasura.io> Co-authored-by: Phil Freeman <paf31@cantab.net> Co-authored-by: Karthikeyan Chinnakonda <chkarthikeyan95@gmail.com>
* [skip ci] update commit_diff with new commits added in master
* Bugfix to support 0-size HASURA_GRAPHQL_QUERY_PLAN_CACHE_SIZE. Also some minor refactoring of the bounded cache module:
  - the maxBound check in `trim` was confusing and unnecessary
  - consequently trim was unnecessary for lookupPure
  Also add some basic tests. (See the sketch after this list section.)
* Support only the bounded cache, with a default HASURA_GRAPHQL_QUERY_PLAN_CACHE_SIZE of 4000. Closes #5363
* [skip ci] remove merge commit from commit_diff
* server: Fix compiler warning caused by GHC upgrade (#5489) Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* [skip ci] update all non server code from master
* [skip ci] aligned object field error message with master
* [skip ci] fix remaining undefined?
* [skip ci] remove unused import
* [skip ci] revert to previous error message, fix tests
* Move nullableType/nonNullableType to Schema.hs. These are functions on Types, not on Parsers.
* [skip ci] fix setup to fix backend-only test: the order in which permission checks are performed on the branch is slightly different than on master, resulting in a slightly different error if there are no other mutations the user has access to. By adding update permissions, we go back to the expected case.
* [skip ci] fix insert geojson tests to reflect new paths
* [skip ci] fix enum test for better error message
* [skip ci] fix header test for better error message
* [skip ci] fix fragment cycle test for better error message
* [skip ci] fix error message for type mismatch
* [skip ci] fix variable path in test
* [skip ci] adjust tests after bug fix
* [skip ci] more tests fixing
* Add hdb_catalog.current_setting abstraction for reading Hasura settings. As the comment in the function's definition explains, this is needed to work around an awkward Postgres behavior.
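A toy sketch of the 0-capacity edge case mentioned in the bounded-cache bullet above. This is illustrative only, assuming the unordered-containers package; it is not the real cache module, and insertBounded is a hypothetical name:

import qualified Data.HashMap.Strict as HM
import Data.Hashable (Hashable)

-- A capacity of 0 must mean "never store anything"; a full cache evicts
-- everything before inserting (a crude stand-in for a real eviction policy).
insertBounded :: (Eq k, Hashable k) => Int -> k -> v -> HM.HashMap k v -> HM.HashMap k v
insertBounded capacity k v m
  | capacity <= 0         = m
  | HM.size m >= capacity = HM.insert k v HM.empty
  | otherwise             = HM.insert k v m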
* [skip ci] Update CONTRIBUTING.md to mention Node setup for Python tests
* [skip ci] Add missing Python tests env var to CONTRIBUTING.md
* [skip ci] fix order of result when subscription is run with multiple nodes
* [skip ci] no-op refactor: fix a warning in Internal/Parser.hs
* [skip ci] throw error when a subscription contains remote joins
* [skip ci] Enable easier profiling by hiding AssertNF behind a flag. In order to compile a profiling build, run: $ cabal new-build -f profiling --enable-profiling
* [skip ci] Fix two warnings. We used to look up the objects that implement a given interface by filtering all objects in the schema document. However, one of the tests expects us to generate a warning if the provided `implements` field of an introspection query specifies an object not implementing some interface. So we use that field instead.
* [skip ci] Fix warnings by commenting out query plan caching
* [skip ci] improve masking/commenting of query-caching-related code & a few warning fixes
* [skip ci] Fixed compiler warnings in graphql-parser-hs
* Sync non-Haskell assets with master
* [skip ci] add a test inserting invalid GraphQL but valid JSON value in a jsonb column
* [skip ci] Avoid converting to/from Map
* [skip ci] Apply some hlint suggestions
* [skip ci] remove redundant constraints from buildLiveQueryPlan and explainGQLQuery
* [skip ci] add NOTEs about missing Tracing constraints in PDV from master
* Remove -fdefer-typed-holes, fix warnings
* Update cabal.project.freeze
* Limit GHC’s heap size to 8GB in CI to avoid the OOM killer
* Commit package-lock.json for Python tests’ remote schema server
* restrict env variables starting with HASURA_GRAPHQL_ for headers configuration in actions, event triggers & remote schemas (#5519)
* restrict env variables starting with HASURA_GRAPHQL_ for headers definition in actions & event triggers
* update CHANGELOG.md
* Apply suggestions from code review Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* add test for table_by_pk node when roles don't have permission to PK
* [skip ci] fix introspection query if any enum column present in primary key (fix #5200) (#5522)
* [skip ci] test case fix for a6450e126b
* [skip ci] add tests to agg queries when the role doesn't have access to any cols
* fix backend test
* Simplify subscription execution
* [skip ci] add test to check if required headers are present while querying
* Suppose table B is related to table A, and certain headers are necessary to query B; then the test checks that we throw an error when the header is not set when B is queried through A
* fix mutations not checking for view mutability
* [skip ci] add variable type checking and corresponding tests
* [skip ci] add test to check if update headers are present while doing an upsert
* [skip ci] add positive counterparts to some of the negative permission tests
* fix args missing their description in introspect
* [skip ci] Remove unused function; insert missing markNotReusable call
* [skip ci] Add a Note about InputValue
* [skip ci] Delete LegacySchema/ 🎉
* [skip ci] Delete GraphQL/{Resolve,Validate}/ 🎉
* [skip ci] Delete top-level Resolve/Validate modules; tidy .cabal file
* [skip ci] Delete LegacySchema top-level module. Somehow I missed this one.
* fix input value to json
* [skip ci] elaborate on JSON objects in GraphQL
* [skip ci] add missing file
* [skip ci] add a test with subscription containing remote joins
* add a test with remote joins in mutation output
* [skip ci] Add some comments to Schema/Mutation.hs
* [skip ci] Remove no longer needed code from RemoteServer.hs
* [skip ci] Use a helper function to generate conflict clause parsers
* [skip ci] fix type checker error in fields with default value
* capitalize the header keys in select_articles_without_required_headers
* Somehow, this was the reason the tests were failing. I have no idea why!
* [skip ci] Add a long Note about optional fields and nullability
* Improve comments a bit; simplify Schema/Common.hs a bit
* [skip ci] full implementation of 5.8.5 type checking
* [skip ci] fix validation test teardown
* [skip ci] fix schema stitching test
* fix remote schema ignoring enum nullability
* [skip ci] fix fieldOptional to not discard nullability
* revert nullability of use_spheroid
* fix comment
* add required remote fields with arguments for tests
* [skip ci] add missing docstrings
* [skip ci] fixed description of remote fields
* [skip ci] change docstring for consistency
* fix several schema inconsistencies
* revert behaviour change in function arguments parsing
* fix remaining nullability issues in new schema
* minor no-op refactor; use isListType from graphql-parser-hs
* use nullability of remote schema node while creating a remote relationship
* fix 'ID' input coercing & action 'ID' type relationship mapping
* include ASTs in MonadExecuteQuery
* needed for the PRO code-base
* Delete code for "interfaces implementing ifaces" (draft GraphQL spec). Previously I started writing some code that adds support for a future GraphQL feature where interfaces may themselves be sub-types of other interfaces. However, this code was incomplete, and partially incorrect. So this commit deletes support for that entirely.
* Ignore a remote schema test during the upgrade/downgrade test. The PDV refactor does a better job at exposing a minimal set of types through introspection. In particular, not every type that is present in a remote schema is re-exposed by Hasura. The test test_schema_stitching.py::TestRemoteSchemaBasic::test_introspection assumed that all types were re-exposed, which is not required for GraphQL compatibility, in order to test some aspect of our support for remote schemas.
So while this particular test has been updated on PDV, the PDV branch now does not pass the old test, which we argue to be incorrect. Hence this test is disabled while we await a release, after which we can re-enable it. This also re-enables a test that was previously disabled for similar, though unrelated, reasons.

* add haddock documentation to the action's field parsers
* Deselecting some tests in server-upgrade. Some tests with the current build are failing on server upgrade when they should not. The response is more accurate than what it was. Also, the upgrade tests were not throwing errors when a test was expected to return an error but succeeded. The test framework is patched to catch this case.
* [skip ci] Add a long Note about interfaces and object types
* send the response headers back to client after running a query
* Deselect a few more tests during upgrade/downgrade test
* Update commit_diff.txt
* change log kind from db_migrate to catalog_migrate (#5531)
* Show method and complete URI in traced HTTP calls (#5525) Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* restrict env variables starting with HASURA_GRAPHQL_ for headers configuration in actions, event triggers & remote schemas (#5519)
* restrict env variables starting with HASURA_GRAPHQL_ for headers definition in actions & event triggers
* update CHANGELOG.md
* Apply suggestions from code review Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* fix introspection query if any enum column present in primary key (fix #5200) (#5522)
* Fix telemetry reporting of transport (websocket was reported as http)
* add log kinds in cli-migrations image (#5529)
* add log kinds in cli-migrations image
* give hint to resolve timeout error
* minor changes and CHANGELOG
* server: set hasura.tracecontext in RQL mutations [#5542] (#5555)
* server: set hasura.tracecontext in RQL mutations [#5542]
* Update test suite Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
* Add bulldozer auto-merge and -update configuration. We still need to add the GitHub app (as of the time of opening this PR). Afterwards devs should be able to allow bulldozer to automatically "update" the branch, merging in the parent when it changes, as well as automatically merge when all checks pass. This is opt-in by adding the `auto-update-auto-merge` label to the PR.
* Remove 'bulldozer' config, try 'kodiak' for auto-merge. See: https://github.com/chdsbd/kodiak. The main issue that bit us was not being able to auto-update forked branches; also: https://github.com/palantir/bulldozer/issues/66 https://github.com/palantir/bulldozer/issues/145
* Cherry-picked all commits
* [skip ci] Slightly improve formatting
* Revert "fix introspection query if any enum column present in primary key (fix #5200) (#5522)". This reverts commit 0f9a5afa59. This undoes a cherry-pick of 34288e1eb5 that was already done previously in a6450e126b, and subsequently fixed for PDV in 70e89dc250.
* Do a small bit of tidying in Hasura.GraphQL.Parser.Collect
* Fix cherry-picking work: some previous cherry-picks ended up modifying code that is commented out
* [skip ci] clarified comment regarding insert representation
* [skip ci] removed obsolete todos
* cosmetic change
* fix action error message
* [skip ci] remove obsolete comment
* [skip ci] synchronize stylish-haskell extensions list
* use previously defined scalar names in parsers rather than ad-hoc literals
* Apply most syntax hlint hints.
* Clarify comment on update mutation.
* [skip ci] Clarify what fields should be specified for objects
* Update "_inc" description.
* Use record types rather than tuples for IntrospectionResult and ParsedIntrospection
* Get rid of checkFieldNamesUnique (use Data.List.Extended.duplicates)
* Throw more errors when collecting query root names
* [skip ci] clean column parser comment
* Remove dead code inserted in ab65b39
* avoid converting to non-empty list where not needed
* add note and TODO about the disabled checks in PDV
* minor refactor in remoteField' function
* Unify two getObject methods
* Nitpicks in Remote.hs
* Update CHANGELOG.md
* Revert "Unify two getObject methods". This reverts commit bd6bb40355. We do need two different getObject functions, as the corresponding error message is different
* Fix error message in Remote.hs
* Update CHANGELOG.md Co-authored-by: Auke Booij <auke@tulcod.com>
* Apply suggested Changelog fix. Co-authored-by: Auke Booij <auke@tulcod.com>
* Fix typo in Changelog.
* [skip ci] Update changelog.
* reuse type names to avoid duplication
* Fix Hashable instance for Definition. The presence of `Maybe Unique`, and an optional description, as part of `Definition`s, means that `Definition`s that are considered `Eq`ual may get different hashes. This can happen, for instance, when one object is memoized but another is not. (See the sketch after this list section.)
* [skip ci] Update commit_diff.txt
* Bump parser version.
* Bump freeze file after changes in parser.
* [skip ci] Incorporate commits from master
* Fix developer flag in server/cabal.project.freeze Co-authored-by: Auke Booij <auke@tulcod.com>
* Deselect a changed ENUM test for upgrade/downgrade CI
* Deselect test here as well
* [skip ci] remove dead code
* Disable more tests for upgrade/downgrade
* Fix which test gets deselected
* Revert "Add hdb_catalog.current_setting abstraction for reading Hasura settings". This reverts commit 66e85ab9fb.
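The `Hashable` fix above is about the standard law that values comparing equal must hash equal. A small self-contained illustration, using a hypothetical `Definition` type rather than the real parser code:

import Data.Hashable (Hashable (..))

data Definition = Definition
  { defName        :: String
  , defDescription :: Maybe String  -- deliberately ignored by equality
  }

-- Equality ignores the description...
instance Eq Definition where
  a == b = defName a == defName b

-- ...so the hash must ignore it too. Hashing defDescription as well would let
-- two Eq-ual Definitions (say, one memoized and one not) land in different
-- hash buckets, breaking hash-based containers.
instance Hashable Definition where
  hashWithSalt salt = hashWithSalt salt . defName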
* Remove circular reference in cabal.project.freeze

Co-authored-by: Karthikeyan Chinnakonda <karthikeyan@hasura.io>
Co-authored-by: Auke Booij <auke@hasura.io>
Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
Co-authored-by: Marion Schleifer <marion@hasura.io>
Co-authored-by: Aleksandra Sikora <ola.zxcvbnm@gmail.com>
Co-authored-by: Brandon Simmons <brandon.m.simmons@gmail.com>
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
Co-authored-by: Anon Ray <rayanon004@gmail.com>
Co-authored-by: rakeshkky <12475069+rakeshkky@users.noreply.github.com>
Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com>
Co-authored-by: Vamshi Surabhi <vamshi@hasura.io>
Co-authored-by: Antoine Leblanc <antoine@hasura.io>
Co-authored-by: Brandon Simmons <brandon@hasura.io>
Co-authored-by: Phil Freeman <phil@hasura.io>
Co-authored-by: Lyndon Maydwell <lyndon@sordina.net>
Co-authored-by: Phil Freeman <paf31@cantab.net>
Co-authored-by: Naveen Naidu <naveennaidu479@gmail.com>
Co-authored-by: Karthikeyan Chinnakonda <chkarthikeyan95@gmail.com>
Co-authored-by: Nizar Malangadan <nizar-m@users.noreply.github.com>
Co-authored-by: Antoine Leblanc <crucuny@gmail.com>
Co-authored-by: Auke Booij <auke@tulcod.com>
522 lines
25 KiB
Haskell
{-# LANGUAGE Arrows #-}
{-# LANGUAGE OverloadedLabels #-}

{-| Top-level functions concerned specifically with operations on the schema cache, such as
rebuilding it from the catalog and incorporating schema changes. See the module documentation for
"Hasura.RQL.DDL.Schema" for more details.

__Note__: this module is __mutually recursive__ with other @Hasura.RQL.DDL.Schema.*@ modules, which
both define pieces of the implementation of building the schema cache and define handlers that
trigger schema cache rebuilds. -}
module Hasura.RQL.DDL.Schema.Cache
  ( RebuildableSchemaCache
  , lastBuiltSchemaCache
  , buildRebuildableSchemaCache
  , CacheRWT
  , runCacheRWT

  , withMetadataCheck
  ) where

import           Hasura.Prelude

import qualified Data.HashMap.Strict.Extended       as M
import qualified Data.HashSet                       as HS
import qualified Data.Text                          as T
import qualified Data.Environment                   as Env
import qualified Database.PG.Query                  as Q

import           Control.Arrow.Extended
import           Control.Lens                       hiding ((.=))
import           Control.Monad.Unique
import           Data.Aeson
import           Data.List                          (nub)

import qualified Hasura.Incremental                 as Inc

import           Hasura.Db
import           Hasura.GraphQL.Execute.Types
import           Hasura.GraphQL.Schema              (buildGQLContext)
import           Hasura.RQL.DDL.Action
import           Hasura.RQL.DDL.ComputedField
import           Hasura.RQL.DDL.CustomTypes
import           Hasura.RQL.DDL.Deps
import           Hasura.RQL.DDL.EventTrigger
import           Hasura.RQL.DDL.RemoteSchema
import           Hasura.RQL.DDL.ScheduledTrigger
import           Hasura.RQL.DDL.Schema.Cache.Common
import           Hasura.RQL.DDL.Schema.Cache.Dependencies
import           Hasura.RQL.DDL.Schema.Cache.Fields
import           Hasura.RQL.DDL.Schema.Cache.Permission
import           Hasura.RQL.DDL.Schema.Catalog
import           Hasura.RQL.DDL.Schema.Diff
import           Hasura.RQL.DDL.Schema.Function
import           Hasura.RQL.DDL.Schema.Table
import           Hasura.RQL.DDL.Utils               (clearHdbViews)
import           Hasura.RQL.Types
import           Hasura.RQL.Types.Catalog
import           Hasura.Server.Version              (HasVersion)
import           Hasura.SQL.Types

buildRebuildableSchemaCache
  :: (HasVersion, MonadIO m, MonadUnique m, MonadTx m, HasHttpManager m, HasSQLGenCtx m)
  => Env.Environment
  -> m (RebuildableSchemaCache m)
buildRebuildableSchemaCache env = do
  catalogMetadata <- liftTx fetchCatalogData
  result <- flip runReaderT CatalogSync $
    Inc.build (buildSchemaCacheRule env) (catalogMetadata, initialInvalidationKeys)
  pure $ RebuildableSchemaCache (Inc.result result) initialInvalidationKeys (Inc.rebuildRule result)

newtype CacheRWT m a
  -- The CacheInvalidations component of the state could actually be collected using WriterT, but
  -- WriterT implementations prior to transformers-0.5.6.0 (which added
  -- Control.Monad.Trans.Writer.CPS) are leaky, and we don’t have that yet.
  = CacheRWT (StateT (RebuildableSchemaCache m, CacheInvalidations) m a)
  deriving
    ( Functor, Applicative, Monad, MonadIO, MonadUnique, MonadReader r, MonadError e, MonadTx
    , UserInfoM, HasHttpManager, HasSQLGenCtx, HasSystemDefined )

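-- | Runs a 'CacheRWT' action against a given initial cache. A hedged usage sketch (an illustrative
-- invocation, not taken from a real call site): any 'CacheRWM' action can be run this way, and the
-- caller gets back the result, the possibly-rebuilt cache, and the accumulated invalidations:
--
-- > (res, newCache, invalidations) <- runCacheRWT rebuildableCache someCacheRWMAction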
runCacheRWT
  :: Functor m
  => RebuildableSchemaCache m -> CacheRWT m a -> m (a, RebuildableSchemaCache m, CacheInvalidations)
runCacheRWT cache (CacheRWT m) =
  runStateT m (cache, mempty) <&> \(v, (newCache, invalidations)) -> (v, newCache, invalidations)

instance MonadTrans CacheRWT where
  lift = CacheRWT . lift

instance (Monad m) => TableCoreInfoRM (CacheRWT m)
instance (Monad m) => CacheRM (CacheRWT m) where
  askSchemaCache = CacheRWT $ gets (lastBuiltSchemaCache . fst)

instance (MonadIO m, MonadTx m) => CacheRWM (CacheRWT m) where
  buildSchemaCacheWithOptions buildReason invalidations = CacheRWT do
    (RebuildableSchemaCache _ invalidationKeys rule, oldInvalidations) <- get
    let newInvalidationKeys = invalidateKeys invalidations invalidationKeys
    catalogMetadata <- liftTx fetchCatalogData
    result <- lift $ flip runReaderT buildReason $
      Inc.build rule (catalogMetadata, newInvalidationKeys)
    let schemaCache = Inc.result result
        prunedInvalidationKeys = pruneInvalidationKeys schemaCache newInvalidationKeys
        !newCache = RebuildableSchemaCache schemaCache prunedInvalidationKeys (Inc.rebuildRule result)
        !newInvalidations = oldInvalidations <> invalidations
    put (newCache, newInvalidations)
    where
      -- Prunes invalidation keys that no longer exist in the schema to avoid leaking memory by
      -- hanging onto unnecessary keys.
      pruneInvalidationKeys schemaCache = over ikRemoteSchemas $ M.filterWithKey \name _ ->
        -- see Note [Keep invalidation keys for inconsistent objects]
        name `elem` getAllRemoteSchemas schemaCache

buildSchemaCacheRule
  -- Note: by supplying BuildReason via MonadReader, it does not participate in caching, which is
  -- what we want!
  :: ( HasVersion, ArrowChoice arr, Inc.ArrowDistribute arr, Inc.ArrowCache m arr
     , MonadIO m, MonadUnique m, MonadTx m
     , MonadReader BuildReason m, HasHttpManager m, HasSQLGenCtx m )
  => Env.Environment
  -> (CatalogMetadata, InvalidationKeys) `arr` SchemaCache
buildSchemaCacheRule env = proc (catalogMetadata, invalidationKeys) -> do
  invalidationKeysDep <- Inc.newDependency -< invalidationKeys

  -- Step 1: Process metadata and collect dependency information.
  (outputs, collectedInfo) <-
    runWriterA buildAndCollectInfo -< (catalogMetadata, invalidationKeysDep)
  let (inconsistentObjects, unresolvedDependencies) = partitionCollectedInfo collectedInfo

  -- Step 2: Resolve dependency information and drop dangling dependents.
  (resolvedOutputs, dependencyInconsistentObjects, resolvedDependencies) <-
    resolveDependencies -< (outputs, unresolvedDependencies)

  -- Step 3: Build the GraphQL schema.
  (gqlContext, gqlSchemaInconsistentObjects) <- runWriterA buildGQLContext -<
    ( QueryHasura
    , (_boTables resolvedOutputs)
    , (_boFunctions resolvedOutputs)
    , (_boRemoteSchemas resolvedOutputs)
    , (_boActions resolvedOutputs)
    , (_actNonObjects $ _boCustomTypes resolvedOutputs)
    )

  -- Step 4: Build the relay GraphQL schema
  (relayContext, relaySchemaInconsistentObjects) <- runWriterA buildGQLContext -<
    ( QueryRelay
    , (_boTables resolvedOutputs)
    , (_boFunctions resolvedOutputs)
    , (_boRemoteSchemas resolvedOutputs)
    , (_boActions resolvedOutputs)
    , (_actNonObjects $ _boCustomTypes resolvedOutputs)
    )

  returnA -< SchemaCache
    { scTables = _boTables resolvedOutputs
    , scActions = _boActions resolvedOutputs
    , scFunctions = _boFunctions resolvedOutputs
    -- TODO this is not the right value: we should track what part of the schema
    -- we can stitch without inconsistencies, I think.
    , scRemoteSchemas = fmap fst (_boRemoteSchemas resolvedOutputs) -- remoteSchemaMap
    , scAllowlist = _boAllowlist resolvedOutputs
    -- , scCustomTypes = _boCustomTypes resolvedOutputs
    , scGQLContext = fst gqlContext
    , scUnauthenticatedGQLContext = snd gqlContext
    , scRelayContext = fst relayContext
    , scUnauthenticatedRelayContext = snd relayContext
    -- , scGCtxMap = gqlSchema
    -- , scDefaultRemoteGCtx = remoteGQLSchema
    , scDepMap = resolvedDependencies
    , scCronTriggers = _boCronTriggers resolvedOutputs
    , scInconsistentObjs =
        inconsistentObjects
          <> dependencyInconsistentObjects
          <> toList gqlSchemaInconsistentObjects
          <> toList relaySchemaInconsistentObjects
    }
  where
    buildAndCollectInfo
      :: ( ArrowChoice arr, Inc.ArrowDistribute arr, Inc.ArrowCache m arr
         , ArrowWriter (Seq CollectedInfo) arr, MonadIO m, MonadUnique m, MonadTx m, MonadReader BuildReason m
         , HasHttpManager m, HasSQLGenCtx m )
      => (CatalogMetadata, Inc.Dependency InvalidationKeys) `arr` BuildOutputs
    buildAndCollectInfo = proc (catalogMetadata, invalidationKeys) -> do
      let CatalogMetadata tables relationships permissions
            eventTriggers remoteSchemas functions allowlistDefs
            computedFields catalogCustomTypes actions remoteRelationships
            cronTriggers = catalogMetadata

      -- tables
      tableRawInfos <- buildTableCache -< (tables, Inc.selectD #_ikMetadata invalidationKeys)

      -- remote schemas
      let remoteSchemaInvalidationKeys = Inc.selectD #_ikRemoteSchemas invalidationKeys
      remoteSchemaMap <- buildRemoteSchemas -< (remoteSchemaInvalidationKeys, remoteSchemas)

      -- relationships and computed fields
      let relationshipsByTable = M.groupOn _crTable relationships
          computedFieldsByTable = M.groupOn (_afcTable . _cccComputedField) computedFields
          remoteRelationshipsByTable = M.groupOn rtrTable remoteRelationships
      tableCoreInfos <- (tableRawInfos >- returnA)
        >-> (\info -> (info, relationshipsByTable) >- alignExtraTableInfo mkRelationshipMetadataObject)
        >-> (\info -> (info, computedFieldsByTable) >- alignExtraTableInfo mkComputedFieldMetadataObject)
        >-> (\info -> (info, remoteRelationshipsByTable) >- alignExtraTableInfo mkRemoteRelationshipMetadataObject)
        >-> (| Inc.keyed (\_ (((tableRawInfo, tableRelationships), tableComputedFields), tableRemoteRelationships) -> do
                 let columns = _tciFieldInfoMap tableRawInfo
                 allFields <- addNonColumnFields -<
                   (tableRawInfos, columns, M.map fst remoteSchemaMap, tableRelationships, tableComputedFields, tableRemoteRelationships)
                 returnA -< (tableRawInfo { _tciFieldInfoMap = allFields })) |)

      -- permissions and event triggers
      tableCoreInfosDep <- Inc.newDependency -< tableCoreInfos
      tableCache <- (tableCoreInfos >- returnA)
        >-> (\info -> (info, M.groupOn _cpTable permissions) >- alignExtraTableInfo mkPermissionMetadataObject)
        >-> (\info -> (info, M.groupOn _cetTable eventTriggers) >- alignExtraTableInfo mkEventTriggerMetadataObject)
        >-> (| Inc.keyed (\_ ((tableCoreInfo, tablePermissions), tableEventTriggers) -> do
                 let tableName = _tciName tableCoreInfo
                     tableFields = _tciFieldInfoMap tableCoreInfo
                 permissionInfos <- buildTablePermissions -<
                   (tableCoreInfosDep, tableName, tableFields, HS.fromList tablePermissions)
                 eventTriggerInfos <- buildTableEventTriggers -< (tableCoreInfo, tableEventTriggers)
                 returnA -< TableInfo
                   { _tiCoreInfo = tableCoreInfo
                   , _tiRolePermInfoMap = permissionInfos
                   , _tiEventTriggerInfoMap = eventTriggerInfos
                   }) |)

      -- sql functions
      functionCache <- (mapFromL _cfFunction functions >- returnA)
        >-> (| Inc.keyed (\_ (CatalogFunction qf systemDefined config funcDefs) -> do
                 let definition = toJSON $ TrackFunction qf
                     metadataObject = MetadataObject (MOFunction qf) definition
                     schemaObject = SOFunction qf
                     addFunctionContext e = "in function " <> qf <<> ": " <> e
                 (| withRecordInconsistency (
                    (| modifyErrA (do
                         rawfi <- bindErrorA -< handleMultipleFunctions qf funcDefs
                         (fi, dep) <- bindErrorA -< mkFunctionInfo qf systemDefined config rawfi
                         recordDependencies -< (metadataObject, schemaObject, [dep])
                         returnA -< fi)
                     |) addFunctionContext)
                  |) metadataObject) |)
        >-> (\infos -> M.catMaybes infos >- returnA)

      -- allow list
      let allowList = allowlistDefs
            & concatMap _cdQueries
            & map (queryWithoutTypeNames . getGQLQuery . _lqQuery)
            & HS.fromList

      -- custom types
      let CatalogCustomTypes customTypes pgScalars = catalogCustomTypes
      maybeResolvedCustomTypes <-
        (| withRecordInconsistency
             (bindErrorA -< resolveCustomTypes tableCache customTypes pgScalars)
         |) (MetadataObject MOCustomTypes $ toJSON customTypes)

      -- actions
      (actionCache, annotatedCustomTypes) <- case maybeResolvedCustomTypes of
        Just resolvedCustomTypes -> do
          actionCache' <- buildActions -< ((resolvedCustomTypes, pgScalars), actions)
          returnA -< (actionCache', resolvedCustomTypes)

        -- If the custom types themselves are inconsistent, we can’t really do
        -- anything with actions, so just mark them all inconsistent.
        Nothing -> do
          recordInconsistencies -< ( map mkActionMetadataObject actions
                                   , "custom types are inconsistent" )
          returnA -< (M.empty, emptyAnnotatedCustomTypes)

      cronTriggersMap <- buildCronTriggers -< ((), cronTriggers)

      returnA -< BuildOutputs
        { _boTables = tableCache
        , _boActions = actionCache
        , _boFunctions = functionCache
        , _boRemoteSchemas = remoteSchemaMap
        , _boAllowlist = allowList
        , _boCustomTypes = annotatedCustomTypes
        , _boCronTriggers = cronTriggersMap
        }

    mkEventTriggerMetadataObject (CatalogEventTrigger qt trn configuration) =
      let objectId = MOTableObj qt $ MTOTrigger trn
          definition = object ["table" .= qt, "configuration" .= configuration]
      in MetadataObject objectId definition

    mkCronTriggerMetadataObject catalogCronTrigger =
      let definition = toJSON catalogCronTrigger
      in MetadataObject (MOCronTrigger (_cctName catalogCronTrigger))
         definition

    mkActionMetadataObject (ActionMetadata name comment defn _) =
      MetadataObject (MOAction name) (toJSON $ CreateAction name defn comment)

    mkRemoteSchemaMetadataObject remoteSchema =
      MetadataObject (MORemoteSchema (_arsqName remoteSchema)) (toJSON remoteSchema)

    -- Given a map of table info, “folds in” another map of information, accumulating inconsistent
    -- metadata objects for any entries in the second map that don’t appear in the first map. This
    -- is used to “line up” the metadata for relationships, computed fields, permissions, etc. with
    -- the tracked table info.
    alignExtraTableInfo
      :: forall a b arr
       . (ArrowChoice arr, Inc.ArrowDistribute arr, ArrowWriter (Seq CollectedInfo) arr)
      => (b -> MetadataObject)
      -> ( M.HashMap QualifiedTable a
         , M.HashMap QualifiedTable [b]
         ) `arr` M.HashMap QualifiedTable (a, [b])
    alignExtraTableInfo mkMetadataObject = proc (baseInfo, extraInfo) -> do
      combinedInfo <-
        (| Inc.keyed (\tableName infos -> combine -< (tableName, infos))
         |) (align baseInfo extraInfo)
      returnA -< M.catMaybes combinedInfo
      where
        combine :: (QualifiedTable, These a [b]) `arr` Maybe (a, [b])
        combine = proc (tableName, infos) -> case infos of
          This base -> returnA -< Just (base, [])
          These base extras -> returnA -< Just (base, extras)
          That extras -> do
            let errorMessage = "table " <> tableName <<> " does not exist"
            recordInconsistencies -< (map mkMetadataObject extras, errorMessage)
            returnA -< Nothing

    buildTableEventTriggers
      :: ( ArrowChoice arr, Inc.ArrowDistribute arr, ArrowWriter (Seq CollectedInfo) arr
         , Inc.ArrowCache m arr, MonadTx m, MonadReader BuildReason m, HasSQLGenCtx m )
      => (TableCoreInfo, [CatalogEventTrigger]) `arr` EventTriggerInfoMap
    buildTableEventTriggers = buildInfoMap _cetName mkEventTriggerMetadataObject buildEventTrigger
      where
        buildEventTrigger = proc (tableInfo, eventTrigger) -> do
          let CatalogEventTrigger qt trn configuration = eventTrigger
              metadataObject = mkEventTriggerMetadataObject eventTrigger
              schemaObjectId = SOTableObj qt $ TOTrigger trn
              addTriggerContext e = "in event trigger " <> trn <<> ": " <> e
          (| withRecordInconsistency (
             (| modifyErrA (do
                  etc <- bindErrorA -< decodeValue configuration
                  (info, dependencies) <- bindErrorA -< subTableP2Setup env qt etc
                  let tableColumns = M.mapMaybe (^? _FIColumn) (_tciFieldInfoMap tableInfo)
                  recreateViewIfNeeded -< (qt, tableColumns, trn, etcDefinition etc)
                  recordDependencies -< (metadataObject, schemaObjectId, dependencies)
                  returnA -< info)
              |) (addTableContext qt . addTriggerContext))
           |) metadataObject

        recreateViewIfNeeded = Inc.cache $
          arrM \(tableName, tableColumns, triggerName, triggerDefinition) -> do
            buildReason <- ask
            when (buildReason == CatalogUpdate) $ do
              liftTx $ delTriggerQ triggerName -- executes DROP IF EXISTS.. sql
              mkAllTriggersQ triggerName tableName (M.elems tableColumns) triggerDefinition

    buildCronTriggers
      :: ( ArrowChoice arr
         , Inc.ArrowDistribute arr
         , ArrowWriter (Seq CollectedInfo) arr
         , Inc.ArrowCache m arr
         , MonadTx m)
      => ((), [CatalogCronTrigger])
           `arr` HashMap TriggerName CronTriggerInfo
    buildCronTriggers = buildInfoMap _cctName mkCronTriggerMetadataObject buildCronTrigger
      where
        buildCronTrigger = proc (_, cronTrigger) -> do
          let triggerName = triggerNameToTxt $ _cctName cronTrigger
              addCronTriggerContext e = "in cron trigger " <> triggerName <> ": " <> e
          (| withRecordInconsistency (
             (| modifyErrA (bindErrorA -< resolveCronTrigger env cronTrigger)
              |) addCronTriggerContext)
           |) (mkCronTriggerMetadataObject cronTrigger)

    buildActions
      :: ( ArrowChoice arr, Inc.ArrowDistribute arr, Inc.ArrowCache m arr
         , ArrowWriter (Seq CollectedInfo) arr)
      => ( (AnnotatedCustomTypes, HashSet PGScalarType)
         , [ActionMetadata]
         ) `arr` HashMap ActionName ActionInfo
    buildActions = buildInfoMap _amName mkActionMetadataObject buildAction
      where
        buildAction = proc ((resolvedCustomTypes, pgScalars), action) -> do
          let ActionMetadata name comment def actionPermissions = action
              addActionContext e = "in action " <> name <<> "; " <> e
          (| withRecordInconsistency (
             (| modifyErrA (do
                  (resolvedDef, outObject) <- liftEitherA <<< bindA -<
                    runExceptT $ resolveAction env resolvedCustomTypes def pgScalars
                  let permissionInfos = map (ActionPermissionInfo . _apmRole) actionPermissions
                      permissionMap = mapFromL _apiRole permissionInfos
                  returnA -< ActionInfo name outObject resolvedDef permissionMap comment)
              |) addActionContext)
           |) (mkActionMetadataObject action)

    buildRemoteSchemas
      :: ( ArrowChoice arr, Inc.ArrowDistribute arr, ArrowWriter (Seq CollectedInfo) arr
         , Inc.ArrowCache m arr, MonadIO m, MonadUnique m, HasHttpManager m )
      => ( Inc.Dependency (HashMap RemoteSchemaName Inc.InvalidationKey)
         , [AddRemoteSchemaQuery]
         ) `arr` HashMap RemoteSchemaName (RemoteSchemaCtx, MetadataObject)
    buildRemoteSchemas =
      buildInfoMapPreservingMetadata _arsqName mkRemoteSchemaMetadataObject buildRemoteSchema
      where
        -- We want to cache this call because it fetches the remote schema over HTTP, and we don’t
        -- want to re-run that if the remote schema definition hasn’t changed.
        buildRemoteSchema = Inc.cache proc (invalidationKeys, remoteSchema) -> do
          Inc.dependOn -< Inc.selectKeyD (_arsqName remoteSchema) invalidationKeys
          (| withRecordInconsistency (liftEitherA <<< bindA -<
               runExceptT $ addRemoteSchemaP2Setup env remoteSchema)
           |) (mkRemoteSchemaMetadataObject remoteSchema)

-- | @'withMetadataCheck' cascade action@ runs @action@ and checks if the schema changed as a
-- result. If it did, it checks to ensure the changes do not violate any integrity constraints, and
-- if not, incorporates them into the schema cache.
withMetadataCheck :: (MonadTx m, CacheRWM m, HasSQLGenCtx m) => Bool -> m a -> m a
withMetadataCheck cascade action = do
  -- Drop hdb_views so no interference is caused to the sql query
  liftTx $ Q.catchE defaultTxErrorHandler clearHdbViews

  -- Get the metadata before the sql query, everything, need to filter this
  oldMetaU <- liftTx $ Q.catchE defaultTxErrorHandler fetchTableMeta
  oldFuncMetaU <- liftTx $ Q.catchE defaultTxErrorHandler fetchFunctionMeta

  -- Run the action
  res <- action

  -- Get the metadata after the sql query
  newMeta <- liftTx $ Q.catchE defaultTxErrorHandler fetchTableMeta
  newFuncMeta <- liftTx $ Q.catchE defaultTxErrorHandler fetchFunctionMeta
  sc <- askSchemaCache
  let existingInconsistentObjs = scInconsistentObjs sc
      existingTables = M.keys $ scTables sc
      oldMeta = flip filter oldMetaU $ \tm -> tmTable tm `elem` existingTables
      schemaDiff = getSchemaDiff oldMeta newMeta
      existingFuncs = M.keys $ scFunctions sc
      oldFuncMeta = flip filter oldFuncMetaU $ \fm -> fmFunction fm `elem` existingFuncs
      FunctionDiff droppedFuncs alteredFuncs = getFuncDiff oldFuncMeta newFuncMeta
      overloadedFuncs = getOverloadedFuncs existingFuncs newFuncMeta

  -- Do not allow overloading functions
  unless (null overloadedFuncs) $
    throw400 NotSupported $ "the following tracked function(s) cannot be overloaded: "
    <> reportFuncs overloadedFuncs

  indirectDeps <- getSchemaChangeDeps schemaDiff

  -- Report back with an error if cascade is not set
  when (indirectDeps /= [] && not cascade) $ reportDepsExt indirectDeps []

  -- Purge all the indirect dependents from state
  mapM_ purgeDependentObject indirectDeps

  -- Purge all dropped functions
  let purgedFuncs = flip mapMaybe indirectDeps $ \dep ->
        case dep of
          SOFunction qf -> Just qf
          _             -> Nothing

  forM_ (droppedFuncs \\ purgedFuncs) $ \qf -> do
    liftTx $ delFunctionFromCatalog qf

  -- Process altered functions
  forM_ alteredFuncs $ \(qf, newTy) -> do
    when (newTy == FTVOLATILE) $
      throw400 NotSupported $
      "type of function " <> qf <<> " is altered to \"VOLATILE\" which is not supported now"

  -- update the schema cache and hdb_catalog with the changes
  processSchemaChanges schemaDiff

  buildSchemaCache
  postSc <- askSchemaCache

  -- Recreate event triggers in hdb_views
  forM_ (M.elems $ scTables postSc) $ \(TableInfo coreInfo _ eventTriggers) -> do
    let table = _tciName coreInfo
        columns = getCols $ _tciFieldInfoMap coreInfo
    forM_ (M.toList eventTriggers) $ \(triggerName, eti) -> do
      let opsDefinition = etiOpsDef eti
      mkAllTriggersQ triggerName table columns opsDefinition

  let currentInconsistentObjs = scInconsistentObjs postSc
  checkNewInconsistentMeta existingInconsistentObjs currentInconsistentObjs

  return res
  where
    reportFuncs = T.intercalate ", " . map dquoteTxt

    processSchemaChanges :: (MonadTx m, CacheRM m) => SchemaDiff -> m ()
    processSchemaChanges schemaDiff = do
      -- Purge the dropped tables
      mapM_ delTableAndDirectDeps droppedTables

      sc <- askSchemaCache
      for_ alteredTables $ \(oldQtn, tableDiff) -> do
        ti <- case M.lookup oldQtn $ scTables sc of
          Just ti -> return ti
          Nothing -> throw500 $ "old table metadata not found in cache : " <>> oldQtn
        processTableChanges (_tiCoreInfo ti) tableDiff
      where
        SchemaDiff droppedTables alteredTables = schemaDiff

    checkNewInconsistentMeta
      :: (QErrM m)
      => [InconsistentMetadata] -> [InconsistentMetadata] -> m ()
    checkNewInconsistentMeta originalInconsMeta currentInconsMeta =
      unless (null newInconsistentObjects) $
        throwError (err500 Unexpected "cannot continue due to newly found inconsistent metadata")
          { qeInternal = Just $ toJSON newInconsistentObjects }
      where
        diffInconsistentObjects = M.difference `on` groupInconsistentMetadataById
        newInconsistentObjects = nub $ concatMap toList $
          M.elems (currentInconsMeta `diffInconsistentObjects` originalInconsMeta)

{- Note [Keep invalidation keys for inconsistent objects]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
After building the schema cache, we prune InvalidationKeys for objects
that no longer exist in the schema to avoid leaking memory for objects
that have been dropped. However, note that we *don’t* want to drop
keys for objects that are simply inconsistent!

Why? The object is still in the metadata, so next time we reload it,
we’ll reprocess that object. We want to reuse the cache if its
definition hasn’t changed, but if we dropped the invalidation key, it
will incorrectly be reprocessed (since the invalidation key changed
from present to absent). -}