- Inline a few instances to avoid code duplication
- Use `(<$>)` to avoid `let`
- Improve error reporting when types of invalid kind are specified in `possibleTypes` or `interfaces`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7540
GitOrigin-RevId: 954fb710f94a275daff938b9a6e58765c4286d0c
### Description
Each Backend executes queries against the database in a slightly different stack: Postgres uses its own `TxET`, MSSQL uses a variant of it, BigQuery is simply in `ExceptT QErr IO`... To accommodate those variations, we had originally introduced an `ExecutionMonad b` type family in `BackendExecute`, allowing each backend to describe its own stack. It was then up to that backend's `BackendTransport` instance to implement running said stack, and converting the result back into our main app monad.
However, this was not without complications. `TraceT` is one of them: since it usually needs to be at the top of the stack, converting from one stack to the other implies the use of `interpTraceT`, which is quite monstrous. Furthermore, as part of the Entitlement Services work, we're trying to move to a "Services" architecture in which the entire engine runs in one base monad that delegates features and dependencies to monad constraints; as a result, we'd like to minimize the number of different monad stacks we have to maintain and translate to and from in the codebase.
To improve things, this PR changes `ExecutionMonad b` from an _absolute_ stack to a _relative_ one: i.e., what needs to be stacked on top of our base monad for the execution. In `Transport`, we then only need to pop the top of the stack, and voilà. This greatly simplifies the implementation of the backends, as there's no longer any need for stack transformations: MySQL's implementation becomes a single `runIdentityT`! This also removes most mentions of `TraceT` from the execution code, since it's no longer required: we can rely on the base monad's existing `MonadTrace` constraint.
To continue encapsulating monadic actions in `DBStepInfo` and avoid threading a bunch of `forall`s all over the place, this PR introduces a small local helper: `OnBaseMonad`. The only downside of all this is that it requires adding a `MonadBaseControl IO m` constraint all over the place: previously, we would run directly in `IO` and lift, and would therefore not need to carry that constraint all the way.
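To make the idea concrete, here is a minimal sketch of the helper and of what "popping the stack" looks like for a backend with no extra requirements. The real `OnBaseMonad` carries more constraints than just `Monad`, and `runOnBaseMonad`/`runNoExtraLayers` are illustrative names rather than the engine's:

```haskell
{-# LANGUAGE RankNTypes #-}

import Control.Monad.Trans.Identity (IdentityT, runIdentityT)

-- Keeps a backend's execution action polymorphic in the base monad, so it
-- can be stored in DBStepInfo without threading a 'forall' at every use site.
newtype OnBaseMonad t a = OnBaseMonad
  { runOnBaseMonad :: forall m. Monad m => t m a }

-- A backend that needs nothing on top of the base monad uses IdentityT as
-- its relative stack; "popping" it in Transport is a single runIdentityT.
runNoExtraLayers :: Monad m => OnBaseMonad IdentityT a -> m a
runNoExtraLayers action = runIdentityT (runOnBaseMonad action)
```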
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7789
GitOrigin-RevId: e9b2e431c5c47fa9851abf87545c0415ff6d1a12
Add some configurations for modern profiling modes, and integrate them into dev.sh.
These require cabal 3.8 due to the use of `import`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7671
GitOrigin-RevId: f793f64105cfd99fb51b247fa8bc050f6d4bd23e
Basic MongoDB agent. This is intended as a starting point for playing with nested documents in a MongoDB back end. Currently supports basic queries with projections, where expressions, limit and offset. No support for joins, aggregates or mutations.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7840
GitOrigin-RevId: 3f03b8416c95acf2b68da1db56cbe36a513a4bde
## Description
This PR removes `MetadataStorageT`, and cleans up all top-level error handling. In short: this PR changes `MonadMetadataStorage` to explicitly return a bunch of `Either QErr a`, instead of relying on the stack providing a `MonadError QErr`. Since we implement that class on the base monad *below any ExceptT*, this removes a lot of very complicated instances that make assumptions about the shape of the stack.
On the back of this, we can remove several layers of ExceptT from the core of the code, including the one in `RunT`, which allows us to remove several instances of `liftEitherM . runExceptT`.
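As a rough sketch of the shape change, with placeholder types standing in for the real definitions (`fetchMetadata` represents the much larger set of class methods):

```haskell
import Control.Monad.Except (MonadError, throwError)

data QErr      -- placeholder error type
data Metadata  -- placeholder metadata type

-- Errors are now explicit in the result, so the class can be implemented on
-- the base monad below any ExceptT, without assumptions about the stack.
class Monad m => MonadMetadataStorage m where
  fetchMetadata :: m (Either QErr Metadata)

-- Callers that do run in a MonadError QErr stack rethrow at the edge, which
-- is where the remaining 'liftEitherM'-style call sites come from.
fetchMetadataOrThrow :: (MonadMetadataStorage m, MonadError QErr m) => m Metadata
fetchMetadataOrThrow = fetchMetadata >>= either throwError pure
```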
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7689
GitOrigin-RevId: 97d600154d690f58c0b93fb4cc2d30fd383fd8b8
#7730 added `package` stanzas for all our internal Haskell libraries, so that `-Werror` was switched on for all of them. However, #7534 was developed simultaneously and merged shortly after, so we missed `package` stanzas for `hasura-incremental` and `arrows-extra`. #7761 then added one for `hasura-incremental`. This PR fixes one warning in a test suite there, and adds the missing `package` stanza for `arrows-extra`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7770
GitOrigin-RevId: 5f5c3fb5d4852c88ed2e14a3fa83fe264aec895b
It's pretty frustrating to see an error in CI and not know the actual cause, because we just dropped the information.
This adds the actual status code and body to the error message.
Previously, `getWithStatus` was only used by the `healthCheck'` function. This also refactors `get_` to use the same function, so we don't have to duplicate the error-handling logic.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7752
GitOrigin-RevId: 474e4c02ad6c5b676abc311b90b21998b4a93d94
### Description
This PR:
- fixes the package names in dev-sh.project.local (the config was using the names of the folders, not the names of the packages)
- adds similar options to CI: we also want to build with these options in CI, and in parallel there
- fixes all warnings
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7761
GitOrigin-RevId: ef1d78db8c94f5e74c18443aa517544f6a6f5a10
## Description
Adds a content-length response header to all endpoints. This PR tests the feature by checking the content-length header on the response to every request we send in the tests.
## Changelog ✍️
__Component__ : server
__Type__: enhancement
__Product__: community-edition
### Short Changelog
Add a content-length response header to all endpoints
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7444
Co-authored-by: Manas Agarwal <5352361+manasag@users.noreply.github.com>
GitOrigin-RevId: a0a811852053c5dde4b11b71ba11a7d456c84d76
- In a previous PR we made more use of `common` stanzas, but these require `cabal-version: 2.2` (so some of those stanzas were not functional before this PR). This just bumps the `cabal-version`s straight to 3.6, the latest version supported by cabal 3.6, which is what we build with in CI.
- This `cabal-version` upgrade required fixing a few `license` fields, from `BSD3` to `BSD-3-Clause`. I've also added `default-language` fields where appropriate, although this is perhaps optional.
- Using cabal's [syntax for applying options to all _local_ packages](https://cabal.readthedocs.io/en/3.8/cabal-project.html#package-configuration-options), we unify the cabal configuration, and apply `-Werror` to all local packages, which wasn't the case until now.
- Applying `-Werror` to all local packages required a few additional exceptions to the warnings that were switched on in hasura/graphql-engine-mono#7614.
- Deleted SCM links to the original pool library.
Overall, the effect of this PR is:
- more warnings
- stricter compilation
- ~~less cabal configuration~~
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7730
GitOrigin-RevId: 592e9e46d103bcc8726df5b745306bd9f77f7efc
This means we don't need to include the port in the connection string.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7683
Co-authored-by: Vishnu Bharathi <4211715+scriptnull@users.noreply.github.com>
GitOrigin-RevId: 3f6fb3fe4cb246a2fc593a2aea3820cf2c0e0e2c
See [Enable all the warnings](https://medium.com/mercury-bank/enable-all-the-warnings-a0517bc081c3). This PR follows that approach, except that it re-disables those warnings that would prevent a successful build.
There are some newer warning flags that older GHC versions don't recognize. So this also updates some of our CI routines to the GHC version that we're currently using for `graphql-engine` itself, namely 9.2.5. I don't see a reason to keep testing those libraries against older GHC versions.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7614
GitOrigin-RevId: d48a6db09dab29616e273549d0045f98ecb4586f
### Description
This fixes the libs' test config: without that line, hspec fails to run at build time. I haven't tried actually running the tests now that they build, fwiw.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7694
GitOrigin-RevId: 03d7bc969c4bd195e84080d50f1f6441a1d8d50f
Not sure why there was so much nesting, but I did not like it.
Just a bit of cleanup while I was nosing around.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7684
GitOrigin-RevId: a17c94561fe1688d35a51afa5dfda37a7ea35d25
We were previously using the Docker Compose file in the root directory
for manual testing _and_ the server API tests.
This splits them so we can e.g. add Yugabyte for easy manual testing.
In the future, this will also allow us to use ephemeral ports for API
test databases, while keeping the fixed ports for manual testing.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7524
GitOrigin-RevId: 7244e296b0ed0ace9782b6f44f321933a9d9a49d
## Description
This PR updates the JWK refresh thread to poll every second instead of the previous behaviour where the thread used to sleep based on the expiry time in `Cache-Control`/`Expires` response headers.
## Motivation
As a part of dynamically updating environment variables on cloud without restarting the user's projects, we want to implement a mechanism which makes HGE aware of any changes in the user configuration, by updating a shared variable data type which the relevant threads/core functionality can read before executing.
This requires making the threads poll: before executing their code, any change in the user config is picked up and the appropriate behaviour follows. The JWK-updating thread used to sleep for the duration indicated by the `Cache-Control` or `Expires` headers, which made it unaware of any changes in the user config during that period, hence requiring a restart to propagate the new changes.
To solve this problem, we have now updated the JWK update thread to poll every second for changes in `AuthMode` (which, in subsequent changes implementing the dynamic env var update feature, will come from a shared variable) and update the JWK accordingly, so that it does not use any stale configuration and works without an HGE restart.
### Related Issues
https://hasurahq.atlassian.net/browse/GS-300
### Solution and Design
- We store the expiry time in the `JWTCtx`
- On every poll, check whether the current time exceeds the expiry time; if it does, call the JWK url to fetch the new JWK and expiry (a sketch of this loop follows).
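A minimal sketch of that loop, assuming a hypothetical `fetchAndStoreJwk` action that calls the JWK url and records the new expiry in the `JWTCtx` (the names and signatures here are illustrative, not the engine's):

```haskell
import Control.Concurrent (threadDelay)
import Control.Monad (forever)
import Data.IORef (IORef, readIORef)
import Data.Time (UTCTime, getCurrentTime)

jwkRefreshLoop :: IORef (Maybe UTCTime) -> IO () -> IO a
jwkRefreshLoop expiryRef fetchAndStoreJwk = forever $ do
  threadDelay 1000000                     -- poll every second
  now    <- getCurrentTime
  expiry <- readIORef expiryRef
  case expiry of
    Just t | now < t -> pure ()           -- keys still fresh: nothing to do
    _                -> fetchAndStoreJwk  -- expired (or unknown): refresh
```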
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7177
Co-authored-by: Krushan Bauva <31391329+krushanbauva@users.noreply.github.com>
Co-authored-by: Anon Ray <616387+ecthiender@users.noreply.github.com>
GitOrigin-RevId: bc1e44a8c3823d7554167a7f01c3ce085646cedb
Hooks up event trigger codecs from #7237. This required fixing a problem where some backend types implemented `defaultTriggerOnReplication` with `error`, which caused the server to crash when evaluating those for default values in codecs. The changes here add a type family to `Backend` called `XEventTriggers` that signals backend support for event triggers, and change the type of `defaultTriggerOnReplication` from `TriggerOnReplication` to `Maybe (XEventTriggers b, TriggerOnReplication)`, so that it can only be implemented with a `Just` value if `XEventTriggers b` is inhabited. This emulates some existing type families in `Backend`. (Thanks to @daniel-chambers for this suggestion!)
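A trimmed-down sketch of the pattern; only `XEventTriggers` and `defaultTriggerOnReplication` come from the actual change, while the constructors and backend instances below are illustrative:

```haskell
{-# LANGUAGE AllowAmbiguousTypes #-}
{-# LANGUAGE TypeFamilies #-}

import Data.Kind (Type)
import Data.Void (Void)

data TriggerOnReplication = TOREnable | TORDisable

class Backend (b :: Type) where
  type XEventTriggers b :: Type
  defaultTriggerOnReplication :: Maybe (XEventTriggers b, TriggerOnReplication)

-- A backend that supports event triggers picks an inhabited witness type:
data Postgres
instance Backend Postgres where
  type XEventTriggers Postgres = ()
  defaultTriggerOnReplication = Just ((), TORDisable)

-- A backend without support sets the witness to Void, so no 'Just' value
-- can be produced and 'Nothing' is the only possible implementation:
data DataConnector
instance Backend DataConnector where
  type XEventTriggers DataConnector = Void
  defaultTriggerOnReplication = Nothing
```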
I used the implementation of `defaultTriggerOnReplication` as a signal for event triggers support to prune the Metadata API so that event trigger fields will not appear in the OpenAPI spec for backend types that do not support event triggers. The codec version of the API will also not emit or accept those fields for those backend types. I think I could use `Typeable` to test whether `XEventTriggers` is `Void` instead of testing whether `defaultTriggerOnReplication` is `Nothing`. But the codec implementation will crash anyway if `defaultTriggerOnReplication` is `Nothing`.
I checked to make sure that graphql-engine-pro still compiles.
Ticket: https://hasurahq.atlassian.net/browse/GDC-521
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7295
GitOrigin-RevId: 2b2dd44291513266107ca25cf330319bf53a8b66
This rewrites the last couple of Python tests that were failing when run with a separate HGE binary per test class. The changes are as follows:
1. The event triggers tests, naming conventions tests, and subscriptions tests all generate a new source DB per test, so they can run in parallel.
2. The scheduled triggers tests use the correct URL for the trigger service when the port is generated randomly.
3. Whitespace and trailing commas are added to the scheduled triggers tests.
4. Support for SQL Server is added to _hge.py_ so the naming conventions test that runs on SQL Server passes. (The other SQL Server tests do not pass and we're not going to bother with them for now.)
5. Container names are fixed in _run.sh_.
6. _run.sh_ and _run-new.sh_ don't pull images explicitly as it's annoying when running tests a lot. If you want to pull the latest versions, just run `docker compose pull` from the _server/tests-py_ directory, or the root directory. (If you don't have the images at all, they'll still be pulled automatically.)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7350
GitOrigin-RevId: db58f310f017b2a0884fcf61ccc56d15583f99bd
Adds a bunch of tests to the _resource-pool_ code to try and track down a bug.
Not surprisingly, all tests pass, which means that this didn't help. I still think it's worth keeping them.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7330
GitOrigin-RevId: 6d4deb9af5b192b3a0aa34ac0751d28e12b22b48
We currently let the garbage collector and/or the operating system clean up our mess. This is mostly fine in production (kind of) but a problem when we want to start many HGE servers in parallel for testing purposes.
Shutting them down should, in theory, ease the load.
There is more work to be done in the API test suite before this is very helpful. Right now the test suite actually runs the finalizers on the server context straight away and then uses the leaked resources. As there's no way to actually "close" a connection pool, it keeps working regardless. If we wanted to be strict about this we might want to add a "closed" flag to `Data.Pool` which would cause an exception on `withResource` after closing it.
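As a hedged sketch of that "closed" flag idea (not what this PR implements, and assuming the stock `Data.Pool` API rather than our vendored pool; all names are illustrative):

```haskell
import Control.Exception (ErrorCall (..), throwIO)
import Data.IORef (IORef, readIORef, writeIORef)
import Data.Pool (Pool, destroyAllResources, withResource)

-- Pair the pool with a flag so use-after-close fails fast.
data ClosablePool a = ClosablePool
  { cpPool   :: Pool a
  , cpClosed :: IORef Bool
  }

withResource' :: ClosablePool a -> (a -> IO b) -> IO b
withResource' (ClosablePool pool closedRef) action = do
  closed <- readIORef closedRef
  if closed
    then throwIO (ErrorCall "withResource': pool is closed")
    else withResource pool action

closePool :: ClosablePool a -> IO ()
closePool (ClosablePool pool closedRef) = do
  writeIORef closedRef True
  destroyAllResources pool
```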
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7299
GitOrigin-RevId: ba02f96c7b5b06ba3ba7080a5583a56cb0efcfa7
Codecs for event triggers, including webhook transforms. These are not hooked into the higher-up table metadata codec yet because some backend implementations implement event triggers with `error` which causes an error when codecs are evaluated. I plan to follow up with another PR to resolve that.
Ticket: https://hasurahq.atlassian.net/browse/GDC-585
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7237
GitOrigin-RevId: 8ce40fe6fedcf8b109d6ca50a505333df855a8ce
We no longer support this and therefore don't run tests against it.
This also refactors the code a little so it doesn't have to skip running a PostgreSQL-specific test against MS SQL Server.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7201
GitOrigin-RevId: 307c2ab0052162c012f7b1c55866b57f2fa6d9a6
Generate more Metadata Inconsistencies instead of startup failures. Specifically this means that
- errors retrieving the main query of an executable GraphQL document, and
- errors during fragment inlining
no longer fail irrecoverably.
This also turns more parts of `buildSchemaCacheRule` into pure code, which is always nice.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7234
GitOrigin-RevId: aebf636c2fb1aad1c2df9a37f7d0b67c1ee40c42
context: This is foundation work, before we change how the server chooses to compress or not
part of effort: #5518
-----
Prior to this change it was difficult to understand how the functionality in this module related to the semantics of Accept-Encoding. We also didn't correctly handle directives with qvalues.
After this change certain technical infelicities are called out without modifying the behavior of the server; for instance we continue to fall back to identity (no compression) in the case where technically we're supposed to return 406, and we also continue to treat `*` conservatively as meaning “use no compression”.
The only external change here is that `gzip;q=x.y` now results in a zipped response.
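Purely as an illustration of that behavioural change (these are not the module's real types or functions), the selection now treats any gzip directive with a positive qvalue as acceptable:

```haskell
data Encoding = Identity | Gzip
  deriving (Show)

chooseEncoding :: [(Encoding, Double)] -> Encoding
chooseEncoding directives =
  case [enc | (enc@Gzip, q) <- directives, q > 0] of
    enc : _ -> enc       -- the client accepts gzip with a non-zero qvalue
    []      -> Identity  -- otherwise fall back to no compression
```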
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7213
GitOrigin-RevId: 1910ffd70d29f1ab8825c601f1bd998be70ceeeb
`toLazyByteString` is a little deficient in two ways:
- It allocates relatively large chunks (4KB + 32KB + 32KB, etc.), which is wasteful for small ByteStrings
- It shrinks each chunk (copying the data to a new chunk of exactly the right size) if it's not more than half filled. If we're running the builder right before we send it over the wire, this copy is totally extraneous (we simply end up with more work for the next GC). A sketch of an alternative follows this list.
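A sketch of the kind of alternative this motivates, using `toLazyByteStringWith` with an untrimmed strategy; the chunk size and the function name are illustrative, not necessarily what the engine settles on:

```haskell
import           Data.ByteString.Builder       (Builder)
import           Data.ByteString.Builder.Extra (smallChunkSize,
                                                toLazyByteStringWith,
                                                untrimmedStrategy)
import qualified Data.ByteString.Lazy          as BL

-- Start with a smaller first chunk and skip the trimming copy, since the
-- bytes are about to be written to the socket anyway.
runBuilderUntrimmed :: Builder -> BL.ByteString
runBuilderUntrimmed =
  toLazyByteStringWith (untrimmedStrategy 1024 smallChunkSize) BL.empty
```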
part of the effort: https://github.com/hasura/graphql-engine-mono/issues/5518
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7187
GitOrigin-RevId: b499cd49c33da6cfee96be629a36b5c812486e39
## Description
There is a bug in the metadata defaults code, see [the original PR](https://github.com/hasura/graphql-engine-mono/pull/6286).
Steps to reproduce this issue:
* Start a new HGE project
* Start HGE with a defaults argument: `HASURA_GRAPHQL_LOG_LEVEL=debug cabal run exe:graphql-engine -- serve --enable-console --console-assets-dir=./console/static/dist --metadata-defaults='{"backend_configs": {"dataconnector": {"mongo": {"display_name": "BONGOBB", "uri": "http://localhost:8123"}}}}'`
* Add a source (doesn't need to be related to the defaults)
* Export metadata
* See that the defaults are present in the exported metadata
## Related Issues
* Github Issue: https://github.com/hasura/graphql-engine/issues/9237
* Jira: https://hasurahq.atlassian.net/browse/GDC-647
* Original PR: https://github.com/hasura/graphql-engine-mono/pull/6286
## Solution
* The test for whether defaults should be included in metadata API operations has been extended to check for updates
* Metadata inconsistencies have been hidden for `/capabilities` calls on startup
## TODO
* [x] Fix bug
* [x] Write tests
* [x] OSS Metadata Migration to correct persisted data - `server/src-rsr/migrations/47_to_48.sql`
* [x] Cloud Metadata Migration - `pro/server/res/cloud/migrations/6_to_7.sql`
* [x] Bump Catalog Version - `server/src-rsr/catalog_version.txt`
* [x] Update Catalog Versions - `server/src-rsr/catalog_versions.txt` (This will be done by Infra when creating a release)
* [x] Log connection error as it occurs (Already being logged. Requires `--enabled-log-types startup,webhook-log,websocket-log,http-log,data-connector-log`)
* [x] Don't mark metadata inconsistencies for this call.
## Questions
* [ ] Does the `pro/server/res/cloud/migrations/6_to_7.sql` cover the cloud scenarios?
* [ ] Should we have `SET search_path` in migrations?
* [x] What should be in `server/src-rsr/catalog_versions.txt`?
## Testing
To test the solution locally run:
> docker compose up -d
and
> cabal run -- exe:api-tests --skip BigQuery --skip SQLServer --skip '/Test.API.Explain/Postgres/'
## Solution
In `runMetadataQuery` in `server/src-lib/Hasura/Server/API/Metadata.hs`:
```diff
- if (exportsMetadata _rqlMetadata)
+ if (exportsMetadata _rqlMetadata || queryModifiesMetadata _rqlMetadata)
```
This ensures that defaults aren't present in operations that serialise metadata.
Note: You might think that `X_add_source` would need the defaults to be present to add a source that references the defaults, but since the resolution occurs in the schema-cache building phase, the defaults can be excluded for the metadata modifications required for `X_add_source`.
In addition to the code-change, a metadata migration has been introduced in order to clean up serialised defaults.
The following scenarios need to be considered for both OSS and Cloud:
* The user has not had defaults serialised
* The user has had the defaults serialised and no other backends configured
* The user has had the defaults serialised and has also configured other backends
We want to remove as much of the serialised metadata as possible without removing any user-specified data, and this should be reflected in migration `server/src-rsr/migrations/47_to_48.sql`.
## Server checklist
### Catalog upgrade
Does this PR change Hasura Catalog version?
- ✅ Yes
### Metadata
Does this PR add a new Metadata feature?
- ✅ No
### GraphQL
- ✅ No new GraphQL schema is generated
### Breaking changes
- ✅ No Breaking changes
## Changelog
__Component__ : server
__Type__: bugfix
__Product__: community-edition
### Short Changelog
Fixes a metadata defaults serialization bug and introduces a metadata migration to correct data that has been persisted due to the bug.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7034
GitOrigin-RevId: ad7d4f748397a1a607f2c0c886bf0fbbc3f873f2
This PR implements the remaining codecs for table permissions. However the codec for boolean expressions delegates to Aeson instances because Autodocodec doesn't currently have the necessary feature to write a codec for boolean expressions that will reliably parse valid data.
Boolean expressions are objects with keys like `_and`, `_or`, `_exists`, or `<field name>`. The parsing rules for each value depend on the key, so we need to be able to select different codecs for each key. We could do that with an `object` codec, but that doesn't account for the arbitrary field name keys that can be provided. OpenAPI supports object types with "additional properties", but I don't know if we can declare a specific type for those properties. There might or might not be a reasonable path to extending Autodocodec to handle this case.
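As a rough illustration of the shape involved (this is not the engine's actual boolean expression type):

```haskell
import qualified Data.Aeson as J
import           Data.Text  (Text)

-- Three keys are known in advance, but any other key is a field name whose
-- value needs its own parser, which a fixed-key 'object' codec can't express.
data BoolExp
  = BoolAnd [BoolExp]        -- "_and":          [<boolexp>, ...]
  | BoolOr  [BoolExp]        -- "_or":           [<boolexp>, ...]
  | BoolExists J.Value       -- "_exists":       {...}
  | BoolField Text J.Value   -- "<field name>":  <operator expression>
```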
Ticket: [GDC-585](https://hasurahq.atlassian.net/browse/GDC-585)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6978
GitOrigin-RevId: 0b0dcfd59ebd1d5022ff2ab86dd8d4c6f93bd039