## Description
### I want to speak to the `Manager`
Oh boy. This PR is both fairly straightforward and overreaching, so let's break it down.
For most network access, we need a [`HTTP.Manager`](https://hackage.haskell.org/package/http-client-0.1.0.0/docs/Network-HTTP-Client-Manager.html). It is created only once, at the top level, when starting the engine, and is then threaded through the application to wherever we need to make a network call. As of main, the way we do this is not standardized: most of the GraphQL execution code passes it "manually" as a function argument throughout the code. We also have a custom monad constraint, `HasHttpManagerM`, that describes a monad's ability to provide a manager. And, finally, several parts of the code store the manager in some kind of argument structure, such as `RunT`'s `RunCtx`.
This PR's first goal is to harmonize all of this: we always create the manager at the root, and we already have it when we do our very first `runReaderT`. Wouldn't it make sense for the rest of the code to not manually pass it anywhere, to not store it anywhere, but to always rely on the current monad providing it? This is, in short, what this PR does: it implements a constraint on the base monads, so that they provide the manager, and removes most explicit passing from the code.
### First come, first served
One way this PR goes a tiny bit further than "just" doing the aforementioned harmonization is that it starts the process of implementing the "Services oriented architecture" roughly outlined in this [draft document](https://docs.google.com/document/d/1FAigqrST0juU1WcT4HIxJxe1iEBwTuBZodTaeUvsKqQ/edit?usp=sharing). Instead of keeping the existing `HasHttpManagerM`, this PR revamps it into the `ProvidesNetwork` service.
The idea is, again, that we should make all "external" dependencies of the engine, all things that the core of the engine doesn't care about, a "service". This allows us to define clear APIs for features, to choose different implementations based on which version of the engine we're running, and to harmonize our many scattered monadic constraints. This is why the service is called "Network": we can refine it, moving forward, to be the constraint that defines how all network communication is to operate, instead of relying on disparate class constraints or hardcoded decisions. A comment in the code clarifies this intent.
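As a rough illustration, a service constraint of this kind can be as small as the following sketch, assuming a single method that hands out the manager (the real class in the codebase may carry more than this):

```haskell
import qualified Network.HTTP.Client as HTTP

-- The base monad provides the manager, so no other code needs to store it or
-- thread it around explicitly.
class Monad m => ProvidesNetwork m where
  askHTTPManager :: m HTTP.Manager
```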
### Side-effects? In my Haskell?
This PR also unavoidably touches some other aspects of the codebase. One such example: it introduces `Hasura.App.AppContext`, named after `HasuraPro.Context.AppContext`: a name for the reader structure at the base level. It also transforms `Handler` from a type alias to a newtype, as `Handler` is where we actually enforce HTTP limits; but without `Handler` being a distinct type, any code path could simply do a `runExceptT $ runReader` and forget to enforce them.
(As a rule of thumb, I am starting to consider any straggling `runReaderT` or `runExceptT` as a code smell: we should not stack / unstack monads haphazardly, and every layer should be an opaque `newtype` with a corresponding run function.)
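A minimal sketch of that rule of thumb, with placeholder types (this is not the actual `Handler` definition):

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Except (ExceptT, runExceptT)
import Control.Monad.Reader (ReaderT, runReaderT)

data HandlerCtx = HandlerCtx -- placeholder
data QErr = QErr String      -- placeholder

-- The layer is opaque: callers can't peel it apart with stray
-- runReaderT / runExceptT calls, and limits can be enforced in runHandler.
newtype Handler a = Handler (ReaderT HandlerCtx (ExceptT QErr IO) a)
  deriving (Functor, Applicative, Monad)

runHandler :: HandlerCtx -> Handler a -> IO (Either QErr a)
runHandler ctx (Handler m) = runExceptT (runReaderT m ctx)
```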
## Further work
In several places, I have left TODOs where I encountered things that suggest further, unrelated cleanups. I'll write down the follow-up steps, either in the aforementioned document or on Slack. In short, in approximate order, we could:
- delete `ExecutionCtx` as it is only a subset of `ServerCtx`, and remove one more `runReaderT` call
- delete `ServerConfigCtx` as it is only a subset of `ServerCtx`, and remove it from `RunCtx`
- remove `ServerCtx` from `HandlerCtx`, and make it part of `AppContext`, or even make it the `AppContext` altogether (since, at least for the OSS version, `AppContext` is again only a subset of it)
- remove `CacheBuildParams` and `CacheBuild` altogether, as they're just a distinct stack that is a `ReaderT` on top of `IO` that contains, you guessed it, the same thing as `ServerCtx`
- move `RunT` out of `RQL.Types` and rename it, since after the previous cleanups **it only contains `UserInfo`**; it could be bundled with the authentication service, made a small implementation detail in `Hasura.Server.Auth`
- rename `PGMetadataStorageT` to something a bit more accurate, such as `App`, and enforce its IO base
This would significantly simplify our complex stack. From there, or in parallel, we can start moving existing dependencies to services. To support the read-replicas entitlement, we could move `MonadResolveSource` to a `SourceResolver` service, as attempted in #7653, and transform `UserAuthenticationM` into an `Authentication` service.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7736
GitOrigin-RevId: 68cce710eb9e7d752bda1ba0c49541d24df8209f
- Inline a few instances to avoid code duplication
- Use `(<$>)` to avoid `let`
- Improve error reporting when types of invalid kind are specified in `possibleTypes` or `interfaces`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7540
GitOrigin-RevId: 954fb710f94a275daff938b9a6e58765c4286d0c
### Description
Each Backend executes queries against the database in a slightly different stack: Postgres uses its own `TxET`, MSSQL uses a variant of it, BigQuery is simply in `ExceptT QErr IO`... To accommodate those variations, we had originally introduced an `ExecutionMonad b` type family in `BackendExecute`, allowing each backend to describe its own stack. It was then up to that backend's `BackendTransport` instance to implement running said stack, and converting the result back into our main app monad.
However, this was not without complications. `TraceT` is one of them: as it usually needs to be at the top of the stack, converting from one stack to the other requires the use of `interpTraceT`, which is quite monstrous. Furthermore, as part of the Entitlement Services work, we're trying to move to a "Services" architecture in which the entire engine runs in one base monad that delegates features and dependencies to monad constraints; as a result, we'd like to minimize the number of different monad stacks we have to maintain and translate to and from in the codebase.
To improve things, this PR changes `ExecutionMonad b` from an _absolute_ stack to a _relative_ one: i.e.: what needs to be stacked on top of our base monad for the execution. In `Transport`, we then only need to pop the top of the stack, and voila. This greatly simplifies the implementation of the backends, as there's no longer any need to do any stack transformation: MySQL's implementation becomes a `runIdentityT`! This also removes most mentions of `TraceT` from the execution code since it's no longer required: we can rely on the base monad's existing `MonadTrace` constraint.
To continue encapsulating monadic actions in `DBStepInfo` and avoid threading a bunch of `forall`s all over the place, this PR introduces a small local helper: `OnBaseMonad`. The only downside of all this is that it requires adding a `MonadBaseControl IO m` constraint all over the place: previously, we would run directly on `IO` and lift, and would therefore not need to bring that constraint all the way.
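To illustrate the shape of the change, here is a hypothetical sketch of a "relative" stack; names and structure are simplified stand-ins, not the real classes:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE TypeFamilies #-}

import Control.Monad.IO.Class (MonadIO)
import Control.Monad.Trans.Identity (IdentityT)
import Data.Kind (Type)

-- Stand-in for the engine's backend tag.
data BackendType = Postgres | MySQL

-- The "relative" stack: only what each backend layers on top of the base monad.
type family ExecutionMonad (b :: BackendType) :: (Type -> Type) -> Type -> Type

-- A backend with no extra requirements just uses IdentityT, so "running" its
-- stack in Transport is a single runIdentityT.
type instance ExecutionMonad 'MySQL = IdentityT

-- Wrapper keeping a backend action polymorphic in the base monad, so that
-- DBStepInfo doesn't need explicit foralls threaded through it.
newtype OnBaseMonad t a = OnBaseMonad
  { runOnBaseMonad :: forall m. MonadIO m => t m a }
```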
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7789
GitOrigin-RevId: e9b2e431c5c47fa9851abf87545c0415ff6d1a12
Add some configurations for modern profiling modes, and integration into dev.sh
These require cabal 3.8 due to the use of `import`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7671
GitOrigin-RevId: f793f64105cfd99fb51b247fa8bc050f6d4bd23e
Basic MongoDB agent. This is intended as a starting point for playing with nested documents in a MongoDB back end. Currently supports basic queries with projections, where expressions, limit and offset. No support for joins, aggregates or mutations.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7840
GitOrigin-RevId: 3f03b8416c95acf2b68da1db56cbe36a513a4bde
## Description
This PR removes `MetadataStorageT`, and cleans up all top-level error handling. In short: this PR changes `MonadMetadataStorage` to explicitly return a bunch of `Either QErr a`, instead of relying on the stack providing a `MonadError QErr`. Since we implement that class on the base monad *below any ExceptT*, this removes a lot of very complicated instances that make assumptions about the shape of the stack.
On the back of this, we can remove several layers of ExceptT from the core of the code, including the one in `RunT`, which allows us to remove several instances of `liftEitherM . runExceptT`.
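A minimal sketch of the new shape (method names and types are placeholders, not the real class):

```haskell
-- Placeholders for this sketch; the real class has many more methods.
data QErr = QErr String
data Metadata = Metadata

-- Every result is an explicit Either, so instances on the base monad don't
-- need a MonadError QErr from the surrounding stack; top-level callers decide
-- where to rehydrate errors (e.g. with liftEither).
class Monad m => MonadMetadataStorage m where
  fetchMetadata :: m (Either QErr Metadata)
  setMetadata :: Metadata -> m (Either QErr ())
```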
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7689
GitOrigin-RevId: 97d600154d690f58c0b93fb4cc2d30fd383fd8b8
#7730 added `package` stanzas for all our internal Haskell libraries, so that `-Werror` was switched on for all of them. However, #7534 was developed simultaneously, and merged shortly after, so that we missed `package` stanzas for `hasura-incremental` and `arrows-extra`. Then #7761 added one for `hasura-incremental`. This PR fixes one warning for a test suite there, and adds the missing `package` stanza for `arrows-extra`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7770
GitOrigin-RevId: 5f5c3fb5d4852c88ed2e14a3fa83fe264aec895b
It's pretty frustrating to see an error in CI and not know the actual cause, because we just dropped the information.
This adds the actual status code and body to the error message.
Previously, `getWithStatus` was only used by the `healthCheck'` function. This also refactors `get_` to use the same function, so we don't have to duplicate the error-handling logic.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7752
GitOrigin-RevId: 474e4c02ad6c5b676abc311b90b21998b4a93d94
### Description
This PR:
- fixes the package names in dev-sh.project.local (the config was using the names of the folders, not the names of the packages)
- adds similar options to CI: we also want to build this way in CI, and in parallel there
- fixes all warnings
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7761
GitOrigin-RevId: ef1d78db8c94f5e74c18443aa517544f6a6f5a10
## Description
Adds a content-length response header to all endpoints. This PR tests this feature by checking the content-length of the response to every request we send in the tests.
## Changelog ✍️
__Component__ : server
__Type__: enhancement
__Product__: community-edition
### Short Changelog
add a content-length response header to all endpoints
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7444
Co-authored-by: Manas Agarwal <5352361+manasag@users.noreply.github.com>
GitOrigin-RevId: a0a811852053c5dde4b11b71ba11a7d456c84d76
- In a previous PR we made more use of `common` stanzas, but these require `cabal-version: 2.2` (so some of those stanzas were not functional before this PR). This just bumps the `cabal-version`s straight to 3.6, which is the latest version we support in CI, where we build with cabal 3.6.
- This `cabal-version` upgrade required fixing a few `license` fields, from `BSD3` to `BSD-3-Clause`. I've also added `default-language` fields where appropriate, although this is perhaps optional.
- Using cabal's [syntax for applying options to all _local_ packages](https://cabal.readthedocs.io/en/3.8/cabal-project.html#package-configuration-options), we unify the cabal configuration, and apply `-Werror` to all local packages, which wasn't the case until now.
- Applying `-Werror` to all local packages required a few additional exceptions to the warnings that were switched on in hasura/graphql-engine-mono#7614.
- Deleted SCM links to the original pool library.
Overall, the effect of this PR is:
- more warnings
- stricter compilation
- ~~less cabal configuration~~
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7730
GitOrigin-RevId: 592e9e46d103bcc8726df5b745306bd9f77f7efc
This means we don't need to include the port in the connection string.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7683
Co-authored-by: Vishnu Bharathi <4211715+scriptnull@users.noreply.github.com>
GitOrigin-RevId: 3f6fb3fe4cb246a2fc593a2aea3820cf2c0e0e2c
See [Enable all the warnings](https://medium.com/mercury-bank/enable-all-the-warnings-a0517bc081c3). This PR follows that approach, except that it re-disables those warnings that would prevent a successful build.
There are some newer warning flags that older GHC versions don't recognize. So this also updates some of our CI routines to the GHC version that we're currently using for `graphql-engine` itself, namely 9.2.5. I don't see a reason to keep testing those libraries against older GHC versions.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7614
GitOrigin-RevId: d48a6db09dab29616e273549d0045f98ecb4586f
### Description
This fixes the libs' test config: without that line, hspec fails to run at build time. I haven't tried actually running the tests now that they build, fwiw.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7694
GitOrigin-RevId: 03d7bc969c4bd195e84080d50f1f6441a1d8d50f
Not sure why there was so much nesting, but I did not like it.
Just a bit of cleanup while I was nosing around.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7684
GitOrigin-RevId: a17c94561fe1688d35a51afa5dfda37a7ea35d25
We were previously using the Docker Compose file in the root directory
for manual testing _and_ the server API tests.
This splits them so we can e.g. add Yugabyte for easy manual testing.
In the future, this will also allow us to use ephemeral ports for API
test databases, while keeping the fixed ports for manual testing.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7524
GitOrigin-RevId: 7244e296b0ed0ace9782b6f44f321933a9d9a49d
## Description
This PR updates the JWK refresh thread to poll every second instead of the previous behaviour where the thread used to sleep based on the expiry time in `Cache-Control`/`Expires` response headers.
## Motivation
As part of dynamically updating environment variables on cloud without restarting user projects, we want to implement a mechanism that makes HGE aware of any changes in the user configuration by updating a shared variable data type which can be accessed by the relevant threads/core functionality before they execute.
This requires making the threads poll, so that any change in the user config is captured and handled before they execute their code. The JWK updating thread used to sleep for the duration given by the `Cache-Control` or `Expires` headers, which left it unaware of any new changes in the user config during that period, hence requiring a restart to propagate them.
To solve this problem, we have now updated the JWK update thread to poll every second for changes in `AuthMode` (which, in subsequent changes implementing the dynamic env var update feature, will come from a shared variable) and update the JWK accordingly, so that it does not use any stale configuration and works without an HGE restart.
### Related Issues
https://hasurahq.atlassian.net/browse/GS-300
### Solution and Design
- We store the expiry time in the `JWTCtx`
- On every poll, we check whether the current time exceeds the expiry time; if it does, we call the JWK URL to fetch the new JWK and expiry.
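A minimal sketch of that poll loop (all names are hypothetical):

```haskell
import Control.Concurrent (threadDelay)
import Control.Monad (forever, when)
import Data.IORef (IORef, readIORef)
import Data.Time (UTCTime, getCurrentTime)

-- `refreshJwk` stands for whatever action fetches the new JWK from the
-- configured URL and stores the new expiry in the shared state.
jwkRefreshLoop :: IORef UTCTime -> IO () -> IO ()
jwkRefreshLoop expiryRef refreshJwk = forever $ do
  now    <- getCurrentTime
  expiry <- readIORef expiryRef
  when (now >= expiry) refreshJwk -- only hit the JWK URL once expired
  threadDelay 1000000             -- poll every second
```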
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7177
Co-authored-by: Krushan Bauva <31391329+krushanbauva@users.noreply.github.com>
Co-authored-by: Anon Ray <616387+ecthiender@users.noreply.github.com>
GitOrigin-RevId: bc1e44a8c3823d7554167a7f01c3ce085646cedb
Hooks up event trigger codecs from #7237. This required fixing a problem where some backend types implemented `defaultTriggerOnReplication` with `error`, which caused the server to crash when evaluating those for default values in codecs. The changes here add a type family to `Backend` called `XEventTriggers` that signals backend support for event triggers, and change the type of `defaultTriggerOnReplication` from `TriggerOnReplication` to `Maybe (XEventTriggers b, TriggerOnReplication)` so that it can only be implemented with a `Just` value if `XEventTriggers b` is inhabited. This emulates some existing type families in `Backend`. (Thanks to @daniel-chambers for this suggestion!)
I used the implementation of `defaultTriggerOnReplication` as a signal for event triggers support to prune the Metadata API so that event trigger fields will not appear in the OpenAPI spec for backend types that do not support event triggers. The codec version of the API will also not emit or accept those fields for those backend types. I think I could use `Typeable` to test whether `XEventTriggers` is `Void` instead of testing whether `defaultTriggerOnReplication` is `Nothing`. But the codec implementation will crash anyway if `defaultTriggerOnReplication` is `Nothing`.
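For illustration, a hypothetical sketch of the pattern; all types here are stand-ins, not the real `Backend` class:

```haskell
{-# LANGUAGE TypeFamilies #-}

import Data.Kind (Type)
import Data.Void (Void)

data TriggerOnReplication = TOREnable | TORDisable

class Backend b where
  type XEventTriggers b :: Type
  -- Can only be a Just when XEventTriggers b is inhabited.
  defaultTriggerOnReplication :: Maybe (XEventTriggers b, TriggerOnReplication)

data Postgres

instance Backend Postgres where
  type XEventTriggers Postgres = ()
  defaultTriggerOnReplication = Just ((), TOREnable)

data DataConnector

instance Backend DataConnector where
  type XEventTriggers DataConnector = Void -- uninhabited: no event triggers
  defaultTriggerOnReplication = Nothing
```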
I checked to make sure that graphql-engine-pro still compiles.
Ticket: https://hasurahq.atlassian.net/browse/GDC-521
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7295
GitOrigin-RevId: 2b2dd44291513266107ca25cf330319bf53a8b66
This rewrites the last couple of Python tests that were failing when run with a separate HGE binary per test class. The changes are as follows:
1. The event triggers tests, naming conventions tests, and subscriptions tests all generate a new source DB per test, so can run in parallel.
2. The scheduled triggers tests use the correct URL for the trigger service when the port is generated randomly.
3. Whitespace and trailing commas are added to the scheduled triggers tests.
4. Support for SQL Server is added to _hge.py_ so the naming conventions test that runs on SQL Server passes. (The other SQL Server tests do not pass and we're not going to bother with them for now.)
5. Container names are fixed in _run.sh_.
6. _run.sh_ and _run-new.sh_ don't pull images explicitly as it's annoying when running tests a lot. If you want to pull the latest versions, just run `docker compose pull` from the _server/tests-py_ directory, or the root directory. (If you don't have the images at all, they'll still be pulled automatically.)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7350
GitOrigin-RevId: db58f310f017b2a0884fcf61ccc56d15583f99bd
Adds a bunch of tests to the _resource-pool_ code to try and track down a bug.
Not surprisingly, all tests pass, which means that this didn't help. I still think it's worth keeping them.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7330
GitOrigin-RevId: 6d4deb9af5b192b3a0aa34ac0751d28e12b22b48
We currently let the garbage collector and/or the operating system clean up our mess. This is mostly fine in production (kind of) but a problem when we want to start many HGE servers in parallel for testing purposes.
Shutting them down should, in theory, ease the load.
There is more work to be done in the API test suite before this is very helpful. Right now the test suite actually runs the finalizers on the server context straight away and then uses the leaked resources. As there's no way to actually "close" a connection pool, it keeps working regardless. If we wanted to be strict about this we might want to add a "closed" flag to `Data.Pool` which would cause an exception on `withResource` after closing it.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7299
GitOrigin-RevId: ba02f96c7b5b06ba3ba7080a5583a56cb0efcfa7
Codecs for event triggers, including webhook transforms. These are not hooked into the higher-up table metadata codec yet because some backend implementations implement event triggers with `error` which causes an error when codecs are evaluated. I plan to follow up with another PR to resolve that.
Ticket: https://hasurahq.atlassian.net/browse/GDC-585
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7237
GitOrigin-RevId: 8ce40fe6fedcf8b109d6ca50a505333df855a8ce
We no longer support this and therefore don't run tests against it.
This also refactors the code a little so it doesn't have to skip running a PostgreSQL-specific test against MS SQL Server.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7201
GitOrigin-RevId: 307c2ab0052162c012f7b1c55866b57f2fa6d9a6
Generate more Metadata Inconsistencies instead of startup failures. Specifically this means that
- errors retrieving the main query of an executable GraphQL document, and
- errors during fragment inlining
no longer fail irrecoverably.
This also makes more parts of `buildSchemaCacheRule` into pure code, which is always nice.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7234
GitOrigin-RevId: aebf636c2fb1aad1c2df9a37f7d0b67c1ee40c42
context: This is foundation work, before we change how the server chooses to compress or not
part of effort: #5518
-----
Prior to this change it was difficult to understand how the functionality in this module related to the semantics of Accept-Encoding. We also didn't correctly handle directives with qvalues.
After this change certain technical infelicities are called out without modifying the behavior of the server; for instance we continue to fall back to identity (no compression) in the case where technically we're supposed to return 406, and we also continue to treat `*` conservatively as meaning “use no compression”.
The only external change here is that `gzip;q=x.y` now results in a zipped response.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7213
GitOrigin-RevId: 1910ffd70d29f1ab8825c601f1bd998be70ceeeb
`toLazyByteString` is a little deficient in two ways:
- It allocates relatively large chunks (4KB + 32KB + 32KB, etc.), which is wasteful for small ByteStrings
- It shrinks each chunk (copying the data to a new chunk of exactly the right size) if it's not more than half filled. If we're running the builder right before we send it over the wire, this copy is totally extraneous (we simply end up with more work for the next GC). One way to avoid both costs is sketched below.
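For illustration, here is one way to address both points using bytestring's `Data.ByteString.Builder.Extra` (chunk sizes are illustrative, and this is not necessarily the strategy this PR adopts):

```haskell
import qualified Data.ByteString.Builder as B
import qualified Data.ByteString.Builder.Extra as B
import qualified Data.ByteString.Lazy as BL

-- Start with a small first chunk (cheap for small responses) and never do
-- the copy-to-trim, since the result is written to the socket right away.
runBuilderUntrimmed :: B.Builder -> BL.ByteString
runBuilderUntrimmed =
  B.toLazyByteStringWith (B.untrimmedStrategy 1024 B.defaultChunkSize) BL.empty
```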
part of the effort: https://github.com/hasura/graphql-engine-mono/issues/5518
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7187
GitOrigin-RevId: b499cd49c33da6cfee96be629a36b5c812486e39
## Description
There is a bug in the metadata defaults code, see [the original PR](https://github.com/hasura/graphql-engine-mono/pull/6286).
Steps to reproduce this issue:
* Start a new HGE project
* Start HGE with a defaults argument: `HASURA_GRAPHQL_LOG_LEVEL=debug cabal run exe:graphql-engine -- serve --enable-console --console-assets-dir=./console/static/dist --metadata-defaults='{"backend_configs": {"dataconnector": {"mongo": {"display_name": "BONGOBB", "uri": "http://localhost:8123"}}}}'`
* Add a source (doesn't need to be related to the defaults)
* Export metadata
* See that the defaults are present in the exported metadata
## Related Issues
* Github Issue: https://github.com/hasura/graphql-engine/issues/9237
* Jira: https://hasurahq.atlassian.net/browse/GDC-647
* Original PR: https://github.com/hasura/graphql-engine-mono/pull/6286
## Solution
* The test for if defaults should be included for metadata api operations has been extended to check for updates
* Metadata inconsistencies have been hidden for `/capabilities` calls on startup
## TODO
* [x] Fix bug
* [x] Write tests
* [x] OSS Metadata Migration to correct persisted data - `server/src-rsr/migrations/47_to_48.sql`
* [x] Cloud Metadata Migration - `pro/server/res/cloud/migrations/6_to_7.sql`
* [x] Bump Catalog Version - `server/src-rsr/catalog_version.txt`
* [x] Update Catalog Versions - `server/src-rsr/catalog_versions.txt` (This will be done by Infra when creating a release)
* [x] Log connection error as it occurs *(Already being logged. Requires `--enabled-log-types startup,webhook-log,websocket-log,http-log,data-connector-log`)*
* [x] Don't mark metadata inconsistencies for this call.
## Questions
* [ ] Does the `pro/server/res/cloud/migrations/6_to_7.sql` cover the cloud scenarios?
* [ ] Should we have `SET search_path` in migrations?
* [x] What should be in `server/src-rsr/catalog_versions.txt`?
## Testing
To test the solution locally run:
> docker compose up -d
and
> cabal run -- exe:api-tests --skip BigQuery --skip SQLServer --skip '/Test.API.Explain/Postgres/'
## Solution
In `runMetadataQuery` in `server/src-lib/Hasura/Server/API/Metadata.hs`:
```diff
- if (exportsMetadata _rqlMetadata)
+ if (exportsMetadata _rqlMetadata || queryModifiesMetadata _rqlMetadata)
```
This ensures that defaults aren't present in operations that serialise metadata.
Note: You might think that `X_add_source` would need the defaults to be present to add a source that references the defaults, but since the resolution occurs in the schema-cache building phase, the defaults can be excluded for the metadata modifications required for `X_add_source`.
In addition to the code-change, a metadata migration has been introduced in order to clean up serialised defaults.
The following scenarios need to be considered for both OSS and Cloud:
* The user has not had defaults serialised
* The user has had the defaults serialised and no other backends configured
* The user has had the defaults serialised and has also configured other backends
We want to remove as much of the metadata as possible without any user-specified data and this should be reflected in migration `server/src-rsr/migrations/47_to_48.sql`.
## Server checklist
### Catalog upgrade
Does this PR change Hasura Catalog version?
- ✅ Yes
### Metadata
Does this PR add a new Metadata feature?
- ✅ No
### GraphQL
- ✅ No new GraphQL schema is generated
### Breaking changes
- ✅ No Breaking changes
## Changelog
__Component__ : server
__Type__: bugfix
__Product__: community-edition
### Short Changelog
Fixes a metadata defaults serialization bug and introduces a metadata migration to correct data that has been persisted due to the bug.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7034
GitOrigin-RevId: ad7d4f748397a1a607f2c0c886bf0fbbc3f873f2
This PR implements the remaining codecs for table permissions. However the codec for boolean expressions delegates to Aeson instances because Autodocodec doesn't currently have the necessary feature to write a codec for boolean expressions that will reliably parse valid data.
Boolean expressions are objects with keys like `_and`, `_or`, `_exists`, or `<field name>`. The parsing rules for each value depend on the key, so we need to be able to select different codecs for each key. We could do that with an `object` codec, but that doesn't account for the arbitrary field name keys that can be provided. OpenAPI supports object types with "additional properties", but I don't know if we can declare a specific type for those properties. There might or might not be a reasonable path to extending Autodocodec to handle this case.
Ticket: [GDC-585](https://hasurahq.atlassian.net/browse/GDC-585)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6978
GitOrigin-RevId: 0b0dcfd59ebd1d5022ff2ab86dd8d4c6f93bd039
Dependencies seem to get concatenated very often, so let's use a data structure that supports efficient concatenation.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7050
GitOrigin-RevId: 6331963f99f17d1b908a6038318d8c4834cf4dd7
## Description ✍️
This PR adds support to generate query params directly using a kriti template which can be used to flatten a list of parameter arguments as well.
### Changes in the Metadata API
Earlier the `query_params` key inside `request_transform` used to take in an object of key/value pairs where the `key` represents the query parameter name and `value` points to the value of the parameter or a kriti template which could be resolved to the value.
With this PR, we give the user more freedom to generate the complete query string using a kriti template. The `query_params` field can now also take a string, which will be treated as a kriti template. This new change needs to be incorporated in the console and CLI metadata import/export as well.
- [x] CLI: Compatible, no changes required
- [ ] Console
## Changelog ✍️
__Component__ : server
__Type__: feature
__Product__: community-edition
### Short Changelog
use kriti template to generate query param from list of arguments
### Related Issues ✍
https://hasurahq.atlassian.net/browse/GS-243
### Solution and Design ✍
We use a kriti template to generate the complete query parameter string.
| Query Template | Output |
|---|---|
| `{{ concat ([concat({{ range _, x := [\"apple\", \"banana\"] }} \"tags={{x}}&\" {{ end }}), \"flag=smthng\"]) }}`| `tags=apple&tags=banana&flag=smthng` |
| `{{ concat ([\"tags=\", concat({{ range _, x := $body.input }} \"{{x}},\" {{ end }})]) }}` | `tags=apple%2Cbanana%2C` |
### Steps to test and verify ✍
- start HGE and make the following request to `http://localhost:8080/v1/metadata`:
```json
{
"type": "test_webhook_transform",
"args": {
"webhook_url": "http://localhost:3000",
"body": {
"action": {
"name": "actionName"
},
"input": ["apple", "banana"]
},
"request_transform": {
"version": 2,
"url": "{{$base_url}}",
"query_params": "{{ concat ([concat({{ range _, x := $body.input }} \"tags={{x}}&\" {{ end }}), \"flag=smthng\"]) }}",
"template_engine": "Kriti"
}
}
}
```
- you should receive the following as output:
```json
{
"body": {
"action": {
"name": "actionName"
},
"input": [
"apple",
"banana"
]
},
"headers": [],
"method": "GET",
"webhook_url": "http://localhost:3000?tags=apple&tags=banana&flag=smthng"
}
```
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6961
Co-authored-by: Tirumarai Selvan <8663570+tirumaraiselvan@users.noreply.github.com>
GitOrigin-RevId: 712ba038f03009edc3e8eb0435e723304943399a
## Description ✍️
This PR introduces a new feature to enable/disable event triggers during logical replication of table data for PostgreSQL and MS-SQL data sources. We introduce a new field `trigger_on_replication` in the `*_create_event_trigger` metadata API. By default the event triggers will not fire for logical data replication.
## Changelog ✍️
__Component__ : server
__Type__: feature
__Product__: community-edition
### Short Changelog
Add option to enable/disable event triggers on logically replicated tables
### Related Issues ✍
https://github.com/hasura/graphql-engine/issues/8814
https://hasurahq.atlassian.net/browse/GS-252
### Solution and Design
- By default, triggers do **not** fire when the session mode is `replica` in Postgres, so if the `triggerOnReplication` is set to `true` for an event trigger we run the query `ALTER TABLE #{tableTxt} ENABLE ALWAYS TRIGGER #{triggerNameTxt};` so that the trigger fires always irrespective of the `session_replication_role`
- By default, triggers do fire in case of replication in MS-SQL, so if the `triggerOnReplication` is set to `false` for an event trigger we add a `NOT FOR REPLICATION` clause to the SQL when the trigger is created/altered, which sets `is_not_for_replication` for the trigger to `true` so that it does not fire during logical replication.
### Steps to test and verify ✍
- Run hspec integration tests for HGE
## Server checklist ✍
### Metadata ✍
Does this PR add a new Metadata feature?
- ✅ Yes
- Does `export_metadata`/`replace_metadata` support the new metadata added?
- ✅
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6953
Co-authored-by: Puru Gupta <32328846+purugupta99@users.noreply.github.com>
Co-authored-by: Sean Park-Ross <94021366+seanparkross@users.noreply.github.com>
GitOrigin-RevId: 92731328a2bbdcad2302c829f26f9acb33c36135
Mostly trying to avoid tricky `Arrows` syntax, and unnecessary use of the `Hasura.Incremental` framework.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6997
GitOrigin-RevId: 9a2f5883e7e29af164e1581049ae003afec2cbe4
I encountered this dead code while doing other things: it's a type class with a single method which is never called. Deleting the type class allows us to simplify `TableCoreCacheRT` and `TableCacheRT`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7075
GitOrigin-RevId: 121320349c478a93717b0706037553d8406cbfa9
fwiw: I was looking here because ghc-debug showed many closures associated with the Applicative instance,
but defining Monoid/Semigroup by hand and inlining didn't seem to have any effect
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7026
GitOrigin-RevId: 4ad2fd26519da98b2380658d89942c700de4ffa2
We sometimes need to test against cloud databases. Here, we add a Terraform module to start a new AlloyDB cluster and instance, which we can then use for testing purposes.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7002
GitOrigin-RevId: 2d661b5cc6d60e47485ea68b781e13426ed4f097
This test did not work when splitting the metadata and source backends. Fixed mostly by running the relevant SQL using `source_backend.engine`, but I also took the time to clean it up a little, and broke up _test.yaml_ into 3 files.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6957
GitOrigin-RevId: bbca60a8906caba2d0cffd834b3b8595fca058fd
Sometimes this happens, especially in CI. It's alright. We can just leave it lying around and it will be destroyed when the container and associated volume are removed.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7003
GitOrigin-RevId: dcb74920c12341d7a15f9b6ebfe52d0864de4738
This increases the speed of `create_query_collection` and `add_collection_to_allowlist` by a factor ~~10~~ 65, by caching the in-memory GraphQL schema. This speedup also applies more broadly to Metadata changes relating to:
- allowlists
- query collections
- cron triggers
- REST endpoints
- API limits
- metrics config
- GraphQL introspection options
- TLS allow lists
- OpenTelemetry
When is construction of the in-memory GraphQL schema cached between Metadata operations?
Before this PR, **never**! It's rebuilt fully, for every role, on every Metadata operation.
However, there are many Metadata operations that don't influence the GraphQL schema. So we should be caching its construction.
The `Hasura.Incremental` framework allows us to cache such constructions: whenever we have an arrow `Rule m a b`, where `a` is the input to the arrow and `b` the output, we can use the `Inc.cache` combinator to obtain a new arrow which is only re-executed when the input `a` changes in a material way. To test this, `a` needs an `Eq` instance. (Before hasura/graphql-engine-mono#6877, this was a `Cacheable` type class which has now been removed.)
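As a toy illustration of that caching idea (this is not `Hasura.Incremental`'s actual API), re-running an expensive build only when its input changes under `Eq` looks roughly like:

```haskell
import Data.IORef (newIORef, readIORef, writeIORef)

cachedBuild :: Eq a => (a -> IO b) -> IO (a -> IO b)
cachedBuild build = do
  ref <- newIORef Nothing
  pure $ \a -> do
    cached <- readIORef ref
    case cached of
      Just (a', b) | a' == a -> pure b -- input unchanged: reuse the result
      _ -> do
        b <- build a                   -- input changed: rebuild
        writeIORef ref (Just (a, b))
        pure b
```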
We can't simply apply `Inc.cache` to the "Steps 3 and 4" in `buildSchemaCacheRule`, because the inputs (components of `BuildOutputs` such as `SourceCache`) don't have an `Eq` instance.
So the changes to `buildSchemaCacheRule` restructure the code so that the input to "Step 1", namely the Metadata, can be used as a caching key instead, so that `Inc.cache` can be applied to the whole sequence of steps.
That works to cache construction of the GraphQL schema, but it means that now only those Metadata operations that _don't_ influence any of the products of steps 1-4 can use a cached build of the GraphQL schema. The most important intermediate product is `BuildOutputs`. So now the exercise becomes to minimize the amount of stuff stored in `BuildOutputs`, so that as many Metadata operations as possible can be handled outside of the codepath that produces a GraphQL schema.
Per hasura/graphql-engine-mono#6609, the `BuildOutputs` structure is too big, and stores things unnecessarily. Refer to the PR description there for reasoning - the same logic applies to this PR, and simply goes a few steps further. In doing so, it can benefit from hasura/graphql-engine-mono#6765, which allows us to verify at compile time that certain Schema Cache building steps _don't_ generate "Metadata dependencies". If a certain Metadata dependency is never generated, we don't need to handle that case in `deleteMetadataObject`. Thus such intermediate products don't need to be passed through `resolveDependencies`, and thus they don't need to be stored in `BuildOutputs`, and thus their rebuild won't trigger a GraphQL schema rebuild.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6613
GitOrigin-RevId: 27d2e69d3461bd4c32f08febef9995c0369fab3a
What is the `Cacheable` type class about?
```haskell
class Eq a => Cacheable a where
unchanged :: Accesses -> a -> a -> Bool
default unchanged :: (Generic a, GCacheable (Rep a)) => Accesses -> a -> a -> Bool
unchanged accesses a b = gunchanged (from a) (from b) accesses
```
Its only method is an alternative to `(==)`. The added value of `unchanged` (and the additional `Accesses` argument) arises _only_ for one type, namely `Dependency`. Indeed, the `Cacheable (Dependency a)` instance is non-trivial, whereas every other `Cacheable` instance is completely boilerplate (and indeed either generated from `Generic`, or simply `unchanged _ = (==)`). The `Cacheable (Dependency a)` instance is the only one where the `Accesses` argument is not just passed onwards.
The only callsite of the `unchanged` method is in the `ArrowCache (Rule m)` method. That is to say that the `Cacheable` type class is used to decide when we can re-use parts of the schema cache between Metadata operations.
So what is the `Cacheable (Dependency a)` instance about? Normally, the output of a `Rule m a b` is re-used when the new input (of type `a`) is equal to the old one. But sometimes, that's too coarse: it might be that a certain `Rule m a b` only depends on a small part of its input of type `a`. A `Dependency` allows us to spell out what parts of `a` are being depended on, and these parts are recorded as values of types `Access a` in the state `Accesses`.
If the input `a` changes, but not in a way that touches the recorded `Accesses`, then the output `b` of that rule can be re-used without recomputing.
So now you understand _why_ we're passing `Accesses` to the `unchanged` method: `unchanged` is an equality check in disguise that just needs some additional context.
But we don't need to pass `Accesses` as a function argument. We can use the `reflection` package to pass it as type-level context. So the core of this PR is that we change the instance declaration from
```haskell
instance (Cacheable a) => Cacheable (Dependency a) where
```
to
```haskell
instance (Given Accesses, Eq a) => Eq (Dependency a) where
```
and use `(==)` instead of `unchanged`.
If you haven't seen `reflection` before: it's like a `MonadReader`, but it doesn't require a `Monad`.
In order to pass the current `Accesses` value, instead of simply passing the `Accesses` as a function argument, we need to instantiate the `Given Accesses` context. We use the `give` method from the `reflection` package for that.
```haskell
give :: forall r. Accesses -> (Given Accesses => r) -> r
unchanged :: (Given Accesses => Eq a) => Accesses -> a -> a -> Bool
unchanged accesses a b = give accesses (a == b)
```
With these three components in place, we can delete the `Cacheable` type class entirely.
The remainder of this PR is just to remove the `Cacheable` type class and its instances.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6877
GitOrigin-RevId: 7125f5e11d856e7672ab810a23d5bf5ad176e77f
Rather than varying it, let's just use `postgis/postgis` everywhere.
This uses the latest version of PostGIS, in which some of the raster codes have changed. This seems benign (it's just one digit in the hex stream). I can't find the relevant release notes, though.
Also syncs _images.go_ and _databases.yaml_ so we use the same thing where possible.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6903
GitOrigin-RevId: bb5c56f2e7ff69e4c008f1d658850af08c96badc
We currently have a fairly intricate way of running our PostgreSQL and MSSQL integration tests (not the API tests). By splitting them out, we can simplify this a lot. Most prominently, we can rely on Cabal to be our argument parser instead of writing our own.
We can also simplify how they're run in CI. They are currently (weirdly) run alongside the Python integration tests. This breaks them out into their own jobs for better visibility, and to avoid conflating the two.
The changes are as follows:
- The "unit" tests that rely on a running PostgreSQL database are extracted out to a new test directory so they can be run separately.
- Most of the `Main` module comes with them.
- We now refer to these as "integration" tests instead.
- Likewise for the "unit" tests that rely on a running MS SQL Server database. These are a little simpler and we can use `hspec-discover`, with a `SpecHook` to extract the connection string from an environment variable.
- Henceforth, these are the MS SQL Server integration tests.
- New CI jobs have been added for each of these.
- There wasn't actually a job for the MS SQL Server integration tests. It's pretty amazing they still run well.
- The "haskell-tests" CI job, which used to run the PostgreSQL integration tests, has been removed.
- The makefiles and contributing guide have been updated to run these.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6912
GitOrigin-RevId: 67bbe2941bba31793f63d04a9a693779d4463ee1
### Description
This monster of a PR took way too long. As the title suggests, it reduces the schema context carried in the readers to the very strict minimum. In practice, that means that to build a source, we only require:
- the global `SchemaContext`
- the global `SchemaOptions` (soon to be renamed `SchemaSourceOptions`)
- that source's `SourceInfo`
Furthermore, _we no longer carry "default" customization options throughout the schema_. All customization information is extracted from the `SourceInfo`, when required. This prevents an entire category of bugs we had previously encountered, such as parts of the code using uninitialized / unupdated customization info.
In turn, this meant that we could remove the explicit threading of the `SourceInfo` throughout the schema, since it is now always available through the reader context.
Finally, this meant making a few adjustments to relay and actions as well, such as the introduction of a new separate "context" for actions, and a change to how we create some of the action-specific postgres scalar parsers.
I'll highlight with review comments the areas of interest.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6709
GitOrigin-RevId: ea80fddcb24e2513779dd04b0b700a55f0028dd1
- Avoid a few banana brackets `(| ... |)`, often by just using local `let` bindings
- Use proper `Arrows` syntax rather than helpers like `>->`
- Use monadic `do` syntax instead of `Arrows` syntax where possible
- Avoid `traverseA @Maybe`, in favor of a `case`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6751
GitOrigin-RevId: c07b22a1a259db6d135486ec71a716705e280717
When running using the "new" style (with a HGE binary, not a URL), a new PostgreSQL metadata and source database are created for each test. When we get this into CI, this should drastically reduce the flakiness.
I have also enabled parallelization by default when using `run-new.sh`. It's much faster.
I had to basically rewrite _server/tests-py/test_graphql_read_only_source.py_ so that it does two different things depending on how it's run. It's unfortunate, but it should eventually go away.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6879
GitOrigin-RevId: a121b9035f8da3e61a3e36d8b1fbc6ccae918fad
`CollectedInfo` was just an awkward sum type. By using an explicit `Either` instead, we can guarantee at the type level that certain methods only write inconsistencies, or only write dependencies. This is useful, because if we can guarantee that no dependencies are written, then we don't need to run `resolveDependencies` on that part of the Metadata. In other words, we can keep it out of `BuildOutputs`, which greatly benefits performance - see e.g. hasura/graphql-engine-mono#6613.
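A sketch of the resulting shape, with placeholder payload types:

```haskell
import Data.Text (Text)

-- Payload types here are placeholders, not the engine's real ones.
data InconsistentMetadata = InconsistentMetadata Text
data SchemaDependency = SchemaDependency Text

-- An explicit Either lets a step's type say whether it can only emit
-- inconsistencies (Left) or only dependencies (Right).
type CollectedInfo = Either InconsistentMetadata SchemaDependency
```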
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6765
GitOrigin-RevId: 9ce099d2eee2278dbb6e5bea72063e4b6e064b35
This enables sharing the Docker Compose-based database configuration across the Haskell-based API tests and the legacy Python integration tests.
Why? Because we depend on different database versions and I keep running out of disk space. I am far too lazy to buy another disk and set up my operating system _again_.
The files in question are:
- _docker-compose/databases.yaml_, which is the base specification for the databases
- _docker-compose.yml_, used by the API tests locally (and for other manual testing), which extends the above
- _.buildkite/docker-compose-files/test-oss-server-hspec.yml_, used by the API tests in CI, which extends _databases.yaml_
- _server/tests-py/docker-compose.yml_, used by the Python integration tests
The changes are summarized as follows:
1. The following snippets are moved from _docker-compose/databases.yaml_ to _docker-compose.yml_ and _.buildkite/docker-compose-files/test-oss-server-hspec.yml_, as they're not strictly necessary for other forms of testing:
- the fixed port mappings (in the range 65000–65010)
- the PostgreSQL initialization
- the SQL Server initialization
2. Environment variables are used a little more in health checks and initialization scripts, as usernames, passwords, etc. can be overridden.
3. The volumes in _docker-compose/databases.yaml_ are made anonymous (unnamed), and the names are only specified in _docker-compose.yml_. We don't need to do this elsewhere.
- For extra fun, I have removed all named volumes from the CI Docker Compose files, as they seem to be unnecessary.
4. _server/tests-py/docker-compose.yml_ now depends on _docker-compose/databases.yaml_.
- This was the point.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6864
GitOrigin-RevId: f22f2839716f543ce8a62f890da244de7e23abaa
A bunch of configurations are retrieved from the Metadata, then stored in the `BuildOutputs` structure, only to then be forwarded to the `SchemaCache`, with extremely little processing in between.
So this simplifies the build pipeline for some parts of the metadata: just construct those things from `Metadata` directly, and store them in the `SchemaCache` without any intermediate container.
Why did we have the detour via `BuildOutputs` in the first place? Parts of the Metadata (codified by `MetadataObjId`) can generate _metadata inconsistencies_ and/or _schema dependencies_, which are related.
- Metadata inconsistencies are warnings that we show to the user, indicating that there's something wrong with their configuration, and they have to fix it.
- Schema dependencies are an internal mechanism that allow us to build a consistent view of the world. For instance, if we have a relationship from DB tables `books` to `authors`, but the `authors` table is inconsistent (e.g. it doesn't exist in the DB), then we have schema dependencies indicating that. The job of `resolveDependencies` is to then drop the relationship, so that we can at least generate a legal GraphQL schema for `books`.
If we never generate a schema dependency for a certain fragment of Metadata, then there is no reason to call `resolveDependencies` on it, and so there is no reason to store it in `BuildOutputs`.
---
The starting point that allows this refactor is to apply Metadata defaults before it reaches `buildAndCollectInfo`, so that metadata-with-defaults can be used elsewhere.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6609
GitOrigin-RevId: df0c4a7ff9451e10e02a40bf26304b26584ba483
When setting up a resource (typically some kind of web server) for use in tests, we need to remember to tear it down afterwards.
This moves this logic into one place, under the `TestResource` module.
Like `SetupAction`, it encapsulates setup and teardown, and also separates out waiting for the resource to be ready, so we don't accidentally leave it lying around in the case of a healthcheck failure.
Unlike `SetupAction`, it is monadic, and can be composed with other resources. In the future, we may want to adopt this logic for `SetupAction` too rather than using lists.
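As a toy illustration of what a monadic setup/teardown type can look like (this is not the actual `TestResource` API): acquiring a resource yields its value together with its teardown action, and composition runs teardowns in reverse order of acquisition.

```haskell
newtype Resource a = Resource {acquire :: IO (a, IO ())}

instance Functor Resource where
  fmap f (Resource m) = Resource $ do
    (a, release) <- m
    pure (f a, release)

instance Applicative Resource where
  pure a = Resource (pure (a, pure ()))
  rf <*> ra = Resource $ do
    (f, releaseF) <- acquire rf
    (a, releaseA) <- acquire ra
    pure (f a, releaseA >> releaseF)

instance Monad Resource where
  Resource m >>= f = Resource $ do
    (a, releaseA) <- m
    (b, releaseB) <- acquire (f a)
    pure (b, releaseB >> releaseA)

-- Usage sketch; a real implementation would use bracket for exception safety.
withResource :: Resource a -> (a -> IO r) -> IO r
withResource r use = do
  (a, release) <- acquire r
  result <- use a
  release
  pure result
```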
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6806
GitOrigin-RevId: 74e2d76c5c09b8e0fe1cad84c9e77011f5a4d3db
This removes calls to `setup` and `teardown` in favor of `setupTablesAction`.
Because this action untracks and drops tables (at least until we figure out how to make throwaway databases), the teardown phase can fail. I have added a wrapper which logs and discards exceptions as a workaround for now.
In the future, when we can simply drop the database, it will probably be sensible to catch "table already untracked" exceptions specifically and let them slide, while still failing on all other exceptions.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6769
GitOrigin-RevId: 12cb8f81dd6aced892fe83c49b9a0bdbef8cc1ac
Just forcing some of the most numerous thunks (found with -hi profiling); it seems some of these were retaining a significant amount of data.
This can follow the merge of, or supersede, #6679.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6710
GitOrigin-RevId: d0566ee288841e264637231a7f238946aa2e3564
## Description ✍️
This PR aims to improve the developer experience when using a Heroku Postgres instance as a source database. Better error messages and relevant documentation are added as part of this PR.
## Changelog ✍️
__Component__ : server
__Type__: enhancement
__Product__: community-edition
### Short Changelog
Improve DX for heroku integration
### Related Issues ✍
https://hasurahq.atlassian.net/browse/GS-202
### Steps to test and verify ✍
- Add a new heroku postgres instance as DB source in Hasura
- Try adding an event trigger
- Improved error message will be emitted:
```json
{
"arguments": [],
"error": {
"description": null,
"exec_status": "FatalError",
"hint": null,
"message": "pgcrypto can only be created in heroku_ext schema. Hint: You can set \"extensions_schema\" to provide the schema to install the extensions. Refer to the documentation here: https://hasura.io/docs/latest/deployment/postgres-requirements/#pgcrypto-in-pg-search-path",
"status_code": "P0001"
},
"prepared": false,
"statement": "CREATE EXTENSION IF NOT EXISTS pgcrypto SCHEMA public"
}
```
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6630
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sean Park-Ross <94021366+seanparkross@users.noreply.github.com>
GitOrigin-RevId: a46d7c129a4e0378b7f33445f9bda11e0bddbd74
This upgrades the version of Ormolu required by the HGE repository to v0.5.0.1, and reformats all code accordingly.
Ormolu v0.5 reformats code that uses infix operators. This is mostly useful, adding newlines and indentation to make it clear which operators are applied first, but in some cases, it's unpleasant. To make this easier on the eyes, I had to do the following:
* Add a few fixity declarations (search for `infix`)
* Add parentheses to make precedence clear, allowing Ormolu to keep everything on one line
* Rename `relevantEq` to `(==~)` in #6651 and set it to `infix 4`
* Add a few _.ormolu_ files (thanks to @hallettj for helping me get started), mostly for Autodocodec operators that don't have explicit fixity declarations
In general, I think these changes are quite reasonable. They mostly affect indentation.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6675
GitOrigin-RevId: cd47d87f1d089fb0bc9dcbbe7798dbceedcd7d83
Ormolu v0.5 tries to reformat code using operators according to fixity. Unfortunately, it doesn't really understand backticked functions (even when they have an associated `infix` declaration), and so messes up the formatting.
This is probably a bug in Ormolu, but we can work around it by using a symbol operator.
Happy to bikeshed on `==~` (which I am reading as "pretty much equal to"). Please yell at me if you prefer something else.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6651
GitOrigin-RevId: 79af427422194460200b2b48339cdb9ee9b33c33
There are some incremental Metadata API methods that have no good justification for taking so much time to complete. This adds some of them to the CI benchmark suite, so that we can track their performance.
I have a prototype to speed up some of these methods 10x; see hasura/graphql-engine-mono#6613.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6627
GitOrigin-RevId: fecc7f28cae734b4acad68a63cbcdf0a2693d567
This introduces an adhoc operation to the benchmark of `huge_schema`, so that we can track performance of the incremental Metadata API.
This untracks a table that is not referenced by anything else in the `huge_schema` metadata, so that we don't need to cascade any changes. And then it tracks it again.
Benchmarking this will be valuable for working on `Hasura.Incremental`.
Results will start showing up in the benchmark report when this is merged to `main`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6553
GitOrigin-RevId: 65dad4f7a5fe1c230c5def136640bb68f4a4aa9b
`ssl.wrap_socket` is deprecated in favor of `SSLContext.wrap_socket`.
Also throws in a quick speed improvement to _server/tests-py/run.sh_ on x86_64.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6498
GitOrigin-RevId: 7bbe5f86daf45677e2a39cfcfe183794ffcd2954
## Description
This PR allows DC agents to define custom aggregate functions for their scalar types.
### Related Issues
GDC-189
### Solution and Design
We added a new property `aggregate_functions` to the scalar types capabilities. This allows the agent author to specify a set of aggregate functions supported by each scalar type, along with the function's result type.
During GraphQL schema generation, the custom aggregate functions are available via a new method `getCustomAggregateOperators` on the `Backend` type class.
Custom functions are merged with the builtin aggregate functions when building GraphQL schemas for table aggregate fields and for `order_by` operators on array relations.
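For illustration only, the shape of such a capability expressed as a plain map (the real capability and `Backend` method use agent- and backend-specific types):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.HashMap.Strict (HashMap)
import qualified Data.HashMap.Strict as HashMap
import Data.Text (Text)

-- Each scalar type maps to the aggregate functions it supports, and each
-- function maps to its result type.
type AggregateFunctions = HashMap Text (HashMap Text Text)

exampleAggregateFunctions :: AggregateFunctions
exampleAggregateFunctions =
  HashMap.fromList
    [ ("Number", HashMap.fromList [("sum", "Number"), ("stddev", "Number")])
    , ("String", HashMap.fromList [("longest", "String")])
    ]
```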
### Steps to test and verify
- Codec tests for aggregate function capabilities have been added to the unit tests.
- Some custom aggregate operators have been added to the reference agent and are used in a new test in `api-tests`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6199
GitOrigin-RevId: e9c0d1617af93847c1493671fdbb794f573bde0c
Prior to this commit, various definition types representing GraphQL schema internally and the logic which collected a schema from the definition types were in a single module called `Hasura.GraphQL.Schema`. This created cyclic dependencies between `Hasura.GraphQL.Schema` module and `Hasura.GraphQL.Schema.Convert` module.
This is now fixed by:
1. Moving all the definition related types into `Hasura.GraphQL.Schema.Definition` module
1. Moving the logic that collects a GraphQL schema from these types into `Hasura.GraphQL.Schema.Collect`
With these changes, `Hasura.GraphQL.Schema` module just exports both these modules.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6517
GitOrigin-RevId: d5207cf31335aeeddd874ed6f921a17892580b4c
### Description
This small PR develops a bit the existing documentation about remote joins. It adds a new section that details where each piece of the feature is located, and adds two paragraphs detailing some of the implementation details of the execution.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6505
GitOrigin-RevId: 6edd5459e4081cc6c9a80fdc92c2d479dedb2be9
If the tests are run with specific ports assigned to specific services,
set through the environment variables, we continue to use those ports.
We just don't hard-code them now, we pick them up from the environment
variables.
However, if the environment variables are not set, we generate a random
port for each service. This allows us to run multiple tests in parallel
in the future, independently.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6218
GitOrigin-RevId: 3d2a1880bf67544c848951888ce7b4fa1ba379dc
This installs the ODBC Driver 18 for SQL Server in all our shipped Docker images, and updates our tests and documentation accordingly.
This version supports arm64, and therefore can run natively (or via Docker) on macOS on aarch64.
`msodbcsql17` is still installed in production-targeted Docker images so that users do not _have_ to migrate to the new driver.
Nix expressions are packaged for the new driver, as it is not yet available in nixpkgs.
In this version, [the default encryption setting was changed from "no" to "yes"](https://techcommunity.microsoft.com/t5/sql-server-blog/odbc-driver-18-0-for-sql-server-released/ba-p/3169228). In addition, "mandatory" and "optional" were added as synonyms for "yes" and "no" respectively.
I have therefore modified all connection strings in tests to specify `Encrypt=optional` (and changed some from `Encrypt=no`). I chose "optional" rather than "no" because I feel it's more honest; these connection strings will work with or without an encrypted connection.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6241
GitOrigin-RevId: 959f88dd1f271ef06a3616bc46b358f364f6cdfd
The main aims of the PR are:
1. To set up a module structure for the 'remote-schemas' package.
2. To move parts of the remote schema codebase into the new module structure to validate it.
## Notes to the reviewer
Why a PR with large-ish diff?
1. We've been making progress on the MM project but we don't yet know how long it is going to take us to get to the first milestone. To understand this better, we need to figure out the unknowns as soon as possible. Hence I've taken a stab at the first two items in the [end-state](https://gist.github.com/0x777/ca2bdc4284d21c3eec153b51dea255c9) document to figure out the unknowns. Unsurprisingly, there are a bunch of issues that we haven't discussed earlier. These are documented in the 'open questions' section.
1. The diff is large, but it is only code being moved around, and I've added a section that documents how things are moved. In addition, there are a fair number of PR comments to help with the review process.
## Changes in the PR
### Module structure
Sets up the module structure as follows:
```
Hasura/
RemoteSchema/
Metadata/
Types.hs
SchemaCache/
Types.hs
Permission.hs
RemoteRelationship.hs
Build.hs
MetadataAPI/
Types.hs
Execute.hs
```
### 1. Types representing metadata are moved
Types that capture metadata information (currently scattered across several RQL modules) are moved into `Hasura.RemoteSchema.Metadata.Types`.
- This new module only depends on very 'core' modules such as `Hasura.Session` for the notion of roles and `Hasura.Incremental` for the `Cacheable` typeclass.
- The requirement on database modules is avoided by generalizing the remote schema metadata to accept an arbitrary 'r' for a remote relationship definition (see the sketch below).
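As a purely illustrative sketch of that generalization (the field names are hypothetical, not the engine's actual record):
```haskell
import Data.Text (Text)

-- Stand-in for the engine's remote schema name type.
newtype RemoteSchemaName = RemoteSchemaName Text

-- Hypothetical shape of the generalized metadata: the remote relationship
-- definition is an arbitrary type parameter 'r', so this module never has
-- to mention database-specific relationship types.
data RemoteSchemaMetadataG r = RemoteSchemaMetadataG
  { _rsmName :: RemoteSchemaName,
    _rsmRemoteRelationships :: [r]
  }
```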
### 2. SchemaCache related types and build logic have been moved
Types that represent remote schemas information in SchemaCache are moved into `Hasura.RemoteSchema.SchemaCache.Types`.
Similar to `H.RS.Metadata.Types`, this module depends on 'core' modules except for `Hasura.GraphQL.Parser.Variable`. It has something to do with remote relationships but I haven't spent time looking into it. The validation of 'remote relationships to remote schema' is also something that needs to be looked at.
This rips out the logic that builds remote schemas' SchemaCache information from the monolithic `buildSchemaCacheRule` and moves it into `Hasura.RemoteSchema.SchemaCache.Build`. Further, the `.SchemaCache.Permission` and `.SchemaCache.RemoteRelationship` modules have been created from the existing modules that capture the schema cache building logic for those two components.
This was a fair amount of work. Currently, on main, a remote schema's SchemaCache information is built in two phases: in the first phase, 'permissions' and 'remote relationships' are ignored, and in the second phase they are filled in.
While remote relationships can only be resolved after partially resolving sources and other remote schemas, the same isn't true for permissions. Further, most of the work that is done to resolve remote relationships can be moved to the first phase so that the second phase can be a very simple traversal.
This is the approach that was taken: resolve permissions, and as much of the remote relationship information as possible, in the first phase.
### 3. Metadata APIs related types and build logic have been moved
The types that represent remote schema related metadata APIs and the execution logic have been moved to `Hasura.RemoteSchema.MetadataAPI.Types` and `.Execute` modules respectively.
## Open questions:
1. `Hasura.RemoteSchema.Metadata.Types` is so called because I was hoping that all of the metadata-related APIs of remote schemas could be brought in at `Hasura.RemoteSchema.Metadata.API`. However, as the metadata APIs depended on functions from the `SchemaCache` module (see [1](ceba6d6226/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs (L55)) and [2](ceba6d6226/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs (L91))), it made more sense to create a separate top-level module for `MetadataAPI`s.
Maybe we can just have `Hasura.RemoteSchema.Metadata` and get rid of the extra nesting or have `Hasura.RemoteSchema.Metadata.{Core,Permission,RemoteRelationship}` if we want to break them down further.
1. `buildRemoteSchemas` in `H.RS.SchemaCache.Build` has the following type:
```haskell
buildRemoteSchemas ::
( ArrowChoice arr,
Inc.ArrowDistribute arr,
ArrowWriter (Seq CollectedInfo) arr,
Inc.ArrowCache m arr,
MonadIO m,
HasHttpManagerM m,
Inc.Cacheable remoteRelationshipDefinition,
ToJSON remoteRelationshipDefinition,
MonadError QErr m
) =>
Env.Environment ->
( (Inc.Dependency (HashMap RemoteSchemaName Inc.InvalidationKey), OrderedRoles),
[RemoteSchemaMetadataG remoteRelationshipDefinition]
)
`arr` HashMap RemoteSchemaName (PartiallyResolvedRemoteSchemaCtxG remoteRelationshipDefinition, MetadataObject)
```
Note the dependence on `CollectedInfo` which is defined as
```haskell
data CollectedInfo
= CIInconsistency InconsistentMetadata
| CIDependency
MetadataObject
-- ^ for error reporting on missing dependencies
SchemaObjId
SchemaDependency
deriving (Eq)
```
This pretty much means that remote schemas are dependent on types from databases, actions, and so on.
How do we fix this? Maybe introduce a typeclass such as `ArrowCollectRemoteSchemaDependencies` which is defined in `Hasura.RemoteSchema` and then implemented in graphql-engine?
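For concreteness, one possible shape of such a class (purely illustrative; the engine types are replaced with empty stand-ins here):
```haskell
-- Stand-ins for the engine types mentioned above.
data MetadataObject = MetadataObject
data SchemaObjId = SchemaObjId
data SchemaDependency = SchemaDependency
data InconsistentMetadata = InconsistentMetadata

-- Hypothetical typeclass: remote schema build code reports inconsistencies
-- and dependencies through this interface, and graphql-engine provides the
-- instance that translates them into its CollectedInfo type.
class ArrowCollectRemoteSchemaDependencies arr where
  recordInconsistency :: arr InconsistentMetadata ()
  recordDependency :: arr (MetadataObject, SchemaObjId, SchemaDependency) ()
```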
1. The dependency on `buildSchemaCacheFor` in `.MetadataAPI.Execute`, which has the following signature:
```haskell
buildSchemaCacheFor ::
(QErrM m, CacheRWM m, MetadataM m) =>
MetadataObjId ->
MetadataModifier ->
m ()
```
This can be easily resolved if we restrict what the metadata APIs are allowed to do. Currently, they have unfettered access to modify the SchemaCache (via the `CacheRWM` constraint):
```haskell
runAddRemoteSchema ::
( QErrM m,
CacheRWM m,
MonadIO m,
HasHttpManagerM m,
MetadataM m,
Tracing.MonadTrace m
) =>
Env.Environment ->
AddRemoteSchemaQuery ->
m EncJSON
```
If this is instead changed to restrict remote schema APIs to only modify remote schema metadata (while still having access to the remote schemas part of the schema cache), this dependency is completely removed:
```haskell
runAddRemoteSchema ::
( QErrM m,
MonadIO m,
HasHttpManagerM m,
MonadReader RemoteSchemasSchemaCache m,
MonadState RemoteSchemaMetadata m,
Tracing.MonadTrace m
) =>
Env.Environment ->
AddRemoteSchemaQuery ->
m RemoteSchemaMetadataObjId
```
The idea is that the core graphql-engine would call these functions and then call
`buildSchemaCacheFor`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6291
GitOrigin-RevId: 51357148c6404afe70219afa71bd1d59bdf4ffc6
We use a helper service to start a webhook-based authentication service for some tests. This moves the initialization of the service out of _test-server.sh_ and into the Python test harness, as a fixture.
In order to do this, I had to make a few changes. The main deviation is that we no longer run _all_ tests against an HGE with this authentication service, just a few (those in _test_webhook.py_). Because this reduced coverage, I have added some more tests there, which actually cover some areas not exercised elsewhere (mainly trying to use webhook credentials to talk to an admin-only endpoint).
The webhook service can run both with and without TLS, and decide whether it's necessary to skip one of these based on the arguments passed and how HGE is started, according to the following logic:
* If a TLS CA certificate is passed in, it will run with TLS, otherwise it will skip it.
* If HGE was started externally and a TLS certificate is provided, it will skip running without TLS, as it will assume that HGE was configured to talk to a webhook over HTTPS.
* Some tests should only be run with TLS; this is marked with a `tls_webhook_server` marker.
* Some tests should only be run _without_ TLS; this is marked with a `no_tls_webhook_server` marker.
The actual parameterization of the webhook service configuration is done through test subclasses, because normal pytest parameterization doesn't work with the `hge_fixture_env` hack that we use. Because `hge_fixture_env` is not a sanctioned way of conveying data between fixtures (and, unfortunately, there isn't a sanctioned way of doing this when the fixtures in question may not know about each other directly), parameterizing the `webhook_server` fixture doesn't actually parameterize `hge_server` properly. Subclassing forces this to work correctly.
The certificate generation is moved to a Python fixture, so that we don't have to revoke the CA certificate for _test_webhook_insecure.py_; we can just generate a bogus certificate instead. The CA certificate is still generated in the _test-server.sh_ script, as it needs to be installed into the OS certificate store.
Interestingly, the CA certificate installation wasn't actually working, because the certificates were written to the wrong location. This didn't cause any failures, as we weren't actually testing this behavior. This is now fixed with the other changes.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6363
GitOrigin-RevId: 0f277d374daa64f657257ed2a4c2057c74b911db