We were previously using the Docker Compose file in the root directory
for manual testing _and_ the server API tests.
This splits them so we can e.g. add Yugabyte for easy manual testing.
In the future, this will also allow us to use ephemeral ports for API
test databases, while keeping the fixed ports for manual testing.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7524
GitOrigin-RevId: 7244e296b0ed0ace9782b6f44f321933a9d9a49d
## Description
This PR updates the JWK refresh thread to poll every second, instead of the previous behaviour where the thread slept until the expiry time given in the `Cache-Control`/`Expires` response headers.
## Motivation
As part of dynamically updating environment variables on cloud without restarting user projects, we want a mechanism that makes HGE aware of any changes in the user configuration, by updating a shared variable that relevant threads/core functionality can read before they execute.
This requires the threads to be polling in nature, so that any change in the user config is captured before their code runs and the appropriate behaviour follows. The JWK update thread used to sleep for the duration given in the `Cache-Control` or `Expires` headers, which left it unaware of any changes to the user config during that period, hence requiring a restart to propagate the new changes.
To solve this, the JWK update thread now polls every second for a change in `AuthMode` (read from a shared variable, in subsequent changes implementing the dynamic env var update feature) and updates the JWK accordingly, so that it never uses stale configuration and works without an HGE restart.
### Related Issues
https://hasurahq.atlassian.net/browse/GS-300
### Solution and Design
- We store the expiry time in the `JWTCtx`
- On every poll, check whether the current time exceeds the expiry time; if so, call the JWK URL to fetch the new JWK and expiry.
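A minimal sketch of what such a loop might look like (all names here are illustrative stand-ins, not the actual HGE implementation; this version also refetches when no expiry was recorded, which is an assumption):

```haskell
import Control.Concurrent (threadDelay)
import Control.Monad (forever, when)
import Data.IORef (IORef, readIORef, writeIORef)
import Data.Time.Clock (UTCTime, getCurrentTime)

-- Hypothetical stand-in for the JWK set and expiry stored in JWTCtx.
data JwkState k = JwkState
  { jwkSet    :: k
  , jwkExpiry :: Maybe UTCTime  -- parsed from Cache-Control/Expires
  }

-- Poll every second; refetch only once the stored expiry has passed.
jwkPollLoop :: IORef (JwkState k) -> IO (k, Maybe UTCTime) -> IO ()
jwkPollLoop ref fetchJwk = forever $ do
  now <- getCurrentTime
  st  <- readIORef ref
  when (maybe True (now >=) (jwkExpiry st)) $ do
    (newJwk, newExpiry) <- fetchJwk
    writeIORef ref (JwkState newJwk newExpiry)
  threadDelay 1000000  -- one second
```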
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7177
Co-authored-by: Krushan Bauva <31391329+krushanbauva@users.noreply.github.com>
Co-authored-by: Anon Ray <616387+ecthiender@users.noreply.github.com>
GitOrigin-RevId: bc1e44a8c3823d7554167a7f01c3ce085646cedb
Hooks up event trigger codecs from #7237. This required fixing a problem where some backend types implemented `defaultTriggerOnReplication` with `error`, which caused the server to crash when evaluating those for default values in codecs. The changes here add a type family to `Backend` called `XEventTriggers` that signals backend support for event triggers, and change the type of `defaultTriggerOnReplication` from `TriggerOnReplication` to `Maybe (XEventTriggers b, TriggerOnReplication)` so that it can only be implemented with a `Just` value if `XEventTriggers b` is inhabited. This emulates some existing type families in `Backend`. (Thanks to @daniel-chambers for this suggestion!)
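Condensed, the pattern looks roughly like this (the `Backend` class and backend types are heavily simplified here for illustration):

```haskell
{-# LANGUAGE AllowAmbiguousTypes #-}
{-# LANGUAGE TypeFamilies #-}

import Data.Void (Void)

data TriggerOnReplication = TOREnable | TORDisable

class Backend b where
  -- Inhabited for backends with event trigger support, Void otherwise.
  type XEventTriggers b
  -- Call with TypeApplications, e.g. defaultTriggerOnReplication @Postgres.
  defaultTriggerOnReplication :: Maybe (XEventTriggers b, TriggerOnReplication)

data Postgres

instance Backend Postgres where
  type XEventTriggers Postgres = ()
  defaultTriggerOnReplication = Just ((), TORDisable)

data DataConnector

instance Backend DataConnector where
  type XEventTriggers DataConnector = Void
  -- A Just here would require producing a Void value, so this can
  -- only ever be Nothing: unsupported backends can't lie.
  defaultTriggerOnReplication = Nothing
```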
I used the implementation of `defaultTriggerOnReplication` as a signal for event triggers support to prune the Metadata API so that event trigger fields will not appear in the OpenAPI spec for backend types that do not support event triggers. The codec version of the API will also not emit or accept those fields for those backend types. I think I could use `Typeable` to test whether `XEventTriggers` is `Void` instead of testing whether `defaultTriggerOnReplication` is `Nothing`. But the codec implementation will crash anyway if `defaultTriggerOnReplication` is `Nothing`.
I checked to make sure that graphql-engine-pro still compiles.
Ticket: https://hasurahq.atlassian.net/browse/GDC-521
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7295
GitOrigin-RevId: 2b2dd44291513266107ca25cf330319bf53a8b66
This rewrites the last couple of Python tests that were failing when run with a separate HGE binary per test class. The changes are as follows:
1. The event triggers tests, naming conventions tests, and subscriptions tests all generate a new source DB per test, so they can run in parallel.
2. The scheduled triggers tests use the correct URL for the trigger service when the port is generated randomly.
3. Whitespace and trailing commas are added to the scheduled triggers tests.
4. Support for SQL Server is added to _hge.py_ so the naming conventions test that runs on SQL Server passes. (The other SQL Server tests do not pass and we're not going to bother with them for now.)
5. Container names are fixed in _run.sh_.
6. _run.sh_ and _run-new.sh_ don't pull images explicitly as it's annoying when running tests a lot. If you want to pull the latest versions, just run `docker compose pull` from the _server/tests-py_ directory, or the root directory. (If you don't have the images at all, they'll still be pulled automatically.)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7350
GitOrigin-RevId: db58f310f017b2a0884fcf61ccc56d15583f99bd
Adds a bunch of tests to the _resource-pool_ code to try to track down a bug.
Not surprisingly, all tests pass, which means that this didn't help. I still think it's worth keeping them.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7330
GitOrigin-RevId: 6d4deb9af5b192b3a0aa34ac0751d28e12b22b48
We currently let the garbage collector and/or the operating system clean up our mess. This is mostly fine in production (kind of) but a problem when we want to start many HGE servers in parallel for testing purposes.
Shutting them down should, in theory, ease the load.
There is more work to be done in the API test suite before this is very helpful. Right now the test suite actually runs the finalizers on the server context straight away and then uses the leaked resources. As there's no way to actually "close" a connection pool, it keeps working regardless. If we wanted to be strict about this we might want to add a "closed" flag to `Data.Pool` which would cause an exception on `withResource` after closing it.
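A sketch of that idea as a wrapper around `Data.Pool` (not a change to `Data.Pool` itself; the check-then-use here is not race-free, so a real implementation would need to be more careful):

```haskell
import Control.Exception (Exception, throwIO)
import Data.IORef (IORef, newIORef, readIORef, writeIORef)
import Data.Pool (Pool, destroyAllResources, withResource)

data PoolClosed = PoolClosed deriving (Show)

instance Exception PoolClosed

-- A pool paired with a "closed" flag.
data ClosablePool a = ClosablePool
  { cpPool   :: Pool a
  , cpClosed :: IORef Bool
  }

mkClosablePool :: Pool a -> IO (ClosablePool a)
mkClosablePool pool = ClosablePool pool <$> newIORef False

-- Like withResource, but throws once the pool has been closed.
withResource' :: ClosablePool a -> (a -> IO r) -> IO r
withResource' cp f = do
  closed <- readIORef (cpClosed cp)
  if closed then throwIO PoolClosed else withResource (cpPool cp) f

closePool :: ClosablePool a -> IO ()
closePool cp = do
  writeIORef (cpClosed cp) True
  destroyAllResources (cpPool cp)
```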
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7299
GitOrigin-RevId: ba02f96c7b5b06ba3ba7080a5583a56cb0efcfa7
Codecs for event triggers, including webhook transforms. These are not hooked into the higher-up table metadata codec yet because some backend implementations implement event triggers with `error` which causes an error when codecs are evaluated. I plan to follow up with another PR to resolve that.
Ticket: https://hasurahq.atlassian.net/browse/GDC-585
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7237
GitOrigin-RevId: 8ce40fe6fedcf8b109d6ca50a505333df855a8ce
We no longer support this and therefore don't run tests against it.
This also refactors the code a little so it doesn't have to skip running a PostgreSQL-specific test against MS SQL Server.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7201
GitOrigin-RevId: 307c2ab0052162c012f7b1c55866b57f2fa6d9a6
Generate more Metadata Inconsistencies instead of startup failures. Specifically this means that
- errors retrieving the main query of an executable GraphQL document, and
- errors during fragment inlining
no longer fail irrecoverably.
This also makes more parts of `buildSchemaCacheRule` into pure code, which is always nice.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7234
GitOrigin-RevId: aebf636c2fb1aad1c2df9a37f7d0b67c1ee40c42
Context: this is foundation work before we change how the server chooses whether or not to compress.
part of effort: #5518
-----
Prior to this change it was difficult to understand how the functionality in this module related to the semantics of Accept-Encoding. We also didn't correctly handle directives with qvalues.
After this change, certain technical infelicities are called out without modifying the behavior of the server; for instance, we continue to fall back to identity (no compression) in the case where technically we're supposed to return 406, and we also continue to treat `*` conservatively as meaning “use no compression”.
The only external change here is that `gzip;q=x.y` now results in a gzipped response.
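For illustration, a simplified sketch of qvalue-aware selection (not the server's actual module; it ignores `*` and most whitespace edge cases):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.List (sortOn)
import Data.Ord (Down (..))
import Data.Text (Text)
import qualified Data.Text as T
import qualified Data.Text.Read as TR

-- Parse "gzip;q=0.8" into a coding and its qvalue (default 1.0).
parseDirective :: Text -> (Text, Double)
parseDirective t =
  case T.splitOn ";q=" (T.strip t) of
    [coding, q] | Right (qv, _) <- TR.double q -> (T.strip coding, qv)
    _ -> (T.strip t, 1.0)

-- Pick the client's most-preferred coding that we support; q=0 means
-- the client refuses that coding entirely.
chooseEncoding :: [Text] -> Text -> Maybe Text
chooseEncoding supported header =
  case sortOn (Down . snd) acceptable of
    (c, _) : _ -> Just c
    []         -> Nothing
  where
    acceptable =
      [ (c, q)
      | (c, q) <- map parseDirective (T.splitOn "," header)
      , q > 0
      , c `elem` supported
      ]
```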
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7213
GitOrigin-RevId: 1910ffd70d29f1ab8825c601f1bd998be70ceeeb
`toLazyByteString` is a little deficient in two ways:
- It allocates relatively large chunks (4KB + 32KB + 32KB, etc.), which is wasteful for small ByteStrings
- It shrinks each chunk (copying the data to a new chunk of exactly the right size) if it's not more than half filled. If we're running the builder right before we send it over the wire, this copy is totally extraneous (we simply end up with more work for the next GC)
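Both deficiencies can be addressed with the lower-level API in `Data.ByteString.Builder.Extra`, roughly like this (chunk sizes illustrative):

```haskell
import Data.ByteString.Builder (Builder)
import Data.ByteString.Builder.Extra (toLazyByteStringWith, untrimmedStrategy)
import qualified Data.ByteString.Lazy as BL

-- Start with a small first chunk and skip the trimming copy entirely:
-- fine when the result is written to the wire and discarded right away.
runBuilderForWire :: Builder -> BL.ByteString
runBuilderForWire = toLazyByteStringWith (untrimmedStrategy 1024 32768) BL.empty
```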
part of the effort: https://github.com/hasura/graphql-engine-mono/issues/5518
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7187
GitOrigin-RevId: b499cd49c33da6cfee96be629a36b5c812486e39
## Description
There is a bug in the metadata defaults code, see [the original PR](https://github.com/hasura/graphql-engine-mono/pull/6286).
Steps to reproduce this issue:
* Start a new HGE project
* Start HGE with a defaults argument: `HASURA_GRAPHQL_LOG_LEVEL=debug cabal run exe:graphql-engine -- serve --enable-console --console-assets-dir=./console/static/dist --metadata-defaults='{"backend_configs": {"dataconnector": {"mongo": {"display_name": "BONGOBB", "uri": "http://localhost:8123"}}}}'`
* Add a source (doesn't need to be related to the defaults)
* Export metadata
* See that the defaults are present in the exported metadata
## Related Issues
* Github Issue: https://github.com/hasura/graphql-engine/issues/9237
* Jira: https://hasurahq.atlassian.net/browse/GDC-647
* Original PR: https://github.com/hasura/graphql-engine-mono/pull/6286
## Solution
* The check for whether defaults should be included in metadata API operations has been extended to also cover updates
* Metadata inconsistencies have been hidden for `/capabilities` calls on startup
## TODO
* [x] Fix bug
* [x] Write tests
* [x] OSS Metadata Migration to correct persisted data - `server/src-rsr/migrations/47_to_48.sql`
* [x] Cloud Metadata Migration - `pro/server/res/cloud/migrations/6_to_7.sql`
* [x] Bump Catalog Version - `server/src-rsr/catalog_version.txt`
* [x] Update Catalog Versions - `server/src-rsr/catalog_versions.txt` (This will be done by Infra when creating a release)
* [x] Log connection error as it occurs (already being logged; requires `--enabled-log-types startup,webhook-log,websocket-log,http-log,data-connector-log`)
* [x] Don't mark metadata inconsistencies for this call.
## Questions
* [ ] Does the `pro/server/res/cloud/migrations/6_to_7.sql` cover the cloud scenarios?
* [ ] Should we have `SET search_path` in migrations?
* [x] What should be in `server/src-rsr/catalog_versions.txt`?
## Testing
To test the solution locally run:
> docker compose up -d
and
> cabal run -- exe:api-tests --skip BigQuery --skip SQLServer --skip '/Test.API.Explain/Postgres/'
## Solution
In `runMetadataQuery` in `server/src-lib/Hasura/Server/API/Metadata.hs`:
```diff
- if (exportsMetadata _rqlMetadata)
+ if (exportsMetadata _rqlMetadata || queryModifiesMetadata _rqlMetadata)
```
This ensures that defaults aren't present in operations that serialise metadata.
Note: You might think that `X_add_source` would need the defaults to be present to add a source that references the defaults, but since the resolution occurs in the schema-cache building phase, the defaults can be excluded for the metadata modifications required for `X_add_source`.
In addition to the code-change, a metadata migration has been introduced in order to clean up serialised defaults.
The following scenarios need to be considered for both OSS and Cloud:
* The user has not had defaults serialised
* The user has had the defaults serialised and no other backends configured
* The user has had the defaults serialised and has also configured other backends
We want to remove as much of the metadata as possible without any user-specified data and this should be reflected in migration `server/src-rsr/migrations/47_to_48.sql`.
## Server checklist
### Catalog upgrade
Does this PR change Hasura Catalog version?
- ✅ Yes
### Metadata
Does this PR add a new Metadata feature?
- ✅ No
### GraphQL
- ✅ No new GraphQL schema is generated
### Breaking changes
- ✅ No Breaking changes
## Changelog
__Component__ : server
__Type__: bugfix
__Product__: community-edition
### Short Changelog
Fixes a metadata defaults serialization bug and introduces a metadata migration to correct data that has been persisted due to the bug.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7034
GitOrigin-RevId: ad7d4f748397a1a607f2c0c886bf0fbbc3f873f2
This PR implements the remaining codecs for table permissions. However, the codec for boolean expressions delegates to Aeson instances, because Autodocodec doesn't currently have the necessary feature to write a codec for boolean expressions that will reliably parse valid data.
Boolean expressions are objects with keys like `_and`, `_or`, `_exists`, or `<field name>`. The parsing rules for each value depend on the key, so we need to be able to select different codecs for each key. We could do that with an `object` codec, but that doesn't account for the arbitrary field name keys that can be provided. OpenAPI supports object types with "additional properties", but I don't know if we can declare a specific type for those properties. There might or might not be a reasonable path to extending Autodocodec to handle this case.
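To make the difficulty concrete, here is a toy version of the shape in question (the real type is much richer; assumes aeson ≥ 2). The last case is the arbitrary-field-name key that a fixed `object` codec can't describe:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (FromJSON (..), Value, withObject)
import Data.Aeson.Key (toText)
import qualified Data.Aeson.KeyMap as KM
import Data.Text (Text)

data BoolExp
  = BoolAnd [BoolExp]
  | BoolOr [BoolExp]
  | BoolExists Value     -- payload elided
  | BoolField Text Value -- arbitrary field name as the key

instance FromJSON BoolExp where
  parseJSON = withObject "BoolExp" $ \o ->
    case KM.toList o of
      [("_and", v)]    -> BoolAnd <$> parseJSON v
      [("_or", v)]     -> BoolOr <$> parseJSON v
      [("_exists", v)] -> pure (BoolExists v)
      [(k, v)]         -> pure (BoolField (toText k) v)
      _                -> fail "expected exactly one key"
```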
Ticket: [GDC-585](https://hasurahq.atlassian.net/browse/GDC-585)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6978
GitOrigin-RevId: 0b0dcfd59ebd1d5022ff2ab86dd8d4c6f93bd039
Dependencies seem to get concatenated very often, so let's use a data structure that supports efficient concatenation.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7050
GitOrigin-RevId: 6331963f99f17d1b908a6038318d8c4834cf4dd7
## Description ✍️
This PR adds support for generating query params directly using a Kriti template, which can also be used to flatten a list of parameter arguments.
### Changes in the Metadata API
Earlier, the `query_params` key inside `request_transform` took an object of key/value pairs, where the key is the query parameter name and the value is either the parameter's value or a Kriti template that resolves to it.
With this PR, the user has more freedom to generate the complete query string using a Kriti template: `query_params` can now also take a string, which is interpreted as a Kriti template. This new change needs to be incorporated into the Console and the CLI metadata import/export as well.
- [x] CLI: Compatible, no changes required
- [ ] Console
## Changelog ✍️
__Component__ : server
__Type__: feature
__Product__: community-edition
### Short Changelog
Use a Kriti template to generate the query param string from a list of arguments
### Related Issues ✍
https://hasurahq.atlassian.net/browse/GS-243
### Solution and Design ✍
We use a Kriti template to generate the complete query parameter string.
| Query Template | Output |
|---|---|
| `{{ concat ([concat({{ range _, x := [\"apple\", \"banana\"] }} \"tags={{x}}&\" {{ end }}), \"flag=smthng\"]) }}`| `tags=apple&tags=banana&flag=smthng` |
| `{{ concat ([\"tags=\", concat({{ range _, x := $body.input }} \"{{x}},\" {{ end }})]) }}` | `tags=apple%2Cbanana%2C` |
### Steps to test and verify ✍
- start HGE and make the following request to `http://localhost:8080/v1/metadata`:
```json
{
"type": "test_webhook_transform",
"args": {
"webhook_url": "http://localhost:3000",
"body": {
"action": {
"name": "actionName"
},
"input": ["apple", "banana"]
},
"request_transform": {
"version": 2,
"url": "{{$base_url}}",
"query_params": "{{ concat ([concat({{ range _, x := $body.input }} \"tags={{x}}&\" {{ end }}), \"flag=smthng\"]) }}",
"template_engine": "Kriti"
}
}
}
```
- you should receive the following as output:
```json
{
"body": {
"action": {
"name": "actionName"
},
"input": [
"apple",
"banana"
]
},
"headers": [],
"method": "GET",
"webhook_url": "http://localhost:3000?tags=apple&tags=banana&flag=smthng"
}
```
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6961
Co-authored-by: Tirumarai Selvan <8663570+tirumaraiselvan@users.noreply.github.com>
GitOrigin-RevId: 712ba038f03009edc3e8eb0435e723304943399a
## Description ✍️
This PR introduces a new feature to enable/disable event triggers during logical replication of table data for PostgreSQL and MS-SQL data sources. We introduce a new field `trigger_on_replication` in the `*_create_event_trigger` metadata API. By default, event triggers will not fire during logical data replication.
## Changelog ✍️
__Component__ : server
__Type__: feature
__Product__: community-edition
### Short Changelog
Add option to enable/disable event triggers on logically replicated tables
### Related Issues ✍
https://github.com/hasura/graphql-engine/issues/8814
https://hasurahq.atlassian.net/browse/GS-252
### Solution and Design
- By default, triggers do **not** fire when the session mode is `replica` in Postgres, so if `triggerOnReplication` is set to `true` for an event trigger, we run the query `ALTER TABLE #{tableTxt} ENABLE ALWAYS TRIGGER #{triggerNameTxt};` so that the trigger always fires, irrespective of the `session_replication_role`
- By default, triggers do fire during replication in MS-SQL, so if `triggerOnReplication` is set to `false` for an event trigger, we add a `NOT FOR REPLICATION` clause to the SQL when the trigger is created/altered. This sets `is_not_for_replication` to `true` for the trigger, so it does not fire during logical replication.
### Steps to test and verify ✍
- Run hspec integration tests for HGE
## Server checklist ✍
### Metadata ✍
Does this PR add a new Metadata feature?
- ✅ Yes
- Does `export_metadata`/`replace_metadata` support the new metadata added?
- ✅
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6953
Co-authored-by: Puru Gupta <32328846+purugupta99@users.noreply.github.com>
Co-authored-by: Sean Park-Ross <94021366+seanparkross@users.noreply.github.com>
GitOrigin-RevId: 92731328a2bbdcad2302c829f26f9acb33c36135
Mostly trying to avoid tricky `Arrows` syntax, and unnecessary use of the `Hasura.Incremental` framework.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6997
GitOrigin-RevId: 9a2f5883e7e29af164e1581049ae003afec2cbe4
I encountered this dead code while doing other things: it's a type class with a single method which is never called. Deleting the type class allows us to simplify `TableCoreCacheRT` and `TableCacheRT`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7075
GitOrigin-RevId: 121320349c478a93717b0706037553d8406cbfa9
FWIW: I was looking here because ghc-debug showed many closures associated with the `Applicative` instance, but defining `Monoid`/`Semigroup` by hand and inlining didn't seem to have any effect.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7026
GitOrigin-RevId: 4ad2fd26519da98b2380658d89942c700de4ffa2
We sometimes need to test against cloud databases. Here, we add a Terraform module to start a new AlloyDB cluster and instance, which we can then use for testing purposes.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7002
GitOrigin-RevId: 2d661b5cc6d60e47485ea68b781e13426ed4f097
This test did not work when splitting the metadata and source backends. Fixed mostly by running the relevant SQL using `source_backend.engine`, but I also took the time to clean it up a little, and broke up _test.yaml_ into 3 files.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6957
GitOrigin-RevId: bbca60a8906caba2d0cffd834b3b8595fca058fd
Sometimes this happens, especially in CI. It's alright. We can just leave it lying around and it will be destroyed when the container and associated volume are removed.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7003
GitOrigin-RevId: dcb74920c12341d7a15f9b6ebfe52d0864de4738
This increases the speed of `create_query_collection` and `add_collection_to_allowlist` by a factor ~~10~~ 65, by caching the in-memory GraphQL schema. This speedup also applies more broadly to Metadata changes relating to:
- allowlists
- query collections
- cron triggers
- REST endpoints
- API limits
- metrics config
- GraphQL introspection options
- TLS allow lists
- OpenTelemetry
When is construction of the in-memory GraphQL schema cached between Metadata operations?
Before this PR, **never**! It's rebuilt fully, for every role, on every Metadata operation.
However, there are many Metadata operations that don't influence the GraphQL schema. So we should be caching its construction.
The `Hasura.Incremental` framework allows us to cache such constructions: whenever we have an arrow `Rule m a b`, where `a` is the input to the arrow and `b` the output, we can use the `Inc.cache` combinator to obtain a new arrow which is only re-executed when the input `a` changes in a material way. To test this, `a` needs an `Eq` instance. (Before hasura/graphql-engine-mono#6877, this was a `Cacheable` type class which has now been removed.)
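As a first approximation of what `Inc.cache` provides, here is a minimal (non-arrow) model of the idea; the real `Hasura.Incremental` version is arrow-based and far more granular:

```haskell
import Data.IORef (newIORef, readIORef, writeIORef)

-- Re-run an action only when its input differs (by Eq) from last time.
cacheByEq :: Eq a => (a -> IO b) -> IO (a -> IO b)
cacheByEq run = do
  ref <- newIORef Nothing
  pure $ \a -> do
    prev <- readIORef ref
    case prev of
      Just (a', b) | a' == a -> pure b  -- input unchanged: reuse output
      _ -> do
        b <- run a
        writeIORef ref (Just (a, b))
        pure b
```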
We can't simply apply `Inc.cache` to the "Steps 3 and 4" in `buildSchemaCacheRule`, because the inputs (components of `BuildOutputs` such as `SourceCache`) don't have an `Eq` instance.
So the changes to `buildSchemaCacheRule` restructure the code so that the input to "Step 1", namely the Metadata, can be used as a caching key instead, so that `Inc.cache` can be applied to the whole sequence of steps.
That works to cache construction of the GraphQL schema, but it means that now only those Metadata operations that _don't_ influence any of the products of steps 1-4 can use a cached build of the GraphQL schema. The most important intermediate product is `BuildOutputs`. So now the exercise becomes to minimize the amount of stuff stored in `BuildOutputs`, so that as many Metadata operations as possible can be handled outside of the codepath that produces a GraphQL schema.
Per hasura/graphql-engine-mono#6609, the `BuildOutputs` structure is too big, and stores things unnecessarily. Refer to the PR description there for reasoning - the same logic applies to this PR, and simply goes a few steps further. In doing so, it can benefit from hasura/graphql-engine-mono#6765, which allows us to verify at compile time that certain Schema Cache building steps _don't_ generate "Metadata dependencies". If a certain Metadata dependency is never generated, we don't need to handle that case in `deleteMetadataObject`. Thus such intermediate products don't need to be passed through `resolveDependencies`, and thus they don't need to be stored in `BuildOutputs`, and thus their rebuild won't trigger a GraphQL schema rebuild.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6613
GitOrigin-RevId: 27d2e69d3461bd4c32f08febef9995c0369fab3a
What is the `Cacheable` type class about?
```haskell
class Eq a => Cacheable a where
unchanged :: Accesses -> a -> a -> Bool
default unchanged :: (Generic a, GCacheable (Rep a)) => Accesses -> a -> a -> Bool
unchanged accesses a b = gunchanged (from a) (from b) accesses
```
Its only method is an alternative to `(==)`. The added value of `unchanged` (and the additional `Accesses` argument) arises _only_ for one type, namely `Dependency`. Indeed, the `Cacheable (Dependency a)` instance is non-trivial, whereas every other `Cacheable` instance is completely boilerplate (and indeed either generated from `Generic`, or simply `unchanged _ = (==)`). The `Cacheable (Dependency a)` instance is the only one where the `Accesses` argument is not just passed onwards.
The only callsite of the `unchanged` method is in the `ArrowCache (Rule m)` method. That is to say that the `Cacheable` type class is used to decide when we can re-use parts of the schema cache between Metadata operations.
So what is the `Cacheable (Dependency a)` instance about? Normally, the output of a `Rule m a b` is re-used when the new input (of type `a`) is equal to the old one. But sometimes, that's too coarse: it might be that a certain `Rule m a b` only depends on a small part of its input of type `a`. A `Dependency` allows us to spell out what parts of `a` are being depended on, and these parts are recorded as values of types `Access a` in the state `Accesses`.
If the input `a` changes, but not in a way that touches the recorded `Accesses`, then the output `b` of that rule can be re-used without recomputing.
So now you understand _why_ we're passing `Accesses` to the `unchanged` method: `unchanged` is an equality check in disguise that just needs some additional context.
But we don't need to pass `Accesses` as a function argument. We can use the `reflection` package to pass it as type-level context. So the core of this PR is that we change the instance declaration from
```haskell
instance (Cacheable a) => Cacheable (Dependency a) where
```
to
```haskell
instance (Given Accesses, Eq a) => Eq (Dependency a) where
```
and use `(==)` instead of `unchanged`.
If you haven't seen `reflection` before: it's like a `MonadReader`, but it doesn't require a `Monad`.
In order to pass the current `Accesses` value, instead of simply passing the `Accesses` as a function argument, we need to instantiate the `Given Accesses` context. We use the `give` method from the `reflection` package for that.
```haskell
give :: forall r. Accesses -> (Given Accesses => r) -> r
unchanged :: (Given Accesses => Eq a) => Accesses -> a -> a -> Bool
unchanged accesses a b = give accesses (a == b)
```
With these three components in place, we can delete the `Cacheable` type class entirely.
The remainder of this PR is just to remove the `Cacheable` type class and its instances.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6877
GitOrigin-RevId: 7125f5e11d856e7672ab810a23d5bf5ad176e77f
Rather than varying it, let's just use `postgis/postgis` everywhere.
This uses the latest version of PostGIS, in which some of the raster codes have changed. This seems benign (it's just one digit in the hex stream). I can't find the relevant release notes, though.
Also syncs _images.go_ and _databases.yaml_ so we use the same thing where possible.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6903
GitOrigin-RevId: bb5c56f2e7ff69e4c008f1d658850af08c96badc
We currently have a fairly intricate way of running our PostgreSQL and MSSQL integration tests (not the API tests). By splitting them out, we can simplify this a lot. Most prominently, we can rely on Cabal to be our argument parser instead of writing our own.
We can also simplify how they're run in CI. They are currently (weirdly) run alongside the Python integration tests. This breaks them out into their own jobs for better visibility, and to avoid conflating the two.
The changes are as follows:
- The "unit" tests that rely on a running PostgreSQL database are extracted out to a new test directory so they can be run separately.
- Most of the `Main` module comes with them.
- We now refer to these as "integration" tests instead.
- Likewise for the "unit" tests that rely on a running MS SQL Server database. These are a little simpler and we can use `hspec-discover`, with a `SpecHook` to extract the connection string from an environment variable.
- Henceforth, these are the MS SQL Server integration tests.
- New CI jobs have been added for each of these.
- There wasn't actually a job for the MS SQL Server integration tests. It's pretty amazing they still run well.
- The "haskell-tests" CI job, which used to run the PostgreSQL integration tests, has been removed.
- The makefiles and contributing guide have been updated to run these.
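A sketch of such a `SpecHook` (the environment variable name is illustrative):

```haskell
-- SpecHook.hs, picked up automatically by hspec-discover
module SpecHook (hook) where

import System.Environment (lookupEnv)
import Test.Hspec (Spec, SpecWith, beforeAll)

-- Supplies the connection string to every discovered spec.
hook :: SpecWith String -> Spec
hook = beforeAll $ do
  mConnStr <- lookupEnv "HASURA_MSSQL_CONN_STR"  -- name illustrative
  maybe (fail "connection string not set") pure mConnStr
```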
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6912
GitOrigin-RevId: 67bbe2941bba31793f63d04a9a693779d4463ee1
### Description
This monster of a PR took way too long. As the title suggests, it reduces the schema context carried in the readers to the very strict minimum. In practice, that means that to build a source, we only require:
- the global `SchemaContext`
- the global `SchemaOptions` (soon to be renamed `SchemaSourceOptions`)
- that source's `SourceInfo`
Furthermore, _we no longer carry "default" customization options throughout the schema_. All customization information is extracted from the `SourceInfo`, when required. This prevents an entire category of bugs we had previously encountered, such as parts of the code using uninitialized / unupdated customization info.
In turn, this meant that we could remove the explicit threading of the `SourceInfo` throughout the schema, since it is now always available through the reader context.
Finally, this meant making a few adjustments to relay and actions as well, such as the introduction of a new separate "context" for actions, and a change to how we create some of the action-specific postgres scalar parsers.
I'll highlight with review comments the areas of interest.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6709
GitOrigin-RevId: ea80fddcb24e2513779dd04b0b700a55f0028dd1
- Avoid a few banana brackets `(| ... |)`, often by just using local `let` bindings
- Use proper `Arrows` syntax rather than helpers like `>->`
- Use monadic `do` syntax instead of `Arrows` syntax where possible
- Avoid `traverseA @Maybe`, in favor of a `case`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6751
GitOrigin-RevId: c07b22a1a259db6d135486ec71a716705e280717
When running using the "new" style (with an HGE binary, not a URL), a new PostgreSQL metadata and source database are created for each test. When we get this into CI, this should drastically reduce the flakiness.
I have also enabled parallelization by default when using `run-new.sh`. It's much faster.
I had to basically rewrite _server/tests-py/test_graphql_read_only_source.py_ so that it does two different things depending on how it's run. It's unfortunate, but it should eventually go away.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6879
GitOrigin-RevId: a121b9035f8da3e61a3e36d8b1fbc6ccae918fad
`CollectedInfo` was just an awkward sum type. By using an explicit `Either` instead, we can guarantee at the type level that certain methods only write inconsistencies, or only write dependencies. This is useful, because if we can guarantee that no dependencies are written, then we don't need to run `resolveDependencies` on that part of the Metadata. In other words, we can keep it out of `BuildOutputs`, which greatly benefits performance - see e.g. hasura/graphql-engine-mono#6613.
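A sketch of the type-level guarantee (names are stand-ins): a step whose writer context only mentions inconsistencies visibly cannot emit dependencies.

```haskell
import Control.Monad.Writer (MonadWriter, tell)
import Data.Sequence (Seq, singleton)

data InconsistentMetadata = InconsistentMetadata String -- stand-in
data SchemaDependency = SchemaDependency String         -- stand-in

-- With the explicit Either, a single stream carries both kinds of info...
recordInconsistency
  :: MonadWriter (Seq (Either InconsistentMetadata SchemaDependency)) m
  => InconsistentMetadata
  -> m ()
recordInconsistency = tell . singleton . Left

-- ...while a step typed against inconsistencies alone is guaranteed,
-- at the type level, never to write a dependency.
inconsistenciesOnly
  :: MonadWriter (Seq InconsistentMetadata) m
  => InconsistentMetadata
  -> m ()
inconsistenciesOnly = tell . singleton
```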
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6765
GitOrigin-RevId: 9ce099d2eee2278dbb6e5bea72063e4b6e064b35
This enables sharing the Docker Compose-based database configuration across the Haskell-based API tests and the legacy Python integration tests.
Why? Because we depend on different database versions and I keep running out of disk space. I am far too lazy to buy another disk and set up my operating system _again_.
The files in question are:
- _docker-compose/databases.yaml_, which is the base specification for the databases
- _docker-compose.yml_, used by the API tests locally (and for other manual testing), which extends the above
- _.buildkite/docker-compose-files/test-oss-server-hspec.yml_, used by the API tests in CI, which extends _databases.yaml_
- _server/tests-py/docker-compose.yml_, used by the Python integration tests
The changes are summarized as follows:
1. The following snippets are moved from _docker-compose/databases.yaml_ to _docker-compose.yml_ and _.buildkite/docker-compose-files/test-oss-server-hspec.yml_, as they're not strictly necessary for other forms of testing:
- the fixed port mappings (in the range 65000–65010)
- the PostgreSQL initialization
- the SQL Server initialization
2. Environment variables are used a little more in health checks and initialization scripts, as usernames, passwords, etc. can be overridden.
3. The volumes in _docker-compose/databases.yaml_ are made anonymous (unnamed), and the names are only specified in _docker-compose.yml_. We don't need to do this elsewhere.
- For extra fun, I have removed all named volumes from the CI Docker Compose files, as they seem to be unnecessary.
4. _server/tests-py/docker-compose.yml_ now depends on _docker-compose/databases.yaml_.
- This was the point.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6864
GitOrigin-RevId: f22f2839716f543ce8a62f890da244de7e23abaa
A bunch of configurations are retrieved from the Metadata, then stored in the `BuildOutputs` structure, only to then be forwarded to the `SchemaCache`, with extremely little processing in between.
So this simplifies the build pipeline for some parts of the metadata: just construct those things from `Metadata` directly, and store them in the `SchemaCache` without any intermediate container.
Why did we have the detour via `BuildOutputs` in the first place? Parts of the Metadata (codified by `MetadataObjId`) can generate _metadata inconsistencies_ and/or _schema dependencies_, which are related.
- Metadata inconsistencies are warnings that we show to the user, indicating that there's something wrong with their configuration, and they have to fix it.
- Schema dependencies are an internal mechanism that allow us to build a consistent view of the world. For instance, if we have a relationship from DB tables `books` to `authors`, but the `authors` table is inconsistent (e.g. it doesn't exist in the DB), then we have schema dependencies indicating that. The job of `resolveDependencies` is to then drop the relationship, so that we can at least generate a legal GraphQL schema for `books`.
If we never generate a schema dependency for a certain fragment of Metadata, then there is no reason to call `resolveDependencies` on it, and so there is no reason to store it in `BuildOutputs`.
---
The starting point that allows this refactor is to apply Metadata defaults before it reaches `buildAndCollectInfo`, so that metadata-with-defaults can be used elsewhere.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6609
GitOrigin-RevId: df0c4a7ff9451e10e02a40bf26304b26584ba483
When setting up a resource (typically some kind of web server) for use in tests, we need to remember to tear it down afterwards.
This moves this logic into one place, under the `TestResource` module.
Like `SetupAction`, it encapsulates setup and teardown, and also separates out waiting for the resource to be ready, so we don't accidentally leave it lying around in the case of a healthcheck failure.
Unlike `SetupAction`, it is monadic, and can be composed with other resources. In the future, we may want to adopt this logic for `SetupAction` too rather than using lists.
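One classic encoding of such a composable setup/teardown value is continuation/`bracket` style (the actual `TestResource` module may well differ):

```haskell
{-# LANGUAGE RankNTypes #-}

import Control.Exception (bracket)

-- A resource acquisition that carries its own teardown, and composes.
newtype TestResource a = TestResource
  { withTestResource :: forall r. (a -> IO r) -> IO r }

instance Functor TestResource where
  fmap f (TestResource m) = TestResource $ \k -> m (k . f)

instance Applicative TestResource where
  pure a = TestResource ($ a)
  TestResource mf <*> TestResource ma =
    TestResource $ \k -> mf (\f -> ma (k . f))

instance Monad TestResource where
  TestResource m >>= f =
    TestResource $ \k -> m (\a -> withTestResource (f a) k)

-- Setup and teardown in one place; teardown runs even on exceptions.
acquire :: IO a -> (a -> IO ()) -> TestResource a
acquire setup teardown = TestResource (bracket setup teardown)
```

Because the type is monadic, one resource's acquisition can depend on another's value, while teardowns still run in reverse order of acquisition.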
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6806
GitOrigin-RevId: 74e2d76c5c09b8e0fe1cad84c9e77011f5a4d3db
This removes calls to `setup` and `teardown` in favor of `setupTablesAction`.
Because this action untracks and drops tables (at least until we figure out how to make throwaway databases), the teardown phase can fail. I have added a wrapper which logs and discards exceptions as a workaround for now.
In the future, when we can simply drop the database, it will probably be sensible to catch "table already untracked" exceptions specifically and let them slide, while still failing on all other exceptions.
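A sketch of that future refinement (the `isAlreadyUntracked` predicate is hypothetical):

```haskell
{-# LANGUAGE LambdaCase #-}

import Control.Exception (SomeException, throwIO, try)

-- Run teardown, ignore only the benign failure, rethrow everything else.
forgivingTeardown :: (SomeException -> Bool) -> IO () -> IO ()
forgivingTeardown isAlreadyUntracked teardown =
  try teardown >>= \case
    Right ()                      -> pure ()
    Left e | isAlreadyUntracked e -> pure ()     -- let it slide
           | otherwise            -> throwIO e   -- still fail loudly
```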
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6769
GitOrigin-RevId: 12cb8f81dd6aced892fe83c49b9a0bdbef8cc1ac
Just forcing some of the most numerous thunks (found with `-hi` profiling); it seems some of these were retaining a significant amount of data.
This can follow the merge of, or supersede, #6679.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6710
GitOrigin-RevId: d0566ee288841e264637231a7f238946aa2e3564