What is the `Cacheable` type class about?
```haskell
class Eq a => Cacheable a where
  unchanged :: Accesses -> a -> a -> Bool
  default unchanged :: (Generic a, GCacheable (Rep a)) => Accesses -> a -> a -> Bool
  unchanged accesses a b = gunchanged (from a) (from b) accesses
```
Its only method is an alternative to `(==)`. The added value of `unchanged` (and the additional `Accesses` argument) arises _only_ for one type, namely `Dependency`. Indeed, the `Cacheable (Dependency a)` instance is non-trivial, whereas every other `Cacheable` instance is completely boilerplate (and indeed either generated from `Generic`, or simply `unchanged _ = (==)`). The `Cacheable (Dependency a)` instance is the only one where the `Accesses` argument is not just passed onwards.
The only call site of the `unchanged` method is in the `ArrowCache (Rule m)` instance. That is to say, the `Cacheable` type class is used to decide when we can re-use parts of the schema cache between Metadata operations.
So what is the `Cacheable (Dependency a)` instance about? Normally, the output of a `Rule m a b` is re-used when the new input (of type `a`) is equal to the old one. But sometimes, that's too coarse: it might be that a certain `Rule m a b` only depends on a small part of its input of type `a`. A `Dependency` allows us to spell out what parts of `a` are being depended on, and these parts are recorded as values of types `Access a` in the state `Accesses`.
If the input `a` changes, but not in a way that touches the recorded `Accesses`, then the output `b` of that rule can be re-used without recomputing.
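To make that concrete, here is a standalone toy model of the idea (simplified names; not the real `Hasura.Incremental` API):
```haskell
import qualified Data.Set as Set

data Config = Config {host :: String, port :: Int, comment :: String}

-- the names of the fields a rule accessed while producing its output
type Accesses = Set.Set String

-- compare only the fields that were actually depended on
unchangedFor :: Accesses -> Config -> Config -> Bool
unchangedFor accesses old new = all same (Set.toList accesses)
  where
    same "host" = host old == host new
    same "port" = port old == port new
    same "comment" = comment old == comment new
    same _ = True

main :: IO ()
main =
  -- the rule only read "host" and "port", so the changed "comment" does not
  -- invalidate its output
  print $
    unchangedFor
      (Set.fromList ["host", "port"])
      (Config "db" 5432 "old note")
      (Config "db" 5432 "new note") -- True
```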
So now you understand _why_ we're passing `Accesses` to the `unchanged` method: `unchanged` is an equality check in disguise that just needs some additional context.
But we don't need to pass `Accesses` as a function argument. We can use the `reflection` package to pass it as type-level context. So the core of this PR is that we change the instance declaration from
```haskell
instance (Cacheable a) => Cacheable (Dependency a) where
```
to
```haskell
instance (Given Accesses, Eq a) => Eq (Dependency a) where
```
and use `(==)` instead of `unchanged`.
If you haven't seen `reflection` before: it's like a `MonadReader`, but it doesn't require a `Monad`.
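A tiny self-contained demo of `give`/`given` (a toy example, not from this PR):
```haskell
{-# LANGUAGE FlexibleContexts #-}

import Data.Reflection (Given, give, given)

-- `given` reads the ambient value from the `Given` constraint,
-- like `ask` in a `MonadReader`, but without any monad
greet :: Given String => String -> String
greet name = "hello " ++ name ++ ", says " ++ given

main :: IO ()
main = putStrLn (give "the context" (greet "world"))
```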
In order to pass the current `Accesses` value, instead of simply passing the `Accesses` as a function argument, we need to instantiate the `Given Accesses` context. We use the `give` method from the `reflection` package for that.
```haskell
give :: forall r. Accesses -> (Given Accesses => r) -> r
unchanged :: (Given Accesses => Eq a) => Accesses -> a -> a -> Bool
unchanged accesses a b = give accesses (a == b)
```
With these three components in place, we can delete the `Cacheable` type class entirely.
The remainder of this PR is just to remove the `Cacheable` type class and its instances.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6877
GitOrigin-RevId: 7125f5e11d856e7672ab810a23d5bf5ad176e77f
Rather than varying it, let's just use `postgis/postgis` everywhere.
This uses the latest version of PostGIS, in which some of the raster codes have changed. This seems benign (it's just one digit in the hex stream), though I can't find the relevant release notes.
Also syncs _images.go_ and _databases.yaml_ so we use the same thing where possible.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6903
GitOrigin-RevId: bb5c56f2e7ff69e4c008f1d658850af08c96badc
We currently have a fairly intricate way of running our PostgreSQL and MSSQL integration tests (not the API tests). By splitting them out, we can simplify this a lot. Most prominently, we can rely on Cabal to be our argument parser instead of writing our own.
We can also simplify how they're run in CI. They are currently (weirdly) run alongside the Python integration tests. This breaks them out into their own jobs for better visibility, and to avoid conflating the two.
The changes are as follows:
- The "unit" tests that rely on a running PostgreSQL database are extracted out to a new test directory so they can be run separately.
- Most of the `Main` module comes with them.
- We now refer to these as "integration" tests instead.
- Likewise for the "unit" tests that rely on a running MS SQL Server database. These are a little simpler and we can use `hspec-discover`, with a `SpecHook` to extract the connection string from an environment variable.
- Henceforth, these are the MS SQL Server integration tests.
- New CI jobs have been added for each of these.
- There wasn't actually a job for the MS SQL Server integration tests. It's pretty amazing they still run well.
- The "haskell-tests" CI job, which used to run the PostgreSQL integration tests, has been removed.
- The makefiles and contributing guide have been updated to run these.
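A sketch of what such a `SpecHook` can look like (the environment variable name is hypothetical):
```haskell
-- SpecHook.hs is picked up automatically by hspec-discover; this shape works
-- when every spec module has the type `SpecWith String`
module SpecHook (hook) where

import System.Environment (lookupEnv)
import Test.Hspec

hook :: SpecWith String -> Spec
hook = beforeAll $ do
  -- the environment variable name here is illustrative
  maybeConn <- lookupEnv "HASURA_MSSQL_CONN_STR"
  maybe (fail "connection string not set") pure maybeConn
```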
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6912
GitOrigin-RevId: 67bbe2941bba31793f63d04a9a693779d4463ee1
### Description
This monster of a PR took way too long. As the title suggests, it reduces the schema context carried in the readers to the very strict minimum. In practice, that means that to build a source, we only require:
- the global `SchemaContext`
- the global `SchemaOptions` (soon to be renamed `SchemaSourceOptions`)
- that source's `SourceInfo`
Furthermore, _we no longer carry "default" customization options throughout the schema_. All customization information is extracted from the `SourceInfo` when required. This prevents an entire category of bugs we had previously encountered, such as parts of the code using uninitialized or stale customization info.
In turn, this meant that we could remove the explicit threading of the `SourceInfo` throughout the schema, since it is now always available through the reader context.
Finally, this meant making a few adjustments to relay and actions as well, such as the introduction of a new separate "context" for actions, and a change to how we create some of the action-specific postgres scalar parsers.
I'll highlight with review comments the areas of interest.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6709
GitOrigin-RevId: ea80fddcb24e2513779dd04b0b700a55f0028dd1
- Avoid a few banana brackets `(| ... |)`, often by just using local `let` bindings
- Use proper `Arrows` syntax rather than helpers like `>->`
- Use monadic `do` syntax instead of `Arrows` syntax where possible (see the illustration after this list)
- Avoid `traverseA @Maybe`, in favor of a `case`
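For readers unfamiliar with the trade-off, a minimal illustration (not taken from this diff) of the same computation written in `Arrows` syntax and as plain monadic `do`:
```haskell
{-# LANGUAGE Arrows #-}

import Control.Arrow (Kleisli (..), returnA)

-- Arrows syntax: fine when the arrow abstraction is actually needed...
doubleA :: Kleisli IO Int Int
doubleA = proc x -> do
  returnA -< x * 2

-- ...but plain monadic code is simpler when it isn't
doubleM :: Int -> IO Int
doubleM x = pure (x * 2)

main :: IO ()
main = runKleisli doubleA 21 >>= doubleM >>= print -- prints 84
```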
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6751
GitOrigin-RevId: c07b22a1a259db6d135486ec71a716705e280717
When running using the "new" style (with a HGE binary, not a URL), a new PostgreSQL metadata and source database are created for each test. When we get this into CI, this should drastically reduce the flakiness.
I have also enabled parallelization by default when using `run-new.sh`. It's much faster.
I had to basically rewrite _server/tests-py/test_graphql_read_only_source.py_ so that it does two different things depending on how it's run. It's unfortunate, but it should eventually go away.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6879
GitOrigin-RevId: a121b9035f8da3e61a3e36d8b1fbc6ccae918fad
`CollectedInfo` was just an awkward sum type. By using an explicit `Either` instead, we can guarantee at the type level that certain methods only write inconsistencies, or only write dependencies. This is useful, because if we can guarantee that no dependencies are written, then we don't need to run `resolveDependencies` on that part of the Metadata. In other words, we can keep it out of `BuildOutputs`, which greatly benefits performance - see e.g. hasura/graphql-engine-mono#6613.
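A sketch of the idea, with placeholder types standing in for the real ones:
```haskell
data InconsistentMetadata = InconsistentMetadata deriving (Show)

data SchemaDependency = SchemaDependency deriving (Show)

-- Before: a single stream in which both cases were always possible.
data CollectedInfo
  = CIInconsistency InconsistentMetadata
  | CIDependency SchemaDependency

-- After: the stream's type says what can be written. A function producing
-- this provably never records a dependency, so its output never needs
-- `resolveDependencies`.
collectOnlyInconsistencies :: [InconsistentMetadata] -> [Either InconsistentMetadata SchemaDependency]
collectOnlyInconsistencies = map Left
```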
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6765
GitOrigin-RevId: 9ce099d2eee2278dbb6e5bea72063e4b6e064b35
This enables sharing the Docker Compose-based database configuration across the Haskell-based API tests and the legacy Python integration tests.
Why? Because we depend on different database versions and I keep running out of disk space. I am far too lazy to buy another disk and set up my operating system _again_.
The files in question are:
- _docker-compose/databases.yaml_, which is the base specification for the databases
- _docker-compose.yml_, used by the API tests locally (and for other manual testing), which extends the above
- _.buildkite/docker-compose-files/test-oss-server-hspec.yml_, used by the API tests in CI, which extends _databases.yaml_
- _server/tests-py/docker-compose.yml_, used by the Python integration tests
The changes are summarized as follows:
1. The following snippets are moved from _docker-compose/databases.yaml_ to _docker-compose.yml_ and _.buildkite/docker-compose-files/test-oss-server-hspec.yml_, as they're not strictly necessary for other forms of testing:
   - the fixed port mappings (in the range 65000–65010)
   - the PostgreSQL initialization
   - the SQL Server initialization
2. Environment variables are used a little more in health checks and initialization scripts, as usernames, passwords, etc. can be overridden.
3. The volumes in _docker-compose/databases.yaml_ are made anonymous (unnamed), and the names are only specified in _docker-compose.yml_. We don't need to do this elsewhere.
   - For extra fun, I have removed all named volumes from the CI Docker Compose files, as they seem to be unnecessary.
4. _server/tests-py/docker-compose.yml_ now depends on _docker-compose/databases.yaml_.
   - This was the point.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6864
GitOrigin-RevId: f22f2839716f543ce8a62f890da244de7e23abaa
A bunch of configurations are retrieved from the Metadata, then stored in the `BuildOutputs` structure, only to then be forwarded to the `SchemaCache`, with extremely little processing in between.
So this simplifies the build pipeline for some parts of the metadata: just construct those things from `Metadata` directly, and store them in the `SchemaCache` without any intermediate container.
Why did we have the detour via `BuildOutputs` in the first place? Parts of the Metadata (codified by `MetadataObjId`) can generate _metadata inconsistencies_ and/or _schema dependencies_, which are related.
- Metadata inconsistencies are warnings that we show to the user, indicating that there's something wrong with their configuration, and they have to fix it.
- Schema dependencies are an internal mechanism that allows us to build a consistent view of the world. For instance, if we have a relationship from DB tables `books` to `authors`, but the `authors` table is inconsistent (e.g. it doesn't exist in the DB), then we have schema dependencies indicating that. The job of `resolveDependencies` is to then drop the relationship, so that we can at least generate a legal GraphQL schema for `books`.
If we never generate a schema dependency for a certain fragment of Metadata, then there is no reason to call `resolveDependencies` on it, and so there is no reason to store it in `BuildOutputs`.
---
The starting point that allows this refactor is to apply Metadata defaults before it reaches `buildAndCollectInfo`, so that metadata-with-defaults can be used elsewhere.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6609
GitOrigin-RevId: df0c4a7ff9451e10e02a40bf26304b26584ba483
When setting up a resource (typically some kind of web server) for use in tests, we need to remember to tear it down afterwards.
This moves this logic into one place, under the `TestResource` module.
Like `SetupAction`, it encapsulates setup and teardown, and also separates out waiting for the resource to be ready, so we don't accidentally leave it lying around in the case of a healthcheck failure.
Unlike `SetupAction`, it is monadic, and can be composed with other resources. In the future, we may want to adopt this logic for `SetupAction` too rather than using lists.
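A minimal sketch of the shape of such a resource (the field names here are illustrative, not the real API):
```haskell
{-# LANGUAGE RecordWildCards #-}

import Control.Exception (bracket)

data TestResource a = TestResource
  { acquire :: IO a, -- start the resource
    waitForReady :: a -> IO (), -- e.g. poll a healthcheck endpoint
    release :: a -> IO () -- tear it down afterwards
  }

-- teardown runs even if the readiness check or the test fails, so a failed
-- healthcheck doesn't leave the resource lying around
withTestResource :: TestResource a -> (a -> IO b) -> IO b
withTestResource TestResource {..} use =
  bracket acquire release (\a -> waitForReady a >> use a)
```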
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6806
GitOrigin-RevId: 74e2d76c5c09b8e0fe1cad84c9e77011f5a4d3db
This removes calls to `setup` and `teardown` in favor of `setupTablesAction`.
Because this action untracks and drops tables (at least until we figure out how to make throwaway databases), the teardown phase can fail. I have added a wrapper which logs and discards exceptions as a workaround for now.
In the future, when we can simply drop the database, it will probably be sensible to catch "table already untracked" exceptions specifically and let them slide, while still failing on all other exceptions.
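For illustration, a minimal sketch of such a log-and-discard wrapper (the real helper may differ):
```haskell
{-# LANGUAGE ScopedTypeVariables #-}

import Control.Exception (SomeException, try)

-- run a teardown action, logging and swallowing any exception it throws
logAndDiscardExceptions :: IO () -> IO ()
logAndDiscardExceptions action = do
  result <- try action
  case result of
    Left (e :: SomeException) -> putStrLn ("teardown failed: " ++ show e)
    Right () -> pure ()
```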
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6769
GitOrigin-RevId: 12cb8f81dd6aced892fe83c49b9a0bdbef8cc1ac
Just forcing some of the most numerous thunks (found with `-hi` profiling); it seems some of these were retaining a significant amount of data.
This can follow the merge of, or supersede, #6679.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6710
GitOrigin-RevId: d0566ee288841e264637231a7f238946aa2e3564
## Description ✍️
This PR aims to improve the developer experience when using a Heroku Postgres instance as a source database. Better error messages and relevant documentation are added as part of this PR.
## Changelog ✍️
__Component__ : server
__Type__: enhancement
__Product__: community-edition
### Short Changelog
Improve DX for Heroku integration
### Related Issues ✍
https://hasurahq.atlassian.net/browse/GS-202
### Steps to test and verify ✍
- Add a new Heroku Postgres instance as a DB source in Hasura
- Try adding an event trigger
- An improved error message will be emitted:
```json
{
  "arguments": [],
  "error": {
    "description": null,
    "exec_status": "FatalError",
    "hint": null,
    "message": "pgcrypto can only be created in heroku_ext schema. Hint: You can set \"extensions_schema\" to provide the schema to install the extensions. Refer to the documentation here: https://hasura.io/docs/latest/deployment/postgres-requirements/#pgcrypto-in-pg-search-path",
    "status_code": "P0001"
  },
  "prepared": false,
  "statement": "CREATE EXTENSION IF NOT EXISTS pgcrypto SCHEMA public"
}
```
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6630
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sean Park-Ross <94021366+seanparkross@users.noreply.github.com>
GitOrigin-RevId: a46d7c129a4e0378b7f33445f9bda11e0bddbd74
This upgrades the version of Ormolu required by the HGE repository to v0.5.0.1, and reformats all code accordingly.
Ormolu v0.5 reformats code that uses infix operators. This is mostly useful, adding newlines and indentation to make it clear which operators are applied first, but in some cases, it's unpleasant. To make this easier on the eyes, I had to do the following:
* Add a few fixity declarations (search for `infix`)
* Add parentheses to make precedence clear, allowing Ormolu to keep everything on one line
* Rename `relevantEq` to `(==~)` in #6651 and set it to `infix 4`
* Add a few _.ormolu_ files (thanks to @hallettj for helping me get started), mostly for Autodocodec operators that don't have explicit fixity declarations
In general, I think these changes are quite reasonable. They mostly affect indentation.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6675
GitOrigin-RevId: cd47d87f1d089fb0bc9dcbbe7798dbceedcd7d83
Ormolu v0.5 tries to reformat code using operators according to fixity. Unfortunately, it doesn't really understand backticked functions (even when they have an associated `infix` declaration), and so messes up the formatting.
This is probably a bug in Ormolu, but we can work around it by using a symbol operator.
Happy to bikeshed on `==~` (which I am reading as "pretty much equal to"). Please yell at me if you prefer something else.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6651
GitOrigin-RevId: 79af427422194460200b2b48339cdb9ee9b33c33
There are some incremental Metadata API methods that have no good justification for taking so much time to complete. This adds some of them to the CI benchmark suite, so that we can track their performance.
I have a prototype to speed up some of these methods 10x; see hasura/graphql-engine-mono#6613.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6627
GitOrigin-RevId: fecc7f28cae734b4acad68a63cbcdf0a2693d567
This introduces an adhoc operation to the benchmark of `huge_schema`, so that we can track performance of the incremental Metadata API.
This untracks a table that is not referenced by anything else in the `huge_schema` metadata, so that we don't need to cascade any changes, and then tracks it again.
Benchmarking this will be valuable for working on `Hasura.Incremental`.
Results will start showing up in the benchmark report when this is merged to `main`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6553
GitOrigin-RevId: 65dad4f7a5fe1c230c5def136640bb68f4a4aa9b
`ssl.wrap_socket` is deprecated in favor of `SSLContext.wrap_socket`.
Also throws in a quick speed improvement to _server/tests-py/run.sh_ on x86_64.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6498
GitOrigin-RevId: 7bbe5f86daf45677e2a39cfcfe183794ffcd2954
## Description
This PR allows DC agents to define custom aggregate functions for their scalar types.
### Related Issues
GDC-189
### Solution and Design
We added a new property `aggregate_functions` to the scalar types capabilities. This allows the agent author to specify a set of aggregate functions supported by each scalar type, along with the function's result type.
During GraphQL schema generation, the custom aggregate functions are available via a new method `getCustomAggregateOperators` on the `Backend` type class.
Custom functions are merged with the builtin aggregate functions when building GraphQL schemas for table aggregate fields and for `order_by` operators on array relations.
### Steps to test and verify
- Codec tests for aggregate function capabilities have been added to the unit tests.
- Some custom aggregate operators have been added to the reference agent and are used in a new test in `api-tests`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6199
GitOrigin-RevId: e9c0d1617af93847c1493671fdbb794f573bde0c
Prior to this commit, various definition types representing GraphQL schema internally and the logic which collected a schema from the definition types were in a single module called `Hasura.GraphQL.Schema`. This created cyclic dependencies between `Hasura.GraphQL.Schema` module and `Hasura.GraphQL.Schema.Convert` module.
This is now fixed by:
1. Moving all the definition related types into `Hasura.GraphQL.Schema.Definition` module
1. The logic that collects a GraphQL Schema from these types into `Hasura.GraphQL.Schema.Collect`
With these changes, the `Hasura.GraphQL.Schema` module just re-exports both of these modules.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6517
GitOrigin-RevId: d5207cf31335aeeddd874ed6f921a17892580b4c
### Description
This small PR develops a bit the existing documentation about remote joins. It adds a new section that details where each piece of the feature is located, and adds two paragraphs detailing some of the implementation details of the execution.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6505
GitOrigin-RevId: 6edd5459e4081cc6c9a80fdc92c2d479dedb2be9
If the tests are run with specific ports assigned to specific services,
set through the environment variables, we continue to use those ports.
We just don't hard-code them now; we pick them up from the environment
variables.
However, if the environment variables are not set, we generate a random
port for each service. This allows us to run multiple tests in parallel
in the future, independently.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6218
GitOrigin-RevId: 3d2a1880bf67544c848951888ce7b4fa1ba379dc
This installs the ODBC Driver 18 for SQL Server in all our shipped Docker images, and updates our tests and documentation accordingly.
This version supports arm64, and therefore can run natively (or via Docker) on macOS on aarch64.
`msodbcsql17` is still installed in production-targeted Docker images so that users do not _have_ to migrate to the new driver.
Nix expressions are packaged for the new driver, as it is not yet available in nixpkgs.
In this version, [the default encryption setting was changed from "no" to "yes"](https://techcommunity.microsoft.com/t5/sql-server-blog/odbc-driver-18-0-for-sql-server-released/ba-p/3169228). In addition, "mandatory" and "optional" were added as synonyms for "yes" and "no" respectively.
I have therefore modified all connection strings in tests to specify `Encrypt=optional` (and changed some from `Encrypt=no`). I chose "optional" rather than "no" because I feel it's more honest; these connection strings will work with or without an encrypted connection.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6241
GitOrigin-RevId: 959f88dd1f271ef06a3616bc46b358f364f6cdfd
The main aim of the PR is:
1. To set up a module structure for 'remote-schemas' package.
2. Move parts of the remote schema codebase into the new module structure to validate it.
## Notes to the reviewer
Why a PR with a large-ish diff?
1. We've been making progress on the MM project but we don't yet know how long it is going to take us to get to the first milestone. To understand this better, we need to figure out the unknowns as soon as possible. Hence I've taken a stab at the first two items in the [end-state](https://gist.github.com/0x777/ca2bdc4284d21c3eec153b51dea255c9) document to figure out the unknowns. Unsurprisingly, there are a bunch of issues that we haven't discussed earlier. These are documented in the 'open questions' section.
1. The diff is large but that is only code moved around and I've added a section that documents how things are moved. In addition, there are a fair number of PR comments to help with the review process.
## Changes in the PR
### Module structure
Sets up the module structure as follows:
```
Hasura/
RemoteSchema/
Metadata/
Types.hs
SchemaCache/
Types.hs
Permission.hs
RemoteRelationship.hs
Build.hs
MetadataAPI/
Types.hs
Execute.hs
```
### 1. Types representing metadata are moved
Types that capture metadata information (currently scattered across several RQL modules) are moved into `Hasura.RemoteSchema.Metadata.Types`.
- This new module only depends on very 'core' modules such as `Hasura.Session` for the notion of roles and `Hasura.Incremental` for the `Cacheable` typeclass.
- The requirement on database modules is avoided by generalizing the remote schemas metadata to accept an arbitrary 'r' for a remote relationship definition.
### 2. SchemaCache related types and build logic have been moved
Types that represent remote schemas information in SchemaCache are moved into `Hasura.RemoteSchema.SchemaCache.Types`.
Similar to `H.RS.Metadata.Types`, this module depends on 'core' modules except for `Hasura.GraphQL.Parser.Variable`. It has something to do with remote relationships but I haven't spent time looking into it. The validation of 'remote relationships to remote schema' is also something that needs to be looked at.
Rips out the logic that builds remote schema's SchemaCache information from the monolithic `buildSchemaCacheRule` and moves it into `Hasura.RemoteSchema.SchemaCache.Build`. Further, the `.SchemaCache.Permission` and `.SchemaCache.RemoteRelationship` have been created from existing modules that capture schema cache building logic for those two components.
This was a fair amount of work. On main, remote schemas' SchemaCache information is currently built in two phases: in the first phase, 'permissions' and 'remote relationships' are ignored, and in the second phase they are filled in.
While remote relationships can only be resolved after partially resolving sources and other remote schemas, the same isn't true for permissions. Further, most of the work that is done to resolve remote relationships can be moved to the first phase so that the second phase can be a very simple traversal.
This is the approach that was taken: resolve permissions and as much of the remote relationship information as possible in the first phase.
### 3. Metadata APIs related types and build logic have been moved
The types that represent remote schema related metadata APIs and the execution logic have been moved to `Hasura.RemoteSchema.MetadataAPI.Types` and `.Execute` modules respectively.
## Open questions:
1. `Hasura.RemoteSchema.Metadata.Types` is so called because I was hoping that all of the metadata related APIs of remote schema could be brought in at `Hasura.RemoteSchema.Metadata.API`. However, as the metadata APIs depended on functions from the `SchemaCache` module (see [1](ceba6d6226/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs (L55)) and [2](ceba6d6226/server/src-lib/Hasura/RQL/DDL/RemoteSchema.hs (L91))), it made more sense to create a separate top-level module for `MetadataAPI`s.
Maybe we can just have `Hasura.RemoteSchema.Metadata` and get rid of the extra nesting or have `Hasura.RemoteSchema.Metadata.{Core,Permission,RemoteRelationship}` if we want to break them down further.
1. `buildRemoteSchemas` in `H.RS.SchemaCache.Build` has the following type:
```haskell
buildRemoteSchemas ::
  ( ArrowChoice arr,
    Inc.ArrowDistribute arr,
    ArrowWriter (Seq CollectedInfo) arr,
    Inc.ArrowCache m arr,
    MonadIO m,
    HasHttpManagerM m,
    Inc.Cacheable remoteRelationshipDefinition,
    ToJSON remoteRelationshipDefinition,
    MonadError QErr m
  ) =>
  Env.Environment ->
  ( (Inc.Dependency (HashMap RemoteSchemaName Inc.InvalidationKey), OrderedRoles),
    [RemoteSchemaMetadataG remoteRelationshipDefinition]
  )
    `arr` HashMap RemoteSchemaName (PartiallyResolvedRemoteSchemaCtxG remoteRelationshipDefinition, MetadataObject)
```
Note the dependence on `CollectedInfo` which is defined as
```haskell
data CollectedInfo
  = CIInconsistency InconsistentMetadata
  | CIDependency
      MetadataObject
      -- ^ for error reporting on missing dependencies
      SchemaObjId
      SchemaDependency
  deriving (Eq)
```
This pretty much means that remote schemas are dependent on types from databases, actions, and so on.
How do we fix this? Maybe introduce a typeclass such as `ArrowCollectRemoteSchemaDependencies`, which would be defined in `Hasura.RemoteSchema` and then implemented in graphql-engine?
1. The dependency on `buildSchemaCacheFor` in `.MetadataAPI.Execute` which has the following signature:
```haskell
buildSchemaCacheFor ::
  (QErrM m, CacheRWM m, MetadataM m) =>
  MetadataObjId ->
  MetadataModifier ->
  m ()
```
This can be easily resolved if we restrict what the metadata APIs are allowed to do. Currently, they operate with unfettered access to modify the SchemaCache (the `CacheRWM` constraint):
```haskell
runAddRemoteSchema ::
  ( QErrM m,
    CacheRWM m,
    MonadIO m,
    HasHttpManagerM m,
    MetadataM m,
    Tracing.MonadTrace m
  ) =>
  Env.Environment ->
  AddRemoteSchemaQuery ->
  m EncJSON
```
If this were instead changed to restrict remote schema APIs to only modifying remote schema metadata (while retaining access to the remote schemas part of the schema cache), this dependency would be completely removed:
```haskell
runAddRemoteSchema ::
  ( QErrM m,
    MonadIO m,
    HasHttpManagerM m,
    MonadReader RemoteSchemasSchemaCache m,
    MonadState RemoteSchemaMetadata m,
    Tracing.MonadTrace m
  ) =>
  Env.Environment ->
  AddRemoteSchemaQuery ->
  m RemoteSchemaMetadataObjId
```
The idea is that the core graphql-engine would call these functions and then call
`buildSchemaCacheFor`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6291
GitOrigin-RevId: 51357148c6404afe70219afa71bd1d59bdf4ffc6
We use a helper service to start a webhook-based authentication service for some tests. This moves the initialization of the service out of _test-server.sh_ and into the Python test harness, as a fixture.
In order to do this, I had to make a few changes. The main deviation is that we no longer run _all_ tests against an HGE with this authentication service, just a few (those in _test_webhook.py_). Because this reduced coverage, I have added some more tests there, which actually cover some areas not exercised elsewhere (mainly trying to use webhook credentials to talk to an admin-only endpoint).
The webhook service can run both with and without TLS, and decide whether it's necessary to skip one of these based on the arguments passed and how HGE is started, according to the following logic:
* If a TLS CA certificate is passed in, it will run with TLS, otherwise it will skip it.
* If HGE was started externally and a TLS certificate is provided, it will skip running without TLS, as it will assume that HGE was configured to talk to a webhook over HTTPS.
* Some tests should only be run with TLS; this is marked with a `tls_webhook_server` marker.
* Some tests should only be run _without_ TLS; this is marked with a `no_tls_webhook_server` marker.
The actual parameterization of the webhook service configuration is done through test subclasses, because normal pytest parameterization doesn't work with the `hge_fixture_env` hack that we use. Because `hge_fixture_env` is not a sanctioned way of conveying data between fixtures (and, unfortunately, there isn't a sanctioned way of doing this when the fixtures in question may not know about each other directly), parameterizing the `webhook_server` fixture doesn't actually parameterize `hge_server` properly. Subclassing forces this to work correctly.
The certificate generation is moved to a Python fixture, so that we don't have to revoke the CA certificate for _test_webhook_insecure.py_; we can just generate a bogus certificate instead. The CA certificate is still generated in the _test-server.sh_ script, as it needs to be installed into the OS certificate store.
Interestingly, the CA certificate installation wasn't actually working, because the certificates were written to the wrong location. This didn't cause any failures, as we weren't actually testing this behavior. This is now fixed with the other changes.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6363
GitOrigin-RevId: 0f277d374daa64f657257ed2a4c2057c74b911db
## Description
This PR fixes hasura/graphql-engine#8345: when creating the final representation of a remote relationship to a remote schema (a `RemoteJoin`), we would mistakenly label ALL join fields in the selection set as being relevant to that one relationship: if there was more than one remote relationship to process in that selection set, each would get the union of all their join fields. The problem with this error is that, when processing remote relationships, we correctly ignore all the ones for which at least one join key is null. Consequently, this error would result in us ignoring remote relationships for which an _unrelated_ join key was null, resulting in that data missing from the final JSON result.
This PR simply ensures that the aggregation of fields that are passed to `createRemoteJoin` is pruned to only contain the fields relevant to the join being created. This is a very small change, and the bulk of this PR is the regression tests.
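A toy illustration of the pruning (not the actual `createRemoteJoin` code):
```haskell
import qualified Data.HashMap.Strict as HashMap -- from unordered-containers

-- keep only the join fields that this particular relationship actually uses,
-- rather than the union of every relationship's join fields
pruneJoinFields :: [String] -> HashMap.HashMap String v -> HashMap.HashMap String v
pruneJoinFields relevant = HashMap.filterWithKey (\key _ -> key `elem` relevant)
```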
## Changelog
__Component__ : server
__Type__: bugfix
__Product__: community-edition
### Short Changelog
fix remote relationship to remote schema sometimes being erroneously null when multiple relationships are defined on the same table / graphql object ([#8345](https://github.com/hasura/graphql-engine/issues/8345))
### Long Changelog
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6420
GitOrigin-RevId: eb54462724b007f80b674dcf234adf6d9cfaaf79
Context: https://hasurahq.atlassian.net/browse/SRE-10
Also remove an overlapping instance.
-----
The new flags, should SRE need to tweak this in production, are:
- --idleGCIdleInterval : "When the system has been idle for idleGCIdleInterval we may opportunistically try a major GC to run finalizers"
- --idleGCMinGCInterval : "We never run an opportunistic GC unless it has been at least idleGCMinGCInterval seconds since the last major GC"
- --idleGCMaxNoGCInterval : "If it has been longer than idleGCMaxNoGCInterval since the last major GC, force a GC to run finalizers"
Be aware: we may see memory usage grow to higher peaks than before, especially when under load.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6449
GitOrigin-RevId: 662d2f968f0d73b3b6eebb857c49aaede3312705
* The versions for some tools were out of date; updated accordingly.
* The link to the Dockerfile was broken.
* Included instructions for Nix.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6403
GitOrigin-RevId: 3acbafe90e4bb9267dcdb2dce5e205773a14dfc9
This helps with running the tests locally on a recent Mac.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6304
Co-authored-by: Daniel Harvey <4729125+danieljharvey@users.noreply.github.com>
GitOrigin-RevId: eb5c6a185f68b216c94df2581acf71906cce7872
This makes it possible for the test harness to start the test JWK server and the test remote schema server.
In order to do this, we still generate the TLS certificates in the test script (because we need to install the generated CA certificate in the OS certificate store), and then pass the certificate and key paths into the test runner.
Because we are still using _test-server.sh_ for now, we don't use the JWK server fixture in that case, as HGE needs the JWK server to be up and running when it starts. Instead, we keep running it outside (for now).
This is also the case for the GraphQL server fixture when we are running the server upgrade/downgrade tests.
I have also refactored _graphql_server.py_ so there isn't a global `HGE_URLS` value, but instead the value is passed through.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6303
GitOrigin-RevId: 06f05ff674372dc5d632e55d68e661f5c7a17c10
This upgrades CI and anyone using Nix to HLint v3.4.1.
If you're not using Nix, this doesn't actually _do_ anything on your
local machine; it's just a suggestion.
It also applies a bunch of simple HLint refactors, using
`make lint-hs-fix`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6324
GitOrigin-RevId: de8267e4909d6dcd3f83543188517f3aaeebc5f3
I didn't track why these were left behind. Presumably GHC 9.2 has an improved redundant constraint checker, so that explains a few. Otherwise, perhaps code got refactored along the way.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6256
GitOrigin-RevId: b6275edf3e867f8e33bdec533ce9932381d36bbb
- Remove a few unnecessary helper functions
- Delete kind annotations
- Bring GHC warnings and language extensions more in line with those of the `graphql-engine` library
- Constrain unconstrained dependency on `hasql-pool`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6251
GitOrigin-RevId: 10c2530f007f70cf1464cec36566ee2264589881
This updates _docker-compose.yml_ to use the new image tags, and updates _run.sh_ accordingly.
While I was at it, I also added a `docker compose pull` instruction to make sure that we don't have surprises half-way through the script, and a few `echo` lines for clarity.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6235
GitOrigin-RevId: 3855f6898bd3e906c5f423d9d0d6a7031de3777a
We seem to be rebuilding hpack on every PR. I'm hoping this will allow PRs to share a cache.
I have also changed the cache key to include the entirety of _server/VERSIONS.json_, and added the GHC version there, to make sure it's properly invalidated.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6142
GitOrigin-RevId: fc61a26ad721f59f52687913f6978902f4c2ca0a
- Remove `onJust` in favor of the more general `for_` (see the sketch after this list)
- Remove `withJust` which was used only once
- Remove `hashNub` in favor of `Ord`-based `uniques`
- Simplify some of the implementations in `Hasura.Prelude`
- Add `hlint` hints to replace `maybe True` with `all`, and `maybe False` with `any`
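For instance, `for_` from `Data.Foldable`, specialized to `Maybe`, covers what `onJust` did; a standalone sketch:
```haskell
import Data.Foldable (for_)

main :: IO ()
main = do
  let maybeName = Just "world"
  -- previously something like: onJust maybeName (\name -> ...)
  for_ maybeName (\name -> putStrLn ("hello " ++ name))
```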
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6173
GitOrigin-RevId: 2c6ebbe2d04f60071d2a53a2d43c6d62dbc4b84e
This PR is the result of running the following commands:
```bash
$ git grep -l '".* : "' -- '*.hs' | xargs sed -i -E 's/(".*) : "/\1: "/'
$ scripts/dev.sh test --integration --accept
```
Also manually fixed a few tests and docs.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6148
GitOrigin-RevId: cf8b87605d41d9ce86613a41ac5fd18691f5a641
When we run the HGE server inside the test harness, it needs to run with
an admin secret for some tests to make sense. This tags each test that
requires an admin secret with `pytest.mark.admin_secret`, which then
generates a UUID and injects that into both the server and the test case
(if required).
It also simplifies the way the test harness picks up an existing admin
secret, allowing it to use the environment variable instead of requiring
it via a parameter.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6120
GitOrigin-RevId: 55c5b9e8c99bdad9c8304098444ddb9516749a2c
This teaches `hge_server` how to run more tests, thanks to `hge_env`.
It also simplifies the logic a bit more.
I have also modified _run.sh_ and _docker-compose.yml_ so we can run multiple test suites, one after another.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6105
GitOrigin-RevId: eff009362eb6bb90c07cedaf96dfe6ec9336ff32
If we don't do this, we might end up applying metadata with a stale schema cache.
Following the principle of least surprise, replacing the metadata should probably compute inconsistencies with regards to the actual state of the database.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6026
GitOrigin-RevId: ff7469d7d9857c8a9f517d5d0b6f1ecf463621b3
This has two purposes:
* When running the Python integration tests against a running HGE instance, with `--hge-url`, it will check the environment variables available and actively skip the test if they aren't set. This replaces the previous ad-hoc skip behavior.
* More interestingly, when running against a binary with `--hge-bin`, the environment variables are passed through, which means different tests can run with different environment variables.
On top of this, the various services we use for testing now also provide their own environment variables, rather than expecting a test script to do it.
In order to make this work, I also had to invert the dependency between various services and `hge_ctx`. I extracted a `pg_version` fixture to provide the PostgreSQL version, and now pass the `hge_url` and `hge_key` explicitly to `ActionsWebhookServer`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6028
GitOrigin-RevId: 16d866741dba5887da1adf4e1ade8182ccc9d344
NPM v7 uses a new (backwards-compatible) lockfile format. This upgrades all our various _package-lock.json_ files to use the new format.
It's much more verbose so that NPM can be a lot faster.
I figured it was cleaner to do this once in a separate PR rather than upgrading them in combination with adding or upgrading a new dependency.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5869
GitOrigin-RevId: 322fb63b96e2d873a4a3cc05fa6c7afa414716ce
This adds support for running the Python integration tests for MSSQL and Citus just as in CI, as follows:
```
./server/tests-py/run.sh backend-mssql
./server/tests-py/run.sh backend-citus
```
These run the named CI jobs, providing the appropriate backend.
(In reality, all backends are always provided, which is much simpler.)
It also provides the various databases to _server/tests-py/run-new.sh_, though the tests fail as they don't properly initialize the sources. (This will be fixed in the future by provisioning sources in the test framework itself.)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5997
GitOrigin-RevId: c276a4779a35bb538ef0dc02ac8b7cb2d5a8dec5
This makes a few changes to the test scripts and makefiles in order to make things simpler for the average Apple user.
First of all, we change the `wait_for_mysql` function to use "localhost", not "127.0.0.1", as this fixed an issue on my system when attempting to connect to the MySQL server.
Secondly, we split the SQL Server test image into two:
* The first is the server itself, which now automatically uses `azure-sql-edge` as the image if you are on an aarch64 chip and using the `make` commands.
* The second is the initialization script. Because `sqlcmd` is not available in the `azure-sql-edge` image on aarch64, we use a separate container based on `mssql-tools` to initialize the server.
The README has been updated.
Tested on both macOS/aarch64 (with other changes) and Linux/x86_64.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5986
GitOrigin-RevId: b16e079861dcbcc66773295c47d715e443b67eea
See: https://github.com/grafana/k6/issues/2685
It might be interesting to think about taking decompression time into consideration when thinking about performance, but in general I think doing so is surprising, and I wasted a lot of time trying to figure out why my optimizations to the compression codepath weren't improving things to the degree I expected.
The downside here is that we lose error reporting, so you'll need to only set `discardResponseBodies: true` after the query has been tested.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5940
GitOrigin-RevId: 82a589a59b93f10ffb5391e4a3190459fb6e613b
Result of executing the following commands:
```shell
# replace "as Q" imports with "as PG" (in retrospect this didn't need a regex)
git grep -lE 'as Q($|[^a-zA-Z])' -- '*.hs' | xargs sed -i -E 's/as Q($|[^a-zA-Z])/as PG\1/'
# replace " Q." with " PG."
git grep -lE ' Q\.' -- '*.hs' | xargs sed -i 's/ Q\./ PG./g'
# replace "(Q." with "(PG."
git grep -lE '\(Q\.' -- '*.hs' | xargs sed -i 's/(Q\./(PG./g'
# ditto, but for [, |, { and !
git grep -lE '\[Q\.' -- '*.hs' | xargs sed -i 's/\[Q\./\[PG./g'
git grep -l '|Q\.' -- '*.hs' | xargs sed -i 's/|Q\./|PG./g'
git grep -l '{Q\.' -- '*.hs' | xargs sed -i 's/{Q\./{PG./g'
git grep -l '!Q\.' -- '*.hs' | xargs sed -i 's/!Q\./!PG./g'
```
(Doing the `grep -l` before the `sed`, instead of `sed` on the entire codebase, reduces the number of `mtime` updates, and so reduces how many times a file gets recompiled while checking intermediate results.)
Finally, I manually removed a broken and unused `Arbitrary` instance in `Hasura.RQL.Network`. (It used an `import Test.QuickCheck.Arbitrary as Q` statement, which was erroneously caught by the first find-replace command.)
After this PR, `Q` is no longer used as an import qualifier. That was not the goal of this PR, but perhaps it's a useful fact for future efforts.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5933
GitOrigin-RevId: 8c84c59d57789111d40f5d3322c5a885dcfbf40e
This fixes a few issues so that we can run `./server/tests-py/run.sh backend-bigquery` to run the Python integration tests for BigQuery locally.
* We forward the relevant environment variables to the Docker container.
* We increase the HTTP timeout, as I'm seeing requests taking up to 90s locally.
* We rewrite the setup so that it avoids `INSERT INTO`, which is not available using the BigQuery free tier. Instead, we use `CREATE TABLE ... AS SELECT ...`. This is the same method used by the Haskell integration tests.
We also capture local server output in a volume so it's easier to figure out what went wrong later.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5921
GitOrigin-RevId: c628f8c08a84f2582958659ab6d6494832471f6f
I am working on https://github.com/hasura/graphql-engine/issues/8807, and wanted to write a Haskell integration test case to reproduce it.
We have Python integration tests somewhat covering this behavior in *test_inconsistent_meta.py*, but no Haskell tests, so I thought I'd shore up the coverage here by adding a few test cases for working behavior.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5897
GitOrigin-RevId: 21500e530e413feaede5cbd8b4a94b07d25a6260
This makes two changes to the Docker Compose files that we use for local testing:
1. We disable `fsync`. On my machine, this decreases the time taken to create a new database from ~5s to less than 0.1s. The trade-off is that you might lose data, which we don't care about, as this is for testing.
2. We increase the maximum number of connections from the default, 100, to 1000. This allows us to run more tests in parallel without hitting connection limits.
These changes won't have any meaningful effect for now; they simply allow us to parallelize tests against PostgreSQL in the future.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5892
GitOrigin-RevId: 5d0d0ab37fdfbf4c9e20084d3cbedf647f54a04e
This argument allows the user to specify how to run HGE, rather than starting it beforehand. The runner will start a new instance of HGE for each test class.
This does not provide isolation, as the database is still re-used, but it helps us get closer.
You can try it yourself by executing:
```
$ cabal build graphql-engine:exe:graphql-engine
$ ./server/tests-py/run-new.sh
```
This doesn't affect CI at all.
I also fixed a few warnings flagged by Pylance.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5881
GitOrigin-RevId: ea6f0fd631a2c278b2c6b50e9dbdd9d804ebc9d4
Starting it and stopping it for the various tests that actually use it.
There are only a few.
This also removes some dead code and fixes warnings in _test_webhook_request_context.py_.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5846
GitOrigin-RevId: 7760467f9de7b1f9718e7482275c298eeaa3ad3a
The intent is to generalize `columnParser` to the point where it is the same across all backends, and then remove the interface in favor of a single implementation.
This extracts out `enumParser` and `possiblyNullable` as the two main areas that differ across backends. We may split `possiblyNullable` further so that we can extract some of that logic out into a common function too.
With these changes, the various `columnParser` implementations become semantically equivalent. They still do different things, and so reconciling them will require further changes.
Co-Authored-By: Antoine Leblanc <antoine@hasura.io>
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5841
GitOrigin-RevId: eec1770931eed5d72da70c97d7d0f00e33fa15d2
### Description
This PR attempts to fix several issues with source customization as it relates to remote relationships. There were several issues regarding casing: at the relationship border, we didn't properly set the target source's case, we didn't have access to the list of supported features to decide whether the feature was allowed or not, and we didn't have access to the global default.
However, all of that information is available when we build the schema cache, as we do resolve the case of some elements such as function names: we can therefore resolve source information at the same time, and simplify both the root of the schema and the remote relationship border.
To do this, this PR introduces a new type, `ResolvedSourceCustomization`, to be used in the Schema Cache, as opposed to the metadata's `SourceCustomization`, following a pattern established by a lot of other types.
### Remaining work and open questions
One major point of confusion: it seems to me that we didn't set the case at all across remote relationships, which would suggest we would use the case of the LHS source across the subset of the RHS one that is accessible through the remote relationship, which would in turn "corrupt" the parser cache and might result in the wrong case being used for that source later on. Is that assessment correct, and was I right to fix it?
Another one is that we seem not to be using the local case of the RHS to name the field in an object relationship; unless I'm mistaken we only use it for array relationships? Is that intentional?
This PR is also missing tests that would showcase the difference, and a changelog entry. To my knowledge, all the tests of this feature are in the Python test suite; this could be the opportunity to move them to the hspec suite, but this might be a considerable amount of work?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5619
GitOrigin-RevId: 51a81b713a74575e82d9f96b51633f158ce3a47b
This allows a developer, through Docker, to run the Python integration tests in pretty much exactly the same way as CI does, allowing us to more readily diagnose issues locally.
I'm hoping this is temporary and we won't need it for too long, but I have found it invaluable over the last few days so I would like to share it.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5818
GitOrigin-RevId: 18876fbbcbe7c5492afdf54d96af45ab2c519b77
This abstracts `CircularT`'s test cases to work against "any" memoizer, and then runs them against `MemoizeT` as well.
Surprisingly (or not), this works without issue; `MemoizeT` passes all tests with a couple of extra instances.
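A sketch of the shape of that abstraction (the class here is hypothetical; the real interface may differ):
```haskell
-- test cases are written against a class rather than a concrete transformer,
-- so the same suite can be instantiated at both CircularT and MemoizeT
class Monad m => MonadMemoizer m where
  memoizeOn :: Ord k => k -> m v -> m v
```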
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5780
GitOrigin-RevId: 461880caf9220dc3f52d622a22e8b8bcd594e404
Where possible, we start the services on random ports, to avoid
port conflicts when parallelizing tests in the future.
When this isn't possible, we explicitly state the port, and wait for the
service to start. This is typically because the GraphQL Engine has already
started with knowledge of the relevant service passed in through an
environment variable.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5542
GitOrigin-RevId: b51a095b8710e3ff20d1edb13aa576c5272a5565
### Description
This PR changes all the schema code to operate in a specific `SchemaT` monad, rather than in an arbitrary `m` monad. `SchemaT` is intended to be used opaquely with `runSourceSchema` and `runRemoteSchema`. The main goal of this is to allow a different reader context per part of the schema: this PR also minimizes the contexts. This means that we no longer require `SchemaOptions` when building remote schemas' schema, and this PR therefore removes a lot of dummy / placeholder values accordingly.
### Performance and stacking
This PR has been through several iterations. #5339 was the original version, that accomplished the same thing by stacking readers on top of the stack at every remote relationship boundary. This raised performance concerns, and @0x777 confirmed with an ad-hoc test that in some extreme cases we could see up to a 10% performance impact. This version, while more verbose, allows us to unstack / re-stack the readers, and avoid that problem. #5517 adds a new benchmark set to be able to automatically measure this on every PR.
### Remaining work
- [x] a comment (or perhaps even a Note?) should be added to `SchemaT`
- [x] we probably want for #5517 to be merged first so that we can confirm the lack of performance penalty
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5458
GitOrigin-RevId: e06b83d90da475f745b838f1fd8f8b4d9d3f4b10
This removes string interpolation from quasiquoted literals. We only use
this in one place and it's totally unnecessary.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5750
GitOrigin-RevId: 3493a11db6347332e7e3721a7dca616947505be6
This includes TH.Lift instances.
I am motivated to make this change because `unordered-containers` is set to either v0.2.17.0 or v0.2.19.1 in nixpkgs-unstable.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5620
GitOrigin-RevId: 7fd3024fdbf6a948adbdf5f4187d47d5da9acbda
This PR expands the OpenAPI specification generated for metadata to include separate definitions for `SourceMetadata` for each native database type, and for DataConnector.
For the most part the changes add `HasCodec` implementations, and don't modify existing code otherwise.
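For background, a minimal sketch of what an autodocodec `HasCodec` instance looks like (an invented example type, not the real `SourceMetadata`):
```haskell
{-# LANGUAGE OverloadedStrings #-}

import Autodocodec
import Data.Text (Text)

data ExampleSource = ExampleSource
  { sourceName :: Text,
    sourceKind :: Text
  }

instance HasCodec ExampleSource where
  codec =
    object "ExampleSource" $
      ExampleSource
        <$> requiredField "name" "name of the source" .= sourceName
        <*> requiredField "kind" "backend kind" .= sourceKind
```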
The generated OpenAPI spec can be used to generate TypeScript definitions that distinguish different source metadata types based on the value of the `kind` property. There is a problem: because the specified `kind` value for a data connector source is any string, when TypeScript gets a source with a `kind` value of, say, `"postgres"`, it cannot unambiguously determine whether the source is postgres, or a data connector. For example,
```ts
function consumeSourceMetadata(source: SourceMetadata) {
  if (source.kind === "postgres" || source.kind === "pg") {
    // At this point TypeScript infers that `source` is either an instance
    // of `PostgresSourceMetadata`, or `DataconnectorSourceMetadata`. It
    // can't narrow further.
    source
  }
  if (source.kind === "something else") {
    // TypeScript infers that this `source` must be an instance of
    // `DataconnectorSourceMetadata` because `source.kind` does not match
    // any of the other options.
    source
  }
}
```
The simplest way I can think of to fix this would be to add a boolean property to the `SourceMetadata` type along the lines of `isNative` or `isDataConnector`. This could be a field that only exists in serialized data, like the metadata version field. The combination of one of the native database names for `kind`, and a true value for `isNative` would be enough for TypeScript to unambiguously distinguish the source kinds.
But note that in the current state TypeScript is able to reference the short `"pg"` name correctly!
~~Tests are not passing yet due to some discrepancies in DTO serialization vs existing Metadata serialization. I'm working on that.~~
The placeholders that I used for table and function metadata are not compatible with the ordered JSON serialization in use. I think the best solution is to write compatible codecs for those types in another PR. For now I have disabled some DTO tests for this PR.
Here are the generated [OpenAPI spec](https://github.com/hasura/graphql-engine-mono/files/9397333/openapi.tar.gz) based on these changes, and the generated [TypeScript client code](https://github.com/hasura/graphql-engine-mono/files/9397339/client-typescript.tar.gz) based on that spec.
Ticket: [MM-66](https://hasurahq.atlassian.net/browse/MM-66)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5582
GitOrigin-RevId: e1446191c6c832879db04f129daa397a3be03f62
### Description
This PR adds a new benchmark set named `deep_schema`, made to replicate one very specific edge case: schemas that have deeply nested remote relationships. Our schema-building code is, in essence, "depth-first", and there are a lot of subtleties in the way we jump across remote relationship boundaries: this set will allow us to better understand the performance implications of technical decisions we make wrt. schema building.
This set, unlike others, does not declare any query: we are, for now, only interested in the schema building, which is tested with an ad-hoc script.
## Remaining work
There are several points worth discussing, wrt. this PR:
- should we make the schema larger, to make measures more consistent?
- should we extend this idea of measuring schema build performance to other sets?
- how do we extend the report to include this new information?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5517
GitOrigin-RevId: 9d8f4fddb9bbdca5ef85f3d22337b992acf13bce
This does not yet enable Aggregation Predicates for users, but enables building the execution backend and tests of the schema.
This is a prerequisite for:
* #5174
* #5261
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5607
GitOrigin-RevId: e07beb01949724545131629c111d41a7ec4636f2