The intent is to generalize `columnParser` to the point where it is the same across all backends, and then remove the interface in favor of a single implementation.
This extracts out `enumParser` and `possiblyNullable` as the two main areas that differ across backends. We may split `possiblyNullable` further so that we can extract some of that logic out into a common function too.
With these changes, the various `columnParser` implementations become structurally very similar. They still do different things in places, and so reconciling them into a single implementation will require further changes.
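In spirit, the factoring looks like this (a toy model with simplified types, not the engine's real parser API):

```haskell
data Nullability = Nullable | NonNullable

class Backend b where
  -- backend-specific: how enum values are parsed
  enumParser :: b -> String -> Maybe Int
  -- backend-specific: how nullability wraps an existing parser
  possiblyNullable :: b -> Nullability -> (String -> Maybe Int) -> String -> Maybe (Maybe Int)

-- shared across all backends once the two pieces above are extracted
columnParser :: Backend b => b -> Nullability -> String -> Maybe (Maybe Int)
columnParser backend nullability = possiblyNullable backend nullability (enumParser backend)
```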
Co-Authored-By: Antoine Leblanc <antoine@hasura.io>
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5841
GitOrigin-RevId: eec1770931eed5d72da70c97d7d0f00e33fa15d2
### Description
This PR attempts to fix several issues with source customization as it relates to remote relationships. There were several issues regarding casing: at the relationship border, we didn't properly set the target source's case; we didn't have access to the list of supported features to decide whether the feature was allowed; and we didn't have access to the global default.
However, all of that information is available when we build the schema cache, as we do resolve the case of some elements such as function names: we can therefore resolve source information at the same time, and simplify both the root of the schema and the remote relationship border.
To do this, this PR introduces a new type, `ResolvedSourceCustomization`, to be used in the Schema Cache, as opposed to the metadata's `SourceCustomization`, following a pattern established by a lot of other types.
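Concretely, the pattern looks roughly as follows; this is a minimal sketch, and the field names and contents are assumptions (the real types carry much more, such as type-name and root-field customizations):

```haskell
import Data.Maybe (fromMaybe)

data NamingCase = HasuraCase | GraphqlCase

-- what the user wrote in the metadata: options may be absent
newtype SourceCustomization = SourceCustomization
  { _scNamingConvention :: Maybe NamingCase
  }

-- what the schema cache stores: everything resolved up front
newtype ResolvedSourceCustomization = ResolvedSourceCustomization
  { _rscNamingConvention :: NamingCase
  }

-- resolution happens once, at schema-cache build time, with the global default in hand
resolveCustomization :: NamingCase -> SourceCustomization -> ResolvedSourceCustomization
resolveCustomization globalDefault sc =
  ResolvedSourceCustomization (fromMaybe globalDefault (_scNamingConvention sc))
```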
### Remaining work and open questions
One major point of confusion: it seems to me that we didn't set the case at all across remote relationships. This would suggest that the case of the LHS source was used for the subset of the RHS source reachable through the remote relationship, which would in turn "corrupt" the parser cache and might result in the wrong case being used for that source later on. Is that assessment correct, and was I right to fix it?
Another one is that we seem not to be using the local case of the RHS to name the field in an object relationship; unless I'm mistaken, we only use it for array relationships? Is that intentional?
This PR is also missing tests that would showcase the difference, and a changelog entry. To my knowledge, all the tests of this feature are in the Python test suite; this could be the opportunity to move them to the hspec suite, but that might be a considerable amount of work?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5619
GitOrigin-RevId: 51a81b713a74575e82d9f96b51633f158ce3a47b
This allows a developer, through Docker, to run the Python integration tests in pretty much exactly the same way as CI does, allowing us to more readily diagnose issues locally.
I'm hoping this is temporary and we won't need it for too long, but I have found it invaluable over the last few days so I would like to share it.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5818
GitOrigin-RevId: 18876fbbcbe7c5492afdf54d96af45ab2c519b77
This abstracts `CircularT`'s test cases to work against "any" memoizer, and then runs them against `MemoizeT` as well.
Surprisingly (or not), this works without issue; `MemoizeT` passes all tests with a couple of extra instances.
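In spirit, the abstraction looks like the following heavily simplified sketch: the cases are written once against a runner for "any" implementation, then instantiated per memoizer (the real suite's cases exercise `memoizeOn` semantics rather than plain knot-tying, and the runners here are stand-ins):

```haskell
{-# LANGUAGE RankNTypes #-}
import Test.Hspec

-- the cases are parameterized by a runner, not a concrete transformer
memoizerSpec :: String -> (forall a. (a -> a) -> a) -> Spec
memoizerSpec name runKnot = describe name $
  it "can tie a simple knot" $
    take 3 (runKnot (1 :)) `shouldBe` [1, 1, 1 :: Int]

main :: IO ()
main = hspec $ do
  memoizerSpec "CircularT" tie -- stand-ins for the real runners
  memoizerSpec "MemoizeT" tie
  where
    tie :: (a -> a) -> a
    tie f = let x = f x in x
```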
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5780
GitOrigin-RevId: 461880caf9220dc3f52d622a22e8b8bcd594e404
Where possible, we start the services on random ports, to avoid
port conflicts when parallelizing tests in the future.
When this isn't possible, we explicitly state the port, and wait for the
service to start. This is typically because the GraphQL Engine has already
started with knowledge of the relevant service passed in through an
environment variable.
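For the cases where we can pick freely, the approach looks roughly like this with warp (the test suite's actual helpers may differ):

```haskell
import Control.Concurrent (forkIO)
import Network.Wai (Application)
import qualified Network.Wai.Handler.Warp as Warp

-- ask the OS for a free port, start the service on it, and hand the
-- chosen port back to whatever needs to know it
startOnRandomPort :: Application -> IO Warp.Port
startOnRandomPort app = do
  (port, socket) <- Warp.openFreePort -- binds port 0, letting the OS pick
  _ <- forkIO (Warp.runSettingsSocket Warp.defaultSettings socket app)
  pure port
```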
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5542
GitOrigin-RevId: b51a095b8710e3ff20d1edb13aa576c5272a5565
### Description
This PR changes all the schema code to operate in a specific `SchemaT` monad, rather than in an arbitrary `m` monad. `SchemaT` is intended to be used opaquely, via `runSourceSchema` and `runRemoteSchema`. The main goal of this is to allow a different reader context per part of the schema; this PR also minimizes each context. As a result, we no longer require `SchemaOptions` when building remote schemas' schema, and this PR removes a lot of dummy / placeholder values accordingly.
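The intended shape is roughly as follows (an illustrative sketch with stand-in types, not the engine's real signatures):

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.Trans.Reader (ReaderT, runReaderT)

data SourceContext = SourceContext { srcOptions :: String } -- e.g. SchemaOptions live here
data RemoteContext = RemoteContext                          -- no SchemaOptions needed here

-- opaque to callers: only the two runners below can eliminate it
newtype SchemaT r m a = SchemaT (ReaderT r m a)
  deriving (Functor, Applicative, Monad)

runSourceSchema :: SourceContext -> SchemaT SourceContext m a -> m a
runSourceSchema ctx (SchemaT m) = runReaderT m ctx

runRemoteSchema :: RemoteContext -> SchemaT RemoteContext m a -> m a
runRemoteSchema ctx (SchemaT m) = runReaderT m ctx
```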
### Performance and stacking
This PR has been through several iterations. #5339 was the original version, which accomplished the same thing by stacking an additional reader on top of the monad stack at every remote relationship boundary. This raised performance concerns, and @0x777 confirmed with an ad-hoc test that in some extreme cases we could see up to a 10% performance impact. This version, while more verbose, allows us to unstack / re-stack the readers, and avoids that problem. #5517 adds a new benchmark set to be able to automatically measure this on every PR.
### Remaining work
- [x] a comment (or perhaps even a Note?) should be added to `SchemaT`
- [x] we probably want for #5517 to be merged first so that we can confirm the lack of performance penalty
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5458
GitOrigin-RevId: e06b83d90da475f745b838f1fd8f8b4d9d3f4b10
This removes string interpolation from quasiquoted literals. We only use
this in one place and it's totally unnecessary.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5750
GitOrigin-RevId: 3493a11db6347332e7e3721a7dca616947505be6
This includes TH.Lift instances.
I am motivated to make this change because `unordered-containers` is set to either v0.2.17.0 or v0.2.19.1 in nixpkgs-unstable.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5620
GitOrigin-RevId: 7fd3024fdbf6a948adbdf5f4187d47d5da9acbda
This PR expands the OpenAPI specification generated for metadata to include separate definitions for `SourceMetadata` for each native database type, and for DataConnector.
For the most part the changes add `HasCodec` implementations, and don't modify existing code otherwise.
The generated OpenAPI spec can be used to generate TypeScript definitions that distinguish different source metadata types based on the value of the `kind` property. There is a problem: because the specified `kind` value for a data connector source is any string, when TypeScript gets a source with a `kind` value of, say, `"postgres"`, it cannot unambiguously determine whether the source is Postgres or a data connector. For example,
```ts
function consumeSourceMetadata(source: SourceMetadata) {
  if (source.kind === "postgres" || source.kind === "pg") {
    // At this point TypeScript infers that `source` is either an instance
    // of `PostgresSourceMetadata`, or `DataconnectorSourceMetadata`. It
    // can't narrow further.
    source
  }
  if (source.kind === "something else") {
    // TypeScript infers that this `source` must be an instance of
    // `DataconnectorSourceMetadata` because `source.kind` does not match
    // any of the other options.
    source
  }
}
```
The simplest way I can think of to fix this would be to add a boolean property to the `SourceMetadata` type along the lines of `isNative` or `isDataConnector`. This could be a field that only exists in serialized data, like the metadata version field. The combination of one of the native database names for `kind` and a true value for `isNative` would be enough for TypeScript to unambiguously distinguish the source kinds.
But note that in the current state TypeScript is able to reference the short `"pg"` name correctly!
~~Tests are not passing yet due to some discrepancies in DTO serialization vs existing Metadata serialization. I'm working on that.~~
The placeholders that I used for table and function metadata are not compatible with the ordered JSON serialization in use. I think the best solution is to write compatible codecs for those types in another PR. For now I have disabled some DTO tests for this PR.
Here are the generated [OpenAPI spec](https://github.com/hasura/graphql-engine-mono/files/9397333/openapi.tar.gz) based on these changes, and the generated [TypeScript client code](https://github.com/hasura/graphql-engine-mono/files/9397339/client-typescript.tar.gz) based on that spec.
Ticket: [MM-66](https://hasurahq.atlassian.net/browse/MM-66)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5582
GitOrigin-RevId: e1446191c6c832879db04f129daa397a3be03f62
### Description
This PR adds a new benchmark set named `deep_schema`, made to replicate one very specific edge case: schemas that have deeply nested remote relationships. Our schema-building code is, in essence, "depth-first", and there are a lot of subtleties in the way we jump across remote relationship boundaries: this set will allow us to better understand the performance implications of technical decisions we make wrt. schema building.
This set, unlike others, does not declare any query: we are, for now, only interested in the schema building, which is tested with an ad-hoc script.
### Remaining work
There are several points worth discussing wrt. this PR:
- should we make the schema larger, to make measures more consistent?
- should we extend this idea of measuring schema build performance to other sets?
- how do we extend the report to include this new information?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5517
GitOrigin-RevId: 9d8f4fddb9bbdca5ef85f3d22337b992acf13bce
This does not yet enable Aggregation Predicates to users, but enables building the execution backend and tests of the schema.
This is a prerequisite for:
* #5174
* #5261
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5607
GitOrigin-RevId: e07beb01949724545131629c111d41a7ec4636f2
We plan on creating the source database dynamically, in the test setup.
This means that (a) we cannot assume that the metadata database and the
source database are the same, and (b) we need to drop and re-add the
source in code, not in YAML.
This changeset prepares the code for the introduction of a separate
source database, but doesn't go there yet. The separation is already
done but is too big to review in one go, so I have split this out.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5508
GitOrigin-RevId: b497a83ac4a100371762c2515c87ee3760d8d4ab
This splits two naming convention tests into four classes (and four YAML
files), which might seem overkill, but allows us to provision sources
declaratively in the future. As each class will require a custom source
configuration, we are able to annotate them accordingly, which means the
test cases are decoupled from the source database URL, letting us
generate a new database for each test case and automatically add it as a
source to HGE.
The future changes are already prepared, but this has been extracted out
as it splits the YAML files, which is a large change best reviewed in
isolation.
The test case `test_type_and_field_names` has been split into:
* `TestNamingConventionsTypeAndFieldNamesGraphqlDefault`
* `TestNamingConventionsTypeAndFieldNamesHasuraDefault`
The test case `test_type_and_field_names_with_prefix_and_suffix` has
been split into:
* `TestNamingConventionsTypeAndFieldNamesGraphqlDefaultWithPrefixAndSuffix`
* `TestNamingConventionsTypeAndFieldNamesHasuraDefaultWithPrefixAndSuffix`
The YAML files have been split in the same way. This was fairly trivial
as each test case would add a source, run some tests with
the `graphql_default` naming convention, drop the source, and then
repeat for the `hasura_default` naming convention. I simply split the
file in two. There is a little bit of duplication for provisioning the
various database tables, which I think is worth it.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5496
GitOrigin-RevId: 94825e755c427a5414230f69985b534991b3aad6
This means that if `remote_schemas/nodejs/package.json` changes, the
dependencies will be automatically reinstalled.
It also moves `package-lock.json` to the correct location (in the
directory in which we run `npm install`), and updates it.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5481
GitOrigin-RevId: f3fb431afd19de150f39ec2e4cb6572b896c870f
Making it easier to inject different ones later.
I also included a change to _.prettierignore_ so Visual Studio Code doesn't keep trying to reformat the JavaScript or YAML files in `server/tests-py`, as it can cause diffs to balloon for no obvious benefit.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5456
GitOrigin-RevId: bc6d548708160a328e1e61a00e19be8e124da025
Let's put it in one place.
This is a precursor to moving database provisioning into the Python
integration tests.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5453
GitOrigin-RevId: 5920b0b1177d94496485fcb4e178b946534ee4eb
This makes it easier to refactor `BackendSchema`, because if the type of a type class method is changed, it's easier to update the corresponding dummy implementations.
Partially addresses hasura/graphql-engine-mono#2971, in the sense that this aids refactors.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5443
GitOrigin-RevId: 65e169d01415a04e7c419a628cf32e743448543d
The module `Hasura.SQL.AnyBackend` was introduced (in #751) to centralize the logic for case-switching behavior that depends on the particular flavor of relational DB backend (Postgres vs MSSQL vs BigQuery vs MySQL vs DataConnectors). This allows us to write a bunch of code in a backend-agnostic way, even if runtime behavior does depend on the chosen backend. At the same time, it allows us to write backend-specific code without having to care (too much) about the existence of other backends.
In #851 this module was rewritten to use Template Haskell.
I've heard that one of the reasons for the use of TH was that this would make it easier to keep backends out of the compilation product entirely. This would allow customers, especially on OSS, to benefit from simpler software licensing.
However:
1. This conditional compilation never materialized.
2. It's not clear whether writing this particular module based on TH would be sufficient for conditional compilation. And in any case, it can be done using CPP pragmas as well.
3. The TH code is extraordinarily complex. Since its introduction, it has been documented extraordinarily well, but it's still very difficult to maintain and/or refactor, due to its non-idiomatic nature.
4. Hasura's company objectives are now Cloud-oriented, so that software licensing issues work differently, and in particular, do not depend on what's part of the compilation product.
So this PR reverts #851 by spelling out the code generated by TH. The diff is net-negative: the code we used to generate is smaller than the code that was doing the generating. This makes the code readable and maintainable.
The generated code has been modified in one way, which I'll now describe.
In the scenario that support for a new backend is introduced, a constructor is added to the `BackendType` type. This would then cause `liftTag` to be partial, thus raising a compiler warning. Resolving this requires adding corresponding constructors to the `BackendTag` and `AnyBackend` types. This would then require amending **almost** all other methods.
The exceptions are `composeAnyBackend` and `unpackAnyBackend`. These methods test whether two values are compatible, i.e. belong to the same backend. Both have a default case that in one way or another ignores the input values. Using TH here ensured that all values that belong together were matched. But after spelling out the TH, the presence of the default case means that the compiler cannot warn about a missing match on two values of the same backend. So in the default case, we now do an explicit equality check: if the two values _do_ belong to the same backend, that means a `case` is missing above, and we report this as an `error` (which is very crude, but deliberately so).
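As a toy rendering of that default case (the real code matches on existential `AnyBackend` values, and the constructor list is abridged):

```haskell
data BackendTag = PostgresTag | MSSQLTag | BigQueryTag
  deriving (Eq, Show)

composeAnyBackend ::
  (Int -> Int -> r) -> (BackendTag, Int) -> (BackendTag, Int) -> r -> r
composeAnyBackend f (PostgresTag, x) (PostgresTag, y) _ = f x y
composeAnyBackend f (MSSQLTag, x) (MSSQLTag, y) _ = f x y
composeAnyBackend f (BigQueryTag, x) (BigQueryTag, y) _ = f x y
composeAnyBackend _ (tagX, _) (tagY, _) fallback
  -- if the tags are equal here, a case is missing above: fail loudly
  | tagX == tagY = error "composeAnyBackend: missing case; please report this bug"
  | otherwise = fallback
```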
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5333
GitOrigin-RevId: 5aaf0a93394bd740aa7371526d3175c8142b3541
### Description
This PR moves some strictness annotations to a concrete use site, rather than hiding `seq` in a helper function.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5436
GitOrigin-RevId: 1f279e05333ab80167ad2e18d09b8792eddc52c3
### Description
This PR is a first step towards having a dedicated reader context per schema block. It adds the required Reader instance, and switches from a `SchemaT ReaderT` stack to a `ReaderT SchemaT` stack. Furthermore, it cleans up / harmonizes some of the top-level schema building functions.
Sources and remotes are now built each within their own run of `runReaderT`: for now, the reader context is the same in both cases, meaning no special care is required at the boundary of remote relationships.
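The change in nesting, with stand-in types for illustration (the real types differ):

```haskell
import Control.Monad.Trans.Reader (ReaderT)

data SchemaContext = SchemaContext
newtype SchemaT m a = SchemaT (m a)

-- before this PR: the reader sat inside the schema monad
type Before m = SchemaT (ReaderT SchemaContext m)

-- after this PR: the reader is outermost, so it can later be re-run
-- with a different context (e.g. at a remote relationship boundary)
-- without touching SchemaT
type After m = ReaderT SchemaContext (SchemaT m)
```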
Actions are explicitly run with the source context for now; we could envision creating a third and distinct context for them.
This PR is expected to be a no-op.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5300
GitOrigin-RevId: a014e5b3504eb4ef740c820d305d6d2695f622f7
It's about time.
To do this I had to check a few more boxes.
* I copied the flags from `graphql-engine.cabal` to the libraries in `server/lib`.
* I moved `Cacheable` instances of schema parser types beside the typeclass declaration.
* I removed imports of `Hasura.Prelude` from the tests, and rewrote them accordingly.
* I copied the `TestMonad` parse monad into `server/src-test/Hasura/GraphQL/Schema/RemoteTest.hs`, which was using it. I think this could be done with the real thing, but I tried replacing it with constraints and it messed with my head somewhat.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5311
GitOrigin-RevId: ebebcc50a16f2d517b7f730fe72410827ca3e86c
## Description ✍️
This PR fixes the config status update: the `Service configured successfully` message used to be written before the server was actually spawned. Now the status is updated only when the server has spawned successfully. To be specific, this change posts the status closer to where we log `starting API server`.
### Related Issues ✍
#2751
### Solution and Design ✍
We update the status inside the `runHGEServer` function, which ensures the message is added only once the server has started. If any exception is thrown before the server is spawned, that exception's message is written to the `config_status` table instead of the `Service configured successfully` message.
## Affected components ✍️
- ✅ Server
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5179
Co-authored-by: Naveen Naidu <30195193+Naveenaidu@users.noreply.github.com>
Co-authored-by: Anon Ray <616387+ecthiender@users.noreply.github.com>
GitOrigin-RevId: 7860008403aa0645583e26915f620b66a5bbc531
Followup to hasura/graphql-engine-mono#4713.
The `memoizeOn` method, part of `MonadSchema`, originally had the following type:
```haskell
memoizeOn
  :: (HasCallStack, Ord a, Typeable a, Typeable b, Typeable k)
  => TH.Name
  -> a
  -> m (Parser k n b)
  -> m (Parser k n b)
```
The reason for operating on `Parser`s specifically was that the `MonadSchema` effect would additionally initialize certain `Unique` values, which appear (nested) in the type of `Parser`.
hasura/graphql-engine-mono#518 changed the type of `memoizeOn`, to additionally allow memoizing `FieldParser`s. These also contained a `Unique` value, which was similarly initialized by the `MonadSchema` effect. The new type of `memoizeOn` was as follows:
```haskell
memoizeOn
  :: forall p d a b
   . (HasCallStack, HasDefinition (p n b) d, Ord a, Typeable p, Typeable a, Typeable b)
  => TH.Name
  -> a
  -> m (p n b)
  -> m (p n b)
```
Note the type `p n b` of the value being memoized: by choosing `p` to be either `Parser k` or `FieldParser`, both can be memoized. Also note the new `HasDefinition (p n b) d` constraint, which provided a `Lens` for accessing the `Unique` value to be initialized.
A quick simplification is that the `HasCallStack` constraint has never been used by any code. This was realized in hasura/graphql-engine-mono#4713, by removing that constraint.
hasura/graphql-engine-mono#2980 removed the `Unique` value from our GraphQL-related types entirely, as their original purpose was never truly realized. One part of removing `Unique` consisted of dropping the `HasDefinition (p n b) d` constraint from `memoizeOn`.
What I didn't realize at the time was that this meant that the type of `memoizeOn` could be generalized and simplified much further. This PR finally implements that generalization. The new type is as follows:
```haskell
memoizeOn ::
  forall a p.
  (Ord a, Typeable a, Typeable p) =>
  TH.Name ->
  a ->
  m p ->
  m p
```
This change has a couple of consequences.
1. While constructing the schema, we often output `Maybe (Parser ...)`, to model that the existence of certain pieces of GraphQL schema sometimes depends on the permissions that a certain role has. The previous versions of `memoizeOn` were not able to handle this, as the only thing they could memoize was fully-defined (if not yet fully-evaluated) `(Field)Parser`s. This much more general API _would_ allow memoizing `Maybe (Parser ...)`s. However, we probably have to continue being cautious with this: if we blindly memoize all `Maybe (Parser ...)`s, the resulting code may never be able to decide whether the value is `Just` or `Nothing` - i.e. it never commits to the existence-or-not of a GraphQL schema fragment. This would manifest as non-well-founded knot-tying, and would get reported as an error by the implementation of `memoizeOn`.
tl;dr: This generalization _technically_ allows for memoizing `Maybe` values, but we probably still want to avoid doing so.
For this reason, the PR adds a specialized version of `memoizeOn` to `Hasura.GraphQL.Schema.Parser`.
2. There is no longer any need to connect the `MonadSchema` knot-tying effect with the `MonadParse` effect. In fact, after this PR, the `memoizeOn` method is completely GraphQL-agnostic, and so we implement hasura/graphql-engine-mono#4726, separating `memoizeOn` from `MonadParse` entirely - `memoizeOn` can be defined and implemented as a general Haskell typeclass method.
Since `MonadSchema` has been made into a single-type-parameter type class, it has been renamed to something more general, namely `MonadMemoize`. Its only task is to memoize arbitrary `Typeable p` objects under a combined key consisting of a `TH.Name` and a `Typeable a`.
Also for this reason, the new `MonadMemoize` has been moved to the more general `Control.Monad.Memoize`.
3. After this change, it's somewhat clearer what `memoizeOn` does: it memoizes an arbitrary value of a `Typeable` type. The only thing that needs to be understood in its implementation is how the manual blackholing works. There is no more semantic interaction with _any_ GraphQL code.
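Putting the pieces together, the resulting class looks roughly like this (the exact superclass is an assumption):

```haskell
import Data.Typeable (Typeable)
import qualified Language.Haskell.TH as TH

-- lives in Control.Monad.Memoize after this PR; completely GraphQL-agnostic
class Monad m => MonadMemoize m where
  memoizeOn :: (Ord a, Typeable a, Typeable p) => TH.Name -> a -> m p -> m p
```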
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4725
Co-authored-by: Daniel Harvey <4729125+danieljharvey@users.noreply.github.com>
GitOrigin-RevId: 089fa2e82c2ce29da76850e994eabb1e261f9c92
I'm trying to shore up the Python integration tests to make them more reliable. In doing so, I noticed this.
---
Rather than hard-coding hostnames and ports, we can (and already do) inject these into the HGE process using environment variables.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5255
GitOrigin-RevId: 6bb593999ece42cedef6619f31f9d9b2e39f30ef
The documentation on how adding a backend works was a bit out of date. This updates it to link to files from the latest commit to `main`, at the time of writing.
Also runs the README through the `prettier` autoformatter.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5301
GitOrigin-RevId: df54f95d85156e9f95a4a7788eed93c6359cc81c
### Description
By definition, root fields are at the root of the schema: only functions that craft root fields need to know about how to customize the name of root fields. However, the presence of `Has MkRootFieldName` in `MonadBuildSchemaBase` meant that the entirety of the schema building code was implicitly aware of / capable of altering root field names.
This PR removes this constraint, and moves it to the functions that do craft root fields. This has several upsides:
- it makes it more explicit where root fields are being crafted
- it prevents functions that should not use this from mistakenly applying it to non-root fields
- it simplifies the shared schema context
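As a toy illustration with simplified types (the real `MkRootFieldName` operates on GraphQL name types): the customization becomes an explicit argument to root-field builders, rather than something any schema function can pull out of the shared context.

```haskell
newtype MkRootFieldName = MkRootFieldName (String -> String)

mkSelectRootField :: MkRootFieldName -> String -> String
mkSelectRootField (MkRootFieldName customize) tableName =
  customize (tableName <> "_select") -- only root-field names pass through it
```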
### Future work
- can we maybe pass this as an argument, instead of making it a required part of the context?
- ~~AFAICT, we only ever use `mempty` for it: is this actually dead code that we should actually just remove altogether?~~
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5235
GitOrigin-RevId: 4268751f3ab87ae8e03b6fe9e1efa1b096200027
For some reason these functions exist in `Backends.Postgres.SQL.Value`.
We don't want to depend on that module here.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5292
GitOrigin-RevId: a09bd3cdb0caf08938bce0728a8d281344c1d4ce
I'm trying to shore up the Python integration tests to make them more reliable. In doing so, I noticed this.
---
It feels a lot more sensible as we never run on more than one backend at a time.
This also removes the `check_file_exists` parameter from the setup functions; it never worked. It was always set to the result of a comparison between a backend name and a function, which was always `False`. Enabling it breaks things.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5254
GitOrigin-RevId: 8718ab21527c2ba0a7205d1c01ebaac1a10be844
Docker Compose is now a plugin for Docker, bundled by default in Docker Desktop and many Linux distribution packages. The standalone `docker-compose` binary has been deprecated since Docker Compose v2.
Using the new version directly allows us to write development scripts that do not require `docker-compose` to be installed.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5185
GitOrigin-RevId: c8542b8b2405d1aa32288991688c6fde4af96383
Moves code from `Hasura.RQL.Types.Metadata` that is specific to serialization into a new module, `Hasura.RQL.Types.Metadata.Serialization`.
I'm breaking up #5184 into smaller PRs. This is the third and final PR in that effort. This PR is stacked on #5210 and #5211.
The tracking issue is https://hasurahq.atlassian.net/browse/MM-35
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5212
GitOrigin-RevId: 6cde6d52173590fafe0969a06f2a3411db4fbc78
Introduces a new function, `metadataToDTO`, that converts a `Metadata` value to a `MetadataV3` DTO value. This is the next step in the alternative serialization path for metadata that comes with a generated OpenAPI specification.
This PR carves up the existing `metadataToOrdJSON` function so that helpers previously embedded in the `where` block of that function can also be used in the implementation of `metadataToDTO`. If I did everything correctly `metadataToOrdJSON` should behave exactly as before.
In a followup PR I will move the extracted helpers to a new submodule, `Hasura.RQL.Types.Metadata.Serialization`, since they add up to several hundred lines of code.
I'm breaking up #5184 into smaller PRs, and this is the second PR in that effort. This PR is stacked on #5210.
The tracking issue is https://hasurahq.atlassian.net/browse/MM-35
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5211
GitOrigin-RevId: 2596ed5312d7b1232c47ae1d08a51d8ead11fcb8
A following PR moves serialization-related code out of `Hasura.RQL.Types.Metadata` into a specialized submodule. To avoid circular dependencies, a number of other definitions also need to be moved into their own submodule. This PR does that extra moving first so that we can keep each PR as small and as easy to review as possible.
There are a lot of changed lines, but it's all moving code from one module to another.
I'm breaking up #5184 into smaller PRs, and this is the first PR in that effort.
The tracking issue is https://hasurahq.atlassian.net/browse/MM-35
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5210
GitOrigin-RevId: 6fb6e29a967ab5ad4724006c8e0addd2d63a3946
We currently use `hpack` to generate the Cabal files from _package.yaml_
files for the two small libraries in _server/lib_. While this is more
convenient, we also check the Cabal files into the repository to avoid
needing an extra step upon pulling changes.
In order to ensure that the Cabal files do not get out of sync with the
hpack files, this introduces a few improvements:
1. Makefile targets to automatically generate the Cabal files without
needing to know the correct incantation. These targets are a
dependency of all build targets, so you can simply run
`make build-all` and it will work.
2. An extra comment at the top of all generated Cabal files that
explains how to regenerate it.
3. A `lint-hpack` Makefile target that verifies that the Cabal files
are up-to-date.
4. A CI job that runs `make lint-hpack`, to stop inconsistencies
getting merged into trunk.
Most of these changes are ported from #4794.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5217
GitOrigin-RevId: d3dfbe19ec00528368d357b6d0215a7ba4062f68
### Description
I am not 100% sure about this PR; while I think the code is better this way, I'm willing to be convinced otherwise.
In short, this PR moves the `RoleName` field into the `SchemaContext`, instead of being a nebulous `Has RoleName` constraint on the reader monad. The major upside of this is that it makes it an explicit named field, rather than something that must be given as part of a tuple of arguments when calling `runReader`.
However, the downside is that it breaks the helper permissions functions of `Schema.Table`, which relied on `Has RoleName r`. This PR makes the choice of passing the role name explicitly to all of those functions, which in turn means first explicitly fetching the role name in a lot of places. It makes it more explicit when a schema building block relies on the role name, but is a bit verbose...
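For illustration, the difference amounts to something like this (field names are assumptions):

```haskell
import Control.Monad.Reader (MonadReader, asks)

newtype RoleName = RoleName String

-- the role is a named part of the context, rather than one more
-- anonymous element of a tuple handed to runReader
data SchemaContext = SchemaContext
  { scRole :: RoleName
  , scOptions :: String
  }

retrieveRole :: MonadReader SchemaContext m => m RoleName
retrieveRole = asks scRole
```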
### Alternatives
Some alternatives worth considering:
- attempting something like `Has context r, Has RoleName context`, which would allow them to be independent from the context but still fetch the role name from the reader, but might require type annotations to not be ambiguous
- keeping the permission functions the same, with `Has RoleName r`, and introducing a bunch of newtypes instead of using tuples to explicitly implement all the required `Has` instances
- changing the permission functions to `Has SchemaContext r`, since they are functions used only to build the schema, and therefore may be allowed to be tied to the context.
What do y'all think?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5073
GitOrigin-RevId: 8fd09fafb54905a4d115ef30842d35da0c3db5d2
In the process of decoupling the schema parsers from the GraphQL Engine, we need to remove dependencies on `Hasura.Base.Error`.
First of all, we have avoided using `QErr` in schema parsers code, instead returning a more appropriate data type which can be converted to a `Hasura.Base.Error.QErr` later.
Secondly, we create a new `ParseErrorCode` type to represent parse failure types, which are then converted to a `Hasura.Base.Error.Code` later.
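The boundary conversion then looks something like this sketch (constructor names are assumptions, not the real definitions):

```haskell
data ParseErrorCode = ValidationFailed | ParseFailed | NotSupported

newtype Code = Code String -- stand-in for Hasura.Base.Error.Code

-- the parsers report failures with their own small type; the engine
-- maps it to its error codes at the boundary
toErrorCode :: ParseErrorCode -> Code
toErrorCode ValidationFailed = Code "validation-failed"
toErrorCode ParseFailed = Code "parse-failed"
toErrorCode NotSupported = Code "not-supported"
```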
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5181
GitOrigin-RevId: 8655e26adb1e7d5e3d552c77a8a403f987b53467
Updates to the latest version of autodocodec and uses the new features, in particular `discriminatedUnionCodec`.
This allows us to remove the `ValueWrapper*` types and `sumTypeCodec`. Sum types are now encoded as discriminated unions.
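To illustrate what the new encoding looks like on the wire, here is the same idea written with plain aeson rather than autodocodec's `discriminatedUnionCodec`:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Aeson (ToJSON (..), object, (.=))

data Dimension = Height Double | Width Double

-- each variant carries a discriminator field instead of being wrapped
-- in a ValueWrapper-style object
instance ToJSON Dimension where
  toJSON (Height h) = object ["type" .= ("height" :: String), "value" .= h]
  toJSON (Width w) = object ["type" .= ("width" :: String), "value" .= w]
```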
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5155
GitOrigin-RevId: 20bfdc12b28d35db354c4a149b9175fab0b2b7d2
This is now the sole in-universe dependency of the schema parsers. As
such, we need to extract it as a library before we can extract the
schema parsers as a library.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5202
GitOrigin-RevId: fbe571855768e56dc8b8e259b8efe900de3ecc54
Rather than a homebrewed approach, we can use `make` to figure out when
it's necessary to regenerate our venv.
This Makefile will regenerate _requirements.txt_ from
_requirements-top-level.txt_ when the latter is changed.
It will also regenerate the venv when _requirements.txt_ is changed
(i.e. changes are pulled, or it's regenerated as described above).
`make` uses file/directory timestamps to figure out what to rebuild.
This is probably more reliable than expecting people to update a version
number whenever they change a file.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5152
GitOrigin-RevId: 24b27d49bf6c4ba1d57ac38ea38ae278216c6d66
This helps us use the same versions locally as in CI. If you're using the Nix setup, it guarantees it.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5111
GitOrigin-RevId: 6e00cd7a78593df1e60fac37cc1195aba60e488f
I tried re-freezing _server/tests-py/requirements-top-level.txt_
recently, and discovered that it caused the tests to fail.
This pins a couple of dependencies so that we can safely re-freeze.
Specifically:
- `cryptography` is pinned at v3.*
- `graphene` is pinned at v2.*
- `PyJWT` is pinned at v2.3.*
- `websocket-client` is pinned at v0.56.0 (this was done in
_requirements.txt_ already, but that file is supposed to be
regenerated)
Upgrading `SQLAlchemy` required changing PostgreSQL URLs to use
"postgresql://" as the URL scheme, not "postgres://".
Updating `ruamel.yaml` caused a few tests to fail because we were passing
`ruamel.yaml.scalarstring.LiteralScalarString` values as header values.
This is fixed by explicitly converting header values to strings.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5120
GitOrigin-RevId: 9c12a3013c3d1f23dddbe781037663838b23f6f5
I got a flaky test run recently, in which some Data Connectors tests in
_tests-hspec_ failed. The failure was not very helpful, but the log
output contained this message:
> Network.Socket.bind: resource busy (Address already in use)
I _think_ this was caused by killing the Data Connectors mock agent
thread, but not waiting for the server to stop. `Async.cancel` should
handle this, as it waits for the thread to stop.
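A sketch of the pattern (the helper name is made up; the real test setup differs):

```haskell
import qualified Control.Concurrent.Async as Async

-- run the agent under withAsync, which guarantees the thread is
-- cancelled when the block exits; cancel explicitly where we want to
-- be sure the socket is released before moving on
withMockAgent :: IO () -> IO a -> IO a
withMockAgent serveAgent test =
  Async.withAsync serveAgent $ \agent -> do
    result <- test
    Async.cancel agent -- blocks until the agent thread has actually stopped
    pure result
```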
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5121
GitOrigin-RevId: 3419dce2fc5ff52e3a6f2d452ea44dd85b326452
Makes the init specs shorter by using `shouldBe` instead of `shouldSatisfy` wherever possible. This also makes test failures more expressive.
It also simplifies boolean logic in most places, following HLint warnings. These changes brought to you by `hlint --refactor`, which is basically magic.
I have left some redundancy in the boolean logic for clarity, along with the appropriate HLint suppressions.
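For example (a self-contained sketch, not the actual specs):

```haskell
import Test.Hspec

spec :: Spec
spec = describe "an example of the difference" $ do
  -- if this failed, hspec could only say the predicate returned False
  it "opaque failure message" $
    parsedPort `shouldSatisfy` (== 8080)
  -- if this failed, hspec would print both the expected and actual values
  it "expressive failure message" $
    parsedPort `shouldBe` 8080
  where
    parsedPort :: Int
    parsedPort = 8080 -- stand-in for the value under test
```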
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5087
GitOrigin-RevId: 52bf3626be2615e6a32a0fc0e8be19cca31ee4ad