I'm trying to shore up the Python integration tests to make them more reliable. In doing so, I noticed this.
---
Rather than hard-coding hostnames and ports, we can (and already do) inject these into the HGE process using environment variables.
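A minimal sketch of the idea (not HGE's actual startup code), using the documented `HASURA_GRAPHQL_SERVER_PORT` variable and an illustrative default:

```haskell
module EnvSketch where

import System.Environment (lookupEnv)

-- Read the port from the environment, falling back to a default.
-- The test harness can inject whichever port it has free, rather
-- than both sides hard-coding a value.
serverPort :: IO Int
serverPort = maybe 8080 read <$> lookupEnv "HASURA_GRAPHQL_SERVER_PORT"
```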
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5255
GitOrigin-RevId: 6bb593999ece42cedef6619f31f9d9b2e39f30ef
The documentation describing how adding a backend works was a bit
out of date. Updated it to link to files from the latest commit to
`main` at the time of writing.
Also runs the README through the `prettier` autoformatter.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5301
GitOrigin-RevId: df54f95d85156e9f95a4a7788eed93c6359cc81c
### Description
By definition, root fields are at the root of the schema: only functions that craft root fields need to know how to customize their names. However, the presence of `Has MkRootFieldName` in `MonadBuildSchemaBase` meant that the entirety of the schema-building code was implicitly aware of, and capable of altering, root field names.
This PR removes this constraint and moves it to the functions that do craft root fields (see the sketch after this list). This has several upsides:
- it makes it more explicit where root fields are being crafted
- it prevents functions that should not use this from mistakenly applying it to non-root fields
- it simplifies the shared schema context
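A minimal sketch of the shift, with toy stand-ins for the real `Name` and `MkRootFieldName` types:

```haskell
module RootFieldSketch where

-- Toy stand-ins for the real schema types.
newtype Name = Name String
newtype MkRootFieldName = MkRootFieldName (Name -> Name)

-- Before: this customization was reachable from every schema-building
-- function, via a 'Has MkRootFieldName' constraint in the shared context.
-- After: only the functions that craft root fields receive it, explicitly.
mkSelectRootField :: MkRootFieldName -> Name -> Name
mkSelectRootField (MkRootFieldName customize) defaultName = customize defaultName
```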
### Future work
- can we maybe pass this as an argument, instead of making it a required part of the context?
- ~~AFAICT, we only ever use `mempty` for it: is this actually dead code that we should just remove altogether?~~
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5235
GitOrigin-RevId: 4268751f3ab87ae8e03b6fe9e1efa1b096200027
For some reason these functions exist in `Backends.Postgres.SQL.Value`.
We don't want to depend on that module here.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5292
GitOrigin-RevId: a09bd3cdb0caf08938bce0728a8d281344c1d4ce
I'm trying to shore up the Python integration tests to make them more reliable. In doing so, I noticed this.
---
It feels a lot more sensible, as we never run on more than one backend at a time.
This also removes the `check_file_exists` parameter from the setup functions; it never worked. It was always set to the result of comparing a backend name against a function, a comparison which always evaluated to `False`. Enabling it breaks things.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5254
GitOrigin-RevId: 8718ab21527c2ba0a7205d1c01ebaac1a10be844
Docker Compose is now a plugin for Docker, bundled by default in Docker Desktop and many Linux distribution packages. The standalone `docker-compose` binary has been deprecated since Docker Compose v2.
Invoking the plugin directly (as `docker compose`) allows us to write development scripts that do not require the standalone `docker-compose` binary to be installed.
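For instance, a development helper can shell out to the plugin form directly; a minimal sketch using `System.Process`, with an illustrative set of arguments:

```haskell
module ComposeSketch where

import System.Process (callProcess)

-- Invoke Compose as a Docker plugin subcommand rather than through the
-- deprecated standalone binary:
--   old: callProcess "docker-compose" ["up", "-d"]
startServices :: IO ()
startServices = callProcess "docker" ["compose", "up", "-d"]
```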
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5185
GitOrigin-RevId: c8542b8b2405d1aa32288991688c6fde4af96383
Moves code from `Hasura.RQL.Types.Metadata` that is specific to serialization into a new module, `Hasura.RQL.Types.Metadata.Serialization`.
I'm breaking up #5184 into smaller PRs. This is the third and final PR in that effort. This PR is stacked on #5210 and #5211.
The tracking issue is https://hasurahq.atlassian.net/browse/MM-35
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5212
GitOrigin-RevId: 6cde6d52173590fafe0969a06f2a3411db4fbc78
Introduces a new function, `metadataToDTO`, that converts a `Metadata` value to a `MetadataV3` DTO value. This is the next step in the alternative serialization path for metadata that comes with a generated OpenAPI specification.
This PR carves up the existing `metadataToOrdJSON` function so that helpers previously embedded in the `where` block of that function can also be used in the implementation of `metadataToDTO`. If I did everything correctly, `metadataToOrdJSON` should behave exactly as before.
In a followup PR I will move the extracted helpers to a new submodule, `Hasura.RQL.Types.Metadata.Serialization`, since they add up to several hundred lines of code.
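A rough sketch of the shape of the change, with toy stand-ins for `Metadata` and `MetadataV3` and a hypothetical extracted helper:

```haskell
module SerializationSketch where

-- Toy stand-ins for the real metadata types.
data Metadata   = Metadata   { metaVersion :: Int, metaSources :: [String] }
data MetadataV3 = MetadataV3 { dtoVersion  :: Int, dtoSources  :: [String] }

-- A helper of the kind previously buried in the 'where' block of
-- 'metadataToOrdJSON'; hoisting it out lets both paths share it.
sourceNames :: Metadata -> [String]
sourceNames = metaSources

-- The new serialization path builds the DTO from the same helpers.
metadataToDTO :: Metadata -> MetadataV3
metadataToDTO m = MetadataV3 (metaVersion m) (sourceNames m)
```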
I'm breaking up #5184 into smaller PRs, and this is the second PR in that effort. This PR is stacked on #5210.
The tracking issue is https://hasurahq.atlassian.net/browse/MM-35
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5211
GitOrigin-RevId: 2596ed5312d7b1232c47ae1d08a51d8ead11fcb8
A following PR moves serialization-related code out of `Hasura.RQL.Types.Metadata` into a specialized submodule. To avoid circular dependencies, a number of other definitions also need to be moved into their own submodule. This PR does that extra moving first, so that we can keep each PR as small and as easy to review as possible.
There are a lot of changed lines, but it's all moving code from one module to another.
I'm breaking up #5184 into smaller PRs, and this is the first PR in that effort.
The tracking issue is https://hasurahq.atlassian.net/browse/MM-35
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5210
GitOrigin-RevId: 6fb6e29a967ab5ad4724006c8e0addd2d63a3946
We currently use `hpack` to generate the Cabal files from _package.yaml_
files for the two small libraries in _server/lib_. While this is more
convenient than maintaining the Cabal files by hand, we also check the
generated Cabal files into the repository to avoid needing an extra
step upon pulling changes.
In order to ensure that the Cabal files do not get out of sync with the
hpack files, this introduces a few improvements:
1. Makefile targets to automatically generate the Cabal files without
needing to know the correct incantation. These targets are a
dependency of all build targets, so you can simply run
`make build-all` and it will work.
2. An extra comment at the top of each generated Cabal file that
explains how to regenerate it.
3. A `lint-hpack` Makefile target that verifies that the Cabal files
are up-to-date.
4. A CI job that runs `make lint-hpack`, to stop inconsistencies
getting merged into trunk.
Most of these changes are ported from #4794.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5217
GitOrigin-RevId: d3dfbe19ec00528368d357b6d0215a7ba4062f68
### Description
I am not 100% sure about this PR; while I think the code is better this way, I'm willing to be convinced otherwise.
In short, this PR moves the role name into the `SchemaContext`, instead of leaving it as a nebulous `Has RoleName` constraint on the reader monad. The major upside is that the role becomes an explicit named field, rather than something that must be given as part of a tuple of arguments when calling `runReader`.
However, the downside is that it breaks the helper permission functions of `Schema.Table`, which relied on `Has RoleName r`. This PR chooses to pass the role name explicitly to all of those functions, which in turn means first explicitly fetching the role name in a lot of places. It makes it more obvious when a schema-building block relies on the role name, but it is a bit verbose...
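A minimal sketch of the two styles, with toy definitions standing in for the real `Has`, `RoleName`, and `SchemaContext`:

```haskell
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE MultiParamTypeClasses #-}

module SchemaContextSketch where

import Control.Monad.Reader (MonadReader, asks)

-- Toy stand-ins for the real definitions.
newtype RoleName = RoleName String

class Has a t where
  getter :: t -> a

-- Before: the role was an anonymous part of the reader environment,
-- available to any schema-building function.
currentRole :: (MonadReader r m, Has RoleName r) => m RoleName
currentRole = asks getter

-- After: the role is a named field of the explicit schema context...
data SchemaContext = SchemaContext { scRole :: RoleName }

-- ...and permission helpers take it as an ordinary argument.
tableSelectPermissions :: RoleName -> [String]
tableSelectPermissions (RoleName role) = ["select granted to " <> role]
```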
### Alternatives
Some alternatives worth considering:
- attempting something like `Has context r, Has RoleName context`, which would allow the functions to be independent of the context but still fetch the role name from the reader, though it might require type annotations to avoid ambiguity
- keeping the permission functions the same, with `Has RoleName r`, and introducing a bunch of newtypes instead of using tuples to explicitly implement all the required `Has` instances
- changing the permission functions to `Has SchemaContext r`, since they are functions used only to build the schema, and therefore may be allowed to be tied to the context.
What do y'all think?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5073
GitOrigin-RevId: 8fd09fafb54905a4d115ef30842d35da0c3db5d2