Moves serialization-specific code out of `Hasura.RQL.Types.Metadata` and into a new module, `Hasura.RQL.Types.Metadata.Serialization`.
I'm breaking up #5184 into smaller PRs. This is the third and final PR in that effort. This PR is stacked on #5210 and #5211.
The tracking issue is https://hasurahq.atlassian.net/browse/MM-35
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5212
GitOrigin-RevId: 6cde6d52173590fafe0969a06f2a3411db4fbc78
A following PR moves serialization-related code out of `Hasura.RQL.Types.Metadata` into a specialized submodule. To avoid circular dependencies, a number of other definitions also need to be moved into their own submodule. This PR does that extra moving first, so that we can keep each PR as small and as easy to review as possible.
There are a lot of changed lines, but it is all code moving from one module to another.
I'm breaking up #5184 into smaller PRs, and this is the first PR in that effort.
The tracking issue is https://hasurahq.atlassian.net/browse/MM-35
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5210
GitOrigin-RevId: 6fb6e29a967ab5ad4724006c8e0addd2d63a3946
In the process of decoupling the schema parsers from the GraphQL Engine, we need to remove dependencies on `Hasura.Base.Error`.
First, we have avoided using `QErr` in schema parser code, instead returning a more appropriate data type which can be converted to a `Hasura.Base.Error.QErr` later.
Second, we create a new `ParseErrorCode` type to represent parse failure types, which are then converted to a `Hasura.Base.Error.Code` later.
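To make the second point concrete, here is a minimal sketch of the shape of this (with only a few representative constructors, and stand-in definitions rather than the real ones):

```haskell
{-# LANGUAGE LambdaCase #-}

-- Stand-in for (part of) Hasura.Base.Error.Code.
data Code = CodeValidationFailed | CodeParseFailed | CodeNotSupported
  deriving (Eq, Show)

-- Parser-local failure classification, with no dependency on Hasura.Base.Error.
data ParseErrorCode
  = ValidationFailed
  | ParseFailed
  | NotSupported
  deriving (Eq, Show)

-- The conversion happens once, at the boundary between the parsers and the engine.
toErrorCode :: ParseErrorCode -> Code
toErrorCode = \case
  ValidationFailed -> CodeValidationFailed
  ParseFailed      -> CodeParseFailed
  NotSupported     -> CodeNotSupported
```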
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5181
GitOrigin-RevId: 8655e26adb1e7d5e3d552c77a8a403f987b53467
Updates autodocodec to the latest version and uses its new features, in particular `discriminatedUnionCodec`.
This allows us to remove the `ValueWrapper*` types and `sumTypeCodec`. Sum types are now encoded as discriminated unions.
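For illustration, a discriminated-union codec looks roughly like this (the `Shape` type and its field names are invented for the example; the helpers are autodocodec's):

```haskell
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}

import Autodocodec
import qualified Data.HashMap.Strict as HashMap

data Shape
  = Circle Double        -- radius
  | Rect Double Double   -- width, height
  deriving (Show)

instance HasCodec Shape where
  codec = object "Shape" $ discriminatedUnionCodec "type" enc dec
    where
      circleCodec = requiredField' "radius"
      rectCodec =
        (,)
          <$> requiredField' "width" .= fst
          <*> requiredField' "height" .= snd
      enc = \case
        Circle r -> ("circle", mapToEncoder r circleCodec)
        Rect w h -> ("rect", mapToEncoder (w, h) rectCodec)
      dec =
        HashMap.fromList
          [ ("circle", ("Circle", mapToDecoder Circle circleCodec)),
            ("rect", ("Rect", mapToDecoder (uncurry Rect) rectCodec))
          ]
```

A `Circle 1.5` then serializes as `{"type": "circle", "radius": 1.5}`, with the `"type"` property acting as the discriminator.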
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5155
GitOrigin-RevId: 20bfdc12b28d35db354c4a149b9175fab0b2b7d2
This is now the sole in-universe dependency of the schema parsers. As
such, we need to extract it as a library before we can extract the
schema parsers as a library.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5202
GitOrigin-RevId: fbe571855768e56dc8b8e259b8efe900de3ecc54
This introduces an `ErrorMessage` newtype which wraps `Text` in a manner which is designed to be easy to construct, and difficult to deconstruct.
It provides functionality similar to `Data.Text.Extended`, but designed _only_ for error messages. Error messages are constructed through `fromString`, concatenation, or the `toErrorValue` function, which is designed to be overridden for all meaningful domain types that might show up in an error message. Notably, there are not and should never be instances of `ToErrorValue` for `String`, `Text`, `Int`, etc. This is so that we correctly represent the value in a way that is specific to its type. For example, all `Name` values (from the _graphql-parser-hs_ library) are single-quoted now; no exceptions.
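Roughly, the shape of the API is as follows (a sketch of what's described above, not the full module):

```haskell
{-# LANGUAGE DerivingStrategies #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Data.String (IsString)
import Data.Text (Text)

-- Easy to construct (string literals, concatenation), hard to take apart.
newtype ErrorMessage = ErrorMessage Text
  deriving newtype (Semigroup, IsString)

-- One instance per meaningful domain type; deliberately no instances for
-- String, Text, Int, etc., so each value is rendered type-appropriately.
class ToErrorValue a where
  toErrorValue :: a -> ErrorMessage

-- For example, GraphQL names are rendered single-quoted:
--   instance ToErrorValue G.Name where
--     toErrorValue name = ErrorMessage ("'" <> G.unName name <> "'")
```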
I have mostly had to add `instance ToErrorValue` for various backend types (and also add newtypes where necessary). Some of these are not strictly necessary for this changeset, as I had bigger aspirations when I started. These aspirations have been tempered by trying and failing twice.
As such, in this changeset, I have started by introducing this type to the `parseError` and `parseErrorWith` functions. In the future, I would like to extend this to the `QErr` record and the various `throwError` functions, but this is a much larger task and should probably be done in stages.
For now, `toErrorMessage` and `fromErrorMessage` are provided for conversion to and from `Text`, but the intent is to stop exporting these once all error messages are converted to the new type.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5018
GitOrigin-RevId: 84b37e238992e4312255a87ca44f41af65e2d89a
This moves `MkTypename` and `NamingCase` into their own modules, with the intent of reducing the scope of the schema parsers code, and trying to reduce imports of large modules when small ones will do.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4978
GitOrigin-RevId: 19541257fe010035390f6183a4eaa37bae0d3ca1
Earlier, if the `select` root field had a custom root field name set, the same name was also used for the streaming subscription root field. This led to duplicate root fields being generated in the `subscription_root`.
This PR fixes that: it provides a way to customize the streaming subscription root field separately, instead of reusing the `select` root field's custom name.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4967
Co-authored-by: Anon Ray <616387+ecthiender@users.noreply.github.com>
GitOrigin-RevId: 54e74ce97561b0e5cfdfc60d1ca340aaebecf7d4
We only use these `Show` instances in error messages (where we call
`show` explicitly anyway) and test cases (in which Hspec requires `Show
a` for any `a` in an assertion).
This removes the instance in favor of a custom `showQErr` function
(which serializes the error to JSON). It is then used in the error
message production that previously called `show` on a `QErr`.
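A sketch of the function (assuming the usual aeson machinery; `QErr` already serializes to JSON):

```haskell
import qualified Data.Aeson as J
import qualified Data.Text as T
import qualified Data.Text.Lazy as TL
import qualified Data.Text.Lazy.Encoding as TL

-- Serialize the error to its JSON representation instead of relying on Show.
showQErr :: QErr -> T.Text
showQErr = TL.toStrict . TL.decodeUtf8 . J.encode
```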
There are two places where we serialize a QErr and then construct a new
QErr from the resulting string. Instead, we modify the existing QErr to
add extra information.
An orphan `Show QErr` instance is retained for tests so that we can have
nice test failure messages.
This is preparation for future changes in which the error message within
`QErr` will not be exposed directly, and therefore will not have a
`Show` instance. That said, it feels like a sensible kind of cleanup
anyway.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4897
GitOrigin-RevId: 8f79f7a356f0aea571156f39aefac242bf751f3a
### Description
This PR rewrites OpenAPI to be more idiomatic. Some noteworthy changes:
- we accumulate all required information during the Analyze phase, so that we no longer need any schema cache lookups during the OpenAPI generation phase (the schema cache is now only an input to the analysis)
- we no longer build intermediary endpoint information and aggregate it; we directly build the `PathItem` for each endpoint; additionally, that means we no longer have to assume that different methods have the same metadata
- we no longer have to first declare types, then craft references: we do everything in one step
- we now properly deal with nullability by treating "typeName" and "typeName!" as different
- we add a bunch of additional fields in the generated "schema", such as title
- we do now support enum values in both input and output positions
- checking whether the request body is required is now performed on the fly rather than by introspecting the generated schema
- the functions in the file are sorted by topic
### Controversial point
However, this PR creates some additional complexity that we might not want to keep. The main complexity is _knot-tying_: to avoid lookups when generating the OpenAPI, it builds an actual graph of input types, which means that we need something similar to (but simpler than) `MonadSchema` to avoid infinite recursion when analyzing the input types of a query. To do this, this PR introduces `CircularT`, a lesser `SchemaT` that aims to avoid our ever having to reinvent this particular wheel again.
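To give a feel for it, the memoization at the heart of `CircularT` looks something like this (a simplified sketch with invented names, not the actual implementation):

```haskell
import Control.Monad.State
import Data.Map (Map)
import qualified Data.Map as Map

-- Per key: Just v means "already built", Nothing means "currently being
-- built", which is what lets the analysis of recursive input types terminate.
type Circular k v = State (Map k (Maybe v))

memoizeOn :: Ord k => k -> Circular k v v -> Circular k v (Maybe v)
memoizeOn k build = do
  seen <- gets (Map.lookup k)
  case seen of
    Just result -> pure result            -- built, or in progress: break the cycle
    Nothing -> do
      modify (Map.insert k Nothing)       -- mark "in progress" before recursing
      v <- build
      modify (Map.insert k (Just v))
      pure (Just v)
```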
### Remaining work
- [x] fix existing tests (they are all failing due to some of the schema changes)
- [ ] add tests to cover the new features:
- [x] tests for `CircularT`
- [ ] tests for enums in output schemas
- [x] extract / document `CircularT` if we wish to keep it
- [x] add more comments to `OpenAPI`
- [x] have a second look at `buildVariableSchema`
- [x] fix all missing diagnostics in `Analyze`
- [x] add a Changelog entry?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4654
Co-authored-by: David Overton <7734777+dmoverton@users.noreply.github.com>
GitOrigin-RevId: f4a9191f22dfcc1dccefd6a52f5c586b6ad17172
This came about as I tried to add an instance over catalog versions and
found they were just simple integers most of the time (and in one case,
a float).
I think this change also clarifies how catalog versions work.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4864
GitOrigin-RevId: a6b7db86de564b71a8c2b602bee6a456b8e20d63
The code that builds the GraphQL schema, and `buildGQLContext` in particular, is partial: not every value of `(ServerConfigCtx, GraphQLQueryType, SourceCache, HashMap RemoteSchemaName (RemoteSchemaCtx, MetadataObject), ActionCache, AnnotatedCustomTypes)` results in a valid GraphQL schema. When it fails, we want to be able to return better error messages than we currently do.
The key thing that is missing is a way to trace GraphQL type information back to its origin in the Hasura metadata. Currently, we have a number of correctness checks of our GraphQL schema, but these correctness checks only have access to pure GraphQL type information, and hence can only report errors in terms of that. Possibly the worst is the "conflicting definitions" error, which, in practice, can only be debugged by Hasura engineers. This is terrible DX for customers.
This PR allows us to print better error messages, by adding a field to the `Definition` type that traces the GraphQL type to its origin in the metadata. So the idea is simple: just add `MetadataObjId`, or `Maybe` that, or some other sum type of that, to `Definition`.
However, we want to avoid having to import a `Hasura.RQL` module from `Hasura.GraphQL.Parser`. So we instead define this additional field of `Definition` through a new type parameter, which is threaded through in `Hasura.GraphQL.Parser`. We then define type synonyms in `Hasura.GraphQL.Schema.Parser` that fill in this type parameter, so that it is not visible for the majority of the codebase.
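Schematically (with stand-in types; the real `Definition` lives in `Hasura.GraphQL.Parser`):

```haskell
import Data.Text (Text)

-- In the parser code: Definition gains an 'origin' type parameter, so it
-- never has to import a metadata type directly.
data Definition origin a = Definition
  { dName        :: Text          -- stand-in for the GraphQL Name type
  , dDescription :: Maybe Text
  , dOrigin      :: Maybe origin  -- traces the GraphQL type back to the metadata
  , dInfo        :: a
  }

-- In the engine, a synonym pins the parameter so most code never sees it,
-- along the lines of:  type Definition a = P.Definition MetadataObjId a
```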
The idea of associating metadata information to `Definition`s really comes to fruition when combined with hasura/graphql-engine-mono#4517. Their combination would allow us to use the API of fatal errors (just like the current `MonadError QErr`) to report _inconsistencies_ in the metadata. Such inconsistencies are then _automatically_ ignored. So no ad-hoc decisions need to be made on how to cut out inconsistent metadata from the GraphQL schema. This will allow us to report much better errors, as well as improve the likelihood of a successful HGE startup.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4770
Co-authored-by: Samir Talwar <47582+SamirTalwar@users.noreply.github.com>
GitOrigin-RevId: 728402b0cae83ae8e83463a826ceeb609001acae
This implements an initial set of DTO types that represent serialized metadata. These new types come with autodocodec codecs, which are used to derive both JSON serialization and OpenAPI documentation. This ensures that we can automatically generate API documentation that is guaranteed to match the JSON produced by the server.
For the moment the new types are not used for anything except to generate an early version of an OpenAPI document. Because this is early work, the DTO types for each metadata format version list top-level properties only, with placeholders for the types of each top-level property. This early iteration demonstrates using a sum type in Haskell that maps to a tagged union in OpenAPI (using the `version` field value as a tag).
This work is experimental and incomplete! Please do not incorporate the generated OpenAPI documentation into essential workflows at this time.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4801
GitOrigin-RevId: d2f110a6237b73520cdba24667333ef14e8cdd3d
Pretty much all quasi-quoted names in the server code base have ended up in `Hasura.GraphQL.Parser.Constants`. I'm now finding this unpleasant for two reasons:
1. I would like to factor out the parser code into its own Cabal package, and I don't want to have to expose all these names.
2. Most of them really have nothing to do with the parsers.
In order to remedy this, I have:
1. moved the names used by parser code to `Hasura.GraphQL.Parser.DirectiveName`, as they're all related to directives;
2. moved `Hasura.GraphQL.Parser.Constants` to `Hasura.Name`, changing the qualified import name from `G` to `Name`;
3. moved names only used in tests to the appropriate test case;
4. removed unused items from `Hasura.Name`; and
5. grouped related names.
Most of the changes are simply changing `G` to `Name`, which I find much more meaningful.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4777
GitOrigin-RevId: a77aa0aee137b2b5e6faec94495d3a9fbfa1348b
This aims to support loading up a `ghci` REPL with both the `graphql-engine` library and the unit tests. This is currently not officially supported by cabal, so it uses a hack, which is why I added a flag. See the updated documentation for more info.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4739
GitOrigin-RevId: 5e7b15855a7a829ed76b5830be1efc9146d25da6
## Description
Following on from #4572, this removes more dead code as identified by Weeder. Comments and thoughts similarly welcome!
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4587
GitOrigin-RevId: 73aa6a5a2833ee41d29b71fcd0a72ed19822ca73
This PR proposes some changes to the hspec test suite:
* It amends the framework to make it easier to test from the ghci REPL
* It introduces a new module `Fixture`, distinguished from `Context` by:
* using a new concept of `SetupAction`s, which bundle setup and teardown actions into one abstraction, making test-system state setup more concise, modular, and safe (because the fixture now knows about the ordering of setup actions and can do partial rollbacks); see the sketch after this list
* somewhat opinionated: it elides the `Options` of `Context`, preferring instead that tests that care about stringification of JSON numbers manage that themselves.
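A sketch of the `SetupAction` shape (close to, though not necessarily identical to, the harness code):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Bundles a setup step with the teardown that undoes it. The fixture runs
-- setups in order and, on failure, only tears down what was actually set up.
data SetupAction = forall a. SetupAction
  { setupAction    :: IO a
  , teardownAction :: Maybe a -> IO ()
    -- Nothing: setup itself failed; Just a: tear down its result
  }
```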
(Note that this PR builds on #4390, so contains some spurious commits which will become irrelevant once that PR is merged)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4630
GitOrigin-RevId: 619c8d985aed0aa42de31d6f16891d0782f4b4b5
(Work here originally done by awjchen, rebased and fixed up for merge by
jberryman)
This is part of a merge train towards GHC 9.2 compatibility. The main
issue is the use of the new abstract `KeyMap` in 2.0. See:
https://hackage.haskell.org/package/aeson-2.0.3.0/changelog
Alex's original work is here:
#4305
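The crux of the change: in aeson 2.0, `Object` is no longer `HashMap Text Value` but an abstract `KeyMap Value` keyed by `Key`, so code along these lines had to be rewritten everywhere (a representative sketch):

```haskell
import Data.Aeson (Value (..))
import qualified Data.Aeson.Key as K
import qualified Data.Aeson.KeyMap as KM
import Data.Text (Text)

-- aeson < 2.0:  HM.lookup k o   (where o :: HashMap Text Value)
-- aeson >= 2.0: keys must go through Data.Aeson.Key first.
lookupField :: Text -> Value -> Maybe Value
lookupField k (Object o) = KM.lookup (K.fromText k) o
lookupField _ _          = Nothing
```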
BEHAVIOR CHANGE NOTE: This change causes a different arbitrary ordering
of serialized JSON, for example during metadata export. CLI users care
about this in particular, and so we need to call it out as a _behavior
change_ as we did in v2.5.0. The good news though is that after this
change the ordering should be more stable (alphabetical key order).
See: https://hasurahq.slack.com/archives/C01M20G1YRW/p1654012632634389
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4611
Co-authored-by: awjchen <13142944+awjchen@users.noreply.github.com>
GitOrigin-RevId: 700265162c782739b2bb88300ee3cda3819b2e87
## Motivation
This PR rewrites most of Relay to achieve the following:
- ~~fix a bug in which the same node id could refer to two different tables in the schema~~
- remove one of the few remaining uses of the source cache in the schema building code
In doing so, it also:
- simplifies the `BackendSchema` class by removing `node` from it,
- makes it much easier for other backends to support Relay,
- documents, re-organizes, and clarifies the code.
## Description
This PR introduces a new `NodeId` version ~~, and adapts the Postgres code to always generate this V2 version~~. This new id contains the source name, in addition to the table name, in order to disambiguate similar table names across different sources (which is now possible with source customization). In doing so, it now explicitly handles that case for V1 node ids, and returns an explicit error message instead of running the risk of _silently returning the wrong information_.
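Schematically, the two versions look something like this (constructor shapes assumed for illustration, with `Text` stand-ins for the real engine types):

```haskell
import Data.Text (Text)

type SourceName  = Text
type TableName   = Text
type ColumnValue = Text

data NodeId
  = NodeIdV1 TableName [ColumnValue]            -- legacy: ambiguous across sources
  | NodeIdV2 SourceName TableName [ColumnValue] -- adds the source name (not yet generated, see below)
```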
Furthermore, it adapts `nodeField` to support multiple backends; most of the code was trivial to generalize, and as a result it lowers the cost of entry for other backends, which now only need to support `AFNodeId` in their translation layer.
Finally, it removes one more cycle in the schema building code, by using the same trick we used for remote relationships instead of using the memoization trick of #4576.
## Remaining work
- ~~[ ] write a Changelog entry~~
- ~~[x] adapt all tests that were asserting on an old node id~~
## Future work
This PR was adapted from its original form to avoid a breaking change: while it introduces a Node ID V2, we keep generating V1 IDs and the parser rejects V2 IDs. It will be easy to make the switch at a later date in a subsequent PR.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4593
GitOrigin-RevId: 88e5cb91e8b0646900547fa8c7c0e1463de267a1
## Description ✍️
- Creates a new `/capabilities` endpoint for the GDC agent API
- Removes capabilities from the `/schema` endpoint
- Removes the `/config-schema` endpoint and includes the `ConfigSchemaResponse` within the `CapabilitiesResponse`
### Related Issues ✍
https://hasurahq.atlassian.net/browse/GDW-85
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4478
GitOrigin-RevId: 426662ee9e751343d94207d439a5025df65d2de7
## Description
This PR adds a config file for [`weeder`](https://github.com/ocharles/weeder) to the `-mono` repository. `weeder` checks for dead code by building a call graph from the given entry points (currently every module named `Main` with a `main` function) and then marking every function _not_ in that call graph as dead code.
To avoid very large PRs, I'm going to tackle this in a series. This first PR adds the basic configuration, plus removes as many weeds as it took for me to realise this was going to become a very big PR. The PRs after this will largely be removing dead code, until the final PR that will add Weeder to the CI pipeline.
### Related Issues
This closes #2973.
## Affected components
- Server
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4572
GitOrigin-RevId: ac8eaa9473e5ac1f16babcb35388694392d0d7dc
This is a first step towards clarifying the role of `UnpreparedValue` as part of the IR. It certainly does not belong in the parser framework.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4588
GitOrigin-RevId: d1582a0b266729b79e00d31057178a4099168e6d
## Description
As identified in hasura/graphql-engine#8096, the format string we used for timestamps was incorrect; we were using `%F`, which expands to `%Y-%m-%d`, but that meant that the year was not padded to four digits: `0001` would be represented simply as `1`. However, Postgres interprets that `1` as `2001`, probably due to interpretation rules about two-digit years (in `25/12/01`, `01` is indeed `2001`).
```
# create table timestamp_test ( test timestamptz );
CREATE TABLE
# insert into timestamp_test values ('1-01-01T00:00:57Z');
INSERT 0 1
# select * from timestamp_test;
test
------------------------
2001-01-01 00:00:57+00
(1 row)
```
To fix this, this PR changes the format string to use `%0Y`, which always pads the year number with zeroes.
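A quick `ghci` session against the `time` library (whose `formatTime` interprets these format strings) shows the difference:

```haskell
ghci> import Data.Time
ghci> let t = UTCTime (fromGregorian 1 1 1) 57
ghci> formatTime defaultTimeLocale "%FT%TZ" t    -- old: %F == %Y-%m-%d, year unpadded
"1-01-01T00:00:57Z"
ghci> formatTime defaultTimeLocale "%0Y-%m-%dT%TZ" t  -- new: %0Y zero-pads the year
"0001-01-01T00:00:57Z"
```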
## Remaining work
- [x] write Changelog entry
- [ ] copy timestamp tests from the python suite into the hspec tests
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3536
GitOrigin-RevId: fa144111358339fd4a35b32d888c1d2c5b418ea6
## Description
This PR removes `RQL.Types`, which was now only re-exporting a bunch of unrelated modules.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4363
GitOrigin-RevId: 894f29a19bff70b3dad8abc5d9858434d5065417
With the current implementation, only the first call to `waitForShutdown` on a given
`ShutdownLatch` will return, while others will block (typically indefinitely). That's not
how one would expect a shutdown latch to work.
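For illustration, the latch can be built so that waiting never consumes the shutdown signal (a sketch, assuming an `MVar ()` carrier):

```haskell
import Control.Concurrent.MVar

newtype ShutdownLatch = ShutdownLatch (MVar ())

-- readMVar leaves the value in place, so every waiter (present and future)
-- returns once the latch is triggered; a takeMVar-based wait wakes only one.
waitForShutdown :: ShutdownLatch -> IO ()
waitForShutdown (ShutdownLatch mv) = readMVar mv

shutdown :: ShutdownLatch -> IO ()
shutdown (ShutdownLatch mv) = () <$ tryPutMVar mv ()
```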
This isn't currently a concrete issue because we only wait once on each `ShutdownLatch`.
But in the context of #4154 we'll probably end up wanting to wait for shutdown from
multiple threads.
This adds a number of tests to verify the current behaviour, and adds a test for multiple
`waitForShutdown` calls that fails prior to the functional change.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4162
GitOrigin-RevId: 9a108858d11390b847404f30bc7b93c06fc3f966
- adds Hasura.Session and Data.Parser.URLTemplate specs to the
list of specs to run
- minor naming cleanup
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4161
GitOrigin-RevId: 4bea54337268f3d2e28d0c68e8304098dbad893b
UPDATE: After testing in CI it turns out that the compile-time improvement is better than expected: even though we always have to recompile the OSS lib (due to Version.hs), downstream packages like Pro and multi-tenant can still benefit from some caching and avoid full recompilation. In the best case this takes us from 22 minutes to 13 minutes total.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4104
GitOrigin-RevId: 76cbfc157064b33856e30f4c2b2ab2366f9c6089
### Motivation
#2338 introduced a way to validate REST queries against the metadata after a change, to properly report any inconsistency that would emerge from a change in the underlying structure of our schema. However, the way this was done was quite complex and error-prone. Namely: we would use the generated schema parsers to statically execute an introspection query, similar to the one we use for remote schemas, then parse the resulting bytestring as if it were coming from a remote schema.
This led to several issues: the code was using remote schema primitives, and was associated with remote schema code, despite being unrelated, which led to absurd situations like creating fake `Variable`s whose type was also their name. A lot of the code had to deal with the fact that we might fail to re-parse our own schema. Additionally, some of it was dead code that, for some reason, GHC did not warn about? But more fundamentally, this architecture decision creates a dependency between unrelated pieces of the engine: modifying the internal processing of root fields or the introspection of remote schemas now risks impacting the unrelated `OpenAPI` feature.
### Description
This PR decouples that process from the remote schema introspection logic and from the execution engine by making `Analyse` and `OpenAPI` work on the generic `G.SchemaIntrospection` instead. To accomplish this, it:
- adds `GraphQL.Parser.Schema.Convert`, to convert from our "live" schema back to a flat `SchemaIntrospection`
- persists in the schema cache the `admin` introspection generated when building the schema, and uses it both for validation and for generating the `OpenAPI`.
### Known issues and limitations
This adds a bit of memory pressure to the engine, as we persist the entire schema in the schema cache. This might be acceptable in the short-term, but we have several potential ideas going forward should this be a problem:
- cache the result of `Analyze`: when it becomes possible to build the `OpenAPI` purely with the result of `Analyze` without any additional schema information, then we could cache that instead, reducing the footprint
- caching the `OpenAPI`: if it doesn't need to change every time the endpoint is queried, then it should be possible to cache the entire `OpenAPI` object instead of the schema
- cache a copy of the `FieldParsers` used to generate the schema: as those are persisted through the GraphQL `Context`, and are the only input required to generate the `Schema`, making them accessible in the schema cache would allow us to have the exact same feature with no additional memory cost, at the price of a slightly slower and more complicated process (need to rebuild the `Schema` every time we query the OpenAPI endpoint)
- cache nothing at all, and rebuild the admin schema from scratch every time.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3962
Co-authored-by: paritosh-08 <85472423+paritosh-08@users.noreply.github.com>
GitOrigin-RevId: a8b9808170b231fdf6787983b4a9ed286cde27e0
### Description
This is it! This PR enables the Metadata API for remote relationships from remote schemas, adds tests, ~~adds documentation~~, adds an entry to the Changelog. This is the release PR that enables the feature.
### Checklist
- [ ] Tests:
- [x] RS-to-Postgres (high level)
- [x] RS-to-RS (high level)
- [x] From RS specifically (testing for edge cases)
- [x] Metadata API tests
- [ ] Unit testing the actual engine?
- [x] Changelog entry
- [ ] Documentation?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3974
Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
Co-authored-by: Vishnu Bharathi <4211715+scriptnull@users.noreply.github.com>
Co-authored-by: jkachmar <8461423+jkachmar@users.noreply.github.com>
GitOrigin-RevId: c9aebf12e6eebef8d264ea831a327b968d4be9d2
### Description
This PR cleans up `processRemoteJoins` by splitting the code, introducing comments, and applying the same strategies as #3810 did. Most importantly, it introduces a new module `RemoteJoin.Source`, made to be very similar to `RemoteJoin.RemoteSchema`, that exposes the required tooling to make a join call to a source, which declutters `Join`. Furthermore, this PR uses the same "dependency injection" to make the core of `Join` free from IO: this opens the door to testing the join engine in the unit tests.
None of the functions were modified when moved from their old module to the new one, but there's no way to easily see this in a diff.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3894
GitOrigin-RevId: 1e7c43006f092326e061f9ba12674e207b628bef
## Description
We go through the module `Hasura.Backends.MSSQL.FromIr` and split it into separate self-contained units, which we document.
Note that this PR has a slightly opinionated follow-up PR, #3909.
### Related Issues
Fixes #3666
### Solution and Design
The module `FromIr` has given rise to:
* `FromIr.Expression`
* `FromIr.Query`
* `FromIr.Delete`
* `FromIr.Insert`
* `FromIr.Update`
* `FromIr.SelectIntoTempTable`
And `Execute.MutationResponse` has become `FromIr.MutationResponse` (after some slight adaptation of types).
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3908
GitOrigin-RevId: 364acf1bcdf74f2e19464c31cdded12bd8e9aa59
…rmance
It makes sense to try to utilize multiple threads for metadata
operations since we expect them to come one at a time (and likely at
lower load periods anyway).
As noted, although we build roles in parallel now, the admin role is
still a bottleneck. For replace_metadata on huge_schema, on my machine
I get:
BEFORE: 22.7 sec
AFTER: 13.5 sec
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3911
GitOrigin-RevId: 4d4ee6ac8b5506603e70e4fc666a3aacc054d493
### Description
Several libraries define `catMaybes` as `mapMaybe id`. We had it defined in `Data.HashMap.Strict.Extended` already. This small PR also defines it in `Extended` modules for other containers and replaces every occurrence of `mapMaybe id` accordingly.
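For reference, the `HashMap` version is tiny, and the other containers' `Extended` modules follow the same pattern:

```haskell
import Data.HashMap.Strict (HashMap)
import qualified Data.HashMap.Strict as HM

catMaybes :: HashMap k (Maybe v) -> HashMap k v
catMaybes = HM.mapMaybe id
```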
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3884
GitOrigin-RevId: d222a2ca2f4eb9b725b20450a62a626d3886dbf4
### Description
There were several places in the codebase where we would either implement a generic container, or express the need for one. This PR extracts / creates all relevant containers, and adapts the relevant parts of the code to make use of said new generic containers. More specifically, it introduces the following modules:
- `Data.Set.Extended`, for new functions on `Data.Set`
- `Data.HashMap.Strict.Multi`, for hash maps that accept multiple values
- `Data.HashMap.Strict.NonEmpty`, for hash maps that can never be constructed as empty
- `Data.Trie`, for a generic implementation of a prefix tree
This PR makes use of those new containers in the following parts of the code:
- `Hasura.GraphQL.Execute.RemoteJoin.Types`
- `Hasura.RQL.Types.Endpoint*`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3828
GitOrigin-RevId: e6c1b971bcb3f5ab66bc91d0fa4d0e9df7a0c6c6
The only purpose was enabling the developer API by default. I don't
think that justifies a flag and CPP usage.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3820
GitOrigin-RevId: 058c9a7b03e5e164ef88e35c42f50bae3c42b5b6
No logic in this PR, just tidying things up (renaming the backend from `Experimental` to `DataWrapper`).
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3779
GitOrigin-RevId: f11acf563ccd8b9f16bc23c5e92da392aa4cfb2c
## Description
This PR is in reference to #2449 (support IP blacklisting for multitenant)
*RFC Update: Add support for IPv6 blocking*
### Solution and Design
Using [http-client-restricted](https://hackage.haskell.org/package/http-client-restricted) package, we're creating the HTTP manager with restricting capabilities. The IPs can be supplied from the CLI arguments as `--ipv4BlocklistCidrs cidr1, cidr2...` or `--disableDefaultIPv4Blocklist` for a default IP list. The new manager will block all requests to the provided CIDRs.
We are extracting the error message string to show the end user that the given IP is blocked from being set as a webhook. There are two ways to extract the error message "connection to IP address is blocked". Given below are the responses from an event trigger to a blocked IP for each implementation:
- 6d74fde316f61e246c861befcca5059d33972fa7 - We return the error message string as an HTTPErr(HOther) from `Hasura/Eventing/HTTP.hs`.
```
{
"data": {
"message": "blocked connection to private IP address "
},
"version": "2",
"type": "client_error"
}
```
- 88e17456345cbb449a5ecd4877c84c9f319dbc25 - We case match on `HTTPExceptionContent` for `InternalException` in `Hasura/HTTP.hs` and extract the error message string from it. (This is implemented as it handles all the cases where the pro engine makes webhook requests.)
```
{
"data": {
"message": {
"type": "http_exception",
"message": "blocked connection to private IP address ",
"request": {
"secure": false,
"path": "/webhook",
"responseTimeout": "ResponseTimeoutMicro 60000000",
"queryString": "",
"method": "POST",
"requestHeaders": {
"Content-Type": "application/json",
"X-B3-ParentSpanId": "5ae6573edb2a6b36",
"X-B3-TraceId": "29ea7bd6de6ebb8f",
"X-B3-SpanId": "303137d9f1d4f341",
"User-Agent": "hasura-graphql-engine/cerebushttp-ip-blacklist-a793a0e41-dirty"
},
"host": "139.59.90.109",
"port": 8000
}
}
},
"version": "2",
"type": "client_error"
}
```
### Steps to test and verify
The restricted IPs can be used as webhooks in event triggers, and Hasura will return an error message in response.
### Limitations, known bugs & workarounds
- The `http-client-restricted` package has a needlessly complex interface, and puts effort into implementing proxy support, which we don't want, so we've inlined a stripped-down version.
- Performance constraint: as blocking is checked for each request, iterating through a long list of blocked CIDRs is not what we would prefer. Using a trie is suggested to overcome this. (Added to RFC)
- Calls to Lux endpoints are inconsistent: We use either the http manager from the ProServeCtx which is unrestricted, or the http manager from the ServeCtx which is restricted (the latter through the instances for MonadMetadataApiAuthorization and UserAuthentication). (The failure scenario here would be: cloud sets PRO_ENDPOINT to something that resolves to an internal address, and then restricted requests to those endpoints fail, causing auth to fail on user requests. This is about HTTP requests to lux auth endpoints.)
## Changelog
- ✅ `CHANGELOG.md` is updated with user-facing content relevant to this PR.
## Affected components
- ✅ Server
- ✅ Tests
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3186
Co-authored-by: Robert <132113+robx@users.noreply.github.com>
GitOrigin-RevId: 5bd2de2d028bc416b02c99e996c7bebce56fb1e7
TL;DR
---
We go from this:
```haskell
(|
withRecordInconsistency
( (|
modifyErrA
( do
(info, dependencies) <- liftEitherA -< buildRelInfo relDef
recordDependencies -< (metadataObject, schemaObject, dependencies)
returnA -< info
)
|) (addTableContext @b table . addRelationshipContext)
)
|) metadataObject
```
to this:
```haskell
withRecordInconsistencyM metadataObject $ do
modifyErr (addTableContext @b table . addRelationshipContext) $ do
(info, dependencies) <- liftEither $ buildRelInfo relDef
recordDependenciesM metadataObject schemaObject dependencies
return info
```
Background
---
We use Haskell's `Arrows` language extension to gain some syntactic sugar when working with `Arrow`s. `Arrow`s are a programming abstraction comparable to `Monad`s.
Unfortunately the syntactic sugar provided by this language extension is not very sweet.
This PR shows how we can sometimes avoid using `Arrow`s altogether, without loss of functionality or correctness. It is a demo of a technique that can be used to cut down the amount of `Arrows`-based code in our codebase by about half.
Approach
---
Although _in general_ not every `Monad` is an `Arrow`, specific `Arrow` instantiations are exactly as powerful as their `Monad` equivalents. Otherwise they wouldn't be very equivalent, would they?
Just like `liftEither` interprets the `Either e` monad into an arbitrary monad implementing `MonadError e`, we add `interpA` which interprets certain concrete monads such as `Writer w` into specific arrows, e.g. ones satisfying `ArrowWriter w`. This means that the part of the code that only uses such interpretable effects can be written _monadically_, and then used in _arrow_ constructions down the line.
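A sketch of `interpA` for the `Writer` case (the `ArrowWriter` class here is a self-contained stand-in for the one in our codebase):

```haskell
{-# LANGUAGE Arrows #-}
{-# LANGUAGE MultiParamTypeClasses #-}

import Control.Arrow
import Control.Monad.Writer (Writer, runWriter)

-- The arrow-effect counterpart of MonadWriter.
class Arrow arr => ArrowWriter w arr where
  tellA :: arr w ()

-- Run the monadic computation, then replay its accumulated output
-- through the arrow's own writer effect.
interpA :: ArrowWriter w arr => arr (Writer w a) a
interpA = proc m -> do
  let (a, w) = runWriter m
  tellA -< w
  returnA -< a
```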
This approach cannot be used for arrow effects which do not have a monadic equivalent. In our codebase, the only instance of this is `ArrowCache m`, implemented by the `Rule m` arrow. So code written with `ArrowCache m` in the context cannot be rewritten monadically using this technique.
See also
---
- #1827
- #2210
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3543
Co-authored-by: jkachmar <8461423+jkachmar@users.noreply.github.com>
GitOrigin-RevId: eb79619c95f7a571bce99bc144ce42ee65d08505
## Description
Hopefully this is relatively self-explanatory: this change splits the helper functions we've used to extend QuickCheck from the orphan instances and generators that we have defined for unit tests. These have now been placed in `Test.QuickCheck.Extended` and `Hasura.QuickCheck.Instances`, respectively.
This change also adds some documentation to the functions defined in `Test.QuickCheck.Extended` in the spirit of similar functions defined by `Test.QuickCheck`, itself.
### Motivation
We should adhere to the existing convention of constructing "extension modules" for common libraries separately from the code that takes advantage of these.
Alone, this wouldn't be a reason to split up `Hasura.Generators`, but we should **also** follow a convention of defining **all** orphan instances in modules whose names clearly indicate that they exist solely for the purpose of exporting these orphan instances (e.g. `Hasura.QuickCheck.Instances`).
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3747
GitOrigin-RevId: fb856a790b4a39163f81481d4f900fafb1797ea6
## Description
This PR adds the possibility for hspec tests to start a remote server with a custom schema, using the _morpheus_ library. In addition, it adds:
- X-to-DB object relationships tests
- X-to-DB array relationships tests
- X-to-RS relationships tests
For now, all those X are only postgres, but the tests are written in a way that will easily allow any other DB, or even remote schemas, to take that place. The actual tests were taken mostly from #3069.
To achieve this, this PR heavily refactors the test harness. Most importantly: it generalizes the notion of a `Backend` to a notion of generic `Context`, allowing for contexts that are the unions of two backends, or of a backend and a remote schema.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3567
Co-authored-by: jkachmar <8461423+jkachmar@users.noreply.github.com>
GitOrigin-RevId: 623f700ba482743f94d3eaf659e6cfa22cd0dbc9
## Description
This PR adds all the scaffolding for tests that require remote servers. It is mostly a refactor of `Feature`: where we used to list, for each test, a list of individual backends, we now provide a list of `Context`s, which allows tests to specify not only how they should be set up, but also what state needs to be carried around throughout the test. This will be useful when launching custom remote servers.
Additionally, this PR:
- cleans up the way we generate logs in the engine as part of the tests
- cleans up the cabal file
- introduces a few more helpers for sending commands to the engine (such as `postMetadata_`)
- allows for headers in queries sent to the engine (to support permissions tests)
- adds basic code to start / stop a "remote" server
This PR is a pre-requisite of #3567.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3573
Co-authored-by: jkachmar <8461423+jkachmar@users.noreply.github.com>
GitOrigin-RevId: 05f808c6b85729dbb3ea6648c3e10a3c16b641ef
spec: https://github.com/hasura/graphql-engine-mono/pull/2278
Briefly:
- extend metadata so that allowlist entries get a new scope field
- update `add_collection_to_allowlist` to accept this new scope field,
and add `update_scope_of_collection_in_allowlist` to change the scope
- scope can be global or role-based; a collection is available for every
role if it is global, and available to every listed role if it is role-based
- graphql-engine-oss is aware of role-based allowlist metadata; collections
with non-global scope are treated as if they weren't in the allowlist
To run the tests:
- `cabal run graphql-engine-tests -- unit --match Allowlist`
- py-tests against pro:
- launch `graphql-engine-pro` with `HASURA_GRAPHQL_ADMIN_SECRET` and `HASURA_GRAPHQL_ENABLE_ALLOWLIST`
- `pytest test_allowlist_queries.py --hge-urls=... --pg-urls=... --hge-key=... --test-allowlist-queries --pro-tests`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2477
Co-authored-by: Anon Ray <616387+ecthiender@users.noreply.github.com>
Co-authored-by: Robert <132113+robx@users.noreply.github.com>
GitOrigin-RevId: 01f8026fbe59d8701e2de30986511a452fce1a99
This commit introduces an "experimental" backend adapter to the GraphQL Engine.
It defines a high-level interface which will eventually be used as the basis for implementing separate data source query generation & marshaling services that communicate with the GraphQL Engine Server via some protocol.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2684
Co-authored-by: awjchen <13142944+awjchen@users.noreply.github.com>
Co-authored-by: Chris Parks <592078+cdparks@users.noreply.github.com>
GitOrigin-RevId: 4463b682142ad6e069e223b88b14db511f634768
This PR pretty much does the same thing to remote relationship types in the schema cache as what #2979 did to remote relationship types in the IR. On `main`, remote relationships are represented by types of the form `T from to`. This PR changes that to `T from`, which makes them a lot more reusable.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3037
GitOrigin-RevId: 90a5c9e2346c8dc2da6ec5b8c970d6c863d2afb8
## Description
This PR fixes two issues:
- in [#2903](https://github.com/hasura/graphql-engine-mono/pull/2903), we introduced a new metadata representation of remote relationships, which broke parsing a metadata blob containing an old-style db-to-rs remote relationship
- in [#1179](https://github.com/hasura/graphql-engine-mono/pull/1179), we silently and mistakenly deprecated `create_remote_relationship` in favour of `<backend>_create_remote_relationship`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3124
Co-authored-by: jkachmar <8461423+jkachmar@users.noreply.github.com>
Co-authored-by: Antoine Leblanc <1618949+nicuveo@users.noreply.github.com>
GitOrigin-RevId: 45481db7a8d42c7612e938707cd2d652c4c81bf8
This PR simplifies the types that represent a remote relationship in IR so that they can be reused in other parts (in remote schema types) which could have remote relationships.
The comments on the PR explain the main changes.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2979
GitOrigin-RevId: 559c51d9d6ae79e2183ce4347018741b9096ac74
We'll see if this improves compile times at all, but I think it's worth
doing as at least the most minimal form of module documentation.
This was accomplished by first compiling everything with
`-ddump-minimal-imports`, and then a bunch of scripting (with help from
ormolu).
**EDIT** it doesn't seem to improve CI compile times but the noise floor is high as it looks like we're not caching library dependencies anymore
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2730
GitOrigin-RevId: 667eb8de1e0f1af70420cbec90402922b8b84cb4
## Description
We don't have a dependency-reporting mechanism for the `mssql_run_sql` API, i.e., when a database object (table, column, etc.) is dropped through the API, we should raise an exception if the metadata contains any dependencies (relationships, permissions, etc.) on that database object.
This PR addresses the above-mentioned problem by:
- integrating a transaction into the API, so the SQL query execution is rolled back if dependencies exist and an exception is thrown
- accepting an optional `cascade` field in the API payload to drop the dependencies, if any
- accepting an optional `check_metadata_consistency` field to bypass the dependency check (when set to `false`)
### Related Issues
Closes #1853
### Solution and Design
The design/solution follows the `run_sql` API implementation for Postgres backend.
### Steps to test and verify
- Create author and article tables and track them
- Define object and array relationships
- Try to drop the article table without cascade, or with cascade set to `false`
- The server should raise an exception saying that relationship dependencies exist
## Changelog
- ✅ `CHANGELOG.md` is updated with user-facing content relevant to this PR.
If no changelog is required, then add the `no-changelog-required` label.
## Affected components
- ✅ Server
- ❎ Console
- ❎ CLI
- ❎ Docs
- ❎ Community Content
- ❎ Build System
- ✅ Tests
- ❎ Other (list it)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2636
GitOrigin-RevId: 0ab152295394056c4ca6f02923142a1658ad25dc
The only real use was for the dubious multitenant option
--consoleAssetsVersion, which actually overrode not just
the assets version. I.e., as far as I can tell, if you pass
--consoleAssetsVersion to multitenant, that version will
also make it into e.g. HTTP client user agent headers as
the proper graphql-engine version.
I'm dropping that option, since it seems unused in production
and I don't want to go to the effort of fixing it, but am happy
to look into that if folks feel strongly that it should be
kept.
(Reason for attacking this is that I was looking into http
client things around blacklisting, and the versioning thing
is a bit painful around http client headers.)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2458
GitOrigin-RevId: a02b05557124bdba9f65e96b3aa2746aeee03f4a
The Plan part is missing, because it needs support from FromIr. That'll come in a follow-up commit.
**Next PR**: #2529
This is the result of splitting up the mega PR into more digestible chunks. This is the smallest subset I've been able to collect. Missing parts are noted in comments.
The code isn't reachable from Main, so it won't affect the test suite. It just gets compiled for now.
For context, this splits up work from https://github.com/hasura/graphql-engine-mono/pull/2332
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2511
Co-authored-by: Abby Sassel <3883855+sassela@users.noreply.github.com>
GitOrigin-RevId: 00f30b0f494b56b3b7f8c1b0996377db4874c88d
### Description
This PR implements operation timeouts, as specced in #1232.
RFC: [rfcs/operation-timeout-api-limits.md](c025a90fe9/rfcs/operation-timeout-api-limits.md)
There are still some things to be done (tests and docs most notably), but apart from that it can
be reviewed. I'd still appreciate feedback on the RFC!
TODO:
- [x] break out the `ApiLimits` refactoring into a separate PR: #2103
- [x] finish the `pg-client-hs` PR: https://github.com/hasura/pg-client-hs/pull/39
- [x] remove configurability, after testing, prior to merging
- [ ] tests: #2390 has some tests that I've run locally to confirm things work on a fundamental level
- [x] changelog
- [x] documentation
- [x] fill in the detailed PR checklist
### Changelog
- [x] `CHANGELOG.md` is updated with user-facing content relevant to this PR. If no changelog is required, then add the `no-changelog-required` label.
### Affected components
- [x] Server
- [ ] Console
- [ ] CLI
- [x] Docs
- [ ] Tests
### Related Issues
Product spec: #1232.
### Solution and Design
Compare `rfcs/operation-timeout-api-limits.md`.
### Steps to test and verify
Configure operation timeouts, e.g. by posting
```
{
"type": "set_api_limits",
"args": {
"operation_timeout": {
"global": 3
}
}
}
```
to `v1/metadata` to set an operation timeout of 3s. Then verify that
1. non-admin queries that take longer than 3s time out with a nice error message
2. that those queries return after ~3s (at least for postgres)
3. also that everything else still works as usual
### Limitations, known bugs & workarounds
- while this will cause slow queries against any backends to fail, it's only verified to actually interrupt queries against postgres
- this will only successfully short-cut (cancel) queries to postgres if the database server is responsive
#### Catalog upgrade
Does this PR change Hasura Catalog version?
- [x] No
#### Metadata
Does this PR add a new Metadata feature?
- [x] Yes
- Does `run_sql` auto manages the new metadata through schema diffing?
- [x] Not required
- Does `run_sql` auto manages the definitions of metadata on renaming?
- [x] Not required
- Does `export_metadata`/`replace_metadata` supports the new metadata added?
- [x] Yes
#### GraphQL
- [x] No new GraphQL schema is generated
#### Breaking changes
- [x] No Breaking changes
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/1593
GitOrigin-RevId: f0582d0be3ed9fadf89e0c4aaf96344d18331dc4
### Description
- sets up a Makefile target for running ormolu to format and check source code
- updates CI to run ormolu instead of stylish-haskell (and to check instead of format actively)
Compare #1679.
Here's the plan for merging this:
1. merge this PR; at this point, all PRs will fail CI unless they have the `ignore-server-format-checks` label set
2. merge follow-up PR #2404 that does nothing but actually reformats the codebase
3. tag the merge commit as `post-ormolu` (also on `graphql-engine`, for the benefit of community contributors)
4. provide the following script to any devs in order to update their branches:
```
$ git checkout my-feature-branch
$ git merge post-ormolu^
$ make format
$ git commit -a -m "reformat with ormolu"
$ git merge -s ours post-ormolu
```
(I'll put this in the commit message)
https://github.com/hasura/graphql-engine-mono/pull/2020
Co-authored-by: Philip Lykke Carlsen <358550+plcplc@users.noreply.github.com>
Co-authored-by: Swann Moreau <62569634+evertedsphere@users.noreply.github.com>
GitOrigin-RevId: 130f480a6d79967c8d045b7f3a6dec30b10472a7
Some of our use of CPP causes trouble for ormolu, compare https://github.com/tweag/ormolu/issues/774.
Specifically, for understandable reasons, it can't deal well with `#ifdef` use that is not at the top-level.
This PR removes the problematic usage in ways that I hope are also a net non-loss regardless of helping
out ormolu (or other tooling).
- The default value for enabled APIs moves to the top level, next to the command line help, so
they'll stay in sync more easily.
- All the CPP around using `assertNFHere` is moved to one module.
https://github.com/hasura/graphql-engine-mono/pull/2361
GitOrigin-RevId: ed6e039e6d8960322fd8d1312df762ad197c29b1
## Description
Almost all our data structures use strictness annotations, following [our styleguide's principle](https://github.com/hasura/graphql-engine/blob/master/server/STYLE.md#dealing-with-laziness) of "by default, use strict data types and lazy functions". The very few cases where we actually need laziness were already explicitly labelled as lazy with the `~` prefix operator.
This PR simply globally enables `StrictData`, allowing us to express records without a bang (`!`) on every field, but makes no attempt at cleaning existing code.
https://github.com/hasura/graphql-engine-mono/pull/1869
Co-authored-by: Philip Lykke Carlsen <358550+plcplc@users.noreply.github.com>
GitOrigin-RevId: e65c6e2f89413188da250122f64c2173615946ec
## Suggestion: Add fancier trace debugging functions to `Hasura.Prelude`
This PR adds two trace functions, `ltrace` and `ltraceM`, which use the `pretty-simple` package to `show` the input with nice formatting and colors for ease of reading (and comparing using diff tools such as `meld` or `vim-diff`).
I've also added warning pragmas to the functions, which means:
1. Traces will not be left in code, as CI builds with -Werror
2. Developers will have to change the `ghc-options` to `-Wwarn` in their `cabal.project.local` settings to use these functions
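A sketch of what the definitions can look like (assuming the `pretty-simple` API; the real ones may differ slightly):

```haskell
import qualified Data.Text.Lazy as TL
import Debug.Trace (trace, traceM)
import Text.Pretty.Simple (pShow)

{-# WARNING ltrace "ltrace left in code" #-}
ltrace :: Show a => String -> a -> a
ltrace label x = trace (label <> ": " <> TL.unpack (pShow x)) x

{-# WARNING ltraceM "ltraceM left in code" #-}
ltraceM :: (Show a, Applicative m) => String -> a -> m ()
ltraceM label x = traceM (label <> ": " <> TL.unpack (pShow x))
```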
### Example
Usage:
```hs
selectFunctionAggregate ... = ... do
ltraceM "functionInfo" function
...
```
Output to terminal looks like this:
<img width="524" alt="Screen Shot 2021-08-12 at 10 33 24" src="https://user-images.githubusercontent.com/8547573/129158878-4a5e96ba-30a5-452c-8f33-9eb4b2cc5e2a.png">
### Dependencies
Requires adding the following dependencies:
- prettyprinter-ansi-terminal-1.1.2 (BSD2)
- pretty-simple-4.0.0.0 (BSD3)
Question: what is the process for adding new dependencies? How do decisions on this matter happen?
https://github.com/hasura/graphql-engine-mono/pull/2075
GitOrigin-RevId: 490b0f0ca595da319b43e92e190ba50c0b132cd5
### Description
A first PR, #1947, removed all the `Arbitrary` stuff from our codebase. But #1740, merged on the same day, added some tests relying on `Arbitrary`. In the merge process, some unneeded `Arbitrary` code got reintroduced.
This PR removes all `Arbitrary` stuff from `src-lib`, and cleans up / refactors `Hasura.Generator` in `src-test`, reducing it to the bare minimum of `Arbitrary` instances.
https://github.com/hasura/graphql-engine-mono/pull/1957
GitOrigin-RevId: 7e76009bb022205e3737fca45749411a266cc08c
Query plan caching was introduced by - I believe - hasura/graphql-engine#1934 in order to reduce the query response latency. During the development of PDV in hasura/graphql-engine#4111, it was found out that the new architecture (for which query plan caching wasn't implemented) performed comparably to the pre-PDV architecture with caching. Hence, it was decided to leave query plan caching until some day in the future when it was deemed necessary.
Well, we're in the future now, and there still isn't a convincing argument for query plan caching. So the time has come to remove some references to query plan caching from the codebase. For the most part, any code being removed would probably not be very well suited to the post-PDV architecture of query execution, so arguably not much is lost.
Apart from simplifying the code, this PR will contribute towards making the GraphQL schema generation more modular, testable, and easier to profile. I'd like to eventually work towards a situation in which it's easy to generate a GraphQL schema parser *in isolation*, without being connected to a database, and then parse a GraphQL query *in isolation*, without even listening on any HTTP port. It is important that both of these operations can be examined in detail, and in isolation, since they are two major performance bottlenecks, as well as phases that many important upcoming features hook into.
Implementation
The following have been removed:
- The entirety of `server/src-lib/Hasura/GraphQL/Execute/Plan.hs`
- The core phases of query parsing and execution no longer have any references to query plan caching. Note that this is not to be confused with query *response* caching, which is not affected by this PR. This includes removal of the types:
  - `Opaque`, which is replaced by a tuple. Note that the old implementation was broken and did not adequately hide the constructors.
  - `QueryReusability` (and the `markNotReusable` method). Notably, the implementation of the `ParseT` monad now consists of two, rather than three, monad transformers.
- Cache-related tests (in `server/src-test/Hasura/CacheBoundedSpec.hs`) have been removed.
- References to query plan caching in the documentation.
- The `planCacheOptions` in the `TenantConfig` type class was removed. However, during parsing, unrecognized fields in the YAML config get ignored, so this does not cause a breaking change. (Confirmed manually, as well as in consultation with @sordina.)
- The metrics no longer send cache hit/miss messages.
There are a few places in which one can still find references to query plan caching:
- We still accept the `--query-plan-cache-size` command-line option for backwards compatibility. The `HASURA_QUERY_PLAN_CACHE_SIZE` environment variable is not read.
https://github.com/hasura/graphql-engine-mono/pull/1815
GitOrigin-RevId: 17d92b254ec093c62a7dfeec478658ede0813eb7
### Context
One of the ways we use the Backend type families is to use `Void` for all types for which a backend has no representation; this allows us to make some branches of our metadata and IR unrepresentable, making some functions total, where they would have to handle those unsupported cases otherwise.
However, one of the biggest features, functions, cannot be cut that way, due to one of the constraints on `FunctionName b`: the metadata generator requires it to have an `Arbitrary` instance, and `Arbitrary` does not have a recovery mechanism which would allow for a `Void` instance...
### Description
This PR solves this problem and removes the `Arbitrary` constraints in `Backend`. To do so, it introduces a new typeclass: `PartialArbitrary`, which is very similar to `Arbitrary`, except that it returns a `Maybe (Gen a)`, allowing for `Void` to have a well-formed instance. An `Arbitrary` instance for `Metadata` can easily be retrieved with `arbitrary = fromJust . partialArbitrary`.
Furthermore, `PartialArbitrary` has a generic implementation, inspired by the one in `generic-arbitrary`, which automatically prunes branches that return `Nothing`, allowing to automatically construct most types. Types that don't have a type parameter and therefore can't contain `Void` can easily get their `PartialArbitrary` instance from `Arbitrary` with `partialArbitrary = Just arbitrary`. This is what a default overlappable instance provides.
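The core of the class, as described (a sketch omitting the generic machinery):

```haskell
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE UndecidableInstances #-}

import Data.Void (Void)
import Test.QuickCheck (Arbitrary (..), Gen)

class PartialArbitrary a where
  partialArbitrary :: Maybe (Gen a)

-- Void finally gets a well-formed instance: there is simply no generator,
-- and the generic machinery prunes any branch that returns Nothing.
instance PartialArbitrary Void where
  partialArbitrary = Nothing

-- Types that can't contain Void default to their Arbitrary generator.
instance {-# OVERLAPPABLE #-} Arbitrary a => PartialArbitrary a where
  partialArbitrary = Just arbitrary
```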
In conjunction with other cleanups in #1666, **this allows for Void function names**.
### Notes
While this solves the stated problem, there are other possible solutions we could explore, such as:
- switching from QuickCheck to a library that supports that kind of pruning natively
- removing the test altogether, and dropping all notion of Arbitrary from the code
There are also several things we could do with the Generator module:
- move it out of RQL.DDL.Metadata, to some place that makes more sense
- move ALL Arbitrary instances in the code to it, since nothing else uses Arbitrary
- or, to the contrary, move all those Arbitrary instances alongside their types, to avoid an orphan instance
https://github.com/hasura/graphql-engine-mono/pull/1667
GitOrigin-RevId: 88e304ea453840efb5c0d39294639b8b30eefb81
Remote relationships are now supported on SQL Server and BigQuery. The major change though is the re-architecture of the remote join execution logic. Prior to this PR, each backend was responsible for processing the remote relationships that are part of its AST.
This is not ideal as there is nothing specific about a remote join's execution that ties it to a backend. The only backend specific part is whether or not the specification of the remote relationship is valid (i.e, we'll need to validate whether the scalars are compatible).
The approach now changes to this:
1. Before delegating the AST to the backend, we traverse the AST, collecting all the remote joins while modifying the AST to add the necessary join fields.
1. Once the remote joins are collected from the AST, the database call is made to fetch the response. The necessary data for the remote join(s) is collected from the database's response and one or more remote schema calls are constructed as necessary.
1. The remote schema calls are then executed and the data from the database and from the remote schemas is joined to produce the final response.
### Known issues
1. Ideally, the traversal of the IR to collect remote joins should return an AST which does not include remote join fields. This operation can be type-safe, but that isn't taken up as part of the PR.
1. There is a lot of code duplication between `Transport/HTTP.hs` and `Transport/Websocket.hs` which needs to be fixed ASAP. This too hasn't been taken up by this PR.
1. The type which represents the execution plan is only modified to handle our current remote joins and as such it will have to be changed to accommodate general remote joins.
1. Use of lenses would have reduced the boilerplate code to collect remote joins from the base AST.
1. The current remote join logic assumes that the join columns of a remote relationship appear with their names in the database response. This, however, is incorrect, as they could be aliased. This can be taken up by anyone; I've left a comment in the code.
### Notes to the reviewers
I think it is best reviewed commit by commit.
1. The first one is very straight forward.
1. The second one refactors the remote join execution logic but other than moving things around, it doesn't change the user facing functionality. This moves Postgres specific parts to `Backends/Postgres` module from `Execute`. Some IR related code to `Hasura.RQL.IR` module. Simplifies various type class function signatures as a backend doesn't have to handle remote joins anymore
1. The third one fixes partial case matches that for some weird reason weren't shown as warnings before this refactor
1. The fourth one generalizes the validation logic of remote relationships and implements `scalarTypeGraphQLName` function on SQL Server and BigQuery which is used by the validation logic. This enables remote relationships on BigQuery and SQL Server.
https://github.com/hasura/graphql-engine-mono/pull/1497
GitOrigin-RevId: 77dd8eed326602b16e9a8496f52f46d22b795598
This reverts the remote schema type customisation and namespacing feature temporarily as we test for certain conditions.
GitOrigin-RevId: f8ee97233da4597f703970c3998664c03582d8e7