Followup to hasura/graphql-engine-mono#4713.
The `memoizeOn` method, part of `MonadSchema`, originally had the following type:
```haskell
memoizeOn
  :: (HasCallStack, Ord a, Typeable a, Typeable b, Typeable k)
  => TH.Name
  -> a
  -> m (Parser k n b)
  -> m (Parser k n b)
```
The reason for operating on `Parser`s specifically was that the `MonadSchema` effect would additionally initialize certain `Unique` values, which appear (nested) in the type of `Parser`.
hasura/graphql-engine-mono#518 changed the type of `memoizeOn`, to additionally allow memoizing `FieldParser`s. These also contained a `Unique` value, which was similarly initialized by the `MonadSchema` effect. The new type of `memoizeOn` was as follows:
```haskell
memoizeOn
  :: forall p d a b
   . (HasCallStack, HasDefinition (p n b) d, Ord a, Typeable p, Typeable a, Typeable b)
  => TH.Name
  -> a
  -> m (p n b)
  -> m (p n b)
```
Note the type `p n b` of the value being memoized: by choosing `p` to be either `Parser k` or `FieldParser`, both can be memoized. Also note the new `HasDefinition (p n b) d` constraint, which provided a `Lens` for accessing the `Unique` value to be initialized.
A quick simplification: the `HasCallStack` constraint was never used by any code, so hasura/graphql-engine-mono#4713 removed it.
hasura/graphql-engine-mono#2980 removed the `Unique` value from our GraphQL-related types entirely, as their original purpose was never truly realized. One part of removing `Unique` consisted of dropping the `HasDefinition (p n b) d` constraint from `memoizeOn`.
What I didn't realize at the time was that this meant that the type of `memoizeOn` could be generalized and simplified much further. This PR finally implements that generalization. The new type is as follows:
```haskell
memoizeOn ::
  forall a p.
  (Ord a, Typeable a, Typeable p) =>
  TH.Name ->
  a ->
  m p ->
  m p
```
This change has a couple of consequences.
1. While constructing the schema, we often output `Maybe (Parser ...)`, to model that the existence of certain pieces of GraphQL schema sometimes depends on the permissions that a certain role has. The previous versions of `memoizeOn` were not able to handle this, as the only thing they could memoize was fully-defined (if not yet fully-evaluated) `(Field)Parser`s. This much more general API _would_ allow memoizing `Maybe (Parser ...)`s. However, we probably have to continue being cautious with this: if we blindly memoize all `Maybe (Parser ...)`s, the resulting code may never be able to decide whether the value is `Just` or `Nothing` - i.e. it never commits to the existence-or-not of a GraphQL schema fragment. This would manifest as non-well-founded knot-tying, which would get reported as an error by the implementation of `memoizeOn`.
tl;dr: This generalization _technically_ allows for memoizing `Maybe` values, but we probably still want to avoid doing so.
For this reason, the PR adds a specialized version of `memoizeOn` to `Hasura.GraphQL.Schema.Parser`.
2. There is no longer any need to connect the `MonadSchema` knot-tying effect with the `MonadParse` effect. In fact, after this PR, the `memoizeOn` method is completely GraphQL-agnostic, and so we implement hasura/graphql-engine-mono#4726, separating `memoizeOn` from `MonadParse` entirely - `memoizeOn` can be defined and implemented as a general Haskell typeclass method.
Since `MonadSchema` has been made into a single-type-parameter type class, it has been renamed to something more general, namely `MonadMemoize`. Its only task is to memoize arbitrary `Typeable p` objects under a combined key consisting of a `TH.Name` and a `Typeable a`.
Also for this reason, the new `MonadMemoize` has been moved to the more general `Control.Monad.Memoize`; a sketch of the resulting class is included after this list.
3. After this change, it's somewhat clearer what `memoizeOn` does: it memoizes an arbitrary value of a `Typeable` type. The only thing that needs to be understood in its implementation is how the manual blackholing works. There is no more semantic interaction with _any_ GraphQL code.
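For reference, a minimal sketch of the resulting class, using only the pieces described above (the real `Control.Monad.Memoize` module may contain further helpers):
```haskell
{-# LANGUAGE ScopedTypeVariables #-}

import Data.Typeable (Typeable)
import qualified Language.Haskell.TH as TH

class Monad m => MonadMemoize m where
  -- | Memoize an arbitrary 'Typeable' value under the combined key of a
  -- Template Haskell name (usually the calling function) and an 'Ord' key.
  memoizeOn ::
    forall a p.
    (Ord a, Typeable a, Typeable p) =>
    TH.Name ->
    a ->
    m p ->
    m p
```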
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4725
Co-authored-by: Daniel Harvey <4729125+danieljharvey@users.noreply.github.com>
GitOrigin-RevId: 089fa2e82c2ce29da76850e994eabb1e261f9c92
### Description
By definition, root fields are at the root of the schema: only functions that craft root fields need to know about how to customize the name of root fields. However, the presence of `Has MkRootFieldName` in `MonadBuildSchemaBase` meant that the entirety of the schema building code was implicitly aware of / capable of altering root field names.
This PR removes this constraint, and moves it to the functions that do craft root fields. This has several upsides:
- it makes it more explicit where root fields are being crafted
- it prevents functions that should not use this from mistakenly applying it to non-root fields
- it simplifies the shared schema context
### Future work
- can we maybe pass this as an argument, instead of making it a required part of the context?
- ~~AFAICT, we only ever use `mempty` for it: is this actually dead code that we should actually just remove altogether?~~
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5235
GitOrigin-RevId: 4268751f3ab87ae8e03b6fe9e1efa1b096200027
For some reason these functions exist in `Backends.Postgres.SQL.Value`.
We don't want to depend on that module here.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5292
GitOrigin-RevId: a09bd3cdb0caf08938bce0728a8d281344c1d4ce
### Description
I am not 100% sure about this PR; while I think the code is better this way, I'm willing to be convinced otherwise.
In short, this PR moves the `RoleName` field into the `SchemaContext`, instead of being a nebulous `Has RoleName` constraint on the reader monad. The major upside of this is that it makes it an explicit named field, rather than something that must be given as part of a tuple of arguments when calling `runReader`.
However, the downside is that it breaks the helper permissions functions of `Schema.Table`, which relied on `Has RoleName r`. This PR makes the choice of passing the role name explicitly to all of those functions, which in turn means first explicitly fetching the role name in a lot of places. It makes it more explicit when a schema building block relies on the role name, but is a bit verbose...
### Alternatives
Some alternatives worth considering:
- attempting something like `Has context r, Has RoleName context`, which would allow them to be independent from the context but still fetch the role name from the reader, but might require type annotations to not be ambiguous
- keeping the permission functions the same, with `Has RoleName r`, and introducing a bunch of newtypes instead of using tuples to explicitly implement all the required `Has` instances
- changing the permission functions to `Has SchemaContext r`, since they are functions used only to build the schema, and therefore may be allowed to be tied to the context.
What do y'all think?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5073
GitOrigin-RevId: 8fd09fafb54905a4d115ef30842d35da0c3db5d2
In the process of decoupling the schema parsers from the GraphQL Engine, we need to remove dependencies on `Hasura.Base.Error`.
First of all, we have avoided using `QErr` in schema parsers code, instead returning a more appropriate data type which can be converted to a `Hasura.Base.Error.QErr` later.
Secondly, we create a new `ParseErrorCode` type to represent parse failure types, which are then converted to a `Hasura.Base.Error.Code` later.
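A minimal sketch of the idea; the constructor names here are hypothetical:
```haskell
-- A hedged sketch: parser code reports one of these, and the engine later maps
-- it to a Hasura.Base.Error.Code when converting the failure into a QErr.
data ParseErrorCode
  = ValidationFailed
  | ParseFailed
  | ConflictingDefinitionsError
  | NotSupported
  deriving (Eq, Show)
```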
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5181
GitOrigin-RevId: 8655e26adb1e7d5e3d552c77a8a403f987b53467
This introduces an `ErrorMessage` newtype which wraps `Text` in a manner which is designed to be easy to construct, and difficult to deconstruct.
It provides functionality similar to `Data.Text.Extended`, but designed _only_ for error messages. Error messages are constructed through `fromString`, concatenation, or the `toErrorValue` function, which is designed to be overridden for all meaningful domain types that might show up in an error message. Notably, there are not and should never be instances of `ToErrorValue` for `String`, `Text`, `Int`, etc. This is so that we correctly represent the value in a way that is specific to its type. For example, all `Name` values (from the _graphql-parser-hs_ library) are single-quoted now; no exceptions.
I have mostly had to add `instance ToErrorValue` for various backend types (and also add newtypes where necessary). Some of these are not strictly necessary for this changeset, as I had bigger aspirations when I started. These aspirations have been tempered by trying and failing twice.
As such, in this changeset, I have started by introducing this type to the `parseError` and `parseErrorWith` functions. In the future, I would like to extend this to the `QErr` record and the various `throwError` functions, but this is a much larger task and should probably be done in stages.
For now, `toErrorMessage` and `fromErrorMessage` are provided for conversion to and from `Text`, but the intent is to stop exporting these once all error messages are converted to the new type.
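A rough sketch of the shape described above (the instances and helpers shown are illustrative, not exhaustive):
```haskell
{-# LANGUAGE DerivingStrategies #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Data.String (IsString)
import Data.Text (Text)

-- Easy to construct (string literals, concatenation), hard to deconstruct.
newtype ErrorMessage = ErrorMessage Text
  deriving newtype (Eq, Semigroup, IsString)

-- Domain types opt in explicitly; there are deliberately no instances for
-- String, Text, Int, and the like.
class ToErrorValue a where
  toErrorValue :: a -> ErrorMessage

-- Temporary escape hatches while the rest of the codebase migrates.
toErrorMessage :: Text -> ErrorMessage
toErrorMessage = ErrorMessage

fromErrorMessage :: ErrorMessage -> Text
fromErrorMessage (ErrorMessage t) = t
```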
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5018
GitOrigin-RevId: 84b37e238992e4312255a87ca44f41af65e2d89a
This moves `MkTypename` and `NamingCase` into their own modules, with the intent of reducing the scope of the schema parsers code, and trying to reduce imports of large modules when small ones will do.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4978
GitOrigin-RevId: 19541257fe010035390f6183a4eaa37bae0d3ca1
This reduces the usage of "utils" modules in the parsers code, especially those that are simply re-exported from elsewhere, to facilitate extracting the parsers code into its own library.
It mostly inlines the imports that are re-exported from `Hasura.Prelude` and `Data.Parser.JSONPath`. It also removes references to `Data.*.Extended` modules. When necessary, it re-implements the functionality (which is typically trivial).
It does not tackle all external dependencies. I observed the following that will take more work:
- `Data.GADT.Compare.Extended`
- `Data.Text.Extended`
- `Hasura.Base.Error`
- `Hasura.RQL.Types.Common`
- `Hasura.Server.Utils`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4964
GitOrigin-RevId: 54ad3c1b7a31f13e34340ebe9fcc36d0ad57b8bd
This improves `parseJSONPath` and `encodeJSONPath` to encode special characters appropriately by delegating to Aeson.
This also makes a couple of improvements to `encodeJSONPath`.
1. The function is moved from `Hasura.Base.Error` to `Data.Parser.JSONPath`. This still doesn't seem too appropriate but it is somewhat better. I am basing this on the fact that its test cases already lived in `Data.Parser.JSONPathSpec`.
2. It now returns `Text`, not `String`.
3. It quotes strings with double quotes (`"`) rather than single quotes (`'`), just like JSON (see the usage sketch after this list).
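A hedged usage sketch of the new behaviour; the exact rendering shown in the comment is illustrative:
```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson.Types (JSONPathElement (..))
import Data.Parser.JSONPath (encodeJSONPath)
import Data.Text (Text)

-- A hedged usage sketch; the precise output format is illustrative.
example :: Text
example = encodeJSONPath [Key "user", Index 0, Key "full name"]
-- expected to render along the lines of: $.user[0]["full name"]
```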
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4935
GitOrigin-RevId: bf44353cd740500245f2e38907a7d6263ae0291c
This reflects the two different usages, which should not be conflated.
We also propagate the type a little more, to avoid `Text`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4931
GitOrigin-RevId: 16278f14aa4c2cb5667ea54bbb6b25e6d362835c
We only use these `Show` instances in error messages (where we call
`show` explicitly anyway) and test cases (in which Hspec requires `Show
a` for any `a` in an assertion).
This removes the instance in favor of a custom `showQErr` function
(which serializes the error to JSON). It is then used in the
error-message-producing code that previously called `show` on a `QErr`.
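A minimal sketch of what such a function could look like, assuming `QErr`'s existing JSON encoding:
```haskell
import qualified Data.Aeson as J
import Data.Text (Text)
import qualified Data.Text.Lazy as TL
import qualified Data.Text.Lazy.Encoding as TLE
import Hasura.Base.Error (QErr)

-- A hedged sketch: serialize the error to JSON text instead of relying on a
-- derived Show instance.
showQErr :: QErr -> Text
showQErr = TL.toStrict . TLE.decodeUtf8 . J.encode
```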
There are two places where we serialize a QErr and then construct a new
QErr from the resulting string. Instead, we modify the existing QErr to
add extra information.
An orphan `Show QErr` instance is retained for tests so that we can have
nice test failure messages.
This is preparation for future changes in which the error message within
`QErr` will not be exposed directly, and therefore will not have a
`Show` instance. That said, it feels like a sensible kind of cleanup
anyway.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4897
GitOrigin-RevId: 8f79f7a356f0aea571156f39aefac242bf751f3a
…fix #5426"
This reverts commit f85742318167d1e51f463c45fcd00f26269c2555.
## Description ✍️
With this commit there is the possibility of conflicting
type definitions with remote schemas. Reverting for now while we determine
a solution, at which point we will add this back in.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4879
Co-authored-by: Gil Mizrahi <8547573+soupi@users.noreply.github.com>
GitOrigin-RevId: 932b4a9226717c826d4bde7e375695354cee8c0c
This came about as I tried to add an instance over catalog versions and
found they were just simple integers most of the time (and in one case,
a float).
I think this change also clarifies how catalog versions work.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4864
GitOrigin-RevId: a6b7db86de564b71a8c2b602bee6a456b8e20d63
The code that builds the GraphQL schema, and `buildGQLContext` in particular, is partial: not every value of `(ServerConfigCtx, GraphQLQueryType, SourceCache, HashMap RemoteSchemaName (RemoteSchemaCtx, MetadataObject), ActionCache, AnnotatedCustomTypes)` results in a valid GraphQL schema. When it fails, we want to be able to return better error messages than we currently do.
The key thing that is missing is a way to trace back GraphQL type information to their origin from the Hasura metadata. Currently, we have a number of correctness checks of our GraphQL schema. But these correctness checks only have access to pure GraphQL type information, and hence can only report errors in terms of that. Possibly the worst is the "conflicting definitions" error, which, in practice, can only be debugged by Hasura engineers. This is terrible DX for customers.
This PR allows us to print better error messages, by adding a field to the `Definition` type that traces the GraphQL type to its origin in the metadata. So the idea is simple: just add `MetadataObjId`, or `Maybe` that, or some other sum type of that, to `Definition`.
However, we want to avoid having to import a `Hasura.RQL` module from `Hasura.GraphQL.Parser`. So we instead define this additional field of `Definition` through a new type parameter, which is threaded through in `Hasura.GraphQL.Parser`. We then define type synonyms in `Hasura.GraphQL.Schema.Parser` that fill in this type parameter, so that it is not visible for the majority of the codebase.
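A hedged sketch of the shape this takes (the field names are hypothetical and the real record has more fields):
```haskell
import Language.GraphQL.Draft.Syntax (Description, Name)

-- The parser library stays agnostic about what an "origin" is:
data Definition origin a = Definition
  { dName :: Name,
    dDescription :: Maybe Description,
    -- new: traces the GraphQL type back to its origin in the metadata
    dOrigin :: Maybe origin,
    dInfo :: a
  }

-- In Hasura.GraphQL.Schema.Parser, type synonyms then pin the parameter so
-- most of the codebase never sees it, e.g.:
--   type Definition a = P.Definition MetadataObjId a
```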
The idea of associating metadata information to `Definition`s really comes to fruition when combined with hasura/graphql-engine-mono#4517. Their combination would allow us to use the API of fatal errors (just like the current `MonadError QErr`) to report _inconsistencies_ in the metadata. Such inconsistencies are then _automatically_ ignored. So no ad-hoc decisions need to be made on how to cut out inconsistent metadata from the GraphQL schema. This will allow us to report much better errors, as well as improve the likelihood of a successful HGE startup.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4770
Co-authored-by: Samir Talwar <47582+SamirTalwar@users.noreply.github.com>
GitOrigin-RevId: 728402b0cae83ae8e83463a826ceeb609001acae
Pretty much all quasi-quoted names in the server code base have ended up in `Hasura.GraphQL.Parser.Constants`. I'm now finding this unpleasant for two reasons:
1. I would like to factor out the parser code into its own Cabal package, and I don't want to have to expose all these names.
2. Most of them really have nothing to do with the parsers.
In order to remedy this, I have:
1. moved the names used by parser code to `Hasura.GraphQL.Parser.DirectiveName`, as they're all related to directives;
2. moved `Hasura.GraphQL.Parser.Constants` to `Hasura.Name`, changing the qualified import name from `G` to `Name`;
3. moved names only used in tests to the appropriate test case;
4. removed unused items from `Hasura.Name`; and
5. grouped related names.
Most of the changes are simply changing `G` to `Name`, which I find much more meaningful.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4777
GitOrigin-RevId: a77aa0aee137b2b5e6faec94495d3a9fbfa1348b
### Description
This small clean-up PR makes one further step towards backend-agnostic actions: it makes all the code parsing custom types backend-agnostic. Surprisingly, this could be done *without* the need to finish generalizing the column parser. The remaining sore point is async queries, which still target Postgres explicitly.
In theory, this is enough to start allowing non-Postgres scalars in custom types. In practice, however:
- no other backend exposes scalars in a way that would allow users to do that as of this PR;
- we currently have no strategy to avoid / detect scalar collisions across backends.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4691
GitOrigin-RevId: bfe63fb131e306663d4406697ce23c02736566c5
(Work here originally done by awjchen, rebased and fixed up for merge by
jberryman)
This is part of a merge train towards GHC 9.2 compatibility. The main
issue is the use of the new abstract `KeyMap` in 2.0. See:
https://hackage.haskell.org/package/aeson-2.0.3.0/changelog
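To illustrate the kind of change involved (using aeson 2.0's own modules; the lookup site is hypothetical):
```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (Object, Value)
import qualified Data.Aeson.Key as Key
import qualified Data.Aeson.KeyMap as KM

-- aeson < 2.0: Object was a plain HashMap Text Value, so HashMap functions
-- applied directly. aeson >= 2.0: Object is the abstract KeyMap, keyed by Key,
-- so lookups go through Data.Aeson.KeyMap with an explicit key conversion.
lookupNameField :: Object -> Maybe Value
lookupNameField = KM.lookup (Key.fromText "name")
```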
Alex's original work is here:
#4305
BEHAVIOR CHANGE NOTE: This change causes a different arbitrary ordering
of serialized JSON, for example during metadata export. CLI users care
about this in particular, and so we need to call it out as a _behavior
change_ as we did in v2.5.0. The good news though is that after this
change ordering should be more stable (alphabetical key order).
See: https://hasurahq.slack.com/archives/C01M20G1YRW/p1654012632634389
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4611
Co-authored-by: awjchen <13142944+awjchen@users.noreply.github.com>
GitOrigin-RevId: 700265162c782739b2bb88300ee3cda3819b2e87
### Description
This PR is a first step in a series of cleanups of action relationships. This first step does not contain any behavioral change, and it simply reorganizes / prunes / rearranges / documents the code. Mainly:
- it divides some files in RQL.Types between metadata types, schema cache types, execution types;
- it renames some types for consistency;
- it minimizes exports and prunes unnecessary types;
- it moves some types in places where they make more sense;
- it replaces uses of `DMap BackendTag` with `BackendMap`.
Most of the "movement" within files re-organizes declarations in a "top-down" fashion, by moving all TH splices to the end of the file, which avoids order or declarations mattering.
### Optional list types
One main type change this PR makes is a replacement of variant list types in `CustomTypes.hs`; we had `Maybe [a]`, or sometimes `Maybe (NonEmpty a)`. This PR harmonizes all of them to `[a]`, as most of the code would use them as such, by doing `fromMaybe []` or `maybe [] toList`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4613
GitOrigin-RevId: bc624e10df587eba862ff27a5e8021b32d0d78a2
## Motivation
This PR rewrites most of Relay to achieve the following:
- ~~fix a bug in which the same node id could refer to two different tables in the schema~~
- remove one of the few remaining uses of the source cache in the schema building code
In doing so, it also:
- simplifies the `BackendSchema` class by removing `node` from it,
- makes it much easier for other backends to support Relay,
- documents, re-organizes, and clarifies the code.
## Description
This PR introduces a new `NodeId` version ~~, and adapts the Postgres code to always generate this V2 version~~. This new id contains the source name, in addition to the table name, in order to disambiguate similar table names across different sources (which is now possible with source customization). In doing so, it now explicitly handles that case for V1 node ids, and returns an explicit error message instead of running the risk of _silently returning the wrong information_.
Furthermore, it adapts `nodeField` to support multiple backends; most of the code was trivial to generalize, and as a result it lowers the cost of entry for other backends, that now only need to support `AFNodeId` in their translation layer.
Finally, it removes one more cycle in the schema building code, by using the same trick we used for remote relationships instead of using the memoization trick of #4576.
## Remaining work
- ~~[ ] write a Changelog entry~~
- ~~[x] adapt all tests that were asserting on an old node id~~
## Future work
This PR was adapted from its original form to avoid a breaking change: while it introduces a Node ID V2, we keep generating V1 IDs and the parser rejects V2 IDs. It will be easy to make the switch at a later date in a subsequent PR.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4593
GitOrigin-RevId: 88e5cb91e8b0646900547fa8c7c0e1463de267a1
This is a first step towards clarifying the role of `UnpreparedValue` as part of the IR. It certainly does not belong in the parser framework.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4588
GitOrigin-RevId: d1582a0b266729b79e00d31057178a4099168e6d
### Description
The main goal of this PR is, as stated, to remove the circular dependency in the schema building code. This cycle arises from the existence of remote relationships: when we build the schema for a source A, a remote relationship might force us to jump to the schema of a source B, or some remote schema. As a result, we end up having to do a dispatch from a "leaf" of the schema, similar to the one done at the root. In turn, this forces us to carry along in the schema a lot of information required for that dispatch, AND it forces us to import the instances in scope, creating an import loop.
As discussed in #4489, this PR implements the "dependency injection" solution: we pass to the schema a function to call to do the dispatch, and to get a generated field for a remote relationship. That way, this function can be chosen at the root level, and the leaves need not be aware of the overall context.
This PR grew a bit bigger than that, however; in an attempt to remove the `SourceCache` from the schema altogether, it changed a lot of functions across the schema building code, to thread along the `SourceInfo b` of the source being built. This avoids having to do cache lookups within a given source. A few cases remain, such as relay, that we might try to tackle in a subsequent PR.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4557
GitOrigin-RevId: 9388e48372877520a72a9fd1677005df9f7b2d72
### Description
There were several functions in `GraphQL.Schema.Common` that were unrelated to the schema building process, and were about metadata manipulation or dependency computation. Having those functions in the schema part of the code forces several places in the code to depend on the schema code, despite being completely unrelated.
This PR moves those functions where they make sense: alongside similar functions in `RQL.Types.*`, and rewrites `getRemoteDependencies` for clarity (it was using the term "indirect dependency" in a way that was inconsistent with the rest of the code).
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4568
GitOrigin-RevId: 948a18cebbb337a8bb6367c1f2d2ef5628209d96
### Description
Several places in the code used `a /= []`, which is inelegant. To my surprise, hlint did not warn about this, despite the fact that it forces an `Eq` instance on the elements. This PR replaces all occurrences of that pattern with `not (null a)` and adds a lint warning for it.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4569
GitOrigin-RevId: 6471e75ade9e71e5d583a0dac7815c01870c696b
## Description
As identified in hasura/graphql-engine#8096, the format string we used for timestamps was incorrect; we were using `%F`, which expands to `%Y-%m-%d`; but that meant that the year was not padded to four digits: `0001` would be represented simply as `1`. However, Postgres interprets that `1` as `2001`, probably due to interpretation rules about two-digit years (in `25/12/01`, `01` is indeed `2001`).
```
# create table timestamp_test ( test timestamptz );
CREATE TABLE
# insert into timestamp_test values ('1-01-01T00:00:57Z');
INSERT 0 1
# select * from timestamp_test;
test
------------------------
2001-01-01 00:00:57+00
(1 row)
```
To fix this, this PR changes the format string to use `%0Y`, which always pads the year number with zeroes.
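A small illustration of the difference, using the time library's format strings (the outputs shown in the comments are what we expect from its padding rules):
```haskell
import Data.Time (UTCTime (..), defaultTimeLocale, formatTime, fromGregorian)

-- An illustration of the padding difference for year 1.
example :: (String, String)
example =
  let t = UTCTime (fromGregorian 1 1 1) 57
   in ( formatTime defaultTimeLocale "%F" t, -- "1-01-01": year not padded
        formatTime defaultTimeLocale "%0Y-%m-%d" t -- "0001-01-01": year padded
      )
```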
## Remaining work
- [x] write Changelog entry
- [ ] copy timestamp tests from the python suite into the hspec tests
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3536
GitOrigin-RevId: fa144111358339fd4a35b32d888c1d2c5b418ea6
### Description
As part of the cache building process, we create / update / migrate the catalog that each DB uses as a place to store event trigger information. The function that decides how this should be done was doing an explicit `case ... of` on the backend tag, instead of delegating to one of the backend classes. The downsides of this is that:
- it adds a "friction point" where the backend matters in the core of the engine, which is otherwise written to be almost entirely backend-agnostic
- it creates imports from deep in the engine to the `Backends`, which we try to restrict to a very small set of clearly identified files (the `Instances` files)
- it is currently implemented using a "catch all" default case, which might not always be correct for new backends
This PR makes the catalog updating process a part of `BackendMetadata`, and cleans the corresponding schema cache code.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4457
GitOrigin-RevId: 592f0eaa97a7c38f4e6d4400e1d2353aab12c97e
## Description
As the name suggests, `DML.Internal` contains internal implementation details of RQL's DML. However, a lot of unrelated parts of the codebase still use some of the code it contains. This PR fixes this, and removes all imports of `RQL.DML.Internal` from outside of `RQL.DML`. Most of the time, this involves moving a function out of `DML.Internal` to an underlying module (see `getRolePermInfo`) or moving a function _back_ into it (see `checkRetCols`).
This PR also clarifies a bit the situation with `withTyAnn` and `withTypeAnn` by renaming the former into `withScalarTypeAnn` and moving them together. Worth noting: there might be a bug lurking in that function, as it doesn't seem to use the proper type annotations for some extension types!
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4380
GitOrigin-RevId: c8ae5b4e8378fefc0bcccf778d97813df727d3cb
## Description
This PR removes `RQL.Types`, which was now only re-exporting a bunch of unrelated modules.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4363
GitOrigin-RevId: 894f29a19bff70b3dad8abc5d9858434d5065417
## Description
This small PR moves all functions in `RQL.Types.hs` to better locations. Most `askX` functions are moved alongside the `unsafe` functions they use. Several other functions are moved closer to their call site. `MetadataM` is moved alongside `Metadata`. This PR also documents the `ask` functions.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4355
GitOrigin-RevId: 0498a7e8f98e7a94af911dd375cad84ace7ddffa
### Description
Small PR that moves code out of `RQL.Types.hs`. Specifically, it moves `HasServerConfigCtx` to where `ServerConfigCtx` is defined. This removes code from `RQL.Types`, makes the dependency on `Server.Types` more explicit, and will make some further cleanups easier.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4336
GitOrigin-RevId: 95bb3467d741763892c4e68a38760497157ba1aa
### Description
This PR improves the `Collect` module by re-ordering the functions to make clear what is public API and what is internal implementation. Furthermore, it makes use of `traverseOf` and `traverseFields` to reduce duplication. To do so, it also introduces a few more lenses in the rest of the codebase, and uses this opportunity to harmonize some structures that were not following our naming convention.
While the diff is massive, a lot of it is just code moving around; the file is now divided into separate sections:
- entry points: IR types for which we want to run the collection
- internal monadic structure
- internal traversals: functions that do nothing but drill down further
- actual transformations: the three cases where we do actually have work to do: selection sets on which we do want to insert join columns, extract remote relationships... those functions are left unchanged by this PR
- internal helpers
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3863
GitOrigin-RevId: f7cbecfae9eed9737b62acfa5848bfcf9d4651f6
I discovered and removed instances of Boolean Blindness about whether json numbers should be stringified or not.
Although quite far-reaching, this is a completely mechanical change and should have no observable impact outside the server code.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3763
GitOrigin-RevId: c588891afd8a6923a135c736f6581a43a2eddbc7
We build the GraphQL schema by combining building blocks such as `tableSelectionSet` and `columnParser`. These building blocks individually build `{InputFields,Field,}Parser` objects. Those objects specify the valid GraphQL schema.
Since the GraphQL schema is role-dependent, at some point we need to know what fragment of the GraphQL schema a specific role is allowed to access, and this is stored in `{Sel,Upd,Ins,Del}PermInfo` objects.
We have passed around these permission objects as function arguments to the schema building blocks since we first started dealing with permissions during the PDV refactor - see hasura/graphql-engine@5168b99e46 in hasura/graphql-engine#4111. This means that, for instance, `tableSelectionSet` has as its type:
```haskell
tableSelectionSet ::
  forall b r m n.
  MonadBuildSchema b r m n =>
  SourceName ->
  TableInfo b ->
  SelPermInfo b ->
  m (Parser 'Output n (AnnotatedFields b))
```
There are three reasons to change this.
1. We often pass a `Maybe (xPermInfo b)` instead of a proper `xPermInfo b`, and it's not clear what the intended semantics of this is. Some potential improvements on the data types involved are discussed in issue hasura/graphql-engine-mono#3125.
2. In most cases we also already pass a `TableInfo b`, and together with the `MonadRole` that is usually also in scope, this means that we could look up the required permissions regardless: so passing the permissions explicitly undermines the "single source of truth" principle. Breaking this principle also makes the code more difficult to read.
3. We are working towards role-based parsers (see hasura/graphql-engine-mono#2711), where the `{InputFields,Field,}Parser` objects are constructed in a role-invariant way, so that we have a single object that can be used for all roles. In particular, this means that the schema building blocks _need_ to be constructed in a role-invariant way. While this PR doesn't accomplish that, it does reduce the amount of role-specific arguments being passed, thus fixing hasura/graphql-engine-mono#3068.
Concretely, this PR simply drops the `xPermInfo b` argument from almost all schema building blocks. Instead, these objects are looked up from the `TableInfo b` as needed. The resulting code is considerably simpler and shorter.
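For comparison, a hedged sketch of the post-change shape: the permission argument is gone, and the `Maybe` reflects that the building block itself decides whether it exists for the current role:
```haskell
tableSelectionSet ::
  forall b r m n.
  MonadBuildSchema b r m n =>
  SourceName ->
  TableInfo b ->
  m (Maybe (Parser 'Output n (AnnotatedFields b)))
```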
One way to interpret this change is as follows. Before this PR, we figured out permissions at the top-level in `Hasura.GraphQL.Schema`, passing down the obtained `xPermInfo` objects as required. After this PR, we have a bottom-up approach where the schema building blocks themselves decide whether they want to be included for a particular role.
So this moves some permission logic out of `Hasura.GraphQL.Schema`, which is very complex.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3608
GitOrigin-RevId: 51a744f34ec7d57bc8077667ae7f9cb9c4f6c962
This PR simplifies the types that represent a remote relationship in IR so that they can be reused in other parts (in remote schema types) which could have remote relationships.
The comments on the PR explain the main changes.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2979
GitOrigin-RevId: 559c51d9d6ae79e2183ce4347018741b9096ac74
GraphQL types can refer to each other in a circular way. The PDV framework used to use values of type `Unique` to recognize two fragments of GraphQL schema as being the same instance. Internally, this is based on `Data.Unique` from the `base` package, which simply increases a counter on every creation of a `Unique` object.
**NB**: The `Unique` values are _not_ used for knot tying the schema combinators themselves (i.e. `Parser`s). The knot tying for `Parser`s is purely based on keys provided to `memoizeOn`. The `Unique` values are _only_ used to recognize two pieces of GraphQL _schema_ as being identical. Originally, the idea was that this would help us with a perfectly correct identification of GraphQL types. But this fully correct equality checking of GraphQL types was never implemented, and does not seem to be necessary to prevent bugs.
Specifically, these `Unique` values are stored as part of `data Definition a`, which specifies a part of our internal abstract syntax tree for the GraphQL types that we expose. The `Unique` values get initialized by the `SchemaT` effect.
In #2894 and #2895, we are experimenting with how (parts of) the GraphQL types can be hidden behind certain permission predicates. This would allow a single GraphQL schema in memory to serve all roles, implementing #2711. The permission predicates get evaluated at query parsing time when we know what role is doing a certain request, thus outputting the correct GraphQL types for that role.
If the approach of #2895 is followed, then the `Definition` objects, and thus the `Unique` values, would be hidden behind the permission predicates. Since the permission predicates are evaluated only after the schema is already supposed to be built, this means that the permission predicates would prevent us from initializing the `Unique` values, rendering them useless.
The simplest remedy to this is to remove our usage of `Unique` altogether from the GraphQL schema and schema combinators. It doesn't serve a functional purpose, doesn't prevent bugs, and requires extra bookkeeping.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2980
GitOrigin-RevId: 50d3f9e0b9fbf578ac49c8fc773ba64a94b1f43d
Source typename customization (hasura/graphql-engine@aac64f2c81) introduced a mechanism to change certain names in the GraphQL schema that is exposed. In particular it allows last-minute modification of:
1. the names of some types, and
2. the names of some root fields.
The above two items are assigned distinct customization algorithms, and at times both algorithms are in scope, so a way to distinguish them is needed.
In the original design, this was addressed by introducing a newtype wrapper `Typename` around GraphQL `Name`s, dedicated to the names of types. However, in the majority of the codebase, type names are also represented by `Name`. For this reason, it was unavoidable to allow for easy conversion. This was supported by a `HasName Typename` instance, as well as by publishing the constructors of `Typename`.
This means that the type safety that newtypes can add is lost. In particular, it is now very easy to confuse type name customization with root field name customization.
This refactors the above design by instead introducing newtypes around the customization operations:
```haskell
newtype MkTypename = MkTypename {runMkTypename :: Name -> Name}
  deriving (Semigroup, Monoid) via (Endo Name)
newtype MkRootFieldName = MkRootFieldName {runMkRootFieldName :: Name -> Name}
  deriving (Semigroup, Monoid) via (Endo Name)
```
The `Monoid` instance allows easy composition of customization operations, piggybacking off of the type of `Endo`maps.
This design allows safe co-existence of the two customization algorithms, while avoiding the syntactic overhead of packing and unpacking newtypes.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2989
GitOrigin-RevId: da3a353a9b003ee40c8d0a1e02872e99d2edd3ca
This is effectively a no-op, the `Left err` case can't actually happen.
- removes some unused logic
- refactors the /healthz endpoint to be clearer
- that includes logging the full QErr if checkMetadataHealth fails,
but it actually can't because the existing Postgres implementation
just lifts
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2849
GitOrigin-RevId: ac8abf51b6d869ad4048419e36012137c86e5abd
We'll see if this improves compile times at all, but I think it's worth
doing as at least the most minimal form of module documentation.
This was accomplished by first compiling everything with
-ddump-minimal-imports, and then a bunch of scripting (with help from
ormolu)
**EDIT** it doesn't seem to improve CI compile times but the noise floor is high as it looks like we're not caching library dependencies anymore
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2730
GitOrigin-RevId: 667eb8de1e0f1af70420cbec90402922b8b84cb4
<!-- Thank you for ss in the Title above ^ -->
## Description
<!-- Please fill thier. -->
<!-- Describe the changes from a user's perspective -->
We don't have a dependency reporting mechanism for the `mssql_run_sql` API, i.e. when a database object (table, column etc.) is dropped through the API, we should raise an exception if any dependencies (relationships, permissions etc.) on the database object exist in the metadata.
This PR addresses the above problem by:
- integrating a transaction into the API, to roll back the SQL query execution if dependencies exist and an exception is thrown
- accepting an optional `cascade` field in the API payload, to drop the dependencies, if any
- accepting an optional `check_metadata_consistency` field, to bypass the dependency check (if its value is set to `false`)
### Related Issues
<!-- Please make surt title -->
<!-- Add the issue number below (e.g. #234) -->
Close #1853
### Solution and Design
<!-- How is this iss -->
<!-- It's better if we elaborate -->
The design/solution follows the `run_sql` API implementation for Postgres backend.
### Steps to test and verify
<!-- If this is a fehis is a bug-fix, how do we verify the fix? -->
- Create author and article tables and track them
- Define object and array relationships
- Try to drop the article table without cascade, or with cascade set to `false`
- The server should raise an exception stating that relationship dependencies exist
## Changelog
- ✅ `CHANGELOG.md` is updated with user-facing content relevant to this PR.
If no changelog is required, then add the `no-changelog-required` label.
## Affected components
<!-- Remove non-affected components from the list -->
- ✅ Server
- ❎ Console
- ❎ CLI
- ❎ Docs
- ❎ Community Content
- ❎ Build System
- ✅ Tests
- ❎ Other (list it)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2636
GitOrigin-RevId: 0ab152295394056c4ca6f02923142a1658ad25dc
Prior to this change, the SQL expression that resulted from translating permissions on functions would refer to the table of the function's return type, rather than the set of rows selected from the function being called.
Now the SQL that results from translating permissions correctly refers to the selected rows.
This PR also contains the suggested additions of https://github.com/hasura/graphql-engine-mono/pull/2563#discussion_r726116863, which simplifies the Boolean Expression IR, but in turn makes the Schema Dependency Discovery algorithm work a bit harder.
We are changing the definition of `data OpExpG`, but the format accepted by its JSON parser remains unchanged. While there does exist a generically derived `instance ToJSON OpExpG` this is only used in the (unpublished) `/v1/metadata/dump_internal_state` API.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2609
Co-authored-by: Gil Mizrahi <8547573+soupi@users.noreply.github.com>
GitOrigin-RevId: bb9a0b4addbc239499dd2268909220196984df72
The only real use was for the dubious multitenant option
--consoleAssetsVersion, which actually overrode not just
the assets version. I.e., as far as I can tell, if you pass
--consoleAssetsVersion to multitenant, that version will
also make it into e.g. HTTP client user agent headers as
the proper graphql-engine version.
I'm dropping that option, since it seems unused in production
and I don't want to go to the effort of fixing it, but am happy
to look into that if folks feels strongly that it should be
kept.
(Reason for attacking this is that I was looking into http
client things around blacklisting, and the versioning thing
is a bit painful around http client headers.)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2458
GitOrigin-RevId: a02b05557124bdba9f65e96b3aa2746aeee03f4a
>
### Description
>
Insert mutations for MSSQL backend. This PR implements execution logic.
### Changelog
- [x] `CHANGELOG.md` is updated with user-facing content relevant to this PR. If no changelog is required, then add the `no-changelog-required` label.
### Affected components
- [x] Server
- [x] Tests
### Related Issues
->
Close https://github.com/hasura/graphql-engine-mono/issues/2114
### Steps to test and verify
>
Track a MSSQL table and perform the generated insert mutation to test.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2248
Co-authored-by: Abby Sassel <3883855+sassela@users.noreply.github.com>
Co-authored-by: Philip Lykke Carlsen <358550+plcplc@users.noreply.github.com>
GitOrigin-RevId: 936f138c80d7a928180e6e7b0c4da64ecc1f7ebc
This commit applies ormolu to the whole Haskell code base by running `make format`.
For in-flight branches, simply merging changes from `main` will result in merge conflicts.
To avoid this, update your branch using the following instructions. Replace `<format-commit>`
by the hash of *this* commit.
```
$ git checkout my-feature-branch
$ git merge <format-commit>^ # and resolve conflicts normally
$ make format
$ git commit -a -m "reformat with ormolu"
$ git merge -s ours post-ormolu
```
https://github.com/hasura/graphql-engine-mono/pull/2404
GitOrigin-RevId: 75049f5c12f430c615eafb4c6b8e83e371e01c8e
This is a follow-up to #1959.
Today, I spent a while in review figuring out that a harmless PR change didn't do anything,
because it was moving from a `runLazy...` to something without the `Lazy`. So let's get
that source of confusion removed.
This should be a bit easier to review commit by commit, since some of the functions had
confusing names. (E.g. there was a misnamed `Migrate.Internal.runTx` before.)
The change should be a no-op.
https://github.com/hasura/graphql-engine-mono/pull/2335
GitOrigin-RevId: 0f284c4c0f814482d7827e7732a6d49e7735b302
>
### Description
>
This PR is an incremental work towards [enabling insert mutations on MSSQL](https://github.com/hasura/graphql-engine-mono/pull/1974). In this PR, we generate insert mutation schema parser for MSSQL backend.
### Changelog
- [ ] `CHANGELOG.md` is updated with user-facing content relevant to this PR. If no changelog is required, then add the `no-changelog-required` label.
### Affected components
- [x] Server
https://github.com/hasura/graphql-engine-mono/pull/2141
GitOrigin-RevId: 8595008dece35f7fded9c52e134de8b97b64f53f
When adding object relationships, we set the nullability of the generated GraphQL field based on whether the database backend enforces that the referenced data always exists. For manual relationships (corresponding to `manual_configuration`), the database backend is unaware of any relationship between data, and hence such fields are always set to be nullable.
For relationships generated from foreign key constraints (corresponding to `foreign_key_constraint_on`), we distinguish between two cases:
1. The "forward" object relationship from a referencing table (i.e. which has the foreign key constraint) to a referenced table. This should be set to be non-nullable when all referencing columns are non-nullable. But in fact, it used to set it to be non-nullable if *any* referencing column is non-nullable, which is only correct in Postgres when `MATCH FULL` is set (a flag we don't consider). This fixes that by changing a boolean conjunction to a disjunction.
2. The "reverse" object relationship from a referenced table to a referencing table which has the foreign key constraint. This should always be set to be nullable. But in fact, it used to always be set to non-nullable, as was reported in hasura/graphql-engine#7201. This fixes that.
Moreover, we have moved the computation of the nullability from `Hasura.RQL.DDL.Relationship` to `Hasura.GraphQL.Schema.Select`: this nullability used to be passed through the `riIsNullable` field of `RelInfo`, but for array relationships this information is not actually used, and moreover the remaining fields of `RelInfo` are already enough to deduce the nullability.
This also adds regression tests for both (1) and (2) above.
https://github.com/hasura/graphql-engine-mono/pull/2159
GitOrigin-RevId: 617f12765614f49746d18d3368f41dfae2f3e6ca
>
### Description
>
This PR supersedes https://github.com/hasura/graphql-engine-mono/pull/1484. Apply `limit` to the table selection before joining relationship rows to improve query performance.
### Changelog
- [x] `CHANGELOG.md` is updated with user-facing content relevant to this PR. If no changelog is required, then add the `no-changelog-required` label.
### Affected components
- [x] Server
### Related Issues
->
Fix https://github.com/hasura/graphql-engine/issues/5745
### Solution and Design
>
Prior to this change, we apply `LIMIT` and `OFFSET` to the outer selection from the sub-query, which includes joins for relationships. Now, we move `LIMIT` and `OFFSET` (if present) to the inner selection of the base table. But this isn't always done: if the `order by` clause references relationships' columns, we apply them at the outer selection. To know more, please refer to [source code note](https://github.com/hasura/graphql-engine-mono/pull/2078/files#diff-46d868ee45d3eaac667cebb34731f573c77d5c9c8097bb9ccf1115fc07f65bfdR652).
```graphql
query {
  article(limit: 2) {
    id
    title
    content
    author {
      name
    }
  }
}
```
Before:
```sql
SELECT
coalesce(json_agg("root"), '[]') AS "root"
FROM
(
SELECT
row_to_json(
(
SELECT
"_4_e"
FROM
(
SELECT
"_0_root.base"."id" AS "id",
"_0_root.base"."title" AS "title",
"_0_root.base"."content" AS "content",
"_3_root.or.author"."author" AS "author"
) AS "_4_e"
)
) AS "root"
FROM
(
SELECT
*
FROM
"public"."article"
WHERE
('true')
) AS "_0_root.base"
LEFT OUTER JOIN LATERAL (
SELECT
row_to_json(
(
SELECT
"_2_e"
FROM
(
SELECT
"_1_root.or.author.base"."name" AS "name"
) AS "_2_e"
)
) AS "author"
FROM
(
SELECT
*
FROM
"public"."author"
WHERE
(("_0_root.base"."author_id") = ("id"))
) AS "_1_root.or.author.base"
) AS "_3_root.or.author" ON ('true')
LIMIT
2
) AS "_5_root"
```
cost:
```
Aggregate (cost=0.73..0.74 rows=1 width=32)
-> Limit (cost=0.15..0.71 rows=2 width=32)
-> Nested Loop Left Join (cost=0.15..223.96 rows=810 width=32)
-> Seq Scan on article (cost=0.00..18.10 rows=810 width=72)
-> Index Scan using author_pkey on author (cost=0.15..0.24 rows=1 width=36)
Index Cond: (article.author_id = id)
SubPlan 1
-> Result (cost=0.00..0.01 rows=1 width=32)
SubPlan 2
-> Result (cost=0.00..0.01 rows=1 width=32)
```
After:
```sql
SELECT
coalesce(json_agg("root"), '[]') AS "root"
FROM
(
SELECT
row_to_json(
(
SELECT
"_4_e"
FROM
(
SELECT
"_0_root.base"."id" AS "id",
"_0_root.base"."title" AS "title",
"_0_root.base"."content" AS "content",
"_3_root.or.author"."author" AS "author"
) AS "_4_e"
)
) AS "root"
FROM
(
SELECT
*
FROM
"public"."article"
WHERE
('true')
LIMIT
2
) AS "_0_root.base"
LEFT OUTER JOIN LATERAL (
SELECT
row_to_json(
(
SELECT
"_2_e"
FROM
(
SELECT
"_1_root.or.author.base"."name" AS "name"
) AS "_2_e"
)
) AS "author"
FROM
(
SELECT
*
FROM
"public"."author"
WHERE
(("_0_root.base"."author_id") = ("id"))
) AS "_1_root.or.author.base"
) AS "_3_root.or.author" ON ('true')
) AS "_5_root"
```
cost:
```
Aggregate (cost=16.47..16.48 rows=1 width=32)
-> Nested Loop Left Join (cost=0.15..16.44 rows=2 width=100)
-> Limit (cost=0.00..0.04 rows=2 width=72)
-> Seq Scan on article (cost=0.00..18.10 rows=810 width=72)
-> Index Scan using author_pkey on author (cost=0.15..8.18 rows=1 width=36)
Index Cond: (article.author_id = id)
SubPlan 1
-> Result (cost=0.00..0.01 rows=1 width=32)
SubPlan 2
-> Result (cost=0.00..0.01 rows=1 width=32)
```
https://github.com/hasura/graphql-engine-mono/pull/2078
Co-authored-by: Evie Ciobanu <1017953+eviefp@users.noreply.github.com>
GitOrigin-RevId: 47eaccdbfb3499efd2c9f733f3312ad31c77916f
The `LazyTxT` type was introduced to avoid connecting to Postgres when a given GraphQL request did not require this. However, through the new way query execution plans are represented, this has now _mostly_ been taken care of in a different way, and so no `LazyTxT` action is generated at all for GraphQL requests that do not fetch data from Postgres.
This removes the laziness of `LazyTxT` by simply making it a newtype wrapper around `Q.TxET`. This simplifies a lot of code in `Hasura.Backends.Postgres.Connection`.
https://github.com/hasura/graphql-engine-mono/pull/1959
GitOrigin-RevId: 58b4d5a05d67bc602b59e02ac338f1c3e63859c7
Query plan caching was introduced by - I believe - hasura/graphql-engine#1934 in order to reduce the query response latency. During the development of PDV in hasura/graphql-engine#4111, it was found out that the new architecture (for which query plan caching wasn't implemented) performed comparably to the pre-PDV architecture with caching. Hence, it was decided to leave query plan caching until some day in the future when it was deemed necessary.
Well, we're in the future now, and there still isn't a convincing argument for query plan caching. So the time has come to remove some references to query plan caching from the codebase. For the most part, any code being removed would probably not be very well suited to the post-PDV architecture of query execution, so arguably not much is lost.
Apart from simplifying the code, this PR will contribute towards making the GraphQL schema generation more modular, testable, and easier to profile. I'd like to eventually work towards a situation in which it's easy to generate a GraphQL schema parser *in isolation*, without being connected to a database, and then parse a GraphQL query *in isolation*, without even listening on any HTTP port. It is important that both of these operations can be examined in detail, and in isolation, since they are two major performance bottlenecks, as well as phases into which many important upcoming features hook.
Implementation
The following have been removed:
- The entirety of `server/src-lib/Hasura/GraphQL/Execute/Plan.hs`
- The core phases of query parsing and execution no longer have any references to query plan caching. Note that this is not to be confused with query *response* caching, which is not affected by this PR. This includes removal of the types:
  - `Opaque`, which is replaced by a tuple. Note that the old implementation was broken and did not adequately hide the constructors.
  - `QueryReusability` (and the `markNotReusable` method). Notably, the implementation of the `ParseT` monad now consists of two, rather than three, monad transformers.
- Cache-related tests (in `server/src-test/Hasura/CacheBoundedSpec.hs`) have been removed.
- References to query plan caching in the documentation.
- The `planCacheOptions` in the `TenantConfig` type class was removed. However, during parsing, unrecognized fields in the YAML config get ignored, so this does not cause a breaking change. (Confirmed manually, as well as in consultation with @sordina.)
- The metrics no longer send cache hit/miss messages.
There are a few places in which one can still find references to query plan caching:
- We still accept the `--query-plan-cache-size` command-line option for backwards compatibility. The `HASURA_QUERY_PLAN_CACHE_SIZE` environment variable is not read.
https://github.com/hasura/graphql-engine-mono/pull/1815
GitOrigin-RevId: 17d92b254ec093c62a7dfeec478658ede0813eb7
Generalized Joins (GJ) IR changes, cherry-picked from the original GJ branch. There is a separate PR for metadata changes (#1727), which can be merged independently, and there will be a different upcoming PR for execution changes.
https://github.com/hasura/graphql-engine-mono/pull/1810
Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
GitOrigin-RevId: c31956af29dc9c9b75d002aba7d93c230697c5f4
## Description
This PR fixes an oversight in the implementation of the resolvers of different backends. To implement resolution from environment variables, both MSSQL and BigQuery were directly fetching the process' environment variables, instead of using the carefully curated set we thread from main. It was working just fine on OSS, but is failing on Cloud.
This PR fixes this by adding an additional argument to `resolveSourceConfig`, to ensure that backends always use the correct set of variables.
https://github.com/hasura/graphql-engine-mono/pull/1891
GitOrigin-RevId: 58644cab7d041a8bf4235e2acfe9cf71533a92a1
### Description
This PR is the first of several PRs meant to introduce Generalized Joins. In this first PR, we add non-breaking changes to the Metadata types for DB-to-DB remote joins. Note that we are currently rejecting the new remote join format in order to keep folks from breaking their metadata (in case of a downgrade). These issues will be tackled (and JSON changes reverted) in subsequent PRs.
This PR also changes the way we construct the schema cache, and breaks the way we process sources in two steps: we first resolve each source and construct a cache of their tables' raw info, then in a second step we build the source output. This is so that we have access to the target source's tables when building db-to-db relationships.
### Notes
- this PR contains a few minor cleanups of the schema
- it also fixes a bug in how we do renames in remote schema relationships
- it introduces cross-source schema dependencies
https://github.com/hasura/graphql-engine-mono/pull/1727
Co-authored-by: Evie Ciobanu <1017953+eviefp@users.noreply.github.com>
GitOrigin-RevId: f625473077bc5fff5d941b70e9a116192bc1eb22
### Description
In our haste to generalize everything for MSSQL, we put every single "suspicious" type in Backend, including ones that weren't required. `Alias` is one of those: it's only used in a type alias, and is actually just an implementation detail of the translation layer. This PR removes it.
https://github.com/hasura/graphql-engine-mono/pull/1759
GitOrigin-RevId: fb348934ec65a51aae7f95d93c83c3bb704587b5
### Description
This PR removes all `fmapX` and `traverseX` functions from RQL.IR, favouring instead `Functor` and `Traversable` instances throughout the code. This was a relatively straightforward change, except for two small pain points: `AnnSelectG` and `AnnInsert`. Both were parametric over two types `a` and `v`, making it impossible to make them traversable functors... But it turns out that in every single use case, `a ~ f v`. Changing those types to take such an `f :: Type -> Type` as an argument instead of `a :: Type` makes it possible to make them functors (see the sketch below).
The only small difference is for `AnnIns`, where I had to introduce one `Identity` transformation for one of the `f` parameters. This is relatively straightforward.
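A hedged sketch of the shape change, with hypothetical type and field names:
```haskell
{-# LANGUAGE DeriveFoldable #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE DeriveTraversable #-}

-- Before: parametric over both the fields type `a` and the value type `v`,
-- which prevents a Traversable instance in `v`.
data AnnSelectBefore a v = AnnSelectBefore
  { asFields :: a,
    asWhere :: Maybe v
  }

-- After: since every use site had `a ~ f v`, take the functor `f` directly;
-- the whole structure can now be traversed in `v`.
data AnnSelectAfter f v = AnnSelectAfter
  { asFields' :: f v,
    asWhere' :: Maybe v
  }
  deriving (Functor, Foldable, Traversable)
```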
### Notes
This PR fixes the most verbose BigQuery hint (`let` instead of `<- pure`).
https://github.com/hasura/graphql-engine-mono/pull/1668
GitOrigin-RevId: e632263a8c559aa04aeae10dcaec915b4a81ad1a
* fix resetting the catalog version to 43 on migration from 1.0 to 2.0
* ci: remove applying patch in test_oss_server_upgrade job
* make the 43 to 46th migrations idempotent
* Set missing HASURA_GRAPHQL_EVENTS_HTTP_POOL_SIZE=8 in upgrade_test
It's not clear why this wasn't caught in CI.
* ci: disable one component of event backpressure test
Co-authored-by: Vishnu Bharathi P <vishnubharathi04@gmail.com>
Co-authored-by: Karthikeyan Chinnakonda <karthikeyan@hasura.io>
Co-authored-by: Brandon Simmons <brandon@hasura.io>
GitOrigin-RevId: c74c6425266a99165c6beecc3e4f7c34e6884d4d
### Description
This PR adds the required IR for DB to DB joins, based on @paf31 and @0x777 's `feature/db-to-db` branch.
To do so, it also refactors the IR to introduce a new type parameter, `r`, which is used to recursively construct the `v` parameter of remote QueryDBs. When collecting remote joins, we replace `r` with `Const Void`, indicating at the type level that there cannot be any leftover remote join.
Furthermore, this PR refactors IR.Select for readability, moves some code from IR.Root to IR.Select to avoid having to deal with circular dependencies, and makes it compile by adding `error` in all new cases in the execution pipeline.
The diff doesn't make it clear, but most of Select.hs is actually unchanged. Declarations have just been reordered by topic, in the following order:
- type declarations
- instance declarations
- type aliases
- constructor functions
- traverse functions
https://github.com/hasura/graphql-engine-mono/pull/1580
Co-authored-by: Phil Freeman <630306+paf31@users.noreply.github.com>
GitOrigin-RevId: bbdcb4119cec8bb3fc32f1294f91b8dea0728721
Remote relationships are now supported on SQL Server and BigQuery. The major change, though, is the re-architecture of the remote join execution logic. Prior to this PR, each backend was responsible for processing the remote relationships that are part of its AST.
This is not ideal, as there is nothing specific about a remote join's execution that ties it to a backend. The only backend-specific part is whether or not the specification of the remote relationship is valid (i.e., we need to validate whether the scalars are compatible).
The approach now changes to this:
1. Before delegating the AST to the backend, we traverse the AST, collect all the remote joins while modifying the AST to add necessary join fields where needed.
1. Once the remote joins are collected from the AST, the database call is made to fetch the response. The necessary data for the remote join(s) is collected from the database's response and one or more remote schema calls are constructed as necessary.
1. The remote schema calls are then executed and the data from the database and from the remote schemas is joined to produce the final response.
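A high-level sketch of those three phases follows. All parameters here (`collect`, `runDB`, `buildCalls`, `runRemote`, `joinResults`) are hypothetical stand-ins; the real code threads considerably more context through each step.

```haskell
-- High-level sketch only; all function parameters are hypothetical
-- stand-ins for the real collection/execution/joining code.
executeWithRemoteJoins ::
  Monad m =>
  (query -> (query, joins)) ->                  -- 1. collect joins, add join fields
  (query -> m dbResponse) ->                    -- backend-specific database call
  (dbResponse -> joins -> remoteCalls) ->       -- 2. build the remote schema calls
  (remoteCalls -> m remoteResponses) ->         -- execute them
  (dbResponse -> remoteResponses -> result) ->  -- 3. join the two responses
  query ->
  m result
executeWithRemoteJoins collect runDB buildCalls runRemote joinResults q = do
  let (q', joins) = collect q
  dbResponse      <- runDB q'
  remoteResponses <- runRemote (buildCalls dbResponse joins)
  pure (joinResults dbResponse remoteResponses)
```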
### Known issues
1. Ideally, the traversal of the IR to collect remote joins should return an AST which does not include remote join fields. This operation can be made type-safe, but that isn't taken up as part of this PR.
1. There is a lot of code duplication between `Transport/HTTP.hs` and `Transport/Websocket.hs` which needs to be fixed ASAP. This too hasn't been taken up by this PR.
1. The type which represents the execution plan is only modified to handle our current remote joins and as such it will have to be changed to accommodate general remote joins.
1. Use of lenses would have reduced the boilerplate code to collect remote joins from the base AST.
1. The current remote join logic assumes that the join columns of a remote relationship appear under their own names in the database response. This, however, is incorrect, as they could be aliased. This can be taken up by anyone; I've left a comment in the code.
### Notes to the reviewers
I think it is best reviewed commit by commit.
1. The first one is very straightforward.
1. The second one refactors the remote join execution logic, but other than moving things around, it doesn't change the user-facing functionality. It moves Postgres-specific parts from `Execute` to the `Backends/Postgres` module, and some IR-related code to the `Hasura.RQL.IR` module. It also simplifies various type class function signatures, as a backend doesn't have to handle remote joins anymore.
1. The third one fixes partial case matches that, for some weird reason, weren't shown as warnings before this refactor.
1. The fourth one generalizes the validation logic of remote relationships and implements `scalarTypeGraphQLName` function on SQL Server and BigQuery which is used by the validation logic. This enables remote relationships on BigQuery and SQL Server.
https://github.com/hasura/graphql-engine-mono/pull/1497
GitOrigin-RevId: 77dd8eed326602b16e9a8496f52f46d22b795598
This reverts the remote schema type customisation and namespacing feature temporarily as we test for certain conditions.
GitOrigin-RevId: f8ee97233da4597f703970c3998664c03582d8e7
event catalog:
- `hdb_catalog` is no longer automatically created
- catalog is initialised when the first event trigger is created
- catalog initialisation is done during the schema cache build, using `ArrowCache` so it is only run in response to a change to the set of event triggers
event queue:
- `processEventQueue` thread is prevented from starting when `HASURA_GRAPHQL_EVENTS_FETCH_INTERVAL=0`
- `processEventQueue` thread only processes sources for which at least one event trigger exists in some table in the source
Co-authored-by: Anon Ray <616387+ecthiender@users.noreply.github.com>
GitOrigin-RevId: 73f256465d62490cd2b59dcd074718679993d4fe
* add `use_prepared_statements` option to add_pg_source API
* update CHANGELOG.md
* change default value for 'use_prepared_statements' to False
* update CHANGELOG.md
GitOrigin-RevId: 6f5b90991f4a8c03a51e5829d2650771bb0e29c1
While debugging issues with HLS, Reed Mullanix noticed that we don't use relative paths. This leads to problems when using HLS + Emacs due to a bug in `lsp-mode` which prevents it from finding the correct project root.
However, it is still a good practice to use relative paths in TH for other reasons, including being able to import these modules in GHCi.
This PR should make it so HLS-1.0 & emacs provide type inference, imports, etc., in all modules in our codebase.
GitOrigin-RevId: 5f53b9a7ccf46df1ea7be94ff0a5c6ec861f4ead
Fixes https://github.com/hasura/graphql-engine-mono/issues/712
Main point of interest: the `Hasura.SQL.Backend` module.
This PR creates an `Exists` type, parameterized by the indexed type and the packed constraint, while hiding all of its complexity by not exporting the constructor.
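As a rough illustration, the shape of such a type is sketched below; this is a hedged simplification (the names `mkExists` and `runExists` are illustrative), and the actual `Exists` described above comes with more instances and helpers.

```haskell
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE ExistentialQuantification #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE RankNTypes #-}
-- Rough illustration only; the real module provides more instances and
-- helpers than this sketch.
module ExistsSketch (Exists, mkExists, runExists) where

import Data.Kind (Constraint, Type)

-- Packs a value `f i` for some hidden index `i`, together with evidence
-- that `i` satisfies the packed constraint `c`. The constructor stays
-- unexported, so callers can only go through `mkExists` / `runExists`.
data Exists (c :: Type -> Constraint) (f :: Type -> Type)
  = forall i. c i => Exists (f i)

mkExists :: c i => f i -> Exists c f
mkExists = Exists

runExists :: (forall i. c i => f i -> r) -> Exists c f -> r
runExists k (Exists x) = k x
```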
Existential constructors/types which are no longer (directly) existential:
- [x] BackendSourceInfo :: BackendSourceInfo
- [x] BackendSourceMetadata :: BackendSourceMetadata
- [x] MOSourceObjId :: MetadataObjId
- [x] SOSourceObj :: SchemaObjId
- [x] RFDB :: RootField
- [x] LQP :: LiveQueryPlan
- [x] ExecStepDB :: ExecutionStep
This PR also removes ALL usages of `Typeable.cast` from our codebase. We still need to derive `Typeable` in a few places in order to be able to derive `Data` in one place. I have not dug deeper to see why this is needed.
GitOrigin-RevId: bb47e957192e4bb0af4c4116aee7bb92f7983445
fixes #3868
docker image - `hasura/graphql-engine:inherited-roles-preview-48b73a2de`
Note:
To be able to use the inherited roles feature, the graphql-engine should be started with the env variable `HASURA_GRAPHQL_EXPERIMENTAL_FEATURES` set to `inherited_roles`.
Introduction
------------
This PR implements the idea of multiple roles as presented in this [paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/FGALanguageICDE07.pdf). The multiple roles feature in this PR can be used via inherited roles. An inherited role is a role which can be created by combining multiple singular roles. For example, if there are two roles `author` and `editor` configured in the graphql-engine, then we can create an inherited role named `combined_author_editor`, which combines the select permissions of the `author` and `editor` roles; GraphQL queries can then be made using the `combined_author_editor` role.
How are the select permissions of different roles combined?
------------------------------------------------------------
A select permission includes 5 things:
1. Columns accessible to the role
2. Row selection filter
3. Limit
4. Allow aggregation
5. Scalar computed fields accessible to the role
Suppose there are two roles: `role1` gives access to the `address` column with row filter `P1`, and `role2` gives access to both the `address` and `phone` columns with row filter `P2`. We then create a new role `combined_roles` which combines `role1` and `role2`.
Let's say the following GraphQL query is queried with the `combined_roles` role.
```graphql
query {
  employees {
    address
    phone
  }
}
```
This will translate to the following SQL query:
```sql
select
  (case when (P1 or P2) then address else null end) as address,
  (case when P2 then phone else null end) as phone
from employee
where (P1 or P2)
```
The other parameters of the select permission will be combined in the following manner:
1. Limit - Minimum of the limits will be the limit of the inherited role
2. Allow aggregations - If any of the role allows aggregation, then the inherited role will allow aggregation
3. Scalar computed fields - same as table column fields, as in the above example
APIs for inherited roles:
----------------------
1. `add_inherited_role`
`add_inherited_role` is the [metadata API](https://hasura.io/docs/1.0/graphql/core/api-reference/index.html#schema-metadata-api) to create a new inherited role. It accepts two arguments:
`role_name`: the name of the inherited role to be added (String)
`role_set`: list of roles that need to be combined (Array of Strings)
Example:
```json
{
  "type": "add_inherited_role",
  "args": {
    "role_name": "combined_user",
    "role_set": [
      "user",
      "user1"
    ]
  }
}
```
After adding it, an inherited role can be used just like any singular role.
Note:
An inherited role can only be created with non-inherited/singular roles.
2. `drop_inherited_role`
The `drop_inherited_role` API accepts the name of the inherited role and drops it from the metadata. It accepts a single argument:
`role_name`: name of the inherited role to be dropped
Example:
```json
{
  "type": "drop_inherited_role",
  "args": {
    "role_name": "combined_user"
  }
}
```
Metadata
---------
The derived roles metadata will be included under the `experimental_features` key while exporting the metadata.
```json
{
  "experimental_features": {
    "derived_roles": [
      {
        "role_name": "manager_is_employee_too",
        "role_set": [
          "employee",
          "manager"
        ]
      }
    ]
  }
}
```
Scope
------
Only postgres queries and subscriptions are supported in this PR.
Important points:
-----------------
1. All columns exposed to an inherited role will be marked as `nullable`; this is done so that cell-value nullification can be performed.
TODOs
-------
- [ ] Tests
- [ ] Test a GraphQL query running with an inherited role without enabling inherited roles in experimental features
- [ ] Tests for aggregate queries, limit, computed fields, functions, subscriptions (?)
- [ ] Introspection test with an inherited role (nullability changes in an inherited role)
- [ ] Docs
- [ ] Changelog
Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
GitOrigin-RevId: 3b8ee1e11f5ceca80fe294f8c074d42fbccfec63