This came about as I tried to add an instance over catalog versions and
found they were just simple integers most of the time (and in one case,
a float).
I think this change also clarifies how catalog versions work.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4864
GitOrigin-RevId: a6b7db86de564b71a8c2b602bee6a456b8e20d63
This implements an initial set of DTO types that represent serialized metadata. These new types come with autodocodec codecs, which are used to derive both JSON serialization and OpenAPI documentation. This ensures that we can automatically generate API documentation that is guaranteed to match the JSON produced by the server.
For the moment the new types are not used for anything except to generate an early version of an OpenAPI document. Because this is early work, the DTO types for each metadata format version list only top-level properties, with placeholders for the type of each property. This early iteration demonstrates using a sum type in Haskell that maps to a tagged union in OpenAPI (using the `version` field value as a tag).
This work is experimental and incomplete! Please do not incorporate the generated OpenAPI documentation into essential workflows at this time.
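For readers unfamiliar with autodocodec, here is a minimal sketch of the idea; the `MetadataDTO` type and its fields are hypothetical stand-ins, not the DTO types added in this PR. One codec value drives the JSON instances and the OpenAPI schema, so the two cannot drift apart.

```haskell
{-# LANGUAGE DerivingVia #-}
{-# LANGUAGE OverloadedStrings #-}

import Autodocodec (Autodocodec (..), HasCodec (..), object, optionalField, requiredField, (.=))
import Autodocodec.OpenAPI (declareNamedSchemaViaCodec)
import Data.Aeson (FromJSON, ToJSON)
import Data.OpenApi (ToSchema (..))
import Data.Text (Text)

-- Illustrative stand-in for a metadata DTO; the real types in this PR differ.
data MetadataDTO = MetadataDTO
  { metaVersion :: Int,
    metaSources :: Maybe [Text]
  }
  deriving (FromJSON, ToJSON) via (Autodocodec MetadataDTO)

-- A single codec describes the object's shape and documents each field...
instance HasCodec MetadataDTO where
  codec =
    object "MetadataDTO" $
      MetadataDTO
        <$> requiredField "version" "metadata format version" .= metaVersion
        <*> optionalField "sources" "placeholder for the sources property" .= metaSources

-- ...and the same codec produces the OpenAPI schema, so docs and JSON agree.
instance ToSchema MetadataDTO where
  declareNamedSchema = declareNamedSchemaViaCodec
```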
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4801
GitOrigin-RevId: d2f110a6237b73520cdba24667333ef14e8cdd3d
### Description
This PR removes the need for the `SourceCache` when building the schema for the actions. To do so, it changes the way we represent custom types in the source cache. Instead of trying to reuse the same `ObjectTypeDefinition` and `TypeRelationship`, we now have separate `AnnotatedObjectType` and `AnnotatedRelationship`. When building them, at schema cache building time, we persist all the relevant source information, so that it's all available at schema building time.
This PR makes no attempt at re-using `RemoteRelationship` primitives, to avoid having to change the way async action queries are executed, and to avoid having to make complicated changes to how we parse and represent those relationships.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4813
GitOrigin-RevId: 3cc65c5a043c8d3da5f7214eed40c558c4349327
Pretty much all quasi-quoted names in the server code base have ended up in `Hasura.GraphQL.Parser.Constants`. I'm now finding this unpleasant for two reasons:
1. I would like to factor out the parser code into its own Cabal package, and I don't want to have to expose all these names.
2. Most of them really have nothing to do with the parsers.
In order to remedy this, I have:
1. moved the names used by parser code to `Hasura.GraphQL.Parser.DirectiveName`, as they're all related to directives;
2. moved `Hasura.GraphQL.Parser.Constants` to `Hasura.Name`, changing the qualified import name from `G` to `Name`;
3. moved names only used in tests to the appropriate test case;
4. removed unused items from `Hasura.Name`; and
5. grouped related names.
Most of the changes are simply changing `G` to `Name`, which I find much more meaningful.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4777
GitOrigin-RevId: a77aa0aee137b2b5e6faec94495d3a9fbfa1348b
## Description
Following on from #4572, this removes more dead code as identified by Weeder. Comments and thoughts similarly welcome!
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4587
GitOrigin-RevId: 73aa6a5a2833ee41d29b71fcd0a72ed19822ca73
(Work here originally done by awjchen, rebased and fixed up for merge by
jberryman)
This is part of a merge train towards GHC 9.2 compatibility. The main
issue is the use of the new abstract `KeyMap` in 2.0. See:
https://hackage.haskell.org/package/aeson-2.0.3.0/changelog
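As a generic illustration of the call-site changes this entails (not code from this PR): object fields now go through `Data.Aeson.KeyMap` and `Data.Aeson.Key` instead of treating an `Object` as a plain `HashMap Text Value`.

```haskell
import Data.Aeson (Object, Value)
import qualified Data.Aeson.Key as K
import qualified Data.Aeson.KeyMap as KM
import Data.Text (Text)

-- Pre-2.0, an aeson 'Object' was a 'HashMap Text Value' and could be touched
-- with Data.HashMap.Strict directly; now it is an abstract 'KeyMap Value'.

lookupField :: Text -> Object -> Maybe Value
lookupField name = KM.lookup (K.fromText name)

renameField :: Text -> Text -> Object -> Object
renameField old new obj =
  case KM.lookup (K.fromText old) obj of
    Nothing -> obj
    Just v -> KM.insert (K.fromText new) v (KM.delete (K.fromText old) obj)
```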
Alex's original work is here:
#4305
BEHAVIOR CHANGE NOTE: This change causes a different arbitrary ordering
of serialized JSON, for example during metadata export. CLI users in
particular care about this, so we need to call it out as a _behavior
change_, as we did in v2.5.0. The good news is that after this change
the ordering should be more stable (alphabetical key order).
See: https://hasurahq.slack.com/archives/C01M20G1YRW/p1654012632634389
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4611
Co-authored-by: awjchen <13142944+awjchen@users.noreply.github.com>
GitOrigin-RevId: 700265162c782739b2bb88300ee3cda3819b2e87
### Description
This PR is a first step in a series of cleanups of action relationships. This first step does not contain any behavioral change, and it simply reorganizes / prunes / rearranges / documents the code. Mainly:
- it divides some files in RQL.Types between metadata types, schema cache types, execution types;
- it renames some types for consistency;
- it minimizes exports and prunes unnecessary types;
- it moves some types in places where they make more sense;
- it replaces uses of `DMap BackendTag` with `BackendMap`.
Most of the "movement" within files re-organizes declarations in a "top-down" fashion, by moving all TH splices to the end of the file, which avoids order or declarations mattering.
### Optional list types
The main type change this PR makes is a replacement of the various optional list types in `CustomTypes.hs`: we had `Maybe [a]`, or sometimes `Maybe (NonEmpty a)`. This PR harmonizes all of them to `[a]`, as most of the code would use them as such anyway, by doing `fromMaybe []` or `maybe [] toList`.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4613
GitOrigin-RevId: bc624e10df587eba862ff27a5e8021b32d0d78a2
## Description
This PR removes `RQL.Types`, which by now was only re-exporting a bunch of unrelated modules.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4363
GitOrigin-RevId: 894f29a19bff70b3dad8abc5d9858434d5065417
### Description
`HasSystemDefined` is defined in `RQL.Types`, but only used in one place, `LegacyCatalog`, to avoid passing a boolean around. It is easily replaced by an ad-hoc `ReaderT`.
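A rough sketch of the shape of that replacement, with illustrative names rather than the PR's exact code:

```haskell
import Control.Monad.Reader (ReaderT, ask, runReaderT)

-- Illustrative stand-in: this flag marks catalog items created by Hasura
-- itself rather than by the user.
newtype SystemDefined = SystemDefined {unSystemDefined :: Bool}

-- Before: a 'HasSystemDefined m' constraint threaded through the signatures.
-- After: the flag travels in a local ReaderT layer and is read back where needed.
type WithSystemDefined m = ReaderT SystemDefined m

askSystemDefined :: Monad m => WithSystemDefined m SystemDefined
askSystemDefined = ask

runWithSystemDefined :: SystemDefined -> WithSystemDefined m a -> m a
runWithSystemDefined = flip runReaderT
```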
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4337
GitOrigin-RevId: 649d758bb2b18b39533429dda5ab71afde62fb53
### Description
Small PR that moves code out of `RQL.Types.hs`. Specifically, it moves `HasServerConfigCtx` to where `ServerConfigCtx` is defined. This removes code from `RQL.Types`, makes the dependency on `Server.Types` more explicit, and will make some further cleanups easier.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4336
GitOrigin-RevId: 95bb3467d741763892c4e68a38760497157ba1aa
UPDATE: After testing in CI it turns out that the compile-time improvement is better than expected: even though we always have to recompile the OSS lib (due to Version.hs), downstream packages like Pro and multi-tenant can still benefit from some caching and avoid full recompilation. In the best case this takes us from 22 minutes to 13 minutes total.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/4104
GitOrigin-RevId: 76cbfc157064b33856e30f4c2b2ab2366f9c6089
### Motivation
While we strive to write clear code, we have historically struggled at Hasura with having very different styles and standards across the codebase. There have been efforts to standardize our coding style, and we have an official style guide that isn't maintained as closely as it should be... We still have some work in front of us.
However, in the last ~year or so, there's been a huge push towards incrementally improving the situation. As part of this we've been blocking PRs that don't add enough comments, or don't improve the files that they touch.
While looking at `Hasura.GraphQL.Analyse`, it became apparent that this file did not meet the engineering standards that I would expect to see addressed during a code review. Some ways in which I think it falls short:
- lack of documentation
- no clear distinction between public / internal components
- "unidiomatic" Haskell code (such as using `Either Result Error`)
While there's no problem with a file looking like this during development, those issues should have been caught at review time. The fact that they weren't indicates a problem in our process that we will need to address: code quality and maintainability is paramount, and we all need to do our part.
### Description
This PR rewrites all of `Hasura.GraphQL.Analyze`, and adapts `Hasura.Server.OpenAPI` accordingly where needed. I've attempted to clarify names and add documentation based on my understanding of the code, and to clean up what was unused (such as field variables). I don't think this PR is good enough as-is, and I welcome criticism where I got my comments wrong / am happy to help y'all add more.
This PR makes one small change in the way error messages are reported (and adjusts the corresponding test accordingly); each error message is now prefixed with the path within the selection set:
```
⚠️ $.test.foo.bar.baz.mizpelled: field 'mizpelled' not found in object 'Baz'
```
### Note
This PR is currently **on top of #3962**. You can preview the changes in isolation by [diffing the branches](https://github.com/hasura/graphql-engine-mono/compare/nicuveo/clean-rest-endpoint-inconsistency-check..nicuveo/rewrite-analysis).
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3963
Co-authored-by: paritosh-08 <85472423+paritosh-08@users.noreply.github.com>
GitOrigin-RevId: 5ec38e0e753f0c12096a350db0737658495e2f15
### Motivation
#2338 introduced a way to validate REST queries against the metadata after a change, to properly report any inconsistency that would emerge from a change in the underlying structure of our schema. However, the way this was done was quite complex and error-prone. Namely: we would use the generated schema parsers to statically execute an introspection query, similar to the one we use for remote schemas, then parse the resulting bytestring as if it were coming from a remote schema.
This led to several issues: the code was using remote schema primitives, and was associated with remote schema code despite being unrelated, which led to absurd situations like creating fake `Variable`s whose type was also their name. A lot of the code had to deal with the fact that we might fail to re-parse our own schema. Additionally, some of it was dead code that, for some reason, GHC did not warn about. But more fundamentally, this architecture decision creates a dependency between unrelated pieces of the engine: modifying the internal processing of root fields or the introspection of remote schemas now risks impacting the unrelated `OpenAPI` feature.
### Description
This PR decouples that process from the remote schema introspection logic and from the execution engine by making `Analyse` and `OpenAPI` work on the generic `G.SchemaIntrospection` instead. To accomplish this, it:
- adds `GraphQL.Parser.Schema.Convert`, to convert from our "live" schema back to a flat `SchemaIntrospection`
- persists in the schema cache the `admin` introspection generated when building the schema, and uses it both for validation and for generating the `OpenAPI`.
### Known issues and limitations
This adds a bit of memory pressure to the engine, as we persist the entire schema in the schema cache. This might be acceptable in the short-term, but we have several potential ideas going forward should this be a problem:
- cache the result of `Analyze`: when it becomes possible to build the `OpenAPI` purely with the result of `Analyze` without any additional schema information, then we could cache that instead, reducing the footprint
- cache the `OpenAPI`: if it doesn't need to change every time the endpoint is queried, then it should be possible to cache the entire `OpenAPI` object instead of the schema
- cache a copy of the `FieldParsers` used to generate the schema: as those are persisted through the GraphQL `Context`, and are the only input required to generate the `Schema`, making them accessible in the schema cache would allow us to have the exact same feature with no additional memory cost, at the price of a slightly slower and more complicated process (need to rebuild the `Schema` every time we query the OpenAPI endpoint)
- cache nothing at all, and rebuild the admin schema from scratch every time.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3962
Co-authored-by: paritosh-08 <85472423+paritosh-08@users.noreply.github.com>
GitOrigin-RevId: a8b9808170b231fdf6787983b4a9ed286cde27e0
### Description
This is it! This PR enables the Metadata API for remote relationships from remote schemas, adds tests, ~~adds documentation~~, and adds an entry to the Changelog. This is the release PR that enables the feature.
### Checklist
- [ ] Tests:
- [x] RS-to-Postgres (high level)
- [x] RS-to-RS (high level)
- [x] From RS specifically (testing for edge cases)
- [x] Metadata API tests
- [ ] Unit testing the actual engine?
- [x] Changelog entry
- [ ] Documentation?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3974
Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
Co-authored-by: Vishnu Bharathi <4211715+scriptnull@users.noreply.github.com>
Co-authored-by: jkachmar <8461423+jkachmar@users.noreply.github.com>
GitOrigin-RevId: c9aebf12e6eebef8d264ea831a327b968d4be9d2
### Description
This small PR improves the representation of an endpoint method from `Text` to an enum of the supported methods. Additionally, it cleans some of the instances defined on surrounding types (such as Postgres-specific instances on Endpoint types).
Due to a name conflict, this makes `RQL.Types.Endpoint` impossible to re-export from `RQL.Types`, which in turn forces several other modules to import it explicitly, which I think is fine since we want to ultimately get rid of `RQL.Types`.
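As a rough illustration of this kind of change (the actual type and constructor names in the PR may differ):

```haskell
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (FromJSON (..), ToJSON (..), withText)
import Data.Text (Text)

-- Hypothetical enum replacing a stringly-typed method: only the methods a
-- REST endpoint actually supports can be represented at all.
data EndpointMethod = GET | POST | PUT | DELETE | PATCH
  deriving (Show, Eq, Ord, Enum, Bounded)

methodText :: EndpointMethod -> Text
methodText = \case
  GET -> "GET"
  POST -> "POST"
  PUT -> "PUT"
  DELETE -> "DELETE"
  PATCH -> "PATCH"

instance ToJSON EndpointMethod where
  toJSON = toJSON . methodText

instance FromJSON EndpointMethod where
  parseJSON = withText "EndpointMethod" $ \case
    "GET" -> pure GET
    "POST" -> pure POST
    "PUT" -> pure PUT
    "DELETE" -> pure DELETE
    "PATCH" -> pure PATCH
    t -> fail $ "unsupported endpoint method: " <> show t
```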
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3965
GitOrigin-RevId: 33869007d0d818ddf486fb61d1f6099f9dad7570
### Description
Several libraries define `catMaybes` as `mapMaybe id`. We had it defined in `Data.HashMap.Strict.Extended` already. This small PR also defines it in `Extended` modules for other containers and replaces every occurrence of `mapMaybe id` accordingly.
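For reference, the added definitions are essentially the following (the concrete set of `Extended` modules touched may differ from this sketch):

```haskell
import qualified Data.HashMap.Strict as HashMap
import qualified Data.IntMap.Strict as IntMap

-- The named helper is just the `mapMaybe id` idiom, per container type.
catMaybesHashMap :: HashMap.HashMap k (Maybe v) -> HashMap.HashMap k v
catMaybesHashMap = HashMap.mapMaybe id

catMaybesIntMap :: IntMap.IntMap (Maybe v) -> IntMap.IntMap v
catMaybesIntMap = IntMap.mapMaybe id
```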
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3884
GitOrigin-RevId: d222a2ca2f4eb9b725b20450a62a626d3886dbf4
### Description
There were several places in the codebase where we would either implement a generic container, or express the need for one. This PR extracts / creates all relevant containers, and adapts the relevant parts of the code to make use of said new generic containers. More specifically, it introduces the following modules:
- `Data.Set.Extended`, for new functions on `Data.Set`
- `Data.HashMap.Strict.Multi`, for hash maps that accept multiple values
- `Data.HashMap.Strict.NonEmpty`, for hash maps that can never be constructed as empty
- `Data.Trie`, for a generic implementation of a prefix tree
This PR makes use of those new containers in the following parts of the code:
- `Hasura.GraphQL.Execute.RemoteJoin.Types`
- `Hasura.RQL.Types.Endpoint*`
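To give a flavour of these containers, here is a hypothetical sketch of a multi-valued hash map; the names and API are illustrative and may differ from the PR's actual interface:

```haskell
import Data.Hashable (Hashable)
import qualified Data.HashMap.Strict as M
import qualified Data.Set as S

-- A thin wrapper over HashMap where inserting accumulates values for a key
-- instead of overwriting them.
newtype MultiMap k v = MultiMap {unMultiMap :: M.HashMap k (S.Set v)}

empty :: MultiMap k v
empty = MultiMap M.empty

insert :: (Eq k, Hashable k, Ord v) => k -> v -> MultiMap k v -> MultiMap k v
insert k v = MultiMap . M.insertWith S.union k (S.singleton v) . unMultiMap

lookupKey :: (Eq k, Hashable k) => k -> MultiMap k v -> S.Set v
lookupKey k = M.findWithDefault S.empty k . unMultiMap
```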
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3828
GitOrigin-RevId: e6c1b971bcb3f5ab66bc91d0fa4d0e9df7a0c6c6
The only purpose was enabling the developer API by default. I don't
think that justifies a flag and CPP usage.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3820
GitOrigin-RevId: 058c9a7b03e5e164ef88e35c42f50bae3c42b5b6
No logic in this PR, just tidying things up (renaming the backend from `Experimental` to `DataWrapper`).
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3779
GitOrigin-RevId: f11acf563ccd8b9f16bc23c5e92da392aa4cfb2c
I discovered and removed instances of boolean blindness about whether JSON numbers should be stringified or not.
Although quite far-reaching, this is a completely mechanical change and should have no observable impact outside the server code.
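The shape of the fix is roughly the following (constructor names are illustrative): the flag's meaning is spelled out at every call site instead of being a bare `Bool`.

```haskell
-- Hedged sketch of the replacement type.
data StringifyNumbers
  = StringifyNumbers
  | Don'tStringifyNumbers
  deriving (Show, Eq)

-- Where a function used to take 'Bool', it now takes 'StringifyNumbers', and
-- only the boundary that still needs a Bool converts explicitly.
shouldStringify :: StringifyNumbers -> Bool
shouldStringify StringifyNumbers = True
shouldStringify Don'tStringifyNumbers = False
```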
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3763
GitOrigin-RevId: c588891afd8a6923a135c736f6581a43a2eddbc7
- consistent qualified imports
- less convoluted initialization of pro logging HTTP manager
- pass pro HTTP manager directly instead of via Has
- remove some dead healthcheck code
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3639
GitOrigin-RevId: dfa7b9c62d1842a07a8514cdb77f1ed86064fb06
spec: https://github.com/hasura/graphql-engine-mono/pull/2278
Briefly:
- extend metadata so that allowlist entries get a new scope field
- update `add_collection_to_allowlist` to accept this new scope field,
and adds `update_scope_of_collection_in_allowlist` to change the scope
- scope can be global or role-based; a collection is available for every
role if it is global, and available to every listed role if it is role-based
- graphql-engine-oss is aware of role-based allowlist metadata; collections
with non-global scope are treated as if they weren't in the allowlist
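A hypothetical sketch of the scope type described above (the real metadata type may differ):

```haskell
import Data.List.NonEmpty (NonEmpty)

-- A collection is either available to every role, or only to the roles listed.
data AllowlistScope role
  = AllowlistScopeGlobal
  | AllowlistScopeRoles (NonEmpty role)
  deriving (Show, Eq)

-- OSS graphql-engine only honours global entries; role-scoped collections are
-- treated as if they were not in the allowlist at all.
visibleInOSS :: AllowlistScope role -> Bool
visibleInOSS AllowlistScopeGlobal = True
visibleInOSS (AllowlistScopeRoles _) = False
```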
To run the tests:
- `cabal run graphql-engine-tests -- unit --match Allowlist`
- py-tests against pro:
- launch `graphql-engine-pro` with `HASURA_GRAPHQL_ADMIN_SECRET` and `HASURA_GRAPHQL_ENABLE_ALLOWLIST`
- `pytest test_allowlist_queries.py --hge-urls=... --pg-urls=... --hge-key=... --test-allowlist-queries --pro-tests`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2477
Co-authored-by: Anon Ray <616387+ecthiender@users.noreply.github.com>
Co-authored-by: Robert <132113+robx@users.noreply.github.com>
GitOrigin-RevId: 01f8026fbe59d8701e2de30986511a452fce1a99
This commit introduces an "experimental" backend adapter to the GraphQL Engine.
It defines a high-level interface which will eventually be used as the basis for implementing separate data source query generation & marshaling services that communicate with the GraphQL Engine Server via some protocol.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2684
Co-authored-by: awjchen <13142944+awjchen@users.noreply.github.com>
Co-authored-by: Chris Parks <592078+cdparks@users.noreply.github.com>
GitOrigin-RevId: 4463b682142ad6e069e223b88b14db511f634768
## Remaining Work
- [x] changelog entry
- [x] more tests: `<backend>_delete_remote_relationship` is definitely untested
- [x] negative tests: we probably want to assert that there are some APIs we DON'T support
- [x] update the console to use the new API, if necessary
- [x] ~~adding the corresponding documentation for the API for other backends (only `pg_` was added here)~~
- deferred to https://github.com/hasura/graphql-engine-mono/issues/3170
- [x] ~~deciding which backends should support this API~~
- deferred to https://github.com/hasura/graphql-engine-mono/issues/3170
- [x] ~~deciding what to do about potentially overlapping schematic representations~~
- ~~cf. https://github.com/hasura/graphql-engine-mono/pull/3157#issuecomment-995307624~~
- deferred to https://github.com/hasura/graphql-engine-mono/issues/3171
- [x] ~~add more descriptive versioning information to some of the types that are changing in this PR~~
- cf. https://github.com/hasura/graphql-engine-mono/pull/3157#discussion_r769830920
- deferred to https://github.com/hasura/graphql-engine-mono/issues/3172
## Description
This PR fixes several important issues wrt. the remote relationship API.
- it fixes a regression introduced by [#3124](https://github.com/hasura/graphql-engine-mono/pull/3124), which prevented `<backend>_create_remote_relationship` from accepting the old argument format (break of backwards compatibility, broke the console)
- it removes the command `create_remote_relationship` added to the v1/metadata API as a work-around as part of [#3124](https://github.com/hasura/graphql-engine-mono/pull/3124)
- it reverts the subsequent fix in the console: [#3149](https://github.com/hasura/graphql-engine-mono/pull/3149)
Furthermore, this PR also addresses two other issues:
- THE DOCUMENTATION OF THE METADATA API WAS WRONG, and documented `create_remote_relationship` instead of `<backend>_create_remote_relationship`: this PR fixes this by adding `pg_` everywhere, but does not attempt to add the corresponding documentation for other backends, partly because:
- `<backend>_delete_remote_relationship` WAS BROKEN ON NON-POSTGRES BACKENDS; it always expected an argument parameterized by Postgres.
As of main, the `<backend>_(create|update|delete)_remote_relationship` commands are supported on Postgres, Citus, and BigQuery, but **NOT MSSQL**. I do not know whether this is intentional, or whether it should even be publicized, and as a result this PR doesn't change it.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3157
Co-authored-by: jkachmar <8461423+jkachmar@users.noreply.github.com>
GitOrigin-RevId: 37e2f41522a9229a11c595574c3f4984317d652a
## Description
This PR fixes two issues:
- in [#2903](https://github.com/hasura/graphql-engine-mono/pull/2903), we introduced a new metadata representation of remote relationships, which broke parsing a metadata blob containing an old-style db-to-rs remote relationship
- in [#1179](https://github.com/hasura/graphql-engine-mono/pull/1179), we silently and mistakenly deprecated `create_remote_relationship` in favour of `<backend>_create_remote_relationship`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3124
Co-authored-by: jkachmar <8461423+jkachmar@users.noreply.github.com>
Co-authored-by: Antoine Leblanc <1618949+nicuveo@users.noreply.github.com>
GitOrigin-RevId: 45481db7a8d42c7612e938707cd2d652c4c81bf8
GraphQL types can refer to each other in a circular way. The PDV framework used to use values of type `Unique` to recognize two fragments of GraphQL schema as being the same instance. Internally, this is based on `Data.Unique` from the `base` package, which simply increases a counter on every creation of a `Unique` object.
**NB**: The `Unique` values are _not_ used for knot tying the schema combinators themselves (i.e. `Parser`s). The knot tying for `Parser`s is purely based on keys provided to `memoizeOn`. The `Unique` values are _only_ used to recognize two pieces of GraphQL _schema_ as being identical. Originally, the idea was that this would help us with a perfectly correct identification of GraphQL types. But this fully correct equality checking of GraphQL types was never implemented, and does not seem to be necessary to prevent bugs.
Specifically, these `Unique` values are stored as part of `data Definition a`, which specifies a part of our internal abstract syntax tree for the GraphQL types that we expose. The `Unique` values get initialized by the `SchemaT` effect.
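For reference, the relevant part of `Data.Unique` is tiny; a small example of its observable behaviour:

```haskell
import Data.Unique (hashUnique, newUnique)

-- Each call to newUnique just bumps a global counter, so two values compare
-- equal only if they come from the very same creation.
main :: IO ()
main = do
  a <- newUnique
  b <- newUnique
  print (a == b)                     -- False: different creations
  print (hashUnique a, hashUnique b) -- consecutive counter values, e.g. (1,2)
```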
In #2894 and #2895, we are experimenting with how (parts of) the GraphQL types can be hidden behind certain permission predicates. This would allow a single GraphQL schema in memory to serve all roles, implementing #2711. The permission predicates get evaluated at query parsing time when we know what role is doing a certain request, thus outputting the correct GraphQL types for that role.
If the approach of #2895 is followed, then the `Definition` objects, and thus the `Unique` values, would be hidden behind the permission predicates. Since the permission predicates are evaluated only after the schema is already supposed to be built, this means that the permission predicates would prevent us from initializing the `Unique` values, rendering them useless.
The simplest remedy to this is to remove our usage of `Unique` altogether from the GraphQL schema and schema combinators. It doesn't serve a functional purpose, doesn't prevent bugs, and requires extra bookkeeping.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2980
GitOrigin-RevId: 50d3f9e0b9fbf578ac49c8fc773ba64a94b1f43d
This is effectively a no-op; the `Left err` case can't actually happen.
- removes some unused logic
- refactors the /healthz endpoint to be clearer
- that includes logging the full QErr if checkMetadataHealth fails,
but it actually can't because the existing Postgres implementation
just lifts
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2849
GitOrigin-RevId: ac8abf51b6d869ad4048419e36012137c86e5abd
High-Level TODO:
* [x] Code Changes
* [x] Tests
* [x] Check that pro/multitenant build ok
* [x] Documentation Changes
* [x] Updating this PR with full details
* [ ] Reviews
* [ ] Ensure code has all FIXMEs and TODOs addressed
* [x] Ensure no files are checked in mistakenly
* [x] Consider impact on console, cli, etc.
### Description
This PR adds support for the set-cookie header on the response from the auth webhook. If the set-cookie header is sent by the webhook, it will be forwarded in the GraphQL Engine response.
It also fixes a bug in test-server.sh: the get-webhook tests were run with the POST method and vice versa. To fix this, the parameters were swapped.
### Changelog
- [x] `CHANGELOG.md` is updated with user-facing content relevant to this PR.
### Affected components
- [x] Server
- [ ] Console
- [ ] CLI
- [x] Docs
- [ ] Community Content
- [ ] Build System
- [x] Tests
- [ ] Other (list it)
### Related Issues
Closes [#2269](https://github.com/hasura/graphql-engine/issues/2269)
### Solution and Design
### Steps to test and verify
Please refer to the docs to see how to send the set-cookie header from webhook.
### Limitations, known bugs & workarounds
- Only set-cookie header forwarding is supported.
- The value forwarded in the set-cookie header cannot be validated completely; the [Cookie](https://hackage.haskell.org/package/cookie) package is used to parse the header value, and any unnecessary information is stripped off before forwarding the header. The standard given in [RFC6265](https://datatracker.ietf.org/doc/html/rfc6265) has been followed for the Set-Cookie format.
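A sketch of the parse-and-re-render round trip this enables (not the PR's actual code):

```haskell
import qualified Data.ByteString as B
import qualified Data.ByteString.Builder as BB
import qualified Data.ByteString.Lazy as BL
import Web.Cookie (parseSetCookie, renderSetCookie)

-- Parse the webhook's Set-Cookie value and re-render it; anything the parser
-- did not recognise is dropped before the header is forwarded.
sanitiseSetCookie :: B.ByteString -> B.ByteString
sanitiseSetCookie =
  BL.toStrict . BB.toLazyByteString . renderSetCookie . parseSetCookie
```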
### Server checklist
#### Catalog upgrade
Does this PR change Hasura Catalog version?
- [x] No
- [ ] Yes
  - [ ] Updated docs with SQL for downgrading the catalog
#### Metadata
Does this PR add a new Metadata feature?
- [x] No
#### GraphQL
- [x] No new GraphQL schema is generated
- [ ] New GraphQL schema is being generated:
  - [ ] New types and typenames are correlated
#### Breaking changes
- [x] No Breaking changes
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2538
Co-authored-by: Robert <132113+robx@users.noreply.github.com>
GitOrigin-RevId: d9047e997dd221b7ce4fef51911c3694037e7c3f
We'll see if this improves compile times at all, but I think it's worth
doing as at least the most minimal form of module documentation.
This was accomplished by first compiling everything with `-ddump-minimal-imports`, and then a bunch of scripting (with help from ormolu).
**EDIT**: it doesn't seem to improve CI compile times, but the noise floor is high, as it looks like we're not caching library dependencies anymore.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2730
GitOrigin-RevId: 667eb8de1e0f1af70420cbec90402922b8b84cb4
## Description
We don't have a dependency-reporting mechanism for the `mssql_run_sql` API, i.e. when a database object (table, column, etc.) is dropped through the API, we should raise an exception if the metadata contains any dependencies (relationships, permissions, etc.) on that database object.
This PR addresses the above-mentioned problem by:
- integrating a transaction into the API, to roll back the SQL query execution if dependencies exist and an exception is thrown
- accepting an optional `cascade` field in the API payload to drop the dependencies, if any
- accepting an optional `check_metadata_consistency` field to bypass the dependency check (if its value is set to `false`)
### Related Issues
Closes #1853
### Solution and Design
The design/solution follows the `run_sql` API implementation for the Postgres backend.
### Steps to test and verify
- Create author and article tables and track them
- Define object and array relationships
- Try to drop the article table without cascade, or with cascade set to `false`
- The server should raise an exception reporting that relationship dependencies exist
## Changelog
- ✅ `CHANGELOG.md` is updated with user-facing content relevant to this PR.
If no changelog is required, then add the `no-changelog-required` label.
## Affected components
- ✅ Server
- ❎ Console
- ❎ CLI
- ❎ Docs
- ❎ Community Content
- ❎ Build System
- ✅ Tests
- ❎ Other (list it)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2636
GitOrigin-RevId: 0ab152295394056c4ca6f02923142a1658ad25dc
The only real use was for the dubious multitenant option `--consoleAssetsVersion`, which actually overrode more than just the assets version: as far as I can tell, if you pass `--consoleAssetsVersion` to multitenant, that version will also make it into e.g. HTTP client user-agent headers as the proper graphql-engine version.
I'm dropping that option, since it seems unused in production and I don't want to go to the effort of fixing it, but I am happy to look into that if folks feel strongly that it should be kept.
(The reason for attacking this is that I was looking into HTTP client things around blacklisting, and the versioning thing is a bit painful around HTTP client headers.)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2458
GitOrigin-RevId: a02b05557124bdba9f65e96b3aa2746aeee03f4a
### Description
This PR implements operation timeouts, as specced in #1232.
RFC: [rfcs/operation-timeout-api-limits.md](c025a90fe9/rfcs/operation-timeout-api-limits.md)
There are still some things to be done (tests and docs most notably), but apart from that it can be reviewed. I'd still appreciate feedback on the RFC!
TODO:
- [x] break out the `ApiLimits` refactoring into a separate PR: #2103
- [x] finish the `pg-client-hs` PR: https://github.com/hasura/pg-client-hs/pull/39
- [x] remove configurability, after testing, prior to merging
- [ ] tests: #2390 has some tests that I've run locally to confirm things work on a fundamental level
- [x] changelog
- [x] documentation
- [x] fill in the detailed PR checklist
### Changelog
- [x] `CHANGELOG.md` is updated with user-facing content relevant to this PR. If no changelog is required, then add the `no-changelog-required` label.
### Affected components
- [x] Server
- [ ] Console
- [ ] CLI
- [x] Docs
- [ ] Tests
### Related Issues
Product spec: #1232.
### Solution and Design
Compare `rfcs/operation-timeout-api-limits.md`.
### Steps to test and verify
Configure operation timeouts, e.g. by posting
```
{
  "type": "set_api_limits",
  "args": {
    "operation_timeout": {
      "global": 3
    }
  }
}
```
to `v1/metadata` to set an operation timeout of 3s. Then verify that
1. non-admin queries that take longer than 3s time out with a nice error message
2. that those queries return after ~3s (at least for postgres)
3. also that everything else still works as usual
### Limitations, known bugs & workarounds
- while this will cause slow queries against any backends to fail, it's only verified to actually interrupt queries against postgres
- this will only successfully short-cut (cancel) queries to postgres if the database server is responsive
#### Catalog upgrade
Does this PR change Hasura Catalog version?
- [x] No
#### Metadata
Does this PR add a new Metadata feature?
- [x] Yes
  - Does `run_sql` auto-manage the new metadata through schema diffing?
    - [x] Not required
  - Does `run_sql` auto-manage the definitions of metadata on renaming?
    - [x] Not required
  - Does `export_metadata`/`replace_metadata` support the new metadata added?
    - [x] Yes
#### GraphQL
- [x] No new GraphQL schema is generated
#### Breaking changes
- [x] No Breaking changes
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/1593
GitOrigin-RevId: f0582d0be3ed9fadf89e0c4aaf96344d18331dc4
This commit applies ormolu to the whole Haskell code base by running `make format`.
For in-flight branches, simply merging changes from `main` will result in merge conflicts.
To avoid this, update your branch using the following instructions. Replace `<format-commit>`
by the hash of *this* commit.
$ git checkout my-feature-branch
$ git merge <format-commit>^ # and resolve conflicts normally
$ make format
$ git commit -a -m "reformat with ormolu"
$ git merge -s ours post-ormolu
https://github.com/hasura/graphql-engine-mono/pull/2404
GitOrigin-RevId: 75049f5c12f430c615eafb4c6b8e83e371e01c8e
### Description
This PR improves error messages in our metadata API by displaying a message with the name of the failing command and a link to our documentation. Furthermore, it harmonizes our internal uses of `withObject`, to respect the convention of using the Haskell type name, now that the Aeson error message is displayed as an "internal error message".
https://github.com/hasura/graphql-engine-mono/pull/1905
GitOrigin-RevId: e4064ba3290306437aa7e45faa316c60e51bc6b6
Some of our use of CPP causes trouble for ormolu, compare https://github.com/tweag/ormolu/issues/774.
Specifically, for understandable reasons, it can't deal well with `#ifdef` use that is not at the top-level.
This PR removes the problematic usage in ways that I hope are also a net non-loss regardless of helping
out ormolu (or other tooling).
- The default value for enabled APIs moves to the top level, next to the command line help, so
they'll stay in sync more easily.
- All the CPP around using `assertNFHere` is moved to one module.
https://github.com/hasura/graphql-engine-mono/pull/2361
GitOrigin-RevId: ed6e039e6d8960322fd8d1312df762ad197c29b1
This is a follow-up to #1959.
Today, I spent a while in review figuring out that a harmless PR change didn't do anything,
because it was moving from a `runLazy...` to something without the `Lazy`. So let's get
that source of confusion removed.
This should be a bit easier to review commit by commit, since some of the functions had
confusing names. (E.g. there was a misnamed `Migrate.Internal.runTx` before.)
The change should be a no-op.
https://github.com/hasura/graphql-engine-mono/pull/2335
GitOrigin-RevId: 0f284c4c0f814482d7827e7732a6d49e7735b302
This removes the module re-exports of [Data.Align](https://hackage.haskell.org/package/semialign-1.2/docs/Data-Align.html) and [Data.These](https://hackage.haskell.org/package/these-1.1.1.1/docs/Data-These.html) from `Hasura.Prelude`. The reasoning being that they're not used widely and reasonably obscure, and that being explicit about the imports makes for an easier to understand codebase.
(I spent longer than I'd have liked earlier today figuring out where `align` in multitenant came from.
The right one not showing up on the first hoogle page doesn't help. Yes, better tool use could have
avoided that, but still...)
Do feel free to shoot this down, I won't insist on the change.
https://github.com/hasura/graphql-engine-mono/pull/2194
GitOrigin-RevId: 10f887b74538b17623bee6d6451c5aba11573fbd
Query plan caching was introduced by - I believe - hasura/graphql-engine#1934 in order to reduce the query response latency. During the development of PDV in hasura/graphql-engine#4111, it was found out that the new architecture (for which query plan caching wasn't implemented) performed comparably to the pre-PDV architecture with caching. Hence, it was decided to leave query plan caching until some day in the future when it was deemed necessary.
Well, we're in the future now, and there still isn't a convincing argument for query plan caching. So the time has come to remove some references to query plan caching from the codebase. For the most part, any code being removed would probably not be very well suited to the post-PDV architecture of query execution, so arguably not much is lost.
Apart from simplifying the code, this PR will contribute towards making the GraphQL schema generation more modular, testable, and easier to profile. I'd like to eventually work towards a situation in which it's easy to generate a GraphQL schema parser *in isolation*, without being connected to a database, and then parse a GraphQL query *in isolation*, without even listening on any HTTP port. It is important that both of these operations can be examined in detail, and in isolation, since they are two major performance bottlenecks, as well as phases that many important upcoming features hook into.
Implementation
The following have been removed:
- The entirety of `server/src-lib/Hasura/GraphQL/Execute/Plan.hs`
- The core phases of query parsing and execution no longer have any references to query plan caching. Note that this is not to be confused with query *response* caching, which is not affected by this PR. This includes removal of the types:
  - `Opaque`, which is replaced by a tuple. Note that the old implementation was broken and did not adequately hide the constructors.
  - `QueryReusability` (and the `markNotReusable` method). Notably, the implementation of the `ParseT` monad now consists of two, rather than three, monad transformers.
- Cache-related tests (in `server/src-test/Hasura/CacheBoundedSpec.hs`) have been removed.
- References to query plan caching in the documentation.
- The `planCacheOptions` in the `TenantConfig` type class was removed. However, during parsing, unrecognized fields in the YAML config get ignored, so this does not cause a breaking change. (Confirmed manually, as well as in consultation with @sordina.)
- The metrics no longer send cache hit/miss messages.
There are a few places in which one can still find references to query plan caching:
- We still accept the `--query-plan-cache-size` command-line option for backwards compatibility. The `HASURA_QUERY_PLAN_CACHE_SIZE` environment variable is not read.
https://github.com/hasura/graphql-engine-mono/pull/1815
GitOrigin-RevId: 17d92b254ec093c62a7dfeec478658ede0813eb7
## Description
Thanks to #1664, the Metadata API types no longer require a `ToJSON` instance. This PR follows up with a cleanup of the types of the arguments to the metadata API:
- whenever possible, it moves those argument types to where they're used (RQL.DDL.*)
- it removes all unrequired instances (mostly `ToJSON`)
This PR does not attempt to do it for _all_ such argument types. For some of the metadata operations, the type used to describe the argument to the API and used to represent the value in the metadata are one and the same (like for `CreateEndpoint`). Sometimes, the two types are intertwined in complex ways (`RemoteRelationship` and `RemoteRelationshipDef`). In the spirit of only doing uncontroversial cleaning work, this PR only moves types that are not used outside of RQL.DDL.
Furthermore, this is a small step towards separating the different types all jumbled together in RQL.Types.
## Notes
This PR also improves several `FromJSON` instances to make use of `withObject`, and to use a human-readable string instead of a type name in error messages whenever possible. For instance:
- before: `expected Object for Object, but encountered X`
after: `expected Object for add computed field, but encountered X`
- before: `Expecting an object for update query`
after: `expected Object for update query, but encountered X`
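As an illustration of the convention, with a hypothetical request type rather than one of the PR's actual instances:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (FromJSON (..), withObject, (.:))
import Data.Text (Text)

-- Hypothetical request type: pass a human-readable description to withObject
-- so parse errors name the operation rather than the Haskell type.
data AddComputedField = AddComputedField
  { acfTable :: Text,
    acfName :: Text
  }

instance FromJSON AddComputedField where
  parseJSON = withObject "add computed field" $ \o ->
    AddComputedField
      <$> o .: "table"
      <*> o .: "name"
```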
This PR also renames `CreateFunctionPermission` to `FunctionPermissionArgument`, to remove the quite surprising `type DropFunctionPermission = CreateFunctionPermission`.
This PR also deletes some dead code, mostly in RQL.DML.
This PR also moves a PG-specific source resolving function from DDL.Schema.Source to the only place where it is used: App.hs.
https://github.com/hasura/graphql-engine-mono/pull/1844
GitOrigin-RevId: a594521194bb7fe6a111b02a9e099896f9fed59c
### Description
The spock handler requires the request type to have a `ToJSON` instance AND a `FromJSON` instance. That's because we parse it from the received bytestring into its proper type... and then call `toJSON` on it to log it. This PR simplifies this by keeping the intermediate `Value` obtained during parsing and using it for logging. This has two consequences:
1. it removes the `ToJSON` constraint, which will remove some code down the line (esp. in Metadata)
2. it means we log the actual JSON object query we received, not the result of parsing it, meaning the logged object will contain fields that would have been ignored when parsing the actual value; this is both an upside (more accurate log) and a downside (could be more verbose / more confusing)
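A sketch of the resulting pattern (not the actual spock handler code): decode once to a `Value`, log it, and only then parse it into the request type.

```haskell
import Data.Aeson (FromJSON, Value, eitherDecode)
import qualified Data.Aeson.Types as A
import qualified Data.ByteString.Lazy as BL

-- FromJSON is still required for the final parse; ToJSON no longer is,
-- because we log the intermediate Value verbatim.
parseAndLog ::
  (FromJSON a, Monad m) =>
  (Value -> m ()) ->
  BL.ByteString ->
  m (Either String a)
parseAndLog logJSON body =
  case eitherDecode body of
    Left err -> pure (Left err)
    Right val -> do
      logJSON val
      pure (A.parseEither A.parseJSON val)
```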
### Further work
Should this PR also remove all obsolete ToJSON instances while at it?
How do we test this?
https://github.com/hasura/graphql-engine-mono/pull/1664
GitOrigin-RevId: ae099eea9a671eabadcdf507f993a5ad9433be87
### Description
RunSQL commands are analyzed to detect whether they require a schema cache rebuild; in the case of Citus we were always returning `False`. This PR fixes this, and also removes the catch-all case, to make it explicit / obvious whenever we change this.
https://github.com/hasura/graphql-engine-mono/pull/1549
GitOrigin-RevId: dddaaea868e7b7999bdfe11451032df9d9b44274
Remote relationships are now supported on SQL Server and BigQuery. The major change, though, is the re-architecture of the remote join execution logic. Prior to this PR, each backend was responsible for processing the remote relationships that are part of its AST.
This is not ideal, as there is nothing specific about a remote join's execution that ties it to a backend. The only backend-specific part is whether or not the specification of the remote relationship is valid (i.e., we need to validate whether the scalars are compatible).
The approach now changes to this:
1. Before delegating the AST to the backend, we traverse the AST, collecting all the remote joins while modifying the AST to add the necessary join fields where needed.
1. Once the remote joins are collected from the AST, the database call is made to fetch the response. The necessary data for the remote join(s) is collected from the database's response and one or more remote schema calls are constructed as necessary.
1. The remote schema calls are then executed and the data from the database and from the remote schemas is joined to produce the final response.
### Known issues
1. Ideally the traversal of the IR to collect remote joins should return an AST which does not include remote join fields. This operation can be type safe but isn't taken up as part of the PR.
1. There is a lot of code duplication between `Transport/HTTP.hs` and `Transport/Websocket.hs` which needs to be fixed ASAP. This too hasn't been taken up by this PR.
1. The type which represents the execution plan is only modified to handle our current remote joins and as such it will have to be changed to accommodate general remote joins.
1. Use of lenses would have reduced the boilerplate code to collect remote joins from the base AST.
1. The current remote join logic assumes that the join columns of a remote relationship appear with their names in the database response. This, however, is incorrect, as they could be aliased. This can be taken up by anyone; I've left a comment in the code.
### Notes to the reviewers
I think it is best reviewed commit by commit.
1. The first one is very straightforward.
1. The second one refactors the remote join execution logic but other than moving things around, it doesn't change the user facing functionality. This moves Postgres specific parts to `Backends/Postgres` module from `Execute`. Some IR related code to `Hasura.RQL.IR` module. Simplifies various type class function signatures as a backend doesn't have to handle remote joins anymore
1. The third one fixes partial case matches that for some weird reason weren't shown as warnings before this refactor
1. The fourth one generalizes the validation logic of remote relationships and implements `scalarTypeGraphQLName` function on SQL Server and BigQuery which is used by the validation logic. This enables remote relationships on BigQuery and SQL Server.
https://github.com/hasura/graphql-engine-mono/pull/1497
GitOrigin-RevId: 77dd8eed326602b16e9a8496f52f46d22b795598