High-Level TODO:
* [x] Code Changes
* [x] Tests
* [x] Check that pro/multitenant build ok
* [x] Documentation Changes
* [x] Updating this PR with full details
* [ ] Reviews
* [ ] Ensure code has all FIXMEs and TODOs addressed
* [x] Ensure no files are checked in mistakenly
* [x] Consider impact on console, cli, etc.
### Description
This PR adds support for forwarding the Set-Cookie header from the auth webhook response. If the webhook sends a Set-Cookie header, it is forwarded in the GraphQL Engine response.
It also fixes a bug in test-server.sh: the GET-webhook tests were being run with the POST method and vice versa; the parameters have been swapped to fix this.
### Changelog
- [x] `CHANGELOG.md` is updated with user-facing content relevant to this PR.
### Affected components
- [x] Server
- [ ] Console
- [ ] CLI
- [x] Docs
- [ ] Community Content
- [ ] Build System
- [x] Tests
- [ ] Other (list it)
### Related Issues
Closes [#2269](https://github.com/hasura/graphql-engine/issues/2269)
### Solution and Design
### Steps to test and verify
Please refer to the docs to see how to send the Set-Cookie header from the webhook.
### Limitations, known bugs & workarounds
- Only Set-Cookie header forwarding is supported; support for other headers is not added.
- The value forwarded in the Set-Cookie header cannot be validated completely. The [Cookie](https://hackage.haskell.org/package/cookie) package is used to parse the header value, and any extraneous information is stripped before the header is forwarded. The Set-Cookie format follows the standard given in [RFC 6265](https://datatracker.ietf.org/doc/html/rfc6265).
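For illustration, here is a minimal sketch of the parse-and-re-render round trip described above, using the cookie package (assuming a reasonably recent version where `renderSetCookie` produces a bytestring `Builder`). The cookie name, value, and the bogus attribute are made up; this is not the engine's actual code:

```haskell
{-# LANGUAGE OverloadedStrings #-}

-- Sketch: parse the raw Set-Cookie value from a webhook response, which
-- drops anything that isn't a recognised attribute, then re-render it
-- before forwarding.
module Main (main) where

import qualified Data.ByteString.Builder as Builder
import qualified Data.ByteString.Lazy.Char8 as BL
import Web.Cookie (parseSetCookie, renderSetCookie)

main :: IO ()
main = do
  let raw       = "session=abc123; Path=/; HttpOnly; bogus-attribute=whatever"
      cookie    = parseSetCookie raw  -- unrecognised attributes are discarded
      forwarded = Builder.toLazyByteString (renderSetCookie cookie)
  BL.putStrLn forwarded               -- e.g. "session=abc123; Path=/; HttpOnly"
```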
### Server checklist
#### Catalog upgrade
Does this PR change Hasura Catalog version?
- [x] No
- [ ] Yes
  - [ ] Updated docs with SQL for downgrading the catalog
#### Metadata
Does this PR add a new Metadata feature?
- [x] No
#### GraphQL
- [x] No new GraphQL schema is generated
- [ ] New GraphQL schema is being generated:
  - [ ] New types and typenames are correlated
#### Breaking changes
- [x] No Breaking changes
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2538
Co-authored-by: Robert <132113+robx@users.noreply.github.com>
GitOrigin-RevId: d9047e997dd221b7ce4fef51911c3694037e7c3f
We'll see if this improves compile times at all, but I think it's worth
doing as at least the most minimal form of module documentation.
This was accomplished by first compiling everything with
`-ddump-minimal-imports`, and then a bunch of scripting (with help from
ormolu).
**EDIT:** it doesn't seem to improve CI compile times, but the noise floor is high, as it looks like we're no longer caching library dependencies.
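For reference, here's a tiny standalone sketch (not from this PR) of what the flag does; the module contents are made up:

```haskell
{-# OPTIONS_GHC -ddump-minimal-imports #-}

-- Compiling this module makes GHC write a "Main.imports" file containing
-- the minimal explicit import lists it actually needed, e.g.
--   import Data.List ( sortOn )
-- which a script can then paste back into the source.
module Main (main) where

import Data.List

main :: IO ()
main = print (sortOn negate [3, 1, 2 :: Int])
```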
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2730
GitOrigin-RevId: 667eb8de1e0f1af70420cbec90402922b8b84cb4
I've added a file `.git-blame-ignore-revs`, containing the rev of the
codebase-wide ormolu reformatting.
Issuing the command `git config blame.ignoreRevsFile .git-blame-ignore-revs`
causes `git blame` to 'see through' the reformatting as best it can.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2699
GitOrigin-RevId: 0e62905e897e94e1fa51dc36c4e31015f0adb32a
I was trying to figure out how to pipe some information from query
execution to the http log recently, and once again stumbled over the
mess that is `runGQ`. Here's an attempt to break it apart a little bit.
The result should by no means be considered final, but I hope it makes it
somewhat easier to understand what's going on in this function. E.g. it's now
once again somewhat visible how execution of queries and mutations differs.
Had to stop somewhere...
The PR is intended to have no functional change. It consists of individual
commits which should be "obviously" such.
Some thoughts and possible follow-up:
- It'd be good to get rid of the ad hoc `Result` data type again eventually,
but for the moment I think it's better than the tuples that were there before
(a rough, hypothetical sketch of such a type is included after this list).
- I think we're quite close to reducing the duplication with WebSocket. E.g.
`executeQueryStep` and `executeMutationStep` might be reusable.
- It's tempting to change the caching API slightly, so that the uncached
response headers don't have to be pulled from the cache lookup result.
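Purely as illustration of the first bullet above, here is a hypothetical shape such an ad hoc result type could take; the field and type names are invented, and the real definition in the engine may differ:

```haskell
-- Hypothetical sketch: pair the headers produced during execution with the
-- encoded response, so the query and mutation paths can return a common shape.
module Result (Result (..)) where

import qualified Data.ByteString.Lazy as BL
import Network.HTTP.Types (Header)

data Result = Result
  { resultHeaders  :: [Header]       -- headers collected while executing (e.g. from remote calls)
  , resultResponse :: BL.ByteString  -- the encoded GraphQL response body
  }
```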
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2669
GitOrigin-RevId: ea414d24194509ce29469d74c62fd060b750488d
## Description ✍️
This PR updates all the brand assets to reflect the latest brand guidelines.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2653
GitOrigin-RevId: dc91a1c27baf78619c84879a6399a2c175c2b252
fixes https://github.com/hasura/graphql-engine-mono/issues/2635
The test runs postgres via docker compose, but wasn't waiting
for postgres to be ready, which is likely what caused intermittent
failures when trying to clean metadata.
Also makes the indentation consistent.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2670
Co-authored-by: Vishnu Bharathi <4211715+scriptnull@users.noreply.github.com>
GitOrigin-RevId: 5a4c03c3c05322695ee9ff9a49efc834dea37074
## Description
We don't have a dependency reporting mechanism for the `mssql_run_sql` API, i.e. when a database object (table, column, etc.) is dropped through the API, we should raise an exception if any dependencies (relationships, permissions, etc.) on the database object exist in the metadata.
This PR addresses the above-mentioned problem by:
- Integrating a transaction into the API, to roll back the SQL query execution if dependencies exist and an exception is thrown
- Accepting an optional `cascade` field in the API payload to drop the dependencies, if any
- Accepting an optional `check_metadata_consistency` field to bypass the dependency check (when set to `false`)
### Related Issues
Closes #1853
### Solution and Design
The design/solution follows the `run_sql` API implementation for the Postgres backend.
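As a hedged example of what a cascading request body might look like under that design (field names other than `cascade` and `check_metadata_consistency` are assumed from the Postgres `run_sql` shape; the source name and SQL are made up):

```haskell
{-# LANGUAGE OverloadedStrings #-}

-- Sketch of a mssql_run_sql payload that drops a table and cascades the
-- drop to dependent metadata (relationships, permissions, ...).
module Main (main) where

import Data.Aeson (Value, encode, object, (.=))
import qualified Data.ByteString.Lazy.Char8 as BL

dropArticleCascade :: Value
dropArticleCascade = object
  [ "type" .= ("mssql_run_sql" :: String)
  , "args" .= object
      [ "source"  .= ("mssql" :: String)              -- assumed source name
      , "sql"     .= ("DROP TABLE article;" :: String)
      , "cascade" .= True                             -- drop dependent metadata too
      ]
  ]

main :: IO ()
main = BL.putStrLn (encode dropArticleCascade)
```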
### Steps to test and verify
- Create `author` and `article` tables and track them
- Define object and array relationships
- Try to drop the `article` table without `cascade`, or with `cascade` set to `false`
- The server should raise an exception saying that relationship dependencies exist
## Changelog
- ✅ `CHANGELOG.md` is updated with user-facing content relevant to this PR.
If no changelog is required, then add the `no-changelog-required` label.
## Affected components
- ✅ Server
- ❎ Console
- ❎ CLI
- ❎ Docs
- ❎ Community Content
- ❎ Build System
- ✅ Tests
- ❎ Other (list it)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2636
GitOrigin-RevId: 0ab152295394056c4ca6f02923142a1658ad25dc
While it looks like a lot of work in `FromIr.hs`, you can instead review the type changes in `Hasura.Backends.MySQL.Types.Internal`; the changes to FromIr only reflect those. Essentially, we're simplifying the FromIr code so it no longer has to think about SQL-based joins: instead, FromIr produces the fields necessary for the dataloader Plan/Execute to do their job properly.
I've done my best to ensure that all the hunks in the diff in this PR are minimal for slightly easier perusing.
I think future PRs will be more intentionally well structured, rather than created retroactively.
**Preceding PR:** #2549
**Next PR:** #2367
The tests have been run like this on my machine. I don't know more beyond that.
```
docker run -i -e "PYTEST_ADDOPTS=--color=yes" -e "TERM=xterm-256color" --net=host -v`pwd`:`pwd` -w`pwd`/server/tests-py chrisdone/hasura-pytest:b0f26f615 pytest --hge-urls="http://localhost:8080" --pg-urls="postgres://chinook:chinook@localhost:5432/chinook" --backend mysql -k MySQL
```
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/2608
Co-authored-by: Abby Sassel <3883855+sassela@users.noreply.github.com>
GitOrigin-RevId: a6483335c3036963360dde7d7d7eaf10859351cb