### Description
A few improvements to MSSQL transactions.
### Changelog
- [ ] `CHANGELOG.md` is updated with user-facing content relevant to this PR. If no changelog is required, then add the `no-changelog-required` label.
### Affected components
- [x] Server
- [ ] Console
- [ ] CLI
- [ ] Docs
- [ ] Community Content
- [ ] Build System
- [ ] Tests
- [ ] Other (list it)
https://github.com/hasura/graphql-engine-mono/pull/2324
GitOrigin-RevId: 808947188f5f3d196c7dfc4ebfa661629db5f8f7
### Description
This PR improves error messages in our metadata API by displaying a message with the name of the failing command and a link to our documentation. Furthermore, it harmonizes our internal uses of `withObject`, to respect the convention of using the Haskell type name, now that the Aeson error message is displayed as an "internal error message".
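As an illustration of the `withObject` convention (the type and fields below are hypothetical, not the actual metadata API types):

```hs
import Data.Aeson (FromJSON (..), withObject, (.:))

-- Hypothetical metadata API argument type, for illustration only.
data TrackTableArgs = TrackTableArgs
  { ttaSource :: String
  , ttaTable  :: String
  }

instance FromJSON TrackTableArgs where
  -- Convention: 'withObject' takes the Haskell type name; the Aeson failure
  -- it produces is now surfaced only as the "internal error message".
  parseJSON = withObject "TrackTableArgs" $ \o ->
    TrackTableArgs
      <$> o .: "source"
      <*> o .: "table"
```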
https://github.com/hasura/graphql-engine-mono/pull/1905
GitOrigin-RevId: e4064ba3290306437aa7e45faa316c60e51bc6b6
### Description
While adding the [insert mutation schema parser for the MSSQL backend](https://github.com/hasura/graphql-engine-mono/pull/2141) I also included the notion of an [identity](https://en.wikipedia.org/wiki/Identity_column) column in the table column info across all backends. In MSSQL we cannot insert any value (not even a `DEFAULT` expression) into identity columns. Postgres behaves differently: values can be inserted into identity columns. This PR drops the notion of identity from the generic column info; the identity-column context needed for MSSQL is carried in its `ExtraTableMetadata` type instead.
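As a rough sketch of where that context now lives (the type exists, but the field shown here is hypothetical):

```hs
import Data.Text (Text)

-- Hedged sketch: MSSQL identity information is carried in the backend's
-- extra table metadata instead of in the generic column info.
newtype ExtraTableMetadata = ExtraTableMetadata
  { etmIdentityColumns :: [Text]  -- columns we must never insert into
  }
```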
### Changelog
- [x] `CHANGELOG.md` is updated with user-facing content relevant to this PR. If no changelog is required, then add the `no-changelog-required` label.
### Affected components
- [x] Server
- [ ] Console
- [ ] CLI
- [ ] Docs
- [ ] Community Content
- [ ] Build System
- [x] Tests
- [ ] Other (list it)
### Related Issues
Fix https://github.com/hasura/graphql-engine/issues/7557
https://github.com/hasura/graphql-engine-mono/pull/2378
GitOrigin-RevId: c18b5708e2e6107423a0a95a7fc2e9721e8a21a1
Some of our use of CPP causes trouble for ormolu, compare https://github.com/tweag/ormolu/issues/774.
Specifically, for understandable reasons, it can't deal well with `#ifdef` use that is not at the top-level.
This PR removes the problematic usage in ways that I hope are also a net non-loss regardless of helping
out ormolu (or other tooling).
- The default value for enabled APIs moves to the top level, next to the command line help, so
they'll stay in sync more easily.
- All the CPP around using `assertNFHere` is moved to one module.
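A hedged sketch of that second point (module name and CPP flag are hypothetical): all the conditional compilation lives in one module, and callers import `assertNFHere` unconditionally.

```hs
{-# LANGUAGE CPP #-}
{-# LANGUAGE TemplateHaskell #-}

module Hasura.App.AssertNF (assertNFHere) where

import Language.Haskell.TH (Exp, Q)

#ifdef ASSERT_NF
import qualified GHC.AssertNF

assertNFHere :: Q Exp
assertNFHere = [| GHC.AssertNF.assertNF |]
#else
assertNFHere :: Q Exp
assertNFHere = [| \_ -> pure () |]  -- no-op when the assertion is compiled out
#endif
```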
https://github.com/hasura/graphql-engine-mono/pull/2361
GitOrigin-RevId: ed6e039e6d8960322fd8d1312df762ad197c29b1
This change was prompted by how `make ci-build` in `./server` clobbered my
`cabal.project.local`. Addressing that more directly proved awkward, thus this
change, which makes both `ci-build` targets use a shared `/cabal.project.ci.*`
(via `--with-project-file`).
Also removes some pro targets/scripts which were definitely broken thus unused.
### Affected components
- [x] Build System
### Steps to test and verify
If CI still passes, this should be safe.
https://github.com/hasura/graphql-engine-mono/pull/2244
GitOrigin-RevId: 1494824cabd2fbe6415d050c19e27f37bb51b86b
## Description
Almost all our data structures use strictness annotations, following [our styleguide's principle](https://github.com/hasura/graphql-engine/blob/master/server/STYLE.md#dealing-with-laziness) of "by default, use strict data types and lazy functions". The very few cases where we actually need laziness were already explicitly labelled as lazy with the `~` prefix operator.
This PR simply globally enables `StrictData`, allowing us to express records without `!` annotations on every field, but makes no attempt at cleaning up existing code.
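A minimal sketch of what this buys us (the record is illustrative):

```hs
{-# LANGUAGE StrictData #-}

-- With StrictData, every field is strict without a '!' annotation...
data TableInfo = TableInfo
  { tiName    :: String
  , tiColumns :: [String]
    -- ...and the rare lazy field is still labelled explicitly with '~':
  , tiComment :: ~(Maybe String)
  }
```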
https://github.com/hasura/graphql-engine-mono/pull/1869
Co-authored-by: Philip Lykke Carlsen <358550+plcplc@users.noreply.github.com>
GitOrigin-RevId: e65c6e2f89413188da250122f64c2173615946ec
### Description
We always built a subscription root, even when there were no possible fields. This breaks some third-party clients, as the spec does not allow empty types in the schema. This PR fixes this by changing the `buildSubscriptionParser` helper to return a `Maybe` value, and harmonizes / cleans up the places where we build the subscription root.
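A simplified, hedged sketch of the new shape (the real parser types are more involved):

```hs
-- Stub types, for illustration only.
newtype FieldParser a = FieldParser a
newtype Parser a = Parser [FieldParser a]

buildSubscriptionParser :: [FieldParser a] -> Maybe (Parser a)
buildSubscriptionParser [] = Nothing  -- no fields: emit no subscription root
buildSubscriptionParser fs = Just (Parser fs)
```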
https://github.com/hasura/graphql-engine-mono/pull/2357
GitOrigin-RevId: 1aeae25e321eee957e7645c436d17e69207309fd
### Description
The inherited roles integration tests were behind a flag and its corresponding fixture, presumably to avoid enabling the option globally. However, #2288 introduced a new test using inherited roles that was not gated behind the flag, and which fails when run with `dev.sh`. That test nonetheless works on CI... because inherited roles are globally enabled there.
Consequently, this PR:
- globally enables inherited roles in dev.sh
- removes the flag and the associated fixture
https://github.com/hasura/graphql-engine-mono/pull/2358
Co-authored-by: Vishnu Bharathi <4211715+scriptnull@users.noreply.github.com>
GitOrigin-RevId: ebfa6754873324bed15b2cc5e37ec2d8008e8f8d
This is a follow-up to #1959.
Today, I spent a while in review figuring out that a harmless PR change didn't do anything,
because it was moving from a `runLazy...` to something without the `Lazy`. So let's get
that source of confusion removed.
This should be a bit easier to review commit by commit, since some of the functions had
confusing names. (E.g. there was a misnamed `Migrate.Internal.runTx` before.)
The change should be a no-op.
https://github.com/hasura/graphql-engine-mono/pull/2335
GitOrigin-RevId: 0f284c4c0f814482d7827e7732a6d49e7735b302
### Description
During the PDV refactor that led to 2.0, we broke an undocumented and untested semantic of inserts: accepting _explicit_ null values in nested object inserts.
In short: in the schema, we often distinguish between _explicit_ null values `{id: 3, author: null}` and _implicit_ null values that correspond to the field being omitted `{id: 3}`. In this particular case, we forgot to accept explicit null values. Since the field is optional (meaning we accept implicit null values), it was nullable in the schema, like it was in pre-PDV times. But in practice we would reject explicit nulls.
This PR fixes this, and adds a test. Furthermore, it does a bit of a cleanup of the Mutation part of the schema, and more specifically of all insertion code.
https://github.com/hasura/graphql-engine-mono/pull/2341
GitOrigin-RevId: 895cfeecef7e8e49903a3fb37987707150446eb0
This PR only contains minor changes to documentation that I have collected over some time, revising text as I was passing by.
https://github.com/hasura/graphql-engine-mono/pull/2346
Co-authored-by: Rikin Kachhia <54616969+rikinsk@users.noreply.github.com>
GitOrigin-RevId: f3329f3212b831f1f3c74a299734faff337b1017
### Description
Our python test suite has several major problems; one of them being that the tests themselves are not responsible for their own setup. We are therefore using environment variables for all matters of configuration, such as _where the postgres instance is_. This is something that should be changed, but in the meantime, it is the test implementer's responsibility to ensure that tests have a consistent setup in CI and locally, or to add the proper "skip" annotations.
The recently added `test_pg_add_source_with_source_parameters` fails to do so: as it tests adding a postgres source from hardcoded parameters, rather than relying on environment variables, it only works if the postgres instance is at the matching address, which happens to be the one set in the circle ci config. This is undesirable for two reasons:
- it breaks local tests: running tests locally with `dev.sh` sets postgres up differently, and the test fails;
- a change to the circle config would result in failures in that test.
Sadly, there's no good solution here: our tests do not currently support expanding environment variables in the queries' yaml files, meaning it's not possible to set the values of all those parameters differently in each environment. And we haven't yet started working towards having a unified testing environment setup.
As a result, this PR disables the offending test UNLESS the postgres instance happens to be exactly where the test expects it. This is also very inelegant and adds more tech debt to the pile, but I do not see how to fix this with our current test infrastructure. :(
https://github.com/hasura/graphql-engine-mono/pull/2336
GitOrigin-RevId: 8bc9142075d14acaa48e9c4b20de2527185bc75c
This moves the previous (illegal) `Show` instance for `Hasura.Base.Error.Code` to a `ToJSON`
instance, and uses that in the error `ToJSON` instances.
Addressing https://github.com/hasura/graphql-engine-mono/pull/2277#issuecomment-911557169.
This PR is against #2277.
It adds a replacement derived `Show` instance, which is used:
- in the derived `Show` instance for `QErr`
- in some unit tests
Mostly verified that we didn't otherwise rely on the hand-rolled `Show`
instance by compiling without it (and a faked `QErr` instance), and seeing
that the only compile failures were in tests. (Compare the individual commits.)
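A hedged sketch of the shape of the change (the constructors and strings are illustrative):

```hs
import Data.Aeson (ToJSON (..), Value (String))

data Code = NotFound | AlreadyExists
  deriving (Show)  -- the replacement *derived* Show, used by QErr and tests

-- The old hand-rolled Show strings now live in a lawful ToJSON instance:
instance ToJSON Code where
  toJSON NotFound      = String "not-found"
  toJSON AlreadyExists = String "already-exists"
```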
https://github.com/hasura/graphql-engine-mono/pull/2279
GitOrigin-RevId: 678fe241a14bd0c9aaf5b267efc510ad9d619dd7
Materialized views cannot be mutated, so this commit removes the option to run mutations on them via the GraphQL endpoint. Before this, users could try running a mutation on a materialized view through the GraphQL endpoint (or from the HGE console), which would result in the following error:
```json
{
  "errors": [
    {
      "extensions": {
        "internal": {
          "statement": "WITH \"articles_mat_view__mutation_result_alias\" AS (DELETE FROM \"public\".\"articles_mat_view\" WHERE (('true') AND (((((\"public\".\"articles_mat_view\".\"id\") = (('20155721-961c-4d8b-a5c4-873ed62c7a61')::uuid)) AND ('true')) AND ('true')) AND ('true'))) RETURNING * ), \"articles_mat_view__all_columns_alias\" AS (SELECT \"id\" , \"author_id\" , \"content\" , \"test_col\" , \"test_col2\" FROM \"articles_mat_view__mutation_result_alias\" ) SELECT json_build_object('affected_rows', (SELECT COUNT(*) FROM \"articles_mat_view__all_columns_alias\" ) ) ",
          "prepared": false,
          "error": {
            "exec_status": "FatalError",
            "hint": null,
            "message": "cannot change materialized view \"articles_mat_view\"",
            "status_code": "42809",
            "description": null
          },
          "arguments": []
        },
        "path": "$",
        "code": "unexpected"
      },
      "message": "database query error"
    }
  ]
}
```
Hence, we no longer generate mutation fields for materialized views at all.
https://github.com/hasura/graphql-engine-mono/pull/2226
GitOrigin-RevId: 4ef441764035a8039e1c780d454569ee1f2febc3
### Description
Correctly alias the aggregate field projections in place instead of aliasing them at a later stage.
PS: I discovered this required change while [developing SQL generation for MSSQL inserts](https://github.com/hasura/graphql-engine-mono/pull/2248).
### Changelog
- [ ] `CHANGELOG.md` is updated with user-facing content relevant to this PR. If no changelog is required, then add the `no-changelog-required` label.
### Affected components
- [x] Server
https://github.com/hasura/graphql-engine-mono/pull/2271
GitOrigin-RevId: 0d90fd8d8c0541b18ca9cb1197e413f3454bb227
### Description
This PR is an incremental work towards [enabling insert mutations on MSSQL](https://github.com/hasura/graphql-engine-mono/pull/1974). In this PR, we generate insert mutation schema parser for MSSQL backend.
### Changelog
- [ ] `CHANGELOG.md` is updated with user-facing content relevant to this PR. If no changelog is required, then add the `no-changelog-required` label.
### Affected components
- [x] Server
https://github.com/hasura/graphql-engine-mono/pull/2141
GitOrigin-RevId: 8595008dece35f7fded9c52e134de8b97b64f53f
When adding object relationships, we set the nullability of the generated GraphQL field based on whether the database backend enforces that the referenced data always exists. For manual relationships (corresponding to `manual_configuration`), the database backend is unaware of any relationship between data, and hence such fields are always set to be nullable.
For relationships generated from foreign key constraints (corresponding to `foreign_key_constraint_on`), we distinguish between two cases:
1. The "forward" object relationship from a referencing table (i.e. which has the foreign key constraint) to a referenced table. This should be set to be non-nullable when all referencing columns are non-nullable. But in fact, it used to set it to be non-nullable if *any* referencing column is non-nullable, which is only correct in Postgres when `MATCH FULL` is set (a flag we don't consider). This fixes that by changing a boolean conjunction to a disjunction.
2. The "reverse" object relationship from a referenced table to a referencing table which has the foreign key constraint. This should always be set to be nullable. But in fact, it used to always be set to non-nullable, as was reported in hasura/graphql-engine#7201. This fixes that.
Moreover, we have moved the computation of the nullability from `Hasura.RQL.DDL.Relationship` to `Hasura.GraphQL.Schema.Select`: this nullability used to be passed through the `riIsNullable` field of `RelInfo`, but for array relationships this information is not actually used, and moreover the remaining fields of `RelInfo` are already enough to deduce the nullability.
This also adds regression tests for both (1) and (2) above.
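A hedged one-line sketch of the fix for case (1), in terms of per-column nullability flags (the function name is hypothetical):

```hs
-- The forward object relationship is nullable iff *any* referencing column
-- is nullable: a disjunction. The bug had this as a conjunction.
forwardRelIsNullable :: [Bool] -> Bool
forwardRelIsNullable = or  -- was, incorrectly: and
```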
https://github.com/hasura/graphql-engine-mono/pull/2159
GitOrigin-RevId: 617f12765614f49746d18d3368f41dfae2f3e6ca
In hasura/graphql-engine#7172, an issue was found where under certain conditions a JSON field from Postgres would be parsed as a GraphQL input object, which is not possible in general, and also unnecessary. Luckily, this was already fixed by the time `v2.0.6` got around, presumably thanks to 4a83bb1834. This adds a regression test.
https://github.com/hasura/graphql-engine-mono/pull/2158
GitOrigin-RevId: 1ded1456f6b89726e08f77cf3383ad88c04de451
This removes the module re-exports of [Data.Align](https://hackage.haskell.org/package/semialign-1.2/docs/Data-Align.html) and [Data.These](https://hackage.haskell.org/package/these-1.1.1.1/docs/Data-These.html) from `Hasura.Prelude`. The reasoning being that they're not used widely and reasonably obscure, and that being explicit about the imports makes for an easier to understand codebase.
(I spent longer than I'd have liked earlier today figuring out where `align` in multitenant came from.
The right one not showing up on the first hoogle page doesn't help. Yes, better tool use could have
avoided that, but still...)
Do feel free to shoot this down, I won't insist on the change.
https://github.com/hasura/graphql-engine-mono/pull/2194
GitOrigin-RevId: 10f887b74538b17623bee6d6451c5aba11573fbd
Replaces one instance of `mtl`-style effects with `transformers`-style, as this results in a measurable reduction in memory usage. The change is kept completely within one module.
https://github.com/hasura/graphql-engine-mono/pull/1944
GitOrigin-RevId: 587b8e61725bb4a505404bbe741185759b7bceeb
This should be mostly a no-op change, with one exception:
When limits are not disabled, but neither node nor depth limit
is configured, we no longer count nodes/depth uselessly.
https://github.com/hasura/graphql-engine-mono/pull/2103
GitOrigin-RevId: 9943f89d6b969ca101a9a5601417c5b14a358a10
We also added a missing `--network=host` to the postgres container, which it turns out improves performance numbers a bit (hopefully increases stability a bit too).
https://github.com/hasura/graphql-engine-mono/pull/2149
Co-authored-by: David Overton <7734777+dmoverton@users.noreply.github.com>
GitOrigin-RevId: 3fffc8fbfc77606dd26421eed079629306b08d05
This removes the file `bahnql_query.yaml`, which is no longer being used.
a509a86eaa (hasura/graphql-engine#1117) changed the way we test the remote schema feature from using external GraphQL services to running our own mini GraphQL server for testing purposes. This gives us a lot of in-codebase flexibility on the behavior of "remote" GraphQL servers.
During this work, the `bahnql_query.yaml` test was swapped out for the `simple2_query.yaml` test. The former essentially tests if a field from a remote schema can be fetched, whereas the latter tests whether an entry can be fetched from the (non-remote!) database.
It's not clear to me why `bahnql_query.yaml` was no longer used. In any case, the relevant setup code was removed, and this test can no longer be run. Presumably we test such basic functionality already in many other ways.
https://github.com/hasura/graphql-engine-mono/pull/2102
GitOrigin-RevId: c01b7f7ec5c767c874bca2ddad991eb81a0e2809
### Description
This PR supersedes https://github.com/hasura/graphql-engine-mono/pull/1484. Apply `limit` to the table selection before joining relationship rows to improve query performance.
### Changelog
- [x] `CHANGELOG.md` is updated with user-facing content relevant to this PR. If no changelog is required, then add the `no-changelog-required` label.
### Affected components
- [x] Server
### Related Issues
Fix https://github.com/hasura/graphql-engine/issues/5745
### Solution and Design
Prior to this change, we applied `LIMIT` and `OFFSET` to the outer selection of the sub-query that includes the joins for relationships. Now, we move `LIMIT` and `OFFSET` (if present) into the inner selection of the base table. However, this isn't always done: if the `order_by` clause references relationships' columns, we still apply them at the outer selection. To know more, please refer to this [source code note](https://github.com/hasura/graphql-engine-mono/pull/2078/files#diff-46d868ee45d3eaac667cebb34731f573c77d5c9c8097bb9ccf1115fc07f65bfdR652).
```graphql
query {
article(limit: 2){
id
title
content
author{
name
}
}
}
```
Before:
```sql
SELECT
coalesce(json_agg("root"), '[]') AS "root"
FROM
(
SELECT
row_to_json(
(
SELECT
"_4_e"
FROM
(
SELECT
"_0_root.base"."id" AS "id",
"_0_root.base"."title" AS "title",
"_0_root.base"."content" AS "content",
"_3_root.or.author"."author" AS "author"
) AS "_4_e"
)
) AS "root"
FROM
(
SELECT
*
FROM
"public"."article"
WHERE
('true')
) AS "_0_root.base"
LEFT OUTER JOIN LATERAL (
SELECT
row_to_json(
(
SELECT
"_2_e"
FROM
(
SELECT
"_1_root.or.author.base"."name" AS "name"
) AS "_2_e"
)
) AS "author"
FROM
(
SELECT
*
FROM
"public"."author"
WHERE
(("_0_root.base"."author_id") = ("id"))
) AS "_1_root.or.author.base"
) AS "_3_root.or.author" ON ('true')
LIMIT
2
) AS "_5_root"
```
cost:
```
Aggregate (cost=0.73..0.74 rows=1 width=32)
-> Limit (cost=0.15..0.71 rows=2 width=32)
-> Nested Loop Left Join (cost=0.15..223.96 rows=810 width=32)
-> Seq Scan on article (cost=0.00..18.10 rows=810 width=72)
-> Index Scan using author_pkey on author (cost=0.15..0.24 rows=1 width=36)
Index Cond: (article.author_id = id)
SubPlan 1
-> Result (cost=0.00..0.01 rows=1 width=32)
SubPlan 2
-> Result (cost=0.00..0.01 rows=1 width=32)
```
After:
```sql
SELECT
coalesce(json_agg("root"), '[]') AS "root"
FROM
(
SELECT
row_to_json(
(
SELECT
"_4_e"
FROM
(
SELECT
"_0_root.base"."id" AS "id",
"_0_root.base"."title" AS "title",
"_0_root.base"."content" AS "content",
"_3_root.or.author"."author" AS "author"
) AS "_4_e"
)
) AS "root"
FROM
(
SELECT
*
FROM
"public"."article"
WHERE
('true')
LIMIT
2
) AS "_0_root.base"
LEFT OUTER JOIN LATERAL (
SELECT
row_to_json(
(
SELECT
"_2_e"
FROM
(
SELECT
"_1_root.or.author.base"."name" AS "name"
) AS "_2_e"
)
) AS "author"
FROM
(
SELECT
*
FROM
"public"."author"
WHERE
(("_0_root.base"."author_id") = ("id"))
) AS "_1_root.or.author.base"
) AS "_3_root.or.author" ON ('true')
) AS "_5_root"
```
cost:
```
Aggregate (cost=16.47..16.48 rows=1 width=32)
-> Nested Loop Left Join (cost=0.15..16.44 rows=2 width=100)
-> Limit (cost=0.00..0.04 rows=2 width=72)
-> Seq Scan on article (cost=0.00..18.10 rows=810 width=72)
-> Index Scan using author_pkey on author (cost=0.15..8.18 rows=1 width=36)
Index Cond: (article.author_id = id)
SubPlan 1
-> Result (cost=0.00..0.01 rows=1 width=32)
SubPlan 2
-> Result (cost=0.00..0.01 rows=1 width=32)
```
https://github.com/hasura/graphql-engine-mono/pull/2078
Co-authored-by: Evie Ciobanu <1017953+eviefp@users.noreply.github.com>
GitOrigin-RevId: 47eaccdbfb3499efd2c9f733f3312ad31c77916f
This is just a one-off fix, based on running ormolu across
the code base, which uses GHC's parser in haddock mode.
### Description
Fixes several instances of illegal haddock comments.
### Related Issues
#1679
### Steps to test and verify
Run ormolu over the codebase. Prior to this change, it complains that it
can't parse certain files due to malformed Haddock comments, after it
doesn't (there are still some other errors).
### Limitations, known bugs & workarounds
This doesn't ensure that we don't introduce similar issues in the future;
that'll be dealt with once we implement #1679.
#### Breaking changes
- [x] No Breaking changes, only touches code comments
https://github.com/hasura/graphql-engine-mono/pull/2010
GitOrigin-RevId: 7fbab0325ce13a16a04ff98d351f1af768e25d7c
## Suggestion: Add fancier trace debugging functions to `Hasura.Prelude`
This PR adds two trace functions, `ltrace` and `ltraceM`, which use the `pretty-simple` package to `show` the input with nice formatting and colors for ease of reading (and comparing using diff tools such as `meld` or `vim-diff`).
I've also added warning pragmas to the functions, which means:
1. Traces will not be left in code, as CI builds with `-Werror`
2. Developers will have to change the `ghc-options` to `-Wwarn` in their `cabal.project.local` settings to use these functions
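A hedged sketch of how such helpers might be defined, assuming `pShow` from `Text.Pretty.Simple` (the real definitions may differ):

```hs
import qualified Data.Text.Lazy as TL
import qualified Debug.Trace as Trace
import Text.Pretty.Simple (pShow)

{-# WARNING ltrace "ltrace left in code" #-}
ltrace :: Show a => String -> a -> a
ltrace label x = Trace.trace (label <> ": " <> TL.unpack (pShow x)) x

{-# WARNING ltraceM "ltraceM left in code" #-}
ltraceM :: (Show a, Applicative m) => String -> a -> m ()
ltraceM label x = Trace.traceM (label <> ": " <> TL.unpack (pShow x))
```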
### Example
Usage:
```hs
selectFunctionAggregate ... = ... do
ltraceM "functionInfo" function
...
```
Output to terminal looks like this:
<img width="524" alt="Screen Shot 2021-08-12 at 10 33 24" src="https://user-images.githubusercontent.com/8547573/129158878-4a5e96ba-30a5-452c-8f33-9eb4b2cc5e2a.png">
### Dependencies
Requires adding the following dependencies:
- prettyprinter-ansi-terminal-1.1.2 (BSD2)
- pretty-simple-4.0.0.0 (BSD3)
Question: what is the process for adding new dependencies? How do decisions on this matter happen?
https://github.com/hasura/graphql-engine-mono/pull/2075
GitOrigin-RevId: 490b0f0ca595da319b43e92e190ba50c0b132cd5
### Description
From HGE version 2.0 onwards, all remote relationship fields are generated as plain types, dropping their non-null and list wrappers. This PR fixes that.
### Changelog
- [x] `CHANGELOG.md` is updated with user-facing content relevant to this PR. If no changelog is required, then add the `no-changelog-required` label.
### Affected components
- [x] Server
- [x] Tests
### Related Issues
Fix https://github.com/hasura/graphql-engine/issues/7284
### Steps to test and verify
- Create a remote relationship to a field in remote schema with non-nullable or list type
- HGE introspection should report the remote relationship field's type exactly as it appears in the remote schema
https://github.com/hasura/graphql-engine-mono/pull/2071
GitOrigin-RevId: e113f5d17b62bfa0a25028c20260ae1782ae224b
This PR adds source code documentation for metadata versioning.
Note: Adding `force-skip-ci` label to bypass server tests as this PR only has changes in source code comments.
https://github.com/hasura/graphql-engine-mono/pull/2026
GitOrigin-RevId: f24dcc1579f98e59e40677b99890a8af273bb96e
### A long tale about encoding
GraphQL has an [introspection system](http://spec.graphql.org/June2018/#sec-Introspection), which allows its schema to be introspected. This is what we use to introspect [remote schemas](41383e1f88/server/src-rsr/introspection.json). There is one place in the introspection where we might find GraphQL values: the default value of an argument.
```json
{
"fields": [
{
"name": "echo",
"args": [
{
"name": "msg",
"defaultValue": "\"Hello\\nWorld!\""
}
]
}
]
}
```
Note that GraphQL's introspection is transport agnostic: the default value isn't returned as a JSON value, but as a _string-encoded GraphQL Value_. In this case, the value is the GraphQL String `"Hello\nWorld!"`. Embedded into a string, it is encoded as: `"\"Hello\\nWorld!\""`.
When we [parse that value](41383e1f88/server/src-lib/Hasura/GraphQL/RemoteServer.hs (L351)), we first extract that JSON string, to get its content, `"Hello\nWorld!"`, then use our [GraphQL Parser library](21c1ddfb41/src/Language/GraphQL/Draft/Parser.hs (L200)) to interpret this: we find the double quote, understand that the content is a String, unescape the backslashes, and end up with the desired string value: `['H', 'e', 'l', 'l', 'o', '\n', 'W', 'o', 'r', 'l', 'd', '!']`. This all works fine.
However, there was a bug in the _printer_ part of our parser library: when printing back a String value, we would not re-escape characters properly. In practice, this meant that the GraphQL String `"Hello\nWorld"` would be encoded in JSON as `"\"Hello\nWorld!\""`. Note how the `\n` is not properly double-escaped. This led to a variety of problems, as described in #1965:
- we would successfully parse a remote schema containing such characters in its default values, but then would print those erroneous JSON values in our introspection, which would _crash the console_
- we would inject those default values in queries sent to remote schemas, and print them wrong doing so, sending invalid values to remote schemas and getting errors in result
It turns out that this bug had been lurking in the code for a long time: I combed through the history of [the parser library](https://github.com/hasura/graphql-parser-hs), and as far as I can tell, this bug has always been there. So why was it never caught? After all, we do have [round trip tests](21c1ddfb41/test/Spec.hs (L52)) that print + parse arbitrary values and check that we get the same value as a result. They do use any arbitrary unicode character in their generated strings. So... that should have covered it, right?
Well... it turns out that [the tests were ignoring errors](7678066c49/test/Spec.hs (L45)), and would always return "SUCCESS" in CI, even if they failed... Furthermore, the sample size was small enough that, most of the time, _they would not hit such characters_. Running the tests locally on a loop, I only got errors ~10% of the time...
This was all fixed in hasura/graphql-parser-hs#44. This was probably one of Hasura's longest standing bugs? ^^'
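For illustration, a hedged sketch of the kind of re-escaping the printer must do (simplified: the full GraphQL rules also cover `\r`, `\t`, `\b`, `\f`, and unicode escapes):

```hs
-- Printing a GraphQL String: special characters must be re-escaped so that
-- "Hello\nWorld!" round-trips as "\"Hello\\nWorld!\"".
escapeGraphQLString :: String -> String
escapeGraphQLString s = '"' : concatMap escape s ++ ['"']
  where
    escape '"'  = "\\\""
    escape '\\' = "\\\\"
    escape '\n' = "\\n"
    escape c    = [c]
```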
### Description
This PR bumps the version of graphql-parser-hs in the engine, and switches some of our own arbitrary tests to use unicode characters in text rather than alphanumeric values. It turns out those tests were much better at hitting "bad" values, and that they consistently failed when generating arbitrary unicode characters.
https://github.com/hasura/graphql-engine-mono/pull/2031
GitOrigin-RevId: 54fa48270386a67336e5544351691619e0684559
### Description
A first PR, #1947, removed all the `Arbitrary` stuff from our codebase. But #1740, merged on the same day, added some tests relying on `Arbitrary`. In the merge process, some unneeded `Arbitrary` code got reintroduced.
This PR removes all `Arbitrary` stuff from `src-lib`, and cleans / refactors `Hasura.Generator` in `src-test`, reducing it to the bare minimum of `Arbitrary` instances.
https://github.com/hasura/graphql-engine-mono/pull/1957
GitOrigin-RevId: 7e76009bb022205e3737fca45749411a266cc08c
The `LazyTxT` type was introduced to avoid connecting to Postgres when a given GraphQL request did not require this. However, through the new way query execution plans are represented, this has now _mostly_ been taken care of in a different way, and so no `LazyTxT` action is generated at all for GraphQL requests that do not fetch data from Postgres.
This removes the laziness of `LazyTxT` by simply making it a newtype wrapper around `Q.TxET`. This simplifies a lot of code in `Hasura.Backends.Postgres.Connection`.
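A hedged sketch of the new shape, assuming `Q.TxET` from `pg-client-hs` (the deriving list is illustrative):

```hs
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import qualified Database.PG.Query as Q

-- No laziness machinery left: just a newtype over the transaction monad.
newtype LazyTxT e m a = LazyTxT (Q.TxET e m a)
  deriving (Functor, Applicative, Monad)
```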
https://github.com/hasura/graphql-engine-mono/pull/1959
GitOrigin-RevId: 58b4d5a05d67bc602b59e02ac338f1c3e63859c7
This PR:
- removes a dependency on Postgres' `ToSQL` in other backends
- removes some usage of `unsafePerformIO` in `FromJSON` / `Arbitrary`
- changes a `Set` into a `HashSet`
- moves some orphan instances where they belong:
- alongside others in `Hasura.Incremental` for `Cacheable`
- in `Hasura.Base.Instances` for `Hashable`
- introduces a local wrapper around ByteString to avoid unsound UTF8 instances
Some of the weird empty lines come from the fact that this PR is an offshoot of #1947.
https://github.com/hasura/graphql-engine-mono/pull/1949
GitOrigin-RevId: ef9d34452946f8466878d8fdda857b0b43816de7
## Description
This PR removes as many constraints as possible from Backend without refactoring the code. Thanks to #1844, a few ToJSON functions can be removed from the IR, and several constraints were simply redundant.
This PR borrows from similar work done as part of #1666.
## Note
To remove constraints more aggressively, I have explored the possibility of _removing Representable altogether_, in a [separate commit](https://github.com/hasura/graphql-engine-mono/compare/nicuveo/remove-extension-constraints..nicuveo/tentative-remove-representable). I am not convinced it's a good idea in terms of readability of the code, but it's a possibility.
Further work includes deciding what we want to do with `Show` and `ToTxt`; see #1747.
https://github.com/hasura/graphql-engine-mono/pull/1843
GitOrigin-RevId: 337324ad90cb8f86f06e1c5a36aa44bb414e195a
Query plan caching was introduced by - I believe - hasura/graphql-engine#1934 in order to reduce the query response latency. During the development of PDV in hasura/graphql-engine#4111, it was found out that the new architecture (for which query plan caching wasn't implemented) performed comparably to the pre-PDV architecture with caching. Hence, it was decided to leave query plan caching until some day in the future when it was deemed necessary.
Well, we're in the future now, and there still isn't a convincing argument for query plan caching. So the time has come to remove some references to query plan caching from the codebase. For the most part, any code being removed would probably not be very well suited to the post-PDV architecture of query execution, so arguably not much is lost.
Apart from simplifying the code, this PR will contribute towards making the GraphQL schema generation more modular, testable, and easier to profile. I'd like to eventually work towards a situation in which it's easy to generate a GraphQL schema parser *in isolation*, without being connected to a database, and then parse a GraphQL query *in isolation*, without even listening on any HTTP port. It is important that both of these operations can be examined in detail, and in isolation, since they are two major performance bottlenecks, as well as phases into which many important upcoming features hook.
## Implementation
The following have been removed:
- The entirety of `server/src-lib/Hasura/GraphQL/Execute/Plan.hs`
- The core phases of query parsing and execution no longer have any references to query plan caching. Note that this is not to be confused with query *response* caching, which is not affected by this PR. This includes removal of the types:
  - `Opaque`, which is replaced by a tuple. Note that the old implementation was broken and did not adequately hide the constructors.
  - `QueryReusability` (and the `markNotReusable` method). Notably, the implementation of the `ParseT` monad now consists of two, rather than three, monad transformers.
- Cache-related tests (in `server/src-test/Hasura/CacheBoundedSpec.hs`) have been removed.
- References to query plan caching in the documentation.
- The `planCacheOptions` in the `TenantConfig` type class was removed. However, during parsing, unrecognized fields in the YAML config get ignored, so this does not cause a breaking change. (Confirmed manually, as well as in consultation with @sordina.)
- The metrics no longer send cache hit/miss messages.
There are a few places in which one can still find references to query plan caching:
- We still accept the `--query-plan-cache-size` command-line option for backwards compatibility. The `HASURA_QUERY_PLAN_CACHE_SIZE` environment variable is not read.
https://github.com/hasura/graphql-engine-mono/pull/1815
GitOrigin-RevId: 17d92b254ec093c62a7dfeec478658ede0813eb7
## Description
Thanks to #1664, the Metadata API types no longer require a `ToJSON` instance. This PR follows up with a cleanup of the types of the arguments to the metadata API:
- whenever possible, it moves those argument types to where they're used (RQL.DDL.*)
- it removes all unrequired instances (mostly `ToJSON`)
This PR does not attempt to do it for _all_ such argument types. For some of the metadata operations, the type used to describe the argument to the API and used to represent the value in the metadata are one and the same (like for `CreateEndpoint`). Sometimes, the two types are intertwined in complex ways (`RemoteRelationship` and `RemoteRelationshipDef`). In the spirit of only doing uncontroversial cleaning work, this PR only moves types that are not used outside of RQL.DDL.
Furthermore, this is a small step towards separating the different types all jumbled together in RQL.Types.
## Notes
This PR also improves several `FromJSON` instances to make use of `withObject`, and to use a human readable string instead of a type name in error messages whenever possible. For instance:
- before: `expected Object for Object, but encountered X`
after: `expected Object for add computed field, but encountered X`
- before: `Expecting an object for update query`
after: `expected Object for update query, but encountered X`
This PR also renames `CreateFunctionPermission` to `FunctionPermissionArgument`, to remove the quite surprising `type DropFunctionPermission = CreateFunctionPermission`.
This PR also deletes some dead code, mostly in RQL.DML.
This PR also moves a PG-specific source resolving function from DDL.Schema.Source to the only place where it is used: App.hs.
https://github.com/hasura/graphql-engine-mono/pull/1844
GitOrigin-RevId: a594521194bb7fe6a111b02a9e099896f9fed59c
GJ IR changes cherry-picked from the original GJ branch. There is a separate (independently mergeable) PR for metadata changes (#1727), and there will be an upcoming PR for execution changes.
https://github.com/hasura/graphql-engine-mono/pull/1810
Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
GitOrigin-RevId: c31956af29dc9c9b75d002aba7d93c230697c5f4
## Description
This PR fixes an oversight in the implementation of the resolvers of different backends. To implement resolution from environment variables, both MSSQL and BigQuery were directly fetching the process' environment variables, instead of using the carefully curated set we thread from main. It worked just fine on OSS, but failed on Cloud.
This PR fixes this by adding an additional argument to `resolveSourceConfig`, to ensure that backends always use the correct set of variables.
https://github.com/hasura/graphql-engine-mono/pull/1891
GitOrigin-RevId: 58644cab7d041a8bf4235e2acfe9cf71533a92a1
### Description
This PR is the first of several PRs meant to introduce Generalized Joins. In this first PR, we add non-breaking changes to the Metadata types for DB-to-DB remote joins. Note that we are currently rejecting the new remote join format in order to keep folks from breaking their metadata (in case of a downgrade). These issues will be tackled (and JSON changes reverted) in subsequent PRs.
This PR also changes the way we construct the schema cache, splitting the processing of sources into two steps: we first resolve each source and construct a cache of their tables' raw info, then in a second step we build the source output. This is so that we have access to the target source's tables when building db-to-db relationships.
### Notes
- this PR contains a few minor cleanups of the schema
- it also fixes a bug in how we do renames in remote schema relationships
- it introduces cross-source schema dependencies
https://github.com/hasura/graphql-engine-mono/pull/1727
Co-authored-by: Evie Ciobanu <1017953+eviefp@users.noreply.github.com>
GitOrigin-RevId: f625473077bc5fff5d941b70e9a116192bc1eb22
## Description
This PR does a few minor improvements to the formatting of the benchmark results:
- changes the title from `<h1>` to `<h2>`, consistent with other hasura-bot reports AFAICT
- concatenates lines together to allow for more natural line wrapping
- allows for the detailed report to be collapsed (but is open by default)
## Notes
The state of the report (open / closed) isn't persisted across refreshes, and I don't know if it's feasible for it to be. An alternative would be to, when generating a new report, edit the old ones to make them collapsed by default.
https://github.com/hasura/graphql-engine-mono/pull/1867
GitOrigin-RevId: 7021a05aebdebe4cc5738d567a6dcc3aaabd38bf
In the same vein as #1762, but less critical: we have some old instances that were written manually because of AnyBackend, but that no longer need to be.
https://github.com/hasura/graphql-engine-mono/pull/1866
GitOrigin-RevId: f1515330859e70531139f8edb21bd016441f8e8d
During the "generalization" of the code, we removed the [generated instances](9d63187803 (diff-7b8382ab20e441fa214586be491eb6f7ca904d8b20ed97894c16a339be45219aL72)) of `Eq` and `Hashable` in favour of manually written ones, because for a time we were struggling with `AnyBackend`.
In the process, we lost one constructor, `MOEndpoint`. It's unlikely to have ever been an issue, since duplicate endpoints are unlikely, but it is nonetheless wrong.
This PR simply goes back to default derived instances, now that we can, fixing the problem and making the code simpler.
---
However, after filing this PR, I realized **it broke rest endpoint tests**. Upon closer inspection, it turns out that while we had proper checks in place, because of this bug we were actually failing _differently_ during metadata checks. This PR actually improves error messages for invalid endpoint configurations?! 🙀
This is a tiny bit scary. ^^'
https://github.com/hasura/graphql-engine-mono/pull/1762
GitOrigin-RevId: c4897d1f3ae92d18c44c87d2f58258c66f716686
The following has been tested locally:
1. [x] Uses the default query limit when `global_select_limit` is not present in the `metadata` configuration
2. [x] Uses the provided number when present in the configuration (tested with `1`)
3. [x] Throws an error if the provided value is not a parseable non-negative number (BigQuery throws an error when limits are negative)
https://github.com/hasura/graphql-engine-mono/pull/1816
Co-authored-by: Abby Sassel <3883855+sassela@users.noreply.github.com>
GitOrigin-RevId: e21157ce9ca8f8130b3b01e91ed048e79f376293
### Description
In our haste to generalize everything for MSSQL, we put every single "suspicious" type in Backend, including ones that weren't required. `Alias` is one of those: it's only used in a type alias, and is actually just an implementation detail of the translation layer. This PR removes it.
https://github.com/hasura/graphql-engine-mono/pull/1759
GitOrigin-RevId: fb348934ec65a51aae7f95d93c83c3bb704587b5
I finally got fed up trying to decipher the messages from failing assertions in integration tests and was thus spurred into action :-)
Instead of using the `repr(..)` format for the `OrderedDict`s of query responses we now deliberately format them as JSON with indentation.
https://github.com/hasura/graphql-engine-mono/pull/1751
GitOrigin-RevId: 1ca0d03ff52950095b89b447d9625d6481bc7177
### Description
This PR removes all `fmapX` and `traverseX` functions from RQL.IR, favouring instead `Functor` and `Traversable` instances throughout the code. This was a relatively straightforward change, except for two small pain points: `AnnSelectG` and `AnnInsert`. Both were parametric over two types `a` and `v`, making it impossible to make them traversable functors... But it turns out that in every single use case, `a ~ f v`. Changing those types to take such an `f :: Type -> Type` as an argument instead of `a :: Type` makes it possible to make them functors.
The only small difference is for `AnnIns`: I had to introduce one `Identity` transformation for one of the `f` parameters. This is relatively straightforward.
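A heavily simplified, hedged sketch of the parameter change (fields are illustrative):

```hs
{-# LANGUAGE DeriveTraversable #-}

-- Before: AnnSelectG a v, where every use site had a ~ f v.
-- After: take the functor itself, and stock deriving does the rest.
data AnnSelectG f v = AnnSelectG
  { _asnFields :: f v
  , _asnLimit  :: Maybe Int
  } deriving (Functor, Foldable, Traversable)
```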
### Notes
This PR fixes the most verbose BigQuery hint (`let` instead of `<- pure`).
https://github.com/hasura/graphql-engine-mono/pull/1668
GitOrigin-RevId: e632263a8c559aa04aeae10dcaec915b4a81ad1a
Blocked on https://github.com/hasura/graphql-engine-mono/pull/1640.
While fiddling with BigQuery I noticed a severe issue with offset/limit for array-aggregates. I've fixed it now.
The basic problem was that I was using a query like this:
```graphql
query MyQuery {
hasura_Artist(order_by: {artist_self_id: asc}) {
artist_self_id
albums_aggregate(order_by: {album_self_id: asc}, limit: 2) {
nodes {
album_self_id
}
aggregate {
count
}
}
}
}
```
Producing this SQL:
```sql
SELECT `t_Artist1`.`artist_self_id` AS `artist_self_id`,
STRUCT(IFNULL(`aa_albums1`.`nodes`, NULL) AS `nodes`, IFNULL(`aa_albums1`.`aggregate`, STRUCT(0 AS `count`)) AS `aggregate`) AS `albums_aggregate`
FROM `hasura`.`Artist` AS `t_Artist1`
LEFT OUTER JOIN (SELECT ARRAY_AGG(STRUCT(`t_Album1`.`album_self_id` AS `album_self_id`) ORDER BY (`t_Album1`.`album_self_id`) ASC) AS `nodes`,
STRUCT(COUNT(*) AS `count`) AS `aggregate`,
`t_Album1`.`artist_other_id` AS `artist_other_id`
FROM (SELECT *
FROM `hasura`.`Album` AS `t_Album1`
ORDER BY (`t_Album1`.`album_self_id`) ASC NULLS FIRST
-- PROBLEM HERE
LIMIT @param0) AS `t_Album1`
GROUP BY `t_Album1`.`artist_other_id`)
AS `aa_albums1`
ON (`aa_albums1`.`artist_other_id` = `t_Artist1`.`artist_self_id`)
ORDER BY (`t_Artist1`.`artist_self_id`) ASC NULLS FIRST
```
Note the `LIMIT @param0` -- that is incorrect because we want to limit
per artist. Instead, we want:
```sql
SELECT `t_Artist1`.`artist_self_id` AS `artist_self_id`,
STRUCT(IFNULL(`aa_albums1`.`nodes`, NULL) AS `nodes`, IFNULL(`aa_albums1`.`aggregate`, STRUCT(0 AS `count`)) AS `aggregate`) AS `albums_aggregate`
FROM `hasura`.`Artist` AS `t_Artist1`
LEFT OUTER JOIN (SELECT ARRAY_AGG(STRUCT(`t_Album1`.`album_self_id` AS `album_self_id`) ORDER BY (`t_Album1`.`album_self_id`) ASC) AS `nodes`,
STRUCT(COUNT(*) AS `count`) AS `aggregate`,
`t_Album1`.`artist_other_id` AS `artist_other_id`
FROM (SELECT *,
-- ADDED
ROW_NUMBER() OVER(PARTITION BY artist_other_id) artist_album_index
FROM `hasura`.`Album` AS `t_Album1`
ORDER BY (`t_Album1`.`album_self_id`) ASC NULLS FIRST
) AS `t_Album1`
-- CHANGED
WHERE artist_album_index <= @param
GROUP BY `t_Album1`.`artist_other_id`)
AS `aa_albums1`
ON (`aa_albums1`.`artist_other_id` = `t_Artist1`.`artist_self_id`)
ORDER BY (`t_Artist1`.`artist_self_id`) ASC NULLS FIRST
```
The row-number filter in the WHERE clause serves both the LIMIT and OFFSET functions. Then, both the ARRAY_AGG and the COUNT are correct per artist.
I've updated my Haskell test suite to add regression tests for this. I'll push a commit for Python tests shortly. The tests still pass there.
This just fixes a case that we hadn't noticed.
https://github.com/hasura/graphql-engine-mono/pull/1641
GitOrigin-RevId: 49933fa5e09a9306c89565743ecccf2cb54eaa80
### Context
One of the ways we use the Backend type families is to use `Void` for all types for which a backend has no representation; this allows us to make some branches of our metadata and IR unrepresentable, making some functions total, where they would have to handle those unsupported cases otherwise.
However, one of the biggest features, functions, cannot be cut that way, due to one of the constraints on `FunctionName b`: the metadata generator requires it to have an `Arbitrary` instance, and `Arbitrary` does not have a recovery mechanism which would allow for a `Void` instance...
### Description
This PR solves this problem and removes the `Arbitrary` constraints in `Backend`. To do so, it introduces a new typeclass: `PartialArbitrary`, which is very similar to `Arbitrary`, except that it returns a `Maybe (Gen a)`, allowing for `Void` to have a well-formed instance. An `Arbitrary` instance for `Metadata` can easily be retrieved with `arbitrary = fromJust . partialArbitrary`.
Furthermore, `PartialArbitrary` has a generic implementation, inspired by the one in `generic-arbitrary`, which automatically prunes branches that return `Nothing`, allowing most types to be constructed automatically. Types that don't have a type parameter and therefore can't contain `Void` can easily get their `PartialArbitrary` instance from `Arbitrary` with `partialArbitrary = Just arbitrary`. This is what a default overlappable instance provides.
In conjunction with other cleanups in #1666, **this allows for Void function names**.
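A hedged sketch of the core of the class as described above:

```hs
{-# LANGUAGE FlexibleInstances, UndecidableInstances #-}

import Data.Void (Void)
import Test.QuickCheck (Arbitrary (..), Gen)

class PartialArbitrary a where
  partialArbitrary :: Maybe (Gen a)

-- 'Void' gets a well-formed instance: its branch is pruned, not diverging.
instance PartialArbitrary Void where
  partialArbitrary = Nothing

-- Types that can't contain 'Void' recover the plain behaviour:
instance {-# OVERLAPPABLE #-} Arbitrary a => PartialArbitrary a where
  partialArbitrary = Just arbitrary

-- A full 'Arbitrary' can then be recovered where generation can't fail:
-- arbitrary = fromJust partialArbitrary
```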
### Notes
While this solves the stated problem, there are other possible solutions we could explore, such as:
- switching from QuickCheck to a library that supports that kind of pruning natively
- removing the test altogether, and dropping all notion of Arbitrary from the code
There are also several things we could do with the Generator module:
- move it out of RQL.DDL.Metadata, to some place that makes more sense
- move ALL Arbitrary instances in the code to it, since nothing else uses Arbitrary
- or, to the contrary, move all those Arbitrary instances alongside their types, to avoid an orphan instance
https://github.com/hasura/graphql-engine-mono/pull/1667
GitOrigin-RevId: 88e304ea453840efb5c0d39294639b8b30eefb81
### Description
The spock handler requires the request type to have a `ToJSON` instance AND a `FromJSON` instance. That's because we parse the received bytestring into its proper type... and then call `toJSON` on it to log it. This PR simplifies this by keeping the intermediate `Value` obtained during parsing and using it for logging (see the sketch after this list). This has two consequences:
1. it removes the `ToJSON` constraint, which will remove some code down the line (esp. in Metadata)
2. it means we log the actual JSON object query we received, not the result of parsing it, meaning the logged object will contain fields that would have been ignored when parsing the actual value; this is both an upside (more accurate log) and a downside (could be more verbose / more confusing)
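A hedged sketch of the idea (all names hypothetical):

```hs
import Data.Aeson (FromJSON, Result (..), Value, eitherDecode, fromJSON)
import qualified Data.ByteString.Lazy as BL

handleApiRequest
  :: FromJSON req
  => BL.ByteString     -- raw request body
  -> (Value -> IO ())  -- logger
  -> (req -> IO r)     -- actual handler
  -> IO (Either String r)
handleApiRequest body logJSON handler =
  case eitherDecode body of
    Left err -> pure (Left err)
    Right val -> do
      logJSON val  -- log exactly the JSON we received; no ToJSON needed
      case fromJSON val of
        Error err   -> pure (Left err)
        Success req -> Right <$> handler req
```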
### Further work
Should this PR also remove all obsolete ToJSON instances while at it?
How do we test this?
https://github.com/hasura/graphql-engine-mono/pull/1664
GitOrigin-RevId: ae099eea9a671eabadcdf507f993a5ad9433be87
- add name fields to log output in several spots:
  - action logs get the action name in `detail.action_name`
  - events (triggered and scheduled) get the trigger name in `detail.event_name`
  - one-off scheduled events don't have a trigger name; instead, they get the comment if it exists
- remove unused event creation timestamp from `ExtraLogContext`
https://github.com/hasura/graphql-engine-mono/pull/1712
GitOrigin-RevId: 28907340d4e2d9adc0c48cc5d3010eef1fa902e1
- Add export list to Hasura.Eventing.Common
- Group logging options in one type / argument
- Group request header processing code in one place
This doesn't address the convoluted logic, but should make it a bit easier
to figure out what's going on for the next person.
https://github.com/hasura/graphql-engine-mono/pull/1710
GitOrigin-RevId: 34b0abdd1b86b5836eb512484acb0db8c81f3014
### Description
This PR fixes a major issue in the JSON instances of `AnyBackend`: they were not symmetrical! `FromJSON` always made the assumption that the value was an object, and that it contained a "kind" field if it happened to not be a Postgres value. `ToJSON` did NOT insert said field in the output, and did not enforce that the output was an object.
However, it worked, because nowhere in the code did we yet rely on those instances being symmetrical. They are both used only once:
- `parseJSON` was used to decode a `Metadata` object, but the matching `toJSON` instance, which is heavily customized, does insert the "kind" field properly
- `toJSON` was only used on the `SchemaCache`, which has no corresponding `FromJSON` instance, since we only serialize it in debug endpoints
This PR makes no attempt at making the instances symmetrical. Instead, it implements simpler functions, and pushes the problem of identifying the proper backend (if any) to the call sites.
### Notes
Additionally, it cleans up some instances that were manually written where they could be auto-generated. In the process, this PR changes the semantics of `Show`, since the stock derived instance will include the constructor name, where before it was skipped. I think it is preferable.
https://github.com/hasura/graphql-engine-mono/pull/1672
GitOrigin-RevId: 0a1580a0e0f01c25b8c9fee7612dba6e7de055d5
* fix resetting the catalog version to 43 on migration from 1.0 to 2.0
* ci: remove applying patch in test_oss_server_upgrade job
* make the 43 to 46th migrations idempotent
* Set missing HASURA_GRAPHQL_EVENTS_HTTP_POOL_SIZE=8 in upgrade_test
It's not clear why this wasn't caught in CI.
* ci: disable one component of event backpressure test
Co-authored-by: Vishnu Bharathi P <vishnubharathi04@gmail.com>
Co-authored-by: Karthikeyan Chinnakonda <karthikeyan@hasura.io>
Co-authored-by: Brandon Simmons <brandon@hasura.io>
GitOrigin-RevId: c74c6425266a99165c6beecc3e4f7c34e6884d4d
Previous versions of 'pg-client-hs' provided a Template Haskell splice
'sqlFromFile' which returned compile-time embedded PostgreSQL queries.
Rather than returning a concrete 'Query', however, this function
returned the polymorphic 'IsString txt => txt', which allowed the caller
to implicitly convert the result to any other type with an
'IsString' instance.
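A hedged sketch of the distinction (the real splice in 'pg-client-hs' differs in detail):

```hs
{-# LANGUAGE TemplateHaskell #-}

import Language.Haskell.TH (Exp, Q, runIO)
import Language.Haskell.TH.Syntax (addDependentFile)

newtype Query = Query String  -- stand-in for the concrete query type

-- Embeds a SQL file at compile time. Wrapping the splice result in the
-- concrete 'Query' avoids the old 'IsString txt => txt' type, which let
-- callers silently convert the result to any IsString type.
sqlFromFile :: FilePath -> Q Exp
sqlFromFile fp = do
  addDependentFile fp
  contents <- runIO (readFile fp)
  [| Query contents |]
```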
https://github.com/hasura/graphql-engine-mono/pull/1570
GitOrigin-RevId: fb4294439148ae8b2762138ece2d59e8e18ef5e0
### Description
This PR adds the required IR for DB to DB joins, based on @paf31 and @0x777 's `feature/db-to-db` branch.
To do so, it also refactors the IR to introduce a new type parameter, `r`, which is used to recursively construct the `v` parameter of remote QueryDBs. When collecting remote joins, we replace `r` with `Const Void`, indicating at the type level that there cannot be any leftover remote join.
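A heavily simplified, hedged sketch of the trick (the real IR has many more cases):

```hs
import Data.Functor.Const (Const)
import Data.Void (Void)

-- 'r' marks the remote pieces of the IR (reduced to one constructor here).
data SelectionField r v
  = ColumnField v      -- an ordinary database value
  | RemoteField (r v)  -- a remote join, built recursively from r

-- After collecting remote joins, we set r ~ Const Void: 'Const Void v' has
-- no inhabitants, so later stages provably see no leftover remote joins.
type ResolvedField v = SelectionField (Const Void) v
```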
Furthermore, this PR refactors IR.Select for readability, moves some code from IR.Root to IR.Select to avoid having to deal with circular dependencies, and makes it compile by adding `error` in all new cases in the execution pipeline.
The diff doesn't make it clear, but most of Select.hs is actually unchanged. Declarations have just been reordered by topic, in the following order:
- type declarations
- instance declarations
- type aliases
- constructor functions
- traverse functions
https://github.com/hasura/graphql-engine-mono/pull/1580
Co-authored-by: Phil Freeman <630306+paf31@users.noreply.github.com>
GitOrigin-RevId: bbdcb4119cec8bb3fc32f1294f91b8dea0728721
A pull request for cleaning up small issues, bugs, redundancies and missing things in the BigQuery backend.
Summary:
1. Remove duplicate projection fields - BigQuery rejects these.
2. Add order_by to the test suite cases, as it was returning inconsistent results.
3. Add lots of comments in FromIr about how the dataloader approach is supported.
4. Produce the correct output structure for aggregates:
   a. Should be a singleton object for a top-level aggregate query.
   b. Should have appropriate aggregate{} and nodes{} labels.
   c. **Support for nodes** (via array_agg).
5. Smooth over support of array aggregates by removing the fields used for joining with an explicit projection of each wanted field.
https://github.com/hasura/graphql-engine-mono/pull/1317
Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
GitOrigin-RevId: cd3899f4667770a27055f94988ef2a6d5808f1f5
### Description
RunSQL commands are analyzed to detect whether they require a schema cache rebuild; in the case of Citus we were always returning `False`. This PR fixes this, and also removes the catch-all case, to make it explicit / obvious whenever we change this.
https://github.com/hasura/graphql-engine-mono/pull/1549
GitOrigin-RevId: dddaaea868e7b7999bdfe11451032df9d9b44274
Remote relationships are now supported on SQL Server and BigQuery. The major change though is the re-architecture of remote join execution logic. Prior to this PR, each backend is responsible for processing the remote relationships that are part of their AST.
This is not ideal, as there is nothing specific about a remote join's execution that ties it to a backend. The only backend-specific part is whether or not the specification of the remote relationship is valid (i.e., we'll need to validate whether the scalars are compatible).
The approach now changes to this:
1. Before delegating the AST to the backend, we traverse the AST, collect all the remote joins while modifying the AST to add necessary join fields where needed.
1. Once the remote joins are collected from the AST, the database call is made to fetch the response. The necessary data for the remote join(s) is collected from the database's response and one or more remote schema calls are constructed as necessary.
1. The remote schema calls are then executed and the data from the database and from the remote schemas is joined to produce the final response.
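A hedged, stub-typed sketch of that pipeline (all names hypothetical):

```hs
import Control.Monad (foldM)

-- Stub types, for illustration only.
data QueryAST = QueryAST
data RemoteJoin = RemoteJoin
type Response = String

collectRemoteJoins :: QueryAST -> (QueryAST, [RemoteJoin])
collectRemoteJoins ast = (ast, [])  -- 1. traverse, add join fields, collect

runDatabaseQuery :: QueryAST -> IO Response
runDatabaseQuery _ = pure "{}"      -- 2. a single database call

executeRemoteJoin :: Response -> RemoteJoin -> IO Response
executeRemoteJoin r _ = pure r      -- 3. remote schema call + stitch results

executeQuery :: QueryAST -> IO Response
executeQuery ast = do
  let (coreAst, joins) = collectRemoteJoins ast
  dbResponse <- runDatabaseQuery coreAst
  foldM executeRemoteJoin dbResponse joins
```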
### Known issues
1. Ideally the traversal of the IR to collect remote joins should return an AST which does not include remote join fields. This operation can be type safe but isn't taken up as part of the PR.
1. There is a lot of code duplication between `Transport/HTTP.hs` and `Transport/Websocket.hs` which needs to be fixed ASAP. This too hasn't been taken up by this PR.
1. The type which represents the execution plan is only modified to handle our current remote joins and as such it will have to be changed to accommodate general remote joins.
1. Use of lenses would have reduced the boilerplate code to collect remote joins from the base AST.
1. The current remote join logic assumes that the join columns of a remote relationship appear with their names in the database response. This however is incorrect as they could be aliased. This can be taken up by anyone, I've left a comment in the code.
### Notes to the reviewers
I think it is best reviewed commit by commit.
1. The first one is very straight forward.
1. The second one refactors the remote join execution logic but other than moving things around, it doesn't change the user facing functionality. This moves Postgres specific parts to `Backends/Postgres` module from `Execute`. Some IR related code to `Hasura.RQL.IR` module. Simplifies various type class function signatures as a backend doesn't have to handle remote joins anymore
1. The third one fixes partial case matches that for some weird reason weren't shown as warnings before this refactor
1. The fourth one generalizes the validation logic of remote relationships and implements `scalarTypeGraphQLName` function on SQL Server and BigQuery which is used by the validation logic. This enables remote relationships on BigQuery and SQL Server.
https://github.com/hasura/graphql-engine-mono/pull/1497
GitOrigin-RevId: 77dd8eed326602b16e9a8496f52f46d22b795598
This reverts the remote schema type customisation and namespacing feature temporarily as we test for certain conditions.
GitOrigin-RevId: f8ee97233da4597f703970c3998664c03582d8e7
### Description
### Changelog
- [x] `CHANGELOG.md` is updated with user-facing content relevant to this PR. If no changelog is required, then add the `no-changelog-required` label.
### Affected components
- [x] Server
### Related Issues
Fixes #1528
### Solution and Design
Only replace connection configuration instead of replacing entire metadata with empty one.
GitOrigin-RevId: f9a16dcc7b1219ec1af915bf083622fcb7dde69a
This is a minor refactor (part of `Internal/Parser.hs` is moved into `Internal/Input.hs`) to remove `Collect.hs-boot` and `Directives.hs-boot` files. Without these changes:
1. Most changes would trigger recompilation from the modules with hs-boot files.
1. haskell-language-server fails for some reason in the presence of hs-boot files.
GitOrigin-RevId: 77a2e443417b449c5d7d9d418fc75fcdf076a9ae
This implements the `snakeCaseTableName` method. It's part of the larger BigQuery PR at https://github.com/hasura/graphql-engine-mono/pull/1317, but @0x777 requested a smaller PR with this change.
GitOrigin-RevId: aeb93c3228bb6ba7293cd73b7a34b39c7ed6139a
event catalog:
- `hdb_catalog` is no longer automatically created
- catalog is initialised when the first event trigger is created
- catalog initialisation is done during the schema cache build, using `ArrowCache` so it is only run in response to a change to the set of event triggers
event queue:
- `processEventQueue` thread is prevented from starting when `HASURA_GRAPHQL_EVENTS_FETCH_INTERVAL=0`
- `processEventQueue` thread only processes sources for which at least one event trigger exists in some table in the source
Co-authored-by: Anon Ray <616387+ecthiender@users.noreply.github.com>
GitOrigin-RevId: 73f256465d62490cd2b59dcd074718679993d4fe
This essentially restores the original code from c425b554b8
(https://github.com/hasura/graphql-engine/pull/4013). Prior to that
commit, we would slurp messages as fast as possible from the database
(one thing c425b55 fixed).
Another thing broken as a consequence of the same logic was the
`removeEventFromLockedEvents` logic, which unlocks in-flight events
(breaking at-least-once delivery).
Some archeology, post-c425b55:
- cc8e2ccc erroneously attempted to refactor using `bracket`, resulting
in the same slurp-all-events behavior (since we don't ever wait for
processEvent to complete)
- at some point, event processing within a batch was made serial; this was
reported as a bug. See: https://github.com/hasura/graphql-engine/issues/5189
- in 0ef52292b5 (which I approved...) an `async` is added, again
causing the same issue...
GitOrigin-RevId: d8cbaab385267a4c3f1f173e268a385265980fb1
Aggregate fields - both the table and the relationship fields - haven't been enabled till now. This PR enables the aggregate fields and fixes an issue with root aggregate fields.
Co-authored-by: Chris Done <11019+chrisdone@users.noreply.github.com>
GitOrigin-RevId: a6d7ed9b45cda6af659a57576a8623c725a7372f
This claws back ~7min from integration tests (run serially, as with `dev.sh test --integration`).
Further improvements would do well to focus on optimizing metadata operations, as `setup` dominates.
GitOrigin-RevId: 76637d6fa953c2404627c4391447a05bf09355fa
Removing `schemaSyncDisable` flag and interpreting `schemaPollInterval` of `0` as disabling schema sync.
This change brings the convention in line with how action and other intervals are used to disable processes.
There is an opportunity to abstract the notion of an optional interval similar to how actions uses `AsyncActionsFetchInterval`.
This can be used for the following fields of ServeOptions, with RawServeOptions having a milliseconds value where `0` is interpreted as disable.
OptionalInterval:
```haskell
-- | Sleep time interval for activities
data OptionalInterval
  = Skip                     -- ^ No polling
  | Interval !Milliseconds   -- ^ Interval time
  deriving (Show, Eq)
```
ServeOptions:
```haskell
data ServeOptions impl = ServeOptions
  { ...
  , soEventsFetchInterval       :: !OptionalInterval
  , soAsyncActionsFetchInterval :: !OptionalInterval
  , soSchemaPollInterval        :: !OptionalInterval
  , ...
  }
```
Rather than encoding a `Maybe OptionalInterval` in RawServeOptions, a `Maybe Milliseconds` can be used to more directly express the input format, with the ServeOptions constructor interpreting `0` as `Skip`.
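A minimal sketch of that interpretation (the helper name here is hypothetical):
```haskell
-- Hypothetical helper, not the actual constructor code: interpret the raw
-- milliseconds value, treating 0 as "disable this polling thread".
mkOptionalInterval :: Milliseconds -> OptionalInterval
mkOptionalInterval ms
  | ms == 0   = Skip
  | otherwise = Interval ms
```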
Current inconsistencies:
* `soEventsFetchInterval` has no value interpreted as disabling the fetches
* `soAsyncActionsFetchInterval` uses an `OptionalInterval` analog in `RawServeOptions` instead of `Milliseconds`
* `soSchemaPollInterval` currently uses `Milliseconds` directly in `ServeOptions`
GitOrigin-RevId: 3cda1656ae39ae95ba142512ed4e123d6ffeb7fe
Modifying schema-sync implementation to use polling for OSS/Pro. Invalidations are now propagated via the `hdb_catalog.hdb_schema_notifications` table in OSS/Pro. Pattern followed is now a Listener/Processor split with Cloud listening for changes via a LISTEN/NOTIFY channel and OSS polling for resource version changes in the metadata table. See issue #460 for more details.
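On the OSS polling side, the loop is conceptually as follows; this is a simplified sketch with hypothetical names, not the actual implementation (which reads the resource version from `hdb_catalog.hdb_schema_notifications`):
```haskell
import Control.Concurrent (threadDelay)
import Control.Monad (when)

-- Simplified sketch of the Listener: poll the metadata store for a resource
-- version bump and hand changes to the Processor.
pollSchemaSync
  :: IO Int          -- ^ fetch the current resource version from the metadata DB
  -> (Int -> IO ())  -- ^ processor action, run when the version changes
  -> Int             -- ^ poll interval, in microseconds
  -> IO ()
pollSchemaSync fetchVersion process intervalMicros = go 0
  where
    go seen = do
      threadDelay intervalMicros
      latest <- fetchVersion
      when (latest /= seen) (process latest)
      go latest
```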
GitOrigin-RevId: 48434426df02e006f4ec328c0d5cd5b30183db25
Multi-source support had limited the availability of async action queries in subscriptions. This PR
adds support for async action query subscriptions with a new implementation. It also addresses https://github.com/hasura/graphql-engine/issues/6460.
GitOrigin-RevId: 5ddc321073d224f287dc4b86ce2239ff55190b36
* add `use_prepared_statements` option to add_pg_source API
* update CHANGELOG.md
* change default value for 'use_prepared_statements' to False
* update CHANGELOG.md
GitOrigin-RevId: 6f5b90991f4a8c03a51e5829d2650771bb0e29c1
While debugging issues with HLS, Reed Mullanix noticed that we don't use relative paths. This leads to problems when using HLS + Emacs due to a bug in `lsp-mode` which prevents it from finding the correct project root.
Regardless of that bug, it is good practice to use relative paths in TH for other reasons, including being able to import these modules in GHCi.
This PR should make it so HLS-1.0 & emacs provide type inference, imports, etc., in all modules in our codebase.
GitOrigin-RevId: 5f53b9a7ccf46df1ea7be94ff0a5c6ec861f4ead
Fixes https://github.com/hasura/graphql-engine-mono/issues/712
Main point of interest: the `Hasura.SQL.Backend` module.
This PR creates an `Exists` type, indexed by an indexed type and a packed constraint, while hiding all of its complexity by not exporting the constructor.
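The shape of the type, as a minimal self-contained sketch (simplified relative to the real module):
```haskell
{-# LANGUAGE ConstraintKinds, GADTs, KindSignatures, RankNTypes #-}

module Exists (Exists, mkExists, dispatch) where

import Data.Kind (Constraint, Type)

-- | An existential box: packs some @i a@ together with evidence for @c a@,
-- without exposing @a@. The constructor itself is not exported.
data Exists (c :: Type -> Constraint) (i :: Type -> Type) where
  Exists :: c a => i a -> Exists c i

mkExists :: c a => i a -> Exists c i
mkExists = Exists

-- | The only way to consume the box: a rank-2 continuation that must work
-- for any @a@ satisfying @c@.
dispatch :: Exists c i -> (forall a. c a => i a -> r) -> r
dispatch (Exists x) k = k x
```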
Existential constructors/types which are no longer (directly) existential:
- [x] BackendSourceInfo :: BackendSourceInfo
- [x] BackendSourceMetadata :: BackendSourceMetadata
- [x] MOSourceObjId :: MetadataObjId
- [x] SOSourceObj :: SchemaObjId
- [x] RFDB :: RootField
- [x] LQP :: LiveQueryPlan
- [x] ExecutionStep :: ExecStepDB
This PR also removes ALL usages of `Typeable.cast` from our codebase. We still need to derive `Typeable` in a few places in order to be able to derive `Data` in one place. I have not dug deeper to see why this is needed.
GitOrigin-RevId: bb47e957192e4bb0af4c4116aee7bb92f7983445
Previously invalid REST endpoints would throw errors during schema cache build.
This PR changes the validation to instead add to the inconsistent metadata objects in order to allow use of `allow_inconsistent_metadata` with inconsistent REST endpoints.
All non-fatal endpoint definition errors are returned as inconsistent metadata warnings/errors depending on the use of `allow_inconsistent_metadata`. The endpoints with issues are then created and return informational runtime errors when they are called.
Console impact when creating endpoints is that error messages now refer to metadata inconsistencies rather than the REST feature at the top level:
![image](https://user-images.githubusercontent.com/92299/109911843-ede9ec00-7cfe-11eb-9c55-7cf924d662a6.png)
<img width="969" alt="image" src="https://user-images.githubusercontent.com/92299/110258597-8336fa00-7ff7-11eb-872c-bfca945aa0e8.png">
Note: Conflicting endpoints generate one error per conflicting set of endpoints due to the implementation of `groupInconsistentMetadataById` and `imObjectIds`. This is done to keep error messages terse, but it may cause problems if assumptions are made around `imObjectIds`.
Related to https://github.com/hasura/graphql-engine-mono/pull/473 (Allow Inconsistent Metadata (v2) #473 (Merged))
---
### Kodiak commit message
Changes the validation to use inconsistent metadata objects for REST endpoint issues.
#### Commit title
Inconsistent metadata for REST endpoints
GitOrigin-RevId: b9de971208e9bb0a319c57df8dace44cb115ff66
fixes #3868
docker image - `hasura/graphql-engine:inherited-roles-preview-48b73a2de`
Note:
To be able to use the inherited roles feature, the graphql-engine should be started with the env variable `HASURA_GRAPHQL_EXPERIMENTAL_FEATURES` set to `inherited_roles`.
Introduction
------------
This PR implements the idea of multiple roles as presented in this [paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/FGALanguageICDE07.pdf). The multiple roles feature in this PR can be used via inherited roles. An inherited role is a role which can be created by combining multiple singular roles. For example, if there are two roles `author` and `editor` configured in the graphql-engine, then we can create an inherited role named `combined_author_editor` which will combine the select permissions of the `author` and `editor` roles; GraphQL queries can then be made using the `combined_author_editor` role.
How are select permissions of different roles combined?
------------------------------------------------------------
A select permission includes 5 things:
1. Columns accessible to the role
2. Row selection filter
3. Limit
4. Allow aggregation
5. Scalar computed fields accessible to the role
Suppose there are two roles: `role1` gives access to the `address` column with row filter `P1`, and `role2` gives access to both the `address` and the `phone` columns with row filter `P2`. We create a new role `combined_roles` which combines `role1` and `role2`.
Let's say the following GraphQL query is queried with the `combined_roles` role.
```graphql
query {
  employees {
    address
    phone
  }
}
```
This will translate to the following SQL query:
```sql
select
  (case when (P1 or P2) then address else null end) as address,
  (case when P2 then phone else null end) as phone
from employee
where (P1 or P2)
```
The other parameters of the select permission will be combined in the following manner:
1. Limit - Minimum of the limits will be the limit of the inherited role
2. Allow aggregations - If any of the roles allows aggregation, then the inherited role will allow aggregation
3. Scalar computed fields - same as table column fields, as in the above example
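A rough sketch of these combination rules, using hypothetical simplified types (the engine's actual types differ):
```haskell
import Control.Applicative ((<|>))

-- Hypothetical, simplified select-permission parameters.
data SelPerm = SelPerm
  { spLimit    :: Maybe Int  -- ^ Nothing means "no limit"
  , spAllowAgg :: Bool
  }

combineSelPerm :: SelPerm -> SelPerm -> SelPerm
combineSelPerm a b = SelPerm
  { spLimit = case (spLimit a, spLimit b) of
      (Just x, Just y) -> Just (min x y)  -- minimum of the limits
      (x, y)           -> x <|> y         -- min with "no limit" is the other limit
  , spAllowAgg = spAllowAgg a || spAllowAgg b  -- any role allowing aggregation wins
  }
```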
APIs for inherited roles:
----------------------
1. `add_inherited_role`
`add_inherited_role` is the [metadata API](https://hasura.io/docs/1.0/graphql/core/api-reference/index.html#schema-metadata-api) to create a new inherited role. It accepts two arguments:
`role_name`: the name of the inherited role to be added (String)
`role_set`: list of roles that need to be combined (Array of Strings)
Example:
```json
{
  "type": "add_inherited_role",
  "args": {
    "role_name": "combined_user",
    "role_set": [
      "user",
      "user1"
    ]
  }
}
```
After adding the inherited role, it can be used just like any singular role.
Note:
An inherited role can only be created with non-inherited/singular roles.
2. `drop_inherited_role`
The `drop_inherited_role` API accepts the name of the inherited role and drops it from the metadata. It accepts a single argument:
`role_name`: name of the inherited role to be dropped
Example:
```json
{
  "type": "drop_inherited_role",
  "args": {
    "role_name": "combined_user"
  }
}
```
Metadata
---------
The derived roles metadata will be included under the `experimental_features` key while exporting the metadata.
```json
{
  "experimental_features": {
    "derived_roles": [
      {
        "role_name": "manager_is_employee_too",
        "role_set": [
          "employee",
          "manager"
        ]
      }
    ]
  }
}
```
Scope
------
Only postgres queries and subscriptions are supported in this PR.
Important points:
-----------------
1. All columns exposed to an inherited role will be marked as `nullable`; this is done so that cell-value nullification can be performed.
TODOs
-------
- [ ] Tests
- [ ] Test a GraphQL query running with an inherited role without enabling inherited roles in experimental features
- [ ] Tests for aggregate queries, limit, computed fields, functions, subscriptions (?)
- [ ] Introspection test with an inherited role (nullability changes in an inherited role)
- [ ] Docs
- [ ] Changelog
Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
GitOrigin-RevId: 3b8ee1e11f5ceca80fe294f8c074d42fbccfec63
- [x] **Event Triggers Metrics**
- [x] Distribution of size of event trigger fetches / Number of events fetched in the last `event trigger fetch`
- [x] Event Triggers: Number of event trigger HTTP workers in process
- [x] Event Triggers: Avg event trigger lock time (if an event has been fetched but not processed because an HTTP worker is not free)
#### Sample response
The metrics can be viewed from the `/dev/ekg` endpoint
```json
{
  "num_events_fetched": {
    "max": 0,
    "mean": 0,
    "count": 1,
    "min": 0,
    "variance": null,
    "type": "d",
    "sum": 0
  },
  "num_event_trigger_http_workers": {
    "type": "g",
    "val": 0
  },
  "event_lock_time": {
    "max": 0,
    "mean": 0,
    "count": 0,
    "min": 0,
    "variance": 0,
    "type": "d",
    "sum": 0
  },
```
#### Todo
- [ ] Group similar metrics together (Eg: Group all the metrics related to Event trigger, How do we do it??)
Closes: https://github.com/hasura/graphql-engine-mono/issues/202
GitOrigin-RevId: bada11d871272b04c8a09d006d9d037a8464a472
Added a note on existentials. I plan to create a subsequent PR with a note on how we use the singletons trick to recover the type inside an existential.
GitOrigin-RevId: 1f227d859dcc384b4ac7e103053f643f879827d1
When adding a rest endpoint that references a query with multiple definitions, throw an error.
Needed a second PR for this as Kodiak merged the prior PR before this commit was added.
---
### Kodiak commit message
Adding check for multiple query definitions for rest endpoints.
GitOrigin-RevId: 6e3977a210e29f143b61282fbb37c93eb5b9d73c
Resolves Issues:
* https://github.com/hasura/graphql-engine-mono/issues/658 - Invalid Slashes
* https://github.com/hasura/graphql-engine-mono/issues/628 - Subscriptions
Implementation:
* Moved some logic from Endpoint.hs to allow reuse of splitting a URL into PathSegments.
* Additional validation steps alongside checking for overlapping routes
* Logging potential misuse of GET for mutations
Future Work:
* [ ] GET is allowed for mutations (ignore/log warning for now)
* [ ] Add to scInconsistentObjs rather than throwing error
* Add information to scInconsistentObjs instead of raising errors directly.
TODO:
* [x] Duplicate variable segments with the same name in the location should not be allowed
* [x] We should throw an error on trailing and leading slashes and URLs which contain empty segments
* [x] Endpoints can be created using subscriptions. But the error only shows at the time of the query
* [x] Tests
---
### Kodiak commit message
Prohibit Invalid slashes, duplicate variables, subscriptions for REST endpoints.
GitOrigin-RevId: 86c0d4af97984c8afd02699e6071e9c1658710b8
Provides queries for fetching and inserting metadata into the database that do not assume there is a `resource_version` column. This means that they will work when migrating to/from older versions.
Co-authored-by: Lyndon Maydwell <92299+sordina@users.noreply.github.com>
GitOrigin-RevId: dac636d530524082c5a13ae0f016a2d4ced16f7f
Add optimistic concurrency control to the `replace_metadata` call.
Prevents users from submitting out-of-date metadata to metadata-mutating APIs.
See https://github.com/hasura/graphql-engine-mono/issues/472 for details.
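Conceptually this is a compare-and-swap on the metadata's `resource_version`; a minimal sketch with hypothetical types (the real check runs transactionally against the metadata database):
```haskell
type Metadata = String  -- stand-in for the real metadata structure

data Store = Store
  { storeVersion  :: Int
  , storeMetadata :: Metadata
  }

replaceMetadata :: Int -> Metadata -> Store -> Either String Store
replaceMetadata expected newMd store
  | storeVersion store /= expected =
      Left "resource version mismatch: metadata was changed concurrently"
  | otherwise =
      Right store { storeVersion = expected + 1, storeMetadata = newMd }
```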
GitOrigin-RevId: 5f220f347a3eba288a9098b01e9913ffd7e38166
* server: use a leaky bucket algorithm for bytes-per-second cache rate limiting (see the sketch after this list)
* Use evalsha properly
* Adds redis cache limit parameters to PoliciesConfig
* Loads Leaky Bucket Script On Server Start
* Adds more redis logging and moves cache update into lua script
* reverts setex in lua and adds notes
* Refactors cacheStore and adds max TTL and cache size limits
* Filter session vars in cache key
* WIP
* parens
* cache-clear-hander POC implementation
* cache-clear-hander POC implementation
* Pro projectId used as cache key
* POC working!
* prefixing query-response keys in redis
* Add cacheClearer to RedisScripts
* Partial implementation of cacheClearer from scripts record
* updating tests
* [automated] stylish-haskell commit
* Adds query lookup with metrics script
* Adds missing module and lua script from last commit
* Changes redis script module structure to match cache clearing branch
* minor change to lua script
* cleaning up cache clearing
* generalising JsonLog
* [automated] stylish-haskell commit
* Draft Cache Metrics Endpoint
* Adds Cache Metrics Handler
* Adds hook handler module
* Missed HandlerHook module in last commit
* glob
* Fixes redis mget bug
* Removes cache totals and changes dashes to colons in metric cache keys
* Adds query param to clear clear endpoint for deleting specific keys
* Adds query param to clear clear endpoint for deleting specific keys
* Cache Metrics on query families rather than queries
* Replace Set with nub
* Base16 Redis Hashes
* Query Family Redis Keys With Roles
* response headers for cache keys
* fixing bug in family key by excluding operation name; using hash for response header instead of entire key
* Adds query family to redis cache keys and cache clear endpoint
* Fixes queryfamily hash bug
* Moves cache endpoints to /pro
* Moved cache clear to POST
* Refactors cache clear function
* Fixes query family format bug
* Adds query cache tests and optional --redis-url flag to python test suite
* Adds session variable cache test
* Update pro changelog
* adding documentation for additional caching features
* more docs
* clearing up units of leaky bucket params
* Adds comments to leaky bucket script
* removes old todo
* Fixes session variable filtering to work with new query rootfield
* more advanced defaulting behaviour for bucket rate and capacity.
* Updates Docs
* Moves Role into QueryFamily hash
* Use Aeson for Cache Clear endpoint response
* Moves trace to bracket the leaky bucket script
* Misc review tweaks
* Adds sum type for cache clear query params
* Hardcodes RedisReplyLog log level
* Update docs/graphql/cloud/response-caching.rst
Co-authored-by: Phil Freeman <phil@hasura.io>
* new prose for rate limiting docs
* [automated] stylish-haskell commit
* make rootToSessVarPreds total
* [automated] stylish-haskell commit
* Fixes out of scope error
* Renamed _acRedis to _acCacheStore
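For reference, the leaky-bucket idea behind the bytes-per-second limit, as a minimal sketch (the PR implements this as a Redis Lua script; all names here are hypothetical):
```haskell
import Data.Time.Clock (UTCTime, diffUTCTime)

-- Bucket state: current fill level in bytes, and the last update time.
data Bucket = Bucket
  { bucketLevel :: Double
  , bucketSeen  :: UTCTime
  }

-- | Drain the bucket at @rate@ bytes/second since the last request, then try
-- to add @n@ bytes. 'Nothing' means the request would overflow @capacity@
-- and should be rate-limited.
tryAddBytes :: Double -> Double -> UTCTime -> Double -> Bucket -> Maybe Bucket
tryAddBytes rate capacity now n (Bucket level seen) =
  let elapsed = realToFrac (diffUTCTime now seen)
      drained = max 0 (level - rate * elapsed)
  in if drained + n > capacity
       then Nothing
       else Just (Bucket (drained + n) now)
```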
Co-authored-by: Solomon Bothwell <ssbothwell@gmail.com>
Co-authored-by: Lyndon Maydwell <lyndon@sordina.net>
Co-authored-by: David Overton <david@hasura.io>
Co-authored-by: Stylish Haskell Bot <stylish-haskell@users.noreply.github.com>
Co-authored-by: Lyndon Maydwell <lyndon@hasura.io>
GitOrigin-RevId: dda5c1a3f902967b3d78310f950541a55fabb1b0
* Stop shutdown handler retaining the whole serveCtx
This might look like quite a strange way to write the function but it's
the only way I could get GHC to not capture `serveCtx` in the shutdown
handler.
Fixes the metadata issue in #344
* Force argumentNames
The arguments list is often empty so we end up with a lot of duplicate
thunks if this value is not forced.
* Increase sharing in nullableType and nonNullableType
The previous definitions would lead to increased allocation as they would
destroy any previously created sharing. The new definitions only allocate
a fresh constructor if the value is changed.
* Add memoization for field parsers
It was observed in #344 that many parsers were not being memoised, which
led to an increase in memory usage. This patch generalises memoisation so
that it works for FieldParsers as well as normal Parsers.
There can still be substantial improvement made by also memoising
InputFieldParsers but that is left for future work.
Co-authored-by: Antoine Leblanc <antoine@hasura.io>
* [automated] stylish-haskell commit
* changelog
Co-authored-by: Phil Freeman <paf31@cantab.net>
Co-authored-by: Antoine Leblanc <antoine@hasura.io>
Co-authored-by: Stylish Haskell Bot <stylish-haskell@users.noreply.github.com>
Co-authored-by: Phil Freeman <phil@hasura.io>
GitOrigin-RevId: 36255f77a47cf283ea61df9d6a4f9138d4e5834c
The metadata storage implementation for graphql-engine-multitenant.
- It uses a centralized PG database to store metadata of all tenants (instead of per tenant database)
- Similarly, it uses a single schema-sync listener thread per MT worker (instead of listener thread per tenant) (PS: although, the processor thread is spawned per tenant)
- 2 new flags are introduced - `--metadataDatabaseUrl` and (optional) `--metadataDatabaseRetries`
Internally, a "metadata mode" is introduced to indicate an external/managed store vs a store managed by each pro-server.
To run :
- obtain the schema file (located at `pro/server/res/cloud/metadata_db_schema.sql`)
- apply the schema on a PG database
- set the `--metadataDatabaseUrl` flag to point to the above database
- run the MT executable
The schema (and its migrations) for the metadata db is managed outside the MT worker.
### New metadata
The following is the new portion of `Metadata` added :
```yaml
version: 3
metrics_config:
  analyze_query_variables: true
  analyze_response_body: false
api_limits:
  disabled: false
  depth_limit:
    global: 5
    per_role:
      user: 7
      editor: 9
  rate_limit:
    per_role:
      user:
        unique_params:
          - x-hasura-user-id
          - x-hasura-team-id
        max_reqs_per_min: 20
    global:
      unique_params: IP
      max_reqs_per_min: 10
```
- In Pro, the code around fetching/updating/syncing pro-config is removed
- That also means `hdb_pro_catalog` for keeping the config cache is not required; hence, `hdb_pro_catalog` is also removed
- The required config comes from metadata / schema cache
### New Metadata APIs
- `set_api_limits`
- `remove_api_limits`
- `set_metrics_config`
- `remove_metrics_config`
#### `set_api_limits`
```yaml
type: set_api_limits
args:
  disabled: false
  depth_limit:
    global: 5
    per_role:
      user: 7
      editor: 9
  rate_limit:
    per_role:
      anonymous:
        max_reqs_per_min: 10
        unique_params: "ip"
      editor:
        max_reqs_per_min: 30
        unique_params:
          - x-hasura-user-id
      user:
        unique_params:
          - x-hasura-user-id
          - x-hasura-team-id
        max_reqs_per_min: 20
    global:
      unique_params: IP
      max_reqs_per_min: 10
```
#### `remove_api_limits`
```yaml
type: remove_api_limits
args: {}
```
#### `set_metrics_config`
```yaml
type: set_metrics_config
args:
  analyze_query_variables: true
  analyze_response_body: false
```
#### `remove_metrics_config`
```yaml
type: remove_metrics_config
args: {}
```
#### TODO
- [x] on-prem pro implementation for `MonadMetadataStorage`
- [x] move the project config from Lux to pro metadata (PR: #379)
- [ ] console changes for pro config/api limits, subscription workers (cc @soorajshankar @beerose)
- [x] address other minor TODOs
- [x] TxIso for `MonadSourceResolver`
- [x] enable EKG connection pool metrics
- [x] add logging of connection info when sources are added?
- [x] confirm if the `buildReason` for schema cache is correct
- [ ] testing
- [x] 1.3 -> 1.4 cloud migration script (#465; PR: #508)
- [x] one-time migration of existing metadata from users' db to centralized PG
- [x] one-time migration of pro project config + api limits + regression tests from metrics API to metadata
- [ ] integrate with infra team (WIP - cc @hgiasac)
- [x] benchmark with 1000+ tenants + each tenant making read/update metadata query every second (PR: https://github.com/hasura/graphql-engine-mono/pull/411)
- [ ] benchmark with few tenants having large metadata (100+ tables etc.)
- [ ] when user moves regions (https://github.com/hasura/lux/issues/1717)
- [ ] metadata has to be migrated from one regional PG to another
- [ ] migrate metrics data as well ?
- [ ] operation logs
- [ ] regression test runs
- [ ] find a way to share the schema files with the infra team
Co-authored-by: Naveen Naidu <30195193+Naveenaidu@users.noreply.github.com>
GitOrigin-RevId: 39e8361f2c0e96e0f9e8f8fb45e6cc14857f31f1
### Description
This PR adds two new GitHub actions: one in the OSS repo, one in the monorepo. The OSS action runs `stylish-haskell` on all files touched by the PR, and displays a warning for each file that was in need of formatting. The monorepo action does the same thing, with a twist: if the branch is not a shadow copy of an OSS branch, we assume that it is a local branch, and simply push a new commit with the changes.
Furthermore, this PR upgrades our stylish-haskell config to add record formatting, as close as possible to our styleguide.
Both actions use the standard stylish-haskell, not our modified fork.
### Known limitation
The monorepo action does not handle forks: pushing to the branch will fail, and checking the branch out might fail too. This is probably acceptable since we don't use forks with the monorepo, but it wouldn't be hard to handle that gracefully.
GitOrigin-RevId: 814138c5b9826098e2e4ea192778fc0d93fbe390
fixes https://github.com/hasura/graphql-engine/issues/6449
A while back we added [support for customizing JWT claims](https://github.com/hasura/graphql-engine/pull/3575), which made it possible to map a session variable to any value within the unregistered claims, but as reported in #6449, users aren't able to map the `x-hasura-user-id` session variable to the `sub` standard JWT claim.
This PR fixes the above issue by allowing mapping session variables to standard JWT claims as well.
GitOrigin-RevId: d3e63d7580adac55eb212e0a1ecf7c33f5b3ac4b
This PR generalizes a bunch of metadata structures.
Most importantly, it changes `SourceCache` to hold existentially quantified values:
```haskell
data BackendSourceInfo =
  forall b. Backend b => BackendSourceInfo (SourceInfo b)

type SourceCache = HashMap SourceName BackendSourceInfo
```
This changes a *lot* of things throughout the code. For now, all code using the schema cache explicitly casts sources to Postgres, meaning that if any non-Postgres `SourceInfo` makes it to the cache, it'll be ignored.
That means that after this PR is submitted, we can split work between two different aspects:
- creating `SourceInfo` for other backends
- handling those other sources down the line
GitOrigin-RevId: fb9ea00f32e840fc33c5467896fb1dfa5283ab42
### Description
Our Prelude provides the very convenient `hoistMaybe :: Maybe b -> MaybeT m b`. This PR adds hlint rules to replace uses of `MaybeT $ pure $ x` with the cleaner `hoistMaybe x`, and rules to specifically replace `MaybeT $ pure Nothing` with `empty`.
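As a small illustration of the rewrite these rules perform (the lookup function is hypothetical; `hoistMaybe` is inlined so the example is self-contained):
```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Maybe (MaybeT (..))
import qualified Data.Map.Strict as Map

-- Provided by the Hasura Prelude; repeated here for self-containedness.
hoistMaybe :: Applicative m => Maybe b -> MaybeT m b
hoistMaybe = MaybeT . pure

lookupUser :: Monad m => m (Map.Map Int String) -> Int -> MaybeT m String
lookupUser getUsers uid = do
  users <- lift getUsers
  -- before the rule: MaybeT $ pure $ Map.lookup uid users
  hoistMaybe (Map.lookup uid users)
```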
GitOrigin-RevId: 7254f4954e34e4d7ca972dc7c12073d3ab8cb0b8
Fixes https://github.com/hasura/graphql-engine/issues/5704 by checking, for aggregate fields, whether we are handling a numeric aggregation.
This PR also adds type information to `ColFld` such that we know the type of the field.
This is the second attempt. See #319 for a less invasive approach. @nicuveo suggested type information might be useful, and since it wasn't hard to add, I think this version is better as well.
GitOrigin-RevId: aa6a259fd5debe9466df6302839ddbbd0ea659b5
Earlier (pre catalog separation), the remote schema permissions were in `/v1/query`. This PR moves them to `/v1/metadata`.
GitOrigin-RevId: cb39d9df4cc2288f67231504e3a7909f2f8df4da
fixes https://github.com/hasura/graphql-engine/issues/2109
This PR accepts a new config `allowed_skew` in the JWT config to provide for some leeway while comparing the JWT expiry time.
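The comparison itself is simple; a sketch of the leeway check with hypothetical names:
```haskell
import Data.Time.Clock (NominalDiffTime, UTCTime, addUTCTime)

-- A JWT is only treated as expired once the current time is past the expiry
-- plus the configured skew.
jwtExpired :: NominalDiffTime -> UTCTime -> UTCTime -> Bool
jwtExpired allowedSkew expiresAt now = now > addUTCTime allowedSkew expiresAt
```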
GitOrigin-RevId: ef50cf77d8e2780478685096ed13794b5c4c9de4
Fixes https://github.com/hasura/graphql-engine/issues/6385
In the v1.3.4-beta.2 version, the SQL generated for a query action containing relationships and configured with permissions parsed the `x-hasura-user-id` session variable value through the `hasura.user` postgres setting instead of passing the session variables as a JSON object to the query, as it was in v1.3.3.
This PR fixes the SQL generation to that of v1.3.3
Co-authored-by: Rakesh Emmadi <12475069+rakeshkky@users.noreply.github.com>
GitOrigin-RevId: 838ba812f89b51df7fcead81b9d3c2885dfa39b4
This PR is a combination of the following other PRs:
- #169: move HasHttpManager out of RQL.Types
- #170: move UserInfoM to Hasura.Session
- #179: delete dead code from RQL.Types
- #180: move event related code to EventTrigger
GitOrigin-RevId: d97608d7945f2c7a0a37e307369983653eb62eb1
This PR was migrated from https://github.com/hasura/graphql-engine/pull/6171
---
### Description
A trivial change to make `dev.sh` use -Werror. This is what our CI uses, and it's annoying to have a PR rejected because of a warning that was missed at dev time. Since this is only for `dev.sh`, it won't impact most people's normal workflow.
Co-authored-by: Antoine Leblanc <1618949+nicuveo@users.noreply.github.com>
GitOrigin-RevId: 37c2f088c37326c244533a003bf9b8169448abc8
### Description
This PR updates the graphql schema to be backend-agnostic. To do so, it also moves the definition of operators to `BackendSchema`, to be specified differently per backend.
GitOrigin-RevId: 35b9d2d1bff93fb68b872d6ab0d3d12ec12c1b93
Earlier, while creating the event trigger's internal postgres trigger, we used to get the name of the table from the `TG_TABLE_NAME` special trigger variable. Using this with normal tables works fine, but it breaks when the parent table is partitioned because we associate the ET configuration in the schema only with the original table (as it should be).
In this PR, we supply the table name and schema name through template variables instead of using `TG_TABLE_NAME` and `TG_TABLE_SCHEMA`, so that event triggers work with a partitioned table as well.
TODO:
- [x] Changelog
- [x] unit test (ET on partition table)
GitOrigin-RevId: 556376881a85525300dcf64da0611ee9ad387eb0
This is an incremental PR towards https://github.com/hasura/graphql-engine/pull/5797
Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com>
GitOrigin-RevId: a6cb8c239b2ff840a0095e78845f682af0e588a9
* server: fix handling of empty array values in relationships of set_custom_types
* add a test for dropping the relationship
GitOrigin-RevId: abff138b0021647a1cb6c6c041c2ba53c1ff9b77
* Remove unused ExitCode constructors
* Simplify shutdown logic
* Update server/src-lib/Hasura/App.hs
Co-authored-by: Brandon Simmons <brandon@hasura.io>
* WIP: fix zombie thread issue
* Use forkCodensity for the schema sync thread
* Use forkCodensity for the oauthTokenUpdateWorker
* Use forkCodensity for the schema update processor thread
* Add deprecation notice
* Logger threads use Codensity
* Add the MonadFix instance for Codensity to get log-sender thread logs
* Move ourIdleGC out to the top level, WIP
* Update forkImmortal function for more logging info
* add back the idle GC to Pro
* setupAuth
* use ImmortalThreadLog
* Fix tests
* Add another finally block
* loud warnings
* Change log level
* hlint
* Finalize the logger in the correct place
* Add ManagedT
* Update server/src-lib/Hasura/Server/Auth.hs
Co-authored-by: Brandon Simmons <brandon@hasura.io>
* Comments etc.
Co-authored-by: Brandon Simmons <brandon@hasura.io>
Co-authored-by: Naveen Naidu <naveennaidu479@gmail.com>
GitOrigin-RevId: 156065c5c3ace0e13d1997daef6921cc2e9f641c
Generalize TableCoreInfoRM, TableCoreCacheRT, some table metadata data types, generalize fromPGCol to fromCol, generalize some schema cache functions, prepare some enum schema cache code for generalization
GitOrigin-RevId: a65112bc1688e00fd707d27af087cb2585961da2
An incremental PR towards https://github.com/hasura/graphql-engine/pull/5797
- Expands `MonadMetadataStorage` with operations related to async actions and setting/updating metadata
GitOrigin-RevId: 53386b7b2d007e162050b826d0708897f0b4c8f6
Since PDV, introspection queries are parsed by a certain kind of reflection where during the GraphQL schema generation, we collect all GraphQL types used during schema generation to generate answers to introspection queries. This has a great advantage, namely that we don't need to keep track of which types are being used in our schema, as this information is extracted after the fact.
But what happens when we encounter two types with the same name in the GraphQL schema? Well, they better be the same, otherwise we likely made a programming error. So what do we do when we *do* encounter a conflict? So far, we've been throwing a rather generic error message, namely `found conflicting definitions for <typename> when collecting types from the schema`. It does not specify what the conflict is, or how it arose. In fact, I'm a little bit hesitant to output more information about what the conflict is, because we support many different kinds of GraphQL types, and these can have disagreements in many different ways. It'd be a bit tiring (not to mention error-prone) to spell this out explicitly for all types. And, in any case, at the moment our equality checks for types are incorrect anyway, as we are avoiding implementing a certain recursive equality checking algorithm.
As it turns out, type conflicts arise not just due to programming errors, but also arise naturally under certain configurations. @codingkarthik encountered an interesting case recently where adding a specific remote and a single unrelated database table would result in a conflict in our Relay schema. It was not readily visible how this conflict arose: this took significant engineering effort.
This adds stack traces to type collection, so that we can inform the user where the type conflict is taking place. The origin of the above conflict can easily be spotted using this PR. Here's a sample error message:
```
Found conflicting definitions for "PageInfo". The definition at "mutation_root.UpdateUser.favourites.anime.edges.node.characters.pageInfo" differs from the the definition at "query_root.test_connection.pageInfo"
```
Co-authored-by: Antoine Leblanc <antoine@hasura.io>
GitOrigin-RevId: d4c01c243022d8570b3c057b168a61c3033244ff
* Add update_by_pk test with more interesting permissions.
* Add new delete test with more interesting permissions.
* Fix byPk operations not providing all columns.
* Empty commit to unlock ci.
GitOrigin-RevId: eb1ca2cc4c59f421bc5fedf49bef91aa4bbdca94
This PR makes a bunch of schema generation code in Hasura.GraphQL.Schema backend-agnostic, by moving the backend-specific parts into a new BackendSchema type class. This way, the schema generation code can be reused for other backends, simply by implementing new instances of the BackendSchema type class.
This work is now in a state where the schema generators are sufficiently generic to accept the implementation of a new backend. That means that we can start exposing MS SQL schema. Execution is not implemented yet, of course.
The branch currently does not support computed fields or Relay. This is, in a sense, intentional: computed field support is normally baked into the schema generation (through the fieldSelection schema generator), and so this branch shows a programming technique that allows us to expose certain GraphQL schema depending on backend support. We can write support for computed fields and Relay at a later stage.
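The rough shape of the class, heavily simplified and with illustrative members only (the real class is much larger):
```haskell
{-# LANGUAGE TypeFamilies #-}

import Data.Kind (Type)
import Data.Proxy (Proxy)

-- Generic schema generators call these methods; each backend supplies its
-- own instance. The members shown here are hypothetical.
class BackendSchema (b :: Type) where
  -- An associated (injective) data family: each backend declares its scalars.
  data ScalarType b :: Type
  scalarTypeName      :: ScalarType b -> String
  comparisonOperators :: Proxy b -> [String]  -- e.g. "_eq", "_gt", per backend
```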
Co-authored-by: Antoine Leblanc <antoine@hasura.io>
GitOrigin-RevId: df369fc3d189cbda1b931d31678e9450a6601314
* Use local graphql-engine, WIP
* Implement cacheLookup and cacheStore
* Fix the key generation code, deduplicate
* changelog
* More deduplication
* Run tests from this repo
* remove changelog
Co-authored-by: Anon Ray <rayanon004@gmail.com>
GitOrigin-RevId: 80d10c4dfdcadbe9f49ec50fe67aa981202c98c9
Accept new server flag --websocket-keepalive to control
websockets keep-alive interval
Co-authored-by: Auke Booij <auke@hasura.io>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* use only required session variables in multiplexed queries for subscriptions
This will reduce the load on Postgres when the result of a subscription
is not dependent on the session variables of the request
* add DerivingVia to the project wide extension list
* expose a more specific function to filter session variables
* improve documentation of session variables of a cohort
Co-Authored-By: Alexis King <lexi.lambda@gmail.com>
* fix bad rebase
* add test for checking only required session variables are used to make query
Co-authored-by: Alexis King <lexi.lambda@gmail.com>
Co-authored-by: Karthikeyan Chinnakonda <karthikeyan@hasura.io>
* add support for joining to remote interface and union fields
* add test for making a remote relationship with a Union type
* add CHANGELOG
* remove the pretty-simple library
* remove unused imports from Remote.hs
* Apply suggestions from code review
Co-authored-by: Brandon Simmons <brandon.m.simmons@gmail.com>
* support remote relationships against enum fields as well
* update CHANGELOG
Co-authored-by: Brandon Simmons <brandon.m.simmons@gmail.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
This issue was very tricky to track down, but fortunately easy to fix.
The interaction here is subtle enough that it’s difficult to put into
English what would go wrong in what circumstances, but the new unit test
captures precisely that interaction to ensure it remains fixed.
Add a backend type extension parameter to some RQL types, following the ideas of the paper "Trees that grow" (Najd & Jones 2016)
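The "Trees that grow" pattern in miniature (hypothetical names and instances):
```haskell
{-# LANGUAGE TypeFamilies #-}

import Data.Kind (Type)

data Postgres
data MSSQL

-- Each backend picks the concrete type at this extension point.
type family XColumnType (b :: Type) :: Type
type instance XColumnType Postgres = String  -- stand-in for a PG type name
type instance XColumnType MSSQL    = Int     -- stand-in for an MSSQL type id

-- The IR node is indexed by the backend and "grows" accordingly.
data ColumnInfo b = ColumnInfo
  { ciName :: String
  , ciType :: XColumnType b
  }
```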
Co-authored-by: Antoine Leblanc <antoine@hasura.io>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* [skip ci] use the args while making the fieldParser
* modify the execution part of the remote queries
* parse union queries deeply
* add test for remote schema field validation
* add tests for validating remote query arguments
Co-authored-by: Auke Booij <auke@hasura.io>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* improve jsonpath parser to accept special characters, and add property tests for the same
* make the JWTClaimsMapValueG parametrizable
* add documentation in the JWT file
* modify processAuthZHeader
Co-authored-by: Karthikeyan Chinnakonda <karthikeyan@hasura.io>
Co-authored-by: Marion Schleifer <marion@hasura.io>
Aka “the PDV refactor.” History is preserved on the branch 2801-graphql-schema-parser-refactor.
* [skip ci] remove stale benchmark commit from commit_diff
* [skip ci] Check for root field name conflicts between remotes
* [skip ci] Additionally check for conflicts between remotes and DB
* [skip ci] Check for conflicts in schema when tracking a table
* [skip ci] Fix equality checking in GraphQL AST
* server: fix mishandling of GeoJSON inputs in subscriptions (fix #3239) (#4551)
* Add support for multiple top-level fields in a subscription to improve testability of subscriptions
* Add an internal flag to enable multiple subscriptions
* Add missing call to withConstructorFn in live queries (fix #3239)
Co-authored-by: Alexis King <lexi.lambda@gmail.com>
* Scheduled triggers (close #1914) (#3553)
server: add scheduled triggers
Co-authored-by: Alexis King <lexi.lambda@gmail.com>
Co-authored-by: Marion Schleifer <marion@hasura.io>
Co-authored-by: Karthikeyan Chinnakonda <karthikeyan@hasura.io>
Co-authored-by: Aleksandra Sikora <ola.zxcvbnm@gmail.com>
* dev.sh: bump version due to addition of croniter python dependency
* server: fix an introspection query caching issue (fix #4547) (#4661)
Introspection queries accept variables, but we need to make sure to
also touch the variables that we ignore, so that an introspection
query is marked not reusable if we are not able to build a correct
query plan for it.
A better solution here would be to deal with such unused variables
correctly, so that more introspection queries become reusable.
An even better solution would be to type-safely track *how* to reuse
which variables, rather than to split the reusage marking from the
planning.
Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
* flush log buffer on exception in mkWaiApp (fix #4772) (#4801)
* flush log buffer on exception in mkWaiApp
* add comment to explain the introduced change
* add changelog
* allow logging details of a live query polling thread (#4959)
* changes for poller-log
add various multiplexed query info in poller-log
* minor cleanup, also fixes a bug which will return duplicate data
* Live query poller stats can now be logged
This also removes in-memory stats that are collected about batched
query execution, as the log lines, when piped into a monitoring tool,
will give us better insights.
* allow poller-log to be configurable
* log minimal information in the livequery-poller-log
Other information can be retrieved from /dev/subscriptions/extended
* fix few review comments
* avoid marshalling and unmarshalling from ByteString to EncJSON
* separate out SubscriberId and SubscriberMetadata
Co-authored-by: Anon Ray <rayanon004@gmail.com>
* Don't compile in developer APIs by default
* Tighten up handling of admin secret, more docs
Store the admin secret only as a hash to prevent leaking the secret
inadvertently, and to prevent timing attacks on the secret.
NOTE: best practice for stored user passwords is a function with a
tunable cost like bcrypt, but our threat model is quite different (even
if we thought we could reasonably protect the secret from an attacker
who could read arbitrary regions of memory), and bcrypt is far too slow
(by design) to perform on each request. We'd have to rely on our
(technically savvy) users to choose high entropy passwords in any case.
Referencing #4736
* server/docs: add instructions to fix loss of float precision in PostgreSQL <= 11 (#5187)
This adds a server flag, --pg-connection-options, that can be used to set a PostgreSQL connection parameter, extra_float_digits, that needs to be used to avoid loss of data on older versions of PostgreSQL, which have odd default behavior when returning float values. (fixes #5092)
* [skip ci] Add new commits from master to the commit diff
* [skip ci] serve default directives (skip & include) over introspection
* [skip ci] Update non-Haskell assets with the version on master
* server: refactor GQL execution check and config API (#5094)
Co-authored-by: Vamshi Surabhi <vamshi@hasura.io>
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* [skip ci] fix js issues in tests by pinning dependencies version
* [skip ci] bump graphql version
* [skip ci] Add note about memory usage
* generalize query execution logic on Postgres (#5110)
* generalize PGExecCtx to support specialized functions for various operations
* fix tests compilation
* allow customising PGExecCtx when starting the web server
* server: changes catalog initialization and logging for pro customization (#5139)
* new typeclass to abstract the logic of QueryLog-ing
* abstract the logic of logging websocket-server logs
introduce a MonadWSLog typeclass
* move catalog initialization to init step
expose a helper function to migrate catalog
create schema cache in initialiseCtx
* expose various modules and functions for pro
* [skip ci] cosmetic change
* [skip ci] fix test calling a mutation that does not exist
* [skip ci] minor text change
* [skip ci] refactored input values
* [skip ci] remove VString Origin
* server: fix updating of headers behaviour in the update cron trigger API and create future events immediately (#5151)
* server: fix bug to update headers in an existing cron trigger and create future events
Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
* Lower stack chunk size in RTS to reduce thread STACK memory (closes #5190)
This reduces memory consumption for new idle subscriptions significantly
(see linked ticket).
The hypothesis is: we fork a lot of threads per websocket, and some of
these use slightly more than the initial 1K stack size, so the first
overflow balloons to 32K, when significantly less is required.
However: running with `+RTS -K1K -xc` did not seem to show evidence of
any overflows! So it's a mystery why this improves things.
GHC should probably also be doubling the stack buffer at each overflow
or doing something even smarter; the knobs we have aren't so helpful.
* [skip ci] fix todo and schema generation for aggregate fields
* 5087 libpq pool leak (#5089)
Shrink libpq buffers to 1MB before returning connection to pool. Closes #5087
See: https://github.com/hasura/pg-client-hs/pull/19
Also related: #3388 #4077
* bump pg-client-hs version (fixes a build issue on some environments) (#5267)
* do not use prepared statements for mutations
* server: unlock scheduled events on graceful shutdown (#4928)
* Fix buggy parsing of new --conn-lifetime flag in 2b0e3774
* [skip ci] remove cherry-picked commit from commit_diff.txt
* server: include additional fields in scheduled trigger webhook payload (#5262)
* include scheduled triggers metadata in the webhook body
Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
* server: call the webhook asynchronously in event triggers (#5352)
* server: call the webhook asynchronously in event triggers
* Expose all modules in Cabal file (#5371)
* [skip ci] update commit_diff.txt
* [skip ci] fix cast exp parser & few TODOs
* [skip ci] fix remote fields arguments
* [skip ci] fix few more TODO, no-op refactor, move resolve/action.hs to execute/action.hs
* Pass environment variables around as a data structure, via @sordina (#5374)
* Pass environment variables around as a data structure, via @sordina
* Resolving build error
* Adding Environment passing note to changelog
* Removing references to ILTPollerLog as this seems to have been reintroduced from a bad merge
* removing commented-out imports
* Language pragmas already set by project
* Linking async thread
* Apply suggestions from code review
Use `runQueryTx` instead of `runLazyTx` for queries.
* remove the non-user facing entry in the changelog
Co-authored-by: Phil Freeman <paf31@cantab.net>
Co-authored-by: Phil Freeman <phil@hasura.io>
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* [skip ci] fix: restrict remote relationship field generation for hasura queries
* [skip ci] no-op refactor; move insert execution code from schema parser module
* [skip ci] implement header checking
Probably closes #14 and #3659.
* server: refactor 'pollQuery' to have a hook to process 'PollDetails' (#5391)
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* update pg-client (#5421)
* [skip ci] update commit_diff
* Fix latency buckets for telemetry data
These must have gotten messed up during a refactor. As a consequence
almost all samples received so far fall into the single erroneous 0 to
1K seconds (originally supposed to be 1ms?) bucket.
I also re-thought what the numbers should be, but these are still
arbitrary and might want adjusting in the future.
* [skip ci] include the latest commit compared against master in commit_diff
* [skip ci] include new commits from master in commit_diff
* [skip ci] improve description generation
* [skip ci] sort all introspect arrays
* [skip ci] allow parsers to specify error codes
* [skip ci] fix integer and float parsing error code
* [skip ci] scalar from json errors are now parse errors
* [skip ci] fixed negative integer error message and code
* [skip ci] Re-fix nullability in relationships
* [skip ci] no-op refactor and removed couple of FIXMEs
* [skip ci] uncomment code in 'deleteMetadataObject'
* [skip ci] Fix re-fix of nullability for relationships
* [skip ci] fix default arguments error code
* [skip ci] updated test error message
!!! WARNING !!!
Since all fields accept `null`, they all are technically optional in
the new schema. Meaning there's no such thing as a missing mandatory
field anymore: a field that doesn't have a default value, and which
therefore isn't labelled as "optional" in the schema, will be assumed
to be null if it's missing, meaning it isn't possible anymore to have
an error for a missing mandatory field. The only possible error is now
when an optional positional argument is omitted but is not the last
positional argument.
* [skip ci] cleanup of int scalar parser
* [skip ci] retro-compatibility of offset as string
* [skip ci] Remove commit from commit_diff.txt
Although strictly speaking we don't know if this will work correctly in PDV
if we were to implement query plan caching, the fact is that in the theoretical
case that we had the same issue in PDV, it would probably apply not just
to introspection, and the fix would be written completely differently. So this
old commit is of no value to us other than the heads-up "make sure query plan
caching works correctly even in the presence of unused variables", which is
already part of the test suite.
* Add MonadTrace and MonadExecuteQuery abstractions (#5383)
* [skip ci] Fix accumulation of input object types
Just like object types, interface types, and union types, we have to avoid
circularities when collecting input types from the GraphQL AST.
Additionally, this fixes equality checks for input object types (whose fields
are unordered, and hence should be compared as sets) and enum types (ditto).
* [skip ci] fix fragment error path
* [skip ci] fix node error code
* [skip ci] fix paths in insert queries
* [skip ci] fix path in objects
* [skip ci] manually alter node id path for consistency
* [skip ci] more node error fixups
* [skip ci] one last relay error message fix
* [skip ci] update commit_diff
* Propagate the trace context to event triggers (#5409)
* Propagate the trace context to event triggers
* Handle missing trace and span IDs
* Store trace context as one LOCAL
* Add migrations
* Documentation
* changelog
* Fix warnings
* Respond to code review suggestions
* Respond to code review
* Undo changelog
* Update CHANGELOG.md
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* server: log request/response sizes for event triggers (#5463)
* server: log request/response sizes for event triggers
event triggers (and scheduled triggers) now have request/response size
in their logs.
* add changelog entry
* Tracing: Simplify HTTP traced request (#5451)
Remove the Inversion of Control (SuspendRequest) and simplify
the tracing of HTTP Requests.
Co-authored-by: Phil Freeman <phil@hasura.io>
* Attach request ID as tracing metadata (#5456)
* Propagate the trace context to event triggers
* Handle missing trace and span IDs
* Store trace context as one LOCAL
* Add migrations
* Documentation
* Include the request ID as trace metadata
* changelog
* Fix warnings
* Respond to code review suggestions
* Respond to code review
* Undo changelog
* Update CHANGELOG.md
* Typo
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* server: add logging for action handlers (#5471)
* server: add logging for action handlers
* add changelog entry
* change action-handler log type from internal to non-internal
* fix action-handler-log name
* server: pass http and websocket request to logging context (#5470)
* pass request body to logging context in all cases
* add message size logging on the websocket API
this is required by graphql-engine-pro/#416
* message size logging on websocket API
As we need to log all messages received/sent by the websocket server,
it makes sense to log them as part of the websocket server event logs.
Previously, messages received were logged inside the onMessage handler,
and messages sent were logged only for "data" messages (as a server event log).
* fix review comments
Co-authored-by: Phil Freeman <phil@hasura.io>
* server: stop eventing subsystem threads when shutting down (#5479)
* server: stop eventing subsystem threads when shutting down
* Apply suggestions from code review
Co-authored-by: Karthikeyan Chinnakonda <chkarthikeyan95@gmail.com>
Co-authored-by: Phil Freeman <phil@hasura.io>
Co-authored-by: Phil Freeman <paf31@cantab.net>
Co-authored-by: Karthikeyan Chinnakonda <chkarthikeyan95@gmail.com>
* [skip ci] update commit_diff with new commits added in master
* Bugfix to support 0-size HASURA_GRAPHQL_QUERY_PLAN_CACHE_SIZE
Also some minor refactoring of bounded cache module:
- the maxBound check in `trim` was confusing and unnecessary
- consequently trim was unnecessary for lookupPure
Also add some basic tests
* Support only the bounded cache, with default HASURA_GRAPHQL_QUERY_PLAN_CACHE_SIZE of 4000. Closes #5363
* [skip ci] remove merge commit from commit_diff
* server: Fix compiler warning caused by GHC upgrade (#5489)
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* [skip ci] update all non server code from master
* [skip ci] aligned object field error message with master
* [skip ci] fix remaining undefined?
* [skip ci] remove unused import
* [skip ci] revert to previous error message, fix tests
* Move nullableType/nonNullableType to Schema.hs
These are functions on Types, not on Parsers.
* [skip ci] fix setup to fix backend only test
the order in which permission checks are performed on the branch is
slightly different than on master, resulting in a slightly different
error if there are no other mutations the user has access to. By
adding update permissions, we go back to the expected case.
* [skip ci] fix insert geojson tests to reflect new paths
* [skip ci] fix enum test for better error message
* [skip ci] fix header test for better error message
* [skip ci] fix fragment cycle test for better error message
* [skip ci] fix error message for type mismatch
* [skip ci] fix variable path in test
* [skip ci] adjust tests after bug fix
* [skip ci] more tests fixing
* Add hdb_catalog.current_setting abstraction for reading Hasura settings
As the comment in the function’s definition explains, this is needed to
work around an awkward Postgres behavior.
* [skip ci] Update CONTRIBUTING.md to mention Node setup for Python tests
* [skip ci] Add missing Python tests env var to CONTRIBUTING.md
* [skip ci] fix order of result when subscription is run with multiple nodes
* [skip ci] no-op refactor: fix a warning in Internal/Parser.hs
* [skip ci] throw error when a subscription contains remote joins
* [skip ci] Enable easier profiling by hiding AssertNF behind a flag
In order to compile a profiling build, run:
$ cabal new-build -f profiling --enable-profiling
* [skip ci] Fix two warnings
We used to look up the objects that implement a given interface by filtering all
objects in the schema document. However, one of the tests expects us to
generate a warning if the provided `implements` field of an introspection query
specifies an object not implementing some interface. So we use that field
instead.
* [skip ci] Fix warnings by commenting out query plan caching
* [skip ci] improve masking/commenting query caching related code & few warning fixes
* [skip ci] Fixed compiler warnings in graphql-parser-hs
* Sync non-Haskell assets with master
* [skip ci] add a test inserting invalid GraphQL but valid JSON value in a jsonb column
* [skip ci] Avoid converting to/from Map
* [skip ci] Apply some hlint suggestions
* [skip ci] remove redundant constraints from buildLiveQueryPlan and explainGQLQuery
* [skip ci] add NOTEs about missing Tracing constraints in PDV from master
* Remove -fdefer-typed-holes, fix warnings
* Update cabal.project.freeze
* Limit GHC’s heap size to 8GB in CI to avoid the OOM killer
* Commit package-lock.json for Python tests’ remote schema server
* restrict env variables starting with HASURA_GRAPHQL_ for headers configuration in actions, event triggers & remote schemas (#5519)
* restrict env variables starting with HASURA_GRAPHQL_ for headers definition in actions & event triggers
* update CHANGELOG.md
* Apply suggestions from code review
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* add test for table_by_pk node when roles don't have permission to PK
* [skip ci] fix introspection query if any enum column present in primary key (fix #5200) (#5522)
* [skip ci] test case fix for a6450e126b
* [skip ci] add tests to agg queries when role doesn't have access to any cols
* fix backend test
* Simplify subscription execution
* [skip ci] add test to check if required headers are present while querying
* Suppose table B is related to table A, and certain headers are necessary
to query B; the test checks that we throw an error when the header
is not set and B is queried through A
* fix mutations not checking for view mutability
* [skip ci] add variable type checking and corresponding tests
* [skip ci] add test to check if update headers are present while doing an upsert
* [skip ci] add positive counterparts to some of the negative permission tests
* fix args missing their description in introspect
* [skip ci] Remove unused function; insert missing markNotReusable call
* [skip ci] Add a Note about InputValue
* [skip ci] Delete LegacySchema/ 🎉
* [skip ci] Delete GraphQL/{Resolve,Validate}/ 🎉
* [skip ci] Delete top-level Resolve/Validate modules; tidy .cabal file
* [skip ci] Delete LegacySchema top-level module
Somehow I missed this one.
* fix input value to json
* [skip ci] elaborate on JSON objects in GraphQL
* [skip ci] add missing file
* [skip ci] add a test with subscription containing remote joins
* add a test with remote joins in mutation output
* [skip ci] Add some comments to Schema/Mutation.hs
* [skip ci] Remove no longer needed code from RemoteServer.hs
* [skip ci] Use a helper function to generate conflict clause parsers
* [skip ci] fix type checker error in fields with default value
* capitalize the header keys in select_articles_without_required_headers
* Somehow, this was the reason the tests were failing. I have no idea why!
* [skip ci] Add a long Note about optional fields and nullability
* Improve comments a bit; simplify Schema/Common.hs a bit
* [skip ci] full implementation of 5.8.5 type checking.
* [skip ci] fix validation test teardown
* [skip ci] fix schema stitching test
* fix remote schema ignoring enum nullability
* [skip ci] fix fieldOptional to not discard nullability
* revert nullability of use_spheroid
* fix comment
* add required remote fields with arguments for tests
* [skip ci] add missing docstrings
* [skip ci] fixed description of remote fields
* [skip ci] change docstring for consistency
* fix several schema inconsistencies
* revert behaviour change in function arguments parsing
* fix remaining nullability issues in new schema
* minor no-op refactor; use isListType from graphql-parser-hs
* use nullability of remote schema node, while creating a Remote reln
* fix 'ID' input coercing & action 'ID' type relationship mapping
* include ASTs in MonadExecuteQuery
* needed for PRO code-base
* Delete code for "interfaces implementing ifaces" (draft GraphQL spec)
Previously I started writing some code that adds support for a future GraphQL
feature where interfaces may themselves be sub-types of other interfaces.
However, this code was incomplete, and partially incorrect. So this commit
deletes support for that entirely.
* Ignore a remote schema test during the upgrade/downgrade test
The PDV refactor does a better job at exposing a minimal set of types through
introspection. In particular, not every type that is present in a remote schema
is re-exposed by Hasura. The test
test_schema_stitching.py::TestRemoteSchemaBasic::test_introspection assumed that
all types were re-exposed, which is not required for GraphQL compatibility, in
order to test some aspect of our support for remote schemas.
So while this particular test has been updated on PDV, the PDV branch now does
not pass the old test, which we argue to be incorrect. Hence this test is
disabled while we await a release, after which we can re-enable it.
This also re-enables a test that was previously disabled for similar, though
unrelated, reasons.
* add haddock documentation to the action's field parsers
* Deselecting some tests in server-upgrade
Some tests with the current build are failing on server upgrade
when they should not. The response is more accurate than
what it was.
Also, the upgrade tests were not throwing errors when a test
expected to return an error succeeds instead. The test framework is
patched to catch this case.
* [skip ci] Add a long Note about interfaces and object types
* send the response headers back to client after running a query
* Deselect a few more tests during upgrade/downgrade test
* Update commit_diff.txt
* change log kind from db_migrate to catalog_migrate (#5531)
* Show method and complete URI in traced HTTP calls (#5525)
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* restrict env variables starting with HASURA_GRAPHQL_ for headers configuration in actions, event triggers & remote schemas (#5519)
* restrict env variables starting with HASURA_GRAPHQL_ for headers definition in actions & event triggers
* update CHANGELOG.md
* Apply suggestions from code review
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* fix introspection query if any enum column is present in the primary key (fix #5200) (#5522)
* Fix telemetry reporting of transport (websocket was reported as http)
* add log kinds in cli-migrations image (#5529)
* add log kinds in cli-migrations image
* give hint to resolve timeout error
* minor changes and CHANGELOG
* server: set hasura.tracecontext in RQL mutations [#5542] (#5555)
* server: set hasura.tracecontext in RQL mutations [#5542]
* Update test suite
Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
* Add bulldozer auto-merge and -update configuration
We still need to add the GitHub app (as of the time of opening this PR)
Afterwards devs should be able to allow bulldozer to automatically
"update" the branch, merging in parent when it changes, as well as
automatically merge when all checks pass.
This is opt-in by adding the `auto-update-auto-merge` label to the PR.
* Remove 'bulldozer' config, try 'kodiak' for auto-merge
see: https://github.com/chdsbd/kodiak
The main issue that bit us was not being able to auto update forked
branches, also:
https://github.com/palantir/bulldozer/issues/66
https://github.com/palantir/bulldozer/issues/145
* Cherry-picked all commits
* [skip ci] Slightly improve formatting
* Revert "fix introspection query if any enum column present in primary key (fix#5200) (#5522)"
This reverts commit 0f9a5afa59.
This undoes a cherry-pick of 34288e1eb5 that was
already done previously in a6450e126b, and
subsequently fixed for PDV in 70e89dc250
* Do a small bit of tidying in Hasura.GraphQL.Parser.Collect
* Fix cherry-picking work
Some previous cherry-picks ended up modifying code that is commented out
* [skip ci] clarified comment regarding insert representation
* [skip ci] removed obsolete todos
* cosmetic change
* fix action error message
* [skip ci] remove obsolete comment
* [skip ci] synchronize stylish haskell extensions list
* use previously defined scalar names in parsers rather than ad-hoc literals
* Apply most syntax hlint hints.
* Clarify comment on update mutation.
* [skip ci] Clarify what fields should be specified for objects
* Update "_inc" description.
* Use record types rather than tuples for IntrospectionResult and ParsedIntrospection
* Get rid of checkFieldNamesUnique (use Data.List.Extended.duplicates)
* Throw more errors when collecting query root names
* [skip ci] clean column parser comment
* Remove dead code inserted in ab65b39
* avoid converting to non-empty list where not needed
* add note and TODO about the disabled checks in PDV
* minor refactor in remoteField' function
* Unify two getObject methods
* Nitpicks in Remote.hs
* Update CHANGELOG.md
* Revert "Unify two getObject methods"
This reverts commit bd6bb40355.
We do need two different getObject functions as the corresponding error message is different
* Fix error message in Remote.hs
* Update CHANGELOG.md
Co-authored-by: Auke Booij <auke@tulcod.com>
* Apply suggested Changelog fix.
Co-authored-by: Auke Booij <auke@tulcod.com>
* Fix typo in Changelog.
* [skip ci] Update changelog.
* reuse type names to avoid duplication
* Fix Hashable instance for Definition
The presence of `Maybe Unique`, and an optional description, as part of
`Definition`s, means that `Definition`s that are considered `Eq`ual may get
different hashes. This can happen, for instance, when one object is memoized
but another is not.
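A minimal sketch of the shape of the fix (field and type names here are illustrative, not the real parser's): hash only the fields that participate in equality, so `Eq`-equal `Definition`s always hash alike.

```haskell
{-# LANGUAGE RecordWildCards #-}
module DefinitionHash where

import Data.Hashable (Hashable (..))
import Data.Text (Text)

newtype Unique = Unique Int deriving (Eq)

data Definition a = Definition
  { dName        :: Text
  , dUnique      :: Maybe Unique -- memoization artifact; ignored by Eq
  , dDescription :: Maybe Text   -- optional; may differ between Eq-equal values
  , dInfo        :: a
  }

-- Eq deliberately ignores dUnique and dDescription ...
instance Eq a => Eq (Definition a) where
  a == b = dName a == dName b && dInfo a == dInfo b

-- ... so the hash must ignore them too, or Eq-equal values can hash
-- differently (e.g. when one object is memoized but another is not).
instance Hashable a => Hashable (Definition a) where
  hashWithSalt salt Definition {..} =
    salt `hashWithSalt` dName `hashWithSalt` dInfo
```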
* [skip ci] Update commit_diff.txt
* Bump parser version.
* Bump freeze file after changes in parser.
* [skip ci] Incorporate commits from master
* Fix developer flag in server/cabal.project.freeze
Co-authored-by: Auke Booij <auke@tulcod.com>
* Deselect a changed ENUM test for upgrade/downgrade CI
* Deselect test here as well
* [skip ci] remove dead code
* Disable more tests for upgrade/downgrade
* Fix which test gets deselected
* Revert "Add hdb_catalog.current_setting abstraction for reading Hasura settings"
This reverts commit 66e85ab9fb.
* Remove circular reference in cabal.project.freeze
Co-authored-by: Karthikeyan Chinnakonda <karthikeyan@hasura.io>
Co-authored-by: Auke Booij <auke@hasura.io>
Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
Co-authored-by: Marion Schleifer <marion@hasura.io>
Co-authored-by: Aleksandra Sikora <ola.zxcvbnm@gmail.com>
Co-authored-by: Brandon Simmons <brandon.m.simmons@gmail.com>
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
Co-authored-by: Anon Ray <rayanon004@gmail.com>
Co-authored-by: rakeshkky <12475069+rakeshkky@users.noreply.github.com>
Co-authored-by: Anon Ray <ecthiender@users.noreply.github.com>
Co-authored-by: Vamshi Surabhi <vamshi@hasura.io>
Co-authored-by: Antoine Leblanc <antoine@hasura.io>
Co-authored-by: Brandon Simmons <brandon@hasura.io>
Co-authored-by: Phil Freeman <phil@hasura.io>
Co-authored-by: Lyndon Maydwell <lyndon@sordina.net>
Co-authored-by: Phil Freeman <paf31@cantab.net>
Co-authored-by: Naveen Naidu <naveennaidu479@gmail.com>
Co-authored-by: Karthikeyan Chinnakonda <chkarthikeyan95@gmail.com>
Co-authored-by: Nizar Malangadan <nizar-m@users.noreply.github.com>
Co-authored-by: Antoine Leblanc <crucuny@gmail.com>
Co-authored-by: Auke Booij <auke@tulcod.com>
Said commit had a lot of whitespace, formatting, and trivial
refactorings. We should be mindful that these have a real cost in terms
of review time and potential for bugs to be introduced.
A weeder pass in CI would have caught this.
Use B3 propagation for tracing headers
The tracing headers we use now (X-Hasura-) are not
standard; though they help us trace requests within our
systems, this will break or simply not work when we
use other trace recorders. Hence it's better to refactor
the code to allow using at least B3 headers for now.
* Refactor injectHTTPContext
Co-authored-by: Phil Freeman <phil@hasura.io>
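For illustration, a hedged sketch of what injecting B3 headers into an outgoing http-client request could look like (the TraceContext shape here is an assumption, not the engine's real type):

```haskell
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
module B3 where

import Data.ByteString (ByteString)
import Network.HTTP.Client (Request, requestHeaders)

-- Hypothetical trace context carried through the engine.
data TraceContext = TraceContext
  { tcTraceId :: ByteString
  , tcSpanId  :: ByteString
  }

-- Inject standard B3 headers instead of bespoke X-Hasura-* ones, so that
-- downstream services using other trace recorders (e.g. Zipkin) interoperate.
injectB3 :: TraceContext -> Request -> Request
injectB3 TraceContext {..} req =
  req { requestHeaders =
          ("X-B3-TraceId", tcTraceId)
        : ("X-B3-SpanId", tcSpanId)
        : requestHeaders req
      }
```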
* Add ekg-core to build-executable .cabal
* Move creation of EKG Store to Main.hs
This helps to share metrics between pro and OSS and
helps surface the metrics from OSS in Datadog via
Pro.
Co-authored-by: Phil Freeman <phil@hasura.io>
* pass request body to logging context in all cases
* add message size logging on the websocket API
this is required by graphql-engine-pro/#416
* message size logging on websocket API
As we need to log all messages received/sent by the websocket server,
it makes sense to log them as part of the websocket server event logs.
Previously, messages received were logged inside the onMessage handler,
and messages sent were logged only for "data" messages (as a server event log)
* fix review comments
Co-authored-by: Phil Freeman <phil@hasura.io>
* server: add logging for action handlers
* add changelog entry
* change action-handler log type from internal to non-internal
* fix action-handler-log name
* Propagate the trace context to event triggers
* Handle missing trace and span IDs
* Store trace context as one LOCAL
* Add migrations
* Documentation
* Include the request ID as trace metadata
* changelog
* Fix warnings
* Respond to code review suggestions
* Respond to code review
* Undo changelog
* Update CHANGELOG.md
* Typo
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* server: log request/response sizes for event triggers
event triggers (and scheduled triggers) now have request/response size
in their logs.
* add changelog entry
Also some minor refactoring of bounded cache module:
- the maxBound check in `trim` was confusing and unnecessary
- consequently trim was unnecessary for lookupPure
Also add some basic tests
These must have gotten messed up during a refactor. As a consequence
almost all samples received so far fall into the single erroneous 0 to
1K seconds (originally supposed to be 1ms?) bucket.
I also re-thought what the numbers should be, but these are still
arbitrary and might want adjusting in the future.
* Pass environment variables around as a data structure, via @sordina
* Resolving build error
* Adding Environment passing note to changelog
* Removing references to ILTPollerLog as this seems to have been reintroduced from a bad merge
* removing commented-out imports
* Language pragmas already set by project
* Linking async thread
* Apply suggestions from code review
Use `runQueryTx` instead of `runLazyTx` for queries.
* remove the non-user facing entry in the changelog
Co-authored-by: Phil Freeman <paf31@cantab.net>
Co-authored-by: Phil Freeman <phil@hasura.io>
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
The current idle GC settings seem never to cause idle GC to trigger.
The changes here at least help memory usage to look more reasonable when
running certain benchmarks, and speculatively could partially fix some
memory leaks users have reported.
See ourIdleGC for details.
Referencing canonical memory issue #3388
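A rough sketch of the idea behind a hand-rolled idle GC (ourIdleGC lives in the codebase; this approximation and its interval are assumptions, and getRTSStats requires running with +RTS -T):

```haskell
module IdleGC where

import Control.Concurrent (threadDelay)
import Control.Monad (when)
import GHC.Stats (getRTSStats, major_gcs)
import System.Mem (performMajorGC)

-- If no major GC happened during the interval, the process is idle:
-- force one ourselves so memory is actually collected and returned.
ourIdleGC :: IO ()
ourIdleGC = go 0
  where
    go lastMajors = do
      threadDelay 300000000 -- 5 minutes, in microseconds
      stats <- getRTSStats  -- needs the program to be run with +RTS -T
      let majors = major_gcs stats
      when (majors == lastMajors) performMajorGC
      go majors
```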
https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/runtime_control.html#rts-flag---disable-delayed-os-memory-return
Referencing canonical memory issue #3388
This is a bit of a mystery. It didn't seem to have any effect in early
repros we had. But now, running an introspection query benchmark I see:
Running 400 concurrent connections:
before this change: max residency ~450M
after: ~140M
No difference in latency was observed.
...BUT: if I give graphql-engine a warmup of 10 requests with 1
connection (i.e. no concurrency): I see both have a max residency of
~140M (i.e. the flag doesn't help)
...also interestingly: a single warmup request doesn't seem to have
any effect (ending RES is still high), 2 requests gets max RES down to
~180M.
I suspect many concurrent connections are spraying pinned data over a
bunch of blocks which are then not released to the OS barring memory
pressure. Whatever this is is maybe thread-local or "per-capability" in
some sense...
This adds a server flag, --pg-connection-options, that can be used to set a PostgreSQL connection parameter, extra_float_digits, which needs to be set to avoid loss of data on older versions of PostgreSQL, which have odd default behavior when returning float values. (fixes #5092)
This reduces memory consumption for new idle subscriptions significantly
(see linked ticket).
The hypothesis is: we fork a lot of threads per websocket, and some of
these use slightly more than the initial 1K stack size, so the first
overflow balloons to 32K, when significantly less is required.
However: running with `+RTS -K1K -xc` did not seem to show evidence of
any overflows! So it's a mystery why this improves things.
GHC should probably also be doubling the stack buffer at each overflow
or doing something even smarter; the knobs we have aren't so helpful.
The introspection query fails with a `type info not found for xxxx` error message if multiple actions are defined with reused PG scalars. This fixes that.
* Benchmark GraphQL queries using wrk
* fix console assets dir
* Store wrk parameters as well
* Add details about storing results in Readme
* Remove files in bench-wrk while computing server shasum
* Instead of just getting maximum throughput per query per version,
create plots using wrk2 for a given set of requests per second.
The maximum throughput is used to see what values of requests per second are feasible.
* Add id for version dropdown
* Allow specifying env and args for GraphQL Engine
1) Arguments defined after -- will be applied as arguments to Hasura GraphQL Engine
2) The script will also pass the environment variables to Hasura GraphQL Engine instances
Hasura GraphQL Engine can be run with the given environment variables and arguments as follows
$ export HASURA_GRAPHQL_...=....
$ python3 hge_wrk_bench.py -- --hge_arg1 val1 --hge_arg2 val2 ...
* Use matplotlib instead of plotly for figures
* Show throughput graph also.
It may be useful in checking for performance regressions across versions
* Support storing results in s3
Use --upload-root-uri 's3://bucket/path' to upload results inside the
given path. When specified, the results will be uploaded to the bucket,
including latencies, latency histogram, and the test setup info.
The s3 credentials should be provided as described in the AWS boto3 documentation.
* Allow specifying a name for the test scenario
* Fix open latency uri bug
* Update wrk docker image
* Keep ylim a little higher than maximum so that the throughput plot is clearly visible
* Show throughput plots for multiple queries at the same time
* 1) Adjust size of dropdowns
2) Make label for requests/sec invisible when plot type is throughput
* 1) Adding boto3 to requirements.txt
2) Removing CPU Key print line
3) Adding info about the tests that will be run with wrk2
* Docker builder for wrk-websocket-server
* Make it optional to setup remote graphql-engine
* Listen on all interfaces and enable ping thread
* Add bench_scripts to wrk-websocket-server docker
* Use 127.0.0.1 instead of 'localhost' to address local hge
For some reason it seems wrk was hanging trying to resolve 'localhost'.
ping worked fine from the same container, so I'm not sure what the
deal was. Probably some local misconfiguration on my machine, but maybe
this change will also help others.
* Store latency samples in subdirectory, server_shasum just once at start, additional docs
* Add a note on running the benchmarks in the simplest way
* Add a new section on how to run benchmarks on a new linux hosted instance
Co-authored-by: Nizar Malangadan <nizar-m@users.noreply.github.com>
Co-authored-by: Brandon Simmons <brandon.m.simmons@gmail.com>
Co-authored-by: Karthikeyan Chinnakonda <karthikeyan@hasura.io>
Co-authored-by: Brandon Simmons <brandon@hasura.io>
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* new typeclass to abstract the logic of QueryLog-ing
* abstract the logic of logging websocket-server logs
introduce a MonadWSLog typeclass
* move catalog initialization to init step
expose a helper function to migrate catalog
create schema cache in initialiseCtx
* expose various modules and functions for pro
* generalize PGExecCtx to support specialized functions for various operations
* fix tests compilation
* allow customising PGExecCtx when starting the web server
* fix relay introspection failing if any views exist, fix #5020
* reduce base64 encoded node id length, close #5037
* make node field type non-nullable in an edge
* more relay tests with permissions & complete restructure of test yaml files
Co-authored-by: Aravind <aravindkp@outlook.in>
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
The bulk of changes here is some shifting of code around and a little
parameterizing of functions for easier testing.
Also: comments, some renaming for clarity/less chance for misuse.
Store the admin secret only as a hash to prevent leaking the secret
inadvertently, and to prevent timing attacks on the secret.
NOTE: best practice for stored user passwords is a function with a
tunable cost like bcrypt, but our threat model is quite different (even
if we thought we could reasonably protect the secret from an attacker
who could read arbitrary regions of memory), and bcrypt is far too slow
(by design) to perform on each request. We'd have to rely on our
(technically savvy) users to choose high entropy passwords in any case.
Referencing #4736
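A hedged sketch of the approach using cryptonite (names are illustrative; the point is that only a digest is kept, and cryptonite's Digest comparison is constant-time):

```haskell
module AdminSecretHash where

import Crypto.Hash (Digest, hashWith)
import Crypto.Hash.Algorithms (SHA256 (..))
import Data.Text (Text)
import Data.Text.Encoding (encodeUtf8)

-- Store only the hash of the admin secret, never the plaintext.
newtype HashedAdminSecret = HashedAdminSecret (Digest SHA256)
  deriving (Eq)

hashAdminSecret :: Text -> HashedAdminSecret
hashAdminSecret = HashedAdminSecret . hashWith SHA256 . encodeUtf8

-- Hash the incoming secret and compare digests; an attacker who can read
-- memory learns only the hash, and the comparison leaks no useful timing.
checkAdminSecret :: HashedAdminSecret -> Text -> Bool
checkAdminSecret stored provided = stored == hashAdminSecret provided
```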
* validation support for unions and interfaces
* refactor SQL generation logic for improved readability
* '/v1/relay' endpoint for relay schema
* implement 'Node' interface and top level 'node' field resolver
* add relay toggle on graphiql
* fix explain api response & index plan id with query type
* add hasura mutations to relay
* add relay pytests
* update CHANGELOG.md
Co-authored-by: rakeshkky <12475069+rakeshkky@users.noreply.github.com>
Co-authored-by: Rishichandra Wawhal <rishi@hasura.io>
Co-authored-by: Rikin Kachhia <54616969+rikinsk@users.noreply.github.com>
* changes for poller-log
add various multiplexed query info in poller-log
* minor cleanup, also fixes a bug which will return duplicate data
* Live query poller stats can now be logged
This also removes in-memory stats that are collected about batched
query execution, as the log lines, when piped into a monitoring tool,
will give us better insights.
* allow poller-log to be configurable
* log minimal information in the livequery-poller-log
Other information can be retrieved from /dev/subscriptions/extended
* fix few review comments
* avoid marshalling and unmarshalling from ByteString to EncJSON
* separate out SubscriberId and SubscriberMetadata
Co-authored-by: Anon Ray <rayanon004@gmail.com>
When compiling the graphql-engine binary with `-O2`, ghc-8.10 seems to
be stuck at the module `Server.Init` while consuming `17G` of RAM (for 5
minutes at least before I forcefully terminated the compilation). With
this pragma, ghc-8.10 now takes under `12G` to compile the graphql-engine
binary.
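The message doesn't say which pragma; a plausible shape (an assumption, not a quote from the diff) is a module-level override that lowers optimization for just this module:

```haskell
-- Hypothetical: compile only this module without -O2, since ghc-8.10
-- otherwise consumes enormous amounts of RAM optimizing it.
{-# OPTIONS_GHC -O0 #-}
module Hasura.Server.Init where
```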
Introspection queries accept variables, but we need to make sure to
also touch the variables that we ignore, so that an introspection
query is marked not reusable if we are not able to build a correct
query plan for it.
A better solution here would be to deal with such unused variables
correctly, so that more introspection queries become reusable.
An even better solution would be to type-safely track *how* to reuse
which variables, rather than to split the reusage marking from the
planning.
Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
This also seems to squash a stubborn space leak we see with
subscriptions (linking to canonical #3388 for reference).
This may also fix some of the "Unexpected exception" websockets
exceptions we are now surfacing (see e.g. #4344)
Also: dev.sh: fix hpc reporting
Initial work on this done by Vamshi.
There are two implementations of a Cache, namely a bounded and an
unbounded variant. This can be elegantly captured in a type class.
In addition to reducing the amount of error-prone code in the
definition of the cache, this version reduces the amount of
error-prone code in usage sites of the cache, as it makes the cache
into an abstract object, so that a calling site cannot distinguish
between cache types. Any decision about what should be cached should
be made through the interface of a cache, rather than at the callsite,
and this is captured by this variant.
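A minimal sketch of the class (method names are assumptions): callers program against the interface and cannot tell a bounded cache from an unbounded one.

```haskell
module Cache where

import Data.Hashable (Hashable)
import qualified Data.HashMap.Strict as HM
import Data.IORef (IORef, atomicModifyIORef', readIORef)

-- The abstract interface; eviction policy is the instance's business.
class Cache c where
  lookupCache :: (Eq k, Hashable k) => k -> c k v -> IO (Maybe v)
  insertCache :: (Eq k, Hashable k) => k -> v -> c k v -> IO ()

-- Unbounded variant: a plain mutable hash map.
newtype UnboundedCache k v = UnboundedCache (IORef (HM.HashMap k v))

instance Cache UnboundedCache where
  lookupCache k (UnboundedCache ref) = HM.lookup k <$> readIORef ref
  insertCache k v (UnboundedCache ref) =
    atomicModifyIORef' ref (\m -> (HM.insert k v m, ()))

-- A bounded variant with an eviction policy would be a second instance
-- of the same class, indistinguishable to callers.
```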
* Add support for multiple top-level fields in a subscription to improve testability of subscriptions
* Add an internal flag to enable multiple subscriptions
* Add missing call to withConstructorFn in live queries (fix #3239)
Co-authored-by: Alexis King <lexi.lambda@gmail.com>
* Allow computed fields to have access to Hasura's session variables
* Inform about session args for computed fields in changelog and docs
* Add tests for session arguments for computed fields (and the respective errors)
Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
Co-authored-by: Marion Schleifer <marion@hasura.io>
Co-authored-by: Rakesh Emmadi <12475069+rakeshkky@users.noreply.github.com>
* move user info related code to Hasura.User module
* the RFC #4120 implementation; insert permissions with admin secret
* revert back to old RoleName based schema maps
An attempt was made to avoid duplication of schema contexts in types
when no role has any admin-secret-specific schema
* fix compile errors in haskell test
* keep 'user_vars' for session variables in http-logs
* no-op refacto
* tests for admin only inserts
* update docs for admin only inserts
* updated CHANGELOG.md
* default behaviour when admin secret is not set
* fix x-hasura-role to X-Hasura-Role in pytests
* introduce effective timeout in actions async tests
* update docs for admin-secret not configured case
* Update docs/graphql/manual/api-reference/schema-metadata-api/permission.rst
Co-Authored-By: Marion Schleifer <marion@hasura.io>
* Apply suggestions from code review
Co-Authored-By: Marion Schleifer <marion@hasura.io>
* a complete iteration
backend insert permissions accessible via the 'x-hasura-backend-privilege'
session variable
* console changes for backend-only permissions
* provide tooltip id; update labels and tooltips;
* requested changes
* requested changes
- remove className from Toggle component
- use appropriate function name (capitalizeFirstChar -> capitalize)
* use toggle props from definitelyTyped
* fix accidental commit
* Revert "introduce effective timeout in actions async tests"
This reverts commit b7a59c19d6.
* generate complete schema for both 'default' and 'backend' sessions
* Apply suggestions from code review
Co-Authored-By: Marion Schleifer <marion@hasura.io>
* remove unnecessary import, export Toggle as is
* update session variable in tooltip
* 'x-hasura-use-backend-only-permissions' variable to switch
* update help texts
* update docs
* update docs
* update console help text
* regenerate package-lock
* serve no backend schema when backend_only: false and header set to true
- A few type name refactors as suggested by @0x777
* update CHANGELOG.md
* Update CHANGELOG.md
* Update CHANGELOG.md
* fix a merge bug where a certain entity didn't get removed
Co-authored-by: Marion Schleifer <marion@hasura.io>
Co-authored-by: Rishichandra Wawhal <rishi@hasura.io>
Co-authored-by: rikinsk <rikin.kachhia@gmail.com>
Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
* config options for internal errors for non-admin role, close #4031
More detailed action debug info is added in the response 'internal' field
* add docs
* update CHANGELOG.md
* set admin graphql errors option in ci tests, minor changes to docs
* fix tests
Don't use any auth for sync actions error tests. The request body
changes based on auth type in session_variables (x-hasura-auth-mode)
* Apply suggestions from code review
Co-Authored-By: Marion Schleifer <marion@hasura.io>
* use a new sum type to represent the inclusion of internal errors
As suggested in review by @0x777
-> Move a few modules into a specific API folder
-> Separate types from Init.hs
* fix tests
Don't use any auth for sync actions error tests. The request body
changes based on auth type in session_variables (x-hasura-auth-mode)
* move 'HttpResponse' to 'Hasura.HTTP' module
* update change log with breaking change warning
* Update CHANGELOG.md
Co-authored-by: Marion Schleifer <marion@hasura.io>
Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
The previous check was too conservative and acquired a lock on the
schema cache in situations where it was unnecessary. This change
exposes the logic run_sql uses to determine whether to use the
metadata check to make the check more precise.
* Update graphql-parser-hs and hence use `Scientific` directly
The new version of graphql-parser-hs returns Scientific and Integer
rather than Double and Int32, respectively. So we now need to do less
work in graphql-engine, and we can process larger numbers.
In practice, this means that when inserting a bigint, we no longer
need to specify the inserted integer as text. This is also
represented in the updated tests.
* Generate int overflow error on insert
* Document bigint insertion support in changelog
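For instance, with Scientific in hand, a bounds-checked conversion can surface the overflow instead of silently truncating (a sketch; the error type is simplified here):

```haskell
module IntValue where

import Data.Int (Int64)
import Data.Scientific (Scientific, toBoundedInteger)

-- Convert a parsed GraphQL number to a bigint, reporting overflow
-- rather than wrapping around.
scientificToBigInt :: Scientific -> Either String Int64
scientificToBigInt s = case toBoundedInteger s of
  Just i  -> Right i
  Nothing -> Left ("integer value out of bounds: " <> show s)
```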
* add additional tests for testing claims_namespace_path in JWT tokens
- add tests for at root level and at a nested level
* modify the JWT tests
* combine the claims_namespace_path tests together in test-server.sh
* change the order of the claims_namespace_path tests
* change the order of the claims_namespace_path tests
* allow underscore prefix and special characters in json path
* server: Rewrite/refactor JSONPath parser
The JSONPath parser is also rewritten, the previous implementation
was written in a very explicitly “recursive descent” style, but the whole
point of using attoparsec is to be able to backtrack! Taking advantage
of the combinators makes for a much simpler parser.
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
Co-authored-by: Alexis King <lexi.lambda@gmail.com>
Co-authored-by: Aleksandra Sikora <ola.zxcvbnm@gmail.com>
Co-authored-by: Shahidh K Muhammed <shahidh@hasura.io>
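In that combinator-driven spirit, a hedged sketch of a tiny JSONPath parser on attoparsec (not the actual rewrite; the element syntax is simplified):

```haskell
{-# LANGUAGE OverloadedStrings #-}
module JSONPathParser where

import Control.Applicative (many, optional, (<|>))
import Data.Attoparsec.Text
import Data.Text (Text)
import qualified Data.Text as T

data JSONPathElement = Key Text | Index Int
  deriving (Show)

-- Backtracking lets alternatives be listed declaratively instead of
-- hand-rolling recursive descent.
parseJSONPath :: Text -> Either String [JSONPathElement]
parseJSONPath = parseOnly $ do
  _     <- optional (char '$')
  first <- optional bareKey -- a leading key may omit the dot
  rest  <- many element
  endOfInput
  pure (maybe rest (: rest) first)
 where
  bareKey = Key <$> takeWhile1 (notInClass ".[")
  element = (char '.' *> bareKey) <|> bracketed
  bracketed =
    char '[' *> (Index <$> decimal <|> Key <$> quoted) <* char ']'
  quoted = char '"' *> (T.pack <$> many (notChar '"')) <* char '"'
```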
* Allow `_inc` to update other numeric types in addition to integers
* Add support for PostgreSQL's `money` field type
* Add support for _inc on money types
* Add note of generalized `_inc` support to changelog
* add support for action queries
* a new parameter `type` is added in the ArgumentDefinition; its value
can be either `query` or `mutation`, and it defaults to the latter
* throw 400 when trying to explain a query action
* update the actions docs to include query actions
* refactor the ToJSON and ToOrdJSON of ActionDefinition
Co-authored-by: Rishichandra Wawhal <rishi@hasura.io>
Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
* add new optional field `claims_namespace_path` in JWT config
* return value when empty array is found in executeJSONPath
* update the docs related to claims_namespace_path
* improve encodeJSONPath, add property tests for parseJSONPath
* throw error if both claims_namespace_path and claims_namespace are set
* refactor the Data.Parser.JsonPath to Data.Parser.JSONPathSpec
* update the JWT docs
Co-Authored-By: Marion Schleifer <marion@hasura.io>
Co-authored-by: Marion Schleifer <marion@hasura.io>
Co-authored-by: rakeshkky <12475069+rakeshkky@users.noreply.github.com>
Co-authored-by: Tirumarai Selvan <tirumarai.selvan@gmail.com>
* allow re-using Postgres scalars in custom types, close#4125
* add pytest tests
* update CHANGELOG.md
* add a doc pointer for reusable postgres scalars
* document the code, improve the CHANGELOG entry
As suggested by @lexi-lambda
* a bit more source code documentation, use WriterT to collect reused scalars
* Apply suggestions from code review
Co-Authored-By: Marion Schleifer <marion@hasura.io>
* improve doc for Postgres scalars in custom graphql types
* Add some more references to Note; fix Haddock syntax
Also a few very minor tweaks:
* Use HashSet instead of [] more pervasively
* Export execWriterT from Hasura.Prelude
* Use pattern guards in multi-way if
* Tweak a few names/comments
* Pull buildActions out of buildAndCollectInfo, use buildInfoMap
* Tweak wording in documentation
* incorporate changes in console code
* account Postgres scalars for action input arguments
-> Avoid unnecessary 'throw500' in making action schema
* Review changes
Co-authored-by: Marion Schleifer <marion@hasura.io>
Co-authored-by: Alexis King <lexi.lambda@gmail.com>
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
Co-authored-by: Aleksandra Sikora <ola.zxcvbnm@gmail.com>
* unlock the locked-events during graceful shutdown
* Some events can still be delivered multiple times due to
ungraceful shutdown
* modify the prepareEvents documentation
* modify the prepareEvents doc
* update changelog
Co-authored-by: Tirumarai Selvan <tiru@hasura.io>
* type is not required for jwk_url
* remove type from JWTConfig
* Omit type field in JWTConfig serialization if jwk_url is provided
* remove type from jwk_url test suite
* add changelog
* fix docs with new format
Co-authored-by: Alexis King <lexi.lambda@gmail.com>
* Fix catalog version for v1.1.1
* Remove entries of removed tables from hdb_catalog
While downgrading the catalog version from 32 -> 31, not removing entries
in hdb_table and hdb_relationship for the tables that are removed in
the downgrade results in an inconsistent schema when the server with the
downgraded version is started. This should probably be handled in
a better fashion.
With the change in this commit, the server is able to successfully
start with the downgraded catalog version 31.
* Test downgrade command along with upgrade tests
* add expiry time to webhook user info
This also adds an optional message to webhook errors: if we fail to
parse an expiry time, we will log a warning with the parse error.
* refactored Auth
This change had one main goal: factor out the expiry time
extraction code shared between the JWT and WebHook parts of the
code. Furthermore, this change also moves all WebHook specific code to
its own module, similarly to what is done for JWT.
* Remove dependency on string-conversions in favor of text-conversions
string-conversions silently uses UTF8 instead of being explicit about
it, and it uses lenientDecode when decoding ByteStrings when it’s
usually better to reject invalid UTF8 input outright. text-conversions
solves both those problems.
Co-authored-by: Alexis King <lexi.lambda@gmail.com>
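A hedged sketch of what call sites look like afterwards (assuming text-conversions' UTF8 wrapper instances):

```haskell
module Utf8Explicit where

import Data.ByteString (ByteString)
import Data.Text (Text)
import Data.Text.Conversions (UTF8 (..), convertText, decodeConvertText)

-- The encoding is named in the type, and decoding is total: invalid
-- UTF-8 yields Nothing instead of silently substituting U+FFFD.
encodeText :: Text -> ByteString
encodeText = unUTF8 . convertText

decodeText :: ByteString -> Maybe Text
decodeText = decodeConvertText . UTF8
```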
* do not perform the metadata check in read-only mode
* improve the isAltrDropReplace regex
* quote the regex at compile-time to handle syntax errors statically
Co-authored-by: Alexis King <lexi.lambda@gmail.com>
* add new column "pg_version" while sending telemetry data
* make a new type for PGVersion and use serverVersion func
* define runTxIO action to run a transaction (which exits on error)
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
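The shape of runTxIO, sketched with simplified types (the real version presumably runs a pg-client-hs transaction; here the failing action is abstract):

```haskell
{-# LANGUAGE LambdaCase #-}
module RunTxIO where

import System.Exit (exitFailure)
import System.IO (hPutStrLn, stderr)

-- Run an action that can fail; on error, report and exit the process.
runTxIO :: Show e => IO (Either e a) -> IO a
runTxIO action = action >>= \case
  Left err -> hPutStrLn stderr (show err) >> exitFailure
  Right a  -> pure a
```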
* Improve performance of replace_metadata on tracking tables. Closes #3802
On the 1000 table track case this went from >20min to 8 seconds, the
bottleneck being all the former calls (for each table) to
buildSchemaCache.
* refactor replace metadata
Only save metadata in hdb_catalog schema and build schema cache (strictly)
* remove `withBuildSchemaCache` function
Co-authored-by: rakeshkky <12475069+rakeshkky@users.noreply.github.com>
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
- The metrics will include:
- number of synchronous actions
- number of asynchronous actions
- number of type relationships with the output
- number of custom types defined
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
The Stack Overflow answer this was copied from has a glaring problem: Python really dislikes a dictionary being modified while it is being iterated over. I rewrote the function to instead return a modified copy.
* server: add tests for track_table of a materialized view
In the context of #91, we discovered that materialized views were
already "automagically" supported; to ensure we don't regress on this
accidental but welcome change, this patch adds simple tests.
This is basically just a copy of `track_untrack_table`, but for
materialized views.
* Expand abbreviations
Co-authored-by: Alexis King <lexi.lambda@gmail.com>
The setup in several tests was using `ST_GeomFromText`, which expects
data in the OGC WKT format, but was providing the SRID in the text
itself, which is part of the EWKT format.
The fix was simply to replace all calls to `ST_GeomFromText` with
`ST_GeomFromEWKT`.
* add 'ID' to default scalars for custom types, fix #4061
* preserve cookie headers from sync action webhook, close #4021
* validate action webhook response to conform to output type, fix #3977
* fix tests, don't run actions' tests on PG version < 10
* update CHANGELOG.md
* no-op refactor, use types from http-network more
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
Writing to a mutable var is a particularly potent source of leaks since
it mostly defeats GHC's analysis. Here we add assertions to all mutable
writes, and fix a couple spots where we wrote some thunks to a mutable
var (compiled with -O2).
Some of these thunks were probably benign, but others looked like they
might be retaining big args. Didn't do much analysis, just fixed.
Actually pretty happy with how easy this was to use and as a diagnostic,
once I sorted out some issues. We should consider using it elsewhere,
and maybe extending it so that we can use it with tests, enable it when
`-fenable-assertions` is set, etc.
Relates #3388
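The underlying pattern, as a sketch: force values before they go into the mutable var, so the var can never retain a thunk's arguments.

```haskell
module StrictWrites where

import Control.DeepSeq (NFData, ($!!))
import Data.IORef (IORef, writeIORef)

-- Evaluate to WHNF before writing; enough when the outermost constructor
-- is what might otherwise be a closure over big arguments.
writeWHNF :: IORef a -> a -> IO ()
writeWHNF ref x = writeIORef ref $! x

-- Deep-force before writing, for values whose fields may hide thunks.
writeNF :: NFData a => IORef a -> a -> IO ()
writeNF ref x = writeIORef ref $!! x
```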
Also simplified codepaths that use `AcceptWith`, which has unnecessary
`Maybe` fields.
We’re lucky that this never bit us. For the most part, these rules
aren’t actually used; most code programs against ArrowCache and doesn’t
get specialized enough for these rules to fire.
Even if we did have code that could trigger this rule, the situations
where it would actually fire are slim. In order for the rule to
typecheck at all, both sides of the pair being passed through the arrow
must have exactly the same type. Of course, that would just make this
even more hellish to debug.
Rewrite rules are dangerous.
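For contrast, the classic well-behaved example of a rewrite rule; even here, GHC only fires the rule when the call site matches the left-hand side at exactly the written types, which is what made the arrow rule nearly impossible to trigger in practice:

```haskell
module RulesDemo where

-- Fuse two traversals into one; both sides of a rule must typecheck
-- identically, and the rule fires only on syntactic matches after inlining.
{-# RULES
"map/map" forall f g xs. map f (map g xs) = map (f . g) xs
  #-}
```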
* Test working through a backlog of change events
* Use a slightly more performant threaded http server in eventing pytests
This helped locally but not on CI it seems...
* Rework event processing for backpressure. Closes#3839
With too low a `HASURA_GRAPHQL_EVENTS_FETCH_INTERVAL` and/or slow webhooks
and/or too small a `HASURA_GRAPHQL_EVENTS_HTTP_POOL_SIZE`, we might
previously check out events from the DB faster than we can service them,
leading to space leaks, weirdness, etc.
Other changes:
- avoid fetch interval sleep latency when we previously did a non-empty
fetch
- prefetch event batch while http pool is working
- warn when it appears we can't keep up with events being generated
- make some effort to process events in creation order so we don't
starve older ones.
ALSO NOTE: HASURA_GRAPHQL_EVENTS_FETCH_INTERVAL changes semantics
slightly, since it only comes into play after an empty fetch. The old
semantics weren't documented in detail, so I think this is fine.
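A sketch of the new loop shape under those semantics (names are assumptions): sleep only after an empty fetch, so a backlog drains at full speed while an idle system polls at the configured interval.

```haskell
module EventLoop where

import Control.Concurrent (threadDelay)
import Control.Monad (forever)

eventLoop :: Int -> IO [event] -> (event -> IO ()) -> IO ()
eventLoop fetchIntervalUs fetchBatch processEvent = forever $ do
  events <- fetchBatch
  if null events
    then threadDelay fetchIntervalUs -- idle: wait before polling again
    else mapM_ processEvent events   -- backlog: fetch again immediately
         -- (the real version hands events to an HTTP worker pool and
         -- prefetches the next batch while the pool works)
```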
This is the result of a general audit of how we fork threads, with a
detour into how we're using mutable state especially in websocket
codepaths, making more robust to async exceptions and exceptions
resulting from bugs.
Some highlights:
- use a wrapper around 'immortal' so threads that die due to bugs are
restarted, and log the error
- use 'withAsync' some places
- use bracket a few places where we might break invariants
- log some codepaths that represent bugs
- export UnstructuredLog for ad hoc logging (the alternative is we
continue not logging useful stuff)
I had to timebox this. There are a few TODOs I didn't want to address.
And we'll wait until this is merged to attempt #3705 for
Control.Concurrent.Extended
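The core of the 'immortal' wrapper idea, sketched (the logger and label arguments are assumptions standing in for the real logging types):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
module Immortal where

import Control.Concurrent (ThreadId, forkIO)
import Control.Exception (SomeException, try)
import Control.Monad (forever)

-- A thread that dies due to a bug is logged and restarted instead of
-- silently disappearing.
forkImmortal :: String -> (String -> IO ()) -> IO () -> IO ThreadId
forkImmortal label logError action = forkIO $ forever $
  try action >>= \result -> case result of
    Left (e :: SomeException) ->
      logError (label <> " died unexpectedly: " <> show e)
    Right () ->
      logError (label <> " finished unexpectedly; restarting")
```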
* basic doc for actions
* custom_types, sync and async actions
* switch to graphql-parser-hs on github
* update docs
* metadata import/export
* webhook calls are now supported
* relationships in sync actions
* initialise.sql is now in sync with the migration file
* fix metadata tests
* allow specifying arguments of actions
* fix blacklist check on check_build_worthiness job
* track custom_types and actions related tables
* handlers are now triggered on async actions
* default to pgjson unless a field is involved in relationships, for generating definition list
* use 'true' for action filter for non admin role
* fix create_action_permission sql query
* drop permissions when dropping an action
* add a hdb_role view (and relationships) to fetch all roles in the system
* rename 'webhook' key in action definition to 'handler'
* allow templating actions' webhook URLs with env vars
* add 'update_action' /v1/query type
* allow forwarding client headers by setting `forward_client_headers` in action definition
* add 'headers' configuration in action definition
* handle webhook error response based on status codes
* support array relationships for custom types
* implement single row mutation, see https://github.com/hasura/graphql-engine/issues/3731
* single row mutation: rename 'pk_columns' -> 'columns' and no-op refactor
* use top level primary key inputs for delete_by_pk & account for select permissions in single row mutations
* use only REST semantics to resolve the webhook response
* use 'pk_columns' instead of 'columns' for update_by_pk input
* add python basic tests for single row mutations
* add action context (name) in webhook payload
* Async action response is accessible to non-admin roles only if
the request session variables equal the action's
* clean nulls, empty arrays for actions, custom types in export metadata
* async action mutation returns only the UUID of the action
* unit tests for URL template parser
* Basic sync actions python tests
* fix output in async query & add async tests
* add admin secret header in async actions python test
* document async action architecture in Resolve/Action.hs file
* support actions returning array of objects
* tests for list type response actions
* update docs with actions and custom types metadata API reference
* update actions python tests as per #f8e1330
Co-authored-by: Tirumarai Selvan <tirumarai.selvan@gmail.com>
Co-authored-by: Aravind Shankar <face11301@gmail.com>
Co-authored-by: Rakesh Emmadi <12475069+rakeshkky@users.noreply.github.com>
* run basic tests after upgrade
* terminate before specifying file in pytest cmd
* Move fixture definitions out of test classes
Previously we had abstract classes with the fixtures defined
in them. The test classes then inherited these superclasses. This
was creating inheritance problems, especially when you want to just
inherit the tests in a class, but not the fixtures. We have now moved
all those fixture definitions outside of the classes (in conftest.py).
These fixtures are now used by the test classes when and where they
are required.
* Run pytests on server upgrade
Server upgrade tests are run by
1) Run pytest with schema/metadata setup but do not do schema/metadata
teardown
2) Upgrade the server
3) Run pytest using the above schema and teardown at the end of the
tests
4) Cleanup hasura metadata and start again with next set of tests
We have added options --skip-schema-setup and --skip-schema-teardown to
help running server upgrade tests.
While running the tests, we noticed that error codes and messages for
some of the tests have changed. So we have added another option to
pytest, `--avoid-error-message-checks`. If this flag is set, and
comparing the expected and actual response messages fails, and the expected
response has an error message, pytest will throw warnings instead of an
error.
* Use marks to specify server-upgrade tests
Not all tests can be run as server upgrade tests, particularly those
which themselves change the schema. We introduce two pytest markers.
Marker allow_server_upgrade_test will add the test into the list of
server upgrade tests that can be run. skip_server_upgrade_test
removes it from the list.
With this we have added tests for queries, mutations, and selected
event trigger and remote schema tests to the list of server upgrade
tests.
* Remove components not needed anymore
* Install curl
* Fix error in query validation
* Fix error in test_v1_queries.py
* install procps for server upgrade tests
* Use postgres image which has postgis installed
* set pager off with psql
* quote the bash variable WORKTREE_DIR
Co-authored-by: nizar-m <19857260+nizar-m@users.noreply.github.com>
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* Add check expression to update permissions (close #384)
* wip on conflict behavior
* Handle upserts for views properly
* Use insert check if there is no update check
* Fix the test
* Improve error message slightly
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* Add downgrade command
* Add docs per @lexi-lambda's suggestions
* make tests pass
* Update hdb_version once, from Haskell
* more work based on feedback
* Improve the usage message
* Small docs changes
* Test downgrades exist for each tag
* Update downgrading.rst
* Use git-log to find tags which are ancestors of the current commit
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* fix nested insert returning computed fields giving an error, fix #3609
* revert using ordered hashmaps, sort columns based on ordinal position
* fix 1. keys order 2. json/jsonb column value in nested insert returning
* add a note for sorted columns
* cast 'VALUES' expression as table row type
* use single CTE expression for generating returning for nested inserts
We upload a set of accumulating timers and counters to track service
time for different types of operations, across several dimensions (e.g.
did we hit the plan cache, was a remote involved, etc.)
Also...
Standardize on DiffTime as a standard duration type, and try to use it
consistently.
See discussion here:
https://github.com/hasura/graphql-engine/pull/3584#pullrequestreview-340679369
It should be possible to override that module so the new threadDelay
sticks, per the pattern in #3705; blocked on #3558
Rename the Control.Concurrent.Extended.threadDelay to `sleep` since a
naive use with a literal argument would be very bad!
We catch a bug in 'computeTimeDiff'.
Add convenient 'Read' instances to the time unit utility types. Make
'Second' a newtype to support this.
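The signature makes the unit explicit, e.g. (a sketch, not the exact definition):

```haskell
module Sleep where

import Control.Concurrent (threadDelay)
import Data.Time.Clock (DiffTime, diffTimeToPicoseconds, secondsToDiffTime)

-- A self-describing duration: 'sleep (secondsToDiffTime 5)' cannot be
-- misread, whereas 'threadDelay 5' silently means 5 *microseconds*.
sleep :: DiffTime -> IO ()
sleep = threadDelay . fromInteger . (`div` 1000000) . diffTimeToPicoseconds
```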
This fixes #3759. Also, while we’re at it, improve the way
invalidations are synced across instances so enums and remote schemas
are appropriately reloaded by the schema syncing process.
* WIP: Remove hdb_views for inserts
* Show failing row in check constraint error
* Revert "Show failing row in check constraint error"
This reverts commit dd2cac29d0.
* Use the better query plan
* Simplify things
* fix cli test
* Update downgrading.rst
* remove 1.1 asset for cli
- Move MonadBase/MonadBaseControl instances for TxE into pg-client-hs
- Set the -qn2 RTS option by default to limit the parallel GC to 2
threads
- Remove eventlog instrumentation
- Don’t rebuild the schema cache again after running a query that needs
it to be rebuilt, since we do that explicitly now.
- Remove some redundant checks, and relocate a couple others.
As explained in the note included in the diff, this can lead to
dramatically better query plans, and it seems to be especially important
for versions of Postgres <12.
This changes TableCoreCacheT to internally record dependencies at a
per-table level. In practice, this dramatically improves the performance
of building permissions: it makes it far, far less likely for
permissions to be needlessly rebuilt because some unrelated table
changed.
We mostly want to do this to make queries against information_schema
tables work, which the console cares about. information_schema tables
use types like sql_identifier, which have no corresponding array types
defined! Therefore, in order to generate valid queries for _in and _nin
conditions, we need to treat them as their base types, instead.
Right now on errors, only the expected and the actual responses are
shown. The actual response sometimes may not have all the information,
and you may have to look at the logs. In this case, request id would be
of great help to get the extra information from the logs.
These aren't suitable e.g. for running in CI, since some take far too
long (and an impossibly long time when running under criterion's normal
bootstrapping sampling regime).
We might try to improve this ourselves:
https://github.com/bos/criterion/issues/218
An initial summary analysis will be in #3530.
* export metadata without nulls, empty arrays
* property tests for 'ReplaceMetadata' using QuickCheck
-> Derive Arbitrary class for 'ReplaceMetadata' dependant types
* reduce property test cases number to 30
QuickCheck generates really large `ReplaceMetadata` values for
higher test-case counts. The encoded JSON for such values is large and
consumes more memory. Thus, CI was giving up while running the property
tests.
* circle-ci: Add property tests as a separate job
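The shape of such a property, sketched generically (the real tests instantiate it at the ReplaceMetadata types):

```haskell
module MetadataRoundTrip where

import Data.Aeson (FromJSON, ToJSON, decode, encode)
import Test.QuickCheck (Arbitrary, maxSuccess, quickCheckWith, stdArgs)

-- Exporting metadata and re-importing it must give back the same value.
prop_roundTrip
  :: (Arbitrary a, ToJSON a, FromJSON a, Eq a, Show a) => a -> Bool
prop_roundTrip x = decode (encode x) == Just x

-- Cap the number of generated cases, since large generated values
-- consume a lot of memory in CI.
main :: IO ()
main = quickCheckWith stdArgs { maxSuccess = 30 }
         (prop_roundTrip :: [Int] -> Bool) -- placeholder instantiation
```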
* add no command mode to tests
* add yaml.v2 to go mod
* remove indirect comment for yaml.v2 dependency
The connection handler in websocket transport was not using the
'UserAuthentication' interface to resolve user info. Fix resolving
user info in websocket transport to use the common
'UserAuthentication' interface