- Remove a few unnecessary helper functions
- Delete kind annotations
- Bring GHC warnings and language extensions more in line with those of the `graphql-engine` library
- Constrain unconstrained dependency on `hasql-pool`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6251
GitOrigin-RevId: 10c2530f007f70cf1464cec36566ee2264589881
This updates _docker-compose.yml_ to use the new image tags, and updates _run.sh_ accordingly.
While I was at it, I also added a `docker compose pull` instruction to make sure that we don't have surprises half-way through the script, and a few `echo` lines for clarity.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6235
GitOrigin-RevId: 3855f6898bd3e906c5f423d9d0d6a7031de3777a
We seem to be rebuilding hpack on every PR. I'm hoping this will allow PRs to share a cache.
I have also changed the cache key to include the entirety of _server/VERSIONS.json_, and added the GHC version there, to make sure it's properly invalidated.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6142
GitOrigin-RevId: fc61a26ad721f59f52687913f6978902f4c2ca0a
- Remove `onJust` in favor of the more general `for_`
- Remove `withJust` which was used only once
- Remove `hashNub` in favor of `Ord`-based `uniques`
- Simplify some of the implementations in `Hasura.Prelude`
- Add `hlint` hints to replace `maybe True` with `all`, and `maybe False` with `any`
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6173
GitOrigin-RevId: 2c6ebbe2d04f60071d2a53a2d43c6d62dbc4b84e
This PR is the result of running the following commands:
```bash
$ git grep -l '".* : "' -- '*.hs' | xargs sed -i -E 's/(".*) : "/\1: "/'
$ scripts/dev.sh test --integration --accept
```
I also manually fixed a few tests and docs.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6148
GitOrigin-RevId: cf8b87605d41d9ce86613a41ac5fd18691f5a641
When we run the HGE server inside the test harness, it needs to run with
an admin secret for some tests to make sense. This tags each test that
requires an admin secret with `pytest.mark.admin_secret`; the harness then
generates a UUID and injects it into both the server and (if required) the
test case.
It also simplifies the way the test harness picks up an existing admin
secret, allowing it to use the environment variable instead of requiring
it via a parameter.
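For illustration, a marked test might look roughly like the sketch below. This is a minimal, hypothetical example: `hge_ctx` and `hge_key` are names from the existing suite, but the class name, test body, and the exact way the secret is exposed are assumptions.

```python
# Minimal sketch of the marker in use; the class name, test body, and the
# `hge_ctx.hge_key` attribute access are illustrative, not taken verbatim
# from the actual suite.
import pytest


@pytest.mark.admin_secret
class TestRequiresAdminSecret:
    def test_admin_secret_is_injected(self, hge_ctx):
        # The harness generated a UUID, started the server with it as the
        # admin secret, and exposed it to the test via the context fixture.
        assert hge_ctx.hge_key is not None
```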
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6120
GitOrigin-RevId: 55c5b9e8c99bdad9c8304098444ddb9516749a2c
This teaches `hge_server` how to run more tests, thanks to `hge_env`.
It also simplifies the logic a bit more.
I have also modified _run.sh_ and _docker-compose.yml_ so we can run multiple test suites, one after another.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6105
GitOrigin-RevId: eff009362eb6bb90c07cedaf96dfe6ec9336ff32
If we don't do this, we might end up applying metadata with a stale schema cache.
Following the principle of least surprise, replacing the metadata should probably compute inconsistencies with regard to the actual state of the database.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6026
GitOrigin-RevId: ff7469d7d9857c8a9f517d5d0b6f1ecf463621b3
This has two purposes:
* When running the Python integration tests against a running HGE instance, with `--hge-url`, it will check the environment variables available and actively skip the test if they aren't set. This replaces the previous ad-hoc skip behavior.
* More interestingly, when running against a binary with `--hge-bin`, the environment variables are passed through, which means different tests can run with different environment variables.
On top of this, the various services we use for testing now also provide their own environment variables, rather than expecting a test script to do it.
In order to make this work, I also had to invert the dependency between various services and `hge_ctx`. I extracted a `pg_version` fixture to provide the PostgreSQL version, and now pass the `hge_url` and `hge_key` explicitly to `ActionsWebhookServer`.
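Roughly, the behaviour described above can be pictured as follows. This is a hedged sketch only, not the actual harness code: `hge_url` and `hge_bin` mirror the command-line options mentioned above, while `hge_fixture_env` and the port handling are illustrative assumptions.

```python
# Hedged sketch of the described behaviour; fixture names and wiring are
# illustrative rather than the real implementation.
import os
import subprocess

import pytest


@pytest.fixture
def hge_server(hge_url, hge_bin, hge_fixture_env):
    if hge_url:
        # Running against an existing server (--hge-url): we cannot change
        # its environment, so skip if the required variables are not set.
        missing = [name for name in hge_fixture_env if name not in os.environ]
        if missing:
            pytest.skip(f'Missing environment variables: {missing}')
        yield hge_url
        return

    # Running our own binary (--hge-bin): pass the variables through, so
    # different tests can run with different environment variables.
    env = {**os.environ, **hge_fixture_env}
    process = subprocess.Popen([hge_bin, 'serve'], env=env)
    try:
        yield 'http://localhost:8080'  # illustrative; the real port is chosen dynamically
    finally:
        process.terminate()
        process.wait()
```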
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6028
GitOrigin-RevId: 16d866741dba5887da1adf4e1ade8182ccc9d344
NPM v7 uses a new (backwards-compatible) lockfile format. This upgrades all our various _package-lock.json_ files to use the new format.
It's much more verbose, which is what allows NPM to be a lot faster.
I figured it was cleaner to do this once, in a separate PR, rather than in combination with adding or upgrading a dependency.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5869
GitOrigin-RevId: 322fb63b96e2d873a4a3cc05fa6c7afa414716ce
This adds support for running the Python integration tests for MSSQL and Citus just as in CI, as follows:
```
./server/tests-py/run.sh backend-mssql
./server/tests-py/run.sh backend-citus
```
These run the named CI jobs, providing the appropriate backend.
(In reality, all backends are always provided, which is much simpler.)
It also provides the various databases to _server/tests-py/run-new.sh_, though the tests fail as they don't properly initialize the sources. (This will be fixed in the future by provisioning sources in the test framework itself.)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5997
GitOrigin-RevId: c276a4779a35bb538ef0dc02ac8b7cb2d5a8dec5
This makes a few changes to the test scripts and makefiles in order to make things simpler for the average Apple user.
First of all, we change the `wait_for_mysql` function to use "localhost", not "127.0.0.1", as this fixed an issue on my system when attempting to connect to the MySQL server.
Secondly, we split the SQL Server test image into two:
* The first is the server itself, which now automatically uses `azure-sql-edge` as the image if you are on an aarch64 chip and using the `make` commands.
* The second is the initialization script. Because `sqlcmd` is not available in the `azure-sql-edge` image on aarch64, we use a separate container based on `mssql-tools` to initialize the server.
The README has been updated.
Tested on both macOS/aarch64 (with other changes) and Linux/x86_64.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5986
GitOrigin-RevId: b16e079861dcbcc66773295c47d715e443b67eea
See: https://github.com/grafana/k6/issues/2685
It might be interesting to take decompression time into consideration when thinking about performance, but in general I think doing so is surprising, and I wasted a lot of time trying to figure out why my optimizations to the compression codepath weren't improving things to the degree I expected.
The downside here is that we lose error reporting, so you'll want to set
`discardResponseBodies: true` only after the query has been tested.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5940
GitOrigin-RevId: 82a589a59b93f10ffb5391e4a3190459fb6e613b
Result of executing the following commands:
```shell
# replace "as Q" imports with "as PG" (in retrospect this didn't need a regex)
git grep -lE 'as Q($|[^a-zA-Z])' -- '*.hs' | xargs sed -i -E 's/as Q($|[^a-zA-Z])/as PG\1/'
# replace " Q." with " PG."
git grep -lE ' Q\.' -- '*.hs' | xargs sed -i 's/ Q\./ PG./g'
# replace "(Q." with "(PG."
git grep -lE '\(Q\.' -- '*.hs' | xargs sed -i 's/(Q\./(PG./g'
# ditto, but for [, |, { and !
git grep -lE '\[Q\.' -- '*.hs' | xargs sed -i 's/\[Q\./\[PG./g'
git grep -l '|Q\.' -- '*.hs' | xargs sed -i 's/|Q\./|PG./g'
git grep -l '{Q\.' -- '*.hs' | xargs sed -i 's/{Q\./{PG./g'
git grep -l '!Q\.' -- '*.hs' | xargs sed -i 's/!Q\./!PG./g'
```
(Doing the `grep -l` before the `sed`, instead of `sed` on the entire codebase, reduces the number of `mtime` updates, and so reduces how many times a file gets recompiled while checking intermediate results.)
Finally, I manually removed a broken and unused `Arbitrary` instance in `Hasura.RQL.Network`. (It used an `import Test.QuickCheck.Arbitrary as Q` statement, which was erroneously caught by the first find-replace command.)
After this PR, `Q` is no longer used as an import qualifier. That was not the goal of this PR, but perhaps it's a useful fact for future efforts.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5933
GitOrigin-RevId: 8c84c59d57789111d40f5d3322c5a885dcfbf40e
This fixes a few issues so that we can run `./server/tests-py/run.sh backend-bigquery` to run the Python integration tests for BigQuery locally.
* We forward the relevant environment variables to the Docker container.
* We increase the HTTP timeout, as I'm seeing requests taking up to 90s locally.
* We rewrite the setup so that it avoids `INSERT INTO`, which is not available using the BigQuery free tier. Instead, we use `CREATE TABLE ... AS SELECT ...`. This is the same method used by the Haskell integration tests.
We also capture local server output in a volume so it's easier to figure out what went wrong later.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5921
GitOrigin-RevId: c628f8c08a84f2582958659ab6d6494832471f6f
I am working on https://github.com/hasura/graphql-engine/issues/8807, and wanted to write a Haskell integration test case to reproduce it.
We have Python integration tests somewhat covering this behavior in *test_inconsistent_meta.py*, but no Haskell tests, so I thought I'd shore up the coverage here by adding a few test cases for working behavior.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5897
GitOrigin-RevId: 21500e530e413feaede5cbd8b4a94b07d25a6260
This makes two changes to the Docker Compose files that we use for local testing:
1. We disable `fsync`. On my machine, this decreases the time taken to create a new database from ~5s to less than 0.1s. The trade-off is that you might lose data, which we don't care about, as this is for testing.
2. We increase the maximum number of connections from the default, 100, to 1000. This allows us to run more tests in parallel without hitting connection limits.
These changes won't have any meaningful effect for now; they simply allow us to parallelize tests against PostgreSQL in the future.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5892
GitOrigin-RevId: 5d0d0ab37fdfbf4c9e20084d3cbedf647f54a04e
This argument allows the user to specify how to run HGE, rather than starting it beforehand. The runner will start a new instance of HGE for each test class.
This does not provide isolation, as the database is still re-used, but it helps us get closer.
You can try it yourself by executing:
```
$ cabal build graphql-engine:exe:graphql-engine
$ ./server/tests-py/run-new.sh
```
This doesn't affect CI at all.
I also fixed a few warnings flagged by Pylance.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5881
GitOrigin-RevId: ea6f0fd631a2c278b2c6b50e9dbdd9d804ebc9d4
This starts and stops it only for the various tests that actually use it;
there are only a few.
This also removes some dead code and fixes warnings in _test_webhook_request_context.py_.
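The shape of the change is roughly a scoped fixture that owns the server's lifetime, something like the generic sketch below, where `http.server` stands in for the real webhook server and the fixture name is an assumption.

```python
# Generic start/stop-per-test pattern; the real suite uses its own webhook
# server rather than this stand-in, and its own fixture names.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from threading import Thread

import pytest


@pytest.fixture
def webhook_server():
    # Start the server in a background thread for the duration of one test...
    server = ThreadingHTTPServer(('localhost', 0), BaseHTTPRequestHandler)
    thread = Thread(target=server.serve_forever, daemon=True)
    thread.start()
    try:
        yield server
    finally:
        # ...and shut it down again as soon as the test is done.
        server.shutdown()
        thread.join()
```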
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5846
GitOrigin-RevId: 7760467f9de7b1f9718e7482275c298eeaa3ad3a
The intent is to generalize `columnParser` to the point where it is the same across all backends, and then remove the interface in favor of a single implementation.
This extracts out `enumParser` and `possiblyNullable` as the two main areas that differ across backends. We may split `possiblyNullable` further so that we can extract some of that logic out into a common function too.
With these changes, the various `columnParser` implementations become semantically equivalent. They still do different things, and so reconciling them will require further changes.
Co-Authored-By: Antoine Leblanc <antoine@hasura.io>
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5841
GitOrigin-RevId: eec1770931eed5d72da70c97d7d0f00e33fa15d2
### Description
This PR attempts to fix several issues with source customization as it relates to remote relationships. There were several issues regarding casing: at the relationship border, we didn't properly set the target source's case, we didn't have access to the list of supported features to decide whether the feature was allowed or not, and we didn't have access to the global default.
However, all of that information is available when we build the schema cache, as we do resolve the case of some elements such as function names: we can therefore resolve source information at the same time, and simplify both the root of the schema and the remote relationship border.
To do this, this PR introduces a new type, `ResolvedSourceCustomization`, to be used in the Schema Cache, as opposed to the metadata's `SourceCustomization`, following a pattern established by a lot of other types.
### Remaining work and open questions
One major point of confusion: it seems to me that we didn't set the case at all across remote relationships. This would suggest that we used the case of the LHS source across the subset of the RHS one that is accessible through the remote relationship, which would in turn "corrupt" the parser cache and might result in the wrong case being used for that source later on. Is that assessment correct, and was I right to fix it?
Another one is that we seem not to be using the local case of the RHS to name the field in an object relationship; unless I'm mistaken we only use it for array relationships? Is that intentional?
This PR is also missing tests that would showcase the difference, and a changelog entry. To my knowledge, all the tests of this feature are in the Python test suite; this could be the opportunity to move them to the hspec suite, but that might be a considerable amount of work?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5619
GitOrigin-RevId: 51a81b713a74575e82d9f96b51633f158ce3a47b
This allows a developer, through Docker, to run the Python integration tests in pretty much exactly the same way as CI does, allowing us to more readily diagnose issues locally.
I'm hoping this is temporary and we won't need it for too long, but I have found it invaluable over the last few days so I would like to share it.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5818
GitOrigin-RevId: 18876fbbcbe7c5492afdf54d96af45ab2c519b77
This abstracts `CircularT`'s test cases to work against "any" memoizer, and then runs them against `MemoizeT` as well.
Surprisingly (or not), this works without issue; `MemoizeT` passes all tests with a couple of extra instances.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5780
GitOrigin-RevId: 461880caf9220dc3f52d622a22e8b8bcd594e404
Where possible, we start the services on random ports, to avoid
port conflicts when parallelizing tests in the future.
When this isn't possible, we explicitly state the port, and wait for the
service to start. This is typically because the GraphQL Engine has already
started with knowledge of the relevant service passed in through an
environment variable.
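The usual trick for picking a random free port, sketched generically below (this is not the harness's actual helper), is to bind to port 0 and let the operating system choose an unused one:

```python
# Generic sketch: bind to port 0 so the OS assigns an unused port, then
# report which port it picked.
import socket


def random_free_port() -> int:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.bind(('localhost', 0))
        return sock.getsockname()[1]
```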
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5542
GitOrigin-RevId: b51a095b8710e3ff20d1edb13aa576c5272a5565
### Description
This PR changes all the schema code to operate in a specific `SchemaT` monad, rather than in an arbitrary `m` monad. `SchemaT` is intended to be used opaquely with `runSourceSchema` and `runRemoteSchema`. The main goal of this is to allow a different reader context per part of the schema: this PR also minimizes the contexts. This means that we no longer require `SchemaOptions` when building remote schemas' schema, and this PR therefore removes a lot of dummy / placeholder values accordingly.
### Performance and stacking
This PR has been through several iterations. #5339 was the original version, that accomplished the same thing by stacking readers on top of the stack at every remote relationship boundary. This raised performance concerns, and @0x777 confirmed with an ad-hoc test that in some extreme cases we could see up to a 10% performance impact. This version, while more verbose, allows us to unstack / re-stack the readers, and avoid that problem. #5517 adds a new benchmark set to be able to automatically measure this on every PR.
### Remaining work
- [x] a comment (or perhaps even a Note?) should be added to `SchemaT`
- [x] we probably want for #5517 to be merged first so that we can confirm the lack of performance penalty
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5458
GitOrigin-RevId: e06b83d90da475f745b838f1fd8f8b4d9d3f4b10
This removes string interpolation from quasiquoted literals. We only use
this in one place and it's totally unnecessary.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5750
GitOrigin-RevId: 3493a11db6347332e7e3721a7dca616947505be6
This includes TH.Lift instances.
I am motivated to make this change because `unordered-containers` is set to either v0.2.17.0 or v0.2.19.1 in nixpkgs-unstable.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5620
GitOrigin-RevId: 7fd3024fdbf6a948adbdf5f4187d47d5da9acbda
This PR expands the OpenAPI specification generated for metadata to include separate definitions for `SourceMetadata` for each native database type, and for DataConnector.
For the most part the changes add `HasCodec` implementations, and don't modify existing code otherwise.
The generated OpenAPI spec can be used to generate TypeScript definitions that distinguish different source metadata types based on the value of the `kind` property. There is a problem: because the specified `kind` value for a data connector source is any string, when TypeScript gets a source with a `kind` value of, say, `"postgres"`, it cannot unambiguously determine whether the source is postgres or a data connector. For example,
```ts
function consumeSourceMetadata(source: SourceMetadata) {
  if (source.kind === "postgres" || source.kind === "pg") {
    // At this point TypeScript infers that `source` is either an instance
    // of `PostgresSourceMetadata`, or `DataconnectorSourceMetadata`. It
    // can't narrow further.
    source
  }
  if (source.kind === "something else") {
    // TypeScript infers that this `source` must be an instance of
    // `DataconnectorSourceMetadata` because `source.kind` does not match
    // any of the other options.
    source
  }
}
```
The simplest way I can think of to fix this would be to add a boolean property to the `SourceMetadata` type along the lines of `isNative` or `isDataConnector`. This could be a field that only exists in serialized data, like the metadata version field. The combination of one of the native database names for `kind`, and a true value for `isNative` would be enough for TypeScript to unambiguously distinguish the source kinds.
But note that in the current state TypeScript is able to reference the short `"pg"` name correctly!
~~Tests are not passing yet due to some discrepancies in DTO serialization vs existing Metadata serialization. I'm working on that.~~
The placeholders that I used for table and function metadata are not compatible with the ordered JSON serialization in use. I think the best solution is to write compatible codecs for those types in another PR. For now I have disabled some DTO tests for this PR.
Here are the generated [OpenAPI spec](https://github.com/hasura/graphql-engine-mono/files/9397333/openapi.tar.gz) based on these changes, and the generated [TypeScript client code](https://github.com/hasura/graphql-engine-mono/files/9397339/client-typescript.tar.gz) based on that spec.
Ticket: [MM-66](https://hasurahq.atlassian.net/browse/MM-66)
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5582
GitOrigin-RevId: e1446191c6c832879db04f129daa397a3be03f62
### Description
This PR adds a new benchmark set named `deep_schema`, made to replicate one very specific edge case: schemas that have deeply nested remote relationships. Our schema-building code is, in essence, "depth-first", and there are a lot of subtleties in the way we jump across remote relationship boundaries: this set will allow us to better understand the performance implications of technical decisions we make wrt. schema building.
This set, unlike others, does not declare any query: we are, for now, only interested in the schema building, which is tested with an ad-hoc script.
### Remaining work
There are several points worth discussing, wrt. this PR:
- should we make the schema larger, to make measures more consistent?
- should we extend this idea of measuring schema build performance to other sets?
- how do we extend the report to include this new information?
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5517
GitOrigin-RevId: 9d8f4fddb9bbdca5ef85f3d22337b992acf13bce
This does not yet enable Aggregation Predicates for users, but it enables building the execution backend and tests of the schema.
This is a prerequisite for:
* #5174
* #5261
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5607
GitOrigin-RevId: e07beb01949724545131629c111d41a7ec4636f2
We plan on creating the source database dynamically, in the test setup.
This means that (a) we cannot assume that the metadata database and the
source database are the same, and (b) we need to drop and re-add the
source in code, not in YAML.
This changeset prepares the code for the introduction of a separate
source database, but doesn't go there yet. The separation is already
done but is too big to review in one go, so I have split this out.
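For reference, dropping and re-adding a Postgres source in code boils down to calls against the metadata API, roughly as sketched below. This is a hedged sketch: the helper names and admin-secret handling are illustrative, while the `pg_add_source`/`pg_drop_source` request shapes follow the public metadata API.

```python
# Hedged sketch of adding/dropping a source via the metadata API instead of
# YAML fixtures; helper names are illustrative.
import requests


def add_postgres_source(hge_url, admin_secret, name, database_url):
    # Register a new Postgres source under the given name.
    response = requests.post(
        f'{hge_url}/v1/metadata',
        headers={'X-Hasura-Admin-Secret': admin_secret},
        json={
            'type': 'pg_add_source',
            'args': {
                'name': name,
                'configuration': {'connection_info': {'database_url': database_url}},
            },
        },
    )
    response.raise_for_status()


def drop_postgres_source(hge_url, admin_secret, name):
    # Remove the source (and, with cascade, any metadata depending on it).
    response = requests.post(
        f'{hge_url}/v1/metadata',
        headers={'X-Hasura-Admin-Secret': admin_secret},
        json={'type': 'pg_drop_source', 'args': {'name': name, 'cascade': True}},
    )
    response.raise_for_status()
```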
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5508
GitOrigin-RevId: b497a83ac4a100371762c2515c87ee3760d8d4ab
This splits two naming convention tests into four classes (and four YAML
files), which might seem overkill, but allows us to provision sources
declaratively in the future. As each class will require a custom source
configuration, we are able to annotate them accordingly, which means the
test cases are decoupled from the source database URL, letting us
generate a new database for each test case and automatically add it as a
source to HGE.
The future changes are already prepared, but this has been extracted out
as it splits the YAML files, which is a large change best reviewed in
isolation.
The test case `test_type_and_field_names` has been split into:
* `TestNamingConventionsTypeAndFieldNamesGraphqlDefault`
* `TestNamingConventionsTypeAndFieldNamesHasuraDefault`
The test case `test_type_and_field_names_with_prefix_and_suffix` has
been split into:
* `TestNamingConventionsTypeAndFieldNamesGraphqlDefaultWithPrefixAndSuffix`
* `TestNamingConventionsTypeAndFieldNamesHasuraDefaultWithPrefixAndSuffix`
The YAML files have been split in the same way. This was fairly trivial
as each test case would add a source, run some tests with
the `graphql_default` naming convention, drop the source, and then
repeat for the `hasura_default` naming convention. I simply split the
file in two. There is a little bit of duplication for provisioning the
various database tables, which I think is worth it.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5496
GitOrigin-RevId: 94825e755c427a5414230f69985b534991b3aad6
This means that if `remote_schemas/nodejs/package.json` changes, the
dependencies will be automatically reinstalled.
It also moves `package-lock.json` to the correct location (in the
directory in which we run `npm install`), and updates it.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5481
GitOrigin-RevId: f3fb431afd19de150f39ec2e4cb6572b896c870f