Where possible, we start the services on random ports, to avoid
port conflicts when parallelizing tests in the future.
When this isn't possible, we explicitly state the port, and wait for the
service to start. This is typically because the GraphQL Engine has already
started with knowledge of the relevant service passed in through an
environment variable.
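As an illustration of the random-port approach (a sketch in Python, not necessarily the exact mechanism used by the test harness), one common trick is to bind to port 0 and let the OS pick an unused port:

```python
import socket

def get_free_port() -> int:
    # Bind to port 0 so the OS assigns an unused port, then hand that
    # port number to the service we are about to start.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]
```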
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5542
GitOrigin-RevId: b51a095b8710e3ff20d1edb13aa576c5272a5565
## Description
When setting up a remote relationship to a remote schema, values coming from the left-hand side are given as _arguments_ to the targeted field of the remote schema. In turn, that means we need to adjust the arguments to that remote field; in the case of input objects, it means creating a brand new input object in which the relevant fields have been removed.
To both avoid conflicts and be explicit, we give a pretty verbose name to such an input object: its original name, followed by "remote_rel", followed by the full name of the field (table name + relationship name). The bug there was introduced when working on extending remote relationships to other backends: we changed the code that translates the table name into a GraphQL identifier to be generic, and to use the table's `ToTxt` instance instead. However, when a table is not in the default schema, the character used by that instance is `.`, which is not valid in a GraphQL name.
This PR fixes it, by doing two things:
- it defines a safe function to translate LHS identifiers to GraphQL names (by replacing all invalid characters with `_`; sketched below)
- it doesn't use `unsafeMkName` anymore, and checks at validation time that the type name is correct
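As an illustration only (the actual fix lives in the server's Haskell code), the sanitisation idea of the first bullet can be sketched in Python as:

```python
import re

def to_graphql_name(identifier: str) -> str:
    # GraphQL names must match /[_A-Za-z][_0-9A-Za-z]*/, so replace every
    # invalid character (e.g. the '.' in "my_schema.my_table") with '_'.
    name = re.sub(r"[^_0-9A-Za-z]", "_", identifier)
    # A leading digit is also invalid; prefix it with '_'.
    if name and name[0].isdigit():
        name = "_" + name
    return name
```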
## Further work
On this PR:
- [x] add a test
- [x] write a Changelog entry
Beyond this PR, we might want to:
- prioritize #1747
- analyze all calls to `unsafeMkName` and remove as many as possible
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3363
GitOrigin-RevId: fe98eb1d34157b2c8323af453f5c369de616af38
This PR upgrades some of the pinned dependencies that do not build with Python 3.10 - cffi, ruamel, py. Further, it upgrades other packages where the effort is minimal.
For the reviewers: Please review it commit by commit.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/3367
GitOrigin-RevId: c5401fe289d3185a79c4d382297f86fbde139825
When adding object relationships, we set the nullability of the generated GraphQL field based on whether the database backend enforces that the referenced data always exists. For manual relationships (corresponding to `manual_configuration`), the database backend is unaware of any relationship between data, and hence such fields are always set to be nullable.
For relationships generated from foreign key constraints (corresponding to `foreign_key_constraint_on`), we distinguish between two cases:
1. The "forward" object relationship from a referencing table (i.e. which has the foreign key constraint) to a referenced table. This should be set to be non-nullable when all referencing columns are non-nullable. But in fact, it used to set it to be non-nullable if *any* referencing column is non-nullable, which is only correct in Postgres when `MATCH FULL` is set (a flag we don't consider). This fixes that by changing a boolean conjunction to a disjunction.
2. The "reverse" object relationship from a referenced table to a referencing table which has the foreign key constraint. This should always be set to be nullable. But in fact, it used to always be set to non-nullable, as was reported in hasura/graphql-engine#7201. This fixes that.
Moreover, we have moved the computation of the nullability from `Hasura.RQL.DDL.Relationship` to `Hasura.GraphQL.Schema.Select`: this nullability used to be passed through the `riIsNullable` field of `RelInfo`, but for array relationships this information is not actually used, and moreover the remaining fields of `RelInfo` are already enough to deduce the nullability.
This also adds regression tests for both (1) and (2) above.
https://github.com/hasura/graphql-engine-mono/pull/2159
GitOrigin-RevId: 617f12765614f49746d18d3368f41dfae2f3e6ca
* run basic tests after upgrade
* terminate before specifying file in pytest cmd
* Move fixture definitions out of test classes
Previously we had abstract classes with the fixtures defined
in them. The test classes then inherited these superclasses. This
was creating inheritance problems, especially when you want to
inherit only the tests in a class, but not the fixtures. We have now
moved all those fixture definitions outside of the classes (into
conftest.py). These fixtures are now used by the test classes when
and where they are required.
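A minimal sketch of the new layout (hypothetical fixture and helper names, assuming standard pytest fixtures; the real suite's fixtures are more involved):

```python
# conftest.py
import pytest

@pytest.fixture(scope="class")
def schema_setup(hge_ctx):
    # hge_ctx is the suite's engine-context fixture; apply_metadata is a
    # hypothetical helper standing in for the real setup/teardown calls.
    hge_ctx.apply_metadata("setup.yaml")
    yield
    hge_ctx.apply_metadata("teardown.yaml")

# test_queries.py
@pytest.mark.usefixtures("schema_setup")
class TestSelectQueries:
    def test_basic_select(self, hge_ctx):
        ...
```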
* Run pytest on server upgrade
Server upgrade tests are run by:
1) Running pytest with schema/metadata setup, but without schema/metadata
teardown
2) Upgrading the server
3) Running pytest using the above schema, with teardown at the end of the
tests
4) Cleaning up the Hasura metadata and starting again with the next set of
tests
We have added the options --skip-schema-setup and --skip-schema-teardown to
help run server upgrade tests.
While running the tests, we noticed that error codes and messages for
some of the tests have changed. So we have added another pytest option,
`--avoid-error-message-checks`. If this flag is set, and comparing the
expected and actual response messages fails, and the expected response
contains an error message, pytest will emit warnings instead of an
error.
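For illustration, such flags are typically registered in `conftest.py` via `pytest_addoption` (a sketch; the help texts and exact handling in the real suite may differ):

```python
# conftest.py
def pytest_addoption(parser):
    parser.addoption("--skip-schema-setup", action="store_true", default=False,
                     help="Skip schema/metadata setup (server upgrade tests)")
    parser.addoption("--skip-schema-teardown", action="store_true", default=False,
                     help="Skip schema/metadata teardown (server upgrade tests)")
    parser.addoption("--avoid-error-message-checks", action="store_true", default=False,
                     help="Report mismatched error messages as warnings, not failures")
```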
* Use marks to specify server-upgrade tests
Not all tests can be run as server upgrade tests, particularly those
which themselves change the schema. We introduce two pytest markers:
allow_server_upgrade_test adds a test to the list of server upgrade
tests that can be run, while skip_server_upgrade_test removes it from
the list.
With this we have added tests for queries, mutations, and selected
event trigger and remote schema tests to the list of server upgrade
tests.
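Used roughly like this (a sketch with hypothetical test bodies; the markers themselves are registered in the suite's pytest configuration):

```python
import pytest

@pytest.mark.allow_server_upgrade_test
class TestGraphQLQueryBasic:
    def test_select_query(self, hge_ctx):
        ...

class TestSchemaChangingMutation:
    # This test changes the schema, so exclude it from server upgrade runs.
    @pytest.mark.skip_server_upgrade_test
    def test_add_column(self, hge_ctx):
        ...
```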
* Remove components not needed anymore
* Install curl
* Fix error in query validation
* Fix error in test_v1_queries.py
* install procps for server upgrade tests
* Use postgres image which has postgis installed
* set pager off with psql
* quote the bash variable WORKTREE_DIR
Co-authored-by: nizar-m <19857260+nizar-m@users.noreply.github.com>
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
We add a new pytest flag `--accept` that will automatically write back
yaml files with updated responses. This makes it much easier and less
error-prone to update test cases when we expect output to change, or
when authoring new tests.
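A sketch of the core idea (hypothetical helper and key names; the real wiring in the suite differs): when the expected and actual responses differ and `--accept` was passed, rewrite the expectation instead of failing.

```python
def check_expected_response(request, conf, actual_response, write_back):
    # 'conf' is the parsed test yaml and 'write_back' persists it to disk.
    if conf.get("response") == actual_response:
        return
    if request.config.getoption("--accept"):
        conf["response"] = actual_response  # update the expectation in place
        write_back(conf)                    # rewrite the yaml file
    else:
        assert conf.get("response") == actual_response
```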
Second, we make sure to test that we actually preserve the order of the
selection set when returning results. This is a "SHOULD" part of the
spec but seems pretty important and something that users will rely on.
To support both of the above we use ruamel.yaml which preserves a
certain amount of formatting and comments (so that --accept can work in
a fairly ergonomic way), as well as ordering (so that when we write yaml
the order of keys has meaning that's preserved during parsing).
Use ruamel.yaml everywhere for consistency (since both libraries have
different quirks).
Quirks of ruamel.yaml:
- trailing whitespace in multiline strings in yaml files isn't written
back out as we'd like: https://bitbucket.org/ruamel/yaml/issues/47/multiline-strings-being-changed-if-they
- formatting is only sort of preserved; ruamel e.g. normalizes
indentation. Normally the diff is pretty clean though, and you can
always just check in portions of your test file after --accept
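For reference, a minimal ruamel.yaml round-trip that keeps key order and comments intact (illustrative only):

```python
from io import StringIO
import ruamel.yaml

yaml = ruamel.yaml.YAML()  # round-trip mode: preserves comments and key order
doc = yaml.load("b: 2  # keep me\na: 1\n")
doc["a"] = 10              # edit one value; everything else stays as-is

out = StringIO()
yaml.dump(doc, out)
print(out.getvalue())      # 'b' still precedes 'a', and the comment survives
```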
Examples
1) `pytest --hge-urls "http://127.0.0.1:8080" --pg-urls "postgresql://admin@127.0.0.1:5432/hge_tests" -vv`
2) `pytest --hge-urls "http://127.0.0.1:8080" "http://127.0.0.1:8081" --pg-urls "postgresql://admin@127.0.0.1:5432/hge_tests" "postgresql://admin@127.0.0.1:5432/hge_tests2" -vv`
### Solution and Design
#### Reducing execution time of tests
- Schema setup and teardown, which were earlier done per test method, usually take around 1 second.
- For mutations, the model has now been changed to only do schema setup and teardown once per test class.
- A data setup and teardown will be done once per test instead (usually taking ~10ms).
- For a test class to get this behaviour, it can extend the class `DefaultTestMutations` (see the sketch after this list).
  - A function `dir()` should be defined, which returns the location of the configuration folder.
  - Inside the configuration folder, there should be:
    - Files `<conf_dir>/schema_setup.yaml` and `<conf_dir>/schema_teardown.yaml`, which contain the metadata queries executed during schema setup and teardown respectively.
    - Files named `<conf_dir>/values_setup.yaml` and `<conf_dir>/values_teardown.yaml`, which are executed to set up and remove data from the tables respectively.
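A sketch of what such a test class looks like (hypothetical test content and config path; `DefaultTestMutations` is provided by the test suite):

```python
class TestInsertBasic(DefaultTestMutations):
    @classmethod
    def dir(cls):
        # Folder holding schema_setup.yaml, schema_teardown.yaml,
        # values_setup.yaml and values_teardown.yaml for this class.
        return "queries/graphql_mutation/insert/basic"

    def test_insert_author(self, hge_ctx):
        ...
```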
#### Running GraphQL queries on both HTTP and websockets
- Each GraphQL query/mutation is run over both the HTTP and websocket protocols.
- Pytest's test parameterisation is used to achieve this.
- The errors over websockets are slightly different from those over HTTP.
- The code takes care of converting the errors over HTTP to errors over websockets.
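Illustratively, the parameterisation can be expressed with a parametrised fixture (hypothetical names; the suite's real helpers differ):

```python
import pytest

@pytest.fixture(params=["http", "websocket"])
def transport(request):
    # Every GraphQL query/mutation test runs once per value of this fixture.
    return request.param

def test_select_author(hge_ctx, transport):
    # hge_ctx is assumed to know how to send the query over either transport.
    hge_ctx.check_query_f("queries/select_author.yaml", transport=transport)
```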
#### Parallel execution of tests
- The plugin pytest-xdist helps in running tests on parallel workers.
- We are using this plugin to group tests by file and run them on different workers.
- Parallel test worker processes operate on separate postgres databases (and separate graphql-engines connected to these databases). Thus tests on one worker will not affect the tests on another worker.
- With two workers, this roughly halves execution time, as the tests on event triggers usually take a long time but do not consume much CPU.
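For example, grouping by file across two workers looks roughly like `pytest -n 2 --dist loadfile -vv ...` (a sketch; the real invocation also passes per-worker `--hge-urls`/`--pg-urls`, as in the Examples above), where `-n 2` starts two workers and `--dist loadfile` keeps all tests from one file on the same worker.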
* 1) Tests for creating permissions
2) Test for constraint_on with GraphQL insert on_conflict
* Run tests with access key and webhook
* Tests for GraphQL query with quoted columns
* Rewrite test-server.sh so that it can be run locally
* JWT based tests
* Tests with various postgres types
* For tests on select queries, run setup only once per class
* Tests for v1 count queries
* Skip teardown for tests that do not modify data
* Workaround for hpc 'parse error when reading .tix file'
* Move GeoJson tests to the new structure
* Basic tests for v1 queries
* Tests for column, table or operator not found error cases on GraphQL queries
* Skip test teardown for mutation tests which do not change database state, even when they return 200.