We use a helper service to start a webhook-based authentication service for some tests. This moves the initialization of the service out of _test-server.sh_ and into the Python test harness, as a fixture.
In order to do this, I had to make a few changes. The main deviation is that we no longer run _all_ tests against an HGE with this authentication service, just a few (those in _test_webhook.py_). Because this reduced coverage, I have added some more tests there, which cover some areas not exercised elsewhere (mainly trying to use webhook credentials to talk to an admin-only endpoint).
The webhook service can run both with and without TLS, and decides whether to skip one of these configurations based on the arguments passed and how HGE is started, according to the following logic:
* If a TLS CA certificate is passed in, the service will run with TLS; otherwise, it will skip it.
* If HGE was started externally and a TLS certificate is provided, it will skip running without TLS, as it will assume that HGE was configured to talk to a webhook over HTTPS.
* Some tests should only be run with TLS; these are marked with the `tls_webhook_server` marker.
* Some tests should only be run _without_ TLS; these are marked with the `no_tls_webhook_server` marker. (See the sketch after this list.)
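As a rough illustration of this skip logic, here is a minimal sketch of how such markers can be applied in a `pytest_collection_modifyitems` hook; the option names (`webhook_ca_cert`, `hge_url`) are placeholders for whatever the harness actually registers:

```python
import pytest

def pytest_collection_modifyitems(config, items):
    # Hypothetical options: a CA certificate implies the TLS variant can
    # run; an external HGE URL implies it was configured for HTTPS.
    tls_available = config.getoption('webhook_ca_cert', default=None) is not None
    hge_is_external = config.getoption('hge_url', default=None) is not None
    for item in items:
        if item.get_closest_marker('tls_webhook_server') and not tls_available:
            item.add_marker(pytest.mark.skip(reason='needs the TLS webhook server'))
        if item.get_closest_marker('no_tls_webhook_server') and tls_available and hge_is_external:
            item.add_marker(pytest.mark.skip(reason='external HGE assumed to use HTTPS'))
```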
The actual parameterization of the webhook service configuration is done through test subclasses, because normal pytest parameterization doesn't work with the `hge_fixture_env` hack that we use. Because `hge_fixture_env` is not a sanctioned way of conveying data between fixtures (and, unfortunately, there isn't a sanctioned way of doing this when the fixtures in question may not know about each other directly), parameterizing the `webhook_server` fixture doesn't actually parameterize `hge_server` properly. Subclassing forces this to work correctly.
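A minimal sketch of the subclassing approach, with illustrative names (the real fixtures differ): the configuration lives on the test class, where both `webhook_server` and `hge_server` can read it, instead of in pytest parameters that only one fixture would receive.

```python
import pytest

@pytest.fixture
def webhook_server(request):
    # Read the configuration from the test class instead of from pytest
    # parameterization, so that hge_server ends up with a matching
    # environment.
    use_tls = getattr(request.cls, 'webhook_tls', False)
    ...  # start the webhook server accordingly

class AbstractTestWebhook:
    # Tests shared by both configurations.
    def test_user_request(self, webhook_server):
        ...

class TestWebhookHttps(AbstractTestWebhook):
    webhook_tls = True

class TestWebhookHttp(AbstractTestWebhook):
    webhook_tls = False
```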
The certificate generation is moved to a Python fixture, so that we don't have to revoke the CA certificate for _test_webhook_insecure.py_; we can just generate a bogus certificate instead. The CA certificate is still generated in the _test-server.sh_ script, as it needs to be installed into the OS certificate store.
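For instance, a bogus certificate can be generated with a fixture along these lines (a sketch; the real harness may shell out differently or build the certificate in pure Python):

```python
import subprocess
import pytest

@pytest.fixture(scope='session')
def bogus_tls_certificate(tmp_path_factory):
    # Self-signed and vouched for by no CA in the trust store, so HGE's
    # certificate validation is expected to fail.
    directory = tmp_path_factory.mktemp('bogus-cert')
    key, cert = directory / 'key.pem', directory / 'cert.pem'
    subprocess.check_call([
        'openssl', 'req', '-x509', '-newkey', 'rsa:2048', '-nodes',
        '-keyout', str(key), '-out', str(cert),
        '-days', '1', '-subj', '/CN=localhost',
    ])
    return cert, key
```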
Interestingly, the CA certificate installation wasn't actually working, because the certificates were written to the wrong location. This didn't cause any failures, as we weren't actually testing this behavior. This is now fixed with the other changes.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6363
GitOrigin-RevId: 0f277d374daa64f657257ed2a4c2057c74b911db
This makes it possible for the test harness to start the test JWK server and the test remote schema server.
In order to do this, we still generate the TLS certificates in the test script (because we need to install the generated CA certificate in the OS certificate store), and then pass the certificate and key paths into the test runner.
Because we are still using _test-server.sh_ for now, we don't use the JWK server fixture in that case, as HGE needs the JWK server to be up and running when it starts. Instead, we keep running it outside (for now).
This is also the case for the GraphQL server fixture when we are running the server upgrade/downgrade tests.
I have also refactored _graphql_server.py_ so there isn't a global `HGE_URLS` value, but instead the value is passed through.
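The shape of that refactoring, sketched with illustrative names:

```python
# Before: handlers consulted a module-level global.
# HGE_URLS = []

# After: the value travels with the object that needs it.
class GraphQLServer:
    def __init__(self, hge_urls):
        self.hge_urls = hge_urls
```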
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6303
GitOrigin-RevId: 06f05ff674372dc5d632e55d68e661f5c7a17c10
When we run the HGE server inside the test harness, it needs to run with
an admin secret for some tests to make sense. This change tags each test
that requires an admin secret with `pytest.mark.admin_secret`; the
harness then generates a UUID and injects it into both the server and
the test case (if required).
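A minimal sketch of that mechanism, with illustrative fixture names:

```python
import uuid
import pytest

@pytest.fixture
def hge_key(request):
    # Only tests marked with @pytest.mark.admin_secret get a secret; the
    # same value is placed in the server's environment (not shown here).
    if request.node.get_closest_marker('admin_secret') is None:
        return None
    return str(uuid.uuid4())
```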
It also simplifies the way the test harness picks up an existing admin
secret, allowing it to use the environment variable instead of requiring
it via a parameter.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6120
GitOrigin-RevId: 55c5b9e8c99bdad9c8304098444ddb9516749a2c
This has two purposes:
* When running the Python integration tests against an already-running HGE instance, with `--hge-url`, the harness will check which environment variables are available and actively skip tests whose required variables aren't set. This replaces the previous ad-hoc skip behavior.
* More interestingly, when running against a binary with `--hge-bin`, the environment variables are passed through, which means different tests can run with different environment variables.
On top of this, the various services we use for testing now also provide their own environment variables, rather than expecting a test script to do it.
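Sketched, with a hypothetical variable name, the two behaviors look roughly like this:

```python
import os
import pytest

@pytest.fixture
def hge_env(hge_url):
    # 'WEBHOOK_SERVICE_URL' stands in for whatever variables a test needs.
    required = {'WEBHOOK_SERVICE_URL': 'http://localhost:5593'}
    if hge_url is not None:
        # Running against an external HGE (--hge-url): we can only check
        # its environment, not change it, so skip if something is missing.
        missing = [name for name in required if name not in os.environ]
        if missing:
            pytest.skip(f'environment variables not set: {missing}')
        return dict(os.environ)
    # HGE started by the harness (--hge-bin): pass the variables through.
    return {**os.environ, **required}
```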
In order to make this work, I also had to invert the dependency between various services and `hge_ctx`. I extracted a `pg_version` fixture to provide the PostgreSQL version, and now pass the `hge_url` and `hge_key` explicitly to `ActionsWebhookServer`.
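For example, the extracted fixture can be as small as this (a sketch assuming a psycopg2-style connection fixture):

```python
import pytest

@pytest.fixture(scope='session')
def pg_version(pg_connection):
    # psycopg2 exposes the server version as an integer, e.g. 140005.
    return pg_connection.server_version // 10000
```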
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6028
GitOrigin-RevId: 16d866741dba5887da1adf4e1ade8182ccc9d344
I'm trying to shore up the Python integration tests to make them more reliable. In doing so, I noticed this.
---
This feels a lot more sensible, as we never run against more than one backend at a time.
This also removes the `check_file_exists` parameter from the setup functions; it never worked. It was always set to the result of comparing a backend name against a function, which was always `False`, and enabling it breaks things.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5254
GitOrigin-RevId: 8718ab21527c2ba0a7205d1c01ebaac1a10be844
* run basic tests after upgrade
* terminate before specifying file in pytest cmd
* Move fixture definitions out of test classes
Previously we had abstract classes with the fixtures defined in
them, and the test classes inherited from these superclasses. This
created inheritance problems, especially when you want to inherit
the tests in a class but not the fixtures. We have now moved all of
those fixture definitions out of the classes (into conftest.py).
These fixtures are now used by the test classes when and where they
are required.
* Run pytests on server upgrade
Server upgrade tests are run as follows:
1) Run pytest with schema/metadata setup, but skip the schema/metadata
teardown
2) Upgrade the server
3) Run pytest using the schema from step 1, tearing it down at the end
of the tests
4) Clean up the Hasura metadata and start again with the next set of
tests
We have added the options --skip-schema-setup and --skip-schema-teardown
to help run server upgrade tests.
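In outline, a harness could drive these steps like so (a sketch; the upgrade command and test selection are placeholders):

```python
import subprocess

def run_server_upgrade_tests(tests):
    # 1) Set up schema/metadata, run the tests, keep the schema around.
    subprocess.check_call(['pytest', '--skip-schema-teardown', *tests])
    # 2) Upgrade the server (placeholder command).
    subprocess.check_call(['./upgrade-hge.sh'])
    # 3) Run against the existing schema and tear it down afterwards.
    subprocess.check_call(['pytest', '--skip-schema-setup', *tests])
    # 4) Metadata cleanup between batches happens elsewhere.
```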
While running the tests, we noticed that the error codes and messages
for some of the tests have changed. So we have added another pytest
option, `--avoid-error-message-checks`. If this flag is set, and the
comparison of the expected and actual response fails, and the expected
response contains an error message, pytest will emit a warning instead
of an error.
* Use marks to specify server-upgrade tests
Not all tests can be run as server upgrade tests, particularly those
which themselves change the schema. We introduce two pytest markers:
`allow_server_upgrade_test` adds a test to the list of server
upgrade tests that can be run, and `skip_server_upgrade_test`
removes it from the list.
With this we have added tests for queries, mutations, and selected
event trigger and remote schema tests to the list of server upgrade
tests.
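Usage looks roughly like this (the class and test names are illustrative, not the actual suite's):

```python
import pytest

class TestGraphQLQueryBasic:
    @pytest.mark.allow_server_upgrade_test
    def test_select_query(self, hge_ctx):
        ...  # safe across an upgrade: does not change the schema

    @pytest.mark.skip_server_upgrade_test
    def test_drop_table(self, hge_ctx):
        ...  # changes the schema, so excluded from the upgrade list
```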
* Remove components not needed anymore
* Install curl
* Fix error in query validation
* Fix error in test_v1_queries.py
* install procps for server upgrade tests
* Use postgres image which has postgis installed
* set pager off with psql
* quote the bash variable WORKTREE_DIR
Co-authored-by: nizar-m <19857260+nizar-m@users.noreply.github.com>
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
We add a new pytest flag `--accept` that will automatically write back
yaml files with updated responses. This makes it much easier and less
error-prone to update test cases when we expect output to change, or
when authoring new tests.
Second, we make sure to test that we actually preserve the order of the
selection set when returning results. This is a "SHOULD" part of the
spec, but it seems pretty important and something that users will rely on.
To support both of the above, we use ruamel.yaml, which preserves a
certain amount of formatting and comments (so that --accept can work in
a fairly ergonomic way), as well as ordering (so that when we write yaml,
the order of keys has meaning that's preserved during parsing).
Use ruamel.yaml everywhere for consistency (since both libraries have
different quirks).
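A sketch of the write-back that --accept performs, assuming the test file has a top-level `response` key (the actual layout may differ); `YAML()` defaults to round-trip mode, which is what preserves comments and key order:

```python
from ruamel.yaml import YAML

def accept_response(path, actual_response):
    yaml = YAML()  # round-trip mode by default
    with open(path) as f:
        spec = yaml.load(f)
    spec['response'] = actual_response
    with open(path, 'w') as f:
        yaml.dump(spec, f)
```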
Quirks of ruamel.yaml:
- trailing whitespace in multiline strings in yaml files isn't written
back out as we'd like: https://bitbucket.org/ruamel/yaml/issues/47/multiline-strings-being-changed-if-they
- formatting is only sort of preserved; ruamel e.g. normalizes
indentation. Normally the diff is pretty clean though, and you can
always just check in portions of your test file after --accept
* Tests for creating permissions
* Test for constraint_on with GraphQL insert on_conflict
* Run tests with access key and webhook
* Tests for GraphQL query with quoted columns
* Rewrite test-server.sh so that it can be run locally
* JWT based tests
* Tests with various postgres types
* For tests on select queries, run setup only once per class
* Tests for v1 count queries
* Skip teardown for tests that do not modify data
* Workaround for hpc 'parse error when reading .tix file'
* Move GeoJson tests to the new structure
* Basic tests for v1 queries
* Tests for column, table or operator not found error cases on GraphQL queries
* Skip test teardown for mutation tests which do not change database state, even when they return 200.