NOTE: this only runs the tests for Postgres. If you want to run tests
for a different backend, use:
```sh
scripts/dev.sh test --integration --backend mssql
```
Available options are documented in `scripts/parse-pytest-backend`:
- postgres (default)
- bigquery (see section below)
- citus
- mssql
- mysql
#### Filtering tests
You can filter tests by using `-k <name>`. Note that `<name>` is case-insensitive.
```sh
scripts/dev.sh test --integration --backend mssql -k MSSQL
```
Note that you can also use expressions here, for example:
```sh
scripts/dev.sh test --integration --backend mssql -k "MSSQL and not Permission"
```
See [pytest docs](https://docs.pytest.org/en/6.2.x/usage.html#specifying-tests-selecting-tests)
for more details.
#### Failures
If you want to stop after the first test failure you can pass `-x`:
```sh
scripts/dev.sh test --integration --backend mssql -k MSSQL -x
```
#### Verbosity
You can increase or decrease the log verbosity by adding `-v` or `-q`
to the command.
### Running tests directly
WARNING: running tests manually will force skipping of some tests. `dev.sh`
takes care of setting up some environment variables which determine how, and
whether, some of the tests are executed.
1. To run the Python tests, you’ll need to install the necessary Python dependencies first. It is recommended that you do this in a self-contained Python venv, which is supported by Python 3.3+ out of the box. To create one, run:
```sh
python3 -m venv .python-venv
```
(The second argument names a directory where the venv sandbox will be created; it can be anything you like, but `.python-venv` is `.gitignore`d.)
With the venv created, you can enter into it in your current shell session by running:
```sh
source .python-venv/bin/activate
```
(Source `.python-venv/bin/activate.fish` instead if you are using `fish` as your shell.)
2. Install the necessary Python dependencies into the sandbox:
```sh
pip3 install -r tests-py/requirements.txt
```
3. Install the dependencies for the Node server used by the remote schema tests:
```sh
(cd tests-py/remote_schemas/nodejs && npm ci)
```
4. Start an instance of `graphql-engine` for the test suite to use:
The environment variables are needed for a couple of tests, and the `--stringify-numeric-types` option avoids the need to do floating-point comparisons.
If the tests include more sources (e.g., by using `-k MSSQL or MySQL`), then you can use the following commands to add sources to your running graphql instance:
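As a hedged sketch, the request body such a command sends to the `/v1/metadata` endpoint for MSSQL looks roughly like this (the source name and connection string here are placeholders; see `scripts/data-sources-util.sh` for the real commands):

```json
{
  "type": "mssql_add_source",
  "args": {
    "name": "mssql",
    "configuration": {
      "connection_info": {
        "connection_string": "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost,1433;Uid=sa;Pwd=<password>"
      }
    }
  }
}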
Running the suite without any filters executes all the tests, which can take a couple of minutes (especially since some of the tests are slow). You can configure `pytest` to run only a subset of the tests; see [the `pytest` documentation](https://doc.pytest.org/en/latest/usage.html) for more details.
- It is recommended to use a separate Postgres database for testing, since the tests will drop and recreate the `hdb_catalog` schema, and they may fail if certain tables already exist. (It’s also useful to be able to just drop and recreate the entire test database if it somehow gets into a bad state.)
- You can pass the `-v` or `-vv` options to `pytest` to enable more verbose output while running the tests and in test failures. You can also pass the `-l` option to display the current values of Python local variables in test failures.
1. Ensure you have access to a [Google Cloud Console service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating). Store the project ID and account email in the `HASURA_BIGQUERY_PROJECT_ID` variable.
2. [Create and download a new service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys). Store the contents of the file in the `HASURA_BIGQUERY_SERVICE_KEY` variable.
3. [Login and activate the service account](https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account), if it is not already activated.
4. Verify the service account is accessible via the [BigQuery API](https://cloud.google.com/bigquery/docs/reference/rest):
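One way to do this is to query the REST API directly with a token for the activated account. This is a sketch only; the exact endpoint and the use of a `gcloud`-issued access token are assumptions:

```sh
# Hypothetical check: list the project's datasets using a token from the
# activated service account. A 200 response indicates the key works.
curl -s \
  "https://bigquery.googleapis.com/bigquery/v2/projects/$HASURA_BIGQUERY_PROJECT_ID/datasets" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)"
```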
1. Add functions to launch and clean up a server for the new backend. [Example](https://github.com/hasura/graphql-engine/blob/5df8419a4f0efb3032e07130e5d5f33e75e53b66/scripts/containers/postgres).
2. Augment `dev.sh` to support the new backend. [Example](https://github.com/hasura/graphql-engine/blob/5df8419a4f0efb3032e07130e5d5f33e75e53b66/scripts/dev.sh#L208-L214).
3. Connect the GraphQL Engine to the database you've just launched. [Example](https://github.com/hasura/graphql-engine/blob/5df8419a4f0efb3032e07130e5d5f33e75e53b66/scripts/data-sources-util.sh).
4. Add setup and teardown files:
1. `setup_<backend>`: for `v1/query` or metadata queries such as `<backend>_track_table`. [Example](https://github.com/hasura/graphql-engine/blob/5df8419a4f0efb3032e07130e5d5f33e75e53b66/server/tests-py/queries/graphql_query/basic/setup_mssql.yaml).
2. `schema_setup_<backend>`: for `v2/query` queries such as `<backend>_run_sql`. [Example](https://github.com/hasura/graphql-engine/blob/5df8419a4f0efb3032e07130e5d5f33e75e53b66/server/tests-py/queries/graphql_query/basic/schema_setup_mssql.yaml).
3. `teardown_<backend>` and `cleardb_<backend>`
4. **Important:** filename suffixes _**should be the same**_ as the value that's being passed to `--backend`; that's how the files are looked up.
5. Specify a `backend` parameter for [the `per_backend_test_class` and `per_backend_test_function` fixtures](https://github.com/hasura/graphql-engine/blob/5df8419a4f0efb3032e07130e5d5f33e75e53b66/server/tests-py/conftest.py#L311-L333), parameterised by backend. [Example](https://github.com/hasura/graphql-engine/blob/5df8419a4f0efb3032e07130e5d5f33e75e53b66/server/tests-py/test_graphql_queries.py#L123).
Note: unless teardown is disabled via `skip_teardown` (in which case this phase is skipped entirely), `teardown.yaml` always runs before `schema_teardown.yaml`, even if the tests fail. See `setup_and_teardown` in `server/tests-py/conftest.py` for the full logic.
This means, for example, that if `teardown.yaml` untracks a table and `schema_teardown.yaml` runs raw SQL to drop it, both will succeed (assuming the table is tracked and exists).
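For instance, an MSSQL teardown pair might look roughly like this (the source, schema, and table names are illustrative, not taken from the actual suite):

```yaml
# teardown.yaml — metadata teardown, runs first
type: mssql_untrack_table
args:
  source: mssql
  table:
    schema: dbo
    name: author
```

```yaml
# schema_teardown.yaml — raw SQL teardown, runs second
type: mssql_run_sql
args:
  source: mssql
  sql: |
    DROP TABLE dbo.author;
```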
**Test suite naming convention**
The current convention is to indicate the backend(s) tests can be run against in the class name. For example:
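The sketch below illustrates the idea (these class names are hypothetical, not taken from the actual suite):

```python
# Suffix "Common": runs against every backend supporting the common feature set.
class TestGraphQLQueryBasicCommon:
    ...


# Suffix "MySQL": runs only against the MySQL backend.
class TestGraphQLQueryBasicMySQL:
    ...
```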
This naming convention enables easier test filtering with [pytest command line flags](https://docs.pytest.org/en/6.2.x/usage.html#specifying-tests-selecting-tests).
The backend-specific and common test suites are disjoint; for example, run `pytest --integration -k "Common or MySQL" --backend mysql` to run all MySQL tests.
Note that `--backend` does not interact with the selection of tests. You will generally have to combine `--backend` with `-k`.
## Updating Python requirements
The packages/requirements are documented in two files: