- Created a new job test_and_build_cli_migrations which runs after test_and_build_cli
- Build the cli-migrations and cli-migrations-v2 images and save them as tar archives
- Run the tests defined in each workflow (v1 and v2)
- Load the previously built images in the deploy step
* Fix catalog version for v1.1.1
* Remove entries of removed tables from hdb_catalog
While downgrading the catalog version from 32 to 31, not removing the entries
in hdb_table and hdb_relationship for the tables that are removed in
the downgrade results in an inconsistent schema when the server with the
downgraded version is started. This should probably be handled in
a better fashion.
With the change in this commit, the server is able to successfully
start with the downgraded catalog version 31.
* Test downgrade command along with upgrade tests
* run basic tests after upgrade
* terminate before specifying file in pytest cmd
* Move fixture definitions out of test classes
Previously we had abstract classes with the fixtures defined
in them, and the test classes then inherited from these super classes. This
creates inheritance problems, especially when you want to
inherit only the tests of a class, but not the fixtures. We have now moved
all those fixture definitions out of the classes (into conftest.py).
These fixtures are now used by the test classes when and where they
are required.
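A minimal sketch of the new layout, assuming an `hge_ctx` fixture wrapping the graphql-engine under test and a `v1q_f` helper for running metadata queries from YAML files (names and paths here are illustrative, not the suite's exact API):
```python
# conftest.py -- fixture defined at module level instead of inside an
# abstract base class.
import pytest

@pytest.fixture(scope='class')
def schema_setup(hge_ctx):
    # Apply the schema once per test class and tear it down afterwards.
    hge_ctx.v1q_f('queries/schema_setup.yaml')
    yield
    hge_ctx.v1q_f('queries/schema_teardown.yaml')

# test_queries.py -- a test class pulls in only the fixtures it needs.
@pytest.mark.usefixtures('schema_setup')
class TestSelectQueries:
    def test_select_authors(self, hge_ctx):
        ...
```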
* Run pytests on server upgrade
Server upgrade tests are run as follows:
1) Run pytest with schema/metadata setup, but do not do the schema/metadata
teardown
2) Upgrade the server
3) Run pytest using the above schema, and tear down at the end of the
tests
4) Clean up the Hasura metadata and start again with the next set of tests
We have added the options --skip-schema-setup and --skip-schema-teardown to
help run server upgrade tests.
While running the tests, we noticed that the error codes and messages for
some of the tests have changed. So we have added another pytest option,
`--avoid-error-message-checks`. If this flag is set, and the comparison of
the expected and actual responses fails, and the expected response contains
an error message, pytest will emit a warning instead of an
error.
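A minimal `conftest.py` sketch of how these options could be wired up; only the option names come from the description above, the helper and its behaviour are illustrative:
```python
# conftest.py -- illustrative wiring for the options described above.
import warnings

def pytest_addoption(parser):
    parser.addoption('--skip-schema-setup', action='store_true', default=False,
                     help='Do not run schema/metadata setup (schema already exists)')
    parser.addoption('--skip-schema-teardown', action='store_true', default=False,
                     help='Do not run schema/metadata teardown (keep it for the next run)')
    parser.addoption('--avoid-error-message-checks', action='store_true', default=False,
                     help='Warn instead of fail when only the error message differs')

def check_error_response(request, expected, actual):
    # Hypothetical helper: downgrade mismatches to warnings when the flag is
    # set and the expected response carries an error message.
    if expected == actual:
        return
    if request.config.getoption('--avoid-error-message-checks') and 'error' in expected:
        warnings.warn('error message changed: expected %r, got %r' % (expected, actual))
    else:
        assert expected == actual
```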
* Use marks to specify server-upgrade tests
Not all tests can be run as server upgrade tests, particularly those
which themselves change the schema. We introduce two pytest markers:
the marker allow_server_upgrade_test adds a test to the list of
server upgrade tests that can be run, and skip_server_upgrade_test
removes it from the list.
With this we have added tests for queries, mutations, and selected
event trigger and remote schema tests to the list of server upgrade
tests.
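Illustrative usage of the two markers (the marker names are from the description above; the test classes and bodies are placeholders, and the markers would also need to be registered, e.g. in `pytest.ini`):
```python
import pytest

@pytest.mark.allow_server_upgrade_test
class TestGraphQLQueryBasic:
    def test_select_authors(self, hge_ctx):
        ...

    # A test that itself changes the schema should not be part of the
    # server upgrade run, so it opts out explicitly.
    @pytest.mark.skip_server_upgrade_test
    def test_run_sql_alter_table(self, hge_ctx):
        ...
```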
* Remove components not needed anymore
* Install curl
* Fix error in query validation
* Fix error in test_v1_queries.py
* install procps for server upgrade tests
* Use postgres image which has postgis installed
* set pager off with psql
* quote the bash variable WORKTREE_DIR
Co-authored-by: nizar-m <19857260+nizar-m@users.noreply.github.com>
Co-authored-by: Vamshi Surabhi <0x777@users.noreply.github.com>
* export metadata without nulls, empty arrays
* property tests for 'ReplaceMetadata' using QuickCheck
-> Derive Arbitrary instances for 'ReplaceMetadata'-dependent types
* reduce property test cases number to 30
QuickCheck generates very large `ReplaceMetadata` values for higher
test-case counts. The encoded JSON for such values is large and
consumes more memory, so CI was giving up while running the property
tests.
* circle-ci: Add property tests as separate job
* add no command mode to tests
* add yaml.v2 to go mod
* remove indirect comment for yaml.v2 dependency
* save permissions, relationships and collections in catalog with 'is_system_defined'
* Use common stanzas in the .cabal file
* Refactor migration code into lib instead of exe
* Add new server test suite that exercises migrations
* Make graphql-engine clean succeed even if the schema does not exist
* Fix hpc combine error
* Do not perform ciignore
* xfail test jsonb_has_all
* Bring back ciignore
* Refer jsonb_has_all xfail to the corresponding issue in graphql-engine-internal
These changes also add a new type, PGColumnType, between PGColInfo and
PGScalarType, and they process PGRawColumnType values into PGColumnType
values during schema cache generation.
This PR builds console static assets into the server docker image at `/srv/console-assets`. When env var `HASURA_GRAPHQL_CONSOLE_ASSETS_DIR=/srv/console-assets` or flag `--console-assets-dir=/srv/console-assets` is set on the server, the files in this directory are served at `/console/assets/*`.
The console HTML template will have a variable called `cdnAssets: false` when this flag is set, and it loads assets from the server itself instead of the CDN.
The assets are moved to a new bucket with a new naming scheme:
```
graphql-engine-cdn.hasura.io/console/assets/
/common/{}
/versioned/<version>/{}
/channel/<channel>/<version>/{}
```
The console served by the CLI will still load assets from the CDN; this will be fixed in the next release.
1. Reuses postgres connections during startup, which reduces the overhead of opening and closing connections.
2. Faster schema cache building. This is done by fetching all the required data in a single SQL statement.
Examples
1) `pytest --hge-urls "http://127.0.0.1:8080" --pg-urls "postgresql://admin@127.0.0.1:5432/hge_tests" -vv`
2) `pytest --hge-urls "http://127.0.0.1:8080" "http://127.0.0.1:8081" --pg-urls "postgresql://admin@127.0.0.1:5432/hge_tests" "postgresql://admin@127.0.0.1:5432/hge_tests2" -vv`
### Solution and Design
#### Reducing execution time of tests
- Schema setup and teardown, which were earlier done per test method, usually take around 1 second.
- For mutations, the model has now been changed to do schema setup and teardown only once per test class.
- Data setup and teardown is instead done once per test (this usually takes ~10ms).
- To get this behaviour, a test class can extend the class `DefaultTestMutations` (see the sketch after this list).
  - A function `dir()` should be defined which returns the location of the configuration folder.
  - Inside the configuration folder, there should be
    - files `<conf_dir>/schema_setup.yaml` and `<conf_dir>/schema_teardown.yaml`, which contain the metadata queries executed during schema setup and teardown respectively
    - files `<conf_dir>/values_setup.yaml` and `<conf_dir>/values_teardown.yaml`, which are executed to set up and remove data from the tables respectively
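A sketch of what such a test class might look like; `DefaultTestMutations` and `dir()` come from the description above, while the class name, directory, and the import location of the base class are hypothetical:
```python
# test_graphql_mutations.py -- illustrative only.
from conftest import DefaultTestMutations  # assumed location of the base class

class TestInsertAuthors(DefaultTestMutations):
    # Schema setup/teardown run once for this class; values_setup.yaml and
    # values_teardown.yaml from the directory below run around each test.
    @classmethod
    def dir(cls):
        return 'queries/graphql_mutation/insert/authors'

    def test_insert_author(self, hge_ctx):
        ...
```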
#### Running GraphQL queries on both HTTP and WebSockets
- Each GraphQL query/mutation is run over both the HTTP and WebSocket protocols (see the sketch after this list)
- Pytest test parametrisation is used to achieve this
- The errors over WebSockets are slightly different from those over HTTP
- The code takes care of converting HTTP errors into their WebSocket equivalents
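A minimal sketch of the parametrisation, assuming a `transport` fixture consumed by the query helper; fixture, class, and test names are illustrative:
```python
# conftest.py -- every test that takes `transport` runs once per protocol.
import pytest

@pytest.fixture(params=['http', 'websocket'])
def transport(request):
    return request.param

# test_graphql_queries.py
class TestGraphQLQueryBasic:
    def test_select_authors(self, hge_ctx, transport):
        # The suite's query helper would dispatch on `transport` and normalise
        # error payloads across the two protocols before comparison.
        ...
```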
#### Parallel execution of tests
- The plugin pytest-xdist helps in running tests on parallel workers.
- We are using this plugin to group tests by file and run them on different workers.
- Parallel test worker processes operate on separate Postgres databases (and separate graphql-engines connected to these databases). Thus tests on one worker will not affect the tests on another worker.
- With two workers, this roughly halves execution time, as the event trigger tests usually take a long time but do not consume much CPU.
1. The Haskell library `pg-client-hs` has been updated to expose a function that helps listen to `postgres` notifications over a `channel` in this [PR](https://github.com/hasura/pg-client-hs/pull/5)
2. The server records an event in the table `hdb_catalog.hdb_cache_update_event` whenever any `/v1/query` request that changes metadata is made. A trigger notifies a `cache update` event via the `hasura_cache_update` channel
3. The server runs two concurrent threads, `listener` and `processor`. The `listener` thread listens for events on the `hasura_cache_update` channel and pushes them into a `Queue`. The `processor` thread fetches events from that `Queue` and processes them; the server thus rebuilds the schema cache from the database and updates it (see the sketch below)
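The server implements this in Haskell via `pg-client-hs`, but the underlying Postgres LISTEN/NOTIFY pattern on the `hasura_cache_update` channel can be sketched in Python with psycopg2 for illustration (connection details are made up):
```python
# Illustration only: listener half of the LISTEN/NOTIFY pattern described above.
import select
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=hge_tests user=admin host=127.0.0.1")
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

cur = conn.cursor()
cur.execute("LISTEN hasura_cache_update;")

while True:
    # Block until the connection has something to read, then drain notifications.
    if select.select([conn], [], [], 60) == ([], [], []):
        continue  # timeout: no notification in the last 60 seconds
    conn.poll()
    while conn.notifies:
        notify = conn.notifies.pop(0)
        # A real 'processor' would rebuild the schema cache here.
        print("cache update event:", notify.payload)
```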
CircleCI jobs are run for any PR that is submitted to the repo. This PR adds a check to decide whether the job should be run or not.
Figured out that CircleCI has a way to gracefully terminate a job:
```
circleci-agent step halt
```
The `.ciignore` file is run against all the changes in the PR to decide whether the PR should be built or not. If the answer comes out as `no`, a file is written at `/build/skip_job.txt`. This is done in the `check_build_worthiness` step.
All further jobs look for this file at the beginning and gracefully terminate if it is present. The directory is passed down to the jobs as the workspace.
```yaml
skip_job_on_ciignore: &skip_job_on_ciignore
  run: |
    if [ -f /build/skip_job.txt ]; then
      echo "halting job due to /build/skip_job.txt"
      circleci-agent step halt
    fi
```
ref: https://support.circleci.com/hc/en-us/articles/360015562253-Conditionally-end-a-running-job-gracefully
There are some known issues with the jobs that run when a PR is merged to master; they need to be addressed after this PR is merged.
Rename the admin secret key header used to access GraphQL engine from X-Hasura-Access-Key to X-Hasura-Admin-Secret.
The server, CLI, and console all support the older flag but mark it as deprecated.
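For illustration, a request using the new header name (the endpoint and metadata query are just examples; the deprecated X-Hasura-Access-Key header is still accepted):
```python
# Illustrative only: call the metadata API with the renamed admin secret header.
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/query",
    json={"type": "export_metadata", "args": {}},
    # Previously: {"X-Hasura-Access-Key": "<secret>"} (still works, but deprecated)
    headers={"X-Hasura-Admin-Secret": "<admin-secret>"},
)
print(resp.status_code, resp.json())
```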
* 1) Tests for creating permissions
2) Test for constraint_on with GraphQL insert on_conflict
* Run tests with access key and webhook
* Tests for GraphQL query with quoted columns
* Rewrite test-server.sh so that it can be run locally
* JWT based tests
* Tests with various postgres types
* For tests on select queries, run setup only once per class
* Tests for v1 count queries
* Skip teardown for tests that do not modify data
* Workaround for hpc 'parse error when reading .tix file'
* Move GeoJson tests to the new structure
* Basic tests for v1 queries
* Tests for column, table or operator not found error cases on GraphQL queries
* Skip test teardown for mutation tests which do not change database state, even when they return 200.
* testing console tests in the ci
* console: making cypress wait for the server to start
* console: fixing failing tests
* console: update failing test
* console: cleaned up modify tests
* console: fixed a failing test for api-explorer
* server: basic test setup
* server: use the default transaction mode
* server: basic tests in yaml files
* server: restructure test setup and some more tests