* Save permissions, relationships and collections in the catalog with 'is_system_defined'
* Use common stanzas in the .cabal file
* Refactor migration code into lib instead of exe
* Add new server test suite that exercises migrations
* Make graphql-engine clean succeed even if the schema does not exist
This change adds the `-z` option to the `nc` command used when waiting for
a port to be ready. In this mode `nc` only checks that the port is listening
and exits with a status reflecting whether the connection succeeded, so we
report connection status correctly.
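A minimal sketch of such a wait loop, assuming a placeholder host, port, and retry count (none of these values are taken from the actual script):

```bash
#!/usr/bin/env bash
# Poll until the port accepts connections, then exit 0; exit 1 on timeout.
# HOST, PORT and the 30-attempt budget are illustrative placeholders.
HOST=localhost
PORT=8080

for _ in $(seq 1 30); do
  # -z: check that the port is listening without sending any data;
  # nc exits 0 only if the connection succeeds.
  if nc -z "$HOST" "$PORT"; then
    echo "port $PORT on $HOST is ready"
    exit 0
  fi
  sleep 1
done

echo "timed out waiting for $HOST:$PORT" >&2
exit 1
```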
These are assets forked from https://github.com/tibbe/ekg at revision fe5e5a1e67.
Only monitor.js was modified slightly, to take a local port from the URL
fragment.
I think we should feel free to continue modifying this, or replace it with
our own tooling later on.
At the moment we can...
...run tests in isolation, generating a coverage report:
$ dev.sh test
You can pass args to pytest as well, e.g. to run a specific test:
$ dev.sh test -k "test_jsonb_has_all"
...launch a postgres container with useful dev defaults and PostGIS, cleaning
up afterwards:
$ dev.sh postgres
...build and launch graphql-engine in dev mode, connecting to the postgres
container launched above:
$ dev.sh graphql-engine
This PR builds console static assets into the server docker image at `/srv/console-assets`. When env var `HASURA_GRAPHQL_CONSOLE_ASSETS_DIR=/srv/console-assets` or flag `--console-assets-dir=/srv/console-assets` is set on the server, the files in this directory are served at `/console/assets/*`.
When this flag is set, the console HTML template is rendered with a variable `cdnAssets: false`, and the console loads assets from the server itself instead of from the CDN.
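Purely for illustration, serving the bundled assets could look like the following; the image tag and connection string are placeholders, and only the env var and flag names above come from this PR:

```bash
# Illustrative only: image tag and database URL are placeholders.
docker run -p 8080:8080 \
  -e HASURA_GRAPHQL_DATABASE_URL='postgres://user:pass@host:5432/dbname' \
  -e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
  -e HASURA_GRAPHQL_CONSOLE_ASSETS_DIR=/srv/console-assets \
  hasura/graphql-engine:latest

# Equivalent flag form when running the binary directly
# (assumes the database URL is provided via HASURA_GRAPHQL_DATABASE_URL):
graphql-engine serve --enable-console --console-assets-dir=/srv/console-assets
```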
The assets are moved to a new bucket with a new naming scheme:
```
graphql-engine-cdn.hasura.io/console/assets/
/common/{}
/versioned/<version>/{}
/channel/<channel>/<version>/{}
```
The console served by the CLI will still load assets from the CDN; this will be fixed in the next release.
### Description
This PR adds a bash script and a serverless function to clean up the output of `pg_dump` so that it can be used as a migration file for Hasura. This can later be integrated with the CLI so that the cleanup is handled by the CLI.
### Affected components
- Scripts
### Related Issues
#1861
### Solution and Design
- A serverless function written in Go receives the SQL content through an HTTP POST request.
- A set of pre-defined lines is removed from this SQL string.
- SQL comments are removed using regex matching.
- Postgres triggers created by Hasura for use with event triggers are removed with regex matching.
- Empty lines are removed by regex matching.
- The resulting string is returned in the HTTP response (a rough shell sketch of these steps is shown below).
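The function itself is written in Go; purely as an illustration of the same cleanup steps, a rough bash/sed equivalent might look like this. The specific patterns (the `SET`/`set_config` preamble lines and the `notify_hasura_` name prefix) are assumptions, not taken from the actual implementation:

```bash
#!/usr/bin/env bash
# Rough approximation of the cleanup steps; the Go serverless function is authoritative.
# Usage: ./clean_pg_dump.sh dump.sql > cleaned.sql
set -euo pipefail

# 1. Drop pre-defined preamble lines that pg_dump emits (SET ..., set_config ...).
# 2. Strip single-line SQL comments ("-- ...").
# 3. Drop lines referencing Hasura-generated event-trigger objects
#    (the notify_hasura_ name prefix is an assumption).
# 4. Remove empty lines.
sed -E \
  -e '/^SET /d' \
  -e '/^SELECT pg_catalog\.set_config/d' \
  -e '/^--/d' \
  -e '/notify_hasura_/d' \
  -e '/^[[:space:]]*$/d' \
  "$1"
```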
### Steps to test and verify
```bash
curl --data-binary @filename.sql https://hasura-edit-pg-dump.now.sh > newfile.sql
```
### Limitations, known bugs & workarounds
NA