- Move MonadBase/MonadBaseControl instances for TxE into pg-client-hs
- Set the -qn2 RTS option by default to limit the parallel GC to 2
threads
- Remove eventlog instrumentation
- Don’t rebuild the schema cache again after running a query that needs
it to be rebuilt, since we do that explicitly now.
- Remove some redundant checks, and relocate a couple others.
These aren't suitable for running in CI, for example, since some take far too
long (and an impossibly long time when running under criterion's normal
bootstrapping sampling regime).
We might try to improve this ourselves:
https://github.com/bos/criterion/issues/218
An initial summary analysis will be in #3530.
* export metadata without nulls, empty arrays
* property tests for 'ReplaceMetadata' using QuickCheck
-> Derive Arbitrary instances for 'ReplaceMetadata' dependent types
* reduce the number of property test cases to 30
QuickCheck generates very large `ReplaceMetadata` values for higher
test-case counts. The encoded JSON for such values is large and
consumes more memory, so CI gives up while running the property
tests.
* circle-ci: Add property tests as separate job
* add no command mode to tests
* add yaml.v2 to go mod
* remove indirect comment for yaml.v2 dependency
The connection handler in the websocket transport was not using the
'UserAuthentication' interface to resolve user info. Fix the websocket
transport to resolve user info through the common
'UserAuthentication' interface.
* save permissions, relationships and collections in catalog with 'is_system_defined'
* Use common stanzas in the .cabal file
* Refactor migration code into lib instead of exe
* Add new server test suite that exercises migrations
* Make graphql-engine clean succeed even if the schema does not exist
* Separate DB and metadata migrations
* Refactor Migrate.hs to generate list of migrations at compile-time
* Replace ginger with shakespeare to improve performance
* Improve migration log messages
Although brotli itself is MIT-licensed, the Haskell brotli library that provides bindings to it is GPL-licensed, so we cannot use it unless we get a response on haskell-hvr/brotli#1.
This fixes an issue where queries could incorrectly be considered
reusable if a variable was used in two positions: one where it affected
SQL generation and one where it did not.
* initial raster support
* _st_intersects_geom -> _st_intersects_geom_nband
* add tests
* update docs
* improve docs
As requested by @marionschleifer
* new type for raster values
Suggested by @lexi-lambda
* replace `SEUnsafe "NULL"` with SENull
These changes also add a new type, PGColumnType, between PGColInfo and
PGScalarType, and they process PGRawColumnType values into PGColumnType
values during schema cache generation.
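A rough sketch of the new layering, with placeholder definitions standing in for the real server types (the constructor and function names below are illustrative, not the actual definitions):
```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)

-- Stand-in for the existing scalar type (lives in the server code base).
data PGScalarType = PGInteger | PGText | PGOther Text

-- Hypothetical raw form, as read straight from the catalog.
newtype PGRawColumnType = PGRawColumnType Text

-- The new intermediate type carried by PGColInfo: a resolved column type
-- that still exposes the underlying scalar.
newtype PGColumnType = PGColumnScalar PGScalarType

-- Resolution happens once, during schema cache generation.
resolveColumnType :: PGRawColumnType -> PGColumnType
resolveColumnType (PGRawColumnType t) = PGColumnScalar $ case t of
  "integer" -> PGInteger
  "text"    -> PGText
  other     -> PGOther other
```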
* Listens for SIGTERM as the termination signal
* Stops accepting new connections once the signal is received
* Waits for all connections to be drained, before shutting down
* Forcefully kills all pending connections after 30 seconds
Currently this does not send a close message to websocket clients; I'd
like to submit that change as a separate pull request, but at least this
solves my biggest concern, which is not getting confirmation for mutations
while restarting the server.
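A minimal sketch of that sequence, assuming a hypothetical `drainConnections` action (the real server wires this into Warp rather than a bare `main`):
```haskell
import Control.Concurrent (threadDelay)
import Control.Concurrent.Async (race_)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import System.Posix.Signals (Handler (Catch), installHandler, sigTERM)

main :: IO ()
main = do
  shutdown <- newEmptyMVar
  -- listen for SIGTERM as the termination signal
  _ <- installHandler sigTERM (Catch $ putMVar shutdown ()) Nothing
  -- ... start the server; stop accepting new connections once `shutdown` fires ...
  takeMVar shutdown
  -- wait for in-flight connections to drain, but force shutdown after 30 seconds
  race_ drainConnections (threadDelay (30 * 1000000))
  where
    drainConnections :: IO ()
    drainConnections = pure () -- placeholder for waiting on open connections
```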
* allow altering type of a column iff session vars are defined in permissions
* use a sum type to define dependency reason
* set jwt expiry test's expiry time to 4 seconds
* derive Data instance for necessary types to simplify 'hasStaticExp'
This seems to resolve the issue locally (and has worked in the past),
but it's not clear what exactly is going on here (in particular, why
this should resolve what looks like a memory leak). It certainly seems
like a GHC issue of some sort.
Closes #2565
Query templates are a little-known feature that lets you template RQL
queries and serve them as REST APIs. This is no longer relevant
given the GraphQL interface, and getting rid of it reduces the dev
time when adding features in a few subsystems.
This feature has never been used outside Hasura's internal projects,
documented, or exposed through the console, and hence can safely be removed.
This PR builds console static assets into the server docker image at `/srv/console-assets`. When env var `HASURA_GRAPHQL_CONSOLE_ASSETS_DIR=/srv/console-assets` or flag `--console-assets-dir=/srv/console-assets` is set on the server, the files in this directory are served at `/console/assets/*`.
When this flag is set, the console HTML template will have a variable `cdnAssets: false`, and the console loads assets from the server itself instead of the CDN.
The assets are moved to a new bucket with a new naming scheme:
```
graphql-engine-cdn.hasura.io/console/assets/
/common/{}
/versioned/<version>/{}
/channel/<channel>/<version>/{}
```
The console served by the CLI will still load assets from the CDN; this will be fixed in the next release.
1. Reuses postgres connections during startup, which reduces the overhead of opening and closing connections.
2. Faster schema cache building. This is done by fetching all the required data in a single sql statement.
* split stm transactions when snapshotting to make it faster
* mx subs: push to both old and new sinks at the same time
* expose dev APIs through allowed APIs flag
* add types to represent unparsed http gql requests
This will help when we add caching of frequently used ASTs
* query plan caching
* move livequery to execute
* add multiplexed module
* session variable can be customised depending on the context
Previously the value was always "current_setting('hasura.user')"
* get rid of typemap requirement in reusable plan
* subscriptions are multiplexed when possible
* use lazytx for introspection to avoid acquiring a pg connection
* refactor to make execute a completely decoupled module
* don't issue a transaction for a query
* don't use current setting for explained sql
* move postgres related types to a different module
* validate variableValues on postgres before multiplexing subs
* don't use current_setting for queries over ws
* plan_cache is only visible when developer flag is enabled
* introduce 'batch size' when multiplexing subscriptions
* bump stackage to 13.16
* fix schema_stitching test case error code
* store hashes instead of actual responses for subscriptions
* internal api to dump subscriptions state
* remove PlanCache from SchemaCacheRef
* allow live query options to be configured on server startup
* capture metrics for multiplexed subscriptions
* more metrics captured for multiplexed subs
* switch to tvar based hashmap for faster snapshotting
* livequery modules do not expose internal details
* fix typo in live query env vars
* switch to hasura's pg-client-hs
1. The Haskell library `pg-client-hs` has been updated to expose a function that helps listen to `postgres` notifications over a `channel` in this [PR](https://github.com/hasura/pg-client-hs/pull/5)
2. The server records an event in a table `hdb_catalog.hdb_cache_update_event` whenever any `/v1/query` (that changes metadata) is requested. A trigger notifies a `cache update` event via the `hasura_cache_update` channel
3. The server runs two concurrent threads, namely `listener` and `processor`. The `listener` thread listens to events on the `hasura_cache_update` channel and pushes them into a `Queue`. The `processor` thread fetches events from that `Queue` and processes them, so the server rebuilds the schema cache from the database and updates it.
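A simplified sketch of the two threads; `waitForCacheUpdateEvent` and `rebuildSchemaCache` below are hypothetical placeholders for the pg-client-hs listener and the cache-rebuild logic:
```haskell
import Control.Concurrent.Async (concurrently_)
import Control.Concurrent.STM (atomically)
import Control.Concurrent.STM.TQueue (TQueue, newTQueueIO, readTQueue, writeTQueue)
import Control.Monad (forever)

data CacheUpdateEvent = CacheUpdateEvent

startSchemaSync :: IO ()
startSchemaSync = do
  queue <- newTQueueIO
  concurrently_ (listener queue) (processor queue)
  where
    listener, processor :: TQueue CacheUpdateEvent -> IO ()
    -- listener: wait on the hasura_cache_update channel and enqueue events
    listener queue = forever $ do
      ev <- waitForCacheUpdateEvent
      atomically $ writeTQueue queue ev
    -- processor: dequeue events and rebuild the schema cache from the database
    processor queue = forever $ do
      _ev <- atomically $ readTQueue queue
      rebuildSchemaCache

    waitForCacheUpdateEvent :: IO CacheUpdateEvent
    waitForCacheUpdateEvent = pure CacheUpdateEvent -- placeholder

    rebuildSchemaCache :: IO ()
    rebuildSchemaCache = pure () -- placeholder
```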
If the returning field contains nested selections, the mutation is performed in two steps
1. The mutation is performed, returning the columns of any primary key and unique constraints
2. The returning fields are queried over the affected rows by selecting from the table, filtering with the column values returned in step 1.
Since the mutation takes two courses based on whether relations are selected in the returning field, it is hard to maintain the sequence of prepared arguments (PrepArg) generated while resolving the returning field. So we're using txtConverter instead of prepare to resolve mutation fields.
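A hypothetical illustration of the two-step flow (all names below are placeholders, not the server's real resolvers; the SQL in the comments is only indicative):
```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)

-- values of the primary-key / unique-constraint columns for one affected row
type KeyRow = [(Text, Text)]

executeMutationWithNestedReturning :: IO Text
executeMutationWithNestedReturning = do
  -- Step 1: perform the mutation, returning only the key columns,
  -- e.g.  INSERT INTO author (name) VALUES ('jane') RETURNING id
  keyRows <- runMutationReturningKeys "INSERT ... RETURNING id"
  -- Step 2: query the requested returning fields (including relationships)
  -- over exactly those rows, filtering on the key values from step 1,
  -- e.g.  SELECT json_agg(...) FROM author WHERE id IN (...)
  runSelectOnRows "SELECT ... WHERE id IN (...)" keyRows

runMutationReturningKeys :: Text -> IO [KeyRow]
runMutationReturningKeys _sql = pure [] -- placeholder for step 1

runSelectOnRows :: Text -> [KeyRow] -> IO Text
runSelectOnRows _sql _rows = pure "{}" -- placeholder for step 2
```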
* console now works on local builds of the server
1. local console assets can be served at /static/ via the build-time flag
'local-console'. This can be set with stack as follows:
`stack build --flag graphql-engine:local-console`
2. the --root-dir option, which was used as a temporary hack for serving
graphiql, is removed
3. the server's graphiql source code is removed
* remove phase one/two distinction and hdbquery typeclass
* move extensions to default-extensions
* switch to LazyTx which only acquires a connection if needed
* move defns from TH module into Ops module
* remove tojson orphan instance for http exception
* remove orphan instance for dmlp1
* getTopLevelNodes will not throw any exceptions
When using self-referential relationships in boolean expressions, the exists clause incorrectly uses the table name to qualify columns, which is the same for the parent table and the child table. This is now fixed by generating unique aliases as we traverse down the relationships.
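A minimal illustration of the idea (not the actual generator in the codebase): threading a counter while walking down the relationships yields a distinct alias per level, even when the parent and child are the same table:
```haskell
{-# LANGUAGE OverloadedStrings #-}
import Control.Monad.State (State, evalState, state)
import Data.Text (Text)
import qualified Data.Text as T

-- produce a fresh alias for each nested relationship we descend into
freshAlias :: Text -> State Int Text
freshAlias table = state $ \n -> (table <> "__" <> T.pack (show n), n + 1)

-- e.g. evalState (mapM freshAlias ["employee", "employee"]) 0
--   == ["employee__0", "employee__1"]
```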
The JWT config now takes an optional jwk_url parameter, which points to a published JWK Set. This is useful for providers who rotate their JWK Set.
The published JWK Set under that URL should be in the standard JWK format (tools.ietf.org/html/rfc7517#section-4.8).
If the response contains an Expires header, the JWK Set is automatically refreshed.
The API:
1. HGE has the `--jwt-secret` flag or the `HASURA_GRAPHQL_JWT_SECRET` env var, whose value is JSON.
2. The structure of this JSON is: `{"type": "<standard-JWT-algorithm>", "key": "<the-key>"}`
`type`: standard JWT algorithms: `HS256`, `RS256`, `RS512`, etc. (see jwt.io).
`key`:
i. In case of a symmetric key, the key as it is.
ii. In case of asymmetric keys, only the public key, as a PEM-encoded string or an X509 certificate.
3. The claims in the JWT token must contain the following:
i. `x-hasura-default-role` field: default role of that user
ii. `x-hasura-allowed-roles`: a list of allowed roles for the user. The default role is overridden by the `x-hasura-role` header.
4. The claims in the JWT token can have other `x-hasura-*` fields, whose values can only be strings.
5. The JWT tokens are sent as `Authorization: Bearer <token>` headers.
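For reference, the accepted configuration can be modelled roughly like this (the field names are illustrative; the JSON keys above are the authoritative format):
```haskell
import Data.Text (Text)

-- {"type": "HS256", "key": "<shared secret>"}
-- {"type": "RS256", "key": "<PEM-encoded public key or X509 certificate>"}
-- {"type": "RS256", "jwk_url": "https://.../jwks.json"}
data JWTConfig = JWTConfig
  { jcType   :: Text        -- HS256, RS256, RS512, ...
  , jcKey    :: Maybe Text  -- the shared secret or public key
  , jcJwkUrl :: Maybe Text  -- alternatively, the URL of a published JWK Set
  }
```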
---
To test:
1. Generate a shared secret (for HMAC-SHA256) or RSA key pair.
2. Go to https://jwt.io/ and add the keys.
3. Edit the claims to have `x-hasura-role` (mandatory) and other `x-hasura-*` fields. Add permissions related to the claims to test permissions.
4. Start HGE with `--jwt-secret` flag or `HASURA_GRAPHQL_JWT_SECRET` env var, which takes a JSON string: `{"type": "HS256", "key": "mylongsharedsecret"}` or `{"type":"RS256", "key": "<PEM-encoded-public-key>"}`
5. Copy the JWT token from jwt.io and use it in the `Authorization: Bearer <token>` header.
---
TODO: Support EC public keys. It is blocked on frasertweedale/hs-jose#61
* filter schema identifiers to conform to graphql naming scheme, closes #134
Filter out tables, columns, relationships, etc. which do not conform to
the GraphQL naming scheme.
This ensures GraphiQL initialisation works properly for existing
databases.
* rename `isGraphQLConform` to `isValidName`
* rename all graphQL validators
* server: basic test setup
* server: use the default transaction mode
* server: basic tests in yaml files
* server: restructure test setup and some more tests