* fix error when a nested insert returns computed fields, fix #3609
* revert using ordered hashmaps, sort columns based on ordinal position
* fix 1. key order 2. json/jsonb column values in nested insert returning
* add a note for sorted columns
* cast 'VALUES' expression as table row type
* use single CTE expression for generating returning for nested inserts
This fixes #3759. While we're at it, also improve the way
invalidations are synced across instances so enums and remote schemas
are appropriately reloaded by the schema syncing process.
* WIP: Remove hdb_views for inserts
* Show failing row in check constraint error
* Revert "Show failing row in check constraint error"
This reverts commit dd2cac29d0.
* Use the better query plan
* Simplify things
* fix cli test
* Update downgrading.rst
* remove 1.1 asset for cli
- Move MonadBase/MonadBaseControl instances for TxE into pg-client-hs
- Set the -qn2 RTS option by default to limit the parallel GC to 2
threads
- Remove eventlog instrumentation
- Don’t rebuild the schema cache again after running a query that needs
it to be rebuilt, since we do that explicitly now.
- Remove some redundant checks, and relocate a couple others.
This changes TableCoreCacheT to internally record dependencies at a
per-table level. In practice, this dramatically improves the performance
of building permissions: it makes it far, far less likely for
permissions to be needlessly rebuilt because some unrelated table
changed.
* export metadata without nulls, empty arrays
* property tests for 'ReplaceMetadata' using QuickCheck
-> Derive Arbitrary class for 'ReplaceMetadata' dependent types
* reduce property test cases number to 30
QuickCheck generates really large `ReplaceMetadata` values for higher
test-case counts. The encoded JSON for such values is large and
consumes more memory, so CI was giving up while running the property
tests.
* circle-ci: Add property tests as separate job
* add no command mode to tests
* add yaml.v2 to go mod
* remove indirect comment for yaml.v2 dependency
Instead of
'WITH some_alias AS (SELECT * FROM some_func()) SELECT <rows> FROM some_alias'
for SQL function queries, use
'SELECT <rows> FROM some_func() AS some_alias'
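For concreteness, here is roughly what the two shapes look like for a hypothetical set-returning function `search_articles(text)` whose rows expose `id` and `title` columns (illustrative SQL only, not the exact generated query):

```sql
-- Old shape: wrap the function call in a CTE and select from the alias.
WITH "articles" AS (SELECT * FROM search_articles('hasura'))
SELECT "id", "title" FROM "articles";

-- New shape: select the requested rows directly from the function call.
SELECT "id", "title" FROM search_articles('hasura') AS "articles";
```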
* save permissions, relationships and collections in catalog with 'is_system_defined'
* Use common stanzas in the .cabal file
* Refactor migration code into lib instead of exe
* Add new server test suite that exercises migrations
* Make graphql-engine clean succeed even if the schema does not exist
* Separate DB and metadata migrations
* Refactor Migrate.hs to generate list of migrations at compile-time
* Replace ginger with shakespeare to improve performance
* Improve migration log messages
* allow customizing GraphQL root field names, close #981
* document v2 track_table API in reference
* support customising column field names in GraphQL schema
* [docs] add custom column fields doc in API reference
* add tests
* rename 'ColField' to 'ColumnField'
* embed column's graphql field in 'PGColumnInfo'
-> Value constructor of 'PGCol' is not exposed
-> Using 'parseJSON' to construct 'PGCol' in 'FromJSON' instances
* avoid using 'Maybe TableConfig'
* refactors & 'custom_column_fields' -> 'custom_column_names'
* cli-test: add configuration field in metadata export test
* update expected keys in `FromJSON` instance of `TableMeta`
* use `buildSchemaCacheFor` to update configuration in v2 track_table
* remove 'GraphQLName' type and use 'isValidName' exposed from parser lib
* point graphql-parser-hs library git repo to hasura
* support 'set_table_custom_fields' query API & added docs and tests
This fixes an issue where queries could incorrectly be considered
reusable if a variable was used in two positions: one where it affected
SQL generation and one where it did not.
* initial raster support
* _st_intersects_geom -> _st_intersects_geom_nband
* add tests
* update docs
* improve docs
As requested by @marionschleifer
* new type for raster values
Suggested by @lexi-lambda
* replace `SEUnsafe "NULL"` with SENull
* use positional arguments in SQL functions
* only allow omitting a set of trailing arguments in functions
* disallow omitting a non-default argument in functions
These changes also add a new type, PGColumnType, between PGColInfo and
PGScalarType, and they process PGRawColumnType values into PGColumnType
values during schema cache generation.
* allow altering type of a column iff session vars are defined in permissions
* use a sum type to define dependency reason
* set jwt expiry test's expiry time to 4 seconds
* derive Data instance for necessary types to simplify 'hasStaticExp'
Query templates are a little-known feature that lets you template rql
queries and serve them as REST APIs. This is not relevant anymore
given the GraphQL interface, and getting rid of it reduces the dev
time when adding features in a few subsystems.
This feature has never been used outside Hasura's internal projects,
documented, or exposed through the console, and hence can safely be removed.
Currently, we allow tracking of a table with the same name as an already tracked function, and vice-versa. This causes an issue when querying from GraphQL since it will only query the table and not the function. I've made changes to disallow this by throwing an error.
Changes compared to `/v1alpha1/graphql`
* Changed all graphql responses in **/v1/graphql** endpoint to be 200. All graphql clients expect responses to be HTTP 200. Non-200 responses are considered transport layer errors.
* Errors in the HTTP and WebSocket layers are now consistent and have a similar structure.
1. Reuses postgres connections during startup which reduces the overhead of opening and closing connections.
2. Faster schema cache building. This is done by fetching all the required data in a single sql statement.
* build schema cache function without db setup
The setup shouldn't happen for sync. The database is already set up by the instance which generated the event. This means that the sync is now faster.
* use SQL loop to drop hdb_views schema views and routines with ordering
This avoids deadlocks when schema is being changed concurrently
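A rough sketch of the shape of such a loop (this is illustrative, not the engine's actual migration SQL; the real migration also orders the drops, while this sketch simply cascades):

```sql
-- Drop every view and routine inside hdb_views one at a time instead of
-- dropping and recreating the whole schema, which avoids the heavyweight
-- schema lock that led to deadlocks under concurrent schema changes.
DO $$
DECLARE
  obj record;
BEGIN
  FOR obj IN
    SELECT quote_ident(schemaname) || '.' || quote_ident(viewname) AS name
      FROM pg_views
     WHERE schemaname = 'hdb_views'
  LOOP
    EXECUTE 'DROP VIEW IF EXISTS ' || obj.name || ' CASCADE';
  END LOOP;

  FOR obj IN
    SELECT oid::regprocedure AS name
      FROM pg_proc
     WHERE pronamespace = 'hdb_views'::regnamespace
  LOOP
    EXECUTE 'DROP FUNCTION IF EXISTS ' || obj.name || ' CASCADE';
  END LOOP;
END $$;
```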
* schema sync now only processes the latest event
This becomes useful when a lot of schema change
events happen while we are still processing an
earlier event.
* add types to represent unparsed http gql requests
This will help when we add caching of frequently used ASTs
* query plan caching
* move livequery to execute
* add multiplexed module
* session variable can be customised depending on the context
Previously the value was always "current_setting('hasura.user')"
* get rid of typemap requirement in reusable plan
* subscriptions are multiplexed when possible
* use lazytx for introspection to avoid acquiring a pg connection
* refactor to make execute a completely decoupled module
* don't issue a transaction for a query
* don't use current setting for explained sql
* move postgres related types to a different module
* validate variableValues on postgres before multiplexing subs
* don't use current_setting for queries over ws
* plan_cache is only visible when developer flag is enabled
* introduce 'batch size' when multiplexing subscriptions
* bump stackage to 13.16
* fix schema_stitching test case error code
* store hashes instead of actual responses for subscriptions
* internal api to dump subscriptions state
* remove PlanCache from SchemaCacheRef
* allow live query options to be configured on server startup
* capture metrics for multiplexed subscriptions
* more metrics captured for multiplexed subs
* switch to tvar based hashmap for faster snapshotting
* livequery modules do not expose internal details
* fix typo in live query env vars
* switch to hasura's pg-client-hs
From `alpha-40` we've been using a `WHERE` clause to fetch the required rows and generate the mutation response. This has a few limitations, like requiring a primary key/unique constraint. It also returns inconsistent data on `delete` mutations, as mentioned in #1794.
Now, we're using a `VALUES (..)` expression (refer [here](https://www.postgresql.org/docs/current/sql-values.html)) to form virtual table rows in SQL and generate the mutation response from them.
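A simplified sketch of the new shape, for a hypothetical `article(id int, title text)` table (names and values are made up; the real generated SQL builds the selected fields from the GraphQL selection set):

```sql
-- The rows affected by the mutation are replayed as a virtual table via
-- VALUES, and the mutation response is selected from that, so no primary
-- key or unique constraint on the table is required.
WITH "mutated_rows"("id", "title") AS (
  VALUES (1, 'first article'), (2, 'second article')
)
SELECT json_agg(row_to_json("mutated_rows")) AS "returning"
  FROM "mutated_rows";
```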
Internal changes:
- Stop using primary key/unique constraint columns:
  - Revert back to `ConstraintName` from `TableConstraint` in the `TableInfo` type
  - Remove the `tcCols` field in the `TableConstraint` type
  - Modify `table_info.sql` and the SQL in the `fetchTableMeta` function
- Add a test case that performs a `delete` mutation and returns relational objects.
If the returning field contains nested selections, the mutation is performed in two steps:
1. The mutation is performed, returning the columns of any primary key and unique constraints.
2. The returning fields are queried on the rows returned in step 1, by selecting from the table and filtering on those column values.
Since the mutation takes two courses depending on whether relations are selected in the returning field, it is hard to maintain the sequence of prepared arguments (`PrepArg`) generated while resolving the returning field. So, we're using `txtConverter` instead of `prepare` to resolve mutation fields.
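Roughly, the two steps look like this for a hypothetical `article` table with primary key `id` (illustrative SQL only):

```sql
-- Step 1: perform the mutation, returning only primary-key / unique columns.
UPDATE article SET title = 'new title' WHERE rating > 3
RETURNING id;

-- Step 2: query the requested returning fields (nested relations are resolved
-- here exactly like an ordinary select) for the rows identified in step 1.
SELECT id, title
  FROM article
 WHERE id IN (1, 2, 3);
```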
Rename the admin secret key header used to access GraphQL engine from X-Hasura-Access-Key to X-Hasura-Admin-Secret.
The server, CLI and console all support the older flag but mark it as deprecated.
* support jsonb and geometry operators on RQL bool exps, close #1408
* add tests for jsonb operators in /v1/query
TODO:
-> add tests for geometry (postgis) operators
* support parsing session variables for st_d_within and has_key ops
-> Add tests for boolExp operators and select permissions
* improve parsing $st_d_within op's json value logic
* remove phase one/two distinction and hdbquery typeclass
* move extensions to default-extensions
* switch to LazyTx which only acquires a connection if needed
* move defns from TH module into Ops module
* remove tojson orphan instance for http exception
* remove orphan instance for dmlp1
* getTopLevelNodes will not throw any exceptions
When using self-referential relationships in boolean expressions, the exists clause incorrectly used the table name to qualify columns, which is the same for the parent table and the child table. This is now fixed by generating unique aliases as we traverse down the relationships.
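As an illustration, for a hypothetical self-referential `employee` table with a `manager` relationship, the generated boolean expression now looks roughly like this (table, column, and alias names are made up):

```sql
-- Before the fix, the inner reference was qualified as "employee"."id",
-- which resolved to the inner (child) row instead of the parent's.
SELECT "id", "name"
  FROM "employee"
 WHERE EXISTS (
         SELECT 1
           FROM "employee" AS "_be_0_employee"
          WHERE "_be_0_employee"."id" = "employee"."manager_id"
       );
```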
* improved nested insert execution logic
* integrate error path, improve executing 'withExp' and improve tests
* add more readable types in '/Resolve/Insert.hs'
* set conflict context just before executing WITH expression
* allow ordering using columns from object relationships, close #463
* validate table fields in nested insert
* add tests
* add docs
* change 'table_order_by' type from enums to ordered map
* remove unwanted code from 'Schema.hs' file
* 'AnnGObject' is now a list of field name and value tuples
* update docs for new order_by type
* use 'InsOrdHashMap' for 'AnnGObj'
* handle empty fields in order_by
* remove '_' prefixes for asc/desc
* fix the changed order_by syntax across the repo
* support for count and aggregations on columns, close #786
* support explain query for aggregations
* '<arr-rel>_agg' in '<table>' type, fix order by for aggregations
* add 'allow_aggregations' key in select permissions
* Add checkbox to toggle count and aggregations on columns on select permission
* align aggregation checkbox with columns div
* improve readability of the generated sql
* alias is needed at the top level aggregation
* throw internal errors for unexpected fields
* rename SelFld to more readable TableAggFld
* rename agg to aggregate
Better SQL generation for select queries (the query plans will be the same, but the SQL is much more readable). This closes some long-standing issues (#6, #121, #278).
This is a breaking change: the bigint and bigserial Postgres types will now be encoded as GraphQL String types, as opposed to Int as in earlier releases.
Input types were already encoded as String.
This is achieved by selecting `bigint` and `bigserial` columns as `text`s in the SQL query: `select "big_id"::text ..` instead of `select "big_id" .. `.
The reason for the change is outlined in #633: JavaScript cannot decode 64-bit integers.
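For example (table and column names are hypothetical):

```sql
-- Selecting the 64-bit column as text keeps its full precision on the wire;
-- the alias keeps the response field name unchanged.
SELECT "big_id"::text AS "big_id", "name"
  FROM "items";
```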
Insert trigger function: if the query affects no rows, return `null`.
The insert trigger function is modified to have
`IF r IS NULL THEN RETURN null; ELSE RETURN r; END IF;` as its return statement.
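A minimal sketch of where that branch sits, assuming a hypothetical `article` table and its `hdb_views` insert trigger function (names and the `ON CONFLICT` clause are illustrative, not the exact generated definition):

```sql
CREATE OR REPLACE FUNCTION hdb_views.insert_article() RETURNS trigger AS $$
DECLARE
  r article%ROWTYPE;
BEGIN
  INSERT INTO article VALUES (NEW.*)
  ON CONFLICT DO NOTHING
  RETURNING * INTO r;
  -- When ON CONFLICT DO NOTHING suppresses the insert, RETURNING yields no
  -- row and r stays NULL, so return NULL rather than a row of NULLs.
  IF r IS NULL THEN
    RETURN NULL;
  ELSE
    RETURN r;
  END IF;
END
$$ LANGUAGE plpgsql;
```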
Fix migration logic to accommodate non-superuser permissions. Closes #567
- [x] Server
By clearing the `hdb_views` schema of existing views and functions instead of dropping and creating it again.
- [x] Bug fix (non-breaking change which fixes an issue)
### Description
What component does this PR affect?
- [x] Server
### Related Issue
#494
### Solution and Design
Use `quote_ident()` SQL function over `constraint_name` in insert trigger function definition.
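A small illustration of what `quote_ident()` does to identifiers before they are spliced into the generated trigger function body (the example constraint name is hypothetical):

```sql
-- quote_ident() double-quotes an identifier only when it needs it, so a
-- mixed-case or otherwise unusual constraint name remains a valid identifier
-- in the generated SQL.
SELECT quote_ident('Article_pkey');   -- "Article_pkey"
SELECT quote_ident('article_pkey');   -- article_pkey
```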
### Type
- [x] Bug fix (non-breaking change which fixes an issue)
* fix primary key changing on upsert, fix #342
* add 'update_columns' in 'on_conflict' object, consider 'allowUpsert'
* 'ConflictCtx' type should respect upsert cases
* validation for not null fields in an object
The API:
1. HGE has `--jwt-secret` flag or `HASURA_GRAPHQL_JWT_SECRET` env var. The value of which is a JSON.
2. The structure of this JSON is: `{"type": "<standard-JWT-algorithms>", "key": "<the-key>"}`
`type` : Standard JWT algos : `HS256`, `RS256`, `RS512` etc. (see jwt.io).
`key`:
i. In case of a symmetric key, the key as it is.
ii. In case of asymmetric keys, only the public key, as a PEM-encoded string or as an X509 certificate.
3. The claims in the JWT token must contain the following:
i. `x-hasura-default-role` field: default role of that user
ii. `x-hasura-allowed-roles` : A list of allowed roles for the user. The default role is overridden by the `x-hasura-role` header.
4. The claims in the JWT token, can have other `x-hasura-*` fields where their values can only be strings.
5. The JWT tokens are sent as `Authorization: Bearer <token>` headers.
---
To test:
1. Generate a shared secret (for HMAC-SHA256) or RSA key pair.
2. Go to https://jwt.io/ , add the keys
3. Edit the claims to have `x-hasura-role` (mandatory) and other `x-hasura-*` fields. Add permissions related to the claims to test permissions.
4. Start HGE with `--jwt-secret` flag or `HASURA_GRAPHQL_JWT_SECRET` env var, which takes a JSON string: `{"type": "HS256", "key": "mylongsharedsecret"}` or `{"type":"RS256", "key": "<PEM-encoded-public-key>"}`
5. Copy the JWT token from jwt.io and use it in the `Authorization: Bearer <token>` header.
---
TODO: Support EC public keys. It is blocked on frasertweedale/hs-jose#61
* don't allow fkey-based relations from/to a table that isn't tracked, fix #185
Check if the remote table exists in metadata when creating a foreign-key
based object relationship.
* add tests for adding object relation using fkey if remote table is untracked
* add 'on_conflict' condition to allow upsert mutation, closes #105
* check for empty unique or primary key constraints
* add 'on_conflict' condition test cases and introspection test case
* update 'conflict_action' enum values' description