* export metadata without nulls and empty arrays
* property tests for 'ReplaceMetadata' using QuickCheck
-> Derive the Arbitrary class for 'ReplaceMetadata' dependent types
* reduce the number of property test cases to 30
QuickCheck generates really large `ReplaceMetadata` values for higher test
case counts. The encoded JSON for such values is large and consumes more
memory, so CI gives up while running the property tests.
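A minimal sketch of capping the case count (the real tests run a round-trip property against `ReplaceMetadata` using the Arbitrary instances added here; a stand-in type keeps the snippet self-contained):

```haskell
import Data.Aeson (FromJSON, ToJSON, decode, encode)
import Test.QuickCheck

-- Encoding then decoding a value should give the value back.
jsonRoundTrip :: (Eq a, FromJSON a, ToJSON a) => a -> Bool
jsonRoundTrip x = decode (encode x) == Just x

main :: IO ()
main =
  -- limit QuickCheck to 30 generated cases instead of the default 100
  quickCheckWith stdArgs { maxSuccess = 30 } (jsonRoundTrip :: [Int] -> Bool)
```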
* circle-ci: Add property tests as separate job
* add no command mode to tests
* add yaml.v2 to go mod
* remove indirect comment for yaml.v2 dependency
The connection handler in the websocket transport was not using the
'UserAuthentication' interface to resolve user info. Fix the websocket
transport so that it resolves user info through the common
'UserAuthentication' interface.
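Roughly the shape of such a shared interface, simplified for illustration (the engine's actual class carries more context, e.g. a logger, an HTTP manager and the configured auth mode):

```haskell
import Data.Text (Text)

-- Simplified stand-ins for the server's session types.
newtype RoleName = RoleName Text
data UserInfo = UserInfo { uiRole :: RoleName, uiSessionVars :: [(Text, Text)] }

-- Both the HTTP and the websocket transports resolve the caller through this
-- one class instead of each carrying its own auth logic.
class Monad m => UserAuthentication m where
  -- request headers (or websocket connection params) -> user info or error
  resolveUserInfo :: [(Text, Text)] -> m (Either Text UserInfo)
```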
For SQL function queries, instead of
'WITH some_alias AS (SELECT * FROM some_func()) SELECT <rows> FROM some_alias',
use
'SELECT <rows> FROM some_func() AS some_alias'
This fix is a little ugly, but it’s the only simple solution without a
significant refactoring that restructures the relationship between
GraphQL/Validate and GraphQL/Resolve. The ugliness should go away if we
implement something like #2801.
* Separate DB and metadata migrations
* Refactor Migrate.hs to generate list of migrations at compile-time
* Replace ginger with shakespeare to improve performance
* Improve migration log messages
* allow customizing GraphQL root field names, close #981
* document v2 track_table API in reference
* support customising column field names in GraphQL schema
* [docs] add custom column fields doc in API reference
* add tests
* rename 'ColField' to 'ColumnField'
* embed column's graphql field in 'PGColumnInfo'
-> Value constructor of 'PGCol' is not exposed
-> Using 'parseJSON' to construct 'PGCol' in 'FromJSON' instances
* avoid using 'Maybe TableConfig'
* refactors & 'custom_column_fields' -> 'custom_column_names'
* cli-test: add configuration field in metadata export test
* update expected keys in `FromJSON` instance of `TableMeta`
* use `buildSchemaCacheFor` to update configuration in v2 track_table
* remove 'GraphQLName' type and use 'isValidName' exposed from parser lib
* point graphql-parser-hs library git repo to hasura
* support 'set_table_custom_fields' query API & added docs and tests
This fixes an issue where queries could incorrectly be considered
reusable if a variable was used in two positions: one where it affected
SQL generation and one where it did not.
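An illustrative model of the rule behind the fix (the names are made up, not the engine's types): a plan is reusable only if every occurrence of every variable is a plain parameter, i.e. none of its occurrences influence the generated SQL text.

```haskell
import qualified Data.Map.Strict as Map

data VariableUsage
  = AffectsSQLText -- e.g. the value selects a column set, an enum case, a limit
  | ParameterOnly  -- the value is only ever passed as a query parameter
  deriving (Eq)

-- All usages of all variables must be ParameterOnly; a single SQL-affecting
-- occurrence (even if the same variable also appears as a plain parameter)
-- makes the plan non-reusable.
isReusable :: Map.Map String [VariableUsage] -> Bool
isReusable = all (all (== ParameterOnly))
```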
* initial raster support
* _st_intersects_geom -> _st_intersects_geom_nband
* add tests
* update docs
* improve docs
As requested by @marionschleifer
* new type for raster values
Suggested by @lexi-lambda
* replace `SEUnsafe "NULL"` with SENull
* use positional arguments in SQL functions
* only allow omitting set of last arguments in functions
* disallow omitting of a non default argument in functions
These changes also add a new type, PGColumnType, between PGColInfo and
PGScalarType, and they process PGRawColumnType values into PGColumnType
values during schema cache generation.
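An illustrative sketch of that layering (constructors and fields simplified, not the engine's exact definitions): column info now carries a resolved PGColumnType rather than the raw scalar type, and the resolution from catalog data happens while the schema cache is built.

```haskell
-- Abridged stand-in for the real scalar type.
data PGScalarType = PGInteger | PGText | PGBoolean

-- The new layer between column info and the raw scalar type; the real type
-- has further constructors for non-scalar cases.
newtype PGColumnType = PGColumnScalar PGScalarType

data PGColInfo = PGColInfo
  { pgiName     :: String
  , pgiType     :: PGColumnType   -- previously a bare PGScalarType
  , pgiNullable :: Bool
  }
```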
This mostly simplifies the RootFlds type to make it clearer what it’s
used for, but it has the convenient side-effect of preventing some
“impossible” cases using the type system.
Changes compared to `/v1alpha1/graphql`
* Changed all GraphQL responses on the **/v1/graphql** endpoint to be HTTP 200. All GraphQL clients expect responses to be HTTP 200; non-200 responses are considered transport layer errors.
* Errors in the HTTP and websocket layers are now consistent and have a similar structure.
* split stm transactions when snapshotting to make it faster
* mx subs: push to both old and new sinks at the same time
* expose dev APIs through allowed APIs flag
* add types to represent unparsed http gql requests
This will help when we add caching of frequently used ASTs
* query plan caching
* move livequery to execute
* add multiplexed module
* session variable can be customised depending on the context
Previously the value was always "current_setting('hasura.user')"
* get rid of typemap requirement in reusable plan
* subscriptions are multiplexed when possible
* use lazytx for introspection to avoid acquiring a pg connection
* refactor to make execute a completely decoupled module
* don't issue a transaction for a query
* don't use current setting for explained sql
* move postgres related types to a different module
* validate variableValues on postgres before multiplexing subs
* don't use current_setting for queries over ws
* plan_cache is only visible when developer flag is enabled
* introduce 'batch size' when multiplexing subscriptions
* bump stackage to 13.16
* fix schema_stitching test case error code
* store hashes instead of actual responses for subscriptions
* internal api to dump subscriptions state
* remove PlanCache from SchemaCacheRef
* allow live query options to be configured on server startup
* capture metrics for multiplexed subscriptions
* more metrics captured for multiplexed subs
* switch to tvar based hashmap for faster snapshotting
* livequery modules do not expose internal details
* fix typo in live query env vars
* switch to hasura's pg-client-hs
From `alpha-40` we've been using a `WHERE` clause to fetch the required rows and generate the mutation response. This has a few limitations, like requiring a primary key/unique constraint. It also returns inconsistent data on `delete` mutations, as mentioned in #1794.
Now, we're using a `VALUES (..)` expression (refer [here](https://www.postgresql.org/docs/current/sql-values.html)) to form virtual table rows in SQL and generate the mutation response from them.
Internal changes:
- Stop using primary key/unique constraint columns:
  - Revert back to `ConstraintName` from `TableConstraint` in the `TableInfo` type
  - Remove the `tcCols` field from the `TableConstraint` type
  - Modify `table_info.sql` and the `fetchTableMeta` function's SQL
- Add a test case that performs a `delete` mutation and returns relational objects.
If the returning field contains nested selections, the mutation is performed in two steps:
1. The mutation is performed, returning the columns of any primary key and unique constraints
2. The returning fields are queried on the rows from step 1, by selecting from the table and filtering on those column values.
Since the mutation takes two courses depending on whether relations are selected in the returning field, it is hard to maintain the sequence of prepared arguments (PrepArg) generated while resolving the returning field. So, we're using txtConverter instead of prepare to resolve mutation fields.
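A rough sketch of the `VALUES (..)` idea (the helper name and types are illustrative, not the engine's API): the mutated rows are re-materialised as a virtual table, so no primary key or unique constraint is needed to find them again.

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)
import qualified Data.Text as T

-- Build a select over a virtual table of already-known row values.
mkMutationOutputSQL :: [Text] -> [[Text]] -> Text
mkMutationOutputSQL cols rows =
  "SELECT json_agg(row_to_json(r)) FROM (VALUES "
    <> T.intercalate ", " [ "(" <> T.intercalate ", " row <> ")" | row <- rows ]
    <> ") AS r(" <> T.intercalate ", " cols <> ")"

-- mkMutationOutputSQL ["id", "name"] [["1", "'foo'"], ["2", "'bar'"]]
-- ==> SELECT json_agg(row_to_json(r))
--     FROM (VALUES (1, 'foo'), (2, 'bar')) AS r(id, name)
```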
* read cookie while initialising websocket connection (fix #1660)
* add tests for cookie on websocket init
* fix logic for tests
* enforce CORS, and add a flag to force reading the cookie when CORS is disabled
- as browsers don't enforce SOP on websockets, we enforce CORS policy
on websocket handshake
- if CORS is disabled, the cookie is not read by default (because of the XSS
risk!). Add a special flag to force-override this behaviour
* add log and forward origin header to webhook
- add log notice when cors is disabled, and cookie is not read on
websocket handshake
- forward the origin header to the webhook in POST mode, so that when CORS is
disabled, the webhook can also enforce CORS independently.
* add docs, and forward all client headers to webhook
* support jsonb and geometry operators on RQL bool exps, close #1408
* add tests for jsonb operators in /v1/query
TODO:
-> add tests for geometry (PostGIS) operators
* support parsing session variables for st_d_within and has_key ops
-> Add tests for boolExp operators and select permissions
* improve parsing $st_d_within op's json value logic
* remove phase one/two distinction and hdbquery typeclass
* move extensions to default-extensions
* switch to LazyTx which only acquires a connection if needed
* move defns from TH module into Ops module
* remove tojson orphan instance for http exception
* remove orphan instance for dmlp1
* getTopLevelNodes will not throw any exceptions
When using self-referential relationships in boolean expressions, the exists clause incorrectly uses table names to qualify columns, which are the same for the parent table and the child table. This is now fixed by generating unique aliases as we traverse down the relationships.
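An illustrative way of generating those unique qualifiers while walking down the relationship (the naming scheme is made up, not the engine's):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)
import qualified Data.Text as T

-- Qualify each relationship level with a depth-tagged alias instead of the
-- (identical) table name.
mkBoolExpAlias :: Int -> Text -> Text
mkBoolExpAlias depth table = "_be_" <> T.pack (show depth) <> "_" <> table

-- e.g. the outer "author" table becomes _be_0_author and the related "author"
-- inside the EXISTS sub-query becomes _be_1_author, so column references no
-- longer collide.
```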
* improved nested insert execution logic
* integrate error path, improve executing 'withExp' and improve tests
* add more readable types in '/Resolve/Insert.hs'
* set conflict context just before executing WITH expression
* allow ordering using columns from object relationships, close #463
* validate table fields in nested insert
* add tests
* add docs
* change 'table_order_by' type from enums to ordered map
* remove unwanted code from 'Schema.hs' file
* 'AnnGObject' is now a list of (field name, value) tuples
* update docs for new order_by type
* use 'InsOrdHashMap' for 'AnnGObj'
* handle empty fields in order_by
* remove '_' prefixes for asc/desc
* fix the changed order_by syntax across the repo
* support for count and aggregations on columns, close#786
* support explain query for aggregations
* '<arr-rel>_agg' in '<table>' type, fix order by for aggregations
* add 'allow_aggregations' key in select permissions
* Add checkbox to toggle count and aggregations on columns on select permission
* align aggregation checkbox with columns div
* improve readability of the generated sql
* alias is needed at the top level aggregation
* throw internal errors for unexpected fields
* rename SelFld to more readable TableAggFld
* rename agg to aggregate
Better SQL generation for select queries (the query plans will be the same, but the SQL is much more readable). This closes some long-standing issues (#6, #121, #278).
JWT config now takes an optional jwk_url parameter (which points to a published JWK Set). This is useful for providers who rotate their JWK Set.
The published JWK Set under that URL should be in the standard JWK format (tools.ietf.org/html/rfc7517#section-4.8).
If the response contains an Expires header, the JWK Set is automatically refreshed.
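A minimal sketch of the refresh trigger, assuming the JWK Set is fetched with http-client (the surrounding refresh loop and fetching plumbing are omitted): parse the Expires header of the previous response and re-fetch once that instant has passed.

```haskell
{-# LANGUAGE OverloadedStrings #-}
import qualified Data.ByteString.Char8 as B
import Data.Time (UTCTime)
import Data.Time.Format (defaultTimeLocale, parseTimeM)
import Network.HTTP.Client (Response, responseHeaders)

-- Read the Expires header (an RFC 1123 date) from the JWK Set response, if
-- any; the caller schedules the next refresh for this time.
jwkExpiryTime :: Response body -> Maybe UTCTime
jwkExpiryTime resp = do
  raw <- lookup "Expires" (responseHeaders resp)
  parseTimeM True defaultTimeLocale "%a, %d %b %Y %H:%M:%S GMT" (B.unpack raw)
```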
* fix primary key changing on upsert, fix #342
* add 'update_columns' in 'on_conflict' object, consider 'allowUpsert'
* 'ConflictCtx' type should respect upsert cases
* validation for not null fields in an object
* filter schema identifiers to conform to the GraphQL naming scheme, closes #134
Filter out tables, columns, relationships, etc. which do not conform to the
GraphQL naming scheme.
This ensures GraphiQL initialisation works properly for existing
databases.
* rename `isGraphQLConform` to `isValidName`
* rename all graphQL validators
* add 'on_conflict' condition to allow upsert mutations, closes #105
* check for empty unique or primary key constraints
* add 'on_conflict' condition test cases and introspection test case
* update 'conflict_action' enum values' description