This claws back ~7min from integration tests (run serially, as with `dev.sh test --integration`).
Further improvements would do well to focus on optimizing metadata operations, as `setup` dominates.
GitOrigin-RevId: 76637d6fa953c2404627c4391447a05bf09355fa
Multi-source support had limited the availability of async action queries in subscriptions. This PR
adds support for async action query subscriptions with a new implementation. Also addresses https://github.com/hasura/graphql-engine/issues/6460.
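For illustration, such a subscription looks like the following (`place_order` is a hypothetical async action; the `id`/`created_at`/`output`/`errors` shape follows the fields generated for async actions, and the `output` subfields are assumptions):
```graphql
subscription {
  # "place_order" is a hypothetical async action; "id" is the action
  # invocation id returned by the mutation that kicked it off
  place_order(id: "05d4d3f5-...") {
    id
    created_at
    output {
      order_id
      status
    }
    errors
  }
}
```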
GitOrigin-RevId: 5ddc321073d224f287dc4b86ce2239ff55190b36
Previously, invalid REST endpoints would throw errors during the schema cache build.
This PR changes the validation to instead add to the inconsistent metadata objects, in order to allow use of `allow_inconsistent_metadata` with inconsistent REST endpoints.
All non-fatal endpoint definition errors are returned as inconsistent metadata warnings/errors, depending on the use of `allow_inconsistent_metadata`. The endpoints with issues are still created, and return informational runtime errors when they are called.
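For example (an illustrative sketch; the exact payload shape may differ, and the `metadata` body is elided), metadata containing a broken endpoint can now be applied without the whole operation failing:
```json
{
  "type": "replace_metadata",
  "args": {
    "allow_inconsistent_metadata": true,
    "metadata": {}
  }
}
```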
The console impact when creating endpoints is that error messages now refer to metadata inconsistencies rather than the REST feature at the top level:
![image](https://user-images.githubusercontent.com/92299/109911843-ede9ec00-7cfe-11eb-9c55-7cf924d662a6.png)
<img width="969" alt="image" src="https://user-images.githubusercontent.com/92299/110258597-8336fa00-7ff7-11eb-872c-bfca945aa0e8.png">
Note: Conflicting endpoints generate one error per conflicting set of endpoints, due to the implementation of `groupInconsistentMetadataById` and `imObjectIds`. This keeps error messages terse, but may cause problems if assumptions are made elsewhere about `imObjectIds`.
Related to https://github.com/hasura/graphql-engine-mono/pull/473 (Allow Inconsistent Metadata (v2) #473 (Merged))
---
### Kodiak commit message
Changes the validation to use inconsistent metadata objects for REST endpoint issues.
#### Commit title
Inconsistent metadata for REST endpoints
GitOrigin-RevId: b9de971208e9bb0a319c57df8dace44cb115ff66
Fixes #3868.
Docker image: `hasura/graphql-engine:inherited-roles-preview-48b73a2de`
Note:
To be able to use the inherited roles feature, the graphql-engine should be started with the env variable `HASURA_GRAPHQL_EXPERIMENTAL_FEATURES` set to `inherited_roles`.
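For example (an illustrative `docker run` invocation; the database URL is a placeholder):
```
docker run -p 8080:8080 \
  -e HASURA_GRAPHQL_DATABASE_URL=postgres://... \
  -e HASURA_GRAPHQL_EXPERIMENTAL_FEATURES=inherited_roles \
  hasura/graphql-engine:inherited-roles-preview-48b73a2de
```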
Introduction
------------
This PR implements the idea of multiple roles as presented in this [paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/FGALanguageICDE07.pdf). The multiple roles feature in this PR can be used via inherited roles. An inherited role is a role which can be created by combining multiple singular roles. For example, if there are two roles `author` and `editor` configured in the graphql-engine, then we can create an inherited role named `combined_author_editor` which combines the select permissions of the `author` and `editor` roles; GraphQL queries can then be made using the `combined_author_editor` role.
How are the select permissions of different roles combined?
------------------------------------------------------------
A select permission includes 5 things:
1. Columns accessible to the role
2. Row selection filter
3. Limit
4. Allow aggregation
5. Scalar computed fields accessible to the role
Suppose there are two roles: `role1` gives access to the `address` column with row filter `P1`, and `role2` gives access to both the `address` and `phone` columns with row filter `P2`. We create a new role `combined_roles` which combines `role1` and `role2`.
Let's say the following GraphQL query is executed with the `combined_roles` role:
```graphql
query {
  employees {
    address
    phone
  }
}
```
This will translate to the following SQL query:
```sql
select
  (case when (P1 or P2) then address else null end) as address,
  (case when P2 then phone else null end) as phone
from employee
where (P1 or P2)
```
The other parameters of the select permission will be combined in the following manner:
1. Limit - the minimum of the individual limits becomes the limit of the inherited role
2. Allow aggregations - if any of the roles allows aggregation, then the inherited role will allow aggregation
3. Scalar computed fields - same as table column fields, as in the above example
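For illustration of rules 1 and 2: if `role1` sets a row limit of 10 and `role2` a limit of 20, the inherited role's limit is min(10, 20) = 10; and if only `role2` allows aggregations, the inherited role allows them too.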
APIs for inherited roles:
----------------------
1. `add_inherited_role`
`add_inherited_role` is the [metadata API](https://hasura.io/docs/1.0/graphql/core/api-reference/index.html#schema-metadata-api) to create a new inherited role. It accepts two arguments:
`role_name`: the name of the inherited role to be added (String)
`role_set`: the list of roles that need to be combined (Array of Strings)
Example:
```json
{
  "type": "add_inherited_role",
  "args": {
    "role_name": "combined_user",
    "role_set": [
      "user",
      "user1"
    ]
  }
}
```
After adding it, the inherited role can be used just like any other role.
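For example, a request made with the `X-Hasura-Role: combined_user` header (assuming the `employees` table from the example above) resolves against the combined permissions of `user` and `user1`:
```graphql
# sent with header X-Hasura-Role: combined_user
query {
  employees {
    address
    phone
  }
}
```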
Note:
An inherited role can only be created with non-inherited/singular roles.
2. `drop_inherited_role`
The `drop_inherited_role` API accepts the name of the inherited role and drops it from the metadata. It accepts a single argument:
`role_name`: name of the inherited role to be dropped
Example:
```json
{
  "type": "drop_inherited_role",
  "args": {
    "role_name": "combined_user"
  }
}
```
Metadata
---------
The derived roles metadata will be included under the `experimental_features` key while exporting the metadata.
```json
{
  "experimental_features": {
    "derived_roles": [
      {
        "role_name": "manager_is_employee_too",
        "role_set": [
          "employee",
          "manager"
        ]
      }
    ]
  }
}
```
Scope
------
Only postgres queries and subscriptions are supported in this PR.
Important points:
-----------------
1. All columns exposed to an inherited role will be marked as `nullable`; this is done so that cell-value nullification (as in the SQL above) can be performed.
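Continuing the `role1`/`role2` example above: for a row where `P1` holds but `P2` does not, the response would look like this (illustrative values), with `phone` nullified rather than the row being dropped:
```json
{
  "data": {
    "employees": [
      { "address": "12 Main St", "phone": null }
    ]
  }
}
```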
TODOs
-------
- [ ] Tests
- [ ] Test a GraphQL query running with an inherited role without enabling inherited roles in experimental features
- [ ] Tests for aggregate queries, limit, computed fields, functions, subscriptions (?)
- [ ] Introspection test with an inherited role (nullability changes in an inherited role)
- [ ] Docs
- [ ] Changelog
Co-authored-by: Vamshi Surabhi <6562944+0x777@users.noreply.github.com>
GitOrigin-RevId: 3b8ee1e11f5ceca80fe294f8c074d42fbccfec63
Resolves Issues:
* https://github.com/hasura/graphql-engine-mono/issues/658 - Invalid Slashes
* https://github.com/hasura/graphql-engine-mono/issues/628 - Subscriptions
Implementation:
* Moved some logic from Endpoint.hs to allow reuse of the logic that splits a URL into PathSegments.
* Additional validation steps alongside checking for overlapping routes
* Logging potential misuse of GET for mutations
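Some illustrative examples of endpoint locations the added validation now rejects (variable segments use the `:name` syntax):
```
foo//bar     -- contains an empty segment
/foo/bar     -- leading slash
foo/bar/     -- trailing slash
foo/:a/:a    -- duplicate variable segment name
```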
Future Work:
* [ ] GET is allowed for mutations (ignore/log a warning for now)
* [ ] Add information to scInconsistentObjs rather than throwing errors directly
TODO:
* [x] Duplicate variable segments with the same name in the location should not be allowed
* [x] We should throw an error on trailing and leading slashes and URLs which contain empty segments
* [x] Endpoints could be created using subscriptions, but the error only showed at query time
* [x] Tests
---
### Kodiak commit message
Prohibit invalid slashes, duplicate variables, and subscriptions for REST endpoints.
GitOrigin-RevId: 86c0d4af97984c8afd02699e6071e9c1658710b8
Add optimistic concurrency control to the `replace_metadata` call.
Prevents users from submitting out-of-date metadata to metadata-mutating APIs.
See https://github.com/hasura/graphql-engine-mono/issues/472 for details.
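Sketch of the flow (an illustrative payload, assuming a `resource_version` argument; `metadata` body elided): `export_metadata` returns the current metadata together with its version, and `replace_metadata` submits that version back, getting rejected if the metadata has changed on the server in the meantime:
```json
{
  "type": "replace_metadata",
  "args": {
    "resource_version": 3,
    "metadata": {}
  }
}
```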
GitOrigin-RevId: 5f220f347a3eba288a9098b01e9913ffd7e38166
* server: use a leaky bucket algorithm for bytes-per-second cache rate limiting (a sketch of the algorithm follows this list)
* Use evalsha properly
* Adds redis cache limit parameters to PoliciesConfig
* Loads Leaky Bucket Script On Server Start
* Adds more redis logging and moves cache update into lua script
* reverts setex in lua and adds notes
* Refactors cacheStore and adds max TTL and cache size limits
* Filter session vars in cache key
* WIP
* parens
* cache-clear-handler POC implementation
* Pro projectId used as cache key
* POC working!
* prefixing query-response keys in redis
* Add cacheClearer to RedisScripts
* Partial implementation of cacheClearer from scripts record
* updating tests
* [automated] stylish-haskell commit
* Adds query lookup with metrics script
* Adds missing module and lua script from last commit
* Changes redis script module structure to match cache clearing branch
* minor change to lua script
* cleaning up cache clearing
* generalising JsonLog
* [automated] stylish-haskell commit
* Draft Cache Metrics Endpoint
* Adds Cache Metrics Handler
* Adds hook handler module
* Missed HandlerHook module in last commit
* glob
* Fixes redis mget bug
* Removes cache totals and changes dashes to colons in metric cache keys
* Adds query param to cache clear endpoint for deleting specific keys
* Cache Metrics on query families rather than queries
* Replace Set with nub
* Base16 Redis Hashes
* Query Family Redis Keys With Roles
* response headers for cache keys
* fixing bug in family key by excluding operation name; using hash for response header instead of entire key
* Adds query family to redis cache keys and cache clear endpoint
* Fixes queryfamily hash bug
* Moves cache endpoints to /pro
* Moved cache clear to POST
* Refactors cache clear function
* Fixes query family format bug
* Adds query cache tests and optional --redis-url flag to python test suite
* Adds session variable cache test
* Update pro changelog
* adding documentation for additional caching features
* more docs
* clearing up units of leaky bucket params
* Adds comments to leaky bucket script
* removes old todo
* Fixes session variable filtering to work with new query rootfield
* more advanced defaulting behaviour for bucket rate and capacity.
* Updates Docs
* Moves Role into QueryFamily hash
* Use Aeson for Cache Clear endpoint response
* Moves trace to bracket the leaky bucket script
* Misc review tweaks
* Adds sum type for cache clear query params
* Hardcodes RedisReplyLog log level
* Update docs/graphql/cloud/response-caching.rst
Co-authored-by: Phil Freeman <phil@hasura.io>
* new prose for rate limiting docs
* [automated] stylish-haskell commit
* make rootToSessVarPreds total
* [automated] stylish-haskell commit
* Fixes out of scope error
* Renamed _acRedis to _acCacheStore
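For reference, the leaky-bucket metering above works roughly as follows (a standalone Haskell sketch of just the algorithm; in this PR the real implementation is a Redis Lua script loaded at server start, with rate and capacity coming from `PoliciesConfig`):
```haskell
import Data.IORef (IORef, newIORef, readIORef, writeIORef)
import Data.Time.Clock.POSIX (getPOSIXTime)

-- Bucket drains at 'lbRate' bytes/second and holds at most
-- 'lbCapacity' bytes of burst.
data LeakyBucket = LeakyBucket { lbRate :: Double, lbCapacity :: Double }

-- Mutable state: current fill level and time of the last update.
type BucketState = IORef (Double, Double)

newBucketState :: IO BucketState
newBucketState = do
  now <- realToFrac <$> getPOSIXTime
  newIORef (0, now)

-- Admit a response of 'nbytes' if it fits in the bucket, otherwise
-- rate-limit it; served bytes fill the bucket, elapsed time drains it.
allow :: LeakyBucket -> BucketState -> Double -> IO Bool
allow (LeakyBucket rate cap) ref nbytes = do
  now <- realToFrac <$> getPOSIXTime
  (level, lastT) <- readIORef ref
  let drained = max 0 (level - (now - lastT) * rate)
  if drained + nbytes > cap
    then writeIORef ref (drained, now) >> pure False
    else writeIORef ref (drained + nbytes, now) >> pure True
```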
Co-authored-by: Solomon Bothwell <ssbothwell@gmail.com>
Co-authored-by: Lyndon Maydwell <lyndon@sordina.net>
Co-authored-by: David Overton <david@hasura.io>
Co-authored-by: Stylish Haskell Bot <stylish-haskell@users.noreply.github.com>
Co-authored-by: Lyndon Maydwell <lyndon@hasura.io>
GitOrigin-RevId: dda5c1a3f902967b3d78310f950541a55fabb1b0
Fixes https://github.com/hasura/graphql-engine/issues/5704 by checking, for aggregate fields, whether we are handling a numeric aggregation.
This PR also adds type information to `ColFld` so that we know the type of the field.
This is the second attempt. See #319 for a less invasive approach. @nicuveo suggested type information might be useful, and since it wasn't hard to add, I think this version is better as well.
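As an illustration of the distinction (hypothetical table and columns): numeric aggregations like `sum`/`avg` apply only to numeric columns, while comparison aggregations like `max` also apply to non-numeric ones:
```graphql
query {
  employees_aggregate {
    aggregate {
      sum { salary }   # numeric aggregation: needs a numeric column
      avg { salary }   # numeric aggregation
      max { name }     # comparison aggregation: works on text too
    }
  }
}
```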
GitOrigin-RevId: aa6a259fd5debe9466df6302839ddbbd0ea659b5
Earlier (pre catalog separation), the remote schema permissions were in `/v1/query`. This PR moves them to `/v1/metadata`.
GitOrigin-RevId: cb39d9df4cc2288f67231504e3a7909f2f8df4da
Fixes https://github.com/hasura/graphql-engine/issues/6385
In v1.3.4-beta.2, the SQL generated for a query action containing a relationship and configured with permissions parsed the `x-hasura-user-id` session variable value through the `hasura.user` postgres setting, instead of passing the session variables as a JSON object to the query as in v1.3.3.
This PR fixes the SQL generation to match that of v1.3.3.
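Roughly, the difference in the generated SQL looks like this (illustrative fragments only, not the exact generated output):
```sql
-- v1.3.4-beta.2 (broken): value read back from a postgres setting
current_setting('hasura.user')::json ->> 'x-hasura-user-id'

-- v1.3.3 behaviour (restored): session variables passed to the query as
-- a single JSON object parameter, e.g.
-- $1 = '{"x-hasura-role": "user", "x-hasura-user-id": "42"}'
$1::json ->> 'x-hasura-user-id'
```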
Co-authored-by: Rakesh Emmadi <12475069+rakeshkky@users.noreply.github.com>
GitOrigin-RevId: 838ba812f89b51df7fcead81b9d3c2885dfa39b4
Earlier, while creating the event trigger's internal postgres trigger, we used to get the table's name from the `TG_TABLE_NAME` special trigger variable. This works fine with normal tables, but breaks when the parent table is partitioned, because we associate the ET configuration in the schema only with the original table (as it should be).
In this PR, we supply the table name and schema name through template variables instead of using `TG_TABLE_NAME` and `TG_TABLE_SCHEMA`, so that event triggers work with a partitioned table as well.
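A heavily simplified sketch of the idea (not the actual generated trigger, which writes to Hasura's event log rather than using `pg_notify`; table/schema names are illustrative):
```sql
CREATE OR REPLACE FUNCTION notify_hasura_users_insert() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('hasura_events', json_build_object(
    -- previously TG_TABLE_SCHEMA / TG_TABLE_NAME, which report the
    -- partition that fired the trigger rather than the parent table
    'schema', 'public',  -- templated in at trigger-generation time
    'table',  'users',
    'data',   row_to_json(NEW)
  )::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```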
TODO:
- [x] Changelog
- [x] unit test (ET on partition table)
GitOrigin-RevId: 556376881a85525300dcf64da0611ee9ad387eb0