server: support w3c traceparent context

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/10218
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Rob Dominguez <24390149+robertjdominguez@users.noreply.github.com>
GitOrigin-RevId: d3dbea6220fd2127ab76c0a240fc4725ca5d6aac
Toan Nguyen 2023-09-13 20:40:54 +07:00 committed by hasura-bot
parent f44a3870dd
commit f915c7d1a2
38 changed files with 774 additions and 257 deletions


@ -108,6 +108,7 @@ X-Hasura-Role: admin
"otlp_traces_endpoint": "http://localhost:4318/v1/traces",
"otlp_metrics_endpoint": "http://localhost:4318/v1/metrics",
"protocol": "http/protobuf",
"traces_propagators": ["tracecontext"],
"headers": [
{
"name": "x-test-header",


@ -92,13 +92,13 @@ keywords:
## PGConfiguration {#pgconfiguration}
| Key | Required | Schema | Description |
| ------------------- | -------- | --------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------- |
| connection_info | true | [PGSourceConnectionInfo](#pgsourceconnectioninfo) | Connection parameters for the source |
| read_replicas | false | \[[PGSourceConnectionInfo](#pgsourceconnectioninfo)\] | Optional list of read replica configuration *(supported only in cloud/enterprise versions)* |
| extensions_schema | false | String | Name of the schema where the graphql-engine will install database extensions (default: `public`) |
| connection_template | false | [PGConnectionTemplate](#pgconnectiontemplate) | DB connection template *(supported only in cloud/enterprise versions)* |
| connection_set | false | \[[ConnectionSetElementConfig](#connectionsetelementconfig)\] | Connection Set used for DB connection template*(supported only in cloud/enterprise versions)* |
| Key | Required | Schema | Description |
| ------------------- | -------- | ------------------------------------------------------------- | ------------------------------------------------------------------------------------------------ |
| connection_info | true | [PGSourceConnectionInfo](#pgsourceconnectioninfo) | Connection parameters for the source |
| read_replicas | false | \[[PGSourceConnectionInfo](#pgsourceconnectioninfo)\] | Optional list of read replica configuration _(supported only in cloud/enterprise versions)_ |
| extensions_schema | false | String | Name of the schema where the graphql-engine will install database extensions (default: `public`) |
| connection_template | false | [PGConnectionTemplate](#pgconnectiontemplate) | DB connection template _(supported only in cloud/enterprise versions)_ |
| connection_set | false | \[[ConnectionSetElementConfig](#connectionsetelementconfig)\] | Connection Set used for DB connection template _(supported only in cloud/enterprise versions)_ |
## MsSQLConfiguration {#mssqlconfiguration}
@ -130,19 +130,19 @@ keywords:
:::info Note
When `use_prepared_statements` is `true`, all SQL queries compiled from GraphQL queries will be
[prepared](https://www.postgresql.org/docs/current/sql-prepare.html) before being executed, meaning that the
database server will cache queries and query plans.
[prepared](https://www.postgresql.org/docs/current/sql-prepare.html) before being executed, meaning that the database
server will cache queries and query plans.
This can result in an improvement in performance when serving mostly complex queries with little variation. But it's
a trade-off that increases memory usage, and under other circumstances the result is not a net performance gain.
And because the prepared statements cache is local to each database connection, the connection pool parameters also
This can result in an improvement in performance when serving mostly complex queries with little variation. But it's a
trade-off that increases memory usage, and under other circumstances the result is not a net performance gain. And
because the prepared statements cache is local to each database connection, the connection pool parameters also
influence its efficiency.
The only way to reasonably know if enabling prepared statements will increase the performance of a Hasura GraphQL
Engine instance is to benchmark it under a representative query load.
The only way to reasonably know if enabling prepared statements will increase the performance of a Hasura GraphQL Engine
instance is to benchmark it under a representative query load.
This option interacts with the [Query Tags](/observability/query-tags.mdx) feature (see for details),
and the two generally shouldn't be enabled at the same time.
This option interacts with the [Query Tags](/observability/query-tags.mdx) feature (see that page for details), and the
two generally shouldn't be enabled at the same time.
:::
@ -177,7 +177,7 @@ and the two generally shouldn't be enabled at the same time.
| max_connections | false | `Integer` | Maximum number of connections to be kept in the pool (default: 50) |
| total_max_connections | false | `Integer` | Maximum number of total connections to be maintained across any number of Hasura Cloud instances (default: 1000). Takes precedence over `max_connections` in Cloud projects. _(Only available in Hasura Cloud)_ |
| idle_timeout | false | `Integer` | The idle timeout (in seconds) per connection (default: 180) |
| retries | false | `Integer` | Number of retries to perform when failing to acquire connection (default: 1). Note that this configuration does not affect user/statement errors on PG. |
| retries | false | `Integer` | Number of retries to perform when failing to acquire connection (default: 1). Note that this configuration does not affect user/statement errors on PG. |
| pool_timeout | false | `Integer` | Maximum time to wait while acquiring a Postgres connection from the pool, in seconds (default: forever) |
| connection_lifetime | false | `Integer` | Time from connection creation after which the connection should be destroyed and a new one created. A value of 0 indicates we should never destroy an active connection. If 0 is passed, memory from large query results may not be reclaimed. (default: 600 sec) |
@ -205,9 +205,9 @@ This schema indicates that the source uses a connection pool (the default):
This schema indicates that the source does not use a connection pool:
| Key | Required | Schema | Description |
| -------------- | -------- | --------- | ------------------------------------------------------ |
| enable | true | `Bool` | Set to `false` to disable the connection pool entirely |
| Key | Required | Schema | Description |
| ------ | -------- | ------ | ------------------------------------------------------ |
| enable | true | `Bool` | Set to `false` to disable the connection pool entirely |
## PGColumnType {#pgcolumntype}
@ -352,7 +352,7 @@ Configuration properties for particular column, as specified on [ColumnConfig](#
## InputValidationDefinition {#input-validation-definition}
| Key | Required | Schema | Description |
| ----------- | -------- | --------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- |
| ---------------------- | -------- | --------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- |
| url | true | [WebhookURL](#webhookurl) | The input validation's webhook URL |
| headers | false | \[[HeaderFromValue](#headerfromvalue) \| [HeaderFromEnv](#headerfromenv) \] | List of defined headers to be sent to the handler |
| forward_client_headers | false | boolean | If set to `true` the client headers are forwarded to the webhook handler (default: `false`) |
@ -360,15 +360,15 @@ Configuration properties for particular column, as specified on [ColumnConfig](#
## InputValidation {#input-validation}
| Key | Required | Schema | Description |
| ---------------------- | -------- | -------- | -------------------------------------------------------------------- |
| type | true | `String` | The interface for input validation. (Currently only supports "http") |
| definition | true | [InputValidationDefinition](#input-validation-definition) | The definition for the input validation |
| Key | Required | Schema | Description |
| ---------- | -------- | --------------------------------------------------------- | -------------------------------------------------------------------- |
| type | true | `String` | The interface for input validation. (Currently only supports "http") |
| definition | true | [InputValidationDefinition](#input-validation-definition) | The definition for the input validation |
## InsertPermission {#insertpermission}
| Key | Required | Schema | Description |
| ------------ | -------- | -------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| -------------- | -------- | -------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| check | true | [BoolExp](#boolexp) | This expression has to hold true for every new row that is inserted |
| set | false | [ColumnPresetsExp](#columnpresetexp) | Preset values for columns that can be sourced from session variables or static values |
| columns | false | [PGColumn](#pgcolumn) array (or) `'*'` | Can insert into only these columns (or all when `'*'` is specified) |
@ -396,7 +396,7 @@ The `query_root_fields` and the `subscription_root_fields` are only available in
## UpdatePermission {#updatepermission}
| Key | Required | Schema | Description |
| ------------ | -------- | -------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| -------------- | -------- | -------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| columns | true | [PGColumn](#pgcolumn) array (or) `'*'` | Only these columns are selectable (or all when `'*'` is specified) |
| filter | true | [BoolExp](#boolexp) | Only the rows where this precondition holds true are updatable |
| check | false | [BoolExp](#boolexp) | Postcondition which must be satisfied by rows which have been updated |
@ -404,21 +404,20 @@ The `query_root_fields` and the `subscription_root_fields` are only available in
| backend_only | false | Boolean | When set to `true` the mutation is accessible only if the `x-hasura-use-backend-only-permissions` session variable exists and is set to `true` and the request is made with `x-hasura-admin-secret` set if any auth is configured |
| validate_input | false | [InputValidation](#input-validation) | The input validation definition for the update mutation. |
## DeletePermission {#deletepermission}
| Key | Required | Schema | Description |
| ------------ | -------- | ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| filter | true | [BoolExp](#boolexp) | Only the rows where this expression holds true are deletable |
| backend_only | false | Boolean | When set to `true` the mutation is accessible only if the `x-hasura-use-backend-only-permissions` session variable exists and is set to `true` and the request is made with `x-hasura-admin-secret` set if any auth is configured |
| validate_input | false | [InputValidation](#input-validation) | The input validation definition for the delete mutation. |
| Key | Required | Schema | Description |
| -------------- | -------- | ------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| filter | true | [BoolExp](#boolexp) | Only the rows where this expression holds true are deletable |
| backend_only | false | Boolean | When set to `true` the mutation is accessible only if the `x-hasura-use-backend-only-permissions` session variable exists and is set to `true` and the request is made with `x-hasura-admin-secret` set if any auth is configured |
| validate_input | false | [InputValidation](#input-validation) | The input validation definition for the delete mutation. |
## LogicalModelSelectPermission {#logicalmodelselectpermission}
| Key | Required | Schema | Description |
| ------------------------ | -------- | ----------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| columns | true | [PGColumn](#pgcolumn) array (or) `'*'` | Only these columns are selectable (or all when `'*'` is specified) |
| filter | true | [BoolExp](#boolexp) | Only the rows where this expression holds true are selectable |
| Key | Required | Schema | Description |
| ------- | -------- | -------------------------------------- | ------------------------------------------------------------------ |
| columns | true | [PGColumn](#pgcolumn) array (or) `'*'` | Only these columns are selectable (or all when `'*'` is specified) |
| filter | true | [BoolExp](#boolexp) | Only the rows where this expression holds true are selectable |
## ObjRelUsing {#objrelusing}
@ -429,8 +428,7 @@ The `query_root_fields` and the `subscription_root_fields` are only available in
:::info Note
There has to be at least one and only one of `foreign_key_constraint_on`
and `manual_configuration`.
There has to be exactly one of `foreign_key_constraint_on` and `manual_configuration`.
:::
@ -499,9 +497,8 @@ Supported in `v2.0.0-alpha.3` and above.
## InsertOrder {#insertorder}
Describes when should the referenced table row be inserted in relation
to the current table row in case of a nested insert. Defaults to
"before_parent".
Describes when the referenced table row should be inserted in relation to the current table row in the case of a nested
insert. Defaults to "before_parent".
```
"before_parent" | "after_parent"
@ -677,10 +674,9 @@ scheduled | locked | delivered | error | dead
| `"$in"` | `IN` |
| `"$nin"` | `NOT IN` |
(For more details, refer to the Postgres docs for [comparison
operators](https://www.postgresql.org/docs/current/functions-comparison.html)
and [list based search
operators](https://www.postgresql.org/docs/current/functions-comparisons.html).)
(For more details, refer to the Postgres docs for
[comparison operators](https://www.postgresql.org/docs/current/functions-comparison.html) and
[list based search operators](https://www.postgresql.org/docs/current/functions-comparisons.html).)
**Text related operators :**
@ -697,11 +693,10 @@ operators](https://www.postgresql.org/docs/current/functions-comparisons.html).)
| `$nregex` | `!~` |
| `$niregex` | `!~*` |
(For more details on text related operators, refer to the [Postgres
docs](https://www.postgresql.org/docs/current/functions-matching.html).)
(For more details on text related operators, refer to the
[Postgres docs](https://www.postgresql.org/docs/current/functions-matching.html).)
**Operators for comparing columns (all column types except json,
jsonb):**
**Operators for comparing columns (all column types except json, jsonb):**
**Column Comparison Operator**
@ -729,9 +724,8 @@ jsonb):**
</div>
Column comparison operators can be used to compare columns of the same
table or a related table. To compare a column of a table with another
column of :
Column comparison operators can be used to compare columns of the same table or a related table. To compare a column of
a table with another column of:
1. The same table -
@ -790,8 +784,8 @@ column of :
| `"$cgte"` | `>=` |
| `"$clte"` | `<=` |
(For more details on comparison operators, refer to the [Postgres
docs](https://www.postgresql.org/docs/current/functions-comparison.html).)
(For more details on comparison operators, refer to the
[Postgres docs](https://www.postgresql.org/docs/current/functions-comparison.html).)
**Checking for NULL values :**
@ -799,8 +793,8 @@ docs](https://www.postgresql.org/docs/current/functions-comparison.html).)
| --------------------------------------- | --------------------- |
| `_is_null` (takes true/false as values) | `IS NULL` |
(For more details on the `IS NULL` expression, refer to the [Postgres
docs](https://www.postgresql.org/docs/current/functions-comparison.html).)
(For more details on the `IS NULL` expression, refer to the
[Postgres docs](https://www.postgresql.org/docs/current/functions-comparison.html).)
**JSONB operators :**
@ -812,8 +806,8 @@ docs](https://www.postgresql.org/docs/current/functions-comparison.html).)
| `_has_keys_any` | `?!` |
| `_has_keys_all` | `?&` |
(For more details on JSONB operators, refer to the [Postgres
docs](https://www.postgresql.org/docs/current/static/functions-json.html#FUNCTIONS-JSONB-OP-TABLE).)
(For more details on JSONB operators, refer to the
[Postgres docs](https://www.postgresql.org/docs/current/static/functions-json.html#FUNCTIONS-JSONB-OP-TABLE).)
**PostGIS related operators on GEOMETRY columns:**
@ -831,13 +825,11 @@ docs](https://www.postgresql.org/docs/current/static/functions-json.html#FUNCTIO
| `_st_3d_d_within` | `ST_3DDWithin(column, input)` |
(For more details on spatial relationship operators, refer to the
[PostGIS
docs](http://postgis.net/workshops/postgis-intro/spatial_relationships.html).)
[PostGIS docs](http://postgis.net/workshops/postgis-intro/spatial_relationships.html).)
:::info Note
- All operators take a JSON representation of `geometry/geography`
values as input value.
- All operators take a JSON representation of `geometry/geography` values as input value.
- The input value for `_st_d_within` operator is an object:
@ -871,9 +863,8 @@ An empty [JSONObject](https://tools.ietf.org/html/rfc7159)
## ColumnPresetsExp {#columnpresetexp}
A [JSONObject](https://tools.ietf.org/html/rfc7159) of a Postgres column
name to value mapping, where the value can be static or derived from a
session variable.
A [JSONObject](https://tools.ietf.org/html/rfc7159) of a Postgres column name to value mapping, where the value can be
static or derived from a session variable.
```
{
@ -883,8 +874,7 @@ session variable.
}
```
E.g. where `id` is derived from a session variable and `city` is a
static value.
E.g. where `id` is derived from a session variable and `city` is a static value.
```json
{
@ -895,9 +885,8 @@ static value.
:::info Note
If the value of any key begins with "x-hasura-" (_case-insensitive_),
the value of the column specified in the key will be derived from a
session variable of the same name.
If the value of any key begins with "x-hasura-" (_case-insensitive_), the value of the column specified in the key will
be derived from a session variable of the same name.
:::
@ -1011,12 +1000,10 @@ session variable of the same name.
| `suffix` | false | String | Suffix applied to type names in the Remote Schema |
| `mapping` | false | `{String: String}` | Explicit mapping of type names in the Remote Schema Note: explicit mapping takes precedence over `prefix` and `suffix`. |
- Type name prefix and suffix will be applied to all types in the
schema except the root types (for query, mutation and subscription),
types starting with `__`, standard scalar types (`Int`, `Float`,
`String`, `Boolean`, and `ID`), and types with an explicit mapping.
- Root types, types starting with `__`, and standard scalar types may
only be customized with an explicit mapping.
- Type name prefix and suffix will be applied to all types in the schema except the root types (for query, mutation and
subscription), types starting with `__`, standard scalar types (`Int`, `Float`, `String`, `Boolean`, and `ID`), and
types with an explicit mapping.
- Root types, types starting with `__`, and standard scalar types may only be customized with an explicit mapping.
## RemoteFieldCustomization {#remotefieldcustomization}
@ -1027,8 +1014,8 @@ session variable of the same name.
| `suffix` | false | String | Suffix applied to field names in the parent type |
| `mapping` | false | `{String: String}` | Explicit mapping of field names in the parent type Note: explicit mapping takes precedence over `prefix` and `suffix`. |
- Fields that are part of an interface must be renamed consistently
across all object types that implement that interface.
- Fields that are part of an interface must be renamed consistently across all object types that implement that
interface.
## SourceCustomization {#sourcecustomization}
@ -1055,22 +1042,21 @@ session variable of the same name.
:::info Note
Please note that the naming convention feature is an experimental feature for now.
To use this feature, please use the `--experimental-features=naming_convention`
flag or set the `HASURA_GRAPHQL_EXPERIMENTAL_FEATURES` environment variable to
`naming_convention`.
Please note that the naming convention feature is experimental for now. To use it, pass the
`--experimental-features=naming_convention` flag or set the `HASURA_GRAPHQL_EXPERIMENTAL_FEATURES` environment variable
to `naming_convention`.
The naming convention can either be `graphql-default` or `hasura-default` (default).
**The `graphql-default` naming convention is supported only for postgres databases right now.**
Typecase for each of the naming convention is mentioned below:
The naming convention can either be `graphql-default` or `hasura-default` (default). **The `graphql-default` naming
convention is supported only for postgres databases right now.** The type case for each naming convention is listed
below:
| Naming Convention | Field names | Type names | Arguments | Enum values |
| ----------------- | ----------- | ----------- | ---------- | ----------- |
| hasura-default | Snake case | Snake case | Snake case | as defined |
| graphql-default | Camel case | Pascal case | Camel case | Uppercased |
The naming convention can be overridden by `custom_name` in [Table Config](#table-config)
or by setting [Custom Root Fields](#custom-root-fields).
The naming convention can be overridden by `custom_name` in [Table Config](#table-config) or by setting
[Custom Root Fields](#custom-root-fields).
:::
@ -1174,14 +1160,14 @@ or by setting [Custom Root Fields](#custom-root-fields).
:::caution Deprecation
CustomColumnNames is deprecated in favour of using the `custom_name` property on columns in [ColumnConfig](#columnconfig).
If both CustomColumnNames and [ColumnConfig](#columnconfig) is used, any `custom_name ` properties used in
[ColumnConfig](#columnconfig) will take precedence and any overlapped values in `custom_column_names` will be discarded.
CustomColumnNames is deprecated in favour of using the `custom_name` property on columns in
[ColumnConfig](#columnconfig). If both CustomColumnNames and [ColumnConfig](#columnconfig) are used, any `custom_name`
properties used in [ColumnConfig](#columnconfig) will take precedence and any overlapping values in `custom_column_names`
will be discarded.
:::
A [JSONObject](https://tools.ietf.org/html/rfc7159) of Postgres column
name to GraphQL name mapping
A [JSONObject](https://tools.ietf.org/html/rfc7159) of Postgres column name to GraphQL name mapping
```
{
@ -1203,8 +1189,7 @@ name to GraphQL name mapping
## WebhookURL {#webhookurl}
A String value which supports templating environment variables enclosed
in `{{` and `}}`.
A String value which supports templating environment variables enclosed in `{{` and `}}`.
<div className="parsed-literal">
@ -1223,8 +1208,7 @@ Template example: `https://{{ACTION_API_DOMAIN}}/create-user`
| name | true | String | Name of the header |
| value | true | String | Value of the header |
The `value` field supports templating environment variables enclosed in
`{{` and `}}`.
The `value` field supports templating environment variables enclosed in `{{` and `}}`.
Template example: `header-{{HEADER_FROM_ENV}}`
@ -1237,9 +1221,7 @@ Template example: `header-{{HEADER_FROM_ENV}}`
## GraphQLType {#graphqltype}
A GraphQL [Type
Reference](https://spec.graphql.org/June2018/#sec-Type-References)
string.
A GraphQL [Type Reference](https://spec.graphql.org/June2018/#sec-Type-References) string.
<div className="parsed-literal">
@ -1249,13 +1231,11 @@ string.
</div>
Example: `String!` for non-nullable String type and `[String]` for array
of String types
Example: `String!` for non-nullable String type and `[String]` for array of String types
## GraphQLName {#graphqlname}
A string literal that conform to [GraphQL
spec](https://spec.graphql.org/June2018/#Name).
A string literal that conforms to the [GraphQL spec](https://spec.graphql.org/June2018/#Name).
<div className="parsed-literal">
@ -1289,8 +1269,8 @@ spec](https://spec.graphql.org/June2018/#Name).
:::info Note
The `GraphQL Types` used in creating an action must be defined before
via [Custom Types](/api-reference/metadata-api/custom-types.mdx)
The `GraphQL Types` used in creating an action must be defined beforehand via
[Custom Types](/api-reference/metadata-api/custom-types.mdx)
:::
@ -1312,11 +1292,11 @@ via [Custom Types](/api-reference/metadata-api/custom-types.mdx)
## LogicalModelField {#logicalmodelfield}
| Key | Required | Schema | Description |
| ----------- | -------- | ---------------------------------------- | ------------------------------------------------------------------------------------------------|
| name | true | `String` | The name of the Logical Model field |
| type | true | [Logical Model Type](#logicalmodeltype) | A Logical Model field type |
| description | false | `String` | An extended description of the field |
| Key | Required | Schema | Description |
| ----------- | -------- | --------------------------------------- | ------------------------------------ |
| name | true | `String` | The name of the Logical Model field |
| type | true | [Logical Model Type](#logicalmodeltype) | A Logical Model field type |
| description | false | `String` | An extended description of the field |
## LogicalModelType {#logicalmodeltype}
@ -1324,24 +1304,24 @@ A Logical Model type is one of either:
A scalar:
| Key | Required | Schema | Description |
| ----------- | -------- | --------- | ------------------------------------------------------------------------------------------------|
| scalar | true | `String` | The type of the exposed column, according to the underlying data source |
| nullable | false | `Boolean` | True if the field should be exposed over the GraphQL API as a nullable field (default: `false`) |
| Key | Required | Schema | Description |
| -------- | -------- | --------- | ----------------------------------------------------------------------------------------------- |
| scalar | true | `String` | The type of the exposed column, according to the underlying data source |
| nullable | false | `Boolean` | True if the field should be exposed over the GraphQL API as a nullable field (default: `false`) |
An array:
| Key | Required | Schema | Description |
| ----------- | -------- | --------- | ------------------------------------------------------------------------------------------------|
| array | true | [Logical Model Type](#logicalmodeltype) | A Logical Model type, which this denotes an array of |
| nullable | false | `Boolean` | True if the field should be exposed over the GraphQL API as a nullable field (default: `false`) |
| Key | Required | Schema | Description |
| -------- | -------- | --------------------------------------- | ----------------------------------------------------------------------------------------------- |
| array | true | [Logical Model Type](#logicalmodeltype) | A Logical Model type, which this denotes an array of |
| nullable | false | `Boolean` | True if the field should be exposed over the GraphQL API as a nullable field (default: `false`) |
A reference to another logical model:
| Key | Required | Schema | Description |
| ----------- | -------- | --------- | -------------------------------------------------------------------------------------------------------|
| logical_model | true | [Logical Model Type](#logicalmodeltype) | A Logical Model type, which this refers to. Recursive and mutually recursive references are permitted. |
| nullable | false | `Boolean` | True if the field should be exposed over the GraphQL API as a nullable field (default: `false`) |
| Key | Required | Schema | Description |
| ------------- | -------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------ |
| logical_model | true | [Logical Model Type](#logicalmodeltype) | A Logical Model type, which this refers to. Recursive and mutually recursive references are permitted. |
| nullable | false | `Boolean` | True if the field should be exposed over the GraphQL API as a nullable field (default: `false`) |
## NativeQueryArgument {#nativequeryargument}
@ -1354,7 +1334,7 @@ A reference to another logical model:
## NativeQueryRelationship {#nativequeryrelationship}
| Key | Required | Schema | Description |
| --------------- | -------- | ------------------------------------- | ---------------------------------------------------------------- |
| ------------------- | -------- | ------------------------------------- | ---------------------------------------------------------------- |
| remote_native_query | true | `String` | The Native Query to which the relationship has to be established |
| column_mapping | true | Object (local-column : remote-column) | Mapping of columns from current table to remote table |
@ -1386,23 +1366,19 @@ A reference to another logical model:
:::info Note
Currently, only functions which satisfy the following constraints can be
exposed over the GraphQL API (_terminology from_ [Postgres
docs](https://www.postgresql.org/docs/current/sql-createfunction.html)):
Currently, only functions which satisfy the following constraints can be exposed over the GraphQL API (_terminology
from_ [Postgres docs](https://www.postgresql.org/docs/current/sql-createfunction.html)):
- **Function behavior**: `STABLE` or `IMMUTABLE` functions may _only_
be exposed as queries (i.e. with `exposed_as: query`) `VOLATILE`
functions may be exposed as mutations or queries.
- **Return type**: MUST be `SETOF <table-name>` OR `<table_name>`
where `<table-name>` is already tracked
- **Function behavior**: `STABLE` or `IMMUTABLE` functions may _only_ be exposed as queries (i.e. with
`exposed_as: query`). `VOLATILE` functions may be exposed as mutations or queries.
- **Return type**: MUST be `SETOF <table-name>` OR `<table_name>` where `<table-name>` is already tracked
- **Argument modes**: ONLY `IN`
:::
## InputObjectType {#inputobjecttype}
A simple JSON object to define [GraphQL Input
Object](https://spec.graphql.org/June2018/#sec-Input-Objects)
A simple JSON object to define [GraphQL Input Object](https://spec.graphql.org/June2018/#sec-Input-Objects)
| Key | Required | Schema | Description |
| ----------- | -------- | ---------------------------------------------- | ------------------------------------ |
@ -1420,8 +1396,7 @@ Object](https://spec.graphql.org/June2018/#sec-Input-Objects)
## ObjectType {#objecttype}
A simple JSON object to define [GraphQL
Object](https://spec.graphql.org/June2018/#sec-Objects)
A simple JSON object to define [GraphQL Object](https://spec.graphql.org/June2018/#sec-Objects)
| Key | Required | Schema | Description |
| ------------- | -------- | -------------------------------------------------- | ------------------------------------------ |
@ -1449,8 +1424,7 @@ Object](https://spec.graphql.org/June2018/#sec-Objects)
## ScalarType {#scalartype}
A simple JSON object to define [GraphQL
Scalar](https://spec.graphql.org/June2018/#sec-Scalars)
A simple JSON object to define [GraphQL Scalar](https://spec.graphql.org/June2018/#sec-Scalars)
| Key | Required | Schema | Description |
| ----------- | -------- | --------------------------- | ------------------------------ |
@ -1459,8 +1433,7 @@ Scalar](https://spec.graphql.org/June2018/#sec-Scalars)
## EnumType {#enumtype}
A simple JSON object to define [GraphQL
Enum](https://spec.graphql.org/June2018/#sec-Enums)
A simple JSON object to define [GraphQL Enum](https://spec.graphql.org/June2018/#sec-Enums)
| Key | Required | Schema | Description |
| ----------- | -------- | -------------------------------- | ---------------------------- |
@ -1522,7 +1495,8 @@ Enum](https://spec.graphql.org/June2018/#sec-Enums)
:::tip Supported from
Version 2 is supported in `v2.5.0` and above. You must remove any "version 2" schemas from your Metadata prior to downgrading to `v2.4.0` or earlier
Version 2 is supported in `v2.5.0` and above. You must remove any "version 2" schemas from your Metadata prior to
downgrading to `v2.4.0` or earlier.
:::
@ -1548,7 +1522,8 @@ HGE provides the following functions that can be used in the template:
"%3Ffoo%3Dbar%2Fbaz"
```
- `getSessionVariable`: This function takes a string and returns the session variable of the given name. This function can throw the following errors:
- `getSessionVariable`: This function takes a string and returns the session variable of the given name. This function
can throw the following errors:
- Session variable {variable name} not found
- Session variable name should be a string
@ -1671,9 +1646,8 @@ Note: _One_ of and _only one_ of `to_source` and `to_remote_schema` must be pres
}
```
`RemoteField` is a recursive tree structure that points to the field in
the Remote Schema that needs to be joined with. It is recursive because
the remote field maybe nested deeply in the Remote Schema.
`RemoteField` is a recursive tree structure that points to the field in the Remote Schema that needs to be joined with.
It is recursive because the remote field may be nested deeply in the Remote Schema.
Examples:
@ -1759,6 +1733,7 @@ Table columns can be referred by prefixing `$` e.g `$id`.
| sources | true | `'*'` \| [[SourceName]](#sourcename) | Sources for which to update the cleaner status (or all sources when `'*'` is provided) |
## EventTriggerQualifier {#eventtriggerqualifier}
| Key | required | Schema | Description |
| -------------- | -------- | ----------------------------- | ----------------------------------------- |
| event_triggers | true | [[TriggerName]](#triggername) | List of trigger names |
@ -1786,50 +1761,54 @@ Table columns can be referred by prefixing `$` e.g `$id`.
| max_reqs_per_min | true | Integer | Maximum requests per minute to be allowed |
## PGConnectionTemplate {#pgconnectiontemplate}
| Key | required | Schema | Description |
| -------- | -------- | ------ | --------------------------------------------------------- |
| template | true | String | Template for the dynamic DB connection |
| version | false | Int | Version of the template (Possible value is 1, default: 1) |
## ConnectionSetElementConfig {#connectionsetelementconfig}
| Key | required | Schema | Description |
| ----------------- | -------- | ------------------------------------------------- | ------------------------------------ |
| name | true | String | name of the connection |
| connection_info | true | [PGSourceConnectionInfo](#pgsourceconnectioninfo) | Connection parameters for the source |
| Key | required | Schema | Description |
| --------------- | -------- | ------------------------------------------------- | ------------------------------------ |
| name            | true     | String                                            | Name of the connection               |
| connection_info | true | [PGSourceConnectionInfo](#pgsourceconnectioninfo) | Connection parameters for the source |
## RequestContext {#requestcontext}
| Key | required | Schema | Description |
|---------|----------|----------------------------------------------------------------|---------------------------|
| ------- | -------- | -------------------------------------------------------------- | ------------------------- |
| headers | false | Object ([HeaderKey](#headerkey) : [HeaderValue](#headervalue)) | Request header |
| session | false | Object (String : String) | Request session variables |
| query | false | [QueryContext](#queryContext) | Operation details |
## QueryContext {#queryContext}
| Key | required | Schema | Description |
|----------------|----------|---------------------------------|-------------------------------|
| operation_type | true | query | mutation | subscription | Type of the graphql operation |
| operation_name | false | String | Name of the graphql operation |
| Key            | required | Schema                            | Description                   |
| -------------- | -------- | --------------------------------- | ----------------------------- |
| operation_type | true     | query \| mutation \| subscription | Type of the graphql operation |
| operation_name | false    | String                            | Name of the graphql operation |
## Attribute {#attribute}
| Key | required | Schema | Description |
|-------|----------|--------|------------------------|
| ----- | -------- | ------ | ---------------------- |
| name | true | String | Name of the attribute |
| value | true | String | Value of the attribute |
## OTLPExporter {#otlpexporter}
| Key | required | Schema | Description |
|-----------------------|----------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
| otlp_traces_endpoint | true | `String` | OpenTelemetry compliant receiver endpoint URL for traces (usually having path "/v1/traces") |
| otlp_metrics_endpoint | true | `String` | OpenTelemetry compliant receiver endpoint URL for metrics (usually having path "/v1/metrics") |
| protocol | false | `String` | Protocol to be used for the communication with the receiver. Currently only supports `http/protobuf`|
| headers | false | \[[HeaderFromValue](#headerfromvalue) \| [HeaderFromEnv](#headerfromenv) \] | List of defined headers to be sent to the receiver |
| resource_attributes | false | \[[Attribute](#attribute)\] | List of resource attributes to be sent to the receiver |
| Key | required | Schema | Description |
| --------------------- | -------- | --------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| otlp_traces_endpoint | true | `String` | OpenTelemetry compliant receiver endpoint URL for traces (usually having path "/v1/traces") |
| otlp_metrics_endpoint | true | `String` | OpenTelemetry compliant receiver endpoint URL for metrics (usually having path "/v1/metrics") |
| protocol | false | `String` | Protocol to be used for the communication with the receiver. Currently only supports `http/protobuf` |
| headers | false | \[[HeaderFromValue](#headerfromvalue) \| [HeaderFromEnv](#headerfromenv) \] | List of defined headers to be sent to the receiver |
| resource_attributes | false | \[[Attribute](#attribute)\] | List of resource attributes to be sent to the receiver |
| traces_propagators    | false    | \["b3" \| "tracecontext"\]                                                   | List of trace propagators used to exchange context between services and processes (default: `["b3"]`) |
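
Summarizing the table, a hedged TypeScript shape for the exporter object might look as follows; the `value_from_env` field name for [HeaderFromEnv](#headerfromenv) is an assumption about the wire format, not taken from this diff:

```typescript
// Illustrative shape of the OTLPExporter object described above.
type HeaderFromValue = { name: string; value: string };
type HeaderFromEnv = { name: string; value_from_env: string }; // assumed field name
type Attribute = { name: string; value: string };

interface OTLPExporter {
  otlp_traces_endpoint: string;
  otlp_metrics_endpoint: string;
  protocol?: 'http/protobuf'; // the only protocol currently supported
  headers?: Array<HeaderFromValue | HeaderFromEnv>;
  resource_attributes?: Attribute[];
  traces_propagators?: Array<'b3' | 'tracecontext'>; // defaults to ["b3"]
}
```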
## OpenTelemetryBatchSpanProcessor {#opentelemetrybatchspanprocessor}
| Key | required | Schema | Description |
|-----------------------|----------|----------------------------------|---------------------------------------------------------------------------|
| max_export_batch_size | false | `Integer` | Maximum number of spans allowed per export request. Default value is 512 |
| Key | required | Schema | Description |
| --------------------- | -------- | --------- | ------------------------------------------------------------------------ |
| max_export_batch_size | false | `Integer` | Maximum number of spans allowed per export request. Default value is 512 |


@ -157,6 +157,29 @@ currently supported.
Batch size is the maximum number of data points (spans in the context of traces) allowed per export request made to the
observability tool. Default size is 512.
### Trace Propagations
Trace propagation is the mechanism that exchanges context between services and processes. It serializes and
deserializes the context object and carries the relevant trace information from one service to another. GraphQL Engine
supports the following propagation mechanisms:
- [B3 Propagation](https://github.com/openzipkin/b3-propagation)
- [W3C Trace Context](https://www.w3.org/TR/trace-context)
:::info Trace Propagation support
W3C Trace Context is supported for Hasura GraphQL Engine versions `v2.35.0` and above.
:::
B3 propagation is enabled by default. You can enable additional propagators in the `OpenTelemetry Exporter` configuration.
<Thumbnail
src="/img/enterprise/open-telemetry-trace-propagation.png"
alt="OpenTelemetry Trace Propagation Configuration"
width="1000px"
/>
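
To make the W3C format concrete, here is a small TypeScript sketch (not part of this change) that parses and serializes a `traceparent` header as defined by the W3C Trace Context specification:

```typescript
// A `traceparent` header has the form version-traceId-parentId-flags,
// e.g. "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01".
interface TraceParent {
  version: string; // 2 hex chars, currently "00"
  traceId: string; // 32 hex chars, must not be all zeros
  parentId: string; // 16 hex chars, must not be all zeros
  sampled: boolean; // lowest bit of the 2-hex-char flags field
}

function parseTraceparent(header: string): TraceParent | null {
  const m = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(
    header.trim()
  );
  if (!m) return null;
  const [, version, traceId, parentId, flags] = m;
  if (/^0+$/.test(traceId) || /^0+$/.test(parentId)) return null;
  return { version, traceId, parentId, sampled: (parseInt(flags, 16) & 1) === 1 };
}

function formatTraceparent(tp: TraceParent): string {
  return `${tp.version}-${tp.traceId}-${tp.parentId}-${tp.sampled ? '01' : '00'}`;
}
```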
### Headers
Headers are _(optionally)_ added to every request made by Hasura to the observability tool. They are generally

Binary file not shown (new image, 95 KiB).


@ -58,6 +58,7 @@ export const DisabledWithoutLicense: StoryObj<typeof OpenTelemetry> = {
attributes: [],
dataType: ['traces'],
connectionType: 'http/protobuf',
tracesPropagators: [],
},
withoutLicense: true,
@ -91,6 +92,7 @@ export const Enabled: StoryObj<typeof OpenTelemetry> = {
attributes: [],
dataType: ['traces'],
connectionType: 'http/protobuf',
tracesPropagators: [],
},
},
};
@ -156,6 +158,7 @@ const metadataLoadedProps: ComponentPropsWithoutRef<typeof OpenTelemetry> = {
connectionType: 'http/protobuf',
tracesEndpoint: 'http://localhost:1234',
metricsEndpoint: 'http://localhost:1234',
tracesPropagators: ['tracecontext'],
headers: [{ name: 'foo', value: 'bar', type: 'from_value' }],
attributes: [{ name: 'foo', value: 'bar', type: 'from_value' }],
},


@ -2,11 +2,15 @@ import * as React from 'react';
import clsx from 'clsx';
import { Button } from '../../../../../new-components/Button';
import { useConsoleForm, InputField } from '../../../../../new-components/Form';
import {
useConsoleForm,
InputField,
CheckboxesField,
} from '../../../../../new-components/Form';
import { RequestHeadersSelector } from '../../../../../new-components/RequestHeadersSelector';
import type { FormValues } from './schema';
import { formSchema } from './schema';
import { formSchema, tracesPropagatorSchema } from './schema';
import { Toggle } from './components/Toggle';
import { useResetDefaultFormValues } from './hooks/useResetDefaultFormValues';
import { CollapsibleFieldWrapper } from './components/CollapsibleFieldWrapper';
@ -119,6 +123,21 @@ export function Form(props: FormProps) {
clearButton
loading={skeletonMode}
/>
<div>
<CheckboxesField
name="tracesPropagators"
label="Trace Propagations"
orientation="horizontal"
tooltip="The specification that exchanges trace context propagation data between services and processes. The b3 propagation is enabled by default."
learnMoreLink="https://hasura.io/docs/latest/observability/opentelemetry/#trace-propagations"
loading={skeletonMode}
options={tracesPropagatorSchema.options.map(option => ({
label: option,
value: option,
disabled: option === 'b3',
}))}
/>
</div>
<CollapsibleFieldWrapper
inputFieldName="headers"
label="Headers"


@ -7,6 +7,8 @@ const endPointSchema = z.string();
// SCHEMA
// --------------------------------------------------
export const tracesPropagatorSchema = z.enum(['b3', 'tracecontext']);
export const formSchema = z
.object({
// CONNECTION TYPE
@ -39,6 +41,9 @@ export const formSchema = z
// the user will only see one error and understand everything at once.
.min(1, { message: 'The value should be between 1 and 512' })
.max(512, { message: 'The value should be between 1 and 512' }),
// Enable extra trace propagators besides b3
tracesPropagators: tracesPropagatorSchema.array(),
})
// enforce invariant that: when export is enabled globally AND when the
// corresponding data_type is enabled THEN a valid endpoint url is provided.
@ -87,4 +92,5 @@ export const defaultValues: FormValues = {
headers: [],
attributes: [],
tracesPropagators: [],
};
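
A quick usage sketch of the new propagator schema (assuming the same `zod` dependency this file already imports): it accepts only the two known propagator names and rejects anything else.

```typescript
import { z } from 'zod';

const tracesPropagatorSchema = z.enum(['b3', 'tracecontext']);

// Valid: an array of known propagators.
tracesPropagatorSchema.array().parse(['b3', 'tracecontext']);

// Invalid: safeParse reports the unknown value instead of throwing.
const result = tracesPropagatorSchema.array().safeParse(['jaeger']);
console.log(result.success); // false
```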


@ -47,6 +47,7 @@ const mockMetadataHandler = (
protocol: 'http/protobuf',
resource_attributes: [],
otlp_traces_endpoint: '',
traces_propagators: [],
},
data_types: [],
batch_span_processor: {
@ -64,6 +65,7 @@ const mockMetadataHandler = (
protocol: 'http/protobuf',
resource_attributes: [],
otlp_traces_endpoint: '',
traces_propagators: [],
},
data_types: [],
batch_span_processor: {


@ -29,7 +29,7 @@ export function openTelemetryToFormValues(
metricsEndpoint: openTelemetry.exporter_otlp.otlp_metrics_endpoint ?? '',
headers: metadataHeadersToFormHeaders(openTelemetry.exporter_otlp.headers),
batchSize: openTelemetry.batch_span_processor.max_export_batch_size,
tracesPropagators: openTelemetry.exporter_otlp.traces_propagators,
attributes: metadataAttributesToFormAttributes(
openTelemetry.exporter_otlp.resource_attributes
),
@ -49,7 +49,7 @@ export function formValuesToOpenTelemetry(
const otlp_traces_endpoint = formValues.tracesEndpoint;
const otlp_metrics_endpoint = formValues.metricsEndpoint;
const max_export_batch_size = formValues.batchSize;
const traces_propagators = formValues.tracesPropagators;
// At the beginning, only one Connection Type is available
const protocol = 'http/protobuf';
@ -68,6 +68,7 @@ export function formValuesToOpenTelemetry(
headers,
protocol,
resource_attributes,
traces_propagators,
},
batch_span_processor: {


@ -61,6 +61,11 @@ const exporterSchema = z.object({
protocol: protocolSchema,
resource_attributes: z.array(attributeSchema),
/*
* Enable extra trace propagators besides b3
*/
traces_propagators: z.array(z.enum(['b3', 'tracecontext'])),
/**
* The most important parts of the configuration. If OpenTelemetry export is
* enabled globally, AND a specific telemetry type is enabled, then a valid


@ -6477,6 +6477,21 @@
"$ref": "#/components/schemas/OtelNameValue"
},
"type": "array"
},
"traces_propagators": {
"default": [
"b3"
],
"description": "List of propagators to inject and extract traces data from headers.",
"items": {
"description": "Possible trace propagators to use with OTLP",
"enum": [
"b3",
"tracecontext"
],
"type": "string"
},
"type": "array"
}
},
"type": "object"


@ -770,7 +770,11 @@ library
, Hasura.Tracing.Reporter
, Hasura.Tracing.Sampling
, Hasura.Tracing.TraceId
, Hasura.Tracing.TraceState
, Hasura.Tracing.Utils
, Hasura.Tracing.Propagator
, Hasura.Tracing.Propagator.B3
, Hasura.Tracing.Propagator.W3CTraceContext
, Hasura.Server.Auth.WebHook
, Hasura.Server.Middleware
@ -1248,6 +1252,7 @@ test-suite graphql-engine-tests
Hasura.SQL.BackendMapSpec
Hasura.SQL.WKTSpec
Hasura.Tracing.TraceIdSpec
Hasura.Tracing.PropagatorSpec
Network.HTTP.Client.TransformableSpec
Test.Aeson.Expectation
Test.Aeson.Utils


@ -27,7 +27,7 @@ import Hasura.HTTP qualified
import Hasura.Logging (Hasura, Logger)
import Hasura.Prelude
import Hasura.RQL.Types.Common qualified as RQL
import Hasura.Tracing (MonadTrace, MonadTraceContext, traceHTTPRequest)
import Hasura.Tracing (MonadTrace, MonadTraceContext, b3TraceContextPropagator, traceHTTPRequest)
import Network.HTTP.Client.Transformable qualified as HTTP
import Network.HTTP.Types.Status (Status)
import Servant.Client
@ -73,7 +73,7 @@ runRequestAcceptStatus' acceptStatus req = do
HTTP.headers
%= \headers -> maybe headers (\(AgentLicenseKey key) -> ("X-Hasura-License", key) : headers) _accAgentLicenseKey
(tracedReq, responseOrException) <- traceHTTPRequest transformableReq' \tracedReq ->
(tracedReq, responseOrException) <- traceHTTPRequest b3TraceContextPropagator transformableReq' \tracedReq ->
fmap (tracedReq,) . liftIO . try @HTTP.HttpException $ HTTP.httpLbs tracedReq _accHttpManager
logAgentRequest _accLogger tracedReq responseOrException
case responseOrException of
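
`b3TraceContextPropagator` keeps the existing B3 behavior for agent requests. For reference, B3's multi-header scheme carries the context in `X-B3-*` headers; a minimal TypeScript sketch of the injection side (illustrative only; the server does this in Haskell):

```typescript
interface SpanContext {
  traceId: string; // 16 or 32 lowercase hex chars
  spanId: string; // 16 lowercase hex chars
  sampled: boolean;
}

// Multi-header B3 injection, per https://github.com/openzipkin/b3-propagation
function injectB3(ctx: SpanContext, headers: Record<string, string>): void {
  headers['X-B3-TraceId'] = ctx.traceId;
  headers['X-B3-SpanId'] = ctx.spanId;
  headers['X-B3-Sampled'] = ctx.sampled ? '1' : '0';
}
```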


@ -77,6 +77,7 @@ import Hasura.RQL.Types.Permission
import Hasura.RQL.Types.Schema.Options qualified as Options
import Hasura.Server.Utils
import Hasura.Session
import Hasura.Tracing (b3TraceContextPropagator)
import Hasura.Tracing qualified as Tracing
import Language.GraphQL.Draft.Syntax qualified as G
import Network.HTTP.Client.Transformable qualified as HTTP
@ -512,7 +513,7 @@ validateMutation env manager logger userInfo (ResolvedWebhook urlText) confHeade
& Lens.set HTTP.body (HTTP.RequestBodyLBS $ J.encode requestBody)
& Lens.set HTTP.timeout (HTTP.responseTimeoutMicro (unTimeout timeout * 1000000)) -- (default: 10 seconds)
httpResponse <-
Tracing.traceHTTPRequest request $ \request' ->
Tracing.traceHTTPRequest b3TraceContextPropagator request $ \request' ->
liftIO . try $ HTTP.httpLbs request' manager
case httpResponse of


@ -76,6 +76,7 @@ import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.BackendType
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.EventTrigger
import Hasura.RQL.Types.OpenTelemetry (getOtelTracesPropagator)
import Hasura.RQL.Types.SchemaCache
import Hasura.RQL.Types.Source
import Hasura.SQL.AnyBackend qualified as AB
@ -462,7 +463,7 @@ processEventQueue logger statsLogger httpMgr getSchemaCache getEventEngineCtx ac
Tracing.samplingStateFromHeader
$ e
^? JL.key "trace_context" . JL.key "sampling_state" . JL._String
pure $ Tracing.TraceContext traceId freshSpanId parentSpanId samplingState
pure $ Tracing.TraceContext traceId freshSpanId parentSpanId samplingState Tracing.emptyTraceState
processEvent ::
forall io r b.
@ -542,6 +543,7 @@ processEventQueue logger statsLogger httpMgr getSchemaCache getEventEngineCtx ac
$ mkRequest headers httpTimeout payload requestTransform (_envVarValue webhook)
>>= \reqDetails -> do
let request = extractRequest reqDetails
tracesPropagator = getOtelTracesPropagator $ scOpenTelemetryConfig cache
logger' res details = do
logHTTPForET res extraLogCtx details (_envVarName webhook) logHeaders triggersErrorLogLevelStatus
liftIO $ do
@ -571,7 +573,7 @@ processEventQueue logger statsLogger httpMgr getSchemaCache getEventEngineCtx ac
liftIO $ EKG.Gauge.dec $ smNumEventHTTPWorkers serverMetrics
liftIO $ Prometheus.Gauge.dec (eventTriggerHTTPWorkers eventTriggerMetrics)
)
(invokeRequest reqDetails responseTransform (_rdSessionVars reqDetails) logger')
(invokeRequest reqDetails responseTransform (_rdSessionVars reqDetails) logger' tracesPropagator)
pure (request, resp)
case eitherReqRes of
Right (req, resp) -> do
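
The new `Tracing.emptyTraceState` argument threads a W3C trace state through `TraceContext`. For reference, the `tracestate` header is an ordered, comma-separated list of vendor key/value entries; a minimal TypeScript parsing sketch (illustrative, not the server's Haskell implementation):

```typescript
// Parse a W3C `tracestate` header, e.g.
// "congo=t61rcWkgMzE,rojo=00f067aa0ba902b7".
function parseTracestate(header: string): Array<[string, string]> {
  const entries: Array<[string, string]> = [];
  for (const raw of header.split(',')) {
    const entry = raw.trim();
    if (entry === '') continue; // empty list members are allowed
    const i = entry.indexOf('=');
    if (i <= 0 || i === entry.length - 1) continue; // skip malformed entries
    entries.push([entry.slice(0, i), entry.slice(i + 1)]);
  }
  return entries;
}
```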


@ -363,13 +363,14 @@ invokeRequest ::
Maybe Transform.ResponseTransform ->
Maybe SessionVariables ->
((Either (HTTPErr a) (HTTPResp a)) -> RequestDetails -> m ()) ->
HttpPropagator ->
m (HTTPResp a)
invokeRequest reqDetails@RequestDetails {..} respTransform' sessionVars logger = do
invokeRequest reqDetails@RequestDetails {..} respTransform' sessionVars logger tracesPropagator = do
let finalReq = fromMaybe _rdOriginalRequest _rdTransformedRequest
reqBody = fromMaybe J.Null $ preview (HTTP.body . HTTP._RequestBodyLBS) finalReq >>= J.decode @J.Value
manager <- asks getter
-- Perform the HTTP Request
eitherResp <- traceHTTPRequest finalReq $ runHTTP manager
eitherResp <- traceHTTPRequest tracesPropagator finalReq $ runHTTP manager
-- Log the result along with the pre/post transformation Request data
logger eitherResp reqDetails
resp <- eitherResp `onLeft` (throwError . HTTPError reqBody)


@ -156,6 +156,7 @@ import Hasura.RQL.DDL.Webhook.Transform
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.EventTrigger
import Hasura.RQL.Types.Eventing
import Hasura.RQL.Types.OpenTelemetry (getOtelTracesPropagator)
import Hasura.RQL.Types.ScheduledTrigger
import Hasura.RQL.Types.SchemaCache
import Hasura.SQL.Types
@ -247,13 +248,14 @@ processCronEvents ::
) =>
L.Logger L.Hasura ->
HTTP.Manager ->
SchemaCache ->
ScheduledTriggerMetrics ->
[CronEvent] ->
HashMap TriggerName CronTriggerInfo ->
TVar (Set.Set CronEventId) ->
TriggersErrorLogLevelStatus ->
m ()
processCronEvents logger httpMgr scheduledTriggerMetrics cronEvents cronTriggersInfo lockedCronEvents triggersErrorLogLevelStatus = do
processCronEvents logger httpMgr sc scheduledTriggerMetrics cronEvents cronTriggersInfo lockedCronEvents triggersErrorLogLevelStatus = do
-- save the locked cron events that have been fetched from the
-- database, the events stored here will be unlocked in case a
-- graceful shutdown is initiated in midst of processing these events
@ -284,6 +286,7 @@ processCronEvents logger httpMgr scheduledTriggerMetrics cronEvents cronTriggers
runExceptT
$ flip runReaderT (logger, httpMgr)
$ processScheduledEvent
sc
scheduledTriggerMetrics
id'
ctiHeaders
@ -319,6 +322,7 @@ processOneOffScheduledEvents ::
Env.Environment ->
L.Logger L.Hasura ->
HTTP.Manager ->
SchemaCache ->
ScheduledTriggerMetrics ->
[OneOffScheduledEvent] ->
TVar (Set.Set OneOffScheduledEventId) ->
@ -328,6 +332,7 @@ processOneOffScheduledEvents
env
logger
httpMgr
schemaCache
scheduledTriggerMetrics
oneOffEvents
lockedOneOffScheduledEvents
@ -363,7 +368,7 @@ processOneOffScheduledEvents
Right (webhookEnvRecord, eventHeaderInfo) -> do
let processScheduledEventAction =
flip runReaderT (logger, httpMgr)
$ processScheduledEvent scheduledTriggerMetrics _ooseId eventHeaderInfo retryCtx payload webhookEnvRecord OneOff triggersErrorLogLevelStatus
$ processScheduledEvent schemaCache scheduledTriggerMetrics _ooseId eventHeaderInfo retryCtx payload webhookEnvRecord OneOff triggersErrorLogLevelStatus
eventTimeout = unrefine $ strcTimeoutSeconds $ _ooseRetryConf
@ -419,14 +424,15 @@ processScheduledTriggers getEnvHook logger statsLogger httpMgr scheduledTriggerM
return
$ Forever ()
$ const do
cronTriggersInfo <- scCronTriggers <$> liftIO getSC
sc <- liftIO getSC
env <- liftIO getEnvHook
let cronTriggersInfo = scCronTriggers sc
getScheduledEventsForDelivery (HashMap.keys cronTriggersInfo) >>= \case
Left e -> logInternalError e
Right (cronEvents, oneOffEvents) -> do
logFetchedScheduledEventsStats statsLogger (CronEventsCount $ length cronEvents) (OneOffScheduledEventsCount $ length oneOffEvents)
processCronEvents logger httpMgr scheduledTriggerMetrics cronEvents cronTriggersInfo leCronEvents triggersErrorLogLevelStatus
processOneOffScheduledEvents env logger httpMgr scheduledTriggerMetrics oneOffEvents leOneOffEvents triggersErrorLogLevelStatus
processCronEvents logger httpMgr sc scheduledTriggerMetrics cronEvents cronTriggersInfo leCronEvents triggersErrorLogLevelStatus
processOneOffScheduledEvents env logger httpMgr sc scheduledTriggerMetrics oneOffEvents leOneOffEvents triggersErrorLogLevelStatus
-- NOTE: cron events are scheduled at times with minute resolution (as on
-- unix), while one-off events can be set for arbitrary times. The sleep
-- time here determines how overdue a scheduled event (cron or one-off)
@ -444,6 +450,7 @@ processScheduledEvent ::
MonadMetadataStorage m,
MonadError QErr m
) =>
SchemaCache ->
ScheduledTriggerMetrics ->
ScheduledEventId ->
[EventHeaderInfo] ->
@ -453,7 +460,7 @@ processScheduledEvent ::
ScheduledEventType ->
TriggersErrorLogLevelStatus ->
m ()
processScheduledEvent scheduledTriggerMetrics eventId eventHeaders retryCtx payload webhookUrl type' triggersErrorLogLevelStatus =
processScheduledEvent schemaCache scheduledTriggerMetrics eventId eventHeaders retryCtx payload webhookUrl type' triggersErrorLogLevelStatus =
Tracing.newTrace Tracing.sampleAlways traceNote do
currentTime <- liftIO getCurrentTime
let retryConf = _rctxConf retryCtx
@ -476,6 +483,7 @@ processScheduledEvent scheduledTriggerMetrics eventId eventHeaders retryCtx payl
$ mkRequest headers httpTimeout webhookReqBody requestTransform (_envVarValue webhookUrl)
>>= \reqDetails -> do
let request = extractRequest reqDetails
tracesPropagator = getOtelTracesPropagator $ scOpenTelemetryConfig schemaCache
logger e d = do
logHTTPForST e extraLogCtx d (_envVarName webhookUrl) decodedHeaders triggersErrorLogLevelStatus
liftIO $ do
@ -495,7 +503,7 @@ processScheduledEvent scheduledTriggerMetrics eventId eventHeaders retryCtx payl
(OneOff, Left _err) -> Prometheus.Counter.inc (stmOneOffEventsInvocationTotalFailure scheduledTriggerMetrics)
(OneOff, Right _) -> Prometheus.Counter.inc (stmOneOffEventsInvocationTotalSuccess scheduledTriggerMetrics)
sessionVars = _rdSessionVars reqDetails
resp <- invokeRequest reqDetails responseTransform sessionVars logger
resp <- invokeRequest reqDetails responseTransform sessionVars logger tracesPropagator
pure (request, resp)
case eitherReqRes of
Right (req, resp) ->

View File

@ -55,6 +55,7 @@ import Hasura.RQL.Types.Allowlist
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.BackendType
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.OpenTelemetry (getOtelTracesPropagator)
import Hasura.RQL.Types.Roles (adminRoleName)
import Hasura.RQL.Types.SchemaCache
import Hasura.RQL.Types.Subscription
@ -364,6 +365,7 @@ getResolvedExecPlan
maybeOperationName
reqId = do
let gCtx = makeGQLContext userInfo sc queryType
tracesPropagator = getOtelTracesPropagator $ scOpenTelemetryConfig sc
-- Construct the full 'ResolvedExecutionPlan' from the 'queryParts :: SingleOperation'.
(parameterizedQueryHash, resolvedExecPlan) <-
@ -373,6 +375,7 @@ getResolvedExecPlan
EQ.convertQuerySelSet
env
logger
tracesPropagator
prometheusMetrics
gCtx
userInfo
@ -393,6 +396,7 @@ getResolvedExecPlan
EM.convertMutationSelectionSet
env
logger
tracesPropagator
prometheusMetrics
gCtx
sqlGenCtx

View File

@ -74,6 +74,7 @@ import Hasura.RQL.Types.ComputedField
import Hasura.RQL.Types.CustomTypes
import Hasura.RQL.Types.Eventing
import Hasura.RQL.Types.Headers (HeaderConf)
import Hasura.RQL.Types.OpenTelemetry (getOtelTracesPropagator)
import Hasura.RQL.Types.Roles (adminRoleName)
import Hasura.RQL.Types.Schema.Options qualified as Options
import Hasura.RQL.Types.SchemaCache
@ -147,12 +148,13 @@ resolveActionExecution ::
HTTP.Manager ->
Env.Environment ->
L.Logger L.Hasura ->
Tracing.HttpPropagator ->
PrometheusMetrics ->
IR.AnnActionExecution Void ->
ActionExecContext ->
Maybe GQLQueryText ->
ActionExecution
resolveActionExecution httpManager env logger prometheusMetrics IR.AnnActionExecution {..} ActionExecContext {..} gqlQueryText =
resolveActionExecution httpManager env logger tracesPropagator prometheusMetrics IR.AnnActionExecution {..} ActionExecContext {..} gqlQueryText =
ActionExecution $ first (encJFromOrderedValue . makeActionResponseNoRelations _aaeFields _aaeOutputType _aaeOutputFields True) <$> runWebhook
where
handlerPayload = ActionWebhookPayload (ActionContext _aaeName) _aecSessionVariables _aaePayload gqlQueryText
@ -166,6 +168,7 @@ resolveActionExecution httpManager env logger prometheusMetrics IR.AnnActionExec
$ callWebhook
env
httpManager
tracesPropagator
prometheusMetrics
_aaeOutputType
_aaeOutputFields
@ -471,8 +474,10 @@ asyncActionsProcessor getEnvHook logger getSCFromRef' getFetchInterval lockedAct
-- we check for async actions to process.
Skip -> liftIO $ sleep $ seconds 1
Interval sleepTime -> do
actionCache <- scActions <$> liftIO getSCFromRef'
let asyncActions =
schemaCache <- liftIO getSCFromRef'
let actionCache = scActions schemaCache
tracesPropagator = getOtelTracesPropagator $ scOpenTelemetryConfig schemaCache
asyncActions =
HashMap.filter ((== ActionMutation ActionAsynchronous) . (^. aiDefinition . adType)) actionCache
unless (HashMap.null asyncActions) $ do
-- fetch undelivered action events only when there's at least
@ -488,11 +493,11 @@ asyncActionsProcessor getEnvHook logger getSCFromRef' getFetchInterval lockedAct
-- locked action events set TVar is empty, it will mean that there are
-- no events that are in the 'processing' state
saveLockedEvents (map (EventId . actionIdToText . _aliId) asyncInvocations) lockedActionEvents
LA.mapConcurrently_ (callHandler actionCache) asyncInvocations
LA.mapConcurrently_ (callHandler actionCache tracesPropagator) asyncInvocations
liftIO $ sleep $ milliseconds (unrefine sleepTime)
where
callHandler :: ActionCache -> ActionLogItem -> m ()
callHandler actionCache actionLogItem =
callHandler :: ActionCache -> Tracing.HttpPropagator -> ActionLogItem -> m ()
callHandler actionCache tracesPropagator actionLogItem =
Tracing.newTrace Tracing.sampleAlways "async actions processor" do
let ActionLogItem
actionId
@ -521,6 +526,7 @@ asyncActionsProcessor getEnvHook logger getSCFromRef' getFetchInterval lockedAct
$ callWebhook
env
appEnvManager
tracesPropagator
appEnvPrometheusMetrics
outputType
outputFields
@ -549,6 +555,7 @@ callWebhook ::
) =>
Env.Environment ->
HTTP.Manager ->
Tracing.HttpPropagator ->
PrometheusMetrics ->
GraphQLType ->
IR.ActionOutputFields ->
@ -564,6 +571,7 @@ callWebhook ::
callWebhook
env
manager
tracesPropagator
prometheusMetrics
outputType
outputFields
@ -617,7 +625,7 @@ callWebhook
actualSize = fromMaybe requestBodySize transformedReqSize
httpResponse <-
Tracing.traceHTTPRequest actualReq $ \request ->
Tracing.traceHTTPRequest tracesPropagator actualReq $ \request ->
liftIO . try $ HTTP.httpLbs request manager
let requestInfo = ActionRequestInfo webhookEnvName postPayload (confHeaders <> toHeadersConf clientHeaders) transformedReq

View File

@ -50,17 +50,18 @@ convertMutationAction ::
) =>
Env.Environment ->
L.Logger L.Hasura ->
Tracing.HttpPropagator ->
PrometheusMetrics ->
UserInfo ->
HTTP.RequestHeaders ->
Maybe GH.GQLQueryText ->
ActionMutation Void ->
m ActionExecutionPlan
convertMutationAction env logger prometheusMetrics userInfo reqHeaders gqlQueryText action = do
convertMutationAction env logger tracesPropagator prometheusMetrics userInfo reqHeaders gqlQueryText action = do
httpManager <- askHTTPManager
case action of
AMSync s ->
pure $ AEPSync $ resolveActionExecution httpManager env logger prometheusMetrics s actionExecContext gqlQueryText
pure $ AEPSync $ resolveActionExecution httpManager env logger tracesPropagator prometheusMetrics s actionExecContext gqlQueryText
AMAsync s ->
AEPAsyncMutation <$> resolveActionMutationAsync s reqHeaders userSession
where
@ -79,6 +80,7 @@ convertMutationSelectionSet ::
) =>
Env.Environment ->
L.Logger L.Hasura ->
Tracing.HttpPropagator ->
PrometheusMetrics ->
GQLContext ->
SQLGenCtx ->
@ -96,6 +98,7 @@ convertMutationSelectionSet ::
convertMutationSelectionSet
env
logger
tracesPropagator
prometheusMetrics
gqlContext
SQLGenCtx {stringifyNum}
@ -150,7 +153,7 @@ convertMutationSelectionSet
(actionName, _fch) <- pure $ case noRelsDBAST of
AMSync s -> (_aaeName s, _aaeForwardClientHeaders s)
AMAsync s -> (_aamaName s, _aamaForwardClientHeaders s)
plan <- convertMutationAction env logger prometheusMetrics userInfo reqHeaders (Just (GH._grQuery gqlUnparsed)) noRelsDBAST
plan <- convertMutationAction env logger tracesPropagator prometheusMetrics userInfo reqHeaders (Just (GH._grQuery gqlUnparsed)) noRelsDBAST
pure $ ExecStepAction plan (ActionsInfo actionName _fch) remoteJoins -- `_fch` represents the `forward_client_headers` option from the action
-- definition which is currently being ignored for actions that are mutations
RFRaw customFieldVal -> flip onLeft throwError =<< executeIntrospection userInfo customFieldVal introspectionDisabledRoles

View File

@ -71,6 +71,7 @@ convertQuerySelSet ::
) =>
Env.Environment ->
L.Logger L.Hasura ->
Tracing.HttpPropagator ->
PrometheusMetrics ->
GQLContext ->
UserInfo ->
@ -87,6 +88,7 @@ convertQuerySelSet ::
convertQuerySelSet
env
logger
tracingPropagator
prometheusMetrics
gqlContext
userInfo
@ -141,6 +143,7 @@ convertQuerySelSet
httpManager
env
logger
tracingPropagator
prometheusMetrics
s
(ActionExecContext reqHeaders (_uiSession userInfo))

View File

@ -74,8 +74,9 @@ processRemoteJoins ::
EncJSON ->
Maybe RemoteJoins ->
GQLReqUnparsed ->
Tracing.HttpPropagator ->
m EncJSON
processRemoteJoins requestId logger agentLicenseKey env requestHeaders userInfo lhs maybeJoinTree gqlreq =
processRemoteJoins requestId logger agentLicenseKey env requestHeaders userInfo lhs maybeJoinTree gqlreq tracesPropagator =
Tracing.newSpan "Process remote joins" $ forRemoteJoins maybeJoinTree lhs \joinTree -> do
lhsParsed <-
JO.eitherDecode (encJToLBS lhs)
@ -123,7 +124,7 @@ processRemoteJoins requestId logger agentLicenseKey env requestHeaders userInfo
m BL.ByteString
callRemoteServer remoteSchemaInfo request =
fmap (view _3)
$ execRemoteGQ env userInfo requestHeaders remoteSchemaInfo request
$ execRemoteGQ env tracesPropagator userInfo requestHeaders remoteSchemaInfo request
-- | Fold the join tree.
--

View File

@ -63,7 +63,7 @@ fetchRemoteSchema ::
m (IntrospectionResult, BL.ByteString, RemoteSchemaInfo)
fetchRemoteSchema env schemaSampledFeatureFlags rsDef = do
(_, _, rawIntrospectionResult) <-
execRemoteGQ env adminUserInfo [] rsDef introspectionQuery
execRemoteGQ env Tracing.b3TraceContextPropagator adminUserInfo [] rsDef introspectionQuery
(ir, rsi) <- stitchRemoteSchema schemaSampledFeatureFlags rawIntrospectionResult rsDef
-- The 'rawIntrospectionResult' contains the 'Bytestring' response of
-- the introspection result of the remote server. We store this in the
@ -135,6 +135,7 @@ execRemoteGQ ::
ProvidesNetwork m
) =>
Env.Environment ->
Tracing.HttpPropagator ->
UserInfo ->
[HTTP.Header] ->
ValidatedRemoteSchemaDef ->
@ -142,7 +143,7 @@ execRemoteGQ ::
-- | Returns the response body and headers, along with the time taken for the
-- HTTP request to complete
m (DiffTime, [HTTP.Header], BL.ByteString)
execRemoteGQ env userInfo reqHdrs rsdef gqlReq@GQLReq {..} = do
execRemoteGQ env tracesPropagator userInfo reqHdrs rsdef gqlReq@GQLReq {..} = do
let gqlReqUnparsed = renderGQLReqOutgoing gqlReq
when (G._todType _grQuery == G.OperationTypeSubscription)
@ -167,7 +168,7 @@ execRemoteGQ env userInfo reqHdrs rsdef gqlReq@GQLReq {..} = do
& set HTTP.timeout (HTTP.responseTimeoutMicro (timeout * 1000000))
manager <- askHTTPManager
Tracing.traceHTTPRequest req \req' -> do
Tracing.traceHTTPRequest tracesPropagator req \req' -> do
(time, res) <- withElapsedTime $ liftIO $ try $ HTTP.httpLbs req' manager
resp <- onLeft res (throwRemoteSchemaHttp webhookEnvRecord)
pure (time, mkSetCookieHeaders resp, resp ^. Wreq.responseBody)

View File

@ -73,6 +73,7 @@ import Hasura.RQL.IR
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.BackendType
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.OpenTelemetry (getOtelTracesPropagator)
import Hasura.RQL.Types.ResultCustomization
import Hasura.RQL.Types.SchemaCache
import Hasura.RemoteSchema.SchemaCache
@ -371,6 +372,8 @@ runGQ env sqlGenCtx sc enableAL readOnlyMode prometheusMetrics logger agentLicen
forWithKey = flip InsOrdHashMap.traverseWithKey
tracesPropagator = getOtelTracesPropagator $ scOpenTelemetryConfig sc
executePlan ::
GQLReqParsed ->
(m AnnotatedResponse -> m AnnotatedResponse) ->
@ -470,7 +473,7 @@ runGQ env sqlGenCtx sc enableAL readOnlyMode prometheusMetrics logger agentLicen
\(EB.DBStepInfo _ sourceConfig genSql tx resolvedConnectionTemplate :: EB.DBStepInfo b) ->
runDBQuery @b reqId reqUnparsed fieldName userInfo logger agentLicenseKey sourceConfig (fmap (statsToAnyBackend @b) tx) genSql resolvedConnectionTemplate
finalResponse <-
RJ.processRemoteJoins reqId logger agentLicenseKey env reqHeaders userInfo resp remoteJoins reqUnparsed
RJ.processRemoteJoins reqId logger agentLicenseKey env reqHeaders userInfo resp remoteJoins reqUnparsed tracesPropagator
pure $ AnnotatedResponsePart telemTimeIO_DT Telem.Local finalResponse []
E.ExecStepRemote rsi resultCustomizer gqlReq remoteJoins -> do
logQueryLog logger $ QueryLog reqUnparsed Nothing reqId QueryLogKindRemoteSchema
@ -480,7 +483,7 @@ runGQ env sqlGenCtx sc enableAL readOnlyMode prometheusMetrics logger agentLicen
(time, resp) <- doQErr $ do
(time, (resp, _)) <- EA.runActionExecution userInfo aep
finalResponse <-
RJ.processRemoteJoins reqId logger agentLicenseKey env reqHeaders userInfo resp remoteJoins reqUnparsed
RJ.processRemoteJoins reqId logger agentLicenseKey env reqHeaders userInfo resp remoteJoins reqUnparsed tracesPropagator
pure (time, finalResponse)
pure $ AnnotatedResponsePart time Telem.Empty resp []
E.ExecStepRaw json -> do
@ -503,7 +506,7 @@ runGQ env sqlGenCtx sc enableAL readOnlyMode prometheusMetrics logger agentLicen
\(EB.DBStepInfo _ sourceConfig genSql tx resolvedConnectionTemplate :: EB.DBStepInfo b) ->
runDBMutation @b reqId reqUnparsed fieldName userInfo logger agentLicenseKey sourceConfig (fmap EB.arResult tx) genSql resolvedConnectionTemplate
finalResponse <-
RJ.processRemoteJoins reqId logger agentLicenseKey env reqHeaders userInfo resp remoteJoins reqUnparsed
RJ.processRemoteJoins reqId logger agentLicenseKey env reqHeaders userInfo resp remoteJoins reqUnparsed tracesPropagator
pure $ AnnotatedResponsePart telemTimeIO_DT Telem.Local finalResponse responseHeaders
E.ExecStepRemote rsi resultCustomizer gqlReq remoteJoins -> do
logQueryLog logger $ QueryLog reqUnparsed Nothing reqId QueryLogKindRemoteSchema
@ -513,7 +516,7 @@ runGQ env sqlGenCtx sc enableAL readOnlyMode prometheusMetrics logger agentLicen
(time, (resp, hdrs)) <- doQErr $ do
(time, (resp, hdrs)) <- EA.runActionExecution userInfo aep
finalResponse <-
RJ.processRemoteJoins reqId logger agentLicenseKey env reqHeaders userInfo resp remoteJoins reqUnparsed
RJ.processRemoteJoins reqId logger agentLicenseKey env reqHeaders userInfo resp remoteJoins reqUnparsed tracesPropagator
pure (time, (finalResponse, hdrs))
pure $ AnnotatedResponsePart time Telem.Empty resp $ fromMaybe [] hdrs
E.ExecStepRaw json -> do
@ -526,7 +529,7 @@ runGQ env sqlGenCtx sc enableAL readOnlyMode prometheusMetrics logger agentLicen
runRemoteGQ fieldName rsi resultCustomizer gqlReq remoteJoins = Tracing.newSpan ("Remote schema query for root field " <>> fieldName) $ do
(telemTimeIO_DT, remoteResponseHeaders, resp) <-
doQErr $ E.execRemoteGQ env userInfo reqHeaders (rsDef rsi) gqlReq
doQErr $ E.execRemoteGQ env tracesPropagator userInfo reqHeaders (rsDef rsi) gqlReq
value <- extractFieldFromResponse fieldName resultCustomizer resp
finalResponse <-
doQErr
@ -541,6 +544,7 @@ runGQ env sqlGenCtx sc enableAL readOnlyMode prometheusMetrics logger agentLicen
(encJFromOrderedValue value)
remoteJoins
reqUnparsed
tracesPropagator
let filteredHeaders = filter ((== "Set-Cookie") . fst) remoteResponseHeaders
pure $ AnnotatedResponsePart telemTimeIO_DT Telem.Remote finalResponse filteredHeaders

View File

@ -78,8 +78,9 @@ import Hasura.Metadata.Class
import Hasura.Prelude
import Hasura.QueryTags
import Hasura.RQL.Types.Common (MetricsConfig (_mcAnalyzeQueryVariables))
import Hasura.RQL.Types.OpenTelemetry (getOtelTracesPropagator)
import Hasura.RQL.Types.ResultCustomization
import Hasura.RQL.Types.SchemaCache (scApiLimits, scMetricsConfig)
import Hasura.RQL.Types.SchemaCache (SchemaCache (scOpenTelemetryConfig), scApiLimits, scMetricsConfig)
import Hasura.RemoteSchema.SchemaCache
import Hasura.SQL.AnyBackend qualified as AB
import Hasura.Server.AppStateRef
@ -488,6 +489,7 @@ onStart enabledLogTypes agentLicenseKey serverEnv wsConn shouldCaptureVariables
let gqlOpType = G._todType queryParts
opName = getOpNameFromParsedReq reqParsed
maybeOperationName = _unOperationName <$> opName
tracesPropagator = getOtelTracesPropagator $ scOpenTelemetryConfig sc
for_ maybeOperationName $ \nm ->
-- https://opentelemetry.io/docs/reference/specification/trace/semantic_conventions/instrumentation/graphql/
Tracing.attachMetadata [("graphql.operation.name", unName nm)]
@ -549,17 +551,17 @@ onStart enabledLogTypes agentLicenseKey serverEnv wsConn shouldCaptureVariables
genSql
resolvedConnectionTemplate
finalResponse <-
RJ.processRemoteJoins requestId logger agentLicenseKey env reqHdrs userInfo resp remoteJoins q
RJ.processRemoteJoins requestId logger agentLicenseKey env reqHdrs userInfo resp remoteJoins q tracesPropagator
pure $ AnnotatedResponsePart telemTimeIO_DT Telem.Local finalResponse []
E.ExecStepRemote rsi resultCustomizer gqlReq remoteJoins -> do
logQueryLog logger $ QueryLog q Nothing requestId QueryLogKindRemoteSchema
runRemoteGQ requestId q fieldName userInfo reqHdrs rsi resultCustomizer gqlReq remoteJoins
runRemoteGQ requestId q fieldName userInfo reqHdrs rsi resultCustomizer gqlReq remoteJoins tracesPropagator
E.ExecStepAction actionExecPlan _ remoteJoins -> do
logQueryLog logger $ QueryLog q Nothing requestId QueryLogKindAction
(time, (resp, _)) <- doQErr $ do
(time, (resp, hdrs)) <- EA.runActionExecution userInfo actionExecPlan
finalResponse <-
RJ.processRemoteJoins requestId logger agentLicenseKey env reqHdrs userInfo resp remoteJoins q
RJ.processRemoteJoins requestId logger agentLicenseKey env reqHdrs userInfo resp remoteJoins q tracesPropagator
pure (time, (finalResponse, hdrs))
pure $ AnnotatedResponsePart time Telem.Empty resp []
E.ExecStepRaw json -> do
@ -628,19 +630,19 @@ onStart enabledLogTypes agentLicenseKey serverEnv wsConn shouldCaptureVariables
genSql
resolvedConnectionTemplate
finalResponse <-
RJ.processRemoteJoins requestId logger agentLicenseKey env reqHdrs userInfo resp remoteJoins q
RJ.processRemoteJoins requestId logger agentLicenseKey env reqHdrs userInfo resp remoteJoins q tracesPropagator
pure $ AnnotatedResponsePart telemTimeIO_DT Telem.Local finalResponse []
E.ExecStepAction actionExecPlan _ remoteJoins -> do
logQueryLog logger $ QueryLog q Nothing requestId QueryLogKindAction
(time, (resp, hdrs)) <- doQErr $ do
(time, (resp, hdrs)) <- EA.runActionExecution userInfo actionExecPlan
finalResponse <-
RJ.processRemoteJoins requestId logger agentLicenseKey env reqHdrs userInfo resp remoteJoins q
RJ.processRemoteJoins requestId logger agentLicenseKey env reqHdrs userInfo resp remoteJoins q tracesPropagator
pure (time, (finalResponse, hdrs))
pure $ AnnotatedResponsePart time Telem.Empty resp $ fromMaybe [] hdrs
E.ExecStepRemote rsi resultCustomizer gqlReq remoteJoins -> do
logQueryLog logger $ QueryLog q Nothing requestId QueryLogKindRemoteSchema
runRemoteGQ requestId q fieldName userInfo reqHdrs rsi resultCustomizer gqlReq remoteJoins
runRemoteGQ requestId q fieldName userInfo reqHdrs rsi resultCustomizer gqlReq remoteJoins tracesPropagator
E.ExecStepRaw json -> do
logQueryLog logger $ QueryLog q Nothing requestId QueryLogKindIntrospection
buildRaw json
@ -796,12 +798,13 @@ onStart enabledLogTypes agentLicenseKey serverEnv wsConn shouldCaptureVariables
ResultCustomizer ->
GQLReqOutgoing ->
Maybe RJ.RemoteJoins ->
Tracing.HttpPropagator ->
ExceptT (Either GQExecError QErr) (ExceptT () m) AnnotatedResponsePart
runRemoteGQ requestId reqUnparsed fieldName userInfo reqHdrs rsi resultCustomizer gqlReq remoteJoins = Tracing.newSpan ("Remote schema query for root field " <>> fieldName) $ do
runRemoteGQ requestId reqUnparsed fieldName userInfo reqHdrs rsi resultCustomizer gqlReq remoteJoins tracesPropagator = Tracing.newSpan ("Remote schema query for root field " <>> fieldName) $ do
env <- liftIO $ acEnvironment <$> getAppContext appStateRef
(telemTimeIO_DT, _respHdrs, resp) <-
doQErr
$ E.execRemoteGQ env userInfo reqHdrs (rsDef rsi) gqlReq
$ E.execRemoteGQ env tracesPropagator userInfo reqHdrs (rsDef rsi) gqlReq
value <- hoist lift $ extractFieldFromResponse fieldName resultCustomizer resp
finalResponse <-
doQErr
@ -816,6 +819,7 @@ onStart enabledLogTypes agentLicenseKey serverEnv wsConn shouldCaptureVariables
(encJFromOrderedValue value)
remoteJoins
reqUnparsed
tracesPropagator
return $ AnnotatedResponsePart telemTimeIO_DT Telem.Remote finalResponse []
WSServerEnv

View File

@ -9,6 +9,7 @@ where
import Control.Lens ((.~))
import Data.Bifunctor (first)
import Data.Environment (Environment)
import Data.List.Extended (uniques)
import Data.Map.Strict qualified as Map
import Data.Set qualified as Set
import Data.Text qualified as Text
@ -102,7 +103,10 @@ parseOtelExporterConfig env enabledDataTypes OtelExporterConfig {..} = do
Map.fromList
$ map
(\NameValue {nv_name, nv_value} -> (nv_name, nv_value))
_oecResourceAttributes
_oecResourceAttributes,
_oteleiTracesPropagator =
mkOtelTracesPropagator
$ uniques (_oecTracesPropagators <> defaultOtelExporterTracesPropagators)
}
-- Smart constructor. Consistent with defaults.

View File

@ -19,6 +19,7 @@ module Hasura.RQL.Types.OpenTelemetry
OtelBatchSpanProcessorConfig (..),
defaultOtelBatchSpanProcessorConfig,
NameValue (..),
TracePropagator (..),
-- * Parsed configuration (schema cache)
OpenTelemetryInfo (..),
@ -30,6 +31,9 @@ module Hasura.RQL.Types.OpenTelemetry
getMaxExportBatchSize,
getMaxQueueSize,
defaultOtelBatchSpanProcessorInfo,
defaultOtelExporterTracesPropagators,
mkOtelTracesPropagator,
getOtelTracesPropagator,
)
where
@ -45,8 +49,10 @@ import Data.Set qualified as Set
import GHC.Generics
import Hasura.Prelude hiding (first)
import Hasura.RQL.Types.Headers (HeaderConf)
import Hasura.Tracing qualified as Tracing
import Language.Haskell.TH.Syntax (Lift)
import Network.HTTP.Client (Request)
import Network.HTTP.Types (RequestHeaders, ResponseHeaders)
--------------------------------------------------------------------------------
@ -179,7 +185,9 @@ data OtelExporterConfig = OtelExporterConfig
_oecHeaders :: [HeaderConf],
-- | Attributes to send as the resource attributes of an export request,
-- for all telemetry types.
_oecResourceAttributes :: [NameValue]
_oecResourceAttributes :: [NameValue],
-- | Trace propagators to be used to extract and inject trace headers
_oecTracesPropagators :: [TracePropagator]
}
deriving stock (Eq, Show)
@ -197,12 +205,15 @@ instance HasCodec OtelExporterConfig where
AC..= _oecHeaders
<*> optionalFieldWithDefault "resource_attributes" defaultOtelExporterResourceAttributes attrsDoc
AC..= _oecResourceAttributes
<*> optionalFieldWithDefault "traces_propagators" defaultOtelExporterTracesPropagators propagatorsDocs
AC..= _oecTracesPropagators
where
tracesEndpointDoc = "Target URL to which the exporter is going to send traces. No default."
metricsEndpointDoc = "Target URL to which the exporter is going to send metrics. No default."
protocolDoc = "The transport protocol"
headersDoc = "Key-value pairs to be used as headers to send with an export request."
attrsDoc = "Attributes to send as the resource attributes of an export request. We currently only support string-valued attributes."
propagatorsDocs = "List of propagators used to inject and extract trace data from headers."
instance FromJSON OtelExporterConfig where
parseJSON = J.withObject "OtelExporterConfig" $ \o -> do
@ -216,17 +227,20 @@ instance FromJSON OtelExporterConfig where
o .:? "headers" .!= defaultOtelExporterHeaders
_oecResourceAttributes <-
o .:? "resource_attributes" .!= defaultOtelExporterResourceAttributes
_oecTracesPropagators <-
o .:? "traces_propagators" .!= defaultOtelExporterTracesPropagators
pure OtelExporterConfig {..}
instance ToJSON OtelExporterConfig where
toJSON (OtelExporterConfig otlpTracesEndpoint otlpMetricsEndpoint protocol headers resourceAttributes) =
toJSON (OtelExporterConfig otlpTracesEndpoint otlpMetricsEndpoint protocol headers resourceAttributes tracesPropagators) =
J.object
$ catMaybes
[ ("otlp_traces_endpoint" .=) <$> otlpTracesEndpoint,
("otlp_metrics_endpoint" .=) <$> otlpMetricsEndpoint,
Just $ "protocol" .= protocol,
Just $ "headers" .= headers,
Just $ "resource_attributes" .= resourceAttributes
Just $ "resource_attributes" .= resourceAttributes,
Just $ "traces_propagators" .= tracesPropagators
]
defaultOtelExporterConfig :: OtelExporterConfig
@ -236,7 +250,8 @@ defaultOtelExporterConfig =
_oecMetricsEndpoint = defaultOtelExporterMetricsEndpoint,
_oecProtocol = defaultOtelExporterProtocol,
_oecHeaders = defaultOtelExporterHeaders,
_oecResourceAttributes = defaultOtelExporterResourceAttributes
_oecResourceAttributes = defaultOtelExporterResourceAttributes,
_oecTracesPropagators = defaultOtelExporterTracesPropagators
}
-- | Possible protocol to use with OTLP. Currently, only http/protobuf is
@ -294,6 +309,31 @@ instance FromJSON NameValue where
nv_value <- o .: "value"
pure NameValue {..}
-- Internal helper type for trace propagators
data TracePropagator
= B3
| TraceContext
deriving stock (Eq, Ord, Show, Bounded, Enum)
instance HasCodec TracePropagator where
codec =
( boundedEnumCodec \case
B3 -> "b3"
TraceContext -> "tracecontext"
)
<?> "Possible trace propagators to use with OTLP"
instance FromJSON TracePropagator where
parseJSON = J.withText "TracePropagator" \case
"b3" -> pure B3
"tracecontext" -> pure TraceContext
x -> fail $ "unexpected string '" <> show x <> "'."
instance ToJSON TracePropagator where
toJSON = \case
B3 -> J.String "b3"
TraceContext -> J.String "tracecontext"
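The codec, FromJSON, and ToJSON instances above all agree on the two string encodings, so a propagator list round-trips through metadata JSON unchanged. An illustrative GHCi-style sketch (assuming Data.Aeson is in scope as J):
-- >>> J.encode [B3, TraceContext]
-- "[\"b3\",\"tracecontext\"]"
-- >>> J.decode "[\"b3\",\"tracecontext\"]" :: Maybe [TracePropagator]
-- Just [B3,TraceContext]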
defaultOtelExporterTracesEndpoint :: Maybe Text
defaultOtelExporterTracesEndpoint = Nothing
@ -309,6 +349,9 @@ defaultOtelExporterHeaders = []
defaultOtelExporterResourceAttributes :: [NameValue]
defaultOtelExporterResourceAttributes = []
defaultOtelExporterTracesPropagators :: [TracePropagator]
defaultOtelExporterTracesPropagators = [B3]
-- https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#batching-processor
newtype OtelBatchSpanProcessorConfig = OtelBatchSpanProcessorConfig
{ -- | The maximum batch size of every export. It must be smaller or equal to
@ -379,11 +422,13 @@ data OtelExporterInfo = OtelExporterInfo
-- only operations on data are (1) folding and (2) union with a small
-- map of default attributes, and Map should be is faster than HashMap for
-- the latter.
_oteleiResourceAttributes :: Map Text Text
_oteleiResourceAttributes :: Map Text Text,
-- | Trace propagator to be used to extract and inject trace headers
_oteleiTracesPropagator :: Tracing.Propagator RequestHeaders ResponseHeaders
}
emptyOtelExporterInfo :: OtelExporterInfo
emptyOtelExporterInfo = OtelExporterInfo Nothing Nothing mempty
emptyOtelExporterInfo = OtelExporterInfo Nothing Nothing mempty mempty
-- | Batch processor configuration for trace export.
--
@ -408,6 +453,16 @@ getMaxExportBatchSize = _obspiMaxExportBatchSize
getMaxQueueSize :: OtelBatchSpanProcessorInfo -> Int
getMaxQueueSize = _obspiMaxQueueSize
mkOtelTracesPropagator :: [TracePropagator] -> Tracing.HttpPropagator
mkOtelTracesPropagator tps = foldMap toPropagator tps
where
toPropagator = \case
B3 -> Tracing.b3TraceContextPropagator
TraceContext -> Tracing.w3cTraceContextPropagator
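Since foldMap combines propagators with the Semigroup instance on Propagator, list order is significant: extraction tries propagators left to right and the first successful parse wins, while injection emits every propagator's headers. A sketch with a hypothetical helper:
-- preferW3c favours an incoming "traceparent" header over "X-B3-*" on
-- extraction, and injects both header families on outgoing requests:
preferW3c :: Tracing.HttpPropagator
preferW3c = mkOtelTracesPropagator [TraceContext, B3]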
getOtelTracesPropagator :: OpenTelemetryInfo -> Tracing.HttpPropagator
getOtelTracesPropagator = _oteleiTracesPropagator . _otiExporterOtlp
-- | Defaults taken from
-- https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#batching-processor
defaultOtelBatchSpanProcessorInfo :: OtelBatchSpanProcessorInfo

View File

@ -39,7 +39,6 @@ import Data.Aeson.KeyMap qualified as KM
import Data.Aeson.Types qualified as J
import Data.ByteString.Builder qualified as BB
import Data.ByteString.Char8 qualified as B8
import Data.ByteString.Char8 qualified as Char8
import Data.ByteString.Lazy qualified as BL
import Data.CaseInsensitive qualified as CI
import Data.HashMap.Strict qualified as HashMap
@ -79,6 +78,7 @@ import Hasura.RQL.DDL.Schema
import Hasura.RQL.DDL.Schema.Cache.Config
import Hasura.RQL.Types.BackendType
import Hasura.RQL.Types.Endpoint as EP
import Hasura.RQL.Types.OpenTelemetry (getOtelTracesPropagator)
import Hasura.RQL.Types.Roles (adminRoleName, roleNameToTxt)
import Hasura.RQL.Types.SchemaCache
import Hasura.RQL.Types.Source
@ -302,40 +302,19 @@ mkSpockAction ::
mkSpockAction appStateRef qErrEncoder qErrModifier apiHandler = do
AppEnv {..} <- lift askAppEnv
AppContext {..} <- liftIO $ getAppContext appStateRef
SchemaCache {..} <- liftIO $ getSchemaCache appStateRef
req <- Spock.request
let origHeaders = Wai.requestHeaders req
ipAddress = Wai.getSourceFromFallback req
pathInfo = Wai.rawPathInfo req
propagators = getOtelTracesPropagator scOpenTelemetryConfig
-- Bytes are actually read from the socket here. Time this.
(ioWaitTime, reqBody) <- withElapsedTime $ liftIO $ Wai.strictRequestBody req
(requestId, headers) <- getRequestId origHeaders
tracingCtx <- liftIO do
-- B3 TraceIds can have a length of either 64 bits (16 hex chars) or 128 bits
-- (32 hex chars). For 64-bit TraceIds, we pad them with zeros on the left to
-- make them 128 bits long.
let traceIdMaybe =
lookup "X-B3-TraceId" headers >>= \rawTraceId ->
if
| Char8.length rawTraceId == 32 ->
Tracing.traceIdFromHex rawTraceId
| Char8.length rawTraceId == 16 ->
Tracing.traceIdFromHex $ Char8.replicate 16 '0' <> rawTraceId
| otherwise ->
Nothing
case traceIdMaybe of
Just traceId -> do
freshSpanId <- Tracing.randomSpanId
let parentSpanId = Tracing.spanIdFromHex =<< lookup "X-B3-SpanId" headers
samplingState = Tracing.samplingStateFromHeader $ lookup "X-B3-Sampled" headers
pure $ Tracing.TraceContext traceId freshSpanId parentSpanId samplingState
Nothing -> do
freshTraceId <- Tracing.randomTraceId
freshSpanId <- Tracing.randomSpanId
let samplingState = Tracing.samplingStateFromHeader $ lookup "X-B3-Sampled" headers
pure $ Tracing.TraceContext freshTraceId freshSpanId Nothing samplingState
tracingCtx <- liftIO $ Tracing.extract propagators headers
let runTrace ::
forall m1 a1.

View File

@ -3,9 +3,13 @@ module Hasura.Tracing (module Tracing) where
import Hasura.Tracing.Class as Tracing
import Hasura.Tracing.Context as Tracing
import Hasura.Tracing.Monad as Tracing
import Hasura.Tracing.Propagator as Tracing
import Hasura.Tracing.Propagator.B3 as Tracing
import Hasura.Tracing.Propagator.W3CTraceContext as Tracing
import Hasura.Tracing.Reporter as Tracing
import Hasura.Tracing.Sampling as Tracing
import Hasura.Tracing.TraceId as Tracing
import Hasura.Tracing.TraceState as Tracing
import Hasura.Tracing.Utils as Tracing
{- Note [Tracing]

View File

@ -16,6 +16,7 @@ import Hasura.Prelude
import Hasura.Tracing.Context
import Hasura.Tracing.Sampling
import Hasura.Tracing.TraceId
import Hasura.Tracing.TraceState qualified as TS
--------------------------------------------------------------------------------
-- MonadTrace
@ -105,7 +106,7 @@ newTrace :: (MonadIO m, MonadTrace m) => SamplingPolicy -> Text -> m a -> m a
newTrace policy name body = do
traceId <- randomTraceId
spanId <- randomSpanId
let context = TraceContext traceId spanId Nothing SamplingDefer
let context = TraceContext traceId spanId Nothing SamplingDefer TS.emptyTraceState
newTraceWith context policy name body
-- | Create a new span with a randomly-generated id.

View File

@ -9,6 +9,7 @@ import Data.Aeson qualified as J
import Hasura.Prelude
import Hasura.Tracing.Sampling
import Hasura.Tracing.TraceId
import Hasura.Tracing.TraceState (TraceState)
-- | Any additional human-readable key-value pairs relevant to the execution of
-- a span.
@ -30,7 +31,10 @@ data TraceContext = TraceContext
{ tcCurrentTrace :: TraceId,
tcCurrentSpan :: SpanId,
tcCurrentParent :: Maybe SpanId,
tcSamplingState :: SamplingState
tcSamplingState :: SamplingState,
-- Optional vendor-specific trace identification information, carried across
-- distributed tracing systems. Used only by the W3C Trace Context propagator:
-- https://www.w3.org/TR/trace-context/#tracestate-header
tcStateState :: TraceState
}
-- Should this be here? This implicitly ties Tracing to the name of fields in HTTP headers.

View File

@ -0,0 +1,64 @@
{-# OPTIONS_GHC -Wno-type-defaults #-}
module Hasura.Tracing.Propagator
( Propagator (..),
HttpPropagator,
extract,
inject,
)
where
import Control.Monad.IO.Class
import Hasura.Prelude
import Hasura.Tracing.Context
import Hasura.Tracing.Sampling (samplingStateFromHeader)
import Hasura.Tracing.TraceId
import Hasura.Tracing.TraceState (emptyTraceState)
import Network.HTTP.Types (RequestHeaders, ResponseHeaders)
-- | A carrier is the medium used by Propagators to read values from and write values to.
-- Each specific Propagator type defines its expected carrier type, such as a string map or a byte array.
data Propagator inboundCarrier outboundCarrier = Propagator
{ extractor :: inboundCarrier -> SpanId -> Maybe TraceContext,
injector :: TraceContext -> outboundCarrier -> outboundCarrier
}
instance (Semigroup o) => Semigroup (Propagator i o) where
(Propagator lExtract lInject) <> (Propagator rExtract rInject) =
Propagator
{ extractor = \i sid -> lExtract i sid <|> rExtract i sid,
injector = \c -> lInject c <> rInject c
}
instance (Semigroup o) => Monoid (Propagator i o) where
mempty = Propagator (\_ _ -> Nothing) (\_ p -> p)
type HttpPropagator = Propagator RequestHeaders ResponseHeaders
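The Semigroup and Monoid instances make composite propagators cheap to build, which is what mkOtelTracesPropagator in Hasura.RQL.Types.OpenTelemetry relies on. A minimal sketch, assuming the B3 and W3C propagators defined in their own modules below are in scope:
-- Tries B3 extraction first, then w3c; injects both header formats.
compositeHttpPropagator :: HttpPropagator
compositeHttpPropagator = b3TraceContextPropagator <> w3cTraceContextPropagator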
-- | Extracts the value from an incoming request. For example, from the headers of an HTTP request.
--
-- If a value cannot be parsed from the carrier, for a cross-cutting concern, the implementation MUST NOT throw an exception and MUST NOT store a new value in the Context, in order to preserve any previously existing valid value.
extract ::
(MonadIO m) =>
Propagator i o ->
-- | The carrier that holds the propagation fields. For example, an incoming message or HTTP request.
i ->
-- | a new Context derived from the Context passed as argument, containing the extracted value, which can be a SpanContext, Baggage or another cross-cutting concern context.
m TraceContext
extract (Propagator extractor _) i = do
freshSpanId <- randomSpanId
onNothing (extractor i freshSpanId) (randomContext freshSpanId)
where
randomContext freshSpanId = do
freshTraceId <- randomTraceId
let samplingState = samplingStateFromHeader Nothing
pure $ TraceContext freshTraceId freshSpanId Nothing samplingState emptyTraceState
-- | Injects the value into a carrier. For example, into the headers of an HTTP request.
inject ::
Propagator i o ->
TraceContext ->
-- | The carrier that holds the propagation fields. For example, an outgoing message or HTTP request.
o ->
o
inject (Propagator _ injector) c = injector c
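A minimal usage sketch tying the two halves together (the helper name is hypothetical): re-inject whatever context was extracted from an incoming carrier into an outgoing one.
-- Extract a trace context from an incoming carrier (or create a fresh
-- one) and propagate it onto an outgoing carrier:
forwardContext :: (MonadIO m) => Propagator i o -> i -> o -> m o
forwardContext p incoming outgoing = do
  ctx <- extract p incoming
  pure (inject p ctx outgoing)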

View File

@ -0,0 +1,49 @@
-- | B3 Propagation is a specification for the header "b3" and those that start with "x-b3-".
-- These headers are used for trace context propagation across service boundaries.
-- https://github.com/openzipkin/b3-propagation
-- https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/context/api-propagators.md#b3-requirements
module Hasura.Tracing.Propagator.B3
( b3TraceContextPropagator,
)
where
import Data.ByteString.Char8 qualified as Char8
import Hasura.Prelude
import Hasura.Tracing.Context (TraceContext (..))
import Hasura.Tracing.Propagator (Propagator (..))
import Hasura.Tracing.Sampling (samplingStateFromHeader, samplingStateToHeader)
import Hasura.Tracing.TraceId
import Hasura.Tracing.TraceState (emptyTraceState)
import Network.HTTP.Client.Transformable (RequestHeaders, ResponseHeaders)
b3TraceContextPropagator :: Propagator RequestHeaders ResponseHeaders
b3TraceContextPropagator =
Propagator
{ extractor = extractB3TraceContext,
injector = \TraceContext {..} headers ->
headers
++ catMaybes
[ Just ("X-B3-TraceId", traceIdToHex tcCurrentTrace),
Just ("X-B3-SpanId", spanIdToHex tcCurrentSpan),
("X-B3-ParentSpanId",) . spanIdToHex <$> tcCurrentParent,
("X-B3-Sampled",) <$> samplingStateToHeader tcSamplingState
]
}
extractB3TraceContext :: RequestHeaders -> SpanId -> Maybe TraceContext
extractB3TraceContext headers freshSpanId = do
-- B3 TraceIds can have a length of either 64 bits (16 hex chars) or 128 bits
-- (32 hex chars). For 64-bit TraceIds, we pad them with zeros on the left to
-- make them 128 bits long.
traceId <-
lookup "X-B3-TraceId" headers >>= \rawTraceId ->
if
| Char8.length rawTraceId == 32 ->
traceIdFromHex rawTraceId
| Char8.length rawTraceId == 16 ->
traceIdFromHex $ Char8.replicate 16 '0' <> rawTraceId
| otherwise ->
Nothing
let parentSpanId = spanIdFromHex =<< lookup "X-B3-SpanId" headers
samplingState = samplingStateFromHeader $ lookup "X-B3-Sampled" headers
Just $ TraceContext traceId freshSpanId parentSpanId samplingState emptyTraceState
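The padding rule above means a 64-bit B3 trace id and its zero-padded 128-bit form identify the same trace; traceIdFromHex itself accepts only the full 32 hex characters. An illustrative GHCi-style sketch:
-- >>> traceIdFromHex "00f067aa0ba902b7"          -- 16 hex chars: rejected
-- Nothing
-- >>> traceIdToHex <$> traceIdFromHex (Char8.replicate 16 '0' <> "00f067aa0ba902b7")
-- Just "000000000000000000f067aa0ba902b7"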

View File

@ -0,0 +1,129 @@
-- | This module provides support for tracing context propagation in accordance with the W3C tracing context
-- propagation specifications: https://www.w3.org/TR/trace-context/
module Hasura.Tracing.Propagator.W3CTraceContext
( w3cTraceContextPropagator,
)
where
import Data.Attoparsec.ByteString.Char8 (Parser, hexadecimal, parseOnly, string, takeWhile)
import Data.Bits (Bits (setBit, testBit))
import Data.ByteString (ByteString)
import Data.ByteString.Builder qualified as B
import Data.ByteString.Lazy qualified as L
import Data.Char (isHexDigit)
import Data.Text qualified as T
import Data.Word (Word8)
import Hasura.Prelude hiding (takeWhile)
import Hasura.Tracing.Context (TraceContext (..))
import Hasura.Tracing.Propagator (Propagator (..))
import Hasura.Tracing.Sampling (SamplingState (SamplingAccept, SamplingDefer))
import Hasura.Tracing.TraceId
import Hasura.Tracing.TraceState (decodeTraceStateHeader)
import Hasura.Tracing.TraceState qualified as TS
import Network.HTTP.Types (RequestHeaders, ResponseHeaders)
-- | Propagate trace context information via headers using the w3c specification format
w3cTraceContextPropagator :: Propagator RequestHeaders ResponseHeaders
w3cTraceContextPropagator =
Propagator
{ extractor = \headers freshSpanId -> do
TraceParent {..} <- lookup "traceparent" headers >>= decodeTraceparentHeader
let traceState = lookup "tracestate" headers >>= decodeTraceStateHeader
Just
$ TraceContext
tpTraceId
freshSpanId
(Just tpParentId)
(traceFlagsToSampling tpTraceFlags)
(fromMaybe TS.emptyTraceState traceState),
injector = \context headers ->
let (traceParent, traceState) = encodeSpanContext context
in headers
++ [ ("traceparent", traceParent),
("tracestate", traceState)
]
}
--------------------------------------------------------------------------------
-- TraceParent
-- | The traceparent HTTP header field identifies the incoming request in a tracing system.
-- https://w3c.github.io/trace-context/#traceparent-header
data TraceParent = TraceParent
{ tpVersion :: {-# UNPACK #-} !Word8,
tpTraceId :: {-# UNPACK #-} !TraceId,
tpParentId :: {-# UNPACK #-} !SpanId,
tpTraceFlags :: {-# UNPACK #-} !TraceFlags
}
deriving (Show)
-- | Contains details about the trace. Unlike TraceState values, TraceFlags are present in all traces.
-- The current version of the specification only supports a single flag called sampled.
newtype TraceFlags = TraceFlags Word8
deriving (Show, Eq, Ord)
-- | TraceFlags with the @sampled@ flag not set. This means that it is up to the
-- sampling configuration to decide whether or not to sample the trace.
defaultTraceFlags :: TraceFlags
defaultTraceFlags = TraceFlags 0
-- | Get the current bitmask for the @TraceFlags@, useful for serialization purposes.
traceFlagsValue :: TraceFlags -> Word8
traceFlagsValue (TraceFlags flags) = flags
-- | Will the trace associated with this @TraceFlags@ value be sampled?
isSampled :: TraceFlags -> Bool
isSampled (TraceFlags flags) = flags `testBit` 0
-- | Set the @sampled@ flag on the @TraceFlags@
setSampled :: TraceFlags -> TraceFlags
setSampled (TraceFlags flags) = TraceFlags (flags `setBit` 0)
traceFlagsToSampling :: TraceFlags -> SamplingState
traceFlagsToSampling = bool SamplingDefer SamplingAccept . isSampled
traceFlagsFromSampling :: SamplingState -> TraceFlags
traceFlagsFromSampling = \case
SamplingAccept -> setSampled defaultTraceFlags
_ -> defaultTraceFlags
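Concretely, only bit 0 of the flags byte is interpreted, giving this mapping between w3c trace flags and the engine's SamplingState (an illustrative sketch):
-- isSampled (TraceFlags 1)               == True
-- traceFlagsToSampling (TraceFlags 1)    == SamplingAccept
-- traceFlagsToSampling defaultTraceFlags == SamplingDefer
-- traceFlagsFromSampling SamplingAccept  == setSampled defaultTraceFlags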
-- | Encode the given 'TraceContext' into a @traceparent@, @tracestate@ tuple.
encodeSpanContext :: TraceContext -> (ByteString, ByteString)
encodeSpanContext TraceContext {..} = (traceparent, tracestate)
where
traceparent =
L.toStrict
$ B.toLazyByteString
-- version
$ B.word8HexFixed 0
<> B.char7 '-'
<> B.byteString (traceIdToHex tcCurrentTrace)
<> B.char7 '-'
<> B.byteString (spanIdToHex tcCurrentSpan)
<> B.char7 '-'
<> B.word8HexFixed (traceFlagsValue $ traceFlagsFromSampling tcSamplingState)
tracestate =
txtToBs
$ T.intercalate ","
$ (\(TS.Key key, TS.Value value) -> key <> "=" <> value)
<$> (TS.toTraceStateList tcStateState)
traceparentParser :: Parser TraceParent
traceparentParser = do
tpVersion <- hexadecimal
_ <- string "-"
traceIdBs <- takeWhile isHexDigit
tpTraceId <- onNothing (traceIdFromHex traceIdBs) (fail "TraceId must be 16 bytes long")
_ <- string "-"
parentIdBs <- takeWhile isHexDigit
tpParentId <- onNothing (spanIdFromHex parentIdBs) (fail "ParentId must be 8 bytes long")
_ <- string "-"
tpTraceFlags <- TraceFlags <$> hexadecimal
-- Intentionally not consuming end of input in case of version > 0
pure $ TraceParent {..}
decodeTraceparentHeader :: ByteString -> Maybe TraceParent
decodeTraceparentHeader tp = case parseOnly traceparentParser tp of
Left _ -> Nothing
Right ok -> Just ok
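Decoding the canonical example header from the w3c specification would look roughly like this (a sketch; the rendered TraceId and SpanId values are elided):
-- >>> decodeTraceparentHeader "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
-- Just (TraceParent {tpVersion = 0, tpTraceId = <4bf9...4736>, tpParentId = <00f0...02b7>, tpTraceFlags = TraceFlags 1})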

View File

@ -27,6 +27,7 @@ import System.Random.Stateful qualified as Random
--
-- Debug sampling state not represented.
data SamplingState = SamplingDefer | SamplingDeny | SamplingAccept
deriving (Show, Eq)
-- | Convert a sampling state to a value for the X-B3-Sampled header. A return
-- value of Nothing indicates that the header should not be set.

View File

@ -0,0 +1,54 @@
module Hasura.Tracing.TraceState
( TraceState,
Key (..),
Value (..),
emptyTraceState,
toTraceStateList,
decodeTraceStateHeader,
)
where
import Data.Attoparsec.ByteString.Char8 (Parser, parseOnly, string, takeWhile)
import Data.ByteString (ByteString)
import Data.Char (isAsciiLower, isDigit)
import Hasura.Prelude hiding (empty, takeWhile, toList)
newtype Key = Key Text
deriving (Show, Eq, Ord)
newtype Value = Value Text
deriving (Show, Eq, Ord)
-- | Data structure compliant with the storage and serialization needs of the W3C tracestate header.
-- https://www.w3.org/TR/trace-context/#tracestate-header
newtype TraceState = TraceState [(Key, Value)]
deriving (Show, Eq, Ord)
-- | An empty 'TraceState' key-value pair dictionary
emptyTraceState :: TraceState
emptyTraceState = TraceState []
-- | Convert the 'TraceState' to a list.
toTraceStateList :: TraceState -> [(Key, Value)]
toTraceStateList (TraceState ts) = ts
traceStateParser :: Parser TraceState
traceStateParser = do
pairs <- many stateItemParser
pure $ TraceState pairs
where
isValid c = isDigit c || (isAsciiLower c)
-- The tracestate field value is a list of list-members separated by commas (,)
-- e.g. vendorname1=opaqueValue1,vendorname2=opaqueValue2
stateItemParser :: Parser (Key, Value)
stateItemParser = do
key <- takeWhile isValid
_ <- string "="
value <- takeWhile isValid
-- attoparsec backtracks by default ('try' is a no-op), so the trailing
-- comma must be optional or the final list-member would fail to parse
_ <- optional $ string ","
pure $ (Key (bsToTxt key), Value (bsToTxt value))
decodeTraceStateHeader :: ByteString -> Maybe TraceState
decodeTraceStateHeader ts = case parseOnly traceStateParser ts of
Left _ -> Nothing
Right ok -> Just ok
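An illustrative decoding of a two-vendor tracestate header (values borrowed from the w3c examples; the trailing comma is optional, so the final list-member parses too):
-- >>> decodeTraceStateHeader "rojo=00f067aa0ba902b7,congo=t61rcwkgmzv"
-- Just (TraceState [(Key "rojo",Value "00f067aa0ba902b7"),(Key "congo",Value "t61rcwkgmzv")])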

View File

@ -15,8 +15,7 @@ import Hasura.Prelude
import Hasura.RQL.Types.SourceConfiguration (HasSourceConfiguration (..))
import Hasura.Tracing.Class
import Hasura.Tracing.Context
import Hasura.Tracing.Sampling
import Hasura.Tracing.TraceId
import Hasura.Tracing.Propagator (HttpPropagator, inject)
import Network.HTTP.Client.Transformable qualified as HTTP
-- | Wrap the execution of an HTTP request in a span in the current
@ -28,12 +27,13 @@ import Network.HTTP.Client.Transformable qualified as HTTP
-- created span, and injects the trace context into the HTTP header.
traceHTTPRequest ::
(MonadIO m, MonadTrace m) =>
HttpPropagator ->
-- | http request that needs to be made
HTTP.Request ->
-- | a function that takes the traced request and executes it
(HTTP.Request -> m a) ->
m a
traceHTTPRequest req f = do
traceHTTPRequest propagator req f = do
let method = bsToTxt (view HTTP.method req)
uri = view HTTP.url req
newSpan (method <> " " <> uri) do
@ -43,13 +43,7 @@ traceHTTPRequest req f = do
f $ over HTTP.headers (headers <>) req
where
toHeaders :: TraceContext -> [HTTP.Header]
toHeaders TraceContext {..} =
catMaybes
[ Just ("X-B3-TraceId", traceIdToHex tcCurrentTrace),
Just ("X-B3-SpanId", spanIdToHex tcCurrentSpan),
("X-B3-ParentSpanId",) . spanIdToHex <$> tcCurrentParent,
("X-B3-Sampled",) <$> samplingStateToHeader tcSamplingState
]
toHeaders context = inject propagator context []
attachSourceConfigAttributes :: forall b m. (HasSourceConfiguration b, MonadTrace m) => SourceConfig b -> m ()
attachSourceConfigAttributes sourceConfig = do

View File

@ -0,0 +1,80 @@
module Hasura.Tracing.PropagatorSpec (spec) where
import Data.Maybe (fromJust)
import Hasura.Prelude
import Hasura.RQL.Types.OpenTelemetry qualified as OTEL
import Hasura.Tracing
import Test.Hspec
spec :: Spec
spec = do
describe "B3TraceContextPropagator" $ do
it "extract and inject x-b3 headers" $ do
traceId <- randomTraceId
spanId <- randomSpanId
parentSpanId <- randomSpanId
tc <-
extract
b3TraceContextPropagator
[ ("X-B3-TraceId", traceIdToHex traceId),
("X-B3-SpanId", spanIdToHex spanId),
("X-B3-ParentSpanId", spanIdToHex parentSpanId),
("X-B3-Sampled", fromJust $ samplingStateToHeader SamplingAccept)
]
tcCurrentTrace tc `shouldBe` traceId
tcCurrentParent tc `shouldBe` Just spanId
tcSamplingState tc `shouldBe` SamplingAccept
tcStateState tc `shouldBe` emptyTraceState
inject b3TraceContextPropagator tc []
`shouldBe` [ ("X-B3-TraceId", traceIdToHex traceId),
("X-B3-SpanId", spanIdToHex $ tcCurrentSpan tc),
("X-B3-ParentSpanId", spanIdToHex spanId),
("X-B3-Sampled", fromJust $ samplingStateToHeader SamplingAccept)
]
describe "W3cTraceContextPropagator" $ do
it "extract and inject w3c tracecontext headers" $ do
traceId <- randomTraceId
spanId <- randomSpanId
parentSpanId <- randomSpanId
let headers =
inject
w3cTraceContextPropagator
(TraceContext traceId spanId (Just parentSpanId) SamplingAccept emptyTraceState)
[]
tc <- extract w3cTraceContextPropagator headers
tcCurrentTrace tc `shouldBe` traceId
tcCurrentParent tc `shouldBe` Just spanId
tcSamplingState tc `shouldBe` SamplingAccept
tcStateState tc `shouldBe` emptyTraceState
describe "Composite Propagator" $ do
it "extract and inject propagator b3 + w3c" $ do
traceId1 <- randomTraceId
spanId1 <- randomSpanId
parentSpanId1 <- randomSpanId
traceId2 <- randomTraceId
spanId2 <- randomSpanId
parentSpanId2 <- randomSpanId
let propagator = OTEL.mkOtelTracesPropagator [OTEL.TraceContext, OTEL.B3]
headers =
( inject
b3TraceContextPropagator
(TraceContext traceId1 spanId1 (Just parentSpanId1) SamplingAccept emptyTraceState)
[]
)
<> ( inject
w3cTraceContextPropagator
(TraceContext traceId2 spanId2 (Just parentSpanId2) SamplingDefer emptyTraceState)
[]
)
tc <- extract propagator headers
tcCurrentTrace tc `shouldBe` traceId2
tcCurrentParent tc `shouldBe` Just spanId2
tcSamplingState tc `shouldBe` SamplingDefer
tcStateState tc `shouldBe` emptyTraceState
inject propagator tc []
`shouldBe` (inject w3cTraceContextPropagator tc [])
<> (inject b3TraceContextPropagator tc [])