Merge branch 'main' into stable

GitOrigin-RevId: 733058f9b7502d56c70070cab93ebfb85de9f37e
rikinsk 2024-03-25 14:09:19 +05:30 committed by hasura-bot
parent a8630db22c
commit 1f50b241b5
115 changed files with 1480 additions and 711 deletions


@@ -19,7 +19,7 @@
You can also install a specific version of the CLI by providing the `VERSION` variable:
```bash
-curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.37.0 bash
+curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.38.0 bash
```
- Windows
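The `VERSION=… bash` form works because a variable assignment prefixing `bash` is visible to the piped script, which then falls back to a default via shell parameter expansion. A stand-in one-liner (not the real `get.sh`) sketches the mechanism:

```shell
# The stand-in script echoes VERSION, defaulting to v2.38.0 when unset --
# the same ${VERSION:-v2.38.0} pattern get.sh uses.
echo 'echo "${VERSION:-v2.38.0}"' | VERSION=v2.37.0 bash   # prints v2.37.0
echo 'echo "${VERSION:-v2.38.0}"' | bash                   # prints v2.38.0
```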


@@ -44,7 +44,7 @@ log "Selecting version..."
# version=${VERSION:-`echo $(curl -s -f -H 'Content-Type: application/json' \
# https://releases.hasura.io/graphql-engine?agent=cli-get.sh) | sed -n -e "s/^.*\"$release\":\"\([^\",}]*\)\".*$/\1/p"`}
-version=${VERSION:-v2.37.0}
+version=${VERSION:-v2.38.0}
if [ ! $version ]; then
log "${YELLOW}"
@@ -62,7 +62,7 @@ log "Selected version: $version"
log "${YELLOW}"
log NOTE: Install a specific version of the CLI by using VERSION variable
-log 'curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.37.0 bash'
+log 'curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.38.0 bash'
log "${NC}"
# check for existing hasura installation

docs/.gitignore vendored

@@ -31,3 +31,5 @@ yarn-error.log*
.tool-versions
spell_check_results.txt
.env*


@@ -296,6 +296,21 @@ When you use array operators such as `_in` in the permissions builder in the Has
an array for your values. If your session variable value is already an array, you can click the `[X-Hasura-Allowed-Ids]`
suggestion to remove the brackets and set your session variable in its place.
Here is an example of an array-based session variable:
```bash
X-Hasura-Allowed-Ids: {1,2,3}
```
And the related permission configuration:
```yaml
permission:
filter:
user_id:
_in: X-Hasura-Allowed-Ids
```
:::
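To make the `{1,2,3}` format concrete, here is a small illustrative parser (not Hasura's internal code) that splits such a brace-wrapped session variable the way a Postgres text-array literal is read:

```python
def parse_session_array(value: str) -> list[str]:
    """Parse a brace-wrapped session variable like '{1,2,3}' into a list of strings."""
    inner = value.strip().removeprefix("{").removesuffix("}")
    return [item.strip() for item in inner.split(",")] if inner else []

print(parse_session_array("{1,2,3}"))  # → ['1', '2', '3']
```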
## Permissions with relationships or nested objects {#relationships-in-permissions}


@@ -33,3 +33,11 @@ Here are 2 ways you can get started with Hasura:
service from Hasura Cloud.
2. [Docker](/databases/athena/getting-started/docker.mdx): Run Hasura with Docker and then connect your Amazon Athena
service to Hasura.
:::info Using Kubernetes?
We have Helm charts available for deploying Hasura on Kubernetes. Check out
[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
:::
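As a minimal sketch of the Helm route (the repo URL and chart name below are assumptions inferred from the linked `hasura/helm-charts` repository; verify them against the charts' README before relying on them):

```shell
# Add the Hasura Helm repository and install the graphql-engine chart
# into its own namespace (assumed repo URL and chart name).
helm repo add hasura https://hasura.github.io/helm-charts
helm repo update
helm install my-hasura hasura/graphql-engine --namespace hasura --create-namespace
```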


@@ -16,7 +16,15 @@ To try Hasura with BigQuery, you'll need your own new or existing BigQuery datab
Here are two ways you can get started with Hasura:
-1. [Hasura Cloud](/databases/bigquery/getting-started/cloud.mdx): Access and manage your BigQuery
-database from Hasura Cloud.
+1. [Hasura Cloud](/databases/bigquery/getting-started/cloud.mdx): Access and manage your BigQuery database from Hasura
+Cloud.
2. [Docker](/databases/bigquery/getting-started/docker.mdx): Run Hasura with Docker and then connect your BigQuery
-database to Hasura.
+database to Hasura.
:::info Using Kubernetes?
We have Helm charts available for deploying Hasura on Kubernetes. Check out
[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
:::


@@ -18,3 +18,11 @@ Here are 2 ways you can get started with Hasura and ClickHouse:
service from Hasura Cloud.
2. [Docker](/databases/clickhouse/getting-started/docker.mdx): Run Hasura with Docker and then connect your ClickHouse
service to Hasura.
:::info Using Kubernetes?
We have Helm charts available for deploying Hasura on Kubernetes. Check out
[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
:::


@@ -72,9 +72,6 @@ is recommended to use environment variables for better security _(as connection
exposed as part of the Hasura Metadata)_ as well as to allow configuring different databases in different
environments _(like staging or production)_ easily.
-A database can be connected to using the `HASURA_GRAPHQL_DATABASE_URL` environment variable as well in which case it
-gets added automatically as a database named `default`.
### Allow connections from the Hasura Cloud IP {#cloud-projects-create-allow-nat-ip}
When using Hasura Cloud, you may need to adjust your connection settings of your database provider to allow
@@ -114,8 +111,6 @@ is recommended to use environment variables for better security _(as connection
exposed as part of the Hasura Metadata)_ as well as to allow configuring different databases in different
environments _(like staging or production)_ easily.
-A database can be connected to using the `HASURA_GRAPHQL_DATABASE_URL` environment variable as well in which case it
-gets added automatically as a database named default.
</TabItem>
</Tabs>
@@ -127,8 +122,7 @@ gets added automatically as a database named default.
<TabItem value="cli" label="CLI">
In your `config v3` project, head to the `/metadata/databases/databases.yaml` file and add the database configuration as
-below. If you're using the `HASURA_GRAPHQL_DATABASE_URL` environment variable then the database will get automatically
-added and named default.
+below.
```yaml
- name: <db_name>
@@ -198,8 +192,7 @@ Engine instance.
When using Hasura Cloud, Metadata is stored for you in separate data storage to your connected database(s). When
using Docker, if you want to
[store the Hasura Metadata on a separate database](/deployment/graphql-engine-flags/reference.mdx#metadata-database-url),
-you can use the `HASURA_GRAPHQL_METADATA_DATABASE_URL` env var to specify which database to use. By default, the
-Hasura Metadata is stored on the same database as specified in the `HASURA_GRAPHQL_DATABASE_URL` environment variable.
+you can use the `HASURA_GRAPHQL_METADATA_DATABASE_URL` env var to specify which database to use.
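For the Docker case, the metadata/data split can be sketched in a compose file like this (service names and connection strings below are placeholders, not values from this page):

```yaml
services:
  graphql-engine:
    image: hasura/graphql-engine:latest
    environment:
      # Metadata lives in its own database...
      HASURA_GRAPHQL_METADATA_DATABASE_URL: postgres://user:pass@metadata-db:5432/hasura_metadata
      # ...while application data is served from another.
      HASURA_GRAPHQL_DATABASE_URL: postgres://user:pass@app-db:5432/app_data
```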
## Connect different Hasura instances to the same database


@@ -25,7 +25,7 @@ the easiest way to set up Hasura Engine and the MariaDB GraphQL Data Connector.
:::tip Supported versions:
1. Hasura GraphQL Engine `v2.24.0` onwards
-2. Hasura supports most databases with standard implementations of **MariaDB 10.5 and higher** including: Amazon RDS,
+2. Hasura supports most databases with standard implementations of **MariaDB 10.6 and higher** including: Amazon RDS,
Amazon Aurora, Digital Ocean and SkySQL.
:::


@@ -28,7 +28,7 @@ MariaDB GraphQL Data Connector.
:::tip Supported versions:
1. Hasura GraphQL Engine `v2.24.0` onwards
-2. Hasura supports most databases with standard implementations of **MariaDB 10.5 and higher** including: Amazon RDS,
+2. Hasura supports most databases with standard implementations of **MariaDB 10.6 and higher** including: Amazon RDS,
Amazon Aurora, Digital Ocean and SkySQL.
:::


@@ -28,10 +28,18 @@ To get started with MariaDB:
- In Hasura Cloud, check out our [Getting Started with MariaDB in Hasura Cloud](/databases/mariadb/cloud.mdx) guide
- In a Docker environment, check out our [Getting Started with Docker](/databases/mariadb/docker.mdx) guide
:::info Using Kubernetes?
We have Helm charts available for deploying Hasura on Kubernetes. Check out
[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
:::
:::tip Supported versions:
1. Hasura GraphQL Engine `v2.24.0` onwards
-2. Hasura supports most databases with standard implementations of **MariaDB 10.5 and higher** including: Amazon RDS,
+2. Hasura supports most databases with standard implementations of **MariaDB 10.6 and higher** including: Amazon RDS,
Amazon Aurora, Digital Ocean and SkySQL.
:::
@@ -216,8 +224,8 @@ in the `API` tab and interact with it using the GraphiQL interface.
:::info Console support
-We recommend using your preferred MariaDB client instead. The Hasura Console is designed to be a tool for managing
-your GraphQL API, and not a full-fledged database management tool.
+We recommend using your preferred MariaDB client instead. The Hasura Console is designed to be a tool for managing your
+GraphQL API, and not a full-fledged database management tool.
:::


@@ -15,10 +15,15 @@ To try Hasura with SQL Server, you'll need your own new or existing SQL Server d
Here are 2 ways you can get started with Hasura:
-1. [Hasura Cloud](/databases/ms-sql-server/getting-started/cloud.mdx) : You'll need to be able to access your SQL Server database from Hasura Cloud.
-2. [Docker](/databases/ms-sql-server/getting-started/docker.mdx): Run Hasura with Docker and then connect your SQL Server database to Hasura.
+1. [Hasura Cloud](/databases/ms-sql-server/getting-started/cloud.mdx): You'll need to be able to access your SQL Server
+database from Hasura Cloud.
+2. [Docker](/databases/ms-sql-server/getting-started/docker.mdx): Run Hasura with Docker and then connect your SQL
+Server database to Hasura.
<!--
- [Hasura Cloud](/databases/ms-sql-server/getting-started/cloud.mdx)
- [Docker](/databases/ms-sql-server/getting-started/docker.mdx)
-->
:::info Using Kubernetes?
We have Helm charts available for deploying Hasura on Kubernetes. Check out
[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
:::


@@ -30,6 +30,14 @@ To get started with MySQL:
- In Hasura Cloud, check out our [Getting Started with MySQL in Hasura Cloud](/databases/mysql/cloud.mdx) guide
- In a Docker environment, check out our [Getting Started with Docker](/databases/mysql/docker.mdx) guide
:::info Using Kubernetes?
We have Helm charts available for deploying Hasura on Kubernetes. Check out
[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
:::
:::tip Supported versions:
1. Hasura GraphQL Engine `v2.24.0` onwards
@@ -219,8 +227,8 @@ in the `API` tab and interact with it using the GraphiQL interface.
:::info Console support
-We recommend using your preferred MySQL client instead. The Hasura Console is designed to be a tool for managing
-your GraphQL API, and not a full-fledged database management tool.
+We recommend using your preferred MySQL client instead. The Hasura Console is designed to be a tool for managing your
+GraphQL API, and not a full-fledged database management tool.
:::


@@ -29,6 +29,14 @@ To get started with Oracle:
- In Hasura Cloud, check out our [Getting Started with Oracle in Hasura Cloud](/databases/oracle/cloud.mdx) guide
- In a Docker environment, check out our [Getting Started with Docker](/databases/oracle/docker.mdx) guide
:::info Using Kubernetes?
We have Helm charts available for deploying Hasura on Kubernetes. Check out
[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
:::
:::tip Supported versions
1. Hasura GraphQL Engine `v2.24.0` onwards
@@ -216,7 +224,7 @@ in the `API` tab and interact with it using the GraphiQL interface.
:::info Console support
-We recommend using your preferred Oracle client instead. The Hasura Console is designed to be a tool for managing
-your GraphQL API, and not a full-fledged database management tool.
+We recommend using your preferred Oracle client instead. The Hasura Console is designed to be a tool for managing your
+GraphQL API, and not a full-fledged database management tool.
:::


@@ -85,8 +85,7 @@ required to ensure connectivity to your database from Hasura Cloud if needed.
<TabItem value="cli" label="CLI">
In your `config v3` project, head to the `/metadata/databases/databases.yaml` file and add the database configuration as
-below. If you're using the `HASURA_GRAPHQL_DATABASE_URL` environment variable then the database will get automatically
-added and named default.
+below.
```yaml
- name: <db_name>
@@ -254,8 +253,7 @@ X-Hasura-Role: admin
When using Hasura Cloud, Metadata is stored for you in separate data storage to your connected database(s). When using
Docker, if you want to
[store the Hasura Metadata on a separate database](/deployment/graphql-engine-flags/reference.mdx#metadata-database-url),
-you can use the `HASURA_GRAPHQL_METADATA_DATABASE_URL` env var to specify which database to use. By default, the Hasura
-Metadata is stored on the same database as specified in the `HASURA_GRAPHQL_DATABASE_URL` environment variable.
+you can use the `HASURA_GRAPHQL_METADATA_DATABASE_URL` env var to specify which database to use.
## Connect different Hasura instances to the same database


@@ -1,5 +1,6 @@
---
slug: index
keywords:
- hasura
- docs
- databases
@@ -27,7 +28,15 @@ To try Hasura with Amazon Redshift, you'll need your own new or existing Amazon
Here are 2 ways you can get started with Hasura:
-1. [Hasura Cloud](/databases/redshift/getting-started/cloud.mdx) : You'll need to be able to access your Amazon Redshift
-service from Hasura Cloud.
-2. [Docker](/databases/redshift/getting-started/docker.mdx): Run Hasura with Docker and then connect your Amazon Redshift
-service to Hasura.
+1. [Hasura Cloud](/databases/redshift/getting-started/cloud.mdx) : You'll need to be able to access your Amazon
+Redshift service from Hasura Cloud.
+2. [Docker](/databases/redshift/getting-started/docker.mdx): Run Hasura with Docker and then connect your Amazon
+Redshift service to Hasura.
:::info Using Kubernetes?
We have Helm charts available for deploying Hasura on Kubernetes. Check out
[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
:::


@@ -18,3 +18,11 @@ Here are 2 ways you can get started with Hasura and Snowflake:
service from Hasura Cloud.
2. [Docker](/databases/snowflake/getting-started/docker.mdx): Run Hasura with Docker and then connect your Snowflake
service to Hasura.
:::info Using Kubernetes?
We have Helm charts available for deploying Hasura on Kubernetes. Check out
[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
:::


@@ -103,7 +103,7 @@ At this point, we'll need to configure a few parameters:
| Database Name | The name of your Weaviate database. |
| `apiKey` | The API key for your Weaviate database. |
| `host` | The URL of your Weaviate database. |
-| `openAPIKey` | The OpenAI key for use with your Weaviate database. |
+| `openAIKey` | The OpenAI key for use with your Weaviate database. |
| `scheme` | The URL scheme for your Weaviate database (http/https). |
:::info Where can I find these parameters?


@@ -150,7 +150,7 @@ az container create --resource-group hasura \
--dns-name-label "<dns-name-label>" \
--ports 80 \
--environment-variables "HASURA_GRAPHQL_SERVER_PORT"="80" "HASURA_GRAPHQL_ENABLE_CONSOLE"="true" "HASURA_GRAPHQL_ADMIN_SECRET"="<admin-secret>"\
---secure-environment-variables "HASURA_GRAPHQL_DATABASE_URL"="<database-url>"
+--secure-environment-variables "HASURA_METADATA_DATABASE_URL"="<database-url>" "PG_DATABASE_URL"="<database-url>"
```
`<database-url>` should be replaced by the following format:
@@ -159,7 +159,9 @@ az container create --resource-group hasura \
postgres://hasura%40<server_name>:<server_admin_password>@<hostname>:5432/hasura
```
-If you'd like to connect to an existing database, use that server's database url.
+If you'd like to connect to an existing database, use that server's database url. Hasura requires a Postgres database
+to store its metadata. You can use the same database for both Hasura and the application data, or you can use a separate
+database for Hasura's metadata.
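The `hasura%40<server_name>` user in the format above is the Azure login `hasura@<server_name>` with its `@` percent-encoded, since a raw `@` would end the userinfo part of the URL. A quick sketch of that encoding (the server name below is a placeholder):

```python
from urllib.parse import quote

# '@' must be escaped inside the userinfo section of a connection URL.
user = quote("hasura@myserver", safe="")
print(user)  # → hasura%40myserver
```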
:::info Note
@@ -196,9 +198,14 @@ az container create --resource-group hasura \
"HASURA_GRAPHQL_ENABLE_CONSOLE"="true" \
"HASURA_GRAPHQL_ADMIN_SECRET"="<admin-secret>" \
"HASURA_GRAPHQL_JWT_SECRET"= \ "{\"type\": \"RS512\",\"key\": \"-----BEGIN CERTIFICATE-----\\nMIIDBzCCAe+gAwIBAgIJTpEEoUJ/bOElMA0GCSqGSIb3DQEBCwUAMCExHzAdBgNV\\nBAMTFnRyYWNrLWZyOC51cy5hdXRoMC5jb20wHhcNMjAwNzE3MDYxMjE4WhcNMzQw\\nMzI2MDYxMjE4WjAhMR8wHQYDVQQDExZ0cmFjay1mcjgudXMuYXV0aDAuY29tMIIB\\nIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuK9N9FWK1hEPtwQ8ltYjlcjF\\nX03jhGgUKkLCLxe8q4x84eGJPmeHpyK+iZZ8TWaPpyD3fk+s8BC3Dqa/Sd9QeOBh\\nZH/YnzoB3yKqF/FruFNAY+F3LUt2P2t72tcnuFg4Vr8N9u8f4ESz7OHazn+XJ7u+\\ncuqKulaxMI4mVT/fGinCiT4uGVr0VVaF8KeWsF/EJYeZTiWZyubMwJsaZ2uW2U52\\n+VDE0RE0kz0fzYiCCMfuNNPg5V94lY3ImcmSI1qSjUpJsodqACqk4srmnwMZhICO\\n14F/WUknqmIBgFdHacluC6pqgHdKLMuPnp37bf7ACnQ/L2Pw77ZwrKRymUrzlQID\\nAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSOG3E+4lHiI+l0i91u\\nxG2Rca2NATAOBgNVHQ8BAf8EBAMCAoQwDQYJKoZIhvcNAQELBQADggEBAKgmxr6c\\nYmSNJOTPtjMFFDZHHX/7iwr+vqzC3nalr6ku8E3Zs0/IpwAtzqXp0eVVdPCWUY3A\\nQCUTt63GrqshBHYAxTbT0rlXFkqL8UkJvdZQ3XoQuNsqcp22zlQWGHxsk3YP97rn\\nltPI56smyHqPj+SBqyN/Vs7Vga9G8fHCfltJOdeisbmVHaC9WquZ9S5eyT7JzPAC\\n5dI5ZUunm0cgKFVbLfPr7ykClTPy36WdHS1VWhiCyS+rKeN7KYUvoaQN2U3hXesL\\nr2M+8qaPOSQdcNmg1eMNgxZ9Dh7SXtLQB2DAOuHe/BesJj8eRyENJCSdZsUOgeZl\\nMinkSy2d927Vts8=\\n-----END CERTIFICATE-----\"}"
---secure-environment-variables "HASURA_GRAPHQL_DATABASE_URL"="<database-url>"
+--secure-environment-variables "HASURA_METADATA_DATABASE_URL"="<database-url>" "PG_DATABASE_URL"="<database-url>"
```
Above, we're using the `--secure-environment-variables` flag to pass two environment variables that contain sensitive
information. The `--secure-environment-variables` flag ensures that the values of these variables are encrypted at rest
and in transit. Hasura uses the `HASURA_METADATA_DATABASE_URL` variable to store its metadata and the `PG_DATABASE_URL`
variable to connect to the database. These can be the same database or different databases.
:::info Note
Check out the [Running with JWT](/auth/authentication/jwt.mdx#running-with-jwt) section for the usage of


@@ -284,8 +284,10 @@ cd /etc/hasura
vim docker-compose.yaml
...
-# change the url to use a different database
-HASURA_GRAPHQL_DATABASE_URL: <database-url>
+# change the url to use a different database for your metadata
+HASURA_METADATA_DATABASE_URL: <database-url>
+# and here for your data using the same or different database as above
+PG_DATABASE_URL: <database-url>
...
# type ESC followed by :wq to save and quit


@@ -56,7 +56,7 @@ Completed, it will look like this:
### Step 3: Configure your database
In the database section, Set the `Env Variable Name for Connection String` in Database settings to be
-`HASURA_GRAPHQL_DATABASE_URL` and choose a region:
+`HASURA_METADATA_DATABASE_URL` and choose a region:
<Thumbnail src="/img/deployment/flightcontrol-env-variable-name.png" alt="flightcontol enable console" />


@@ -0,0 +1,286 @@
---
description: Step-by-step guide to deploy Hasura GraphQL Engine on Google Cloud Run with Cloud SQL for Postgres
title: 'Deploy Hasura GraphQL Engine on Google Cloud Run'
keywords:
- hasura
- google cloud run
- cloud sql
- deployment
- graphql
sidebar_position: 13
sidebar_label: Using Google Cloud Run & Cloud SQL
---
# Deploying Hasura GraphQL Engine on Cloud Run
To deploy Hasura GraphQL Engine on Google Cloud Run with a Cloud SQL (Postgres) instance and ensure secure communication
via private IP, follow this detailed guide.
:::info Prerequisites
This guide assumes you have a [Google Cloud](https://cloud.google.com/?hl=en) account and `gcloud` [installed](https://cloud.google.com/sdk/docs/install). Additionally, you should be working within a Google Cloud Project, whether it's one you've newly created or an existing project you have access to.
:::
## Step 1: Setup Your Environment
1. **Authenticate with Google Cloud:**
```bash
gcloud auth login
```
2. **Set your project ID:**
Replace `<PROJECT_ID>` with your actual Google Cloud project ID.
```bash
gcloud config set project <PROJECT_ID>
```
## Step 2: Enable Required Google Cloud Services
Enable Cloud Run, Cloud SQL, Cloud SQL Admin, Secret Manager, and the Service Networking APIs:
```bash
gcloud services enable run.googleapis.com sqladmin.googleapis.com servicenetworking.googleapis.com secretmanager.googleapis.com
```
:::caution Requires IAM permissions
To execute the above command, your Google Cloud account needs to have the Service Usage Admin role (roles/serviceusage.serviceUsageAdmin) or an equivalent custom role with permissions to enable services. This role allows you to view, enable, and disable services in your GCP project.
If you encounter permissions errors, contact your GCP administrator to ensure your account has the appropriate roles assigned, or to request the services be enabled on the project you are working with.
:::
## Step 3: Create a Cloud SQL (Postgres) Instance
1. **Create the database instance:**
```bash
gcloud sql instances create hasura-postgres --database-version=POSTGRES_15 --cpu=2 --memory=7680MiB --region=us-central1
```
2. **Set the password** for the default postgres user:
Replace `<PASSWORD>` with your desired password.
```bash
gcloud sql users set-password postgres --instance=hasura-postgres --password=<PASSWORD>
```
3. **Create a database**
Replace `<DATABASE_NAME>` with your database name:
```bash
gcloud sql databases create <DATABASE_NAME> --instance=hasura-postgres
```
:::info Don't have a `default` network?
The `default` network is normally created inside a Google Cloud Platform Project, however in some cases the `default` network might have been deleted or the project may have been set up with a specific network configuration without a default network.
To see the networks you have available you can run:
```bash
gcloud compute networks list
```
If you find you do not have an appropriate network for your deployment, you can create a new VPC network by running the following command to create a network named `default`:
```bash
gcloud compute networks create default --subnet-mode=auto
```
:::
## Step 4: Configure Service Networking for Private Connectivity
1. **Allocate an IP range** for Google services in your VPC:
```bash
gcloud compute addresses create google-managed-services-default \
--global \
--purpose=VPC_PEERING \
--prefix-length=24 \
--network=default
```
2. **Connect your VPC to the Service Networking API:**
Replace `<PROJECT_ID>` with your actual Google Cloud project ID.
```bash
gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=google-managed-services-default \
--network=default \
--project=<PROJECT_ID>
```
3. **Enable a private IP** for your CloudSQL instance:
```bash
gcloud sql instances patch hasura-postgres --network=default
```
## Step 5: Create your connection string
1. **Find your Cloud SQL instance's connection name:**
```bash
gcloud sql instances describe hasura-postgres
```
:::info Note
Take note of the `connectionName` field in the output of the above `describe` command. You will use the `connectionName` to deploy the GraphQL Engine to Cloud Run.
:::
2. **Construct your connection string**
You can create the connection string by filling in the following template string. Replace `<CONNECTION_NAME>`, `<PASSWORD>`, and `<DATABASE_NAME>` with your actual connectionName, database password, and
database name.
```
postgres://postgres:<PASSWORD>@/<DATABASE_NAME>?host=/cloudsql/<CONNECTION_NAME>
```
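As a sketch, the same template can be assembled programmatically; percent-encoding the password guards against special characters (all argument values below are placeholders):

```python
from urllib.parse import quote

def cloudsql_url(password: str, database: str, connection_name: str) -> str:
    # Cloud SQL socket connections leave the host empty and pass the
    # instance connection name via the ?host= query parameter.
    return (
        f"postgres://postgres:{quote(password, safe='')}"
        f"@/{database}?host=/cloudsql/{connection_name}"
    )

print(cloudsql_url("p@ss#1", "mydb", "my-project:us-central1:hasura-postgres"))
# → postgres://postgres:p%40ss%231@/mydb?host=/cloudsql/my-project:us-central1:hasura-postgres
```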
## Step 6: Store your connection string in the Secret Manager
While you can put the connection string directly into the environment variables, it is recommended that you store it and any secrets or credentials inside of [Google's Secret Manager](https://cloud.google.com/security/products/secret-manager) for maximum security. This prevents secrets from being visible to administrators and from being accessible in other parts of the control/operations plane.
1. **Store the constructed connection string as a secret** replacing `<CONNECTION_STRING>` with your actual connection string.
```bash
echo -n "<CONNECTION_STRING>" | gcloud secrets create hasura-db-connection-string --data-file=-
```
:::info Not using the `default` service account?
The following steps assume that you are running the `gcloud deploy` command via the default service account used by compute engine. If you are not using the default service account, you will need to grant the service account you are using the `roles/secretmanager.secretAccessor` role.
:::
2. **To get the `<PROJECT_NUMBER>` associated with the default service account:**
```bash
echo "$(gcloud projects describe $(gcloud config get-value project) --format='value(projectNumber)')"
```
3. **Run the following command to grant the default service account access to the secrets**, replacing `<PROJECT_NUMBER>` with your project number from the previous command:
```bash
gcloud projects add-iam-policy-binding <PROJECT_NUMBER> \
--member='serviceAccount:<PROJECT_NUMBER>-compute@developer.gserviceaccount.com' \
--role='roles/secretmanager.secretAccessor'
```
## Step 7: Deploy Hasura to Cloud Run
1. **Run the following command** and replace `<CONNECTION_NAME>` with your actual connectionName.
For additional information on configuring the Hasura GraphQL engine, please see the [Server configuration reference](https://hasura.io/docs/latest/deployment/graphql-engine-flags/reference/).
```bash
gcloud run deploy hasura-graphql-engine \
--image=hasura/graphql-engine:latest \
--add-cloudsql-instances=<CONNECTION_NAME> \
--update-env-vars='HASURA_GRAPHQL_ENABLE_CONSOLE=true' \
--update-secrets=HASURA_GRAPHQL_DATABASE_URL=hasura-db-connection-string:latest \
--region=us-central1 \
--cpu=1 \
--min-instances=1 \
--memory=2048Mi \
--port=8080 \
--allow-unauthenticated
```
## Step 8: Adding a VPC Connector (Optional)
To further enhance the connectivity and security of your Hasura GraphQL Engine deployment on Google Cloud Run,
especially when connecting to other services within your Virtual Private Cloud (VPC), you might consider adding a
Serverless VPC Access connector. This optional step is particularly useful when your architecture requires direct access
from your serverless Cloud Run service to resources within your VPC, such as VMs, other databases, or private services
that are not exposed to the public internet. For more information, please see [Google's official documentation for Serverless VPC Access](https://cloud.google.com/vpc/docs/serverless-vpc-access).
1. **Enable the Serverless VPC Access API**
First ensure that the Serverless VPC Access API is enabled:
```bash
gcloud services enable vpcaccess.googleapis.com
```
2. **Create a Serverless VPC Access Connector**
Choose an IP range that does not overlap with existing ranges in your VPC. This range will be used by the connector to
route traffic from your serverless application to your VPC. **It's important to ensure that the IP range does not overlap with other subnets to avoid routing conflicts.**
```bash
gcloud compute networks vpc-access connectors create hasura-connector \
--region=us-central1 \
--network=default \
--range=10.8.0.0/28
```
3. **Update the Cloud Run Deployment to use the VPC Connector**
When deploying or updating your Hasura GraphQL Engine service, specify the VPC connector with the `--vpc-connector`
flag:
```bash
gcloud run deploy hasura-graphql-engine \
--image=hasura/graphql-engine:latest \
--add-cloudsql-instances=<CONNECTION_NAME> \
--update-env-vars='HASURA_GRAPHQL_ENABLE_CONSOLE=true' \
--update-secrets=HASURA_GRAPHQL_DATABASE_URL=hasura-db-connection-string:latest \
--vpc-connector=hasura-connector \
--region=us-central1 \
--cpu=1 \
--min-instances=1 \
--memory=2048Mi \
--port=8080 \
--allow-unauthenticated
```
### When and Why to Use a VPC Connector
* **Enhanced Security:** Utilize a VPC Connector when you need to ensure that traffic between your Cloud Run service and
internal Google Cloud resources does not traverse the public internet, enhancing security.
* **Access to Internal Resources:** Use it when your serverless application needs access to resources within your VPC,
such
as internal APIs, databases, or services that are not publicly accessible.
* **Compliance Requirements:** If your application is subject to compliance requirements that mandate data and network
traffic must remain within a private network, a VPC connector facilitates this by providing private access to your
cloud resources.
* **Network Peering:** It's beneficial when accessing services in a peered VPC, allowing your Cloud Run services to
communicate with resources across VPC networks.
Adding a VPC Connector to your Cloud Run deployment ensures that your Hasura GraphQL Engine can securely and privately
access the necessary Google Cloud resources within your VPC, providing a robust and secure environment for your
applications.
## Tearing Down
To avoid incurring charges, delete the resources once you're done:
```bash
gcloud sql instances delete hasura-postgres
gcloud run services delete hasura-graphql-engine
gcloud compute addresses delete google-managed-services-default --global
gcloud secrets delete hasura-db-connection-string
```
If you performed the optional Step 8, you should also delete the VPC connector resource:
```bash
gcloud compute networks vpc-access connectors delete hasura-connector --region=us-central1
```

View File

@@ -43,3 +43,4 @@ Choose from the full list of deployment guides:
- [Deploy using Nhost One-click Deploy with Managed PostgreSQL, Storage, and Auth](/deployment/deployment-guides/nhost-one-click.mdx)
- [Deploy using Koyeb Serverless Platform](/deployment/deployment-guides/koyeb.mdx)
- [Deploy using Flightcontrol on AWS Fargate](/deployment/deployment-guides/flightcontrol.mdx)
- [Deploy using Google Cloud Run with Cloud SQL](/deployment/deployment-guides/google-cloud-run-cloud-sql.mdx)


@@ -32,7 +32,7 @@ To deploy Hasura to Koyeb quickly, click the button below:
[![Deploy to Koyeb](https://www.koyeb.com/static/images/deploy/button.svg)](https://app.koyeb.com/deploy?name=hasura-demo&type=docker&image=hasura/graphql-engine&env[HASURA_GRAPHQL_DATABASE_URL]=CHANGE_ME&env[HASURA_GRAPHQL_ENABLE_CONSOLE]=true&env[HASURA_GRAPHQL_ADMIN_SECRET]=CHANGE_ME&ports=8080;http;/)
-On the configuration screen, set the `HASURA_GRAPHQL_DATABASE_URL` environment variable to the connection string for your database and the `HASURA_GRAPHQL_ADMIN_SECRET` environment variable to a secret value to access the Hasura Console.
+On the configuration screen, set the `HASURA_METADATA_DATABASE_URL` (depicted as `HASURA_GRAPHQL_ENGINE_DATABASE_URL` in this screenshot) environment variable to the connection string for your database and the `HASURA_GRAPHQL_ADMIN_SECRET` environment variable to a secret value to access the Hasura Console.
Click the **Deploy** button when you are finished. When the deployment completes, you can [access the Hasura Console](#access-the-hasura-console).
@@ -52,9 +52,10 @@ On the [Koyeb control panel](https://app.koyeb.com/), click the **Create App** b
4. In the **Environment variables** section, configure the environment variables required to properly run the Hasura GraphQL Engine:
-- `HASURA_GRAPHQL_DATABASE_URL`: The environment variable containing the PostgreSQL URL, i.e. `postgres://<user>:<password>@<host>:<port>/<database>`. Since this value contains sensitive information, select the "Secret" type. Secrets are encrypted at rest and are ideal for storing sensitive data like API keys, OAuth tokens, etc. Choose "Create secret" in the "Value" drop-down menu and enter the secret value in the "Create secret" form.
+- `HASURA_METADATA_DATABASE_URL`: Hasura requires a PostgreSQL database to store its metadata. This can be the same database as `PG_DATABASE_URL` or a different one. We strongly recommend using a secret to store this value.
+- `PG_DATABASE_URL`: The environment variable containing the PostgreSQL URL, i.e. `postgres://<user>:<password>@<host>:<port>/<database>`. Since this value contains sensitive information, select the "Secret" type. Secrets are encrypted at rest and are ideal for storing sensitive data like API keys, OAuth tokens, etc. Choose "Create secret" in the "Value" drop-down menu and enter the secret value in the "Create secret" form.
- `HASURA_GRAPHQL_ENABLE_CONSOLE`: Set to `true`. This will expose and allow you to access the Hasura Console.
- `HASURA_GRAPHQL_ADMIN_SECRET`: The secret to access the Hasura Console. As with the `HASURA_GRAPHQL_DATABASE_URL`, we strongly recommend using a secret to store this value.
- `HASURA_GRAPHQL_ADMIN_SECRET`: The secret to access the Hasura Console. As with the other environment variables, we strongly recommend using a secret to store this value.
5. In the **Exposing your service** section, change the `Port` from `80` to `8080` to match the port that the `hasura/graphql-engine` Docker image app listens on. Koyeb uses this setting to perform application health checks and to properly route incoming HTTP requests. If you want the Hasura GraphQL Engine to be available on a specific path, you can change the default one (`/`) to the path of your choice.

View File

@ -37,11 +37,11 @@ Edit `deployment.yaml` and set the right database url:
```yaml {2}
env:
- name: HASURA_GRAPHQL_DATABASE_URL
- name: HASURA_METADATA_DATABASE_URL
value: postgres://<username>:<password>@hostname:<port>/<dbname>
```
Examples of `HASURA_GRAPHQL_DATABASE_URL`:
Examples of `HASURA_METADATA_DATABASE_URL`:
- `postgres://admin:password@localhost:5432/my-db`
- `postgres://admin:@localhost:5432/my-db` _(if there is no password)_
@ -49,7 +49,7 @@ Examples of `HASURA_GRAPHQL_DATABASE_URL`:
:::info Note
- If your **password contains special characters** (e.g. #, %, $, @, etc.), you need to URL encode them in the
`HASURA_GRAPHQL_DATABASE_URL` env var (e.g. %40 for @).
`HASURA_METADATA_DATABASE_URL` env var (e.g. %40 for @).
You can check the [logs](#kubernetes-logs) to see if the database credentials are proper and if Hasura is able to
connect to the database.
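As a hedged illustration (not part of the manifests), the URL encoding mentioned above can be done with a one-liner before assembling the connection string — `p@ssword` is a made-up example value:

```shell
# Hypothetical helper: URL-encode a password containing special characters
# ('@' here) before embedding it in the connection string.
RAW_PASSWORD='p@ssword'
ENCODED_PASSWORD=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$RAW_PASSWORD")
echo "$ENCODED_PASSWORD"    # p%40ssword
echo "postgres://admin:${ENCODED_PASSWORD}@localhost:5432/my-db"
```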
@ -104,7 +104,7 @@ spec:
command: ["graphql-engine"]
args: ["serve", "--enable-console"]
env:
- name: HASURA_GRAPHQL_DATABASE_URL
- name: HASURA_METADATA_DATABASE_URL
value: postgres://<username>:<password>@hostname:<port>/<dbname>
- name: HASURA_GRAPHQL_ADMIN_SECRET
value: mysecretkey

View File

@ -49,7 +49,7 @@ command on `graphql-engine` itself. The way to execute this command is
to run:
```bash
docker run -e HASURA_GRAPHQL_DATABASE_URL=$DATABASE_URL hasura/graphql-engine:<VERSION> graphql-engine downgrade --to-<NEW-VERSION>
docker run -e HASURA_METADATA_DATABASE_URL=$DATABASE_URL hasura/graphql-engine:<VERSION> graphql-engine downgrade --to-<NEW-VERSION>
```
You need to use a newer version of `graphql-engine` to downgrade to an

View File

@ -259,7 +259,7 @@ provided to the server**
```bash
# env var
HASURA_GRAPHQL_METADATA_DATABASE_URL=postgres://<user>:<password>@<host>:<port>/<metadata-db-name>
HASURA_GRAPHQL_DATABASE_URL=postgres://<user>:<password>@<host>:<port>/<db-name>
PG_DATABASE_URL=postgres://<user>:<password>@<host>:<port>/<db-name>
# flag
--metadata-database-url=postgres://<user>:<password>@<host>:<port>/<metadata-db-name>
@ -269,7 +269,7 @@ HASURA_GRAPHQL_DATABASE_URL=postgres://<user>:<password>@<host>:<port>/<db-name>
In this case, Hasura GraphQL Engine will use the
`HASURA_GRAPHQL_METADATA_DATABASE_URL` to store the `metadata catalogue`
and start the server with the database provided in the
`HASURA_GRAPHQL_DATABASE_URL`.
`PG_DATABASE_URL`.
**2. Only** `metadata database` **is provided to the server**
@ -286,26 +286,3 @@ In this case, Hasura GraphQL Engine will use the
and start the server without tracking/managing any database, _i.e._, a
Hasura GraphQL server will be started with no database. The user could
then manually track/manage databases at a later time.
**3. Only** `primary database` **is provided to the server**
```bash
# env var
HASURA_GRAPHQL_DATABASE_URL=postgres://<user>:<password>@<host>:<port>/<db-name>
# flag
--database-url=postgres://<user>:<password>@<host>:<port>/<db-name>
```
In this case, Hasura GraphQL Engine server will start with the database
provided in the `HASURA_GRAPHQL_DATABASE_URL` and will also use the
_same database_ to store the `metadata catalogue`.
**4. Neither** `primary database` **nor** `metadata database` **is
provided to the server**
Hasura GraphQL Engine will fail to startup and will throw an error
```bash
Fatal Error: Either of --metadata-database-url or --database-url option expected
```

View File

@ -44,15 +44,16 @@ the list of connected data sources.
:::info Note
This config option is supported to maintain backwards compatibility with `v1.x` Hasura instances. In versions `v2.0` and
above, databases can be connected using any custom environment variables of your choice.
This config option is supported to maintain backwards compatibility with `v1.x` Hasura instances. **In versions `v2.0`
and above, databases can be connected using any custom environment variables of your choice. Our `docker-compose.yaml`
files in the install manifests reference `PG_DATABASE_URL` as the environment variable to use for connecting to a
database, but this can be any plaintext value which does not start with `HASURA_`.**
:::
### Metadata Database URL
This Postgres database URL is used to store Hasura's Metadata. By default, the database configured using
`HASURA_GRAPHQL_DATABASE_URL` / `--database_url` will be used to store the Metadata. This can also be a URI of the form
This Postgres database URL is used to store Hasura's Metadata. This can also be a URI of the form
`dynamic-from-file:///path/to/file`, where the referenced file contains a postgres connection string, which will be read
dynamically every time a new connection is established. This allows the server to be used in an environment where
secrets are rotated frequently.
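A minimal sketch of this setup, with made-up paths and credentials — Hasura re-reads the file whenever it opens a new connection, so a secrets manager can rotate credentials by rewriting the file in place:

```shell
# Hypothetical sketch: keep the rotating connection string in a file and
# point the metadata database URL at it (path and credentials are made up).
SECRET_FILE=$(mktemp)
printf 'postgres://admin:s3cret@db.internal:5432/hasura_metadata' > "$SECRET_FILE"

# Hasura would then be started with, e.g.:
#   HASURA_GRAPHQL_METADATA_DATABASE_URL="dynamic-from-file://$SECRET_FILE"
cat "$SECRET_FILE"
```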
@ -68,7 +69,7 @@ secrets are rotated frequently.
:::info Note
Either one of the Metadata Database URL or the Database URL needs to be provided for Hasura to start.
The metadata database URL needs to be set for Hasura to start.
:::
@ -386,17 +387,31 @@ subgraph in an Apollo supergraph.
| **Default** | `false` |
| **Supported in** | CE, Enterprise Edition, Cloud |
### Header Size Limit
Sets the maximum cumulative length of all headers in bytes.
### Enable Automated Persisted Queries
Enables the [Automated Persisted Queries](https://www.apollographql.com/docs/apollo-server/performance/apq/) feature.
| | |
| ------------------- | ---------------------------------------- |
| **Flag** | `--max-total-header-length` |
| **Env var** | `HASURA_GRAPHQL_MAX_TOTAL_HEADER_LENGTH` |
| ------------------- | ------------------------------------------------ |
| **Flag** | `--enable-persisted-queries` |
| **Env var** | `HASURA_GRAPHQL_ENABLE_PERSISTED_QUERIES` |
| **Accepted values** | Boolean |
| **Default** | `false` |
| **Supported in** | Enterprise Edition |
### Set Automated Persisted Queries TTL
Sets the query TTL in the cache store for Automated Persisted Queries.
| | |
| ------------------- | ------------------------------------------------ |
| **Flag** | `--persisted-queries-ttl` |
| **Env var** | `HASURA_GRAPHQL_PERSISTED_QUERIES_TTL` |
| **Accepted values** | Integer |
| **Default** | `1024*1024` (1MB) |
| **Supported in** | CE, Enterprise Edition |
| **Default** | `5` (seconds) |
| **Supported in** | Enterprise Edition |
### Enable Error Log Level for Trigger Errors
@ -410,6 +425,7 @@ Sets the log-level as `error` for Trigger type error logs (Event Triggers, Sched
| **Default** | `false` |
| **Supported in** | CE, Enterprise Edition |
### Enable Console
Enable the Hasura Console (served by the server on `/` and `/console`).
@ -423,6 +439,19 @@ Enable the Hasura Console (served by the server on `/` and `/console`).
| **Default** | **CE**, **Enterprise Edition**: `false` <br />**Cloud**: Console is always enabled |
| **Supported in** | CE, Enterprise Edition |
### Header Size Limit
Sets the maximum cumulative length of all headers in bytes.
| | |
| ------------------- | ---------------------------------------- |
| **Flag** | `--max-total-header-length` |
| **Env var** | `HASURA_GRAPHQL_MAX_TOTAL_HEADER_LENGTH` |
| **Accepted values** | Integer |
| **Default** | `1024*1024` (1MB) |
| **Supported in** | CE, Enterprise Edition |
### Enable High-cardinality Labels for Metrics
Enable high-cardinality labels for [Prometheus Metrics](/observability/enterprise-edition/prometheus/metrics.mdx).
@ -528,7 +557,7 @@ log types — can be found [here](/deployment/logging.mdx#log-types).
| **Env var** | `HASURA_GRAPHQL_ENABLED_LOG_TYPES` |
| **Accepted values** | String (Comma-separated) |
| **Options** | `startup`, `http-log`, `webhook-log`, `websocket-log`, `query-log`, `execution-log`, `livequery-poller-log`, `action-handler-log`, `data-connector-log`, `jwk-refresh-log`, `validate-input-log` |
| **Default** | `startup, http-log, webhook-log, websocket-log`, `jwk-refresh` |
| **Default** | `startup, http-log, webhook-log, websocket-log`, `jwk-refresh-log` |
| **Supported in** | CE, Enterprise Edition |
### Events HTTP Pool Size

View File

@ -131,7 +131,7 @@ gcloud run deploy hasura \
--env-vars-file=env.yaml \
--vpc-connector=<vpc-connector-name> \
--allow-unauthenticated \
--max-instances=1 \
--min-instances=1 \
--cpu=1 \
--memory=2048Mi \
--port=8080

View File

@ -306,7 +306,8 @@ services:
environment:
HASURA_GRAPHQL_EE_LICENSE_KEY: <YOUR_EE_LICENSE_KEY>
HASURA_GRAPHQL_ADMIN_SECRET: <YOUR_ADMIN_SECRET>
HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
HASURA_METADATA_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
PG_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
HASURA_GRAPHQL_ENABLE_CONSOLE: 'true'
HASURA_GRAPHQL_DEV_MODE: 'true'
HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup,http-log,webhook-log,websocket-log,query-log

View File

@ -427,7 +427,8 @@ services:
environment:
HASURA_GRAPHQL_EE_LICENSE_KEY: <YOUR_EE_LICENSE_KEY>
HASURA_GRAPHQL_ADMIN_SECRET: <YOUR_ADMIN_SECRET>
HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
HASURA_METADATA_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
PG_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
HASURA_GRAPHQL_ENABLE_CONSOLE: 'true'
HASURA_GRAPHQL_DEV_MODE: 'true'
HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup,http-log,webhook-log,websocket-log,query-log

View File

@ -281,7 +281,8 @@ services:
environment:
HASURA_GRAPHQL_EE_LICENSE_KEY: <YOUR_EE_LICENSE_KEY>
HASURA_GRAPHQL_ADMIN_SECRET: <YOUR_ADMIN_SECRET>
HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
HASURA_METADATA_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
PG_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
HASURA_GRAPHQL_ENABLE_CONSOLE: 'true'
HASURA_GRAPHQL_DEV_MODE: 'true'
HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup,http-log,webhook-log,websocket-log,query-log

View File

@ -403,7 +403,8 @@ services:
environment:
HASURA_GRAPHQL_EE_LICENSE_KEY: <YOUR_EE_LICENSE_KEY>
HASURA_GRAPHQL_ADMIN_SECRET: <YOUR_ADMIN_SECRET>
HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
HASURA_METADATA_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
PG_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
HASURA_GRAPHQL_ENABLE_CONSOLE: 'true'
HASURA_GRAPHQL_DEV_MODE: 'true'
HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup,http-log,webhook-log,websocket-log,query-log

View File

@ -50,7 +50,7 @@ After receiving response from the webhook, the event's state is updated in the H
## Observability
<ProductBadge self />
<ProductBadge self ee />
Hasura EE exposes a set of [Prometheus metrics](/observability/enterprise-edition/prometheus/metrics.mdx/#hasura-event-triggers-metrics)
that can be used to monitor the Event Trigger system and help diagnose performance issues.

View File

@ -46,7 +46,7 @@ curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | INSTALL
You can also install a specific version of the CLI by providing the `VERSION` variable:
```bash
curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.37.0 bash
curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.38.0 bash
```
</TabItem>
@ -71,7 +71,7 @@ curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | INSTALL
You can also install a specific version of the CLI by providing the `VERSION` variable:
```bash
curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.37.0 bash
curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.38.0 bash
```
</TabItem>

View File

@ -117,8 +117,7 @@ PG_DATABASE_URL: postgres://postgres:postgres@postgres:5432/postgres
```
We'll enter the name `default` for the `Database Display Name` field. This name is used to identify the data source in
Hasura's Metadata and is not your database's name. Should you choose to use the `HASURA_GRAPHQL_DATABASE_URL`
environment variable instead, `default` is the default name assigned to your data source by Hasura.
Hasura's Metadata and is not your database's name.
Next, we'll choose `Environment Variable` from the `Connect Database Via` options; enter `PG_DATABASE_URL` as the name:

View File

@ -58,7 +58,7 @@ Example:
docker run -p 8080:8080 \
-v /home/me/my-project/migrations:/hasura-migrations \
-v /home/me/my-project/metadata:/hasura-metadata \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://postgres:@postgres:5432/postgres \
-e HASURA_METADATA_DATABASE_URL=postgres://postgres:@postgres:5432/postgres \
hasura/graphql-engine:v1.2.0.cli-migrations-v2
```

View File

@ -241,9 +241,7 @@ hasura scripts update-project-v3
Your project directory and `config.yaml` should be updated to v3.
The update script will ask for the name of the database the current
Migrations and seeds correspond to. If you are starting Hasura with a
`HASURA_GRAPHQL_DATABASE_URL` then the name of the database should be
`default`.
Migrations and seeds correspond to.
## Continue using config v2

View File

@ -58,6 +58,12 @@ host, service name and custom attributes to associate with exported logs and met
| Custom Attributes | Custom Attributes associated with your logs and metrics. A default source tag `hasura-cloud-metrics` is added to all exported logs and metrics. Attributes `project_id` and `project_name` are added to all exported metrics. |
| Service Name | The name of the application or service generating the log events. |
:::info API Key type
Your API key must be of type `License` in order to export logs and metrics to New Relic.
:::
<Thumbnail src="/img/observability/configure-newrelic.png" alt="Configure New Relic Integration" />
After adding appropriate values, click `Save`.

View File

@ -45,10 +45,10 @@ buckets, you should consider [tuning the performance](/deployment/performance-tu
Number of GraphQL requests received, representing the GraphQL query/mutation traffic on the server.
| | |
| ------ | -------------------------------------------------------------- |
| ------ | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| Name | `hasura_graphql_requests_total` |
| Type | Counter |
| Labels | `operation_type`: query \| mutation \| subscription \| unknown |
| Labels | `operation_type`: query \| mutation \| subscription \| unknown, `response_status`: success \| failed, `operation_name`, `parameterized_query_hash` |
The `unknown` operation type will be returned for queries that fail authorization, parsing, or certain validations. The
`response_status` label will be `success` for successful requests and `failed` for failed requests.

View File

@ -38,6 +38,11 @@ subscriptions with the [OpenTelemetry](https://opentelemetry.io/docs/concepts/si
be exported directly from your Hasura instances to your observability tool that supports OpenTelemetry traces. This can
be configured in the `Settings` section of the Hasura Console.
## Available Metrics
The available OpenTelemetry metrics are the same as those available via
[Prometheus](/observability/enterprise-edition/prometheus/metrics.mdx).
## Configure the OpenTelemetry receiver
:::info Supported from
@ -56,8 +61,8 @@ All users are encouraged to migrate to this new integration.
:::info Traces on Hasura Cloud
Hasura Cloud implements sampling on traces. That means only one in every `n` traces will be sampled and exported
(`n` will be automatically configured based on various parameters during runtime. This can't be manually adjusted).
Hasura Cloud implements sampling on traces. That means only one in every `n` traces will be sampled and exported (`n`
will be automatically configured based on various parameters during runtime. This can't be manually adjusted).
:::
@ -304,8 +309,8 @@ be found in the [OpenTelemetry Collector repository](https://github.com/open-tel
Trace and Span ID are included in the root of the log body. GraphQL Engine follows
[OpenTelemetry's data model](https://opentelemetry.io/docs/specs/otel/logs/data-model/#log-and-event-record-definition)
so that OpenTelemetry-compliant services can automatically correlate logs with Traces. However, some services need
extra configurations.
so that OpenTelemetry-compliant services can automatically correlate logs with Traces. However, some services need extra
configurations.
### Jaeger
@ -334,7 +339,7 @@ datasources:
filterByTraceID: false
filterBySpanID: false
customQuery: true
query: "{exporter=\"OTLP\"} | json | traceid=`$${__span.traceId}`"
query: '{exporter="OTLP"} | json | traceid=`$${__span.traceId}`'
traceQuery:
timeShiftEnabled: true
spanStartTimeShift: '1h'
@ -351,8 +356,9 @@ You will see the `Logs for this span` button enabled when exploring the trace de
### Datadog
If Datadog can't correlate between traces and logs, you should verify the Trace ID attributes mapping.
Read more at [the troubleshooting section](https://docs.datadoghq.com/tracing/troubleshooting/correlated-logs-not-showing-up-in-the-trace-id-panel/?tab=jsonlogs#trace-id-option) on Datadog.
If Datadog can't correlate between traces and logs, you should verify the Trace ID attributes mapping. Read more at
[the troubleshooting section](https://docs.datadoghq.com/tracing/troubleshooting/correlated-logs-not-showing-up-in-the-trace-id-panel/?tab=jsonlogs#trace-id-option)
on Datadog.
<Thumbnail
src="/img/enterprise/open-telemetry-datadog-trace-log.png"
@ -362,9 +368,9 @@ Read more at [the troubleshooting section](https://docs.datadoghq.com/tracing/tr
### Honeycomb
Traces and logs can't correlate together if they are exported to different datasets.
Note that Honeycomb will use the `service.name` attribute as the dataset where logs are exported.
Therefore the `x-honeycomb-dataset` header must be matched with that attribute.
Traces and logs can't correlate together if they are exported to different datasets. Note that Honeycomb will use the
`service.name` attribute as the dataset where logs are exported. Therefore the `x-honeycomb-dataset` header must be
matched with that attribute.
<Thumbnail
src="/img/enterprise/open-telemetry-honeycomb-trace-log.png"

View File

@ -25,30 +25,31 @@ variables.
**Example:** Fetch articles written by an author with a given `author_id`:
<GraphiQLIDE
query={`query getArticles($author_id: Int!) {
bigquery_articles(
where: { author_id: { _eq: $author_id } }
query={`query getArticles($author_id: Int!, $title: String!) {
articles(
where: { author_id: { _eq: $author_id }, title: { _ilike: $title } }
) {
id
title
}
}`}
response={`{
response={`{
"data": {
"bigquery_articles": [
"articles": [
{
"id": "15",
"id": 15,
"title": "How to climb Mount Everest"
},
{
"id": "6",
"id": 6,
"title": "How to be successful on broadway"
}
]
}
}`}
variables={`{
"author_id": 1
variables={`{
"author_id": 1,
"title": "%How to%"
}`}
/>

View File

@ -24,16 +24,13 @@ variables.
**Example:** Fetch articles written by an author with a given `author_id`:
<GraphiQLIDE
query={`query getArticles($author_id: Int!) {
query={`query getArticles($author_id: Int!, $title: String!) {
articles(
where: { author_id: { _eq: $author_id } }
where: { author_id: { _eq: $author_id }, title: { _ilike: $title } }
) {
id
title
}
}`}
variables={`{
"author_id": 1
}`}
response={`{
"data": {
@ -48,6 +45,10 @@ variables.
}
]
}
}`}
variables={`{
"author_id": 1,
"title": "%How to%"
}`}
/>

View File

@ -25,9 +25,9 @@ variables.
**Example:** Fetch articles written by an author with a given `author_id`:
<GraphiQLIDE
query={`query getArticles($author_id: Int!) {
query={`query getArticles($author_id: Int!, $title: String!) {
articles(
where: { author_id: { _eq: $author_id } }
where: { author_id: { _eq: $author_id }, title: { _ilike: $title } }
) {
id
title
@ -48,7 +48,8 @@ response={`{
}
}`}
variables={`{
"author_id": 1
"author_id": 1,
"title": "%How to%"
}`}
/>

View File

@ -173,26 +173,20 @@ A detailed changelog with all the new features introduced in Hasura v2 is availa
with Hasura v2 instances. Hasura v2 will assume the `v2` Metadata and Migrations belong to a database connected with
the name `default`.
- A new optional env var `HASURA_GRAPHQL_METADATA_DATABASE_URL` is now introduced. When set, this Postgres database is
used to store the Hasura Metadata. If not set, the database set using `HASURA_GRAPHQL_DATABASE_URL` is used to store
the Hasura Metadata.
Either one of `HASURA_GRAPHQL_METADATA_DATABASE_URL` or `HASURA_GRAPHQL_DATABASE_URL` needs to be set with a Postgres
database to start a Hasura v2 instance as Hasura always needs a Postgres database to store its metadata.
- The database set using the `HASURA_GRAPHQL_DATABASE_URL` env var is connected automatically with the name `default` in
Hasura v2 while updating an existing instance or while starting a fresh instance.
Setting this env var post initial setup/update will have no effect as the Hasura Metadata for data sources would
already have been initialized and the env var will be treated as any other custom env var.
It is now not mandatory to set this env var if a dedicated `HASURA_GRAPHQL_METADATA_DATABASE_URL` is set.
- A new mandatory env var `HASURA_GRAPHQL_METADATA_DATABASE_URL` is now introduced and is mandatory for storing Hasura
Metadata.
- Custom env vars can now be used to connect databases dynamically at runtime.
- With support for multiple databases, older database specific env vars have been deprecated.
[See details](#hasura-v2-env-changes)
:::info Existing Metadata
`HASURA_GRAPHQL_METADATA_DATABASE_URL` must be the connection string for where your metadata existed previously.
:::
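The points above can be sketched as a minimal startup configuration (all values below are hypothetical placeholders, not real credentials):

```shell
# Minimal sketch of the env vars a Hasura v2 instance is started with:
# the mandatory metadata database URL plus a custom data source variable.
export HASURA_GRAPHQL_METADATA_DATABASE_URL='postgres://user:pass@meta-host:5432/hasura_metadata'
export PG_DATABASE_URL='postgres://user:pass@app-host:5432/app_db'

# These would then be passed to the container, e.g.:
#   docker run -e HASURA_GRAPHQL_METADATA_DATABASE_URL -e PG_DATABASE_URL \
#     hasura/graphql-engine:v2.38.0
echo "${HASURA_GRAPHQL_METADATA_DATABASE_URL%%:*}"   # postgres
```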
## Moving from Hasura v1 to Hasura v2 {#moving-from-hasura-v1-to-v2}
### Hasura v1 and Hasura v2 compatibility {#hasura-v1-v2-compatibility}
@ -206,14 +200,12 @@ instance**.
Post adding a database named `default`, the Hasura v2 instance should behave equivalently to the Hasura v1 instance and
all previous workflows will continue working as they were.
Refer to [connecting databases](/databases/overview.mdx) to add a database to Hasura v2.
Refer to [connecting databases](/databases/quickstart.mdx) to add a database to Hasura v2.
### Migrate Hasura v1 instance to Hasura v2
Hasura v2 is backwards compatible with Hasura v1. Hence simply updating the Hasura docker image version number and
restarting your Hasura instance should work seamlessly. The database connected using the `HASURA_GRAPHQL_DATABASE_URL`
env var will be added as a database with the name `default` automatically and all existing Metadata and Migrations will
be assumed to belong to it.
restarting your Hasura instance should work seamlessly.
:::info Note
@ -282,7 +274,7 @@ by reverting the Hasura docker image version and using the [downgrade command](/
the Hasura Metadata catalogue changes:
```bash
docker run -e HASURA_GRAPHQL_DATABASE_URL=$POSTGRES_URL hasura/graphql-engine:v2.0.0 graphql-engine downgrade --to-v1.3.3
docker run -e HASURA_METADATA_DATABASE_URL=$POSTGRES_URL hasura/graphql-engine:v2.0.0 graphql-engine downgrade --to-v1.3.3
```
:::info Note

View File

@ -22,62 +22,40 @@ will walk you through the process of creating a REST endpoint from a table.
To see an alternative method of creating a REST endpoint from a query in the GraphiQL IDE, check out the
[Create RESTified endpoints](/restified/create.mdx#create-from-graphiql) page.
:::info Data source availability
Available for **Postgres, MS SQL Server, Citus, AlloyDB and CockroachDB** databases.
:::
<SampleAppBlock dependent />
### Step 1: Navigate to the products table.
Navigate to `Data > default > public > products` and click the "Create REST Endpoints" button.
<Thumbnail
src="/img/restified/restified-create-from-table-btn.png"
alt="Create RESTified Endpoint"
/>
<Thumbnail src="/img/restified/restified-create-from-table-btn.png" alt="Create RESTified Endpoint" />
### Step 2: Choose operations
After clicking on the "Create REST endpoints" button, you will see a modal list of all REST operations (`READ`, `READ
ALL`, `CREATE`, `UPDATE`, `DELETE`) available on the table. Select `READ` and `CREATE` for this demo. Click the
After clicking on the "Create REST endpoints" button, you will see a modal list of all REST operations (`READ`,
`READ ALL`, `CREATE`, `UPDATE`, `DELETE`) available on the table. Select `READ` and `CREATE` for this demo. Click the
"Create" button.
<Thumbnail
src="/img/restified/restified-modal-from-table.png"
alt="Create RESTified Endpoint"
width="400px"
/>
<Thumbnail src="/img/restified/restified-modal-from-table.png" alt="Create RESTified Endpoint" width="400px" />
### Step 3: View all REST endpoints
You will be able to see the newly created REST endpoints listed in the `API > REST` tab.
<Thumbnail
src="/img/restified/restified-tracked-table-view.png"
alt="Create RESTified Endpoint"
width="1000px"
/>
<Thumbnail src="/img/restified/restified-tracked-table-view.png" alt="Create RESTified Endpoint" width="1000px" />
### Step 4: Test the REST endpoint
Click on the `products_by_pk` title to get to the details page for that RESTified endpoint. In the "Request
Variables" section for `id` enter the value `7992fdfa-65b5-11ed-8612-6a8b11ef7372`, the UUID for one of the products
already in the `products` table of the docs sample app. Click "Run Request".
Click on the `products_by_pk` title to get to the details page for that RESTified endpoint. In the "Request Variables"
section for `id` enter the value `7992fdfa-65b5-11ed-8612-6a8b11ef7372`, the UUID for one of the products already in the
`products` table of the docs sample app. Click "Run Request".
<Thumbnail
src="/img/restified/restified-test.png"
alt="Create RESTified Endpoint"
width="1000px"
/>
<Thumbnail src="/img/restified/restified-test.png" alt="Create RESTified Endpoint" width="1000px" />
You will see the result returned next to the query.
You can test the other `insert_products_one` endpoint that we created in the same way by providing a new product
object as the request variable.
You can test the other `insert_products_one` endpoint that we created in the same way by providing a new product object
as the request variable.
You can also use your favourite REST client to test the endpoint. For example, using `curl`:
@ -92,8 +70,8 @@ curl --location --request GET 'https://<your-hasura-project>.hasura.app/api/rest
What just happened? Well, you just created two REST endpoints for reading a single product and inserting a product,
super fast, and without writing a single line of code 🎉
This saves you significant time and effort, as you easily enable REST endpoints on your tables or [convert any query
or mutation into a REST endpoint](/restified/create.mdx) with just a few clicks.
This saves you significant time and effort, as you can easily enable REST endpoints on your tables or
[convert any query or mutation into a REST endpoint](/restified/create.mdx) with just a few clicks.
By using RESTified endpoints, you can take advantage of the benefits of both REST and GraphQL, making your Hasura
project even more versatile and powerful. For more details, check out the

View File

@ -8,7 +8,6 @@ keywords:
- postgres
- schema
- sql functions
- stored procedures
---
import GraphiQLIDE from '@site/src/components/GraphiQLIDE';
@ -21,8 +20,7 @@ import TabItem from '@theme/TabItem';
## What are Custom functions?
Postgres [user-defined SQL functions](https://www.postgresql.org/docs/current/sql-createfunction.html) can be used to
either encapsulate some custom business logic or extend the built-in SQL functions and operators. SQL functions are also
referred to as **stored procedures**.
either encapsulate some custom business logic or extend the built-in SQL functions and operators.
Hasura GraphQL Engine lets you expose certain types of user-defined functions as top level fields in the GraphQL API to
allow querying them with either `queries` or `subscriptions`, or for `VOLATILE` functions as `mutations`. These are

View File

@ -308,7 +308,7 @@ In order to represent the structure of the data returned by the query, we first
:::info Permissions and Logical Models
Note that this Logical Model has no attached permissions and therefore will only be available to the admin role. See the
[Logical Model documentation](/schema/ms-sql-server/logical-models.mdx) for information on attaching permissions.
[Logical Model documentation](/schema/snowflake/logical-models.mdx) for information on attaching permissions.
:::
@ -497,6 +497,14 @@ use an argument to specify the name of the table in a `FROM` clause.
When making a query, the arguments are specified using the `args` parameter of the query root field.
##### Example: `LIKE` operator
A commonly used operator is `LIKE`. In a `WHERE` condition, it's usually written with the syntax
`WHERE Title LIKE '%word%'`.

To use it with Native Query arguments, use the syntax `LIKE ('%' || {{searchTitle}} || '%')`, where
`searchTitle` is the Native Query parameter.
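As a quick sanity check, the concatenation produces an ordinary `%word%` pattern; here is a shell sketch with `searchTitle` standing in for the bound parameter value:

```shell
# What ('%' || {{searchTitle}} || '%') evaluates to when the hypothetical
# parameter searchTitle is bound to "word".
searchTitle='word'
pattern="%${searchTitle}%"
echo "WHERE Title LIKE '${pattern}'"    # WHERE Title LIKE '%word%'
```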
## Query functionality
Just like tables, Native Queries generate GraphQL types with the ability to further break down the data. You can find
@ -515,8 +523,7 @@ A future release will allow mutations to be specified using Native Queries.
## Permissions
Native queries will inherit the permissions of the Logical Model that they return. See the
[documentation on Logical Models](/schema/ms-sql-server/logical-models.mdx) for an explanation of how to add
permissions.
[documentation on Logical Models](/schema/snowflake/logical-models.mdx) for an explanation of how to add permissions.
## Relationships
@ -530,7 +537,7 @@ Model in order to be tracked successfully.
Currently relationships are only supported between Native Queries residing in the same source.
As an example, consider the following Native Queries which implement the data model of articles and authors given in the
section on [Logical Model references](/schema/ms-sql-server/logical-models.mdx#referencing-other-logical-models):
section on [Logical Model references](/schema/snowflake/logical-models.mdx#referencing-other-logical-models):
<Tabs groupId="user-preference" className="api-tabs">
<TabItem value="api" label="API">

View File

@ -92,3 +92,16 @@ reference.
Dynamic secrets can be used in template variables for data connectors. See
[Template variables](/databases/database-config/data-connector-config.mdx/#template) for reference.
## Forcing secret refresh
If the environment variable `HASURA_SECRETS_BLOCKING_FORCE_REFRESH_URL=<url>`
is set, on each connection failure the server will POST to the specified URL the payload:
```
{"filename": <path>}
```
It is expected that the responding server will return only after refreshing the
secret at the given filepath. [hasura-secret-refresh](https://github.com/hasura/hasura-secret-refresh)
follows this spec.
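A minimal refresh endpoint can be sketched as follows. This is illustrative only, assuming `HASURA_SECRETS_BLOCKING_FORCE_REFRESH_URL` points at this server; the `refresh_secret` function is hypothetical and should be replaced by a call to your real secret store:

```python
# Minimal sketch of a blocking secret-refresh endpoint, assuming
# HASURA_SECRETS_BLOCKING_FORCE_REFRESH_URL points at this server.
# `refresh_secret` is hypothetical: replace its body with a call to
# your real secret store.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def refresh_secret(path: str) -> None:
    # Hypothetical: fetch the latest secret and rewrite the file at `path`.
    with open(path, "w") as f:
        f.write("refreshed-secret")


class RefreshHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # Respond only after the file has been rewritten: graphql-engine
        # blocks on this request before retrying the connection.
        refresh_secret(payload["filename"])
        self.send_response(200)
        self.end_headers()


# To serve: HTTPServer(("0.0.0.0", 8000), RefreshHandler).serve_forever()
```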

View File

@ -19,10 +19,19 @@ const config = {
projectName: 'graphql-engine',
staticDirectories: ['static', 'public'],
customFields: {
docsBotEndpointURL:
process.env.NODE_ENV === 'development'
? 'ws://localhost:8000/hasura-docs-ai'
: 'wss://website-api.hasura.io/chat-bot/hasura-docs-ai',
docsBotEndpointURL: (() => {
console.log('process.env.release_mode docs-bot', process.env.release_mode);
switch (process.env.release_mode) {
case 'development':
return 'ws://localhost:8000/hasura-docs-ai';
case 'production':
return 'wss://website-api.hasura.io/chat-bot/hasura-docs-ai';
case 'staging':
return 'wss://website-api.stage.hasura.io/chat-bot/hasura-docs-ai';
default:
return 'ws://localhost:8000/hasura-docs-ai'; // default to development if no match
}
})(),
hasuraVersion: 2,
DEV_TOKEN: process.env.DEV_TOKEN,
},

View File

@ -4,8 +4,8 @@ import './styles.css';
import useDocusaurusContext from '@docusaurus/useDocusaurusContext';
import { CloseIcon, RespondingIconGray, SparklesIcon } from '@site/src/components/AiChatBot/icons';
import { useLocalStorage } from 'usehooks-ts'
import profilePic from '@site/static/img/hasura-ai-profile-pic.png';
import profilePic from '@site/static/img/docs-bot-profile-pic.webp';
import { v4 as uuidv4 } from 'uuid';
interface Message {
userMessage: string;
@ -26,7 +26,7 @@ interface Query {
const initialMessages: Message[] = [
{
userMessage: '',
botResponse: "Hi! I'm HasuraAI, the docs chatbot.",
botResponse: "Hi! I'm DocsBot, the Hasura docs AI chatbot.",
},
{
userMessage: '',
@ -50,6 +50,8 @@ export function AiChatBot() {
const [isResponding, setIsResponding] = useState<boolean>(false)
// Manage the text input
const [input, setInput] = useState<string>('');
// Manage the message thread ID
const [messageThreadId, setMessageThreadId] = useLocalStorage<String>(`hasuraV${customFields.hasuraVersion}ThreadId`, uuidv4())
// Manage the historical messages
const [messages, setMessages] = useLocalStorage<Message[]>(`hasuraV${customFields.hasuraVersion}BotMessages`, initialMessages);
// Manage the current message
@ -185,7 +187,7 @@ export function AiChatBot() {
}
if (ws) {
const toSend = JSON.stringify({ previousMessages: messages, currentUserInput: input });
const toSend = JSON.stringify({ previousMessages: messages, currentUserInput: input, messageThreadId });
setCurrentMessage({ userMessage: input, botResponse: '' });
setInput('');
ws.send(toSend);
@ -194,6 +196,8 @@ export function AiChatBot() {
};
const baseUrl = useDocusaurusContext().siteConfig.baseUrl;
return (
<div className="chat-popup">
{isOpen ? (
@ -209,12 +213,13 @@ export function AiChatBot() {
<div className="chat-window">
<div className="info-bar">
<div className={"bot-name-pic-container"}>
<div className="bot-name">HasuraAI</div>
<div className="bot-name">DocsBot</div>
<img src={profilePic} height={30} width={30} className="bot-pic"/>
</div>
<button className="clear-button" onClick={() => {
setMessages(initialMessages)
setCurrentMessage({ userMessage: '', botResponse: '' });
setMessageThreadId(uuidv4());
}}>Clear</button>
</div>
<div className="messages-container" onScroll={handleScroll} ref={scrollDiv}>

View File

@ -7,149 +7,160 @@ import ArrowRight from '@site/static/icons/arrow_right.svg';
import styles from './styles.module.scss';
const HasuraConBanner = props => {
const isSnowFlakeSection = props.location.pathname.startsWith(`/docs/latest/databases/snowflake`);
// const isSnowFlakeSection = props.location.pathname.startsWith(`/docs/latest/databases/snowflake`);
const isObservabilitySection = props.location.pathname.startsWith(`/docs/latest/observability`);
// const isObservabilitySection = props.location.pathname.startsWith(`/docs/latest/observability`);
const isSecuritySection = props.location.pathname.startsWith(`/docs/latest/security`);
// const isSecuritySection = props.location.pathname.startsWith(`/docs/latest/security`);
const isMySQLSection = props.location.pathname.startsWith(`/docs/latest/databases/mysql`);
// const isMySQLSection = props.location.pathname.startsWith(`/docs/latest/databases/mysql`);
const isOracleSection = props.location.pathname.startsWith(`/docs/latest/databases/oracle`);
// const isOracleSection = props.location.pathname.startsWith(`/docs/latest/databases/oracle`);
const isMariaDBSection = props.location.pathname.startsWith(`/docs/latest/databases/mariadb`);
// const isMariaDBSection = props.location.pathname.startsWith(`/docs/latest/databases/mariadb`);
// Banner for - New product launch webinar */
if (isMySQLSection || isOracleSection || isMariaDBSection) {
return (
<div className={styles['product-launch-webinar-bg']}>
<a className={styles['webinar-banner']} href="https://hasura.io/events/webinar/product-launch/">
<div className={styles['hasura-con-brand']}>
<img
className={styles['brand-light']}
src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1683628053/main-web/Group_11457_vceb9f.png"
alt="hasura-webinar"
/>
</div>
<div className={styles['content-div']}>
<h3>Ship faster with low-code APIs on MySQL, MariaDB, and Oracle</h3>
<div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
View Recording
<ArrowRight />
</div>
</div>
</a>
</div>
);
}
// if (isMySQLSection || isOracleSection || isMariaDBSection) {
// return (
// <div className={styles['product-launch-webinar-bg']}>
// <a className={styles['webinar-banner']} href="https://hasura.io/events/webinar/product-launch/">
// <div className={styles['hasura-con-brand']}>
// <img
// className={styles['brand-light']}
// src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1683628053/main-web/Group_11457_vceb9f.png"
// alt="hasura-webinar"
// />
// </div>
// <div className={styles['content-div']}>
// <h3>Ship faster with low-code APIs on MySQL, MariaDB, and Oracle</h3>
// <div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
// View Recording
// <ArrowRight />
// </div>
// </div>
// </a>
// </div>
// );
// }
if (isSnowFlakeSection) {
return (
<div className={styles['snowflake-bg']}>
<a className={styles['webinar-banner']} href="https://hasura.io/events/webinar/snowflake-and-postgresql/">
<div className={styles['hasura-con-brand']}>
<img
className={styles['brand-light']}
src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1677756408/main-web/Group_11455_1_ziz1fz.png"
alt="Hasura Con"
/>
</div>
<div className={styles['content-div']}>
<h3>Combining Snowflake and PostgreSQL to build low-latency apps on historical data insights</h3>
<div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
View Recording
<ArrowRight />
</div>
</div>
</a>
</div>
);
}
// if (isSnowFlakeSection) {
// return (
// <div className={styles['snowflake-bg']}>
// <a className={styles['webinar-banner']} href="https://hasura.io/events/webinar/snowflake-and-postgresql/">
// <div className={styles['hasura-con-brand']}>
// <img
// className={styles['brand-light']}
// src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1677756408/main-web/Group_11455_1_ziz1fz.png"
// alt="Hasura Con"
// />
// </div>
// <div className={styles['content-div']}>
// <h3>Combining Snowflake and PostgreSQL to build low-latency apps on historical data insights</h3>
// <div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
// View Recording
// <ArrowRight />
// </div>
// </div>
// </a>
// </div>
// );
// }
// if (isObservabilitySection) {
// return (
// <div className={styles['observe-bg']}>
// <a
// className={styles['webinar-banner']}
// href="https://hasura.io/events/webinar/best-practices-for-api-observability-with-hasura/"
// >
// <div className={styles['hasura-con-brand']}>
// <img
// className={styles['brand-light']}
// src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1677759444/main-web/Group_11455_2_rdpykm.png"
// alt="Hasura Con"
// />
// </div>
// <div className={styles['content-div']}>
// <h3>Best Practices for API Observability with Hasura</h3>
// <div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
// View Recording
// <ArrowRight />
// </div>
// </div>
// </a>
// </div>
// );
// }
// if (isSecuritySection) {
// return (
// <div className={styles['security-bg']}>
// <a className={styles['webinar-banner']} href="https://hasura.io/events/webinar/securing-your-api-with-hasura/">
// <div className={styles['hasura-con-brand']}>
// <img
// className={styles['brand-light']}
// src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1677759811/main-web/Group_11455_3_azgk7w.png"
// alt="Hasura Con"
// />
// </div>
// <div className={styles['content-div']}>
// <h3>Securing your API with Hasura</h3>
// <div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
// View Recording
// <ArrowRight />
// </div>
// </div>
// </a>
// </div>
// );
// }
if (isObservabilitySection) {
return (
<div className={styles['observe-bg']}>
<a
className={styles['webinar-banner']}
href="https://hasura.io/events/webinar/best-practices-for-api-observability-with-hasura/"
<a className={styles['hasura-con-banner']} href="https://hasura.io/events/hasura-con-2024">
<div className={styles['hasura-con-brand']}>
<svg
fill="none"
height="42"
viewBox="0 0 239 42"
width="239"
xmlns="http://www.w3.org/2000/svg"
>
<div className={styles['hasura-con-brand']}>
<img
className={styles['brand-light']}
src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1677759444/main-web/Group_11455_2_rdpykm.png"
alt="Hasura Con"
/>
</div>
<div className={styles['content-div']}>
<h3>Best Practices for API Observability with Hasura</h3>
<div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
View Recording
<ArrowRight />
</div>
</div>
</a>
</div>
);
}
if (isSecuritySection) {
return (
<div className={styles['security-bg']}>
<a className={styles['webinar-banner']} href="https://hasura.io/events/webinar/securing-your-api-with-hasura/">
<div className={styles['hasura-con-brand']}>
<img
className={styles['brand-light']}
src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1677759811/main-web/Group_11455_3_azgk7w.png"
alt="Hasura Con"
/>
</div>
<div className={styles['content-div']}>
<h3>Securing your API with Hasura</h3>
<div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
View Recording
<ArrowRight />
</div>
</div>
</a>
</div>
);
}
return (
<a className={styles['hasura-con-banner']} href="https://hasura.io/events/hasura-con-2023/">
<div className={styles['hasura-con-brand']}>
<img
className={styles['hasuracon23-img']}
src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1686154570/hasura-con-2023/has-con-light-date_r2a2ud.png"
alt="Hasura Con"
<path
d="m38.0802 14.8938c1.1907-3.5976.5857-10.81721-1.6165-13.50688-.2856-.35146-.8325-.30753-1.0793.07322l-2.7976 4.31519c-.6921.85913-1.9166 1.0495-2.8265.42956-2.9572-2.00138-6.505-3.18757-10.3334-3.23639-3.8284-.04881-7.4052 1.05927-10.40597 2.98744-.92444.59553-2.14896.38075-2.81687-.49791l-2.69588-4.38352c-.23716-.385636-.78408-.439332-1.07932-.097632-2.265111 2.635972-3.03467 9.840922-1.931152 13.467822.367839 1.2009.459799 2.4749.2178 3.7099-.23716 1.2204-.479159 2.6995-.493679 3.7295-.121 10.5731 8.276381 19.2426 18.754971 19.3646 10.4834.122 19.0792-8.3473 19.2002-18.9156.0097-1.0299-.1936-2.5139-.4065-3.7391-.213-1.2399-.092-2.5091.3049-3.7002z"
fill="#3970fd"
/>
<g fill="#fff">
<path d="m20.1496 13.6664 1.6087 4.6515c.0826.2432.3146.4088.5707.403l4.4542-.02c.589-.0015.8323.7504.3531 1.0931l-3.6412 2.5884c-.2098.1493-.303.414-.2266.6653l1.4019 4.6901c.1627.5462-.4578.9959-.9219.6646l-3.8586-2.7106c-.2079-.147-.4819-.1446-.6899-.0042l-3.8737 2.687c-.4712.3256-1.0886-.1234-.9175-.6697l1.4334-4.6816c.0754-.245-.0162-.5131-.2232-.6646l-3.6225-2.6137c-.4758-.3428-.2273-1.0965.3599-1.089l4.4545.0496c.2589.004.49-.1597.5743-.4029l1.6381-4.6373c.1881-.5383.952-.5335 1.1352.0027z" />
<path d="m57.4343 9.99387h4.0106v22.00613h-4.0106v-9.3814h-4.5338v9.3814h-4.0106v-22.00613h4.0106v9.55573h4.5338zm16.0444 22.00613-.837-4.5686h-4.8128l-.7672 4.5686h-4.0107l4.4292-22.00613h5.4056l4.6384 22.00613zm-5.1267-7.6028h3.7317l-1.9182-10.5322zm17.7397 3.5922v-4.7082c0-.372-.0698-.6161-.2093-.7323-.1395-.1395-.3952-.2093-.7672-.2093h-2.8249c-2.3947 0-3.5921-1.1625-3.5921-3.4875v-5.4056c0-2.3018 1.2555-3.45263 3.7665-3.45263h3.8362c2.511 0 3.7665 1.15083 3.7665 3.45263v3.069h-4.0455v-2.511c0-.372-.0697-.6161-.2092-.7324-.1395-.1395-.3953-.2092-.7673-.2092h-1.3252c-.3953 0-.6626.0697-.8021.2092-.1395.1163-.2093.3604-.2093.7324v4.4291c0 .372.0698.6278.2093.7673.1395.1162.4068.1743.8021.1743h2.7551c2.4413 0 3.6619 1.1393 3.6619 3.4178v5.7544c0 2.3017-1.2671 3.4526-3.8014 3.4526h-3.7665c-2.5342 0-3.8014-1.1509-3.8014-3.4526v-3.0342h4.0107v2.4762c0 .372.0697.6277.2092.7672.1395.1163.4069.1744.8021.1744h1.3253c.372 0 .6277-.0581.7672-.1744.1395-.1395.2093-.3952.2093-.7672zm14.4183-17.99553h4.01v18.55353c0 2.3017-1.267 3.4526-3.801 3.4526h-4.255c-2.5342 0-3.8014-1.1509-3.8014-3.4526v-18.55353h4.0107v17.99553c0 .372.0697.6277.2092.7672.1395.1163.3953.1744.7673.1744h1.8483c.3953 0 .6629-.0581.8019-.1744.14-.1395.21-.3952.21-.7672zm10.971 13.42683v8.5793h-4.011v-22.00613h8.091c2.535 0 3.802 1.15083 3.802 3.45263v6.5216c0 1.9065-.849 3.0225-2.546 3.348l3.662 8.6839h-4.325l-3.348-8.5793zm0-10.3578v7.3935h2.895c.372 0 .627-.0582.767-.1744.139-.1395.209-.3953.209-.7673v-5.5102c0-.372-.07-.6161-.209-.7324-.14-.1395-.395-.2092-.767-.2092zm19.659 18.9371-.837-4.5686h-4.813l-.767 4.5686h-4.011l4.429-22.00613h5.406l4.638 22.00613zm-5.127-7.6028h3.732l-1.919-10.5322zm22.273-7.1145h-4.045v-3.4177c0-.372-.07-.6161-.209-.7324-.14-.1395-.396-.2092-.768-.2092h-1.639c-.372 0-.628.0697-.767.2092-.14.1163-.209.3604-.209.7324v14.2987c0 .372.069.6278.209.7673.139.1162.395.1744.767.1744h1.639c.372 0 .628-.0582.768-.1744.139-.1395.209-.3953.209-.7673v-3.348h4.045v3.7665c0 2.3018-1.267 
3.4527-3.801 3.4527h-4.08c-2.535 0-3.802-1.1509-3.802-3.4527v-15.1357c0-2.3018 1.267-3.45263 3.802-3.45263h4.08c2.534 0 3.801 1.15083 3.801 3.45263zm6.143-7.28883h4.255c2.534 0 3.801 1.15083 3.801 3.45263v15.1009c0 2.3017-1.267 3.4526-3.801 3.4526h-4.255c-2.534 0-3.801-1.1509-3.801-3.4526v-15.1009c0-2.3018 1.267-3.45263 3.801-3.45263zm4.045 17.99553v-13.9849c0-.372-.069-.6161-.209-.7324-.139-.1395-.395-.2092-.767-.2092h-1.848c-.396 0-.663.0697-.803.2092-.139.1163-.209.3604-.209.7324v13.9849c0 .372.07.6277.209.7672.14.1163.407.1744.803.1744h1.848c.372 0 .628-.0581.767-.1744.14-.1395.209-.3952.209-.7672zm15.405-17.99553h3.662v22.00613h-3.766l-4.709-14.5778v14.5778h-3.696v-22.00613h3.871l4.638 14.36853zm15.151 4.01063v3.2782h-4.011v-3.8362c0-2.3018 1.267-3.45263 3.801-3.45263h3.348c2.558 0 3.837 1.15083 3.837 3.45263v2.4761c0 1.7205-.524 3.3945-1.57 5.022l-4.568 7.9864h6.242v3.069h-10.985v-2.8946l5.684-9.207c.814-1.2323 1.221-2.5808 1.221-4.0455v-1.8484c0-.372-.07-.6161-.209-.7324-.14-.1395-.396-.2092-.768-.2092h-1.011c-.395 0-.663.0697-.802.2092-.14.1163-.209.3604-.209.7324zm16.849 13.9849v-13.9849c0-.372-.07-.6161-.209-.7324-.14-.1395-.407-.2092-.802-.2092h-1.604c-.372 0-.628.0697-.768.2092-.139.1163-.209.3604-.209.7324v13.9849c0 .372.07.6277.209.7672.14.1163.396.1744.768.1744h1.604c.395 0 .662-.0581.802-.1744.139-.1395.209-.3952.209-.7672zm4.011-14.5429v15.1009c0 2.3017-1.267 3.4526-3.802 3.4526h-4.045c-2.534 0-3.801-1.1509-3.801-3.4526v-15.1009c0-2.3018 1.267-3.45263 3.801-3.45263h4.045c2.535 0 3.802 1.15083 3.802 3.45263zm6.063.558v3.2782h-4.011v-3.8362c0-2.3018 1.267-3.45263 3.802-3.45263h3.348c2.557 0 3.836 1.15083 3.836 3.45263v2.4761c0 1.7205-.523 3.3945-1.57 5.022l-4.568 7.9864h6.242v3.069h-10.985v-2.8946l5.684-9.207c.814-1.2323 1.221-2.5808 1.221-4.0455v-1.8484c0-.372-.07-.6161-.209-.7324-.14-.1395-.395-.2092-.767-.2092h-1.012c-.395 0-.662.0697-.802.2092-.139.1163-.209.3604-.209.7324zm21.802 
10.6369v3.1038h-2.093v4.2548h-3.801v-4.2548h-7.987v-3.4177l6.906-14.33363h4.01l-7.254 14.64753h4.325v-4.2548h3.801v4.2548z" />
</g>
</svg>
</div>
<div className={styles['hasura-con-space-between']}>
<div>
<div className={styles['hasura-con-23-title']}>The fourth annual Hasura User Conference</div>
<div className={styles['hasura-con-23-title']}>The HasuraCon 2024 CFP is open!</div>
</div>
<div className={styles['hasura-con-register-button'] + ' ' + styles['hasura-con-register-mobile-hide']}>
Read more

View File

@ -71,10 +71,13 @@
font-size: var(--ifm-small-font-size);
font-weight: var(--ifm-font-weight-semibold);
align-self: center;
display: grid;
img {
width: 97px;
}
svg {
width: 170px;
}
.hasuracon23-img {
min-width: 159px;
// margin-right: 42px;
@ -216,7 +219,7 @@ html[data-theme='dark'] {
@media (min-width: 997px) and (max-width: 1380px) {
.hasura-con-banner {
grid-template-columns: 1fr;
grid-gap: 20px;
grid-gap: 20px !important;
.hasura-con-register-button {
margin-top: 20px;
}

Binary file not shown.

After

Width:  |  Height:  |  Size: 3.9 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 250 KiB

View File

@ -5,11 +5,11 @@
"systems": "systems"
},
"locked": {
"lastModified": 1694529238,
"narHash": "sha256-zsNZZGTGnMOf9YpHKJqMSsa0dXbfmxeoJ7xHlrt+xmY=",
"lastModified": 1710146030,
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "ff7b65b44d01cf9ba6a71320833626af21126384",
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
"type": "github"
},
"original": {
@ -20,11 +20,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1699914561,
"narHash": "sha256-b296O45c3Jgj8GEFg/NN7ZOJjBBCHr1o2iA4yoJ3OKE=",
"lastModified": 1710754590,
"narHash": "sha256-9LA94zYvr5a6NawEftuSdTP8HYMV0ZYdB5WG6S9Z7tI=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "2f8742189e9ef86961ab90a30c68eb844565578a",
"rev": "a089e2dc4cf2421ca29f2d5ced81badd5911fcdf",
"type": "github"
},
"original": {

View File

@ -69,7 +69,7 @@ const isExistingArrRel = (currentArrRels, relCols, relTable) => {
currRCol = Object.values(arrRelDef.manual_configuration.column_mapping);
}
if (currTable.name === relTable && sameRelCols(currRCol, relCols)) {
if (currTable?.name === relTable && sameRelCols(currRCol, relCols)) {
_isExistingArrRel = true;
break;
}

View File

@ -85,7 +85,7 @@
"dom-parser": "0.1.6",
"form-urlencoded": "^6.1.0",
"format-graphql": "^1.4.0",
"graphiql": "1.4.7",
"graphiql": "1.0.0-alpha.0",
"graphiql-code-exporter": "2.0.8",
"graphiql-explorer": "0.6.2",
"graphql": "14.5.8",

View File

@ -2647,6 +2647,18 @@ __metadata:
languageName: node
linkType: hard
"@emotion/cache@npm:^10.0.27":
version: 10.0.29
resolution: "@emotion/cache@npm:10.0.29"
dependencies:
"@emotion/sheet": 0.9.4
"@emotion/stylis": 0.8.5
"@emotion/utils": 0.11.3
"@emotion/weak-memoize": 0.2.5
checksum: 78b37fb0c2e513c90143a927abef229e995b6738ef8a92ce17abe2ed409b38859ddda7c14d7f4854d6f4e450b6db50231532f53a7fec4903d7ae775b2ae3fd64
languageName: node
linkType: hard
"@emotion/cache@npm:^11.11.0, @emotion/cache@npm:^11.4.0":
version: 11.11.0
resolution: "@emotion/cache@npm:11.11.0"
@ -2660,6 +2672,40 @@ __metadata:
languageName: node
linkType: hard
"@emotion/core@npm:^10.0.22":
version: 10.3.1
resolution: "@emotion/core@npm:10.3.1"
dependencies:
"@babel/runtime": ^7.5.5
"@emotion/cache": ^10.0.27
"@emotion/css": ^10.0.27
"@emotion/serialize": ^0.11.15
"@emotion/sheet": 0.9.4
"@emotion/utils": 0.11.3
peerDependencies:
react: ">=16.3.0"
checksum: d2dad428e1b2cf0777badfb55e262d369273be9b2e6e9e7d61c953066c00811d544a6234db36b17ee07872ed092f4dd102bf6ffe2c76fc38d53eef3a60fddfd0
languageName: node
linkType: hard
"@emotion/css@npm:^10.0.27":
version: 10.0.27
resolution: "@emotion/css@npm:10.0.27"
dependencies:
"@emotion/serialize": ^0.11.15
"@emotion/utils": 0.11.3
babel-plugin-emotion: ^10.0.27
checksum: 1420f5b514fc3a8500bcf90384b309b0d9acc9f687ec3a655166b55dc81d1661d6b6132ea6fe6730d0071c10da93bf9427937c22a90a18088af4ba5e11d59141
languageName: node
linkType: hard
"@emotion/hash@npm:0.8.0":
version: 0.8.0
resolution: "@emotion/hash@npm:0.8.0"
checksum: 4b35d88a97e67275c1d990c96d3b0450451d089d1508619488fc0acb882cb1ac91e93246d471346ebd1b5402215941ef4162efe5b51534859b39d8b3a0e3ffaa
languageName: node
linkType: hard
"@emotion/hash@npm:^0.9.1":
version: 0.9.1
resolution: "@emotion/hash@npm:0.9.1"
@ -2667,7 +2713,7 @@ __metadata:
languageName: node
linkType: hard
"@emotion/is-prop-valid@npm:^0.8.3":
"@emotion/is-prop-valid@npm:^0.8.1, @emotion/is-prop-valid@npm:^0.8.3":
version: 0.8.8
resolution: "@emotion/is-prop-valid@npm:0.8.8"
dependencies:
@ -2720,6 +2766,19 @@ __metadata:
languageName: node
linkType: hard
"@emotion/serialize@npm:^0.11.15, @emotion/serialize@npm:^0.11.16":
version: 0.11.16
resolution: "@emotion/serialize@npm:0.11.16"
dependencies:
"@emotion/hash": 0.8.0
"@emotion/memoize": 0.7.4
"@emotion/unitless": 0.7.5
"@emotion/utils": 0.11.3
csstype: ^2.5.7
checksum: 2949832fab9d803e6236f2af6aad021c09c6b6722ae910b06b4ec3bfb84d77cbecfe3eab9a7dcc269ac73e672ef4b696c7836825931670cb110731712e331438
languageName: node
linkType: hard
"@emotion/serialize@npm:^1.1.2":
version: 1.1.2
resolution: "@emotion/serialize@npm:1.1.2"
@ -2733,6 +2792,13 @@ __metadata:
languageName: node
linkType: hard
"@emotion/sheet@npm:0.9.4":
version: 0.9.4
resolution: "@emotion/sheet@npm:0.9.4"
checksum: 53bb833b4bb69ea2af04e1ecad164f78fb2614834d2820f584c909686a8e047c44e96a6e824798c5c558e6d95e10772454a9e5c473c5dbe0d198e50deb2815bc
languageName: node
linkType: hard
"@emotion/sheet@npm:^1.2.2":
version: 1.2.2
resolution: "@emotion/sheet@npm:1.2.2"
@ -2760,14 +2826,14 @@ __metadata:
languageName: node
linkType: hard
"@emotion/stylis@npm:^0.8.4":
"@emotion/stylis@npm:0.8.5, @emotion/stylis@npm:^0.8.4":
version: 0.8.5
resolution: "@emotion/stylis@npm:0.8.5"
checksum: 67ff5958449b2374b329fb96e83cb9025775ffe1e79153b499537c6c8b2eb64b77f32d7b5d004d646973662356ceb646afd9269001b97c54439fceea3203ce65
languageName: node
linkType: hard
"@emotion/unitless@npm:^0.7.4":
"@emotion/unitless@npm:0.7.5, @emotion/unitless@npm:^0.7.4":
version: 0.7.5
resolution: "@emotion/unitless@npm:0.7.5"
checksum: f976e5345b53fae9414a7b2e7a949aa6b52f8bdbcc84458b1ddc0729e77ba1d1dfdff9960e0da60183877873d3a631fa24d9695dd714ed94bcd3ba5196586a6b
@ -2790,6 +2856,13 @@ __metadata:
languageName: node
linkType: hard
"@emotion/utils@npm:0.11.3":
version: 0.11.3
resolution: "@emotion/utils@npm:0.11.3"
checksum: 9c4204bda84f9acd153a9be9478a83f9baa74d5d7a4c21882681c4d1b86cd113b84540cb1f92e1c30313b5075f024da2658dbc553f5b00776ef9b6ec7991c0c9
languageName: node
linkType: hard
"@emotion/utils@npm:^1.2.1":
version: 1.2.1
resolution: "@emotion/utils@npm:1.2.1"
@ -2797,6 +2870,13 @@ __metadata:
languageName: node
linkType: hard
"@emotion/weak-memoize@npm:0.2.5":
version: 0.2.5
resolution: "@emotion/weak-memoize@npm:0.2.5"
checksum: 27d402b0c683b94658220b6d47840346ee582329ca2a15ec9c233492e0f1a27687ccb233b76eedc922f2e185e444cc89f7b97a81a1d3e5ae9f075bab08e965ea
languageName: node
linkType: hard
"@emotion/weak-memoize@npm:^0.3.1":
version: 0.3.1
resolution: "@emotion/weak-memoize@npm:0.3.1"
@ -3114,19 +3194,6 @@ __metadata:
languageName: node
linkType: hard
"@graphiql/toolkit@npm:^0.3.2":
version: 0.3.2
resolution: "@graphiql/toolkit@npm:0.3.2"
dependencies:
"@n1ru4l/push-pull-async-iterable-iterator": ^3.0.0
graphql-ws: ^4.9.0
meros: ^1.1.4
peerDependencies:
graphql: ">= v14.5.0 <= 15.6.1"
checksum: 3d69ba8a75047d3d5eb4226d6366e3664ac5326afddd72690f230de4a9bbec173f96d648376c1b4472219b917c7e99844a34d54d683f0bc3b25a0f119b5a338e
languageName: node
linkType: hard
"@graphql-codegen/cli@npm:2.13.8":
version: 2.13.8
resolution: "@graphql-codegen/cli@npm:2.13.8"
@ -5192,6 +5259,15 @@ __metadata:
languageName: node
linkType: hard
"@mdx-js/react@npm:^1.5.2":
version: 1.6.22
resolution: "@mdx-js/react@npm:1.6.22"
peerDependencies:
react: ^16.13.1 || ^17.0.0
checksum: bc84bd514bc127f898819a0c6f1a6915d9541011bd8aefa1fcc1c9bea8939f31051409e546bdec92babfa5b56092a16d05ef6d318304ac029299df5181dc94c8
languageName: node
linkType: hard
"@mdx-js/react@npm:^2.1.5":
version: 2.3.0
resolution: "@mdx-js/react@npm:2.3.0"
@ -5246,13 +5322,6 @@ __metadata:
languageName: node
linkType: hard
"@n1ru4l/push-pull-async-iterable-iterator@npm:^3.0.0":
version: 3.2.0
resolution: "@n1ru4l/push-pull-async-iterable-iterator@npm:3.2.0"
checksum: 2c7bdbc6c3d8f0aa05c2e3e80c4a856f766e6113a86198fd0df2448117f7cfa71ee2946f6aa7e745caec6ac04d19a5a61c6c80c6fdbf686d43984b3791f0a04d
languageName: node
linkType: hard
"@ndelangen/get-tarball@npm:^3.0.7":
version: 3.0.9
resolution: "@ndelangen/get-tarball@npm:3.0.9"
@ -9407,7 +9476,7 @@ __metadata:
languageName: node
linkType: hard
"@styled-system/css@npm:^5.1.5":
"@styled-system/css@npm:^5.0.16, @styled-system/css@npm:^5.1.5":
version: 5.1.5
resolution: "@styled-system/css@npm:5.1.5"
checksum: 0d3579ae82f5f53412c22e675aec9f77fa17b52deddc03d680340d8187006f1698ef0577db30a3c57ee0204f83ec61bb8a01105c3f0d60ca5c925a70175b5358
@ -13418,6 +13487,24 @@ __metadata:
languageName: node
linkType: hard
"babel-plugin-emotion@npm:^10.0.27":
version: 10.2.2
resolution: "babel-plugin-emotion@npm:10.2.2"
dependencies:
"@babel/helper-module-imports": ^7.0.0
"@emotion/hash": 0.8.0
"@emotion/memoize": 0.7.4
"@emotion/serialize": ^0.11.16
babel-plugin-macros: ^2.0.0
babel-plugin-syntax-jsx: ^6.18.0
convert-source-map: ^1.5.0
escape-string-regexp: ^1.0.5
find-root: ^1.1.0
source-map: ^0.5.7
checksum: 763f38c67ffbe7d091691d68c74686ba478296cc24716699fb5b0feddce1b1b47878a20b0bbe2aa4dea17f41074ead4deae7935d2cf6823638766709812c5b40
languageName: node
linkType: hard
"babel-plugin-istanbul@npm:5.2.0":
version: 5.2.0
resolution: "babel-plugin-istanbul@npm:5.2.0"
@ -13467,7 +13554,7 @@ __metadata:
languageName: node
linkType: hard
"babel-plugin-macros@npm:^2.8.0":
"babel-plugin-macros@npm:^2.0.0, babel-plugin-macros@npm:^2.8.0":
version: 2.8.0
resolution: "babel-plugin-macros@npm:2.8.0"
dependencies:
@ -15248,7 +15335,7 @@ __metadata:
languageName: node
linkType: hard
"codemirror@npm:^5.58.2":
"codemirror@npm:^5.47.0":
version: 5.65.16
resolution: "codemirror@npm:5.65.16"
checksum: 1c5036bfffcce19b1ff91d8b158dcb45faba27047c4093f55ea7ad1165975179eb47c9ef604baa9c4f4ea6bf9817886c767f33e72fa9c62710404029be3c4744
@ -16270,7 +16357,7 @@ __metadata:
languageName: node
linkType: hard
"csstype@npm:^2.0.0, csstype@npm:^2.2.0, csstype@npm:^2.5.2, csstype@npm:^2.6.9":
"csstype@npm:^2.0.0, csstype@npm:^2.2.0, csstype@npm:^2.5.2, csstype@npm:^2.5.7, csstype@npm:^2.6.9":
version: 2.6.21
resolution: "csstype@npm:2.6.21"
checksum: 2ce8bc832375146eccdf6115a1f8565a27015b74cce197c35103b4494955e9516b246140425ad24103864076aa3e1257ac9bab25a06c8d931dd87a6428c9dccf
@ -16600,7 +16687,7 @@ __metadata:
languageName: node
linkType: hard
"deepmerge@npm:^4.2.2":
"deepmerge@npm:^4.0.0, deepmerge@npm:^4.2.2":
version: 4.3.1
resolution: "deepmerge@npm:4.3.1"
checksum: 2024c6a980a1b7128084170c4cf56b0fd58a63f2da1660dcfe977415f27b17dbe5888668b59d0b063753f3220719d5e400b7f113609489c90160bb9a5518d052
@ -17240,13 +17327,6 @@ __metadata:
languageName: node
linkType: hard
"dset@npm:^3.1.0":
version: 3.1.3
resolution: "dset@npm:3.1.3"
checksum: 5db964a36c60c51aa3f7088bfe1dc5c0eedd9a6ef3b216935bb70ef4a7b8fc40fd2f9bb16b9a4692c9c9772cea60cfefb108d2d09fbd53c85ea8f6cd54502d6a
languageName: node
linkType: hard
"dset@npm:^3.1.2":
version: 3.1.2
resolution: "dset@npm:3.1.2"
@ -17522,20 +17602,13 @@ __metadata:
languageName: node
linkType: hard
"entities@npm:~2.0":
"entities@npm:~2.0, entities@npm:~2.0.0":
version: 2.0.3
resolution: "entities@npm:2.0.3"
checksum: 5a7899fcc622e0d76afdeafe4c58a6b40ae3a8ee4772e5825a648c11a2ca324a9a02515386f512e466baac4aeb551f3d3b79eaece5cd98369b9f8601be336b1a
languageName: node
linkType: hard
"entities@npm:~2.1.0":
version: 2.1.0
resolution: "entities@npm:2.1.0"
checksum: a10a877e489586a3f6a691fe49bf3fc4e58f06c8e80522f08214a5150ba457e7017b447d4913a3fa041bda06ee4c92517baa4d8d75373eaa79369e9639225ffd
languageName: node
linkType: hard
"env-paths@npm:^2.2.0":
version: 2.2.1
resolution: "env-paths@npm:2.2.1"
@ -17875,7 +17948,7 @@ __metadata:
languageName: node
linkType: hard
"escape-html@npm:^1.0.3, escape-html@npm:~1.0.3":
"escape-html@npm:~1.0.3":
version: 1.0.3
resolution: "escape-html@npm:1.0.3"
checksum: 6213ca9ae00d0ab8bccb6d8d4e0a98e76237b2410302cf7df70aaa6591d509a2a37ce8998008cbecae8fc8ffaadf3fb0229535e6a145f3ce0b211d060decbb24
@ -19717,7 +19790,7 @@ __metadata:
form-urlencoded: ^6.1.0
format-graphql: ^1.4.0
glob: ^9.3.1
graphiql: 1.4.7
graphiql: 1.0.0-alpha.0
graphiql-code-exporter: 2.0.8
graphiql-explorer: 0.6.2
graphql: 14.5.8
@ -20600,24 +20673,25 @@ __metadata:
languageName: node
linkType: hard
"graphiql@npm:1.4.7":
version: 1.4.7
resolution: "graphiql@npm:1.4.7"
"graphiql@npm:1.0.0-alpha.0":
version: 1.0.0-alpha.0
resolution: "graphiql@npm:1.0.0-alpha.0"
dependencies:
"@graphiql/toolkit": ^0.3.2
codemirror: ^5.58.2
codemirror-graphql: ^1.0.3
"@emotion/core": ^10.0.22
"@mdx-js/react": ^1.5.2
codemirror: ^5.47.0
codemirror-graphql: ^0.12.0-alpha.0
copy-to-clipboard: ^3.2.0
dset: ^3.1.0
entities: ^2.0.0
escape-html: ^1.0.3
graphql-language-service: ^3.1.6
markdown-it: ^12.2.0
markdown-it: ^10.0.0
regenerator-runtime: ^0.13.3
theme-ui: ^0.2.52
peerDependencies:
graphql: ">= v14.5.0 <= 15.5.0"
react: ^16.8.0 || ^17.0.0 || ^18.0.0
react-dom: ^16.8.0 || ^17.0.0 || ^18.0.0
checksum: b62790da23a54209c469f628c1d87bdc7b975e1857de77a6c34e0e69348704d81f32c020b29d8ae56a035075bed49cf3c59bbacdda31d7a9b888cf17676b4e7a
graphql: ^0.12.0 || ^0.13.0 || ^14.0.0
prop-types: ">=15.5.0"
react: ^16.8.0
react-dom: ^16.8.0
checksum: fbd3787cdecdc9c7dbdec2ae1f767bd17d7c743d4e9a23f15dd8b0e7911330f24a0d8f1d9c1039579e1afb7a5c3fc896aa061370a26aa273db4bb5b96fd81a74
languageName: node
linkType: hard
@ -20710,7 +20784,7 @@ __metadata:
languageName: node
linkType: hard
"graphql-language-service-parser@npm:^1.10.3, graphql-language-service-parser@npm:^1.5.3-alpha.0":
"graphql-language-service-parser@npm:^1.5.3-alpha.0":
version: 1.10.4
resolution: "graphql-language-service-parser@npm:1.10.4"
dependencies:
@ -20721,7 +20795,7 @@ __metadata:
languageName: node
linkType: hard
"graphql-language-service-types@npm:^1.6.0-alpha.0, graphql-language-service-types@npm:^1.8.6, graphql-language-service-types@npm:^1.8.7":
"graphql-language-service-types@npm:^1.6.0-alpha.0, graphql-language-service-types@npm:^1.8.7":
version: 1.8.7
resolution: "graphql-language-service-types@npm:1.8.7"
dependencies:
@ -20733,7 +20807,7 @@ __metadata:
languageName: node
linkType: hard
"graphql-language-service-utils@npm:^2.4.0-alpha.0, graphql-language-service-utils@npm:^2.6.3":
"graphql-language-service-utils@npm:^2.4.0-alpha.0":
version: 2.7.1
resolution: "graphql-language-service-utils@npm:2.7.1"
dependencies:
@ -20746,22 +20820,6 @@ __metadata:
languageName: node
linkType: hard
"graphql-language-service@npm:^3.1.6":
version: 3.2.5
resolution: "graphql-language-service@npm:3.2.5"
dependencies:
graphql-language-service-interface: ^2.9.5
graphql-language-service-parser: ^1.10.3
graphql-language-service-types: ^1.8.6
graphql-language-service-utils: ^2.6.3
peerDependencies:
graphql: ^15.5.0 || ^16.0.0
bin:
graphql: dist/temp-bin.js
checksum: bf42d5db27d12fba4a0ba7fba81ef9601e00076ad7e2ac1dd8713d98f67004529b63ecac7099767f85a7c2577c17d518aebd9de3cbb5dc316a8074aaa37be4bc
languageName: node
linkType: hard
"graphql-mqtt-subscriptions@npm:^1.2.0":
version: 1.2.0
resolution: "graphql-mqtt-subscriptions@npm:1.2.0"
@ -20894,15 +20952,6 @@ __metadata:
languageName: node
linkType: hard
"graphql-ws@npm:^4.9.0":
version: 4.9.0
resolution: "graphql-ws@npm:4.9.0"
peerDependencies:
graphql: ">=0.11 <=15"
checksum: f74f5d42843798136202bed9766d2ac6ce614950d31a69d5b935b4f41255d3ace8329b659658fe88a45a4dad43c0d668361b826889d0191859839856084c1eb9
languageName: node
linkType: hard
"graphql@npm:0.13.1 - 16, graphql@npm:^15.0.0 || ^16.0.0":
version: 16.6.0
resolution: "graphql@npm:16.6.0"
@ -24923,12 +24972,12 @@ __metadata:
languageName: node
linkType: hard
"linkify-it@npm:^3.0.1":
version: 3.0.3
resolution: "linkify-it@npm:3.0.3"
"linkify-it@npm:^2.0.0":
version: 2.2.0
resolution: "linkify-it@npm:2.2.0"
dependencies:
uc.micro: ^1.0.1
checksum: 31367a4bb70c5bbc9703246236b504b0a8e049bcd4e0de4291fa50f0ebdebf235b5eb54db6493cb0b1319357c6eeafc4324c9f4aa34b0b943d9f2e11a1268fbc
checksum: d198871d0b3f3cfdb745dae564bfd6743474f20cd0ef1057e6ca29451834749e7f3da52b59b4de44e98f31a1e5c71bdad160490d4ae54de251cbcde57e4d7837
languageName: node
linkType: hard
@ -25556,18 +25605,18 @@ __metadata:
languageName: node
linkType: hard
"markdown-it@npm:^12.2.0":
version: 12.3.2
resolution: "markdown-it@npm:12.3.2"
"markdown-it@npm:^10.0.0":
version: 10.0.0
resolution: "markdown-it@npm:10.0.0"
dependencies:
argparse: ^2.0.1
entities: ~2.1.0
linkify-it: ^3.0.1
argparse: ^1.0.7
entities: ~2.0.0
linkify-it: ^2.0.0
mdurl: ^1.0.1
uc.micro: ^1.0.5
bin:
markdown-it: bin/markdown-it.js
checksum: 890555711c1c00fa03b936ca2b213001a3b9b37dea140d8445ae4130ce16628392aad24b12e2a0a9935336ca5951f2957a38f4e5309a2e38eab44e25ff32a41e
checksum: 69f5ee640cbebb451b80d3cce308fff7230767e05c0f8c206a1e413775b7a6e5a08e91e9f3ec59f9b5c5a45493f9ce7ac089379cffb60c9d3e6677ed9d535086
languageName: node
linkType: hard
@ -25872,7 +25921,7 @@ __metadata:
languageName: node
linkType: hard
"meros@npm:^1.1.4, meros@npm:^1.2.1":
"meros@npm:^1.2.1":
version: 1.3.0
resolution: "meros@npm:1.3.0"
peerDependencies:
@ -34463,6 +34512,21 @@ __metadata:
languageName: node
linkType: hard
"theme-ui@npm:^0.2.52":
version: 0.2.52
resolution: "theme-ui@npm:0.2.52"
dependencies:
"@emotion/is-prop-valid": ^0.8.1
"@styled-system/css": ^5.0.16
deepmerge: ^4.0.0
peerDependencies:
"@emotion/core": ^10.0.0
"@mdx-js/react": ^1.0.0
react: ^16.8.0
checksum: f00c61c2a7cf247b4b94ea0f7e64a0fc97ba78eeab1a472a3e4755fefa6fc412e7b56fee0d567f266837a02628899a7582fd1f261147a8566df8ac015de4a0bd
languageName: node
linkType: hard
"throttleit@npm:^1.0.0":
version: 1.0.0
resolution: "throttleit@npm:1.0.0"

View File

@ -98,7 +98,7 @@
"firewallRuleName": "allow-all-azure-firewall-rule",
"containerGroupName": "[concat(parameters('name'), '-container-group')]",
"containerName": "hasura-graphql-engine",
"containerImage": "hasura/graphql-engine:v2.37.0"
"containerImage": "hasura/graphql-engine:v2.38.0"
},
"resources": [
{

View File

@ -55,7 +55,7 @@
"dbName": "[parameters('postgresDatabaseName')]",
"containerGroupName": "[concat(parameters('name'), '-container-group')]",
"containerName": "hasura-graphql-engine",
"containerImage": "hasura/graphql-engine:v2.37.0"
"containerImage": "hasura/graphql-engine:v2.38.0"
},
"resources": [
{

View File

@ -27,7 +27,7 @@ services:
- "${PWD}/cockroach-data:/cockroach/cockroach-data"
graphql-engine:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
ports:
- "8080:8080"
depends_on:

View File

@ -8,7 +8,7 @@ services:
environment:
POSTGRES_PASSWORD: postgrespassword
graphql-engine:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
depends_on:
- "postgres"
restart: always

View File

@ -15,7 +15,7 @@ services:
volumes:
- mssql_data:/var/opt/mssql
graphql-engine:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
ports:
- "8080:8080"
depends_on:

View File

@ -19,7 +19,7 @@ services:
PGADMIN_DEFAULT_EMAIL: pgadmin@example.com
PGADMIN_DEFAULT_PASSWORD: admin
graphql-engine:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
ports:
- "8080:8080"
depends_on:

View File

@ -8,7 +8,7 @@ services:
environment:
POSTGRES_PASSWORD: postgrespassword
graphql-engine:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
ports:
- "8080:8080"
depends_on:

View File

@ -23,7 +23,7 @@ services:
- yugabyte-data:/var/lib/postgresql/data
graphql-engine:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
ports:
- "8080:8080"
depends_on:

View File

@ -8,7 +8,7 @@ services:
environment:
POSTGRES_PASSWORD: postgrespassword
graphql-engine:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
ports:
- "8080:8080"
restart: always
@ -31,7 +31,7 @@ services:
data-connector-agent:
condition: service_healthy
data-connector-agent:
image: hasura/graphql-data-connector:v2.37.0
image: hasura/graphql-data-connector:v2.38.0
restart: always
ports:
- 8081:8081

View File

@ -3,4 +3,4 @@ docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:port/dbname \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
-e HASURA_GRAPHQL_DEV_MODE=true \
hasura/graphql-engine:v2.37.0
hasura/graphql-engine:v2.38.0

View File

@ -13,7 +13,7 @@ services:
environment:
POSTGRES_PASSWORD: postgrespassword
hasura:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
restart: always
ports:
- 8080:8080
@ -48,7 +48,7 @@ services:
data-connector-agent:
condition: service_healthy
data-connector-agent:
image: hasura/graphql-data-connector:v2.37.0
image: hasura/graphql-data-connector:v2.38.0
restart: always
ports:
- 8081:8081

View File

@ -4,7 +4,7 @@
"containerDefinitions": [
{
"name": "hasura",
"image": "hasura/graphql-engine:v2.37.0",
"image": "hasura/graphql-engine:v2.38.0",
"portMappings": [
{
"hostPort": 8080,

View File

@ -13,7 +13,7 @@ services:
environment:
POSTGRES_PASSWORD: postgrespassword
hasura:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
restart: always
ports:
- 8080:8080
@ -48,7 +48,7 @@ services:
data-connector-agent:
condition: service_healthy
data-connector-agent:
image: hasura/clickhouse-data-connector:v2.37.0
image: hasura/clickhouse-data-connector:v2.38.0
restart: always
ports:
- 8080:8081

View File

@ -15,7 +15,7 @@ services:
environment:
POSTGRES_PASSWORD: postgrespassword
graphql-engine:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
ports:
- "8080:8080"
restart: always
@ -47,7 +47,7 @@ services:
data-connector-agent:
condition: service_healthy
data-connector-agent:
image: hasura/graphql-data-connector:v2.37.0
image: hasura/graphql-data-connector:v2.38.0
restart: always
ports:
- 8081:8081

View File

@ -18,7 +18,7 @@ spec:
fsGroup: 1001
runAsUser: 1001
containers:
- image: hasura/graphql-engine:v2.37.0
- image: hasura/graphql-engine:v2.38.0
imagePullPolicy: IfNotPresent
name: hasura
readinessProbe:

View File

@ -13,7 +13,7 @@ services:
environment:
POSTGRES_PASSWORD: postgrespassword
hasura:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
restart: always
ports:
- 8080:8080
@ -48,7 +48,7 @@ services:
data-connector-agent:
condition: service_healthy
data-connector-agent:
image: hasura/graphql-data-connector:v2.37.0
image: hasura/graphql-data-connector:v2.38.0
restart: always
ports:
- 8081:8081

View File

@ -30,7 +30,7 @@ services:
MONGO_INITDB_ROOT_USERNAME: mongouser
MONGO_INITDB_ROOT_PASSWORD: mongopassword
hasura:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
restart: always
ports:
- 8080:8080
@ -60,7 +60,7 @@ services:
postgres:
condition: service_healthy
mongo-data-connector:
image: hasura/mongo-data-connector:v2.37.0
image: hasura/mongo-data-connector:v2.38.0
ports:
- 3000:3000
volumes:

View File

@ -13,7 +13,7 @@ services:
environment:
POSTGRES_PASSWORD: postgrespassword
hasura:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
restart: always
ports:
- 8080:8080
@ -48,7 +48,7 @@ services:
data-connector-agent:
condition: service_healthy
data-connector-agent:
image: hasura/graphql-data-connector:v2.37.0
image: hasura/graphql-data-connector:v2.38.0
restart: always
ports:
- 8081:8081

View File

@ -13,7 +13,7 @@ services:
environment:
POSTGRES_PASSWORD: postgrespassword
hasura:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
restart: always
ports:
- 8080:8080
@ -48,7 +48,7 @@ services:
data-connector-agent:
condition: service_healthy
data-connector-agent:
image: hasura/graphql-data-connector:v2.37.0
image: hasura/graphql-data-connector:v2.38.0
restart: always
ports:
- 8081:8081

View File

@ -13,7 +13,7 @@ services:
environment:
POSTGRES_PASSWORD: postgrespassword
hasura:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
restart: always
ports:
- 8080:8080
@ -48,7 +48,7 @@ services:
data-connector-agent:
condition: service_healthy
data-connector-agent:
image: hasura/graphql-data-connector:v2.37.0
image: hasura/graphql-data-connector:v2.38.0
restart: always
ports:
- 8081:8081

View File

@ -13,7 +13,7 @@ services:
environment:
POSTGRES_PASSWORD: postgrespassword
hasura:
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
restart: always
ports:
- 8080:8080
@ -48,7 +48,7 @@ services:
data-connector-agent:
condition: service_healthy
data-connector-agent:
image: hasura/graphql-data-connector:v2.37.0
image: hasura/graphql-data-connector:v2.38.0
restart: always
ports:
- 8081:8081

View File

@ -16,7 +16,7 @@ spec:
spec:
containers:
- name: graphql-engine
image: hasura/graphql-engine:v2.37.0
image: hasura/graphql-engine:v2.38.0
ports:
- containerPort: 8080
readinessProbe:

View File

@ -18,7 +18,7 @@ spec:
app: hasura
spec:
containers:
- image: hasura/graphql-engine:v2.37.0
- image: hasura/graphql-engine:v2.38.0
imagePullPolicy: IfNotPresent
name: hasura
env:

View File

@ -5,11 +5,7 @@ final: prev: {
overrides = prev.lib.composeExtensions
(old.overrides or (_: _: { }))
(hfinal: hprev: {
graphql-parser = (final.haskell.packages.${prev.ghcName}.callCabal2nix "graphql-parser" ../../server/lib/graphql-parser-hs { }).overrideScope (
final: prev: {
hedgehog = final.hedgehog_1_2;
}
);
graphql-parser = (final.haskell.packages.${prev.ghcName}.callCabal2nix "graphql-parser" ../../server/lib/graphql-parser { });
});
});
};

View File

@ -78,17 +78,12 @@ let
pkgs.jq
];
consoleInputs = [
pkgs.google-cloud-sdk
pkgs."nodejs-${versions.nodejsVersion}_x"
pkgs."nodejs-${versions.nodejsVersion}_x".pkgs.typescript-language-server
];
docsInputs = [
pkgs.yarn
];
integrationTestInputs = [
pkgs.nodejs
pkgs.python3
pkgs.pyright # Python type checker
];
@ -101,7 +96,7 @@ let
hls
pkgs.haskell.packages.${pkgs.ghcName}.alex
# pkgs.haskell.packages.${pkgs.ghcName}.apply-refact
pkgs.haskell.packages.${pkgs.ghcName}.apply-refact
(versions.ensureVersion pkgs.haskell.packages.${pkgs.ghcName}.cabal-install)
(pkgs.haskell.lib.dontCheck (pkgs.haskell.packages.${pkgs.ghcName}.ghcid))
pkgs.haskell.packages.${pkgs.ghcName}.happy
@ -163,7 +158,7 @@ let
++ integrationTestInputs;
in
pkgs.mkShell ({
buildInputs = baseInputs ++ consoleInputs ++ docsInputs ++ serverDeps ++ devInputs ++ ciInputs;
buildInputs = baseInputs ++ docsInputs ++ serverDeps ++ devInputs ++ ciInputs;
} // pkgs.lib.optionalAttrs pkgs.stdenv.isDarwin {
shellHook = ''
export DYLD_LIBRARY_PATH='${dynamicLibraryPath}'

View File

@ -11,6 +11,4 @@ in
else throw "Invalid version for package ${package.pname}: expected ${expected}, got ${package.version}";
ghcVersion = pkgs.lib.strings.fileContents ../.ghcversion;
nodejsVersion = pkgs.lib.strings.fileContents ../.nvmrc;
}

View File

@ -1,7 +1,7 @@
# DATE VERSION: 2024-01-23
# DATE VERSION: 2024-03-13
# Modify the above date version (YYYY-MM-DD) if you want to rebuild the image
FROM ubuntu:jammy-20240111
FROM ubuntu:jammy-20240227
### NOTE! Shared libraries here need to be kept in sync with `server-builder.dockerfile`!

View File

@ -1,5 +1,5 @@
{
"cabal-install": "3.10.1.0",
"cabal-install": "3.10.2.1",
"ghc": "9.6.4",
"hlint": "3.6.1",
"ormolu": "0.7.2.0"

View File

@ -424,7 +424,6 @@ common lib-depends
-- logging related
, base64-bytestring >= 1.0
, auto-update
-- regex related
, regex-tdfa >=1.3.1 && <1.4
@ -662,6 +661,7 @@ library
-- Exposed for benchmark:
, Hasura.Cache.Bounded
, Hasura.CredentialCache
, Hasura.CachedTime
, Hasura.Logging
, Hasura.HTTP
, Hasura.PingSources

View File

@ -54,6 +54,7 @@ library
Database.PG.Query.Pool
Database.PG.Query.PTI
Database.PG.Query.Transaction
Database.PG.Query.URL
build-depends:
, aeson
@ -65,6 +66,9 @@ library
, ekg-prometheus
, hashable
, hashtables
-- for our HASURA_SECRETS_BLOCKING_FORCE_REFRESH_URL hook
, http-client
, http-types
, mmorph
, monad-control
, mtl
@ -94,19 +98,20 @@ test-suite pg-client-tests
Interrupt
Timeout
Jsonb
URL
build-depends:
, aeson
, async
, base
, bytestring
, hspec
, mtl
, pg-client
, postgresql-libpq
, safe-exceptions
, time
, transformers
, aeson
, mtl
, postgresql-libpq
benchmark pg-client-bench
import: common-all
@ -123,5 +128,4 @@ benchmark pg-client-bench
, hasql-transaction
, pg-client
, tasty-bench
, text
, transformers

View File

@ -48,13 +48,14 @@ where
import Control.Concurrent.Interrupt (interruptOnAsyncException)
import Control.Exception.Safe (Exception, SomeException (..), catch, throwIO)
import Control.Monad (unless)
import Control.Monad.Except (MonadError (throwError))
import Control.Monad.IO.Class (MonadIO (liftIO))
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Except (ExceptT, runExceptT, withExceptT)
import Control.Retry (RetryPolicyM)
import Control.Retry qualified as Retry
import Data.Aeson (ToJSON (toJSON), Value (String), genericToJSON, object, (.=))
import Data.Aeson (ToJSON (toJSON), Value (String), encode, genericToJSON, object, (.=))
import Data.Aeson.Casing (aesonDrop, snakeCase)
import Data.Aeson.TH (mkToJSON)
import Data.Bool (bool)
@ -74,9 +75,13 @@ import Data.Text.Encoding (decodeUtf8, decodeUtf8With, encodeUtf8)
import Data.Text.Encoding.Error (lenientDecode)
import Data.Time (NominalDiffTime, UTCTime)
import Data.Word (Word16, Word32)
import Database.PG.Query.URL (encodeURLPassword)
import Database.PostgreSQL.LibPQ qualified as PQ
import Database.PostgreSQL.Simple.Options qualified as Options
import GHC.Generics (Generic)
import Network.HTTP.Client
import Network.HTTP.Types.Status (statusCode)
import System.Environment (lookupEnv)
import Prelude
{-# ANN module ("HLint: ignore Use tshow" :: String) #-}
@ -118,7 +123,7 @@ readDynamicURIFile path = do
<> Text.pack path
<> ": "
<> Text.pack (show e)
pure $ Text.strip uriDirty
pure $ encodeURLPassword $ Text.strip uriDirty
where
-- Text.readFile but explicit, ignoring locale:
readFileUtf8 = fmap decodeUtf8 . BS.readFile
@ -209,6 +214,7 @@ readConnErr conn = do
pgRetrying ::
(MonadIO m) =>
Maybe String ->
-- | An action to perform on error
IO () ->
PGRetryPolicyM m ->
PGLogger ->
@ -242,6 +248,36 @@ initPQConn ::
IO PQ.Connection
initPQConn ci logger = do
host <- extractHost (ciDetails ci)
-- if this is a dynamic connection, we'll signal to refresh the secret (if
-- configured) during each retry, ensuring we don't make too many connection
-- attempts with the wrong credentials and risk getting locked out
resetFn <- do
mbUrl <- lookupEnv "HASURA_SECRETS_BLOCKING_FORCE_REFRESH_URL"
case (mbUrl, ciDetails ci) of
(Just url, CDDynamicDatabaseURI path) -> do
manager <- newManager defaultManagerSettings
-- Create the request
let body = encode $ object ["filename" .= path]
initialRequest <- parseRequest url
let request =
initialRequest
{ method = "POST",
requestBody = RequestBodyLBS body,
requestHeaders = [("Content-Type", "application/json")]
}
-- The action to perform on each retry. This must only return after
-- the secrets file has been refreshed.
return $ do
status <- statusCode . responseStatus <$> httpLbs request manager
unless (status >= 200 && status < 300) $
logger $
PLERetryMsg $
object
["message" .= String "Forcing refresh of secret file at HASURA_SECRETS_BLOCKING_FORCE_REFRESH_URL seems to have failed. Retrying anyway."]
_ -> pure $ pure ()
-- Retry if postgres connection error occurs
pgRetrying host resetFn retryP logger $ do
-- Initialise the connection
@ -252,7 +288,6 @@ initPQConn ci logger = do
let connOk = s == PQ.ConnectionOk
bool (whenConnNotOk conn) (whenConnOk conn) connOk
where
resetFn = return ()
retryP = mkPGRetryPolicy $ ciRetries ci
whenConnNotOk conn = Left . PGConnErr <$> readConnErr conn

View File

@ -15,6 +15,7 @@ module Database.PG.Query.Pool
PGPoolStats (..),
PGPoolMetrics (..),
getInUseConnections,
getMaxConnections,
defaultConnParams,
initPGPool,
resizePGPool,
@ -97,6 +98,9 @@ data PGPoolMetrics = PGPoolMetrics
getInUseConnections :: PGPool -> IO Int
getInUseConnections = RP.getInUseResourceCount . _pool
getMaxConnections :: PGPool -> IO Int
getMaxConnections = RP.getMaxResources . _pool
data ConnParams = ConnParams
{ cpStripes :: !Int,
cpConns :: !Int,

View File

@ -0,0 +1,32 @@
{-# LANGUAGE DerivingStrategies #-}
{-# LANGUAGE OverloadedStrings #-}
module Database.PG.Query.URL
( encodeURLPassword,
)
where
import Data.Text (Text)
import Data.Text qualified as Text
import Data.Text.Encoding (decodeUtf8, encodeUtf8)
import Network.HTTP.Types.URI (urlEncode)
import Prelude
-- | It is possible and common for postgres URLs to have passwords with special
-- characters in them (e.g. AWS Secrets Manager passwords). Current URI parsing
-- libraries fail at parsing postgres URIs with special characters. Also note
-- that encoding the whole URI causes postgres to fail as well. This only
-- encodes the password when given a URL.
encodeURLPassword :: Text -> Text
encodeURLPassword url =
case Text.breakOnEnd "://" url of
(_, "") -> url
(scheme, urlWOScheme) -> case Text.breakOnEnd "@" urlWOScheme of
("", _) -> url
(auth, rest) -> case Text.splitOn ":" $ Text.dropEnd 1 auth of
[user] -> scheme <> user <> "@" <> rest
(user : pass) -> scheme <> user <> ":" <> encode' pass <> "@" <> rest
_ -> url
where
encode' arg =
decodeUtf8 $ urlEncode True (encodeUtf8 $ Text.intercalate ":" arg)
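The password-only encoding above can be sketched in Python (hypothetical helper name `encode_url_password`; it mirrors the `://` / last-`@` / first-`:` splitting of `Database.PG.Query.URL.encodeURLPassword`, but is not the shipped code):

```python
from urllib.parse import quote

def encode_url_password(url: str) -> str:
    """Percent-encode only the password portion of a connection URL,
    leaving scheme, user, host, and database untouched."""
    scheme, sep, rest = url.partition("://")
    if not sep:
        return url  # not a URL with a scheme
    # Split on the LAST '@' so literal '@' inside the password survives.
    auth, at, host = rest.rpartition("@")
    if not at:
        return url  # no userinfo section
    # Split on the FIRST ':' so ':' inside the password stays in it.
    user, colon, password = auth.partition(":")
    if not colon:
        return url  # user only, no password
    return f"{scheme}://{user}:{quote(password, safe='')}@{host}"
```

This reproduces the behavior exercised by the `specURL` tests later in the diff, e.g. `a[:sdf($#)]` becomes `a%5B%3Asdf%28%24%23%29%5D` while the rest of the URL is unchanged.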

View File

@ -24,6 +24,7 @@ import Jsonb (specJsonb)
import System.Environment qualified as Env
import Test.Hspec (describe, hspec, it, shouldBe, shouldReturn)
import Timeout (specTimeout)
import URL (specURL)
import Prelude
-------------------------------------------------------------------------------
@ -82,6 +83,7 @@ main = hspec $ do
specInterrupt
specTimeout
specJsonb
specURL
mkPool :: IO PGPool
mkPool = do

View File

@ -0,0 +1,58 @@
{-# LANGUAGE DerivingStrategies #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# OPTIONS_GHC -Wno-unused-imports -Wno-orphans -Wno-name-shadowing #-}
module URL (specURL) where
import Database.PG.Query.URL
import Test.Hspec
import Prelude
specURL :: Spec
specURL = do
describe "Only the password from a postgres url is encoded if it exists" $ do
it "Non-Postgres connection urls succeed" $ do
let url = "jdbc:mysql://localhostok?user=root&password=pass&allowMultiQueries=true"
url `shouldBe` encodeURLPassword url
it "Postgres simple urls succeed" $ do
let url = "postgres://localhost"
url `shouldBe` encodeURLPassword url
it "Postgres urls with no username, password, or database succeed" $ do
let url = "postgres://localhost:5432"
url `shouldBe` encodeURLPassword url
it "Postgres urls with no username or password succeed" $ do
let url = "postgres://localhost:5432/chinook"
url `shouldBe` encodeURLPassword url
it "Postgres urls with no password succeed" $ do
let url = "postgres://user@localhost:5432/chinook"
url `shouldBe` encodeURLPassword url
it "Postgres urls with no password but a : succeed" $ do
let url = "postgres://user:@localhost:5432/chinook"
url `shouldBe` encodeURLPassword url
it "Postgres urls with no username succeed" $ do
let url = "postgres://:pass@localhost:5432/chinook"
url `shouldBe` encodeURLPassword url
it "Postgres urls with simple passwords succeed" $ do
let url = "postgres://user:pass@localhost:5432/chinook"
url `shouldBe` encodeURLPassword url
it "Postgres urls with special characters passwords succeed" $ do
let url = "postgres://user:a[:sdf($#)]@localhost:5432/chinook"
expected = "postgres://user:a%5B%3Asdf%28%24%23%29%5D@localhost:5432/chinook"
expected `shouldBe` encodeURLPassword url
it "Postgres urls with special characters with @ passwords succeed" $ do
let url = "postgres://user:a@[:sdf($@#@)]@localhost:5432/chinook"
expected = "postgres://user:a%40%5B%3Asdf%28%24%40%23%40%29%5D@localhost:5432/chinook"
expected `shouldBe` encodeURLPassword url

View File

@ -34,6 +34,7 @@ module Data.Pool
createPool,
createPool',
resizePool,
getMaxResources,
tryTrimLocalPool,
tryTrimPool,
withResource,
@ -231,6 +232,9 @@ resizePool Pool {..} maxResources' = do
"invalid maximum resource count " ++ show maxResources'
atomically $ writeTVar maxResources maxResources'
getMaxResources :: Pool a -> IO Int
getMaxResources Pool {..} = readTVarIO maxResources
-- | Attempt to reduce resource allocation below maximum by dropping some unused
-- resources
tryTrimLocalPool :: (a -> IO ()) -> TVar Int -> LocalPool a -> IO ()

View File

@ -713,10 +713,10 @@ instance HttpLog AppM where
buildExtraHttpLogMetadata _ _ = ()
logHttpError logger loggingSettings userInfoM reqId waiReq req qErr headers _ _ =
logHttpError logger loggingSettings userInfoM reqId waiReq req qErr qTime cType headers _ _ =
unLoggerTracing logger
$ mkHttpLog
$ mkHttpErrorLogContext userInfoM loggingSettings reqId waiReq req qErr Nothing Nothing headers
$ mkHttpErrorLogContext userInfoM loggingSettings reqId waiReq req qErr qTime cType headers
logHttpSuccess logger loggingSettings userInfoM reqId waiReq reqBody response compressedResponse qTime cType headers (CommonHttpLogMetadata rb batchQueryOpLogs, ()) _ =
unLoggerTracing logger

View File

@ -0,0 +1,45 @@
-- safety for unsafePerformIO below
{-# OPTIONS_GHC -fno-cse -fno-full-laziness #-}
module Hasura.CachedTime (cachedRecentFormattedTimeAndZone) where
import Control.Concurrent (forkIO, threadDelay)
import Control.Exception (uninterruptibleMask_)
import Data.ByteString.Char8 qualified as B8
import Data.IORef
import Data.Time.Clock qualified as Time
import Data.Time.Format
import Data.Time.LocalTime qualified as Time
import Hasura.Prelude
import System.IO.Unsafe
-- | A fast timestamp source, updated every 1sec, at the whims of the RTS, calling
-- 'Time.getCurrentTimeZone' and 'Time.getCurrentTime'
--
-- We also store an equivalent RFC7231 timestamp for use in the @Date@ HTTP
-- header, avoiding 6% latency regression from computing it every time.
-- We use this at call sites to try to avoid warp's code path that uses the
-- auto-update library to do this same thing.
--
-- Formerly we used the auto-update library but observed bugs. See
-- "Hasura.Logging" and #10662
--
-- NOTE: if we wanted to make this more resilient to this thread being
-- descheduled for long periods, we could store a monotonic timestamp here
-- (fast); logging threads could then do the same and determine whether the
-- time is stale. I considered doing the same to also get more granular
-- timestamps, but it seems addUTCTime makes this just as slow as
-- getCurrentTime
cachedRecentFormattedTimeAndZone :: IORef (Time.UTCTime, Time.TimeZone, B8.ByteString)
{-# NOINLINE cachedRecentFormattedTimeAndZone #-}
cachedRecentFormattedTimeAndZone = unsafePerformIO do
tRef <- getTimeAndZone >>= newIORef
void $ forkIO $ uninterruptibleMask_ $ forever do
threadDelay $ 1000 * 1000
getTimeAndZone >>= writeIORef tRef
pure tRef
where
getTimeAndZone = do
!tz <- Time.getCurrentTimeZone
!t <- Time.getCurrentTime
let !tRFC7231 = B8.pack $ formatTime defaultTimeLocale "%a, %d %b %Y %H:%M:%S GMT" t
pure (t, tz, tRFC7231)
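The cached-time module above follows a general pattern: a background thread refreshes a pre-formatted timestamp about once per second, so request hot paths read a shared reference instead of formatting the current time on every call. A minimal Python sketch of the same idea (illustrative only, not Hasura's implementation):

```python
import threading
import time
from email.utils import formatdate  # produces an RFC 7231 IMF-fixdate string

# Shared, pre-formatted HTTP Date value; refreshed by a daemon thread.
_cached = formatdate(usegmt=True)

def _refresh_loop() -> None:
    global _cached
    while True:
        time.sleep(1.0)
        _cached = formatdate(usegmt=True)

threading.Thread(target=_refresh_loop, daemon=True).start()

def cached_http_date() -> str:
    # Reading a str reference is atomic; the value may be up to ~1s stale,
    # which is acceptable for a Date header.
    return _cached
```

The trade-off is the one the Haskell comment notes: if the refresher thread is descheduled for a long stretch, readers silently get a stale value.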

View File

@ -75,7 +75,7 @@ ourIdleGC (Logger logger) idleInterval minGCInterval maxNoGCInterval =
else do
when (areOverdue && not areIdle)
$ logger
$ UnstructuredLog LevelWarn
$ UnstructuredLog LevelInfo
$ "Overdue for a major GC: forcing one even though we don't appear to be idle"
performMajorGC
startTimer >>= go (gcs + 1) (major_gcs + 1) True

View File

@ -363,7 +363,7 @@ resolveAsyncActionQuery userInfo annAction responseErrorsConfig =
\response -> makeActionResponseNoRelations annFields outputType HashMap.empty False <$> decodeValue response
IR.AsyncId -> pure $ AO.String $ actionIdToText actionId
IR.AsyncCreatedAt -> pure $ AO.toOrdered $ J.toJSON _alrCreatedAt
IR.AsyncErrors -> pure $ AO.toOrdered $ J.toJSON $ mkQErrFromErrorValue _alrErrors
IR.AsyncErrors -> pure $ AO.toOrdered $ J.toJSON $ mkQErrFromErrorValue <$> _alrErrors
pure $ encJFromOrderedValue $ AO.object resolvedFields
IR.ASISource sourceName sourceConfig ->
let jsonAggSelect = mkJsonAggSelect outputType
@ -413,12 +413,12 @@ resolveAsyncActionQuery userInfo annAction responseErrorsConfig =
tablePermissions = RS.TablePerm annBoolExpTrue Nothing
in RS.AnnSelectG annotatedFields tableFromExp tablePermissions tableArguments stringifyNumerics Nothing
where
mkQErrFromErrorValue :: Maybe J.Value -> QErr
mkQErrFromErrorValue :: J.Value -> QErr
mkQErrFromErrorValue actionLogResponseError =
let internal = ExtraInternal <$> (actionLogResponseError >>= (^? key "internal"))
let internal = ExtraInternal <$> (actionLogResponseError ^? key "internal")
internal' = if shouldIncludeInternal (_uiRole userInfo) responseErrorsConfig then internal else Nothing
errorMessageText = fromMaybe "internal: error in parsing the action log" $ actionLogResponseError >>= (^? key "error" . _String)
codeMaybe = actionLogResponseError >>= (^? key "code" . _String)
errorMessageText = fromMaybe "internal: error in parsing the action log" $ actionLogResponseError ^? key "error" . _String
codeMaybe = actionLogResponseError ^? key "code" . _String
code = maybe Unexpected ActionWebhookCode codeMaybe
in QErr [] HTTP.status500 errorMessageText code internal'
IR.AnnActionAsyncQuery _ actionId outputType asyncFields definitionList stringifyNumerics _ actionSource = annAction

View File

@ -39,7 +39,7 @@ import Hasura.RQL.Types.Common (SourceName)
import Hasura.RQL.Types.Roles (RoleName)
import Hasura.RQL.Types.Subscription (SubscriptionType (..))
import Hasura.Server.Logging (ModelInfo (..), ModelInfoLog (..))
import Hasura.Server.Prometheus (PrometheusMetrics (..), SubscriptionMetrics (..), liveQuerySubscriptionLabel, recordSubcriptionMetric)
import Hasura.Server.Prometheus (PrometheusMetrics (..), SubscriptionMetrics (..), liveQuerySubscriptionLabel, recordSubscriptionMetric)
import Hasura.Server.Types (GranularPrometheusMetricsState (..), ModelInfoLogState (..))
import Refined (unrefine)
import System.Metrics.Prometheus.Gauge qualified as Prometheus.Gauge
@ -121,7 +121,7 @@ pollLiveQuery pollerId pollerResponseState lqOpts (sourceName, sourceConfig) rol
(queryExecutionTime, mxRes) <- runDBSubscription @b sourceConfig query (over (each . _2) C._csVariables cohorts) resolvedConnectionTemplate
let dbExecTimeMetric = submDBExecTotalTime $ pmSubscriptionMetrics $ prometheusMetrics
recordSubcriptionMetric
recordSubscriptionMetric
granularPrometheusMetricsState
True
operationNamesMap
@ -215,7 +215,7 @@ pollLiveQuery pollerId pollerResponseState lqOpts (sourceName, sourceConfig) rol
when (modelInfoLogStatus' == ModelInfoLogOn) $ do
for_ (modelInfoList) $ \(ModelInfoPart modelName modelType modelSourceName modelSourceType modelQueryType) -> do
L.unLogger logger $ ModelInfoLog L.LevelInfo $ ModelInfo modelName (toTxt modelType) (toTxt <$> modelSourceName) (toTxt <$> modelSourceType) (toTxt modelQueryType) False
recordSubcriptionMetric
recordSubscriptionMetric
granularPrometheusMetricsState
True
operationNamesMap

Some files were not shown because too many files have changed in this diff.