Document and automate spinning up AlloyDB for testing.

We sometimes need to test against cloud databases. Here, we add a Terraform module to start a new AlloyDB cluster and instance, which we can then use for testing purposes.

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7002
GitOrigin-RevId: 2d661b5cc6d60e47485ea68b781e13426ed4f097
Samir Talwar 2022-11-24 15:14:30 +01:00 committed by hasura-bot
parent eebeb5cc3b
commit a0dc296ede
9 changed files with 313 additions and 0 deletions


@ -68,6 +68,7 @@ let
devInputs = [
pkgs.nixpkgs-fmt
pkgs.shellcheck
pkgs.terraform
];
ciInputs = [

server/test-manual/.gitignore

@ -0,0 +1,4 @@
.terraform
.terraform.tfstate.*
terraform.tfstate
terraform.tfstate.backup


@ -0,0 +1,101 @@
# Manual testing helpers

The code in this directory will help you spin up manual testing environments. It uses Terraform, which you can [install using your package manager][install terraform].

You will need to specify your name to use these, so that you don't step on the toes of other developers. We recommend using the name portion of your Hasura email address, without any dots.

You can specify your name either by passing `-var name=<your name>` to each `terraform` command, or by setting the `TF_VAR_name` environment variable. If you do the latter, you might find it easiest to add it to _.envrc.local_ in the repository root and use [direnv] to load it automatically:
```shell
$ export TF_VAR_name='tanmaigopal'
```
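
Alternatively, you can pass the variable directly on each invocation instead (alongside any other variables a given configuration requires); for example, with the same example name:

```shell
$ terraform apply -var name='tanmaigopal'
```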
Note that Terraform creates files locally to manage its internal state. **Do not delete the state files**, or Terraform will lose track of what you've created.

Once you've started something, it's your job to stop it too. Please don't leave it lying around.

[install terraform]: https://developer.hashicorp.com/terraform/downloads
[direnv]: https://direnv.net/
## Google Cloud

To spin up these resources, you will need a Google Cloud account and a project with billing enabled.

First, install the [Google Cloud CLI].

Next, authenticate:
```shell
$ gcloud auth application-default login
```
Set your project (you can add this to _.envrc.local_ if you want):
```shell
$ export GOOGLE_CLOUD_PROJECT='<project name>'
```
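
If you use [direnv], a _.envrc.local_ at the repository root might end up looking something like this (the values here are placeholders; substitute your own):

```shell
# .envrc.local: loaded automatically by direnv (example values only)
export TF_VAR_name='tanmaigopal'
export GOOGLE_CLOUD_PROJECT='my-gcp-project'
```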
Then proceed.

[google cloud cli]: https://cloud.google.com/cli

### AlloyDB

To spin up a test instance, `cd` into the _alloydb_ directory, and run:
```shell
$ terraform init
$ terraform apply -var password='<a strong password>'
```
(You can generate a strong password by running `openssl rand 32 | base64`.)
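
If you would rather keep the password out of your shell history entirely, one option (just a sketch; adjust to taste) is to generate it inline and read it back later from the Terraform output:

```shell
$ terraform apply -var password="$(openssl rand 32 | base64)"
```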
If the plan looks good, type "yes".

You may have to wait a few minutes for the cluster and instance to start up, and then a couple more while the bastion instance comes online.

The URL can be found by running `terraform output url` (be warned, it includes your super-secret password).

Once everything is up and running, you can connect to the AlloyDB proxy as follows:
```shell
$ psql "$(terraform output -raw url)"
```
Test all you like. When you're done, run:
```shell
$ terraform destroy
```
#### Troubleshooting

If you're having trouble, you may want to debug the bastion instance by SSHing in and reading the auth proxy logs.

First, enable SSH by uncommenting the "ssh" tag in _bastion.tf_, and running `terraform apply` again.

Then take a peek:
```shell
$ gcloud compute ssh <name>-testing-alloydb-bastion
bastion$ cat /alloydb-auth-proxy.log
```
## Modifying these files

You are advised to read the Terraform documentation:

- [General documentation](https://developer.hashicorp.com/terraform/docs)
- [Google provider documentation](https://registry.terraform.io/providers/hashicorp/google/latest/docs)

When reading provider documentation, you might notice discrepancies between the pinned version here and the version documented. To mitigate this, upgrade first:
```shell
$ terraform init -upgrade
```
Then fix any issues. After that, you can make your changes.

Once you're done, please run `terraform fmt` before committing.
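
For example, a final check before committing might look like this (`terraform validate` is optional, but cheap):

```shell
$ terraform fmt
$ terraform validate
```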


@ -0,0 +1,40 @@
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.
provider "registry.terraform.io/hashicorp/google" {
version = "4.44.1"
hashes = [
"h1:raHEXJpQCHohqYupGPt56hD7LrO0uq3O/ve7x4AQ14I=",
"zh:0668252985a677707bf46e393a96ea63b58b93d730f4daf8357246f7e6dd8ba9",
"zh:1cbfa5c7dcf02acb90718474a6b0e6af6a7c839c964270feaf55cddb537ef762",
"zh:44d601bc4667158c45ab584e60662f69d38e9febbb65b9bb1c5a84fdccc8b91e",
"zh:452a2c703c5c6024696d818a1e008067fa29cdf99bdae6847d5f8eed7a0b4d75",
"zh:55a0a22fd03fabdfff7debb5840ffa68a93850c864f48a852d6eb1f74ecae915",
"zh:7f59721375d9bb05fcc6eb17991a7bf1aab1b0f180107515ec3b9e298c6c6152",
"zh:a09ce6734c8f2cfb1b5855f073105b2403305fee0f68d846b1303ca37d516a28",
"zh:ae8e413ee02824c44b85b4d0aa71335940757a7a33922f55c13a800bc5e15b45",
"zh:c2947aea252929ba608bcc5b892d045541ce7224f104a542c18c5487287df9ef",
"zh:df463ad9ef19641e4502879af051dc17c6880882d436026255d8f9103992dc42",
"zh:ee113c1f6e32fa4c41fda9191d7fd50a1137d1072d1a39627fe90e10941d48ea",
"zh:f569b65999264a9416862bca5cd2a6177d94ccb0424f3a4ef424428912b9cb3c",
]
}
provider "registry.terraform.io/hashicorp/google-beta" {
version = "4.44.1"
hashes = [
"h1:BxydrEO+NuBsj3bDWl5s2qUpQH24G/BxMPX9pb04hEA=",
"zh:1993bd9512f4b48b3de8faac5c9e6f82fb0597bb18028a4308e1b9d3464773be",
"zh:38c35a4bae7c3d26836d3a800ed3eeb48dcb2be0368322ee43c545cd1cc2a800",
"zh:58c80343761a8f8a5d410fe78c7f772ab963cb627afb95138ec35496a17c8742",
"zh:7161b97c8af746d7cce94a7e69a471d51f602a7c2dd30aab6ed2db8cdc6ee526",
"zh:74e21c303bc2c25ba3606bc88ecaeb2b43e59d5d428d7d3174be30fecc5cb288",
"zh:7caad09cadecd2b69e63452b1a09b7b3a371d7ec2bd72b339ece4c8547146a7b",
"zh:b27d35b93cd99a352cfa0c285602d3df24eb75341511a6d70eb6a602eeccb984",
"zh:bce44c67756c364bb2ea99e5e626ff05c7a6629d1a548d308161736e2b060eba",
"zh:bd2d8cbd4163987e3eaa48f30e6ac15d234fb2040ea1927695428aadfe5558e7",
"zh:e358d18f585d35fe2d688cf7d51c23f3d1ef3e0977f8037435ba316ab3dc675b",
"zh:f5669ce94b63ea612542a9d167108927c6bab070fac1199a11695fb2514d71d9",
"zh:f569b65999264a9416862bca5cd2a6177d94ccb0424f3a4ef424428912b9cb3c",
]
}


@ -0,0 +1,34 @@
# The AlloyDB cluster, with a single instance.
# The initial user password must be provided.
variable "password" {
type = string
sensitive = true
}
resource "google_alloydb_cluster" "default" {
provider = google-beta
cluster_id = "${var.name}-testing-alloydb"
network = "projects/${data.google_project.project.number}/global/networks/${google_compute_network.default.name}"
location = local.region
initial_user {
user = var.name
password = var.password
}
}
resource "google_alloydb_instance" "primary" {
provider = google-beta
cluster = google_alloydb_cluster.default.name
instance_id = "${var.name}-testing-alloydb-instance"
instance_type = "PRIMARY"
depends_on = [
google_service_networking_connection.default
]
}
output "url" {
value = "postgresql://${var.name}:${var.password}@${google_compute_instance.bastion.network_interface.0.access_config.0.nat_ip}/postgres"
sensitive = true
}


@ -0,0 +1,54 @@
# The bastion instance, which runs the AlloyDB auth proxy.
# This grabs the name of the latest Debian image.
data "google_compute_image" "bastion_image" {
family = "debian-11"
project = "debian-cloud"
}
resource "google_compute_instance" "bastion" {
name = "${var.name}-testing-alloydb-bastion"
machine_type = "e2-small"
zone = "us-central1-a"
tags = [
# "ssh", # uncomment and re-apply to SSH in
"postgres",
]
network_interface {
network = google_compute_network.default.id
# Runs on an ephemeral public IP address.
access_config {}
}
boot_disk {
initialize_params {
image = data.google_compute_image.bastion_image.self_link
}
}
# This service account has client access to AlloyDB.
service_account {
email = google_service_account.service_account.email
scopes = ["cloud-platform"]
}
# On startup, download the AlloyDB auth proxy and run it.
# Logs are written to /alloydb-auth-proxy.log. You can SSH in and view them if necessary.
metadata_startup_script = <<-EOT
#!/usr/bin/env bash
set -e
set -u
set -o pipefail
curl -fsSL https://storage.googleapis.com/alloydb-auth-proxy/v0.6.1/alloydb-auth-proxy.linux.amd64 -o alloydb-auth-proxy
chmod +x alloydb-auth-proxy
nohup ./alloydb-auth-proxy \
'${google_alloydb_instance.primary.id}' \
--address "0.0.0.0" \
>& alloydb-auth-proxy.log &
EOT
}


@ -0,0 +1,47 @@
# AlloyDB requires a dedicated virtual private network.
resource "google_compute_network" "default" {
name = "${var.name}-testing-alloydb-network"
}
resource "google_compute_global_address" "private_ip_alloc" {
name = "${var.name}-testing-alloydb-cluster"
address_type = "INTERNAL"
purpose = "VPC_PEERING"
prefix_length = 16
network = google_compute_network.default.id
}
resource "google_service_networking_connection" "default" {
network = google_compute_network.default.id
service = "servicenetworking.googleapis.com"
reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name]
}
# Allows instances with the tag "ssh" to expose port 22.
resource "google_compute_firewall" "ssh" {
name = "${var.name}-testing-alloydb-allow-ssh"
allow {
ports = ["22"]
protocol = "tcp"
}
direction = "INGRESS"
network = google_compute_network.default.id
priority = 1000
source_ranges = ["0.0.0.0/0"]
target_tags = ["ssh"]
}
# Allows instances with the tag "postgres" to expose port 5432.
resource "google_compute_firewall" "postgres" {
name = "${var.name}-testing-alloydb-allow-postgres"
allow {
ports = ["5432"]
protocol = "tcp"
}
direction = "INGRESS"
network = google_compute_network.default.id
priority = 1000
source_ranges = ["0.0.0.0/0"]
target_tags = ["postgres"]
}


@ -0,0 +1,16 @@
locals {
region = "us-central1"
}
provider "google" {
region = local.region
}
# Your name, which will prefix all resources so we know who left instances lying around.
variable "name" {
type = string
}
data "google_project" "project" {
provider = google-beta
}


@ -0,0 +1,16 @@
# The service account is used by the bastion instance.
# It grants the relevant privileges for connecting to AlloyDB as a client.
resource "google_service_account" "service_account" {
account_id = "${var.name}-testing-alloydb"
display_name = "Testing AlloyDB for ${var.name}"
}
resource "google_project_iam_binding" "service_account_alloydb_client" {
project = data.google_project.project.id
role = "roles/alloydb.client"
members = [
google_service_account.service_account.member,
]
}