mirror of
https://github.com/digital-asset/daml.git
synced 2024-09-20 09:17:43 +03:00
31a76a4a2a
This morning we started with very restricted CI pools (2/6 for Windows and 7/20 for Linux), apparently because the region we run in (us-east1) has three zones, two of which were unable to allocate new nodes, and the default policy is to distribute nodes evenly between zones. I've manually changed the distribution policy; unfortunately, this option is not yet available in our version of the GCP Terraform plugin, so it cannot be set in code for now.
CHANGELOG_BEGIN
CHANGELOG_END
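A hedged sketch of what the manual change looks like once the provider supports it: newer versions of the `google` Terraform provider expose `distribution_policy_target_shape` on regional managed instance groups. The resource and names below are hypothetical, not taken from this repository:

```hcl
# Sketch only: assumes the CI pool is a regional managed instance group and a
# provider version that supports distribution_policy_target_shape.
resource "google_compute_region_instance_group_manager" "ci" {
  name               = "ci-linux-pool" # hypothetical name
  region             = "us-east1"
  base_instance_name = "ci"

  version {
    instance_template = google_compute_instance_template.ci.id # hypothetical
  }

  # "ANY" lets GCE place new nodes in whichever zones have capacity,
  # instead of the default even spread across all zones.
  distribution_policy_target_shape = "ANY"
}
```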
Files in this module:
- google_compute.tf
- google_storage.tf
- outputs.tf
- README.md
- variables.tf
# A Google Storage Bucket + CDN configuration
This module contains essentially two things:
- A GCS bucket to store objects into
- A load-balancer connected to it
It also makes a few assumptions:
- A service account will be created to write into the bucket
- All objects are meant to be publicly-readable
## Module config
> terraform-docs md .
## Inputs
Name | Description | Type | Default | Required |
---|---|---|---|---|
cache_retention_days | The number of days to keep the objects around | string | n/a | yes |
labels | Labels to apply on all the resources | map | `<map>` | no |
name | Name prefix for all the resources | string | n/a | yes |
project | GCP project name | string | n/a | yes |
region | GCP region in which to create the resources | string | n/a | yes |
ssl_certificate | A reference to the SSL certificate, google managed or not | string | n/a | yes |
## Outputs
Name | Description |
---|---|
bucket_name | Name of the GCS bucket that will receive the objects. |
external_ip | The external IP assigned to the global forwarding rule. |
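A hedged usage sketch tying the inputs and outputs together. The module source path, project, labels, and certificate reference below are all hypothetical placeholders, not values from this repository:

```hcl
# Hypothetical invocation of this module; adjust source and values to taste.
module "artifact_cdn" {
  source = "./gcs-cdn" # placeholder path

  name                 = "ci-artifacts"
  project              = "my-gcp-project"
  region               = "us-east1"
  cache_retention_days = "30"
  # Any SSL certificate reference works, Google-managed or not.
  ssl_certificate      = google_compute_managed_ssl_certificate.cdn.self_link
  labels = {
    team = "ci"
  }
}

# The module's outputs can then be wired into DNS records, upload jobs, etc.
output "cdn_bucket" {
  value = module.artifact_cdn.bucket_name
}

output "cdn_ip" {
  value = module.artifact_cdn.external_ip
}
```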