DAML

This is the Terraform code used by the DAML repository to deploy supporting infrastructure such as the Bazel caches, Nix caches and Azure Pipelines (VSTS) agents.

Setup

To deploy the infrastructure changes, you will need to get access to the da-dev-gcp-daml-language Google project from DA IT. Then run gcloud auth login to configure the local credentials.

Deployment

All the infrastructure is currently deployed using Terraform. For convenience, we have a little wrapper script that you can run to apply the latest changes:

$ ./apply

Writer service-account key

To avoid holding the secret key in the store, the key has to be created through the UI.

This can be done here: https://console.cloud.google.com/iam-admin/serviceaccounts/details/104272946446260011088?project=da-dev-gcp-daml-language

Setting up credentials

In order to interact with these Terraform files, you will need security to give you access to the relevant GCP project (da-dev-gcp-daml-language), and login via gcloud by running:

gcloud auth application-default login --account your.name@gcloud-domain.com

Resetting build nodes

Permissions to reset build nodes are defined in periodic_killer.tf using the killCiNodes role. CI nodes are managed, so a killed node will immediately be replaced by a new one with the exact same configuration (but starting its initialization from scratch); killing a node and resetting a node can therefore be seen as the same operation.

Nodes can be listed with

gcloud compute instances list --project=da-dev-gcp-daml-language

and individual nodes can be killed with

gcloud compute instances --project=da-dev-gcp-daml-language delete --zone=us-east4-a vsts-agent-linux-dhw4

where zone and name have to match.
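Since the zone has to match the instance, it can be looked up from the JSON output of the list command instead of being copied by hand. A minimal sketch, using a canned stand-in for the real `gcloud compute instances list --format=json` output (the instance name and zone here are illustrative):

```shell
# Canned stand-in for:
#   gcloud compute instances list --format=json --project=da-dev-gcp-daml-language
sample='[{"name":"vsts-agent-linux-dhw4","zone":"https://www.googleapis.com/compute/v1/projects/p/zones/us-east4-a"}]'

# Look up the zone for one instance; the zone field is a URL, so keep only
# the last path segment.
name=vsts-agent-linux-dhw4
zone=$(echo "$sample" | jq -r --arg n "$name" '.[] | select(.name == $n) | .zone | sub(".*/"; "")')
echo "$zone"
# → us-east4-a
```

The resulting `$zone` can then be passed straight to the delete command above.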

As a reference, here are a couple of zsh functions I have added to my shell to make my life easier:

# Build a map from machine name to zone for all vsts-* instances, cached in
# the $machines variable.
refresh_machines() {
    machines=$(gcloud compute instances list --format=json --project=da-dev-gcp-daml-language | jq -c '[.[] | select (.name | startswith("vsts-")) | {key: .name, value: .zone | sub (".*/"; "")}] | from_entries')
}

# Delete the given machines, looking up each one's zone in the cached map.
kill_machine() {
    if [ -z "$machines" ]; then
        refresh_machines
    fi
    for machine in "$@"; do
        gcloud -q compute instances --project=da-dev-gcp-daml-language delete --zone=$(echo $machines | jq -r ".[\"$machine\"]") $machine
    done
}

# zsh completion for kill_machine: offer only the machine names not already
# present on the command line.
_kill_machine() {
    local machine_names
    if [ -z "$machines" ]; then
        refresh_machines
    fi
    machine_names=$(echo $machines | jq -r "keys - $(echo -n $words | jq -sRc 'split(" ")') | .[]")
    _arguments "*: :($machine_names)"
}
compdef _kill_machine kill_machine
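The jq filter inside refresh_machines can be exercised on its own with canned data in place of the live gcloud output, which makes it easy to see what the cached map looks like. The instance names and zones below are illustrative:

```shell
# Canned stand-in for:
#   gcloud compute instances list --format=json --project=da-dev-gcp-daml-language
sample='[
  {"name": "vsts-agent-linux-dhw4", "zone": "https://www.googleapis.com/compute/v1/projects/p/zones/us-east4-a"},
  {"name": "vsts-agent-windows-abcd", "zone": "https://www.googleapis.com/compute/v1/projects/p/zones/us-east4-b"},
  {"name": "hoogle-server", "zone": "https://www.googleapis.com/compute/v1/projects/p/zones/us-east4-a"}
]'

# Same filter as refresh_machines: keep only vsts-* instances, reduce the
# zone URL to its last segment, and build a name → zone object.
machines=$(echo "$sample" | jq -c '[.[] | select (.name | startswith("vsts-")) | {key: .name, value: .zone | sub (".*/"; "")}] | from_entries')
echo "$machines"
# → {"vsts-agent-linux-dhw4":"us-east4-a","vsts-agent-windows-abcd":"us-east4-b"}
```

Note how the non-agent instance (hoogle-server) is filtered out, so kill_machine can only ever target build agents.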