* ci: temp machines for scheduled killing experiment
Based on our discussions last week, I am exploring ways to move us to
permanent machines instead of preemptible ones. This should drastically
reduce the number of "cancelled" jobs.
The end goal is to have:
1. An instance group (per OS) that defines the actual CI nodes; this
would be pretty much the same as the existing ones, but with
`preemptible` set to false.
2. A separate machine that, on a cron (say at 4AM UTC), destroys all the
CI nodes.
The hope is that the group managers, which are set to maintain 10
nodes, will then recreate the "missing" nodes using their normal
startup procedure. A rough sketch of the killer's cron job follows
below.
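As a sketch (the "ci-" name prefix and the zone are made-up values for
illustration), the killer's job could boil down to:

```bash
#!/usr/bin/env bash
# Hypothetical sketch: delete every CI node so the group managers
# recreate them from scratch. The name prefix and zone are made up.
set -euo pipefail

ZONE="us-east1-b"

# List the current CI nodes by name prefix...
nodes=$(gcloud compute instances list \
          --filter="name ~ ^ci-" \
          --format="value(name)")

# ...and delete them; the group managers should then spin up
# replacements to get back to their target size.
for node in $nodes; do
  gcloud compute instances delete "$node" --zone="$ZONE" --quiet
done
```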
However, there are a lot of unknowns I would like to explore, and I need
a playground for that. This is where this PR comes in. As it stands, it
creates one "killer" machine and a temporary group manager. I will use
these to experiment with the GCP API in various ways without interfering
with the real CI nodes.
This experimentation will likely require multiple `terraform apply`
runs with different versions of the associated files, as well as
connecting to the machines and running various commands directly on
them. I will ensure all of that only affects the new machines created
as part of this PR, and therefore believe we do not need to go through
a separate round of approval for each change.
Once I have finished experimenting, I will create a new PR to clean up
the temporary resources created with this one and hopefully set up a
more permanent solution.
CHANGELOG_BEGIN
CHANGELOG_END
* add missing zone for killer instance
* add compute scope to killer
* authorize Terraform to shut down killer to update it
* change in plans: use a service account instead
* .
* add compute.instances.list permission
* add compute.instances.delete permission
* add cron script
* obligatory round of extra escaping
* fix PATH issue & crontab format
* smaller machine & less frequent reboots
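For the record, the PATH and crontab-format fixes above are the two
classic cron pitfalls: cron runs jobs with a minimal environment, and
the system-wide /etc/crontab takes an extra user field. A hypothetical
entry (the script path is made up):

```bash
# Sketch of an /etc/crontab entry; kill-ci-nodes.sh is a hypothetical
# name for the deletion script. cron's default PATH is minimal, so
# anything that needs gcloud must set it explicitly.
PATH=/usr/local/bin:/usr/bin:/bin
# m h dom mon dow user command
0 4 * * * root /usr/local/bin/kill-ci-nodes.sh >> /var/log/ci-killer.log 2>&1
```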
Following the happy resolution of #4370 in #4371, we do not need the
temporary nodes anymore. This PR therefore removes them.
CHANGELOG_BEGIN
CHANGELOG_END
This is an attempt to apply a potential fix discovered as part of the
investigation in #4370. The issue seems to be that Chocolatey is using
a protocol deemed not secure enough and disabled in recent Windows
images (our node creation script dynamically selects the latest
"Windows 2016" server image from GCP).
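For context, "dynamically selects the latest image" amounts to a
family lookup along these lines (a sketch of the mechanism, not
necessarily the exact call in our script):

```bash
# Sketch: resolve the newest image in the windows-2016 family. This
# kind of lookup silently picks up newer, stricter Windows images.
gcloud compute images describe-from-family windows-2016 \
  --project=windows-cloud \
  --format="value(name)"
```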
CHANGELOG_BEGIN
CHANGELOG_END
Today we don't have any Windows machine in the CI pool. The machine
template has not changed since 2019-11-21, yet as of today when the
machine starts GCP proudly declares
> GCEMetadataScripts: No startup scripts to run.
despite the script being defined as `sysprep-specialize-script-ps1`, as
per the
[documentation](https://cloud.google.com/compute/docs/startupscript).
Also, it used to work and we haven't changed anything.
I'm not quite sure what's going on or how to investigate, but I think
at the very least we can try to unblock the team by having a set of
machines we initialize manually. This PR is meant to do that.
This is the same changeset as a877491139
and 16da700532, except that it now
specifies 5 machines instead of just one.
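Two generic gcloud calls that help when poking at this kind of issue
(instance name and zone are made up):

```bash
# The serial console is where the GCEMetadataScripts line quoted above
# shows up.
gcloud compute instances get-serial-port-output ci-windows-1 \
  --zone=us-east1-b

# Double-check that the sysprep-specialize-script-ps1 metadata key is
# actually set on the instance.
gcloud compute instances describe ci-windows-1 \
  --zone=us-east1-b \
  --format="yaml(metadata)"
```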
CHANGELOG_BEGIN
CHANGELOG_END
The recent changes to the way in which we build npm packages with Bazel
have caused a lot of issues on Windows. To debug those, Andreas has
requested a temporary machine.
This is pretty much an exact replica of #3294 (a87749113), with the same
plan:
1. I run `terraform apply` once this PR is merged.
2. I manually, through the GCP web console, set a dummy password for
   that machine's RDP connection (the CLI equivalent is sketched below)
   and transmit that to @aherrmann-da through Slack.
3. @aherrmann-da debugs the issue.
4. I create a PR to roll back this one, then apply it once it's merged.
Note: I have verified that master applies cleanly prior to opening this
PR.
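For step 2, the CLI equivalent of the web-console password reset would
be something like this (instance name, zone, and user are made up):

```bash
# Sketch: generate a fresh RDP password for the debug machine and
# print it, to be passed along out of band.
gcloud compute reset-windows-password windows-debug-1 \
  --zone=us-east1-b \
  --user=debuguser
```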
CHANGELOG_BEGIN
CHANGELOG_END
* infra: gcp_cdn_bucket: update comment
The cache retention period can be configured, while the comment
suggests it's hardcoded.
* infra: don't create index.html inside gcp_cdn_bucket module
We might want to add a different index.html per bucket, so move that
code outside the module and into the bucket-specific terraform files.
Also add bucket-specific index.html files.
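For reference, the manual (non-Terraform) equivalent of giving a
bucket its own index.html would be something like this, with a made-up
bucket name:

```bash
# Sketch: upload a bucket-specific index.html and have GCS serve it as
# the bucket's main page.
gsutil cp index.html gs://example-daml-docs/
gsutil web set -m index.html gs://example-daml-docs
```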
There are two issues with the current setup:
- an iptables entry prevents connecting to the metadata server, and
- machines are given insufficient permissions.
There is no simple way to configure GCS to serve the desired security
headers, so instead the script will keep updating the existing S3
bucket.
Consequent changes:
- Add aws cli tool to dev-env
- Remove docs bucket from Terraform
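The kind of call the script keeps making against the existing bucket
looks roughly like this (bucket name and local path are placeholders):

```bash
# Sketch: push freshly built docs to the existing S3 bucket, removing
# anything that no longer exists locally.
aws s3 sync ./docs-build s3://example-docs-bucket --delete
```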
It looks like the curl command is currently installing but not
starting the service that is supposed to send logs to StackDriver.
When connecting to the machines manually, a call to `restart` seems to
fix it.
* remove -O option from curl command in order to pipe script contents to bash
* follow redirects for stackdriver
Co-Authored-By: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
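A sketch of the fixed invocation, assuming the standard Stackdriver
logging agent install script and its google-fluentd service:

```bash
# Pipe the install script straight to bash (no -O), following
# redirects with -L, then make sure the agent is actually running.
curl -sSL https://dl.google.com/cloudagents/install-logging-agent.sh | bash
sudo service google-fluentd restart
```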
This is a first step towards improving our docs release process. The
goal here is to get rid of the manual "publish docs" step. Because the
act of publishing a release is a manual step that Azure cannot trigger
on, and because we only want to run this for "published" releases
(i.e. the ones that are not marked as prerelease), we opt for a
periodic check.
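A sketch of what such a periodic check could boil down to (the bucket
name and version-detection details are made up; only the repository is
real):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Latest release that is not marked as a prerelease.
latest=$(curl -s https://api.github.com/repos/digital-asset/daml/releases \
           | jq -r '[.[] | select(.prerelease | not)][0].tag_name')

# Hypothetical check against what the bucket currently serves.
current=$(gsutil cat gs://example-docs-bucket/VERSION 2>/dev/null || echo none)

if [ "$latest" != "$current" ]; then
  # Build and upload the docs for $latest (details elided).
  echo "would publish docs for $latest"
fi
```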
Not included in this piece of work:
- Any change to the docs themselves; the goal here is to automate the
current process as a first step. Future plans for the docs themselves
include adding links to older versions of the docs.
- A better way to detect that the docs are already up-to-date, and to
  abort if so.
- Including older versions of the docs.
- Switching the DNS record from the current AWS S3 bucket to this new
GCS bucket. That will be a manual step once we're happy with how the
new bucket works.
* ci: always use the linux-pool
reduce the difference in environment between external and internal
contributions
* infra: tweak the linux cache warmup script
Don't share the same bazel cache directory with the disk cache, which
is a separate thing. Be more specific about the target. Clean up after
yourself.
* infra: bump the linux agent disk to 200GB
avoid running out of disk space
Warm up local caches by building dev-env and current daml master. This
is allowed to fail, as we still want to have CI machines around even
when their caches are only warmed up halfway.
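A minimal sketch of such a warmup, with made-up paths and an
all-targets build standing in for the real target list:

```bash
# Warm up caches by building current master; never fail machine setup
# if the build breaks. The disk cache survives `bazel clean`, so we
# can clean up after ourselves without losing the warmed-up state.
cd "$HOME/daml" || exit 0
git pull --quiet origin master || true
bazel build //... || true
bazel clean || true
```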
Afterwards, we purge old agents that might still be around but didn't
unregister themselves.
This depends on #402 being merged, as otherwise purge_old_agents.py
obviously can't be found.
* nix: add more providers to terraform
* docs: make tarballs more reproducible
* ci: use the linux-pool
* ci: tweak the nix installation
handle the case where the user is root and on ubuntu
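The gist of that tweak, as a sketch (the install URL is the official
one; the root handling is an assumption about what such a tweak
involves):

```bash
# The upstream Nix installer does not run as plain root without extra
# flags, so pick the multi-user (daemon) install in that case.
if [ "$(id -u)" -eq 0 ]; then
  sh <(curl -L https://nixos.org/nix/install) --daemon
else
  sh <(curl -L https://nixos.org/nix/install)
fi
```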
* infra: terraform fmt
* infra: add Azure Pipeline agents
* ci: only enable linux-pool for internal PRs