set up macOS nodes
This PR documents how to create and manage macOS CI nodes. Because macOS
is not supported by our current cloud providers, these instructions are
geared towards creating VMs on physical machines we would need to host
and manage ourselves, i.e. these notes are mostly targeted at Ed.
CHANGELOG_BEGIN
CHANGELOG_END
It looks like the change in Windows agent names has caused an issue:
because Windows agents are not always properly cleaned up on shutdown,
i.e. they do not always have time to tell Azure they are going away, and
because GCP likes to reuse the same names for machines in a group, we've
been seeing errors like:
```
ERROR: The running command stopped because the preference variable
"ErrorActionPreference" or common parameter is set to Stop: Pool 11
already contains an agent with name VSTS-WIN-3QCX.
```
recently. Today, only 2 out of our 6 agents have managed to register
with Azure. This PR should fix that.
CHANGELOG_BEGIN
CHANGELOG_END
This is a small QoL improvement, mostly targeted at myself: have Windows
agents register with Azure using the name they display on the GCP
console, so I don't need to find a build and look at the "Agent
Diagnostics" step to figure out the corresponding between Azure and GCP.
CHANGELOG_BEGIN
CHANGELOG_END
@cocreature told me he's done with the Linux machine. He's still using
the Windows one, so not removing it is not an oversight.
CHANGELOG_BEGIN
CHANGELOG_END
Our Linux startup script used to never finish, as it ended by
`exec`'ing the Azure agent. Since I've removed that part, the EXIT
handler, which is supposed to kick in only when an issue prevents the
script from finishing, now triggers on normal exit too, and the machine
shuts down, making it hard to use.
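The usual fix is to disarm the trap once the script has completed normally; a minimal sketch of the pattern (the handler body here is hypothetical):

```shell
#!/usr/bin/env bash
# EXIT handler: meant to run only when something prevents the script from
# finishing (in production it would shut the machine down).
on_unexpected_exit() {
  echo "startup script aborted early" >&2
}
trap on_unexpected_exit EXIT

# ... the actual startup work would go here ...
STARTUP_OK=1

# The script reached its normal end: clear the trap so a clean exit no
# longer triggers the failure handler.
trap - EXIT
```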
CHANGELOG_BEGIN
CHANGELOG_END
CI has been behaving weirdly for the past three days, with build times
on Linux and Windows regularly taking over 40 minutes, macOS builds
occasionally running for almost three hours, and generally a lot of OOM
exceptions (mostly on Windows, but a bit on the other two too).
We currently have no idea what changed, and have been having trouble
reproducing locally. As far as I'm aware, there has been no change to
the CI infrastructure itself, so we suspect we broke something in our
code somehow.
@cocreature has requested access to Linux and Windows machines with
specs and set-up similar to the CI ones, but without credentials. This
PR attempts to provide that.
Once the machines are up I will manually add accounts for @cocreature.
CHANGELOG_BEGIN
CHANGELOG_END
Even though the command succeeds as far as deleting the machine goes, it
does log an error. That is probably why we recently had only one machine
deleted per night.
Something must have changed on the Google side recently to make this
additional permission required.
CHANGELOG_BEGIN
CHANGELOG_END
1. Google says the instance is currently overutilized and suggests
g1-small as a more appropriate size.
2. It occurred to me that the reason no error was logged might be that
   we were losing them, so this change explicitly redirects stderr too.
CHANGELOG_BEGIN
CHANGELOG_END
It appears that most of our Windows machines have not been rebooted
since Tuesday 24. We detected this because one of them has run out of
disk space.
This is not good, but what's worse is I currently have no idea what
could be going wrong, and we are not logging anything at all in the
current setup, so even ssh'ing into the machine provides no insight.
This PR hopefully addresses that by:
1. Redirecting the outputs of the script to a file, and
2. `tail`ing that file from the startup script, so the logs will appear
directly in the GCP web console. (This is what we currently do for
the Azure agent logs on Linux.)
This PR also tells the script to not stop on the first failed machine
and keep trying.
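A rough sketch of the two pieces, with hypothetical paths and messages (the real script and log locations differ):

```shell
LOG=/tmp/agent-setup.log

# (1) In the setup script: append both stdout and stderr to the log file,
# so errors are no longer lost.
{
  echo "installing Azure agent..."
  echo "simulated error output" >&2
} >>"$LOG" 2>&1

# (2) In the startup script: stream the log back out, so the lines show
# up in the GCP web console. In production this would be `tail -f`, left
# running for the lifetime of the machine; `-n +1` is used here so the
# sketch terminates.
tail -n +1 "$LOG"
```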
CHANGELOG_BEGIN
CHANGELOG_END
It looks like GCP doesn't like not having a "page suffix" set, so it
sets a default. Except somehow Terraform doesn't know it's a default
value, so when trying to plan without the (optional) website value set,
Terraform will always find that the deployed state has changed.
With this change, we set it to a value that doesn't exist and won't
work, but at least Terraform will see that the deployed state matches
the configured one.
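A sketch of the resulting configuration, assuming the `google_storage_bucket` resource's optional `website` block (bucket name and suffix value are placeholders):

```hcl
resource "google_storage_bucket" "example" {
  name = "example-bucket"

  # Explicitly set the suffix so the configuration matches what GCP
  # reports back; the page itself deliberately does not exist.
  website {
    main_page_suffix = "this-page-does-not-exist.html"
  }
}
```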
Note: this PR is a bit special as far as "changes" go as there will be
nothing to apply: applying current master tries to get rid of this
website.main_page_suffix value, but it's back on the next run. With this
patch, `terraform plan` declares "nothing to apply", so this PR itself
won't (need to) be applied.
CHANGELOG_BEGIN
CHANGELOG_END
Our current Terraform setup attempts to create three static files on our
GCS buckets. The issue is that these buckets are configured to
automatically delete files that are older than X days, and there is no
way to exclude specific files from that. Therefore, the created files
disappear after some time, and running `terraform plan` suddenly looks
like the infrastructure has changed.
Moreover, the added value of these three files seems questionable: two
of them provide `index.html` type of functionality for our two caches,
whereas the third is automatically created by `nix` when pushing to the
cache anyway (if it doesn't exist already).
This PR also reduces the cache eviction time for the nix cache to 60
days, as a full year seemed a bit long.
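For reference, cache eviction on a GCS bucket is expressed as a lifecycle rule; a sketch of the 60-day version (resource and bucket names are placeholders):

```hcl
resource "google_storage_bucket" "nix_cache" {
  name = "example-nix-cache"

  # Delete objects older than 60 days (down from a full year).
  lifecycle_rule {
    condition {
      age = 60
    }
    action {
      type = "Delete"
    }
  }
}
```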
CHANGELOG_BEGIN
CHANGELOG_END
We've had a number of jobs waiting for >10 minutes at the busiest times
of the day since we switched to 6 nodes, so increasing back a bit.
I don't have very good visibility through the Azure UI, but it looks
like all of the jobs queued (and not running) right now are very short
ones so hopefully 8 should be enough.
CHANGELOG_BEGIN
CHANGELOG_END
We're currently depending on a floating "latest", which is often a bad
idea. Today my machine decided to upgrade the google plugin, which is
now specifying some new fields for the GCS objects, and therefore
`terraform plan` does not look clean anymore, even though there has
been no change to the terraform files (nor to the infrastructure).
This PR aims to make our Terraform setup more reproducible by pinning
Terraform plugin versions. It's also a way to track the application of
the "new" Terraform setup, as it is technically a standard change
(though hopefully a very safe one).
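At the time, the usual way to pin a plugin was a `version` constraint on the provider block; a sketch (the version number here is made up):

```hcl
provider "google" {
  # Pin to a known-good release line instead of floating "latest".
  version = "~> 2.7"
}
```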
CHANGELOG_BEGIN
CHANGELOG_END
* ci: temp machines for scheduled killing experiment
Based on our discussions last week, I am exploring ways to move us to
permanent machines instead of preemptible ones. This should drastically
reduce the number of "cancelled" jobs.
The end goal is to have:
1. An instance group (per OS) that defines the actual CI nodes; this
would be pretty much the same as the existing ones, but with
`preemptible` set to false.
2. A separate machine that, on a cron (say at 4AM UTC), destroys all the
CI nodes.
The hope is that the group managers, which are set to maintain 10 nodes,
will then recreate the "missing" nodes using their normal starting
procedure.
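The nightly reset could be sketched roughly as follows (the name filter, zone, and script path are all assumptions; deleting the nodes lets the group manager recreate them to reach its target size):

```shell
# Nightly reset, run from cron at 4AM UTC,
# e.g. `0 4 * * * /path/to/reset.sh` in the killer machine's crontab.
gcloud compute instances list --filter='name ~ ^ci-' --format='value(name)' \
  | xargs -r -n1 gcloud compute instances delete --quiet --zone=us-east4-a
```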
However, there are a lot of unknowns I would like to explore, and I need
a playground for that. This is where this PR comes in. As it stands, it
creates one "killer" machine and a temporary group manager. I will use
these to experiment with the GCP API in various ways without interfering
with the real CI nodes.
This experimentation will likely require multiple `terraform apply` with
multiple different versions of the associated files, as well as
connecting to the machines and running various commands directly from
them. I will ensure all of that only affects the new machines created as
part of this PR, and therefore believe we do not need to go through a
separate round of approval for each change.
Once I have finished experimenting, I will create a new PR to clean up
the temporary resources created with this one and hopefully set up a
more permanent solution.
CHANGELOG_BEGIN
CHANGELOG_END
* add missing zone for killer instance
* add compute scope to killer
* authorize Terraform to shutdown killer to update it
* change in plans: use a service account instead
* .
* add compute.instances.list permission
* add compute.instances.delete permission
* add cron script
* obligatory round of extra escaping
* fix PATH issue & crontab format
* smaller machine & less frequent reboots
Following the happy resolution of #4370 in #4371, we do not need the
temporary nodes anymore. This PR therefore removes them.
CHANGELOG_BEGIN
CHANGELOG_END
This is an attempt to apply a potential fix discovered as part of the
investigation in #4370. The issue seems to be that Chocolatey is using a
protocol deemed not secure enough and disabled in recent Windows images
(our node creation script dynamically selects the latest "Windows 2016"
server image from GCP).
CHANGELOG_BEGIN
CHANGELOG_END
Today we don't have any Windows machine in the CI pool. The machine
template has not changed since 2019-11-21, yet as of today when the
machine starts GCP proudly declares
> GCEMetadataScripts: No startup scripts to run.
despite the script being defined as `sysprep-specialize-script-ps1`, as
per the
[documentation](https://cloud.google.com/compute/docs/startupscript).
Also, it used to work and we haven't changed anything.
I'm not quite sure what's going on and how to investigate, but I think
at the very least we can try to unblock the team by having a set of
machines we initialize manually. This PR is meant to do that.
This is the same changeset as a877491139
and 16da700532, except that it now
specifies 5 machines instead of just one.
CHANGELOG_BEGIN
CHANGELOG_END
The recent changes to the way in which we build npm packages with Bazel
have caused a lot of issues on Windows. To debug those, Andreas has
requested a temporary machine.
This is pretty much an exact replica of #3294 (a87749113), with the same
plan:
1. I run terraform apply once this PR is merged.
2. I manually, through the GCP web console, set a dummy password for that
machine's RDP connection and transmit that to @aherrmann-da through
Slack.
3. @aherrmann-da debugs the issue.
4. I create a PR to roll back this one, then apply it once it's merged.
Note: I have verified that master applies cleanly prior to opening this
PR.
CHANGELOG_BEGIN
CHANGELOG_END
* infra: gcp_cdn_bucket: update comment
The cache retention can be configured, while the comment suggests it's
hardcoded.
* infra: don't create index.html inside gcp_cdn_bucket module
We might want to add a different index.html per bucket, so move that
code outside the module and into the bucket-specific terraform files.
Also add bucket-specific index.html files.
There are two issues with the current setup:
- an iptables entry prevents connecting to the metadata server, and
- machines are given insufficient permissions.
There is no simple way to configure GCS to serve the desired security
headers, so instead the script will keep updating the existing s3
bucket.
Consequent changes:
- Add aws cli tool to dev-env
- Remove docs bucket from Terraform
It looks like the curl command is currently installing but not starting
the service that is supposed to send logs to StackDriver. When
connecting to the machines manually, a call to `restart` seems to fix
it.
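A sketch of the intended sequence, assuming the standard StackDriver install script and the `google-fluentd` service it sets up:

```shell
# Without -O, curl writes the script to stdout, so it can be piped
# straight into bash; -L follows redirects.
curl -sSL https://dl.google.com/cloudagents/install-logging-agent.sh | bash
# Workaround for the service not being started on install:
service google-fluentd restart
```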
* remove -O option from curl command in order to pipe script contents to bash
* follow redirects for stackdriver
Co-Authored-By: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
This is a first step towards improving our docs release process. The
goal here is to get rid of the manual "publish docs" step. We only want
to run this for "published" releases, i.e. the ones that are not marked
as prerelease; because the act of publishing a release is a manual step
that Azure cannot trigger on, we opt for a periodic check instead.
Not included in this piece of work:
- Any change to the docs themselves; the goal here is to automate the
current process as a first step. Future plans for the docs themselves
include adding links to older versions of the docs.
- A better way to detect docs are already up-to-date, and abort if so.
- Including older versions of the docs.
- Switching the DNS record from the current AWS S3 bucket to this new
GCS bucket. That will be a manual step once we're happy with how the
new bucket works.