When no service account is explicitly selected, GCP provides a default
one, which happens to have far more access rights than we're comfortable
with. I'm not quite sure how the total lack of a service account slipped
through here, but I noticed it today so I'm changing it.
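A minimal Terraform sketch of the fix (all names invented): the point is that an explicit `service_account` block with narrow scopes overrides GCP's permissive default.

```hcl
resource "google_compute_instance_template" "ci-node" {
  # ... name, machine_type, disks, etc. elided ...
  service_account {
    # Hypothetical, narrowly-scoped account; without this block GCP
    # attaches the project's default service account instead.
    email  = "ci-node@my-project.iam.gserviceaccount.com"
    scopes = ["logging-write", "monitoring-write", "storage-ro"]
  }
}
```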
CHANGELOG_BEGIN
CHANGELOG_END
As the title suggests. We already disable all communication between CI
nodes through network rules, but we currently get a lot of noise from
GCP logging violations of those rules as Windows nodes probe the network
for file-share peers.
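As a sketch (resource names invented): GCP firewall rules only log matches when a `log_config` block is present, so dropping that block keeps the deny rule enforced while the noise stops.

```hcl
resource "google_compute_firewall" "deny-internal" {
  name          = "deny-internal"   # name assumed
  network       = "default"
  direction     = "INGRESS"
  source_ranges = ["10.0.0.0/8"]
  deny { protocol = "all" }
  # No log_config block: the rule is still enforced, but GCP no longer
  # records an entry for every blocked discovery packet.
}
```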
CHANGELOG_BEGIN
CHANGELOG_END
As usual, this branch will contain intermediate commits that may serve
as an audit log of sorts.
Our Terraform configuration has been slightly broken by two recent
changes:
- The nixpkgs upgrade in #12280 brings in a new version of our GCP
plugin for Terraform, which, as a breaking change, added a required
argument to `google_project_iam_member`. The new version also results in
a number of smaller changes in the way Terraform handles default
arguments; these don't change our configuration files or the behaviour
of our deployed infrastructure, but they do require re-syncing the
Terraform state (by running `terraform apply`, which would essentially
be a no-op if it were not for the next bullet point).
- The nix configuration changes in #12265 have changed the Linux CI
nodes configuration but have not been deployed yet.
This PR is an audit log of the steps taken to rectify those and bring us
back to a state where our deployed configuration and our recorded
Terraform state both agree with our current `main` branch tip.
CHANGELOG_BEGIN
CHANGELOG_END
* devenv: Factor out a function to get the Nix version.
* devenv: On newer versions of Nix, use `NIX_USER_CONF_FILES`.
This stacks, rather than overwrites.
* devenv: Append Nix caches, instead of overwriting them.
The "extra-" prefix tells Nix to append.
We also switch to non-deprecated configuration keys.
CHANGELOG_BEGIN
CHANGELOG_END
* devenv: Just require Nix v2.4 or newer.
* devenv: `NIX_USER_CONF_FILES` may not be set already.
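A sketch of the resulting logic (file paths assumed): `NIX_USER_CONF_FILES` is a colon-separated list that stacks on top of the defaults, and it may be unset, so we only add the separator when it already has a value.

```shell
# Append our nix.conf to NIX_USER_CONF_FILES, guarding against it being
# unset (the ':+' expansion only emits the ':' when the variable is
# set and non-empty).
export NIX_USER_CONF_FILES="$PWD/dev-env/etc/nix.conf${NIX_USER_CONF_FILES:+:$NIX_USER_CONF_FILES}"

# dev-env/etc/nix.conf can then use the "extra-" prefix so caches are
# appended to the defaults rather than replacing them, e.g.:
#   extra-substituters = https://nix-cache.example.com
#   extra-trusted-public-keys = nix-cache.example.com-1:abc123
# (cache URL and key are illustrative, not our real values)
```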
New year, new copyright, new expected unknown issues with various files
that won't be covered by the script and/or will be but shouldn't change.
I'll do the details on Jan 1, but would appreciate this being
preapproved so I can actually get it merged by then.
CHANGELOG_BEGIN
CHANGELOG_END
The cluster shut down today. I'm still not sure why this happens, but
cycling the entire cluster seems to solve it so here goes.
CHANGELOG_BEGIN
CHANGELOG_END
As part of the 2.4 release, the Nix installer has been changed to take
care of the volume setup (which we don't want it to do here). Because
that requires root access, they've decided to make multi-user install
the default, and to disable single-user install.
We could do an in-depth review of the difference and adapt our setup to
use a multi-user setup (we do use the multi-user setup on Linux, so
there's precedent), but as an immediate fix, we can keep our single-user
setup _and_ get the latest Nix by using the 2.3.16 installer and then
upgrading from within Nix itself. This _should_ keep working at least
for a while, as Linux still defaults to single-user.
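The workaround can be sketched as follows (installer URL follows the nixos.org release layout; exact flags may differ on macOS, so treat this as an illustration rather than the deployed script):

```shell
# Install single-user Nix with the pinned 2.3.16 installer, which still
# supports single-user mode, then upgrade Nix from within Nix itself.
sh <(curl -sSfL https://releases.nixos.org/nix/nix-2.3.16/install) --no-daemon
. "$HOME/.nix-profile/etc/profile.d/nix.sh"
# Channel name assumed; upgrading nix and cacert together is the
# documented single-user upgrade path.
nix-env --install --attr nixpkgs.nix nixpkgs.cacert
```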
CHANGELOG_BEGIN
CHANGELOG_END
Hoogle has been down for at least 24h according to user reports. What
seems to be happening is that our nixpkgs pinning is not taking effect,
and the nixpkgs version of Hoogle already includes the patch we are
trying to add. This confuses nix, which fails, and thus the boot
sequence is broken.
I've applied the minimal possible patch here (i.e. enforce the pin),
which gets things running again. I've already deployed this change.
We may want to look at bumping the nixpkgs snapshot.
CHANGELOG_BEGIN
CHANGELOG_END
This fixes an issue where we try to upload a single JSON blob that is
bigger than the limit of 500MB. We could also raise the limit, I guess,
but that seems more error-prone.
CHANGELOG_BEGIN
CHANGELOG_END
If a job fails at build time, there are no test logs to process.
Currently this means the ingestion process gets stuck in an endless loop
of retrying that job and failing on the missing file. This change should
let us process jobs with no test files.
CHANGELOG_BEGIN
CHANGELOG_END
We are currently ingesting Bazel events in two forms:
In the `events-*` indices, each Bazel event is recorded as a separate ES
object, with the corresponding job name as a field that can serve to
aggregate all of the events for a given job.
In the `jobs-*` indices, each job is ingested as a single (composite) ES
object, with the individual events as elements in a list-type field.
When I set up the cluster, I wasn't sure which one would be more useful,
so I included both. We now have a bit more usage experience and it turns
out the `events-*` form is the only one we use, so I think we should
stop ingesting everything twice and from now on create only the
`events-*` ones.
CHANGELOG_BEGIN
CHANGELOG_END
Disks are currently at 75% capacity, so it seems like a good idea to
bump them a bit. #10857 should reduce our needs, too, so this should
last us a while.
CHANGELOG_BEGIN
CHANGELOG_END
This bumps dotnet to the version required by the latest azuresigntool,
and pins azuresigntool for the future.
As usual for live CI upgrades, this will be rolled out using the
blue/green approach. I'll keep each deployed commit in this PR.
For future reference, this is PR [#10979].
[#10979]: https://github.com/digital-asset/daml/pull/10979
CHANGELOG_BEGIN
CHANGELOG_END
As explained in #10853, we recently lost our ES cluster. While I'm not
planning on trusting Google's "rolling restart" feature ever again, we
can't exclude the possibility of future similar outages (without a
significant investment in the cluster, which I don't think we want to
do).
Losing the cluster is not a huge issue as we can always reingest the
data. Worst case we lose visibility for a few days. At least, as far as
the bazel logs are concerned.
Losing the Kibana data is a lot more annoying, as that is not derived
data and thus cannot be reingested. This PR aims to add a backup
mechanism for our Kibana configuration.
CHANGELOG_BEGIN
CHANGELOG_END
On Sept 8 our ES cluster became unresponsive. I tried connecting to the
machines.
One machine had an ES Docker container that claimed to have started 7
weeks ago and stopped 5 weeks ago, while the machine's own uptime was 5
weeks. I assume GCP had decided to restart it for some reason. The init
script had failed on missing a TTY, hence the addition of the
`DEBIAN_FRONTEND` env var.
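The fix can be sketched as follows (package list assumed): `DEBIAN_FRONTEND=noninteractive` stops apt from trying to prompt on a TTY that startup scripts don't have.

```shell
# In the init script: never prompt, even with no TTY attached.
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq
apt-get install -qqy docker.io   # package name assumed
```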
Two machines had a Docker container that had stopped that same day,
respectively 6h and 2h before I started investigating. It wasn't
immediately clear what had caused the containers to stop.
On all three of these machines, I was able to manually restart the
containers, and they were able to reform a cluster, though the state of
the cluster was red (missing shards).
The last two machines simply did not respond to SSH connection attempts.
Assuming it might help, I decided to try to restart the machines. As GCP
does not allow restarting individual machines when they're part of a
managed instance group, I tried clicking the "rolling restart" button
on the GCP console, which seemed like it would restart the machines. I
carefully selected "restart" (and not "replace"), started the process,
and watched GCP proceed to immediately replace all five machines, losing
all data in the process.
I then started a new cluster and used bigger (and more) machines to
reingest all of the data, and then fell back to the existing
configuration for the "steady" state. I'll try to keep a better eye on
the state of the cluster from now on. In particular, we should not have
a node down for 5 weeks without noticing.
I'll also try to find some time to look into backing up the Kibana
configuration, as that's the one thing we can't just reingest at the
moment.
CHANGELOG_BEGIN
CHANGELOG_END
macOS filesystems have been case-insensitive by default for years, and
in particular our laptops are, so if we want the cache to work as
expected, CI should be too.
Note: this does not apply to Nix, because the Nix partition is a
case-sensitive image (per @nycnewman's setup script) on laptops too.
CHANGELOG_BEGIN
CHANGELOG_END
* Generate short to long name mapping in aspect
Maps shortened test names in da_scala_test_suite on Windows to their
long name on Linux and macOS.
Names are shortened on Windows to avoid exceeding MAX_PATH.
* Script to generate scala test name mapping
* Generate scala-test-suite-name-map.json on Windows
changelog_begin
changelog_end
* Generate UTF-8 with Unix line endings
Otherwise the file will be formatted using UTF-16 with CRLF line
endings, which confuses `jq` on Linux.
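For illustration (file names invented), the same normalization jq needs can also be expressed on the Linux side:

```shell
# Create a sample UTF-16 file with CRLF line endings (the shape
# PowerShell emits by default), then normalize it to UTF-8 with Unix
# line endings so jq can read it.
printf '{"name":"test"}\r\n' | iconv -f UTF-8 -t UTF-16 > mapping-utf16.json
iconv -f UTF-16 -t UTF-8 mapping-utf16.json | tr -d '\r' > mapping.json
```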
* Apply Scala test name remapping before ES upload
* Pipe bazel output into intermediate file
Bazel writes the output of --experimental_show_artifacts to stderr
instead of stdout. In PowerShell this means these outputs are not plain
strings but error objects. Simply redirecting them to stdout and piping
them into further processing leads to nondeterministically missing items
or nondeterministically introduced newlines, which may break paths.
To work around this we extract the error message from error objects,
introduce appropriate newlines, and write the output to a temporary file
before further processing.
This solution is taken and adapted from
https://stackoverflow.com/a/48671797/841562
* Add copyright header
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
ES died (again) over the weekend, so I had to manually connect to each
node in order to restore it, and thus made another migration. This time
I opted to make a change, though. Lack of memory is a bit of a weak
hypothesis for the observed behaviour, but it's the only one I have at
the moment and, given how reliably ES has been crashing so far, it's
fairly easy to test.
CHANGELOG_BEGIN
CHANGELOG_END
The cluster died yesterday. As part of recovery, I connected to the
machines and made manual changes. To ensure that we get back to a known,
documented setup I then proceeded to do a full blue -> green migration,
having not tainted any of the green machines with manual interventions.
CHANGELOG_BEGIN
CHANGELOG_END
A few small tweaks, but the most important change is giving up on
preemptible instances (for now at least), because I saw GCP kill 8 out
of the 10 nodes at exactly the same time and I can't really expect the
cluster to survive that.
CHANGELOG_BEGIN
CHANGELOG_END
Two changes at the Terraform level, both with no impact on the actual
GCP state:
- There is no reason to make this value a `variable`: variables in
Terraform are meant to be supplied at the CLI. `local` is the right
abstraction here (i.e. set in the file directly).
- Using an unordered `for_each` set rather than a list so we don't have
positional identity, meaning when adding someone at the top we don't
need to destroy and recreate everyone else.
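A sketch of the shape (identifiers invented): with `for_each` over a set, each member's Terraform address is keyed by its value, so inserting a new entry doesn't shift anyone else's identity.

```hcl
locals {
  team_members = [
    "user:alice@example.com",   # members assumed
    "user:bob@example.com",
  ]
}

resource "google_project_iam_member" "reader" {
  for_each = toset(local.team_members)
  project  = "my-project"       # assumed
  role     = "roles/viewer"     # assumed
  member   = each.value
}
```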
CHANGELOG_BEGIN
CHANGELOG_END
This PR contains many small changes:
- A small refactoring whereby the "es-init" machine is now
(syntactically) integrated with the two instance groups, to cut down a
bit on repetition.
- The feeder machine is now preemptible, because I've seen it recover
enough times that I'm confident this will not cause any issue.
- Indices are now sharded.
- Return values from ES are filtered, cutting down a bit on network
usage and memory requirements to produce the responses.
- Bulk uploads for a single job are now done in parallel. This results
in about a 2x speedup for ingestion.
- crontab was changed to every minute instead of every 5 minutes.
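The crontab change in the last bullet, sketched with an assumed script path:

```shell
# Before: */5 * * * * /home/feeder/ingest.sh
# After: run the ingestion script every minute.
echo '* * * * * /home/feeder/ingest.sh' | crontab -
```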
CHANGELOG_BEGIN
CHANGELOG_END
We currently have about 1% (28 out of 2756) of our build logs that have
invalid JSON files. They are all about a `-profile` file being
incomplete, and since those files represent a single JSON object we
can't do smarter things like filtering invalid individual lines.
I haven't looked deeply into _why_ we create invalid files, but this
should let our ingestion process make some progress in the meantime.
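A sketch of the guard (paths and the upload step are invented): validate each file with `jq` before ingesting, and skip, rather than abort on, the invalid ones.

```shell
for f in logs/*-profile.json; do
  # `jq -e .` exits non-zero when the file is not valid JSON.
  if jq -e . "$f" > /dev/null 2>&1; then
    upload_to_es "$f"   # stands in for the real ingestion step
  else
    echo "skipping invalid JSON: $f" >&2
  fi
done
```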
CHANGELOG_BEGIN
CHANGELOG_END
* ci/windows: disable spool
We're not expecting to print anything, and @RPS5's security newsletter
says this is a vector of attack.
CHANGELOG_BEGIN
CHANGELOG_END
* increase no-spool to 6
* Windows name truncation causing collisions
* update main group
* remove temp group
This PR adds a machine that will, every 5 minutes, look at the GCS
bucket that stores Bazel metrics and push whatever it finds to
ElasticSearch.
A huge part of this commit is based on @aherrmann-da's work. You can
assume that all the good bits are his.
CHANGELOG_BEGIN
CHANGELOG_END
This PR adds a Kibana instance to each ES node, and duplicates the load
balancer mechanism to expose both raw ES and Kibana.
CHANGELOG_BEGIN
CHANGELOG_END
This PR adds a basic ES cluster to our infrastructure, completely open
and unprotected but only accessible through VPN.
For now it is only reachable through its IP address; I'm not sure
whether it's worth adding a DNS entry for it.
CHANGELOG_BEGIN
CHANGELOG_END
This morning we started with very restricted CI pools (2/6 for Windows
and 7/20 for Linux), apparently because the region we run in (us-east1)
has three zones, two of them were unable to allocate new nodes, and the
default policy is to distribute nodes evenly between zones.
I've manually changed the distribution policy. Unfortunately this option
is not yet available in our version of the GCP Terraform plugin.
CHANGELOG_BEGIN
CHANGELOG_END
We've recently seen a few cases where the macOS nodes ended up not
having the cache partition mounted. So far this has only happened on
semi-broken nodes (guest VM still up and running but host unable to
connect to it), so I haven't been able to actually poke at a broken
machine, but I believe this should allow a machine in such a state to
recover.
While we haven't observed a similar issue on Linux nodes (as far as I'm
aware), I have made similar changes there to keep both scripts in sync.
CHANGELOG_BEGIN
CHANGELOG_END
This includes https://github.com/ndmitchell/hoogle/pull/367.
As usual, I unfortunately cannot test this myself so please review
carefully.
Note that this will slightly increase compile times since we will now
build hoogle. However, we still only build hoogle rather than
everything which takes less than 2min on my very weak personal
laptop. We could integrate this with our nix cache but for now, I’m
not that worried about this.
changelog_begin
changelog_end
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
After #9362, hoogle stopped returning any result. It was up and running,
but with an empty database.
Problem was two-fold:
1. In the switch to `nix`, I lost track of the fact that we were
previously doing all the work in `~/hoogle` rather than `~/`, which
broke the periodic refresh.
2. The initial setup has been broken for months; machines have been
initializing with an empty DB and only getting data on the first
refresh since #7370.
CHANGELOG_BEGIN
CHANGELOG_END
This is a demonstration, commit-by-commit, of how to use the blue/green
deployment for Hoogle servers.
The actual change (using nix) is stolen from #9352.
CHANGELOG_BEGIN
CHANGELOG_END
This adapts the same approach as #9137 to the macOS machines. The
setup is very similar, except macOS apparently doesn't require any kind
of `sudo` access in the process.
The main reason for the change here is that while `~/.bazel-cache` is
reasonably fast to clean, cleaning just that has finally caught up to us
with a recent cleanup step that proudly claimed:
```
before: 638Mi free
after: 1.2Gi free
```
So we do need to start cleaning the other one after all.
CHANGELOG_BEGIN
CHANGELOG_END
This is a continuation of #8595 and #8599. I somehow had missed that
`/etc/fstab` can be used to tell `mount` to let users mount some
filesystems with preset options.
This is using the full history of `mount` hardening so should be safe
enough. The option `user` in `/etc/fstab` automatically disables any kind
of `setuid` feature on the mounted filesystem, which is the main attack
vector I know of.
This works flawlessly on my local VM, so hopefully this time's the
charm. (It also happens to be my third PR specifically targeted on this
issue, so, who knows, it may even work.)
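For illustration (device and mount point invented), an `/etc/fstab` entry of the kind described; `user` lets a non-root user run `mount` on that entry, and implies `nosuid,nodev,noexec` unless explicitly overridden.

```
/dev/disk/by-label/cache  /home/vsts/cache  ext4  noauto,user  0  2
```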
CHANGELOG_BEGIN
CHANGELOG_END
* fixup terraform config
Two changes have happened recently that have invalidated the current
Terraform files:
1. The Terraform version has gone through a major, incompatible upgrade
(#8190); the required updates for this are reflected in the first
commit of this PR.
2. The certificate used to serve [Hoogle](https://hoogle.daml.com) was
about to expire, so Edward created a new one and updated the config
directly. The second commit in this PR updates the Terraform config
to match that new, already-in-prod setting.
Note: This PR applies cleanly, as there are no resulting changes in
Terraform's perception of the target state from 1, and the change from 2
has already been applied through other channels.
CHANGELOG_BEGIN
CHANGELOG_END
* update hoogle cert
CHANGELOG_BEGIN
- Our Linux binaries are now built on Ubuntu 20.04 instead of 16.04. We
do not expect any user-level impact, but please reach out if you
do notice any issue that might be caused by this.
CHANGELOG_END
* Replace many occurrences of DAML with Daml
* Update docs logo
* A few more CLI occurrences
CHANGELOG_BEGIN
- Change DAML capitalization and docs logo
CHANGELOG_END
* Fix some over-eager replacements
* A few more occurrences in md files
* Address comments in *.proto files
* Change case in comments and strings in .ts files
* Revert changes to frozen proto files
* Also revert LF 1.11
* Update get-daml.sh
* Update windows installer
* Include .py files
* Include comments in .daml files
* More instances in the assistant CLI
* some more help texts
Our CI nodes install nix in multi-user mode. This means that changing
cache information is only available to certain trusted users for
security reasons. The CI user is not part of those so the cache info
from dev-env/etc/nix.conf is silently ignored.
We could consider not running in multi-user mode, although from a
security point of view it seems like a pretty sensible decision, and our
signing keys change very rarely, so for now I would keep it.
changelog_begin
changelog_end
As we strive for more inclusiveness, we are becoming less comfortable
with historically-charged terms being used in our everyday work.
This is targeted for merge on Dec 26, _after_ the necessary
corresponding changes at both the GitHub and Azure Pipelines levels.
CHANGELOG_BEGIN
- DAML Connect development is now conducted from the `main` branch,
rather than the `master` one. If you had any dependency on the
digital-asset/daml repository, you will need to update this parameter.
CHANGELOG_END
This is far from perfect but removes the blatantly wrong sections of the
README.
Note: as a README change, this is not really a standard change, but
because the README is under the infra folder, this PR does need the tag
to pass CI.
CHANGELOG_BEGIN
CHANGELOG_END
incident-118: fruitless investigation; revert
This first commit just duplicates the existing configuration. Further
commits will make actual changes so they can be tracked by looking at
individual commits (rather than try to think up the diff by looking at
entirely new files).
CHANGELOG_BEGIN
CHANGELOG_END
This is the macOS part of #5912, which I have separated because our
macOS nodes have a different deployment process so it seemed easier to
track the deployment of the change separately.
CHANGELOG_BEGIN
CHANGELOG_END
I screwed up in #7771: `google_project_iam_binding` is defined as _the_
authoritative list of accounts for that role, not just a list of
accounts to add the role to. So in applying that rule yesterday, I
inadvertently stripped the periodic-killer machine of its role, and
therefore nothing got reset last night. The Terraform plan did not
mention this, unfortunately (though, arguably, consistently with the
semantics of the Terraform rules).
This is the same intent as #7771, but this one actually works. (Or at
least does not fail in the same way.)
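The distinction, sketched with invented names: `google_project_iam_member` is additive (one account, one role), while `google_project_iam_binding` authoritatively replaces the role's entire member list, which is how the periodic-killer machine lost its grant.

```hcl
# Additive: grants the role to this one account, leaving existing
# grants on the same role untouched.
resource "google_project_iam_member" "killer" {
  project = "my-project"   # assumed
  role    = "roles/compute.instanceAdmin.v1"
  member  = "serviceAccount:periodic-killer@my-project.iam.gserviceaccount.com"
}
```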
CHANGELOG_BEGIN
CHANGELOG_END
Recently we have been seeing lots of issues with the Bazel cache. It
does not seem like it would need to delete things, but the issues
cropped up about the same time we restricted the permissions, so it's
worth trying to revert that.
CHANGELOG_BEGIN
CHANGELOG_END
Yesterday, a certificate expiration triggered the `patch_bazel_windows`
job to run when it shouldn't have, and it overwrote an artifact we
depend on. This was built from the same sources, but the build is not
reproducible, so we ended up with a hash mismatch.
As far as I know, there is no good reason for CI to ever delete or
overwrite anything from our GCS buckets, so I'm removing its rights to
do so.
As an added safety measure, this PR also enables versioning on all
non-cache buckets (GCS does not support versioning on buckets with an
expiration policy).
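Sketched in Terraform (bucket and account names invented): versioning on the bucket, plus a create-only role for CI so it can add objects but not delete or overwrite them.

```hcl
resource "google_storage_bucket" "assets" {
  name = "my-assets-bucket"   # assumed
  versioning {
    enabled = true
  }
}

resource "google_storage_bucket_iam_member" "ci" {
  bucket = google_storage_bucket.assets.name
  role   = "roles/storage.objectCreator"   # create-only: no delete/overwrite
  member = "serviceAccount:ci@my-project.iam.gserviceaccount.com"
}
```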
CHANGELOG_BEGIN
CHANGELOG_END
Our old wildcard certificate has expired. @nycnewman has already updated
our configuration to use new ones; this is just updating the tf files to
match.
CHANGELOG_BEGIN
CHANGELOG_END
We've been saving data there but not doing anything with it. Ideally
this data would be used by some sort of automated process, but in the
meantime (or while developing said processes), having at least some
people with read access can help.
This is a Standard Change requested by @cocreature.
CHANGELOG_BEGIN
CHANGELOG_END
It looks like #6761 broke our Terraform setup by upgrading the nixpkgs
snapshot. That this has not been caught earlier is, I suppose, a
testament to how stable our infrastructure has become.
This is the same issue we had with the Google providers in #6402, i.e.
we are trying to pin the provider versions both at the nix level and at
the terraform level, with no way to force them to stay in sync.
I don't have a good proposal for such a way, and it seems rare and
innocuous enough to not warrant the investment to fix this at a more
fundamental level.
CHANGELOG_BEGIN
CHANGELOG_END
We want to be able to support more than one package in our [Hoogle]
instance. In order to not have to list each file individually, we assume
the collection of Hoogle text files will be published as a tarball.
Note: we keep trying the existing file for now, because the deployment
of this change needs to be done in separate, asynchronous steps if we
want everything to keep working with no downtime:
1. We deploy the new version of the Hoogle configuration, which supports
both the new and old file structure on the docs website (this PR).
2. After the next stable version (likely 1.6) is published, the docs
site actually changes to the new format.
3. We can then clean-up the Hoogle configuration.
Any other sequence will require turning off Hoogle and coordinating with
the docs update, which seems impractical.
[Hoogle]: https://hoogle.daml.com
CHANGELOG_BEGIN
CHANGELOG_END
multistep macos setup
This updates the macOS node setup instructions to build a "daily" image
with all the tools and code we need, so that identical work and network
traffic are not repeated on every machine's initialization.
CHANGELOG_BEGIN
CHANGELOG_END
* Fix 3-running-box to remount nix partition
* updated scripts to use multi-step process
* add copyright notices
Co-authored-by: nycnewman <edward@digitalasset.com>
switch CI nodes from n1-standard-8 to c2-*
A while back (#4520), I did a bunch of performance tests when trying to
size up the requirements for the hosted macOS nodes we needed to buy. As
part of that testing, it looked like `c2-standard-8` nodes were faster
(full build down from ~95 to ~75 minutes) and marginally cheaper
($0.4176 vs $0.4280) than the `n1-standard-8` we are currently using.
Then I got distracted, and I forgot to upgrade our existing machines.
CHANGELOG_BEGIN
CHANGELOG_END
This script was supposed to remove old agents from the Azure Pipelines
UI. It may have been useful at some time (notably, when we used
ephemeral instances, they did not necessarily get to run their shutdown
script), but as it stands now, it's broken. The output from that step
ends in:
```
error: 2 derivations need to be built, but neither local builds ('--max-jobs') nor remote builds ('--builders') are enabled
```
after listing the nix packages it would build. Furthermore, it does not
seem to be useful as I have not seen any spurious entry in the agents
list on Azure since we switched to permanent nodes, on either the Linux
or Windows side (and this would only run on Linux, if it ran).
I'm also not convinced it ever ran, as I used to see a lot of spurious
machines on both Linux and Windows when we did use ephemeral instances.
CHANGELOG_BEGIN
CHANGELOG_END
The nix cache is currently only 3.5GB, and GHC takes a long time to
build, so I think the convenience vs. cost tradeoff is in favour of
keeping things for a bit longer.
CHANGELOG_BEGIN
CHANGELOG_END
This is the last step of the plan outlined in #6405. As of opening this
PR, "old" nodes are back up, "temp" nodes are disabled at the Azure
level, and there is no job running on either (🤔). In other
words, this can be deployed as soon as it gets a stamp.
CHANGELOG_BEGIN
CHANGELOG_END
See #6400; split out as separate PR so master == reality and we can
track when this is done. @nycnewman please merge this once the change
is deployed.
Note: it has to be deployed before the next restart; nodes will _not_ be
able to boot with the current configuration.
CHANGELOG_BEGIN
CHANGELOG_END
This is the second PR in the plan outlined in #6405. I have already
disabled the old nodes so no new job will get started there; I will,
however, wait until I've seen a few successful builds on the new nodes
before pulling the plug.
CHANGELOG_BEGIN
CHANGELOG_END
This PR duplicates the linux CI cluster. This is the first in a
three-PR plan to implement #6400 safely while people are working.
I usually do cluster updates over the weekend because they require
shutting down the entire CI system for about two hours. This is
unfortunately not practical while people are working, and timezones make
it difficult for me to find a time where people are not working during
the week.
So instead the plan is as follows:
1. Create a duplicate of our CI cluster (this PR).
2. Wait for the new cluster to be operational (~90-120 minutes in my
experience).
3. In the Azure Pipelines config screen, disable all the nodes of the
"old" cluster, so all new jobs get assigned to the temp cluster. Wait
for all jobs to finish on the old cluster.
4. Update the old cluster. Wait for it to be deployed. (Second PR.)
5. In Azure, disable temp nodes, wait for jobs to drain.
6. Delete temp nodes (third PR).
Reviewing this PR is best done by verifying you can reproduce the
following shell session:
```
$ diff vsts_agent_linux.tf vsts_agent_linux_temp.tf
4,7c4,5
< resource "secret_resource" "vsts-token" {}
<
< data "template_file" "vsts-agent-linux-startup" {
< template = "${file("${path.module}/vsts_agent_linux_startup.sh")}"
---
> data "template_file" "vsts-agent-linux-startup-temp" {
> template =
"${file("${path.module}/vsts_agent_linux_startup_temp.sh")}"
16c14
< resource "google_compute_region_instance_group_manager"
"vsts-agent-linux" {
---
> resource "google_compute_region_instance_group_manager"
"vsts-agent-linux-temp" {
18,19c16,17
< name = "vsts-agent-linux"
< base_instance_name = "vsts-agent-linux"
---
> name = "vsts-agent-linux-temp"
> base_instance_name = "vsts-agent-linux-temp"
24,25c22,23
< name = "vsts-agent-linux"
< instance_template =
"${google_compute_instance_template.vsts-agent-linux.self_link}"
---
> name = "vsts-agent-linux-temp"
> instance_template =
"${google_compute_instance_template.vsts-agent-linux-temp.self_link}"
36,37c34,35
< resource "google_compute_instance_template" "vsts-agent-linux" {
< name_prefix = "vsts-agent-linux-"
---
> resource "google_compute_instance_template" "vsts-agent-linux-temp" {
> name_prefix = "vsts-agent-linux-temp-"
52c50
< startup-script =
"${data.template_file.vsts-agent-linux-startup.rendered}"
---
> startup-script =
"${data.template_file.vsts-agent-linux-startup-temp.rendered}"
$ diff vsts_agent_linux_startup.sh vsts_agent_linux_startup_temp.sh
149c149
< su --command "sh <(curl https://nixos.org/nix/install) --daemon"
--login vsts
---
> su --command "sh <(curl -sSfL https://nixos.org/nix/install) --daemon"
--login vsts
$
```
and reviewing that diff, rather than looking at the added files in their
entirety. The name changes are benign and needed for Terraform to
appropriately keep track of which node belongs to the old vs the temp
group. The only change that matters is the new group has the `-sSfL`
flag so they will actually boot up. (Hopefully.)
CHANGELOG_BEGIN
CHANGELOG_END
It looks like some nix update has broken our current Terraform setup.
The Google provider plugin has changed its reported version to 0.0.0;
poking at my local nix store seems to indicate we actually get 3.15, but
🤷.
This PR also reverts the infra part of #6400 so we get back to master ==
reality.
CHANGELOG_BEGIN
CHANGELOG_END
Nix now requires -L; I've gone ahead and normalized everything to use
-sfL, which we were already using in one place.
changelog_begin
changelog_end
Keeping CI working on Windows involves a constant fight against
MAX_PATH, which is a very short 260 characters. As the username appears
in some paths, sometimes multiple times, we can save a few precious
characters by having it shorter.
CHANGELOG_BEGIN
CHANGELOG_END
* Fix launchd killing VMWare process at end of script execution
CHANGELOG_BEGIN
Fix issue with macOS Catalina launchd killing VMWare instance on rebuild (AbandonProcessGroup)
CHANGELOG_END
patch Bazel on Windows (ci setup)
We have a weird, intermittent bug on Windows where Bazel gets into a
broken state. To investigate, we need to patch Bazel to add more debug
output than present in the official distribution. This PR adds the basic
infrastructure we need to download the Bazel source code, apply a patch,
compile it, and make that binary available to the rest of the build.
This is for Windows only as we already have the ability to do similar
things on Linux and macOS through Nix.
This PR does not contain any interesting patch to Bazel, just the
minimum needed to check that we are actually using the patched version.
CHANGELOG_BEGIN
CHANGELOG_END
* Updates to support VMWare variant of Hypervisor
* Update infra/macos/scripts/rebuild-crontask.sh
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
* Update infra/macos/scripts/run-agent.sh
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
add default machine capability
We semi-regularly need to do work that has the potential to disrupt a
machine's local cache, rendering it broken for other streams of work.
This can include upgrading nix, upgrading Bazel, debugging caching
issues, or anything related to Windows.
Right now we do not have any good solution for these situations. We can
either not do those streams of work, or we can proceed with them and
just accept that all other builds may get affected depending on which
machine they get assigned to. Debugging broken nodes is particularly
tricky as we do not have any way to force a build to run on a given
node.
This PR aims at providing a better alternative by (ab)using an Azure
Pipelines feature called
[capabilities](https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&tabs=browser#capabilities).
The idea behind capabilities is that you assign a set of tags to a
machine, and then a job can express its
[demands](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops&tabs=yaml),
i.e. specify a set of tags machines need to have in order to run it.
Support for this is fairly badly documented. We can gather from the
documentation that a job can specify two things about a capability
(through its `demands`): that a given tag exists, and that a given tag
has an exact specified value. In particular, a job cannot specify that a
capability should _not_ be present, meaning we cannot rely on, say,
adding a "broken" tag to broken machines.
Documentation on how to set capabilities for an agent is basically
nonexistent, but [looking at the
code](https://github.com/microsoft/azure-pipelines-agent/blob/master/src/Microsoft.VisualStudio.Services.Agent/Capabilities/UserCapabilitiesProvider.cs)
indicates that they can be set by using a simple `key=value`-formatted
text file, provided we can find the right place to put this file.
This PR adds this file to our Linux, macOS and Windows node init scripts
to define an `assignment` capability and adds a demand for a `default`
value on each job. From then on, when we hit a case where we want a PR
to run on a specific node, and to prevent other PRs from running on that
node, we can manually override the capability from the Azure UI and
update the demand in the relevant YAML file in the PR.
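Concretely (file location and pool name assumed), the agent side is a `key=value` file and the job side is a `demands` entry:

```
# .capabilities file next to the agent installation:
assignment=default
```

```yaml
# In the job's YAML: only run on machines whose `assignment` capability
# equals `default` (pool name assumed).
pool:
  name: linux-pool
  demands:
    - assignment -equals default
```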
CHANGELOG_BEGIN
CHANGELOG_END
We have seen the following error message crop up a couple times
recently:
```
FATAL: could not create shared memory segment: No space left on device
DETAIL: Failed system call was shmget(key=5432001, size=56, 03600).
HINT: This error does *not* mean that you have run out of disk space.
It occurs either if all available shared memory IDs have been taken, in
which case you need to raise the SHMMNI parameter in your kernel, or
because the system's overall limit for shared memory has been reached.
The PostgreSQL documentation contains more information about shared
memory configuration.
child process exited with exit code 1
```
Based on [the PostgreSQL
documentation](https://www.postgresql.org/docs/12/kernel-resources.html),
this should fix it.
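A sketch of the kind of fix the PostgreSQL docs suggest (values illustrative, not necessarily the exact ones applied): raise the kernel's System V shared-memory limits, `SHMMNI` being the parameter the hint points at.

```shell
# Raise the number of shared memory segment identifiers, and persist
# the setting across reboots.
echo 'kernel.shmmni = 8192' >> /etc/sysctl.d/90-postgres.conf
sysctl --system
```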
CHANGELOG_BEGIN
CHANGELOG_END