There are two issues with the current setup:
- an iptables entry prevents connecting to the metadata server (see the illustrative rule below), and
- the machines are given insufficient permissions.
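As an illustration of the first point (not the literal rule in this change): on GCE the metadata server sits at the fixed link-local address 169.254.169.254, so the fix amounts to letting traffic to that address through before any blanket drop rule.

```bash
# Hypothetical rule: allow outbound HTTP to the GCE metadata server
# (169.254.169.254) ahead of the existing drop rules.
iptables -I OUTPUT 1 -d 169.254.169.254/32 -p tcp --dport 80 -j ACCEPT
```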
It looks like the curl command currently installs, but does not start, the service that is supposed to send logs to Stackdriver. Connecting to the machines manually and calling `restart` seems to fix it.
* remove the -O option from the curl command so the script contents are piped to bash (see the sketch below)
* follow redirects when downloading the Stackdriver install script
Co-Authored-By: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
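A minimal sketch of the intended invocation; the install script URL and the service name are assumptions based on the stock Stackdriver logging agent, not taken from this change.

```bash
# With -O, curl writes the script to a file instead of stdout, so nothing
# reaches bash. Dropping -O and adding -L (follow redirects) pipes the
# script straight into bash:
curl -sSL https://dl.google.com/cloudagents/install-logging-agent.sh | bash

# The manual fix observed on the machines was a restart of the agent
# (the Stackdriver logging agent service is typically named google-fluentd):
service google-fluentd restart || true
```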
* ci: always use the linux-pool
reduce the difference in environment between external and internal
contributions
* infra: tweak the linux cache warmup script
Don't use the same directory for the Bazel cache and the disk cache; the
disk cache is something else entirely. Be more specific about the build
target, and clean up afterwards (see the sketch below).
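A rough sketch of what the tweaked warmup step looks like; the paths and the build target are hypothetical, only the shape of the commands is the point.

```bash
# Keep the Bazel output base and the disk cache in separate directories;
# they are different things and must not be shared.
OUTPUT_BASE=/var/tmp/bazel-warmup        # hypothetical path
DISK_CACHE=/var/tmp/bazel-disk-cache     # hypothetical path

# Warm up a specific target instead of a blanket //... build.
bazel --output_base="$OUTPUT_BASE" build --disk_cache="$DISK_CACHE" //some/specific:target

# Clean after yourself so the warmup leaves no stale state behind.
bazel --output_base="$OUTPUT_BASE" clean --expunge
```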
* infra: bump the linux agent disk to 200GB
avoid running out of disk space
Warm up local caches by building dev-env and current daml master. This is
allowed to fail, as we still want to have CI machines around even when
their caches are only warmed up halfway.
Afterwards, we purge old agents that might still be around because they
didn't unregister themselves (sketched below).
This depends on #402 being merged, as otherwise purge_old_agents.py
obviously can't be found.
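Sketch of the resulting startup sequence; the entry points, paths, and targets are assumptions rather than the literal script.

```bash
# Best-effort warmup: a machine with a half-warm cache is still useful,
# so failures here must not take the agent down.
(
  ./dev-env/bin/dade-assist &&   # warm up dev-env (assumed entry point)
  bazel build //...              # warm up current daml master (illustrative target)
) || echo "warning: cache warmup failed, continuing anyway"

# Purge agents that are gone but never unregistered themselves.
# purge_old_agents.py comes from #402, hence the dependency on that PR.
python3 purge_old_agents.py || true
```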
* nix: add more providers to terraform
* docs: make tarballs more reproducible
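For context, the usual knobs for reproducible tarballs with GNU tar look something like the following; the flags and paths are illustrative, not necessarily the exact ones used in this change.

```bash
# Pin ordering, ownership, and timestamps so the archive bytes do not depend
# on who builds it or when; gzip -n drops gzip's own embedded timestamp.
tar --sort=name \
    --owner=0 --group=0 --numeric-owner \
    --mtime='UTC 2020-01-01' \
    -cf - docs/ | gzip -n > docs.tar.gz
```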
* ci: use the linux-pool pool
* ci: tweak the nix installation
handle the case where the user is root and running on Ubuntu (see the sketch below)
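A hedged sketch of the idea, assuming the upstream installer from nixos.org; the real script may branch differently.

```bash
# On the Ubuntu CI image the user can be root, and the single-user Nix
# installer refuses to run as root, so fall back to the multi-user
# daemon install in that case.
if [ "$(id -u)" -eq 0 ]; then
  sh <(curl -sSL https://nixos.org/nix/install) --daemon
else
  sh <(curl -sSL https://nixos.org/nix/install) --no-daemon
fi
```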
* infra: terraform fmt
* infra: add Azure Pipeline agents
* ci: only enable linux-pool for internal PRs