daml/azure-cron.yml

# Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
# Azure Pipelines file, see https://aka.ms/yaml

# Do not run on PRs
pr: none

# Do not run on merge to main
trigger: none

# Do run on a schedule (hourly)
#
# This is currently (2019-08-15) broken on Azure for GitHub-hosted repos. It
# does, however, work as expected for Azure-hosted repos. As a workaround, we
# have created a repo inside Azure that contains an `azure-pipelines.yml` file
# that just triggers this job.
#
# When the situation is resolved, delete that repo in Azure and uncomment the
# following. In the meantime, this should stay commented so we avoid running
# jobs twice when Azure fixes this issue.
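#
# For reference, a minimal sketch of what that trigger repo's
# `azure-pipelines.yml` could look like. This is hypothetical: the actual
# repo contents are not reproduced here, the queueing step is an assumption,
# and `<org>`, `<project>` and `<definition id>` are placeholders.
#
#   schedules:
#   - cron: "0 * * * *"
#     displayName: hourly cron
#     branches:
#       include:
#       - main
#     always: true
#   jobs:
#   - job: trigger
#     steps:
#     - bash: |
#         # Queue this pipeline through the Azure DevOps Builds REST API,
#         # authenticating with a personal access token stored in PAT.
#         curl -X POST -u ":$(PAT)" \
#              -H "Content-Type: application/json" \
#              -d '{"definition": {"id": <definition id>}}' \
#              "https://dev.azure.com/<org>/<project>/_apis/build/builds?api-version=5.0"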
#schedules:
#- cron: "0 * * * *"
#  displayName: hourly cron
#  branches:
#    include:
#    - main
#  always: true

jobs:
- job: fix_bazel_cache
  timeoutInMinutes: 120
  pool:
    name: 'ubuntu_20_04'
    demands: assignment -equals default
  steps:
  - checkout: self
  - bash: ci/dev-env-install.sh
    displayName: 'Build/Install the Developer Environment'
  - template: ci/bash-lib.yml
    parameters:
      var_name: bash-lib
  - bash: |
      set -euo pipefail
      eval "$(dev-env/bin/dade-assist)"
      bazel build //ci/cron:cron
      key=$(mktemp)
      cleanup="rm -rf $key ~/.config/gcloud"
      trap "$cleanup" EXIT
      echo "$GCRED" > $key
      gcloud auth activate-service-account --key-file=$key
      export BOTO_CONFIG=/dev/null
      ./bazel-bin/ci/cron/cron bazel-cache --age 75 --delete --cache-suffix '**'
    env:
      GCRED: $(GOOGLE_APPLICATION_CREDENTIALS_CONTENT)
  - template: ci/tell-slack-failed.yml

- job: vscode_marketplace
  timeoutInMinutes: 10
  pool:
    name: 'ubuntu_20_04'
    demands: assignment -equals default
  steps:
  - checkout: self
  - bash: |
      set -euo pipefail
      eval "$(dev-env/bin/dade-assist)"
      AUTH=$(echo -n "OAuth:${MARKETPLACE_TOKEN}" | base64 -w0)
      MARKET=$(curl -H "Authorization: Basic $AUTH" \
                    -H "Accept: application/json;api-version=5.0-preview.2" \
                    -sSfL \
                    "https://marketplace.visualstudio.com/_apis/gallery/publishers/DigitalAssetHoldingsLLC/extensions/daml?flags=1" \
               | jq -r '.versions[0].version')
      # This jq expression should ensure that we always upload the
      # highest-numbered version. Here is how this works:
      #
      # 1. The GitHub API documentation does not specify the order for the
      #    "list releases" endpoint, but does specify that the "latest"
      #    endpoint returns the release that points to the most recent commit.
      #    Assuming the same sort order is applied for the list endpoint
      #    (which empirically seems to hold so far), this means that releases
      #    may be out-of-order wrt version numbers, e.g. 1.1.0 may appear
      #    after 1.0.2.
      # 2. The `.tag_name | .[1:] | split (".") | map(tonumber)` part will
      #    turn "v1.0.2" into an array [1, 0, 2].
      # 3. jq documents its sort method to sort numbers in numeric order
      #    and arrays in lexical order (ascending in both cases).
      #
      # This is required because, while the VSCode Marketplace does show
      # _a_ version number, it doesn't handle versions at all: we can only
      # have one version on the marketplace at any given time, and any
      # upload replaces the existing version.
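      #
      # A worked example (hypothetical tags, not real releases): given
      # ["v1.0.2", "v1.1.0", "v1.0.10"], step 2 yields
      # [[1,0,2], [1,1,0], [1,0,10]]; `sort | reverse` puts [1,1,0] first,
      # and `map(tostring) | join(".")` turns it back into "1.1.0". Note
      # that a plain string sort would wrongly rank "1.0.2" above "1.0.10".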
      GITHUB=$(curl https://api.github.com/repos/digital-asset/daml/releases -sSfL \
               | jq -r '. | map(select(.prerelease == false)
                               | .tag_name
                               | .[1:]
                               | split (".")
                               | map(tonumber))
                        | sort
                        | reverse
                        | .[0]
                        | map(tostring)
                        | join(".")')
if [[ "$GITHUB" != "$MARKET" ]] && git merge-base --is-ancestor 798e96c9b9034eac85ace786b9e1955cf380285c v$GITHUB; then
echo "Publishing $GITHUB to VSCode Marketplace"
git checkout v$GITHUB
cp LICENSE compiler/daml-extension
trap "rm -rf $PWD/compiler/daml-extension/LICENSE" EXIT
cd compiler/daml-extension
sed -i "s/__VERSION__/$GITHUB/" package.json
# This produces out/src/extension.js
bazel run --run_under="cd $PWD &&" @nodejs//:yarn install
bazel run --run_under="cd $PWD &&" @nodejs//:yarn compile
bazel run --run_under="cd $PWD && " @daml_extension_deps//vsce/bin:vsce -- publish --yarn $GITHUB -p $MARKETPLACE_TOKEN
else
if [[ "$GITHUB" == "$MARKET" ]]; then
echo "Version on marketplace is already the latest ($GITHUB)."
else
echo "Latest version is not ready for marketplace publication."
fi
fi
    env:
      MARKETPLACE_TOKEN: $(VSCODE_MARKETPLACE_TOKEN)
  - template: ci/tell-slack-failed.yml

- job: download_stats
  timeoutInMinutes: 10
  pool:
    name: "ubuntu_20_04"
    demands: assignment -equals default
  steps:
  - checkout: self
  - bash: |
      set -euo pipefail
      eval "$(dev-env/bin/dade-assist)"
      STATS=$(mktemp)
      curl https://api.github.com/repos/digital-asset/daml/releases -sSfL | gzip -9 > $STATS
      GCS_KEY=$(mktemp)
      cleanup () {
        rm -f $GCS_KEY
      }
      trap cleanup EXIT
      echo "$GOOGLE_APPLICATION_CREDENTIALS_CONTENT" > $GCS_KEY
      gcloud auth activate-service-account --key-file=$GCS_KEY
      BOTO_CONFIG=/dev/null gsutil cp $STATS gs://daml-data/downloads/$(date -u +%Y%m%d_%H%M%SZ).json.gz
    env:
      GOOGLE_APPLICATION_CREDENTIALS_CONTENT: $(GOOGLE_APPLICATION_CREDENTIALS_CONTENT)
  - template: ci/tell-slack-failed.yml
- template: ci/refresh-get-daml-com.yml