# Copyright (c) 2022 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
# SPDX-License-Identifier: Apache-2.0

# Azure Pipelines file, see https://aka.ms/yaml

# Do not run on PRs
pr: none

# Do not run on merge to main
trigger: none

# Do run on a schedule (hourly)
#
# This is currently (2019-08-15) broken on Azure for GitHub-hosted repos. It
# does, however, work as expected for Azure-hosted repos. As a workaround, we
# have created a repo inside Azure that contains an `azure-pipelines.yml` file
# that just triggers this job.
#
# When the situation is resolved, delete that repo in Azure and uncomment the
# following. In the meantime, this should stay commented so we avoid running
# jobs twice when Azure fixes this issue.
#schedules:
#- cron: "0 * * * *"
#  displayName: hourly cron
#  branches:
#    include:
#    - main
#  always: true

jobs:
- job: docs
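  # 2h timeout: the docs cron builds the full documentation site and uploads
  # it to s3; an earlier 50-minute limit was once hit mid-upload, leaving the
  # site in an inconsistent state.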
  timeoutInMinutes: 120
  pool:
    name: 'ubuntu_20_04'
    demands: assignment -equals default
  steps:
  - checkout: self
  - bash: ci/dev-env-install.sh
    displayName: 'Build/Install the Developer Environment'
  - bash: |
      set -euo pipefail
      eval "$(dev-env/bin/dade assist)"
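
      # Build and run the in-repo cron tool; its `docs` command rebuilds the
      # documentation site and syncs it (and versions.json) to s3.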
      bazel build //ci/cron:cron
      ./bazel-bin/ci/cron/cron docs
    env:
      AWS_ACCESS_KEY_ID: $(AWS_ACCESS_KEY_ID)
      AWS_SECRET_ACCESS_KEY: $(AWS_SECRET_ACCESS_KEY)
  - template: ci/tell-slack-failed.yml

- job: docker_image
  timeoutInMinutes: 60
  pool:
    name: 'ubuntu_20_04'
    demands: assignment -equals default
  steps:
  - checkout: self
  - bash: |
      set -euo pipefail

      eval "$(dev-env/bin/dade-assist)"
      HEAD=$(git rev-parse HEAD)
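      # Keep retrying until nix-build succeeds; fetching these tools can fail
      # transiently.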
      while ! nix-build --no-out-link -A tools.sed -A tools.jq -A tools.curl -A tools.base64 nix; do :; done
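
      # Log in to Docker Hub and load the content-trust signing key; the trap
      # ensures credentials do not outlive this job on the shared agent.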
      trap 'rm -rf ~/.docker' EXIT
      echo $DOCKER_PASSWORD | docker login --username $DOCKER_LOGIN --password-stdin
      echo $DOCKER_CONTENT_TRUST_KEY | base64 -d > ~/.docker/da_automation.key
      chmod 600 ~/.docker/da_automation.key
      docker trust key load ~/.docker/da_automation.key --name $DOCKER_CONTENT_TRUST_USERNAME
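
      # Compare the set of GitHub release tags against the tags already
      # published on Docker Hub.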
2021-05-03 16:11:30 +03:00
|
|
|
RELEASES=$(curl https://api.github.com/repos/digital-asset/daml/releases -sSfL | jq -r '.[] | .tag_name')
|
2019-06-21 21:39:24 +03:00
|
|
|
DIR=$(pwd)
|
2021-05-03 16:11:30 +03:00
|
|
|
VERSIONS=$(curl 'https://hub.docker.com/v2/repositories/digitalasset/daml-sdk/tags/?page_size=10000' -sSfL)
|
2020-01-13 13:53:38 +03:00
|
|
|
# Our docker tags should be stable. Therefore, we only build the image if it has not already
|
|
|
|
# been built before and we checkout the Dockerfile for the release tag.
|
|
|
|
# We do not update docker images for older releases so only docker images for SDK releases
|
|
|
|
# >= 0.13.43 are built this way.
|
2019-06-21 21:39:24 +03:00
|
|
|
for version in $(echo $RELEASES | sed -e 's/ /\n/g'); do
        LAST_UPDATE=$(echo $VERSIONS | jq -r '.results[] | select(.name == "'${version#v}'") | .last_updated')
        if [[ -n "$LAST_UPDATE" ]]; then
          echo "${version#v} already exists, skipping."
        else
          echo "Building version ${version#v}..."
          #git checkout "$version"
          cd ci/docker/daml-sdk
          docker build -t digitalasset/daml-sdk:${version#v} --build-arg VERSION=${version#v} .
          #git checkout Dockerfile
          # Despite the name not suggesting it at all, this actually signs
          # _and pushes_ the image; see
          # https://docs.docker.com/engine/security/trust/#signing-images-with-docker-content-trust
          docker trust sign digitalasset/daml-sdk:${version#v}
          cd "$DIR"
          git checkout $HEAD
          echo "Done."
        fi
      done
    env:
      DOCKER_LOGIN: $(DOCKER_LOGIN)
      DOCKER_PASSWORD: $(DOCKER_PASSWORD)
      DOCKER_CONTENT_TRUST_KEY: $(DOCKER_CONTENT_TRUST_KEY)
      DOCKER_CONTENT_TRUST_USERNAME: $(DOCKER_CONTENT_TRUST_USERNAME)
      # Does not appear explicitly in the script, but is used by
      # `docker trust key load`.
      DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE: $(DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE)
  - template: ci/tell-slack-failed.yml

- job: vscode_marketplace
  timeoutInMinutes: 10
  pool:
    name: 'ubuntu_20_04'
    demands: assignment -equals default
  steps:
  - checkout: self
  - bash: |
      set -euo pipefail

      eval "$(dev-env/bin/dade-assist)"
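
      # Ask the marketplace for the currently-published version; the token is
      # sent as the password half of a Basic auth header.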
      AUTH=$(echo -n "OAuth:${MARKETPLACE_TOKEN}" | base64 -w0)
      MARKET=$(curl -H "Authorization: Basic $AUTH" \
                    -H "Accept: application/json;api-version=5.0-preview.2" \
                    -sSfL \
                    "https://marketplace.visualstudio.com/_apis/gallery/publishers/DigitalAssetHoldingsLLC/extensions/daml?flags=1" \
               | jq -r '.versions[0].version')
      # This jq expression should ensure that we always upload the
      # highest-numbered version. Here is how this works:
      #
      # 1. The GitHub API documentation does not specify the order for the
      #    "list releases" endpoint, but does specify that the "latest"
      #    endpoint returns the release that points to the most recent commit.
      #    Assuming the same sort order is applied for the list endpoint
      #    (which empirically seems to hold so far), this means that releases
      #    may be out of order with respect to version numbers, e.g. 1.1.0 may
      #    appear after 1.0.2.
      # 2. The `.tag_name | .[1:] | split (".") | map(tonumber)` part turns
      #    "v1.0.2" into the array [1, 0, 2].
      # 3. jq documents its sort method to sort numbers in numeric order
      #    and arrays in lexical order (ascending in both cases).
      #
      # This is required because, while the VSCode Marketplace does show
      # _a_ version number, it doesn't handle versions at all: we can only
      # have one version on the marketplace at any given time, and any
      # upload replaces the existing version.
      GITHUB=$(curl https://api.github.com/repos/digital-asset/daml/releases -sSfL \
               | jq -r '. | map(select(.prerelease == false)
                               | .tag_name
                               | .[1:]
                               | split (".")
                               | map(tonumber))
                          | sort
                          | reverse
                          | .[0]
                          | map(tostring)
                          | join(".")')
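      # Only publish tags that contain the commit below; older tags are not
      # ready for marketplace publication (see the else branch).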
if [[ "$GITHUB" != "$MARKET" ]] && git merge-base --is-ancestor 798e96c9b9034eac85ace786b9e1955cf380285c v$GITHUB; then
|
2019-06-27 01:48:19 +03:00
|
|
|
echo "Publishing $GITHUB to VSCode Marketplace"
|
2020-04-30 16:10:30 +03:00
|
|
|
git checkout v$GITHUB
|
2022-01-17 16:40:42 +03:00
|
|
|
cp LICENSE compiler/daml-extension
|
|
|
|
trap "rm -rf $PWD/compiler/daml-extension/LICENSE" EXIT
|
2019-07-08 12:40:48 +03:00
|
|
|
cd compiler/daml-extension
|
2021-06-17 15:34:36 +03:00
|
|
|
sed -i "s/__VERSION__/$GITHUB/" package.json
|
2019-07-08 22:47:38 +03:00
|
|
|
# This produces out/src/extension.js
|
2022-01-17 16:40:42 +03:00
|
|
|
bazel run --run_under="cd $PWD &&" @nodejs//:yarn install
|
|
|
|
bazel run --run_under="cd $PWD &&" @nodejs//:yarn compile
|
2020-07-27 19:50:23 +03:00
|
|
|
bazel run --run_under="cd $PWD && " @daml_extension_deps//vsce/bin:vsce -- publish --yarn $GITHUB -p $MARKETPLACE_TOKEN
|
2019-06-27 01:48:19 +03:00
|
|
|
else
|
2020-04-30 16:10:30 +03:00
|
|
|
if [[ "$GITHUB" == "$MARKET" ]]; then
|
2019-06-27 01:48:19 +03:00
|
|
|
echo "Version on marketplace is already the latest ($GITHUB)."
|
|
|
|
else
|
|
|
|
echo "Latest version is not ready for marketplace publication."
|
|
|
|
fi
|
|
|
|
fi
|
|
|
|
env:
|
|
|
|
MARKETPLACE_TOKEN: $(VSCODE_MARKETPLACE_TOKEN)
|
2020-01-23 17:28:37 +03:00
|
|
|
- template: ci/tell-slack-failed.yml

- job: download_stats
  timeoutInMinutes: 10
  pool:
    name: "ubuntu_20_04"
    demands: assignment -equals default
  steps:
  - checkout: self
  - bash: |
      set -euo pipefail

      eval "$(dev-env/bin/dade-assist)"
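
      # Snapshot the GitHub releases API response (which includes per-asset
      # download counts) and compress it for upload.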
      STATS=$(mktemp)
      curl https://api.github.com/repos/digital-asset/daml/releases -sSfL | gzip -9 > $STATS

      GCS_KEY=$(mktemp)
      cleanup () {
        rm -f $GCS_KEY
      }
      trap cleanup EXIT
      echo "$GOOGLE_APPLICATION_CREDENTIALS_CONTENT" > $GCS_KEY
      gcloud auth activate-service-account --key-file=$GCS_KEY
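      # BOTO_CONFIG=/dev/null stops gsutil from reading a stray boto config
      # that could conflict with the service-account credentials activated
      # above.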
      BOTO_CONFIG=/dev/null gsutil cp $STATS gs://daml-data/downloads/$(date -u +%Y%m%d_%H%M%SZ).json.gz
    env:
      GOOGLE_APPLICATION_CREDENTIALS_CONTENT: $(GOOGLE_APPLICATION_CREDENTIALS_CONTENT)
  - template: ci/tell-slack-failed.yml