Commit Graph

139 Commits

Author SHA1 Message Date
Gary Verhaegen
78f5757eb6
tell release manager about release instructions (#7240)
Easy to assume everyone knows where they are, I suppose, but little harm
in repeating it.

CHANGELOG_BEGIN
CHANGELOG_END
2020-08-26 14:05:21 +02:00
Moritz Kiefer
48e05e25e2
Fix name of get_gh_auth_header (#7239)
See
b812f09da9/ci/bash-lib.yml (L12),
this obviously failed on CI.

changelog_begin
changelog_end
2020-08-26 12:16:47 +02:00
Gary Verhaegen
6d1adee92f
push script-runner and trigger-runner to Artifactory (#7196)
I have created the corresponding user and repositories on Artifactory,
and tested the `curl` command manually. I'll add the corresponding
credentials to Azure once this is approved.

CHANGELOG_BEGIN
CHANGELOG_END
2020-08-20 19:11:27 +02:00
Gary Verhaegen
94f2a10e70
fix auto release pr message (#7178)
Because the build for the automated release PR is triggered "manually"
by the Azure cron (through the GH API), as far as the build of that PR
is concerned, this is a random commit build, not a PR build, and thus it
doesn't have a PR number.

This works around that by getting the corresponding PR number from the
GH API.
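
A minimal sketch of that workaround, using the (then-preview) GitHub endpoint that lists the PRs associated with a commit and assuming `jq` is available; the actual script may differ:

```
SHA="$(git rev-parse HEAD)"
PR_NUMBER="$(curl -sfL \
  -H "Accept: application/vnd.github.groot-preview+json" \
  "https://api.github.com/repos/digital-asset/daml/commits/$SHA/pulls" \
  | jq -r '.[0].number')"
echo "building for PR #$PR_NUMBER"
```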

CHANGELOG_BEGIN
CHANGELOG_END
2020-08-19 13:58:56 +02:00
Gary Verhaegen
1baea84ca0
fix auth header for compat pr (#7134)
On the last release, the job succeeded despite not being able to create
the compat PR. This fixes:

- The curl call to actually return non-0 on non-2xx HTTP response.
- The way in which we encode the credentials.

This also attempts to create a Bash library, hopefully this time in a
way that doesn't get destroyed by our release process. IIUC pipeline
instructions (YAML files) are all parsed and read before any execution,
so by embedding the Bash library in a template we should get the correct
version (i.e. the one that is running the pipeline) even when checking
out other commits.
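
A minimal sketch of both fixes (`$GITHUB_USER` and `$GITHUB_TOKEN` stand in for whatever credentials the pipeline provides):

```
# Encode user:token for HTTP Basic auth.
AUTH="$(printf '%s:%s' "$GITHUB_USER" "$GITHUB_TOKEN" | base64)"
# --fail makes curl exit non-zero on a non-2xx response instead of
# printing the error body and exiting 0.
curl --fail \
     -H "Authorization: Basic $AUTH" \
     -H "Content-Type: application/json" \
     -d '{"title":"update compat matrix","head":"compat-update","base":"master"}' \
     "https://api.github.com/repos/digital-asset/daml/pulls"
```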

CHANGELOG_BEGIN
CHANGELOG_END
2020-08-14 11:35:57 +02:00
Gary Verhaegen
17c0cddb3d
fix release pr notif (#7099)
At the moment, the release PR notification piggybacks on the existing PR
notifications. Unfortunately, that does not work, because those
explicitly only trigger for "pr" builds, whereas the release PR gets a
"manual" build, as it is opened by Azure and thus does not run
automatically.

CHANGELOG_BEGIN
CHANGELOG_END
2020-08-12 13:29:55 +02:00
Moritz Kiefer
5ac9b3ed44
Fix more awk typos (#7104)
changelog_begin
changelog_end
2020-08-12 10:53:16 +00:00
Moritz Kiefer
49b5051ebf
Fix user notify job (again) (#7101)
Double quotes cause $1 to be expanded instead of being passed to AWK.
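
A quick illustration, in a script where `$1` is unset:

```
echo "a b c" | awk "{print $1}"   # shell expands $1: awk sees {print } and prints the whole line
echo "a b c" | awk '{print $1}'   # awk sees $1 itself and prints "a"
```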

changelog_begin
changelog_end
2020-08-12 12:21:21 +02:00
Moritz Kiefer
b53bb7bd53
Fix notify_user job for releases (#7096)
Bash is hard and I hate shell scripts

changelog_begin
changelog_end
2020-08-12 08:48:19 +00:00
Gary Verhaegen
00f3de63c9
rotate responsibility for release process (#7011)
This PR attempts to add some automation around assigning release
management. The PR adds a file `release/rotation`; each week, the
updated CI cron job will:

- Open a PR for the new release [as current].
- Assign the first user in the file to that PR.
- Add the Standard-Change label to the PR.
- Start the build for that PR [as current].
- Open a new PR that rotates the `release/rotation` file, i.e. moves
  the first line to the end of the file.

This PR also adds mentions of the "release handler" (the first line of
`release/rotation`) to the various messages we send to Slack along the
release process.

The initial state of the `release/rotation` file has been created by
listing all the volunteers (Language team, Application Runtime team, as
well as @SamirTalwar-DA and @stefanobaghino-da) and piping the file
through `shuf`. (Then I put myself at the top so I can hopefully iron
out the issues with the first attempt.)
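
The rotation itself amounts to moving the first line of the file to the end; a minimal sketch:

```
ROTATION=release/rotation
# Everything but the first line, then the first line again at the end.
{ tail -n +2 "$ROTATION"; head -n 1 "$ROTATION"; } > "$ROTATION.tmp"
mv "$ROTATION.tmp" "$ROTATION"
```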

CHANGELOG_BEGIN
CHANGELOG_END
2020-08-05 18:58:56 +02:00
Gary Verhaegen
e4482767af
fix release job (#6959)
This failed on [the master build for the latest release][0] (trigger commit
08d4bb0a21), fortunately after everything
was done so the only consequence is a red tick in the commit list.

[0]: https://dev.azure.com/digitalasset/daml/_build/results?buildId=50840&view=logs&jobId=8d802004-fbbb-5f17-b73e-f23de0c1dec8&j=8d802004-fbbb-5f17-b73e-f23de0c1dec8&t=3f4756f8-86d7-526f-1a17-d1c7745ae68d

CHANGELOG_BEGIN
CHANGELOG_END
2020-08-03 12:35:51 +02:00
Moritz Kiefer
9ed9970493
Avoid dev-env for posting to slack (#6914)
This is running on Azure’s agents and we just call curl, so there is
really no need for dev-env (as evidenced by the fact that the message
got sent despite dev-env failing).

changelog_begin
changelog_end
2020-07-29 15:22:23 +02:00
Samir Talwar
11bc582a9e
Publish DAML on SQL to GitHub Releases. (#6876) 2020-07-27 17:33:20 +02:00
Gary Verhaegen
8043756883
better release triggers (#6859)
Based on feedback from @nickchapman-da, this PR aims at making the
release process easier by:

- Automatically opening a release PR on Wednesday morning. The goal here
  is that by the time we start working, there is a release already
  built, so we save about an hour on waiting for that. This obviously
  doesn't help with ad-hoc releases.
- On a release PR build, posting to Slack when the release is ready to
  merge.
- On a release master build, posting to Slack when a release is ready to
  be tested.

My hope is that this makes the release process less tedious. This is not
trying to address the actual release testing, but hopefully should
reduce the annoyance of having to constantly go and check if the release
is ready.

CHANGELOG_BEGIN
CHANGELOG_END
2020-07-24 16:40:11 +00:00
Gary Verhaegen
34ca4f7219
run tests on auto-compat pr (#6755)
While closely following the 1.3 release through our pipeline to check
that #6709 worked as expected, I realized that the automatically-created
PR does not start the normal tests either, presumably because it's been
opened by a bot. The bot does have write access to the repo (obviously,
as it can create the PR in the first place), but somehow that doesn't
seem to count as a PR with write access for Azure.

So this PR adds the normal test run too, so we don't need to manually
say `/azp run` on the PR.

CHANGELOG_BEGIN
CHANGELOG_END
2020-07-16 11:36:22 +00:00
Moritz Kiefer
da995886bb
Fix setvar call in compat update PR (#6734)
The revert that removed the use of lib.sh broke this again.

changelog_begin
changelog_end
2020-07-15 10:42:16 +02:00
Moritz Kiefer
52b9eabbcc
Revert "refactor ci jobs: add setvar to ci/lib.sh (#6708)" (#6732)
This reverts commit 61e9df3eaf.

This interacts very badly with the fact that we check out old commits
for releases. While we could fix it for this particular issue, I don’t
think this buys us enough to make it worth doing, and it makes it
easy to introduce issues in the future if we modify lib.sh.

changelog_begin
changelog_end
2020-07-14 23:53:49 +02:00
Gary Verhaegen
6ea1afc247
run full compat tests on compat tests update PR (#6709)
This asks Azure to run a full compat test against the branch that
updates the compat test matrix, so we know it's good when we merge it
rather than the next morning.

CHANGELOG_BEGIN
CHANGELOG_END
2020-07-13 17:46:32 +02:00
Gary Verhaegen
61e9df3eaf
refactor ci jobs: add setvar to ci/lib.sh (#6708)
CHANGELOG_BEGIN
CHANGELOG_END
2020-07-13 17:34:54 +02:00
Gary Verhaegen
6366fb753d
fix auto compat update PR (#6706)
This PR fixes a few things with the script that automatically updates
the compat test matrix on release, based on its use over the past two
weeks. Specifically:

- Send an error message to Slack in case this job fails. Previously,
  failures here were silent.
- Add an exponential backoff strategy to wait for the artifact to be
  available on Maven (see the sketch after this list). Previously, the
  update script just failed.
- Allow for rerunning on the same machine after failure by removing the
  branch if it exists.
- Fix the commit message to include proper newlines instead of literal
  `\n`'s.
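
A minimal sketch of such a backoff loop (`$ARTIFACT_URL` and the retry bound are illustrative):

```
backoff=1
until curl -sfI "$ARTIFACT_URL" > /dev/null; do
  if [ "$backoff" -gt 512 ]; then
    echo "artifact never appeared on Maven" >&2
    exit 1
  fi
  sleep "$backoff"
  backoff=$((backoff * 2))   # 1, 2, 4, 8, ... seconds
done
```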

CHANGELOG_BEGIN
CHANGELOG_END
2020-07-13 15:52:50 +02:00
Moritz Kiefer
631ed3e891
Bump timeouts in compat tests (#6689)
This bumps the timeout of the compat tests on PRs to 360 minutes
matching other jobs on a PR (we mainly hit this if ghc-lib is rebuilt)
and the timeout on the daily jobs to 720 minutes (we hit this if
_everything_ is rebuilt).

I am slightly worried about the timeout on the daily job. After having
taken a look at it, there are a few reasons how we ended up here:

1. We started including more tests, e.g., sandbox-classic. Not much we
   can do here, those tests are useful.

2. We have a very large number of snapshots for 1.3.0. There are a few
   reasons for this:

   1. Timing: We branched off early for the 1.2.0 release so the first
      snapshot for 1.3 was on June 3rd. For 1.4 it looks like the first
      snapshot will be on July 15th so that’s roughly 2 extra
      snapshots just due to timing.

   2. Additional snapshots: We had one broken snapshot due to a broken
      VSCode extension that we didn’t delete (probably not worth doing
      at this point). We also had to backport to an old snapshot which
      resulted in another extra snapshot. We also had one extra
      snapshot which was supposed to be the RC but wasn’t since the
      ANF revert needed to go in.

   The only thing that is clearly useless is the one broken snapshot
   but that doesn’t change things that much. I see 2 orthogonal
   options for improving this assuming we agree that the current
   runtime is worryingly high.

   1. Prune snapshots more aggressively, e.g., only include the last 3
      snapshots. That’s a pretty arbitrary decision but it would
      enforce a hard limit.

   2. Reduce test combinations. E.g., only test snapshots vs stable
      releases but not snapshots vs snapshots.

3. We end up forcing a full build quite frequently. Here are just 2
   examples of how we’ve done that so far.

   1. Upgrade rules_haskell. Basically all tests are run by a Haskell
      binary so this forces a full rebuild.

   2. Change runfiles of `daml`.

I don’t think there is much we can do about 1 or 3 which leaves us
with 2. One not entirely unreasonable option is to just do nothing. We
did have periods where things went pretty smoothly for the most part
and each month we reset to a much smaller number of releases (we also
have to start throwing out old stable releases at some
point). Otherwise reducing the number of test combinations seems the
most promising option to me.

changelog_begin
changelog_end
2020-07-10 12:34:53 +00:00
Moritz Kiefer
1b533561b4
Only publish JSON API to GH releases (#6620)
daml-on-sql isn’t quite ready

changelog_begin
changelog_end
2020-07-06 14:43:09 +00:00
Moritz Kiefer
bc3f485b9a
Update maven_install.json in compatibility tests (#6555)
We take our own libraries from latest_stable_version, which changed, but
we did not rerun pinning, so maven_install.json did not get updated.
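
Rerunning pinning is the usual rules_jvm_external invocation (assuming the default `maven` repository name):

```
bazel run @unpinned_maven//:pin
```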

changelog_begin
changelog_end
2020-07-01 00:49:53 +00:00
Gary Verhaegen
beb33f2ab1
add explanation for clearing shared segments (#6545)
As requested on #6530.

CHANGELOG_BEGIN
CHANGELOG_END
2020-06-30 13:21:32 +00:00
Gary Verhaegen
55776f92ba
clear shared memory segment on macOS (#6530)
For a while now we've had errors along the lines of

```
FATAL:  could not create shared memory segment: No space left on device
DETAIL:  Failed system call was shmget(key=5432001, size=56, 03600).
HINT:  This error does *not* mean that you have run out of disk space.
It occurs either if all available shared memory IDs have been taken, in
which case you need to raise the SHMMNI parameter in your kernel, or
because the system's overall limit for shared memory has been reached.
        The PostgreSQL documentation contains more information about
shared memory configuration.
child process exited with exit code 1
```

on macOS CI nodes, which we were not able to reproduce locally. Today I
managed to, sort of by accident, and that allowed me to dig a bit
further.

The root cause seems to be that PostgreSQL, as run by Bazel, does not
always seem to properly unlink the shared memory segment it uses to
communicate with itself. On my machine, running:

```
bazel test -t- --runs_per_test=100 //ledger/sandbox:conformance-test-wall-clock-postgresql
```

and eyeballing the results of

```
watch ipcs -mcopt
```

I would say about one in three runs leaks its memory segment. After much
googling and some head scratching trying to figure out the C APIs for
managing shared memory segments on macOS, I kind of stumbled on a
reference to `ipcrm` in a comment to some low-ranking StackOverflow
answer. It looks like it's working very well on my machine, even if I
run it while a test (and therefore an instance of pg) is running. I
believe this is because the command does not actually remove the shared
memory segments, but simply marks them for removal once the last process
stops using them. (At least that's what the manpage describes.)
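
A sketch of the cleanup, marking every segment owned by the current user for removal (field positions assume the macOS `ipcs` output layout):

```
for id in $(ipcs -m | awk -v me="$(whoami)" '$1 == "m" && $5 == me { print $2 }'); do
  ipcrm -m "$id"   # deferred: the segment goes away once the last process detaches
done
```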

CHANGELOG_BEGIN
CHANGELOG_END
2020-06-30 01:40:16 +02:00
Gary Verhaegen
c7ea0a8b08
automatically run update-versions on release (#6479)
This PR adds an extra post-release job to CI that will run the
[`compatibility/update-versions.sh`][0] script and open a PR with the
result.

[0]: cb82a8d6be/compatibility/update-versions.sh

CHANGELOG_BEGIN
CHANGELOG_END
2020-06-24 17:02:12 +02:00
Moritz Kiefer
416a568cbd
Release daml-on-sql and JSON API to GH releases (#6397)
fixes #6384

For now this keeps the GCP bucket as well. I would suggest keeping
that for 1.3 and dropping it in 1.4, but I don’t feel particularly
strongly about this, so I’m also happy to drop it now.

changelog_begin

- [SDK] The JSON API and DAML on SQL (sandbox-classic) are now
  published as fat JARs to github releases. The GCP bucket that
  contained the fat JARs will not receive releases > 1.3.

changelog_end
2020-06-18 13:08:18 +02:00
Moritz Kiefer
2c1d4cb805
Fix nix installation (#6400)
Installing Nix now requires curl’s `-L` flag; I’ve gone ahead and just
normalized everything to use `-sfL`, which we were already using in one
place.
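
For example, the Nix install step becomes something like:

```
curl -sfL https://nixos.org/nix/install | sh
```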

changelog_begin
changelog_end
2020-06-18 10:34:08 +02:00
Gary Verhaegen
7735acb833
add sha256sums to releases (#6263)
Based on discussion on #6258.

CHANGELOG_BEGIN
CHANGELOG_END
2020-06-08 14:19:04 +00:00
Gary Verhaegen
2fe320fe48
automated ghc-lib build (#6188)
This PR aims at automating the build of ghc-lib. The current process
still has a few manual steps; it needs to be updated because Bintray is
going away, so this seemed like a good opportunity to fully automate it.

This works like the "patch bazel on Windows" jobs: the filename will
contain a hash of the `ci/da-ghc-lib` folder, and the job will run only
if the corresponding filename does not yet exist on the GCS bucket. PRs
aiming at changing the ghc-lib version will need to run twice: once to
create the artifacts, and once to change the `stack-snapshot.yaml` file
to match.
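
A minimal sketch of the build-once logic, with an illustrative bucket path and a hypothetical build script:

```
# Key the artifact on the contents of ci/da-ghc-lib: any change there
# yields a new filename and therefore a new build.
KEY="$(find ci/da-ghc-lib -type f -print0 | sort -z | xargs -0 sha256sum | sha256sum | awk '{print $1}')"
TARGET="gs://daml-ci/ghc-lib-$KEY.tar.gz"   # illustrative bucket/path
if gsutil -q stat "$TARGET"; then
  echo "ghc-lib for $KEY already exists, skipping build"
else
  ci/da-ghc-lib/build.sh "$TARGET"          # hypothetical build step
fi
```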

CHANGELOG_BEGIN
CHANGELOG_END
2020-06-04 12:05:03 +02:00
Gary Verhaegen
595f1e278d
fix fat-jar publish (#6046)
CHANGELOG_BEGIN
CHANGELOG_END
2020-05-20 15:08:53 +02:00
Gary Verhaegen
4882327db5
fix release diff (#6042)
CHANGELOG_BEGIN
CHANGELOG_END
2020-05-20 11:58:33 +02:00
Gary Verhaegen
fb6dc904a4
trigger all releases from master (#6016)
The 1.1.0 release went wrong and we had to trash it and release 1.1.1
instead. This is an attempt at identifying and correcting the root
cause behind that incident.

To understand the situation, we need to know how releases worked before
1.0. We had a one-line file called `LATEST` that specified the git SHA and
version tag for the latest release. A change to that file triggered a
release with the specified release tag, built from the source tree of
the specified commit. The `LATEST` file looked something like:

```
f050da78c9 1.0.0-snapshot.20200411.3905.0.f050da78
```

To mark a release as stable, we would change it to look like this:

```
f050da78c9 1.0.0
```

i.e. simply drop the `-snapshot...` suffix. Even though the commit (and
thus the entire source tree we build from) is the same, we would need to
rebuild almost all of our release artifacts, as they embed the version
tag in various places and ways. That worked well as long as we could
assume we were doing trunk-based development, i.e. all releases would
always come from the same (`master`) branch.

When we released 1.0, and started work on 1.1, we had a few bug reports
for 1.0 that we decided should be resolved in a point release. We
decided that the best way to handle that would be to have a branch
starting on the release commit for 1.0, and then backport patches from
`master` to that branch. We adapted our build process to also watch the
`release/1.0.x` branch and, in particular, trigger a new release build if
the `LATEST` file in that branch changed. That worked well.

The plan going forward was to keep doing regular snapshot releases from
the `master` branch, and create support (point) releases ("patch" releases
in semver) from dedicated branches.

On April 30, we made a snapshot release as an RC for 1.1.0, by changing
the `LATEST` file in the `master` branch. That release was built on commit
681c862d. On May 6, we decided to take a new snapshot as the RC for
1.1.0; we changed `LATEST` in `master` to designate 7e448d81 as the new
latest release.

On May 11, we noticed an issue that broke our builds. Without going into
details, an external artifact we depend on had changed in incompatible
ways. After fixing that on `master`, we reasoned that this would also
break the build of the final 1.1.0 release if we just tried to build
7e448d81 again. But as the target release date was May 13, we did not
want to take a new snapshot after that fix, as that would have included
one more week of work in the release, and given us no time to test it.

So we did what we did for the 1.0 branch, as it had worked well: we
created a branch that forked from `master` at commit 7e448d81 and called
it `release/1.1.x`, then cherry-picked the one fix to our build process to
work around the broken download. When the time came to make the final
1.1.0 build on May 13, we naturally picked the `LATEST` file from the
`release/1.1.x` branch and dropped the `-snapshot...` suffix. Importantly,
we did not need to update the target commit to include the "broken
download" fix as, in the meantime, the internet had fixed itself, and we
thus reasoned we should go for the exact code of the RC rather than
include an unnecessary, albeit seemingly harmless, change.

Everything went well with the release process. Tests went well too. Then
we got a report that an application that worked against the latest RC
broke with the final 1.1.0. The issue was that we had built the wrong
commit: by branching off at the point of the _target_ commit for the
latest snapshot, we did not have the change to the `LATEST` file that
designated that commit as the target. So the `LATEST` file in
`release/1.1.x` was still pointing to 681c862d.

I believe the root cause for this issue is the fact that we have
scattered our release process over multiple branches, meaning there is
no linear history of what was released and we are relying on people
being able to mentally manage multiple timelines. Therefore, I propose
to fix our release process so this should not happen again by
linearizing the release process, i.e. getting back to a situation where
all releases are made from a single branch, `master`.

Because we do want to be able to release _for_ multiple release branches
(to provide backports and bugfixes), we still need some way to
accommodate that. Having a single `LATEST` file in the same format as
before would not really work well: keeping track of interleaved release
streams on a single file would not really be easier than keeping track
of multiple branches.

My proposed solution is to instead have a multiline LATEST file, so that
all the release branch "tips" can be observed at the same time, and, as
long as we take care to only advance one release branch at a time, we
can easily keep track of each of them. This is what this PR does.

This required a few changes to our release process. Most notably:

- Obviously, as this is the main point of this PR, the build process has
  once again been restricted to only trigger new releases from the
  `master` branch.
- As our CI machinery cannot easily be made to produce multiple releases
  from a single build, the `check_for_release` step will only recognize
  a commit as a release trigger if it changes a single line in the
  `LATEST` file (see the sketch after this list). This restriction comes
  in addition to the existing one that a release commit is only allowed
  to change either just the `LATEST` file or both the `LATEST` and
  `docs/source/support/release-notes.rst` files.
- The docs publication process has been changed to update _all_
  published versions to display the _latest_ release notes page. This
  means that the release notes page will always show you all published
  versions, regardless of which version of the documentation you're
  looking at. This also means that interleaving release notes correctly on
  that page is a manual exercise.
- As per the intention of the new process, the `LATEST` file has been
  updated to contain all existing post-1.0 stable releases. It should
  also include all existing snapshot releases, should we have more than
  one at a time (say, should we discover an issue with 1.1.1 that
  required us to work on a 1.1.2).
- The `release.sh` script has been dramatically simplified as I felt it
  was trying to do too much and porting its existing functionality to a
  multi-line `LATEST` file would be too hard.
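
A minimal sketch of that single-line check (the real `check_for_release` may differ in detail):

```
added="$(git diff HEAD~1 -- LATEST | grep -c '^+[^+]' || true)"
removed="$(git diff HEAD~1 -- LATEST | grep -c '^-[^-]' || true)"
# A release bump either adds one new line or rewrites exactly one
# existing line of LATEST.
if [ "$added" -eq 1 ] && [ "$removed" -le 1 ]; then
  echo "this is a release commit"
fi
```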

CHANGELOG_BEGIN
CHANGELOG_END
2020-05-19 19:18:10 +02:00
Gary Verhaegen
af939a7ee4
provisional beta deployment for daml-on-sql (#6024)
Note: this is beta-level software. See documentation for the precise
guarantees this does and does not come with. (Documentation does not
exist at the time of opening this PR, but should exist by the time the
first version of this gets published.)

CHANGELOG_BEGIN
- We now publish Sandbox Next as an **ALPHA** standalone jar.
- We now publish the HTTP JSON API as a standalone jar.
CHANGELOG_END
2020-05-19 18:11:26 +02:00
Moritz Kiefer
294c881a2a
Fix standard change check (#5958)
This check never triggered for changes to LATEST due to the trailing
slash in `has_changed`.

changelog_begin
changelog_end
2020-05-13 13:58:43 +02:00
Moritz Kiefer
4916a28682
Include create-daml-app tests in compatibility tests (#5945)
This is the first part of #5700

It adds tests that build create-daml-app using `daml build` and then
run the codegen and build the UI. Contrary to our main tests these
also run on Windows. This is actually reasonably simple by first
building the typescript libraries on Linux and then downloading them
on Windows.

There are two parts that are still missing from the tests in the main
workspace:

1. Building the extra feature. This should be fairly easy to add.
2. Running the puppeteer tests. At least macOS and Linux should be
   reasonably easy. I don’t know what horrors Windows will throw at
   us. This step is what actually makes this a compatibility
   test. Currently it doesn’t actually launch Sandbox and the JSON API.

Since this PR is already pretty large, I’d like to tackle those things
separately.

changelog_begin
changelog_end
2020-05-13 10:39:51 +02:00
Gary Verhaegen
bda565fa44
patching Bazel on Windows (infra bits, no patch yet) (#5918)
patch Bazel on Windows (ci setup)

We have a weird, intermittent bug on Windows where Bazel gets into a
broken state. To investigate, we need to patch Bazel to add more debug
output than present in the official distribution. This PR adds the basic
infrastructure we need to download the Bazel source code, apply a patch,
compile it, and make that binary available to the rest of the build.
This is for Windows only as we already have the ability to do similar
things on Linux and macOS through Nix.

This PR does not contain any interesting patch to Bazel, just the minimum
that we can check we are actually using the patched version.

CHANGELOG_BEGIN
CHANGELOG_END
2020-05-12 23:16:04 +02:00
Gary Verhaegen
3899a59a11
switch back to hosted macOS nodes (#5935)
CHANGELOG_BEGIN
CHANGELOG_END
2020-05-11 22:59:33 +02:00
Gary Verhaegen
9b476416b8
switch back to Azure-provided macos nodes (#5920)
This is temporary. It looks like the macOS nodes are dead; @nycnewman is
looking into it, but in case he doesn't fix it in time, at least we
have a backup plan so we're not completely blocked on Monday.

CHANGELOG_BEGIN
CHANGELOG_END
2020-05-11 09:28:40 +02:00
Gary Verhaegen
4a6ab84b69
add default machine capability (#5912)
We semi-regularly need to do work that has the potential to disrupt a
machine's local cache, rendering it broken for other streams of work.
This can include upgrading nix, upgrading Bazel, debugging caching
issues, or anything related to Windows.

Right now we do not have any good solution for these situations. We can
either not do those streams of work, or we can proceed with them and
just accept that all other builds may get affected depending on which
machine they get assigned to. Debugging broken nodes is particularly
tricky as we do not have any way to force a build to run on a given
node.

This PR aims at providing a better alternative by (ab)using an Azure
Pipelines feature called
[capabilities](https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&tabs=browser#capabilities).
The idea behind capabilities is that you assign a set of tags to a
machine, and then a job can express its
[demands](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops&tabs=yaml),
i.e. specify a set of tags machines need to have in order to run it.

Support for this is fairly badly documented. We can gather from the
documentation that a job can specify two things about a capability
(through its `demands`): that a given tag exists, and that a given tag
has an exact specified value. In particular, a job cannot specify that a
capability should _not_ be present, meaning we cannot rely on, say,
adding a "broken" tag to broken machines.

Documentation on how to set capabilities for an agent is basically
nonexistent, but [looking at the
code](https://github.com/microsoft/azure-pipelines-agent/blob/master/src/Microsoft.VisualStudio.Services.Agent/Capabilities/UserCapabilitiesProvider.cs)
indicates that they can be set by using a simple `key=value`-formatted
text file, provided we can find the right place to put this file.

This PR adds this file to our Linux, macOS and Windows node init scripts
to define an `assignment` capability and adds a demand for a `default`
value on each job. From then on, when we hit a case where we want a PR
to run on a specific node, and to prevent other PRs from running on that
node, we can manually override the capability from the Azure UI and
update the demand in the relevant YAML file in the PR.
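
Concretely, the init scripts can declare the capability with a one-line key=value file next to the agent install; the `.capabilities` file name is what the agent source suggests, and `$AGENT_HOME` is illustrative:

```
# Declare this node's capability; jobs then demand it in their YAML, e.g.:
#   demands:
#   - assignment -equals default
echo "assignment=default" > "$AGENT_HOME/.capabilities"
```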

CHANGELOG_BEGIN
CHANGELOG_END
2020-05-09 18:21:42 +02:00
Gary Verhaegen
2b0c59f8af
detect cancellation in notify-user (#5895)
This PR changes the notify_user job to not run when the job has been
canceled, which happens mostly when we push new code.

Not sure how I failed to see the `canceled` function in the past, but
this does seem to do exactly what we want.

CHANGELOG_BEGIN
CHANGELOG_END
2020-05-07 19:35:45 +02:00
Gary Verhaegen
11a2fc3c2d
more flexible perf test check (#5891)
This PR separates the "last known valid perf test" commit from the
"baseline speedy implementation" commit. It is important for the perf
test to be meaningful that the changes between those two commits are
benign, say minor API adjustments, so that the perf measurement remains
meaningful.

This also adds a check on merging to master that tells Slack if the perf
test has changed and the `test_sha` file needs updating. The Slack
message is conditional on the current commit to avoid excessive noise.
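
A minimal sketch of the check, assuming `test_sha` records a hash of the perf test source (`$PERF_TEST_FILE` and `$SLACK_WEBHOOK` are illustrative):

```
expected="$(cat test_sha)"
actual="$(sha256sum "$PERF_TEST_FILE" | awk '{print $1}')"
if [ "$expected" != "$actual" ]; then
  curl -sf -X POST -H 'Content-Type: application/json' \
       -d '{"text":"perf test changed; please update test_sha"}' \
       "$SLACK_WEBHOOK"
fi
```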

CHANGELOG_BEGIN
CHANGELOG_END
2020-05-07 13:53:22 +02:00
Moritz Kiefer
b291e96ce1
Publish execution logs from Windows compatibility jobs (#5834)
Hopefully, this helps diagnose the Windows CI failures.

changelog_begin
changelog_end
2020-05-05 12:23:11 +02:00
Gary Verhaegen
49b4a8dad8
tweak release process for more reliable labeling (#5823)
Currently, there are quite a few releases that are lacking the
Standard-Change label, even though they did publish artifacts. This
makes our SOC2-compliance tracking a bit harder. For the past two
months, I have manually added the label after-the-fact while preparing
the monthly compliance report, but that doesn't seem like a great
solution.

This PR changes the release process to be more optimistic: assume the
release is going to succeed by putting in the label immediately, and
then (optionally) removing it if the release fails.

Note that the label should only be removed in the rare case where the
release was merged into master but somehow did not produce any artifact.
This can only happen if the Linux build fails quite early, which as far
as I know only happened once over the past two months when we had the
release notes race condition.

CHANGELOG_BEGIN
CHANGELOG_END
2020-05-04 15:35:58 +02:00
Andreas Herrmann
4c99f67814
Publish Bazel logs (#5821)
CHANGELOG_BEGIN
CHANGELOG_END

Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
2020-05-04 11:14:40 +02:00
Gary Verhaegen
32fbf040aa
fail collect_build_data for windows_compat (#5779)
At the moment, collect_build_data will wait for the Windows
compatibility test to have "finished", but doesn't check its return
status. This means two things:

1. Should the compatibility test end without a success or error (e.g.
   communication broken between Azure and the node), the option to rerun
   failed jobs will not appear, as there will be no failed job.
2. The subsequent notify_user step will ignore failures in the
   compatibility_windows job when reporting to Slack, making for
   confusing reports.

CHANGELOG_BEGIN
CHANGELOG_END
2020-04-30 15:03:03 +02:00
Moritz Kiefer
49e19ebed1
Make compat tests work on windows (#5732)
* Make compat tests work on windows

This required some changes to the daml_sdk rule since the read-only
installation by the assistant breaks Bazel completely. We could only
apply those changes on Windows but I think I prefer the consistency
across platforms here over trying to stay close to how the SDK is
installed on user machines given that the SDK installation is not
something we’ve had issues with.

I’ve excluded the postgresql tests for now. I don’t expect them to be
particularly hard to fix but I’ve already spent almost 2 days on this
and having some tests run on Windows seems like a clear improvement
over running no tests on Windows :)

changelog_begin
changelog_end

* Remove todo

changelog_begin
changelog_end
2020-04-28 16:06:36 +02:00
Gary Verhaegen
54d6782be3
drop v from release titles (#5742)
This is a minor, cosmetic change. Note that all our references to
releases are based on tags, and do not depend on the release title. This
is evidenced by the fairly random titles we used to have before the
title was set by CI, see e.g.
[0.13.53](https://github.com/digital-asset/daml/releases/tag/v0.13.53).

CHANGELOG_BEGIN
CHANGELOG_END
2020-04-28 11:51:16 +02:00
Gary Verhaegen
7ceda5678a
run compatibility tests on macos (#5723)
This PR extends the existing Linux compatibility tests to run on macOS
too. Fixes #5692.

CHANGELOG_BEGIN
CHANGELOG_END

Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
2020-04-27 14:55:16 +02:00
Moritz Kiefer
0d1f21e4a2
Extend compatibility tests to test against HEAD (#5714)
fixes #5691

changelog_begin
changelog_end
2020-04-24 14:43:35 +02:00