Turns out bash is hard and I’m stupid :sadpanda:
We need to write this output to stderr, otherwise it ends up mixed into
the JSON output, which then is no longer valid JSON.
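A minimal sketch of the idea, with hypothetical function and message names (not the actual script):

```shell
# Illustrative sketch: diagnostics go to stderr so that stdout stays
# valid JSON. The function name and messages are made up.
log() {
  echo "$@" >&2
}

emit_json() {
  log "running perf comparison"   # written to stderr, kept out of the JSON
  echo '{"status": "ok"}'         # only this reaches stdout
}

emit_json
```

A consumer that does `emit_json | jq .` then only ever sees the JSON line.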
changelog_begin
changelog_end
We changed the patch to target more than one file, which made the
per-file `git checkout` insufficient for restoring the state; the
subsequent `git checkout` of `current` then fails with:
```
error: Your local changes to the following files would be overwritten by checkout:
stack-snapshot.yaml
```
A `git reset --hard` should make sure everything gets reset.
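A sketch of what that reset step could look like, wrapped in a function here for illustration (the real script may differ, and the `git clean` is an extra safety measure, not part of the described fix):

```shell
restore_clean_tree() {
  # Discard local modifications to all tracked files, not just the ones
  # a per-file checkout happens to know about.
  git reset --hard HEAD
  # Also drop untracked files a partially applied patch may have left behind.
  git clean -fd
}
```

After this, switching branches or commits no longer trips over leftover changes to files like `stack-snapshot.yaml`.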
changelog_begin
changelog_end
Today the [perf check failed], but we got no notification of it. I'm not
sure what's happening, as I can't reproduce any of it locally: not only
does the `bazel run` command work for me (despite the ghc-lib URL
returning a 404 when I try it manually), I also can't reproduce the fact
that Bash, on CI, doesn't fail on either the `bazel run` error or on the
next line, where `cat` tries to access a file that doesn't exist (for
which CI does print the error message).
This PR does two things:
- Add an explicit check that _should_ make Bash actually fail if this
  happens again in the future. It is not a great fix, but at least we'll
  know if it happens again (to the best of my knowledge, today was the
  first time we hit this).
- Amend the existing patch we apply on the baseline commit to use the
GCS-hosted ghc-lib packages.
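The explicit check in the first bullet could look roughly like this; the helper name, the `|| true` mirroring the swallowed failure, and the output-file argument are all illustrative, not the actual change:

```shell
set -euo pipefail

# Run a command and fail loudly if it did not produce a non-empty output
# file, even if the command's own exit status was somehow swallowed.
run_and_check() {
  local out_file="$1"; shift
  "$@" > "$out_file" || true   # mirrors the case where the failure was not propagated
  if ! [ -s "$out_file" ]; then
    echo "error: '$*' produced no output in $out_file" >&2
    return 1
  fi
}
```

This makes the failure mode explicit instead of relying on `set -e` semantics, which are notoriously subtle across pipelines, command substitutions, and conditional contexts.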
CHANGELOG_BEGIN
CHANGELOG_END
[perf check failed]: https://dev.azure.com/digitalasset/daml/_build/results?buildId=64395&view=results
This should be merged after #6080. This PR adds a patch (and
consequently updates the `ci/cron/perf/compare.sh` script) to apply the
same logical change as #6080 on top of the baseline commit, so our
performance comparison remains "apples to apples".
I am well aware that managing patches is not going to be a great way
forward. The rate of changes on the benchmark seems to be slow enough
that this is good enough for now, but should we change the benchmark
more often and/or want to add new benchmarks, a better approach would be
to handle the changes at the Scala level. That is:
- Create a "rest of the world" (world = Speedy, its compiler, and all of
the associated types) interface that benchmarks would depend on,
rather than depend directly on the rest of the codebase.
- Create two implementations of that interface, one that compiles
against the current state of the world, and one that compiles against
the baseline.
- Change the script to load the relevant implementation, and then run
  all the benchmarks as-is, with no patching necessary.
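The last step could be a small switch in the comparison script; a sketch, assuming hypothetical Bazel target labels for the two implementations of the interface:

```shell
# Map the requested world to the Bazel target that provides it.
# The target labels below are made up for illustration.
select_target() {
  case "$1" in
    current)  echo "//daml-lf/speedy-world:current_impl" ;;
    baseline) echo "//daml-lf/speedy-world:baseline_impl" ;;
    *) echo "unknown implementation: $1" >&2; return 1 ;;
  esac
}

# The benchmarks would then run unmodified against either world, e.g.:
#   bazel run "$(select_target baseline)" -- <benchmark args>
```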
CHANGELOG_BEGIN
CHANGELOG_END
This PR separates the "last known valid perf test" commit from the
"baseline speedy implementation" commit. For the perf measurement to
remain meaningful, the changes between those two commits must be benign,
say minor API adjustments.
This also adds a check on merging to master that tells Slack if the perf
test has changed and the `test_sha` file needs updating. The Slack
message is conditional on the current commit to avoid excessive noise.
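A sketch of that conditional notification, assuming a recorded-sha file like `test_sha`; the function name and file layout are illustrative:

```shell
# Decide whether to ping Slack: notify only when the perf test's current
# sha differs from the recorded one. Names are illustrative.
needs_notification() {
  local current_sha="$1" recorded_file="$2"
  [ "$current_sha" != "$(cat "$recorded_file" 2>/dev/null || true)" ]
}

# In CI this would gate the webhook call, roughly:
#   if needs_notification "$sha" ci/cron/perf/test_sha; then
#     curl -XPOST -d "..." "$SLACK_WEBHOOK"
#   fi
```

Keying the message off an actual sha comparison, rather than sending it on every merge, is what keeps the Slack channel quiet unless the `test_sha` file genuinely needs updating.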
CHANGELOG_BEGIN
CHANGELOG_END