Avoid the `damlc compile/package` commands (which we would like to deprecate) and replace them with plain `damlc build`, together with a post-build dar->dalf extraction step in the couple of places where we actually want the .dalf for testing.
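For the extraction step, a minimal sketch of what that could look like
(assuming a `.dar` is a zip archive containing the compiled `.dalf` files; the
script and its arguments are illustrative, not the actual Bazel glue):
```
# Sketch: pull the .dalf payloads out of a .dar for the tests that need them.
import sys
import zipfile

def extract_dalfs(dar_path, out_dir):
    with zipfile.ZipFile(dar_path) as dar:
        for name in dar.namelist():
            if name.endswith(".dalf"):
                dar.extract(name, out_dir)

if __name__ == "__main__":
    extract_dalfs(sys.argv[1], sys.argv[2])
```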
changelog_begin
changelog_end
Dependencies on other DAML projects are declared with the `dar_dict` attribute of the build rule. This attribute also declares the names by which the `.dar` files are known in the client project, corresponding to the references in the `daml.yaml` config.
The new rule is used to build and test the upgrade documentation example code.
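For illustration, a client project's BUILD file could look roughly like the
following (a Starlark sketch; the rule name `daml_build_test`, the
`project_dir` attribute, and the labels are assumptions for the example, only
`dar_dict` itself is taken from this change):
```
# Hypothetical usage sketch: dar_dict maps the Bazel label of each dependency
# .dar to the file name under which the client project's daml.yaml refers to it.
daml_build_test(
    name = "example-upgrade",
    project_dir = "docs/source/upgrade/example/carbon-upgrade",
    dar_dict = {
        "//docs:carbon-1.0.0.dar": "path/to/carbon-1.0.0.dar",
    },
)
```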
changelog_begin
changelog_end
Context
=======
After multiple discussions about our current release schedule and
process, we've come to the conclusion that we need to be able to make a
distinction between technical snapshots and marketing releases. In other
words, we need to be able to create a bundle for early adopters to test
without making it an officially-supported version, and without
necessarily implying everyone should go through the trouble of
upgrading. The underlying goal is to have less frequent but more stable
"official" releases.
This PR is a proposal for a new release process designed under the
following constraints:
- Reuse as much as possible of the existing infrastructure, to minimize
effort but also chances of disruptions.
- Have the ability to create "snapshot"/"nightly"/... releases that are
not meant for general public consumption, but can still be used by savvy
users without jumping through too many extra hoops (ideally just
swapping in a slightly-weirder version string).
- Have the ability to promote an existing snapshot release to "official"
release status, with as few changes as possible in-between, so we can be
confident that the official release is what we tested as a prerelease.
- Have as much as possible of the release pipeline shared between the two
types of releases, to avoid discovering non-transient problems while trying to
promote a snapshot to an official release.
- Triggering a release should still be done through a PR, so we can
keep the same approval process for SOC2 auditability.
The gist of this proposal is to replace the current `VERSION` file with
a `LATEST` file, which would have the following format:
```
ef5d32b7438e481de0235c5538aedab419682388 0.13.53-alpha.20200214.3025.ef5d32b7
```
This file would be maintained with a script to reduce manual labor in
producing the version string. Other than that, the process will be
largely the same, with releases triggered by changes to this `LATEST`
file and to the release notes files.
Version numbers
===============
Because one of the goals is to reduce the velocity of our published
version numbers, we need a different version scheme for our snapshot
releases. Fortunately, most version schemes have some support for that;
unfortunately, the SDK sits at the intersection of three different
version schemes that have made incompatible choices. Without going into
too much detail:
- Semantic versioning (which we chose as the version format for the SDK
version number) allows for "prerelease" version numbers as well as
"metadata"; an example of a complete version string would be
`1.2.3-nightly.201+server12.43`. The "main" part of the version string
always has to have 3 numbers separated by dots; the "prerelease"
(after the `-` but before the `+`) and the "metadata" (after the `+`)
parts are optional and, if present, must consist of one or more segments
separated by dots, where a segment can be either a number or an
alphanumeric string. In terms of ordering, metadata is irrelevant and
any version with a prerelease string is before the corresponding "main"
version string alone. Amongst prereleases, segments are compared in
order with purely numeric ones compared as numbers and mixed ones
compared lexicographically. So `1.2.3-1` sorts before `1.2.3-2`, and both
sort before `1.2.3` (see the sketch after this list).
- Maven version strings are any number of segments separated by a `.`, a
`-`, or a transition between a number and a letter. Version strings
are compared element-wise, with numeric segments being compared as
numbers. Alphabetic segments are treated specially if they happen to be
one of a handful of magic words (such as "alpha", "beta" or "snapshot"
for example) which count as "qualifiers"; a version string with a
qualifier is "before" its prefix (`1.2.3` is before `1.2.3-alpha.3`,
which is the same as `1.2.3-alpha3` or `1.2.3-alpha-3`), and there is a
special ordering amongst qualifiers. Other alphabetic segments are
compared alphabetically and count as being "after" their prefix
(`1.2.3-really-final-this-time` counts as being released after `1.2.3`).
- GHC package numbers consist of any number of numeric segments
separated by `.`, plus an optional (though deprecated) alphanumeric
"version tag" separated by a `-`. I could not find any official
documentation on ordering for the version tag; numeric segments are
compared as numbers.
- npm uses semantic versioning so that is covered already.
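To make the semver ordering rules concrete, here is a small illustration using
the third-party `semver` Python package (an assumption for the example; nothing
in the SDK depends on it):
```
# Illustration of the semver precedence rules described above.
import semver

# A prerelease sorts before the corresponding "main" version...
assert semver.compare("1.2.3-1", "1.2.3") < 0
# ...numeric prerelease segments are compared as numbers...
assert semver.compare("1.2.3-1", "1.2.3-2") < 0
# ...and build metadata is ignored for precedence.
assert semver.compare("1.2.3+server12.43", "1.2.3") == 0
```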
After much more investigation than I'd care to admit, I have come up
with the following compromise as the least-bad solution. First,
obviously, the version string for stable/marketing versions is going to
be "standard" semver, i.e. major.minor.patch, all numbers, which works,
and sorts as expected, for all three schemes. For snapshot releases, we
shall use the following (semver) format:
```
0.13.53-alpha.20200214.3025.ef5d32b7
```
where the components are, respectively (a sketch assembling them follows this list):
- `0.13.53`: the expected version string of the next "stable" release.
- `alpha`: a marker that hopefully scares people enough.
- `20200214`: the date of the release commit, which _MUST_ be on
master.
- `3025`: the number of commits in master up to the release commit
(included). Because we have a linear, append-only master branch, this
uniquely identifies the commit.
- `ef5d32b7`: the first 8 characters of the release commit sha. This is
not strictly speaking necessary, but makes it a lot more convenient to
identify the commit.
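Putting the components together, the version string and the corresponding
`LATEST` line could be derived from git along these lines (a sketch, not the
actual script; it assumes the next stable version is known and that the commit
is on master):
```
# Sketch: compute the snapshot version string and LATEST line for a commit.
import subprocess

def git(*args):
    return subprocess.run(["git"] + list(args), check=True,
                          capture_output=True, text=True).stdout.strip()

def snapshot_version(commit, next_stable="0.13.53"):
    date = git("show", "-s", "--format=%cd", "--date=format:%Y%m%d", commit)
    count = git("rev-list", "--count", commit)  # commits up to and including it
    sha8 = git("rev-parse", "--short=8", commit)
    return "{}-alpha.{}.{}.{}".format(next_stable, date, count, sha8)

def latest_line(commit):
    return "{} {}".format(git("rev-parse", commit), snapshot_version(commit))
```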
The main downsides of this format are:
1. It is not a valid format for GHC packages. We do not publish GHC
packages from the SDK (so far we have instead opted to release our
Haskell code as separate packages entirely), so this should not be an
issue. However, our SDK version currently leaks to `ghc-pkg` as the
version string for the stdlib (and prim) packages. This PR addresses
that by tweaking the compiler to remove the offending bits, so `ghc-pkg`
would see the above version number as `0.13.53.20200214.3025`, which
should be enough to uniquely identify it. Note that, as far as I could
find out, this number would never be exposed to users.
2. It is rather long, which I think is good from a human perspective as
it makes it more scary. However, I have been told that this may be
long enough to cause issues on Windows by pushing us past the max path
size limitation of that "OS". I suggest we try it and see what
happens.
The upsides are:
- It clearly indicates it is an unstable release (`alpha`).
- It clearly indicates how old it is, by including the date.
- To humans, it is immediately obvious which version is "later" even if
they have the same date, allowing us to release same-day patches if
needed. (Note: that is, commits that were made on the same day; the
release date itself is irrelevant here.)
- It contains the git sha so the commit built for that release is
immediately obvious.
- It sorts correctly under all schemes (modulo the modification for
GHC).
Alternatives I considered:
- Pander to GHC: `0.13.53-alpha-20200214-3025-ef5d32b7`. This format would
be accepted by all schemes, but would not sort as expected under semantic
versioning (though Maven would be fine). I have no idea how it would sort
under GHC.
- Not having any non-numeric component, e.g. `0.13.53.20200214.3025`.
This is not valid semantic versioning and is therefore rejected by
npm.
- Not having detailed info: just go with `0.13.53-snapshot`. This is
what is generally done in the Java world, but we then lose track of what
version is actually in use and I'm concerned about bug reports. This
would also not let us publish to the main Maven repo (at least not more
than once), as artifacts there are supposed to be immutable.
- Not having a qualifier: `0.13.53-3025` would be acceptable to all three
version formats. However, it would not clearly indicate to humans that
it is not meant as a stable version, and would sort differently under
semantic versioning (which counts it as a prerelease, i.e. before
`0.13.53`) than under Maven (which counts it as a patch, so after
`0.13.53`).
- Just counting releases: `0.13.53-alpha.1`, where we just count the
number of prereleases in-between `0.13.52` and the next. This is
currently the fallback plan if Windows path length causes issues. It
would be less convenient to map releases to commits, but it could still
be done via querying the history of the `LATEST` file.
Release notes
=============
> Note: We have decided not to have release notes for snapshot releases.
Release notes are a bit tricky. Because we want the ability to make
snapshot releases, then later on promote them to stable releases, it
follows that we want to build commits from the past. However, if we
decide post-hoc that a commit is actually a good candidate for a
release, there is no way that commit can have the appropriate release
notes: it cannot know what version number it's getting, and, moreover,
we now track changes in commit messages. And I do not think anyone wants
to go back to the release notes file being a merge bottleneck.
But release notes need to be published to the releases blog upon
releasing a stable version, and the docs website needs to be updated and
include them.
The only sensible solution here is to pick up the release notes as of
the commit that triggers the release. As the docs cron runs
asynchronously, this means walking down the git history to find the
relevant commit.
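For example, the walk could use git's pickaxe to locate the commit that
introduced a given version into `LATEST` (a sketch of the idea, not the actual
docs-cron code):
```
# Sketch: find the commit that added a released version to LATEST, i.e. the
# commit whose tree holds the matching release notes.
import subprocess

def release_commit_for(version):
    # -S lists the commits that changed the number of occurrences of the
    # string; the oldest such commit is the one that introduced the version.
    shas = subprocess.run(
        ["git", "log", "--format=%H", "-S" + version, "--", "LATEST"],
        check=True, capture_output=True, text=True).stdout.split()
    return shas[-1] if shas else None
```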
> Note: We could probably do away with the asynchronicity at this point.
> It was originally included to cover for the possibility of a release
> failing. If we are releasing commits from the past after they have been
> tested, this should not be an issue anymore. If the docs generation were
> part of the synchronous release step, it would have direct access to the
> correct release notes without having to walk down the git history.
>
> However, I think it is more prudent to keep this change as a future step,
> after we're confident the new release scheme does indeed produce much more
> reliable "stable" releases.
New release process
===================
Just like releases are currently controlled mostly by detecting
changes to the `VERSION` file, the new process will be controlled by
detecting changes to the `LATEST` file. The format of that file will
include both the version string and the corresponding SHA.
Upon detecting a change to the `LATEST` file, CI will run the entire
release process, just like it does now with the `VERSION` file. The main
differences are:
1. Before running the release step, CI will check out the commit
specified in the `LATEST` file. This requires separating the release
step from the build step, which in my opinion is cleaner anyway.
2. The `//:VERSION` Bazel target is replaced by a repository rule
that gets the version to build from an environment variable, with a
default of `0.0.0` to remain consistent with the current `daml-head`
behaviour (see the sketch after this list).
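Such a repository rule could look roughly as follows (a Starlark sketch with
assumed names, in particular the environment variable, not the actual rule):
```
# Sketch: expose the SDK version as a file in an external repository, read
# from an environment variable and defaulting to 0.0.0.
def _sdk_version_impl(repository_ctx):
    version = repository_ctx.os.environ.get("DAML_SDK_RELEASE_VERSION", "0.0.0")
    repository_ctx.file("VERSION", content = version + "\n", executable = False)
    repository_ctx.file("BUILD", content = 'exports_files(["VERSION"])\n', executable = False)

sdk_version = repository_rule(
    implementation = _sdk_version_impl,
    environ = ["DAML_SDK_RELEASE_VERSION"],
)
```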
Some of the manual steps will need to be skipped for a snapshot release.
See amended `release/RELEASE.md` in this commit for details.
The main caveat of this approach is that the official release will be a
different binary from the corresponding snapshot. It will have been
built from the same source, but with a different version string. This is
somewhat mitigated by Bazel caching, meaning any build step that does
not depend on the version string should use the cache and produce
identical results. I do not think this can be avoided when our artifact
includes its own version number.
I must note, though, that while going through the changes required after
removing the `VERSION` file, I have been quite surprised at the sheer number of
things that actually depend on the SDK version number. I believe we should
look into reducing that over time.
CHANGELOG_BEGIN
CHANGELOG_END
* bazel: 0.28.1 --> 1.1.0
* bazel-watcher sha256
* Fix missing line in patch
* proto_source_root --> strip_import_prefix
See https://github.com/bazelbuild/bazel/issues/7153 for details.
* Update rules_nixpkgs
Required to avoid errors of the form
```
ERROR: An error occurred during the fetch of repository 'node_nix':
parameter 'sep' may not be specified by name, for call to method split(sep, maxsplit = None) of 'string'
```
and
```
ERROR: An error occurred during the fetch of repository 'node_nix':
Traceback (most recent call last):
File "/private/var/tmp/_bazel_runner/17d2b3954f1c6dcf5414d5453467df9a/external/io_tweag_rules_nixpkgs/nixpkgs/nixpkgs.bzl", line 149
_execute_or_fail(repository_ctx, <3 more arguments>)
File "/private/var/tmp/_bazel_runner/17d2b3954f1c6dcf5414d5453467df9a/external/io_tweag_rules_nixpkgs/nixpkgs/nixpkgs.bzl", line 318, in _execute_or_fail
fail(<1 more arguments>)
Cannot build Nix attribute 'nodejs'.
Command: [/Users/runner/.nix-profile/bin/nix-build, /private/var/tmp/_bazel_runner/17d2b3954f1c6dcf5414d5453467df9a/external/node_nix/nix/bazel.nix, "-A", "nodejs", "--out-link", "bazel-support/nix-out-link", "-I", "nixpkgs=/private/var/tmp/_bazel_runner/17d2b3954f1c6dcf5414d5453467df9a/external/nixpkgs/nixpkgs"]
Return code: 1
Error output:
src/main/tools/process-tools.cc:173: "setitimer": Invalid argument
```
* Update rules_scala
* .proto has been removed, use [ProtoInfo] instead
See
https://docs.bazel.build/versions/1.1.0/be/protocol-buffer.html#proto_library
* python3_nix add nix_file attribute
To avoid the following error
```
ERROR: /home/aj/tweag.io/da/da-bazel-1.1/BUILD:66:1: //:nix_python3_runtime depends on @python3_nix//:bin/python in repository @python3_nix which failed to fetch. no such package '@python3_nix//': Traceback (most recent call last):
File "/home/aj/.cache/bazel/_bazel_aj/5f825ad28f8e070f999ba37395e46ee5/external/io_tweag_rules_nixpkgs/nixpkgs/nixpkgs.bzl", line 149
_execute_or_fail(repository_ctx, <3 more arguments>)
File "/home/aj/.cache/bazel/_bazel_aj/5f825ad28f8e070f999ba37395e46ee5/external/io_tweag_rules_nixpkgs/nixpkgs/nixpkgs.bzl", line 318, in _execute_or_fail
fail(<1 more arguments>)
Cannot build Nix attribute 'python3'.
Command: [/home/aj/.nix-profile/bin/nix-build, "-E", "import <nixpkgs> { config = {}; overlays = []; }", "-A", "python3", "--out-link", "bazel-support/nix-out-link", "-I", "nixpkgs=/home/aj/.cache/bazel/_bazel_aj/5f825ad28f8e070f999ba37395e46ee5/external/nixpkgs/nixpkgs"]
Return code: 1
Error output:
error: anonymous function at /home/aj/.cache/bazel/_bazel_aj/5f825ad28f8e070f999ba37395e46ee5/external/nixpkgs/nixpkgs.nix:3:1 called with unexpected argument 'config', at (string):1:1
```
* rules_haskell unnamed string.split(_, maxsplit = _)
The keyword argument may no longer be named.
* string.replace(_, _, maxsplit = _) may not be named
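Concretely, the change amounts to dropping the keyword (a Starlark sketch, not
the actual patch):
```
# Rejected by Bazel 1.1.0's Starlark:
#   components = "1.1.0".split(".", maxsplit = 1)
# Accepted:
components = "1.1.0".split(".", 1)
# The same applies to string.replace, whose last argument may not be named.
```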
* Move proto sources from deps to data
Fixes
```
ERROR: /home/aj/tweag.io/da/da-bazel-1.1/daml-lf/archive/BUILD.bazel:150:1: in deps attribute of scala_test rule //daml-lf/archive:daml_lf_archive_reader_tests_test_suite_src_test_scala_com_digitalasset_daml_lf_archive_DecodeV1Spec.scala: '//daml-lf/archive:daml_lf_1.6_archive_proto_srcs' does not have mandatory providers: 'JavaInfo'. Since this rule was created by the macro 'da_scala_test_suite', the error might have been caused by the macro implementation
```
* Define sha256 for haskell_ghc__paths
Bazel 1.1.0 fails on missing hashes.
* Disable --incompatible_windows_native_test_wrapper
* //compiler/daml-extension don't modify sources
Modifying sources in-place can cause issues on Windows, where build
actions are not sandboxed and changes on sources can affect other build
steps.
* bazel-genfiles --> bazel-bin
The bazel-genfiles symlink has been removed since Bazel 1.0.
See https://github.com/bazelbuild/bazel/issues/8651
* Mark dev_env_tool repository rule as configure
See
https://docs.bazel.build/versions/1.1.0/skylark/lib/globals.html#repository_rule
* Move data deps into data attribute
* Mark dev_env_tool as local = True
* Manually fetch @makensis_dev_env
* Implement AnyTemplate in DAML
* Fix doctest path
* Shuffle around CPP
* Do not hide anything
* Hide it again
* Clean build
* Enable caching again
* debug windows crap
* more tests
* reinstantiate full tests
* language: check dflags for errors
We add a check when we build the dflags for cases that will lead to a
failed build and emit a clearer error message. Currently this only
includes a check to see whether the currently installed unit id is also
imported as a package from the package database.
* exclude ghc-prim from check
* exclude code generation from dflag check
* add an internal option to turn dflags check off
There is lots of room for improvement here but I think this is a good
first step. The 3 main things that could be improved imho are:
- Rewrite source locations to point to the original file rather than
the generated module.
- Provide some way to declare things like imports or, more generally,
setup code that is added to the generated module.
- Prettier/more helpful output during a run, e.g., print the list of
successful tests.
* Update rules_haskell and static GHC
Remove patches that have been upstreamed or are no longer required.
Update still required patches to match the new rules_haskell version.
Previously we patched rules_haskell to coerce GHC into using static
Haskell libraries in most places. In particular we moved hs-libraries
entries into extra-libraries entries in the package configuration files.
A much cleaner approach is to compile GHC with a static RTS; GHC will
then by itself choose to load static Haskell libraries.
* Remove haskell_cc_import
* da-hs-daml-cli -> daml-cli
* da-hs-damlc-app -> damlc-app
* windows: root build
* windows: fixed haskell bindings tests
* windows: disable client_server_test test
* windows: marking daml_test flaky due to #1907
* windows: removing da-hs-damlc-app run from build.ps1
* windows: disable hie-core alias of currently disabled target
* Add test case for damlc test-files
* Separate damlc test-files from damlc test
* Fix rules_daml/daml.bzl daml_test
* damlc test-files --> damlc test --files
The project options are still relevant for the --files case, as it may
be necessary to change into the project root directory in order to
locate the project package database.
* Removing unused/broken daml.bzl rules, adding Scala codegen rule
Scala codegen rule: dar_to_scala follows the same approach
as dar_to_java with a few differences:
- dar_to_scala supports multiple dars as an input
- dar_to_scala does not try to compile the generated scala (separation
of responsibilities)
* Using dar_to_scala to compile quickstart-scala example
* Fixing formatting
* Add dependency to examples-quickstart-scala-bin
So if the example does not compile, it cannot be published.
* Fixing the path to the jar executable
* Changing scala codegen rule to rely on zipper instead of jar
JDK's jar creates srcjars with timestamped files; zipper does not timestamp
files. This means we can create reproducible/deterministic/cacheable srcjars.
* Addressing code review comments, wrong var name
* Accept multiple files in damlc test
Since damlc test also runs tests in transitive dependencies, this can
be significantly faster than running "damlc test" individually on
a set of files, as you would otherwise end up recompiling and rerunning
tests multiple times if those files depend on each other.
For //docs:daml-ref-daml-test this is roughly a 10x improvement, from
~70s to ~7s.
* Add buildifier targets.
The tool allows checking and formatting BUILD files in the repo.
To check if files are well formatted, run:
bazel run //:buildifier
To fix badly-formatted files run:
bazel run //:buildifier-fix
* Cleanup dade-copyright-headers formatting.
* Fix dade-copyright-headers on files with just the copyright.
* Run buildifier automatically on CI via 'fmt.sh'.
* Reformat all BUILD files with buildifier.
Excludes autogenerated Bazel files.