- Move deployment tests (deployTest, fetchTest) out of integration-tests.
- Use DA.Test.Sandbox where appropriate.
- Split out code for useful test patterns, e.g. calling commands quietly, getFreePort.
changelog_begin
changelog_end
Context
=======
After multiple discussions about our current release schedule and
process, we've come to the conclusion that we need to be able to make a
distinction between technical snapshots and marketing releases. In other
words, we need to be able to create a bundle for early adopters to test
without making it an officially-supported version, and without
necessarily implying everyone should go through the trouble of
upgrading. The underlying goal is to have less frequent but more stable
"official" releases.
This PR is a proposal for a new release process designed under the
following constraints:
- Reuse as much as possible of the existing infrastructure, to minimize
effort but also chances of disruptions.
- Have the ability to create "snapshot"/"nightly"/... releases that are
not meant for general public consumption, but can still be used by savvy
users without jumping through too many extra hoops (ideally just
swapping in a slightly-weirder version string).
- Have the ability to promote an existing snapshot release to "official"
release status, with as few changes as possible in-between, so we can be
confident that the official release is what we tested as a prerelease.
- Have as much of the release pipeline as possible shared between the two
types of releases, to avoid discovering non-transient problems while trying to
promote a snapshot to an official release.
- Triggering a release should still be done through a PR, so we can
keep the same approval process for SOC2 auditability.
The gist of this proposal is to replace the current `VERSION` file with
a `LATEST` file, which would have the following format:
```
ef5d32b7438e481de0235c5538aedab419682388 0.13.53-alpha.20200214.3025.ef5d32b7
```
This file would be maintained with a script to reduce manual labor in
producing the version string. Other than that, the process will be
largely the same, with releases triggered by changes to this `LATEST`
and the release notes files.
Version numbers
===============
Because one of the goals is to reduce the velocity of our published
version numbers, we need a different version scheme for our snapshot
releases. Fortunately, most version schemes have some support for that;
unfortunately, the SDK sits at the intersection of three different
version schemes that have made incompatible choices. Without going into
too much detail:
- Semantic versioning (which we chose as the version format for the SDK
version number) allows for "prerelease" version numbers as well as
"metadata"; an example of a complete version string would be
`1.2.3-nightly.201+server12.43`. The "main" part of the version string
always has to have 3 numbers separated by dots; the "prerelease"
(after the `-` but before the `+`) and the "metadata" (after the `+`)
parts are optional and, if present, must consist of one or more segments
separated by dots, where a segment can be either a number or an
alphanumeric string. In terms of ordering, metadata is irrelevant and
any version with a prerelease string is before the corresponding "main"
version string alone. Amongst prereleases, segments are compared in
order with purely numeric ones compared as numbers and mixed ones
compared lexicographically. So `1.2.3` is more recent than `1.2.3-1`,
which is itself less recent than `1.2.3-2` (see the sketch after this list).
- Maven version strings are any number of segments separated by a `.`, a
`-`, or a transition between a number and a letter. Version strings
are compared element-wise, with numeric segments being compared as
numbers. Alphabetic segments are treated specially if they happen to be
one of a handful of magic words (such as "alpha", "beta" or "snapshot"
for example) which count as "qualifiers"; a version string with a
qualifier is "before" its prefix (`1.2.3-alpha.3`, which is the same as
`1.2.3-alpha3` or `1.2.3-alpha-3`, sorts before `1.2.3`), and there is a
special ordering amongst qualifiers. Other alphabetic segments are
compared alphabetically and count as being "after" their prefix
(`1.2.3-really-final-this-time` counts as being released after `1.2.3`).
- GHC package numbers consist of any number of numeric segments
separated by `.`, plus an optional (though deprecated) alphanumeric
"version tag" separated by a `-`. I could not find any official
documentation on ordering for the version tag; numeric segments are
compared as numbers.
- npm uses semantic versioning so that is covered already.
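To make the semantic versioning ordering rules above concrete, here is a
minimal sketch of the comparison logic in Python (an illustration of the
rules only, not a substitute for a real semver implementation):
```
# Sketch of semver precedence for the cases discussed above: metadata
# (after '+') is ignored, a prerelease sorts before the corresponding
# release, and prerelease segments compare numerically when both are
# numbers, lexicographically otherwise.
def parse(version):
    version = version.split("+", 1)[0]          # drop metadata
    main, _, pre = version.partition("-")
    return [int(x) for x in main.split(".")], pre.split(".") if pre else []

def precedes(a, b):
    """True if version string a sorts strictly before version string b."""
    main_a, pre_a = parse(a)
    main_b, pre_b = parse(b)
    if main_a != main_b:
        return main_a < main_b
    if bool(pre_a) != bool(pre_b):
        return bool(pre_a)                      # prerelease < release
    for seg_a, seg_b in zip(pre_a, pre_b):
        if seg_a == seg_b:
            continue
        if seg_a.isdigit() and seg_b.isdigit():
            return int(seg_a) < int(seg_b)
        if seg_a.isdigit() != seg_b.isdigit():
            return seg_a.isdigit()              # numeric < alphanumeric
        return seg_a < seg_b
    return len(pre_a) < len(pre_b)

assert precedes("1.2.3-1", "1.2.3-2")
assert precedes("1.2.3-2", "1.2.3")
assert precedes("0.13.53-alpha.20200214.3025.ef5d32b7", "0.13.53")
```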
After much more investigation than I'd care to admit, I have come up
with the following compromise as the least-bad solution. First,
obviously, the version string for stable/marketing versions is going to
be "standard" semver, i.e. major.minor.patch, all numbers, which works,
and sorts as expected, for all three schemes. For snapshot releases, we
shall use the following (semver) format:
```
0.13.53-alpha.20200214.3025.ef5d32b7
```
where the components are, respectively:
- `0.13.53`: the expected version string of the next "stable" release.
- `alpha`: a marker that hopefully scares people enough.
- `20200214`: the date of the release commit, which _MUST_ be on
master.
- `3025`: the number of commits in master up to the release commit
(inclusive). Because we have a linear, append-only master branch, this
uniquely identifies the commit.
- `ef5d32b7`: the first 8 characters of the release commit sha. This is
not strictly speaking necessary, but makes it a lot more convenient to
identify the commit.
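A minimal sketch, in Python, of how that version string could be assembled
from git metadata (the helper below is hypothetical and only illustrates
the components listed above; the actual script may differ):
```
# Hypothetical helper: build the snapshot version string for a release
# commit from the components described above. Assumes the commit is on
# master and the caller supplies the expected next stable version.
import subprocess

def git(*args):
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

def snapshot_version(next_stable, commit="HEAD"):
    sha = git("rev-parse", commit)
    date = git("show", "-s", "--format=%cd", "--date=format:%Y%m%d", sha)
    count = git("rev-list", "--count", sha)  # commits up to and including sha
    return f"{next_stable}-alpha.{date}.{count}.{sha[:8]}"

# The corresponding LATEST line would then be, e.g.:
#   f"{git('rev-parse', 'HEAD')} {snapshot_version('0.13.53')}"
```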
The main downsides of this format are:
1. It is not a valid format for GHC packages. We do not publish GHC
packages from the SDK (so far we have instead opted to release our
Haskell code as separate packages entirely), so this should not be an
issue. However, our SDK version currently leaks to `ghc-pkg` as the
version string for the stdlib (and prim) packages. This PR addresses
that by tweaking the compiler to remove the offending bits, so `ghc-pkg`
would see the above version number as `0.13.53.20200214.3025` (see the
sketch after this list), which
should be enough to uniquely identify it. Note that, as far as I could
find out, this number would never be exposed to users.
2. It is rather long, which I think is good from a human perspective as
it makes it more scary. However, I have been told that this may be
long enough to cause issues on Windows by pushing us past the max path
size limitation of that "OS". I suggest we try it and see what
happens.
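To illustrate downside 1, the GHC-visible version boils down to keeping
only the purely numeric components of the SDK version string. The Python
function below is a sketch of that mapping, not the actual compiler change:
```
# Sketch: strip non-numeric components so the snapshot version becomes a
# valid GHC package version, e.g.
# "0.13.53-alpha.20200214.3025.ef5d32b7" -> "0.13.53.20200214.3025".
import re

def ghc_pkg_version(sdk_version):
    segments = re.split(r"[.+-]", sdk_version)
    return ".".join(s for s in segments if s.isdigit())

assert ghc_pkg_version("0.13.53-alpha.20200214.3025.ef5d32b7") == \
    "0.13.53.20200214.3025"
assert ghc_pkg_version("0.13.53") == "0.13.53"  # stable versions unchanged
```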
The upsides are:
- It clearly indicates it is an unstable release (`alpha`).
- It clearly indicates how old it is, by including the date.
- To humans, it is immediately obvious which version is "later" even if
they have the same date, allowing us to release same-day patches if
needed. (Note: that is, commits that were made on the same day; the
release date itself is irrelevant here.)
- It contains the git sha so the commit built for that release is
immediately obvious.
- It sorts correctly under all schemes (modulo the modification for
GHC).
Alternatives I considered:
- Pander to GHC: `0.13.53-alpha-20200214-3025-ef5d32b7`. This format would
be accepted by all schemes, but would not sort as expected under semantic
versioning (though Maven would be fine). I have no idea how it would sort
under GHC.
- Not having any non-numeric component, e.g. `0.13.53.20200214.3025`.
This is not valid semantic versioning and is therefore rejected by
npm.
- Not having detailed info: just go with `0.13.53-snapshot`. This is
what is generally done in the Java world, but we then lose track of what
version is actually in use and I'm concerned about bug reports. This
would also not let us publish to the main Maven repo (at least not more
than once), as artifacts there are supposed to be immutable.
- Not having a qualifier: `0.13.53-3025` would be acceptable to all three
version formats. However, it would not clearly indicate to humans that
it is not meant as a stable version, and would sort differently under
semantic versioning (which counts it as a prerelease, i.e. before
`0.13.53`) than under Maven (which counts it as a patch, so after
`0.13.53`).
- Just counting releases: `0.13.53-alpha.1`, where we just count the
number of prereleases in-between `0.13.52` and the next. This is
currently the fallback plan if Windows path length causes issues. It
would be less convenient to map releases to commits, but it could still
be done via querying the history of the `LATEST` file.
Release notes
=============
> Note: We have decided not to have release notes for snapshot releases.
Release notes are a bit tricky. Because we want the ability to make
snapshot releases, then later on promote them to stable releases, it
follows that we want to build commits from the past. However, if we
decide post-hoc that a commit is actually a good candidate for a
release, there is no way that commit can have the appropriate release
notes: it cannot know what version number it's getting, and, moreover,
we now track changes in commit messages. And I do not think anyone wants
to go back to the release notes file being a merge bottleneck.
But release notes need to be published to the releases blog upon
releasing a stable version, and the docs website needs to be updated and
include them.
The only sensible solution here is to pick up the release notes as of
the commit that triggers the release. As the docs cron runs
asynchronously, this means walking down the git history to find the
relevant commit.
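As a rough sketch of what that walk could look like (the release-notes
path and the exact lookup below are assumptions for illustration, not the
final implementation):
```
# Hypothetical sketch: given a released version, find the commit that set
# LATEST to that version and read the release notes as of that commit.
import subprocess

def git(*args):
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout

def release_commit(version, latest_file="LATEST"):
    # Walk the commits that touched LATEST, newest first, and pick the
    # first one whose LATEST content mentions the requested version.
    for sha in git("log", "--format=%H", "--", latest_file).split():
        if version in git("show", f"{sha}:{latest_file}"):
            return sha
    raise ValueError(f"no release commit found for {version}")

def release_notes(version, notes_path="docs/release-notes.rst"):
    # notes_path is a placeholder; use wherever the release notes live.
    return git("show", f"{release_commit(version)}:{notes_path}")
```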
> Note: We could probably do away with the asynchronicity at this point.
> It was originally included to cover for the possibility of a release
> failing. If we are releasing commits from the past after they have been
> tested, this should not be an issue anymore. If the docs generation were
> part of the synchronous release step, it would have direct access to the
> correct release notes without having to walk down the git history.
>
> However, I think it is more prudent to keep this change as a future step,
> after we're confident the new release scheme does indeed produce much more
> reliable "stable" releases.
New release process
===================
Just like releases are currently controlled mostly by detecting
changes to the `VERSION` file, the new process will be controlled by
detecting changes to the `LATEST` file. The format of that file will
include both the version string and the corresponding SHA.
Upon detecting a change to the `LATEST` file, CI will run the entire
release process, just like it does now with the `VERSION` file. The main
differences are:
1. Before running the release step, CI will checkout the commit
specified in the `LATEST` file. This requires separating the release
step from the build step, which in my opinion is cleaner anyway.
2. The `//:VERSION` Bazel target is replaced by a repository rule
that gets the version to build from an environment variable, with a
default of `0.0.0` to remain consistent with the current `daml-head`
behaviour.
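To make the two differences above concrete, here is a minimal sketch of
what the release step's entry point could do (the environment variable
name and the Bazel target are assumptions for illustration):
```
# Hypothetical sketch of the CI-side release step: read LATEST, check out
# the commit it points at, and expose the version to the build through an
# environment variable (unset means the 0.0.0 default, as for daml-head).
import os
import subprocess

def run(*args, **kwargs):
    subprocess.run(args, check=True, **kwargs)

def release_from_latest(latest_file="LATEST"):
    with open(latest_file) as f:
        sha, version = f.read().split()
    run("git", "checkout", sha)
    env = dict(os.environ, DAML_SDK_RELEASE_VERSION=version)  # assumed name
    run("bazel", "build", "//release:sdk-release-tarball", env=env)  # placeholder target
    return sha, version
```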
Some of the manual steps will need to be skipped for a snapshot release.
See amended `release/RELEASE.md` in this commit for details.
The main caveat of this approach is that the official release will be a
different binary from the corresponding snapshot. It will have been
built from the same source, but with a different version string. This is
somewhat mitigated by Bazel caching, meaning any build step that does
not depend on the version string should use the cache and produce
identical results. I do not think this can be avoided when our artifact
includes its own version number.
I must note, though, that while going through the changes required after
removing the `VERSION` file, I have been quite surprised at the sheer number of
things that actually depend on the SDK version number. I believe we should
look into reducing that over time.
CHANGELOG_BEGIN
CHANGELOG_END
* Refactor handling of package names and versions
This is a preparatory refactoring for propagating
package metadata into DAML-LF. There are no functional changes in here.
Primarily the changes consist of 3 things:
1. In options, we split the `optMbPackageName` field which previously
contained the unit id into `optMbPackageName` and
`optMbPackageVersion`.
2. We use newtypes for names and versions and try to keep them pretty
much everywhere (the only place missing is `splitUnitId`, I’ll do that
separately).
3. We use `UnitId` where we want `name-version`.
As was probably to be expected, this surfaced some minor issues. They
are pretty much exclusively in debugging or “internal” commands so
I’ve mostly just added notes/todos.
changelog_begin
changelog_end
* cry about applicativedo
* daml2ts: Isolate tests better
Currently, the `sh_test` for `daml2ts` copies too many files into the
temporary directory where the test is run. This can cause issues with
stale files from development experiments being copied into the test.
This PR addresses the issue by excluding some directories which only
contain generated files.
CHANGELOG_BEGIN
CHANGELOG_END
* Exclude `node_modules` from buildifier check
Co-authored-by: Andreas Herrmann <andreash87@gmx.ch>
* CHANGELOG_BEGIN
Type-check type synonyms.
CHANGELOG_END
* placate HLint
* comments
* Add an example that requires the check in kindOf
* check types containing syn-apps are well formed even when there is no expression of that type
* show type mismatch error after synonyms are expanded
* typeOf calls expandTypeSynonyms; track vars bound by TForall during expansion
* test interaction of syn-expansion and free-vars; add one bigger testcase
* extend bigger example with pointed typeclass, having functor as a super class
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* daml assistant expected auth token in Bearer format
* Daml assistant does no validation of the auth token before passing it on to the ledger.
* clarify code with newtype Token
* MVP for a daml2ts codegen
This PR adds an MVP for a codegen for TypeScript.
Given a DAR, daml2ts replicates the structure of the serializable type
definitions in it as TypeScript type definitions following our JSON
representation of DAML-LF types. It also adds decoders for all these types,
which can be used to check whether an arbitrary JSON value has the given type.
Finally, daml2ts also produces one JavaScript object for each template, which
reflects the type information of that template.
All produced objects implement some interfaces defined in a TypeScript
library currently called `@digitalasset/daml-json-types`. This library is not
yet uploaded to NPM but rather included in the `tests/ts/daml-json-types`
directory. This library also contains the JSON decoders for all of DAML-LF's
builtin types.
There are quite a few limitations right now. Most notably, variant and enum
types are not properly typed right now but rather gradually "typed" as
`unknown`. We also don't support nested `Optional`s, the `Numeric` type or
sum-of-product types in DAML. These issues are tracked in #3518.
There is currently one test. It takes a very simple DAML model, generates
the TypeScript for it and checks that it compiles and contains no linter
warnings/errors. Proper integration tests against the JSON API will follow.
* Address @cocreature's comments
* Make test work on Windows
* Remove git-revision from dependency graph
The navigator binary depended on the current git-revision. The issue is
that any target/test depending on the git-revision will have to be
rebuilt/rerun on every commit.
The navigator package itself was careful to avoid unnecessary
rebuilds/reruns. However, the SDK release tarball depends on the
navigator binary and thereby on the git-revision. The daml-assistant
integration tests, which depend on the SDK release tarball, therefore
had to be rerun on every commit.
This change removes the git-revision from the navigator altogether. For
issue reporting the SDK version is still available.
* Remove unused git-revision and workspace_status
* language: cross sdk dalf/dar imports
The final piece for cross sdk imports. With this PR we can import the
data types of packages and dalfs that were created with different sdks.
This is done by generating interface files from dalfs and an 'instances'
package that contains the template instance definitions of template data
types. The instances themselves are defined via the `external` keyword,
which is inlined to proper daml-lf instance definitions given in the
respective dalf package.
We test that cross sdk imports work by importing the `simple-dalf` in
the daml-assistant integration tests and running a scenario.
* Update bazel-common to fix javadoc issues
Specifically, to fix the following error
```
ERROR: /home/aj/tweag.io/da/da-bazel-1.1/ledger-api/rs-grpc-bridge/BUILD.bazel:7:1: in javadoc_library rule //ledger-api/rs-grpc-bridge:rs-grpc-bridge_javadoc:
Traceback (most recent call last):
File "/home/aj/tweag.io/da/da-bazel-1.1/ledger-api/rs-grpc-bridge/BUILD.bazel", line 7
javadoc_library(name = 'rs-grpc-bridge_javadoc')
File "/home/aj/.cache/bazel/_bazel_aj/5f825ad28f8e070f999ba37395e46ee5/external/com_github_google_bazel_common/tools/javadoc/javadoc.bzl", line 27, in _javadoc_library
dep.java.transitive_deps
object of type 'JavaSkylarkApiProvider' has no field 'transitive_deps'
```
* Define Maven deps using rules_jvm_external
* Pin artifacts
* Remove bazel-deps generated targets
* Remove bazel-deps
* Switch to rules_jvm_external targets
* update bazel documentation
* pom_file: There are no more bazel-deps targets
* BAZEL-JVM.md `maven_install` typo
* Rename hie-core to ghcide
The name `hie-core` has caused a lot of confusion as to how we relate
to haskell-ide-engine so changing it should hopefully help with that.
I also think that ghcide is still a good name once we hopefully
integrate with haskell-ide-engine more closely.
The name ghcide seems to have a reasonable amount of support on
Twitter https://twitter.com/ndm_haskell/status/1170681262987710464
which is of course the only good way to come up with names.
* Add a readme that points people to the new directory.
* Fix bogus replacements
* Use a proper link
* links are hard
* Bazel: 0.24.0 -> 0.27.0
* Update rules_haskell for Bazel 0.27 compatibility
* Update bazel-deps and bazel-watcher
* Windows escape JVM flags
* load commands at top of .bzl file
Bazel 0.27 no longer allows load commands that are not at the beginning
of the file.
* Update Bazel rules
* subpackage boundary
* native is not defined in BUILD files
* yarn: @bazel/hide-bazel-files
Seems to be required since latest rules_nodejs version. Otherwise, yarn
fails with errors about existing BUILD or BUILD.bazel files.
* grpc-java plugin visibility
* Update fat_cc_library
* Nix Python3 toolchain
* Iteration over depset
* dev_env_package: Create symlinks one level deeper
To prevent symlinking the BUILD file as well. The nested BUILD file
confuses Bazel as of 0.27 and rules_nodejs cannot find the node
executable anymore.
* Update rules_nodejs
* Add managed_directories for node_modules
* hie-bios: Extract bazel-genfiles from bazel info
Bazel 0.27 changed the genfiles location which breaks the hie-core test
on macOS.
* update cc_wrapper to Bazel 0.27
* bazel info -> bazel info bazel-genfiles
* Fix typo in BUILD
Co-Authored-By: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Update rules_haskell and static GHC
Remove patches that have been upstreamed or are no longer required.
Update still required patches to match the new rules_haskell version.
Previously we patched rules_haskell to coerce GHC into using static
Haskell libraries in most places. In particular we moved hs-libraries
entries into extra-libraries entries in the package configuration files.
A much cleaner approach is to compile GHC with a static RTS, then GHC
will by itself choose to load static Haskell libraries.
* Remove haskell_cc_import
* da-hs-daml-cli -> daml-cli
* da-hs-damlc-app -> damlc-app
* windows: root build
* windows: fixed haskell bindings tests
* windows: disable client_server_test test
* windows: marking daml_test flaky due to #1907
* windows: removing da-hs-damlc-app run from build.ps1
* windows: disable hie-core alias of currently disabled target
* Pull the CPP into a separate module
* Pass Nothing to indicate that a text buffer should just be used from disk
* Add save handlers, since the version changing to ModTime may have an impact
* Rename contents to mbContents in one place
* Change runCpp to take a Maybe StringBuffer and attempt to reuse the existing file, if it can
* Add a Bazel alias for hie-core
* Add notes about the sad path
* Avoid one use of filePathToUri
* Avoid another use of filePathToUri which went wrong for CPP output
* Normalize Uri's by replacing adjacent // with a single /
* Improve how CPP works if you have a modified buffer
* Move textToStringBuffer out to Util
* Switch to hPutStringBuffer which is in GHC 8.8
* Note why we are escaping to /
* Refactoring suggested by review
* Add support for haskell_repl targets in da-ghci
* Change default target
* Revert newline loss.
* bazel fetch before bazel query
* Make a top level repl target the default.
* Add da_haskell_repl dependency in //BUILD
* Fix syntax error
* Fix bazel formatting...
* Rename DamlHelper modules to make //:repl work
* DamlHelper -> DamlHelper.Run
* Update the import in DamlHelper.Main
* Fix bazel rules again
* Update DamlHelper import in integration-tests
* Hazel: Shorten target names
Previously, Hazel would generate library and binary targets that
repeated the package name in their target name. This easily led to too
long paths on Windows, which could induce errors with code that did not
use API functions with long path support.
This change modifies Hazel to name the library target "lib" and shorten
the binary target names to "bin" or just the Cabal exe component name.
This change had further reaching consequences, because the package name
in the generated version macros was derived from the library target
name. rules_haskell has been extended to allow overriding that default
behaviour.
* data-default: Remove custom build definitions
These had been introduced to resolve issues on Windows due to too long
target names. Hazel has meanwhile been patched to generate such shorter
target names by default, making the custom builds superfluous.
* Hazel: unshorten cbits name
This is a temporary workaround for otherwise clashing cbits library
names in the case of static only linking.
* Add buildifier targets.
The tool allows checking and formatting BUILD files in the repo.
To check if files are well formatted, run:
bazel run //:buildifier
To fix badly-formatted files run:
bazel run //:buildifier-fix
* Cleanup dade-copyright-headers formatting.
* Fix dade-copyright-headers on files with just the copyright.
* Run buildifier automatically on CI via 'fmt.sh'.
* Reformat all BUILD files with buildifier.
Excludes autogenerated Bazel files.