This adds a function `withTriggerServiceAndDb` which runs a test twice, once with and once without a database, and succeeds only if both runs succeed. This will be useful for reusing test logic with both backends and making sure behaviour is consistent. I have used this function where possible, but it won't work for everything until stop is implemented on the DB side. A minimal sketch of the helper's shape follows the change list below.
At the moment this new function squashes two tests into one, making it hard to tell whether it failed with or without the database. In a future PR I will investigate using an abstract class to run the tests separately (hopefully with altered descriptions).
This feature required a few changes in the process, mainly:
- Use PostgresAroundAll to connect to and disconnect from the database before and after all tests run
- Add a destroy method to the TriggerDao to reset the database between tests
- Use the TriggerDao in the withTriggerService functions to initialize / clean up the database at the start / end of each test
- Sort trigger instances returned by the list command using Scala's sort, rather than relying on Postgres' ordering of UUIDs. This also means we need to use UUIDs for trigger instances in the tests and sort non-empty vectors in expected results.
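To make the shape concrete, here is a minimal sketch of such a helper; everything except the name `withTriggerServiceAndDb` (the config type, the single-backend runner, and the test-body signature) is illustrative, not the real test code:

```scala
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical stand-in for the real JDBC configuration type.
final case class JdbcConfig(url: String, user: String, password: String)

// Stand-in for the single-backend helper: the real one would start the
// service (initializing the DB when a config is given), run the test
// body against it, and tear everything down (destroying the DB) again.
def withTriggerService[A](jdbcConfig: Option[JdbcConfig])(
    testFn: () => Future[A])(implicit ec: ExecutionContext): Future[A] =
  testFn()

// Run the test body twice, once without and once with a database, and
// succeed only if both runs succeed.
def withTriggerServiceAndDb[A](jdbcConfig: JdbcConfig)(
    testFn: () => Future[A])(implicit ec: ExecutionContext): Future[Unit] =
  for {
    _ <- withTriggerService(None)(testFn)             // in-memory backend
    _ <- withTriggerService(Some(jdbcConfig))(testFn) // database backend
  } yield ()
```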
* Insert running trigger to DB when using one
If the DB write fails, the server sends itself a
TriggerInitializationFailure message so that the corresponding trigger
runner is stopped and the table is in sync with the actors.
We still need to implement retrying of writes here.
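As a rough sketch of the pattern (the message name and DB write are from this description; the actor wiring is illustrative):

```scala
import akka.actor.Actor
import scala.concurrent.Future

// Illustrative messages; the real server's protocol is richer.
final case class TriggerStarted(triggerInstance: String)
final case class TriggerInitializationFailure(triggerInstance: String, error: String)

class ServerSketch(insertRunningTrigger: String => Future[Unit]) extends Actor {
  import context.dispatcher

  def receive: Receive = {
    case TriggerStarted(triggerInstance) =>
      // Record the new trigger in the DB; if the write fails, tell
      // ourselves so the runner is stopped and the table stays in sync
      // with the actors. (Retrying the write would go here.)
      insertRunningTrigger(triggerInstance).recover {
        case e => self ! TriggerInitializationFailure(triggerInstance, e.getMessage)
      }
    case TriggerInitializationFailure(triggerInstance, _) =>
      () // stop the corresponding trigger runner here
  }
}
```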
Includes a basic test that runs the server with a JDBC config set and
adds a trigger, expecting a new entry to be added to the DB. However, it
does not check the running trigger table; we can do that once reads are
implemented.
changelog_begin
changelog_end
* Await on future in test
* Update to new assertTriggerIds
* Apply scalafmt suggestions
* Create index on party token
* Read db in list command
* Update comment in test script
* Remove outdated comment
* Fix strings in insert and select
* Clean up test
* Add a second trigger in the db test
* Fix comment in test script
* Comment db tables
* Order trigger instances in list command
* Comment about TriggerDao execution context
* Store trigger history
changelog_begin
changelog_end
* Harvest trigger histories
changelog_begin
changelog_end
* Switch to Vector over List (and other bits and bobs)
* Use a better verb for updating trigger status method
* Add a comment
* Fix mangled comments
* Pass JdbcConfig object to TriggerDao apply
* No need to return TriggerDao from init db
* Refactor introducing RunningTrigger type
* Rename triggerId -> triggerInstance and triggerOrigId -> triggerName
Note this also changes the start request parameter name to triggerName.
However, I have not yet renamed triggerId in the response messages. We
should probably make it triggerInstance there too, but in a later PR.
changelog_begin
changelog_end
* disable Any wart
* first pass removal of Any suppressions for false positives
* second pass removal of Any suppressions for false positives
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* third pass removal of Any suppressions for false positives
* fourth pass removal of Any suppressions for false positives
* reformat newly single-suppressions into single lines
- suggested by @SamirTalwar-DA; thanks
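For reference, WartRemover suppressions are annotations, so once a declaration needs only the `Any` wart suppressed it fits on a single line (the method below is just an illustration):

```scala
// A genuine use of Any that WartRemover would otherwise flag.
@SuppressWarnings(Array("org.wartremover.warts.Any"))
def logAll(items: Any*): Unit = items.foreach(println)
```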
* Don't update running triggers until we know the trigger is running
changelog_begin
changelog_end
* Don't update running triggers until we know the trigger is running
Minimal database initialization with schemas for running_triggers and dalfs tables. The user passes in the database URL, username and password in a config string argument (approach and code adapted from the JSON API).
In future the idea is to also create a "service" role with permissions to read and write to the new tables. Then the user can pass in the service role to connect to the database when running the service for real.
* Implement a simple profiler for DAML scenarios
The profiler runs a single scenario and records timing information when
each function (and some other closures) is entered and left. The
resulting information can be visualized as a flamegraph using
[speedscope](https://www.speedscope.app/).
The profiler works by instrumenting the CEK machine at the heart of
DAML Engine. Unfortunately, this causes a very small overhead on
non-profiling runs too. However, in my benchmarks I could not measure
any significant impact on the overall runtime at all. More precisely,
the overhead is as follows:
Every closure now has an additional field called `label`. In
non-profiling runs this field is always set to `null`. This field needs
to be allocated, copied whenever we copy a closure and scanned during
garbage collection. Additionally, whenever we enter a closure, we check
this field and whenever it is _not_ `null`, i.e. never during
non-profiling runs, we record an "open event" and set up a hook for the
corresponding "close event". Thus, the additional cost during
non-profiling runs are a single pointer comparison and a jump beyond
the "then branch".
Since this is still very much in active development, there is no
documentation, other than an entry in a README, and no tests yet. Both
will come before we promote this. However, the UX will look very
different by then since we already have plans to change it significantly.
CHANGELOG_BEGIN
CHANGELOG_END
* Run scalafmt
* Make profiling argument to PureCompiledPackages optional
* Fix a bunch of tests
CHANGELOG_BEGIN
CHANGELOG_END
* scalafmt is so annoying
CHANGELOG_BEGIN
CHANGELOG_END
* Apply simple suggestions
CHANGELOG_BEGIN
CHANGELOG_END
* Simplify and clarify the public interface to Speedy.
- Remove `isFinal`. A client just uses `run()`.
- Remove `toSValue`. The value is available in `SResultFinalValue(v: SValue)`.
- A client never directly accesses the `.ctrl` (or `.returnValue`) components.
- A client may use `setExpressionToEvaluate(expr)` to evaluate a new expression on an existing machine.
changelog_begin
changelog_end
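A toy illustration of the resulting client protocol, with stub types standing in for Speedy's:

```scala
// Stub result type: Long stands in for SValue.
sealed trait SResult
final case class SResultFinalValue(v: Long) extends SResult
final case class SResultError(message: String) extends SResult

// Stub machine: () => Long stands in for SExpr.
final class Machine {
  private var program: Option[() => Long] = None
  def setExpressionToEvaluate(expr: () => Long): Unit = program = Some(expr)
  def run(): SResult =
    try SResultFinalValue(program.getOrElse(sys.error("nothing to evaluate"))())
    catch { case e: Exception => SResultError(e.getMessage) }
}

// A client never pokes at machine internals: set an expression, run,
// and match on the final result.
val machine = new Machine
machine.setExpressionToEvaluate(() => 6L * 7L)
machine.run() match {
  case SResultFinalValue(v)  => println(s"final value: $v")
  case SResultError(message) => println(s"error: $message")
}
```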
* remove while loop which executes just once
* avoid unnecessary mutation when running speedy
Remove the `Ctrl` trait and separate `Machine.ctrl: Ctrl` into `Machine.ctrl: SExpr` and `Machine.returnValue: SValue` instead. This avoids dynamic dispatch on `ctrl`: instead we check a pointer for `null` to decide whether we have an expression that needs further break-down or a return value ready to be passed to the next continuation.
To make this check really only a pointer comparison we also needed to remove the abomination of "fully applied partially applied primitives". In order to achieve this, we check whether a PAP will be fully applied afterward when applying the last argument.
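The dispatch can be pictured with a toy machine (stub expression type, not the real Speedy AST): a null `returnValue` means `ctrl` still holds an expression to break down; a non-null one is handed to the next continuation.

```scala
sealed trait SExpr
final case class SEValue(n: Long) extends SExpr
final case class SEAdd(x: SExpr, y: SExpr) extends SExpr

final class Machine(var ctrl: SExpr) {
  var returnValue: java.lang.Long = null      // null: keep evaluating ctrl
  private var konts: List[Long => Unit] = Nil // continuation stack

  def run(): Long = {
    var result: java.lang.Long = null
    while (result == null) {
      if (returnValue != null) {               // the single pointer check
        val v = returnValue
        returnValue = null
        konts match {
          case Nil       => result = v         // machine is done
          case k :: rest => konts = rest; k(v) // feed the continuation
        }
      } else ctrl match {                      // break the expression down
        case SEValue(n) => returnValue = n
        case SEAdd(x, y) =>
          ctrl = x
          konts = ((vx: Long) => {
            ctrl = y
            konts = ((vy: Long) => returnValue = vx + vy) :: konts
          }) :: konts
      }
    }
    result
  }
}
```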
On the `collect-authority` benchmark, this increases throughput by around 13%; on another, more computation-heavy benchmark, by about 21%.
`collect-authority` benchmark on `master`:
```
Result "com.daml.lf.speedy.perf.CollectAuthority.bench":
112.361 ±(99.9%) 1.965 ms/op [Average]
(min, avg, max) = (107.047, 112.361, 120.745), stdev = 3.493
CI (99.9%): [110.396, 114.326] (assumes normal distribution)
```
`collect-authority` benchmark on this branch:
```
Result "com.daml.lf.speedy.perf.CollectAuthority.bench":
98.196 ±(99.9%) 1.933 ms/op [Average]
(min, avg, max) = (91.580, 98.196, 105.478), stdev = 3.436
CI (99.9%): [96.263, 100.129] (assumes normal distribution)
```
computation-heavy benchmark on `master`:
```
Result "com.daml.lf.speedy.perf.CollectAuthority.bench":
44.030 ±(99.9%) 0.742 ms/op [Average]
(min, avg, max) = (42.124, 44.030, 46.781), stdev = 1.319
CI (99.9%): [43.289, 44.772] (assumes normal distribution)
```
computation-heavy benchmark on this branch:
```
Result "com.daml.lf.speedy.perf.CollectAuthority.bench":
36.222 ±(99.9%) 0.580 ms/op [Average]
(min, avg, max) = (34.897, 36.222, 39.787), stdev = 1.031
CI (99.9%): [35.643, 36.802] (assumes normal distribution)
```
changelog_begin
changelog_end
* Use triggerId field in trigger start response
* Use triggerId field for stop trigger result
* Fix indentation and make yields consistent
* Use pair constructor for JsObject instead of Map
* Use triggerIds field in list triggers response
changelog_begin
changelog_end
* Adapt ResponseFormat from JSON API
* Add some type annotations
* Use response format with status and errors/result fields
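A sketch of what such a format can look like with spray-json (helper names are illustrative; the field layout is as described above):

```scala
import spray.json._

object ResponseFormat {
  // Successful responses: {"status": 200, "result": ...}
  def resultJsObject(result: JsValue, status: Int = 200): JsObject =
    JsObject("status" -> JsNumber(status), "result" -> result)

  // Failed responses: {"status": ..., "errors": [...]}
  def errorsJsObject(status: Int, errors: String*): JsObject =
    JsObject("status" -> JsNumber(status), "errors" -> JsArray(errors.map(JsString(_)).toVector))
}
```

Note the `JsObject` pair constructor rather than an explicit `Map`, matching the earlier cleanup.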
* Update and refactor tests
changelog_begin
changelog_end
Speedy: run() don't step()
- Running the Speedy machine with `run()` instead of `step()`
- Remove: `SResultContinue`
- Add: `SResultFinalValue(_)`
We change the top-level control of Speedy from machine.step() to machine.run(), with the loop that steps while the machine returns SResultContinue moved into Speedy itself. (And so SResultContinue is removed in favour of SResultFinalValue.) The main advantage of this approach is that the tight while loop can be moved inside the exception handler, rather than having to wrap the handler around every step.
changelog_begin
changelog_end
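The control-flow difference is easiest to see side by side (illustrative code, not the real Speedy internals):

```scala
// Before: the handler had to be wrapped around every single step.
def runBySteps(step: () => Boolean): Unit = {
  var done = false
  while (!done)
    try done = step()
    catch { case _: RuntimeException => done = true }
}

// After: one handler wraps the whole tight loop.
def runInOneGo(step: () => Boolean): Unit =
  try {
    var done = false
    while (!done) done = step()
  } catch { case _: RuntimeException => () }
```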
* Endpoint to list all triggers (not yet by party)
* Clean up test code a little
changelog_begin
changelog_end
* Test for listing running triggers
* Respond with JSON list instead of random text
* List triggers by party
Pass party name in request body.
Store another map of party name to set of trigger ids.
Also store party names in the values of the original trigger id map, so
we can update the party map when stopping a trigger.
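A minimal sketch of that bookkeeping (names are illustrative; the real service stores more per trigger than just the party):

```scala
import java.util.UUID

final class RunningTriggers {
  private var partyByInstance: Map[UUID, String] = Map.empty
  private var instancesByParty: Map[String, Set[UUID]] = Map.empty

  def start(party: String): UUID = {
    val id = UUID.randomUUID()
    partyByInstance += id -> party
    instancesByParty += party -> (instancesByParty.getOrElse(party, Set.empty) + id)
    id
  }

  def list(party: String): Set[UUID] =
    instancesByParty.getOrElse(party, Set.empty)

  def stop(id: UUID): Boolean =
    partyByInstance.get(id) match {
      case None => false
      case Some(party) =>
        // the party stored per instance lets us fix up the party map too
        partyByInstance -= id
        instancesByParty += party -> (instancesByParty.getOrElse(party, Set.empty) - id)
        true
    }
}
```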
* Make DAML Triggers and DAML Script default to wall-clock-time
Now that sandbox defaults to wall-clock time, there is no reason why we
should not do the same in DAML Triggers and DAML Script.
changelog_begin
- [DAML Triggers] ``daml trigger`` now defaults to wall clock time if
neither ``--wall-clock-time`` nor ``--static-time`` is passed.
- [DAML Script] ``daml script`` now defaults to wall clock time if
neither ``--wall-clock-time`` nor ``--static-time`` is passed.
changelog_end
* Make --static-time and --wall-clock-time exclusive
Packages com.digitalasset.daml and com.daml have been unified under com.daml
Ledger API and DAML-LF DEV protos have also been moved from `com/digitalasset`
to `com/daml` on the file system.
Protos for already released DAML LF versions (1.6, 1.7, 1.8) stay in the
package `com.digitalasset`.
CHANGELOG_BEGIN
[SDK] All Java and Scala packages starting with
``com.digitalasset.daml`` and ``com.digitalasset`` are now consolidated
under ``com.daml``. Simply changing imports should be enough to
migrate your code.
CHANGELOG_END
* Move more trigger tests to scala tests
This PR moves more tests of triggers over to the Scala test suite, in
particular:
- The existing tests there abstract over the time mode and are
instantiated once for wallclock mode and once for static time mode.
- I’ve added tests for TLS and Auth.
- I’ve removed the TLS and Auth tests outside of scala-test since they
are now redundant.
- I’ve added the time tests to the scala-tests stuff since with the
new ledger time model, that’s necessary to actually trigger a
failure if you get static time vs wallclock time wrong (MRT and LET
no longer exist).
I haven’t yet moved all the func tests over, I’ll do that separately
and then we can kill the old tests completely.
changelog_begin
changelog_end
* Factor out test utils into a library
* Improve handling of exposed-modules with data-dependencies
Previously, we tried to rename all modules of a dependency via
--package. This fails if some of those modules are not exported. This
was trivial to hit as a user since the ``daml-trigger`` library made
use of this.
This PR adds a few things to improve the situation:
1. We only rename modules that are exposed. This fixes the issue if
you don’t actually reference a non-exposed module from your
data-dependency.
2. I’ve removed the exposed-modules from daml-trigger. I don’t think
they are essential here given that the module name has `Internal`
in the name and it’s too easy to have something that actually
references the non-exposed module since the types are reexported.
3. I’ve added documentation that mentions this issue.
4. I’ve added a warning if your exposed-modules are excluding some
modules. Maybe worth turning this into an error in the future.
changelog_begin
changelog_end
* Update compiler/damlc/lib/DA/Cli/Damlc/Packaging.hs
Co-authored-by: associahedron <231829+associahedron@users.noreply.github.com>
* Use com.daml as groupId for all artifacts
CHANGELOG_BEGIN
[SDK] Changed the groupId for Maven artifacts to ``com.daml``.
CHANGELOG_END
* Add 2 additional maven related checks to the release binary
1. Check that all maven upload artifacts use com.daml as the groupId
2. Check that all maven upload artifacts have a unique artifactId
* Address @cocreature's comments in https://github.com/digital-asset/daml/pull/5272#pullrequestreview-385026181
This replaces the rather horrible previous setup of having a custom
test runner that spawns 3 separate JVM processes with a single scalatest
test suite that starts sandbox and the JSON API in process.
changelog_begin
changelog_end
This PR adds a new test suite for DAML triggers based on scala test
rather than the client_server_test macro + a custom main. This seems
much nicer than the client_server_test (we get a lot of useful stuff
from scalatest, e.g., useful output of assertion failures, things
don’t blow up after the first test failure, …).
This PR only ports over a small fraction of the tests to make review
easier. The plan is then to port over everything and kill off the
existing test stuff once everything is ported over.
changelog_begin
changelog_end
* Depend on LF version specific daml-libs
* daml-script.dar build multiple LF versions
CHANGELOG_BEGIN
[DAML Script] The `daml-script` library is now available in multiple LF
versions, namely 1.7, 1.8, and 1.dev.
CHANGELOG_END
* daml-trigger.dar build multiple LF versions
[DAML Triggers] The `daml-trigger` library is now available in multiple
LF versions, namely 1.7, 1.8, and 1.dev.
* Keep daml-script.dar available for tests
* Keep daml-trigger.dar available for tests
* daml-libs LF versions integration test
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
Contributes to #4194.
Closes #4231.
Closes #5022.
CHANGELOG_BEGIN
- [Ledger API] The protobuf fields ledger_effective_time and maximum_record_time have been removed from
command submission. These fields were previously deprecated following the introduction
of a new ledger time model. See issue `#4194 <https://github.com/digital-asset/daml/issues/4194>`__.
[Java Bindings] Removed the usage of ledgerEffectiveTime and
maximumRecordTime, and instead added minLedgerTimeAbsolute and
minLedgerTimeRelative in CommandSubmissionClient and CommandClient
CHANGELOG_END
* Tighten result type
Command execution can't result in a sequencer error
* New helper method for extracting used contracts
* New error clause
* Add a DAO query for the maximum time of contracts
* Implement algorithm for finding ledger time
CHANGELOG_BEGIN
CHANGELOG_END
* fixup ledgerTimeHelper
* Use new ledger time algorithm
* Mark LET/MRT as deprecated
CHANGELOG_BEGIN
- [Ledger API] DAML ledgers have switched to a new ledger time model.
The ledger_effective_time and maximum_record_time fields of command submission are deprecated,
the ledger time of transactions is instead set automatically by the ledger API server.
Ledger time is no longer strictly monotonically increasing, but only follows causal monotonicity:
ledger time of transactions is greater than or equal to the ledger time of any used contract.
See `#4345 <https://github.com/digital-asset/daml/issues/4345>`__.
CHANGELOG_END
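The heart of the causal-monotonicity rule is small enough to sketch (names illustrative): the assigned ledger time is the current time, bumped up to the maximum ledger time of all used contracts.

```scala
import java.time.Instant

// Ledger time must be >= the ledger time of every used contract.
def assignLedgerTime(now: Instant, usedContractTimes: List[Instant]): Instant =
  usedContractTimes.foldLeft(now)((acc, t) => if (t.isAfter(acc)) t else acc)
```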
* Add ledger time skew check
* Remove command updater
LET/MRT are now deprecated, so this class is useless
* Remove old time model validator
* Switch to new time model check: kvutils
* Switch to new time model check: in-memory ledger
* Switch to new time model check: SqlLedger
* Use initial ledger config
* Ignore user provided LET
* Use TimeProvider in submission services
* Use deduplication_time in daml-script runner
- Also remove unnecessary command completion output of CommandTracker.
- Remove usage of maximum record time in CommandTracker.
* Use arbitrary default value for deduplication time
* Use built-in Instant ordering
* Remove obsolete test
* Remove obsolete test: CommandStaticTimeIT
* Refactor test: TransactionMRTCompliance
* Disable test: CommandTrackerFlow timeout
* thread maxDeduplicationTime through to CommandTracker
* Improve test
* Refactor command client configuration
* Deduplication time should always use UTC
* Add missing method in TimedIndexService after rebase
* Put more details into the deduplication error response.
* Use system time for command dedup submittedAt.
* Use explicit UTC time source in command validator
* Revert CommandTracker[Flow] to previous completion-recovering-behavior
* Adapt scala client command config to new config params
Co-authored-by: Gerolf Seitz <gerolf.seitz@digitalasset.com>
* Support uploading DARs to the trigger service
This PR adds a new `upload_dar` endpoint that accepts a DAR as a
multi-part form request and adds it to the list of compiled packages.
I’ve also made the DAR passed in on startup optional now given the new
endpoint.
There is no endpoint for deleting a DAR so far but there is none on
the ledger API either so I think this not particularly urgent.
changelog_begin
changelog_end
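A hedged sketch of what such a route can look like with akka-http (the route name matches the description above; everything else, including only reading a part named "dar", is illustrative):

```scala
import akka.http.scaladsl.model.Multipart
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import akka.stream.Materializer
import akka.util.ByteString
import scala.concurrent.ExecutionContext

def uploadDarRoute(addDar: ByteString => Unit)(
    implicit mat: Materializer, ec: ExecutionContext): Route =
  path("upload_dar") {
    post {
      entity(as[Multipart.FormData]) { formData =>
        val uploaded = formData.parts
          .filter(_.name == "dar") // pick the DAR out of the form
          .mapAsync(parallelism = 1)(_.entity.dataBytes.runFold(ByteString.empty)(_ ++ _))
          .runFold(ByteString.empty)(_ ++ _)
          .map { bytes => addDar(bytes); "DAR uploaded" }
        complete(uploaded)
      }
    }
  }
```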
* Address review comments
Previously the http endpoint for starting a trigger would always
return immediately. Based on the recent refactorings, we now do the
non-IO trigger initialization synchronously and return a failed http
status code with an error message.
This also refactors the code to only have one (mutable) set of
compiled packages which is a prerequisite for dynamic package uploads.
changelog_begin
changelog_end
* sandbox: Fail to start if a time mode is not explicitly specified.
CHANGELOG_BEGIN
- [Sandbox] Sandbox is switching from Static Time mode to Wall Clock
Time mode as the default. To ensure that our users know about this,
for one version, there will be no default time mode. Instead, users
will have to explicitly select their preferred time mode by means of
the `--static-time` or `--wall-clock-time` switches. In the next
release, Wall Clock Time will become the default, and users who are
happy with the defaults will no longer need to specify the time mode.
CHANGELOG_END
* daml-script|triggers: Specify time mode when testing against Sandbox.
* daml-assistant: Default the Sandbox to wall clock time.
CHANGELOG_BEGIN
- [DAML Assistant] Initializing a new DAML project adds a switch to
``daml.yaml`` to ensure Sandbox can continue to start with ``daml
start``::
sandbox-options:
- --wall-clock-time
CHANGELOG_END
* docs: Update the DAML Script and Triggers docs to use Wall Clock time.
It's now what Sandbox will use by default when using `daml init`.
* docs: Change the Quickstart to run Sandbox in wall clock time.
This explains why the contract IDs may vary.
It also updates the manual release testing script to match.
Previously, parts of the initialization, in particular the code for
finding the filter and the heartbeat, were part of the Runner. This led
to an awkward API and didn’t really make any sense.
Now all of this code is part of a pure `Trigger.fromIdentifier`
method and the runner only takes care of actually running the trigger
against the ledger. This could also be useful for the trigger service
where we might want to synchronously call `getIdentifier` so users get
some immediate indication of whether their request even points to a
valid trigger. However, this is not tackled by this PR.
changelog_begin
changelog_end
Previously the runner class was in a weird state where it was specific
to a DAR but not to an individual trigger. This meant that you had to
pass around a fair bit of state, which got a bit awkward. This PR
addresses this by making the runner class specific to an individual trigger.
It also now accepts `CompiledPackages` instead of a DAR which should
make it easier in the trigger service to support dynamic package
uploads.
changelog_begin
changelog_end
Previously we assumed that the module name was globally unique in the
DAR which is definitely not guaranteed. Now we instead detect the
package id of the trigger library based on the type of the trigger we
are running which doesn’t fall apart if there are multiple versions of
the trigger library.
I’ve also removed the check for the package id of the trigger library
since I’d like the trigger runner to be backwards compatible from now on (we
haven’t broken that in a while).
This is slightly ugly since the Runner class is currently not specific
to a single trigger but only the individual methods are aware of the
specific trigger identifier. I’ll refactor this in a separate PR.
changelog_begin
changelog_end
* Share test certificates
This is primarily an attempt at making sure my contribution stats
remain negative, but I think it’s a nice cleanup. The only difference
in the certs used by daml-helper, which are now used everywhere, is that
they use a different CN for the CA and the server. This is required to
make openssl happy (which is used by the daml-helper).
changelog_begin
changelog_end
* Fix script and trigger tests
This adds CLI parameters for connecting via TLS following the scheme
used by navigator, extractor and `daml ledger`.
changelog_begin
- [DAML Script] Support TLS. Enable it by passing ``--tls``. You can
set certificates for client authentication via ``--pem`` and
``--crt`` and a custom root CA for validating the server certificate
via ``--cacrt``.
- [DAML Triggers - Experimental] Support TLS. Enable it by passing ``--tls``. You can
set certificates for client authentication via ``--pem`` and
``--crt`` and a custom root CA for validating the server certificate
via ``--cacrt``.
changelog_end
* libs-scala/ports: Wrap socket ports in a type, `Port`.
* sandbox: Use `Port` for the API server port, and propagate.
CHANGELOG_BEGIN
CHANGELOG_END
* extractor: Use `Port` for the server port.
* ports: Make Port a compile-time class only.
* ports: Allow port 0; it can be specified by a user.
* ports: Publish to Maven Central.
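The wrapper itself can be as small as a value class, which is what "compile-time class only" suggests (a sketch; the real `Port` in libs-scala/ports may differ in detail):

```scala
// A value class: the wrapper exists for the type checker but compiles
// down to a bare Int at runtime.
final case class Port(value: Int) extends AnyVal {
  override def toString: String = value.toString
}

object Port {
  val Dynamic = Port(0) // port 0 asks the OS to pick a free port
}
```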
Context
=======
After multiple discussions about our current release schedule and
process, we've come to the conclusion that we need to be able to make a
distinction between technical snapshots and marketing releases. In other
words, we need to be able to create a bundle for early adopters to test
without making it an officially-supported version, and without
necessarily implying everyone should go through the trouble of
upgrading. The underlying goal is to have less frequent but more stable
"official" releases.
This PR is a proposal for a new release process designed under the
following constraints:
- Reuse as much as possible of the existing infrastructure, to minimize
effort but also chances of disruptions.
- Have the ability to create "snapshot"/"nightly"/... releases that are
not meant for general public consumption, but can still be used by savvy
users without jumping through too many extra hoops (ideally just
swapping in a slightly-weirder version string).
- Have the ability to promote an existing snapshot release to "official"
release status, with as few changes as possible in-between, so we can be
confident that the official release is what we tested as a prerelease.
- Have as much of the release pipeline shared between the two types of
releases, to avoid discovering non-transient problems while trying to
promote a snapshot to an official release.
- Triggering a release should still be done through a PR, so we can
keep the same approval process for SOC2 auditability.
The gist of this proposal is to replace the current `VERSION` file with
a `LATEST` file, which would have the following format:
```
ef5d32b7438e481de0235c5538aedab419682388 0.13.53-alpha.20200214.3025.ef5d32b7
```
This file would be maintained with a script to reduce manual labor in
producing the version string. Other than that, the process will be
largely the same, with releases triggered by changes to this `LATEST`
and the release notes files.
Version numbers
===============
Because one of the goals is to reduce the velocity of our published
version numbers, we need a different version scheme for our snapshot
releases. Fortunately, most version schemes have some support for that;
unfortunately, the SDK sits at the intersection of three different
version schemes that have made incompatible choices. Without going into
too much detail:
- Semantic versioning (which we chose as the version format for the SDK
version number) allows for "prerelease" version numbers as well as
"metadata"; an example of a complete version string would be
`1.2.3-nightly.201+server12.43`. The "main" part of the version string
always has to have 3 numbers separated by dots; the "prerelease"
(after the `-` but before the `+`) and the "metadata" (after the `+`)
parts are optional and, if present, must consist of one or more segments
separated by dots, where a segment can be either a number or an
alphanumeric string. In terms of ordering, metadata is irrelevant and
any version with a prerelease string is before the corresponding "main"
version string alone. Amongst prereleases, segments are compared in
order with purely numeric ones compared as numbers and mixed ones
compared lexicographically. So 1.2.3 is more recent than 1.2.3-1,
which is itself less recent than 1.2.3-2.
- Maven version strings are any number of segments separated by a `.`, a
`-`, or a transition between a number and a letter. Version strings
are compared element-wise, with numeric segments being compared as
numbers. Alphabetic segments are treated specially if they happen to be
one of a handful of magic words (such as "alpha", "beta" or "snapshot"
for example) which count as "qualifiers"; a version string with a
qualifier is "before" its prefix (`1.2.3` is before `1.2.3-alpha.3`,
which is the same as `1.2.3-alpha3` or `1.2.3-alpha-3`), and there is a
special ordering amongst qualifiers. Other alphabetic segments are
compared alphabetically and count as being "after" their prefix
(`1.2.3-really-final-this-time` counts as being released after `1.2.3`).
- GHC package numbers consist of any number of numeric segments
separated by `.`, plus an optional (though deprecated) alphanumeric
"version tag" separated by a `-`. I could not find any official
documentation on ordering for the version tag; numeric segments are
compared as numbers.
- npm uses semantic versioning so that is covered already.
After much more investigation than I'd care to admit, I have come up
with the following compromise as the least-bad solution. First,
obviously, the version string for stable/marketing versions is going to
be "standard" semver, i.e. major.minor.patch, all numbers, which works,
and sorts as expected, for all three schemes. For snapshot releases, we
shall use the following (semver) format:
```
0.13.53-alpha.20200214.3025.ef5d32b7
```
where the components are, respectively:
- `0.13.53`: the expected version string of the next "stable" release.
- `alpha`: a marker that hopefully scares people enough.
- `20200214`: the date of the release commit, which _MUST_ be on
master.
- `3025`: the number of commits in master up to the release commit
(included). Because we have a linear, append-only master branch, this
uniquely identifies the commit.
- `ef5d32b7`: the first 8 characters of the release commit sha. This is
not strictly speaking necessary, but makes it a lot more convenient to
identify the commit.
The main downsides of this format are:
1. It is not a valid format for GHC packages. We do not publish GHC
packages from the SDK (so far we have instead opted to release our
Haskell code as separate packages entirely), so this should not be an
issue. However, our SDK version currently leaks to `ghc-pkg` as the
version string for the stdlib (and prim) packages. This PR addresses
that by tweaking the compiler to remove the offending bits, so `ghc-pkg`
would see the above version number as `0.13.53.20200214.3025`, which
should be enough to uniquely identify it. Note that, as far as I could
find out, this number would never be exposed to users.
2. It is rather long, which I think is good from a human perspective as
it makes it more scary. However, I have been told that this may be
long enough to cause issues on Windows by pushing us past the max path
size limitation of that "OS". I suggest we try it and see what
happens.
The upsides are:
- It clearly indicates it is an unstable release (`alpha`).
- It clearly indicates how old it is, by including the date.
- To humans, it is immediately obvious which version is "later" even if
they have the same date, allowing us to release same-day patches if
needed. (Note: that is, commits that were made on the same day; the
release date itself is irrelevant here.)
- It contains the git sha so the commit built for that release is
immediately obvious.
- It sorts correctly under all schemes (modulo the modification for
GHC).
Alternatives I considered:
- Pander to GHC: 0.13.53-alpha-20200214-3025-ef5d32b7. This format would
be accepted by all schemes, but will not sort as expected under semantic
versioning (though Maven will be fine). I have no idea how it will sort
under GHC.
- Not having any non-numeric component, e.g. `0.13.53.20200214.3025`.
This is not valid semantic versioning and is therefore rejected by
npm.
- Not having detailed info: just go with `0.13.53-snapshot`. This is
what is generally done in the Java world, but we then lose track of what
version is actually in use and I'm concerned about bug reports. This
would also not let us publish to the main Maven repo (at least not more
than once), as artifacts there are supposed to be immutable.
- Not having a qualifier: `0.13.53-3025` would be acceptable to all three
version formats. However, it would not clearly indicate to humans that
it is not meant as a stable version, and would sort differently under
semantic versioning (which counts it as a prerelease, i.e. before
`0.13.53`) than under maven (which counts it as a patch, so after
`0.13.53`).
- Just counting releases: `0.13.53-alpha.1`, where we just count the
number of prereleases in-between `0.13.52` and the next. This is
currently the fallback plan if Windows path length causes issues. It
would be less convenient to map releases to commits, but it could still
be done via querying the history of the `LATEST` file.
Release notes
=============
> Note: We have decided not to have release notes for snapshot releases.
Release notes are a bit tricky. Because we want the ability to make
snapshot releases, then later on promote them to stable releases, it
follows that we want to build commits from the past. However, if we
decide post-hoc that a commit is actually a good candidate for a
release, there is no way that commit can have the appropriate release
notes: it cannot know what version number it's getting, and, moreover,
we now track changes in commit messages. And I do not think anyone wants
to go back to the release notes file being a merge bottleneck.
But release notes need to be published to the releases blog upon
releasing a stable version, and the docs website needs to be updated and
include them.
The only sensible solution here is to pick up the release notes as of
the commit that triggers the release. As the docs cron runs
asynchronously, this means walking down the git history to find the
relevant commit.
> Note: We could probably do away with the asynchronicity at this point.
> It was originally included to cover for the possibility of a release
> failing. If we are releasing commits from the past after they have been
> tested, this should not be an issue anymore. If the docs generation were
> part of the synchronous release step, it would have direct access to the
> correct release notes without having to walk down the git history.
>
> However, I think it is more prudent to keep this change as a future step,
> after we're confident the new release scheme does indeed produce much more
> reliable "stable" releases.
New release process
===================
Just like releases are currently controlled mostly by detecting
changes to the `VERSION` file, the new process will be controlled by
detecting changes to the `LATEST` file. The format of that file will
include both the version string and the corresponding SHA.
Upon detecting a change to the `LATEST` file, CI will run the entire
release process, just like it does now with the VERSION file. The main
differences are:
1. Before running the release step, CI will checkout the commit
specified in the LATEST file. This requires separating the release
step from the build step, which in my opinion is cleaner anyway.
2. The `//:VERSION` Bazel target is replaced by a repository rule
that gets the version to build from an environment variable, with a
default of `0.0.0` to remain consistent with the current `daml-head`
behaviour.
Some of the manual steps will need to be skipped for a snapshot release.
See amended `release/RELEASE.md` in this commit for details.
The main caveat of this approach is that the official release will be a
different binary from the corresponding snapshot. It will have been
built from the same source, but with a different version string. This is
somewhat mitigated by Bazel caching, meaning any build step that does
not depend on the version string should use the cache and produce
identical results. I do not think this can be avoided when our artifact
includes its own version number.
I must note, though, that while going through the changes required after
removing the `VERSION` file, I have been quite surprised at the sheer number of
things that actually depend on the SDK version number. I believe we should
look into reducing that over time.
CHANGELOG_BEGIN
CHANGELOG_END
* kvutils: Extract a committer from the uses of `SubmissionValidator`.
This makes the clock injectable too.
* kvutils: Provide logging contexts in the `Runner`.
* sandbox: Remove the `StaticAllowBackwards` time provider type.
It's not used anywhere.
* sandbox: Fix warnings in CliSpec.
* sandbox: Ensure that we cannot specify both static and wall-clock time.
* sandbox-next: Crash if wall clock time is not specified.
* sandbox-next: Document more known issues in the new Sandbox.
* sandbox: Add a Clock (and some tests) to TimeServiceBackend.
* sandbox-next: Support static time.
CHANGELOG_BEGIN
- [Sandbox Next] Re-establish static time mode.
CHANGELOG_END
* ledger-on-(memory|sql): Expect a `() => Instant`, not a `Clock`.
* sandbox: Move more resource acquisition into the `owner`.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Reimplement SandboxClientResource as a resources.Resource.
* codegen: Use resources in TestUtil.
* sandbox: Manage PostgreSQL in tests with ResourceOwners.
This should provide a better migration path for people that still rely
on static time by forcing them to make this explicit. Given that both
DAML script and DAML triggers are still experimental, I’m not marking
this as a breaking change.
changelog_begin
- [DAML Script - Experimental] The time mode must now always be
specified explicitly. Use ``--static-time`` to recover the previous
default time mode.
- [DAML Triggers - Experimental] The time mode must now always be
specified explicitly. Use ``--static-time`` to recover the previous
default time mode.
changelog_end
* Add first prototype of triggers as a service (TaaS)
This is an extremely basic version of the trigger as a service thingy.
Right now, it supports spawning triggers and stopping them but nothing
else.
There is a very simple test to check that it’s not completely broken.
changelog_begin
changelog_end
* Apply suggestions from code review
Co-Authored-By: Andreas Herrmann <42969706+aherrmann-da@users.noreply.github.com>
* remove debugging output
* remove leftover import
Co-authored-by: Andreas Herrmann <42969706+aherrmann-da@users.noreply.github.com>
* Daml.Trigger.Assert for trigger testing API
Requires extracting part of Daml.Trigger into Daml.Trigger.Internal to
get access to internal data constructors and functionality
CHANGELOG_BEGIN
- [DAML Trigger - Experimental] Trigger testing functionality is now
available in the module Daml.Trigger.Assert.
CHANGELOG_END
* Set exposed-modules to hide Daml.Trigger.Internal
* API docs for Daml.Trigger.Assert
Co-authored-by: Andreas Herrmann <andreash87@gmx.ch>
* Bazel test for trigger scenario
* daml-triggers: Allow testing trigger rules in scenarios
CHANGELOG_BEGIN
- [DAML Triggers - Experimental] DAML triggers can now be tested in
scenarios. Specifically, a trigger's ``rule`` can be executed in a
scenario and assertions performed on the emitted commands.
CHANGELOG_END
* Allow assertions on create commands
CHANGELOG_BEGIN
* [DAML stdlib] Add `CanAbort` instance for `Either Text`.
CHANGELOG_END
* Add convenience to construct ACS for testRule
* Add assertions for exercise and exerciseByKey
* fix assert message
* Test assertExercise(ByKey)Cmd
* unpackCommands --> flattenCommands
* Add API documentation
* Document that command ids start from "0"
* generalise command assertions to CanAbort
* export ACSBuilder type
* Better haddocs for trigger command assertions
* explicit let
* ./fmt.sh
* Fix runfiles on Windows
* Add reference to Bazel issue
Co-authored-by: Andreas Herrmann <andreash87@gmx.ch>
* daml triggers: Set -Werror
* Handle MHeartbeat message in runTrigger
This was missing when heartbeat support was added.
CHANGELOG_BEGIN
CHANGELOG_END
Co-authored-by: Andreas Herrmann <andreash87@gmx.ch>
* Implement heartbeat messages in trigger runner.
* Add heartbeat to Daml.Trigger
CHANGELOG_BEGIN
- [DAML Triggers - Experimental] DAML triggers can now configure a heartbeat message to be sent at regular time interval.
CHANGELOG_END
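On the runner side, the natural shape is merging a tick source into the stream of ledger messages (a sketch with illustrative message types, not the real runner code):

```scala
import akka.NotUsed
import akka.stream.scaladsl.Source
import scala.concurrent.duration.FiniteDuration

sealed trait TriggerMsg
case object Heartbeat extends TriggerMsg
final case class LedgerEvent(payload: String) extends TriggerMsg

// The trigger then sees a Heartbeat at the configured interval,
// interleaved with the ordinary ledger messages.
def withHeartbeat(
    msgs: Source[TriggerMsg, NotUsed],
    interval: FiniteDuration): Source[TriggerMsg, NotUsed] =
  msgs.merge(Source.tick(interval, interval, Heartbeat))
```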
* Add DAML trigger heartbeat test-case
* ./fmt.sh
Co-authored-by: Andreas Herrmann <andreash87@gmx.ch>
These have failed quite a few times on Windows and occasionally also
on macOS.
This PR first fixes a small issue where the tests were actually
using the times from completions instead of only the timings from
creations (that technically shouldn’t be an issue, but it’s at least
confusing since the error claims to test creations).
In addition to that, this PR changes the condition to allow the
times to be equal since, especially on Windows, we don’t seem to have a
very high resolution and the tests are remarkably quick, so sometimes
the times can be identical.
I’ve slightly rephrased the condition since I got confused by the fact
that we test for the negated condition.
changelog_begin
changelog_end
Fixes #28.
CHANGELOG_BEGIN
[Sandbox] DAML trace logs (trace, traceRaw, traceId) are now logged via the regular logging system (slf4j+logback) at interpretation time via the logger ``daml.tracelog`` at DEBUG level.
CHANGELOG_END
* Upgrade to Akka 2.6.1, akka-http 10.1.11 and Scala 2.12.10
Akka 2.6.1 Upgrade Changes
- Materializer in place of ActorMaterializer
- Source.future instead of Source.fromFuture
- The Scheduler.schedule method has been deprecated in favor of selecting scheduleWithFixedDelay or scheduleAtFixedRate
- onDownstreamFinish(cause: Throwable)
- ActorAttributes.supervisionStrategy(...) in place of ActorMaterializerSettings.withSupervisionStrategy
See https://doc.akka.io/docs/akka/current/project/migration-guide-2.5.x-2.6.x.html
* Akka 2.6.1 Upgrade Changes
- onDownstreamFinish(cause: Throwable)
See https://doc.akka.io/docs/akka/current/project/migration-guide-2.5.x-2.6.x.html
* code review: remove unnecessary supervision strategy
* Add time to Trigger update function
CHANGELOG_BEGIN
- [DAML Triggers - Experimental] Expose timestamp in triggers.
See `#3612 <https://github.com/digital-asset/daml/issues/3612>`__.
CHANGELOG_END
* Add triggers time test
* Update trigger docs
This still contains the main class so you can use it like you would
use the fat jar, but publishing fat jars to Maven Central is apparently
bad practice and some people have asked for the library.
This includes some slight tweaks to the scala_docs rule to make it
capable of coping with the generated source file and a hack in the
release script to avoid it complaining about the scenario proto
library not being published to Maven even though it is included in the
transitive deps.
* Spin off TokenHolder into a new library
Avoids having weird dependencies between different packages and makes TokenHolder available on Maven.
* Fix auth-utils path
* language: suffix all dalf dependencies in a dar with the pkgid.
This makes sure that dalf dependencies are not accidentally overwritten
when two packages with equally named dalfs are imported.
* factor out parseUnitId
* Support authentication in DAML triggers
fixes #3259
CHANGELOG_BEGIN
- [DAML Triggers - Experimental] DAML triggers can now be run against
an authenticated ledger.
CHANGELOG_END
* Remove debug printf
* Windows is bad
We don't drop full support for generic templates from the compiler yet but
rather ban their use by adding a check to the preprocessor. We'll remove the
actual support as we go along with fixing the upgrading story.
CHANGELOG_BEGIN
- [DAML Compiler] Make the experimental feature "generic templates"
unavailable. The current implementation is at odds with other, more important
language features still under development.
CHANGELOG_END
* Add template id filtering to triggers
CHANGELOG_BEGIN
- [DAML Triggers - Experimental] DAML triggers now allow you to specify which templates you want to listen for, which can improve performance.
CHANGELOG_END
* Address review comments
* Fix list-triggers test
* Regression test for #3485
* Respect CommandId mapping for transactions
* triggers: Fix ExerciseByKey
Make choice nonconsuming, so that no second `T` is created later.
Only execute `dedupExerciseByKey` if `T_` doesn't exist yet.
* Update list-triggers test
* triggers: Empty commandId on foreign TransactionM
95ccc59b45 (r347287514)
Previously, we used SValue.fromValue for the conversion. However, this
breaks in cases like Numeric where the scale information is lost. By
using the ValueTranslator instead, we avoid this issue.
There is a similar problem in DAML script but I’ll fix that in a
separate PR.
Since the ValueTranslator is package private, this PR moves the
triggers in the engine package.
Previously, we ended up accumulating trace messages, which obviously is
not what you want. This PR changes this to replace the tracelog by a
new, empty TraceLog (if there actually was something to trace) and
thereby fixes this issue.
I don’t really want to start adding tests for logging output so for
now this doesn’t have a manual test but if this starts becoming a
problem again, we probably want to add a separate output source to the
trigger runner instead of going via the logger so we can test this easily.
* Add toAnyContractKey and fromAnyContractKey
This is necessary to add exerciseByKey to DAML triggers.
* Switch to proper ghc-lib release
* Remove unnecessary filter
* Bump timeout because macos is terrible
* bazel fmt
* Add client_server_build and integrity_test rules
And use them to implement ledger dump of the reference
server and to check it.
* Only build and test ledger dump on Linux. Only run tests relevant to dump.
* Make client_server_build quiet in happy path
* Reformat
* Remove unnecessary runfiles for client_server_build
* trigger.Converter avoid exceptions
Refactor `com.daml.trigger.Converter` to consistently use
`Either[String, _]` instead of throwing exceptions to handle conversion
errors.
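The style in question, on a stub value type (illustrative, not the real converter):

```scala
sealed trait Value
final case class VInt(n: Long) extends Value
final case class VPair(fst: Value, snd: Value) extends Value

def toInt(v: Value): Either[String, Long] = v match {
  case VInt(n) => Right(n)
  case other   => Left(s"expected VInt, got $other")
}

// Conversions compose in for-comprehensions; the first error
// short-circuits with a readable message instead of an exception.
def toIntPair(v: Value): Either[String, (Long, Long)] = v match {
  case VPair(a, b) => for { x <- toInt(a); y <- toInt(b) } yield (x, y)
  case other       => Left(s"expected VPair, got $other")
}
```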
* Throw ConverterError on conversion error
The only reason for having AbsoluteContractId was that we could get
some more instances, in particular `Ord` and `MapKey`, but given that
`ContractId` will be a valid key type for the new DAML-LF maps, we can
just use slower implementations for now and switch to Map-based
implementations once that has landed.
fixes #3336
For now, this uses a somewhat ad-hoc format with one trigger identifier
per line, but given that I expect it won’t be particularly common
to want to do this mechanically, this should be sufficient, and it’s
trivial to parse.
This is fairly easy to run into so it makes sense to warn about
this. Given that I expect the trigger library will be reasonably
stable in the near future, this is only a warning rather than an
error.
fixes #3244
* Update bazel-common to fix javadoc issues
Specifically, to fix the following error
```
ERROR: /home/aj/tweag.io/da/da-bazel-1.1/ledger-api/rs-grpc-bridge/BUILD.bazel:7:1: in javadoc_library rule //ledger-api/rs-grpc-bridge:rs-grpc-bridge_javadoc:
Traceback (most recent call last):
File "/home/aj/tweag.io/da/da-bazel-1.1/ledger-api/rs-grpc-bridge/BUILD.bazel", line 7
javadoc_library(name = 'rs-grpc-bridge_javadoc')
File "/home/aj/.cache/bazel/_bazel_aj/5f825ad28f8e070f999ba37395e46ee5/external/com_github_google_bazel_common/tools/javadoc/javadoc.bzl", line 27, in _javadoc_library
dep.java.transitive_deps
object of type 'JavaSkylarkApiProvider' has no field 'transitive_deps'
```
* Define Maven deps using rules_jvm_external
* Pin artifacts
* Remove bazel-deps generated targets
* Remove bazel-deps
* Switch to rules_jvm_external targets
* update bazel documentation
* pom_file: There are no more bazel-deps targets
* BAZEL-JVM.md `maven_install` typo
* Add TemplateTypeRep to AnyContractId
* Define Trigger.ContractId t
* Use Trigger.ContractId t
* Implement fromCreated and fromArchived
* instance MapKey TemplateTypeRep
* More efficient ACS access using Map TemplateTypeRep
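The gain comes from the nesting: a sketch of the layout with stub types (the real ACS stores contract payloads, not strings):

```scala
final case class TemplateTypeRep(id: String)
final case class AnyContractId(templateId: TemplateTypeRep, contractId: String)

final class ACS {
  // Outer map keyed by template type: lookups for one template no
  // longer scan contracts of unrelated templates.
  private var contracts: Map[TemplateTypeRep, Map[String, String]] = Map.empty

  def insert(cid: AnyContractId, payload: String): Unit =
    contracts += cid.templateId ->
      (contracts.getOrElse(cid.templateId, Map.empty) + (cid.contractId -> payload))

  def lookup(cid: AnyContractId): Option[String] =
    contracts.getOrElse(cid.templateId, Map.empty).get(cid.contractId)
}
```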
* ./fmt.sh
* toString and fromString for Identifier
* Replace Identifier by TemplateTypeRep
* TheContractId --> AbsoluteContractId
https://github.com/digital-asset/daml/pull/3245#discussion_r338033546
* Use the same CopyTrigger in the docs and the tests
That way, we actually test what we use in our docs, which seems like a
much better idea than duplicating the code.
* Add debugging output on failures
- We now display `trace` messages so they can be used for debugging.
- I’ve removed the log message from the low-level API since it is
confusing as it is not exposed via the high-level API and somewhat
redundant since you can use `trace` for debugging. If there is a
demand for proper logging support we might want to add it back at
some point.
- We now display errors from speedy properly. This is mainly important
for calls to `error` since we previously lost the error message.
- Logging messages go through a proper logging library now and the
internal debugging stuff is only displayed at debug level so not by default.
- We log failed completions at warning level to ease debugging.
Previously, it could happen that the command flow used in the test was
started before the trigger queried the ACS on startup.
Now we wait for the ACS to be queried before starting the command
flow, which fixes this issue.
* Add a first draft of documentation for DAML triggers
The API will still change in a bunch of ways but I’d rather get some
docs in place now and update them as we change things than not have
any docs at all.
* Fix path to daml.yaml
* s/bot/trigger/
* fix source code markers
* Fix tests
* Update docs/source/triggers/index.rst
Co-Authored-By: Andreas Herrmann <42969706+aherrmann-da@users.noreply.github.com>
* Generalize AnyTemplate type to Any in DAML-LF
See #3131 for the motivation for this. The tl;dr is that we need
something like AnyTemplate for choice types as well.
Since the protobuf was already more general in anticipation of such a
change, this change only changes the internal AST on the Haskell and
Scala side.
Since the AnyTemplate change never made it out of 1.dev, I updated the
changelog in the LF spec instead of adding a new entry.
* Update daml-lf/spec/daml-lf-1.rst
Co-Authored-By: Andreas Herrmann <42969706+aherrmann-da@users.noreply.github.com>
* windows debugging
* more windows debugging
* clean expunge
* don’t cat the config file
* remove comment on type equality
* windows …
* gnah
* foobar
* foobar
* does anything ever work?
* reenable caching
* Do not build daml-lf-ast separately
fixes #3128
Also includes a small bug fix to reenable some tests that I
accidentally disabled in #3127 and fixes deserialization of command
ids in completions, which I accidentally broke when making fromCommandId
return an Optional since that’s what we need for transactions.
This PR adds a first draft of a high-level API for DAML
triggers. There is definitely more work to be done and the design is
absolutely not final. However, it already allows expressing the copy
bot fairly cleanly, so I would like to merge this in its current
state (or at least without bikeshedding the design too much) and then
iterate upon it.
Having everything in a single file has gotten a bit unwieldy, so this
PR splits it up. There is no change in the actual code; this is just a reshuffling.
This makes the API a bit safer and nicer to use. Since this is a
low-level API, the constructors are exposed; for the high-level API we
probably want to hide them.