* Revert ANF changes and add a testcase for evaluation order
After careful consideration, we decided that the change in evaluation
order that was accidentally introduced by the ANF changes should be
considered a breaking change or arguably even a bug and should not
land in 1.3.0.
Therefore, this PR reverts the following commits:
1. 353d0da6f7
2. a45b51042f
3. 04c7b2af7f
4. a624dd7242
5. b3aab72cee
Other PRs mostly resulted in trivial merge conflicts that I resolved. The
two most interesting ones here are probably
1. https://github.com/digital-asset/daml/pull/6576 which was easy to
resolve; its change to return SEValue instead of SExpr is still
nice and useful even if we do not need the guarantees.
2. https://github.com/digital-asset/daml/pull/6542 which required
some changes since the constructors changed. If you want to review
those changes in detail (they are pretty straightforward, so not too
important), it’s probably easiest to check out this PR and run
```
git diff 2cd2a8f2a8 daml-lf/interpreter/src/main/scala/com/digitalasset/daml/lf/speedy/Compiler.scala
```
to see the diff to the parent commit of the first commit that
introduced ANF.
changelog_begin
changelog_end
* replace traverseU and sequenceU with traverse and sequence
- with -Ypartial-unification on, the extra Unapply typeclass lookup is
unnecessary
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* limit imports; we only need *> and void
If you have something like `http://localhost:8080` the port is handled
correctly. However, if you have `http://localhost/abc:8080` the port
will silently be ignored (`http://localhost:8080/abc` would be
correct). That is clearly wrong so this PR fixes it.
changelog_begin
- [DAML Script] Fix an issue where the `port` was ignored for
non-empty paths in the url when running DAML Script over the JSON API.
changelog_end
GHC has a weird restriction on version numbers which damlc inherits, so
we need to use `ghc_version` instead of `sdk_version`. That only makes
a difference for snapshot versions, where the `-snapshot.` part is
replaced by `.`.
changelog_begin
changelog_end
It makes no sense to keep this at 0.0.1.
changelog_begin
- [DAML Script] The DAML Script library now has the version of the
corresponding SDK.
- [DAML Trigger] The DAML Trigger library now has the version of the
corresponding SDK.
changelog_end
* ANF transformation in Speedy.
The idea behind this PR is to transform speedy expressions into a simpler form where all non-atomic sub-expressions are made explicit by the introduction of let-forms. In particular, for the function-application form. These simpler forms allow the execution engine to take advantage of the atomic assumption, and often removes many additional execution steps. In particular the pushing of continuations to allow execution to continue after a compound expression has been reduced to a value.
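To illustrate the idea, here is a toy sketch of such a transformation (hypothetical types, not the actual speedy AST or the real Anf module):
```
sealed trait Expr
final case class Var(name: String) extends Expr
final case class App(fun: Expr, arg: Expr) extends Expr
final case class Let(name: String, bound: Expr, body: Expr) extends Expr

object ToyAnf {
  private var counter = 0
  private def fresh(): String = { counter += 1; s"$$v$counter" }

  // Make every application atomic by let-binding compound sub-expressions.
  def toAnf(e: Expr): Expr = e match {
    case v: Var          => v
    case Let(n, b, body) => Let(n, toAnf(b), toAnf(body))
    case App(f, a)       => atomize(f)(fa => atomize(a)(aa => App(fa, aa)))
  }

  // Hand `k` an atomic stand-in for `e`, introducing a let if necessary.
  private def atomize(e: Expr)(k: Expr => Expr): Expr = e match {
    case v: Var => k(v)
    case other  => val x = fresh(); Let(x, toAnf(other), k(Var(x)))
  }
}
```
For example, `toAnf(App(App(Var("f"), Var("x")), App(Var("g"), Var("y"))))` yields `Let($v1, App(f, x), Let($v2, App(g, y), App($v1, $v2)))`, where the final application is purely atomic.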
changelog_begin
changelog_end
* improve comment
* inline functions relocateA/L
* remove comment about scalafmt
* remove commented out alternative def for transformLet1
* improve code by adding incr methods to DepthA/E
* remove (n == 0) special case in trackBindings
* clarify comment further
* improve validate/go to not consume stack for deeply right-nested let-expressions
* address comments from Remy: be private; use final case class; etc
* rename to unsafeCompilationPipeline
* add back some trailing commas
* remove commented-out debug line
* improve comment
* remove dev/debug code in compilationPipeline
* remove commented out code in SEAppGeneral.execute
* undo unrelated code improvement in SValue.scala
* fix compile. object Anf cannot be private
Apart from the current test being broken since it tested
`vals.get(0)` twice 🤦, we can now also test `is_local` properly
since the issue mentioned in the comment has been fixed in #6533.
changelog_begin
changelog_end
It is pretty easy to hit this, e.g., when your templates haven’t been
uploaded to the ledger. Just printing the response doesn’t actually
include the response body, which means that you don’t see the actual
error, so it’s pretty useless. This PR changes that by printing the
status code and the response body.
All the rest is just test setup to be able to submit a script with a
template that has not been uploaded to the ledger.
changelog_begin
changelog_end
* add -Xsource:2.13, -Ypartial-unification to common_scalacopts
* add now-referenced scalaz-core where needed
* work around bad type signatures in scalatest Aggregating, Containing
* unused Any suppression
* work around bad partial-unification wrought by type alias
* remove unused Conversions import
- not required in 4f68cfc480 either, so unsure how it's survived this long
* work around Future.traverse; remove unused show import
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* remove unused bounds
* remove -Ypartial-unification and -Xsource:2.13 where they were explicitly passed
* longer comment on what the options do
- suggested by @stefanobaghino-da; thanks
* forget Future.traverse, just use scalaz, it knows how to do this
* Support multiple auth tokens in DAML Script
This piggybacks on the already existing --participant-config
feature. While you can argue that it might be slightly confusing that
you have to specify the same participant twice to specify different
auth tokens, I think this actually makes sense: In an ideal
world (ignoring any performance issues) you have one participant per
party anyway and one connection per participant specified in the
config file still seems like a very reasonable model.
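For illustration, a participant config with per-connection tokens might look like this (a sketch from memory of the format, with the new ``access_token`` field; double-check the other field names against the docs):
```
{
  "default_participant": {"host": "localhost", "port": 6865, "access_token": "token-for-default"},
  "participants": {
    "alice_participant": {"host": "localhost", "port": 6865, "access_token": "token-for-alice"}
  },
  "party_participants": {"Alice": "alice_participant"}
}
```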
changelog_begin
- [DAML Script] You can now use DAML Script with multiple auth
tokens. This is particularly useful if you are working with the JSON
API where you can only have one party per token or with an IAM that
only provides single-party tokens. The tokens are specified in the
participant configuration passed via `--participant-config` in a new
``access_token`` field. The existing ``--access-token-file`` flag is still supported if you want to use the same token for all connections. See
https://docs.daml.com/daml-script/index.html#running-daml-script-against-authenticated-ledgers
for more details.
changelog_end
* I will never understand rst
changelog_begin
changelog_end
* First draft of constant lifting
changelog_begin
changelog_end
* refactoring
* doing stuff
* run simplifier on template exprs
* remove merge artifact
* Fix ExerciseWithoutActors
* add comments
* fix trace order test
* prefix generated val names with their provenance
* Verification tool bugfix
During the value collection phase, when encountering a record projection on a (yet) undefined value, stop searching this branch instead of throwing an error.
* Bump pattern-match-perf memory limit with cocreature’s blessing
* bump again
* Filter generated identifiers from daml script test runner
changelog_begin
changelog_end
* Fix party literals
* Remove inlineClosedExpr for now.
* Improve comments
* Reset script test locations
* Unhashmap
* disable daml-lf-verify quickstart tests for now
Co-authored-by: Gert-Jan Bottu <gertjan.bottu@kuleuven.be>
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
This integrates the time service into DAML script, thereby covering the
main piece of scenario functionality that was missing from DAML script.
This PR does two things (they are very related and doing them together
makes it much easier to test):
1. It “fixes” `getTime` to return the ledger time in static mode by
querying the ledger time service instead of defaulting to the Unix
epoch, which is pretty useless; I would consider the old behavior
a bug. We keep the old behavior via the JSON API since there is no
time service.
2. It adds `setTime` to set the ledger time via the time service. This
is only supported in static time mode: sandbox and other ledgers do
not expose the time service in wallclock mode (changing time would
make it not wallclock), and the JSON API has no time service either.
fixes #6220
changelog_begin
- [DAML Script] DAML Script’s `getTime` now correctly handles time
changes in static time mode and returns the current time by querying
the time service rather than defaulting to the Unix epoch. Note that
when run via the JSON API, it still returns the Unix epoch.
- [DAML Script] Add `setTime` to DAML Script which sets the ledger
time via the ledger API time service. Note that this is only
supported when running over gRPC in static time mode.
changelog_end
In this PR we clean up the constructor for the speedy Machine.
* We remove the `case` keyword since `Machine` is a stateful class.
* We replace the pre-existing builders with
+ one generic builder `Machine.apply`,
+ a scenario-specific builder.
CHANGELOG_BEGIN
CHANGELOG_END
* Add listKnownParties to DAML Script
This is particularly useful if you want easy access to a party that
has already been allocated, since `partyFromText` is bad.
For now this is not supported in the JSON API. It should be possible
to add it, but I consider it fairly low priority, so I’m omitting it for now.
changelog_begin
- [DAML Script] Add ``listKnownParties`` and ``listKnownPartiesOn`` to
query the corresponding ListKnownParties endpoint in the party
management service.
changelog_end
* Fix `daml script-test` tests by running them sequentially.
changelog_begin
changelog_end
* Add option based constructor for LedgerIdRequirement
changelog_begin
changelog_end
* Make option based constructor the default, deprecate old constructor
* Update with review comments
Choices for `stacktracing` are `NoStackTrace` / `FullStackTrace`.
Adapt code so the selection is made by the original caller:
- `engine`
- `scenario-service`
- `repl-service`
- `daml-script` runner
etc
Currently, all callers pass `FullStackTrace` (the existing behaviour), except for the
exploration dev-code: `daml-lf/interpreter/perf/src/main/scala/com/daml/lf/explore`.
The idea is that once this control is in place, we can discuss how we
might expose it to the user, and/or perhaps change the default behaviour
to have `stacktracing` disabled.
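A minimal sketch of the shape of this control (the two mode names are real, the surrounding plumbing here is hypothetical):
```
sealed abstract class StackTraceMode
case object NoStackTrace extends StackTraceMode
case object FullStackTrace extends StackTraceMode

final case class CompilerConfig(stackTraceMode: StackTraceMode)

// Each caller constructs its own config; currently everything passes
// FullStackTrace except the explore dev-code.
object Callers {
  val engine: CompilerConfig = CompilerConfig(FullStackTrace)
  val explore: CompilerConfig = CompilerConfig(NoStackTrace)
}
```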
changelog_begin
changelog_end
* Speedy Tail call optimization
The goal of this PR is to achieve Tail call optimization #5767
Tail call optimization means that tail-calls execute without consuming resources. In particular, they must not consume stack space.
Speedy has two stacks: The `env`-stack and the `kontStack`. For an optimized tail call in Speedy, we must not extend either. In Speedy, all function calls are executed via the code `enterFullyAppliedFunction`. The behaviour of this code (prior to this PR) is as follows:
(1) Push the values of all args and free-variables on the env-stack (because that's where the code expects to find them), and
(2) Push a KPop continuation on the kontStack, which will restore the env-stack to its original size before returning to the code which made the function call.
We must stop doing both these things. We achieve this as follows:
(1) We address function args and free-vars via a new machine component: the current `frame`.
(2) We make continuations responsible for restoring their own environment.
As well as achieving proper tail calls, we also gain a performance improvement by (a) removing the many pushes to the env-stack, and (b) never having to push (and then later re-enter) a KPop continuation. The args array and the free-vars array already existed, so there is no additional cost associated with constructing these arrays. The only extra costs (which are smaller than the gains) are that we must manage the new `frame` component of the machine, and we must record frame/env-size information in continuations so they can restore their environment.
To make use of the frame, we need to identify (at compile time) the run-time location for every variable in a speedy expression. This is done during the `closureConvert` phase. At run-time, an environment is now composed of both the existing env-stack and the frame. The only values which now live on the env-stack are those introduced by let-bindings and pattern-match-destructuring. All other are found in the frame.
Changes to SEExpr:
- Introduce a new expression form `SELoc`, with 3 sub classes: SELocS/SELocA/SELocF to represent the run-time location of a variable.
- SELocS/A/F execute by calling corresponding lookup function in Speedy: getEnv(Stack,Arg,Free).
- SEMakeClo takes a list of SELoc instead of list of int.
- During closure conversion all SEVar are replaced by an SELocS/A/F.
- SEVar are not allowed to exist at run-time (just as SEAbs may not exist).
- We adapt the synthesised code for SEBuiltinRecursiveDefinition: FoldL, FoldR, EqualList
It is worth noting the prior code also had the notion of before/after closureConvert, but SEVar was used for both meanings: Prior to closureConvert it meant the relative-index of the enclosing binder (lambda, let, ...). After closureConvert it meant the relative-offset from the top of the env-stack where the value would be found at run-time. These are not quite the same! Now that we have different sub-types (SEVar vs SELoc), this change of mode is made more explicit.
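For reference, the three location forms have roughly this shape (simplified sketch; the real definitions differ in detail):
```
sealed abstract class SELoc
final case class SELocS(n: Int) extends SELoc // n-th position from the top of the env-stack
final case class SELocA(n: Int) extends SELoc // n-th arg of the current application
final case class SELocF(n: Int) extends SELoc // n-th free variable of the current closure
```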
Run-time changes:
- Use the existing `KFun` continuation as the new `Frame` component.
- `KFun` allows access to both the args of the current application, and the free-vars of the current closure.
- A variable is looked up by its run-time location (SELocS/A/F)
- A function application is executed (`enterFullyAppliedFunction`), by setting the machine's `frame` component to the new current `KFun`.
- When a continuation (KArg, KMatch, KPushTo, KCatch) is pushed, we record the current Frame and current stack depth within the continuation, so when it is entered, it can call `restoreEnv` to restore the environment to the state when the continuation was pushed.
Changes to Compiler:
- The required changes are to the `closureConvert` and `validate`.
- `closureConvert` `remaps` is now a `Map` from the `SEVar`’s relative-index to `SELoc`
- `validate` now tracks 3 ints (maxS, maxA, maxF)
changelog_begin
changelog_end
* changes for Remy
* Changes for Martin
* test designed explicitly to blow if the free variables are captured incorrectly
* address more comments
* improve comment about shift in Compiler
* Implement a simple profiler for DAML scenarios
The profiler runs a single scenario and records timing information when
each function (and some other closures) are entered and left. The
resulting information can be visualized as a flamegraph using
[speedscope](https://www.speedscope.app/).
The profiler works by instrumenting the CEK machine at the heart of
DAML Engine. Unfortunately, this causes a very small overhead on
non-profiling runs too. However, in my benchmarks I could not measure
any significant impact on the overall runtime at all. More precisely,
the overhead is as follows:
Every closure now has an additional field called `label`. In
non-profiling runs this field is always set to `null`. This field needs
to be allocated, copied whenever we copy a closure and scanned during
garbage collection. Additionally, whenever we enter a closure, we check
this field and whenever it is _not_ `null`, i.e. never during
non-profiling runs, we record an "open event" and set up a hook for the
corresponding "close event". Thus, the additional cost during
non-profiling runs are a single pointer comparison and a jump beyond
the "then branch".
Since this is still very much in active development, there is no
documentation, other than an entry in a README, and there are no tests yet. They
will come before we promote this. However, the UX will look very
different then since we already have plans to significantly change it.
CHANGELOG_BEGIN
CHANGELOG_END
* Run scalafmt
* Make profiling argument to PureCompiledPackages optional
* Fix a bunch of tests
CHANGELOG_BEGIN
CHANGELOG_END
* scalafmt is so annoying
CHANGELOG_BEGIN
CHANGELOG_END
* Apply simple suggestions
CHANGELOG_BEGIN
CHANGELOG_END
Previously, we used the stack to recurse when filling in the command
results, which obviously breaks once you have large multi-command
submissions.
You probably want to view the diff with whitespace disabled.
changelog_begin
- [DAML Script] Fix a bug where large multi-command transactions
produced a stack overflow.
changelog_end
Fixes #5592
The CLI syntax and the defaults follow the JSON API here.
changelog_begin
- [DAML Script] The maximum inbound message size can now be configured
using ``--max-inbound-message-size``. This matches the flag in the JSON
API.
- [DAML REPL] The maximum inbound message size can now be configured
using ``--max-inbound-message-size``. This matches the flag in the JSON API.
changelog_end
* Simplify and clarify the public interface to Speedy.
- Remove `isFinal`. A client just uses `run()`.
- Remove `toSValue`. The value is available in `SResultFinalValue(v: SValue)`.
- A client never directly accesses the `.ctrl` (or `.returnValue`) components.
- A client may use `setExpressionToEvaluate(expr)` to evaluate a new expression on an existing machine.
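The resulting client pattern looks roughly like this (sketch only; `machine`, `expr`, `handleValue` and `handleOther` are placeholders, the method and result names come from the list above):
```
machine.setExpressionToEvaluate(expr)
machine.run() match {
  case SResultFinalValue(v) => handleValue(v)     // no isFinal/toSValue needed
  case other                => handleOther(other) // requests, errors, ...
}
```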
changelog_begin
changelog_end
* remove while loop which executes just once
* avoid unnecessary mutation when running speedy
Remove the `Ctrl` trait and separate `Machine.ctrl: Ctrl` into `Machine.ctrl: SExpr` and `Machine.returnValue: SValue` instead. This allows for avoiding dynamic dispatch on `ctrl` and instead allows for checking a pointer for `null` to decide if we have an expression that needs further break-down or a return value ready to be passed to the next continuation.
To make this check really only a pointer comparison we also needed to remove the abomination of "fully applied partially applied primitives". In order to achieve this, we check whether a PAP will be fully applied afterward when applying the last argument.
On the `collect-authority` benchmark, this increases throughput by around 13%, and on another, more computation-heavy benchmark by about 21%.
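A toy model of the split (hypothetical shapes, purely to illustrate the null check that replaces dynamic dispatch on `Ctrl`):
```
final class ToyMachine(var ctrl: AnyRef, var returnValue: AnyRef) {
  def step(): Unit =
    if (returnValue != null) {  // a value is ready: hand it to the next continuation
      val v = returnValue
      returnValue = null
      println(s"feed $v to the next continuation")
    } else {
      // break the expression in `ctrl` down further; this toy pretends
      // it always reduces to a value in one step
      returnValue = s"value of $ctrl"
    }
}
```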
`collect-authority` benchmark on `master`:
```
Result "com.daml.lf.speedy.perf.CollectAuthority.bench":
112.361 ±(99.9%) 1.965 ms/op [Average]
(min, avg, max) = (107.047, 112.361, 120.745), stdev = 3.493
CI (99.9%): [110.396, 114.326] (assumes normal distribution)
```
`collect-authority` benchmark on this branch:
```
Result "com.daml.lf.speedy.perf.CollectAuthority.bench":
98.196 ±(99.9%) 1.933 ms/op [Average]
(min, avg, max) = (91.580, 98.196, 105.478), stdev = 3.436
CI (99.9%): [96.263, 100.129] (assumes normal distribution)
```
computation-heavy benchmark on `master`:
```
Result "com.daml.lf.speedy.perf.CollectAuthority.bench":
44.030 ±(99.9%) 0.742 ms/op [Average]
(min, avg, max) = (42.124, 44.030, 46.781), stdev = 1.319
CI (99.9%): [43.289, 44.772] (assumes normal distribution)
```
computation-heavy benchmark on this branch:
```
Result "com.daml.lf.speedy.perf.CollectAuthority.bench":
36.222 ±(99.9%) 0.580 ms/op [Average]
(min, avg, max) = (34.897, 36.222, 39.787), stdev = 1.031
CI (99.9%): [35.643, 36.802] (assumes normal distribution)
```
changelog_begin
changelog_end
* DAML-SCRIPT: cleanup to prepare #5811
* a bit more.
CHANGELOG_BEGIN
CHANGELOG_END
* Address Moritz's review
* Update daml-script/runner/src/main/scala/com/digitalasset/daml/lf/engine/script/Runner.scala
Co-authored-by: Martin Huschenbett <martin.huschenbett@posteo.me>
Co-authored-by: Martin Huschenbett <martin.huschenbett@posteo.me>
* add GenMap to the "all types" test generators
* report bad GenMap format with DeserializationError, not MatchError
* document GenMap JSON
* notes on missing features
* enable -Xsource:2.13 in transaction
* make an Order instance for Value resolvable, but unimplemented
* use the skeleton from SValue ordering to make a Value ordering skeleton
* add Party Order
* add Order instance for SortedLookupList
* add Order for FrontStack, deriving everything
* factor the Order lookup, and tie a knot in the recursive Value instances
* we're going to need this Iterator thing again
* replacing Order#contramap with version that supports equalIsNatural
* use new equalBy, orderBy for FrontStack, SortedLookupList, ImmArray
* _2 comparator, upgrade Name Equal to an Order
* incorporate lookup for enums, variants into Value order; record/struct cases
* Enum/Variant comparison
* looking up the singleton implicitly won't work for non-`object`s, alas
* test Order laws for values of all primitive types
* test Order laws for record and variant types
* test Order laws for enum types
* test that enum strings are not compared
* use checkLaws for Value Equal as well
* test that enums match order to constructor rank
* factor genAddend and genAddendNoListMap
* reintroduce Order for TypedValueGenerators
* more addend order
* record, variant order cases
* record cons order
* deriving Order while decoding from JSON
* make ApiCodecCompressed's Cid codec based on the typeclass
* test how the Value ordering and the underlying projected value orderings line up
- hint: they don't, yet
- this is also a template for how we'll check the fidelity with SValue
ordering
* test how the Value ordering and SValue ordering line up
- hint: they don't, yet
* typed Arbitrarys have access to Order
* produce proper ValueGenMap
* inj requires Order, sometimes
- we encode this as "all the time" but there is a type-level unification
approach to remove this requirement in some cases
* make inj a function
* test that order doesn't matter for JSON decoder
* use Utf8 order for TVG text; don't pretend that base equal works
* sort JSON GenMaps, and check for duplicates
* make injarb use IntroCtx
* remove stray import
* Order instances for Bytes, Hash, AbsoluteContractId
* require Order[Cid] to decode JSON to LF values
* clean up map reordering test
* remove unused Instant instance
* fake Order instance no longer needed, valid instance defined
* test parity of global AbsoluteContractId order and SContractId order
* bazel fmt
* test AbsoluteContractId Order lawfulness
* test duplicate key detection
CHANGELOG_BEGIN
- [JSON API] Prepare full support for the planned GenMap primitive type.
See `issue #5031 <https://github.com/digital-asset/daml/issues/5031>`_.
CHANGELOG_END
Speedy: run() don't step()
- Running the Speedy machine with `run()` instead of `step()`
- Remove: `SResultContinue`
- Add: `SResultFinalValue(_)`
We change the top level control of Speedy: from machine.step() to machine.run(), with the control of stepping while the machine returns SResultContinue moved into speedy itself. (And so SResultContinue is removed in favour of SResultFinalValue.) The main advantage of this approach is that the tight while loop can be moved inside the exception handler, rather than having to wrap the handler around every step.
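A toy illustration of the control change (hypothetical shapes): the tight loop sits inside a single handler instead of the handler being re-wrapped around every step:
```
final class ToySpeedy(var stepsLeft: Int) {
  var result: AnyRef = null
  private def step(): Unit = {      // one small transition; may throw
    stepsLeft -= 1
    if (stepsLeft <= 0) result = "final value"
  }
  def run(): AnyRef =
    try {
      while (result == null) step() // the tight loop, inside the handler
      result
    } catch {
      case e: RuntimeException => s"error: ${e.getMessage}"
    }
}
```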
changelog_begin
changelog_end
* new --leak-passwords-firesheep-style option; functions to check forwarded protocol
* enforce https reverse-proxy in all JWT-accepting endpoints
* make HttpService.start take config record
* test that X-Forwarded-Proto or Forwarded is enforced
* use new start signature in daml-script tests
* use insecure http mode for ts codegen tests
* note on regex
* use insecure option in daml assistant integration tests
* log allowNonHttps setting
* add non-https option to more places in daml-assistant tests
* add non-https option to getting started guide
* rename --leak-passwords-firesheep-style to --allow-insecure-tokens
- per suggestion by @garyverhaegen-da, @hurryabit
CHANGELOG_BEGIN
- [JSON API] By default, checks that connections are made through a reverse-proxy
providing HTTPS, ensuring that JWT tokens don't leak. To disable this check,
such as for development, pass ``--allow-insecure-tokens``.
See `issue #5572 <https://github.com/digital-asset/daml/issues/5572>`_.
CHANGELOG_END
* daml start includes --allow-insecure-tokens by default
- as indicated by @cocreature
* Make DAML Triggers and DAML Script default to wall-clock-time
Now that sandbox defaults to wall-clock-time, there is no reason why we
should not do the same in DAML Triggers and DAML Script.
changelog_begin
- [DAML Triggers] ``daml trigger`` now defaults to wall clock time if
neither ``--wall-clock-time`` nor ``--static-time`` is passed.
- [DAML Script] ``daml script`` now defaults to wall clock time if
neither ``--wall-clock-time`` nor ``--static-time`` is passed.
changelog_end
* Make --static-time and --wall-clock-time exclusive
This PR adds an --output-file option to DAML Script that writes the
result of a DAML Script to a file and complements the --input-file option.
changelog_begin
- [DAML Script] ``daml script`` now has a ``--output-file`` option that
can be used to specify a file the result of the script should be
written to. Similar to ``--input-file`` the result will be output in
the DAML-LF JSON encoding.
changelog_end
* factor TlsConfiguration parser from extractor
* move TlsConfigurationParser to new library
* link extractor to ledger-service/cli-opts properly
* use TlsConfigurationCli in http-json, pass SslContext to ledger-client
* test TLS options as used in http-json
- the TLS config code is shared with extractor, where it is more fully
tested; we just do a sanity check here
* doc TLS options for http-json
CHANGELOG_BEGIN
- [JSON API] New ``--pem``, ``--crt``, ``--cacrt``, and ``--tls`` options
for securing the connection between JSON API server and ledger.
See `issue #2540 <https://github.com/digital-asset/daml/issues/2540>`__.
CHANGELOG_END
* TLS off in daml-script JSON API test
* Improve error messages in daml repl on calls to `error`
There were two issues with calls to `error`:
1. This one is harmless but somewhat annoying: When calling `error` we
run into the log statement in `stepToValue` which prints out the
error message in a fairly reasonable form (you can argue whether
Error: User abort: is a super useful prefix but that’s a relatively
minor issue). Afterwards we then call `println` on the failed
future. However, that will just print the type of the exception
which isn’t something we want to show to users. I’ve just disabled
the println statement if the exception is `SError`.
2. This one is a bigger issue: `throw x` is not the same as
`Future.failed(x)`. I only fully realized the difference fairly
recently. The former fails before it produces a future. So `(throw
x).onComplete(…)` will never execute the callback. The latter is
just a failed future. It is rather confusing to have a function
that returns a future but then throws an exception instead of a
future and it confuses the grpc library which prints out a horrible
exception. I’ve changed all calls to `throw` in `runWithClients` to
instead use `Future.failed` and `flatMap` (in the form of
for-comprehensions).
There are still a few calls in `run` left which I’ll leave for a
separate PR.
I think we need to factor out some helper functions here to make this
a bit more manageable (e.g. for the `Converter.toFuture` stuff) but I’ll
leave that for a separate PR. You probably want to view this with
whitespace diffs disabled.
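To see the difference described in (2) in isolation:
```
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object ThrowVsFailed extends App {
  // Throws before any Future exists, so no callback can ever observe it.
  def bad(): Future[Int] = throw new RuntimeException("boom")
  // A perfectly ordinary failed Future.
  def good(): Future[Int] = Future.failed(new RuntimeException("boom"))

  good().onComplete(println) // prints Failure(java.lang.RuntimeException: boom)
  // bad().onComplete(println) // would throw right here; the callback never runs
  Thread.sleep(100)          // give the callback a chance to fire before exit
}
```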
changelog_begin
- [DAML Repl] DAML Repl now produces better error messages on calls to
`error` and `abort`.
changelog_end
* Switch stepToValue to return Either
* Adding `--port-file` support
* ``--port-file`` support
* Updating docs
changelog_begin
[JSON API] Add support for ``--port-file`` command line option.
``--http-port 0 --port-file ./json-api.port`` will pick up a free port
and write it into the ``./json-api.port`` file.
changelog_end
* reformatting
* Usage grammar
* use bimap
* Adding `PortFiles` utility for creating and deleting port files on JVM exit
* Adding scaladoc explaining that the port file should be deleted on
JVM termination.
* Updating usage and docs to reflect that the file must be unique and
will be deleted on graceful shutdown
* Relying on `java.nio.file.FileAlreadyExistsException` to determine the
case when creation failed due to a nonunique file name.
* toString instead of Exception.getMessage
java.nio exception's getMessage can be just a file name; we need the class
name to capture the error context.
* updatePortFile -> createPortFile
* write to file instead of write into file
* Set the `Bearer ` prefix in bindings.
* Make the `Bearer ` prefix in the authorization header mandatory.
* Bearer prefix can be removed from the token file.
CHANGELOG_BEGIN
[Extractor] The ``Bearer `` prefix can be removed from the token file.
It is added automatically.
[Navigator] The ``Bearer `` prefix can be removed from the token file.
It is added automatically.
[DAML Script] The ``Bearer `` prefix can be removed from the token file. It
is added automatically.
[DAML Repl] The ``Bearer `` prefix can be removed from the token file. It is
added automatically.
[Scala Bindings] The ``Bearer `` prefix can be removed from the token. It is
added automatically.
[Java Bindings] The ``Bearer `` prefix can be removed from the token. It is
added automatically.
[DAML Integration Kit] ``AuthService`` implementations MUST read the
``Authorization`` header and the value of the header MUST start with
``Bearer ``.
CHANGELOG_END
Packages com.digitalasset.daml and com.daml have been unified under com.daml
Ledger API and DAML-LF DEV protos have also been moved from `com/digitalasset`
to `com/daml` on the file system.
Protos for already released DAML LF versions (1.6, 1.7, 1.8) stay in the
package `com.digitalasset`.
CHANGELOG_BEGIN
[SDK] All Java and Scala packages starting with
``com.digitalasset.daml`` and ``com.digitalasset`` are now consolidated
under ``com.daml``. Simply changing imports should be enough to
migrate your code.
CHANGELOG_END
This adds a validation step when running DAML script over the JSON API
to ensure that the party in the token matches the party that is passed
as an argument to `submit/query`.
changelog_begin
changelog_end
It doesn’t really make sense to catch errors like PERMISSION_DENIED,
and doing so only makes the error message more confusing and debugging
harder.
changelog_begin
changelog_end
This replaces the rather horrible previous setup of having a custom
test runner that spawns 3 separate JVM processes with a single scalatest
test suite that starts sandbox and the JSON API in process.
changelog_begin
changelog_end
* Support running DAML script over the JSON API
This is still in a somewhat messy state and some things don’t
work (documented in a comment) so I deliberately didn’t add this to
the documentation. However, there are tests and the PR is already
pretty large so I’d like to move the rest to separate PRs to not turn
this into more of a review nightmare than it already is.
changelog_begin
changelog_end
* Address review comments
* Depend on LF version specific daml-libs
* daml-script.dar build multiple LF versions
CHANGELOG_BEGIN
[DAML Script] The `daml-script` library is now available in multiple LF
versions, namely 1.7, 1.8, and 1.dev.
CHANGELOG_END
* daml-trigger.dar build multiple LF versions
[DAML Triggers] The `daml-trigger` library is now available in multiple
LF versions, namely 1.7, 1.8, and 1.dev.
* Keep daml-script.dar available for tests
* Keep daml-trigger.dar available for tests
* daml-libs LF versions integration test
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
Contributes to #4194.
Closes #4231.
Closes #5022.
CHANGELOG_BEGIN
- [Ledger API] The protobuf fields ledger_effective_time and maximum_record_time have been removed from
command submission. These fields were previously deprecated following the introduction
of a new ledger time model. See issue `#4194 <https://github.com/digital-asset/daml/issues/4194>`__.
- [Java Bindings] Removed the usage of ledgerEffectiveTime and
maximumRecordTime, and instead added minLedgerTimeAbsolute and
minLedgerTimeRelative in CommandSubmissionClient and CommandClient.
CHANGELOG_END
This PR adds a `ScriptLedgerClient` trait (happy to change the name
if anyone has a better proposal) that abstracts over the interaction
with the ledger. This will allow us to plug in a different
implementation for interacting with the JSON API so we can run DAML
scripts against DABL or other environments where gRPC is not a
workable option. Note that this PR does not yet add the implementation
for interacting with the JSON API. I’ll leave that for a separate PR.
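Roughly, the trait abstracts operations along these lines (a sketch; the real signatures and member types differ):
```
import scala.concurrent.Future

trait ScriptLedgerClientSketch {
  type Party; type TemplateId; type Command; type Contract; type Result
  def query(party: Party, templateId: TemplateId): Future[Seq[Contract]]
  def submit(party: Party, commands: List[Command]): Future[Seq[Result]]
}
```
One implementation speaks gRPC to the ledger API; a JSON API backed one can later be slotted in without touching the script interpreter.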
changelog_begin
changelog_end
* Tighten result type
Command execution can't result in a sequencer error
* New helper method for extracting used contracts
* New error clause
* Add a DAO query for the maximum time of contracts
* Implement algorithm for finding ledger time
CHANGELOG_BEGIN
CHANGELOG_END
* fixup ledgerTimeHelper
* Use new ledger time algorithm
* Mark LET/MRT as deprecated
CHANGELOG_BEGIN
- [Ledger API] DAML ledgers have switched to a new ledger time model.
The ledger_effective_time and maximum_record_time fields of command submission are deprecated,
the ledger time of transactions is instead set automatically by the ledger API server.
Ledger time is no longer strictly monotonically increasing, but only follows causal monotonicity:
ledger time of transactions is greater than or equal to the ledger time of any used contract.
See `#4345 <https://github.com/digital-asset/daml/issues/4345>`__.
CHANGELOG_END
* Add ledger time skew check
* Remove command updater
LET/MRT are now deprecated, this class is now useless
* Remove old time model validator
* Switch to new time model check: kvutils
* Switch to new time model check: in-memory ledger
* Switch to new time model check: SqlLedger
* Use initial ledger config
* Ignore user provided LET
* Use TimeProvider in submission services
* Use deduplication_time in daml-script runner
- Also remove unnecessary command completion output of CommandTracker.
- Remove usage of maximum record time in CommandTracker.
* Use arbitrary default value for deduplication time
* Use built-in Instant ordering
* Remove obsolete test
* Remove obsolete test: CommandStaticTimeIT
* Refactor test: TransactionMRTCompliance
* Disable test: CommandTrackerFlow timeout
* thread maxDeduplicationTime through to CommandTracker
* Improve test
* Refactor command client configuration
* Deduplication time should always use UTC
* Add missing method in TimedIndexService after rebase
* Put more details into the deduplication error response.
* Use system time for command dedup submittedAt.
* Use explicit UTC time source in command validator
* Revert CommandTracker[Flow] to previous completion-recovering-behavior
* Adapt scala client command config to new config params
Co-authored-by: Gerolf Seitz <gerolf.seitz@digitalasset.com>
* Support partial patterns in DAML repl
This PR improves the support for partial patterns in DAML repl by
making sure that they fail on the line itself rather than on some
subsequent line, and by avoiding the partial-pattern-match warnings on
all following lines.
changelog_begin
changelog_end
* Fix tests
* Factor out common identifier generation
For `DA.Types`, `DA.Internal.Any`, and `Daml.Script`.
* Factor out Script type for DAML scripts
* Adapt DAML script test runners
* Adapt REPL
CHANGELOG_BEGIN
CHANGELOG_END
* ./fmt.sh
* Avoid `unapply`
addressing
https://github.com/digital-asset/daml/pull/5076#discussion_r394526881
* Pure Script.fromIdentifier
* Pure Script.fromDar
* Simplify test script discovery
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
* sandbox: Fail to start if a time mode is not explicitly specified.
CHANGELOG_BEGIN
- [Sandbox] Sandbox is switching from Static Time mode to Wall Clock
Time mode as the default. To ensure that our users know about this,
for one version, there will be no default time mode. Instead, users
will have to explicitly select their preferred time mode by means of
the `--static-time` or `--wall-clock-time` switches. In the next
release, Wall Clock Time will become the default, and users who are
happy with the defaults will no longer need to specify the time mode.
CHANGELOG_END
* daml-script|triggers: Specify time mode when testing against Sandbox.
* daml-assistant: Default the Sandbox to wall clock time.
CHANGELOG_BEGIN
- [DAML Assistant] Initializing a new DAML project adds a switch to
``daml.yaml`` to ensure Sandbox can continue to start with ``daml
start``::
sandbox-options:
- --wall-clock-time
CHANGELOG_END
* docs: Update the DAML Script and Triggers docs to use Wall Clock time.
It's now what Sandbox will use by default when using `daml init`.
* docs: Change the Quickstart to run Sandbox in wall clock time.
This explains why the contract IDs may vary.
It also updates the manual release testing script to match.
* Support authentication and TLS in DAML repl
changelog_begin
- [DAML Repl - Experimental] You can now connect to a ledger via TLS
by passing ``--tls`` to ``daml repl``
- [DAML Repl - Experimental] You can now connect to a ledger with
authentication by passing the token via ``--access-token-file`` to
``daml repl``.
changelog_end
* try to fix linking on windows
* windows is weird
* gnah
* Share test certificates
This is primarily an attempt at making sure my contribution stats
remain negative, but I think it’s a nice cleanup. The only difference
in the certs used by daml-helper, which are now used everywhere, is that
they use a different CN for the CA and the server. This is required to
make openssl (which is used by the daml-helper) happy.
changelog_begin
changelog_end
* Fix script and trigger tests
This adds CLI parameters for connecting via TLS following the scheme
used by navigator, extractor and `daml ledger`.
changelog_begin
- [DAML Script] Support TLS. Enable it by passing ``--tls``. You can
set certificates for client authentication via ``--pem`` and
``--crt`` and a custom root CA for validating the server certificate
via ``--cacrt``.
- [DAML Triggers - Experimental] Support TLS. Enable it by passing ``--tls``. You can
set certificates for client authentication via ``--pem`` and
``--crt`` and a custom root CA for validating the server certificate
via ``--cacrt``.
changelog_end
The logic for detecting these needs to be improved but for now this at
least gives a useful error message instead of some internal stacktrace.
changelog_begin
changelog_end
* Wrap Script in StateT to make evaluation order a bit less important
This PR wraps the Script newtype in `StateT`, which means that
evaluation won’t do much, so `debug` behaves a bit more sensibly and
you don’t end up evaluating a script that only consists of `pure` and
`>>=` unless you actually execute it.
fixes #4821
changelog_begin
- [DAML Script] Fix an issue where ``debug`` messages were output
before the script was executed.
changelog_end
* Inline StateT and improve error messages
This introduces a `HasSubmit` typeclass (following the naming scheme
of `HasCreate`, …) and instances for `Scenario` and `Script`. This
avoids the need to hide `submit` in every single DAML script.
changelog_begin
- [DAML Standard Library] ``submit`` and ``submitMustFail`` are now
overloaded so that they can be used in both scenarios and DAML script.
changelog_end
* libs-scala/ports: Wrap socket ports in a type, `Port`.
* sandbox: Use `Port` for the API server port, and propagate.
CHANGELOG_BEGIN
CHANGELOG_END
* extractor: Use `Port` for the server port.
* ports: Make Port a compile-time class only.
* ports: Allow port 0; it can be specified by a user.
* ports: Publish to Maven Central.
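Taken together, the bullets above suggest a wrapper of roughly this shape (assumed sketch, not the actual source):
```
// Compile-time-only wrapper: no runtime allocation thanks to AnyVal.
final class Port private (val value: Int) extends AnyVal

object Port {
  def apply(value: Int): Port = {
    // 0 is allowed: it asks the OS to pick a free port.
    require(value >= 0 && value <= 65535, s"invalid port: $value")
    new Port(value)
  }
}
```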
This removes the sample/reference implementation of kvutils
InMemoryKVParticipantState.
This used to be the only implementation of kvutils, but now with the
simplified kvutils api we have ledger-on-memory and ledger-on-sql.
InMemoryKVParticipantState was also used for the ledger dump utility,
which now uses ledger-on-memory.
* Runner now supports a multi-participant configuration
This change removes the "extra participants" config and goes for consistent
participant setup with --participant.
* Run all conformance tests in the repository in verbose mode.
This means we'll print stack traces on error, which should make it
easier to figure out what's going on with flaky tests on CI.
This doesn't change the default for other users of the
ledger-api-test-tool; we just add the flag for:
- ledger-api-test-tool-on-canton
- ledger-on-memory
- ledger-on-sql
- sandbox
Fixes#4225.
CHANGELOG_BEGIN
CHANGELOG_END
Context
=======
After multiple discussions about our current release schedule and
process, we've come to the conclusion that we need to be able to make a
distinction between technical snapshots and marketing releases. In other
words, we need to be able to create a bundle for early adopters to test
without making it an officially-supported version, and without
necessarily implying everyone should go through the trouble of
upgrading. The underlying goal is to have less frequent but more stable
"official" releases.
This PR is a proposal for a new release process designed under the
following constraints:
- Reuse as much as possible of the existing infrastructure, to minimize
effort but also chances of disruptions.
- Have the ability to create "snapshot"/"nightly"/... releases that are
not meant for general public consumption, but can still be used by savvy
users without jumping through too many extra hoops (ideally just
swapping in a slightly-weirder version string).
- Have the ability to promote an existing snapshot release to "official"
release status, with as few changes as possible in-between, so we can be
confident that the official release is what we tested as a prerelease.
- Have as much of the release pipeline shared between the two types of
releases, to avoid discovering non-transient problems while trying to
promote a snapshot to an official release.
- Triggering a release should still be done through a PR, so we can
keep the same approval process for SOC2 auditability.
The gist of this proposal is to replace the current `VERSION` file with
a `LATEST` file, which would have the following format:
```
ef5d32b7438e481de0235c5538aedab419682388 0.13.53-alpha.20200214.3025.ef5d32b7
```
This file would be maintained with a script to reduce manual labor in
producing the version string. Other than that, the process will be
largely the same, with releases triggered by changes to this `LATEST`
and the release notes files.
Version numbers
===============
Because one of the goals is to reduce the velocity of our published
version numbers, we need a different version scheme for our snapshot
releases. Fortunately, most version schemes have some support for that;
unfortunately, the SDK sits at the intersection of three different
version schemes that have made incompatible choices. Without going into
too much detail:
- Semantic versioning (which we chose as the version format for the SDK
version number) allows for "prerelease" version numbers as well as
"metadata"; an example of a complete version string would be
`1.2.3-nightly.201+server12.43`. The "main" part of the version string
always has to have 3 numbers separated by dots; the "prerelease"
(after the `-` but before the `+`) and the "metadata" (after the `+`)
parts are optional and, if present, must consist of one or more segments
separated by dots, where a segment can be either a number or an
alphanumeric string. In terms of ordering, metadata is irrelevant and
any version with a prerelease string is before the corresponding "main"
version string alone. Amongst prereleases, segments are compared in
order with purely numeric ones compared as numbers and mixed ones
compared lexicographically. So 1.2.3 is more recent than 1.2.3-1,
which is itself less recent than 1.2.3-2.
- Maven version strings are any number of segments separated by a `.`, a
`-`, or a transition between a number and a letter. Version strings
are compared element-wise, with numeric segments being compared as
numbers. Alphabetic segments are treated specially if they happen to be
one of a handful of magic words (such as "alpha", "beta" or "snapshot"
for example) which count as "qualifiers"; a version string with a
qualifier is "before" its prefix (`1.2.3` is before `1.2.3-alpha.3`,
which is the same as `1.2.3-alpha3` or `1.2.3-alpha-3`), and there is a
special ordering amongst qualifiers. Other alphabetic segments are
compared alphabetically and count as being "after" their prefix
(`1.2.3-really-final-this-time` counts as being released after `1.2.3`).
- GHC package numbers are comprised of any number of numeric segments
separated by `.`, plus an optional (though deprecated) alphanumeric
"version tag" separated by a `-`. I could not find any official
documentation on ordering for the version tag; numeric segments are
compared as numbers.
- npm uses semantic versioning so that is covered already.
After much more investigation than I'd care to admit, I have come up
with the following compromise as the least-bad solution. First,
obviously, the version string for stable/marketing versions is going to
be "standard" semver, i.e. major.minor.patch, all numbers, which works,
and sorts as expected, for all three schemes. For snapshot releases, we
shall use the following (semver) format:
```
0.13.53-alpha.20200214.3025.ef5d32b7
```
where the components are, respectively:
- `0.13.53`: the expected version string of the next "stable" release.
- `alpha`: a marker that hopefully scares people enough.
- `20200214`: the date of the release commit, which _MUST_ be on
master.
- `3025`: the number of commits in master up to the release commit
(included). Because we have a linear, append-only master branch, this
uniquely identifies the commit.
- `ef5d32b7`: the first 8 characters of the release commit sha. This is
not strictly speaking necessary, but makes it a lot more convenient to
identify the commit.
The main downsides of this format are:
1. It is not a valid format for GHC packages. We do not publish GHC
packages from the SDK (so far we have instead opted to release our
Haskell code as separate packages entirely), so this should not be an
issue. However, our SDK version currently leaks to `ghc-pkg` as the
version string for the stdlib (and prim) packages. This PR addresses
that by tweaking the compiler to remove the offending bits, so `ghc-pkg`
would see the above version number as `0.13.53.20200214.3025`, which
should be enough to uniquely identify it. Note that, as far as I could
find out, this number would never be exposed to users.
2. It is rather long, which I think is good from a human perspective as
it makes it more scary. However, I have been told that this may be
long enough to cause issues on Windows by pushing us past the max path
size limitation of that "OS". I suggest we try it and see what
happens.
The upsides are:
- It clearly indicates it is an unstable release (`alpha`).
- It clearly indicates how old it is, by including the date.
- To humans, it is immediately obvious which version is "later" even if
they have the same date, allowing us to release same-day patches if
needed. (Note: that is, commits that were made on the same day; the
release date itself is irrelevant here.)
- It contains the git sha so the commit built for that release is
immediately obvious.
- It sorts correctly under all schemes (modulo the modification for
GHC).
Alternatives I considered:
- Pander to GHC: 0.13.53-alpha-20200214-3025-ef5d32b7. This format would
be accepted by all schemes, but will not sort as expected under semantic
versioning (though Maven will be fine). I have no idea how it will sort
under GHC.
- Not having any non-numeric component, e.g. `0.13.53.20200214.3025`.
This is not valid semantic versioning and is therefore rejected by
npm.
- Not having detailed info: just go with `0.13.53-snapshot`. This is
what is generally done in the Java world, but we then lose track of what
version is actually in use and I'm concerned about bug reports. This
would also not let us publish to the main Maven repo (at least not more
than once), as artifacts there are supposed to be immutable.
- Not having a qualifier: `0.13.53-3025` would be acceptable to all three
version formats. However, it would not clearly indicate to humans that
it is not meant as a stable version, and would sort differently under
semantic versioning (which counts it as a prerelease, i.e. before
`0.13.53`) than under maven (which counts it as a patch, so after
`0.13.53`).
- Just counting releases: `0.13.53-alpha.1`, where we just count the
number of prereleases in-between `0.13.52` and the next. This is
currently the fallback plan if Windows path length causes issues. It
would be less convenient to map releases to commits, but it could still
be done via querying the history of the `LATEST` file.
Release notes
=============
> Note: We have decided not to have release notes for snapshot releases.
Release notes are a bit tricky. Because we want the ability to make
snapshot releases, then later on promote them to stable releases, it
follows that we want to build commits from the past. However, if we
decide post-hoc that a commit is actually a good candidate for a
release, there is no way that commit can have the appropriate release
notes: it cannot know what version number it's getting, and, moreover,
we now track changes in commit messages. And I do not think anyone wants
to go back to the release notes file being a merge bottleneck.
But release notes need to be published to the releases blog upon
releasing a stable version, and the docs website needs to be updated and
include them.
The only sensible solution here is to pick up the release notes as of
the commit that triggers the release. As the docs cron runs
asynchronously, this means walking down the git history to find the
relevant commit.
> Note: We could probably do away with the asynchronicity at this point.
> It was originally included to cover for the possibility of a release
> failing. If we are releasing commits from the past after they have been
> tested, this should not be an issue anymore. If the docs generation were
> part of the synchronous release step, it would have direct access to the
> correct release notes without having to walk down the git history.
>
> However, I think it is more prudent to keep this change as a future step,
> after we're confident the new release scheme does indeed produce much more
> reliable "stable" releases.
New release process
===================
Just like releases are currently controlled mostly by detecting
changes to the `VERSION` file, the new process will be controlled by
detecting changes to the `LATEST` file. The format of that file will
include both the version string and the corresponding SHA.
Upon detecting a change to the `LATEST` file, CI will run the entire
release process, just like it does now with the VERSION file. The main
differences are:
1. Before running the release step, CI will checkout the commit
specified in the LATEST file. This requires separating the release
step from the build step, which in my opinion is cleaner anyway.
2. The `//:VERSION` Bazel target is replaced by a repository rule
that gets the version to build from an environment variable, with a
default of `0.0.0` to remain consistent with the current `daml-head`
behaviour.
Some of the manual steps will need to be skipped for a snapshot release.
See amended `release/RELEASE.md` in this commit for details.
The main caveat of this approach is that the official release will be a
different binary from the corresponding snapshot. It will have been
built from the same source, but with a different version string. This is
somewhat mitigated by Bazel caching, meaning any build step that does
not depend on the version string should use the cache and produce
identical results. I do not think this can be avoided when our artifact
includes its own version number.
I must note, though, that while going through the changes required after
removing the `VERSION` file, I have been quite surprised at the sheer number of
things that actually depend on the SDK version number. I believe we should
look into reducing that over time.
CHANGELOG_BEGIN
CHANGELOG_END
As mentioned in the title, this is still very experimental and needs
more work before we want to advertise it. However, the code is in a
somewhat reasonable shape, there are tests and I think even in the
current state it is already useful. Also this PR is already getting
very large so I don’t want to hold off much longer before merging this.
It is included in the SDK but hidden from `damlc --help` and `daml
--help` until the most pressing issues are addressed (primarily around
making sure that it doesn’t just shut down if you have a type error
and better error messages in general).
changelog_begin
changelog_end
changelog_begin
- [DAML Script - Experimental] Support running DAML scripts against an
authenticated ledger. The token is passed via ``--access-token-file``.
changelog_end
* kvutils: Extract a committer from the uses of `SubmissionValidator`.
This makes the clock injectable too.
* kvutils: Provide logging contexts in the `Runner`.
* sandbox: Remove the `StaticAllowBackwards` time provider type.
It's not used anywhere.
* sandbox: Fix warnings in CliSpec.
* sandbox: Ensure that we cannot specify both static and wall-clock time.
* sandbox-next: Crash if wall clock time is not specified.
* sandbox-next: Document more known issues in the new Sandbox.
* sandbox: Add a Clock (and some tests) to TimeServiceBackend.
* sandbox-next: Support static time.
CHANGELOG_BEGIN
- [Sandbox Next] Re-establish static time mode.
CHANGELOG_END
* ledger-on-(memory|sql): Expect a `() => Instant`, not a `Clock`.
* allocatePartyWithHint(On)
CHANGELOG_BEGIN
- [DAML Script - Experimental] The participant argument in ``allocatePartyOn`` is wrapped in ``ParticipantName`` to avoid confusion with the ``displayName`` argument.
- [DAML Script - Experimental] Add ``allocatePartyWithHint`` and ``allocatePartyWithHintOn``, which allow specifying the ``partyIdHint`` to the backing participant. See https://github.com/digital-asset/daml/issues/4472.
CHANGELOG_END
* test-cases for allocatePartyWithHint(On)
* DAML formatting
* Supply "" party id hint instead of None
Addressing review comment
https://github.com/digital-asset/daml/pull/4489#discussion_r378245989
Co-authored-by: Andreas Herrmann <andreash87@gmx.ch>
This should provide a better migration path for people that still rely
on static time by forcing them to make this explicit. Given that both
DAML script and DAML triggers are still experimental, I’m not marking
this as a breaking change.
changelog_begin
- [DAML Script - Experimental] The time mode must now always be
specified explicitly. Use ``--static-time`` to recover the previous
default time mode.
- [DAML Triggers - Experimental] The time mode must now always be
specified explicitly. Use ``--static-time`` to recover the previous
default time mode.
changelog_end
* sandbox: Don't hold on to old resources when resetting.
Now there's one hell of a memory leak.
CHANGELOG_BEGIN
- [Sandbox] Fixed a memory leak when using the ResetService; not
everything was cleaned up correctly.
CHANGELOG_END
* sandbox: Split out SandboxClientResource from SandboxServerResource.
Gonna replace SandboxServerResource with a ResourceOwner acquisition.
* sandbox: Don't capture the API server in the SandboxServer resource.
When we reset, this is stored forever, leading to a memory leak.
Tested by rewriting the SandboxServerResource to use
`SandboxServer.owner`.
* sandbox: Revert the test client resource to calling `shutdownNow()`.
* sandbox: Make sure the fixture is recreated properly on each test run.
* sandbox: Make `SandboxState` a non-case class.
The `toString()` was unnecessarily heavy.
* sandbox: Futures, futures everywhere.
Avoid a race condition where the server is stopped before it starts by
storing a `Future[SandboxState]` rather than the `SandboxState` itself.
This doesn't trigger the same memory leak as storing a
`Resource[SandboxState]` because we don't capture the object itself in
the `flatMap` in the same way with `Future`.
* sandbox: Remove an unused parameter left in for debugging.
* sandbox: Replace `@VisibleForTesting` with a comment.
* sandbox: Add more comments to the weird logic in SandboxServer.
* sandbox: Get rid of the `Port` type alias; it was confusing.
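A toy version of the race-avoidance from the “Futures, futures everywhere” step (hypothetical shapes): storing the `Future` of the state sequences a `stop()` issued during startup after the startup itself:
```
import scala.concurrent.{ExecutionContext, Future}

final class ToyServer(start: () => Future[String])(implicit ec: ExecutionContext) {
  // Stored as a Future, never as the bare state, so nothing captures the
  // state object itself and stop() can safely be called at any time.
  private val state: Future[String] = start()
  def stop(shutdown: String => Unit): Future[Unit] = state.map(shutdown)
}
```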
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
changelog_begin
- [DAML Script - Experimental] Add a sleep function that pauses
the script for the given duration. This is primarily useful in tests
where you repeatedly call query until a certain state is
reached.
changelog_end
fixes #4199
* Expose time in DAML script
changelog_begin
- [DAML Script] Add a ``HasTime`` instance for ``Script`` which allows
you to get the current time (UTC in wallclock mode, UNIX epoch otherwise)
changelog_end
* reenable tests
* clarify how time works
* fix tests
* Support DAML-LF type synonyms in scala.
CHANGELOG_BEGIN
CHANGELOG_END
* don't create synonyms in GenerateSimpleDalf
* extend DAML-LF parser to support type synonyms
* test: expand type synonyms correctly
Fixes #28.
CHANGELOG_BEGIN
[Sandbox] DAML trace logs (trace, traceRaw, traceId) are now logged via the regular logging system (slf4j+logback) at interpretation time via the logger ``daml.tracelog`` at DEBUG level.
CHANGELOG_END
* daml script-test choose free port
* Remove exclusive tag on script-test
The test was marked exclusive because it required access to port 6865.
However, the test-runner now automatically chooses a free port at
runtime.
Co-authored-by: Andreas Herrmann <andreash87@gmx.ch>
* Start on daml test-scripts
* Run all `Script a` as test cases
* LedgerClient: Expose PackageManagementClient
To enable DAR uploads
* Upload the DAR to the ledger
* Start sandbox if no ledger specified
* Format daml test-script
* Fix deprecation warning on ActorMaterializer
* Add test-case //daml-script/tests:test_daml_script_test_runner
* Add daml test-script command
CHANGELOG_BEGIN
- [DAML Script - Experimental] Allow running DAML scripts as test-cases.
Executing ``daml test-script --dar mydar.dar`` will execute all
definitions matching the type ``Script a`` as test-cases.
See `#3687 <https://github.com/digital-asset/daml/issues/3687>`__.
CHANGELOG_END
* daml-test-script enable logging
* Remove outdated TODO comment
* daml script-test More elaborate test-case
Compare to expected output and add failing test-case
* daml test-script Don't abort on test-failure
Before, the test runner would abort on the first failed test-case. This
occasionally introduced additional test-failures if the sandbox was
torn down half-way through execution.
* ./fmt.sh
Co-authored-by: Andreas Herrmann <andreash87@gmx.ch>
* Upgrade to Akka 2.6.1, akka-http 10.1.11 and Scala 2.12.10
Akka 2.6.1 Upgrade Changes
- Materializer in place of ActorMaterializer
- Source.future instead of Source.fromFuture
- The Scheduler.schedule method has been deprecated in favor of selecting scheduleWithFixedDelay or scheduleAtFixedRate
- onDownstreamFinish(cause: Throwable)
- ActorAttributes.supervisionStrategy(...) in place of ActorMaterializerSettings.withSupervisionStrategy
See https://doc.akka.io/docs/akka/current/project/migration-guide-2.5.x-2.6.x.html
* Akka 2.6.1 Upgrade Changes
- onDownstreamFinish(cause: Throwable)
See https://doc.akka.io/docs/akka/current/project/migration-guide-2.5.x-2.6.x.html
* code review: remove unnecessary supervision strategy
CHANGELOG_BEGIN
- [Sandbox] Restore 0.13.38 logging behaviour.
- [Navigator] Restore 0.13.38 logging behaviour.
- [Extractor] Restore 0.13.38 logging behaviour.
- [Internals] As of 0.13.39, we merged a number of internal JAR files in
the SDK tarball to reduce its size. These jars used to be standalone
JARs you could invoke as e.g. ``java -jar sandbox.jar <args>``. As a
result of merging the jars, they lost their individual ``logback.xml``
configuration file. Although running the jars directly was (and is
still) not supported, note that you can now achieve the same behaviour
with e.g. ``java -Dlogback.configurationFile=sandbox-logback.xml -jar
daml-sdk.jar sandbox <args>``.
CHANGELOG_END
* Support multi-participant DAML script
fixes #3555
CHANGELOG_BEGIN
- [DAML Script - Experimental] DAML script can now be used in distributed topologies.
CHANGELOG_END
* Fix ports in multiparticipants tests
* Generate API docs for DAML script and include them in the SDK docs
* Update daml-script/daml/Daml/Script.daml
Co-Authored-By: Martin Huschenbett <martin.huschenbett@posteo.me>
* Expose conversion from Ast.Type to iface.Type
This allows me to get rid of the duplicated conversion logic for DAML
script. The reason why I can’t use the higher-level APIs provided
by the interface reader is that the type of the script identifier can
be a function which is not serializable and therefore does not show up
in the interface. However, I only want to translate the type of the
argument of that function which is serializable.
* Update daml-lf/interface/src/main/scala/com/digitalasset/daml/lf/iface/reader/InterfaceReader.scala
Co-Authored-By: Stephen Compall <stephen.compall@daml.com>
This is useful for testing purposes and matches the function provided
in scenarios. We probably want to expose a variant of submitMustFail
that only succeeds if the SubmitFailure matches a specific condition
but I need to think a bit more about which API I want for that.
This uses the format for LF values that we already use elsewhere.
There is one annoying part in this PR where I had to duplicate the
logic for converting to the types used in the interface reader since
it is not exposed but hopefully we can get rid of this soon in a
separate PR.
fixes #3470
* Cleanup conversions in DAML script submit
I’ll leave the other commands such as Query for a separate PR.
Part of this code can be shared with the DAML trigger runner but I’ll
also leave that for a separate PR
* Address review comments
The code still needs a fair amount of cleanup but it seems to work and
there is a test, so I’d like to do the cleanup in-tree after merging
the current state.