Create normalized TXs when a partial TX is finalised.
Except in limited cases! (i.e. for scenario-runner, sandbox)
CHANGELOG_BEGIN
CHANGELOG_END
normalize values in the engine as they are converted from speedy-values
fix 2.12 build
backout redundant change
ensure byKey field is correctly normalized when constructed by engine
rename flag: valueNormalization -> transactionNormalization
improve comment
delete commented-out code
rename: toValueNorm --> toNormalizedValue
rename: (SValue.) toValue --> toUnNormalizedValue
revert changes to ptx so that the interface to insertCreate() etc is Value-based (not SValue-based)
improve comments
respell: toUnNormalizedValue --> toUnnormalizedValue
fix build
* Separate traces from warnings in engine.
I decided to separate the engine warnings from the tracelog after all,
because I think it will make testing and maintenance easier in the
long run.
Part of #9947, follow up from #10179
changelog_begin
changelog_end
* scalafmt
* Apply suggestions from code review
Co-authored-by: Remy <remy.haemmerle@daml.com>
* don't use case class for WarningLog
* revert TraceLog changes from last PR
* Scala 2.12 doesn't have ArrayBuffer.addOne :(
* remove isWarnEnabled check
Co-authored-by: Remy <remy.haemmerle@daml.com>
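As a rough illustration of the design above (a minimal sketch with assumed shapes; the real WarningLog lives in the engine and stores structured warnings, not strings):
```
import scala.collection.mutable.ArrayBuffer

// Hypothetical sketch: a plain mutable class (not a case class) that
// collects engine warnings separately from the TraceLog.
// `+=` is used instead of `addOne` so it also compiles on Scala 2.12.
final class WarningLog {
  private val buffer = ArrayBuffer.empty[String]
  def add(message: String): Unit = buffer += message
  def warnings: List[String] = buffer.toList
}

object WarningLogExample extends App {
  val log = new WarningLog
  log.add("example warning emitted during interpretation")
  log.warnings.foreach(println)
}
```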
* LF: Simplify archive reader.
- decouple Reader and Decoder
- introduce case class to handle hash, proto payload, and version
CHANGELOG_BEGIN
CHANGELOG_END
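A minimal sketch of the decoupling described above, under assumed, simplified types (the real reader works with protobuf messages and LF language versions):
```
// Hypothetical shapes: the case class groups hash, raw proto payload, and
// version, so reading the envelope and decoding the payload are two steps.
final case class ArchivePayload(
    hash: String,
    protoPayload: Array[Byte],
    version: String,
)

object ArchiveReaderSketch {
  // "Reader": only extracts the envelope pieces, no decoding yet.
  def readArchive(hash: String, bytes: Array[Byte], version: String): ArchivePayload =
    ArchivePayload(hash, bytes, version)

  // "Decoder": turns a payload into something usable, as a separate concern.
  def decode(payload: ArchivePayload): String =
    s"package ${payload.hash} (LF ${payload.version}), ${payload.protoPayload.length} bytes"
}

object ArchiveReaderExample extends App {
  val payload = ArchiveReaderSketch.readArchive("deadbeef", Array[Byte](1, 2, 3), "1.11")
  println(ArchiveReaderSketch.decode(payload))
}
```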
* Address Moritz' review
* cosmetic
* Refactor error reporting in Daml Repl
fixes #10098
As mentioned in that issue, the current behavior is a mess which
silently drops errors in certain cases. This PR switches things around
so that errors are reported to the client and we print them there,
making sure that no error ever gets lost.
changelog_begin
changelog_end
* Bump timeout
changelog_begin
changelog_end
* Expose Daml stacktraces for Daml Script errors
This finally plugs together the pieces from the previous PRs to
provide stacktraces on any ScriptF command (and thereby anything
involving an interaction with the ledger).
fixes #8754
changelog_begin
- [Daml Script] When running Daml Script on the command line you will
now see a Daml stacktrace on failures to interact with the ledger
which makes it significantly easier to track down which of the calls
fails. By default, you will only get the callsite of functions like
`submit`. To extend the stack trace, add `HasCallStack` constraints
to functions and those will also be included.
changelog_end
* Fix non-determinism in tests
changelog_begin
changelog_end
* fmt
changelog_begin
changelog_end
Pure shuffling around; there is no logic change in here. I keep getting
lost in the 1.4k-line LedgerInteraction file, so this PR splits it up
into 4 different files to make it a bit easier to navigate.
changelog_begin
changelog_end
* Fix closure references in Daml Repl
Turns out the comment "we probably need to extend this to merge the
modules from each line" was exactly correct: If the result evaluates
to a closure (or a value including a closure), it can reference
definitions from the current module. This happens if the simplifier
lifted something out of the current definition (otherwise we have only
one and it cannot be recursive so no chance of leaking a reference).
This PR fixes this by checking whether the result references the
current module and if it does, we keep it around.
changelog_begin
changelog_end
* Address review comments
changelog_begin
changelog_end
* fmt
changelog_begin
changelog_end
Disabling it per target works nicely for compilation but it gets
annoying in ghci since the warnings are still triggered. We could
disable it everywhere but I think the warning is generally useful. I
tried patching proto3-suite to use DerivingStrategies but that doesn’t
work because haskell-src is dead and doesn’t support it. So for now
adding it to the per-file list seems like the best option.
changelog_begin
changelog_end
This PR updates scalafmt and enables trailingCommas =
multiple. Unfortunately, scalafmt broke the version field which means
we cannot fully preserve the rest of the config. I’ve made some
attempts to stay reasonably close to the original config but couldn’t
find an exact equivalent in a lot of cases. I don’t feel strongly
about any of the settings so happy to change them to something else.
As announced, this will be merged on Saturday to avoid too many conflicts.
changelog_begin
changelog_end
* Port damlc dependencies to Scala 2.13
I got a bit fed up by the fact that going directory by directory
didn’t really work since there are too many interdependencies in
tests (e.g., client tests depend on sandbox, sandbox tests depend on
clients, engine tests depend on DARs which depend on damlc, …).
So before attempting to continue with the per-directory process, this
is a bruteforce approach to break a lot of those cycles by porting all
dependencies of damlc which includes client bindings (for DAML Script)
and Sandbox Classic (also for DAML Script).
If this is too annoying to review let me know and I’ll try to split it
up into a few chunks.
changelog_begin
changelog_end
* Update daml-lf/data/src/main/2.13/com/daml/lf/data/LawlessTraversals.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* fixup lawlesstraversal
changelog_begin
changelog_end
* less iterator more view
changelog_begin
changelog_end
* document safety of unsafeWrapArray
changelog_begin
changelog_end
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
We drop the distinction (at the type level) between Dev and Stable language
versions. The two main reasons that motivate this choice:
* we never really used this distinction.
* we want to add the concept of "preview" version (which is neither Dev nor Stable)
CHANGELOG_BEGIN
CHANGELOG_END
* Cache computation of top-level values at definition level
Earlier the computation of a top-level value was only cached at the
use-site in SEVal. This introduces SDefinition which contains the same
mechanism as SEVal to cache the computation.
As expected this does not impact performance much:
before: CollectAuthority.bench //daml-lf/scenario-interpreter/CollectAuthority.dar CollectAuthority:test avgt 40 44.267 ± 0.728 ms/op
after: CollectAuthority.bench //daml-lf/scenario-interpreter/CollectAuthority.dar CollectAuthority:test avgt 40 43.693 ± 0.702 ms/op
Where this does have a significant impact is in reducing the number of distinct
SValues for things like type class dictionaries, so that we now have one
SValue per dictionary rather than one per SEVal.
CHANGELOG_BEGIN
CHANGELOG_END
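A toy sketch of the caching move described above (assumed names; the real SDefinition caches an SValue computed by the Speedy machine):
```
// Hypothetical sketch: the definition memoises its own result, so every
// use site shares one cached value instead of each SEVal caching separately.
final class SDefinition[A](compute: () => A) {
  private var cached: Option[A] = None
  def cachedValue: A = cached.getOrElse {
    val value = compute()
    cached = Some(value)
    value
  }
}

object SDefinitionExample extends App {
  var evaluations = 0
  val dictionary = new SDefinition(() => { evaluations += 1; "type class dictionary" })
  dictionary.cachedValue
  dictionary.cachedValue
  println(s"computed $evaluations time(s)") // prints: computed 1 time(s)
}
```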
* Address code review
* Fix speedy tests
* Speedy: group compiler parameters into a Config case class.
The Speedy compiler is parametrized by several option flags. Those flags
are passed through different calls (in particular in the instances of
`CompiledPackages`).
This PR groups all these configuration parameters into a case class to
pass the options in a cleaner way.
CHANGELOG_BEGIN
CHANGELOG_END
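A minimal sketch of the idea, with made-up flag names (the actual fields in the Speedy compiler's Config differ):
```
object SpeedyCompilerSketch {
  // Hypothetical flags; the point is that callers pass one Config value
  // instead of threading individual options through every call.
  final case class Config(
      stackTraceMode: Boolean,
      profilingEnabled: Boolean,
      packageValidation: Boolean,
  )

  object Config {
    val Default: Config = Config(
      stackTraceMode = true,
      profilingEnabled = false,
      packageValidation = true,
    )
  }

  def compile(config: Config, expression: String): String =
    s"compiled '$expression' with $config"
}

object ConfigExample extends App {
  import SpeedyCompilerSketch._
  println(compile(Config.Default.copy(profilingEnabled = true), "1 + 1"))
}
```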
* Fix trace statement handling in DAML Script service
This PR makes sure that we get trace statements coming from both the
ledger client and the ledger server. In the script service
those are two different Speedy machines so we need to interleave
them. This is a bit messy so let me explain the steps (apologies for
not splitting it up but I think it’s easier to understand the
motivation when it is a single PR):
1. Abstract over the tracelog interface. The ringbuffer doesn’t make
sense for our purposes and is annoying to clear.
2. Pass in the respective fields of `Machine` to `Conversions` instead
of the whole machine. This is both a simpler design (passing around
mutable objects scares me) and it allows us to assemble the fields
from different machines.
3. Initialize the ledger machine with a simple ArrayBuffer tracelog.
4. After `submit` and `submitMustFail` copy the tracelog from the
ledger to the client.
fixes #7280
changelog_begin
changelog_end
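The steps above can be illustrated with a small sketch (assumed shapes; the real TraceLog also carries source locations and structured messages):
```
import scala.collection.mutable.ArrayBuffer

// Hypothetical minimal interface plus an ArrayBuffer-backed implementation.
trait TraceLog {
  def add(message: String): Unit
  def iterator: Iterator[String]
  def clear(): Unit
}

final class ArrayBufferTraceLog extends TraceLog {
  private val buffer = ArrayBuffer.empty[String]
  def add(message: String): Unit = buffer += message
  def iterator: Iterator[String] = buffer.iterator
  def clear(): Unit = buffer.clear()
}

object TraceLogExample extends App {
  val clientLog = new ArrayBufferTraceLog
  val ledgerLog = new ArrayBufferTraceLog

  ledgerLog.add("trace emitted by the ledger-side machine")
  // After a submit, drain the ledger machine's traces into the client log
  // so both sides end up as one interleaved stream.
  ledgerLog.iterator.foreach(clientLog.add)
  ledgerLog.clear()

  clientLog.iterator.foreach(println)
}
```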
* Address review feedback
changelog_begin
changelog_end
* REPL test `:json` command
changelog_begin
changelog_end
* Implement :json command in DAML REPL
changelog_begin
- [DAML REPL] You can now convert DAML expressions to JSON in the DAML
REPL using the meta-command ``:json``, for example ``:json [1, 2, 3]``.
changelog_end
* Extend DAML REPL documentation
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
This is set per participant since it is similar to the token file.
fixes #7029
changelog_begin
- [DAML Script/DAML REPL] You can now configure the application id via
`--application-id` or the `--participant-config`. This is primarily
useful if you are working against a ledger with authentication and
need to match the application id in your token.
changelog_end
This makes much more sense since for things like the JSON API and the
script service, we don’t actually have the application id as a
separate parameter.
I’ve also cleaned up all the arbitrary hardcoded application ids in
various tests in favor of a DEFAULT_APPLICATION_ID value.
Next step is #7029
changelog_begin
changelog_end
* add -Ywarn-unused to all scalac options
* remove some unused arguments
* remove some unused definitions
* remove some unused variable names
* suppress some unused variable names
* changeExtension doesn't use baseName
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* work around no plugins in scenario interpreter perf tests
* remove many more unused things
* remove more unused things, restore some used things
* remove more unused things, restore a couple signature mistakes
* missed import
* unused argument
* remove more unused loggingContexts
* some unused code in triggers
* some unused code in sandbox and kvutils
* some unused code in repl-service and daml-script
* some unused code in bindings-rxjava tests
* some unused code in triggers runner
* more comments on silent usages
- suggested by @cocreature; thanks
* fix missing reference in TestCommands
* more unused in triggers
* more unused in sandbox
* more unused in daml-script
* more unused in ledger-client tests
* more unused in triggers
* more unused in kvutils
* more unused in daml-script
* more unused in sandbox
* remove unused in ledger-api-test-tool
* suppress final special case for codegen unused warnings
.../com/daml/sample/mymain/ContractIdNT.scala:24: warning: parameter value ev 0 in method ContractIdNT Value is never used
implicit def `ContractIdNT Value`[a_a1dk](implicit `ev 0`: ` lfdomainapi`.Value[a_a1dk]): ` lfdomainapi`.Value[_root_.com.daml.sample.MyMain.ContractIdNT[a_a1dk]] = {
^
.../com/daml/sample/mymain/ContractIdNT.scala:41: warning: parameter value eva_a1dk in method ContractIdNT LfEncodable is never used
implicit def `ContractIdNT LfEncodable`[a_a1dk](implicit eva_a1dk: ` lfdomainapi`.encoding.LfEncodable[a_a1dk]): ` lfdomainapi`.encoding.LfEncodable[_root_.com.daml.sample.MyMain.ContractIdNT[a_a1dk]] = {
^
* one more unused in daml-script
* special scaladoc rules may need silencer, too
* unused in compatibility/sandbox-migration
* more commas, a different way to `find`
- suggested by @remyhaemmerle-da; thanks
This was not only unnecessarily duplicated, it also had a bug where
`--crt` behaved like `--pem` instead of setting the cert chain.
I didn’t add new tests since it seems like the wrong place to test
config parsing of a library. We do have tests for TLS in general for
both DAML Script and DAML Triggers.
changelog_begin
changelog_end
This PR makes the ``--ledger-host`` and ``--ledger-port`` parameters
optional so DAML REPL works without a ledger, which is useful now that
we have better support for pure expressions. This just piggybacks on
DAML Script’s multi-participant support, so there are no significant
changes on the service.
Docs are updated and we have a testcase.
Side note: What is still missing is `let x = …` in DAML REPL. I’ll
tackle that in a separate PR.
changelog_begin
changelog_end
* Upgrade nixpkgs revision
* Remove unused minio
It used to be used as a gateway to push the Nix cache to GCS, but has
since been replaced by nix-store-gcs-proxy.
* Update Bazel on Windows
changelog_begin
changelog_end
* Fix hlint warnings
The nixpkgs update implied an hlint update which enabled new warnings.
* Fix "Error applying patch"
Since Bazel 2.2.0 the order of generating `WORKSPACE` and `BUILD` files
and applying patches has been reversed. This allows users to define
patches to these files that will not be immediately overwritten.
However, it also means that patches on another repository's original
`WORKSPACE` file will likely become invalid.
* a948eb7255
* https://github.com/bazelbuild/bazel/issues/10681
Hint: If you're generating a patch with `git` then you can use the
following command to exclude the `WORKSPACE` file.
```
git diff ':(exclude)WORKSPACE'
```
* Update rules_nixpkgs
* nixpkgs location expansion escaping
* Drop --noincompatible_windows_native_test_wrapper
* client_server_test using sh_inline_test
client_server_test used to produce an executable shell script in the form
of a text file output. However, since the removal of
`--noincompatible_windows_native_test_wrapper` this no longer works on
Windows since `.sh` files are not directly executable on Windows.
This change fixes the issue by producing the script file in a dedicated
rule and then wrapping it in a `sh_test` rule which also works on
Windows.
* daml_test using sh_inline_test
* daml_doc_test using sh_inline_test
* _daml_validate_test using sh_inline_test
* damlc_compile_test using sh_inline_test
* client_server_test find .exe on Windows
* Bump Windows cache for Bazel update
Remove `clean --expunge` after merge.
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
* Print results in DAML REPL and support pure expressions
This extends DAML REPL to behave similarly to GHCi in that it prints
results if the result is an instance of `Show` and it also supports
pure, non-script expressions now. We copy the same hack from GHCi and
typecheck multiple times. This doesn’t seem to have any noticeable
performance impact (which makes sense: the expressions are tiny and
the overhead of grpc is much, much bigger).
Docs and tests are updated.
fixes #6780
changelog_begin
changelog_end
* windows is garbage
* but I fixed it \o/
* Rename getReplLogger to newReplLogger
changelog_begin
changelog_end
* Revert ANF changes and add a testcase for evaluation order
After careful consideration, we decided that the change in evaluation
order that was accidentally introduced by the ANF changes should be
considered a breaking change or arguably even a bug and should not
land in 1.3.0.
Therefore, this PR reverts the following commits:
1. 353d0da6f7
2. a45b51042f
3. 04c7b2af7f
4. a624dd7242
5. b3aab72cee
Other PRs mostly had trivial merge conflicts that I resolved. The two
most interesting ones here are probably
1. https://github.com/digital-asset/daml/pull/6576 which was easy to
resolve and the change to return SEValue instead of SExpr is still
nice and useful even if we do not need the guarantees.
2. https://github.com/digital-asset/daml/pull/6542, which required
some changes since the constructors changed. If you want to review
those changes in detail (they are pretty straightforward so not too
important), it’s probably easiest to check out this PR and run
```
git diff 2cd2a8f2a8
daml-lf/interpreter/src/main/scala/com/digitalasset/daml/lf/speedy/Compiler.scala
```
to see the diff to the parent commit of the first commit that
introduced ANF.
changelog_begin
changelog_end
* ANF transformation in Speedy.
The idea behind this PR is to transform speedy expressions into a simpler form where all non-atomic sub-expressions are made explicit by the introduction of let-forms, in particular for the function-application form. These simpler forms allow the execution engine to take advantage of the atomicity assumption and often remove many additional execution steps, in particular the pushing of continuations needed so that execution can continue after a compound expression has been reduced to a value.
changelog_begin
changelog_end
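As a rough illustration of the idea (a toy expression language, not Speedy's actual SExpr), this sketch introduces let-bindings so that application arguments are always atomic:
```
// Toy illustration of the transformation: rewrite applications so every
// argument is atomic, introducing let-bindings for compound sub-expressions.
sealed trait Expr
final case class Lit(n: Int) extends Expr
final case class Var(name: String) extends Expr
final case class Apply(fun: String, args: List[Expr]) extends Expr
final case class Let(name: String, bound: Expr, body: Expr) extends Expr

object AnfSketch {
  private var counter = 0
  private def fresh(): String = { counter += 1; s"$$t$counter" }

  private def isAtomic(e: Expr): Boolean = e match {
    case Lit(_) | Var(_) => true
    case _               => false
  }

  // Hoist every non-atomic argument into a let-binding around the call.
  def toAnf(expr: Expr): Expr = expr match {
    case Apply(fun, args) =>
      val (bindings, atoms) = args.map {
        case a if isAtomic(a) => (None, a)
        case a                => val x = fresh(); (Some(x -> toAnf(a)), Var(x))
      }.unzip
      bindings.flatten.foldRight(Apply(fun, atoms): Expr) {
        case ((name, bound), body) => Let(name, bound, body)
      }
    case Let(name, bound, body) => Let(name, toAnf(bound), toAnf(body))
    case atom                   => atom
  }
}

object AnfExample extends App {
  // f (g 1) 2  ==>  let $t1 = g 1 in f $t1 2
  println(AnfSketch.toAnf(Apply("f", List(Apply("g", List(Lit(1))), Lit(2)))))
}
```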
* improve comment
* inline functions relocateA/L
* remove comment about scalafmt
* remove commented out alternative def for transformLet1
* improve code by adding incr methods to DepthA/E
* remove (n == 0) special case in trackBindings
* clarify comment further
* improve validate/go to not consume stack for deeply right-nested let-expressions
* address comments from Remy: be private; use final case class; etc
* rename to unsafeCompilationPipeline
* add back some trailing commas
* remove commented-out debug line
* improve comment
* remove dev/debug code in compilationPipeline
* remove commented out code in SEAppGeneral.execute
* undo unrelated code improvement in SValue.scala
* fix compile. object Anf cannot be private
* DAML REPL --static-time regression test
* damlc repl --static-time|--wall-clock-time flags
CHANGELOG_BEGIN
- [DAML REPL] The time mode can now be specified using the
``--static-time`` and ``--wall-clock-time`` flags.
CHANGELOG_END
* Update compiler/damlc/lib/DA/Cli/Damlc.hs
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Verify the effect of setTime using getTime
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Support multiple auth tokens in DAML Script
This piggybacks on top of the already existing --participant-config
feature. While you can argue that it might be slightly confusing that
you have to specify the same participant twice to specify different
auth tokens, I think this actually makes sense: In an ideal
world (ignoring any performance issues) you have one participant per
party anyway and one connection per participant specified in the
config file still seems like a very reasonable model.
changelog_begin
- [DAML Script] You can now use DAML Script with multiple auth
tokens. This is particularly useful if you are working with the JSON
API where you can only have one party per token or with an IAM that
only provides single-party tokens. The tokens are specified in the
participant configuration passed via `--participant-config` in a new
``access_token`` field. The existing ``--access-token-file`` flag is still
supported if you want to use the same token for all connections. See
https://docs.daml.com/daml-script/index.html#running-daml-script-against-authenticated-ledgers
for more details.
changelog_end
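A minimal sketch of the per-participant token idea under assumed names (the real participant config is a JSON file parsed by the runner; `ApiParameters` here is purely illustrative):
```
// Hypothetical config shape: each participant may carry its own access
// token; a single global token remains possible as a fallback.
final case class ApiParameters(
    host: String,
    port: Int,
    accessToken: Option[String],
)

object MultiTokenExample extends App {
  val participants = Map(
    "alice_participant" -> ApiParameters("localhost", 6865, Some("token-for-alice")),
    "bob_participant" -> ApiParameters("localhost", 6865, Some("token-for-bob"))
  )
  participants.foreach { case (name, params) =>
    println(s"$name -> ${params.host}:${params.port}, token set: ${params.accessToken.isDefined}")
  }
}
```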
* I will never understand rst
changelog_begin
changelog_end
This integrates the time service into DAML script thereby covering the
main piece of scenarios that was missing from DAML script.
This PR does two things (they are very related and doing them together
makes it much easier to test):
1. It “fixes” `getTime` to return the ledger time in static mode by
querying the ledger time service instead of defaulting to the Unix
epoch, which is pretty useless; I would consider the old behavior
a bug. We keep the old behavior via the JSON API since there is no
time service.
2. It adds `setTime` to set the ledger time via the time service. This
is only supported in static time mode (sandbox and other ledgers do
not expose the time service in wallclock mode because changing time
makes it not wallclock) or via the JSON API (no time service).
fixes #6220
changelog_begin
- [DAML Script] DAML Script’s `getTime` now correctly handles time
changes in static time mode and returns the current time by querying
the time service rather than defaulting to the Unix epoch. Note that
when run via the JSON API, it still returns the Unix epoch.
- [DAML Script] Add `setTime` to DAML Script which sets the ledger
time via the ledger API time service. Note that this is only
supported when running over gRPC in static time mode.
changelog_end
* Add option based constructor for LedgerIdRequirement
changelog_begin
changelog_end
* Make option based constructor the default, deprecate old constructor
* Update with review comments
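A sketch of the API shape this describes, with assumed field names (the real LedgerIdRequirement lives in the ledger client configuration):
```
// Hypothetical shape: the primary constructor is Option-based; the old
// string-plus-boolean constructor survives as a deprecated overload.
final case class LedgerIdRequirement(optionalLedgerId: Option[String]) {
  def isAccepted(ledgerId: String): Boolean =
    optionalLedgerId.forall(_ == ledgerId)
}

object LedgerIdRequirement {
  @deprecated("use the Option-based constructor")
  def apply(ledgerId: String, enabled: Boolean): LedgerIdRequirement =
    LedgerIdRequirement(if (enabled) Some(ledgerId) else None)
}

object LedgerIdRequirementExample extends App {
  val requirement = LedgerIdRequirement(Some("my-ledger"))
  println(requirement.isAccepted("my-ledger")) // true
  println(requirement.isAccepted("other"))     // false
}
```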
Choices for `stacktracing` are `NoStackTrace` / `FullStackTrace`.
Adapt code so the selection is made by the original caller:
- `engine`
- `scenario-service`
- `repl-service`
- `daml-script` runner
etc
Currently, all callers pass `FullStackTrace` (the existing behaviour), except for the
exploration dev-code: `daml-lf/interpreter/perf/src/main/scala/com/daml/lf/explore`.
The idea is that once this control is in place, we can discuss how we
might expose it to the user, and/or perhaps change the default behaviour to have
`stacktracing` disabled.
changelog_begin
changelog_end
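A minimal sketch of the two-valued control, using the names from the text (how it is actually threaded through the compiler differs):
```
// The two choices named above; all current callers pass FullStackTrace.
sealed trait StackTraceMode
case object NoStackTrace extends StackTraceMode
case object FullStackTrace extends StackTraceMode

object StackTraceModeExample extends App {
  // Hypothetical use: the original caller decides, and the compiler
  // only instruments expressions when full stack traces are requested.
  def compile(mode: StackTraceMode, expression: String): String = mode match {
    case FullStackTrace => s"compiled '$expression' with stack-trace instrumentation"
    case NoStackTrace   => s"compiled '$expression' without stack-trace instrumentation"
  }
  println(compile(FullStackTrace, "fetch cid"))
}
```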
* Implement a simple profiler for DAML scenarios
The profiler runs a single scenario and records timing information when
each function (and some other closures) are entered and left. The
resulting information can be visualized as a flamegraph using
[speedscope](https://www.speedscope.app/).
The profiler works by instrumenting the CEK machine at the heart of
DAML Engine. Unfortunately, this causes a very small overhead on
non-profiling runs too. However, in my benchmarks I could not measure
any significant impact on the overall runtime at all. More precisely,
the overhead is as follows:
Every closure now has an additional field called `label`. In
non-profiling runs this field is always set to `null`. This field needs
to be allocated, copied whenever we copy a closure and scanned during
garbage collection. Additionally, whenever we enter a closure, we check
this field and whenever it is _not_ `null`, i.e. never during
non-profiling runs, we record an "open event" and set up a hook for the
corresponding "close event". Thus, the additional cost during
non-profiling runs are a single pointer comparison and a jump beyond
the "then branch".
Since this is still very much in active development, there is no
documentation other than an entry in a README, and no tests yet. They
will come before we promote this. However, the UX will look very
different then since we already have plans to significantly change it.
CHANGELOG_BEGIN
CHANGELOG_END
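An illustrative sketch of the overhead argument (not the real CEK machine): the only cost on non-profiling runs is a null check on the closure's label before entering its body.
```
// Hypothetical closure with an extra `label` field; null means "not profiled".
final class Closure(val label: String, val body: () => Unit)

final class Profiler {
  def openEvent(label: String): Unit = println(s"open  $label")
  def closeEvent(label: String): Unit = println(s"close $label")
}

object ProfilerSketch extends App {
  val profiler = new Profiler

  def enter(closure: Closure): Unit =
    if (closure.label == null) {
      // Non-profiling run: a single pointer comparison, then straight in.
      closure.body()
    } else {
      profiler.openEvent(closure.label)
      try closure.body()
      finally profiler.closeEvent(closure.label)
    }

  enter(new Closure(null, () => println("unprofiled work")))
  enter(new Closure("MyModule:someFunction", () => println("profiled work")))
}
```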
* Run scalafmt
* Make profiling argument to PureCompiledPackages optional
* Fix a bunch of tests
CHANGELOG_BEGIN
CHANGELOG_END
* scalafmt is so annoying
CHANGELOG_BEGIN
CHANGELOG_END
* Apply simple suggestions
CHANGELOG_BEGIN
CHANGELOG_END
Fixes #5592
The CLI syntax and the defaults follow the JSON API here.
changelog_begin
- [DAML Script] The maximum inbound message size can now be configured
using ``--max-inbound-message-size``. This matches the flag in the JSON
API.
- [DAML REPL] The maximum inbound message size can now be configured
using ``--max-inbound-message-size``. This matches the flag in the JSON API.
changelog_end
* Improve error messages in daml repl on calls to `error`
There were two issues with calls to `error`:
1. This one is harmless but somewhat annoying: When calling `error` we
run into the log statement in `stepToValue` which prints out the
error message in a fairly reasonable form (you can argue whether
`Error: User abort:` is a super useful prefix but that’s a relatively
minor issue). Afterwards we call `println` on the failed
future. However, that will just print the type of the exception
which isn’t something we want to show to users. I’ve just disabled
the println statement if the exception is `SError`.
2. This one is a bigger issue: `throw x` is not the same as
`Future.failed(x)`. I only fully realized the difference fairly
recently. The former fails before it produces a future. So `(throw
x).onComplete(…)` will never execute the callback. The latter is
just a failed future. It is rather confusing to have a function
that returns a future but then throws an exception instead of a
future and it confuses the grpc library which prints out a horrible
exception. I’ve changed all calls to `throw` in `runWithClients` to
instead use `Future.failed` and `flatMap` (in the form of
for-comprehensions).
There are still a few calls in `run` left which I’ll leave for a
separate PR.
I think we need to factor out some helper functions here to make this
a bit more manageable (e.g. for the `Converter.toFuture` stuff) but I’ll
leave that for a separate PR. You probably want to view this with
whitespace diffs disabled.
changelog_begin
- [DAML Repl] DAML Repl now produces better error messages on calls to
`error` and `abort`.
changelog_end
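The second point is easy to reproduce in isolation (a standalone sketch, unrelated to the repl code itself):
```
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.util.{Failure, Success}

object ThrowVsFailed extends App {
  // Throws before any Future exists: callers never see a failed Future.
  def broken(): Future[Int] = throw new RuntimeException("boom")
  // Returns a Future that is already failed: callbacks do run.
  def failed(): Future[Int] = Future.failed(new RuntimeException("boom"))

  try broken().onComplete(result => println(s"never printed: $result"))
  catch { case e: RuntimeException => println(s"escaped the Future entirely: ${e.getMessage}") }

  failed().onComplete {
    case Failure(e) => println(s"observed as a failed future: ${e.getMessage}")
    case Success(v) => println(s"unexpected success: $v")
  }

  Thread.sleep(100) // crude wait so the async callback can print before exit
}
```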
* Switch stepToValue to return Either
* Split up repl tests to make them faster
This PR splits up the tests into the tests for TLS and Auth and the
tests for the actual functionality.
The func tests use the repl as a library which allows them to be
significantly faster:
1. We only need to start the service process once.
2. We only need to initialize the package db once.
There is at least one other point that I did not address for now and
that is only loading the packages into the repl service once. While
loading them multiple times is a noop, it still has a performance
implication.
Sadly, this has turned out much more messy than I thought it would be
due to various issues with haskeline/repline/tasty/computers. The
details are in a comment in DA.Test.Repl.FuncTests.
changelog_begin
changelog_end
* Try to nuke cache on Windows selectively
* Enable cache again