* Track used packages during the whole of Engine.submit
- Introduce MutableCompiledPackages interface
- Add TrackingCompiledPackages that tracks fetched packages
- Make used packages in transaction optional, to distinguish between
missing dependencies and an empty set of dependencies (sketched below).
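A minimal sketch of the tracking idea, using simplified stand-ins for the
engine's actual interfaces (the real `MutableCompiledPackages` and
`TrackingCompiledPackages` differ in detail):
```scala
import scala.collection.mutable

// Hypothetical stand-in for a compiled package.
final case class CompiledPackage(id: String)

// Simplified stand-in for the engine's read-only package store.
trait CompiledPackages {
  def getPackage(pkgId: String): Option[CompiledPackage]
}

// Wraps another store and records every package ID that is successfully
// fetched during a submission. Exposing the result as an Option at the
// transaction level lets callers distinguish "dependencies unknown"
// from "no dependencies".
final class TrackingCompiledPackages(underlying: CompiledPackages)
    extends CompiledPackages {
  private val used = mutable.Set.empty[String]

  override def getPackage(pkgId: String): Option[CompiledPackage] = {
    val result = underlying.getPackage(pkgId)
    result.foreach(_ => used += pkgId)
    result
  }

  def packagesUsed: Set[String] = used.toSet
}
```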
* Reimplement package dependency tracking
- Compute direct dependencies of a package during decoding
- Compute transitive dependencies of a package when adding a package
to the engine (sketched below).
- Annotate the resulting transaction with package dependencies
in Engine.submit.
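A sketch of the transitive closure, assuming the direct dependencies were
already collected during decoding (`directDeps` is a hypothetical map from
a package ID to its direct dependencies):
```scala
import scala.collection.mutable

// Walk the direct-dependency graph from a root package and collect
// every package reachable from it.
def transitiveDeps(
    root: String,
    directDeps: Map[String, Set[String]]): Set[String] = {
  val seen = mutable.Set.empty[String]
  def go(pkgId: String): Unit =
    for (dep <- directDeps.getOrElse(pkgId, Set.empty) if seen.add(dep))
      go(dep)
  go(root)
  seen.toSet
}
```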
* Create Ast.Package with proper direct deps in scenario service
While we have no use for the direct dependencies of a package in the
scenario service (only Engine.submit needs them), it's better to be
accurate. This, of course, overapproximates the direct dependencies.
* Compile each new package once in ConcurrentCompiledPackages
Until now, we broke long party names at spaces in the scenario's table
view, which looked quite ugly. Now we no longer break them at spaces.
At the same time, we moved the CSS into a separate file rather than
keeping it in the code, to make changes like this one easier in the
future.
This fixes #3642.
* Update bazel-common to fix javadoc issues
Specifically, to fix the following error:
```
ERROR: /home/aj/tweag.io/da/da-bazel-1.1/ledger-api/rs-grpc-bridge/BUILD.bazel:7:1: in javadoc_library rule //ledger-api/rs-grpc-bridge:rs-grpc-bridge_javadoc:
Traceback (most recent call last):
File "/home/aj/tweag.io/da/da-bazel-1.1/ledger-api/rs-grpc-bridge/BUILD.bazel", line 7
javadoc_library(name = 'rs-grpc-bridge_javadoc')
File "/home/aj/.cache/bazel/_bazel_aj/5f825ad28f8e070f999ba37395e46ee5/external/com_github_google_bazel_common/tools/javadoc/javadoc.bzl", line 27, in _javadoc_library
dep.java.transitive_deps
object of type 'JavaSkylarkApiProvider' has no field 'transitive_deps'
```
* Define Maven deps using rules_jvm_external
* Pin artifacts
* Remove bazel-deps generated targets
* Remove bazel-deps
* Switch to rules_jvm_external targets
* Update Bazel documentation
* pom_file: There are no more bazel-deps targets
* Fix `maven_install` typo in BAZEL-JVM.md
* Properly fill eventId for active contracts
This gets rid of the last remaining bit that assumes
contractId==eventId.
Fixes #65.
Contributes to #2068.
* Do not conflate eventId and contractId in the daml-lf interpreter
* Do not treat contractId as eventId in Ledger.scala
* Remember the transaction that divulged a contract.
* In this scope we can treat divulged contracts the same as disclosed ones
* revert a few more syntactic changes to make the overall diff smaller
* retain the same behavior on the scenario service api
* fix unreleased after rebase
* Refactor DAML-LF encoder/decoder in prep for string interning
Refactor the encoder/decoder such that all the functions concerned with
interning live next to each other and code will break once the types of
the protobuf messages change.
This PR is a pure refactoring and does not change any functionality.
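To illustrate the "break on type changes" goal in a hedged sketch (the
actual code is the Haskell encoder/decoder; `PackageRefProto` is a made-up
message type): every interning conversion is funneled through one
explicitly typed function, so a change to the protobuf message surfaces as
a compile error in exactly one place rather than as a silent mis-decode.
```scala
// Made-up stand-in for a generated protobuf message.
final case class PackageRefProto(internedId: Int)

// The single, explicitly typed conversion point: if the message type
// changes, compilation breaks here and nowhere else.
def decodePackageRef(
    proto: PackageRefProto,
    internTable: Vector[String]): String =
  internTable(proto.internedId)
```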
* Fix typos
* Preload packages to engine during upload
* Improve logging in KeyValueCommitting and add timing information
* Fix scenario service tests now that logging is done in interpreter
* add Numeric.java
* ledger-api: rename `decimal` field to `numeric` in value protobuf
* Address Gerolf's comment
* ledger-api: add missing renamings
* ledger-api: relax syntax of numbers that can be sent as numerics
* extractor: fix
* ledger-api: change format of numbers through the ledger API
* daml-lf: fix numeric regexp
* ledger: fix tests
This fixes all the flakiness in `damlc test` that I was able to
reproduce. Previously, I got it to fail in about 10% of cases, whereas
now I have successfully run the tests 200 times under load without
issues.
There were two issues at play here:
1. We run scenarios in separate threads to be able to kill the running
Shake session quickly even if a scenario has an infinite loop or
something like that (there is a timeout but it’s quite long). This
could result in one of those left-over threads trying to issue a
request while we are already trying to shut down.
To fix that, we wait for the concurrency semaphore to be empty before
shutting down.
2. Just waiting for scenario executions is not quite sufficient, as
`runAction` does not wait for all rules to finish (we could just use
`runActionSync` in `damlc test`, but I’d rather make this work
properly). While we do wait for all scenario executions to finish,
there is one gRPC request in `ofInterest` that we do not wait for:
`gcCtxs`.
To fix this, I’ve now routed all gRPC requests through the semaphore
(sketched below), which means that we will also wait for these
requests to finish (or prevent them from spawning).
This makes more sense anyway as scenario executions are mostly fairly
cheap requests while things like setting up the context are expensive
so we want to limit their concurrency.
We should make the concurrency limit configurable but I’ll leave that
for a separate PR.
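The actual fix lives in the Haskell IDE code, but the gating idea is
language-independent. A hedged Scala sketch with hypothetical names: every
request acquires a permit, and shutdown drains all permits before tearing
down the channel, so no left-over thread can hit a destroyed channel.
```scala
import java.util.concurrent.Semaphore

final class RequestGate(maxConcurrent: Int) {
  private val sem = new Semaphore(maxConcurrent)

  // Run a request while holding a permit, limiting concurrency.
  def withPermit[A](request: => A): A = {
    sem.acquire()
    try request
    finally sem.release()
  }

  // Block until no request holds a permit, then keep all permits so
  // that no new request can start during shutdown.
  def drainAndClose(): Unit = sem.acquire(maxConcurrent)
}
```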
SS.scenarioServiceClient does not just read the actual client from
some IORef; it also registers the available gRPC methods. Apparently
we never knew about this, or at least I didn’t.
By only doing this once, we should speed things up a bit, and this
fixes one of the assertion failures we have been seeing on shutdown
(`pthread_mutex_lock(&mu->mutex) == 0` in `sync_posix.cc`), which was
caused by trying to register a method from another thread after
destroying the channel.
* Show function names in stack trace on failing scenario
So far, we've only shown the location of the function but not its name.
Now, we add the name of the function as well.
I assume the plan was to implement stack traces. I intend to do that
as well, but the message type does not fit my approach. Thus, let's
remove it first.
* Print stack traces in the scenario on failure
Currently, we only print the last source location, which is not
particularly helpful for debugging. Now, we put all source locations we
encounter during execution on the continuation stack and print them when
a scenario fails. This PR does not print the names of entered functions
or choices. We leave this for a future PR.
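A hedged sketch of the mechanism, with made-up types rather than the
interpreter's actual continuation machinery: locations are pushed as
execution proceeds and rendered when the scenario fails.
```scala
// Made-up source location; the real interpreter tracks richer data.
final case class SrcLoc(module: String, startLine: Int)

final class TraceStack {
  private var locs: List[SrcLoc] = Nil

  // Called whenever execution passes a location worth reporting.
  def push(loc: SrcLoc): Unit = locs = loc :: locs

  // Rendered when a scenario fails, innermost location first.
  def render: String =
    locs.map(l => s"  at ${l.module}:${l.startLine}").mkString("\n")
}
```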
* Address Moritz' comments
The module was only used to format decimal numbers. The only impact the
formatting could have had was to add/strip trailing zeros from the
presentation of the number. However, manual testing shows this did not
happen. Thus, the whole module is now useless.
While experimenting with the pure Haskell gRPC implementation I
noticed that there is a fair amount of duplication in PrettyScenario
so this PR cleans up some of that.
* check that submitter is in maintainers when looking up keys
Fixes #1866. Note that this limitation applies to both `lookupByKey`
and `fetchByKey` -- anything involving retrieving a key is affected
(a sketch of the check follows this list of commits).
* add UNTIL-LF to run tests up to a certain version of DAML-LF
* name targets for DAML tests better
* add notes about DAML-LF changes
* commit Test.daml with DAML-LF 1.5 rather than compiling it on the fly
* add scenario tests for #1866
* add warnings about future key behavior in docs
* use flag rather than version when executing
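A hedged sketch of the check with simplified types (the real
interpreter's key and error types differ): any key retrieval fails unless
the submitter is among the key's maintainers.
```scala
// Simplified stand-in for a contract key with its maintainers.
final case class GlobalKey(maintainers: Set[String])

// Applied to both lookupByKey and fetchByKey: retrieving a key is only
// authorized if the submitter is one of its maintainers.
def authorizeKeyLookup(submitter: String, key: GlobalKey): Either[String, Unit] =
  if (key.maintainers.contains(submitter)) Right(())
  else Left(s"submitter $submitter is not a maintainer of the key")
```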
Given that we already made the max message size configurable, it only
seems reasonable to also make the timeout configurable, and on very
large projects we do sometimes hit it.
Previously we sometimes ended up declaring the expected exit of the
scenario service as an unexpected exit. This PR addresses this by
adding a new variable that we set to True before we ask the scenario
service to exit by closing stdin.
* Allow controlling the gRPC message limit via daml.yaml
We have had to raise that limit in the past, since it caused issues on
large projects, so it makes sense to make it configurable.
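For illustration only: the option itself lives in daml.yaml and is
plumbed through the Haskell client, but the equivalent knob on a standard
grpc-java channel looks like this (host, port, and the 128 MiB value are
hypothetical):
```scala
import io.grpc.ManagedChannelBuilder

// Raise the per-message size limit on the client channel; without
// this, messages larger than the default limit are rejected.
val maxMessageBytes: Int = 128 * 1024 * 1024 // hypothetical 128 MiB

val channel =
  ManagedChannelBuilder
    .forAddress("localhost", 6865) // hypothetical endpoint
    .usePlaintext()
    .maxInboundMessageSize(maxMessageBytes)
    .build()
```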
* Update unreleased.rst
Co-Authored-By: Beth Aitman <bethaitman@users.noreply.github.com>
* specify the new fields for interning package IDs (the overall scheme is sketched after this list)
* percolating intern tables
* crawl packages for package IDs as a pre-phase: generic prep
* stub case for interned_id in Scala packageref reader
* HasPackageRefs instances for the rest of the ast
* make intern table and use when encoding PackageRefs in v1
* don't need where
* stub out decode for interned package IDs
* no benefit to using uint32 instead of uint64
* percolate in encode one step
* interned case for decoding PackageRefs
* naming details
* intern table decoder
* finish propagating the intern table in encoder
* encode the package ID table
* document the vital assumption of encodeInternedPackageIds
* propagate the intern table through the LF decoder
- done by stacking ReaderT on top of Decode internally,
as discussed with @hurryabit
* daml-lf-proto requires mtl
* stub out interned case in Scala LF decoder
* stub interface decoder function
* get the interned table to most places in InterfaceReader
* support for interned package IDs in Scala decoder
* use ImmArraySeq instead of Vector for Scala intern decode table
* adding that ghc extension didn't make sense
* implement interned ID decoding for InterfaceReader
* scenario service won't have interned package IDs
* test the interned ID resolution in Scala by examining the proto -> AST in detail
* proper precondition for the dev phase of interned IDs testing
* better error reporting for malformed DALFs in intern test
* just import Data.Int
- suggested by @neil-da; thanks
* pass around the lookup function instead of the vector in decoder
- suggested by @neil-da; thanks
* remove derivations for types deleted in e63b012d2d
* rename VersionAware to EncodeCtx
- suggested by @hurryabit; thanks
* rename MDecode to MonadDecode
- suggested by @hurryabit; thanks
* pass a function through the encoder instead of a set
- based on suggestions by @hurryabit and @neil-da; thanks
* daml-ghc test that interned IDs are generated
- suggested by @hurryabit; thanks
* adapt to 5b480c99ec (#1844)
* add scenario Module decoding to Decode.OfPackage
* use purely data-driven decoding in scenario service in Scala
- decouples scenario service from LF decoder implementation
* make DecodeV1 companion private
* make extension to LFv2 more obvious
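The commits above implement the scheme in both the Haskell encoder and
the Scala decoder; here is a hedged sketch of the core idea with
simplified types: collect all package IDs up front, assign dense indices,
encode references as indices, and decode by table lookup.
```scala
// Build the intern table: each distinct package ID gets a dense index.
def buildInternTable(pkgIds: Seq[String]): (Vector[String], Map[String, Int]) = {
  val table = pkgIds.distinct.toVector
  (table, table.zipWithIndex.toMap)
}

// Encoding: a package reference becomes its index in the table.
def encodeRef(pkgId: String, index: Map[String, Int]): Int =
  index(pkgId)

// Decoding: look the index back up, failing loudly on a malformed DALF.
def decodeRef(i: Int, table: Vector[String]): String =
  if (0 <= i && i < table.length) table(i)
  else throw new IllegalArgumentException(s"invalid interned package ID index: $i")
```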
Previously, we always reverted to the table view when the scenario
results changed. This PR changes that to preserve the selected view.
I’ve also tested that this does not break backwards compatibility: if
you use the newer extension with an older SDK, you will get the
previous behavior of reverting to the table view, and you can still
switch by clicking on the button.
Fixes #1675
This is a temporary fix to unblock users who have run into this
limit. The long-term solution here is to use compression and/or make
the limit configurable.