* add -Ywarn-unused to all scalac options
* remove some unused arguments
* remove some unused definitions
* remove some unused variable names
* suppress some unused variable names
* changeExtension doesn't use baseName
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* work around no plugins in scenario interpreter perf tests
* remove many more unused things
* remove more unused things, restore some used things
* remove more unused things, restore a couple signature mistakes
* missed import
* unused argument
* remove more unused loggingContexts
* some unused code in triggers
* some unused code in sandbox and kvutils
* some unused code in repl-service and daml-script
* some unused code in bindings-rxjava tests
* some unused code in triggers runner
* more comments on silent usages
- suggested by @cocreature; thanks
* fix missing reference in TestCommands
* more unused in triggers
* more unused in sandbox
* more unused in daml-script
* more unused in ledger-client tests
* more unused in triggers
* more unused in kvutils
* more unused in daml-script
* more unused in sandbox
* remove unused in ledger-api-test-tool
* suppress final special case for codegen unused warnings
.../com/daml/sample/mymain/ContractIdNT.scala:24: warning: parameter value ev 0 in method ContractIdNT Value is never used
implicit def `ContractIdNT Value`[a_a1dk](implicit `ev 0`: ` lfdomainapi`.Value[a_a1dk]): ` lfdomainapi`.Value[_root_.com.daml.sample.MyMain.ContractIdNT[a_a1dk]] = {
^
.../com/daml/sample/mymain/ContractIdNT.scala:41: warning: parameter value eva_a1dk in method ContractIdNT LfEncodable is never used
implicit def `ContractIdNT LfEncodable`[a_a1dk](implicit eva_a1dk: ` lfdomainapi`.encoding.LfEncodable[a_a1dk]): ` lfdomainapi`.encoding.LfEncodable[_root_.com.daml.sample.MyMain.ContractIdNT[a_a1dk]] = {
^
* one more unused in daml-script
* special scaladoc rules may need silencer, too
* unused in compatibility/sandbox-migration
* more commas, a different way to `find`
- suggested by @remyhaemmerle-da; thanks
* Bazel: Remove the reference in BAZEL-JVM.md to Artifactory.
That link won't work for external contributors.
CHANGELOG_BEGIN
CHANGELOG_END
* Bazel: Hashes need 64 hex digits, not a random number.
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
This is a pre-step for ANF, which will allow ANF expression forms to call `executeMatchAlts` directly, rather than always from the `KMatch` continuation.
changelog_begin
changelog_end
The "output was not created" errors seem to have become very
frequent. While taking out nodes seems to work as a bandaid, I’d like
to see if resetting the cache buys us a few days of not having to deal
with this. Admittedly, I don’t really have an explanation for why
resetting the cache should help if taking out the machines seems to do
something (suggesting that it hasn’t propagated fully).
changelog_begin
changelog_end
This is another small PR split out from the ANF work.
Improve explore-dar to allow the dar file to be selected on the command line, so the json-parser example can be run like this:
bazel run daml-lf/interpreter/perf:explore-dar -- --base JsonParser pipeline --arg 10
As well as a speed test:
bazel run daml-lf/interpreter/perf:speed-json-parser
This example is a nice quick smoke test to check that Speedy execution is not broken.
changelog_begin
changelog_end
* DAML Engine: Speed up foldr
This PR replaces Speedy's current implementation of `foldr`, which
basically does nothing more than replace the builtin `foldr` with an
expression for its standard implementation in a functional language,
with a more loop-like implementation that takes advantage of the
structure of Speedy. There's additional complexity for the case when
the step function expects only one more argument before it performs
some computation. The exact issue and the solution are explained in a
comment in the code.
I've benchmarked the application of `foldr (+) 0` to the pre-computed
list `[1..100_000]`. With the old implementation of `foldr` this took
ca. 35ms, with the new implementation ca. 11ms. Further experiments
indicate that out of these times ca. 5ms are spent applying the
arguments to `(+)` and performing the addition. Thus, the time actually
spent in `foldr` amounts to 30ms and 6ms, respectively. This means
the new implementation of `foldr` is ca. 5x faster than the old
implementation when the step function takes at least two arguments.
For the case of a step function which expects only one argument before
performing computation, I've benchmarked the application of
`foldr (\_ -> identity) 0` to the pre-computed list `[1..100_000]`. The
measured times have dropped from ca. 42ms to ca. 19ms. Further
experiments indicate that the time spent in `foldr` itself has dropped
from ca. 32ms to ca. 9ms.
CHANGELOG_BEGIN
- [DAML Engine] The performance of `foldr` has been improved by more
than 4x.
CHANGELOG_END
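As a rough illustration of the two styles contrasted above (this is not the actual Speedy code, which operates on machine continuations, not Scala lists; both function names here are hypothetical):

```scala
// Illustrative only: contrasts a "standard functional" foldr with a
// loop-like variant, in the spirit of the change described above.

// Naive right fold: one recursive call (and stack frame) per element.
def foldrNaive[A, B](xs: List[A], z: B)(f: (A, B) => B): B =
  xs match {
    case Nil    => z
    case h :: t => f(h, foldrNaive(t, z)(f))
  }

// Loop-like variant: traverse the list once to reverse it, then fold
// with a while loop, avoiding deep recursion.
def foldrLoop[A, B](xs: List[A], z: B)(f: (A, B) => B): B = {
  var acc = z
  var rev = xs.reverse
  while (rev.nonEmpty) {
    acc = f(rev.head, acc)
    rev = rev.tail
  }
  acc
}
```

Both compute the same result; the loop form only pays for closures at each application of `f`, not for recursion.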
CHANGELOG_BEGIN
CHANGELOG_END
* Add explanation of how the extra complexity could have been avoided
CHANGELOG_BEGIN
CHANGELOG_END
* remove unused definitions, params, args from sandbox Scala code
CHANGELOG_BEGIN
CHANGELOG_END
* remove unused loggingContext from sandbox
* pass pageSize along in JdbcLedgerDaoTransactionsSpec
- seems to have been the intent of the parameter, and at the moment it
is semantically identical
* remove unused definitions, params, args from kvutils Scala code
CHANGELOG_BEGIN
CHANGELOG_END
* label desired default for enclose compression argument, should it come into use
- suggested by @fabiotudone-da; thanks
* type-alias a couple of ProcessSubmission's args to label what they are
- suggested by @fabiotudone-da; thanks
* reformat after fixing merge
* more unused in kvutils
* define enclose's "default" compression as a constant
- suggested by @miklos-da; thanks
https://github.com/digital-asset/daml/pull/6992#discussion_r466489923
* participant-integration-api: In `JdbcIndexer`, log with context.
We were not providing the correct `loggingContext` to
`JdbcIndexer#handleStateUpdate`. This means we were just dropping useful
information. This adds the implicit so that it uses the correct logging
context.
There's a bigger problem, in that there are multiple logging contexts in
scope, making this very error prone. We'll need to figure out a way to
avoid this as much as possible.
CHANGELOG_BEGIN
CHANGELOG_END
* participant-integration-api: Purge unnecessary newlines in JdbcIndexer.
* testCreateAndExercise test-case
* CreateAndExerciseCommand in DAML Script service
changelog_begin
changelog_end
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
Not quite sure if this will help, but I’ve only seen checksum
mismatches here and not for other Maven artifacts, so it’s worth a try at least.
changelog_begin
changelog_end
* help Scala codegen output by passing actor along
* don't generate unused ` view` variables
* macroexpansion replaces _ with a variable name; avoid this
* be explicit about scope of generated PackageIDs object, to avoid warning
* remove silent annotations, which aren't used yet
CHANGELOG_BEGIN
CHANGELOG_END
* Test exerciseByKeyCmd in DAML Script service
* ExerciseByKeyCommand in DAML Script service
changelog_begin
changelog_end
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
* Perf test scenario for query with variable ACS, WIP
* WIP
* change ACS with every query
exercise a choice + create a new contract to keep ACS size the same
* change ACS with every query
running exercise and create in parallel with the query
* exercise Archive instead of Transfer
* Adding copyright header
* Thanks @S11001001
* Create scala library for integrity checking tools.
CHANGELOG_BEGIN
CHANGELOG_END
* Moved integrity checking drivers into separate package.
* First define the scala library then the rest.
* Added missing header.
* Moved all export related code to under package kvutils.tools.export.
* Added missing header.
* Make all binaries depend on the library and not need sources.
This PR attempts to add some automation around assigning release
management. The PR adds a file `release/rotation`; each week, the
updated CI cron job will:
- Open a PR for the new release [as current].
- Assign the first user in the file to that PR.
- Add the Standard-Change label to the PR.
- Start the build for that PR [as current].
- Open a new PR that rotates the `release/rotation` file, i.e. pushes back
the first line to the end of the file.
This PR also adds mentions of the "release handler" (the first line of
`release/rotation`) to the various messages we send to Slack along the
release process.
The initial state of the `release/rotation` file has been created by
listing all the volunteers (Language team, Application Runtime team, as
well as @SamirTalwar-DA and @stefanobaghino-da) and piping the file
through `shuf`. (Then I put myself at the top so I can hopefully iron
out the issues with the first attempt.)
CHANGELOG_BEGIN
CHANGELOG_END
Just gives us a bit more typesafety until we really need to drop it
and avoids a bunch of ugly ChoiceName.assertFromString
changelog_begin
changelog_end
For the script service, we don’t need any conversion, so storing SValue
is faster (and easier, since converting back needs type info). For
other clients, calling `toValue` is easy enough.
changelog_begin
changelog_end
* Factor out tar/gzip reproducibility flags
* use mktgz in package-app
* Bazel managed tar/gzip
* Remove quiet = True
As stated in the comment, this is no longer required with Bazel >= 3.0.
* Build package-app as a sh_binary
This way Bazel will manage the runtime dependencies tar, gzip, mktgz,
and patchelf.
package-app.sh changes directory so it needs to make sure that all paths
are absolute and that the runfiles tree/manifest location is forwarded
to programs called by package-app.sh.
* Avoid file path too long errors
* Fix readlink -f on MacOS
* Document abspath
changelog_begin
changelog_end
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
* Extend the scenario service with DAML Script support
This adds most of the infrastructure for running DAML Script via the
scenario service which means it runs as part of DAML Studio and `daml
test`. This is hidden behind a feature flag so we can land this and
parallelize the remaining tasks. The main things that are missing are:
1. `createAndExerciseCmd` and `exerciseByKeyCmd`.
2. Party management needs some work and listing parties is
unsupported.
3. Time management
4. Potentially some better error handling (we need to go through
SResult and SError and see what is relevant for us).
Overall, it is already in a very usable state and there is a decent
range of tests.
closes #3688
changelog_begin
changelog_end
* Update compiler/damlc/daml-ide-core/src/Development/IDE/Core/Rules/Daml.hs
Co-authored-by: Andreas Herrmann <42969706+aherrmann-da@users.noreply.github.com>
* Fix name for actor system and pool
changelog_begin
changelog_end
Co-authored-by: Andreas Herrmann <42969706+aherrmann-da@users.noreply.github.com>
* Stratify Speedy builtins into pure/effectful.
This is a preparatory change for the ANF translation, which can treat pure
builtins more efficiently.
- Parent: `SBuiltin` (which is effectful).
- Child: `SBuiltinPure` (which is pure).
Effectful builtin functions may raise `SpeedyHungry` exceptions or change
machine state. Pure builtins can be treated specially because their evaluation
is immediate.
The interface to effectful builtins remains:
def execute(args: util.ArrayList[SValue], machine: Machine): Unit
For pure builtins the interface is:
def executePure(args: util.ArrayList[SValue]): SValue
changelog_begin
changelog_end
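A minimal sketch of the stratification described above, keeping the two `execute`/`executePure` signatures from the commit message (the value and machine types here are hypothetical stand-ins, not Speedy's real ones):

```scala
import java.util

// Hypothetical stand-ins for Speedy's value and machine types.
sealed trait SValue
final case class SInt64(value: Long) extends SValue
final class Machine { var returnValue: SValue = null }

// Parent: effectful builtins may change machine state or throw control
// exceptions (e.g. SpeedyHungry in the real machine).
abstract class SBuiltin {
  def execute(args: util.ArrayList[SValue], machine: Machine): Unit
}

// Child: pure builtins compute a result immediately from their arguments,
// so the generic effectful interface can be implemented once, here.
abstract class SBuiltinPure extends SBuiltin {
  def executePure(args: util.ArrayList[SValue]): SValue
  final override def execute(args: util.ArrayList[SValue], machine: Machine): Unit =
    machine.returnValue = executePure(args)
}

// Example pure builtin: 64-bit integer addition.
object SBAddInt64 extends SBuiltinPure {
  override def executePure(args: util.ArrayList[SValue]): SValue =
    (args.get(0), args.get(1)) match {
      case (SInt64(a), SInt64(b)) => SInt64(a + b)
      case _                      => sys.error("type mismatch")
    }
}
```

The payoff for ANF is that a caller that statically knows it has an `SBuiltinPure` can invoke `executePure` directly and use the result, with no machine-state round trip.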
* insert trailing comma and newline in 17x execute method args-list
* fix spello
* Move in-mem writer's `ledgerStateAccess.inTransaction` down to committer
* Move `BatchedSubmissionValidator` and spec into `batch` subpackage
* Add `StateAccessingValidatingCommitter` and inherit it in batching one
* Document `StateAccessingValidatingCommitter`
* Generalize the committer for `InMemoryLedgerReaderWriter`
* `envelope` -> `submissionEnvelope` in validating committers
* Add `PreExecutingValidatingCommitter` and sub-components
* Add retry in case of conflict in `PreExecutingValidatingCommitter`
CHANGELOG_BEGIN
CHANGELOG_END
* Fix compilation error
* Hook pre-execution in `daml-on-memory-kv`
* Add fake time updates provider
* Fix `BatchedValidatingCommitterSpec`
* Don't use batched writer with pre-execution
* Fix conflict detection
* Fix out-of-time-bounds detection
* Prefix/unprefix serialized log entry IDs in pre-execution write sets
* Fix: produce an out-of-bounds rejection log entry in transaction rejected cases too
* Fix `SubmissionResult` return in case of repeated pre-exec conflict
* Fidelity level 1: sequential pre-execution
* Documentation for pre-execution support in DAML-on-Memory KV
* Add ledger-on-memory conformance test with pre-execution enabled
* Revert "Fix: produce an out-of-bounds rejection log entry in transaction rejected cases too"
This reverts commit 4df7e26b
* Fix test
* Improve naming and documentation
* Address review comments
* Fix test
* Fix wrong implementation used for `ParticipantStateIntegrationSpecBase` tests
* Address review comments
* Address review comments
* Address minor review comments
* remove unused definitions, params, args from ledger API Scala code
CHANGELOG_BEGIN
- [Ledger API] withTimeProvider removed from CommandClient; this method
has done nothing since the new ledger time model was introduced in
1.0.0. See `issue #6985 <https://github.com/digital-asset/daml/pull/6985>`__.
CHANGELOG_END
* percolate withTimeProvider and label removal elsewhere
* improve error message on failed JSON parsing
Fixes #6971.
Interestingly, all the other cases in that block already had useful
feedback; not sure why this one was missing.
CHANGELOG_BEGIN
CHANGELOG_END
* add tests
* reenable 'restart triggers after shutdown'
CHANGELOG_BEGIN
CHANGELOG_END
* wait for everything to shut down before completing a withTriggerService fixture
- similar to a change to HttpServiceFixture.withHttpService in #4593,
but without the suppression of shutdown errors
* label the WithDb tests
* in CI, test only 'recover packages after shutdown', 50 times
* experiment: Process#destroy appears to be async
* is it in the in-between period?
* partial -> total
* replace some booleans with assertions for better error reporting
* make triggerLog concurrent
* close channel and file in other error cases for port locking
- suggested by @leo-da; thanks
* use port locking instead of port 0 for trigger service fixtures
* destroy one service at a time
* missed continuation in build script
* use assertion language for "restart triggers with update errors"
* Revert "is it in the in-between period?"
This reverts commit 211ebfe9d2.
* use better assertion language for "restart triggers with update errors"
* restore full CI build
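On the `Process#destroy` point above: `destroy` only requests termination and returns immediately, so a fixture must explicitly wait before assuming the process is gone. A minimal sketch (not the trigger-service fixture code; the helper name is made up):

```scala
// Process#destroy is asynchronous: it signals the process but does not
// wait. waitFor blocks until the process has actually exited, which is
// what a test fixture needs before e.g. reusing the process's ports.
def destroyAndAwait(p: Process): Int = {
  p.destroy()
  p.waitFor() // returns the exit code once the process is really gone
}

val p = new ProcessBuilder("sleep", "60").start()
destroyAndAwait(p)
```

Without the `waitFor`, a check made right after `destroy` can observe the process still alive, in the in-between period the experiment above was probing.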
This script is no longer relevant to our internal processes. The report
is now generated by the security team and validated by us, rather than
produced and validated by us.
CHANGELOG_BEGIN
CHANGELOG_END