Changes:
1. Add support for defining pruning benchmarks.
Currently the new top-level `unary` section allows defining only a single `pruning` section.
Declaring both top-level sections `unary` and `streams` in a single workflow config is unsupported.
```
unary:
- type: pruning
name: pruning-or-101
max_duration_objective: 600s
prune_all_divulged_contracts: false
```
2. Add support for optionally submitting non-transient contracts.
```
submission:
...
allow_non_transient_contracts: true
```
* bump canton to 20230202
CHANGELOG_BEGIN
CHANGELOG_END
* DACH-NY/canton#11206 Fix dev option flags for Canton in `ledger-api-test-tool-on-canton`
---------
Co-authored-by: Azure Pipelines Daml Build <support@digitalasset.com>
Co-authored-by: Kirill Zhuravlev <kirill.zhuravlev@digitalasset.com>
A concern with the existing pruning backend tests is that they are relatively complex, particularly because:
- in order to fetch data from the index DB they use production SQL queries, which may have built-in knowledge of pruning offsets,
- production SQL queries are not designed for ergonomic one-off usage.
This PR introduces a set of queries, one for each table subject to pruning, each returning a record for each row in its table. This leads to a more concise and straightforward way to assert which data has or hasn't been pruned.
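A minimal sketch of the assertion style this enables. All names here (`EventRow`, `allCreateEvents`, the in-memory `Map` standing in for the index DB) are invented for illustration; the real backend would issue a trivial `SELECT *` per pruned table instead:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch, not the real storage backend API: one trivial
// "select everything" query per pruned table, returning a record per row,
// so pruning tests can assert directly on the row sets.
public class PruningAssertionSketch {
    // Stand-in for one row of a pruned table such as participant_events_create.
    record EventRow(long eventSequentialId, String contractId) {}

    // Stand-in for "SELECT * FROM participant_events_create".
    static List<EventRow> allCreateEvents(Map<String, List<EventRow>> db) {
        return db.getOrDefault("participant_events_create", List.of());
    }

    public static void main(String[] args) {
        Map<String, List<EventRow>> before = Map.of(
            "participant_events_create",
            List.of(new EventRow(1, "c1"), new EventRow(2, "c2")));
        Map<String, List<EventRow>> after = Map.of(
            "participant_events_create",
            List.of(new EventRow(2, "c2")));
        // Concise, direct assertions about what pruning did or did not remove:
        if (allCreateEvents(before).size() != 2)
            throw new AssertionError("expected two rows before pruning");
        if (!allCreateEvents(after).stream().allMatch(r -> r.eventSequentialId() > 1))
            throw new AssertionError("row 1 should have been pruned");
        System.out.println("ok");
    }
}
```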
Previously the config key `index-service.events-processing-parallelism` was used both when fetching the ACS from the DB
and for the buffered transaction reader.
Now these responsibilities are separated into two new config keys:
1. `index-service.acs-streams.contract-processing-parallelism` (which mirrors the tx streams configs)
2. `index-service.buffered-events-processing-parallelism`
Also moved a larger chunk of code from TransactionReader to ACSReader (mirroring the tx stream readers).
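A sketch of how the two new keys might look in a HOCON-style config file; only the key paths come from the text above, and the values are invented examples, not defaults:
```
index-service {
  acs-streams {
    # parallelism for contract processing when streaming the ACS from the DB
    contract-processing-parallelism = 8
  }
  # parallelism for the buffered transaction reader
  buffered-events-processing-parallelism = 8
}
```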
* confirm that monadifying the package fetch still suppresses the error
* thread ExecutionContext from request
- makes the error less likely
- but still fairly easy to repro with 3 tabs
* experiment with setting executor
* explain that the cache isn't a cache
* random order, maybe
- #3090 mentions keeping the order as a goal; I don't see why we should,
though
* random order with groups of 8
* embed the decoding
- this slows down the processing of a group, yielding somewhat less
granular contention
- and also makes hits cost much less, at the cost of making granular
contention more expensive
* reduce diff size before resolution
- this won't improve contention, but does nearly eliminate the cost of
resolution for already-resolved packages, making hits nearly free
(amortized)
* randomize groups instead
- while groups themselves can overlap with this arrangement, each
costing ParallelLoadFactor granular contention, on average it seems to
perform a little better due to groups never overlapping
* refactor StatusEnvelope to utils
* constant 250ms retry
* detect contention earlier and skip decode
* factor traverseFM
* bump canton to 20230124
* reactivate canton dev test
Co-authored-by: Azure Pipelines Daml Build <support@digitalasset.com>
Co-authored-by: Remy Haemmerle <Remy.Haemmerle@daml.com>
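Two of the ideas in the commits above ("random order with groups of 8" and "reduce diff size before resolution") can be sketched as follows. All names (`PackageGroupingSketch`, `randomGroups`, `toResolve`, the group size of 8 taken from the commit titles) are illustrative stand-ins, not the actual implementation:

```java
import java.util.*;

// Hypothetical sketch: (1) shuffle package ids and split them into groups
// of 8, so concurrent requests rarely contend on the same group at once;
// (2) drop already-resolved packages before resolution, so cache hits
// become nearly free.
public class PackageGroupingSketch {
    static final int GROUP_SIZE = 8;

    // Shuffle, then split into fixed-size groups.
    static List<List<String>> randomGroups(List<String> pkgIds, Random rng) {
        List<String> shuffled = new ArrayList<>(pkgIds);
        Collections.shuffle(shuffled, rng);
        List<List<String>> groups = new ArrayList<>();
        for (int i = 0; i < shuffled.size(); i += GROUP_SIZE) {
            groups.add(shuffled.subList(i, Math.min(i + GROUP_SIZE, shuffled.size())));
        }
        return groups;
    }

    // Only resolve what is not already resolved.
    static Set<String> toResolve(Set<String> requested, Set<String> alreadyResolved) {
        Set<String> diff = new HashSet<>(requested);
        diff.removeAll(alreadyResolved);
        return diff;
    }

    public static void main(String[] args) {
        List<String> pkgs = new ArrayList<>();
        for (int i = 0; i < 20; i++) pkgs.add("pkg-" + i);
        List<List<String>> groups = randomGroups(pkgs, new Random(42));
        if (groups.size() != 3) // 20 ids -> groups of 8 + 8 + 4
            throw new AssertionError("expected 3 groups");
        if (!toResolve(Set.of("a", "b"), Set.of("b")).equals(Set.of("a")))
            throw new AssertionError("already-resolved package should be skipped");
        System.out.println("groups=" + groups.size());
    }
}
```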
* Fix flaky TransactionServiceVisibilityIT
when running against OracleDb.
For example, we have seen TransactionServiceVisibilityIT:TXTreeBlinding
time out waiting for one of the three participants to write to the Oracle
DB.
changelog_begin
changelog_end
* PR feedback
Changes:
* Move acs config keys into their own case class + config key renames.
* Apply the global limit of parallel event id queries (shared with tx streams) to acs streams.
* Replace the acs limit of parallel event payload queries with the global limit (shared with tx streams).
* Assert on participant_meta_table in StorageBackendTestsInitializeIngestion test
* cleanup PartialTransaction API
* Put SValue + GlobalKey in cached contract key
* slight change of the ContractStateMachine API
* drop (unsafe) builder for GlobalKey
* Move ExplicitDisclosureIT to test tool 1.15
since it's no longer tied to the LF dev version in the Engine.
* Merge ED and interfaces conformance test targets