* Change the TelemetryContext to use SpanAttributes instead of raw AttributeKeys
and remove an unused object along with its dependencies
CHANGELOG_BEGIN
CHANGELOG_END
* Change the Tracer name and the instrumentation name from participant to com.daml.telemetry
* Add telemetry classes from the OEM integration kit and use them for command submissions
* Change submitTransaction to submitTransactionWithTelemetry and add a deprecation
* Fix tests
* Revert "Change submitTransaction to submitTransactionWithTelemetry and add a deprecation"
CHANGELOG_BEGIN
- [Integration Kit] TelemetryContext has been introduced to the WriteService.submitTransaction method to support distributed tracing
CHANGELOG_END
* Time lag between contract state events and general dispatcher
* Emit current event sequential id from the indexer.
* Time update registering metric
CHANGELOG_BEGIN
CHANGELOG_END
* Integrate mutable state cache into ReadOnlySqlLedger
* Create tiered ReadOnlySqlLedger for accommodating both legacy and new caching stores
* Integrate event stream lifecycle responsibility into MutableCacheBackedContractStore
* Add missing SQL fragment to ContractsReader fetch query
CHANGELOG_BEGIN
CHANGELOG_END
* Addressed review comments
* ContractStateEventsReader draft based on the POC
* Code formatting
* Moved the contract state event reader as a method to the TransactionsReader.
(required to ease parallel stories development)
* Basic unit tests for reading contract state events
* Removed the dependence on a not-yet-present column.
* Workaround for the lack of 'event_sequential_id' column in the 'parameters' table.
The workaround is intended to be replaced with a proper solution when the append-only schema comes in
* Fixed the unit test for contract state events
* Re-enabled all ledger dao suites
* Included 'JdbcLedgerDaoContractEventsStreamSpec' in the H2 db suites
* Intermediary DTO for contract state events
* Added a comment explaining introduction of the RawContractStateEvent
* Simplified contract state event reading query
* Easier transition for the append-only schema
* Minor refactor
* Minor change
* Ingest contract key for consuming exercise nodes
* CHANGELOG_BEGIN
- [Integration Kit] new streaming query for contract state events
- [Integration Kit] indexing contract keys for consuming exercise events
CHANGELOG_END
* Minor change
* Fixed formatting
* Ingesting key values for consuming exercised events - H2 database
* Addressed review comments
* Moving changes from the 'dao' to the 'appendonlydao' package - ContractStateEventsReader
* Moving changes from the 'dao' to the 'appendonlydao' package - indexing create_contract_key for exercise nodes
* Reduced code duplication - minor
* Reverted changes to the JdbcLedgerDaoSuite
Tests for the ContractStateEventsReader will come in the future
* Restored original naming for the indexed create key
* Improved comments + throwing UnsupportedOperationException
* Added ledger_effective_time to the ContractStateEvent.Created
* Update a metric name
Co-authored-by: mziolekda <marcin.ziolek@digitalasset.com>
* Simplified GlobalKey imports
* Added a comment on events stream parallelism level
* Inline event_kind types in the SQL query for contract state events
* A constant for the event sequential id edge case
* Removed unused imports
Co-authored-by: mziolekda <marcin.ziolek@digitalasset.com>
* LedgerDao and ContractsReader interface updates for mutable state cache implementation
* Factored the LfValueSerialization cache out of the ContractStore
* Implemented state lookup methods (at valid_at)
This PR does not contain tests for the new lookup methods.
The tests have been extracted to a separate branch and will be merged
once the DAO integration testing suite has been adapted
for the append-only schema.
CHANGELOG_BEGIN
CHANGELOG_END
* Addressed review comments
* Introduce parallel indexer
* Adds parallel indexer in PoC quality
* Adds relevant metrics, and wiring to the parallel indexer code
* Minor fixes to tracing in TransactionsReader
changelog_begin
changelog_end
* Tag all todos
... with 'append-only', so that they are easier to find.
* Refactor metrics
* Remove AverageCounter
Co-authored-by: Robert Autenrieth <robert.autenrieth@digitalasset.com>
* [DPP-142] Explicitly deflate/inflate data outside of the index
* [DPP-142] Explicitly deflate/inflate data outside of the index - review fixes - exposing prepare update parallelism as param
changelog_begin
[Integration Kit] Compression and decompression of stored DAML-LF values
is now executed outside of the index database, making more
efficient use of the participant resources when indexing.
changelog_end
* Pipelined transaction indexing
CHANGELOG_BEGIN
[Integration Kit] The participant indexer (for PostgreSQL)
can now execute DAML transaction insertions in three pipelined stages.
CHANGELOG_END
* Make participant-integration-api test suite `large` for Bazel
* Fixed constant timeout for MacOS builds
* Moved ledger end guard to TransactionReader
* Removed TransactionServiceResponseValidator
* Removed MetadataUpdate intermediary level from Update
* Added back store_ledger_entry timer
* Updated comment for idempotent insertions.
* Port more of //ledger/... to Scala 2.13
changelog_begin
changelog_end
* Remove unused dependency
changelog_begin
changelog_end
* Rename bf to factory to reflect the fact that it’s now a Factory
changelog_begin
changelog_end
* Use regex match instead of sliding string equality
changelog_begin
changelog_end
* regex matches are bad
changelog_begin
changelog_end
This PR updates scalafmt and enables trailingCommas =
multiple. Unfortunately, scalafmt broke the version field which means
we cannot fully preserve the rest of the config. I’ve made some
attempts to stay reasonably close to the original config but couldn’t
find an exact equivalent in a lot of cases. I don’t feel strongly
about any of the settings so happy to change them to something else.
As announced, this will be merged on Saturday to avoid too many conflicts.
changelog_begin
changelog_end
* Port damlc dependencies to Scala 2.13
I got a bit fed up by the fact that going directory by directory
didn’t really work since there are too many interdependencies in
tests (e.g., client tests depend on sandbox, sandbox tests depend on
clients, engine tests depend on DARs which depend on damlc, …).
So before attempting to continue with the per-directory process, this
is a bruteforce approach to break a lot of those cycles by porting all
dependencies of damlc which includes client bindings (for DAML Script)
and Sandbox Classic (also for DAML Script).
If this is too annoying to review let me know and I’ll try to split it
up into a few chunks.
changelog_begin
changelog_end
* Update daml-lf/data/src/main/2.13/com/daml/lf/data/LawlessTraversals.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* fixup lawlesstraversal
changelog_begin
changelog_end
* less iterator more view
changelog_begin
changelog_end
* document safety of unsafeWrapArray
changelog_begin
changelog_end
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
This is necessary to at least attempt an upgrade to 2.13 and, more
generally, I want to keep our rulesets up to date. rules-scala forces the
version of scalatest, so we have to bump that at the same time.
This requires changes to basically all Scala test suites since the
import structure has changed and a bunch of things (primarily
scalacheck support) got split out.
Apologies for the giant PR, I don’t see a way to keep it smaller.
changelog_begin
changelog_end
* Participant pruning ledger api server support ported from Canton
CHANGELOG_BEGIN
- [Ledger API]: The preview of `ParticipantPruningService` enables ledger participants to prune the "front" of the ledger state at the participant, including the ledger API server index.
CHANGELOG_END
* Review feedback from Stefano
* Add pruning tests plus missed command completions change
* Review feedback from Robert
* Improved test readability by having populate helper return offsets
* Review feedback
* Ledger api changes to pruning api and disable canton pruning test
- Change return result to PruneResponse
- Change type of PruneRequest.prune_up_to to string
* Review feedback: Use ApiOffsetSConverter for logged offsets
This PR creates 3 validation modes:
* `Strict`: Specifies that the committer should validate the packages
before committing them to the ledger. When using this mode, the
packages committed to the ledger can be fully trusted and do not
have to be validated when loaded into the engine.
* `Lenient`: Specifies that the committer should perform a fast
validation of the packages before committing them to the ledger.
This mode is useful for ledger integrations that cannot handle
long-running submissions (> 10s). When using this mode, the
packages committed to the ledger cannot be trusted and must be
validated every time they are loaded into the engine.
* `No`: Specifies that the committer should not perform any
validation of the packages before committing them to the ledger. This
should be used only by non-distributed ledgers, like DAML-on-SQL,
where the validation done in the API server can be trusted.
This PR creates 3 preloading modes:
* `Synchronous`: Specifies that the packages should be preloaded
into the engine before being committed.
* `Asynchronous`: Specifies that the packages should be preloaded into
the engine asynchronously with the rest of the commit process. This
mode is useful for ledger integrations that cannot handle
long-running submissions (> 10s). Failure of the preloading process
will not affect the commit.
* `No`: Specifies that the packages should not be preloaded into
the engine.
CHANGELOG_BEGIN
- [Integration Kit] In kvutils, add metric
daml.kvutils.committer.package_upload.validate_timer to track
package validation time.
CHANGELOG_END
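The two mode sets above can be sketched as plain Scala ADTs. This is a minimal sketch whose type names mirror the descriptions, not necessarily the actual kvutils configuration classes:

```scala
// Hypothetical ADTs mirroring the mode descriptions above; the real
// kvutils configuration types may differ in name and shape.
sealed abstract class PackageValidationMode
object PackageValidationMode {
  case object Strict extends PackageValidationMode  // full validation; packages trusted afterwards
  case object Lenient extends PackageValidationMode // fast validation; revalidate on engine load
  case object No extends PackageValidationMode      // no validation; only for non-distributed ledgers
}

sealed abstract class PackagePreloadingMode
object PackagePreloadingMode {
  case object Synchronous extends PackagePreloadingMode  // preload before committing
  case object Asynchronous extends PackagePreloadingMode // preload alongside the commit
  case object No extends PackagePreloadingMode           // never preload
}
```

Modeling the modes as sealed hierarchies lets the compiler check that every committer code path handles all three cases.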
* [KVL-519] Instrument command service queues
changelog_begin
changelog_end
* Instrument max-in-flight queue
* Document inputBuffer and maxInFlight metrics
changelog_begin
[Sandbox] New metrics tracking the pending submissions and completions on the
CommandService. Check out the Metrics section in the sandbox documentation
for more details. The new metrics are input_buffer_size, input_buffer_saturation,
max_in_flight_size and max_in_flight_saturation.
changelog_end
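A minimal sketch of how such size/saturation metrics can be derived from a bounded submission queue. `InstrumentedQueue` and its members are illustrative names, not the actual CommandService implementation:

```scala
import java.util.concurrent.ArrayBlockingQueue
import java.util.concurrent.atomic.AtomicLong

// Illustrative bounded queue exposing the two metric kinds mentioned
// above: an absolute length (cf. input_buffer_size) and a saturation
// ratio (cf. input_buffer_saturation). Names are assumptions.
final class InstrumentedQueue[A](capacity: Int) {
  private val queue = new ArrayBlockingQueue[A](capacity)
  private val length = new AtomicLong(0)

  def size: Long = length.get()
  def saturation: Double = length.get().toDouble / capacity

  def offer(element: A): Boolean = {
    val accepted = queue.offer(element)
    if (accepted) length.incrementAndGet()
    accepted
  }

  def poll(): Option[A] = {
    val element = Option(queue.poll())
    element.foreach(_ => length.decrementAndGet())
    element
  }
}
```

A metrics registry would read `size` and `saturation` as gauges; tracking the length in a separate counter avoids calling `queue.size` (an O(1) but contended operation) on every report.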
* Fix compilation issues (1)
* Fix title underline in docs
* Refactoring of InstrumentedSource
- Rename saturation/size to length/capacity to make it more obvious what they are.
- Move the InstrumentedSource to ledger/metrics. Fits there better, with the utilities
there already for futures. Arguably both should move into the libs-scala package at some point though.
- Expand the tests and make the tests less flaky. 200 runs complete fine now.
- Inc/dec the capacity counter within InstrumentedSource.
* Add missing copyright header
* Reformat
* Update ledger/metrics/src/test/scala/com/daml/metrics/InstrumentedSourceSpec.scala
Co-authored-by: hanshoglund-da <67470727+hanshoglund-da@users.noreply.github.com>
* Fix title underline in docs (again)
Co-authored-by: Jussi Maki <jussi.maki@digitalasset.com>
Co-authored-by: hanshoglund-da <67470727+hanshoglund-da@users.noreply.github.com>
* Add metrics for concurrent commands
* Update readme
CHANGELOG_BEGIN
- [DAML on SQL] Add new metrics for measuring the number
of concurrent command executions. The metrics are:
daml.commands.submissions_running, daml.execution.total_running,
daml.execution.engine_running
CHANGELOG_END
* metrics: Support tagged Futures when timing.
* ledger-on-sql: Use tagged execution contexts in `Database`.
We have to deal with multiple execution contexts in `Database`. This
makes it possible to use them implicitly, which is much cleaner.
CHANGELOG_BEGIN
CHANGELOG_END
* ledger-on-sql: Simplify `Database` a little.
* ledger-on-sql: Make the connection pool implicit.
* ledger-on-sql: Move the execution context into the connection pool.
* ledger-on-sql: Make connection pools more implicit.
* ledger-on-sql: Use the `sc` prefix for `scala.concurrent`.
* ledger-on-sql: Remove an unnecessary import.
* concurrent: Tag DirectExecutionContext.
1. Tag `DirectExecutionContext` as `ExecutionContext[Nothing]`, thereby
stating that it works for any tagged `Future`.
2. Move `DirectExecutionContext` to the _libs-scala/concurrent_
library, as it requires it and it's tiny.
CHANGELOG_BEGIN
CHANGELOG_END
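The `Nothing`-tagging trick can be sketched with a covariant phantom type parameter. `TaggedEC`, `DbTag`, and `runOn` are illustrative names, not the libs-scala/concurrent API:

```scala
import scala.concurrent.ExecutionContext

// Sketch of tagging an execution context with a phantom type parameter.
final case class TaggedEC[+Tag](delegate: ExecutionContext)

sealed trait DbTag // e.g. marks the database thread pool

object TaggedEC {
  // Runs callbacks on the calling thread. Tagged with Nothing, so by
  // covariance it is a valid TaggedEC[Tag] for every Tag (Nothing is a
  // subtype of all types), mirroring how DirectExecutionContext can
  // serve any tagged Future.
  val direct: TaggedEC[Nothing] = TaggedEC(new ExecutionContext {
    def execute(runnable: Runnable): Unit = runnable.run()
    def reportFailure(cause: Throwable): Unit = throw cause
  })
}

def runOn[Tag](body: => Unit)(implicit ec: TaggedEC[Tag]): Unit =
  ec.delegate.execute(() => body)
```

Because the tag is purely a compile-time marker, mixing up thread pools becomes a type error instead of a runtime surprise, while `direct` stays usable everywhere.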
* concurrent: Fix the privacy of `DirectExecutionContextInternal`.
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
Before, we could only replace the value of a var gauge wholesale. Now, we can not only set such a value but also update it in place atomically.
CHANGELOG_BEGIN
CHANGELOG_END
This is the same technique as `DerivativeGauge` from the metrics
library, but with less code, because Scala is prettier than Java.
CHANGELOG_BEGIN
CHANGELOG_END
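A minimal sketch of such a gauge backed by `AtomicReference`; the class name follows the description above, but the body is illustrative rather than the actual metrics code:

```scala
import java.util.concurrent.atomic.AtomicReference

// A gauge whose value can be both replaced and transformed atomically.
// updateValue(f) applies f in a single atomic step via updateAndGet,
// so concurrent updates never lose writes.
final class VarGauge[T](initial: T) {
  private val ref = new AtomicReference[T](initial)
  def getValue: T = ref.get()
  def setValue(value: T): Unit = ref.set(value)
  def updateValue(f: T => T): Unit = ref.updateAndGet(current => f(current))
}
```

In a real Dropwizard setup this would also extend `com.codahale.metrics.Gauge[T]`; that is omitted here to keep the sketch dependency-free.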
* [KVL-222] Add participant id to index metadata dump
changelog_begin
changelog_end
* Test SqlLedger participant id initialization
* Test JdbcIndexer participant id initialization
* Make RecoveringIndexerSpec final and remove unused trait
* metrics: Factor out registering a gauge.
* metrics: Don't return when registering a gauge. Nothing uses it.
CHANGELOG_BEGIN
CHANGELOG_END
* metrics: Merge redundant tests.
* Added a ledger writer that chooses between instances based on estimated interpretation cost.
CHANGELOG_BEGIN
CHANGELOG_END
* Code tidying.
* Delegate to the pre-executing writer in case the threshold is set to 0.
* Added ability to change metrics.
* Added metrics.
* Code tidying.
* Update ledger/participant-state/kvutils/src/main/scala/com/daml/ledger/participant/state/kvutils/api/InterpretationCostBasedLedgerWriterChooser.scala
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
* Added metrics for decoding and total time of pre-execution.
CHANGELOG_BEGIN
CHANGELOG_END
* Code tidying.
* Added timer for tracking time spent with generating write sets.
* row_id changes
* fixing inserts
* replacing offset with row_id in the flat transaction stream queries
* fixing flat transaction query, updating H2 migration script
* fixing formatting
* ACS query pagination relies on row_id instead of ledger offset
* give a name to the index that we have to drop
* give a name to the index
* Fixing events range query: it can return SQL nulls on an empty DB.
* remove the debug println
* remove outdated comment
* removing unused orderByColumns constant
* getting rid of new `Source.flatMapConcat` calls that were added as part of this PR.
CHANGELOG_BEGIN
1. ACS, flat transaction, and transaction tree stream pagination is now based on event_sequential_id instead of event_offset.
2. Events are now ordered by insertion order: order by event_sequential_id instead of order by (event_offset, transaction_id, node_index).
CHANGELOG_END
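The pagination change above amounts to keyset pagination on `event_sequential_id`. A minimal sketch, assuming a hypothetical `fetchPage` function in place of the real SQL-backed streams:

```scala
// Illustrative keyset pagination: each page starts strictly after the
// last event_sequential_id seen, so results follow insertion order and
// no offset arithmetic is needed.
final case class RawEvent(eventSequentialId: Long, payload: String)

def paginate(
    fetchPage: (Long, Int) => Vector[RawEvent], // (startExclusive, pageSize) => page
    pageSize: Int
): Iterator[RawEvent] =
  Iterator
    .iterate(fetchPage(0L, pageSize)) { previous =>
      if (previous.isEmpty) Vector.empty[RawEvent]
      else fetchPage(previous.last.eventSequentialId, pageSize)
    }
    .takeWhile(_.nonEmpty)
    .flatten
```

In SQL terms each page is a `WHERE event_sequential_id > ? ORDER BY event_sequential_id LIMIT ?` query, which an index on the id column can serve without scanning skipped rows, unlike `OFFSET`-based paging.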
* reverting changes to V13 H2 migration script,
figuring out the name of the index that has to be dropped
* Addressing code review comments:
- replacing scalaz Option.cata with stdlib Option.fold
- moving implicit val def into import
* Addressing code review comments:
- extracting re-usable stream query functions
* forcing postgres to use index when looking up lower and upper bound row ids
* fixing the query when it is run on an empty ledger
* resolving rebase conflicts
* Update ledger/sandbox/src/main/scala/com/digitalasset/platform/store/dao/events/EventsRange.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* fetching a single row, fetchSize should NOT matter
* Adding integration test to reproduce invalid order of archived, created events
The test fails, which is expected.
* Fixing the order of archived, created events triggered by exercise
* Addressing code review comments and cleaning up
* Renaming row_id to event_sequential_id
* Investigating flaky tests
* Fixing formatting
* Revert HOTFIX-flaky-client-server changes
`bazel test --runs_per_test=50 //ledger/participant-state/kvutils:reference-ledger-dump` passed on this branch.
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Add additional metrics when storing transactions
Since event witnesses will soon be denormalized into the participant_events
table, I did not include metrics right now.
CHANGELOG_BEGIN
[DAML Ledger Integration Kit] Add additional metrics for storing transactions. The overall time is measured by ``daml.index.db.store_ledger_entry``.
- Timer ``daml.index.db.store_ledger_entry.prepare_batches``: measures the time for preparing batch insert/delete statements
- Timer ``daml.index.db.store_ledger_entry.events_batch``: measures the time for inserting events
- Timer ``daml.index.db.store_ledger_entry.delete_contract_witnesses_batch``: measures the time for deleting contract witnesses
- Timer ``daml.index.db.store_ledger_entry.delete_contracts_batch``: measures the time for deleting contracts
- Timer ``daml.index.db.store_ledger_entry.insert_contracts_batch``: measures the time for inserting contracts
- Timer ``daml.index.db.store_ledger_entry.insert_contract_witnesses_batch``: measures the time for inserting contract witnesses
- Timer ``daml.index.db.store_ledger_entry.insert_completion``: measures the time for inserting the completion
- Timer ``daml.index.db.store_ledger_entry.update_ledger_end``: measures the time for updating the ledger end
[Sandbox Classic] Added Timer ``daml.index.db.store_ledger_entry.commit_validation``: measures the time for commit validation in Sandbox Classic
CHANGELOG_END
* Refactoring: rename metrics *dao to *DbMetrics
CHANGELOG_BEGIN
[DAML Ledger Integration Kit]: Added 4 new metrics for more detailed execution time statistics:
- Timer ``daml.execution.lookup_active_contract_per_execution``: measures the accumulated time spent for looking up active contracts per execution
- Histogram ``daml.execution.lookup_active_contract_count_per_execution``: measures the number of active contract lookups per execution
- Timer ``daml.execution.lookup_contract_key_per_execution``: measures the accumulated time spent for looking up contract keys per execution
- Histogram ``daml.execution.lookup_contract_key_count_per_execution``: measures the number of contract key lookups per execution
CHANGELOG_END