CHANGELOG_BEGIN
ledger-api - The command deduplication period can now be specified by setting `deduplication_offset` instead of `deduplication_time` (only valid for the v2 WriteService). This change is backwards compatible.
CHANGELOG_END
* Propagate the enriched deduplicationPeriod instead of deduplication duration
* Update the Haskell bindings for the new deduplication period
* Calculate the deduplicateUntil using the new deduplication period for backward compat
* Use consistent naming for deduplication_period
* Cleanup command timeout extraction from deduplication period
* Add the required deduplication_offset to deduplication instead of deduplication_start
* Update haskell bindings to support deduplication_offset
* Add support for deduplication_offset in the ledger-api
* Remove the timestamp-based deduplication from our models to simplify upgrade for users
* Add optional conformance test for offset based deduplication
* Remove buf rule for FIELD_SAME_ONEOF as our change is backwards compatible
* Disable FIELD_SAME_ONEOF buf check for commands file
* Apply suggestions from code review
Co-authored-by: Miklos <57664299+miklos-da@users.noreply.github.com>
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
* Update comment for deduplication period
Co-authored-by: Miklos <57664299+miklos-da@users.noreply.github.com>
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
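The two shapes of deduplication period described above, and the backward-compatible `deduplicateUntil` derivation, can be sketched as a small ADT. This is a minimal illustration with hypothetical names, not the exact ledger-api-domain API:

```scala
import java.time.{Duration, Instant}

// Sketch of the two ways a deduplication period can now be expressed: either a
// duration measured backwards from submission, or an offset on the ledger.
// Names are illustrative, not the exact ledger-api-domain definitions.
sealed trait DeduplicationPeriod
final case class DeduplicationDuration(duration: Duration) extends DeduplicationPeriod
final case class DeduplicationOffset(offset: String) extends DeduplicationPeriod

// Backward compatibility: derive a single "deduplicate until" instant from a
// duration-based period, as older code expected. Offset-based periods have no
// such instant and must be handled by the ledger itself.
def deduplicateUntil(
    submittedAt: Instant,
    period: DeduplicationPeriod,
): Option[Instant] =
  period match {
    case DeduplicationDuration(d) => Some(submittedAt.plus(d))
    case DeduplicationOffset(_) => None
  }
```

The point of the `Option` result is that only time-based periods can be collapsed to a deadline; offset-based periods are opaque to the client.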
* participant-integration-api: Clean up failed trackers.
Otherwise they can hang around forever.
CHANGELOG_BEGIN
CHANGELOG_END
* participant-integration-api: Ensure that all trackers are closed.
* participant-integration-api: Ensure that waiting trackers are closed.
* participant-integration-api: Store the tracker map future in the state.
* participant-integration-api: Fix a race in `TrackerMap`.
If the `Future` supplied to `AsyncResourceState` completes too quickly, the
state may never be set. We need to initialize the state immediately.
* participant-integration-api: Add more comments to `TrackerMap`.
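The race described above can be sketched as follows; the names and shape are hypothetical, not the actual `AsyncResourceState` code:

```scala
import scala.concurrent.Future

// Hypothetical sketch of the fix. The buggy shape kept the state empty and
// only filled it in from a callback on the supplied Future, roughly:
//
//   private var state: Option[Future[T]] = None
//   resource.foreach(r => state = Some(Future.successful(r))) // too late if
//                                                             // `resource` is
//                                                             // already done
//
// The fix stores the Future itself in the state immediately on construction,
// so even an already-completed Future cannot be missed.
final class AsyncResourceState[T](resource: Future[T]) {
  private val state: Future[T] = resource // initialized immediately
  def get: Future[T] = state
}
```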
Create normalized TXs when a partial TX is finalised.
Except in limited cases! (e.g. for scenario-runner, sandbox)
CHANGELOG_BEGIN
CHANGELOG_END
normalize values in the engine as they are converted from speedy-values
fix 2.12 build
backout redundant change
ensure byKey field is correctly normalized when constructed by engine
rename flag: valueNormalization -> transactionNormalization
improve comment
delete commented-out code
rename: toValueNorm --> toNormalizedValue
rename: (SValue.) toValue --> toUnNormalizedValue
revert changes to ptx so that the interface to insertCreate() etc is Value-based (not SValue-based)
improve comments
respell: toUnNormalizedValue --> toUnnormalizedValue
fix build
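Normalization, as used above, strips non-semantic annotations so that structurally equal values compare equal. A toy sketch on a hypothetical value type (the real code works on speedy/LF values) is:

```scala
// Toy sketch of value normalization: record field labels are optional
// annotations with no semantic content, so normalization drops them (and
// recurses into sub-values), allowing normalized values to be compared
// structurally. Types here are illustrative, not the engine's own.
sealed trait Value
final case class VInt(i: Long) extends Value
final case class VRecord(fields: Seq[(Option[String], Value)]) extends Value

def normalize(v: Value): Value =
  v match {
    case VInt(_) => v
    case VRecord(fields) =>
      VRecord(fields.map { case (_, value) => (None, normalize(value)) })
  }
```

With this, an `isReplayedBy`-style check can normalize only the replay side and compare for equality, as the commits above describe.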
* participant-integration-api: Make `TrackerMap` a `Tracker`.
Mostly by moving parameters around.
CHANGELOG_BEGIN
CHANGELOG_END
* participant-integration-api: Clean up `TrackerMap` a little more.
* participant-integration-api: Make `TrackerMap` generic.
* participant-integration-api: Move `Tracker.WithLastSubmission`.
It's only used by `TrackerMap`, so now it's an internal class there.
* participant-integration-api: Only provide the key to `newTracker`.
* participant-integration-api: Subsume the `TrackerMap` cleanup schedule.
* participant-integration-api: Construct the tracker map outside.
* ledger-api-common: Remove some unnecessary braces.
* participant-integration-api: Prettify `TrackerMap` some more.
* participant-integration-api: Make `TrackerMap.selfCleaning` a class.
* participant-integration-api: Add some tests for TrackerMap.
* participant-integration-api: Convert a method to `Runnable`, 2.12-style.
Apparently underscores aren't good enough.
* ledger-api-client: Delete CompletionSuccess#unapply.
It doesn't work on Scala 2.12.
Preparation, small fixes
* Remove unnecessary TODO
* Fix FieldStrategy.idempotentInsert type parameters
Changing the DB-Schema
This change adapts the DB schema for all supported backends. After this change we only populate the party_entries table, and on the query side we reconstruct the state from this.
* Drop the party table
* Add indexing to party_entries
Adapting StorageBackend: ingestion
Since we only ingest party_entries, the party population needs to be removed.
* Drop the party table in ingestion code
* Fixes test
Adapting StorageBackend: queries
Queries need to be adapted to construct the state on the read side.
* Rewrite queries.
* Fixes reset implementations.
Adapting JdbcLedgerDao
Since the underlying storage changed, JdbcLedgerDao can be simplified: no special treatment is needed for duplicate-key errors, since these errors can no longer occur.
Removing the obsolete JdbcLedgerDao tests, and adding a new test that exercises the behavior of the new event-sourced party model. Please note: this database refactoring only applies to the append-only schema, so the test is disabled for the mutating schema.
During implementation a bug surfaced: it was no longer possible to store the is_local information via JdbcLedgerDao.storePartyEntry. Although this is a minor issue, since that method is only used in single-participant environments, a fix was implemented by passing a magic participantId upon non-local party storage.
* Simplify storePartyEntry.
* Fixes bug introduced by append-only.
* adds/adapts tests
Refactoring: remove not used duplicateKeyError from StorageBackend
Changes to JdbcLedgerDao rendered this duplicateKeyError unused.
* Removes unused duplicateKeyError
Adapting sandbox-classic
In sandbox-classic it is not allowed to have updates for parties. Essentially, updates concerning already-existing parties were silently dropped (with logging) and had no effect.
Here I started by pinning down this behaviour in the SqlLedgerSpec and SqlLedgerSpecAppendOnly. These tests were implemented with the original code in mind.
Then adapted the SqlLedger method: ensuring uniqueness by first trying to look up the to-be-persisted party.
* Added tests grabbing a hold on original behavior
* Adapted implementation to ensure same behavior
Switching to correct is_local derivation for party queries as per review
* Adapting implementation: switching to aggregated table and a query on that
* Introducing QueryStrategy.booleanOrAggregationFunction to support Oracle
* Moving party related queries to PartyStorageBackendTemplate
* Fixes JdbcLedgerDaoPartiesSpec tests, and add another test case
Also:
* Align Update interface documentation
* Switching to explicit optionality in party query implementation as per review
Co-authored-by: Simon Meier <meiersi-da@users.noreply.github.com>
CHANGELOG_BEGIN
CHANGELOG_END
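The query-side reconstruction of party state from party_entries, including the is_local aggregation mentioned above, can be sketched as a pure fold. Types and names here are hypothetical, mirroring the SQL-side aggregation rather than reproducing it:

```scala
// Hypothetical sketch: reconstruct the current party set from the
// party_entries event log. `isLocal` is aggregated with boolean OR across all
// accepted entries for a party, mirroring what
// QueryStrategy.booleanOrAggregationFunction does on the SQL side.
final case class PartyEntry(party: String, isLocal: Boolean, accepted: Boolean)

def currentParties(entries: Seq[PartyEntry]): Map[String, Boolean] =
  entries.filter(_.accepted).foldLeft(Map.empty[String, Boolean]) { (acc, e) =>
    acc.updated(e.party, acc.getOrElse(e.party, false) || e.isLocal)
  }
```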
This PR makes it possible to check for contract ID suffixes during
preprocessing.
This is the first part of task 3 described in #10504.
CHANGELOG_BEGIN
CHANGELOG_END
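The suffix check during preprocessing can be sketched like this; the contract ID representation and function names are assumptions, not the engine's actual types:

```scala
// Hypothetical sketch of a contract ID suffix check during preprocessing:
// a suffixed contract ID carries a non-empty suffix after its discriminator,
// so unsuffixed IDs can be rejected up front, before interpretation.
final case class ContractId(discriminator: Vector[Byte], suffix: Vector[Byte])

def ensureSuffixed(cid: ContractId): Either[String, ContractId] =
  if (cid.suffix.isEmpty) Left("unsuffixed contract ID") else Right(cid)
```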
* Augment completion.proto with deduplication-related info
CHANGELOG_BEGIN
CHANGELOG_END
* Explicitly specify fields not yet filled in when building Completion
* Time-based deduplication periods are measured in record time of completions
* Add deduplication_offset as a deduplication_period option
* Don't skip proto field numbers
* CompletionFromTransaction: use default Completion constructor
* submission_rank: reserve proto field for future use
* Add comment about reserved proto field
Previously, if the max deduplication time was extended, the participant
_might_ retain the old time for certain submissions.
CHANGELOG_BEGIN
- [Ledger API Server] The API server manages a single command tracker
per (application ID × submitters) pair. This tracker would read the
current ledger configuration's maximum deduplication time on creation,
but never updated it, leading to trackers that might inadvertently
reject a submission when it should have been accepted. The tracker now
reads the latest ledger configuration.
CHANGELOG_END
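The fix described in the changelog entry can be sketched by injecting a thunk instead of a captured value, so the tracker re-reads the configuration on every submission. Names here are hypothetical:

```scala
import java.time.Duration

// Sketch (names hypothetical): instead of capturing the maximum deduplication
// time once at construction, the tracker takes a function and reads the
// latest ledger configuration on every submission.
final class CommandTracker(currentMaxDeduplicationTime: () => Duration) {
  def accepts(requested: Duration): Boolean =
    requested.compareTo(currentMaxDeduplicationTime()) <= 0
}
```

A tracker built this way picks up a raised maximum immediately, instead of rejecting submissions against a stale limit.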
* Time conversion duration between buffer event and API domain transaction
* Compute partiesTemplates inversion mapping outside events transformation
* Other small optimizations
CHANGELOG_BEGIN
CHANGELOG_END
* Add StorageBackend tests
changelog_begin
changelog_end
* Fix Oracle tests
* Do not use empty byte arrays
* Format
* Fix after rebase
* Substitute type params with type bounded ingest method
* Remove empty line
* Assert on configuration contents
* Fix Oracle build
* Add tests for ingestion initialization
* fmt
* Add test for leftover data after reset
* Add resetAll
* Use resetAll between tests
Co-authored-by: Marton Nagy <marton.nagy@digitalasset.com>
* participant-integration-api: Add logging to RecoveringIndexerSpec.
It's flaky, and I would like to know exactly what's going on here.
* participant-integration-api: Attempt to fix RecoveringIndexerSpec.
The checks for the health status are racy. I'm hoping increasing the
timeouts will help a little.
CHANGELOG_BEGIN
CHANGELOG_END
* participant-integration-api: Inline timeouts in RecoveringIndexerSpec.
The `LedgerConfigurationSubscriptionFromIndexSpec` was flaky due to
stubs not specifying all behavior for `IndexConfigManagementService`.
This fixes the underlying issue by avoiding stubs in favor of fakes,
which means that _all_ behavior must be modelled.
Note: Martin Fowler has an excellent, terse description of [the various
forms of test doubles][TestDouble].
[TestDouble]: https://www.martinfowler.com/bliki/TestDouble.html
CHANGELOG_BEGIN
CHANGELOG_END
* Upgrade to a newer canton version (post 0.27.0 snapshot version)
with canton-community configuration that supports higher throughput.
changelog_begin
changelog_end
* Disable flaky DeeplyNestedValueIT:Reject tests that time out half the time
As stated in #10504 the contract ID freshness check cannot be
implemented correctly in general.
This PR drops the support for this (buggy) check.
This corresponds to the first task of #10504.
CHANGELOG_BEGIN
CHANGELOG_END
This was limited a while back during the initial development of
_ledger-on-sql_ for reasons I can't remember. Let's stop doing that.
CHANGELOG_BEGIN
CHANGELOG_END
* Refactor ParameterStorageBackend
- a single method for atomic ledger initialization
- a single method to look up the ledger end
changelog_begin
changelog_end
* Add a test
* Fix reading event sequential ids
* Remove debug statements
* Allow ledgerEnd on an empty database
* Initialization is not safe to call concurrently
* Remove leftovers from isolation level change
* Use unit return type
for initialization methods
* Allow getParticipantId on an empty database
* Use exceptions instead of a return type ADT
* Don't use Try for initialization
* Clean up parameters table
* Simplify parameter storage api
* Address review suggestion
* Address review comment
* Address review comment
* Prefer ledger id over participant id mismatch
* Address review comment
* Move type definition
* Remove useless new keyword
* Remove unused import
* Inline result mapping
* Fix reporting of mismatching participantId
* participant-integration-api: Construct completions in one place.
* sandbox-classic: Inline `CompletionFromTransaction#apply`.
It's only used here; there's no reason to keep it in the
_participant-integration-api_.
* participant-integration-api: Store a status gRPC protobuf.
Instead of storing the status code and message, we store a serialized
`google.rpc.Status` protocol buffers message. This allows us to pass
through any additional information reported by the driver `ReadService`.
The migration is only done for the append-only database, and preserves
old data in the existing columns. New data will only be written to the
new column.
CHANGELOG_BEGIN
CHANGELOG_END
* participant-integration-api: Improve comments in migrations.
Co-authored-by: Fabio Tudone <fabio.tudone@digitalasset.com>
* participant-integration-api: Further improvements to migrations.
* participant-integration-api: Store the rejection status as 3 columns.
Serializing the details but keeping the code and message columns
populated.
* participant-integration-api: Publish the indexer protobuf to Maven.
Co-authored-by: Fabio Tudone <fabio.tudone@digitalasset.com>
* Use `extra` in the port file runner, rather than `temporary`.
* ledger-api-test-tool-on-canton: Use the port check runner.
Much simpler than the port file runner for our purposes.
* Replace `runner` with `runner_with_port_file`.
Rather than expecting a particular set of command-line-arguments, we use
templating.
CHANGELOG_BEGIN
CHANGELOG_END
* Rename the `runner_with_port_check` target to the default.
* Use the port file and dynamic port generation in client/server tests.
This creates a runner named `runner_with_port_file` which knows how to
interpolate two variables, `%PORT_FILE%` and `%PORT%`. This allows us to
use the `port-file` argument to the kvutils runner rather than
hard-coding a port for conformance tests.
For now, we only use this for generating the kvutils reference ledger
export.
CHANGELOG_BEGIN
CHANGELOG_END
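The interpolation the runner performs over the server's command line can be sketched as a simple substitution over the argument list (the variable names `%PORT%` and `%PORT_FILE%` come from the text above; the function is illustrative):

```scala
// Sketch of the template interpolation described above: the runner replaces
// %PORT% and %PORT_FILE% in the server's command-line arguments, so tests
// don't hard-code ports.
def interpolate(args: Seq[String], port: Int, portFile: String): Seq[String] =
  args.map(_.replace("%PORT%", port.toString).replace("%PORT_FILE%", portFile))
```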
* Simplify the runner_with_port_file considerably.
It doesn't need to check if the port is open; we trust that the process
will do it.
This also makes sure the port file will be cleaned up, and reduces the
number of dependencies by making use of more functions in `extra`.
* Simplify port file generation in the new client-server runner.
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Simplify the runner_with_port_file further.
This doesn't need to work if the server doesn't take a port file.
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Move `DeduplicationPeriod` to ledger-api-domain so that it can be reused and passed down from the ledger-api-client into the v2.SubmitterInfo
CHANGELOG_BEGIN
participant-state - move `DeduplicationPeriod` to ledger-api-domain
CHANGELOG_END
* Revert unrelated changes
* ledger-api-test-tool: Make IntelliJ happy with TransactionServiceIT.
* ledger-api-test-tool: Collect TransactionServiceIT tests into groups.
* ledger-api-test-tool: Split TransactionServiceIT into lots of suites.
CHANGELOG_BEGIN
- [Ledger API Test Tool] The ``TransactionServiceIT`` test suite has
been split into many test suites. If you are including or excluding
it, you will need to use the new test suite names, or you can use
"TransactionService" as a prefix for all of them.
If you are including or excluding individual tests, you will need to
update your arguments with the new test suite. You can find the new
test suite by running the test tool with the ``--list-all``
flag and looking for the test's short identifier. The short
identifiers have not changed, with the exception of
``TXNoContractKey``, which has been renamed to ``CKNoContractKey`` and
is now in the ``ContractKeysIT`` test suite.
CHANGELOG_END
* ledger-grpc: Fix the directory paths.
This brings _ledger-grpc_ in line with other projects. Scala main files
should be under "src/main/scala", and test files should be under
"src/test/suite/scala".
CHANGELOG_BEGIN
CHANGELOG_END
* ledger-grpc: More imports.
* Normalize transactions & values as a separate pass. Use for a simpler definition of isReplayedBy.
CHANGELOG_BEGIN
CHANGELOG_END
normalize transaction version
* remove stray import from bad merge which breaks the Scala 2.12 build
* change isReplayedBy to only normalize its RIGHT (replay) argument
* add forgotten normalization for ValueEnum
* switch to use existing value normalization code (remove my newly coded duplicate code)
* normalize submittedTransaction before calling engine.validate
* don't call normalizeTx from Engine.validate
* *do* call normalizeTx from Engine.validate
* Set ErrorInfo metadata flag for definite_answer, which is propagated from the completion status
CHANGELOG_BEGIN
ledger-api-client - Propagate definite_answer as metadata in the GRPC response for submit/submitAndWait
CHANGELOG_END
* Keep alphabetical order for bazel build files
* Add test for inclusion of metadata
* Formatting
* Use explicit types to track failures when submitting a request for execution
To distinguish them from execution failures (represented by `CompletionFailure`), which are also exposed as part of the akka-bindings, we introduced `TrackingCompletionFailures`, which can also represent the failure to add the request to the execution queue.
CHANGELOG_BEGIN
CHANGELOG_END
* Fix formatting
* Apply suggestions from code review
Co-authored-by: Hubert Slojewski <hubert.slojewski@digitalasset.com>
* Inline handling of errors in the tracker to eliminate the need for the secondary promise and simplify the code
* Update testing for the new error handling
* Remove brackets and make code compatible with 2.12
* Apply suggestions from code review
Co-authored-by: fabiotudone-da <fabio.tudone@digitalasset.com>
* Review cleanup and use inside for cleaner tests
Co-authored-by: Hubert Slojewski <hubert.slojewski@digitalasset.com>
Co-authored-by: fabiotudone-da <fabio.tudone@digitalasset.com>
* Upgrade Scalatest to v3.2.9.
Because of some coupling we also have to upgrade Scalaz to the latest
v7.2 point release, v7.2.33.
The Scalatest changes are quite involved because the JAR has been broken
up into several smaller JARs. Because Bazel expects us to specify all
dependencies and doesn't allow transitive dependencies to be used
directly, this means that we need to specify the explicit Scalatest
components that we use.
As you can imagine, this results in quite a big set of changes. They
are, however, constrained to dependency management; all the code remains
the same.
CHANGELOG_BEGIN
CHANGELOG_END
* http-json-oracle: Fix a Scalatest dependency.
* ledger-api-client: Fix a Scalatest dependency.
* ledger-api-test-tool: Add some basic unit tests for test names.
* ledger-api-test-tool: Ensure that test names are not prefixes of others.
This makes sure that we can include or exclude any given test, without
affecting others.
* ledger-api-test-tool: Ensure that all tests have different names.
Looks like we had some copy-pasta.
CHANGELOG_BEGIN
CHANGELOG_END
* Track the command response using an `Either` instead of passing the completion with the gRPC code.
This makes the result of command tracking clearer. We no longer rely on the gRPC status to determine whether there was an error, and instead use types for that.
CHANGELOG_BEGIN
akka-bindings: `LedgerClientBinding.commands` now returns a flow of `Either[CompletionFailure, CompletionSuccess]` instead of `Completion` for clearer error handling. For backwards compatibility, the new return type can be turned back into a `Completion` using `CompletionResponse.toCompletion`.
CHANGELOG_END
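The shape of the new result type and the backwards-compatibility conversion can be sketched as follows. These are simplified stand-ins, not the real `CompletionResponse` types in ledger-api-client:

```scala
// Hypothetical sketch of the Either-based tracking result and the conversion
// back to a Completion mentioned in the changelog. Field sets are trimmed
// down; the real types carry more information.
final case class Completion(commandId: String, statusCode: Int, statusMessage: String)

sealed trait CompletionFailure { def commandId: String }
final case class NotOkResponse(commandId: String, code: Int, message: String)
    extends CompletionFailure
final case class CompletionSuccess(commandId: String)

def toCompletion(response: Either[CompletionFailure, CompletionSuccess]): Completion =
  response match {
    case Right(CompletionSuccess(commandId)) =>
      Completion(commandId, 0, "") // 0 is the gRPC OK code
    case Left(NotOkResponse(commandId, code, message)) =>
      Completion(commandId, code, message)
  }
```

Callers that are tightly coupled to `Completion` can keep their existing handling by applying the conversion at the edge.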
* Fix formatting
* Code review changes
- remove usages of Symbol in tests
- clean curly braces
* Remove change added from another PR
* Fix import
* Fix import
* Fix retry flow and extract one more match case
* Un-nest matches to a single level for simplicity
* fix typo
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
* Be consistent in assertions and prefer `inside` for pattern matching
* Inline CompletionResponse to use the full type
* Use simpler matcher
* Formatting
* Add a way to convert an `Either[CompletionFailure, CompletionSuccess]` back into a `Completion` for backwards compatibility. This simplifies the upgrade for systems that are tightly coupled to `Completion`
* Add test for converting to/from CompletionResponse
* Remove unnecessary brackets
* Add missing header
* Use checked exceptions to preserve backwards compatibility
* Fix unapply
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
* kvutils: Add property-based tests for the offset builder.
* kvutils: Avoid using `BigInt` in offset parsing.
This also means we can theoretically support negative offsets, even if
in practice it's probably a bad idea.
CHANGELOG_BEGIN
CHANGELOG_END
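Avoiding `BigInt` here means reading the offset as fixed-width big-endian fields, e.g. with a `ByteBuffer`. The exact field layout below (one `Long` plus two `Int`s) is an assumption for illustration, not the real kvutils `OffsetBuilder` layout:

```scala
import java.nio.ByteBuffer

// Sketch: parse an offset as fixed-width big-endian fields with ByteBuffer
// instead of converting through BigInt. ByteBuffer is big-endian by default,
// which matches the lexicographic ordering offsets need.
def split(bytes: Array[Byte]): (Long, Int, Int) = {
  val buf = ByteBuffer.wrap(bytes)
  (buf.getLong, buf.getInt, buf.getInt)
}

def join(highest: Long, middle: Int, lowest: Int): Array[Byte] =
  ByteBuffer.allocate(16).putLong(highest).putInt(middle).putInt(lowest).array()
```

Because the fields are read as signed machine words, negative components are representable in principle, as the commit message notes, even though producing them would be a bad idea.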
* kvutils: Further property-based tests for OffsetBuilder.
* kvutils: Improve a comment in OffsetBuilder.
Co-authored-by: fabiotudone-da <fabio.tudone@digitalasset.com>
Co-authored-by: fabiotudone-da <fabio.tudone@digitalasset.com>
* Add flag to enable/disable command deduplication
* Remove flags for configuration as it should not be exposed externally
* Move deduplicationEnabled flag to the write service.
The deduplication-enabled flag is tightly coupled to the WriteService implementation, so the flag has been moved to the WriteService trait so that it's explicitly defined.
CHANGELOG_BEGIN
Command deduplication is enabled/disabled based on the write service implementation.
The v1 WriteService enables command deduplication while the v2 WriteService disables it.
CHANGELOG_END
* Rename deduplication flag
Set sandbox deduplication to true and proxy the write service to the delegate
* H2 Storage backend support for canton jdbc urls with user/password
The H2 database's JdbcDataSource does not accept user/password properties
embedded in the JDBC URL, raising errors upon Hikari connection pool
initialization:
`org.h2.jdbc.JdbcSQLNonTransientConnectionException: Duplicate property "USER"`
It expects the user and password to be set separately rather than passed on
in the JDBC URL.
As H2 is not supported in production, and to get the Canton integration tests
past this, we resort to parsing the JDBC URL for user/password properties,
removing them from the URL and instead setting them explicitly on the data
source object.
changelog_begin
changelog_end
* Review feedback
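The URL parsing described above can be sketched as follows; the regular expressions and the function name are assumptions, not the actual backend code:

```scala
// Sketch of the workaround described above (regexes are an assumption):
// strip `user` and `password` properties from an H2 JDBC URL and return them
// separately, so they can be set on the data source object instead of being
// rejected by H2's JdbcDataSource as duplicate properties.
def extractUserPassword(url: String): (String, Option[String], Option[String]) = {
  val userRe = "(?i);user=([^;]*)".r
  val passwordRe = "(?i);password=([^;]*)".r
  val user = userRe.findFirstMatchIn(url).map(_.group(1))
  val password = passwordRe.findFirstMatchIn(url).map(_.group(1))
  val cleaned = passwordRe.replaceAllIn(userRe.replaceAllIn(url, ""), "")
  (cleaned, user, password)
}
```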