* client_server runner - use temp dir for port file
The sandbox will not overwrite an already existing port file but will
instead fail, or, worse, silently ignore the error and leave the port file empty.
changelog_begin
changelog_end
* sandbox: Fail if writing the port file fails
So far this was being silently ignored, leaving a pre-existing port-file
untouched.
changelog_begin
changelog_end
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
* Add race condition tests for exceptions
This PR addresses
https://github.com/digital-asset/daml/pull/9400#pullrequestreview-634770251
and adds tests that mirror RaceConditionTests but exercise the read side in a
rollback (we cannot do writes in rollbacks; they are rolled back :)).
The tests are as close as possible to the other race condition tests
to ease maintenance and reduce confusion.
changelog_begin
changelog_end
* remove commented lines
changelog_begin
changelog_end
* Disable accidentally enabled ClosedWorldIT
changelog_begin
changelog_end
This is a preparatory refactoring PR so we can use the same utilities
for the existing race condition tests and for a test suite that tests
race conditions in combinations with exceptions.
changelog_begin
changelog_end
* restore natural join
CHANGELOG_BEGIN
CHANGELOG_END
* JOIN with ON clause rather than NATURAL JOIN so table aliasing will work cross-platform
* scalafmt
* check whether collection.compat is unused when compiling for Scala 2.12
- Instead of always suppressing warnings for collection.compat._,
we should only do it for Scala 2.13
- We can also reduce boilerplate by automatically adding this
option when both silencer_plugin and collection-compat are
present
CHANGELOG_BEGIN
CHANGELOG_END
* remove unused import
* remove another unused import
* remove even more unused imports
* missed compat dependency
* more missed compat dependencies
* missed compat dependency
* use scala_deps in scaladoc_jar
- #8423 inlined the major version expansion, but this seems to
have been prior to proper support by scaladoc_jar
* restore custom handling of participant-integration-api
- fixing scaladoc_jar isn't worth it for a single case, as with
deps vs scala_deps
In order not to give the appearance that the current state of sandbox-on-x supports anything other than clean initialization from scratch, or clean initialization from an already-ingested database.
changelog_begin
changelog_end
* Simple scaffolding for the ledger-api-bench-tool
* Added README
* Added resource management and the LedgerIdentityService
* Changed the default log level to DEBUG
* Added the TransactionService
* Added stream configuration options
* Options for ledger configuration
* Minor improvements
* Refactored packages
* Minor improvement
* CHANGELOG_BEGIN
- [Integration Kit] - Created the ledger-api-bench-tool prototype for benchmarking ledger transaction streaming capabilities
CHANGELOG_END
* Unified endpoint argument with the ledger-api-test-tool + other minor fixes
* Logger as an argument to LogOnlyObserver
* Make error throw a GeneralError.
As well as abort, fail, etc.
changelog_begin
changelog_end
* keep the error message when you have an unhandled error in scenario
* Disable crashing opsem tests for now.
* Update CommandServiceIT regex pattern.
* Put | in wrong place :-|
* forgot to escape "
* Illegal repetition!
This PR includes:
- Adding ApiCommand to distinguish between generic commands (which are
  accepted by the engine) and commands that are accepted by the ledger
  API.
- Reimplementing Canton's reinterpret method using commands instead of
  nodes.
CHANGELOG_BEGIN
CHANGELOG_END
This change adds support for append-only sandbox-classic as well.
Temporarily enabled tests ensure correctness for now.
Further unit tests will be added in the parallel-ingestion stabilisation epic (hence keeping the TODO for now).
changelog_begin
changelog_end
* Filter divulgence to an empty set of parties
As @nmarton-da noticed painfully, we currently include divulgence to
an empty set of parties. While this is arguably not wrong, it is at
least confusing and useless. The whole point of divulgence is to track
visibility. Divulging to an empty set of parties does not affect
visibility, so it is not meaningfully different from no
divulgence. Therefore this PR filters it out and adds a doc comment
stating that the list of divulgees is always non-empty.
changelog_begin
changelog_end
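The filtering described above can be sketched as follows. This is a hypothetical illustration, not the actual Scala implementation: it assumes divulgence is modeled as a map from contract ID to a set of divulgee parties, and simply drops entries whose party set is empty.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class DivulgenceFilter {
    // Hypothetical sketch: drop divulgence entries with an empty party set,
    // so every remaining divulgee set is non-empty (matching the doc comment
    // added in this PR).
    static Map<String, Set<String>> filterEmptyDivulgence(Map<String, Set<String>> divulgence) {
        Map<String, Set<String>> filtered = new HashMap<>();
        divulgence.forEach((contractId, parties) -> {
            if (!parties.isEmpty()) {
                filtered.put(contractId, parties);
            }
        });
        return filtered;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> divulgence = new HashMap<>();
        divulgence.put("contract-1", Set.of("Alice"));
        divulgence.put("contract-2", Set.of()); // divulged to nobody: filtered out
        System.out.println(filterEmptyDivulgence(divulgence).keySet());
    }
}
```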
* Fix tests
changelog_begin
changelog_end
The behavior of LedgerConfigProvider is time-sensitive at startup. The
test was accidentally supplying the serial execution context, which
meant that the initialization process could be blocked by a lack of
available threads, causing the wrong result.
Switching to the materializer's execution context avoids this issue in
testing, and does not impact production logic.
In general, resource owners should not accept execution contexts
implicitly. Looks like I added this one, so everyone else is off the
hook. I will punish myself by ordering a pizza.
CHANGELOG_BEGIN
CHANGELOG_END
This change adds support for the append-only schema in the sandbox-classic and daml-on-sql ledgers; it is available behind a feature flag.
The support is PoC grade; it will be stabilized/productionized in the upcoming epic.
CI tests currently enabled in the respective projects guard this implementation.
* Introduce SequentialWriteDao for simplified indexing in sandbox-classic
* Use this in appendonly.JdbcLedgerDao to implement necessary methods
* Add support for ledgerEnd query to StorageBackend
* Fix JdbcLedgerDao creation (supporting append-only)
* Add feature flag and wiring for sandbox-classic
* Activate conformance tests with append-only on sandbox-classic
* Add support/ci coverage for daml-on-sql
changelog_begin
changelog_end
Some cached threadpools weren't given names, meaning at runtime there
are a bunch of pool-x-thread-y threads. This makes it hard to understand
which threads are being used for what.
The following pool names were introduced:
append-only indexer: input-mapping-pool, batching-pool
ProgramResource: program-resource-pool
kvutils PackageCommitter: package-preload-executor
CHANGELOG_BEGIN
CHANGELOG_END
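Giving a cached pool named threads instead of the default pool-x-thread-y names can be done with a custom ThreadFactory. A minimal sketch, not the repo's actual code: the pool name input-mapping-pool is taken from the list above, everything else is illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedPools {
    // Sketch: a ThreadFactory that names threads "<poolName>-<n>", so thread
    // dumps show which pool each thread belongs to.
    static ThreadFactory namedFactory(String poolName) {
        AtomicInteger counter = new AtomicInteger(0);
        return runnable -> {
            Thread t = new Thread(runnable, poolName + "-" + counter.incrementAndGet());
            t.setDaemon(true);
            return t;
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool(namedFactory("input-mapping-pool"));
        String name = pool.submit(() -> Thread.currentThread().getName()).get();
        System.out.println(name); // prints input-mapping-pool-1
        pool.shutdown();
    }
}
```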
Just crashing here would actually be fine, since this migration will
never run on a transaction with a rollback node, but it’s easy enough
to fix it properly, and that raises fewer questions.
changelog_begin
changelog_end
* Address more exception todos
A bit of a kitchen sink PR to address a bunch of the trivial todos
that didn’t seem worth splitting out into separate PRs.
changelog_begin
changelog_end
* Revert changes to TransactionSpec
changelog_begin
changelog_end
As discussed, we don’t want to expose this via serializable values at
least for now (and it’s not exposed on the ledger API anyway) so this
PR drops the type.
changelog_begin
changelog_end
* log all external requests at an info level
CHANGELOG_BEGIN
CHANGELOG_END
* additional logs in the transaction service
CHANGELOG_BEGIN
Log ledger-api client read requests at the info level. This affects the following services:
- Active Contracts Service
- Command Completion Service
- Ledger Configuration Service
- Ledger Identity Service
- Package Service
- Time Service
- Transaction Service
CHANGELOG_END
* Added a new test suite for testing limit API values - ValueLimitsIT
* Change index on participant_command_completions table
This change fixes issues with commands that have a large number of submitters on sandbox-classic.
* Added key_hash column to the ledger *state table
The new column is now the primary key of the table. Its values are hashes of the 'key' column, which mitigates the limit on index row size.
* Backfill key_hash for ledger_state table
* Dynamic state table prefix from the backfill migration
* Removed redundant comments
* Backfill migration for all the db types
* Added missing copyright comment
* Fixed migration order after a rebase
* Added missing checksums for sql migrations
* Temporarily removed copyrights from one of sql migrations
* Removed unnecessary NOT NULL constraint
* Removed submitters from the index participant_command_completion_offset_application_idx in the append-only schema
* Disabled the test for old platforms
* CHANGELOG_BEGIN
- [Integration Kit] - a new test suite ValueLimitsIT for testing edge case values
- [Integration Kit] - modified the index on the participant_command_completions table to avoid issues with a large number of submitters
- [Sandbox] - added the key_hash column to the *state table
CHANGELOG_END
* Disabled concurrent testing for the ValueLimitsIT:VLLargeSubmittersNumberCreateContract test case
* Recomputed migrations checksums for the participant-integration-api
* Increase the sandbox-on-x queue size to 500
The motivation is that running conformance tests failed with a RESOURCE_EXHAUSTED error due to the limit of 200 on the queue.
* Minor improvement
* Inlined key hashing for migrations to avoid external dependencies
* Minor improvement
* Fixed migrations hash
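The key_hash idea above (index a fixed-size hash of the key rather than the potentially oversized key itself) can be sketched like this. SHA-256 is an assumption for illustration only; as noted, the actual migrations inline their own hashing to avoid external dependencies.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class KeyHash {
    // Sketch: derive a fixed-length hex digest from an arbitrarily large key,
    // so the indexed value stays bounded regardless of the key's size.
    static String keyHash(byte[] key) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : digest.digest(key)) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String h = keyHash("some-very-large-contract-key".getBytes(StandardCharsets.UTF_8));
        System.out.println(h.length()); // 64 hex chars, regardless of key length
    }
}
```

Because the digest length is constant, the index row size no longer depends on the size of the key value itself.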
* Add Ledger API test tool tests for exceptions
changelog_begin
changelog_end
* Update daml-lf/language/daml-lf.bzl
Co-authored-by: Sofia Faro <sofia.faro@digitalasset.com>
* Address review comments
changelog_begin
changelog_end
* Shuffle around test
changelog_begin
changelog_end
Co-authored-by: Sofia Faro <sofia.faro@digitalasset.com>
Motivation for this PR: this combined method makes it possible to wire up the
append-only schema ingestion.
* Add proper implementation for dao
* Adapts sandbox-classic usage
changelog_begin
changelog_end
* Precreate Oracle indexer scala migration package
This avoids needing to special-case FlywayMigrations to exclude Scala
migrations, which will be more robust when the first Oracle-based
Scala migration is added.
changelog_begin
changelog_end
* Review feedback - remove FIXME
The intent of this change is to take the first step in this direction
in order to support the dpp-336 work on sandbox-classic integration by
injecting DAO functionality via this interface.
The level of quality is still pre-production, hence the TODO comments.
Planned next step: move the StorageBackend interface and implementation
to platform/store, its final place.
* Introduce StorageBackend interface
* Decouple event-seq-id assignment logic from storage specific batching
* Pull out of batching step from input mapping, execution too
* Switch to stateless DAO functions
* Switch to DBDispatcher instead of custom JDBC Connection pool
* Introduce/adapt metrics
* Naturally extend configuration
* Move RunningBatch layer to ParallelIndexerFactory
* Remove dead code
changelog_begin
changelog_end
* Support rollback nodes in ActiveStateManager
If someone can point me to existing tests for ActiveStateManager,
I’ll happily extend them; I failed to find any.
I did test it against #9400 and it fixes most tests and the failing
ones fail in other parts so at that level it seems to work as
expected.
changelog_begin
changelog_end
* Better comments
changelog_begin
changelog_end
* less Set()
changelog_begin
changelog_end
* switch pattern matching
changelog_begin
changelog_end
* Clarify comment
changelog_begin
changelog_end
* Simplify tracking of archivedIds
changelog_begin
changelog_end
* Document where ActiveLedgerStateManager is used
changelog_begin
changelog_end
* Blow up when using rollback nodes with a mutable ALS
changelog_begin
changelog_end
This means we don't have to provide context explicitly on each log line;
it'll get passed through.
We include extra logging context, extracted from the state at each step.
CHANGELOG_BEGIN
CHANGELOG_END