* sandbox: Move command execution into its own package.
* sandbox: Make LedgerTimeHelper into a CommandExecutor implementation.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Rename CommandExecutorImpl to StoreBackedCommandExecutor.
And pass it objects, not functions.
* sandbox: Reorder result cases in StoreBackedCommandExecutor.
* sandbox: Inject the LedgerTimeAwareCommandExecutor.
* sandbox: Pull out the default time provider type into a constant.
* sandbox: Name ResetService tests consistently.
* sandbox: Reset the time service when resetting in Static Time mode.
The test, unfortunately, is ``@Ignore``d due to flakiness in CI, so won't
actually be run. However, I _hope_ we're going to remove that
annotation eventually, and it allowed me to test-drive the fix on my
machine, so it's still helpful.
CHANGELOG_BEGIN
- [Sandbox] Fix a regression in the ResetService which did not reset the
TimeService in static time mode.
CHANGELOG_END
Turns out not every participant can support seeding (yet).
CHANGELOG_BEGIN
- [Ledger API Server] Re-introduce an option to disable seeding. This
does not affect Sandbox.
CHANGELOG_END
* Set the `Bearer ` prefix in bindings.
* Make the `Bearer ` prefix in the authorization header mandatory.
* Bearer prefix can be removed from the token file.
CHANGELOG_BEGIN
[Extractor]: The ``Bearer `` prefix can be removed from the token file.
It is added automatically.
[Navigator]: The ``Bearer `` prefix can be removed from the token file.
It is added automatically.
[DAML Script] The ``Bearer `` prefix can be removed from the token file. It
is added automatically.
[DAML Repl] The ``Bearer `` prefix can be removed from the token file. It is
added automatically.
[Scala Bindings] The ``Bearer `` prefix can be removed from the token. It is
added automatically.
[Java Bindings] The ``Bearer `` prefix can be removed from the token. It is
added automatically.
[DAML Integration Kit] ``AuthService`` implementations MUST read the
``Authorization`` header and the value of the header MUST start with
``Bearer ``.
CHANGELOG_END
* Produce performance tests for all envelopes
CHANGELOG_BEGIN
[TestTool] Provide performance tests for all performance envelopes
CHANGELOG_END
* Follow Scala's recommended pattern for enum-like hierarchies
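  For reference, the pattern in question is roughly the following (an illustrative sketch with hypothetical names, not the actual hierarchy from this change): a sealed abstract class extending Product with Serializable, with the values as case objects in the companion, so pattern matches are checked for exhaustiveness and the set of values is closed.
  ```scala
  // Illustrative only; the names are hypothetical.
  sealed abstract class TimeMode extends Product with Serializable

  object TimeMode {
    case object Static extends TimeMode
    case object WallClock extends TimeMode
  }
  ```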
There have been reports of sporadic occurrences of
`java.lang.RuntimeException: Lob not found: 49/-2`
when running the participant server with H2.
It seems like using `setBinaryStream` triggers H2 to go through the BLOB
machinery. Since this issue is not easily reproducible, it seems like an
altogether better solution to switch to using `setBytes`. Offsets aren't
that large anyway, so going directly for byte array should be fine.
Better than a broken query anyway.
CHANGELOG_BEGIN
CHANGELOG_END
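  As a rough sketch of the change (plain JDBC, hypothetical statement and column names), the binding now goes through `setBytes` rather than `setBinaryStream`:
  ```scala
  import java.sql.Connection

  object OffsetQueries {
    // Bind the offset as a plain byte array so H2 keeps the value inline instead
    // of routing it through its BLOB machinery.
    def updateLedgerEnd(connection: Connection, offsetBytes: Array[Byte]): Unit = {
      val statement =
        connection.prepareStatement("UPDATE parameters SET ledger_end = ?")
      try {
        // Previously: statement.setBinaryStream(1, new ByteArrayInputStream(offsetBytes))
        statement.setBytes(1, offsetBytes)
        statement.executeUpdate()
        ()
      } finally statement.close()
    }
  }
  ```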
* Add failing test
The test can produce false negatives,
but locally it fails 10 out of 10 times.
* Stop deduplicating commands after rejections
Fixes #5338.
CHANGELOG_BEGIN
CHANGELOG_END
* Only deduplicate successful transactions
* Use pass instead of pure unit
* Handle exceptions
* kvutils: Simplify calculating the weight of a Caffeine cache.
And remove an errant `println` that slipped through the cracks.
Thank you to @ben-manes for the tip!
CHANGELOG_BEGIN
CHANGELOG_END
* kvutils: Make classes final and defs into vals.
Co-Authored-By: Stefano Baghino <stefano.baghino@digitalasset.com>
Co-authored-by: Stefano Baghino <stefano.baghino@digitalasset.com>
We do this for Sandbox Next™; we can do it for Sandbox Classic™ too. It
doesn't seem to have much of a noticeable impact on the test run time,
and reduces the number of exclusive tests, which is helpful.
CHANGELOG_BEGIN
CHANGELOG_END
* Add legacy proxy gRPC services.
This exposes the services as com.digitalasset as well, to ensure that
applications built with a previous release of the SDK continue to work
with the Ledger API.
Due to how the gRPC reflection service works, this doesn't expose the
com.digitalasset services on the reflection api, and thus grpcurl won't
work with the old services. Scripts that use grpcurl therefore need to be
updated to refer to the com.daml services.
CHANGELOG_BEGIN
CHANGELOG_END
* kvutils: Cache state value conversions from bytes.
This seems to have a decent speedup in ledger-on-memory.
CHANGELOG_BEGIN
- [Ledger Integration Kit] Submissions now look up ledger values from a
cache where possible, improving performance when there's contention over
certain resources (e.g. common packages). The cache size currently
defaults to 64 MB.
CHANGELOG_END
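  A minimal sketch of the idea, assuming a Caffeine cache keyed by the serialized bytes and weighed by their size (the 64 MB figure mirrors the default mentioned above; the class and parameter names are illustrative):
  ```scala
  import com.github.benmanes.caffeine.cache.{Cache, Caffeine, Weigher}
  import com.google.protobuf.ByteString

  object StateValueCache {
    // Build a cache whose total weight is bounded by the serialized size of its keys.
    def apply[V <: AnyRef](maximumWeightBytes: Long = 64L * 1024 * 1024): Cache[ByteString, V] =
      Caffeine
        .newBuilder()
        .maximumWeight(maximumWeightBytes)
        .weigher[ByteString, V](new Weigher[ByteString, V] {
          override def weigh(key: ByteString, value: V): Int = key.size
        })
        .build[ByteString, V]()
  }
  ```
  On a read, `cache.get(bytes, deserialize)` deserializes on a miss and re-uses the parsed value afterwards.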
* kvutils: Make the SubmissionValidator state value cache configurable.
* kvutils: Report state value cache metrics.
* kvutils: Add a suffix to a Long literal because WartRemover is unhappy.
Strangely, it doesn't fail on my machine.
* kvutils: Extract caching out into its own file.
* kvutils: Move the `bytesToStateValue` call into `cache.get`.
* kvutils: Move caching to its own package.
* kvutils: Inject the state value cache.
* kvutils: Default to no state value cache.
* kvutils: Accept a state value cache size in megabytes, not bytes.
* kvutils: Move cache building from `Config` to the `caching` package.
* kvutils: Replace Guava's cache with Caffeine.
* kvutils: Simplify caching configuration.
* sandbox: Enable state value caching by default.
CHANGELOG_BEGIN
- [Sandbox] State value deserialization is now cached, with a fixed
cache size of 128 MB.
CHANGELOG_END
* Changelog commit.
CHANGELOG_BEGIN
- [Ledger Integration Kit] The state value cache is now opt-in, with a
default of no cache at all.
CHANGELOG_END
* Draft of PingPong throughput and latency benchmarks for on-mem and on-sql ledgers
* Augment `ParticipantTestContext` and remove `LedgerApiServer` hack
* Separate performance tests into distinct category with concurrency 1
* 🎨
* Package performance tests in separate DAR
* Have performance tests excluded by default and run exclusively if passed
* Fix rebase
* Simplify `BenchmarkReporter`
* Make `concurrencyOverride` into an `Option`
* Clarify command line usage, prevent regular and perf. tests together
* Fix preventing regular and perf. tests together
* Split `PingPong`, `PingPongExplode` and `Cycle` benchmarks' model
CHANGELOG_BEGIN
- [TestTool] Add `PingPong` performance envelope test
CHANGELOG_END
* Explicitly name `concurrencyOverride`
* Fix formatting
* Lower bar for CI run of performance envelope tests
* Make benchmark output file configurable
* Improve messages and report config option name
* Use exit status 64 for "bad command line usage" as in BSD
Packages com.digitalasset.daml and com.daml have been unified under com.daml
Ledger API and DAML-LF DEV protos have also been moved from `com/digitalasset`
to `com/daml` on the file system.
Protos for already released DAML LF versions (1.6, 1.7, 1.8) stay in the
package `com.digitalasset`.
CHANGELOG_BEGIN
[SDK] All Java and Scala packages starting with
``com.digitalasset.daml`` and ``com.digitalasset`` are now consolidated
under ``com.daml``. Simply changing imports should be enough to
migrate your code.
CHANGELOG_END
* Fix test for groupContiguous
Automatic generation of test values was prone to cause flakiness; it was removed in favor of a simpler test case.
changelog_begin
changelog_end
* Relax order sensitivity
* Update ledger/sandbox/src/test/suite/scala/com/digitalasset/platform/store/dao/events/GroupContiguousSpec.scala
Co-Authored-By: Remy <remy.haemmerle@daml.com>
* Fix compilation issue
Co-authored-by: Remy <remy.haemmerle@daml.com>
* sandbox: Move the events page size configuration value into config.
* sandbox: Pass `config` directly into JdbcIndexerFactory.
* sandbox: Reorder `eventsPageSize` before `metrics` in parameters.
* sandbox: Move `seeding` into `ApiServerConfig`.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Name all parameters of `JdbcLedgerDao.writeOwner`.
Co-Authored-By: stefano.baghino@digitalasset.com
* Implement timed command deduplication in kvutils
This adds a field deduplication_time to DamlCommandDedupValue for
deduplication timeout checking.
* Bump kvutils version to 4
* Fix CommandTracker pulling commandResultIn multiple times
Now that the timeouts are generated out of band, we have 2
"unsynchronized" places that pull on commandResultIn.
Whenever we pull, we need to check that commandResultIn hasn't been
pulled before.
* Add inStaticTimeMode flag to enable command dedup in sandbox-next with static-time
Fixes #4624.
CHANGELOG_BEGIN
[kvutils] KVUtils now respects the command deduplication time instead of
deduplicating commands forever.
CHANGELOG_END
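  Conceptually (hypothetical names, not the kvutils wire format), the check boils down to comparing the submission time against the stored deduplication deadline:
  ```scala
  import java.time.Instant

  object CommandDedup {
    // The stored deadline after which a command may be submitted again.
    final case class DedupEntry(deduplicatedUntil: Instant)

    // A command is rejected as a duplicate only while its deduplication window is open.
    def isDuplicate(existing: Option[DedupEntry], submittedAt: Instant): Boolean =
      existing.exists(entry => !submittedAt.isAfter(entry.deduplicatedUntil))
  }
  ```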
* kvutils: Remove the LedgerEntry trait; it's no longer necessary.
This was introduced to allow for heartbeats, which no longer exist.
CHANGELOG_BEGIN
CHANGELOG_END
* kvutils: Make LedgerRecord a case class again.
We used to store the envelope as an array of bytes, which doesn't have a
value-based `equals` method and therefore should not be used in a case
class. We now use a `ByteString`, so this is no longer an issue.
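The underlying issue is easy to see in isolation: arrays compare by reference, so a case class holding one doesn't get value-based equality, whereas `ByteString` does. A minimal illustration (the record names are hypothetical):
```scala
import com.google.protobuf.ByteString

object EqualityDemo extends App {
  final case class ArrayRecord(envelope: Array[Byte])
  final case class ByteStringRecord(envelope: ByteString)

  // Arrays use reference equality, so structurally equal records compare unequal:
  println(ArrayRecord(Array[Byte](1, 2)) == ArrayRecord(Array[Byte](1, 2))) // false

  // ByteString has value-based equals, so the case class behaves as expected:
  println(
    ByteStringRecord(ByteString.copyFrom(Array[Byte](1, 2))) ==
      ByteStringRecord(ByteString.copyFrom(Array[Byte](1, 2)))) // true
}
```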
* Migrate create_consumed_at to offset
This is a leftover from the stable offsets migration and causes issues when serving active contracts from the new schema.
changelog_begin
changelog_end
* No decoding necessary for create_consumed_at
* kvutils: Use `.view` in SubmissionValidator.
CHANGELOG_BEGIN
CHANGELOG_END
* kvutils: Don't compute missing inputs unless we're asked.
* ledger-on-memory: Do less in `InMemoryLedgerStateOperations`.
* ledger-on-memory: Use `RangeSource` instead of `OneAfterAnother`.
Should be faster to just take a slice.
* ledger-on-memory: Don't bother locking when reading.
We're only reading the log, which is append-only; we never mutate
existing data. This means we don't need to lock to read it.
* ledger-on-memory: Make it impossible to construct a state with data.
This can fail in CI, especially when we're running tests in parallel,
and starting PostgreSQL as well.
I've increased the limit from 30 seconds to 1 minute.
This is hopefully not a permanent fix; I'm going to look into doing this
without an `Await.result`.
CHANGELOG_BEGIN
CHANGELOG_END
* Refactor flat events range queries
By factoring out query routing, this logic can be re-used to serve active contracts.
changelog_begin
changelog_end
* Fix copyright notice header
* participant-state-metrics: Wrap metric names in a value type.
For safety, and for simplicity when building upon prefixes.
CHANGELOG_BEGIN
CHANGELOG_END
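  A minimal sketch of the idea, not the actual participant-state-metrics API: a small value type whose concatenation operator makes building on prefixes safe and explicit.
  ```scala
  // Illustrative only; the real class and its operators may differ.
  final case class MetricName(value: String) extends AnyVal {
    def :+(segment: String): MetricName = MetricName(s"$value.$segment")
    override def toString: String = value
  }

  // e.g. MetricName("daml") :+ "services" :+ "index" == MetricName("daml.services.index")
  ```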
* sandbox: Use MetricName within MetricsNaming.
* kvutils: Use `MetricName` in `Committer`.
* sandbox | kvutils: Extract common metric prefixes.
* sandbox: Remove a redundant visibility modifier in `MetricsNaming`.
* participant-state-metrics: `MetricName` doesn't need to be a case class.
Co-Authored-By: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Retry if time lookup failed
An error while looking up the maximum ledger time for the used contracts
likely means that one of the contracts is no longer active,
which can happen under contention.
Retry finding a suitable ledger time in this case instead of crashing.
CHANGELOG_BEGIN
CHANGELOG_END
* Handle archived contracts in JDBC max time lookup
* Don't retry if ledger time lookup fails
The retry might not fix anything.
Instead, log a helpful message.
Contributes to #4231.
Remove checkpoints from Participant Server storage:
- Removed code to store checkpoints in the index database.
- Removed existing checkpoint rows in ledger_entries and participant_command_completions.
- Removed ObservedTimeServiceBackend
This commit modifies the java migration V2_1__Rebuild_Acs. This is safe to do, because:
a) any even semi-recent persistent sandbox has already gone through this migration and won't re-run it
b) a new database doesn't have any entries to migrate yet.
CHANGELOG_BEGIN
[DAML Ledger Integration Kit] Removed the ``Heartbeat`` state update.
[Sandbox] Checkpoints are no longer emitted in regular intervals in wall
clock time mode.
CHANGELOG_END
* Use com.daml as groupId for all artifacts
CHANGELOG_BEGIN
[SDK] Changed the groupId for Maven artifacts to ``com.daml``.
CHANGELOG_END
* Add 2 additional maven related checks to the release binary
1. Check that all maven upload artifacts use com.daml as the groupId
2. Check that all maven upload artifacts have a unique artifactId
* Address @cocreature's comments in https://github.com/digital-asset/daml/pull/5272#pullrequestreview-385026181
* kvutils: Do less in the ledger transaction.
* kvutils: Make `SubmissionValidator#runValidation` tail-recursive.
Because otherwise IntelliJ is very unhappy.
* kvutils: Time transaction acquisition and release.
CHANGELOG_BEGIN
- [Ledger Integration Kit] Metrics for acquiring and releasing
transactions.
CHANGELOG_END
* kvutils: Include the word "lock" in validator transaction lock metrics.
* kvutils: Record successful + failed transaction acquisitions separately.
Just to get them consistent with the others.
CHANGELOG_BEGIN
- [Ledger Integration Kit] Prefixed all metrics with "daml." for
consistency.
CHANGELOG_END
* kvutils: Use `MetricRegistry.name` to create metric names.
String interpolation is 4TL.
* sandbox: Make the metric names of the services consistent with kvutils.
CHANGELOG_BEGIN
- [Sandbox] Move the service metrics from ``daml.sandbox.indexService``
and ``daml.sandbox.writeService`` to ``daml.services.index`` and
``daml.services.write`` respectively. This brings Sandbox in line with
the Ledger Integration Kit.
CHANGELOG_END
* sandbox: Use `MetricRegistry.name` to create metric names.
String interpolation is 4TL.
* kvutils/app: Publish JVM metrics.
* kvutils/app: Sort the metrics prefixes.
* kvutils/app: Allow for specifying a metrics reporter as a CLI argument.
No changelog as this is hidden.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Remove an unnecessary string interpolation mark.
Co-Authored-By: Miklos <57664299+miklos-da@users.noreply.github.com>
Co-authored-by: Miklos <57664299+miklos-da@users.noreply.github.com>
* Add canonical string representation for identifiers
An identifier is represented as a string consisting of a package identifier and a qualified name, separated by a colon.
changelog_begin
changelog_end
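As a tiny illustration of the format described above (the values are placeholders):
```scala
object IdentifierString {
  // The canonical form is the package identifier and the qualified name,
  // separated by a colon.
  def toCanonicalString(packageId: String, qualifiedName: String): String =
    s"$packageId:$qualifiedName"
}

// IdentifierString.toCanonicalString("somePackageId", "Some.Module:SomeTemplate")
//   == "somePackageId:Some.Module:SomeTemplate"
```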
* Simplify explicit type signature for HexString
Thanks @remyhaemmerle-da
Co-Authored-By: Remy <remy.haemmerle@daml.com>
* Fix hashing test expected output
* Fix key hasher test expected output
Co-authored-by: Remy <remy.haemmerle@daml.com>
Before this reaches production-ready status, it's time to squash the
migrations. This will improve startup performance a little.
CHANGELOG_BEGIN
CHANGELOG_END
* Remove old time model from ledger config
CHANGELOG_BEGIN
- [Ledger API] Fields related to the old ledger time
model have been removed from the configuration
management service and the ledger configuration service.
CHANGELOG_END
* Update ledger/ledger-api-test-tool/src/main/scala/com/daml/ledger/api/testtool/tests/LedgerConfigurationService.scala
Co-Authored-By: Gerolf Seitz <gerolf.seitz@digitalasset.com>
Co-authored-by: Gerolf Seitz <gerolf.seitz@digitalasset.com>
* Remove documentation for a removed option
CHANGELOG_BEGIN
- [DAML Integration Kit] The CLI option command-submission-ttl-scale-factor was
removed, as the LET/MRT/TTL fields have recently been removed
from the command submission service.
CHANGELOG_END
* Minor renaming
* sandbox: On reset, wait for the API server to start before replacing it.
Hopefully this addresses the flickering behavior we're seeing in CI.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Scale the retries in ResetServiceIT along with everything else.
* Allow having both verbose and succinct row parsers
This will be used when we need to stream transactions and not individual
events.
changelog_begin
changelog_end
* Succinct parser should have verbose = false
* Introduce DamlSubmissionBatch and the BatchingLedgerWriter
This introduces the DamlSubmissionBatch message to group
submissions into a single message and extends Envelope to
carry the batch. We're using the envelope wrapping for consistency
and compatibility.
We're adding this to kvutils version 3 as it has not yet been released
into the wild, and so this is not a backwards-incompatible change.
Support for batching is implemented with the BatchingLedgerWriter
that wraps a LedgerWriter and groups submissions into a batch based
on size and time duration.
For implementing the validation of a batch we will require some rework
in the SubmissionValidator to be able to produce multiple "LogResult"s,
e.g. a commit on the in-memory ledger results in an "Index" which is used
to signal the new head to the dispatcher. With a batch we'd need to pick the maximum index.
CHANGELOG_BEGIN
CHANGELOG_END
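A very rough sketch of the batching idea (hypothetical and much simpler than the actual BatchingLedgerWriter): accumulate submissions and flush once either the accumulated size or the elapsed time crosses a threshold.
```scala
import scala.collection.mutable
import scala.concurrent.duration._

final class BatchBuffer(maxBatchBytes: Long, maxWait: FiniteDuration) {
  private val pending = mutable.ArrayBuffer.empty[Array[Byte]]
  private var pendingBytes = 0L
  private var oldestEnqueuedAt = 0L

  // Returns the batch to flush if adding this submission crossed a threshold.
  def add(submission: Array[Byte], nowMillis: Long): Option[Seq[Array[Byte]]] =
    synchronized {
      if (pending.isEmpty) oldestEnqueuedAt = nowMillis
      pending += submission
      pendingBytes += submission.length
      val waitedTooLong = nowMillis - oldestEnqueuedAt >= maxWait.toMillis
      if (pendingBytes >= maxBatchBytes || waitedTooLong) Some(flush()) else None
    }

  private def flush(): Seq[Array[Byte]] = {
    val batch = pending.toList
    pending.clear()
    pendingBytes = 0L
    batch
  }
}
```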
* Add missing copyright header
* Address code reviews
* Post rebase fixes
* Rename BATCH -> SUBMISSION_BATCH
* Address code reviews, add further tests and cleanup.
* Add test for DefaultBatchingQueue.close
* Use generous timeouts
* Renamed BatchMessage => SubmissionBatchMessage. Added default boolean parameter value. Added simple test case.
* Removed unused include.
* Address final code review
Co-authored-by: Miklos Erdelyi <miklos.erdelyi@digitalasset.com>
Add keys with maintainers to Fetch nodes
The new field is populated by the interpreter whenever the fetched
contract has a key. Used for contract key reinterpretation in Canton.
CHANGELOG_BEGIN
- [DAML-LF] Add keys with maintainers to Fetch nodes in transactions.
CHANGELOG_END
Contributes to #4194.
Closes #4231.
Closes #5022.
CHANGELOG_BEGIN
- [Ledger API] The protobuf fields ledger_effective_time and maximum_record_time have been removed from
command submission. These fields were previously deprecated following the introduction
of a new ledger time model. See issue `#4194 <https://github.com/digital-asset/daml/issues/4194>`__.
[Java Bindings] removed the usage of ledgerEffectiveTime and
maximumRecordTime, and instead added minLedgerTimeAbsolute and
minLedgerTimeRelative in CommandSubmissionClient and CommandClient
CHANGELOG_END
* participant-state{,-index}: Move Timed*Service classes from Sandbox.
CHANGELOG_BEGIN
- [Ledger Integration Kit] Metrics for the various read, write, and index
services.
CHANGELOG_END
* kvutils/app: Add timing metrics for read/write/index services.
* participant-state: Move metrics-related code to another Bazel package.
* participant-state-metrics: Add to artifacts.yml.
* participant-state-metrics: Move TimedIndexService back into Sandbox.
Cuts down on dependencies like nobody's business.
Tests would often display this warning:
io.grpc.netty.NettyChannelBuilder buildTransportFactory
WARNING: Both EventLoopGroup and ChannelType should be provided or neither should be, otherwise client may not start. Not provided values will use Nio (NioSocketChannel, NioEventLoopGroup) for compatibility. This will cause an Exception in the future.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Capture timing metrics for API server calls.
`timer` is a superset of `meter`, so this doesn't lose any existing
behavior; just adds new behavior.
CHANGELOG_BEGIN
- [Ledger API Server] Added timing metrics for all GRPC endpoints.
CHANGELOG_END
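The gist of the timer change, as a hedged Dropwizard sketch (a hypothetical helper, not the actual interceptor): a Timer records call rates as a Meter does, and additionally records latencies.
```scala
import com.codahale.metrics.MetricRegistry

object Timed {
  // Wrap a call so both its rate and its duration are recorded under the given name.
  def timedCall[T](registry: MetricRegistry, name: String)(call: => T): T = {
    val timerContext = registry.timer(name).time()
    try call
    finally timerContext.stop()
  }
}
```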
* sandbox: Rename SandboxClientResource to GrpcClientResource.
* sample-service: Clean up warnings.
* sandbox: Add tests for MetricsInterceptor.
* sandbox: Split the API metrics interceptor from the naming.
* sandbox: Use `MetricRegistry.name` instead of string interpolation.
* rs-grpc-akka: Restrict the test library to the DAML workspace.
Co-Authored-By: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Tighten result type
Command execution can't result in a sequencer error.
* New helper method for extracting used contracts
* New error clause
* Add a DAO query for the maximum time of contracts
* Implement algorithm for finding ledger time
CHANGELOG_BEGIN
CHANGELOG_END
* fixup ledgerTimeHelper
* Use new ledger time algorithm
* Mark LET/MRT as deprecated
CHANGELOG_BEGIN
- [Ledger API] DAML ledgers have switched to a new ledger time model.
The ledger_effective_time and maximum_record_time fields of command submission are deprecated,
the ledger time of transactions is instead set automatically by the ledger API server.
Ledger time is no longer strictly monotonically increasing, but only follows causal monotonicity:
ledger time of transactions is greater than or equal to the ledger time of any used contract.
See `#4345 <https://github.com/digital-asset/daml/issues/4345>`__.
CHANGELOG_END
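The causal-monotonicity rule above amounts to a small computation; roughly (a hypothetical helper, ignoring skew bounds):
```scala
import java.time.Instant

object LedgerTime {
  // The assigned ledger time must be at least the ledger time of every used contract.
  def assign(currentTime: Instant, usedContractTimes: Seq[Instant]): Instant =
    usedContractTimes.foldLeft(currentTime)((acc, t) => if (t.isAfter(acc)) t else acc)
}
```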
* Add ledger time skew check
* Remove command updater
LET/MRT are now deprecated, so this class is now useless.
* Remove old time model validator
* Switch to new time model check: kvutils
* Switch to new time model check: in-memory ledger
* Switch to new time model check: SqlLedger
* Use initial ledger config
* Ignore user provided LET
* Use TimeProvider in submission services
* Use deduplication_time in daml-script runner
- Also remove unnecessary command completion output of CommandTracker.
- Remove usage of maximum record time in CommandTracker.
* Use arbitrary default value for deduplication time
* Use built-in Instant ordering
* Remove obsolete test
* Remove obsolete test: CommandStaticTimeIT
* Refactor test: TransactionMRTCompliance
* Disable test: CommandTrackerFlow timeout
* Thread maxDeduplicationTime through to CommandTracker
* Improve test
* Refactor command client configuration
* Deduplication time should always use UTC
* Add missing method in TimedIndexService after rebase
* Put more details into the deduplication error response.
* Use system time for command dedup submittedAt.
* Use explicit UTC time source in command validator
* Revert CommandTracker[Flow] to previous completion-recovering-behavior
* Adapt scala client command config to new config params
Co-authored-by: Gerolf Seitz <gerolf.seitz@digitalasset.com>
* kvutils: Remove an unnecessary `@SuppressWarnings`.
* kvutils: Reduce the scope of fields and methods in `Committer`.
* kvutils: Inject the metric registry into `KeyValueCommitting`.
CHANGELOG_BEGIN
CHANGELOG_END
* kvutils: Inject the metric registry into the committers.
* kvutils: Inject the metric registry into `ProcessTransactionSubmission`.
* kvutils: Avoid shared metric registries in tests.
* kvutils: Recreate the metrics registry per participant state.
* kvutils: Add trailing commas to parameter lists.
Flagrantly encouraged by @stefanobaghino-da.
* recovering-indexer: Don't re-use the metric registry in tests.
* Sandbox: Reveal contract id seeding flag
CHANGELOG_BEGIN
- [Sandbox] Add support for random contract identifiers. See the
`Contract Identifiers Generation` section in
docs/source/tools/sandbox.rst
CHANGELOG_END
The database-backed reset service can (understandably) go a bit slower than the one backed by the in-memory ledger.
This should help avoid flaky tests.
CHANGELOG_BEGIN
CHANGELOG_END
Some Option2Iterable ignore annotations are not needed; others were needed for unused methods.
On a few occasions we were ignoring the warning for the very purpose for which it was there,
i.e. avoiding an implicit conversion. I'm all for not verifying this rule if we agree we
don't need it.
For ProcessFailedException it was a bit gratuitous, so I changed the way in which the exception
message is built.
CHANGELOG_BEGIN
CHANGELOG_END
* Add test to ensure that the reset truncates all tables
The test can be adjusted over time to accommodate exceptions (which are already there).
Unfortunately we have to add a couple of new queries to support both Postgres and H2.
Fixes #5130
CHANGELOG_BEGIN
CHANGELOG_END
* Make loose check on configuration_entries
* sandbox: Clean up `MetricsReporting` a little.
Make sure it closes both reporters, and avoid starting things in a
constructor.
* sandbox: Add hidden options for enabling metrics reporting.
* sandbox: Add a disambiguating name to the DB connection/thread pools.
CHANGELOG_BEGIN
- [Sandbox] DB connection pool metrics names have changed slightly, from
``daml.index.db.connection`` to ``daml.index.db.connection.sandbox``.
- [Ledger Integration Kit] DB connection pool metrics names have changed
to disambiguate the StandaloneApiServer from the
StandaloneIndexerServer. The former now has a ``.ledger-api-server``
suffix, and the latter now has a ``.indexer`` suffix.
CHANGELOG_END
* sandbox-next: Use the same metrics registry for the API and indexer.
* sandbox: Give a useful error message on an invalid metrics reporter.
And simplify the error messages.
With the arguments `--client-auth=foo --metrics-reporter=foo`, we now
get the output:
```
Error: Option --client-auth failed when given 'foo'. Must be one of
"none", "optional", or "require".
Error: Option --metrics-reporter failed when given 'foo'. Must be one of
"console", or "csv:PATH".
Try --help for more information.
```
* sandbox: Pull out more helpers in `MetricsReporting`.
* sandbox: Rename MetricsReporter classes so they don't clash.
* sandbox: Wrap the `name` parameter in a `ServerName` tagged string.
For safety. Yours, not mine.
* sandbox: Push metrics to Graphite with `--metrics-reporter=graphite`.
* sandbox: Make `MetricsReporter.Graphite` singly-lazy, not doubly-.
Co-Authored-By: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* sandbox: Replace `ServerName` with `ServerRole`.
* sandbox: Fix usage of `ServerRole.Testing` in `LedgerResource`.
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
- New seeding type "Static" that uses a fixed seed for generating new seeds.
This is only used in Sandbox and Sandbox-next.
- Remove the fuzzing for submission time in Engine.scala
- DAML-on-SQL: Create new log entry IDs from a provided SeedService.
This allows for generating deterministic transaction IDs in
Sandbox-Next.
Fixes #5107
CHANGELOG_BEGIN
[Sandbox] Add contract-id-seeding=static to allow for predictable contract IDs. This is useful for documentation,
to be able to refer to a specific contract ID instead of having to write "note down the contract ID you see on the screen; we will use it later."
[DAML-on-SQL] Derive the next log entry ID using the provided SeedService. This allows us to also deterministically create transactionIds in static time mode together with `--contract-id-seeding=static`. This should only be used for demos or documentation.
CHANGELOG_END
* sandbox: Remove a few levels of indentation in `ApiCommandService`.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Get rid of the `LowLevelCommandServiceAccess` trait.
It only has one implementation. Just use it.
* sandbox: Remove unused type aliases from ApiCommandService.
* kvutils/app|sandbox: Rename `system` to `actorSystem`.
* sandbox: Pass a materializer in to the StandaloneIndexerServer.
There's no need for it to construct one when the caller always has one
available.
CHANGELOG_BEGIN
CHANGELOG_END
* recovering-indexer: Use `SubmissionId` instead of `LedgerString`.
Where appropriate.
* sandbox: Use the materializer implicitly in JdbcIndexer.
by reading the parameters.ledger_end column instead of parameters.external_ledger_end.
CHANGELOG_BEGIN
[Ledger API Server]: Upon restart, the ledger API server continues consuming unconsumed events rather than
all events from the beginning.
CHANGELOG_END
Closes #5121
* Integrate transaction lookup on new schema in Ledger API
Re-wires all transaction lookups to the new schema
CHANGELOG_BEGIN
CHANGELOG_END
* Always return the agreement text
CHANGELOG_BEGIN
[Ledger API Server] The metric 'daml.index.lookup_transaction' has been
replaced by 'daml.index.lookup_flat_transaction_by_id' and
'daml.index.lookup_transaction_tree_by_id', which record the same events
but with more granularity regarding the type of lookup.
CHANGELOG_END
* Ensure agreement text invariant in a single place
* Do not compare the order in which witness parties appear in an event
* Hide command identifier from non-submitters in transaction trees
* Fix time assigned to transaction to be the ledger effective time and not the record time
* Store transactions from initial state into the new schema
* sandbox: Don't let just anyone construct a DbDispatcher.
It's the job of the JdbcLedgerDao.
* sandbox: Clean up the DbDispatcher a little.
* sandbox: JdbcLedgerDao now creates its own execution context.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Make `defaultNumberOfShortLivedConnections` private.
Makes storing and generating test transactions more composable (store just takes a generated transaction)
- Add a nonTransient utility method to retrieve contracts that have been created but not consumed as part of a transaction
- Add an addChildren utility method to add children to a transaction to allow to create more complex test transactions
- Add a transaction generator that uses the aforementioned addChildren to create a more complex transaction
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Remove duplicate parameters in JdbcIndexerFactory.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: `InitializedJdbcIndexerFactory` is just a ResourceOwner now.
* sandbox: Rename helper methods in `JdbcIndexerFactory`.
* sandbox: Make `JdbcIndexer#initialized` look like it does something.
Avoid `damlc compile/package` commands (which we would like to deprecate), replacing them with plain `damlc build` together with a post-build dar->dalf extraction step in the couple of places where we actually want the .dalf for testing.
changelog_begin
changelog_end
* Modularize JdbcLedgerDaoSpec
Adding more tests, but breaking it up a bit to make sense of it beforehand.
CHANGELOG_BEGIN
CHANGELOG_END
* Remove unnecessary suppressed warnings
* sandbox: Fail to start if a time mode is not explicitly specified.
CHANGELOG_BEGIN
- [Sandbox] Sandbox is switching from Static Time mode to Wall Clock
Time mode as the default. To ensure that our users know about this,
for one version, there will be no default time mode. Instead, users
will have to explicitly select their preferred time mode by means of
the `--static-time` or `--wall-clock-time` switches. In the next
release, Wall Clock Time will become the default, and users who are
happy with the defaults will no longer need to specify the time mode.
CHANGELOG_END
* daml-script|triggers: Specify time mode when testing against Sandbox.
* daml-assistant: Default the Sandbox to wall clock time.
CHANGELOG_BEGIN
- [DAML Assistant] Initializing a new DAML project adds a switch to
``daml.yaml`` to ensure Sandbox can continue to start with ``daml
start``::
sandbox-options:
- --wall-clock-time
CHANGELOG_END
* docs: Update the DAML Script and Triggers docs to use Wall Clock time.
It's now what Sandbox will use by default when using `daml init`.
* docs: Change the Quickstart to run Sandbox in wall clock time.
This explains why the contract IDs may vary.
It also updates the manual release testing script to match.
A "stable offset" in the context of the Participant Server is the offset
that was provided by the ledger backend (be it kvutils, corda, daml on sql).
The Participant Server does not keep a participant-local offset anymore.
In a single domain/kvutils setup, this makes offsets stable across participants,
since all participants will see the same offset for the same transaction.
The following changes were needed to achieve this:
- The participant server always uses the offset provided by the backend
AS IS (no more +1 magic).
- Offsets provided to the Ledger API in requests must be treated as
startExclusive and endInclusive (previously beginInclusive and
endExclusive).
CHANGELOG_BEGIN
[Ledger API]: Offsets have been redefined. Instead of being represented
by a number or a structured string, an offset is now an opaque string
that can be compared lexicographically.
[DAML Integration Kit]: The bounds for ``Dispatcher`` are now
startExclusive and endInclusive.
CHANGELOG_END
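A minimal sketch of the redefinition (not the actual participant-state API): an offset as opaque bytes, ordered by unsigned lexicographical comparison, which is the ordering ledgers must respect when producing offsets.
```scala
import com.google.protobuf.ByteString

final case class Offset(bytes: ByteString) extends Ordered[Offset] {
  override def compare(that: Offset): Int = {
    val a = this.bytes.toByteArray
    val b = that.bytes.toByteArray
    val commonLength = math.min(a.length, b.length)
    var i = 0
    while (i < commonLength) {
      val result = (a(i) & 0xff) - (b(i) & 0xff) // compare bytes as unsigned
      if (result != 0) return result
      i += 1
    }
    a.length - b.length // a strict prefix sorts before its extensions
  }
}
```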
---------
ledger api:
ledger_offset.proto
Changed definition of offsets, since they can now be compared
lexicographically.
---------
participant-state-api:
Offset:
Changed from Array[Long] to ByteString. Ledgers need to make sure that the
offsets produced are strictly monotonically increasing according to
lexicographical order.
---------
akka-streams:
Dispatcher, DispatcherImpl, SubSource:
Changed interval handling to exclusive/inclusive.
---------
ledger-on-memory:
InMemoryLedgerReaderWriter, InMemoryState:
Changed interval handling to exclusive/inclusive.
---------
ledger-on-sql:
CommonQueries, SqlLedgerReaderWriter:
Change interval in query and boundary handling.
---------
kvutils:
KeyValueParticipantStateReader, KVOffset:
Convenience functions for kvutils to add or remove sub-indexes for
offsets.
KV ledger implementations can use KVOffset to construct a structured offset.
---------
Participant Server:
JdbcLedgerDao:
Use Offset instead of Long.
Fetch offsets directly as Offset from the database with proper anorm
integration.
Change interval handling to exclusive/inclusive.
CommandCompletionsReader, CommandCompletionsTable:
Change interval handling to exclusive/inclusive.
BaseLedger:
Use Offset instead of Long.
Change interval handling to exclusive/inclusive.
Conversions:
Anorm integration for using Offset in queries and result parsers.
JdbcIndexer:
Remove references to "extenalLedgerEnd" and participant-local Long
offset (headRef).
---------
sandbox:
In general:
Use the Offset type everywhere instead of Long.
SQL migrations:
Change all offset columns to bytea or BINARY.
LedgerBackedIndexService:
Proper bounds checking has been pushed down to Dispatcher, which
allowed simplifying the acceptedTransactions implementation.
InMemoryLedger, LedgerEntries:
Change interval handling to exclusive/inclusive.
Transaction lookup by ID is now O(n) because transaction IDs are not
necessarily the same as the offset.
SqlLedger:
Remove external offset references.
* Switched to ByteString from Array[Byte] on almost all simplified API interfaces.
* Sort output by keys.
* Added comment.
CHANGELOG_BEGIN
CHANGELOG_END
* Removed DamlLogEntryId from LedgerEntry.
* Return a SortedMap ordering output state by its keys' hash in order to have deterministic ordering.
* Code tidying.
* Added implicit conversion for anorm for ByteStrings to make SQL queries cleaner.
* Ooops, missed adding a header.
* Avoid anorm copying bytes by using ByteString.newInput()
* Added some Scaladoc to simplified API interfaces.
* Added docs to LedgerStateAccess.
* Reverted some changes.
* Added some docs to ValidatingCommitter.
* Corrected some typos.
* Added package-level documentation to kvutils.api.
* Clarified convenience classes for LedgerStateOperations.
* Update ledger/participant-state/kvutils/src/main/scala/com/daml/ledger/participant/state/kvutils/Version.scala
Co-Authored-By: Gerolf Seitz <gerolf.seitz@digitalasset.com>
* Minor rewording.
* Added missing header.
* Fixed problem with merge.
Co-authored-by: Gerolf Seitz <gerolf.seitz@digitalasset.com>
* kvutils: Remove the unnecessary execution context from the test base.
* kvutils: Remove the unnecessary execution context from the writer.
* ledger-on-sql: Make a proper owner so it has a proper execution context.
This means the parallelization now needs to come from the test, so I've
augmented ParticipantStateIntegrationSpecBase to take a proper execution
context instead of the serial one that ScalaTest provides, with a
default of `ExecutionContext.global`.
* ledger-on-memory: Make a proper owner with a proper execution context.
* kvutils/app: Remove `executionContext` from LedgerFactory.
Shouldn't need it in `ResourceOwner`. I was bad.
CHANGELOG_BEGIN
CHANGELOG_END
* ledger-on-memory: Make ResourceOwners real classes.
* ledger-on-sql: Make the ResourceOwner a real class.
* ledger-on-sql: Cause side effects on resource acquisition.
Not on owner construction.
This would fail only on PostgreSQL because `IN ()` is invalid. H2 seems
to be fine with it.
CHANGELOG_BEGIN
- [Ledger API Server] Support a call to `GetParties` with an empty list
of parties.
CHANGELOG_END
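The fix is conceptually a short-circuit before the query is built; a minimal sketch with hypothetical names:
```scala
import scala.concurrent.Future

object PartyQueries {
  // Never render `... WHERE party IN ()`, which PostgreSQL rejects (H2 happens to accept it).
  def getParties(
      parties: Seq[String],
      runQuery: Seq[String] => Future[List[String]]): Future[List[String]] =
    if (parties.isEmpty) Future.successful(Nil)
    else runQuery(parties)
}
```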
* Periodically clear expired deduplication entries
CHANGELOG_BEGIN
CHANGELOG_END
Fixes #4959
* Increase cache maintenance frequency
The previous value was only good for testing purposes
* Actually remove deduplication entries
* Clear deduplication cache for IndexAndWriteService
* Share test certificates
This is primarily an attempt at making sure my contribution stats
remain negative but I think it’s a nice cleanup. The only difference
in the certs used by daml-helper which are now used everywhere is that
they use a different CN for the CA and the server. This is required to
make openssl happy (which is used by the daml-helper).
changelog_begin
changelog_end
* Fix script and trigger tests
This PR fixes the TLS configuration to work if client auth is not
enabled and adds a `--tls` flag to extractor and navigator which
allows you to enable TLS without overriding any certificates.
There is a test for extractor but none for navigator since there are
no TLS tests for it at all at the moment, as far as I can tell. I did, however, test it manually.
changelog_begin
- [Navigator] Navigator can now run against a TLS-enabled ledger without
client authentication. You can enable TLS without any special
certificates by passing ``--tls``.
- [Extractor] Extractor can now run against a TLS-enabled ledger without
client authentication. You can enable TLS without any special
certificates by passing ``--tls``.
changelog_end
Currently sandbox only supports TLS if you also enable client
authentication. There is no reason why this has to be the case, and
for things like DABL we want TLS without client authentication, so it’s
useful to be able to test this in Sandbox. This PR introduces a
`--client-auth` flag that allows you to configure the behavior. The
default is the current one of requiring client authentication.
This PR does not yet update Java clients, however, the Haskell client
supports this already and is used to test this functionality.
I’ve also added a section in the documentation on TLS (there were no
docs at all so far).
changelog_begin
- [DAML Sandbox] When Sandbox is run with TLS enabled, you can now
configure the requirement for client authentication via
``--client-auth``. See
https://docs.daml.com/tools/sandbox.html#running-with-tls for more information.
changelog_end
We exclude the tests that create lots of data.
CommandDeduplicationIT is disabled as kvutils does not yet
have time-based deduplication.
CHANGELOG_BEGIN
CHANGELOG_END
* Don't read exclusive end in completions query
CHANGELOG_BEGIN
CHANGELOG_END
* Store offsets directly and do +1 only on read side
* Fix existing completions
* Add test for the completion service
Co-authored-by: Gerolf Seitz <gerolf.seitz@digitalasset.com>
* Rename EC auth cmdline options in line with the standard and document them.
CHANGELOG_BEGIN
CHANGELOG_END
* 📝 Fix doc
* Auth docs: change `RSA DSA` -> `RSA Signature` (clashed with DSA algo)
As proposed by @SamirTalwar-DA
CHANGELOG_BEGIN
[Sandbox] Rename the `--auth-jwt-ec256-crt` command line option to `--auth-jwt-es256-crt` as well as `--auth-jwt-ec512-crt` to `--auth-jwt-es512-crt` and fix their docs
CHANGELOG_END
* kvutils: Avoid casting `ArgumentCaptor` and friends in tests.
Instead, use generics the way they're intended.
CHANGELOG_BEGIN
CHANGELOG_END
* kvutils: In KeyValueParticipantStateWriterSpec, drop the Option.
After some investigation, Canton does not currently expose a nice way
to tell Ammonite where it should write its files or, even better, to use
the in-memory mode. However, Ammonite respects $HOME, so we can just
set that to a temp directory, which fixes the issue.
changelog_begin
changelog_end
* Include Bazel patch to mark tests as exclusive
This should hopefully avoid rerunning the conformance tests as often
as we do now. While this patch is not applied on Windows (since we get
it from nix), this is not really an issue since most of the exclusive
tests (in particular, all conformance tests) are disabled on Windows
anyway.
I’ve tested this locally across a couple of runs and I get the
caching I want and looking at the code in the patch, the change looks
very reasonable. I somewhat wonder if it just broke internally at
google because they marked tests as exclusive that should have gotten
no-cache.
changelog_begin
changelog_end
* Disable caching for canton
* ledger-api-test-tool: Fix warnings flagged by IntelliJ IDEA.
* ledger-api-test-tool: Open-world mode.
In open-world mode, parties aren't allocated; their names are just
reserved for the test case, so that no other test will accidentally use
the same party name.
This is so we can test ledgers which dynamically allocate parties, such
as Sandbox.
* sandbox: Run conformance tests in "open-world" mode.
This means that the tests don't explicitly allocate parties (except for
a few), instead relying on Sandbox's implicit party allocation feature.
This is not enabled for Sandbox Next yet.
* sandbox-next: Implicit party allocation.
This is added to the command submission service.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox-next: Don't implicitly allocate pre-existing parties.
* ledger-api-test-tool: Move pre-allocation into ParticipantTestContext.
* ledger-api-test-tool: We can reserve parties or wait for them. Not both.
Make illegal states unrepresentable as early as possible.
* sandbox: Name ApiSubmissionService's private methods a little better.
* sandbox: Move ApiSubmissionService's conditional logic into methods.
* sandbox: Document why we set `implicitPartyAllocation` to `false`.
* sandbox: Document why `implicitPartyAllocation` is dangerous.
* Rework ValueSerializer
- Handle errors directly in the convenience method (it's the only way in which it's used)
- Don't drop the root cause when a serialization error occurs
- Remove outdated comment
CHANGELOG_BEGIN
CHANGELOG_END
* Add alternative with error context for deserialize method
* Make error context evaluated lazily
* Rename inner helper to avoid name clash
This renames the methods backing the `ListKnownParties` request
from `parties`, `getParties`, or `listParties` to `listKnownParties`.
CHANGELOG_BEGIN
- [Ledger API Server] Renamed two metrics:
``daml.index.parties`` was renamed to ``daml.index.list_known_parties``
``daml.index.db.get_parties`` was renamed to ``daml.index.db.list_known_parties``
CHANGELOG_END
* sandbox: Add a database test for storing and retrieving parties.
* sandbox: Add database queries for selecting one or many parties.
* ledger-api-test-tool: Add a test for `ListKnownParties`.
* sandbox: Add an endpoint to retrieve a single party's details.
CHANGELOG_BEGIN
- [Ledger API] Added an endpoint to retrieve a single party's details at
``com.digitalasset.ledger.api.v1.admin.PartyManagementService.GetParty``.
Please consult the ledger API reference documentation for more
information.
CHANGELOG_END
* sandbox: Add an endpoint to retrieve multiple parties' details.
CHANGELOG_BEGIN
- [Ledger API] Added an endpoint to retrieve multiple parties' details at
``com.digitalasset.ledger.api.v1.admin.PartyManagementService.GetParties``.
Please consult the ledger API reference documentation for more
information.
CHANGELOG_END
* sandbox: Getting a single party is a special case of multiple parties.
So let's use that code path and stop duplicating work.
* sandbox: Remove `GetParty`, as it's subsumed by `GetParties`.
"Subsumed" is a great word.
Events in transaction trees should only reference other events
that:
1) are either create or exercise events
2) the requesting parties are a witness of
This applies to recalculated root nodes as well as
the child event ids referenced in exercise nodes.
CHANGELOG_BEGIN
[Sandbox]: Fixed the projection of transaction trees.
CHANGELOG_END
* Deprecate ledger initialization with scenarios
CHANGELOG_BEGIN
[Sandbox] Initializing the sandbox with scenarios is now deprecated in
favor of using DAML Script. The scenario parameter will be removed in
the near future. A warning is logged on startup.
The DAML SDK templates and quickstart guide are using DAML Script.
See the DAML Script migration guide for more information:
https://docs.daml.com/daml-script/index.html#using-daml-script-for-ledger-initialization
CHANGELOG_END
* sandbox-next: Pull runner configuration into the constructor.
No need to do it on `acquire()` if it's pure.
* sandbox-next: Error if a scenario is provided.
Sandbox-Next doesn't support scenarios, instead favoring DAML Script.
CHANGELOG_BEGIN
CHANGELOG_END
* kvutils: On error opening an envelope, throw the correct message.
CHANGELOG_BEGIN
CHANGELOG_END
* ledger-on-sql: On error when querying state, throw the correct error.
* kvutils|ledger-on-sql: Remove unnecessary curly braces around `throw`.
* Refactor extraction of events from transaction
Closes #1909
CHANGELOG_BEGIN
CHANGELOG_END
* Remove unnecessary filtering
* Fix disclosure rule for flat transaction
* Refactor and split collection and filtering
* Replace transaction filtration with blinding info
* Move transient contract remover in transaction conversion
* Remove dangling file
* Simplify transient contract filtering
* Further refinements
* Simplify transaction tree event extraction
* Move newRoots up the file for consistency and readability
* Remove collect from GenTransaction, replace with custom iterator
* Address https://github.com/digital-asset/daml/pull/4781#discussion_r388167562
* Switch to a strict collect method
* Replaced direct access to map with contains
* sandbox: Re-use the root actor system in the StandaloneIndexerServer.
* kvutils/app: Don't use the ActorSystem execution context randomly.
Instead, make `Runner` a proper ResourceOwner, with an `acquire` method.
* sandbox: Re-use the root actor system in the StandaloneApiServer.
CHANGELOG_BEGIN
CHANGELOG_END
* resources: Remove the now-unused `ResourceOwner.sequence` functions.
They weren't well-thought-out anyway; they acquire resources
sequentially, rather than in parallel.
* Allow `LedgerFactory` to provide a full-blown `ReadWriterService` rather than a `LedgerReaderWriter` (needed by at least vDAML)
CHANGELOG_BEGIN
CHANGELOG_END
* Address review points
* Address Samir's review point in `SqlLedgerFactory`
* Finish addressing Samir's review point in `SqlLedgerFactory`
* Split reader and writer owners in `LedgerFactory` as suggested by Gerolf
* Remove unneeded `val` from `KeyvalueParticipantState[Reader|Writer]` constructor params
* Remove unneeded type parameter from `app.Runner`
* Leave `LedgerFactory` a full ledger builder but split hierarchy upwards and clarify responsibilities
* Rename `SimpleLedgerFactory` to `KeyValueLedgerFactory`
* sandbox-next: Make the Runner a real ResourceOwner.
* sandbox: Don't construct the ResetService twice.
* sandbox: Inline and simplify methods in StandaloneApiServer.
* resources: Define a `ResettableResource`, which can be `reset()`.
`reset()` releases the resource, performs an optional reset operation,
and then re-acquires it, binding it to the same variable.
* resources: Pass the resource value into the reset operation.
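A minimal sketch of the idea, not the actual resources library API: a holder whose value can be released, passed to a reset operation, and re-acquired in place.
```scala
import scala.concurrent.{ExecutionContext, Future}

final class ResettableResource[A](
    acquire: () => Future[A],
    release: A => Future[Unit],
)(implicit executionContext: ExecutionContext) {
  @volatile private var current: Future[A] = acquire()

  def asFuture: Future[A] = current

  // Release the current value, run the reset operation on it, then re-acquire,
  // binding the fresh value to the same holder.
  def reset(resetOperation: A => Future[Unit]): Future[A] = {
    current = for {
      value <- current
      _ <- release(value)
      _ <- resetOperation(value)
      freshValue <- acquire()
    } yield freshValue
    current
  }
}
```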
* sandbox: Fix warnings in `TestCommands`.
* sandbox-next: Add the ResetService.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Make sure the SandboxResetService resets asynchronously.
It was being too clever and negating its own asynchronous behavior.
* sandbox-next: Forbid no seeding.
This double negative is really hard to phrase well.
* sandbox-next: Implement ResetService for a persistent ledger.
* sandbox: Delete the comment heading StandaloneIndexerServer.
It's no longer meaningful.
* sandbox-next: No need to wrap the SandboxResetService in an owner.
* sandbox-next: Bump the ResetService test timeouts.
It looks like it's definitely slower than on Sandbox Classic™. Gonna
look into this as part of future work.
* Revert to previous asynchronous reset behavior
Co-authored-by: Gerolf Seitz <gerolf.seitz@digitalasset.com>
* Always return error on duplicate submissions
* Remove unnecessary submission information
Now that duplicate submissions always return an error,
we don't need to store the original submission result.
CHANGELOG_BEGIN
CHANGELOG_END
* Rename ttl to deduplicationTime/deduplicateUntil
* Store absolute deduplicateUntil in domain commands
* Fix my own initials
* Remove CommandDeduplicationEntry
Instead, use CommandDeduplicationResult everywhere,
removing the extra layer.
It is basically impossible not to hit this all the time if you upload
more than one package, so issuing a warning is a bit confusing.
changelog_begin
- [Sandbox] The warning about duplicate package uploads is no longer
emitted by default. You can enable it by passing
``--log-level=debug``.
changelog_end
* sandbox: Return `Future[Unit]` from migrations rather than awaiting.
I've removed the explicit error-handling, because this will be
propagated and handled at the top level.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Pass the JDBC URL into the JdbcIndexerFactory constructor.
* sandbox: Replace the JdbcIndexerFactory's `InitStatus` with two classes.
The `asInstanceOf` conversions put me off.
* sandbox: Stop passing around the ledger ID in JdbcIndexerFactory.
* sandbox: Remove the indexer `asyncTolerance`; it's no longer used.
The changes to `EventFilter` and to the query in `JdbcLedgerDao` are
"duplicate work", but we need the change in EventFilter for the
InMemoryLedger, and the change in JdbcLedgerDao so that we avoid
fetching a contract that would anyway be discarded later.
CHANGELOG_BEGIN
[Sandbox]: Witnessed contracts for which a party is not a stakeholder
are no longer returned in the active contract stream.
CHANGELOG_END
Fixes #3254.
* libs-scala/ports: Wrap socket ports in a type, `Port`.
* sandbox: Use `Port` for the API server port, and propagate.
CHANGELOG_BEGIN
CHANGELOG_END
* extractor: Use `Port` for the server port.
* ports: Make Port a compile-time class only.
* ports: Allow port 0; it can be specified by a user.
* ports: Publish to Maven Central.
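The idea behind the `Port` type, as an illustrative sketch (not the actual libs-scala/ports implementation): a small wrapper that validates the range once and keeps ports from being confused with other integers; port 0 stays legal because it asks the operating system for a free port.
```scala
final case class Port(value: Int) {
  require(value >= 0 && value <= 65535, s"Invalid port: $value")
  override def toString: String = value.toString
}
```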
* kvutils: Make the `KeyValueParticipantStateReader` tests more rigorous.
If the `offset` is specified as `None`, expect it to be `None`, not
just anything.
* kvutils: Simplify `KeyValueParticipantStateReader#stateUpdates`.
Construct the Source with `Source.apply`, not `Source.fromIterator`.
* kvutils: Use multiple entry IDs in `KeyValueParticipantStateReaderSpec`.
* kvutils: Add basic tests to `KeyValueParticipantStateReaderSpec`.
* kvutils: Add heartbeats to `LedgerReader`'s `events` output.
Heartbeats are optional, to be delivered by the ledger if and when it
deems necessary.
* sandbox-next: An observing time service backend using Akka streams.
* sandbox-next: A regular heartbeat based on Akka Streams' `tick`.
* sandbox: Replace `TimeServiceBackend.withObserver` with `.observing`.
More code, but it's more decoupled, so can more easily be sent to the
underlying backend in Sandbox Next.
CHANGELOG_BEGIN
- [Sandbox] Fixed a bug in the command completions stream when running
Sandbox in static time. Previously, upon updating the time, the old
time was emitted on the completions stream. The new time is now
emitted.
CHANGELOG_END
* sandbox: TimeServiceBackend should only emit accepted changes.
* ledger-on-memory: Use `LedgerRecord` directly.
* ledger-on-memory: Stream heartbeats to the log.
* ledger-on-memory: Encapsulate mutations behind locks at all times.
* ledger-on-memory: Differentiate between reading and writing.
* ledger-on-memory: Factor out appending to the log.
* kvutils: Move the heartbeat test into the base from ledger-on-memory.
* kvutils: Log when the submission validation fails unexpectedly.
* ledger-on-sql: Add a script to hash all migrations.
* ledger-on-sql: Publish heartbeats to the log, and stream them out.
* ledger-on-sql: Log if publishing the heartbeat failed.
* ledger-on-sql: Wrap all queries in `Try`.
Just to make sure that we don't throw from a function that returns `Try`
or `Future`.
* ledger-on-sql: Allow `Long` values as the heartbeat timestamp.
`INTEGER` really does mean 32-bit, apparently.
* sandbox-next: Pipe heartbeats to the ledger.
* ledger-on-sql: Make sure we publish the correct head after a heartbeat.
Off-by-one errors are the best errors.
* ledger-on-(memory|sql): Just accept heartbeats, not their owner.
* sandbox: Update CIDs in tests to account for the extra heartbeat.
* ledger-on-memory: Fix a reference to variable in a comment.
Co-Authored-By: Gerolf Seitz <gerolf.seitz@digitalasset.com>
* ledger-on-sql: `flatMap` over `Try` rather than `Future` when possible.
* sandbox: Make sure the heartbeat queues are thread-safe.
* kvutils: Remove `LoggingContext` from the interfaces.
Keep it internally. This means we'll drop any context, but otherwise
things should work as expected.
* sandbox-next: Pull out the heartbeat interval into a constant.
* ledger-on-sql|sandbox: Clarify large levels of nesting.
Co-authored-by: Gerolf Seitz <gerolf.seitz@digitalasset.com>
Rejected submissions are a user error and don't indicate that the server is not functioning properly. Such user
errors make it hard to spot "real" server-side warnings and errors.
Closes #4772
CHANGELOG_BEGIN
- The Ledger API Server now logs rejected submissions at a lower "INFO" level to remove a source of warnings/errors without relation to server health.
CHANGELOG_END
* Freeze DAML-LF 1.8
Two minor points that I did not mention in the previous PR:
We also include the renaming of structural records to `struct` and the
renaming of `Map` to `TextMap`.
There are some minor changes around the LF encoder tests which need to
be able to emit package metadata properly so I’ve added it to the
parser. Sorry for not splitting that out.
Following the process used for the DAML-LF 1.7 release, this does not
yet include the frozen proto file.
changelog_begin
- [DAML-LF] Release DAML-LF 1.8:
* Rename structural records to ``Struct``. Note that
structural records are not exposed in DAML.
* Rename ``Map`` to ``TextMap``.
* Add type synonyms. Note that type synonyms are not serializable.
* Add package metadata, i.e., package names and versions.
Note that the default output of ``damlc`` is still DAML-LF 1.7. You
can produce DAML-LF 1.8 by passing ``--target=1.8``.
changelog_end
* Update encoder
* Update java codegen tests
* Update comment in scala codegen
* Handle TSynApp in interface reader
* Bump lf_stable_version to 1.7
* Fix kvutils tests
* Make kvutils work with the new contract id scheme
CHANGELOG_BEGIN
- [KVUtils] KVUtils now uses random contract ids. Contract ids are made of 65 hexadecimal characters.
CHANGELOG_END
Co-authored-by: Jussi Mäki <jussi.maki@digitalasset.com>
* Tighten the loop: backend services to return API responses
CHANGELOG_BEGIN
CHANGELOG_END
* Use transaction filter directly
* Remove unnecessary transition through domain objects
* Ensure transient contract remover compares sets of witnesses
* Honor verbosity in request
* Address review https://github.com/digital-asset/daml/pull/4763#pullrequestreview-367012726
- using named parameters when creating the API objects
- renamed EventOps accessors to easily recognizable names
- dropped unnecessary usage of views
- honoring verbosity level in request in all places
- replaced usage of lenses with simple copying where it made sense
* sandbox: Name the arguments to `ApiServices.create` for clarity.
* sandbox: Clarify numbers and types in configuration classes.
* sandbox-next: Log the correct port on startup.
* sandbox-next: Connect up the command configuration.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox-next: Wire up TLS configuration.
* sandbox-next: Wire up the maximum inbound message size.
* sandbox-next: Set the global log level if specified.
And if it's not specified, default to the level in logback.xml, INFO.
* sandbox-next: Connect up the submission configuration.
* sandbox-next: Log the correct ledger ID.
* sandbox-next: Use `TimeProvider.UTC`.
* Make completion service return checkpoints
The new table for #4681 and the query used to retrieve completions
currently do not return checkpoints. Checkpoints do not have to match
the application_id and submitting_party in the query, since those fields
are not populated for checkpoints.
CHANGELOG_BEGIN
CHANGELOG_END
* Address https://github.com/digital-asset/daml/pull/4735#discussion_r384713277
This removes the sample/reference implementation of kvutils
InMemoryKVParticipantState.
This used to be the only implementation of kvutils, but now with the
simplified kvutils api we have ledger-on-memory and ledger-on-sql.
InMemoryKVParticipantState was also used for the ledger dump utility,
which now uses ledger-on-memory.
* Runner now supports a multi-participant configuration
This change removes the "extra participants" config and goes for consistent
participant setup with --participant.
* Run all conformance tests in the repository in verbose mode.
This means we'll print stack traces on error, which should make it
easier to figure out what's going on with flaky tests on CI.
This doesn't change the default for other users of the
ledger-api-test-tool; we just add the flag for:
- ledger-api-test-tool-on-canton
- ledger-on-memory
- ledger-on-sql
- sandbox
Fixes #4225.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox-next: Get the authorization service from configuration.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox-next: Add parameter names to Runner function calls.
Without them it's getting very confusing what's what, and mixing named
and unnamed arguments is even more confusing.
* Add TTL field to protobuf
* Add command deduplication to index service
* Wire command deduplication to DAO
* Implement in-memory command deduplication
* Remove Deduplicator
* Implement JDBC command deduplication
* Add TTL field to domain commands
* Deduplicate commands in the submission service
CHANGELOG_BEGIN
- [Sandbox] Implement a new command submission deduplication mechanism
based on a time-to-live (TTL) for commands.
See https://github.com/digital-asset/daml/issues/4193
CHANGELOG_END
* Remove unused command service parameter
* fixup protobuf
* Add configuration for TTL
* Fix Haskell bindings
* Rename SQL table
* Add command deduplication test
* Redesign command deduplication queries
* Address review comment
* Address review comment
* Address review comments
* Make command deduplication test optional
* Disable more tests
* Address review comments
* Address review comments
* Refine test
* Address review comments
* scalafmt
* Truncate new table on reset
* Store original command result
* Rename table columns
... to be consistent with other upcoming tables
* Rename migrations to solve conflicts
Fixes #4193.
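As a rough illustration of the TTL-based deduplication described above (a
sketch only, with made-up names, not the actual sandbox index service or DAO
code), an in-memory deduplicator needs to remember each command key until its
TTL has elapsed:
```scala
import java.time.{Duration, Instant}

import scala.collection.concurrent.TrieMap

// Hypothetical in-memory deduplicator: names and shape are illustrative only.
final class InMemoryCommandDeduplicator(ttl: Duration) {
  private final case class Key(submitter: String, commandId: String)
  private val deduplicateUntil = TrieMap.empty[Key, Instant]

  /** Returns true if the command is new (or its previous entry has expired)
    * and records it; returns false if it is a duplicate within its TTL.
    */
  def deduplicate(submitter: String, commandId: String, now: Instant): Boolean = {
    val key = Key(submitter, commandId)
    deduplicateUntil.get(key) match {
      case Some(until) if now.isBefore(until) =>
        false // duplicate: the original result should be returned instead
      case _ =>
        deduplicateUntil.put(key, now.plus(ttl))
        true // not atomic with the lookup above; good enough for a sketch
    }
  }
}
```
The JDBC variant presumably persists the same `(submitter, command id,
deduplicate-until)` information in the new SQL table mentioned above rather
than in a map.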
* Add overridable indexer, api and auth configuration to `LedgerFactory`
CHANGELOG_BEGIN
CHANGELOG_END
* Add overridable indexer and api metrics creation to `LedgerFactory`
CHANGELOG_BEGIN
CHANGELOG_END
* Add overridable api's `TimeServiceBackend` to `LedgerFactory`
* 🎨 Fix formatting
* Port SDK ledgers based on `Runner` (and the sandbox) to `TimeServiceBackend`
* Revert to `TimeProvider` for committer usage and to `None` default for API server.
Also removed the now-unused `TimeServiceProvider.wallClock()`.
* Move TimeServiceBackend back to the API server.
* 🎨 Remove unneeded argument passed for parameter w/default
* Restore sandbox ledger time support
* Simplify passing a `TimeProvider` to the sandbox ledger
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
Context
=======
After multiple discussions about our current release schedule and
process, we've come to the conclusion that we need to be able to make a
distinction between technical snapshots and marketing releases. In other
words, we need to be able to create a bundle for early adopters to test
without making it an officially-supported version, and without
necessarily implying everyone should go through the trouble of
upgrading. The underlying goal is to have less frequent but more stable
"official" releases.
This PR is a proposal for a new release process designed under the
following constraints:
- Reuse as much as possible of the existing infrastructure, to minimize
effort but also chances of disruptions.
- Have the ability to create "snapshot"/"nightly"/... releases that are
not meant for general public consumption, but can still be used by savvy
users without jumping through too many extra hoops (ideally just
swapping in a slightly-weirder version string).
- Have the ability to promote an existing snapshot release to "official"
release status, with as few changes as possible in-between, so we can be
confident that the official release is what we tested as a prerelease.
- Have as much of the release pipeline shared between the two types of
releases, to avoid discovering non-transient problems while trying to
promote a snapshot to an official release.
- Triggering a release should still be done through a PR, so we can
keep the same approval process for SOC2 auditability.
The gist of this proposal is to replace the current `VERSION` file with
a `LATEST` file, which would have the following format:
```
ef5d32b7438e481de0235c5538aedab419682388 0.13.53-alpha.20200214.3025.ef5d32b7
```
This file would be maintained with a script to reduce manual labor in
producing the version string. Other than that, the process will be
largely the same, with releases triggered by changes to this `LATEST`
and the release notes files.
Version numbers
===============
Because one of the goals is to reduce the velocity of our published
version numbers, we need a different version scheme for our snapshot
releases. Fortunately, most version schemes have some support for that;
unfortunately, the SDK sits at the intersection of three different
version schemes that have made incompatible choices. Without going into
too much detail:
- Semantic versioning (which we chose as the version format for the SDK
version number) allows for "prerelease" version numbers as well as
"metadata"; an example of a complete version string would be
`1.2.3-nightly.201+server12.43`. The "main" part of the version string
always has to have 3 numbers separated by dots; the "prerelease"
(after the `-` but before the `+`) and the "metadata" (after the `+`)
parts are optional and, if present, must consist of one or more segments
separated by dots, where a segment can be either a number or an
alphanumeric string. In terms of ordering, metadata is irrelevant and
any version with a prerelease string is before the corresponding "main"
version string alone. Amongst prereleases, segments are compared in
order with purely numeric ones compared as numbers and mixed ones
compared lexicographically. So 1.2.3 is more recent than 1.2.3-1,
which is itself less recent than 1.2.3-2.
- Maven version strings are any number of segments separated by a `.`, a
`-`, or a transition between a number and a letter. Version strings
are compared element-wise, with numeric segments being compared as
numbers. Alphabetic segments are treated specially if they happen to be
one of a handful of magic words (such as "alpha", "beta" or "snapshot"
for example) which count as "qualifiers"; a version string with a
qualifier is "before" its prefix (`1.2.3` is before `1.2.3-alpha.3`,
which is the same as `1.2.3-alpha3` or `1.2.3-alpha-3`), and there is a
special ordering amongst qualifiers. Other alphabetic segments are
compared alphabetically and count as being "after" their prefix
(`1.2.3-really-final-this-time` counts as being released after `1.2.3`).
- GHC package numbers consist of any number of numeric segments
separated by `.`, plus an optional (though deprecated) alphanumeric
"version tag" separated by a `-`. I could not find any official
documentation on ordering for the version tag; numeric segments are
compared as numbers.
- npm uses semantic versioning so that is covered already.
After much more investigation than I'd care to admit, I have come up
with the following compromise as the least-bad solution. First,
obviously, the version string for stable/marketing versions is going to
be "standard" semver, i.e. major.minor.patch, all numbers, which works,
and sorts as expected, for all three schemes. For snapshot releases, we
shall use the following (semver) format:
```
0.13.53-alpha.20200214.3025.ef5d32b7
```
where the components are, respectively:
- `0.13.53`: the expected version string of the next "stable" release.
- `alpha`: a marker that hopefully scares people enough.
- `20200214`: the date of the release commit, which _MUST_ be on
master.
- `3025`: the number of commits in master up to the release commit
(inclusive). Because we have a linear, append-only master branch, this
uniquely identifies the commit.
- `ef5d32b7`: the first 8 characters of the release commit sha. This is
not strictly speaking necessary, but makes it a lot more convenient to
identify the commit.
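To make the construction concrete, here is a small sketch (not the actual
release tooling) of how these components could be assembled into the snapshot
version string and the corresponding `LATEST` line:
```scala
// Illustrative only: in practice a script derives these inputs from git
// (commit date, commit count on master, commit sha).
def snapshotVersion(nextStable: String, commitDate: String, commitCount: Int, sha: String): String =
  s"$nextStable-alpha.$commitDate.$commitCount.${sha.take(8)}"

def latestLine(sha: String, version: String): String =
  s"$sha $version"

// snapshotVersion("0.13.53", "20200214", 3025, "ef5d32b7438e481de0235c5538aedab419682388")
//   yields "0.13.53-alpha.20200214.3025.ef5d32b7"
```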
The main downsides of this format are:
1. It is not a valid format for GHC packages. We do not publish GHC
packages from the SDK (so far we have instead opted to release our
Haskell code as separate packages entirely), so this should not be an
issue. However, our SDK version currently leaks to `ghc-pkg` as the
version string for the stdlib (and prim) packages. This PR addresses
that by tweaking the compiler to remove the offending bits, so `ghc-pkg`
would see the above version number as `0.13.53.20200214.3025`, which
should be enough to uniquely identify it. Note that, as far as I could
find out, this number would never be exposed to users.
2. It is rather long, which I think is good from a human perspective as
it makes it more scary. However, I have been told that this may be
long enough to cause issues on Windows by pushing us past the max path
size limitation of that "OS". I suggest we try it and see what
happens.
The upsides are:
- It clearly indicates it is an unstable release (`alpha`).
- It clearly indicates how old it is, by including the date.
- To humans, it is immediately obvious which version is "later" even if
they have the same date, allowing us to release same-day patches if
needed. (Note: that is, commits that were made on the same day; the
release date itself is irrelevant here.)
- It contains the git sha so the commit built for that release is
immediately obvious.
- It sorts correctly under all schemes (modulo the modification for
GHC).
Alternatives I considered:
- Pander to GHC: 0.13.53-alpha-20200214-3025-ef5d32b7. This format would
be accepted by all schemes, but will not sort as expected under semantic
versioning (though Maven will be fine). I have no idea how it will sort
under GHC.
- Not having any non-numeric component, e.g. `0.13.53.20200214.3025`.
This is not valid semantic versioning and is therefore rejected by
npm.
- Not having detailed info: just go with `0.13.53-snapshot`. This is
what is generally done in the Java world, but we then lose track of what
version is actually in use and I'm concerned about bug reports. This
would also not let us publish to the main Maven repo (at least not more
than once), as artifacts there are supposed to be immutable.
- Not having a qualifier: `0.13.53-3025` would be acceptable to all three
version formats. However, it would not clearly indicate to humans that
it is not meant as a stable version, and would sort differently under
semantic versioning (which counts it as a prerelease, i.e. before
`0.13.53`) than under Maven (which counts it as a patch, so after
`0.13.53`).
- Just counting releases: `0.13.53-alpha.1`, where we just count the
number of prereleases in-between `0.13.52` and the next. This is
currently the fallback plan if Windows path length causes issues. It
would be less convenient to map releases to commits, but it could still
be done via querying the history of the `LATEST` file.
Release notes
=============
> Note: We have decided not to have release notes for snapshot releases.
Release notes are a bit tricky. Because we want the ability to make
snapshot releases, then later on promote them to stable releases, it
follows that we want to build commits from the past. However, if we
decide post-hoc that a commit is actually a good candidate for a
release, there is no way that commit can have the appropriate release
notes: it cannot know what version number it's getting, and, moreover,
we now track changes in commit messages. And I do not think anyone wants
to go back to the release notes file being a merge bottleneck.
But release notes need to be published to the releases blog upon
releasing a stable version, and the docs website needs to be updated and
include them.
The only sensible solution here is to pick up the release notes as of
the commit that triggers the release. As the docs cron runs
asynchronously, this means walking down the git history to find the
relevant commit.
> Note: We could probably do away with the asynchronicity at this point.
> It was originally included to cover for the possibility of a release
> failing. If we are releasing commits from the past after they have been
> tested, this should not be an issue anymore. If the docs generation were
> part of the synchronous release step, it would have direct access to the
> correct release notes without having to walk down the git history.
>
> However, I think it is more prudent to keep this change as a future step,
> after we're confident the new release scheme does indeed produce much more
> reliable "stable" releases.
New release process
===================
Just like releases are currently controlled mostly by detecting
changes to the `VERSION` file, the new process will be controlled by
detecting changes to the `LATEST` file. The format of that file will
include both the version string and the corresponding SHA.
Upon detecting a change to the `LATEST` file, CI will run the entire
release process, just like it does now with the VERSION file. The main
differences are:
1. Before running the release step, CI will checkout the commit
specified in the LATEST file. This requires separating the release
step from the build step, which in my opinion is cleaner anyway.
2. The `//:VERSION` Bazel target is replaced by a repository rule
that gets the version to build from an environment variable, with a
default of `0.0.0` to remain consistent with the current `daml-head`
behaviour.
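Conceptually, point 2 boils down to a lookup like the following (shown here as
a Scala sketch; the real mechanism is a Bazel repository rule, and the
environment variable name is an assumption):
```scala
// Sketch only: the actual lookup happens in a Bazel repository rule, and the
// variable name here is assumed for illustration.
val sdkVersion: String = sys.env.getOrElse("RELEASE_VERSION", "0.0.0")
```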
Some of the manual steps will need to be skipped for a snapshot release.
See amended `release/RELEASE.md` in this commit for details.
The main caveat of this approach is that the official release will be a
different binary from the corresponding snapshot. It will have been
built from the same source, but with a different version string. This is
somewhat mitigated by Bazel caching, meaning any build step that does
not depend on the version string should use the cache and produce
identical results. I do not think this can be avoided when our artifact
includes its own version number.
I must note, though, that while going through the changes required after
removing the `VERSION` file, I have been quite surprised at the sheer number of
things that actually depend on the SDK version number. I believe we should
look into reducing that over time.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Make ReadOnlySqlLedger provide a ResourceOwner, not a Resource.
* sandbox: Fix two race conditions on shutdown in ReadOnlySqlLedger.
It appears that there are two race conditions regarding the ledger end
update mechanism.
1. The dispatcher can keep firing for a little while even after we shut
down the source, which can cause a spurious connection failure as it
makes a query on a closed database connection.
2. We don't wait for the sink to complete, which means, again, we could
shut down the connection before the last `lookupLedgerEnd` query is
issued.
This also makes sure we actually construct a new source if the updates
fail. Previously we were re-using the same source, which looked like a
crash-loop waiting to happen.
Tested by constructing `ReadOnlySqlLedger` and closing it in a loop, and
watching for errors.
CHANGELOG_BEGIN
- [Ledger API Server] Fix a race condition on shutdown in which polling
for the ledger end could continue even after the database connection
was closed.
CHANGELOG_END
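The second fix amounts to keeping the stream's completion future around and
waiting for it before the database connection is closed. A minimal Akka
Streams sketch of that pattern (not the actual `ReadOnlySqlLedger` code; names
and types are simplified):
```scala
import akka.Done
import akka.stream.scaladsl.{Keep, Sink, Source}
import akka.stream.{KillSwitches, Materializer, UniqueKillSwitch}

import scala.concurrent.duration._
import scala.concurrent.{Await, Future}

// Keep both the kill switch and the completion future, so that shutdown can
// stop the polling *and* wait for the last element to be processed before
// the database connection is closed.
def startPolling(lookupLedgerEnd: () => Long)(
    implicit materializer: Materializer
): (UniqueKillSwitch, Future[Done]) =
  Source
    .tick(0.millis, 100.millis, ())
    .map(_ => lookupLedgerEnd())
    .viaMat(KillSwitches.single)(Keep.right)
    .toMat(Sink.ignore)(Keep.both)
    .run()

def shutdown(killSwitch: UniqueKillSwitch, completed: Future[Done]): Unit = {
  killSwitch.shutdown()
  Await.result(completed, 10.seconds) // only now is it safe to close the connection
}
```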
* Split Ledger API Test Tool output
Makes failures pop up even without text coloring (e.g. on Azure Pipelines)
CHANGELOG_BEGIN
[DAML Ledger Integration Kit] Ledger API Test Tool now prints errors as a separate section
CHANGELOG_END
* Successes on the right, failures on the left :)
* Add missing newline
* sandbox: Fix a bug in the ResetServiceIT `timedReset` function.
It was computing the start and end times almost simultaneously.
CHANGELOG_BEGIN
CHANGELOG_END
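For context, the corrected pattern takes the second timestamp only after the
asynchronous reset has completed, rather than reading the clock twice in a
row; a sketch (the helper name is made up):
```scala
import scala.concurrent.duration._
import scala.concurrent.{ExecutionContext, Future}

// Measure around the completion of the future, not around its construction.
def timed[T](operation: => Future[T])(
    implicit executionContext: ExecutionContext
): Future[(FiniteDuration, T)] = {
  val start = System.nanoTime()
  operation.map(result => ((System.nanoTime() - start).nanos, result))
}
```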
* sandbox: Better error messages in ResetServiceIT if resets are slow.
Let the matchers do their magic.