* sandbox-next: Make the Runner a real ResourceOwner.
* sandbox: Don't construct the ResetService twice.
* sandbox: Inline and simplify methods in StandaloneApiServer.
* resources: Define a `ResettableResource`, which can be `reset()`.
`reset()` releases the resource, performs an optional reset operation,
and then re-acquires it, binding it to the same variable.
* resources: Pass the resource value into the reset operation.
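The `reset()` lifecycle described above (release, run an optional reset operation on the old value, re-acquire into the same binding) can be sketched in Python — all names here are hypothetical; the real implementation lives in the Scala resources library:

```python
class ResettableResource:
    """A resource that can be released, optionally reset, and re-acquired.

    `acquire` and `reset_op` are hypothetical callables standing in for
    the Scala resource machinery; this is an illustrative sketch only.
    """

    def __init__(self, acquire, reset_op=None):
        self._acquire = acquire
        self._reset_op = reset_op
        self.value = self._acquire()

    def release(self):
        self.value = None

    def reset(self):
        # Capture the old value so it can be passed into the reset
        # operation, then release, reset, and re-acquire, rebinding
        # the same variable.
        old_value = self.value
        self.release()
        if self._reset_op is not None:
            self._reset_op(old_value)
        self.value = self._acquire()
```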
* sandbox: Fix warnings in `TestCommands`.
* sandbox-next: Add the ResetService.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Make sure the SandboxResetService resets asynchronously.
It was being too clever and negating its own asynchronous behavior.
* sandbox-next: Forbid no seeding.
This double negative is really hard to phrase well.
* sandbox-next: Implement ResetService for a persistent ledger.
* sandbox: Delete the comment heading StandaloneIndexerServer.
It's no longer meaningful.
* sandbox-next: No need to wrap the SandboxResetService in an owner.
* sandbox-next: Bump the ResetService test timeouts.
It looks like it's definitely slower than on Sandbox Classic™. We'll
look into this as part of future work.
* Revert to previous asynchronous reset behavior
Co-authored-by: Gerolf Seitz <gerolf.seitz@digitalasset.com>
* Always return error on duplicate submissions
* Remove unnecessary submission information
Now that duplicate submissions always return an error,
we don't need to store the original submission result.
CHANGELOG_BEGIN
CHANGELOG_END
* Rename ttl to deduplicationTime/deduplicateUntil
* Store absolute deduplicateUntil in domain commands
* Fix my own initials
* Remove CommandDeduplicationEntry
Instead, use CommandDeduplicationResult everywhere,
removing the extra layer.
It is basically impossible not to hit this if you upload more than one
package, so issuing a warning is a bit confusing.
changelog_begin
- [Sandbox] The warning about duplicate package uploads is no longer
emitted by default. You can re-enable it by passing
``--log-level=debug``.
changelog_end
* sandbox: Return `Future[Unit]` from migrations rather than awaiting.
I've removed the explicit error-handling, because this will be
propagated and handled at the top level.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Pass the JDBC URL into the JdbcIndexerFactory constructor.
* sandbox: Replace the JdbcIndexerFactory's `InitStatus` with two classes.
The `asInstanceOf` conversions put me off.
* sandbox: Stop passing around the ledger ID in JdbcIndexerFactory.
* sandbox: Remove the indexer `asyncTolerance`; it's no longer used.
The change to `EventFilter` and to the query in `JdbcLedgerDao` are
"duplicate work", but we need the change in EventFilter for the
InMemoryLedger, and the change in JdbcLedgerDao so that we avoid
fetching a contract that anyway would be discarded later.
CHANGELOG_BEGIN
[Sandbox]: Witnessed contracts for which a party is not a stakeholder
are no longer returned in the active contract stream.
CHANGELOG_END
Fixes #3254.
* libs-scala/ports: Wrap socket ports in a type, `Port`.
* sandbox: Use `Port` for the API server port, and propagate.
CHANGELOG_BEGIN
CHANGELOG_END
* extractor: Use `Port` for the server port.
* ports: Make Port a compile-time class only.
* ports: Allow port 0; it can be specified by a user.
* ports: Publish to Maven Central.
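The idea behind the `Port` wrapper can be sketched as follows — the real `Port` is a Scala value class; the Python names here are hypothetical:

```python
class Port:
    """Wraps a socket port number, rejecting values outside 0-65535.

    Port 0 is deliberately allowed: a user may specify it to ask the
    operating system to pick a free port.
    """

    def __init__(self, value: int):
        if not 0 <= value <= 65535:
            raise ValueError(f"Invalid port: {value}")
        self.value = value

    def __repr__(self):
        return f"Port({self.value})"
```

Wrapping the raw integer in a type makes it harder to accidentally pass an arbitrary number (or the wrong argument) where a port is expected.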
* kvutils: Make the `KeyValueParticipantStateReader` tests more rigorous.
If the `offset` is specified as `None`, expect it to be `None`, not
just anything.
* kvutils: Simplify `KeyValueParticipantStateReader#stateUpdates`.
Construct the Source with `Source.apply`, not `Source.fromIterator`.
* kvutils: Use multiple entry IDs in `KeyValueParticipantStateReaderSpec`.
* kvutils: Add basic tests to `KeyValueParticipantStateReaderSpec`.
* kvutils: Add heartbeats to `LedgerReader`'s `events` output.
Heartbeats are optional, to be delivered by the ledger if and when it
deems necessary.
* sandbox-next: An observing time service backend using Akka streams.
* sandbox-next: A regular heartbeat based on Akka Streams' `tick`.
* sandbox: Replace `TimeServiceBackend.withObserver` with `.observing`.
More code, but it's more decoupled, so can more easily be sent to the
underlying backend in Sandbox Next.
CHANGELOG_BEGIN
- [Sandbox] Fixed a bug in the command completions stream when running
Sandbox in static time. Previously, upon updating the time, the old
time was emitted on the completions stream. The new time is now
emitted.
CHANGELOG_END
* sandbox: TimeServiceBackend should only emit accepted changes.
* ledger-on-memory: Use `LedgerRecord` directly.
* ledger-on-memory: Stream heartbeats to the log.
* ledger-on-memory: Encapsulate mutations behind locks at all times.
* ledger-on-memory: Differentiate between reading and writing.
* ledger-on-memory: Factor out appending to the log.
* kvutils: Move the heartbeat test into the base from ledger-on-memory.
* kvutils: Log when the submission validation fails unexpectedly.
* ledger-on-sql: Add a script to hash all migrations.
* ledger-on-sql: Publish heartbeats to the log, and stream them out.
* ledger-on-sql: Log if publishing the heartbeat failed.
* ledger-on-sql: Wrap all queries in `Try`.
Just to make sure that we don't throw from a function that returns `Try`
or `Future`.
* ledger-on-sql: Allow `Long` values as the heartbeat timestamp.
`INTEGER` really does mean 32-bit, apparently.
* sandbox-next: Pipe heartbeats to the ledger.
* ledger-on-sql: Make sure we publish the correct head after a heartbeat.
Off-by-one errors are the best errors.
* ledger-on-(memory|sql): Just accept heartbeats, not their owner.
* sandbox: Update CIDs in tests to account for the extra heartbeat.
* ledger-on-memory: Fix a reference to variable in a comment.
Co-Authored-By: Gerolf Seitz <gerolf.seitz@digitalasset.com>
* ledger-on-sql: `flatMap` over `Try` rather than `Future` when possible.
* sandbox: Make sure the heartbeat queues are thread-safe.
* kvutils: Remove `LoggingContext` from the interfaces.
Keep it internally. This means we'll drop any context, but otherwise
things should work as expected.
* sandbox-next: Pull out the heartbeat interval into a constant.
* ledger-on-sql|sandbox: Clarify large levels of nesting.
Co-authored-by: Gerolf Seitz <gerolf.seitz@digitalasset.com>
Rejected submissions are a user error and don't indicate that the server is malfunctioning. Such user
errors make it hard to spot "real" server-side warnings and errors.
Closes #4772
CHANGELOG_BEGIN
- The Ledger API Server now logs rejected submissions at a lower "INFO" level to remove a source of warnings/errors without relation to server health.
CHANGELOG_END
* Freeze DAML-LF 1.8
Two minor points that I did not mention in the previous PR:
We also include the renaming of structural records to `struct` and the
renaming of `Map` to `TextMap`.
There are some minor changes around the LF encoder tests which need to
be able to emit package metadata properly so I’ve added it to the
parser. Sorry for not splitting that out.
Following the process used for the DAML-LF 1.7 release, this does not
yet include the frozen proto file.
changelog_begin
- [DAML-LF] Release DAML-LF 1.8:
* Rename structural records to ``Struct``. Note that
structural records are not exposed in DAML.
* Rename ``Map`` to ``TextMap``.
* Add type synonyms. Note that type synonyms are not serializable.
* Add package metadata, i.e., package names and versions.
Note that the default output of ``damlc`` is still DAML-LF 1.7. You
can produce DAML-LF 1.8 by passing ``--target=1.8``.
changelog_end
* Update encoder
* Update java codegen tests
* Update comment in scala codegen
* Handle TSynApp in interface reader
* Bump lf_stable_version to 1.7
* Fix kvutils tests
* Make kvutils work with the new contract id scheme
CHANGELOG_BEGIN
- [KVUtils] Use random contract IDs. Contract IDs consist of 65 hexadecimal characters.
CHANGELOG_END
Co-authored-by: Jussi Mäki <jussi.maki@digitalasset.com>
* Tighten the loop: backend services to return API responses
CHANGELOG_BEGIN
CHANGELOG_END
* Use transaction filter directly
* Remove unnecessary transition through domain objects
* Ensure transient contract remover compares sets of witnesses
* Honor verbosity in request
* Address review https://github.com/digital-asset/daml/pull/4763#pullrequestreview-367012726
- using named parameters when creating the API objects
- renamed EventOps accessors to easily recognizable names
- dropped unnecessary usage of views
- honoring verbosity level in request in all places
- replaced usage of lenses with simple copying where it made sense
* sandbox: Name the arguments to `ApiServices.create` for clarity.
* sandbox: Clarify numbers and types in configuration classes.
* sandbox-next: Log the correct port on startup.
* sandbox-next: Connect up the command configuration.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox-next: Wire up TLS configuration.
* sandbox-next: Wire up the maximum inbound message size.
* sandbox-next: Set the global log level if specified.
And if it's not specified, default to the level in logback.xml, INFO.
* sandbox-next: Connect up the submission configuration.
* sandbox-next: Log the correct ledger ID.
* sandbox-next: Use `TimeProvider.UTC`.
* Make completion service return checkpoints
The new table for #4681 and the query used to retrieve completions
currently do not return checkpoints. These do not have to match
the application_id and submitting_party query since those fields
are not populated.
CHANGELOG_BEGIN
CHANGELOG_END
* Address https://github.com/digital-asset/daml/pull/4735#discussion_r384713277
This removes the sample/reference implementation of kvutils
InMemoryKVParticipantState.
This used to be the only implementation of kvutils, but now with the
simplified kvutils api we have ledger-on-memory and ledger-on-sql.
InMemoryKVParticipantState was also used for the ledger dump utility,
which now uses ledger-on-memory.
* Runner now supports a multi participant configuration
This change removes the "extra participants" config and goes for consistent
participant setup with --participant.
* Run all conformance tests in the repository in verbose mode.
This means we'll print stack traces on error, which should make it
easier to figure out what's going on with flaky tests on CI.
This doesn't change the default for other users of the
ledger-api-test-tool; we just add the flag for:
- ledger-api-test-tool-on-canton
- ledger-on-memory
- ledger-on-sql
- sandbox
Fixes #4225.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox-next: Get the authorization service from configuration.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox-next: Add parameter names to Runner function calls.
It's getting very confusing what's what without them, and a mix of
calls with and without names is even more confusing.
* Add TTL field to protobuf
* Add command deduplication to index service
* Wire command deduplication to DAO
* Implement in-memory command deduplication
* Remove Deduplicator
* Implement JDBC command deduplication
* Add TTL field to domain commands
* Deduplicate commands in the submission service
CHANGELOG_BEGIN
- [Sandbox] Implement a new command submission deduplication mechanism
based on a time-to-live (TTL) for commands.
See https://github.com/digital-asset/daml/issues/4193
CHANGELOG_END
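The in-memory variant of the mechanism — storing an absolute `deduplicateUntil` deadline per command rather than the relative TTL — can be sketched as follows; all names are hypothetical and this is not the submission service's actual API:

```python
from datetime import datetime, timedelta


class CommandDeduplicator:
    """Tracks commands until an absolute `deduplicate_until` timestamp.

    A duplicate submitted before that time is rejected; afterwards the
    command key may be reused. Illustrative sketch only.
    """

    def __init__(self):
        self._deduplicate_until = {}

    def submit(self, command_key, now, ttl):
        until = self._deduplicate_until.get(command_key)
        if until is not None and now < until:
            # Duplicate submissions always return an error, so there is
            # no need to store the original submission result.
            raise ValueError(f"Duplicate command: {command_key}")
        # Store the absolute deadline rather than the relative TTL.
        self._deduplicate_until[command_key] = now + ttl
```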
* Remove unused command service parameter
* fixup protobuf
* Add configuration for TTL
* Fix Haskell bindings
* Rename SQL table
* Add command deduplication test
* Redesign command deduplication queries
* Address review comment
* Address review comment
* Address review comments
* Make command deduplication test optional
* Disable more tests
* Address review comments
* Address review comments
* Refine test
* Address review comments
* scalafmt
* Truncate new table on reset
* Store original command result
* Rename table columns
... to be consistent with other upcoming tables
* Rename migrations to solve conflicts
Fixes #4193.
* Add overridable indexer, api and auth configuration to `LedgerFactory`
CHANGELOG_BEGIN
CHANGELOG_END
* Add overridable indexer and api metrics creation to `LedgerFactory`
CHANGELOG_BEGIN
CHANGELOG_END
* Add overridable api's `TimeServiceBackend` to `LedgerFactory`
* 🎨 Fix formatting
* Port SDK ledgers based on `Runner` (and the sandbox) to `TimeServiceBackend`
* Revert to `TimeProvider` for committer usage and to `None` default for API server.
Also removed now unused `TimeServiceProvider.wallClock()`.
* Move TimeServiceBackend back to the API server.
* 🎨 Remove unneeded argument passed for parameter w/default
* Restore sandbox ledger time support
* Simplify passing a `TimeProvider` to the sandbox ledger
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
Context
=======
After multiple discussions about our current release schedule and
process, we've come to the conclusion that we need to be able to make a
distinction between technical snapshots and marketing releases. In other
words, we need to be able to create a bundle for early adopters to test
without making it an officially-supported version, and without
necessarily implying everyone should go through the trouble of
upgrading. The underlying goal is to have less frequent but more stable
"official" releases.
This PR is a proposal for a new release process designed under the
following constraints:
- Reuse as much as possible of the existing infrastructure, to minimize
effort but also chances of disruptions.
- Have the ability to create "snapshot"/"nightly"/... releases that are
not meant for general public consumption, but can still be used by savvy
users without jumping through too many extra hoops (ideally just
swapping in a slightly-weirder version string).
- Have the ability to promote an existing snapshot release to "official"
release status, with as few changes as possible in-between, so we can be
confident that the official release is what we tested as a prerelease.
- Have as much of the release pipeline shared between the two types of
releases, to avoid discovering non-transient problems while trying to
promote a snapshot to an official release.
- Triggering a release should still be done through a PR, so we can
keep the same approval process for SOC2 auditability.
The gist of this proposal is to replace the current `VERSION` file with
a `LATEST` file, which would have the following format:
```
ef5d32b7438e481de0235c5538aedab419682388 0.13.53-alpha.20200214.3025.ef5d32b7
```
This file would be maintained with a script to reduce manual labor in
producing the version string. Other than that, the process will be
largely the same, with releases triggered by changes to this `LATEST`
and the release notes files.
Version numbers
===============
Because one of the goals is to reduce the velocity of our published
version numbers, we need a different version scheme for our snapshot
releases. Fortunately, most version schemes have some support for that;
unfortunately, the SDK sits at the intersection of three different
version schemes that have made incompatible choices. Without going into
too much detail:
- Semantic versioning (which we chose as the version format for the SDK
version number) allows for "prerelease" version numbers as well as
"metadata"; an example of a complete version string would be
`1.2.3-nightly.201+server12.43`. The "main" part of the version string
always has to have 3 numbers separated by dots; the "prerelease"
(after the `-` but before the `+`) and the "metadata" (after the `+`)
parts are optional and, if present, must consist of one or more segments
separated by dots, where a segment can be either a number or an
alphanumeric string. In terms of ordering, metadata is irrelevant and
any version with a prerelease string is before the corresponding "main"
version string alone. Amongst prereleases, segments are compared in
order with purely numeric ones compared as numbers and mixed ones
compared lexicographically. So 1.2.3 is more recent than 1.2.3-1,
which is itself less recent than 1.2.3-2.
- Maven version strings are any number of segments separated by a `.`, a
`-`, or a transition between a number and a letter. Version strings
are compared element-wise, with numeric segments being compared as
numbers. Alphabetic segments are treated specially if they happen to be
one of a handful of magic words (such as "alpha", "beta" or "snapshot"
for example) which count as "qualifiers"; a version string with a
qualifier is "before" its prefix (`1.2.3` is before `1.2.3-alpha.3`,
which is the same as `1.2.3-alpha3` or `1.2.3-alpha-3`), and there is a
special ordering amongst qualifiers. Other alphabetic segments are
compared alphabetically and count as being "after" their prefix
(`1.2.3-really-final-this-time` counts as being released after `1.2.3`).
- GHC package numbers consist of any number of numeric segments
separated by `.`, plus an optional (though deprecated) alphanumeric
"version tag" separated by a `-`. I could not find any official
documentation on ordering for the version tag; numeric segments are
compared as numbers.
- npm uses semantic versioning so that is covered already.
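The semantic-versioning ordering described above (metadata ignored, a prerelease sorting before the bare version, numeric prerelease segments compared as numbers and mixed ones lexicographically) can be sketched as a sort key — a minimal sketch, not a full semver implementation:

```python
def semver_key(version: str):
    """Sort key implementing semver precedence for simple version strings.

    Metadata (after '+') is irrelevant; a prerelease sorts before the
    bare version; purely numeric prerelease segments compare as numbers
    and sort before alphanumeric segments, which compare lexicographically.
    """
    version = version.split("+", 1)[0]  # metadata is irrelevant
    if "-" in version:
        main, prerelease = version.split("-", 1)
        pre_key = tuple(
            (0, int(seg), "") if seg.isdigit() else (1, 0, seg)
            for seg in prerelease.split(".")
        )
        has_pre = 0
    else:
        main, pre_key, has_pre = version, (), 1
    main_key = tuple(int(n) for n in main.split("."))
    return (main_key, has_pre, pre_key)


# 1.2.3-1 sorts before 1.2.3-2, which sorts before 1.2.3
ordered = sorted(["1.2.3", "1.2.3-2", "1.2.3-1"], key=semver_key)
```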
After much more investigation than I'd care to admit, I have come up
with the following compromise as the least-bad solution. First,
obviously, the version string for stable/marketing versions is going to
be "standard" semver, i.e. major.minor.patch, all numbers, which works,
and sorts as expected, for all three schemes. For snapshot releases, we
shall use the following (semver) format:
```
0.13.53-alpha.20200214.3025.ef5d32b7
```
where the components are, respectively:
- `0.13.53`: the expected version string of the next "stable" release.
- `alpha`: a marker that hopefully scares people enough.
- `20200214`: the date of the release commit, which _MUST_ be on
master.
- `3025`: the number of commits in master up to the release commit
(included). Because we have a linear, append-only master branch, this
uniquely identifies the commit.
- `ef5d32b7`: the first 8 characters of the release commit sha. This is
not strictly speaking necessary, but makes it a lot more convenient to
identify the commit.
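Assembling and sanity-checking such a string could look like the following — a hypothetical helper, not the actual script that maintains `LATEST`:

```python
import re


def snapshot_version(next_stable: str, date: str, commit_count: int, sha: str) -> str:
    """Build a snapshot version string of the form
    <next-stable>-alpha.<date>.<commit-count>.<sha8>."""
    version = f"{next_stable}-alpha.{date}.{commit_count}.{sha[:8]}"
    # Check the result is valid semver: major.minor.patch, then
    # dot-separated alphanumeric prerelease segments.
    semver = re.compile(r"^\d+\.\d+\.\d+(-[0-9A-Za-z]+(\.[0-9A-Za-z]+)*)?$")
    assert semver.match(version), version
    return version
```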
The main downsides of this format are:
1. It is not a valid format for GHC packages. We do not publish GHC
packages from the SDK (so far we have instead opted to release our
Haskell code as separate packages entirely), so this should not be an
issue. However, our SDK version currently leaks to `ghc-pkg` as the
version string for the stdlib (and prim) packages. This PR addresses
that by tweaking the compiler to remove the offending bits, so `ghc-pkg`
would see the above version number as `0.13.53.20200214.3025`, which
should be enough to uniquely identify it. Note that, as far as I could
find out, this number would never be exposed to users.
2. It is rather long, which I think is good from a human perspective as
it makes it more scary. However, I have been told that this may be
long enough to cause issues on Windows by pushing us past the max path
size limitation of that "OS". I suggest we try it and see what
happens.
The upsides are:
- It clearly indicates it is an unstable release (`alpha`).
- It clearly indicates how old it is, by including the date.
- To humans, it is immediately obvious which version is "later" even if
they have the same date, allowing us to release same-day patches if
needed. (Note: that is, commits that were made on the same day; the
release date itself is irrelevant here.)
- It contains the git sha so the commit built for that release is
immediately obvious.
- It sorts correctly under all schemes (modulo the modification for
GHC).
Alternatives I considered:
- Pander to GHC: 0.13.53-alpha-20200214-3025-ef5d32b7. This format would
be accepted by all schemes, but will not sort as expected under semantic
versioning (though Maven will be fine). I have no idea how it will sort
under GHC.
- Not having any non-numeric component, e.g. `0.13.53.20200214.3025`.
This is not valid semantic versioning and is therefore rejected by
npm.
- Not having detailed info: just go with `0.13.53-snapshot`. This is
what is generally done in the Java world, but we then lose track of what
version is actually in use and I'm concerned about bug reports. This
would also not let us publish to the main Maven repo (at least not more
than once), as artifacts there are supposed to be immutable.
- Not having a qualifier: `0.13.53-3025` would be acceptable to all three
version formats. However, it would not clearly indicate to humans that
it is not meant as a stable version, and would sort differently under
semantic versioning (which counts it as a prerelease, i.e. before
`0.13.53`) than under maven (which counts it as a patch, so after
`0.13.53`).
- Just counting releases: `0.13.53-alpha.1`, where we just count the
number of prereleases in-between `0.13.52` and the next. This is
currently the fallback plan if Windows path length causes issues. It
would be less convenient to map releases to commits, but it could still
be done via querying the history of the `LATEST` file.
Release notes
=============
> Note: We have decided not to have release notes for snapshot releases.
Release notes are a bit tricky. Because we want the ability to make
snapshot releases, then later on promote them to stable releases, it
follows that we want to build commits from the past. However, if we
decide post-hoc that a commit is actually a good candidate for a
release, there is no way that commit can have the appropriate release
notes: it cannot know what version number it's getting, and, moreover,
we now track changes in commit messages. And I do not think anyone wants
to go back to the release notes file being a merge bottleneck.
But release notes need to be published to the releases blog upon
releasing a stable version, and the docs website needs to be updated and
include them.
The only sensible solution here is to pick up the release notes as of
the commit that triggers the release. As the docs cron runs
asynchronously, this means walking down the git history to find the
relevant commit.
> Note: We could probably do away with the asynchronicity at this point.
> It was originally included to cover for the possibility of a release
> failing. If we are releasing commits from the past after they have been
> tested, this should not be an issue anymore. If the docs generation were
> part of the synchronous release step, it would have direct access to the
> correct release notes without having to walk down the git history.
>
> However, I think it is more prudent to keep this change as a future step,
> after we're confident the new release scheme does indeed produce much more
> reliable "stable" releases.
New release process
===================
Just like releases are currently controlled mostly by detecting
changes to the `VERSION` file, the new process will be controlled by
detecting changes to the `LATEST` file. The format of that file will
include both the version string and the corresponding SHA.
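Reading that file back can be sketched as follows (the format is `<commit-sha> <version-string>` on a single line; `parse_latest` is a hypothetical name):

```python
def parse_latest(contents: str):
    """Parse a LATEST file containing '<commit-sha> <version-string>'."""
    sha, version = contents.strip().split(" ", 1)
    return sha, version
```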
Upon detecting a change to the `LATEST` file, CI will run the entire
release process, just like it does now with the VERSION file. The main
differences are:
1. Before running the release step, CI will checkout the commit
specified in the LATEST file. This requires separating the release
step from the build step, which in my opinion is cleaner anyway.
2. The `//:VERSION` Bazel target is replaced by a repository rule
that gets the version to build from an environment variable, with a
default of `0.0.0` to remain consistent with the current `daml-head`
behaviour.
Some of the manual steps will need to be skipped for a snapshot release.
See amended `release/RELEASE.md` in this commit for details.
The main caveat of this approach is that the official release will be a
different binary from the corresponding snapshot. It will have been
built from the same source, but with a different version string. This is
somewhat mitigated by Bazel caching, meaning any build step that does
not depend on the version string should use the cache and produce
identical results. I do not think this can be avoided when our artifact
includes its own version number.
I must note, though, that while going through the changes required after
removing the `VERSION` file, I have been quite surprised at the sheer number of
things that actually depend on the SDK version number. I believe we should
look into reducing that over time.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Make ReadOnlySqlLedger provide a ResourceOwner, not a Resource.
* sandbox: Fix two race conditions on shutdown in ReadOnlySqlLedger.
It appears that there are two race conditions regarding the ledger end
update mechanism.
1. The dispatcher can keep firing for a little while even after we shut
down the source, which can cause a spurious connection failure as it
makes a query on a closed database connection.
2. We don't wait for the sink to complete, which means, again, we could
shut down the connection before the last `lookupLedgerEnd` query is
issued.
This also makes sure we actually construct a new source if the updates
fail. Previously we were re-using the same source, which looked like a
crash-loop waiting to happen.
Tested by constructing `ReadOnlySqlLedger` and closing it in a loop, and
watching for errors.
CHANGELOG_BEGIN
- [Ledger API Server] Fix a race condition on shutdown in which polling
for the ledger end could continue even after the database connection
was closed.
CHANGELOG_END
* Split Ledger API Test Tool output
Makes failures pop up even without text coloring (e.g. on Azure Pipelines)
CHANGELOG_BEGIN
[DAML Ledger Integration Kit] Ledger API Test Tool now prints errors as a separate section
CHANGELOG_END
* Successes on the right, failures on the left :)
* Add missing newline
* sandbox: Fix a bug in the ResetServiceIT `timedReset` function.
It was computing the start and end times almost simultaneously.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Better error messages in ResetServiceIT if resets are slow.
Let the matchers do their magic.
This moves IndexerIT into its own package, and swaps the dependency from
reference-v2 to ledger-on-memory.
This test should ideally live in the sandbox code, but because it
depends on ledger-on-memory, it's easier to keep it separate.
Also rewrites a lot of the code because the API is different. The tests
should now be clearer too.
I've also marked the test as flaky, because, well, it is.
CHANGELOG_BEGIN
CHANGELOG_END
* Revert "sandbox: Log, explaining the removal of static time and scenarios. (#4582)"
This reverts commit b5cb341e8d.
CHANGELOG_BEGIN
- [Sandbox] Removed the warnings regarding static time and scenarios on
initialization. We will not deprecate these until we have a stable
path forward.
CHANGELOG_END
* Sandbox: Include the log level in output.
* ledger-on-sql: Provide queries in the transaction lambda.
* ledger-on-sql: Split read queries out from write queries.
Can't run write queries from a read transaction.
CHANGELOG_BEGIN
CHANGELOG_END
* ledger-on-sql: Pass the connection into the `Queries` constructors.
Way less typing this way round.
* sandbox: If the ledger ID isn't provided, use the one in the database.
Previously, we would fail if working against an existing ledger, and not
explicitly providing the ledger ID. This was the case even if the ledger
ID was randomly generated initially.
CHANGELOG_BEGIN
- [Sandbox] If no ledger ID is provided when running against an existing
ledger, use the existing ID. Previously, Sandbox would fail to start.
CHANGELOG_END
* sandbox: The ReadOnlySqlLedger should always receive a ledger ID.
It's read-only; it can't create one.
* sandbox: Stop using `equal` in SqlLedgerSpec.
* sandbox: Test that the ledger ID is as specified in SqlLedgerSpec.
* sandbox: Let the top-level runner handle a ledger ID mismatch.
And clean up the log text.
* sandbox: Initialize the ledger properly when the ID is dynamic.
* sandbox: Use `Vector`, not `List`, for SqlLedger initialization.
Append with Vector, good. List, bad.
* ledger-api-common: Make `LedgerApiMode.Dynamic` an object.
And add Java-style static factory methods.
* kvutils/app | ledger-on-{memory,sql}: Make `ledgerId` optional.
It should be generated or retrieved from the persistence layer by the
ledger itself.
* kvutils: Make the ledger ID optional in the tests.
* ledger-on-sql: Store the ledger ID, and reject conflicting IDs.
* ledger-on-sql: Make more things final.
* ledger-on-sql: Document the `ledger_meta.table_key` column better.
* sandbox: Don't hardcode the number of packages in the test DAR.
It changes.
* ledger-on-sql: Merge the `head` resource owner with the `dispatcher`.
* sandbox: Use backticks to simplify pattern match in ReadOnlySqlLedger.
* ledger-on-sql: Extract methods in `owner`.
* Push down completion requests to data access layer
This is largely a refactoring. The externally observable behavior is unchanged, but:
- a sub-dao is created for command completions (with the intent of breaking up the dao completely in future commits)
- the command completions dao can, in theory, directly fetch completions off the index
- in practice this is not implemented here to keep this PR as small as possible
Filtering ledger entries to get completions is moved to a function that is in turn used by:
- the ledger dao
- the in-memory sandbox
The plan for the former is to add a new table where completion-relevant data is stored so that it can be fetched quickly.
The plan for the latter is to get rid of it once DAML-on-SQL ships.
CHANGELOG_BEGIN
CHANGELOG_END
* Fix off-by-one error in the in-memory sandbox
* Add type-level strings in DAML.
This PR adds a `PromotedText` stable package, with a `PromotedText` type, which is used to encode type-level strings from DAML into DAML-LF. The reason for this is to preserve the `HasField` instance argument. This PR adds a test that `HasField` is successfully reconstructed in contexts during data-dependencies, which wasn't possible before.
changelog_begin
changelog_end
* address comments
* fix overly specific tests
* Use KeyHasher to serialize contract keys in kvutils
- Use Value instead of VersionedValue in GlobalKey, as the versioning does not make sense here
  and may be misleading: a value with a different version but the same meaning would still
  be the same key.
- Relocate the KeyHasher to ledger-api-common so kvutils can use it (otherwise cyclic dependencies)
- Replace storing of the contract key as a VersionedValue with the hash produced by KeyHasher.
This is backwards incompatible. A compatible option would require us to query the key both
the old way and the new way, which is untenable. We're making a calculated breaking change.
CHANGELOG_BEGIN
- [DAML Ledger Integration Kit] Serialize contract keys using a hash instead of the value in kvutils.
This is a backwards incompatible change to kvutils.
CHANGELOG_END
* Use proper hasher for contract keys and not KeyHasher
- Use Hash.scala, not KeyHasher.scala.
- Add hash to GlobalKey as we want the hash to be computed from the inside.
The use of KeyHasher will be later deprecated and replaced by this.
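The idea of storing a fixed-size hash instead of the serialized key value can be sketched as follows — SHA-256 is used here purely for illustration; the actual scheme is defined in `Hash.scala` and the function name is hypothetical:

```python
import hashlib


def global_key_hash(template_id: str, key_value: bytes) -> str:
    """Derive a fixed-size, version-independent identifier for a contract
    key by hashing the template ID together with the serialized key value.

    Because the hash is computed over the value itself (not a versioned
    wrapper), two encodings of the same key yield the same identifier.
    """
    h = hashlib.sha256()
    h.update(template_id.encode("utf-8"))
    h.update(key_value)
    return h.hexdigest()
```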
* Use "sealed abstract case class" trick instead of private ctor
and rebase fix
* Revert change to unsupported value version decode error
* Reformat code
* Add kvutils changelog entry and bump the version
CHANGELOG_BEGIN
- [Sandbox] Static time mode is being deprecated in the future. A warning has been added to notify users of this fact.
- [Sandbox] Scenarios are being deprecated in the future, in favor of `DAML Script <https://docs.daml.com/daml-script/>`_. A warning has been added to notify users of this fact.
CHANGELOG_END
* kvutils: Extract a committer from the uses of `SubmissionValidator`.
This makes the clock injectable too.
* kvutils: Provide logging contexts in the `Runner`.
* sandbox: Remove the `StaticAllowBackwards` time provider type.
It's not used anywhere.
* sandbox: Fix warnings in CliSpec.
* sandbox: Ensure that we cannot specify both static and wall-clock time.
* sandbox-next: Crash if wall clock time is not specified.
* sandbox-next: Document more known issues in the new Sandbox.
* sandbox: Add a Clock (and some tests) to TimeServiceBackend.
* sandbox-next: Support static time.
CHANGELOG_BEGIN
- [Sandbox Next] Re-establish static time mode.
CHANGELOG_END
* ledger-on-(memory|sql): Expect a `() => Instant`, not a `Clock`.
* Add more tests to default run of Ledger API Test Tool
Furthermore, drop TimeIT, which is a sandbox-only property (and tested already as part of its integration tests)
CHANGELOG_BEGIN
[DAML Ledger Integration Kit] Ledger API Test Tool default tests modified. Use --list for the updated list of default tests. Time service test dropped from the suite.
CHANGELOG_END
* Address https://github.com/digital-asset/daml/pull/4561#discussion_r380635979
* Address https://github.com/digital-asset/daml/pull/4561#discussion_r380636989
* Optimize imports
* Only run semantic tests on Canton
* kvutils/ledger-on-sql: Avoid a race condition in dispatching.
This changes the API of kvutils to allow for passing data out of the
transaction, which makes it much easier to ensure the new head makes it
to the dispatcher. Previously, we would use an `AtomicLong` to
communicate the data, but this was problematic because values could
arrive in the wrong order. For example:
- log index 5 is committed
- `head` is updated to 6
- the dispatcher is signalled with a head of 6
- log index 6 is committed
- log index 7 is committed
- `head` is updated to 8
- `head` is updated to 7
- the dispatcher is signalled with a head of 7
In this scenario, we would have to wait until a new commit comes in
before the indexer finds out about log index 7.
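One way to guard against such out-of-order signals is to only ever advance the head — a simplified sketch with hypothetical names, not the actual dispatcher API (the real fix passes the new head out of the transaction instead):

```python
import threading


class Dispatcher:
    """Tracks the ledger head, ignoring signals that would move it backwards."""

    def __init__(self, head: int = 0):
        self._head = head
        self._lock = threading.Lock()

    def signal(self, new_head: int) -> int:
        with self._lock:
            # Out-of-order signals (e.g. 8 then 7) must not regress the head.
            if new_head > self._head:
                self._head = new_head
            return self._head
```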
* kvutils: Just return an `Either` from `SubmissionValidator`.
It was either that or introduce yet another type to split
`SubmissionValidated` into two.
CHANGELOG_BEGIN
CHANGELOG_END
* kvutils: Make ValidationFailed extend NoStackTrace.