* use traverseFM for storeSync
CHANGELOG_BEGIN
CHANGELOG_END
* split the transaction seq trial from the selection of random parameters
* sample focused trial
* temporarily enable focused 48% onlyWildcardParties H2 test in CI
- partially reverts b4244036f6 from #7482
* try different PowerShell syntax, reduce trial count to 250
* remove temporary tests
250ms is a bit low for CI when our database might be overloaded.
5 seconds seems like a decent balance between a quick response and
being sympathetic to slow/overloaded machines.
CHANGELOG_BEGIN
CHANGELOG_END
* add parameter information to "fall back to limit-based query with consistent results" test
* run only one test in CI, and run it a lot more
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* considering a grouped reporter
* never mind that
* clean up the error report
* link to #7521
* remove harder testing
* concurrent: Tag DirectExecutionContext.
1. Tag `DirectExecutionContext` as `ExecutionContext[Nothing]`, thereby
stating that it works for any tagged `Future`.
2. Move `DirectExecutionContext` to the _libs-scala/concurrent_
library, as that library requires it and the implementation is tiny.
CHANGELOG_BEGIN
CHANGELOG_END
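For context, here is a minimal sketch of the tagging idea; this simplified wrapper is illustrative, not the actual libs-scala/concurrent API. Because the tag is covariant, a context tagged `Nothing` is a subtype of every tagged context, so the direct executor can drive any tagged `Future`.

    import scala.concurrent.{ExecutionContext => UntypedEC}

    // a simplified, covariant phantom-tagged execution context
    final class TaggedEC[+P](val untyped: UntypedEC)

    object TaggedEC {
      // runs tasks on the calling thread; tagged Nothing so it satisfies
      // TaggedEC[P] for every P (since Nothing <: P)
      val direct: TaggedEC[Nothing] = new TaggedEC(new UntypedEC {
        override def execute(runnable: Runnable): Unit = runnable.run()
        override def reportFailure(cause: Throwable): Unit =
          UntypedEC.defaultReporter(cause)
      })
    }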
* concurrent: Fix the privacy of `DirectExecutionContextInternal`.
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* participant-integration-api: Inject health checks into the API server.
CHANGELOG_BEGIN
- [Integration Kit] The ``StandaloneApiServer`` now takes a
``healthChecks`` parameter, which should identify the health checks to
be exposed over the gRPC Health Checking Protocol. This will
typically look something like::
healthChecks = new HealthChecks("read" -> readService, "write" -> writeService)
Integrators may also wish to expose the health of more components.
All components wishing to report their health must implement the
``ReportsHealth`` trait.
CHANGELOG_END
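As an illustration of what such a health-reporting component might look like, here is a hedged sketch; the trait and status types below are simplified stand-ins, not the real definitions from the health package.

    // simplified stand-ins for the real health types
    sealed trait HealthStatus
    case object Healthy extends HealthStatus
    case object Unhealthy extends HealthStatus

    trait ReportsHealth {
      def currentHealth(): HealthStatus
    }

    // a hypothetical write service reporting health from its connection state
    final class MyWriteService(connected: () => Boolean) extends ReportsHealth {
      override def currentHealth(): HealthStatus =
        if (connected()) Healthy else Unhealthy
    }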
* sandbox + kvutils: Add the "read" component back to the health checks.
* remove failedTransactions field from ScenarioLedger.RichTransaction
changelog_begin
changelog_end
* remove Blinding.checkAuthorizationAndBlind; fix up callers to use Blinding.blind when blindingInfo is required
* rename/relocate: ScenarioLedger.CommitError.FailedAuthorizations --> SError.DamlEFailedAuthorization
* fix types to demonstrate that at most one FailedAuthorization is detected/reported
* address small review comments
This PR implements part of the proposal from #7093.
Here, packages are validated in the participant node before being sent to the ledger.
CHANGELOG_BEGIN
- [Ledger-API] The participant node now validates DARs before uploading them to the ledger.
This may increase upload time significantly.
CHANGELOG_END
It turns out that if you give the CSV reporter a non-existent directory,
it crashes. I did not expect this.
This constructs the directory so you don't have to worry about that.
CHANGELOG_BEGIN
CHANGELOG_END
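A minimal sketch of the guard, assuming the Dropwizard CsvReporter; the helper name is made up:

    import com.codahale.metrics.{CsvReporter, MetricRegistry}
    import java.nio.file.{Files, Path}

    object CsvMetrics {
      def csvReporter(registry: MetricRegistry, directory: Path): CsvReporter = {
        // the CSV reporter does not create its target directory, so create it first
        Files.createDirectories(directory)
        CsvReporter.forRegistry(registry).build(directory.toFile)
      }
    }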
This is the same technique as `DerivativeGauge` from the metrics
library, but with less code, because Scala is prettier than Java.
CHANGELOG_BEGIN
CHANGELOG_END
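A minimal Scala sketch of that technique, assuming the Dropwizard `Gauge` interface; the helper name is made up:

    import com.codahale.metrics.Gauge

    object Gauges {
      // derive one gauge from another by mapping its current value
      def derivedGauge[A, B](base: Gauge[A])(f: A => B): Gauge[B] =
        new Gauge[B] {
          override def getValue: B = f(base.getValue)
        }
    }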
* [KVL-222] Add participant id to index metadata dump
changelog_begin
changelog_end
* Test SqlLedger participant id initialization
* Test JdbcIndexer participant id initialization
* Make RecoveringIndexerSpec final and remove unused trait
Furthermore, the message no longer mentions PostgreSQL, as
the HikariJdbcConnectionProvider is shared across all our
RDBMS usages (including H2 and SQLite).
changelog_begin
changelog_end
* Refactor SQLLedger initialization routine
Small refactoring to make initialization a bit more readable. Performed
while moving forward with the addition of the participant identifier to
the parameters table (so a few minor details have leaked into this PR).
changelog_begin
changelog_end
* Fix compilation errors
* Address https://github.com/digital-asset/daml/pull/7200#discussion_r474630880
* Fix test, lower test logging noise
* participant-integration-api: Never use a delay of zero.
If `akka.pattern.after` is passed a delay of zero, it will execute the
body synchronously, potentially leading to a stack overflow error.
This error was observed in tests.
CHANGELOG_BEGIN
CHANGELOG_END
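A hedged sketch of the kind of guard this implies; the helper is illustrative, not the actual timer-utils code:

    import akka.actor.Scheduler
    import akka.pattern.after
    import scala.concurrent.duration._
    import scala.concurrent.{ExecutionContext, Future}

    object Delays {
      // never hand akka.pattern.after a non-positive delay: hop through the
      // execution context instead, so deeply nested retries cannot blow the stack
      def delayed[T](delay: FiniteDuration, scheduler: Scheduler)(
          body: => Future[T]
      )(implicit ec: ExecutionContext): Future[T] =
        if (delay <= Duration.Zero) Future.unit.flatMap(_ => body)
        else after(delay, scheduler)(body)
    }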
* timer-utils: Add tests for Delayed.Future.
* timer-utils: Add tests for RetryStrategy.
* timer-utils: Remove duplication in RetryStrategy tests.
* timer-utils: Allow for more wiggle room in the RetryStrategy tests.
* timer-utils: Fail after retrying the correct number of times.
* timer-utils: Ensure we don't overflow the stack in RetryStrategy.
* timer-utils: Reject a negative number of retry attempts.
* participant-integration-api: Factor out `pollUntilPersisted`.
CHANGELOG_BEGIN
CHANGELOG_END
* participant-integration-api: Flatten `flatMap`s in admin services.
* participant-integration-api: Use the `FutureConverters` implicits.
* participant-integration-api: Inline `waitForEntry` again.
* participant-integration-api: Commas are great.
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* participant-integration-api: Pass the EC to `pollUntilPersisted`.
* participant-integration-api: Extract out more into SynchronousResponse.
* participant-state-index: Stream entries from an optional offset.
Makes them consistent with configuration entries.
* participant-integration-api: Move the ledger end into the strategy.
* participant-integration-api: Use SynchronousResponse everywhere.
* participant-integration-api: Pass inputs to SynchronousResponse#submit.
* participant-integration-api: Add docs to SynchronousResponse.
* participant-integration-api: Add a changelog entry.
CHANGELOG_BEGIN
- [Ledger API] The ConfigurationManagementService will now use the same
description as other services in case of certain errors. The error
status codes have not changed.
CHANGELOG_END
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
We don't use the `participant_id` column in either the
`configuration_entries` or `party_entries` tables of the index.
This does not change the participant state API, so ledger integrators
should not need to make changes.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox-common: Add tests for parsing the metrics reporter CLI args.
* participant-integration-api: Remove unused MetricsReporter stuff.
* sandbox-common: More rigorous parsing for metrics reporters.
This avoids the issue where a previously-valid value such as
"graphite:server:1234" fails with a cryptic error message:
"hostname can't be null".
CHANGELOG_BEGIN
- [Sandbox] Improved the error message when providing an invalid metric
reporter as a command line argument. Now the error message always
shows the correct syntax.
CHANGELOG_END
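A purely illustrative sketch of the idea; this is not the sandbox's actual MetricsReporter parser, and the accepted syntax shown is hypothetical:

    import java.net.InetSocketAddress
    import scala.util.Try

    object ReporterCli {
      val expectedSyntax = "graphite:HOST:PORT"

      def parseReporter(value: String): Either[String, InetSocketAddress] =
        value.split(":").toList match {
          case "graphite" :: host :: port :: Nil if Try(port.toInt).isSuccess =>
            Right(new InetSocketAddress(host, port.toInt))
          case _ =>
            // always point the user at the expected syntax rather than a cryptic
            // downstream failure such as "hostname can't be null"
            Left(s"Invalid metrics reporter '$value', expected $expectedSyntax")
        }
    }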
* participant-integration-api: Add cause to metrics reporter parse error.
* add -Ywarn-unused to all scalac options
* remove some unused arguments
* remove some unused definitions
* remove some unused variable names
* suppress some unused variable names
* changeExtension doesn't use baseName
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* work around no plugins in scenario interpreter perf tests
* remove many more unused things
* remove more unused things, restore some used things
* remove more unused things, restore a couple signature mistakes
* missed import
* unused argument
* remove more unused loggingContexts
* some unused code in triggers
* some unused code in sandbox and kvutils
* some unused code in repl-service and daml-script
* some unused code in bindings-rxjava tests
* some unused code in triggers runner
* more comments on silent usages
- suggested by @cocreature; thanks
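For reference, a suppression with an explanatory comment might look roughly like this; the names are illustrative and assume the silencer compiler plugin:

    import com.github.ghik.silencer.silent

    final case class Config(port: Int)
    final class Server

    object ServerMain {
      // the parameter is kept only to preserve the public signature, so the
      // unused-parameter warning is suppressed and the reason documented here
      @silent
      def start(config: Config): Server = new Server
    }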
* fix missing reference in TestCommands
* more unused in triggers
* more unused in sandbox
* more unused in daml-script
* more unused in ledger-client tests
* more unused in triggers
* more unused in kvutils
* more unused in daml-script
* more unused in sandbox
* remove unused in ledger-api-test-tool
* suppress final special case for codegen unused warnings
.../com/daml/sample/mymain/ContractIdNT.scala:24: warning: parameter value ev 0 in method ContractIdNT Value is never used
implicit def `ContractIdNT Value`[a_a1dk](implicit `ev 0`: ` lfdomainapi`.Value[a_a1dk]): ` lfdomainapi`.Value[_root_.com.daml.sample.MyMain.ContractIdNT[a_a1dk]] = {
^
.../com/daml/sample/mymain/ContractIdNT.scala:41: warning: parameter value eva_a1dk in method ContractIdNT LfEncodable is never used
implicit def `ContractIdNT LfEncodable`[a_a1dk](implicit eva_a1dk: ` lfdomainapi`.encoding.LfEncodable[a_a1dk]): ` lfdomainapi`.encoding.LfEncodable[_root_.com.daml.sample.MyMain.ContractIdNT[a_a1dk]] = {
^
* one more unused in daml-script
* special scaladoc rules may need silencer, too
* unused in compatibility/sandbox-migration
* more commas, a different way to `find`
- suggested by @remyhaemmerle-da; thanks
* remove unused definitions, params, args from sandbox Scala code
CHANGELOG_BEGIN
CHANGELOG_END
* remove unused loggingContext from sandbox
* pass pageSize along in JdbcLedgerDaoTransactionsSpec
- seems to have been the intent of the parameter, and at the moment it
is semantically identical
* participant-integration-api: In `JdbcIndexer`, log with context.
We were not providing the correct `loggingContext` to
`JdbcIndexer#handleStateUpdate`. This means we were just dropping useful
information. This adds the implicit so that it uses the correct logging
context.
There's a bigger problem, in that there are multiple logging contexts in
scope, making this very error prone. We'll need to figure out a way to
avoid this as much as possible.
CHANGELOG_BEGIN
CHANGELOG_END
* participant-integration-api: Purge unnecessary newlines in JdbcIndexer.
* remove unused definitions, params, args from ledger API Scala code
CHANGELOG_BEGIN
- [Ledger API] withTimeProvider removed from CommandClient; this method
has done nothing since the new ledger time model was introduced in
1.0.0. See `issue #6985 <https://github.com/digital-asset/daml/pull/6985>`__.
CHANGELOG_END
* percolate withTimeProvider and label removal elsewhere
* event_sequential_id arithmetic
* add minPageSize fetch size to fallbacks
* use row_id arithmetic for single-party and multi-party tree queries, wildcard templates
- as a fallback, try to grab 10 or (pageSize / 10) rows with LIMIT,
whichever is larger
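Spelled out (assuming `pageSize` is the requested page size), that fallback limit is roughly:

    math.max(10, pageSize / 10)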
* add sequences of anorm queries with parsed results
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* remove an extra LIMIT
* reformat SQL statements
* add newlines to other tx tree queries
* group by does nothing with this order by setting
* reformat all queries in EventsTableFlatEventsRangeQueries
* group by doesn't matter when you have order by and no aggregate exprs
* factor out the "faster read, then safer read" pattern from tree events
* make flat events all return SqlSequence with embedded parser
- with a path for EventsRange.readPage
* make flat transactions singleWildcardParty use arithmetic and fallback
* rename Fast to ByArith, Slow to ByLimit, because speed may vary
* missed Fast/Slow
* replace the other flat transaction queries with limited versions
replace
(?s)^ SQL"""(.*?)(\$\{range.endInclusive\})(.*?) limit \$pageSize"""
with
FrqK.ByArith(\n fasterRead = guessedPageEnd => SQL"""\n $1\$guessedPageEnd$3""",\n saferRead = minPageSize => SQL"""\n $1$2$3 limit \$minPageSize"""\n )
which is obviously better than being able to factor common parts of SQL
queries, so naturally I agree with anorm lacking a doobie-like append.
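To make the shape of that pattern concrete, here is a purely illustrative sketch; the names and control flow are paraphrased, not the repository's actual QueryParts/EventsRange code, which goes through anorm:

    object PagedReads {
      // illustrative shape only: "faster read by id arithmetic, then safer
      // read by LIMIT"
      final case class QueryParts[A](
          fasterRead: Long => Vector[A], // bounded by a guessed page-end id
          saferRead: Int => Vector[A]    // bounded by LIMIT
      )

      def readPage[A](
          parts: QueryParts[A],
          guessedPageEnd: Long,
          pageSize: Int,
          minPageSize: Int
      ): Vector[A] = {
        val fast = parts.fasterRead(guessedPageEnd)
        if (fast.size >= pageSize) fast
        else parts.saferRead(minPageSize) // fall back to the LIMIT-bounded query
      }
    }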
* remove readUpperBound, stray merge conflict inclusion
- thanks @leo-da for pointing it out
* rename SqlSequence.Elt to SqlSequence.Element
- suggested by @stefanobaghino-da; thanks
* rename FrqK to QueryParts
- suggested by @stefanobaghino-da; thanks
* reformat flatMap chain
* don't rescan first page; eliminate duplicate SQL exprs
- overload 'range' to mean "first page" then "search space after first page"
- page sizes are always safe to interpolate directly into SQL, as ints
(?s)^ fasterRead = guessedPageEnd => (.*?)\$guessedPageEnd(.*?)""",\n saferRead = minPageSize => SQL""".*?"""
read = (range, limitExpr) => $1\${range.endInclusive}$2 #\$limitExpr"""
* FilterRelation is used in a private[dao] context
- you won't get a warning for this because aliases are expanded before
this is checked, so the method can still be called, you simply can't
use the same type name used in the written signature
* generated sequences of transactions with different matching frequencies
- different occurrences of the matched transactions cause different SQL
queries to be used, so we try to exercise all of them
* generalize storeSync's traverse to let multiple tests run in order
- thanks to @leo-da for the inspiration
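A hedged sketch of the sequential-traverse idea behind this (not the actual traverseFM used in the repo): unlike Future.traverse, each step starts only after the previous Future has completed, so stores happen strictly in order.

    import scala.concurrent.{ExecutionContext, Future}

    object SequentialTraverse {
      def traverseSequential[A, B](xs: List[A])(f: A => Future[B])(
          implicit ec: ExecutionContext
      ): Future[List[B]] =
        xs.foldLeft(Future.successful(List.empty[B])) { (acc, a) =>
          // wait for the accumulated futures before starting the next one
          acc.flatMap(bs => f(a).map(b => b :: bs))
        }.map(_.reverse)
    }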
* a way for singleCreates to be slightly different
* test that matched transaction count and the specific offsets match
* test more code paths for flat events
* add -Xlint:doc-detached
- reverts 1feae964e3 from #6798
* attach several scaladocs where they'll actually be included
* no changelog
* attach several more scaladocs where they'll actually be included
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* Change error code for invalid offsets for transaction stream and completion stream requests
* Expanded the application architecture docs on how to build applications with Ledger API failover capabilities.
Fixes #6842.
CHANGELOG_BEGIN
- [Ledger API] The error code for requesting a transaction stream
with an offset beyond the ledger end changed from INVALID_ARGUMENT
to OUT_OF_RANGE. This makes it easier to handle scenarios where
an application fails over to a backup participant which hasn't
caught up with the ledger yet.
- [Ledger API] The command completion service now validates the offset and
returns the OUT_OF_RANGE error if the request offset is beyond the ledger end.
- [Documentation] Added a section on how to write DAML applications
that can fail over between multiple eventually consistent Ledger API endpoints
where command deduplication works across these Ledger API endpoints, which
can be useful for addressing HA and/or DR scenarios.
CHANGELOG_END
* participant-integration-api: `GrpcServerOwner` -> `GrpcServer.Owner`.
Mostly so I can create a test class named `GrpcServerSpec`.
* ports: Move the free port search from postgresql-testing.
* participant-integration-api: Test the basics of GrpcServer.
This uses the HelloService to make sure the server behaves normally.
* ledger-api-client: Extract out channel configuration from LedgerClient.
So we can test it independently of the LedgerClient itself.
* ledger-api-client: Increase the default maximum inbound header size.
Increased from 8 KB to 1 MB.
* participant-integration-api: Reduce the maximum error message size.
Truncate gRPC error descriptions to 256 KB.
* participant-integration-api: Use `Port.Dynamic` instead of `FreePort`.
In tests.
* participant-integration-api: Explicit null checks when they're shorter.
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* ledger-api-client: Reduce the max inbound message size back to 8 KB.
And reduce the maximum size of an error description pushed out by the
server accordingly.
CHANGELOG_BEGIN
- [Integration Kit] Truncate gRPC error messages at 4 KB. This ensures
that we won't trigger a protocol error when sending errors to the
client.
CHANGELOG_END
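A hedged sketch of the truncation idea; the helper and the exact measurement are illustrative, not the API server's actual code:

    object ErrorDescriptions {
      // cap the error description so the gRPC status trailers stay well under
      // the client's maximum inbound size
      def truncated(description: String, maxLength: Int = 4096): String =
        if (description.length <= maxLength) description
        else description.take(maxLength - 3) + "..."
    }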
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* set many extra scalac -Xlint options for all Scala projects
CHANGELOG_BEGIN
CHANGELOG_END
* move NoCopy to its own file
package.scala:18: warning: it is not recommended to define classes/objects inside of package objects.
If possible, define trait NoCopy in package data instead.
trait NoCopy {
^
* move more traits, classes, and objects to proper packages
- note that `package` is itself a scoping construct, so if your reason
is the apparent aesthetic of placing a bunch of things in one `package
object`, that is easily remedied by deleting the `object` keyword
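As an illustration (names as in the warning above; the trait body is elided), the fix generally amounts to moving the definition out of the package object and into its own file in the same package:

    // before, in package.scala -- the pattern the lint warns about:
    // package object data {
    //   trait NoCopy { /* ... */ }
    // }

    // after, in NoCopy.scala -- same package scope, no package object needed:
    package data

    trait NoCopy { /* ... */ }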
* fix some type-parameter-shadow warnings
- I'm generally in favor of sensible name-shadowing, following the
"deliberately hide variables that should not be accessed here" school
of thought. But I think type name shadowing isn't quite as valuable
and more likely to confuse than general variable shadowing, so have
experimentally linted it out.
Example warning:
EventsTableFlatEventsRangeQueries.scala:11: warning: type parameter
Offset defined in trait EventsTableFlatEventsRangeQueries shadows class
Offset defined in package v1. You may want to rename your type
parameter, or possibly remove it.
private[events] sealed trait EventsTableFlatEventsRangeQueries[Offset] {
^
* fix more package-object-classes warnings
* fix an inaccessible warning
ContractsService.scala:197: warning: method searchDb in class ContractsService references private class ContractsFetch.
Classes which cannot access ContractsFetch may be unable to override searchDb.
def searchDb(dao: dbbackend.ContractDao, fetch: ContractsFetch)(
^
* enable -Xlint:infer-any
- continuing the saga of #6116, #6132
* enable -explaintypes for more detailed type errors
* missed header for NoCopy; probably should have left it in the package file
* misspelling in comment
* revert -Xlint:doc-detached
- there are a lot of these fixes, and they are noisy, so shifting to a
separate PR
- thanks to @leo-da for pointing out
CHANGELOG_BEGIN
- [Engine] Change the callback for contract keys from ``GlobalKey => Option[ContractId]`` to ``GlobalKeyWithMaintainers => Option[ContractId]``.
CHANGELOG_END
* Ledger API: Only set the offset in the last ACS response
Fixes #6757.
CHANGELOG_BEGIN
- [Sandbox] [DAML Integration Kit] Bug fix: The ActiveContractService now again sets
the offset only in the last response element, instead of in every response
element.
CHANGELOG_END
* Move public code into daml-integration-api
CHANGELOG_BEGIN
- [DAML Integration Kit] Removed sandbox-specific code from the API intended to be used by ledger integrations. Use the Maven coordinates ``com.daml:participant-integration-api:VERSION`` instead of ``com.daml:ledger-api-server`` or ``com.daml:sandbox``.
CHANGELOG_END