* Fixes OracleAround so it creates unique oracle users
* Fixes rogue connection pool in JdbcLedgerDaoTransactionsSpec
* Fixes cleanup in OracleAroundAll
* Introduces lockIdSeed for test frameworks
* Adapts usage
changelog_begin
changelog_end
Continues the work started in https://github.com/digital-asset/daml/pull/12543
These libraries were only needed to transition from Scala 2.12 to 2.13
and are no longer useful as all the necessary items are now available
in Scala 2.13.
changelog_begin
changelog_end
Since version 2.13.2, Scala has built-in support for
managing warnings in a more granular fashion, which makes
the silencer plugin we are currently using no longer
strictly necessary. Removing compiler plugins also removes
friction from migrating to Scala 3 in the future. As a
cherry on top, the built-in warning configuration can also
check whether a `@nowarn` annotation actually does
anything, allowing us to proactively remove suppressions
that are no longer needed.
[Here][1] is a blog post by the Scala team about it.
Warnings have been either solved or preserved if useful,
trying to minimize the scope (keeping it at the single
expression scope if possible). In particular, all
remaining usages of the Scala Collection API compatibility
module have been removed.
Using the silencer plugin also apparently hid a few
remaining usages of compatibility libraries from the
Scala 2.12 to Scala 2.13 transition that are no longer
needed. Removing the plugin highlighted those usages.
changelog_begin
changelog_end
[1]: https://www.scala-lang.org/2021/01/12/configuring-and-suppressing-warnings.html
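To illustrate the built-in mechanism (the names below are illustrative, not from this codebase): `@nowarn` accepts a filter such as a warning category, and compiling with `-Wconf:cat=unused-nowarn:e` turns any `@nowarn` that no longer suppresses anything into an error.

```scala
import scala.annotation.nowarn

object WarningsExample {
  @deprecated("use render instead", "1.0")
  def renderOld(s: String): String = s.toUpperCase

  // Suppression scoped to a single definition and filtered to one
  // category; with -Wconf:cat=unused-nowarn:e the compiler would flag
  // this annotation if the deprecation warning it silences went away.
  @nowarn("cat=deprecation")
  def render(s: String): String = renderOld(s)
}
```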
* resources: Add more test coverage for parallel acquisition.
Just to verify the behavior is as I expected.
CHANGELOG_BEGIN
CHANGELOG_END
* resources: Use concurrent collections in tests, instead of syncing.
* Changes to add the option of starting trigger service with typeconf/HOCON config
CHANGELOG_BEGIN
CHANGELOG_END
* add tests for authorization config and fail on both config file and cli args
* refactor and cleanup config loading and tests
* Changes based on code review comments
* Daml doc changes and making sure that we have defaults for most fields to mirror cli args
CHANGELOG_BEGIN
The Trigger Service can now be configured with a HOCON config file.
- If a config file is provided, the service starts using it; otherwise it falls back to the CLI arguments.
- If both a config file and CLI arguments are provided, the service errors out.
CHANGELOG_END
* addressing some more code review comments
* use scalatest inside properly
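The documented precedence can be sketched like this (the types and names are illustrative, not the Trigger Service's actual ones): the config file wins, the CLI is the fallback, and supplying both is an error.

```scala
sealed trait ConfigSource
final case class FromFile(path: String) extends ConfigSource
final case class FromCli(args: Seq[String]) extends ConfigSource

object ConfigPrecedence {
  // Config file and CLI args are mutually exclusive; the file wins
  // only when it is the sole source given.
  def resolve(file: Option[String], cli: Seq[String]): Either[String, ConfigSource] =
    (file, cli) match {
      case (Some(_), args) if args.nonEmpty =>
        Left("provide either a config file or CLI arguments, not both")
      case (Some(f), _) => Right(FromFile(f))
      case (None, args) => Right(FromCli(args))
    }
}
```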
New year, new copyright, new expected unknown issues with various files
that won't be covered by the script and/or will be but shouldn't change.
I'll do the details on Jan 1, but would appreciate this being
preapproved so I can actually get it merged by then.
CHANGELOG_BEGIN
CHANGELOG_END
* resources-akka: Wait for the bounded source queue to finish.
Otherwise, we may get submissions after dependencies have shut down.
CHANGELOG_BEGIN
CHANGELOG_END
* resources-akka: Simplify the interface and use clearer type param names.
* concurrent: Replace `DirectExecutionContextInternal` with `parasitic`.
* concurrent: Rename `DirectExecutionContext` `parasitic`.
* Use `ExecutionContext.parasitic` instead of `DirectExecutionContext`.
We no longer need the latter.
CHANGELOG_BEGIN
CHANGELOG_END
* Fix formatting.
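For context, `ExecutionContext.parasitic` (available since Scala 2.13) runs callbacks on the thread that completes the future, which is exactly what a hand-rolled direct execution context provided:

```scala
import scala.concurrent.{ExecutionContext, Future}

object ParasiticExample {
  // The map runs synchronously on the calling thread, because the
  // future is already completed and the executor is parasitic.
  def double(n: Int): Future[Int] =
    Future.successful(n).map(_ * 2)(ExecutionContext.parasitic)
}
```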
* replace pour with a new, total, uncurried apply to create NonEmpty's
* use the new NonEmpty apply in place of pour
* non-empty cons, snoc, head, tail
* add map and flatMap for NonEmpty iterables
* remove scala-collection-compat from scalautils
* tests for map, flatMap, cons
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* missing 'extends AnyVal'
* colliding map and flatMap for Maps
* Revert "colliding map and flatMap for Maps"
* more specific Map and Set return types
* type tests for map operations
* add 'to' conversions
CHANGELOG_BEGIN
- [Integration Kit] `SourceQueueResourceOwner` has been renamed to `BoundedSourceQueueResourceOwner` and takes a `BoundedSourceQueue` from now on
CHANGELOG_END
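A minimal sketch of the idea behind a total, uncurried `apply` plus `map`/`flatMap` (illustrative only; the real `NonEmpty` in scala-utils wraps existing collections rather than defining its own list type):

```scala
final case class NonEmptyList[A](head: A, tail: List[A]) {
  def toList: List[A] = head :: tail
  def map[B](f: A => B): NonEmptyList[B] =
    NonEmptyList(f(head), tail.map(f))
  def flatMap[B](f: A => NonEmptyList[B]): NonEmptyList[B] = {
    val first = f(head)
    NonEmptyList(first.head, first.tail ::: tail.flatMap(f(_).toList))
  }
}

object NonEmptyList {
  // Total and uncurried: the signature itself demands at least one
  // element, so construction can never fail at runtime.
  def apply[A](head: A, rest: A*): NonEmptyList[A] =
    new NonEmptyList(head, rest.toList)
}
```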
I’ve kept the infrastructure for versioned_scala_deps around because
I’m optimistic and hope that eventually we’ll do another Scala upgrade.
changelog_begin
changelog_end
* add actAs, readAs to `meta` for create, exercise, createAndExercise endpoints
* use meta actAs, readAs to control how contract IDs are looked up for exercise
* outdated comments on JwtWritePayload and JwtPayload
* toSet1 operator to clean up some NEL manipulation
* take optional readAs argument for query endpoint
* use readAs for query POST
* check whether readAs is safe in query endpoint
* missed CommandMeta args in tests
* FetchRequest, a domain model to layer on "fetch" endpoint's ContractLocator
- ContractLocator was overloaded as a domain request model *and* a component
of other domain request models; the addition of new arguments means it can
no longer exactly meet the former, and adding "readAs" to it would poison it
for the latter cases.
* take readAs argument from fetch endpoint
* add readAs security check from query to fetch
* move jwt parties functions to util
* testing the party-set/JWT functions
* missing headers
* caught boolean blindness in readAs security checks
* test that meta params are used for commands
* make resolveRefParties do a subset check, too
* Revert "make resolveRefParties do a subset check, too"
This reverts commit 40a66f102c.
* test that the readAs auth check actually applies
* test that command service uses meta readAs, actAs
* note on test coverage
* add changelog
CHANGELOG_BEGIN
- [JSON API] ``actAs`` and ``readAs`` may be specified for create, exercise,
create-and-exercise, non-WS fetch, and non-WS query.
See `issue #11454 <https://github.com/digital-asset/daml/pull/11454>`__.
CHANGELOG_END
* no saving mallocs
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* untabify
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* move design comment to comment from function name
- suggested by @cocreature; thanks
* remove unneeded variable
* refactor single-key callers of requestJsonReader
- suggested by @cocreature; thanks
* build error in ce
* diagnose Windows failure
* add missed http-json-testing requirement
* use readers as fetch/query party-set name
- suggested by @cocreature and @realvictorprm, thanks
* extra import
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Changes to make certain hikari cp connection pool properties configurable via jdbc conf string
CHANGELOG_BEGIN
[JSON-API] Certain HikariCP connection pool properties can now be configured via the JDBC conf string. The supported properties are listed below:
poolSize -- specifies the maximum pool size for the database connection pool
minIdle -- specifies the minimum number of idle connections in the database connection pool
connectionTimeout -- long value; specifies the connection timeout for the database connection pool
idleTimeout -- long value; specifies the idle timeout for the database connection pool
CHANGELOG_END
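A sketch of the kind of parsing involved, assuming a comma-separated `key=value` conf string (the real `JdbcConfig` parser handles quoting and validation beyond this):

```scala
object ConfStringSketch {
  // Splits "poolSize=10, minIdle=2" into a key -> value map; a
  // malformed entry without '=' would throw in this naive sketch.
  def parse(conf: String): Map[String, String] =
    conf.split(",").iterator
      .map(_.trim)
      .filter(_.nonEmpty)
      .map { kv =>
        val Array(k, v) = kv.split("=", 2)
        k.trim -> v.trim
      }
      .toMap
}
```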
* some missed changes for DbTriggerDao
* remove defaults for poolSize on JdbcConfig
* add constants for test defaults
* FoldableContravariant, a mapping for Foldable instances
* use FoldableContravariant to specialize several ImmArraySeq, NonEmpty methods
* folding specializations for ImmArray
* a few docs for FoldableContravariant
* specializations for FrontStack
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
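The underlying idea, with illustrative names rather than scalaz's actual encoding: a Foldable-like instance for `F` can be derived from one for `G` given a conversion `F ~> G`, which is what lets wrappers such as `ImmArraySeq` delegate to, and specialize through, the underlying collection's fold operations.

```scala
trait Fold[F[_]] {
  def toList[A](fa: F[A]): List[A]
}

trait ~>[F[_], G[_]] {
  def apply[A](fa: F[A]): G[A]
}

object Fold {
  // "Contravariant" in the functor position: a conversion F ~> G
  // pulls G's instance back to an instance for F.
  def contraFold[F[_], G[_]](to: F ~> G)(implicit G: Fold[G]): Fold[F] =
    new Fold[F] {
      def toList[A](fa: F[A]): List[A] = G.toList(to(fa))
    }

  implicit val vectorFold: Fold[Vector] = new Fold[Vector] {
    def toList[A](fa: Vector[A]): List[A] = fa.toList
  }
}

final case class Wrapped[A](run: Vector[A])

object Wrapped {
  implicit val fold: Fold[Wrapped] =
    Fold.contraFold(new (Wrapped ~> Vector) {
      def apply[A](fa: Wrapped[A]): Vector[A] = fa.run
    })
}
```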
* add PartySet alias for db-backend
* add PartySet alias for fetch-contracts
* add PartySet alias for http-json
* deprecate old apply
* quick builder for NonEmpty collections
* replace PartySet in db-backend
* replace PartySet in fetch-contracts
* lar.Party is also domain.Party
* add incl1 operator
* replace PartySet in http-json
* port tests
* into with Scala 2.12 needs collection-compat
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* simplify a couple functions that don't need so much data transformation now
* clean up some OneAnds and HKTs
* deal with Scala 2.12 without having warning suppression
* better, more obscure choice for Scala 2.12
* Fix typo postgres --> oracle
* Move tablePrefix into base jdbcConfig
* Add table.prefix in trigger service migrations
* Add tablePrefix to trigger service db table names
changelog_begin
* [Trigger Service] Enable the new `tablePrefix` setting in the `--jdbc`
flag to add a prefix to all tables used by the trigger service to
avoid collisions with other components using the same db-schema.
changelog_end
* Add tablePrefix config test for trigger service
* Fix Oracle test
* Allow existing schema in trigger service
CHANGELOG_BEGIN
* [Trigger Service] Enable the new ``--allow-existing-schema`` flag to
initialize the trigger service on a database with a pre-existing
schema.
CHANGELOG_END
* Don't ignore CLI flag value
* Update triggers/service/src/main/scala/com/digitalasset/daml/lf/engine/trigger/dao/DbTriggerDao.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Use fragment interpolation
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* enumerating out-of-sync offsets at the DB level
* cleanup in lastOffset
* write the latest-requested-or-read offset when catching up
- Writing only the latest-read, as before, would imply unsynced offsets
that are actually well-synced. This puts the DB in a more uniform
state, i.e. it should actually reflect the single value that the
fetchAndPersist loop tries to catch everything up to.
* detecting lagging offsets from the unsynced-offsets set
- Treating every unsynced offset as a lag would make us needlessly retry
perfectly synchronized query results.
* add Foldable1 derived from Foldable for NonEmpty
* nicer version of the unsynced function
* ConnectionIO scalaz monad
* rename Offset.ordering to `Offset ordering` so it can be imported verbatim
* finish aggregating in the lag-detector function, compiles
* port sjd
* XTag, a scalaz 7.3-derived tag to allow stacked tags
* make the complicated aggregation properly testable
* extra semantic corner cases I didn't think of
* tests for laggingOffsets
* a way to rerun queries if the laggingOffsets check reveals inconsistency
* if bookmark is ever different, we always have to rerun anyway
* boolean blindness
* incorporate laggingOffsets into fetchAndPersistBracket
* split fetchAndPersist from getTermination and clean up its arguments
* just compose functors
* add looping to fetchAndPersistBracket
* more mvo tests
* test unsyncedOffsets, too
* Lagginess collector
* supply more likely actual data with mvo tests; don't trust Java equals
* rework minimumViableOffsets to track sync states across template IDs
* extra note
* fix the tests to work against the stricter mvo
* move surrogatesToDomains call
* more tests for lagginess accumulator
* add changelog
CHANGELOG_BEGIN
- [JSON API] Under rare conditions, a multi-template query backed by database
could have an ACS portion that doesn't match its transaction stream, if
  updated concurrently. This condition is now checked and accounted for.
See `issue #10617 <https://github.com/digital-asset/daml/pull/10617>`__.
CHANGELOG_END
* port toSeq to Scala 2.12
* handle a corner case with offsets being too close to expected values
* didn't need XTag
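The core check can be pictured as follows (a deliberately simplified sketch; the real `laggingOffsets` aggregates across template IDs and query results): only offsets strictly behind the bookmark count as lagging, so well-synchronized results are not needlessly re-queried.

```scala
object OffsetSync {
  // Per-template offsets equal to the bookmark are in sync; anything
  // strictly behind it is lagging and needs the loop to catch up.
  def lagging[K](bookmark: Long, synced: Map[K, Long]): Map[K, Long] =
    synced.filter { case (_, offset) => offset < bookmark }
}
```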
* [Daml error codes API] Further implementations
* Implements ErrorCode.asGrpcError (and test)
* Error code logging now accepts correlation id and an extra context map
* Full error context is included into enriched logging context
CHANGELOG_BEGIN
CHANGELOG_END
* Fixed Scala 2.12 compilation issues
* Test case for LockedFreePort not colliding with port 0
changelog_begin
changelog_end
* Discover dynamic port range on Linux
* Random port generator outside ephemeral range
* remove dev comments
* Draw FreePort from outside the ephemeral port range
Note, there is a race condition between the socket being closed and the
lock-file being created in LockedFreePort. This is not a new issue, it
was already present with the previous port 0 based implementation.
LockedFreePort handles this by attempting to find a free port and taking
a file lock multiple times.
But, it could happen that A `find`s port N, and obtains the lock, but
doesn't bind port N again, yet; then B binds port N during `find`; then
A attempts to bind port N before B could release it again and fails
because B still holds it.
* Select dynamic port range based on OS
* Detect dynamic port range on macOS and Windows
* Import sysctl from Nix on macOS
changelog_begin
changelog_end
* Windows line separator
* FreePort helpers visibility
* Use more informative exception types
* Use a more lightweight unit test
* Add comments
* Fix Windows
* Update libs-scala/ports/src/main/scala/com/digitalasset/ports/FreePort.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Update libs-scala/ports/src/main/scala/com/digitalasset/ports/FreePort.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Add a comment to clarify the generated port range
* fmt
* unused import
* Split libs-scala/ports
Splits the FreePort and LockedFreePort components into a separate
library, as these are only used for testing purposes.
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
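The approach can be sketched as follows, assuming a Linux-style dynamic range of 32768-60999 (the real code discovers the range via `/proc` or `sysctl` per OS, and pairs the port with a file lock to mitigate the race described above):

```scala
import java.io.IOException
import java.net.ServerSocket
import scala.util.Random

object FreePortSketch {
  // Assumed lower bound of the dynamic (ephemeral) range; the real
  // code reads it from the OS instead of hard-coding it.
  val DynMin = 32768

  private def isFree(port: Int): Boolean =
    try { new ServerSocket(port).close(); true }
    catch { case _: IOException => false }

  // Draw candidates from [1024, DynMin) so the OS cannot hand the same
  // port to an unrelated outgoing connection; retry until one binds.
  def find(): Int =
    Iterator
      .continually(1024 + Random.nextInt(DynMin - 1024))
      .filter(isFree)
      .next()
}
```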
* unconditionally enable JSON search index on Oracle
In '1kb of data' and larger Oracle integration tests:
ORA-29902: error in executing ODCIIndexStart() routine
ORA-20000: Oracle Text error:
DRG-50943: query token too long on line 1 on column 3
From https://docs.oracle.com/en/database/oracle/oracle-database/19/errmg/DRG-10000.html#GUID-46BC3B3F-4DB7-4EB4-85DA-55E9461966CB
Cause: A query token is longer than 256 bytes
Action: Rewrite query
* add changelog
CHANGELOG_BEGIN
- [JSON API] The Oracle database schema has changed; if using
``--query-store-jdbc-config``, you must rebuild the database by adding
``,start-mode=create-only``. See #10539.
CHANGELOG_END
* test only 1kb
* extra flag in db config string
* let Queries backends configure themselves from maps
* new Queries constructor dataflow to better support config values
* remove fields as we go, isolating backend-specific from -agnostic conf
- we use StateT to avoid the problems that will definitely arise if we
don't DRY.
* fix up DbConfig including DbStartupMode
* start to uncouple json-api's config from db-utils
* two JdbcConfigs with different purposes/scopes
- also moves db-utils contents to com.daml.dbutils
* adapt trigger service to refactoring
* fix JdbcConfig leftovers
* adapt http-json-cli to new JdbcConfig
* remove extra ConfigCompanion
* explain more about the QueryBackend/Queries distinction
* split SupportedJdbcDriver into two phases with a tparam
* use SupportedJdbcDriver.TC instead of SupportedJdbcDriver as the nullary typeclass
* patch around all the moved objects with imports
* missed import from moving ConnectionPool to dbutils
* use new 2-phase SupportedJdbcDriver for ContractDao setup
* left off part of a comment
* more q.queries imports
* other imports from the dbutils move
* nested JdbcConfig
* configure the driver in each backend-specific test
* very confusing error, but make the imports nicer and it goes away
* nested JdbcConfig in perf
* missing newline
* port contractdao-bench
* test new option parsing all the way through QueryBackend
* disable search index for some tests, enable for others
* add changelog
CHANGELOG_BEGIN
- [Trigger Service] ``--help`` no longer advertises unsupported JDBC
options from JSON API.
- [JSON API] [EE only] By default, on Oracle, sets up a JSON search
index to speed up the queries endpoints. However, Oracle versions
prior to 19.12 have an unrecoverably buggy implementation of this
index; in addition, the current implementation fails on queries with
strings >256 bytes, with no way to disable the index for that query.
Pass the ``disableContractPayloadIndexing=true`` option as part of
``--query-store-jdbc-config`` to disable this index when creating the
schema.
See `issue #10539 <https://github.com/digital-asset/daml/pull/10539>`__.
CHANGELOG_END
* port failure tests
* init version table last, drop first
- suggested by @realvictorprm; thanks
* rename split DBConfig.scala
- suggested by @realvictorprm; thanks
* move imports to not be in alphabetical order
- suggested by @realvictorprm; thanks
* remove createSchema
- suggested by @realvictorprm; thanks
* Revert "test only 1kb"
This reverts commit 616e173e63.
* port to scala 2.12
- bug in unused imports
- old name `-` for `removed`
* Upgrade Scalatest to v3.2.9.
Because of some coupling we also have to upgrade Scalaz to the latest
v7.2 point release, v7.2.33.
The Scalatest changes are quite involved because the JAR has been broken
up into several smaller JARs. Because Bazel expects us to specify all
dependencies and doesn't allow transitive dependencies to be used
directly, this means that we need to specify the explicit Scalatest
components that we use.
As you can imagine, this results in quite a big set of changes. They
are, however, constrained to dependency management; all the code remains
the same.
CHANGELOG_BEGIN
CHANGELOG_END
* http-json-oracle: Fix a Scalatest dependency.
* ledger-api-client: Fix a Scalatest dependency.
* Move ExceptionOps from ledger-service/utils to //libs-scala/scala-utils
* extract connection and JdbcConfig from //ledger-service to independent db-utils module
Changelog_begin
Changelog_end
* update trigger service to use new libs-scala/db-utils
* missed changes for http-json-oracle
* minor cleanup based on comments
* fix breaking scala 2_12 build
* cleanup db-utils/BAZEL.md file
* participant-integration-api: Use `Scheduler` instead of `Materializer`.
Simpler API, and we can inject a test scheduler to make the tests more
reliable.
CHANGELOG_BEGIN
CHANGELOG_END
* participant-integration-api: Use a test scheduler in tests.
This makes the tests faster and more reliable.
* participant-integration-api: Cancel timeout when the config is found.
* participant-integration-api: Fail properly if config lookup fails.
* participant-integration-api: Handle failures in provisioning config.
* participant-integration-api: Test shutting down the config provisioner.
* participant-integration-api: Use the scheduler cancellations.
More useful than a boolean.
* participant-integration-api: Handle submission ID generation failure.
Unfortunately this is untestable, because the only output is logging,
which we can't test right now.
* resources-akka: Add `ResourceOwner.forCancellable`.
* participant-integration-api: Simplify the config provisioner.
This makes it a set of functions; we don't need a class. There is no
exposed behavior.
* participant-integration-api: Use the services EC to initialize config.
* participant-integration-api: Add comments around `release`.
Co-authored-by: fabiotudone-da <fabio.tudone@digitalasset.com>
* Revert "participant-integration-api: Handle submission ID generation failure."
This reverts commit 72b13771a7.
* participant-integration-api: Factor out a `LedgerConfigProvider` class.
Again.
TBH, it is cleaner than multiple parameter methods.
Also adds more Scaladoc.
* resources-akka: `ResourceOwner.forCancellable` is now generic.
It returns whatever type the `acquire` function returns, to allow for
subtypes of `Cancellable`.
Co-authored-by: fabiotudone-da <fabio.tudone@digitalasset.com>
* logging-entries: Make `LoggingEntries` a non-case class.
There's no reason for it to need `equals`, etc.
CHANGELOG_BEGIN
CHANGELOG_END
* ledger-api-domain: Convert commands into a logging value.
Instead of having a function, let's use `ToLoggingContext`.
This also adds a couple of missing items, and always logs `workflowId`.
* participant-state: Convert updates into a logging value.
Instead of having a function, let's use `ToLoggingContext`.
This changes some of the logging context structure, but otherwise
everything remains the same.
* Make sure Scaladoc is lined up for modified code.
* new projection for aggregated matched-queries
We can redo all the template-ID matches (and payload query matches, if
needed) in the SELECT projection clause to emit a list of matchedQueries
indices SQL-side.
CHANGELOG_BEGIN
CHANGELOG_END
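The projection idea in miniature (an illustrative Scala model of the SQL-side computation, not the actual query code): instead of deduplicating rows across per-query SELECTs, a single query emits, for each row, the indices of the queries that row matches.

```scala
object MatchedQueries {
  // For each row, recompute every query's predicate in the projection
  // and collect the indices of those that match.
  def projectedIndex[Row, Q](row: Row, queries: Vector[Q])(
      matches: (Q, Row) => Boolean
  ): Vector[Int] =
    queries.zipWithIndex.collect { case (q, i) if matches(q, row) => i }
}
```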
* selectContractsMultiTemplate always returns one query
* factoring
* remove multiquery deduplication from ContractDao
* test simplest case of projectedIndex; remove uniqueSets tests
* remove uniqueSets
* add more test cases for the 3 main varieties of potential inputs
* remove uniqueSets tests that were commented for reference
* remove unneeded left-join
* scala 2.12 port
* port Map test order to 2.12
* use SortedMap so the Scala version tests are unified
- suggested by @cocreature; thanks
* Removing previous Async commit features
The previous async commit features had
- async commit configured by config param
- special treatment to still force sync commit for certain thread pools
- special treatment to still force sync commit at the transaction level for certain transactions
This is a preparation step to clear the path for adding a new approach to async commit treatment:
- only session/connection-level async configuration
- no transaction-level special treatment
- async commit enabled only for specific connection pools (where it is needed / is safe)
* Add DataSourceStorageBackend
- to spawn DataSources in a controlled fashion; these will be needed in upcoming commits for the HikariCP integration
- DataSources can have Connection init hooks defined with the help of the InitHookDataSourceProxy (this is needed for the HA implementation)
- added DataSourceConfig to capture the needed level of fine-tuning for DataSource creation
* Switches to DataSource wrapping in HikariCP instantiation
* Adds DBLockStorageBackend
- this is the abstraction and the implementation of database-level locking
- with support for Oracle and Postgres
* Adds HaCoordinator and implementation
* Wiring of HaCoordinator in parallel indexer
* Adds feature flag
changelog_begin
changelog_end
* [JSON-API] Log json request & response bodies in debug
This also re-adds logging of the incoming requests and the responses that are sent out.
changelog_begin
- [JSON-API] Logging of the request and response bodies is now available for appropriate requests if the chosen log level is DEBUG or lower. These can be found in the logging context of the request begin & end log messages (the field names in the ctx are "request_body" and "response_body").
changelog_end
* Move the HTTP request throughput marking to the right place, including the logging of the processing time
* Ensure that the processing-time measurement is implemented consistently
* daml-lf/data: Truncate party names in log output, on request.
The party name can grow quite long, so we offer ledger implementors the
opportunity to truncate it in structured log output.
Unfortunately, because we use Logback through the global
`LoggerFactory`, there is no place to inject logging configuration. This
means we also need to use global, mutable state to configure logging
output. I have added a `LoggingConfiguration` class+object in Daml-LF
Data, which may not be the best place, but I can't think of a better
one right now. I suggest we leave it there until it has reason to grow,
at which point we may want to move it.
CHANGELOG_BEGIN
CHANGELOG_END
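A sketch of the kind of global, mutable configuration described (the field and method names here are assumptions, not the actual `LoggingConfiguration` API):

```scala
object LoggingConfigSketch {
  // Global, mutable state, as discussed above: because the global
  // LoggerFactory is used, there is no place to inject configuration.
  @volatile private var maxPartyNameLength: Option[Int] = None

  def setMaxPartyNameLength(max: Option[Int]): Unit =
    maxPartyNameLength = max

  // Truncate only when a limit is set and the name exceeds it.
  def truncateParty(party: String): String =
    maxPartyNameLength
      .filter(party.length > _)
      .fold(party)(n => party.take(n) + "...")
}
```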
* logging-entries: Make `ToLoggingValue` mixin-able.
* participant-integration-api: Truncate parties in filters when logging.
* participant-integration-api: Cast to `Party` for logging.
Invalid input should not break the request at this point. No assertions.
* daml-lf/data: Move `Party to LoggingValue` to a new package.
This avoids the transitive dependency issue most of the time.
* daml-lf-data: Move the `Identifier` logging to another package.
Again, reduces the need for transitively depending on _logging-entries_.
* logging-entries: Split from contextualized-logging.
This allows us to introduce it to Daml-LF without bringing in the
Logback, Logstash, and gRPC dependencies.
CHANGELOG_BEGIN
CHANGELOG_END
* logging-entries: Fix dependencies for 2.12.
* logging-entries: Missed one more Scala 2.12 dependency.
* release: Publish logging-entries.
* participant-integration-api: Remove the subscription ID.
Pretty sure it's not used except for a single log line, which makes it
useless as a correlation ID.
If we want to correlate logs, let's add a correlation ID.
* participant-integration-api: Move transaction requests to trace logging.
Most of the useful information is already in the logging context. We
don't need to log the data structure too.
CHANGELOG_BEGIN
- [Ledger API Server] The amount of data logged in the API transaction
service has been reduced at INFO level. Please enable TRACE logging to
log the request data structures.
CHANGELOG_END
* participant-integration-api: Reorder methods in ApiTransactionService.
* participant-integration-api: Add the word "request" to some log lines.
* participant-integration-api: Add a logging prefix for string offsets.
* ledger-api-domain: `immutable.Set` -> `Set`.
It's an alias.
* participant-integration-api: Log transaction filters on subscription.
* participant-integration-api: Log transaction filters.
Just the parties isn't enough information.
* participant-integration-api: Log the entire transaction request.
Structured, because otherwise it's hard to throw things away later.
* contextualized-logging: Avoid `View` because it's not in Scala 2.12.
* contextualized-logging: Add tests for booleans.
* contextualized-logging: Avoid methods that accept views.
Scala 2.12 really doesn't like me when I do that.
* participant-integration-api: One more try at building with Scala 2.12.