changelog_begin
- [HTTP-JSON] Removed the /user/delete GET endpoint. Please use the /user/delete POST endpoint with your own user ID if you need to delete the user associated with the current token
changelog_end
* Split channel configuration from LedgerClientConfiguration
Fixes #12391
The channel configuration now has to be provided separately from the
configuration specific to the ledger client. In this way we avoid
situations where the builder is provided with some configuration
that gets overridden.
changelog_begin
[Scala bindings] The channel configuration has been split from the
LedgerClientConfiguration class. Provide the gRPC channel specific
configuration separately or use a builder. The channel configuration
no longer overrides the builder.
changelog_end
* Fix compilation issues in //ledger-service/...
changelog_begin
- [HTTP-JSON] Added endpoints:
- /user/delete, which when called with GET deletes the current user and when called with POST deletes the user specified via the payload
- /user, which when called with POST now returns user info about the user specified via the payload
changelog_end
* Add list users endpoint
changelog_begin
- [HTTP-JSON] Added an endpoint /users which returns the available users on the ledger.
changelog_end
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/domain.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Change heartBeatPer to more intuitive naming of heartbeatPeriod
CHANGELOG_BEGIN
CHANGELOG_END
* Initial changes to add HOCON config for json_api
CHANGELOG_BEGIN
CHANGELOG_END
* avoid IllegalArgumentException noise
* use named arguments in big config conversion
* Changes include
- tests for a full http-json-api config file
- logging config and non-repudiation config is still specified via cli args.
- config readers for MetricsReporter
* Add defaults to WebsocketConfig case class to allow partially specifying fields on typeconf file
* changes to the JwtVerifierBase config reader and equivalent test
* message already describes the value
* replace manual succeed/fails with scalatest combinators
* use qualified imports for WebsocketConfig defaults
* add back autodeleted empty lines
* collapse two lists of token verifiers into one
* add new line to config files
* rename dbStartupMode to startMode to keep consistent with cli option and for easy documentation
* Changes to daml docs to specify ways to run JSON-API by supplying a HOCON config file.
CHANGELOG_BEGIN
JSON-API can now be started supplying a HOCON application config file using the `--config` option.
All CLI flags except the `logging` and `non-repudiation` ones are now deprecated and will be cleaned up in a future release.
CHANGELOG_END
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
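For orientation, a minimal sketch of loading such a HOCON file with pureconfig (which the pureconfig-util work below builds on); the field names here are illustrative, not the real http-json-cli schema:
```
import java.nio.file.Paths
import pureconfig._
import pureconfig.generic.auto._

// Hypothetical, simplified config shape; the real field names are defined by
// the http-json-cli module, not here.
final case class HttpJsonConfig(
    ledgerHost: String,
    ledgerPort: Int,
    httpPort: Int
)

// Load the file passed via --config; pureconfig reports all missing or
// invalid fields at once instead of failing on the first one.
def loadConfig(path: String): ConfigReader.Result[HttpJsonConfig] =
  ConfigSource.file(Paths.get(path)).load[HttpJsonConfig]
```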
* WIP
* Adjust the format of the CreateUserRequest to be a nicer payload & add a simple test
changelog_begin
- [HTTP-JSON] An endpoint /user/create has been added to allow creating a user via the JSON API
changelog_end
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Endpoints.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Changes to add a pureconfig-util module with some shared config readers, and cleanup some code from oauth2-middleware hocon
CHANGELOG_BEGIN
CHANGELOG_END
* Update triggers/service/auth/src/test/scala/com/daml/auth/middleware/oauth2/CliSpec.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
The changelog entry and commit message differ here because the bug described in
the changelog was already fixed by adding the user management support:
the bug caused the affected endpoints to interpret the token as a
user token while only fetching the ledger id (without actually checking
that it is a user token).
changelog_begin
- [HTTP-JSON] Fixed a bug that caused JWTs without the daml namespace to be rejected for some endpoints (https://github.com/digital-asset/daml/issues/12215)
changelog_end
* Changes to add the option of starting trigger service with typeconf/HOCON config
CHANGELOG_BEGIN
CHANGELOG_END
* add tests for authorization config and fail on both config file and cli args
* refactor and cleanup config loading and tests
* Changes based on code review comments
* Daml doc changes and making sure that we have defaults for most fields to mirror cli args
CHANGELOG_BEGIN
Trigger Service can now be configured with HOCON config file.
- If a config file is provided, the service starts using that; otherwise it falls back to CLI arguments.
- If both a config file and CLI arguments are provided, the service errors out.
CHANGELOG_END
* addressing some more code review comments
* use scalatest inside properly
* Expose user management service over the HTTP-JSON API
Fixes #12078
Add the /user & /user/rights endpoints; the former provides the current user id and primary party, the latter the user rights
Fix new endpoints for ledgers without auth and add test coverage for these
changelog_begin
- [HTTP-JSON] Added GET endpoint:
- /user which returns the current user id & primary party
- /user/rights which returns the user rights of the current user
changelog_end
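For illustration, a sketch of calling the two new read-only endpoints with a user token via the JDK 11 HTTP client; the base URL, port and token handling are assumptions, not part of this change:
```
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object UserEndpointsDemo extends App {
  // Hypothetical base URL and token source; adjust for your setup.
  private val baseUrl = "http://localhost:7575/v1"
  private val token = sys.env.getOrElse("JSON_API_TOKEN", "")
  private val client = HttpClient.newHttpClient()

  private def get(path: String): String = {
    val request = HttpRequest
      .newBuilder(URI.create(baseUrl + path))
      .header("Authorization", s"Bearer $token")
      .GET()
      .build()
    client.send(request, HttpResponse.BodyHandlers.ofString()).body()
  }

  println(get("/user"))        // current user id & primary party
  println(get("/user/rights")) // rights of the current user
}
```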
* Update ledger-service/http-json/src/it/scala/http/HttpServiceIntegrationTestUserManagement.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Endpoints.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Apply review comments
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/domain.scala
Co-authored-by: akshayshirahatti-da <86774832+akshayshirahatti-da@users.noreply.github.com>
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
Co-authored-by: akshayshirahatti-da <86774832+akshayshirahatti-da@users.noreply.github.com>
New year, new copyright, new expected unknown issues with various files
that won't be covered by the script and/or will be but shouldn't change.
I'll do the details on Jan 1, but would appreciate this being
preapproved so I can actually get it merged by then.
CHANGELOG_BEGIN
CHANGELOG_END
* WIP
* Remove the dummy implementation and replace it with an actual working implementation
* Make it compile!
* Add working tests for the user management support in the json api
CHANGELOG_BEGIN
- [JSON-API] Added basic support for the new user management feature of the ledger such that user tokens are now accepted instead of the legacy tokens
CHANGELOG_END
* Simplify the create iou test case and adjust the test case name to be correct
* Add additional test that covers that the overwrite of actAs&readAs still works via the meta object
* Make it work with unauthenticated ledgers too
* Fix compile error & wrong behaviour & add test coverage for non auth ledgers
* Clean up the diff
* Address 66312e9940 (r770782884)
* Address 66312e9940 (r770750653)
* Addressing 66312e9940 (r770751958)
* Address 66312e9940 (r770736671)
* Address 66312e9940 (r770734395) and 66312e9940 (r770783237)
Co-authored-by: Stefano Baghino <stefano.baghino@digitalasset.com>
CHANGELOG_BEGIN
- [User Management]: add support for managing participant node users and authenticating
requests as these users using standard JWT tokens.
CHANGELOG_END
Co-authored-by: Marton Nagy <marton.nagy@digitalasset.com>
Co-authored-by: Adriaan Moors <90182053+adriaanm-da@users.noreply.github.com>
* replace pour with a new, total, uncurried apply to create NonEmpty's
* use the new NonEmpty apply in place of pour
* non-empty cons, snoc, head, tail
* add map and flatMap for NonEmpty iterables
* remove scala-collection-compat from scalautils
* tests for map, flatMap, cons
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* missing 'extends AnyVal'
* colliding map and flatMap for Maps
* Revert "colliding map and flatMap for Maps"
* more specific Map and Set return types
* type tests for map operations
* add 'to' conversions
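A toy sketch of the API shape these commits describe (not the actual com.daml.scalautil NonEmpty code): a total constructor plus map/flatMap that preserve non-emptiness:
```
// Toy illustration only: a non-empty list whose operations cannot produce an
// empty result by construction.
final case class NonEmptyList[+A](head: A, tail: List[A]) {
  def toList: List[A] = head :: tail
  def map[B](f: A => B): NonEmptyList[B] = NonEmptyList(f(head), tail.map(f))
  def flatMap[B](f: A => NonEmptyList[B]): NonEmptyList[B] = {
    val first = f(head)
    NonEmptyList(first.head, first.tail ++ tail.flatMap(a => f(a).toList))
  }
}

object NonEmptyList {
  // Total, uncurried constructor: at least one element is required by the signature.
  def of[A](head: A, rest: A*): NonEmptyList[A] = NonEmptyList(head, rest.toList)
}
```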
* Add query store metrics
CHANGELOG_BEGIN
- [JSON-API] added metrics to separately track:
- time taken to update query-store ACS (from ledger)
- lookup times for the query store
CHANGELOG_END
* Apply review comment
* [Self-service error codes] Enabled by default
* Flag changed to `use-pre-1.18-error-codes` (disabled by default)
CHANGELOG_BEGIN
[Ledger API Specification] The Ledger API returns enriched error codes (see https://docs.daml.com/error-codes/self-service/index.html)
For backwards-compatibility, a new API flag `--use-pre-1.18-error-codes` is introduced for preserving the legacy behavior for
clients that want to migrate incrementally to the changed gRPC status code responses and error details format.
CHANGELOG_END
* Adapted HttpServiceIntegrationTest
* Renamed `Feature Flag` to `Configuration` in docs
* Fix Daml Script tests
changelog_begin
changelog_end
* Fix Repl functests
changelog_begin
changelog_end
* Fix haskell binding tests
changelog_begin
changelog_end
* Fix CommandClientIT test
* Fixed Sandbox and CommandServiceBackpressureIT tests
* Adapt //compiler/damlc/tests:repl-functests again
* Fix more tests and address Miklos' comments
* Flag name changed to `grpc-status-codes-compatibility-mode`
* Remove useless flags sandbox-classic
* Sandbox-classic tests fix for ContractKeysIT and ExceptionsIT
* Created 2 deprecated test suites that have the more generic assertions as returned
by the deprecated in-memory backend
* More fixes for CommandServiceIT
* Fixes compilation issue with the deprecated exceptionsIT class for Sandbox-classic in-memory
* Compatibility mode for old test tools
* Change flag name to `use-pre-1.18-error-codes`
* Apply suggestions from code review
Co-authored-by: Miklos <57664299+miklos-da@users.noreply.github.com>
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
Co-authored-by: Miklos <57664299+miklos-da@users.noreply.github.com>
* Update to Java 11
changelog_begin
changelog_end
* Fix RoundingMode deprecation warnings
* Fix dep-ann warning
* Integer constructor
* JavaX annotation dependency
* javax.xml.bind was removed in Java 11
Using Guava as a replacement, since it is already a project dependency.
* JDK 11 no longer has a separate JRE tree
* Remove unused jdk_nix import
* remove now redundant jdk11_nix
* Java 8 --> 9 increased Instant.now() precision
See https://bugs.openjdk.java.net/browse/JDK-8068730
The precision of `Instant.now()` increased between Java 8 and Java 9.
On Linux and MacOS this doesn't seem to be a problem, as the precision
still seems to be at micro seconds. However, on Windows this now causes
errors of the following form:
```
java.lang.IllegalArgumentException: Conversion of Instant
2021-11-05T13:58:56.726875100Z to microsecond granularity would result
in loss of precision.
```
Suggesting that it now offers sub-microsecond precision.
`TimestampConversion.instantToMicros` had a check to fail if the
conversion lead to a loss of precision. In the specific failing test
case this is not a concern, so this adds a `roundInstantToMicros`
variant that avoids this kind of error.
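A minimal sketch of that rounding variant, assuming illustrative names rather than the actual TimestampConversion API:
```
import java.time.Instant
import java.time.temporal.ChronoUnit
import java.util.concurrent.TimeUnit

// Truncate to microseconds first, so sub-microsecond digits are dropped
// instead of triggering the loss-of-precision check.
def roundInstantToMicros(i: Instant): Long = {
  val truncated = i.truncatedTo(ChronoUnit.MICROS)
  TimeUnit.SECONDS.toMicros(truncated.getEpochSecond) +
    TimeUnit.NANOSECONDS.toMicros(truncated.getNano.toLong)
}
```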
* TMP round timestamps
* Revert "TMP round timestamps"
This reverts commit af8e261278.
* Skip versions before 1.6.0 in migration tests
changelog_begin
changelog_end
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
I’ve kept the infrastructure for versioned_scala_deps around because
I’m optimistic and hope that eventually we’ll do another Scala upgrade.
changelog_begin
changelog_end
* Changes to port JMH benchmark tests for contract dao to be be run against postgres db.
CHANGELOG_BEGIN
CHANGELOG_END
* add missing libs-scala/ports depenency to ledger-on-sql test suite
* changes based on codereview comments
* make http-json:integration-tests into test suites
* make http-json-oracle:integration-tests into test suite
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* remove commented libraries
* Changes to renable ws multiplexing
CHANGELOG_BEGIN
[TS-BINDINGS] Re-enable ws multiplexing for stream queries after resolving the reconnect connection close bug associated with ws state and liveness.
CHANGELOG_END
* websocket is passed as an argument to the onMessage handler
* consistently use 'manager' reference instead of 'this' in the handleQueries change method
* add actAs, readAs to `meta` for create, exercise, createAndExercise endpoints
* use meta actAs, readAs to control how contract IDs are looked up for exercise
* outdated comments on JwtWritePayload and JwtPayload
* toSet1 operator to clean up some NEL manipulation
* take optional readAs argument for query endpoint
* use readAs for query POST
* check whether readAs is safe in query endpoint
* missed CommandMeta args in tests
* FetchRequest, a domain model to layer on "fetch" endpoint's ContractLocator
- ContractLocator was overloaded as a domain request model *and* a component
of other domain request models; the addition of new arguments means it can
no longer exactly meet the former, and adding "readAs" to it would poison it
for the latter cases.
* take readAs argument from fetch endpoint
* add readAs security check from query to fetch
* move jwt parties functions to util
* testing the party-set/JWT functions
* missing headers
* caught boolean blindness in readAs security checks
* test that meta params are used for commands
* make resolveRefParties do a subset check, too
* Revert "make resolveRefParties do a subset check, too"
This reverts commit 40a66f102c.
* test that the readAs auth check actually applies
* test that command service uses meta readAs, actAs
* note on test coverage
* add changelog
CHANGELOG_BEGIN
- [JSON API] ``actAs`` and ``readAs`` may be specified for create, exercise,
create-and-exercise, non-WS fetch, and non-WS query.
See `issue #11454 <https://github.com/digital-asset/daml/pull/11454>`__.
CHANGELOG_END
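As a hedged illustration of the request shape this changelog entry describes, a create body carrying `meta.actAs` and `meta.readAs`; template and payload values are placeholders and the exact shape is an approximation:
```
// Hypothetical create request body exercising the new meta fields; the
// surrounding fields follow the usual create payload, but treat this as an
// approximation rather than the canonical format.
val createWithMeta: String =
  """{
    |  "templateId": "Iou:Iou",
    |  "payload": {"issuer": "Alice", "owner": "Alice", "currency": "USD", "amount": "1.0", "observers": []},
    |  "meta": {
    |    "actAs": ["Alice"],
    |    "readAs": ["Public"]
    |  }
    |}""".stripMargin
```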
* no saving mallocs
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* untabify
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* move design comment to comment from function name
- suggested by @cocreature; thanks
* remove unneeded variable
* refactor single-key callers of requestJsonReader
- suggested by @cocreature; thanks
* build error in ce
* diagnose Windows failure
* add missed http-json-testing requirement
* use readers as fetch/query party-set name
- suggested by @cocreature and @realvictorprm, thanks
* extra import
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Changes to make certain hikari cp connection pool properties configurable via jdbc conf string
CHANGELOG_BEGIN
[JSON-API] Make certain HikariCP connection pool properties configurable via the JDBC config string. The configurable properties are listed below:
- poolSize -- specifies the max pool size for the database connection pool
- minIdle -- specifies the min idle connections for the database connection pool
- connectionTimeout -- long value, specifies the connection timeout for the database connection pool
- idleTimeout -- long value, specifies the idle timeout for the database connection pool
CHANGELOG_END
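An illustrative `--query-store-jdbc-config` value combining the usual JDBC keys with the new pool settings; the concrete values are placeholders:
```
// Placeholder values only; driver/url/user/password follow the usual JDBC
// config string format, with the new pool settings appended.
val jdbcConfWithPoolSettings: String =
  "driver=org.postgresql.Driver," +
    "url=jdbc:postgresql://localhost:5432/jsonapi," +
    "user=json,password=secret," +
    "poolSize=16,minIdle=4,connectionTimeout=30000,idleTimeout=600000"
```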
* some missed changes for DbTriggerDao
* remove defaults for poolSize on JdbcConfig
* add constants for test defaults
* move contract insertion/deletion batching to separate function
* limit contract insertion/deletion batching on backpressure
* add changelog
CHANGELOG_BEGIN
- [JSON API] While updating the contract table for a query, if the DB appears to be slow,
JSON API will slow down its own inserts and deletes at some point rather than construct
ever-larger INSERT and DELETE batch commands.
See `issue #11589 <https://github.com/digital-asset/daml/pull/11589>`__.
CHANGELOG_END
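The batching-under-backpressure idea can be sketched with Akka Streams' `batch` combinator, which only aggregates while downstream is slow and caps batch growth; this is illustrative, not the actual ContractsFetch code:
```
import akka.NotUsed
import akka.stream.scaladsl.Flow

// Aggregate upstream elements into batches only while downstream
// backpressures; `max` caps how large a batch may grow, so INSERT/DELETE
// statements stay bounded.
def boundedBatches[A](maxBatchSize: Int): Flow[A, Vector[A], NotUsed] =
  Flow[A].batch(maxBatchSize.toLong, a => Vector(a))(_ :+ _)
```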
* Changes to migrate http-perf-test back to sandbox with more parallelization for single user scenarios.
Increased parallelization is due to the architectural changes in sandbox where it uses
a tick every 100 millis to make data available on the read side
CHANGELOG_BEGIN
CHANGELOG_END
* Parallelization fixes for scenarios ExerciseCommand and SyncQueryNewAcs scenarios
* refactor sequential scenario run, make query part of SyncQueryVariableAcs run with single user
* add PartySet alias for db-backend
* add PartySet alias for fetch-contracts
* add PartySet alias for http-json
* deprecate old apply
* quick builder for NonEmpty collections
* replace PartySet in db-backend
* replace PartySet in fetch-contracts
* lar.Party is also domain.Party
* add incl1 operator
* replace PartySet in http-json
* port tests
* into with Scala 2.12 needs collection-compat
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* simplify a couple functions that don't need so much data transformation now
* clean up some OneAnds and HKTs
* deal with Scala 2.12 without having warning suppression
* better, more obscure choice for Scala 2.12
* Changes to ensure matchedQueries are returned correctly when queries contain a mix of offsets and no offsets.
CHANGELOG_BEGIN
[JSON-API] Fixes a bug related to the matchedQueries value returned for websocket multiqueries;
this only happened for patterns where the multiqueries contained a mixture of queries with and without
offsets.
CHANGELOG_END
* changes based on code review comments
* clean up some imports
* test case trying to find deadlock situation
* add deadlocks to causes of ContractsFetch retry for Oracle
* Revert "test case trying to find deadlock situation"
This reverts commit 9b19046b18.
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* trying to reliably reproduce the template ID constraint error
* tentative fix for template ID constraint error
* sequential simulation
* successfully reproduce the error pre-4633c3137a
- Batch entry 0
INSERT INTO some_fancy_prefix_contract
VALUES ('foo', 1, 'null'::jsonb, NULL, '{}'::jsonb, ?, ?, '')
ON CONFLICT (contract_id) DO NOTHING
was aborted: ERROR: insert or update on table "some_fancy_prefix_contract" violates foreign key constraint "some_fancy_prefix_contract_tpid_fkey"
Detail: Key (tpid)=(1) is not present in table "some_fancy_prefix_template_id".
* also reproduced the error pre-4633c3137a on Oracle
- ORA-02291: integrity constraint (UNA3GOHUV7YMSKT0MQXJKLKD9HKKAZ.SYS_C007859)
violated - parent key not found
* add changelog
CHANGELOG_BEGIN
- [JSON API] Fixed a rare error that manifested as
‘violates foreign key constraint "contract_tpid_fkey"
Detail: Key (tpid)=(...) is not present in table’
when attempting to run queries and goes away on JSON API restart.
See `issue #11330 <https://github.com/digital-asset/daml/pull/11330>`__.
CHANGELOG_END
* clean up some now-unneeded printlns
* a model for trapping client errors in Scala bindings shim and reporting correctly
* clean up some nesting with an alias
* filter out client-side command service errors
* fix flattening error propagation of CommandService errors in endpoints
* remove todo
* Daml evaluation triggers INVALID_ARGUMENT; handle this for creates/exercises
* clean up lookupResult
* remove stripLeft utility; it is unused
* proper error propagation for /parties endpoint
* map grpc status codes to HTTP error codes
* add a case to pass-through gRPC errors in Endpoints errors
* handle gRPC status in all explicit top-level catches
* pass through gRPC errors in CommandService as well
* treat a gRPC status anywhere in the causal chain as indicating participant-server error
* propagate ContractsService errors without assuming they will always be ServerErrors
* filter ServerErrors' contents when rendering errorful streams
* log errors from websocket output instead of rendering full messages
* hide message in ServerError case
* remove Aborted
* transfer with bad contract ID now returns 409
* mention new error codes
* add changelog
CHANGELOG_BEGIN
- [JSON API] Several kinds of gRPC server errors are now reported with
associated HTTP statuses; for example, a Daml-LF interpreter error now
returns a 400 instead of a 500, and an exercise on an archived contract
returns a 409 Conflict instead of a 500. Errors internal to JSON API
(e.g. internal assertion failures) are no longer detailed in the HTTP
response; their details are only logged.
See `issue #11184 <https://github.com/digital-asset/daml/pull/11184>`__.
CHANGELOG_END
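A hedged sketch of the kind of gRPC-to-HTTP mapping described above; the actual table used by the JSON API may differ in detail:
```
import io.grpc.Status.Code

// Map a gRPC status code from the participant to an HTTP status.
def httpStatusFor(code: Code): Int = code match {
  case Code.INVALID_ARGUMENT              => 400 // e.g. Daml-LF interpreter errors
  case Code.UNAUTHENTICATED               => 401
  case Code.PERMISSION_DENIED             => 403
  case Code.NOT_FOUND                     => 404
  case Code.ABORTED | Code.ALREADY_EXISTS => 409 // e.g. exercising an archived contract
  case Code.UNIMPLEMENTED                 => 501
  case Code.UNAVAILABLE                   => 503
  case _                                  => 500
}
```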
* remove unused Show and liftErr utility
* adapt daml-script to new error codes
* adapt typescript tests to new error codes
* adapt json-api failure tests to new error codes
* Moved ErrorCodesVersionSwitcher to //ledger/error
CHANGELOG_BEGIN
CHANGELOG_END
* Rename ErrorCodeLoggingContext to ContextualizedErrorLogger
* Refactored ErrorFactories
* All error factories use ContextualizedErrorLogger for being able to dispatch self-service error codes.
* The ContextualizedErrorLogger is passed down from the dispatching Ledger API services.
* ErrorFactoriesSpec asserts both legacy (V1) and self-service error codes (V2).
* Adapted ApiSubmissionService
* Addressed Marcin's review comments
Closes #11251
Schema changed as part of https://github.com/digital-asset/daml/pull/11102
Also backported to 1.17.1 in https://github.com/digital-asset/daml/pull/11143
changelog_begin
[JSON API] Solving a bug that could cause the JSON API to return an
incorrect result if a contract with the same key is observed twice
required a schema change. The JSON API data needs to be dropped
and the query store needs to be reset. If you are migrating from a
previous version, either reset your database manually or start
the HTTP JSON API with one of the options that regenerate the
schema (`create-only`, `create-if-needed-and-start`, `create-and-start`).
changelog_end
* split akka-streams and doobie utils from ContractsFetch to new fetch-contracts library
* move most stream components from ContractsFetch to new library
* fix packages
* make the fetchcontracts domain model work
* move transactionFilter to fetchcontracts
* lots of unused imports
* start incorporating fetch-contracts in http-json
* move toTerminates
* more unused imports; http-json compiles
* more fetch-contracts dep
* bring back HasTemplateId[ActiveContract]; integration tests compile
* whole ledger-service tree compiles
* fix oracle missing dep
* scoping some new library identifiers
* remove apiIdentifier aliases
* comment on Aliases
* remove toTerminates shim
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* unused bazel imports
* remove already-addressed TODO
- suggested by @akshayshirahatti-da; thanks
* Add failing test that covers the bug
* Fix on conflict error for inserts into the contracts table
changelog_begin
- [JSON-API] Make key_hash indexes non-unique; this fixes a bug where a duplicate key conflict was raised on the query store when the same contract was witnessed twice by two separate parties
changelog_end
* move test to parent so as to test oracle query store
* make key_hash indexes non-unique
* use recordFromFields
Co-authored-by: Akshay <akshay.shirahatti@digitalasset.com>
* Fix typo postgres --> oracle
* Move tablePrefix into base jdbcConfig
* Add table.prefix in trigger service migrations
* Add tablePrefix to trigger service db table names
changelog_begin
* [Trigger Service] Enable the new `tablePrefix` setting in the `--jdbc`
flag to add a prefix to all tables used by the trigger service to
avoid collisions with other components using the same db-schema.
changelog_end
* Add tablePrefix config test for trigger service
* Fix Oracle test
* Allow existing schema in trigger service
CHANGELOG_BEGIN
* [Trigger Service] Enable the new ``--allow-existing-schema`` flag to
initialize the trigger service on a database with a pre-existing
schema.
CHANGELOG_END
* Don't ignore CLI flag value
* Update triggers/service/src/main/scala/com/digitalasset/daml/lf/engine/trigger/dao/DbTriggerDao.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Use fragment interpolation
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Changes to use sandbox next for our integration tests
CHANGELOG_BEGIN
CHANGELOG_END
* remove sandbox classic dependency for HttpServiceTestFixture and perf tests
* rely on sandbox next fixture test class
* add missing dependencies for http-json-oracle
* changes based on code review comments
* Add tag to skip test case for scala_2_12 and also add a jdbc backend for sandbox spun up for perf tests
* Reduce size of contracts for archiving test
* enumerating out-of-sync offsets at the DB level
* cleanup in lastOffset
* write the latest-requested-or-read offset when catching up
- Writing only the latest-read, as before, would imply unsynced offsets
that are actually well-synced. This puts the DB in a more uniform
state, i.e. it should actually reflect the single value that the
fetchAndPersist loop tries to catch everything up to.
* detecting lagging offsets from the unsynced-offsets set
- Treating every unsynced offset as a lag would make us needlessly retry
perfectly synchronized query results.
* add Foldable1 derived from Foldable for NonEmpty
* nicer version of the unsynced function
* ConnectionIO scalaz monad
* rename Offset.ordering to `Offset ordering` so it can be imported verbatim
* finish aggregating in the lag-detector function, compiles
* port sjd
* XTag, a scalaz 7.3-derived tag to allow stacked tags
* make the complicated aggregation properly testable
* extra semantic corner cases I didn't think of
* tests for laggingOffsets
* a way to rerun queries if the laggingOffsets check reveals inconsistency
* if bookmark is ever different, we always have to rerun anyway
* boolean blindness
* incorporate laggingOffsets into fetchAndPersistBracket
* split fetchAndPersist from getTermination and clean up its arguments
* just compose functors
* add looping to fetchAndPersistBracket
* more mvo tests
* test unsyncedOffsets, too
* Lagginess collector
* supply more likely actual data with mvo tests; don't trust Java equals
* rework minimumViableOffsets to track sync states across template IDs
* extra note
* fix the tests to work against the stricter mvo
* move surrogatesToDomains call
* more tests for lagginess accumulator
* add changelog
CHANGELOG_BEGIN
- [JSON API] Under rare conditions, a multi-template query backed by database
could have an ACS portion that doesn't match its transaction stream, if
updated concurrently. This condition is now checked and accounted for.
See `issue #10617 <https://github.com/digital-asset/daml/pull/10617>`__.
CHANGELOG_END
* port toSeq to Scala 2.12
* handle a corner case with offsets being too close to expected values
* didn't need XTag
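The lag check can be pictured with a small sketch (illustrative, not the actual laggingOffsets implementation): given per-template offsets reported as synced and the offset everything was caught up to, report the templates still behind:
```
// Return the template identifiers whose synced offset lags behind the offset
// the fetchAndPersist loop tried to catch everything up to.
def lagging[TpId, Off: Ordering](expected: Off, synced: Map[TpId, Off]): Map[TpId, Off] =
  synced.filter { case (_, off) => Ordering[Off].lt(off, expected) }
```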
99% of our use cases use Value[ContractId] so this PR just fixes it.
The few other use cases are:
1. Value[Nothing] which we use for keys. This is technically more
precise but we benefit very little from it.
2. Value[String] mostly because in a few places we are lazy.
We don’t have any code which benefits from being polymorphic in the
contract id type.
changelog_begin
changelog_end
- Add support for specifying either 1.2 or 1.3 as minimum TLS versions for ledger api server.
- Log enabled protocols (~TLS versions) and cipher suites at server and client startup.
- Add integration tests against Sandbox-classic and Sandbox
CHANGELOG_BEGIN
Sandbox: Add CLI flag to select minimum enabled TLS version for ledger API server.
CHANGELOG_END
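A rough sketch of the idea, not the Sandbox CLI wiring itself: derive the protocols to enable from a configured minimum version and log what the default SSLContext supports:
```
import javax.net.ssl.SSLContext

// Choose the enabled protocol list from a configured minimum TLS version.
def enabledProtocols(minimumTls: String): Seq[String] =
  if (minimumTls == "1.3") Seq("TLSv1.3") else Seq("TLSv1.2", "TLSv1.3")

// Log what the default SSLContext supports at startup.
def logSupportedTls(): Unit = {
  val params = SSLContext.getDefault.getSupportedSSLParameters
  println(s"Supported protocols: ${params.getProtocols.mkString(", ")}")
  println(s"Supported cipher suites: ${params.getCipherSuites.mkString(", ")}")
}
```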
* Initial changes to add a surrogate_template_id cache to reduce db queries
CHANGELOG_BEGIN
CHANGELOG_END
* refactoring and addition of tests
* Code review based changes to use Contextual Logger and json-api metrics instance
* make max cache entries/size configurable
* Rename cache max entries default variable
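An illustrative sketch of such a bounded cache using Guava (already a project dependency); the real key and value types live in the query-store code:
```
import com.google.common.cache.{Cache, CacheBuilder}

// Bounded cache for surrogate template ids: look the id up here first and
// only fall back to the DB on a miss.
def surrogateTpIdCache(maxEntries: Long): Cache[String, java.lang.Long] =
  CacheBuilder.newBuilder().maximumSize(maxEntries).build[String, java.lang.Long]()
```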
* Add failing test that covers the bug we found in #10823
* Fix /v1/query endpoint bug
changelog_begin
- [JSON API] Fixed a bug that prevented the JSON API from being aware of
packages uploaded directly via the Ledger API.
changelog_end
* Test case for LockedFreePort not colliding with port 0
changelog_begin
changelog_end
* Discover dynamic port range on Linux
* Random port generator outside ephemeral range
* remove dev comments
* Draw FreePort from outside the ephemeral port range
Note, there is a race condition between the socket being closed and the
lock-file being created in LockedFreePort. This is not a new issue, it
was already present with the previous port 0 based implementation.
LockedFreePort handles this by attempting to find a free port and taking
a file lock multiple times.
But, it could happen that A `find`s port N, and obtains the lock, but
doesn't bind port N again, yet; then B binds port N during `find`; then
A attempts to bind port N before B could release it again and fails
because B still holds it.
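A sketch of the Linux branch of this approach (illustrative, not the actual FreePort code): read the kernel's dynamic port range and draw candidates from outside it:
```
import java.nio.file.{Files, Paths}
import scala.util.Random

// Read the ephemeral ("dynamic") port range from the kernel.
def linuxDynamicPortRange(): (Int, Int) = {
  val raw = new String(
    Files.readAllBytes(Paths.get("/proc/sys/net/ipv4/ip_local_port_range")),
    "UTF-8")
  val Array(lo, hi) = raw.trim.split("\\s+").map(_.toInt)
  (lo, hi)
}

// Pick a candidate port outside the dynamic range, so short-lived client
// sockets cannot collide with the ports we hand out.
def randomPortOutside(range: (Int, Int)): Int = {
  val (lo, hi) = range
  val candidates = (1024 until lo) ++ ((hi + 1) to 65535)
  candidates(Random.nextInt(candidates.size))
}
```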
* Select dynamic port range based on OS
* Detect dynamic port range on MacOS and Windows
* Import sysctl from Nix on MacOS
changelog_begin
changelog_end
* Windows line separator
* FreePort helpers visibility
* Use more informative exception types
* Use a more light weight unit test
* Add comments
* Fix Windows
* Update libs-scala/ports/src/main/scala/com/digitalasset/ports/FreePort.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Update libs-scala/ports/src/main/scala/com/digitalasset/ports/FreePort.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Add a comment to clarify the generated port range
* fmt
* unused import
* Split libs-scala/ports
Splits the FreePort and LockedFreePort components into a separate
library as this is only used for testing purposes.
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
Since we switched to Scala 2.13, the ImmArray companion object extends
`Factory`. Hence:
- the `apply` methods of `ImmArray` override the one from `Factory`
- we can use the notation `.to(ImmArray)` to convert an `Iterable` to
`ImmArray`
This PR drops those `ImmArray` `apply` methods. Conversion from an Iterable to
`ImmArray` should use `.to(ImmArray)`.
CHANGELOG_BEGIN
CHANGELOG_END
* Use the token from incoming requests to update the package list
changelog_begin
changelog_end
* Lazily initialize the ledger client
* Fix ee integration tests
* Fix package reloading behaviour by using a semaphore to check for ongoing updates
* Refactor out the semaphore code into a concurrency utility class
* Use correct locking for the updateTask so every thread always uses an up to date task
* Remove unused imports in utils.Concurrent & remove packages from the tests
* Hide & mark the token file CLI option deprecated because we don't need it anymore and only keep it so client deployment code doesn't break
* Fix scala 2.12 build by adding more type annotations
* Update ledger-service/http-json-cli/src/main/scala/com/daml/http/OptionParser.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/PackageService.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Readd pgkManagementClient after it was removed accidentally (but now it's lazy)
* Remove concurrent object & use atomic boolean instead of a mutex because it makes more sense
* Replace semaphore with countdownlatch
* Refactor the caching into a separate class
* Use Instant instead of LocalDateTime
* Remove that ** of bad synchronization and do stupid simple synchronization because it JUST WORKS; besides, adapt when we want to reload
* Remove await in tests because it can result in buggy tests
* remove unused code in WebSocketService.scala
* Unhide the access-token-file option as per request of Stefano
* Less implicit jwts per request of Stefano
* Try making some code more readable as by request of Akshay
* Use more shark because it expresses intent better than flatMap when I don't need the arg
* Move defs in predicate in WebsocketService.scala around
* Try to minimize diff further in WebsocketService.scala
* Fix build and minimize diff in WebSocketService.scala further
* Minimize diff of function getTransactionSourceForParty in WebSocketService.scala
* Share the ec in WebSocketService.scala to minimize the diff
* Minimize in function predicate in WebSocketService.scala
* Further minimize in function predicate in WebSocketService.scala
* Change some case classes to be normal classes but with apply method
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/PackageService.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Endpoints.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Get rid of implicit jwt tokens, the world is already confusing and full of implicits enough
* Improve readability
* Integrate the new LedgerClient which does not depend on a ledger id
* Fix tests
* Apply suggestions from code review
thanks to @S11001001
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Apply further review comments
* Remove outcommented code
* Deprecate access token file option in the description too
changelog_begin
- [JSON API] The CLI option `--access-token-file` is now deprecated. It
is not needed anymore and you can safely remove it. The reason is that
the operations which previously required a token at startup are now done
on demand using the token of the incoming request.
changelog_end
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* vanilla job test on main pipeline
changelog_begin
changelog_end
* move job to daily compat tests
* add timeout to dev-env and changes based on code review
* unconditionally enable JSON search index on Oracle
In '1kb of data' and larger Oracle integration tests:
ORA-29902: error in executing ODCIIndexStart() routine
ORA-20000: Oracle Text error:
DRG-50943: query token too long on line 1 on column 3
From https://docs.oracle.com/en/database/oracle/oracle-database/19/errmg/DRG-10000.html#GUID-46BC3B3F-4DB7-4EB4-85DA-55E9461966CB
Cause: A query token is longer than 256 bytes
Action: Rewrite query
* add changelog
CHANGELOG_BEGIN
- [JSON API] The Oracle database schema has changed; if using
``--query-store-jdbc-config``, you must rebuild the database by adding
``,start-mode=create-only``. See #10539.
CHANGELOG_END
* test only 1kb
* extra flag in db config string
* let Queries backends configure themselves from maps
* new Queries constructor dataflow to better support config values
* remove fields as we go, isolating backend-specific from -agnostic conf
- we use StateT to avoid the problems that will definitely arise if we
don't DRY.
* fix up DbConfig including DbStartupMode
* start to uncouple json-api's config from db-utils
* two JdbcConfigs with different purposes/scopes
- also moves db-utils contents to com.daml.dbutils
* adapt trigger service to refactoring
* fix JdbcConfig leftovers
* adapt http-json-cli to new JdbcConfig
* remove extra ConfigCompanion
* explain more about the QueryBackend/Queries distinction
* split SupportedJdbcDriver into two phases with a tparam
* use SupportedJdbcDriver.TC instead of SupportedJdbcDriver as the nullary typeclass
* patch around all the moved objects with imports
* missed import from moving ConnectionPool to dbutils
* use new 2-phase SupportedJdbcDriver for ContractDao setup
* left off part of a comment
* more q.queries imports
* other imports from the dbutils move
* nested JdbcConfig
* configure the driver in each backend-specific test
* very confusing error, but make the imports nicer and it goes away
* nested JdbcConfig in perf
* missing newline
* port contractdao-bench
* test new option parsing all the way through QueryBackend
* disable search index for some tests, enable for others
* add changelog
CHANGELOG_BEGIN
- [Trigger Service] ``--help`` no longer advertises unsupported JDBC
options from JSON API.
- [JSON API] [EE only] By default, on Oracle, sets up a JSON search
index to speed up the queries endpoints. However, Oracle versions
prior to 19.12 have an unrecoverably buggy implementation of this
index; in addition, the current implementation fails on queries with
strings >256 bytes, with no way to disable the index for that query.
Pass the ``disableContractPayloadIndexing=true`` option as part of
``--query-store-jdbc-config`` to disable this index when creating the
schema.
See `issue #10539 <https://github.com/digital-asset/daml/pull/10539>`__.
CHANGELOG_END
* port failure tests
* init version table last, drop first
- suggested by @realvictorprm; thanks
* rename split DBConfig.scala
- suggested by @realvictorprm; thanks
* move imports to not be in alphabetical order
- suggested by @realvictorprm; thanks
* remove createSchema
- suggested by @realvictorprm; thanks
* Revert "test only 1kb"
This reverts commit 616e173e63.
* port to scala 2.12
- bug in unused imports
- old name `-` for `removed`
Adding support for accepting the server's private key as an encrypted file (since storing an unencrypted private key in a file system might be a risk).
The encrypted private key is assumed to be encrypted using AES or a similar algorithm. The details necessary to decrypt it are obtained from a secrets server over HTTP as a JSON document. The URL of the secrets server is supplied through the new `--secrets-url` CLI parameter.
One can supply the private key in either plaintext (old behavior) or ciphertext: if a private key file ends with the .enc suffix it is assumed to be ciphertext; otherwise it is assumed to be plaintext.
CHANGELOG_BEGIN
- [DPP-418] [Participant] Add support for supplying server's private key as an encrypted file and then decrypting it with the help of a secrets server.
CHANGELOG_END
* Addition of a key_hash field to speed up fetchByKey queries
CHANGELOG_BEGIN
CHANGELOG_END
* changes to make key_hash an Optional field
CHANGELOG_BEGIN
- Update schema version for http-json-api query store with new key_hash field
- Improved performance for fetchByKey query which now uses key_hash field
CHANGELOG_END
* remove btree index for postgres and other changes based on code review comments
* Simplify loading of logback file
doConfigure accepts a URL which slightly simplifies things.
Really the primary reason why I’m doing this is that it gets veracode
to shut up. I don’t fully understand what it’s worried about in the
first place but it looks like it gets angry about calling openStream
on the resource *shrug*
changelog_begin
changelog_end
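A minimal sketch of that approach: hand doConfigure the resource URL directly instead of calling openStream:
```
import ch.qos.logback.classic.LoggerContext
import ch.qos.logback.classic.joran.JoranConfigurator
import org.slf4j.LoggerFactory

object LogbackSetup {
  // Configure logback from a classpath resource, passing the URL straight to
  // doConfigure rather than opening the stream ourselves.
  def configureFromResource(resource: String): Unit = {
    val url = getClass.getClassLoader.getResource(resource)
    val context = LoggerFactory.getILoggerFactory.asInstanceOf[LoggerContext]
    val configurator = new JoranConfigurator
    configurator.setContext(context)
    context.reset()
    configurator.doConfigure(url) // doConfigure has a URL overload
  }
}
```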
* fix 2.12 build
changelog_begin
changelog_end
* JSON API: log ledger connection errors at every attempt
This should help diagnose connection errors.
changelog_begin
[JSON API] Ledger connection errors are now logged at every attempt
changelog_end
* Make match exhaustive
* Upgrade Scalatest to v3.2.9.
Because of some coupling we also have to upgrade Scalaz to the latest
v7.2 point release, v7.2.33.
The Scalatest changes are quite involved because the JAR has been broken
up into several smaller JARs. Because Bazel expects us to specify all
dependencies and doesn't allow transitive dependencies to be used
directly, this means that we need to specify the explicit Scalatest
components that we use.
As you can imagine, this results in quite a big set of changes. They
are, however, constrained to dependency management; all the code remains
the same.
CHANGELOG_BEGIN
CHANGELOG_END
* http-json-oracle: Fix a Scalatest dependency.
* ledger-api-client: Fix a Scalatest dependency.
17709b5ba3 (#10344) brought the two implementations of
`selectContractsMultiTemplate` close together enough that they can be
usefully factored. Here is that factoring.
Several of the arguments to `queryByCondition` take the form
(Read[T], T => Out), i.e. Coyoneda; we could invert the control by
returning a data structure with coyonedas, but instead here we use a
sort of continuation-passing style, so the coyonedas are embedded in the
arguments to `queryByCondition`.
CHANGELOG_BEGIN
CHANGELOG_END
* Move ExceptionOps from ledger-service/utils to //libs-scala/scala-utils
* extract connection and JdbcConfig from //ledger-service to independent db-utils module
Changelog_begin
Changelog_end
* update trigger service to use new libs-scala/db-utils
* missed changes for http-json-oracle
* minor cleanup based on comments
* fix breaking scala 2_12 build
* cleanup db-utils/BAZEL.md file
* correct JSON API upper date bound
As reported by @quid-agis. Fixes #10449.
CHANGELOG_BEGIN
CHANGELOG_END
* add tests
* test error messages
* more specific catch
* Add optional submission id to commands.proto
This allows propagating a submission id. If no id is submitted (the submission id is empty), then we generate a new submission id
CHANGELOG_BEGIN
Add optional submission_id to the commands.proto.
CHANGELOG_END
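The fallback amounts to something like the following sketch (illustrative names):
```
import java.util.UUID

// Keep the caller-supplied submission id when present, otherwise generate a
// fresh one.
def submissionIdOrRandom(submitted: String): String =
  if (submitted.nonEmpty) submitted else UUID.randomUUID().toString
```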
* Update haskell bindings to include the submission id
* Code review - rename submission id extractor
* Code review - update comment and remove braces from if block
* Fix braces
* participant-integration-api: Encapsulate the initial configuration.
* participant-integration-api: Reduce usage of `LedgerConfiguration`.
* Inline `LedgerConfiguration` wherever it's used.
Most things don't need all its constituent parts; this reduces the
amount of unused properties.
CHANGELOG_BEGIN
- [Integration Kit] The ``LedgerConfiguration`` class has been
removed in favor of ``InitialLedgerConfiguration``. Its usage
has been changed accordingly, with the ``configurationLoadTimeout``
property becoming part of ``ApiServerConfig`` instead.
The default options provided by ``LedgerConfiguration`` have been
removed; you are now encouraged to come up with sensible values for
your own ledger. The ``Configuration.reasonableInitialConfiguration``
value may help.
CHANGELOG_END
* Correct the initial configuration submission delay for KV ledgers.
* kvutils: Mark supertype unused parameters as unused.
* kvutils: Extract out common configuration submission delays.
These values are specific to kvutils; other drivers should come up with
their own.
* configuration: Delete `NoGeneration`, as it's unused.
* [JSON-API] Move database-independent tests into a separate abstract test
The DatabaseStartupOps tests are now also tested against Oracle.
In addition, a new test now covers that table creation
doesn't run into name collisions for different table prefixes within
the same database.
changelog_begin
changelog_end
* Add missing copyright headers
* Adjusting the version query slightly to fix the oracle db integration tests
* Rewrite the version query of oracle to fix it (hopefully)
* Test the prefix collision the other way around
* Put the table prefix also in front of the ensure_json constraint in the oracle queries
* Convert the table name of the jsonApSchemaVersion table to uppercase so it can be found in the list of the created tables in Oracle.
* Fix scala 2.12 collection compatibility compiler error by using :+
* Update ledger-service/db-backend/src/main/scala/com/digitalasset/http/dbbackend/Queries.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Use flatTraverse instead of flatMap to fix the compile error in Queries.scala
* Process the startup mode also in the tests & error if it failed
* Add collections compat import to fix scala 2.12 build failure
* Be confused about the build error prior, revert the change
* Move dropAllTablesIfExist a bit down to have a better declaration order
* Extract the tables vector combined with the version table into a seperate val
* Remove debug in Queries.scala logging
* Make the initDatabaseDdlsAndVersionTable val lazy, so we don't get a nullpointer exception
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
#9895 reintroduced the row_number over partition to eliminate duplicates
when querying the stakeholder-joined Oracle contract table. However, as
#10123 establishes, these duplicates cannot happen if we are querying
for only one party.
Therefore, we special-case the single-party query case, for which we
skip the partition + outer-query duplicate elimination steps.
CHANGELOG_BEGIN
CHANGELOG_END
* [JSON-API] Add option for setting a table prefix
changelog_begin
- [JSON-API] A table prefix can now be specified in the jdbc config via `tablePrefix=<YourFancyTablePrefix>`. This was added to allow running multiple instances of the json api without having collisions (independent of the chosen database).
changelog_end
* Extend the correct test in the oracle tests and simplify config override
* Fix formatting
* Fix postgres tests
* Fix bug in oracle query
* Fix typo
* Update ledger-service/db-backend/src/main/scala/com/digitalasset/http/dbbackend/Queries.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* add the table prefix to named constraints too
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* [JSON-API] Validate schema version & add minimal options for schema creation
* Add tests
* [JSON-API] Rework prior work and introduce the object SchemaHandling
* Add license headers & revert formatting changes
* Fix oracle build & scala 2_12 build
* correctly fix 2.12 build
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/SchemaHandlingResult.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* [JSON-API] Change case names & add backwards compat (but deprecate createSchema=true)
changelog_begin
- [JSON-API] Schema versioning was introduced to the db schema. Because of this the field `createSchema` in the jdbcConfig was deprecated. Via the field `start-mode` you can specify:
1. `create-only`: This is equal to the behaviour of `createSchema=true` so the db schema is created and then the application terminates.
2. `start-only`: The schema version is checked; if no version is found, or a version is found that does not equal the internal schema version, the application terminates. This is also the replacement of `createSchema=false`.
3. `create-if-needed-and-start`: The schema version is checked; if no version is found, or a version is found that does not equal the internal schema version, the schema is created/updated and the application proceeds with the startup.
4. `create-and-start`: Similar to the first option but instead of terminating the application proceeds with the startup.
changelog_end
* Add info about deprecated createSchema field
* Fix build & improve logging
* Give suggestions on what option to take, to fix an outdated or missing schema
* Renaming of schemaHandling to DbStartupMode, added more tests & correct exit codes depending on how the db startup went
* Align name with sandbox
* Improve tests
* Only add new sql code which strictly uses the interpolation to align with other pr's & minimally adjust statements
* Minimize diff
* Add backwards compat test
* Fix scala 2.12 build & oracle integration tests build
* Update ledger-service/http-json-cli/src/main/scala/com/daml/http/Config.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Adjust code according to review request & tests & add a failure test
* If the call to initialize fails also log the error which was thrown
* Fix formatting
* Add missing collections compat import in integration tests
* Fix last build errors (scala 2.12) & use Either instead of Option for getDbVersionState
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* [JSON-API] Shutdown on startup if the db connection is invalid
changelog_begin
- [JSON-API] The JSON API now correctly shuts down at startup if the provided db connection is invalid in the case of `createSchema=false`
changelog_end
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Main.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Switch ContractDao to use a HikariCP connection pool
CHANGELOG_BEGIN
CHANGELOG_END
* missed conn pool changes for PostgresTest and ContractDaoBenchmark
* shutdown db access await threadpool and fix formatting
* custom pool sizes for Prod and Integration similar to DbTriggerDao
* cleanup contract dao connection pool
* simplify Dao shutdown
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* remove redundant config setting
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* fix code formatting issue, NonUnitStatments warning
* use doobie 0.9.0 Fragment-in-Fragment interpolation in json-api db-backend
Since tpolecat/doobie#1045 (and therefore 4ca02e0eb6) doobie has
supported interpolating fragments in fragments. We've used this feature
for several fragments written since #7618, but have left the ones
written before alone to use ++. Here we change that where it
meaningfully clarifies the SQL subexpression.
Note that this does not entail a Put or Write instance for Fragment.
You cannot abstract over Fragment and arbitrary interpolated data in
this way, because Fragments are not treated as positional parameters;
that would mean being able to put arbitrary SQL substrings in positional
parameters.
CHANGELOG_BEGIN
CHANGELOG_END
* scalafmt
* useless whitespace accidentally removed
* new projection for aggregated matched-queries
We can redo all the template-ID matches (and payload query matches, if
needed) in the SELECT projection clause to emit a list of matchedQueries
indices SQL-side.
CHANGELOG_BEGIN
CHANGELOG_END
* selectContractsMultiTemplate always returns one query
* factoring
* remove multiquery deduplication from ContractDao
* test simplest case of projectedIndex; remove uniqueSets tests
* remove uniqueSets
* add more test cases for the 3 main varieties of potential inputs
* remove uniqueSets tests that were commented for reference
* remove unneeded left-join
* scala 2.12 port
* port Map test order to 2.12
* use SortedMap so the Scala version tests are unified
- suggested by @cocreature; thanks
* Support deletion of a large number of contracts
fixes #10339
There are two orthogonal issues here:
1. scalaz’s toVector from the Foldable[Set] instance
stackoverflows. I’ve just avoided using that altogether.
2. Oracle doesn’t like more than 1k items in the IN clause. I chunked
the queries into chunks of size 1k to fix this.
changelog_begin
- [JSON API] Fix an error where transactions that delete a large
number of contracts resulted in stackoverflows with the PostgreSQL
backend and database errors with Oracle.
changelog_end
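The chunking workaround amounts to something like this sketch (illustrative, not the actual Queries code):
```
// Oracle rejects IN lists with more than 1000 items, so split the contract
// ids into chunks of at most 1000 and issue one statement per chunk; going
// through a plain List also avoids the Foldable[Set] toVector path entirely.
def inClauseChunks[A](ids: Set[A], maxInClause: Int = 1000): List[List[A]] =
  ids.toList.grouped(maxInClause).toList
```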
* fix benchmark
changelog_begin
changelog_end
* Update ledger-service/db-backend/src/main/scala/com/digitalasset/http/dbbackend/Queries.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Update ledger-service/db-backend/src/main/scala/com/digitalasset/http/dbbackend/Queries.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* that's not how you foldA
changelog_begin
changelog_end
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
Printing stacktraces is considered an antipattern by some people and
gets flagged by VeraCode. While this shouldn’t actually be an issue
here, it is also not super useful so dropping it is easier than
arguing that this is a false positive.
changelog_begin
changelog_end
changelog_begin
- [JSON-API] Connection attempts from the JSON API to the ledger now include the logging context; more specifically, the instance_uuid is included in each logging statement.
changelog_end
Was curious if there were any relevant performance improvements in
newer versions. Looks like the answer is no but we might as well
upgrade anyway.
changelog_begin
changelog_end
* daml-lf/data: Move ID aliases to `Ref` from _ledger-api-common_.
This allows us to remove a lot of dependencies on _ledger-api-common_,
and use these aliases in other places where that module is not used.
CHANGELOG_BEGIN
CHANGELOG_END
* participant-integration-api: Remove an unused import.
* http-json-oracle: Remove `ledger-api-common` as a dependency.
* bindings-rxjava: Remove a now-unused dependency.
* [DOCS] Add documentation for the JSON API metrics
changelog_begin
- [JSON-API] You can now find a section `Metrics` in the http-json api documentation explaining how to enable metrics and which ones are available
changelog_end
* Fix rst build warnings
* Update docs/source/json-api/metrics.rst
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Adapt metrics doc to state that it IS an exhaustive list and remove wrong copy pasta text & add info about prometheus
* Update the legal values for the metrics reporter cli option
* shorten the description, the change prior was unnecessary ._.
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* [JSON-API] Log json request & response bodies in debug
This also re-adds logging of incoming requests and the responses that are being sent out.
changelog_begin
- [JSON-API] Logging of the request and response bodies is now available for appropriate requests if the chosen log level is equal to or lower than DEBUG. These can then be found in the logging context of the request begin & end log messages (the field names in the ctx are "request_body" and "response_body").
changelog_end
* Move the http request throughput marking to the right place including the logging of the processing time
* Ensure that the processing time measurement is implemented consistently
* participant-state: Remove the `ParticipantId` alias.
This alias adds nothing. By using `Ref.ParticipantId` directly, many
packages can remove their dependency on the _participant-state_ package.
CHANGELOG_BEGIN
CHANGELOG_END
* participant-state: Remove the `PackageId` and `Party` aliases.
They don't add anything. Let's just use `Ref`.
* kvutils: Restore missing compat imports.
This PR extends the test to test a full matrix of different party &
template id numbers. Summarizing the results, as expected we index by
party but not by template id:
Benchmark (batchSize) (extraParties) (extraTemplates) Mode Cnt Score Error Units
QueryBenchmark.run 10000 1 0 avgt 5 0.255 ± 0.064 s/op
QueryBenchmark.run 10000 10 0 avgt 5 0.304 ± 0.245 s/op
QueryBenchmark.run 10000 100 0 avgt 5 0.296 ± 0.064 s/op
Benchmark (batchSize) (extraParties) (extraTemplates) Mode Cnt Score Error Units
QueryBenchmark.run 10000 0 1 avgt 5 0.277 ± 0.037 s/op
QueryBenchmark.run 10000 0 10 avgt 5 0.479 ± 0.301 s/op
QueryBenchmark.run 10000 0 100 avgt 5 2.131 ± 0.497 s/op
We know how to fix that so I’ll get on that.
changelog_begin
changelog_end
CHANGELOG_BEGIN
* [Integration Kit] Removed trace_context field from Ledger API and its bindings as we now have trace context propagation support via gRPC metadata. If you are constructing or consuming Ledger API requests or responses directly, you may need to update your code.
CHANGELOG_END
I haven’t found any conclusive information as to why ON COMMIT doesn’t
work incrementally but
https://docs.oracle.com/en/database/oracle/oracle-database/19/adjsn/json-query-rewrite-use-materialized-view-json_table.html#GUID-8B0922ED-C0D1-45BD-9588-B7719BE4ECF0
recommends that for rewriting (which isn’t what we do here but both
involve a materialized view on json_table).
Benchmarks:
before:
InsertBenchmark.run 1000 1 1000 avgt 5 0.327 ± 0.040 s/op
InsertBenchmark.run 1000 3 1000 avgt 5 0.656 ± 0.043 s/op
InsertBenchmark.run 1000 5 1000 avgt 5 1.034 ± 0.051 s/op
InsertBenchmark.run 1000 7 1000 avgt 5 1.416 ± 0.106 s/op
InsertBenchmark.run 1000 9 1000 avgt 5 1.734 ± 0.143 s/op
QueryBenchmark.run 1000 10 N/A avgt 5 0.071 ± 0.016 s/op
After:
Benchmark (batchSize) (batches) (numContracts) Mode Cnt Score Error Units
InsertBenchmark.run 1000 1 1000 avgt 5 0.217 ± 0.034 s/op
InsertBenchmark.run 1000 3 1000 avgt 5 0.232 ± 0.027 s/op
InsertBenchmark.run 1000 5 1000 avgt 5 0.226 ± 0.051 s/op
InsertBenchmark.run 1000 7 1000 avgt 5 0.225 ± 0.048 s/op
InsertBenchmark.run 1000 9 1000 avgt 5 0.232 ± 0.021 s/op
QueryBenchmark.run 1000 10 N/A avgt 5 0.080 ± 0.014 s/op
The difference in query times is just noise and changes across runs.
So we get the expected behavior of inserts being independent of the
total ACS size now. We could still explore if we gain something by
avoiding the materialized view to reduce constant factors but that’s
much less of an issue.
fixes #10243
changelog_begin
changelog_end
* LF: change type from Try to Either in archive module
This is the first part of restructuring errors in archive module.
This is part of #9974.
CHANGELOG_BEGIN
CHANGELOG_END
* Apply suggestions from code review
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* remove type alias
* apply stephen suggestion
* fix after rebase
* fix test
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* [JSON-API] Refactor Endpoints.scala to use path directives etc.
changelog_begin
changelog_end
* Don't warn that the ev param in toRoute is not used
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Endpoints.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Endpoints.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Endpoints.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Remove weird stuff to have nice stuff with the toRoute function
* Rename the toRoute function & remove comments as things are now clarified
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Add a benchmark for contract insertion in the JSON API
Unfortunately the results seem to match up with my initial benchmark
in #10234
Benchmark (batchSize) (batches) (numContracts) Mode Cnt Score Error Units
InsertBenchmark.run 1000 1 1000 avgt 5 336.674 ± 42.058 ms/op
InsertBenchmark.run 1000 3 1000 avgt 5 787.086 ± 223.018 ms/op
InsertBenchmark.run 1000 5 1000 avgt 5 1181.041 ± 317.017 ms/op
InsertBenchmark.run 1000 7 1000 avgt 5 1531.185 ± 341.060 ms/op
InsertBenchmark.run 1000 9 1000 avgt 5 1945.345 ± 436.352 ms/op
Score should ideally be more or less constant but it goes up very
significantly as the total ACS size grows.
fixes #10245
changelog_begin
changelog_end
* throughput -> average time
changelog_begin
changelog_end
* Add a ContractDao benchmark
This PR adds a simple benchmark that uses the ContractDao directly and
is therefore a bit more fine-grained and easier to analyze than the
gatling benchmarks. I expect we’ll want to extend this; for now it really
only tests queries on a reasonably large ACS filtered by party, but
let’s start somewhere.
fixes #10247
changelog_begin
changelog_end
* Factorize
changelog_begin
changelog_end
I don't see a reason why it's part of the participant state API, and
it definitely doesn't need to change between v1 and v2.
CHANGELOG_BEGIN
- [Integration Kit] The class ``SeedService`` has been moved from the
*participant-state* Maven package to the *participant-integration-api*
Maven package, under the Java package name
``com.daml.platform.apiserver`` to reflect its usage by the API
server, not the participant state API. If you use this class directly,
you will need to change your imports.
CHANGELOG_END
* [JSON-API] Correctly extract the request source URL/IP
changelog_begin
- [JSON-API] If the service is put behind a proxy that sets the X-Forwarded-For or X-Real-Ip header, these headers are now respected when logging the request source IP/URL
changelog_end
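A rough sketch of the header handling, assuming the akka-http routing DSL used by the JSON API (the directive and object names below are illustrative, not the actual implementation):

    import akka.http.scaladsl.server.Directive1
    import akka.http.scaladsl.server.Directives._

    object RequestSourceSketch {
      // Prefer X-Forwarded-For (first entry), then X-Real-Ip, then the transport-level client address.
      val requestSource: Directive1[String] =
        optionalHeaderValueByName("X-Forwarded-For").flatMap {
          case Some(forwardedFor) => provide(forwardedFor.split(',').head.trim)
          case None =>
            optionalHeaderValueByName("X-Real-Ip").flatMap {
              case Some(realIp) => provide(realIp.trim)
              case None         => extractClientIP.map(_.toString)
            }
        }
    }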
* Return to the simple http server start code
* Remove unused import
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Endpoints.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* LF: Simplify archive reader.
- decouple Reader and Decoder
- introduce case class to handle hash, proto payload, and version
CHANGELOG_BEGIN
CHANGELOG_END
* Address Moritz' review
* cosmetic
These were originally hidden in the first PR because the metrics were
very shaky. Now they are actually useful and mentioned in the release
notes, so hiding this option no longer makes sense.
changelog_begin
changelog_end
Fixes #10161
changelog_begin
[JSON API] Fixed a bug that could sporadically make the streaming query
endpoint crash. This bug only affected 1.15.x snapshot releases.
changelog_end
* logging-entries: Split from contextualized-logging.
This allows us to introduce it to Daml-LF without bringing in the
Logback, Logstash, and gRPC dependencies.
CHANGELOG_BEGIN
CHANGELOG_END
* logging-entries: Fix dependencies for 2.12.
* logging-entries: Missed one more Scala 2.12 dependency.
* release: Publish logging-entries.
In #10016, 1% template ID and 1% party-set membership meant _the same_ 1%,
meaning that an index of both couldn't possibly yield interesting results. This
changes how LargeAcs builds the large ACS so that it's "1% of 1%", as you'd
expect.
CHANGELOG_BEGIN
CHANGELOG_END
changelog_begin
- [JSON-API] Timing metrics which measure how long the processing of a command submission request takes on the ledger are now available
changelog_end
changelog_begin
- [JSON-API] The database operations (regardless of in-memory or postgres) now contain metrics for fetching contracts by id or key (separate metrics for each)
- [JSON-API] The time taken to construct the response payload of a request is now also tracked in a metric
changelog_end
* [JSON-API] Add more timing metrics
changelog_begin
- [JSON-API] Timing metrics are now available for party management, package management, command submission and query endpoints.
- [JSON-API] Also added a timing metric for parsing and decoding of incoming json payloads
changelog_end
* Add comments to new metrics
* Split metrics up more & remove obsolete metric
* Split up timers for query endpoints
* nvarchar2 keys are text-incompatible, but varchar2 keys are fine
* commit the ACS update before query
* add changelog
CHANGELOG_BEGIN
- [JSON API] The Oracle database schema has changed; if using
``--query-store-jdbc-config``, you must rebuild the database by adding
``,createSchema=true``. See #9895.
CHANGELOG_END
* expand the InitDdl set to include materialized views
* replace search index with a materialized view that expands the stakeholders
* allow materialized views to be created in Oracle testing
* join and query the contract_stakeholders table for party-set membership
- restoring a few elements removed by 3e6661128d (#9484)
This solves two warts in the code:
- the validate/createUnsafe double-parse, because scopt doesn't let you flatMap;
- the non-JdbcConfig sub-configs appeared to need to know the JDBC drivers when
they really don't, because of a quirk in the inherited implementation.
The fact that coherence of scopt.Read instances calls for all of their
dependencies to be coherent leads us to treat supportedJdbcDriverNames
as a nullary typeclass instance. This is a nullary typeclass by the same justification as
SupportedJdbcDriver; see scaladoc there for more.
And we solve the latter problem by...adding a type parameter, how else.
CHANGELOG_BEGIN
CHANGELOG_END
* [JSON-API] Concurrent query etc. metrics
changelog_begin
- [JSON-API] Metrics describing the number of the following concurrent events are now available:
- running HTTP requests
- command submissions
- package allocations
- party allocations
changelog_end
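A rough sketch of how such a concurrency count can be kept with the Dropwizard metrics library already in use; the helper and metric name are illustrative:

    import com.codahale.metrics.MetricRegistry
    import scala.concurrent.{ExecutionContext, Future}

    object ConcurrentCountSketch {
      // Increment a counter while an asynchronous operation is running and
      // decrement it again when the operation completes (successfully or not).
      def apply[A](registry: MetricRegistry, name: String)(body: => Future[A])(
          implicit ec: ExecutionContext): Future[A] = {
        val counter = registry.counter(name)
        counter.inc()
        val result = body
        result.onComplete(_ => counter.dec())
        result
      }
    }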
* Rename running metrics to throughput ones & add comments on the metrics
* Adjust names of other existing metrics too, to make the JSON API metrics naming more consistent
* Add information from the JWT payload to the logging context
changelog_begin
- [JSON API] For applicable requests, actAs, readAs, applicationId & ledgerId are now included in the logging context
changelog_end
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Endpoints.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Endpoints.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Revert changes to make the function generic
* Create JwtPayloadG trait from which both payload variants inherit
* Reduce code duplication in Endpoints.scala
* Apply review suggestion
* Update test name to reflect field name changes
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Endpoints.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* upgrade scalacheck to 1.14.3
* regenerate maven_install files
* some different names and implicits
* remove some fromTryCatchNonFatal
* more porting
* port fromTryCatchNonFatal to attempt
* factor the assertions in SignatureSpec to avoid \/
* deal with invariant \/
* make partial unification do what we want
* \/, parse*, and toNel
* many uses of the .right method
* a legitimate use of fromTryCatchThrowable
* rebuild maven pins
* further invariant \/
* OneAnd and Nel interface changes
* further Either games
* \/ and reformatting
* \/ in http-json
* \/ in http-json
* deprecations
* more invariance
* cleanup unused
* more invariance; http-json compiles
* final either follies
* small 2.12 extra incompatibility
* rebuild deps
* revisit a couple earlier fixes using nicer expressions I learned later
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* repin 2.12
* Log templateId & choice name (if present) on command submissions in the JSON API
changelog_begin
- [JSON API] The template id & choice name (if present) are now logged on command submissions in the JSON API (at trace level)
changelog_end
* Move the template id & the choice into the logging context
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/CommandService.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/CommandService.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Fix compile error due to scala 2.12 collection differences
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
changelog_begin
- [JSON-API] The source and the path for incoming http requests are now logged
- [JSON-API] The http response for a request is now logged
changelog_end
* Don't log failed futures twice in the HTTP JSON API (CommandService specific)
changelog_begin
- [JSON API] Errors which arise from the CommandService are no longer logged twice (thus reducing noise)
changelog_end
* Fix duplicate logging of a failed future differently via adding another error case in CommandService.scala
* Restore old formatting
* Improve comment
* Remove the wrong changes I accidentally made; the functions are quite similar xD
* remove the additional type and just make the id optional & remove unnecessary comment
* Log call to submitAndWaitRequest with the command id provided in the log ctx
changelog_begin
- [HTTP-JSON] Calls which trigger a submitAndWaitRequest are logged with the command id provided in the log ctx
changelog_end
* Require the request id to also be in the log ctx
* Log command kind in submitAndWaitRequest of CommandService.scala
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/CommandService.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Proxy ledger API health endpoint in JSON API health endpoint
This is a bit more useful than just querying ledger end.
changelog_begin
- [JSON API] The healthcheck endpoint on the JSON API now proxies the
health check endpoint on the underlying ledger. Previously it only
queried for ledger end to determine ledger health.
changelog_end
* naming things is hard
changelog_begin
changelog_end
* Fix Scala 2.12
changelog_begin
changelog_end
* Load the correct logback file for the HTTP JSON service, respecting the deployment situation
changelog_begin
[HTTP-JSON]
- fixed an issue where JSON log output could only be enabled via environment variables and not via CLI options
changelog_end
* Move import statement, remove some braces and reformat
* Move system prop dependent logic to cliopts Logging.scala
* Remove PathKind type in cliopts Logging.scala as it's not necessary anymore
* Allow two different time formats as input for the metrics reporting interval and accordingly revert to using the old test for the CommonCliSpecBase in sandbox-common
changelog_begin
- Applications that support the --metrics-reporter-interval CLI option now accept both the Java and Scala duration string formats (e.g. PT1M30S and 1.5min)
changelog_end
* Replace the thrown RuntimeException in the DurationFormat reader with an IllegalArgumentException
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Try parsing java duration first
* Add test which covers the scala duration format too in the CommonCliSpecBase of sandbox-common
* Add comment about java duration format
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
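A minimal sketch of the parsing order described above (Java/ISO-8601 format first, then the Scala duration syntax); the helper name is an assumption:

    import java.time.{Duration => JDuration}
    import scala.concurrent.duration.{Duration => SDuration}
    import scala.util.Try

    object MetricsIntervalSketch {
      def parse(s: String): JDuration =
        Try(JDuration.parse(s))                                     // Java format, e.g. "PT1M30S"
          .orElse(Try(JDuration.ofSeconds(SDuration(s).toSeconds))) // Scala format, e.g. "1.5min"
          .getOrElse(throw new IllegalArgumentException(s"Could not parse duration: $s"))
    }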
* Introduce metrics in the http-json service
changelog_begin
[HTTP/JSON API]
- metrics reporting can now be chosen via the command line option --metrics-reporter (currently hidden); valid options are console, csv://, graphite:// and prometheus://
- metrics reporting interval can also now be chosen via a command line option --metrics-reporting-interval (currently hidden)
changelog_end
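An illustrative model of the accepted reporter values (not the actual MetricsReporter type):

    import java.net.URI
    import java.nio.file.{Path, Paths}

    sealed trait ReporterSketch
    object ReporterSketch {
      case object Console extends ReporterSketch
      final case class Csv(directory: Path) extends ReporterSketch
      final case class Graphite(host: String, port: Int) extends ReporterSketch
      final case class Prometheus(host: String, port: Int) extends ReporterSketch

      def parse(s: String): ReporterSketch = s match {
        case "console" => Console
        case other =>
          val uri = new URI(other)
          uri.getScheme match {
            case "csv"        => Csv(Paths.get(uri.getPath))
            case "graphite"   => Graphite(uri.getHost, uri.getPort)
            case "prometheus" => Prometheus(uri.getHost, uri.getPort)
            case _            => throw new IllegalArgumentException(s"Unknown metrics reporter: $other")
          }
      }
    }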
* Move MetricsReporter and its dependencies into //ledger/metrics
* Restore non-ugly formatting for that one section in Endpoints.scala
* Update ledger/sandbox-common/src/test/lib/scala/platform/sandbox/cli/CommonCliSpecBase.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Hide metrics option for http-json
* Propagate context exception for the parseUri function in MetricsReporter.scala
* Move cliHint value above parseUri function to have a better structure (it's used once before it's defined and once after it's defined, which is weird to me)
* Use better value name than optMr & optFd in cliopts Metrics.scala
* Remove import order changes & whitespace changes
* Revert usage of Nanoseconds for conversion from scala duration to java duration to usage of Seconds
* Shorten hideIfRequested function
* Fix another rearranged import
* Fix another whitespace removal
* Readd metrics cli option to sandbox after refactoring
* Add missing return type annotation for invalidRead in MetricsReporter
* Readd newline in https OptionParser.scala
* Remove unnecessary import
* Update ledger/sandbox-common/src/main/scala/platform/sandbox/cli/CommonCliBase.scala
Co-authored-by: Miklos <57664299+miklos-da@users.noreply.github.com>
* Align setter & config name for metricsReportingInterval setting too in CommonCliBase.scala
* Rename http_json_api in Metrics.scala of metrics project to HttpJsonApi
* Reformat CommonCliBase.scala of sandbox-common project
* Fix CommonCliSpecBase test of sandbox
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
Co-authored-by: Miklos <57664299+miklos-da@users.noreply.github.com>
This has made things worse, not better. Now tests that should be
successful hit the timeout.
Still not sure what is actually going wrong, so marking it as flaky.
Tracked in #9886
changelog_begin
changelog_end
On CI we occasionally see the timeout tests fail with
`UnexpectedConnectionClosureException` instead of the expected
server-side timeout error. I haven’t managed to reliably reproduce this
or figure out what is going wrong (the closest seems to be
https://github.com/akka/akka-http/issues/3806 but I struggle to see
how that applies here).
This change seems at least promising; at worst it just speeds up
the tests (by waiting less) without changing flakiness, which still
seems like an improvement.
changelog_begin
changelog_end
While looking at errors produced during command submissions I got
confused by the fact that we resolve the template id in the command
service. Turns out there is no reason for doing that, our types are
just not precise enough. This PR fixes that by making sure that we
differentiate between commands where the template id has been resolved
and those where it hasn’t.
changelog_begin
changelog_end
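A sketch of the idea, not the actual domain model: make resolution visible in the type so the command service can require a fully resolved template id:

    object TemplateIdSketch {
      // The package id parameter distinguishes "maybe resolved" from "definitely resolved".
      final case class TemplateId[+PkgId](packageId: PkgId, moduleName: String, entityName: String)

      type Unresolved = TemplateId[Option[String]] // package id may still be missing
      type Resolved   = TemplateId[String]         // package id is known

      // Resolution turns an Unresolved id into a Resolved one (the lookup itself is elided).
      def resolve(id: Unresolved, lookup: (String, String) => Option[String]): Option[Resolved] =
        id.packageId
          .orElse(lookup(id.moduleName, id.entityName))
          .map(pkg => TemplateId(pkg, id.moduleName, id.entityName))
    }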
* new perf test module LargeAcs to shift many-contract creation to Daml-side
* using MakeIouRange to make blocks of (more or less constant) Ious
* syntax and bad variable quasiquoting
* ACS data distribution plans
* run with 100k contracts, same template, 1% observer frequency
* use CanAssert instead of CanAbort
* query under proper alternative jwt
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* make sure the database accurately represents the ACS
changelog_begin
- [Ledger HTTP Json Service] Logging output can now be in JSON, either by providing the CLI option `--log-encoder json` or by setting the env var `LOG_FORMAT_JSON=true`
changelog_end
This is required to get a version of Gatling that supports Scala
2.13 (and only that, because they do not cross build).
Unfortunately the upgrade is a bit more annoying than I was hoping for:
Our custom gatling utils rely on parsing the simulation log. This is
an internal file format with zero documentation or stability
guarantees, and as expected it has changed in incompatible ways during
the upgrade.
Rather than trying to reverse engineer and adapt to changes every time
we upgrade, this PR switches us to a slightly more supported codepath
by parsing the `stats.json` and `assertions.json` produced by the
highcharts stuff. Afaict this is also what for example the Jenkins
integration relies on so while it’s not completely public API it seems
like the best option I could find.
There are a few pieces of information we can’t get out of those
files. Specifically:
1. maxUsers: we only ever need one user anyway so not really relevant.
2. start, duration, end: no idea why we would want those. We want per-request
metrics, not the total duration.
3. geometric mean: slightly annoying, but avg & stdev should be good
enough™.
4. The scenario name: Not really an issue but if it is, we can
disambiguate by changing request names.
changelog_begin
changelog_end
* check whether collection.compat is unused when compiling for Scala 2.12
- Instead of always suppressing warnings for collection.compat._,
we should only do it for Scala 2.13
- We can also reduce boilerplate by automatically adding this
option when both silencer_plugin and collection-compat are
present
CHANGELOG_BEGIN
CHANGELOG_END
* remove unused import
* remove another unused import
* remove even more unused imports
* missed compat dependency
* more missed compat dependencies
* missed compat dependency
* use scala_deps in scaladoc_jar
- #8423 inlined the major version expansion, but this seems to
have been prior to proper support by scaladoc_jar
* restore custom handling of participant-integration-api
- fixing scaladoc_jar isn't worth it for a single case, as with
deps vs scala_deps
CHANGELOG_BEGIN
http-json:
- add contextual id as logging context to distinguish different application runs in logs
- add request id as logging context to distinguish different http requests within an application run
- add trace logs for non-static endpoints which show how long processing took (in ns)
CHANGELOG_END
CHANGELOG_BEGIN
- [Ledger HTTP Json service] Logging can now be configured via the `--log-level` cli argument. Valid values are `error`, `warn`, `info` (default), `debug`, `trace`
CHANGELOG_END
Co-authored-by: victor.mueller@digitalasset.com <mueller.vpr@gmail.com>
As discussed, we don’t want to expose this via serializable values at
least for now (and it’s not exposed on the ledger API anyway) so this
PR drops the type.
changelog_begin
changelog_end
* add oracle option to http-json-perf-binary-ee
* add oracle path to perf Main's JDBC bracket
* adapt to availableJdbcDriverNames; missing deps
* add changelog
CHANGELOG_BEGIN
- [JSON-API Perf] ``--query-store-index=postgres`` must be passed
to select PostgreSQL query store performance testing; ``true``
and ``yes`` are no longer supported.
See `issue #9492 <https://github.com/digital-asset/daml/pull/9492>`__.
CHANGELOG_END
* participant-integration-api: Build Oracle tests, but don't run them.
CHANGELOG_BEGIN
CHANGELOG_END
* triggers: Switch to an environment variable for enabling Oracle tests.
* http-json: Switch to an environment variable for enabling Oracle tests.
* Disable running Oracle tests by default, not building them.
* triggers/service: Remove unused test dependencies.
* Switch from `@silent` to `@nowarn`.
This annotation is native to Scala 2.12.13+ and 2.13.2+. It replaces
most usages of `@silent`.
I had to get creative about a couple of use cases that didn't work.
Specifically:
1. Suppressing deprecation warnings works, but Scala 2.12 erroneously
complains that the `@nowarn` is unnecessary. I had to suppress
this warning too with `-Ywarn-unused:-nowarn`.
2. I can't seem to suppress the warning, "The outer reference in this
type test cannot be checked at run time." Instead, I have
refactored the code to remove the warning.
We still need to use the silencer plugin to suppress some warnings about
unused imports (because of compatibility between Scala 2.12 and 2.13),
but this means we no longer need the library, and therefore it is not a
transitive dependency that downstream consumers need to worry about.
CHANGELOG_BEGIN
CHANGELOG_END
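For reference, a small example of the replacement annotation with a category filter (standard syntax on Scala 2.12.13+/2.13.2+):

    import scala.annotation.nowarn

    object NowarnExample {
      @deprecated("use newApi instead", "1.0")
      def oldApi(): Unit = ()

      // Suppress only the deprecation warning at this call site; on 2.12 the compiler
      // additionally needs -Ywarn-unused:-nowarn so the annotation itself is not flagged.
      @nowarn("cat=deprecation")
      def stillUsesOldApi(): Unit = oldApi()
    }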
* Add some comments around `@nowarn` support.
* language-support/scala: Fix a warning suppression.
* Revert to the default warnings.
Compatibility was complaining.
* Set supported jdbc driver names at compile time
This is mainly to unblock the work on Oracle support in the Ledger API
but I think it’s a sensible thing in general. For the Ledger API,
moving the dependency to the top-level is apparently rather
tricky. Because the SDK bundles everything into a single megajar,
Sandbox depending on the Oracle library would also put the Oracle
library in scope for the JSON API and the trigger service, which would
then support Oracle in CE, which they should not.
This PR simply hardcodes the list of supported drivers to address
that. Not pretty but does the job.
changelog_begin
changelog_end
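A sketch of what hardcoding the driver list amounts to; the object and value names are illustrative, though the driver class names are the standard ones:

    object SupportedJdbcDriversSketch {
      // Community edition: PostgreSQL only.
      val community: Set[String] = Set("org.postgresql.Driver")
      // Enterprise edition additionally bundles the Oracle JDBC driver.
      val enterprise: Set[String] = community + "oracle.jdbc.OracleDriver"
    }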
* format
* Address review comments
changelog_begin
changelog_end
* Test for duplicate contracts when querying on behalf of multiple parties
Fixes#9388
changelog_begin
changelog_end
* Optimize imports
* Thanks to @S11001001 for answering the comment
* Re-structure the test following @S11001001's input in https://github.com/digital-asset/daml/pull/9443#discussion_r616083932 -- thanks
* remove Array[String] instances from Oracle json-api driver layer
CHANGELOG_BEGIN
CHANGELOG_END
* fix Postgres integration test to deal with removed implicit API
* comparison queries
* name the contract primary key constraint
* use ignore_row_on_dupkey_index instead of merge
- suggested by @cocreature in #9286 f7b2f14294fa33d6804251ce841529a1e2bd298d; thanks
* retrySqlStates for oracle
* enable all non-websocket tests
* name the template_id primary key constraint
* clean up concatFragment calls
* add Websocket tests for oracle
* move iouCreateCommand to be usable by oracle integration tests
* work around Scala 2.12 NPE in Source
* multiquery support for Oracle
* matchedQueries, therefore query-stream support for Oracle
* enable websocket tests
* test '& bar' and 5kb strings
- 5kb string fails on Oracle with
ORA-01704: string literal too long
* refine the long data cases; gets too long at 4000 bytes as expected
- however, the predicate fails for an unknown reason before then; possibly a missed
escape character case
* handle long data with a fallback
- now the predicate fails in all cases instead of a SQL error, which is...better
* only interpolate true, false, null into JSON predicate conditions
- the problem was with JSON-formatted data; it must be SQL-typed instead
* adapt equal's large-data solution for comparison as well
- only works for numbers and strings, but that's all we need to compare
* move Implicits to Queries
* remove stray spaces in output
* test Oracle query expressions alongside Postgresql
* test that bools aren't compared like numbers and strings
* test @> conjunctions and special {}-query handling
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* note on PASSING ... AS X
- suggested by @cocreature; thanks
* remove printlns; these functions don't really need scaffolding anymore
- suggested by @stefanobaghino-da; thanks
* literal variant test case for compiled query
* nicer error reporting of positional args in Fragments
* a failing advanced case for variant query
* use proper path
* also check nested number roundtrip
* add changelog
CHANGELOG_BEGIN
- [HTTP JSON API] Range queries within variant data would not return matching
data when using the PostgreSQL backend with JSON API; this is fixed.
See `issue #9321 <https://github.com/digital-asset/daml/pull/9321>`__.
CHANGELOG_END
* Add Oracle support in the trigger service
This PR migrates the ddl & queries and adds tests for this. It does
not yet expose this to users. I’ll handle that in a separate PR.
changelog_begin
changelog_end
* use getOrElse
changelog_begin
changelog_end
* support scalaz.Foldable1 in Fragments.in
* incorporating signatories and observers in Oracle contract query
* join syntax; allowed aggregation
* aggregate the signatories and observers independently before join
- prior: ERROR at line 8 (the GROUP BY line):
ORA-00932: inconsistent datatypes: expected - got CLOB
* make toSqlWhereClause portable, mostly
* name the constraints for debugging
* import cleanup
* skip inserting contract on conflict (for read committed)
* support lookup by contract ID
* remove ::jsonb from fetch-by-key for Oracle
* proper key comparison and retrieval
* on conflict ignore for signatories and observers
* contract ID, party, offset, package ID column types
* template module and entity name types
- nvarchar2 for name type because
,template_module_name CLOB NOT NULL
,template_entity_name CLOB NOT NULL
,UNIQUE (package_id, template_module_name, template_entity_name)
)
, Error Msg = ORA-02329: column of datatype LOB cannot be unique or a primary key
* type-aware == and @> output for Oracle
* pick arbitrary maximum module/entity name size
Cause: Error : 1450, Position : 0, Sql =
CREATE TABLE
template_id
(tpid NUMBER(19,0) GENERATED ALWAYS AS IDENTITY NOT NULL PRIMARY KEY
,package_id NVARCHAR2(64) NOT NULL
,template_module_name NVARCHAR2(1594) NOT NULL
,template_entity_name NVARCHAR2(1594) NOT NULL
,UNIQUE (package_id, template_module_name, template_entity_name)
)
, Error Msg = ORA-01450: maximum key length (6398) exceeded
* happy path for query-less queries
* done todo
CHANGELOG_BEGIN
CHANGELOG_END
* handle 2.13 deprecation
* factor NVARCHAR2(255)s
- suggested by @cocreature; thanks
* deal with where only a signatory OR observer matches
- suggested by @cocreature; thanks
* Fix gRPC status codes for inconsistency rejections and DamlLf errors
Also, add unit tests and exclude failing compatibility and conformance tests
CHANGELOG_BEGIN
- [Integration Kit] Fix gRPC status codes for inconsistency rejections and DamlLf errors (ContractNotFound, ReplayMismatch) by changing them from INVALID_ARGUMENT to ABORTED
CHANGELOG_END
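Illustrative only, showing what the changed mapping amounts to at the gRPC level:

    import io.grpc.{Status, StatusRuntimeException}

    object RejectionStatusSketch {
      // Inconsistency rejections such as ContractNotFound or ReplayMismatch are now
      // surfaced as ABORTED instead of INVALID_ARGUMENT.
      def inconsistent(description: String): StatusRuntimeException =
        Status.ABORTED.withDescription(description).asRuntimeException()
    }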
* Factor out an Oracle test fixture
We need to add Oracle support to other components like the trigger
service as well. At that point, we cannot assume that things don’t
stomp on each other so this PR adds a fixture that generates a random
user (which also comes with its own schema; creating new databases is
a bit different in Oracle) which we use for testing.
changelog_begin
changelog_end
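A rough sketch of the fixture's core idea, assuming a DBA connection is available (the real implementation lives in libs-scala/oracle-testing and differs in the details):

    import java.sql.DriverManager
    import scala.util.Random

    object OracleUserSketch {
      // In Oracle a user is also a schema, so a fresh random user gives each test run
      // an isolated schema without having to create a new database.
      def createRandomUser(adminJdbcUrl: String, adminUser: String, adminPwd: String): (String, String) = {
        val user = "test_" + Random.alphanumeric.take(10).mkString.toLowerCase
        val pwd  = Random.alphanumeric.take(16).mkString
        val conn = DriverManager.getConnection(adminJdbcUrl, adminUser, adminPwd)
        try {
          val stmt = conn.createStatement()
          stmt.execute(s"""CREATE USER $user IDENTIFIED BY "$pwd"""")
          stmt.execute(s"GRANT CONNECT, RESOURCE, UNLIMITED TABLESPACE TO $user")
          (user, pwd)
        } finally conn.close()
      }
    }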
* Update libs-scala/oracle-testing/src/main/scala/testing/oracle/OracleAround.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Less top-level stuff in traits
changelog_begin
changelog_end
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Expose libraries for integration testing purposes
The motivation of these changes is to eliminate manual work and reduce duplication between the SDK and oem-integration-kit repos by reusing the same test fixture for integration testing participant state implementations. Also, the DARs required for running these tests won't need to be manually updated.
CHANGELOG_BEGIN
CHANGELOG_END
* Fix a concurrency issue in integration tests
* Fix Bazel error
* Fix conflict resolution
* Move inline daml-lf to separate dar files
* Add a comment
* Add a missing artifact
* Extract method
* Remove maven tags
* Add a macro for Scala libraries with dar resources
* Improve the macro
* Add missing artifact
* Simplify the tests
* Format signature
* Fix the maven tag
* Add missing copyright headers
* Format bazel files
* Make //ledger/test-common lf version dependent (to avoid jar hell)
* Move da_scala_dar_resources_library to a separate bzl file
* Add missing artifacts
Co-authored-by: Hubert Slojewski <hubert.slojewski@tesco.com>
* Add new variant to Value.scala for builtin-exceptions.
final case class ValueBuiltinException[+Cid](tag: String, value: Value[Cid]) extends Value[Cid]
And push through the code consequences.
Most places fixed up.
A couple more things to do in this PR (marked NICK)
A couple of things which can be left for later (marked 8020)
fix build
fix another scala match
changelog_begin
changelog_end
* fix any match
* add marker of code which needs attending to in the PR
* extend ledger-api value.proto & fix LfEngineToApi
* undo/comment-out the change to value.proto
* add tests in HashSpec for BuiltinException
* code but dont yet enable value-gen for builtin exceptions
* address comments which suggest we crash in various places
* support BuiltinException in scenario_service.proto
* one more TODO 8020 tag
We filter heartbeat ticks until we get the first step message. This is
correct when starting from the ACS but incorrect when starting from an
existing offset, where it results in us not emitting heartbeats until
we get the first step message. This PR fixes this by passing along the
initial offset and adds a test for this.
changelog_begin
- [Json Api] Fix a bug where heartbeating on websocket connections did not start until the first transaction was received when resuming from a previous ledger offset. See https://github.com/digital-asset/daml/issues/9102
changelog_end
fixes #9102
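A simplified model of the fix as an akka-streams stage; the message types and names are illustrative:

    import akka.NotUsed
    import akka.stream.scaladsl.Flow

    object HeartbeatSketch {
      sealed trait Msg
      case object HeartbeatTick extends Msg
      final case class Step(offset: String) extends Msg

      // Ticks only need to be suppressed while no offset is known at all; when resuming
      // from an existing offset they can be emitted immediately.
      def dropLeadingTicks(resumeOffset: Option[String]): Flow[Msg, Msg, NotUsed] =
        Flow[Msg].statefulMapConcat { () =>
          var lastOffset: Option[String] = resumeOffset

          {
            case HeartbeatTick if lastOffset.isEmpty => Nil
            case tick @ HeartbeatTick                => List(tick)
            case step @ Step(offset) =>
              lastOffset = Some(offset)
              List(step)
          }
        }
    }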
* Build SDK EE tarball
This sets up the infrastructure to build an SDK EE tarball and allows
for swapping out all files included in the tarball depending on the
edition. As an example, this includes the JSON API with (partial)
Oracle support in the EE tarball.
This PR does not yet address publishing this artifact to Artifactory.
I’ll tackle that in a separate PR.
changelog_begin
changelog_end
* Build in temp dir because Windows is stupid
changelog_begin
changelog_end
* directories are bad
changelog_begin
changelog_end
* Navigator resources are actually needed
changelog_begin
changelog_end
* Do not require a JWT token for Health and Reflection services
CHANGELOG_BEGIN
- A JWT token is no longer required to call methods of Health and Reflection services
CHANGELOG_END
* Let service's authorizer decide about rejections
* Updated authorization test
* Added integration test for unsecured access to the Health service
* Added integration test for unsecured access to the Server Reflection service
* Updated Claims doc comments
* Minor change
* Reduced code duplication with SecuredServiceCallAuthTests and UnsecuredServiceCallAuthTests
* Added copyrights
* Move response status handling logic to Authorizer
CHANGELOG_BEGIN
- [LF] Release LF 1.12. This version reduces the size of transactions.
- [Compiler]: Change the default LF output from 1.8 to 1.11.
CHANGELOG_END
The tests start the stream in parallel to submitting commands. This is
problematic since it means that those commands can either be included
in the ACS block or they can come after the ACS block. This PR polls
for the ACS block upfront which makes sure that the commands come afterwards.
changelog_begin
changelog_end
* non-empty newtypes
* an operation
* add some map/set operations and make everything compile on 2.12 and 2.13
* +-: and :-+, with compatibility layer; docs
* move to nonempty package; add aliases for cons/snoc; fix SeqOps aliases
* ensure 2.12 aliases are inferrable
* groupBy1 and toList, use to prove uniqueSets's invariants
* prove immutability first
* matching variance in aliases
* prove the return property of uniqueSets, and use the proof
* tests for NonEmpty API
* rename sci alias to imm
* move RefinedOps to more obvious location
* more docs
CHANGELOG_BEGIN
CHANGELOG_END
* remove unused imports
* illustrate the scala.collection.Seq problem
* ideas for extension
* tests for toF
* tests for +-:
* explain difference with OneAnd
improve previous generalization from #8695
- use the lf version instead of a keyword (like 'stable', 'latest', 'dev') to
tag the actual target. This will allow two keywords to map to the same
version without doing the compilation/test work twice.
- use an alias to map keyword tag targets to versioned tag targets.
- move the package management dar to test_common.
CHANGELOG_BEGIN
CHANGELOG_END
* separate OracleQueries from PostgresQueries
- with some changes from 8161e63189 courtesy @cocreature
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* abstract BIGINT
* json, signatories, observers columns
* compatible lastOffset
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* oracle functions for select (single template ID), insert
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* add oracle branch to integration tests
* oracle CLI configuration for json-api
* run integration tests with ojdbc in classpath
* update maven_install for ojdbc
* drop table if exists for Oracle
* make create DDLs and drops more planned out; drop in reverse order for Oracle integrity
* repin maven
* port agreement_text
* port (by removal) array part of ledger offset update
* use CASE instead of JSON map lookup for multiparty offset update
* simplify self types
* fix contract archival
* repin
* remove selectContracts in favor of selectContractsMultiTemplate
* move Oracle test execution to separate build target
* move websocket test to itlib
* make a bad array instance for Oracle
* report actually-available JDBC drivers only
* configure Oracle test from CI
* attempt with platforms and constraints
* a mismash of bazel to get it to conditionally enable oracle testing
* fix dep resolution in Scala 2.13
* make the Oracle test a stub (inits and does empty DB query)
* remove commented unused deps
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* repin
* we never supply a value for the surrogate ID columns
- suggested by @cocreature; thanks
* add not null to json in DB-specific place
- suggested by @cocreature; thanks
* why DBContractKey
- suggested by @cocreature; thanks
* textType isn't finalized
- suggested by @cocreature; thanks
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
This fixes Scaladoc and our pom file generation.
It also clears up the confusing error around gatling and removes a
redundant dependency on sbt (no idea why we had that in the first
place), both of which resulted in Scala 2.12 dependencies in our 2.13
lockfile, which is obviously bad.
With this, we should now be ready to publish Scala 2.13 artifacts once
the ledger API test tool PR lands.
changelog_begin
changelog_end
Unfortunately missing the actual interesting part since porting
`partitionBimap` seems to be rather annoying but this at least gets us
started on the easy parts.
changelog_begin
changelog_end
The jdkLogHandler provided by Doobie exists purely as an example and the library
itself does not recommend using it in production.
Note that this slightly changes the runtime behavior, logging successful queries
at debug level rather than info. The message itself is preserved from the original
MIT-licensed example.
This uses Slf4j as most of our components, instead of java.util.logging.
changelog_begin
[HTTP JSON API] The server now logs successful queries at debug level
instead of info
[Trigger Service] The server now logs successful queries at debug level
instead of info
changelog_end
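A minimal sketch of an Slf4j-backed handler, assuming the doobie 0.9.x log API:

    import doobie.util.log.{ExecFailure, LogHandler, ProcessingFailure, Success}
    import org.slf4j.LoggerFactory

    object Slf4jLogHandlerSketch {
      private val logger = LoggerFactory.getLogger(getClass)

      // Successful statements at debug, failures at error, as described above.
      val handler: LogHandler = LogHandler {
        case Success(sql, _, exec, processing) =>
          logger.debug(s"Successful statement execution: $sql (exec ${exec.toMillis} ms, processing ${processing.toMillis} ms)")
        case ProcessingFailure(sql, _, _, _, failure) =>
          logger.error(s"Result set processing failed: $sql", failure)
        case ExecFailure(sql, _, _, failure) =>
          logger.error(s"Statement execution failed: $sql", failure)
      }
    }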
* SupportedJdbcDriver box for the required DB-specific implicits and magic values
* replace postgres references with the SupportedJdbcDriver box
* explaining the typeclass
* labels for debugging
* allow external initialization of SupportedJdbcDriver, but not usage
* thread SupportedJdbcDriver everywhere, hang it off of ContractDao
* remove unused dep from integration tests
* split Queries into an agnostic part and a DB-specific part
* document withOptPrefix
* reformat
* SQL syntax more amenable to refactoring
* different sets of DDL for different backends
* make everything use queries passed around everywhere (usually via ContractDao)
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* group database queries while still providing the matchedQueries part of the event
* SQL query for multiquery websocket request all at once
* fetchAndPersist responds with a single bookmark
* unify connection imports
* convert DB results to domain, reassociate with proper query indices
* reassociate multiple matches with proper query indices
* make overlap more likely in testing overlap
* simpler matchedQueries merging for multi-query case
* integrate DB query with metadata-ful StreamQuery's
* expose daoAndFetch; better insertDeleteStepSource doc
* more efficient query path for contract key streams
* missed LogHandler
* persist resolved template IDs; glue the prefiltered set into the stream
* ticks and phantom removal need the state from the ACS
* compile SQL queries for query language predicates on WS
- wrong matchedQueries order
* harmonize order of ACS-vector and Positives
* misc compilation conversions
* WebsocketServicePostgresInt mixin
* update (C) date
* test websocket queries under postgres
* looking for new way to compile queries with proper matchedQueries offsets
* model that querying without matchedQueries requires only one SQL query
* SQL path for contract key streams
* nondeterminism
* fix 3 fetch tests with SQL syntax
* nondeterminism mk 2
* fix multi-party query tests by dealing with nondeterminism properly
* temp logs to track down the contract duplication
* match new scalafmt from #8437
* remove completed TODOs
* add changelog
CHANGELOG_BEGIN
- [JSON API] If the JDBC query store is enabled, it will be used to optimize
Websocket queries as well as the previously-supported synchronous queries.
See `issue #8226 <https://github.com/digital-asset/daml/pull/8226>`__.
CHANGELOG_END
* fix up matchedQueries indices
* complete the fast path for by-id queries
* remove AS c
- suggested by @cocreature; thanks
* remove temporary debugging logs
- suggested by @cocreature; thanks
* fix race condition in DB update when ACS has later contracts than the ledger-end
CHANGELOG_BEGIN
- [JSON API] Under rare conditions, a multi-template query backed by Postgres
could have mismatched snapshots of the ACS for different templates. These
conditions are now checked and accounted for.
See `issue #8226 <https://github.com/digital-asset/daml/pull/8226#issuecomment-756446537>`__.
CHANGELOG_END
* contractsFromOffsetIo already saves the offset to DB
* notes on why we rerun DB update to fix the race condition
* if the ACS last-offset exceeds the tx stream offset, save it in the DB instead
* Upgrade scopt to 4.0.0
Scopt 3.x has some issues with Scala 2.13 because it expects an
immutable Seq on 2.13 meaning you cannot just pass in an Array. Rather
than fixing our callsites to convert to an immutable Seq everywhere,
this PR bumps to Scopt 4.0 which goes back to collection.Seq.
And leaving that aside, I’m a fan of upgrading dependencies anyway :)
changelog_begin
changelog_end
* Use val instead of def
changelog_begin
changelog_end
This PR updates scalafmt and enables trailingCommas =
multiple. Unfortunately, scalafmt broke the version field which means
we cannot fully preserve the rest of the config. I’ve made some
attempts to stay reasonably close to the original config but couldn’t
find an exact equivalent in a lot of cases. I don’t feel strongly
about any of the settings so happy to change them to something else.
As announced, this will be merged on Saturday to avoid too many conflicts.
changelog_begin
changelog_end
* Replace many occurrences of DAML with Daml
* Update docs logo
* A few more CLI occurrences
CHANGELOG_BEGIN
- Change DAML capitalization and docs logo
CHANGELOG_END
* Fix some over-eager replacements
* A few more occurrences in md files
* Address comments in *.proto files
* Change case in comments and strings in .ts files
* Revert changes to frozen proto files
* Also revert LF 1.11
* Update get-daml.sh
* Update windows installer
* Include .py files
* Include comments in .daml files
* More instances in the assistant CLI
* some more help texts
* Port damlc dependencies to Scala 2.13
I got a bit fed up by the fact that going directory by directory
didn’t really work since there are too many interdependencies in
tests (e.g., client tests depend on sandbox, sandbox tests depend on
clients, engine tests depend on DARs which depend on damlc, …).
So before attempting to continue with the per-directory process, this
is a bruteforce approach to break a lot of those cycles by porting all
dependencies of damlc which includes client bindings (for DAML Script)
and Sandbox Classic (also for DAML Script).
If this is too annoying to review let me know and I’ll try to split it
up into a few chunks.
changelog_begin
changelog_end
* Update daml-lf/data/src/main/2.13/com/daml/lf/data/LawlessTraversals.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* fixup lawlesstraversal
changelog_begin
changelog_end
* less iterator more view
changelog_begin
changelog_end
* document safety of unsafeWrapArray
changelog_begin
changelog_end
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Support multi-party submissions in the JSON API
changelog_begin
- [JSON API] Add support for multi-party submissions by allowing for
multiple actAs parties in the token and passing on readAs to the
ledger.
changelog_end
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/EndpointsCompanion.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Upgrade Scala dependencies for 2.13 compatibility
This upgrades a bunch of Scala libraries to versions that have 2.13
support. There are two libraries that are still missing:
- diffson, this has a new version but with significant breaking
changes and it is only used in the Navigator console which I hope to kill
before I have to worry about this.
- ai.x:diff, this is used in the ledger API test tool. The library is
abandoned but there are a few alternatives.
changelog_begin
changelog_end
* Fix pureconfig
changelog_begin
changelog_end
* Fix Navigator
changelog_begin
changelog_end
This is necessary to at least attempt an upgrade to 2.13 and
generally, I want to keep our rulesets up2date. rules-scala forces the
version of scalatest so we have to bump that at the same time.
This requires changes to basically all Scala test suites since the
import structure has changed and a bunch of things (primarily
scalacheck support) got split out.
Apologies for the giant PR, I don’t see a way to keep it smaller.
changelog_begin
changelog_end
* mark some of dbbackend private
* fetchBy* functions for Queries
* shift in-memory filtering into the transaction stream
- removes irrelevant contracts from memory as soon as possible for fetch by
contract ID and key in-memory
* push the three synchronous search varieties into new signatures
* replace the core findByContract* functions with DB-delegating versions
* remove the GADT equality and most of the explicit traversals
- compiles again, finally
* factoring
* ContractDao wrappers for fetchById and fetchByKey
* DB version of findByContractId
* DB version of findByContractKey
* Search is the split of ContractsService
* fix SQL for keys
* trade the typeclass for a sum type
- sealed instead of final because of the path dependency on ContractsService
instance
* number conversion is done already in ContractDao
* make fetch-by-key tests depend on proper number conversion for SQL
* add changelog
CHANGELOG_BEGIN
- [JSON API] ``/v1/fetch`` now uses the Postgres database, if configured, to
look up contracts by ID or key, except when querying a contract by ID without
its corresponding template ID. The fallback in-memory version of
``/v1/fetch`` is also now significantly more efficient for large datasets,
though still linear.
You may optionally re-create JSON API's database to take full advantage.
See `issue #7993 <https://github.com/digital-asset/daml/pull/7993>`__.
CHANGELOG_END
* use search.search for search
- suggested by @cocreature; thanks
* add an index for contract key lookups
- suggested by @cocreature; thanks
* kvutils: Use ScalaPB to generate a Scala JAR for daml_kvutils.proto.
* Bazel: Delete the unused `da_java_binary` rule, and inline `_wrap_rule`.
* Bazel: Factor out Java/Scala protobuf class generation into a helper.
CHANGELOG_BEGIN
CHANGELOG_END
* daml-lf/archive: Use `proto_jars`.
* Bazel: Remove the visibility modifier from `proto_jars`.
It's too confusing. Just make everything public.
* daml-lf/archive: Push protobuf source tarballs into `proto_jars`.
* Bazel: Add comments to the various parts of `proto_jars`.
* daml-assistant: Do unpleasant things with `location` in Bazel.
* Upgrade akka-http to 10.2
Follow up to #8048, I left out this upgrade to reduce noise and since
I wasn’t quite sure how involved it was going to be.
changelog_begin
changelog_end
* Reenable transparent HEAD requests
Apparently no longer on by default but we depend on this in waitForHttpServer
changelog_begin
changelog_end
* Upgrade akka and akka-http
Was chasing an issue somewhere and thought this might affect it in
some way. It didn’t but I might as well turn the upgrade into a PR.
changelog_begin
changelog_end
* Fix trigger service tests
changelog_begin
changelog_end
* Downgrade akka-http again
changelog_begin
changelog_end
* Upgrade akka-http again and fix tests
changelog_begin
changelog_end
* Cleanup trigger service
changelog_begin
changelog_end
Previously we didn’t build up the `OneAnd[Set, Party]` properly and
included the one party in the set as well. This was an issue if you
have the same party multiple times, most likely in readAs and
actAs (but not limited to that). This then led to SQL queries failing
since we tried to insert twice for a given party. This PR fixes that
by properly deduplicating the parties and adding a test for this.
changelog_begin
- [JSON API] Fix a regression introduced in SDK 1.7.0, where using a
party multiple times in the same JWT token (e.g., readAs and actAs)
broke database queries for that party. Note that there is never a
reason to include a party multiple times since actAs implies readAs.
changelog_end
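Illustrative only: the deduplication boils down to keeping the head party out of the tail set:

    import scalaz.OneAnd

    object PartySetSketch {
      def toPartySet(parties: List[String]): Option[OneAnd[Set, String]] =
        parties match {
          case head :: tail => Some(OneAnd(head, tail.toSet - head))
          case Nil          => None
        }
    }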
* Make HealthService public
DABL patches the rest adapter so making this public helps them plug it
together with other things.
Also removes some garbage debug print which I forgot to remove
🤦
changelog_begin
changelog_end
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/HealthService.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* No logging
changelog_begin
changelog_end
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Add healthcheck endpoints to JSON API
This PR adds /livez and /readyz (following the k8s naming scheme) that can
be used as liveness and readiness checks. There isn’t much we can do
for liveness apart from showing that we can still respond to http
requests, but readiness can be a bit more clever and check the ledger
connection as well as the database connection.
changelog_begin
- [JSON API] Add `/livez` and `/readyz` health check endpoints for
easier integration with k8s and other schedulers.
changelog_end
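A simplified sketch of such routes with akka-http; `ledgerIsHealthy` and `databaseIsHealthy` stand in for the real checks:

    import akka.http.scaladsl.model.StatusCodes
    import akka.http.scaladsl.server.Directives._
    import akka.http.scaladsl.server.Route
    import scala.concurrent.{ExecutionContext, Future}

    object HealthRoutesSketch {
      def apply(ledgerIsHealthy: () => Future[Boolean],
                databaseIsHealthy: () => Future[Boolean])(
          implicit ec: ExecutionContext): Route =
        concat(
          // Liveness: we are alive as long as we can answer HTTP requests at all.
          path("livez") { complete(StatusCodes.OK) },
          // Readiness: also check the ledger connection and (if configured) the query store.
          path("readyz") {
            onSuccess(for {
              ledgerOk <- ledgerIsHealthy()
              dbOk     <- databaseIsHealthy()
            } yield ledgerOk && dbOk) {
              case true  => complete(StatusCodes.OK)
              case false => complete(StatusCodes.ServiceUnavailable)
            }
          }
        )
    }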
* I hate windows
changelog_begin
changelog_end
HTTP 1.1 has existed since 1999 so there isn’t really a good reason not to
use this. In fact, the docs recommend to use Chunked in favor of
CloseDelimited.
changelog_begin
changelog_end
* Factor JWT verifier CLI flags
changelog_begin
changelog_end
* Use cli-opts in auth middleware
* Use cli-opts in sandbox cli
* Mark trigger service test as long
These have become prone to timing out on CI.
Increasing the size (timeout) is a temporary fix. A proper
solution is to a) not start a fresh sandbox per test case and b)
separate the in-mem/db and no-auth/auth configurations into
separate Bazel test targets.
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
This PR extends the failure tests with one that establishes a
websocket connection, kills the connection and reenables it at the
last offset.
changelog_begin
changelog_end
The previous restriction was both too lax and too strict now that we
have multi-party queries:
1. It allowed a party in `readAs` for command submissions which just
fails on the ledger side. Changing this is technically breaking but
only if you used a token that would have been rejected as soon as you
enabled auth, so that seems very reasonable to break.
2. It didn’t allow extra parties in `readAs`.
This PR switches to requiring exactly one party in `actAs` while
supporting multiple parties in `readAs`.
changelog_begin
- [JSON API] JWTs on command submissions can now contain extra parties
in the `readAs` field. `actAs` is still limited to a single party.
changelog_end
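A sketch of the new rule (names are illustrative): exactly one actAs party, with readAs free to carry extra parties:

    object SubmissionPartiesSketch {
      final case class WritePayload(actAs: List[String], readAs: List[String])

      def validate(p: WritePayload): Either[String, (String, Set[String])] =
        p.actAs match {
          case List(submitter) => Right((submitter, p.readAs.toSet - submitter))
          case _               => Left(s"Expected exactly one actAs party, got ${p.actAs.size}")
        }
    }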
* Add tests for connection failures in the JSON API
This PR adds some toxiproxy based tests to see how the JSON API reacts
if the connection to the ledger is killed. There are a bunch of
inconsistencies here in the tests some of which we might want to
address and the others we should at least document but I’ll leave that
for future PRs.
changelog_begin
changelog_end
* Import HttpServiceTestFixture instead of prefixing
changelog_begin
changelog_end
* set doobie version to 0.9.2 and rerun maven pin
* port extractor and some of JSON API
* repin maven
* use doobie's own builder compatibility where required
* use probably bad derivations to supply Blockers where transactEC was required
- The point of using Blocker instead of ExecutionContext seems to be to
especially emphasize to API users that it isn't appropriate to use an
ExecutionContext with ordinary behavior. That is what we have done, which
should probably change, but just compiling for now.
* fix fragment inspection test for internal restructuring
- This test depends on implementation details of Doobie, so naturally it must be
altered when that runs. Fortunately, it's been made easier by the changes
in this upgrade.
* allow 256 blockers for navigator transaction blocker, like the global EC
* allow as many blockers as the pool size for trigger service
- The transactor shouldn't share ExecutionContext for transactions with the
caller, so we set up a new one based on configured pool size.
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
This PR shuffles things around a bit to make it easier to test and
adds `--query-store-jdbc-config-env` which specifies the name of an
environment variable containing the jdbc URL. The UX for this is
modeled after #7660.
Added a test for the different formats.
fixes #7667
changelog_begin
- [JSON API] The JDBC url can now also be specified via
`--query-store-jdbc-config-env` which reads it from the given
environment variable.
changelog_end
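A minimal sketch of the resolution order (the helper is illustrative): the direct flag wins, otherwise the named environment variable is read:

    object JdbcConfigSourceSketch {
      def resolve(direct: Option[String], envVarName: Option[String]): Either[String, String] =
        (direct, envVarName) match {
          case (Some(cfg), _)     => Right(cfg)
          case (None, Some(name)) => sys.env.get(name).toRight(s"Environment variable $name is not set")
          case (None, None)       => Left("No JDBC configuration given")
        }
    }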
* add silent_annotations option to da scala bazel functions
* use silent_annotations for several scala targets
* use silencer_plugin instead when the lib isn't used
* use silent_annotations for several more scala targets
* use silencer_lib for strange indirect requirement for running tests
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* silent_annotations support for scaladoc
* Support multi-party reads on the JSON API
Given that those aren’t going away and we’re instead doubling down on
this and adding multi-party writes as well, the JSON API needs to
support this. This PR only implements the read side (since the ledgers
do not yet support the write side).
This does not deviate from the approach chosen by the JSON API to
infer the parties from the token; we just don’t error out anymore when
more than one party is passed.
changelog_begin
changelog_end
* Apply suggestions from code review
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Remove dependency on doobie_postgres from db-backend
changelog_begin
changelog_end
* Fix offset update
changelog_begin
changelog_end
* Use nonempty sets for parties
changelog_begin
changelog_end
* Fix updateOffset under concurrent transactions
changelog_begin
changelog_end
* Add tests for multi-party websocket queries and fetches
changelog_begin
changelog_end
* fmt
changelog_begin
changelog_end
* Fix perf tests
changelog_begin
changelog_end
* Cleanup
changelog_begin
changelog_end
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/dbbackend/ContractDao.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Move ParsePayload instances, thanks Stephen!
changelog_begin
changelog_end
* More unsubst
changelog_begin
changelog_end
* Fix off by 1 error
changelog_begin
changelog_end
* Remove redundant type annotation
changelog_begin
changelog_end
* fmt
changelog_begin
changelog_end
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
With the introduction of the standalone JAR, we cannot rely on the
assistant anymore to pass the default logback config. Users can still
override the logback config with `-Dlogback.configurationFile` if they
need something else but this provides a more sensible default logging
config than seeing a ton of debug logs from netty.
changelog_begin
changelog_end
* Add http-json-perf daily cron job
changelog_begin
changelog_end
* commenting out other jobs so we can manually test the new one
* commenting out other jobs so we can manually test the new one
* Fix the shell script
* Fixing the gs bucket, `gs://http-json bucket does not exist`
* uncomment the other jobs
* timestamp from git log
* get rid of DAR copying
* comment out the other jobs, so we can test it
* uncomment the other jobs
* Open sourcing gatling statistics reporter
Running gatling scenarios with `RunLikeGatling` from libs-scala/gatling-utils
* cleaning up
* Replace "\n" with System.lineSeparator
so the formatting test cases pass on windows
* Testing DurationStatistics Monoid laws
* Renaming RunLikeGatling -> CustomRunner
This PR uses the new data structure introduced in #7220.
Additionally this fixes the `Value Equal instance`, which was considering
<a: x, b: y> different from <b: y, a: x>.
CHANGELOG_BEGIN
CHANGELOG_END
* Adding `uniqueModuleEntity`
and making sure that generated domain Template IDs do not
have module entity duplicates, so resolution should work
with no problem.
changelog_begin
changelog_end
* cleaning up
* Run gatling scenario from the perf runner main
reports are disabled for now, getting a class not found
when generating them
changelog_begin
changelog_end
There are two sources of flakiness and I’ve seen both on CI:
1. We can get more than one offset at the beginning if things are too
slow. This is addressed by just filtering those out.
2. The stream completes as soon as the input is closed. This is
addressed by keeping the stream open and closing it with `take`.
Point 2 is a problem for all tests, see #7293, but I’ll leave the
other tests for separate PRs (I’ve also never seen them flake).
You can observe the failures locally if you add a `Thread.sleep` between
creating the stream future and sending the commands.
changelog_begin
changelog_end
* ledger-api-client: `maxInboundMessageSize` -> `maxInboundMetadataSize`.
CHANGELOG_BEGIN
- [Scala Bindings] Rename a field in the ``LedgerClientConfiguration``
to ``maxInboundMetadataSize``, to match the Netty channel
builder. It was incorrectly named ``maxInboundMessageSize``, which is
a different channel property that configures the maximum message size,
not the header size.
CHANGELOG_END
* ledger-api-client: Introduce a `maxInboundMessageSize` config property.
We use this a lot; easier if it's in the configuration.
CHANGELOG_BEGIN
- [Scala Bindings] Replace the
``LedgerClientConfiguration.maxInboundMessageSize`` property with a
new one that represents the maximum size of the response body.
CHANGELOG_END
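For reference, the two Netty channel properties the configuration now distinguishes (sizes below are arbitrary examples):

    import io.grpc.netty.NettyChannelBuilder

    object ChannelSketch {
      val builder: NettyChannelBuilder =
        NettyChannelBuilder
          .forAddress("localhost", 6865)
          .maxInboundMessageSize(64 * 1024 * 1024) // maximum size of a response body
          .maxInboundMetadataSize(8 * 1024)        // maximum size of the response headers
    }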
fix JSON API multikey stream
In the current state, the JSON API only handles multiple keys _from
different templates_. This makes it work for multiple keys from the same
template too.
Extracted from #7066 with the following changes:
- Use of a mutable `HashSet` to test for keys, because perf.
- Addition of a test at the JSON API level.
CHANGELOG_BEGIN
- [JSON API] Fix a bug where streaming multiple keys could silently
ignore some of the given keys.
CHANGELOG_END
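Illustrative only: with a mutable `HashSet` per template, every requested key is matched rather than just the first:

    import scala.collection.mutable

    object KeyMatcherSketch {
      def keyMatcher(requestedKeys: Iterable[String]): String => Boolean = {
        val keys = mutable.HashSet.empty[String]
        keys ++= requestedKeys
        candidate => keys.contains(candidate)
      }
    }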
* apply @cocreature's patch
https://gist.github.com/cocreature/d35367153a7331dc15cca4e5ea9098f0
* fix fmt
* reintroducing the main
* Introducing `ledger-service/http-json-testing`
* cleaning up
* Starting sandbox and json-api from perf-test main
changelog_begin
changelog_end
* Deprecate noop `--application-id`
changelog_begin
[JSON API]
Hiding and deprecating the `--application-id` command-line option. The JSON API never used it.
It is required to instantiate LedgerClientConfiguration but was not used for any command submission.
The JSON API uses the application ID specified in the JWT. See #7162
changelog_end
* removing further usage of noop applicationId
* a bit of explanation what this is for
Apparently `[[]]` links don't work for external resources. I wanted to
turn the first reference into a link too but that contains too many
weird characters.
CHANGELOG_BEGIN
CHANGELOG_END
* Adding `package-max-inbound-message-size`
this is to allow separate configuration settings for command submission
and package management ledger clients
* Fixing formatting
* Updating docs
changelog_begin
[JSON API] Adding `--package-max-inbound-message-size` command line option.
Optional max inbound message size in bytes used for uploading and downloading package updates. Defaults to the `max-inbound-message-size` setting.
changelog_end
* Addressing code review comments
fixes#2506
Judging from the issue this was originally introduced to work around a
bug. I couldn’t actually track down what that bug was but at this
point they are identical so no point keeping this around.
changelog_begin
changelog_end
* Introducing `TickTriggerOrStep` ADT, filtering out `TickTrigger`s preceding the initial ACS retrieval
changelog_begin
[JSON API] Filter out offset ticks preceding the ACS events block. See issue: #6940.
changelog_end
* Cleaning up a bit
* Do not emit offset tick unless we know the real offset
wait for LiveBegin message
* Make WebsocketConfig configurable
* Adding offset tick integration tests
reverting WebsocketService to 05d49b37c3 makes these tests fail
* cleaning up
* Refactoring `emitOffsetTicksAndFilterOutEmptySteps`
keep offset instead of StepAndError with offset
* factor --address, --http-port, --port-file options from http-json to cli-opts
- enabling reuse in trigger service
* use cli-opts for address and http-port options in Trigger service
* mark ServiceConfig and some defaults private
* use --address option to set up server
* document Setter
* test --address option is parsed
* missing (c) headers
* add changelog
CHANGELOG_BEGIN
- [Trigger Service] Accepts a new ``--address`` option to listen for HTTP connections on
interfaces other than localhost, such as ``0.0.0.0`` for all addresses.
See `issue #7090 <https://github.com/digital-asset/daml/pull/7090>`__.
CHANGELOG_END