* Initial changes to add a surrogate_template_id cache to reduce db queries
CHANGELOG_BEGIN
CHANGELOG_END
* refactoring and addition of tests
* Code review based changes to use Contextual Logger and json-api metrics instance
* make max cache entries/size configurable
* Rename cache max entries default variable
* Add failing test that covers the bug we found in #10823
* Fix /v1/query endpoint bug
changelog_begin
- [JSON API] Fixed a bug that prevented the JSON API from being aware of
packages uploaded directly via the Ledger API.
changelog_end
* Test case for LockedFreePort not colliding with port 0
changelog_begin
changelog_end
* Discover dynamic port range on Linux
* Random port generator outside ephemeral range
* remove dev comments
* Draw FreePort from outside the ephemeral port range
Note that there is a race condition between the socket being closed and the
lock-file being created in LockedFreePort. This is not a new issue; it
was already present with the previous port-0-based implementation.
LockedFreePort handles this by attempting to find a free port and taking
a file lock multiple times.
However, it could happen that A `find`s port N and obtains the lock, but
has not yet bound port N again; then B binds port N during its own `find`;
then A attempts to bind port N before B releases it, and fails
because B still holds it.
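The retry strategy described above can be sketched as follows; this is an illustrative reconstruction, not the actual `LockedFreePort` code, and the lock-file naming is an assumption:

```scala
import java.net.ServerSocket
import java.nio.file.{Files, Path, StandardOpenOption}

// Hypothetical sketch of the find-then-lock retry loop; the real library
// lives in libs-scala/ports and differs in details.
object LockedFreePortSketch {
  def findFreePort(): Int = {
    val socket = new ServerSocket(0) // bind port 0 so the OS picks a free port
    try socket.getLocalPort
    finally socket.close() // race window opens here: free but not yet locked
  }

  def tryLock(dir: Path, port: Int): Boolean =
    try {
      // CREATE_NEW fails atomically if another process already holds the file.
      Files
        .newByteChannel(
          dir.resolve(s"port-$port.lock"),
          StandardOpenOption.CREATE_NEW,
          StandardOpenOption.WRITE,
        )
        .close()
      true
    } catch { case _: java.nio.file.FileAlreadyExistsException => false }

  // Retry the whole find-and-lock sequence a bounded number of times.
  def lockedFreePort(dir: Path, attempts: Int = 10): Int =
    Iterator
      .continually(findFreePort())
      .take(attempts)
      .find(tryLock(dir, _))
      .getOrElse(sys.error(s"no free port after $attempts attempts"))
}
```

The retry bound is what papers over the race: if A loses port N to B, the next iteration simply finds another candidate.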
* Select dynamic port range based on OS
* Detect dynamic port range on macOS and Windows
* Import sysctl from Nix on macOS
changelog_begin
changelog_end
* Windows line separator
* FreePort helpers visibility
* Use more informative exception types
* Use a more lightweight unit test
* Add comments
* Fix Windows
* Update libs-scala/ports/src/main/scala/com/digitalasset/ports/FreePort.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Update libs-scala/ports/src/main/scala/com/digitalasset/ports/FreePort.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Add a comment to clarify the generated port range
* fmt
* unused import
* Split libs-scala/ports
Splits the FreePort and LockedFreePort components into a separate
library as this is only used for testing purposes.
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
Since we switched to Scala 2.13, the `ImmArray` companion object extends
`Factory`. Hence:
- the `apply` methods of `ImmArray` override the one from `Factory`
- we can use the notation `.to(ImmArray)` to convert an `Iterable` to
`ImmArray`
This PR drops those `ImmArray` `apply` overloads. Conversions from `Iterable` to
`ImmArray` should use `.to(ImmArray)`.
CHANGELOG_BEGIN
CHANGELOG_END
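For illustration, the `.to` conversion pattern looks like this; `Vector` stands in for `ImmArray`, which lives in daml-lf/data and is not available here:

```scala
// In Scala 2.13, `.to(Target)` works for any companion object that extends
// Factory; ImmArray's companion now qualifies, so `iterable.to(ImmArray)`
// is the idiomatic conversion, exactly like the Vector case below.
val xs: Iterable[Int] = List(1, 2, 3)
val v: Vector[Int] = xs.to(Vector)
```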
* Use the token from incoming requests to update the package list
changelog_begin
changelog_end
* Lazily initialize the ledger client
* Fix ee integration tests
* Fix package reloading behaviour by using a semaphore to check for ongoing updates
* Refactor out the semaphore code into a concurrency utility class
* Use correct locking for the updateTask so every thread always uses an up to date task
* Remove unused imports in utils.Concurrent & remove packages from the tests
* Hide the token file CLI option and mark it deprecated; we don't need it anymore and only keep it so client deployment code doesn't break
* Fix scala 2.12 build by adding more type annotations
* Update ledger-service/http-json-cli/src/main/scala/com/daml/http/OptionParser.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/PackageService.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Re-add pkgManagementClient after it was accidentally removed (but now it's lazy)
* Remove concurrent object & use atomic boolean instead of a mutex because it makes more sense
* Replace semaphore with countdownlatch
* Refactor the caching into a separate class
* Use Instant instead of LocalDateTime
* Remove that ** of bad synchronization and do stupid-simple synchronization because it JUST WORKS; also adapt when we want to reload
* Remove await in tests because it can result in buggy tests
* remove unused code in WebSocketService.scala
* Unhide the access-token-file option as per request of Stefano
* Less implicit jwts per request of Stefano
* Try making some code more readable as by request of Akshay
* Use more shark because it expresses better than flatMaps if I don't need the arg
* Move defs in predicate in WebsocketService.scala around
* Try to minimize diff further in WebsocketService.scala
* Fix build and minimize diff in WebSocketService.scala further
* Minimize diff of function getTransactionSourceForParty in WebSocketService.scala
* Share the ec in WebSocketService.scala to minimize the diff
* Minimize in function predicate in WebSocketService.scala
* Further minimize in function predicate in WebSocketService.scala
* Change some case classes to be normal classes but with apply method
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/PackageService.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Endpoints.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Get rid of implicit jwt tokens, the world is already confusing and full of implicits enough
* Improve readability
* Integrate the new LedgerClient which does not depend on a ledger id
* Fix tests
* Apply suggestions from code review
thanks to @S11001001
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Apply further review comments
* Remove outcommented code
* Deprecate access token file option in the description too
changelog_begin
- [JSON API] The cli option `--access-token-file` is now deprecated. It
is no longer needed and you can safely remove it: the operations that
previously required a token at startup are now performed on demand
using the token of the incoming request.
changelog_end
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* vanilla job test on main pipeline
changelog_begin
changelog_end
* move job to daily compat tests
* add timeout to dev-env and changes based on code review
* unconditionally enable JSON search index on Oracle
In '1kb of data' and larger Oracle integration tests:
ORA-29902: error in executing ODCIIndexStart() routine
ORA-20000: Oracle Text error:
DRG-50943: query token too long on line 1 on column 3
From https://docs.oracle.com/en/database/oracle/oracle-database/19/errmg/DRG-10000.html#GUID-46BC3B3F-4DB7-4EB4-85DA-55E9461966CB
Cause: A query token is longer than 256 bytes
Action: Rewrite query
* add changelog
CHANGELOG_BEGIN
- [JSON API] The Oracle database schema has changed; if using
``--query-store-jdbc-config``, you must rebuild the database by adding
``,start-mode=create-only``. See #10539.
CHANGELOG_END
* test only 1kb
* extra flag in db config string
* let Queries backends configure themselves from maps
* new Queries constructor dataflow to better support config values
* remove fields as we go, isolating backend-specific from -agnostic conf
- we use StateT to avoid the problems that will definitely arise if we
don't DRY.
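The "remove fields as we go" step can be sketched with a hand-rolled state step; the real code uses StateT, and the helper name and key names here are assumptions:

```scala
// Illustrative sketch: each parsing step reads one key and returns the
// remaining config without it, so backend-agnostic and backend-specific
// parsing can never consume the same key twice.
object ConfSketch {
  type Conf = Map[String, String]

  def take(key: String): Conf => Either[String, (Conf, String)] = conf =>
    conf.get(key).toRight(s"missing $key").map(v => (conf - key, v))
}
```

Threading the shrinking map through every step is exactly what StateT does without the manual plumbing.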
* fix up DbConfig including DbStartupMode
* start to uncouple json-api's config from db-utils
* two JdbcConfigs with different purposes/scopes
- also moves db-utils contents to com.daml.dbutils
* adapt trigger service to refactoring
* fix JdbcConfig leftovers
* adapt http-json-cli to new JdbcConfig
* remove extra ConfigCompanion
* explain more about the QueryBackend/Queries distinction
* split SupportedJdbcDriver into two phases with a tparam
* use SupportedJdbcDriver.TC instead of SupportedJdbcDriver as the nullary typeclass
* patch around all the moved objects with imports
* missed import from moving ConnectionPool to dbutils
* use new 2-phase SupportedJdbcDriver for ContractDao setup
* left off part of a comment
* more q.queries imports
* other imports from the dbutils move
* nested JdbcConfig
* configure the driver in each backend-specific test
* very confusing error, but make the imports nicer and it goes away
* nested JdbcConfig in perf
* missing newline
* port contractdao-bench
* test new option parsing all the way through QueryBackend
* disable search index for some tests, enable for others
* add changelog
CHANGELOG_BEGIN
- [Trigger Service] ``--help`` no longer advertises unsupported JDBC
options from JSON API.
- [JSON API] [EE only] By default, on Oracle, sets up a JSON search
index to speed up the queries endpoints. However, Oracle versions
prior to 19.12 have an unrecoverably buggy implementation of this
index; in addition, the current implementation fails on queries with
strings >256 bytes, with no way to disable the index for that query.
Pass the ``disableContractPayloadIndexing=true`` option as part of
``--query-store-jdbc-config`` to disable this index when creating the
schema.
See `issue #10539 <https://github.com/digital-asset/daml/pull/10539>`__.
CHANGELOG_END
* port failure tests
* init version table last, drop first
- suggested by @realvictorprm; thanks
* rename split DBConfig.scala
- suggested by @realvictorprm; thanks
* move imports to not be in alphabetical order
- suggested by @realvictorprm; thanks
* remove createSchema
- suggested by @realvictorprm; thanks
* Revert "test only 1kb"
This reverts commit 616e173e63.
* port to scala 2.12
- bug in unused imports
- old name `-` for `removed`
Adding support for accepting the server's private key as an encrypted file (since storing an unencrypted private key on the file system might be a risk).
The encrypted private key is assumed to be encrypted using AES or a similar algorithm. The details necessary to decrypt it are obtained from a secrets server over HTTP as a JSON document. The URL of the secrets server is supplied through the new `--secrets-url` CLI parameter.
One can supply the private key in either plaintext (old behavior) or ciphertext: if the private key's file ends with the .enc suffix it is assumed to be ciphertext; otherwise it is assumed to be plaintext.
CHANGELOG_BEGIN
- [DPP-418] [Participant] Add support for supplying server's private key as an encrypted file and then decrypting it with the help of a secrets server.
CHANGELOG_END
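A minimal sketch of the decryption step, assuming AES in CBC mode with PKCS5 padding; the actual cipher, mode, and the JSON fields served by the secrets server are not specified above and are assumptions here:

```scala
import javax.crypto.Cipher
import javax.crypto.spec.{IvParameterSpec, SecretKeySpec}

// Illustrative only: the PR says "AES or similar"; AES/CBC/PKCS5Padding is
// an assumed concrete choice, and the key/IV would come from the secrets
// server's JSON response in the real flow.
object KeyDecryptionSketch {
  private def cipher(mode: Int, key: Array[Byte], iv: Array[Byte]): Cipher = {
    val c = Cipher.getInstance("AES/CBC/PKCS5Padding")
    c.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv))
    c
  }

  def encrypt(plaintext: Array[Byte], key: Array[Byte], iv: Array[Byte]): Array[Byte] =
    cipher(Cipher.ENCRYPT_MODE, key, iv).doFinal(plaintext)

  def decrypt(ciphertext: Array[Byte], key: Array[Byte], iv: Array[Byte]): Array[Byte] =
    cipher(Cipher.DECRYPT_MODE, key, iv).doFinal(ciphertext)
}
```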
* Addition of a key_hash field to speed up fetchByKey queries
CHANGELOG_BEGIN
CHANGELOG_END
* changes to make key_hash an Optional field
CHANGELOG_BEGIN
- Update schema version for http-json-api query store with new key_hash field
- Improved performance for fetchByKey query which now uses key_hash field
CHANGELOG_END
* remove btree index for postgres and other changes based on code review comments
* Simplify loading of logback file
doConfigure accepts a URL which slightly simplifies things.
Really the primary reason why I’m doing this is that it gets veracode
to shut up. I don’t fully understand what it’s worried about in the
first place but it looks like it gets angry about calling openStream
on the resource *shrug*
changelog_begin
changelog_end
* fix 2.12 build
changelog_begin
changelog_end
* JSON API: log ledger connection errors at every attempt
This should help diagnose connection errors.
changelog_begin
[JSON API] Ledger connection errors are now logged at every attempt
changelog_end
* Make match exhaustive
* Upgrade Scalatest to v3.2.9.
Because of some coupling we also have to upgrade Scalaz to the latest
v7.2 point release, v7.2.33.
The Scalatest changes are quite involved because the JAR has been broken
up into several smaller JARs. Because Bazel expects us to specify all
dependencies and doesn't allow transitive dependencies to be used
directly, this means that we need to specify the explicit Scalatest
components that we use.
As you can imagine, this results in quite a big set of changes. They
are, however, constrained to dependency management; all the code remains
the same.
CHANGELOG_BEGIN
CHANGELOG_END
* http-json-oracle: Fix a Scalatest dependency.
* ledger-api-client: Fix a Scalatest dependency.
17709b5ba3 (#10344) brought the two implementations of
`selectContractsMultiTemplate` close together enough that they can be
usefully factored. Here is that factoring.
Several of the arguments to `queryByCondition` take the form
(Read[T], T => Out), i.e. Coyoneda; we could invert the control by
returning a data structure with coyonedas, but instead here we use a
sort of continuation-passing style, so the coyonedas are embedded in the
arguments to `queryByCondition`.
CHANGELOG_BEGIN
CHANGELOG_END
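The continuation-passing shape can be illustrated with a stripped-down sketch; the names and the row representation are assumptions, not the actual json-api code:

```scala
object QueryFactoring {
  trait Read[T] { def read(row: Map[String, String]): T }

  // The caller passes the (Read[T], T => Out) pair in, rather than getting
  // it back inside a data structure: the type T stays existential within
  // the call, which is the CPS rendering of a Coyoneda value.
  def queryByCondition[T, Out](rows: List[Map[String, String]])(
      read: Read[T],
      f: T => Out,
  ): List[Out] =
    rows.map(row => f(read.read(row)))
}
```

Inverting control this way avoids having to expose an existentially-quantified result type to callers.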
* Move ExceptionOps from ledger-service/utils to //libs-scala/scala-utils
* extract connection and JdbcConfig from //ledger-service to independent db-utils module
Changelog_begin
Changelog_end
* update trigger service to use new libs-scala/db-utils
* missed changes for http-json-oracle
* minor cleanup based on comments
* fix breaking scala 2_12 build
* cleanup db-utils/BAZEL.md file
* correct JSON API upper date bound
As reported by @quid-agis. Fixes #10449.
CHANGELOG_BEGIN
CHANGELOG_END
* add tests
* test error messages
* more specific catch
* Add optional submission id to commands.proto
This allows propagating a submission id. If no id is submitted (the submission id is empty) then we generate a new submission id.
CHANGELOG_BEGIN
Add optional submission_id to the commands.proto.
CHANGELOG_END
* Update haskell bindings to include the submission id
* Code review - rename submission id extractor
* Code review - update comment and remove braces from if block
* Fix braces
* participant-integration-api: Encapsulate the initial configuration.
* participant-integration-api: Reduce usage of `LedgerConfiguration`.
* Inline `LedgerConfiguration` wherever it's used.
Most things don't need all its constituent parts; this reduces the
amount of unused properties.
CHANGELOG_BEGIN
- [Integration Kit] The ``LedgerConfiguration`` class has been
removed in favor of ``InitialLedgerConfiguration``. Its usage
has been changed accordingly, with the ``configurationLoadTimeout``
property becoming part of ``ApiServerConfig`` instead.
The default options provided by ``LedgerConfiguration`` have been
removed; you are now encouraged to come up with sensible values for
your own ledger. The ``Configuration.reasonableInitialConfiguration``
value may help.
CHANGELOG_END
* Correct the initial configuration submission delay for KV ledgers.
* kvutils: Mark supertype unused parameters as unused.
* kvutils: Extract out common configuration submission delays.
These values are specific to kvutils; other drivers should come up with
their own.
* configuration: Delete `NoGeneration`, as it's unused.
* [JSON-API] Move database-independent tests into a separate abstract test
The DatabaseStartupOps tests are now also run against Oracle.
In addition, a new test covers that table creation doesn't run into
name collisions for different table prefixes within the same database.
changelog_begin
changelog_end
* Add missing copyright headers
* Adjusting the version query slightly to fix the oracle db integration tests
* Rewrite the version query of oracle to fix it (hopefully)
* Test the prefix collision the other way around
* Put the table prefix also in front of the ensure_json constraint in the oracle queries
* Convert the table name of the jsonApSchemaVersion table to uppercase so it can be found in the list of the created tables in Oracle.
* Fix scala 2.12 collection compatibility compiler error by using :+
* Update ledger-service/db-backend/src/main/scala/com/digitalasset/http/dbbackend/Queries.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Use flatTraverse instead of flatMap to fix the compile error in Queries.scala
* Process the startup mode also in the tests & error if it failed
* Add collections compat import to fix scala 2.12 build failure
* Be confused about the build error prior, revert the change
* Move dropAllTablesIfExist a bit down to have a better declaration order
* Extract the tables vector combined with the version table into a seperate val
* Remove debug in Queries.scala logging
* Make the initDatabaseDdlsAndVersionTable val lazy, so we don't get a nullpointer exception
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
#9895 reintroduced the row_number over partition to eliminate duplicates
when querying the stakeholder-joined Oracle contract table. However, as
#10123 establishes, these duplicates cannot happen if we are querying
for only one party.
Therefore, we special-case the single-party query case, for which we
skip the partition + outer-query duplicate elimination steps.
CHANGELOG_BEGIN
CHANGELOG_END
* [JSON-API] Add option for setting a table prefix
changelog_begin
- [JSON-API] A table prefix can now be specified in the jdbc config via `tablePrefix=<YourFancyTablePrefix>`. This was added to allow running multiple instances of the json api without having collisions (independent of the chosen database).
changelog_end
* Extend the correct test in the oracle tests and simplify config override
* Fix formatting
* Fix postgres tests
* Fix bug in oracle query
* Fix typo
* Update ledger-service/db-backend/src/main/scala/com/digitalasset/http/dbbackend/Queries.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* add the table prefix to named constraints too
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* [JSON-API] Validate schema version & add minimal options for schema creation
* Add tests
* [JSON-API] Rework prior work and introduce the object SchemaHandling
* Add license headers & revert formatting changes
* Fix oracle build & scala 2_12 build
* correctly fix 2.12 build
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/SchemaHandlingResult.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* [JSON-API] Change case names & add backwards compat (but deprecate createSchema=true)
changelog_begin
- [JSON-API] Schema versioning was introduced for the db schema. Because of this the field `createSchema` in the jdbcConfig was deprecated. Via the field `start-mode` you can specify:
1. `create-only`: This is equal to the behaviour of `createSchema=true`, so the db schema is created and then the application terminates.
2. `start-only`: With this the schema version is checked; if no version is found, or a version is found that does not match the internal schema version, the application terminates. This is the replacement for `createSchema=false`.
3. `create-if-needed-and-start`: With this the schema version is checked; if no version is found, or a version is found that does not match the internal schema version, the schema is created/updated and the application proceeds with the startup.
4. `create-and-start`: Similar to the first option, but instead of terminating the application proceeds with the startup.
changelog_end
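The four modes map naturally onto an ADT; a hypothetical sketch, since the real `DbStartupMode` in http-json-cli may be shaped differently:

```scala
// Illustrative ADT for the documented start-mode values; names assumed.
sealed trait DbStartupMode
object DbStartupMode {
  case object CreateOnly extends DbStartupMode
  case object StartOnly extends DbStartupMode
  case object CreateIfNeededAndStart extends DbStartupMode
  case object CreateAndStart extends DbStartupMode

  def parse(s: String): Option[DbStartupMode] = s match {
    case "create-only"                => Some(CreateOnly)
    case "start-only"                 => Some(StartOnly)
    case "create-if-needed-and-start" => Some(CreateIfNeededAndStart)
    case "create-and-start"           => Some(CreateAndStart)
    case _                            => None
  }
}
```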
* Add info about deprecated createSchema field
* Fix build & improve logging
* Give suggestions on what option to take, to fix an outdated or missing schema
* Renaming of schemaHandling to DbStartupMode, added more tests & correct exit codes depending on how the db startup went
* Align name with sandbox
* Improve tests
* Only add new sql code which strictly uses the interpolation to align with other pr's & minimally adjust statements
* Minimize diff
* Add backwards compat test
* Fix scala 2.12 build & oracle integration tests build
* Update ledger-service/http-json-cli/src/main/scala/com/daml/http/Config.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Adjust code according to review request & tests & add a failure test
* If the call to initialize fails also log the error which was thrown
* Fix formatting
* Add missing collections compat import in integration tests
* Fix last build errors (scala 2.12) & use Either instead of Option for getDbVersionState
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* [JSON-API] Shutdown on startup if the db connection is invalid
changelog_begin
- [JSON-API] The json api now correctly shuts down at startup if the provided db connection is invalid in the case of `createSchema=false`
changelog_end
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Main.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Switch ContractDao to use a HikariCP connection pool
CHANGELOG_BEGIN
CHANGELOG_END
* missed conn pool changes for PostgresTest and ContractDaoBenchmark
* shutdown db access await threadpool and fix formatting
* custom pool sizes for Prod and Integration similar to DbTriggerDao
* cleanup contract dao connection pool
* simplify Dao shutdown
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* remove redundant config setting
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* fix code formatting issue, NonUnitStatments warning
* use doobie 0.9.0 Fragment-in-Fragment interpolation in json-api db-backend
Since tpolecat/doobie#1045 (and therefore 4ca02e0eb6) doobie has
supported interpolating fragments in fragments. We've used this feature
for several fragments written since #7618, but have left the ones
written before alone to use ++. Here we change that where it
meaningfully clarifies the SQL subexpression.
Note that this does not entail a Put or Write instance for Fragment.
You cannot abstract over Fragment and arbitrary interpolated data in
this way, because Fragments are not treated as positional parameters;
that would mean being able to put arbitrary SQL substrings in positional
parameters.
CHANGELOG_BEGIN
CHANGELOG_END
* scalafmt
* useless whitespace accidentally removed
* new projection for aggregated matched-queries
We can redo all the template-ID matches (and payload query matches, if
needed) in the SELECT projection clause to emit a list of matchedQueries
indices SQL-side.
CHANGELOG_BEGIN
CHANGELOG_END
* selectContractsMultiTemplate always returns one query
* factoring
* remove multiquery deduplication from ContractDao
* test simplest case of projectedIndex; remove uniqueSets tests
* remove uniqueSets
* add more test cases for the 3 main varieties of potential inputs
* remove uniqueSets tests that were commented for reference
* remove unneeded left-join
* scala 2.12 port
* port Map test order to 2.12
* use SortedMap so the Scala version tests are unified
- suggested by @cocreature; thanks
* Support deletion of a large number of contracts
fixes #10339
There are two orthogonal issues here:
1. scalaz’s `toVector` from the `Foldable[Set]` instance overflows the
stack. I’ve just avoided using it altogether.
2. Oracle doesn’t like more than 1k items in the IN clause. I chunked
the queries into chunks of size 1k to fix this.
changelog_begin
- [JSON API] Fix an error where transactions that delete a large
number of contracts resulted in stackoverflows with the PostgreSQL
backend and database errors with Oracle.
changelog_end
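The chunking fix for point 2 amounts to the following; the constant name and helper are illustrative, not the actual Queries code:

```scala
object DeleteChunks {
  // Oracle rejects IN lists with more than 1000 expressions (ORA-01795),
  // so split the contract ids and issue one DELETE ... WHERE id IN (...)
  // statement per chunk.
  val OracleInLimit = 1000

  def chunks[A](ids: Seq[A]): Seq[Seq[A]] =
    ids.grouped(OracleInLimit).toSeq
}
```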
* fix benchmark
changelog_begin
changelog_end
* Update ledger-service/db-backend/src/main/scala/com/digitalasset/http/dbbackend/Queries.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Update ledger-service/db-backend/src/main/scala/com/digitalasset/http/dbbackend/Queries.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* that's not how you foldA
changelog_begin
changelog_end
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
Printing stacktraces is considered an antipattern by some people and
gets flagged by VeraCode. While this shouldn’t actually be an issue
here, it is also not super useful, so dropping it is easier than
arguing that this is a false positive.
changelog_begin
changelog_end
changelog_begin
- [JSON-API] Connection attempts from the json api to the ledger now include the logging context; more specifically, the instance_uuid is included in each logging statement.
changelog_end
Was curious if there were any relevant performance improvements in
newer versions. Looks like the answer is no but we might as well
upgrade anyway.
changelog_begin
changelog_end
* daml-lf/data: Move ID aliases to `Ref` from _ledger-api-common_.
This allows us to remove a lot of dependencies on _ledger-api-common_,
and use these aliases in other places where that module is not used.
CHANGELOG_BEGIN
CHANGELOG_END
* participant-integration-api: Remove an unused import.
* http-json-oracle: Remove `ledger-api-common` as a dependency.
* bindings-rxjava: Remove a now-unused dependency.
* [DOCS] Add documentation for the JSON API metrics
changelog_begin
- [JSON-API] You can now find a section `Metrics` in the http-json api documentation explaining how to enable metrics and which metrics are available
changelog_end
* Fix rst build warnings
* Update docs/source/json-api/metrics.rst
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Adapt the metrics doc to state that it IS an exhaustive list, remove wrong copy-pasted text & add info about prometheus
* Update the legal values for the metrics reporter cli option
* shorten the description, the change prior was unnecessary ._.
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* [JSON-API] Log json request & response bodies in debug
This also re-adds logging of incoming requests and of the responses which are being sent out.
changelog_begin
- [JSON-API] Logging of request and response bodies is now available for appropriate requests if the chosen log level is DEBUG or lower. The bodies can then be found in the logging context of the request begin & end log messages (the field names in the ctx are "request_body" and "response_body").
changelog_end
* Move the http request throughput marking to the right place including the logging of the processing time
* Ensure that the processing time measuring is implemented consistently
* participant-state: Remove the `ParticipantId` alias.
This alias adds nothing. By using `Ref.ParticipantId` directly, many
packages can remove their dependency on the _participant-state_ package.
CHANGELOG_BEGIN
CHANGELOG_END
* participant-state: Remove the `PackageId` and `Party` aliases.
They don't add anything. Let's just use `Ref`.
* kvutils: Restore missing compat imports.
This PR extends the test to test a full matrix of different party &
template id numbers. Summarizing the results, as expected we index by
party but not by template id:
Benchmark (batchSize) (extraParties) (extraTemplates) Mode Cnt Score Error Units
QueryBenchmark.run 10000 1 0 avgt 5 0.255 ± 0.064 s/op
QueryBenchmark.run 10000 10 0 avgt 5 0.304 ± 0.245 s/op
QueryBenchmark.run 10000 100 0 avgt 5 0.296 ± 0.064 s/op
Benchmark (batchSize) (extraParties) (extraTemplates) Mode Cnt Score Error Units
QueryBenchmark.run 10000 0 1 avgt 5 0.277 ± 0.037 s/op
QueryBenchmark.run 10000 0 10 avgt 5 0.479 ± 0.301 s/op
QueryBenchmark.run 10000 0 100 avgt 5 2.131 ± 0.497 s/op
We know how to fix that so I’ll get on that.
changelog_begin
changelog_end
CHANGELOG_BEGIN
* [Integration Kit] Removed trace_context field from Ledger API and its bindings as we now have trace context propagation support via gRPC metadata. If you are constructing or consuming Ledger API requests or responses directly, you may need to update your code.
CHANGELOG_END
I haven’t found any conclusive information as to why ON COMMIT doesn’t
work incrementally but
https://docs.oracle.com/en/database/oracle/oracle-database/19/adjsn/json-query-rewrite-use-materialized-view-json_table.html#GUID-8B0922ED-C0D1-45BD-9588-B7719BE4ECF0
recommends that for rewriting (which isn’t what we do here but both
involve a materialized view on json_table).
Benchmarks:
before:
InsertBenchmark.run 1000 1 1000 avgt 5 0.327 ± 0.040 s/op
InsertBenchmark.run 1000 3 1000 avgt 5 0.656 ± 0.043 s/op
InsertBenchmark.run 1000 5 1000 avgt 5 1.034 ± 0.051 s/op
InsertBenchmark.run 1000 7 1000 avgt 5 1.416 ± 0.106 s/op
InsertBenchmark.run 1000 9 1000 avgt 5 1.734 ± 0.143 s/op
QueryBenchmark.run 1000 10 N/A avgt 5 0.071 ± 0.016 s/op
After:
Benchmark (batchSize) (batches) (numContracts) Mode Cnt Score Error Units
InsertBenchmark.run 1000 1 1000 avgt 5 0.217 ± 0.034 s/op
InsertBenchmark.run 1000 3 1000 avgt 5 0.232 ± 0.027 s/op
InsertBenchmark.run 1000 5 1000 avgt 5 0.226 ± 0.051 s/op
InsertBenchmark.run 1000 7 1000 avgt 5 0.225 ± 0.048 s/op
InsertBenchmark.run 1000 9 1000 avgt 5 0.232 ± 0.021 s/op
QueryBenchmark.run 1000 10 N/A avgt 5 0.080 ± 0.014 s/op
The difference in query times is just noise and changes across runs.
So we get the expected behavior of inserts being independent of the
total ACS size now. We could still explore if we gain something by
avoiding the materialized view to reduce constant factors but that’s
much less of an issue.
fixes #10243
changelog_begin
changelog_end
* LF: change type from Try to Either in archive module
This is the first part of restructuring errors in archive module.
This is part of #9974.
CHANGELOG_BEGIN
CHANGELOG_END
* Apply suggestions from code review
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* remove type alias
* apply stephen suggestion
* fix after rebase
* fix test
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* [JSON-API] Refactor Endpoints.scala to use path directives etc.
changelog_begin
changelog_end
* Don't warn that the ev param in toRoute is not used
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Endpoints.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Endpoints.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/Endpoints.scala
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Remove weird stuff to have nice stuff with the toRoute function
* Rename the toRoute function & remove comments as things are now clarified
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Add a benchmark for contract insertion in the JSON API
Unfortunately the results seem to match up with my initial benchmark
in #10234
Benchmark (batchSize) (batches) (numContracts) Mode Cnt Score Error Units
InsertBenchmark.run 1000 1 1000 avgt 5 336.674 ± 42.058 ms/op
InsertBenchmark.run 1000 3 1000 avgt 5 787.086 ± 223.018 ms/op
InsertBenchmark.run 1000 5 1000 avgt 5 1181.041 ± 317.017 ms/op
InsertBenchmark.run 1000 7 1000 avgt 5 1531.185 ± 341.060 ms/op
InsertBenchmark.run 1000 9 1000 avgt 5 1945.345 ± 436.352 ms/op
Score should ideally be more or less constant but it goes up very
significantly as the total ACS size grows.
fixes #10245
changelog_begin
changelog_end
* throughput -> average time
changelog_begin
changelog_end
* Add a ContractDao benchmark
This PR adds a simple benchmark that uses the ContractDao directly and
is therefore a bit more fine-grained and easier to analyze than the
gatling benchmarks. I expect we’ll want to extend this; it really
only tests queries on a reasonably large ACS filtered by party, but
let’s start somewhere.
fixes #10247
changelog_begin
changelog_end
* Factorize
changelog_begin
changelog_end