* mark some of dbbackend private
* fetchBy* functions for Queries
* shift in-memory filtering into the transaction stream
- removes irrelevant contracts from memory as soon as possible for in-memory
fetch by contract ID and by key
* push the three synchronous search varieties into new signatures
* replace the core findByContract* functions with DB-delegating versions
* remove the GADT equality and most of the explicit traversals
- compiles again, finally
* factoring
* ContractDao wrappers for fetchById and fetchByKey
* DB version of findByContractId
* DB version of findByContractKey
* Search is the split of ContractsService
* fix SQL for keys
* trade the typeclass for a sum type
- sealed instead of final because of the path dependency on ContractsService
instance
* number conversion is done already in ContractDao
* make fetch-by-key tests depend on proper number conversion for SQL
* add changelog
CHANGELOG_BEGIN
- [JSON API] ``/v1/fetch`` now uses the Postgres database, if configured, to
look up contracts by ID or key, except when querying a contract by ID without
its corresponding template ID. The fallback in-memory version of
``/v1/fetch`` is also now significantly more efficient for large datasets,
though still linear.
You may optionally re-create JSON API's database to take full advantage.
See `issue #7993 <https://github.com/digital-asset/daml/pull/7993>`__.
CHANGELOG_END
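The in-memory fallback mentioned above can be pictured as a filter over the contract stream; this is a minimal sketch with made-up names (`Contract`, `fetchByKey`), not the actual JSON API code:

```scala
// Sketch of the in-memory fallback: filter the contract stream by key as it
// arrives, dropping irrelevant contracts immediately instead of materializing
// the whole set first. Still linear, but holds at most one match in memory.
object InMemoryFetch {
  final case class Contract(templateId: String, key: Option[String], payload: String)

  def fetchByKey(
      contracts: Iterator[Contract],
      templateId: String,
      key: String): Option[Contract] =
    contracts.find(c => c.templateId == templateId && c.key.contains(key))
}
```

The DB-delegating versions replace this scan with a SQL lookup when Postgres is configured.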
* use search.search for search
- suggested by @cocreature; thanks
* add an index for contract key lookups
- suggested by @cocreature; thanks
* kvutils: Use ScalaPB to generate a Scala JAR for daml_kvutils.proto.
* Bazel: Delete the unused `da_java_binary` rule, and inline `_wrap_rule`.
* Bazel: Factor out Java/Scala protobuf class generation into a helper.
CHANGELOG_BEGIN
CHANGELOG_END
* daml-lf/archive: Use `proto_jars`.
* Bazel: Remove the visibility modifier from `proto_jars`.
It's too confusing. Just make everything public.
* daml-lf/archive: Push protobuf source tarballs into `proto_jars`.
* Bazel: Add comments to the various parts of `proto_jars`.
* daml-assistant: Do unpleasant things with `location` in Bazel.
* Upgrade akka-http to 10.2
Follow-up to #8048; I left this upgrade out to reduce noise and because
I wasn’t quite sure how involved it was going to be.
changelog_begin
changelog_end
* Reenable transparent HEAD requests
Apparently this is no longer on by default, but we depend on it in waitForHttpServer
changelog_begin
changelog_end
* Upgrade akka and akka-http
I was chasing an issue somewhere and thought this might affect it in
some way. It didn’t, but I might as well turn the upgrade into a PR.
changelog_begin
changelog_end
* Fix trigger service tests
changelog_begin
changelog_end
* Downgrade akka-http again
changelog_begin
changelog_end
* Upgrade akka-http again and fix tests
changelog_begin
changelog_end
* Cleanup trigger service
changelog_begin
changelog_end
Previously we didn’t build up the `OneAnd[Set, Party]` properly and
included the one party in the set as well. This was an issue if you
had the same party multiple times, most likely in readAs and
actAs (but not limited to that). This then led to SQL queries failing
since we tried to insert twice for a given party. This PR fixes that
by properly deduplicating the parties and adds a test for this.
changelog_begin
- [JSON API] Fix a regression introduced in SDK 1.7.0, where using a
party multiple times in the same JWT token (e.g., readAs and actAs)
broke database queries for that party. Note that there is never a
reason to include a party multiple times since actAs implies readAs.
changelog_end
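The deduplication described above can be sketched in plain Scala; `OneAndSet` and `dedupParties` are illustrative stand-ins (the real code uses scalaz's `OneAnd`), not the JSON API's actual implementation:

```scala
// Minimal sketch of deduplicating JWT parties into a OneAnd[Set, _]-like
// shape: one distinguished head party plus a tail set that must NOT contain
// the head, so (head, tail) never double-counts a party.
final case class OneAndSet[A](head: A, tail: Set[A])

object PartyDedup {
  type Party = String

  def dedupParties(parties: List[Party]): Option[OneAndSet[Party]] =
    parties match {
      case Nil          => None
      case head :: rest => Some(OneAndSet(head, rest.toSet - head))
    }
}
```

The `- head` is the fix: without it, a party listed in both actAs and readAs ends up in the set as well as the head, and the SQL layer tries to insert it twice.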
* Make HealthService public
DABL patches the REST adapter, so making this public helps them plug it
together with other things.
Also removes some garbage debug print which I forgot to remove
🤦
changelog_begin
changelog_end
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/HealthService.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* No logging
changelog_begin
changelog_end
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Add healthcheck endpoints to JSON API
This PR adds /livez and /readyz (following the k8s naming scheme) that can
be used as liveness and readiness checks. There isn’t much we can do
for liveness apart from showing that we can still respond to HTTP
requests, but readiness can be a bit more clever and check the ledger
connection as well as the database connection.
changelog_begin
- [JSON API] Add `/livez` and `/readyz` health check endpoints for
easier integration with k8s and other schedulers.
changelog_end
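The readiness logic described above can be sketched independently of the HTTP layer; `checkLedger` and `checkDb` are hypothetical probes, not the JSON API's real ones:

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// Illustrative sketch of a /readyz-style check: ready only if both the
// ledger connection and the database connection respond.
object ReadyCheck {
  def readyz(checkLedger: () => Future[Unit], checkDb: () => Future[Unit])(
      implicit ec: ExecutionContext): Future[Boolean] = {
    val probe = for {
      _ <- checkLedger()
      _ <- checkDb()
    } yield true
    probe.recover { case _ => false } // any failed probe means "not ready"
  }
}
```

A route handler would then map `true` to 200 and `false` to 503; /livez can unconditionally return 200, since answering at all demonstrates liveness.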
* I hate windows
changelog_begin
changelog_end
HTTP/1.1 has existed since 1999, so there isn’t really a good reason not
to use it. In fact, the docs recommend using Chunked in favor of
CloseDelimited.
changelog_begin
changelog_end
* Factor JWT verifier CLI flags
changelog_begin
changelog_end
* Use cli-opts in auth middleware
* Use cli-opts in sandbox cli
* Mark trigger service test as long
These have become prone to timeout on CI.
Increasing the size (timeout) is a temporary fix. A proper
solution is to a) not start a fresh sandbox per test case and b)
separate the in-mem/db and no-auth/auth configurations into
separate Bazel test targets.
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
This PR extends the failure tests with one that establishes a
websocket connection, kills the connection and reenables it at the
last offset.
changelog_begin
changelog_end
The previous restriction was both too lax and too strict now that we
have multi-party queries:
1. It allowed a party in `readAs` for command submissions, which just
fails on the ledger side. Changing this is technically breaking, but
only if you used a token that would have been rejected as soon as you
enabled auth, so that seems very reasonable to break.
2. It didn’t allow extra parties in `readAs`.
This PR switches to requiring exactly one party in `actAs` while
supporting multiple parties in `readAs`.
changelog_begin
- [JSON API] JWTs on command submissions can now contain extra parties
in the `readAs` field. `actAs` is still limited to a single party.
changelog_end
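The rule above (exactly one `actAs` party, any number of extra `readAs` parties) can be sketched as a small validation function; the names here are illustrative, not the JSON API's actual code:

```scala
// Sketch of the submission-token rule: exactly one actAs party, extra
// readAs parties allowed. The actAs party is dropped from the readAs set
// since actAs implies readAs.
object TokenValidation {
  type Party = String

  def validateForSubmission(
      actAs: List[Party],
      readAs: List[Party]): Either[String, (Party, Set[Party])] =
    actAs match {
      case single :: Nil => Right((single, readAs.toSet - single))
      case Nil           => Left("actAs must contain exactly one party, got none")
      case many          => Left(s"actAs must contain exactly one party, got ${many.size}")
    }
}
```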
* Add tests for connection failures in the JSON API
This PR adds some toxiproxy-based tests to see how the JSON API reacts
if the connection to the ledger is killed. There are a bunch of
inconsistencies in these tests, some of which we might want to
address and others we should at least document, but I’ll leave that
for future PRs.
changelog_begin
changelog_end
* Import HttpServiceTestFixture instead of prefixing
changelog_begin
changelog_end
* set doobie version to 0.9.2 and rerun maven pin
* port extractor and some of JSON API
* repin maven
* use doobie's own builder compatibility where required
* use probably bad derivations to supply Blockers where transactEC was required
- The point of using Blocker instead of ExecutionContext seems to be to
especially emphasize to API users that it isn't appropriate to use an
ExecutionContext with ordinary behavior. That is nevertheless what we have
done; it should probably change, but it compiles for now.
* fix fragment inspection test for internal restructuring
- This test depends on implementation details of Doobie, so naturally it must be
altered when those internals change. Fortunately, that has been made easier by
the changes in this upgrade.
* allow 256 blockers for navigator transaction blocker, like the global EC
* allow as many blockers as the pool size for trigger service
- The transactor shouldn't share ExecutionContext for transactions with the
caller, so we set up a new one based on configured pool size.
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
This PR shuffles things around a bit to make it easier to test and
adds `--query-store-jdbc-config-env` which specifies the name of an
environment variable containing the jdbc URL. The UX for this is
modeled after #7660.
Added a test for the different formats.
fixes #7667
changelog_begin
- [JSON API] The JDBC url can now also be specified via
`--query-store-jdbc-config-env` which reads it from the given
environment variable.
changelog_end
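The resolution order implied by the new flag can be sketched as follows; `JdbcConfigResolution` and its parameter names are illustrative, and the environment lookup is injected so the logic is testable:

```scala
// Sketch of resolving the query-store JDBC config either directly
// (--query-store-jdbc-config) or via --query-store-jdbc-config-env, which
// names an environment variable holding the JDBC URL.
object JdbcConfigResolution {
  def resolve(
      direct: Option[String],
      envVarName: Option[String],
      env: String => Option[String]): Either[String, String] =
    (direct, envVarName) match {
      case (Some(_), Some(_)) =>
        Left("specify only one of the direct and env-based JDBC options")
      case (Some(url), None) => Right(url)
      case (None, Some(name)) =>
        env(name).toRight(s"environment variable $name is not set")
      case (None, None) => Left("no JDBC configuration given")
    }
}
```

Reading the URL from an environment variable keeps credentials out of process listings and shell history, which is the UX motivation borrowed from #7660.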
* add silent_annotations option to da scala bazel functions
* use silent_annotations for several scala targets
* use silencer_plugin instead when the lib isn't used
* use silent_annotations for several more scala targets
* use silencer_lib for strange indirect requirement for running tests
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* silent_annotations support for scaladoc
* Support multi-party reads on the JSON API
Given that multi-party reads aren’t going away and we’re instead doubling down
on them and adding multi-party writes as well, the JSON API needs to
support this. This PR only implements the read side (since the ledgers
do not yet support the write side).
This does not deviate from the JSON API’s approach of inferring the
parties from the token; we just don’t error out anymore when
more than one party is passed.
changelog_begin
changelog_end
* Apply suggestions from code review
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Remove dependency on doobie_postgres from db-backend
changelog_begin
changelog_end
* Fix offset update
changelog_begin
changelog_end
* Use nonempty sets for parties
changelog_begin
changelog_end
* Fix updateOffset under concurrent transactions
changelog_begin
changelog_end
* Add tests for multi-party websocket queries and fetches
changelog_begin
changelog_end
* fmt
changelog_begin
changelog_end
* Fix perf tests
changelog_begin
changelog_end
* Cleanup
changelog_begin
changelog_end
* Update ledger-service/http-json/src/main/scala/com/digitalasset/http/dbbackend/ContractDao.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Move ParsePayload instances, thanks Stephen!
changelog_begin
changelog_end
* More unsubst
changelog_begin
changelog_end
* Fix off by 1 error
changelog_begin
changelog_end
* Remove redundant type annotation
changelog_begin
changelog_end
* fmt
changelog_begin
changelog_end
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
With the introduction of the standalone JAR, we cannot rely on the
assistant anymore to pass the default logback config. Users can still
override the logback config with `-Dlogback.configurationFile` if they
need something else but this provides a more sensible default logging
config than seeing a ton of debug logs from netty.
changelog_begin
changelog_end
* Add http-json-perf daily cron job
changelog_begin
changelog_end
* commenting out other jobs so we can manually test the new one
* commenting out other jobs so we can manually test the new one
* Fix the shell script
* Fixing the gs bucket, `gs://http-json bucket does not exist`
* uncomment the other jobs
* timestamp from git log
* get rid of DAR copying
* comment out the other jobs, so we can test it
* uncomment the other jobs
* Open sourcing gatling statistics reporter
Running gatling scenarios with `RunLikeGatling` from libs-scala/gatling-utils
* cleaning up
* Replace "\n" with System.lineSeparator
so the formatting test cases pass on windows
* Testing DurationStatistics Monoid laws
* Renaming RunLikeGatling -> CustomRunner
This PR uses the new data structure introduced in #7220.
Additionally, this fixes the `Value` `Equal` instance, which
considered <a: x, b: y> different from <b: y, a: x>.
CHANGELOG_BEGIN
CHANGELOG_END
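The equality fix above amounts to comparing records up to field order; this is a minimal model with an illustrative `Record` type, not the actual daml-lf `Value`:

```scala
// Sketch of field-order-insensitive record equality: <a: x, b: y> should
// equal <b: y, a: x>. Comparing as a Map erases field order. (Assumes field
// names are unique within a record, as they are for daml-lf records.)
final case class Record(fields: List[(String, String)]) {
  def sameFields(that: Record): Boolean =
    this.fields.toMap == that.fields.toMap
}
```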
* Adding `uniqueModuleEntity`
and making sure that generated domain Template IDs do not
have module entity duplicates, so resolution should work
with no problem.
changelog_begin
changelog_end
* cleaning up
* Run gatling scenario from the perf runner main
reports are disabled for now; we get a class-not-found error
when generating them
changelog_begin
changelog_end
There are two sources of flakiness, and I’ve seen both on CI:
1. We can get more than one offset at the beginning if things are too
slow. This is addressed by just filtering those out.
2. The stream completes as soon as the input is closed. This is
addressed by keeping the stream open and closing it with `take`.
Point 2 is a problem for all tests, see #7293, but I’ll leave the
other tests for separate PRs (I’ve also never seen them flake).
You can observe the failures locally if you add a `Thread.sleep` between
creating the stream future and sending the commands.
changelog_begin
changelog_end
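The two fixes above can be modeled on plain iterators (the real code uses akka-streams); `normalize` and the collapse-to-last-offset choice are illustrative assumptions, not the actual test code:

```scala
// Sketch of the test fix: tolerate a run of duplicate leading offsets by
// collapsing it to the last one, and close the otherwise open-ended stream
// deterministically with take instead of relying on input completion.
object StreamTestFix {
  sealed trait Msg
  final case class Offset(value: String) extends Msg
  final case class Event(payload: String) extends Msg

  def normalize(msgs: Iterator[Msg], expectedEvents: Int): List[Msg] = {
    val (leading, rest) = msgs.span(_.isInstanceOf[Offset])
    // leading is fully consumed before rest is touched, as Iterator.span requires
    leading.toList.lastOption.toList ++ rest.take(expectedEvents).toList
  }
}
```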
* ledger-api-client: `maxInboundMessageSize` -> `maxInboundMetadataSize`.
CHANGELOG_BEGIN
- [Scala Bindings] Rename a field in the ``LedgerClientConfiguration``
to ``maxInboundMetadataSize``, to match the Netty channel builder. It
was incorrectly named ``maxInboundMessageSize``, which is
a different channel property that configures the maximum message size,
not the header size.
CHANGELOG_END
* ledger-api-client: Introduce a `maxInboundMessageSize` config property.
We use this a lot; easier if it's in the configuration.
CHANGELOG_BEGIN
- [Scala Bindings] Replace the
``LedgerClientConfiguration.maxInboundMessageSize`` property with a
new one that represents the maximum size of the response body.
CHANGELOG_END
fix JSON API multikey stream
In the current state, the JSON API only handles multiple keys _from
different templates_. This makes it work for multiple keys from the same
template too.
Extracted from #7066 with the following changes:
- Use of a mutable `HashSet` to test for keys, because perf.
- Addition of a test at the JSON API level.
CHANGELOG_BEGIN
- [JSON API] Fix a bug where streaming multiple keys could silently
ignore some of the given keys.
CHANGELOG_END
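The fix above boils down to collecting all requested (template, key) pairs into one set instead of keeping one key per template; this sketch uses illustrative names, with a mutable `HashSet` for membership tests as the extracted change describes:

```scala
import scala.collection.mutable

// Sketch of the multi-key fix: ALL requested (templateId, key) pairs go into
// one HashSet, so multiple keys for the SAME template are all retained
// (previously, later keys could silently shadow earlier ones).
object MultiKeyMatch {
  type TemplateId = String
  type Key = String

  def matcher(requested: Seq[(TemplateId, Key)]): ((TemplateId, Key)) => Boolean = {
    val keys = mutable.HashSet.empty[(TemplateId, Key)]
    keys ++= requested
    k => keys.contains(k)
  }
}
```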
* apply @cocreature's patch
https://gist.github.com/cocreature/d35367153a7331dc15cca4e5ea9098f0
* fix fmt
* reintroducing the main
* Introducing `ledger-service/http-json-testing`
* cleaning up
* Starting sandbox and json-api from perf-test main
changelog_begin
changelog_end
* Deprecate noop `--application-id`
changelog_begin
[JSON API]
Hiding and deprecating the `--application-id` command-line option. The JSON API never used it:
it is required to instantiate LedgerClientConfiguration but is not used for any command submission.
The JSON API uses the application ID specified in the JWT. See #7162
changelog_end
* removing further usage of noop applicationId
* a bit of explanation of what this is for
Apparently `[[]]` links don't work for external resources. I wanted to
turn the first reference into a link too, but it contains too many
weird characters.
CHANGELOG_BEGIN
CHANGELOG_END
* Adding `package-max-inbound-message-size`
this is to allow separate configuration settings for command submission
and package management ledger clients
* Fixing formatting
* Updating docs
changelog_begin
[JSON API] Adding `--package-max-inbound-message-size` command line option.
Optional max inbound message size in bytes used for uploading and downloading package updates. Defaults to the `max-inbound-message-size` setting.
changelog_end
* Addressing code review comments
fixes #2506
Judging from the issue, this was originally introduced to work around a
bug. I couldn’t actually track down what that bug was, but at this
point the two are identical, so there is no point keeping this around.
changelog_begin
changelog_end
* Introducing `TickTriggerOrStep` ADT, filtering out `TickTrigger`s preceding the initial ACS retrieval
changelog_begin
[JSON API] Filter out offset ticks preceding the ACS events block. See issue: #6940.
changelog_end
* Cleaning up a bit
* Do not emit offset tick unless we know the real offset
wait for LiveBegin message
* Make WebsocketConfig configurable
* Adding offset tick integration tests
reverting WebsocketService to 05d49b37c3 makes these tests fail
* cleaning up
* Refactoring `emitOffsetTicksAndFilterOutEmptySteps`
keep offset instead of StepAndError with offset
* factor --address, --http-port, --port-file options from http-json to cli-opts
- enabling reuse in trigger service
* use cli-opts for address and http-port options in Trigger service
* mark ServiceConfig and some defaults private
* use --address option to set up server
* document Setter
* test --address option is parsed
* missing (c) headers
* add changelog
CHANGELOG_BEGIN
- [Trigger Service] Accepts a new ``--address`` option to listen for HTTP connections on
interfaces other than localhost, such as ``0.0.0.0`` for all addresses.
See `issue #7090 <https://github.com/digital-asset/daml/pull/7090>`__.
CHANGELOG_END
* AsyncQueryNewAcs scenario
sync query with totally new ACS every time
changelog_begin
changelog_end
* cleanup
* cleanup
* fixing numbers
* fixing numbers
* adding groups to self-document the scenario
* Archive and then Create
* cleanup
* silence archive and create
* with ACS of 5000 it takes too long to run
* Perf test scenario for query with variable ACS, WIP
* WIP
* change ACS with every query
exercise a choice + create a new contract to keep ACS size the same
* change ACS with every query
running exercise and create in parallel with the query
* exercise Archive instead of Transfer
* Adding copyright header
* Thanks @S11001001
* improve error message on failed JSON parsing
Fixes #6971.
Interestingly, all the other cases in that block already had useful
feedback; not sure why this one was missing.
CHANGELOG_BEGIN
CHANGELOG_END
* add tests
* Moving `Statements.discard` from //ledger-server/http-json into //libs-scala/scala-utils
changelog_begin
changelog_end
* Add new module to the published artifacts
* `com.daml.scalautil` instead of `com.daml.scala.util`
@S11001001: That's because if this is on the classpath and you import com.daml._,
you have a different scala in scope than the one you expect.
* daml-on-sql: Pull out a new `Main` object that wraps sandbox-classic.
CHANGELOG_BEGIN
CHANGELOG_END
* daml-on-sql: Fail if a JDBC URL is not provided or not for PostgreSQL.
* sandbox-classic: Rename the conformance test H2 database.
* daml-on-sql + sandbox-classic: Report configuration errors cleanly.
This means letting `ProgramResource` catch the errors, log, and exit.
* daml-on-sql: Change the name logged on startup.
* daml-on-sql: Change the default participant ID.
* sandbox-common: Give the ledger name its own tagged string type.
* sandbox-classic: Generate random ledger IDs using the ledger name.
* daml-on-sql: Remove the banner, replacing it with a blank line.
* daml-on-sql: Enable strong seeding by default.
And weak seeding in the conformance tests.
* sandbox-classic: Move the ledger name to a separate parameter.
It's not really configurable.
* sandbox-classic: Move LedgerName from sandbox-common.
* daml-on-sql: Remove "-participant" from the participant ID.
* daml-on-sql: Use `Name` where possible.
* daml-on-sql: Make the ledger ID mandatory.
* Revert "sandbox-classic: Move LedgerName from sandbox-common."
This reverts commit 0dad1584a7.
* daml-on-sql: Print "DAML-on-SQL" in the CLI help, not "Sandbox".
* daml-on-sql + sandbox + sandbox-classic: Split out custom CLI parsing. (#6846)
* participant-state: Simplify naming the seeding modes.