This means that if a websocket query is initiated with an explicit offset, and the ledger returns an error reporting that the offset has been pruned, the websocket query will be terminated with an error containing a status of `410` (Gone) and a message indicating that the offset has been pruned.
The client will need to react to this by clearing any state that had been incrementally built from updates, and refreshing with the current version of the ledger state.
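As a minimal sketch of the client-side reaction described above — the class, method names, and exact error-message shape here are hypothetical, not part of the JSON API:

```java
// Hypothetical client-side sketch: the class, the method names, and the
// exact error-message shape are illustrative, not part of the JSON API.
public class PrunedOffsetHandler {
    /** Crude check for the error described above: status 410 (Gone) with a
     *  message indicating the offset has been pruned. A real client would
     *  parse the JSON error message properly instead of string matching. */
    static boolean isPrunedOffsetError(String wsMessage) {
        return wsMessage.contains("\"status\":410") && wsMessage.contains("pruned");
    }

    public static void main(String[] args) {
        String err = "{\"errors\":[\"Query offset has been pruned\"],\"status\":410}";
        if (isPrunedOffsetError(err)) {
            // Clear any state built incrementally from updates, then
            // resubscribe without an offset to refresh from the current ACS.
            System.out.println("clearing local state and resubscribing");
        }
    }
}
```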
This is related to https://github.com/digital-asset/daml/issues/13788 and https://github.com/digital-asset/daml/issues/13680
As discussed there, the existing akka APIs do not support setting the websocket close code and reason. The approach taken here is to return a single message with a custom error and close the websocket.
This change guarantees that akka does not appear in maven_install_2.13.json:
we add a new maven repo, deprecated_maven
we add gatling to deprecated_maven
we reactivate gatling-utils and http-json-perf
we move gatling-utils into ledger-service/http-json-perf/ (we do not want new components to use gatling)
* Automated renames by bash script
This commit exclusively contains changes made by the bash script.
The bash script itself is included in the pull request.
* Manual pekko migration changes
* adapt fully qualified name references
* adapt pekko package declarations
* adapt bazel files with dependency changes
* adapt canton pekko lib shade_rule
* adapt logger configuration declarations
* pin maven dependencies
* revert incorrect changes by script to compatibility module
Workarounds for further TODOs:
* disable http-json-perf and libs-scala/gatling-utils modules to maintain clean pekko dependencies (without akka)
* disable GraphQLSchemaSpec test (sangria library needs to be upgraded)
* Formatting
In doing so, we disable the automatic production of the extra reports from gatling results.
These can be re-enabled in future if they're considered valuable.
* pin dependencies to json and add missing dep
* fix cyclic dep
* remove unused dep
* add missing dep to //ledger-api/testing-utils:testing-utils
* remove unused dep in //ledger/ledger-api-auth:ledger-api-auth
* remove more unused deps
* more dep fixes
* yet more dep fixing
* more fixing..
* more of the same
* hopefully the last deps to fix
* Bump the version of protobuf and fix everything that depends on it. Took shortcuts that I need to fix in a next commit, but would like to run the CI on this now that it compiles
* don't error out in the grpc-haskell patch
* remove obsolete patch
* patch absl to compile on mingw
* Add a patch to recognize the compiler
* Define _DNS_SD_LIBDISPATCH for macOS gRPC
* bump netty_tcnative_version according to https://github.com/grpc/grpc-java/blob/master/SECURITY.md#netty
* pin maven deps
* Fix macos linking errors 'dyld[xxx]: missing symbol called'
* Skip Darwin frameworks in package-app.sh
* pin stackage packages
* pin stackage windows deps
* use the netty version agreed on
* bump the windows global cache to try and debug the upb issue
* restart the CI after timeout
* clean up
* disable failing tests for now
* comment out unused code
* reset the windows machine name to 'default'
---------
Co-authored-by: Moisés Ackerman <6054733+akrmn@users.noreply.github.com>
* Implement vetDar and unvetDar
Blocked by canton not returning uploaded dars
* Upgrades testing infra/IT
* Fix HttpServiceTestFixture
* Fix some tests
* Fix cantonRunner for windows
* Add delay after vet actions so topology transactions can land
* Implement invalid data upgrades tests
* Add temporary internal setContractUpgradingEnabled flag to daml-script
* Switch to setProvidePackageId
* Write choice and multi-participant tests
* Formatting
* Remove unused import
* Address reviews
* Update errors
* Fix canton runner for windows
* Address review comments
* Add new proto to artifacts
* avoid pushing canton admin proto as maven artifact (#17742)
---------
Co-authored-by: Remy <remy.haemmerle@daml.com>
* Adapt JSON API write path to the new explicit disclosure Ledger API interface
* Address review comments
* Switch to vanilla Base64 for createdEventBlob instead of Base64Url
* update TypeScript bindings of DisclosedContract to use the createdEventBlob field instead of payload, payloadBlob and metadata, and short-circuit a test which depends on canton populating the createdEventBlob field
* get TypeScript integration tests to use transaction service to get created_event_blob data
---------
Co-authored-by: = <=>
* Explicit disclosure based on blobs only
* cosmetic changes
* Changes post-review
* remove buf check suppression
* Silence deprecation warnings
* more silencing of deprecation warnings
* Changes after recent round of reviews
JSON API will be served from an HTTPS endpoint if the config includes a `server.https` section.
Note: https://digitalasset.atlassian.net/browse/LT-37 tracks enabling the unit test on Windows and macOS systems.
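As an illustration of what such a config section might look like — only the `server.https` section name is given above, so all key names below are hypothetical, not the actual configuration schema:

```hocon
// Illustrative only: the key names under server.https are hypothetical.
server {
  address = "127.0.0.1"
  port = 7575
  https {
    // Paths to TLS key/certificate material (hypothetical key names).
    cert-chain-file = "server.crt"
    private-key-file = "server.pem"
    trust-collection-file = "ca.crt"
  }
}
```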
Using `java.net.http.WebSocket` seemed like the simplest interface to be able to verify the closed status from the outside of the system. We looked into akka's `WSProbe::expectCompletion` but wiring that up would have required building our own `WebSocketService` and all its dependencies.
Fixes termination of WebSocket streams when the client sends a close-frame. The problem was that upstream completion (i.e. termination on the client-side of WebSocket streams) does not propagate to infinitely-running substreams. This PR propagates the upstream completion explicitly to all substreams using a kill-switch.
- Introduces a new major version, "2", in the daml_lf proto
- Adds new major versions to the compiler and the engine
- Updates all code that assumes only one major version
- Updates all code that assumes only one dev version
* Fix a probable typo in //daml-fl/encoder/testing-dar-*
* apply TODOs in bazel files
* remove obsolete comments in bazel files
* use 'default' instead of 'latest' for targets relying on 'latest' in order to ensure interfaces are supported
* Update to rules_haskell v0.16
* Update comments re bazel patches
* clean up bazel overrides
* Upgrade to Bazel 5.2.0
* Remove '--distinct_host_configuration=false'
* Update buildifier to 6.3.2
* Suffix macos and ubuntu caches with yyyymm
* bump windows cache to v14
* [REVERTME] bump linux/macos/darwin timeout to 4h
If the ledger has been pruned more recently than the last cached copy, then attempting to fetch the changes since that last offset will fail, rendering the relevant template(s) unqueryable. This PR detects that condition, clears the cache for the relevant template, and queries again, which refreshes the cache with a fresh copy of the ACS for that template and serves the query from that.
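The detect-clear-refetch flow above can be sketched as follows; all names here are illustrative, not the actual implementation:

```java
import java.util.function.Supplier;

// Hypothetical sketch of the recovery logic; all names are illustrative.
public class AcsCacheRefresh {
    static class PrunedOffsetException extends RuntimeException {}

    /** Try an incremental fetch from the cached offset; if the ledger reports
     *  that offset as pruned, clear the per-template cache and re-fetch the
     *  full ACS so the query can be served from a fresh copy. */
    static String queryTemplate(Supplier<String> incrementalFetch,
                                Supplier<String> fullAcsFetch,
                                Runnable clearCache) {
        try {
            return incrementalFetch.get();
        } catch (PrunedOffsetException e) {
            clearCache.run();          // drop the stale cached copy
            return fullAcsFetch.get(); // refresh the cache with a full ACS
        }
    }

    public static void main(String[] args) {
        String result = queryTemplate(
            () -> { throw new PrunedOffsetException(); }, // offset was pruned
            () -> "fresh-acs",
            () -> System.out.println("cache cleared"));
        System.out.println(result); // prints "fresh-acs"
    }
}
```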
I also made some usability tweaks around running canton-ee tests, to help improve the dev experience for failures I came across while trying to run them. Specifically:
* Use `--config=canton-ee` to opt into the tests which require canton-ee
* When downloading that EE from artifactory, provide better diagnostics if the required auth isn't set up.
* [LF] make Timestamp parsing consistent between Java 11 and Java 17
Between Java 11 and Java 17 there is one bug fix on Instant.parse
that expands the range of values that can be parsed into an
Instant. See https://bugs.openjdk.org/browse/JDK-8166138
Daml-LF happens to use Instant.parse to parse a string into a
Daml-LF timestamp, and we observe different behavior when running
Daml on Java 11 and Java 17.
Additionally, make explicit that conversion from a java Instant or
string may drop nanoseconds, i.e. we create a lenient version that may
drop the significant nanoseconds (the legacy behavior) and a strict
version that rejects instants/strings that cannot be converted without
loss of precision.
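The lenient vs strict conversions described above can be sketched as follows; the method names are illustrative, not the actual Daml-LF API. Daml-LF timestamps have microsecond resolution, so sub-microsecond digits must be dropped or rejected:

```java
import java.time.Instant;

// Sketch of the lenient vs strict conversions; method names are illustrative,
// not the actual Daml-LF API. Daml-LF timestamps have microsecond resolution.
public class TimestampConversion {
    /** Lenient (legacy) conversion: silently truncates sub-microsecond digits. */
    static long toMicrosLenient(Instant i) {
        return Math.addExact(Math.multiplyExact(i.getEpochSecond(), 1_000_000L),
                             i.getNano() / 1_000);
    }

    /** Strict conversion: rejects instants that cannot be represented
     *  without loss of precision. */
    static long toMicrosStrict(Instant i) {
        if (i.getNano() % 1_000 != 0) {
            throw new IllegalArgumentException("loses sub-microsecond precision: " + i);
        }
        return toMicrosLenient(i);
    }

    public static void main(String[] args) {
        Instant t = Instant.ofEpochSecond(0, 1_500); // 1500 ns past the epoch
        System.out.println(toMicrosLenient(t)); // prints 1 (the 500 ns are dropped)
        try {
            toMicrosStrict(t);
        } catch (IllegalArgumentException e) {
            System.out.println("strict conversion rejected " + t);
        }
    }
}
```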
* Pruning needs to be retried, with artificial activity added, until the safe-offset has advanced far enough for it to succeed.
* The "max deduplication duration" needs to be dropped, otherwise pruning cannot be done for at least the default of 168h.
* The "reconciliation interval" needs to be lowered. This is a dynamic config, so we set it via a bootstrap script. The change is not effected immediately, but asynchronously some time after startup. Lowering this enables the safe-offset to catch up faster.
* We need to ensure the relevant tests are only enabled when testing against an Enterprise edition of Canton.
Contributes to https://digitalasset.atlassian.net/browse/LT-17
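The retry loop from the first bullet above can be sketched as follows; the names are hypothetical test-infrastructure helpers, not the actual code:

```java
import java.util.function.Supplier;

// Illustrative sketch (hypothetical names) of the retry loop: pruning at an
// offset only succeeds once the participant's safe-offset has advanced past
// it, so we generate artificial ledger activity and retry until it does.
public class PruneWithRetry {
    static int pruneWithRetry(Runnable generateActivity,
                              Supplier<Boolean> tryPrune,
                              int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            generateActivity.run();             // e.g. create and archive a contract
            if (tryPrune.get()) return attempt; // safe-offset caught up: success
        }
        throw new RuntimeException("pruning did not succeed after " + maxAttempts + " attempts");
    }

    public static void main(String[] args) {
        int[] safeOffset = {0};
        int attempts = pruneWithRetry(
            () -> safeOffset[0]++,    // artificial activity advances the safe-offset
            () -> safeOffset[0] >= 3, // prune succeeds once it has advanced enough
            10);
        System.out.println("pruned after " + attempts + " attempts"); // prints 3 attempts
    }
}
```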
* added new endpoint to refresh the cache
* formatting
* returning old logic
* added logic to update the cache with a specific offset provided in the body
* formatting
* addressed comments
* formatting
* formatting
* formatting
* Return unit instead of List for processing refresh
* last changes on logic
* formatting
* simplify conversion
* comments addressed
According to current documentation, only one of these may be set, but we can currently return both.
Also in this case we currently return a status code of 501, which is not one of the documented status codes. This PR switches that to 500 instead.
https://docs.daml.com/json-api/index.html#http-status-codes
* reduce fork count, measurement and warmup iterations, and extra parameters which had a multiplicative effect on total work done
* fix db connection setup - the combination of annotations and inheritance meant it was trying to set up the trial twice, causing Postgres benchmarks to fail
* log when an error occurs during test setup
* add logback resources to the benchmarks, to enable configuration of logging, and avoid dumping all the debug logs by default
With these changes, on my 20 core 64G linux laptop, the benchmarks now all run in under 6 mins.
Without this change, it took over 9 hours in total and none of the Postgres benchmarks were successful.
The tests are not comprehensive.
We demonstrate that if the Oracle payload index is on, the names of fields with type Int may not exceed 251 chars.
For other configurations, Int and Text fields can have names of at least 512 chars.
Also updated the naming and explanation of the guards which disable some tests when using Oracle with JSON index.
Oracle conflates empty strings with NULL, which breaks comparison operations against empty strings.
This change makes Oracle-backed queries have the behaviour you'd expect when comparing empty strings, in line with what we see on Postgres and in-memory backed queries.
We had to take a bit of care to ensure it worked irrespective of whether the JSON index was enabled.
Fixes https://digitalasset.atlassian.net/browse/LT-24
* remove sandbox-on-x project
update bazel readme
update release artifacts
comment out last remaining SoX test
remove ledger-runner-common
remove participant-state-kv-errors
remove recovering-indexer-integration-tests
remove participant-integration-api
update doc pages
cleanup ledger-api-auth usage
remove participant-state
fix build
fix build
clean up ledger-api-common part I
clean up ledger-api-common part II
clean up ledger-api-common part III
remove ledger/metrics
clean up ledger-api-health and ledger-api-domain
* remove ledger-configuration and ledger-offset
* remove ledger-grpc and clean up participant-local-store
* reshuffle few more classes
* format
Previously, when using an Oracle-based ACS cache, JSON queries such as
```
{
...
"query": {
"si_detail": {
"si_input_no": {
"%gt": "foo"
}
}
}
}
```
were resulting in SQL queries containing
```
JSON_EXISTS(payload, '$."si_detail"."si_input_no"?(@ > $X)' PASSING ? AS X)
```
which works when the payload json index is disabled, but when the index is enabled it results in an error.
We can avoid this by passing in the literal value rather than using a query parameter when the index is enabled, e.g.
```
JSON_EXISTS(payload, '$."si_detail"."si_input_no"?(@ > $X)' PASSING 'foo' AS X)
```
Fixes https://github.com/digital-asset/daml/issues/15006
And contributes towards https://digitalasset.atlassian.net/browse/LT-14