* Explicit disclosure based on blobs only
* cosmetic changes
* Changes post-review
* remove buf check suppression
* Silence deprecation warnings
* more silencing of deprecation warnings
* Changes after recent round of reviews
The JSON API will be served from an HTTPS endpoint if the config includes a `server.https` section.
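For illustration, a config enabling this might look like the following (the keys inside `https` are assumptions here, not the documented schema):
```
server {
  address = "127.0.0.1"
  port = 7575
  https {
    # illustrative key names; consult the service docs for the actual schema
    cert-chain-file = "server.crt"
    private-key-file = "server.pem"
    trust-collection-file = "ca.crt"
  }
}
```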
Note: https://digitalasset.atlassian.net/browse/LT-37 tracks enabling the unit test on Windows and macOS systems.
Using `java.net.http.WebSocket` seemed like the simplest interface for verifying the closed status from outside the system. We looked into akka's `WSProbe::expectCompletion`, but wiring that up would have required building our own `WebSocketService` and all its dependencies.
Fixes termination of WebSocket streams when the client sends a close-frame. The problem was that upstream completion (i.e. termination on the client side of WebSocket streams) does not propagate to infinitely running substreams. This PR propagates the upstream completion explicitly to all substreams using a kill-switch.
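A minimal sketch of the technique with akka-streams (shapes and names are illustrative, not the PR's actual code):
```
import scala.concurrent.ExecutionContext.Implicits.global
import akka.NotUsed
import akka.stream.{KillSwitches, SharedKillSwitch}
import akka.stream.scaladsl.{Flow, Source}

// one shared kill switch threaded through every substream
val killSwitch: SharedKillSwitch = KillSwitches.shared("ws-substreams")

// hypothetical infinite per-key substream
def substream(key: String): Source[String, NotUsed] =
  Source.repeat(key).via(killSwitch.flow)

val wsFlow: Flow[String, String, NotUsed] =
  Flow[String]
    .watchTermination() { (_, done) =>
      // completes when the client side of the WebSocket terminates,
      // e.g. on a close-frame; shutting the switch ends all substreams
      done.onComplete(_ => killSwitch.shutdown())
      NotUsed
    }
    .flatMapMerge(breadth = 16, substream)
```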
- Introduces a new major version, "2", in the daml_lf proto
- Adds new major versions to the compiler and the engine
- Updates all code that assumes only one major version (illustrated in the sketch below)
- Updates all code that assumes only one dev version
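To illustrate the kind of single-version assumption being removed (the types here are stand-ins, not daml-lf's actual API):
```
// hypothetical stand-ins for daml-lf's version types
sealed trait MajorVersion
case object V1 extends MajorVersion
case object V2 extends MajorVersion // newly introduced

// code that previously hard-wired daml_lf_1 now dispatches on the version
def protoPackage(v: MajorVersion): String = v match {
  case V1 => "daml_lf_1"
  case V2 => "daml_lf_2"
}
```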
* Fix a probable typo in //daml-lf/encoder/testing-dar-*
* apply TODOs in bazel files
* remove obsolete comments in bazel files
* use 'default' instead of 'latest' for targets relying on 'latest' in order to ensure interfaces are supported
* Update to rules_haskell v0.16
* Update comments re bazel patches
* clean up bazel overrides
* Upgrade to Bazel 5.2.0
* Remove '--distinct_host_configuration=false'
* Update buildifier to 6.3.2
* Suffix macos and ubuntu caches with yyyymm
* bump windows cache to v14
* [REVERTME] bump linux/macos/darwin timeout to 4h
If the ledger has been pruned more recently than the last cached copy, attempting to fetch the changes since that last offset will fail, rendering the relevant template(s) unqueryable. This PR detects that condition, clears the cache for the relevant template, and queries again, which refreshes the cache with a fresh copy of the ACS for that template and serves the query from it.
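A sketch of the retry shape described above (all names here are hypothetical, not the query store's actual API):
```
import scala.concurrent.{ExecutionContext, Future}

final case class PrunedOffsetError(offset: String) extends RuntimeException

// If fetching changes since the cached offset fails because the ledger was
// pruned past that offset, drop the cached ACS for the template, rebuild it
// from a fresh snapshot, and serve the query from the new cache.
def queryWithPruningRecovery[A](
    templateId: String,
    fetchSinceCachedOffset: String => Future[A],
    clearCache: String => Future[Unit],
    rebuildCacheAndQuery: String => Future[A],
)(implicit ec: ExecutionContext): Future[A] =
  fetchSinceCachedOffset(templateId).recoverWith { case PrunedOffsetError(_) =>
    for {
      _ <- clearCache(templateId)
      result <- rebuildCacheAndQuery(templateId)
    } yield result
  }
```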
I also made some usability tweaks around running canton-ee tests, to improve the dev experience for failures I came across while trying to run them. Specifically:
* Use `--config=canton-ee` to opt into the tests which require canton-ee
* When downloading that EE from artifactory, provide better diagnostics if the required auth isn't set up.
* [LF] make Timestamp parsing consistent between Java 11 and Java 17
Between Java 11 and Java 17 there was a bug fix to Instant.parse
that expands the range of values that can be parsed into an
Instant; see https://bugs.openjdk.org/browse/JDK-8166138.
Daml-LF happened to use Instant.parse to parse a string into a
Daml-LF timestamp, so we observed different behavior when running
Daml on Java 11 versus Java 17.
Additionally, make explicit that conversion from a Java Instant or
string may drop nanoseconds: we create a lenient version that may
drop significant nanoseconds (the legacy behavior) and a strict
version that rejects an instant/string that cannot be converted
without loss of precision.
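A sketch of the lenient/strict split (names are illustrative; Daml-LF timestamps have microsecond resolution, which is why nanoseconds can be lost):
```
import java.time.Instant

object TimestampConversion {
  // lenient: silently truncates sub-microsecond precision (the legacy behavior)
  def fromInstantLenient(i: Instant): Long =
    i.getEpochSecond * 1000000L + i.getNano / 1000L

  // strict: rejects an Instant that cannot be converted without loss
  def fromInstantStrict(i: Instant): Either[String, Long] =
    if (i.getNano % 1000 == 0) Right(fromInstantLenient(i))
    else Left(s"$i cannot be converted to a Daml-LF timestamp without losing precision")
}
```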
* Pruning needs to be retried, with artificial activity added, until the safe-offset has advanced far enough for it to succeed.
* The "max deduplication duration" needs to be dropped, otherwise pruning cannot be done for at least the default of 168h.
* The "reconciliation interval" needs to be lowered. This is a dynamic config, so we set it via a bootstrap script (see the sketch after this list). The change does not take effect immediately, but asynchronously some time after startup. Lowering this enables the safe-offset to catch up faster.
* We need to ensure the relevant tests are only enabled when testing against an Enterprise edition of Canton.
Contributes to https://digitalasset.atlassian.net/browse/LT-17
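A bootstrap-script sketch of that reconciliation-interval change (Canton bootstrap scripts are Scala, but the exact console call here is an assumption and may differ across Canton versions):
```
import scala.concurrent.duration._

// hypothetical console call: lower the reconciliation interval so the
// pruning safe-offset can advance faster; as a dynamic config, the change
// takes effect asynchronously, some time after startup
mydomain.service.update_dynamic_domain_parameters(
  _.update(reconciliationInterval = 1.second)
)
```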
* added new endpoint to refresh the cache
* formatting
* returning old logic
* added logic to update the cache with a specific offset provided in the body (see the request sketch after this list)
* formatting
* addressed comments
* formatting
* formatting
* formatting
* Return unit instead of List for processing refresh
* last changes on logic
* formatting
* simplify conversion
* comments addressed
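A hypothetical shape for a refresh request with an explicit offset (the endpoint path and field name are illustrative, not necessarily the final API):
```
POST /v1/refresh/cache
{
  "offset": "00000000000000001234"
}
```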
According to the current documentation, only one of these may be set, but we can currently return both.
Also, in this case we currently return a status code of 501, which is not one of the documented status codes. This PR switches that to 500 instead.
https://docs.daml.com/json-api/index.html#http-status-codes
* reduce fork count, measurement and warmup iterations, and extra parameters, all of which had a multiplicative effect on total work done (see the sketch below)
* fix db connection setup - the combination of annotations and inheritance meant it was trying to set up the trial twice, causing the Postgres benchmarks to fail
* log when an error occurs during test setup
* add logback resources to the benchmarks, to enable configuration of logging, and avoid dumping all the debug logs by default
With these changes, on my 20-core, 64 GB Linux laptop, the benchmarks all run in under 6 minutes.
Without these changes, they took over 9 hours in total and none of the Postgres benchmarks were successful.
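The knobs in question are JMH settings (the fork/warmup/measurement/trial terminology suggests JMH); a sketch with example values, not the PR's actual numbers:
```
import org.openjdk.jmh.annotations._

// forks x warmup x measurement x parameter values multiply together,
// so trimming each factor shrinks the total work dramatically
@Fork(1)
@Warmup(iterations = 2)
@Measurement(iterations = 3)
@State(Scope.Benchmark)
class ExampleBenchmark {
  @Benchmark
  def run(): Unit = ()
}
```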
The tests are not comprehensive.
We demonstrate that if the Oracle payload index is on, the names of fields of type Int may not exceed 251 characters.
For other configurations, Int and Text fields can have names of at least 512 characters.
Also updated the naming and explanation of the guards which disable some tests when using Oracle with the JSON index.
Oracle conflates empty strings with NULL, which breaks comparison operations against empty strings.
This change makes Oracle-backed queries behave as you'd expect when comparing empty strings, in line with what we see with Postgres and in-memory backed queries.
We had to take a bit of care to ensure it worked irrespective of whether the JSON index was enabled.
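To illustrate the underlying Oracle behavior (generic SQL, not the query store's actual statements):
```
-- On Oracle a zero-length string is stored as NULL, so this predicate
-- matches no rows, even rows that were written with an empty string:
SELECT * FROM t WHERE c = '';
-- Postgres distinguishes '' from NULL, so the same predicate matches there.
```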
Fixes https://digitalasset.atlassian.net/browse/LT-24
* remove sandbox-on-x project
update bazel readme
update release artifacts
comment out last remaining SoX test
remove ledger-runner-common
remove participant-state-kv-errors
remove recovering-indexer-integration-tests
remove participant-integration-api
update doc pages
cleanup ledger-api-auth usage
remove participant-state
fix build
fix build
clean up ledger-api-common part I
clean up ledger-api-common part II
clean up ledger-api-common part III
remove ledger/metrics
clean up ledger-api-health and ledger-api-domain
* remove ledger-configuration and ledger-offset
* remove ledger-grpc and clean up participant-local-store
* reshuffle few more classes
* format
Previously, when using an Oracle-based ACS cache, JSON queries such as
```
{
...
"query": {
"si_detail": {
"si_input_no": {
"%gt": "foo"
}
}
}
}
```
were resulting in SQL queries containing
```
JSON_EXISTS(payload, '$."si_detail"."si_input_no"?(@ > $X)' PASSING ? AS X)
```
which works when the payload JSON index is disabled, but results in an error when the index is enabled.
We can avoid this by passing in the literal value rather than using a query parameter when the index is enabled, e.g.
```
JSON_EXISTS(payload, '$."si_detail"."si_input_no"?(@ > $X)' PASSING 'foo' AS X)
```
Fixes https://github.com/digital-asset/daml/issues/15006
And contributes towards https://digitalasset.atlassian.net/browse/LT-14
When the `observers` list was too long, we were getting an `ORA-40478: output value too large (maximum 4000)` error from Oracle.
This was being hit when the `contract_stakeholders` view was being updated. An intermediate part of the expression that updates the view was defaulting to a type which didn't allow larger values; adding an explicit type fixes the problem (see the illustration below).
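For illustration only (the actual view definition differs): Oracle's JSON generation functions default to returning VARCHAR2(4000), so an intermediate expression can overflow unless a larger return type is requested explicitly:
```
-- default return type is VARCHAR2(4000): fails with ORA-40478 on long input
SELECT json_arrayagg(stakeholder) FROM stakeholders;
-- an explicit RETURNING clause allows larger values
SELECT json_arrayagg(stakeholder RETURNING CLOB) FROM stakeholders;
```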
- Added test which could reproduce the previous failure.
- Moved `randomTextN` to be accessible to the new test.
Fixes https://digitalasset.atlassian.net/browse/LT-11
* restore transaction fetch from #16529497b195043
* add a view so we can ask for blobs
* perform the extra disclosure and check its response text
* remove a layer we won't use
* include token; test passes with user management
* rename to reflect multiple contracts
Adds a `disclosedContracts` optional list field to the `meta` argument
for `create`, `exercise` and `create-and-exercise` endpoints.
The argument is ignored in all cases but `exercise` (#16611 builds on
this PR to add `create-and-exercise` support). A single disclosed
contract looks more or less as follows:
```
{
  "contractId": "abcd",
  "templateId": "Mod:Tmpl",
  $argumentsJsonField,
  "metadata": {
    "createdAt": "2023-03-21T18:00:33.246813Z",
    "contractKeyHash": "77656c6c2068656c6c6f",
    "driverMetadata": "dGhlcmUgcmVhZGVy"
  }
}
```
where `argumentsJsonField` may be either one of the following, setting aside the extra quotes added for these tests:
```
"payload": {"owner": "Alice"}

"payloadBlob": {
  "typeUrl": "type.googleapis.com/com.daml.ledger.api.v1.Record",
  "value": "Eg4KBW93bmVyEgVaA0JvYg=="
}
```
(Note that `typeUrl` is variable, not constant; use the actual blob's
`typeUrl` contents, **do not assume it is exactly the above example**.)
This PR uses base-64 for `payloadBlob.value` and
`metadata.driverMetadata`, and base-16 for `metadata.contractKeyHash`.
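For reference, the example values above can be reproduced as follows (the plaintext inputs are simply the decoded example bytes):
```
import java.nio.charset.StandardCharsets.UTF_8
import java.util.Base64

// base-64, as used for payloadBlob.value and metadata.driverMetadata
Base64.getEncoder.encodeToString("there reader".getBytes(UTF_8))
// => "dGhlcmUgcmVhZGVy"

// base-16 (lowercase hex), as used for metadata.contractKeyHash
"well hello".getBytes(UTF_8).map("%02x".format(_)).mkString
// => "77656c6c2068656c6c6f"
```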
* Forward port of #16401
* specify arguments for delete in a fixed order
* disable backpressure transaction batching for Oracle update
* deterministic specification of offset update DMLs
* switch to updateMany for delete instead of `in` on Oracle (see the sketch after this list)
- suggested by @ray-roestenburg-da; thanks
* don't update ledger_offset table or start transaction stream if caught up to ledger end
* include tpid in delete order consideration
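A sketch of the batched, deterministically ordered delete (assuming a doobie-based store; the table and column names are illustrative):
```
import cats.data.NonEmptyList
import cats.implicits._
import doobie._
import doobie.implicits._

// rows are sorted into a fixed (tpid, contract_id) order so concurrent
// transactions lock rows in the same sequence, then deleted as a JDBC
// batch via updateMany rather than one `... WHERE contract_id IN (...)`
def deleteContracts(rows: NonEmptyList[(String, String)]): ConnectionIO[Int] =
  Update[(String, String)](
    "DELETE FROM contract WHERE tpid = ? AND contract_id = ?"
  ).updateMany(rows.sorted)
```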
---------
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* forward port of: #16261b718717dec
* [JSON API] Log errors suppressed during streaming of /v1/query and
/v1/fetch results
---------
Co-authored-by: Stephen Compall <stephen.compall@daml.com>