* remove sandbox-on-x project
update bazel readme
update release artifacts
comment out last remaining SoX test
remove ledger-runner-common
remove participant-state-kv-errors
remove recovering-indexer-integration-tests
remove participant-integration-api
update doc pages
cleanup ledger-api-auth usage
remove participant-state
fix build
fix build
clean up ledger-api-common part I
clean up ledger-api-common part II
clean up ledger-api-common part III
remove ledger/metrics
clean up ledger-api-health and ledger-api-domain
* remove ledger-configuration and ledger-offset
* remove ledger-grpc and clean up participant-local-store
* reshuffle a few more classes
* format
Previously, when using an Oracle-based ACS cache, JSON queries such as
```
{
  ...
  "query": {
    "si_detail": {
      "si_input_no": {
        "%gt": "foo"
      }
    }
  }
}
```
resulted in SQL queries containing
```
JSON_EXISTS(payload, '$."si_detail"."si_input_no"?(@ > $X)' PASSING ? AS X)
```
which works when the payload JSON index is disabled, but results in an error when the index is enabled.
We can avoid this by passing the literal value instead of a bind parameter when the index is enabled, e.g.
```
JSON_EXISTS(payload, '$."si_detail"."si_input_no"?(@ > $X)' PASSING 'foo' AS X)
```
Fixes https://github.com/digital-asset/daml/issues/15006
And contributes towards https://digitalasset.atlassian.net/browse/LT-14
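The fix described above can be sketched in Scala. This is an illustrative reconstruction, not the actual query-builder code (`OracleJsonQuery`, `jsonExistsPredicate`, and the parameter names are my assumptions): when the JSON index is enabled, the literal is inlined into the SQL (with quote escaping) instead of being passed as a bind variable.

```scala
object OracleJsonQuery {
  // Escape embedded single quotes so the inlined literal stays valid SQL.
  private def sqlStringLiteral(s: String): String =
    "'" + s.replace("'", "''") + "'"

  // Returns the SQL fragment plus the bind parameters it needs.
  def jsonExistsPredicate(
      path: String, // e.g. "$.\"si_detail\".\"si_input_no\""
      op: String,   // e.g. ">"
      value: String,
      indexEnabled: Boolean): (String, List[String]) =
    if (indexEnabled)
      // Inline the literal: ... PASSING 'foo' AS X
      (s"JSON_EXISTS(payload, '$path?(@ $op $$X)' PASSING ${sqlStringLiteral(value)} AS X)", Nil)
    else
      // Use a bind variable: ... PASSING ? AS X
      (s"JSON_EXISTS(payload, '$path?(@ $op $$X)' PASSING ? AS X)", List(value))
}
```

The escaping step matters because inlining user-supplied strings without it would be an injection risk.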
When the `observers` list was too long, we were getting an `ORA-40478: output value too large (maximum 4000)` error from Oracle.
This was hit when the `contract_stakeholders` view was updated: an intermediate part of the expression that updates the view defaulted to a type that didn't allow larger values. Adding an explicit type fixes the problem.
- Added test which could reproduce the previous failure.
- Moved `randomTextN` to be accessible to the new test.
Fixes https://digitalasset.atlassian.net/browse/LT-11
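A sketch of a `randomTextN`-style helper like the one the new test uses to produce values long enough to exceed Oracle's 4000-byte limit (the real helper lives in the codebase; this stdlib-only reimplementation is illustrative):

```scala
import scala.util.Random

// Illustrative reimplementation: n random alphanumeric characters.
// A value longer than 4000 bytes would previously trigger ORA-40478
// when the contract_stakeholders view was updated.
def randomTextN(n: Int): String =
  Random.alphanumeric.take(n).mkString
```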
* restore transaction fetch from #16529 (497b195043)
* add a view so we can ask for blobs
* perform the extra disclosure and check its response text
* remove a layer we won't use
* include token; test passes with user management
* rename to reflect multiple contracts
Adds a `disclosedContracts` optional list field to the `meta` argument
for `create`, `exercise` and `create-and-exercise` endpoints.
The argument is ignored in all cases but `exercise` (#16611 builds on
this PR to add `create-and-exercise` support). A single disclosed
contract looks more or less as follows:
```
{
  "contractId": "abcd",
  "templateId": "Mod:Tmpl",
  $argumentsJsonField,
  "metadata": {
    "createdAt": "2023-03-21T18:00:33.246813Z",
    "contractKeyHash": "77656c6c2068656c6c6f",
    "driverMetadata": "dGhlcmUgcmVhZGVy"
  }
}
```
where `argumentsJsonField` may be either one of these, setting aside the
extra quotes added for these tests:
```
"payload": {"owner": "Alice"}
```
or
```
"payloadBlob": {
  "typeUrl": "type.googleapis.com/com.daml.ledger.api.v1.Record",
  "value": "Eg4KBW93bmVyEgVaA0JvYg=="
}
```
(Note that `typeUrl` is variable, not constant; use the actual blob's
`typeUrl` contents, **do not assume it is exactly the above example**.)
This PR uses base-64 for `payloadBlob.value` and
`metadata.driverMetadata`, and base-16 for `metadata.contractKeyHash`.
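Both encodings are expressible with just the Java standard library; the helper names below are mine, but the outputs round-trip the sample values shown above:

```scala
import java.util.Base64

// base-64, used for payloadBlob.value and metadata.driverMetadata
def base64(bytes: Array[Byte]): String =
  Base64.getEncoder.encodeToString(bytes)

// base-16 (lowercase hex), used for metadata.contractKeyHash
def base16(bytes: Array[Byte]): String =
  bytes.map(b => f"${b & 0xff}%02x").mkString
```

For instance, `base16("well hello".getBytes("UTF-8"))` produces the sample `contractKeyHash` value `"77656c6c2068656c6c6f"` used above.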
* Forward port of #16401
* specify arguments for delete in a fixed order
* disable backpressure transaction batching for Oracle update
* deterministic specification of offset update DMLs
* switch to updateMany for delete instead of `in` on Oracle
- suggested by @ray-roestenburg-da; thanks
* don't update ledger_offset table or start transaction stream if caught up to ledger end
* include tpid in delete order consideration
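The "fixed order including tpid" idea can be sketched as follows (the key shape and field names are assumptions, not the actual schema): sorting deletion keys by `(tpid, contractId)` before batching means concurrent Oracle transactions acquire row locks in the same order, avoiding deadlocks regardless of batch boundaries.

```scala
// Hypothetical key shape; the real schema may differ.
final case class DeletionKey(tpid: Long, contractId: String)

// Deterministic ordering: tpid first, then contract id.
implicit val deletionKeyOrdering: Ordering[DeletionKey] =
  Ordering.by((k: DeletionKey) => (k.tpid, k.contractId))

// Sort once, then feed the keys to the batched (updateMany-style) delete.
def deletionOrder(keys: Seq[DeletionKey]): Seq[DeletionKey] =
  keys.sorted
```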
---------
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* forward port of #16261 (b718717dec)
* [JSON API] Log errors suppressed during streaming of /v1/query and
/v1/fetch results
---------
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Common metrics reporter config format
Fixed the metrics reporter config for the trigger service and the OAuth 2.0
middleware.
They now use a common config definition, shared with the JSON API service.
The format matches the one used in canton configs.
CHANGELOG_BEGIN
CHANGELOG_END
* confirm that monadifying the package fetch still suppresses the error
* thread ExecutionContext from request
- makes the error less likely
- but still fairly easy to repro with 3 tabs
* experiment with setting executor
* explain that the cache isn't a cache
* random order, maybe
- #3090 mentions keeping the order as a goal; I don't see why we should,
though
* random order with groups of 8
* embed the decoding
- this slows down the processing of a group, yielding somewhat less
granular contention
- and also makes hits cost much less, at the cost of making granular
contention more expensive
* reduce diff size before resolution
- this won't improve contention, but does nearly eliminate the cost of
resolution for already-resolved packages, making hits nearly free
(amortized)
* randomize groups instead
- while groups themselves can overlap with this arrangement, each
costing ParallelLoadFactor granular contention, on average it seems to
perform a little better due to groups never overlapping
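One way to read "randomize groups" (this is my reconstruction, not the actual code, and 8 stands in for the ParallelLoadFactor mentioned above): keep fixed, non-overlapping groups and randomize the order in which they are visited, rather than shuffling elements before grouping.

```scala
import scala.util.Random

// Fixed groups of `groupSize`, visited in random order. Because the groups
// are carved out before shuffling, no two groups share elements, bounding
// granular contention to one group's worth per collision.
def randomizedGroups[A](xs: List[A], groupSize: Int = 8): List[List[A]] =
  Random.shuffle(xs.grouped(groupSize).toList)
```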
* refactor StatusEnvelope to utils
* constant 250ms retry
* detect contention earlier and skip decode
* factor traverseFM
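`traverseFM` is, roughly, a traverse whose effects are sequenced with `flatMap`. A hedged, stdlib-only sketch specialized to `Future` (the factored-out helper in the codebase is more general): each element's effect starts only after the previous one completes, unlike `Future.traverse`, which kicks off all the futures eagerly.

```scala
import scala.concurrent.{ExecutionContext, Future}

// Like Future.traverse, but strictly sequential: step n+1 starts only
// after step n's Future completes (effects sequenced via flatMap).
def traverseFM[A, B](xs: List[A])(f: A => Future[B])(
    implicit ec: ExecutionContext): Future[List[B]] =
  xs.foldLeft(Future.successful(List.empty[B])) { (acc, a) =>
      for { bs <- acc; b <- f(a) } yield b :: bs
    }
    .map(_.reverse) // foldLeft accumulated in reverse order
```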
* declare needed NonEmpty query lists and condition lists
* selectContractsMultiTemplate requires non-empty query list
* propagate nonempty query sets through ContractDao
* propagate some NE constraints from selectContractsMultiTemplate through WebSocketService
* HashSet no longer needed
* pass non-emptiness through dbQueries
* add NE-preserving groupMap and groupMap1
* validate that resolvedWithKey is nonempty
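The NE-preserving `groupMap1` mentioned above can be sketched like this (the real code uses the daml `NonEmpty` collection types; a minimal stand-in case class keeps the example self-contained): since `groupBy` never produces an empty group, every group in the result can be typed as nonempty.

```scala
// Minimal stand-in for the nonempty-list type used in the codebase.
final case class NonEmptyList[A](head: A, tail: List[A]) {
  def toList: List[A] = head :: tail
}

// Group a nonempty input by `key`, mapping values with `f`; every group
// in the result is provably nonempty.
def groupMap1[A, K, B](xs: NonEmptyList[A])(key: A => K)(f: A => B): Map[K, NonEmptyList[B]] =
  xs.toList.groupBy(key).map {
    case (k, v :: vs) => k -> NonEmptyList(f(v), vs.map(f))
    case (k, Nil)     => sys.error(s"unreachable: groupBy produced an empty group for $k")
  }
```

Pushing the non-emptiness proof into the type lets downstream callers (e.g. the query builders above) drop their runtime emptiness checks.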
* log more termination
From the timeout loop:
+ fmm-outer
+ fmm-inner
x ACS-before-tx
x tx-after-ACS
* spam eagerCancel=true and see what happens
From the timeout loop:
+ after-split
+ IDSS-outer
+ fmm-outer
+ contractsAndBoundary
+ tx-after-ACS
+ fmm-inner
+ GTSFP-outer
x ACS-before-tx
* passing acs-and-tx tests
* trying combinations of reverting eagerCancel settings
- setting eagerCancel = false in acsAndBoundary causes the ACS
cancellation to fail (first test), but the tx cancellation still
succeeds
- setting eagerCancel = false in project2 causes both the ACS and tx
stream cancellation tests (first and third tests) to fail
- the offset broadcast in acsFollowingAndBoundary appears to be
redundant with respect to cancellation, so we revert it in the
interest of conservatism
* make test size small
* current measurement
Still fine after the refactoring of logTermination and removal of fmm-*.
+ GTSFP-outer
+ contractsAndBoundary
x IDSS-outer-2
+ after-split
+ tx-after-ACS
+ IDSS-outer-1
x ACS-before-tx
* set level of the logTermination messages to trace
* Add "component" to `SecurityTest` entry for AbstractWebsocketServiceIntegrationTest
* Add README to test-evidence explaining the convention for components
* WIP
* do it on getUser first
* remove old getUser
* createUser
* createUser
* user post paths
* user post paths
* WIP
* removed routesetup from UserManagement
* refactor parties and allocateParty
* remove proxyWithCommand
* refactor PackagesAndDars
* remove proxyWithoutCommand
* refactor PackagesAndDars
* merge from main
* move UploadDarFile implementation back to package and dars
* add -Xlint options requiring no changes
* add -Xlint:recurse-with-default
- very minor code changes
* factor http-json hj_scalacopts duplication
* use lf_scalacopts_stricter in libs-scala where NonUnitStatements was
* use hj_scalacopts in api-type-signature
* add nonlocal-return and nullary-unit to hj_scalacopts
* commented-out excluded options
* add unit-special globally
* check implicit-recursion for clients code