Remove sandbox on x (#16890)

* remove sandbox-on-x project

* update bazel readme

* update release artifacts

* comment out last remaining SoX test

* remove ledger-runner-common

* remove participant-state-kv-errors

* remove recovering-indexer-integration-tests

* remove participant-integration-api

* update doc pages

* cleanup ledger-api-auth usage

* remove participant-state

* fix build

* fix build
mziolekda 2023-05-23 09:25:54 +02:00 committed by GitHub
parent b55d80a881
commit 95cc249ddd
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
909 changed files with 74 additions and 88678 deletions

View File

@ -53,31 +53,31 @@ file, which defines external dependencies. The workspace contains several
file. Each package holds multiple *targets*. Targets are either *files* under
the package directory or *rules* defined in the `BUILD.bazel` file. You can
address a target by a *label* of the form `//path/to/package:target`. For
example, `//ledger/sandbox-on-x:sandbox-on-x`. Here `sandbox-on-x` is a target in the package
`ledger/sandbox-on-x`. It is defined in the file `ledger/sandbox-on-x/BUILD.bazel`
using `da_scala_library` as shown below.
example, `//daml-script/runner:daml-script-binary`. Here `daml-script-binary` is a target in the package
`//daml-script/runner`. It is defined in the file `//daml-script/runner/BUILD.bazel`
using `da_scala_binary` as shown below.
```
da_scala_library(
name = "sandbox-on-x",
srcs = glob(["src/main/scala/**/*.scala"]),
da_scala_binary(
name = "daml-script-binary",
main_class = "com.daml.lf.engine.script.ScriptMain",
resources = glob(["src/main/resources/**/*"]),
scala_deps = [
"@maven//:com_github_scopt_scopt",
"@maven//:com_typesafe_akka_akka_actor",
"@maven//:com_typesafe_akka_akka_stream",
scala_runtime_deps = [
"@maven//:com_typesafe_akka_akka_slf4j",
],
tags = ["maven_coordinates=com.daml:sandbox-on-x:__VERSION__"],
visibility = [
"//visibility:public",
scalacopts = lf_scalacopts_stricter,
visibility = ["//visibility:public"],
runtime_deps = [
"@maven//:ch_qos_logback_logback_classic",
],
deps = [
...list of deps
":script-runner-lib",
"//release:ee-license",
],
)
```
The arguments to `da_scala_library` are called *attributes*. These define the
The arguments to `da_scala_binary` are called *attributes*. These define the
name of the target, the sources it is compiled from, its dependencies, etc.
Note that Bazel build rules are hermetic, i.e. only explicitly declared
dependencies will be available during execution. In particular, if a rule
@ -401,7 +401,7 @@ detailed information.
- Build an individual target
```
bazel build //ledger/sandbox-on-x:app
bazel build //daml-script/runner:daml-script-binary
```
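Bazel also accepts wildcard target patterns, so, for example, every target underneath a package can be built in one invocation (a sketch assuming the standard `...` pattern syntax):
```
bazel build //daml-script/...
```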
### Running Tests
@ -415,13 +415,13 @@ detailed information.
- Execute a test suite
```
bazel test //ledger/sandbox-on-x:sandbox-on-x-unit-tests
bazel test //daml-script/runner:tests
```
- Show test output
```
bazel test //ledger/sandbox-on-x:sandbox-on-x-unit-tests --test_output=streamed
bazel test //daml-script/runner:tests --test_output=streamed
```
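To stream output only for failing tests, the standard `--test_output=errors` flag can be used instead (shown here as an illustrative variant):
```
bazel test //daml-script/runner:tests --test_output=errors
```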
Test outputs are also available in log files underneath the convenience
@ -430,20 +430,20 @@ detailed information.
- Do not cache test results
```
bazel test //ledger/sandbox-on-x:sandbox-on-x-unit-tests --nocache_test_results
bazel test //daml-script/runner:tests --nocache_test_results
```
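When hunting for flaky tests, disabling the cache is commonly combined with repeated runs, for example (an illustrative sketch using Bazel's standard `--runs_per_test` flag):
```
bazel test //daml-script/runner:tests --nocache_test_results --runs_per_test=10
```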
- Execute a specific Scala test-suite class
```
bazel test //ledger/participant-integration-api:participant-integration-api-tests_test_suite_src_test_suite_scala_platform_store_dao_JdbcLedgerDaoPostgresqlSpec.scala
bazel test //daml-lf/engine:tests_test_suite_src_test_scala_com_digitalasset_daml_lf_engine_ApiCommandPreprocessorSpec.scala
```
- Execute a test with a specific name
```
bazel test \
//ledger/participant-integration-api:participant-integration-api-tests_test_suite_src_test_suite_scala_platform_store_dao_JdbcLedgerDaoPostgresqlSpec.scala \
//daml-lf/engine:tests_test_suite_src_test_scala_com_digitalasset_daml_lf_engine_ApiCommandPreprocessorSpec.scala \
--test_arg=-t \
--test_arg="JdbcLedgerDao (divulgence) should preserve divulged contracts"
```
@ -452,7 +452,7 @@ detailed information.
```
bazel test \
//ledger/participant-integration-api:participant-integration-api-tests_test_suite_src_test_suite_scala_platform_store_dao_JdbcLedgerDaoPostgresqlSpec.scala \
//daml-lf/engine:tests_test_suite_src_test_scala_com_digitalasset_daml_lf_engine_ApiCommandPreprocessorSpec.scala \
--test_arg=-z \
--test_arg="preserve divulged"
```
@ -464,13 +464,13 @@ detailed information.
- Run an executable target
```
bazel run //ledger/sandbox-on-x:app
bazel run //daml-script/runner:daml-script-binary
```
- Pass arguments to an executable target
```
bazel run //ledger/sandbox-on-x:app -- --help
bazel run //daml-script/runner:daml-script-binary -- --help
```
### Running a REPL
@ -529,7 +529,7 @@ expressions can be combined using set operations like `intersect` or `union`.
- List all Scala library dependencies of a target
```
bazel query 'kind("scala.*library rule", deps(//ledger/sandbox-on-x:app))'
bazel query 'kind("scala.*library rule", deps(//daml-script/runner:daml-script-binary))'
```
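The reverse question, which targets depend on a given library, can be answered with `rdeps`; for example (an illustrative query, assuming `//libs-scala/ports` as the library of interest):
```
bazel query 'rdeps(//..., //libs-scala/ports)'
```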
- Find available 3rd party dependencies
@ -546,7 +546,7 @@ query includes. These can then be rendered using Graphviz.
- Graph all Scala library dependencies of a target
```
bazel query --noimplicit_deps 'kind(scala_library, deps(//ledger/sandbox-on-x:app))' --output graph > graph.in
bazel query --noimplicit_deps 'kind(scala_library, deps(//daml-script/runner:daml-script-binary))' --output graph > graph.in
dot -Tpng < graph.in > graph.png
```
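Graphviz can also emit scalable output, which is often easier to inspect for large graphs (an illustrative variant of the command above):
```
dot -Tsvg < graph.in > graph.svg
```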
@ -585,7 +585,7 @@ it will watch these files for changes and rerun the command on file change. For
example:
```
ibazel test //ledger/sandbox-on-x:sandbox-on-x-unit-tests
ibazel test //daml-script/runner:tests
```
Note that this interacts well with Bazel's test result caching (which is

View File

@ -45,7 +45,6 @@ NOTICES @garyverhaegen-da @dasormeter
/ledger-api/grpc-definitions/ @meiersi-da @digital-asset/kv-participant @digital-asset/kv-committer
/ledger/ledger-configuration/ @meiersi-da @digital-asset/kv-participant @digital-asset/kv-committer
/ledger/ledger-offset/ @meiersi-da @digital-asset/kv-participant @digital-asset/kv-committer
/ledger/participant-state/src @meiersi-da @digital-asset/kv-participant @digital-asset/kv-committer
# Owned by KV Participant with KV Committer added for notifications
/ledger/ledger-api-common/ @digital-asset/kv-participant @digital-asset/kv-committer
@ -53,16 +52,8 @@ NOTICES @garyverhaegen-da @dasormeter
/ledger/ledger-api-health/ @digital-asset/kv-participant @digital-asset/kv-committer
/ledger/ledger-configuration/ @digital-asset/kv-participant @digital-asset/kv-committer
/ledger/ledger-offset/ @digital-asset/kv-participant @digital-asset/kv-committer
/ledger/participant-state/ @digital-asset/kv-participant @digital-asset/kv-committer
/ledger/participant-state-index/ @digital-asset/kv-participant @digital-asset/kv-committer
/ledger/participant-state-kv-errors/ @digital-asset/kv-participant @digital-asset/kv-committer
/ledger/participant-state-metrics/ @digital-asset/kv-participant @digital-asset/kv-committer
/ledger/sandbox/ @digital-asset/kv-participant @digital-asset/kv-committer
/ledger-test-tool/ @digital-asset/kv-participant @digital-asset/kv-committer
# KV Participant
/ledger/participant-integration-api/ @digital-asset/kv-participant
# Conformance test on canton
/ledger-test-tool/ledger-api-test-tool-on-canton @remyhaemmerle-da @rgugliel-da

View File

@ -94,8 +94,7 @@ You can also participate in the discussions at the following link: [discuss.daml
Deny Allow
````
(As of 2021.10.29) this is caused by the following test `//ledger/participant-integration-api:participant-integration-api-tests_test_suite_src_test_suite_scala_platform_apiserver_tls_TlsCertificateRevocationCheckingSpec.scala`.
The test can succeeds independent of whether `Deny` or `Allow`.
The test can succeed independent of whether `Deny` or `Allow` is selected.
If the dialog doesn't appear for you, you've probably already exercised one of these two choices.
To check your Firewall settings go to: `System Preferences` -> `Security & Privacy` -> `Firewall` -> `Firewall Options...` (checked on macOS Big Sur 11.5.2).

View File

@ -1,14 +0,0 @@
# Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
version: v1beta1
build:
roots:
- ledger/participant-integration-api/src/main/protobuf
breaking:
use:
# Using WIRE_JSON here to ensure we have the option of also using JSON encodings of the
# .proto values stored in the IndexDB.
- WIRE_JSON

View File

@ -39,7 +39,6 @@ function check_non_lf_protos() {
declare -a BUF_MODULES_AGAINST_STABLE=(
"buf-ledger-api.yaml"
"buf-ledger-configuration.yaml"
"buf-participant-integration-api.yaml"
)
echo "Checking protobufs against git target '${BUF_GIT_TARGET_TO_CHECK}'"

View File

@ -24,10 +24,9 @@ da_scala_library(
"//daml-lf:__subpackages__",
# TODO https://github.com/digital-asset/daml/issues/15453
# Extract the error types into a separate package
# in order to decouple the error definitions and participant-integration-api
# in order to decouple the error definitions
# from unnecessary daml-lf/validation dependencies
"//ledger/ledger-api-errors:__subpackages__",
"//ledger/participant-integration-api:__subpackages__",
],
deps = [
"//daml-lf/data",

View File

@ -34,24 +34,25 @@ def daml_ledger_export_test(
expected_daml_yaml = expected_daml_yaml,
)
client_server_test(
name = name,
client = client_name,
client_args = [
"--target-port=%PORT%",
"--script-identifier=%s" % script_identifier,
"--party=" + ",".join(parties),
],
client_files = ["$(rootpath %s)" % dar],
data = [dar],
server = "//ledger/sandbox-on-x:app",
server_args = [
"run",
"-C ledger.participants.default.api-server.port=0",
"-C ledger.participants.default.api-server.port-file=%PORT_FILE%",
] + server_dev_args,
timeout = timeout,
)
# Commented out - awaiting a port to canton
# client_server_test(
# name = name,
# client = client_name,
# client_args = [
# "--target-port=%PORT%",
# "--script-identifier=%s" % script_identifier,
# "--party=" + ",".join(parties),
# ],
# client_files = ["$(rootpath %s)" % dar],
# data = [dar],
# server = "//ledger/sandbox-on-x:app",
# server_args = [
# "run",
# "-C ledger.participants.default.api-server.port=0",
# "-C ledger.participants.default.api-server.port-file=%PORT_FILE%",
# ] + server_dev_args,
# timeout = timeout,
# )
# Generate the Daml ledger export and compare to the expected files. This is
# used both for golden tests on ledger exports and to make sure that the

View File

@ -46,7 +46,6 @@ da_scala_test(
"//ledger/ledger-api-client",
"//ledger/ledger-api-common",
"//ledger/ledger-api-domain",
"//ledger/participant-integration-api",
"//libs-scala/fs-utils",
"//libs-scala/ledger-resources",
"//libs-scala/ports",

View File

@ -279,11 +279,7 @@ da_scala_test_suite(
"//ledger/ledger-api-auth",
"//ledger/ledger-api-common",
"//ledger/ledger-configuration",
"//ledger/ledger-runner-common",
"//ledger/metrics",
"//ledger/participant-integration-api",
"//ledger/participant-integration-api:participant-integration-api-tests-lib",
"//ledger/participant-state",
"//libs-scala/caching",
"//libs-scala/contextualized-logging",
"//libs-scala/fs-utils",

View File

@ -1,234 +1,10 @@
1. KvErrors
===================================================================================================================
Errors that are specific to ledgers based on the KV architecture: Daml Sandbox and VMBC.
1.1. KvErrors / Consistency
===================================================================================================================
Errors that highlight transaction consistency issues in the committer context.
.. _error_code_INCONSISTENT_INPUT:
INCONSISTENT_INPUT
---------------------------------------------------------------------------------------------------------------------------------------
**Explanation**: At least one input has been altered by a concurrent transaction submission.
**Category**: ContentionOnSharedResources
**Conveyance**: This error is logged with log-level INFO on the server side and exposed on the API with grpc-status ABORTED including a detailed error message.
**Resolution**: The correct resolution depends on the business flow, for example it may be possible to proceed without an archived contract as an input, or the transaction submission may be retried to load the up-to-date value of a contract key.
.. _error_code_VALIDATION_FAILURE:
VALIDATION_FAILURE
---------------------------------------------------------------------------------------------------------------------------------------
**Explanation**: Validation of a transaction submission failed using on-ledger data.
**Category**: InvalidGivenCurrentSystemStateOther
**Conveyance**: This error is logged with log-level INFO on the server side and exposed on the API with grpc-status FAILED_PRECONDITION including a detailed error message.
**Resolution**: Either some input contracts have been pruned or the participant is misbehaving.
1.2. KvErrors / Internal
===================================================================================================================
Errors that arise from an internal system misbehavior.
.. _error_code_INVALID_PARTICIPANT_STATE:
INVALID_PARTICIPANT_STATE
---------------------------------------------------------------------------------------------------------------------------------------
**Explanation**: An invalid participant state has been detected.
**Category**: SystemInternalAssumptionViolated
**Conveyance**: This error is logged with log-level ERROR on the server side. It is exposed on the API with grpc-status INTERNAL without any details for security reasons.
**Resolution**: Contact support.
.. _error_code_MISSING_INPUT_STATE:
MISSING_INPUT_STATE
---------------------------------------------------------------------------------------------------------------------------------------
**Explanation**: The participant didn't provide a necessary transaction submission input.
**Category**: SystemInternalAssumptionViolated
**Conveyance**: This error is logged with log-level ERROR on the server side. It is exposed on the API with grpc-status INTERNAL without any details for security reasons.
**Resolution**: Contact support.
.. _error_code_REJECTION_REASON_NOT_SET:
REJECTION_REASON_NOT_SET
---------------------------------------------------------------------------------------------------------------------------------------
**Explanation**: A rejection reason has not been set.
**Category**: SystemInternalAssumptionViolated
**Conveyance**: This error is logged with log-level ERROR on the server side. It is exposed on the API with grpc-status INTERNAL without any details for security reasons.
**Resolution**: Contact support.
.. _error_code_SUBMISSION_FAILED:
SUBMISSION_FAILED
---------------------------------------------------------------------------------------------------------------------------------------
**Explanation**: An unexpected error occurred while submitting a command to the ledger.
**Category**: SystemInternalAssumptionViolated
**Conveyance**: This error is logged with log-level ERROR on the server side. It is exposed on the API with grpc-status INTERNAL without any details for security reasons.
**Resolution**: Contact support.
1.3. KvErrors / Resources
===================================================================================================================
Errors that relate to system resources.
.. _error_code_RESOURCE_EXHAUSTED:
RESOURCE_EXHAUSTED
---------------------------------------------------------------------------------------------------------------------------------------
**Deprecation**: Replaced by RESOURCE_OVERLOADED. Since: 2.3.0
**Explanation**: A system resource has been exhausted.
**Category**: ContentionOnSharedResources
**Conveyance**: This error is logged with log-level INFO on the server side and exposed on the API with grpc-status ABORTED including a detailed error message.
**Resolution**: Retry the transaction submission or provide the details to the participant operator.
.. _error_code_RESOURCE_OVERLOADED:
RESOURCE_OVERLOADED
---------------------------------------------------------------------------------------------------------------------------------------
**Explanation**: A system resource is overloaded.
**Category**: ContentionOnSharedResources
**Conveyance**: This error is logged with log-level INFO on the server side and exposed on the API with grpc-status ABORTED including a detailed error message.
**Resolution**: Retry the transaction submission or provide the details to the participant operator.
1.4. KvErrors / Time
===================================================================================================================
Errors that relate to the Daml concepts of time.
.. _error_code_CAUSAL_MONOTONICITY_VIOLATED:
CAUSAL_MONOTONICITY_VIOLATED
---------------------------------------------------------------------------------------------------------------------------------------
**Explanation**: At least one input contract's ledger time is later than that of the submitted transaction.
**Category**: InvalidGivenCurrentSystemStateOther
**Conveyance**: This error is logged with log-level INFO on the server side and exposed on the API with grpc-status FAILED_PRECONDITION including a detailed error message.
**Resolution**: Retry the transaction submission.
.. _error_code_INVALID_RECORD_TIME:
INVALID_RECORD_TIME
---------------------------------------------------------------------------------------------------------------------------------------
**Explanation**: The record time is not within bounds for reasons other than deduplication, such as excessive latency. Excessive clock skew between the participant and the committer or a time model that is too restrictive may also produce this rejection.
**Category**: InvalidGivenCurrentSystemStateOther
**Conveyance**: This error is logged with log-level INFO on the server side and exposed on the API with grpc-status FAILED_PRECONDITION including a detailed error message.
**Resolution**: Retry the submission or contact the participant operator.
.. _error_code_RECORD_TIME_OUT_OF_BOUNDS:
RECORD_TIME_OUT_OF_BOUNDS
---------------------------------------------------------------------------------------------------------------------------------------
**Explanation**: The record time is not within bounds for reasons such as excessive latency, excessive clock skew between the participant and the committer or a time model that is too restrictive.
**Category**: ContentionOnSharedResources
**Conveyance**: This error is logged with log-level INFO on the server side and exposed on the API with grpc-status ABORTED including a detailed error message.
**Resolution**: Retry the submission or contact the participant operator.
.. _error_code_RECORD_TIME_OUT_OF_RANGE:
RECORD_TIME_OUT_OF_RANGE
---------------------------------------------------------------------------------------------------------------------------------------
**Explanation**: The record time is not within bounds for reasons other than deduplication, such as excessive latency. Excessive clock skew between the participant and the committer or a time model that is too restrictive may also produce this rejection.
**Category**: InvalidGivenCurrentSystemStateOther
**Conveyance**: This error is logged with log-level INFO on the server side and exposed on the API with grpc-status FAILED_PRECONDITION including a detailed error message.
**Resolution**: Retry the transaction submission or contact the participant operator.
2. ParticipantErrorGroup
1. ParticipantErrorGroup
===================================================================================================================
2.1. ParticipantErrorGroup / CommonErrors
1.1. ParticipantErrorGroup / CommonErrors
===================================================================================================================
Common errors raised in Daml services and components.
@ -314,13 +90,13 @@ UNSUPPORTED_OPERATION
2.2. ParticipantErrorGroup / IndexErrors
1.2. ParticipantErrorGroup / IndexErrors
===================================================================================================================
Errors raised by the Participant Index persistence layer.
2.2.1. ParticipantErrorGroup / IndexErrors / DatabaseErrors
1.2.1. ParticipantErrorGroup / IndexErrors / DatabaseErrors
===================================================================================================================
@ -374,7 +150,7 @@ INDEX_DB_SQL_TRANSIENT_ERROR
2.3. ParticipantErrorGroup / LedgerApiErrors
1.3. ParticipantErrorGroup / LedgerApiErrors
===================================================================================================================
Errors raised by or forwarded by the Ledger API.
@ -460,7 +236,7 @@ THREADPOOL_OVERLOADED
2.3.1. ParticipantErrorGroup / LedgerApiErrors / AdminServices
1.3.1. ParticipantErrorGroup / LedgerApiErrors / AdminServices
===================================================================================================================
Errors raised by Ledger API admin services.
@ -514,7 +290,7 @@ PACKAGE_UPLOAD_REJECTED
2.3.1.1. ParticipantErrorGroup / LedgerApiErrors / AdminServices / IdentityProviderConfigServiceErrorGroup
1.3.1.1. ParticipantErrorGroup / LedgerApiErrors / AdminServices / IdentityProviderConfigServiceErrorGroup
===================================================================================================================
@ -616,7 +392,7 @@ TOO_MANY_IDENTITY_PROVIDER_CONFIGS
2.3.1.2. ParticipantErrorGroup / LedgerApiErrors / AdminServices / PartyManagementServiceErrorGroup
1.3.1.2. ParticipantErrorGroup / LedgerApiErrors / AdminServices / PartyManagementServiceErrorGroup
===================================================================================================================
@ -718,7 +494,7 @@ PARTY_NOT_FOUND
2.3.1.3. ParticipantErrorGroup / LedgerApiErrors / AdminServices / UserManagementServiceErrorGroup
1.3.1.3. ParticipantErrorGroup / LedgerApiErrors / AdminServices / UserManagementServiceErrorGroup
===================================================================================================================
@ -820,7 +596,7 @@ USER_NOT_FOUND
2.3.2. ParticipantErrorGroup / LedgerApiErrors / AuthorizationChecks
1.3.2. ParticipantErrorGroup / LedgerApiErrors / AuthorizationChecks
===================================================================================================================
Authentication and authorization errors.
@ -890,7 +666,7 @@ UNAUTHENTICATED
2.3.3. ParticipantErrorGroup / LedgerApiErrors / CommandExecution
1.3.3. ParticipantErrorGroup / LedgerApiErrors / CommandExecution
===================================================================================================================
Errors raised during the command execution phase of the command submission evaluation.
@ -912,7 +688,7 @@ FAILED_TO_DETERMINE_LEDGER_TIME
2.3.3.1. ParticipantErrorGroup / LedgerApiErrors / CommandExecution / Interpreter
1.3.3.1. ParticipantErrorGroup / LedgerApiErrors / CommandExecution / Interpreter
===================================================================================================================
Errors raised during the command interpretation phase of the command submission evaluation.
@ -982,7 +758,7 @@ DAML_INTERPRETER_INVALID_ARGUMENT
2.3.3.1.1. ParticipantErrorGroup / LedgerApiErrors / CommandExecution / Interpreter / LookupErrors
1.3.3.1.1. ParticipantErrorGroup / LedgerApiErrors / CommandExecution / Interpreter / LookupErrors
===================================================================================================================
Errors raised in lookups during the command interpretation phase.
@ -1004,7 +780,7 @@ CONTRACT_KEY_NOT_FOUND
2.3.3.2. ParticipantErrorGroup / LedgerApiErrors / CommandExecution / Package
1.3.3.2. ParticipantErrorGroup / LedgerApiErrors / CommandExecution / Package
===================================================================================================================
Command execution errors raised due to invalid packages.
@ -1042,7 +818,7 @@ PACKAGE_VALIDATION_FAILED
2.3.3.3. ParticipantErrorGroup / LedgerApiErrors / CommandExecution / Preprocessing
1.3.3.3. ParticipantErrorGroup / LedgerApiErrors / CommandExecution / Preprocessing
===================================================================================================================
Errors raised during command conversion to the internal data representation.
@ -1064,7 +840,7 @@ COMMAND_PREPROCESSING_FAILED
2.3.4. ParticipantErrorGroup / LedgerApiErrors / ConsistencyErrors
1.3.4. ParticipantErrorGroup / LedgerApiErrors / ConsistencyErrors
===================================================================================================================
Potential consistency errors raised due to race conditions during command submission or returned as submission rejections by the backing ledger.
@ -1214,7 +990,7 @@ SUBMISSION_ALREADY_IN_FLIGHT
2.3.5. ParticipantErrorGroup / LedgerApiErrors / PackageServiceError
1.3.5. ParticipantErrorGroup / LedgerApiErrors / PackageServiceError
===================================================================================================================
Errors raised by the Package Management Service on package uploads.
@ -1268,7 +1044,7 @@ PACKAGE_SERVICE_INTERNAL_ERROR
2.3.5.1. ParticipantErrorGroup / LedgerApiErrors / PackageServiceError / Reading
1.3.5.1. ParticipantErrorGroup / LedgerApiErrors / PackageServiceError / Reading
===================================================================================================================
Package parsing errors raised during package upload.
@ -1370,7 +1146,7 @@ ZIP_BOMB
2.3.6. ParticipantErrorGroup / LedgerApiErrors / RequestValidation
1.3.6. ParticipantErrorGroup / LedgerApiErrors / RequestValidation
===================================================================================================================
Validation errors raised when evaluating requests in the Ledger API.
@ -1520,7 +1296,7 @@ PARTICIPANT_PRUNED_DATA_ACCESSED
2.3.6.1. ParticipantErrorGroup / LedgerApiErrors / RequestValidation / NotFound
1.3.6.1. ParticipantErrorGroup / LedgerApiErrors / RequestValidation / NotFound
===================================================================================================================
@ -1590,7 +1366,7 @@ TRANSACTION_NOT_FOUND
2.3.7. ParticipantErrorGroup / LedgerApiErrors / WriteServiceRejections
1.3.7. ParticipantErrorGroup / LedgerApiErrors / WriteServiceRejections
===================================================================================================================
Generic submission rejection errors returned by the backing ledger's write service.
@ -1680,7 +1456,7 @@ SUBMITTING_PARTY_NOT_KNOWN_ON_LEDGER
2.3.7.1. ParticipantErrorGroup / LedgerApiErrors / WriteServiceRejections / Internal
1.3.7.1. ParticipantErrorGroup / LedgerApiErrors / WriteServiceRejections / Internal
===================================================================================================================
Errors that arise from an internal system misbehavior.

View File

@ -418,7 +418,6 @@ da_scala_test(
"//ledger-api/testing-utils",
"//ledger/ledger-api-common",
"//ledger/ledger-api-domain",
"//ledger/participant-integration-api",
"//libs-scala/ledger-resources",
"//libs-scala/ledger-resources:ledger-resources-test-lib",
"//libs-scala/ports",

View File

@ -147,7 +147,6 @@ da_scala_test(
"//ledger-api/testing-utils",
"//ledger/ledger-api-client",
"//ledger/ledger-api-common",
"//ledger/participant-integration-api",
"//libs-scala/ledger-resources",
"//libs-scala/ports",
"//libs-scala/resources",

View File

@ -111,9 +111,7 @@ da_scala_test_suite(
"//ledger-service/http-json-cli:ee",
"//ledger-service/http-json-testing:ee",
"//ledger-service/utils",
"//ledger/ledger-api-auth",
"//ledger/ledger-api-common",
"//ledger/ledger-runner-common",
"//libs-scala/db-utils",
"//libs-scala/jwt",
"//libs-scala/ledger-resources",

View File

@ -49,7 +49,6 @@ load("//ledger-service/utils:scalaopts.bzl", "hj_scalacopts")
"//ledger-service/utils",
"//ledger/ledger-api-auth",
"//ledger/ledger-api-common",
"//ledger/participant-integration-api",
"//libs-scala/contextualized-logging",
"//libs-scala/jwt",
"//libs-scala/ledger-resources",

View File

@ -502,7 +502,6 @@ alias(
"//ledger-service/fetch-contracts",
"//ledger/ledger-api-auth",
"//ledger/ledger-api-common",
"//ledger/ledger-runner-common",
"//ledger/metrics",
"//libs-scala/ledger-resources",
"//ledger-service/http-json-cli:{}".format(edition),
@ -575,7 +574,6 @@ test_suite(
"//ledger-service/metrics",
"//ledger-service/utils",
"//ledger/ledger-api-common",
"//ledger/participant-integration-api",
"//libs-scala/db-utils",
"//libs-scala/jwt",
"//libs-scala/ledger-resources",
@ -642,12 +640,8 @@ test_suite(
"//ledger-service/http-json-cli:{}".format(edition),
"//ledger-service/http-json-testing:{}".format(edition),
"//ledger-service/utils",
"//ledger/ledger-api-auth",
"//ledger/ledger-api-common",
"//ledger/ledger-runner-common",
"//ledger/metrics",
"//ledger/participant-integration-api",
"//ledger/participant-integration-api:participant-integration-api-tests-lib",
"//libs-scala/caching",
"//libs-scala/db-utils",
"//libs-scala/jwt",

View File

@ -20,7 +20,6 @@ def deps(lf_version):
"//ledger/ledger-api-common",
"//ledger-test-tool/infrastructure:infrastructure-%s" % lf_version,
"//libs-scala/ledger-resources",
"//ledger/participant-state-kv-errors",
"//libs-scala/test-evidence/tag:test-evidence-tag",
"//test-common:dar-files-%s-lib" % lf_version,
"//test-common:model-tests-%s.scala" % lf_version,

View File

@ -10,7 +10,6 @@ import com.daml.ledger.api.testtool.infrastructure.LedgerTestSuite
import com.daml.ledger.api.testtool.infrastructure.Synchronize.synchronize
import com.daml.ledger.api.testtool.infrastructure.participant.ParticipantTestContext
import com.daml.ledger.api.v1.admin.config_management_service.{SetTimeModelRequest, TimeModel}
import com.daml.ledger.error.definitions.kv.KvErrors
import com.google.protobuf.duration.Duration
import scala.concurrent.{ExecutionContext, Future}
@ -167,7 +166,6 @@ final class ConfigManagementServiceIT extends LedgerTestSuite {
failure,
LedgerApiErrors.Admin.ConfigurationEntryRejected,
LedgerApiErrors.RequestValidation.InvalidArgument,
KvErrors.Consistency.PostExecutionConflicts,
)
}
})

View File

@ -65,7 +65,6 @@ da_scala_library(
runtime_deps = [
# Add error definition targets to the classpath so they can be picked up by the generator
"//ledger/ledger-api-errors",
"//ledger/participant-state-kv-errors",
],
deps = [
"//ledger/error",

View File

@ -1,49 +0,0 @@
# Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
load("@os_info//:os_info.bzl", "is_windows")
load("//rules_daml:daml.bzl", "daml_compile")
load(
"//bazel_tools:scala.bzl",
"da_scala_binary",
"da_scala_library",
"da_scala_test_suite",
"scaladoc_jar",
)
load("//bazel_tools:pom_file.bzl", "pom_file")
load("@scala_version//:index.bzl", "scala_major_version_suffix")
da_scala_library(
name = "indexer-benchmark-lib",
srcs = glob(["src/main/scala/**/*.scala"]),
resources = glob(["src/main/resources/**/*"]),
scala_deps = [
"@maven//:com_github_scopt_scopt",
"@maven//:com_typesafe_akka_akka_actor",
"@maven//:com_typesafe_akka_akka_stream",
],
tags = ["maven_coordinates=com.daml:ledger-indexer-benchmark-lib:__VERSION__"],
visibility = [
"//visibility:public",
],
deps = [
"//daml-lf/data",
"//ledger/ledger-api-health",
"//ledger/ledger-configuration",
"//ledger/ledger-offset",
"//ledger/metrics",
"//ledger/participant-integration-api",
"//ledger/participant-state",
"//libs-scala/contextualized-logging",
"//libs-scala/ledger-resources",
"//libs-scala/resources",
"//libs-scala/resources-akka",
"//libs-scala/resources-grpc",
"//observability/metrics",
"//observability/metrics:metrics-test-lib",
"//observability/telemetry",
"@maven//:io_dropwizard_metrics_metrics_core",
"@maven//:io_opentelemetry_opentelemetry_api",
"@maven//:org_slf4j_slf4j_api",
],
)

View File

@ -1,3 +0,0 @@
# `indexer-benchmark`
A tool that measures the performance of the indexer. Used in downstream repositories.

View File

@ -1,17 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
~ Copyright (c) 2022 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
~ SPDX-License-Identifier: Apache-2.0
-->
<configuration>
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg %replace(, context: %marker){', context: $', ''} %n</pattern>
</encoder>
</appender>
<root level="INFO">
<appender-ref ref="STDOUT"/>
</root>
</configuration>

View File

@ -1,129 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.indexerbenchmark
import com.daml.lf.data.Ref
import com.daml.platform.configuration.IndexServiceConfig
import com.daml.platform.configuration.Readers._
import com.daml.platform.indexer.{IndexerConfig, IndexerStartupMode}
import com.daml.platform.store.DbSupport.ParticipantDataSourceConfig
import scopt.OptionParser
import java.time.Duration
import com.daml.metrics.api.reporters.MetricsReporter
/** @param updateCount The number of updates to process.
* @param updateSource The name of the source of state updates.
* @param waitForUserInput If enabled, the app will wait for user input after the benchmark has finished,
* but before cleaning up resources.
*/
case class Config(
updateCount: Option[Long],
updateSource: String,
metricsReporter: Option[MetricsReporter],
metricsReportingInterval: Duration,
indexServiceConfig: IndexServiceConfig,
indexerConfig: IndexerConfig,
waitForUserInput: Boolean,
minUpdateRate: Option[Long],
participantId: Ref.ParticipantId,
dataSource: ParticipantDataSourceConfig,
)
object Config {
private[indexerbenchmark] val DefaultConfig: Config = Config(
updateCount = None,
updateSource = "",
metricsReporter = None,
metricsReportingInterval = Duration.ofSeconds(1),
indexServiceConfig = IndexServiceConfig(),
indexerConfig = IndexerConfig(
startupMode = IndexerStartupMode.MigrateAndStart()
),
waitForUserInput = false,
minUpdateRate = None,
participantId = Ref.ParticipantId.assertFromString("IndexerBenchmarkParticipant"),
dataSource = ParticipantDataSourceConfig(""),
)
private[this] val Parser: OptionParser[Config] =
new OptionParser[Config]("indexer-benchmark") {
head("Indexer Benchmark")
note(
s"Measures the performance of the indexer"
)
help("help")
arg[String]("source")
.text("The name of the source of state updates.")
.action((value, config) => config.copy(updateSource = value))
opt[Int]("indexer-input-mapping-parallelism")
.text("Sets the value of IndexerConfig.inputMappingParallelism.")
.action((value, config) =>
config.copy(indexerConfig = config.indexerConfig.copy(inputMappingParallelism = value))
)
opt[Int]("indexer-ingestion-parallelism")
.text("Sets the value of IndexerConfig.ingestionParallelism.")
.action((value, config) =>
config.copy(indexerConfig = config.indexerConfig.copy(ingestionParallelism = value))
)
opt[Int]("indexer-batching-parallelism")
.text("Sets the value of IndexerConfig.batchingParallelism.")
.action((value, config) =>
config.copy(indexerConfig = config.indexerConfig.copy(batchingParallelism = value))
)
opt[Long]("indexer-submission-batch-size")
.text("Sets the value of IndexerConfig.submissionBatchSize.")
.action((value, config) =>
config.copy(indexerConfig = config.indexerConfig.copy(submissionBatchSize = value))
)
opt[Boolean]("indexer-enable-compression")
.text("Sets the value of IndexerConfig.enableCompression.")
.action((value, config) =>
config.copy(indexerConfig = config.indexerConfig.copy(enableCompression = value))
)
opt[Int]("indexer-max-input-buffer-size")
.text("Sets the value of IndexerConfig.maxInputBufferSize.")
.action((value, config) =>
config.copy(indexerConfig = config.indexerConfig.copy(maxInputBufferSize = value))
)
opt[String]("jdbc-url")
.text(
"The JDBC URL of the index database. Default: the benchmark will run against an ephemeral Postgres database."
)
.action((value, config) => config.copy(dataSource = ParticipantDataSourceConfig(value)))
opt[Long]("update-count")
.text(
"The maximum number of updates to process. Default: consume the entire input stream once (the app will not terminate if the input stream is endless)."
)
.action((value, config) => config.copy(updateCount = Some(value)))
opt[Boolean]("wait-for-user-input")
.text(
"If enabled, the app will wait for user input after the benchmark has finished, but before cleaning up resources. Use to inspect the contents of an ephemeral index database."
)
.action((value, config) => config.copy(waitForUserInput = value))
opt[Long]("min-update-rate")
.text(
"Minimum value of the processed updates per second. If not satisfied the application will report an error."
)
.action((value, config) => config.copy(minUpdateRate = Some(value)))
opt[MetricsReporter]("metrics-reporter")
.optional()
.text(s"Start a metrics reporter. ${MetricsReporter.cliHint}")
.action((reporter, config) => config.copy(metricsReporter = Some(reporter)))
opt[Duration]("metrics-reporting-interval")
.optional()
.text("Set metric reporting interval.")
.action((interval, config) => config.copy(metricsReportingInterval = interval))
}
def parse(args: collection.Seq[String]): Option[Config] =
Parser.parse(args, Config.DefaultConfig)
}

View File

@ -1,209 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.indexerbenchmark
import java.util.concurrent.{Executors, TimeUnit}
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.Materializer
import akka.stream.scaladsl.Source
import com.codahale.metrics.MetricRegistry
import com.daml.ledger.api.health.{HealthStatus, Healthy}
import com.daml.ledger.configuration.{Configuration, LedgerInitialConditions, LedgerTimeModel}
import com.daml.ledger.offset.Offset
import com.daml.ledger.participant.state.v2.{ReadService, Update}
import com.daml.ledger.resources.{Resource, ResourceContext, ResourceOwner}
import com.daml.lf.data.Time
import com.daml.logging.LoggingContext
import com.daml.logging.LoggingContext.newLoggingContext
import com.daml.metrics.api.dropwizard.DropwizardMetricsFactory
import com.daml.metrics.api.opentelemetry.OpenTelemetryMetricsFactory
import com.daml.metrics.api.testing.{InMemoryMetricsFactory, ProxyMetricsFactory}
import com.daml.metrics.{JvmMetricSet, Metrics}
import com.daml.platform.LedgerApiServer
import com.daml.platform.indexer.{Indexer, IndexerServiceOwner, JdbcIndexer}
import com.daml.resources
import com.daml.telemetry.OpenTelemetryOwner
import org.slf4j.LoggerFactory
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, ExecutionContext, ExecutionContextExecutor, Future}
import scala.io.StdIn
class IndexerBenchmark() {
def run(
createUpdates: () => Future[Source[(Offset, Update), NotUsed]],
config: Config,
): Future[Unit] = {
newLoggingContext { implicit loggingContext =>
val system = ActorSystem("IndexerBenchmark")
implicit val materializer: Materializer = Materializer(system)
implicit val resourceContext: ResourceContext = ResourceContext(system.dispatcher)
val indexerExecutor = Executors.newWorkStealingPool()
val indexerExecutionContext = ExecutionContext.fromExecutor(indexerExecutor)
println("Generating state updates...")
val updates = Await.result(createUpdates(), Duration(10, "minute"))
println("Creating read service and indexer...")
val readService = createReadService(updates)
val resource = for {
metrics <- metricsResource(config).acquire()
servicesExecutionContext <- ResourceOwner
.forExecutorService(() => Executors.newWorkStealingPool())
.map(ExecutionContext.fromExecutorService)
.acquire()
(inMemoryState, inMemoryStateUpdaterFlow) <-
LedgerApiServer
.createInMemoryStateAndUpdater(
config.indexServiceConfig,
metrics,
indexerExecutionContext,
)
.acquire()
indexerFactory = new JdbcIndexer.Factory(
config.participantId,
config.dataSource,
config.indexerConfig,
readService,
metrics,
inMemoryState,
inMemoryStateUpdaterFlow,
servicesExecutionContext,
)
_ = println("Setting up the index database...")
indexer <- indexer(config, indexerExecutionContext, indexerFactory)
_ = println("Starting the indexing...")
startTime = System.nanoTime()
handle <- indexer.acquire()
_ <- Resource.fromFuture(handle)
stopTime = System.nanoTime()
_ = println("Indexing done.")
_ <- Resource.fromFuture(system.terminate())
_ = indexerExecutor.shutdown()
} yield {
val result = new IndexerBenchmarkResult(
config,
metrics,
startTime,
stopTime,
)
println(result.banner)
// Note: this allows the user to inspect the contents of an ephemeral database
if (config.waitForUserInput) {
println(
s"Index database is still running at ${config.dataSource.jdbcUrl}."
)
StdIn.readLine("Press <enter> to terminate this process.")
}
if (result.failure) throw new RuntimeException("Indexer Benchmark failure.")
()
}
resource.asFuture
}
}
private def indexer(
config: Config,
indexerExecutionContext: ExecutionContextExecutor,
indexerFactory: JdbcIndexer.Factory,
)(implicit
loggingContext: LoggingContext,
rc: ResourceContext,
): resources.Resource[ResourceContext, Indexer] =
Await
.result(
IndexerServiceOwner
.migrateOnly(config.dataSource.jdbcUrl)
.map(_ => indexerFactory.initialized())(indexerExecutionContext),
Duration(5, "minute"),
)
.acquire()
private def metricsResource(config: Config) = {
OpenTelemetryOwner(setAsGlobal = true, config.metricsReporter, Seq.empty).flatMap {
openTelemetry =>
val registry = new MetricRegistry
val dropwizardFactory = new DropwizardMetricsFactory(registry)
val openTelemetryFactory =
new OpenTelemetryMetricsFactory(openTelemetry.getMeter("indexer-benchmark"))
val inMemoryMetricFactory = new InMemoryMetricsFactory
JvmMetricSet.registerObservers()
registry.registerAll(new JvmMetricSet)
val metrics = new Metrics(
new ProxyMetricsFactory(
dropwizardFactory,
inMemoryMetricFactory,
),
new ProxyMetricsFactory(openTelemetryFactory, inMemoryMetricFactory),
registry,
)
config.metricsReporter
.fold(ResourceOwner.unit)(reporter =>
ResourceOwner
.forCloseable(() => reporter.register(metrics.registry))
.map(_.start(config.metricsReportingInterval.getSeconds, TimeUnit.SECONDS))
)
.map(_ => metrics)
}
}
private[this] def createReadService(
updates: Source[(Offset, Update), NotUsed]
): ReadService = {
val initialConditions = LedgerInitialConditions(
IndexerBenchmark.LedgerId,
Configuration(
generation = 0,
timeModel = LedgerTimeModel.reasonableDefault,
maxDeduplicationDuration = java.time.Duration.ofDays(1),
),
Time.Timestamp.Epoch,
)
new ReadService {
override def ledgerInitialConditions(): Source[LedgerInitialConditions, NotUsed] = {
Source.single(initialConditions)
}
override def stateUpdates(
beginAfter: Option[Offset]
)(implicit loggingContext: LoggingContext): Source[(Offset, Update), NotUsed] = {
assert(beginAfter.isEmpty, s"beginAfter is $beginAfter")
updates
}
override def currentHealth(): HealthStatus = Healthy
}
}
}
object IndexerBenchmark {
private val logger = LoggerFactory.getLogger(getClass)
val LedgerId = "IndexerBenchmarkLedger"
def runAndExit(
config: Config,
updates: () => Future[Source[(Offset, Update), NotUsed]],
): Unit = {
val result: Future[Unit] = new IndexerBenchmark()
.run(updates, config)
.recover { case ex =>
logger.error("Error running benchmark", ex)
sys.exit(1)
}(scala.concurrent.ExecutionContext.Implicits.global)
Await.result(result, Duration(100, "hour"))
println("Done.")
// TODO: some actor system or thread pool is still running, preventing a shutdown
sys.exit(0)
}
}

View File

@ -1,193 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.indexerbenchmark
import java.util.concurrent.TimeUnit
import com.codahale.metrics.Snapshot
import com.daml.metrics.Metrics
import com.daml.metrics.api.MetricHandle.{Counter, Histogram, Timer}
import com.daml.metrics.api.dropwizard.{DropwizardCounter, DropwizardHistogram, DropwizardTimer}
import com.daml.metrics.api.noop.{NoOpCounter, NoOpTimer}
import com.daml.metrics.api.testing.InMemoryMetricsFactory.{
InMemoryCounter,
InMemoryHistogram,
InMemoryTimer,
}
import com.daml.metrics.api.testing.MetricValues
import com.daml.metrics.api.testing.ProxyMetricsFactory.{ProxyCounter, ProxyHistogram, ProxyTimer}
import scala.concurrent.duration.DurationDouble
class IndexerBenchmarkResult(
config: Config,
metrics: Metrics,
startTimeInNano: Long,
stopTimeInNano: Long,
) extends MetricValues {
private val duration: Double =
(stopTimeInNano - startTimeInNano).toDouble.nanos.toUnit(TimeUnit.SECONDS)
private val updates: Long = counterState(metrics.daml.parallelIndexer.updates)
private val updateRate: Double = updates / duration
val (failure, minimumUpdateRateFailureInfo): (Boolean, String) =
config.minUpdateRate match {
case Some(requiredMinUpdateRate) if requiredMinUpdateRate > updateRate =>
(
true,
s"[failure][UpdateRate] Minimum number of updates per second: required: $requiredMinUpdateRate, metered: $updateRate",
)
case _ => (false, "")
}
val banner =
s"""
|--------------------------------------------------------------------------------
|Indexer benchmark results
|--------------------------------------------------------------------------------
|
|Input:
| source: ${config.updateSource}
| count: ${config.updateCount}
| required updates/sec: ${config.minUpdateRate.getOrElse("-")}
| jdbcUrl: ${config.dataSource.jdbcUrl}
|
|Indexer parameters:
| maxInputBufferSize: ${config.indexerConfig.maxInputBufferSize}
| inputMappingParallelism: ${config.indexerConfig.inputMappingParallelism}
| ingestionParallelism: ${config.indexerConfig.ingestionParallelism}
| submissionBatchSize: ${config.indexerConfig.submissionBatchSize}
| full indexer config: ${config.indexerConfig}
|
|Result:
| duration: $duration
| updates: $updates
| updates/sec: $updateRate
| $minimumUpdateRateFailureInfo
|
|Other metrics:
| inputMapping.batchSize: ${histogramToString(
metrics.daml.parallelIndexer.inputMapping.batchSize
)}
| seqMapping.duration: ${timerToString(
metrics.daml.parallelIndexer.seqMapping.duration
)}|
| seqMapping.duration.rate: ${timerMeanRate(
metrics.daml.parallelIndexer.seqMapping.duration
)}|
| ingestion.duration: ${timerToString(
metrics.daml.parallelIndexer.ingestion.executionTimer
)}
| ingestion.duration.rate: ${timerMeanRate(
metrics.daml.parallelIndexer.ingestion.executionTimer
)}
| tailIngestion.duration: ${timerToString(
metrics.daml.parallelIndexer.tailIngestion.executionTimer
)}
| tailIngestion.duration.rate: ${timerMeanRate(
metrics.daml.parallelIndexer.tailIngestion.executionTimer
)}
|
|Notes:
| The above numbers include all ingested updates, including package uploads.
| Inspect the metrics using a metrics reporter to better investigate how
| the indexer performs.
|
|--------------------------------------------------------------------------------
|""".stripMargin
private[this] def histogramToString(histogram: Histogram): String = {
histogram match {
case DropwizardHistogram(_, metric) =>
val data = metric.getSnapshot
dropwizardSnapshotToString(data)
case _: InMemoryHistogram =>
recordedHistogramValuesToString(histogram.values)
case ProxyHistogram(_, targets) =>
targets
.collectFirst { case inMemory: InMemoryHistogram =>
inMemory
}
.fold(throw new IllegalArgumentException(s"Histogram $histogram cannot be printed."))(
histogramToString
)
case other => throw new IllegalArgumentException(s"Metric $other not supported")
}
}
private[this] def timerToString(timer: Timer): String = {
timer match {
case DropwizardTimer(_, metric) =>
val data = metric.getSnapshot
dropwizardSnapshotToString(data)
case NoOpTimer(_) => ""
case _: InMemoryTimer =>
recordedHistogramValuesToString(timer.values)
case ProxyTimer(_, targets) =>
targets
.collectFirst { case inMemory: InMemoryTimer =>
inMemory
}
.fold(throw new IllegalArgumentException(s"Timer $timer cannot be printed."))(
timerToString
)
case other => throw new IllegalArgumentException(s"Metric $other not supported")
}
}
private[this] def timerMeanRate(timer: Timer): Double = {
timer match {
case DropwizardTimer(_, metric) =>
metric.getMeanRate
case NoOpTimer(_) => 0
case timer: InMemoryTimer =>
timer.data.values.size.toDouble / duration
case ProxyTimer(_, targets) =>
targets
.collectFirst { case inMemory: InMemoryTimer =>
inMemory
}
.fold(throw new IllegalArgumentException(s"Timer $timer cannot be printed."))(
timerMeanRate
)
case other => throw new IllegalArgumentException(s"Metric $other not supported")
}
}
private[this] def counterState(counter: Counter): Long = {
counter match {
case DropwizardCounter(_, metric) =>
metric.getCount
case NoOpCounter(_) => 0
case InMemoryCounter(_, _) => counter.value
case ProxyCounter(_, targets) =>
targets
.collectFirst { case inMemory: InMemoryCounter =>
inMemory
}
.fold(throw new IllegalArgumentException(s"Counter $counter cannot be printed."))(
counterState
)
case other => throw new IllegalArgumentException(s"Metric $other not supported")
}
}
private def dropwizardSnapshotToString(data: Snapshot) = {
s"[min: ${data.getMin}, median: ${data.getMedian}, max: ${data.getMax}"
}
private def recordedHistogramValuesToString(data: Seq[Long]) = {
s"[min: ${data.min}, median: ${median(data)}, max: ${data.max}"
}
private def median(data: Seq[Long]) = {
val sorted = data.sorted
if (sorted.size % 2 == 0) {
(sorted(sorted.size / 2 - 1) + sorted(sorted.size / 2)) / 2
} else {
sorted(sorted.size / 2)
}
}
}

View File

@ -45,20 +45,6 @@ da_scala_library(
],
)
da_scala_binary(
name = "ledger-api-auth-bin",
srcs = glob(["src/app/scala/**/*.scala"]),
main_class = "com.daml.ledger.api.auth.Main",
scala_deps = [
"@maven//:com_github_scopt_scopt",
"@maven//:org_scalaz_scalaz_core",
],
deps = [
":ledger-api-auth",
"//libs-scala/jwt",
],
)
da_scala_test_suite(
name = "ledger-api-auth-scala-tests",
srcs = glob(["src/test/suite/**/*.scala"]),
@ -85,7 +71,6 @@ da_scala_test_suite(
"//ledger/ledger-api-domain",
"//ledger/ledger-api-errors",
"//ledger/participant-local-store",
"//ledger/participant-state-index",
"//libs-scala/adjustable-clock",
"//libs-scala/contextualized-logging",
"//libs-scala/jwt",

View File

@ -1,262 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.api.auth
import java.io.File
import java.nio.charset.StandardCharsets
import java.nio.file.Files
import java.security.interfaces.RSAPublicKey
import java.time.Instant
import com.daml.jwt.domain.{DecodedJwt, Jwt}
import com.daml.jwt.{JwtSigner, KeyUtils}
import scalaz.syntax.show._
object Main {
object ErrorCodes {
val InvalidUsage = 100
val GenerateTokensError = 101
}
final case class Config(
command: Option[Command] = None
)
sealed abstract class Command
final case class GenerateJwks(
output: Option[File] = None,
publicKeys: List[File] = List(),
) extends Command
final case class GenerateToken(
output: Option[File] = None,
signingKey: Option[File] = None,
ledgerId: Option[String] = None,
applicationId: Option[String] = None,
exp: Option[Instant] = None,
kid: Option[String] = None,
parties: List[String] = List(),
readOnly: Boolean = false,
admin: Boolean = false,
) extends Command
/** By default, RSA key Ids are generated from their file name. */
private[this] def defaultKeyId(file: File): String = {
val fileName = file.getName
val pos = fileName.lastIndexOf(".")
if (pos > 0 && pos < (fileName.length - 1)) {
fileName.substring(0, pos)
} else {
fileName
}
}
def main(args: Array[String]): Unit = {
parseConfig(args) match {
case Some(Config(Some(GenerateJwks(Some(outputFile), publicKeys)))) =>
// Load RSA keys. The ID of each key is its file name.
val keys: Map[String, RSAPublicKey] = publicKeys
.map(f =>
defaultKeyId(f) -> KeyUtils
.readRSAPublicKeyFromCrt(f)
.fold(
t =>
handleGenerateTokensError(
"Error loading RSA public key from a X509 certificate file."
)(t.getMessage),
x => x,
)
)
.toMap
// Generate and write JWKS for all keys
val jwks = KeyUtils.generateJwks(keys)
Files.write(outputFile.toPath, jwks.getBytes(StandardCharsets.UTF_8))
()
case Some(
Config(
Some(
GenerateToken(
Some(outputFile),
Some(signingKeyFile),
ledgerIdO,
applicationIdO,
exp,
kid,
parties,
readOnly @ _,
admin,
)
)
)
) =>
val keyId = kid.getOrElse(defaultKeyId(signingKeyFile))
val payload = CustomDamlJWTPayload(
ledgerIdO,
None,
applicationIdO,
exp,
admin,
parties,
parties,
)
val signingKey = KeyUtils
.readRSAPrivateKeyFromDer(signingKeyFile)
.fold(
t =>
handleGenerateTokensError(
"Error loading RSA private key from a PKCS8/DER file. Use the following command to convert a PEM encoded private key: openssl pkcs8 -topk8 -inform PEM -outform DER -in private-key.pem -nocrypt > private-key.der."
)(t.getMessage),
x => x,
)
val jwtPayload = AuthServiceJWTCodec.compactPrint(payload)
val jwtHeader = s"""{"alg": "RS256", "typ": "JWT", "kid": "$keyId"}"""
val signed: Jwt = JwtSigner.RSA256
.sign(DecodedJwt(jwtHeader, jwtPayload), signingKey)
.valueOr(e => handleGenerateTokensError("Error signing JWT token")(e.shows))
def changeExtension(file: File, extension: String): File = {
val filename = file.getName
new File(file.getParentFile, filename + extension)
}
Files.write(outputFile.toPath, signed.value.getBytes(StandardCharsets.UTF_8))
Files.write(
changeExtension(outputFile, "-bearer.txt").toPath,
signed.value.getBytes(StandardCharsets.UTF_8),
)
Files.write(
changeExtension(outputFile, "-payload.json").toPath,
jwtPayload.getBytes(StandardCharsets.UTF_8),
)
Files.write(
changeExtension(outputFile, "-header.json").toPath,
jwtHeader.getBytes(StandardCharsets.UTF_8),
)
()
case Some(_) =>
configParser.displayToErr(configParser.usage)
sys.exit(ErrorCodes.InvalidUsage)
case None =>
sys.exit(ErrorCodes.InvalidUsage)
}
}
private def handleGenerateTokensError(message: String)(details: String): Nothing = {
Console.println(s"$message. Details: $details")
sys.exit(ErrorCodes.GenerateTokensError)
}
private def parseConfig(args: collection.Seq[String]): Option[Config] = {
configParser.parse(args, Config())
}
private val configParser = new scopt.OptionParser[Config]("ledger-api-auth") {
cmd("generate-jwks")
.text("Generate a JWKS JSON object for the given set of RSA public keys")
.action((_, c) => c.copy(command = Some(GenerateJwks())))
.children(
opt[File]("output")
.required()
.text("The output file")
.valueName("<paths>")
.action((x, c) =>
c.copy(command = c.command.map(_.asInstanceOf[GenerateJwks].copy(output = Some(x))))
),
opt[Seq[File]]("keys")
.required()
.text("List of RSA certificates (.crt)")
.valueName("<paths>")
.action((x, c) =>
c.copy(
command = c.command.map(_.asInstanceOf[GenerateJwks].copy(publicKeys = x.toList))
)
),
)
cmd("generate-token")
.text("Generate a signed access token for the Daml ledger API")
.action((_, c) => c.copy(command = Some(GenerateToken())))
.children(
opt[File]("output")
.required()
.text("The output file")
.valueName("<paths>")
.action((x, c) =>
c.copy(command = c.command.map(_.asInstanceOf[GenerateToken].copy(output = Some(x))))
),
opt[File]("key")
.required()
.text("The RSA private key (.der)")
.valueName("<path>")
.action((x, c) =>
c.copy(
command = c.command.map(_.asInstanceOf[GenerateToken].copy(signingKey = Some(x)))
)
),
opt[Seq[String]]("parties")
.required()
.text("Parties to generate tokens for")
.valueName("<list of parties>")
.action((x, c) =>
c.copy(command = c.command.map(_.asInstanceOf[GenerateToken].copy(parties = x.toList)))
),
opt[String]("ledgerId")
.optional()
.text(
"Restrict validity of the token to this ledger ID. Default: None, token is valid for all ledgers."
)
.action((x, c) =>
c.copy(command = c.command.map(_.asInstanceOf[GenerateToken].copy(ledgerId = Some(x))))
),
opt[String]("applicationId")
.optional()
.text(
"Restrict validity of the token to this application ID. Default: None, token is valid for all applications."
)
.action((x, c) =>
c.copy(command =
c.command.map(_.asInstanceOf[GenerateToken].copy(applicationId = Some(x)))
)
),
opt[String]("exp")
.optional()
.text("Token expiration date, in ISO 8601 format. Default: no expiration date.")
.action((x, c) =>
c.copy(command =
c.command.map(_.asInstanceOf[GenerateToken].copy(exp = Some(Instant.parse(x))))
)
),
opt[String]("kid")
.optional()
.text("The key id, as used in JWKS. Default: the file name of the RSA private key.")
.action((x, c) =>
c.copy(command =
c.command.map(_.asInstanceOf[GenerateToken].copy(kid = Some(x)))
)
),
opt[Boolean]("admin")
.optional()
.text("If set, authorizes the bearer to use admin endpoints. Default: false")
.action((x, c) =>
c.copy(command = c.command.map(_.asInstanceOf[GenerateToken].copy(admin = x)))
),
opt[Boolean]("readonly")
.optional()
.text("If set, prevents the bearer from acting on the ledger. Default: false")
.action((x, c) =>
c.copy(command = c.command.map(_.asInstanceOf[GenerateToken].copy(readonly = x)))
),
)
}
}
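
For illustration, a sketch of how the parser defined above could be exercised. The argument values are hypothetical, and `parseConfig` is private to this object, so this only shows the expected shape of the result:

```
// Hypothetical invocation of the CLI parser defined above (illustration only).
val maybeConfig: Option[Config] = parseConfig(
  Seq(
    "generate-token",
    "--output", "/tmp/token",
    "--key", "/tmp/private-key.der",
    "--parties", "Alice,Bob",
    "--ledgerId", "my-ledger",
    "--admin", "true",
  )
)
// On success, maybeConfig.flatMap(_.command) holds a GenerateToken with
// parties = List("Alice", "Bob"), ledgerId = Some("my-ledger") and admin = true.
```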

View File

@ -1,134 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.api.auth
import com.auth0.jwt.JWT
import com.daml.jwt.{JwtFromBearerHeader, JwtVerifier}
import com.daml.jwt.domain.DecodedJwt
import com.daml.ledger.api.domain.IdentityProviderId
import com.daml.logging.{ContextualizedLogger, LoggingContext}
import io.grpc.Metadata
import spray.json._
import com.daml.jwt.{Error => JwtError}
import com.daml.ledger.api.auth.interceptor.IdentityProviderAwareAuthService
import scala.concurrent.{ExecutionContext, Future}
class IdentityProviderAwareAuthServiceImpl(
identityProviderConfigLoader: IdentityProviderConfigLoader,
jwtVerifierLoader: JwtVerifierLoader,
)(implicit
executionContext: ExecutionContext,
loggingContext: LoggingContext,
) extends IdentityProviderAwareAuthService {
private implicit val logger: ContextualizedLogger = ContextualizedLogger.get(getClass)
def decodeMetadata(headers: Metadata): Future[ClaimSet] =
getAuthorizationHeader(headers) match {
case None => Future.successful(ClaimSet.Unauthenticated)
case Some(header) =>
parseJWTPayload(header).recover { case error =>
// Even though authorization of the token via an identity provider failed, it may still
// be accepted by other means of authorization, e.g. the default auth service.
logger.warn("Failed to authorize the token: " + error.getMessage)
ClaimSet.Unauthenticated
}
}
private def getAuthorizationHeader(headers: Metadata): Option[String] =
Option(headers.get(AuthService.AUTHORIZATION_KEY))
private def parseJWTPayload(
header: String
): Future[ClaimSet] =
for {
token <- toFuture(JwtFromBearerHeader(header))
decodedJWT <- Future(JWT.decode(token))
claims <- extractClaims(
token,
Option(decodedJWT.getIssuer),
Option(decodedJWT.getKeyId),
)
} yield claims
def extractClaims(
token: String,
issuer: Option[String],
keyId: Option[String],
): Future[ClaimSet] = {
issuer match {
case None => Future.successful(ClaimSet.Unauthenticated)
case Some(issuer) =>
for {
identityProviderConfig <- identityProviderConfigLoader
.getIdentityProviderConfig(issuer)
verifier <- jwtVerifierLoader.loadJwtVerifier(
jwksUrl = identityProviderConfig.jwksUrl,
keyId,
)
decodedJwt <- verifyToken(token, verifier)
payload <- Future(
parse(decodedJwt.payload, targetAudience = identityProviderConfig.audience)
)
_ <- checkAudience(payload, identityProviderConfig.audience)
jwtPayload <- parsePayload(payload)
} yield toAuthenticatedUser(jwtPayload, identityProviderConfig.identityProviderId)
}
}
private def checkAudience(
payload: AuthServiceJWTPayload,
targetAudience: Option[String],
): Future[Unit] =
(payload, targetAudience) match {
case (payload: StandardJWTPayload, Some(audience)) if payload.audiences.contains(audience) =>
Future.unit
case (_, None) =>
Future.unit
case _ =>
Future.failed(new Exception(s"JWT token has an audience which is not recognized"))
}
private def verifyToken(token: String, verifier: JwtVerifier): Future[DecodedJwt[String]] =
toFuture(verifier.verify(com.daml.jwt.domain.Jwt(token)).toEither)
private def toFuture[T](e: Either[JwtError, T]): Future[T] =
e.fold(err => Future.failed(new Exception(err.message)), Future.successful)
private def parsePayload(
jwtPayload: AuthServiceJWTPayload
): Future[StandardJWTPayload] =
jwtPayload match {
case _: CustomDamlJWTPayload =>
Future.failed(new Exception("Unexpected token payload format"))
case payload: StandardJWTPayload =>
Future.successful(payload)
}
private def parse(jwtPayload: String, targetAudience: Option[String]): AuthServiceJWTPayload =
if (targetAudience.isDefined)
parseAudienceBasedPayload(jwtPayload)
else
parseAuthServicePayload(jwtPayload)
private def parseAuthServicePayload(jwtPayload: String): AuthServiceJWTPayload = {
import AuthServiceJWTCodec.JsonImplicits._
JsonParser(jwtPayload).convertTo[AuthServiceJWTPayload]
}
private[this] def parseAudienceBasedPayload(
jwtPayload: String
): AuthServiceJWTPayload = {
import AuthServiceJWTCodec.AudienceBasedTokenJsonImplicits._
JsonParser(jwtPayload).convertTo[AuthServiceJWTPayload]
}
private def toAuthenticatedUser(payload: StandardJWTPayload, id: IdentityProviderId.Id) =
ClaimSet.AuthenticatedUser(
identityProviderId = id,
participantId = payload.participantId,
userId = payload.userId,
expiration = payload.exp,
)
}
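
A minimal sketch of how this service is wired and used, assuming the caller supplies concrete loader instances and the incoming gRPC metadata (all names below are assumptions, not part of the removed sources; the imports of this file are assumed to be in scope):

```
// Sketch: decode the Authorization header of an incoming call into a ClaimSet.
def authorize(
    configLoader: IdentityProviderConfigLoader,
    verifierLoader: JwtVerifierLoader,
    headers: Metadata,
)(implicit ec: ExecutionContext): Future[ClaimSet] =
  LoggingContext.newLoggingContext { implicit loggingContext =>
    new IdentityProviderAwareAuthServiceImpl(configLoader, verifierLoader)
      .decodeMetadata(headers) // unknown issuers fall back to ClaimSet.Unauthenticated
  }
```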

View File

@ -1,17 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.api.auth
import com.daml.ledger.api.domain.IdentityProviderConfig
import com.daml.logging.LoggingContext
import scala.concurrent.Future
trait IdentityProviderConfigLoader {
def getIdentityProviderConfig(issuer: String)(implicit
loggingContext: LoggingContext
): Future[IdentityProviderConfig]
}
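
For illustration, a minimal in-memory implementation of this trait, assuming the caller already holds the configurations keyed by issuer (the class name and error message are made up):

```
final class StaticIdentityProviderConfigLoader(
    configsByIssuer: Map[String, IdentityProviderConfig]
) extends IdentityProviderConfigLoader {
  override def getIdentityProviderConfig(issuer: String)(implicit
      loggingContext: LoggingContext
  ): Future[IdentityProviderConfig] =
    configsByIssuer.get(issuer) match {
      case Some(config) => Future.successful(config)
      case None =>
        Future.failed(new NoSuchElementException(s"No identity provider configured for issuer $issuer"))
    }
}
```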

View File

@ -110,10 +110,7 @@ da_scala_test_suite(
"//ledger/ledger-api-common",
"//ledger/ledger-api-domain",
"//ledger/ledger-configuration",
"//ledger/ledger-runner-common",
"//ledger/metrics",
"//ledger/participant-integration-api",
"//ledger/participant-state",
"//libs-scala/caching",
"//libs-scala/concurrent",
"//libs-scala/contextualized-logging",

View File

@ -132,7 +132,6 @@ da_scala_test_suite(
"//ledger/ledger-api-health",
"//ledger/ledger-offset",
"//ledger/metrics",
"//ledger/participant-state-index",
"//libs-scala/concurrent",
"//libs-scala/contextualized-logging",
"//libs-scala/grpc-utils",

View File

@ -20,8 +20,6 @@ da_scala_library(
"//daml-lf/validation",
"//ledger-api/grpc-definitions:ledger_api_proto_scala",
"//ledger/error",
"//ledger/participant-integration-api:participant-integration-api-proto_scala",
"//ledger/participant-state",
"//observability/metrics",
"@maven//:com_google_api_grpc_proto_google_common_protos",
"@maven//:io_grpc_grpc_api",

View File

@ -1,8 +1,7 @@
# Ledger error definitions
Home to error definitions commonly reported via the Ledger API server.
As opposed to definitions in `//ledger/participant-state-kv-errors`, these errors
are generic wrt to the ledger backend used by the participant server.
These errors are generic with respect to the ledger backend used by the participant server.
## Daml-LF dependencies

View File

@ -1,7 +1,7 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.participant.state.v2
package com.daml.error.definitions
import com.daml.lf.crypto.Hash
import com.daml.lf.data.Ref

View File

@ -5,7 +5,7 @@ package com.daml.error.definitions.groups
import java.time.Instant
import com.daml.error.definitions.{DamlErrorWithDefiniteAnswer, LedgerApiErrors}
import com.daml.error.definitions.{ChangeId, DamlErrorWithDefiniteAnswer, LedgerApiErrors}
import com.daml.error.{
ContextualizedErrorLogger,
ErrorCategory,
@ -14,7 +14,6 @@ import com.daml.error.{
Explanation,
Resolution,
}
import com.daml.ledger.participant.state.v2.ChangeId
import com.daml.lf.transaction.GlobalKey
import com.daml.lf.value.Value

View File

@ -10,7 +10,6 @@ proto_jars(
maven_group = "com.daml",
visibility = [
"//ledger/ledger-configuration:__subpackages__",
"//ledger/participant-state:__subpackages__",
],
deps = [
"@com_google_protobuf//:duration_proto",

View File

@ -2,7 +2,7 @@
// SPDX-License-Identifier: Apache-2.0
//
// The Daml Ledger configuration. Please refer to the spec
// (ledger/participant-state/protobuf/ledger_configuration.rst)
// (ledger/ledger-configuration/protobuf/ledger_configuration.rst)
// for detailed information on versions and semantics.
//
// version summary:

View File

@ -1,110 +0,0 @@
# Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
load(
"//bazel_tools:scala.bzl",
"da_scala_binary",
"da_scala_library",
"da_scala_test_suite",
)
da_scala_library(
name = "ledger-runner-common",
srcs = glob(["src/main/scala/**/*.scala"]),
scala_deps = [
"@maven//:com_github_scopt_scopt",
"@maven//:com_github_pureconfig_pureconfig_core",
"@maven//:com_github_pureconfig_pureconfig_generic",
"@maven//:com_chuusai_shapeless",
],
tags = ["maven_coordinates=com.daml:ledger-runner-common:__VERSION__"],
visibility = [
"//visibility:public",
],
runtime_deps = [
"@maven//:ch_qos_logback_logback_classic",
"@maven//:ch_qos_logback_logback_core",
],
deps = [
"//daml-lf/data",
"//daml-lf/engine",
"//daml-lf/language",
"//daml-lf/transaction",
"//language-support/scala/bindings",
"//ledger/ledger-api-auth",
"//ledger/ledger-api-common",
"//ledger/ledger-configuration",
"//ledger/metrics",
"//ledger/participant-integration-api",
"//libs-scala/contextualized-logging",
"//libs-scala/jwt",
"//libs-scala/ledger-resources",
"//libs-scala/ports",
"//libs-scala/resources",
"//observability/metrics",
"@maven//:com_typesafe_config",
"@maven//:io_netty_netty_handler",
],
)
da_scala_library(
name = "ledger-runner-common-test-lib",
srcs = glob(["src/test/lib/**/*.scala"]),
scala_deps = [
"@maven//:org_scalacheck_scalacheck",
],
deps = [
":ledger-runner-common",
"//daml-lf/data",
"//daml-lf/engine",
"//daml-lf/language",
"//daml-lf/transaction",
"//ledger/ledger-api-common",
"//ledger/participant-integration-api",
"//libs-scala/jwt",
"//libs-scala/ports",
"//observability/metrics",
"@maven//:io_netty_netty_handler",
],
)
da_scala_test_suite(
name = "ledger-runner-common-tests",
size = "medium",
srcs = glob(["src/test/scala/**/*.scala"]),
data = [
":src/test/resources/test.conf",
":src/test/resources/test2.conf",
":src/test/resources/testp.conf",
],
resources = glob(["src/test/resources/**/*"]),
scala_deps = [
"@maven//:com_github_scopt_scopt",
"@maven//:org_scalatest_scalatest_core",
"@maven//:org_scalatest_scalatest_matchers_core",
"@maven//:org_scalatest_scalatest_shouldmatchers",
"@maven//:com_github_pureconfig_pureconfig_core",
"@maven//:com_github_pureconfig_pureconfig_generic",
"@maven//:org_scalacheck_scalacheck",
"@maven//:org_scalatestplus_scalacheck_1_15",
"@maven//:com_chuusai_shapeless",
],
deps = [
":ledger-runner-common",
":ledger-runner-common-test-lib",
"//bazel_tools/runfiles:scala_runfiles",
"//daml-lf/data",
"//daml-lf/engine",
"//daml-lf/language",
"//daml-lf/transaction",
"//ledger/ledger-api-common",
"//ledger/metrics",
"//ledger/participant-integration-api",
"//libs-scala/jwt",
"//libs-scala/ports",
"//observability/metrics",
"@maven//:com_typesafe_config",
"@maven//:io_netty_netty_handler",
"@maven//:org_scalatest_scalatest_compatible",
],
)

View File

@ -1,749 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.runner.common
import com.daml.ledger.api.tls.TlsVersion.TlsVersion
import com.daml.ledger.api.tls.{SecretsUrl, TlsConfiguration}
import com.daml.lf.data.Ref
import com.daml.lf.engine.EngineConfig
import com.daml.lf.language.LanguageVersion
import com.daml.platform.apiserver.{AuthServiceConfig, AuthServiceConfigCli}
import com.daml.platform.apiserver.SeedService.Seeding
import com.daml.platform.config.ParticipantConfig
import com.daml.platform.configuration.Readers._
import com.daml.platform.configuration.{AcsStreamsConfig, CommandConfiguration, IndexServiceConfig}
import com.daml.platform.indexer.{IndexerConfig, IndexerStartupMode}
import com.daml.platform.localstore.UserManagementConfig
import com.daml.platform.services.time.TimeProviderType
import com.daml.ports.Port
import io.netty.handler.ssl.ClientAuth
import scopt.OParser
import java.io.File
import java.nio.file.Paths
import java.time.Duration
import java.util.UUID
import com.daml.metrics.api.reporters.MetricsReporter
import scala.jdk.DurationConverters.JavaDurationOps
final case class CliConfig[Extra](
engineConfig: EngineConfig,
authService: AuthServiceConfig,
acsContractFetchingParallelism: Int,
acsIdFetchingParallelism: Int,
acsIdPageSize: Int,
configurationLoadTimeout: Duration,
commandConfig: CommandConfiguration,
eventsPageSize: Int,
bufferedStreamsPageSize: Int,
bufferedEventsProcessingParallelism: Int,
extra: Extra,
ledgerId: String,
maxDeduplicationDuration: Option[Duration],
maxInboundMessageSize: Int,
metricsReporter: Option[MetricsReporter],
metricsReportingInterval: Duration,
mode: Mode,
participants: Seq[CliParticipantConfig],
seeding: Seeding,
timeProviderType: TimeProviderType,
tlsConfig: Option[TlsConfiguration],
userManagementConfig: UserManagementConfig,
maxTransactionsInMemoryFanOutBufferSize: Int,
configFiles: Seq[File] = Seq(),
configMap: Map[String, String] = Map(),
) {
def withTlsConfig(modify: TlsConfiguration => TlsConfiguration): CliConfig[Extra] =
copy(tlsConfig = Some(modify(tlsConfig.getOrElse(TlsConfiguration.Empty))))
def withUserManagementConfig(
modify: UserManagementConfig => UserManagementConfig
): CliConfig[Extra] =
copy(userManagementConfig = modify(userManagementConfig))
}
object CliConfig {
val DefaultPort: Port = Port(6865)
val DefaultMaxInboundMessageSize: Int = 64 * 1024 * 1024
def createDefault[Extra](extra: Extra): CliConfig[Extra] =
CliConfig(
engineConfig = EngineConfig(
allowedLanguageVersions = LanguageVersion.StableVersions,
profileDir = None,
stackTraceMode = false,
forbidV0ContractId = true,
),
authService = AuthServiceConfig.Wildcard,
acsContractFetchingParallelism = AcsStreamsConfig.DefaultAcsContractFetchingParallelism,
acsIdFetchingParallelism = AcsStreamsConfig.DefaultAcsIdFetchingParallelism,
acsIdPageSize = AcsStreamsConfig.DefaultAcsIdPageSize,
configurationLoadTimeout = Duration.ofSeconds(10),
commandConfig = CommandConfiguration.Default,
eventsPageSize = AcsStreamsConfig.DefaultEventsPageSize,
bufferedStreamsPageSize = IndexServiceConfig.DefaultBufferedStreamsPageSize,
bufferedEventsProcessingParallelism =
IndexServiceConfig.DefaultBufferedEventsProcessingParallelism,
extra = extra,
ledgerId = UUID.randomUUID().toString,
maxDeduplicationDuration = None,
maxInboundMessageSize = DefaultMaxInboundMessageSize,
metricsReporter = None,
metricsReportingInterval = Duration.ofSeconds(10),
mode = Mode.Run,
participants = Vector.empty,
seeding = Seeding.Strong,
timeProviderType = TimeProviderType.WallClock,
tlsConfig = None,
userManagementConfig = UserManagementConfig.default(enabled = false),
maxTransactionsInMemoryFanOutBufferSize =
IndexServiceConfig.DefaultMaxTransactionsInMemoryFanOutBufferSize,
)
private def checkNoEmptyParticipant[Extra](config: CliConfig[Extra]): Either[String, Unit] =
if (config.mode == Mode.RunLegacyCliConfig && config.participants.isEmpty)
OParser.builder[CliConfig[Extra]].failure("No --participant provided to run")
else
OParser.builder[CliConfig[Extra]].success
private def checkFileCanBeRead[Extra](config: CliConfig[Extra]): Either[String, Unit] = {
val fileErrors = config.configFiles.collect {
case file if !file.canRead => s"Could not read file ${file.getName}"
}
if (fileErrors.nonEmpty) {
OParser.builder[CliConfig[Extra]].failure(fileErrors.mkString(", "))
} else
OParser.builder[CliConfig[Extra]].success
}
def parse[Extra](
name: String,
extraOptions: OParser[_, CliConfig[Extra]],
defaultExtra: Extra,
args: collection.Seq[String],
getEnvVar: String => Option[String] = sys.env.get(_),
): Option[CliConfig[Extra]] = {
val builder = OParser.builder[CliConfig[Extra]]
val parser: OParser[_, CliConfig[Extra]] = OParser.sequence(
builder.head(s"$name as a service"),
builder.help("help").text("Print this help page."),
commandRunLegacy(name, getEnvVar, extraOptions),
commandDumpIndexMetadata,
commandRunHocon(name),
commandConvertConfig(getEnvVar, extraOptions),
commandPrintDefaultConfig(),
builder.checkConfig(checkNoEmptyParticipant),
)
OParser.parse[CliConfig[Extra]](
parser,
args,
createDefault(defaultExtra),
)
}
private def configKeyValueOption[Extra] =
OParser
.builder[CliConfig[Extra]]
.opt[Map[String, String]]('C', "config key-value's")
.text(
"Set configuration key value pairs directly. Can be useful for providing simple short config info."
)
.valueName("<key1>=<value1>,<key2>=<value2>,...")
.unbounded()
.action { (map, cli) =>
cli.copy(configMap = map ++ cli.configMap)
}
private def configFilesOption[Extra] =
OParser
.builder[CliConfig[Extra]]
.opt[Seq[File]]('c', "config")
.text(
"Set configuration file(s). If several configuration files assign values to the same key, the last value is taken."
)
.valueName("<file1>,<file2>,...")
.unbounded()
.action((files, cli) => cli.copy(configFiles = cli.configFiles ++ files))
private def commandRunHocon[Extra](name: String): OParser[_, CliConfig[Extra]] = {
val builder = OParser.builder[CliConfig[Extra]]
OParser.sequence(
builder
.cmd("run")
.text(
s"Run $name with configuration provided in HOCON files."
)
.action((_, config) => config.copy(mode = Mode.Run))
.children(
OParser.sequence(
configKeyValueOption,
configFilesOption,
)
),
builder.checkConfig(checkFileCanBeRead),
)
}
private def commandRunLegacy[Extra](
name: String,
getEnvVar: String => Option[String],
extraOptions: OParser[_, CliConfig[Extra]],
): OParser[_, CliConfig[Extra]] =
OParser
.builder[CliConfig[Extra]]
.cmd("run-legacy-cli-config")
.text(
s"Run $name in a legacy mode with cli-driven configuration."
)
.action((_, config) => config.copy(mode = Mode.RunLegacyCliConfig))
.children(legacyCommand(getEnvVar, extraOptions))
private def commandConvertConfig[Extra](
getEnvVar: String => Option[String],
extraOptions: OParser[_, CliConfig[Extra]],
): OParser[_, CliConfig[Extra]] =
OParser
.builder[CliConfig[Extra]]
.cmd("convert-config")
.text(
"Converts configuration provided as legacy CLI options into HOCON file."
)
.action((_, config) => config.copy(mode = Mode.ConvertConfig))
.children(legacyCommand(getEnvVar, extraOptions))
private def commandPrintDefaultConfig[Extra](): OParser[_, CliConfig[Extra]] = {
val builder = OParser.builder[CliConfig[Extra]]
builder
.cmd("print-default-config")
.text(
"Prints default config to stdout or to a file"
)
.action((_, config) => config.copy(mode = Mode.PrintDefaultConfig(None)))
.children(
builder
.arg[String]("<output-file-path>")
.minOccurs(0)
.text("An optional output file")
.action((outputFilePath, config) =>
config.copy(mode = Mode.PrintDefaultConfig(Some(Paths.get(outputFilePath))))
),
configKeyValueOption,
)
}
def legacyCommand[Extra](
getEnvVar: String => Option[String],
extraOptions: OParser[_, CliConfig[Extra]],
): OParser[_, CliConfig[Extra]] =
OParser.sequence(parser(getEnvVar), extraOptions)
private def commandDumpIndexMetadata[Extra]: OParser[_, CliConfig[Extra]] = {
val builder = OParser.builder[CliConfig[Extra]]
builder
.cmd("dump-index-metadata")
.text(
"Connect to the index db. Print ledger id, ledger end and integration API version and quit."
)
.children {
builder
.arg[String](" <jdbc-url>...")
.minOccurs(1)
.unbounded()
.text("The JDBC URL to connect to an index database")
.action((jdbcUrl, config) =>
config.copy(mode = config.mode match {
case Mode.DumpIndexMetadata(jdbcUrls) =>
Mode.DumpIndexMetadata(jdbcUrls :+ jdbcUrl)
case _ =>
Mode.DumpIndexMetadata(Vector(jdbcUrl))
})
)
}
}
private def parser[Extra](
getEnvVar: String => Option[String]
): OParser[_, CliConfig[Extra]] = {
val builder = OParser.builder[CliConfig[Extra]]
import builder._
val seedingMap =
Map[String, Seeding]("testing-weak" -> Seeding.Weak, "strong" -> Seeding.Strong)
OParser.sequence(
configKeyValueOption,
configFilesOption,
opt[Map[String, String]]("participant")
.unbounded()
.text(
"The configuration of a participant. Comma-separated pairs in the form key=value, with mandatory keys: [participant-id, port] and optional keys [" +
"address, " +
"port-file, " +
"server-jdbc-url, " +
"api-server-connection-pool-size" +
"api-server-connection-timeout" +
"management-service-timeout, " +
"indexer-connection-timeout, " +
"indexer-max-input-buffer-size, " +
"indexer-input-mapping-parallelism, " +
"indexer-ingestion-parallelism, " +
"indexer-batching-parallelism, " +
"indexer-submission-batch-size, " +
"indexer-tailing-rate-limit-per-second, " +
"indexer-batch-within-millis, " +
"indexer-enable-compression, " +
"contract-state-cache-max-size, " +
"contract-key-state-cache-max-size, " +
"]"
)
.action((kv, config) => {
val participantId = Ref.ParticipantId.assertFromString(kv("participant-id"))
val port = Port(kv("port").toInt)
val address = kv.get("address")
val portFile = kv.get("port-file").map(new File(_).toPath)
val jdbcUrlFromEnv =
kv.get("server-jdbc-url-env").flatMap(getEnvVar(_))
val jdbcUrl =
kv.getOrElse(
"server-jdbc-url",
jdbcUrlFromEnv
.getOrElse(ParticipantConfig.defaultIndexJdbcUrl(participantId)),
)
val apiServerConnectionPoolSize = kv
.get("api-server-connection-pool-size")
.map(_.toInt)
.getOrElse(CliParticipantConfig.DefaultApiServerDatabaseConnectionPoolSize)
val apiServerConnectionTimeout = kv
.get("api-server-connection-timeout")
.map(Duration.parse)
.map(_.toScala)
.getOrElse(CliParticipantConfig.DefaultApiServerDatabaseConnectionTimeout)
val indexerInputMappingParallelism = kv
.get("indexer-input-mapping-parallelism")
.map(_.toInt)
.getOrElse(IndexerConfig.DefaultInputMappingParallelism)
val indexerMaxInputBufferSize = kv
.get("indexer-max-input-buffer-size")
.map(_.toInt)
.getOrElse(IndexerConfig.DefaultMaxInputBufferSize)
val indexerBatchingParallelism = kv
.get("indexer-batching-parallelism")
.map(_.toInt)
.getOrElse(IndexerConfig.DefaultBatchingParallelism)
val indexerIngestionParallelism = kv
.get("indexer-ingestion-parallelism")
.map(_.toInt)
.getOrElse(IndexerConfig.DefaultIngestionParallelism)
val indexerSubmissionBatchSize = kv
.get("indexer-submission-batch-size")
.map(_.toLong)
.getOrElse(IndexerConfig.DefaultSubmissionBatchSize)
val indexerEnableCompression = kv
.get("indexer-enable-compression")
.map(_.toBoolean)
.getOrElse(IndexerConfig.DefaultEnableCompression)
val managementServiceTimeout = kv
.get("management-service-timeout")
.map(Duration.parse)
.map(_.toScala)
.getOrElse(CliParticipantConfig.DefaultManagementServiceTimeout)
val maxContractStateCacheSize = kv
.get("contract-state-cache-max-size")
.map(_.toLong)
.getOrElse(IndexServiceConfig.DefaultMaxContractStateCacheSize)
val maxContractKeyStateCacheSize = kv
.get("contract-key-state-cache-max-size")
.map(_.toLong)
.getOrElse(IndexServiceConfig.DefaultMaxContractKeyStateCacheSize)
val partConfig = CliParticipantConfig(
participantId = participantId,
address = address,
port = port,
portFile = portFile,
serverJdbcUrl = jdbcUrl,
indexerConfig = IndexerConfig(
startupMode = IndexerStartupMode.MigrateAndStart(allowExistingSchema = false),
maxInputBufferSize = indexerMaxInputBufferSize,
inputMappingParallelism = indexerInputMappingParallelism,
batchingParallelism = indexerBatchingParallelism,
ingestionParallelism = indexerIngestionParallelism,
submissionBatchSize = indexerSubmissionBatchSize,
enableCompression = indexerEnableCompression,
),
apiServerDatabaseConnectionPoolSize = apiServerConnectionPoolSize,
apiServerDatabaseConnectionTimeout = apiServerConnectionTimeout,
managementServiceTimeout = managementServiceTimeout,
maxContractStateCacheSize = maxContractStateCacheSize,
maxContractKeyStateCacheSize = maxContractKeyStateCacheSize,
)
config.copy(participants = config.participants :+ partConfig)
}),
opt[String]("ledger-id")
.optional()
.text(
"The ID of the ledger. This must be the same each time the ledger is started. Defaults to a random UUID."
)
.action((ledgerId, config) => config.copy(ledgerId = ledgerId)),
opt[String]("pem")
.optional()
.text(
"TLS: The pem file to be used as the private key. Use '.enc' filename suffix if the pem file is encrypted."
)
.action((path, config) =>
config.withTlsConfig(c => c.copy(privateKeyFile = Some(new File(path))))
),
opt[String]("tls-secrets-url")
.optional()
.text(
"TLS: URL of a secrets service that provide parameters needed to decrypt the private key. Required when private key is encrypted (indicated by '.enc' filename suffix)."
)
.action((url, config) =>
config.withTlsConfig(c => c.copy(secretsUrl = Some(SecretsUrl.fromString(url))))
),
checkConfig(c =>
c.tlsConfig.fold(success) { tlsConfig =>
if (
tlsConfig.privateKeyFile.isDefined
&& tlsConfig.privateKeyFile.get.getName.endsWith(".enc")
&& tlsConfig.secretsUrl.isEmpty
) {
failure(
"You need to provide a secrets server URL if the server's private key is an encrypted file."
)
} else {
success
}
}
),
opt[String]("crt")
.optional()
.text(
"TLS: The crt file to be used as the cert chain. Required if any other TLS parameters are set."
)
.action((path, config) =>
config.withTlsConfig(c => c.copy(certChainFile = Some(new File(path))))
),
opt[String]("cacrt")
.optional()
.text("TLS: The crt file to be used as the trusted root CA.")
.action((path, config) =>
config.withTlsConfig(c => c.copy(trustCollectionFile = Some(new File(path))))
),
opt[Boolean]("cert-revocation-checking")
.optional()
.text(
"TLS: enable/disable certificate revocation checks with the OCSP. Disabled by default."
)
.action((checksEnabled, config) =>
config.withTlsConfig(c => c.copy(enableCertRevocationChecking = checksEnabled))
),
opt[ClientAuth]("client-auth")
.optional()
.text(
"TLS: The client authentication mode. Must be one of none, optional or require. If TLS is enabled it defaults to require."
)
.action((clientAuth, config) => config.withTlsConfig(c => c.copy(clientAuth = clientAuth))),
opt[TlsVersion]("min-tls-version")
.optional()
.text(
"TLS: Indicates the minimum TLS version to enable. If specified must be either '1.2' or '1.3'."
)
.action((tlsVersion, config) =>
config.withTlsConfig(c => c.copy(minimumServerProtocolVersion = Some(tlsVersion)))
),
opt[Int]("max-commands-in-flight")
.optional()
.action((value, config) =>
config.copy(commandConfig = config.commandConfig.copy(maxCommandsInFlight = value))
)
.text(
s"Maximum number of submitted commands for which the CommandService is waiting to be completed in parallel, for each distinct set of parties, as specified by the `act_as` property of the command. Reaching this limit will cause new submissions to wait in the queue before being submitted. Default is ${CommandConfiguration.Default.maxCommandsInFlight}."
),
opt[Int]("input-buffer-size")
.optional()
.action((value, config) =>
config.copy(commandConfig = config.commandConfig.copy(inputBufferSize = value))
)
.text(
s"Maximum number of commands waiting to be submitted for each distinct set of parties, as specified by the `act_as` property of the command. Reaching this limit will cause the server to signal backpressure using the ``RESOURCE_EXHAUSTED`` gRPC status code. Default is ${CommandConfiguration.Default.inputBufferSize}."
),
opt[Duration]("tracker-retention-period")
.optional()
.action((value, config) =>
config.copy(commandConfig = config.commandConfig.copy(trackerRetentionPeriod = value))
)
.text(
"The duration that the command service will keep an active command tracker for a given set of parties." +
" A longer period cuts down on the tracker instantiation cost for a party that seldom acts." +
" A shorter period causes a quick removal of unused trackers." +
s" Default is ${CommandConfiguration.DefaultTrackerRetentionPeriod}."
),
opt[Duration]("max-deduplication-duration")
.optional()
.hidden()
.action((maxDeduplicationDuration, config) =>
config
.copy(maxDeduplicationDuration = Some(maxDeduplicationDuration))
)
.text(
"Maximum command deduplication duration."
),
opt[Int]("max-inbound-message-size")
.optional()
.text(
s"Max inbound message size in bytes. Defaults to ${CliConfig.DefaultMaxInboundMessageSize}."
)
.action((maxInboundMessageSize, config) =>
config.copy(maxInboundMessageSize = maxInboundMessageSize)
),
opt[Int]("events-page-size")
.optional()
.text(
s"Number of events fetched from the index for every round trip when serving streaming calls. Default is ${AcsStreamsConfig.DefaultEventsPageSize}."
)
.validate { pageSize =>
if (pageSize > 0) Right(())
else Left("events-page-size should be strictly positive")
}
.action((eventsPageSize, config) => config.copy(eventsPageSize = eventsPageSize)),
opt[Int]("buffered-streams-page-size")
.optional()
.text(
s"Number of transactions fetched from the buffer when serving streaming calls. Default is ${IndexServiceConfig.DefaultBufferedStreamsPageSize}."
)
.validate { pageSize =>
Either.cond(pageSize > 0, (), "buffered-streams-page-size should be strictly positive")
}
.action((pageSize, config) => config.copy(bufferedStreamsPageSize = pageSize)),
opt[Int]("ledger-api-transactions-buffer-max-size")
.optional()
.hidden()
.text(
s"Maximum size of the in-memory fan-out buffer used for serving Ledger API transaction streams. Default is ${IndexServiceConfig.DefaultMaxTransactionsInMemoryFanOutBufferSize}."
)
.validate(bufferSize =>
Either.cond(
bufferSize >= 0,
(),
"ledger-api-transactions-buffer-max-size must be greater than or equal to 0.",
)
)
.action((maxBufferSize, config) =>
config.copy(maxTransactionsInMemoryFanOutBufferSize = maxBufferSize)
),
opt[Int]("buffers-prefetching-parallelism")
.optional()
.text(
s"Number of events fetched/decoded in parallel for populating the Ledger API internal buffers. Default is ${IndexServiceConfig.DefaultBufferedEventsProcessingParallelism}."
)
.validate { buffersPrefetchingParallelism =>
if (buffersPrefetchingParallelism > 0) Right(())
else Left("buffers-prefetching-parallelism should be strictly positive")
}
.action((bufferedEventsProcessingParallelism, config) =>
config.copy(bufferedEventsProcessingParallelism = bufferedEventsProcessingParallelism)
),
opt[Int]("acs-id-page-size")
.optional()
.text(
s"Number of contract ids fetched from the index for every round trip when serving ACS calls. Default is ${AcsStreamsConfig.DefaultAcsIdPageSize}."
)
.validate { acsIdPageSize =>
if (acsIdPageSize > 0) Right(())
else Left("acs-id-page-size should be strictly positive")
}
.action((acsIdPageSize, config) => config.copy(acsIdPageSize = acsIdPageSize)),
opt[Int]("acs-id-fetching-parallelism")
.optional()
.text(
s"Number of contract id pages fetched in parallel when serving ACS calls. Default is ${AcsStreamsConfig.DefaultAcsIdFetchingParallelism}."
)
.validate { acsIdFetchingParallelism =>
if (acsIdFetchingParallelism > 0) Right(())
else Left("acs-id-fetching-parallelism should be strictly positive")
}
.action((acsIdFetchingParallelism, config) =>
config.copy(acsIdFetchingParallelism = acsIdFetchingParallelism)
),
opt[Int]("acs-contract-fetching-parallelism")
.optional()
.text(
s"Number of event pages fetched in parallel when serving ACS calls. Default is ${AcsStreamsConfig.DefaultAcsContractFetchingParallelism}."
)
.validate { acsContractFetchingParallelism =>
if (acsContractFetchingParallelism > 0) Right(())
else Left("acs-contract-fetching-parallelism should be strictly positive")
}
.action((acsContractFetchingParallelism, config) =>
config.copy(acsContractFetchingParallelism = acsContractFetchingParallelism)
),
opt[Int]("acs-global-parallelism-limit")
.optional()
.text(
s"This configuration option is deprecated and has no effect on the application"
)
.action((_, config) => config),
opt[Long]("max-lf-value-translation-cache-entries")
.optional()
.text(
s"Deprecated parameter -- lf value translation cache doesn't exist anymore."
)
.action((_, config) => config),
opt[String]("contract-id-seeding")
.optional()
.text(s"""Set the seeding of contract ids. Possible values are ${seedingMap.keys
.mkString(
","
)}. Default is "strong".""")
.validate(v =>
Either.cond(
seedingMap.contains(v.toLowerCase),
(),
s"seeding must be ${seedingMap.keys.mkString(",")}",
)
)
.action((text, config) => config.copy(seeding = seedingMap(text)))
.hidden(),
opt[MetricsReporter]("metrics-reporter")
.optional()
.text(s"Start a metrics reporter. ${MetricsReporter.cliHint}")
.action((reporter, config) => config.copy(metricsReporter = Some(reporter))),
opt[Duration]("metrics-reporting-interval")
.optional()
.text("Set metric reporting interval.")
.action((interval, config) => config.copy(metricsReportingInterval = interval)),
checkConfig(c => {
val participantIds = c.participants
.map(pc => pc.participantId)
.toList
def filterDuplicates[A](items: List[A]): Set[A] =
items.groupBy(identity).filter(_._2.size > 1).keySet
val participantIdDuplicates = filterDuplicates(participantIds)
if (participantIdDuplicates.nonEmpty)
failure(
participantIdDuplicates.mkString(
"The following participant IDs are duplicate: ",
",",
"",
)
)
else
success
}),
opt[Unit]("early-access")
.optional()
.action((_, c) =>
c.copy(engineConfig =
c.engineConfig.copy(allowedLanguageVersions = LanguageVersion.EarlyAccessVersions)
)
)
.text(
"Enable preview version of the next Daml-LF language. Should not be used in production."
),
opt[Unit]("daml-lf-dev-mode-unsafe")
.optional()
.hidden()
.action((_, c) =>
c.copy(engineConfig =
c.engineConfig.copy(allowedLanguageVersions = LanguageVersion.DevVersions)
)
)
.text(
"Enable the development version of the Daml-LF language. Highly unstable. Should not be used in production."
),
// TODO append-only: remove
opt[Unit]("index-append-only-schema")
.optional()
.text("Legacy flag with no effect")
.action((_, config) => config),
// TODO remove
opt[Unit]("mutable-contract-state-cache")
.optional()
.hidden()
.text("Legacy flag with no effect")
.action((_, config) => config),
// TODO remove
opt[Unit]("buffered-ledger-api-streams")
.optional()
.hidden()
.text("Legacy flag with no effect.")
.action((_, config) => config),
opt[Boolean]("enable-user-management")
.optional()
.text(
"Whether to enable participant user management."
)
.action((enabled, config: CliConfig[Extra]) =>
config.withUserManagementConfig(_.copy(enabled = enabled))
),
opt[Int]("user-management-cache-expiry")
.optional()
.text(
s"Defaults to ${UserManagementConfig.DefaultCacheExpiryAfterWriteInSeconds} seconds. " +
"Used to set expiry time for user management cache. " +
"Also determines the maximum delay for propagating user management state changes which is double its value."
)
.action((value, config: CliConfig[Extra]) =>
config.withUserManagementConfig(_.copy(cacheExpiryAfterWriteInSeconds = value))
),
opt[Int]("user-management-max-cache-size")
.optional()
.text(
s"Defaults to ${UserManagementConfig.DefaultMaxCacheSize} entries. " +
"Determines the maximum in-memory cache size for user management state."
)
.action((value, config: CliConfig[Extra]) =>
config.withUserManagementConfig(_.copy(maxCacheSize = value))
),
opt[Int]("user-management-max-users-page-size")
.optional()
.text(
s"Maximum number of users that the server can return in a single response. " +
s"Defaults to ${UserManagementConfig.DefaultMaxUsersPageSize} entries."
)
.action((value, config: CliConfig[Extra]) =>
config.withUserManagementConfig(_.copy(maxUsersPageSize = value))
),
checkConfig(c => {
val v = c.userManagementConfig.maxUsersPageSize
if (v == 0 || v >= 100) {
success
} else {
failure(
s"user-management-max-users-page-size must be either 0 or greater than 99, was: $v"
)
}
}),
opt[Unit]('s', "static-time")
.optional()
.hidden() // Only available for testing purposes
.action((_, c) => c.copy(timeProviderType = TimeProviderType.Static))
.text("Use static time. When not specified, wall-clock-time is used."),
opt[File]("profile-dir")
.optional()
.action((dir, c) =>
c.copy(engineConfig = c.engineConfig.copy(profileDir = Some(dir.toPath)))
)
.text("Enable profiling and write the profiles into the given directory."),
opt[Boolean]("stack-traces")
.hidden()
.optional()
.action((enabled, config) =>
config.copy(engineConfig = config.engineConfig.copy(stackTraceMode = enabled))
)
.text(
"Enable/disable stack traces. Default is to disable them. " +
"Enabling stack traces may have a significant performance impact."
),
AuthServiceConfigCli.parse(builder)((v, c) => c.copy(authService = v)),
)
}
}
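
As a sketch, the legacy CLI surface above could be exercised as follows; the service name, participant values, and the empty `note` used as a stand-in for extra options are assumptions:

```
// Illustration only: parse a legacy command line with no service-specific extra options.
val builder = scopt.OParser.builder[CliConfig[Unit]]
val parsed: Option[CliConfig[Unit]] = CliConfig.parse(
  name = "example-ledger",
  extraOptions = builder.note(""),
  defaultExtra = (),
  args = Seq(
    "run-legacy-cli-config",
    "--participant", "participant-id=example,port=6865",
    "--ledger-id", "my-ledger",
  ),
)
// On success, parsed.get.participants contains one CliParticipantConfig on port 6865.
```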

View File

@ -1,36 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.runner.common
import com.daml.lf.data.Ref
import com.daml.platform.configuration.IndexServiceConfig
import com.daml.platform.indexer.IndexerConfig
import com.daml.ports.Port
import java.nio.file.Path
import scala.concurrent.duration._
final case class CliParticipantConfig(
participantId: Ref.ParticipantId,
address: Option[String],
port: Port,
portFile: Option[Path],
serverJdbcUrl: String,
managementServiceTimeout: Duration = CliParticipantConfig.DefaultManagementServiceTimeout,
indexerConfig: IndexerConfig,
apiServerDatabaseConnectionPoolSize: Int =
CliParticipantConfig.DefaultApiServerDatabaseConnectionPoolSize,
apiServerDatabaseConnectionTimeout: Duration =
CliParticipantConfig.DefaultApiServerDatabaseConnectionTimeout,
maxContractStateCacheSize: Long = IndexServiceConfig.DefaultMaxContractStateCacheSize,
maxContractKeyStateCacheSize: Long = IndexServiceConfig.DefaultMaxContractKeyStateCacheSize,
)
object CliParticipantConfig {
val DefaultManagementServiceTimeout: FiniteDuration = 2.minutes
val DefaultApiServerDatabaseConnectionTimeout: Duration = 250.millis
// this pool is used for all data access for the ledger api (command submission, transaction service, ...)
val DefaultApiServerDatabaseConnectionPoolSize = 16
}

View File

@ -1,49 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.runner.common
import com.daml.ledger.runner.common.Config.{
DefaultEngineConfig,
DefaultLedgerId,
DefaultParticipants,
DefaultParticipantsDatasourceConfig,
}
import com.daml.platform.config.MetricsConfig.DefaultMetricsConfig
import com.daml.lf.data.Ref
import com.daml.lf.data.Ref.ParticipantId
import com.daml.lf.engine.EngineConfig
import com.daml.lf.language.LanguageVersion
import com.daml.platform.config.{MetricsConfig, ParticipantConfig}
import com.daml.platform.store.DbSupport.ParticipantDataSourceConfig
final case class Config(
engine: EngineConfig = DefaultEngineConfig,
ledgerId: String = DefaultLedgerId,
metrics: MetricsConfig = DefaultMetricsConfig,
dataSource: Map[Ref.ParticipantId, ParticipantDataSourceConfig] =
DefaultParticipantsDatasourceConfig,
participants: Map[Ref.ParticipantId, ParticipantConfig] = DefaultParticipants,
) {
def withDataSource(dataSource: Map[Ref.ParticipantId, ParticipantDataSourceConfig]): Config =
copy(dataSource = dataSource)
}
object Config {
val DefaultLedgerId: String = "default-ledger-id"
val DefaultEngineConfig: EngineConfig = EngineConfig(
allowedLanguageVersions = LanguageVersion.StableVersions,
profileDir = None,
stackTraceMode = false,
forbidV0ContractId = true,
)
val DefaultParticipants: Map[Ref.ParticipantId, ParticipantConfig] = Map(
ParticipantConfig.DefaultParticipantId -> ParticipantConfig()
)
val DefaultParticipantsDatasourceConfig: Map[ParticipantId, ParticipantDataSourceConfig] = Map(
ParticipantConfig.DefaultParticipantId -> ParticipantDataSourceConfig(
"jdbc:h2:mem:default;db_close_delay=-1;db_close_on_exit=false"
)
)
val Default: Config = Config()
}

View File

@ -1,57 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.runner.common
import com.daml.ledger.api.auth.{AuthService, CachedJwtVerifierLoader, JwtVerifierLoader}
import com.daml.ledger.configuration.Configuration
import com.daml.metrics.Metrics
import com.daml.platform.apiserver.{ApiServerConfig, TimeServiceBackend}
import com.daml.platform.config.ParticipantConfig
import com.daml.platform.configuration.InitialLedgerConfiguration
import com.daml.platform.services.time.TimeProviderType
import java.time.{Duration, Instant}
import scala.concurrent.ExecutionContext
class ConfigAdaptor {
def initialLedgerConfig(
maxDeduplicationDuration: Option[Duration]
): InitialLedgerConfiguration = {
val conf = Configuration.reasonableInitialConfiguration
InitialLedgerConfiguration(
maxDeduplicationDuration = maxDeduplicationDuration.getOrElse(
conf.maxDeduplicationDuration
),
avgTransactionLatency = conf.timeModel.avgTransactionLatency,
minSkew = conf.timeModel.minSkew,
maxSkew = conf.timeModel.maxSkew,
// If a new index database is added to an already existing ledger,
// a zero delay will likely produce a "configuration rejected" ledger entry,
// because at startup the indexer hasn't ingested any configuration change yet.
// Override this setting for distributed ledgers where you want to avoid these superfluous entries.
delayBeforeSubmitting = Duration.ZERO,
)
}
def timeServiceBackend(config: ApiServerConfig): Option[TimeServiceBackend] =
config.timeProviderType match {
case TimeProviderType.Static => Some(TimeServiceBackend.simple(Instant.EPOCH))
case TimeProviderType.WallClock => None
}
def authService(participantConfig: ParticipantConfig): AuthService =
participantConfig.authentication.create(
participantConfig.jwtTimestampLeeway
)
def jwtVerifierLoader(
participantConfig: ParticipantConfig,
metrics: Metrics,
executionContext: ExecutionContext,
): JwtVerifierLoader =
new CachedJwtVerifierLoader(
jwtTimestampLeeway = participantConfig.jwtTimestampLeeway,
metrics = metrics,
)(executionContext)
}

View File

@ -1,49 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.runner.common
import pureconfig.{ConfigReader, ConfigSource, Derivation}
import com.typesafe.config.{ConfigFactory, Config => TypesafeConfig}
import pureconfig.error.ConfigReaderFailures
import java.io.File
trait ConfigLoader {
private def toError(failures: ConfigReaderFailures): String = {
s"Failed to load configuration: ${System.lineSeparator()}${failures.prettyPrint()}"
}
private def configFromMap(configMap: Map[String, String]): TypesafeConfig = {
import scala.jdk.CollectionConverters._
val map = configMap.map {
case (key, value) if value.nonEmpty => key -> value
case (key, _) => key -> null
}.asJava
ConfigFactory.parseMap(map)
}
def loadConfig[T](config: TypesafeConfig)(implicit
reader: Derivation[ConfigReader[T]]
): Either[String, T] =
ConfigSource.fromConfig(config).load[T].left.map(toError)
def toTypesafeConfig(
configFiles: Seq[File] = Seq(),
configMap: Map[String, String] = Map(),
fallback: TypesafeConfig = ConfigFactory.load(),
): TypesafeConfig = {
val mergedConfig = configFiles
.map(ConfigFactory.parseFile)
.foldLeft(ConfigFactory.empty())((combined, config) => config.withFallback(combined))
.withFallback(fallback)
Seq(configFromMap(configMap)).foldLeft(mergedConfig)((combined, config) =>
config.withFallback(combined)
)
}
}
object ConfigLoader extends ConfigLoader {}
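
To illustrate how these helpers are meant to compose, a sketch that merges a config file with key-value overrides and loads the result into a hypothetical case class via pureconfig's automatic derivation (the case class, key, and file name are assumptions):

```
import java.io.File
import pureconfig.generic.auto._

// Hypothetical application config; the HOCON key `ledger-id` maps onto the field below.
final case class AppConfig(ledgerId: String)

val typesafeConfig = ConfigLoader.toTypesafeConfig(
  configFiles = Seq(new File("app.conf")), // later files override earlier ones
  configMap = Map("ledger-id" -> "my-ledger"), // key-value overrides win over files
)
val parsed: Either[String, AppConfig] = ConfigLoader.loadConfig[AppConfig](typesafeConfig)
```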

View File

@ -1,10 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.runner.common
import com.daml.resources.ProgramResource.SuppressedStartupException
private[common] final class ConfigParseException
extends RuntimeException
with SuppressedStartupException

View File

@ -1,46 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.runner.common
import com.daml.ledger.resources.{Resource, ResourceContext, ResourceOwner}
import com.daml.logging.LoggingContext.newLoggingContext
import com.daml.logging.{ContextualizedLogger, LoggingContext}
import com.daml.platform.store.IndexMetadata
import scala.concurrent.{ExecutionContext, Future}
object DumpIndexMetadata {
val logger = ContextualizedLogger.get(this.getClass)
def dumpIndexMetadata(
jdbcUrl: String
)(implicit
executionContext: ExecutionContext,
context: ResourceContext,
): Future[Unit] = {
newLoggingContext { implicit loggingContext: LoggingContext =>
val metadataFuture = IndexMetadata.read(jdbcUrl).use { metadata =>
logger.warn(s"ledger_id: ${metadata.ledgerId}")
logger.warn(s"participant_id: ${metadata.participantId}")
logger.warn(s"ledger_end: ${metadata.ledgerEnd}")
logger.warn(s"version: ${metadata.participantIntegrationApiVersion}")
Future.unit
}
metadataFuture.failed.foreach { exception =>
logger.error("Error while retrieving the index metadata", exception)
}
metadataFuture
}
}
def apply(
jdbcUrls: Seq[String]
): ResourceOwner[Unit] = {
new ResourceOwner[Unit] {
override def acquire()(implicit context: ResourceContext): Resource[Unit] = {
Resource.sequenceIgnoringValues(jdbcUrls.map(dumpIndexMetadata).map(Resource.fromFuture))
}
}
}
}

View File

@ -1,98 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.runner.common
import com.daml.platform.apiserver.ApiServerConfig
import com.daml.platform.config.{MetricsConfig, ParticipantConfig}
import com.daml.platform.configuration.{AcsStreamsConfig, IndexServiceConfig}
import com.daml.platform.store.DbSupport.{
ConnectionPoolConfig,
DataSourceProperties,
ParticipantDataSourceConfig,
}
import java.util.concurrent.TimeUnit
import scala.concurrent.duration.FiniteDuration
import scala.jdk.DurationConverters.JavaDurationOps
object LegacyCliConfigConverter {
private def toParticipantConfig(
configAdaptor: ConfigAdaptor,
cliConfig: CliConfig[_],
config: CliParticipantConfig,
): ParticipantConfig = ParticipantConfig(
authentication = cliConfig.authService,
indexer = config.indexerConfig,
indexService = IndexServiceConfig(
acsStreams = AcsStreamsConfig(
maxParallelPayloadCreateQueries = cliConfig.acsContractFetchingParallelism,
maxParallelIdCreateQueries = cliConfig.acsIdFetchingParallelism,
maxIdsPerIdPage = cliConfig.acsIdPageSize,
maxPayloadsPerPayloadsPage = cliConfig.eventsPageSize,
),
bufferedStreamsPageSize = cliConfig.bufferedStreamsPageSize,
bufferedEventsProcessingParallelism = cliConfig.bufferedEventsProcessingParallelism,
maxContractStateCacheSize = config.maxContractStateCacheSize,
maxContractKeyStateCacheSize = config.maxContractKeyStateCacheSize,
maxTransactionsInMemoryFanOutBufferSize = cliConfig.maxTransactionsInMemoryFanOutBufferSize,
inMemoryStateUpdaterParallelism = IndexServiceConfig.DefaultInMemoryStateUpdaterParallelism,
),
dataSourceProperties = DataSourceProperties(
connectionPool = ConnectionPoolConfig(
connectionPoolSize = config.apiServerDatabaseConnectionPoolSize,
connectionTimeout = FiniteDuration(
config.apiServerDatabaseConnectionTimeout.toMillis,
TimeUnit.MILLISECONDS,
),
)
),
apiServer = ApiServerConfig(
port = config.port,
address = config.address,
tls = cliConfig.tlsConfig,
maxInboundMessageSize = cliConfig.maxInboundMessageSize,
initialLedgerConfiguration =
Some(configAdaptor.initialLedgerConfig(cliConfig.maxDeduplicationDuration)),
configurationLoadTimeout = FiniteDuration(
cliConfig.configurationLoadTimeout.toMillis,
TimeUnit.MILLISECONDS,
),
portFile = config.portFile,
seeding = cliConfig.seeding,
managementServiceTimeout = FiniteDuration(
config.managementServiceTimeout.toMillis,
TimeUnit.MILLISECONDS,
),
userManagement = cliConfig.userManagementConfig,
command = cliConfig.commandConfig,
timeProviderType = cliConfig.timeProviderType,
),
)
def toConfig(configAdaptor: ConfigAdaptor, config: CliConfig[_]): Config = {
Config(
engine = config.engineConfig,
ledgerId = config.ledgerId,
metrics = MetricsConfig(
enabled = config.metricsReporter.isDefined,
reporter = config.metricsReporter.getOrElse(MetricsConfig.DefaultMetricsConfig.reporter),
reportingInterval = config.metricsReportingInterval.toScala,
),
dataSource = config.participants.map { participantConfig =>
participantConfig.participantId -> ParticipantDataSourceConfig(
participantConfig.serverJdbcUrl
)
}.toMap,
participants = config.participants.map { participantConfig =>
participantConfig.participantId -> toParticipantConfig(
configAdaptor,
config,
participantConfig,
)
}.toMap,
)
}
}
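
Tying the legacy and HOCON configuration models together, a small sketch of the conversion entry point, assuming a `CliConfig` obtained from `CliConfig.parse`:

```
// Illustration only: convert a parsed legacy CliConfig into the HOCON-era Config.
def convert(cliConfig: CliConfig[_]): Config =
  LegacyCliConfigConverter.toConfig(new ConfigAdaptor, cliConfig)
```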

View File

@ -1,26 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.runner.common
import java.nio.file.Path
sealed abstract class Mode
object Mode {
/** Run the participant, accepts HOCON configuration */
case object Run extends Mode
/** Run the participant in legacy mode with accepted CLI arguments */
case object RunLegacyCliConfig extends Mode
/** Accepts legacy CLI parameters, but only prints the resulting configuration */
case object ConvertConfig extends Mode
final case class PrintDefaultConfig(outputFilePath: Option[Path]) extends Mode
/** Dump index metadata and exit */
final case class DumpIndexMetadata(jdbcUrls: Vector[String]) extends Mode
}

View File

@ -1,93 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.runner.common
import com.typesafe.config.{ConfigObject, ConfigValue, ConfigValueFactory}
import pureconfig.error.{ConfigReaderFailures, UnknownKey}
import pureconfig.generic.ProductHint
import pureconfig.{ConfigConvert, ConfigCursor, ConfigObjectCursor, ConfigReader, ConfigWriter}
object OptConfigValue {
val enabledKey = "enabled"
/** Reads a configuration object of `T` together with an `enabled` flag that indicates whether a value is present.
*/
def optReaderEnabled[T](reader: ConfigReader[T]): ConfigReader[Option[T]] =
(cursor: ConfigCursor) =>
for {
objCur <- cursor.asObjectCursor
enabledCur <- objCur.atKey(enabledKey)
enabled <- enabledCur.asBoolean
value <-
if (enabled) {
reader.from(cursor).map(x => Some(x))
} else {
Right(None)
}
} yield value
/** Writes an object of `T`, adding an `enabled` flag that records whether a value is present.
*/
def optWriterEnabled[T](writer: ConfigWriter[T]): ConfigWriter[Option[T]] = {
import scala.jdk.CollectionConverters._
def toConfigValue(enabled: Boolean) =
ConfigValueFactory.fromMap(Map(enabledKey -> enabled).asJava)
(optValue: Option[T]) =>
optValue match {
case Some(value) =>
writer.to(value) match {
// if the serialised object of `T` is a `ConfigObject` that already
// contains an `enabled` key, this writer cannot support it
case configObject: ConfigObject if configObject.toConfig.hasPath(enabledKey) =>
throw new IllegalArgumentException(
s"Ambiguous configuration, object contains `${enabledKey}` flag"
)
case _ =>
writer.to(value).withFallback(toConfigValue(enabled = true))
}
case None => toConfigValue(enabled = false)
}
}
def optConvertEnabled[T](
reader: ConfigReader[T],
writer: ConfigWriter[T],
): ConfigConvert[Option[T]] =
ConfigConvert.apply(optReaderEnabled(reader), optWriterEnabled(writer))
def optConvertEnabled[T](convert: ConfigConvert[T]): ConfigConvert[Option[T]] =
optConvertEnabled(convert, convert)
class OptProductHint[T](allowUnknownKeys: Boolean) extends ProductHint[T] {
val hint = ProductHint[T](allowUnknownKeys = allowUnknownKeys)
override def from(cursor: ConfigObjectCursor, fieldName: String): ProductHint.Action =
hint.from(cursor, fieldName)
override def bottom(
cursor: ConfigObjectCursor,
usedFields: Set[String],
): Option[ConfigReaderFailures] = if (allowUnknownKeys)
None
else {
val unknownKeys = cursor.map.toList.collect {
case (k, keyCur) if !usedFields.contains(k) && k != enabledKey =>
keyCur.failureFor(UnknownKey(k))
}
unknownKeys match {
case h :: t => Some(ConfigReaderFailures(h, t: _*))
case Nil => None
}
}
override def to(value: Option[ConfigValue], fieldName: String): Option[(String, ConfigValue)] =
hint.to(value, fieldName)
}
def optProductHint[T](allowUnknownKeys: Boolean): OptProductHint[T] = new OptProductHint[T](
allowUnknownKeys = allowUnknownKeys
)
}
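
To illustrate the `enabled` convention described in the comments above, a sketch of reading an optional section, assuming a hypothetical `CacheConfig` and pureconfig's semi-automatic derivation:

```
import pureconfig.ConfigSource
import pureconfig.generic.semiauto.deriveReader

final case class CacheConfig(maxSize: Long)

implicit val optionalCacheReader: pureconfig.ConfigReader[Option[CacheConfig]] =
  OptConfigValue.optReaderEnabled(deriveReader[CacheConfig])

// "{ enabled = true, max-size = 1000 }"  loads as Right(Some(CacheConfig(1000)))
// "{ enabled = false }"                  loads as Right(None)
val loaded = ConfigSource.string("{ enabled = true, max-size = 1000 }").load[Option[CacheConfig]]
```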

View File

@ -1,398 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.runner.common
import com.daml.jwt.JwtTimestampLeeway
import com.daml.ledger.api.tls.TlsVersion.TlsVersion
import com.daml.ledger.api.tls.{SecretsUrl, TlsConfiguration, TlsVersion}
import com.daml.ledger.runner.common.OptConfigValue.{optConvertEnabled, optProductHint}
import com.daml.lf.data.Ref
import com.daml.lf.engine.EngineConfig
import com.daml.lf.language.LanguageVersion
import com.daml.lf.transaction.ContractKeyUniquenessMode
import com.daml.lf.{VersionRange, interpretation, language}
import com.daml.metrics.api.reporters.MetricsReporter
import com.daml.platform.apiserver.SeedService.Seeding
import com.daml.platform.apiserver.configuration.RateLimitingConfig
import com.daml.platform.apiserver.{ApiServerConfig, AuthServiceConfig}
import com.daml.platform.config.{MetricsConfig, ParticipantConfig}
import com.daml.platform.configuration.{
AcsStreamsConfig,
CommandConfiguration,
IndexServiceConfig,
InitialLedgerConfiguration,
TransactionFlatStreamsConfig,
TransactionTreeStreamsConfig,
}
import com.daml.platform.indexer.ha.HaConfig
import com.daml.platform.indexer.{IndexerConfig, IndexerStartupMode, PackageMetadataViewConfig}
import com.daml.platform.localstore.{IdentityProviderManagementConfig, UserManagementConfig}
import com.daml.platform.services.time.TimeProviderType
import com.daml.platform.store.DbSupport.{
ConnectionPoolConfig,
DataSourceProperties,
ParticipantDataSourceConfig,
}
import com.daml.platform.store.backend.postgresql.PostgresDataSourceConfig
import com.daml.platform.store.backend.postgresql.PostgresDataSourceConfig.SynchronousCommitValue
import com.daml.ports.Port
import io.netty.handler.ssl.ClientAuth
import pureconfig.configurable.{genericMapReader, genericMapWriter}
import pureconfig.error.CannotConvert
import pureconfig.generic.ProductHint
import pureconfig.generic.semiauto._
import pureconfig.{ConfigConvert, ConfigReader, ConfigWriter, ConvertHelpers}
import scala.concurrent.duration.{Duration, FiniteDuration}
import scala.jdk.DurationConverters.{JavaDurationOps, ScalaDurationOps}
import scala.util.Try
class PureConfigReaderWriter(secure: Boolean = true) {
val Secret = "<REDACTED>"
implicit val javaDurationWriter: ConfigWriter[java.time.Duration] =
ConfigWriter.stringConfigWriter.contramap[java.time.Duration] { duration =>
duration.toScala.toString()
}
implicit val javaDurationReader: ConfigReader[java.time.Duration] =
ConfigReader.fromString[java.time.Duration] { str =>
Some(Duration.apply(str))
.collect { case d: FiniteDuration => d }
.map(_.toJava)
.toRight(CannotConvert(str, Duration.getClass.getName, s"Could not convert $str"))
}
implicit val versionRangeReader: ConfigReader[VersionRange[language.LanguageVersion]] =
ConfigReader.fromString[VersionRange[LanguageVersion]] {
case "daml-lf-dev-mode-unsafe" => Right(LanguageVersion.DevVersions)
case "early-access" => Right(LanguageVersion.EarlyAccessVersions)
case "stable" => Right(LanguageVersion.StableVersions)
case "legacy" => Right(LanguageVersion.LegacyVersions)
case value if value.split("-").length == 2 =>
val Array(min, max) = value.split("-")
val convertedValue: Either[String, VersionRange[LanguageVersion]] = for {
min <- language.LanguageVersion.fromString(min)
max <- language.LanguageVersion.fromString(max)
} yield {
VersionRange[language.LanguageVersion](min, max)
}
convertedValue.left.map { error =>
CannotConvert(value, VersionRange.getClass.getName, s"$value is not recognized. " + error)
}
case otherwise =>
Left(
CannotConvert(otherwise, VersionRange.getClass.getName, s"$otherwise is not recognized. ")
)
}
implicit val versionRangeWriter: ConfigWriter[VersionRange[language.LanguageVersion]] =
ConfigWriter.toString {
case LanguageVersion.DevVersions => "daml-lf-dev-mode-unsafe"
case LanguageVersion.EarlyAccessVersions => "early-access"
case LanguageVersion.StableVersions => "stable"
case LanguageVersion.LegacyVersions => "legacy"
case range => s"${range.min.pretty}-${range.max.pretty}"
}
implicit val interpretationLimitsHint =
ProductHint[interpretation.Limits](allowUnknownKeys = false)
implicit val interpretationLimitsConvert: ConfigConvert[interpretation.Limits] =
deriveConvert[interpretation.Limits]
implicit val contractKeyUniquenessModeConvert: ConfigConvert[ContractKeyUniquenessMode] =
deriveEnumerationConvert[ContractKeyUniquenessMode]
implicit val engineHint = ProductHint[EngineConfig](allowUnknownKeys = false)
implicit val engineConvert: ConfigConvert[EngineConfig] = deriveConvert[EngineConfig]
implicit val metricReporterReader: ConfigReader[MetricsReporter] = {
ConfigReader.fromString[MetricsReporter](ConvertHelpers.catchReadError { s =>
MetricsReporter.parseMetricsReporter(s)
})
}
implicit val metricReporterWriter: ConfigWriter[MetricsReporter] =
ConfigWriter.toString {
case MetricsReporter.Console => "console"
case MetricsReporter.Csv(directory) => s"csv://${directory.toAbsolutePath.toString}"
case MetricsReporter.Graphite(address, prefix) =>
s"graphite://${address.getHostName}:${address.getPort}/${prefix.getOrElse("")}"
case MetricsReporter.Prometheus(address) =>
s"prometheus://${address.getHostName}:${address.getPort}"
}
implicit val metricsRegistryTypeConvert: ConfigConvert[MetricsConfig.MetricRegistryType] =
deriveEnumerationConvert[MetricsConfig.MetricRegistryType]
implicit val metricsHint = ProductHint[MetricsConfig](allowUnknownKeys = false)
implicit val metricsConvert: ConfigConvert[MetricsConfig] = deriveConvert[MetricsConfig]
implicit val secretsUrlReader: ConfigReader[SecretsUrl] =
ConfigReader.fromString[SecretsUrl] { url =>
Right(SecretsUrl.fromString(url))
}
implicit val secretsUrlWriter: ConfigWriter[SecretsUrl] =
ConfigWriter.toString {
case SecretsUrl.FromUrl(url) if !secure => url.toString
case _ => Secret
}
implicit val clientAuthReader: ConfigReader[ClientAuth] =
ConfigReader.fromStringTry[ClientAuth](value => Try(ClientAuth.valueOf(value.toUpperCase)))
implicit val clientAuthWriter: ConfigWriter[ClientAuth] =
ConfigWriter.toString(_.name().toLowerCase)
implicit val tlsVersionReader: ConfigReader[TlsVersion] =
ConfigReader.fromString[TlsVersion] { tlsVersion =>
TlsVersion.allVersions
.find(_.version == tlsVersion)
.toRight(
CannotConvert(tlsVersion, TlsVersion.getClass.getName, s"$tlsVersion is not recognized.")
)
}
implicit val tlsVersionWriter: ConfigWriter[TlsVersion] =
ConfigWriter.toString(tlsVersion => tlsVersion.version)
implicit val tlsConfigurationHint = ProductHint[TlsConfiguration](allowUnknownKeys = false)
implicit val tlsConfigurationConvert: ConfigConvert[TlsConfiguration] =
deriveConvert[TlsConfiguration]
implicit val portReader: ConfigReader[Port] = ConfigReader.intConfigReader.map(Port.apply)
implicit val portWriter: ConfigWriter[Port] = ConfigWriter.intConfigWriter.contramap[Port] {
port: Port => port.value
}
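// Optional sections of the config are modelled with an explicit `enabled` flag:
// optProductHint/optConvertEnabled map `enabled = false` to None and wrap a
// present section in Some (see the JwtTimestampLeeway cases in
// PureConfigReaderWriterSpec further down in this diff).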
implicit val initialLedgerConfigurationHint =
optProductHint[InitialLedgerConfiguration](allowUnknownKeys = false)
implicit val initialLedgerConfigurationConvert
: ConfigConvert[Option[InitialLedgerConfiguration]] =
optConvertEnabled(deriveConvert[InitialLedgerConfiguration])
implicit val seedingReader: ConfigReader[Seeding] =
// Not using deriveEnumerationReader[Seeding] as we prefer "testing-static" over static (that appears
// in Seeding.name, but not in the case object name).
ConfigReader.fromString[Seeding] {
case Seeding.Strong.name => Right(Seeding.Strong)
case Seeding.Weak.name => Right(Seeding.Weak)
case Seeding.Static.name => Right(Seeding.Static)
case unknownSeeding =>
Left(
CannotConvert(
unknownSeeding,
Seeding.getClass.getName,
s"Seeding is neither ${Seeding.Strong.name}, ${Seeding.Weak.name}, nor ${Seeding.Static.name}: ${unknownSeeding}",
)
)
}
implicit val seedingWriter: ConfigWriter[Seeding] = ConfigWriter.toString(_.name)
implicit val userManagementConfigHint =
ProductHint[UserManagementConfig](allowUnknownKeys = false)
implicit val userManagementConfigConvert: ConfigConvert[UserManagementConfig] =
deriveConvert[UserManagementConfig]
implicit val identityProviderManagementConfigHint =
ProductHint[IdentityProviderManagementConfig](allowUnknownKeys = false)
implicit val identityProviderManagementConfigConvert
: ConfigConvert[IdentityProviderManagementConfig] =
deriveConvert[IdentityProviderManagementConfig]
implicit val jwtTimestampLeewayConfigHint: OptConfigValue.OptProductHint[JwtTimestampLeeway] =
optProductHint[JwtTimestampLeeway](allowUnknownKeys = false)
implicit val jwtTimestampLeewayConfigConvert: ConfigConvert[Option[JwtTimestampLeeway]] =
optConvertEnabled(deriveConvert[JwtTimestampLeeway])
implicit val authServiceConfigUnsafeJwtHmac256Reader
: ConfigReader[AuthServiceConfig.UnsafeJwtHmac256] =
deriveReader[AuthServiceConfig.UnsafeJwtHmac256]
implicit val authServiceConfigUnsafeJwtHmac256Writer
: ConfigWriter[AuthServiceConfig.UnsafeJwtHmac256] =
deriveWriter[AuthServiceConfig.UnsafeJwtHmac256].contramap[AuthServiceConfig.UnsafeJwtHmac256] {
case x if secure => x.copy(secret = Secret)
case x => x
}
implicit val authServiceConfigJwtEs256CrtHint =
ProductHint[AuthServiceConfig.JwtEs256](allowUnknownKeys = false)
implicit val authServiceConfigJwtEs512CrtHint =
ProductHint[AuthServiceConfig.JwtEs512](allowUnknownKeys = false)
implicit val authServiceConfigJwtRs256CrtHint =
ProductHint[AuthServiceConfig.JwtRs256](allowUnknownKeys = false)
implicit val authServiceConfigJwtRs256JwksHint =
ProductHint[AuthServiceConfig.JwtRs256Jwks](allowUnknownKeys = false)
implicit val authServiceConfigWildcardHint =
ProductHint[AuthServiceConfig.Wildcard.type](allowUnknownKeys = false)
implicit val authServiceConfigHint = ProductHint[AuthServiceConfig](allowUnknownKeys = false)
implicit val authServiceConfigJwtEs256CrtConvert: ConfigConvert[AuthServiceConfig.JwtEs256] =
deriveConvert[AuthServiceConfig.JwtEs256]
implicit val authServiceConfigJwtEs512CrtConvert: ConfigConvert[AuthServiceConfig.JwtEs512] =
deriveConvert[AuthServiceConfig.JwtEs512]
implicit val authServiceConfigJwtRs256CrtConvert: ConfigConvert[AuthServiceConfig.JwtRs256] =
deriveConvert[AuthServiceConfig.JwtRs256]
implicit val authServiceConfigJwtRs256JwksConvert: ConfigConvert[AuthServiceConfig.JwtRs256Jwks] =
deriveConvert[AuthServiceConfig.JwtRs256Jwks]
implicit val authServiceConfigWildcardConvert: ConfigConvert[AuthServiceConfig.Wildcard.type] =
deriveConvert[AuthServiceConfig.Wildcard.type]
implicit val authServiceConfigConvert: ConfigConvert[AuthServiceConfig] =
deriveConvert[AuthServiceConfig]
implicit val commandConfigurationHint =
ProductHint[CommandConfiguration](allowUnknownKeys = false)
implicit val commandConfigurationConvert: ConfigConvert[CommandConfiguration] =
deriveConvert[CommandConfiguration]
implicit val timeProviderTypeConvert: ConfigConvert[TimeProviderType] =
deriveEnumerationConvert[TimeProviderType]
implicit val dbConfigSynchronousCommitValueConvert: ConfigConvert[SynchronousCommitValue] =
deriveEnumerationConvert[SynchronousCommitValue]
implicit val dbConfigConnectionPoolConfigHint =
ProductHint[ConnectionPoolConfig](allowUnknownKeys = false)
implicit val dbConfigConnectionPoolConfigConvert: ConfigConvert[ConnectionPoolConfig] =
deriveConvert[ConnectionPoolConfig]
implicit val dbConfigPostgresDataSourceConfigHint =
ProductHint[PostgresDataSourceConfig](allowUnknownKeys = false)
implicit val dbConfigPostgresDataSourceConfigConvert: ConfigConvert[PostgresDataSourceConfig] =
deriveConvert[PostgresDataSourceConfig]
implicit val dataSourcePropertiesHint =
ProductHint[DataSourceProperties](allowUnknownKeys = false)
implicit val dataSourcePropertiesConvert: ConfigConvert[DataSourceProperties] =
deriveConvert[DataSourceProperties]
implicit val rateLimitingConfigHint: OptConfigValue.OptProductHint[RateLimitingConfig] =
optProductHint[RateLimitingConfig](allowUnknownKeys = false)
implicit val rateLimitingConfigConvert: ConfigConvert[Option[RateLimitingConfig]] =
optConvertEnabled(deriveConvert[RateLimitingConfig])
implicit val apiServerConfigHint =
ProductHint[ApiServerConfig](allowUnknownKeys = false)
implicit val apiServerConfigConvert: ConfigConvert[ApiServerConfig] =
deriveConvert[ApiServerConfig]
implicit val validateAndStartConvert: ConfigConvert[IndexerStartupMode.ValidateAndStart.type] =
deriveConvert[IndexerStartupMode.ValidateAndStart.type]
implicit val MigrateOnEmptySchemaAndStartReader
: ConfigConvert[IndexerStartupMode.MigrateOnEmptySchemaAndStart.type] =
deriveConvert[IndexerStartupMode.MigrateOnEmptySchemaAndStart.type]
implicit val migrateAndStartConvertHint =
ProductHint[IndexerStartupMode.MigrateAndStart](allowUnknownKeys = false)
implicit val migrateAndStartConvert: ConfigConvert[IndexerStartupMode.MigrateAndStart] =
deriveConvert[IndexerStartupMode.MigrateAndStart]
implicit val validateAndWaitOnlyHint =
ProductHint[IndexerStartupMode.ValidateAndWaitOnly](allowUnknownKeys = false)
implicit val validateAndWaitOnlyConvert: ConfigConvert[IndexerStartupMode.ValidateAndWaitOnly] =
deriveConvert[IndexerStartupMode.ValidateAndWaitOnly]
implicit val indexerStartupModeConvert: ConfigConvert[IndexerStartupMode] =
deriveConvert[IndexerStartupMode]
implicit val haConfigHint =
ProductHint[HaConfig](allowUnknownKeys = false)
implicit val haConfigConvert: ConfigConvert[HaConfig] = deriveConvert[HaConfig]
private def createParticipantId(participantId: String) =
Ref.ParticipantId
.fromString(participantId)
.left
.map(err => CannotConvert(participantId, Ref.ParticipantId.getClass.getName, err))
implicit val participantIdReader: ConfigReader[Ref.ParticipantId] = ConfigReader
.fromString[Ref.ParticipantId](createParticipantId)
implicit val participantIdWriter: ConfigWriter[Ref.ParticipantId] =
ConfigWriter.toString[Ref.ParticipantId](_.toString)
implicit val packageMetadataViewConfigHint =
ProductHint[PackageMetadataViewConfig](allowUnknownKeys = false)
implicit val packageMetadataViewConfigConvert: ConfigConvert[PackageMetadataViewConfig] =
deriveConvert[PackageMetadataViewConfig]
implicit val indexerConfigHint =
ProductHint[IndexerConfig](allowUnknownKeys = false)
implicit val indexerConfigConvert: ConfigConvert[IndexerConfig] = deriveConvert[IndexerConfig]
implicit val indexServiceConfigHint =
ProductHint[IndexServiceConfig](allowUnknownKeys = false)
implicit val acsStreamsConfigConvert: ConfigConvert[AcsStreamsConfig] =
deriveConvert[AcsStreamsConfig]
implicit val transactionTreeStreamsConfigConvert: ConfigConvert[TransactionTreeStreamsConfig] =
deriveConvert[TransactionTreeStreamsConfig]
implicit val transactionFlatStreamsConfigConvert: ConfigConvert[TransactionFlatStreamsConfig] =
deriveConvert[TransactionFlatStreamsConfig]
implicit val indexServiceConfigConvert: ConfigConvert[IndexServiceConfig] =
deriveConvert[IndexServiceConfig]
implicit val participantConfigHint =
ProductHint[ParticipantConfig](allowUnknownKeys = false)
implicit val participantConfigConvert: ConfigConvert[ParticipantConfig] =
deriveConvert[ParticipantConfig]
implicit val participantDataSourceConfigReader: ConfigReader[ParticipantDataSourceConfig] =
ConfigReader.fromString[ParticipantDataSourceConfig] { url =>
Right(ParticipantDataSourceConfig(url))
}
implicit val participantDataSourceConfigWriter: ConfigWriter[ParticipantDataSourceConfig] =
ConfigWriter.toString {
case _ if secure => Secret
case dataSourceConfig => dataSourceConfig.jdbcUrl
}
implicit val participantDataSourceConfigMapReader
: ConfigReader[Map[Ref.ParticipantId, ParticipantDataSourceConfig]] =
genericMapReader[Ref.ParticipantId, ParticipantDataSourceConfig]((s: String) =>
createParticipantId(s)
)
implicit val participantDataSourceConfigMapWriter
: ConfigWriter[Map[Ref.ParticipantId, ParticipantDataSourceConfig]] =
genericMapWriter[Ref.ParticipantId, ParticipantDataSourceConfig](_.toString)
implicit val participantConfigMapReader: ConfigReader[Map[Ref.ParticipantId, ParticipantConfig]] =
genericMapReader[Ref.ParticipantId, ParticipantConfig]((s: String) => createParticipantId(s))
implicit val participantConfigMapWriter: ConfigWriter[Map[Ref.ParticipantId, ParticipantConfig]] =
genericMapWriter[Ref.ParticipantId, ParticipantConfig](_.toString)
implicit val configHint =
ProductHint[Config](allowUnknownKeys = false)
implicit val configConvert: ConfigConvert[Config] = deriveConvert[Config]
}
object PureConfigReaderWriter {
implicit val Secure = new PureConfigReaderWriter(secure = true)
}
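The implicits above are what let an entire `Config` round-trip through HOCON. A minimal sketch of that usage, assuming the `ConfigLoader` helper that appears further down in this diff (the object name `LoadDefaultConfigSketch` is ours for illustration); the empty-config case mirrors the behavior asserted in `ConfigLoaderSpec`:

```scala
import com.typesafe.config.ConfigFactory
import com.daml.ledger.runner.common.{Config, ConfigLoader}

object LoadDefaultConfigSketch {
  // Bring the ConfigReader/ConfigWriter instances defined above into scope.
  import com.daml.ledger.runner.common.PureConfigReaderWriter.Secure._

  // An empty HOCON tree resolves to Config.Default, since every field of
  // Config carries a default in its derived converter.
  val loaded: Either[String, Config] =
    ConfigLoader.loadConfig[Config](ConfigFactory.empty())
}
```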


@@ -1,511 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.runner.common
import com.daml.jwt.JwtTimestampLeeway
import com.daml.lf.engine.EngineConfig
import com.daml.lf.interpretation.Limits
import com.daml.lf.language.LanguageVersion
import com.daml.lf.transaction.ContractKeyUniquenessMode
import com.daml.lf.VersionRange
import org.scalacheck.Gen
import com.daml.ledger.api.tls.{TlsConfiguration, TlsVersion}
import com.daml.lf.data.Ref
import com.daml.platform.apiserver.{ApiServerConfig, AuthServiceConfig}
import com.daml.platform.apiserver.SeedService.Seeding
import com.daml.platform.apiserver.configuration.RateLimitingConfig
import com.daml.platform.config.{MetricsConfig, ParticipantConfig}
import com.daml.platform.config.MetricsConfig.MetricRegistryType
import com.daml.platform.configuration.{
AcsStreamsConfig,
CommandConfiguration,
IndexServiceConfig,
InitialLedgerConfiguration,
TransactionFlatStreamsConfig,
TransactionTreeStreamsConfig,
}
import com.daml.platform.indexer.{IndexerConfig, IndexerStartupMode, PackageMetadataViewConfig}
import com.daml.platform.indexer.ha.HaConfig
import com.daml.platform.localstore.{IdentityProviderManagementConfig, UserManagementConfig}
import com.daml.platform.services.time.TimeProviderType
import com.daml.platform.store.DbSupport
import com.daml.platform.store.DbSupport.DataSourceProperties
import com.daml.platform.store.backend.postgresql.PostgresDataSourceConfig
import com.daml.platform.store.backend.postgresql.PostgresDataSourceConfig.SynchronousCommitValue
import com.daml.ports.Port
import io.netty.handler.ssl.ClientAuth
import java.io.File
import java.net.InetSocketAddress
import java.nio.file.Paths
import java.time.Duration
import java.time.temporal.ChronoUnit
import com.daml.metrics.api.reporters.MetricsReporter
object ArbitraryConfig {
val duration: Gen[Duration] = for {
value <- Gen.chooseNum(0, Int.MaxValue)
unit <- Gen.oneOf(
List(
ChronoUnit.NANOS,
ChronoUnit.MICROS,
ChronoUnit.MILLIS,
ChronoUnit.SECONDS,
)
)
} yield Duration.of(value.toLong, unit)
val versionRange: Gen[VersionRange[LanguageVersion]] = for {
min <- Gen.oneOf(LanguageVersion.All)
max <- Gen.oneOf(LanguageVersion.All)
if LanguageVersion.Ordering.compare(max, min) >= 0
} yield VersionRange[LanguageVersion](min, max)
val limits: Gen[Limits] = for {
contractSignatories <- Gen.chooseNum(Int.MinValue, Int.MaxValue)
contractObservers <- Gen.chooseNum(Int.MinValue, Int.MaxValue)
choiceControllers <- Gen.chooseNum(Int.MinValue, Int.MaxValue)
choiceObservers <- Gen.chooseNum(Int.MinValue, Int.MaxValue)
choiceAuthorizers <- Gen.chooseNum(Int.MinValue, Int.MaxValue)
transactionInputContracts <- Gen.chooseNum(Int.MinValue, Int.MaxValue)
} yield Limits(
contractSignatories,
contractObservers,
choiceControllers,
choiceObservers,
choiceAuthorizers,
transactionInputContracts,
)
val contractKeyUniquenessMode: Gen[ContractKeyUniquenessMode] =
Gen.oneOf(ContractKeyUniquenessMode.Strict, ContractKeyUniquenessMode.Off)
val engineConfig: Gen[EngineConfig] = for {
allowedLanguageVersions <- versionRange
packageValidation <- Gen.oneOf(true, false)
stackTraceMode <- Gen.oneOf(true, false)
forbidV0ContractId <- Gen.oneOf(true, false)
requireSuffixedGlobalContractId <- Gen.oneOf(true, false)
contractKeyUniqueness <- contractKeyUniquenessMode
limits <- limits
} yield EngineConfig(
allowedLanguageVersions = allowedLanguageVersions,
packageValidation = packageValidation,
stackTraceMode = stackTraceMode,
profileDir = None,
contractKeyUniqueness = contractKeyUniqueness,
forbidV0ContractId = forbidV0ContractId,
requireSuffixedGlobalContractId = requireSuffixedGlobalContractId,
limits = limits,
)
val inetSocketAddress = for {
host <- Gen.alphaStr
port <- Gen.chooseNum(1, 65535)
} yield new InetSocketAddress(host, port)
val graphiteReporter: Gen[MetricsReporter] = for {
address <- inetSocketAddress
prefixStr <- Gen.alphaStr if prefixStr.nonEmpty
prefix <- Gen.option(prefixStr)
} yield MetricsReporter.Graphite(address, prefix)
val prometheusReporter: Gen[MetricsReporter] = for {
address <- inetSocketAddress
} yield MetricsReporter.Prometheus(address)
val csvReporter: Gen[MetricsReporter] = for {
path <- Gen.alphaStr
} yield MetricsReporter.Csv(Paths.get(path).toAbsolutePath)
val metricsReporter: Gen[MetricsReporter] =
Gen.oneOf(graphiteReporter, prometheusReporter, csvReporter, Gen.const(MetricsReporter.Console))
val metricRegistryType: Gen[MetricRegistryType] =
Gen.oneOf[MetricRegistryType](MetricRegistryType.JvmShared, MetricRegistryType.New)
val metricConfig = for {
enabled <- Gen.oneOf(true, false)
reporter <- metricsReporter
reportingInterval <- Gen.finiteDuration
registryType <- metricRegistryType
} yield MetricsConfig(enabled, reporter, reportingInterval, registryType)
val clientAuth = Gen.oneOf(ClientAuth.values().toList)
val tlsVersion = Gen.oneOf(TlsVersion.allVersions)
val tlsConfiguration = for {
enabled <- Gen.oneOf(true, false)
keyCertChainFile <- Gen.option(Gen.alphaStr)
keyFile <- Gen.option(Gen.alphaStr)
trustCertCollectionFile <- Gen.option(Gen.alphaStr)
clientAuth <- clientAuth
enableCertRevocationChecking <- Gen.oneOf(true, false)
minimumServerProtocolVersion <- Gen.option(tlsVersion)
} yield TlsConfiguration(
enabled,
keyCertChainFile.map(fileName => new File(fileName)),
keyFile.map(fileName => new File(fileName)),
trustCertCollectionFile.map(fileName => new File(fileName)),
None,
clientAuth,
enableCertRevocationChecking,
minimumServerProtocolVersion,
)
val port = Gen.choose(0, 65535).map(p => Port(p))
val initialLedgerConfiguration = for {
maxDeduplicationDuration <- duration
avgTransactionLatency <- duration
minSkew <- duration
maxSkew <- duration
delayBeforeSubmitting <- duration
config = InitialLedgerConfiguration(
maxDeduplicationDuration,
avgTransactionLatency,
minSkew,
maxSkew,
delayBeforeSubmitting,
)
optConfig <- Gen.option(config)
} yield optConfig
val seeding = Gen.oneOf(Seeding.Weak, Seeding.Strong, Seeding.Static)
val userManagementConfig = for {
enabled <- Gen.oneOf(true, false)
maxCacheSize <- Gen.chooseNum(Int.MinValue, Int.MaxValue)
cacheExpiryAfterWriteInSeconds <- Gen.chooseNum(Int.MinValue, Int.MaxValue)
maxUsersPageSize <- Gen.chooseNum(Int.MinValue, Int.MaxValue)
} yield UserManagementConfig(
enabled = enabled,
maxCacheSize = maxCacheSize,
cacheExpiryAfterWriteInSeconds = cacheExpiryAfterWriteInSeconds,
maxUsersPageSize = maxUsersPageSize,
)
val identityProviderManagementConfig = for {
cacheExpiryAfterWrite <- Gen.finiteDuration
} yield IdentityProviderManagementConfig(
cacheExpiryAfterWrite = cacheExpiryAfterWrite
)
def jwtTimestampLeewayGen: Gen[JwtTimestampLeeway] = {
for {
default <- Gen.option(Gen.posNum[Long])
expiresAt <- Gen.option(Gen.posNum[Long])
issuedAt <- Gen.option(Gen.posNum[Long])
notBefore <- Gen.option(Gen.posNum[Long])
} yield JwtTimestampLeeway(
default = default,
expiresAt = expiresAt,
issuedAt = issuedAt,
notBefore = notBefore,
)
}
val UnsafeJwtHmac256 = for {
secret <- Gen.alphaStr
aud <- Gen.option(Gen.alphaStr)
} yield AuthServiceConfig.UnsafeJwtHmac256(secret, aud)
val JwtRs256Crt = for {
certificate <- Gen.alphaStr
aud <- Gen.option(Gen.alphaStr)
} yield AuthServiceConfig.JwtRs256(certificate, aud)
val JwtEs256Crt = for {
certificate <- Gen.alphaStr
aud <- Gen.option(Gen.alphaStr)
} yield AuthServiceConfig.JwtEs256(certificate, aud)
val JwtEs512Crt = for {
certificate <- Gen.alphaStr
aud <- Gen.option(Gen.alphaStr)
} yield AuthServiceConfig.JwtEs512(certificate, aud)
val JwtRs256Jwks = for {
url <- Gen.alphaStr
aud <- Gen.option(Gen.alphaStr)
} yield AuthServiceConfig.JwtRs256Jwks(url, aud)
val authServiceConfig = Gen.oneOf(
Gen.const(AuthServiceConfig.Wildcard),
UnsafeJwtHmac256,
JwtRs256Crt,
JwtEs256Crt,
JwtEs512Crt,
JwtRs256Jwks,
)
val commandConfiguration = for {
inputBufferSize <- Gen.chooseNum(Int.MinValue, Int.MaxValue)
maxCommandsInFlight <- Gen.chooseNum(Int.MinValue, Int.MaxValue)
trackerRetentionPeriod <- duration
} yield CommandConfiguration(
inputBufferSize,
maxCommandsInFlight,
trackerRetentionPeriod,
)
val timeProviderType = Gen.oneOf(TimeProviderType.Static, TimeProviderType.WallClock)
val connectionPoolConfig = for {
connectionPoolSize <- Gen.chooseNum(0, Int.MaxValue)
connectionTimeout <- Gen.finiteDuration
} yield DbSupport.ConnectionPoolConfig(
connectionPoolSize,
connectionTimeout,
)
val postgresDataSourceConfig = for {
synchronousCommit <- Gen.option(Gen.oneOf(SynchronousCommitValue.All))
tcpKeepalivesIdle <- Gen.option(Gen.chooseNum(0, Int.MaxValue))
tcpKeepalivesInterval <- Gen.option(Gen.chooseNum(0, Int.MaxValue))
tcpKeepalivesCount <- Gen.option(Gen.chooseNum(0, Int.MaxValue))
} yield PostgresDataSourceConfig(
synchronousCommit,
tcpKeepalivesIdle,
tcpKeepalivesInterval,
tcpKeepalivesCount,
)
val dataSourceProperties = for {
connectionPool <- connectionPoolConfig
postgres <- postgresDataSourceConfig
} yield DataSourceProperties(connectionPool = connectionPool, postgres = postgres)
val rateLimitingConfig = for {
maxApiServicesQueueSize <- Gen.chooseNum(0, Int.MaxValue)
maxApiServicesIndexDbQueueSize <- Gen.chooseNum(0, Int.MaxValue)
maxUsedHeapSpacePercentage <- Gen.chooseNum(0, Int.MaxValue)
minFreeHeapSpaceBytes <- Gen.long
element = RateLimitingConfig(
maxApiServicesQueueSize,
maxApiServicesIndexDbQueueSize,
maxUsedHeapSpacePercentage,
minFreeHeapSpaceBytes,
)
optElement <- Gen.option(element)
} yield optElement
val apiServerConfig = for {
address <- Gen.option(Gen.alphaStr)
apiStreamShutdownTimeout <- Gen.finiteDuration
command <- commandConfiguration
configurationLoadTimeout <- Gen.finiteDuration
initialLedgerConfiguration <- initialLedgerConfiguration
managementServiceTimeout <- Gen.finiteDuration
maxInboundMessageSize <- Gen.chooseNum(0, Int.MaxValue)
port <- port
portFile <- Gen.option(Gen.alphaStr.map(p => Paths.get(p)))
rateLimit <- rateLimitingConfig
seeding <- seeding
timeProviderType <- timeProviderType
tls <- Gen.option(tlsConfiguration)
userManagement <- userManagementConfig
} yield ApiServerConfig(
address = address,
apiStreamShutdownTimeout = apiStreamShutdownTimeout,
command = command,
configurationLoadTimeout = configurationLoadTimeout,
initialLedgerConfiguration = initialLedgerConfiguration,
managementServiceTimeout = managementServiceTimeout,
maxInboundMessageSize = maxInboundMessageSize,
port = port,
portFile = portFile,
rateLimit = rateLimit,
seeding = seeding,
timeProviderType = timeProviderType,
tls = tls,
userManagement = userManagement,
)
val indexerStartupMode: Gen[IndexerStartupMode] = for {
allowExistingSchema <- Gen.oneOf(true, false)
schemaMigrationAttempts <- Gen.chooseNum(0, Int.MaxValue)
schemaMigrationAttemptBackoff <- Gen.finiteDuration
value <- Gen.oneOf[IndexerStartupMode](
IndexerStartupMode.ValidateAndStart,
IndexerStartupMode
.ValidateAndWaitOnly(schemaMigrationAttempts, schemaMigrationAttemptBackoff),
IndexerStartupMode.MigrateOnEmptySchemaAndStart,
IndexerStartupMode.MigrateAndStart(allowExistingSchema),
)
} yield value
val haConfig = for {
mainLockAcquireRetryMillis <- Gen.long
workerLockAcquireRetryMillis <- Gen.long
workerLockAcquireMaxRetry <- Gen.long
mainLockCheckerPeriodMillis <- Gen.long
indexerLockId <- Gen.chooseNum(Int.MinValue, Int.MaxValue)
indexerWorkerLockId <- Gen.chooseNum(Int.MinValue, Int.MaxValue)
} yield HaConfig(
mainLockAcquireRetryMillis,
workerLockAcquireRetryMillis,
workerLockAcquireMaxRetry,
mainLockCheckerPeriodMillis,
indexerLockId,
indexerWorkerLockId,
)
val packageMetadataViewConfig = for {
initLoadParallelism <- Gen.chooseNum(0, Int.MaxValue)
initProcessParallelism <- Gen.chooseNum(0, Int.MaxValue)
} yield PackageMetadataViewConfig(initLoadParallelism, initProcessParallelism)
val indexerConfig = for {
batchingParallelism <- Gen.chooseNum(0, Int.MaxValue)
dataSourceProperties <- Gen.option(dataSourceProperties)
enableCompression <- Gen.oneOf(true, false)
highAvailability <- haConfig
ingestionParallelism <- Gen.chooseNum(0, Int.MaxValue)
inputMappingParallelism <- Gen.chooseNum(0, Int.MaxValue)
maxInputBufferSize <- Gen.chooseNum(0, Int.MaxValue)
restartDelay <- Gen.finiteDuration
startupMode <- indexerStartupMode
submissionBatchSize <- Gen.long
packageMetadataViewConfig <- packageMetadataViewConfig
} yield IndexerConfig(
batchingParallelism = batchingParallelism,
dataSourceProperties = dataSourceProperties,
enableCompression = enableCompression,
highAvailability = highAvailability,
ingestionParallelism = ingestionParallelism,
inputMappingParallelism = inputMappingParallelism,
maxInputBufferSize = maxInputBufferSize,
restartDelay = restartDelay,
startupMode = startupMode,
submissionBatchSize = submissionBatchSize,
packageMetadataView = packageMetadataViewConfig,
)
def genAcsStreamConfig: Gen[AcsStreamsConfig] =
for {
eventsPageSize <- Gen.chooseNum(0, Int.MaxValue)
acsIdPageSize <- Gen.chooseNum(0, Int.MaxValue)
acsIdPageBufferSize <- Gen.chooseNum(0, Int.MaxValue)
acsIdPageWorkingMemoryBytes <- Gen.chooseNum(0, Int.MaxValue)
acsIdFetchingParallelism <- Gen.chooseNum(0, Int.MaxValue)
acsContractFetchingParallelism <- Gen.chooseNum(0, Int.MaxValue)
} yield AcsStreamsConfig(
maxIdsPerIdPage = acsIdPageSize,
maxPayloadsPerPayloadsPage = eventsPageSize,
maxPagesPerIdPagesBuffer = acsIdPageBufferSize,
maxWorkingMemoryInBytesForIdPages = acsIdPageWorkingMemoryBytes,
maxParallelIdCreateQueries = acsIdFetchingParallelism,
maxParallelPayloadCreateQueries = acsContractFetchingParallelism,
)
def genTransactionFlatStreams: Gen[TransactionFlatStreamsConfig] =
for {
maxIdsPerIdPage <- Gen.chooseNum(0, Int.MaxValue)
maxPayloadsPerPayloadsPage <- Gen.chooseNum(0, Int.MaxValue)
maxPagesPerIdPagesBuffer <- Gen.chooseNum(0, Int.MaxValue)
maxWorkingMemoryInBytesForIdPages <- Gen.chooseNum(0, Int.MaxValue)
maxParallelIdCreateQueries <- Gen.chooseNum(0, Int.MaxValue)
maxParallelPayloadCreateQueries <- Gen.chooseNum(0, Int.MaxValue)
maxParallelIdConsumingQueries <- Gen.chooseNum(0, Int.MaxValue)
maxParallelPayloadConsumingQueries <- Gen.chooseNum(0, Int.MaxValue)
maxParallelPayloadQueries <- Gen.chooseNum(0, Int.MaxValue)
transactionsProcessingParallelism <- Gen.chooseNum(0, Int.MaxValue)
} yield TransactionFlatStreamsConfig(
maxIdsPerIdPage = maxIdsPerIdPage,
maxPagesPerIdPagesBuffer = maxPagesPerIdPagesBuffer,
maxWorkingMemoryInBytesForIdPages = maxWorkingMemoryInBytesForIdPages,
maxPayloadsPerPayloadsPage = maxPayloadsPerPayloadsPage,
maxParallelIdCreateQueries = maxParallelIdCreateQueries,
maxParallelIdConsumingQueries = maxParallelIdConsumingQueries,
maxParallelPayloadCreateQueries = maxParallelPayloadCreateQueries,
maxParallelPayloadConsumingQueries = maxParallelPayloadConsumingQueries,
maxParallelPayloadQueries = maxParallelPayloadQueries,
transactionsProcessingParallelism = transactionsProcessingParallelism,
)
def genTransactionTreeStreams: Gen[TransactionTreeStreamsConfig] =
for {
maxIdsPerIdPage <- Gen.chooseNum(0, Int.MaxValue)
maxPayloadsPerPayloadsPage <- Gen.chooseNum(0, Int.MaxValue)
maxPagesPerIdPagesBuffer <- Gen.chooseNum(0, Int.MaxValue)
maxWorkingMemoryInBytesForIdPages <- Gen.chooseNum(0, Int.MaxValue)
maxParallelIdCreateQueries <- Gen.chooseNum(0, Int.MaxValue)
maxParallelPayloadCreateQueries <- Gen.chooseNum(0, Int.MaxValue)
maxParallelIdConsumingQueries <- Gen.chooseNum(0, Int.MaxValue)
maxParallelPayloadConsumingQueries <- Gen.chooseNum(0, Int.MaxValue)
maxParallelPayloadQueries <- Gen.chooseNum(0, Int.MaxValue)
transactionsProcessingParallelism <- Gen.chooseNum(0, Int.MaxValue)
maxParallelIdNonConsumingQueries <- Gen.chooseNum(0, Int.MaxValue)
maxParallelPayloadNonConsumingQueries <- Gen.chooseNum(0, Int.MaxValue)
} yield TransactionTreeStreamsConfig(
maxIdsPerIdPage = maxIdsPerIdPage,
maxPagesPerIdPagesBuffer = maxPagesPerIdPagesBuffer,
maxWorkingMemoryInBytesForIdPages = maxWorkingMemoryInBytesForIdPages,
maxPayloadsPerPayloadsPage = maxPayloadsPerPayloadsPage,
maxParallelIdCreateQueries = maxParallelIdCreateQueries,
maxParallelIdConsumingQueries = maxParallelIdConsumingQueries,
maxParallelPayloadCreateQueries = maxParallelPayloadCreateQueries,
maxParallelPayloadConsumingQueries = maxParallelPayloadConsumingQueries,
maxParallelPayloadQueries = maxParallelPayloadQueries,
transactionsProcessingParallelism = transactionsProcessingParallelism,
maxParallelIdNonConsumingQueries = maxParallelIdNonConsumingQueries,
maxParallelPayloadNonConsumingQueries = maxParallelPayloadNonConsumingQueries,
)
val indexServiceConfig: Gen[IndexServiceConfig] = for {
acsStreams <- genAcsStreamConfig
transactionFlatStreams <- genTransactionFlatStreams
transactionTreeStreams <- genTransactionTreeStreams
eventsProcessingParallelism <- Gen.chooseNum(0, Int.MaxValue)
bufferedStreamsPageSize <- Gen.chooseNum(0, Int.MaxValue)
maxContractStateCacheSize <- Gen.long
maxContractKeyStateCacheSize <- Gen.long
maxTransactionsInMemoryFanOutBufferSize <- Gen.chooseNum(0, Int.MaxValue)
apiStreamShutdownTimeout <- Gen.finiteDuration
} yield IndexServiceConfig(
eventsProcessingParallelism,
bufferedStreamsPageSize,
maxContractStateCacheSize,
maxContractKeyStateCacheSize,
maxTransactionsInMemoryFanOutBufferSize,
apiStreamShutdownTimeout,
acsStreams = acsStreams,
transactionFlatStreams = transactionFlatStreams,
transactionTreeStreams = transactionTreeStreams,
)
val participantConfig = for {
apiServer <- apiServerConfig
dataSourceProperties <- dataSourceProperties
indexService <- indexServiceConfig
indexer <- indexerConfig
jwtTimestampLeeway <- Gen.option(jwtTimestampLeewayGen)
} yield ParticipantConfig(
apiServer = apiServer,
authentication = AuthServiceConfig.Wildcard, // hardcoded to wildcard, as otherwise it
// will be redacted and cannot be checked for isomorphism
jwtTimestampLeeway = jwtTimestampLeeway,
dataSourceProperties = dataSourceProperties,
indexService = indexService,
indexer = indexer,
)
val config = for {
engine <- engineConfig
ledgerId <- Gen.alphaStr
metrics <- metricConfig
participant <- participantConfig
} yield Config(
engine = engine,
ledgerId = ledgerId,
metrics = metrics,
dataSource = Map.empty, // hardcoded to empty, as the JDBC URL would otherwise
// be redacted and could not be checked for isomorphism
participants = Map(
Ref.ParticipantId.fromString("default").toOption.get -> participant
),
)
}
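These generators exist to drive property-based round-trip checks of the pureconfig readers and writers. A minimal sketch of that pattern, following the same `reader.from(writer.to(x))` shape used by `PureConfigReaderWriterSpec` further down in this diff (the object name `PortRoundTripSketch` is ours for illustration):

```scala
import org.scalacheck.Prop.forAll
import pureconfig.{ConfigReader, ConfigWriter}
import com.daml.ledger.runner.common.{ArbitraryConfig, PureConfigReaderWriter}
import com.daml.ports.Port

object PortRoundTripSketch {
  import PureConfigReaderWriter.Secure._

  // Writing an arbitrary Port to a ConfigValue and reading it back should
  // recover the original value.
  val portRoundTrip = forAll(ArbitraryConfig.port) { port: Port =>
    ConfigReader[Port].from(ConfigWriter[Port].to(port)) == Right(port)
  }
}
```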


@@ -1,6 +0,0 @@
test {
value-1 = v1
value-2 = v2
value-3 = v3
}


@@ -1,6 +0,0 @@
test {
value-1 = overriden_v1
value-2 = overriden_v2
#value-3 = overriden_v3 # deliberately not overridden, to check that value-3 keeps its value from the first config file
}


@@ -1,3 +0,0 @@
value-1 = v1
value-2 = v2
value-3 = v3


@@ -1,471 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.runner.common
import com.daml.bazeltools.BazelRunfiles._
import com.daml.ledger.api.tls.{SecretsUrl, TlsConfiguration, TlsVersion}
import com.daml.ledger.runner.common.CliConfigSpec.TestScope
import com.daml.lf.data.Ref
import com.daml.platform.config.ParticipantConfig
import io.netty.handler.ssl.ClientAuth
import org.scalatest.OptionValues
import org.scalatest.flatspec.AnyFlatSpec
import org.scalatest.matchers.should.Matchers
import org.scalatest.prop.TableDrivenPropertyChecks
import scopt.OParser
import java.io.File
import java.time.Duration
final class CliConfigSpec
extends AnyFlatSpec
with Matchers
with OptionValues
with TableDrivenPropertyChecks {
behavior of "CliConfig with RunLegacy mode"
it should "succeed when server's private key is encrypted and secret-url is provided" in new TestScope {
val actual = configParser(
Seq(
"run-legacy-cli-config",
"--participant=participant-id=example,port=0",
"--pem",
"key.enc",
"--tls-secrets-url",
"http://aaa",
)
)
actual should not be None
actual.get.tlsConfig shouldBe Some(
TlsConfiguration(
enabled = true,
secretsUrl = Some(SecretsUrl.fromString("http://aaa")),
privateKeyFile = Some(new File("key.enc")),
certChainFile = None,
trustCollectionFile = None,
)
)
}
it should "fail when server's private key is encrypted but secret-url is not provided" in new TestScope {
configParser(
Seq(
"run-legacy-cli-config",
"--participant=participant-id=example,port=0",
"--pem",
"key.enc",
)
) shouldBe None
}
it should "fail parsing a bogus TLS version" in new TestScope {
configParser(
Seq(
"run-legacy-cli-config",
"--participant=participant-id=example,port=0",
"--min-tls-version",
"111",
)
) shouldBe None
}
it should "succeed parsing a supported TLS version" in new TestScope {
val actual = configParser(
Seq(
"run-legacy-cli-config",
"--participant=participant-id=example,port=0",
"--min-tls-version",
"1.3",
)
)
actual should not be None
actual.get.tlsConfig shouldBe Some(
TlsConfiguration(
enabled = true,
minimumServerProtocolVersion = Some(TlsVersion.V1_3),
)
)
}
it should "succeed when server's private key is in plaintext and secret-url is not provided" in new TestScope {
val actual = configParser(
Seq(
"run-legacy-cli-config",
"--participant=participant-id=example,port=0",
"--pem",
"key.txt",
)
)
actual should not be None
actual.get.tlsConfig shouldBe Some(
TlsConfiguration(
enabled = true,
secretsUrl = None,
privateKeyFile = Some(new File("key.txt")),
certChainFile = None,
trustCollectionFile = None,
)
)
}
it should "be running in Run mode if parameters list is empty" in new TestScope {
configParser(Seq()).value.mode shouldBe Mode.Run
}
it should "fail if a participant is not provided in run mode" in new TestScope {
configParser(Seq("run-legacy-cli-config")) shouldEqual None
}
it should "fail if a participant is not provided when dumping the index metadata" in new TestScope {
configParser(Seq(dumpIndexMetadataCommand)) shouldEqual None
}
it should "succeed if a participant is provided when dumping the index metadata" in new TestScope {
configParser(Seq(dumpIndexMetadataCommand, "some-jdbc-url")) should not be empty
}
it should "succeed if more than one participant is provided when dumping the index metadata" in new TestScope {
configParser(
Seq(dumpIndexMetadataCommand, "some-jdbc-url", "some-other-jdbc-url")
) should not be empty
}
it should "accept single default participant" in new TestScope {
val config =
configParser(Seq("run-legacy-cli-config", "--participant", "participant-id=p1,port=123"))
.getOrElse(fail())
config.participants(0).participantId should be(Ref.ParticipantId.assertFromString("p1"))
}
it should "accept multiple participant, with unique id" in new TestScope {
val config = configParser(
Seq(
"run-legacy-cli-config",
"--participant",
"participant-id=p1,port=123",
"--participant",
"participant-id=p2,port=123",
)
).getOrElse(fail())
config.participants(0).participantId should be(Ref.ParticipantId.assertFromString("p1"))
config.participants(1).participantId should be(Ref.ParticipantId.assertFromString("p2"))
}
it should "fail to accept multiple participants with non-unique ids" in new TestScope {
configParser(
Seq(
"run-legacy-cli-config",
"--participant",
"participant-id=p1,port=123",
"--participant",
"participant-id=p1,port=123",
)
) shouldBe None
}
it should "get the jdbc string from the command line argument when provided" in new TestScope {
val jdbcFromCli = "command-line-jdbc"
val config = configParser(
Seq(
"run-legacy-cli-config",
participantOption,
s"$fixedParticipantSubOptions,$jdbcUrlSubOption=${TestJdbcValues.jdbcFromCli}",
)
)
.getOrElse(fail())
config.participants.head.serverJdbcUrl should be(jdbcFromCli)
}
it should "get the jdbc string from the environment when provided" in new TestScope {
val config = configParser(
Seq(
"run-legacy-cli-config",
participantOption,
s"$fixedParticipantSubOptions,$jdbcUrlEnvSubOption=$jdbcEnvVar",
),
{ case `jdbcEnvVar` => Some(TestJdbcValues.jdbcFromEnv) },
).getOrElse(parsingFailure())
config.participants.head.serverJdbcUrl should be(TestJdbcValues.jdbcFromEnv)
}
it should "return the default when env variable not provided" in new TestScope {
val defaultJdbc = ParticipantConfig.defaultIndexJdbcUrl(participantId)
val config = configParser(
Seq(
"run-legacy-cli-config",
participantOption,
s"$fixedParticipantSubOptions,$jdbcUrlEnvSubOption=$jdbcEnvVar",
)
).getOrElse(parsingFailure())
config.participants.head.serverJdbcUrl should be(defaultJdbc)
}
it should "get the certificate revocation checking parameter when provided" in new TestScope {
val config =
configParser(parameters = minimumRunLegacyOptions ++ List(s"$certRevocationChecking", "true"))
.getOrElse(parsingFailure())
config.tlsConfig.value.enableCertRevocationChecking should be(true)
}
it should "get the tracker retention period when provided" in new TestScope {
val periodStringRepresentation = "P0DT1H2M3S"
val expectedPeriod = Duration.ofHours(1).plusMinutes(2).plusSeconds(3)
val config =
configParser(parameters =
minimumRunLegacyOptions ++ List(trackerRetentionPeriod, periodStringRepresentation)
)
.getOrElse(parsingFailure())
config.commandConfig.trackerRetentionPeriod should be(expectedPeriod)
}
it should "set the client-auth parameter when provided" in new TestScope {
val cases = Table(
("clientAuthParam", "expectedParsedValue"),
("none", ClientAuth.NONE),
("optional", ClientAuth.OPTIONAL),
("require", ClientAuth.REQUIRE),
)
forAll(cases) { (param, expectedValue) =>
val config =
configParser(parameters = minimumRunLegacyOptions ++ List(clientAuth, param))
.getOrElse(parsingFailure())
config.tlsConfig.value.clientAuth shouldBe expectedValue
}
}
it should "handle '--enable-user-management' flag correctly" in new TestScope {
configParser(
minimumRunLegacyOptions ++ Seq(
"--enable-user-management"
)
) shouldBe None
configParser(
minimumRunLegacyOptions ++ Seq(
"--enable-user-management",
"false",
)
).value.userManagementConfig.enabled shouldBe false
configParser(
minimumRunLegacyOptions ++ Seq(
"--enable-user-management",
"true",
)
).value.userManagementConfig.enabled shouldBe true
configParser(
minimumRunLegacyOptions
).value.userManagementConfig.enabled shouldBe false
}
it should "set REQUIRE client-auth when the parameter is not explicitly provided" in new TestScope {
val aValidTlsOptions = List(s"$certRevocationChecking", "false")
val config =
configParser(parameters = minimumRunLegacyOptions ++ aValidTlsOptions)
.getOrElse(parsingFailure())
config.tlsConfig.value.clientAuth shouldBe ClientAuth.REQUIRE
}
it should "handle '--user-management-max-cache-size' flag correctly" in new TestScope {
// missing cache size value
configParserRunLegacy(
Seq("--user-management-max-cache-size")
) shouldBe None
// default
configParserRunLegacy().value.userManagementConfig.maxCacheSize shouldBe 100
// custom value
configParserRunLegacy(
Seq(
"--user-management-max-cache-size",
"123",
)
).value.userManagementConfig.maxCacheSize shouldBe 123
}
it should "handle '--user-management-cache-expiry' flag correctly" in new TestScope {
// missing cache size value
configParserRunLegacy(
Seq("--user-management-cache-expiry")
) shouldBe None
// default
configParserRunLegacy().value.userManagementConfig.cacheExpiryAfterWriteInSeconds shouldBe 5
// custom value
configParserRunLegacy(
Seq(
"--user-management-cache-expiry",
"123",
)
).value.userManagementConfig.cacheExpiryAfterWriteInSeconds shouldBe 123
}
it should "handle '--user-management-max-users-page-size' flag correctly" in new TestScope {
// missing value
configParserRunLegacy(
Seq("--user-management-max-users-page-size")
) shouldBe None
// default
configParserRunLegacy().value.userManagementConfig.maxUsersPageSize shouldBe 1000
// custom value
configParserRunLegacy(
Seq(
"--user-management-max-users-page-size",
"123",
)
).value.userManagementConfig.maxUsersPageSize shouldBe 123
// values in range [1, 99] are disallowed
configParserRunLegacy(
Array(
"--user-management-max-users-page-size",
"1",
)
) shouldBe None
configParserRunLegacy(
Array(
"--user-management-max-users-page-size",
"99",
)
) shouldBe None
// negative values are disallowed
configParserRunLegacy(
Array(
"--user-management-max-users-page-size",
"-1",
)
) shouldBe None
}
behavior of "CliConfig with Run mode"
it should "support empty cli parameters" in new TestScope {
configParserRun().value.configFiles shouldBe Seq.empty
configParserRun().value.configMap shouldBe Map.empty
}
it should "support key-value map via -C option" in new TestScope {
configParserRun(Seq("-C", "key1=value1,key2=value2")).value.configMap shouldBe Map(
"key1" -> "value1",
"key2" -> "value2",
)
}
it should "support key-value map via multiple -C options" in new TestScope {
configParserRun(Seq("-C", "key1=value1", "-C", "key2=value2")).value.configMap shouldBe Map(
"key1" -> "value1",
"key2" -> "value2",
)
}
it should "support key-value map with complex hocon path" in new TestScope {
configParserRun(
Seq("-C", "ledger.participant.api-server.host=localhost")
).value.configMap shouldBe Map(
"ledger.participant.api-server.host" -> "localhost"
)
}
it should "support existing config file" in new TestScope {
configParserRun(Seq("-c", confFilePath)).value.configFiles shouldBe Seq(
confFile
)
}
it should "support multiple existing config files" in new TestScope {
configParserRun(Seq("-c", confFilePath, "-c", confFilePath2)).value.configFiles shouldBe Seq(
confFile,
confFile2,
)
configParserRun(Seq("-c", s"$confFilePath,$confFilePath2")).value.configFiles shouldBe Seq(
confFile,
confFile2,
)
}
it should "fail for non-existing config file" in new TestScope {
configParserRun(Seq("-c", "somefile.conf")) shouldBe None
}
}
object CliConfigSpec {
trait TestScope extends Matchers {
val dumpIndexMetadataCommand = "dump-index-metadata"
val participantOption = "--participant"
val participantId: Ref.ParticipantId =
Ref.ParticipantId.assertFromString("dummy-participant")
val fixedParticipantSubOptions = s"participant-id=$participantId,port=123"
val jdbcUrlSubOption = "server-jdbc-url"
val jdbcUrlEnvSubOption = "server-jdbc-url-env"
val jdbcEnvVar = "JDBC_ENV_VAR"
val certRevocationChecking = "--cert-revocation-checking"
val trackerRetentionPeriod = "--tracker-retention-period"
val clientAuth = "--client-auth"
object TestJdbcValues {
val jdbcFromCli = "command-line-jdbc"
val jdbcFromEnv = "env-jdbc"
}
val minimumRunLegacyOptions = List(
"run-legacy-cli-config",
participantOption,
s"$fixedParticipantSubOptions,$jdbcUrlSubOption=${TestJdbcValues.jdbcFromCli}",
)
val minimumRunOptions = List(
"run"
)
def configParser(
parameters: Seq[String],
getEnvVar: String => Option[String] = (_ => None),
): Option[CliConfig[Unit]] =
CliConfig.parse(
name = "Test",
extraOptions = OParser.builder[CliConfig[Unit]].note(""),
defaultExtra = (),
args = parameters,
getEnvVar = getEnvVar,
)
def configParserRunLegacy(
parameters: Iterable[String] = Seq.empty
): Option[CliConfig[Unit]] =
configParser(
minimumRunLegacyOptions ++ parameters
)
def configParserRun(
parameters: Iterable[String] = Seq.empty
): Option[CliConfig[Unit]] =
configParser(
minimumRunOptions ++ parameters
)
def parsingFailure(): Nothing = fail("Config parsing failed.")
def confStringPath = "ledger/ledger-runner-common/src/test/resources/test.conf"
def confFilePath = rlocation(confStringPath)
def confFile = requiredResource(confStringPath)
def confStringPath2 = "ledger/ledger-runner-common/src/test/resources/test2.conf"
def confFilePath2 = rlocation(confStringPath2)
def confFile2 = requiredResource(confStringPath2)
}
}


@@ -1,208 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.runner.common
import com.daml.bazeltools.BazelRunfiles._
import com.daml.ledger.runner.common.ConfigLoaderSpec.{ComplexObject, InnerObject, TestScope}
import com.typesafe.config.{Config => TypesafeConfig}
import com.typesafe.config.{ConfigFactory, ConfigValueFactory}
import org.scalatest.EitherValues
import org.scalatest.flatspec.AnyFlatSpec
import org.scalatest.matchers.should.Matchers
import pureconfig.ConfigConvert
import pureconfig.generic.semiauto._
import java.io.File
import java.nio.file.Paths
class ConfigLoaderSpec extends AnyFlatSpec with Matchers with EitherValues {
behavior of "ConfigLoader.toTypesafeConfig"
it should "load defaults if no file or key-value map is provided" in new TestScope {
ConfigLoader.toTypesafeConfig(fallback = empty) shouldBe ConfigFactory.empty()
}
it should "override value from the configMap" in new TestScope {
ConfigLoader.toTypesafeConfig(
configMap = Map("a" -> "c"),
fallback = updatedConfig(empty, "a", "b"),
) shouldBe updatedConfig(empty, "a", "c")
}
it should "load correctly config file" in new TestScope {
val testConf = ConfigLoader.toTypesafeConfig(
configFiles = Seq(testFileConfig),
fallback = empty,
)
testConf.getString("test.value-1") shouldBe "v1"
testConf.getString("test.value-2") shouldBe "v2"
testConf.getString("test.value-3") shouldBe "v3"
}
it should "override one config file by another" in new TestScope {
val testConf = ConfigLoader.toTypesafeConfig(
configFiles = Seq(testFileConfig, testFileConfig2),
fallback = empty,
)
testConf.getString("test.value-1") shouldBe "overriden_v1"
testConf.getString("test.value-2") shouldBe "overriden_v2"
testConf.getString("test.value-3") shouldBe "v3"
}
it should "take precedence of configMap over configFiles" in new TestScope {
val testConf = ConfigLoader.toTypesafeConfig(
configFiles = Seq(testFileConfig, testFileConfig2),
configMap = Map("test.value-1" -> "configmapvalue"),
fallback = empty,
)
testConf.getString("test.value-1") shouldBe "configmapvalue"
testConf.getString("test.value-2") shouldBe "overriden_v2"
testConf.getString("test.value-3") shouldBe "v3"
}
it should "overwrite complex objects with simple value" in new TestScope {
val testConf = ConfigLoader.toTypesafeConfig(
configFiles = Seq(testFileConfig),
configMap = Map("test" -> "configmapvalue"),
fallback = empty,
)
testConf.getString("test") shouldBe "configmapvalue"
testConf.hasPath("test.value-1") shouldBe false
testConf.hasPath("test.value-2") shouldBe false
testConf.hasPath("test.value-3") shouldBe false
}
behavior of "ConfigLoader.loadConfig"
it should "fail to load empty config" in new TestScope {
val actualValue: String = ConfigLoader
.loadConfig[ComplexObject](ConfigFactory.empty())
.left
.value
actualValue shouldBe
"""Failed to load configuration:
|at the root:
| - (empty config) Key not found: 'value-1'.
| - (empty config) Key not found: 'value-2'.""".stripMargin
}
it should "successfully load config" in new TestScope {
import ComplexObject._
ConfigLoader.loadConfig[ComplexObject](fileConfig).value shouldBe ComplexObject(
value1 = "v1",
value2 = "v2",
value3 = "v3",
)
}
it should "successfully load config and take default from the object if value not provided" in new TestScope {
import ComplexObject._
val testConf = ConfigLoader.toTypesafeConfig(
configMap = Map("value-1" -> "v1", "value-2" -> "v2"),
fallback = empty,
)
val actual: Either[String, ComplexObject] = ConfigLoader.loadConfig[ComplexObject](testConf)
actual.value shouldBe ComplexObject(
value1 = "v1",
value2 = "v2",
value3 = "default_value",
)
}
it should "support complex objects override with empty string" in new TestScope {
ConfigLoader
.loadConfig[ComplexObject](
ConfigLoader.toTypesafeConfig(
fallback = complexConfig
)
)
.value shouldBe ComplexObject(
value1 = "v1",
value2 = "v2",
value3 = "v3",
inner = Some(InnerObject("inner_v1", "inner_v2")),
)
ConfigLoader
.loadConfig[ComplexObject](
ConfigLoader.toTypesafeConfig(
fallback = complexConfig,
configMap = Map("inner" -> ""),
)
)
.value shouldBe ComplexObject(
value1 = "v1",
value2 = "v2",
value3 = "v3",
inner = None,
)
}
behavior of "ConfigLoader.loadConfig for com.daml.ledger.runner.common.Config"
it should "load config from empty config and resolve to default" in {
import PureConfigReaderWriter.Secure._
val fromConfig = ConfigLoader.loadConfig[Config](ConfigFactory.empty())
fromConfig.value shouldBe Config.Default
}
}
object ConfigLoaderSpec {
case class InnerObject(value1: String, value2: String)
object InnerObject {
implicit val Convert: ConfigConvert[InnerObject] = deriveConvert[InnerObject]
}
case class ComplexObject(
value1: String,
value2: String,
value3: String = "default_value",
inner: Option[InnerObject] = None,
)
object ComplexObject {
implicit val Convert: ConfigConvert[ComplexObject] = deriveConvert[ComplexObject]
}
trait TestScope {
val complexConfig = ConfigFactory.parseString("""
| value-1 = v1
| value-2 = v2
| value-3 = v3
| inner {
| value-1 = inner_v1
| value-2 = inner_v2
| }
|""".stripMargin)
val empty: TypesafeConfig = ConfigFactory.empty()
def loadTestFile(path: String): File = {
val uri = getClass.getResource(path).toURI()
println(uri)
Paths.get(uri).toFile()
}
def fileConfig = ConfigLoader.toTypesafeConfig(
configFiles = Seq(config),
fallback = ConfigFactory.empty(),
)
val config = requiredResource(
"ledger/ledger-runner-common/src/test/resources/testp.conf"
)
val testFileConfig = requiredResource(
"ledger/ledger-runner-common/src/test/resources/test.conf"
)
val testFileConfig2 = requiredResource(
"ledger/ledger-runner-common/src/test/resources/test2.conf"
)
def updatedConfig(config: TypesafeConfig, path: String, value: String) =
config.withValue(path, ConfigValueFactory.fromAnyRef(value))
}
}


@@ -1,859 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.runner.common
import com.daml.jwt.JwtTimestampLeeway
import com.daml.lf.interpretation.Limits
import com.daml.lf.language.LanguageVersion
import com.daml.lf.transaction.ContractKeyUniquenessMode
import com.daml.lf.{VersionRange, language}
import com.typesafe.config.ConfigValueFactory.fromAnyRef
import org.scalacheck.Gen
import org.scalatest.{Assertion, EitherValues}
import org.scalatest.flatspec.AnyFlatSpec
import org.scalatest.matchers.should.Matchers
import org.scalatestplus.scalacheck.ScalaCheckPropertyChecks
import pureconfig.{ConfigConvert, ConfigReader, ConfigSource, ConfigWriter}
import com.daml.ledger.api.tls.{SecretsUrl, TlsConfiguration, TlsVersion}
import com.daml.ledger.runner.common
import com.daml.ledger.runner.common.OptConfigValue.{optReaderEnabled, optWriterEnabled}
import com.daml.platform.apiserver.{ApiServerConfig, AuthServiceConfig}
import com.daml.platform.apiserver.SeedService.Seeding
import com.daml.platform.apiserver.configuration.RateLimitingConfig
import com.daml.platform.config.MetricsConfig
import com.daml.platform.configuration.{
CommandConfiguration,
IndexServiceConfig,
InitialLedgerConfiguration,
}
import com.daml.platform.indexer.{IndexerConfig, PackageMetadataViewConfig}
import com.daml.platform.indexer.ha.HaConfig
import com.daml.platform.localstore.UserManagementConfig
import com.daml.platform.services.time.TimeProviderType
import com.daml.platform.store.DbSupport.ParticipantDataSourceConfig
import com.daml.platform.store.backend.postgresql.PostgresDataSourceConfig.SynchronousCommitValue
import com.typesafe.config.ConfigFactory
import pureconfig.error.ConfigReaderFailures
import java.net.InetSocketAddress
import java.nio.file.Path
import java.time.Duration
import com.daml.metrics.api.reporters.MetricsReporter
import scala.reflect.{ClassTag, classTag}
class PureConfigReaderWriterSpec
extends AnyFlatSpec
with Matchers
with ScalaCheckPropertyChecks
with EitherValues {
def convert[T](converter: ConfigReader[T], str: String): Either[ConfigReaderFailures, T] = {
val value = ConfigFactory.parseString(str)
for {
source <- ConfigSource.fromConfig(value).cursor()
result <- converter.from(source)
} yield result
}
def testReaderWriterIsomorphism[T: ClassTag: ConfigWriter: ConfigReader](
secure: Boolean,
generator: Gen[T],
name: Option[String] = None,
): Unit = {
val secureText = secure match {
case true => "secure "
case false => ""
}
secureText + name.getOrElse(classTag[T].toString) should "be isomorphic" in forAll(generator) {
generatedValue =>
val writer = implicitly[ConfigWriter[T]]
val reader = implicitly[ConfigReader[T]]
reader.from(writer.to(generatedValue)).value shouldBe generatedValue
}
}
def testReaderWriterIsomorphism(secure: Boolean): Unit = {
val readerWriter = new PureConfigReaderWriter(secure)
import readerWriter._
testReaderWriterIsomorphism(secure, ArbitraryConfig.duration)
testReaderWriterIsomorphism(secure, ArbitraryConfig.versionRange)
testReaderWriterIsomorphism(secure, ArbitraryConfig.limits)
testReaderWriterIsomorphism(secure, ArbitraryConfig.contractKeyUniquenessMode)
testReaderWriterIsomorphism(secure, ArbitraryConfig.engineConfig)
testReaderWriterIsomorphism(secure, ArbitraryConfig.metricsReporter)
testReaderWriterIsomorphism(secure, ArbitraryConfig.metricRegistryType)
testReaderWriterIsomorphism(secure, ArbitraryConfig.metricConfig)
testReaderWriterIsomorphism(secure, Gen.oneOf(TlsVersion.allVersions))
testReaderWriterIsomorphism(secure, ArbitraryConfig.tlsConfiguration)
testReaderWriterIsomorphism(secure, ArbitraryConfig.port)
testReaderWriterIsomorphism(
secure,
ArbitraryConfig.initialLedgerConfiguration,
Some("InitialLedgerConfiguration"),
)
testReaderWriterIsomorphism(secure, ArbitraryConfig.clientAuth)
testReaderWriterIsomorphism(secure, ArbitraryConfig.userManagementConfig)
testReaderWriterIsomorphism(secure, ArbitraryConfig.identityProviderManagementConfig)
testReaderWriterIsomorphism(secure, ArbitraryConfig.connectionPoolConfig)
testReaderWriterIsomorphism(secure, ArbitraryConfig.postgresDataSourceConfig)
testReaderWriterIsomorphism(secure, ArbitraryConfig.dataSourceProperties)
testReaderWriterIsomorphism(
secure,
ArbitraryConfig.rateLimitingConfig,
Some("RateLimitingConfig"),
)
testReaderWriterIsomorphism(secure, ArbitraryConfig.indexerConfig)
testReaderWriterIsomorphism(secure, ArbitraryConfig.indexerStartupMode)
testReaderWriterIsomorphism(secure, ArbitraryConfig.packageMetadataViewConfig)
testReaderWriterIsomorphism(secure, ArbitraryConfig.commandConfiguration)
testReaderWriterIsomorphism(secure, ArbitraryConfig.apiServerConfig)
testReaderWriterIsomorphism(secure, ArbitraryConfig.haConfig)
testReaderWriterIsomorphism(secure, ArbitraryConfig.indexServiceConfig)
testReaderWriterIsomorphism(secure, ArbitraryConfig.participantConfig)
testReaderWriterIsomorphism(secure, ArbitraryConfig.config)
}
testReaderWriterIsomorphism(secure = true)
testReaderWriterIsomorphism(secure = false)
import PureConfigReaderWriter.Secure._
behavior of "Duration"
it should "read/write against predefined values" in {
def compare(duration: Duration, expectedString: String): Assertion = {
javaDurationWriter.to(duration) shouldBe fromAnyRef(expectedString)
javaDurationReader.from(fromAnyRef(expectedString)).value shouldBe duration
}
compare(Duration.ofSeconds(0), "0 days")
compare(Duration.ofSeconds(1), "1 second")
compare(Duration.ofSeconds(30), "30 seconds")
compare(Duration.ofHours(1), "3600 seconds")
}
behavior of "JwtTimestampLeeway"
val validJwtTimestampLeewayValue =
"""
| enabled = true
| default = 1
|""".stripMargin
it should "read/write against predefined values" in {
def compare(configString: String, expectedValue: Option[JwtTimestampLeeway]) = {
convert(jwtTimestampLeewayConfigConvert, configString).value shouldBe expectedValue
}
compare(
"""
| enabled = true
| default = 1
|""".stripMargin,
Some(JwtTimestampLeeway(Some(1), None, None, None)),
)
compare(
"""
| enabled = true
| expires-at = 2
|""".stripMargin,
Some(JwtTimestampLeeway(None, Some(2), None, None)),
)
compare(
"""
| enabled = true
| issued-at = 3
|""".stripMargin,
Some(JwtTimestampLeeway(None, None, Some(3), None)),
)
compare(
"""
| enabled = true
| not-before = 4
|""".stripMargin,
Some(JwtTimestampLeeway(None, None, None, Some(4))),
)
compare(
"""
| enabled = true
| default = 1
| expires-at = 2
| issued-at = 3
| not-before = 4
|""".stripMargin,
Some(JwtTimestampLeeway(Some(1), Some(2), Some(3), Some(4))),
)
compare(
"""
| enabled = false
| default = 1
| expires-at = 2
| issued-at = 3
| not-before = 4
|""".stripMargin,
None,
)
}
it should "not support unknown keys" in {
convert(
jwtTimestampLeewayConfigConvert,
"unknown-key=yes\n" + validJwtTimestampLeewayValue,
).left.value
.prettyPrint(0) should include("Unknown key")
}
behavior of "PureConfigReaderWriter VersionRange[LanguageVersion]"
it should "read/write against predefined values" in {
def compare(
range: VersionRange[language.LanguageVersion],
expectedString: String,
): Assertion = {
versionRangeWriter.to(range) shouldBe fromAnyRef(expectedString)
versionRangeReader.from(fromAnyRef(expectedString)).value shouldBe range
}
compare(LanguageVersion.DevVersions, "daml-lf-dev-mode-unsafe")
compare(LanguageVersion.EarlyAccessVersions, "early-access")
compare(LanguageVersion.LegacyVersions, "legacy")
versionRangeReader
.from(fromAnyRef("stable"))
.value shouldBe LanguageVersion.StableVersions
}
behavior of "Limits"
val validLimits =
"""
| choice-controllers = 2147483647
| choice-observers = 2147483647
| choice-authorizers = 2147483647
| contract-observers = 2147483647
| contract-signatories = 2147483647
| transaction-input-contracts = 2147483647""".stripMargin
it should "support current defaults" in {
convert(interpretationLimitsConvert, validLimits).value shouldBe Limits.Lenient
}
it should "validate against odd values" in {
val value =
s"""
| unknown-key = yes
| $validLimits
|""".stripMargin
convert(interpretationLimitsConvert, value).left.value
.prettyPrint(0) should include("Unknown key")
}
it should "read/write against predefined values" in {
val value =
ConfigFactory.parseString(
"""
| choice-controllers = 123
| choice-observers = 234
| contract-observers = 345
| contract-signatories = 456
| transaction-input-contracts = 567
| choice-authorizers = 678
|""".stripMargin
)
val expectedValue = Limits(
choiceControllers = 123,
choiceObservers = 234,
contractObservers = 345,
contractSignatories = 456,
transactionInputContracts = 567,
choiceAuthorizers = 678,
)
val source = ConfigSource.fromConfig(value).cursor().value
interpretationLimitsConvert.from(source).value shouldBe expectedValue
interpretationLimitsConvert.to(expectedValue) shouldBe value.root()
}
behavior of "ContractKeyUniquenessMode"
it should "read/write against predefined values" in {
def compare(mode: ContractKeyUniquenessMode, expectedString: String): Assertion = {
contractKeyUniquenessModeConvert.to(mode) shouldBe fromAnyRef(expectedString)
contractKeyUniquenessModeConvert.from(fromAnyRef(expectedString)).value shouldBe mode
}
compare(ContractKeyUniquenessMode.Off, "off")
compare(ContractKeyUniquenessMode.Strict, "strict")
}
behavior of "EngineConfig"
val validEngineConfigValue =
"""
|allowed-language-versions = stable
|contract-key-uniqueness = strict
|forbid-v-0-contract-id = true
|limits {
| choice-controllers = 2147483647
| choice-observers = 2147483647
| choice-authorizers = 2147483647
| contract-observers = 2147483647
| contract-signatories = 2147483647
| transaction-input-contracts = 2147483647
|}
|package-validation = true
|require-suffixed-global-contract-id = false
|stack-trace-mode = false
|""".stripMargin
it should "support current defaults" in {
convert(engineConvert, validEngineConfigValue).value shouldBe Config.DefaultEngineConfig
}
it should "not support additional invalid keys" in {
val value =
s"""
|unknown-key = yes
|$validLimits
|""".stripMargin
convert(engineConvert, value).left.value
.prettyPrint(0) should include("Unknown key")
}
behavior of "TlsConfiguration"
val validTlsConfigurationValue =
"""enabled=false
|client-auth=require
|enable-cert-revocation-checking=false""".stripMargin
it should "read/write against predefined values" in {
convert(
tlsConfigurationConvert,
validTlsConfigurationValue,
).value shouldBe TlsConfiguration(enabled = false)
}
it should "not support invalid unknown keys" in {
convert(
tlsConfigurationConvert,
"unknown-key=yes\n" + validTlsConfigurationValue,
).left.value
.prettyPrint(0) should include("Unknown key")
}
behavior of "MetricsReporter"
it should "read/write against predefined values" in {
def compare(
reporter: MetricsReporter,
expectedString: String,
): Assertion = {
metricReporterWriter.to(reporter) shouldBe fromAnyRef(expectedString)
metricReporterReader.from(fromAnyRef(expectedString)).value shouldBe reporter
}
compare(
MetricsReporter.Prometheus(new InetSocketAddress("localhost", 1234)),
"prometheus://localhost:1234",
)
compare(
MetricsReporter.Graphite(new InetSocketAddress("localhost", 1234)),
"graphite://localhost:1234/",
)
compare(
MetricsReporter.Graphite(new InetSocketAddress("localhost", 1234), Some("test")),
"graphite://localhost:1234/test",
)
val path = Path.of("test").toAbsolutePath
compare(
MetricsReporter.Csv(path),
"csv://" + path.toString,
)
compare(MetricsReporter.Console, "console")
}
behavior of "MetricsConfig"
val validMetricsConfigValue =
"""
| enabled = false
| reporter = console
| registry-type = jvm-shared
| reporting-interval = "10s"
|""".stripMargin
it should "support current defaults" in {
convert(metricsConvert, validMetricsConfigValue).value shouldBe MetricsConfig()
}
it should "not support additional invalid keys" in {
val value =
s"""
| unknown-key = yes
| $validMetricsConfigValue
|""".stripMargin
convert(metricsConvert, value).left.value
.prettyPrint(0) should include("Unknown key")
}
behavior of "SecretsUrl"
it should "read/write against predefined values" in {
val secretUrl = "https://www.daml.com/secrets.json"
secretsUrlReader.from(fromAnyRef(secretUrl)).value shouldBe SecretsUrl.fromString(secretUrl)
secretsUrlWriter.to(SecretsUrl.fromString(secretUrl)) shouldBe fromAnyRef("<REDACTED>")
new common.PureConfigReaderWriter(false).secretsUrlWriter
.to(SecretsUrl.fromString(secretUrl)) shouldBe fromAnyRef(secretUrl)
}
behavior of "InitialLedgerConfiguration"
val validInitialLedgerConfiguration =
"""
| enabled = true
| avg-transaction-latency = 0 days
| delay-before-submitting = 0 days
| max-deduplication-duration = 30 minutes
| max-skew = 30 seconds
| min-skew = 30 seconds
| """.stripMargin
it should "support current defaults" in {
val value = validInitialLedgerConfiguration
convert(initialLedgerConfigurationConvert, value).value shouldBe Some(
InitialLedgerConfiguration()
)
}
it should "not support unknown keys" in {
val value = "unknown-key=yes\n" + validInitialLedgerConfiguration
convert(initialLedgerConfigurationConvert, value).left.value
.prettyPrint(0) should include("Unknown key")
}
it should "read/write against predefined values" in {
val value =
"""
|enabled = true
|avg-transaction-latency = 1 days
|delay-before-submitting = 2 days
|max-deduplication-duration = 3 minutes
|max-skew = 4 seconds
|min-skew = 5 seconds
|""".stripMargin
val expectedValue = InitialLedgerConfiguration(
maxDeduplicationDuration = Duration.ofMinutes(3),
avgTransactionLatency = Duration.ofDays(1),
minSkew = Duration.ofSeconds(5),
maxSkew = Duration.ofSeconds(4),
delayBeforeSubmitting = Duration.ofDays(2),
)
convert(initialLedgerConfigurationConvert, value).value shouldBe Some(expectedValue)
}
behavior of "Seeding"
it should "read/write against predefined values" in {
seedingWriter.to(Seeding.Static) shouldBe fromAnyRef("testing-static")
seedingWriter.to(Seeding.Weak) shouldBe fromAnyRef("testing-weak")
seedingWriter.to(Seeding.Strong) shouldBe fromAnyRef("strong")
seedingReader.from(fromAnyRef("testing-static")).value shouldBe Seeding.Static
seedingReader.from(fromAnyRef("testing-weak")).value shouldBe Seeding.Weak
seedingReader.from(fromAnyRef("strong")).value shouldBe Seeding.Strong
}
behavior of "userManagementConfig"
val validUserManagementConfigValue =
"""
| cache-expiry-after-write-in-seconds = 5
| enabled = false
| max-cache-size = 100
| max-users-page-size = 1000""".stripMargin
it should "support current defaults" in {
val value = validUserManagementConfigValue
convert(userManagementConfigConvert, value).value shouldBe UserManagementConfig()
}
it should "not support invalid keys" in {
val value = "unknown-key=yes\n" + validUserManagementConfigValue
convert(userManagementConfigConvert, value).left.value
.prettyPrint(0) should include("Unknown key")
}
it should "read/write against predefined values" in {
val value = """
| cache-expiry-after-write-in-seconds = 1
| enabled = true
| max-cache-size = 99
| max-users-page-size = 999""".stripMargin
convert(userManagementConfigConvert, value).value shouldBe UserManagementConfig(
enabled = true,
cacheExpiryAfterWriteInSeconds = 1,
maxCacheSize = 99,
maxUsersPageSize = 999,
)
}
behavior of "AuthServiceConfig"
it should "be isomorphic and support redaction" in forAll(ArbitraryConfig.authServiceConfig) {
generatedValue =>
val redacted = generatedValue match {
case AuthServiceConfig.UnsafeJwtHmac256(_, targetAudience) =>
AuthServiceConfig.UnsafeJwtHmac256("<REDACTED>", targetAudience)
case _ => generatedValue
}
val insecureWriter = new PureConfigReaderWriter(false)
authServiceConfigConvert
.from(authServiceConfigConvert.to(generatedValue))
.value shouldBe redacted
insecureWriter.authServiceConfigConvert
.from(insecureWriter.authServiceConfigConvert.to(generatedValue))
.value shouldBe generatedValue
}
it should "read/write against predefined values" in {
def compare(configString: String, expectedValue: AuthServiceConfig) = {
val source =
ConfigSource.fromConfig(ConfigFactory.parseString(configString)).cursor().value
authServiceConfigConvert.from(source).value shouldBe expectedValue
}
compare("type = wildcard", AuthServiceConfig.Wildcard)
compare(
"type = unsafe-jwt-hmac-256\nsecret=mysecret",
AuthServiceConfig.UnsafeJwtHmac256("mysecret", None),
)
compare(
"type = unsafe-jwt-hmac-256\nsecret=mysecret2",
AuthServiceConfig.UnsafeJwtHmac256("mysecret2", None),
)
compare(
"type = jwt-rs-256\ncertificate=certfile",
AuthServiceConfig.JwtRs256("certfile", None),
)
compare(
"type = jwt-es-256\ncertificate=certfile3",
AuthServiceConfig.JwtEs256("certfile3", None),
)
compare(
"type = jwt-es-512\ncertificate=certfile4",
AuthServiceConfig.JwtEs512("certfile4", None),
)
compare(
"""
|type = jwt-rs-256-jwks
|url="https://daml.com/jwks.json"
|""".stripMargin,
AuthServiceConfig.JwtRs256Jwks("https://daml.com/jwks.json", None),
)
}
behavior of "CommandConfiguration"
val validCommandConfigurationValue =
"""
| input-buffer-size = 512
| max-commands-in-flight = 256
| tracker-retention-period = "300 seconds"""".stripMargin
it should "read/write against predefined values" in {
val value = validCommandConfigurationValue
convert(commandConfigurationConvert, value).value shouldBe CommandConfiguration()
}
it should "not support additional unknown keys" in {
val value = "unknown-key=yes\n" + validCommandConfigurationValue
convert(commandConfigurationConvert, value).left.value
.prettyPrint(0) should include("Unknown key")
}
behavior of "TimeProviderType"
it should "read/write against predefined values" in {
timeProviderTypeConvert.to(TimeProviderType.Static) shouldBe fromAnyRef("static")
timeProviderTypeConvert.to(TimeProviderType.WallClock) shouldBe fromAnyRef("wall-clock")
timeProviderTypeConvert.from(fromAnyRef("static")).value shouldBe TimeProviderType.Static
timeProviderTypeConvert.from(fromAnyRef("wall-clock")).value shouldBe TimeProviderType.WallClock
}
behavior of "SynchronousCommitValue"
it should "read/write against predefined values" in {
val conv = dbConfigSynchronousCommitValueConvert
def compare(value: SynchronousCommitValue, str: String): Assertion = {
conv.to(value) shouldBe fromAnyRef(str)
conv.from(fromAnyRef(str)).value shouldBe value
}
compare(SynchronousCommitValue.On, "on")
compare(SynchronousCommitValue.Off, "off")
compare(SynchronousCommitValue.RemoteWrite, "remote-write")
compare(SynchronousCommitValue.RemoteApply, "remote-apply")
compare(SynchronousCommitValue.Local, "local")
}
behavior of "RateLimitingConfig"
val validRateLimitingConfig =
"""
| enabled = true
| max-api-services-index-db-queue-size = 1000
| max-api-services-queue-size = 10000
| max-used-heap-space-percentage = 85
| min-free-heap-space-bytes = 300000""".stripMargin
it should "support current defaults" in {
val value = validRateLimitingConfig
val expected = RateLimitingConfig(
maxApiServicesQueueSize = 10000,
maxApiServicesIndexDbQueueSize = 1000,
maxUsedHeapSpacePercentage = 85,
minFreeHeapSpaceBytes = 300000,
maxStreams = 1000,
)
convert(rateLimitingConfigConvert, value).value shouldBe Some(expected)
}
it should "not support unknown keys" in {
val value = "unknown-key=yes\n" + validRateLimitingConfig
convert(rateLimitingConfigConvert, value).left.value.prettyPrint(0) should include(
"Unknown key"
)
}
behavior of "ApiServerConfig"
val validApiServerConfigValue =
"""
|api-stream-shutdown-timeout = "5s"
|command {
| input-buffer-size = 512
| max-commands-in-flight = 256
| tracker-retention-period = "300 seconds"
|}
|initial-ledger-configuration {
| enabled = true
| avg-transaction-latency = 0 days
| delay-before-submitting = 0 days
| max-deduplication-duration = 30 minutes
| max-skew = 30 seconds
| min-skew = 30 seconds
|}
|configuration-load-timeout = "10s"
|management-service-timeout = "2m"
|max-inbound-message-size = 67108864
|port = 6865
|rate-limit {
| enabled = true
| max-api-services-index-db-queue-size = 1000
| max-api-services-queue-size = 10000
| max-used-heap-space-percentage = 100
| min-free-heap-space-bytes = 0
|}
|seeding = strong
|time-provider-type = wall-clock
|user-management {
| cache-expiry-after-write-in-seconds = 5
| enabled = false
| max-cache-size = 100
| max-users-page-size = 1000
|}""".stripMargin
it should "support current defaults" in {
val value = validApiServerConfigValue
convert(apiServerConfigConvert, value).value shouldBe ApiServerConfig()
}
it should "not support unknown keys" in {
val value = "unknown-key=yes\n" + validApiServerConfigValue
convert(apiServerConfigConvert, value).left.value.prettyPrint(0) should include("Unknown key")
}
behavior of "HaConfig"
val validHaConfigValue =
"""
| indexer-lock-id = 105305792
| indexer-worker-lock-id = 105305793
| main-lock-acquire-retry-millis = 500
| main-lock-checker-period-millis = 1000
| worker-lock-acquire-max-retry = 1000
| worker-lock-acquire-retry-millis = 500
| """.stripMargin
it should "support current defaults" in {
val value = validHaConfigValue
convert(haConfigConvert, value).value shouldBe HaConfig()
}
it should "not support unknown keys" in {
val value = "unknown-key=yes\n" + validHaConfigValue
convert(haConfigConvert, value).left.value.prettyPrint(0) should include("Unknown key")
}
behavior of "PackageMetadataViewConfig"
val validPackageMetadataViewConfigValue =
"""
| init-load-parallelism = 16
| init-process-parallelism = 16
| init-takes-too-long-initial-delay = 1 minute
| init-takes-too-long-interval = 10 seconds
| """.stripMargin
it should "support current defaults" in {
val value = validPackageMetadataViewConfigValue
convert(packageMetadataViewConfigConvert, value).value shouldBe PackageMetadataViewConfig()
}
it should "not support unknown keys" in {
val value = "unknown-key=yes\n" + validPackageMetadataViewConfigValue
convert(packageMetadataViewConfigConvert, value).left.value.prettyPrint(0) should include(
"Unknown key"
)
}
behavior of "IndexerConfig"
val validIndexerConfigValue =
"""
| batching-parallelism = 4
| enable-compression = false
| high-availability {
| indexer-lock-id = 105305792
| indexer-worker-lock-id = 105305793
| main-lock-acquire-retry-millis = 500
| main-lock-checker-period-millis = 1000
| worker-lock-acquire-max-retry = 1000
| worker-lock-acquire-retry-millis = 500
| }
| ingestion-parallelism = 16
| input-mapping-parallelism = 16
| max-input-buffer-size = 50
| restart-delay = "10s"
| startup-mode {
| allow-existing-schema = false
| type = migrate-and-start
| }
| submission-batch-size = 50""".stripMargin
it should "support current defaults" in {
val value = validIndexerConfigValue
convert(indexerConfigConvert, value).value shouldBe IndexerConfig()
}
it should "not support unknown keys" in {
val value = "unknown-key=yes\n" + validIndexerConfigValue
convert(indexerConfigConvert, value).left.value.prettyPrint(0) should include(
"Unknown key"
)
}
behavior of "IndexServiceConfig"
val validIndexServiceConfigValue =
"""|
|acs-streams {
| contract-processing-parallelism=8
| max-ids-per-id-page=20000
| max-pages-per-id-pages-buffer=1
| max-parallel-id-create-queries=2
| max-parallel-payload-create-queries=2
| max-payloads-per-payloads-page=1000
| max-working-memory-in-bytes-for-id-pages=104857600
|}
|api-stream-shutdown-timeout="5s"
|buffered-streams-page-size=100
|completions-page-size=1000
|buffered-events-processing-parallelism=8
|global-max-event-id-queries=20
|global-max-event-payload-queries=10
|in-memory-fan-out-thread-pool-size=16
|in-memory-state-updater-parallelism=2
|max-contract-key-state-cache-size=100000
|max-contract-state-cache-size=100000
|max-transactions-in-memory-fan-out-buffer-size=10000
|prepare-package-metadata-time-out-warning="5s"
|transaction-flat-streams {
| max-ids-per-id-page=20000
| max-pages-per-id-pages-buffer=1
| max-parallel-id-consuming-queries=4
| max-parallel-id-create-queries=4
| max-parallel-payload-consuming-queries=2
| max-parallel-payload-create-queries=2
| max-parallel-payload-queries=2
| max-payloads-per-payloads-page=1000
| max-working-memory-in-bytes-for-id-pages=104857600
| transactions-processing-parallelism=8
|}
|transaction-tree-streams {
| max-ids-per-id-page=20000
| max-pages-per-id-pages-buffer=1
| max-parallel-id-consuming-queries=8
| max-parallel-id-create-queries=8
| max-parallel-id-non-consuming-queries=4
| max-parallel-payload-consuming-queries=2
| max-parallel-payload-create-queries=2
| max-parallel-payload-non-consuming-queries=2
| max-parallel-payload-queries=2
| max-payloads-per-payloads-page=1000
| max-working-memory-in-bytes-for-id-pages=104857600
| transactions-processing-parallelism=8
|}""".stripMargin
it should "support current defaults" in {
val value = validIndexServiceConfigValue
convert(indexServiceConfigConvert, value).value shouldBe IndexServiceConfig()
}
it should "not support unknown keys" in {
val value = "unknown-key=yes\n" + validIndexServiceConfigValue
convert(indexServiceConfigConvert, value).left.value.prettyPrint(0) should include(
"Unknown key"
)
}
behavior of "ParticipantDataSourceConfig"
it should "read/write against predefined values" in {
val secretUrl = "https://www.daml.com/secrets.json"
participantDataSourceConfigReader
.from(fromAnyRef(secretUrl))
.value shouldBe ParticipantDataSourceConfig(secretUrl)
participantDataSourceConfigWriter.to(
ParticipantDataSourceConfig(secretUrl)
) shouldBe fromAnyRef("<REDACTED>")
new PureConfigReaderWriter(false).participantDataSourceConfigWriter.to(
ParticipantDataSourceConfig(secretUrl)
) shouldBe fromAnyRef(secretUrl)
}
behavior of "optReaderEnabled/optWriterEnabled"
case class Cfg(i: Int)
case class Cfg2(enabled: Boolean, i: Int)
import pureconfig.generic.semiauto._
val testConvert: ConfigConvert[Cfg] = deriveConvert[Cfg]
val testConvert2: ConfigConvert[Cfg2] = deriveConvert[Cfg2]
it should "read enabled flag" in {
val reader: ConfigReader[Option[Cfg]] = optReaderEnabled[Cfg](testConvert)
convert(reader, "enabled = true\ni = 1").value shouldBe Some(Cfg(1))
convert(reader, "enabled = true\ni = 10").value shouldBe Some(Cfg(10))
convert(reader, "enabled = false\ni = 1").value shouldBe None
convert(reader, "enabled = false").value shouldBe None
}
it should "write enabled flag" in {
val writer: ConfigWriter[Option[Cfg]] = optWriterEnabled[Cfg](testConvert)
writer.to(Some(Cfg(1))) shouldBe ConfigFactory.parseString("enabled = true\ni = 1").root()
writer.to(Some(Cfg(10))) shouldBe ConfigFactory.parseString("enabled = true\ni = 10").root()
writer.to(None) shouldBe ConfigFactory.parseString("enabled = false").root()
}
it should "throw if configuration is ambiguous" in {
val writer: ConfigWriter[Option[Cfg2]] = optWriterEnabled[Cfg2](testConvert2)
an[IllegalArgumentException] should be thrownBy writer.to(Some(Cfg2(enabled = false, 1)))
}
}

View File

@ -1,41 +0,0 @@
# Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
load(
"//bazel_tools:scala.bzl",
"da_scala_binary",
"da_scala_library",
)
da_scala_library(
name = "metering-verification-app-lib",
srcs = glob(["src/app/scala/**/*.scala"]),
resources = glob(["src/app/resources/**/*"]),
scala_deps = [
"@maven//:io_spray_spray_json",
"@maven//:org_scalaz_scalaz_core",
],
tags = ["maven_coordinates=com.daml:metering-verification-app-lib:__VERSION__"],
visibility = ["//visibility:public"],
deps = [
"//ledger/participant-integration-api",
],
)
da_scala_binary(
name = "metering-verification-app",
main_class = "com.daml.ledger.metering.Main",
tags = [
"fat_jar",
"maven_coordinates=com.daml:metering-verification-app-jar:__VERSION__",
"no_scala_version_suffix",
],
visibility = ["//visibility:public"],
runtime_deps = [
"@maven//:ch_qos_logback_logback_classic",
"@maven//:ch_qos_logback_logback_core",
],
deps = [
":metering-verification-app-lib",
],
)

View File

@ -1,148 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.ledger.metering
import com.daml.platform.apiserver.meteringreport.HmacSha256.Key
import com.daml.platform.apiserver.meteringreport.JcsSigner.VerificationStatus
import com.daml.platform.apiserver.meteringreport.{JcsSigner, MeteringReportKey}
import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Path}
import scala.jdk.StreamConverters._
import scala.util.Try
/** The metering report validation app can be used to verify that a provided metering report is consistent with
* a `check` section within the report. When the report is created, the check section is populated with a digest
* and a scheme that was used in the creation of that digest. Different schemes may be used by different
* release variants (e.g. community versus enterprise) and for different points in time.
*
* The app takes as parameters a directory that contains one file for each supported scheme and the report
* file to be verified. The scheme files need to contain JSON with fields for the scheme name, digest algorithm
* and encoded key. For example:
*
* {
* "scheme": "community-2021",
* "algorithm": "HmacSHA256",
* "encoded": "ifKEd83-fAvOBTXnGjIVfesNzmWFKpo_35zpUnXEsg="
* }
*
* Verification works by inspecting the scheme referenced in the report file and then checking that the
* recalculated digest matches that provided in the file.
*
* Usage: metering-verification-app <directory> <report>
*/
object Main {
trait ExitCode {
def code: String
def detail: String
final def message = s"$code: $detail"
}
type ExitCodeOr[T] = Either[ExitCode, T]
abstract class Code(val code: String) extends ExitCode
case class CodeDetail(code: String, detail: String) extends ExitCode
val OK: ExitCode = CodeDetail("OK", "The digest is as expected")
case class ErrUsage(app: String) extends Code("ERR_USAGE") {
def detail = s"Usage: $app <directory> <report>"
}
case class NotDirectory(dir: String) extends Code("ERR_NOT_DIRECTORY") {
def detail = s"The passed directory is not a valid directory: $dir"
}
case class NotKeyFile(path: Path, t: Throwable) extends Code("ERR_NOT_KEY_FILE") {
def detail = s"Unable to parse metering report key from key file: $path [${t.getMessage}]"
}
case class NoKeys(dir: String) extends Code("ERR_NO_KEYS") {
def detail = s"No keys found in the key directory: $dir"
}
case class NotFile(report: String) extends Code("ERR_NO_REPORT") {
def detail = s"The passed report is not a file: $report"
}
case class FailedToReadReport(report: String, t: Throwable) extends Code("ERR_READING_REPORT") {
def detail = s"Failed to read the participant report: $report [${t.getMessage}]"
}
case class FailedVerification(status: VerificationStatus)
extends Code("ERR_FAILED_VERIFICATION") {
def detail = s"Report verification failed with status: $status"
}
def main(args: Array[String]): Unit = {
val result = for {
keyReport <- checkUsage(args)
(keyDir, reportFile) = keyReport
report <- readReport(reportFile)
keys <- readKeys(keyDir)
_ <- verifyReport(report, keys)
} yield {
()
}
result match {
case Right(_) =>
System.out.println(OK.message)
System.exit(0)
case Left(exitCode) =>
System.err.println(exitCode.message)
System.exit(1)
}
}
private def checkUsage(args: Array[String]): ExitCodeOr[(String, String)] = {
args.toList match {
case List(dir, report) => Right((dir, report))
case _ => Left(ErrUsage("metering-verification-app"))
}
}
private def readKey(keyPath: Path): ExitCodeOr[Key] = {
Try(MeteringReportKey.assertParseKey(Files.readAllBytes(keyPath)))
.fold(t => Left(NotKeyFile(keyPath, t)), Right(_))
}
private def readKeys(keyDir: String): ExitCodeOr[Map[String, Key]] = {
import scalaz._
import scalaz.syntax.traverse._
import std.either._
import std.list._
val dir = Path.of(keyDir)
if (Files.isDirectory(dir)) {
for {
keys <- Files.list(dir).toScala(List).traverse(readKey)
_ <- if (keys.isEmpty) Left(NoKeys(keyDir)) else Right(())
} yield {
keys.map(k => k.scheme -> k).toMap
}
} else {
Left(NotDirectory(keyDir))
}
}
private def readReport(reportFile: String): ExitCodeOr[String] = {
val path = Path.of(reportFile)
if (Files.isRegularFile(path)) {
Try(new String(Files.readAllBytes(path), StandardCharsets.UTF_8))
.fold(t => Left(FailedToReadReport(reportFile, t)), Right(_))
} else {
Left(NotFile(reportFile))
}
}
private def verifyReport(json: String, keys: Map[String, Key]): ExitCodeOr[Unit] = {
JcsSigner.verify(json, keys.get) match {
case VerificationStatus.Ok => Right(())
case status => Left(FailedVerification(status))
}
}
}

View File

@ -1,388 +0,0 @@
# Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
load("@os_info//:os_info.bzl", "is_windows")
load("@scala_version//:index.bzl", "scala_major_version", "scala_major_version_suffix")
load("//bazel_tools:proto.bzl", "proto_jars")
load(
"//bazel_tools:scala.bzl",
"da_scala_benchmark_jmh",
"da_scala_binary",
"da_scala_library",
"da_scala_test",
"da_scala_test_suite",
"scaladoc_jar",
)
load("//bazel_tools:pom_file.bzl", "pom_file")
load("//rules_daml:daml.bzl", "daml_compile")
proto_jars(
name = "participant-integration-api-proto",
srcs = glob(["src/main/protobuf/**/*.proto"]),
maven_artifact_prefix = "participant-integration-api",
maven_group = "com.daml",
strip_import_prefix = "src/main/protobuf",
visibility = ["//visibility:public"],
deps = [
"@com_google_protobuf//:any_proto",
],
)
compile_deps = [
":participant-integration-api-proto_scala",
"//daml-lf/archive:daml_lf_archive_reader",
"//daml-lf/archive:daml_lf_1.dev_archive_proto_java",
"//daml-lf/data",
"//daml-lf/engine",
"//daml-lf/language",
"//daml-lf/transaction",
"//daml-lf/transaction:value_proto_java",
"//daml-lf/validation",
"//language-support/scala/bindings",
"//ledger-api/rs-grpc-akka",
"//ledger-api/rs-grpc-bridge",
"//libs-scala/caching",
"//ledger/error",
"//ledger/ledger-api-errors",
"//ledger/ledger-api-auth",
"//ledger/ledger-api-client",
"//ledger/ledger-api-common",
"//ledger/ledger-api-domain",
"//ledger/ledger-api-health",
"//ledger/ledger-configuration",
"//ledger/ledger-offset",
"//libs-scala/ledger-resources",
"//ledger/metrics",
"//ledger/participant-state",
"//ledger/participant-state-index",
"//ledger/participant-local-store:participant-local-store",
"//ledger/participant-state-metrics",
"//libs-scala/jwt",
"//libs-scala/crypto:crypto",
"//libs-scala/build-info",
"//libs-scala/contextualized-logging",
"//libs-scala/concurrent",
"//libs-scala/executors",
"//libs-scala/logging-entries",
"//libs-scala/ports",
"//libs-scala/resources",
"//libs-scala/resources-akka",
"//libs-scala/resources-grpc",
"//libs-scala/scala-utils",
"//libs-scala/timer-utils",
"//libs-scala/nameof",
"//libs-scala/struct-json/struct-spray-json",
"//observability/tracing",
"//observability/metrics",
"@maven//:com_google_guava_guava",
"@maven//:com_zaxxer_HikariCP",
"@maven//:io_dropwizard_metrics_metrics_core",
"@maven//:io_grpc_grpc_context",
"@maven//:io_grpc_grpc_netty",
"@maven//:io_grpc_grpc_services",
"@maven//:io_netty_netty_handler",
"@maven//:org_flywaydb_flyway_core",
"@maven//:io_opentelemetry_opentelemetry_api",
"@maven//:io_opentelemetry_opentelemetry_context",
"@maven//:org_slf4j_slf4j_api",
"@maven//:com_h2database_h2",
"@maven//:org_postgresql_postgresql",
"@maven//:com_oracle_database_jdbc_ojdbc8",
"@maven//:com_google_api_grpc_proto_google_common_protos",
"@maven//:com_auth0_java_jwt",
]
scala_compile_deps = [
"@maven//:com_github_scopt_scopt",
"@maven//:com_typesafe_akka_akka_actor",
"@maven//:com_typesafe_akka_akka_stream",
"@maven//:com_typesafe_scala_logging_scala_logging",
"@maven//:org_playframework_anorm_anorm",
"@maven//:org_playframework_anorm_anorm_tokenizer",
"@maven//:org_scalaz_scalaz_core",
"@maven//:io_spray_spray_json",
]
runtime_deps = [
"@maven//:ch_qos_logback_logback_classic",
]
da_scala_library(
name = "participant-integration-api",
srcs = glob(["src/main/scala/**/*.scala"]),
resources =
glob(
["src/main/resources/**/*"],
# Do not include logback.xml into the library: let the user
# of the sandbox-as-a-library decide how to log.
exclude = ["src/main/resources/logback.xml"],
) + [
"//ledger-api:api-version-files",
],
scala_deps = scala_compile_deps,
tags = ["maven_coordinates=com.daml:participant-integration-api:__VERSION__"],
visibility = [
"//visibility:public",
],
runtime_deps = runtime_deps,
deps = compile_deps,
)
da_scala_library(
name = "ledger-api-server",
srcs = glob(["src/main/scala/**/*.scala"]),
resources =
glob(
["src/main/resources/**/*"],
# Do not include logback.xml into the library: let the user
# of the sandbox-as-a-library decide how to log.
exclude = ["src/main/resources/logback.xml"],
),
scala_deps = scala_compile_deps,
tags = ["maven_coordinates=com.daml:ledger-api-server:__VERSION__"],
visibility = [
"//visibility:public",
],
runtime_deps = runtime_deps,
deps = compile_deps,
)
da_scala_library(
name = "participant-integration-api-tests-lib",
srcs = glob(["src/test/lib/**/*.scala"]),
scala_deps = [
"@maven//:com_typesafe_akka_akka_actor",
"@maven//:com_typesafe_akka_akka_stream",
"@maven//:org_scalacheck_scalacheck",
"@maven//:org_scalactic_scalactic",
"@maven//:org_scalatest_scalatest_freespec",
"@maven//:org_scalatest_scalatest_core",
"@maven//:org_scalatest_scalatest_flatspec",
"@maven//:org_scalatest_scalatest_matchers_core",
"@maven//:org_scalatest_scalatest_shouldmatchers",
"@maven//:org_playframework_anorm_anorm",
],
scala_runtime_deps = [
"@maven//:com_typesafe_akka_akka_slf4j",
],
visibility = ["//visibility:public"],
runtime_deps = [
"@maven//:com_h2database_h2",
"@maven//:org_postgresql_postgresql",
],
deps = [
":participant-integration-api",
"//bazel_tools/runfiles:scala_runfiles",
"//daml-lf/archive:daml_lf_1.dev_archive_proto_java",
"//daml-lf/archive:daml_lf_archive_reader",
"//daml-lf/data",
"//daml-lf/engine",
"//daml-lf/language",
"//daml-lf/transaction",
"//daml-lf/transaction-test-lib",
"//language-support/scala/bindings",
"//ledger-api/rs-grpc-bridge",
"//ledger-api/sample-service",
"//ledger-api/testing-utils",
"//ledger/ledger-api-client",
"//ledger/ledger-api-common",
"//ledger/ledger-api-domain",
"//ledger/ledger-api-health",
"//ledger/ledger-configuration",
"//ledger/ledger-offset",
"//ledger/metrics",
"//ledger/participant-local-store",
"//ledger/participant-local-store:participant-local-store-tests-lib",
"//ledger/participant-state",
"//ledger/participant-state-index",
"//libs-scala/contextualized-logging",
"//libs-scala/ledger-resources",
"//libs-scala/ledger-resources:ledger-resources-test-lib",
"//libs-scala/logging-entries",
"//libs-scala/oracle-testing",
"//libs-scala/ports",
"//libs-scala/postgresql-testing",
"//libs-scala/resources",
"//libs-scala/resources-akka",
"//libs-scala/resources-grpc",
"//libs-scala/scala-utils",
"//libs-scala/timer-utils",
"//observability/metrics",
"//test-common:dar-files-default-lib",
"@maven//:io_dropwizard_metrics_metrics_core",
"@maven//:io_grpc_grpc_netty",
"@maven//:io_netty_netty_common",
"@maven//:io_netty_netty_handler",
"@maven//:io_netty_netty_transport",
"@maven//:io_opentelemetry_opentelemetry_api",
"@maven//:org_flywaydb_flyway_core",
"@maven//:org_scalatest_scalatest_compatible",
],
)
openssl_executable = "@openssl_dev_env//:bin/openssl" if not is_windows else "@openssl_dev_env//:usr/bin/openssl.exe"
da_scala_test_suite(
name = "participant-integration-api-tests",
size = "large",
srcs = glob(
["src/test/suite/**/*.scala"],
exclude = [
"**/*Oracle*",
],
),
data = [
"//test-common:model-tests-default.dar",
"//test-common/test-certificates",
openssl_executable,
],
jvm_flags = [
"-Djava.security.debug=\"certpath ocsp\"", # This facilitates debugging of the OCSP checks mechanism
],
resources = glob(["src/test/resources/**/*"]),
scala_deps = [
"@maven//:com_typesafe_akka_akka_actor",
"@maven//:com_typesafe_akka_akka_actor_typed",
"@maven//:com_typesafe_akka_akka_testkit",
"@maven//:com_typesafe_akka_akka_stream",
"@maven//:com_typesafe_akka_akka_stream_testkit",
"@maven//:org_mockito_mockito_scala",
"@maven//:org_playframework_anorm_anorm",
"@maven//:org_playframework_anorm_anorm_tokenizer",
"@maven//:org_scalacheck_scalacheck",
"@maven//:org_scalactic_scalactic",
"@maven//:org_scalatest_scalatest_core",
"@maven//:org_scalatest_scalatest_flatspec",
"@maven//:org_scalatest_scalatest_matchers_core",
"@maven//:org_scalatest_scalatest_shouldmatchers",
"@maven//:org_scalatest_scalatest_wordspec",
"@maven//:org_scalatestplus_scalacheck_1_15",
"@maven//:org_scalaz_scalaz_core",
"@maven//:io_spray_spray_json",
"@maven//:com_thesamet_scalapb_scalapb_json4s",
"@maven//:org_scalaz_scalaz_scalacheck_binding",
],
deps = [
":participant-integration-api",
":participant-integration-api-proto_scala",
":participant-integration-api-tests-lib",
"//bazel_tools/runfiles:scala_runfiles",
"//daml-lf/archive:daml_lf_1.dev_archive_proto_java",
"//daml-lf/archive:daml_lf_archive_reader",
"//daml-lf/data",
"//daml-lf/encoder",
"//daml-lf/engine",
"//daml-lf/interpreter",
"//daml-lf/language",
"//daml-lf/parser",
"//daml-lf/transaction",
"//daml-lf/transaction:value_proto_java",
"//daml-lf/transaction-test-lib",
"//language-support/scala/bindings",
"//ledger-api/rs-grpc-akka",
"//ledger-api/rs-grpc-akka:rs-grpc-akka-tests-lib",
"//ledger-api/rs-grpc-bridge",
"//ledger-api/sample-service",
"//ledger-api/testing-utils",
"//ledger/error",
"//ledger/error:error-test-lib",
"//ledger/ledger-api-client",
"//ledger/ledger-api-common",
"//ledger/ledger-api-common:ledger-api-common-scala-tests-lib",
"//ledger/ledger-api-domain",
"//ledger/ledger-api-errors",
"//ledger/ledger-api-health",
"//ledger/ledger-configuration",
"//ledger/ledger-offset",
"//ledger/metrics",
"//ledger/participant-local-store",
"//ledger/participant-local-store:participant-local-store-tests-lib",
"//ledger/participant-state",
"//ledger/participant-state-index",
"//libs-scala/caching",
"//libs-scala/concurrent",
"//libs-scala/contextualized-logging",
"//libs-scala/crypto",
"//libs-scala/executors",
"//libs-scala/grpc-utils",
"//libs-scala/ledger-resources",
"//libs-scala/ledger-resources:ledger-resources-test-lib",
"//libs-scala/logging-entries",
"//libs-scala/ports",
"//libs-scala/postgresql-testing",
"//libs-scala/resources",
"//libs-scala/resources-akka",
"//libs-scala/resources-grpc",
"//libs-scala/scala-utils",
"//libs-scala/scalatest-utils",
"//libs-scala/timer-utils",
"//observability/metrics",
"//observability/metrics:metrics-test-lib",
"//observability/tracing",
"//observability/tracing:tracing-test-lib",
"//test-common",
"//test-common:dar-files-default-lib",
"@maven//:ch_qos_logback_logback_classic",
"@maven//:ch_qos_logback_logback_core",
"@maven//:com_github_ben_manes_caffeine_caffeine",
"@maven//:com_google_api_grpc_proto_google_common_protos",
"@maven//:com_google_guava_guava",
"@maven//:com_typesafe_config",
"@maven//:com_zaxxer_HikariCP",
"@maven//:commons_io_commons_io",
"@maven//:io_dropwizard_metrics_metrics_core",
"@maven//:io_grpc_grpc_context",
"@maven//:io_grpc_grpc_netty",
"@maven//:io_grpc_grpc_services",
"@maven//:io_netty_netty_handler",
"@maven//:io_netty_netty_transport",
"@maven//:io_opentelemetry_opentelemetry_api",
"@maven//:io_opentelemetry_opentelemetry_context",
"@maven//:io_opentelemetry_opentelemetry_sdk_testing",
"@maven//:io_opentelemetry_opentelemetry_sdk_trace",
"@maven//:org_flywaydb_flyway_core",
"@maven//:org_mockito_mockito_core",
"@maven//:org_reactivestreams_reactive_streams",
"@maven//:org_scalatest_scalatest_compatible",
"@maven//:org_slf4j_slf4j_api",
],
)
exports_files(["src/main/resources/logback.xml"])
scaladoc_jar(
name = "scaladoc",
srcs = [
":sources",
"//ledger/ledger-api-auth:sources",
"//ledger/participant-state:sources",
],
doctitle = "Daml participant integration API",
root_content = "rootdoc.txt",
visibility = [
"//visibility:public",
],
deps =
compile_deps +
[
"{}_{}".format(d, scala_major_version_suffix)
for d in scala_compile_deps
],
) if not is_windows else None
filegroup(
name = "sources",
srcs = glob(["src/main/scala/**/*.scala"]),
visibility = ["//visibility:public"],
)
da_scala_benchmark_jmh(
name = "string-interning-benchmark",
srcs = glob(["src/bench/platform/store/interning/**/*.scala"]),
visibility = ["//visibility:public"],
deps = [
"//bazel_tools/runfiles:scala_runfiles",
"//ledger/participant-integration-api",
"//libs-scala/contextualized-logging",
],
)

View File

@ -1,75 +0,0 @@
# JCS
This document describes the JCS implementation used for ledger metering
## Background
As part of the ledger metering tamperproofing design, a decision was made to use the JSON
Canonicalization Scheme (JCS) to render a byte array that represents the contents of the
metering report. The MAC of this byte array is then appended to the metering report JSON
as a tamperproofing measure. The JCS spec is published
as [RFC 8785](https://datatracker.ietf.org/doc/html/rfc8785).
## Java Implementation
The java reference implementation provided in the RFC is
by [Samuel Erdtman](https://github.com/erdtman/java-json-canonicalization). Concerns
about this implementation are that it:
* Has not been released since 2020
* Has only one committer
* Has vulnerability [CVE-2020-15250](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15250)
* Does its own JSON parsing
* Has relatively few lines of testing given the size of the code base
## Javascript Implementation
Erdtman also has a [javascript implementation of the algorithm in 32 lines of javascript](https://github.com/erdtman/canonicalize/blob/master/lib/canonicalize.js).
The reason this implementation is so small is that:
* Javascript has native support for JSON
* The JCS spec uses the [javascript standard for number formatting](https://262.ecma-international.org/10.0/#sec-tostring-applied-to-the-number-type)
* The JCS spec serializes strings the same way as JSON.stringify
## DA Implementation
Our starting point is similar to the situation in the javascript implementation in that:
* We have a parsed JSON object (or standard libraries to parse one)
* We have JSON library functions that provide methods to stringify JSON objects.
For this reason the `Jcs.scala` class implements an algorithm similar to that in javascript.
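To make this concrete, here is a minimal, hypothetical sketch of such a canonicalizer over a
spray-json AST. The object name `JcsSketch` and the exact number handling are illustrative
assumptions rather than the actual `Jcs.scala` code; it only shows the overall shape: recurse
over the AST, reuse the library's string escaping, and sort object members by property name.
```scala
import spray.json._

object JcsSketch {
  // Whole numbers only (see the "Number Limitation" section below), rendered via
  // BigInt so that no scientific notation is produced.
  private def serializeNumber(n: BigDecimal): String = {
    require(n.isWhole && n.abs < BigDecimal(2).pow(52), s"unsupported number: $n")
    n.toBigInt.toString()
  }

  def canonicalize(v: JsValue): String = v match {
    case JsNull            => "null"
    case JsBoolean(b)      => b.toString
    case JsNumber(n)       => serializeNumber(n)
    case s: JsString       => s.compactPrint // spray-json escapes strings much like JSON.stringify
    case JsArray(elements) => elements.map(canonicalize).mkString("[", ",", "]")
    case JsObject(fields)  =>
      // JCS orders object members by property name (UTF-16 code unit order,
      // which is what Scala's default String ordering provides).
      fields.toSeq
        .sortBy(_._1)
        .map { case (k, value) => JsString(k).compactPrint + ":" + canonicalize(value) }
        .mkString("{", ",", "}")
  }
}
```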
## Number Limitation
When testing this implementation we discovered that the number formatting did not follow
the javascript standard. This is in part because scala implementations usually allow
numeric values larger than an `IEEE-754` 64 bit `Double` (e.g. `BigDecimal`).
### Workaround
By adding a limitation that we only support whole numbers smaller than 2^52 in absolute size
and formatting them without scientific notation (using `BigInt`), we avoid these problems.
This limitation is not restrictive for the ledger metering JSON whose only numeric field
is the event count.
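For comparison with the discarded approximation below, a sketch of the adopted approach
(assumed shape, not the actual implementation) simply rejects anything outside the limitation
and lets `BigInt` do the formatting:
```scala
def serialize(bd: BigDecimal): String = {
  // Only whole numbers below 2^52 in absolute size are accepted; BigInt.toString
  // never produces scientific notation, e.g. BigDecimal("1E+3") renders as "1000".
  require(bd.isWhole && bd.abs < BigDecimal(2).pow(52), s"Unsupported number: $bd")
  bd.toBigInt.toString()
}
```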
### Approximation
The following approximates the javascript spec's double formatting rules but was discarded
in favour of explicitly limiting the implementation.
```scala
def serialize(bd: BigDecimal): String = {
if (bd.isWhole && bd.toBigInt.toString().length < 22) {
bd.toBigInt.toString()
} else {
val s = bd.toString()
if (s.contains('E')) {
s.replaceFirst("(\\.0|)E", "e")
} else {
s.replaceFirst("0+$", "")
}
}
}
```

View File

@ -1,17 +0,0 @@
This is the documentation for the Daml participant integration API.
Notable interfaces to be implemented by ledger integrations include:
- [[com.daml.ledger.participant.state.v2.ReadService `ReadService`]] - an interface for reading data from the underlying ledger.
- [[com.daml.ledger.participant.state.v2.WriteService `WriteService`]] - an interface for writing data to the underlying ledger.
- [[com.daml.ledger.api.auth.AuthService `AuthService`]] - an interface for authorizing ledger API calls.
Notable classes for running a ledger participant node include:
- [[com.daml.platform.indexer.IndexerServiceOwner `IndexerServiceOwner`]] - the indexer reads data from the
[[com.daml.ledger.participant.state.v2.ReadService `ReadService`]] and writes it to an index database.
- [[com.daml.platform.apiserver.LedgerApiService `LedgerApiService`]] - the API server reads data from the index
database and Indexer (see `com.daml.platform.index.InMemoryStateUpdater`) and serves it over the gRPC ledger API.
See the complete list on the right for details.

View File

@ -1,3 +0,0 @@
# Enforce Unix newlines
*.sql text eol=lf
*.sha256 text eol=lf

View File

@ -1,60 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.platform.store.interning
import org.openjdk.jmh.annotations._
import scala.concurrent.Future
import scala.concurrent.duration._
import scala.util.Random
@State(Scope.Benchmark)
abstract class BenchmarkState {
@Param(Array("10000", "100000", "1000000", "10000000"))
var stringCount: Int = _
@Param(Array("10", "100"))
var stringLength: Int = _
protected val perfTestTimeout: FiniteDuration = 5.minutes
protected var entries: Array[(Int, String)] = _
protected var interning: StringInterningView = _
protected var interningEnd: Int = _
protected def extraStringCount = 0
@Setup(Level.Trial)
def setupEntries(): Unit = {
entries = BenchmarkState.createEntries(stringCount + extraStringCount, stringLength)
}
}
object BenchmarkState {
protected val perfTestTimeout: FiniteDuration = 5.minutes
private[this] def randomString(length: Int): String = Random.alphanumeric.take(length).mkString
def createEntries(stringCount: Int, stringLength: Int): Array[(Int, String)] = {
Console.print(
s"Creating an array with $stringCount entries with string length $stringLength..."
)
val entries = new Array[(Int, String)](stringCount)
(0 until stringCount).foreach(i => entries(i) = (i + 1) -> randomString(stringLength))
Console.println(s" done.")
Console.println(s"First few entries: ${entries(0)}, ${entries(1)}, ${entries(2)}, ...")
entries
}
def loadStringInterningEntries(
entries: Array[(Int, String)]
): LoadStringInterningEntries = {
(fromExclusive, toInclusive) =>
// Note: for slice(), the begin is inclusive and the end is exclusive (opposite of the enclosing call)
_ => Future.successful(entries.view.slice(fromExclusive + 1, toInclusive + 1))
}
}

View File

@ -1,43 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.platform.store.interning
import com.daml.logging.LoggingContext
import org.openjdk.jmh.annotations.{
Benchmark,
BenchmarkMode,
Fork,
Level,
Measurement,
Mode,
OutputTimeUnit,
Setup,
Warmup,
}
import java.util.concurrent.TimeUnit
import scala.concurrent.Await
class InitializationTimeBenchmark extends BenchmarkState {
@Setup(Level.Invocation)
def setupIteration(): Unit = {
interning = new StringInterningView()
}
@Benchmark
@BenchmarkMode(Array(Mode.AverageTime))
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Fork(value = 5)
@Warmup(iterations = 5)
@Measurement(iterations = 5)
def run(): Unit = {
Await.result(
interning
.update(stringCount)(BenchmarkState.loadStringInterningEntries(entries))(
LoggingContext.ForTesting
),
perfTestTimeout,
)
}
}

View File

@ -1,44 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
package com.daml.platform.store.interning
import com.daml.logging.LoggingContext
import org.openjdk.jmh.annotations._
import scala.concurrent.Await
class UpdateTimeBenchmark extends BenchmarkState {
// Set up some extra entries for the repeated update() calls
override def extraStringCount = 10000000
@Setup(Level.Iteration)
def setupIteration(): Unit = {
interning = new StringInterningView()
interningEnd = stringCount
Await.result(
interning.update(interningEnd)(BenchmarkState.loadStringInterningEntries(entries))(
LoggingContext.ForTesting
),
perfTestTimeout,
)
}
@Benchmark
@BenchmarkMode(Array(Mode.Throughput))
@Fork(value = 5)
@Warmup(iterations = 5)
@Measurement(iterations = 5)
def run(): Unit = {
interningEnd = interningEnd + 1
if (interningEnd > entries.length) throw new RuntimeException("Can't ingest any more strings")
Await.result(
interning.update(interningEnd)(BenchmarkState.loadStringInterningEntries(entries))(
LoggingContext.ForTesting
),
perfTestTimeout,
)
}
}

View File

@ -1,22 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
// Serialization format for Protocol Buffers values stored in the index.
//
// WARNING:
// As all messages declared here represent values stored to the index database, we MUST ensure that
// they remain backwards-compatible forever.
syntax = "proto3";
package daml.platform.index;
option java_package = "com.daml.platform.index";
import "google/protobuf/any.proto";
// Serialized status details, conveyed from the driver `ReadService` to the ledger API client.
// To be combined with a status code and message.
message StatusDetails {
repeated google.protobuf.Any details = 1;
}

View File

@ -1,16 +0,0 @@
// Copyright (c) 2023 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
syntax = "proto3";
package daml.platform.apiserver;
option java_package = "com.daml.platform.apiserver";
// Describes the payload of a page token for listing users.
// Not intended to be handled directly by clients and should be presented to them as an opaque string.
message ListUsersPageTokenPayload {
// Users are ordered by ``user_id``, and the next page starts with users whose ``user_id`` is larger than ``user_id_lower_bound_excl``.
string user_id_lower_bound_excl = 1;
}

View File

@ -1,3 +0,0 @@
The files in this folder cannot change, as their hash is stored in the database
when the migration is executed. Therefore, they should not be subject to the
automatic copyright update.

View File

@ -1 +0,0 @@
0f6b9019ee8544b20c501e56672df0e28f445a1f40d4d63999e0fc7d1b4aecc0

View File

@ -1,489 +0,0 @@
-- Copyright (c) 2021 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
-- SPDX-License-Identifier: Apache-2.0
CREATE ALIAS array_intersection FOR "com.daml.platform.store.backend.h2.H2FunctionAliases.arrayIntersection";
---------------------------------------------------------------------------------------------------
-- Parameters table
---------------------------------------------------------------------------------------------------
CREATE TABLE parameters (
ledger_id VARCHAR NOT NULL,
participant_id VARCHAR NOT NULL,
ledger_end VARCHAR NOT NULL,
ledger_end_sequential_id BIGINT NOT NULL,
ledger_end_string_interning_id INTEGER NOT NULL,
participant_pruned_up_to_inclusive VARCHAR,
participant_all_divulged_contracts_pruned_up_to_inclusive VARCHAR
);
---------------------------------------------------------------------------------------------------
-- Configurations table
---------------------------------------------------------------------------------------------------
CREATE TABLE configuration_entries (
ledger_offset VARCHAR PRIMARY KEY NOT NULL,
recorded_at BIGINT NOT NULL,
submission_id VARCHAR NOT NULL,
typ VARCHAR NOT NULL,
configuration BINARY LARGE OBJECT NOT NULL,
rejection_reason VARCHAR,
CONSTRAINT configuration_entries_check_reason
CHECK (
(typ = 'accept' AND rejection_reason IS NULL) OR
(typ = 'reject' AND rejection_reason IS NOT NULL)
)
);
CREATE INDEX idx_configuration_submission ON configuration_entries (submission_id);
---------------------------------------------------------------------------------------------------
-- Packages table
---------------------------------------------------------------------------------------------------
CREATE TABLE packages (
package_id VARCHAR PRIMARY KEY NOT NULL,
upload_id VARCHAR NOT NULL,
source_description VARCHAR,
package_size BIGINT NOT NULL,
known_since BIGINT NOT NULL,
ledger_offset VARCHAR NOT NULL,
package BINARY LARGE OBJECT NOT NULL
);
CREATE INDEX idx_packages_ledger_offset ON packages (ledger_offset);
---------------------------------------------------------------------------------------------------
-- Package entries table
---------------------------------------------------------------------------------------------------
CREATE TABLE package_entries (
ledger_offset VARCHAR PRIMARY KEY NOT NULL,
recorded_at BIGINT NOT NULL,
submission_id VARCHAR,
typ VARCHAR NOT NULL,
rejection_reason VARCHAR,
CONSTRAINT check_package_entry_type
CHECK (
(typ = 'accept' AND rejection_reason IS NULL) OR
(typ = 'reject' AND rejection_reason IS NOT NULL)
)
);
CREATE INDEX idx_package_entries ON package_entries (submission_id);
---------------------------------------------------------------------------------------------------
-- Party entries table
---------------------------------------------------------------------------------------------------
CREATE TABLE party_entries (
ledger_offset VARCHAR PRIMARY KEY NOT NULL,
recorded_at BIGINT NOT NULL,
submission_id VARCHAR,
party VARCHAR,
display_name VARCHAR,
typ VARCHAR NOT NULL,
rejection_reason VARCHAR,
is_local BOOLEAN,
party_id INTEGER,
CONSTRAINT check_party_entry_type
CHECK (
(typ = 'accept' AND rejection_reason IS NULL) OR
(typ = 'reject' AND rejection_reason IS NOT NULL)
)
);
CREATE INDEX idx_party_entries ON party_entries (submission_id);
CREATE INDEX idx_party_entries_party_and_ledger_offset ON party_entries(party, ledger_offset);
CREATE INDEX idx_party_entries_party_id_and_ledger_offset ON party_entries(party_id, ledger_offset);
---------------------------------------------------------------------------------------------------
-- Completions table
---------------------------------------------------------------------------------------------------
CREATE TABLE participant_command_completions (
completion_offset VARCHAR NOT NULL,
record_time BIGINT NOT NULL,
application_id VARCHAR NOT NULL,
submitters INTEGER ARRAY NOT NULL,
command_id VARCHAR NOT NULL,
-- The transaction ID is `NULL` for rejected transactions.
transaction_id VARCHAR,
-- The submission ID will be provided by the participant or driver if the application didn't provide one.
-- Nullable to support historical data.
submission_id VARCHAR,
-- The three alternatives below are mutually exclusive, i.e. the deduplication
-- interval could have been specified by the application as one of:
-- 1. an initial offset
-- 2. a duration (split into two columns, seconds and nanos, mapping protobuf's 1:1)
-- 3. an initial timestamp
deduplication_offset VARCHAR,
deduplication_duration_seconds BIGINT,
deduplication_duration_nanos INT,
deduplication_start BIGINT,
-- The three columns below are `NULL` if the completion is for an accepted transaction.
-- The `rejection_status_details` column contains a Protocol-Buffers-serialized message of type
-- `daml.platform.index.StatusDetails`, containing the code, message, and further details
-- (decided by the ledger driver), and may be `NULL` even if the other two columns are set.
rejection_status_code INTEGER,
rejection_status_message VARCHAR,
rejection_status_details BINARY LARGE OBJECT
);
CREATE INDEX participant_command_completions_application_id_offset_idx ON participant_command_completions USING btree (application_id, completion_offset);
---------------------------------------------------------------------------------------------------
-- Events table: divulgence
---------------------------------------------------------------------------------------------------
CREATE TABLE participant_events_divulgence (
-- * fixed-size columns first to avoid padding
event_sequential_id bigint NOT NULL, -- event identification: same ordering as event_offset
-- * event identification
event_offset VARCHAR, -- offset of the transaction that divulged the contract
-- * transaction metadata
workflow_id VARCHAR,
-- * submitter info (only visible on submitting participant)
command_id VARCHAR,
application_id VARCHAR,
submitters INTEGER ARRAY,
-- * shared event information
contract_id VARCHAR NOT NULL,
template_id INTEGER,
tree_event_witnesses INTEGER ARRAY NOT NULL DEFAULT ARRAY[], -- informees
-- * contract data
create_argument BINARY LARGE OBJECT,
-- * compression flags
create_argument_compression SMALLINT
);
-- offset index: used to translate to sequential_id
CREATE INDEX participant_events_divulgence_event_offset ON participant_events_divulgence (event_offset);
-- sequential_id index for paging
CREATE INDEX participant_events_divulgence_event_sequential_id ON participant_events_divulgence (event_sequential_id);
-- lookup divulgence events, in order of ingestion
CREATE INDEX participant_events_divulgence_contract_id_idx ON participant_events_divulgence (contract_id, event_sequential_id);
---------------------------------------------------------------------------------------------------
-- Events table: create
---------------------------------------------------------------------------------------------------
CREATE TABLE participant_events_create (
-- * fixed-size columns first to avoid padding
event_sequential_id bigint NOT NULL, -- event identification: same ordering as event_offset
ledger_effective_time bigint NOT NULL, -- transaction metadata
node_index integer NOT NULL, -- event metadata
-- * event identification
event_offset VARCHAR NOT NULL,
-- * transaction metadata
transaction_id VARCHAR NOT NULL,
workflow_id VARCHAR,
-- * submitter info (only visible on submitting participant)
command_id VARCHAR,
application_id VARCHAR,
submitters INTEGER ARRAY,
-- * event metadata
event_id VARCHAR NOT NULL, -- string representation of (transaction_id, node_index)
-- * shared event information
contract_id VARCHAR NOT NULL,
template_id INTEGER NOT NULL,
flat_event_witnesses INTEGER ARRAY NOT NULL DEFAULT ARRAY[], -- stakeholders
tree_event_witnesses INTEGER ARRAY NOT NULL DEFAULT ARRAY[], -- informees
-- * contract data
create_argument BINARY LARGE OBJECT NOT NULL,
create_signatories INTEGER ARRAY NOT NULL,
create_observers INTEGER ARRAY NOT NULL,
create_agreement_text VARCHAR,
create_key_value BINARY LARGE OBJECT,
create_key_hash VARCHAR,
-- * compression flags
create_argument_compression SMALLINT,
create_key_value_compression SMALLINT,
-- * contract driver metadata
driver_metadata BINARY LARGE OBJECT
);
-- offset index: used to translate to sequential_id
CREATE INDEX participant_events_create_event_offset ON participant_events_create (event_offset);
-- sequential_id index for paging
CREATE INDEX participant_events_create_event_sequential_id ON participant_events_create (event_sequential_id);
-- lookup by contract id
CREATE INDEX participant_events_create_contract_id_idx ON participant_events_create (contract_id);
-- lookup by contract_key
CREATE INDEX participant_events_create_create_key_hash_idx ON participant_events_create (create_key_hash, event_sequential_id);
---------------------------------------------------------------------------------------------------
-- Events table: consuming exercise
---------------------------------------------------------------------------------------------------
CREATE TABLE participant_events_consuming_exercise (
-- * fixed-size columns first to avoid padding
event_sequential_id bigint NOT NULL, -- event identification: same ordering as event_offset
ledger_effective_time bigint NOT NULL, -- transaction metadata
node_index integer NOT NULL, -- event metadata
-- * event identification
event_offset VARCHAR NOT NULL,
-- * transaction metadata
transaction_id VARCHAR NOT NULL,
workflow_id VARCHAR,
-- * submitter info (only visible on submitting participant)
command_id VARCHAR,
application_id VARCHAR,
submitters INTEGER ARRAY,
-- * event metadata
event_id VARCHAR NOT NULL, -- string representation of (transaction_id, node_index)
-- * shared event information
contract_id VARCHAR NOT NULL,
template_id INTEGER NOT NULL,
flat_event_witnesses INTEGER ARRAY NOT NULL DEFAULT ARRAY[], -- stakeholders
tree_event_witnesses INTEGER ARRAY NOT NULL DEFAULT ARRAY[], -- informees
-- * information about the corresponding create event
create_key_value BINARY LARGE OBJECT, -- used for the mutable state cache
-- * choice data
exercise_choice VARCHAR NOT NULL,
exercise_argument BINARY LARGE OBJECT NOT NULL,
exercise_result BINARY LARGE OBJECT,
exercise_actors INTEGER ARRAY NOT NULL,
exercise_child_event_ids VARCHAR ARRAY NOT NULL,
-- * compression flags
create_key_value_compression SMALLINT,
exercise_argument_compression SMALLINT,
exercise_result_compression SMALLINT
);
-- offset index: used to translate to sequential_id
CREATE INDEX participant_events_consuming_exercise_event_offset ON participant_events_consuming_exercise (event_offset);
-- sequential_id index for paging
CREATE INDEX participant_events_consuming_exercise_event_sequential_id ON participant_events_consuming_exercise (event_sequential_id);
-- lookup by contract id
CREATE INDEX participant_events_consuming_exercise_contract_id_idx ON participant_events_consuming_exercise (contract_id);
---------------------------------------------------------------------------------------------------
-- Events table: non-consuming exercise
---------------------------------------------------------------------------------------------------
CREATE TABLE participant_events_non_consuming_exercise (
-- * fixed-size columns first to avoid padding
event_sequential_id bigint NOT NULL, -- event identification: same ordering as event_offset
ledger_effective_time bigint NOT NULL, -- transaction metadata
node_index integer NOT NULL, -- event metadata
-- * event identification
event_offset VARCHAR NOT NULL,
-- * transaction metadata
transaction_id VARCHAR NOT NULL,
workflow_id VARCHAR,
-- * submitter info (only visible on submitting participant)
command_id VARCHAR,
application_id VARCHAR,
submitters INTEGER ARRAY,
-- * event metadata
event_id VARCHAR NOT NULL, -- string representation of (transaction_id, node_index)
-- * shared event information
contract_id VARCHAR NOT NULL,
template_id INTEGER NOT NULL,
flat_event_witnesses INTEGER ARRAY NOT NULL DEFAULT ARRAY[], -- stakeholders
tree_event_witnesses INTEGER ARRAY NOT NULL DEFAULT ARRAY[], -- informees
-- * information about the corresponding create event
create_key_value BINARY LARGE OBJECT, -- used for the mutable state cache
-- * choice data
exercise_choice VARCHAR NOT NULL,
exercise_argument BINARY LARGE OBJECT NOT NULL,
exercise_result BINARY LARGE OBJECT,
exercise_actors INTEGER ARRAY NOT NULL,
exercise_child_event_ids VARCHAR ARRAY NOT NULL,
-- * compression flags
create_key_value_compression SMALLINT,
exercise_argument_compression SMALLINT,
exercise_result_compression SMALLINT
);
-- offset index: used to translate to sequential_id
CREATE INDEX participant_events_non_consuming_exercise_event_offset ON participant_events_non_consuming_exercise (event_offset);
-- sequential_id index for paging
CREATE INDEX participant_events_non_consuming_exercise_event_sequential_id ON participant_events_non_consuming_exercise (event_sequential_id);
CREATE TABLE string_interning (
internal_id integer PRIMARY KEY NOT NULL,
external_string text
);
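-- Illustrative only (not part of the original migration): interned integer ids used by the event
-- tables resolve back to their external strings through this table, e.g. for template ids.
SELECT e.event_sequential_id, s.external_string AS template_id
FROM participant_events_create e
JOIN string_interning s ON s.internal_id = e.template_id;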
-----------------------------
-- Filter tables for events
-----------------------------
-- create stakeholders
CREATE TABLE pe_create_id_filter_stakeholder (
event_sequential_id BIGINT NOT NULL,
template_id INTEGER NOT NULL,
party_id INTEGER NOT NULL
);
CREATE INDEX pe_create_id_filter_stakeholder_pts_idx ON pe_create_id_filter_stakeholder(party_id, template_id, event_sequential_id);
CREATE INDEX pe_create_id_filter_stakeholder_pt_idx ON pe_create_id_filter_stakeholder(party_id, event_sequential_id);
CREATE INDEX pe_create_id_filter_stakeholder_s_idx ON pe_create_id_filter_stakeholder(event_sequential_id);
CREATE TABLE pe_create_id_filter_non_stakeholder_informee (
event_sequential_id BIGINT NOT NULL,
party_id INTEGER NOT NULL
);
CREATE INDEX pe_create_id_filter_non_stakeholder_informee_ps_idx ON pe_create_id_filter_non_stakeholder_informee(party_id, event_sequential_id);
CREATE INDEX pe_create_id_filter_non_stakeholder_informee_s_idx ON pe_create_id_filter_non_stakeholder_informee(event_sequential_id);
CREATE TABLE pe_consuming_id_filter_stakeholder (
event_sequential_id BIGINT NOT NULL,
template_id INTEGER NOT NULL,
party_id INTEGER NOT NULL
);
CREATE INDEX pe_consuming_id_filter_stakeholder_pts_idx ON pe_consuming_id_filter_stakeholder(party_id, template_id, event_sequential_id);
CREATE INDEX pe_consuming_id_filter_stakeholder_ps_idx ON pe_consuming_id_filter_stakeholder(party_id, event_sequential_id);
CREATE INDEX pe_consuming_id_filter_stakeholder_s_idx ON pe_consuming_id_filter_stakeholder(event_sequential_id);
CREATE TABLE pe_consuming_id_filter_non_stakeholder_informee (
event_sequential_id BIGINT NOT NULL,
party_id INTEGER NOT NULL
);
CREATE INDEX pe_consuming_id_filter_non_stakeholder_informee_ps_idx ON pe_consuming_id_filter_non_stakeholder_informee(party_id, event_sequential_id);
CREATE INDEX pe_consuming_id_filter_non_stakeholder_informee_s_idx ON pe_consuming_id_filter_non_stakeholder_informee(event_sequential_id);
CREATE TABLE pe_non_consuming_id_filter_informee (
event_sequential_id BIGINT NOT NULL,
party_id INTEGER NOT NULL
);
CREATE INDEX pe_non_consuming_id_filter_informee_ps_idx ON pe_non_consuming_id_filter_informee(party_id, event_sequential_id);
CREATE INDEX pe_non_consuming_id_filter_informee_s_idx ON pe_non_consuming_id_filter_informee(event_sequential_id);
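-- Illustrative only (not part of the original migration): a sketch of id-based pagination over the
-- stakeholder filter table; the interned party/template ids and the page bounds are placeholders.
SELECT event_sequential_id
FROM pe_create_id_filter_stakeholder
WHERE party_id = 7
  AND template_id = 3
  AND event_sequential_id > 0
ORDER BY event_sequential_id
LIMIT 1000;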
CREATE TABLE participant_transaction_meta(
transaction_id VARCHAR NOT NULL,
event_offset VARCHAR NOT NULL,
event_sequential_id_first BIGINT NOT NULL,
event_sequential_id_last BIGINT NOT NULL
);
CREATE INDEX participant_transaction_meta_tid_idx ON participant_transaction_meta(transaction_id);
CREATE INDEX participant_transaction_meta_event_offset_idx ON participant_transaction_meta(event_offset);
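-- Illustrative only (not part of the original migration): point-wise lookup first resolves a
-- transaction id to its event_sequential_id range and then reads the event tables by that range;
-- the transaction id literal is a placeholder.
SELECT c.event_id, c.contract_id
FROM participant_transaction_meta m
JOIN participant_events_create c
  ON c.event_sequential_id BETWEEN m.event_sequential_id_first AND m.event_sequential_id_last
WHERE m.transaction_id = 'txid-placeholder';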
CREATE TABLE transaction_metering (
application_id VARCHAR NOT NULL,
action_count INTEGER NOT NULL,
metering_timestamp BIGINT NOT NULL,
ledger_offset VARCHAR NOT NULL
);
CREATE INDEX transaction_metering_ledger_offset ON transaction_metering(ledger_offset);
CREATE TABLE metering_parameters (
ledger_metering_end VARCHAR,
ledger_metering_timestamp BIGINT NOT NULL
);
CREATE TABLE participant_metering (
application_id VARCHAR NOT NULL,
from_timestamp BIGINT NOT NULL,
to_timestamp BIGINT NOT NULL,
action_count INTEGER NOT NULL,
ledger_offset VARCHAR NOT NULL
);
CREATE UNIQUE INDEX participant_metering_from_to_application ON participant_metering(from_timestamp, to_timestamp, application_id);
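-- Illustrative only (not part of the original migration): a sketch of how the per-application
-- aggregates stored in participant_metering could be derived from transaction_metering; the window
-- bounds are placeholders.
SELECT application_id, SUM(action_count) AS action_count, MAX(ledger_offset) AS ledger_offset
FROM transaction_metering
WHERE metering_timestamp >= 1000 AND metering_timestamp < 2000
GROUP BY application_id;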
-- NOTE: We keep the participant user and party record tables independent from the indexer-based tables,
-- so that they can be moved to a separate database without any extra schema changes.
---------------------------------------------------------------------------------------------------
-- Participant local store: identity provider configurations
---------------------------------------------------------------------------------------------------
CREATE TABLE participant_identity_provider_config
(
identity_provider_id VARCHAR(255) PRIMARY KEY NOT NULL,
issuer VARCHAR NOT NULL UNIQUE,
jwks_url VARCHAR NOT NULL,
is_deactivated BOOLEAN NOT NULL,
audience VARCHAR NULL
);
---------------------------------------------------------------------------------------------------
-- Participant local store: users
---------------------------------------------------------------------------------------------------
CREATE TABLE participant_users (
internal_id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
user_id VARCHAR(256) NOT NULL UNIQUE,
primary_party VARCHAR(512),
identity_provider_id VARCHAR(255) REFERENCES participant_identity_provider_config (identity_provider_id),
is_deactivated BOOLEAN NOT NULL,
resource_version BIGINT NOT NULL,
created_at BIGINT NOT NULL
);
CREATE TABLE participant_user_rights (
user_internal_id INTEGER NOT NULL REFERENCES participant_users (internal_id) ON DELETE CASCADE,
user_right INTEGER NOT NULL,
for_party VARCHAR(512),
for_party2 VARCHAR(512) GENERATED ALWAYS AS (CASE
WHEN for_party IS NOT NULL
THEN for_party
ELSE ''
END),
granted_at BIGINT NOT NULL,
UNIQUE (user_internal_id, user_right, for_party2)
);
CREATE TABLE participant_user_annotations (
internal_id INTEGER NOT NULL REFERENCES participant_users (internal_id) ON DELETE CASCADE,
name VARCHAR(512) NOT NULL,
-- 256k = 256*1024 = 262144
val VARCHAR(262144),
updated_at BIGINT NOT NULL,
UNIQUE (internal_id, name)
);
INSERT INTO participant_users(user_id, primary_party, identity_provider_id, is_deactivated, resource_version, created_at)
VALUES ('participant_admin', NULL, NULL, false, 0, 0);
INSERT INTO participant_user_rights(user_internal_id, user_right, for_party, granted_at)
SELECT internal_id, 1, NULL, 0
FROM participant_users
WHERE user_id = 'participant_admin';
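-- Illustrative only (not part of the original migration): the generated for_party2 column maps a
-- NULL for_party to the empty string so that the UNIQUE constraint also covers party-independent
-- rights; re-granting the admin right above would therefore be rejected:
--   INSERT INTO participant_user_rights(user_internal_id, user_right, for_party, granted_at)
--   SELECT internal_id, 1, NULL, 1
--   FROM participant_users
--   WHERE user_id = 'participant_admin';
--   -- fails: (user_internal_id, 1, '') already exists in participant_user_rights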
---------------------------------------------------------------------------------------------------
-- Participant local store: party records
---------------------------------------------------------------------------------------------------
CREATE TABLE participant_party_records (
internal_id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
party VARCHAR(512) NOT NULL UNIQUE,
identity_provider_id VARCHAR(255) REFERENCES participant_identity_provider_config (identity_provider_id),
resource_version BIGINT NOT NULL,
created_at BIGINT NOT NULL
);
CREATE TABLE participant_party_record_annotations (
internal_id INTEGER NOT NULL REFERENCES participant_party_records (internal_id) ON DELETE CASCADE,
name VARCHAR(512) NOT NULL,
-- 256k = 256*1024 = 262144
val VARCHAR(262144),
updated_at BIGINT NOT NULL,
UNIQUE (internal_id, name)
);


@ -1 +0,0 @@
e849b962d982296efa4e17e846681beea7ada94815897132eda75fc1e511c29e


@ -1,12 +0,0 @@
-- Copyright (c) 2022 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
-- SPDX-License-Identifier: Apache-2.0
-- Note: can't make the ledger_end column NOT NULL, as we need to be able to insert the empty string
UPDATE parameters SET ledger_end_sequential_id = 0 WHERE ledger_end_sequential_id IS NULL;
UPDATE parameters SET ledger_end_string_interning_id = 0 WHERE ledger_end_string_interning_id IS NULL;
ALTER TABLE parameters MODIFY ( ledger_end_sequential_id NOT NULL);
ALTER TABLE parameters MODIFY ( ledger_end_string_interning_id NOT NULL);


@ -1 +0,0 @@
f0518b9fdf84752d0b47a57aa9725a2b4ef820bd44ba79f2a82482d089e1c5fc


@ -1,5 +0,0 @@
-- Copyright (c) 2022 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
-- SPDX-License-Identifier: Apache-2.0
-- Participant-side deduplication not supported anymore
DROP TABLE participant_command_submissions PURGE;


@ -1 +0,0 @@
45e3500305c86bf7928f78ba6b97572cf8f900e18805e181b44d7837fe0c3c09


@ -1,24 +0,0 @@
-- Copyright (c) 2022 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
-- SPDX-License-Identifier: Apache-2.0
-- Transaction metering alterations
ALTER TABLE transaction_metering MODIFY application_id VARCHAR2(4000);
-- Create metering parameters
CREATE TABLE metering_parameters (
ledger_metering_end VARCHAR2(4000),
ledger_metering_timestamp NUMBER NOT NULL
);
-- Create participant metering
CREATE TABLE participant_metering (
application_id VARCHAR2(4000) NOT NULL,
from_timestamp NUMBER NOT NULL,
to_timestamp NUMBER NOT NULL,
action_count NUMBER NOT NULL,
ledger_offset VARCHAR2(4000) NOT NULL
);
CREATE UNIQUE INDEX participant_metering_from_to_application ON participant_metering(from_timestamp, to_timestamp, application_id);


@ -1 +0,0 @@
8e829aaeff43e821dd1d1d32d110d58a9e5c851c4580bca2ffa7411934f6a077


@ -1,4 +0,0 @@
-- Copyright (c) 2022 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
-- SPDX-License-Identifier: Apache-2.0
DROP VIEW participant_events;


@ -1 +0,0 @@
0640b2a5616b5ba8bdc4260fc2f50951b5ca49bd09463083039a9eea01ec2f4b


@ -1,31 +0,0 @@
-- Copyright (c) 2022 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
-- SPDX-License-Identifier: Apache-2.0
-- NOTE: We keep the participant user and party record tables independent from the indexer-based tables,
-- so that they can be moved to a separate database without any extra schema changes.
-- User tables
CREATE TABLE participant_user_annotations (
internal_id NUMBER NOT NULL REFERENCES participant_users (internal_id) ON DELETE CASCADE,
name VARCHAR2(512 CHAR) NOT NULL,
val CLOB,
updated_at NUMBER NOT NULL,
UNIQUE (internal_id, name)
);
ALTER TABLE participant_users ADD is_deactivated NUMBER DEFAULT 0 NOT NULL;
ALTER TABLE participant_users ADD resource_version NUMBER DEFAULT 0 NOT NULL;
-- Party record tables
CREATE TABLE participant_party_records (
internal_id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
party VARCHAR2(512 CHAR) NOT NULL UNIQUE,
resource_version NUMBER NOT NULL,
created_at NUMBER NOT NULL
);
CREATE TABLE participant_party_record_annotations (
internal_id NUMBER NOT NULL REFERENCES participant_party_records (internal_id) ON DELETE CASCADE,
name VARCHAR2(512 CHAR) NOT NULL,
val CLOB,
updated_at NUMBER NOT NULL,
UNIQUE (internal_id, name)
);


@ -1 +0,0 @@
570a46ac2bf098015d65f9be78499c1fe7ebf6de1e9cb5e1acbfe6cfc6d95cfe


@ -1 +0,0 @@
alter table participant_events_create add driver_metadata BLOB;


@ -1 +0,0 @@
1358474f56f8c553bf071b3790aa691c9a2e6d6f013f7898fa16349e9c879cc0


@ -1,7 +0,0 @@
CREATE TABLE participant_identity_provider_config
(
identity_provider_id VARCHAR2(255) PRIMARY KEY NOT NULL,
issuer VARCHAR2(4000) NOT NULL UNIQUE,
jwks_url VARCHAR2(4000) NOT NULL,
is_deactivated NUMBER DEFAULT 0 NOT NULL
);


@ -1 +0,0 @@
4610a80400190baab58d90946a9cd90ce336c9b7f518af1760e7913cb9bdbf99


@ -1 +0,0 @@
CREATE INDEX participant_command_completions_application_id_offset_idx ON participant_command_completions(application_id, completion_offset);


@ -1 +0,0 @@
c7237f1999153b92070d2498507bafc73efa001cd70878b33b2d9fa58f181ff0


@ -1,58 +0,0 @@
-- Flat transactions
ALTER TABLE participant_events_create_filter RENAME
TO pe_create_id_filter_stakeholder;
ALTER INDEX idx_participant_events_create_filter_party_template_seq_id_idx RENAME
TO pe_create_id_filter_stakeholder_pts_idx;
ALTER INDEX idx_participant_events_create_filter_party_seq_id_idx RENAME
TO pe_create_id_filter_stakeholder_pt_idx;
ALTER INDEX idx_participant_events_create_seq_id_idx RENAME
TO pe_create_id_filter_stakeholder_s_idx;
CREATE TABLE pe_consuming_id_filter_stakeholder (
event_sequential_id NUMBER NOT NULL,
template_id NUMBER NOT NULL,
party_id NUMBER NOT NULL
);
CREATE INDEX pe_consuming_id_filter_stakeholder_pts_idx ON pe_consuming_id_filter_stakeholder(party_id, template_id, event_sequential_id);
CREATE INDEX pe_consuming_id_filter_stakeholder_ps_idx ON pe_consuming_id_filter_stakeholder(party_id, event_sequential_id);
CREATE INDEX pe_consuming_id_filter_stakeholder_s_idx ON pe_consuming_id_filter_stakeholder(event_sequential_id);
--- Tree transactions
CREATE TABLE pe_create_id_filter_non_stakeholder_informee (
event_sequential_id NUMBER NOT NULL,
party_id NUMBER NOT NULL
);
CREATE INDEX pe_create_id_filter_non_stakeholder_informee_ps_idx ON pe_create_id_filter_non_stakeholder_informee(party_id, event_sequential_id);
CREATE INDEX pe_create_id_filter_non_stakeholder_informee_s_idx ON pe_create_id_filter_non_stakeholder_informee(event_sequential_id);
CREATE TABLE pe_consuming_id_filter_non_stakeholder_informee (
event_sequential_id NUMBER NOT NULL,
party_id NUMBER NOT NULL
);
CREATE INDEX pe_consuming_id_filter_non_stakeholder_informee_ps_idx ON pe_consuming_id_filter_non_stakeholder_informee(party_id, event_sequential_id);
CREATE INDEX pe_consuming_id_filter_non_stakeholder_informee_s_idx ON pe_consuming_id_filter_non_stakeholder_informee(event_sequential_id);
CREATE TABLE pe_non_consuming_id_filter_informee (
event_sequential_id NUMBER NOT NULL,
party_id NUMBER NOT NULL
);
CREATE INDEX pe_non_consuming_id_filter_informee_ps_idx ON pe_non_consuming_id_filter_informee(party_id, event_sequential_id);
CREATE INDEX pe_non_consuming_id_filter_informee_s_idx ON pe_non_consuming_id_filter_informee(event_sequential_id);
-- Point-wise lookup
CREATE TABLE participant_transaction_meta(
transaction_id VARCHAR2(4000) NOT NULL,
event_offset VARCHAR2(4000) NOT NULL,
event_sequential_id_first NUMBER NOT NULL,
event_sequential_id_last NUMBER NOT NULL
);
CREATE INDEX participant_transaction_meta_tid_idx ON participant_transaction_meta(transaction_id);
CREATE INDEX participant_transaction_meta_event_offset_idx ON participant_transaction_meta(event_offset);


@ -1 +0,0 @@
d20c1c205aa437a17e8b04be8e0e5f029081f0049acb81b1c6a8968f336beecb


@ -1,5 +0,0 @@
ALTER TABLE participant_users
ADD identity_provider_id VARCHAR2(255) DEFAULT NULL REFERENCES participant_identity_provider_config (identity_provider_id);
ALTER TABLE participant_party_records
ADD identity_provider_id VARCHAR2(255) DEFAULT NULL REFERENCES participant_identity_provider_config (identity_provider_id);


@ -1 +0,0 @@
fe940b22140c0a1da0a3e54c0087c8872828e4cb08bdcc30bbaa1db84f98b62b


@ -1,548 +0,0 @@
-- Copyright (c) 2021 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
-- SPDX-License-Identifier: Apache-2.0
---------------------------------------------------------------------------------------------------
-- V100: Append-only schema
--
-- This is a major redesign of the index database schema. Updates from the ReadService are
-- now written into the append-only table participant_events, and the set of active contracts is
-- reconstructed from the log of create and archive events.
---------------------------------------------------------------------------------------------------
CREATE TABLE packages
(
-- The unique identifier of the package (the hash of its content)
package_id VARCHAR2(4000) primary key not null,
-- Packages are uploaded as DAR files (i.e., in groups)
-- This field can be used to find out which packages were uploaded together
upload_id NVARCHAR2(1000) not null,
-- A human readable description of the package source
source_description NVARCHAR2(1000),
-- The size of the archive payload (i.e., the serialized DAML-LF package), in bytes
package_size NUMBER not null,
-- The time when the package was added
known_since NUMBER not null,
-- The ledger end at the time when the package was added
ledger_offset VARCHAR2(4000) not null,
-- The DAML-LF archive, serialized using the protobuf message `daml_lf.Archive`.
-- See also `daml-lf/archive/da/daml_lf.proto`.
package BLOB not null
);
CREATE INDEX packages_ledger_offset_idx ON packages(ledger_offset);
CREATE TABLE configuration_entries
(
ledger_offset VARCHAR2(4000) not null primary key,
recorded_at NUMBER not null,
submission_id NVARCHAR2(1000) not null,
-- The type of entry, one of 'accept' or 'reject'.
typ NVARCHAR2(1000) not null,
-- The configuration that was proposed and either accepted or rejected depending on the type.
-- Encoded according to participant-state/protobuf/ledger_configuration.proto.
-- Add the current configuration column to parameters.
configuration BLOB not null,
-- If the type is 'rejection', then the rejection reason is set.
-- Rejection reason is a human-readable description why the change was rejected.
rejection_reason NVARCHAR2(1000),
-- Check that fields are correctly set based on the type.
constraint configuration_entries_check_entry
check (
(typ = 'accept' and rejection_reason is null) or
(typ = 'reject' and rejection_reason is not null))
);
CREATE INDEX idx_configuration_submission ON configuration_entries (submission_id);
CREATE TABLE package_entries
(
ledger_offset VARCHAR2(4000) not null primary key,
recorded_at NUMBER not null,
-- SubmissionId for package to be uploaded
submission_id NVARCHAR2(1000),
-- The type of entry, one of 'accept' or 'reject'
typ NVARCHAR2(1000) not null,
-- If the type is 'reject', then the rejection reason is set.
-- Rejection reason is a human-readable description why the change was rejected.
rejection_reason NVARCHAR2(1000),
constraint check_package_entry_type
check (
(typ = 'accept' and rejection_reason is null) or
(typ = 'reject' and rejection_reason is not null)
)
);
-- Index for retrieving the package entry by submission id
CREATE INDEX idx_package_entries ON package_entries (submission_id);
CREATE TABLE party_entries
(
-- The ledger end at the time when the party allocation was added
-- cannot add a BLOB as primary key with Oracle
ledger_offset VARCHAR2(4000) primary key not null,
recorded_at NUMBER not null,
-- SubmissionId for the party allocation
submission_id NVARCHAR2(1000),
-- party
party NVARCHAR2(1000),
-- displayName
display_name NVARCHAR2(1000),
-- The type of entry, 'accept' or 'reject'
typ NVARCHAR2(1000) not null,
-- If the type is 'reject', then the rejection reason is set.
-- Rejection reason is a human-readable description why the change was rejected.
rejection_reason NVARCHAR2(1000),
-- true if the party was added on the participant node that owns the party
is_local NUMBER(1, 0),
constraint check_party_entry_type
check (
(typ = 'accept' and rejection_reason is null and party is not null) or
(typ = 'reject' and rejection_reason is not null)
)
);
CREATE INDEX idx_party_entries ON party_entries(submission_id);
CREATE INDEX idx_party_entries_party_and_ledger_offset ON party_entries(party, ledger_offset);
CREATE TABLE participant_command_completions
(
completion_offset VARCHAR2(4000) NOT NULL,
record_time NUMBER NOT NULL,
application_id NVARCHAR2(1000) NOT NULL,
-- The submission ID will be provided by the participant or driver if the application didn't provide one.
-- Nullable to support historical data.
submission_id NVARCHAR2(1000),
-- The three alternatives below are mutually exclusive, i.e. the deduplication
-- interval could have been specified by the application as one of:
-- 1. an initial offset
-- 2. a duration (split into two columns, seconds and nanos, mapping protobuf's 1:1)
-- 3. an initial timestamp
deduplication_offset VARCHAR2(4000),
deduplication_duration_seconds NUMBER,
deduplication_duration_nanos NUMBER,
deduplication_start NUMBER,
submitters CLOB NOT NULL CONSTRAINT ensure_json_submitters CHECK (submitters IS JSON),
command_id NVARCHAR2(1000) NOT NULL,
transaction_id NVARCHAR2(1000), -- null for rejected transactions and checkpoints
rejection_status_code INTEGER, -- null for accepted transactions and checkpoints
rejection_status_message CLOB, -- null for accepted transactions and checkpoints
rejection_status_details BLOB -- null for accepted transactions and checkpoints
);
CREATE INDEX participant_command_completions_idx ON participant_command_completions(completion_offset, application_id);
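-- Illustrative only (not part of the original migration): because the three deduplication period
-- representations are mutually exclusive, a reader can determine per completion which one was used.
SELECT completion_offset,
       command_id,
       CASE
         WHEN deduplication_offset IS NOT NULL THEN 'offset'
         WHEN deduplication_duration_seconds IS NOT NULL THEN 'duration'
         WHEN deduplication_start IS NOT NULL THEN 'timestamp'
         ELSE 'unspecified'
       END AS deduplication_kind
FROM participant_command_completions;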
CREATE TABLE participant_command_submissions
(
-- The deduplication key
deduplication_key NVARCHAR2(1000) primary key not null,
-- The time the command will stop being deduplicated
deduplicate_until NUMBER not null
);
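-- Illustrative only (not part of the original migration): a sketch of the participant-side
-- deduplication check this table supported; the key and timestamp literals are placeholders.
SELECT count(*)
FROM participant_command_submissions
WHERE deduplication_key = 'dedup-key-placeholder'
  AND deduplicate_until > 1000000;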
---------------------------------------------------------------------------------------------------
-- Events table: divulgence
---------------------------------------------------------------------------------------------------
CREATE TABLE participant_events_divulgence (
-- * event identification
event_sequential_id NUMBER NOT NULL,
-- NOTE: this must be assigned sequentially by the indexer such that
-- for all events ev1, ev2 it holds that '(ev1.offset < ev2.offset) <=> (ev1.event_sequential_id < ev2.event_sequential_id)'
event_offset VARCHAR2(4000), -- offset of the transaction that divulged the contract
-- * transaction metadata
command_id VARCHAR2(4000),
workflow_id VARCHAR2(4000),
application_id VARCHAR2(4000),
submitters CLOB CONSTRAINT ensure_json_ped_submitters CHECK (submitters IS JSON),
-- * shared event information
contract_id VARCHAR2(4000) NOT NULL,
template_id VARCHAR2(4000),
tree_event_witnesses CLOB DEFAULT '[]' NOT NULL CONSTRAINT ensure_json_tree_event_witnesses CHECK (tree_event_witnesses IS JSON), -- informees for create, exercise, and divulgence events
-- * divulgence and create events
create_argument BLOB,
-- * compression flags
create_argument_compression SMALLINT
);
-- offset index: used to translate to sequential_id
CREATE INDEX participant_events_divulgence_event_offset ON participant_events_divulgence(event_offset);
-- sequential_id index for paging
CREATE INDEX participant_events_divulgence_event_sequential_id ON participant_events_divulgence(event_sequential_id);
-- filtering by template
CREATE INDEX participant_events_divulgence_template_id_idx ON participant_events_divulgence(template_id);
-- filtering by witnesses (visibility) for some queries used in the implementation of
-- GetActiveContracts (flat), GetTransactions (flat) and GetTransactionTrees.
-- Note that Postgres has trouble using these indices effectively with our paged access.
-- We might decide to drop them.
CREATE SEARCH INDEX participant_events_divulgence_tree_event_witnesses_idx ON participant_events_divulgence (tree_event_witnesses) FOR JSON;
-- lookup divulgence events, in order of ingestion
CREATE INDEX participant_events_divulgence_contract_id_idx ON participant_events_divulgence(contract_id, event_sequential_id);
---------------------------------------------------------------------------------------------------
-- Events table: create
---------------------------------------------------------------------------------------------------
CREATE TABLE participant_events_create (
-- * event identification
event_sequential_id NUMBER NOT NULL,
-- NOTE: this must be assigned sequentially by the indexer such that
-- for all events ev1, ev2 it holds that '(ev1.offset < ev2.offset) <=> (ev1.event_sequential_id < ev2.event_sequential_id)'
ledger_effective_time NUMBER NOT NULL,
node_index INTEGER NOT NULL,
event_offset VARCHAR2(4000) NOT NULL,
-- * transaction metadata
transaction_id VARCHAR2(4000) NOT NULL,
workflow_id VARCHAR2(4000),
command_id VARCHAR2(4000),
application_id VARCHAR2(4000),
submitters CLOB CONSTRAINT ensure_json_pec_submitters CHECK (submitters IS JSON),
-- * event metadata
event_id VARCHAR2(4000) NOT NULL, -- string representation of (transaction_id, node_index)
-- * shared event information
contract_id VARCHAR2(4000) NOT NULL,
template_id VARCHAR2(4000) NOT NULL,
flat_event_witnesses CLOB DEFAULT '[]' NOT NULL CONSTRAINT ensure_json_pec_flat_event_witnesses CHECK (flat_event_witnesses IS JSON), -- stakeholders of create events and consuming exercise events
tree_event_witnesses CLOB DEFAULT '[]' NOT NULL CONSTRAINT ensure_json_pec_tree_event_witnesses CHECK (tree_event_witnesses IS JSON), -- informees for create, exercise, and divulgence events
-- * divulgence and create events
create_argument BLOB NOT NULL,
-- * create events only
create_signatories CLOB NOT NULL CONSTRAINT ensure_json_create_signatories CHECK (create_signatories IS JSON),
create_observers CLOB NOT NULL CONSTRAINT ensure_json_create_observers CHECK (create_observers is JSON),
create_agreement_text VARCHAR2(4000),
create_key_value BLOB,
create_key_hash VARCHAR2(4000),
-- * compression flags
create_argument_compression SMALLINT,
create_key_value_compression SMALLINT
);
-- offset index: used to translate to sequential_id
CREATE INDEX participant_events_create_event_offset ON participant_events_create(event_offset);
-- sequential_id index for paging
CREATE INDEX participant_events_create_event_sequential_id ON participant_events_create(event_sequential_id);
-- lookup by event-id
CREATE INDEX participant_events_create_event_id_idx ON participant_events_create(event_id);
-- lookup by transaction id
CREATE INDEX participant_events_create_transaction_id_idx ON participant_events_create(transaction_id);
-- filtering by template
CREATE INDEX participant_events_create_template_id_idx ON participant_events_create(template_id);
-- filtering by witnesses (visibility) for some queries used in the implementation of
-- GetActiveContracts (flat), GetTransactions (flat) and GetTransactionTrees.
-- Note that Postgres has trouble using these indices effectively with our paged access.
-- We might decide to drop them.
CREATE SEARCH INDEX participant_events_create_flat_event_witnesses_idx ON participant_events_create (flat_event_witnesses) FOR JSON;
CREATE SEARCH INDEX participant_events_create_tree_event_witnesses_idx ON participant_events_create (tree_event_witnesses) FOR JSON;
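-- Illustrative only (not part of the original migration): a sketch of filtering events by a witness
-- party stored in the JSON arrays, expanded via json_table; the party literal is a placeholder.
SELECT e.event_sequential_id, e.contract_id
FROM participant_events_create e,
     json_table(e.flat_event_witnesses, '$[*]' columns (witness VARCHAR2(4000) PATH '$')) w
WHERE w.witness = 'Alice';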
-- lookup by contract id
CREATE INDEX participant_events_create_contract_id_idx ON participant_events_create(contract_id);
-- lookup by contract_key
CREATE INDEX participant_events_create_create_key_hash_idx ON participant_events_create(create_key_hash, event_sequential_id);
---------------------------------------------------------------------------------------------------
-- Events table: consuming exercise
---------------------------------------------------------------------------------------------------
CREATE TABLE participant_events_consuming_exercise (
-- * event identification
event_sequential_id NUMBER NOT NULL,
-- NOTE: this must be assigned sequentially by the indexer such that
-- for all events ev1, ev2 it holds that '(ev1.offset < ev2.offset) <=> (ev1.event_sequential_id < ev2.event_sequential_id)'
event_offset VARCHAR2(4000) NOT NULL,
-- * transaction metadata
transaction_id VARCHAR2(4000) NOT NULL,
ledger_effective_time NUMBER NOT NULL,
command_id VARCHAR2(4000),
workflow_id VARCHAR2(4000),
application_id VARCHAR2(4000),
submitters CLOB CONSTRAINT ensure_json_pece_submitters CHECK (submitters is JSON),
-- * event metadata
node_index INTEGER NOT NULL,
event_id VARCHAR2(4000) NOT NULL, -- string representation of (transaction_id, node_index)
-- * shared event information
contract_id VARCHAR2(4000) NOT NULL,
template_id VARCHAR2(4000) NOT NULL,
flat_event_witnesses CLOB DEFAULT '[]' NOT NULL CONSTRAINT ensure_json_pece_flat_event_witnesses CHECK (flat_event_witnesses IS JSON), -- stakeholders of create events and consuming exercise events
tree_event_witnesses CLOB DEFAULT '[]' NOT NULL CONSTRAINT ensure_json_pece_tree_event_witnesses CHECK (tree_event_witnesses IS JSON), -- informees for create, exercise, and divulgence events
-- * information about the corresponding create event
create_key_value BLOB, -- used for the mutable state cache
-- * exercise events (consuming and non_consuming)
exercise_choice VARCHAR2(4000) NOT NULL,
exercise_argument BLOB NOT NULL,
exercise_result BLOB,
exercise_actors CLOB NOT NULL CONSTRAINT ensure_json_pece_exercise_actors CHECK (exercise_actors IS JSON),
exercise_child_event_ids CLOB NOT NULL CONSTRAINT ensure_json_pece_exercise_child_event_ids CHECK (exercise_child_event_ids IS JSON),
-- * compression flags
create_key_value_compression SMALLINT,
exercise_argument_compression SMALLINT,
exercise_result_compression SMALLINT
);
-- offset index: used to translate to sequential_id
CREATE INDEX participant_events_consuming_exercise_event_offset ON participant_events_consuming_exercise(event_offset);
-- sequential_id index for paging
CREATE INDEX participant_events_consuming_exercise_event_sequential_id ON participant_events_consuming_exercise(event_sequential_id);
-- lookup by event-id
CREATE INDEX participant_events_consuming_exercise_event_id_idx ON participant_events_consuming_exercise(event_id);
-- lookup by transaction id
CREATE INDEX participant_events_consuming_exercise_transaction_id_idx ON participant_events_consuming_exercise(transaction_id);
-- filtering by template
CREATE INDEX participant_events_consuming_exercise_template_id_idx ON participant_events_consuming_exercise(template_id);
-- filtering by witnesses (visibility) for some queries used in the implementation of
-- GetActiveContracts (flat), GetTransactions (flat) and GetTransactionTrees.
CREATE SEARCH INDEX participant_events_consuming_exercise_flat_event_witnesses_idx ON participant_events_consuming_exercise (flat_event_witnesses) FOR JSON;
CREATE SEARCH INDEX participant_events_consuming_exercise_tree_event_witnesses_idx ON participant_events_consuming_exercise (tree_event_witnesses) FOR JSON;
-- lookup by contract id
CREATE INDEX participant_events_consuming_exercise_contract_id_idx ON participant_events_consuming_exercise (contract_id);
---------------------------------------------------------------------------------------------------
-- Events table: non-consuming exercise
---------------------------------------------------------------------------------------------------
CREATE TABLE participant_events_non_consuming_exercise (
-- * event identification
event_sequential_id NUMBER NOT NULL,
-- NOTE: this must be assigned sequentially by the indexer such that
-- for all events ev1, ev2 it holds that '(ev1.offset < ev2.offset) <=> (ev1.event_sequential_id < ev2.event_sequential_id)'
ledger_effective_time NUMBER NOT NULL,
node_index INTEGER NOT NULL,
event_offset VARCHAR2(4000) NOT NULL,
-- * transaction metadata
transaction_id VARCHAR2(4000) NOT NULL,
workflow_id VARCHAR2(4000),
command_id VARCHAR2(4000),
application_id VARCHAR2(4000),
submitters CLOB CONSTRAINT ensure_json_pence_submitters CHECK (submitters IS JSON),
-- * event metadata
event_id VARCHAR2(4000) NOT NULL, -- string representation of (transaction_id, node_index)
-- * shared event information
contract_id VARCHAR2(4000) NOT NULL,
template_id VARCHAR2(4000) NOT NULL,
flat_event_witnesses CLOB DEFAULT '{}' NOT NULL CONSTRAINT ensure_json_pence_flat_event_witnesses CHECK (flat_event_witnesses IS JSON), -- stakeholders of create events and consuming exercise events
tree_event_witnesses CLOB DEFAULT '{}' NOT NULL CONSTRAINT ensure_json_pence_tree_event_witnesses CHECK (tree_event_witnesses IS JSON), -- informees for create, exercise, and divulgence events
-- * information about the corresponding create event
create_key_value BLOB, -- used for the mutable state cache
-- * exercise events (consuming and non_consuming)
exercise_choice VARCHAR2(4000) NOT NULL,
exercise_argument BLOB NOT NULL,
exercise_result BLOB,
exercise_actors CLOB NOT NULL CONSTRAINT ensure_json_exercise_actors CHECK (exercise_actors IS JSON),
exercise_child_event_ids CLOB NOT NULL CONSTRAINT ensure_json_exercise_child_event_ids CHECK (exercise_child_event_ids IS JSON),
-- * compression flags
create_key_value_compression SMALLINT,
exercise_argument_compression SMALLINT,
exercise_result_compression SMALLINT
);
-- offset index: used to translate to sequential_id
CREATE INDEX participant_events_non_consuming_exercise_event_offset ON participant_events_non_consuming_exercise(event_offset);
-- sequential_id index for paging
CREATE INDEX participant_events_non_consuming_exercise_event_sequential_id ON participant_events_non_consuming_exercise(event_sequential_id);
-- lookup by event-id
CREATE INDEX participant_events_non_consuming_exercise_event_id_idx ON participant_events_non_consuming_exercise(event_id);
-- lookup by transaction id
CREATE INDEX participant_events_non_consuming_exercise_transaction_id_idx ON participant_events_non_consuming_exercise(transaction_id);
-- filtering by template
CREATE INDEX participant_events_non_consuming_exercise_template_id_idx ON participant_events_non_consuming_exercise(template_id);
-- filtering by witnesses (visibility) for some queries used in the implementation of
-- GetActiveContracts (flat), GetTransactions (flat) and GetTransactionTrees.
-- There is no equivalent of a GIN index in Oracle, but we explicitly mark the column as JSON for indexing
CREATE SEARCH INDEX participant_events_non_consuming_exercise_flat_event_witness_idx ON participant_events_non_consuming_exercise (flat_event_witnesses) FOR JSON;
CREATE SEARCH INDEX participant_events_non_consuming_exercise_tree_event_witness_idx ON participant_events_non_consuming_exercise (tree_event_witnesses) FOR JSON;
CREATE VIEW participant_events AS
SELECT cast(0 as SMALLINT) AS event_kind,
participant_events_divulgence.event_sequential_id,
cast(NULL as VARCHAR2(4000)) AS event_offset,
cast(NULL as VARCHAR2(4000)) AS transaction_id,
cast(NULL as NUMBER) AS ledger_effective_time,
participant_events_divulgence.command_id,
participant_events_divulgence.workflow_id,
participant_events_divulgence.application_id,
participant_events_divulgence.submitters,
cast(NULL as INTEGER) as node_index,
cast(NULL as VARCHAR2(4000)) as event_id,
participant_events_divulgence.contract_id,
participant_events_divulgence.template_id,
to_clob('[]') AS flat_event_witnesses,
participant_events_divulgence.tree_event_witnesses,
participant_events_divulgence.create_argument,
to_clob('[]') AS create_signatories,
to_clob('[]') AS create_observers,
cast(NULL as VARCHAR2(4000)) AS create_agreement_text,
NULL AS create_key_value,
cast(NULL as VARCHAR2(4000)) AS create_key_hash,
cast(NULL as VARCHAR2(4000)) AS exercise_choice,
NULL AS exercise_argument,
NULL AS exercise_result,
to_clob('[]') AS exercise_actors,
to_clob('[]') AS exercise_child_event_ids,
participant_events_divulgence.create_argument_compression,
cast(NULL as SMALLINT) AS create_key_value_compression,
cast(NULL as SMALLINT) AS exercise_argument_compression,
cast(NULL as SMALLINT) AS exercise_result_compression
FROM participant_events_divulgence
UNION ALL
SELECT (10) AS event_kind,
participant_events_create.event_sequential_id,
participant_events_create.event_offset,
participant_events_create.transaction_id,
participant_events_create.ledger_effective_time,
participant_events_create.command_id,
participant_events_create.workflow_id,
participant_events_create.application_id,
participant_events_create.submitters,
participant_events_create.node_index,
participant_events_create.event_id,
participant_events_create.contract_id,
participant_events_create.template_id,
participant_events_create.flat_event_witnesses,
participant_events_create.tree_event_witnesses,
participant_events_create.create_argument,
participant_events_create.create_signatories,
participant_events_create.create_observers,
participant_events_create.create_agreement_text,
participant_events_create.create_key_value,
participant_events_create.create_key_hash,
cast(NULL as VARCHAR2(4000)) AS exercise_choice,
NULL AS exercise_argument,
NULL AS exercise_result,
to_clob('[]') AS exercise_actors,
to_clob('[]') AS exercise_child_event_ids,
participant_events_create.create_argument_compression,
participant_events_create.create_key_value_compression,
cast(NULL as SMALLINT) AS exercise_argument_compression,
cast(NULL as SMALLINT) AS exercise_result_compression
FROM participant_events_create
UNION ALL
SELECT (20) AS event_kind,
participant_events_consuming_exercise.event_sequential_id,
participant_events_consuming_exercise.event_offset,
participant_events_consuming_exercise.transaction_id,
participant_events_consuming_exercise.ledger_effective_time,
participant_events_consuming_exercise.command_id,
participant_events_consuming_exercise.workflow_id,
participant_events_consuming_exercise.application_id,
participant_events_consuming_exercise.submitters,
participant_events_consuming_exercise.node_index,
participant_events_consuming_exercise.event_id,
participant_events_consuming_exercise.contract_id,
participant_events_consuming_exercise.template_id,
participant_events_consuming_exercise.flat_event_witnesses,
participant_events_consuming_exercise.tree_event_witnesses,
NULL AS create_argument,
to_clob('[]') AS create_signatories,
to_clob('[]') AS create_observers,
NULL AS create_agreement_text,
participant_events_consuming_exercise.create_key_value,
NULL AS create_key_hash,
participant_events_consuming_exercise.exercise_choice,
participant_events_consuming_exercise.exercise_argument,
participant_events_consuming_exercise.exercise_result,
participant_events_consuming_exercise.exercise_actors,
participant_events_consuming_exercise.exercise_child_event_ids,
NULL AS create_argument_compression,
participant_events_consuming_exercise.create_key_value_compression,
participant_events_consuming_exercise.exercise_argument_compression,
participant_events_consuming_exercise.exercise_result_compression
FROM participant_events_consuming_exercise
UNION ALL
SELECT (25) AS event_kind,
participant_events_non_consuming_exercise.event_sequential_id,
participant_events_non_consuming_exercise.event_offset,
participant_events_non_consuming_exercise.transaction_id,
participant_events_non_consuming_exercise.ledger_effective_time,
participant_events_non_consuming_exercise.command_id,
participant_events_non_consuming_exercise.workflow_id,
participant_events_non_consuming_exercise.application_id,
participant_events_non_consuming_exercise.submitters,
participant_events_non_consuming_exercise.node_index,
participant_events_non_consuming_exercise.event_id,
participant_events_non_consuming_exercise.contract_id,
participant_events_non_consuming_exercise.template_id,
participant_events_non_consuming_exercise.flat_event_witnesses,
participant_events_non_consuming_exercise.tree_event_witnesses,
NULL AS create_argument,
to_clob('[]') AS create_signatories,
to_clob('[]') AS create_observers,
NULL AS create_agreement_text,
participant_events_non_consuming_exercise.create_key_value,
NULL AS create_key_hash,
participant_events_non_consuming_exercise.exercise_choice,
participant_events_non_consuming_exercise.exercise_argument,
participant_events_non_consuming_exercise.exercise_result,
participant_events_non_consuming_exercise.exercise_actors,
participant_events_non_consuming_exercise.exercise_child_event_ids,
NULL AS create_argument_compression,
participant_events_non_consuming_exercise.create_key_value_compression,
participant_events_non_consuming_exercise.exercise_argument_compression,
participant_events_non_consuming_exercise.exercise_result_compression
FROM participant_events_non_consuming_exercise;
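-- Illustrative only (not part of the original migration): the unified view lets consumers read all
-- events of a transaction in ledger order regardless of event kind; the transaction id literal is a
-- placeholder.
SELECT event_kind, event_sequential_id, event_id, contract_id, exercise_choice
FROM participant_events
WHERE transaction_id = 'txid-placeholder'
ORDER BY event_sequential_id;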
---------------------------------------------------------------------------------------------------
-- Parameters table
---------------------------------------------------------------------------------------------------
-- new field: the event_sequential_id up to which all events have been ingested
CREATE TABLE parameters
-- this table is meant to have a single row storing all the parameters we have
(
-- the generated or configured id identifying the ledger
ledger_id NVARCHAR2(1000) not null,
-- stores the head offset, meant to change with every new ledger entry
ledger_end VARCHAR2(4000),
participant_id NVARCHAR2(1000) not null,
participant_pruned_up_to_inclusive VARCHAR2(4000),
participant_all_divulged_contracts_pruned_up_to_inclusive VARCHAR2(4000),
ledger_end_sequential_id NUMBER
);


@ -1 +0,0 @@
7a92a22e7e7c0bae3057bf7f5c9019a7a7ebdab1b2441fa7deb9e6e2ca0a5b0f


@ -1,123 +0,0 @@
------------------------------------ ETQ Data migration -------------------------------
-- Removes all elements from a that are present in b, essentially computes a - b.
CREATE OR REPLACE FUNCTION etq_array_diff(
clobA IN CLOB,
clobB IN CLOB
)
RETURN CLOB
IS
aDiffB CLOB;
BEGIN
SELECT coalesce(JSON_ARRAYAGG(elemA), '[]') foo
INTO aDiffB
FROM
(
SELECT elemA FROM json_table(clobA, '$[*]' columns (elemA NUMBER PATH '$'))
) arrayA
LEFT JOIN
(
SELECT elemB FROM json_table(clobB, '$[*]' columns (elemB NUMBER PATH '$'))
) arrayB
ON elemA = elemB
WHERE elemB IS NULL;
RETURN aDiffB;
END;
/
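-- Illustrative only (not part of the original migration): etq_array_diff removes from the first
-- JSON array every element that occurs in the second, so the call below yields a JSON array
-- containing 1 and 3 (element order is not guaranteed).
SELECT etq_array_diff('[1, 2, 3]', '[2, 4]') FROM dual;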
-- Populate pe_create_id_filter_non_stakeholder_informee
INSERT INTO pe_create_id_filter_non_stakeholder_informee(event_sequential_id, party_id)
WITH
input1 AS
(
SELECT
event_sequential_id AS i,
etq_array_diff(tree_event_witnesses, flat_event_witnesses) AS ps
FROM participant_events_create
)
SELECT i, p
FROM input1, json_table(ps, '$[*]' columns (p NUMBER PATH '$'));
-- Populate pe_consuming_id_filter_stakeholder
INSERT INTO pe_consuming_id_filter_stakeholder(event_sequential_id, template_id, party_id)
WITH
input1 AS
(
SELECT
event_sequential_id AS i,
template_id AS t,
flat_event_witnesses AS ps
FROM participant_events_consuming_exercise
)
SELECT i, t, p
FROM input1, json_table(ps, '$[*]' columns (p NUMBER PATH '$'));
-- Populate pe_consuming_id_filter_non_stakeholder_informee
INSERT INTO pe_consuming_id_filter_non_stakeholder_informee(event_sequential_id, party_id)
WITH
input1 AS
(
SELECT
event_sequential_id AS i,
etq_array_diff(tree_event_witnesses, flat_event_witnesses) AS ps
FROM participant_events_consuming_exercise
)
SELECT i, p
FROM input1, json_table(ps, '$[*]' columns (p NUMBER PATH '$'));
-- Populate pe_non_consuming_id_filter_informee
INSERT INTO pe_non_consuming_id_filter_informee(event_sequential_id, party_id)
WITH
input1 AS
(
SELECT
event_sequential_id AS i,
etq_array_diff(tree_event_witnesses, flat_event_witnesses) AS ps
FROM participant_events_non_consuming_exercise
)
SELECT i, p
FROM input1, json_table(ps, '$[*]' columns (p NUMBER PATH '$'));
-- Populate participant_transaction_meta
INSERT INTO participant_transaction_meta(transaction_id, event_offset, event_sequential_id_first, event_sequential_id_last)
WITH
input1 AS (
SELECT
transaction_id AS t,
event_offset AS o,
event_sequential_id AS i
FROM participant_events_create
UNION ALL
SELECT
transaction_id AS t,
event_offset AS o,
event_sequential_id AS i
FROM participant_events_consuming_exercise
UNION ALL
SELECT
transaction_id AS t,
event_offset AS o,
event_sequential_id AS i
FROM participant_events_non_consuming_exercise
UNION ALL
SELECT
c.transaction_id AS t,
c.event_offset AS o,
d.event_sequential_id AS i
FROM participant_events_divulgence d
JOIN participant_events_create c ON d.contract_id = c.contract_id
),
input2 AS (
SELECT
t,
o,
min(i) as first_i,
max(i) as last_i
FROM input1
GROUP BY t, o
)
SELECT t, o, first_i, last_i FROM input2, parameters WHERE
parameters.participant_pruned_up_to_inclusive is null
or o > parameters.participant_pruned_up_to_inclusive;
DROP FUNCTION etq_array_diff;


@ -1 +0,0 @@
68900c9d2b68e7b27c7717fdf4c23d314d358fce7f4b7d8906489f1533a6a5ed


@ -1,3 +0,0 @@
DROP INDEX participant_events_create_transaction_id_idx;
DROP INDEX participant_events_consuming_exercise_transaction_id_idx;
DROP INDEX participant_events_non_consuming_exercise_transaction_id_idx;


@ -1 +0,0 @@
79b7a0170ffa5b403e8f2d2726b227a7780ac8e3082c080a05c1efc8d7c58e4e


@ -1 +0,0 @@
ALTER TABLE participant_identity_provider_config ADD audience VARCHAR2(4000) DEFAULT NULL;


@ -1 +0,0 @@
643daa2f3a0a8f8f58289bf83ad8c05d5e50734c702f508e1b7abcaafd7fb1b3


@ -1,7 +0,0 @@
DROP INDEX participant_events_create_tree_event_witnesses_idx;
DROP INDEX participant_events_create_flat_event_witnesses_idx;
DROP INDEX participant_events_consuming_exercise_flat_event_witnesses_idx;
DROP INDEX participant_events_consuming_exercise_tree_event_witnesses_idx;
DROP INDEX participant_events_non_consuming_exercise_flat_event_witness_idx;
DROP INDEX participant_events_non_consuming_exercise_tree_event_witness_idx;
DROP INDEX participant_events_divulgence_tree_event_witnesses_idx;


@ -1 +0,0 @@
f87eb3e6709109d9fa34a9757cf53e3f8b84584a079442eea6e8da4f4224dc2e
