# ledger

Home of our reference ledger implementation (Sandbox) and various ledger-related libraries.
## Logging

### Logging Configuration

The Sandbox and Ledger API Server use Logback for logging configuration.

### Log Files

The Sandbox logs at `INFO` level to standard out and to the file `sandbox.log` in the current working directory.
### Log levels

Like most Java libraries and frameworks, the Sandbox and Ledger API Server use `INFO` as the default logging level. This level is reserved for minimal, important information (usually only startup and normal shutdown events); `INFO`-level logging should not produce an increasing volume of log output during normal operation.

The `WARN` level should be used for transitions between healthy and unhealthy states, and for other near-error scenarios.

The `DEBUG` level should be turned on only when investigating issues in the system, which usually means we want the trail loggers. Normal loggers at `DEBUG` level can sometimes be useful as well (e.g. for Daml interpretation).
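As a hedged illustration, a Logback configuration along the following lines could raise a single logger to `DEBUG` while keeping the root logger at `INFO`. The logger name used here is hypothetical; substitute the logger you are actually investigating.

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Hypothetical logger name: enable DEBUG only for the component under investigation -->
  <logger name="com.daml.platform.store" level="DEBUG"/>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```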
## Metrics

The Sandbox and Ledger API Server provide a couple of useful metrics.

### Sandbox and Ledger API Server

The Ledger API Server exposes basic metrics for all gRPC services, as well as some additional ones.

| Metric Name | Description |
|---|---|
| `LedgerApi.com.daml.ledger.api.v1.$SERVICE.$METHOD` | A meter that tracks the number of calls to the respective service and method. |
| `CommandSubmission.failedCommandInterpretations` | A meter that tracks failed command interpretations. |
| `CommandSubmission.submittedTransactions` | A timer that tracks the commands submitted to the backing ledger. |
### Indexer

| Metric Name | Description |
|---|---|
| `JdbcIndexer.processedStateUpdates` | A timer that tracks the duration of state update processing. |
| `JdbcIndexer.lastReceivedRecordTime` | A gauge that returns the last received record time in milliseconds since the epoch. |
| `JdbcIndexer.lastReceivedOffset` | A gauge that returns the last received offset from the ledger. |
| `JdbcIndexer.currentRecordTimeLag` | A gauge that returns the difference between the Indexer's wall-clock time and the last received record time, in milliseconds. |
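To make the gauge semantics concrete, here is a minimal, self-contained sketch (not the actual Indexer code) of how a lag gauge like `currentRecordTimeLag` can be computed: the wall-clock time minus the last received record time, in milliseconds. The class and method names are hypothetical.

```java
import java.util.function.LongSupplier;

public class RecordTimeLagGauge {
    // Hypothetical sketch: lag = wall-clock time minus last received record time (ms).
    private volatile long lastReceivedRecordTimeMillis;
    private final LongSupplier clock;

    RecordTimeLagGauge(LongSupplier clock) {
        this.clock = clock;
    }

    void onRecordReceived(long recordTimeMillis) {
        lastReceivedRecordTimeMillis = recordTimeMillis;
    }

    long currentLagMillis() {
        return clock.getAsLong() - lastReceivedRecordTimeMillis;
    }

    public static void main(String[] args) {
        // Fixed clock for the demo, so the lag is deterministic.
        RecordTimeLagGauge gauge = new RecordTimeLagGauge(() -> 10_000L);
        gauge.onRecordReceived(9_250L);
        System.out.println(gauge.currentLagMillis()); // prints 750
    }
}
```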
### Metrics Reporting

The Sandbox automatically makes all metrics available via JMX under the JMX domain `com.daml.platform.sandbox`.

When building an Indexer or Ledger API Server, the implementer/ledger integrator is responsible for setting up a `MetricRegistry` and a suitable metric reporting strategy that fits their needs.
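As an illustration of the JMX side, the following self-contained sketch registers a toy standard MBean and reads an attribute back through the platform `MBeanServer` — the same mechanism through which JMX-exposed metrics can be inspected. The `demoCounter` bean and its value are invented for the demo; it is not an actual Sandbox metric.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxMetricsDemo {
    // Standard MBean pattern: interface named <Class>MBean.
    public interface CounterMBean {
        long getCount();
    }

    public static class Counter implements CounterMBean {
        public long getCount() {
            return 42L; // fixed value for the demo
        }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Hypothetical object name under the Sandbox's JMX domain.
        ObjectName name = new ObjectName("com.daml.platform.sandbox:name=demoCounter");
        server.registerMBean(new Counter(), name);
        // A JMX client (e.g. JConsole) would read the attribute the same way.
        System.out.println(server.getAttribute(name, "Count")); // prints 42
    }
}
```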
## Health Checks

### Ledger API Server health checks

The Ledger API Server exposes health checks over the gRPC Health Checking Protocol. You can check the health of the overall server by making a gRPC request to `grpc.health.v1.Health.Check`.

You can also perform a streaming health check by making a request to `grpc.health.v1.Health.Watch`. The server will immediately send the current health of the Ledger API Server, and then send a new message whenever the health changes.

The ledger may optionally expose health checks for underlying services and connections; the names of the services are ledger-dependent. For example, the Sandbox exposes two service health checks:

- the `"index"` service tests the health of the connection to the index database
- the `"write"` service tests the health of the connection to the ledger database

To use these, make a request with the `service` field set to the name of the service. An unknown service name will result in a gRPC `NOT_FOUND` error.
### Indexer health checks

The Indexer does not currently run a gRPC server, and so does not expose any health checks of its own.

When it is run in the same process as the Ledger API Server, the authors of the binary are encouraged to add specific health checks for the Indexer. This is the case in the Sandbox and Reference implementations.
### Checking the server health in production

We encourage you to use the grpc-health-probe tool to periodically check the health of your Ledger API Server in production. On the command line, you can run it as follows (changing the address to match your ledger):

```
$ grpc-health-probe -addr=localhost:6865
status: SERVING
```

An example of how to naively configure Kubernetes to run the Sandbox, with accompanying health checks, can be found in sandbox/kubernetes.yaml. More details can be found on the Kubernetes blog, in the post titled "Health checking gRPC servers on Kubernetes".
## gRPC and back-pressure

### RPC

Standard RPC requests should return with a `RESOURCE_EXHAUSTED` status code to signal back-pressure. Envoy can be configured to retry on these errors. We have to be careful not to make any persistent changes before returning such an error, because the same original request can be retried on another service instance.
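The safety property above — reject before mutating, so a retry elsewhere stays harmless — can be sketched as follows. This is an illustrative mock, not the server's actual admission logic; the class, the semaphore-based capacity check, and the string status codes are all invented for the demo.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Semaphore;

public class BackpressureHandler {
    // Hypothetical sketch: check capacity *before* any persistent change, so a
    // rejected request can safely be retried on another service instance.
    private final Semaphore capacity = new Semaphore(1);
    private final List<String> store = new ArrayList<>();

    String handle(String request) {
        if (!capacity.tryAcquire()) {
            // No state has been touched yet, mirroring a RESOURCE_EXHAUSTED reply.
            return "RESOURCE_EXHAUSTED";
        }
        try {
            store.add(request); // persistent change happens only after admission
            return "OK";
        } finally {
            capacity.release();
        }
    }

    public static void main(String[] args) {
        BackpressureHandler h = new BackpressureHandler();
        System.out.println(h.handle("tx-1"));   // prints OK
        h.capacity.drainPermits();              // simulate an overloaded server
        System.out.println(h.handle("tx-2"));   // prints RESOURCE_EXHAUSTED
        System.out.println(h.store.size());     // prints 1: the rejection left no state behind
    }
}
```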
### Streaming

gRPC's streaming protocol has built-in flow control, but it is not fully active by default. What it does is control the flow between the TCP/HTTP layer and the library, so it builds on top of TCP's own flow control. The inbound flow control is active by default, but the outbound side does not signal back-pressure out of the box.
- `AutoInboundFlowControl`: The default behaviour for handling incoming items in a stream is to automatically signal demand after every `onNext` call. This is the correct thing to do if the handler logic is CPU-bound and does not depend on other reactive downstream services. It is active by default on all inbound streams. One can disable it and signal demand by manually calling `request` to follow the demands of downstream services. Disabling this feature is possible by calling `disableAutoInboundFlowControl` on `CallStreamObserver`.
- `ServerCallStreamObserver`: Casting an outbound `StreamObserver` manually to `ServerCallStreamObserver` gives us access to `isReady` and `onReadyHandler`. With these methods we can check whether there is available capacity in the channel, i.e. whether we are safe to push into it. This can be used to signal demand to our upstream flow. Note that gRPC buffers 32 KiB of data per channel, and `isReady` will return false only when this buffer gets full.
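The `isReady`/`onReadyHandler` pattern described above can be sketched with a self-contained mock in place of grpc-java's real `ServerCallStreamObserver` (so it runs without a gRPC server). The `MockObserver` class, its high-water mark, and the `drain` method are stand-ins invented for the demo; only the write-while-ready loop and the resume-from-`onReadyHandler` shape mirror the real pattern.

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

public class ReadyAwareWriter {
    // Mock stand-in for ServerCallStreamObserver: a bounded buffer whose
    // isReady() turns false once full (gRPC itself buffers ~32 KiB per channel).
    static class MockObserver {
        final Queue<String> buffer = new ArrayDeque<>();
        final int highWaterMark;
        Runnable onReadyHandler = () -> {};

        MockObserver(int highWaterMark) {
            this.highWaterMark = highWaterMark;
        }

        boolean isReady() {
            return buffer.size() < highWaterMark;
        }

        void onNext(String msg) {
            buffer.add(msg);
        }

        void drain() {
            buffer.clear();
            onReadyHandler.run(); // transport flushed -> onReady fires
        }
    }

    public static void main(String[] args) {
        MockObserver observer = new MockObserver(3);
        Queue<String> pending = new ArrayDeque<>(List.of("a", "b", "c", "d", "e"));

        // The canonical pattern: write only while isReady(), resume from onReadyHandler.
        Runnable pump = () -> {
            while (observer.isReady() && !pending.isEmpty()) {
                observer.onNext(pending.poll());
            }
        };
        observer.onReadyHandler = pump;

        pump.run();
        System.out.println(observer.buffer.size()); // prints 3: stopped when the buffer filled
        observer.drain();                           // channel drained -> pump resumes
        System.out.println(observer.buffer.size()); // prints 2: the remaining messages
    }
}
```

The key design point is that the writer never blocks: it stops as soon as `isReady` is false and relies on the `onReadyHandler` callback to continue, propagating the channel's back-pressure upstream.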