ledger

TODO: write something here

v1 gRPC API

The v1 gRPC API is described here.

Logging

Logging Configuration

Ledger Server uses Logback for logging configuration.

Log Files

By default our log configuration creates two log files:

  • a plaintext file logs/ledger.log
  • a JSON file logs/ledger.json.log (for Logstash-type log processors)

The directory the log files are written to can be adjusted by setting -Dlogging.location=some/other/path. The filename used can be adjusted by setting -Dlogging.file=my-process-logs.

By default no output is sent to stdout (beyond logs from the logging setup itself).

Standard streams logging

For development and testing it can be useful to send all logs to stdout and stderr rather than to files (for instance, to use the IntelliJ console or to get useful output from Docker containers).

We ship a logging configuration for this which can be enabled by using -Dlogback.configurationFile=classpath:logback-standard.xml -Dlogging.config=classpath:logback-standard.xml.

INFO level and below goes to stdout. WARN and above goes to stderr.

Note: always use both -Dlogback.configurationFile and -Dlogging.config. Logback is first initialized with the configuration file given by logback.configurationFile. When the Spring framework boots, it re-initializes Logback with the configuration specified in logging.config.

Log levels

Like most Java libraries and frameworks, the ledger server uses INFO as the default logging level. This level is for a minimal amount of important information (usually only startup and normal shutdown events). INFO level logging should not produce an increasing volume of log output during normal operation.

WARN level should be used for transitions between healthy and unhealthy states, or in other near-error scenarios.

DEBUG level should be turned on only when investigating issues in the system, and usually that means we want the trail loggers. Normal loggers at DEBUG level can also be useful at times (e.g. for DAML interpretation).
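
For orientation, here is a minimal sketch of how these conventions translate into SLF4J calls (Logback sits behind the SLF4J API). The logger, messages and values below are invented for the example and are not taken from the ledger codebase.

```scala
import org.slf4j.LoggerFactory

// Illustrative only: names and messages are made up for this sketch.
object LogLevelsExample extends App {
  private val logger = LoggerFactory.getLogger(getClass)

  // INFO: rare lifecycle events only, no per-request noise.
  logger.info("Ledger API server listening on port {}", Int.box(6865))

  // WARN: a transition towards an unhealthy state.
  logger.warn("Lost connection to the index database, retrying in {} ms", Int.box(1000))

  // DEBUG: a detailed trail of what the system is doing, for investigations only.
  logger.debug("Interpreting commands for submission {}", "example-submission-id")
}
```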

gRPC and back-pressure

RPC

Standard RPC requests should return with the RESOURCE_EXHAUSTED status code to signal back-pressure. Envoy can be configured to retry on these errors. We have to be careful not to persist any changes when returning such an error, as the same request may be retried on another service instance.
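
A minimal sketch of what this can look like with grpc-java. The service, the plain String request/response types and the overload check below are hypothetical, standing in for a real generated service definition.

```scala
import java.util.concurrent.atomic.AtomicInteger

import io.grpc.Status
import io.grpc.stub.StreamObserver

// Hypothetical unary endpoint: Strings stand in for generated request/response
// messages, and the counter for a real overload check.
class BackPressuringService(maxPending: Int) {
  private val pending = new AtomicInteger(0)

  def submit(request: String, responseObserver: StreamObserver[String]): Unit =
    if (pending.incrementAndGet() > maxPending) {
      pending.decrementAndGet()
      // Nothing has been persisted at this point, so the client (or a proxy
      // such as Envoy) can safely retry the same request on another instance.
      responseObserver.onError(
        Status.RESOURCE_EXHAUSTED
          .withDescription("Too many submissions in flight, please retry later")
          .asRuntimeException())
    } else {
      try {
        responseObserver.onNext(s"accepted: $request")
        responseObserver.onCompleted()
      } finally pending.decrementAndGet()
    }
}
```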

Streaming

gRPC's streaming protocol has built-in flow control, but it is not fully active by default. What it does is control the flow between the TCP/HTTP layer and the library, so it builds on top of TCP's own flow control. Inbound flow control is active by default, but the outbound side does not signal back-pressure out of the box.

AutoInboundFlowControl: the default behaviour for handling incoming items in a stream is to automatically signal demand after every onNext call. This is the correct thing to do if the handler logic is CPU-bound and does not depend on other reactive downstream services. It is active on all inbound streams by default. One can disable it and signal demand manually by calling request, in order to follow the demand of downstream services. Disabling this feature is possible by calling disableAutoInboundFlowControl on CallStreamObserver.
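
A sketch of manual inbound flow control for a client-streaming handler, following the standard grpc-java pattern. The String item type, the summary response and the processing step are placeholders, not taken from the ledger codebase.

```scala
import io.grpc.stub.{ServerCallStreamObserver, StreamObserver}

// Hypothetical client-streaming handler: the client sends many String items
// and the server answers once with a summary.
class ManualInboundFlowControl {

  def ingest(responseObserver: StreamObserver[String]): StreamObserver[String] = {
    val serverObserver = responseObserver.asInstanceOf[ServerCallStreamObserver[String]]

    // Switch off the "request one more item after every onNext" default...
    serverObserver.disableAutoInboundFlowControl()
    // ...and ask for the first item explicitly.
    serverObserver.request(1)

    new StreamObserver[String] {
      private var processed = 0

      override def onNext(item: String): Unit = {
        processed += 1 // stand-in for handing the item to a downstream service
        // Signal demand for the next item only once we are ready for it.
        serverObserver.request(1)
      }

      override def onError(t: Throwable): Unit =
        () // the call is already terminated at this point

      override def onCompleted(): Unit = {
        responseObserver.onNext(s"processed $processed items")
        responseObserver.onCompleted()
      }
    }
  }
}
```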

ServerCallStreamObserver: casting an outbound StreamObserver to ServerCallStreamObserver gives us access to isReady and setOnReadyHandler. With these we can check whether there is available capacity in the channel, i.e. whether it is safe to push into it, and use that to signal demand to our upstream flow. Note that gRPC buffers 32 KiB of data per channel, and isReady returns false only when this buffer gets full.
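
And a sketch of the outbound side for a server-streaming call, using isReady and setOnReadyHandler. The element source and the simple local locking are illustrative only.

```scala
import io.grpc.stub.{ServerCallStreamObserver, StreamObserver}

// Hypothetical server-streaming handler that pushes a lazily produced sequence
// of items without buffering past what the transport is ready to accept.
class ManualOutboundFlowControl {

  def streamItems(count: Int, responseObserver: StreamObserver[String]): Unit = {
    val serverObserver = responseObserver.asInstanceOf[ServerCallStreamObserver[String]]
    val items = Iterator.tabulate(count)(i => s"item-$i") // placeholder element source
    val lock = new Object
    var completed = false

    def drain(): Unit = lock.synchronized {
      if (!completed) {
        // isReady flips to false once the outbound buffer fills up: stop pushing
        // and wait for the next onReady callback instead of queueing in memory.
        while (serverObserver.isReady && items.hasNext) {
          serverObserver.onNext(items.next())
        }
        if (!items.hasNext) {
          completed = true
          serverObserver.onCompleted()
        }
      }
    }

    // gRPC runs this every time isReady transitions from false back to true.
    serverObserver.setOnReadyHandler(new Runnable {
      override def run(): Unit = drain()
    })
    // Kick off the first batch; subsequent batches are driven by the callback.
    drain()
  }
}
```

A production handler would additionally register a cancellation callback via setOnCancelHandler so it stops producing when the client goes away.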