remove mentions of da-int servers (#2485)

This commit is contained in:
Gary Verhaegen 2019-08-12 10:42:41 +01:00 committed by GitHub
parent bbfa0a1318
commit bf5995f529
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
13 changed files with 12 additions and 549 deletions


@ -62,31 +62,3 @@ to be set by `dade-common`).
contains libraries for dev-env usage; assume it is immutable.
* `DADE_NIXPKGS` - points to a GC root which is a symlink to a Nixpkgs snapshot
used by dev-env, used only in `dade-dump-profile`.
## Versioning
We use Nix store paths as versions of dev-env. Briefly, this means
that non-`da` repositories will have a `dev-env/dev-env.version` file
with the Nix store path of the desired dev-env version. Please see the
[design doc of DEL-1294][design-doc] for details.
## Changelog
Notable changes to the dev-env should be added to `UNRELEASED.md`.
From there, they can be moved to `CHANGELOG.md` with the
`update-changelog` script from ledger. Run the script as:
$ ../ledger/scripts/update-changelog UNRELEASED.md CHANGELOG.md $(cat VERSION)
This will move the entries from `UNRELEASED.md` to `CHANGELOG.md`.
Pull request references in the form of `[pr:1234]` and Jira references
in the form of `[jira:ABC-1234]` are automatically turned into links
during this process.
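The reference-to-link conversion can be pictured with a small `sed` sketch. This is a hypothetical illustration only: the URL bases below are assumptions, and the real logic lives in `ledger/scripts/update-changelog`.

```shell
# Hypothetical sketch of the [pr:...] / [jira:...] linking step; the GitHub
# repo URL and the Jira base URL are assumptions, not taken from the script.
convert_refs() {
  sed -E \
    -e 's|\[pr:([0-9]+)\]|[#\1](https://github.com/DACH-NY/da/pull/\1)|g' \
    -e 's|\[jira:([A-Z]+-[0-9]+)\]|[\1](https://digitalasset.atlassian.net/browse/\1)|g'
}

echo 'see [pr:2485] and [jira:DEL-2933]' | convert_refs
```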
## Releases
Run `dade-freeze` to generate a release of dev-env, suitable for consumption in
other repositories. **Please note** that the release is not published by this command.
Hydra will build a new dev-env version from master as soon as it's pushed, at:
http://hydra.da-int.net/job/da/master/cached.x86_64-darwin.dev-env


@ -1,60 +1,6 @@
# dade-nix-install
# dade-preload
This tool installs the DA-built Nix distribution for macOS or Linux. It downloads
the release from:
- http://hydra.da-int.net/jobset/nix/release-3
Runtime prerequisites:
- `uname`, `mktemp`, `tar`, `curl`, `awk` to download and perform installation
- `sudo` needed if and only if `/nix/store` is not present or is not owned by
the current user.
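The sudo condition above can be expressed as a small check. This is a sketch; the ownership probe via `ls`/`awk` mirrors what the preload tooling does elsewhere in this document, and the function name is ours:

```shell
# Sketch: decide whether sudo would be needed, per the condition above.
needs_sudo() {
  if [ ! -e /nix/store ]; then
    return 0                                  # no store yet: sudo required
  fi
  local owner
  owner=$(ls -ld /nix/store | awk '{print $3}')
  [ "$owner" != "$(id -un)" ]                 # owned by someone else: sudo required
}

if needs_sudo; then echo "sudo required"; else echo "no sudo needed"; fi
```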
Notes:
- the script checks for the High Sierra-induced failure of Nix and reinstalls
Nix automatically in that case.
# dade-raw-preload
This tool is meant to be run from a root account to precache all relevant
dev-env-provided tools on a given machine. It finds the right user to
impersonate and downloads the most recently built tools from `hydra.da-int.net`.
Runtime prerequisites:
- Nix installed in a single-user mode (e.g. via `dade-nix-install`)
- `uname` and `awk` to detect OS and architecture
- `sudo` to perform the prefetch from under the owner of the `/nix/store`
It is used to preload dev-env caches on developers' workstations. A developer can
disable the automatic precaching by creating a marker file:
touch $HOME/.dade-raw-preload.skip
Note: downloaded tools are not added to Nix garbage collection roots, hence they
will be deleted by the next `nix-collect-garbage` invocation.
Non-code dependencies:
- Hydra jobsets
- http://hydra.da-int.net/jobset/da/master-dade-linux
- http://hydra.da-int.net/jobset/da/master-dade-darwin
- http://hydra.da-int.net/jobset/da/dev-env-next-dade-linux
- http://hydra.da-int.net/jobset/da/dev-env-next-dade-darwin
Used by:
- Casper policy to deploy jobs (owned by Edward Newman)
-- based on https://github.com/DACH-NY/da/blob/master/dev-env/bin/download-dade-service-script.sh
Tested by:
- https://github.com/DACH-NY/da/pipeline/jenkins/src/jobs/pipeline/dev-env/dadeRawPreload.Jenkinsjob
- http://ci.da-int.net/job/pipeline/job/dev-env/job/dade-raw-preload/
Implementation sketch:
- finds out the Nix store owner;
- checks for the skip file;
- finds the user's Nix profile and sources it;
- creates a temporary nix.conf to ensure `hydra.da-int.net` is used;
- sets up a temporary nix-shell with required tools (e.g. jq);
- fetches all store paths from all last evaluations of all jobsets;
- downloads them (aka "realizes" with `nix-store -r`).
This tool forces Nix to realize (download or, failing that, build) every derivation in the dev-env.
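The fetch-and-realize step of the sketch can be illustrated as follows. The flat-JSON-array shape of the `store-paths` payload is inferred from the `jq` filters used elsewhere in this document; here the payload is simulated locally (and extracted with plain `tr` instead of `jq`) so the step is visible without network access:

```shell
# Simulated store-paths payload: the Hydra latest-eval endpoint returns a
# JSON array of store paths.
store_paths='["/nix/store/aaa-dev-env","/nix/store/bbb-jq"]'

# Crude stand-in for `jq -r '.[]'`: strip brackets/quotes, one path per line.
echo "$store_paths" | tr -d '[]"' | tr ',' '\n'

# Each resulting path would then be realized (downloaded) with:
#   ... | xargs nix-store --timeout 360 -r
```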
# Wrapped tools


@ -12,9 +12,6 @@
# To ignore files under a certain directory and below:
# $ touch some/directory/NO_AUTO_COPYRIGHT
# $ git add some/directory/NO_AUTO_COPYRIGHT
#
# For more information see the engineering handbook:
# http://engineering.da-int.net/docs/engineering-handbook/licenses-copyrights.html
import subprocess
import re


@ -1,128 +0,0 @@
#!/usr/bin/env nix-shell
#! nix-shell -i bash -p coreutils nix-info getopt
set -Eeuo pipefail
DADE_CURRENT_SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
source "$DADE_CURRENT_SCRIPT_DIR/../lib/dade-common"
usage() {
cat <<EOF
Prints out information about DA development environment.
Usage: $0 [-h|--help] [-a|--all] [-n|--nix|--no-nix] [-d|--dev-env|--no-dev-env] [-t|--tools|--no-tools]
-h: print this message before exiting
-a: print all available information, enables all options below.
-n: print Nix configuration information, enabled by default.
--no-nix: disable printing Nix configuration information.
-d: print dev-env configuration information, enabled by default.
--no-dev-env: disable printing dev-env configuration information.
-t: print tool information, disabled by default.
--no-tools: disable printing tool information.
EOF
}
opt_nix=1
opt_devenv=1
opt_tools=0
PARAMS=$(getopt -n "$0" -o handt -l all,no-nix,no-dev-env,no-tools,help -- "$@")
if [ $? != 0 ]
then
usage
exit 1
fi
eval set -- "$PARAMS"
# Evaluate valid options
while [ $# -gt 0 ]
do
case "$1" in
-a | --all)
opt_nix=1;
opt_devenv=1;
opt_tools=1;
;;
-n)
opt_nix=1
;;
--no-nix)
opt_nix=0;
;;
-d)
opt_devenv=1
;;
--no-dev-env)
opt_devenv=0;
;;
-t)
opt_tools=1
;;
--no-tools)
opt_tools=0;
;;
-h|--help)
usage
exit 0
;;
esac
shift
done
removeLines() {
echo "$1" | tr '\n' ' '
}
toolInfo() {
local tool="$1"
local target="$DADE_GC_ROOTS/$tool"
local hashfile="${target}.hash"
local hash="missing"
test -e "$hashfile" && hash="$(cat "$hashfile")"
echo "- $tool"
if [[ ! -e "$target" ]]; then
echo " * notice: missing local gcroot $target"
fi
if [[ "$hash" != "$currentHash" ]]; then
echo " * notice: hash mismatch (run the tool to update it, or use dade-preload):"
echo " expected:" $(removeLines "$currentHash")
echo " got: " $(removeLines "$hash")
fi
}
currentHash="$(dadeBaseHash)"
echo "This will output debugging information about development environment on this machine."
echo
if [[ "$opt_devenv" == "1" ]]; then
DADE_PACKAGE=$(nix-build --no-out-link -A cached.dev-env "$DADE_REPO_ROOT/nix" 2>/dev/null)
# dev-env derivation is system-agnostic, so we use Linux version on all machines.
DADE_LATEST_PACKAGE_LINUX=$(
curl -s -L -H"Accept: application/json" \
http://hydra.da-int.net/jobset/da/master-dade-linux/latest-eval/store-paths | \
jq 'map(select(contains ("dev-env")))[0]' -r)
echo "Base dade hashes:" $(removeLines "$currentHash")
echo
echo "Repo root: $DADE_REPO_ROOT"
echo "DADE root dir: $DADE_BASE_ROOT"
echo "DADE dev-env dir: $DADE_DEVENV_DIR"
echo "DADE var dir: $DADE_VAR_DIR"
echo "DADE lib dir: $DADE_LIB_DIR"
echo "DADE gcroots: $DADE_GC_ROOTS"
echo "DADE version: $DADE_PACKAGE ($(cat "$DADE_PACKAGE/VERSION"))"
echo "DADE latest version: $DADE_LATEST_PACKAGE_LINUX"
echo
fi
if [[ "$opt_nix" == "1" ]]; then
echo "nix-info output:"
nix-info -m
fi
if [[ "$opt_tools" == "1" ]]; then
echo "Available tools:"
for tool in $(dadeListTools); do
toolInfo "$tool"
done
fi


@ -1,209 +0,0 @@
#!/usr/bin/env bash
# This script preloads system-specific packages into the Nix store (a cache). It
# downloads them from the `master-dade-{darwin,linux}` Hydra jobsets.
set -Eeuo pipefail
{ # Prevent execution if this script was only partially downloaded
version=1.0.2
# Extract the name of this script, without the extension, to use in output and related files
me=$(basename -- "${0%%.*}")
type 'uname' >/dev/null 2>&1 || which 'uname' >/dev/null 2>&1 || (
>&2 echo "[dev-env] ${me}:" "uname is not present";
exit 1
)
# Do not use syslog by default, unless it's running on MacOS. To see the content
# on macOS run:
# $ log show --predicate 'eventMessage contains "dev-env"' --last 30m
logger=0
if [[ "$(uname -s)" == "Darwin" ]]; then
logger=1
fi
# When include_next=0 (set via -m), only MASTER_STORE_PATHS_URL is used;
# this is the case when populating a Docker container for CI builds.
include_next=1
function usage
{
echo "Usage: ${me} [-s]"
echo " -s - output to standard error instead of system logger"
exit 0
}
function errcho() {
if (( logger )); then
logger -s -p local0.notice -t "${me}[$$]" -- "[dev-env] ${me}:" "$@"
else
>&2 echo "[dev-env] ${me}:" "$@";
fi
}
errcho "Outer environment (for debugging of DEL-2933):"
env | while IFS='' read -r line || [[ -n "$line" ]]; do
errcho "env:" "$line"
done
# Exits script with 1 with an error message ($@) sent to stderr.
function oops() {
stop 1 "$@"
}
# Exits script with specified code ($1) with an error message ($[1:]) sent to stderr.
function stop() {
code=$1
shift
errcho "$@"
exit "$code"
}
# Exit script with exit code 1 if a tool does not exist on $PATH.
function require_util() {
local util_path
util_path=$(which "$1" 2>/dev/null)
if [ $? -eq 0 ]
then
errcho "you have '$1' installed at '$util_path', which $2"
else
oops "you do not have '$1' installed, which $2"
fi
}
# Exit script with exit code 1 if a file or directory does not exist.
function require_exists() {
test -e "$1" >/dev/null 2>&1 ||
oops "you do not have '$1' present, which $2"
}
# Exit script gracefully if a file or directory does not exist.
function check_exists() {
test -e "$1" >/dev/null 2>&1 ||
stop 0 "you do not have '$1' present, which $2"
}
while getopts "sm" opt; do
case "$opt" in
s)
logger=0
;;
m)
include_next=0
;;
\?)
echo "Invalid option: -$OPTARG" >&2
usage
;;
esac
done
errcho "Starting: version ${version}"
check_exists "/nix/store" "is what we want to prewarm!"
case "$(uname -s)" in
Linux) SYSTEM=linux;;
Darwin) SYSTEM=darwin;;
*) oops "sorry, there is no automatic preload implemented for your platform";;
esac
errcho "Detecting the Nix store owner..."
NIX_OWNER=$(ls -ld /nix/store | awk '{print $3}')
NIX_OWNER_HOME=$(eval echo "~$NIX_OWNER")
SKIP_MARKER_FILE="${NIX_OWNER_HOME}/.dade-raw-preload.skip"
CURRENT_USER=$(id -un)
SUDO=
if [ "$NIX_OWNER" != "$CURRENT_USER" ]; then
errcho "Running as $CURRENT_USER"
if [ "$CURRENT_USER" != "root" ]; then
errcho "ERROR: Script can be run only by either root or owner of /nix/store (${NIX_OWNER}), instead current user is $CURRENT_USER."
exit 987;
fi
SUDO="sudo -E -u $NIX_OWNER LC_CTYPE= LANG=C" ;
fi
errcho "Checking available tools in the system..."
if [ -n "$SUDO" ]; then require_util sudo "needed to run the script from a user."; fi
require_util awk "needed to detect owner of the Nix store."
errcho "Checking for marker file ${SKIP_MARKER_FILE}..."
if [[ -f "${SKIP_MARKER_FILE}" ]]; then
errcho "Skipping preloading due to existence of marker file ${SKIP_MARKER_FILE}!"
exit 0
fi
errcho "Finding that user's Nix profile..."
( # Running in a subshell to avoid leaking exported variables.
errcho "Sourcing Nix profile..."
export HOME="${NIX_OWNER_HOME}"
export include_next
# shellcheck source=../lib/ensure-nix
source "$(dirname "${BASH_SOURCE[0]}")/../lib/ensure-nix"
errcho "Validating Nix profile..."
require_util nix-shell "needed to setup environment where the preloading logic runs."
NIX_SHELL=$(which nix-shell)
errcho "Setting up ephemeral Nix configuration..."
tmpDir="$(${SUDO} mktemp -d -t dade-raw-preload.XXXXXXXXXX || \
oops "Could not create temporary directory to set up an ephemeral nix.conf")"
errcho "Using $tmpDir as temporary directory for ephemeral nix.conf."
cleanup() {
errcho "Cleaning up temporary directory $tmpDir"
rm -rf "$tmpDir"
}
${SUDO} tee "$tmpDir/nix.conf" >/dev/null <<EOF
binary-caches = http://cache.da-int.net http://cache.nixos.org
binary-cache-public-keys = hydra.da-int.net-1:6Oy2+KYvI7xkAOg0gJisD7Nz/6m8CmyKMbWfSKUe03g= cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= hydra.nixos.org-1:CNHJZBh9K4tP3EKF6FkkgeVYsS3ohTl+oS0Qa8bezVs=
EOF
trap cleanup INT QUIT TERM
MASTER_STORE_PATHS_URL="http://hydra.da-int.net/jobset/da/master-dade-${SYSTEM}/latest-eval/store-paths"
NEXT_STORE_PATHS_URL="http://hydra.da-int.net/jobset/da/dev-env-next-dade-${SYSTEM}/latest-eval/store-paths"
errcho "Setting up ephemeral shell with required tools..."
# Set cache configuration before creating the shell so it is also used by
# the shell setup itself, i.e. hopefully we get bash, curl, jq, and nix
# from the cache and we don't build them ourselves.
export NIX_CONF_DIR=$tmpDir
# - `nix` attribute used below refers to nix 2.0 as added in nixpkgs 18.03.
# - NIX_PATH is pinned to a release to ensure consistency.
# - ${SUDO:-env} is there to ensure that if we don't run a sudo, we preface
# the command with `env`, otherwise the invocation fails.
${SUDO:-env} NIX_PATH="nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/nixos-18.03.tar.gz" "$NIX_SHELL" \
-p curl jq nix --timeout 120 -vvv \
--run bash <<EOF
set -Eeuo pipefail
errcho() { >&2 echo "[dev-env] \$@"; }
errcho "Inner environment (for debugging of DEL-2933):"
env >&2
errcho "Precaching dev-env tools from master branch..."
curl -s -L -H"Accept: application/json" "$MASTER_STORE_PATHS_URL" | \
jq -r ".|arrays[]" | xargs nix-store --timeout 360 -r
if (( include_next )); then
errcho "Precaching dev-env tools from dev-env-next branch..."
curl -s -L -H"Accept: application/json" "$NEXT_STORE_PATHS_URL" | \
jq -r ".|arrays[]" | xargs nix-store --timeout 360 -r
fi
EOF
# do cleanup at the end
cleanup
)
errcho "Preloading completed!"
}


@ -1,5 +1,8 @@
build-max-jobs = 2
binary-caches = https://nix-cache.da-ext.net https://cache.nixos.org
# Note: the "hydra.da-int.net" string is now part of the name of the key for
# legacy reasons; it bears no relation to the DNS hostname of the current
# cache.
binary-cache-public-keys = hydra.da-int.net-1:6Oy2+KYvI7xkAOg0gJisD7Nz/6m8CmyKMbWfSKUe03g= cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= hydra.nixos.org-1:CNHJZBh9K4tP3EKF6FkkgeVYsS3ohTl+oS0Qa8bezVs=
# Keep build-time dependencies of non-garbage outputs around


@ -10,10 +10,9 @@ To add new tool:
- create a new manifest file under the `/dev-env/windows/manifests/` folder, ensuring:
- it follows the naming convention of `<name>-<version>.json`
- it sources binaries from a URL which points to a specific version of the tool and is unlikely to change
(preferably source it from the engineering host as described below)
- add a `<name>-<version>` entry to the `/.dadew` file, extending the `tools` element list
## Adding new version of the existing tool
The process of adding a new version of an existing tool is very similar to adding a new one, but:
- you should not modify existing manifest files, to preserve backward compatibility of win-dev-env,
@ -32,27 +31,7 @@ Default set of Scoop App manifests (also called a default bucket) can be found [
Other buckets are listed in Scoop's `buckets.json` [file][scoop-all-buckets].
## Source of binaries
All binaries referenced in manifest files should be hosted internally to reduce the likelihood of an upstream change.
In general, binaries are provided from: https://engineering.da-int.net/nix-vendored/<tool_name>/<file_name>
which is kept in-sync with the upstream Artifactory `nix-vendored` repository:
https://digitalasset.jfrog.io/digitalasset/webapp/#/artifacts/browse/tree/General/nix-vendored
To upload a binary to the repo, follow the link above and click the 'Deploy' button on the page.
Ensure that:
- the Target Repository is `nix-vendored`,
- the Target Path attribute is in the form `/<tool_name>/<file_name>`, e.g. `/bazel/bazel-0.20.0-windows-x86_64.zip`,
- the `<file_name>` contains the tool version (and an architecture label if required), like the one above
To read more on how to deploy artifacts to the Artifactory see [JFrog documentation][jfrog-deploying].
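For illustration, the Target Path convention can be built mechanically. This is a sketch; the `curl` PUT command printed at the end is an assumption about Artifactory's REST deploy API (the UI 'Deploy' button is the documented route here), and the credential variable names are placeholders:

```shell
# Build the Target Path per the convention above.
tool_name=bazel
file_name=bazel-0.20.0-windows-x86_64.zip
target_path="/$tool_name/$file_name"
echo "Target Path: $target_path"

# Assumed REST equivalent of the UI deploy (printed, not executed):
echo "curl -u \$USER:\$API_KEY -T $file_name https://digitalasset.jfrog.io/digitalasset/nix-vendored$target_path"
```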
[jfrog-deploying]: https://www.jfrog.com/confluence/display/RTF/Deploying+Artifacts
[scoop]: https://github.com/lukesampson/scoop
[scoop-manifests]: https://github.com/lukesampson/scoop/wiki/App-Manifests
[scoop-bucket]: https://github.com/lukesampson/scoop/tree/master/bucket
[scoop-all-buckets]: https://github.com/lukesampson/scoop/blob/master/buckets.json


@ -1,59 +0,0 @@
{
"description": "Oracle JDK 8",
"homepage": "http://www.oracle.com/technetwork/java/javase/overview/index.html",
"version": "8u111-b14",
"license": "http://www.oracle.com/technetwork/java/javase/terms/license",
"architecture": {
"64bit": {
"url": "https://engineering.da-int.net/nix-vendored/jdk-8u111-windows-x64.exe#/dl.7z",
"hash": "259caf1052673573096bb27b0ee10a8e97734461483856be1a81aa4130c3fe29"
},
"32bit": {
"url": "https://engineering.da-int.net/nix-vendored/jdk-8u111-windows-i586.exe#/dl.7z",
"hash": "2bc788869fe09b073067592021ab9f93d2407a078826138a49297badb6b0a06d"
}
},
"extract_to": "tmp",
"installer": {
"script": [
"If (Test-Path -Path \"$dir\\tmp\\.rsrc\") {",
" # Java Source (src.zip)",
" extract_7zip \"$dir\\tmp\\.rsrc\\1033\\JAVA_CAB9\\110\" \"$dir\"",
" # JDK (tools.zip)",
" extract_7zip \"$dir\\tmp\\.rsrc\\1033\\JAVA_CAB10\\111\" \"$dir\\tmp\"",
" # Copyright (COPYRIGHT)",
" extract_7zip \"$dir\\tmp\\.rsrc\\1033\\JAVA_CAB11\\112\" \"$dir\"",
"}",
"extract_7zip \"$dir\\tmp\\tools.zip\" \"$dir\"",
"# Convert .pack to .jar, and remove .pack",
"pushd \"$dir\"",
"ls \"$dir\" -recurse | ? name -match '^[^_].*?\\.(?i)pack$' | % {",
" $name = $_.fullname -replace '\\.(?i)pack$', ''",
" $pack = \"$name.pack\"",
" $jar = \"$name.jar\"",
" & \"bin\\unpack200.exe\" \"-r\" \"$pack\" \"$jar\"",
"}",
"popd",
"dl https://engineering.da-int.net/nix-vendored/jce_policy-8.zip \"$dir\\tmp\\jce_policy-8.zip\"",
"extract_7zip \"$dir\\tmp\\jce_policy-8.zip\" \"$dir\\tmp\\jce_policy-8\"",
"Copy-Item -Path \"$dir\\tmp\\jce_policy-8\\UnlimitedJCEPolicyJDK8\\*.jar\" \"$dir\\jre\\lib\\security\" -Force",
"rm -r \"$dir\\tmp\" | out-null"
]
},
"bin": [
"bin\\java.exe",
"bin\\javac.exe",
"bin\\jps.exe",
"bin\\jhat.exe",
"bin\\jstack.exe",
"bin\\jstat.exe",
"bin\\keytool.exe"
],
"env_set": {
"JAVA_HOME": "$dir"
}
}


@ -1,35 +0,0 @@
# Extractor
# Build
The code uses SBT as a build system
```
sbt compile
```
To run the tests
```
sbt test
```
# Release
1. Create a branch `extractor-release-<major>-<minor>-<point>` for the release. Check the file [ledger-tools/extractor/version.sbt](version.sbt). For example, let's say the current
version of the project is `0.9.3-SNAPSHOT`. You need to create a branch called `extractor-release-0-9-3`;
2. Move the `## Unreleased` section of [ledger-tools/extractor/UNRELEASED.md](UNRELEASED.md) to the new release
version entry you have created in [ledger-tools/extractor/CHANGELOG.md](CHANGELOG.md) and commit to that branch. The commit message is not important, as it
will be discarded;
3. Run `sbt release`. This will ask for the release version and the next version, create a
commit with the release version and the next version, and also take care of tagging;
4. Push your **branch and tag**:
```
git push origin release/extractor/0.9.3 # check the tag that has been created, and push it
git push -u origin extractor-release-0-9-3 # push your branch
```
5. Go to the release [Jenkins job](http://ci.da-int.net/job/ledger-tools/job/extractor-release/build?delay=0sec)
Enter the tag you published and run the job.
6. Create a Pull Request from your branch, have it reviewed
and merged. After it's done, you can delete the branch.
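The branch-naming convention in step 1 can be sketched mechanically. The `-SNAPSHOT` suffix handling below is an assumption based on the example version given above:

```shell
# Derive the release branch name from the current project version,
# e.g. 0.9.3-SNAPSHOT -> extractor-release-0-9-3.
version="0.9.3-SNAPSHOT"
branch="extractor-release-$(echo "${version%-SNAPSHOT}" | tr '.' '-')"
echo "$branch"
```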


@ -209,10 +209,6 @@ bazel run //compiler/damlc:daml-ghc-test -- --pattern=
```
5. When the tests pass, push your branch to origin and raise a PR.
## `ghc-lib` in CI
At this time we have a pipeline in Jenkins [here](https://ci2.da-int.net/job/daml/job/ghc-lib/). It runs on a cron schedule and can also be triggered on demand.
## How to rebase `ghc-lib` on upstream master
To keep `ghc-lib` consistent with changes to upstream GHC source code, it is necessary to rebase our branches on the upstream `master` from time to time. The procedure for doing this is as follows:


@ -104,6 +104,9 @@ echo "vsts ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers.d/nix_installation
su --command "sh <(curl https://nixos.org/nix/install) --daemon" --login vsts
rm /etc/sudoers.d/nix_installation
# Note: the "hydra.da-int.net" string is now part of the name of the key for
# legacy reasons; it bears no relation to the DNS hostname of the current
# cache.
cat <<NIX_CONF > /etc/nix/nix.conf
binary-cache-public-keys = hydra.da-int.net-1:6Oy2+KYvI7xkAOg0gJisD7Nz/6m8CmyKMbWfSKUe03g= cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= hydra.nixos.org-1:CNHJZBh9K4tP3EKF6FkkgeVYsS3ohTl+oS0Qa8bezVs=
binary-caches = https://nix-cache.da-ext.net https://cache.nixos.org


@ -39,13 +39,11 @@ as run from the main project root directory (adjust the location of the JAR acco
Sandbox uses models compiled into the DAR format.
DAR files are archives containing compiled DAML code. We highly recommend generating the DAR files using the new DAML packaging, as described in https://engineering.da-int.net/docs/da-all-docs/packages/daml-project/. This will ensure that you're generating the `.dar` files correctly. The linked page also gives a good overview of what DAR files are, along with other key concepts.
Note that the new Ledger API only supports DAML 1.0 or above codebases compiled to DAML-LF v1. Again, using the DAML packaging as suggested above will ensure that you are generating DAR files that the Sandbox can consume.
# Ledger API
The new Ledger API uses gRPC. You can find the full documentation of all the services involved rendered at http://ci.da-int.net/job/ledger-api/job/build/job/master/lastSuccessfulBuild/artifact/ledger-api/grpc-definitions/target/docs/index.html (save the file locally to get the styling to work). If you just want to create / exercise contracts, I suggest you start by looking at `command_service.proto`, which exposes a synchronous API to the DAML ledger.
The new Ledger API uses gRPC. If you just want to create / exercise contracts, I suggest you start by looking at [`command_service.proto`](/ledger-api/grpc-definitions/com/digitalasset/ledger/api/v1/command_service.proto), which exposes a synchronous API to the DAML ledger.
# Logging


@ -263,7 +263,7 @@ in rec {
inherit (pkgs.python37Packages)
pyyaml semver GitPython;
};
# Packages used in command-line tools, e.g. `dade-info`.
# Packages used in command-line tools
cli-tools = {
inherit (pkgs) coreutils nix-info getopt;
};