# The unified Daml-LF interpreter and engine
This package contains the canonical in-memory LF ASTs of both the public interface and the whole package contents, decoders from on-wire LF into those ASTs, and an interpreter for LF. A separate package provides a standalone REPL that can load `.dalf` files and interpret pure functions, updates, and scenarios.

We provide both Bazel-based and sbt-based builds for this project. The sbt build exists solely for development purposes, to enable incremental compilation and IDE integration. You can simply import the sbt project for development, but if making changes, know that the Bazel build is the sole source of truth for CI and releases.
## Components

- `archive` contains the Protobuf definition of the LF format, and Protobuf utilities for reading it into a raw in-memory form. This should reflect the official LF specification at any given time. As with the LF specification, changes to the Protobuf definition are governed by the Daml-LF Governance process.

- `interface` is an ADT of the "public interface" of a given LF package, meaning its templates, their choices, and the serializable data types in the package. The ADT does not include `def`s or expressions. A reader from the raw protobuf is included. The ADT is usable from Java.

- `lfpackage` is the canonical LF ADT, containing all information about an LF package. Its main consumer is the `interpreter`, which compiles this faithful representation of the protobuf LF archive into a lower-level AST that is then interpreted. The current plan is for `lfpackage` to be able to load both old and new LF versions, so that the interpreter and other consumers can work with a common format. For most use cases `lfpackage` is too complex and `interface` is more convenient; the extra information is available if you need it, though without guarantees of stability.

- `transaction` holds ADTs related to the interpretation of LF, just as `lfpackage` represents the definitions in LF. The base of these is the Value ADT, representing serializable values (i.e. values of serializable LF type). Building on that is the Transaction ADT, representing ledger updates. Both have associated Protobuf definitions, also contained in this package, and are used in `interpreter` and `engine` respectively.

- `transaction-test-lib` supplies tools to generate instances of the value and transaction ADTs provided by the `transaction` library.

- `data` contains utility datatypes used in the engine, and functions designed around the specified LF semantics. For example, if you want LF-compatible decimal handling, the `Decimal` API is a good source of useful functions.

- `data-scalacheck` supplies Scalacheck `Arbitrary`s for the custom collections provided by the `data` library.

- `interpreter` is the "unified interpreter" used for both the sandbox and the production ledger. It is an efficient CEK machine that interprets `lfpackage` terms using a (non-serializable) internal value model, ultimately producing `transaction`s. Most downstream consumers will want to use `engine` in addition to this, because only the pure interpreter lives here.

- `engine` holds the ledger state on `interpreter`'s behalf and implements all of its public-facing aspects, such as the `Command` interface, events, and loaded packages.

- `scenario-interpreter` practically demonstrates why `interpreter` is separate from `engine`: it is a small set of library functions that use `interpreter` to evaluate scenarios from an LF package.

- `repl` is the REPL described below, manipulating an internal engine state and running scenarios at your command.

- `testing-tools` helps you run scenarios from Scalatest.
## Building and testing

Daml-LF uses Bazel to build and test its components. Please refer to the top-level `BAZEL.md` and `BAZEL-JVM.md` documents for high-level instructions on how to use Bazel and how it works within IntelliJ.
To get a list of build targets:

```sh
bazel query //daml-foundations/daml-lf/...
```

To build and test everything:

```sh
bazel build //daml-foundations/daml-lf/...
bazel test //daml-foundations/daml-lf/...
```

To watch a target and re-run tests when files change:

```sh
ibazel test //daml-foundations/daml-lf/...
```
All the above can of course take more fine-grained targets as arguments: `...` means all targets under the given directory, recursively, while `:all` means all targets in the given directory only.
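For example, using the `data` package (the paths here are illustrative; any package directory works the same way):

```sh
# ":all" stays within one package; "..." recurses into subdirectories.
bazel test //daml-lf/data:all    # every target defined in daml-lf/data itself
bazel test //daml-lf/data/...    # those targets plus all targets below daml-lf/data
```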
To load a package in the Scala REPL you will need to add a "@repl" target to its `BUILD.bazel`:

```starlark
load("@io_bazel_rules_scala//scala:scala.bzl", "scala_repl")

scala_repl(
    name = "interpreter@repl",
    deps = [
        ":interpreter",
    ],
)
```

This target can then be invoked with `bazel run`:

```sh
da$ bazel run //daml-lf/interpreter:interpreter@repl
```

or:

```sh
interpreter$ bazel run interpreter@repl
```
Since `rules_scala` does not currently support incremental compilation, you will need to help Bazel along a bit by keeping the dependency graph lean. Try to divide your code into separate `scala_library` targets, as build results are cached at this level. Preferably, unrelated modules should be separate `scala_library` targets, invisible to the outside; a visible `scala_library` target should then collect the unrelated modules into a single target that can be depended on from outside.
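That layering could look like the following `BUILD.bazel` sketch (all target and path names here are made up for illustration): two fine-grained private libraries, each cached independently by Bazel, collected behind one public target.

```starlark
load("@io_bazel_rules_scala//scala:scala.bzl", "scala_library")

# Unrelated modules as separate, private targets: a change to one
# does not invalidate the cached build results of the other.
scala_library(
    name = "foo-internal",  # hypothetical module name
    srcs = glob(["src/main/scala/foo/*.scala"]),
    visibility = ["//visibility:private"],
)

scala_library(
    name = "bar-internal",  # hypothetical module name
    srcs = glob(["src/main/scala/bar/*.scala"]),
    visibility = ["//visibility:private"],
)

# The single visible target that outside packages depend on.
scala_library(
    name = "mylib",
    srcs = glob(["src/main/scala/api/*.scala"]),
    visibility = ["//visibility:public"],
    deps = [
        ":bar-internal",
        ":foo-internal",
    ],
)
```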
## Benchmarking

Benchmarks for scenario execution can be run with

```sh
bazel run //daml-lf/scenario-interpreter:scenario-perf
```

A run of this benchmark takes between 6 and 7 minutes. A faster, less precise benchmark, which takes around 1 minute, can be invoked with

```sh
bazel run //daml-lf/scenario-interpreter:scenario-perf -- -f 0
```

To benchmark scenarios other than the ones configured by default, you can invoke

```sh
bazel run //daml-lf/scenario-interpreter:scenario-perf -- -p dar=/path/to/some/dar -p scenario=Some.Module:test
```

This can be combined with the `-f 0` flag as well. These benchmarks focus on Daml execution speed and try to avoid noise caused by, say, I/O as much as possible.
## Daml-LF-REPL Usage

The REPL can be compiled with `bazel build //:daml-lf-repl` and run with `bazel run //:daml-lf-repl -- repl`. The `//:` prefix is not needed when running from the repository root.

Example use:

```
$ bazel run //:daml-lf-repl -- repl
daml> :load project.dar
daml> Project.double 4
8
daml> :scenario Project.tests
...
```

See `:help` for more instructions.
The REPL application also provides the commands `test` and `testAll` for running scenarios in packages:

```sh
$ bazel run //:daml-lf-repl -- testAll $PWD/project.dalf
$ bazel run //:daml-lf-repl -- test Project.tests $PWD/project.dalf
```

NOTE: When running via `bazel run` one needs to specify the full path (or the path relative to the repository root), since Bazel runs all commands from the repository root.
## Profiling scenarios

Daml-LF-REPL provides a command to run a scenario and collect profiling information while running it. This information is then written into a file that can be viewed using the speedscope flamegraph visualizer. The easiest way to install speedscope is to run

```sh
$ npm install -g speedscope
```

See the "Offline usage" section of its documentation for alternatives.

Once speedscope is installed, the profiler can be invoked via

```sh
$ bazel run //:daml-lf-repl -- profile Module.Name:scenarioName /path/to.dar /path/to/output.json
```

and the profile viewed via

```sh
$ speedscope /path/to/output.json
```
## Scala house rules

- Do not use `Seq` in the interpreter's code paths, with the possible exception of accepting inputs in external APIs. Use `ImmArray`, `FrontStack`, and `BackStack` as appropriate. The reason for this rule is that `Seq` completely hides the performance of operations -- for example, it defines cons, snoc, and append for all structures, even when they require a full copy for an array. `ImmArray` should be used in cases where you do not need to append or prepend content often; it is however very cheap to slice an `ImmArray` (removing elements from either end). `FrontStack` should be used when building up a list of elements by prepending; both single elements and chunks in the form of `ImmArray`s can be prepended. A typical use case is traversing a tree in topological order by keeping a stack of children still to be visited. `BackStack` is like `FrontStack` but you append rather than prepend; for example, if you find yourself building and then reversing a list, use `BackStack` instead.

- Avoid mutable data structures in external APIs. This is not set in stone, but it is generally a code smell.

- Try to always define functions on user-provided data structures (of which we have a lot in this codebase) to be tail recursive. The typical way to do this is by defining little "interpreters" to perform your function. XXX put good example here once we have an established pattern. If in doubt, ask Francesco Mazzoli or Gyorgy Farkas about this.

- Disable "Optimize imports on the fly" and the "Optimize Imports" shortcut in IntelliJ IDEA, since they mess up diffs and can subtly, insidiously change the semantics of your code (Scala imports are order sensitive). You can disable "on the fly" under Preferences -> Editor -> General -> Auto Import -> Scala, and the shortcut key under Preferences -> Keymap -> search "Optimize Imports" -> double-click the result -> Remove.
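As a minimal sketch of the tail-recursive "little interpreter" pattern combined with the `FrontStack` use case described above (traversing a tree in topological order with an explicit stack of children still to be visited): the `Tree`, `Node`, and `TreeWalk` names below are hypothetical, and the standard-library `List` stands in for the internal `FrontStack`, which is not available outside this repository.

```scala
// An explicit work stack makes the traversal tail recursive, so it
// cannot blow the call stack on deep user-provided trees.
sealed trait Tree
final case class Node(label: Int, children: List[Tree]) extends Tree

object TreeWalk {
  // Preorder (topological) traversal; the stack holds subtrees not yet visited.
  def preorder(root: Tree): List[Int] = {
    @annotation.tailrec
    def go(stack: List[Tree], acc: List[Int]): List[Int] =
      stack match {
        case Nil => acc.reverse
        case Node(label, children) :: rest =>
          // Prepending the children preserves left-to-right visit order;
          // with FrontStack this would be a cheap chunk prepend.
          go(children ::: rest, label :: acc)
      }
    go(List(root), Nil)
  }
}
```

Accumulating in reverse and flipping once at the end is the usual trick to keep the helper tail recursive while still returning elements in visit order.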