* extracting large command test from TransactionServiceIT
* make `preprocessCommands` more performant (see comment)
* fix duplicated command generation in `preprocessCommands`
* enable in-memory test-fixture as well
* Add semantic test for the reference server
Currently the semantic test is failing, likely due to the location
annotation changes.
* Do not compare location annotations in isReplayedBy
The location annotations may not match, and do not need to: the
reconstructed update expression may not exactly match the original one,
and since the interpreter currently picks the closest location
annotation we cannot guarantee an exact match.
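The gist can be illustrated with a small Scala sketch (this is not the
actual `isReplayedBy` code; the expression type and the `ELocation`
wrapper below are hypothetical stand-ins): strip the location
annotations from both sides before comparing.

```scala
// Toy expression type; only ELocation matters for the point being made.
sealed trait Expr
final case class ELocation(location: String, expr: Expr) extends Expr
final case class EVar(name: String) extends Expr
final case class EApp(fun: Expr, arg: Expr) extends Expr

object CompareIgnoringLocations {
  // Remove all location annotations, recursively.
  def stripLocations(e: Expr): Expr = e match {
    case ELocation(_, inner) => stripLocations(inner)
    case EApp(f, a)          => EApp(stripLocations(f), stripLocations(a))
    case v: EVar             => v
  }

  // Two expressions are considered equivalent here iff they agree once
  // the location annotations are removed.
  def equalIgnoringLocations(a: Expr, b: Expr): Boolean =
    stripLocations(a) == stripLocations(b)
}
```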
* scalafmt.
* buildifier.
* client_server_test: Increase timeout to 60s
Spawning a JVM-based server can easily take a long time on a heavily
loaded system (e.g. when running `bazel test //ledger/...` with enough
parallelism), so it is better to have a high default timeout.
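As an illustration of why a generous deadline helps, a minimal Scala
sketch (not the actual test runner; the helper name, polling interval,
and 60-second default are assumptions) that waits for a freshly spawned
server to start accepting connections:

```scala
import java.net.{InetSocketAddress, Socket}
import scala.concurrent.duration._

object WaitForServer {
  // Poll the given host/port until it accepts a TCP connection or the
  // deadline expires; returns true if the server came up in time.
  def waitUntilListening(host: String, port: Int, timeout: FiniteDuration = 60.seconds): Boolean = {
    val deadline = timeout.fromNow
    var up = false
    while (!up && deadline.hasTimeLeft()) {
      val socket = new Socket()
      try {
        socket.connect(new InetSocketAddress(host, port), 1000)
        up = true
      } catch {
        case _: java.io.IOException => Thread.sleep(250) // not up yet, retry
      } finally socket.close()
    }
    up
  }
}
```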
* ledger/api-server-damlonx: Address code review
* Fix client_server_test runner compilation. Bump timeout.
* Mark the reference server semantic test exclusive
* fmt reference/BUILD.bazel
* Add Dar Traverse and Equal, replace manually written test cases with
scalaz law checks;
add a test case for multiple-DAR support;
enforce a non-empty list of input files.
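A minimal sketch of what such instances might look like, assuming a
simplified `Dar[A]` with a main entry and a list of dependencies (the
real type and its test wiring live in the daml-lf codebase; the names
here are illustrative):

```scala
import scalaz.{Applicative, Equal, Traverse}
import scalaz.std.list._
import scalaz.syntax.traverse._

final case class Dar[A](main: A, dependencies: List[A])

object Dar {
  // Traverse over the main entry and every dependency.
  implicit val darTraverse: Traverse[Dar] = new Traverse[Dar] {
    override def traverseImpl[G[_]: Applicative, A, B](fa: Dar[A])(f: A => G[B]): G[Dar[B]] =
      Applicative[G].apply2(f(fa.main), fa.dependencies.traverse(f))(Dar(_, _))
  }

  // Structural equality in terms of Equal[A].
  implicit def darEqual[A: Equal]: Equal[Dar[A]] = Equal.equal { (x, y) =>
    Equal[A].equal(x.main, y.main) &&
      Equal[List[A]].equal(x.dependencies, y.dependencies)
  }
}

// With scalaz-scalacheck-binding and a suitable Arbitrary[Dar[Int]] in
// scope, the hand-written tests can be replaced by law checks, e.g.:
//   scalaz.scalacheck.ScalazProperties.traverse.laws[Dar].check()
```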
* The DAML-LF Archive Protobuf definitions, as packaged and distributed
via Bintray and the SDK, currently use a directory structure that does
not match the actual one, causing `protoc` to raise errors when run on
the packaged definitions. This commit fixes that by packaging the files
in the same directory structure as the one found in the repository.
* fix various conversion functions from string to Decimal
Fixes #399.
This fixes a critical bug -- since:
* The DAML-LF spec specifies that the scale of `Decimal` is 10 --
that is, there are at most 10 digits past the decimal point:
<79bbf5c794/daml-lf/spec/value.rst (field-decimal)>.
* However, the code converting from the string representation that
we get on the wire was _not_ performing this check, for two
reasons:
- `Decimal.check` is a function that checks whether a given
`Decimal` is within the upper and lower bounds of what the
DAML-LF spec specifies, but crucially it does _not_ check that
the scale is not exceeded:
<79bbf5c794/daml-lf/data/src/main/scala/com/digitalasset/daml/lf/data/Decimal.scala (L31)>.
This behavior is correct in some cases (more on that later),
but not in others. Crucially, `Decimal.fromString`, which was
supposed to check if a decimal string literal is valid, used
this function, which means that it accepted string literals
containing numbers out of the required scale, rounding them to
make them fit within the scale. This function has been renamed
to `Decimal.checkWithinBoundsAndRound`, and a new function
`Decimal.checkWithinBoundsAndWithinScale` has been added, which
fails if the number provided has data not within the scale.
`Decimal.fromString` now uses
`Decimal.checkWithinBoundsAndWithinScale`.
- `ApiToLfEngine.parseDecimal` assumed that `Decimal.fromString` _did_
fail when passed numbers that were in any way invalid, and
moreover did _not_ use the result of `Decimal.fromString`, but rather
re-parsed the string into an unconstrained `BigDecimal`:
<79bbf5c794/ledger/ledger-api-common/src/main/scala/com/digitalasset/platform/participant/util/ApiToLfEngine.scala (L96)>.
The reason the code worked this way is that in the past it was
responsible for converting decimal strings both for the current
engine and for an older version of the engine that handled decimals
using a different type. Both issues have been fixed (a toy sketch of
the two checks follows this item).
* Therefore, `Decimal`s with scale exceeding the specified scale
made it into the engine, and contracts could be created containing
this invalid value.
* Once on the ledger, these bad numbers can be used to produce extremely
surprising results, due to how `Decimal` operations are
implemented. Generally speaking, all operations on `Decimal`
first compute the result and then run the output through
`Decimal.checkWithinBoundsAndRound`. The reason for this behavior
is that we specify multiplication and division as rounding their
output. Consider the case where the bad value 0.00000000005 makes it
into the engine and is then added to 100. The full-precision
result will be 100.00000000005, which after rounding becomes 100.
Therefore, on a ledger where such invalid values exist, it is not
the case that `y > 0 ==> x + y != x`, and so on.
Thanks a bunch to @briandbecker for the excellent bug report.
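A toy sketch of the two checks using plain `java.math.BigDecimal`
rather than the actual daml-lf `Decimal` module (the (38, 10)
precision/scale is the DAML-LF `Decimal` shape; the function names
mirror the ones above, everything else is an assumption):

```scala
import java.math.{BigDecimal => JBigDecimal, RoundingMode}

object DecimalSketch {
  val scale = 10
  // 38 digits of precision in total, so at most 28 digits before the point.
  private val maxAbs = new JBigDecimal(BigInt(10).pow(38 - scale).toString)

  private def withinBounds(x: JBigDecimal): Boolean =
    x.abs.compareTo(maxAbs) < 0

  // Old behaviour: round to scale 10, then check bounds -- extra digits
  // past the scale are silently rounded away.
  def checkWithinBoundsAndRound(x: JBigDecimal): Either[String, JBigDecimal] = {
    val rounded = x.setScale(scale, RoundingMode.HALF_EVEN)
    if (withinBounds(rounded)) Right(rounded) else Left(s"out of bounds: $x")
  }

  // New behaviour for parsing literals: reject anything carrying
  // information beyond 10 decimal digits instead of rounding it.
  def checkWithinBoundsAndWithinScale(x: JBigDecimal): Either[String, JBigDecimal] =
    if (x.stripTrailingZeros.scale > scale) Left(s"scale exceeded: $x")
    else checkWithinBoundsAndRound(x)

  def main(args: Array[String]): Unit = {
    val bad = new JBigDecimal("0.00000000005") // 11 digits past the point
    println(checkWithinBoundsAndRound(bad))       // Right(0E-10): rounded away
    println(checkWithinBoundsAndWithinScale(bad)) // Left(scale exceeded: ...)
  }
}
```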
* fix failing test using too much precision
* Add buildifier targets.
The tool checks and formats BUILD files in the repo.
To check if files are well formatted, run:
bazel run //:buildifier
To fix badly-formatted files, run:
bazel run //:buildifier-fix
* Cleanup dade-copyright-headers formatting.
* Fix dade-copyright-headers on files with just the copyright.
* Run buildifier automatically on CI via 'fmt.sh'.
* Reformat all BUILD files with buildifier.
Excludes autogenerated Bazel files.
* Add location information to DAML-LF produced by damlc
This is required to get error locations in the scenario view. Right now,
the location information for `create`/`exercise` still points to the
template/choice. I'll fix that in a separate PR.
* Fix test expectations
* Fix more tests
* Do not divulge contracts to observers in nonconsuming exercises
Disables support for non-default ledger feature flags, as they are
meaningless: the ledger server logic does not respect the flags.
Instead of a large refactoring to add support for the old flag
settings, it is best to disallow the deprecated flags and later phase
them out completely.
Re-enables test_divulgence_of_token in sandbox semantic tests.
Fixes #157.
* purge LedgerFlags entirely...
...since we only support one version of them anyway, and clearing them
out simplifies the code.
* updated release notes
* Implement legacy DAR support, #308
If MANIFEST.MF is not present, filter zip entries by the .dalf
extension, expect one main DALF and one optional *-prim* DALF, and fail
in all other cases.
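A minimal sketch of the legacy fallback, assuming a plain
`java.util.zip.ZipFile` and simple name-based matching (the actual
reader and its error handling differ; this only illustrates the
selection rule):

```scala
import java.util.zip.ZipFile
import scala.collection.JavaConverters._

object LegacyDarSketch {
  // Returns (main DALF name, optional *-prim* DALF name) or an error.
  def selectLegacyDalfs(dar: ZipFile): Either[String, (String, Option[String])] = {
    val dalfs = dar.entries().asScala.map(_.getName).filter(_.endsWith(".dalf")).toList
    val (prim, main) = dalfs.partition(_.contains("-prim")) // crude *-prim* match
    (main, prim) match {
      case (List(m), Nil)     => Right((m, None))
      case (List(m), List(p)) => Right((m, Some(p)))
      case _ =>
        Left(s"expected one main DALF and at most one *-prim* DALF, got: $dalfs")
    }
  }
}
```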
* comments, naming
* Fetch status.proto from remote, simplify JS gRPC codegen
Fetch the `status.proto` file (part of the standard gRPC distribution)
from a distribution channel. _Moreover_, use the recently introduced
`proto_gen` rule to simplify how the gRPC code for the Node.js bindings
is generated (and remove the need to have `google/rpc/status.proto`
locally in the repository).
* Add plugin_runfiles option to proto_gen
This allows us to add additional files to the Bazel sandbox so that
plugins can refer to them. This will subsequently be used by the
protoc-gen-doc plugin.
Also, pass the plugin options via the --name_opt parameter.
* Add missing status.proto dependency to /language-support/java and /ledger
* Build proto docs using the proto_gen rule
To make this work, I had to turn on the bazel build flag
`--protocopt=--include_source_info` because we cannot enable this flag
only for specific build rules.
* Make /ledger-api/grpc-definitions:docs public again
* Revert to the old style of passing plugin arguments to --name_out=options:path
* Suppress output of unzipping
* Fix link for google.rpc.Status in proto-docs
* Change contract key maintainers to have template argument in scope
More precisely, change the contract key type class that the surface
syntax is desugared to, so that the maintainers function has the
template argument in scope instead of the key. The compiler rewrites
all references to the
template argument into references to the key and fails whenever that is
not possible.
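A toy sketch of the rewriting step, over a made-up expression type
rather than the GHC Core that damlc actually works on (the `keyFields`
correspondence and every name below are hypothetical):

```scala
sealed trait Expr
final case class Var(name: String) extends Expr
final case class Proj(record: Expr, field: String) extends Expr
final case class App(fun: Expr, arg: Expr) extends Expr

object MaintainerRewrite {
  // keyFields maps a field of the template argument to the corresponding
  // field of the key; anything outside that mapping cannot be rewritten.
  def rewrite(tplArg: String, keyVar: String, keyFields: Map[String, String])(
      e: Expr): Either[String, Expr] =
    e match {
      case Proj(Var(`tplArg`), field) =>
        keyFields.get(field) match {
          case Some(kf) => Right(Proj(Var(keyVar), kf))
          case None => Left(s"field $field of the template argument is not part of the key")
        }
      case Var(`tplArg`) =>
        Left("the template argument itself cannot be rewritten into the key")
      case v: Var => Right(v)
      case Proj(r, f) => rewrite(tplArg, keyVar, keyFields)(r).map(Proj(_, f))
      case App(f, a) =>
        for {
          f2 <- rewrite(tplArg, keyVar, keyFields)(f)
          a2 <- rewrite(tplArg, keyVar, keyFields)(a)
        } yield App(f2, a2)
    }
}
```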
* Add TemplateKey class to desugaring module
* Make template key check slightly more lenient
* Fix TODO descriptions
* decode key expressions in Scala correctly, fixes #177
Replaced a `foldRight` with a `foldLeft`; projections were being decoded
backwards.
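A toy illustration of the bug over a made-up projection type (not the
actual decoder): with `foldRight` the path for `a.b.c` comes out as
`a.c.b`.

```scala
sealed trait Expr
final case class Var(name: String) extends Expr
final case class Proj(record: Expr, field: String) extends Expr

object ProjectionDecoding {
  // Wrong: List("b", "c") folded from the right yields Proj(Proj(a, "c"), "b"),
  // i.e. the projections are applied backwards (a.c.b).
  def decodeBackwards(root: Expr, path: List[String]): Expr =
    path.foldRight(root)((field, acc) => Proj(acc, field))

  // Fixed: folding from the left yields Proj(Proj(a, "b"), "c"), i.e. a.b.c.
  def decodeCorrectly(root: Expr, path: List[String]): Expr =
    path.foldLeft(root)((acc, field) => Proj(acc, field))
}
```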
* remove ContractKeysSimple -- superseded by ContractKeys