Since there is no canonical source of status.proto generated for scalapb (for Java it is pulled in via a transitive dependency of grpc-protobuf), we instead compile and include it in the ledger-api-scalapb artefact.
Compared to older versions of the Engine, we do A LOT more validation
now. So when starting with a fresh engine after each reset (via
the ResetService), we also repeat the validation of the loaded packages
again and again. This is VERY expensive, especially for large DAML
packages.
Luckily, the specification of the ResetService states the following:
// Resets the ledger state. Note that loaded DARs won't be removed --
// this only rolls back the ledger to genesis.
This means that we can re-use the same Engine object and benefit from not having to re-compile packages via
ConcurrentCompiledPackages#addPackage; more specifically, we can skip
this line: 1f2246c822/daml-lf/engine/src/main/scala/com/digitalasset/daml/lf/engine/ConcurrentCompiledPackages.scala (L56)
Fixes #178
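As a rough sketch of the idea (hypothetical stand-in types, not the actual Engine, ConcurrentCompiledPackages, or sandbox code), the compiled-package cache lives outside the per-reset state:
```scala
// Hypothetical stand-ins illustrating why keeping one Engine across resets
// avoids re-validating and re-compiling packages.
final class CompiledPackages {
  private val cache = scala.collection.concurrent.TrieMap.empty[String, AnyRef]
  // `compile` stands in for the expensive validation + compilation step.
  def addPackage(pkgId: String, compile: () => AnyRef): Unit = {
    cache.getOrElseUpdate(pkgId, compile())
    ()
  }
}

final class Sandbox(val packages: CompiledPackages) {
  private var ledger: Vector[String] = Vector.empty // stand-in for the ledger state
  def submit(tx: String): Unit = ledger = ledger :+ tx
  // ResetService semantics: roll back to genesis only; loaded (and compiled)
  // packages are deliberately kept, so nothing needs to be re-compiled.
  def reset(): Unit = ledger = Vector.empty
}
```
With this shape, a reset only drops the ledger state, while everything already added via addPackage stays compiled.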
* fix various conversion functions from string to Decimal
Fixes #399.
This fixes a critical bug, since:
* The DAML-LF spec specifies that the scale of `Decimal` is 10 --
that is, there are at most 10 digits past the decimal point:
<79bbf5c794/daml-lf/spec/value.rst (field-decimal)>.
* However, the code converting from the string representation that
  we get on the wire was _not_ performing this check, for two reasons:
- `Decimal.check` is a function that checks whether a given
`Decimal` is within the upper and lower bounds of what the
DAML-LF spec specifies, but crucially it does _not_ check that
the scale is not exceeded:
<79bbf5c794/daml-lf/data/src/main/scala/com/digitalasset/daml/lf/data/Decimal.scala (L31)>.
This behavior is correct in some cases (more on that later),
but not in others. Crucially, `Decimal.fromString`, which was
supposed to check if a decimal string literal is valid, used
this function, which means that it accepted string literals
containing numbers out of the required scale, rounding them to
make them fit within the scale. This function has been renamed
to `Decimal.checkWithinBoundsAndRound`, and a new function
`Decimal.checkWithinBoundsAndWithinScale` has been added, which
fails if the number provided has data not within the scale.
`Decimal.fromString` now uses
`Decimal.checkWithinBoundsAndWithinScale`.
- `ApiToLfEngine.parseDecimal` assumed that `Decimal.fromString` _did_
fail when passed numbers that were in any way invalid, and
moreover did _not_ use the result of `Decimal.fromString`, but rather
re-parsed the string into an unconstrained `BigDecimal`:
<79bbf5c794/ledger/ledger-api-common/src/main/scala/com/digitalasset/platform/participant/util/ApiToLfEngine.scala (L96)>.
The reason the code worked this way is that in the past
it was responsible for converting decimal strings both for the
current engine and for an older version of the engine, which
handled decimals using a different type. Both issues have been fixed.
* Therefore, `Decimal`s with a scale exceeding the specified one
  made it into the engine, and contracts could be created containing
  such invalid values.
* Once on the ledger, these bad numbers can be used to produce extremely
surprising results, due to how `Decimal` operations are
implemented. Generally speaking, all operations on `Decimal`
first compute the result and then run the output through
`Decimal.checkWithinBoundsAndRound`. The reason for this behavior
is that we specify multiplication and division as rounding their
output. Consider the case where the bad value 0.00000000005 made
it to the engine, and is then added to 100. The full-precision
result will be 100.00000000005, which after rounding becomes 100.
Therefore, on a ledger where such invalid values exist, it is not
the case that `y > 0 ==> x + y != x`, and so on.
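To make this concrete, here is a self-contained sketch of the two checks and of the surprising addition. It is illustrative only, not the actual Decimal.scala code; the bound of 28 integer digits and half-even rounding are assumptions made here for the example.
```scala
import java.math.{BigDecimal => JBigDecimal, RoundingMode}

object DecimalSketch {
  // Illustrative bounds: 28 integer digits, 10 fractional digits.
  val scale = 10
  val max = new JBigDecimal("9999999999999999999999999999.9999999999")

  private def withinBounds(x: JBigDecimal): Boolean = x.abs.compareTo(max) <= 0

  // What arithmetic results are run through: round to scale 10, then bound-check.
  def checkWithinBoundsAndRound(x: JBigDecimal): Either[String, JBigDecimal] = {
    val rounded = x.setScale(scale, RoundingMode.HALF_EVEN)
    if (withinBounds(rounded)) Right(rounded) else Left(s"out of bounds: $x")
  }

  // What a decimal string coming in over the wire must now satisfy:
  // reject anything needing more than 10 fractional digits instead of rounding.
  def checkWithinBoundsAndWithinScale(x: JBigDecimal): Either[String, JBigDecimal] =
    if (x.scale <= scale && withinBounds(x)) Right(x.setScale(scale))
    else Left(s"out of bounds or scale exceeded: $x")

  def main(args: Array[String]): Unit = {
    val bad = new JBigDecimal("0.00000000005") // 11 fractional digits
    println(checkWithinBoundsAndRound(bad))       // Right(0E-10): silently rounded away
    println(checkWithinBoundsAndWithinScale(bad)) // Left(...): rejected, as fromString now does
    // If `bad` had made it onto the ledger, adding it to 100 and rounding the
    // result back to scale 10 yields exactly 100 again, so x + y == x with y > 0.
    println(new JBigDecimal("100").add(bad).setScale(scale, RoundingMode.HALF_EVEN))
  }
}
```
In this sketch the string-level check rejects the over-precise literal, while the arithmetic-level check silently rounds it away, which is exactly the asymmetry described above.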
Thanks a bunch to @briandbecker for the excellent bug report.
* fix failing test using too much precision
As multiple platforms will create different annotated tags (because an
annotated tag includes a tag timestamp), they will conflict when trying to
push. Therefore, we go for a lightweight tag for now: lightweight tags are
simple pointers to a commit, so git will recognize that they are the
same and there is no conflict.
Warm up local caches by building dev-env and the current daml master. This is
allowed to fail, as we still want to have CI machines around even when
their caches are only warmed up halfway.
Afterwards, we purge old agents that might still be around but didn't
unregister themselves.
This depends on #402 being merged, as otherwise purge_old_agents.py
obviously can't be found.
* Accept multiple files in damlc test
Since damlc test also runs tests in transitive dependencies, this can
be significantly faster than running "damlc test" individually on
a set of files, as you would end up recompiling and rerunning tests
multiple times if those files depend on each other.
For //docs:daml-ref-daml-test this is roughly a 10x improvement, from
~70s to ~7s.
This rewrites the release script to be a lot simpler and significantly
faster:
- The artifacts are now declared in a separate YAML file, which should
  make them easier for people to modify and doesn't clutter the actual
  code.
- There is only a constant number of calls to Bazel, which speeds up
  the script quite a bit.
I verified that the release artifacts are the same as the ones we got
before: I traced the calls to the jfrog binary in a fake release
and, ignoring order, they are identical.
This adds `ci/azure-cleanup`, containing a script that talks to Azure Pipelines and removes agents older than 25 hours from a specific pool.
Machines are meant to be killed after 24 hours anyway; this makes sure they're properly unregistered from Azure Pipelines, too.
By doing this, we don't need to unregister nodes manually on shutdown.
The idea is to execute this every time a new agent is provisioned, once it has cloned the repo. We intend to clone the repo and pre-warm the caches there anyhow.
WIP until the repo fetching and cache pre-warming are present, too.
cc @zimbatm
* Add buildifier targets.
The tool checks and formats BUILD files in the repo.
To check whether files are well formatted, run:
bazel run //:buildifier
To fix badly formatted files, run:
bazel run //:buildifier-fix
* Cleanup dade-copyright-headers formatting.
* Fix dade-copyright-headers on files with just the copyright.
* Run buildifier automatically on CI via 'fmt.sh'.
* Reformat all BUILD files with buildifier.
Excludes autogenerated Bazel files.
Azure Pipelines has direct integration with GitHub, so we're just using
that. Releases on GitHub have to target a tag, so we also need to push
the tag as an intermediate step; we also need to include the platform
name in the artifact to avoid artifacts from different builds overwriting each other.
The two "GitHub release" steps depend on two Azure variables that are
not defined in the pipeline script. This may look like it should not
work, but in fact it does, because these variables are set by the
release script.
In Azure Pipelines, any build step can set variables for the next build
steps by outputting specially-formatted text to stdout. This text will
not appear in the build output displayed by Azure Pipelines, e.g.:
```
echo '##vso[task.setvariable variable=sauce]tomatoes'
```
would define the Azure variable `sauce` to have the string `tomatoes` as
its value for the next build steps.
See [0] for details.
[0]: https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-in-script
* TransactionServiceIT passes
* fixup
* using record time taken from TimeProvider
* taming TransactionBackPressureIT to avoid overloading the submission service
* implement AutoCloseable on all Ledger-related components holding stateful resources
* reenabling InMemory fixture
* disabling contract key integration test for SandboxSQL fixture
* removing TODO
* bumping up timeout on TransactionServiceIT due to slow Azure pipeline
* 1 minute timeout for SqlLedgerSpec
* making jdbcurl CLI argument hidden
* updating release notes
* Initial rattle prototype
* Build the IDE core
* Ignore the Rattle directory
* Clean up the dependencies
* Require stack-1.10, since that does much better extra-dep caching
* Streamline the ghc-lib dependencies
* Compile more pieces
* Add a build.bat
* Make the Windows build use the correct stack.yaml to bootstrap
* Fix up enough to build on Windows
* Generate the dylibs on Mac
* Remove accidental -v
* Make the Haskell build driven by the Bazel metadata
* Get proto3-suite building
* Delete the unneeded haskell-dependencies
* Allow generating the proto files and compiling them
* Fix metadata to deal with """ syntax
* Fix metadata to deal with a list of globs
* More work in the direction of daml-ghc
* Use correct daml_lf proto version
* Tell GHC to use shared objects in TH
* Specify needed packages
* wip
* wip
* Switch to the fork of gRPC-haskell
* Build executables with rattle
* setup build.sbt in daml-lf
* Build binaries with rattle
* rattle-sbt, move scala build scripts out of daml-lf subdir, and into rattle subdir
* convert scala-build.sh into MainScala.hs
* Clean up rattle build
* Pre-merge clean up
* Switch to the newer version of ghc-lib-parser
* remove dev ls from MainScala.hs
* compile java generated from protos as separate projects
* Add copyright headers
* HLint fixes
* Unscrew up an HLint fix
* fix scala formatting of rattle/build.sbt
* Move tuple repacking in conversion to DAML-LF into a separate function
I need to use the same logic for implementing the `TextMap.toList` function.
* Improve naming and add documentation