CHANGELOG_BEGIN
- [JSON API] The check that connections are made through a reverse proxy
  providing HTTPS (so that JWT tokens don't leak) now only logs a warning
  rather than rejecting the request.
See `issue #5856 <https://github.com/digital-asset/daml/pull/5856>`_.
CHANGELOG_END
* Apply platform_suffix on all Windows pipelines
To distinguish action keys between the compatibility and the main
workspace and avoid the "undeclared input(s)" error. We also modify the
main workspace's action cache keys to avoid poisoned cache items.
CHANGELOG_BEGIN
CHANGELOG_END
* Avoid exceeding MAX_PATH on Windows
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
* Update SDK versions in compatibility tests
This adds a Haskell script to generate a versions.bzl file that
contains the list of versions as well as their hashes. This should
make it a bit easier to keep things up to date going forward.
The script is a bit slow since downloading all the SDKs takes quite a
while, but for now it should be good enough and is much more pleasant
than having to figure this out manually.
changelog_begin
changelog_end
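For orientation, a rough sketch in plain Haskell of the shape such a generator could have; the file layout and names below are illustrative assumptions, not the actual script (the real one also downloads each SDK to compute the hashes):
```haskell
-- Illustrative only: renders a list of (version, sha256) pairs into the
-- rough shape of a versions.bzl file.
module Main (main) where

import Data.List (intercalate)

renderVersionsBzl :: [(String, String)] -> String
renderVersionsBzl versions = unlines
  [ "sdk_versions = [" ++ intercalate ", " (map (show . fst) versions) ++ "]"
  , "version_sha256s = {"
  , intercalate ",\n" [ "    " ++ show v ++ ": " ++ show h | (v, h) <- versions ]
  , "}"
  ]

main :: IO ()
main = putStr (renderVersionsBzl [("1.0.0", "<sha256 of the 1.0.0 artifacts>")])
```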
* Address review comments
changelog_begin
changelog_end
* Fix excluded tests
changelog_begin
changelog_end
* docs: publish daml-cheat-sheet on docs.daml.com
* separate cheat sheet rule
* Fixed `@daml-cheat-sheet`
* Uses `_config.yml` to determine the source directory root.
* Uses `tar h` to resolve symbolic links. Otherwise the tarball just
contains symbolic links to the execroot.
* Uses flags to make `tar` and `gzip` reproducible, i.e. avoid
timestamps and the like.
* cleanup
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
* add GenMap to the "all types" test generators
* report bad GenMap format with DeserializationError, not MatchError
* document GenMap JSON
* notes on missing features
* enable -Xsource:2.13 in transaction
* make an Order instance for Value resolvable, but unimplemented
* use the skeleton from SValue ordering to make a Value ordering skeleton
* add Party Order
* add Order instance for SortedLookupList
* add Order for FrontStack, deriving everything
* factor the Order lookup, and tie a knot in the recursive Value instances
* we're going to need this Iterator thing again
* replacing Order#contramap with version that supports equalIsNatural
* use new equalBy, orderBy for FrontStack, SortedLookupList, ImmArray
* _2 comparator, upgrade Name Equal to an Order
* incorporate lookup for enums, variants into Value order; record/struct cases
* Enum/Variant comparison
* looking up the singleton implicitly won't work for non-`object`s, alas
* test Order laws for values of all primitive types
* test Order laws for record and variant types
* test Order laws for enum types
* test that enum strings are not compared
* use checkLaws for Value Equal as well
* test that enums match order to constructor rank
* factor genAddend and genAddendNoListMap
* reintroduce Order for TypedValueGenerators
* more addend order
* record, variant order cases
* record cons order
* deriving Order while decoding from JSON
* make ApiCodecCompressed's Cid codec based on the typeclass
* test how the Value ordering and the underlying projected value orderings line up
- hint: they don't, yet
- this is also a template for how we'll check the fidelity with SValue
ordering
* test how the Value ordering and SValue ordering line up
- hint: they don't, yet
* typed Arbitrarys have access to Order
* produce proper ValueGenMap
* inj requires Order, sometimes
- we encode this as "all the time" but there is a type-level unification
approach to remove this requirement in some cases
* make inj a function
* test that order doesn't matter for JSON decoder
* use Utf8 order for TVG text; don't pretend that base equal works
* sort JSON GenMaps, and check for duplicates
* make injarb use IntroCtx
* remove stray import
* Order instances for Bytes, Hash, AbsoluteContractId
* require Order[Cid] to decode JSON to LF values
* clean up map reordering test
* remove unused Instant instance
* fake Order instance no longer needed, valid instance defined
* test parity of global AbsoluteContractId order and SContractId order
* bazel fmt
* test AbsoluteContractId Order lawfulness
* test duplicate key detection
CHANGELOG_BEGIN
- [JSON API] Prepare full support for the planned GenMap primitive type.
See `issue #5031 <https://github.com/digital-asset/daml/issues/5031>`_.
CHANGELOG_END
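For flavour, a hedged sketch of what a GenMap could look like on the DAML side and in the JSON encoding; the `DA.Map` module and the list-of-pairs JSON shape are assumptions about the planned design, not the final spec:
```haskell
module GenMapSketch where

import qualified DA.Map as M

-- A genuinely generic map key (a record), which a TextMap cannot express.
data Account = Account with
    owner : Party
    number : Int
  deriving (Eq, Ord, Show)

balances : Party -> M.Map Account Int
balances p = M.fromList [(Account p 1, 10), (Account p 2, 0)]

-- In the JSON encoding this is expected to appear as a list of
-- [key, value] pairs, roughly:
--   [ [{"owner": "Alice", "number": 1}, 10]
--   , [{"owner": "Alice", "number": 2}, 0] ]
```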
* Adapt ResponseFormat from JSON API
* Add some type annotations
* Use response format with status and errors/result fields
* Update and refactor tests
changelog_begin
changelog_end
* Add basic Sandbox data continuity tests
This adds some basic tests that check that data migrations work
properly. For now, I use DAML Script to create and query contracts at
each step. This isn’t perfect since queries can only go through the active
contract service and not through things like the transaction stream, but it’s
clearly better than nothing.
The runner for executing the tests is a simple Haskell executable. It
didn’t really seem useful to throw tasty at this.
I’ve added two sets of tests, one that runs only through stable
versions and one that includes snapshots since migrating through
snapshots is not necessarily equivalent.
Sadly these tests use sandbox-classic since I discovered while writing
these tests that sandbox-next does not actually support migrating data
between SDK versions.
changelog_begin
changelog_end
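For flavour, a minimal sketch of the kind of DAML Script step these tests run at each version; the `Asset` template and the assertion are made up for illustration, not the actual test model:
```haskell
module MigrationStep where

import Daml.Script

-- Stand-in template; the real migration model differs.
template Asset
  with
    owner : Party
  where
    signatory owner

-- One step: create a contract with the current SDK, then check that it is
-- visible through the active contract service.
step : Party -> Script ()
step owner = do
  cid <- submit owner do
    createCmd Asset with owner = owner
  assets <- query @Asset owner
  assert (any (\(cid', _) -> cid' == cid) assets)
```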
* Use the sandbox module instead of a custom withSandbox
changelog_begin
changelog_end
* Update compatibility/sandbox-migration/SandboxMigrationRunner.hs
Co-authored-by: Andreas Herrmann <42969706+aherrmann-da@users.noreply.github.com>
To avoid "undeclared inclusion(s)" errors by modifying the action keys.
Another option is `--action_env`. However, this only affects actions
that set `use_default_shell_env=True`, which few do, since that setting
is bad for hermeticity.
CHANGELOG_BEGIN
CHANGELOG_END
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
So the PR of substance (speedy-returnValue) will be easier for reviewers to read:
- rename machine member: kont -> kontStack
- rename method: kontPop -> popKont
- abstracted new method: kont.add -> pushCont
- addition of 4 @inline annotations
changelog_begin
changelog_end
* daml new script-example
* Build DAML script DAR
* daml_script_dar macro over sdk_version
* Run an individual daml-script test
* DAML script test matrix
* format
CHANGELOG_BEGIN
CHANGELOG_END
* Use named arguments on daml_script_test
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
* Set JVM memory limits for sandbox
Using the same settings as defined in `@daml//bazel_tools:scala.bzl`.
* Mark tests as large
Their memory consumption is somewhere around 300MiB, which is considered
"large" according to
https://docs.bazel.build/versions/master/be/common-definitions.html?cl=head#common-attributes-tests.
CHANGELOG_BEGIN
CHANGELOG_END
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
--project-root is a bit confusing when using the assistant since it
will choose the SDK version before going to the project
directory. This is almost never what you intend to do, so setting
DAML_PROJECT seems like a better option in most cases.
See https://github.com/digital-asset/daml/issues/5769 for details
changelog_begin
changelog_end
* Upgrade scala compiler silencer to 1.6.0
CHANGELOG_BEGIN
CHANGELOG_END
* Adapt build bazel file to new targets
* Switch to the silencer plugin for Scala 2.12.11 rather than 2.12.8,
per Samir's feedback
* Add missed bazel files
* Review feedback from Leo
We have seen the following error message crop up a couple of times
recently:
```
FATAL: could not create shared memory segment: No space left on device
DETAIL: Failed system call was shmget(key=5432001, size=56, 03600).
HINT: This error does *not* mean that you have run out of disk space.
It occurs either if all available shared memory IDs have been taken, in
which case you need to raise the SHMMNI parameter in your kernel, or
because the system's overall limit for shared memory has been reached.
The PostgreSQL documentation contains more information about shared
memory configuration.
child process exited with exit code 1
```
Based on [the PostgreSQL
documentation](https://www.postgresql.org/docs/12/kernel-resources.html),
this should fix it.
CHANGELOG_BEGIN
CHANGELOG_END
* CHANGELOG_BEGIN
Added filterA to Prelude.
* Update compiler/damlc/daml-stdlib-src/DA/Internal/Prelude.daml
Co-authored-by: Shayne Fletcher <shayne@shaynefletcher.org>
* Removed not so useful comment.
* Moved filterA from Prelude to Action.
* filterA is a one-liner now.
* Provided more meaningful example to filterA.
* Added test for filterA.
* Removed failing doctest.
CHANGELOG_END
Co-authored-by: Shayne Fletcher <shayne@shaynefletcher.org>
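A minimal sketch of using the new `filterA`, assuming it ends up exported from `DA.Action` with the signature `filterA : Applicative m => (a -> m Bool) -> [a] -> m [a]`:
```haskell
module FilterAExample where

import DA.Action (filterA)

-- Keep only the elements for which the effectful predicate returns True.
keepEven : Applicative m => [Int] -> m [Int]
keepEven = filterA (\x -> pure (x % 2 == 0))
```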
Speedy: run() dont step()
- Running the Speedy machine with `run()` instead of `step()`
- Remove: `SResultContinue`
- Add: `SResultFinalValue(_)`
We change the top-level control of Speedy from machine.step() to machine.run(), moving the loop that steps while the machine returns SResultContinue into Speedy itself (and so SResultContinue is removed in favour of SResultFinalValue). The main advantage of this approach is that the tight while loop can live inside the exception handler, rather than having to wrap the handler around every step.
changelog_begin
changelog_end
* Integrate PostCommitValidation with JdbcLedgerDao and SqlLedger
Closes #5035, closes #5663
changelog_begin
[Sandbox] Skip unnecessary double post-commit validation inherited by sandbox-classic, expect performance improvement
changelog_end
* Ensure SqlLedger recovers from failures and logs them when publishing a transaction
* Remove unused import
* Remove tests for ledger entries
* Fix completions test to make them compile
* Fix compilation errors in tests, address self-review items, apply necessary fixes
- address https://github.com/digital-asset/daml/pull/5781#pullrequestreview-403293667
- address https://github.com/digital-asset/daml/pull/5781#pullrequestreview-403378192
* Pass TransactionTimeModelComplianceIT
* Minor tweaks to variable naming
* Fix failing tests
* Stop deduplicating commands on failures
* Attempt at making sandbox-classic allocate parties implicitly
* Remove implicit party allocation test (without full server) for SQL backed sandbox-classic
* Removing ImplicitPartyAdditionIT (covered in conformance tests)
* Add migrations
* Fix test for ledger DAO with post-commit validation against PostgreSQL
* Update PostgresIT
* Fix missing/wrong items from previous commits
* Don't perform batch processing of enqueued persistence entries
* Rebase against master
Currently, there are quite a few releases that are lacking the
Standard-Change label, even though they did publish artifacts. This
makes our SOC2-compliance tracking a bit harder. For the past two
months, I have manually added the label after-the-fact while preparing
the monthly compliance report, but that doesn't seem like a great
solution.
This PR changes the release process to be more optimistic: assume the
release is going to succeed by putting in the label immediately, and
then (optionally) removing it if the release fails.
Note that the label should only be removed in the rare case where the
release was merged into master but somehow did not produce any artifact.
This can only happen if the Linux build fails quite early, which as far
as I know only happened once over the past two months when we had the
release notes race condition.
CHANGELOG_BEGIN
CHANGELOG_END
* Sandbox: expose back pressure config in CLI
CHANGELOG_BEGIN
[Sandbox]: Added ``--max-commands-in-flight`` as CLI configs. See ``daml sandbox --help``.
[Sandbox Classic]: Added ``--max-commands-in-flight`` and
``--max-parallel-submissions`` as CLI configs. See ``daml sandbox-classic --help``.
CHANGELOG_END
* Bumping the default maxParallelSubmissions to 512 for sandbox classic
We used to use `maxCommandsInFlight * 2` in SqlServer, but it makes more
sense to use `maxParallelSubmissions` there. Since the lower default value of 128
would cause the conformance tests to fail, I'm bumping it to 512.
* Use maxCommandsInFlight to configure the parallel submissions for CommandService
* Add a reason text field to RejectReason.Inconsistent (#5180)
CHANGELOG_BEGIN
- Add a reason text field to RejectReason.Inconsistent.
See `#5810 <https://github.com/digital-asset/daml/issues/5810>`__.
CHANGELOG_END
* Change wording in contributing instructions to reflect best practice (#5820)
* Also add reason text to the other reject reasons that don't have it (#5820)
* Update with review comments (#5820)
* Update with review comments (#5820)
* Update with review comments (#5820)
* Fix handling of packages in damlc visual
Previously we just ran the analysis on the modules of the main
package. This failed for obvious reasons as soon as you referenced a
template from another package, which happens pretty
frequently (e.g. for anything that uses finlib).
This PR fixes this to run the analysis on the whole World which is
self-contained. This required a bunch of reshuffling to make sure that
we always reference fully qualified identifiers but most of it is
very mechanical.
Note that currently you cannot distinguish between templates with
identical names in the resulting graph (they will be separate but you
have no idea which one is which). This was already an issue
before if you had the same template name in different modules, so I
consider it an orthogonal issue.
This fixes the expected failure we already had and I added another
test that checks that colliding template names do at least show up as
separate nodes in the graph. I also manually tested this against
ex-bond-issuance.
Disclaimer: I’m aware that the code is very messy but I tried to
resist the urge to rewrite it completely and only change what was
necessary.
fixes #5776
changelog_begin
- [DAML Compiler] ``damlc visual`` now works properly in projects
consisting of multiple packages.
changelog_end
* Rename templateChoiceId to templateId
changelog_begin
changelog_end
The two functions to convert between text and a list of codepoints
were documented the wrong way around. This PR fixes the issue. We
also sprinkle in a few plural "s" where needed.
CHANGELOG_BEGIN
CHANGELOG_END
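For reference, a small sketch of the two conversions whose docs were swapped, assuming the `DA.Text` names `toCodePoints : Text -> [Int]` and `fromCodePoints : [Int] -> Text`:
```haskell
module CodePoints where

import DA.Text (fromCodePoints, toCodePoints)

-- "Hi" corresponds to the code points [72, 105].
codes : [Int]
codes = toCodePoints "Hi"

-- Going back the other way recovers the text.
original : Text
original = fromCodePoints [72, 105]
```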
All functions in `DA.Numeric` take the scale of the result as their
first type argument. IMO, this is a nice API since you usually only
want to specify the scale of the result; the scale of the
term arguments is most of the time inferred.
However, the current type signatures in `DA.Numeric` bear quite some
risk of being confusing. For instance, in
```haskell
mul : NumericScale n3 => Numeric n1 -> Numeric n2 -> Numeric n3
```
the naming of the type variables suggests that the order of the
type parameters is `n1 n2 n3` when it actually is `n3 n1 n2`.
I consider the knowledge of how implicit `forall`s are filled in quite
expert knowledge and hence think we should make the order of these type
arguments explicit.
There is also a related mistake in the docs of `shift`. Running a
scenario confirmed that
```haskell
shift @1 @2 1.0 == 10.0
```
Hence, `shift` has multiplied its argument by `10^(2-1)`, which is
`10^(n1 - n2)`.
CHANGELOG_BEGIN
CHANGELOG_END
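To make the confusion concrete, here is a hedged sketch of the behaviour described above, written against the current (implicitly quantified) signature of `shift`; the exact exports are assumed from `DA.Numeric` and `DA.Assert`:
```haskell
module ShiftExample where

import DA.Assert ((===))
import DA.Numeric (shift)
import Daml.Script

-- With the implicit forall filled in, the *first* type argument of `shift`
-- is the result scale. So `shift @1 @2` maps a Numeric 2 to a Numeric 1,
-- multiplying the argument by 10^(2-1).
test : Script ()
test = script do
  shift @1 @2 (1.0 : Numeric 2) === (10.0 : Numeric 1)
```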
* resize elements of containers by 1/3 in TypedValueGenerators
- no explosions in 2m samples
* use Order.apply
* resize key types of GenMaps more vigorously
- At the default max size of 100, the max size of a key *type* is 10 and the key size is 33
* revert most of "reduce test count for Ordering tests (#5741)" 00025a5337
CHANGELOG_BEGIN
CHANGELOG_END
- Only upload packages during the initial startup.
- Avoid loading packages during subsequent resets
- Share an engine between Ledger API Server and Committer
* Use a randomized H2 URL to simulate the in-memory backend
The reset service test assumes to get a completely new ledger for each
test case. But because we use H2 in-memory with db_close_delay=1 and the
same H2 database name, the second test case gets the remnants of the
first test case.
Since we know that sandbox in-memory uses an H2 in-memory URL, we can
simply use SandboxBackend.H2Database for ResetServiceInMemoryIT.
CHANGELOG_BEGIN
[Sandbox] Drastically lower the time needed to do a reset via the
ResetService.
CHANGELOG_END
With the current setup, we always push whatever version GitHub considers
to be the latest, which is defined by date. This means that at the moment a
patch release could overwrite a less recent but higher-version release,
essentially downgrading the SDK to a previous, presumably less good user
experience.
This patches the upload process to choose the highest-numbered release
instead of the most recent one by date.
CHANGELOG_BEGIN
CHANGELOG_END
CWD will be set to the same execroot for all targets on Windows. While
this will contain the things we are searching for, it contains a whole
bunch of other stuff, and in particular it can also change during the
execution of `find`. This resulted in errors with temporary files such
as the local-spawn-runner-* files that appear and disappear while
`find` is running.
This PR switches it to a tmp dir which works around this issue and
makes more sense anyway since we clearly don’t want to search in the
whole execroot.
changelog_begin
changelog_end
At the moment, collect_build_data will wait for the Windows
compatibility test to have "finished", but doesn't check its return
status. This means two things:
1. Should the compatibility test end without a success or error (e.g.
communication broken between Azure and the node), the option to rerun
failed jobs will not appear, as there will be no failed job.
2. The subsequent notify_user step will ignore failures in the
compatibility_windows job when reporting to Slack, making for
confusing reports.
CHANGELOG_BEGIN
CHANGELOG_END
With the change in release model (VERSION to LATEST), I forgot to change
the workspace_status script. The result is that our sitemap will forever
indicate that all the pages in the docs were last modified on Feb 25,
discouraging search engines from indexing them again at any point since.
This PR fixes that by updating the workspace_status script, which
hopefully should result in search engines indexing us again.
CHANGELOG_BEGIN
CHANGELOG_END