This PR adds a first version of the upgrade documentation and removes
the existing docs covering `damlc migrate`.
So far the docs focus on the high-level approach to upgrades taken in
DAML and give an example of how to structure upgrade contracts.
What is not covered so far (and I’d like to leave that for a separate
PR) is:
1. Technical details: How are things split up into packages, which
restrictions apply to data-dependencies, …
2. Deployment and Running the upgrade via triggers/daml script/…
3. Common patterns for handling this in UIs (e.g. “locking” old contracts)
changelog_begin
changelog_end
I’m not really happy with this “fix”, but after having spent way too
much time on this, this was the best I came up with. (The details are
in an inline comment). If anyone has better ideas, I’m all ears.
changelog_begin
changelog_end
Replace `daml-lf-repl validate` in packaging tests with `damlc validate-dar`.
Simplify the test setup a little by passing tools (damlc, validate, etc.) in a record.
changelog_begin
changelog_end
* Use yarn installation link instead of homepage
* Indentation of Messages segment
* Update git link to downloads page
* Explain why you see friends of friends in the network
* Tweak wording
* Describe how to run the new app more explicitly
* Make change to ui folder more obvious
* Add next steps
changelog_begin
changelog_end
* Fix title levels
Includes a change that should reduce the size of runfiles manifest files
and runfiles trees for targets with runtime dependencies on the Python
toolchain.
CHANGELOG_BEGIN
CHANGELOG_END
Co-authored-by: Andreas Herrmann <andreash87@gmx.ch>
Let's see how far we get with #4745.
It's a bit of a shame I can't retry the same commit multiple times.
Maybe I should have accounted for that in the version format...
CHANGELOG_BEGIN
CHANGELOG_END
This has been running for a few days now, and while I have seen a bunch
of these cases, I have not once received a message with a BACKOFF value
different from 512. This means that, likely due to some sort of internal
caching in Azure, retrying in this case is useless and just makes the
build failure take more time, i.e. more time before we can rerun.
Rerunning does usually solve it, though.
I have also noticed that we still get these notifications when the job
has been canceled, which usually means the user has force-pushed (in
which case it makes sense that the commit is no longer available). I'm
not sure we can detect this, but I take this opportunity to print the
JobStatus just in case.
CHANGELOG_BEGIN
CHANGELOG_END
* Update create-daml-app code
* Remove reference to infix elem
* Remove the licenses from the code we tell users to copy
* Tweak conclusion
changelog_begin
changelog_end
Somehow, in the current setup, the publish steps do not get executed on
master. This is what Azure reports:
```
Evaluating: and(succeeded(), eq('$(is_release)', 'true'),
eq(variables['Build.SourceBranchName'], 'master'), eq('linux', 'linux'))
Expanded: and(True, eq('$(is_release)', 'true'),
eq(variables['Build.SourceBranchName'], 'master'), eq('linux', 'linux'))
Result: False
```
So it looks like, in the condition, `${{parameters.is_release}}`
evaluates to the literal string `$(is_release)`. If we look at the point
of invocation of the ~function~ template, we can see:
```
- template: ci/build-unix.yml
parameters:
release_tag: $(release_tag)
name: 'linux'
is_release: $(is_release)
```
so it does not seem completely crazy. However, according to the
documentation, we should expect that to be replaced by the value of the
corresponding variable, as per:
```
variables:
release_sha: $[ dependencies.check_for_release.outputs['out.release_sha'] ]
release_tag: $[ coalesce(dependencies.check_for_release.outputs['out.release_tag'], '0.0.0') ]
trigger_sha: $[ dependencies.check_for_release.outputs['out.trigger_sha'] ]
is_release: $[ dependencies.check_for_release.outputs['out.is_release'] ]
```
What's interesting here is that, within `build-unix.yml`, we are also
using `release_tag` in the exact same way:
```
- bash: ./build.sh "_$(uname)"
displayName: 'Build'
env:
DAML_SDK_RELEASE_VERSION: ${{parameters.release_tag}}
```
and this time output from the build seems to show the value being
correctly substituted:
```
damlc - Compiler and IDE backend for the Digital Asset Modelling
Language
SDK Version: 0.13.55-snapshot.20200226.3266.d58bb459
Usage: <interactive> COMMAND
Invoke the DAML compiler. Use -h for help.
```
My current guess is that the (undocumented, as far as I can tell)
evaluation order is as follows:
1. In the template, syntactically replace all the parameters.
2. In the job definition, replace the call to the template with the code
of the template. So it is as if we had written the template directly in
the `azure-pipelines.yml` file, with `$(release_tag)` and
`$(is_release)`.
3. Run the build. When we reach the time to run this specific job,
we can evaluate the expressions for the variables and replace them in
the rest of the job.
So what is going wrong? I believe the issue is with the quotes,
preventing the substitution of `is_release`. They came directly from the
[documented
syntax](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml#use-a-template-parameter-as-part-of-a-condition),
but if the above evaluation order is correct, they should not be there.
There are actually two things going wrong here. The first one is that
the syntax `$()` is used to substitute a value in what Azure considers a
string. This is the case for `env` keys. However, the `condition` key
is not a string; it is an Azure "expression". Expressions have their own
evaluation rules and syntax, and in particular, `$()` is not a
substitution rule there, so when Azure sees `$()` in a string in an
expression (due to the quotes), it leaves it alone.
Removing the quotes does not directly help, though, as we then end up with
```
condition: eq($(is_release), 'true')
```
and `$()` is not valid syntax in an expression. The way to use variables
in an expression is `variables.name` (or `variables["name"]`, because
why have only one?).
So that means we have to pass variables to the template in different
ways depending on how they will be used. So much fun.
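A sketch of what this implies, in Azure YAML (illustrative only, not necessarily the exact fix; job and script names are made up). Inside the template, a runtime `condition` should reference the pipeline variable directly via expression syntax, while string contexts like `env` can keep the macro/parameter style:

```yaml
- job: publish
  # runtime expression: evaluated when the job runs (step 3 above),
  # so the variable's value is actually available here
  condition: and(succeeded(), eq(variables['is_release'], 'true'))
  steps:
    - bash: ./release.sh
      env:
        # string context: parameter/macro substitution works as expected here
        DAML_SDK_RELEASE_VERSION: ${{ parameters.release_tag }}
```

The point being: the same value has to travel through two different syntaxes depending on whether it lands in an expression or a string.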
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Name the arguments to `ApiServices.create` for clarity.
* sandbox: Clarify numbers and types in configuration classes.
* sandbox-next: Log the correct port on startup.
* sandbox-next: Connect up the command configuration.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox-next: Wire up TLS configuration.
* sandbox-next: Wire up the maximum inbound message size.
* sandbox-next: Set the global log level if specified.
And if it's not specified, default to the level in logback.xml, INFO.
* sandbox-next: Connect up the submission configuration.
* sandbox-next: Log the correct ledger ID.
* sandbox-next: Use `TimeProvider.UTC`.
The existing approach is a historical accident. The reason for the
additional tarball/install jobs was that, in my original attempt, the
build steps would still build the current commit, as opposed to the
target commit.
This is not such an issue on Linux, but setting up the build environment
on Windows and macOS _again_ for no good reason is a pure waste of time
(and effort in getting it right). Now that the build steps build the
target commit (with the env var set), we can go back to the way things
were previously: just take the build products directly from the build
step.
CHANGELOG_BEGIN
CHANGELOG_END
* Make completion service return checkpoints
The query used to retrieve completions from the new table for #4681
currently does not return checkpoints. These do not have to match
the application_id and submitting_party query since those fields
are not populated.
CHANGELOG_BEGIN
CHANGELOG_END
* Address https://github.com/digital-asset/daml/pull/4735#discussion_r384713277
* Extract ErrorOps, use liftErr instead of leftMap
JSON error formatting cleanup
CHANGELOG_BEGIN
CHANGELOG_END
* Good we have tests for this stuff
* Apply https://github.com/scala/bug/issues/3664 work-around,
so JsonError can be used instead of JsonError.apply
* error formatting
This doesn’t seem to bring any benefit anymore on cache hits, given the
reduced closure size, and on cache misses it’s significantly slower.
changelog_begin
changelog_end
This reduces the number of GHCs to 2 on Linux (regular and DWARF) and
1 on macOS. Given that each derivation is > 1 GB this should hopefully
help a bit.
changelog_begin
changelog_end
* move BeginBookmark to util
* adding offsets to steps
* offsetAfter belongs in Txn, not InsertDeleteStep
* make transaction stream a ContractStreamStep.Txn stream
* add several ContractStreamStep append cases
* rewrite 'render' to emit offset in the right places
* make ContractStreamStep#append total again
* check for offset in a few tests
* revert useless whitespace changes
* missed argument
* simpler mapPreservingIds
* rewrite states for new "live" format
* remove invalidated "events" block structure assertions
* make shutdown in withHttpService deterministic, to try to catch race condition
* exhaustiveness checking somehow disabled; fixed fetch flow and all is well
* documentation and changelog
CHANGELOG_BEGIN
- [JSON API - Experimental] Remove ``{"live": true}`` marker from websocket streams;
instead, live data is indicated by the presence of an "offset".
See `issue #4593 <https://github.com/digital-asset/daml/pull/4593>`_.
CHANGELOG_END
* be more specific about what liveness marker may be in docs
* fix daml2ts websocket tests
* mention type rules for all cases in offset documentation
Previously, we mapped `data-dependencies` under
`Pkg_$pkgId.<original module name>` and imported them this way. However,
we did not map `dependencies` the same way. This PR unifies the two and
cleans up the import handling logic a bit.
This also fixes imports if we have two packages with the same name but
a different version since the package name (which is the only thing
usable in package-qualified imports) is not sufficient to
disambiguate. I’ve added a test for this.
changelog_begin
changelog_end
* Extend /party endpoint to allow specifying party ids
* Extend /party endpoint to allow specifying party ids
* Update docs
CHANGELOG_BEGIN
[JSON API - Experimental] Fetch Parties by their Identifiers. See #4512
``/v1/parties`` endpoint supports POST method now, which expects
a JSON array of party identifiers as an input.
CHANGELOG_END
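Based on the changelog entry above, the new POST request body would be a plain JSON array of party identifiers (the identifiers here are illustrative, not taken from the API docs):

```json
["Alice", "Bob"]
```

Per the bullets below, the sync response can additionally carry warnings, e.g. for an empty input.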
* minor update
* minor update
* Use type alias
* Add warnings to the sync response
* test cases
* update docs, add test case for an empty input
* cleanup
* cleanup
* Addressing code review comments
This disables the PDF docs builds on macOS on CI (they are still built
locally by default) and removes them from the Nix closure by
introducing a separate ci-cached attribute that filters out texlive.
Since we build `nix-build nix -A tools -A cached` on CI, I’ve also
removed all the TeX stuff from `tools`; its only effect was to put TeX
on PATH, which nobody seems to care about.
changelog_begin
changelog_end
This removes the sample/reference implementation of kvutils
InMemoryKVParticipantState.
This used to be the only implementation of kvutils, but now with the
simplified kvutils API we have ledger-on-memory and ledger-on-sql.
InMemoryKVParticipantState was also used for the ledger dump utility,
which now uses ledger-on-memory.
* Runner now supports a multi participant configuration
This change removes the "extra participants" config and goes for a consistent
participant setup with --participant.
* Run all conformance tests in the repository in verbose mode.
This means we'll print stack traces on error, which should make it
easier to figure out what's going on with flaky tests on CI.
This doesn't change the default for other users of the
ledger-api-test-tool; we just add the flag for:
- ledger-api-test-tool-on-canton
- ledger-on-memory
- ledger-on-sql
- sandbox
Fixes #4225.
CHANGELOG_BEGIN
CHANGELOG_END
The default behaviour of an Azure job that has a dependency is to only
run if the dependency has succeeded. However, that default behaviour is
overridden if there is an explicit `condition` attribute.
This PR restores the expected behaviour that we only try to build a
release tarball if the actual build has succeeded.
CHANGELOG_BEGIN
CHANGELOG_END
* Ensure DarReader prevents zip bombs
CHANGELOG_BEGIN
[DAML-LF] The DarReader has a 1 GB hard cap on ZIP archive entry size to prevent zip bombs
CHANGELOG_END
* Properly test UniversalArchiveReader, make it prevent bombs, break away memory heavy tests
* Exclude the zip bomb detection test from running on Mac CI nodes
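The idea behind the cap can be sketched in a few lines. This is a language-neutral illustration (the real DarReader is Scala, and the helper name here is made up): reject any archive entry whose declared uncompressed size exceeds a hard limit before extracting it.

```python
import io
import zipfile

# 1 GB hard cap, matching the changelog entry above.
MAX_ENTRY_SIZE = 1024 * 1024 * 1024


def check_dar_entries(data: bytes, max_entry_size: int = MAX_ENTRY_SIZE) -> None:
    """Raise ValueError if any ZIP entry claims to be larger than the cap.

    Checking the declared size up front means a zip bomb is rejected
    before any decompression happens.
    """
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        for info in zf.infolist():
            if info.file_size > max_entry_size:
                raise ValueError(
                    f"entry {info.filename!r} too large: {info.file_size} bytes")
```

Note this only guards against entries that honestly declare a huge size; a reader that streams entries should additionally enforce the cap while decompressing, in case the header lies.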
* sandbox-next: Get the authorization service from configuration.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox-next: Add parameter names to Runner function calls.
It's getting very confusing what's what without them, and a mix of
named and unnamed arguments is even more confusing.
* Copy App component from create-daml-app
* Introduce App component and start changing explanation of MainView
changelog_begin
changelog_end
* Elaborate on MainView, esp useStreamQuery
* Show where to find components and rearrange introduction of hooks
* ts
Co-Authored-By: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* Tweak context description
Co-Authored-By: Martin Huschenbett <martin.huschenbett@posteo.me>
* Don't talk about []
Co-Authored-By: Martin Huschenbett <martin.huschenbett@posteo.me>
* Address rest of comments
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
Co-authored-by: Martin Huschenbett <martin.huschenbett@posteo.me>
These commands were intended for debugging but neither @associahedron
nor I actually use them since running `daml build` and looking at the
generated files in `.daml` is a much more robust solution.
I’ve also deleted some leftover code from the old-style
data-dependencies where we generated actual template instances (not
just dummy instances). We’ve already deleted everything else around
this; this bit was just left over by accident.
The only usage was a testcase which I’ve just switched over to using
`daml build`.
changelog_begin
changelog_end
While flailing about randomly trying to reset the Windows cache
yesterday, I noticed a couple issues with the current script:
- The `fork_point` calculation is just plain broken. Somehow our `set
-euo pipefail` does not fail on subshell errors, but the existing
command is just never going to work: it looks like `git` does not
resolve refs on the `merge-base` command. It also looks like the
`--fork-point` option is not what we want. I don't know how this
happened.
- `sort` on my machine and on CI do not seem to behave the same with
respect to upper/lower case ordering. To make the script independent
of the specific sort order on the machine (probably controlled by the
locale), we now sort both the actual and the expected list.
Finally, based on the failure to recognize a release commit once merged
into master, I realized that of course computing the diff between a
commit and itself will yield an empty diff. The `git_sha` step will now
identify the "master" and "fork point" commits as the parent for a
master build.
CHANGELOG_BEGIN
CHANGELOG_END
* Add TTL field to protobuf
* Add command deduplication to index service
* Wire command deduplication to DAO
* Implement in-memory command deduplication
* Remove Deduplicator
* Implement JDBC command deduplication
* Add TTL field to domain commands
* Deduplicate commands in the submission service
CHANGELOG_BEGIN
- [Sandbox] Implement a new command submission deduplication mechanism
based on a time-to-live (TTL) for commands.
See https://github.com/digital-asset/daml/issues/4193
CHANGELOG_END
* Remove unused command service parameter
* fixup protobuf
* Add configuration for TTL
* Fix Haskell bindings
* Rename SQL table
* Add command deduplication test
* Redesign command deduplication queries
* Address review comment
* Address review comment
* Address review comments
* Make command deduplication test optional
* Disable more tests
* Address review comments
* Address review comments
* Refine test
* Address review comments
* scalafmt
* Truncate new table on reset
* Store original command result
* Rename table columns
... to be consistent with other upcoming tables
* Rename migrations to solve conflicts
Fixes #4193.
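The in-memory variant of the mechanism described above can be sketched as follows. This is a minimal illustration of TTL-based deduplication, not the sandbox's actual code (class and method names are hypothetical; the real implementation lives in the submission service and the JDBC layer):

```python
class CommandDeduplicator:
    """Remember (submitter, command_id) pairs until their TTL expires."""

    def __init__(self):
        # (submitter, command_id) -> absolute expiry time in seconds
        self._seen = {}

    def submit(self, submitter: str, command_id: str,
               ttl_seconds: float, now: float) -> bool:
        """Return True if the command is accepted as new, False if it is
        a duplicate whose earlier submission has not yet expired."""
        key = (submitter, command_id)
        expiry = self._seen.get(key)
        if expiry is not None and expiry > now:
            return False  # duplicate within the TTL window: reject
        # New command, or the previous entry expired: (re)record it.
        self._seen[key] = now + ttl_seconds
        return True
```

Passing `now` explicitly keeps the sketch deterministic; a real service would use its time provider, and the SQL-backed version expresses the same check as a conditional insert keyed on the deduplication window.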
* Bump rules_haskell
Still checking if that helps with GHC 8.8 but we should upgrade this
either way.
changelog_begin
changelog_end
* disable grpc patch
* shut up buildifier
* delete unused ghci grpc patch
* Fix Cffi library not found issues
* Update deps.bzl
Co-Authored-By: Andreas Herrmann <42969706+aherrmann-da@users.noreply.github.com>
Co-authored-by: Andreas Herrmann <andreash87@gmx.ch>
Co-authored-by: Andreas Herrmann <42969706+aherrmann-da@users.noreply.github.com>