* interfaces: consuming/non-consuming iface choices
We add the consumption behaviour to the interface choice definition and
typecheck accordingly.
CHANGELOG_BEGIN
CHANGELOG_END
update to new ghc-lib, conversion implementation
* update ghc-lib
* pinning stackage on unix
* pin stackage on windows
We believe the Blackduck logic is currently faulty. We have had a
violation on an NPM dependency, and Blackduck keeps reporting it despite
our having removed the dependency.
We believe that what is happening is that, in the first step of
checking, we update the Haskell dependencies, _and then check the
validity of the whole project_, which includes the NPM deps. Because
that fails, we never get to the step where we actually update the NPM
deps, and Blackduck is stuck forever.
The solution is to not fail on violations for the Haskell update steps.
Haskell deps are still checked in the second step, because, again, it is
checking the whole project.
CHANGELOG_BEGIN
CHANGELOG_END
* Check protobuf compatibility of release commits w.r.t. previous stable release
CHANGELOG_BEGIN
CHANGELOG_END
* Remove blank line
* Don't persist credentials
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
* check-protobuf-against-stable.sh: SRC_DIR -> PROJECT ROOT + simplify
* Don't set LATEST_STABLE global in a function
* Simplify by using only the main work tree
* Simplify further as the check will be only run from `main`
* Move the check to `ci/build.yml` so that it is also run on PRs
* Enter the development environment to use tools
* Make variables read-only
* Support release branches and PRs targeting them
* Fix and document the reference tag finding logic
* Fix SYSTEM_PULLREQUEST_TARGETBRANCH and print it
* Don't log the source branch
* Fix comment formatting
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
* Enable Slack integration
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
* Don't check if the branch is a release one
...as the check won't be run on release branches.
* Add compatibility_stable_protobuf to collect_build_data
* Do not activate dev-env globally but only in sub-shells
* Add an explanation about why the check is not run on release branch commits
* Simplify further by leveraging `buf`'s ability to compare against branches
* Use `buf`'s `tag` locator instead of `branch`
* Split buf checks by module and remove previous manual check
* Explain how to run locally
* Use more future-proof WIRE_JSON for participant-integration-api
Co-authored-by: Simon Meier <meiersi-da@users.noreply.github.com>
* Use stricter FILE for the ledger gRPC API
* Propose an explanation for WIRE in kvutils
* Fix comment typo
* Re-introduce linting configuration for kvutils
* Simplify explanation for KVUtils' breaking check rule
* Remove extra (C) header from 3rd-party proto
* Don't touch the copyright of google/rpc/status.proto
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
Co-authored-by: Simon Meier <meiersi-da@users.noreply.github.com>
This bumps dotnet to the version required by the latest azuresigntool,
and pins azuresigntool for the future.
As usual for live CI upgrades, this will be rolled out using the
blue/green approach. I'll keep each deployed commit in this PR.
For future reference, this is PR [#10979].
[#10979]: https://github.com/digital-asset/daml/pull/10979
CHANGELOG_BEGIN
CHANGELOG_END
* Support adding tests as a hidden option
* Simplify existing suites
CHANGELOG_BEGIN
CHANGELOG_END
* Remove stale conformance suites from build.yml
* `--add` -> `--additional`
* Re-add `--all-tests` as deprecated CLI option to be tested
* Move sandbox-classic pruning test to wall clock again
* Run KVCommandDeduplicationIT for sandbox append-only
* Tidy-up
* Also add participant pruning test to ledger-on-memory/single-participant
* Remove KVCommandDeduplicationIT on ledger-on-memory/append-only
* Run the full suite plus pruning (rather than just pruning) for ledger-on-memory with multiple participants and append-only
* Add KVCommandDeduplicationIT to ledger-on-memory append-only
* Exclude ConfigManagementServiceIT from ledger-on-memory append-only multi-participant
* Tidy-up
* Use KVCommandDeduplicationIT for sandbox-on-x too
* Fix merge
Add max dedup duration arg to all the test suites that include command dedup tests
* Make `--include` and `--additional` mutually exclusive
* Uniform formatting of multi-line strings
* Move exclusions after additions as they are applied last
* Re-disable deduplication test on sandbox with static time
* Re-disable deduplication test on sandbox-on-x
I've witnessed a build ([link], though that will likely expire soon)
that failed with a "No space left on device" error after skipping the
cleanup step because the machine still had 68GB free.
[link]: https://dev.azure.com/digitalasset/daml/_build/results?buildId=87591&view=logs&j=870bb40c-6da0-5bff-67ed-547f10fa97f2&t=deecee86-545a-596e-8b0d-fb7d606fe9f2
With the machines only having 200GB disk size total, cleaning up at 80
is probably going to start hampering the overall efficiency of the
cache. It may be time to think about increasing the disk size itself (or
finding ways to reduce the size requirements of our builds). Important
note, though: we can't actually increase the macOS disk size very much.
The failure happened on the `compatibility_linux` job.
CHANGELOG_BEGIN
CHANGELOG_END
* Add avg, stddev, p90, p99, requests_per_second numbers to be reported on slack similar to speedy_perf
changelog_begin
changelog_end
* changes based on code review
* fix failing job due to breaking function export
* vanilla job test on main pipeline
changelog_begin
changelog_end
* move job to daily compat tests
* add timeout to dev-env and changes based on code review
* Bump ghc-lib to include dropped parsing code for generic templates
changelog_begin
changelog_end
* bump snapshot
changelog_begin
changelog_end
* drop old generics file
changelog_begin
changelog_end
* drop other broken file
changelog_begin
changelog_end
* Bump again
changelog_begin
changelog_end
* bump to merged commit
changelog_begin
changelog_end
* and bump snapshots
changelog_begin
changelog_end
* Generate short to long name mapping in aspect
Maps shortened test names in da_scala_test_suite on Windows to their
long name on Linux and MacOS.
Names are shortened on Windows to avoid exceeding MAX_PATH.
* Script to generate scala test name mapping
* Generate scala-test-suite-name-map.json on Windows
changelog_begin
changelog_end
* Generate UTF-8 with Unix line endings
Otherwise the file will be formatted using UTF-16 with CRLF line
endings, which confuses `jq` on Linux.
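A Python analogue of the fix (file name taken from the surrounding commits, mapping entry made up): pin the encoding and newline explicitly instead of relying on platform defaults:

```python
import json
import os
import tempfile

# Hypothetical mapping entry; the real file is produced by the Bazel aspect.
mapping = {"short_test_name": "//ledger/test:some_long_descriptive_test_name"}

path = os.path.join(tempfile.mkdtemp(), "scala-test-suite-name-map.json")
# encoding="utf-8" avoids the platform default (UTF-16 under PowerShell
# redirection); newline="\n" forces Unix line endings even on Windows.
with open(path, "w", encoding="utf-8", newline="\n") as f:
    json.dump(mapping, f, indent=2)
    f.write("\n")

raw = open(path, "rb").read()
print(b"\r\n" in raw)  # False: no CRLF, so jq on Linux is happy
```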
* Apply Scala test name remapping before ES upload
* Pipe bazel output into intermediate file
Bazel writes the output of --experimental_show_artifacts to stderr
instead of stdout. In Powershell this means that these outputs are not
plain strings, but instead error objects. Simply redirecting these to
stdout and piping them into further processing will lead to
nondeterministically missing items or nondeterministically introduced
additional newlines, which may break paths.
To work around this we extract the error message from error objects,
introduce appropriate newlines, and write the output to a temporary file
before further processing.
This solution is taken and adapted from
https://stackoverflow.com/a/48671797/841562
* Add copyright header
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
* Reminder to put an empty line between subject and body
changelog_begin
changelog_end
* Update ci/check-changelog.sh
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
* Skip subject in changelog check
This matches what unreleased.sh does.
Ideally we’d probably share the code but this is bash and I do not
like bash so I cannot be bothered to do this right now.
changelog_begin
changelog_end
* better error message
changelog_begin
changelog_end
Dropping the strict source name patch has resulted in empty
directories, which Bazel is happy about but which then fail at
runtime. They're not quite empty: the digest points to 0x0a 0x00,
so we match on that.
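A minimal sketch of that match (the sentinel bytes come from the text above; the helper name is made up):

```python
# The "not quite empty" entries described above contain exactly the two
# bytes 0x0a 0x00; matching on that lets us detect them early.
EMPTY_DIR_SENTINEL = b"\x0a\x00"

def looks_effectively_empty(content: bytes) -> bool:
    # Hypothetical helper: flag entries whose digest content is the sentinel.
    return content == EMPTY_DIR_SENTINEL

print(looks_effectively_empty(b"\x0a\x00"))        # True
print(looks_effectively_empty(b"real contents"))   # False
```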
changelog_begin
changelog_end
I'm still not sure how or why this happens, but if we can detect it
"early" to fail and try to debug, we can also just try to fix it 🤷
CHANGELOG_BEGIN
CHANGELOG_END
When machine disks are full, we can't clean the Bazel cache if it
happens to not be a mount point. I don't quite understand yet why it's
not a mount point, but maybe I'll be able to investigate more if we catch
the issue early, rather than waiting for the disk to be full and the
clean-up to fail.
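A sketch of such an early check (the cache path is CI-specific and not spelled out here), using Python's `os.path.ismount` as a stand-in for whatever the actual script does:

```python
import os

def ensure_mounted(path):
    """Fail early if the expected cache partition is not mounted,
    rather than waiting for the disk to fill up and cleanup to fail."""
    if not os.path.ismount(path):
        raise RuntimeError(f"{path} is not a mount point; cleanup would miss it")

ensure_mounted("/")  # "/" is always a mount point; the real path differs
```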
CHANGELOG_BEGIN
CHANGELOG_END
* Fix status check in collect_build_data
Follow-up to #10270, which caused the linux & macos builds to go
through but then screwed us over in collect_build_data. I hate CI.
changelog_begin
changelog_end
* .
changelog_begin
changelog_end
This has now screwed us over for two releases (1.14 and currently
blocking 1.15) because we didn’t backport the change. While we could
backport this, it is annoying and provides little to no benefit given
that a failure here is harmless so let’s just ignore failures here.
changelog_begin
changelog_end
`uname` gives the same name for Linux and Linux_scala_2_12, which causes
builds to override each other, and it looks like that might even break in
case of concurrent uploads, although that could also be general flakiness in Azure.
changelog_begin
changelog_end
Even with the cache retries, something still doesn’t seem to be cached
quite like I expect. I can’t really debug this without exec logs so
this PR starts publishing those.
changelog_begin
changelog_end
Anecdotally, I see a 25x reduction in size when compressing. Time to
compress and decompress is negligible, whereas storage costs and transfer
times may not be.
Since, as far as I'm aware, we don't currently have anything depending
on the current format, I could run a script locally to transform all of
the existing logs to match the new format.
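A quick way to sanity-check the compression claim on repetitive log-like data (the contents below are made up; real exec logs repeat action metadata heavily):

```python
import gzip

# Hypothetical, highly repetitive exec-log-like content.
log = b"action: SpawnExec\ncacheHit: true\nexitCode: 0\n" * 2000

compressed = gzip.compress(log)
print(f"raw={len(log)} compressed={len(compressed)} "
      f"ratio={len(log) / len(compressed):.0f}x")
assert gzip.decompress(compressed) == log  # round-trips losslessly
```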
CHANGELOG_BEGIN
CHANGELOG_END
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
* Oracle compliant append only schema
CHANGELOG_BEGIN
CHANGELOG_END
WIP : oracle on new appendonly schema
* diff to postgres dump, create consolidated view
* diff to postgres dump to ensure all oracle setup is equiv
* recompute sha for changed oracle flyway scripts
* drop old tables to prevent clash on name of new participant_events view
* recompute sha for flyway script
* prelim oracle StorageBackend
* Adds support for special preparedStatement for oracle
* Add support and wires to setObject by default everywhere
* Add the full OracleField suite with TODOs for convenience
* Wires OracleField suite to OracleFieldStrategy
changelog_begin
changelog_end
* enable debug version of oracle driver
* conversion Instant -> Timestamp for oracle
* WIP: primitive println debugging
* Passing PackagesSpec with appendonlyschema on Oracle
Rename size column to siz to avoid reserved word clash, including migration script for postgres
* include sha for new postgres migration script
* add missing copyright header
* cleanup
* passing party spec for appendonly on oracle
* passing configuration spec for appendonly on oracle
* scalafmt
* bazel buildifier reformat
* use db generic FETCH NEXT n ROW ONLY rather than limit for cross db compat
* siz instead of size for packages table on all dbs and schema
* revert enabling oracle jdbc debug
* Support Array[String] -> String conversion (and vice versa) for JSON array
Remove `as` aliases for tables as this does not work with oracle
Extract submitters clause for all db types
Use append transaction injector for oracle append only spec
* scalafmt
* correct oracle failing active contract spec tests
* wire in JdbcLedgerDaoCompletionsSpec
* remove semi-colons for ending statements that are problematic for oracle driver
* all tests up to divulgence passing for append only on oracle
* all appendonly tests passing on oracle
* remove ignore on fall back to limit-based query with consistent results
* do not change name of size column in packages table for mutable schema for all DBs
* do not change name of size column in packages table for mutable schema for all DBs
* standalone oracle appendonly schema script
regen shas on flyway scripts
revert some cosmetic refactoring in CommonStorageBackend
* Fixes conversion to parties from Oracle-JSON at flatEventWitnessesColumn
* Switches from composit SQLs to single SQLs at prepared statements to accommodate Oracle limitation
* Fixes arrayIntersectionWhereClause by applying patch from mutable Oracle schema integration
* Fixes queries with empty startExclusive Offsets
* First draw adding Oracle conformance test suites to CI
* wire in the oracle conformance tests for CI
* Use cross-db fetch next $n rows only syntax instead of limit syntax that works only for postgres/h2
* rename siz to package_size
* recompute shas
* scalafmt and include sha check for oracle append only flyway script
* correct missing package_size rename
* remove some todos -- correct corrupted V1__Init.sql
* Update ledger/ledger-on-sql/src/test/lib/scala/com/daml/ledger/on/sql/MainWithEphemeralOracleUser.scala
Co-authored-by: Robert Autenrieth <31539813+rautenrieth-da@users.noreply.github.com>
* correct version number for postgres rename column scripts
* remove unnecessary migration tables for oracle append only
* review feedback: rename createEventFilter as requested, remove todos
* review feedback: case consistency
* review feedback: update todos with issue markers
* review feedback: cleanup
* review feedback: OracleField and OracleSchema cleanup
* Fixing Table generators to use preparedData for convenience
* Placing TODOs for refactorings later
* Renames initial append-only oracle script, for convenience
* Falls back to original behavior as far as prepared statements go for a couple of queries
Co-authored-by: Marton Nagy <marton.nagy@digitalasset.com>
Co-authored-by: Robert Autenrieth <31539813+rautenrieth-da@users.noreply.github.com>
This PR drops two things:
1. The check that the benchmark hasn’t been modified. This hasn’t ever
been useful and it keeps being annoying.
2. The comparison against the old version; we now just benchmark the
   current version. We really only care about the day-to-day changes.
   Comparing against an arbitrary year-old version has lost all meaning
   at this point.
changelog_begin
changelog_end
* Generate Bazel logs and upload to GCS
changelog_begin
changelog_end
* Move git_*_sha into variables template
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
This currently breaks older releases because they require a different
Scala 2.12 version. It also adds zero value for a release that
defaults to Scala 2.12 and it adds basically no value for a release
that defaults to Scala 2.13 (see comment for details).
changelog_begin
changelog_end
This took me embarrassingly long to understand and debug (partially
because afaict Azure is broken):
The issue is that in the current state, parameters.is_release is not
expanded when setting the env var. That makes sense. The variable is
only set at runtime but the ${{}} template expressions are expanded
before that (it works below in the condition since that’s not in a
${{}} and is evaluated at runtime).
Now if we look at the other env var that does work (the release tag)
we can see something interesting. We set it to the macro
$(release_tag) in build.yml. However, that is not expanded since
template expansion happens way earlier. So the template parameter is
set to the literal string "$(release_tag)". We then splice that in via
template expansion ${{parameters.release_tag}} and then at runtime
azure will expand the macro.
Just changing is_release to a macro however would break the use in the
condition (I think you might be able to fix that if you put it in a
string but that just seems even more hacky).
So this PR instead defines a new variable skip_tests which we define
in the job and then splice it in via a macro.
Confusingly, `$[variables.is_release]` is not expanded in an env
definition. Afaict this is simply a bug. The only difference between
macros and runtime expressions according to the docs is that runtime
expressions need to replace the full RHs and that macros can only
reference a single variable. Wouldn’t help much here either anyway if
we want to stick to the parameter instead of referencing a variable
directly (which maybe we don’t, it doesn’t seem to help much but
that’s a separate question).
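A sketch of the resulting pattern (names approximate what the text describes, not the exact file contents): the job defines the variable from the template parameter, and the step splices it in as a macro so expansion happens at runtime:

```yaml
jobs:
  - job: build
    variables:
      # Expanded at template time; parameters exist then, so this is fine.
      skip_tests: ${{ parameters.is_release }}
    steps:
      - bash: ./build.sh
        env:
          # $( ) macros expand at runtime; ${{ }} here would expand before
          # runtime variables exist and splice in an empty string.
          SKIP_TESTS: $(skip_tests)
```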
changelog_begin
changelog_end
* Disable per commit windows compat tests
Windows version of #9370 to further reduce queues until we either get
more nodes or find another solution. We still test everything in the
daily run.
changelog_begin
changelog_end
* remove success check for skipped windows compat
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
This should reduce the pressure on CI nodes a bit. Note that we're still
running the full compatibility matrix on macOS as part of the daily
build.
CHANGELOG_BEGIN
CHANGELOG_END
Caching doesn't seem to work very well here. On a release, we build an
old commit, which has already been tested twice (once as a commit on
`main`, once as part of the release PR).
CHANGELOG_BEGIN
CHANGELOG_END
In the automated process, the Azure build is triggered on the branch
directly, which will be named `auto-release-pr-$(date -I)`. But if a
manual change needs to be made, and people subsequently use the `/azp
run` feature of Azure, the build then runs for the PR, which means it
actually runs on the merge commit of the branch and `main`, not on the
branch itself. In that case, the branch that we run the build on is
called `merge` and is thus not starting with `auto-release-pr-`.
This change should get us the notification back on manual PR builds too.
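The branch test described above can be sketched like this (function name invented for illustration):

```shell
# Accept both the direct branch build and the "/azp run" PR build,
# whose branch Azure reports as `merge`.
is_release_build() {
  case "$1" in
    auto-release-pr-*|merge) return 0 ;;
    *) return 1 ;;
  esac
}

is_release_build "merge" && echo "notify"
is_release_build "feature-branch" || echo "skip"
```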
CHANGELOG_BEGIN
CHANGELOG_END
The patch that it downloads via Nix was taken from a GH PR instead of a
commit such that the hash is not fully stable. This adds a patch to
download the relevant patch directly from a GitHub commit.
changelog_begin
changelog_end
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
* record dot updates: update to new ghc-lib-parser
This updates the ghc-lib-parser library featuring record dot updates and
adds tests for the new feature.
CHANGELOG_BEGIN
CHANGELOG_END
* update snapshot after pin on windows
* added a test for error locations
* nested record puns test
* update ghc commit
* update of stack dependencies (linux)
* update stack snapshot(windows)
* participant-integration-api: Build Oracle tests, but don't run them.
CHANGELOG_BEGIN
CHANGELOG_END
* triggers: Switch to an environment variable for enabling Oracle tests.
* http-json: Switch to an environment variable for enabling Oracle tests.
* Disable running Oracle tests by default, not building them.
* triggers/service: Remove unused test dependencies.
The existing public key is set to expire in May, so we've changed it.
Note: this _should_ require no other change as the private key is
unchanged (i.e. the new public key can be used to verify old
signatures), but my understanding of GPG is somewhat limited so 🤷.
CHANGELOG_BEGIN
CHANGELOG_END
* WIP : first cut at changed schema files for oracle
Define Oracle as DbType and handle necessary case match switches for it
recomputed shas for oracle migration scripts
Oracle fixtures
get things compiling
Able to connect to Oracle
Working through getting schema definitions functional with Oracle
runnable schema definitions only for active tables on oracle
delete commented lines in schema scripts
use oracle enterprise
correct inadvertently changed postgres schemas
WIP - latest oracle-ification
passing upload packages spec
add additional test for package upload entry read
correct typo in oracle database spec name
use BLOB for parties ledger_offset
package_entries use hex version of offset for range queries
reformat and update shas for sql scripts
binary numeric implicit conversion for oracle
correct duplicate exception text for oracle
parties test passing on oracle
add additional column to hold hex offset for party_entries
party_entries working for all dbs
scalafmt
Configuration ledger_offset should be BLOB
update sha of oracle sql files
enable passing tests in order
remove misleading null comments
define additional custom VARRAY types
add participant-integration-api-oracle tests to linux-oracle job
Add TODO for places where we need to deal with separate implicit imports for Oracle vs Postgres/H2
oracle implicit conversions for custom arrays and other problematic types
Do not override default debug level for all tests in participant-integration-api
CHANGELOG_BEGIN
Ledger API and Indexer Oracle Support
CHANGELOG_END
passing TransactionWriterSpec
passing JdbcLedgerDaoCompletionsSpec JdbcLedgerDaoDivulgenceSpec
passing JdbcLedgerDaoContractsSpec
All Oracle tests passing apart from one post-commit validation test
* Remove JdbcLedgerDaoValidatedOracleSpec as this is only relevant for classic postgres-backed sandbox
* rebase to master -- offsets are now varchar2 rather than blob
* remove use of DBMS_LOB operations
* remove all greater than/less than variants for DBMS_LOB
* revert postgres files that need not be touched
* code review feedback : avoid code duplication
* avoid indirection in type names for oracle arrays
* code review: HexString implicit conversions are not needed
* code review: Oracle case is not yet implemented for appendonlydao
* code review: Oracle case is not yet implemented for appendonlydao (cleanup import)
* code review: revert files that should not be touched
* address code review feedback: db specific imports for command completion become part of queries
* code review: perform db-specific reserved word escape to avoid case match
* code review: remove all dbms_lob comparison operations
* use simpler insert into with ignore dupes hint for oracle
* code review: avoid db specific match case in events range, use db specific limitClause
* code review: restore group by on Binary and Array fields for H2 and Postgres, disable for Oracle
* code review: restore group by on Binary and Array fields for H2 and Postgres, disable for Oracle
* code review: restore group by on binary and array fields for non-oracle dbs, honour the calculation of limit size from QueryParty.ByArith
* code review: honour the calculation of limit size from QueryParty.ByArith
* code review: drop user after oracle test
* code review: remove drop user as it throws errors due to dangling sessions
* code review: revert incorrectly changed postgres schema files
* code review: clean up TODOs
* Remove // before hostname for consistency with other oracle connection strings
* code review: unambiguously scope table column referenced in select and where queries
* code review: correct duplicate table alias
We've recently seen a few cases where the macOS nodes ended up not
having the cache partition mounted. So far this has only happened on
semi-broken nodes (guest VM still up and running but host unable to
connect to it), so I haven't been able to actually poke at a broken
machine, but I believe this should allow a machine in such a state to
recover.
While we haven't observed a similar issue on Linux nodes (as far as I'm
aware), I have made similar changes there to keep both scripts in sync.
CHANGELOG_BEGIN
CHANGELOG_END
I've seen reports of Artifactory returning 409 when it detects an
invalid POM file, which would map cleanly to our observed behaviour (as
other files do seem to upload fine). I'm not a POM expert so not
entirely sure how to check the actual files, but I do see one error in
the existing, commented code: the path is not a valid Maven repository
path. It should be `groupid/artifactid/version`, i.e. it is currently
missing the `artifactid` bit. So I'd like to try adding that.
I don't know how to test this without making a release, so my plan is to
make a release once this is merged. Open to suggestion on faster ways to
test this.
CHANGELOG_BEGIN
CHANGELOG_END
* Add Oracle support in the trigger service
This PR migrates the ddl & queries and adds tests for this. It does
not yet expose this to users. I’ll handle that in a separate PR.
changelog_begin
changelog_end
* use getOrElse
changelog_begin
changelog_end
base64 includes `/`, which is at the very least pretty confusing and also
broke our cache cleanup, which assumes that the cache suffix takes up
one directory. Afaict, we are not length-restricted in GCP paths, so we
can just use the hex digits we get from md5.
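The difference is easy to see in Python (the key material here is arbitrary):

```python
import base64
import hashlib

digest = hashlib.md5(b"hypothetical cache key").digest()

b64 = base64.b64encode(digest).decode()  # alphabet [A-Za-z0-9+/] plus '=' padding
hexed = digest.hex()                     # alphabet [0-9a-f] only

print(b64)
print(hexed)
# A '/' in a base64 suffix splits it across path components; hex cannot.
assert all(c in "0123456789abcdef" for c in hexed)
```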
changelog_begin
changelog_end
This is adapting the same approach as #9137 to the macOS machines. The
setup is very similar, except macOS apparently doesn't require any kind
of `sudo` access in the process.
The main reason for the change here is that while `~/.bazel-cache` is
reasonably fast to clean, cleaning just that has finally caught up to us
with a recent cleanup step that proudly claimed:
```
before: 638Mi free
after: 1.2Gi free
```
So we do need to start cleaning the other one after all.
CHANGELOG_BEGIN
CHANGELOG_END
In #9169, I changed the compat jobs to not run on releases.
Unfortunately I forgot to update the `collect_build_data` job to know
about that. Hopefully after this has been merged we'll be able to rerun
#9221.
CHANGELOG_BEGIN
CHANGELOG_END
As requested by @cocreature.
Note: skipping the ts_lib job is enough to skip all compat tests because
they all depend on it, and in the Azure model if one of your
dependencies was skipped you get skipped too.
CHANGELOG_BEGIN
CHANGELOG_END
* Generate exception instances from syntax.
changelog_begin
changelog_end
* II
* III
* VII
* update ghc patch and add test
* VIII
* IX
* Remove DatatypeContexts
* X
* update stack snapshot
* don't need datatypecontexts warning anymore
* X-2
* XII
* XIII
Three issues here:
1. The release job runs on an Azure-hosted agent, so it doesn't have the
`reset_caches.sh` script (and doesn't need it).
2. The `bash-lib` step should not run if the current job has already
failed.
3. The `skip-github` jobs should also not run if the job has failed.
CHANGELOG_BEGIN
CHANGELOG_END
This is a continuation of #8595 and #8599. I somehow had missed that
`/etc/fstab` can be used to tell `mount` to let users mount some
filesystems with preset options.
This is using the full history of `mount` hardening so should be safe
enough. The option `user` in `/etc/fstab` automatically disables any kind
of `setuid` feature on the mounted filesystem, which is the main attack
vector I know of.
This works flawlessly on my local VM, so hopefully this time's the
charm. (It also happens to be my third PR specifically targeted on this
issue, so, who knows, it may even work.)
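The shape of such an entry (device and mount point are hypothetical): the `user` option lets a non-root user run `mount` for this filesystem, and per `mount(8)` it implies `nosuid`, `nodev`, and `noexec`:

```
# /etc/fstab
# <device>    <mount point>     <type>  <options>    <dump> <pass>
/dev/sdb1     /home/vsts/cache  ext4    noauto,user  0      0
```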
CHANGELOG_BEGIN
CHANGELOG_END
In Azure YAML, by default, a step runs only if the previous step was
successful. However, that default _disappears if the step has an
explicit condition_. I believe we have a number of conditional steps
that have been written without that intention, and this is thus
restoring what I believe to be the original intention, i.e. _adding_ an
additional condition rather than _replacing_ the default one.
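The restored pattern looks roughly like this (step and variable names invented for illustration):

```yaml
steps:
  - bash: ./notify.sh
    # An explicit condition replaces the implicit succeeded() check, so it
    # has to be re-added by hand to keep "only run if nothing failed".
    condition: and(succeeded(), eq(variables['is_release'], 'true'))
```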
CHANGELOG_BEGIN
CHANGELOG_END
* Release EE SDK tarballs and installer
As before, no way of testing this. I’ll do a snapshot afterwards.
changelog_begin
changelog_end
* .
changelog_begin
changelog_end
* .
changelog_begin
changelog_end
* Rename EE artifacts
changelog_begin
changelog_end
* Move artifact publishing out of yaml files
The current publishing process pretty much hardcodes the set of
artifacts we publish in the yaml config. This is a problem because we
always release from `main` so the yaml files are always
identical. However, we will add new artifacts over time and this
starts falling apart. This PR changes this such that the process
described in the yaml files is very generic and just uploads and
downloads everything in a directory whereas the details are handled in
bash scripts that will come from the respective release branch and are
therefore version-dependent.
As usual for these types of changes, I don’t have a great way to test
this. I did do some due diligence to test that at least the artifacts
are published correctly and I can download them but I can’t test the
actual publishing.
changelog_begin
changelog_end
* Update ci/copy-unix-release-artifacts.sh
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
* Update ci/copy-windows-release-artifacts.sh
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
* Update ci/publish-artifactory.sh
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
* ci/cron/check: remove dade-assist calls
We can only run this in a context where the dev-env is already set up
anyway, as that's how we get Bazel to build the script in the first
place.
CHANGELOG_BEGIN
CHANGELOG_END
* remove skip_java logic
Two quick improvements I made while waiting on #9039:
- Avoid loading Java. Watching the logs flow by, this seemed to be
  taking a huge amount of time.
- Isolate the gcloud config files, which allows for running gcloud
downloads in parallel.
Together these reduce the `check_releases` runtime from about 5 hours to
about 2. There's much more (and smarter) work needed on this, but this
was really easy to do.
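For the gcloud isolation, one real mechanism is the `CLOUDSDK_CONFIG` environment variable, which points the SDK at an alternate config directory; a sketch (the download command is a placeholder, not the real one):

```shell
# Give each parallel invocation its own config dir so concurrent
# gcloud/gsutil runs don't fight over shared credential/config files.
run_isolated() {
  CLOUDSDK_CONFIG="$(mktemp -d)" "$@"
}

run_isolated echo "would run: gsutil cp gs://bucket/artifact ."
```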
CHANGELOG_BEGIN
CHANGELOG_END
The people who care about these alerts monitor the channel closely
enough anyway, and having frequent automated @here bells ringing makes
it harder for individuals to highlight important messages.
CHANGELOG_BEGIN
CHANGELOG_END
* Conduitify download in release file
Apologies, my ocd kicked in when I saw this in another PR.
changelog_begin
changelog_end
* fixup deps
changelog_begin
changelog_end
* i should stop working
changelog_begin
changelog_end
I think the retry is clobbering the files. Here is my theory:
- The HTTP request is lazy, i.e. it starts producing a byte stream
before it has finished downloading.
- The connection somehow crashes in the middle of that lazy handling,
possibly because the Haskell code blocks for too long on something
else and GCP thus closes the connection. (If this is true, making sure
we download the entire thing before we start writing may make the
download more reliable.) This explains why we get a "resource vanished"
and not a plain 404 to start with.
- The retry policy doesn't know anything about HTTP requests; it just
sees an IO action throwing an exception and restarts the whole thing.
- Because the IO action opens the file in Append mode, we thus end up
with a file that is too big and has its "starting bytes" multiple
times. That obviously fails to sign-check.
If this is what happens then the retry does not help at all, which does
seem to be what we've been observing (though I haven't tracked the exact
error rate too closely). The fix would likely be as simple as changing
`IO.AppendMode` to `IO.WriteMode` (which truncates, per [documentation]).
[documentation]: https://hackage.haskell.org/package/base-4.14.1.0/docs/System-IO.html
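A Python analogue of the theorized failure mode (the download and mid-stream crash are simulated): retrying around an append-mode open duplicates the bytes already written, while write mode truncates so the retry starts clean:

```python
import os
import tempfile

def flaky_download(path, mode):
    """Simulate a download whose connection 'vanishes' once mid-stream
    and is then retried by a policy that just reruns the whole action."""
    attempt = 0
    while True:
        attempt += 1
        with open(path, mode) as f:
            f.write(b"HEADER")
            if attempt == 1:
                continue  # resource vanished mid-download; retry from scratch
            f.write(b"BODY")
            return

tmp = tempfile.mkdtemp()
append_path = os.path.join(tmp, "append.bin")
write_path = os.path.join(tmp, "write.bin")

flaky_download(append_path, "ab")  # like IO.AppendMode: partial bytes survive
flaky_download(write_path, "wb")   # like IO.WriteMode: truncates on reopen

print(open(append_path, "rb").read())  # b'HEADERHEADERBODY' -- fails sign-check
print(open(write_path, "rb").read())   # b'HEADERBODY' -- correct content
```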
CHANGELOG_BEGIN
CHANGELOG_END
* Merge Maven uploads for different Scala versions
It turns out Maven will abort an existing staging operation if you
create a new one. This means our jobs race against each other. We
could try to fix that by either sequencing the jobs in a clever
way (annoying and can break things like rerunning if only parts
failed), or by creating more profiles (unclear if you can even have
two profiles for the same group id, even if you do, it’s annoying to
merge).
So in this PR I (grudgingly) merged both uploads into the Haskell
script. This isn’t all bad:
1. It moves some logic from bash embedded in yaml string literals into
Haskell code.
2. It duplicates some versions but it removes duplication in other
places so overall not too much worse.
3. It does however, make things slower. We don’t run this stuff in
parallel. That said, the release step is relatively small (< 5min) and
it only runs on Linux.
We could add CLI arguments to make the Scala versions configurable for
local development. Given that this is blocking releases, I wanted to
get something in that works first and then see what we need in that regard.
changelog_begin
changelog_end
* .
changelog_begin
changelog_end
* .
changelog_begin
changelog_end
* .
changelog_begin
changelog_end
There is really no reason to first capture this to a String and then
putStrLn it. That only causes issues: if we crash with a non-zero
exit code, we won’t output anything.
changelog_begin
changelog_end
* Stop pretending strings are booleans
Sorry for all the mess here. I’m not capable of programming in yaml.
It turns out is_release is a string, not a boolean, and
```
and('false', eq('true', 'true'))
```
is true.
I hate everything about this.
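A loose shell analogue of the same trap (not Azure's expression language, just for illustration): any non-empty string is truthy, including the literal string "false".

```shell
is_release="false"              # a string, not a boolean
if [ -n "$is_release" ]; then   # non-empty string counts as "true"
  echo "release step runs anyway"
fi
```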
changelog_begin
changelog_end
* Name bash step to reduce confusion
changelog_begin
changelog_end
* fix version in test
changelog_begin
changelog_end
* names can’t have spaces apparently
changelog_begin
changelog_end
* Fixup condition for running publish_mvn_npm
This needs to run for both linux and linux-scala-2.13
changelog_begin
changelog_end
* Update ci/build-unix.yml
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
* Fixup scala 2.13 check
Somehow I managed to misread the help output and get confused by my
experiments into thinking semver produces an exit code of 1, 0, or -1,
when it actually writes that to stdout.
changelog_begin
changelog_end
* Update ci/build.yml
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
* Update ci/build.yml
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
* Port Ledger API Test Tool to Scala 2.13
And with that we’re finally at //... building on Scala 2.13.
changelog_begin
changelog_end
* Fix build on 2.12
changelog_begin
changelog_end
* Fix kvutils export on 2.13
changelog_begin
changelog_end
* separate OracleQueries from PostgresQueries
- with some changes from 8161e63189 courtesy @cocreature
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* abstract BIGINT
* json, signatories, observers columns
* compatible lastOffset
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* oracle functions for select (single template ID), insert
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
* add oracle branch to integration tests
* oracle CLI configuration for json-api
* run integration tests with ojdbc in classpath
* update maven_install for ojdbc
* drop table if exists for Oracle
* make create DDLs and drops more planned out; drop in reverse order for Oracle integrity
* repin maven
* port agreement_text
* port (by removal) array part of ledger offset update
* use CASE instead of JSON map lookup for multiparty offset update
* simplify self types
* fix contract archival
* repin
* remove selectContracts in favor of selectContractsMultiTemplate
* move Oracle test execution to separate build target
* move websocket test to itlib
* make a bad array instance for Oracle
* report actually-available JDBC drivers only
* configure Oracle test from CI
* attempt with platforms and constraints
* a mishmash of Bazel changes to get it to conditionally enable Oracle testing
* fix dep resolution in Scala 2.13
* make the Oracle test a stub (inits and does empty DB query)
* remove commented unused deps
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* repin
* we never supply a value for the surrogate ID columns
- suggested by @cocreature; thanks
* add not null to json in DB-specific place
- suggested by @cocreature; thanks
* why DBContractKey
- suggested by @cocreature; thanks
* textType isn't finalized
- suggested by @cocreature; thanks
Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
This fixes Scaladoc and our pom file generation.
It also clears up the confusing error around gatling and removes a
redundant dependency on sbt (no idea why we had that in the first
place) both of which resulted in Scala 2.12 dependencies in our 2.13
lockfile which is obviously bad.
With this, we should now be ready to publish Scala 2.13 artifacts once
the ledger API test tool PR lands.
changelog_begin
changelog_end
Tests are still missing and blocked on #8821.
The main change here is the switch from `ArraySeq[Byte]` to
`ArraySeq.ofByte`. `ArraySeq` allows for boxed and unboxed
representations. That means that `ArraySeq[Byte]#unsafeArray` does not
always return an `Array[Byte]` (the boxed version would be `Array[AnyRef]`).
Apparently collection-compat has taken the yolo approach and pretends
it can give you an `Array[Byte]` anyway 🤷 Scala 2.13, on the other
hand, does things properly in this regard, which means the code relying
on `unsafeArray` fails to compile.
`ArraySeq.ofByte` is the specialized unboxed version where none of
this is an issue on either 2.13 or 2.12.
changelog_begin
changelog_end
* Draw the rest of the Scala 2.13 owl
Not quite, but pretty close, and this switches us over from inclusions
to exclusions, which makes it much easier to track.
Ledger API test tool should be fixed by #8821. Non-repudiation needs a
tiny bit of work since unwrapArray doesn’t work the same on 2.13 but
shouldn’t be hard to fix.
changelog_begin
changelog_end
* Fix ScriptService tests
Those tests were all dumb. They asserted on a fixed order while the
function to sort the things was broken, so we ended up with the random
Map order, which is unsurprisingly not the same.
This is easily fixed by fixing the sort function.
There is also a second issue with query not sorting.
changelog_begin
changelog_end
* Turns out if you fix one test the next one breaks
And clearly nobody ever tested this or gave it a second thought.
changelog_begin
changelog_end
fixes #8498
This fixes the error in 2.13 w.r.t. the location change of Predef. It
doesn’t yet address the warning w.r.t. the import of higherKinds. For
now, our build ignores that warning. Trying to figure out if we can
get away with a breaking change here or if we need to hide that change
behind a flag but either way, no need to block fixing the actual error
on that.
changelog_begin
changelog_end
It does not seem like CI machines recover from a failed clean-up. This
is not the most elegant solution possible, but it's a cheap one that
should work.
Note: shutting down the machine in the middle of the build will not
provide an error message to Slack for main branch builds (because the
`tell_slack_failed` step would need to run on the same machine) but will
correctly report failure for PRs (that was the original purpose of the
`collect_build_data` step).
An alternative here would be to give a delay to the shutdown command,
and try to calibrate it so that it's long enough for this job to
correctly report its failure to both Azure and Slack, while making it
short enough that no other job gets assigned to the machine. I'm not
clear enough on how often Azure assigns jobs to try and bet on that.
CHANGELOG_BEGIN
CHANGELOG_END
* Disable MacOS CI jobs
5/6 macos nodes are down and we cannot fix it quickly, so to unblock
everyone, let’s disable those jobs for now.
I deliberately did not remove MacOS from releases. Those really should run on MacOS.
changelog_begin
changelog_end
* Undo unnecessary changes
changelog_begin
changelog_end
* Allow skipping macos jobs
changelog_begin
changelog_end
My goal here is to investigate the new warning Azure has been showing
for the past few days:
> ##[warning]%25 detected in ##vso command. In March 2021, the agent command parser will be updated to unescape this to %. To opt out of this behavior, set a job level variable DECODE_PERCENTS to false. Setting to true will force this behavior immediately. More information can be found at https://github.com/microsoft/azure-pipelines-agent/blob/master/docs/design/percentEncoding.md
As far as I'm aware we are not deliberately passing in any `%25` in any
of our `vso` commands, so I was a bit surprised by this.
CHANGELOG_BEGIN
CHANGELOG_END
Unfortunately missing the actual interesting part since porting
`partitionBimap` seems to be rather annoying but this at least gets us
started on the easy parts.
changelog_begin
changelog_end
* Add a prototype for DAML Script dumps
This is still fairly rough unfortunately but it does at least have
some tests and it doesn’t interact with anything else, so hopefully we
can land this and then parallelize the work from there on.
changelog_begin
changelog_end
* Update daml-script/dump/src/main/scala/com/daml/script/dump/Encode.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* view all the things
changelog_begin
changelog_end
* Update daml-script/dump/src/main/scala/com/daml/script/dump/Dependencies.scala
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* Fixup the switch to exists
changelog_begin
changelog_end
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
For various reasons, my attempts at improving the cache cleanup process
have been delayed. There are, however, two simple, non-controversial
changes I can "backport" without having to wait for consensus on the
whole thing:
1. Increase the threshold. At least for the compat jobs, we have seen
builds failing after starting with ~32GB free.
2. Kill dangling Bazel processes, which keep some files open and
sometimes cause the clean-up process to crash.
CHANGELOG_BEGIN
CHANGELOG_END
The report currently chokes on quotes in the commit message (see
550aa48fc5). Rather than trying to
correctly escape things in Bash, this PR delegates the quote handling to
jq, because having to deal with Bash embedded in YAML is hard enough.
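A sketch of the approach, with made-up variable and field names: hand the raw string to jq via `--arg` and let it produce correctly escaped JSON instead of escaping by hand in Bash.

```shell
# COMMIT_MSG is an assumed variable name, for illustration only.
COMMIT_MSG='Revert "use double quotes" (it broke the report)'
# jq escapes the embedded quotes itself; no Bash quoting gymnastics needed.
jq -cn --arg message "$COMMIT_MSG" '{commit: $message}'
# -> {"commit":"Revert \"use double quotes\" (it broke the report)"}
```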
CHANGELOG_BEGIN
CHANGELOG_END
We’ve seen a number of "resource vanished (connection reset by peer)"
errors. Slapping some retries on that should hopefully make the CI job
a bit more robust.
changelog_begin
changelog_end
* Clean broken entries from the Bazel cache
This is hopefully a somewhat reasonable workaround for the "output not
created" errors that keep annoying us.
For now, this is just part of the hourly cronjob but we could move it
somewhere else if desired.
changelog_begin
changelog_end
* Fix GCS credentials
changelog_begin
changelog_end
CHANGELOG_BEGIN
- Our Linux binaries are now built on Ubuntu 20.04 instead of 16.04. We
do not expect any user-level impact, but please reach out if you
do notice any issue that might be caused by this.
CHANGELOG_END
* include oauth2 logback config in release tarball
overlooked in https://github.com/digital-asset/daml/pull/8611
* Release trigger-service and oauth2-middleware JARs
changelog_begin
changelog_end
* drop from artifacts.yaml
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
Current reports look like:
```
Disk cache small enough:\n20G/home/vsts/.bazel-cache
```
because `echo` does not convert `\n`. An alternative would be to replace
`echo` with `printf`, but I have had enough issues with
subshells-in-strings lately that I prefer just avoiding them when
possible.
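For reference, the difference between the two (behavior varies by shell: bash's builtin `echo` prints `\n` literally unless given `-e`, while `printf` always interprets it):

```shell
# printf interprets the escape and produces the intended two lines.
printf 'Disk cache small enough:\n20G\n'
# bash's echo (without -e) would print the \n literally, as in the
# broken report above.
```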
CHANGELOG_BEGIN
CHANGELOG_END
I discovered yesterday that the `snapshots.json` (and actually also the
`versions.json`) file is no longer purely internal to the docs process,
as it was meant to be, but is now depended upon by the assistant. This
means the renaming from `snapshots.json` to `hidden.json` cannot happen,
and we reverted that yesterday in #8513 (& #8514), though that was done
in a bit of a hurry. This PR aims at cleaning up the resulting mess and
achieve a better long-term end state.
I will be manually removing the `hidden.json` file as soon as this is
merged, so that nothing ends up depending on _that_. There is no
occurrence of `hidden.json` outside this docs cron so hopefully this
works out.
CHANGELOG_BEGIN
CHANGELOG_END
This is the equivalent of #8515 for Linux. There was some concern that
`bazel` would be upset at having that cache removed, so I spent a fair
amount of time trying to break it (on a Linux VM, as for some reason
`bazel` chooses not to use `~/.cache` on macOS). I could not make
`bazel` unhappy by deleting the whole thing. Deleting random files,
however, did end up producing error messages along the lines of:
```
$ bazel build //...
FATAL: corrupt installation: file '/home/vagrant/.cache/bazel/_bazel_vagrant/install/73d06d52dbf3a8e6ed43f5bf5f115eb0/embedded_tools/src/BUILD' is missing or modified. Please remove '/home/vagrant/.cache/bazel/_bazel_vagrant/install/73d06d52dbf3a8e6ed43f5bf5f115eb0' and try again.
```
which suggest busting the entire thing as a solution, so I think we're
safe here.
CHANGELOG_BEGIN
CHANGELOG_END
Hopefully this works around our recent CI disk space issues, while 80GB
should be large enough that it only happens once per machine per day, so
perf shouldn't be impacted too much.
CHANGELOG_BEGIN
CHANGELOG_END
* Update both hidden.json and snapshots.json
The assistant relies on the latter, our docs cronjob on the former. I
have no idea why we have two but keeping them in sync should be fine.
changelog_begin
changelog_end
* maybe I should test if my code compiles before pushing
changelog_begin
changelog_end
* Port the rest of //ledger/... to Scala 2.13
draw the rest of the fcking owl
Omitted for now are the ledger API test tool which has a dependency
only compatible with 2.12 and the generated code of the Scala
codegen (the codegen compiles and runs with 2.13, the generated code
does not).
changelog_begin
changelog_end
* Less symbols
changelog_begin
changelog_end
* Port more of //ledger/... to Scala 2.13
changelog_begin
changelog_end
* Remove unused dependency
changelog_begin
changelog_end
* Rename bf to factory to reflect the fact that it’s now a Factory
changelog_begin
changelog_end
* Use regex match instead of sliding string equality
changelog_begin
changelog_end
* regex matches are bad
changelog_begin
changelog_end
* Port //ledger/ledger-api-client/... to Scala 2.13
This pulls in Sandbox next and kvutils as a dependency so those now
build on 2.13 as well.
changelog_begin
changelog_end
* Upgrade scala-collection-compat
changelog_begin
changelog_end
* Use toVector.sortBy instead of to(LazyList).sortBy
changelog_begin
changelog_end
* Use a view for passing things to varargs
changelog_begin
changelog_end
* avoid symbol literal in CommandClientIT
changelog_begin
changelog_end
* Port parts of //ledger/... to Scala 2.13
Fairly random choice of directories; I just went through them in
alphabetical order. The one thing that I had to disable for now is
the conformance tests, since the ledger API test tool has a dependency
not compatible with Scala 2.13.
changelog_begin
changelog_end
* Remove accidentally included //ledger/ledger-api-client/...
doesn’t actually work yet
changelog_begin
changelog_end
This makes the docs bundle available as a download from any build on
Azure. I mostly thought of this as a workaround for @bame-da because of
the Big Sur thing, but I figure it may occasionally be useful to other
people too.
CHANGELOG_BEGIN
CHANGELOG_END
The one thing that is still missing is making the generated Scala code
from the codegen compatible with Scala 2.13 so the examples are
excluded for now.
changelog_begin
changelog_end
* Replace many occurrences of DAML with Daml
* Update docs logo
* A few more CLI occurrences
CHANGELOG_BEGIN
- Change DAML capitalization and docs logo
CHANGELOG_END
* Fix some over-eager replacements
* A few more occurrences in md files
* Address comments in *.proto files
* Change case in comments and strings in .ts files
* Revert changes to frozen proto files
* Also revert LF 1.11
* Update get-daml.sh
* Update windows installer
* Include .py files
* Include comments in .daml files
* More instances in the assistant CLI
* some more help texts
* Port the rest //daml-lf/... to Scala 2.13
Draw the rest of the owl
changelog_begin
changelog_end
* Update daml-lf/encoder/src/main/scala/com/digitalasset/daml/lf/archive/testing/DamlLfEncoder.scala
Co-authored-by: Remy <remy.haemmerle@daml.com>
Co-authored-by: Remy <remy.haemmerle@daml.com>
* Port damlc dependencies to Scala 2.13
I got a bit fed up by the fact that going directory by directory
didn’t really work since there are too many interdependencies in
tests (e.g., client tests depend on sandbox, sandbox tests depend on
clients, engine tests depend on DARs which depend on damlc, …).
So before attempting to continue with the per-directory process, this
is a bruteforce approach to break a lot of those cycles by porting all
dependencies of damlc which includes client bindings (for DAML Script)
and Sandbox Classic (also for DAML Script).
If this is too annoying to review let me know and I’ll try to split it
up into a few chunks.
changelog_begin
changelog_end
* Update daml-lf/data/src/main/2.13/com/daml/lf/data/LawlessTraversals.scala
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* fixup lawlesstraversal
changelog_begin
changelog_end
* less iterator more view
changelog_begin
changelog_end
* document safety of unsafeWrapArray
changelog_begin
changelog_end
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* Port //daml-lf/interpreter to Scala 2.13
For now the perf tests are left out since they depend on a DAR built
by damlc which depends on daml script which depends on the world
:exploding-head:
changelog_begin
changelog_end
* Scala 2.13-style to for ImmArray and FrontStack
changelog_begin
changelog_end
* Avoid extra conversion
changelog_begin
changelog_end
* Port //daml-lf/(parser|validation) to Scala 2.13
changelog_begin
changelog_end
* Rename (Expr|Type)Traversable to (Expr|Type)Iterable
changelog_begin
changelog_end
For a couple weeks now there has been a warning on the Azure Pipelines
web UI that says `ubuntu-latest` is in the process of switching from
18.04 to 20.04. I am not aware of any specific issue this would cause
for our particular workflows, but I don't like my dependencies changing
from under me.
CHANGELOG_BEGIN
CHANGELOG_END
This is another take on #8276, with the same underlying motivation.
However, this approach is mostly duplication-free, which seems better,
especially given the already-pretty-sorry state of our CI config.
Like #8276, this is done in 2 commits for ease of review. The first
commit is wholly uninteresting and just copies `azure-pipelines.yml` to
both `ci/prs.yml` and `ci/build.yml`; the second commit removes from
each part what it shouldn't have. The intention is for `ci/build.yml` to
have all of the common parts.
CHANGELOG_BEGIN
CHANGELOG_END
This commit changes the docs cron to create a new file
`docs.daml.com/latest`, a simple text file containing the version number
of the latest released version. This is done in response to #8354, to
avoid having to replicate the logic for which version is the latest
across this, the assistant and the get-daml.sh script.
This commit also cleans up a small transitional FIXME in the docs cron
regarding the transition from `snapshots.json` to `hidden.json`.
For the solution to #8354 to be complete, we'll also need to update
get-daml.sh and the assistant to use the new latest file. Note that this
file would only be published on the next stable release, so this commit
also includes a temporary hack to re-generate it (and `versions.json`
and `hidden.json`) unconditionally on every run; this can be removed as
soon as this has run once.
CHANGELOG_BEGIN
CHANGELOG_END
As we strive for more inclusiveness, we are becoming less comfortable
with historically-charged terms being used in our everyday work.
This is targeted for merge on Dec 26, _after_ the necessary
corresponding changes at both the GitHub and Azure Pipelines levels.
CHANGELOG_BEGIN
- DAML Connect development is now conducted from the `main` branch,
rather than the `master` one. If you had any dependency on the
digital-asset/daml repository, you will need to update this parameter.
CHANGELOG_END
Turns out bash is hard and I’m stupid :sadpanda:
We need to write the output to stderr, otherwise it ends up in the
JSON output, which obviously is not valid JSON.
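A minimal sketch of the fix (the helper name is made up): route diagnostics to stderr so stdout stays machine-readable JSON.

```shell
# Hypothetical logging helper: everything it prints goes to stderr.
log() { echo "$*" >&2; }
log 'fetching versions...'    # lands on stderr, not in the JSON stream
printf '{"status":"ok"}\n'    # stdout remains valid JSON
```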
changelog_begin
changelog_end
We changed the patch to target more than one file, which made the
checkout insufficient for restoring the state; the subsequent
`git checkout` of current then fails with:
```
error: Your local changes to the following files would be overwritten by checkout:
stack-snapshot.yaml
```
A `git reset --hard` should make sure everything gets reset.
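A minimal reproduction sketch in a throwaway repo (file names and contents are made up) showing why `git reset --hard` unblocks the subsequent checkout where a per-file checkout does not:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci
echo 'resolver: lts-old' > stack-snapshot.yaml
git add stack-snapshot.yaml && git commit -qm 'pin snapshot'
echo 'resolver: lts-new' > stack-snapshot.yaml  # the patch step's edit
git reset -q --hard HEAD   # discards ALL local changes, not just named files
cat stack-snapshot.yaml    # back to the committed state; checkout won't fail
```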
changelog_begin
changelog_end