In order to make the flamegraphs easier for humans to consume, we change
the profiler output in the following ways:
1. Don't print package ids anymore. They are not particularly useful
but cause a lot of noise.
2. Remove a few useless angle brackets and move the printed names of
DAML-LF constructs closer to their surface-level names.
3. Unmangle identifiers on a best-effort basis.
4. Give the profiles shorter names so that they don't occupy the
whole screen and leave some space for the navigation buttons of the
Speedscope UI.
CHANGELOG_BEGIN
CHANGELOG_END
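The best-effort unmangling of point 3 can be sketched as follows. This is a hedged Python illustration; the `$uXXXX` escape scheme and the `unmangle` name are assumptions for the example, not necessarily the exact DAML-LF mangling:

```python
import re

def unmangle(name: str) -> str:
    # Decode `$uXXXX` escapes back to the original character; anything
    # that does not match the pattern is left untouched (best effort).
    return re.sub(r"\$u([0-9a-fA-F]{4})",
                  lambda m: chr(int(m.group(1), 16)), name)

print(unmangle("foo$u0027bar"))  # foo'bar
```

Leaving unrecognised input untouched is what makes the approach best-effort: a name that was never mangled passes through unchanged.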
This adds a function withTriggerServiceAndDb which runs a test twice, once with and once without a database, and succeeds only if both runs succeed. This will be useful for reusing test logic with both backends and for making sure behaviour is consistent. I have used this function where possible, but it won't work for everything until stop is implemented on the DB side.
At the moment this new function squashes two tests into one, making it hard to tell whether a failure occurred with or without the database. In a future PR I will investigate using an abstract class to run the tests separately (hopefully with altered descriptions).
This feature required a few changes in the process, mainly:
- Use PostgresAroundAll to connect to / disconnect from the database before and after all tests run
- Add a destroy method to the TriggerDao to reset the database between tests
- Use the TriggerDao in the withTriggerService functions to initialize / clean up the database at the start / end of each test
- Sort trigger instances in the list command using Scala's sort rather than relying on Postgres' ordering of UUIDs. This also means we need to use UUIDs for trigger instances in the tests and sort nonempty vectors in expected results.
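The run-twice combinator can be sketched like this (a Python model; the `jdbc_config` parameter is a hypothetical stand-in for the real trigger-service configuration):

```python
def with_trigger_service_and_db(test):
    # Run `test` once without and once with a database; the combined
    # test succeeds only if both runs succeed.
    ok_without_db = test(jdbc_config=None)
    ok_with_db = test(jdbc_config="jdbc:postgresql://localhost/triggers")
    return ok_without_db and ok_with_db

# A trivial test that passes with either backend:
print(with_trigger_service_and_db(lambda jdbc_config: True))  # True
```

A test that only passes for one backend makes the combined test fail, which is exactly the consistency check we want.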
Currently the report fails with `variables[Build.SourceBranchName]:
command not found`, which is obviously not what we want (it mixes up
the syntax of Azure's YAML config and Bash). Looking at the
code in `tell-slack-failed.yml`, that one does seem to work, but I
haven't tested this so :crossed-fingers:.
changelog_begin
changelog_end
automated ghc-lib build
This PR aims at automating the build of ghc-lib. The current process
still has a few manual steps; it needs to be updated because Bintray is
going away, so this seemed like a good opportunity to fully automate it.
This works like the "patch bazel on Windows" jobs: the filename will
contain a hash of the `ci/da-ghc-lib` folder, and the job will run only
if the corresponding filename does not yet exist on the GCS bucket. PRs
aiming at changing the ghc-lib version will need to run twice: once to
create the artifacts, and once to change the `stack-snapshot.yaml` file
to match.
CHANGELOG_BEGIN
CHANGELOG_END
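The content-addressed gating can be sketched as follows. A hedged Python model: the hashing scheme and artifact naming are assumptions for illustration, not the exact CI implementation:

```python
import hashlib

def folder_hash(files: dict) -> str:
    # Deterministically hash a folder snapshot, given as a mapping
    # from relative path to file contents (standing in for a real
    # directory walk of the `ci/da-ghc-lib` folder).
    h = hashlib.sha256()
    for path in sorted(files):  # sorted: independent of listing order
        h.update(path.encode())
        h.update(files[path])
    return h.hexdigest()

key = folder_hash({"BUILD": b"...", "patches/ghc.patch": b"..."})
# The job would then skip the build if an artifact named after `key`
# (e.g. ghc-lib-<key>.tar.gz) already exists on the GCS bucket.
print(len(key))  # 64
```

Because the key only changes when the folder's contents change, re-running the job on an unchanged folder is a cheap no-op.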
* Introduce CLI option for input buffer size
Also improves CLI help text for other back-pressure related options.
changelog_begin
[Sandbox] Allow configuring ``--input-buffer-size``, which tunes the number of commands waiting to be submitted before the Sandbox applies back-pressure; run ``daml sandbox --help`` for more info.
changelog_end
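The behaviour the option tunes can be modelled as a bounded buffer. This is only a Python sketch of the idea; the real Sandbox is Scala and signals back-pressure differently:

```python
from queue import Full, Queue

class CommandBuffer:
    # Bounded buffer of pending commands: once `input_buffer_size`
    # commands are waiting, further submissions are rejected until
    # the buffer drains.
    def __init__(self, input_buffer_size: int):
        self.queue = Queue(maxsize=input_buffer_size)

    def submit(self, command) -> bool:
        try:
            self.queue.put_nowait(command)
            return True
        except Full:
            return False  # back-pressure: caller must retry later

buf = CommandBuffer(input_buffer_size=2)
print([buf.submit(c) for c in ("a", "b", "c")])  # [True, True, False]
```

A larger buffer absorbs bigger submission bursts at the cost of memory; a smaller one pushes back on clients sooner.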
* Update ledger/sandbox/src/main/scala/com/digitalasset/platform/sandbox/cli/Cli.scala
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
* Include rules_haskell revision in platform suffix
Hopefully this makes CI a bit less of a dumpster fire. I've also
followed the comment and made the suffix actually 3 characters long
instead of 2, since that makes me worry less about collisions and
should hopefully still be short enough not to hit MAX_PATH.
changelog_begin
changelog_end
* Update ci/configure-bazel.sh
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
This rule is repeated for every file. While we cache the computation,
we still forced the result to normal form, which is super slow. On my
(real-world) test case, this gives a speedup of more than 1.7x, cuts
allocations to a third, and reduces max residency to a third as well.
changelog_begin
changelog_end
This small PR makes a few QoL improvements to the release.sh script:
1. The snapshot command will now work for any commit. Previously, it
would refuse to print the snapshot suffix for commits that were not
ancestors of the `master` branch. The new version will print a
warning if the commit does not seem to be part of a release branch,
but will still print the result.
2. On checking the LATEST file, the script will now print a slightly
more useful error message if the file format is not valid.
3. The snapshot command will now print the entire line to be added into
the LATEST file, rather than just the version suffix.
CHANGELOG_BEGIN
CHANGELOG_END
Currently the message to Slack is always triggered by running the daily
checks. This means that it gets very noisy to:
1. Run the check on PRs affecting the check (like this one),
2. Rerun the check multiple times to ascertain that a given failure is
flaky.
With this PR, the message to Slack is replaced with a simple `echo` when
these checks are not run from the `master` branch, so whoever (manually)
triggered them can still get feedback on the result, but other people
don't get spurious `@here` mentions.
CHANGELOG_BEGIN
CHANGELOG_END
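The branch gating amounts to something like the following (hypothetical function names; the real logic lives in the Azure pipeline YAML):

```python
def report_failure(branch: str, message: str) -> str:
    # On `master`, the daily check posts to Slack (with an `@here`
    # mention); on any other branch it just echoes the result, so
    # whoever triggered the run still gets feedback without pinging
    # everyone else.
    if branch == "master":
        return f"slack: @here {message}"
    return f"echo: {message}"

print(report_failure("my-pr-branch", "daily check failed"))  # echo: daily check failed
```

Only the delivery channel changes; the check itself runs identically on every branch.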
Having them set to `false` when they will immediately be set to `true`
does not make too much sense. This allows for simplifying uses of
`useFetchByKey` where the key is known to always be present, since you
can now operate under the assumption that the contract is never `null`
when the loading indicator is `false`.
This fixes #6171.
CHANGELOG_BEGIN
- @daml/react: Initialize the loading indicators of ``useQuery``,
``useFetchByKey`` and their streaming variants with ``true``. This
removes a glitch where the loading indicator was ``false`` for a very
brief moment when components using these hooks were mounted although
no data had been loaded yet. Code using these hooks does not need to
be adapted in response to this change.
CHANGELOG_END
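The invariant behind this change can be modelled as a tiny state machine (a Python sketch, not the actual TypeScript hook implementation):

```python
class FetchByKey:
    # Model of the hook invariant: `loading` starts True and only
    # flips to False once data has arrived, so `contract` is never
    # None while `loading` is False (for a key known to exist).
    def __init__(self):
        self.loading = True   # initialized to True, not False
        self.contract = None

    def data_arrived(self, contract):
        self.contract = contract
        self.loading = False

hook = FetchByKey()
print(hook.loading)  # True -- no initial "not loading" glitch
hook.data_arrived({"owner": "alice"})
print(hook.loading, hook.contract is not None)  # False True
```

With the old initialization (`loading = False` at construction), there was a window where `loading` was `False` and `contract` was still `None`, which is exactly the glitch the changelog entry describes.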
* events denormalization, WIP
* too soon to drop
* migration script
* add generated sha256 digest
* fixing the migration script naming
has to be a double underscore after the version number
* flat event table queries
* write witnesses to events table during insert; disable inserts to witnesses tables
* use varchar[] for new witness columns instead of text[]
* ::varchar[] cast
* remove event witnesses table support code
* lookupFlatTransactionById works for postgres
* lookupTransactionTreeById works with postgres
* fixing the queries, replacing @> with &&
* cleanup
* multi-party postgres queries, WIP
* fixing multi-party queries, thanks @stefano.baghino
* fixing wildcardParties query
* minor cleanup
* h2 schema changes, h2 queries is WIP
* inlining some constants
* SqlFunctions introduced
* reformat
* Adding `SqlFunctions.arrayIntersectionValues`
* Removing truncates for deleted tables
* filtering tree_event_witnesses
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
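The switch from `@>` (containment) to `&&` (overlap) in the witness queries corresponds to the following visibility check (a Python model of the SQL predicate):

```python
def visible_to(event_witnesses, requesting_parties) -> bool:
    # Mirrors the SQL overlap operator `witnesses && parties` (used
    # instead of the containment operator `@>`): the event is visible
    # if at least one requesting party is among its witnesses.
    return bool(set(event_witnesses) & set(requesting_parties))

print(visible_to(["alice", "bob"], ["bob", "carol"]))  # True
print(visible_to(["alice"], ["carol"]))                # False
```

Containment would only match when *all* requesting parties are witnesses, which is the wrong semantics for multi-party queries.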
This drops the `Documentation` section from the generated typescript
library docs on docs.daml.com, because it contains a link that points to
itself.
CHANGELOG_BEGIN
CHANGELOG_END
CHANGELOG_BEGIN
[create-daml-app] Change the create-daml-app template so that it can run against an HTTP JSON API port specified in the environment variable REACT_APP_LEDGER_ID
CHANGELOG_END
* Optimize the execution of Saturated Builtin Applications in Speedy.
We special case applications where the expression in function-position is a builtin operator, and the number of arguments matches the arity of the builtin. The special-case detection is done at compile time, and allows for more efficient runtime execution, specifically:
- We don't need to construct an `SPAP` value, only to immediately deconstruct/enter it.
- We don't need to do arity checking at runtime, with special case handling for _partial-_ and _over-_ applications.
The change gives about 3% speedup.
changelog_begin
changelog_end
* improve doc comments & make class names more descriptive
* share code for evaluating arguments
* improve name: SEAppSaturatedBuiltinFun
* optimize over-applied builtin function applications
* fix bug in the refactoring which introduced evaluateArguments
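The compile-time special-casing can be sketched like this. A toy Python model, not Speedy itself; the node names are made up for illustration:

```python
BUILTINS = {"ADD": (2, lambda x, y: x + y)}  # name -> (arity, impl)

def compile_app(fun, args):
    # Compile-time special case: when the function position is a
    # builtin and the argument count matches its arity, emit a direct
    # node that needs no PAP construction or runtime arity check.
    if fun in BUILTINS and len(args) == BUILTINS[fun][0]:
        return ("SaturatedBuiltin", fun, args)
    return ("GenericApp", fun, args)  # partial/over-application path

def eval_node(node):
    kind, fun, args = node
    if kind == "SaturatedBuiltin":
        return BUILTINS[fun][1](*args)  # fast path, no closure value
    raise NotImplementedError("general application machinery")

print(eval_node(compile_app("ADD", [1, 2])))  # 3
print(compile_app("ADD", [1])[0])             # GenericApp
```

Partial and over-applications still take the general path; only exact-arity calls get the fast node, which is why the detection can be done once at compile time.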
* replace NodeExercises#controllers with controllersDifferFromActors
* remove controllers from ActorMismatch and scenario service exercise
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* remove reserved ID #s in scenario-service grpc
As discussed, we don't need to worry about
version mismatches.
Co-authored-by: Remy <remy.haemmerle@daml.com>
* Insert running trigger to DB when using one
If the DB write fails, the server sends itself a
TriggerInitializationFailure message so that the corresponding trigger
runner is stopped and the table is in sync with the actors.
We still need to retry writes here.
Includes a basic test that runs the server with a JDBC config set and
adds a trigger, expecting a new entry to be added to the DB. However, it
does not check the running trigger table, which we can do once reads are
implemented.
changelog_begin
changelog_end
* Await on future in test
* Update to new assertTriggerIds
* Apply scalafmt suggestions
* Create index on party token
* Read db in list command
* Update comment in test script
* Remove outdated comment
* Fix strings in insert and select
* Clean up test
* Add a second trigger in the db test
* Fix comment in test script
* Comment db tables
* Order trigger instances in list command
* Comment about TriggerDao execution context
* Moved caching-related classes under the validator.caching package.
* Introduced CacheUpdatePolicy for controlling what type of state keys should be updated in the cache and when.
* Consistently use 'cache update policy'.
CHANGELOG_BEGIN
CHANGELOG_END
* Made it explicit what policy we are testing against.
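Conceptually, a cache update policy looks something like this (a Python sketch; the method names are assumptions, not the real Scala API):

```python
from abc import ABC, abstractmethod

class CacheUpdatePolicy(ABC):
    # Decides which kinds of state keys are cached and when.
    @abstractmethod
    def should_cache_on_write(self, key) -> bool: ...

    @abstractmethod
    def should_cache_on_read(self, key) -> bool: ...

class CacheEverythingPolicy(CacheUpdatePolicy):
    # Simplest possible policy: cache every key on both paths.
    def should_cache_on_write(self, key) -> bool:
        return True

    def should_cache_on_read(self, key) -> bool:
        return True

policy = CacheEverythingPolicy()
print(policy.should_cache_on_write("some-state-key"))  # True
```

Factoring the decision into a policy object lets the validator share one caching mechanism while different deployments choose different rules.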
* Remove older isLoading function calls.
* Clarify omitting of query argument to use(Stream)Query
CHANGELOG_BEGIN
CHANGELOG_END
Co-authored-by: Martin Huschenbett <martin.huschenbett@posteo.me>
* DAML profiler: Use non-blocking IO for writing profiles
As suggested by @SamirTalwar-DA.
CHANGELOG_BEGIN
CHANGELOG_END
* Follow Samir's advice even closer
CHANGELOG_BEGIN
CHANGELOG_END
* Store trigger history
changelog_begin
changelog_end
* Harvest trigger histories
changelog_begin
changelog_end
* Switch to Vector over List (and other bits and bobs)
* Use a better verb for updating trigger status method
* Add a comment
* Fix mangled comments
The issues underlying #6173 prompted me to look into which `find`
version we had in dev-env, and I was surprised to notice we had none. We
already have `findutils` in our nix configuration, however, so this is
just adding the symlink.
Note that, on my machine at least, switching from the macOS-provided one
to the dev-env one does not change the result order for the hash
calculation in `ci/patch_bazel_windows`, so this would likely not have
helped for #6173. Still, we use `find` in many places in our scripts so
I think it's worth having there.
CHANGELOG_BEGIN
CHANGELOG_END
* Special case atomic expressions in the functional-position of applications.
We regard builtins, values and variables as atomic. The special-case detection is done at compile time; at run time, the atomic case avoids one push/pop on the continuation stack.
For the bench example, about 2/3 of the applications performed at run time fall into the special case, leading to an overall reduction of about 15% in the steps taken by the Speedy CEK machine.
This change gives a 3% to 4% performance improvement.
changelog_begin
changelog_end
* address review comments
* Sort files when calculating CACHE_KEY
The order returned by `find` is unspecified and seems to have changed
for whatever reason in some cases. This changed the cache key, which is
obviously not intended. It looks like the one we currently have in our
scoop manifest is the one that we get by sorting; reversing the sort
produces the one CI currently calculates.
changelog_begin
changelog_end
* update manifest to match CI output
Co-authored-by: Gary Verhaegen <gary.verhaegen@digitalasset.com>
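The fix boils down to making the key a function of the file set rather than of `find`'s traversal order, along these lines (a Python sketch; the real CACHE_KEY computation is shell):

```python
import hashlib

def cache_key(paths):
    # Hash the file list after sorting, so the key no longer depends
    # on the unspecified order in which `find` returns the files.
    h = hashlib.sha256()
    for path in sorted(paths):
        h.update(path.encode())
    return h.hexdigest()[:8]

# Same files, different traversal order, same key:
print(cache_key(["b.bzl", "a.bzl"]) == cache_key(["a.bzl", "b.bzl"]))  # True
```

Any canonical ordering would do; sorting is just the cheapest one to make deterministic across platforms.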
* Making `PaginatingAsyncStream.streamFrom` more generic
so that it does not specify what exactly `Offset` is.
changelog_begin
changelog_end
* Addressing code review comments + cleanup
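Genericity in the offset type means the streaming code only threads offsets through a page-fetching callback and never inspects them, roughly like this (a Python sketch; the real `streamFrom` is Scala):

```python
def stream_from(start, fetch_page):
    # Generic in the offset type: offsets only flow into `fetch_page`
    # and back out as the next-page token, so they can be ints,
    # strings, or any opaque value.  `None` means "no more pages".
    offset = start
    while offset is not None:
        items, offset = fetch_page(offset)
        yield from items

# Offsets happen to be ints here, but could be anything:
pages = {0: ([1, 2], 2), 2: ([3], None)}
print(list(stream_from(0, pages.__getitem__)))  # [1, 2, 3]
```

Because the streaming loop never looks inside an offset, changing the ledger's offset representation requires no change here.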
Previously, we just crashed the scenario service instead of throwing a
proper scenario error. This meant that you had to look at the
debugging output to figure out what was going wrong. This is both a
shitty UX and inconsistent with how we handle this for fetch and
exercise on contract ids that are not visible. This PR adds a new
error type that matches the one for invisible contract ids.
changelog_begin
- [DAML Studio] Fetches and exercises of contract keys associated with
contracts not visible to the submitter are now handled properly
instead of showing a low-level error.
changelog_end
fixes #5903
* Add additional metrics when storing transactions
Since event witnesses will soon be denormalized into the participant_events
table, I did not include metrics right now.
CHANGELOG_BEGIN
[DAML Ledger Integration Kit] Add additional metrics for storing transactions. The overall time is measured by ``daml.index.db.store_ledger_entry``.
- Timer ``daml.index.db.store_ledger_entry.prepare_batches``: measures the time for preparing batch insert/delete statements
- Timer ``daml.index.db.store_ledger_entry.events_batch``: measures the time for inserting events
- Timer ``daml.index.db.store_ledger_entry.delete_contract_witnesses_batch``: measures the time for deleting contract witnesses
- Timer ``daml.index.db.store_ledger_entry.delete_contracts_batch``: measures the time for deleting contracts
- Timer ``daml.index.db.store_ledger_entry.insert_contracts_batch``: measures the time for inserting contracts
- Timer ``daml.index.db.store_ledger_entry.insert_contract_witnesses_batch``: measures the time for inserting contract witnesses
- Timer ``daml.index.db.store_ledger_entry.insert_completion``: measures the time for inserting the completion
- Timer ``daml.index.db.store_ledger_entry.update_ledger_end``: measures the time for updating the ledger end
[Sandbox Classic] Added Timer ``daml.index.db.store_ledger_entry.commit_validation``: measures the time for commit validation in Sandbox Classic
CHANGELOG_END
* Refactoring: rename metrics *dao to *DbMetrics
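The nesting of the overall timer and the per-phase timers can be illustrated as follows (a Python sketch with hypothetical plumbing; the real metrics are Dropwizard timers in Scala):

```python
import time
from contextlib import contextmanager

TIMINGS = {}

@contextmanager
def timer(name):
    # Accumulate the wall-clock time of one phase under a metric name
    # such as `daml.index.db.store_ledger_entry.insert_contracts_batch`.
    start = time.monotonic()
    try:
        yield
    finally:
        TIMINGS[name] = TIMINGS.get(name, 0.0) + (time.monotonic() - start)

with timer("daml.index.db.store_ledger_entry"):        # overall time
    with timer("daml.index.db.store_ledger_entry.prepare_batches"):
        pass  # prepare batch insert/delete statements here

print(sorted(TIMINGS))
```

The per-phase timers always sum to no more than the enclosing overall timer, which makes it easy to see which phase dominates a slow store.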