daml/libs-scala/scala-utils/BUILD.bazel

# Copyright (c) 2021 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
# SPDX-License-Identifier: Apache-2.0

load(
    "//bazel_tools:scala.bzl",
    "da_scala_library",
    "da_scala_test",
    "lf_scalacopts",
"silencer_plugin",
)
load("@scala_version//:index.bzl", "scala_major_version", "scala_version_suffix")
scalacopts = lf_scalacopts + [
    "-P:wartremover:traverser:org.wartremover.warts.NonUnitStatements",
]
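
# scala-utils: shared utility code, compiled from the common sources plus the
# sources specific to the configured Scala major version.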
da_scala_library(
    name = "scala-utils",
    srcs = glob(["src/main/scala/**/*.scala"]) + glob([
        "src/main/{}/**/*.scala".format(scala_major_version),
    ]),
    plugins = [
        "@maven//:org_typelevel_kind_projector_{}".format(scala_version_suffix),
        silencer_plugin,
    ],
    scala_deps = [
"@maven//:org_scala_lang_modules_scala_collection_compat",
"@maven//:org_scalaz_scalaz_core",
],
scalacopts = scalacopts,
tags = ["maven_coordinates=com.daml:scala-utils:__VERSION__"],
visibility = [
"//visibility:public",
],
deps = [
],
)
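
# Tests for scala-utils, built on the shared scalatest-utils helpers.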
da_scala_test(
    name = "test",
    srcs = glob(["src/test/scala/**/*.scala"]),
    plugins = [
        "@maven//:org_typelevel_kind_projector_{}".format(scala_version_suffix),
    ],
    scala_deps = [
        "@maven//:com_chuusai_shapeless",
        "@maven//:org_scalaz_scalaz_core",
    ],
    scalacopts = scalacopts,
    deps = [
        ":scala-utils",
        "//libs-scala/scalatest-utils",
    ],
)