* Move all datatypes out of daml-prim
This moves the remaining two modules DA.Types and GHC.Tuple to
separate LF packages with stable identifiers.
The only data types remaining are the ones for typeclasses, which will
disappear once we move those over to type synonyms.
CHANGELOG_BEGIN
- [DAML Compiler] The modules DA.Types and GHC.Tuple from daml-prim
have been moved to separate packages.
CHANGELOG_END
* Fix codegen tests
* Fix DarReader test
* Fix kvutils tests
* Fix jdbcdao tests
* Fix hs ledger bindings tests
* Add ledger and participant ID to claims
CHANGELOG_BEGIN
- [Ledger] AuthService implementations can now restrict the validity of access tokens to a single ledger or participant.
- [Sandbox] The sandbox JWT authentication now respects the ledgerId and participantId fields of the token payload.
CHANGELOG_END
* Add tests for ledger and participant in claims
* Address review comment
* Address review comment
* Fix tests
* Fix tests
* Full package name collision check
* Handle type synonyms appropriately
* Better comment
* Make isAscendant case-insensitive
* Document isAscendant and explain case-insensitivity
* Add a package-wide name collision test.
* ledger-api-scala-logging: Fix errors in IntelliJ IDEA.
The Bazel plugin for IntelliJ doesn't seem to be smart enough to handle
a Scala library that is partly `src`-directory code and partly code
generated by another Bazel rule; it simply ignores the second part.
This means that IntelliJ cannot find the *ServiceLogging classes, as
they're not represented on the Bazel-generated classpath, and so
complains with lots of errors when working on the equivalent Api*Service
files.
To fix this, we can split these in two, compiling the base traits to
`ledger-api-scala-logging-base` and then the generated code separately.
It does result in an extra Bazel dependency for users of
ledger-api-scala-logging, as Bazel doesn't resolve transitive
dependencies for us.
* Release: Add `ledger-api-scala-logging-base` to the Maven list.
* Ensure the access token is initialized when constructing a client
CHANGELOG_BEGIN
- [Java Client] Ensure the access token is initialized when using a
deprecated constructor.
CHANGELOG_END
* Improve phrasing and grammar
* ledger: Document the health checks.
* sandbox: Build a Docker image.
* sandbox: Create a sample Kubernetes YAML file.
* sandbox: Add health probes to the sample Kubernetes configuration file.
Startup and liveness are tested with a simple TCP connection to port
6865. Readiness checks are done with `grpc-health-probe`, which is added
to the Sandbox container image.
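As an illustrative sketch only (container layout and probe parameters are assumptions, not the actual `kubernetes.yaml`), probes like the ones described could be declared as:

```yaml
# Illustrative only: probe timings are made up; port 6865 and
# grpc-health-probe come from the commit description above.
livenessProbe:
  tcpSocket:
    port: 6865
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  exec:
    command: ["grpc-health-probe", "-addr=localhost:6865"]
  periodSeconds: 5
```

The TCP probe only checks that the port accepts connections, while `grpc-health-probe` queries the gRPC Health Checking service, so readiness is the stricter of the two.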
* sandbox: Link to kubernetes.yaml in the README and provide a disclaimer.
// changelog removed as it's not actually relevant to users
* sandbox: Don't try and build `sandbox-image-base` on Windows.
* Apply suggestions from code review
Co-Authored-By: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* daml2ts: Add E2E tests
* Attempt to fix failing test
* Don't try to delete the temp directory
* Cleanup block can't be empty
* Leave TMP_DIR before removing it
* sandbox: HikariJdbcConnectionProvider.start(), rather than construction.
It's too easy to construct things; I'd like it to be obvious that we're
starting something (in this case a connection pool).
* sandbox: In SqlLedgerSpec, stop ledgers before shutting down PostgreSQL.
* sandbox: Perform JDBC health checks on a timer.
The problem with hooking health checks into existing requests is that if
the Sandbox is running behind a load balancer and reports itself as
unhealthy, it might be taken out of the load balancer. With no requests
going through, it will never notice that it is healthy again, and so
will report itself as unhealthy forever.
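The actual implementation is in Scala; as a minimal, language-neutral sketch (all names here are hypothetical, not the Sandbox's API), a timer-driven check that keeps probing regardless of traffic might look like:

```python
import threading

class TimedHealthCheck:
    """Runs check_fn every `interval` seconds, independent of request
    traffic, so an unhealthy status can flip back to healthy on its own."""

    def __init__(self, check_fn, interval=1.0):
        self._check_fn = check_fn
        self._interval = interval
        self.healthy = False
        self._timer = None

    def _run(self):
        try:
            self.healthy = self._check_fn()
        except Exception:
            self.healthy = False
        self._schedule()

    def _schedule(self):
        self._timer = threading.Timer(self._interval, self._run)
        self._timer.daemon = True
        self._timer.start()

    def start(self):
        # Check immediately, then keep re-scheduling on the timer.
        self._run()

    def stop(self):
        if self._timer:
            self._timer.cancel()
```

The key property is that `_run` re-schedules itself, so recovery is detected even when no client requests arrive.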
* RxJava Bindings: Allow bot to run on a scheduler
It seems that having many bots results in some sort of deadlock
or blocking of data flow within the flowable network.
Adding some async boundaries to allow for concurrent processing
seems to help.
Fixes #2356
CHANGELOG_BEGIN
- [RxJava Bindings] Added a method to the ``Bot`` class allowing users to specify a ``Scheduler`` to use for running the bot. See `issue #2356 <https://github.com/digital-asset/daml/issues/2356>`__.
CHANGELOG_END
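In the Java bindings this is done by passing an RxJava `Scheduler` to the `Bot`; as a rough Python analogue of the same idea (all names here are hypothetical), giving each bot its own execution context decouples its processing from the shared event thread:

```python
from concurrent.futures import ThreadPoolExecutor

def run_bot(events, process, executor):
    """Hand each event to the given executor instead of processing it on
    the caller's thread, so many bots sharing one event source cannot
    block each other."""
    return [executor.submit(process, e) for e in events]

# The executor plays the role of the Scheduler: an async boundary.
executor = ThreadPoolExecutor(max_workers=4)
futures = run_bot([1, 2, 3], lambda e: e * 2, executor)
results = sorted(f.result() for f in futures)
```
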
* Go easy, test!
* Fix compiler warnings in generated enums
* Fix warnings in generated equals method for parameterized types.
* Remove warning in equals for records without fields.
CHANGELOG_BEGIN
- [Java Bindings] Removed warnings in code emitted by the Java Codegen.
CHANGELOG_END
* fix compilation error in tests
* sandbox: Inline SqlExecutor into DbDispatcher.
There's no clear delineation of responsibilities. I'll try to separate
some of this back out again later.
* sandbox: Make it clear that we're "starting" the DbDispatcher.
* sandbox: Simplify the promise-based behavior in DbDispatcher.
* sandbox: Rename `noOfShortLivedConnections` to `maxConnections`.
The timeout for witnessing the party allocation is now increased
to 30 seconds (from 10 seconds). Additionally, the LotsOfParties
conformance test is marked as flaky.
* added tagmanager to docs
- 3rd party integration
- event tracking
* Removed the duplicate TagManager script
* Pushed the wrong script, sorry. This PR has the proper one
* Sandbox: Remove streaming connections
The separate database connection pool for streaming connections
was only used for the active contracts stream. However, a single
db connection was being occupied until the last active contract was
streamed over the Ledger API to the client. This effectively meant
that only two concurrent active contract streams could ever exist.
Needless to say, this is bad design.
The following changes happened:
- remove the db connection pool for streaming connections
- replace the streaming mechanism for active contracts with
the already existing pagination mechanism in JdbcLedgerDao
- change the pagination mechanism to actually use database level
limit and offset instead of doing the pagination "client side"
- configure the HikariDataSource with the metric registry
CHANGELOG_BEGIN
- [Sandbox] Improve loading of active contracts for the Sandbox SQL backend.
CHANGELOG_END
* Extract PaginatingAsyncStream from JdbcLedgerDao for testing
* Reset metrics registry before each test
In #3706 we fixed the SDK versions of the daml-trigger and daml-script
libraries and (rightfully) stopped filtering out 0.0.0 from the SDK
version check in `damlc build`. However, this broke daml-sdk-head
since we still distributed the DARs in daml-sdk-head with the
Sdk-Version set to whatever the current release is rather than 0.0.0.
* Refactor packaging logic
This is a first step towards cleaning up the packaging logic and
adding some comments to make it clearer what is going on. There are no
functional changes in this PR.
There is more stuff here that we can and should clean up, but I will
leave that for separate PRs.
* Update compiler/damlc/lib/DA/Cli/Damlc/Packaging.hs
Co-Authored-By: associahedron <231829+associahedron@users.noreply.github.com>
* Update compiler/damlc/lib/DA/Cli/Damlc/Packaging.hs
Co-Authored-By: associahedron <231829+associahedron@users.noreply.github.com>
* Update compiler/damlc/lib/DA/Cli/Damlc/Packaging.hs
Co-Authored-By: associahedron <231829+associahedron@users.noreply.github.com>
* Update compiler/damlc/lib/DA/Cli/Damlc/Packaging.hs
Co-Authored-By: associahedron <231829+associahedron@users.noreply.github.com>
* Document topological sorting
* Undo requiredE change
* toSqlWhereClause WIP
* literal cases for SQL predicate generation
* fill in the remaining Literals' sql-equal data
* add other likely elements of the toSqlWhereClause fold
* SQL ListMatch case
* SQL VariantMatch case
* partial SQL RecordSubset case
* SQL Range case
* comments on JSON encoding for DB
* new relationship for the 3 paths in toSqlWhereClause Rec
- conversion to Fragment is further delayed to allow mixed-mode matching
of records
* handle new Rec semantics in ListMatch (can't drop @> on the floor anymore)
* compile RecordSubset to SQL
* compile MapMatch to SQL
* note optimization for record & map
* optimize = when @> unavailable for record subsets
* optimize range when degenerately =
* don't discard the @> safety of empty test sets
* unnested SQL optional matching
* converting DB JSON to response body JSON
* compiling nested optionals to SQL
* add missing length check to ListMatch
* remove unneeded parens from generated predicates
* remove 1 = 1 leader from generated predicates
* test the generated SQL from toSqlWhereClause
* searchDb WIP
* searchDb integrated
* coerce party to text in selectContracts; log that query
* fixing query
* fixing scala formatting
* removing unused functions
* removing unused type alias
* checking that the search returned exactly what we searched for,
also checking that all contracts got stored in the DB
* factor commonalities of MapMatch and RecordSubset
* cleanup
* cleanup
* changelog
CHANGELOG_BEGIN
- [JSON API - Experimental] Queries will always run against Postgres if Postgres is
configured. See `issue #3388 <https://github.com/digital-asset/daml/issues/3388>`_.
CHANGELOG_END
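The `@>` discussed above is PostgreSQL's JSONB containment operator. As a rough sketch of the idea (the real generator works over typed query ASTs; these helper names are made up), compiling an exact field match on a JSON payload could look like:

```python
import json

def to_sql_where(field_matches):
    """Compile a dict of exact field matches into a JSONB containment
    predicate: payload @> '{"field": value}' matches any row whose
    payload contains all of the given fields."""
    fragment = json.dumps(field_matches, sort_keys=True)
    return "payload @> %s" % sql_quote(fragment)

def sql_quote(s):
    # Naive string-literal quoting, for illustration only; real code
    # would use parameterized queries.
    return "'" + s.replace("'", "''") + "'"
```

Containment is attractive because a single GIN-indexed `@>` covers record-subset, map, and list matches; the commits above deal with the cases (e.g. numeric range, nested optionals) where `@>` alone is not safe and a fallback comparison is needed.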
* fix record =-unsafe detection
Sadly, I haven’t managed to come up with an isolated test case for
this but I’ve tested this on a large internal codebase and it fixes an
issue with damlc build --incremental=yes.
* sandbox: If the Flyway migrations fail, crash the process.
Otherwise running it in the background can make it look like
everything's OK.
* reference-v2: If the Flyway migrations fail, crash the process.
Otherwise running it in the background can make it look like
everything's OK.
* sandbox: Remove an errant debugging `println`.
* sandbox: Use `DirectExecutionContext.implicitEC`.
This still contains the main class, so you can use it like you would
use the fat jar, but publishing fat jars to Maven Central is apparently
bad practice and some people have asked for the library.
This includes some slight tweaks to the scala_docs rule to make it
capable of coping with the generated source file, and a hack in the
release script to avoid it complaining about the scenario proto
library not being published to Maven even though it is included in the
transitive deps.
* Spin off TokenHolder into a new library
Avoids weird dependencies between different packages and makes TokenHolder available on Maven
* Fix auth-utils path
* language: suffix all dalf dependencies in a dar with the pkgid.
This makes sure that dalf dependencies are not accidentally overwritten
when two packages with equally named dalfs are imported.
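As a sketch of the renaming scheme (the helper name and the exact suffix format are assumptions, not the compiler's actual code):

```python
import os

def suffix_with_pkgid(dalf_path, pkg_id):
    """Append the package id to a dalf file name, so two dependencies
    that both ship e.g. 'lib.dalf' no longer collide inside the dar."""
    base, ext = os.path.splitext(dalf_path)
    return "%s-%s%s" % (base, pkg_id, ext)
```

Because package ids are content hashes, the suffixed names are unique even when the human-readable package names clash.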
* factor out parseUnitId