* Start working on getting rid of unreleased.rst
Document the new process in CONTRIBUTING.md,
.github/pull_request_template.md, and unreleased.rst (for good measure).
Report the previous changelog additions here so that they're not lost in
the mists of time.
CHANGELOG_BEGIN
- [DAML Stdlib] Added the ``NumericScale`` typeclass, which improves the type inference for Numeric literals, and helps catch the creation of out-of-bound Numerics earlier in the compilation process.
- [DAML Triggers] ``emitCommands`` now accepts an additional argument
that allows you to mark contracts as pending. Those contracts will
be automatically filtered from the result of ``getContracts`` until
we receive the corresponding completion/transaction.
- [Navigator] Fixed a bug where Navigator becomes unresponsive if the ledger does not contain any DAML packages.
- [Ledger API] Added the field ``gen_map`` to the Protobuf definition of
Ledger API values. This field is used to support generic maps, a new
feature currently in development. See
https://github.com/digital-asset/daml/pull/3356 for more details
about generic maps.
The Ledger API will not send messages in which this field is set when
using a stable version of DAML-LF. However, the addition of this
field may cause exhaustive pattern-matching warnings in the code of
Ledger API clients. Those warnings can be safely ignored until
GenMap is made stable in an upcoming version of DAML-LF.
- [JSON API - Experimental] CLI configuration to enable serving static content as part of the JSON API daemon:
``--static-content "directory=/full/path,prefix=static"``
This configuration is NOT recommended for production deployment. See issue #2782.
- [Extractor] The app can now work against a Ledger API server that requires client authentication. See `issue #3157 <https://github.com/digital-asset/daml/issues/3157>`__.
- [DAML Script] This release contains a first version of an experimental DAML Script
feature that provides a scenario-like API that is run against an actual ledger.
- [DAML Compiler] The default DAML-LF version is now 1.7. You can
still produce DAML-LF 1.6 by passing ``--target=1.6`` to
``daml build``.
- [JSON API - Experimental] The database schema has changed; if using
``--query-store-jdbc-config``, you must rebuild the database by adding
``,createSchema=true``.
See `issue #3461 <https://github.com/digital-asset/daml/pull/3461>`_.
- [JSON API - Experimental] Terminate process immediately after creating schema. See issue #3386.
- [DAML Stdlib] ``fromAnyChoice`` and ``fromAnyContractKey`` now take
the template type into account.
CHANGELOG_END
* Document new release process to gather changelog additions
* Change the release script to ignore unreleased.rst
* Remove spurious unreleased.rst lines
* Transition to use tags
* Document new way to get changelog additions with tags
* Update release/RELEASE.md
Co-Authored-By: Gary Verhaegen <gary.verhaegen@digitalasset.com>
* Address https://github.com/digital-asset/daml/pull/3547#discussion_r348438786
* Document correction process
* Add copyright header to unreleased.sh
* Update CONTRIBUTING.md
Co-Authored-By: Gary Verhaegen <gary.verhaegen@digitalasset.com>
* Modify CONTRIBUTING.md after @garyverhaegen-da's proposal
* Make unreleased.sh run per commit and treat tags as case-insensitive
* Fix documentation for replacements
* replace JSON witnessParties column with a PG text[] column
* notes about the concurrent behavior of updateOffset
* include witness party in selectContracts query
* fmt
* fix witness_parties fetching in http-json tests
* release note
* typo in prior release note
* move new release note to bottom
* StaticContentEndpoint
* cleanup
* release notes
* Add test
* SDK doc update
* Copying the directory content explicitly
to avoid creating a symlink, which I believe is the cause of the Windows issue
* creating a static content dir from the test
* fixing release notes
* creating a tmp dir
* fixing release notes
* Using Gen.Identifier to generate random strings
Previously, we used SValue.fromValue for the conversion. However, this
breaks in cases like Numeric where the scale information is lost. By
using the ValueTranslator instead, we avoid this issue.
There is a similar problem in DAML Script but I’ll fix that in a
separate PR.
Since the ValueTranslator is package private, this PR moves the
triggers into the engine package.
* comparison query parser given scalar parser
- written in half-error-propagation style to better suit other potential
error features
* factor dupes in RangeExpr
* text range parsing
* interpreting Range into in-memory predicate; points out bad dedupe from earlier
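For illustration, a minimal sketch of what interpreting a range expression into an in-memory predicate can look like; the `Range` shape and field names are assumptions for the example, not the actual http-json types:
```scala
// Hypothetical shape of a parsed range expression (not the real http-json AST).
final case class Range[A](gte: Option[A], gt: Option[A], lte: Option[A], lt: Option[A])

// Interpret a Range into an in-memory predicate over values of type A.
def toPredicate[A](r: Range[A])(implicit ord: Ordering[A]): A => Boolean = { a =>
  import ord._
  r.gte.forall(a >= _) &&
  r.gt.forall(a > _) &&
  r.lte.forall(a <= _) &&
  r.lt.forall(a < _)
}
```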
* make reuse of the scalar extractors much nicer
* express date, time, int64 as factored-out range exprs
* express numeric as factored-out range expr
* factor \&/ usage
* refactor LF value extractors for reuse in range queries
* factor mkRange usage
* totally deconstruct the int64 and text cases
* totally deconstruct the date and timestamp cases
* totally deconstruct the numeric case
* document comparison queries
* use Utf8.Ordering for text comparison queries
* int64 range query tests
* more int64 range query tests
* date, string, numeric range query tests
* include line # in query test successes table
* timestamp range query tests
* add release note
* remove duplicate changelog entry from #3425
* Retry on UNIQUE_VIOLATION
* Update release notes
* Hiding method that is not supposed to be used directly
* using connection.raiseError
* Force re-run on stale offset update
The parameters for the private key and the certificate were swapped,
resulting in extractor not being able to establish a secure connection
to the ledger.
* language: introduce data-imports
Right now the user experience for importing DALFs and DARs from
different SDKs is quite confusing. This PR tries to solve this. We add
an additional field `data-imports` to daml.yaml. These imports can come
from different SDKs, and we will generate interface files containing the
data types and their Template instances.
This also simplifies the migration command, as it now always imports the
respective packages as `data-imports`.
* Implement proper stream pagination
The previous pagination mechanism for streaming ledger entries was implemented
as a recursive method call that manually concatenated akka Sources.
However, this didn't work properly: all the subsources were
forced immediately, resulting in parallel requests for all pages
(0-100, 100-200, 200-300, ...) instead of the expected behaviour of loading
the first page and, on demand (i.e. when the client requests more data
over gRPC), loading the next page.
The current mechanism uses Source.unfoldAsync, which makes the paging
work as expected: sequential loading of pages on demand.
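For illustration, a minimal sketch of the demand-driven paging pattern described above; `loadPage` is a hypothetical stand-in for the actual DB query, not the real server API:
```scala
import akka.NotUsed
import akka.stream.scaladsl.Source
import scala.concurrent.{ExecutionContext, Future}

// loadPage(startInclusive, pageSize) stands in for the query that reads one
// page of ledger entries starting at the given offset.
def pagedEntries[T](pageSize: Int)(loadPage: (Long, Int) => Future[Vector[T]])(
    implicit ec: ExecutionContext): Source[T, NotUsed] =
  Source
    .unfoldAsync(0L) { start =>
      loadPage(start, pageSize).map { page =>
        if (page.isEmpty) None               // no more entries: complete the stream
        else Some((start + page.size, page)) // next offset to load, page to emit
      }
    }
    .mapConcat(identity) // pages are only loaded as downstream demand arrives
```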
* Serialize and deserialize Transactions outside the SQL Executor
This frees up SQL Executor threads sooner for other work,
and the conversion only happens on an as-needed basis when
the consumer requests more data.
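Roughly, the shape of the change (names are illustrative, not the actual server code): the SQL Executor only produces the raw serialized rows, and deserialization happens in a downstream map stage that runs lazily as the consumer pulls.
```scala
import akka.NotUsed
import akka.stream.scaladsl.Source

// `Raw` is the serialized row as read on the SQL Executor, `Tx` the
// deserialized transaction; the map stage runs outside the SQL Executor
// and only when downstream asks for more elements.
def deserializeLazily[Raw, Tx](rawRows: Source[Raw, NotUsed])(
    deserialize: Raw => Tx): Source[Tx, NotUsed] =
  rawRows.map(deserialize)
```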
* Add support for on-disk incremental builds in damlc build
* Normalise file paths of internal modules because Windows
* stop stealing my $s hlint
* Apparently jars are also called exe
* Address review comments
* Bump to proper ghcide revision
* Divulged contract visibility in multi-participant environments #3351
* Ledger api server time service optional ability for testing #3225
* Allow ledger api server to share DAML-on-X DAML engine #2975
* Allow ledger api server participant ids with LedgerString chars #3327
* Ledger api server includes SQL description in errors #3324
* Display release notes using webview
* Use const and fix string
* Check for version upgrade before showing release notes
* Changelog entry
* Use node-fetch instead of web-request
* Remove spurious state update
* new library ledger-service/db-backend
* borrow contracts table schema from extractor
* borrow contract insertion, removing some data to be unused
* match contract schema with insert function
* factor insertContract arguments
* offset table declarations
* CLI argument for query store
* surrogate template IDs
* compute surrogate template IDs on-the-fly
* database init action
* incoherent typeclasses, eh
* newtype SurrogateTpId
* offset fetch/update functions
* bad sql
* bulk insert contracts, function for selecting contracts
* expose contract column name for query's usage
* Initializing DB on startup if configured
* dropping existing tables as part of initialization
* fix some query syntax errors
* createSchema flag
* function for streaming transactions with jwt party selected
* formatting
* usage
* collect acs contracts and the ledger offset at the end
* lastOffset
* fixing merge conflicts, updating the way 3rd party deps are specified
* Moving ContractDao into http-json module
so it can take domain AST as an input
* cleanup
* injecting new dependencies
* split transaction batches into inserts and deletes
* generate sql for deleting contracts
* `fetch_sources = True` for java_deps
* make the delete-constructed fragment more efficient; handle empty list here
* pass logHandler for insertContracts
* ContractDao returns ConnectionIO; it's up to the caller to wrap the query in a transaction
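A sketch of the caller-side pattern this implies, assuming doobie (only `transact` and the types are the library API; the method name here is made up):
```scala
import cats.effect.IO
import doobie._
import doobie.implicits._

// The DAO composes queries as ConnectionIO values; the caller decides the
// transaction boundary by running the composed value through a Transactor.
def runTransactionally[A](queries: ConnectionIO[A], xa: Transactor[IO]): IO[A] =
  queries.transact(xa)
```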
* fixing typo
* minor cleanup, moving fromLedgerApi factory function into corresponding companion objects
* don't need it any more
* GetActiveContractsResponse => domain.Contract factory
* make concatFragment private
* add partition graph; move other contract-fetching experiments to ContractsFetch
* experimenting with akka sources
* introducing domain.Offset to work around API's empty/null offset cases
* minor cleanup
* decompose fetchActiveContractsFromOffset
* missed via
* ACS splitting graph
* finish doc for ACS splitting graph
* remove unneeded stages
* WIP
* lazily read a stream of ConnectionIO into a single ConnectionIO
* cancel on IO error
* figuring out how to put all the pieces together
* graph WIP
* Removing workflowId from the JSON API
* simplify acsAndBoundary; describe other flow pieces
* WIP
* use Vector in InsertDeleteStep; add variant for ACS (no deletes)
* `org.wartremover.warts.NonUnitStatements` enforced in `http-json` module
* evaluate InsertDeleteStep to a ConnectionIO
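A very rough sketch of that evaluation, with hypothetical field names and DAO operations (the real InsertDeleteStep and DAO signatures may differ):
```scala
import cats.implicits._
import doobie.ConnectionIO

// Accumulated batch of contracts to insert and contract IDs to delete.
final case class InsertDeleteStep[C](inserts: Vector[C], deletes: Set[String])

// Evaluate the step against the query store via DAO-level bulk operations.
def evaluate[C](step: InsertDeleteStep[C])(
    deleteContracts: Set[String] => ConnectionIO[Int],
    insertContracts: Vector[C] => ConnectionIO[Int]): ConnectionIO[Unit] =
  deleteContracts(step.deletes) *> insertContracts(step.inserts).void
```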
* database variant of LfValueCodec, using numbers for numbers
* convert input to JSON, combine insert plans, connect rest of contractsToOffsetIo
* remove strict contractsToOffset sink
* moving dao methods into an object
* putting pieces together
* contractsFromOffset WIP
* should be it
* cleanup
* cleanup
* contractsIo that takes List[domain.TemplateId.RequiredPkg]
* cleanup
* put all pieces together, testing
something does not work yet
* diff is not required to return anything
that is why we use Sink.lastOption, which gives Option[domain.Offset]
* factor out tuple split
* use traverse syntax in contractsIo2
* factor explicit flow steps out of graph DSL; remove aggregate
* locally model the Absolute/Begin distinction for offset bookmarking in DB
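For illustration, one way such a local model might look (type and constructor names are assumptions, not necessarily those used in the codebase):
```scala
// Either we have a bookmark at an absolute ledger offset, or only the ledger begin.
sealed abstract class OffsetBookmark[+Off] extends Product with Serializable
final case class AbsoluteBookmark[+Off](offset: Off) extends OffsetBookmark[Off]
case object LedgerBegin extends OffsetBookmark[Nothing]
```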
* Adding test cases to run HTTP Service with Postgres backend
the same set of test cases, run with and without DB backend
* make better use of domain.Offset in OffsetBoundary
* monomorphize InsertDeleteStep#append
* Disabling a test that fails with DB backend
* add release note
* add release note about workflowId
* a test case that checks the number of stored contracts
* trying to figure out why Postgres test fails on Windows with NPE
* Allow data A = A by prepending DamlEnum$ to type name.
* Single con enums basically work.
* Fix export lists for single constructor enum types.
* Revert "Fix export lists for single constructor enum types."
This reverts commit 7475a3dfbe3531d3ef62fdbcfe64c01a9e22d7af.
* Switch to a "stupid theta" approach
* Clean up enum type preprocessor
* Run enum preprocessor on generated code.
* Add daml-docs golden test for single constructor enums
* s/genPreprocessor/generatedPreprocessor/g
* Update copyright header
* Update release notes
* Remove unnecessary OverloadedStrings
The only reason for having AbsoluteContractId was that we could get
some more instances, in particular `Ord` and `MapKey`, but given that
`ContractId` will be a valid key type for the new DAML-LF maps, we can
just use slower implementations for now and switch to Map-based
implementations once that has landed.
fixes #3336
For numeric, the superclass dicts can be of the form `dict @10`, so only
handling variables doesn’t work. Now we walk down applications and
check the name on the left.
The symptom of receiving messages larger than the configured maxInboundMessageSize is a
gRPC error like:
Oct 31, 2019 1:52:37 PM io.grpc.internal.AbstractClientStream$TransportState inboundDataReceived
INFO: Received data on closed stream
Fixes #3301.
* Bring back daml integration kit docs
This just revives the documentation, without updating it yet.
* Updated URLs and removed references to the IndexService
* Add release note for revival of integration kit docs
* Update unreleased.rst
* update release notes
* Don't cache the GHC Core produced during compilation
In our experiments, this reduced the memory footprint by ca. 18% on a very
big code base.
* Adapt integration tests
* Fix integration tests