* ledger-api: Use `proto_jars`.
CHANGELOG_BEGIN
- [Ledger API] The Scala JARs containing the gRPC definitions no longer
contain the *.proto files used to generate the ScalaPB-based classes.
CHANGELOG_END
* Create a source JAR for *.proto files in `proto_jars`.
* ledger-api: Publish the protobuf sources as "ledger-api-proto".
CHANGELOG_BEGIN
- [Ledger API] The *.proto files containing the gRPC definitions are now
provided by a new Maven Central artifact, with the group "com.daml"
and the artifact name "ledger-api-proto".
CHANGELOG_END
* release: We don't need the "main-jar" option.
* Bazel: Proto JARs will always have a Maven artifact suffix.
* Bazel: Simplify Protobuf source file TAR and JAR targets.
* Bazel: Extract out Protobuf functions.
* resources: Release sequenced resources in parallel.
This isn't used much, but it has been bothering me for a while: while we
acquire the resources in parallel, we used to release them sequentially.
This reimplements `sequence` so that they're all released at once.
CHANGELOG_BEGIN
CHANGELOG_END
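The change can be sketched like this (hypothetical `Res` type, not the actual `Resource` API): a fold chains the releases one after another, while mapping over the releases eagerly starts them all at once.

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

object ParallelRelease {
  // Hypothetical stand-in for a resource with an asynchronous release step.
  final case class Res(name: String, release: () => Future[Unit])

  // Before: a fold chains the releases, so each one waits for the previous.
  def releaseSequentially(resources: List[Res]): Future[Unit] =
    resources.foldLeft(Future.unit)((acc, r) => acc.flatMap(_ => r.release()))

  // After: start every release eagerly, then wait for all of them at once.
  def releaseInParallel(resources: List[Res]): Future[Unit] =
    Future.sequence(resources.map(_.release())).map(_ => ())
}
```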
* resources: Drop an unnecessary `.map`.
* resources: Fix the Scaladoc for `sequence`.
* Obtain refresh token from Auth0
Auth0 requires the `offline_access` scope to be set to return a refresh
token.
See https://auth0.com/docs/tokens/refresh-tokens/get-refresh-tokens
Additionally, the `audience` claim needs to be set to obtain a JWT
access token and a refresh token.
See https://auth0.com/docs/tokens/refresh-tokens
changelog_begin
changelog_end
* Implement refresh endpoint on auth middleware
Following the refresh spec [1] and Auth0 documentation [2].
[1]: https://tools.ietf.org/html/rfc6749#section-6
[2]: https://auth0.com/docs/tokens/refresh-tokens/use-refresh-tokens
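The refresh exchange in [1] boils down to a form-encoded POST against the token endpoint; a minimal sketch of the request body (client credentials here are hypothetical):

```scala
import java.net.URLEncoder

object RefreshRequest {
  // The refresh request per RFC 6749, section 6: a form-encoded POST to the
  // token endpoint with `grant_type=refresh_token`.
  def refreshBody(refreshToken: String, clientId: String, clientSecret: String): String =
    Seq(
      "grant_type" -> "refresh_token",
      "refresh_token" -> refreshToken,
      "client_id" -> clientId,
      "client_secret" -> clientSecret
    ).map { case (k, v) => s"$k=${URLEncoder.encode(v, "UTF-8")}" }.mkString("&")
}
```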
* Adapt Auth0 example configuration
Ignore any requests outside the ledger-api audience.
Don't throw on missing query fields. Otherwise the unhandled exception
would prevent unrelated requests from succeeding. E.g. token refresh
requests would always fail.
* Forward unauthorized/forbidden response on refresh
* re-use precomputed token payload
* Implement token refresh in auth test server
Reuses the association between authorization code and token payload to
associate refresh tokens with token payloads.
Adds an expiry to the generated tokens to make them distinguishable
across refreshes.
* obtain refresh token in test client
* Test auth server refresh token
* auth test server clock configurable
The clock used to define token expiry is now configurable.
* Override default clock in test fixture
* implement an adjustable clock
* Test token refresh with adjustable clock
* Test token expiry on /auth backend
* Test case for auth middleware /refresh endpoint
* handle malformed code/refresh token in auth server
* Forward client errors on middleware refresh
* Test middleware refresh failure
* Clarify meaning of offline access
* Remove redundant testing only comment
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
* restate the submit stage as a Flow and derived Sink
* take submit out of the trigger-to-submit flow
* type for the failures produced directly by command submission
* directly connect the msgSource failure queue to the submitter output
* parens
* slow down submission as we exceed max parallel submissions
* restricting alterF so it will be usable with ConcurrentMap
* disable buffer for the delay
* split out the delay function
* drafting a retry loop
* degenerate test for retry loop, factoring the forAllFuture utility
* map input to retrying properly
* make retrying accessible to tests
* test happy path and fix off-by-one
* further tests for retrying
* reveal that elements can get lost
* more determinism in test
* let failures block further elements from being attempted
- Previously, failures went into a separate queue, where they awaited expiry
  of their delay while further initial upstream elements were given their first
  tries. However, closing the upstream could mean that queue was dropped, and
  detecting that situation is not trivial. So, instead, we don't use a separate
  queue.
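The resulting shape can be sketched as follows (hypothetical names, not the trigger code itself): a failed element is retried in place, so later elements are not attempted until it has succeeded or exhausted its retries, and no separate failure queue exists to be lost.

```scala
import java.util.concurrent.{Executors, ThreadFactory, TimeUnit}
import scala.concurrent.{ExecutionContext, Future, Promise}

object RetryInline {
  private val scheduler = Executors.newSingleThreadScheduledExecutor(new ThreadFactory {
    def newThread(r: Runnable): Thread = { val t = new Thread(r); t.setDaemon(true); t }
  })

  private def after[A](delayMs: Long)(body: => Future[A]): Future[A] = {
    val p = Promise[A]()
    scheduler.schedule(
      new Runnable { def run(): Unit = p.completeWith(body) },
      delayMs,
      TimeUnit.MILLISECONDS)
    p.future
  }

  // Each element is attempted up to `tries` times, with a delay between
  // attempts; the fold means a later element is not started until the current
  // one has succeeded or exhausted its retries.
  def retrying[A, B](tries: Int, delayMs: Long)(xs: List[A])(attempt: A => Future[B])(
      implicit ec: ExecutionContext): Future[List[B]] = {
    def one(x: A, remaining: Int): Future[B] =
      attempt(x).recoverWith {
        case _ if remaining > 1 => after(delayMs)(one(x, remaining - 1))
      }
    xs.foldLeft(Future.successful(List.empty[B])) { (acc, x) =>
      acc.flatMap(bs => one(x, tries).map(bs :+ _))
    }
  }
}
```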
* plug retrying into the trigger submission flow
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* remove throttle; pendingCommandIds may leak
* report random parameter on failure
* revert comment about throttling
* explanation for fail in the error queue
- suggested by @cocreature; thanks
* Check the trigger dao migrations digest
Following the example of the corresponding ledger on SQL tests.
The digests had to be updated as both of them had gone out of sync.
The init digest presumably changed due to the change in #7226, and the
one for adding the access token changed during review of #7890.
changelog_begin
changelog_end
* define abstract migrations test
* Use abstract migrations test in trigger service tests
* use abstract migrations test in ledger on SQL
* Retain check for number of .sql resources
* Factor out the hash-migrations script
* Consistent shell settings
Addressing review comment
Co-authored-by: Andreas Herrmann <andreas.herrmann@tweag.io>
I saw a failure in CI due to timer fluctuation.
> 94 milliseconds was not around 100 milliseconds [95, 1000)
This widens the tolerance on the lower bound from 5ms to 20ms to be safe.
CHANGELOG_BEGIN
CHANGELOG_END
* resources: Move builders into //ledger/ledger-resources.
Keep the actual constructors in a trait, but instantiate it when working
with ledger code.
This allows us to later introduce an extra "context" type parameter to
ResourceOwner.
* resources-akka: Move the builders into //ledger/ledger-resources.
* resources: Introduce an abstract `Context` parameter for owners.
This replaces the concrete `ExecutionContext`. While it _can_ be an
execution context, it really doesn't matter as long as we can get at one
somehow.
This is being introduced so we can wrap the context in a container,
either for type tagging or to include extra information.
Because our current context _is_ `ExecutionContext`, and an implicit is
provided to extract it, we can end up with two ways to get the same
value. We use shadowing to prevent this. This problem should go away in
the near future when a new context type is added.
CHANGELOG_BEGIN
- [Integration Kit] The `ResourceOwner` type is now parameterized by a
`Context`, which is filled in by the corresponding `Context` class in
the _ledger-resources_ dependency. This allows us to pass extra
information through resource acquisition.
CHANGELOG_END
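A minimal sketch of the idea (hypothetical names, not the actual daml API): the owner only demands that an `ExecutionContext` can be extracted from whatever the `Context` is, so the context can later grow extra information without touching the owners.

```scala
import scala.concurrent.ExecutionContext

// The owner asks only that an ExecutionContext be extractable from `Context`.
trait HasExecutionContext[C] {
  def executionContext(context: C): ExecutionContext
}

// One concrete context: today it just wraps an ExecutionContext, but it can
// grow extra fields (e.g. a logging context) without changing the owners.
final case class ResourceContext(executionContext: ExecutionContext)

object ResourceContext {
  implicit val hasExecutionContext: HasExecutionContext[ResourceContext] =
    (context: ResourceContext) => context.executionContext
}

abstract class AbstractResourceOwner[Context: HasExecutionContext, A] {
  protected def executionContext(implicit context: Context): ExecutionContext =
    implicitly[HasExecutionContext[Context]].executionContext(context)
  def acquire()(implicit context: Context): A
}
```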
* ledger-resources: Move `ResourceOwner` here from `resources`.
* ledger-resources: Remove dependencies from outside //ledger.
* ledger-resources: Wrap the acquisition execution context in `Context`.
So we can add a logging context to it.
* resources: Pass the Context, not the ExecutionContext, to Resource.
* Avoid importing `HasExecutionContext`.
* ledger-resources: Publish to Maven Central.
* resources: Make the small changes suggested by @stefanobaghino-da.
Co-Authored-By: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* ledger-resources: Pull out a trait for test resource contexts.
Saves a few lines of code.
* Restore some imports that were accidentally wildcarded.
* resources: Replace an `implicit def` with a couple of imports.
* participant-integration-api: Simplify the JdbcLedgerDaoBackend tests.
Try and use the right execution context where possible.
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* add silent_annotations option to da scala bazel functions
* use silent_annotations for several scala targets
* use silencer_plugin instead when the lib isn't used
* use silent_annotations for several more scala targets
* use silencer_lib for strange indirect requirement for running tests
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* silent_annotations support for scaladoc
* metrics: Support tagged Futures when timing.
* ledger-on-sql: Use tagged execution contexts in `Database`.
We have to deal with multiple execution contexts in `Database`. This
makes it possible to use them implicitly, which is much cleaner.
CHANGELOG_BEGIN
CHANGELOG_END
* ledger-on-sql: Simplify `Database` a little.
* ledger-on-sql: Make the connection pool implicit.
* ledger-on-sql: Move the execution context into the connection pool.
* ledger-on-sql: Make connection pools more implicit.
* ledger-on-sql: Use the `sc` prefix for `scala.concurrent`.
* ledger-on-sql: Remove an unnecessary import.
* concurrent: Tag DirectExecutionContext.
1. Tag `DirectExecutionContext` as `ExecutionContext[Nothing]`, thereby
stating that it works for any tagged `Future`.
2. Move `DirectExecutionContext` to the _libs-scala/concurrent_
library, as it requires it and it's tiny.
CHANGELOG_BEGIN
CHANGELOG_END
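The tagging trick can be sketched like this (hypothetical names): the tag is a phantom type with no runtime cost, and tagging the direct context with `Nothing` makes it acceptable wherever any tag is required, by covariance.

```scala
import scala.concurrent.ExecutionContext

object TaggedExecutionContext {
  // A phantom tag: `ExecutionContextOf[T]` is an ExecutionContext that also
  // carries the type `T`, with no runtime representation.
  sealed trait Tag[+T]
  type ExecutionContextOf[+T] = ExecutionContext with Tag[T]

  def tag[T](ec: ExecutionContext): ExecutionContextOf[T] =
    ec.asInstanceOf[ExecutionContextOf[T]]

  // Runs tasks on the calling thread; tagged `Nothing` so that, by
  // covariance, it satisfies any required tag.
  val DirectExecutionContext: ExecutionContextOf[Nothing] =
    tag[Nothing](new ExecutionContext {
      override def execute(runnable: Runnable): Unit = runnable.run()
      override def reportFailure(cause: Throwable): Unit = throw cause
    })
}
```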
* concurrent: Fix the privacy of `DirectExecutionContextInternal`.
Co-authored-by: Stephen Compall <stephen.compall@daml.com>
* add phantom-tagged ExecutionContext and Future to scala-utils concurrent package
* many new operations for Futures
* Future, ExecutionContext combinators from porting ledger-on-sql
- picked from 546b84ab9cdf4de2d93ec5682bdee6cfd6b385f8
* move Future, ExecutionContext companions into normal package
* lots of new docs
* many new Future utilities
* working zipWith
* tests for ExecutionContext resolution, showing what will be picked under different scenarios
* even more tests for ExecutionContext resolution
* tests showing some well-typed and ill-typed Future combinator usage
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* missed scalafmt
* one more doc note
* split concurrent package to concurrent library
Co-authored-by: Samir Talwar <samir.talwar@digitalasset.com>
* Open sourcing gatling statistics reporter
Running gatling scenarios with `RunLikeGatling` from libs-scala/gatling-utils
* cleaning up
* Replace "\n" with System.lineSeparator
so the formatting test cases pass on Windows
* Testing DurationStatistics Monoid laws
* Renaming RunLikeGatling -> CustomRunner
* get a LoggingContext into the TriggerRunnerImpl
* make some implicits more implicitly scoped and explicitly ascribed
* make some private/final markings
* most of JsonFormat[Identifier] is in companion
* experimental LoggingContext with phantom type
* ActorContext#log isn't really doing that much
* more details of LoggingContextOf
* make LoggingContextOf compile
* add trigger message logging, yet without context
* fix parent compile errors
* use Config as the phantom for its own logging extensions
* LocalDateTimeFormat cleanup
* switch TriggerRunner to contextual logging
* add trigger definition ID to logs
* log trigger-submitted commands, fix trigger test compile
* log trigger stopping and DAR uploads
* add context to PostStop/PreRestart logs
* add changelog
CHANGELOG_BEGIN
- [Triggers] More detailed logging of trigger actions and trigger service actions.
See `issue #7205 <https://github.com/digital-asset/daml/pull/7205>`_.
CHANGELOG_END
* missed copyright header
* switch to Unit, scala/bug#9240 fixed
* participant-integration-api: Never use a delay of zero.
If `akka.pattern.after` is passed a delay of zero, it will execute the
body synchronously, potentially leading to a stack overflow error.
This error was observed in tests.
CHANGELOG_BEGIN
CHANGELOG_END
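A sketch of the guard (using a plain `ScheduledExecutorService` rather than Akka, so the names here are not the actual fix): clamping the delay to a strictly positive value ensures the body is always scheduled onto another thread, never run synchronously on the calling thread.

```scala
import java.util.concurrent.{Executors, ThreadFactory, TimeUnit}
import scala.concurrent.duration._
import scala.concurrent.{ExecutionContext, Future, Promise}

object SafeDelay {
  private val scheduler = Executors.newSingleThreadScheduledExecutor(new ThreadFactory {
    def newThread(r: Runnable): Thread = { val t = new Thread(r); t.setDaemon(true); t }
  })

  // With `akka.pattern.after`, a zero delay runs the body on the calling
  // thread, so a retry loop built on it can recurse without ever unwinding
  // the stack. Clamping to a strictly positive delay means the body is
  // always scheduled, never run synchronously.
  def after[A](delay: FiniteDuration)(body: => Future[A])(
      implicit ec: ExecutionContext): Future[A] = {
    val positiveDelay = delay.max(1.nanosecond)
    val p = Promise[A]()
    scheduler.schedule(
      new Runnable { def run(): Unit = p.completeWith(Future(body).flatten) },
      positiveDelay.toNanos,
      TimeUnit.NANOSECONDS)
    p.future
  }
}
```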
* timer-utils: Add tests for Delayed.Future.
* timer-utils: Add tests for RetryStrategy.
* timer-utils: Remove duplication in RetryStrategy tests.
* timer-utils: Allow for more wiggle room in the RetryStrategy tests.
* timer-utils: Fail after retrying the correct number of times.
* timer-utils: Ensure we don't overflow the stack in RetryStrategy.
* timer-utils: Reject a negative number of retry attempts.
* add -Ywarn-unused to all scalac options
* remove some unused arguments
* remove some unused definitions
* remove some unused variable names
* suppress some unused variable names
* changeExtension doesn't use baseName
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* work around no plugins in scenario interpreter perf tests
* remove many more unused things
* remove more unused things, restore some used things
* remove more unused things, restore a couple signature mistakes
* missed import
* unused argument
* remove more unused loggingContexts
* some unused code in triggers
* some unused code in sandbox and kvutils
* some unused code in repl-service and daml-script
* some unused code in bindings-rxjava tests
* some unused code in triggers runner
* more comments on silent usages
- suggested by @cocreature; thanks
* fix missing reference in TestCommands
* more unused in triggers
* more unused in sandbox
* more unused in daml-script
* more unused in ledger-client tests
* more unused in triggers
* more unused in kvutils
* more unused in daml-script
* more unused in sandbox
* remove unused in ledger-api-test-tool
* suppress final special case for codegen unused warnings
.../com/daml/sample/mymain/ContractIdNT.scala:24: warning: parameter value ev 0 in method ContractIdNT Value is never used
implicit def `ContractIdNT Value`[a_a1dk](implicit `ev 0`: ` lfdomainapi`.Value[a_a1dk]): ` lfdomainapi`.Value[_root_.com.daml.sample.MyMain.ContractIdNT[a_a1dk]] = {
^
.../com/daml/sample/mymain/ContractIdNT.scala:41: warning: parameter value eva_a1dk in method ContractIdNT LfEncodable is never used
implicit def `ContractIdNT LfEncodable`[a_a1dk](implicit eva_a1dk: ` lfdomainapi`.encoding.LfEncodable[a_a1dk]): ` lfdomainapi`.encoding.LfEncodable[_root_.com.daml.sample.MyMain.ContractIdNT[a_a1dk]] = {
^
* one more unused in daml-script
* special scaladoc rules may need silencer, too
* unused in compatibility/sandbox-migration
* more commas, a different way to `find`
- suggested by @remyhaemmerle-da; thanks
* reenable 'restart triggers after shutdown'
CHANGELOG_BEGIN
CHANGELOG_END
* wait for everything to shut down before completing a withTriggerService fixture
- similar to a change to HttpServiceFixture.withHttpService in #4593,
but without the suppression of shutdown errors
* label the WithDb tests
* in CI, test only 'recover packages after shutdown', 50 times
* experiment: Process#destroy appears to be async
* is it in the in-between period?
* partial -> total
* replace some booleans with assertions for better error reporting
* make triggerLog concurrent
* close channel and file in other error cases for port locking
- suggested by @leo-da; thanks
* use port locking instead of port 0 for trigger service fixtures
* destroy one service at a time
* missed continuation in build script
* use assertion language for "restart triggers with update errors"
* Revert "is it in the in-between period?"
This reverts commit 211ebfe9d2.
* use better assertion language for "restart triggers with update errors"
* restore full CI build
java.lang.NullPointerException:
at com.daml.ports.PortLock$Locked.unlock(PortLock.scala:55)
at com.daml.ports.PortLock$.lock(PortLock.scala:41)
at com.daml.ports.LockedFreePort$.find(LockedFreePort.scala:15)
at com.daml.lf.engine.trigger.TriggerServiceFixture$.$anonfun$withTriggerService$1(TriggerServiceFixture.scala:65)
CHANGELOG_BEGIN
CHANGELOG_END
* Moving `Statements.discard` from //ledger-server/http-json into //libs-scala/scala-utils
changelog_begin
changelog_end
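For reference, `discard` is essentially this (minimal sketch): it evaluates an expression for its side effects and explicitly throws away the result, instead of silently discarding a non-Unit value.

```scala
object Statements {
  // Evaluate `evaluated` for its side effects and explicitly discard the
  // result, making the intent visible at the call site.
  def discard[A](evaluated: A): Unit = ()
}
```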
* Add new module to the published artifacts
* `com.daml.scalautil` instead of `com.daml.scala.util`
@S11001001: That's because if this is on the classpath and you import `com.daml._`,
you have a different `scala` in scope than the one you expect.
* triggers: Use `FreePort.find()`.
* ports: Move `LockedFreePort` from postgresql-testing for reuse.
* triggers: Use `LockedFreePort` to avoid race conditions.
* ports + triggers: Move common port testing into the ports library.
CHANGELOG_BEGIN
CHANGELOG_END
* participant-integration-api: `GrpcServerOwner` -> `GrpcServer.Owner`.
Mostly so I can create a test class named `GrpcServerSpec`.
* ports: Move the free port search from postgresql-testing.
* participant-integration-api: Test the basics of GrpcServer.
This uses the HelloService to make sure the server behaves normally.
* ledger-api-client: Extract out channel configuration from LedgerClient.
So we can test it independently of the LedgerClient itself.
* ledger-api-client: Increase the default maximum inbound header size.
Increased from 8 KB to 1 MB.
* participant-integration-api: Reduce the maximum error message size.
Truncate gRPC error descriptions to 256 KB.
* participant-integration-api: Use `Port.Dynamic` instead of `FreePort`.
In tests.
* participant-integration-api: Explicit null checks when they're shorter.
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* ledger-api-client: Reduce the max inbound message size back to 8 KB.
And reduce the maximum size of an error description pushed out by the
server accordingly.
CHANGELOG_BEGIN
- [Integration Kit] Truncate gRPC error messages at 4 KB. This ensures
that we won't trigger a protocol error when sending errors to the
client.
CHANGELOG_END
Co-authored-by: Stefano Baghino <43749967+stefanobaghino-da@users.noreply.github.com>
* add -Xsource:2.13, -Ypartial-unification to common_scalacopts
* add now-referenced scalaz-core where needed
* work around bad type signatures in scalatest Aggregating, Containing
* unused Any suppression
* work around bad partial-unification wrought by type alias
* remove unused Conversions import
- not required in 4f68cfc480 either, so unsure how it's survived this long
* work around Future.traverse; remove unused show import
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* remove unused bounds
* remove -Ypartial-unification and -Xsource:2.13 where they were explicitly passed
* longer comment on what the options do
- suggested by @stefanobaghino-da; thanks
* forget Future.traverse, just use scalaz, it knows how to do this
* disable Any wart
* first pass removal of Any suppressions for false positives
* second pass removal of Any suppressions for false positives
* no changelog
CHANGELOG_BEGIN
CHANGELOG_END
* third pass removal of Any suppressions for false positives
* fourth pass removal of Any suppressions for false positives
* reformat newly single-suppressions into single lines
- suggested by @SamirTalwar-DA; thanks
* equalz Scalatest matcher in new daml-lf/scalatest-tools library
* equalz typing tests
* a 'should' replacing design
* a 'MatcherFactory1' design
- this fails because the TC parameter should be a type member to avoid
scala/bug#5075 but it is not
* MatcherFactory1 with chained Lub+Equal typeclass
- requires partial-unification at point of use, which is not great
* LubEqual's extra tparam is probably unneeded
* better LtEqual
* demonstrate that HK LubEqual's resolve with DMT should + MatcherFactory
* remove unneeded 3rd param from LubEqual, again
* update dependency specs and license headers
* allow use with should, shouldNot in some cases, preserving the shouldx/shouldNotx alternatives
* move Equalz to libs-scala/scalatest-utils
* rename bzl targets and place in com.daml.scalatest package
* add scalatest-utils to release
* move *SpecCheckLaws, Unnatural to scalatest-utils
* missed scalacheck dep in scalatest-utils
* downstreams of *SpecCheckLaws now get them from scalatest-utils
* test equal-types case as well
* update LF documentation
CHANGELOG_BEGIN
CHANGELOG_END
* whitespace error
* Extract caching from participant-state as a library
This will be used to keep a cache of values to cut on LF translation cost when serving transactions.
changelog_begin
changelog_end
* Add dependency where missing
* postgresql-testing: Store the JDBC URL separately.
* postgresql-testing: Expose the username and password.
* postgresql-testing: Get the caller to create the database.
And make sure it's a random one, not "test".
CHANGELOG_BEGIN
CHANGELOG_END
* postgresql-testing: Only store the JDBC URL for tests.
Less mutable state, innit.
* postgresql-testing: Capture the individual JDBC URL parameters.
* Bazel: Fix PostgreSQL binary paths.
* postgresql-testing: Just recreate the database in PostgresAroundEach.
There's no need to restart the process with a different data directory.
CHANGELOG_BEGIN
- [Sandbox] The ledger API server will now always use the most recent ledger configuration.
Until a ledger configuration is read from the ledger, command submissions will fail with the UNAVAILABLE error.
CHANGELOG_END
In kvutils, the first ledger configuration change needs
to have a generation one higher than the one returned
by getLedgerInitialConditions().
Remove initial config writing from sandbox as it's now written by the ledger API server
* resources: Rename `tearDownDuration` to `tearDownTimeout`.
* resources: Don't time out `ProgramResource` on startup.
Let it go for as long as it needs to.
Sometimes Sandbox takes a while to start, especially if you're
preloading large DARs. We should not try to predict how long it will
take.
CHANGELOG_BEGIN
CHANGELOG_END
* Adding `--port-file` support
* ``--port-file`` support
* Updating docs
changelog_begin
[JSON API] Add support for ``--port-file`` command line option.
``--http-port 0 --port-file ./json-api.port`` will pick a free port
and write it to the ``./json-api.port`` file.
changelog_end
* reformatting
* Usage grammar
* use bimap
* Adding `PortFiles` utility for creating and deleting port files on JVM exit
* Adding scaladoc explaining that the port file should be deleted on
JVM termination.
* Updating usage and docs to reflect that the file must be unique and
will be deleted on graceful shutdown
* Relying on `java.nio.file.FileAlreadyExistsException` to determine the
case when failed due to the nonunique file name.
* toString instead of Exception.getMessage
a java.nio exception's getMessage can be just a file name; we need the
class name to capture the error context.
* updatePortFile -> createPortFile
* write to file instead of write into file
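A sketch of the creation step (hypothetical object name): `CREATE_NEW` makes the existence check and the write a single atomic operation, so a `FileAlreadyExistsException` reliably signals a non-unique file name.

```scala
import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Path, StandardOpenOption}

object PortFilesSketch {
  // Create the port file, failing if it already exists; CREATE_NEW makes the
  // existence check and the creation atomic, so FileAlreadyExistsException
  // is the one signal for a non-unique file name.
  def createPortFile(path: Path, port: Int): Path = {
    Files.write(
      path,
      port.toString.getBytes(StandardCharsets.UTF_8),
      StandardOpenOption.CREATE_NEW)
    path.toFile.deleteOnExit() // best-effort cleanup on graceful JVM shutdown
    path
  }
}
```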
Packages com.digitalasset.daml and com.digitalasset have been unified under com.daml
Ledger API and DAML-LF DEV protos have also been moved from `com/digitalasset`
to `com/daml` on the file system.
Protos for already released DAML LF versions (1.6, 1.7, 1.8) stay in the
package `com.digitalasset`.
CHANGELOG_BEGIN
[SDK] All Java and Scala packages starting with
``com.digitalasset.daml`` and ``com.digitalasset`` are now consolidated
under ``com.daml``. Simply changing imports should be enough to
migrate your code.
CHANGELOG_END
* Use com.daml as groupId for all artifacts
CHANGELOG_BEGIN
[SDK] Changed the groupId for Maven artifacts to ``com.daml``.
CHANGELOG_END
* Add 2 additional maven related checks to the release binary
1. Check that all maven upload artifacts use com.daml as the groupId
2. Check that all maven upload artifacts have a unique artifactId
* Address @cocreature's comments in https://github.com/digital-asset/daml/pull/5272#pullrequestreview-385026181
* http-json: Ask for a free port by specifying port 0.
This will avoid race conditions.
* bindings-akka-testing: Delete RandomPorts; it's unused.
* ports: Fix the Bazel test glob.
* ports: Move FreePort to postgresql-testing and add a test case.
* postgresql-testing: Make `FreePort.find()` return a `Port`.
* postgresql-testing: Lock free ports until the server starts.
This uses a `FileLock`, which should work well on all our
supported operating systems as long as everyone agrees to use it.
CHANGELOG_BEGIN
CHANGELOG_END
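The approach can be sketched like this (hypothetical names, simplified from the real `LockedFreePort`): ask the OS for a free port, then hold a `FileLock` on a lock file named after it, so concurrent processes that fail to take the lock skip that port and try another.

```scala
import java.io.RandomAccessFile
import java.net.ServerSocket
import java.nio.channels.FileLock
import java.nio.file.Path

object LockedFreePortSketch {
  // Ask the OS for a free port, then hold a FileLock on a lock file named
  // after it in a shared directory. Another process doing the same will fail
  // to lock that file and should try another port, closing the race between
  // finding a port and the server actually binding it.
  def find(lockDirectory: Path): (Int, FileLock) = {
    val socket = new ServerSocket(0)
    val port = try socket.getLocalPort finally socket.close()
    val channel =
      new RandomAccessFile(lockDirectory.resolve(s"port-$port.lock").toFile, "rw").getChannel
    val lock = Option(channel.tryLock()) // null means another process holds it
      .getOrElse(sys.error(s"port $port is locked; try another"))
    (port, lock)
  }
}
```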
* postgresql-testing: Try to find a free port 10 times, then give up.
* postgresql-testing: Use a shared directory for the port lock.
* postgresql-testing: Try an alternative way of getting `%LOCALAPPDATA%`.
`logs` used to be an `Option[Seq[String]]`. It was changed to
`Seq[String]` with a default of `Seq.empty`, but the code to print the
logs still assumed an `Option`, which means they were printed as
"ArrayBuffer(line 1, line 2, line 3)" instead of, well, nicely.
CHANGELOG_BEGIN
CHANGELOG_END
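The fix amounts to joining the lines explicitly instead of leaning on the collection's `toString` (sketch, hypothetical helper name):

```scala
object LogRendering {
  // The old code effectively printed `logs.toString`, which renders the
  // collection wrapper ("ArrayBuffer(...)"); joining the lines with the
  // platform separator prints them as intended.
  def render(logs: Seq[String]): String = logs.mkString(System.lineSeparator)
}
```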
Some Option2Iterable ignore annotations are not needed; others were needed only for unused methods.
On a few occasions we were ignoring the warning for the very purpose for which it was there,
i.e. avoiding an implicit conversion. I'm all for not verifying this rule if we agree we
don't need it.
For ProcessFailedException it was a bit gratuitous; I changed the way in which the exception
message is built.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Re-use the root actor system in the StandaloneIndexerServer.
* kvutils/app: Don't use the ActorSystem execution context randomly.
Instead, make `Runner` a proper ResourceOwner, with an `acquire` method.
* sandbox: Re-use the root actor system in the StandaloneApiServer.
CHANGELOG_BEGIN
CHANGELOG_END
* resources: Remove the now-unused `ResourceOwner.sequence` functions.
They weren't well-thought-out anyway; they acquire resources
sequentially, rather than in parallel.
* sandbox-next: Make the Runner a real ResourceOwner.
* sandbox: Don't construct the ResetService twice.
* sandbox: Inline and simplify methods in StandaloneApiServer.
* resources: Define a `ResettableResource`, which can be `reset()`.
`reset()` releases the resource, performs an optional reset operation,
and then re-acquires it, binding it to the same variable.
* resources: Pass the resource value into the reset operation.
* sandbox: Fix warnings in `TestCommands`.
* sandbox-next: Add the ResetService.
CHANGELOG_BEGIN
CHANGELOG_END
* sandbox: Make sure the SandboxResetService resets asynchronously.
It was being too clever and negating its own asynchronous behavior.
* sandbox-next: Forbid no seeding.
This double negative is really hard to phrase well.
* sandbox-next: Implement ResetService for a persistent ledger.
* sandbox: Delete the comment heading StandaloneIndexerServer.
It's no longer meaningful.
* sandbox-next: No need to wrap the SandboxResetService in an owner.
* sandbox-next: Bump the ResetService test timeouts.
It looks like it's definitely slower than on Sandbox Classic™. Gonna
look into this as part of future work.
* Revert to previous asynchronous reset behavior
Co-authored-by: Gerolf Seitz <gerolf.seitz@digitalasset.com>
* libs-scala/ports: Wrap socket ports in a type, `Port`.
* sandbox: Use `Port` for the API server port, and propagate.
CHANGELOG_BEGIN
CHANGELOG_END
* extractor: Use `Port` for the server port.
* ports: Make Port a compile-time class only.
* ports: Allow port 0; it can be specified by a user.
* ports: Publish to Maven Central.
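The wrapper is essentially a value class (sketch of the idea, not the exact source): `extends AnyVal` means no allocation at runtime, while the type keeps ports from being confused with other `Int`s at compile time, and `Port(0)` remains a legal value for asking the OS to pick a free port.

```scala
object Ports {
  // Compile-time-only wrapper: erased to a plain Int at runtime, but a
  // distinct type to the compiler.
  final case class Port(value: Int) extends AnyVal {
    override def toString: String = value.toString
  }

  object Port {
    // Port 0 asks the OS to pick a free port; it is a valid user-facing value.
    val Dynamic: Port = Port(0)
  }
}
```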