* release/next-vere: (317 commits)
u3: improves error output on c3_assert()
vere: improves error messages on ipc EOF
vere: fixes %trim effect handler
vere: fix crash on u3_pier_bail()
king: move most things from debug log level to info
vere: support rendering +stub as ansi escape codes
king: change logging from tracing to info in Wai.hs
king dawn: export functions used in tests
king dawn: nits
king dawn: remove stray marks
king dawn: instead of crashing, return an error
king dawn: replace web3 usage with hand rolled jsonrpc messages.
vere: v0.10.9-rc1
vere: more correct lane cache commentary
vere: reset instead of decrement scry fail counter
vere: ames.c stylistic improvements
vere: only touch forward queue counter if scrying
vere: cache lanes for stateless forwarding
vere: give up ames scry after successive failures
vere: properly clean up dropped laneless packets
...
Adds support to term.c for a %klr blit, containing a +stub describing
styled text.
Dill will start making use of this in a separate commit, for release-cutting
reasons.
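For context, a +stub is essentially a list of (style, text) segments, so
rendering one amounts to wrapping each segment's text in the matching ANSI
SGR escape sequences. A minimal sketch of that idea, using an illustrative
segment type rather than the actual term.c structures:

    /*  Sketch only: render one styled segment as ANSI escapes
     *  (style on, text, reset). Not the real term.c code.      */
    #include <stdio.h>

    typedef struct {
      int         bold;     /*  bold decoration?               */
      int         col_i;    /*  foreground color, 30-37, or 0  */
      const char* txt_c;    /*  the segment's text             */
    } stub_seg;

    static void
    emit_seg(const stub_seg* seg_u)
    {
      if ( seg_u->bold )  printf("\033[1m");
      if ( seg_u->col_i ) printf("\033[%dm", seg_u->col_i);
      fputs(seg_u->txt_c, stdout);
      printf("\033[0m");                    /*  reset after each segment  */
    }

    int
    main(void)
    {
      stub_seg seg_u = { 1, 31, "urbit" };  /*  bold red "urbit"          */
      emit_seg(&seg_u);
      putchar('\n');
      return 0;
    }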
I noticed that the king's text log file kept filling up, and it's
mostly 'sending chunk "\n"' messages from eyre. This changes the
logging of every partial send over an active eyre channel to the info
level, instead of logging it unconditionally.
This replaces the autogenerated Network.Web3 bindings to the Azimuth
contracts with hand-rolled JSON-RPC messages. Booting a ship involved
256 individual galaxy point lookups using web3, whereas Vere batched
all of that into one JSON-RPC message.
With this patch, we also batch everything at each phase into one
JSON-RPC batch.
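For illustration only: a JSON-RPC batch is just an array of request objects
sent in a single HTTP POST, so the 256 point lookups collapse into one
round-trip. The sketch below prints a request in that shape; the contract
address and call data are placeholders, not the real Azimuth values.

    /*  Hypothetical sketch of the batching idea: N point lookups
     *  become one JSON-RPC batch sent as a single HTTP POST.      */
    #include <stdio.h>

    int
    main(void)
    {
      const int num = 3;                   /*  e.g. 256 galaxies at boot  */

      printf("[");
      for ( int i = 0; i < num; i++ ) {
        printf("%s{\"jsonrpc\":\"2.0\",\"id\":%d,"
               "\"method\":\"eth_call\","
               "\"params\":[{\"to\":\"0x..azimuth..\","
               "\"data\":\"0x..points(%d)..\"},\"latest\"]}",
               (i ? "," : ""), i, i);
      }
      printf("]\n");                       /*  one request, many lookups  */
      return 0;
    }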
Slightly simplify the logic around changing the queue size counter by
only modifying it when we're _actually_ scrying, rather than when we
process the forward synchronously.
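In sketch form, with illustrative names that don't match the actual ames.c
fields, the counter is now only touched on the path that actually enqueues
a scry:

    /*  Illustrative control flow only.  */
    typedef struct { int scry_queue_len; } fwd_state;

    static void send_forward(int lane, int pac)        { (void)lane; (void)pac; }
    static void begin_lane_scry(fwd_state* s, int pac) { (void)s;    (void)pac; }

    static void
    forward_packet(fwd_state* sam_u, int pac, int cached_lane)
    {
      if ( cached_lane ) {
        send_forward(cached_lane, pac);  /*  synchronous: counter untouched   */
      }
      else {
        sam_u->scry_queue_len++;         /*  only bump when actually scrying  */
        begin_lane_scry(sam_u, pac);
      }
    }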
While stateless forwarding doesn't need to touch disk, there's still
overhead in needing to communicate with the serf over IPC. By caching
lanes, we get to skip the IPC pipeline, and can respond to forwarding
requests synchronously.
We include timestamps alongside the entries in the cache, and consider
entries older than two minutes stale.
The cache is capped at roughly 100MB of memory use. Further commentary
is provided inline.
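A minimal sketch of the cache-entry shape and staleness check described
above, assuming a map from ship to (lane, timestamp); the names are
illustrative, not the actual ames.c implementation:

    #include <stdint.h>
    #include <time.h>

    #define LANE_CACHE_TTL_S  (2 * 60)  /*  older than two minutes is stale  */

    typedef struct {
      uint64_t lan_d;   /*  opaque lane (ip/port) learned from ames  */
      time_t   tim_t;   /*  when the lane was cached                 */
    } lane_entry;

    /*  returns the cached lane, or 0 if missing or stale, in which case
     *  the caller falls back to scrying into the serf as before; the map
     *  holding these entries is bounded to keep memory use near 100MB.   */
    static uint64_t
    lane_lookup(const lane_entry* ent_u)
    {
      if ( !ent_u || (time(0) - ent_u->tim_t) > LANE_CACHE_TTL_S ) {
        return 0;
      }
      return ent_u->lan_d;
    }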
Previously, ~nus would drop three forward requests for every one it
fulfilled. Now, it seems able to keep up with demand, only dropping
forwards shortly after boot, while the cache isn't yet primed.
Instead of giving up on scrying at the first sight of a u3_none result,
keep trying for a little bit. If five scries fail in direct succession,
consider scrying not worth the effort, and stop trying as before.
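In sketch form, with illustrative names rather than the actual ames.c state:
the failure counter resets to zero on any successful scry (rather than being
decremented), and five consecutive failures disable further lane scries.

    #define SCRY_FAIL_MAX  5

    static int fail_count;   /*  consecutive failed (u3_none) lane scries     */
    static int scry_off;     /*  set once scrying is deemed not worth trying  */

    static void
    on_scry_result(int got_lane)
    {
      if ( got_lane ) {
        fail_count = 0;                         /*  reset, don't decrement  */
      }
      else if ( ++fail_count >= SCRY_FAIL_MAX ) {
        scry_off = 1;                           /*  give up on lane scries  */
      }
    }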
Previously, if ames told us there was no lane for a target, we would
drop the packet, but fail to register this in the queue, or even to
reclaim the memory it was using.
Now, we do all the required book-keeping when dropping packets in the
"no lane for this" case.