Support /=peers= and /=peer=/~ship scries for retrieving all peers and
a specific peer's connection state, respectively.
Moves some internal types into zuse for easier external use.
Makes it so that |cancel %force skips the next thing in the queue when
you're not in the middle of something. If you are in the middle of
something, it skips that instead (just like a naked |cancel).
This should resolve issues where |cancel doesn't drain the queue.
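For illustration only, the intended semantics look roughly like the
following Python sketch (the real logic is in Hoon; the names here are
made up):

    from collections import deque

    def cancel(queue: deque, in_progress, force: bool):
        # Sketch of the |cancel behavior described above: `queue` holds
        # pending work, `in_progress` is whatever we're in the middle
        # of (or None). Both names are hypothetical.
        if in_progress is not None:
            # Naked |cancel and |cancel %force both drop the thing
            # we're in the middle of.
            return queue, None
        if force and queue:
            # Only |cancel %force also skips the next queued thing when
            # nothing is in progress, which is what lets it drain a
            # stuck queue.
            queue.popleft()
        return queue, None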
When a ship breaches, we remove all messages that have yet to be
delivered to an app (e.g. if it's not yet started). We also add
|gall-sear to do this manually, but this shouldn't be needed in normal
operation.
Finally, to unblock ~zod and ~bus on mainnet, we sear one particular
ship automatically on loading hood. It cannot be done manually because
no userspace changes can be made until it's unblocked.
Gives you a poor man's progress bar. For example, to determine how much
of an OTA you've downloaded from your sponsor, run:
|ames-sift (sein:title our now our)
|ames-verb %rcv
and then to turn it off:
|ames-verb
* master: (484 commits)
king: Slight CLI cleanup and fix test build.
king: Add command-line flags to configure HTTP and HTTPS ports.
groups: reduce metadata updates, removal
chat: reducer handles metadata removal
groups: exclude group metadata from channels list
groups: set and surface group name metadata
groups: remove dummy 'share' flow, 'default' group
contacts: rename, migrate '~contacts' to '~groups'
sh/release: rename vere release tarballs
vere: patch version bump (v0.10.3 -> v0.10.4.rc1) [ci skip]
pills: updated brass and solid
chat: pull room contacts from associated group
chat: spell 'permanent' correctly
eyre: remove padding from 'access' input
chat: only delete metadata for a chat if you created it
chat: settings inputs add borders on focus
vere: disables gc on |mass in the daemon process
chat: remove console.log from metadataAction
chat: style fixes during review, use metadata-hook
chat: edit description, color settings
...
* origin/os1-rc: (439 commits)
pills: updated brass and solid
chat: pull room contacts from associated group
chat: spell 'permanent' correctly
eyre: remove padding from 'access' input
chat: only delete metadata for a chat if you created it
chat: settings inputs add borders on focus
chat: remove console.log from metadataAction
chat: style fixes during review, use metadata-hook
chat: edit description, color settings
chat: add update-metadata to metadata reducer
chat: revise api.js to match data structures
metadata-json: add json to action parsers
chat: construct settings page for metadata
chat: correct bottom border on join links
chat: copy shortcodes
chat: linkify unmanaged chats
metadata-hook: support group members other than host creating shared resources
contacts: add bg-gray0 to root page
chat + contact views: updated for style and to assert that group-path must be equal to app-path if there are ships in the members set
contacts: changed color + copy of "add to group" button
...
It's hard to say what the safest thing to do is when we get an ack we
weren't expecting due to losing outstanding.agents.state in the 3-to-4
+load, so this gives both a watch-ack and a poke-ack. This seems most
likely to succeed.
Does not change the state type, but clears outstanding.agents.state since
it's full of garbage values. This introduces a possibility that we may
have been in the middle of something, so we handle that in a reasonably
sane way.
outstanding.agents.state is, per wire, a queue of what sort of message
we sent to a foreign app. We use it so that when the acknowledgment
comes back we
know whether to treat it as a watch-ack, poke-ack, or neither. We used
to put this info in the wire, but this gave us a different ames flow,
which meant %leave and %watch didn't get associated (causing #2079).
The error was that when retrieving the item from the queue, we put
the new 1-item-shorter queue back in outstanding.agents.state at a
different wire than it came from, so the queues never actually got
shorter, and acknowledgments of the wrong sort were commonly produced.
This caused problems mainly in situations where we poke and peer on the
same wire, and possibly when a subscription was cancelled.
Possibly related to #2206 and #2176. I would expect this bug to cause
those issues, but I haven't verified the converse. Also possibly
related to #2153 and #2079.
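As a rough Python sketch of the queue bug above (the real code is Hoon
in Gall; every name here is illustrative, not the actual structure):

    from collections import deque

    # Illustrative model: outstanding maps a wire to a queue of the
    # kinds of messages sent on that wire and not yet acknowledged.
    outstanding: dict = {}

    def on_ack_buggy(wire: tuple):
        queue = outstanding.get(wire, deque())
        kind = queue.popleft() if queue else None
        # Bug: the shortened queue goes back under a *different* wire,
        # so the queue at `wire` never shrinks and later acks are read
        # off the stale queue as the wrong sort (watch-ack vs poke-ack).
        outstanding[wire + ('stale',)] = queue
        return kind

    def on_ack_fixed(wire: tuple):
        queue = outstanding.get(wire, deque())
        kind = queue.popleft() if queue else None
        # Fix: put the shortened queue back at the wire it came from.
        outstanding[wire] = queue
        return kind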
Due to asynchronicity, Ford can receive responses from Clay to requests
that it has already attempted to cancel. This removes some overzealous
assertions that this wouldn't happen.
@ixv recently uncovered a bug (#2180) in Ford that caused certain
rebuilds to crash. @Fang- and I believe this change should fix the bug,
and we have confirmed that the reproduction that used to fail about two
thirds of the time now has not failed at all in the ten or so times
we've run it since then. @Fang- is still running more tests to confirm
the fix with more certainty.
It turned out the cause was that (depending on the rebuild order, which
is unspecified and should not need to be specified), Ford could enqueue
a provisional sub-build to be run but then, later in the same +gather
call, discover that the sub-build was in fact an orphan and delete it
from builds.state accordingly. Then when Ford tried to run the
sub-build, it would have already been deleted from the state, so Ford
would crash when trying to process its result in +reduce.
The fix was to make sure that when we discover a provisional sub-build
is orphaned, we also dequeue it from candidate-builds and next-builds so
we don't try to run it. I'm about 95% sure this fix completely
solves the bug.
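A loose Python analogy of the fix (builds.state, candidate-builds, and
next-builds are the real structures; everything else here is made up):

    def drop_orphaned_provisionals(builds_state: dict,
                                   candidate_builds: set,
                                   next_builds: set,
                                   is_orphan) -> None:
        # Deleting an orphaned provisional sub-build from builds.state
        # alone is not enough: it must also come off the run queues, or
        # we later try to run a build that no longer exists and crash
        # while processing its result.
        for build in list(candidate_builds | next_builds):
            if is_orphan(build):
                builds_state.pop(build, None)
                candidate_builds.discard(build)  # the fix
                next_builds.discard(build)       # the fix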
Uses Zuse's previously unused +harden helper function to streamline
+task unwrapping in vanes.
(Arguably, in landlocked vanes like Ford, we should crash if we get a
%soft task, since no events should be coming in directly from the
outside.)
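The unwrapping pattern is roughly analogous to this Python sketch
(+harden itself is Hoon in Zuse; this is only an analogy, not its
actual signature):

    from typing import Any, Callable, TypeVar

    T = TypeVar('T')

    def harden(coerce: Callable[[Any], T]) -> Callable[[Any], T]:
        # Analogy only: a task arriving from outside the kernel may be
        # wrapped as ('soft', payload) and has to be coerced back into
        # the vane's own task type; an already-typed task passes
        # through unchanged. `coerce` stands in for the task mold.
        def unwrap(task: Any) -> T:
            if isinstance(task, tuple) and task and task[0] == 'soft':
                return coerce(task[1])
            return task
        return unwrap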