* origin/os1-rc: (439 commits)
pills: updated brass and solid
chat: pull room contacts from associated group
chat: spell 'permanent' correctly
eyre: remove padding from 'access' input
chat: only delete metadata for a chat if you created it
chat: settings inputs add borders on focus
chat: remove console.log from metadataAction
chat: style fixes during review, use metadata-hook
chat: edit description, color settings
chat: add update-metadata to metadata reducer
chat: revise api.js to match data structures
metadata-json: add json to action parsers
chat: construct settings page for metadata
chat: correct bottom border on join links
chat: copy shortcodes
chat: linkify unmanaged chats
metadata-hook: support group members other than host creating shared resources
contacts: add bg-gray0 to root page
chat + contact views: updated for style and to assert that group-path must be equal to app-path if there are ships in the members set
contacts: changed color + copy of "add to group" button
...
It's hard to say what's the safest thing to do when we get an ack we
weren't expecting due to losing outstanding.agents.state in +load
3-to-4, so this gives both a watch-ack and a poke-ack. This seems most
likely to succeed.
Does not change state type, but clears outstanding.agents.state since
it's full of garbage values. This introduces a possibility that we may
have been in the middle of something, so we handle that in a reasonably
sane way.
outstanding.agents.state is a queue of what sort of message we sent to a
foreign app. We use it so that when the acknowledgment comes back we
know whether to treat it as a watch-ack, poke-ack, or neither. We used
to put this info in the wire, but this gave us a different ames flow,
which meant %leave and %watch didn't get associated (causing #2079).
The error was that when retrieving the item from the queue, we put
the new 1-item-shorter queue back in outstanding.agents.state at a
different wire than it came from, so the queues never actually got
shorter, and acknowledgments of the wrong sort were commonly produced.
This caused problems mainly in situations where we poke and peer on the
same wire, and possibly when a subscription was cancelled.
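Roughly, a caricature of the broken bookkeeping (Python rather than Hoon,
purely illustrative; these names are not gall's actual state):

    outstanding = {}   # (duct, wire) -> list of send kinds, oldest first

    def rebuild_wire(wire):
        # stand-in for the mismatch: the wire used for the write differed
        # from the wire the acknowledgment actually arrived on
        return wire + '/stale'

    def on_ack_buggy(duct, wire):
        queue = list(outstanding[(duct, wire)])
        kind = queue.pop(0)   # classify this ack by the oldest outstanding send
        # BUG: the 1-item-shorter queue goes back under a different wire, so
        # outstanding[(duct, wire)] never shrinks and later acks get
        # classified against stale entries.
        outstanding[(duct, rebuild_wire(wire))] = queue
        return kind
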
Possibly related to #2206 and #2176. I would expect this bug to cause
those issues, but I haven't verified the converse. Also possibly
related to #2153 and #2079.
Due to asynchronicity, Ford can receive responses from Clay to requests
that it has already attempted to cancel. This removes some overzealous
assertions that this wouldn't happen.
@ixv recently uncovered a bug (#2180) in Ford that caused certain
rebuilds to crash. @Fang- and I believe this change should fix the bug,
and we have confirmed that the reproduction that used to fail about two
thirds of the time now has not failed at all in the ten or so times
we've run it since then. @Fang- is still running more tests to confirm
the fix with more certainty.
It turned out the cause was that (depending on the rebuild order, which
is unspecified and should not need to be specified), Ford could enqueue
a provisional sub-build to be run but then, later in the same +gather
call, discover that the sub-build was in fact an orphan and delete it
from builds.state accordingly. Then when Ford tried to run the
sub-build, it would have already been deleted from the state, so Ford
would crash when trying to process its result in +reduce.
The fix was to make sure that when we discover a provisional sub-build
is orphaned, we dequeue it from candidate-builds and next-builds so that
we don't try to run it. I'm about 95% sure this fix completely
solves the bug.
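Sketched in Python (a hedged illustration; BuildState and its field names
are stand-ins, not Ford's actual builds.state):

    from dataclasses import dataclass, field

    @dataclass
    class BuildState:
        builds: dict = field(default_factory=dict)
        candidate_builds: set = field(default_factory=set)
        next_builds: set = field(default_factory=set)

    def on_orphaned_provisional(state: BuildState, build) -> None:
        # remove the orphaned sub-build from the build map, as before
        state.builds.pop(build, None)
        # the fix: also dequeue it from both run queues so we never try
        # to run a build that no longer exists in the state
        state.candidate_builds.discard(build)
        state.next_builds.discard(build)
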
Uses Zuse's previously unused +harden helper function to streamline
+task unwrapping in vanes.
(Arguably, in landlocked vanes like Ford, we should crash if we get a
%soft task, since no events should be coming in directly from the
outside.)
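The unwrapping pattern it captures is roughly this (a hedged Python sketch
of the idea only, not Zuse's actual +harden signature):

    def harden_like(parse_task, incoming):
        # events from outside arrive wrapped as a %soft cell carrying an
        # untyped payload; internal tasks are already typed
        if isinstance(incoming, tuple) and incoming[0] == '%soft':
            return parse_task(incoming[1])   # coerce the raw payload
        return incoming                      # already a typed +task
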
There was a typo in the routing logic that was comparing equality
against a value where it should have been doing a pattern match. The
value compared against contained the literal * gate, which would never
match route.peer-state, so this condition was always true, meaning the
fix that had added this extra condition (5406f06) did not actually
change the behavior from what it had been previously.
If we receive the naxplanation before the nack, the assertion in the gte
direction fails. The intent of the assertion is to make sure the top of the
live queue never falls behind current.state, so it was simply in the
wrong direction.
Instead of providing a (unit path), allows for (list path), which better
supports the "update to path and subpath" cases.
For example, if /things wants updates about everything, and
/things/specific wants updates about the specific thing, they'll both
need to receive a %fact when the specific thing changes.
Previously, these would have been two separate moves. Now, gall handles
the multi-targeting for you.
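A hedged sketch of the fan-out (Python with illustrative names, not gall's
actual types):

    def fact_moves(subscribers, paths, fact):
        # one %fact may now name several paths; emit a copy to every
        # subscriber on any of those paths
        moves = []
        for path in paths:
            for ship in subscribers.get(path, []):
                moves.append((ship, path, fact))
        return moves

    # an update to the specific thing goes out on both paths:
    fact_moves({'/things': ['~zod'], '/things/specific': ['~nec']},
               ['/things', '/things/specific'], 'thing-changed')
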
Previously, it would always produce ~, regardless of the path asked
about.
Now, it produces a loobean, based on whether or not a file exists at the
specified path.
Two bugs fixed here: first, if the %done reentrancy triggered another
%boon, that wasn't getting translated to a %lost, even though it could
have been the reason the event crashed in the first place.
Second, the %done reentrancy needs to happen after we emit our move, so
that we don't invert the order of the %boons we produce.
OTAs commonly end up in an inconsistent state if apps depend on changes
to /sys. For example, the %sift changes break on OTA because %spider
needs to be reloaded so that it's aware of the new thread type. This
adds a %goad app, which reloads all apps after every change to /sys.
Getting this to start OTA is nontrivial, but this pattern should work
for apps in the future. The changes to clock shouldn't generally be
necessary; they are only necessary here because we can't rely on hood to
start goad, since hood fails to compile if it's run before zuse is
reloaded. Once goad is active, this will cease to be a problem.
%leave over the network didn't work because we included the message type
in the wire from gall, so the duct for the initial %watch and the %leave
were different. We need to know the message type so we can route the
acknowledgment as %poke-ack, %watch-ack, or no-op.
This moves that piece of information into state, where we queue
up the message types per [duct wire]. Ames guarantees that
acknowledgments will come in order.
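In outline, the bookkeeping is a FIFO per key (a hedged Python sketch; the
names are illustrative, not gall's actual state):

    from collections import defaultdict, deque

    outstanding = defaultdict(deque)   # (duct, wire) -> send kinds, oldest first

    def on_send(duct, wire, kind):
        # kind is '%poke', '%watch', or '%leave'
        outstanding[(duct, wire)].append(kind)

    def on_ack(duct, wire):
        # acks arrive in order, so the oldest entry says how to route this one
        kind = outstanding[(duct, wire)].popleft()
        if kind == '%poke':
            return '%poke-ack'
        if kind == '%watch':
            return '%watch-ack'
        return None   # e.g. a %leave: nothing to pass on
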
This also includes an easy state adapter. The more interesting part of
the upgrade is that we likely have outstanding subscriptions with the
old wire format. The disadvantage of storing information in wires is
that it can't be upgraded in +load. So, here we listen for updates on
the old wire format, and when we get them we kill the old subscription,
so that it will be recreated with the new wire format.
As an aside, this is a good example of what we mean when we say
subscriptions may be killed at any time, so apps must handle this case.
Finally, this fixes the "attributing" ship to ~zod for agent requests.
This information was ignored for agent requests, but including it causes
spurious duct mismatches.
We've seen issues where the message-num of the head of live.state is
less than current.state. When this happens, we continually try to
resend message n-1, but we throw away any acknowledgment for n-1 because
current.state is already n. This halts progress on that flow.
We don't know what causes us to get in this bad state, so this adds an
assert to the packet pump that we're in a good state, run every time
the packet pump is run. When this crashes, we can turn on |ames-verb
and hopefully identify the cause.
This also adds logic to +on-wake in the packet pump to not try to resend
any messages that have already been acknowledged. This is just to
rescue ships that currently have these stuck flows.
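Concretely, the two additions look something like this (a hedged Python
sketch; illustrative names, not Ames' actual packet pump):

    from dataclasses import dataclass

    @dataclass
    class LiveMessage:
        message_num: int
        payload: bytes = b''

    def assert_pump_invariant(live, current):
        # invariant: the head of the live queue never falls behind current.state
        if live:
            assert live[0].message_num >= current, 'live queue fell behind current'

    def on_wake_resend(live, current):
        # rescue stuck flows: skip anything that has already been acknowledged
        return [m for m in live if m.message_num >= current]
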
(Incidentally, I'd love to have a rr-style debugger for stuff like this.
Just run a command that says "replay my event log watching for this
specific condition and then stop and let me poke around".)
This is why basically all packets are going through the galaxies right
now. Most of the time, the flow right now is:
* talking to ~dopzod but don't know where it is, so ask ~zod to forward,
which it does
* ~dopzod responds both directly (on the origin lane) and through ~zod
* (if NAT, the direct response doesn't get back, but the one through
~zod does. Then you respond directly to ~dopzod because their lane
piggybacked on the response. ~dopzod responds both directly and
through ~zod, and the story picks up the same as if you weren't behind a
NAT)
* now you have a direct lane to ~dopzod, so all is well.
* now the duplicate response from ~dopzod through ~zod comes in (takes a
little longer because it's bouncing off ~zod), resetting your lane to
"provisional"
* since your lane is provisional, you send your next packet both
directly and through ~zod
* GOTO 2
This change says "if I already have a direct lane, don't overwrite it
with a provisional one". This way, the only way the direct lane can be
overwritten is if they stop responding on it (cleared on "not
responding; still trying").
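The rule amounts to something like this (a hedged Python sketch with
illustrative names, not Ames' actual peer state):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Peer:
        lane: Optional[str] = None
        direct: bool = False

    def update_lane(peer: Peer, lane: str, direct: bool) -> None:
        # never let a late duplicate bounced off the galaxy demote an
        # established direct lane back to "provisional"
        if peer.direct and not direct:
            return
        peer.lane, peer.direct = lane, direct
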
I also added |- to +send-blob to make |ames-verb %rot less confusing.
Compare +mute and +mule. Those pass through scry, which doesn't allow us to
catch crashes due to blocking scry. If you intercept scry, you can't preserve
the type polymorphically. By monomorphizing, we are able to do so safely.
This broke when %kick was handled by resubscribing on your own ship
because it processed the %kick before the %leave. For example, `@t`404
at the dojo would put the dojo in an unworkable state.
You want the %leave to be processed first because you can't do a
"resubscribe" in response to that.
This adds syntax for running imps. For example:
-time ~s1
Runs the "time" imp with the argument ~s1. This blocks the terminal
until the imp has completed (backspace kills it, of course). You could
avoid blocking the terminal if you sacrifice the ability to use imps as
sources in more complex commands.
In keeping with this one-and-done view of imps, this also changes spider
to not use a live build of imps. This significantly reduces the amount
of uncertainty around imps -- spider will try exactly once to run your
imp, and if it fails it'll tell you. If you want to retry, that's up to
you.