Does not change the state type, but clears outstanding.agents.state, since
it's full of garbage values. Clearing it means we may have been in the
middle of something, so we handle that case in a reasonably sane way.
Due to asynchronicity, Ford can receive responses from Clay to requests
that it has already attempted to cancel. This removes some overzealous
assertions that this wouldn't happen.
@ixv recently uncovered a bug (#2180) in Ford that caused certain
rebuilds to crash. @Fang- and I believe this change should fix the bug,
and we have confirmed that the reproduction that used to fail about two
thirds of the time now has not failed at all in the ten or so times
we've run it since then. @Fang- is still running more tests to confirm
the fix with more certainty.
It turned out the cause was that (depending on the rebuild order, which
is unspecified and should not need to be specified), Ford could enqueue
a provisional sub-build to be run but then, later in the same +gather
call, discover that the sub-build was in fact an orphan and delete it
from builds.state accordingly. Then when Ford tried to run the
sub-build, it would have already been deleted from the state, so Ford
would crash when trying to process its result in +reduce.
The fix was to make sure that when we discover a provisional sub-build is
orphaned, we also dequeue it from candidate-builds and next-builds, so we
never try to run it. I'm about 95% sure this fix completely solves the bug.
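For illustration, here's a rough Python sketch of the bookkeeping problem and the fix; the names (builds_state, candidate_builds, next_builds, discover_orphan) are stand-ins, not Ford's actual structures:

```python
# Illustrative sketch (not Ford's code): why an orphaned provisional sub-build
# must also be dequeued, not just deleted from the build state.

class Scheduler:
    def __init__(self):
        self.builds_state = {}        # build id -> build record
        self.candidate_builds = []    # builds enqueued during gathering
        self.next_builds = []         # builds selected to run next

    def enqueue_provisional(self, build_id):
        self.builds_state[build_id] = {"provisional": True}
        self.candidate_builds.append(build_id)
        self.next_builds.append(build_id)

    def discover_orphan(self, build_id):
        # The fix: besides deleting the orphan from the state, also drop it
        # from both queues so we never try to run (and then reduce) it.
        del self.builds_state[build_id]
        self.candidate_builds = [b for b in self.candidate_builds if b != build_id]
        self.next_builds = [b for b in self.next_builds if b != build_id]

    def run_next(self):
        for build_id in self.next_builds:
            # Without the dequeue above, this lookup would crash: the build
            # was deleted from builds_state but still scheduled to run.
            record = self.builds_state[build_id]
            print("running", build_id, record)

sched = Scheduler()
sched.enqueue_provisional("sub-build")
sched.discover_orphan("sub-build")
sched.run_next()   # nothing left to run; no crash
```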
Uses Zuse's previously unused +harden helper function to streamline
+task unwrapping in vanes.
(Arguably, in landlocked vanes like Ford, we should crash if we get a
%soft task, since no events should be coming in directly from the
outside.)
There was a typo in the routing logic: it was comparing equality against a
value where it should have been doing a pattern match. The value compared
against contained the literal * gate, which would never match
route.peer-state, so this condition was always true, meaning the fix that had
added this extra condition (5406f06) did not actually change the behavior
from what it had been previously.
If we receive the naxplanation before the nack, the assertion in the gte
direction fails. The intent of the assertion is to make sure the top of the
live queue never falls behind current.state, so it was simply in the wrong
direction.
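As a minimal sketch of the intended invariant (hypothetical names, not the vane's actual code):

```python
# The top of the live queue must never fall behind current.state, so the
# check should read as below; the old assertion had the comparison flipped
# and tripped when a naxplanation arrived before its nack.
def check_live_queue(live_top: int, current: int) -> None:
    assert live_top >= current, "live queue top fell behind current"

check_live_queue(live_top=5, current=3)   # e.g. naxplanation arrived first; fine
```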
Instead of providing a (unit path), this allows a (list path), which better
supports the "update to a path and its subpaths" cases.
For example, if /things wants updates about everything, and
/things/specific wants updates about the specific thing, they'll both
need to receive a %fact when the specific thing changes.
Previously, these would have been two separate moves. Now, gall handles
the multi-targeting for you.
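Not gall's actual code, but a rough Python sketch of the shape change; the paths and payloads here are made up:

```python
# Conceptual sketch: one %fact can now target multiple subscription paths,
# instead of requiring one move per path.
from typing import Any

Move = tuple[str, list[str], Any]   # ("%fact", target paths, payload)

def fact(paths: list[str], payload: Any) -> Move:
    return ("%fact", paths, payload)

# Before: two separate moves for the general and the specific subscribers.
old_moves = [fact(["/things"], "thing-7 changed"),
             fact(["/things/specific"], "thing-7 changed")]

# After: one move; gall fans it out to every listed path.
new_move = fact(["/things", "/things/specific"], "thing-7 changed")
```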
Previously, it would always produce ~, regardless of the path asked
about.
Now, it produces a loobean, based on whether or not a file exists at the
specified path.
This fixes +put:in so that it works without the correct jet. There was a
mismatch where the Hoon code was wrong and the jet was correct, so when we
tried to run this on alternate interpreters that may not have the +in jets,
things wouldn't work.
%leave over the network didn't work because we included the message type
in the wire from gall, so the ducts for the initial %watch and the %leave
were different. We need to know the message type so we can route the
acknowledgment as %poke-ack, %watch-ack, or no-op.
This moves that information into state, where we queue up the message types
per [duct wire]. Ames guarantees that acknowledgments will come in order.
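A rough Python sketch of that bookkeeping, with hypothetical names; the real implementation lives in gall's Hoon:

```python
# Instead of encoding the message type in the wire, queue it per [duct wire]
# and pop it when the acknowledgment arrives, relying on ames delivering
# acks in order.
from collections import defaultdict, deque

outstanding: dict[tuple[str, str], deque[str]] = defaultdict(deque)

def send(duct: str, wire: str, message_type: str) -> None:
    # message_type is one of "poke", "watch", "leave"
    outstanding[(duct, wire)].append(message_type)

def on_ack(duct: str, wire: str) -> str:
    # Acks come back in send order, so the head of the queue tells us whether
    # to route this as a %poke-ack, a %watch-ack, or to do nothing (%leave).
    sent = outstanding[(duct, wire)].popleft()
    return {"poke": "%poke-ack", "watch": "%watch-ack"}.get(sent, "no-op")

send("duct-1", "/sub/foo", "watch")
send("duct-1", "/sub/foo", "leave")
print(on_ack("duct-1", "/sub/foo"))   # %watch-ack
print(on_ack("duct-1", "/sub/foo"))   # no-op
```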
This also includes an easy state adapter. The more interesting part of
the upgrade is that we likely have outstanding subscriptions with the
old wire format. The disadvantage of storing information in wires is
that it can't be upgraded in +load. So, here we listen for updates on
the old wire format, and when we get them we kill the old subscription,
so that it will be recreated with the new wire format.
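A sketch of that migration trick in Python; the wire-format check here is hypothetical:

```python
# Wires can't be rewritten in +load, so when an update arrives on an
# old-format wire we kill that subscription instead of processing it; to the
# app this looks like any other kick, and resubscribing recreates the
# subscription with the new wire format.
OLD_PREFIXES = ("/poke/", "/watch/", "/leave/")  # old wires encoded the message type

def on_update(wire: str, kill_subscription, handle_update) -> None:
    if wire.startswith(OLD_PREFIXES):
        kill_subscription(wire)
    else:
        handle_update(wire)
```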
As an aside, this is a good example of what we mean when we say
subscriptions may be killed at any time, so apps must handle this case.
Finally, this sets the "attributing" ship to ~zod for agent requests. That
information is ignored for agent requests anyway, but including the real
ship causes spurious duct mismatches.
This is why basically all packets are currently going through the galaxies.
Most of the time, the flow is:
1. You're talking to ~dopzod but don't know where it is, so you ask ~zod to
   forward, which it does.
2. ~dopzod responds both directly (on the origin lane) and through ~zod.
3. (If you're behind a NAT, the direct response doesn't get back, but the one
   through ~zod does. Then you respond directly to ~dopzod because their lane
   piggybacked on the response. ~dopzod responds both directly and through
   ~zod, and the story picks up the same as if you weren't behind a NAT.)
4. Now you have a direct lane to ~dopzod, so all is well.
5. Now the duplicate response from ~dopzod through ~zod comes in (it takes a
   little longer because it's bouncing off ~zod), resetting your lane to
   "provisional".
6. Since your lane is provisional, you send your next packet both directly
   and through ~zod.
7. GOTO 2.
This change says "if I already have a direct lane, don't overwrite it
with a provisional one". This way, the only way the direct lane can be
overwritten is if they stop responding on it (cleared on "not
responding; still trying").
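A minimal Python sketch of the rule, with made-up types; ames's actual lane handling is more involved:

```python
# Once we've learned a direct lane for a peer, a later response routed
# through the galaxy must not downgrade it back to provisional.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Lane:
    address: str
    direct: bool   # False means provisional (learned via a forwarder)

def update_lane(current: Optional[Lane], incoming: Lane) -> Lane:
    # Keep an existing direct lane unless the peer stops responding on it
    # (that case is handled separately by clearing the lane entirely).
    if current is not None and current.direct and not incoming.direct:
        return current
    return incoming

direct = update_lane(None, Lane("1.2.3.4:1337", direct=True))
late_duplicate = Lane("forwarded-through-~zod", direct=False)
print(update_lane(direct, late_duplicate))   # still the direct lane
```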
I also added |- to +send-blob to make |ames-verb %rot less confusing.
The old version of ping hung when your sponsor breached while you had an
outstanding poke. I believe it would do the same if your sponsor
changed and the old sponsor didn't respond to you.
This change explicitly subscribes to Jael for updates to our sponsorship
tree, and kicks the pings of any ship that changes rift, as well as any
changed sponsors.
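A conceptual Python sketch of that reaction, with hypothetical names (not the ping app's actual code):

```python
# On a jael update, kick the pings for any ship whose rift changed and for
# any sponsor that changed, so no ping is left hanging on a dead flow.
def on_jael_update(rift_changed: set, old_sponsors: list, new_sponsors: list,
                   kick_ping) -> None:
    for ship in rift_changed:
        kick_ping(ship)                              # breach: restart its ping
    for ship in set(old_sponsors) ^ set(new_sponsors):
        kick_ping(ship)                              # sponsorship tree changed
```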