Gall tells ames to %cork flows for subscriptions it has closed.
Receiving a kick also closes a subscription, but gall wasn't issuing a
%cork in that case. We correct that here.
Inlines +mo-handle-ames-response's logic at its only callsite.
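For intuition, a minimal Python sketch of the invariant this restores (not the actual Hoon; the class and method names are hypothetical): however a subscription ends, gall should tell ames to %cork the flow.

```python
# Illustrative model only; real gall tracks subscriptions per agent and wire.
class Subscriptions:
    def __init__(self):
        self.open = set()        # wires with live subscriptions
        self.corks_sent = []     # flows we told ames to %cork

    def emit_cork(self, wire):
        # Tell ames to close and eventually forget the underlying flow.
        self.corks_sent.append(wire)

    def leave(self, wire):
        # Agent-initiated unsubscribe: gall already corked here.
        if wire in self.open:
            self.open.discard(wire)
            self.emit_cork(wire)

    def on_kick(self, wire):
        # Publisher-initiated close: previously the subscription was dropped
        # here without a %cork, leaving the ames flow half-closed.
        if wire in self.open:
            self.open.discard(wire)
            self.emit_cork(wire)
```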
It seems that this structure has been unused since e75ab631a4 and confuses
newbies trying to figure out exactly what the commit structure is (which is
how I came across this).
Without this, a ship would send a %cork for at most one flow per
recork timer, which could take years to clear for some ships.
This starts a hot loop of trying the next cork once one gets
positively acked.
The previous recork timer queued up %cork messages without sending them.
It also relied on making sure pump timers didn't get set for recork bones.
This was fragile.
The new design enqueues up to one new %cork message per ship during each
recork timer, based on the state of the flow. If the flow is closing but
there are no outstanding messages in it, then it needs to be recorked.
Flows will be recorked in ascending numerical order by bone.
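As a rough illustration, the per-ship selection could be modeled like this in Python (hypothetical data shapes; the real logic operates on ames' per-peer flow state):

```python
def pick_flow_to_recork(flows):
    """flows: dict of bone -> {'closing': bool, 'outstanding': int}.
    Return the lowest closing bone with no outstanding messages, or None."""
    for bone in sorted(flows):
        flow = flows[bone]
        if flow['closing'] and flow['outstanding'] == 0:
            return bone
    return None

def on_recork_timer(peers):
    """peers: dict of ship -> flows; at most one new %cork per ship per wake."""
    corks = []
    for ship, flows in peers.items():
        bone = pick_flow_to_recork(flows)
        if bone is not None:
            corks.append((ship, bone))   # enqueue exactly one %cork for this ship
    return corks
```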
The condition got butchered during the refactor: instead of avoiding the creation
of pump timers during recork wake, it was setting them _exclusively_ during
recork wake.
Problem:
by-channel has its own copy of server-state from line 2182. discard-channel returns an altered state, with one channel removed from by-channel's state. But by-channel's state isn't updated between iterations, so |trim only removes one channel per invocation.
Solution:
update by-channel on each iteration.
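A minimal Python model of the bug and the fix (illustrative only; the real iteration happens in |trim using discard-channel):

```python
def discard_channel(channels, name):
    # Returns a new state with one channel removed, like discard-channel.
    new = dict(channels)
    new.pop(name, None)
    return new

def trim_buggy(channels, expired):
    state = channels
    for name in expired:
        # BUG: always starts from the original `channels`, so the final
        # result only ever has the last channel removed.
        state = discard_channel(channels, name)
    return state

def trim_fixed(channels, expired):
    state = channels
    for name in expired:
        # FIX: thread the updated state through each iteration.
        state = discard_channel(state, name)
    return state
```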
This is a temporary fix, and the first of two stages of the
gall-request-queue-fix release. This gives a publisher ship the ability to
understand a %cork and handle it properly, but no subscriber will be
sending %corks at this stage when leaving a subscription.
We still add a nonce to all subscription wires, but we don't increment it
when resubscribing, allowing flows to be reused.
Tested locally with toy pub/sub agents and Group joins/leaves.
Previously, the initial Azimuth snapshot was stored in Clay and shipped
in the pill. This caused several problems:
- It bloated the pill
- Updating the snapshot added large blobs to Clay's state. Even now
that tombstoning is possible, you don't want to have to do that
regularly.
- As a result, the snapshot was never updated.
- Even if you did tombstone those files, it could only be updated as
often as the pill
- And those updates would be sent over the network to people who didn't
need them
This moves the snapshot out of the pill and refactors Azimuth's
initialization process. On boot, when app/azimuth starts up, it first
downloads a snapshot from bootstrap.urbit.org and uses that to
initialize its state. As before, updates after this initial snapshot
come from an Ethereum node directly and are verified locally.
Relevant commands are:
- `-azimuth-snap-state %filename` creates a snapshot file
- `-azimuth-load "url"` downloads and inits from a snapshot, with url
defaulting to https://bootstrap.urbit.org/mainnet.azimuth-snapshot
- `:azimuth &azimuth-poke-data %load snap-state` takes a snap-state any
way you have it
Note the snapshot is downloaded from the same place as the pill, so this
doesn't introduce additional trust beyond what was already required.
When remote scry is released, we should consider allowing downloading
the snapshot in that way.
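Schematically, the new boot flow looks something like the Python sketch below; this is illustration only, and every helper name is a hypothetical stand-in for logic that actually lives in app/azimuth.

```python
# Helpers are injected so the sketch stays runnable; in reality the snapshot
# is a jammed noun and the update stream is an Ethereum log subscription.
import urllib.request

SNAPSHOT_URL = "https://bootstrap.urbit.org/mainnet.azimuth-snapshot"

def fetch_snapshot(url=SNAPSHOT_URL):
    # Download the published snapshot instead of reading it out of the pill.
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def boot_azimuth(parse_snapshot, follow_ethereum):
    # 1. Initialize state from the downloaded snapshot.
    state = parse_snapshot(fetch_snapshot())
    # 2. From here on, updates come from an Ethereum node directly and are
    #    verified locally, exactly as before this change.
    return follow_ethereum(state)
```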
Because the publisher will send the cork plea back to the subscriber on
the next bone, we have no way to know the bone of the original cork.
To handle this, we add the cork bone to the plea path.
Still WIP: it keeps resending the cork plea faster than its ~h1 timer.
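Schematically, the bone-in-the-path workaround amounts to something like this (a Python sketch; the real plea path is a wire, and its exact shape here is hypothetical):

```python
def cork_plea_path(bone):
    # The original bone rides along in the plea path.
    return f"/cork/{bone}"

def bone_from_plea_path(path):
    # Recover the bone of the original cork from the incoming plea path,
    # since the plea itself arrives on a different (new) bone.
    _, _, tail = path.partition("/cork/")
    return int(tail)

assert bone_from_plea_path(cork_plea_path(13)) == 13
```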
Removes the !: at the top of gall, so that it no longer gets included in traces about agent builds or crashes.
We also annotate intentional crashes with ~_ hints, so that we still see a crash reason even if we don't get a full trace.
Lastly, this flops the trace for +on-load crashes, which was getting printed bottom-first.
Stale lanes may cause forwarding loops. Imagine the following:
1) Planet A is live. Galaxy B, its indirect sponsor, learns of its route.
2) A goes offline. Another ship, C, is started in its place, at the same route.
3) B receives a packet for A, forwards it to the known route.
4) C receives the packet, forwards it to B.
5) Repeat from 3.
Here, we update the forward lane(s) scry used by the runtime to not produce a
peer's lane if they haven't communicated with us in the last hour. Everyone's
supposed to ping their sponsorship chain every 30 seconds. If those aren't
going through, you shouldn't expect to be reachable anyway.
We may or may not want to update +send-blob to match.
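In rough Python terms, the new rule is (illustrative only; Peer and its fields are hypothetical stand-ins for ames' per-peer state):

```python
from dataclasses import dataclass
from typing import Optional
import time

STALE_AFTER = 60 * 60   # one hour, per the rule above

@dataclass
class Peer:
    lane: Optional[str]   # hypothetical opaque route to the peer
    last_contact: float   # unix seconds when we last heard from them

def forward_lanes(peer: Peer, now: Optional[float] = None) -> list:
    now = time.time() if now is None else now
    if peer.lane is None or now - peer.last_contact > STALE_AFTER:
        # No known route, or the peer hasn't talked to us within the hour:
        # produce nothing rather than risk forwarding to a stale route.
        return []
    return [peer.lane]
```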
Previously, if the pointer for a syntax error pointed to the end of the file
(and the file ended in a newline) the code snippet rendering would try to
display a line _beyond_ the end of the file, causing a crash.
Here, we detect that case, and display `<<end of file>>` instead.
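Roughly, in Python terms (illustrative; the real renderer works on Hoon error spots rather than a list of strings):

```python
def snippet_line(lines, line_number):
    """lines: source file split into lines; line_number: 1-indexed line to show."""
    if line_number > len(lines):
        # The error spot points past the last line (e.g. a file ending in a
        # newline): show a marker instead of indexing out of bounds.
        return "<<end of file>>"
    return lines[line_number - 1]
```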
This accounts for a possible race condition where ames expects a
response, but regresses into the larval state. Upon receiving the
$sign on +take, we would remain stuck as a larva. Now we check
that we have enough information to re-evolve and then start a
/larval timer to begin draining the queue.
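A rough Python model of the new behavior (hypothetical names and state; the real code is larval ames' +take):

```python
class LarvalAmes:
    def __init__(self):
        self.queue = []                # signs/tasks buffered while larval
        self.enough_to_evolve = False  # stand-in for the real readiness check

    def take(self, sign):
        # Previously: the sign was enqueued and we stayed stuck as a larva.
        self.queue.append(sign)
        # Now: if we already have enough information to re-evolve, set a
        # /larval timer so a later wake drains the queue and metamorphoses.
        if self.enough_to_evolve:
            self.set_larval_timer()

    def set_larval_timer(self):
        # Stand-in for passing a %wait to behn on the /larval wire.
        pass
```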
Previously, we stored the nonce in $boat, which changed the $bowl of each
agent. This compiles and all agents reload, but more testing is needed.
This change also renames inbound/outbound watches to $bitt/$boat.