PR #5840 mostly fixed #1559, but introduced a new bug. Before, you could
safely `=dir` into a desk without a case: dojo would use the nonexistent
case `ud+0` as the beam for its state, and swap it out for `da+now`
whenever it tried to resolve the current path. The new check made this
fail, since `ud+0` is a nonexistent case. This change uses +he-beam to
transform the beam in the conditional: if the case is 0, it changes the
case to `da+now` before it scries.
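A minimal sketch of the shape of that check, in Hoon (`+he-beam` is the
arm named above; the surrounding usage, and `now` being in scope, are
assumptions rather than the exact dojo code):

  ::  if the beam's case is the nonexistent ud+0, swap in da+now
  ::  before trying to resolve the path
  =/  =beam  he-beam
  =?  r.beam  =(r.beam ud+0)  da+now
  ::  produce the fixed-up beam for the scry
  beam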
If a cert is configured and a secure port is live, Eyre will set the
redirect flag in http-config.state.
When it gets a ++request, it will return a 301 redirect to
https://[host]/[path] if:
1. the request is not already secure,
2. the redirect flag is set,
3. the secure port is live,
4. the request is not for /.well-known/acme-challenge/..., and
5. the host is in domains.state.
It will not redirect if the request is forwarded-secured, comes from
localhost or the local loopback, or targets an IP address or a domain
not in domains.state. (A rough sketch of the check follows.)
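The decision amounts to a conjunction along these lines (the state and
variable names here are assumptions for illustration, not Eyre's exact
code):

  ?&  !secure                                       ::  1. not already secure
      redirect.http-config.state                    ::  2. redirect flag set
      ?=(^ secure.ports.state)                      ::  3. secure port live
      !?=([%'.well-known' %acme-challenge *] site)  ::  4. not an ACME challenge
      (~(has in domains.state) host)                ::  5. host in domains.state
  ==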
In ++load it checks whether the secure port is live and a cert is set,
and enables the redirect if so (for people who already use in-Urbit
Let's Encrypt).
%rule %cert tasks also toggle it (only turning it on if the secure port
is live).
%live tasks also toggle it (only turning it on if a cert is set).
I've tested with a couple of ships and it seems to work fine.
This is useful in combination with pyry's auto arvo.network DNS config
system - you can finally get rid of reverse proxies entirely.
Eyre always gets passed request headers in lowercase, so we should search for
the lowercased version of the header.
Arguably `+get-header` should lowercase keys before comparing them, but that's
a more serious behavioral change.
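For example, something like this (illustrative; the header name and the
request value are placeholders):

  ::  look up the lowercased key, since Eyre lowercases all
  ::  incoming request header keys
  (get-header:http 'x-forwarded-for' header-list.request)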
This allows you to pass a thread directly into khan, instead of passing
a filename. This has several implications:
- The friction for using threads from an app is significantly lower.
Consider:
  =/  shed
    =/  m  (strand ,vase)
    ;<  ~  bind:m  (poke:strandio [our %hood] %helm-hi !>('hi'))
    ;<  ~  bind:m  (poke:strandio [our %hood] %helm-hi !>('there'))
    (pure:m !>('product'))
  [%pass /wire %arvo %k %lard %base shed]
- These threads close over their subject, so you don't need to parse
arguments out from a vase -- you can just refer to them. The produced
value must still be a vase.
  ++  hi-ship
    |=  [=ship msg1=@t msg2=@t]
    =/  shed
      =/  m  (strand ,vase)
      ;<  ~  bind:m  (poke:strandio [ship %hood] %helm-hi !>(msg1))
      ;<  ~  bind:m  (poke:strandio [ship %hood] %helm-hi !>(msg2))
      (pure:m !>('product'))
    [%pass /wire %arvo %k %lard %base shed]
- Inline threads can be added to the dojo, though this PR does not add
any sugar for this.
  =strandio -build-file %/lib/strandio/hoon
  =sh |=  message=@t
      =/  m  (strand:rand ,vase)
      ;<  ~  bind:m  (poke:strandio [our %hood] %helm-hi !>('hi'))
      ;<  ~  bind:m  (poke:strandio [our %hood] %helm-hi !>(message))
      (pure:m !>('product'))
  |pass [%k %lard %base (sh 'the message')]
Implementation notes:
- Review the commits separately: the first is small and implements the
real feature. The second moves the strand types into lull so khan can
refer to them.
- In lull, I wanted to put +rand inside +khan, but this runs into the
known issue that puts the compiler in a loop: +rand depends on +gall,
which depends on +sign-arvo, which depends on +khan, so nesting +rand
inside +khan would spin the compiler. The usual solution is to either
move everything into the same battery (very ugly here) or break the
recursion (which we do here).
Before this, the %watch to eth-watcher was happening before the %poke,
and so eth-watcher was responding with its entire history immediately.
This is bad because it takes a lot of memory to process that many logs,
and also because those logs are stale.
Now, the %poke happens first, which clears the history.
%kick is supposed to start back from the snapshot and move forward.
Without this, we would only fetch logs that we hadn't already fetched.
Thus, if you were up-to-date when you kicked, you would miss anything
that happened between the time the snapshot was taken and the present,
though you would still see anything that happened after the kick.
Also reverted the lull change to make this a safer upgrade.
Previously, when the larva got to processing enqueued events, it was
doing so without loading state into the adult beforehand, resulting in
incorrect processing of events.
Here, we make the larva call +molt more eagerly, ensuring that the adult
always has its state available when we use it.
Yes, there is a global timer for closing flows, but all that does is
enqueue a cork message. +on-stir needs to set _pump_ timers for all
flows that might still have messages to send, which includes closing
flows.
When ames notifies us that our subscription has been kicked, we enqueue
a cork to clean up the flow. Unlike the %leave case, however, we were
not registering the cork in the queue of outstanding comms. We would
eventually get an ack, but not know what for, and erroneously inject
%poke-acks and %watch-acks.
Here we simply add a %cork entry to the queue before sending it.
This is sufficient to bring the normal (non-prerelease-bugged) cases
into the new world.
For the prerelease ships that ran a buggier version of the new gall
subscription logic, we note that the conditional may trigger for the
nonce=1 case where it had already triggered for their
(shouldn't-be-possible) nonce=0 case. This results in a %leave on a wire
that wasn't in use. This no-ops on the publisher side though, and the
flow gets corked right away, so this is considered harmless.
In response to clog notification from remote ames, we were sending a
%cork to clean up the flow. However, the wire we were using had the /sys
prefix already stripped off. Here, we put it back in.
Start by killing subscription nonce 0, then work our way up instead of
down. We enhance the printf with a "total nonces" indicator so we can
still easily see the progress being made.
Previously, +ap-doff kicked the agent repeatedly; we needed to kick it
only once. Now publisher agents clear their incoming subscription state
without the subscriber making lots of new subscriptions in response to
repeated kicks.
+on-plea gets called in two very different ways:
1) handling request from local vane to send %plea to peer
2) handling %cork request from another ship, which our local ames has %pass'ed
to ourselves
In the second case, we shouldn't print misleadingly, or bind a duct in the ossuary.
+ap-nuke was not including the nonce, but should.
+ap-handle-peers was potentially including a zero nonce.
(The latter shouldn't have been possible, but there's a bug in +load
where sub-nonce.yoke gets initialized as 0 instead of 1.)
Gall tells ames to %cork flows for subscriptions it has closed.
Receiving a kick also closes a subscription, but gall wasn't issuing a
%cork in that case. We correct that here.
Inlines +mo-handle-ames-response's logic at its only callsite.
It seems that this structure has been unused since e75ab631a4, and it
confuses newbies trying to figure out exactly what the commit structure
is (which is how I came across this).
Without this, a ship would send a cork on a max of one flow per
recork timer, which could take years to clear for some ships.
This starts a hot loop of trying the next cork once one gets
positively acked.
The previous recork timer queued up %cork messages without sending them.
It also relied on making sure pump timers didn't get set for recork bones.
This was fragile.
The new design enqueues up to one new %cork message per ship during each
recork timer, based on the state of the flow. If the flow is closing but
there are no outstanding messages in it, then it needs to be recorked.
Flows will be recorked in ascending numerical order by bone.
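As a hedged sketch of the per-ship pass: the `closing` set of bones is
part of ames's peer state, but `outstanding` here is a stand-in for
however the real code checks for unacked messages on a flow:

  ::  pick the lowest closing bone with no outstanding messages;
  ::  that is the (at most one) flow to recork this timer
  =/  bones  (sort ~(tap in closing.peer-state) lth)
  |-  ^-  (unit bone)
  ?~  bones  ~
  ?:  (outstanding i.bones)
    $(bones t.bones)
  `i.bones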
The condition got butchered during a refactor: instead of avoiding the
creation of pump timers during the recork wake, it was setting them
_exclusively_ during the recork wake.
This test started failing, presumably at some point during #5886.
Testing with a comet on the live network, the test seems inaccurate: the
comet can communicate and be communicated with just fine.