When changing the description, some pre-metadata refactors resulted in
permanently broken calls to the API. This change accesses our resource
object correctly.
This commit pulls the spinner out of the header bar and reincorporates
it as a component that hooks into local state while awaiting a new prop
or disabling an input.
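A rough TSX sketch of that pattern, using hypothetical names rather than
the actual component:

    import React, { useState, useEffect } from 'react';

    // Hypothetical field component: the spinner keys off local "awaiting"
    // state, which also disables the input until the updated prop arrives.
    function TitleField(props: { title: string; onSave: (t: string) => void }) {
      const [value, setValue] = useState(props.title);
      const [awaiting, setAwaiting] = useState(false);

      // Clear the spinner once the new prop comes back down.
      useEffect(() => {
        setAwaiting(false);
        setValue(props.title);
      }, [props.title]);

      return (
        <div>
          <input
            disabled={awaiting}
            value={value}
            onChange={e => setValue(e.target.value)}
          />
          <button
            disabled={awaiting}
            onClick={() => { setAwaiting(true); props.onSave(value); }}
          >
            Save
          </button>
          {awaiting && <span className="spinner" />}
        </div>
      );
    }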
Previously, when new props arrived for the notebook's metadata, all the
fields would flash blank or revert to previous inputs. This rewrites the
update function to edit state more atomically, which appears to correct
the behaviour: fields no longer flash blank and are disabled correctly.
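A framework-agnostic TypeScript sketch of the difference, with
hypothetical field names; the real update function is not reproduced
here:

    // Hypothetical metadata shape and setter; the point is that all fields
    // travel in one state transition instead of several partial updates.
    interface NotebookMetadata {
      title: string;
      description: string;
      comments: boolean;
    }

    type SetState = (updater: (prev: NotebookMetadata) => NotebookMetadata) => void;

    // Before (illustrative): independent updates, each of which can briefly
    // render with the other fields blank or stale.
    function updatePiecemeal(setState: SetState, incoming: NotebookMetadata) {
      setState(prev => ({ ...prev, title: incoming.title }));
      setState(prev => ({ ...prev, description: incoming.description }));
      setState(prev => ({ ...prev, comments: incoming.comments }));
    }

    // After (illustrative): one atomic update carrying every field together.
    function updateAtomically(setState: SetState, incoming: NotebookMetadata) {
      setState(prev => ({ ...prev, ...incoming }));
    }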
By using an array instead of a set, we stop deduplicating our group
index and push redundant entries instead. When searching, this prevents
a component failure state in which it tries to search a non-existent
index for matches.
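A hypothetical TypeScript sketch of the idea, assuming the group index
is kept parallel to a list of entries (not the actual component code):

    // Hypothetical parallel structures: one group entry is pushed per item,
    // duplicates included, so every item position has an index to search.
    const items: string[] = [];
    const groupIndex: string[] = [];   // array, not Set: redundant entries kept

    function addItem(item: string, group: string) {
      items.push(item);
      groupIndex.push(group);          // a Set would drop repeats and misalign positions
    }

    function groupForItem(i: number): string | undefined {
      return groupIndex[i];            // defined for every valid item position
    }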
RFC 2396 specifies that a segment consists of zero or more pchars.[1] We
deviated from this by requiring at least one pchar per segment.
With this change, we support /some//path, and no longer lose the
trailing slash in /some/path/.
[1]: https://tools.ietf.org/html/rfc2396#section-3.3
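An illustrative TypeScript sketch of segment splitting under this rule
(not the actual parser):

    // A segment may be empty ("zero or more pchars"), so splitting on "/"
    // must keep empty strings instead of dropping them.
    function pathSegments(path: string): string[] {
      const body = path.startsWith('/') ? path.slice(1) : path;  // drop only the leading slash
      return body.split('/');
    }

    // pathSegments('/some//path') -> ['some', '', 'path']
    // pathSegments('/some/path/') -> ['some', 'path', '']  (trailing slash kept)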
In the wild, ships that were live pre-OS1 still had launch subscriptions
open to the clock on the /tile path, instead of the currently-used
/clocktile path. Additionally, launch state for the clock tile seemed
incomplete.
Here, we simply re-%add the clock to launch.
Note that launch currently does not clean up old subscriptions on
path change. For the pre-OS1 case, the old path is no longer in use,
rendering the subscription harmless. For cases where the correct
subscription was already in place, this will print a
%watch-wire-not-unique, but does no harm beyond that.
Previously, when the refresh-rate timer activated, and the thread from
the previous activation was still running, we would kill it and start
a new one. For low refresh rates, on slower machines, nodes, or network
connections, this could cause the update to never conclude.
Here we add a timeout-time to eth-watcher's config. If the refresh-rate
timer activates, and a thread exists, but hasn't been running for at
least the specified timeout-time yet, we simply take no action, and wait
for the next refresh timer.
Note that we opted for "at least timeout-time", instead of killing &
restarting directly after the specified timeout-time has passed, to
avoid having to handle an extra timer flow.
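A hypothetical TypeScript sketch of the timer decision described above;
the real implementation is the eth-watcher Hoon agent, and these names
are illustrative only:

    // Decide what to do when the refresh-rate timer fires.
    interface WatchdogState {
      threadStartedAt: number | null;  // epoch ms of the running thread, or null if none
      timeoutTime: number;             // minimum runtime before giving up on a thread, ms
    }

    function onRefreshTimer(state: WatchdogState, now: number): 'start' | 'wait' | 'restart' {
      if (state.threadStartedAt === null) {
        return 'start';    // no thread running: kick off a new update
      }
      if (now - state.threadStartedAt < state.timeoutTime) {
        return 'wait';     // thread still within its timeout-time: take no action
      }
      return 'restart';    // thread has run at least timeout-time: kill it and start anew
    }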
In the +on-load logic, we configure the timeout-time for existing
watchdogs as six times the refresh-rate. We want to set
azimuth-tracker's timeout-time to ~m30, and don't care much about other,
less-likely-to-be-active use cases of eth-watcher.
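Illustrative arithmetic for that +on-load step, in hypothetical
TypeScript rather than the actual Hoon:

    const MINUTE_MS = 60_000;

    // Every existing watchdog gets timeout-time = 6 * refresh-rate, so a ~m5
    // refresh-rate (as implied for azimuth-tracker) yields the desired ~m30.
    function migrateWatchdog(config: { refreshRate: number }) {
      return { ...config, timeoutTime: 6 * config.refreshRate };
    }

    // migrateWatchdog({ refreshRate: 5 * MINUTE_MS }).timeoutTime === 30 * MINUTE_MS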