Previously, the initial Azimuth snapshot was stored in Clay and shipped
in the pill. This caused several problems:
- It bloated the pill.
- Updating the snapshot added large blobs to Clay's state. Even now
  that tombstoning is possible, you wouldn't want to do that regularly.
- As a result, the snapshot was never updated.
- Even if you did tombstone those files, the snapshot could only be
  updated as often as the pill.
- Those updates would be sent over the network to people who didn't
  need them.
This moves the snapshot out of the pill and refactors Azimuth's
initialization process. On boot, when app/azimuth starts up, it first
downloads a snapshot from bootstrap.urbit.org and uses that to
initialize its state. As before, updates after this initial snapshot
come from an Ethereum node directly and are verified locally.
Relevant commands are:
- `-azimuth-snap-state %filename` creates a snapshot file
- `-azimuth-load "url"` downloads and inits from a snapshot, with url
defaulting to https://bootstrap.urbit.org/mainnet.azimuth-snapshot
- `:azimuth &azimuth-poke-data %load snap-state` loads a snap-state you
  already have on hand, however you obtained it
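For example, assuming the defaults described above and an arbitrary
`%mainnet` filename, a snapshot round-trip from the dojo might look like:

```
> -azimuth-snap-state %mainnet
> -azimuth-load "https://bootstrap.urbit.org/mainnet.azimuth-snapshot"
```

Omitting the url argument should fall back to the default listed above.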
Note the snapshot is downloaded from the same place as the pill, so this
doesn't introduce additional trust beyond what was already required.
When remote scry is released, we should consider allowing the snapshot
to be downloaded that way.
If an existing watchdog that was not fully restarted still had
pending-logs, and the starting block number is newer than the first
pending one, those logs would be carried over when a new thread was
started; they would then be re-downloaded and fail verification in
/lib/naive.
The previous value, used for testing, didn't account for block reorgs:
if we zoom to a latest block that has no transactions, but that block is
later replaced by a one-block reorg that does contain a transaction,
we'll miss that transaction, leaving our Azimuth state incomplete.
To fix this, we rewind the Azimuth state to the contents of the
snapshot, and then start retrieving logs from the latest one we have.
When you first boot, if you try to talk to someone before your Azimuth
state is up-to-date (for example via import), and they've ever breached
(twice), then you'll get a breach notification, cancelling your message.
This changes it so that if we haven't heard anything about a ship, we
don't signal a breach.
Most of the implementation complexity comes from needing
eth-watcher/azimuth-tracker to produce an update of a list instead of a
list of updates. This way, Jael can keep a "state as of the beginning
of this move" variable to check when deciding whether to signal a
breach.
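As a rough sketch of the shape change (the type names here are
hypothetical stand-ins, not the actual sur definitions):

```hoon
|%
::  stand-in for a single Azimuth point diff
+$  event   @ud
::  before: one %fact per event, spread across many moves
::  after:  a single %fact carrying the whole batch, so the subscriber
::          can compare it against its state at the start of the move
+$  update  (list event)
--
```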
Previously, when the refresh-rate timer activated, and the thread from
the previous activation was still running, we would kill it and start
a new one. For low refresh rates, on slower machines, nodes, or network
connections, this could cause the update to never conclude.
Here we add a timeout-time to eth-watcher's config. If the refresh-rate
timer activates, and a thread exists, but hasn't been running for at
least the specified timeout-time yet, we simply take no action, and wait
for the next refresh timer.
Note that we opted for "at least timeout-time", instead of killing &
restarting directly after the specified timeout-time has passed, to
avoid having to handle an extra timer flow.
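A minimal sketch of that guard, with hypothetical names:

```hoon
::  decide whether to act when the refresh-rate timer fires
|=  [now=@da thread-start=(unit @da) timeout-time=@dr]
^-  ?
?~  thread-start
  &                                        ::  no thread running: start one
::  a thread exists: only kill & restart it once it has been running
::  for at least timeout-time; otherwise take no action
(gte (sub now u.thread-start) timeout-time)
```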
In the +on-load logic, we configure the timeout-time for existing
watchdogs as six times the refresh-rate. We want to set
azimuth-tracker's timeout-time to ~m30, and don't care much about other,
less-likely-to-be-active use cases of eth-watcher.
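Assuming azimuth-tracker's existing refresh-rate of ~m5 (which the ~m30
target implies), the arithmetic is simply:

```hoon
`@dr`(mul 6 ~m5)  ::  ~m30, azimuth-tracker's new timeout-time
```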
Instead of providing a (unit path), this allows a (list path), which
better supports the "update to a path and its subpaths" cases.
For example, if /things wants updates about everything, and
/things/specific wants updates about the specific thing, they'll both
need to receive a %fact when the specific thing changes.
Previously, these would have been two separate moves. Now, gall handles
the multi-targeting for you.
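For instance, a single agent card can now target both paths at once
(the mark and payload here are hypothetical):

```hoon
::  one %fact delivered to both the broad and the specific subscription
^-  card:agent:gall
[%give %fact ~[/things /things/specific] %noun !>('specific thing changed')]
```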
When configuring a watchdog on a path that already exists, we now
"overwrite" it, meaning we throw away all history and trawl the node
for logs again.
If the only config change is the url, however, we silently modify it,
and simply use it "from this point onward".
This matches the behavior of the original azimuth-tracker.
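Roughly, the reconfiguration rule could be sketched as follows (config
shape and names hypothetical):

```hoon
|%
+$  config  [url=@t from-block=@ud]
::  &: only the url changed (or nothing did), so keep history and
::     use the new url from here on
::  |: anything else changed, so drop history and re-trawl the node
++  url-only-change
  |=  [old=config new=config]
  ^-  ?
  =(new old(url url.new))
--
```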
In order to give an initial response to incoming subscriptions (without
resorting to retrieving that data from the chain again), we now store
event log history in state.
Instead of discarding pending-logs entirely after sending out updates,
we add them to the watchdog's history.
Just as with pending-logs, we remove entries from the head of the
history during a rewind (though not before exhausting the pending-logs).
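A rough sketch of the extended per-watchdog state (field and type names
hypothetical):

```hoon
|%
+$  event-log  *                      ::  stand-in for an Ethereum event log
+$  watchdog
  $:  pending-logs=(list event-log)   ::  received but not yet sent as updates
      history=(list event-log)        ::  already sent; replayed to new subscribers
  ==
--
```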
Kicks the update timer on application start, then sets a new timer
whenever it's awoken. This aims to ensure eth-watcher never stops
periodically looking for updates.
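In modern agent terms, the self-rescheduling timer might be sketched as
(wire and names hypothetical):

```hoon
::  set the next behn wake-up; called on start and again whenever the
::  %wake for /refresh comes back
++  set-timer
  |=  [now=@da refresh-rate=@dr]
  ^-  card:agent:gall
  [%pass /refresh %arvo %b %wait (add now refresh-rate)]
```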
Uses the existing logic in azimuth-tracker to implement a new
eth-watcher, which can watch an Ethereum node for _any_ events, as
opposed to exclusively a subset of the Azimuth contract's events.
Azimuth-tracker will be reimplemented as a dependent of this in
forthcoming commits.
These were deprecated in favor of azimuth-tracker in #1320.
(Azimuth-tracker, however, isn't a general-purpose Ethereum log watcher
tool. Commits to transform it into a more broadly useful tool are
forthcoming.)