Before this, the %watch to eth-watcher was happening before the %poke,
and so eth-watcher was responding with its entire history immediately.
This is bad because it takes a lot of memory to process that many logs,
and also because those logs are stale.
Now, the %poke happens first, which clears the history.
%kick is supposed to start back from the snapshot and move forward.
Without this, we would only fetch logs that we hadn't already fetched.
Thus, if you were up-to-date when you kicked, you would miss anything
that happened between the time the snapshot was taken and the present,
though you would still see anything that happened after the present.
Also reverted the lull change to make this a safer upgrade.
Previously, the initial Azimuth snapshot was stored in Clay and shipped
in the pill. This caused several problems:
- It bloated the pill.
- Updating the snapshot added large blobs to Clay's state. Even now
  that tombstoning is possible, you don't want to have to do that
  regularly.
- As a result, the snapshot was never updated.
- Even if you did tombstone those files, it could only be updated as
  often as the pill.
- And those updates would be sent over the network to people who didn't
  need them.
This moves the snapshot out of the pill and refactors Azimuth's
initialization process. On boot, when app/azimuth starts up, it first
downloads a snapshot from bootstrap.urbit.org and uses that to
initialize its state. As before, updates after this initial snapshot
come from an Ethereum node directly and are verified locally.
Relevant commands are:
- `-azimuth-snap-state %filename` creates a snapshot file
- `-azimuth-load "url"` downloads and inits from a snapshot, with url
defaulting to https://bootstrap.urbit.org/mainnet.azimuth-snapshot
- `:azimuth &azimuth-poke-data %load snap-state` takes a snap-state any
way you have it
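For illustration, a possible sequence in the dojo (the %my-snapshot
filename is made up; the url shown is just the default noted above):
> -azimuth-snap-state %my-snapshot
> -azimuth-load "https://bootstrap.urbit.org/mainnet.azimuth-snapshot"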
Note the snapshot is downloaded from the same place as the pill, so this
doesn't introduce additional trust beyond what was already required.
When remote scry is released, we should consider allowing the snapshot
to be downloaded that way.
It's too easy to get trapped in the dojo %dy-edit-busy escape room. Just
type something like:
-build-file /=base/gen/ls/hoon
This modifies the dojo output to tell the user how to get out.
Fixes #1462.
Incorporates @Fang-'s suggested changes (thanks!).
Drops the path serialization, as it would print on two separate lines
and is already displayed in the dojo immediately above the error message:
> =dir /=base=/ge
dojo: dir does not exist
If an existing watchdog that was not fully restarted still had
pending-logs, and the starting block number was newer than the first
pending one, those logs would be carried over when starting a new
thread; they would then be re-downloaded and fail to be verified in
/lib/naive.
Only "shortmoons" though, due to some ames lane size limitation which
makes encoding longer ships difficult.
Notably the -ph-moon-az test does not pass, the moon cannot talk to a
non-sponsor galaxy.
We had trie operations independently implemented in +de in arvo,
+an:cloy in zuse, +zu in clay, lib/trie, and app/spider. This unifies
them all into +de in arvo, aggregating the operations that were in use.
And inject their latest keys as soon as we pull them from cache. This
way, we avoid having to do the whole boot sequence again just for a
modified dawn event.
Addresses #5442 by adding %thread-done and %thread-fail marks. Also
fixes await-thread:strandio and removes some blank lines from
app/spider.hoon.
%thread-done loses the type of the result, so you'll need to use ;; to
get it back. The real way to fix this is to have threads produce cages
instead of vases.
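As a rough dojo illustration of the ;; workaround (@ud and 42 are just
stand-ins for whatever result your thread actually produces), pulling
the bare noun out of a vase drops its type, and ;; reasserts it:
> ;;(@ud q:!>(42))
42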
Small touch-ups to simulation behavior and ph tests. Most of them pass
now, even if they're still really slow at times.
The breach ones don't pass, but they also complain of a dangling bone,
so they might work once the fix for that is in.
If a batch gets bigger than a max size defined by the Ethereum node the
raw transaction is sent to, the /ted/roller/send thread will crash and
the batch will be blocked, preventing any subsequent batches from being
sent.
This detects when the current batch reaches a certain threshold and only
includes transactions up to that point, moving the ones that are not
sent back to the pending queue and adjusting their history and finding
status.
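This is not the roller's actual code, but as a sketch of the splitting
idea, assuming each queued transaction carries its byte length (octs is
a stand-in for the real transaction type):
::  take transactions until adding the next one would exceed max,
::  deferring the rest so they can go back to the pending queue
::
|=  [max=@ud txs=(list octs)]
=|  [size=@ud send=(list octs)]
|-  ^-  [send=(list octs) defer=(list octs)]
?~  txs
  [(flop send) ~]
?:  (gth (add size p.i.txs) max)
  [(flop send) txs]
$(size (add size p.i.txs), send [i.txs send], txs t.txs)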