%contact-store is responsible for sending updates about contacts, e.g.
profile color. When it hears an update, it fans that out to its
subscribers, unless that update is stale. If you regularly fan out stale
updates, then they reverberate across the network indefinitely -- we
call this "echoing".
To cut off this echoing, all edits have a timestamp, and we consider any
updates from before this timestamp to be stale. Additions are separate
from edits, and for them we instead do a value comparison on the contact
-- if it didn't change, we consider the update stale.
The problem with this scheme is that if an addition and an edit happen
one after the other in quick succession, you might have the following
sequence (sketched in code below):
1. an add comes in with timestamp T1
2. an edit comes in with timestamp T2, after T1
3. we hear an echo of the add, which errantly applies because it
   passes our "did the contact actually change" check
4. we hear an echo of the edit, which applies because T2 is after T1
5. GOTO 3
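Here is a minimal sketch of the two staleness checks, just to show why
this pair of echoes keeps re-applying. It is not the Hoon
implementation, and it assumes that applying an add also records the
add's timestamp, which is what lets the echoed edit pass the "after"
check again:

```python
# Minimal sketch, not the Hoon implementation: models the two staleness
# checks, assuming an applied add also records its own timestamp.

class ContactStore:
    def __init__(self):
        self.contact = None  # value of the last update we applied
        self.stamp = 0       # timestamp of the last update we applied

    def hear_add(self, timestamp, contact):
        # Adds are checked by value: an identical contact is stale.
        if contact == self.contact:
            return False
        self.contact = contact
        self.stamp = timestamp
        return True  # applied, so it gets fanned out to subscribers

    def hear_edit(self, timestamp, contact):
        # Edits are checked by timestamp: anything not after it is stale.
        if timestamp <= self.stamp:
            return False
        self.contact = contact
        self.stamp = timestamp
        return True  # applied, so it gets fanned out to subscribers

store = ContactStore()
store.hear_add(1, {"color": "red"})    # add at T1
store.hear_edit(2, {"color": "blue"})  # edit at T2, after T1
# The echoes keep re-applying, and each application is fanned out again:
assert store.hear_add(1, {"color": "red"})    # value differs from "blue"
assert store.hear_edit(2, {"color": "blue"})  # T2 is after T1 again
```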
Each time we apply a stale update, we fan it out to our subscribers,
and if any two hosts subscribe to each other, this will loop. It may
even loop unconditionally, because the ship that made the profile
changes doesn't seem to recognize that the echoed copy arrived from the
network rather than from itself, so it re-sends the changes to all the
groups it's in. If so, that's an important issue to fix.
Fixes tloncorp/landscape-issues#1442
Previously, the initial Azimuth snapshot was stored in Clay and shipped
in the pill. This caused several problems:
- It bloated the pill.
- Updating the snapshot added large blobs to Clay's state. Even now
  that tombstoning is possible, you don't want to have to do that
  regularly.
- As a result, the snapshot was never updated.
- Even if you did tombstone those files, the snapshot could only be
  updated as often as the pill itself.
- And those updates would be sent over the network to people who didn't
  need them.
This moves the snapshot out of the pill and refactors Azimuth's
initialization process. On boot, when app/azimuth starts up, it first
downloads a snapshot from bootstrap.urbit.org and uses that to
initialize its state. As before, updates after this initial snapshot
come from an Ethereum node directly and are verified locally.
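For intuition, here is a rough sketch of that boot order. It is not
app/azimuth (which is Hoon); fetch_snapshot and init_azimuth_state are
hypothetical stand-ins, and the URL is the -azimuth-load default listed
below:

```python
# Rough sketch of the new boot order, not app/azimuth itself.
# fetch_snapshot and init_azimuth_state are hypothetical stand-ins.

import urllib.request

SNAPSHOT_URL = "https://bootstrap.urbit.org/mainnet.azimuth-snapshot"

def fetch_snapshot(url: str = SNAPSHOT_URL) -> bytes:
    # Served from the same host as the pill, so no new trust assumption.
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def init_azimuth_state(snapshot: bytes) -> dict:
    # Placeholder: the real agent decodes the snapshot and loads the
    # PKI state from it before subscribing to further updates.
    return {"snapshot_bytes": len(snapshot), "points": {}}

def boot() -> dict:
    state = init_azimuth_state(fetch_snapshot())
    # From here on, updates come from an Ethereum node directly and are
    # verified locally, as before.
    return state
```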
Relevant commands are:
- `-azimuth-snap-state %filename` creates a snapshot file
- `-azimuth-load "url"` downloads and inits from a snapshot, with url
defaulting to https://bootstrap.urbit.org/mainnet.azimuth-snapshot
- `:azimuth &azimuth-poke-data %load snap-state` takes a snap-state any
way you have it
Note that the snapshot is downloaded from the same place as the pill,
so this doesn't introduce additional trust beyond what was already
required. When remote scry is released, we should consider allowing the
snapshot to be downloaded that way.
* master: (33 commits)
groups: updating glob and version
interface: use single sig in NotificationText
interface: fix subscription reconnect issues
landscape: fixing bad glob
landscape: updating glob and version
interface: adds `theme-color` meta tag, removes outdated safari web app meta tag
zuse: add missing assertions
landscape: cache marks again
zuse: comment clarifying sk bounds check
bounds-check against sk=1
zuse: style cleanup, use +rep/+end
pill: solid, brass
interface: refine joining error cases
group-view: fix errored rollback
helm: cleanup +poke-rekey to match #5522
helm: fix |rekey to work with multikey files
test: schnorr bounds checking
zuse: boundary assertions for schnorr
zuse: schnorr test cases
zuse: schnorr address
...
Resolves a good number of conflicts. Most notably, re-propagates removal
of gall's %onto, confirms new /app/herm behavior, coerces hood/drum
state adapters back into place, and updates webterm to use the latest
api.
Boot was broken; fixing the hark-note file mark and re-adding the
hark-store library fixed it.
This lets us push a new pill, which is necessary for the fix in #5434 to
actually work.
Fixes #5501, fixes #5502