Instead of updating the origin lane and mug prior to serializing the
packet, we now treat that as part of the serialization logic.
Since we don't always want this (we might want to serialize the
packet as-is), we add a flag to _ames_serialize_packet controlling the
behavior.
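As a rough illustration, here is what the flag might look like; all
names, types, and fields below are hypothetical stand-ins (only
_ames_serialize_packet itself appears in the real code):

    /*  sketch only; not the actual ames.c signature  */
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct _pact {
      uint32_t mug_l;   //  checksum over the packet body
      uint64_t rog_d;   //  origin lane, zero if unset
      //  ... header and body ...
    } pact;

    static void
    _serialize_packet(pact* pac_u, bool ray_o)
    {
      if ( ray_o ) {
        pac_u->rog_d = 1;  //  stand-in: stamp the origin lane
        pac_u->mug_l = 0;  //  stand-in: recompute the mug
      }
      //  ... jam the packet into a byte buffer ...
    }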
Store the pending to-forward packets in ames.c state as a doubly linked
list.
This allows us to clean up not-yet-forwarded packets on-exit gracefully,
rather than just letting them disappear into the void.
We could probably get away with a singly linked list instead, but then
we would need to depend on scry responses being given in request-order.
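To see why the doubly linked list pays off, consider unlinking
(hypothetical names below): with pre/nex pointers, removal is O(1)
regardless of which scry response arrives first, and on exit we can
walk the head and free whatever is still pending.

    typedef struct _panc {
      struct _panc* pre_u;   //  previous pending packet
      struct _panc* nex_u;   //  next pending packet
      //  ... packet contents ...
    } panc;

    static panc* pan_u = 0;  //  list head, kept in ames state

    static void
    _panc_unlink(panc* pac_u)
    {
      if ( pac_u->pre_u ) pac_u->pre_u->nex_u = pac_u->nex_u;
      else                pan_u               = pac_u->nex_u;
      if ( pac_u->nex_u ) pac_u->nex_u->pre_u = pac_u->pre_u;
    }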
In the forward-to-galaxy case, we don't go through _ames_lane_scry_cb,
instead calling _ames_forward directly, meaning _ames_panc_free wouldn't
get called on those packets.
Now, we move the _ames_panc_free call into _ames_forward, keeping it
otherwise only in the "scry failed" section of _ames_lane_scry_cb. By
moving that call into _ames_forward, the forward-to-galaxy path frees
its packets as well.
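Schematically (hypothetical signatures, reusing the panc sketch above),
both paths now converge on the same free:

    static void
    _panc_free(panc* pac_u)
    {
      //  unlink from the pending list, free buffers
      _panc_unlink(pac_u);
    }

    static void
    _forward(panc* pac_u, uint64_t lan_d)
    {
      //  ... send the packet over lan_d ...
      _panc_free(pac_u);  //  frees for the scry path and the
                          //  direct forward-to-galaxy path alike
    }

    static void
    _lane_scry_cb(panc* pac_u, int sas_i, uint64_t lan_d)
    {
      if ( 0 != sas_i ) {
        _panc_free(pac_u);       //  scry failed: drop the packet
      }
      else {
        _forward(pac_u, lan_d);  //  _forward frees on completion
      }
    }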
Keep a counter that increments when forwarding starts and decrements
when it completes. Once the counter hits 1000, further packets are
dropped instead of forwarded (a rough sketch follows below).
Ideally you want to drop the oldest packets first, but this'd imply
removing scry events from the event log, which ames.c shouldn't know
how to do. Perhaps once u3_lord_peek_cancel or similar gets implemented,
we can do this sanely. Until then, this isn't the end of the world.
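For instance (hypothetical names; only the threshold of 1000 comes from
the text above):

    #define FOW_CAP 1000

    static unsigned fow_w = 0;  //  packets currently mid-forward

    static void
    _start_forward(panc* pac_u)
    {
      if ( FOW_CAP <= fow_w ) {
        _panc_free(pac_u);  //  at capacity: drop instead of forward
        return;
      }
      fow_w++;              //  increment on start...
      //  ... kick off the lane scry ...
    }

    static void
    _done_forward(panc* pac_u)
    {
      fow_w--;              //  ...decrement on completion
      _panc_free(pac_u);
    }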
Read the port from the pier struct, instead of copying it into the
local struct.
Arguably the port should still be present in the ames struct, since it's
written to. There's a comment suggesting it be removed from the _pier_
struct instead, which seems like the better change, but that's out of
scope here.
When receiving a packet for which we are not the recipient, attempt to
forward it statelessly. This means scrying a lane for the recipient out
of ames, then (if necessary) updating the origin lane in the packet and
sending the updated packet over the lane resulting from the scry. (A
rough sketch of this flow follows the TODO list below.)
Remaining work listed among the TODOs:
- implement a cap on the amount of pending to-forward packets we hold,
- handle version mismatches in the sponsee case,
- track counters around filtering & forwarding packets,
- test for memory leaks more aggressively.
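A rough sketch of the whole flow, with hypothetical helpers standing in
for the real ames.c routines:

    static int  _we_are_recipient(panc* pac_u);
    static void _scry_lane(panc* pac_u);  //  responds via callback

    static void
    _on_hear(panc* pac_u)
    {
      if ( _we_are_recipient(pac_u) ) {
        //  deliver to arvo as a normal packet event
        return;
      }
      //  not ours: scry a lane for the recipient out of ames;
      //  when the response arrives, update the origin lane if
      //  needed, re-serialize, and send over the scried lane
      _scry_lane(pac_u);
    }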
* master: (147 commits)
vere: bump version to 0.10.7
libsigsegv: disable stack vma check
vere: bump version to 0.10.6
ci: add travis as trusted user
jets: use appropriate macro
noun: add -C to control memo cache size
jets: restore fond/play/peek hooks
jam: add commented-out functionality to count size of atom
jets: cap memo cache and remove peek, play, and fond jets
noun: add functions to count size of noun
release: urbit-os-v1.0.23
interface/config: fix production build
soto: run +on-load migration once
publish, links: restore full height
sh/build-interface: amend for SPA
interface/CONTRIBUTING: amend for SPA / webpack
solid: update pill
hood + apps: fix OTA process for feat/SPA
hood: add version %6 for %file-server upgrade
chat: equally size both code + s3 buttons
...
* ipc-redux:
behn: optimize bounded timers scry
vere: support saving scry jam to directory
vere: u3_nul in place of c3__null
vere: if behn scry fails, don't try again
vere: rename behn.c's alm -> alm_o
vere: scry out next behn timer for backstop
vere: warn on invalid behn doze
behn: improve scry interface
arvo: allow the empty desk (%$) in scries
vere: add -X flag for running a scry
This patches libsigsegv to not check the stack vma on Linux, since that
involves reading procfs, and we make very heavy use of sigsegv. This
eliminates most of urbit's performance discrepancy between Linux and
MacOS. Note that these benchmarks compare a local MBP against a cloud
Linux server, and the MBP is almost certainly faster hardware. We take
two benchmarks: one decrements 10 million times, the other simply
allocates 125MB of memory. These are the results:
cpu-heavy == =/ n 10.000.000 |-(?~(n n $(n (dec n))))
mem-heavy == =a (bex 1.000.000.008)
macos,        cpu-heavy:   6 seconds
macos,        mem-heavy:   1 second
linux-before, cpu-heavy:  30 seconds
linux-before, mem-heavy: 160 seconds
linux-after,  cpu-heavy:   9 seconds
linux-after,  mem-heavy: 1.3 seconds
This represents a 3x speedup for the cpu-heavy operation and a 120x
speedup for the memory-heavy operation.
This check was used to try to distinguish stack overflow from other
forms of segmentation fault. The comments in src/handler-unix.c
describe three heuristics it uses, depending on what's available from
the OS. In the linux-i386 case, all three are available, so we simply
disable the slow one. With that check disabled, stack overflow is still
correctly recognized if you simply alloca(10000000000).
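For a sense of what such a heuristic looks like (an illustrative
simplification, not libsigsegv's actual code), one cheap check asks
whether the faulting address sits near the stack pointer at the time of
the fault:

    #include <stdint.h>

    static int
    _looks_like_stack_overflow(uintptr_t fal_p, uintptr_t stk_p)
    {
      uintptr_t gap = ( fal_p > stk_p ) ? (fal_p - stk_p)
                                        : (stk_p - fal_p);
      return ( gap < 4096 );  //  within a page: likely overflow
    }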