::  eth-watcher: ethereum event log collector
::
/-  *eth-watcher, spider
/+  ethereum, default-agent, verb, dbug
=,  ethereum-types
=,  jael
::
=>  |%
    +$  card  card:agent:gall
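    ::  app-state: versioned agent state: one watchdog per watched path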
    +$  app-state
      $:  %5
          dogs=(map path watchdog)
      ==
    ::
    +$  context  [=path dog=watchdog]
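    ::  watchdog: per-path scrape state: the configuration, the currently
    ::    running thread (if any) and when it started, the next block
    ::    number to look at, logs not yet released to subscribers,
    ::    released history, and recently-seen blocks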
    +$  watchdog
      $:  config
          running=(unit [since=@da =tid:spider])
          =number:block
          =pending-logs
          =history
          blocks=(list block)
      ==
    ::
    ::  history: newest block first, oldest event first
    +$  history  (list loglist)
    --
::
::  Helpers
::
=>  |%
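    ::  +wait: set a behn timer along wire [%timer path], firing
    ::    .time from now (+wait-shortcut: firing immediately)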
    ++  wait
      |=  [=path now=@da time=@dr]
      ^-  card
      [%pass [%timer path] %arvo %b %wait (add now time)]
    ::
    ++  wait-shortcut
      |=  [=path now=@da]
      ^-  card
      [%pass [%timer path] %arvo %b %wait now]
    ::
    ++  poke-spider
      |=  [=path our=@p =cage]
      ^-  card
      [%pass [%running path] %agent [our %spider] %poke cage]
    ::
    ++  watch-spider
      |=  [=path our=@p =sub=path]
      ^-  card
      [%pass [%running path] %agent [our %spider] %watch sub-path]
    ::
    ++  leave-spider
      |=  [=path our=@p]
      ^-  card
      [%pass [%running path] %agent [our %spider] %leave ~]
    --
::
::  Main
::
%-  agent:dbug
^-  agent:gall
=|  state=app-state
%+  verb  |
|_  =bowl:gall
+*  this  .
    def   ~(. (default-agent this %|) bowl)
::
++  on-init
  ^-  (quip card _this)
  [~ this]
::
++  on-save  !>(state)
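::  +on-load: migrate stored state from any older version (%0-%4) up to %5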
++  on-load
  |=  old=vase
  |^
  =+  !<(old-state=app-states old)
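  ::  %0 -> %1: give each existing config a refresh-rate of ~m5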
  =?  old-state  ?=(%0 -.old-state)
    %-  (slog leaf+"upgrading eth-watcher from %0" ~)
    ^-  app-state-1
    %=    old-state
        -  %1
        dogs
      %-  ~(run by dogs.old-state)
      |=  dog=watchdog-0
      %=  dog
        ->  [~m5 ->.dog]
      ==
    ==
  ::
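  ::  %1 -> %2: re-arm the refresh timer for every watchdog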
  =^  cards-1=(list card)  old-state
    ?.  ?=(%1 -.old-state)
      `old-state
    %-  (slog leaf+"upgrading eth-watcher from %1" ~)
    :_  old-state(- %2)
    %+  turn  ~(tap by dogs.old-state)
    |=  [=path dog=watchdog-1]
    (wait-shortcut path now.bowl)
  ::
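  ::  %2 -> %3: add eager=| to each existing config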
  =?  old-state  ?=(%2 -.old-state)
    %-  (slog leaf+"upgrading eth-watcher from %2" ~)
    ^-  app-state-3
    %=    old-state
        -  %3
        dogs
      %-  ~(run by dogs.old-state)
      |=  dog=watchdog-1
      %=  dog
        ->  [| ->.dog]
      ==
    ==
  ::
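  ::  %3 -> %4: add a timeout-time of six refresh-rates to each config,
  ::    and record when the currently running thread (if any) started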
  =?  old-state  ?=(%3 -.old-state)
    %-  (slog leaf+"upgrading eth-watcher from %3" ~)
    ^-  app-state-4
    %=    old-state
        -  %4
        dogs
      %-  ~(run by dogs.old-state)
      |=  dog=watchdog-3
      ^-  watchdog-4
      %=    dog
          -
        ^-  config-4
        =,  -.dog
        [url eager refresh-rate (mul refresh-rate 6) from contracts topics]
      ::
          running
        ?~  running.dog  ~
        `[now.bowl u.running.dog]
      ==
    ==
  ::
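  ::  %4 -> %5: extend the config and convert stored logs to the
  ::    current event-log type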
  =?  old-state  ?=(%4 -.old-state)
    %-  (slog leaf+"upgrading eth-watcher from %4" ~)
    ^-  app-state
    %=    old-state
        -  %5
        dogs
      %-  ~(run by dogs.old-state)
      |=  dog=watchdog-4
      %=    dog
          -
        =,  -.dog
        [url eager refresh-rate timeout-time from contracts ~ topics]
      ::
          pending-logs-4
        %-  ~(run by pending-logs-4.dog)
        |=  =loglist-4
        %+  turn  loglist-4
        |=  =event-log-4
        event-log-4(mined ?~(mined.event-log-4 ~ `mined.event-log-4))
      ::
          history-4
        %+  turn  history-4.dog
        |=  =loglist-4
        %+  turn  loglist-4
        |=  =event-log-4
        event-log-4(mined ?~(mined.event-log-4 ~ `mined.event-log-4))
      ==
    ==
  ::
  [cards-1 this(state ?>(?=(%5 -.old-state) old-state))]
  ::
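  ::  older state and watchdog types, retained so +on-load can migrate them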
  +$  app-states
    $%(app-state-0 app-state-1 app-state-2 app-state-3 app-state-4 app-state)
  ::
  +$  app-state-4
    $:  %4
        dogs=(map path watchdog-4)
    ==
  ::
  +$  watchdog-4
    $:  config-4
        running=(unit [since=@da =tid:spider])
        =number:block
        =pending-logs-4
        =history-4
        blocks=(list block)
    ==
  ::
  +$  config-4
    $:  url=@ta
        eager=?
        refresh-rate=@dr
        timeout-time=@dr
        from=number:block
        contracts=(list address:ethereum)
        =topics
    ==
  +$  pending-logs-4  (map number:block loglist-4)
  +$  history-4  (list loglist-4)
  +$  loglist-4  (list event-log-4)
  +$  event-log-4
    $:  $=  mined  %-  unit
        $:  log-index=@ud
            transaction-index=@ud
            transaction-hash=@ux
            block-number=@ud
            block-hash=@ux
            removed=?
        ==
      ::
        address=@ux
        data=@t
        topics=(lest @ux)
    ==
  ::
  +$  app-state-3
    $:  %3
        dogs=(map path watchdog-3)
    ==
  ::
  +$  watchdog-3
    $:  config-3
        running=(unit =tid:spider)
        =number:block
        =pending-logs-4
        =history-4
        blocks=(list block)
    ==
  ::
  +$  config-3
    $:  url=@ta
        eager=?
        refresh-rate=@dr
        from=number:block
        contracts=(list address:ethereum)
        =topics
    ==
  ::
  +$  app-state-2
    $:  %2
        dogs=(map path watchdog-1)
    ==
  ::
  +$  app-state-1
    $:  %1
        dogs=(map path watchdog-1)
    ==
  ::
  +$  watchdog-1
    $:  config-1
        running=(unit =tid:spider)
        =number:block
        =pending-logs-4
        =history-4
        blocks=(list block)
    ==
  ::
  +$  config-1
    $:  url=@ta
        refresh-rate=@dr
        from=number:block
        contracts=(list address:ethereum)
        =topics
    ==
  ::
  +$  app-state-0
    $:  %0
        dogs=(map path watchdog-0)
    ==
  ::
  +$  watchdog-0
    $:  config-0
        running=(unit =tid:spider)
        =number:block
        =pending-logs-4
        =history-4
        blocks=(list block)
    ==
  ::
  +$  config-0
    $:  url=@ta
        from=number:block
        contracts=(list address:ethereum)
        =topics
    ==
  --
::
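::  +on-poke: configure watchdogs
::
::    %watch: create or reconfigure the watchdog on a path
::    %clear: delete the watchdog on a path, dropping its state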
++  on-poke
  |=  [=mark =vase]
  ?:  ?=(%noun mark)
    ~&  state
    `this
  ?.  ?=(%eth-watcher-poke mark)
    (on-poke:def mark vase)
  ::
  =+  !<(=poke vase)
  ?-    -.poke
      %watch
    ::  fully restart the watchdog if it doesn't exist yet,
    ::  or if result-altering parts of the config changed.
    ::
    =/  restart=?
      ?|  !(~(has by dogs.state) path.poke)
          ?!  .=  ->+>+:(~(got by dogs.state) path.poke)
              +>+>.config.poke
      ==
    ::
    =/  already  (~(has by dogs.state) path.poke)
    ~&  [already=already restart=restart]
    ~?  &(already restart)
      [dap.bowl 'overwriting existing watchdog on' path.poke]
    =/  wait-cards=(list card)
      ?:  already
        ~
      [(wait-shortcut path.poke now.bowl) ~]
    ::
    =/  restart-cards=(list card)
      =/  dog  (~(get by dogs.state) path.poke)
      ?.  ?&  restart
              ?=(^ dog)
              ?=(^ running.u.dog)
          ==
        ~
      =/  =cage  [%spider-stop !>([tid.u.running.u.dog &])]
      :_  ~
      `card`[%pass [%starting path.poke] %agent [our.bowl %spider] %poke cage]
    =/  new-dog
      =/  dog=watchdog
        ?:  restart  *watchdog
        (~(got by dogs.state) path.poke)
      %_  dog
        -       config.poke
        number  from.config.poke
      ==
    =.  dogs.state  (~(put by dogs.state) path.poke new-dog)
    [(weld wait-cards restart-cards) this]
  ::
      %clear
    =.  dogs.state  (~(del by dogs.state) path.poke)
    [~ this]
  ==
::
::  +on-watch: subscribe & get initial subscription data
::
::    /logs/some-path: all event logs collected by the watchdog on
::      /some-path, oldest first
::
++  on-watch
  |=  =path
  ^-  (quip card agent:gall)
  ?.  ?=([%logs ^] path)
    ~|  [%invalid-subscription-path path]
    !!
  :_  this  :_  ~
  :*  %give  %fact  ~  %eth-watcher-diff  !>
      :-  %history
      ^-  loglist
      %-  zing
      %-  flop
      =<  history
      (~(gut by dogs.state) t.path *watchdog)
  ==
::
++  on-leave  on-leave:def
::
::  +on-peek: get diagnostics data
::
::    /block/some-path: get next block number to check for /some-path
::
++  on-peek
  |=  =path
  ^-  (unit (unit cage))
  ?+    path  ~
      [%x %block ^]
    ?.  (~(has by dogs.state) t.t.path)  ~
    :+  ~  ~
    :-  %atom
    !>(number:(~(got by dogs.state) t.t.path))
  ::
      [%x %dogs ~]
    ``noun+!>(~(key by dogs.state))
  ::
      [%x %dogs %configs ~]
    ``noun+!>((~(run by dogs.state) |=(=watchdog -.watchdog)))
  ==
::
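::  +on-agent: receive acks, results, and failures from the update thread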
++  on-agent
  |=  [=wire =sign:agent:gall]
  |^
  ^-  (quip card agent:gall)
  ?.  ?=([%running *] wire)
    (on-agent:def wire sign)
  ?-    -.sign
      %poke-ack
    ?~  p.sign
      [~ this]
    %-  (slog leaf+"eth-watcher couldn't start thread" u.p.sign)
    :_  (clear-running t.wire)  :_  ~
    (leave-spider t.wire our.bowl)
  ::
      %watch-ack
    ?~  p.sign
      [~ this]
    %-  (slog leaf+"eth-watcher couldn't start listening to thread" u.p.sign)
    ::  TODO: kill thread that may have started, although it may not
    ::    have started yet since we get this response before the
    ::    %start-spider poke is processed
    ::
    [~ (clear-running t.wire)]
  ::
      %kick  [~ (clear-running t.wire)]
      %fact
    =*  path  t.wire
    =/  dog  (~(get by dogs.state) path)
    ?~  dog
      [~ this]
    ?+    p.cage.sign  (on-agent:def wire sign)
        %thread-fail
      =+  !<([=term =tang] q.cage.sign)
      %-  (slog leaf+"eth-watcher failed; will retry" leaf+<term> tang)
      [~ this(dogs.state (~(put by dogs.state) path u.dog(running ~)))]
    ::
        %thread-done
      =+  !<([vows=disavows pup=watchpup] q.cage.sign)
      =.  u.dog
        %_  u.dog
          -             -.pup
          number        number.pup
          blocks        blocks.pup
          pending-logs  pending-logs.pup
        ==
      =^  cards-1  u.dog  (disavow path u.dog vows)
      =^  cards-2  u.dog  (release-logs path u.dog)
      =.  dogs.state  (~(put by dogs.state) path u.dog(running ~))
      [(weld cards-1 cards-2) this]
    ==
  ==
  ::
  ++  clear-running
    |=  =path
    =/  dog  (~(get by dogs.state) path)
    ?~  dog
      this
    this(dogs.state (~(put by dogs.state) path u.dog(running ~)))
  ::
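  ::  +disavow: drop logs from blocks the thread has disavowed, and
  ::    notify subscribers of each disavowed block id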
  ++  disavow
    |=  [=path dog=watchdog vows=disavows]
    ^-  (quip card watchdog)
    =/  history-ids=(list [id:block loglist])
      %+  murn  history.dog
      |=  logs=loglist
      ^-  (unit [id:block loglist])
      ?~  logs
        ~
      `[[block-hash block-number]:(need mined.i.logs) logs]
    =/  actual-vows=disavows
      %+  skim  vows
      |=  =id:block
      (lien history-ids |=([=history=id:block *] =(id history-id)))
    =/  actual-history=history
      %+  murn  history-ids
      |=  [=id:block logs=loglist]
      ^-  (unit loglist)
      ?:  (lien actual-vows |=(=vow=id:block =(id vow-id)))
        ~
      `logs
    :_  dog(history actual-history)
    %+  turn  actual-vows
    |=  =id:block
    [%give %fact [%logs path]~ %eth-watcher-diff !>([%disavow id])]
  ::
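  ::  +release-logs: move pending logs at or below the release block
  ::    number into history and send them to subscribers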
  ++  release-logs
    |=  [=path dog=watchdog]
    ^-  (quip card watchdog)
    ~&  >  release-logs=pending-logs.dog
    ?:  (lth number.dog 0)  ::  TODO: 30!
      `dog
    =/  rel-number  (sub number.dog 0)  ::  TODO: 30!
    =/  numbers=(list number:block)  ~(tap in ~(key by pending-logs.dog))
    =.  numbers  (sort numbers lth)
    =^  logs=(list event-log:rpc:ethereum)  dog
      |-  ^-  (quip event-log:rpc:ethereum watchdog)
      ?~  numbers
        `dog
      ?:  (gth i.numbers rel-number)
        $(numbers t.numbers)
      =^  rel-logs-1  dog
        =/  =loglist  (~(get ja pending-logs.dog) i.numbers)
        =.  pending-logs.dog  (~(del by pending-logs.dog) i.numbers)
        ?~  loglist
          `dog
        =.  history.dog  [loglist history.dog]
        [loglist dog]
      =^  rel-logs-2  dog  $(numbers t.numbers)
      [(weld rel-logs-1 rel-logs-2) dog]
    :_  dog
    ?~  logs
      ~
    ^-  (list card:agent:gall)
    [%give %fact [%logs path]~ %eth-watcher-diff !>([%logs logs])]~
  --
::
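::  +on-arvo: handle refresh timers
::
::    on every %wake along /timer/[path], possibly stop a stale update
::    thread, possibly start a fresh one, and set the next refresh timer.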
++  on-arvo
  |=  [=wire =sign-arvo]
  ^-  (quip card agent:gall)
  ?+    +<.sign-arvo  ~|([%strange-sign-arvo -.sign-arvo] !!)
      %wake
    ?.  ?=([%timer *] wire)  ~&  weird-wire=wire  [~ this]
    =*  path  t.wire
    ?.  (~(has by dogs.state) path)
      [~ this]
    =/  dog=watchdog
      (~(got by dogs.state) path)
    ?^  error.sign-arvo
      ::  failed, try again. maybe should tell user if fails more than
      ::  5 times.
      ::
      %-  (slog leaf+"eth-watcher failed; will retry" ~)
      [[(wait path now.bowl refresh-rate.dog)]~ this]
    ::  maybe kill a timed-out update thread, maybe start a new one
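    ::
    ::    a thread that has been running for less than timeout-time.dog
    ::    is left alone until the next refresh timer fires; only once it
    ::    has run for at least that long do we stop it and start a new
    ::    one, avoiding a separate timeout timer flow.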
    ::
    =^  stop-cards=(list card)  dog
      ::  if still running beyond timeout time, kill it
      ::
      ?.  ?&  ?=(^ running.dog)
              ::
              %+  gth  now.bowl
              (add since.u.running.dog timeout-time.dog)
          ==
        `dog
      ::
      %-  (slog leaf+"eth-watcher {(spud path)} timed out; will restart" ~)
      =/  =cage  [%spider-stop !>([tid.u.running.dog |])]
      :_  dog(running ~)
      :~  (leave-spider path our.bowl)
          [%pass [%starting path] %agent [our.bowl %spider] %poke cage]
      ==
    ::
    =^  start-cards=(list card)  dog
      ::  if not (or no longer) running, start a new thread
      ::
      ?^  running.dog
        `dog
      ::
      =/  new-tid=@ta
        (cat 3 'eth-watcher--' (scot %uv eny.bowl))
      :_  dog(running `[now.bowl new-tid])
      =/  args
        :^  ~  `new-tid  %eth-watcher
        !>([~ `watchpup`[- number pending-logs blocks]:dog])
      :~  (watch-spider path our.bowl /thread-result/[new-tid])
          (poke-spider path our.bowl %spider-start !>(args))
      ==
    ::
    :-  [(wait path now.bowl refresh-rate.dog) (weld stop-cards start-cards)]
    this(dogs.state (~(put by dogs.state) path dog))
  ==
::
++  on-fail  on-fail:def
--