* philip/mem:
vere: bump version to 0.10.6
ci: add travis as trusted user
jets: use appropriate macro
noun: add -C to control memo cache size
jets: restore fond/play/peek hooks
jam: add commented-out functionality to count size of atom
jets: cap memo cache and remove peek, play, and fond jets
noun: add functions to count size of noun
Signed-off-by: Philip Monk <phil@pcmonk.me>
The component for editing a post was tied up in checking for API
instantiation -- we want the check for a blank body to be independent
of that.
Fixes #3040.
This is a convenient way to count the memory usage of nouns by simply
running `(jam 1.337 noun-1 noun-2 ... ~)`. This should
be a hint, but for debugging this is sufficient.
With these changes, about 90% less memory and 15% less time is needed to
compile hoon.hoon. The produced noun is within 3% of the same size,
which suggests this results in little if any duplication.
These are three of the four most commonly hit +ut jets. The other is
+nest, which cannot be un-memoized without taking much longer to compile
(it didn't finish in my test). These four jets combined for 2.3 million
out of the 2.4 million cache entries; the other +ut jets combined for
less than 100k, and literal ~+ accounted for about 50k entries.
This also caps the memo cache at 50k entries. Even with these jets not
memoized, the memo cache grows to 357k entries and 122 MB. Capping at
50k entries has no effect on time and reduces memory usage of the hash
table to about 25MB. Entries are reclaimed with the clock algorithm,
which seems to be sufficient for this use.
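As a rough illustration of the clock (second-chance) reclamation described
above, here is a minimal standalone C sketch. The flat-array layout and the
names (`entry`, `cache_put`, `CACHE_CAP`) are illustrative assumptions, not
the runtime's actual memo cache, which is a hash table; only the reclamation
policy is sketched.

```c
/* Minimal sketch of clock (second-chance) reclamation for a bounded
** cache.  Illustrative only: not vere's memo-cache implementation.
*/
#include <stddef.h>
#include <stdint.h>

#define CACHE_CAP 50000          /* cap described above */

typedef struct {
  uint64_t key;                  /* hash of the memoized call  */
  uint64_t val;                  /* cached product (stand-in)  */
  uint8_t  use;                  /* "referenced" bit for clock */
  uint8_t  live;                 /* slot occupied?             */
} entry;

static entry  cache[CACHE_CAP];
static size_t hand;              /* clock hand */

/* On a hit, give the entry a second chance by setting its use bit. */
static entry* cache_find(uint64_t key) {
  for (size_t i = 0; i < CACHE_CAP; i++) {
    if (cache[i].live && cache[i].key == key) {
      cache[i].use = 1;
      return &cache[i];
    }
  }
  return NULL;
}

/* On insert, sweep the hand forward, clearing use bits, and evict
** the first entry whose bit is already clear.
*/
static void cache_put(uint64_t key, uint64_t val) {
  for (;;) {
    entry* e = &cache[hand];
    hand = (hand + 1) % CACHE_CAP;
    if (!e->live || !e->use) {          /* victim found */
      e->key = key; e->val = val;
      e->use = 1;   e->live = 1;
      return;
    }
    e->use = 0;                         /* second chance spent */
  }
}
```

The point of the use bit is that entries hit since the last sweep survive one
pass of the hand, so hot memoization results stay cached while cold ones are
reclaimed once the cap is reached.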
Adds a few functions to count the size of nouns in the current road.
Since this marks the nouns (high bit of refcount), you need to
"discount" them immediately after to unmark them. Parallel functions
exist for counting the size of a hashtable.
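A rough C sketch of the mark-and-count technique: the high bit of the
refcount doubles as a visited flag so shared subtrees are counted once, and a
parallel "discount" pass clears the flags. The struct layout and function
names here are hypothetical, for illustration only; they are not the actual
u3a functions added in this commit.

```c
#include <stddef.h>
#include <stdint.h>

#define MARK_BIT (1u << 31)      /* high bit of the refcount word */

/* Hypothetical boxed-noun layout, for illustration only. */
typedef struct noun {
  uint32_t     refs;             /* refcount; high bit doubles as mark */
  size_t       siz;              /* bytes occupied by this box         */
  struct noun* hed;              /* NULL for atoms                     */
  struct noun* tel;
} noun;

/* Count the bytes reachable from a noun exactly once.  Marking with
** the refcount high bit means shared subtrees are only counted once;
** the caller must "discount" afterwards to clear the marks.
*/
static size_t count_noun(noun* n) {
  if (NULL == n || (n->refs & MARK_BIT)) {
    return 0;                    /* already counted (shared structure) */
  }
  n->refs |= MARK_BIT;
  return n->siz + count_noun(n->hed) + count_noun(n->tel);
}

/* Parallel pass: walk the same structure and clear the marks. */
static void discount_noun(noun* n) {
  if (NULL == n || !(n->refs & MARK_BIT)) {
    return;                      /* already unmarked */
  }
  n->refs &= ~MARK_BIT;
  discount_noun(n->hed);
  discount_noun(n->tel);
}
```

Usage pairs the two calls, `count_noun(som)` immediately followed by
`discount_noun(som)`, since a refcount with the mark bit still set would
confuse anything that later reads it.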
It would be nice to hook this up to a hint, but these are useful to have
available to run in the debugger or by inserting callsites as necessary.
It's also possible to hook them up to the +jam jet gated on a special
value.
When merging, +reachable-takos is called roughly once per merge commit
in the ancestry of the new commit. +reachable-takos was exponential in
the number of merge commits in the ancestry of the commit it's looking
at, due to mishandling of the accumulator. This makes it linear.
Of course, linear x linear is still quadratic, which is not great. I
doubt +reachable-takos can be made asymptotically better, but
+reduce-merge-points/+find-merge-points probably can. 50 merge commits
already gives about 14.000 iterations through the loop in
+reachable-takos. Another option is to try to memoize this somehow, but
a simple ~+ is insufficient since `s` is usually different.
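The accumulator fix is the standard trick of threading the set of
already-visited commits through the recursion so each commit is expanded at
most once; recomputing the set independently per parent revisits shared
ancestry and blows up exponentially on merge-heavy histories. A generic C
sketch of the pattern (not the Hoon in clay; all names are illustrative):

```c
/* Collect every commit reachable from a head, threading the visited
** set (the accumulator) through the recursion.  Each commit is
** expanded at most once, so the traversal is linear in the graph.
*/
#include <stdbool.h>

#define MAX_COMMITS 1024

typedef struct {
  int parents[2];                /* parent indices; -1 if absent */
} commit;

static commit graph[MAX_COMMITS];

/* visited[] is the accumulator: pass it down and fill it in place,
** rather than rebuilding a fresh set for each branch of the recursion.
*/
static void reachable(int c, bool visited[MAX_COMMITS]) {
  if (c < 0 || visited[c]) {
    return;                      /* already accumulated: stop here */
  }
  visited[c] = true;
  reachable(graph[c].parents[0], visited);
  reachable(graph[c].parents[1], visited);
}
```

Each merge commit still triggers one such traversal, which is why the overall
cost of a merge remains roughly quadratic in the number of merge commits, as
noted above.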
In local tests on macOS with a -L copy of ~wicdev-wisryt, this speeds up
OTAs significantly. The majority of the time was being spent in this code.