mirror of
https://github.com/facebook/sapling.git
synced 2024-10-10 08:47:12 +03:00
cd9479ae54
Summary: `gettreepack` accounts for ~6B logged scuba rows a day (https://fburl.com/scuba/mononoke_test_perf/vpnsn1ny) out of ~10B total logged rows (https://fburl.com/scuba/mononoke_test_perf/qw78ecxe), i.e. 60% of rows. For the vast majority of `gettreepack` instances we log 3 log tags: "Start processing", "Gettreepack params" and "Command processed". Similarly, the vast majority of requests ask for just 1 mfnode: https://fburl.com/scuba/mononoke_test_perf/3xwotsgq. If we sample logging for these commands by a factor of 100, we'll be able to save almost all of these 60% of rows (it's not entirely clear how that will actually influence our retention, but likely pretty significantly).

What do we lose if we do this sampling? There are a few perf counters, like GettreepackResponseSize, GettreepackNumTreepacks, GettreepackDirectories, GettreepackDesignatedNodes, that will lose their aggregation accuracy. Given that we're only sampling single-mfnode gettreepacks, these values are not likely to be very interesting. However, we still leave the possibility of turning verbose logging back on to get the full amount of logging.

Reviewed By: mitrandir77, krallin

Differential Revision: D26148453

fbshipit-source-id: a8521364bb5323d41c6c0c7d82d50508c0eda068
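The sampling described above can be sketched as a small counter-based sampler: log 1 out of every `rate` single-mfnode `gettreepack` requests, while a verbose flag bypasses sampling so full logging can be re-enabled. This is a minimal illustration with assumed names (`LogSampler`, `should_log`), not the actual Mononoke scuba API:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Deterministic 1-in-`rate` sampler (hypothetical; the real
/// implementation may sample randomly per request instead).
pub struct LogSampler {
    counter: AtomicU64,
    rate: u64,
}

impl LogSampler {
    pub fn new(rate: u64) -> Self {
        Self { counter: AtomicU64::new(0), rate }
    }

    /// Returns true if this request should be logged.
    /// `verbose` overrides sampling, restoring full logging.
    pub fn should_log(&self, verbose: bool) -> bool {
        if verbose || self.rate <= 1 {
            return true;
        }
        // fetch_add returns the previous value, so the 1st, 101st,
        // 201st, ... calls are logged at rate 100.
        self.counter.fetch_add(1, Ordering::Relaxed) % self.rate == 0
    }
}

fn main() {
    let sampler = LogSampler::new(100);
    let logged = (0..1000).filter(|_| sampler.should_log(false)).count();
    println!("{}", logged); // 10 of 1000 requests logged at rate 100
    assert!(sampler.should_log(true)); // verbose always logs
}
```

Note that sampled aggregations (e.g. sums of GettreepackResponseSize) would then need to be scaled back up by the sampling rate, which is the accuracy loss the summary mentions.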
getbundle_response
mononoke_repo
obsolete
remotefilelog
repo_read_write_status
reverse_filler_queue
schemas
scribe_commit_queue
src
streaming_clone
unbundle
wirepack
Cargo.toml