Mirror of https://github.com/facebook/sapling.git (synced 2024-10-11 01:07:15 +03:00)

Commit 6da3dc939a
Summary: Our blobstore_sync_queue selects entries with a limit on the number of unique keys it will load, then tries to delete them. However, the number of entries can be much larger than the number of keys. When we try to delete them, we time out waiting for MySQL, because deleting 100K entries in a single statement isn't OK. This causes the healer to crashloop: we start, try to delete 100K entries, then time out. This is doubly bad, because when we come back up we proceed without checking replication lag first, so while crashlooping we disregard the damage we're doing in MySQL (I'm fixing this later in this stack). So, let's be a bit more disciplined and delete keys 10K at a time, at most.

Reviewed By: HarveyHunt

Differential Revision: D19997588

fbshipit-source-id: 2262f9ba3f7d3493d0845796ad8f841855510180
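The fix described above can be sketched as a generic chunking helper: instead of issuing one unbounded DELETE, split the key set into bounded batches and issue one statement per batch. This is a minimal illustration, not the actual Mononoke code; `DELETE_CHUNK_SIZE` and `delete_in_chunks` are hypothetical names, and the real implementation runs the deletes against MySQL rather than a closure.

```rust
// Hypothetical sketch of the chunked-deletion approach; the real code
// issues bounded SQL DELETEs instead of calling a closure.
const DELETE_CHUNK_SIZE: usize = 10_000;

/// Split `keys` into chunks of at most DELETE_CHUNK_SIZE and hand each
/// chunk to `delete_batch`. Returns the number of batches issued, so a
/// caller can see that no single delete exceeds the bound.
fn delete_in_chunks<T>(keys: &[T], mut delete_batch: impl FnMut(&[T])) -> usize {
    let mut batches = 0;
    for chunk in keys.chunks(DELETE_CHUNK_SIZE) {
        // One bounded DELETE per chunk, rather than one giant statement
        // that times out waiting for MySQL.
        delete_batch(chunk);
        batches += 1;
    }
    batches
}

fn main() {
    // 100K queue entries, the size that previously timed out in one shot.
    let keys: Vec<u32> = (0..100_000).collect();
    let mut deleted = 0usize;
    let batches = delete_in_chunks(&keys, |chunk| deleted += chunk.len());
    assert_eq!(batches, 10);
    assert_eq!(deleted, 100_000);
}
```

The key property is that each statement touches at most 10K rows, so the healer makes forward progress batch by batch instead of timing out and crashlooping on a single oversized delete.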