Summary:
Our blobstore_sync_queue selects entries with a limit on the number of unique
keys it's going to load. Then, it tries to delete them. However, the number of
entries might be (much) bigger than the number of keys. When we try to delete
them, we time out waiting for MySQL because deleting 100K entries at once isn't
OK.
This results in the healer crashlooping: we start, try to delete 100K
entries at once, and time out.
This is doubly bad, because when we come back up we proceed without
checking replication lag first, so while we're crashlooping we disregard the
damage we're doing in MySQL (I'm fixing this later in this stack).
So, let's be a bit more disciplined, and delete entries 10K at a time, at most.
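A minimal, self-contained sketch of the chunking this change introduces; the
entry type and delete helper are hypothetical stand-ins, not the real Mononoke
API:

    // Deleting ~100K rows in a single MySQL DELETE times out, so cap each
    // statement at 10K entries. `QueueEntry` and `delete_batch` are
    // illustrative only.
    const DELETE_CHUNK_SIZE: usize = 10_000;

    struct QueueEntry {
        id: u64,
    }

    // Stand-in for the real (async) MySQL delete query.
    fn delete_batch(chunk: &[QueueEntry]) {
        println!("deleting {} entries", chunk.len());
    }

    fn delete_in_chunks(entries: &[QueueEntry]) {
        // Bounded batches instead of one oversized DELETE.
        for chunk in entries.chunks(DELETE_CHUNK_SIZE) {
            delete_batch(chunk);
        }
    }

    fn main() {
        let entries: Vec<QueueEntry> =
            (0..100_000u64).map(|id| QueueEntry { id }).collect();
        delete_in_chunks(&entries); // runs ten 10K-entry deletes
    }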
Reviewed By: HarveyHunt
Differential Revision: D19997588
fbshipit-source-id: 2262f9ba3f7d3493d0845796ad8f841855510180
Summary:
This commit manually synchronizes the internal move of
fbcode/scm/mononoke under fbcode/eden/mononoke, which couldn't be
performed by ShipIt automatically.
Reviewed By: StanislavGlebik
Differential Revision: D19722832
fbshipit-source-id: 52fbc8bc42a8940b39872dfb8b00ce9c0f6b0800
Summary:
D19767626 added an original_timestamp column to the
blobstore_sync_queue. Update the sqlite schema to keep it in sync.
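For illustration, a hedged sketch of the mirrored sqlite schema using the
rusqlite crate; only the table and original_timestamp column names come from
this change, the other columns and types are assumptions:

    use rusqlite::{Connection, Result};

    fn main() -> Result<()> {
        let conn = Connection::open_in_memory()?;
        // Columns other than original_timestamp are illustrative guesses.
        conn.execute_batch(
            "CREATE TABLE blobstore_sync_queue (
                 id INTEGER PRIMARY KEY AUTOINCREMENT,
                 blobstore_key TEXT NOT NULL,
                 add_timestamp BIGINT NOT NULL,
                 original_timestamp BIGINT NOT NULL DEFAULT 0
             );",
        )?;
        Ok(())
    }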
Reviewed By: krallin
Differential Revision: D19787488
fbshipit-source-id: ad576e2ec99349953e2ab69e3defb73d1ff556c0
Summary:
Modify the multiplexed blobstore implementation so that the
multiplex_id is written to the healer queue after a put. Further, update the
blobstore healer to only look at entries with the same multiplex ID as it's
configured to run with.
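A self-contained sketch of the healer-side filtering described above; the
types and function here are hypothetical stand-ins, and the real
implementation may well apply this filter in the SQL query rather than in
Rust:

    #[derive(Clone, Copy, PartialEq, Eq)]
    struct MultiplexId(i32);

    struct QueueEntry {
        blobstore_key: String,
        multiplex_id: MultiplexId,
    }

    // Keep only entries written by the multiplex configuration this healer
    // instance is running with; other multiplexes' entries stay on the queue.
    fn entries_for_this_healer(
        entries: Vec<QueueEntry>,
        healer_id: MultiplexId,
    ) -> Vec<QueueEntry> {
        entries
            .into_iter()
            .filter(|e| e.multiplex_id == healer_id)
            .collect()
    }

    fn main() {
        let entries = vec![
            QueueEntry { blobstore_key: "key-a".into(), multiplex_id: MultiplexId(1) },
            QueueEntry { blobstore_key: "key-b".into(), multiplex_id: MultiplexId(2) },
        ];
        // A healer configured with multiplex ID 1 only sees "key-a".
        assert_eq!(entries_for_this_healer(entries, MultiplexId(1)).len(), 1);
    }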
Reviewed By: ahornby
Differential Revision: D19770057
fbshipit-source-id: 41db19f0b0f84c048d49ab9e6258cccc89cf4195