Mirror of https://github.com/facebook/sapling.git
Synced 2024-10-10 08:47:12 +03:00
63d19e1eca
Summary: During an hg update we first prefetch all the data, then write it all to disk. There are cases where the prefetched data is not available during the writing phase, in which case we fall back to fetching the files one by one, which has truly atrocious performance. Let's allow the worker threads to check for missing data and then bulk-fetch it. In the case where the cache was completely lost for some reason, this reduces the number of serial fetches by roughly 100x. Note: the background workers already spawn their own ssh connections, so they already get some parallelism even when fetching one by one; that's why we don't see a full 100x improvement in performance.

Reviewed By: xavierd

Differential Revision: D23766424

fbshipit-source-id: d88a1e55b1c21e9cea7e50fc6dbfd8a27bd97bb0
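The batching idea in the summary above can be sketched in a few lines: instead of issuing one remote round trip per cache miss during the write phase, the workers first collect all missing keys and fetch them in a single bulk request. This is a minimal illustrative sketch, not Sapling's actual code; `Store`, `fetch_bulk`, and `write_files` are hypothetical names invented here.

```python
class Store:
    """Toy content store with a local cache and a bulk remote fetch.

    Hypothetical sketch of the pattern described in the commit message,
    not Sapling's real store API.
    """

    def __init__(self, remote):
        self.remote = remote          # maps key -> data (stands in for the server)
        self.cache = {}               # locally prefetched / cached data
        self.bulk_fetches = 0         # count of remote round trips

    def fetch_bulk(self, keys):
        # One round trip retrieves every missing key at once.
        self.bulk_fetches += 1
        for k in keys:
            self.cache[k] = self.remote[k]


def write_files(store, keys):
    """Write phase: batch-fetch whatever prefetch missed, then write."""
    missing = [k for k in keys if k not in store.cache]
    if missing:
        store.fetch_bulk(missing)     # 1 round trip instead of len(missing)
    return {k: store.cache[k] for k in keys}


# Simulate a completely lost cache: all 100 files are missing,
# yet only a single bulk fetch is issued.
store = Store(remote={f"file{i}": f"data{i}" for i in range(100)})
written = write_files(store, list(store.remote))
print(store.bulk_fetches)  # 1
```

With serial one-by-one fetching the same scenario would cost 100 round trips, which is the 100x reduction the summary refers to.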
bindings/
    __init__.py
    clindex.pyi
    clindex.pyx
    linelog.pyx
    patchrmdir.pyi
    patchrmdir.pyx
    traceprof.pyx