Mirror of https://github.com/facebook/sapling.git (synced 2024-10-07 15:27:13 +03:00)
util: increase chunk size from 128KB to 2MB
Summary: On connections that may experience higher latency, clone throughput tends to spike; raising the chunk size results in more consistent throughput. Given that hardware these days has plenty of memory, the larger buffer shouldn't make any significant difference either.

Reviewed By: farnz

Differential Revision: D14502443

fbshipit-source-id: 0e90f955304e9955df0ec69b5733db7c34b09e83
This commit is contained in:
parent 3ca494b062
commit 3e436ef80b
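For reference, the two constants that change in the diff below are the byte values of the sizes named in the commit title; a quick check of the arithmetic:

```python
# 128 KiB and 2 MiB expressed in bytes, matching the constants in the diff.
OLD_CHUNK_SIZE = 128 * 1024        # 131072 bytes
NEW_CHUNK_SIZE = 2 * 1024 * 1024   # 2097152 bytes

print(OLD_CHUNK_SIZE, NEW_CHUNK_SIZE, NEW_CHUNK_SIZE // OLD_CHUNK_SIZE)
```

The new chunk size is 16x the old one, so each read buffers at most 2 MiB instead of 128 KiB.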
@@ -2105,9 +2105,9 @@ class chunkbuffer(object):
         return "".join(buf)
 
 
-def filechunkiter(f, size=131072, limit=None):
+def filechunkiter(f, size=2097152, limit=None):
     """Create a generator that produces the data in the file size
-    (default 131072) bytes at a time, up to optional limit (default is
+    (default 2MB) bytes at a time, up to optional limit (default is
     to read all data). Chunks may be less than size bytes if the
     chunk is the last chunk in the file, or the file is a socket or
     some other type of file that sometimes reads less data than is
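The behavior the docstring describes can be sketched as a small standalone generator. This is an illustrative sketch of the semantics (chunked reads up to an optional byte limit, with a possibly short final chunk), not the actual Sapling implementation:

```python
import io

def filechunkiter(f, size=2097152, limit=None):
    """Yield data from file-like object `f` in chunks of up to `size`
    bytes, stopping after `limit` total bytes if a limit is given.

    Chunks may be shorter than `size` for the last chunk, or when the
    underlying file (e.g. a socket) returns less data than requested.
    """
    while True:
        # Never ask for more than the remaining limit.
        nbytes = size if limit is None else min(limit, size)
        s = nbytes and f.read(nbytes)  # nbytes == 0 short-circuits to 0
        if not s:
            break
        if limit:
            limit -= len(s)
        yield s

# Example: iterate an in-memory "file" in 4-byte chunks, capped at 10 bytes.
chunks = list(filechunkiter(io.BytesIO(b"abcdefghijkl"), size=4, limit=10))
# chunks == [b"abcd", b"efgh", b"ij"]
```

Note how the limit truncates the final chunk to 2 bytes even though 4 bytes of input remain.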