AK: Bypass Buffered's buffer for large reads

Before, if we couldn't read enough data out of the buffer, we would
refill the buffer and recursively call read(), which in turn reads data
from the buffer into the resliced target span. This incurs many
superfluous memmoves when large chunks of data are read from a buffered
stream.

This commit changes the behavior so that when we exhaust the buffer, we
first read any necessary additional data directly into the target, then
fill up the buffer again. Effectively, this results in drastically
reduced overhead from Buffered when reading large contiguous chunks.
Of course, Buffered is designed to speed up data access patterns with
small frequent reads, but it's nice to be able to combine both access
patterns on one stream without penalties either way.

The final performance gain is an additional ~80% in abench decoding
speed.
kleines Filmröllchen 2021-12-17 01:02:16 +01:00 committed by Brian Gianforcaro
parent 982529a948
commit d5dce448ea
Notes: sideshowbarker 2024-07-17 22:40:37 +09:00


@@ -57,15 +57,13 @@ public:
         auto nread = buffer().trim(m_buffered).copy_trimmed_to(bytes);
         m_buffered -= nread;
-        buffer().slice(nread, m_buffered).copy_to(buffer());
+        if (m_buffered > 0)
+            buffer().slice(nread, m_buffered).copy_to(buffer());
         if (nread < bytes.size()) {
+            nread += m_stream.read(bytes.slice(nread));
             m_buffered = m_stream.read(buffer());
             if (m_buffered == 0)
                 return nread;
-            nread += read(bytes.slice(nread));
         }
         return nread;