AK: Bypass Buffered's buffer for large reads
Before, if we couldn't read enough data out of the buffer, we would refill the buffer and recursively call read(), which in turn reads data from the buffer into the resliced target span. This incurs very intensive, superfluous memmoves when large chunks of data are read from a buffered stream.

This commit changes the behavior so that when we exhaust the buffer, we first read any necessary additional data directly into the target, then fill up the buffer again. Effectively, this results in drastically reduced overhead from Buffered when reading large contiguous chunks. Of course, Buffered is designed to speed up access patterns with small, frequent reads, but it's nice to be able to combine both access patterns on one stream without penalties either way.

The final performance gain is about an additional 80% of abench decoding speed.
This commit is contained in:
parent 982529a948
commit d5dce448ea
Notes:
sideshowbarker
2024-07-17 22:40:37 +09:00
Author: https://github.com/kleinesfilmroellchen
Commit: https://github.com/SerenityOS/serenity/commit/d5dce448ea6
Pull-request: https://github.com/SerenityOS/serenity/pull/11285
Reviewed-by: https://github.com/bgianfo
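To see where the superfluous memmoves came from, here is the old read() path, assembled from the lines the diff below removes (the surrounding Buffered class, the Bytes span type, and error handling are elided):

size_t read(Bytes bytes)
{
    // Copy what the buffer currently holds into the target span.
    auto nread = buffer().trim(m_buffered).copy_trimmed_to(bytes);
    m_buffered -= nread;
    // Shift the remaining buffered bytes to the front (a memmove).
    buffer().slice(nread, m_buffered).copy_to(buffer());

    if (nread < bytes.size()) {
        // Refill the buffer from the underlying stream...
        m_buffered = m_stream.read(buffer());
        if (m_buffered == 0)
            return nread;
        // ...and recurse, copying out of the buffer into the target
        // again. For a large read this repeats once per buffer-sized
        // chunk, moving every byte twice.
        nread += read(bytes.slice(nread));
    }
    return nread;
}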
@@ -57,15 +57,13 @@ public:
         auto nread = buffer().trim(m_buffered).copy_trimmed_to(bytes);

         m_buffered -= nread;
-        buffer().slice(nread, m_buffered).copy_to(buffer());
+        if (m_buffered > 0)
+            buffer().slice(nread, m_buffered).copy_to(buffer());

         if (nread < bytes.size()) {
+            nread += m_stream.read(bytes.slice(nread));
+
             m_buffered = m_stream.read(buffer());
-
-            if (m_buffered == 0)
-                return nread;
-
-            nread += read(bytes.slice(nread));
         }

         return nread;
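For reference, here is a minimal, self-contained sketch of the resulting technique in plain C++; the BufferedReader class, its std::span-based interface, and the 4 KiB buffer size are illustrative assumptions, not SerenityOS APIs:

// Sketch of "bypass the buffer for large reads", as this commit
// applies to AK's Buffered. Requires C++20 for std::span.
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstring>
#include <functional>
#include <span>

class BufferedReader {
public:
    // The underlying stream: fills the given span, returns bytes read.
    using ReadFn = std::function<size_t(std::span<std::byte>)>;

    explicit BufferedReader(ReadFn underlying)
        : m_read(std::move(underlying))
    {
    }

    size_t read(std::span<std::byte> target)
    {
        // 1. Serve as much as possible from the internal buffer.
        size_t nread = std::min(m_buffered, target.size());
        std::memcpy(target.data(), m_buffer.data(), nread);

        // 2. Shift any leftover buffered bytes to the front.
        m_buffered -= nread;
        if (m_buffered > 0)
            std::memmove(m_buffer.data(), m_buffer.data() + nread, m_buffered);

        if (nread < target.size()) {
            // 3. Buffer exhausted: read the remainder directly into the
            //    caller's span, skipping the copy through the buffer.
            nread += m_read(target.subspan(nread));

            // 4. Refill the buffer for subsequent small reads.
            m_buffered = m_read(std::span(m_buffer));
        }
        return nread;
    }

private:
    ReadFn m_read;
    std::array<std::byte, 4096> m_buffer {};
    size_t m_buffered { 0 };
};

The key design point is step 3: once the internal buffer runs dry, remaining bytes go straight from the underlying stream into the caller's span, so a large read costs one copy instead of repeated refill-and-recurse rounds, while step 4 keeps the buffer warm for subsequent small reads.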