Mirror of https://github.com/LadybirdBrowser/ladybird.git
Synced 2024-09-21 02:08:12 +03:00

Commit a54fdd5212
On the code path where we set a TypedArray from another TypedArray of the same type, we forgo the spec text and simply do a memmove between the two ArrayBuffers. However, we forgot to apply the source's byte offset on this code path. This meant that if we tried setting a TypedArray from a TypedArray we got from .subarray(), we would still copy from the start of the subarray's ArrayBuffer. This is because .subarray() returns a new TypedArray over the same ArrayBuffer, but with a smaller length and a byte offset that the rest of the codebase is responsible for applying.

This affected pako when it was decompressing a zlib stream containing multiple zlib chunks. To read from the second chunk, it would set the zlib window TypedArray from the .subarray() of the chunk offset in the stream's TypedArray. This effectively made the decompressed data from the second chunk a mish-mash of old data that looked completely scrambled. It would also cause all future decompression using the same pako Inflate instance to appear scrambled. As a pako comment aptly puts it:

> Call updatewindow() to create and/or update the window state.
> Note: a memory error from inflate() is non-recoverable.

This allows us to properly decompress the large compressed payloads that the Discord Gateway sends down to the Discord client. For example, for an account that's only in the Serenity Discord, one of the payloads is a 20 KB zlib-compressed blob containing two chunks.

Surprisingly, this is not covered by test262! I imagine this would have been caught earlier if there were such a test :^)