With this change, the ".*make.*" function family now does error checking
earlier, which improves the experience while using clangd. Note that the
change also makes them instantiate classes a bit more eagerly, so in
LibVideo/PlaybackManager, we have to first define SeekingStateHandler
and only then make() it.
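To illustrate the mechanism, here is a hedged sketch (std::unique_ptr
standing in for AK's NonnullOwnPtr; this is not the actual AK change):
constraining the factory template means "is T constructible from these
arguments?" is checked at the call site, where clangd can surface it, and
the constraint requires T to be a complete type there, hence the more
eager instantiation.

#include <concepts>
#include <memory>
#include <utility>

template<typename T, typename... Args>
requires(std::constructible_from<T, Args...>) // checked at the call site
std::unique_ptr<T> make(Args&&... args)
{
    return std::make_unique<T>(std::forward<Args>(args)...);
}

// The constraint needs T to be fully defined when make<T>() is called,
// so a class must be declared *and* defined before it can be make()d.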
Co-Authored-By: stelar7 <dudedbz@gmail.com>
These changes are compatible with clang-format 16 and will be mandatory
when we eventually bump the clang-format version. So, since there are no
real downsides, let's commit them now.
This commit un-deprecates DeprecatedString, and repurposes it as a byte
string.
As the null state has already been removed, there are no other
particularly hairy blockers in repurposing this type as a byte string
(what it _really_ is).
This commit is auto-generated:
$ xs=$(ack -l '\bDeprecatedString\b|deprecated_string' AK Userland \
    Meta Ports Ladybird Tests Kernel)
$ perl -pi -e 's/\bDeprecatedString\b/ByteString/g;
    s/deprecated_string/byte_string/g' $xs
$ clang-format --style=file -i \
    $(git diff --name-only | grep '\.cpp\|\.h')
$ gn format $(git ls-files '*.gn' '*.gni')
Since it will become a stream in a little bit, it should behave like all
other non-trivial stream classes, which are not primarily intended for
shared ownership; this keeps closing behavior predictable. Across all
uses of MappedFile, there is only one use case of shared mapped files,
in LibVideo, which now uses the thin SharedMappedFile wrapper.
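For reference, such a thin wrapper can be as small as this (illustrative
shape, not the exact LibCore code):

// Ref-counted shell around the uniquely-owned MappedFile.
class SharedMappedFile : public RefCounted<SharedMappedFile> {
public:
    explicit SharedMappedFile(NonnullOwnPtr<MappedFile> file)
        : m_file(move(file))
    {
    }

    MappedFile& file() { return *m_file; }

private:
    NonnullOwnPtr<MappedFile> m_file;
};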
Ignoring Size for a second, we currently have:
Rect::scale_by
Rect::scaled
Point::scale_by
Point::scaled
In Size, before this patch, we have:
Size::scale_by
Size::scaled_by
This aligns Size to use the same method name as Rect and Point. While
subjectively providing API symmetry, this is mostly to allow using this
method in templated helpers without caring what the exact underlying
type is.
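As a hypothetical example of such a templated helper (not code from this
patch):

// With the rename, Rect, Point and Size can all flow through the same
// generic code, since they all spell the method "scaled".
template<typename T>
T scaled_for_device(T const& geometry, int scale_factor)
{
    return geometry.scaled(scale_factor, scale_factor);
}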
These methods are slightly more convenient than storing the Bytes
separately. However, it feels unsanitary to reach in and access this
data directly. Both of the users of these already have the
[Readonly]Bytes available in their constructors, and can easily avoid
using these methods, so let's remove them entirely.
Due to overload resolution rules, this simple code provokes a crash:
ReadonlyBytes readonly_bytes{};
FixedMemoryStream stream{readonly_bytes};
ReadonlyBytes give_them_back{stream.bytes()};
// -> Panics on VERIFY(m_writing_enabled);
// but this is fine:
auto bytes = static_cast<FixedMemoryStream const&>(stream).bytes();
If we need to be explicit about it, let's rename the overload instead of
adding that `static_cast`.
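Sketched out, the idea is a split along these lines (the new name is
illustrative, not the actual one):

// Before: overloads differ only in constness, and overload resolution
// picks the mutable one on a non-const stream.
Bytes bytes();               // VERIFY(m_writing_enabled)
ReadonlyBytes bytes() const;

// After: the mutable accessor carries an explicit name, so no static_cast
// is needed to reach the const overload.
ReadonlyBytes bytes() const;
Bytes writable_bytes();      // hypothetical name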
`seek_demuxer_to_most_recent_keyframe()` wasn't correctly returning in
cases where an error was thrown by the demuxer. To avoid this, the
function now returns the error, and the playback state handler must act
on it instead, allowing it to exit the seeking state early.
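The shape of the fix, sketched with AK's ErrorOr and TRY (simplified; the
demuxer call and the handler hook are assumed names):

ErrorOr<void> seek_demuxer_to_most_recent_keyframe(Duration timestamp)
{
    // A demuxer error now propagates to the caller instead of being
    // swallowed inside this function.
    TRY(m_demuxer->seek_to_most_recent_keyframe(timestamp));
    return {};
}

// In the seeking state handler:
if (auto result = seek_demuxer_to_most_recent_keyframe(target); result.is_error())
    return on_seek_error(result.release_error()); // assumed hook that exits
                                                  // the seeking state early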
The EBML specification allows for CRC32 elements to be placed as the
first child element of a master element. However, our parsing of master
elements didn't take that into account, so an error would be thrown.
Instead of erroring out, the `parse_master_element()` function will now
skip CRC32 elements that are found as the first child of a master
element. If one is found after the first child, that is considered an
error.
Void elements will also be skipped by `parse_master_element()`.
Since the `parse_cluster()` function has to seek the stream back to the
cluster's first child in order for cues' positions to be used correctly,
`parse_master_element()` was changed to return the position of the first
child element, since the callback is not invoked for CRC32 elements.
This leaves the parameter that communicated the element position to the
child element parsing function unused, so it has been removed.
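Roughly, the parsing loop now behaves like this (a sketch only; element
IDs, helpers and return types are assumptions, not the actual demuxer
code):

ErrorOr<Optional<size_t>> parse_master_element(/* ... */)
{
    Optional<size_t> first_element_position;
    bool is_first_child = true;

    while (TRY(has_more_children())) {
        auto position = TRY(current_position());
        auto element_id = TRY(read_element_id());

        if (element_id == CRC32_ID) {
            // Only tolerated as the very first child of a master element.
            if (!is_first_child)
                return Error::from_string_literal("CRC32 element must be the first child");
            TRY(skip_element());
        } else if (element_id == VOID_ID) {
            TRY(skip_element());
        } else {
            // The callback only runs for real child elements, so the first
            // such position is what gets reported back to the caller.
            if (!first_element_position.has_value())
                first_element_position = position;
            TRY(parse_child_element(element_id));
        }
        is_first_child = false;
    }

    return first_element_position;
}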
Errors are now deferred until `finish_decode()` is called, meaning
branches to return errors only need to occur at the end of a ranged
decode. If VPX_DEBUG is enabled, a debug message will be printed
immediately when an overread occurs.
Average decoding times for `Tests/LibGfx/test-inputs/4.webp` improve
by about 4.7% with this change, absolute decode times changing from
27.4ms±1.1ms down to 26.1ms±1.0ms.
This does a few things:
- The decoder uses a 32- or 64-bit integer as a reservoir of the data
being decoded, rather than one single byte as it was previously.
- `read_bool()` only refills the reservoir (value) when the size drops
below one byte. Previously, it would read out a bit-sized range from
the data to completely refill the 8-bit value, doing much more work
than necessary for each individual read. (See the sketch after this
list.)
- VP9-specific code for reading the marker bit was moved to its own
function in Context.h.
- A debug flag `VPX_DEBUG` was added to optionally enable checking of
the final bits in a VPX ranged arithmetic decode and ensure that it
contains all zeroes. These zeroes are a bitstream requirement for
VP9, and are also present for all our lossy WebP test inputs
currently. This can be useful to test whether all the data present in
the range has been consumed.
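As a standalone model of that reservoir behavior (patterned after the
classic libvpx-style bool decoder; this is not the actual LibVideo/LibGfx
class, and it also folds in the deferred overread accounting described in
the previous commit message):

#include <cstddef>
#include <cstdint>

class RangeDecoder {
public:
    RangeDecoder(uint8_t const* data, size_t size)
        : m_data(data)
        , m_size(size)
    {
        refill();
    }

    bool read_bool(uint8_t probability)
    {
        // 8-bit arithmetic-coding split, as in VP8/VP9.
        uint32_t split = 1u + (((m_range - 1) * probability) >> 8);
        uint64_t big_split = static_cast<uint64_t>(split) << 56;

        bool bit;
        if (m_value >= big_split) {
            m_range -= split;
            m_value -= big_split;
            bit = true;
        } else {
            m_range = split;
            bit = false;
        }

        // Renormalize the 8-bit range, consuming bits from the reservoir.
        while (m_range < 128) {
            m_range <<= 1;
            m_value <<= 1;
            m_bits_valid--;
        }

        // Refill only when fewer than 8 valid bits remain, instead of
        // topping the value back up after every single read.
        if (m_bits_valid < 8)
            refill();
        return bit;
    }

    bool finished_cleanly() const
    {
        // Deferred error check: overreads were recorded during decoding,
        // not reported immediately.
        return m_bytes_read_past_end == 0;
    }

private:
    void refill()
    {
        while (m_bits_valid <= 56) {
            uint8_t byte = 0;
            if (m_position < m_size)
                byte = m_data[m_position++];
            else
                m_bytes_read_past_end++;
            m_value |= static_cast<uint64_t>(byte) << (56 - m_bits_valid);
            m_bits_valid += 8;
        }
    }

    uint8_t const* m_data;
    size_t m_size;
    size_t m_position { 0 };
    uint64_t m_value { 0 };      // MSB-aligned window into the bitstream
    int m_bits_valid { 0 };
    uint32_t m_range { 255 };
    size_t m_bytes_read_past_end { 0 };
};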
A lot of the size of this diff comes from the removal of error handling
from all the range decoder reads in LibVideo/VP9 and LibGfx/WebP (VP8),
since it is now checked only at the end of the range.
In a benchmark decoding `Tests/LibGfx/test-inputs/4.webp`, decode times
are improved by about 22.8%, reducing average runtime from 35.5ms±1.1ms
down to 27.4ms±1.1ms.
This should cause no behavioral changes.
Previously, we would unwrap the duration value even when it contained
an error, causing an assertion failure.
Later, we should change it to return the last known sample timestamp
in the media data, allowing, for example, live-streamed video to have
a semi-useful duration. In general, though, this is not used by
players in the wild, so we can leave it for now.
...and keep a forwarding header around in VP9, so we don't have to
update all references to the class there.
In time, we probably want to merge LibGfx/ImageDecoders and LibVideo
into LibMedia, but for now I need just this one class for the lossy
WebP decoder, so move only it over.
No behavior change.
That's what this class really is; in fact that's what the first line of
the comment says it is.
This commit does not rename the main files, since those will contain
other time-related classes in a little bit.
...from "bytes" to "size_in_bytes".
I thought at first that `bytes` meant the variable contained
bytes that the decoder would read from.
Also, this variable is called `sz` (for `size`) in the spec.
No behavior change.
In addition, this renames the internal timer to better suit its new
purpose since the playback state handlers were added. Not only is it
used to time frame presentations, but also to poll the queue when
seeking or buffering.
Running decoding on the main thread can cause frames to be delayed due
to the presentation timer waiting for a decode to finish. If a frame
takes too long to decode, this delay can be quite visible. Therefore,
decoding has been moved to a separate thread, with frames sent to the
main thread through a thread-safe queue.
This results in frame times going from being late by up to 16ms to a
very consistent ~1-2ms.
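In sketch form, with std types standing in for the actual queue and
threading classes used:

#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal thread-safe frame queue: the decoder thread enqueues decoded
// frames, and the main thread's presentation timer dequeues them.
template<typename Frame>
class FrameQueue {
public:
    void enqueue(Frame frame)
    {
        std::lock_guard lock(m_mutex);
        m_queue.push(std::move(frame));
        m_wake.notify_one();
    }

    Frame dequeue()
    {
        std::unique_lock lock(m_mutex);
        m_wake.wait(lock, [&] { return !m_queue.empty(); });
        Frame frame = std::move(m_queue.front());
        m_queue.pop();
        return frame;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_wake;
    std::queue<Frame> m_queue;
};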
This allows us to create a PlaybackManager from a file which has already
been mapped, instead of passing a file name.
This means that anyone who uses `PlaybackManager` can now use LibFSAC :)
The framebuffer size was reduced in f2c0cee, but this caused some niche
block layouts to write outside of the frame.
This could be fixed by adding checks to see if a block being predicted/
reconstructed is within the frame, but the branches introduced by that
reduce performance slightly. Therefore, it's better to keep the
framebuffer sized according to the decoded frame size in 8x8 blocks so
that any block can be decoded without bounds checking.
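That sizing rule amounts to rounding each dimension up to whole 8x8
blocks:

#include <cstddef>

constexpr size_t round_up_to_8x8_blocks(size_t samples)
{
    // Keep whole blocks so predicting/reconstructing any block stays
    // inside the framebuffer without per-block bounds checks.
    return (samples + 7) / 8 * 8;
}

// e.g. a 1920x1080 frame decodes into a 1920x1088 framebuffer:
static_assert(round_up_to_8x8_blocks(1920) == 1920);
static_assert(round_up_to_8x8_blocks(1080) == 1088);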
A test was added to ensure that this continues to work.
Some occasional cases could cause the accumulator to overflow and
produce an incorrect result. It would be nice to use a smaller
accumulator, but it seems not to be correct. :^(
We now cast to i16 to allow 128-bit vectorization to make use of one
whole register instead of having to split the loop into multiple passes.
This results in about a 5% reduction in performance in my testing.
We don't need to run through the whole floating-point color converter
for videos that use sRGB transfer characteristics and BT.709 color
primaries. This commit adds a new templated inlining function to
ColorConverter to do a very fast fixed-point YCbCr to RGB conversion.
With the fast path, frame conversion times go from ~7.8ms down to
~3.7ms. The fast path can benefit a lot more from extra SIMD vector
width, as well.
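As a flavor of what such a fixed-point path looks like (a hedged,
simplified sketch using full-range BT.709 coefficients in 16.16 fixed
point; the real templated function also handles studio range, bit depth
and so on):

#include <algorithm>
#include <cstdint>

static inline uint8_t clamp_to_u8(int32_t value)
{
    return static_cast<uint8_t>(std::clamp(value, 0, 255));
}

void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                  uint8_t& r, uint8_t& g, uint8_t& b)
{
    int32_t luma = static_cast<int32_t>(y) << 16;
    int32_t d = static_cast<int32_t>(cb) - 128;
    int32_t e = static_cast<int32_t>(cr) - 128;

    // BT.709: R = Y + 1.5748*Cr, G = Y - 0.1873*Cb - 0.4681*Cr,
    //         B = Y + 1.8556*Cb, with coefficients scaled by 65536.
    r = clamp_to_u8((luma + 103206 * e) >> 16);
    g = clamp_to_u8((luma - 12277 * d - 30679 * e) >> 16);
    b = clamp_to_u8((luma + 121609 * d) >> 16);
}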
Clang was reluctant to inline these for some reason. However, inlining
them seems to be quite beneficial, reducing decoding time in an intra-
heavy video by about 21% (~12.7s -> ~10.0s).
Quantizers are constant for the whole frame, except when segment
features override them, in which case they are constant per segment
ID. We take advantage of this by pre-calculating those values after
reading the quantization parameters and segmentation features for a
frame.
This results in a small 1.5% improvement (~12.9s -> ~12.7s).
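Conceptually, it looks like this (all helper names here are assumptions,
not the real FrameContext API):

#include <array>

constexpr int max_segments = 8;

// Assumed stand-ins for the spec's quantizer table lookup and the frame's
// segmentation data:
int ac_quantizer_from_index(int quantizer_index);
bool segment_has_alt_quantizer(int segment_id);
int alt_quantizer_index(int segment_id, int base_index);

std::array<int, max_segments> precalculate_quantizers(int base_index)
{
    std::array<int, max_segments> quantizers {};
    for (int segment_id = 0; segment_id < max_segments; segment_id++) {
        int index = base_index;
        if (segment_has_alt_quantizer(segment_id))
            index = alt_quantizer_index(segment_id, index);
        quantizers[segment_id] = ac_quantizer_from_index(index);
    }
    // Block decoding then just indexes this array by segment ID.
    return quantizers;
}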
This throws out some ugly `#define`s we had that were taking the role
of an enum anyway. We now have some nice getters in the contexts that
take the place of the combo of `seg_feature_active()` and then doing a
lookup in `FrameContext::m_segmentation_features` directly.
Bit reversals are used very often in intra-predicted frames. Turning
these into a constexpr lookup table reduces the branching needed for
block transforms significantly. This reduces the times spent decoding
an intra-heavy 1080p video by about 9% (~14.3s -> ~12.9s).
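Such a table can be generated entirely at compile time; here is a generic
8-bit version of the idea (the decoder's actual table is keyed to its
transform sizes):

#include <array>
#include <cstdint>

constexpr std::array<uint8_t, 256> make_bit_reversal_table()
{
    std::array<uint8_t, 256> table {};
    for (int i = 0; i < 256; i++) {
        uint8_t reversed = 0;
        for (int bit = 0; bit < 8; bit++) {
            if (i & (1 << bit))
                reversed |= 1 << (7 - bit);
        }
        table[i] = reversed;
    }
    return table;
}

constexpr auto bit_reversal_table = make_bit_reversal_table();

// Reverse the low `bit_count` bits of an index with a single load and a
// shift, no branching:
constexpr uint8_t reverse_bits(uint8_t value, int bit_count)
{
    return bit_reversal_table[value] >> (8 - bit_count);
}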
Previously, the block sizes would be checked at runtime to
determine the transform size to apply for residuals. Making the block
sizes into constant expressions allows all the loops to be unrolled
and reduces branching significantly.
This results in about a 26% improvement (~18s -> ~13.2s) in speed in an
intra-heavy test video.
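Schematically, the change moves the size from a runtime variable into a
template parameter (illustrative code, not the actual transforms):

#include <cstdint>

// With `size` a constant expression, the compiler can fully unroll the
// loop in every instantiation.
template<int size>
void scale_residuals(int32_t* data, int32_t factor)
{
    for (int i = 0; i < size * size; i++)
        data[i] = (data[i] * factor) >> 4;
}

void apply(int32_t* data, int32_t factor, int transform_size)
{
    // One dispatch per block replaces per-iteration branching on the size.
    switch (transform_size) {
    case 4:  scale_residuals<4>(data, factor);  break;
    case 8:  scale_residuals<8>(data, factor);  break;
    case 16: scale_residuals<16>(data, factor); break;
    case 32: scale_residuals<32>(data, factor); break;
    }
}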
Inter-prediction convolution filters are selected based on the
subpixel position determined for the motion vector relative to the
block being predicted. The subpixel position 0 only uses one single
sample in the center of the convolution, not averaging any other
samples. Let's call this a copy.
Reference frames can also be a different size relative to the frame
being predicted, but in almost every case, that scale will be 1:1
for every single frame in a video.
Taking into account these facts, we can create multiple fast paths for
inter prediction. These fast paths are only active when scaling is 1:1.
If we are doing a copy in both dimensions, then we can do a straight
memcpy from the reference frame to the output block buffer. In videos
where there is no motion, this is a dramatic speedup.
If we are doing a copy in one dimension, we can just do one convolution
and average directly into the output block buffer.
If we aren't doing a copy in either dimension, we can still cut out a
few operations from the convolution loops, since we only need to
advance our samples by whole pixels instead of subpixels.
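Putting those cases together, the dispatch looks roughly like this
(structure only; the types and helper names are assumptions):

struct Reference;
struct Block;

// Hypothetical stand-ins for the real convolution routines:
void copy_block(Reference const&, Block&);
void convolve_single_pass(Reference const&, Block&);
void convolve_whole_pixel(Reference const&, Block&);
void convolve_scaled(Reference const&, Block&);

void predict_inter_block(Reference const& reference, Block& block,
                         bool scaled, int subpixel_x, int subpixel_y)
{
    if (scaled)
        return convolve_scaled(reference, block); // general slow path
    if (subpixel_x == 0 && subpixel_y == 0)
        return copy_block(reference, block); // row-by-row memcpy
    if (subpixel_x == 0 || subpixel_y == 0)
        return convolve_single_pass(reference, block); // one convolution,
                                                       // averaged directly
                                                       // into the output
    return convolve_whole_pixel(reference, block); // both passes, advancing
                                                   // by whole pixels
}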
These fast paths result in about a 34% improvement (~31.2s -> ~20.6s)
in a video which relies heavily on intra-predicted blocks due to high
motion. In videos with less motion, the improvement will be even
greater.
Also, note that the accumulators in these faster loops are only 16-bit.
High bit-depth videos will overflow those, so for now the fast path is
only used for 8-bit videos.