"TPGD" is short for "Typical Prediction for Generic Direct coding",
and the "ON" bit turns it on. In this mode, before decoding a line,
we decode a single bit first that controls if the current line is
just a copy of the previous line. If so, the line's pixels aren't
encoded, the decoder just copies the previous line.
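For illustration, here is a minimal sketch of that flow, with stand-in
names rather than the actual LibGfx decoder API. One subtlety from the
spec worth making explicit: the per-line bit (SLTP) toggles a "typical
prediction" state (LTP), and while LTP is set, each line is a copy of
the one above.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Stub standing in for the JBIG2 arithmetic (MQ) decoder, which would
// decode one bit adaptively in the given context.
struct BitDecoder {
    bool decode_bit(uint16_t /*context*/) { return false; }
};

template<typename DecodeLine>
void decode_region_tpgdon(BitDecoder& decoder,
                          std::vector<std::vector<uint8_t>>& rows,
                          DecodeLine decode_line)
{
    bool ltp = false; // LTP: "this line is typical" state
    for (std::size_t y = 0; y < rows.size(); ++y) {
        // SLTP is decoded in a dedicated, template-dependent context
        // (0x9B25 for generic template 0) and toggles LTP.
        ltp ^= decoder.decode_bit(0x9B25);
        if (ltp && y > 0)
            rows[y] = rows[y - 1]; // copy; no pixels are coded for this line
        else if (!ltp)
            decode_line(y); // decode this line's pixels as usual
        // (ltp at y == 0: the "previous" line is all zeros, which the
        // zero-initialized rows already are, so there is nothing to copy)
    }
}
```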
I created this by running
    jbig2 -i Tests/LibGfx/test-inputs/bmp/bitmap -f bmp \
        -o bitmap -F jb2 -ini tpgdon.ini
where tpgdon.ini contained:
    -Gen -Seg 1
    -Gen -Param -TpGDon 1
See previous commits in this directory for details on the `jbig2` tool.
Sadly, the TPGDON writing path in `jbig2` wasn't implemented yet,
so I had to add it. See the PR that added this commit for my
local diff to `jbig2`.
I'm somewhat confident that my change to `jbig2` (and hence the
image added in this commit) is correct because:
1. `jbig2` succeeds in converting this file to a BMP file, while it
   failed to do so without my patch (the decoding codepath in
   `jbig2` does have TPGDON support).
2. Other PDF viewers display the output of
   `Meta/jbig2_to_pdf.py -o foo.pdf path/to/bitmap-tpgdon.jbig2 399 400`
   the same way we do.
These are standalone applications meant to be run by the user directly,
as opposed to other libexec processes which are programmatically forked
by the browser. To do this, we simply remove these processes from the
`ladybird_helper_processes` list. We must also explicitly list the
dependencies for these processes.
It seems to do the right thing already, and nothing in the spec says
not to do this as far as I can tell.
With this, we can finally decode
Tests/LibGfx/test-inputs/jbig2/bitmap.jbig2 and add a test for
decoding simple arithmetic-coded images.
In practice, everything uses white backgrounds and the `or` or `xor`
operators to turn pixels black, at least for the simple images we're
about to be able to decode.
To make sure we don't forget to implement this for real once it's
needed, reject other operators, and also reject black backgrounds
(because 1 | 0 is 1, not the 0 our overwrite implementation would
produce; see the sketch below).
This means we have to remove a test, but since this scenario doesn't
seem to happen in practice, that seems ok.
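Here is the sketch mentioned above, a hedged illustration (hypothetical
names, not the actual decoder code) of why `or`-compositing coincides
with plain overwriting only on a white background:

```cpp
#include <cstdint>

enum class CombinationOperator { Or, Xor, Replace };

// Compose one segment pixel onto the page (1 = black, 0 = white).
uint8_t combine(uint8_t page, uint8_t segment, CombinationOperator op)
{
    switch (op) {
    case CombinationOperator::Or:      return page | segment;
    case CombinationOperator::Xor:     return page ^ segment;
    case CombinationOperator::Replace: return segment;
    }
    return segment;
}

// On a white (0) background, Or and Replace agree: 0 | x == x.
// On a black (1) background they diverge: 1 | 0 == 1, while a plain
// overwrite produces 0; hence black backgrounds are rejected for now.
```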
The context can vary for every bit we read.
This does not affect the one use in the test which reuses the same
context for all bits, but it is necessary for future changes.
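As a rough sketch of the API shape this enables (illustrative names, not
the actual LibGfx interface), each call now carries its own adaptive
context:

```cpp
#include <cstdint>

// Per-context adaptive state of a QM/MQ-style arithmetic coder.
struct ArithmeticDecoderContext {
    uint8_t i { 0 };   // index into the Qe probability table
    uint8_t mps { 0 }; // current "more probable symbol"
};

struct ArithmeticDecoder {
    // Before: a single context was fixed for all bits.
    // After: the caller passes a (possibly different) context per bit.
    bool get_next_bit(ArithmeticDecoderContext& context)
    {
        // Placeholder body: a real MQ-decoder consults the Qe table and
        // adaptively updates context.i and context.mps here.
        return context.mps != 0;
    }
};
```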
The API value of a <textarea> element is its raw value with normalized
newlines. This should be used in a couple of places where we currently
use the raw value.
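For reference, the normalization itself looks roughly like this (plain
std::string here rather than the actual LibWeb types): CRLF pairs and
lone CRs both become LF.

```cpp
#include <cstddef>
#include <string>

std::string normalize_newlines(std::string const& raw)
{
    std::string result;
    result.reserve(raw.size());
    for (std::size_t i = 0; i < raw.size(); ++i) {
        if (raw[i] == '\r') {
            result += '\n';
            if (i + 1 < raw.size() && raw[i + 1] == '\n')
                ++i; // a CRLF pair collapses to a single LF
        } else {
            result += raw[i];
        }
    }
    return result;
}
```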
These tests seem to interact in a way that times out the test runner and
messes up its expectations. The 'current test' moves on just as the
previous crypto test calls `done()`, resulting in the wrong expectations
being checked. In reality these tests should be timing out themselves,
rather than causing adjacent tests to fail intermittently...
I think the context normally changes for every bit. But this here
is enough to correctly decode the test bitstream in Annex H.2 in
the spec, which seems like a good checkpoint.
The internals of the decoder use spec naming, to make the code
look virtually identical to what's in the spec. (Even so, I managed
to put in several typos that took a while to track down.)
Crypto::UnsignedBigInt::export_data unexpectedly does not remove
leading zero bytes even when the corresponding parameter is passed.
The caller must manually adjust for the location of the zero bytes.
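A minimal sketch of that caller-side adjustment, using a generic byte
span rather than the actual LibCrypto buffer types:

```cpp
#include <cstddef>
#include <cstdint>
#include <span>

// Skip any leading zero bytes that export_data left in place and return
// a view of the remaining, meaningful bytes.
std::span<uint8_t const> trim_leading_zeros(std::span<uint8_t const> bytes)
{
    std::size_t offset = 0;
    while (offset < bytes.size() && bytes[offset] == 0)
        ++offset;
    return bytes.subspan(offset);
}
```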
We were unconditionally creating new File objects for all Blob-type
values passed to `FormData.append`. We should only do so if the value is
not already a File object (or if the `filename` argument is present).
We must also carry MIME type information forward from the underlying
Blob object.
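A rough sketch of the intended decision, with heavily simplified
stand-in types (the real code goes through LibWeb's Blob/File and the
spec's "create an entry" steps):

```cpp
#include <optional>
#include <string>

// Simplified stand-in for Blob/File; a File is a Blob with a name.
struct BlobLike {
    std::string type;       // MIME type
    bool is_file { false };
    std::string name;       // only meaningful when is_file is true
};

BlobLike to_entry_value(BlobLike value, std::optional<std::string> filename)
{
    // Wrap in a new File only when the value isn't one already, or when
    // an explicit filename was given.
    if (!value.is_file || filename.has_value()) {
        BlobLike file = value; // same bytes; MIME type carried forward
        file.is_file = true;
        file.name = filename.value_or(value.is_file ? value.name : "blob");
        return file;
    }
    return value; // already a File, no filename override: reuse as-is
}
```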
This patch implements and tests window.crypto.subtle.generateKey with
an RSA-OAEP algorithm. In order for the types to be happy, the
KeyAlgorithms objects are moved to their own .h/.cpp pair, and the new
KeyAlgorithms for RSA are added there.
When inserting a node into a parent, any live DOM ranges that reference
the parent may need to be updated. The spec does this by increasing or
decreasing the start/end offsets of each live range *before* actually
performing the insertion.
This caused us to crash with a verification failure, since it was
possible to set the range offset to an invalid value (that would go on
to immediately become valid after the insertion was finished).
This patch fixes the issue by adding special badged helpers on Range for
Node to reach into it and increase/decrease the offsets during node
insertion. This skips the offset validity check and actually makes our
code read slightly more like the spec.
Found by Domato :^)
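For context, the badged helpers look roughly like this (names and
signatures assumed, not the actual Range API); the Badge pattern
ensures only Node can call them:

```cpp
#include <cstddef>

class Node;

// Minimal version of the badge pattern: only T can construct a Badge<T>,
// so a Badge<Node> parameter restricts callers to Node.
template<typename T>
class Badge {
    friend T;
    Badge() = default;
};

class Range {
public:
    // These skip the usual offset validity checks: mid-insertion, offsets
    // may be transiently out of range and become valid again right after.
    void increase_start_offset(Badge<Node>, std::size_t count) { m_start_offset += count; }
    void increase_end_offset(Badge<Node>, std::size_t count) { m_end_offset += count; }
    void decrease_start_offset(Badge<Node>, std::size_t count) { m_start_offset -= count; }
    void decrease_end_offset(Badge<Node>, std::size_t count) { m_end_offset -= count; }

private:
    std::size_t m_start_offset { 0 };
    std::size_t m_end_offset { 0 };
};
```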
Rather than try to lay out masks normally, this updates the TreeBuilder
to create layout nodes for masks as a child of their user (i.e. the
masked element). This allows each use of a mask to be laid out
differently, which makes supporting `maskContentUnits=objectBoundingBox`
fairly easy.
The `SVGFormattingContext` is then updated to lay out masks last (as
their sizing may depend on their parent), and treats them like
viewports.
This is pretty ad-hoc, but the SVG specification does not give any
guidance on how to actually implement this.