This change makes hit-testing handle hidden overflow more consistently
by reusing the same clip rectangles.
Also, it fixes bugs where a box was hit-testable even though it was
clipped by the hidden overflow of its containing block.
Non-interleaved files always have an MCU of one data unit.
(A "data unit" is an 8x8 tile of pixels, and an "MCU" is a
"minium coded unit", e.g. 2x2 data units for luminance and
1 data unit each for Cr and Cb for a YCrCb image with
4:2:0 subsampling.)
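For illustration, here is the sampling-factor arithmetic that the
parenthetical describes, as a small self-contained sketch (not the
decoder's actual code; the component names and factors are just the
4:2:0 example above):
```cpp
#include <cstdio>

struct Component {
    char const* name;
    int horizontal_sampling_factor;
    int vertical_sampling_factor;
};

int main()
{
    // YCbCr with 4:2:0 subsampling: luminance is sampled twice as densely
    // as the two chroma components in both directions.
    Component components[] = { { "Y", 2, 2 }, { "Cb", 1, 1 }, { "Cr", 1, 1 } };

    int h_max = 0;
    int v_max = 0;
    for (auto const& component : components) {
        if (component.horizontal_sampling_factor > h_max)
            h_max = component.horizontal_sampling_factor;
        if (component.vertical_sampling_factor > v_max)
            v_max = component.vertical_sampling_factor;
    }

    // In an interleaved scan, one MCU covers (8 * h_max) x (8 * v_max) pixels
    // and contains h * v data units (8x8 tiles) per component: 4 for Y and
    // 1 each for Cb and Cr in this example.
    std::printf("MCU size: %dx%d pixels\n", 8 * h_max, 8 * v_max);
    for (auto const& component : components)
        std::printf("  %s: %d data unit(s) per MCU\n", component.name,
            component.horizontal_sampling_factor * component.vertical_sampling_factor);

    // In a non-interleaved scan, each scan carries a single component,
    // so the MCU is always exactly one data unit.
    return 0;
}
```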
For the test case, I converted an existing image to a ppm:
Build/lagom/bin/image -o out.ppm \
Tests/LibGfx/test-inputs/jpg/12-bit.jpg
Then I converted it to grayscale and saved it as a pgm in Photoshop.
Then I turned it into a weird jpeg like so:
path/to/cjpeg \
-outfile Tests/LibGfx/test-inputs/jpg/grayscale_mcu.jpg \
-sample 2x2 -restart 3 out.pgm
Makes 3 of the 5 jpegs that failed to decode in #22780 decode.
This allows positioning a child SVG relative to its parent SVG.
Note: These have been implemented as CSS properties. In SVG 2 these
are geometry properties that can be used in CSS (see
https://www.w3.org/TR/SVG/geometry.html), but there is not much browser
support for this. It is nicer to implement than the ad-hoc SVG
attribute parsing though, so I feel it may make sense to port the rest
of the attributes specified there (which should fix some issues with
viewport-relative sizes).
The hit-testing position is now shifted by the scroll offsets before
performing any checks for containment. This is implemented by assigning
each PaintableBox/InlinePaintable an offset corresponding to the scroll
frame in which it is contained. The non-scroll-adjusted position is
still passed down when recursing to children, because the assigned
offset already accumulates the offsets of nested scroll frames.
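A minimal sketch of the mechanism (hypothetical types and field names,
not the actual Paintable API): each box carries the offset assigned for
its enclosing scroll frame and applies it before its own containment
check, while recursion keeps handing the unadjusted position to its
children.
```cpp
struct Point {
    float x { 0 };
    float y { 0 };
};

struct Rect {
    float x { 0 };
    float y { 0 };
    float width { 0 };
    float height { 0 };

    bool contains(Point p) const
    {
        return p.x >= x && p.x < x + width && p.y >= y && p.y < y + height;
    }
};

// Hypothetical, heavily simplified paintable node.
struct Box {
    Rect absolute_rect;
    Point enclosing_scroll_frame_offset; // assigned during the pre-paint phase
    Box* first_child { nullptr };
    Box* next_sibling { nullptr };

    Box const* hit_test(Point position) const
    {
        // Children first (they paint on top), passing the *unadjusted*
        // position: each descendant applies its own accumulated offset.
        for (auto* child = first_child; child; child = child->next_sibling) {
            if (auto const* hit = child->hit_test(position))
                return hit;
        }
        // Shift by the assigned scroll offset before any containment check.
        Point adjusted { position.x + enclosing_scroll_frame_offset.x,
            position.y + enclosing_scroll_frame_offset.y };
        return absolute_rect.contains(adjusted) ? this : nullptr;
    }
};
```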
With this change, hit testing works in the Inspector.
Fixes https://github.com/SerenityOS/serenity/issues/22068
This is a fix for a regression introduced in
0bf82f748f
All CSS transforms need to be removed from the clip rectangle before
applying it. However, it is still necessary to calculate it with
applied transforms to find the correct intersection of all clip
rectangles in the containing block chain.
With this change, clip rectangles for boxes with hidden overflow or the
clip property are no longer calculated during the recording of painting
commands. Instead, it has moved to the "pre-paint" phase, along with
the assignment of scrolling offsets, and works in the following way:
1. The paintable tree is traversed to collect all paintable boxes that
have hidden overflow or use the CSS clip property. For each of these
boxes, the "final" clip rectangle is calculated by intersecting clip
rectangles in the containing block chain for a box.
2. The paintable tree is traversed another time, and a clip rectangle
is assigned for each paintable box contained by a node with hidden
overflow or the clip property.
This way, clipping becomes much easier during the painting commands
recording phase, as it only concerns the use of already assigned clip
rectangles. The same approach is applied to handle scrolling offsets.
Also, clip rectangle calculation is now implemented more correctly, as
we no longer stop at the stacking context boundary while intersecting
clip rectangles in the containing block chain.
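Roughly, the two pre-paint traversals look like the following sketch
(hypothetical names for the PaintableBox fields and helpers; the real
logic lives in the paintable tree's pre-paint pass):
```cpp
#include <algorithm>
#include <optional>
#include <unordered_map>
#include <vector>

struct Rect {
    float x { 0 }, y { 0 }, width { 0 }, height { 0 };

    Rect intersected(Rect const& other) const
    {
        float left = std::max(x, other.x);
        float top = std::max(y, other.y);
        float right = std::min(x + width, other.x + other.width);
        float bottom = std::min(y + height, other.y + other.height);
        return { left, top, std::max(0.0f, right - left), std::max(0.0f, bottom - top) };
    }
};

struct PaintableBox {
    bool clips_children { false }; // hidden overflow or the CSS clip property
    Rect absolute_clip_source_rect {};
    PaintableBox* containing_block { nullptr };
    std::vector<PaintableBox*> children;
    std::optional<Rect> enclosing_clip_rect; // looked up while recording commands
};

// Pass 1: for every clipping box, compute its "final" clip rectangle by
// intersecting the clip rectangles along its containing block chain.
void compute_clip_rects(PaintableBox& box, std::unordered_map<PaintableBox*, Rect>& clip_rects)
{
    if (box.clips_children) {
        Rect clip = box.absolute_clip_source_rect;
        for (auto* ancestor = box.containing_block; ancestor; ancestor = ancestor->containing_block) {
            if (ancestor->clips_children)
                clip = clip.intersected(ancestor->absolute_clip_source_rect);
        }
        clip_rects.emplace(&box, clip);
    }
    for (auto* child : box.children)
        compute_clip_rects(*child, clip_rects);
}

// Pass 2: give every box the clip rectangle of its nearest clipping ancestor,
// so the recording phase only has to use an already-assigned rectangle.
void assign_clip_rects(PaintableBox& box, std::unordered_map<PaintableBox*, Rect> const& clip_rects)
{
    for (auto* ancestor = box.containing_block; ancestor; ancestor = ancestor->containing_block) {
        if (auto it = clip_rects.find(ancestor); it != clip_rects.end()) {
            box.enclosing_clip_rect = it->second;
            break;
        }
    }
    for (auto* child : box.children)
        assign_clip_rects(*child, clip_rects);
}
```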
Fixes:
https://github.com/SerenityOS/serenity/issues/22932
https://github.com/SerenityOS/serenity/issues/22883
https://github.com/SerenityOS/serenity/issues/22679
https://github.com/SerenityOS/serenity/issues/22534
Previously, we were handling viewBoxes/viewports in a slightly hacky
way, asking graphics elements to figure out what viewBox to use during
layout. This does not work in all cases, and can't allow for more
complex SVGs where it is possible to have nested viewports.
This commit makes the SVGFormattingContext keep track of the
viewport/boxes, and it now lays out each viewport recursively, where
each nested `<svg>` or `<symbol>` can establish a new viewport.
This fixes some previous edge cases, and starts to allow nested
viewports (there's still some issues to resolve there).
Fixes #22931
Just creating a stream on the JS heap isn't enough, as we will later
crash when trying to read from that stream as it hasn't been properly
initialized. Instead, until we have teeing implemented (which is a
rather huge part of the Streams spec), create streams using proper AOs
that do initialize the stream.
For some reason on macOS with ASAN enabled, the test promises in
HTML/Window-postMessage do not resolve. These promises wait for both
the in-document and the blob url iframes to load. It's not clear why
this works fine in Linux nor why the onload handler doesn't fire
when the iframe has already loaded.
We don't currently calculate the fill- or stroke-boxes of SVG elements,
so for now we use the content- and border-boxes respectively, as those
are the closest equivalents. The test will need updating when we do
support them.
Also, the test is a screenshot because of rendering differences when
applying transforms: a 20px box does not get painted the same as a 10px
box scaled up 2x. Otherwise that would be the more ideal form of test.
I opened Base/res/graphics/buggie.png in Photoshop, converted it
to U.S. Web Coated (SWOP) v2, flattened the image so we don't have
CMYK with alpha, and saved it as a jpeg (with color profile embedded).
Before this change, it was possible for flex lines with negative
remaining space (due to overflowing items) to put a negative amount
of space between items for some values of `justify-content`.
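For illustration only (not the actual FFC code): with
`justify-content: space-between`, the gap between items is derived from
the line's remaining free space, so an overflowing line has to clamp
that space at zero to avoid negative gaps.
```cpp
#include <algorithm>
#include <cstdio>

int main()
{
    float line_width = 100;
    float sum_of_item_widths = 130; // the items overflow the line
    int item_count = 3;

    float remaining_space = line_width - sum_of_item_widths; // -30 here

    // Without clamping, space-between puts a negative gap between items,
    // pulling them on top of each other.
    float buggy_gap = remaining_space / (item_count - 1);

    // Clamping the distributed space at zero keeps overflowing items packed
    // from the start edge instead.
    float fixed_gap = std::max(0.0f, remaining_space) / (item_count - 1);

    std::printf("buggy gap: %f, fixed gap: %f\n", buggy_gap, fixed_gap);
    return 0;
}
```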
This makes https://polar.sh/SerenityOS look much better :^)
This attribute has some compatibility issues...
- The spec says it should be an SVGAnimatedRect which contains
a DOMRect and a DOMRectReadOnly.
- Blink gives you an SVGAnimatedRect with 2x SVGRect
- Gecko gives you an SVGAnimatedRect with 2x SVGRect? (nullable)
I ended up with something similar to Gecko, an SVGAnimatedRect
with 2x DOMRect? (nullable)
With this fixed, we can now load https://polar.sh/ :^)
Previously, our code for the fixup of table rows assumed that missing
cells in a table row must be sequential. This may not be true if the
table contains cells that have a rowspan greater than one.
When present, the alpha channel is also affected by the horizontal
differencing predictor.
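As a rough sketch of what the predictor does (hypothetical helper, not
the actual decoder code): each sample in a scanline is stored as a
delta from the same channel of the previous pixel, and undoing it has
to accumulate every channel, alpha included.
```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Undo TIFF horizontal differencing for one scanline of interleaved samples.
// samples_per_pixel is 4 for RGBA; the point of the fix is that the alpha
// channel goes through the same accumulation as the color channels.
void undo_horizontal_differencing(std::vector<uint8_t>& scanline, size_t samples_per_pixel)
{
    for (size_t i = samples_per_pixel; i < scanline.size(); ++i)
        scanline[i] = static_cast<uint8_t>(scanline[i] + scanline[i - samples_per_pixel]);
}
```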
The test case was generated with GIMP with the following steps:
- Open an RGB image
- Add a transparency layer
- Export as TIFF with the LZW compression scheme
Due to the way the expression parser is written, we need to resolve the
ambiguity between member access operators and dots used as punctuation
during lexing. The lexer uses a (totally bulletproof) heuristic to do
that: whenever '.' is followed by ' ' or '\n', it is considered a dot,
and member access otherwise. While this works fine for prettified test
cases, non-prettified files often lack a newline after a trailing dot
character. Since MemberAccess will always be invalid at that position,
explicitly treat a trailing dot as part of the punctuation.
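A condensed sketch of that heuristic (hypothetical lexer helper, not
the actual one):
```cpp
#include <cstddef>
#include <string_view>

enum class TokenType {
    MemberAccess,
    Dot,
};

// Hypothetical helper: decide how to lex '.' at position `i` of `input`.
TokenType classify_dot(std::string_view input, size_t i)
{
    // A trailing dot can never start a valid member access, so treat it as
    // punctuation even when no newline follows it.
    if (i + 1 >= input.size())
        return TokenType::Dot;

    // The (totally bulletproof) heuristic: a dot followed by whitespace is
    // sentence punctuation, anything else is member access.
    char next = input[i + 1];
    if (next == ' ' || next == '\n')
        return TokenType::Dot;
    return TokenType::MemberAccess;
}
```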
RecursiveASTVisitor was recursing into the subtrees of an old root if it
was changed in the on_entry callback. Fix that by querying the root
pointer just after the on_entry callback returns. While at it, also use
`AK::TemporaryChange` instead of setting `m_current_subtree_pointer`
manually.
As it turns out, `FunctionCallCanonicalizationPass` was relying on being
able to replace the tree on entry, and the bug in RecursiveASTVisitor
made the pass not fully canonicalize nested function calls.
The changes to GenericASTPass.cpp alone are enough to fix the problem
but it is canonical (for some definition of canonicity) to only change
trees in on_leave. Therefore, the commit also switches
FunctionCallCanonicalizationPass to on_leave callback.
A test for this fix and one from the previous commit is also included.
This method (unlike can_read_line) ensures that the delimiter is present
in the buffer, and doesn't return true after eof when the delimiter is
absent.
The spec defines `[LegacyNullToEmptyString]` on the `value` argument of
`setProperty`, but since `internal_set` calls `setProperty` directly
instead of using the IDL-generated binding, we need to make sure that
null is treated as an empty string.
Fixes items grid loading on https://d.rsms.me/stuff/
When iterating inline level chunks for a piece of text like " hello ",
we will get three separate items from InlineLevelIterator:
- Text " "
- Text "hello"
- Text " "
If the first item also had some leading margin (e.g. margin-left: 10px),
we would lose that information when deciding that the whitespace is
collapsible.
This patch fixes the issue by accumulating the amount of leading margin
present in any collapsed whitespace items, and then adding them to the
next non-whitespace item in IFC.
It's a wee bit hackish, but so is the rest of the leading/trailing
margin mechanism.
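Roughly, the accumulation works like this (hypothetical item fields;
the real logic lives in IFC's line building):
```cpp
#include <cstddef>

struct Item {
    bool is_collapsible_whitespace { false };
    float margin_left { 0 };
    float width { 0 };
};

// Hypothetical sketch: whitespace items that get collapsed away still donate
// their leading margin to the next non-whitespace item on the line.
void place_items_on_line(Item* items, size_t count)
{
    float accumulated_leading_margin = 0;
    float x = 0;
    for (size_t i = 0; i < count; ++i) {
        auto& item = items[i];
        if (item.is_collapsible_whitespace) {
            accumulated_leading_margin += item.margin_left;
            continue; // the item itself is dropped, but its margin is not
        }
        x += accumulated_leading_margin + item.margin_left;
        // ... place item at x ...
        x += item.width;
        accumulated_leading_margin = 0;
    }
}
```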
This makes the header menu on https://www.gimp.org/ look proper. :^)
Add a boolean to ProcessSpawnOptions, `search_for_executable_in_path`,
which when true, calls into posix_spawnp() instead of posix_spawn().
This defaults to false to maintain the existing behavior.
The `path` field is renamed to `executable` because having two fields
refer to "path" and mean different things seemed unnecessarily
confusing.
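A hedged sketch of how the new flag maps onto the two POSIX calls
(simplified; the struct and function names here are illustrative and
the real ProcessSpawnOptions carries more fields):
```cpp
#include <spawn.h>
#include <string>
#include <sys/types.h>

extern char** environ;

struct ProcessSpawnOptionsSketch {
    std::string executable;                       // renamed from `path` in this change
    bool search_for_executable_in_path { false }; // defaults to the old behavior
};

int spawn_sketch(ProcessSpawnOptionsSketch const& options, char* const argv[], pid_t& out_pid)
{
    // posix_spawnp() searches $PATH for the executable, posix_spawn() does not.
    if (options.search_for_executable_in_path)
        return posix_spawnp(&out_pid, options.executable.c_str(), nullptr, nullptr, argv, environ);
    return posix_spawn(&out_pid, options.executable.c_str(), nullptr, nullptr, argv, environ);
}
```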
This tag is required by the specification, but some encoders (at least
Krita) don't write it for images with a single strip.
The test file was generated by opening deflate.tiff in Krita and saving
it with the DEFLATE compression.
Window and other global objects are not technically legacy platform
objects, and have other ways to override their setters and getters.
However, Window does need to share some code with the legacy platform
object paths, and simply adding another bool check to the mix seems
the shortest putt.
These set the horizontal scale factor, character spacing, word
spacing, and text rise respectively.
Also add a global scale transform, and set a text transform matrix
with a scale for some of the text.
Type 2 <=> One-dimensional Group3, customized for TIFF
Type 3 <=> Two-dimensional Group3, uses the original 1D internally
Type 4 <=> Two-dimensional Group4
So let's clarify that this is not Group3 1D but the TIFF variant, which
is called `CCITTRLE` in libtiff, and stick with that name to avoid
confusion.
Images with a display mask ("stencil" as it's called in DPaint) add
an extra bitplane which acts as a mask. For now, at least skip it
properly. Later we should render masked pixels as transparent, but
this requires some refactoring.
Now, we will evenly distribute the remaining free space across tracks
using the auto max-tracks sizing function, exactly as the specification
states. Many tests are affected, but they are not visually broken.
Fixes https://github.com/SerenityOS/serenity/issues/22798
We now allow all subsampling factors where the subsampling factors
of follow-on components evenly divide the ones of the first component.
In practice, this allows YCCK 2111, CMYK 2112, and CMYK 2111.
This refactoring makes WebContent less aware of LibWeb internals.
The code that initializes paint recording commands now resides in
`Navigable::paint()`. Additionally, we no longer need to reuse
PaintContext across iframes, allowing us to avoid saving and restoring
its state before recursing into an iframe.
There are a bunch of situations where we need to treat cross axis
max-size properties as "none", notably percentage values when the
reference containing block size is an intrinsic sizing constraint.
This fixes an issue where flex items with definite width would get
shrunk to 0px by "max-width: 100%" in case the item itself is an
SVG with no natural width or height.
For consistency, we now use the should_treat_max_width/height_as_none
helpers throughout FFC.
This makes the search/account/cart icons show up in the top right
on https://twinings.co.uk :^)
Mostly because I audited all places that assigned to `m_text_matrix`
after #22760.
This one is very difficult to trigger in practice.
`show_text()` marks the text rendering matrix dirty already,
so this only has an effect if the `TJ` array starts with a
number, and the matrix isn't marked dirty going in.
`Tm` caches the text rendering matrix, so I changed text.pdf
to contain:
```
1 0 0 1 45 130 Tm
[ 200 (Hello) -2000 (World) ] TJ T*
```
This first sets an x offset of 5 (on top of the normal 40), and
then undoes it (`200` is multiplied by font size (25) / -1000,
and `200 * 25 / -1000` is -5). Before this change, the topmost
"Hello World" ended up slightly indented.
Likely no behavior change in practice, but makes the code easier
to understand, and maybe it helps in the wild somewhere.
Previously, when calling `BigFraction::from_string()`, the fractional
part of the number was always treated as positive. This led to an
incorrect result if the input string was negative.
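A small worked example of the failure mode (illustrative numbers only):
if the sign is applied to the integer part while the fractional part is
added as a positive value, "-1.25" comes out as -0.75 instead of -1.25.
```cpp
#include <cstdio>

int main()
{
    // Parsing "-1.25" as (integer part) + (fractional part):
    double integer_part = -1;
    double fractional_part = 0.25;

    // Buggy: the fractional part is always added as a positive value.
    double buggy = integer_part + fractional_part; // -0.75

    // Fixed: the fractional part has to carry the sign of the whole number.
    double fixed = integer_part - fractional_part; // -1.25

    std::printf("buggy: %f, fixed: %f\n", buggy, fixed);
    return 0;
}
```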
HTML fragments are parsed with a temporary HTML document that never has
its flag set to say that it is ready to have scripts executed. For these
fragments, the HTMLParser prepares scripts, but execute_script is never
called on them.
This results in the HTMLParser waiting forever for the document to be
ready to have scripts executed.
To fix this, only wait for the document to be ready if we are definitely
going to execute a script.
This fixes a hang processing the HTML in the attached test, as seen on:
https://github.com/SerenityOS/serenity
Fixes: #22735
Previously, `UnsignedBigInteger::from_base()` could produce an
incorrect result if the input string contained a valid Base36 digit
that was out of range for the given base. The same method would also
crash if the input string contained an invalid Base36 digit.
An error is now returned in both these cases.
Constructing a BigFraction from string is now also fallible, so that we
can handle the case where we are given an input string with invalid
digits.
The paintable tree structure more closely matches the painting order
when fragments are owned by corresponding inline paintables. This
change does not affect the layout tree, as it is more convenient for
layout purposes to have all fragments owned by a block container in
one place.
Additionally, this improves performance significantly on pages with
many fragments, as we no longer have to walk the ancestor chain up
to the closest block container to determine if a fragment belongs
to an inline paintable.
This is a part of refactoring towards making the paintable tree
independent of the layout tree. Now, instead of transferring text
fragments from the layout tree to the paintable tree during the layout
commit phase, we allocate separate PaintableFragments that contain only
the information necessary for painting. Doing this also allows us to
get rid of LineBoxes, as they are used only during layout.
`JsonValue::to_byte_string` has peculiar type-erasure semantics which
are not usually intended. Unfortunately, it also has a very
stereotypical name which does not warn about the unexpected behavior.
So let's prefix it with `deprecated_` to make new code use `as_string`
if it just wants to get the string value, or
`serialized<StringBuilder>` if it needs to do proper serialization.
A bunch of users used consume_specific with a constant ByteString
literal, which can be replaced by an allocation-free StringView literal.
The generic consume_while overload gains a requires clause so that
consume_specific("abc") causes a more understandable and actionable
error.
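A minimal usage sketch of the replacement pattern (assuming the usual
AK::GenericLexer API):
```cpp
#include <AK/GenericLexer.h>

void parse_example(StringView input)
{
    GenericLexer lexer { input };

    // Before: passing a ByteString literal allocates just to compare
    // against a constant.
    // lexer.consume_specific(ByteString("abc"));

    // After: an allocation-free StringView literal does the same job.
    if (lexer.consume_specific("abc"sv)) {
        // ...
    }
}
```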
I opened Tests/LibGfx/test-inputs/png/wide-gamut-only.png in
Preview.app and used File->Export as PDF... to convert it to a PDF.
I then ran
mutool clean -d Tests/LibPDF/wide-gamut-only.pdf \
Tests/LibPDF/wide-gamut-only.pdf
to decompress it, edited by hand to remove padding around the image
and shrunk the page's MediaBox to be as big as the image, and ran the
command above again to fix up binary offsets in the xref table.
I created a 16-bpp RGB file in Display P3 in Photoshop, filled it
with (0, 255, 0), and then drew something on it with (100, 255, 0).
(Since it's a 16-bpp image, 255 is stored as 0xffff and 100 is stored
as 65535 * 100 / 255 == 0x6464 in the file.)
I verified that Edit->Convert to Profile...->sRGB resulted in an
image filled with (0, 255, 0) in that color space (due to gamut
clipping).
Similar to these:
* https://webkit.org/blog-files/color-gamut/Webkit-logo-P3.png
* https://www.dropbox.com/s/tgarynpj65ouafd/insta-logo.png?dl=1
...but in green instead of in red, and hand-drawn by me so no license
concerns.
This patch makes a few changes to the way we calculate line-height:
- `line-height: normal` is now resolved using metrics from the used
font (specifically, round(A + D + lineGap)).
- `line-height: calc(...)` is now resolved at style compute time.
- `line-height` values are now absolutized at style compute time.
As a consequence of the above, we no longer need to walk the DOM
ancestor chain looking for line-heights during style computation.
Instead, values are inherited, resolved and absolutized locally.
This is not only much faster, but also makes our line-height metrics
match those of other engines like Gecko and Blink.
When the caller of NumericCalculationNode::resolve() does not provide
a percentage_basis, it expects the method to return a raw percentage
value.
Fixes crashing on https://discord.com/login
Support for constructing a Value from a UnixDateTime was added in commit
effcd080ca.
That constructor just stores the value as the number of milliseconds
since epoch. There's no way for outside users to know this, so this adds
a helper to retrieve the value as a UnixDateTime and let SQL::Value be
the source of truth for how the value is encoded/decoded.
Until now, we had implemented flex container sizing by awkwardly doing
exactly what the spec said (basically having FFC size the container)
despite that not really making sense in the big picture. (Parent
formatting contexts should be responsible for sizing and placing their
children)
This patch moves us away from the Flexbox spec text a little bit, by
removing the logic for sizing the flex container in FFC, and instead
making sure that all formatting contexts can set both width and height
of flex container children.
This required changes in BFC and IFC, but it's actually quite simple!
Width was already not a problem, and it turns out height isn't either,
since the automatic height of a flex container is max-content.
With this in mind, we can simply determine the height of flex containers
before transferring control to FFC, and everything flows nicely.
With this change, we can remove all the virtuals and FFC logic for
negotiating container size with the parent formatting context.
We also don't need the "available space for flex container" stuff
anymore either, so that's gone as well.
There are some minor diffs in layout test results from this, but the
new results actually match other browsers more closely, so that's fine.
This should make flex layout, and indeed layout in general, easier to
understand, since this was the main weird special case outside of
BFC/IFC where a formatting context delegates work to its parent instead
of the other way around. :^)
Previously, these were clipped by the RecordingPainter, which used the
path's bounding box (which in this case is zero width or height). The
fix is to expand the bounding box by the stroke width.
Fixes #22661
Note: This is unrelated to any recent LibGfx changes :^)
Some apps seem to generate malformed images that are accepted
by most readers. We now only throw if malformed data would lead to
a write outside the chunky buffer.
Includes a set of wav files of different frequencies; these are each
loaded and then written to a temporary file, checking that the
metadata is correctly read and that the output matches the input.
Before this change, we used the wrong insertion point for flex items
in reverse layouts with `justify-content: normal`. This caused flex
items to overflow the flex containers "backwards" from the start edge.
Instead of assuming that the first font in the cascade font list will
have a glyph for space, we need to find it in the list, taking Unicode
ranges into account.
When using the BMP encoding, ICO images are expected to contain a 1-bit
mask for transparency. Even if an alpha channel is already included in
the image, the mask is always required. As stated here[1], the mask is
used to provide a shadow around the image.
Unfortunately, it seems that some encoders do not include that second
transparency mask. So let's read that mask only if some data is still
remaining after decoding the image.
The test case has been generated by truncating the 64 last bytes
(originally dedicated to the mask) from the `serenity.ico` file and
changing the declared size of the image in the ICO header. The size
value is stored at the offset 0x0E in the file and I changed the value
from 0x0468 to 0x0428.
[1]: https://devblogs.microsoft.com/oldnewthing/20101021-00/?p=12483
- We now propagate changes in font and line-height to anonymous wrappers
when doing a partial style update after invalidation.
- We no longer (incorrectly) propagate style from table wrapper boxes
to the table root, since inheritance works in the other direction.
Fixes #22395
This change fixes the function that calculates the number of auto-fill
tracks, ensuring it uses height when applied to rows, instead of
assuming that it always operates on columns.
It also fixes the mistake of counting gaps as if one exists after each
track, when gaps are actually only present between tracks.
Visual progression on https://kde.org/products/
CSSPixels should not be wrapped in CSS::Length before being passed to
resolved(); passing CSSPixels directly lets percentages resolve
without losing precision.
Fixes thrashing layout when 33.3333% width is used together with
"box-sizing: border-box".
Percentage vertical margin and padding values are relative to the
containing block *width*, not *height*. This has to be one of the most
commonly recurring mistakes we make :^)
With this change, a stacking context can be established by any
paintable, including inline paintables. The stacking context traversal
is updated to remove the assumption that the stacking context root is
a paintable box.
Before this change, parsed grid-template-columns/grid-template-rows
were represented as two lists: line names and track sizes. The problem
with this approach is that it erases the relationship between tracks
and their names, which results in unnecessarily complicated code that
restores this data (incorrectly if repeat() is involved) during layout.
This change solves that by representing line definitions as a list of
sizes and names in the order they were defined.
Visual progression https://genius.com/
This fixes an issue where GIF images without a global color table would
have the first segment incorrectly interpreted as color table data.
Makes many more screenshots appear on https://virtuallyfun.com/ :^)
Resolves a FIXME in MimeSniff::Resource allowing us to determine
the computed MIME type given supplied types that are used in older
versions of Apache that need special handling.
TIFF files are made in a way that makes them easily extendable, and over
the years people have made sure to exploit that. In other words, it's
easy to find images with non-standard tags. Instead of returning an
error for that, let's skip them.
Note that we need to make sure to realign the reading head in the file.
The test case was originally a 10x10 checkerboard image with required
tags, and also the `DocumentName` tag. Then, I modified this tag in a
hexadecimal editor and replaced its id with 30 000 (0x3075 as a LE u16)
and the type with the same value as well. This is, AFAIK, never used as
a custom TIFF tag, so this should remain an invalid tag id and type.
Similar to another problem we had in CharacterData, we were assuming
that the offsets were raw UTF-8 byte offsets into the data, instead of
UTF-16 code units. Fix this by using the substring helpers in
CharacterData to get the text data from the Range.
There are more instances of this issue around the place that we will
need to track down and add tests for, but this fixes one of them :^)
For the test included in this commit, we were previously returning:
llo💨😮
Instead of the expected:
llo💨😮 Wo
We previously skipped updating the lookback buffer when copying
uncompressed data, which resulted in a wrong total byte count.
With a wrong total byte count, our decompressor implementation
ended up choosing a wrong offset into the dictionary.
Hand-written (with offsets fixed up by `mutool clean`).
Uses the default encoding for each font. Manual test for now.
Byte strings generated with:
python3 -c "for i in range(4):
print('<' +
''.join('%02x' % r for r in range(i * 64, (i + 1) * 64)) +
'>')"
A local (non-public) PDF I have lying around contains this in
a page's operator stream:
```
[<00b4003e> 3 <002600480051> 3 <005700550044004f0003> -29
<00330044> 3 <0055> -3 <004e0040> 4 <0003> -29 <004c00560003> -31
<0057004b> 4 <00480003> -37 <0050
>] TJ
```
That is, there's a newline in a hexstring after a character.
This led to `Parser error at offset 5184: Unexpected character`.
The spec says in 3.2.3 String Objects, Hexadecimal Strings:
"""Each pair of hexadecimal digits defines one byte of the string.
White-space characters (such as space, tab, carriage return, line feed,
and form feed) are ignored."""
But we didn't ignore whitespace before or after a character, only
in between the bytes.
The spec also says:
"""If the final digit of a hexadecimal string is missing—that is, if
there is an odd number of digits—the final digit is assumed to be 0."""
In that case, we were skipping the closing `>` twice -- or, more
accurately, we ignored the character after it too. This has been
wrong ever since #6974.
Add a test that fails if either of the two changes isn't present.
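A standalone sketch of the two rules (whitespace anywhere inside the
hexstring is skipped, and a trailing odd digit is padded with 0); the
actual fix is in LibPDF's parser, and this helper assumes its input
contains only valid hex digits:
```cpp
#include <cctype>
#include <cstdint>
#include <string_view>
#include <vector>

// Parse the contents of a PDF hexadecimal string, starting just after '<'.
// Returns the decoded bytes; `i` ends up one past the closing '>'.
std::vector<uint8_t> parse_hexstring(std::string_view input, size_t& i)
{
    std::vector<uint8_t> bytes;
    int digits_in_current_byte = 0;
    uint8_t current_byte = 0;

    while (i < input.size()) {
        char c = input[i++];
        if (c == '>') {
            // "If the final digit ... is missing ... the final digit is assumed to be 0."
            if (digits_in_current_byte == 1)
                bytes.push_back(current_byte << 4);
            break; // do NOT consume anything past the '>'
        }
        if (std::isspace(static_cast<unsigned char>(c)))
            continue; // whitespace is ignored wherever it appears, even mid-byte
        uint8_t nibble = std::isdigit(static_cast<unsigned char>(c))
            ? c - '0'
            : (std::tolower(static_cast<unsigned char>(c)) - 'a' + 10);
        if (++digits_in_current_byte == 2) {
            bytes.push_back((current_byte << 4) | nibble);
            digits_in_current_byte = 0;
            current_byte = 0;
        } else {
            current_byte = nibble;
        }
    }
    return bytes;
}
```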
Instead of implementing stacking context painting order exactly as it
is defined in CSS2.2 "Appendix E. Elaborate description of Stacking
Contexts" we need to account for changes in the latest standards where
a box can establish a stacking context without being positioned, for
example, by having an opacity different from 1.
Fixes https://github.com/SerenityOS/serenity/issues/21137
The JS::Value being passed through is not a bigint, and needs to be
converted using ConvertToInt, as per:
https://webidl.spec.whatwg.org/#es-unsigned-long-long
Furthermore, the IDL definition also specifies that this is associated
with the [EnforceRange] extended attribute.
This makes it actually possible to pass through an autoAllocateChunkSize
to the ReadableStream constructor without it throwing a TypeError.
This overload is currently unused. When used, it doesn't compile due to
mismatched return types in the handler provided to the function and the
type of `on_resolution`.
Using a vector to represent a list of painting commands results in many
reallocations, especially on pages with a lot of content.
This change addresses it by introducing a SegmentedVector, which allows
fast appending by representing a list as a sequence of fixed-size
vectors. Currently, this new data structure supports only the
operations used in RecordingPainter, which are appending and iterating.
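A condensed sketch of the idea behind such a structure (the real
AK::SegmentedVector is more general; the names here are illustrative):
```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Minimal sketch: elements live in a chain of fixed-size chunks, so append()
// never relocates existing elements the way a plain vector's reallocation does.
template<typename T, size_t SegmentSize = 512>
class SegmentedVectorSketch {
public:
    void append(T value)
    {
        if (m_segments.empty() || m_segments.back()->size() == SegmentSize) {
            m_segments.push_back(std::make_unique<std::vector<T>>());
            m_segments.back()->reserve(SegmentSize);
        }
        m_segments.back()->push_back(std::move(value));
    }

    // Iteration simply walks the chunks in order.
    template<typename Callback>
    void for_each(Callback callback) const
    {
        for (auto const& segment : m_segments)
            for (auto const& element : *segment)
                callback(element);
    }

private:
    std::vector<std::unique_ptr<std::vector<T>>> m_segments;
};
```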
The fix here was to stop using StringBuilder::append(char) when told to
append a code point, and switch to StringBuilder::append_code_point(u32)
There's probably a bunch more issues like this, and we should stop using
append(char) in general since it allows building of garbage strings.
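A minimal before/after sketch (assuming the usual AK::StringBuilder
API):
```cpp
#include <AK/ByteString.h>
#include <AK/StringBuilder.h>

ByteString heart()
{
    StringBuilder builder;

    // Wrong: append(char) truncates the code point to one garbage byte.
    // builder.append(static_cast<char>(0x2764));

    // Right: encodes U+2764 (HEAVY BLACK HEART) as a proper UTF-8 sequence.
    builder.append_code_point(0x2764);

    return builder.to_byte_string();
}
```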
With this change, instead of applying scroll offsets during the
recording of the painting command list, we do the following:
1. Collect all boxes with scrollable overflow into a PaintContext,
each with an id and the total amount of scrolling offset accumulated
from ancestor scrollable boxes.
2. During the recording phase assign a corresponding scroll_frame_id to
each command that paints content within a scrollable box.
3. Before executing the recorded commands, translate each command that
has a scroll_frame_id by the accumulated scroll offset.
This approach has the following advantages:
- Implementing nested scrollables becomes much simpler, as the
recording phase only requires the correct assignment of the nearest
scrollable's scroll_frame_id, while the accumulated offset from
ancestors is applied subsequently.
- The recording of painting commands is not tied to a specific offset
within scrollable boxes, which means in the future, it will be
possible to update the scrolling offset and repaint without the need
to re-record painting commands.
We currently assume that the K (black) channel uses the same sampling
as the Y channel, so this already works as long as we don't error out
on it.
Obtained by running:
convert rgb_components.jpg -colorspace cmyk \
-sampling-factor 1 ycck-1111.jpg
convert rgb_components.jpg -colorspace cmyk \
-sampling-factor 2 ycck-2111.jpg
convert rgb_components.jpg -colorspace cmyk ycck-2112.jpg
where rgb_components.jpg is the file in Tests/LibGfx/test-inputs/jpg.
(I used the web version of `convert` at
https://cancerberosgx.github.io/magic/playground/index.html)
While this does indeed produce a cmyk jpg (using the YCCK encoding
internally), it uses the mathematical rgb->cmyk conversion and does
not embed a cmyk color space in the output jpg.
Normally, cmyk images are for printing and hence converting them
from cmyk to rgb using a color profile like SWOP leads to better
results. So if a cmyk image does not contain color space information,
applications might use something like SWOP instead of the simple
math transform to convert to RGB. Programs doing that will show
these images as fairly muted (and would arguably be correct doing
so).
Hence, tests using these images shouldn't check their RGB values.
Ideally, we'd add a way to get the raw cmyk data from a cmyk jpeg,
and then tests could test color values against that.
The -1111 image uses no subsampling, meaning each channel's sampling
factor is 1.
The -2111 image uses subsampling for the non-Y channels, meaning the
sampling factors are 2 for Y and 1 each for Cb, Cr, and K.
The -2112 image uses subsampling for the two C channels, meaning the
sampling factors are 2 for Y and K and 1 each for Cb and Cr.
We correctly render the -1111 variant (using e.g.
`Build/lagom/bin/image -o out.png .../ycck-1111.jpg`).
We render the -2111 variant, but it looks pretty broken.
We refuse to decode the -2112 variant. This is #21259.
Manual tests for now, but having these in tree will make it easier
to write unit tests later, once things work better.
This stopped being called for anything without a navigable container
after 76a97d8, due to the early return. This broke SVG <use> elements
that reference elements defined later in the document.
Previously we would incorrectly handle the (somewhat uncommon) case of
binding and then separately connecting a tcp socket to a server, as we
would register the socket during the manual bind(2) in the sockets by
tuple table, but our effective tuple would then change as the result of
the connect updating our target peer address. This would result in
the entry not being removed from the table on destruction, which could
lead to a UAF.
We now make sure to update the table entry if needed during connects.
POSIX (rightfully so) specifies that the sendto address argument is
ignored in connection-oriented protocols.
The TCPSocket also assumed the peer address may not change post-connect
and would trigger a UAF in sockets_by_tuple() when it did.
This aligns Workers and Window and MessagePorts to all use the same
mechanism for transferring serialized messages across realms.
It also allows transferring more message ports into a worker.
Re-enable the Worker-echo test, as none of the MessagePort tests have
themselves been flaky, and those are now using the same underlying
implementation.
This avoids doing pointless plotting for scanlines that will never be
seen.
Note: This currently can only clip edges vertically. Horizontal clipping
is more tricky, since edges that are not visible can still change how
things accumulate across the scanline.
Fixes #22382
Sadly, this causes a bunch of LibWeb test churn as this change
imperceptibly changes the final rasterized output.
We would previously not return a RadioNodeList in the curious case where
a named item resolves to two different elements within the form.
This was not a problem when calling namedItem directly in the IDL as
named_item_or_radio_node_list shadows named_item but is exposed when
calling the property through array bracket notation (as an example).
Fix this, and add a bunch more tests to cover
HTMLFormControlsCollection.
With this change "display: contents" ancestors are not considered as
insertion point for inline nodes similar to how we already ignore them
for non-inline nodes.
Fixes https://github.com/SerenityOS/serenity/issues/22396
TIFF images with the PhotometricInterpretation tag set to RGBPalette are
based on indexed colors instead of explicitly describing the color for
each pixel. Let's add support for them.
The test case was generated with GIMP using the Indexed image mode after
adding an alpha layer. Not all decoders are able to open this image, but
GIMP can.
We were previously assuming that the input offsets and lengths were all
raw byte offsets into a UTF-8 string. While internally our String
representation may be in UTF-8, from the external world it is seen as
UTF-16, with code unit offsets passed through and used as the returned
length.
Beforehand, the test included in this commit would crash Ladybird
(and otherwise return wrong values).
The implementation here is very inefficient, I am sure there is a
much smarter way to write it so that we would not need a conversion
from UTF-8 to a UTF-16 string (and then back again).
Fixes: #20971
In a bunch of cases, this actually ends up simplifying the code as
to_number will handle something such as:
```
Optional<I> opt;
if constexpr (IsSigned<I>)
opt = view.to_int<I>();
else
opt = view.to_uint<I>();
```
For us.
The main goal here however is to have a single generic number conversion
API between all of the String classes.
Calling test() multiple times in the same test file is not actually
valid, and can cause the following test to hang forever. So let's stop
doing that in the one test that did so, and also prevent the same
mistake happening again. :^)
Throwing an exception on subsequent test() calls means that we don't
hang, the test will fail with missing output, and we get a log message
explaining why.
This is a pretty straightforward test, but I managed to make this crash
on real sites while trying to fix#20971 without any other test in the
existing suite failing.
UnassociatedAlpha is the one used by GIMP when generating TIFF images
with transparency. Support is added for Grayscale and RGB images, as
those are the two types we support right now, but managing transparency
should be really straightforward for other types as well.
This is to allow future changes to do cross-process MessagePorts in an
implementation-agnostic way. Add some tests for this behavior.
Delivering messages that were posted to a MessagePort just before it
was transferred is still not implemented.
Having some rendering test coverage is motivated by #22362, but this
test wouldn't have found the crashes over there (since colorspaces.pdf
does not contain pattern color spaces). Still, good to have some
in-repo test coverage of PDF rendering.
This compression (tag Compression=2) is not very popular on its own, but
a base to implement CCITT3 2D and CCITT4 compressions.
As the format has no real benefits, it is quite hard to find an app
that will encode it for you. So I used the following program that
calls `libtiff` directly:
```cpp
#include <vector>
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <algorithm>
#include <tiffio.h>

// An array containing 0 and 1 of length width * height.
extern std::vector<uint8_t> array;

int main() {
    // From: https://stackoverflow.com/a/34257789
    TIFF *image = TIFFOpen("input.tif", "w");

    int const width = 400;
    int const height = 300;

    TIFFSetField(image, TIFFTAG_IMAGEWIDTH, width);
    TIFFSetField(image, TIFFTAG_IMAGELENGTH, height);
    TIFFSetField(image, TIFFTAG_PHOTOMETRIC, 0);
    TIFFSetField(image, TIFFTAG_COMPRESSION, COMPRESSION_CCITTRLE);
    TIFFSetField(image, TIFFTAG_BITSPERSAMPLE, 1);
    TIFFSetField(image, TIFFTAG_SAMPLESPERPIXEL, 1);
    TIFFSetField(image, TIFFTAG_ROWSPERSTRIP, 1);

    std::vector<uint8_t> scan_line(width / 8 + 8, 0);
    int count = 0;
    for (int i = 0; i < height; i++) {
        std::fill(scan_line.begin(), scan_line.end(), 0);
        for (int x = 0; x < width; ++x) {
            uint8_t eight_pixels = scan_line.at(x / 8);
            eight_pixels = eight_pixels << 1;
            eight_pixels |= !array.at(i * width + x);
            scan_line.at(x / 8) = eight_pixels;
        }
        int bytes = int(width / 8.0 + 0.5);
        if (TIFFWriteScanline(image, scan_line.data(), i, bytes) != 1)
            std::cerr << "Something went wrong\n";
    }
    TIFFClose(image);
}
```
Fixes the following mistakes:
- The "scrolling box" for a document is not `scrollable_overflow_rect()`
but the size of the viewport (the initial containing block, as the
spec says).
- Comparing edges of the "scrolling box" with edges of the target
element does not make any sense, because the "scrolling box" edges are
relative to the page while the result of `get_bounding_client_rect()`
is relative to the viewport.
The styling of elements using `use_pseudo_element()` was only applied
on layout. When an element's style was recomputed later, that styling
was not overruled by the pseudo-element selector styles.
This moves the styling override from `TreeBuilder.cpp` to
`StyleComputer.cpp`. Now the styles are always correctly applied.
I also removed the method `property_id_by_index()` because it was
not needed anymore.
Also, some calls to `invalidate_layout()` in the Meter, Progress and
Select elements were not needed anymore, because the style values
are updated when the style attribute changes.
This fixes issue #22278.
Previously, `replace` used `find_all` to find all of the positions to
replace. But `find_all` finds all the *overlapping* instances of the
needle, while `replace` assumed that the next position was always at
least `needle.length()` away from the last one. This led to crashes like
https://github.com/SerenityOS/jakt/issues/1159.
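A hedged sketch of the corrected scan, in plain C++ rather than the
actual AK code: resume searching *after* each replacement instead of
trusting a precomputed list of (possibly overlapping) hits.
```cpp
#include <string>

// Replace every non-overlapping occurrence of `needle` in `haystack`.
std::string replace_all(std::string haystack, std::string const& needle, std::string const& replacement)
{
    if (needle.empty())
        return haystack;
    size_t position = 0;
    while ((position = haystack.find(needle, position)) != std::string::npos) {
        haystack.replace(position, needle.length(), replacement);
        // Continue after the replacement; this is what guarantees the matches
        // we act on never overlap, unlike the old find_all()-based approach.
        position += replacement.length();
    }
    return haystack;
}
```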
This commit un-deprecates DeprecatedString, and repurposes it as a byte
string.
As the null state has already been removed, there are no other
particularly hairy blockers in repurposing this type as a byte string
(what it _really_ is).
This commit is auto-generated:
$ xs=$(ack -l \bDeprecatedString\b\|deprecated_string AK Userland \
Meta Ports Ladybird Tests Kernel)
$ perl -pie 's/\bDeprecatedString\b/ByteString/g;
s/deprecated_string/byte_string/g' $xs
$ clang-format --style=file -i \
$(git diff --name-only | grep \.cpp\|\.h)
$ gn format $(git ls-files '*.gn' '*.gni')
With this change, Document now always has a Web::Page. This means we no
longer rely on the breakable link between Document and BrowsingContext
to find a relevant Web::Page.
Fixes #22290
This fixes an issue where, after clicking an inline paintable, the
page would always scroll to the top. The problem was that
`scroll_an_element_into_view()` relies on `get_bounding_client_rect()`
to produce the correct scroll position, and for inline paintables we
were always returning a zero rect before this change.
As outlined in: https://www.w3.org/TR/selectors-4/#compat
We now do not treat unknown webkit pseudo-elements as invalid at parse
time, and also support serializing these elements.
Fixes: #21959
This is currently only used by CSS attr ref tests, but is useful outside
of attr ref tests as well. Give a more generic name to this ref file for
this usage.
Out-of-flow boxes (floating and absolutely-positioned elements) were
previously collected and put in the anonymous block wrapper as well, but
this actually made hit testing not able to find them, since they were
breaking expectations about tree structure that hit testing relies on.
After this change, we simply let out-of-flow boxes stay in their
original parent, preserving the author's intended box tree structure.
(Or rather, bring offsetLeft and offsetTop closer to spec, and implement
the previously-missing offsetParent)
This makes mouse inputs on https://nerget.com/fluidSim/ work properly.
Sizing already worked correctly, but before this change, we were too
aggressive with inserting line breaks when negative margins would
still allow an atomic inline to fit on the line.