Make TopDict's defaultWidthX and nominalWidthX Optional<>s so that
we can check if they're set per fdselect-selected font dict, and
if so use the value from there in CID-keyed fonts. Otherwise, keep
using the value in the top dict.
This begins the process of aligning our implementation with the spec
with regard to using CalendarMethodsRecord. The main intent here is to
make it much easier to make normative changes to AOs which have been
updated to CalendarMethodsRecord.
While this does resolve various FIXMEs, many more need to be added
higher up before we can pass a CalendarMethodsRecord all the way
through. The use here aligns with what I can gather from the spec about
what the arguments to CreateCalendarMethodsRecord should be, but
various AOs have been updated so much by other changes that it's not
completely obvious.
Other AOs do not even exist in the latest version of the spec, but we
still rely on them.
As part of these updates, this commit coincidentally also fixes two
PlainDate rounding-mode issues seen in test262 - a test for which is
also added in test-js. This issue boiled down to what appears to be an
observable optimization in the spec, where it can avoid calling
dateUntil in certain situations (roundingGranularityIsNoop).
However, the main goal here is to make it much easier to fix many more
issues in the future :^)
since/calendar-dateuntil-called-with-singular-largestunit.js ❌ -> ✅
until/calendar-dateuntil-called-with-singular-largestunit.js ❌ -> ✅
This is part of a large refactor made as part of the temporal spec.
Most AOs using the calendar now pass through this record. There will
need to be a long process of going through and updating AOs to use
this record.
Duplicates can show up when copy-pasting IDL snippets into existing IDL
files, and the resulting error is extremely useless and misleading.
This commit makes it so the parser catches these cases, and emits a more
helpful error like "Overload set 'instantiate' contains multiple
identical declarations".
This is a simple extension of GenericLexer, and is used in more than
just LibXML, so let's move it into AK.
The move also resolves a FIXME, which is removed in this commit.
Before this change, the rebaseline script would generate a reference
based on 'about:blank' (at least when running it on my macOS system).
This commit allows 'about:blank' through the assertion and only dumps
the layout tree when the loaded URL matches the one we are interested
in.
Along with this, Port.h is included, which helps generalise common
information for the port package, like its name and version. With
SemVer-compliant versions, it is possible to show positive change
(upgrade) or negative change (downgrade) in the installed ports.
However, for some non-compliant versions (e.g. using a git commit
hash), non-equality (`!=`) is used to signal an upgrade. Since there is
no algorithm (without git history) to check the order of commits, it is
not possible to tell whether it is an upgrade or a downgrade.
Semantic Versioning (SemVer) is a versioning scheme for software that
uses the MAJOR.MINOR.PATCH format: MAJOR for incompatible, breaking
changes; MINOR for backward-compatible additions; PATCH for
backward-compatible bug fixes. It aids communication, compatibility
prediction, and
dependency management. In apps dependent on specific library versions,
SemVer guides parsing and validates compatibility, ensuring apps use
appropriate dependencies.
<valid semver> ::= <version core>
| <version core> "-" <pre-release>
| <version core> "+" <build>
| <version core> "-" <pre-release> "+" <build>
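As a rough illustration (a minimal sketch, not the port tool's actual
implementation), comparing two version cores numerically to classify a
change as an upgrade or downgrade can look like this:
```
#include <cstdio>
#include <cstdlib>

// Compare two MAJOR.MINOR.PATCH version cores numerically.
// Returns <0 (downgrade), 0 (equal), or >0 (upgrade).
// Note: full SemVer precedence also has to handle pre-release tags,
// which compare lower than the associated normal version.
int compare_version_cores(char const* a, char const* b)
{
    for (int i = 0; i < 3; ++i) {
        char* end_a = nullptr;
        char* end_b = nullptr;
        long part_a = strtol(a, &end_a, 10);
        long part_b = strtol(b, &end_b, 10);
        if (part_a != part_b)
            return part_a < part_b ? -1 : 1;
        a = (*end_a == '.') ? end_a + 1 : end_a;
        b = (*end_b == '.') ? end_b + 1 : end_b;
    }
    return 0;
}

int main()
{
    // 1.4.2 -> 1.10.0 is an upgrade (numeric, not lexicographic, compare).
    printf("%d\n", compare_version_cores("1.10.0", "1.4.2")); // prints 1
}
```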
It seems that the difference between pending and ASAP in the spec is
only to allow the implementation to perform implementation-defined
operations between the two states. We don't need to distinguish the two
states, so let's just combine them for now.
* FDArray, FDSelect must be present
* Encoding must not be present
* Charset maps from GID (Glyph ID) to CID (Character ID),
instead of to character name
The fdselect array (that we already read) maps each glyph ID
to an fdarray index. The font dict at that index then stores
information for that glyph.
In practice, this is used to assign different defaultWidthX /
nominalWidthX values to blocks of glyphs in CID-keyed fonts.
We don't do anything yet with the data, and we also don't send
data of CID-keyed CFFs into this parser either, so no behavior
change.
This happens for CFFs that contain multiple fonts. This doesn't
happen in practice, but the same code will be used for fdarray
parsing, which will contain several dicts.
No behavior change.
Elements are now collected according to paint order, as the spec says,
replacing the depth-first traversal of the paint tree with hit-testing
on each box.
This change resolves a FIXME in an existing test and adds a new
previously non-working test.
This change modifies hit_test() to no longer return the first paintable
encountered at a specified position. Instead, this function accepts a
callback that is invoked for each paintable located at a position, in
hit-testing order.
This modification will allow us to reuse this call for
`Document.elementsFromPoint()` in upcoming changes.
This is very similar to SimpleFont::draw_string() for now, but
it'll become a bit different when we add support for vertical
text.
CIDFontType now only needs to draw single glyphs. Neither of the
subclasses can do that yet, so no behavior change yet.
When a tab or nested traversable navigable is closed, there might be
messages still in the pipe from the UI process that we need to
gracefully drop, rather than crash trying to access an invalid pointer.
There's a chance that we try to choose a navigable before a previously
destroyed navigable is fully destroyed and GC'd. Investigating why this
can happen is a separate endeavor, let's just not crash for now.
...instead of reading them in Filter::decode() for all filters and
then passing them around to only the LZW and flate filters.
(EarlyChange is LZWDecode-only, so that's read there instead.)
No behavior change.
Previously, we'd loop over the index of the output coordinate,
for example for a CMYK->RGB function, we'd loop over RGB. For
every output index, we'd then sample the function at the CMYK
input point.
Now, we sample at CMYK once and return a span for all outputs,
since they're stored in contiguous memory. And we then loop
over the outputs only to do weighting and mapping to the target
range at the end.
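In loop form, a toy sketch of the before/after (hypothetical names, not
the actual SampledFunction internals):
```
#include <array>
#include <span>

constexpr size_t output_count = 3; // e.g. RGB for a CMYK->RGB function

// All output components for one input point are contiguous in memory.
static std::array<float, output_count> const cell { 0.1f, 0.5f, 0.9f };

float sample_component(size_t output_index) { return cell[output_index]; }
std::span<float const> sample_all() { return cell; }

void evaluate(float weight, std::span<float> outputs)
{
    // Before: one full table lookup per output component.
    for (size_t r = 0; r < output_count; ++r)
        outputs[r] = weight * sample_component(r);

    // After: a single lookup returning a span over all components,
    // then a cheap loop just for weighting / target-range mapping.
    auto samples = sample_all();
    for (size_t r = 0; r < output_count; ++r)
        outputs[r] = weight * samples[r];
}
```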
Reduces the runtime of
(cd Tests/LibPDF; \
../../Build/lagom/bin/BenchmarkPDF --benchmark_repetitions 5)
from 235.6±2.3ms to 103.2±3.3ms on my system, and makes
SampledFunction::evaluate() more similar to lerp_nd() in TagTypes.h.
Keyframes can be given in two separate forms:
- As an array of separate keyframe objects, where the keys of each
keyframe represent CSS properties, and their values represent the
values that those CSS properties should take
e.g.:
[{ color: 'red', offset: 0.3 }, { color: 'blue', offset: 0.7 }]
- As a single monolithic keyframe object, where the keys of each
keyframe represent CSS properties, and their values are arrays of
values, where each index k represents the value of the given
property at the k'th frame.
e.g.:
{ color: ['red', 'blue'], offset: [0.3, 0.7] }
This commit only implements the first option, as it is much simpler. See
the next commit for the implementation of the second option.
Change 'dom_node_for_event_dispatch' to locate the closest layout node
with a DOM node instead of only checking the direct ancestor.
This fixes hit-testing for buttons because they are wrapped into
multiple anonymous layout nodes (internally we use flex formatting for
them).
A tile is basically a strip with a user-defined width. With that in
mind, adding support for them is quite straightforward. As a lot of
the common code was named after 'strips', to avoid future confusion I
renamed everything that interacts with either strips or tiles to a
common term: 'segment'.
Note that tiled images are supposed to always have a 'TileOffsets' tag
instead of 'StripOffset'. However, this doesn't seem to be enforced by
encoders, so we accept either of them interchangeably.
The test case was generated with the following Python script:
import pyvips
img = pyvips.Image.new_from_file('deflate.tiff')
img.write_to_file('tiled.tiff',
compression=pyvips.ForeignTiffCompression.DEFLATE,
tile=True, tile_width=64, tile_height=64)
This variable stores the number of rows from the beginning of the image,
unlike `row`, which stores the number of rows relative to the start
of the current segment.
The first marker is always white in CCITT streams, so a line starting
with a black pixel encodes a symbol meaning 0 white pixels. Then, the
decoding would proceed with a black symbol. We used to set the symbol's
color based on `column == 0`, which is wrong in this situation.
This API seems to be used by WPT for sending synthetic input events.
Implementing the naive translation of elementFromPoint to the spec steps
for this algorithm turns 4 'tests had errors unexpectedly' and 3 'tests
had timeouts unexpectedly' into 1 pass and 7 'tests had unexpected
subtest results' on the infrastructure/ subdirectory of WPT.
This was blocked because it can be used for cross-protocol attacks on
some network printers. However, it's also used by the web platform
tests. One can argue that getting WPT working is more important than
theoretical attacks on poorly configured printers.
This prevents us from returning an 'unrecognized capability' error when
a WebDriver client sends us a proxy capability. We still don't actually
support setting or returning a non-empty proxy capability, though.
We just don't choke on the input capability request from the client.
This patch also doesn't actually validate the input proxy requests.
This fixes rendering of commas in 0000941.pdf page 1. The commas
use the default width, and without this they show up very large,
covering the page.
Also, it's nice that the code now looks like the regular case 4 lines
further up.
This is one of the two top dict entries we need for CID-keyed fonts.
We don't send any CID-keyed font data into the CFF parser yet,
so no behavior change.
No real behavior change. We don't actually load the CFF data yet
(blocked on #23136 and some more), and we don't have drawing code
yet, and Type0Font::draw_string() doesn't do any drawing yet.
But it's a step in the right direction.
Refactor to resolve paint-only properties before painting, aiming to
stop using layout nodes during recording of painting commands.
Also adds a test, as we have not had any for outlines yet.
This allows for:
* Transformed text (e.g. rotated text)
* Stroked text
* Filling/stroking text with PaintStyles (e.g. gradients)
* Squashed/condensed text (via maxWidth parameter)
Fixes part of #22817
We previously never called event.ignore(), so the keydown event for F2
or F11 etc would be consumed by the BrickGame widget instead of
bubbling up. Now you can start a new game, or escape fullscreen mode,
even if you've paused the game. :^)
This makes it possible to use MakeIndexSequence in functions like:
template<typename T, size_t N>
constexpr auto foo(T (&a)[N])
This means AK/StdLibExtraDetails.h must now include AK/Types.h
for size_t, which means AK/Types.h can no longer include
AK/StdLibExtras.h (which arguably it shouldn't do anyways),
which requires rejiggering some things.
(IMHO Types.h shouldn't use AK::Details metaprogramming at all.
FlatPtr doesn't necessarily have to use Conditional<> and ssize_t could
maybe be in its own header or something. But since it's tangential to
this PR, going with the tried and true "lift things that cause the
cycle up to the top" approach.)
No behavior change, except that we now dbgln() if we see a
PrivDictOperator we don't know about. (I haven't seen this in
practice, but I found this useful while debugging things.)
Before, it was only possible to generate 27 control characters (from ^A
to ^Z, and ^\) (with only one possible key combination).
Now, the remaining 5 (^@, ^[, ^], ^^, and ^_) can also be generated with
control plus key combinations. :^)
Also added are the legacy aliases supported by most terminals:
Ctrl+{2, Space} -> ^@ (NUL)
Ctrl+3 -> ^[ (ESC)
Ctrl+4 -> ^\
Ctrl+5 -> ^]
Ctrl+6 -> ^^
Ctrl+7 -> ^_
Ctrl+8 -> ^? (DEL)
Ctrl+/ -> ^_
Note that now, one extra key combination corresponding to a character
that shares the same least significant five bits with the original
character (used in caret notation) can also generate a control
character. For example, in the US English keyboard layout both Ctrl+[
and Ctrl+{ (same as Ctrl+Shift+[) will generate the Escape control
character (^[).
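The underlying rule is just bit masking (a minimal sketch; the real key
handling also deals with modifiers, layouts, and the legacy aliases
above):
```
#include <cstdio>

// A control character is the key's code point with everything but the
// least significant five bits cleared (this is what caret notation
// encodes).
char control_character_for(char key)
{
    return key & 0x1f;
}

int main()
{
    // '[' (0x5b) and '{' (0x7b) share the low five bits 0x1b, so both
    // Ctrl+[ and Ctrl+{ produce the Escape control character (^[).
    printf("%#x %#x\n", control_character_for('['), control_character_for('{'));
}
```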
With this change, clicking on an editable element, such as an `input`
or `textarea`, causes the cursor position to be updated to the current
mouse position.
With this change, instead of applying only the border-radius clipping
from the closest containing block with hidden overflow, we now collect
all boxes within the containing block chain and apply the clipping from
all of them.
Previously, 'now' was set to the time `requestAnimationFrame()` was
called, and the EventLoop's 'now' was ignored. This was a little odd and
meant the time was always in the past.
The Annotations panel is the most obvious place to perform actions
related to annotations, so let's make that possible. :^)
The toolbar gets open/save/save-as actions for annotations, and one for
adding an annotation. The table itself gets a context menu for editing
or deleting the selected annotation.
This lets the user swap them around and pop them out as separate windows
as desired. We do lose the ability to individually resize them though,
until DynamicWidgetContainer supports that.
This gets rid of the last use of the offset_margin_width() magic number.
I initially tried using `character_width() * 10` and found it was not
accurate enough with between-character spacing, so instead this measures
a specific string. All offset strings should be the same width in a
fixed-width font.
Compare the x position with the start of the hex characters, which is
m_padding pixels to the right of hex_start_x. (And the same for the
text characters.) If the position is within the padding area, clamp it
so it's on the nearest character.
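Roughly (a sketch with hypothetical names, not the real HexEditor
members):
```
#include <algorithm>

// Map an x position to a column index; positions that land inside the
// padding area get clamped onto the nearest character.
int column_for_x(int x, int hex_start_x, int padding, int cell_width, int bytes_per_row)
{
    int relative_x = x - (hex_start_x + padding);
    return std::clamp(relative_x / cell_width, 0, bytes_per_row - 1);
}
```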
This was previously covered-up somewhat by using the buggy
m_address_bar_width value to calculate the start and end x positions of
these areas. Fixing that in the previous commit made this more obvious.
We repeat the same calculations a lot, and it's easy to make mistakes.
m_address_bar_width and offset_margin_width() have basically the same
purpose, but both are a little wrong. This removes the former, but
we'll get to the latter soon.
Subtracting 1 from both axes causes a kink in the line, because the
start point isn't also translated. The clip rect prevents us from
painting too far down, so we can just remove the translated() call.
In this commit we have optimized the handling of scroll offsets and
clip rectangles to improve performance. Previously, the process
involved multiple full traversals of the paintable tree before each
repaint, which was highly inefficient, especially on pages with a
large number of paintables. The steps were:
1. Traverse the paintable tree to identify all boxes with scrollable or
clipped overflow.
2. Gather the accumulated scroll offset or clip rectangle for each box.
3. Perform another traversal to apply the corresponding scroll offset
and clip rectangle to each paintable.
To address this, we've adopted a new strategy that separates the
assignment of the scroll/clip frame from the refresh of accumulated
scroll offsets and clip rectangles, thus reducing the workload:
1. Post-relayout: Identify all boxes with overflow and link each
paintable to the state of its containing scroll/clip frame.
2. Pre-repaint: Update the clip rectangle and scroll offset only in the
previously identified boxes.
This adjustment ensures that the costly tree traversals are only
necessary after a relayout, substantially decreasing the amount of work
required before each repaint.
...and do string expansion at the call site.
CID-keyed fonts treat the charset as CIDs instead of as SIDs,
so having access to the SIDs in numeric form will be useful
when we implement support for CID-keyed CFF fonts.
No behavior change.
The `read_tag()` function is not mandated to keep the reading head at a
meaningful position, so we also need to align the pointer after the last
tag. This solves a bug where reading the last field of an IFD, which is
placed after the tags, was incorrect.
Every TIFF container is composed of a main IFD. Some of its entries
can be pointers to a sub-IFD. We are now capable of exploring these
underlying structures. Note that we don't do anything with them yet.
... instead of inserting it into the current output character stream
that the terminal widget is going to render.
This ensures that the emoji gets sent to the foreground process of the
terminal.
A markdown file gets loaded as an inline content document by
`create_document_for_inline_content()`, for which the default document
URL is "about:error". That breaks the fragment links.
Overriding "about:error" URL by passing the URL of the just loaded
markdown file as an argument to `HTMLParser::run()` ensures that the URL
of the document is as expected.
Previously, attempting to load a value from an invalid reference would
cause a crash. We now return a CodeGenerationError rather than hitting
an assertion. This is not a complete solution, as ideally we would want
to return a ReferenceError, but this now matches the behavior we see
when we attempt to store something to an invalid reference.
JPEGStream::byte_offset() now returns an offset relative to the start
of the stream, instead of relative to the buffered part.
No behavior change except if JPEG_DEBUG is set.
Double-clicking the edges of a window results in the edge being extended
until it latches to the screen edge. This used to violate the fixed
aspect ratio property of certain windows because of only extending the
window in one dimension.
This commit adds a special case to the latching logic that makes sure to
also extend the other dimension of the window such that the fixed aspect
ratio is maintained.
- Add interactive flag which tells `cp` to prompt before overwriting
files
- Accept 'yes', 'no', 'n', and 'y' as input, all other inputs will
result in the prompt being shown again
Ensure that the UTF-8 encoded "mapping character" in String (from
InputBox) gets converted to a UTF-32 code point. Before, only the first
byte of the UTF-8 character sequence was used as a UTF-32 code point.
That used to result in incorrect characters getting written in the
keymap file for all the "non-ASCII" characters used as the mapping
character.
I implemented CFF charset format 2 in 6f783929dd with the note
"I haven't seen this being used in the wild". Now that I have
seen it (0000658.pdf), I can say that this has never worked,
despite me claiming "it's easy to implement".
But now it works!
We now reject fonts where the active cmap subtable is in a format
we can't read yet, instead of silently drawing squares for all glyphs.
This doesn't fire at all for my 1000-file PDF test set, but seems
like a good thing to check.
(Instead of duplicating the switch, I first tried making a
glyph_id_for_code_point_or_else() that returns ErrorOr<u32> and then
make both glyph_id_for_code_point() and validate_format_can_be_read()
call that, but I liked less how that worked out -- felt too clever.)
This would've saved me some debugging on #23103.
We now return an error instead of a font that draws squares for all
characters. That seems preferable since it makes these cases easy to
find. This fires for three files in my 1000-file PDF test set, so it's
not exceedingly common (...but I wasn't aware that three files were
rendering boxes for this reason, and now I am and can just make them
work in the future).
Previously, if the number of history items was already above the history
limit, then it would never trim old history items. This could happen if
the ClipboardHistory.json file was manually edited.
Prevent this by sorting and trimming the data that is read in. For good
measure, also guard against `> m_history_limit` in add_item().
Previously, the ClipboardHistory.json file contained a series of
individual JSON objects for history items, separated by newlines. This
is convenient for appending a single item, but makes the file itself
invalid JSON.
This commit changes that file to instead contain those objects in a
proper JSON array.
Before this change, `set_needs_to_resolve_paint_only_properties()` was
only called after style invalidation. However, since relayout can be
triggered independently from style invalidation, we need to ensure that
paint-only properties are updated in that case too.
The change ensures that functions that parse string values for terminal
settings fall back to a default value, if there is no value present or
the value is invalid.
Automarks are similar to bookmarks placed by the terminal, allowing the
user to selectively remove a single command and its output from the
terminal scrollback.
This commit implements a single way to add marks: automatically placing
them when the shell becomes interactive.
To make sure the shell behaves correctly after its expected prompt
position changes, the terminal layer forces a resize event to be passed
to the shell on such (possibly) partial clears; this also has the nice
side effect of fixing the disappearing prompt on the preexisting "clear
including history" action: Fixes#4192.
Instead of polluting global namespace with definitions from
libkern/OSByteOrder.h and machine/endian.h on macOS, just use AK
functions for conversions.
This is a bit tangled in that updating these functions involves a slew
of other spec changes.
However those spec updates fix a bunch of rounding issues, fixing 32
test cases.
Diff Tests:
+32 ✅ -32 ❌
This has the guts of the old temporal AO BalanceDuration with some
differences such as an extra precision of one unit. This appears to be
important for different rounding modes to act as a tiebreaker.
It also does not have any logic regarding a zoned date time 'relative
to' - the spec seems to have this factored in a way where callers are
expected to perform this logic if necessary.
`x.size()` is 3 or 4 in practice and at most 15 in theory
(cf `number_of_components_in_color_space()` in Profile.cpp),
so using a VLA for these should be fine from a stack size PoV.
It's only accessed through a span, so there's no additional
security risk.
Takes
Build/lagom/bin/image --no-output \
--assign-color-profile \
Build/lagom/Root/res/icc/Adobe/CMYK/USWebCoatedSWOP.icc \
--convert-to-color-profile serenity-sRGB.icc \
cmyk.jpg
from 2.74s to 2.66s on my machine, almost 3% faster.
(Don't do this in LibPDF's SampledFunction::evaluate() since there's
no bound on the dimension of the input function. Realistically,
the size of the table puts a pretty low bound on that dimension though,
so we should probably enforce some bound in SampledFunction::create()
and do this there too.)
`x.size()` is 3 or 4 in practice and at most 15 in theory
(cf `number_of_components_in_color_space()` in Profile.cpp),
so using a VLA for these should be fine from a stack size PoV.
They're accessed from two local loops iterating from 0 to
`x.size()`, so it's hopefully not too risky from a security
PoV either.
Takes
Build/lagom/bin/image --no-output \
--assign-color-profile \
Build/lagom/Root/res/icc/Adobe/CMYK/USWebCoatedSWOP.icc \
--convert-to-color-profile serenity-sRGB.icc \
cmyk.jpg
from 2.81s to 2.74s on my machine, about 2.5% faster.
The set_main_widget<T>() function only calls the constructor and not
the `try_create()` that the GMLCompiler generates, so all calls to
`find_descendant_of_type_named()` would return null pointers that
would result in a crash.
Implements following rule from CSS Overflow Module Level 3:
"The visible/clip values of overflow compute to auto/hidden
(respectively) if one of overflow-x or overflow-y is neither visible
nor clip."
When WebDriver asks to destroy a window, we can hit this case with no
active browsing context. This seems odd, but perhaps is a spec issue as
well. Just log to dbgln for now.
This aligns it better with the current state of the spec.
There are still some functions and data members that need to be moved
into
Navigable or TraversableNavigable, but we can leave those for the next
cleanup PR.
This check has been if (false && stuff) for quite a while, since the
transition to Navigables. No one updates the BrowsingContext's session
history, so the check for it having an about blank document and only an
about blank document is always false.
Previously, if we wanted to e.g. do linear interpolation in 2-D,
we'd get a sample point like (1.3, 4.4), then get 4 samples around
it at (1, 4), (2, 4), (1, 5), (2, 5), then reduce the 4 samples
to 2 samples by computing the combined samples
`0.7 * f(1, 4) + 0.3 * f(2, 4)` and `0.7 * f(1, 5) + 0.3 * f(2, 5)`,
and then 1-D linearly blending between these two samples with the
factor 0.4. In the end we'd multiply the first value by 0.7 * 0.6,
the second by 0.3 * 0.6, the third by 0.7 * 0.4, and the fourth by
0.3 * 0.4, and then sum them all up.
This requires computing and storing 2**N samples, followed by
another 2**N iterations to combine the 2**N samples into a single
value.
(N is in practice either 4 or 3, so 2**N isn't super huge.)
Instead, for every sample we can directly compute the product of
weights and sum them up directly. This lets us omit the second loop
and storing 2**N values, in exchange for doing an additional O(n)
work to compute the product.
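Here's a toy 2-D version of the trick (hypothetical `f()`; the real
code works in N dimensions over a CLUT):
```
#include <cmath>
#include <cstdio>

// Toy sample function standing in for the CLUT lookup.
static float f(int x, int y) { return float(x * 10 + y); }

// For each corner, multiply the per-axis weights directly and
// accumulate, instead of materializing all 2**N corner samples and
// folding them axis by axis afterwards.
float bilerp_direct(float x, float y)
{
    int x0 = int(floorf(x)), y0 = int(floorf(y));
    float fx = x - x0, fy = y - y0; // fractional parts
    float result = 0;
    for (int corner = 0; corner < 4; ++corner) {
        int dx = corner & 1, dy = (corner >> 1) & 1;
        float weight = (dx ? fx : 1 - fx) * (dy ? fy : 1 - fy);
        result += weight * f(x0 + dx, y0 + dy);
    }
    return result;
}

int main()
{
    // Blends f(1,4), f(2,4), f(1,5), f(2,5) with weights
    // 0.7*0.6, 0.3*0.6, 0.7*0.4, 0.3*0.4.
    printf("%f\n", bilerp_direct(1.3f, 4.4f));
}
```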
Takes
Build/lagom/bin/image --no-output --invert-cmyk \
--assign-color-profile \
Build/lagom/Root/res/icc/Adobe/CMYK/USWebCoatedSWOP.icc \
--convert-to-color-profile serenity-sRGB.icc \
cmyk.jpg
from 3.42s to 3.08s on my machine, almost 10% faster (and less code).
Here cmyk.jpg is a 2253x3080 cmyk jpeg, and USWebCoatedSWOP.icc is an
mft2 profile with input tables with 256 samples and a 9x9x9x9 CLUT.
The LibPDF change is covered by TEST_CASE(sampled) in LibPDF.cpp,
and the LibGfx change is basically the same change as the one in
LibPDF (where the test results don't change) and the output
subjectively looks identical. So hopefully this indeed causes no
behavior change :^)
This is useful for working with CMYK jpegs extracted from PDFs
by mutool. CMYK channels for jpegs in PDFs are inverted compared to
CMYK channels in standalone jpegs (!), and mutool doesn't compensate
for that.
You can now do:
mutool extract ~/Downloads/0000/0000711.pdf 461
Followed by:
Build/lagom/bin/image -o out.png \
--invert-cmyk \
--assign-color-profile \
Build/lagom/Root/res/icc/Adobe/CMYK/USWebCoatedSWOP.icc \
--convert-to-color-profile serenity-sRGB.icc \
cmyk.jpg
Doesn't exactly roll off the keyboard, but at least it's possible.
(We should probably add an implicit default CMYK color profile if
the input file doesn't have one, and assume conversion to sRGB when
saving to a format that can only store RGB.)
The IPC layer between chromes and LibWeb now understands that multiple
top level traversables can live in each WebContent process.
This largely mechanical change adds a billion page_id/page_index
arguments to make sure that pages that end up opening new WebViews
through mechanisms like window.open() still work properly with those
extra windows.
When an element with an ID is added to or removed from the DOM, or if
an ID is added, removed, or changed, then we must reset the form owner
of all form-associated elements who have a form attribute.
We do this in 2 steps, using the DOM document as the messenger to handle
these changes:
1. All form-associated elements with a form attribute are stored on the
document. If the form attribute is removed, the element is removed
from that list as well.
2. When a DOM element with an ID undergoes any of the aforementioned
changes, it notifies the document of the change. The document then
forwards that change to the stored form-associated elements.
This is an editorial change in the ECMA-262 spec. See:
https://github.com/tc39/ecma262/commit/12d3687
This AO is meant to replace usages of IteratorNext followed by
IteratorValue with a single operation.
This allows, for example:
ThrowCompletionOr<Optional<Value>> foo()
{
return OptionalNone {};
}
The constructors and constraints here are lifted verbatim from
AK::Optional.
By replacing the `page_did_request_scroll_to()` calls with a request
to perform scrolling in the corresponding navigable, we ensure that
the scrolling of iframes will scroll within them instead of triggering
scroll of top level document.
There is already a parser for the time offset but it requires a positive
or negative sign. There are some weird formats on the web that do not
have the sign, so let's support that. I chose to implement this as a new
option instead of editing the old one to avoid any unknown breaking
changes.
Recently, we moved the resolution of CSS properties that do not affect
layout to occur within LayoutState::commit(). This decision was a
mistake as it breaks invalidation. With this change, we now re-resolve
all properties that do not affect layout before each repaint.
For pages containing images or embedded fonts, --dump-contents
used to dump a ton of binary data. That isn't very useful, so
stop doing it.
Before:
% time Build/lagom/bin/pdf --render out.png \
~/Downloads/0000/0000711.pdf --dump-contents | wc -l
937972
Now:
% time Build/lagom/bin/pdf --render out.png \
~/Downloads/0000/0000711.pdf --dump-contents | wc -l
6566
Printing 7k lines is also much faster than printing 940k,
0.15s instead of 2s.
WIFEXITED() returns a bool, so previously we were setting
exited_successfully to true when the service was terminated by a signal,
and false if it exited, regardless of the exit status. To test the exit
status, we have to use WEXITSTATUS() instead.
This causes us to correctly use the "3 tries then give up" logic for
services that crash, instead of infinitely attempting to respawn them.
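The corrected pattern (standard POSIX macros):
```
#include <sys/wait.h>

// WIFEXITED() only says *whether* the child exited normally; the exit
// status itself has to be read with WEXITSTATUS().
bool exited_successfully(int status)
{
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```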
bab2113ec1 made read_whitespace() return ErrorOr, which makes this
easy to do.
(7cafd7d177, which added the FIXMEs, landed slightly after bab2113ec1,
so not quite sure why it wasn't like this immediately. Maybe commit
order got changed during review; both commits were in #17831.)
No behavior change.
These were here due to the prefix-less name conflicting with a local
bool, but now that options are in a struct that's no longer a problem.
No behavior change.
`image` can't transform from RGB to CMYK yet, so the only way to
do this is by having a CMYK input.
For example, this works now (and it does a full decode and re-encode):
Build/lagom/bin/image -o out.jpg \
Tests/LibGfx/test-inputs/jpg/buggie-cmyk.jpg
Using this to convert `.pam` files written by `mutool extract`
to jpegs is a somewhat convenient method of looking at these
.pam files.
Previously `image -o foo.asdf foo.png` would create a 0-byte foo.asdf
before complaining that it doesn't know how to write .asdf images.
(Lifetimes of temporaries are extended to the end of the full
expression, so this is fine.)
We always store CMYK data as YCCK, for two reasons:
1. If we ever want to do subsampling, then doing 2111 or
2112 makes sense with YCCK, while it doesn't make sense
if we store CMYK directly.
2. It forces us to write a color transform header. With a color
transform header, everyone agrees that the CMYK channels should
be stored inverted, while without it behavior between decoders
is inconsistent. (We could write an explicit color transform header
for CMYK too though, but with YCCK it's harder to forget since the
output will look wrong everywhere without it.)
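For reference, a sketch of the usual Adobe-style CMYK -> YCCK transform
(my assumption of the details, with full-range 8-bit channels), not the
actual encoder code:
```
#include <cstdint>

// Treat the inverted C, M, Y channels as R, G, B and run them through
// the standard JPEG full-range YCbCr matrix; K is passed through as-is.
struct YCCK { float y, cb, cr, k; };

YCCK to_ycck(uint8_t c, uint8_t m, uint8_t y, uint8_t k)
{
    float r = 255.0f - c, g = 255.0f - m, b = 255.0f - y;
    return {
        0.299f * r + 0.587f * g + 0.114f * b,
        128.0f - 0.168736f * r - 0.331264f * g + 0.5f * b,
        128.0f + 0.5f * r - 0.418688f * g - 0.081312f * b,
        float(k),
    };
}
```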
initialize_mcu() grows a full CMYKBitmap override. Some of the
macroblock traversal could probably be shared with some kind of
for_all_macroblocks() type function in the future, but the color
conversion math is different enough that this should be a separate
function.
Other than that, we pass around a mode parameter and make a few
functions
write 4 instead of 3 channels, and that's it.
We use the luminance quantization and huffman tables for the K
channel.
`CMYKBitmap::to_low_quality_rgb()` morally still does the same thing,
but it has a slightly more scary name, and it doesn't use this exact
function. So let's toss it :^)
CMYK data describes which inks a printer should use to print a color.
If a screen should display a color that's supposed to look similar
to what the printer produces, it results in a color very different
to what Color::from_cmyk() produces. (It's also printer-dependent.)
There are many ICC profiles describing printing processes. It doesn't
matter too much which one we use -- most of them look somewhat
similar, and they all look dramatically better than Color::from_cmyk().
This patch adds a function to download a zip file that Adobe offers
on their web site. They even have a page for redistribution:
https://www.adobe.com/support/downloads/iccprofiles/icc_eula_win_dist.html
(That one leads to a broken download though, so this downloads the
end-user version.)
In case we have to move off this download at some point, there are also
a whole bunch of profiles at https://www.color.org/registry/index.xalter
that "may be used, embedded, exchanged, and shared without restriction".
The Adobe zip contains a whole bunch of other useful and fun profiles,
so I went with it.
For now, this only unzips the USWebCoatedSWOP.icc file though, and
installs it in ${CMAKE_BINARY_DIR}/Root/res/icc/Adobe/CMYK/. In
Serenity builds, this will make it to /res/icc/Adobe/CMYK in the
disk image. And in Lagom builds, after #23016 this is the
lagom res staging directory that tools can install via
Core::ResourceImplementation. `pdf` and `MacPDF` already do that,
`TestPDF` now does it too.
The final piece is that LibPDF then loads the profile from there
and uses it for DeviceCMYK color conversions.
(Doing file access from the bowels of a library is a bit weird,
especially in a system that has sandboxing built in. But LibGfx does
that in FontDatabase too already, and LibPDF uses that, so it's not a
new problem.)
In cases where the stacking context painting requires a separate
bitmap, the destination position needs to be translated by the
scrolling offset to ensure it ends up in the correct position.
See #22821 for a previous attempt. This attempt should settle
things once and for all.
The opentype render path adjusts by `-font_ascender * -y_scale` in
Glyf::Glyph::append_simple_path(), so that's what we need to undo
to draw at the font's baseline.
(OpenType::Font::metrics() returns ascender scaled by y_scale already,
so no need to have the scale here where we undo the shift.)
Previously, we called `baseline()` which just returns the font's
font size, which is pretty meaningless:
https://tonsky.me/blog/font-size/
https://simoncozens.github.io/fonts-and-layout/opentype.html#vertical-metrics-hhea-and-os2
Also, conceptually it makes sense to translate up by the ascender
to get from the upper edge of the glyph to the baseline.
This is a fairly simple JSON format: A single array, containing objects,
with the Annotation fields as key:value pairs.
When reading a file, we let invalid or missing keys fall back to the
default values. This is mostly intended to set a pattern so that if we
add new fields in the future, we won't fail to load old annotations
files. If loading the file fails though, we keep the previously loaded
set of annotations.
https://llvm.org/devmtg/2022-11/slides/TechTalk5-WhatDoesItTakeToRunLLVMBuildbots.pdf
has an xref table that starts like so:
```
xref
0 214
0000000002 65535 f
0000924663 00000 n
0000000003 00000 f
0000000000 00000 f
0000000016 00000 n
0000000160 00000 n
0000000263 00000 n
```
This is a list of objects in the PDF file. The lines ending with 'f'
mean that this object is "free", that is it's not stored in the file.
In this file, objects 0, 2, 3 are free. For free objects, the first
number is the offset of the next free object: Object 0 refers to object
2, 2 to 3, and 3 back to 0 (since it's the last free object).
The lines ending with "n" are actual objects; here the first number is
a byte offset to where that object is stored in the file.
Furthermore, the file contains
```
/Outlines
2
0
R
```
in its root object, meaning that object 2 stores the page outlines.
Since object 2 is set as free, there is no object 2. But the spec
says that an invalid object reference is just the null object.
This patch makes us return null objects for references to free
objects, and it also makes us treat a null object as /Outlines value
the same as not having /Outlines in the first place.
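As a self-contained sketch of the new behavior (hypothetical names, not
the actual LibPDF API), using the xref entries quoted above:
```
#include <cstdio>
#include <iterator>
#include <optional>

// Entries for objects 0-2 from the table above: 'f' entries store the
// index of the next free object, 'n' entries a byte offset.
struct XRefEntry { long value; bool in_use; };
static XRefEntry const entries[] = {
    { 2, false },      // object 0: free, next free object is 2
    { 924663, true },  // object 1: stored at byte offset 924663
    { 3, false },      // object 2: free (the /Outlines reference!)
};

// Returns the byte offset for in-use objects and nothing for free
// ones; callers treat "nothing" as the PDF null object.
std::optional<long> byte_offset_for_object(size_t index)
{
    if (index >= std::size(entries) || !entries[index].in_use)
        return std::nullopt;
    return entries[index].value;
}

int main()
{
    printf("object 2 is the null object: %s\n",
        byte_offset_for_object(2) ? "no" : "yes");
}
```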
Fixes #23023 -- we can now open that file. (We don't render it super
well, but only for already-known reasons.)
Since I found it a bit confusing: XRefTable has two related methods
here:
1. has_object() returns if an object was explicitly listed in an
xref table. The first number right after `xref` is the start
index. So if an xref table were to start with `10`, we'd implicitly
create 10 leading objects for which has_object() would return false
2. is_object_in_use() returns true if an object that was in a table
(i.e. one where has_object() returns true) was listed with 'n' and
false if it was listed with 'f'.
DocumentParser::parse_object_with_index() should probably return a null
object for the `!has_object()` case as well instead of VERIFY()ing
that has_object() is true. But I haven't seen this in the wild yet,
so keeping as-is for now.
This change addresses an issue with overflow clipping in scenarios
where `overflow: hidden` is applied to boxes nested within elements
with `overflow: scroll`.
Fixes https://github.com/SerenityOS/serenity/issues/22733
The comment appears as a tooltip when hovering over the annotation.
A couple of properties of the TextEditor would ideally be set in GML,
but either don't have a setter exposed, or the GML compiler doesn't
recognise the enum. I'll fix those up after the current big GML
compiler PR gets merged.
Rather than construct a new DeclarationsModel each time the user types
something in the Locator, keep a single one around permanently in the
ProjectDeclarations, and then use a FilteringProxyModel over it for the
suggestions.
JPEGs can store a `restart_interval`, which controls how many
minimum coded units (MCUs) apart the stream state resets.
This can be used for error correction, decoding parts of a jpeg
in parallel, etc.
We tried to use
```
u32 i = vcursor * context.mblock_meta.hpadded_count + hcursor;
i % (context.dc_restart_interval *
     context.sampling_factors.vertical *
     context.sampling_factors.horizontal) == 0
```
to check if we hit a multiple of an MCU.
`hcursor` is the horizontal offset into 8x8 blocks, vcursor the
vertical offset, and hpadded_count stores how many 8x8 blocks
we have per row, padded to a multiple of the sampling factor.
This isn't quite right if hcursor isn't divisible by both
the vertical and horizontal sampling factor. Tweak things so
that they work.
Also rename `i` to `number_of_mcus_decoded_so_far` since that's
what it is, at least now.
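A sketch of the corrected bookkeeping (hypothetical names, not the
exact decoder change):
```
#include <cstdio>

// Convert the 8x8-block cursors into MCU coordinates first, so the
// count stays right even when hcursor isn't a multiple of both
// sampling factors.
unsigned mcus_decoded_so_far(unsigned hcursor, unsigned vcursor,
    unsigned hpadded_count, unsigned hfactor, unsigned vfactor)
{
    unsigned mcus_per_row = hpadded_count / hfactor;
    return (vcursor / vfactor) * mcus_per_row + hcursor / hfactor;
}

int main()
{
    // 2x2 sampling; a 102px-wide image has 13 blocks per row, padded
    // to 14. Block cursor (hcursor=4, vcursor=2) is MCU index 9, so a
    // restart boundary check is `9 % dc_restart_interval == 0`.
    printf("%u\n", mcus_decoded_so_far(4, 2, 14, 2, 2));
}
```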
For the test case, I converted an existing image to a ppm:
Build/lagom/bin/image -o out.ppm \
Tests/LibGfx/test-inputs/jpg/12-bit.jpg
Then I resized it to 102x77px in Photoshop and saved it again.
Then I turned it into a jpeg like so:
path/to/cjpeg \
-outfile Tests/LibGfx/test-inputs/jpg/odd-restart.jpg \
-sample 2x2,1x1,1x1 -quality 5 -restart 3B out.ppm
The trick here is to:
a) Pick a size that's not divisible by the data unit width (8),
and that when rounded to a block size (13) still isn't divisible
by the subsample factor -- done by picking a width of 102.
b) Pick a huffman table that doesn't happen to contain the bit
pattern for a restart marker, so that reading a restart marker
from the bitstream as data causes a failure (-quality 5 happens
to do this)
c) Pick a restart interval where we fail to skip it if our calculation
is off (-restart 3B)
Together with #22987, fixes #22780.
This change makes hit-testing more consistent in the handling of hidden
overflow by reusing the same clip-rectangles.
Also, it fixes bugs where the box is visible for hit-testing even
though it is clipped by the hidden overflow of the containing block.
Hit-testing relies on updated clip rectangles and containing scroll
offsets, so it's necessary to ensure that paintables have these elements
updated.
This also removes the enclosing scroll offsets update from
`Internals::hit_test()`, as it is no longer needed.
Paintable boxes should not hold information stored in device pixels.
It should be converted from CSS pixels only by the time painting
command recording occurs.
Non-interleaved files always have an MCU of one data unit.
(A "data unit" is an 8x8 tile of pixels, and an "MCU" is a
"minium coded unit", e.g. 2x2 data units for luminance and
1 data unit each for Cr and Cb for a YCrCb image with
4:2:0 subsampling.)
For the test case, I converted an existing image to a ppm:
Build/lagom/bin/image -o out.ppm \
Tests/LibGfx/test-inputs/jpg/12-bit.jpg
Then I converted it to grayscale and saved it as a pgm in Photoshop.
Then I turned it into a weird jpeg like so:
path/to/cjpeg \
-outfile Tests/LibGfx/test-inputs/jpg/grayscale_mcu.jpg \
-sample 2x2 -restart 3 out.pgm
Fixes 3 of the 5 jpegs that failed to decode in #22780.
Previously, we ignored the -p argument if it was specified. This
would result in a crash because final_target_directory wasn't given a
value.
This patch does away with giving this variable an Optional<> and
just has the -p argument be its default value.
We use these canonicalized_path variables as StringViews, so it doesn't
matter if they are a String or ByteString. And they're paths, so they
shouldn't be String anyway.
Where it was straightforward to do so, I've updated the users to also
use ByteStrings for their file paths, but most of them have a temporary
String::from_byte_string() call instead.
That's all this function reads from Component.
Also rename from validate_luma_and_modify_context() to
validate_sampling_factors_and_modify_context().
No behavior change.
Many widget classes need to run substantial initialization code after
they have been set up from GML. With this change, an
initialize_fallibles() function is called if available, allowing the
initialization to be invoked from the GML setup automatically. This
means that the GML-generated creation function can now be used directly
for many more cases, and reduces code duplication.
This allows positioning a child SVG relative to its parent SVG.
Note: These have been implemented as CSS properties as in SVG 2, these
are geometry properties that can be used in CSS (see
https://www.w3.org/TR/SVG/geometry.html), but there is not much browser
support for this. It is nicer to implement than the ad-hoc SVG
attribute parsing though, so I feel it may make sense to port the rest
of the attributes specified here (which should fix some issues with
viewport relative sizes).
The hit-testing position is now shifted by the scroll offsets before
performing any checks for containment. This is implemented by assigning
each PaintableBox/InlinePaintable an offset corresponding to the scroll
frame in which it is contained. The non-scroll-adjusted position is
still passed down when recursing to children because the assigned
offset is accumulated for nested scroll frames.
With this change, hit testing works in the Inspector.
Fixes https://github.com/SerenityOS/serenity/issues/22068
Since we might enter Internals::hit_test() before the enclosing scroll
offsets are updated in the paintables tree during pre-paint, this
update needs to be enforced.
This is a fix for a regression introduced in
0bf82f748f
All CSS transforms need to be removed from the clip rectangle before
applying it. However, it is still necessary to calculate it with
applied transforms to find the correct intersection of all clip
rectangles in the containing block chain.
With this change, clip rectangles for boxes with hidden overflow or the
clip property are no longer calculated during the recording of painting
commands. Instead, it has moved to the "pre-paint" phase, along with
the assignment of scrolling offsets, and works in the following way:
1. The paintable tree is traversed to collect all paintable boxes that
have hidden overflow or use the CSS clip property. For each of these
boxes, the "final" clip rectangle is calculated by intersecting clip
rectangles in the containing block chain for a box.
2. The paintable tree is traversed another time, and a clip rectangle
is assigned for each paintable box contained by a node with hidden
overflow or the clip property.
This way, clipping becomes much easier during the painting commands
recording phase, as it only concerns the use of already assigned clip
rectangles. The same approach is applied to handle scrolling offsets.
Also, clip rectangle calculation is now implemented more correctly, as
we no longer stop at the stacking context boundary while intersecting
clip rectangles in the containing block chain.
Fixes:
https://github.com/SerenityOS/serenity/issues/22932
https://github.com/SerenityOS/serenity/issues/22883
https://github.com/SerenityOS/serenity/issues/22679
https://github.com/SerenityOS/serenity/issues/22534
Now that these algorithms are a HeapFunction as opposed to SafeFunction,
the problem mentioned in the FIXME is no longer applicable as these
functions are GC-allocated like everything else.
This helps us avoid from needing to construct a Function<T> when
invoking `create_heap_function` with a lambda.
Co-Authored-By: Ali Mohammad Pur <mpfard@serenityos.org>
This change allows `contentinfo`, `none`, `subscript` and `superscript`
RoleTypes to be built. These were the only non-abstract role types
missing from this function.
This fixes an issue where clicking on an element with one of these role
types in Inspector would cause a crash.
Previously, we were handling viewBoxes/viewports in a slightly hacky
way, asking graphics elements to figure out what viewBox to use during
layout. This does not work in all cases, and can't allow for more
complex SVGs where it is possible to have nested viewports.
This commit makes the SVGFormattingContext keep track of the
viewport/boxes, and it now lays out each viewport recursively, where
each nested `<svg>` or `<symbol>` can establish a new viewport.
This fixes some previous edge cases, and starts to allow nested
viewports (there's still some issues to resolve there).
Fixes #22931
Just creating a stream on the JS heap isn't enough, as we will later
crash when trying to read from that stream as it hasn't been properly
initialized. Instead, until we have teeing implemented (which is a
rather huge part of the Streams spec), create streams using proper AOs
that do initialize the stream.
This makes them cheap to move around, since we can store them in a
NonnullOwnPtr instead of memcopying 2584(!) bytes.
Drastically reduces the chance of stack overflow while building the
layout tree for deeply nested DOMs (since tree building was putting
these things on the stack).
This change also exposed a completely unnecessary ComputedValues deep
copy in block layout.
We don't currently calculate the fill- or stroke-boxes of SVG elements,
so for now we use the content- and border-boxes respectively, as those
are the closest equivalents. The test will need updating when we do
support them.
Also, the test is a screenshot because of rendering differences when
applying transforms: a 20px box does not get painted the same as a 10px
box scaled up 2x. Otherwise that would be the more ideal form of test.
LibSQL still uses ByteString internally for storage, but to help outside
applications port to String, add helpers to construct values from
String
and to convert a value to a String.
We don't need to forward Cookie-related IPCs from the WebView through
the Tab to the BrowserWindow. We can skip the Tab like we do in other
chromes.
This is primarily to reduce overhead when changing Cookie IPCs in future
patches.
These are written by `mutool extract` for CMYK images.
They don't contain color profiles so they're not super convenient.
But `image` can convert them (*) to sRGB as long as you use it with
`--assign-color-profile` pointing to some CMYK icc profile of your
choice.
*: Once #22922 is merged.
.pam is a "portrable arbitrarymap" as documented at
https://netpbm.sourceforge.net/doc/pam.html
It's very similar to .pbm, .pgm, and .ppm, so this uses the
PortableImageMapLoader framework. The header is slightly different,
so this has a custom header parsing function.
Also, .pam only exists in binary form, so the ASCII form support
becomes optional.
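For reference, a .pam header looks like this (illustrative values; the
binary tuple data follows immediately after the ENDHDR line):
```
P7
WIDTH 227
HEIGHT 149
DEPTH 3
MAXVAL 255
TUPLTYPE RGB
ENDHDR
```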