Ceiling the width or height of a chrome viewport (this function is only
used when a chrome notifies LibWeb about a new viewport size) is never
correct. If we do that, PageClient::page_did_layout will set the content
size to be 1 larger than the actual physical width or height respectively
(it always ceils), and thus a spurious scrollbar will appear.
This prevents occasional scrollbar flickering in Ladybird/Qt on Wayland
with fractional scaling enabled on compositors supporting
wp-fractional-scale-v1.
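A minimal sketch of the idea, with a hypothetical helper name (the real
conversion lives in the viewport-size plumbing):

    #include <cmath>
    #include <cstdio>

    // Convert a device-pixel extent reported by the chrome into CSS pixels.
    // Rounding down avoids claiming a size 1px larger than what physically
    // exists, which is what produced the spurious scrollbar.
    int device_to_css_pixels(int device_pixels, double device_pixel_ratio)
    {
        return static_cast<int>(std::floor(device_pixels / device_pixel_ratio));
    }

    int main()
    {
        // 1921 device pixels at a 1.25 fractional scale is 1536.8 CSS pixels;
        // ceiling it would report 1537 and overflow the real viewport.
        std::printf("%d\n", device_to_css_pixels(1921, 1.25));
    }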
This removes the two-boolean hack in favor of using the existing
mechanism to remove queued tasks. It also exposes the element
invalidation behavior for call sites that don't necessarily want to
update the finished state, but still need to invalidate the associated
target.
It turns out that hmtx and OS/2 table values _are_ used when
rendering OpenType for PDFs: hmtx is used for the left-side bearing
value (which is read in `Painter::draw_glyph()`), and OS/2 is used
for the ascender, which Type0's CIDFontType2::draw_glyph()
and TrueTypeFont::draw_glyph() read.
So instead of skipping these tables entirely, try to read them, but
tolerate read failures and simply ignore those tables in that case.
Follow-up to #23276.
(I've seen weird glyph positioning from not reading the hmtx table.
I haven't seen any problems caused by not reading the OS/2 table yet,
but since the PDF code does use the ascender value, let's read that
too.)
The former automatically adapts the prefix to binary and octal
output, and is what we already use in the majority of cases.
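For illustration, a tiny standalone example of the '#' alternate form
(shown with std::format purely to demonstrate the convention):

    #include <cstdio>
    #include <format>
    #include <string>

    int main()
    {
        // "#" picks the prefix matching the radix, so the same pattern works
        // for hex, binary, and octal output; a hard-coded "0x{:x}"-style
        // prefix only makes sense for hex.
        std::string hex = std::format("{:#x}", 255); // "0xff"
        std::string bin = std::format("{:#b}", 5);   // "0b101"
        std::string oct = std::format("{:#o}", 8);   // "010"
        std::puts(hex.c_str());
        std::puts(bin.c_str());
        std::puts(oct.c_str());
    }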
Patch generated by:
rg -l '0x\{' | xargs sed -i '' -e 's/0x{:/{:#/'
I ran it 4 times (until it stopped changing things) since each
invocation only converted one instance per line.
No behavior change.
Height definiteness is now preserved as intended by CSS-SIZING-3
(assuming I've understood it correctly) and not implicitly granted by
layout algorithms when they assign height.
For the specific special/magical cases where some sizes become definite
during layout, the preceding commits have made them explicit in code.
This fixes a number of flex layout issues where we were previously
resolving percentage values against post-layout flex container heights,
which other browsers don't do.
Fixing this function will be quite an undertaking since a *lot* of code
relies on set_content_width() implicitly flipping the definiteness of
the width. It is wrong though, so we do need to fix it eventually.
In particular, these two interesting cases:
- The containing block of an abspos box is always definite from the
perspective of the abspos box.
- When resolving abspos box sizes from two opposing fixed insets,
we now mark those sizes as definite (since no layout was required
to resolve them).
The whole way we lay out SVG content is ad-hoc, so this doesn't follow
any particular spec. However, our viewport transform logic depends on
having definite sizes, so let's just mark them as such for now.
This will be required for percentages to resolve against it correctly
after we make set_content_height() not automatically mark heights as
definite sizes.
The CSS-FLEXBOX-1 spec has a bunch of special cases where sizes are
considered definite after reaching a specific point of the layout
algorithm.
Before this change, we were relying on set_content_width/height also
implicitly marking those content sizes as definite.
To prepare for that implicit behavior going away, this patch makes
the special cases explicit.
Before this change, we were always assigning the calculated height to
each block container after laying it out in BFC.
This should really only happen during intrinsic sizing, since that is
how measurements are communicated to the client there.
We should still add an informational message about when this happens
before we even get here - but we still shouldn't locate a place to
apply the hunk, as doing so ends up producing unexpected results where
the patch is prepended to the existing file.
Check if we have a cmap before dereferencing it again.
Fixes a crash on page 8 of 0000188.pdf now that the font no
longer fails to load due to a missing name table.
Looks like this is a Type2 truetype font, where we don't provide
an external cmap. How this font is supposed to work without a cmap
I don't know -- but for now, we no longer crash on it, and draw
some of the text with the previous font (which happens to work
fine in this particular case).
Kind of reverts #21675, but #21744 made that better.
4 of my 1000 test PDFs complained "Invalid table offset or length in
font" before.
For example, in 0000203.pdf, these tags had length 0: 'cvt ', 'fpgm',
'prep', 'name', 'OS/2'. (Generally it's tables that aren't needed
for rendering PDFs, and the PDF writer figured it's easier to zero
out these tables instead of omitting them altogether for some reason.)
Increases number of PDFs that render without diagnostics from
765 to 767.
It is sometimes truncated in fonts embedded in PDFs, and the data
is not needed to render PDFs. 2 of my 1000 test PDFs used to
complain "Could not load OS2 v1: Not enough data" and 1
"Could not load OS2 v2: Not enough data" before.
Increases number of PDFs that render without diagnostics from
764 to 765 (and decreases the number of distinct error messages
from 27 to 25).
It is sometimes truncated in fonts embedded in PDFs, and the data
is not needed to render PDFs. 26 of my 1000 test files complained
"Could not load Hmtx: Not enough data" before.
Increases number of PDFs that render without diagnostics from
743 to 764.
It is often missing in fonts embedded in PDFs. 75 of my 1000 test
files complained "Font is missing Name" when trying to read fonts
before.
Increases number of PDFs that render without diagnostics from
682 to 743.
Before, we used to reject profiles where the creation datetime was
invalid per spec. But invalid dates happen in practice (most commonly,
all fields set to 0). They don't affect profile conversion at all,
so be lenient about this, in exchange for slightly more wordy code
in the places that want to show the creation datetime.
Fixes a crash rendering page 2 of
https://fredrikbk.com/publications/copy-and-patch.pdf
We only need to know that the Global Object of the environment is an
event target in order to dispatch an event on it. This resolves a FIXME
where we assumed that the only type of Global in LibWeb is HTML::Window.
The default canvas size is 300x150 pixels. If the element or document
we are trying to screenshot for the WebDriver is not at least that size,
then we will create a canvas that is wider or taller than the actual
element we are painting, resulting in a bunch of transparent pixels
falling off the end.
This fixes 14 WPT css/CSS2/floats tests that we run in CI, and
presumably a ton of other reftests in the WPT test suite.
With this change "max-width: max-content" is treated as "none" when
the available width is also "max-content". This fix prevents a stack
overflow in the grid track size maximization algorithm by avoiding
recursive calls to calculate_max_width() when determining the maximum
grid container size.
Exif metadata has two tags to store the pixel density along each axis.
If the two values differ and no action is taken, the resulting image
will appear deformed. This commit scales the displayed bitmap
according to these tags in order to show the image in its intended
shape. This unfortunately includes a lot of plumbing to get this
information through IPC.
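As a rough illustration (hypothetical helper, not the actual plumbing),
the displayed size can be derived from the two density tags like this:

    #include <cstdio>

    struct Size { double width { 0 }; double height { 0 }; };

    // If the X and Y pixel densities differ, stretch one axis so the displayed
    // aspect ratio matches the physical aspect ratio encoded by the tags.
    Size displayed_size(Size bitmap_size, double x_density, double y_density)
    {
        if (x_density <= 0 || y_density <= 0 || x_density == y_density)
            return bitmap_size;
        return { bitmap_size.width, bitmap_size.height * (x_density / y_density) };
    }

    int main()
    {
        // An 800x800 bitmap tagged 300dpi horizontally and 150dpi vertically
        // covers twice as much physical height as width, so display 800x1600.
        auto size = displayed_size({ 800, 800 }, 300, 150);
        std::printf("%gx%g\n", size.width, size.height);
    }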
We have a 5 second timeout between a user-activated event occurring and
an activation-gated API being invoked in order for that API to succeed.
This is quite fine in normal circumstances, but the machines used in CI
often exceed that limit (we see upwards of 10 seconds passing between
generating the user-activated event and the API call running).
So instead of generating a user-activated event, add a hook to allow
tests to bypass the very next activation check.
Instead of having Call refer to a range of VM registers, it now has
a trailing list of argument operands as part of the instruction.
This means we no longer have to shuffle every argument value into
a register before making a call, making bytecode smaller & faster. :^)
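Very roughly, the instruction changes shape like this (hypothetical
layout for illustration, not the real LibJS encoding):

    #include <cstdint>

    struct Operand { uint32_t index { 0 }; };

    // Before: Call named a base register and a count, so every argument value
    // first had to be copied into consecutive VM registers.
    struct OldCall {
        Operand callee;
        uint32_t first_argument_register { 0 };
        uint32_t argument_count { 0 };
    };

    // After: the argument operands trail the fixed fields inside the
    // instruction itself, so values can be used from wherever they already
    // are; the instruction's encoded length depends on argument_count.
    struct NewCall {
        Operand dst;
        Operand callee;
        uint32_t argument_count { 0 };
        // followed by `argument_count` Operands in the bytecode stream
    };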
By handling common cases like Int32 arithmetic directly in the
instruction handler, we can avoid the cost of calling the generic helper
functions in Value.cpp.
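A minimal sketch of the pattern, with simplified stand-in types and a
GCC/Clang overflow builtin (not the actual LibJS Value):

    #include <cstdint>

    // Hypothetical stand-in for a JS value that may carry an Int32 payload.
    struct Value {
        bool is_int32 { false };
        int32_t int32 { 0 };
        double number { 0 };
    };

    // Stand-in for the generic (and comparatively slow) helper in Value.cpp.
    Value generic_add(Value lhs, Value rhs)
    {
        double a = lhs.is_int32 ? lhs.int32 : lhs.number;
        double b = rhs.is_int32 ? rhs.int32 : rhs.number;
        return { false, 0, a + b };
    }

    // Instruction handler: take the Int32 fast path when possible and only
    // fall back to the generic helper on non-Int32 operands or overflow.
    Value add(Value lhs, Value rhs)
    {
        if (lhs.is_int32 && rhs.is_int32) {
            int32_t result = 0;
            if (!__builtin_add_overflow(lhs.int32, rhs.int32, &result))
                return { true, result, 0 };
        }
        return generic_add(lhs, rhs);
    }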
Instead of splitting the postfix variants into ToNumeric + Inc/Dec,
we now have dedicated PostfixIncrement and PostfixDecrement instructions
that handle both outputs in one go.
If a call to `document.write` inserts an incomplete HTML tag, e.g.:
document.write("<p");
we would previously continue parsing the document until we reached a
closing angle bracket. However, the spec states we should stop once we
reach the new insertion point.
We were previously embedding the original text to handle the special
case where the text is empty. We generate an extra span to hold the
string "#text" as a placeholder, so that we don't generate a 0px-wide,
unclickable (and therefore uneditable) node. Instead, we can just detect
when this is the case in the Inspector JS.
This further reduces the generated HTML for the Inspector on
https://github.com/SerenityOS/serenity from 1.9MB to 1.8MB (about 94KB,
or 4.7%).
Attribute values may contain HTML, and may contain invalid HTML at that.
If the latter occurs, let's not generate invalid Inspector HTML when we
embed the attribute values as data attributes. Instead, cache the values
in the InspectorClient, and embed just a lookup index into the HTML.
This also nicely reduces the size of the generated HTML. The Inspector
on https://github.com/SerenityOS/serenity reduces from 2.3MB to 1.9MB
(about 318KB, or 13.8%).
This is required by the CFF spec, and is consistent with what we do for
the encoding 24 lines down.
As far as I can tell, nothing in `Type1FontProgram::rasterize_glyph()`
or in Type1Font.cpp implements the "If an encoding maps to a character
name that does not exist in the Type 1 font program, the .notdef glyph
is substituted." line from the PDF 1.7 spec (in 5.5.5 Character
Encoding, Encodings for Type 1 Fonts) yet, so this doesn't have an
effect yet.
This adapts our implementation to the editorial change in the temporal
proposal: https://github.com/tc39/proposal-temporal/commit/737baf2d
The changes to CalendarMethodsRecordLookup had already been implemented,
but we had followed the typo in the spec for CalendarMethodsRecordCall.
The issue in CalendarMethodsRecordCall hasn't surfaced yet, as the AOs
using Calendar Methods Record are currently not passing through a String
to represent a Calendar builtin.
No change to test-262.
When a node is removed from the DOM tree, its paintable needs to be
removed to ensure that it is not used to obtain sizes that are no
longer valid.
This change enables the ResizeObserver to send a notification if a node
is removed, as it should, because a removed node now has a size of zero.
It should be okay to nullify pointers without worrying about
parent/sibling/child relationships because the layout and paintable
trees will be rebuilt following any DOM mutation anyway.
Extends event loop processing steps to include gathering and
broadcasting resize observations.
Moves layout updates from Navigable::paint() to event loop processing
steps. This ensures resize observation processing occurs between layout
updates and painting.
In this change, updating layout and painting are moved to the EventLoop
processing steps. This modification allows the addition of resize
observation dispatching that needs to happen in event loop processing
steps and must occur in the following order relative to layout and
painting:
1. Update layout.
2. Gather and broadcast resize observations.
3. Paint.
Adds the initial implementation for interfaces defined in the
ResizeObserver specification. These interfaces will be used to
construct and send observation events in the upcoming changes.
This number is pure guesswork but it appears to fix GCC builds with
both ASAN and UBSAN hitting a native stack overflow before we have
a chance to catch it on our Azure CI.
This implements support for `glBlendEquation` and
`glBlendEquationSeparate`. These functions modify the calculation of the
resulting color in blending mode.
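For reference, typical usage of the two calls looks like this (standard
OpenGL API; how the declarations are pulled in depends on the
platform/GL headers):

    #include <GL/gl.h>

    void configure_blending()
    {
        glEnable(GL_BLEND);
        // One blend equation for both RGB and alpha:
        glBlendEquation(GL_FUNC_ADD);
        // ...or separate equations for the RGB and alpha components:
        glBlendEquationSeparate(GL_FUNC_REVERSE_SUBTRACT, GL_FUNC_ADD);
    }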
This patch moves us away from the accumulator-based bytecode format to
one with explicit source and destination registers.
The new format has multiple benefits:
- ~25% faster on the Kraken and Octane benchmarks :^)
- Fewer instructions to accomplish the same thing
- Much easier for humans to read(!)
Because this change requires a fundamental shift in how bytecode is
generated, it is quite comprehensive.
Main implementation mechanism: generate_bytecode() virtual function now
takes an optional "preferred dst" operand, which allows callers to
communicate when they have an operand that would be optimal for the
result to go into. It also returns an optional "actual dst" operand,
which is where the completion value (if any) of the AST node is stored
after the node has "executed".
One thing of note that's new: because instructions can now take locals
as operands, this means we got rid of the GetLocal instruction.
A side-effect of that is we have to think about the temporal deadzone
(TDZ) a bit differently for locals (GetLocal would previously check
for empty values and interpret that as a TDZ access and throw).
We now insert special ThrowIfTDZ instructions in places where a local
access may be in the TDZ, to maintain the correct behavior.
There are a number of test262 progressions and regressions from this change:
A number of async generator tests have been accidentally fixed while
converting the implementation to the new bytecode format. It didn't
seem useful to preserve bugs in the original code when converting it.
Some "does eval() return the correct completion value" tests have
regressed, in particular ones related to propagating the appropriate
completion after control flow statements like continue and break.
These are all fairly obscure issues, and I believe we can continue
working on them separately.
The net test262 result is a progression though. :^)
This is pure prep work for refactoring the bytecode to use more operands
instead of only registers.
generate_bytecode() virtuals now return an Optional<Operand>, and the
idea is to return an Operand referring to the value produced by this
AST node.
They also take an Optional<Operand> "preferred_dst" input. This is
intended to communicate the caller's preference for an output operand,
if any. This will be used to elide temporaries when we can store the
result directly in a local, for example.
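In simplified form, the interface has roughly this shape (illustrative
only, not the real LibJS declarations):

    #include <optional>

    struct Operand { };
    class Generator;

    struct ASTNodeLike {
        virtual ~ASTNodeLike() = default;

        // Returns the operand holding this node's value (if it produced one).
        // Callers may pass a preferred destination so the result can be
        // written there directly, eliding a temporary.
        virtual std::optional<Operand> generate_bytecode(
            Generator&, std::optional<Operand> preferred_dst = {}) const = 0;
    };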
The JIT compiler was an interesting experiment, but ultimately the
security & complexity cost of doing arbitrary code generation at runtime
is far too high.
In subsequent commits, the bytecode format will change drastically, and
instead of rewriting the JIT to fit the new bytecode, this patch simply
removes the JIT instead.
Other engines, JavaScriptCore in particular, have already proven that
it's possible to handle the vast majority of contemporary web content
with an interpreter. They are currently ~5x faster than us on benchmarks
when running without a JIT. We need to catch up to them before
considering performance techniques with a heavy security cost.
When an <input type=image> button is clicked, we now send the (x,y)
coordinates of the click event (relative to the image) along with the
form submission data.
Regarding the text test, we can currently only test this feature with
dialogs. The headless-browser test infrastructure cannot yet handle the
resulting navigation that would occur if we were to test with normal
form submission.
HTMLFormElement::elements is not the correct filter for submittable
elements. It includes non-submittable elements (HTMLObjectElement) and
also excludes submittable elements (HTMLInputElements in the "image"
type state, "for historical reasons").
This implements enough to represent <input type=image> with its loaded
source image (or fallback to its alt text, if applicable). This does not
implement acquring coordinates from user-activated click events on the
image.
They currently assume the DOM node is an HTMLImageElement with respect
to handling the alt attribute. The HTMLInputElement will require the
same behavior.
Previously, when constructing an XML document, the default namespace
was the empty string. This led to XML documents having empty xmlns
attributes when serialized.
Previously, CDATASection nodes were being serialized as if they were
text, which meant they were missing their start and end delimiters.
Occurrences of the '&', '<' and '>' characters were also being replaced
with their entity names.
The DOM specification states that: "Unless stated otherwise, a
document’s [...] type is 'xml'".
Previously, calls to `Document::document_type()` were returning the
incorrect value for non-HTML documents.
Our current CLUT code is pretty slow. A one-element cache can make
it quite a bit faster for images that have long runs of a single
color, such as illustrations.
It also seems to help somewhat with photos (in `0000711.pdf` page 1) --
I suppose even they have enough repeating pixels for it to be worth
it.
Some numbers, with `pdf --render-bench`:
`0000711.pdf --page 1` (high-res photo)
before: 2.9s
after: 2.43s
`0000277.pdf --page 19` (my nemesis PDF, large-ish illustration)
before: 2.17s
after: 0.58s (!)
`0000502.pdf --page 2` (wat hoe dat)
before: 0.66s
after: 0.27s
`0000521.pdf --page 10` (japanese)
before: 0.52s
after: 0.29s
`0000364.pdf --page 1` (4 min test case)
before: 0.48s
after: 0.19s
Thanks to that last one, reduces the time for
`time Meta/test_pdf.py ~/Downloads/0000` from 4m22s to 1m28s.
Helps quite a bit with #23157 (but high-res photos are still too
slow).
This reverts commit a4b2e5b27b.
This was just plain wrong, I remember it making sense and fixing
something but that was probably due to local changes. It should never
have landed on master, my bad.
This allows the user to pass very specific values instead of a u64 value
that is later cast to the underlying type.
The change also adds support for passing v128 values:
- v(i64.const 4) (splat 4)
- v128.const 0x4 (just a few bits specified, everything else = 0)
This function was called over and over in `manage_extra_channels()`,
even though the result depends only on the metadata. Instead, we now call it
once and store the result.
Of my 1000 test files, 73 have Type0 truetype fonts with stream
CIDToGIDMaps. This makes that work.
(With this patch, the number of files in my 1000 test files complaining
"Font is missing Name" increases from 41 to 75, so a bit under half of
the fonts using stream CIDToGIDMaps also have no 'name' table. So that's
next.)
Increases files without issues from 652 to 681.
Previously, during the start of the game, the BoardView component tried
to get the font from the database, passing an empty string as a
parameter. This caused a debug message informing the user that the
lookup failed.
Painting command executors are defined within the "Painting" namespace,
allowing us to remove this prefix from their names.
This commit performs the following renamings:
- Painting::PaintingCommandExecutor to Painting::CommandExecutor
- Painting::PaintingCommandExecutorCPU to Painting::CommandExecutorCPU
- Painting::PaintingCommandExecutorGPU to Painting::CommandExecutorGPU
Separating the recorder list from the painter will allow us to save it
for later execution without carrying along the painter's state. This
will be useful once we have a separate thread for executing painting
commands, to which we will have to transfer commands from the main
thread.
Preparation for https://github.com/SerenityOS/serenity/pull/23108
The setter was missing an implementation for the default and default/on
value attribute modes. This patch adds a method to get the current value
attribute mode, and implements the value setter and getter based on that
mode according to the spec.
Previously, this just checked the tag names. For elements that exist in
different namespaces (like HTMLScriptElement vs SVGScriptElement) this
could lead to invalid casts, as the namespace was not checked.
This switches to using the safer helpers on the DOM::Node.
Previously, the check for `.html` meant that `.svg` tests were excluded.
This led to a few `.svg` tests with missing or bit-rotted expectations,
which have now been added/updated.
Previously, step 5 of the stacking context hit testing would just call
`hit_test()` on the stacking context's paintable box. Which (at least
for SVGs) is just indirect infinite recursion (it'll end up right back
where it started after going through a few functions).
This now explicitly hit tests the descendants, which seems more correct,
and avoids the crash for SVGs, but nothing really seems to depend on
this step. Another solution (which I've done for a while working on
SVGs) is just to delete step 5 entirely, and nothing seems to break.
Fixes #22305
https://adobe-type-tools.github.io/font-tech-notes/pdfs/5177.Type2.pdf
says "The behavior of undefined operators is unspecified." but
https://learn.microsoft.com/en-us/typography/opentype/spec/cff2
says "When an unrecognized operator is encountered, it is ignored and
the stack is cleared."
Some type 0 CIDFontType0C fonts (i.e. CID-keyed non-OpenType CFF fonts)
depend on the latter, even though they're governed by the former spec.
Fixes rendering of text in 0000521.pdf (e.g. page 10 or 5). The font
there has a bunch of 0 opcodes for some reason.
While all callers of lerp_nd created a Vector with inline_capacity of 4
to ensure no heap allocation will take place for most inputs, lerp_nd
requested a reference to a Vector with 0 inline_capacity, which meant a
new vector was allocated on each call.
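One way to sidestep this kind of mismatch is to accept a non-owning view
so the caller's inline capacity doesn't matter; sketched here with std::
types rather than the real AK ones, and with a placeholder body:

    #include <span>

    // Taking a span means a caller holding a small-buffer-optimized vector
    // (or any contiguous storage) can pass it without converting, and thus
    // without a heap allocation per call.
    float first_last_lerp(std::span<float const> values, float t)
    {
        if (values.empty())
            return 0.0f;
        return values.front() + (values.back() - values.front()) * t;
    }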
VectorN::length() returns the vector length (aka the square root of the
dot product of the vector with itself), not the count of components.
This is seemingly both wrong and slow.
Disclaimers, similar to what's on #23202 (and most of the
prerequisites mentioned there are needed for this too):
* Only supports the `Identity-H` type0 cmap at the moment
* Doesn't support vertical text yet
* Only supports the `Identity` CIDToGIDMap at the moment
(this one is a truetype-only thing)
If this is passed in, we don't use the cmap table from the font,
but use the external lookup object instead.
This conveniently happens to create a place where we can put all the
Cmap configuration logic that was a bit spread out before.
No behavior change yet.
We now cache potentially named elements on the Document when elements
are inserted and removed. This allows us to do lookup of what names are
supported much faster than if we had to iterate the tree every time.
This first cut doesn't implement the rules for 'exposed' object and
embed elements.
Together with the already-merged #23122, #23128, #23135, #23136, #23162,
#23167, #23179, #23190, and #23194, this adds initial support for
rendering some CFF-based Type0 fonts :^)
There's a long list of things that still need improving after this:
* A small number of CFF programs contain the charstring command 0,
which is invalid. Currently, this makes us reject the whole font.
* Type1FontProgram::rasterize_glyph() is name-based. For CID-based
fonts, we want a version that takes CIDs (character IDs) instead.
For now, I'm printing the CID to a string and using that, yuck.
(I looked into doing this nicely. I do want to do that, but I
need to read up on how the `seac` type1 charstring command uses
character names to identify parts of an accented character.
Also, it looks like `seac`'s accented character handling moved
over to `endchar` in type2 charstring commands (i.e. in CFF data),
and it looks like we don't implement that at all. So I need to do
more reading first, and I didn't want to block this on that.)
* The name for the first string in name-based CFF fonts looks wrong;
added a FIXME for that for now.
* This supports the named Identity-H cmap only for now. Identity-H
maps UTF16-BE values to glyph IDs with the identity function, and
assumes it's horizontal text. Other named cmaps in my test files are
UniJIS-UCS2-H, UniCNS-UCS2-H, Identity-V, UniGB-UCS2-H, UniKS-UCS2-H.
(There are also 2 files using the stream-based cmaps instead of the
name-based ones.)
* In particular, we can't draw vertical text (`-V`) yet
* Passing in the encoding to CFF::create() is awkward (it's nullptr
for CID-keyed fonts), and it's also not necessary since
`Type1Font::draw_glyph()` already does the "take encoding from PDF,
and only from font if the PDF doesn't store one" dance.
* This doesn't cache glyphs but re-rasterizes them each time. Easy
to add, but maybe I want to look at rotation first. And things
don't feel glacial as-is.
* Type0Font::draw_glyph() is pretty similar to the second half of
Type1Font::draw_glyph()
Make TopDict's defaultWidthX and nominalWidthX Optional<>s so that
we can check if they're set per fdselect-selected font dict, and
if so use the value from there in CID-keyed fonts. Otherwise, keep
using the value in the top dict.
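The effective value then comes from a fallback chain along these lines
(hypothetical helper; the final fallback value here is just illustrative):

    #include <optional>

    // Prefer the value from the fdselect-selected font dict, then the top
    // dict; the last-resort default shown here is only a placeholder.
    double effective_default_width_x(std::optional<double> font_dict_value,
        std::optional<double> top_dict_value)
    {
        return font_dict_value.value_or(top_dict_value.value_or(0.0));
    }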
This begins the process of aligning our implementation with the spec
with regard to using CalendarMethodsRecord. The main intent here is to
make it much easier to make normative changes to AOs which have been
updated to CalendarMethodsRecord.
While this does resolve various FIXMEs, many more need to be added in
the callers above in order to be able to pass through a
CalendarMethodsRecord. The
use here aligns with what I can gather from the spec of what the
arguments to CreateCalendarMethodsRecord should be, but various AOs have
been updated so much with other changes it's not completely obvious.
Other AOs do not even exist in the latest version of the spec, but we
still rely on them.
As part of these updates, this commit coincidentally also fixes two
PlainDate roundingMode issues seen in test262 - a test of which is also
added in test-js. This issue boiled down to what appears to be an
observable optimization in the spec, where it can avoid calling
dateUntil in certain situations (roundingGranularityIsNoop).
However, the main goal here is to make it much easier to fix many more
issues in the future :^)
since/calendar-dateuntil-called-with-singular-largestunit.js ❌ -> ✅
until/calendar-dateuntil-called-with-singular-largestunit.js ❌ -> ✅
This is part of a large refactor made as part of the temporal spec.
Most AOs using the calendar now pass through this record. There will
need to be a long process of going through updating AOs to use this
record.
Duplicates can show up when copy-pasting IDL snippets into existing IDL
files, and the resulting error is extremely useless and misleading.
This commit makes it so the parser catches these cases, and emits a more
helpful error like "Overload set 'instantiate' contains multiple
identical declarations".
This is a simple extension of GenericLexer, and is used in more than
just LibXML, so let's move it into AK.
The move also resolves a FIXME, which is removed in this commit.
Before this change, the rebaseline script would generate the reference
based on 'about:blank' (at least when running it on my macOS system).
This commit allows 'about:blank' through the assertion and only dumps
the layout tree when the loaded URL matches the one we are interested
in.
Along with this, Port.h is included, which helps generalize common
information for the port package, like its name and version. With
SemVer-compliant versions, it is possible to show a positive change
(upgrade) or a negative change (downgrade) in the installed ports.
However, for some non-compliant versions (e.g. using a git commit hash),
non-equality (`!=`) is used to signal an upgrade. Since there is no
algorithm (without git history) to check the order of commits, it is
not possible to tell whether it is actually an upgrade or a downgrade.
Semantic Versioning (SemVer) is a versioning scheme for software that
uses MAJOR.MINOR.PATCH format. MAJOR for significant, possibly
breaking changes; MINOR for backward-compatible additions; PATCH for
bug fixes. It aids communication, compatibility prediction, and
dependency management. In apps dependent on specific library versions,
SemVer guides parsing and validates compatibility, ensuring apps use
appropriate dependencies.
<valid semver> ::= <version core>
| <version core> "-" <pre-release>
| <version core> "+" <build>
| <version core> "-" <pre-release> "+" <build>
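A toy comparison over the version core could look like this
(hypothetical helper, not the actual Port.h API; pre-release and build
metadata are ignored):

    #include <cstdio>
    #include <tuple>

    struct SemVer {
        unsigned major { 0 };
        unsigned minor { 0 };
        unsigned patch { 0 };
    };

    bool parse_version_core(char const* text, SemVer& out)
    {
        return std::sscanf(text, "%u.%u.%u", &out.major, &out.minor, &out.patch) == 3;
    }

    bool is_upgrade(SemVer installed, SemVer available)
    {
        return std::tie(available.major, available.minor, available.patch)
            > std::tie(installed.major, installed.minor, installed.patch);
    }

    int main()
    {
        SemVer installed, available;
        if (parse_version_core("1.2.3", installed) && parse_version_core("1.3.0", available))
            std::printf("upgrade: %d\n", is_upgrade(installed, available)); // upgrade: 1
    }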
It seems that the difference between pending and ASAP in the spec is
only to allow the implementation to perform implementation-defined
operations between the two states. We don't need to distinguish the two
states, so let's just combine them for now.
* FDArray, FDSelect must be present
* Encoding must not be present
* Charset maps from GID (Glyph ID) to CID (Character ID),
instead of to character name
The fdselect array (that we already read) maps each glyph ID
to an fdarray index. The font dict at that index then stores
information for that glyph.
In practice, this is used to assign different defaultWidthX /
nominalWidthX values to blocks of glyphs in CID-keyed fonts.
We don't do anything yet with the data, and we also don't send
data of CID-keyed CFFs into this parser either, so no behavior
change.
This happens for CFFs that contain multiple fonts. This doesn't
happen in practice, but the same code will be used for fdarray
parsing, which will contain several dicts.
No behavior change.
Elements are now collected according to paint order, as the spec says,
replacing the depth-first traversal of the paint tree with hit-testing
on each box.
This change resolves a FIXME in an existing test and adds a new
previously non-working test.
This change modifies hit_test() to no longer return the first paintable
encountered at a specified position. Instead, this function accepts a
callback that is invoked for each paintable located at a position, in
hit-testing order.
This modification will allow us to reuse this call for
`Document.elementsFromPoint()` in upcoming changes.
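Schematically, the call shape changes roughly like this (illustrative
types, not the actual LibWeb signatures):

    #include <functional>
    #include <vector>

    struct Paintable { int id { 0 }; };
    struct Position { double x { 0 }; double y { 0 }; };
    enum class TraversalDecision { Continue, Break };

    // Instead of returning the first paintable at `position`, invoke the
    // callback for every paintable there, in hit-testing (paint) order.
    // The callback can stop the traversal early, which recovers the old
    // "first hit" behavior; Document.elementsFromPoint() can keep going.
    void hit_test(Position, std::vector<Paintable>& paintables_in_hit_order,
        std::function<TraversalDecision(Paintable&)> const& callback)
    {
        for (auto& paintable : paintables_in_hit_order) {
            if (callback(paintable) == TraversalDecision::Break)
                return;
        }
    }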
This is very similar to SimpleFont::draw_string() for now, but
it'll become a bit different when we add support for vertical
text.
CIDFontType now only needs to draw single glyphs. Neither of the
subclasses can do that yet, so no behavior change yet.
When a tab or nested traversable navigable is closed, there might be
messages still in the pipe from the UI process that we need to
gracefully drop, rather than crash trying to access an invalid pointer.
There's a chance that we try to choose a navigable before a previously
destroyed navigable is fully destroyed and GC'd. Investigating why this
can happen is a separate endeavor, let's just not crash for now.
...instead of reading them in Filter::decode() for all filters and
then passing them around to only the LZW and flate filters.
(EarlyChange is LZWDecode-only, so that's read there instead.)
No behavior change.