As new demuxers are added, this will get quite full of files, so it'll
be good to have a separate folder for these.
To avoid overly long chains of namespaces, the Containers subdirectory
does not introduce a namespace of its own, but the Matroska folder does,
so that the many classes holding parsed information don't all land
directly in the Video namespace.
Timers keep their previously set interval even for single-shot mode.
We want all timers to fire immediately if they don't have a delay set
in the start() call.
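Roughly the intended semantics, as a minimal sketch with a hypothetical
Timer class (not the actual LibCore API):

    #include <optional>

    class Timer {
    public:
        // start() without an explicit interval fires immediately
        // (delay 0) instead of silently reusing a previously set
        // interval.
        void start(std::optional<int> interval_ms = std::nullopt)
        {
            m_interval_ms = interval_ms.value_or(0);
            // ... arm the underlying event-loop timer with
            // m_interval_ms ...
        }

    private:
        int m_interval_ms { 0 };
    };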
These actions were being constructed, and they work, but they were not
shown in the toolbar. Adding them lets users actually use them, and
should also help surface any bugs they might have.
These actions were not updated as one scrolled through the document, so
one could accidentally, for example, move to the next page while
already on the last one, which caused a crash.
This commit fixes that behavior by toggling the actions' enabled state
depending on the page being displayed.
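Roughly the idea, sketched with stand-in types (not the actual
GUI::Action code):

    struct Action {
        void set_enabled(bool enabled) { m_enabled = enabled; }
        bool m_enabled { true };
    };

    struct PageNavigation {
        Action go_to_prev_page;
        Action go_to_next_page;

        // Called whenever the displayed page changes.
        void page_changed(int new_page, int page_count)
        {
            go_to_prev_page.set_enabled(new_page > 0);
            go_to_next_page.set_enabled(new_page < page_count - 1);
        }
    };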
When all contents were removed from the NumericInput box in PDFViewer,
the callback set the (empty) text back into the box, triggering another
callback in a recursive, never-ending fashion. Not setting the text
back into the box avoids the problem.
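The shape of the fix, sketched with hypothetical names:

    #include <algorithm>
    #include <string>

    struct NumericInput {
        int m_min_value { 0 };
        int m_max_value { 100 };
        int m_value { 0 };

        void on_text_changed(std::string const& text)
        {
            // An empty box is a transient state while the user edits;
            // writing the (empty) text back here would fire this
            // callback again, recursing without end. Just bail out.
            if (text.empty())
                return;
            m_value = std::clamp(std::stoi(text), m_min_value, m_max_value);
        }
    };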
Now that the Renderer accepts preferences, PDFViewer can offer ways to
change them. The first step in this direction is to add a checkbox that
allows toggling whether clipping paths are visible or not. A Config
item has also been added to remember this setting.
A new struct allows users to specify specific rendering preferences that
the Renderer class might use to paint some Document elements onto the
target bitmap. The first toggle allows rendering (or not) the clipping
paths on a page, which is useful for debugging.
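A minimal sketch of what such a struct might look like (the actual
LibPDF definition may differ):

    struct RenderingPreferences {
        // Paint each page's clipping paths onto the target bitmap;
        // useful for debugging clipping behavior.
        bool show_clipping_paths { false };
    };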
The existing path clipping support was broken, as it performed the
clipping operation as soon as the path clipping commands (W/W*) were
received. The correct behavior is to keep a clipping path in the
graphics state, *intersect* it with the current path upon receiving
W/W*, and apply the clipping when performing painting operations. On
top of that, the intersection performed at W/W* time does not affect
the painting operation on the path currently being built; it only takes
effect after the current path is cleared. Therefore both a current and
a next clipping path need to be kept track of.
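A condensed sketch of the corrected flow; the types and member names
here are illustrative, not the actual LibPDF code:

    #include <optional>
    #include <utility>

    struct Path { /* ... */ };
    Path intersect(Path const&, Path const&); // bounding-box based for now

    struct Renderer {
        Path m_clipping_path;                      // clip in effect for painting
        std::optional<Path> m_next_clipping_path;  // staged by W/W*
        Path m_current_path;                       // path being built

        // W / W*: stage the intersection; painting of the current
        // path still happens under the old clip.
        void handle_clip_operator()
        {
            m_next_clipping_path = intersect(m_clipping_path, m_current_path);
        }

        // Path-ending operators: paint (if requested) under the old
        // clip, then clear the path and promote the staged clip.
        void end_path_and_maybe_paint(bool paint)
        {
            if (paint)
                paint_path(m_current_path, m_clipping_path);
            m_current_path = {};
            if (m_next_clipping_path.has_value()) {
                m_clipping_path = std::move(*m_next_clipping_path);
                m_next_clipping_path.reset();
            }
        }

        void paint_path(Path const&, Path const& clip);
    };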
Path clipping is not yet supported on the Painter class, nor is path
intersection. We thus continue using the same simplified bounding box
approach to calculate clipping paths.
Since we now deal with more rectangle-as-path code, I've added helper
functions to build a rectangle path and reuse it as needed.
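For example, a helper along these lines (assuming a Gfx::Path-style
move_to/line_to/close interface):

    static Gfx::Path rect_path(float x, float y, float width, float height)
    {
        Gfx::Path path;
        path.move_to({ x, y });
        path.line_to({ x + width, y });
        path.line_to({ x + width, y + height });
        path.line_to({ x, y + height });
        path.close();
        return path;
    }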
Previously, shuf exclusively read input from stdin. This PR adds an
option to read from a file using Core::Stream::File. Since a file might
contain arbitrary bytes, including null bytes, this PR represents lines
as Spans of Bytes instead of Strings.
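The line splitting, sketched with std:: types instead of AK and
Core::Stream: read the whole input into one buffer and hand out a span
per line, so lines containing NUL bytes survive intact.

    #include <cstddef>
    #include <span>
    #include <vector>

    static std::vector<std::span<std::byte const>> split_lines(std::span<std::byte const> data)
    {
        std::vector<std::span<std::byte const>> lines;
        std::size_t start = 0;
        for (std::size_t i = 0; i < data.size(); ++i) {
            if (data[i] == std::byte { '\n' }) {
                lines.push_back(data.subspan(start, i - start));
                start = i + 1;
            }
        }
        if (start < data.size())
            lines.push_back(data.subspan(start)); // final line without '\n'
        return lines;
    }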
Lasso selection works by allowing the user to draw an arbitrary shape
much like the pen tool and ensuring the shape is closed by connecting
the start/end points when the user is done drawing. Everything inside
the shape becomes the selection.
Selection is determined via an outer flood fill. We begin a flood fill
from a point that is guaranteed to be outside of the drawn shape, and
anything the fill doesn't touch is determined to be the selection
region.
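A sketch of the outer flood fill on a plain bool grid (the real
PixelPaint code works on bitmaps and selection masks; the grid here is
assumed to be padded by one pixel so that (0, 0) is guaranteed to lie
outside the shape):

    #include <utility>
    #include <vector>

    std::vector<std::vector<bool>> selection_from_outline(std::vector<std::vector<bool>> const& shape)
    {
        int h = static_cast<int>(shape.size());
        int w = static_cast<int>(shape[0].size());
        std::vector<std::vector<bool>> outside(h, std::vector<bool>(w, false));

        // Iterative flood fill starting from (0, 0), which is outside
        // the drawn shape by construction.
        std::vector<std::pair<int, int>> stack { { 0, 0 } };
        while (!stack.empty()) {
            auto [y, x] = stack.back();
            stack.pop_back();
            if (y < 0 || y >= h || x < 0 || x >= w || outside[y][x] || shape[y][x])
                continue;
            outside[y][x] = true;
            stack.push_back({ y + 1, x });
            stack.push_back({ y - 1, x });
            stack.push_back({ y, x + 1 });
            stack.push_back({ y, x - 1 });
        }

        // Everything the fill never reached (the shape's interior,
        // including the outline itself) becomes the selection.
        std::vector<std::vector<bool>> selection(h, std::vector<bool>(w, false));
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                selection[y][x] = !outside[y][x];
        return selection;
    }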
This makes ImageEditor responsible for clearing the active selection
when the Escape key is pressed. If the active tool didn't act on the
Escape key (some selection tools, for example, use it to cancel a
selection that is still being made), ImageEditor will check for an
active selection and clear it.
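The dispatch order, sketched with stand-in types:

    struct Tool {
        // Returns true if the tool consumed the key, e.g. a selection
        // tool cancelling a selection that is still being made.
        virtual bool on_keydown_escape() { return false; }
        virtual ~Tool() = default;
    };

    struct ImageEditor {
        Tool* m_active_tool { nullptr };
        bool m_has_selection { false };

        void handle_escape()
        {
            if (m_active_tool && m_active_tool->on_keydown_escape())
                return;                  // tool acted on Escape itself
            if (m_has_selection)
                m_has_selection = false; // otherwise clear the selection
        }
    };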
We can now handle dynamic updates to the disabled attribute of a <link>
element of the stylesheet type.
We do this by hooking the attribute-added and attribute-removed
handlers and loading or removing the stylesheet as it becomes enabled
or disabled.
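A hypothetical sketch of the two hooks (the real LibWeb code differs):

    #include <string>

    struct StylesheetLinkElement {
        bool m_loaded { false };

        void did_add_attribute(std::string const& name)
        {
            if (name == "disabled" && m_loaded)
                unload_stylesheet(); // attribute added -> sheet disabled
        }

        void did_remove_attribute(std::string const& name)
        {
            if (name == "disabled" && !m_loaded)
                load_stylesheet();   // attribute removed -> sheet enabled
        }

        void load_stylesheet() { m_loaded = true; /* fetch and attach */ }
        void unload_stylesheet() { m_loaded = false; /* detach */ }
    };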
An XRef table usually starts with an object number of zero. While it
could technically start at any other number, this is a tell-tale sign
of a broken table.
For the "broken" documents I encountered, this always meant that some
objects must have been removed from the start of the table, without
updating the following indices. When this is the case, the document is
not able to be read normally.
However, most other PDF parsers seem to know of this quirk and fix the
XRef table automatically.
Likewise, we now check for this exact case, and if it matches up with
what we expect, we update the XRef table such that all object numbers
match the actual objects found in the file again.
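The repair, sketched with illustrative data structures (not LibPDF's
actual ones):

    #include <vector>

    struct XRefEntry {
        long object_number { 0 };
        long byte_offset { 0 };
    };

    // If the table's numbering starts at some N > 0 while the objects
    // in the file are numbered from 0, shift all entries down by N so
    // the table matches the file again.
    void repair_xref(std::vector<XRefEntry>& entries)
    {
        if (entries.empty() || entries.front().object_number == 0)
            return; // table already starts at zero, nothing to do
        long shift = entries.front().object_number;
        for (auto& entry : entries)
            entry.object_number -= shift;
    }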
It was previously the job of the renderer to create fonts, load
replacements for the standard 14 fonts and to pass the font size back
to the PDFFont when asking for glyph widths.
Now, the renderer tells the font its size at creation, as it doesn't
change throughout the life of the font. The PDFFont itself is now
responsible for deciding whether or not it needs to use a replacement
font, which is still Liberation Serif for now.
This means that we can now render embedded TrueType fonts as well :^)
It also makes the renderer's job much simpler and leads to a much
cleaner API design.
We previously passed this function a Unicode code point, which is not
actually what we want here.
Instead, we want the "raw" code point, with the font itself deciding
whether or not it needs to be re-mapped.
This same mistake in terminology applied to PS1FontProgram.
This version can already:
- load all of the defined file format except for the image type and the
frame-specific stuff
- navigate frames and slides (though frames are mostly stubbed out)
- display text with various common settings
- display text with various fitting and scaling methods
- scale and position objects correctly no matter the window size
This category includes anything useful for getting work done with your
computer. It is mostly a split-off from the Utilities category which was
becoming very large.
This allows rectangle specifications in the form [x, y, width, height],
which mirrors margin properties and is much more convenient than the
JSON object specifications that were allowed before. Those are still
allowed, of course.
Because the ".." entry in a directory is a separate inode, if a
directory is renamed to a new location, then we should update this entry
the point to the new parent directory as well.
Co-authored-by: Liav A <liavalb@gmail.com>
This has two benefits:
- I observed a ~34% decrease in decoding time running TestVP9Decode.
- Removing all of these silly Vector fields helps simplify the code
relationships between all the functions in Decoder.cpp. It'll also be
much easier to make these static with template specializations, if
that turns out to be a worthwhile performance improvement.
Turns out we have to be a little more lenient when dealing with the
slightly hostile inputs in test262-parser-tests.
This fix papers over some problematic situations by returning a fallback
range if we can't figure out exactly where an error occurred.
I've added a FIXME about returning nicer values.
Generating Error objects got a lot slower after the introduction of
SourceCode in b0b022507b.
This was noticeable with `test-js` which generates a lot of errors,
so walking the source code over and over to compute (line, column)
was eating a ton of time.
This patch makes repeated lookups a lot faster by building a cache
of line break offsets in the source code. The cache is built on first
offset lookup, so we only pay for this in code that actually throws.
On my machine, this takes `test-js` runtime from 6.7 sec to 4.3 sec.
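A sketch of the caching scheme, using std:: types instead of AK's:

    #include <algorithm>
    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    class SourceCode {
    public:
        explicit SourceCode(std::string code)
            : m_code(std::move(code))
        {
        }

        // Translate a byte offset into a 1-based (line, column) pair.
        std::pair<std::size_t, std::size_t> offset_to_position(std::size_t offset) const
        {
            // Build the cache lazily, so only code that actually
            // throws pays for the scan.
            if (!m_cache_built) {
                for (std::size_t i = 0; i < m_code.size(); ++i) {
                    if (m_code[i] == '\n')
                        m_line_break_offsets.push_back(i);
                }
                m_cache_built = true;
            }
            // The number of line breaks strictly before `offset` is
            // the 0-based line; binary search instead of rescanning
            // the whole source on every lookup.
            auto it = std::lower_bound(m_line_break_offsets.begin(),
                m_line_break_offsets.end(), offset);
            auto line = static_cast<std::size_t>(it - m_line_break_offsets.begin());
            std::size_t line_start = line == 0 ? 0 : m_line_break_offsets[line - 1] + 1;
            return { line + 1, offset - line_start + 1 };
        }

    private:
        std::string m_code;
        mutable std::vector<std::size_t> m_line_break_offsets;
        mutable bool m_cache_built { false };
    };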
Unlike iterator_at_byte_offset(), this function assumes the provided
byte offset is a valid offset into the UTF-8 character stream.
This avoids walking the stream from the start.
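The difference, in miniature (hand-rolled, not the AK API): the
validating variant must scan from the beginning to confirm the offset
lands on a code point boundary, while the new one trusts the caller.

    #include <cstddef>

    struct Utf8View {
        char const* data;
        std::size_t length;

        // O(n): walks from the start, code point by code point, to
        // verify `offset` is a boundary. (Validation elided here.)
        char const* iterator_at_byte_offset(std::size_t offset) const;

        // O(1): assumes `offset` is already a valid boundary.
        char const* iterator_at_byte_offset_without_validation(std::size_t offset) const
        {
            return data + offset;
        }
    };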
We now attempt to look up the path of e2fsck before checking whether it
can be found in any of the predefined locations. This fixes e2fsck not
being found on some "special" distros like NixOS.
Related #13754