Using a boxed slice means that we hold exactly the memory required
for the file data, rather than the next-power-of-two, which can
be wasteful when a large number of images are being sent to
the terminal.
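As a rough illustration of the difference (the helper name here is hypothetical, not the actual wezterm code):
```
fn store_image_data(data: Vec<u8>) -> Box<[u8]> {
    // A Vec grown incrementally typically holds next-power-of-two capacity;
    // into_boxed_slice() shrinks the allocation to exactly data.len() bytes,
    // so each image costs only the size of its file data.
    data.into_boxed_slice()
}
```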
This is an API-breaking change for termwiz, so bump its version.
refs: #534
It's been replaced with an opaque termwiz error type instead.
This is a more conservative approach than the one in (refs: #407)
and has less of an impact on the surrounding code, which appeals to
me from a maintenance perspective.
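For illustration only, this is the general shape of an "opaque" error type (not the exact termwiz type): callers see a single concrete error and can't match on its internal sources, which keeps those sources out of the public API.
```
use std::fmt;

#[derive(Debug)]
pub struct Error(Box<dyn std::error::Error + Send + Sync>);

impl Error {
    // Wrap any concrete error without exposing its type to callers.
    fn new<E>(err: E) -> Self
    where
        E: std::error::Error + Send + Sync + 'static,
    {
        Error(Box::new(err))
    }
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.0)
    }
}

impl std::error::Error for Error {}

pub type Result<T> = std::result::Result<T, Error>;

fn main() {
    let e = Error::new(std::fmt::Error);
    println!("{}", e);
}
```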
refs: #406
refs: #407
This allows us to support the kitty-style underline sequence,
as well as the colon-separated form of the true color escape sequences.
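For reference, these are the kinds of sequences this enables (illustrative bytes only; rendering depends on the terminal):
```
fn main() {
    // kitty-style underline: CSI 4:3 m selects a curly underline.
    print!("\x1b[4:3mcurly underline\x1b[24m ");
    // colon-separated truecolor foreground: CSI 38:2::R:G:B m.
    println!("\x1b[38:2::255:128:0morange text\x1b[39m");
}
```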
refs: https://github.com/wez/wezterm/issues/415
This is one of those massive time sinks that I almost regret...
As part of recent changes to dust off the allsorts shaper, I noticed
that the harfbuzz shaper wasn't shaping as well as the allsorts one.
This commit:
* Adds emoji-test.txt, a text file you can `cat` to see how well
the emoji are shaped and rendered.
* Fixes (or at least improves) the column width calculation for
combining sequences such as "deaf man", which was previously calculated
at 3 cells wide when it should have been 2. The overestimate
resulted in a weird "prismatic" effect during rendering, where an
extra right-hand-side portion of the glyph was drawn across the
third cell.
* Improved/simplified the clustering logic used to compute fallbacks
(see the sketch after this list). Previously we could end up with a
wonky, disjointed sequence of undefined glyphs which wouldn't be
successfully resolved from a fallback font. We now make a better
effort to consolidate runs of undefined glyphs for fallback.
* For sequences such as "woman with veil: dark skin tone" that occupy a
single cell, the shaper may return 3 clusters with 3 glyphs in the
case that the font doesn't fully support this grapheme. At render
time we'd just take the last glyph from that sequence and render it,
resulting in eg: a female symbol in this particular case. It is
generally a bit more useful to show the first glyph in the sequence
(eg: person with veil) rather than the gender or skin tone, so the
renderer now checks for this kind of overlapping sequence and renders
only the first glyph from the sequence.
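A rough illustration of the clustering consolidation mentioned above (types and names are hypothetical, not the actual shaper code): adjacent glyphs that resolved to .notdef (glyph id 0) are coalesced into contiguous runs, so a whole run can be re-shaped with a fallback font rather than falling back glyph by glyph.
```
#[derive(Debug)]
struct GlyphInfo {
    glyph_id: u32,
    cluster: u32,
}

#[derive(Debug)]
enum Run {
    Resolved(Vec<GlyphInfo>),
    NeedsFallback { first_cluster: u32, last_cluster: u32 },
}

fn coalesce_fallback_runs(glyphs: Vec<GlyphInfo>) -> Vec<Run> {
    let mut runs: Vec<Run> = vec![];
    for g in glyphs {
        if g.glyph_id == 0 {
            // Extend the current fallback run, or start a new one.
            match runs.last_mut() {
                Some(Run::NeedsFallback { last_cluster, .. }) => {
                    *last_cluster = g.cluster;
                }
                _ => runs.push(Run::NeedsFallback {
                    first_cluster: g.cluster,
                    last_cluster: g.cluster,
                }),
            }
        } else {
            // Extend the current resolved run, or start a new one.
            match runs.last_mut() {
                Some(Run::Resolved(v)) => v.push(g),
                _ => runs.push(Run::Resolved(vec![g])),
            }
        }
    }
    runs
}

fn main() {
    let shaped = vec![
        GlyphInfo { glyph_id: 42, cluster: 0 },
        GlyphInfo { glyph_id: 0, cluster: 1 },
        GlyphInfo { glyph_id: 0, cluster: 2 },
        GlyphInfo { glyph_id: 7, cluster: 3 },
    ];
    // The two undefined glyphs collapse into a single fallback run covering
    // clusters 1..=2.
    println!("{:?}", coalesce_fallback_runs(shaped));
}
```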
Replaces SmallVec with an internal TeenyString that only
occupies a single machine word and avoids heap allocation
in the common case on most architectures. This takes the
textual portion of Cell from 32 bytes to 8 bytes.
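For illustration, this is the general shape of such a small-string type, with hypothetical names. The real TeenyString packs everything into a single usize via pointer tagging; this sketch uses a plain enum and trades the single-word guarantee away for readability.
```
enum TinyText {
    // Short sequences (the common case for terminal cells) live inline.
    Inline { len: u8, bytes: [u8; 7] },
    // Rare long sequences (e.g. some emoji combinations) go to the heap.
    Heap(Box<str>),
}

impl TinyText {
    fn new(s: &str) -> Self {
        if s.len() <= 7 {
            let mut bytes = [0u8; 7];
            bytes[..s.len()].copy_from_slice(s.as_bytes());
            TinyText::Inline { len: s.len() as u8, bytes }
        } else {
            TinyText::Heap(s.into())
        }
    }

    fn as_str(&self) -> &str {
        match self {
            TinyText::Inline { len, bytes } => {
                // Only the first `len` bytes are valid UTF-8 payload.
                std::str::from_utf8(&bytes[..*len as usize]).unwrap()
            }
            TinyText::Heap(s) => &**s,
        }
    }
}

fn main() {
    assert_eq!(TinyText::new("a").as_str(), "a");
    assert_eq!(TinyText::new("combining sequence").as_str(), "combining sequence");
}
```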
This corrects an issue where the mode byte of the DCS sequence was
discarded from the DcsHook, making it impossible to know what sequence
is being activated.
So far this hasn't come up as these sequences are relatively rare,
but in looking at sixel parsing I noticed the error.
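For example, a sixel image is introduced with a DCS sequence whose final byte identifies what follows; that byte is exactly what was being dropped from the DcsHook (the bytes below are illustrative):
```
fn main() {
    // ESC P <params> q ... ESC \  -- the trailing "q" marks this DCS as sixel data.
    let start: &[u8] = b"\x1bP0;0;8q";
    let terminator: &[u8] = b"\x1b\\";
    println!("{:02X?} ... {:02X?}", start, terminator);
}
```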
This commit changes the behavior on Windows:
* If $TERM is set and the `terminfo` crate is able to
successfully initialize and locate a terminfo database (this also
requires that $TERMINFO be set in the environment), then we'll
use the `TerminfoRenderer` instead of the `WindowsConsoleRenderer`
* If $TERM is set to `xterm-256color` and no terminfo database was
found, use our modern compiled-in copy (look in the `termwiz/data/`
directory for the source and compiled version of this) and use
the `TerminfoRenderer`.
* Otherwise use the `WindowsConsoleRenderer`.
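Roughly, the selection described above boils down to logic along these lines (names are illustrative, not the actual termwiz internals):
```
#[derive(Debug)]
enum RendererChoice {
    Terminfo,
    WindowsConsole,
}

fn choose_renderer(term: Option<&str>, terminfo_db_found: bool) -> RendererChoice {
    match term {
        // $TERM is set and a terminfo database was located (requires $TERMINFO).
        Some(_) if terminfo_db_found => RendererChoice::Terminfo,
        // No database on disk, but we carry a compiled-in copy of this entry.
        Some("xterm-256color") => RendererChoice::Terminfo,
        // Anything else: stick with the console APIs.
        _ => RendererChoice::WindowsConsole,
    }
}

fn main() {
    // e.g. ssh'ing into Windows via wezterm sets TERM=xterm-256color with no
    // terminfo database on disk, which selects the compiled-in entry.
    println!("{:?}", choose_renderer(Some("xterm-256color"), false));
}
```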
In practice, this allows termwiz apps to opt in to features such as
true color support on Windows 10 build 1903 and later by setting
`TERM=xterm-256color`. This happens to be the default behavior when
`ssh`ing into a Windows host via `wezterm`.
You can see the truecolor mode get applied by running this example:
```
cargo run --example widgets_basic --features widgets
```
With TERM set as above, the background region painted by the app
will be a blueish/purplish color; with it unset or set to something
invalid, it will fall back to black.
I'd like to eventually make termwiz assume the equivalent configuration
to `TERM=xterm-256color` by default on Windows 10 build 1903 and later,
but it's worth getting some feedback on how this works for clients such
as `streampager`.
cc: @quark-zju and @markbt
derive_builder has some extra dependencies that take a while to compile.
The builder feature can be expressed via a 30-line macro. So let's do
that to make termwiz compile faster.
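The macro is along these lines (a sketch with hypothetical field names, not the exact macro that landed): it just generates chainable setters that consume and return Self.
```
macro_rules! builder_setters {
    ($struct_name:ident { $($field:ident: $ty:ty),* $(,)? }) => {
        impl $struct_name {
            $(
                pub fn $field(mut self, value: $ty) -> Self {
                    self.$field = value;
                    self
                }
            )*
        }
    };
}

#[derive(Default)]
struct Capabilities {
    color_level: u8,
    bracketed_paste: bool,
}

builder_setters!(Capabilities {
    color_level: u8,
    bracketed_paste: bool,
});

fn main() {
    let caps = Capabilities::default()
        .color_level(3)
        .bracketed_paste(true);
    let _ = caps;
}
```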
The palette crate has a codegen step that translates svg_colors.txt to named.rs.
That makes it hard to build using buck.
Remove the palette dependency so termwiz is easier to build using buck.
I made sure the following code:
fn main() {
    use termwiz::color::RgbColor;
    let r = RgbColor::from_rgb_str("#02abcd").unwrap();
    let r1 = r.to_tuple_rgba();
    let r2 = r.to_linear_tuple_rgba();
    println!("r1 = {:?}", r1);
    println!("r2 = {:?}", r2);
}
prints
r1 = (0.007843138, 0.67058825, 0.8039216, 1.0)
r2 = (0.000607054, 0.4072403, 0.6104956, 1.0)
before and after the change.
Embed rgb.txt and parse it on the fly to produce the list of colors.
This list is a superset of palette's SVG color list.
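A minimal sketch of what parsing those lines involves, assuming the file is embedded with include_str! (details are illustrative rather than the exact termwiz code):
```
// rgb.txt lines look like "255 250 250\t\tsnow"; comment lines start with '!'.
fn parse_rgb_txt(data: &str) -> Vec<(String, (u8, u8, u8))> {
    data.lines()
        .filter_map(|line| {
            let line = line.trim();
            if line.is_empty() || line.starts_with('!') {
                return None;
            }
            let mut fields = line.split_whitespace();
            let r: u8 = fields.next()?.parse().ok()?;
            let g: u8 = fields.next()?.parse().ok()?;
            let b: u8 = fields.next()?.parse().ok()?;
            // Color names may contain spaces, e.g. "lawn green".
            let name = fields.collect::<Vec<_>>().join(" ");
            Some((name, (r, g, b)))
        })
        .collect()
}

fn main() {
    let sample = "255 250 250\t\tsnow\n124 252   0\t\tlawn green\n";
    for (name, rgb) in parse_rgb_txt(sample) {
        println!("{}: {:?}", name, rgb);
    }
}
```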
refs: https://github.com/wez/wezterm/pull/144
I noticed while scrolling `emoji-test.txt` that some of the combined
emoji sequences rendered very poorly. This was due to the unicode
width being reported as up to 4 in some cases.
Digging into it, I discovered that the unicode width crate uses a
standard calculation that doesn't take emoji combination sequences
into account (see https://github.com/unicode-rs/unicode-width/issues/4).
This commit takes a dep on the xi-unicode crate as a lightweight way
to gain access to emoji tables and test whether a given grapheme is
part of a combining sequence of emoji.
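The gist of the width fix, as a rough sketch rather than the exact wezterm logic: if a grapheme contains an emoji joining mechanism such as ZWJ or VS16, clamp the reported width to 2 cells instead of summing the per-codepoint widths.
```
use unicode_width::UnicodeWidthStr;

fn grapheme_column_width(g: &str) -> usize {
    // ZWJ (U+200D) and the emoji variation selector (U+FE0F) indicate an
    // emoji combining sequence, which renders as a single double-wide glyph.
    let looks_like_emoji_sequence = g.chars().any(|c| c == '\u{200D}' || c == '\u{FE0F}');
    if looks_like_emoji_sequence {
        2
    } else {
        UnicodeWidthStr::width(g)
    }
}

fn main() {
    // "deaf man" is U+1F9CF + ZWJ + U+2642 + VS16; naively summing the
    // per-codepoint widths reports 3 cells.
    let deaf_man = "\u{1F9CF}\u{200D}\u{2642}\u{FE0F}";
    assert_eq!(grapheme_column_width(deaf_man), 2);
}
```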
I've noticed this off and on for a while, and thought it was something
fishy with my shell dotfiles.
Tracing through I found that the final byte in the "Face with head
bandage" emoji 🤕 U+1F915 was being interpreted as the MW control
code and causing the vt parser to jump out of the OSC state.
The solution for this is to hook up proper UTF-8 processing in the
same way that it is applied in the ground state.
Since we don't have enough bits to introduce new state values (we're
pretty tightly packed in the 16 bits available), I've introduced a
memory of the state to which the UTF-8 parser needs to return once
a complete sequence is detected.
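For the curious, the offending byte sequence:
```
fn main() {
    // UTF-8 encoding of U+1F915 ("face with head-bandage").
    let bytes = "\u{1F915}".as_bytes();
    assert_eq!(bytes, &[0xF0, 0x9F, 0xA4, 0x95]);
    // Taken as a raw byte rather than a UTF-8 continuation byte, 0x95 is the
    // C1 MW control, which is what was kicking the parser out of the OSC state.
    println!("{:02X?}", bytes);
}
```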
This enables using large OSC buffers in a form that we can publish
to crates.io without blocking on an external crate. Large OSC
buffers are important both for some tunnelling use cases and for
eg: iTerm2 image protocol handling.
It's taking a while for https://github.com/jwilm/vte/pull/20 to get
merged, so point to my branch directly while I build out some
tunneled mux protocol escape sequences.
I'll need to fork vte on crates.io if vte doesn't merge the PR
before the next termwiz crate bump.
I wanted to see how much work remains to enable iterm2 image
display; one of the blockers was a limit in the size of the
buffer in the vte crate, which has been removed in my fork
of vte.
As part of testing our ability to absorb that data, I found
a couple of issues with applying the image cells to the display,
so this commit also takes care of that.
We still don't have code to connect the cell image data to the
opengl render layer.