https://wiki.lazarus.freepascal.org/Cocoa_DPI states that the dpi
on macOS is 72. That matches up with the experimental results reported
in #332 (in which a dpi of 74.0 appeared to be about the right size).
This commit introduces a `DEFAULT_DPI` constant that is set to 72 on
macOS and 96 on other operating systems.
The result of this is that a 10 point Menlo font now appears to be
the same size in Terminal.app and WezTerm.app.
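If the resulting size still isn't right on a particular system, the
detected value can be overridden from the config. A minimal sketch,
assuming the `dpi` override option (which isn't part of this commit's
text):
```lua
return {
  -- hypothetical override: force the dpi used for font sizing instead
  -- of relying on the DEFAULT_DPI platform default described above
  dpi = 96.0,
}
```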
refs: https://github.com/wez/wezterm/issues/332
This commit is a bit noisy because it also required switching the key map
code from the termwiz input types to the window input types, which
I thought I'd done some time ago, but clearly hadn't.
This commit allows defining key assignments in terms of the underlying
operating system raw codes, if provided by the relevant layer in the
window crate (currently, only X11/Wayland).
The raw codes are inherently OS/Machine/Hardware dependent; they are the
rawest value that we have available, and there is no meaningful
interpretation that we can perform in code to determine what that key
is.
One useful property of the raw code is that, because it hasn't gone
through any OS level keymapping processing, its value reflects its
physical position on the keyboard, allowing you to map keys by position
rather than by value. That's useful if you use software to implement
eg: DVORAK or COLEMAK but want your muscle memory to kick in for some of
your key bindings.
New config option:
`debug_key_events = true` will cause wezterm to log an "error" to stderr
each time you press a key, showing the details of the key event:
```
2020-12-06T21:23:10.313Z ERROR wezterm_gui::gui::termwindow > key_event KeyEvent { key: Char('@'), modifiers: SHIFT | CTRL, raw_key: None, raw_modifiers: SHIFT | CTRL, raw_code: Some(11), repeat_count: 1, key_is_down: true }
```
This is useful if you want to figure out the `raw_code` for a key in your
setup.
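For reference, a minimal sketch of enabling it in the config:
```lua
return {
  -- log a line for each key event so you can read off its raw_code
  debug_key_events = true,
}
```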
In your config, you can use this information to set up new key bindings.
The motivating example for me is that, because `raw_key` (the unmodified
equivalent of `key`) is `None`, the built-in `CTRL-SHIFT-1` key
assignment doesn't function for me on Linux; I can now "fix" this in my
local configuration, taking care to make it Linux-specific:
```lua
local wezterm = require 'wezterm';
local keys = {}
if wezterm.target_triple == "x86_64-unknown-linux-gnu" then
  local tab_no = 0
  -- raw codes 10 through 19 correspond to the number key 1-9 positions
  -- on my keyboard on my linux system. They may be different on
  -- your system!
  for i = 10, 19 do
    table.insert(keys, {
      key="raw:"..tostring(i),
      mods="CTRL|SHIFT",
      action=wezterm.action{ActivateTab=tab_no},
    })
    tab_no = tab_no + 1
  end
end
return {
  keys = keys,
}
```
Notice that a key assignment can encode a raw key code using a value
like `key="raw:11"`, indicating that a `raw_code` of `11` should match
the assignment. The `raw_modifiers` portion of the `KeyEvent` is
matched together with the `raw_code` when resolving the key assignment.
cc: @bew
164adb78e3 added blowing away some opengl related state during resize;
however, some systems (BigSur with M1 silicon, perhaps also Intel?) and
Windows 10 can generate a resize event before we've spun up opengl, so
we need to make this conditional.
refs: #359
closes: #358
Previously, we'd enumerate the font dirs on every font resolve for
every bit of styled text.
This moves the new FontDatabase instances to be single-instanced
in the FontConfiguration. The font-dirs will be scanned once
on a config reload, but the built-in in-memory fonts will only
ever be enumerated once per FontConfiguration instance.
The recent addition of dynamic fallback resolution highlighted this
issue.
The test scenario is:
1. Output some glyphs that need dynamic fallback
2. ctrl-+ to change the font scaling
3. rasterization fails because of stale cached state; the font_idx
values were invalidated by the scale change, which reset the
dynamically discovered fallback fonts.
The resolution is to blow the glyph and shape caches when scaling
is changed.
Adds an option to control how wide glyphs (more specifically: square
aspect glyphs) are scaled to conform to their specified width.
The three options are `Never`, `Always`, and `WhenFollowedBySpace`.
When a glyph is loaded, if it is approximately square, this option is
consulted. If overflow is permitted then the glyph will be scaled
to fit only the height of the cell, rather than ensuring that it fits
in both the height and width of the cell.
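A config sketch; the option name used here is an assumption on my part
rather than something spelled out above, so check the docs for the
exact spelling:
```lua
return {
  -- assumed option name: let square-aspect glyphs overflow the cell
  -- width when the following cell contains a space
  allow_square_glyphs_to_overflow_width = "WhenFollowedBySpace",
}
```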
refs: #342
Use the scaling factor between the font metrics for the base font
and those of the fallback font selected for a given glyph.
The scenario is this: the base font is typically the first one selected
from the font configuration. There may be multiple fallback fonts that
are different sizes; for instance, the Font Awesome font has glyphs that
are square in aspect and are thus about twice the width of a typical
textual monospace font. Similarly, Noto Color Emoji is another
square-aspect font, but one with a single set of bitmap strikes at a
fixed 128px square size.
The shaper returns advance metrics in the scale of the containing font,
and the rasterizer will target the supplied size and dpi.
We need to scale these to match the base metrics.
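As an illustrative sketch only (the numbers below are hypothetical and
not taken from the code): if the base font reports a 16px cell height
and the fallback reports 128px, the glyph and its advance get scaled by
the ratio of the two:
```lua
-- hypothetical metrics, for illustration of the ratio only
local base_cell_height = 16.0
local fallback_cell_height = 128.0
local scale = base_cell_height / fallback_cell_height -- 0.125
```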
Previously we used a crude heuristic to decide whether to scale,
and that happened to work for Noto Color Emoji but not for Font Awesome,
whose metrics were just inside the bounds of the heuristic.
This commit allows retrieving the metrics for a given font_idx so
that we can compute the correct scale factor without any heuristics,
and applies that to the rasterized glyph.
refs: https://github.com/wez/wezterm/issues/342
This is one of those massive time sinks that I almost regret...
As part of recent changes to dust off the allsorts shaper, I noticed
that the harfbuzz shaper wasn't shaping as well as the allsorts one.
This commit:
* Adds emoji-test.txt, a text file you can `cat` to see how well
the emoji are shaped and rendered.
* Fixes (or at least, improves) the column width calculation for
combining sequences such as "deaf man", which was previously calculated
at 3 cells in width when it should have been just 2 cells wide. That
miscalculation resulted in a weird "prismatic" effect during rendering,
where the glyph was drawn with an extra RHS portion spread across
3 cells.
* Improved/simplified the clustering logic used to compute fallbacks.
Previously we could end up with some wonky/disjoint sequence of
undefined glyphs which wouldn't be successfully resolved from a
fallback font. We now make a better effort to consolidate runs of
undefined glyphs for fallback.
* For sequences such as "woman with veil: dark skin tone" that occupy a
single cell, the shaper may return 3 clusters with 3 glyphs in the
case that the font doesn't fully support this grapheme. At render
time we'd just take the last glyph from that sequence and render it,
resulting in eg: a female symbol in this particular case. It is
generally a bit more useful to show the first glyph in the sequence
(eg: person with veil) rather than the gender or skin tone, so the
renderer now checks for this kind of overlapping sequence and renders
only the first glyph from the sequence.
When subpixel or greyscale AA are in use, the glyph data includes
some lighter and darker shaded pixels. That's their purpose,
but if the fg and bg colors are the same, the expectation is that
the glyph is invisible and we don't want "phantom" pixels around
the character.
This commit adjusts the shader to set the color to transparent
when the fg and bg are the same, and we are not rendering a color
emoji.
refs: #331
This makes it possible to configure wezterm to eg: triple click
on command input (or output) to select the entire input or output
without messing around trying to find the bounds.
The docs have an example of how to configure this; it requires
setting up shell integration to define the appropriate semantic
zones.
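A sketch of such a configuration; the event and action names here are
my assumptions, and the docs mentioned above are authoritative:
```lua
local wezterm = require 'wezterm';
return {
  mouse_bindings = {
    {
      -- triple-click left selects the semantic zone under the cursor
      event = { Down = { streak = 3, button = "Left" } },
      mods = "NONE",
      action = wezterm.action{ SelectTextAtMouseCursor = "SemanticZone" },
    },
  },
}
```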
9892b16d40 adjusted how the text
colors are produced; it resulted in some ugly dark edges, especially
on lighter backgrounds.
This commit routes that tint via an alpha compositing helper which
produces smoother edges.
refs: #320
This commit more cleanly separates the load from the render flags,
and fixes up the render call; at some point this got messed up such
that we'd never end up with freetype returning subpixel format data
(LCD) and instead we'd only ever get grayscale data.
With that fixed, it became apparent that the colorization of the glyph
data in the shader was wonky, so this commit also cleans that up.
refs: #320
refs: #121
If you have a primary font whose height is a bit more than double the
width then a double-wide emoji would be scaled to a bit more than two
cells in width.
This commit adjusts the glyph scaling to check both the x and y scaling
to ensure that the glyph fits within the appropriate number of cells.
This has the consequence of rendering eg: the heart emoji smaller than
in previous versions; the heart glyph is typically square but the
broadly used concept of width for the heart unicode character is a
single cell. Previously we'd incorrectly render this double wide.
I'm not sure of a way to do better than we are right now because
freetype doesn't provide much help for scaling this kind of bitmap
font AFAICS.
refs: #320
There were two problems:
* It was using an old code path that didn't even try to resolve the cwd
* The NewWindow code path would "forget" the originating window and then
fail to resolve the current pane + path from the new, empty window
that it was building.
closes: https://github.com/wez/wezterm/issues/322
Adds some supporting methods for computing the `SemanticZone`s
in the display and a key assignment that allows scrolling the
viewport to jump to the next/prev Prompt zone.
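A possible key binding sketch, assuming the assignment is exposed as
`ScrollToPrompt` taking a direction (that name isn't spelled out above):
```lua
local wezterm = require 'wezterm';
return {
  keys = {
    -- jump the viewport to the previous / next Prompt semantic zone
    {key="UpArrow", mods="SHIFT", action=wezterm.action{ScrollToPrompt=-1}},
    {key="DownArrow", mods="SHIFT", action=wezterm.action{ScrollToPrompt=1}},
  },
}
```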
Full error
```
error[E0277]: can't compare `&[u8]` with `std::vec::Vec<u8>`
--> wezterm-gui/src/gui/termwindow.rs:817:40
|
817 | if existing.data() == data {
| ^^ no implementation for `&[u8] == std::vec::Vec<u8>`
|
= help: the trait `std::cmp::PartialEq<std::vec::Vec<u8>>` is not implemented for `&[u8]`
error: aborting due to previous error
```
Needed to re-order a couple of things to match recent changes.
Also: don't hard fail if the ssh server rejects a setenv request,
just log the error instead.
A consequence of reducing the initial texture size is that for
larger starting font sizes it isn't big enough. We need to make
a couple of passes to determine the required size, so that's
what this commit does.
refs: #307
This commit moves a bunch of stuff around such that `wezterm` is now a
lighter-weight executable that knows how to spawn the gui, talk to
the mux or emit some escape sequences for imgcat.
The gui portion has been moved into `wezterm-gui`, a separate executable
that doesn't know about the CLI or imgcat functionality.
Importantly, `wezterm.exe` is no longer a windows-subsystem (GUI)
executable on Windows, which makes interactions such as `wezterm -h`
feel more natural when spawned from `cmd`, and should allow
`type foo.png | wezterm imgcat` to work as expected.
That said, I've only tested this on linux so far, and there's a good
chance that something mac or windows specific is broken by this
change and will need fixing up.
refs: #301