This commit adjusts the default color palette to use the same color
cube calculation as xterm; it isn't the ideal color cube calculation
and results in slightly brighter colors.
This commit also memoizes the default palette calculation so that
it isn't recomputed each time a palette is created.
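For illustration, a minimal sketch of the xterm-style cube (palette
indices 16-231) with the result memoized via `once_cell`; the names
here are illustrative rather than wezterm's actual internals:
```rust
use once_cell::sync::Lazy;

// xterm's formula: a channel index of 1..=5 maps to index * 40 + 55,
// while 0 maps to 0; the +55 bias is what yields the slightly
// brighter colors.
fn cube_value(n: u8) -> u8 {
    if n == 0 { 0 } else { n * 40 + 55 }
}

// Computed once and reused for every subsequently created palette.
static DEFAULT_CUBE: Lazy<Vec<(u8, u8, u8)>> = Lazy::new(|| {
    let mut colors = Vec::with_capacity(216);
    for r in 0..6 {
        for g in 0..6 {
            for b in 0..6 {
                colors.push((cube_value(r), cube_value(g), cube_value(b)));
            }
        }
    }
    colors
});
```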
refs: https://github.com/wez/wezterm/issues/348
Teach the Core Text locator how to obtain the system fallback list
and add that to the fallback.
Fix up handling of TTC files on macOS; we'd always assume index 0
when extracting font info from the font descriptor. We now make
the effort to enumerate the contents of the TTC and find a match.
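A sketch of the TTC matching idea; `load_face`, `num_faces`, and
`family_name` are hypothetical wrappers over the underlying FreeType
concepts (FT_New_Face's face_index parameter and the face's
num_faces field):
```rust
// Probe index 0 to learn how many faces the TTC contains, then
// check each index until the family name matches the descriptor
// we're trying to resolve.
fn find_ttc_index(path: &str, wanted_family: &str) -> Option<isize> {
    let probe = load_face(path, 0).ok()?; // hypothetical helper
    let num_faces = probe.num_faces();
    (0..num_faces).find(|&index| {
        load_face(path, index)
            .ok()
            .and_then(|face| face.family_name())
            .map_or(false, |name| name == wanted_family)
    })
}
```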
https://wiki.lazarus.freepascal.org/Cocoa_DPI states that the dpi
on macOS is 72. That matches up with the experimental results reported
in #332 (in which 74.0 appears to be about the right size).
This commit introduces a `DEFAULT_DPI` constant that is set to 72 on
macOS and 96 on other operating systems.
The result of this is that a 10 point Menlo font now appears to be
the same size in Terminal.app and WezTerm.app.
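Sketched, the constant looks something like this (the actual
definition may differ):
```rust
// 72 dpi on macOS, 96 dpi elsewhere.
#[cfg(target_os = "macos")]
pub const DEFAULT_DPI: f64 = 72.0;
#[cfg(not(target_os = "macos"))]
pub const DEFAULT_DPI: f64 = 96.0;
```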
refs: https://github.com/wez/wezterm/issues/332
This commit improves input processing on macOS; passing the keyUp
events to the input context is required for dead keys to correctly
process their state transitions.
In addition, we weren't passing key events through if any modifiers
were down; for dead keys we need to allow Option through.
This commit rigs up a little bit of extra state to avoid double-emitting
key outputs from the input context.
Lastly, the virtual key code is passed through to the KeyEvent to
enable binding to raw keys per 61c52af491
refs: #357
This commit is a bit noisy because it also meant flipping the key map
code from using the termwiz input types to the window input types, which
I thought I'd done some time ago, but clearly didn't.
This commit allows defining key assignments in terms of the underlying
operating system raw codes, if provided by the relevant layer in the
window crate (currently, only X11/Wayland).
The raw codes are inherently OS/Machine/Hardware dependent; they are the
rawest value that we have available, and there is no meaningful
interpretation that we can perform in code to understand what that key
is.
One useful property of the raw code is that, because it hasn't gone
through any OS-level keymapping processing, its value reflects the key's
physical position on the keyboard, allowing you to map keys by position
rather than by value. That's useful if you use software to implement
eg: Dvorak or Colemak but want your muscle memory to kick in for some of
your key bindings.
New config option:
`debug_key_events = true` will cause wezterm to log an "error" to stderr
each time you press a key, showing the details of the key event:
```
2020-12-06T21:23:10.313Z ERROR wezterm_gui::gui::termwindow > key_event KeyEvent { key: Char('@'), modifiers: SHIFT | CTRL, raw_key: None, raw_modifiers: SHIFT | CTRL, raw_code: Some(11), repeat_count: 1, key_is_down: true }
```
This is useful if you want to figure out the `raw_code` for a key in your
setup.
In your config, you can use this information to setup new key bindings.
The motivating example for me is that because `raw_key` (the unmodified
equivalent of `key`) is `None`, the built-in `CTRL-SHIFT-1` key
assignment doesn't function for me on Linux, but I can now "fix" this in
my local configuration, taking care to make it linux specific:
```lua
local wezterm = require 'wezterm';
local keys = {}
if wezterm.target_triple == "x86_64-unknown-linux-gnu" then
  local tab_no = 0
  -- raw codes 10 through 19 correspond to the positions of the
  -- number keys 1-9 and 0 on my keyboard on my linux system.
  -- They may be different on your system!
  for i = 10, 19 do
    table.insert(keys, {
      key="raw:"..tostring(i),
      mods="CTRL|SHIFT",
      action=wezterm.action{ActivateTab=tab_no},
    })
    tab_no = tab_no + 1
  end
end
return {
  keys = keys,
}
```
Notice that the key assignment accepts encoding a raw key code using
a value like `key="raw:11"` to indicate that you want a `raw_code` of
`11` to match your key assignment. The `raw_modifiers` portion of
the `KeyEvent` is used together with the `raw_code` when deciding
the key assignment.
cc: @bew
This allows stashing the raw key identifier from the keyboard layer.
Interpreting this value is hardware and OS dependent.
At this time, only X11/Wayland implementations populate this value,
and there is no way to do key assignment based upon it.
164adb78e3 added blowing away some
opengl related state during resize; however, some systems
(BigSur with M1 silicon, perhaps also Intel?) and Windows 10
can generate a resize event before we've spun up opengl, so
we need to make this conditional.
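A hedged sketch of the shape of the fix; the field and method names
are hypothetical:
```rust
// render_state is None until OpenGL has been spun up, so a
// too-early resize simply skips the invalidation.
fn on_resize(window: &mut TermWindow) {
    if let Some(render_state) = window.render_state.as_mut() {
        render_state.invalidate_glyph_atlas();
    }
}
```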
refs: #359
closes: #358
827d94a seems to have broken building on aarch64. The fix is pretty
much adapted from bf962c8.
I know little about Rust, so I might've missed some obvious issues with
this PR; it seems to work so far, though.
This commit adds support for computing the codepoint coverage for fonts
loaded from font-dirs and for the built-in, in-memory fonts.
What this means is that if you have, eg, a font with Chinese glyphs in
your font-dirs but not explicitly listed in your wezterm config, then
when Chinese text is rendered and no match from your config is found,
wezterm will be able to find the font in your font-dirs and use it
implicitly.
Computing the codepoint coverage is relatively expensive so we defer it
until we need to perform it, and cache it.
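A minimal sketch of the deferred computation, assuming a cache slot
on the parsed font entry (names hypothetical):
```rust
use std::cell::OnceCell;
use std::collections::BTreeSet;

struct FontEntry {
    coverage: OnceCell<BTreeSet<char>>, // filled on first use
}

impl FontEntry {
    // Walking the cmap table is the relatively expensive part, so
    // do it at most once and cache the result.
    fn coverage(&self) -> &BTreeSet<char> {
        self.coverage.get_or_init(|| self.compute_coverage())
    }

    fn compute_coverage(&self) -> BTreeSet<char> {
        // walk the font's cmap and collect the mapped codepoints
        BTreeSet::new() // placeholder for the real computation
    }
}
```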
Previously, we'd enumerate the font dirs on every font resolve for
every bit of styled text.
This moves the new FontDatabase instances to be single-instanced
in the FontConfiguration. The font-dirs will be scanned once
on a config reload, but the built-in in-memory fonts will only
ever be enumerated once per FontConfiguration instance.
This tidies up the font-dir and built-in font management a little
bit and paves the way for codepoint -> font resolution for fonts
discovered in font-dirs.
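Roughly the ownership after this change, with hypothetical field
names:
```rust
struct FontDatabase; // stands in for the real database type

struct FontConfiguration {
    // Rebuilt on config reload, so font-dirs are scanned once per
    // reload instead of once per font resolve.
    font_dirs: FontDatabase,
    // Enumerated only once per FontConfiguration instance.
    built_in: FontDatabase,
}
```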
The recent addition of dynamic fallback resolution highlighted this
issue.
The test scenario is:
1. Output some glyphs that need dynamic fallback
2. ctrl-+ to change the font scaling
3. rasterization fails because of some bad cached state; the font_idx's
were invalidated by the scale change which reset the dynamically
discovered fallback fonts.
The resolution is to blow the glyph and shape caches when scaling
is changed.
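Sketched with hypothetical names; the point is that anything keyed
by font_idx is stale after a scale change:
```rust
fn apply_scale_change(window: &mut TermWindow, font_scale: f64, dpi: usize) {
    window.fonts.change_scaling(font_scale, dpi);
    // The scale change rebuilds the dynamically discovered fallback
    // list, so font_idx values may now refer to different fonts.
    window.glyph_cache.clear();
    window.shape_cache.clear();
}
```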
By default, freetype doesn't include error strings, and FT_Error_String
will always return NULL. Turn on the compile-time option that makes
this function useful!
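Assuming the vendored FreeType is compiled via the cc crate, enabling
this is a one-line define in build.rs; the toggle in question is
FreeType's FT_CONFIG_OPTION_ERROR_STRINGS from ftoption.h:
```rust
// build.rs sketch: include the error strings so FT_Error_String
// returns real messages instead of NULL.
fn main() {
    cc::Build::new()
        .define("FT_CONFIG_OPTION_ERROR_STRINGS", None)
        // ... the rest of the vendored freetype build ...
        .compile("freetype");
}
```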
This commit uses a bit of DirectWrite to discover which font(s)
can be used to render a set of codepoints.
While hooking this up, I found that the method we were using
to extract the font data didn't handle TTC data, so this commit
improves some parser diagnostics and the handling for that.
refs: https://github.com/wez/wezterm/issues/299
Profiling showed that set_font_size was a hotspot. While the size
info was cached at the shaper layer, it was also needed in the
raster layer, so move it from the shaper layer into the raster
layer.
refs: #353
98f289f511 causes more metrics retrieval
than in earlier versions; each uncached glyph render would trigger
a metrics recompute for the relevant font.
Add a simple cache for this.
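The cache can be as simple as a map keyed by the rasterization
parameters; a sketch with assumed types:
```rust
use std::collections::HashMap;

#[derive(Clone, Copy)]
struct FontMetrics {
    cell_width: f64,
    cell_height: f64,
}

struct MetricsCache {
    // Key by (size in tenths of a point, dpi) so the key hashes
    // without comparing floats directly.
    entries: HashMap<(u32, u32), FontMetrics>,
}

impl MetricsCache {
    fn metrics<F>(&mut self, size: f64, dpi: u32, compute: F) -> FontMetrics
    where
        F: FnOnce() -> FontMetrics,
    {
        let key = ((size * 10.0) as u32, dpi);
        // `compute` (the expensive recompute) only runs on a miss.
        *self.entries.entry(key).or_insert_with(compute)
    }
}
```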
refs: #353
This commit makes some adjustments to FontConfiguration and LoadedFont
such that if the shaper is unable to resolve a (non-last-resort) font
for a set of codepoints, the locator can be used to try to find a
font that has coverage for those codepoints; a sketch of the flow
follows the list below.
At the moment this is a bit limited:
* Only the font-config locator implements this function
* The directory based locator isn't actually an implementor of the
locator trait and doesn't have a way to be invoked for this.
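A hedged sketch of the flow; the trait and method names here are
assumed for illustration, not wezterm's actual API:
```rust
struct FontDataHandle; // opaque handle to loadable font data

trait FontLocator {
    // Given codepoints that none of the configured fonts cover,
    // return handles for system fonts that do cover them.
    fn locate_fallback_for_codepoints(&self, codepoints: &[char]) -> Vec<FontDataHandle>;
}

fn extend_fallback(
    locator: &dyn FontLocator,
    missing: &[char],
    fallback: &mut Vec<FontDataHandle>,
) {
    // Append discovered fonts so the next shaping pass can resolve
    // the previously uncovered codepoints.
    fallback.extend(locator.locate_fallback_for_codepoints(missing));
}
```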
Adds an option to control how wide glyphs (more specifically: square
aspect glyphs) are scaled to conform to their specified width.
The three options are `Never`, `Always`, and `WhenFollowedBySpace`.
When a glyph is loaded, if it is approximately square, this option is
consulted. If overflow is permitted then the glyph will be scaled
to fit only the height of the cell, rather than ensuring that it fits
in both the height and width of the cell.
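Sketched as a config enum (the exact name used by the config layer
is assumed here):
```rust
pub enum AllowSquareGlyphOverflow {
    // always scale to fit both cell height and width
    Never,
    // fit the cell height only; the width may overflow into the
    // neighboring cell
    Always,
    // overflow only when the following cell holds a space
    WhenFollowedBySpace,
}
```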
refs: #342
Make an effort to explain what failed to load and where it came from,
and funnel users to the documentation on font configuration.
The message presented is slightly different depending on whether
we think that the font was their primary font, an explicitly
specified font_rule or an implicitly synthesized font_rule.
refs: #340
Use the scaling factor between the font metrics for the base font
and those of the fallback font selected for a given glyph.
The scenario is this: the base font is typically the first one selected
from the font configuration. There may be multiple fallback fonts that
are different sizes; for instance, the Font Awesome font has glyphs that
are square in aspect and are thus about twice the width of a typical
textual monospace font. Similarly, Noto Color Emoji is another
square-aspect font, but one that has a single set of bitmap strikes
at a fixed 128px square size.
The shaper returns advance metrics in the scale of the containing font,
and the rasterizer will target the supplied size and dpi.
We need to scale these to match the base metrics.
Previously we used a crude heuristic to decide whether to scale,
and that happened to work for Noto Color Emoji but not for Font Awesome,
whose metrics were just inside the bounds of the heuristic.
This commit allows retrieving the metrics for a given font_idx so
that we can compute the correct scale factor without any heuristics,
and applies that to the rasterized glyph.
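The computation reduces to a ratio of the two fonts' metrics; a
sketch with an assumed field name:
```rust
struct FontMetrics {
    cell_height: f64,
}

// Bring a glyph rasterized in the fallback font's own scale into
// the base font's scale, using real metrics rather than a heuristic.
fn fallback_scale(base: &FontMetrics, fallback: &FontMetrics) -> f64 {
    base.cell_height / fallback.cell_height
}
```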
refs: https://github.com/wez/wezterm/issues/342
Don't short-circuit on just the family portion of the name;
if the criteria don't match there, we should fall back to testing
against the full font name.
closes: https://github.com/wez/wezterm/issues/341
Weird that this was set to not enable C++, but I suspect that's the
reason why the C++11 compiler flag isn't being added, which would
resolve this:
```
warning: harfbuzz/src/hb-meta.hh:41:18: warning: variadic templates are a C++11 extension [-Wc++11-extensions]
warning: template<typename... Ts> struct _hb_void_t { typedef void type; };
```