This commit is a bit noisy because it also meant flipping the key map
code from using the termwiz input types to the window input types, which
I thought I'd done some time ago, but clearly hadn't.
This commit allows defining key assignments in terms of the underlying
operating system raw codes, if provided by the relevant layer in the
window crate (currently, only X11/Wayland).
The raw codes are inherently OS/Machine/Hardware dependent; they are the
rawest value that we have available, and there is no meaningful
interpretation that we can perform in code to understand what that key
is.
One useful property of the raw code is that, because it hasn't gone
through any OS level keymapping processing, its value reflects its
physical position on the keyboard, allowing you to map keys by position
rather than by value. That's useful if you use software to implement
eg: DVORAK or COLEMAK but want your muscle memory to kick in for some of
your key bindings.
New config option:
`debug_key_events = true` will cause wezterm to log an "error" to stderr
each time you press a key and show the details in the key event:
```
2020-12-06T21:23:10.313Z ERROR wezterm_gui::gui::termwindow > key_event KeyEvent { key: Char('@'), modifiers: SHIFT | CTRL, raw_key: None, raw_modifiers: SHIFT | CTRL, raw_code: Some(11), repeat_count: 1, key_is_down: true }
```
This is useful if you want to figure out the `raw_code` for a key in your
setup.
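For reference, enabling that in the config is just:

```lua
return {
  -- log each key event to stderr so you can see the raw_code values
  debug_key_events = true,
}
```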
In your config, you can use this information to set up new key bindings.
The motivating example for me is that because `raw_key` (the unmodified
equivalent of `key`) is `None`, the built-in `CTRL-SHIFT-1` key
assignment doesn't function for me on Linux, but I can now "fix" this in
my local configuration, taking care to make it Linux-specific:
```lua
local wezterm = require 'wezterm';
local keys = {}
if wezterm.target_triple == "x86_64-unknown-linux-gnu" then
  local tab_no = 0
  -- raw codes 10 through 19 correspond to the number key 1-9 positions
  -- on my keyboard on my linux system. They may be different on
  -- your system!
  for i = 10, 20 do
    table.insert(keys, {
      key="raw:"..tostring(i),
      mods="CTRL|SHIFT",
      action=wezterm.action{ActivateTab=tab_no},
    })
    tab_no = tab_no + 1
  end
end
return {
  keys = keys,
}
```
Notice that the key assignment accepts encoding a raw key code using
a value like `key="raw:11"` to indicate that you want a `raw_code` of
`11` to match your key assignment. The `raw_modifiers` portion of
the `KeyEvent` is used together with the `raw_code` when deciding
the key assignment.
cc: @bew
This allows stashing the raw key identifier from the keyboard layer.
Interpreting this value is hardware and OS dependent.
At this time, only X11/Wayland implementations populate this value,
and there is no way to do key assignment based upon it.
This is basically the same issue as
70fc76a040 but on macOS. Now that we're
using EGL in more places, the same sort of check needs to be used in more
places!
Will need to do the same on Windows in a follow-up commit.
refs: #316
Not 100% sure that this is it, but it seems much less likely that
artifacts will appear in conjunction with transparency when the window
shadow effect is disabled; I didn't see the ghosting with this disabled,
but I sometimes didn't see it with it enabled, so I'm not sure that we
have a 100% reliable reproduction, and thus am not sure that this is a
fix.
I found mention of disabling the shadow in some example code on
stackoverflow when I was first researching this, but it wasn't supplied
with an explanation. Perhaps this is why?
Longer term we might want to be smarter about turning off the shadow
only when the opacity is != 1.0, but at the moment the window layer
can't see the config, so let's just default it off for the moment
until we see if it does the trick.
refs: #310
Wheel events wouldn't get reported to eg: vim in wsl if the
window's X position was larger than the window width due to
mouse wheel messages being reported with screen coordinates
rather than client coordinates.
This commit addresses that.
When allocating space in the texture atlas, we typically use
a small padding to avoid accidentally interpolating textures
into glyphs.
When it comes to rendering images via iterm2 or sixel image
protocols, the image emitted by the user may not exactly fill
the cell dimensions, and due to how the shader works to
apply those textures we could end up revealing nearby images
in the texture when displaying an unrelated image.
This commit adjusts the texture atlas allocation when making
space for image protocol textures; excess padding based on
an overestimate of the cell dimensions is added to the right
and bottom of the image, guaranteeing that that border will
be filled with transparent pixels.
This is a bit wasteful of texture space, but isn't egregiously
bad and is easy to reason about and makes things look less
janky.
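As a rough illustration (hypothetical names, not the actual atlas code), the allocation sizing amounts to:

```lua
-- Sketch: pad an image-protocol allocation on the right and bottom by an
-- overestimate of the cell dimensions so that any samples which stray
-- outside the image hit transparent pixels instead of a neighboring texture.
local function image_alloc_size(img_w, img_h, cell_w, cell_h)
  return img_w + cell_w, img_h + cell_h
end
```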
refs: #292
This commit uses the guillotine algorithm to assign rectangles,
which is superior to the dumb algorithm previously in use.
In addition, in the first pass of painting, if we get a texture
space error, we clear the atlas and try again without increasing
its size, which should serve as the ultimate defrag.
Subsequent passes will cause the texture to grow if needed.
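The retry policy boils down to something like this sketch (hypothetical helper names; the real logic lives in the paint path):

```lua
-- Sketch: on an out-of-texture-space error in the first pass, clear the
-- atlas and retry at the same size (the "ultimate defrag"); later passes
-- are allowed to grow the texture instead.
local function paint_pass(atlas, pass_number)
  if not try_paint(atlas) then     -- hypothetical helper
    if pass_number == 1 then
      atlas:clear()                -- same size, freshly emptied
    else
      atlas:grow()
    end
    try_paint(atlas)
  end
end
```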
refs: #306
This is a bit more involved than I'd like, but it seems more
deterministic than using `TranslateMessage` or `ToUnicode` in all cases.
This commit expands the depth of the keyboard layout probing that
is performed when we detect a changed keyboard layout.
We now detect the starting `(Modifier, VK) -> char` for a dead key press,
as well as the map of terminating `(Modifier, VK) -> char` for valid
dead key presses.
This information allows us to simply lookup the mapping without
calling `ToUnicode`. Avoiding `ToUnicode` is desirable because it
maintains a global state and it is unpredictable what else is
manipulating that same state. In particular, for the ESP keyboard
layout where `~` is a dead key that is reached via `AltGr 4`, there
doesn't appear to be a reliable way to extract the correct mapping
from it when calling `ToUnicode` in response to the various KEYUP,
KEYDOWN messages. We could get it if we always called
`TranslateMessage` and only looked at `WM_CHAR`, but that means that
we cannot decompose `WM_CHAR` back to the raw key events when we
need to. Bleh!
Test Plan for this commit:
* With ENG layout active, check that CTRL, ALT and so on have the
intended effect in the terminal; eg: CTRL-C, CTRL-W (in vim).
* Switch to pinyin layout, check that typing still invokes the
IME and that it can insert text
* Switch to DEU. Check that `AltGr m` produces a `mu` symbol.
Check that grave (`\``) (a dead key) doesn't immediately output
anything, then press `e`; that produces an `e` with a grave
diacritic. Grave followed by space emits grave. Grave
followed by grave emits a grave and holds the second grave; pressing
`e` at this point now emits `e` with a grave diacritic.
(This is a difference from the "normal" system behavior, which
would just emit two graves in a row, then a regular `e`).
* Switch to ESP. Check that `AltGr 4` (tilde) doesn't immediately
output anything, then press `n`; that produces an `n` with the
tilde diacritic.
* Change `use_dead_keys = false`. Now verify in DEU that `grave`
just emits grave. In ESP, verify that `AltGr 4` just emits
a tilde.
* Switch back to ENG. Verify that `ALT-space` pops up the system
menu.
refs: #275
refs: #305
Change the cursor to an appropriate one of these when hovering
over and dragging a split.
Fix an issue where we wouldn't always change the cursor when
hovering over a split when multiple splits are present.
There's a few different knobs to turn, but this
commit turns them and we're now able to respect
opacity settings for both OpenGL/CGL and Metal
renderers.
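For context, the opacity setting in question is the window background opacity from the config; assuming the option name is `window_background_opacity`, a typical setting looks like:

```lua
return {
  -- assumed option name; values below 1.0 make the window translucent
  window_background_opacity = 0.85,
}
```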
closes: #141
This is similar in spirit to the work in 4d71a7913a
but for Windows.
This commit adds ANGLE binaries built from
07ea804e62
to the repo. The build and packaging will copy those into the same
directory as wezterm.exe so that they can be resolved at runtime.
By default, `prefer_egl = true`, which will cause the window
crate to first try to load an EGL implementation. If that fails,
or if `prefer_egl = false`, then the window crate will perform
the usual WGL initialization.
The practical effect of this change is that Direct3D11 is used for the
underlying render, which avoids problematic OpenGL drivers and means
that the process can survive graphics drivers being updated.
It may also increase the chances that the GPU will really be used
in an RDP session rather than the pessimised use of the software
renderer.
The one downside that I've noticed is that the resize behavior feels a
little janky in comparison to WGL (frames can render with mismatched
surface/window sizes which makes the window contents feel like they're
zooming/rippling slightly as the window is live resized). I think this
is specific to the ANGLE D3D implementation as EGL on other platforms
feels more solid.
I'm a little on the fence about making this the default; I think
it makes sense to prefer something that won't quit unexpectedly
while a software update is in progress, so that's a strong plus
in favor of EGL as the default, but I'm not sure how much the
resize wobble is going to set people off.
If you prefer WGL and are fine with the risk of a driver update
killing wezterm, then you can set this in your config:
```lua
return {
prefer_egl = false,
}
```
refs: https://github.com/wez/wezterm/issues/265
closes: https://github.com/wez/wezterm/issues/156
6c5a996423 was almost great...
the problem is that CTRL-W for example was generating a raw
uppercase W instead of a lowercase W which meant that CTRL-W
for split navigation in vim would trigger the close pane
key assignment.
I noticed that the built-in CTRL-SHIFT-1 assignment had
stopped working because that key press was being recognized
as CTRL-SHIFT-! with the recent changes in handling keyboard
input.
This commit sets the raw key to the position-based fallback
that we'd use if ToUnicode didn't return the correct mapping.
This is sufficient for this sort of un-modified key assignment
because the key is based on the virtual key code and is ignorant
of how the keyboard layout might compose those keys with SHIFT;
that is exactly what we want in this situation.
This commit adjusts the window layer to have it try to load EGL
implementations on macOS. This is important as the system
provided OpenGL implementation is deprecated and I wanted to
have a path forward for when it is finally removed.
If EGL fails to initialize, we fall back to the CGL/OpenGL
implementation that we used previously.
I've included binaries built for 64-bit intel from the MetalANGLE
project; here's how I built them:
```
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git --depth 1
git clone https://github.com/kakashidinho/metalangle --depth 1
cd metalangle
PATH=$PWD/../depot_tools:$PATH python scripts/bootstrap.py
PATH=$PWD/../depot_tools:$PATH gclient sync
PATH=$PWD/../depot_tools:$PATH gn --args="is_debug=false angle_enable_metal=true angle_enable_vulkan=false angle_enable_gl=false angle_build_all=false" gen out/Release
PATH=$PWD/../depot_tools:$PATH autoninja -C out/Release
```
Those steps are a little too long to want to put them directly
into the wezterm CI.
It is important for metalangle to be >= 8230df39a5
in order for scaling to be handled correctly when dragging windows
between monitors.
refs: https://github.com/kakashidinho/metalangle/issues/34
This changes the ALT/dead key behavior a little bit more,
and in a way that is likely more useful to terminal users.
The default behavior is that system dead key processing is enabled.
For example, with DEU keyboard layout activated:
* `^` `<SPACE>` results in a single `^`
* `^` `e` results in those two characters combining into an e with a
diacritic.
If the config sets `use_dead_keys = false` then the behavior changes;
wezterm probes the active keymap to determine which keys are marked
as dead keys and computes their single character expansion. When
the dead key is pressed then that expansion is substituted instead.
So `^` is simply `^`.
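In config terms that looks like:

```lua
return {
  -- disable system dead key processing; pressing a dead key such as `^`
  -- immediately emits its single-character expansion instead of waiting
  -- to compose with the next key press
  use_dead_keys = false,
}
```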
In order to pull this off, the window layer needs to selectively
call `TranslateMessage` for the system dead key expansion case
instead of unconditionally in the global message loop.
As a result of *that*, it means that we don't perform the default ALT
key translation for every key press any more. I looked to see how old
friend putty handles this and found that it only allows default system
processing for ALT-space and ALT-F4. I was resistant to selectively
processing system shortcuts because the full set are effectively
unknowable to an application and I didn't want to try to replicate
a wide selection of varying keypresses. I'm fine to only allow
these two, so this commit does that, and reverts the portion of
the prior commit that prevented passing general ALT key combinations
through.
refs: #275
refs: #296
For some definition of improve, at least.
On Windows, ALT is basically reserved by the Window management
layer for functions such as ALT-space, ALT-F4 and so on.
Windows doesn't provide a method by which an application can
test whether a given key would be processed by the default
window procedure so we're in a bit of a bind in terms of
allowing ALT+a keypress to do something meaningful in the
terminal.
What I've settled on for now is:
On Windows only, if ALT is pressed, allow matching key assignments that
include ALT to be matched. If there are no key assignments, then DON'T
pass the key press to the active pane, and instead allow it to be passed
to DefWindowProc. This allows ALT-space to be handled correctly,
provided the user hasn't defined an ALT-space key assignment of their
own.
This may have some unforeseen consequences. For example, ALT-<number>
is a readline binding that repeats an argument a number of times.
This change "breaks" that, but the user can provide a key assignment
to `SendString` the equivalent sequence to restore that behavior.
I'm kind of hoping that no one notices, but I'm prepared to explicitly
add default key assignments for that.
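For example, restoring the readline Meta-1 behavior could look roughly like this (a sketch; readline's ALT/Meta bindings are the ESC-prefixed characters):

```lua
local wezterm = require 'wezterm';

return {
  keys = {
    -- sketch: send ESC followed by "1" so readline still sees Meta-1
    {key="1", mods="ALT", action=wezterm.action{SendString="\x1b1"}},
  },
}
```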
The other aspect of this commit is that I now understand a bit better
what a dead key is and how they should be handled. I've tested the
behavior of wezterm with these changes and the behavior is consistent
with a regular CMD window when I have the DEU keymap active.
Specifically, using the on-screen keyboard, if I click `^` then click
`e` wezterm will emit `ê`. If I click `^` then `^` then wezterm emits
`^^`.
refs: #275
refs: #296
This appears to be an unexpected consequence of 6708ea4b36
but thankfully that change allows de-coupling shift processing
from the ctrl processing in this block of code.
refs: #275
It's not clear why the first choice isn't always the right choice
for some users.
This commit changes the logic to try all potential configs,
one after the other, until we find one that sticks.
I don't know if this will work in practice: I suspect that
trying to configure one of them may prevent later configs from
being used.
But maybe it will, and it may reveal more information about
what the real cause of the problem is.
refs: #272
This is imperfect in that it may feel slightly off for very large
or very small font sizes, but it feels more similar to the scroll
speed in eg: iTerm2 with these changes.
refs: #206
To reproduce the problem, maximize wezterm, then press CMD-N.
This commit tells the window not to use cocoa native tabs and
instead really create a new window when we ask it to create
a new window.
closes: #254
025732d00f introduced deferred
window creation; the creation would get scheduled into the
spawn queue and then get run a few milliseconds later
on the main thread.
For reasons that I don't understand, returning to the scheduler
loop to flush or otherwise process messages causes a wayland
protocol error.
Adjusting the notify routine to dispatch immediately if we're
already on the mux thread seems to resolve this.
While looking at this, I cleaned up a destruction order issue
with the opengl state that was then causing a segfault on shutdown.
I also removed a bit of dead paint related code that doesn't
appear to be needed any more.
refs: #293
This was broken by the changes in
aad493ab2a. The issue was that the
channel send didn't wake up the receiver. I'm not sure why, and I tried
a couple of different async channel implementations.
Doing the simplistic solution here works reliably.
This is a bit of a switch-up, see this comment for more background:
refs: https://github.com/wez/wezterm/issues/265#issuecomment-701882933
This commit:
* Adds a pre-compiled mesa3d opengl32.dll replacement
* The mesa dll is deployed to `<appdir>/mesa/opengl32.dll` which by
default is ignored.
* When the frontend is set to `Software` then the `mesa` directory
is added to the dll search path, causing the llvmpipe renderer
to be enabled.
* The old software renderer implementation is available using the
`OldSoftware` frontend name
I'm not a huge fan of the subdirectory for the opengl32.dll, but
I couldn't get it to work under a different dll name; the code
thought that everything was initialized, but the window just rendered
a white rectangle.
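Assuming the frontend is selected via the `front_end` config option, opting into the mesa-based renderer looks like:

```lua
return {
  -- assumed option name; "Software" uses the bundled mesa llvmpipe
  -- renderer, while "OldSoftware" selects the previous implementation
  front_end = "Software",
}
```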
If we've failed to initialize EGL, try setting `LIBGL_ALWAYS_SOFTWARE=true`
in the environment and make another pass at initialization in the hope
that it brings up something usable.
This commit only impacts linux systems at the time of writing.
I've made the line that logs the GL implementation information
have `error` level again, because it is more convenient for me
even if it isn't technically an error.
refs: https://github.com/wez/wezterm/issues/272
(but isn't the true fix; this is just trying to lessen the consequences
of that problem. I would like to get that fixed correctly)
refs: https://github.com/wez/wezterm/issues/265#issuecomment-701882933
(which discusses what I think the end state should be)
This could be reproduced via `wezterm connect localhost`.
This bug was surfaced after the last release added a Drop impl
to cleanup the display.
This commit tracks the display in the connection.
closes: https://github.com/wez/wezterm/issues/252
This isn't a fix by any stretch of the imagination, but it stops
a crash. Should be good enough until I get a chance to fix this
properly.
refs: https://github.com/wez/wezterm/issues/252
This tidies up the valgrind output some more, but seems to highlight
some leaks in the egl implementation around init/shutdown.
I still don't see a smoking gun for a memory leak that grows over time.
refs: https://github.com/wez/wezterm/issues/238
When running on a 30bpp display with 2 bit alpha, eglChooseConfig
will match and list the 10bpc configuration first, which doesn't match
the desired pixel format.
Filter the config list so that it only includes 8bpc configurations.
refs: https://github.com/wez/wezterm/issues/240
In refs: https://github.com/wez/wezterm/issues/240 there are a number
of configurations that report 0 for the alpha size and where we are
unable to otherwise find a working config.
This is a speculative commit to relax the alpha channel size to
basically anything available and see if that helps.
This commit refactors the wayland EGL init code to call into the
non-wayland init code which is more in the spirit of DRY.
It also highlights that we were requesting PBUFFER and PIXMAP capable
contexts in the non-wayland case. Since we appear to survive without
those in the wayland renderer, presumably we can survive without them
in all cases.
This may help with activating opengl for this issue:
refs: https://github.com/wez/wezterm/issues/240
While looking into what it might take to support 10bpc (30bpp) displays
(https://github.com/wez/wezterm/issues/240) I was experimenting with
Xephyr at a reduced 16bpp depth and noticed that the server still
offered a 32bpp TrueColor depth option.
This commit adjusts the window/bitmap code to allow it to select depths
24bpp or 32bpp, preferring the largest depth. We restrict ourselves to
24 and 32 bit selections for this, as those appear to be bit for bit
compatible for the r/g/b channels. I suspect that 10bpc will require
some scaling somewhere.
This change allows running wezterm against the reduced depth Xephyr, but
since Xephyr doesn't support GL it runs with the software renderer; I
don't know quite how opengl is going to play with this. I can confirm
that running wezterm on my native 24bpp display when it picks a 32bpp
visual does run with opengl enabled, so maybe this is good enough?
8f1f1a65ea added support for probing
for opengl extensions, and I thought that I had the fallback covered
but it turned out that we were only falling back if one of the major
extensions wasn't present.
This commit adds a fallback for the case where things look ok at
first glance, but where they fail at runtime for whatever reason.
refs: https://github.com/wez/wezterm/issues/235
With this commit, we now survive a reinstall or upgrade of the nvidia
drivers on my Windows system without crashing.
This commit allows notifying the application of the context loss
so the application can either try to reinit opengl or open a new
window as a replacement and init opengl there.
I've not had success at reinitializing opengl after a driver upgrade;
it seems to be persistently stuck in a state where it fails to allocate
a vertex buffer.
So, the state we have now is that we try to reinit opengl on a new
window, and if that fails, leave it set to the software renderer.
This isn't a perfect UX, but it is better than terminating!
refs: https://github.com/wez/wezterm/issues/156
I don't have a great way to test this on those platforms,
so other than compiling and running and verifying that things
work normally, I'm not sure if this is sufficient!
This commit allows distinguishing between left and right alt
modifiers at the window layer. So far only macos provides
this additional information.
Expand the logic that decides whether Alt should emit the
composed key or act as the raw key with the Alt modifier flag
set so that we can set that behavior separately for the left
and right modifiers on systems that support it, and use the
existing config for systems that don't support it.
The default settings for these flags is that Left Alt will
send the uncomposed key + Alt modifier while the Right Alt
will behave more like AltGr (which is typically on the RHS
of the keyboard) and send the composed key.
This gives more flexibility by default and hopefully matches
expectations a bit better.
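Assuming the per-side flags are named `send_composed_key_when_left_alt_is_pressed` and `send_composed_key_when_right_alt_is_pressed` (the exact names may differ), the defaults described above would correspond to:

```lua
return {
  -- assumed option names; left Alt acts as a plain modifier while
  -- right Alt composes like AltGr
  send_composed_key_when_left_alt_is_pressed = false,
  send_composed_key_when_right_alt_is_pressed = true,
}
```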
refs: https://github.com/wez/wezterm/issues/216
The goal at the window layer is to preserve enough useful information
for other layers. In this specific circumstance on macos we'd like
to be able to know both that eg: ALT-1 was pressed and that ALT-1 composes
to a different unmodified sequence and then allow the user's key
binding assignment to potentially match on both.
We sort of allowed for this, but didn't separate out the modifier keys.
This commit adds a `raw_modifiers` concept to the underlying event
struct so that we can carry both the raw key and modifier information
as well as the composed key and modifier information.
In the scenario above, we want the raw key/modifier tuple to be ALT-1
but the composed key/modifier to be eg: unmodified `¡` in my english
keymap.
refs: https://github.com/wez/wezterm/issues/158
Adds some detection to see if the active keyboard layout has
AltGr, and if so, adjust our key mapping logic to accommodate it.
With this change, when using an ENG layout, I can use either left
or right alt-b/alt-f to move through words in wsl. When I switch
to DEU my left alt is still alt and my right alt causes the
Windows On-Screen keyboard to act as though AltGr is pressed.
I can then use the On-Screen keyboard to press the `<` key which
is to the left of the `Z` key on a German layout and have it produce
the `|` character.
refs: https://github.com/wez/wezterm/issues/185
We switched to using clipboard because of problems under XWayland.
These days we have much better native Wayland support and folks
should use that.
Test plan:
In one window:
```
echo "clipboard" | xclip -i -selection clipboard; echo "primary" | xclip -i -selection primary;
```
then start `wezterm` and press shift-insert.
Prior to this change we'd always print `clipboard`.
After this change we'll print `primary`.
However, if you run:
```
WEZTERM_X11_PREFER_CLIPBOARD_OVER_PRIMARY=1 wezterm
```
then we'll use the old `clipboard` behavior.
Teach the window layer about window icons and implement the
plumbing for this on X11.
For Wayland there is no direct way to specify the icon; instead
the application ID is used to locate an appropriate .desktop filename.
We set the app id from the classname but that didn't match the installed
name for our desktop file which is namespaced under my domain, so change
the window class to match that and enable the window icon on Wayland.
refs: https://github.com/wez/wezterm/issues/172#issuecomment-619938047
@kalgynirae showed me weirdly laggy behavior when moving the mouse
in front of his x11 window. My suspicion was that this is somehow
related to updating the mouse cursor glyph, and looking at this code
there were two things that might influence this:
* We weren't saving the newly applied cursor value, so we'd create
a new cursor every time the mouse moved (doh!)
* We'd create a new cursor id each time it changed, and then destroy it
(which isn't that bad, but if it contributes to lag, maybe it is?)
This commit addresses both of these by making a little cache map
from cursor type to cursor id.
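The cache is just a memoizing map; a minimal sketch (hypothetical names, not the actual X11 code):

```lua
-- Sketch: create each cursor id once per cursor type and reuse it on
-- subsequent mouse moves instead of a create/destroy cycle per move.
local cursor_cache = {}

local function cursor_for(cursor_type)
  local id = cursor_cache[cursor_type]
  if not id then
    id = create_cursor(cursor_type)  -- hypothetical; an xcb call in reality
    cursor_cache[cursor_type] = id
  end
  return id
end
```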
I can't observe a difference on my system, so I wonder if this might
also be partially related to graphics drivers and hardware/software
cursors?
Hiding a window is implemented as miniaturizing the window, which
is typically shown with an animation of the window moving into the
dock.
This is not the same as the application-wide hide function in macOS;
that function hides the entire app with no animation. We don't use
that here because our Hide function is defined as a window operation
and not an application operation.
refs: https://github.com/wez/wezterm/issues/150
The opengl based renderer first clears the window to the background
color and then renders the cells over the top.
on macOS I noticed a weird lighter strip to the bottom and right of
the window and ran it down to the initial clear: our colors are SRGB
rather than plain RGB and the discrepancy from rendering SRGB as RGB
results in the color being made a brighter shade. This was less
noticeable for black backgrounds.
Remove a normalizing function that made assumptions based on the
keycaps that did not hold up when selecting Dvorak as an input
source. For example "CTRL-C" where `C` is the key with the C keycap
would send `CTRL-C` even when Dvorak was selected; it should send CTRL-J
in that layout.
I think with the other normalization that happens in the termwindow
layer we don't need this function any more.
The default value is 3 lines. With this change, scrolling speed now seems
similar to other programs like cmd.exe. Before this change it felt too slow.
I noticed my trackpoint or touchpad reports a lot of < 120 (WHEEL_DELTA) events.
They shouldn't be ignored.
Also https://docs.microsoft.com/en-us/windows/win32/inputdev/wm-mousewheel says:
> The wheel rotation will be a multiple of WHEEL_DELTA, which is set at 120.
> This is the threshold for action to be taken, and one such action (for
> example, scrolling one increment) should occur for each delta.
>
> The delta was set to 120 to allow Microsoft or other vendors to build
> finer-resolution wheels (a freely-rotating wheel with no notches) to send
> more messages per rotation, but with a smaller value in each message. To use
> this feature, you can either add the incoming delta values until WHEEL_DELTA
> is reached (so for a delta-rotation you get the same response), or scroll
> partial lines in response to the more frequent messages. You can also choose
> your scroll granularity and accumulate deltas until it is reached.
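A minimal sketch of the accumulation approach those docs describe (illustrative only, not the actual message handling code):

```lua
local WHEEL_DELTA = 120
local accumulated = 0

-- Accumulate small deltas from high-resolution devices and only emit
-- whole scroll increments once a full WHEEL_DELTA has built up.
local function on_wheel(delta)
  accumulated = accumulated + delta
  local increments = math.floor(math.abs(accumulated) / WHEEL_DELTA)
  if increments > 0 then
    local sign = accumulated > 0 and 1 or -1
    accumulated = accumulated - sign * increments * WHEEL_DELTA
    scroll_by(sign * increments)  -- hypothetical scroll handler
  end
end
```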
macos generates fractional distance values for the mouse wheel,
with one tick starting at 0.1. We were truncating this to a 0 row
move, which meant that you'd need to build up some acceleration to
move the rows when all you really wanted was a single tick.
This commit changes things so that we round up to at least 1.0 in this
situation.
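In other words, the adjustment amounts to something like this sketch:

```lua
-- Sketch: a small fractional tick (eg: 0.1) should still move at least one row
local function wheel_rows(delta)
  if delta > 0.0 and delta < 1.0 then return 1.0 end
  if delta < 0.0 and delta > -1.0 then return -1.0 end
  return delta
end
```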
The IME stuff on macos tends to swallow repeats for some keys.
Ugh. So this commit adds an option to disable the use of the IME.
Switching away from it effectively inverts the meaning of backspace
and delete (because our method is no longer called by the IME), so
we need to check for that and remap it. Ugh.
Ugh.
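Assuming the new knob is the `use_ime` setting, turning the IME off looks like:

```lua
return {
  -- assumed option name; disables the macOS IME so key repeats aren't swallowed
  use_ime = false,
}
```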
Double clicks weren't registering correctly with the new selection
logic. Tell windows that we're doing all our own click counting
and simplify the logic.
Force using xcb-util 0.2.1 precisely because 0.2.2 pulls in a
conflicting major version of xcb (0.8 -> 0.9).
It's a non-trivial upgrade: the types around xkb are different
and features need to be specified in the manifest to enable compilation
of the things that we depend upon.
In addition, xkbcommon, on which we depend, requires xcb 0.8 and
results in pulling in two conflicting versions of the crates.
It's a bit of a painful situation and will require some effort to
figure out how to upgrade the xcb dependency, when we're ready for that.
refs: https://github.com/meh/rust-xcb-util/issues/12
On x11 we'd get just a single line per scroll wheel tick.
Contrast with Wayland where we get multiple.
This config change makes us feel more snappy by default on X11.
I'd like to make this configurable using the live configuration
infra, but we don't currently have a way for this crate to see
that config, so this just changes the default to be "better".
refs: https://github.com/wez/wezterm/issues/92
This reverts commit bfa8d0c207,
which proved not to be needed because it was already covered
by the `KeyboardEvent::Enter` and `KeyboardEvent::Leave` handling.
On a Fedora 31 system running Wayland I noticed that wezterm and
the compositor were running pretty hot on their respective CPU
cores.
It turned out that we had a lot of
[Refresh](https://docs.rs/smithay-client-toolkit/0.6.4/smithay_client_toolkit/window/enum.Event.html#variant.Refresh)
events being generated and consumed. We were treating this as needing
a full paint so we'd be effectively continually running the opengl
paint cycle over and over.
The docs for that event say that it is intended to refresh the client
decorations so let's focus it towards that instead. This does bring
the CPU usage back down to intended levels.
I believe this hot CPU usage to be compositor-dependent: this is the
first I've seen of it out of 4 different Wayland environments!
1f81a064ed added support for noticing
that the dpi scale was not 1 on startup, but the timing of this
signal was different between the opengl and software renderers.
When using the software renderer, we'd end up computing a scaling
change with a pre-change pixel size but adjusted by a post-change
scaling factor, and that effectively caused the window to halve
its size on startup.
This commit improves things by also tracking the dpi in our locally
stored dimensions.
@sunshowers mentioned to me that the window appeared blurry on a hidpi
display on startup, and was fixed by changing focus in a tiling window
manager.
I could replicate this using weston with scaling set to 2; the issue was
that the initial scale factor change event wasn't fully propagated and
bubbled up as a resize event to the terminal layer.
This commit taps into the dpi change event and forces it to be
interpreted as a window configuration change, resulting in more crisp
text.
Adds the ability to specify `--font-shaper Allsorts` and use that
for metrics and shaping.
It is sufficient to expand things like ligatures, but there's something
slightly off about how the metrics are computed and they differ slightly
from the freetype renderer, which leads to some artifacts when rendering
with opengl.
One of my tests is to `grep message src/main.rs` to pull out the line
that has a selection of emoji. The heart emoji is missing from that
line currently.
Refs: https://github.com/wez/wezterm/issues/66
I noticed that we were relatively undersized for newly created
windows; there were two problems:
1. We weren't propagating the old rows and cols counts through
to the speculative resize.
2. The speculative resize wasn't implemented on wayland, and
needs a surprising amount of work to actually make the resize
take effect.