Thorsten Ball 43d1a8040d
linux: run runnables only when event loop is idle (#12839)
This change ensures that the event loop prioritizes enqueueing another
render or handling user input over executing runnables.

It's a subtle change that came out of a week of digging into performance
on X11. It's also not perfect: ideally we'd get rid of the intermediate
channel here and have more control over when and how we run runnables vs.
X11 events, but short of rewriting how we use the event loop, I think
this is a good cost/benefit change.

To illustrate:

Before this change, it was possible to block the app from rendering for
a long time by just creating a ton of futures that were executed on the
"main" thread (we don't have a "main" thread on Linux, but we have a
single thread in which we run the event loop).

That was relatively easy to reproduce by opening the `zed` repository
and starting `rust-analyzer`: at some point `rust-analyzer` sends us so
many notifications (all of which are handled in futures) that the event
loop is busy just working off the runnables, never getting to the events
that X11 sends us or to our own timer that re-enqueues another render.

If you put print statements into the code to show when each event was
handled, you'd see something like this **before this change**:

```
[ ... hundreds of runnable.run() ... ]
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
new render tick timer. lag: 56.942049ms
X11 event
new render tick timer. lag: 9.668µs
X11 event
new render tick timer. lag: 9.955µs
X11 event
runnable.run()
runnable.run()
runnable.run()
runnable.run()
new render tick timer. lag: 12.462µs
X11 event
new render tick timer. lag: 14.868µs
X11 event
new render tick timer. lag: 11.234µs
X11 event
new render tick timer. lag: 11.681µs
X11 event
new render tick timer. lag: 13.926µs
X11 event
```

Note the `lag: 56ms`: that's the difference between when we wanted to
execute the callback that enqueues another render and when it ran.

Longer lags are possible; this is just the first one I grabbed from the
logs.
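
As an illustration of how a lag figure like this can be computed (made-up code, not the actual gpui implementation): record when the timer was supposed to fire and compare it with when the callback actually ran.

```rust
use std::time::{Duration, Instant};

fn main() {
    // When the next render tick *should* fire (made-up 8ms frame interval).
    let frame_interval = Duration::from_millis(8);
    let scheduled_at = Instant::now() + frame_interval;

    // Stand-in for the event loop spending too long on other work
    // (e.g. hundreds of runnables) before it gets back to the timer.
    std::thread::sleep(frame_interval + Duration::from_millis(3));

    // The lag is simply "now" minus "when we wanted to run".
    let lag = Instant::now().saturating_duration_since(scheduled_at);
    println!("new render tick timer. lag: {lag:?}");
}
```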

Now, compare this with the logs **after this change**:

```
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
new render tick timer. lag: 36.051µs
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
X11 event
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
runnable.run()
```

In between the many `runnable.run()` calls, we now always handle events.

So, in essence, what this change does is introduce two priorities into
the X11 event queue:

- high: X11 events (user events, render events, ...), render tick, XIM
  events, ...
- low: all async Rust code
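
Purely as an illustration of the idea (with made-up types, not the gpui code): each turn of the loop drains everything high-priority first and only picks up a runnable once that queue is idle.

```rust
use std::collections::VecDeque;

// Made-up stand-ins: "high priority" windowing-system events and render
// ticks on one queue, "low priority" runnables from the async executor's
// channel on another.
type Runnable = Box<dyn FnOnce()>;

struct EventLoop {
    high_priority: VecDeque<&'static str>, // X11/Wayland events, render ticks, XIM events, ...
    runnables: VecDeque<Runnable>,         // all async Rust code
}

impl EventLoop {
    // One turn of the loop: drain everything high priority first, and only
    // once that queue is idle pick up a *single* runnable, so user input and
    // the next render are never stuck behind a long backlog of futures.
    fn turn(&mut self) {
        while let Some(event) = self.high_priority.pop_front() {
            println!("handling {event}");
        }
        if let Some(runnable) = self.runnables.pop_front() {
            runnable();
        }
    }
}

fn main() {
    let mut event_loop = EventLoop {
        high_priority: VecDeque::from(["X11 event", "render tick"]),
        runnables: (0..3)
            .map(|i| Box::new(move || println!("runnable {i}.run()")) as Runnable)
            .collect(),
    };
    for _ in 0..4 {
        event_loop.turn();
    }
}
```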

I've tested this with a debug build and a release build, and I think the
app now feels more responsive. It still doesn't feel perfect, especially
in the slow debug builds, but I couldn't observe 10s lockups anymore.

Since it's a pretty small change, I think we should go for it and see
how it behaves.

Thanks to @maan2003, this now also includes the same change for Wayland.

Release Notes:

- N/A

---------

Co-authored-by: maan2003 <manmeetmann2003@gmail.com>
2024-06-10 14:04:41 +02:00

# Welcome to GPUI!

GPUI is a hybrid immediate and retained mode, GPU accelerated, UI framework for Rust, designed to support a wide variety of applications.

## Getting Started

GPUI is still in active development as we work on the Zed code editor and isn't yet on crates.io. You'll also need to use the latest version of stable rust and be on macOS. Add the following to your Cargo.toml:

```toml
gpui = { git = "https://github.com/zed-industries/zed" }
```

Everything in GPUI starts with an `App`. You can create one with `App::new()`, and kick off your application by passing a callback to `App::run()`. Inside this callback, you can create a new window with `AppContext::open_window()`, and register your first root view. See gpui.rs for a complete example.
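
For orientation, here's a sketch along the lines of the `hello_world` example in the repo; exact signatures may differ between GPUI versions, so treat it as illustrative.

```rust
use gpui::*;

struct HelloWorld {
    text: SharedString,
}

impl Render for HelloWorld {
    fn render(&mut self, _cx: &mut ViewContext<Self>) -> impl IntoElement {
        // Build a tree of elements with the tailwind-style API.
        div()
            .flex()
            .size_full()
            .justify_center()
            .items_center()
            .text_xl()
            .child(format!("Hello, {}!", &self.text))
    }
}

fn main() {
    App::new().run(|cx: &mut AppContext| {
        // Open a window and register the root view that GPUI will render
        // at the start of each frame.
        cx.open_window(WindowOptions::default(), |cx| {
            cx.new_view(|_cx| HelloWorld {
                text: "World".into(),
            })
        });
    });
}
```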

## The Big Picture

GPUI offers three different registers depending on your needs:

- State management and communication with Models. Whenever you need to store application state that communicates between different parts of your application, you'll want to use GPUI's models. Models are owned by GPUI and are only accessible through an owned smart pointer similar to an `Rc`. See the `app::model_context` module for more information; a short sketch follows this list.

- High level, declarative UI with Views. All UI in GPUI starts with a View. A view is simply a model that can be rendered, via the `Render` trait. At the start of each frame, GPUI will call this render method on the root view of a given window. Views build a tree of elements, lay them out and style them with a Tailwind-style API, and then give them to GPUI to turn into pixels. See the `div` element for an all-purpose Swiss Army knife of rendering.

- Low level, imperative UI with Elements. Elements are the building blocks of UI in GPUI, and they provide a nice wrapper around an imperative API that provides as much flexibility and control as you need. Elements have total control over how they and their child elements are rendered and can be used to make efficient views into large lists, implement custom layout for a code editor, and anything else you can think of. See the `element` module for more information.
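
As a rough sketch of the model register (illustrative only; the `Counter` type is made up, and the exact `new_model`/`update`/`read` signatures may differ between versions):

```rust
use gpui::*;

// A hypothetical piece of application state owned by GPUI.
struct Counter {
    count: usize,
}

fn main() {
    App::new().run(|cx: &mut AppContext| {
        // GPUI owns the state; we only hold a Model<Counter> handle to it.
        let counter: Model<Counter> = cx.new_model(|_cx| Counter { count: 0 });

        // Mutations go through the handle, which also hands us a
        // ModelContext for notifying observers.
        counter.update(cx, |counter, cx| {
            counter.count += 1;
            cx.notify();
        });

        println!("count = {}", counter.read(cx).count);
        cx.quit();
    });
}
```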

Each of these registers has one or more corresponding contexts that can be accessed from all GPUI services. This context is your main interface to GPUI, and is used extensively throughout the framework.

## Other Resources

In addition to the systems above, GPUI provides a range of smaller services that are useful for building complex applications:

- Actions are user-defined structs that are used for converting keystrokes into logical operations in your UI. Use this for implementing keyboard shortcuts, such as `cmd-q` (a sketch follows this list). See the `action` module for more information.

- Platform services, such as quitting the app or opening a URL, are available as methods on the `app::AppContext`.

- An async executor that is integrated with the platform's event loop. See the `executor` module for more information.

- The `#[gpui::test]` macro provides a convenient way to write tests for your GPUI applications. Tests also have their own kind of context, a `TestAppContext` which provides ways of simulating common platform input. See the `app::test_context` and `test` modules for more details.
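
As a rough sketch of wiring a keyboard shortcut to an action (illustrative; the exact `actions!`, `KeyBinding`, and `on_action` signatures may differ between versions, and `my_app` is a made-up namespace):

```rust
use gpui::*;

// Generates a unit struct per name that implements the `Action` trait.
actions!(my_app, [Quit]);

fn main() {
    App::new().run(|cx: &mut AppContext| {
        // Global handler: quit whenever the `Quit` action is dispatched.
        cx.on_action(|_: &Quit, cx| cx.quit());

        // Map a keystroke to the action (no key-context predicate).
        cx.bind_keys([KeyBinding::new("cmd-q", Quit, None)]);
    });
}
```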

Currently, the best way to learn about these APIs is to read the Zed source code, ask us about it at a fireside hack, or drop a question in the Zed Discord. We're working on improving the documentation, creating more examples, and will be publishing more guides to GPUI on our blog.