* Be sure we send updates to multiple clients for the same user
* Be sure we send a full contacts update on initial connection
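A tiny sketch of the fan-out this implies. Everything here (`UserId`, `ConnectionId`, `ConnectionPool`) is made up for illustration and is not the real server code:
```rust
use std::collections::{HashMap, HashSet};

type UserId = u64;
type ConnectionId = u64;

/// Hypothetical registry mapping each user to all of their live connections.
#[derive(Default)]
struct ConnectionPool {
    connections_by_user: HashMap<UserId, HashSet<ConnectionId>>,
}

impl ConnectionPool {
    /// On initial connection, register it and return `true` so the caller
    /// knows to send a *full* contacts update to this new connection.
    fn connect(&mut self, user: UserId, connection: ConnectionId) -> bool {
        self.connections_by_user
            .entry(user)
            .or_default()
            .insert(connection)
    }

    /// Every connection belonging to the user should receive contact updates,
    /// not just the most recently registered one.
    fn connections_for(&self, user: UserId) -> impl Iterator<Item = ConnectionId> + '_ {
        self.connections_by_user
            .get(&user)
            .into_iter()
            .flat_map(|set| set.iter().copied())
    }
}

fn main() {
    let mut pool = ConnectionPool::default();
    pool.connect(1, 100);
    pool.connect(1, 101); // same user, second client
    let targets: Vec<_> = pool.connections_for(1).collect();
    assert_eq!(targets.len(), 2); // an update for user 1 goes to both clients
    println!("would send contacts update to {targets:?}");
}
```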
As part of this commit, I fixed an issue where we couldn't disconnect and reconnect in tests. The first disconnect would cause the I/O future to terminate asynchronously, which caused us to sign out even though the active connection didn't belong to that future. I added a guard to ensure that we only sign out if the I/O future is associated with the current connection.
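Roughly, the guard looks something like this (all names are hypothetical; the point is only that a terminating I/O future is compared against the currently active connection before we sign out):
```rust
use std::sync::{Arc, Mutex};

type ConnectionId = u64;

#[derive(Default)]
struct Client {
    active_connection: Mutex<Option<ConnectionId>>,
}

impl Client {
    fn set_active_connection(&self, id: ConnectionId) {
        *self.active_connection.lock().unwrap() = Some(id);
    }

    /// Called when an I/O future terminates. Only sign out if that future's
    /// connection is still the active one; a stale future from a previous
    /// connection must not tear down the new session.
    fn handle_io_terminated(&self, terminated: ConnectionId) {
        let mut active = self.active_connection.lock().unwrap();
        if *active == Some(terminated) {
            *active = None;
            println!("signing out: connection {terminated} was the active one");
        } else {
            println!("ignoring termination of stale connection {terminated}");
        }
    }
}

fn main() {
    let client = Arc::new(Client::default());
    client.set_active_connection(1);
    client.set_active_connection(2); // reconnect before the old future finishes
    client.handle_io_terminated(1);  // stale: ignored, we stay signed in
    client.handle_io_terminated(2);  // current: signs out
}
```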
This ensures that entries don't randomly re-appear on remote worktrees
due to observing an update too late. In fact, it ensures that the remote
worktree has the same starting state as the host before preemptively applying
the fs operation locally.
When opening a buffer, some language servers might start reporting
diagnostics. When closing a buffer, they might report that no diagnostics
are present for that buffer. Previously, we would keep an empty summary entry
which would cause us to open a buffer in the project diagnostics view, only to
drop it because it contained no diagnostics. However, the act of opening it
caused the language server to asynchronously report non-empty diagnostics.
We would therefore handle this as an update, but the previous closing of the
buffer would cause the language server to report empty diagnostics again. This
would cause the project diagnostics view to thrash infinitely between these two
states, pegging the CPU and constantly refreshing the UI.
With this commit, we no longer maintain empty summary entries for files that
contain no diagnostics, which fixes the issue described above.
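The shape of the fix, sketched with a made-up `DiagnosticSummary` type: an empty summary now removes the entry instead of being stored.
```rust
use std::collections::HashMap;
use std::path::PathBuf;

#[derive(Clone, Copy)]
struct DiagnosticSummary {
    error_count: usize,
    warning_count: usize,
}

impl DiagnosticSummary {
    fn is_empty(&self) -> bool {
        self.error_count == 0 && self.warning_count == 0
    }
}

#[derive(Default)]
struct Worktree {
    diagnostic_summaries: HashMap<PathBuf, DiagnosticSummary>,
}

impl Worktree {
    /// Store the summary only if it is non-empty; an empty summary removes the
    /// entry, so the project diagnostics view never opens a buffer just to
    /// discover that it has no diagnostics.
    fn update_diagnostic_summary(&mut self, path: PathBuf, summary: DiagnosticSummary) {
        if summary.is_empty() {
            self.diagnostic_summaries.remove(&path);
        } else {
            self.diagnostic_summaries.insert(path, summary);
        }
    }
}

fn main() {
    let mut worktree = Worktree::default();
    let path = PathBuf::from("src/main.rs");
    worktree.update_diagnostic_summary(
        path.clone(),
        DiagnosticSummary { error_count: 2, warning_count: 0 },
    );
    assert!(worktree.diagnostic_summaries.contains_key(&path));
    // The language server later reports no diagnostics for the closed buffer:
    worktree.update_diagnostic_summary(
        path.clone(),
        DiagnosticSummary { error_count: 0, warning_count: 0 },
    );
    assert!(!worktree.diagnostic_summaries.contains_key(&path));
}
```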
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
We have a hypothesis that the server gets stuck while processing
an incoming message, either because the buffer fills up or because
a handler never completes. This should mitigate that and, once we
add logging, give us some clue as to what is causing it exactly.
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
Previously, we weren't fully clearing the state associated with projects and worktrees when losing the connection. As a result, guest avatars wouldn't disappear and we couldn't re-share upon reconnecting.
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
Also, increase the receive timeout to 30 seconds. We'll still respond immediately
to explicit disconnection, but when there are temporary network blips that
delay pings, we think we should err on the side of keeping the connection
alive. This is in response to a false-positive 'host disconnected' state that
we observed while pairing today: the host (Keith) clearly still had a working
internet connection, which we could tell because we were screen sharing.
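For illustration, a receive timeout around the next incoming message might look like this. tokio is used here purely as an example runtime, and everything apart from the 30-second value is assumed:
```rust
use std::time::Duration;
use tokio::sync::mpsc;
use tokio::time::timeout;

const RECEIVE_TIMEOUT: Duration = Duration::from_secs(30);

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<String>(16);

    // Simulate a peer whose pings are delayed by network blips but still arrive.
    tokio::spawn(async move {
        loop {
            tokio::time::sleep(Duration::from_millis(500)).await;
            if tx.send("ping".into()).await.is_err() {
                break;
            }
        }
    });

    for _ in 0..3 {
        match timeout(RECEIVE_TIMEOUT, rx.recv()).await {
            Ok(Some(message)) => println!("received {message:?}, connection is healthy"),
            Ok(None) => {
                println!("peer closed the connection explicitly; disconnect immediately");
                break;
            }
            Err(_) => {
                println!("no message within {RECEIVE_TIMEOUT:?}; treat the peer as disconnected");
                break;
            }
        }
    }
}
```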
Co-Authored-By: Keith Simmons <keith@zed.dev>
* Make advance_clock more realistic by waking timers in order,
instead of all at once.
* Don't advance the clock when simulating random delays.
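A toy model of the first bullet, using a plain `BinaryHeap` in place of the real test executor's timer queue: advancing the clock wakes timers one expiry at a time rather than all at once.
```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;
use std::time::{Duration, Instant};

/// Toy model of a deterministic executor's timer queue.
struct FakeClock {
    now: Instant,
    // Min-heap of (expiry, timer id), so timers wake in expiry order.
    timers: BinaryHeap<Reverse<(Instant, usize)>>,
}

impl FakeClock {
    fn new() -> Self {
        Self { now: Instant::now(), timers: BinaryHeap::new() }
    }

    fn add_timer(&mut self, id: usize, after: Duration) {
        self.timers.push(Reverse((self.now + after, id)));
    }

    /// Advance the clock, waking each timer *in order* and stopping at each
    /// expiry, so work scheduled by an earlier timer runs before a later timer
    /// fires, instead of waking everything in one batch.
    fn advance_clock(&mut self, duration: Duration) {
        let deadline = self.now + duration;
        while let Some(Reverse((expiry, id))) = self.timers.peek().copied() {
            if expiry > deadline {
                break;
            }
            self.timers.pop();
            self.now = expiry;
            println!("waking timer {id}");
            // ...here the real executor would run any newly-ready futures...
        }
        self.now = deadline;
    }
}

fn main() {
    let mut clock = FakeClock::new();
    clock.add_timer(1, Duration::from_millis(10));
    clock.add_timer(2, Duration::from_millis(30));
    clock.advance_clock(Duration::from_millis(50)); // wakes timer 1, then timer 2
}
```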
Co-Authored-By: Keith Simmons <keith@zed.dev>
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
When a network connection is lost without being explicitly closed by the
other end, writes to that connection will error, but reads will just wait
indefinitely.
This allows the tests to exercise our heartbeat logic.
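A sketch of the kind of test double this enables (hypothetical types, tokio used only for illustration): once the connection is "killed", writes fail but reads never complete, which is exactly the case the heartbeat has to catch.
```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::mpsc;
use tokio::time::timeout;

/// Test double for a network connection. Killing it simulates the network
/// going away without the peer ever closing the socket.
struct TestConnection {
    killed: Arc<AtomicBool>,
    incoming: mpsc::UnboundedReceiver<Vec<u8>>,
}

impl TestConnection {
    fn send(&self, _message: Vec<u8>) -> std::io::Result<()> {
        if self.killed.load(Ordering::SeqCst) {
            // Writes to a dead connection surface an error...
            Err(std::io::Error::new(std::io::ErrorKind::BrokenPipe, "connection reset"))
        } else {
            Ok(())
        }
    }

    async fn recv(&mut self) -> Option<Vec<u8>> {
        if self.killed.load(Ordering::SeqCst) {
            // ...while reads simply hang forever, like a dead socket.
            std::future::pending::<()>().await;
        }
        self.incoming.recv().await
    }
}

#[tokio::main]
async fn main() {
    let killed = Arc::new(AtomicBool::new(true));
    let (_tx, incoming) = mpsc::unbounded_channel();
    let mut connection = TestConnection { killed, incoming };

    assert!(connection.send(b"ping".to_vec()).is_err());

    // Without a heartbeat/receive timeout, this read would block forever.
    let result = timeout(Duration::from_millis(100), connection.recv()).await;
    assert!(result.is_err(), "read on a silently-dead connection never returns");
    println!("read timed out, as the heartbeat logic expects");
}
```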
This allows us to drop the context *after* we've run all futures to
completion, which is crucial; otherwise we'll never drop entities
or flush effects.
However, the randomized integration test is still failing:
```
ITERATIONS=100000 SEED=3027 OPERATIONS=200 cargo test --release test_random --package=zed-server -- --nocapture
```
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
Instead, create an empty worktree on guests when a worktree is first *registered*, then update it via an initial UpdateWorktree message.
This prevents the host from referencing, in definition RPC responses, a worktree that the guest hasn't yet observed. We could have waited until the entire worktree was shared, but that could take a long time, so instead we create an empty one on guests and proceed from there.
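Sketched with made-up types, the ordering looks like this: registration creates an empty worktree immediately, so later RPC responses that mention the worktree id always have something to resolve against, and UpdateWorktree-style messages fill in the entries as they arrive.
```rust
use std::collections::HashMap;

type WorktreeId = u64;

#[derive(Default)]
struct Worktree {
    entries: Vec<String>, // paths, kept trivially simple for the sketch
}

#[derive(Default)]
struct GuestProject {
    worktrees: HashMap<WorktreeId, Worktree>,
}

impl GuestProject {
    /// Handle the registration message: create an empty worktree immediately,
    /// rather than waiting for the (possibly slow) full share to finish.
    fn handle_register_worktree(&mut self, id: WorktreeId) {
        self.worktrees.entry(id).or_default();
    }

    /// Handle the initial (and subsequent) update messages, filling in the
    /// entries the host streams over.
    fn handle_update_worktree(&mut self, id: WorktreeId, entries: Vec<String>) {
        if let Some(worktree) = self.worktrees.get_mut(&id) {
            worktree.entries.extend(entries);
        }
    }

    /// A definition response from the host can now reference the worktree by
    /// id even if its contents haven't fully arrived yet.
    fn resolve_definition(&self, worktree_id: WorktreeId, path: &str) -> Option<String> {
        self.worktrees
            .get(&worktree_id)
            .map(|_| format!("{worktree_id}:{path}"))
    }
}

fn main() {
    let mut project = GuestProject::default();
    project.handle_register_worktree(7);
    // The host references worktree 7 before the share completes; it resolves.
    assert!(project.resolve_definition(7, "src/lib.rs").is_some());
    project.handle_update_worktree(7, vec!["src/lib.rs".into()]);
    println!("worktree 7 now has {} entries", project.worktrees[&7].entries.len());
}
```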
We still have randomized test failures as of this commit:
```
SEED=9519 MAX_PEERS=2 ITERATIONS=10000 OPERATIONS=7 ct -p zed-server test_random_collaboration
```
Co-Authored-By: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
We don't need this anymore because worktree updates are foreground
messages.
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
Co-Authored-By: Max Brunsfeld <max@zed.dev>
Also, add completion requests to the randomized collaboration integration test,
to demonstrate that this is valid.
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
Some types of messages (those that entail state updates on the host) should be
processed in the order in which they were sent. Other types of messages should
not hold up the processing of the messages that follow them.
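One way to express that split, sketched with hypothetical message kinds and tokio standing in for the real runtime: ordered messages are drained by a single task, everything else is handled concurrently.
```rust
use tokio::sync::mpsc;
use tokio::task::JoinSet;

#[derive(Debug)]
enum Message {
    // Entails a state update on the host; must be applied in send order.
    Foreground(&'static str),
    // Can be handled concurrently without blocking other messages.
    Background(&'static str),
}

#[tokio::main]
async fn main() {
    let (foreground_tx, mut foreground_rx) = mpsc::unbounded_channel::<&'static str>();
    let mut background_tasks = JoinSet::new();

    // Single consumer: foreground messages are processed strictly in order.
    let ordered = tokio::spawn(async move {
        while let Some(message) = foreground_rx.recv().await {
            println!("applying state update: {message}");
        }
    });

    let incoming = vec![
        Message::Foreground("update worktree"),
        Message::Background("get definition"),
        Message::Foreground("edit buffer"),
    ];

    for message in incoming {
        match message {
            Message::Foreground(payload) => {
                foreground_tx.send(payload).unwrap();
            }
            Message::Background(payload) => {
                // Each background message gets its own task and can't block others.
                background_tasks.spawn(async move {
                    println!("handling request concurrently: {payload}");
                });
            }
        }
    }

    drop(foreground_tx);
    ordered.await.unwrap();
    while background_tasks.join_next().await.is_some() {}
}
```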
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
When handling these messages on the host, wait until the desired
version has been observed before performing the save.
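A sketch of the waiting behavior with a toy buffer (the real buffer and version types are more involved, and tokio is used only for illustration): the save is deferred until the host's replica has observed at least the requested version.
```rust
use std::sync::Arc;
use tokio::sync::watch;

/// Toy stand-in for a replicated buffer: `version` increases as operations are
/// applied, and `wait_for_version` resolves once the host has observed at
/// least the version the guest's save request referred to.
struct Buffer {
    version: watch::Sender<u64>,
}

impl Buffer {
    fn apply_operation(&self) {
        self.version.send_modify(|v| *v += 1);
    }

    async fn wait_for_version(&self, desired: u64) {
        let mut rx = self.version.subscribe();
        while *rx.borrow() < desired {
            rx.changed().await.expect("buffer dropped");
        }
    }
}

#[tokio::main]
async fn main() {
    let buffer = Arc::new(Buffer { version: watch::channel(0).0 });

    // The guest asks the host to save at version 2, but the host has only seen 0.
    let save = {
        let buffer = buffer.clone();
        tokio::spawn(async move {
            buffer.wait_for_version(2).await;
            println!("host observed the desired version; performing the save");
        })
    };

    // The missing operations arrive shortly afterwards.
    buffer.apply_operation();
    buffer.apply_operation();
    save.await.unwrap();
}
```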
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
* On the server, spawn a separate task for each incoming message
* In the peer, eliminate the barrier that was used to enforce ordering
of responses with respect to other incoming messages
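For the first bullet, a minimal sketch (tokio only for illustration): each incoming message gets its own task, so a slow handler no longer holds up the handlers behind it.
```rust
use std::time::Duration;
use tokio::sync::mpsc;
use tokio::task::JoinSet;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::unbounded_channel::<&'static str>();
    for message in ["slow request", "fast request"] {
        tx.send(message).unwrap();
    }
    drop(tx);

    // One task per incoming message: a handler that takes a long time (or
    // never finishes) no longer blocks the handlers behind it.
    let mut handlers = JoinSet::new();
    while let Some(message) = rx.recv().await {
        handlers.spawn(async move {
            if message == "slow request" {
                tokio::time::sleep(Duration::from_millis(50)).await;
            }
            println!("handled {message}");
        });
    }
    while handlers.join_next().await.is_some() {}
}
```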
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
This is required because, after joining, we want to be able to refer
to operations that have happened prior to joining, which are not
captured by the state. There is probably a way of reconstructing operations
from the state, but that seems unnecessary and we've already talked about
wanting to have the server store operations rather than state once we start
persisting worktrees.
Using a bounded channel may have blocked the collaboration server
from making progress handling RPC traffic.
There's no need to apply backpressure to calling code within the
same process - suspending a task that is attempting to call `send` has
an even greater memory cost than just buffering a protobuf message.
We do still want a bounded channel for incoming messages, so that
we provide backpressure to noisy peers - blocking their writes as opposed
to allowing them to buffer arbitrarily many messages in our server.
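The asymmetry described above, sketched with tokio channels standing in for whatever the server actually uses: outgoing messages go through an unbounded sender so in-process callers never block, while incoming messages go through a bounded channel so a noisy peer gets backpressured.
```rust
use tokio::sync::mpsc;

#[derive(Debug)]
struct Envelope(&'static str);

#[tokio::main]
async fn main() {
    // Outgoing: unbounded. Suspending an in-process caller on `send` would cost
    // more (a parked task plus its state) than just buffering one more message.
    let (outgoing_tx, mut outgoing_rx) = mpsc::unbounded_channel::<Envelope>();
    outgoing_tx.send(Envelope("no await, never blocks the caller")).unwrap();

    // Incoming: bounded. When the buffer is full, `send` awaits, which in a
    // real server translates into not reading more bytes from that peer's
    // socket - backpressure instead of unbounded buffering on our side.
    let (incoming_tx, mut incoming_rx) = mpsc::channel::<Envelope>(32);
    incoming_tx
        .send(Envelope("from the network; may wait if we're behind"))
        .await
        .unwrap();

    println!("{:?}", outgoing_rx.recv().await.unwrap());
    println!("{:?}", incoming_rx.recv().await.unwrap());
}
```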
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
Sometimes we will have more than one worktree associated with the same
language server, and in that case it's unclear which worktree id we should
report an event for.
We hold these locks for a short amount of time anyway, and using an
async lock could cause parallel sends to happen in an order different from
the order in which `send`/`request` was called.
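A sketch of the synchronous-lock version with a toy outgoing queue: because the critical section contains no await point, each message is enqueued in exactly the order `send` was called.
```rust
use std::collections::VecDeque;
use std::sync::Mutex;

/// Toy outgoing-message queue guarded by a synchronous mutex. The lock is held
/// only long enough to push one message and never across an await, so each
/// `send` enqueues its message before the caller can be suspended.
struct Peer {
    outgoing: Mutex<VecDeque<String>>,
}

impl Peer {
    fn send(&self, message: impl Into<String>) {
        self.outgoing.lock().unwrap().push_back(message.into());
    }
}

fn main() {
    let peer = Peer { outgoing: Mutex::new(VecDeque::new()) };
    peer.send("request 1");
    peer.send("request 2");
    let queued: Vec<_> = peer.outgoing.lock().unwrap().iter().cloned().collect();
    assert_eq!(queued, vec!["request 1".to_string(), "request 2".to_string()]);
}
```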
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
The main reason for this is that we need to include information about
a buffer's UndoMap in its protobuf representation. But it's a bit
complex to correctly incorporate this information into the current
protobuf representation.
If we want to continue reusing `Buffer::apply_remote_edit` for
incorporating the historical operations, we need to either make
that method capable of incorporating already-undone edits, or
serialize the UndoMap into undo *operations*, so that we can apply
these undo operations after the fact when deserializing. But this is
not trivial, because an UndoOperation requires information about
the full offset ranges that were undone.
In the case of the new Next.js app, the app will follow a redirect
from 'zed.dev/rpc' to the subdomain where the Rust service is hosted.
Until then, the app will connect directly to zed.dev/rpc.
This will make it possible for us to render their avatars. Previously, we only had the user ids. During rendering, everything needs to be available synchronously, so now, whenever collaborators are added, we perform the async I/O to fetch their user data prior to adding them to the worktree.
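Sketched with made-up `User`/`Collaborator` types: the async fetch happens before the collaborator is inserted, so rendering can read the user data synchronously afterwards.
```rust
use std::collections::HashMap;
use std::sync::Arc;

type UserId = u64;

#[derive(Debug)]
struct User {
    id: UserId,
    github_login: String,
    avatar_url: String,
}

struct Collaborator {
    user: Arc<User>, // full user data, not just the id
    replica_id: u16,
}

#[derive(Default)]
struct Worktree {
    collaborators: HashMap<UserId, Collaborator>,
}

/// Stand-in for the async request to the server for a user's details.
async fn fetch_user(id: UserId) -> Arc<User> {
    Arc::new(User {
        id,
        github_login: format!("user-{id}"),
        avatar_url: format!("https://example.invalid/avatars/{id}"),
    })
}

#[tokio::main]
async fn main() {
    let mut worktree = Worktree::default();

    // Do the async I/O *before* touching the worktree...
    let user = fetch_user(42).await;

    // ...so rendering can read `user.avatar_url` synchronously later on.
    worktree.collaborators.insert(user.id, Collaborator { user, replica_id: 1 });

    let collaborator = &worktree.collaborators[&42];
    println!("render avatar from {}", collaborator.user.avatar_url);
    println!("replica {} belongs to {}", collaborator.replica_id, collaborator.user.github_login);
}
```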
Again, this is about reserving the concept of a "collaborator" for actual collaborators on a worktree.
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
This will allow us to use the word "collaborator" to describe users that are actively collaborating on a worktree.
Co-Authored-By: Antonio Scandurra <me@as-cii.com>