* Be sure we send updates to multiple clients for the same user
* Be sure we send a full contacts update on initial connection
As part of this commit, I fixed an issue where we couldn't disconnect and reconnect in tests. The first disconnect would cause the I/O future to terminate asynchronously, which would sign us out even though the active connection no longer belonged to that future. I added a guard to ensure that we only sign out if the I/O future is associated with the current connection.
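A minimal sketch of that guard, assuming an epoch-style connection identifier; `Client`, `connection_epoch`, and the method names here are illustrative, not the actual API:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Hypothetical client state: `connection_epoch` is bumped on every
/// (re)connect, so a finished I/O future can tell whether it still
/// belongs to the active connection.
struct Client {
    connection_epoch: AtomicUsize,
}

impl Client {
    /// Establish a new connection and return its epoch.
    fn connect(&self) -> usize {
        self.connection_epoch.fetch_add(1, Ordering::SeqCst) + 1
    }

    /// Called when an I/O future terminates: only sign out if that future
    /// was driving the connection that is still current.
    fn io_future_terminated(&self, epoch: usize) {
        if self.connection_epoch.load(Ordering::SeqCst) == epoch {
            self.sign_out();
        }
    }

    fn sign_out(&self) {
        println!("signed out");
    }
}

fn main() {
    let client = Client { connection_epoch: AtomicUsize::new(0) };
    let first = client.connect();
    let _second = client.connect(); // reconnect, as in the tests
    // The stale I/O future from the first connection terminates late;
    // the guard keeps it from signing us out.
    client.io_future_terminated(first);
}
```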
Also, allow arbitrary types to be used as Actions via the impl_actions macro
Co-authored-by: Nathan Sobo <nathan@zed.dev>
Co-authored-by: Keith Simmons <keith@zed.dev>
Previously, we were achieving this by deleting the keychain item, but that can sometimes fail, which led to an infinite loop. Now, we explicitly skip the keychain when reattempting authentication after a failed attempt.
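A sketch of that behavior under invented names (`authenticate`, `read_keychain`, and the rest are not the real functions): the retry path simply never consults the keychain.

```rust
fn read_keychain() -> Option<String> {
    Some("possibly-stale-token".into())
}

fn log_in(credentials: &str) -> Result<(), String> {
    if credentials == "possibly-stale-token" {
        Err("server rejected credentials".into())
    } else {
        Ok(())
    }
}

fn prompt_user_for_credentials() -> String {
    "fresh-credentials".into()
}

/// `try_keychain` controls whether stored credentials may be used at all.
fn authenticate(try_keychain: bool) -> Result<(), String> {
    if try_keychain {
        if let Some(stored) = read_keychain() {
            return log_in(&stored);
        }
    }
    log_in(&prompt_user_for_credentials())
}

fn main() {
    // The first attempt may read the keychain; the retry explicitly skips it,
    // so a bad keychain entry can never cause an infinite loop.
    let result = authenticate(true).or_else(|_| authenticate(false));
    assert!(result.is_ok());
}
```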
Co-Authored-By: Max Brunsfeld <maxbrunsfeld@gmail.com>
* Make advance_clock more realistic by waking timers in order
instead of all at once (see the sketch after this list).
* Don't advance the clock when simulating random delays.
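Roughly what "waking timers in order" means, as a sketch; `FakeExecutor` and its fields are stand-ins for the real deterministic executor:

```rust
use std::time::Duration;

/// Illustrative test-executor state: a fake clock plus the deadlines of
/// pending timers.
struct FakeExecutor {
    now: Duration,
    timers: Vec<Duration>,
}

impl FakeExecutor {
    /// Advance the fake clock, firing timers one at a time in deadline order
    /// instead of waking everything inside the window at once.
    fn advance_clock(&mut self, by: Duration) {
        let end = self.now + by;
        loop {
            self.timers.sort();
            match self.timers.first().copied() {
                Some(deadline) if deadline <= end => {
                    // Step the clock only as far as the next timer before
                    // firing it, so tasks it wakes observe the intermediate time.
                    self.now = deadline;
                    self.timers.remove(0);
                    println!("fired timer at {:?}", self.now);
                }
                _ => break,
            }
        }
        self.now = end;
    }
}

fn main() {
    let mut executor = FakeExecutor {
        now: Duration::ZERO,
        timers: vec![Duration::from_millis(30), Duration::from_millis(10)],
    };
    executor.advance_clock(Duration::from_millis(50));
}
```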
Co-Authored-By: Keith Simmons <keith@zed.dev>
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
When a network connection is lost without being explicitly closed by the
other end, writes to that connection will error, but reads will just wait
indefinitely.
This allows the tests to exercise our heartbeat logic.
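A sketch of the kind of heartbeat this enables, using tokio purely for illustration (the names and timings are made up): because reads on a dead connection hang instead of erroring, the wait for a pong is wrapped in a timeout and expiry is treated as a lost connection.

```rust
use std::{future::pending, time::Duration};
use tokio::time::timeout;

/// Stand-in for a connection whose peer silently disappeared: reads never
/// complete.
struct DeadConnection;

impl DeadConnection {
    async fn send_ping(&mut self) {}
    async fn recv_pong(&mut self) {
        pending::<()>().await // a read that waits forever
    }
}

/// Illustrative keepalive loop, not the actual implementation.
async fn keepalive(mut conn: DeadConnection) -> Result<(), &'static str> {
    loop {
        conn.send_ping().await;
        match timeout(Duration::from_secs(1), conn.recv_pong()).await {
            Ok(()) => continue, // pong arrived; the connection is alive
            Err(_) => return Err("no pong within the keepalive interval"),
        }
    }
}

#[tokio::main]
async fn main() {
    assert!(keepalive(DeadConnection).await.is_err());
    println!("heartbeat detected the dead connection");
}
```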
This allows us to drop the context *after* we have run all futures to
completion, which is crucial: otherwise we would never drop entities
or flush effects.
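A toy illustration of that ordering, with made-up `Executor` and `Context` types standing in for the real ones:

```rust
/// Queued work standing in for spawned futures and effects.
struct Executor {
    pending: Vec<Box<dyn FnOnce()>>,
}

impl Executor {
    fn run_until_parked(&mut self) {
        // Drain every queued future before anything else is torn down.
        for task in self.pending.drain(..) {
            task();
        }
    }
}

/// Owns entities; dropping it is what releases them and flushes effects.
struct Context {
    entities: Vec<String>,
}

impl Drop for Context {
    fn drop(&mut self) {
        println!("dropping {} entities and flushing effects", self.entities.len());
    }
}

fn main() {
    let mut executor = Executor {
        pending: vec![Box::new(|| println!("future ran to completion"))],
    };
    let cx = Context { entities: vec!["buffer".into()] };

    // Run all futures to completion first...
    executor.run_until_parked();
    // ...and only then drop the context, so entities are actually dropped
    // and effects are flushed.
    drop(cx);
}
```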
* Don't send a chat message before the previous chat message
is acknowledged (see the sketch after this list).
* Fix emitting of notifications in RPC server
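A sketch of the acknowledgment ordering from the first bullet; `Outbox` and its methods are invented for the example:

```rust
use std::collections::VecDeque;

/// Messages queue locally, and the next one is only sent once the server has
/// acknowledged the previous one, preserving order.
struct Outbox {
    queue: VecDeque<String>,
    awaiting_ack: bool,
}

impl Outbox {
    fn send(&mut self, message: String) {
        self.queue.push_back(message);
        self.try_send_next();
    }

    fn acknowledged(&mut self) {
        self.awaiting_ack = false;
        self.try_send_next();
    }

    fn try_send_next(&mut self) {
        if self.awaiting_ack {
            return; // the previous message hasn't been acknowledged yet
        }
        if let Some(message) = self.queue.pop_front() {
            println!("sending: {message}");
            self.awaiting_ack = true;
        }
    }
}

fn main() {
    let mut outbox = Outbox { queue: VecDeque::new(), awaiting_ack: false };
    outbox.send("first".into());
    outbox.send("second".into()); // queued until "first" is acknowledged
    outbox.acknowledged();        // now "second" goes out
}
```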
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
Using a bounded channel may have blocked the collaboration server
from making progress handling RPC traffic.
There's no need to apply backpressure to calling code within the
same process - suspending a task that is attempting to call `send` has
an even greater memory cost than just buffering a protobuf message.
We do still want a bounded channel for incoming messages, so that
we provide backpressure to noisy peers - blocking their writes as opposed
to allowing them to buffer arbitrarily many messages in our server.
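A sketch of that split using std channels purely for illustration (the server itself uses async channels): unbounded for outgoing messages within the process, bounded for messages arriving from peers.

```rust
use std::sync::mpsc::{channel, sync_channel};
use std::thread;

fn main() {
    // Outgoing: an unbounded channel, so code in the same process never
    // blocks on `send`; buffering a protobuf message costs less memory than
    // suspending the sending task would.
    let (outgoing_tx, outgoing_rx) = channel::<Vec<u8>>();
    outgoing_tx.send(b"encoded protobuf".to_vec()).unwrap();

    // Incoming: a *bounded* channel, so a noisy peer is blocked at the socket
    // once the buffer fills instead of being allowed to buffer arbitrarily
    // many messages in our server.
    let (incoming_tx, incoming_rx) = sync_channel::<Vec<u8>>(64);

    let reader = thread::spawn(move || {
        // In a real server this loop would read frames off the socket;
        // `send` blocks once 64 messages are waiting to be processed.
        incoming_tx.send(b"message from peer".to_vec()).unwrap();
    });

    reader.join().unwrap();
    println!("outgoing: {:?}", outgoing_rx.recv().unwrap());
    println!("incoming: {:?}", incoming_rx.recv().unwrap());
}
```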
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
We hold these locks for a short amount of time anyway, and using an
async lock could cause parallel sends to happen in an order different
than the order in which `send`/`request` was called.
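A sketch of the reasoning with an invented `Peer` type: because the synchronous lock is never held across an await point, each call to `send` enqueues its message before returning, so messages go out in call order.

```rust
use std::sync::Mutex;

/// A synchronous mutex guards the write side of the connection.
struct Peer {
    outgoing: Mutex<Vec<String>>,
}

impl Peer {
    fn send(&self, message: &str) {
        // Lock, push, unlock: no await point in between, so the message is
        // enqueued before `send` returns and ordering matches the call order.
        self.outgoing.lock().unwrap().push(message.to_string());
    }
}

fn main() {
    let peer = Peer { outgoing: Mutex::new(Vec::new()) };
    peer.send("first");
    peer.send("second");
    println!("{:?}", peer.outgoing.lock().unwrap());
}
```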
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
This commit fixes a panic that could occur when registering N subscriptions for
N entities of the same kind. Before, when dropping the first of the
subscriptions, we would remove the entity ID extractor as well. This was,
however, used by all the other N - 1 subscriptions, which would then start
losing messages. In addition, dropping yet another subscription of that kind
would result in a panic, because we wouldn't find the extractor in the map
upon invoking `Subscription::drop`.
With this change we will avoid removing the ID extractor when dropping a
subscription. Crucially, we also avoid inserting extractors for simple message
subscriptions. This enables these non-entity subscriptions to be dropped and
re-registered without seeing a "registered handler for the same message twice"
panic.
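A sketch of the fixed bookkeeping; `Registry`, `subscribe`, `unsubscribe`, and the `UpdateBuffer` stand-in are invented for the example, but the shape matches the description above: one shared ID extractor per message type that survives individual subscriptions being dropped.

```rust
use std::collections::{HashMap, HashSet};

struct Registry {
    /// One ID extractor shared by every subscription for a message type.
    extractors: HashMap<&'static str, fn(&str) -> u64>,
    /// The entity IDs that currently have live subscriptions.
    subscriptions: HashMap<&'static str, HashSet<u64>>,
}

impl Registry {
    fn subscribe(&mut self, message_type: &'static str, entity_id: u64, extract: fn(&str) -> u64) {
        // Inserting the extractor is idempotent; all N subscriptions share it.
        self.extractors.entry(message_type).or_insert(extract);
        self.subscriptions.entry(message_type).or_default().insert(entity_id);
    }

    fn unsubscribe(&mut self, message_type: &'static str, entity_id: u64) {
        // Remove only this entity's subscription. The extractor stays
        // registered, so the other N - 1 subscriptions keep receiving
        // messages and a later drop can't panic on a missing extractor.
        if let Some(ids) = self.subscriptions.get_mut(message_type) {
            ids.remove(&entity_id);
        }
    }
}

fn main() {
    fn extract_id(message: &str) -> u64 {
        message.len() as u64 // stand-in for reading an ID field off a protobuf
    }

    let mut registry = Registry {
        extractors: HashMap::new(),
        subscriptions: HashMap::new(),
    };
    registry.subscribe("UpdateBuffer", 1, extract_id);
    registry.subscribe("UpdateBuffer", 2, extract_id);

    registry.unsubscribe("UpdateBuffer", 1);
    // The shared extractor is still there for entity 2's subscription.
    assert!(registry.extractors.contains_key("UpdateBuffer"));
}
```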
Once the new Next.js app ships, it will follow a redirect
from 'zed.dev/rpc' to the subdomain where the Rust service is hosted.
Until then, the app will connect directly to zed.dev/rpc.
This will make it possible for us to render their avatars. Previously we only had the user IDs, but during rendering everything needs to be available synchronously. So now, whenever collaborators are added, we perform the async I/O to fetch their user data before adding them to the worktree.
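Roughly the shape of that change, with made-up types (`User`, `Worktree`, `fetch_users`) and the `futures` executor used only to drive the example:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug)]
struct User {
    id: u64,
    avatar_url: String,
}

struct Worktree {
    collaborators: HashMap<u64, User>,
}

impl Worktree {
    /// Collaborators arrive with their user data already loaded.
    fn add_collaborators(&mut self, users: Vec<User>) {
        for user in users {
            self.collaborators.insert(user.id, user);
        }
    }

    /// Synchronous: everything rendering needs is already in memory.
    fn render_avatars(&self) -> Vec<&str> {
        self.collaborators.values().map(|user| user.avatar_url.as_str()).collect()
    }
}

/// Stand-in for the async I/O that resolves user IDs to user records.
async fn fetch_users(ids: Vec<u64>) -> Vec<User> {
    ids.into_iter()
        .map(|id| User { id, avatar_url: format!("https://example.invalid/avatar/{id}") })
        .collect()
}

fn main() {
    let mut worktree = Worktree { collaborators: HashMap::new() };
    // Do the async fetch *before* mutating the worktree...
    let users = futures::executor::block_on(fetch_users(vec![1, 2]));
    // ...so adding collaborators and rendering stay synchronous.
    worktree.add_collaborators(users);
    println!("{:?}", worktree.render_avatars());
}
```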
This will allow us to use the word "collaborator" to describe users that are actively collaborating on a worktree.
Co-Authored-By: Antonio Scandurra <me@as-cii.com>