From a4d16e61c8e26417cac60a1d2df0bcefe099314f Mon Sep 17 00:00:00 2001
From: Max Brunsfeld
Date: Mon, 31 May 2021 15:31:57 -0700
Subject: [PATCH 1/2] Add document outlining plans for collaboration

Co-Authored-By: Nathan Sobo
---
 docs/collaboration-v1-plan.md | 56 +++++++++++++++++++++++++++++++++++
 1 file changed, 56 insertions(+)
 create mode 100644 docs/collaboration-v1-plan.md

diff --git a/docs/collaboration-v1-plan.md b/docs/collaboration-v1-plan.md
new file mode 100644
index 0000000000..0a7bd0d5ec
--- /dev/null
+++ b/docs/collaboration-v1-plan.md
@@ -0,0 +1,56 @@
+# Collaboration V1
+
+### Sharing UI
+
+* For each worktree that I edit in Zed, there is a *Share* button that I can click to turn *sharing*
+  on or off for that worktree.
+* For each worktree that I share, Zed shows me a URL that I can give to others to let them
+  collaboratively edit that worktree.
+* __Question__ - Does the sharing on/off state of each worktree persist across application restart?
+  When I close Zed while sharing a worktree, should I resume sharing when I reopen Zed?
+  Pros:
+  * This would remove friction from teams collaborating continuously.
+  Cons:
+  * I might have added something secret to the worktree since I last opened Zed. Could we detect
+    changes that have occurred outside of Zed, and avoid auto-sharing on startup when that has
+    happened?
+
+### Sharing Semantics
+
+* While sharing, the entire state of my worktree is replicated and stored forever on the Zed server.
+  Other collaborators can freely read the last state of my worktree, even after I've quit Zed.
+* __Potential Scope Cut__ - For now, we won't store the history locally, as this isn't needed for
+  collaboration. Later, we may explore keeping a partial history locally as well, to support using
+  the history while offline. A local history would allow:
+  * Undo after re-opening a buffer.
+  * Avoiding redundant uploads when re-opening a buffer while sharing.
+
+* When I begin sharing:
+  * Immediately, I upload a list of all the paths in my worktree, along with a digest of each path.
+  * The server responds with a list of the paths that it needs.
+  * First, I upload the contents of all of my open buffers.
+  * At this point, sharing has begun. I am shown a URL.
+  * Asynchronously, I upload the contents of all other files in my worktree that the server needs.
+* While I'm sharing:
+  * Buffer operations are streamed to the Zed server, and to any peers that I'm collaborating with.
+  * When FS changes are detected to files that I *don't* have open:
+    * I again upload to the server a list of the paths that changed and their new digests.
+    * The server responds with a list of paths that it needs.
+    * Asynchronously, I upload the new contents of these paths.
+  * If a peer requests to open one of my files that I haven't yet asynchronously uploaded, then
+    the server tells me to upload the contents of that file immediately.
+* When I stop sharing:
+  * I immediately stop uploading anything to the Zed server.
+
+* __Question__ - If, while sharing, I undo an operation that I performed while *not* sharing, what
+  information do I need to send to the server?
+  * Can we transmit the operation as a new `Edit`, instead of as an `Undo`, so that the server can see
+    the details of the operation? Will this be guaranteed to converge, since there can't have been any
+    operations concurrent with the undone operation?
+
+### Further Improvements
+
+* When we add a local persistent history of our worktree, we will be able to
+  avoid uploading entire snapshots of files that have changed since our last sharing session.
+  Instead, the server can report the last version vector that it has seen for a file,
+  and we can use that to construct a diff based on our history.
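The digest exchange in the sharing flow above can be sketched as a pure server-side function. This is a minimal sketch with hypothetical names (`Digest`, `paths_needed`), not Zed's actual types: the client reports a `(path, digest)` pair per file, and the server answers with the paths whose contents it still needs.

```rs
use std::collections::HashMap;

/// Hypothetical digest type; a real implementation would use a content hash.
type Digest = u64;

/// Server-side sketch: given the client's reported (path, digest) pairs and
/// the digests the server already has stored, return the paths whose
/// contents the server still needs the client to upload.
fn paths_needed(reported: &[(String, Digest)], stored: &HashMap<String, Digest>) -> Vec<String> {
    reported
        .iter()
        .filter(|(path, digest)| stored.get(path) != Some(digest))
        .map(|(path, _)| path.clone())
        .collect()
}
```

The same comparison covers both the initial upload and the later FS-change notifications, since both steps amount to "here are my paths and digests; tell me which contents you need".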
\ No newline at end of file

From 6d83ed2824957f8168b30d2623a12b01893c967a Mon Sep 17 00:00:00 2001
From: Nathan Sobo
Date: Mon, 31 May 2021 17:40:39 -0600
Subject: [PATCH 2/2] Add RPC implementation details to the collaboration plan

---
 docs/collaboration-v1-plan.md | 64 +++++++++++++++++++++++++++++++++--
 1 file changed, 61 insertions(+), 3 deletions(-)

diff --git a/docs/collaboration-v1-plan.md b/docs/collaboration-v1-plan.md
index 0a7bd0d5ec..1064d9d3bc 100644
--- a/docs/collaboration-v1-plan.md
+++ b/docs/collaboration-v1-plan.md
@@ -1,6 +1,7 @@
 # Collaboration V1
 
-### Sharing UI
+
+## Sharing UI
 
 * For each worktree that I edit in Zed, there is a *Share* button that I can click to turn *sharing*
   on or off for that worktree.
@@ -15,7 +16,7 @@
   changes that have occurred outside of Zed, and avoid auto-sharing on startup when that has
   happened?
 
-### Sharing Semantics
+## Sharing Semantics
 
 * While sharing, the entire state of my worktree is replicated and stored forever on the Zed server.
   Other collaborators can freely read the last state of my worktree, even after I've quit Zed.
@@ -53,4 +54,61 @@
 * When we add a local persistent history of our worktree, we will be able to
   avoid uploading entire snapshots of files that have changed since our last sharing session.
   Instead, the server can report the last version vector that it has seen for a file,
-  and we can use that to construct a diff based on our history.
\ No newline at end of file
+  and we can use that to construct a diff based on our history.
+
+## RPC implementation details
+
+Every client will have a single TCP connection to `zed.dev`.
+
+The API will consist of resources named with URL-like paths, for example: `/worktrees/1`.
+
+You'll be able to communicate with any resource in the following ways:
+
+* `send`: A "fire-and-forget" message with no reply. (We may not need this.)
+* `request`: A message that expects a reply message that is tagged with the same sequence number as the request.
+* `request_stream`: A message that expects a series of reply messages that are tagged with the same sequence number as the request. Unsure if this is needed beyond `subscribe`.
+* `subscribe`: Returns a stream that allows the resource to emit messages at any time in the future. When the stream is dropped, we unsubscribe automatically.
+
+Any resource you can subscribe to is considered a *channel*, and all of its processing needs to occur on a single machine. We'll recognize channels based on their URL pattern and handle them specially in our frontend servers. For any channel, the frontend will perform a lookup for the machine on which that channel exists. If no such machine exists, we'll select one. Maybe it's always the frontend itself? If a channel already exists on another server, we'll proxy the connection through the frontend and relay any broadcasts from this channel to the client.
+
+The client will interact with the server via an `api::Client` object. Model objects with remote behavior will interact directly with this client to communicate with the server. For example, `Worktree` will be changed to an enum type with `Local` and `Remote` variants. The local variant will have an optional `client` in order to stream local changes to the server when sharing. The remote variant will always have a client and implement all worktree operations in terms of it.
+
+```rs
+enum Worktree {
+    Local {
+        client: Option<Client>,
+    },
+    Remote {
+        client: Client,
+    },
+}
+
+impl Worktree {
+    async fn remote(client: Client, id: u64, cx: &mut ModelContext<Self>) -> anyhow::Result<Self> {
+        // Subscribe to the stream of all worktree events going forward.
+        let events = client.subscribe(format!("/worktrees/{}", id)).await?;
+        // Stream the entries of the worktree.
+        let entry_chunks = client.request_stream(/* initial entries for this worktree */);
+
+        // In the background, populate all worktree entries in the initial stream and process any change events.
+        // This is similar to what we do
+        let _handle = thread::spawn(move || smol::block_on(async move {
+            for chunk in entry_chunks {
+                // Grab the lock and fill in the new entries
+            }
+
+            while let Some(event) = events.recv_next().await {
+                // Update the tree
+            }
+        }));
+
+        // The _handle depicted here won't actually work, but we need to terminate the thread and drop the subscription
+        // when the Worktree is dropped... maybe we use a similar approach to how we handle local worktrees.
+
+        Ok(Self::Remote {
+            _handle,
+            client,
+        })
+    }
+}
+```
\ No newline at end of file
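The reply-matching that the `request` primitive in the RPC section depends on can be sketched as a small bookkeeping structure. Names here (`PendingRequests`, `send`, `on_reply`) are hypothetical; the real `api::Client` would presumably store reply channels rather than strings, but the sequence-number discipline is the same.

```rs
use std::collections::HashMap;

/// Sketch of matching replies to in-flight requests by sequence number.
struct PendingRequests {
    next_seq: u64,
    pending: HashMap<u64, String>, // seq -> placeholder for a pending reply slot
}

impl PendingRequests {
    fn new() -> Self {
        Self { next_seq: 0, pending: HashMap::new() }
    }

    /// Tag an outgoing request with a fresh sequence number.
    fn send(&mut self, resource: &str) -> u64 {
        let seq = self.next_seq;
        self.next_seq += 1;
        self.pending.insert(seq, resource.to_string());
        seq
    }

    /// A reply tagged with `seq` resolves the matching request, if any;
    /// a second reply with the same tag finds nothing.
    fn on_reply(&mut self, seq: u64) -> Option<String> {
        self.pending.remove(&seq)
    }
}
```

A `request_stream` would differ only in leaving the entry in the map until an end-of-stream message arrives, so that each intermediate reply can find its request.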