mononoke: fix the discovery problem

Summary:
This fixes a tricky case in the discovery logic and adds a test to cover it.
Previously Mononoke's known() wireproto method returned `true` for both public
and draft commits. The problem is that this affects pushrebase.

There are a few problems with the current setup. A push command like `hg push
-r HASH --to BOOK` may actually do one of two things - it can either move a
bookmark on the server or do a pushrebase. Which one it does depends on how the
discovery phase of the push finishes.

Each `hg push` starts with a discovery algorithm that tries to figure out which
commits to send to the server. If the client decides that the server already
has all the commits, then it just moves the bookmark; otherwise it runs a
pushrebase.
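
Schematically, the client-side choice looks like this (a minimal sketch with
hypothetical names, not the actual hg client code):

    /// If the server reported every outgoing commit as known, the push
    /// degenerates into a plain bookmark move; otherwise the commits are
    /// bundled up and pushrebased.
    enum PushAction {
        MoveBookmark,
        Pushrebase,
    }

    fn decide_push_action(server_knows: &[bool]) -> PushAction {
        if server_knows.iter().all(|&known| known) {
            PushAction::MoveBookmark
        } else {
            PushAction::Pushrebase
        }
    }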

During discovery the client sends the wireproto `known()` request to the server
with a list of commit hashes, and the server returns a list of booleans telling
whether it knows each commit. Before this diff Mononoke returned true for both
draft and public commits, while Mercurial returns true only for public commits.
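
At the wire level the request carries a space-separated list of 40-hex-digit
hashes and the response is one '0'/'1' byte per queried hash, in order. A
sketch of the response encoding, mirroring the encoder touched in this diff:

    /// One byte per queried hash: b'1' if the server knows it, b'0' if not.
    /// The order matches the order of the hashes in the request.
    fn encode_known_response(knowns: Vec<bool>) -> Vec<u8> {
        knowns
            .into_iter()
            .map(|known| if known { b'1' } else { b'0' })
            .collect()
    }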

So if Mononoke already has a draft commit (it might have it because the commit
was created via `hg pushbackup` or in a previous unsuccessful push attempt),
then hg client discovery will decide to move the bookmark instead of
pushrebasing, which in the case of the master bookmark might have disastrous
consequences.

To fix it, let's return false for draft commits, and also implement
`knownnodes`, which returns true for draft commits as well (better names for
these methods would be `knownpublic` and `known`).
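
In other words (a minimal sketch with hypothetical types, just to pin down the
intended semantics of the two methods):

    use std::collections::HashMap;

    #[derive(PartialEq)]
    enum Phase {
        Public,
        Draft,
    }

    /// `known` (better thought of as `knownpublic`): true only for commits
    /// the server has *and* considers public.
    fn known(phases: &HashMap<String, Phase>, queried: &[String]) -> Vec<bool> {
        queried
            .iter()
            .map(|hash| phases.get(hash) == Some(&Phase::Public))
            .collect()
    }

    /// `knownnodes` (better thought of as `known`): true for any commit the
    /// server has, draft or public.
    fn knownnodes(phases: &HashMap<String, Phase>, queried: &[String]) -> Vec<bool> {
        queried
            .iter()
            .map(|hash| phases.contains_key(hash))
            .collect()
    }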

Note though that in order to trigger the problem, the position of the bookmark
on the server needs to differ from the position of the bookmark on the client.
This is because of short-circuiting in the hg client discovery logic (see
https://fburl.com/s5r76yle).

The potential downside of this change is that we'll fetch bookmarks more often,
but we can add a bookmark cache later if necessary.

Reviewed By: ikostia

Differential Revision: D14560355

fbshipit-source-id: b943714199576e14a32e87f325ae8059d95cb8ed
Stanislau Hlebik 2019-03-25 02:17:27 -07:00 committed by Facebook Github Bot
parent 152d8a7e12
commit a98f532abd
11 changed files with 344 additions and 88 deletions


@@ -50,7 +50,6 @@ use slog::Logger;
use stats::Timeseries;
use std::collections::HashMap;
use std::convert::From;
use std::iter::FromIterator;
use std::str::FromStr;
use std::sync::Arc;
use time_ext::DurationExt;
@@ -500,20 +499,6 @@ impl BlobRepo {
.boxify()
}
pub fn many_changesets_exists(
&self,
ctx: CoreContext,
changesetids: &[HgChangesetId],
) -> BoxFuture<Vec<HgChangesetId>, Error> {
STATS::many_changesets_exists.add_value(1);
let param = BonsaiOrHgChangesetIds::Hg(Vec::from_iter(changesetids.iter().cloned()));
self.bonsai_hg_mapping.get(ctx, self.repoid, param)
.map(|entries| entries.into_iter().map(|entry| entry.hg_cs_id).collect())
// TODO(stash, luk): T37303879 also need to check that entries exist in changeset table
.boxify()
}
pub fn changeset_exists_by_bonsai(
&self,
ctx: CoreContext,
@@ -822,6 +807,7 @@ impl BlobRepo {
.map(|entry| (entry.hg_cs_id, entry.bcs_id))
.collect()
})
// TODO(stash, luk): T37303879 also need to check that entries exist in changeset table
.boxify()
}


@@ -26,7 +26,7 @@ use dechunker::Dechunker;
use futures_ext::{BoxFuture, BoxStream, BytesStream, FutureExt, StreamExt};
use mercurial_bundles::bundle2::{self, Bundle2Stream, StreamEvent};
use mercurial_bundles::Bundle2Item;
use mercurial_types::MPath;
use mercurial_types::{HgChangesetId, MPath};
use tokio_io::codec::Decoder;
use tokio_io::AsyncRead;
use HgFileNodeId;
@@ -169,6 +169,15 @@ impl<H: HgCommands + Send + 'static> HgCommandHandler<H> {
.boxify(),
ok(instream).boxify(),
),
SingleRequest::Knownnodes { nodes } => (
hgcmds
.knownnodes(nodes)
.map(SingleResponse::Known)
.map_err(self::Error::into)
.into_stream()
.boxify(),
ok(instream).boxify(),
),
SingleRequest::Unbundle { heads } => {
let dechunker = Dechunker::new(instream);
let (dechunker, maybe_full_content) = if hgcmds.should_preserve_raw_bundle2() {
@@ -666,6 +675,11 @@ pub trait HgCommands {
unimplemented("known")
}
// @wireprotocommand('knownnodes', 'nodes *')
fn knownnodes(&self, _nodes: Vec<HgChangesetId>) -> HgCommandRes<Vec<bool>> {
unimplemented("knownnodes")
}
// @wireprotocommand('unbundle', 'heads')
fn unbundle(
&self,


@@ -54,7 +54,7 @@ use std::sync::Mutex;
use bytes::Bytes;
use mercurial_types::{HgFileNodeId, HgNodeHash};
use mercurial_types::{HgChangesetId, HgFileNodeId, HgNodeHash};
mod batch;
mod commands;
@@ -108,6 +108,9 @@ pub enum SingleRequest {
Known {
nodes: Vec<HgNodeHash>,
},
Knownnodes {
nodes: Vec<HgChangesetId>,
},
Unbundle {
heads: Vec<String>,
},
@@ -131,6 +134,7 @@ impl SingleRequest {
&SingleRequest::Listkeys { .. } => "listkeys",
&SingleRequest::Lookup { .. } => "lookup",
&SingleRequest::Known { .. } => "known",
&SingleRequest::Knownnodes { .. } => "knownnodes",
&SingleRequest::Unbundle { .. } => "unbundle",
&SingleRequest::Gettreepack(_) => "gettreepack",
&SingleRequest::Getfiles => "getfiles",
@@ -219,6 +223,7 @@ pub enum SingleResponse {
Listkeys(HashMap<Vec<u8>, Vec<u8>>),
Lookup(Bytes),
Known(Vec<bool>),
Knownnodes(Vec<bool>),
ReadyForStream,
Unbundle(Bytes),
Gettreepack(Bytes),


@@ -11,7 +11,7 @@ use std::str::{self, FromStr};
use bytes::{Bytes, BytesMut};
use nom::{is_alphanumeric, is_digit, Err, ErrorKind, FindSubstring, IResult, Needed, Slice};
use HgNodeHash;
use {HgChangesetId, HgNodeHash};
use batch;
use errors;
@@ -204,6 +204,18 @@ named!(
separated_list_complete!(tag!(" "), nodehash)
);
/// A changeset is simply 40 hex digits.
named!(
hg_changeset_id<HgChangesetId>,
map_res!(take!(40), |v: &[u8]| str::parse(str::from_utf8(v)?))
);
/// A space-separated list of hg changesets
named!(
hg_changeset_list<Vec<HgChangesetId>>,
separated_list_complete!(tag!(" "), hg_changeset_id)
);
/// A space-separated list of strings
named!(
stringlist<Vec<String>>,
@@ -544,6 +556,9 @@ fn parse_with_params(
| command_star!("known", Known, parse_params, {
nodes => hashlist,
})
| command_star!("knownnodes", Knownnodes, parse_params, {
nodes => hg_changeset_list,
})
| command!("unbundle", Unbundle, parse_params, {
heads => stringlist,
})


@@ -121,6 +121,15 @@ fn encode_cmd(response: SingleResponse) -> Bytes {
Bytes::from(out)
}
Knownnodes(knowns) => {
let out: Vec<_> = knowns
.into_iter()
.map(|known| if known { b'1' } else { b'0' })
.collect();
Bytes::from(out)
}
ReadyForStream => Bytes::from(b"0\n".as_ref()),
// TODO(luk, T25574469) The response for Unbundle should be chunked stream of bundle2


@@ -4,28 +4,22 @@
// This software may be used and distributed according to the terms of the
// GNU General Public License version 2 or any later version.
use std::collections::{HashMap, HashSet};
use std::iter::FromIterator;
use std::mem;
use std::str::FromStr;
use std::sync::{Arc, Mutex};
use std::time::Duration;
use blobrepo::BlobRepo;
use blobrepo::HgBlobChangeset;
use bookmarks::Bookmark;
use bundle2_resolver;
use bytes::{BufMut, Bytes, BytesMut};
use context::CoreContext;
use errors::*;
use failure::err_msg;
use fbwhoami::FbWhoAmI;
use futures::future::ok;
use futures::{future, stream, stream::empty, Async, Future, IntoFuture, Poll, Stream};
use futures_ext::{select_all, BoxFuture, BoxStream, FutureExt, StreamExt, StreamTimeoutError};
use futures_stats::{StreamStats, Timed, TimedStreamTrait};
use hgproto::{self, GetbundleArgs, GettreepackArgs, HgCommandRes, HgCommands};
use hooks::HookManager;
use itertools::Itertools;
use stats::Histogram;
use time_ext::DurationExt;
use blobrepo::HgBlobChangeset;
use bookmarks::Bookmark;
use bundle2_resolver;
use context::CoreContext;
use mercurial_bundles::{create_bundle_stream, parts, wirepack, Bundle2Item};
use mercurial_types::manifest_utils::{
changed_entry_stream_with_pruner, CombinatorPruner, DeletedPruner, EntryStatus, FilePruner,
@@ -37,28 +31,29 @@ use mercurial_types::{
NULL_HASH,
};
use metaconfig_types::{LfsParams, RepoReadOnly};
use mononoke_repo::{MononokeRepo, SqlStreamingCloneConfig};
use percent_encoding;
use phases::Phases;
use phases::{Phase, Phases};
use rand::{self, Rng};
use reachabilityindex::LeastCommonAncestorsHint;
use scribe::ScribeClient;
use scuba_ext::{ScribeClientImplementation, ScubaSampleBuilder, ScubaSampleBuilderExt};
use serde_json;
use tokio::timer::timeout::Error as TimeoutError;
use tokio::util::FutureExt as TokioFutureExt;
use tracing::Traced;
use blobrepo::BlobRepo;
use hgproto::{self, GetbundleArgs, GettreepackArgs, HgCommandRes, HgCommands};
use remotefilelog::{
self, create_remotefilelog_blob, get_unordered_file_history_for_multiple_nodes,
};
use scribe::ScribeClient;
use scuba_ext::{ScribeClientImplementation, ScubaSampleBuilder, ScubaSampleBuilderExt};
use serde_json;
use stats::Histogram;
use std::collections::{HashMap, HashSet};
use std::iter::FromIterator;
use std::mem;
use std::str::FromStr;
use std::sync::{Arc, Mutex};
use std::time::Duration;
use streaming_clone::RevlogStreamingChunks;
use errors::*;
use hooks::HookManager;
use mononoke_repo::{MononokeRepo, SqlStreamingCloneConfig};
use time_ext::DurationExt;
use tokio::timer::timeout::Error as TimeoutError;
use tokio::util::FutureExt as TokioFutureExt;
use tracing::Traced;
const MAX_NODES_TO_LOG: usize = 5;
@@ -80,6 +75,7 @@ mod ops {
pub static LOOKUP: &str = "lookup";
pub static LISTKEYS: &str = "listkeys";
pub static KNOWN: &str = "known";
pub static KNOWNNODES: &str = "knownnodes";
pub static BETWEEN: &str = "between";
pub static GETBUNDLE: &str = "getbundle";
pub static GETTREEPACK: &str = "gettreepack";
@@ -141,6 +137,7 @@ fn wireprotocaps() -> Vec<String> {
"stream_option".to_string(),
"streamreqs=generaldelta,lz4revlog,revlogv1".to_string(),
"treeonly".to_string(),
"knownnodes".to_string(),
]
}
@@ -734,38 +731,103 @@ impl HgCommands for RepoClient {
let nodes: Vec<_> = nodes.into_iter().map(HgChangesetId::new).collect();
let nodes_len = nodes.len();
({
let ref_nodes: Vec<_> = nodes.iter().cloned().collect();
blobrepo.many_changesets_exists(self.ctx.clone(), &ref_nodes[..])
})
.map(move |cs| {
let cs: HashSet<_> = cs.into_iter().collect();
let known_nodes: Vec<_> = nodes
.into_iter()
.map(move |node| cs.contains(&node))
.collect();
known_nodes
})
.timeout(timeout_duration())
.map_err(process_timeout_error)
.traced(self.ctx.trace(), ops::KNOWN, trace_args!())
.timed(move |stats, known_nodes| {
if let Ok(known) = known_nodes {
let extra_context = json!({
"num_known": known.len(),
"num_unknown": nodes_len - known.len(),
})
.to_string();
let phases_hint = self.phases_hint.clone();
scuba_logger.add("extra_context", extra_context);
}
cloned!(self.ctx);
blobrepo
.get_hg_bonsai_mapping(ctx.clone(), nodes.clone())
.map(|hg_bcs_mapping| {
let mut bcs_ids = vec![];
let mut bcs_hg_mapping = hashmap! {};
scuba_logger
.add_future_stats(&stats)
.log_with_msg("Command processed", None);
Ok(())
})
.boxify()
for (hg, bcs) in hg_bcs_mapping {
bcs_ids.push(bcs);
bcs_hg_mapping.insert(bcs, hg);
}
(bcs_ids, bcs_hg_mapping)
})
.and_then(move |(bcs_ids, bcs_hg_mapping)| {
phases_hint
.get_all(ctx, blobrepo, bcs_ids)
.map(move |phases| (phases, bcs_hg_mapping))
})
.map(|(phases, bcs_hg_mapping)| {
phases
.calculated
.into_iter()
.filter_map(|(cs, phase)| {
if phase == Phase::Public {
bcs_hg_mapping.get(&cs).cloned()
} else {
None
}
})
.collect::<HashSet<_>>()
})
.map(move |found_hg_changesets| {
nodes
.into_iter()
.map(move |node| found_hg_changesets.contains(&node))
.collect::<Vec<_>>()
})
.timeout(timeout_duration())
.map_err(process_timeout_error)
.traced(self.ctx.trace(), ops::KNOWN, trace_args!())
.timed(move |stats, known_nodes| {
if let Ok(known) = known_nodes {
let extra_context = json!({
"num_known": known.len(),
"num_unknown": nodes_len - known.len(),
})
.to_string();
scuba_logger.add("extra_context", extra_context);
}
scuba_logger
.add_future_stats(&stats)
.log_with_msg("Command processed", None);
Ok(())
})
.boxify()
}
fn knownnodes(&self, nodes: Vec<HgChangesetId>) -> HgCommandRes<Vec<bool>> {
let blobrepo = self.repo.blobrepo().clone();
let mut scuba_logger = self.prepared_ctx(ops::KNOWNNODES, None).scuba().clone();
let nodes_len = nodes.len();
blobrepo
.get_hg_bonsai_mapping(self.ctx.clone(), nodes.clone())
.map(|hg_bcs_mapping| {
let hg_bcs_mapping: HashMap<_, _> = hg_bcs_mapping.into_iter().collect();
nodes
.into_iter()
.map(move |node| hg_bcs_mapping.contains_key(&node))
.collect::<Vec<_>>()
})
.timeout(timeout_duration())
.map_err(process_timeout_error)
.traced(self.ctx.trace(), ops::KNOWNNODES, trace_args!())
.timed(move |stats, known_nodes| {
if let Ok(known) = known_nodes {
let extra_context = json!({
"num_known": known.len(),
"num_unknown": nodes_len - known.len(),
})
.to_string();
scuba_logger.add("extra_context", extra_context);
}
scuba_logger
.add_future_stats(&stats)
.log_with_msg("Command processed", None);
Ok(())
})
.boxify()
}
// @wireprotocommand('getbundle', '*')
@@ -1206,8 +1268,7 @@ impl HgCommands for RepoClient {
LfsParams::default(),
validate_hash,
);
let fut = fut
.map(move |(content, _)| (filenode, content));
let fut = fut.map(move |(content, _)| (filenode, content));
contents.push(fut);
}
future::join_all(contents)
@@ -1454,8 +1515,7 @@ fn fetch_treepack_part_input(
path,
expected,
actual,
}
.into())
}.into())
}
})
.left_future()


@@ -21,7 +21,6 @@ extern crate futures_ext;
extern crate futures_stats;
extern crate itertools;
#[macro_use]
#[cfg(test)]
extern crate maplit;
extern crate percent_encoding;
extern crate rand;


@@ -60,7 +60,7 @@ Do infinitepush (aka commit cloud) push
sending between command
remote: * DEBG Session with Mononoke started with uuid: * (glob)
remote: * (glob)
remote: capabilities: clienttelemetry lookup known getbundle unbundle=HG10GZ,HG10BZ,HG10UN gettreepack remotefilelog pushkey stream-preferred stream_option streamreqs=generaldelta,lz4revlog,revlogv1 treeonly bundle2=HG20%0Achangegroup%3D02%0Ab2x%3Ainfinitepush%0Ab2x%3Ainfinitepushscratchbookmarks%0Apushkey%0Atreemanifestserver%3DTrue%0Ab2x%3Arebase%0Ab2x%3Arebasepackpart%0Aphases%3Dheads
remote: capabilities: * (glob)
remote: 1
query 1; heads
sending batch command
@@ -133,9 +133,9 @@ Pushbackup also works
sending between command
remote: * DEBG Session with Mononoke started with uuid: * (glob)
remote: * (glob)
remote: capabilities: clienttelemetry lookup known getbundle unbundle=HG10GZ,HG10BZ,HG10UN gettreepack remotefilelog pushkey stream-preferred stream_option streamreqs=generaldelta,lz4revlog,revlogv1 treeonly bundle2=HG20%0Achangegroup%3D02%0Ab2x%3Ainfinitepush%0Ab2x%3Ainfinitepushscratchbookmarks%0Apushkey%0Atreemanifestserver%3DTrue%0Ab2x%3Arebase%0Ab2x%3Arebasepackpart%0Aphases%3Dheads
remote: capabilities: * (glob)
remote: 1
sending known command
sending knownnodes command
backing up stack rooted at 47da8b81097c
2 changesets found
list of changesets:
@@ -198,7 +198,7 @@ Pushbackup that pushes only bookmarks
sending between command
remote: * DEBG Session with Mononoke started with uuid: * (glob)
remote: * (glob)
remote: capabilities: clienttelemetry lookup known getbundle unbundle=HG10GZ,HG10BZ,HG10UN gettreepack remotefilelog pushkey stream-preferred stream_option streamreqs=generaldelta,lz4revlog,revlogv1 treeonly bundle2=HG20%0Achangegroup%3D02%0Ab2x%3Ainfinitepush%0Ab2x%3Ainfinitepushscratchbookmarks%0Apushkey%0Atreemanifestserver%3DTrue%0Ab2x%3Arebase%0Ab2x%3Arebasepackpart%0Aphases%3Dheads
remote: capabilities: * (glob)
remote: 1
sending unbundle command
bundle2-output-bundle: "HG20", (1 params) 3 parts total


@@ -233,7 +233,7 @@ push to Mononoke
sending between command
remote: * DEBG Session with Mononoke started with uuid: * (glob)
remote: * (glob)
remote: capabilities: clienttelemetry lookup known getbundle unbundle=HG10GZ,HG10BZ,HG10UN gettreepack remotefilelog pushkey stream-preferred stream_option streamreqs=generaldelta,lz4revlog,revlogv1 treeonly bundle2=* (glob)
remote: capabilities: * (glob)
remote: 1
query 1; heads
sending batch command


@@ -55,7 +55,7 @@ create new commit in repo2 and check that push fails
sending between command
remote: * DEBG Session with Mononoke started with uuid: * (glob)
remote: * (glob)
remote: capabilities: clienttelemetry lookup known getbundle unbundle=HG10GZ,HG10BZ,HG10UN gettreepack remotefilelog pushkey stream-preferred stream_option streamreqs=generaldelta,lz4revlog,revlogv1 treeonly bundle2=* (glob)
remote: capabilities: * (glob)
remote: 1
query 1; heads
sending batch command


@@ -0,0 +1,168 @@
This test covers a tricky case in the discovery logic.
Previously Mononoke's known() wireproto method returned `true` for both public and
draft commits. The problem is that this affects pushrebase. If Mononoke
returns true for a draft commit and the client runs `hg push -r HASH --to BOOK`,
then the hg client logic may decide to just move a bookmark instead of running the
actual pushrebase.
$ . $TESTDIR/library.sh
setup configuration
$ setup_common_config
$ cd "$TESTTMP/mononoke-config"
$ cat >> repos/repo/server.toml <<CONFIG
> [[bookmarks]]
> name="master_bookmark"
> CONFIG
$ mkdir -p common/hooks
$ cat > common/hooks/failing_hook.lua <<CONFIG
> hook = function (ctx)
> return false, "failed"
> end
> CONFIG
$ register_hook failing_hook common/hooks/failing_hook.lua PerChangeset <(
> echo 'bypass_pushvar="BYPASS_REVIEW=true"'
> )
setup common configuration
$ cd $TESTTMP
$ cat >> $HGRCPATH <<EOF
> [ui]
> ssh="$DUMMYSSH"
> [extensions]
> amend=
> infinitepush=
> infinitepushbackup=
> EOF
Setup helpers
$ log() {
> hg log -G -T "{desc} [{phase};rev={rev};{node|short}] {remotenames}" "$@"
> }
setup repo
$ hg init repo-hg
$ cd repo-hg
$ setup_hg_server
$ hg debugdrawdag <<EOF
> C
> |
> B
> |
> A
> EOF
create master bookmark
$ hg bookmark master_bookmark -r tip
blobimport them into Mononoke storage and start Mononoke
$ cd ..
$ blobimport repo-hg/.hg repo
start mononoke
$ mononoke
$ wait_for_mononoke $TESTTMP/repo
Clone the repo
$ hgclone_treemanifest ssh://user@dummy/repo-hg repo2 --noupdate --config extensions.remotenames= -q
$ hgclone_treemanifest ssh://user@dummy/repo-hg repo3 --noupdate --config extensions.remotenames= -q
$ cd repo2
$ setup_hg_client
$ cat >> .hg/hgrc <<EOF
> [extensions]
> pushrebase =
> remotenames =
> EOF
$ hg up -q 0
$ echo 1 > 1 && hg addremove -q
$ hg ci -m 'to push'
Unsuccessful push creates a draft commit on the server
$ hgmn push -r . --to master_bookmark
remote: * Session with Mononoke started with uuid: * (glob)
pushing rev 812eca0823f9 to destination ssh://user@dummy/repo bookmark master_bookmark
searching for changes
remote: Command failed
remote: Error:
remote: hooks failed:
remote: failing_hook for 812eca0823f97743f8d85cdef5cf338b54cebb01: failed
remote: Root cause:
remote: ErrorMessage {
remote: msg: "hooks failed:\nfailing_hook for 812eca0823f97743f8d85cdef5cf338b54cebb01: failed"
remote: }
abort: stream ended unexpectedly (got 0 bytes, expected 4)
[255]
In order to hit the edge case, the master bookmark on the server needs to point to a different commit.
Let's make a push
$ cd ../repo3
$ setup_hg_client
$ cat >> .hg/hgrc <<EOF
> [extensions]
> pushrebase =
> remotenames =
> [remotenames]
> EOF
$ hg up -q 0
$ echo 2 > 2 && hg addremove -q
$ hg ci -m 'to push2'
$ hgmn push -r . --to master_bookmark --pushvar BYPASS_REVIEW=true -q
Now let's push the same commit again but with a bypass. It should pushrebase,
not move a bookmark
$ cd ../repo2
$ hgmn push -r . --to master_bookmark --pushvar BYPASS_REVIEW=true -q
$ hgmn up -q master_bookmark
$ log
@ to push [public;rev=5;a6205c464622] default/master_bookmark
|
o to push2 [public;rev=4;854b7c3bdd1f]
|
| o to push [draft;rev=3;812eca0823f9]
| |
o | C [public;rev=2;26805aba1e60]
| |
o | B [public;rev=1;112478962961]
|/
o A [public;rev=0;426bada5c675]
The same procedure, but with commit cloud commit
$ hg up -q 0
$ echo commitcloud > commitcloud && hg addremove -q
$ hg ci -m commitcloud
$ hgmn pushbackup -q
Move master again
$ cd ../repo3
$ hg up -q 0
$ echo 3 > 3 && hg addremove -q
$ hg ci -m 'to push3'
$ hgmn push -r . --to master_bookmark --pushvar BYPASS_REVIEW=true -q
Now let's push commit cloud commit. Again, it should do pushrebase
$ cd ../repo2
$ hgmn push -r . --to master_bookmark --pushvar BYPASS_REVIEW=true -q
$ hgmn up -q master_bookmark
$ log
@ commitcloud [public;rev=8;3308f3bd8048] default/master_bookmark
|
o to push3 [public;rev=7;c3f020572849]
|
| o commitcloud [draft;rev=6;17f29bea0858]
| |
o | to push [public;rev=5;a6205c464622]
| |
o | to push2 [public;rev=4;854b7c3bdd1f]
| |
| | o to push [draft;rev=3;812eca0823f9]
| |/
o | C [public;rev=2;26805aba1e60]
| |
o | B [public;rev=1;112478962961]
|/
o A [public;rev=0;426bada5c675]