sapling/eden/scm/tests/test-infinitepush-replaybookmarksqueue-one-bookmark.t

#require py2
#chg-compatible
infinitepush: add replaybookmarksqueue

Summary: This adds a new queue that records scratch bookmark changes so they
can be replayed into Mononoke. There is not a lot going on in Mercurial (i.e.
in this diff) to achieve this: we simply record bookmark changes as they
happen in infinitepush, and allow for excluding some bookmarks. Specifically,
we'll want to exclude backup branches, which we don't want to copy, since a)
there are way too many of them and b) they are deprecated in favor of Commit
Cloud.

Currently, this does not allow for replaying deletions. That would require
further rework of how we delete things, since right now we do it by matching
on bookmark names in the DB, which means the Python side is not aware of
exactly which bookmarks were deleted. I'm not aware of how much use this is
currently getting, but I'll research that and add it if necessary.

Finally, one thing worth calling out here is the `bookmark_hash` column in
this table. It is there in case we need to scale out the replication of
bookmarks across multiple workers. We always want the replication of any given
bookmark to happen sequentially, so it should be performed by a single worker;
if there are too many bookmarks to replicate, that single worker could become
a bottleneck. In that case we can scale out by having each worker operate on a
separate subset of bookmarks, and `bookmark_hash` lets us divide the space of
bookmarks evenly across workers (e.g. 16 workers, one for each first hex digit
of the hash). We won't use `bookmark_hash` immediately, but since it is very
cheap to add (just compute one hash in Mercurial and put it in the table), I'm
adding it in this diff now to avoid the friction of having to redeploy hg
servers later.

Reviewed By: StanislavGlebik
Differential Revision: D15778665
fbshipit-source-id: c34898c1a66e5bec08663a0887adca263222300d
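
The `bookmark_hash` sharding described above can be illustrated with a short
Python sketch. This is illustration only, not part of the test: the column
names come from the replaybookmarksqueue query at the end of this file, while
SHA-1 and the meaning of synced=0 are assumptions (the 40-character hash in
that query output is consistent with SHA-1, but the extension's actual code is
not shown here).

import hashlib

# Example inputs, taken from the push exercised later in this test.
bookmark = 'scratch/book2'
node = '6fdf683f5af9a2be091b81ef475f335e2624fb0d'
reponame = 'repo123'

# One hash per bookmark name; hexdigest() yields 40 hex characters, the same
# width as the bookmark_hash column queried below. (This test requires py2,
# where str is bytes; under py3 the name would need encoding first.)
bookmark_hash = hashlib.sha1(bookmark).hexdigest()

# A queue row with the columns shown in the final SELECT of this test
# (treating synced=0 as "not replayed yet" is an assumption).
row = dict(reponame=reponame, synced=0, bookmark=bookmark, node=node, bookmark_hash=bookmark_hash)

# With e.g. 16 replay workers, the first hex digit of the hash picks the
# worker, so every update to a given bookmark lands on the same worker and is
# replayed sequentially.
num_workers = 16
worker = int(bookmark_hash[0], 16) % num_workers
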
#if no-windows no-osx
$ disable treemanifest
$ mkcommit() {
> echo "$1" > "$1"
> hg add "$1"
> hg ci -d "0 0" -m "$1"
> }
$ . "$TESTDIR/infinitepush/library.sh"
$ setupcommon
Configure the server
$ hg init server
$ cd server
$ setupsqlserverhgrc repo123
$ setupdb
$ cd ..
Without replaybookmarks, it should not insert into the queue
$ hg clone -q ssh://user@dummy/server client1
$ cd client1
$ setupsqlclienthgrc
$ mkcommit commit1
$ hg push -r . --to scratch/book --create
pushing to ssh://user@dummy/server
searching for changes
remote: pushing 1 commit:
remote: cb9a30b04b9d commit1
$ cd ..
Enable replaybookmarks on the server
$ cd server
$ enablereplaybookmarks
$ cd ..
With replaybookmarks, it should insert into the queue
$ hg clone -q ssh://user@dummy/server client2
$ cd client2
$ setupsqlclienthgrc
$ mkcommit commit2
$ hg push -r . --to scratch/book2 --create
pushing to ssh://user@dummy/server
searching for changes
remote: pushing 1 commit:
remote: 6fdf683f5af9 commit2
$ cd ..
Proper metadata should have been recorded
$ querysqlindex "SELECT * FROM nodestobundle;"
node bundle reponame
6fdf683f5af9a2be091b81ef475f335e2624fb0d 8347a06785e3bdd572ebeb7df3aac1356acb4ce5 repo123
cb9a30b04b9df854f40d21fdac525408f3bd6c78 944fe1c133f63c7711aa15db2dd9216084dacc36 repo123
$ querysqlindex "SELECT id, reponame, synced, bookmark, node, bookmark_hash FROM replaybookmarksqueue;"
id reponame synced bookmark node bookmark_hash
1 repo123 0 scratch/book2 6fdf683f5af9a2be091b81ef475f335e2624fb0d bd2df38131efcfd3f7bd81b4307f9e84d8984729
#endif