sapling/eden/scm/tests/test-infinitepush-replaybookmarksqueue-multiple-updates.t


#chg-compatible
infinitepush: add replaybookmarksqueue

Summary: This adds a new queue that records scratch bookmark changes so they can be replayed into Mononoke. There's not a lot going on in Mercurial (i.e. in this diff) to achieve this: we simply record bookmark changes as they happen in infinitepush, and allow for excluding some bookmarks. Specifically, we'll want to exclude backup branches, which we don't want to copy, since a) there are far too many of them and b) they're deprecated in favor of Commit Cloud.

Currently, this does not allow for replaying deletions. That would require further rework of how we delete things: right now we do it by matching on bookmark names in the DB, which means the Python side is not aware of exactly which bookmarks were deleted. I'm not aware of how much use deletion is currently getting, but I'll research that and add support if necessary.

Finally, one thing worth calling out here is the `bookmark_hash` column in this table. It is there in case we need to scale out the replication of bookmarks across multiple workers. We always want the replication of any given bookmark to happen sequentially, so it should be performed by a single worker. However, if we have too many bookmarks to replicate, a single worker could become a bottleneck. If that happens, we'll want to scale out by having each worker operate on a separate subset of bookmarks. The `bookmark_hash` column lets us evenly divide the space of bookmarks across workers if that becomes necessary (e.g. we could have 16 workers: one for each first hex digit of the hash). We won't use `bookmark_hash` immediately, but since it's very cheap to add (just compute one hash in Mercurial and put it in the table), I'm adding it in this diff now, to avoid the friction of having to redeploy hg servers for it later.

Reviewed By: StanislavGlebik

Differential Revision: D15778665

fbshipit-source-id: c34898c1a66e5bec08663a0887adca263222300d

2019-06-17 16:16:05 +03:00
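The sharding scheme described in the commit message can be sketched as follows. This is a hypothetical illustration, not the actual infinitepush code: the digests in the test output below are 40 hex characters, which is consistent with SHA-1, but the exact hash function and the helper names here are assumptions.

```python
import hashlib

def bookmark_hash(bookmark):
    # Assumed: a SHA-1 hex digest of the bookmark name, stored alongside
    # each row in replaybookmarksqueue (40 hex chars, matching the test
    # output below).
    return hashlib.sha1(bookmark.encode("utf-8")).hexdigest()

def worker_for(bookmark, num_workers=16):
    # The 16-worker example from the commit message: one worker per first
    # hex digit of the hash. Every update to a given bookmark maps to the
    # same worker, so its replay stays sequential.
    return int(bookmark_hash(bookmark)[0], 16) % num_workers
```

Because the hash depends only on the bookmark name, all rows for `scratch/123` in the queue carry the same `bookmark_hash`, which is exactly what the test below verifies.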
#if no-windows no-osx
$ disable treemanifest
$ mkcommit() {
> echo "$1" > "$1"
> hg add "$1"
> hg ci -d "0 0" -m "$1"
> }
$ . "$TESTDIR/infinitepush/library.sh"
$ setupcommon
Configure the server
$ hg init server
$ cd server
$ setupsqlserverhgrc repo123
$ setupdb
$ enablereplaybookmarks
$ cd ..
It should insert an entry for each update
$ hg clone -q ssh://user@dummy/server client2
$ cd client2
$ setupsqlclienthgrc
$ mkcommit commit2
$ hg push -r . --to scratch/123 --create
pushing to ssh://user@dummy/server
searching for changes
remote: pushing 1 commit:
remote: 6fdf683f5af9 commit2
$ mkcommit commit3
$ hg push -r . --to scratch/123
pushing to ssh://user@dummy/server
searching for changes
remote: pushing 2 commits:
remote: 6fdf683f5af9 commit2
remote: 8e0c8ddac9fb commit3
$ mkcommit commit4
$ hg push -r . --to scratch/123
pushing to ssh://user@dummy/server
searching for changes
remote: pushing 3 commits:
remote: 6fdf683f5af9 commit2
remote: 8e0c8ddac9fb commit3
remote: feccf85eaa94 commit4
$ cd ..
Proper metadata should have been recorded
$ querysqlindex "SELECT * FROM nodestobundle;"
node bundle reponame
6fdf683f5af9a2be091b81ef475f335e2624fb0d f47f4ea5c9dade34f2a38376fe371dc6e4c49c1d repo123
8e0c8ddac9fb06e5cb0b3ca65a51632a7814f576 f47f4ea5c9dade34f2a38376fe371dc6e4c49c1d repo123
feccf85eaa94ff5ec0f80b8fd871d0fa3125a09b f47f4ea5c9dade34f2a38376fe371dc6e4c49c1d repo123
$ querysqlindex "SELECT id, reponame, synced, bookmark, node, bookmark_hash FROM replaybookmarksqueue;"
id reponame synced bookmark node bookmark_hash
1 repo123 0 scratch/123 6fdf683f5af9a2be091b81ef475f335e2624fb0d 68e2c1170bb6960df6ab9e2c7da427b5d3eca47e
2 repo123 0 scratch/123 8e0c8ddac9fb06e5cb0b3ca65a51632a7814f576 68e2c1170bb6960df6ab9e2c7da427b5d3eca47e
3 repo123 0 scratch/123 feccf85eaa94ff5ec0f80b8fd871d0fa3125a09b 68e2c1170bb6960df6ab9e2c7da427b5d3eca47e
#endif
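A replay worker consuming this queue might claim its shard of unsynced rows as sketched below. This is a hedged illustration only: the table and column names match the `querysqlindex` output above, but the real consumer lives on the Mononoke side and its actual queries are not shown in this test. SQLite stands in for the real SQL backend.

```python
import sqlite3

# Recreate the replaybookmarksqueue schema implied by the test's query,
# with one sample row taken from the test output above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE replaybookmarksqueue ("
    "id INTEGER PRIMARY KEY, reponame TEXT, synced INTEGER, "
    "bookmark TEXT, node TEXT, bookmark_hash TEXT)"
)
conn.execute(
    "INSERT INTO replaybookmarksqueue VALUES "
    "(1, 'repo123', 0, 'scratch/123', "
    "'6fdf683f5af9a2be091b81ef475f335e2624fb0d', "
    "'68e2c1170bb6960df6ab9e2c7da427b5d3eca47e')"
)

# Hypothetical sharding: this worker owns every bookmark whose hash
# starts with '6'. Ordering by id preserves the per-bookmark sequence
# of updates, which the commit message says must be replayed in order.
shard = "6"
rows = conn.execute(
    "SELECT id, bookmark, node FROM replaybookmarksqueue "
    "WHERE synced = 0 AND substr(bookmark_hash, 1, 1) = ? "
    "ORDER BY id",
    (shard,),
).fetchall()
```

Marking a row as handled would then be an `UPDATE ... SET synced = 1` keyed on `id`, though the test above only ever observes rows with `synced = 0`.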