Summary:
This updates our receive path for B2xInfinitepush to create new scratch bookmarks.
Those scratch bookmarks will:
- Be non-publishing.
- Be non-pull-default.
- Not be replicated to Mercurial (there is no entry in the update log).
I added a sanity check on infinite pushes to validate that bookmarks fall within a given namespace (represented as a Regexp in configuration). We'll want to determine whether this is a good mechanism, and what the regexp should be, prior to landing. I'm also considering adding a soft-block mode that would just ignore the push instead of blocking it.
This ensures that someone cannot accidentally perform an infinitepush onto master by tweaking their client-side configuration.
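As a hedged illustration of the check described above: the real version matches the bookmark name against the Regexp from server configuration, while this sketch substitutes a plain prefix for the pattern, and all names here are invented.

```rust
#[derive(Debug, PartialEq)]
enum ScratchPushOutcome {
    Allowed,
    Blocked, // hard-block: reject the push outright
    Ignored, // soft-block mode: silently drop the push
}

// Hypothetical helper; a plain prefix stands in for the configured Regexp.
fn check_scratch_bookmark(name: &str, namespace_prefix: &str, soft_block: bool) -> ScratchPushOutcome {
    if name.starts_with(namespace_prefix) {
        ScratchPushOutcome::Allowed
    } else if soft_block {
        ScratchPushOutcome::Ignored
    } else {
        ScratchPushOutcome::Blocked
    }
}
```

The soft-block mode under consideration would map to `Ignored` here, turning a misconfigured client's push into a no-op rather than an error.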
---
Note that, as of this diff, we do not support the B2xInfinitepushBookmarks part (i.e. backup bookmarks). We might do that separately later, but if we do, it won't be through scratch bookmarks (we have too many backup bookmarks for this to work).
Reviewed By: StanislavGlebik
Differential Revision: D15364677
fbshipit-source-id: 23e67d4c3138716c791bb8050459698f8b721277
Summary:
This adds support for recording server-side whether a given bookmark is publishing and / or pull-default. This is a change on the way towards supporting Infinite Push in Mononoke. This diff will require schema changes on `xdb.mononoke_production`.
There isn't a whole lot of new user-facing functionality in this particular diff. For starters, nobody can edit this flag on bookmarks, and pushes that create a new bookmark will always result in setting a bookmark as publishing AND pull_default.
What this change does introduce, however, is the notion of `BookmarkHgKind`, which represents the behavior of this bookmark as far as Mercurial operations are concerned.
There are 3 such kinds, which are relevant in different parts of the codebase:
- PullDefault - this is useful when we want to respond to listkeys queries.
- Publishing - this is useful when we want to identify commit Phases.
- All - this is useful when we want to respond to listkeyspatterns.
Note that only the first two kinds are cached in CachedBookmarks.
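A minimal sketch of how these three kinds could partition bookmarks, assuming a simplified model where each bookmark carries its two flags; the type and function names are illustrative, not the actual Mononoke definitions.

```rust
// Hypothetical model of the BookmarkHgKind described above.
#[derive(Debug, Clone, Copy, PartialEq)]
enum BookmarkHgKind {
    PullDefault, // served in listkeys responses
    Publishing,  // drives commit phase calculation
    All,         // everything, for listkeyspatterns
}

// Whether a bookmark with the given flags is visible to a given kind of query.
fn visible_to(kind: BookmarkHgKind, publishing: bool, pull_default: bool) -> bool {
    match kind {
        BookmarkHgKind::PullDefault => pull_default,
        BookmarkHgKind::Publishing => publishing,
        BookmarkHgKind::All => true,
    }
}
```

Under this model, `All` is the only kind that covers scratch bookmarks, which are neither publishing nor pull-default.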
---
There are a few things going on in this diff (which logically have to happen together):
- First, I updated the `Bookmarks` trait and its various implementations to expose new methods to select Bookmarks based on their hg kind. There's one method per hg kind, and all the methods use a `Freshness` parameter to determine whether to hit cache or not.
- Second, I refactored the various bookmark-access methods in blobrepo. A few of them were duplicative of each other, and some were unused, which became a bigger problem now that we had more (for publishing, pull default, etc.). We are now down to just one method that doesn't hit cache (which is used only by the blobimport script and some tests — perhaps this could be changed?).
- Third, I updated the call sites for the methods that were updated in step 2 to use the proper method for their use case.
---
Here's a summary of where each method is used (I'm only listing stuff that isn't unit tests):
- `get_bonsai_heads_maybe_stale`:
- `SqlPhases`'s `get_public`
- `build_skiplist_index`
- `get_bonsai_publishing_bookmarks_most_recent`:
- Blobimport (perhaps we can update this to a `maybe_stale` read?)
- `get_pull_default_bookmarks_maybe_stale`
- `listkeys` implementations
- `get_publishing_bookmarks_maybe_stale`
- API Server's `get_branches`
- `get_bookmarks_by_prefix_maybe_stale`:
- `listkeyspatterns` implementation
---
As an aside, note that a lot of the code changes in this diff are actually in CachedBookmarks' tests: I had to update those to minimize sleeps. It was fine to sleep in the earlier tests, but I introduced new quickcheck-based tests, and sleeping in those isn't ideal.
Reviewed By: StanislavGlebik
Differential Revision: D15298598
fbshipit-source-id: 4f0fe80ea98dea8af0c8289db4f6f4e4070af622
Summary:
As part of adding support for infinitepush in Mononoke, we'll include additional server-side metadata on Bookmarks (specifically, whether they are publishing and pull-default).
However, we do use the name `Bookmark` right now to just reference a Bookmark name. This patch updates all reference to `Bookmark` to `BookmarkName` in order to free up `Bookmark`.
Reviewed By: StanislavGlebik
Differential Revision: D15364674
fbshipit-source-id: 126142e24e4361c19d1a6e20daa28bc793fb8686
Summary: I'm going to be doing some work on this file, but it's not up-to-date with rustfmt. To minimize merge conflicts and simplify diff reviews, I ran that earlier.
Reviewed By: StanislavGlebik
Differential Revision: D15364676
fbshipit-source-id: 691e00e091e68ce55bc67b29848284a8fedec359
Summary:
First and foremost, this is a safe diff to land on its own, as this query
is only used by the sync job, and only with `limit=1`. So the changes I am
introducing do not affect any existing behavior.
The general goal of this diff is to make sure that these queries always return
a series of bookmark update log entries where each entry has the same reason
and bookmark. That way it is always safe to merge these entries into a single
combined entry and send it to the Mercurial servers for replay.
NB: the same can obviously be done with a nested query, but it is a bit more convenient for me to write it this way. It can be changed if people have strong feelings about it, either now or in a separate diff.
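To make the invariant concrete, here is a hedged sketch (entry shape and names invented for illustration) of taking the leading run of log entries that share the same bookmark and reason, which is exactly what makes merging them into one combined entry safe:

```rust
// Return the leading run of (bookmark, reason) entries that all match the
// first entry; only such a run can be merged into a single combined entry.
fn mergeable_run<'a>(entries: &'a [(&'a str, &'a str)]) -> &'a [(&'a str, &'a str)] {
    match entries.first() {
        None => entries, // empty log: nothing to merge
        Some(first) => {
            let n = entries.iter().take_while(|e| *e == first).count();
            &entries[..n]
        }
    }
}
```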
Reviewed By: krallin
Differential Revision: D15251977
fbshipit-source-id: 028c085bb7c4c325c1926bf351b985ef1200ef41
Summary: This adds the ability to exclude blobimport entries when querying the count of remaining entries in the HG sync replay log.
Reviewed By: ikostia
Differential Revision: D15097549
fbshipit-source-id: ae1a9a31f51a044924fdebbdd219fff1e2b3d46a
Summary:
This introduces a new `--skip` flag in Mononoke admin under `hg-sync-bundle last-processed`. This will update the last-synced counter to the last blobimport that precedes a log entry that is not a blobimport.
In other words, it makes it so that the last change to be processed is *not* a blobimport.
It will fail if:
- There is no valid log entry to jump ahead to (e.g. all the further log entries are blobimports).
- The current change to be processed is not a blobimport.
- The mutable counter was changed by someone else in the meantime.
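A hedged sketch of the jump-ahead logic and its failure modes; entry and reason names are invented, and the real code works against the bookmark update log and a mutable counter:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Reason { Blobimport, Push }

// Return the id of the last blobimport that directly precedes a
// non-blobimport entry, or None when `--skip` must fail.
fn skip_target(entries: &[(u64, Reason)], current: usize) -> Option<u64> {
    // Fail if the current change to be processed is not a blobimport.
    if entries.get(current)?.1 != Reason::Blobimport {
        return None;
    }
    let mut last = None;
    for &(id, reason) in &entries[current..] {
        match reason {
            Reason::Blobimport => last = Some(id),
            _ => return last, // found a non-blobimport to jump ahead to
        }
    }
    None // all remaining entries are blobimports: no valid target
}
```

The third failure mode from the list, a concurrently modified mutable counter, is omitted here since it is a database-level compare-and-swap rather than log scanning.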
Reviewed By: ikostia
Differential Revision: D15081759
fbshipit-source-id: 8465321b08d9c7b5bc97526400518bcf3ac77f13
Summary: This adds a command in mononoke admin to verify the consistency of remaining bundles to sync -- i.e. whether all bundles are blobimports or all of them are not blobimports.
Reviewed By: ikostia
Differential Revision: D15097935
fbshipit-source-id: a0df221c38e84897213edf232972ba420977e9d3
Summary:
Add a LABEL constant to the SqlConstructors trait to make it easier to identify
which table is being used, for stats and logging.
Reviewed By: HarveyHunt
Differential Revision: D13457488
fbshipit-source-id: a061a9582bc1783604f249d5b7dcede4b1e1d3c5
Summary: We'll do batching to save time on the sync job. We need to sync faster.
Reviewed By: ikostia, farnz
Differential Revision: D14929027
fbshipit-source-id: 3139d0ece07f344cdafa5e39b698bc3b02625f0a
Summary:
Add functionality to show the log of a bookmark, i.e. show its previous positions. It should look like:
mononoke_admin bookmarks log master
2d0e180d7fbec1fd9825cfb246e1fecde31d8c35 push March 18, 15:03
9740f4b7f09a8958c90dc66cbb2c79d1d7da0555 push March 17, 15:03
b05aafb29abb61f59f12fea13a907164e54ff683 manual move March 17, 15:03
...
Reviewed By: StanislavGlebik
Differential Revision: D14639245
fbshipit-source-id: 59d6a559a7ba9f9537735fa2e36fbd0f3f9db77c
Summary: To verify that BookmarkCache is actually needed we want to collect data on how bookmarks are queried
Reviewed By: StanislavGlebik
Differential Revision: D14560748
fbshipit-source-id: ce08511b98c3566cc6ed9052180d73f6076c68fe
Summary: Should be in sync with D14424208
Reviewed By: markbt
Differential Revision: D14541101
fbshipit-source-id: 2e1d544081cd7dd336a76d0490ee91e9137cef55
Summary: To learn how far behind we are in the absolute bundle numbers.
Reviewed By: StanislavGlebik
Differential Revision: D14491672
fbshipit-source-id: 31d16f115b2b6fe4b88c25a847ce229e123b048b
Summary:
Before this diff, an `hg push` that doesn't move a bookmark would fail with a `bookmark transaction failed` error.
I don't think that's expected behaviour. The reason seems to be a difference in the number of affected rows: MySQL returns 0 while SQLite returns 1.
Note that I can't repro the same behaviour in unit tests, so I assume SQLite
has different semantics from MySQL in the case of no-op updates.
Reviewed By: ikostia
Differential Revision: D14070737
fbshipit-source-id: e384074dade2b5a7296331ef2fe2da88508c691e
Summary:
Previously an error was raised if we tried to commit a transaction that already
existed. Instead, let's return `false`, which indicates that the transaction was
reverted.
Reviewed By: ikostia
Differential Revision: D14055392
fbshipit-source-id: a2e78f8c4609a272fe41a1d73478d2ce5503a962
Summary:
Previously it fetched data for all repos. It'd be more useful to fetch it for
just one.
We may later want to replay for many repos at once; when that's the case, we can
add a new method.
Reviewed By: ikostia
Differential Revision: D14028467
fbshipit-source-id: e047a891cc920047596ff9221c62ef5cb0090598
Summary:
Together with logging bookmark moves, let's also log the bundle handle. It will be
used during replay from Mononoke to Mercurial.
Reviewed By: ikostia
Differential Revision: D13990929
fbshipit-source-id: 4039322903b13e84fb31c8e65cc2e097ca765213
Summary:
This is the first step in adding support for tracking all bookmark moves. They
will be recorded in a separate MySQL table, in the same transaction as the
bookmark update.
That gives us two things:
1) The ability to inspect all bookmark moves and debug issues with them
2) A record of which Mercurial bundle (if any) moved a bookmark, so that we can
later replay these bundles in the correct order on hg
Add a struct that lets us track bookmark moves.
Reviewed By: ikostia
Differential Revision: D13958872
fbshipit-source-id: 9adfee6d977457db5af4ad5d3a6734c73fcbcd76
Summary:
These are **not** the schemas that we use in production.
So at the moment they are not used for anything and they just create confusion.
Reviewed By: aslpavel
Differential Revision: D13986001
fbshipit-source-id: 7aae0a5da474f579c9cdf1bbf5dfe183835cae2d
Summary: The Copy trait means that something is so cheap to copy that you don't even need to explicitly call `.clone()` on it. As it doesn't make much sense to pass `&i64`, it also doesn't make much sense to pass `&<Something that is Copy>`, so I have removed all the occurrences of passing one of our hashes that are Copy by reference.
Reviewed By: fanzeyi
Differential Revision: D13974622
fbshipit-source-id: 89efc1c1e29269cc2e77dcb124964265c344f519
Summary:
It breaks the pushrebase test.
Original commit: 4e084bee13ff4941d1a42d1f75fe501575858a63
Original diff: D13573105
Reviewed By: StanislavGlebik
Differential Revision: D13651039
fbshipit-source-id: b67c32e0fc4acc953265a089e746ede3d4426b6f
Summary:
After some discussion with Pavel Aslanov, Lukas Piatkowski and Stanislau Hlebik, it was evident that a shared future is the best approach for the bookmarks cache.
The cache in this implementation maintains a shared future for each repo, fetching the full list of bookmarks. When a list of bookmarks with a given prefix is required, a filter is applied to a full list future.
Two locks are used in this implementation: one for adding new repos to the hashtable and one for updating the cache. In both cases the optimistic strategy applies: first grab a read lock and check whether it is good enough.
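The optimistic read-then-write pattern can be sketched as follows, with a plain `String` standing in for the shared bookmark-list future, and only the repo-insertion lock shown (all names are illustrative):

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Optimistic strategy: try a cheap read lock first; only take the write
// lock when the repo is missing from the cache.
fn get_or_insert(cache: &RwLock<HashMap<String, String>>, repo: &str) -> String {
    // Fast path: a read lock is enough if the repo is already cached.
    if let Some(v) = cache.read().unwrap().get(repo) {
        return v.clone();
    }
    // Slow path: take the write lock; `entry` re-checks in case another
    // writer inserted the repo while we were waiting.
    let mut map = cache.write().unwrap();
    map.entry(repo.to_string())
        .or_insert_with(|| format!("bookmarks for {}", repo))
        .clone()
}
```

In the real cache the value would be a shared (cloneable) future fetching the full bookmark list, and prefix queries would filter its result.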
Reviewed By: StanislavGlebik
Differential Revision: D13573105
fbshipit-source-id: 4e084bee13ff4941d1a42d1f75fe501575858a63
Summary: There's nothing Mercurial-specific about identifying a repo. This also outright removes some dependencies on mercurial-types.
Reviewed By: StanislavGlebik
Differential Revision: D13512616
fbshipit-source-id: 4496a93a8d4e56cd6ca319dfd8effc71e694ff3e
Summary:
Removed:
- Command-line tools for filenodes and bookmarks. These should be part of the mononoke_admin script.
- Outdated docs folder.
- The Commitsim crate, because it's replaced by real pushrebase.
- The unused hooks_old crate.
- The storage crate, which wasn't used.
Reviewed By: aslpavel
Differential Revision: D13301035
fbshipit-source-id: 3ae398752218915dc4eb85c11be84e48168677cc
Summary:
Currently we read all bookmarks from the primary replica a few times during `hg
pull`: first when we do listkeys, and a second time when we get heads.
That might create a high load on the primary replica.
However, the delay between the primary and secondary replicas is fairly small, so it
*should* be fine to read bookmarks from the secondary local replica, as long as there is only
one replica per region (because if we have a few replicas per region, the
heads and listkeys responses might be inconsistent).
Reviewed By: lukaspiatkowski
Differential Revision: D13039779
fbshipit-source-id: e1b8050f63a3a05dc6cf837e17a448c3b346b723
Summary: It causes test failures. Revert for now until they are fixed.
Reviewed By: farnz
Differential Revision: D13040073
fbshipit-source-id: fc05373c882baf42f7bd2a3a1c1173e8ba26a952
Summary:
Previously, the SQL query that was sent to the server had `name like CONCAT('', %)`
for empty prefixes. That's inefficient, and since empty-prefix requests are
common, it's worth fixing them.
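A hedged sketch of the fix, with an invented helper that builds the query conditionally instead of always emitting a LIKE clause (the real code uses parameterized queries, not string formatting):

```rust
// Build the bookmark-listing query; skip the LIKE clause entirely for an
// empty prefix, since it would match everything anyway.
fn bookmarks_query(prefix: &str) -> String {
    if prefix.is_empty() {
        "SELECT name FROM bookmarks".to_string()
    } else {
        format!("SELECT name FROM bookmarks WHERE name LIKE '{}%'", prefix)
    }
}
```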
Reviewed By: lukaspiatkowski
Differential Revision: D13021876
fbshipit-source-id: 2fd9b2361e9be57cb15251e37f7988ee048468ec
Summary:
We're seeing long transaction hold times in MyRouter during big
blobimports - let's see if it's just that we've got a huge amount of events
outstanding on single tasks
Reviewed By: HarveyHunt
Differential Revision: D12945071
fbshipit-source-id: 3aca0b8cb649fc572fca8cadec8f0d265be2d564
Summary: - Make `Bookmarks` work with `ChangesetId` instead of `HgChangesetId`
Reviewed By: StanislavGlebik, farnz
Differential Revision: D9297139
fbshipit-source-id: e18661793d144669354e509271044410caa3502a
Summary:
Unify all uses of Sqlite and of Mysql.
This supersedes D8712926.
Reviewed By: farnz
Differential Revision: D8732579
fbshipit-source-id: a02cd04055a915e5f97b540d6d98e2ff2d707875
Summary:
We need to be able to distinguish logical transaction failure from
infrastructure failure, so change the `Transaction::commit` future to
`Future<Item=bool, Error=Error>`, where `false` indicates a logical transaction
failure. This allows the caller to determine whether a retry or other recovery
logic is needed.
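A sketch of how a caller might use this contract, substituting a synchronous `Result` for the futures-0.1 `Future` and inventing the helper name for illustration: `Ok(true)` means committed, `Ok(false)` a logical failure (safe to retry), and `Err` an infrastructure failure (propagated).

```rust
// Retry on logical transaction failure; bail out on infrastructure failure.
fn commit_with_retry<F>(mut attempt: F, max_retries: u32) -> Result<bool, String>
where
    F: FnMut() -> Result<bool, String>,
{
    for _ in 0..=max_retries {
        match attempt()? {
            true => return Ok(true), // committed
            false => continue,       // logical failure: rebuild and retry
        }
    }
    Ok(false) // retries exhausted without a successful commit
}
```

The key point is that only `Ok(false)` triggers the retry path; an `Err` is surfaced to the caller untouched.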
Reviewed By: lukaspiatkowski
Differential Revision: D8555727
fbshipit-source-id: 8ab64f3019f2644e7eaabc8d699d99aa8eb08fbb
Summary:
All sorts of weird things can happen. The reason they didn't is that we had just
one repo.
Reviewed By: lukaspiatkowski
Differential Revision: D8530459
fbshipit-source-id: aa3acd393f2dc96ea2141d09deb0209cbc29a740
Summary:
Now it is as it should be: mercurial_types has the types, mercurial has revlog-related structures.
burnbridge
Reviewed By: farnz
Differential Revision: D8319906
fbshipit-source-id: 256e73cdd1b1a304c957b812b227abfc142fd725
Summary: It only needs to borrow them.
Reviewed By: kulshrax
Differential Revision: D8244267
fbshipit-source-id: 2a24a3b7c6eb65177e4e26c57650dd7e096b4202
Summary:
This will make it easier to change the "real" bookmark type from AsciiString to
String if we decide to do that.
BookmarkPrefix is a separate type because we may want to change it from
AsciiString to String. Also we don't want to confuse a bookmark prefix with a
bookmark name.
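A minimal newtype sketch of the distinction described above, using `String` rather than AsciiString for brevity (`matches` is an invented helper):

```rust
// Distinct newtypes prevent mixing up a bookmark name with a prefix,
// even though both currently wrap the same underlying string type.
#[derive(Debug, Clone, PartialEq)]
struct Bookmark(String);

#[derive(Debug, Clone, PartialEq)]
struct BookmarkPrefix(String);

impl BookmarkPrefix {
    fn matches(&self, bookmark: &Bookmark) -> bool {
        bookmark.0.starts_with(&self.0)
    }
}
```

With separate types, passing a `Bookmark` where a `BookmarkPrefix` is expected becomes a compile error rather than a silent bug.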
Reviewed By: jsgf
Differential Revision: D7909992
fbshipit-source-id: 3d4d075c204ed5ef1114a743430982c2836bac04
Summary: For on-disk-rocksdb use cases we should persist bookmarks like any other table we use
Reviewed By: farnz
Differential Revision: D7728717
fbshipit-source-id: f63a6410f5ed254a719a16a7504d1b31da5a20a8
Summary:
The diff size is formidable; however, it's not that bad really. This is just
about moving the code inside the macro and the tests into a separate file.
This is similar to changesets or filenodes crates
Reviewed By: lukaspiatkowski
Differential Revision: D7639681
fbshipit-source-id: 4216652780bf99939245ae39e508b3b46cf96a03
Summary:
This is similar to what we've already done in filenodes and changesets. Sqlite
implementation just grabs the lock, future mysql implementation will get a
connection from pool.
Reviewed By: lukaspiatkowski
Differential Revision: D7585387
fbshipit-source-id: 95a046808f4d78d7776a26bdf3f7d939b1ee0451
Summary: mercurial_types::DChangesetId should be replaced by types from mononoke_types in most cases, and by mercurial::HgChangesetId in others. This rename should help with tracking this.
Reviewed By: sid0
Differential Revision: D7618897
fbshipit-source-id: 78904f57376606be99b56662164e0c110e632c64
Summary:
Bump the bookmark size limit: we use 512 for infinitepush, let's use the same
here. Also add a primary key.
Reviewed By: farnz
Differential Revision: D7534967
fbshipit-source-id: aeef926de910a3a9934fb1588778f8f503821071