Summary:
This updates the mononoke server code to support booting without myrouter. This required 2 changes:
- There were a few callsites where we didn't handle not having a myrouter port.
- In our function that waits for myrouter, we were failing if we had no myrouter port, but that's not desirable: if we don't have a myrouter port, we simply don't need to wait.
Arguably, this isn't 100% complete yet. Notably, RepoReadWriteStatus still requires myrouter. I'm planning to create a bootcamp task for this since it's not blocking my work adding integration tests, but it would be nice to have.
Speaking of further refactoring, it'd be nice if we supported a `SqlConstructors::with_xdb` function that did this matching for us so we didn't have to duplicate it all over the place. I'm also planning to bootcamp this.
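To make the duplicated matching concrete, here is a minimal sketch of what a `with_xdb`-style helper could centralize. All names and types here are hypothetical illustrations, not Mononoke's actual API:

```rust
// Hypothetical sketch: the Option-matching that a
// `SqlConstructors::with_xdb`-style helper could centralize.
// `Connection`, its variants, and the tier/port fields are illustrative.
#[derive(Debug, PartialEq)]
enum Connection {
    Myrouter { tier: String, port: u16 },
    Raw { tier: String },
}

fn with_xdb(tier: &str, myrouter_port: Option<u16>) -> Connection {
    match myrouter_port {
        // With a myrouter port, connect through myrouter.
        Some(port) => Connection::Myrouter {
            tier: tier.to_string(),
            port,
        },
        // Without one, fall back to a direct (raw) connection
        // instead of failing -- and don't wait for myrouter at all.
        None => Connection::Raw {
            tier: tier.to_string(),
        },
    }
}
```

Each callsite would then pass its optional port through instead of hand-rolling the same `match`.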
Reviewed By: farnz
Differential Revision: D15855431
fbshipit-source-id: 96187d887c467abd48ac605c28b826d8bf09862b
Summary:
- more tracing for potentially large pieces of work
- removed some unnecessary tracing
Reviewed By: StanislavGlebik
Differential Revision: D15851576
fbshipit-source-id: 6686c00da56176cad43f72d1671e08eb8141110f
Summary:
By saving bookmarks when the client calls `heads` (and thus is starting discovery), we can fix the race that stopped us supporting `listkeys` as a bundle2 part.
This saves about 1 second per no-op pull on my devvm - I'm hoping for more improvement on Sandcastle.
Reviewed By: StanislavGlebik
Differential Revision: D15625211
fbshipit-source-id: 47e59848dff56fcf9d893ee3b3c329d69883a57e
Summary:
This is just boilerplate, giving `resolve` access to the config option,
which controls whether we should allow pure pushes. This option will be
enabled by default and we'll disable it explicitly for repos where it makes
no sense.
Reviewed By: StanislavGlebik
Differential Revision: D15520450
fbshipit-source-id: b1bb913c14a6aac6aa68ed8b8a7b4ff270da1688
Summary:
This adds a sanity check that limits the count of matches in `list_all_bookmarks_with_prefix`.
If we find more matches than the limit, then an error will be returned (right now, we don't have support for e.g. offsets in this functionality, so the only alternative approach is for the caller to retry with a more specific pattern).
The underlying goal is to ensure that we don't trivially expose Mononoke to accidental denial of service when a listing uses `*` and we end up querying literally all bookmarks.
I picked a fairly conservative limit here (500,000), which is > 5 times the number of bookmarks we currently have (we can load what we have right now successfully... but it's pretty slow).
Note that listing pull default bookmarks is not affected by this limit: this limit is only used when our query includes scratch bookmarks.
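The limit check described above amounts to something like the following sketch. The constant matches the limit quoted in this summary, but the function name and error shape are illustrative, not Mononoke's actual code:

```rust
// Illustrative sketch of the match-count sanity check for
// `list_all_bookmarks_with_prefix`. Names and the error type are assumed.
const LIST_MATCHES_LIMIT: u64 = 500_000;

fn check_match_count(matches: u64) -> Result<u64, String> {
    if matches > LIST_MATCHES_LIMIT {
        // No offset/pagination support exists here, so the only recourse
        // for the caller is to retry with a more specific pattern.
        Err(format!(
            "bookmark query returned {} results, limit is {}; \
             retry with a more specific pattern",
            matches, LIST_MATCHES_LIMIT
        ))
    } else {
        Ok(matches)
    }
}
```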
Reviewed By: StanislavGlebik
Differential Revision: D15413620
fbshipit-source-id: 1030204010d78a53372049ff282470cdc8187820
Summary:
This updates our receive path for B2xInfinitepush to create new scratch bookmarks.
Those scratch bookmarks will:
- Be non-publishing.
- Be non-pull-default.
- Not be replicated to Mercurial (there is no entry in the update log).
I added a sanity check on infinite pushes to validate that bookmarks fall within a given namespace (which is represented as a Regexp in configuration). We'll want to determine whether this is a good mechanism and what the regexp for this should be prior to landing (I'm also considering adding a soft-block mode that would just ignore the push instead of blocking it).
This ensures that someone cannot accidentally perform an infinitepush onto master by tweaking their client-side configuration.
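The namespace sanity check can be sketched as below. The real configuration uses a regexp, but a plain prefix stands in here to keep the example dependency-free; both function names are hypothetical:

```rust
// Sketch of the infinitepush namespace check. The actual config holds a
// regexp; a prefix check is used here only to avoid a regex dependency.
fn in_infinitepush_namespace(bookmark: &str, namespace_prefix: &str) -> bool {
    bookmark.starts_with(namespace_prefix)
}

fn validate_infinitepush(bookmark: &str, namespace_prefix: &str) -> Result<(), String> {
    if in_infinitepush_namespace(bookmark, namespace_prefix) {
        Ok(())
    } else {
        // Hard-block mode: reject the push outright. A soft-block mode
        // (mentioned above) would ignore the push instead.
        Err(format!(
            "bookmark '{}' is outside the scratch namespace",
            bookmark
        ))
    }
}
```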
---
Note that, as of this diff, we do not support the B2xInfinitepushBookmarks part (i.e. backup bookmarks). We might do that separately later, but if we do, it won't be through scratch bookmarks (we have too many backup bookmarks for this to work).
Reviewed By: StanislavGlebik
Differential Revision: D15364677
fbshipit-source-id: 23e67d4c3138716c791bb8050459698f8b721277
Summary:
This adds support for recording server-side whether a given bookmark is publishing and / or pull-default. This is a change on the way towards supporting Infinite Push in Mononoke. This diff will require schema changes on `xdb.mononoke_production`.
There isn't a whole lot of new user-facing functionality in this particular diff. For starters, nobody can edit this flag on bookmarks, and pushes that create a new bookmark will always result in setting a bookmark as publishing AND pull_default.
What this change does introduce, however, is the notion of `BookmarkHgKind`, which represents the behavior of this bookmark as far as Mercurial operations are concerned.
There are 3 such kinds, which are relevant in different parts of the codebase:
- PullDefault - this is useful when we want to respond to listkeys queries.
- Publishing — this is useful when we want to identify commit Phases.
- All - this is useful when we want to respond to listkeyspatterns.
Note that only the first two groups are cached in CachedBookmarks.
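The three kinds and the caching note above could be modeled roughly like this. This is an illustrative sketch, not the exact Mononoke definition:

```rust
// Rough shape of the three hg kinds described above (illustrative only).
#[derive(Debug, Clone, Copy, PartialEq)]
enum BookmarkHgKind {
    PullDefault, // used to answer `listkeys` queries
    Publishing,  // used to identify commit phases
    All,         // used to answer `listkeyspatterns`
}

// Only the first two groups are served from CachedBookmarks.
fn is_cached(kind: BookmarkHgKind) -> bool {
    matches!(kind, BookmarkHgKind::PullDefault | BookmarkHgKind::Publishing)
}
```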
---
There are a few things going on in this diff (which logically have to happen together):
- First, I updated the `Bookmarks` trait and its various implementations to expose new methods to select Bookmarks based on their hg kind. There's one method per hg kind, and all the methods use a `Freshness` parameter to determine whether to hit cache or not.
- Second, I refactored the various bookmark-access methods in blobrepo. A few of them were duplicative of each other, and some were unused, which became a bigger problem now that we had more (for publishing, pull default, etc.). We are now down to just one method that doesn't hit cache (which is used only by the blobimport script and some tests — perhaps this could be changed?).
- Third, I updated the call sites for the methods that were updated in step 2 to use the proper method for their use case.
---
Here's a summary of where each method is used (I'm only listing stuff that isn't unit tests):
- `get_bonsai_heads_maybe_stale`:
- `SqlPhases`'s `get_public`
- `build_skiplist_index`
- `get_bonsai_publishing_bookmarks_most_recent`:
- Blobimport (perhaps we can update this to a `maybe_stale` read?)
- `get_pull_default_bookmarks_maybe_stale`
- `listkeys` implementations
- `get_publishing_bookmarks_maybe_stale`
- API Server's `get_branches`
- `get_bookmarks_by_prefix_maybe_stale`:
- `listkeyspatterns` implementation
---
As an aside, note that a lot of the code changes in this diff are actually in CachedBookmarks' tests — I had to update those to minimize sleeps: it was fine to sleep in the earlier tests, but I introduced new quickcheck-based tests, and sleeping in those isn't ideal.
Reviewed By: StanislavGlebik
Differential Revision: D15298598
fbshipit-source-id: 4f0fe80ea98dea8af0c8289db4f6f4e4070af622
Summary: Add an endpoint to the API server that provides functionality similar to the `gettreepack` wire protocol command.
Reviewed By: fanzeyi
Differential Revision: D15492734
fbshipit-source-id: 7d0f113f0d33c68d5bfba5781269a92f0d5a66e8
Summary: We need to access the stream of `Entries` for a given `gettreepack` call from the API server. Currently, these entries are only available in the Mercurial wire protocol format from `RepoClient`. This diff splits out the entry fetching code into its own function, which can later be called from the API server.
Reviewed By: xavierd
Differential Revision: D15483702
fbshipit-source-id: e3050b6a0504f97aa28a2c9adbbdfb0f613f3030
Summary: It will be used in the sync job.
Reviewed By: HarveyHunt
Differential Revision: D15449661
fbshipit-source-id: eace10b5f5622cec3d54c011767c35f49fce5960
Summary:
As part of adding support for infinitepush in Mononoke, we'll include additional server-side metadata on Bookmarks (specifically, whether they are publishing and pull-default).
However, we currently use the name `Bookmark` to refer to just a bookmark name. This patch updates all references to `Bookmark` to `BookmarkName` in order to free up `Bookmark`.
Reviewed By: StanislavGlebik
Differential Revision: D15364674
fbshipit-source-id: 126142e24e4361c19d1a6e20daa28bc793fb8686
Summary:
`Phases` currently has a very ugly API, which is a constant source of confusion. I've made the following changes:
- only return/cache public phases
- do not require `public_heads` and always request them from `BlobRepo::get_bonsai_heads_maybe_stale`
- consolidated `HintPhases` and `SqlPhases`
- removed `Hint` from type names, since it does not carry any meaning
- fixed all affected callsites
Reviewed By: StanislavGlebik
Differential Revision: D15344092
fbshipit-source-id: 848245a58a4e34e481706dbcea23450f3c43810b
Summary: This updates Mononoke's repo_read_write_status to fetch the reason from the database. The "Repo is locked in DB" default message is used as a fallback if the reason is NULL.
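The NULL-handling described here boils down to an Option fallback. A minimal sketch, with an assumed function name (the real code reads the reason column from the DB):

```rust
// Sketch: a NULL reason column maps to None, which falls back to the
// pre-existing default lock message quoted in the summary above.
fn lock_reason(db_reason: Option<String>) -> String {
    db_reason.unwrap_or_else(|| "Repo is locked in DB".to_string())
}
```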
Reviewed By: HarveyHunt
Differential Revision: D15164791
fbshipit-source-id: f4cb68c28db1db996c7ef1a309b737cb781659d1
Summary: Seems like 15 mins is not enough, let's bump it
Reviewed By: farnz
Differential Revision: D15154597
fbshipit-source-id: 78b8a43bbc95845719245f71ac85fd22336f3ed6
Summary: Added obsmarkers to pushrebase output. This allows the client to hide commits that were rebased server-side, and check out the rebased commit.
Reviewed By: StanislavGlebik
Differential Revision: D14932842
fbshipit-source-id: f215791e86e32e6420e8bd6cd2bc25c251a7dba0
Summary:
Will be used to coordinate the hg sync job and blobimport - only one should run
at a time.
Reviewed By: HarveyHunt
Differential Revision: D14799003
fbshipit-source-id: abbf06350114f1d756e288d467236e3a5b7b2f01
Summary:
getpackv1 needs to return copy metadata together with the content. This diff
fixes it
Reviewed By: kulshrax
Differential Revision: D14668319
fbshipit-source-id: 56a6ea2eb3a116433446773fa4fbe1f5a66c5746
Summary:
Removed references to HgNodeHash in repo_client in the specified functions. In
addition, updated other files due to type related dependencies.
Reviewed By: StanislavGlebik
Differential Revision: D14543934
fbshipit-source-id: b0d860fe7085ed4b91a62ab1e27fb2907a642a1d
Summary: Slim down the blobstore trait crate as much as possible.
Reviewed By: aslpavel
Differential Revision: D14542675
fbshipit-source-id: faf09255f7fe2236a491742cd836226474f5967c
Summary:
This is a test to cover a tricky case in the discovery logic.
Previously, Mononoke's known() wireproto method returned `true` for both public
and draft commits. The problem is that this affects pushrebase.
There are a few problems with the current setup. A push command like `hg push
-r HASH --to BOOK` may actually do two things - it can either move a bookmark
on the server or do a pushrebase. What it does depends on how discovery phase
of the push finishes.
Each `hg push` starts with a discovery algorithm that tries to figure out which commits
to send to the server. If the client decides that the server already has all the
commits, then it'll just move the bookmark; otherwise it'll run the pushrebase.
During discovery, the client sends the wireproto `known()` request to the server
with a list of commit hashes, and the server returns a list of booleans telling
whether it knows each commit. Before this diff, Mononoke returned true for
both draft commits and public commits, while Mercurial returned true only
for public commits.
So if Mononoke already has a draft commit (it might have it because the commit was created
via `hg pushbackup` or was created in the previous unsuccessful push attempt),
then hg client discovery will decide to move the bookmark instead of
pushrebasing, which in the case of the master bookmark might have disastrous
consequences.
To fix it, let's return false for draft commits, and also implement `knownnodes`, which returns true for draft commits (better names for these methods would be `knownpublic` and `known`).
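The split between the two methods can be sketched as follows. `Phase` and the slice-of-options input are stand-ins; the real implementation looks commits up in the phases store:

```rust
// Sketch of the known()/knownnodes split. Each element is the server's
// knowledge of one requested hash: None = unknown commit.
#[derive(Clone, Copy, PartialEq)]
enum Phase {
    Public,
    Draft,
}

// `known`: true only for public commits (matches Mercurial's behavior),
// so client discovery won't short-circuit on server-side drafts.
fn known(phases: &[Option<Phase>]) -> Vec<bool> {
    phases.iter().map(|p| *p == Some(Phase::Public)).collect()
}

// `knownnodes`: true for any commit the server has, draft or public.
fn knownnodes(phases: &[Option<Phase>]) -> Vec<bool> {
    phases.iter().map(|p| p.is_some()).collect()
}
```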
Note though that in order to trigger the problem the position of the bookmark on the server
should be different from the position of the bookmark on the client. This is
because of short-circuiting in the hg client discovery logic (see
https://fburl.com/s5r76yle).
The potential downside of the change is that we'll fetch bookmarks more often,
but we'll add bookmark cache later if necessary.
Reviewed By: ikostia
Differential Revision: D14560355
fbshipit-source-id: b943714199576e14a32e87f325ae8059d95cb8ed
Summary:
As part of the mononoke lock testing, we realised that it would
be helpful to see why a repo is locked. Add the ability to express this
to the RepoReadOnly enum entry.
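A plausible shape for the enum change is sketched below; the variant and field names are assumed, not necessarily Mononoke's actual definition:

```rust
// Illustrative shape of a RepoReadOnly enum that carries a lock reason.
#[derive(Debug, PartialEq)]
enum RepoReadOnly {
    ReadWrite,
    ReadOnly(String), // the human-readable reason the repo is locked
}

fn describe(state: &RepoReadOnly) -> String {
    match state {
        RepoReadOnly::ReadWrite => "repo is read/write".to_string(),
        RepoReadOnly::ReadOnly(reason) => format!("repo is read-only: {}", reason),
    }
}
```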
Reviewed By: aslpavel
Differential Revision: D14544801
fbshipit-source-id: ecae1460f8e6f0a8a31f4e6de3bd99c87fba8944
Summary:
Previously it was counting all stream entries, and for each file we have more
than one entry. This diff fixes it
Reviewed By: aslpavel
Differential Revision: D14519710
fbshipit-source-id: faf31f92933d63c3d4015efdc71eabb6c21888d7
Summary: To verify that slow downloads are caused by the client connection we log the total amount of time spent downloading files from Manifold.
Reviewed By: HarveyHunt
Differential Revision: D14502779
fbshipit-source-id: 9d6e1fa18faa4689680ed39087aefd418ac2bf62
Summary: Let's validate the content we return to users in the same way we do it for getfiles
Reviewed By: farnz
Differential Revision: D14420148
fbshipit-source-id: e109f6586210858e26334c1547d374c1c9b9e441
Summary:
Allow using a database entry to determine if a repository
is in read-only mode.
If a repo is marked read/write in the config then we will communicate with the db.
Reviewed By: ikostia
Differential Revision: D14279170
fbshipit-source-id: 57abd597f939e57f274079011d2fdcc7e7037854
Summary:
This is a hook in Mercurial; in Mononoke it will be part of the implementation. By default, all non-fast-forward pushes are blocked, except when using the NON_FAST_FORWARD pushvar (--non-forward-move is also needed to circumvent client-side restrictions). Additionally, certain bookmarks (e.g. master) shouldn't be movable in a non-fast-forward manner at all. This can be done by setting the block_non_fast_forward field in the config.
Pushrebase can only move the bookmark that is actually being pushrebased, so we do not need to check whether it is a fast-forward move (it always is).
Reviewed By: StanislavGlebik
Differential Revision: D14405696
fbshipit-source-id: 782b49c26a753918418e02c06dcfab76e3394dc1
Summary:
Before adding hash validation to getpackv1 let's do this refactoring to make it
easier.
This diff also makes hash validation more reliable. Previously we were
refetching the same file content again during validation instead of verifying the actual content
that was sent to the client. Since the content was in cache it was fine, but
it's better to check the same content that's sent to the client.
This diff also adds an integration test.
Reviewed By: jsgf
Differential Revision: D14407292
fbshipit-source-id: b0667cb3dd6a7e0cee0b02cf87a61d43926d6058
Summary: Let's log it to scuba and to scribe just as we do with getfiles requests.
Reviewed By: jsgf
Differential Revision: D14404236
fbshipit-source-id: 079140372c128ee30e152c5626ef8f1127da36b1
Summary:
This diff adds support for the getpackv1 request. Supporting this request is
necessary because the hg client will stop using getfiles soon.
There are two main differences between getfiles and getpackv1: getfiles returns
the loose-file format, while getpackv1 returns a wirepack pack file (the same format as
used by gettreepack). getpackv1 also supports fetching more than one filenode per
file.
Differential Revision: D14404234
fbshipit-source-id: cfaef6a2ebb76da2df68c05e838d89690f56a9f0
Summary: Increase timeout in line with observed operations.
Differential Revision: D14421000
fbshipit-source-id: 68941a5188e41c6dd7fbb3b59af0a912327f76a4
Summary:
Client telemetry is used to send which commands are being run on the client,
which makes debugging slow sessions easier. From the client's perspective, the
server hostname is sent back so that the user knows which physical host it's
connected to, which is also useful debug info.
Reviewed By: StanislavGlebik
Differential Revision: D14261097
fbshipit-source-id: f4dc752671b76483f9dcb38aa4e3d16680087850
Summary: Optionally specify a max depth for history fetching.
Reviewed By: StanislavGlebik
Differential Revision: D14218337
fbshipit-source-id: b6b92181172637e58a43bf61793257559915c7f1
Summary:
Mononoke and hg both have their own wrapper implementations for lz4
compression; unify these to avoid duplication.
Reviewed By: StanislavGlebik
Differential Revision: D14131430
fbshipit-source-id: 3301b755442f9bea00c650c22ea696912a4a24fd
Summary:
Followup from the previous diff. Instead of parsing it in nom let's write a
small function that does all the parsing of getbundle capabilities.
Reviewed By: lukaspiatkowski
Differential Revision: D13972034
fbshipit-source-id: a6fbc9742217f3d77e7d93bb8cf5165f94d8b3e1
Summary:
It breaks mononoke traffic replay - https://fburl.com/scuba/1rxovt8t.
It changed how mononoke logs requests to replay and now bundlecaps is a dict
instead of a string, so this line in traffic replay fails -
https://fburl.com/g8gwhzrq.
Reviewed By: lukaspiatkowski
Differential Revision: D13972033
fbshipit-source-id: a6e8258da8e7a5b6f90b869781f448a2202a07a1
Summary: Move the `HgFileHistoryEntry` type from the `remotefilelog` crate into `mercurial-types`, since it is useful outside of `repo_client`, and will be extended with additional functionality unrelated to its existing usage later in this stack.
Reviewed By: StanislavGlebik
Differential Revision: D14079336
fbshipit-source-id: d65d6dded840e396e227c9af2aa6dc27097dcbef
Summary: Add a method for fetching the history of a given file (specified as a path/filenode pair). This will be used by the API server to answer file history requests.
Reviewed By: lukaspiatkowski
Differential Revision: D13910533
fbshipit-source-id: 985dc8c19f844a0d521d672848b7309dbaa07e85
Summary: This adds a metaconfig option to preserve push/pushrebase bundles in the blobstore.
Reviewed By: StanislavGlebik
Differential Revision: D14020299
fbshipit-source-id: 94304d69e0ac5d81232f058c6d94eec61eb0020a
Summary:
This diff does not change anything on its own, but rather adds the not-yet-reachable
(but already somewhat tested) code to preserve bundles when doing
pushes and pushrebases.
I want to land it now so that conflict resolution is easier.
Reviewed By: StanislavGlebik
Differential Revision: D14001738
fbshipit-source-id: e3279bc34946400210d8d013910e28f8d519a5f8