Summary: We need to access the stream of `Entries` for a given `gettreepack` call from the API server. Currently, these entries are only available in the Mercurial wire protocol format from `RepoClient`. This diff splits out the entry fetching code into its own function, which can later be called from the API server.
Reviewed By: xavierd
Differential Revision: D15483702
fbshipit-source-id: e3050b6a0504f97aa28a2c9adbbdfb0f613f3030
Summary: It will be used in the sync job.
Reviewed By: HarveyHunt
Differential Revision: D15449661
fbshipit-source-id: eace10b5f5622cec3d54c011767c35f49fce5960
Summary:
As part of adding support for infinitepush in Mononoke, we'll include additional server-side metadata on Bookmarks (specifically, whether they are publishing and pull-default).
However, we currently use the name `Bookmark` to refer to just a bookmark name. This patch updates all references from `Bookmark` to `BookmarkName` in order to free up `Bookmark`.
Reviewed By: StanislavGlebik
Differential Revision: D15364674
fbshipit-source-id: 126142e24e4361c19d1a6e20daa28bc793fb8686
Summary:
`Phases` currently has a very ugly API, which is a constant source of confusion. I've made the following changes:
- only return/cache public phases
- do not require `public_heads` and always request them from `BlobRepo::get_bonsai_heads_maybe_stale`
- consolidated `HintPhases` and `SqlPhases`
- removed `Hint` from type names, since it does not carry any meaning
- fixed all affected callsites
Reviewed By: StanislavGlebik
Differential Revision: D15344092
fbshipit-source-id: 848245a58a4e34e481706dbcea23450f3c43810b
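The "only public phases" model above boils down to: a commit is public iff it is an ancestor of (or equal to) one of the public heads, which the diff now always requests from `BlobRepo::get_bonsai_heads_maybe_stale`. A minimal sketch over a toy parent map (illustrative names, not the real `Phases` API):

```rust
use std::collections::{HashMap, HashSet};

// Toy model: a commit is public iff it is an ancestor of (or equal to)
// one of the public heads. `parents` maps each commit to its parents.
fn public_commits(
    parents: &HashMap<&'static str, Vec<&'static str>>,
    heads: &[&'static str],
) -> HashSet<&'static str> {
    let mut public = HashSet::new();
    let mut stack: Vec<&'static str> = heads.to_vec();
    while let Some(commit) = stack.pop() {
        // insert() returns false if we already visited this commit
        if public.insert(commit) {
            if let Some(ps) = parents.get(commit) {
                stack.extend(ps.iter().copied());
            }
        }
    }
    public
}
```

Anything not reachable from a public head (e.g. a local draft commit) simply never enters the set, so there is nothing to cache for drafts.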
Summary: This updates Mononoke's repo_read_write_status to fetch the reason from the database. The "Repo is locked in DB" default message is used as a fallback if the reason is NULL.
Reviewed By: HarveyHunt
Differential Revision: D15164791
fbshipit-source-id: f4cb68c28db1db996c7ef1a309b737cb781659d1
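The NULL-handling described above amounts to an `Option` fallback: a NULL reason column maps to `None`, and the default message is substituted. A minimal sketch (the function name is illustrative, not Mononoke's actual API):

```rust
// Hypothetical sketch: the DB may store NULL for the lock reason,
// modeled here as Option<String>; fall back to the default message.
fn lock_message(db_reason: Option<String>) -> String {
    db_reason.unwrap_or_else(|| "Repo is locked in DB".to_string())
}
```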
Summary: Seems like 15 minutes is not enough; let's bump it.
Reviewed By: farnz
Differential Revision: D15154597
fbshipit-source-id: 78b8a43bbc95845719245f71ac85fd22336f3ed6
Summary: Added obsmarkers to pushrebase output. This allows the client to hide commits that were rebased server-side, and check out the rebased commit.
Reviewed By: StanislavGlebik
Differential Revision: D14932842
fbshipit-source-id: f215791e86e32e6420e8bd6cd2bc25c251a7dba0
Summary:
Will be used to coordinate the hg sync job and blobimport - only one should run
at a time.
Reviewed By: HarveyHunt
Differential Revision: D14799003
fbshipit-source-id: abbf06350114f1d756e288d467236e3a5b7b2f01
Summary:
getpackv1 needs to return copy metadata together with the content. This diff
fixes that.
Reviewed By: kulshrax
Differential Revision: D14668319
fbshipit-source-id: 56a6ea2eb3a116433446773fa4fbe1f5a66c5746
Summary:
Removed references to HgNodeHash from the specified functions in repo_client. In
addition, updated other files due to type-related dependencies.
Reviewed By: StanislavGlebik
Differential Revision: D14543934
fbshipit-source-id: b0d860fe7085ed4b91a62ab1e27fb2907a642a1d
Summary: Slim down the blobstore trait crate as much as possible.
Reviewed By: aslpavel
Differential Revision: D14542675
fbshipit-source-id: faf09255f7fe2236a491742cd836226474f5967c
Summary:
This is a test to cover a tricky case in the discovery logic.
Previously Mononoke's known() wireproto method returned `true` for both public
and draft commits. The problem is that this affects pushrebase.
There are a few problems with the current setup. A push command like `hg push
-r HASH --to BOOK` may actually do two things - it can either move a bookmark
on the server or do a pushrebase. Which one it does depends on how the
discovery phase of the push finishes.
Each `hg push` starts with a discovery algorithm that tries to figure out what
commits to send to the server. If the client decides that the server already
has all the commits, then it'll just move the bookmark; otherwise it'll run
pushrebase.
During discovery the client sends the wireproto `known()` request to the server
with a list of commit hashes, and the server returns a list of booleans telling
whether it knows each commit. Before this diff Mononoke returned true for
both draft and public commits, while Mercurial returned true only
for public commits.
So if Mononoke already has a draft commit (it might have it because the commit
was created via `hg pushbackup` or during a previous unsuccessful push attempt),
then hg client discovery will decide to move the bookmark instead of
pushrebasing, which in the case of the master bookmark might have disastrous
consequences.
To fix it, let's return false for draft commits, and also implement `knownnodes`, which returns true for draft commits (better names for these methods would be `knownpublic` and `known`).
Note though that in order to trigger the problem, the position of the bookmark
on the server must differ from the position of the bookmark on the client. This
is because of short-circuiting in the hg client discovery logic (see
https://fburl.com/s5r76yle).
The potential downside of this change is that we'll fetch bookmarks more often,
but we can add a bookmark cache later if necessary.
Reviewed By: ikostia
Differential Revision: D14560355
fbshipit-source-id: b943714199576e14a32e87f325ae8059d95cb8ed
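The fixed behavior can be sketched as follows (illustrative types and names, not Mononoke's real implementation): `known` answers true only for public commits, while `knownnodes` answers true for any commit the server has, public or draft.

```rust
use std::collections::HashSet;

// Illustrative server-side state: which commits are public vs draft.
struct Server {
    public: HashSet<&'static str>,
    draft: HashSet<&'static str>,
}

impl Server {
    // `known` (really "known public"): true only for public commits,
    // matching Mercurial's behavior after this diff.
    fn known(&self, hashes: &[&'static str]) -> Vec<bool> {
        hashes.iter().map(|h| self.public.contains(h)).collect()
    }

    // `knownnodes`: true for any commit the server has at all.
    fn knownnodes(&self, hashes: &[&'static str]) -> Vec<bool> {
        hashes
            .iter()
            .map(|h| self.public.contains(h) || self.draft.contains(h))
            .collect()
    }
}
```

With this split, a draft commit that already exists on the server (e.g. from `hg pushbackup`) no longer convinces client discovery that the push is a pure bookmark move.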
Summary:
As part of the mononoke lock testing, we realised that it would
be helpful to see why a repo is locked. Add the ability to express this
in the RepoReadOnly enum.
Reviewed By: aslpavel
Differential Revision: D14544801
fbshipit-source-id: ecae1460f8e6f0a8a31f4e6de3bd99c87fba8944
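The shape of the change can be sketched like this (a hypothetical rendering of the enum, not necessarily the exact production definition): the read-only variant carries a human-readable reason alongside the state.

```rust
// Sketch: the read-only variant now carries the lock reason.
enum RepoReadOnly {
    ReadWrite,
    ReadOnly(String),
}

// Render a user-facing message from the state (illustrative helper).
fn status_message(state: &RepoReadOnly) -> String {
    match state {
        RepoReadOnly::ReadWrite => "repo is read/write".to_string(),
        RepoReadOnly::ReadOnly(reason) => format!("repo is locked: {}", reason),
    }
}
```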
Summary:
Previously it was counting all stream entries, but each file can have more than
one entry. This diff fixes that.
Reviewed By: aslpavel
Differential Revision: D14519710
fbshipit-source-id: faf31f92933d63c3d4015efdc71eabb6c21888d7
Summary: To verify that slow downloads are caused by the client connection we log the total amount of time spent downloading files from Manifold.
Reviewed By: HarveyHunt
Differential Revision: D14502779
fbshipit-source-id: 9d6e1fa18faa4689680ed39087aefd418ac2bf62
Summary: Let's validate the content we return to users in the same way we do for getfiles.
Reviewed By: farnz
Differential Revision: D14420148
fbshipit-source-id: e109f6586210858e26334c1547d374c1c9b9e441
Summary:
Allow using a database entry to determine if a repository
is in read-only mode.
If a repo is marked read/write in the config then we will communicate with the db.
Reviewed By: ikostia
Differential Revision: D14279170
fbshipit-source-id: 57abd597f939e57f274079011d2fdcc7e7037854
Summary:
This is a hook in mercurial; in Mononoke it will be part of the implementation. By default all non-fast-forward pushes are blocked, except when using the NON_FAST_FORWARD pushvar (--non-forward-move is also needed to circumvent client-side restrictions). Additionally, certain bookmarks (e.g. master) shouldn't be movable in a non-fast-forward manner at all. This can be done by setting the block_non_fast_forward field in the config.
Pushrebase can only move the bookmark that is actually being pushrebased, so we do not need to check whether it is a fast-forward move (it always is).
Reviewed By: StanislavGlebik
Differential Revision: D14405696
fbshipit-source-id: 782b49c26a753918418e02c06dcfab76e3394dc1
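The underlying check is simple to state: a bookmark move from `old` to `new` is a fast-forward iff `old` is an ancestor of `new`. A sketch over a toy parent map (illustrative, not the production code):

```rust
use std::collections::{HashMap, HashSet};

// A bookmark move old -> new is a fast-forward iff `old` is an
// ancestor of (or equal to) `new`. Walk ancestors of `new` looking
// for `old`.
fn is_fastforward(
    parents: &HashMap<&'static str, Vec<&'static str>>,
    old: &'static str,
    new: &'static str,
) -> bool {
    let mut stack = vec![new];
    let mut seen = HashSet::new();
    while let Some(commit) = stack.pop() {
        if commit == old {
            return true;
        }
        if seen.insert(commit) {
            if let Some(ps) = parents.get(commit) {
                stack.extend(ps.iter().copied());
            }
        }
    }
    false
}
```

A blocked bookmark would reject any move for which this predicate is false, unless the NON_FAST_FORWARD pushvar is set.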
Summary:
Before adding hash validation to getpackv1, let's do this refactoring to make it
easier.
This diff also makes hash validation more reliable. Previously we were
refetching the same file content during validation instead of verifying the
actual content that was sent to the client. Since the content was in cache this
was fine, but it's better to check the exact content that's sent to the client.
This diff also adds an integration test.
Reviewed By: jsgf
Differential Revision: D14407292
fbshipit-source-id: b0667cb3dd6a7e0cee0b02cf87a61d43926d6058
Summary: Let's log it to scuba and to scribe just as we do with getfiles requests.
Reviewed By: jsgf
Differential Revision: D14404236
fbshipit-source-id: 079140372c128ee30e152c5626ef8f1127da36b1
Summary:
The diff adds support for the getpackv1 request. Supporting this request is
necessary because the hg client will stop using getfiles soon.
There are two main differences between getfiles and getpackv1: getfiles returns
the loose-file format, while getpackv1 returns a wirepack pack file (the same
format used by gettreepack); and getpackv1 supports fetching more than one
filenode per file.
Differential Revision: D14404234
fbshipit-source-id: cfaef6a2ebb76da2df68c05e838d89690f56a9f0
Summary: Increase timeout in line with observed operations.
Differential Revision: D14421000
fbshipit-source-id: 68941a5188e41c6dd7fbb3b59af0a912327f76a4
Summary:
Client telemetry is used to report which commands are being run on the client,
which makes debugging slow sessions easier. From the client's perspective, the
server hostname is sent back so that the user knows which physical host they're
connected to, which is also useful debug info.
Reviewed By: StanislavGlebik
Differential Revision: D14261097
fbshipit-source-id: f4dc752671b76483f9dcb38aa4e3d16680087850
Summary: Optionally specify a max depth for history fetching.
Reviewed By: StanislavGlebik
Differential Revision: D14218337
fbshipit-source-id: b6b92181172637e58a43bf61793257559915c7f1
Summary:
Mononoke and hg both have their own wrapper implementations for lz4
compression; unify these to avoid duplication.
Reviewed By: StanislavGlebik
Differential Revision: D14131430
fbshipit-source-id: 3301b755442f9bea00c650c22ea696912a4a24fd
Summary:
Followup from the previous diff. Instead of parsing it in nom let's write a
small function that does all the parsing of getbundle capabilities.
Reviewed By: lukaspiatkowski
Differential Revision: D13972034
fbshipit-source-id: a6fbc9742217f3d77e7d93bb8cf5165f94d8b3e1
Summary:
It breaks mononoke traffic replay - https://fburl.com/scuba/1rxovt8t.
It changed how mononoke logs requests to replay, and now bundlecaps is a dict
instead of a string, so this line in traffic replay fails -
https://fburl.com/g8gwhzrq.
Reviewed By: lukaspiatkowski
Differential Revision: D13972033
fbshipit-source-id: a6e8258da8e7a5b6f90b869781f448a2202a07a1
Summary: Move the `HgFileHistoryEntry` type from the `remotefilelog` crate into `mercurial-types`, since it is useful outside of `repo_client`, and will be extended with additional functionality unrelated to its existing usage later in this stack.
Reviewed By: StanislavGlebik
Differential Revision: D14079336
fbshipit-source-id: d65d6dded840e396e227c9af2aa6dc27097dcbef
Summary: Add a method for fetching the history of a given file (specified as a path/filenode pair). This will be used by the API server to answer file history requests.
Reviewed By: lukaspiatkowski
Differential Revision: D13910533
fbshipit-source-id: 985dc8c19f844a0d521d672848b7309dbaa07e85
Summary: This adds a metaconfig option to preserve push/pushrebase bundles in the blobstore.
Reviewed By: StanislavGlebik
Differential Revision: D14020299
fbshipit-source-id: 94304d69e0ac5d81232f058c6d94eec61eb0020a
Summary:
This diff does not change anything on its own, but rather adds the
not-yet-reachable (but already somewhat tested) code to preserve bundles when
doing pushes and pushrebases.
I want to land it now so that conflict resolution is easier.
Reviewed By: StanislavGlebik
Differential Revision: D14001738
fbshipit-source-id: e3279bc34946400210d8d013910e28f8d519a5f8
Summary:
These are **not** the schemas that we use in production.
So at the moment they are not used for anything and they just create confusion.
Reviewed By: aslpavel
Differential Revision: D13986001
fbshipit-source-id: 7aae0a5da474f579c9cdf1bbf5dfe183835cae2d
Summary: HgFileNodeId is a more strongly typed id, so it is preferred over HgNodeHash whenever the value identifies a filenode.
Reviewed By: aslpavel
Differential Revision: D13986172
fbshipit-source-id: c0334652345acb868e86c38b8c0045e9c023c176
Summary: The Copy trait means that something is so cheap to copy that you don't even need to explicitly call `.clone()` on it. As it doesn't make much sense to pass &i64, it also doesn't make much sense to pass &<Something that is Copy>, so I have removed all occurrences of passing references to our hashes that are Copy.
Reviewed By: fanzeyi
Differential Revision: D13974622
fbshipit-source-id: 89efc1c1e29269cc2e77dcb124964265c344f519
Summary: The `create_remotefilelog_blob` function was fairly large, but consisted of several distinct steps in a long `Future` combinator chain. This diff refactors this module, splitting up this function into several smaller ones, and adding types for intermediate results where needed.
Reviewed By: StanislavGlebik
Differential Revision: D13953990
fbshipit-source-id: 61d8115cc80d44bc624fa6de8a4ffcf2cdd5266e
Summary:
Implemented a more advanced parser that parses the bundlecaps JSON object into a data structure that is more suitable to work with.
Patched version of D13602738.
Reviewed By: ikostia
Differential Revision: D13751782
fbshipit-source-id: 977b70c121d0df587082b8db615cbea7a00e3aa4
Summary:
Currently, if a crate depends on even a single type from metaconfig, then in
order to compile that crate buck first compiles the metaconfig crate with all
the config-parsing logic.
This diff splits metaconfig into two crates. The first one just holds the types
for "external consumption" by other crates. The second holds the parsing logic.
That makes builds faster.
Reviewed By: jsgf, lukaspiatkowski
Differential Revision: D13877592
fbshipit-source-id: f353fb2d1737845bf1fa0de515ff8ef131020063
Summary:
The main reason to do this is to remove the dependency from cmdlib to
repo_client. repo_client depends on a lot of other crates, like
bundle2-resolver, hooks, etc., which means that in order to compile
mononoke_admin we need to compile those crates too. By moving open_blobrepo
into the blobrepo crate we remove these unnecessary dependencies.
Also, let's remove the unused blobrepo type.
Reviewed By: aslpavel
Differential Revision: D13848878
fbshipit-source-id: cd3d04354649cdb5b2947f08762051318725c781