Summary:
- The healer runs on all repositories at once, and queries for some repositories are timing out
- It is now possible to run the healer for just a specified repository
Reviewed By: HarveyHunt
Differential Revision: D14539978
fbshipit-source-id: 9139999da97b2655ae9312c33c9e8c15f0b24016
Summary:
Basically we should check that the commits have been backed up.
If this is not true and the commits are local, we can just back them up.
If they are not known by this repo, pull them from the old one and then back them
up.
Reviewed By: markbt
Differential Revision: D14508239
fbshipit-source-id: 3fdd83335cb937b153510ec3c7510ecd1167d0ca
Summary:
As part of the Mononoke lock testing, we realised that it would
be helpful to see why a repo is locked. Add the ability to express this
in the RepoReadOnly enum entry.
Reviewed By: aslpavel
Differential Revision: D14544801
fbshipit-source-id: ecae1460f8e6f0a8a31f4e6de3bd99c87fba8944
Summary:
- convert to the 2018 edition and remove all `extern crate` statements
- wait for `myrouter` to be available before actually doing anything
Reviewed By: HarveyHunt
Differential Revision: D14524247
fbshipit-source-id: ebe2e2e74935f00c87945129370f268c794fcab7
Summary: All of the API server handlers were unnecessarily cloning URL parameters. This diff eliminates these clones.
Reviewed By: singhsrb
Differential Revision: D14537692
fbshipit-source-id: 02ea9dccae02b2813d04dd95feb8225e35870b6c
Summary:
This is particularly useful for `hg cloud sync` when it calls pushbackup in the
background to the secondary storage at the end of cloud sync.
Pushbackup is not smart enough, so it would back up again what we just pulled during cloud sync,
even though all those commits were probably already backed up to the secondary
storage by another machine during cloud sync.
Reviewed By: singhsrb
Differential Revision: D14386616
fbshipit-source-id: e62ed0afb89c28fe6880346077c279e6705da602
Summary:
Previously it was counting all stream entries, but for each file we have more
than one entry. This diff fixes that.
Reviewed By: aslpavel
Differential Revision: D14519710
fbshipit-source-id: faf31f92933d63c3d4015efdc71eabb6c21888d7
Summary: To verify that slow downloads are caused by the client connection we log the total amount of time spent downloading files from Manifold.
Reviewed By: HarveyHunt
Differential Revision: D14502779
fbshipit-source-id: 9d6e1fa18faa4689680ed39087aefd418ac2bf62
Summary: `LooseHistoryEntry` and `PackHistoryEntry` aren't the best names for these types, since the latter is what most users should use, whereas the former should typically only be used for data transmission. As such, we should rename these to clarify the intent.
Differential Revision: D14512749
fbshipit-source-id: 5293df89766825077b2ba07224297b958bf46002
Summary: Let's validate the content we return to users in the same way we do it for getfiles
Reviewed By: farnz
Differential Revision: D14420148
fbshipit-source-id: e109f6586210858e26334c1547d374c1c9b9e441
Summary: In Mononoke we want to be able to block merge commits on a repo per repo basis.
Reviewed By: aslpavel
Differential Revision: D14455502
fbshipit-source-id: 400e85834c20df811674405bc0c391860cf677dd
Summary:
Allow using a database entry to determine if a repository
is in read-only mode.
If a repo is marked read/write in the config then we will communicate with the db.
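As a rough illustration of the logic described above (function names and the DB schema are hypothetical, not Mononoke's actual API):

```python
def is_read_only(config_state: str, fetch_db_state) -> bool:
    # If the config already marks the repo read-only, the DB is not consulted.
    if config_state == "read_only":
        return True
    # Repo is read/write in the config: defer to the database entry.
    # fetch_db_state is a callable standing in for a query such as
    # "SELECT state FROM repo_lock WHERE repo = ?".
    return fetch_db_state() == "read_only"
```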
Reviewed By: ikostia
Differential Revision: D14279170
fbshipit-source-id: 57abd597f939e57f274079011d2fdcc7e7037854
Summary: Bookmarks that are set to be non-fastforward movable should also not be deletable
Reviewed By: StanislavGlebik
Differential Revision: D14420457
fbshipit-source-id: a10231466350c0b25437972c66472b46044fc625
Summary:
This is a hook in Mercurial; in Mononoke it will be part of the implementation. By default all non-fastforward pushes are blocked, except when using the NON_FAST_FORWARD pushvar (--non-forward-move is also needed to circumvent client-side restrictions). Additionally, certain bookmarks (e.g. master) shouldn't be movable in a non-fastforward manner at all. This can be done by setting the block_non_fast_forward field in the config.
Pushrebase can only move the bookmark that is actually being pushrebased, so we do not need to check whether it is a fastforward move (it always is).
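A minimal sketch of the bookmark-move check described above (function and parameter names are hypothetical, not the hook's actual API):

```python
def check_bookmark_move(old, new, ancestors_of_new, pushvars, block_non_fast_forward):
    # A move is fast-forward iff the old target is an ancestor of the new
    # one; bookmark creation (old is None) is always allowed.
    if old is None or old == new or old in ancestors_of_new:
        return
    if block_non_fast_forward:
        # Bookmarks like master can never be moved non-fastforward.
        raise PermissionError("non-fastforward moves are blocked for this bookmark")
    if pushvars.get("NON_FAST_FORWARD") != "true":
        raise PermissionError("set the NON_FAST_FORWARD pushvar to move this bookmark")
```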
Reviewed By: StanislavGlebik
Differential Revision: D14405696
fbshipit-source-id: 782b49c26a753918418e02c06dcfab76e3394dc1
Summary: There is no need to have an Option<Vec<XYZ>> as None can simply be represented by an empty vector. This makes these fields easier to use.
Reviewed By: StanislavGlebik
Differential Revision: D14405687
fbshipit-source-id: e4c5ba12a1e3c6a18130026af6814d54952da4d2
Summary: This is needed to run the scmadmin tool and lock the repo.
Reviewed By: StanislavGlebik
Differential Revision: D14497608
fbshipit-source-id: 5865b90375db29a17d462044ca4cdb87242a8209
Summary: To learn how far behind we are in absolute bundle numbers.
Reviewed By: StanislavGlebik
Differential Revision: D14491672
fbshipit-source-id: 31d16f115b2b6fe4b88c25a847ce229e123b048b
Summary: We want to be able to manipulate hg-sync counters from the Mononoke admin.
Reviewed By: StanislavGlebik
Differential Revision: D14477676
fbshipit-source-id: 11218390bf469d4f297f7f13e9daee2d5f9bb35b
Summary:
Before adding hash validation to getpackv1, let's do this refactoring to make it
easier.
This diff also makes hash validation more reliable. Previously we were
refetching the same file content during validation instead of verifying the actual content
that was sent to the client. Since the content was in cache it was fine, but
it's better to check the same content that's sent to the client.
This diff also adds an integration test.
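The idea can be sketched as follows (plain SHA-1 over the raw bytes for illustration only; Mercurial's real filenode hash also covers parents and copy metadata):

```python
import hashlib

def validate_and_send(content: bytes, expected_sha1: str) -> bytes:
    # Hash the exact bytes that are about to go to the client, instead of
    # refetching the blob: a refetch may be served from cache and hide
    # corruption in what was actually sent.
    actual = hashlib.sha1(content).hexdigest()
    if actual != expected_sha1:
        raise ValueError(f"content hash mismatch: {actual} != {expected_sha1}")
    return content
```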
Reviewed By: jsgf
Differential Revision: D14407292
fbshipit-source-id: b0667cb3dd6a7e0cee0b02cf87a61d43926d6058
Summary: Let's log it to scuba and to scribe just as we do with getfiles requests.
Reviewed By: jsgf
Differential Revision: D14404236
fbshipit-source-id: 079140372c128ee30e152c5626ef8f1127da36b1
Summary:
The diff adds support for the getpackv1 request. Supporting this request is
necessary because the hg client will stop using getfiles soon.
There are two main differences between getfiles and getpackv1: getfiles returns
a loose-file format, while getpackv1 returns a wirepack pack file (the same format as
used by gettreepack), and getpackv1 supports fetching more than one filenode per
file.
Differential Revision: D14404234
fbshipit-source-id: cfaef6a2ebb76da2df68c05e838d89690f56a9f0
Summary: The order was changed after ec00921aece0adc6aaca49e5580bff52784c4ca5
Reviewed By: quark-zju
Differential Revision: D14482440
fbshipit-source-id: d13327ed16387e597ca6fc1cfab0787a13f38d00
Summary:
We've seen failures during attempts to deploy this job to TW without such
waits. After adding the wait, things look better.
Reviewed By: StanislavGlebik
Differential Revision: D14456207
fbshipit-source-id: 8469932d7387060c026164ac97fa604453b0d296
Summary: Every packman-managed RPM in fbcode should now have a `packager: ONCALL` line in its packman.yml file. This diff adds one for fb-mononoke-admin. Assuming I found the right oncall, would you Accept+Ship this change?
Reviewed By: sunshowers
Differential Revision: D14399325
fbshipit-source-id: 5dd03b3f8722b4fb13b85acbe7702bc780b49f76
Summary: Increase timeout in line with observed operations.
Differential Revision: D14421000
fbshipit-source-id: 68941a5188e41c6dd7fbb3b59af0a912327f76a4
Summary: See D14279065; this diff simply cleans up the deprecated code
Reviewed By: StanislavGlebik
Differential Revision: D14279210
fbshipit-source-id: 10801fb04ad533a80bb7a2f9dcdf3ee5906aa68d
Summary:
Currently we are logging new commits from BlobRepo. This will lead to issues once CommitCloud starts using Mononoke as we cannot differentiate between phases at that level. The solution is to log commits when they are pushrebased as this guarantees that they are public commits.
Note: This only introduces the new logic, cleaning up the existing implementation is part of D14279210
Reviewed By: StanislavGlebik
Differential Revision: D14279065
fbshipit-source-id: d714fae7164a8af815fc7716379ff0b7eb4826fb
Summary:
Fixes the below type error when pulling from a git repo using hg-git on my laptop:
  File "edenscm/hgext/remotenames.py", line 225, in expull
    pullremotenames(repo, remote, bookmarks)
  File "edenscm/hgext/remotenames.py", line 314, in pullremotenames
    path = activepath(repo.ui, remote)
  File "edenscm/hgext/remotenames.py", line 1464, in activepath
    rpath = _normalizeremote(remote.url)
  File "edenscm/hgext/remotenames.py", line 1439, in _normalizeremote
    u = util.url(remote)
  File "edenscm/hgext/hggit/__init__.py", line 164, in _url
    if not (path.startswith(pycompat.ossep) and ":" in path):
AttributeError: 'function' object has no attribute 'startswith'
Basically, `peer.url()` is the API; `peer._url` is a private field that does
not always exist.
Besides, further remove named branches that can crash hg-git with
NotImplementedError:
  File "edenscm/hgext/remotenames.py", line 225, in expull
    pullremotenames(repo, remote, bookmarks)
  File "edenscm/hgext/remotenames.py", line 322, in pullremotenames
    for branch, nodes in remote.branchmap().iteritems():
  File "edenscm/hgext/hggit/gitrepo.py", line 73, in branchmap
    raise NotImplementedError
Reviewed By: DurhamG
Differential Revision: D14144462
fbshipit-source-id: 2e886c639cf6689480f84626eaf0d5ec25512ea0
Summary:
Apparently Gluster really can't cope with renames, so write to a
unique name, then symlink the canonical key to it. If the symlink already
exists, then we'll assume that the file does too, and remove the unique
name.
Reviewed By: StanislavGlebik
Differential Revision: D14014167
fbshipit-source-id: 1e5e2ce989652232d67d2aaac776e35127f58fb0
Summary: Keep track of error causes so we can print better errors.
Reviewed By: StanislavGlebik
Differential Revision: D14014165
fbshipit-source-id: e9a7846256bbfbfd689e0d78f01b1ac50cf64c1b
Summary:
Reduce directory fan out levels to 2 wide ones (xxx/xxx) rather than 4 narrow
(xx/xx/xx/xx). This saves write latency cost: with 4 levels, until we get
~billions of blobs, every write will also be a mkdir, and we'll have many
single-entry directories. With 2 wide levels this will be true up to ~millions of
blobs, but then start to fill out the leaf directories.
The downside is that leaf directory loading will be more expensive once we do have
billions of objects, but I think it will be manageable.
This is NOT INTEROPERABLE with 4-level fanout, so invalidates existing stores.
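For illustration, the two layouts differ only in how a blob key is split into directory prefixes (hypothetical helper, not the store's actual code):

```python
def fanout_path(key: str, wide: bool = True) -> str:
    if wide:
        # Two wide levels (xxx/xxx): 4096 * 4096 leaf directories, so
        # mkdirs stop dominating writes after ~millions of blobs.
        return f"{key[0:3]}/{key[3:6]}/{key}"
    # Four narrow levels (xx/xx/xx/xx): nearly every write also needs
    # a mkdir until ~billions of blobs exist.
    return f"{key[0:2]}/{key[2:4]}/{key[4:6]}/{key[6:8]}/{key}"
```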
Reviewed By: aslpavel
Differential Revision: D13923991
fbshipit-source-id: b93ddc49305c921f8a1b34606796b6d75273fb75
Summary:
Adding `.data` to the end breaks the "rsync hack" to keep files colocated on
the same node.
Reviewed By: aslpavel
Differential Revision: D14014284
fbshipit-source-id: c65b237ed03c9c8ea3a6ac239135820379b7e38d
Summary: In this final diff for verify_integrity, the authorization data is passed from hgcli (where the ssh connection is established) to Mononoke, where it is further passed to the verify_integrity script.
Reviewed By: StanislavGlebik
Differential Revision: D14387759
fbshipit-source-id: 2c0f9eef4128f5af0052276a1830ea2f449fb03a
Summary: The username and ssh vars are both properties of the request, so they fit nicely inside the per-request CoreContext. They will be used in the verify_integrity hook.
Reviewed By: StanislavGlebik
Differential Revision: D14385840
fbshipit-source-id: 9fe3cb96ffa89d8b017c730e37ca9ea4124ede0c
Summary:
This additional data sent to Mononoke will be used e.g. in the verify_integrity hook.
This diff also changes user_unixname: instead of passing the USER env variable there, we pass the value produced by the users crate. Since user_unixname is not used anywhere yet, it is safe to do so.
Reviewed By: StanislavGlebik
Differential Revision: D14385280
fbshipit-source-id: 1e48c232fafbba5d5188c7d162e9ef21efd738f7
Summary:
This stack is about adding getpackv1 wireproto request support (see task for
more details).
This diff adds a Decoder trait implementation. It parses all parameters for a
single file (i.e. filename + list of nodes). This Decoder is combined with
the one from the previous diff to fully parse a getpackv1 request.
Reviewed By: lukaspiatkowski
Differential Revision: D14401516
fbshipit-source-id: 9b5fa1f48de338e58a288eb0653bd734cd8d1623
Summary:
The stack is about adding support for getpackv1 wireproto request.
This diff makes the decode_getfiles_arg_stream function generic over the Decoder type.
At the moment decode_getfiles_arg_stream is used for unpacking the `getfiles` wireproto
request. However, getpackv1 packs its request in a similar way to getfiles (which
is different from any other wireproto request). Let's re-use the same
functionality for getpackv1.
Reviewed By: lukaspiatkowski
Differential Revision: D14401517
fbshipit-source-id: ef0e9abeecec61c7c3d25b4a9e36da3f05c870cb
Summary:
Copy & rename sources must be included in the list of conflict files, so that if
a copy was modified between the root and the `onto` bookmark then pushrebase
fails with conflicts.
Note that some merge cases are not handled yet - see the TODO in the code
Reviewed By: lukaspiatkowski
Differential Revision: D14322036
fbshipit-source-id: d69bcceaa24987dd1e9d67e77f6a3205b580a7d8
Summary:
Mononoke does not support pushrebasing over a merge commit. Previously
`find_closest_ancestor_root` didn't detect merges.
In cases like
```
o <- onto
|
o o <- commit to pushrebase
|\ /
| o
o <- main branch
...
```
`find_closest_ancestor_root` would go down the main branch and finally fail with a
`RootTooFarBehind` error. By detecting the merge commit earlier we can print a
better error message and avoid doing a useless traversal
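The fixed traversal can be sketched like this (names are illustrative; the real `find_closest_ancestor_root` works over Mononoke's commit graph):

```python
from collections import deque

def find_closest_ancestor_root(parents, roots, start):
    # parents: commit -> list of parent commits; roots: the set of
    # commits that are valid pushrebase roots.
    queue, seen = deque([start]), {start}
    while queue:
        node = queue.popleft()
        if node in roots:
            return node
        ps = parents.get(node, [])
        if len(ps) > 1:
            # Bail out as soon as a merge commit is seen, instead of
            # wandering down the main branch until RootTooFarBehind.
            raise ValueError("pushrebase over merge commits is not supported")
        for p in ps:
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return None
```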
Reviewed By: lukaspiatkowski
Differential Revision: D14321616
fbshipit-source-id: 2aa53a2627f25897a241616a429864f1cfca3100
Summary:
The problem was in using `file_changes()` of a bonsai object. If a file
replaces a directory, it returns just an added file, but not a removed
directory.
However, `changed_entry_stream` didn't return an entry if only its mode was changed (i.e. a file became executable, or became a symlink). This diff fixes that as well.
Let's use the same method of computing changed files instead of `file_changes()`.
Differential Revision: D14279470
fbshipit-source-id: 976b0abd93646f7d68137c83cb07a8564922ce17
Summary:
In integration tests, the sent replycaps part can have different lengths
depending on client capabilities. Usually this is globbed, but a couple
of places were missing it.
Reviewed By: aslpavel
Differential Revision: D14363789
fbshipit-source-id: 5db161bd646c7a9fa8aeef2281ee7eb4d0c7771e
Summary: This endpoint is also used in Mercurial now.
Reviewed By: StanislavGlebik
Differential Revision: D14303557
fbshipit-source-id: fe38b62d010de2846dcf800f93ba050d9c396873