Commit Graph

49 Commits

Author SHA1 Message Date
Simon Farnsworth
73ef342f82 Allow pushrebase to work directly on bonsai changesets
Summary:
Currently, pushrebase starts by mapping your `HgChangeset`s to `BonsaiChangeset`s. Cross-repo sync will generate the bonsai form directly, then pushrebase it.

Reuse the existing pushrebase machinery - this also puts us in a good place to start removing HgChangeset knowledge from pushrebase.

Reviewed By: StanislavGlebik

Differential Revision: D16915565

fbshipit-source-id: f6a1e77453eece047c1f3586ce659f9c88cc81e1
2019-08-20 13:25:55 -07:00
Thomas Orozco
895aa3c27a mononoke: getpackv2 LFS support (+ getfiles / getpack / eden_get_data refactor)
Summary:
This updates Mononoke to support LFS metadata when serving data over getpackv2.

However, in doing so, I've also refactored the various ways in which we currently access file data to serve it to clients or to process client uploads (when we need to compute deltas). The motivation to do that is that we've had several issues recently where some protocols knew about some functionality, and others didn't. Notably, redaction and LFS were supported in getfiles, but neither of them were supported in getpack or eden_get_data.

This patch refactors all those callsites away from blobrepo and instead through repo_client/remotefilelog, which provides an internal common method to fetch a filenode and return its metadata and bytes (prepare_blob), and separate protocol specific implementations for getpackv1 (includes metadata + file content -- this is basically the existing fetch_raw_filenode_bytes function), getpackv2 (includes metadata + file contents + getpackv2 metadata), getfiles (includes just file content, and ties file history into its response) and eden_get_data (which uses getpackv1).
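The shared fetch path described above can be sketched roughly as follows. The names `prepare_blob`, `getpackv1`, and `getfiles` follow the summary, but the types and bodies are simplified stand-ins for illustration, not the real Mononoke implementations:

```rust
// Sketch: one internal fetch path returning metadata plus bytes, with
// thin per-protocol wrappers on top. Real code fetches the envelope and
// contents from the blobstore and applies redaction/LFS in one place.
struct PreparedBlob {
    metadata: Vec<u8>,
    contents: Vec<u8>,
}

fn prepare_blob(filenode: u64) -> PreparedBlob {
    // Stand-in body: the point is that every protocol goes through here.
    PreparedBlob {
        metadata: vec![filenode as u8],
        contents: b"file contents".to_vec(),
    }
}

fn getpackv1(filenode: u64) -> Vec<u8> {
    let b = prepare_blob(filenode);
    [b.metadata, b.contents].concat() // metadata + file content
}

fn getfiles(filenode: u64) -> Vec<u8> {
    // Just the file content; the caller ties history into the response.
    prepare_blob(filenode).contents
}

fn main() {
    assert_eq!(getfiles(1), b"file contents".to_vec());
    assert!(getpackv1(1).len() > getfiles(1).len());
}
```

With this shape, adding getpackv2 is a third wrapper over the same `prepare_blob`, which is why redaction and LFS can no longer diverge between protocols.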

Here are a few notable changes worth calling out as you review this:

- The getfiles method used to get its filenode from get_maybe_draft_filenode, but all it needed was the copy info. However, the updated method gets its filenode from the envelope (which also has this data). This should be equivalent.
- I haven't been able to remove fetch_raw_filenode_bytes yet because there's a callsite that still uses it and it's not entirely clear to me whether this is used and why. I'll look into it, but for now I left it unchanged.
- I've used the Mercurial implementation of getpack metadata here. This feels like the better approach so we can reuse some of the code, but historically I don't think we've depended on many Mercurial crates. Let me know if there's a reason not to do that.

Finally, there are a couple things to be aware of as you review:

- I removed some more `Arc<BlobRepo>` in places where it made it more difficult to call the new remotefilelog methods.
- I updated the implementation to get copy metadata out of a file envelope to not require copying the metadata into a mercurial::file::File only to immediately discard it.
- I cleaned up an LFS integration test a little bit. There are few functional changes there, but it makes tests a little easier to work with.

Reviewed By: farnz

Differential Revision: D16784413

fbshipit-source-id: 5c045d001472fb338a009044ede1e22ccd34dc55
2019-08-14 08:48:35 -07:00
Pavel Aslanov
1bb01bb7a1 move mercurial related crates to mercurial subdirectory
Summary:
Start moving mercurial related stuff to `mercurial` directory:
- rename `mercurial` to `mercurial_revlog` and moved to `/mercurial/revlog`
- move `mercurial_types` to `/mercurial/types`
- move `mercurial_bundles` to `/mercurial/bundels`

Reviewed By: farnz

Differential Revision: D16783728

fbshipit-source-id: 79cf1757bb7cc84a6273a4a3c486242b1ef4cd00
2019-08-14 04:03:00 -07:00
Thomas Orozco
10116e3a04 mononoke/blobimport: use shared limits for blob uploads and LFS imports
Summary:
This updates blobimport to avoid using a per-changeset limit for the number of blob uploads (which used to be 100), and instead to use a global blob upload limit, and a global LFS import limit.

This allows for finer-grained control over the resource utilization (notably in terms of Manifold quota) of blobimport.

While I was in there, I also eliminated an `Arc<BlobRepo>` that was lying around, since we don't use those anymore.
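A global limit shared across all changesets, as opposed to a per-changeset batch, can be sketched with a plain counting semaphore. This is a toy illustration using threads and std only; the real code is futures-based and the permit count is hypothetical:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Minimal counting semaphore: one shared limit for every upload,
// instead of a separate limit of 100 per changeset.
struct Semaphore {
    state: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(permits: usize) -> Self {
        Semaphore { state: Mutex::new(permits), cv: Condvar::new() }
    }
    fn acquire(&self) {
        let mut permits = self.state.lock().unwrap();
        while *permits == 0 {
            permits = self.cv.wait(permits).unwrap();
        }
        *permits -= 1;
    }
    fn release(&self) {
        *self.state.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn main() {
    // One global blob-upload limit shared by all workers.
    let blob_upload_limit = Arc::new(Semaphore::new(4));
    let handles: Vec<_> = (0..16)
        .map(|_| {
            let sem = Arc::clone(&blob_upload_limit);
            thread::spawn(move || {
                sem.acquire();
                // ... upload one blob here ...
                sem.release();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}
```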

Reviewed By: HarveyHunt

Differential Revision: D16667044

fbshipit-source-id: 9fc2f347969c7ca9472ce8dd3d4e2f1acb175b66
2019-08-07 08:33:43 -07:00
Thomas Orozco
f9360cab9d mononoke/filestore: incorporate in Mononoke
Summary:
NOTE: This isn't 100% complete yet. I have a little more work to do around the aliasverify binary, but I think it'll make sense to rework this a little bit with the Filestore anyway.

This patch incorporates the Filestore throughout Mononoke. At this time, what this means is:

- Blobrepo methods return streams of `FileBytes`.
- Various callsites that need access to `FileBytes` call `concat2` on those streams.

This also eliminates the Sha256 aliasing code that we had written for LFS and replaces it with a Filestore-based implementation.

However, note that this does _not_ change how files submitted through `unbundle` are written to blobstores right now. Indeed, those contents are passed into the Filestore through `store_bytes`, which doesn't do chunking. This is intentional since it lets us use LFS uploads as a testbed for chunked storage before turning it on for everything else (also, chunking those requires further refactoring of content uploads, since right now they don't expect the `ContentId` to come back through a Future).

The goal of doing it this way is to make the transition simpler. In other words, this diff doesn't change anything functionally — it just updates the underlying API we use to access files. This is also important to get a smooth release: if we had new servers that started chunking things while old servers tried to read them, things would be bad. Doing it this way ensures that doesn't happen.

This means that streaming is there, but it's not being leveraged just yet. I'm planning to do so in a separate diff, starting with the LFS read and write endpoints.
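The chunked storage mentioned above can be illustrated with a toy sketch. The chunk size and the idea of reassembly are for illustration only; the real Filestore derives content-addressed ids and streams chunks rather than holding them in memory:

```rust
// Hypothetical sketch: split file contents into fixed-size chunks so each
// chunk can be stored and fetched separately, whereas `store_bytes` would
// write `data` as one single blob.
const CHUNK_SIZE: usize = 4; // tiny for illustration; real chunks are much larger

fn chunk(data: &[u8]) -> Vec<Vec<u8>> {
    data.chunks(CHUNK_SIZE).map(|c| c.to_vec()).collect()
}

fn main() {
    let data = b"hello filestore";
    let chunks = chunk(data);
    assert_eq!(chunks.len(), 4); // 4 + 4 + 4 + 3 bytes
    // Concatenating the chunks yields the original content back.
    let rebuilt: Vec<u8> = chunks.concat();
    assert_eq!(rebuilt, data.to_vec());
}
```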

Reviewed By: farnz

Differential Revision: D16440671

fbshipit-source-id: 02ae23783f38da895ee3052252fa6023b4a51979
2019-07-31 05:19:40 -07:00
Thomas Orozco
5e8148a968 mononoke: UploadHgFileContents: optimistically reuse HG filenodes
Summary:
This adds support for reusing Mercurial filenodes in
`UploadHgFileContents::execute`.

The motivation behind this is that given file contents, copy info, and parents,
the file node ID is deterministic, but computing it requires fetching and
hashing the body of the file. This implementation implements a lookup through
the blobstore to find a pre-computed filenode ID.

Doing so is in general a little inefficient (but it's not entirely certain that
the implementation I'm proposing here is faster -- more on this below), but
it's particularly problematic for large files. Indeed, fetching a multiple-GB file
to recompute the filenode even if we received it from the client can be fairly
slow (and use up quite a bit of RAM, though that's something we can mitigate by
streaming file contents).

One thing worth noting here (hence the RFC flag) is that there is a bit of a
potential for performance regression. Indeed, we could have a cache miss when
looking up the filenode ID, and then we'll have to fetch the file.

*At this time*, this is also somewhat inefficient, since we'll have to fetch
the file anyway to peek at its contents in order to generate metadata. This
is fixed later in this Filestore stack.

That said, an actual regression seems a little unlikely to happen since in
practice we'll write out the lookup entry when accepting a pushrebase,
then do a lookup on it later when converting the pushrebased Bonsai changeset
to a Mercurial changeset.

If we're worried, then perhaps adding hit / miss stats on the lookup might make
sense. Let me know what you think.
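The optimistic reuse described above amounts to a lookup-then-fallback flow. This sketch uses simplified stand-in types: the real key and ids are hashes, and the lookup goes through the blobstore, not an in-memory map:

```rust
use std::collections::HashMap;

// (content id, copy info, parents) — the inputs that determine a filenode id.
type LookupKey = (u64, Option<String>, Vec<u64>);

// On a hit we reuse the precomputed id and skip fetching the file body;
// on a miss we fall back to the expensive recompute (fetch + hash).
fn filenode_id(
    cache: &HashMap<LookupKey, u64>,
    key: &LookupKey,
    recompute: impl Fn() -> u64,
) -> u64 {
    match cache.get(key) {
        Some(&id) => id,
        None => recompute(),
    }
}

fn main() {
    let mut cache = HashMap::new();
    let key: LookupKey = (1, None, vec![]);
    cache.insert(key.clone(), 99);
    // Hit: the expensive closure is never called.
    assert_eq!(filenode_id(&cache, &key, || panic!("should not recompute")), 99);
    // Miss: fall back to recomputing.
    let miss: LookupKey = (2, None, vec![]);
    assert_eq!(filenode_id(&cache, &miss, || 7), 7);
}
```

The regression risk mentioned in the summary is exactly the miss branch: a miss costs one extra lookup on top of the fetch that would have happened anyway.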

 ---

Finally, there's a bit I don't love here, which is trusting LFS clients with the size of their uploads.
I'm thinking of fixing this when I finish the Filestore work.

Reviewed By: aslpavel

Differential Revision: D16345248

fbshipit-source-id: 6ce8a191efbb374ff8a1a185ce4b80dc237a536d
2019-07-31 05:19:39 -07:00
Simon Farnsworth
ba39d7e2e6 Commits we are going to pushrebase should be treated as drafts
Summary:
When a commit is pushrebased, we generate a new Bonsai commit to represent the result of the commit, which is the commit we actually make public.

Therefore, the commit that will be rebased should be uploaded as a draft. The rebase result will then provide filenodes, and thus we will generate good linknodes for pushrebased commits.

Reviewed By: krallin

Differential Revision: D16550469

fbshipit-source-id: 1f2f00f8cd9ad0f75441aca0eb8daae62ae299e0
2019-07-30 06:40:21 -07:00
Stanislau Hlebik
1270d709a8 mononoke: remove Logger from BlobRepo
Summary:
It's used in only a few places, and most likely that's by accident. We
pass the logger via CoreContext now.

Reviewed By: krallin

Differential Revision: D16336953

fbshipit-source-id: 36ea4678b3c3df448591c606628b93ff834fae45
2019-07-17 08:31:56 -07:00
David Tolnay
fed2ac83f4 rust: Head start on some upcoming warnings
Summary:
This diff sets two Rust lints to warn in fbcode:

```
[rust]
  warn_lints = bare_trait_objects, ellipsis_inclusive_range_patterns
```

and fixes occurrences of those warnings within common/rust, hg, and mononoke.

Both of these lints are set to warn by default starting with rustc 1.37. Enabling them early avoids writing even more new code that needs to be fixed when we pull in 1.37 in six weeks.

Upstream tracking issue: https://github.com/rust-lang/rust/issues/54910

Reviewed By: Imxset21

Differential Revision: D16200291

fbshipit-source-id: aca11a7a944e9fa95f94e226b52f6f053b97ec74
2019-07-12 00:56:44 -07:00
Kostia Balytskyi
8c60ecf680 mononoke: some minor and (hopefully) uncontroversial refactoring of resolver.rs
Summary:
1. Let's import `PartId` rather than redefine it
2. Let's extract chaining boilerplate into an extra fn

Reviewed By: StanislavGlebik

Differential Revision: D16122281

fbshipit-source-id: eb1f8b9f3c3440acb9106377d978090389d6c66c
2019-07-05 02:52:57 -07:00
Kostia Balytskyi
a8c4de3938 mononoke: implement support for force pushrebases
Summary:
This intends to mimic the behavior of core hg server.
https://fburl.com/jf3iyl7y shows that in case when client sends a special
magic string in the `onto` field, we don't run a traditional pushrebase.
Rather, we set the rebase target to be the parent of the rebase set
(https://fburl.com/cr4ut813). The following pushrebase is just reduced to
moving a bookmark to where the `pushkey` part points.

This is equivalent to the following:
- upload all changesets (which Mononoke already does)
- run push instead of a pushrebase (e.g. move the bookmark where needed)
- generate a pushrebase-like response

NB: the code below is ugly IMO, but no more ugly than the rest of the resolver. Originally I intended to refactor the resolver, but got pushback, as my idea of refactoring was shuffling things out into different files, and that would break blame.
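The dispatch described above can be sketched as follows. The marker constant is a placeholder (the real magic string comes from the Mercurial client), and the types are simplified:

```rust
// Placeholder for the magic value the client puts in the `onto` field.
const FORCE_PUSHREBASE_MARKER: &str = "force-pushrebase-placeholder";

enum PushAction {
    // Normal pushrebase onto the named bookmark.
    Pushrebase { onto: String },
    // Force case: just move the bookmark to where the pushkey part points,
    // then generate a pushrebase-like response.
    ForcedBookmarkMove,
}

fn classify(onto_field: &str) -> PushAction {
    if onto_field == FORCE_PUSHREBASE_MARKER {
        PushAction::ForcedBookmarkMove
    } else {
        PushAction::Pushrebase { onto: onto_field.to_string() }
    }
}

fn main() {
    assert!(matches!(classify("master"), PushAction::Pushrebase { .. }));
    assert!(matches!(
        classify(FORCE_PUSHREBASE_MARKER),
        PushAction::ForcedBookmarkMove
    ));
}
```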

Reviewed By: krallin

Differential Revision: D16091234

fbshipit-source-id: 6ef7f7de214edb9ac7bda14ad287a3e7f46d0014
2019-07-04 07:08:58 -07:00
Thomas Orozco
db49ed1679 mononoke: expose option to disable infinitepush server write support
Summary:
This adds the ability to provide an infinitepush namespace configuration without actually allowing infinite pushes server side. This is useful while Mercurial is the write master for Infinite Push commits, for two reasons:

- It lets us enable the infinitepush namespace, which will allow the sync to proceed between Mercurial and Mononoke, and also prevents users from making regular pushes into the infinitepush namespace.
- It lets us prevent users from sending commit cloud backups to Mononoke (we had an instance of this reported in the Source Control @ FB group).

Note that since we are routing backfills through the shadow tier, I've left infinitepush enabled there.
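A possible shape for this configuration, assuming made-up field names (the real config uses a regex for the namespace), is:

```rust
// Hypothetical sketch: the namespace can be configured (so sync works and
// regular pushes into the namespace are rejected) while server-side
// infinitepush writes stay disabled.
struct InfinitepushParams {
    allow_writes: bool,
    namespace_prefix: Option<String>, // a regex in the real config
}

fn can_accept_infinitepush(params: &InfinitepushParams, bookmark: &str) -> bool {
    match &params.namespace_prefix {
        // Simplified prefix match standing in for the regex check.
        Some(prefix) => params.allow_writes && bookmark.starts_with(prefix.as_str()),
        None => false,
    }
}

fn main() {
    let read_only = InfinitepushParams {
        allow_writes: false,
        namespace_prefix: Some("scratch/".into()),
    };
    // Namespace is known, but writes are refused (Mercurial stays the write master).
    assert!(!can_accept_infinitepush(&read_only, "scratch/foo"));

    let writable = InfinitepushParams {
        allow_writes: true,
        namespace_prefix: Some("scratch/".into()),
    };
    assert!(can_accept_infinitepush(&writable, "scratch/foo"));
}
```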

Reviewed By: StanislavGlebik

Differential Revision: D16071684

fbshipit-source-id: 21e26f892214e40d94358074a9166a8541b43e88
2019-07-02 10:39:52 -07:00
Kostia Balytskyi
5e8b126693 refactor resolver: move upload_changeset out
Summary:
I intended to do more of this refactoring, but release oncall duties have held me down this week :(

In any case, this seems landable.

Reviewed By: farnz

Differential Revision: D15987481

fbshipit-source-id: c1a542843462f98227ba20e2def888128fbf4e86
2019-06-28 09:24:21 -07:00
Greg Cowan
041770b090 Transition fbcode Rust crates to 2018 edition
Summary: Marking all Cargo.tomls in fbcode as 2018 edition.

Reviewed By: jsgf

Differential Revision: D15951175

fbshipit-source-id: edf18449c214ee1ff285d6a2cb61839aaf58a8cd
2019-06-24 13:15:17 -07:00
Xavier Deguillard
f0862cbe16 mononoke: add getpackv2 wire protocol
Summary:
Unfortunately, the getpackv1 protocol doesn't support LFS blobs, which is
blocking the deployment of remotefilelog.fetchpacks on ovrsource on the
clients.

The easiest way to get there was to simply add a getpackv2 API that is similar
in every way to getpackv1, but with the addition of a metadata field. While
full support for this was added to Mercurial, the Mononoke support is the
absolute minimum required as Mononoke doesn't support LFS.

I'm expecting that EdenAPI will completely remove the need for getpackv2 and
therefore for this code should be fairly short-lived.

Reviewed By: farnz

Differential Revision: D15954031

fbshipit-source-id: 465ac13ed8987191ccf9a7cec198d913143aaf13
2019-06-24 11:19:41 -07:00
Pavel Aslanov
5636a2623b remove BlobRepo::find_file_in_manifest
Summary: Remove inefficient `find_file_in_manifest`

Reviewed By: farnz

Differential Revision: D15854558

fbshipit-source-id: fe6bf723459d641bba69a232361e057af467a3d7
2019-06-19 07:53:29 -07:00
Pavel Aslanov
a0a3a421f2 make HgEntryId an enum
Summary: `HgEntryId` is much more useful in a typed form (an enum of `(FileType, HgFileNodeId)` and `HgManifestId`): most of the time we know which type an entry should contain, and the typing makes it harder to make an error. All other use cases which require just the hash should use `HgNodeHash` instead. This diff includes the minimal changes necessary to make it work. Some of the cases which do a sequence of `Entry::get_hash().into_nondehash() -> HgManifestId::new() | HgFileNodeId::new()` are left for future diffs.
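The typed form can be sketched like this; the inner id types are simplified stand-ins (the real ones wrap a 20-byte Mercurial hash), and the method name is illustrative:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum FileType {
    Regular,
    Executable,
    Symlink,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct HgNodeHash(u64); // stand-in for a 20-byte SHA-1

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct HgFileNodeId(HgNodeHash);

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct HgManifestId(HgNodeHash);

// The enum makes it a type error to pass a manifest id where a filenode id
// is expected, instead of both being bare hashes.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum HgEntryId {
    File(FileType, HgFileNodeId),
    Manifest(HgManifestId),
}

impl HgEntryId {
    // Callers that genuinely only need the hash go through one method.
    fn into_nodehash(self) -> HgNodeHash {
        match self {
            HgEntryId::File(_, HgFileNodeId(h)) => h,
            HgEntryId::Manifest(HgManifestId(h)) => h,
        }
    }
}

fn main() {
    let entry = HgEntryId::File(FileType::Regular, HgFileNodeId(HgNodeHash(42)));
    assert_eq!(entry.into_nodehash(), HgNodeHash(42));
}
```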

Reviewed By: farnz

Differential Revision: D15866081

fbshipit-source-id: 5be9ecc30dbfd0a49ae6c5d084cdfe2dac351dac
2019-06-18 11:11:52 -07:00
Pavel Aslanov
69e7a7e7c8 add more traces
Summary:
- more tracing for potentially large pieces of work
- removed some unnecessary tracing

Reviewed By: StanislavGlebik

Differential Revision: D15851576

fbshipit-source-id: 6686c00da56176cad43f72d1671e08eb8141110f
2019-06-17 05:13:11 -07:00
Kostia Balytskyi
2d9a45ca45 mononoke: reject pure pushes if the appropriate config is not set
Summary:
This is the final step in making sure we have control over whether
non-pushrebase pushes are supported by a given repo.

Reviewed By: krallin

Differential Revision: D15522276

fbshipit-source-id: 7e3228f7f0836f3dcd0b1a3b2500545342af1c5e
2019-05-31 10:50:10 -07:00
Kostia Balytskyi
e2e5250e36 mononoke: thread the pure_push_allowed param to the resolve function
Summary:
This is just a boilerplate, giving `resolve` access to the config option,
which controls whether we should allow pure pushes. This option will be
enabled by default and we'll disable it explicitly for repos where it makes
no sense.

Reviewed By: StanislavGlebik

Differential Revision: D15520450

fbshipit-source-id: b1bb913c14a6aac6aa68ed8b8a7b4ff270da1688
2019-05-31 10:50:09 -07:00
Thomas Orozco
0b3cd11c26 mononoke: verify bookmark namespace even during pushrebase
Summary:
We accidentally were not verifying the bookmark namespace (with regard to infinitepush) when performing a pushrebase.

This is a problem, because it means if a pushrebase was performed into the infinitepush namespace, then the push would be allowed. This could happen by accident (the earlier diff in this stack fixes a Mercurial bug that did this), or simply if the end-user changes their infinitepush branchpattern.

This patch fixes the bug, and extracts the "do basic checks for whether this bookmark can move" logic into a single function to minimize the potential for this validation logic diverging again between pushrebase and push.

Reviewed By: ikostia

Differential Revision: D15576198

fbshipit-source-id: 24cf9999a7370503e5e0173e34185d9aa57903f7
2019-05-31 08:57:06 -07:00
Thomas Orozco
7f0e3eb64b mononoke: create scratch bookmarks from B2xInfinitepush
Summary:
This updates our receive path for B2xInfinitepush to create new scratch bookmarks.

Those scratch bookmarks will:

- Be non-publishing.
- Be non-pull-default.
- Not be replicated to Mercurial (there is no entry in the update log).

I added a sanity check on infinite pushes to validate that bookmarks fall within a given namespace (which is represented as a Regexp in configuration). We'll want to determine whether this is a good mechanism and what the regexp for this should be prior to landing (I'm also considering adding a soft-block mode that would just ignore the push instead of blocking it).

This ensures that someone cannot accidentally perform an infinitepush onto master by tweaking their client-side configuration.

 ---

Note that, as of this diff, we do not support the B2xInfinitepushBookmarks part (i.e. backup bookmarks). We might do that separately later, but if we do, it won't be through scratch Bookmarks (we have too many backup bookmarks for this to work)

Reviewed By: StanislavGlebik

Differential Revision: D15364677

fbshipit-source-id: 23e67d4c3138716c791bb8050459698f8b721277
2019-05-30 07:14:32 -07:00
Thomas Orozco
9068a413d4 mononoke: support non-publishing / non-pull_default Bookmarks
Summary:
This adds support for recording server-side whether a given bookmark is publishing and / or pull-default. This is a change on the way towards supporting Infinite Push in Mononoke. This diff will require schema changes on `xdb.mononoke_production`.

There isn't a whole lot of new user-facing functionality in this particular diff. For starters, nobody can edit this flag on bookmarks, and pushes that create a new bookmark will always result in setting a bookmark as publishing AND pull_default.

What this change does introduce, however, is the notion of `BookmarkHgKind`, which represents the behavior of this bookmark as far as Mercurial operations are concerned.

There are 3 such kinds, which are relevant in different parts of the codebase:

- PullDefault - this is useful when we want to respond to listkeys queries.
- Publishing — this is useful when we want to identify commit Phases.
- All - this is useful when we want to respond to listkeyspatterns.

Note that only the first two groups are cached in CachedBookmarks.
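The three kinds and the two per-bookmark flags they select on can be sketched like this (names simplified from the real types):

```rust
#[derive(Debug, PartialEq)]
enum BookmarkHgKind {
    PullDefault, // used when responding to listkeys
    Publishing,  // used when computing commit phases
    All,         // used when responding to listkeyspatterns
}

struct Bookmark {
    publishing: bool,
    pull_default: bool,
}

// Which bookmarks a given query kind should see.
fn matches_kind(b: &Bookmark, kind: &BookmarkHgKind) -> bool {
    match kind {
        BookmarkHgKind::PullDefault => b.pull_default,
        BookmarkHgKind::Publishing => b.publishing,
        BookmarkHgKind::All => true,
    }
}

fn main() {
    // A scratch bookmark: neither publishing nor pull-default.
    let scratch = Bookmark { publishing: false, pull_default: false };
    assert!(!matches_kind(&scratch, &BookmarkHgKind::Publishing));
    assert!(!matches_kind(&scratch, &BookmarkHgKind::PullDefault));
    assert!(matches_kind(&scratch, &BookmarkHgKind::All));
}
```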

 ---

There are a few things going on in this diff (which logically have to happen together):

- First, I updated the `Bookmarks` trait and its various implementations to expose new methods to select Bookmarks based on their hg kind. There's one method per hg kind, and all the methods use a `Freshness` parameter to determine whether to hit cache or not.
- Second, I refactored the various bookmark-access methods in blobrepo. A few of them were duplicative of each other, and some were unused, which became a bigger problem now that we had more (for publishing, pull default, etc.). We are now down to just one method that doesn't hit cache (which is used only by the blobimport script and some tests — perhaps this could be changed?).
- Third, I updated the call sites for the methods that were updated in step 2 to use the proper method for their use case.

 ---

Here's a summary of where each method is used (I'm only listing stuff that isn't unit tests):

- `get_bonsai_heads_maybe_stale`:
  - `SqlPhases`'s `get_public`
  - `build_skiplist_index`
- `get_bonsai_publishing_bookmarks_most_recent`:
  - Blobimport (perhaps we can update this to a `maybe_stale` read?)
- `get_pull_default_bookmarks_maybe_stale`
  - `listkeys` implementations
- `get_publishing_bookmarks_maybe_stale`
  - API Server's `get_branches`
- `get_bookmarks_by_prefix_maybe_stale`:
  - `listkeyspatterns` implementation

 ---

As an aside, note that a lot of the code changes in this diff are actually in the CachedBookmarks tests — I had to update those to minimize sleeps: it was fine to sleep in the earlier tests, but I introduced new quickcheck-based tests, and sleeping in those isn't ideal.

Reviewed By: StanislavGlebik

Differential Revision: D15298598

fbshipit-source-id: 4f0fe80ea98dea8af0c8289db4f6f4e4070af622
2019-05-30 07:14:32 -07:00
Simon Farnsworth
920d1901f3 Fix up confusion between description and long_description in hooks
Summary:
The idea was that the short description is used for mechanical summaries of the hook failures, and the long description is used for human-readable "how to handle this" forms.

Instead, we had a mixture of styles, plus only ever returning the short description. Change this to only ever return the long description and fix hooks so that the long description is meaningful

Reviewed By: StanislavGlebik

Differential Revision: D15537580

fbshipit-source-id: 6289c1c9786862db8190b4464a3133c0620eb09c
2019-05-29 11:34:26 -07:00
Harvey Hunt
2f7a107fa7 mononoke: bundle2: Don't create a bookmark transaction if there are no bookmark moves
Summary:
If a bundle is resolved that doesn't have any bookmark moves,
it's wasteful to create a transaction. Further, it causes the bookmark
cache to be purged once txn.commit() is called.

Reviewed By: krallin

Differential Revision: D15536821

fbshipit-source-id: 0ddab1d2d2a86d964d5dbab02966a1b13edb9b72
2019-05-29 09:44:06 -07:00
Thomas Orozco
77ba80ebd8 mononoke: Rename Bookmark to BookmarkName
Summary:
As part of adding support for infinitepush in Mononoke, we'll include additional server-side metadata on Bookmarks (specifically, whether they are publishing and pull-default).

However, we do use the name `Bookmark` right now to just reference a Bookmark name. This patch updates all reference to `Bookmark` to `BookmarkName` in order to free up `Bookmark`.

Reviewed By: StanislavGlebik

Differential Revision: D15364674

fbshipit-source-id: 126142e24e4361c19d1a6e20daa28bc793fb8686
2019-05-21 12:26:02 -07:00
Pavel Aslanov
39599a866d simplify phases API
Summary:
`Phases` currently have very ugly API, which is constant source of confusion. I've made following changes
- only return/cache public phases
- do not require `public_heads`  and always request them from `BlobRepo::get_bonsai_heads_maybe_stale`
- consolidated `HintPhases` and `SqlPhases`
- removed  `Hint` from types which does not carry any meaning
-  fixed all effected callsites

Reviewed By: StanislavGlebik

Differential Revision: D15344092

fbshipit-source-id: 848245a58a4e34e481706dbcea23450f3c43810b
2019-05-21 12:26:01 -07:00
Jun Wu
150f6f7f6b mononoke: add bookmark config to disable pushrebase date rewrite
Summary:
There was a request about importing a GitHub repo into fbsource. While pushing
it to Mononoke with pushrebase disabled, the sync job broke because it can only
handle pushrebase pushes.

Before this diff, pushrebase has a repo-level config about whether dates need
to be rewritten. We definitely want "master" to have date rewritten turned on,
but not the imported commits. This diff adds logic to turn off date rewriting
for bookmarks by using the `rewrite_dates` config, to address the repo import
requirement.
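The per-bookmark override described above can be sketched as a simple config lookup with a repo-level default (field and bookmark names here are made up for illustration):

```rust
use std::collections::HashMap;

// Hypothetical sketch of the `rewrite_dates` lookup: a per-bookmark
// setting overrides the repo-level default.
fn should_rewrite_dates(
    repo_default: bool,
    per_bookmark: &HashMap<String, bool>,
    bookmark: &str,
) -> bool {
    per_bookmark.get(bookmark).copied().unwrap_or(repo_default)
}

fn main() {
    let mut cfg = HashMap::new();
    // Imported commits keep their original dates.
    cfg.insert("imported-from-github".to_string(), false);

    assert!(should_rewrite_dates(true, &cfg, "master"));
    assert!(!should_rewrite_dates(true, &cfg, "imported-from-github"));
}
```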

Reviewed By: StanislavGlebik

Differential Revision: D15291030

fbshipit-source-id: 8dcf8359d7de9ac33f0af6f9ab3bcbac424323e4
2019-05-21 12:25:55 -07:00
Stanislau Hlebik
53478d3ab2 mononoke: apply delta in parallel
Summary:
Note: it usually doesn't matter because delta application usually doesn't need
any fetching from blobstore. But this change is safe and can prevent problems
in future.

Reviewed By: HarveyHunt

Differential Revision: D15241499

fbshipit-source-id: 43fbfd495f0f795b90ef343ac1055d16cdda129c
2019-05-21 12:25:37 -07:00
Pavel Aslanov
c1099d91e3 correctly mark all pushrebased changesets as public
Summary: Before this change we were only marking the head of the pushrebase; this change fixes the problem and marks all reachable changesets as public

Reviewed By: StanislavGlebik

Differential Revision: D15063835

fbshipit-source-id: 2a360684fc01cec0f639c1789eff8150e5ba5ebb
2019-05-21 12:25:20 -07:00
Thomas Orozco
178931cc02 mononoke: add obsmarkers to pushrebase output
Summary: Added obsmarkers to pushrebase output. This allows the client to hide commits that were rebased server-side, and check out the rebased commit.

Reviewed By: StanislavGlebik

Differential Revision: D14932842

fbshipit-source-id: f215791e86e32e6420e8bd6cd2bc25c251a7dba0
2019-05-21 12:25:17 -07:00
Pavel Aslanov
7ca65d1da3 restrict bookmark moves to an allowed set of users
Summary: restrict bookmark moves to an allowed set of users

Reviewed By: farnz

Differential Revision: D14934606

fbshipit-source-id: d149824b4d3d6376dde6e855cac214e5fda89fac
2019-05-21 12:25:11 -07:00
Pavel Aslanov
75ecc788b6 convert bundle2_resolver to rust-2018
Summary: convert `bundle2_resolver` to rust-2018

Reviewed By: StanislavGlebik

Differential Revision: D14853999

fbshipit-source-id: b3620016524e4f9f5c9746b9887952d9bdcb373c
2019-05-21 12:25:05 -07:00
Pavel Aslanov
bde4a8319e make it possible to disable casefolding check for admin tier
Summary: We want to have casefolding configurable for the admin tier

Reviewed By: StanislavGlebik

Differential Revision: D14775535

fbshipit-source-id: 103a13b98e9c06053cf0636009c6f1e96c80f741
2019-05-21 12:25:01 -07:00
Pavel Aslanov
905ff80ee6 forbid rebases when root is not a p1 of the rebase set
Summary: We want to forbid pushrebases when the root node is a p2 of its parents for now, since Mercurial swaps the parents after pushrebase, which causes inconsistency

Reviewed By: StanislavGlebik

Differential Revision: D14642177

fbshipit-source-id: f8f6e9565c53958e8cff5df6f4d006ddfe5a69c0
2019-05-21 12:24:59 -07:00
Stanislau Hlebik
bb52969617 mononoke: always create bookmark if it doesn't exist
Summary: The behaviour was changed in mercurial and we need to match it

Reviewed By: ikostia

Differential Revision: D14743136

fbshipit-source-id: 8f0184e0fc1fbf666323e5d3f1e2f0c85a1e3dc6
2019-05-21 12:24:58 -07:00
Kostia Balytskyi
731b56b977 mononoke: remove irrelevant TODOs
Summary:
T40115672 is finished by David, while timestamp setting is implemented by Stas
elsewhere.

Reviewed By: StanislavGlebik

Differential Revision: D14743113

fbshipit-source-id: 597c6f289362842ebb6dd1b8b35c6f2325bb6e48
2019-05-21 12:24:57 -07:00
Stanislau Hlebik
ce1ae4a9a8 mononoke: check p1 and then p2
Summary: The same fix as in D14100259

Reviewed By: farnz

Differential Revision: D14666307

fbshipit-source-id: 7c455373068c66ffce9f4b9c6825c04a69fee35b
2019-05-21 12:24:56 -07:00
Stanislau Hlebik
34e77e6d94 mononoke: append metadata in getpackv1
Summary:
getpackv1 needs to return copy metadata together with the content. This diff
fixes it

Reviewed By: kulshrax

Differential Revision: D14668319

fbshipit-source-id: 56a6ea2eb3a116433446773fa4fbe1f5a66c5746
2019-05-21 12:24:53 -07:00
Pavel Aslanov
3003fb9a6b log start of the pushrebase to scuba
Summary:
- This type of pushrebase should not succeed, but it should not time out either, as it does now:
```
$ hg push -r. --to master

o < -master (public)
|
o @ <- commit we are trying to rebase
| |
: o <- master2
|/
o
```
- this code uses `get_changeset_parents_by_bonsai` for traversal, which makes it much faster, and it correctly fails even if we are running mononoke in lla region
- I've also added additional logging to scuba to indicate the beginning of pushrebase, which will simplify debugging in the future

Reviewed By: StanislavGlebik

Differential Revision: D14598576

fbshipit-source-id: 25e792996aa08ca977678bd61ffe8cb51f386bcf
2019-03-25 11:11:51 -07:00
Harvey Hunt
e915e3a168 mononoke: Add lock reason to RepoReadOnly type
Summary:
As part of the mononoke lock testing, we realised that it would
be helpful to see why a repo is locked. Add the ability to express this
to the RepoReadOnly enum entry.

Reviewed By: aslpavel

Differential Revision: D14544801

fbshipit-source-id: ecae1460f8e6f0a8a31f4e6de3bd99c87fba8944
2019-03-20 13:45:44 -07:00
David Budischek
2098c4a304 Prevent merge commits
Summary: In Mononoke we want to be able to block merge commits on a repo per repo basis.

Reviewed By: aslpavel

Differential Revision: D14455502

fbshipit-source-id: 400e85834c20df811674405bc0c391860cf677dd
2019-03-18 11:20:10 -07:00
David Budischek
43f060988c Prevent deletion of configured bookmarks
Summary: Bookmarks that are configured to block non-fastforward moves should also not be deletable

Reviewed By: StanislavGlebik

Differential Revision: D14420457

fbshipit-source-id: a10231466350c0b25437972c66472b46044fc625
2019-03-18 04:12:09 -07:00
David Budischek
2a93fe345c Block non fastforward bookmark moves
Summary:
This is a hook in Mercurial; in Mononoke it will be part of the implementation. By default all non-fastforward pushes are blocked, except when using the NON_FAST_FORWARD pushvar (--non-forward-move is also needed to circumvent client-side restrictions). Additionally, certain bookmarks (e.g. master) shouldn't be movable in a non-fastforward manner at all. This can be done by setting the block_non_fast_forward field in config.

Pushrebase can only move the bookmark that is actually being pushrebased so we do not need to check whether it is a fastforward move (it always is)
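The policy above can be sketched as follows. Ancestry is simplified to a single-parent map and the flag names are made up; the real check walks the commit graph:

```rust
use std::collections::HashMap;

// A move is a fastforward iff the new target descends from the old one.
fn is_descendant(parents: &HashMap<u64, u64>, mut node: u64, ancestor: u64) -> bool {
    loop {
        if node == ancestor {
            return true;
        }
        match parents.get(&node) {
            Some(&p) => node = p,
            None => return false,
        }
    }
}

fn allow_bookmark_move(
    fastforward_only: bool, // block_non_fast_forward set for this bookmark
    non_ff_pushvar: bool,   // client sent NON_FAST_FORWARD
    parents: &HashMap<u64, u64>,
    old: u64,
    new: u64,
) -> bool {
    if is_descendant(parents, new, old) {
        return true; // fastforward moves are always allowed
    }
    // Non-fastforward: needs the pushvar, and the bookmark must not be
    // configured as fastforward-only (e.g. master).
    non_ff_pushvar && !fastforward_only
}

fn main() {
    // Linear history: 1 <- 2 <- 3
    let parents: HashMap<u64, u64> = [(2, 1), (3, 2)].into_iter().collect();
    assert!(allow_bookmark_move(true, false, &parents, 2, 3)); // ff: allowed
    assert!(!allow_bookmark_move(false, false, &parents, 3, 2)); // non-ff, no pushvar
    assert!(allow_bookmark_move(false, true, &parents, 3, 2)); // non-ff with pushvar
    assert!(!allow_bookmark_move(true, true, &parents, 3, 2)); // master-like: never
}
```

Pushrebased bookmark moves skip this check entirely, since a pushrebase by construction lands on top of the old bookmark position.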

Reviewed By: StanislavGlebik

Differential Revision: D14405696

fbshipit-source-id: 782b49c26a753918418e02c06dcfab76e3394dc1
2019-03-18 04:12:09 -07:00
Jeremy Fitzhardinge
32330eb699 mononoke/bundle_resolver: remove unused lazy_static dep
Reviewed By: zertosh

Differential Revision: D14422817

fbshipit-source-id: a9cc62353e1f07211576d8a1d20d2c439dd1fdb8
2019-03-12 17:30:30 -07:00
David Budischek
a76d7c1cdd Log pushrebase commits to scribe
Summary:
Currently we are logging new commits from BlobRepo. This will lead to issues once CommitCloud starts using Mononoke as we cannot differentiate between phases at that level. The solution is to log commits when they are pushrebased as this guarantees that they are public commits.

Note: This only introduces the new logic, cleaning up the existing implementation is part of D14279210

Reviewed By: StanislavGlebik

Differential Revision: D14279065

fbshipit-source-id: d714fae7164a8af815fc7716379ff0b7eb4826fb
2019-03-12 04:50:45 -07:00
Stanislau Hlebik
d4e93edae5 mononoke: add copy/rename sources to list of conflict files in pushrebase
Summary:
Copy & rename sources must be included in the list of conflict files so that if
a copy was modified between the root and the `onto` bookmark, pushrebase
fails with conflicts.

Note that some merge cases are not handled yet - see the TODO in the code

Reviewed By: lukaspiatkowski

Differential Revision: D14322036

fbshipit-source-id: d69bcceaa24987dd1e9d67e77f6a3205b580a7d8
2019-03-11 05:18:31 -07:00
Stanislau Hlebik
816305f75a mononoke: earlier merge detection in pushrebase
Summary:
Mononoke does not support pushrebasing over a merge commit. Previously
`find_closest_ancestor_root` didn't detect merges.
In cases like

```

o <- onto
|
o   o <- commit to pushrebase
|\ /
| o
o  <- main branch
...

```

`find_closest_ancestor_root` would go to the main branch and finally fail with a
`RootTooFarBehind` error. By detecting the merge commit earlier we can print a
better error message and avoid a useless traversal
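The early detection can be sketched as an ancestor walk that fails as soon as it sees a commit with two parents, instead of walking past it onto the main branch. The error names follow the summary; everything else is a simplified stand-in:

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum FindRootError {
    MergeNotSupported(u64),
    RootTooFarBehind,
}

fn find_closest_ancestor_root(
    parents: &HashMap<u64, Vec<u64>>,
    start: u64,
    roots: &[u64],
    max_depth: usize,
) -> Result<u64, FindRootError> {
    let mut node = start;
    for _ in 0..max_depth {
        if roots.contains(&node) {
            return Ok(node);
        }
        let ps = parents.get(&node).cloned().unwrap_or_default();
        if ps.len() > 1 {
            // Early detection: a merge in the rebase set fails right away,
            // rather than producing RootTooFarBehind after a useless walk.
            return Err(FindRootError::MergeNotSupported(node));
        }
        match ps.first() {
            Some(&p) => node = p,
            None => break,
        }
    }
    Err(FindRootError::RootTooFarBehind)
}

fn main() {
    // 1 <- 2, 1 <- 3, and 4 is a merge of 2 and 3.
    let parents: HashMap<u64, Vec<u64>> =
        [(2, vec![1]), (3, vec![1]), (4, vec![2, 3])].into_iter().collect();
    assert_eq!(find_closest_ancestor_root(&parents, 2, &[1], 10), Ok(1));
    assert_eq!(
        find_closest_ancestor_root(&parents, 4, &[9], 10),
        Err(FindRootError::MergeNotSupported(4))
    );
}
```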

Reviewed By: lukaspiatkowski

Differential Revision: D14321616

fbshipit-source-id: 2aa53a2627f25897a241616a429864f1cfca3100
2019-03-11 04:20:32 -07:00
Kostia Balytskyi
e561682ecd mononoke: rename crates to contain underscores instead of dashes
Summary: Let's not use dashes in crate names.

Reviewed By: StanislavGlebik

Differential Revision: D14341596

fbshipit-source-id: 85a7ded60cf2e326997ac70ee47a29116af97590
2019-03-06 07:18:28 -08:00