Commit Graph

2714 Commits

Mark Thomas
2747bc2342 warm_bookmarks_cache: derive each data type in a separate task
Summary:
Currently the warm bookmarks cache runs all the warmers in a single task.  This
means they compete for CPU.  Run them in separate tasks, so that they might
execute in parallel.
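The idea in this summary can be sketched with plain threads (a hedged illustration only; the real warm bookmarks cache uses async tasks, and the warmer closures here are hypothetical stand-ins for the per-data-type derivers):

```rust
use std::thread;

// Run each warmer in its own task (thread here) so they no longer compete
// for a single task's CPU slice and may execute in parallel.
fn run_warmers_in_parallel(warmers: Vec<Box<dyn FnOnce() -> u64 + Send>>) -> Vec<u64> {
    // Spawn one thread per warmer...
    let handles: Vec<_> = warmers.into_iter().map(thread::spawn).collect();
    // ...then collect results in spawn order.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}
```

Joining in spawn order keeps the results deterministic even though the warmers themselves run concurrently.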

Reviewed By: krallin

Differential Revision: D18505881

fbshipit-source-id: e8045bd14916caf3c2c592afbe35309534fe3446
2019-11-14 10:08:29 -08:00
Stanislau Hlebik
fff3c89b28 mononoke: do not check direction when creating repo sync target
Reviewed By: ikostia

Differential Revision: D18502290

fbshipit-source-id: 971dc89c3e3d45afd604ba78530ecaf6747544ef
2019-11-14 08:36:28 -08:00
Stanislau Hlebik
c8d026a746 mononoke: try to fix integration test
Summary: D18478452 broke 2 mode/opt tests. One of them directly modifies the bookmarks table; in the other, a separate binary modifies the bookmarks. This makes the tests nondeterministic. Let's disable the bookmarks cache for those 2 tests

Reviewed By: farnz

Differential Revision: D18501660

fbshipit-source-id: d4f625dbdf2f8b110eb6196761e655187407abf6
2019-11-14 00:24:59 -08:00
Ryan Menezes
25bb61e972 Implement quickcheck::Arbitrary for fbinit::FacebookInit
Summary: Implement quickcheck::Arbitrary for fbinit::FacebookInit and remove uses of *fbinit::FACEBOOK in quickcheck-based tests.

Reviewed By: dtolnay

Differential Revision: D18493275

fbshipit-source-id: fc0e64700b09fa521e3871071e9602281c9e0690
2019-11-13 20:11:54 -08:00
Adam Simpkins
b7f76e50b1 update the location of the run_tests library
Summary: Find the `run_tests` code under `eden/scm/tests` rather than `scm/hg/tests`

Reviewed By: singhsrb

Differential Revision: D16823961

fbshipit-source-id: 828b68311d0af9ab6d0dac6e574748313a96c02b
2019-11-13 17:27:30 -08:00
Stanislau Hlebik
a5e451eeb8 mononoke: make sure caches are purged when backsyncer updates transaction
Reviewed By: ikostia, farnz

Differential Revision: D18478452

fbshipit-source-id: 8a0a96508f82224d2e1b312390616974084a5e03
2019-11-13 11:37:27 -08:00
Lukas Piatkowski
b0bc02a308 Re-sync with internal repository 2019-11-13 20:16:37 +01:00
Thomas Orozco
c023f09a61 mononoke/repo_client: add support for commit rate limiting
Summary:
This adds support for limiting the number of commits for a given author. This
runs after we've received Bonsais from the client, but before we attempt to
pushrebase them or anything. This is all controlled through Configerator,
through the same configuration as our throttling.

I'll also send a diff in Landcastle in order to make this non-retryable.

Differential Revision: D18375115

fbshipit-source-id: 089bdcd7bebfd2ea42c37921fc80b53f96a1d40e
2019-11-13 10:11:47 -08:00
Daniel Grzegorzewski
1018b751fc Added more integration tests
Summary: I added more integration tests to cover all options for getting commits. Also added tests to check the output when asking about a globalrevs repo with or without a globalrev. Modified list_repos to make it deterministic.

Reviewed By: markbt, HarveyHunt

Differential Revision: D18448770

fbshipit-source-id: 8662c3a0d1676813def5dd9f2b17200ca1c52040
2019-11-13 08:48:33 -08:00
Alex Hornby
22b9b58b13 mononoke: add --readonly-storage option to cmdlib
Summary: Add --readonly-storage option to cmdlib that will cause an error on any attempt to write to SQL metadata or Blobstore
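A minimal sketch of how such a flag might be enforced at the storage layer, assuming a simplified synchronous blobstore interface (the real Mononoke `Blobstore` trait is async and richer; the names below are illustrative):

```rust
use std::collections::HashMap;

// Simplified blobstore interface (stand-in for the real async trait).
trait Blobstore {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
    fn put(&mut self, key: String, value: Vec<u8>) -> Result<(), String>;
}

// In-memory backing store for the example.
struct MemBlobstore(HashMap<String, Vec<u8>>);

impl Blobstore for MemBlobstore {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
    fn put(&mut self, key: String, value: Vec<u8>) -> Result<(), String> {
        self.0.insert(key, value);
        Ok(())
    }
}

// Wrapper enforcing --readonly-storage: reads pass through, writes error out.
struct ReadOnlyBlobstore<T>(T);

impl<T: Blobstore> Blobstore for ReadOnlyBlobstore<T> {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.0.get(key)
    }
    fn put(&mut self, _key: String, _value: Vec<u8>) -> Result<(), String> {
        Err("attempt to write to storage with --readonly-storage set".to_string())
    }
}
```

Wrapping at construction time means no call site needs to know whether the flag was passed.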

Reviewed By: StanislavGlebik

Differential Revision: D18297959

fbshipit-source-id: e879183b74fb50abfb60d2424ea579708322963f
2019-11-13 05:31:14 -08:00
Mateusz Kwapich
7701bb4a25 diff command for diffing two commits
Summary: Command will be used only in the tests for now.

Reviewed By: markbt

Differential Revision: D18303171

fbshipit-source-id: 4938cca6b0ac0fa1868ab75a64db6d23c201a4f8
2019-11-12 11:16:29 -08:00
Pavel Aslanov
46229b13d4 blame derived data implementation
Summary: Derived data implementation for Blame data

Reviewed By: StanislavGlebik

Differential Revision: D18201489

fbshipit-source-id: d5ebd73f3a9b210108f509b7d2447fed3e7fb997
2019-11-12 09:25:00 -08:00
Stanislau Hlebik
2fee923329 mononoke: add crossrepo command to verify bookmarks
Reviewed By: farnz

Differential Revision: D18430445

fbshipit-source-id: 0bdbb0391c8640a8dc42e1a396f7a133e25ca210
2019-11-12 09:00:00 -08:00
Simon Farnsworth
6813447b6c Have an ODS counter for the backsyncer batch size
Summary: We want an alarm if the backsyncer Tupperware job is doing significant work; it only does anything if the pushredirect logic is buggy, or in rare cases where it wins a race. Log to ODS when this task sees a queue, so that we can investigate

Reviewed By: StanislavGlebik

Differential Revision: D18450577

fbshipit-source-id: 6aac1c8638c6275fad5db3db1bb4915c1b824930
2019-11-12 08:42:15 -08:00
Stanislau Hlebik
acc0524f2b mononoke: setup --log-level and --debug in add_logger_args
Summary:
It should make it possible to use these arguments in commands that do not use
MononokeApp::build() (e.g. backsyncer_cmd).

Also it generally feels like the right cleanup, because all logger arguments
will be specified only in one place

Reviewed By: krallin

Differential Revision: D18448014

fbshipit-source-id: 729d12b42df4b28ab37820bc4a86cefa0ea870a9
2019-11-12 01:38:28 -08:00
David Tolnay
5c3bf2a51c Update futures and tokio alphas
Summary:
Update futures-preview from 0.3.0-alpha.18 to 0.3.0-alpha.19, tokio-preview from 0.4.0-alpha.4 to 0.4.0-alpha.6, hyper-preview from 0.13.0-alpha.1 to 0.13.0-alpha.4.

The source changes are from the `hostname` crate releasing 0.2.0 which changes the signature to return io::Result<OsString> instead of Option<String>.
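The `hostname` signature change described here can be bridged with a small adapter (a hedged sketch; `hostname_as_string` is a hypothetical helper, not code from the diff):

```rust
use std::ffi::OsString;
use std::io;

// Adapt the hostname 0.2 return type (io::Result<OsString>) back to the
// Option<String> shape that pre-0.2 callers expected.
fn hostname_as_string(raw: io::Result<OsString>) -> Option<String> {
    // Drop the error, then drop hostnames that aren't valid UTF-8.
    raw.ok().and_then(|os| os.into_string().ok())
}
```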

Reviewed By: jsgf

Differential Revision: D18435884

fbshipit-source-id: 548ec3c53f597caa10f8c65b27ae642324a8e484
2019-11-11 19:33:52 -08:00
Aida Getoeva
2c8660cf78 revert processing renames in file history
Summary:
[this diff is created with `hg backout`]

I've noticed slowness in the Mononoke APIServer `get_file_history` due to recently deployed changes introduced in D18138431.
The slowness (followed by timeouts) was caused by fetching a large changeset while checking renames for the paths that were touched by the changeset.

Reverting the feature as it causes fallbacks, while the rename history is not rendered by Diffusion anyway.

Reviewed By: StanislavGlebik

Differential Revision: D18430395

fbshipit-source-id: 2fbf9376d370624435f3846c7c88a1c5b9a53021
2019-11-11 11:48:12 -08:00
Stanislau Hlebik
2ffc1312d2 mononoke: tool to sync diamond merges in megarepo
Summary:
Handling diamond merges correctly in megarepo is a hard problem. For now I'd
like to add this half-manual tool that can sync a merge commit into a megarepo
should we need it again. This tool is a hack until we fully support merge
commits in megarepo.

Notes:
1) The tool is best-effort, not production quality. It might not handle all
edge cases. It might require tweaking and should be used with care (e.g. run
mononoke_repo crossrepo verify-wc). That said, I'd like to land it -
previously it took me > 4 hours to sync a diamond merge. I'd like the next one
to take less time, and even this hacky tool should help.
2) A diff below in the stack changes the blobsync crate to not upload a blob if it
already exists. This is necessary for this tool to work. Currently `upload_commit`
copies all blobs from the source repo, but the merge commit the tool creates can contain
entries from multiple source repos - trying to copy all of them from a single source repo
will fail!

Reviewed By: farnz

Differential Revision: D18373457

fbshipit-source-id: 7cdb042b3a335cdc0807d0cf98533f9aec937fd0
2019-11-11 11:14:01 -08:00
Stanislau Hlebik
b620930a24 mononoke: fail if verify-wc found an error
Summary: Previously it printed a warning, which was easy to miss. Let's fail now

Reviewed By: farnz

Differential Revision: D18427675

fbshipit-source-id: d0d638d7449108469e5acf7e71b8e951576792df
2019-11-11 09:23:08 -08:00
Stanislau Hlebik
aac4194f7c mononoke: do not move paths in mononoke_admin if a commit was preserved
Summary:
verify-wc didn't work correctly for commits that were preserved, i.e. commits
that are the same in the small and large repos. For those commits we don't need
to move paths

Reviewed By: farnz

Differential Revision: D18427624

fbshipit-source-id: 102ce743714fe63a3d5ba9e6441fa735361063cb
2019-11-11 09:23:08 -08:00
Simon Farnsworth
77f1960967 Log to Scuba whenever pushredirection kicks in
Summary:
We're going to pushredirect some commits from small repos to large repos as part of the megarepo write path. Add some Scuba logging, so that we can see when redirection happens and react accordingly.

Note that I've deliberately kept the logging small - just tells you about the target repo - to avoid filling up Scuba. We can increase or reduce the amount of logging as we test this code.

Reviewed By: StanislavGlebik

Differential Revision: D18405345

fbshipit-source-id: bafc8f0aa0b4329b261dc0d6c99306fc9df95cf9
2019-11-11 08:57:42 -08:00
Stanislau Hlebik
64b6032eee mononoke: extract creation of CommitSyncer
Summary: Will be used in the next diff

Reviewed By: farnz

Differential Revision: D18373627

fbshipit-source-id: 74dca2fef6a256eefed026a93c4c4381511e611c
2019-11-11 08:55:14 -08:00
Stanislau Hlebik
2140a2b053 mononoke: extract converting some diamond merge functionality from pushrebase
Summary: I'll use it in the next diff

Reviewed By: farnz

Differential Revision: D18373455

fbshipit-source-id: 295c10135be01ef18a9baa8e8a6085fd088d0306
2019-11-11 08:55:13 -08:00
Stanislau Hlebik
db61f6da5a mononoke: correct handling of non-pushrebase pushes in push redirector
Summary: One notable change - non-pushrebase pushes on shared bookmarks will now fail.

Reviewed By: ikostia

Differential Revision: D18425394

fbshipit-source-id: 4983789a15caa1ceec044d379e3a4748b2821dea
2019-11-11 08:51:50 -08:00
Simon Farnsworth
09d9a5bed9 Add some Scuba logging to the backsyncer
Summary: We currently have no way to track what the backsyncer is doing, if anything, and it can get stuck. Log for each sync, so that we can see what bookmark moves (if any) are getting stuck.

Reviewed By: StanislavGlebik

Differential Revision: D18397848

fbshipit-source-id: 67ce60a129c020185f41ba69fe3ed046d540f047
2019-11-11 06:26:04 -08:00
Kostia Balytskyi
d1c840cd8b mononoke: add integration tests for pushrebase case of push redirector
Summary: This diff adds basic happy path pushrebase tests for the push redirector. In other words, it covers a situation, where there's a single repository, which is push-redirected into a large repo, and which only serves pushrebase pushes.

Reviewed By: StanislavGlebik

Differential Revision: D18421133

fbshipit-source-id: c58af0c3c8fa767660f5e864554cc4a91cd0402c
2019-11-11 05:48:48 -08:00
Andreas Backx
5782d32dcc Removed --stdlog from apiserver.
Reviewed By: StanislavGlebik

Differential Revision: D18425301

fbshipit-source-id: 428bd2664e37b5b20913fd1b589b5626b2afd3f4
2019-11-11 05:38:07 -08:00
Kostia Balytskyi
db105021c8 mononoke: split uploaded_hg_bonsais_map into bonsais and hg_cs_ids
Summary:
In the previous refactorings I preserved the mapping between the originally
uploaded Mercurial hashes and the way Mononoke saved them, in order to decide
later whether we need it: it seemed easier to keep it than to add it back later.
But that does seem like the wrong thing to do: we do not need the mapping for
any purpose; we just need the Mercurial changeset ids themselves to run hooks.
So let's separate concerns and preserve two different things:
`BonsaiChangeset`s to run `unbundle` and `HgChangesetId`s to run hooks.

Reviewed By: farnz

Differential Revision: D18421050

fbshipit-source-id: cd28e56465ae0d3d96381072de1f9bc5bb009516
2019-11-11 03:10:23 -08:00
Kostia Balytskyi
d2de3470af mononoke: add push redirector logic
Summary:
This diff adds some meat to the backbone, introduced in D18370903. Test are to come in later diffs.

Copied from the parent diff:

Push redirector is one of the core components of cross-repo sync in Mononoke. It comes into play when large repository serves writes. Here's the schematic flow of the `unbundle` pipeline:

|Step| Small repo | Push redirector/Backsyncer | Large repo |
|---|---|---|---|
|1|Parse `unbundle` body, decide whether it's push, pushrebase, etc | | |
|2|Upload all of the changesets, provided in the `unbundle` body | | |
|3||(small-to-large direction) ->||
|4| |Convert parsing result (`PostResolveAction`) to be usable in the large repo. This involves syncing uploaded changesets, renaming bookmarks if needed, etc. ||
|5|||Process converted `PostResolveAction` (i.e. perform push, pushrebase or infinitepush). Create an `UnbundleResponse` struct, which contains all the information, necessary to generate response bytes to be sent to the user.|
|6|| <- (large-to-small direction) ||
|7||Call the Backsyncer to sync all the commits, created in the large repo into the small repo. Then, convert `UnbundleResponse` struct (by replacing commits with their equivalents and renaming bookmarks) to be suitable to be used in the small repo||
|8|Generate response bytes from the `UnbundleResponse` struct, and send those bytes to the user. |||

Reviewed By: StanislavGlebik

Differential Revision: D18288854

fbshipit-source-id: 36eb78fcc03ca5249776237ef9dda2b4747ecc68
2019-11-11 03:10:22 -08:00
Kostia Balytskyi
2979cb564a mononoke: more logging in the admin crossrepo subcommand
Summary: Just a minor detail of additional logging.

Reviewed By: farnz

Differential Revision: D18420815

fbshipit-source-id: 583c51591460c71b21d373d000f51752fa6c05e6
2019-11-10 13:19:48 -08:00
Kostia Balytskyi
16fd03b185 mononoke: accept arbitrary log levels
Summary: It would be nice to have more fine-grained verbosity control than `--debug` provides. At the same time, to avoid breaking all of the running jobs, we can't just get rid of `--debug`. So let's add `--log-level`, which conflicts with `--debug`.
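The flag interaction might be resolved like this (a hypothetical sketch; the real code wires this through its argument parser and logging setup, and the level names here are just illustrative):

```rust
// Resolve the effective log level from --debug and --log-level,
// rejecting the combination since the two flags conflict.
fn resolve_log_level(debug: bool, log_level: Option<&str>) -> Result<&'static str, String> {
    match (debug, log_level) {
        (true, Some(_)) => Err("--debug conflicts with --log-level".to_string()),
        (true, None) => Ok("DEBUG"),
        (false, Some(l)) => match l.to_ascii_uppercase().as_str() {
            "TRACE" => Ok("TRACE"),
            "DEBUG" => Ok("DEBUG"),
            "INFO" => Ok("INFO"),
            "WARN" => Ok("WARN"),
            "ERROR" => Ok("ERROR"),
            other => Err(format!("unknown log level: {}", other)),
        },
        // Default when neither flag is given.
        (false, None) => Ok("INFO"),
    }
}
```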

Reviewed By: farnz

Differential Revision: D18417028

fbshipit-source-id: 74c365fc8225098921e301674e5bd7e240411617
2019-11-10 12:37:45 -08:00
Jun Wu
0cb9961ac4 pushrebase: do push in one transaction
Summary:
The push operation has 2 phases: push, then move bookmarks, etc. Do both in
one transaction so we don't end up in an inconsistent state.
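The all-or-nothing shape of this fix can be sketched as follows (an illustrative model only; `Repo`, `push_in_one_transaction`, and the staging approach are hypothetical stand-ins for hg's real repo transaction):

```rust
// Toy repo state: a list of commits and one bookmark.
struct Repo {
    commits: Vec<&'static str>,
    bookmark: Option<&'static str>,
}

// Stage both phases (push and bookmark move) and commit the new state
// only if every phase succeeds; otherwise the repo is left untouched.
fn push_in_one_transaction(
    repo: &mut Repo,
    commit: &'static str,
    bookmark_move_ok: bool,
) -> Result<(), String> {
    let mut staged = Repo {
        commits: repo.commits.clone(),
        bookmark: repo.bookmark,
    };
    staged.commits.push(commit); // phase 1: push
    if !bookmark_move_ok {
        // Phase 2 failed: discard the staged state, rolling back the push too.
        return Err("bookmark move failed; push rolled back".to_string());
    }
    staged.bookmark = Some(commit); // phase 2: move bookmark
    *repo = staged;
    Ok(())
}
```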

Reviewed By: sfilipco

Differential Revision: D18362363

fbshipit-source-id: 338ef1b088975a9d1b043ccef81782e14c77d8e1
2019-11-08 22:29:40 -08:00
David Tolnay
c7ec208365 Rename rust client_async module to plain client
Reviewed By: bolinfest

Differential Revision: D18392259

fbshipit-source-id: 0dada0d44e8756a01727fd1f7977b3670e5690d6
2019-11-08 14:22:16 -08:00
David Tolnay
6f2ec04783 rust/async: Port fbcode to std::future Thrift clients
Summary:
This diff rewrites all use of Thrift-generated `client` modules to `client_async`, then inserts `.compat()` calls to cast the std::future::Future objects produced by `client_async` back to futures::Future objects as they would have been produced by `client`, thus preserving the behavior.

https://docs.rs/futures/0.3.1/futures/future/trait.TryFutureExt.html#method.compat

This diff is just the minimal change to allow deleting the old futures 0.1 Thrift client. We'll follow up further in each of these projects to remove the compat shims by migrating off of futures 0.1 entirely.

Reviewed By: bolinfest

Differential Revision: D18392206

fbshipit-source-id: b58d2b6bf7a3d3adebc31d04e332a0917c8a6f28
2019-11-08 12:34:34 -08:00
Thomas Orozco
38c3b10bda mononoke: CoreContext: move PerfCounters to the command scope
Summary:
Right now, our PerfCounters live in the CoreContext's SessionContainer, so they are shared across all commands for a given session. One downside of that is that if a session has a lot of commands, it's hard to:

- Figure out the total value for the PerfCounters (you have to look for the max across all the commands in the session — if you sum, you'll double-count a lot of things).
- Figure out the value of a PerfCounter for a given command (that one is effectively impossible).

With this change, our PerfCounters are tied to individual commands, so we can figure out the total value by summing, and get the value for a given command directly.
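The aggregation property this buys can be shown with a toy counter type (a hedged sketch; the real PerfCounters API is richer and these names are illustrative):

```rust
use std::collections::HashMap;

// Per-command counters: each command owns its own map.
#[derive(Default)]
struct PerfCounters(HashMap<&'static str, i64>);

impl PerfCounters {
    fn increment(&mut self, name: &'static str, by: i64) {
        *self.0.entry(name).or_insert(0) += by;
    }
    fn get(&self, name: &str) -> i64 {
        self.0.get(name).copied().unwrap_or(0)
    }
}

// With counters scoped to commands, the session total is just a sum,
// and the per-command value is a direct lookup.
fn session_total(commands: &[PerfCounters], name: &str) -> i64 {
    commands.iter().map(|c| c.get(name)).sum()
}
```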

Reviewed By: mitrandir77

Differential Revision: D18371382

fbshipit-source-id: 377a6594a95f47fcbed51361f4f457099c414962
2019-11-08 12:20:24 -08:00
Thomas Orozco
d63045e56b mononoke: repo_client: eliminate duplication in Wireproto / command finished logging
Summary:
Currently, each individual command is responsible for logging its own `Command Processed` output. We typically want that to include perf counters (and wireproto logging), so that also has to be done on a per-command basis (and if we want to add new things to log when commands finish, that would have to be done per-command as well), which results in quite a lot of code duplication / overlap.

This refactor reworks said command logging to eliminate this duplication by routing everything through a single `CommandLogger` instance for each command (which is obtained along with a command's context).

This also removes a bit of the duplication we had around logging new commands (it's now in a `start_command` method). It also removes a now-useless `with_logger_kv` call that was adding the command to a logger that already has it.

There is still a requirement for each command to actually call `finalize_command` in order to log, but that's arguably still a step forward :)

Reviewed By: mitrandir77

Differential Revision: D18371381

fbshipit-source-id: a9bccb64120fee5c68633d3b43a8850416e2ffd4
2019-11-08 12:20:24 -08:00
Lukas Piatkowski
3f465ffb2c cargo_from_buck: generate single section in Cargo.toml from multiple targets
Summary:
This is the second diff in a stack that will change how multiple targets are generated inside Cargo.toml files.
Previously every target was generated independently, which guaranteed invalid Cargo.toml files, since multiple `[package]` or `[dependencies]` sections would be added.

In this diff the cargo_validator starts to expect only one generated section in Cargo.toml files and cargo_generator starts to generate only one section instead of multiple.
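The shape of the fix might look like this (an illustrative Cargo.toml, not the actual generated file; the crate name, markers, and dependency are hypothetical): one marked region holding the whole generated manifest, rather than one region per Buck target, so only a single `[package]` and `[dependencies]` section ever appears.

```toml
# @generated-begin — single generated region covering all targets
[package]
name = "example_crate"   # hypothetical crate name
version = "0.1.0"
edition = "2018"

[dependencies]
futures-preview = "0.3.0-alpha.19"
# @generated-end
```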

Reviewed By: farnz

Differential Revision: D18114194

fbshipit-source-id: 306b2fa297cf33a1e607d6914513f76a7e1c5580
2019-11-08 11:04:35 -08:00
Thomas Orozco
0b5791afef mononoke/lfs_server: log http query
Summary:
We've started using this for consistent routing. It's a good idea to also log
it!

Reviewed By: HarveyHunt

Differential Revision: D18400169

fbshipit-source-id: b8f8c3b82631aab024c8b3afe09b59f0fde4430b
2019-11-08 10:14:19 -08:00
Thomas Orozco
0bb44df808 mononoke: rework CoreContext into session and logging info
Summary:
This reworks our CoreContext a little bit to contain two fields instead of putting everything together in just one `Inner` struct. There are a few reasons why I'd like to make that change:

- First, with the `Inner` approach, everything has to be cloneable in `Inner`, even things that are largely static for a given session, because that's how we create new contexts to update their logger or Scuba sample. If we split things out, then we can clone the Logger & Scuba without cloning Inner.
- Second, this approach allows for better separation of concerns in the repo handler. Right now, it's a bit of a mess: many of our methods there aren't actually providing their command in Scuba, for example, because they just use `self.ctx` and forget to update the method. It's just too error-prone. By separating things out, we have a data model that maps a little better to the state of the world (one session, multiple commands), and we make sure that we can't accidentally use a `ctx` without first tying it to a request. We had `prepared_ctx` as an attempt to do that, but since it wasn't mandatory it wasn't used everywhere properly. The replacement `command_ctx` forces command code to pass their command in order to acquire a `CoreContext`.

Note that this diff is a lot of busywork here and there to update callsites accordingly. That said, there is one functional change in the commit cloud bookmarks filler, which was using a method it didn't need (and which never worked), so I took that out.

Relatedly, note that I removed `CoreContext` from the bundle2 parsing code, and passed just a `Logger` there. That code doesn't actually use `CoreContext`. This allowed for removing a few TODO's of dtolnay's when he introduced `FacebookInit`.

Reviewed By: StanislavGlebik

Differential Revision: D18352597

fbshipit-source-id: cd91042cef666c38b9cbd5f07518bc558e172aa2
2019-11-08 09:44:02 -08:00
Thomas Orozco
44c97ffef5 mononoke/cmdlib: add support for enabling McRouter for ratelim
Summary: The ratelim library can talk to a local McRouter if one is available. Doing so avoids having to wait for 20 seconds on the first load check, which is convenient. This exposes the option in the Mononoke server.

Differential Revision: D18375114

fbshipit-source-id: 6ea26fdefc0c3e8d3989949d91b0da58e2c7add1
2019-11-08 09:24:55 -08:00
Simon Farnsworth
8c512d66fc Read the configured pushredirection from Configerator
Summary:
We assume that Configerator is in a good state (it will error if not) and read the config.

Push redirection is only enabled when we can read Configerator and it says that pushrebase is enabled.

Reviewed By: StanislavGlebik

Differential Revision: D18331952

fbshipit-source-id: 5cdccdf7cf347ead7ebef7a4348621e47ff7887e
2019-11-08 07:30:25 -08:00
Simon Farnsworth
89f34a9849 Provide a higher level API that includes mocking
Summary: We want to be able to mock our config in lower-level users. As a first step on that route, provide an API that encapsulates the API accesses and allows you to point at a named file or at `configerator/materialized_configs` and still get answers.

Reviewed By: StanislavGlebik

Differential Revision: D18349750

fbshipit-source-id: b6b61235bc564ffe9478e15507e31100bd24cef0
2019-11-08 07:30:25 -08:00
Stanislau Hlebik
cb7570678b mononoke: fix new backsyncer update reason
Summary:
Do a few fixes for the new "backsyncer" update reason.

I also added a few checks to make sure we don't run into these problems again.

Reviewed By: ikostia

Differential Revision: D18395398

fbshipit-source-id: d61839d54fe1c8f9c2a4c858762c040db32daf4d
2019-11-08 06:34:34 -08:00
Alex Hornby
b363deb4f0 rust/sql: add role_override and get_user to raw connection
Summary: Add role_override to the raw connection so that we can request a readonly role if desired, and get_user to be able to check that the readonly user was returned.

Reviewed By: krallin

Differential Revision: D18297444

fbshipit-source-id: 3563d9584980c7347fde36e9ce93fbdc53970923
2019-11-08 05:37:06 -08:00
Kostia Balytskyi
9e15674957 mononoke: make topo-sort slightly easier to read
Summary:
Add some comments/docstrings to `sort_topological` and rename `Mark` enum variants to be more self-explanatory.

Obviously, this kind of stuff is very personal, so feel free to tell me that this is not an improvement.
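The two-mark scheme this diff documents can be sketched as a depth-first topological sort (a hedged illustration; the variant names "Visiting"/"Visited" and the graph representation are stand-ins, not the actual renamed identifiers):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq)]
enum Mark {
    Visiting, // node is on the current DFS stack; seeing it again means a cycle
    Visited,  // node and all of its ancestors are already emitted
}

// DAG maps a node to its parents; returns None if a cycle is detected.
fn sort_topological(dag: &HashMap<u32, Vec<u32>>) -> Option<Vec<u32>> {
    fn visit(
        node: u32,
        dag: &HashMap<u32, Vec<u32>>,
        marks: &mut HashMap<u32, Mark>,
        out: &mut Vec<u32>,
    ) -> bool {
        match marks.get(&node) {
            Some(Mark::Visited) => return true,
            Some(Mark::Visiting) => return false, // cycle
            None => {}
        }
        marks.insert(node, Mark::Visiting);
        for &parent in dag.get(&node).into_iter().flatten() {
            if !visit(parent, dag, marks, out) {
                return false;
            }
        }
        marks.insert(node, Mark::Visited);
        out.push(node); // parents land before children in the output
        true
    }

    let mut marks = HashMap::new();
    let mut out = Vec::new();
    for &node in dag.keys() {
        if !visit(node, dag, &mut marks, &mut out) {
            return None;
        }
    }
    Some(out)
}
```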

Reviewed By: StanislavGlebik

Differential Revision: D18394886

fbshipit-source-id: 836fee39d8ead985de136c6aebc689680ca30ba4
2019-11-08 05:25:27 -08:00
Pavel Aslanov
c8c420b2b8 correct handling of already resolved DAG node
Summary: If a node was already resolved, the old implementation mishandled the result: it tried to update the current node, which had not yet been created at the time of the update. This happened because `process_unfold` would call `enqueue_unfold` before the current node had been created, and if the child had already been resolved (the execution tree contains `Node::Done(value)`) it would try to update the current node by calling `update_location`, which in turn would fail.

Reviewed By: StanislavGlebik

Differential Revision: D18373666

fbshipit-source-id: fe1dca89f2f5015985fb4b04d671750fa3e84c37
2019-11-08 04:07:13 -08:00
Stanislau Hlebik
8ef4848ed4 blobsync: do not upload a blob that already exists
Summary:
There's no need to sync a blob if it already exists. This seems useful on its
own, but it's also necessary for the next diff in the stack.
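The skip-if-present behavior can be sketched like this (a hedged model; the real blobsync operates on async blobstores, and `sync_blob` with its map-based stores is a hypothetical stand-in):

```rust
use std::collections::HashMap;

// Copy a blob from source to target only when the target doesn't already
// have it. Returns Ok(true) if an upload happened, Ok(false) if skipped.
fn sync_blob(
    source: &HashMap<String, Vec<u8>>,
    target: &mut HashMap<String, Vec<u8>>,
    key: &str,
) -> Result<bool, String> {
    if target.contains_key(key) {
        // Blob already present in the target repo: nothing to upload.
        return Ok(false);
    }
    let blob = source
        .get(key)
        .ok_or_else(|| format!("blob {} missing in source", key))?;
    target.insert(key.to_string(), blob.clone());
    Ok(true)
}
```

This is what lets a merge commit whose entries come from multiple source repos be synced piecemeal: blobs already copied from one source are simply skipped.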

Reviewed By: farnz

Differential Revision: D18373456

fbshipit-source-id: c6e18bea3c9199670b7f4cb429547f922c611735
2019-11-07 23:46:55 -08:00
Kostia Balytskyi
0f2b3003c4 mononoke: add push redirector high-level structure
Summary:
The purpose of this diff is to make the review of further diffs easier. Once
you review this diff, you should have an idea of how the push redirector is
intended to work at a high level:

|Step| Small repo | Push redirector/Backsyncer | Large repo |
|---|---|---|---|
|1|Parse `unbundle` body, decide whether it's push, pushrebase, etc | | |
|2|Upload all of the changesets, provided in the `unbundle` body | | |
|3||(small-to-large direction) ->||
|4| |Convert parsing result (`PostResolveAction`) to be usable in the large repo. This involves syncing uploaded changesets, renaming bookmarks if needed, etc. ||
|5|||Process converted `PostResolveAction` (i.e. perform push, pushrebase or infinitepush). Create an `UnbundleResponse` struct, which contains all the information, necessary to generate response bytes to be sent to the user.|
|6|| <- (large-to-small direction) ||
|7||Call the Backsyncer to sync all the commits, created in the large repo into the small repo. Then, convert `UnbundleResponse` struct (by replacing commits with their equivalents and renaming bookmarks) to be suitable to be used in the small repo||
|8|Generate response bytes from the `UnbundleResponse` struct, and send those bytes to the user. |||

Further diffs are intended to populate the functions with business logic, add unit and integration tests.

Reviewed By: StanislavGlebik

Differential Revision: D18370903

fbshipit-source-id: 4b29db586abcad7c3deda2738116cebd26e9fccf
2019-11-07 15:39:01 -08:00
Daniel Grzegorzewski
57aba7e348 Update scs to make it use globalrevs
Summary: Added an option to use globalrevs, so now we can fetch commits using their globalrevs and ask about a commit's globalrev.

Reviewed By: krallin

Differential Revision: D18324846

fbshipit-source-id: 73e69b697dd7b84b0b15e435a95191243cc75a19
2019-11-07 12:39:38 -08:00
Jun Wu
124a49a8bc changegroup: do not show "(+/- ? heads)"
Summary:
In an upcoming change we'll replace `cl.heads()` with `repo.heads()`. In the
changegroup code path the repo can be in an inconsistent state (e.g. a bookmark
refers to an unknown commit), which breaks `repo.heads()`.

Since the whole purpose of the heads calculation is just to show
`(+/- ? heads)`, which does not affect any real features, let's just
remove it.

Reviewed By: markbt

Differential Revision: D18366735

fbshipit-source-id: 893be2cec0c32b64a80b3ef4ca65b69f8ed76b27
2019-11-07 10:51:34 -08:00