Summary:
Currently the warm bookmarks cache runs all the warmers in a single task, which
means they compete for CPU. Run them in separate tasks, so that they can
execute in parallel.
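A minimal sketch of the idea, using std threads in place of Mononoke's async tasks (the warmer closures and function name are illustrative, not the actual cache API):

```rust
use std::thread;

// Hypothetical warmers: in the real cache these are async derived-data
// warmers; plain closures returning a count stand in for them here.
fn run_warmers_in_parallel(warmers: Vec<Box<dyn FnOnce() -> u64 + Send>>) -> u64 {
    // Before: warmers ran sequentially in a single task, competing for CPU.
    // After: each warmer gets its own task (a thread in this sketch), so
    // they can make progress in parallel.
    let handles: Vec<_> = warmers.into_iter().map(thread::spawn).collect();
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}
```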
Reviewed By: krallin
Differential Revision: D18505881
fbshipit-source-id: e8045bd14916caf3c2c592afbe35309534fe3446
Summary: D18478452 broke 2 mode/opt tests. One of them directly modifies the bookmarks table; in the other, a separate binary modifies the bookmarks. This makes the tests nondeterministic. Let's disable the bookmarks cache for those 2 tests.
Reviewed By: farnz
Differential Revision: D18501660
fbshipit-source-id: d4f625dbdf2f8b110eb6196761e655187407abf6
Summary: Find the `run_tests` code under `eden/scm/tests` rather than `scm/hg/tests`
Reviewed By: singhsrb
Differential Revision: D16823961
fbshipit-source-id: 828b68311d0af9ab6d0dac6e574748313a96c02b
Summary:
This adds support for limiting the number of commits for a given author. This
runs after we've received Bonsais from the client, but before we attempt to
pushrebase them or anything. This is all controlled through Configerator,
through the same configuration as our throttling.
I'll also send a diff in Landcastle in order to make this non-retryable.
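A sketch of the per-author limit check (names and the flat author list are illustrative; the real limit comes from Configerator and the check runs over the received Bonsais):

```rust
use std::collections::HashMap;

// Returns the first author that exceeds the configured per-push commit
// limit, or None if the push is within limits.
fn exceeds_author_limit(authors: &[&str], limit: usize) -> Option<String> {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for &author in authors {
        let c = counts.entry(author).or_insert(0);
        *c += 1;
        if *c > limit {
            return Some(author.to_string());
        }
    }
    None
}
```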
Differential Revision: D18375115
fbshipit-source-id: 089bdcd7bebfd2ea42c37921fc80b53f96a1d40e
Summary: I added more integration tests to cover all options for getting commits. Also added tests to check the output when asking about a globalrevs repo with or without a globalrev. Modified list_repos to make it deterministic.
Reviewed By: markbt, HarveyHunt
Differential Revision: D18448770
fbshipit-source-id: 8662c3a0d1676813def5dd9f2b17200ca1c52040
Summary: Add --readonly-storage option to cmdlib that will cause an error on any attempt to write to SQL metadata or Blobstore
Reviewed By: StanislavGlebik
Differential Revision: D18297959
fbshipit-source-id: e879183b74fb50abfb60d2424ea579708322963f
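A minimal sketch of the read-only wrapper idea (the struct and method names are illustrative, not Mononoke's actual Blobstore traits):

```rust
use std::collections::HashMap;

// Stand-in for a blobstore wrapped by --readonly-storage: reads pass
// through, while every write attempt returns an error.
struct ReadOnlyBlobstore {
    inner: HashMap<String, Vec<u8>>,
}

impl ReadOnlyBlobstore {
    fn get(&self, key: &str) -> Option<&Vec<u8>> {
        self.inner.get(key)
    }

    // Writes always fail, which is exactly what --readonly-storage intends.
    fn put(&mut self, _key: String, _value: Vec<u8>) -> Result<(), String> {
        Err("attempt to write to read-only storage".to_string())
    }
}
```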
Summary: Command will be used only in the tests for now.
Reviewed By: markbt
Differential Revision: D18303171
fbshipit-source-id: 4938cca6b0ac0fa1868ab75a64db6d23c201a4f8
Summary: Derived data implementation for Blame data
Reviewed By: StanislavGlebik
Differential Revision: D18201489
fbshipit-source-id: d5ebd73f3a9b210108f509b7d2447fed3e7fb997
Summary: We want an alarm if the backsyncer Tupperware job is doing significant work; it only does anything if the pushredirect logic is buggy, or in rare cases where it wins a race. Log to ODS when this task sees a queue, so that we can investigate
Reviewed By: StanislavGlebik
Differential Revision: D18450577
fbshipit-source-id: 6aac1c8638c6275fad5db3db1bb4915c1b824930
Summary:
This makes it possible to use these arguments in commands that do not use
MononokeApp::build() (e.g. backsyncer_cmd).
It also generally feels like the right cleanup, because all logger arguments
will be specified in only one place.
Reviewed By: krallin
Differential Revision: D18448014
fbshipit-source-id: 729d12b42df4b28ab37820bc4a86cefa0ea870a9
Summary:
Update futures-preview from 0.3.0-alpha.18 to 0.3.0-alpha.19, tokio-preview from 0.4.0-alpha.4 to 0.4.0-alpha.6, hyper-preview from 0.13.0-alpha.1 to 0.13.0-alpha.4.
The source changes come from the `hostname` crate's 0.2.0 release, which changes the signature to return `io::Result<OsString>` instead of `Option<String>`.
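Callers that expected the old `Option<String>` shape can adapt the new return type like this (a sketch of the conversion only; `hostname::get()` itself is not called here):

```rust
use std::ffi::OsString;
use std::io;

// Convert the 0.2.0-style io::Result<OsString> back into the old
// Option<String> shape: drop the error and drop non-UTF-8 hostnames.
fn adapt(new_style: io::Result<OsString>) -> Option<String> {
    new_style.ok().and_then(|s| s.into_string().ok())
}
```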
Reviewed By: jsgf
Differential Revision: D18435884
fbshipit-source-id: 548ec3c53f597caa10f8c65b27ae642324a8e484
Summary:
[the diff is create with `hg backout`]
I've noticed slowness in Mononoke APIServer `get_file_history` due to recently deployed changes introduced in D18138431.
The slowness (that followed with timeouts) was caused by fetching a large changeset while checking renames for the paths, that were touched by the changeset.
Reverting the feature as it causes fallbacks, while the rename history is not rendered by Diffusion anyway.
Reviewed By: StanislavGlebik
Differential Revision: D18430395
fbshipit-source-id: 2fbf9376d370624435f3846c7c88a1c5b9a53021
Summary:
Handling diamond merges correctly in megarepo is a hard problem. For now I'd
like to add this half-manual tool that can sync a merge commit into a megarepo
should we have one again. This tool is a hack until we fully support merge
commits in megarepo.
Notes:
1) The tool is best-effort, not production quality. It might not handle all
edge cases, might require tweaking, and should be used with care (e.g. run
mononoke_repo crossrepo verify-wc). That said, I'd like to land it:
previously it took me > 4 hours to sync a diamond merge. I'd like the next one
to take less, and even this hacky tool should help.
2) A diff below in the stack changes the blobsync crate to not upload a blob if it
already exists. That is necessary for this tool to work: currently `upload_commit`
copies all blobs from the source repo, but the merge commit the tool creates can contain
entries from multiple source repos, and trying to copy all of them from a single source repo
will fail!
Reviewed By: farnz
Differential Revision: D18373457
fbshipit-source-id: 7cdb042b3a335cdc0807d0cf98533f9aec937fd0
Summary: Previously it printed a warning, which was easy to miss. Let's fail now.
Reviewed By: farnz
Differential Revision: D18427675
fbshipit-source-id: d0d638d7449108469e5acf7e71b8e951576792df
Summary:
verify-wc didn't work correctly for commits that were preserved, i.e. commits
that are the same in the small and large repos. For those commits we don't need
to move paths.
Reviewed By: farnz
Differential Revision: D18427624
fbshipit-source-id: 102ce743714fe63a3d5ba9e6441fa735361063cb
Summary:
We're going to pushredirect some commits from small repos to large repos as part of the megarepo write path. Add some Scuba logging, so that we can see when redirection happens and react accordingly.
Note that I've deliberately kept the logging small - just tells you about the target repo - to avoid filling up Scuba. We can increase or reduce the amount of logging as we test this code.
Reviewed By: StanislavGlebik
Differential Revision: D18405345
fbshipit-source-id: bafc8f0aa0b4329b261dc0d6c99306fc9df95cf9
Summary: Will be used in the next diff
Reviewed By: farnz
Differential Revision: D18373627
fbshipit-source-id: 74dca2fef6a256eefed026a93c4c4381511e611c
Summary: We currently have no way to track what the backsyncer is doing, if anything, and it can get stuck. Log for each sync, so that we can see what bookmark moves (if any) are getting stuck.
Reviewed By: StanislavGlebik
Differential Revision: D18397848
fbshipit-source-id: 67ce60a129c020185f41ba69fe3ed046d540f047
Summary: This diff adds basic happy path pushrebase tests for the push redirector. In other words, it covers a situation, where there's a single repository, which is push-redirected into a large repo, and which only serves pushrebase pushes.
Reviewed By: StanislavGlebik
Differential Revision: D18421133
fbshipit-source-id: c58af0c3c8fa767660f5e864554cc4a91cd0402c
Summary:
In the previous refactorings I preserved the mapping between the originally
uploaded Mercurial hashes and the way Mononoke saved them in order to decide
whether we need it later: it seemed easier to keep it than to add it later.
But it does seem like the wrong thing to do: we do not need the mapping for
any purposes, we just need Mercurial changeset ids themselves to run hooks.
So let's separate concerns and preserve two different things:
`BonsaiChangeset`s to run `unbundle` and `HgChangesetId`s to run hooks.
Reviewed By: farnz
Differential Revision: D18421050
fbshipit-source-id: cd28e56465ae0d3d96381072de1f9bc5bb009516
Summary:
This diff adds some meat to the backbone introduced in D18370903. Tests are to come in later diffs.
Copied from the parent diff:
Push redirector is one of the core components of cross-repo sync in Mononoke. It comes into play when large repository serves writes. Here's the schematic flow of the `unbundle` pipeline:
|Step| Small repo | Push redirector/Backsyncer | Large repo |
|1|Parse `unbundle` body, decide whether it's push, pushrebase, etc | | |
|2|Upload all of the changesets, provided in the `unbundle` body | | |
|3||(small-to-large direction) ->||
|4| |Convert parsing result (`PostResolveAction`) to be usable in the large repo. This involves syncing uploaded changesets, renaming bookmarks if needed, etc. ||
|5|||Process converted `PostResolveAction` (i.e. perform push, pushrebase or infinitepush). Create an `UnbundleResponse` struct, which contains all the information, necessary to generate response bytes to be sent to the user.|
|6|| <- (large-to-small direction) ||
|7||Call the Backsyncer to sync all the commits, created in the large repo into the small repo. Then, convert `UnbundleResponse` struct (by replacing commits with their equivalents and renaming bookmarks) to be suitable to be used in the small repo||
|8|Generate response bytes from the `UnbundleResponse` struct, and send those bytes to the user. |||
Reviewed By: StanislavGlebik
Differential Revision: D18288854
fbshipit-source-id: 36eb78fcc03ca5249776237ef9dda2b4747ecc68
Summary: Just a minor detail of additional logging.
Reviewed By: farnz
Differential Revision: D18420815
fbshipit-source-id: 583c51591460c71b21d373d000f51752fa6c05e6
Summary: It would be nice to have more fine-grained verbosity control than `--debug` provides. At the same time, to avoid breaking all of the running jobs, we can't just get rid of `--debug`. So let's add `--log-level`, which conflicts with `--debug`.
Reviewed By: farnz
Differential Revision: D18417028
fbshipit-source-id: 74c365fc8225098921e301674e5bd7e240411617
Summary:
The push operation has 2 phases: push, then move bookmarks, etc. Do both in
one transaction so we can't end up in an inconsistent state.
Reviewed By: sfilipco
Differential Revision: D18362363
fbshipit-source-id: 338ef1b088975a9d1b043ccef81782e14c77d8e1
Summary:
This diff rewrites all use of Thrift-generated `client` modules to `client_async`, then inserts `.compat()` calls to cast the std::future::Future objects produced by `client_async` back to futures::Future objects as they would have been produced by `client`, thus preserving the behavior.
https://docs.rs/futures/0.3.1/futures/future/trait.TryFutureExt.html#method.compat
This diff is just the minimal change to allow deleting the old futures 0.1 Thrift client. We'll follow up further in each of these projects to remove the compat shims by migrating off of futures 0.1 entirely.
Reviewed By: bolinfest
Differential Revision: D18392206
fbshipit-source-id: b58d2b6bf7a3d3adebc31d04e332a0917c8a6f28
Summary:
Right now, our PerfCounters live in the CoreContext's SessionContainer, so they are shared across all commands for a given session. One downside of that is that if a session has a lot of commands, it's hard to:
- Figure out the total value for the PerfCounters (you have to look for the max across all the commands in the session; if you sum, you'll double-count a lot of things).
- Figure out the value of a PerfCounter for a given command (that one is effectively impossible).
With this change, our PerfCounters are tied to individual commands, so we can figure out the total value by summing, and get the value for a given command directly.
Reviewed By: mitrandir77
Differential Revision: D18371382
fbshipit-source-id: 377a6594a95f47fcbed51361f4f457099c414962
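An illustrative model of the change (the struct and counter names are made up, not the actual PerfCounters API): one counter map per command instead of one shared per-session map, so the session total is just the sum.

```rust
use std::collections::HashMap;

// Per-command counters: each command gets its own map, so values never
// mix across commands in the same session.
struct CommandCounters(HashMap<String, i64>);

impl CommandCounters {
    fn new() -> Self {
        CommandCounters(HashMap::new())
    }

    fn bump(&mut self, name: &str, delta: i64) {
        *self.0.entry(name.to_string()).or_insert(0) += delta;
    }
}

// Session-wide total is now a straightforward sum over commands,
// with no risk of double-counting.
fn session_total(commands: &[CommandCounters], name: &str) -> i64 {
    commands
        .iter()
        .map(|c| c.0.get(name).copied().unwrap_or(0))
        .sum()
}
```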
Summary:
Currently, each individual command is responsible for logging its own `Command Processed` output. We typically want that to include perf counters (and wire proto logging), so those also have to be handled on a per-command basis (and if we want to add new things to log when commands finish, that would have to be done per-command as well), which results in quite a lot of code duplication / overlap.
This refactor reworks said command logging to eliminate this duplication by routing everything through a single `CommandLogger` instance for each command (which is obtained along with a command's context).
This also removes a bit of the duplication we had around logging new commands (it's now in a `start_command` method). It also removes a now-useless with_logger_kv call that was adding the command to a logger that already had it.
There is still a requirement for each command to actually call `finalize_command` in order to log, but that's arguably still a step forward :)
Reviewed By: mitrandir77
Differential Revision: D18371381
fbshipit-source-id: a9bccb64120fee5c68633d3b43a8850416e2ffd4
Summary:
This is the second diff in a stack that will change how multiple targets are generated inside Cargo.toml files.
Previously, every target was generated independently, which guaranteed invalid Cargo.toml files, since multiple `[package]` or `[dependencies]` sections would be added.
In this diff, cargo_validator starts to expect only one generated section in Cargo.toml files and cargo_generator starts to generate only one section instead of multiple.
Reviewed By: farnz
Differential Revision: D18114194
fbshipit-source-id: 306b2fa297cf33a1e607d6914513f76a7e1c5580
Summary:
We've started using this for consistent routing. It's a good idea to also log
it!
Reviewed By: HarveyHunt
Differential Revision: D18400169
fbshipit-source-id: b8f8c3b82631aab024c8b3afe09b59f0fde4430b
Summary:
This reworks our CoreContext a little bit to contain two fields instead of putting everything together in just one `Inner` struct. There's a few reasons why I'd like to make that change:
- First, with the `Inner` approach, everything has to be cloneable in `Inner`, even things that are largely static for a given session, because that's how we create new contexts to update their logger or Scuba sample. If we split things out, then we can clone the Logger & Scuba without cloning Inner.
- Second, this approach allows for better separation of concerns in the repo handler. Right now, it's a bit of a mess: many of our methods there aren't actually providing their command in Scuba, for example, because they just use `self.ctx` and forget to update the method. It's just too error-prone. By separating things out, we have a data model that maps a little better to the state of the world (one session, multiple commands), and we make sure that we can't accidentally use a `ctx` without first tying it to a request. We had `prepared_ctx` as an attempt to do that, but since it wasn't mandatory, it wasn't used everywhere properly. The replacement `command_ctx` forces command code to pass their command in order to acquire a `CoreContext`.
Note that this diff is a lot of busywork here and there to update callsites accordingly. That said, there is one functional change in the commit cloud bookmarks filler, which was using a method it didn't need (and which never worked), so I took that out.
Relatedly, note that I removed `CoreContext` from the bundle2 parsing code, and passed just a `Logger` there. That code doesn't actually use `CoreContext`. This allowed for removing a few TODO's of dtolnay's when he introduced `FacebookInit`.
Reviewed By: StanislavGlebik
Differential Revision: D18352597
fbshipit-source-id: cd91042cef666c38b9cbd5f07518bc558e172aa2
Summary: The ratelim library can talk to a local McRouter if one is available. Doing so avoids having to wait for 20 seconds on the first load check, which is convenient. This exposes that option in the Mononoke server.
Differential Revision: D18375114
fbshipit-source-id: 6ea26fdefc0c3e8d3989949d91b0da58e2c7add1
Summary:
We assume that Configerator is in a good state (it will error if not) and read the config.
Push redirection is only enabled when we can read Configerator and it says that pushrebase is enabled.
Reviewed By: StanislavGlebik
Differential Revision: D18331952
fbshipit-source-id: 5cdccdf7cf347ead7ebef7a4348621e47ff7887e
Summary: We want to be able to mock our config in lower-level users. As a first step on that route, provide an API that encapsulates the API accesses and allows you to point at a named file or a `configerator/materialized_configs` and still get answers.
Reviewed By: StanislavGlebik
Differential Revision: D18349750
fbshipit-source-id: b6b61235bc564ffe9478e15507e31100bd24cef0
Summary:
Do a few fixes for the new "backsyncer" update reason.
I also added a few checks to make sure we don't run into these problems again.
Reviewed By: ikostia
Differential Revision: D18395398
fbshipit-source-id: d61839d54fe1c8f9c2a4c858762c040db32daf4d
Summary: Add role_override to the raw connection so that we can request a readonly role if desired, and add get_user to be able to check that the readonly user was returned.
Reviewed By: krallin
Differential Revision: D18297444
fbshipit-source-id: 3563d9584980c7347fde36e9ce93fbdc53970923
Summary:
Add some comments/docstrings to `sort_topological` and rename `Mark` enum variants to be more self-explanatory.
Obviously, this kind of stuff is very personal, so feel free to tell me that this is not an improvement.
Reviewed By: StanislavGlebik
Differential Revision: D18394886
fbshipit-source-id: 836fee39d8ead985de136c6aebc689680ca30ba4
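For reference, a self-contained sketch of a mark-based topological sort in the style of `sort_topological` (the `Mark` variant names here are illustrative, not necessarily the actual renamed ones):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq)]
enum Mark {
    InProgress, // node is on the current DFS path (used for cycle detection)
    Done,       // node and all of its dependencies are already emitted
}

// Emits dependencies before dependents; returns None if a cycle is found.
fn sort_topological(edges: &HashMap<u32, Vec<u32>>, nodes: &[u32]) -> Option<Vec<u32>> {
    fn visit(
        n: u32,
        edges: &HashMap<u32, Vec<u32>>,
        marks: &mut HashMap<u32, Mark>,
        out: &mut Vec<u32>,
    ) -> bool {
        match marks.get(&n).copied() {
            Some(Mark::Done) => return true,
            Some(Mark::InProgress) => return false, // back-edge: cycle
            None => {}
        }
        marks.insert(n, Mark::InProgress);
        for &m in edges.get(&n).into_iter().flatten() {
            if !visit(m, edges, marks, out) {
                return false;
            }
        }
        marks.insert(n, Mark::Done);
        out.push(n);
        true
    }

    let mut marks = HashMap::new();
    let mut out = Vec::new();
    for &n in nodes {
        if !visit(n, edges, &mut marks, &mut out) {
            return None;
        }
    }
    Some(out)
}
```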
Summary: If a node was already resolved, the old implementation mishandled the result: it tried to update the current node, which had not been created by the time of the update. This happened because `process_unfold` would call `enqueue_unfold` before the current node had been created; if the child had already been resolved (the execution tree contains `Node::Done(value)`), it would try to update the current node by calling `update_location`, which in turn would fail.
Reviewed By: StanislavGlebik
Differential Revision: D18373666
fbshipit-source-id: fe1dca89f2f5015985fb4b04d671750fa3e84c37
Summary:
There's no need to sync a blob if it already exists. It seems useful anyway,
but it's necessary for the next diff in the stack.
Reviewed By: farnz
Differential Revision: D18373456
fbshipit-source-id: c6e18bea3c9199670b7f4cb429547f922c611735
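A sketch of the "skip if already present" behavior, with HashMaps standing in for the source and target blobstores (the function name and error type are illustrative, not blobsync's actual API):

```rust
use std::collections::HashMap;

// Copies one blob from source to target, skipping the copy when the
// target already has the key. Returns Ok(true) if copied, Ok(false) if
// it was already present, Err if the source is missing the blob.
fn copy_blob(
    source: &HashMap<String, Vec<u8>>,
    target: &mut HashMap<String, Vec<u8>>,
    key: &str,
) -> Result<bool, String> {
    if target.contains_key(key) {
        return Ok(false); // already synced, nothing to do
    }
    let value = source
        .get(key)
        .ok_or_else(|| format!("blob {} missing in source", key))?;
    target.insert(key.to_string(), value.clone());
    Ok(true)
}
```

This is why the diamond-merge tool above can work: a merge commit whose entries come from multiple source repos no longer fails just because some blobs already exist in the target.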
Summary:
The purpose of this diff is to make the review of further diffs easier. Once
you review this diff, you should have an idea of how the push redirector is
intended to work at a high level:
|Step| Small repo | Push redirector/Backsyncer | Large repo |
|1|Parse `unbundle` body, decide whether it's push, pushrebase, etc | | |
|2|Upload all of the changesets, provided in the `unbundle` body | | |
|3||(small-to-large direction) ->||
|4| |Convert parsing result (`PostResolveAction`) to be usable in the large repo. This involves syncing uploaded changesets, renaming bookmarks if needed, etc. ||
|5|||Process converted `PostResolveAction` (i.e. perform push, pushrebase or infinitepush). Create an `UnbundleResponse` struct, which contains all the information, necessary to generate response bytes to be sent to the user.|
|6|| <- (large-to-small direction) ||
|7||Call the Backsyncer to sync all the commits, created in the large repo into the small repo. Then, convert `UnbundleResponse` struct (by replacing commits with their equivalents and renaming bookmarks) to be suitable to be used in the small repo||
|8|Generate response bytes from the `UnbundleResponse` struct, and send those bytes to the user. |||
Further diffs are intended to populate the functions with business logic, add unit and integration tests.
Reviewed By: StanislavGlebik
Differential Revision: D18370903
fbshipit-source-id: 4b29db586abcad7c3deda2738116cebd26e9fccf
Summary: Added option to use globalrev, so now we can fetch commits using their globalrevs, and ask about commit's globalrev.
Reviewed By: krallin
Differential Revision: D18324846
fbshipit-source-id: 73e69b697dd7b84b0b15e435a95191243cc75a19
Summary:
In an upcoming change we'll replace `cl.heads()` with `repo.heads()`. In
changegroup code path the repo can be in an inconsistent state (ex. a bookmark
refers to an unknown commit) which breaks `repo.heads()`.
Since the whole purpose of the heads calculation is just to show
`(+/- ? heads)`, which does not affect any real features, let's just
remove it.
Reviewed By: markbt
Differential Revision: D18366735
fbshipit-source-id: 893be2cec0c32b64a80b3ef4ca65b69f8ed76b27