Summary: Add an `scsc ls` command which can be used to display the contents of a tree.
Reviewed By: farnz
Differential Revision: D18573093
fbshipit-source-id: dffd904769dbc618603c54e3c9931e5683503b98
Summary:
Add an `scsc info` command which can be used to display information about
commits, trees or files.
Reviewed By: mitrandir77
Differential Revision: D18570328
fbshipit-source-id: 4dc2b078e7174c8b6fa366e0f6607de9bccd7396
Summary:
Add the timezone to the information about commits being returned. This will
allow clients to render the timestamp in the timezone the commit was created
in.
Reviewed By: mitrandir77
Differential Revision: D18570330
fbshipit-source-id: a903b6430e7dd21e80afb260497c3629be0ccebf
Summary:
Admin subcommand for blame
- show difference between current version and parents (debug)
- show blame by fetching (deriving if needed)
- show blame by going through history of all changes and computing full blame
Reviewed By: StanislavGlebik
Differential Revision: D18371921
fbshipit-source-id: a882fe20358eb526efb6774a58db8fcf085274b9
Summary:
Using this hook will bring us closer to the real scenario we're trying to
cover.
Reviewed By: StanislavGlebik
Differential Revision: D18572546
fbshipit-source-id: 7af3e37f647e235e13fb7bcecfcbf406d02e13fc
Summary:
Generating hg changesets might take a lot of time, and when running commands
like `mononoke_admin` all you can see is that mononoke_admin is doing something,
but not what. Adding this debug logging makes it clearer where the time is
spent.
Reviewed By: krallin
Differential Revision: D18401876
fbshipit-source-id: 6ae12fd4ea430cd0b049ff8932b5ff7369efc498
Summary:
get_hg_bonsai_mapping was very easy to misuse. If you passed a list of bonsai
changesets, it just searched for the corresponding hg changesets in the
bonsai_hg_mapping database.
The problem is that hg changesets might not have been generated yet for those
bonsai changesets, and callers would normally expect hg changesets to be
lazily generated. In apiserver we had a workaround for this, but it seems
better to move the logic into blobrepo itself.
A few notes:
1) We don't need to change anything for hg changesets, because bonsai commits
shouldn't be generated from hg changesets.
2) To preserve the semantics of this function, we don't fail if a bonsai
changeset doesn't exist.
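A minimal sketch of the lazy-generation behaviour, using hypothetical stand-in types (`Mapping`, `derive_hg_changeset`) rather than the real blobrepo API:

```
use std::collections::HashMap;

type BonsaiId = u64;
type HgId = String;

// Stand-in for the bonsai_hg_mapping database table.
struct Mapping {
    known: HashMap<BonsaiId, HgId>,
}

impl Mapping {
    // Look up hg changesets, deriving any that are missing instead of
    // silently dropping them (the misuse this diff fixes).
    fn get_hg_bonsai_mapping(&mut self, bonsais: &[BonsaiId]) -> Vec<(BonsaiId, HgId)> {
        bonsais
            .iter()
            .map(|&b| {
                let hg = self
                    .known
                    .entry(b)
                    .or_insert_with(|| derive_hg_changeset(b))
                    .clone();
                (b, hg)
            })
            .collect()
    }
}

// Hypothetical stand-in for lazily generating an hg changeset from a bonsai.
fn derive_hg_changeset(b: BonsaiId) -> HgId {
    format!("hg-{}", b)
}

fn main() {
    let mut m = Mapping {
        known: HashMap::from([(1, "hg-1".to_string())]),
    };
    // Bonsai 2 has no mapping yet; it is generated lazily rather than omitted.
    let pairs = m.get_hg_bonsai_mapping(&[1, 2]);
    assert_eq!(pairs, vec![(1, "hg-1".to_string()), (2, "hg-2".to_string())]);
    println!("ok");
}
```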
Reviewed By: krallin
Differential Revision: D18394642
fbshipit-source-id: 977aab08f3321b9797fa8c032f1d83c0ef7ff487
Summary: For megarepo, we care that derived data tailing ends up with nothing to do most of the time - otherwise, we have a problem that's being papered over by hg changeset tailing. Add an ODS timeseries to derived data tailing, so that we can trivially detect when derived data tailing is fixing things for us
Reviewed By: krallin
Differential Revision: D18503883
fbshipit-source-id: a12b6519ff7813b9622ab55f17dda69ed3a3a137
Summary:
This will be used for updates from small repo to large repo (in particular, in
the next diff in the stack).
Reviewed By: ikostia
Differential Revision: D18532491
fbshipit-source-id: 57ca896ec271ae95b4291b144dfa49e5c4dd0e73
Summary: Do a refactor which will make next diff easier
Reviewed By: ikostia
Differential Revision: D18532299
fbshipit-source-id: 7c22dffd23890724c76d2a045ea23636d2cf9803
Summary:
Add an `scsc cat` command which can be used to fetch file contents at a
particular revision.
This uses chunking to ensure the server doesn't need to buffer the whole file
in memory, and to provide some streaming of the output.
JSON output is less useful for `cat`, but it is still provided. File chunks
that are valid UTF-8 are rendered as strings. File chunks that are not valid
UTF-8 are hex-encoded.
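The chunk rendering rule could be sketched like this (the `render_chunk` helper and the exact hex encoding are illustrative, not the actual server code):

```
// Render a file chunk for JSON output: a chunk that is valid UTF-8 becomes a
// string; anything else is hex-encoded.
fn render_chunk(chunk: &[u8]) -> String {
    match std::str::from_utf8(chunk) {
        Ok(s) => s.to_string(),
        Err(_) => chunk.iter().map(|b| format!("{:02x}", b)).collect(),
    }
}

fn main() {
    assert_eq!(render_chunk(b"hello"), "hello");
    // 0xff is not valid UTF-8, so the chunk is hex-encoded instead.
    assert_eq!(render_chunk(&[0xff, 0x00]), "ff00");
    println!("ok");
}
```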
Reviewed By: ikostia
Differential Revision: D18528708
fbshipit-source-id: 38f82d05d956928fe655e7cf9862e7c0a57d97ac
Summary:
D18224367 introduces `compare_with_parents`, which also includes copy/move information, but it turned out that we'd also need to do a manifest diff in that case as the bonsai doesn't have all the necessary info (so the perf of that call becomes very similar to this one).
Until I figure out what I want to do with the other call I can still provide
some copy/move information so we can start redirecting diff traffic to the new service.
Reviewed By: markbt
Differential Revision: D18505946
fbshipit-source-id: 12bebb368f999afff01d3a5f8aebc97f338ff162
Summary: Extend --readonly-storage support to sqlite. This should make local testing more realistic, and hopefully will surface any issues with which connections are used.
Reviewed By: krallin
Differential Revision: D18416033
fbshipit-source-id: 7ea963f4dba2f7005b67b34a7055933e0593bb30
Summary:
This diff replaces code of the form:
```
use failure::Fail;
#[derive(Fail, Debug)]
pub enum ErrorKind {
#[fail(display = "something failed {} times", _0)]
Failed(usize),
}
```
with:
```
use thiserror::Error;
#[derive(Error, Debug)]
pub enum ErrorKind {
#[error("something failed {0} times")]
Failed(usize),
}
```
The former emits an implementation of failure 0.1's `Fail` trait while the latter emits an impl of `std::error::Error`. Failure provides a blanket impl of `Fail` for any type that implements `Error`, so these `Error` impls are strictly more general. Each of these error types will continue to have exactly the same `Fail` impl that it did before this change, but now also has the appropriate `std::error::Error` impl which sets us up for dropping our various dependencies on `Fail` throughout the codebase.
Reviewed By: Imxset21
Differential Revision: D18523700
fbshipit-source-id: 0e43b10d5dfa79820663212391ecbf4aeaac2d41
Summary:
This diff is preparation for migrating off of failure::Fail / failure::Error for errors in favor of errors that implement std::error::Error. The Fallible terminology is unique to failure and in non-failure code we should be using Result<T>. To minimize the size of the eventual diff that removes failure, this codemod replaces all use of Fallible with Result by:
- In modules that do not use Result<T, E>, we import `failure::Fallible as Result`;
- In modules that use a mix of Result<T, E> and Fallible<T> (only 5) we define `type Result<T, E = failure::Error> = std::result::Result<T, E>` to allow both Result<T> and Result<T, E> to work simultaneously.
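A sketch of the first pattern, with a stand-in `failure` module (the real crate's `Fallible<T>` aliases `Result<T, failure::Error>`, not a boxed error):

```
// Stand-in for the failure crate, just for illustration.
mod failure {
    pub type Fallible<T> = std::result::Result<T, Box<dyn std::error::Error>>;
}

// The codemod: import Fallible under the name Result, so call sites already
// written as Result<T> need no further changes when failure is removed.
use failure::Fallible as Result;

fn parse(input: &str) -> Result<u32> {
    Ok(input.trim().parse::<u32>()?)
}

fn main() {
    assert_eq!(parse(" 42 ").unwrap(), 42);
    assert!(parse("nope").is_err());
    println!("ok");
}
```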
Reviewed By: Imxset21
Differential Revision: D18499758
fbshipit-source-id: 9f5a54c47f81fdeedbc6003cef42a1194eee55bf
Summary:
The new type alias allows being used as both `Result<T>` and `Result<T, SomeOtherError>`. This is useful when almost every Result in a module uses failure::Error as the error type but there are a few special-purpose signatures with a different error type; we still want to import failure_ext::Result, but keep using Result<T, Other> where needed.
This is forward-compatible with migrating to https://docs.rs/anyhow/1.0.19/anyhow/type.Result.html which follows the same scheme.
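A minimal illustration of the alias, using a stand-in `Error` type in place of `failure::Error`:

```
// Stand-in for the module-wide error type (failure::Error in the real code).
#[derive(Debug)]
struct Error;

#[derive(Debug, PartialEq)]
struct ParseError;

// The alias: Result<T> defaults E to the common error type, while
// Result<T, Other> still works for special-purpose signatures.
type Result<T, E = Error> = std::result::Result<T, E>;

fn common() -> Result<u32> {
    Ok(1)
}

fn special() -> Result<u32, ParseError> {
    Err(ParseError)
}

fn main() {
    assert_eq!(common().unwrap(), 1);
    assert_eq!(special(), Err(ParseError));
    println!("ok");
}
```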
Reviewed By: Imxset21
Differential Revision: D18499669
fbshipit-source-id: e9ad9b2853fc0a16c56018573f9e64fbee363f61
Summary:
As evidenced by the fact that it is not used anywhere, this argument is not useful.
Before:
```
.with_context(|_| format!(...))
```
After:
```
.with_context(|| format!(...))
```
In the case that someone in the future does need a reference to the error value in a context, the recommended way to write would be:
```
.map_err(|e| {
/* whatever you want to do with &e */
e.context(format!(...))
})
```
This diff is in preparation for migrating off of failure::Fail to std::error::Error for error types, in combination with anyhow::Error whose [with_context closure does not take an argument](https://docs.rs/anyhow/1.0.19/anyhow/trait.Context.html).
Reviewed By: Imxset21
Differential Revision: D18498898
fbshipit-source-id: 2c6f1929944d5877c408e367cde3c99ab2a5a07a
Summary:
StanislavGlebik reported that running the tests locally in mode/opt was failing.
The reason is that while `buck test` sets `FBCODE_BUILD_MODE`, it's not set
when the binary runs.
This fixes that by looking up our manifest to know if our binary is optimized
or not... which is kinda clowny but it will have to do.
Reviewed By: farnz
Differential Revision: D18507748
fbshipit-source-id: ed6f545df61a3f3be050e7cc519f6a4e39e2e4aa
Summary:
Currently the warm bookmarks cache runs all the warmers in a single task. This
means they compete for CPU. Run them in separate tasks, so that they might
execute in parallel.
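The idea can be sketched with plain threads standing in for the async tasks the real code spawns (the warmer names are made up):

```
use std::sync::mpsc;
use std::thread;

// Each warmer runs in its own thread (the real code spawns async tasks), so a
// slow warmer no longer starves the others of CPU.
fn run_warmers(warmers: Vec<fn() -> &'static str>) -> Vec<&'static str> {
    let (tx, rx) = mpsc::channel();
    for warmer in warmers {
        let tx = tx.clone();
        thread::spawn(move || tx.send(warmer()).unwrap());
    }
    drop(tx); // close the channel once all warmers are spawned
    rx.iter().collect()
}

fn main() {
    let mut results = run_warmers(vec![|| "hg", || "unodes"]);
    results.sort();
    assert_eq!(results, vec!["hg", "unodes"]);
    println!("ok");
}
```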
Reviewed By: krallin
Differential Revision: D18505881
fbshipit-source-id: e8045bd14916caf3c2c592afbe35309534fe3446
Summary: D18478452 broke 2 mode/opt tests. One of them directly modifies the bookmarks table; in the other, a separate binary modifies the bookmarks. This makes the tests nondeterministic. Let's disable the bookmarks cache for those 2 tests
Reviewed By: farnz
Differential Revision: D18501660
fbshipit-source-id: d4f625dbdf2f8b110eb6196761e655187407abf6
Summary: Find the `run_tests` code under `eden/scm/tests` rather than `scm/hg/tests`
Reviewed By: singhsrb
Differential Revision: D16823961
fbshipit-source-id: 828b68311d0af9ab6d0dac6e574748313a96c02b
Summary:
This adds support for limiting the number of commits for a given author. This
runs after we've received Bonsais from the client, but before we attempt to
pushrebase them or anything. This is all controlled through Configerator,
through the same configuration as our throttling.
I'll also send a diff in Landcastle in order to make this non-retryable.
Differential Revision: D18375115
fbshipit-source-id: 089bdcd7bebfd2ea42c37921fc80b53f96a1d40e
Summary: I added more integration tests to cover all options for getting commits. Also added tests to check output when asking about globalrevs repo with or without globalrev. Modified list_repos to make it deterministic.
Reviewed By: markbt, HarveyHunt
Differential Revision: D18448770
fbshipit-source-id: 8662c3a0d1676813def5dd9f2b17200ca1c52040
Summary: Add --readonly-storage option to cmdlib that will cause an error on any attempt to write to SQL metadata or Blobstore
Reviewed By: StanislavGlebik
Differential Revision: D18297959
fbshipit-source-id: e879183b74fb50abfb60d2424ea579708322963f
Summary: Command will be used only in the tests for now.
Reviewed By: markbt
Differential Revision: D18303171
fbshipit-source-id: 4938cca6b0ac0fa1868ab75a64db6d23c201a4f8
Summary: Derived data implementation for Blame data
Reviewed By: StanislavGlebik
Differential Revision: D18201489
fbshipit-source-id: d5ebd73f3a9b210108f509b7d2447fed3e7fb997
Summary: We want an alarm if the backsyncer Tupperware job is doing significant work; it only does anything if the pushredirect logic is buggy, or in rare cases where it wins a race. Log to ODS when this task sees a queue, so that we can investigate
Reviewed By: StanislavGlebik
Differential Revision: D18450577
fbshipit-source-id: 6aac1c8638c6275fad5db3db1bb4915c1b824930
Summary:
It should make it possible to use these arguments in commands that do not use
MononokeApp::build() (e.g. backsyncer_cmd).
Also, it generally feels like the right cleanup, because all logger arguments
will be specified in only one place.
Reviewed By: krallin
Differential Revision: D18448014
fbshipit-source-id: 729d12b42df4b28ab37820bc4a86cefa0ea870a9
Summary:
Update futures-preview from 0.3.0-alpha.18 to 0.3.0-alpha.19, tokio-preview from 0.4.0-alpha.4 to 0.4.0-alpha.6, hyper-preview from 0.13.0-alpha.1 to 0.13.0-alpha.4.
The source changes are from the `hostname` crate releasing 0.2.0 which changes the signature to return io::Result<OsString> instead of Option<String>.
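A sketch of the call-site adaptation, with a stub `hostname` module standing in for the real crate (which returns the machine's actual name):

```
// Stand-in for the hostname 0.2 API; the real crate returns the machine name.
mod hostname {
    use std::ffi::OsString;
    use std::io;
    pub fn get() -> io::Result<OsString> {
        Ok(OsString::from("devserver"))
    }
}

// Adapting call sites: hostname 0.1 returned Option<String>, while 0.2
// returns io::Result<OsString>, so conversion is now explicit.
fn hostname_string() -> Option<String> {
    hostname::get().ok().and_then(|h| h.into_string().ok())
}

fn main() {
    assert_eq!(hostname_string().as_deref(), Some("devserver"));
    println!("ok");
}
```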
Reviewed By: jsgf
Differential Revision: D18435884
fbshipit-source-id: 548ec3c53f597caa10f8c65b27ae642324a8e484
Summary:
[this diff was created with `hg backout`]
I've noticed slowness in the Mononoke APIServer `get_file_history` due to recently deployed changes introduced in D18138431.
The slowness (which led to timeouts) was caused by fetching a large changeset while checking renames for the paths that were touched by the changeset.
Reverting the feature, as it causes fallbacks while the rename history is not rendered by Diffusion anyway.
Reviewed By: StanislavGlebik
Differential Revision: D18430395
fbshipit-source-id: 2fbf9376d370624435f3846c7c88a1c5b9a53021
Summary:
Handling diamond merges correctly in megarepo is a hard problem. For now I'd
like to add this half-manual tool that can sync a merge commit into a megarepo
should we have one again. This tool is a hack until we fully support merge
commits in megarepo.
Notes:
1) The tool is best-effort, not production quality. It might not handle all
edge cases. It might require tweaking and should be used with care (e.g. run
mononoke_repo crossrepo verify-wc). That said, I'd like to land it -
previously it took me > 4 hours to sync a diamond merge. I'd like the next one
to take less time, and even this hacky tool should help.
2) A diff below in the stack changes the blobsync crate to not upload a blob if it
already exists. That is necessary for this tool to work. Currently `upload_commit`
copies all blobs from the source repo; however, the merge commit the tool creates can contain
entries from multiple source repos, and trying to copy all of them from a single source repo
will fail!
Reviewed By: farnz
Differential Revision: D18373457
fbshipit-source-id: 7cdb042b3a335cdc0807d0cf98533f9aec937fd0
Summary: Previously it printed a warning, which was easy to miss. Let's fail now
Reviewed By: farnz
Differential Revision: D18427675
fbshipit-source-id: d0d638d7449108469e5acf7e71b8e951576792df
Summary:
verify-wc didn't work correctly for commits that were preserved i.e. commits
that are the same in small and large repos. For those commits we don't need to
move paths
Reviewed By: farnz
Differential Revision: D18427624
fbshipit-source-id: 102ce743714fe63a3d5ba9e6441fa735361063cb
Summary:
We're going to pushredirect some commits from small repos to large repos as part of the megarepo write path. Add some Scuba logging, so that we can see when redirection happens and react accordingly.
Note that I've deliberately kept the logging small - just tells you about the target repo - to avoid filling up Scuba. We can increase or reduce the amount of logging as we test this code.
Reviewed By: StanislavGlebik
Differential Revision: D18405345
fbshipit-source-id: bafc8f0aa0b4329b261dc0d6c99306fc9df95cf9
Summary: Will be used in the next diff
Reviewed By: farnz
Differential Revision: D18373627
fbshipit-source-id: 74dca2fef6a256eefed026a93c4c4381511e611c
Summary: We currently have no way to track what the backsyncer is doing, if anything, and it can get stuck. Log for each sync, so that we can see what bookmark moves (if any) are getting stuck.
Reviewed By: StanislavGlebik
Differential Revision: D18397848
fbshipit-source-id: 67ce60a129c020185f41ba69fe3ed046d540f047
Summary: This diff adds basic happy path pushrebase tests for the push redirector. In other words, it covers a situation, where there's a single repository, which is push-redirected into a large repo, and which only serves pushrebase pushes.
Reviewed By: StanislavGlebik
Differential Revision: D18421133
fbshipit-source-id: c58af0c3c8fa767660f5e864554cc4a91cd0402c