Summary:
This allows adding progress bars tracking downloads from the server.
We could be smarter here if we deserialized on the fly: the first part of the
payload contains the number of IdMap entries we need, but exposing that
cleanly requires more work. The progress object is currently designed around
raw bytes.
Reviewed By: quark-zju
Differential Revision: D25840470
fbshipit-source-id: c466c8d606b44981fe63c95352db2d8f14d6071b
Summary:
The unbundlereplay command was not implemented in Mononoke, but it is used by the sync job, so let's add it here,
together with an additional integration test for syncing between two Mononoke repos. I'm also adding non-fast-forward bookmark movements by specifying a key to the sync job.
Reviewed By: StanislavGlebik
Differential Revision: D25803375
fbshipit-source-id: 6be9e8bfed8976d47045bc425c8c796fb0dff064
Summary:
The segmented changelog tailer is going to run with multiple instances which
may race to update the database. This change adds a test that checks that
concurrent updates keep the IdMap correct.
Reviewed By: ahornby
Differential Revision: D25684783
fbshipit-source-id: a09f6e6c915bde38158d9737dcfdc7adc3f15cb7
Summary:
The most common scenario where we see matcher errors is when we iterate through
a manifest and the user sends SIGTERM to the process. The matcher may be a mix
of Rust and Python code. The Python code handles the interrupt and prevents
future function calls, but the iterating Rust code keeps calling matcher
functions during this time, so we get matcher errors from the terminated
Python stack. As long as we have Python matcher code, these errors are valid.
It is unclear to me whether the matcher trait should have `Result` return
values once all implementations are Rust. It is easy to imagine implementations
that can fail in various circumstances, but the ones we would get from a
straight port of the Python code wouldn't fail.
All in all, I think that this is a reasonable step forward.
Reviewed By: quark-zju
Differential Revision: D25697099
fbshipit-source-id: f61c80bd0a8caa58040a447ed02d48a1ae84ad60
Summary: These globs were lost as part of D25315954 (ec0b533381).
Reviewed By: quark-zju
Differential Revision: D25814934
fbshipit-source-id: b1896893e37e355a73eb136758f8966666e0ec05
Summary:
On Windows, it's possible that not all files can be removed from the
repository because some other process holds a reference to them. When that
happens, the `edenfsctl rm` operation fails. Sometimes, instead of failing
with the actual reason for the removal failure, it throws a cryptic "list
index out of range" error.
The reason is that when the file that can't be removed is actually a
directory, the `errors` list ends up empty. Since filtering out the folders is
a bit silly, let's not do it.
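A hypothetical sketch of the failure mode (the real edenfsctl code differs,
but the bug pattern is the same): filtering directories out of the error list
can leave it empty, so indexing it raises the cryptic IndexError.

```python
def report_failure_buggy(failed_paths, is_dir):
    # Old behavior: keep only non-directory failures, then report the first.
    # When every failed path is a directory, `errors` is empty and the
    # indexing below raises "list index out of range".
    errors = [p for p in failed_paths if not is_dir(p)]
    return "Failed to remove: %s" % errors[0]

def report_failure_fixed(failed_paths, is_dir):
    # Fixed behavior: don't filter out directories at all.
    errors = list(failed_paths)
    if not errors:
        return "Removal succeeded"
    return "Failed to remove: %s" % errors[0]
```

With the fix, a directory held open by another process produces a real error
message instead of an IndexError.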
Reviewed By: fanzeyi
Differential Revision: D25836412
fbshipit-source-id: 36f936ff9d7697dfd2f4c68d4e56bdb18b66b06a
Summary:
Fixed `README.md` so commands in it work now.
Fixed integration_runner.
Reviewed By: lukaspiatkowski
Differential Revision: D25823461
fbshipit-source-id: 0d6784758c9f86bca38beafe014af4766169bee3
Summary:
This unbreaks the test. The reversefiller needs access to SMC to talk to
scmquery (we could set up our own scmquery instance, but I don't think it's
worth it).
Reviewed By: krallin
Differential Revision: D25824395
fbshipit-source-id: 676b3ac1e3af95e8e02bd272f7cb25250e047eed
Summary:
Sometimes we want to rechunk just a few file contents; this diff makes it
possible to do so.
Reviewed By: ahornby
Differential Revision: D25804144
fbshipit-source-id: 6ce69f7cee8616a872531bdf5a48746dd401442d
Summary: There is only one implementation of the trait so remove it and use that impl directly. Removing the trait makes it simpler to work on bulkops in the rest of this stack.
Reviewed By: farnz
Differential Revision: D25804021
fbshipit-source-id: 22fe797cf87656932d383ae236f2f867e788a832
Summary:
As long as we can update to a public root, there is nothing wrong with having
local changes while switching workspaces. The two are unrelated: uncommitted
changes shouldn't impact switching workspaces.
Reviewed By: mitrandir77
Differential Revision: D25802406
fbshipit-source-id: 3fcb70864002bed11ad32621947294f643ca1fc3
Summary:
Right now we get zero logs from the blobstore healer, which is pretty annoying
because it makes it impossible to really tell what it's doing.
This fixes that.
Reviewed By: HarveyHunt
Differential Revision: D25823800
fbshipit-source-id: ded420753ba809626d6e4291eb3d900dcfbff3d1
Summary:
This was a request from users. A repo can end up in a disconnected state, for example, if rejoin in fbclone fails for some reason.
In this case it was confusing that the `hg cloud switch` command doesn't work and users have to run `hg cloud join` first.
If the repo is disconnected but doesn't contain any relevant local changes for commit cloud, it should be fine to switch workspace.
Reviewed By: mitrandir77
Differential Revision: D25802193
fbshipit-source-id: 3216a10c3438463773602b2dfd13740866fb5908
Summary:
In some cases we might have chunked file content in one blobstore component
and unchunked file content in another. Rechunking the second component was
awkward because we never know which version the filestore will fetch: it can
fetch the chunked version and decide that rechunking is not necessary.
This diff makes it possible to rechunk only a single component of a multiplexed
blobstore. It does so by manually creating BlobRepo with the single-component
blobstore.
Reviewed By: krallin
Differential Revision: D25803821
fbshipit-source-id: f2a992b73d0c5fc9d389a4b81e0f3e312c17fdea
Summary:
The cert path isn't correctly set up on all platforms, so this can
cause Mercurial to throw an error complaining about missing certs, even when
edenapi isn't enabled.
Let's back this out for now until we can fix the cert paths or only hit this
path when we actually use edenapi.
Reviewed By: singhsrb
Differential Revision: D25792491
fbshipit-source-id: 022a89a089cabcc709a07934eb62b883082261c2
Summary: Convert `Changsets` trait and all its uses to new type futures
Reviewed By: krallin
Differential Revision: D25638875
fbshipit-source-id: 947423e2ee47a463861678b146641bcc6b899a4a
Summary:
Lots of things can look like CBOR data, such as ... strings representing
errors. Right now, if the data in our CBOR stream is actually an error message,
then we'll just ignore it (see details in T80406893).
This isn't how we normally handle invalid data on the stream (we'd raise an
error) — it only happens with trailing data. This fixes our decoding to raise
an error in this case.
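The shape of the fix, sketched below with JSON since the Python stdlib has no
CBOR decoder (the real code operates on a CBOR stream in Rust): decode
consecutive values, and when leftover bytes fail to parse as another value,
raise instead of silently ignoring them.

```python
import json

def decode_stream(data):
    """Decode consecutive values; raise on trailing bytes that aren't a value."""
    decoder = json.JSONDecoder()
    items, pos = [], 0
    while pos < len(data):
        # Skip insignificant whitespace between values.
        while pos < len(data) and data[pos].isspace():
            pos += 1
        if pos >= len(data):
            break
        try:
            obj, pos = decoder.raw_decode(data, pos)
        except ValueError:
            # Old behavior: silently ignore the trailing bytes.
            # New behavior: surface them as an error, e.g. a server-side
            # error message that ended up on the stream.
            raise ValueError("invalid trailing data: %r" % data[pos:])
        items.append(obj)
    return items
```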
Reviewed By: quark-zju
Differential Revision: D25759082
fbshipit-source-id: c3d8be5007112ec1d2e7f25a102d8caaf0dbba56
Summary:
Enable switching workspaces from a draft commit in most cases: allow it if
the public root of the current commit is an ancestor of the main bookmark.
We need this condition because remote bookmarks can differ between workspaces,
and they define phases.
I think this will cover most workflows.
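The gating condition can be sketched as a plain ancestor walk; the function
and parameter names are illustrative, not the real commit cloud API.

```python
def can_switch_workspace(public_root, main_bookmark, parents):
    """Allow switching iff the public root is an ancestor of the main bookmark.

    `parents` maps each commit to the list of its parent commits.
    """
    seen, stack = set(), [main_bookmark]
    while stack:
        node = stack.pop()
        if node == public_root:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(parents.get(node, ()))
    return False
```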
Reviewed By: mitrandir77
Differential Revision: D25780999
fbshipit-source-id: b1c25b29a7668d51244ca43d6b0c30fa2fc068d9
Summary: Skip some very long configs to make rage output cleaner.
Reviewed By: DurhamG
Differential Revision: D25625452
fbshipit-source-id: 44bf8b9f93d9cb06d065a89f5d0ffa53ad6d6286
Summary:
The StringPiece constructor is untyped and was only used in tests. We can
afford to build the PathComponent in tests instead to avoid future headaches.
Reviewed By: genevievehelsel
Differential Revision: D25434556
fbshipit-source-id: 4b10bf2576870e81412d76c4b9755b45e26986b3
Summary:
Mercurial supports files with `\` in their name, which can't be represented on
Windows due to `\` being the path separator. Currently, EdenFS throws
errors at the user when such files are encountered; let's simply warn and
continue.
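A hedged sketch of the new behavior (the real code is C++ inside EdenFS):
names containing the Windows path separator are skipped with a warning instead
of aborting the whole operation.

```python
import logging

def materialize(entries, write_file, log=logging.getLogger("eden")):
    """Write each entry, skipping names unrepresentable on Windows."""
    written, skipped = [], []
    for name in entries:
        if "\\" in name:
            # Previously this raised an error; now we warn and keep going.
            log.warning("skipping %r: backslash is not representable on Windows", name)
            skipped.append(name)
            continue
        write_file(name)
        written.append(name)
    return written, skipped
```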
Reviewed By: chadaustin
Differential Revision: D25430523
fbshipit-source-id: 4167b4cd81380226aead8e4f4850a7738087fd95
Summary:
On OSX, if Mercurial is built from fbcode, these environment variables
(which point specifically to Eden's own par file data) can break Mercurial's
ability to load dynamic libraries. Let's unset them.
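The scrubbing amounts to dropping the offending variables from the child
environment before spawning Mercurial; the variable names below are
placeholders, not the ones the diff actually unsets.

```python
# PLACEHOLDER_PAR_VAR names are hypothetical examples of par-file-specific
# environment variables; the real diff unsets Eden's own par variables.
def scrub_env(env, names=("PLACEHOLDER_PAR_VAR_A", "PLACEHOLDER_PAR_VAR_B")):
    """Return a copy of `env` with the given variables removed."""
    cleaned = dict(env)
    for name in names:
        cleaned.pop(name, None)  # absent keys are fine
    return cleaned
```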
Reviewed By: xavierd
Differential Revision: D25783552
fbshipit-source-id: 74e6232d225856fedc0382abc6cd223a6c47d8bc
Summary:
All of the strace logging was done in PrjfsChannel except for the notification
callbacks, let's remediate this.
Reviewed By: kmancini
Differential Revision: D25643491
fbshipit-source-id: 7eaed2503557b0e486d7d1b0637c68287ee9df90
Summary:
In a previous diff, chadaustin noted that there was a bunch of duplicated code
prior to calling into the PrjfsChannel; let's use templates to solve this.
One of the non-refactored pieces is BAIL_ON_RECURSIVE_CALL, and I'm not sure
of a good way to move it into runCallback while still being able to tell which
callback is recursive. Previously, the line number from XLOG was sufficient;
moving it into the runCallback function would lose that.
Reviewed By: chadaustin
Differential Revision: D25576860
fbshipit-source-id: 619ed0c9fecf05cda2263dfcdf2fbcbaec85e45a
Summary:
The RcuPtr abstraction allows us to use RCU instead of the significantly more
expensive Synchronized<shared_ptr>. This should reduce the cost of all the
callbacks while not sacrificing the guarantee that unmounting a repository
needs to wait for all the pending callbacks to complete.
A new rcu_domain is used as the pending callbacks may sleep and take a long
time to complete when the servers aren't reachable. To avoid penalizing all the
other RCU clients, it's best to be isolated in its own domain.
Reviewed By: kmancini
Differential Revision: D25351535
fbshipit-source-id: bd40d59056e3e710c28c42d651b79876be496bc3
Summary:
We should not filter based on the parsed level when passing an inner drain
into the `DynamicLevelDrain`: in cases where the binary is run with
`--with-dynamic-observability=true`, this would default the level to `INFO`
and make the inner drain filter on that level, which would essentially make
debug logging impossible. Instead, we should pass an unfiltered inner drain
into `DynamicLevelDrain`, as `DynamicLevelDrain` actually uses the
`ObservabilityContext`, which, when the binary is called with `--debug` or
`--level=SOMETHING`, will [instantiate](https://fburl.com/diffusion/sib8ayrn) a `Static` variant that behaves just
like the current static level filtering.
Note also that this bug does not affect production, as we never actually try
to control the logging levels dynamically: we always run either with `--debug`
or with `--level=SOMETHING`, which again uses the `Static` variant of
`ObservabilityContext`, which in turn filters the same way as the inner drain.
Reviewed By: krallin
Differential Revision: D25783488
fbshipit-source-id: 8054863fb655dd66747b6d2306a38c13cbc64443
Summary:
This diff adds an (as yet unused) option to log verbose scuba samples.
Here's the high-level overview.
In addition to doing `scuba_sample.log_with_msg` and `scuba_sample.log()`, you can now do `scuba_sample.log_with_msg_verbose()` and `scuba_sample.log_verbose()`. These two methods indicate that the intended sample is verbose and should go through some filtering prior to logging.
By default verbose samples are just not logged, but there are ways to override this via `ScubaObservabilityConfig`. Namely, the config has a "system" `ScubaVerbosityLevel`, which is either `Normal` or `Verbose`. When the level is `Verbose`, all samples are logged (those triggered by `.log_with_msg()`, `.log()`, `.log_with_msg_verbose()` and `.log_verbose()`). In addition to the "system" verbosity level, `ScubaObservabilityConfig` supports a few filtering overrides: a list of verbose sessions, a list of verbose unixnames and a list of verbose hostnames. Whenever a verbose sample's session, unixname or source hostname belongs to a corresponding list, the sample is logged.
`ScubaObservabilityConfig` is a struct, queried from `configerator` without the need to restart a service. Querying/figuring out whether logging is needed is done by the `ObservabilityContext` struct, which was introduced a few diffs earlier.
Note: I also want to add regex-based filtering for hostnames, as it's likely to be more useful than exact-match filtering, but I will do that later.
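The gating logic for a verbose sample can be sketched as follows; the config
field names are assumptions based on the description above, not the actual
`ScubaObservabilityConfig` schema.

```python
def should_log_verbose(config, session, unixname, hostname):
    """Decide whether a verbose sample passes the filter.

    `config` models ScubaObservabilityConfig as a plain dict.
    """
    # "system" level Verbose means everything is logged.
    if config.get("system_level") == "Verbose":
        return True
    # Otherwise a verbose sample is logged only if it matches an override.
    return (session in config.get("verbose_sessions", ())
            or unixname in config.get("verbose_unixnames", ())
            or hostname in config.get("verbose_hostnames", ()))
```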
Reviewed By: StanislavGlebik
Differential Revision: D25232429
fbshipit-source-id: 057af95fc31f70d796063cefac5b8f7c69d7b3ef
Summary:
In the previous diff I had to make the same change in two places, this change
deduplicates the code so we can reuse the change. This isn't 100% equivalent,
since now we have 2 layers of boxing on the stream in `Fetch`.
That being said, that seems quite unlikely to matter considering that this is
ultimately handling responses that came to us over HTTP, so one pointer
traversal seems to be reasonable overhead (also, similar experience in Mononoke
suggests it really does not matter).
Reviewed By: quark-zju
Differential Revision: D25758652
fbshipit-source-id: 399ead1b67ffbb241597615a29129411580cf194
Summary:
This updates the edenapi fetch mechanism to check status codes from the server.
If the server responds with an error, we propagate the error up to the caller.
This is equivalent to what we would do if e.g. the server had just crashed.
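The status check itself is tiny; here is a minimal sketch (the real client is
Rust, and `FetchError` here is just an illustrative name).

```python
class FetchError(Exception):
    """Raised when the server responds with an error status."""

def check_status(status, url):
    # Propagate HTTP errors to the caller instead of trying to parse
    # the body as a normal response.
    if status >= 400:
        raise FetchError("server responded %d for %s" % (status, url))
```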
Reviewed By: quark-zju
Differential Revision: D25758653
fbshipit-source-id: f44f6384be7944dce670c3825ccbb60b5fa2090a
Summary: This was a bit triggering while looking at logs :p
Reviewed By: StanislavGlebik
Differential Revision: D25781047
fbshipit-source-id: 22ebf1273b8b8d0b765c1bc7df2ba93752bf45e8
Summary:
See D25780870 for a bit of context. Our admin server was failing to start up
because of changesets warmup taking too long, but that's not easy to figure out
if all you have are the logs that don't tell you what we are doing (you'd have
to look at counters to work this out).
Let's just log this stuff.
Reviewed By: StanislavGlebik
Differential Revision: D25781048
fbshipit-source-id: 57a783dadc618956f577f32df3d2ec92ee729d56
Summary:
Like it says in the title. This is helpful with e.g. Mononoke server where the
"server" handle includes a long winded startup sequence. Right now, if we get
an error, then we don't get an error message immediately, even if we have one.
This leaves you with logs like this:
```
I0105 04:20:48.563924 995374 [main] eden/mononoke/cmdlib/src/helpers.rs:229] Server has exited! Starting shutdown...
I0105 04:20:48.564076 995374 [main] eden/mononoke/cmdlib/src/helpers.rs:240] Waiting 0s before shutting down server
I0105 04:20:48.564238 995374 [main] eden/mononoke/cmdlib/src/helpers.rs:248] Shutting down...
E0105 04:20:48.564315 995374 [main] eden/mononoke/server/src/main.rs:119] could not send termination signal: ()
```
This isn't great because you might have to wait for a while to see the error,
and if something hangs in the shutdown sequence later, then you might not see
it at all.
The downside is we might log twice if we have a server that crashes like this,
but I guess that's probably better than not logging at all.
Reviewed By: StanislavGlebik
Differential Revision: D25781095
fbshipit-source-id: bf5bf016d7aa36e3ff6302175bef1aab826977bc
Summary:
After the refactoring in the previous diff let's stop using CommitSyncConfig in
PushRedirectorArgs and start using get_common_pushrebase_bookmarks() method.
Reviewed By: mitrandir77
Differential Revision: D25636577
fbshipit-source-id: 126b38860b011c5a9506a38d4568e5d51b2af648
Summary:
At the moment we are in the bit of a mess with cross repo sync configuration,
and this diff will try to clean it up a bit.
In particular, we have LiveCommitSyncConfig which is refreshed automatically,
and also we have CommitSyncConfig which is stored in RepoConfig. The latter is
deprecated and is not supposed to be used, however there are still a few places
that do that. This stack is an attempt to clean it up.
In particular deprecated CommitSyncConfig is used to fetch common pushrebase
bookmarks i.e. bookmarks where pushes from both repos are sent. This diff adds
get_common_pushrebase_bookmarks() method to CommitSyncer so that in the later
diffs we can avoid using CommitSyncConfig for that.
Reviewed By: mitrandir77
Differential Revision: D25636394
fbshipit-source-id: 09b049eb8a54834881d215bc6b9c4150377e387f
Summary: Starting from 3.11.1, OSXFUSE switched into using macOS's major version number for different system versions. So we need to consider that when calculating path to the kernel extensions on macOS.
Reviewed By: xavierd
Differential Revision: D25675984
fbshipit-source-id: ea8c76ce7204ba5da3ca98ceca2cfbeb9c84fa8f
Summary:
Make sure we give more explanation to users so they can self-fix any errors
related to certificates that might pop up.
Reviewed By: xavierd
Differential Revision: D25758517
fbshipit-source-id: 3b9929be3d1c0c44a5e13cc9c1e7b2a4f785abf4
Summary:
The introduction of `eden trace` broke the Buck build on Windows due to its use
of streaming thrift which unfortunately doesn't compile on Windows. Since `eden
trace` is not supported on Windows for now, let's only depend on the streaming
thrift on Linux and macOS.
With this, we can now compile edenfsctl on Windows with Buck. This will later
enable integration tests to be run on Windows.
Reviewed By: genevievehelsel
Differential Revision: D25758445
fbshipit-source-id: d4be2cafd9472840f65dcfab63a5fcfb8eceffb7
Summary:
Like it says in the title. Judging by an earlier similar change (D21092866 (15f98fe58c)),
this kind of flakiness in walker tests occurs when a node's children are
reachable via other paths.
Reviewed By: HarveyHunt
Differential Revision: D25756891
fbshipit-source-id: 05bc0697381e068d466ea6dfe85529dbd9ef1a50
Summary:
Like it says in the title. Note that I did *not* add retries for things like
resolving hosts or connecting, so this should only really cover temporary
blips in connectivity. We probably shouldn't go much beyond that at a low
level like this.
Reviewed By: HarveyHunt
Differential Revision: D25615915
fbshipit-source-id: 78c33eff2e9ce380a260708e9fbeb929eede383c
Summary:
This is the goal of this stack: retry errors that occur when Curl detects
that the transfer speed is too low. This should let us eventually set a much
higher timeout on overall request completion, ensuring that we don't abort
uploads that are making progress, while still aborting connections early and
retrying them if they are legitimately stuck.
Reviewed By: farnz
Differential Revision: D25615790
fbshipit-source-id: fe294aee090758b1a3aef138788ac2926c741b79
Summary:
Right now, the error handling in LFS doesn't handle e.g. transfer timeouts. I'd
like us to support that, notably so that we can have curl require a minimum
transfer speed and retry if we fail.
To do so, I need to be able to capture the errors and figure out if they're
retryable. Right now, everything is either a `FetchError` that includes a HTTP
status and URL, or just an `Error` that aborts.
This diff introduces a `TransferError` that explains why a transfer failed and
can be used for retry decisions. We later add the request details if we decide
to not retry and turn it into a `FetchError`.
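An illustrative Python model of the split described above (the real types are
Rust enums, and the names of the helper and its parameters are assumptions):
`TransferError` says *why* a transfer failed and whether it is retryable;
`FetchError` adds the request context once we decide to give up.

```python
class TransferError(Exception):
    """Why a transfer failed; carries a retryability hint."""
    def __init__(self, reason, retryable):
        super().__init__(reason)
        self.retryable = retryable

class FetchError(Exception):
    """A TransferError enriched with request details, after retries are exhausted."""
    def __init__(self, url, cause):
        super().__init__("%s: %s" % (url, cause))
        self.url, self.cause = url, cause

def fetch_with_retries(do_transfer, url, attempts=3):
    for attempt in range(attempts):
        try:
            return do_transfer()
        except TransferError as e:
            if not e.retryable or attempt == attempts - 1:
                # Give up: attach the request details now.
                raise FetchError(url, e)
```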
Reviewed By: xavierd
Differential Revision: D25615789
fbshipit-source-id: e4a2f4f16a34ca2f86bd61491bb26e7f328dec63
Summary:
Like it says in the title. This adds support for setting a min-transfer-speed
in Curl. My goal with this is to fix two problems we have:
- a) Uploads that time out on slow connections. Right now we set a transfer
timeout on requests, but given that files to upload can be arbitrarily large,
that can fail. This happened earlier this week to a user (T81365552).
- b) Transfer timeouts in LFS. Right now, we have a very high timeout on
requests and we can't lower it due to this problem with uploads. Besides,
the reason for lowering the timeout would be to retry things, but right now
we don't support that anyway.
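For reference, curl's min-transfer-speed semantics (the
`CURLOPT_LOW_SPEED_LIMIT` / `CURLOPT_LOW_SPEED_TIME` pair) abort a transfer
once the rate stays below a limit for a given number of consecutive seconds.
A pure-Python sketch of that rule, with timestamps passed in explicitly to
keep it testable:

```python
def is_stalled(samples, limit, window):
    """Return True if the rate stayed below `limit` B/s for `window` seconds.

    `samples` is a list of (timestamp_sec, bytes_per_sec) measurements.
    """
    slow_since = None
    for ts, rate in samples:
        if rate >= limit:
            slow_since = None          # fast again: reset the slow streak
        elif slow_since is None:
            slow_since = ts            # start of a slow streak
        elif ts - slow_since >= window:
            return True                # slow for the whole window: abort
    return False
```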
Reviewed By: xavierd
Differential Revision: D25615788
fbshipit-source-id: 57d75ee8f522cf8524f9d12103e34b0765b6846a
Summary:
I'd like to make it a little easier to add more options without having to
thread them all the way through to the HTTP transfer callsite.
Reviewed By: xavierd
Differential Revision: D25615787
fbshipit-source-id: 4c6274dc2e6b5ba878e0027aae9a08b04f974463
Summary: Extended git-import test to include both `full-repo` and `missing-for-commit` import modes.
Reviewed By: ahornby
Differential Revision: D25675361
fbshipit-source-id: b93e2db963c2060540308bf0477cd891d40e5810
Summary:
Managing tailer processes that run continuously differs from managing ones
that run only once; we want separate code paths for the two.
Reviewed By: quark-zju
Differential Revision: D25684782
fbshipit-source-id: 354b32c1dd73f867d6a7b1bd4518d9dd98e6b9a3
Summary:
The intention was to sort entries by Dag Id. Instead they were sorted
lexicographically.
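The bug in one line, as a toy example (not the actual code): sorting numeric
ids as strings orders "10" before "9".

```python
ids = ["9", "10", "2"]
lexicographic = sorted(ids)           # wrong: compares strings char by char
by_value = sorted(ids, key=int)       # intended: compares Dag Id values
```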
Reviewed By: quark-zju
Differential Revision: D25684784
fbshipit-source-id: 0a3db6398aec7d8df080bbb2366e41660483608c