Summary:
It comes in handy sometimes, especially when we're unsetting (shadowing) configs on the command line, as in `--config abcd.efgh=` for `hg blah`.
I'm not changing existing behavior because that might break something, but we sometimes rely on the fact that `bool("") == False`.
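As a hedged illustration (the helper and section names here are hypothetical, not the real Mercurial config code), the shadowing behaviour relies on the empty string being falsy:

```
# Hypothetical sketch of config shadowing; not the actual hg config code.
def load_config(base, overrides):
    """Apply command-line --config overrides on top of base values."""
    merged = dict(base)
    merged.update(overrides)  # "" from `--config abcd.efgh=` shadows the base value
    return merged

base = {"abcd.efgh": "enabled"}
cfg = load_config(base, {"abcd.efgh": ""})

# The key is still present, but truthiness checks treat it as unset,
# because bool("") == False.
assert not cfg["abcd.efgh"]
```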
Reviewed By: markbt
Differential Revision: D29934620
fbshipit-source-id: fbc1231d88e00e1d887ffbba2d71e03a30653426
Summary:
Using the Thrift-generated server traits as trait objects (`dyn`) was emitting future-compatibility warnings with recent versions of rustc, due to a fixed soundness hole in the trait object system:
```
error: the trait `x_account_aggregator_if::server::XAccountAggregator` cannot be made into an object
|
= this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
note: for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
```
This diff pulls in https://github.com/dtolnay/async-trait/releases/tag/0.1.51 which results in the Thrift-generated server traits no longer hitting the problematic pattern.
Reviewed By: zertosh
Differential Revision: D29979939
fbshipit-source-id: 3e6e976181bfcf35ed453ae681baeb76a634ddda
Summary:
The "Calculating" step corresponds to comparing manifests to determine which files have changed. Previously it was just a spinner with no concept of progress. I've tweaked it to display the current tree depth as we build the plan, giving the user something to derive progress from. Depth refers to the directory depth as the checkout plan breadth-first searches across the current tree and destination tree. The time required to diff the two trees is normally proportional to the depth of the repo, so this is a decent indicator.
Note that there is no "total" set for the progress bar since the max or average depth of a repo doesn't seem readily available. We can add that in the future to provide a true progress bar.
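A rough sketch of the idea (toy nested-dict trees and a hypothetical `report_depth` callback, not the real checkout code):

```
from collections import deque

def diff_trees(src_root, dst_root, report_depth):
    """Breadth-first diff of two nested-dict trees, reporting the current
    depth so a progress bar has something to display."""
    changed = []
    queue = deque([(src_root, dst_root, "", 0)])
    while queue:
        src, dst, path, depth = queue.popleft()
        report_depth(depth)  # the only progress signal we have
        for name in sorted(set(src) | set(dst)):
            child = f"{path}/{name}" if path else name
            a, b = src.get(name), dst.get(name)
            if isinstance(a, dict) and isinstance(b, dict):
                queue.append((a, b, child, depth + 1))
            elif a != b:
                changed.append(child)
    return changed

depths = []
src = {"dir": {"file.txt": "old"}, "top.txt": "same"}
dst = {"dir": {"file.txt": "new"}, "top.txt": "same"}
assert diff_trees(src, dst, depths.append) == ["dir/file.txt"]
assert max(depths) == 1  # deepest level visited while planning
```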
Reviewed By: andll
Differential Revision: D29948994
fbshipit-source-id: bdb5d7a868345d66b9812c2e56159bbf66e6daff
Summary: Boost is not happy when it encounters a UNIX domain socket file on Windows. Let's detect the file type manually instead of using Boost.
Reviewed By: xavierd
Differential Revision: D29974282
fbshipit-source-id: e11558abdbc565014189ae763a5b3fb5486d38d7
Summary:
Update the backup state in the `hg cloud upload` command.
The backup state is used by `hg sl`, so it would be nice to keep it up to date after `hg cloud upload`, similar to the old `hg cloud backup`.
Also, we should add the heads that we filtered out, in order to update the backup state correctly.
The command now returns the list of uploaded heads as nodes (including filtered ones) and the list of failed commits as nodes (not only heads).
Reviewed By: markbt
Differential Revision: D29878296
fbshipit-source-id: 5848e9f86175fbdc56db123cf7ba0d5fc51273b0
Summary: If the "update" destination sparse profile is identical to the current sparse profile, we now pass "None" to checkoutplan, allowing it to avoid the slow call to with_sparse_profile_change. Previously, updating across a small number of commits still took a while due to this unnecessary futzing with sparse profiles.
Reviewed By: andll
Differential Revision: D29949015
fbshipit-source-id: d6fc2ceb7776cd2f2ee42f3dc5b8358f770a7947
Summary:
During a diff operation, files that have a case change should not be reported as
changed on a case-insensitive mount. This is a follow-up to D29150552 (37ccaa9231), where the
TreeInode code was taught to ignore case changes for case-insensitive mounts.
Reviewed By: kmancini
Differential Revision: D29952464
fbshipit-source-id: e5fa8eccc1c1984a680838fd8589a0278989d9d0
Summary:
We didn't print the underlying causes of an insertion failure. The reason is that
```
let s = format!("failed to insert {}", err);
```
uses `{}`, and to print the causes we need either `{:#}` or `{:?}` - see https://docs.rs/anyhow/1.0.42/anyhow/struct.Error.html#display-representations.
However, krallin suggested that we can achieve the same by converting the error to SharedError instead of stringifying it. Let's do that instead.
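The same pitfall exists in Python, for comparison: `str(err)` shows only the top-level message, and the cause chain has to be preserved or walked explicitly (a hedged analogy, not the Rust code in question):

```
def format_error_chain(err):
    """Render an exception plus its causes, roughly what anyhow's {:#} does."""
    parts = []
    while err is not None:
        parts.append(str(err))
        err = err.__cause__
    return ": ".join(parts)

try:
    try:
        raise ValueError("connection reset")
    except ValueError as inner:
        raise RuntimeError("failed to insert") from inner
except RuntimeError as err:
    assert str(err) == "failed to insert"  # the cause is silently dropped
    assert format_error_chain(err) == "failed to insert: connection reset"
```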
Reviewed By: krallin
Differential Revision: D29985083
fbshipit-source-id: 8ae3abcfc4db9ef62581a3e20462eb6bbfb401b6
Summary:
On Windows the path separator (slash) is different. Let's skip part of the test
for now.
Reviewed By: andll
Differential Revision: D29980940
fbshipit-source-id: c45fe9f5f3b3ffe6d4d8eb209fa5572149db8d0e
Summary:
D29933694 (7652fc3d4a) introduced an error code on failure for debughiddencommit, but it was still printing the hash, which might confuse some automation.
The expectation is that the hash printed by debughiddencommit has been uploaded to commit cloud, so printing a hash when the upload failed does not make sense.
Reviewed By: kulshrax
Differential Revision: D29967801
fbshipit-source-id: f88b80326c6c424492a493d106962cecd58239f6
Summary:
The `http.convert-cert` option is normally set globally for each Mercurial process via an event handler (created at startup) that sets the option for every `http_client::Request`.
However, the `edenapi` crate can be used outside of Mercurial (such as by EdenFS), in which case these global HTTP options will no longer be applied.
Given that this particular option is essential for EdenAPI to work at all, this diff makes the EdenAPI `Builder` explicitly set it.
Reviewed By: xavierd
Differential Revision: D29971619
fbshipit-source-id: 186c7f2ffdcfbdc8d7fb43b8fda0eb6aa918c0b8
Summary:
Warnings in Rust code can often indicate real problems, so it's important to
keep the build warning-free. This fixes all warnings where the fix is obvious.
Reviewed By: liubov-dmitrieva
Differential Revision: D29933213
fbshipit-source-id: d1a418c1a70630a2aa1a5740fcbafc66ce8bdf91
Summary:
While not strictly necessary, we should be providing the correct `Key` for
trees when fetching them from the Hg datapack store. Pass the `path` through
to the backing store so it can be used to construct the `Key` for the tree.
Reviewed By: DurhamG
Differential Revision: D29933214
fbshipit-source-id: d9631ea37b5ffa3f7051112d12cf3161c7c677ef
Summary:
If Mercurial asks EdenFS to update to a commit that it has just created, this
can cause a long delay while EdenFS tries to import the commit.
EdenFS needs to get the trees out of the Hg data store, but the data store also
won't know about the new trees until it is refreshed or synced.
To fix this, we call the refresh method on the store if we fail to find the tree,
and try again. To make this work, we must first look only locally. To keep
things simple, we only do this for the root tree.
However, indexedlogdatastore currently doesn't actually do anything when you
ask it to refresh.
To fix that, we make refresh call `flush()`, which actually does a `sync` operation and
loads the latest data from disk, too.
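The lookup-refresh-retry pattern can be sketched like this (`FakeStore` is a toy stand-in for the hg data store, not EdenFS's actual API):

```
class FakeStore:
    """Toy data store: new on-disk data only becomes visible after
    refresh(), like an un-synced indexedlog."""
    def __init__(self, on_disk):
        self.on_disk = on_disk
        self.loaded = {}
    def refresh(self):
        self.loaded.update(self.on_disk)  # reload the latest data from disk
    def get_local(self, tree_id):
        return self.loaded.get(tree_id)

def get_tree_with_refresh(store, tree_id):
    """Look up a tree locally; on a miss, refresh the store and retry once."""
    tree = store.get_local(tree_id)
    if tree is None:
        store.refresh()  # pick up data written by another process
        tree = store.get_local(tree_id)
    return tree

store = FakeStore({"root": b"tree-data"})
assert store.get_local("root") is None  # local miss: not yet synced
assert get_tree_with_refresh(store, "root") == b"tree-data"
```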
Reviewed By: DurhamG
Differential Revision: D29915585
fbshipit-source-id: 34fe02ddf5804be425d9cabe1a56069f22e5e4d4
Summary:
If Mercurial asks EdenFS to update to a commit that it has just created, this
can cause a long delay while EdenFS tries to import the commit.
EdenFS needs to resolve the commit to a root manifest. It does this via the
import helper, but the import helper won't know about the commit until it is
restarted, which takes a long time.
To fix this, we add an optional "root manifest" parameter to the checkout or
reset parents thrift calls. This allows the Mercurial client to inform EdenFS
of the root manifest that it already knows about, allowing EdenFS to skip this
step.
Reviewed By: chadaustin
Differential Revision: D29845604
fbshipit-source-id: 61736d84971cd2dd9a8fdaa29a1578386246e4bf
Summary:
We had somewhat inconsistent behaviour in multiplexed blobstore:
1) If the on_put handlers are too slow (i.e. slower than all the blobstores), then we
succeed as soon as all blobstores were successful (regardless of the value of
minimum_successful_writes). It doesn't matter whether the on_put handlers fail or
succeed; we've already returned success to our user.
2) However, if all writes to the queue fail quickly, then we return a failure
even if the writes to all blobstores were successful.
#2 seems like a change in behaviour from an old diff D17421208 (9de1de2d8b), and not a
desirable one - if the blobstore sync queue is unavailable and responds with
failures quickly, then blobstore writes will always fail even if all blobstores
are healthy.
So this diff makes it so that we always succeed if all blobstore puts were
successful, regardless of success or failure of the on_put handlers.
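A sketch of the fixed success rule (simplified; the fallback branch in particular is an illustration, not the exact multiplex scoring logic):

```
def multiplexed_put_ok(blobstore_oks, on_put_oks, minimum_successful_writes):
    """Sketch of the fixed rule: if every blobstore put succeeded, the
    overall put succeeds no matter what the on_put (sync-queue) handlers
    did. The fallback branch below is a simplification for illustration."""
    if all(blobstore_oks):
        return True  # queue failures no longer fail a fully-replicated put
    # A put that missed some blobstores only counts once its queue entry
    # (needed for later healing) also succeeded.
    healed = sum(1 for put_ok, q_ok in zip(blobstore_oks, on_put_oks)
                 if put_ok and q_ok)
    return healed >= minimum_successful_writes

# All blobstores succeeded, every queue write failed: still a success.
assert multiplexed_put_ok([True, True, True], [False, False, False], 1) is True
```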
Reviewed By: liubov-dmitrieva
Differential Revision: D29985084
fbshipit-source-id: 64338d552be45a70d9b1d16dfbe7d10346ab539c
Summary:
At the moment we read all bookmarks from the leader database all the time. This is
quite wasteful for repos with a large number of bookmarks. Let's instead use
BookmarksSubscription - it uses the bookmarks update log to read only the bookmarks that
changed.
Reviewed By: krallin
Differential Revision: D29964975
fbshipit-source-id: 1cd8bc61c363e8254f0663139f90fef24b9df93e
Summary:
It's nice to know how much time the tailer spends deriving things, and how long it's
idling. It can hint at how much headroom we have.
Reviewed By: farnz
Differential Revision: D29963128
fbshipit-source-id: 179c140d20f1097e7059a13549e39ae63ffd8198
Summary:
It wasn't really ever used, and it's quite complicated and unnecessary. Let's
just remove it
Reviewed By: krallin
Differential Revision: D29963129
fbshipit-source-id: d31ec788fe31e010dcc8f110431f4e4fbda21778
Summary:
Fix the `debughiddencommit` command for LFS.
There was an issue with the current implementation: the commit was hidden too early.
The backup.backup logic triggers an LFS upload, but because the lfs extension is not enabled, it goes to
remotefilelog.uploadblobs. This function contains the following code:
```
def uploadblobs(repo, nodes):
toupload = []
for ctx in repo.set("%ln - public()", nodes):
for f in ctx.files():
if f not in ctx:
continue
fctx = ctx[f]
toupload.append((fctx.path(), fctx.filenode()))
repo.fileslog.contentstore.upload(toupload)
```
The revset doesn't work for hidden commits.
Reviewed By: markbt
Differential Revision: D29960636
fbshipit-source-id: 8c746c5b0484678e9e988105e980381a0172b38c
Summary:
There is no need to obtain the PrjfsChannel twice, especially if the first one
is unused.
Reviewed By: fanzeyi
Differential Revision: D29977555
fbshipit-source-id: 56428eae84a6abd8689b4f997173e0aa1501aede
Summary:
Just as with D29874802 and D29848377, let's make sure that if the same
sync_changeset request is sent again, we return the same result.
Reviewed By: mojsarn
Differential Revision: D29876414
fbshipit-source-id: 91c3bd38983809da8ce246f44066204df667bb12
Summary:
# Goal of the stack
The goal of this stack is to make the megarepo api safer to use. In particular, we want to achieve:
1) If the same request is executed a few times, it won't corrupt the repo in any way (i.e. it won't create commits that the client didn't intend to create, and it won't move bookmarks to unpredictable places).
2) If a request finished successfully but we failed to send the success to the client, then repeating the same request will finish successfully.
Achieving #1 is necessary because async_requests_worker might execute a few requests at the same time (though this should be rare). Achieving #2 is necessary because if we fail to send a successful response to the client (e.g. because of network issues), we want the client to retry and get the successful response back, so that the client can continue with their next request.
In order to achieve #1, we make all bookmark moves conditional, i.e. we move a bookmark only if its current location is where the client expects it. This helps because even if we have two requests executing at the same time, only one of them will successfully move the bookmark.
However, once we achieve #1 we have a problem with #2 - if a request was successful but we failed to send a successful reply back to the client, then the client will retry the request, and it will fail, because the bookmark is already at the new location (the previous request was successful) but the client expects it at the old location (the client doesn't know that the request was successful). To fix this issue, before executing the request we check whether it was already successful, heuristically, by checking the request parameters and verifying the commit remapping state. This doesn't protect against malicious clients, but it should protect from issue #2 described above.
So the whole stack of diffs is the following:
1) take a method from megarepo api
2) implement a diff that makes bookmark moves conditional
3) Fix the problem #2 by checking if a previous request was successful or not
# This diff
Now that we have target_location in the sync_changeset() method,
let's move the bookmark in sync_changeset conditionally, just as in D29874803 (5afc48a292).
This prevents race conditions when the same sync_changeset
method is executed twice.
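The conditional move is essentially a compare-and-swap on the bookmark location; a toy sketch (not Mononoke's actual bookmark transaction API):

```
import threading

class Bookmarks:
    """Toy bookmark store with a compare-and-swap move, mirroring the
    'move only if the bookmark is where the client expects it' rule."""
    def __init__(self):
        self._lock = threading.Lock()
        self._marks = {}

    def set(self, name, target):
        with self._lock:
            self._marks[name] = target

    def move_if_at(self, name, expected, new_target):
        with self._lock:
            if self._marks.get(name) != expected:
                return False  # somebody else already moved it; refuse
            self._marks[name] = new_target
            return True

books = Bookmarks()
books.set("target", "aaa")
# Two identical requests race; only one conditional move can win.
assert books.move_if_at("target", "aaa", "bbb") is True
assert books.move_if_at("target", "aaa", "bbb") is False
```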
Reviewed By: krallin
Differential Revision: D29876413
fbshipit-source-id: c076e14171c6615fba2cedf4524d442bd25f83ab
Summary:
# Goal of the stack
The goal of this stack is to make the megarepo api safer to use. In particular, we want to achieve:
1) If the same request is executed a few times, it won't corrupt the repo in any way (i.e. it won't create commits that the client didn't intend to create, and it won't move bookmarks to unpredictable places).
2) If a request finished successfully but we failed to send the success to the client, then repeating the same request will finish successfully.
Achieving #1 is necessary because async_requests_worker might execute a few requests at the same time (though this should be rare). Achieving #2 is necessary because if we fail to send a successful response to the client (e.g. because of network issues), we want the client to retry and get the successful response back, so that the client can continue with their next request.
In order to achieve #1, we make all bookmark moves conditional, i.e. we move a bookmark only if its current location is where the client expects it. This helps because even if we have two requests executing at the same time, only one of them will successfully move the bookmark.
However, once we achieve #1 we have a problem with #2 - if a request was successful but we failed to send a successful reply back to the client, then the client will retry the request, and it will fail, because the bookmark is already at the new location (the previous request was successful) but the client expects it at the old location (the client doesn't know that the request was successful). To fix this issue, before executing the request we check whether it was already successful, heuristically, by checking the request parameters and verifying the commit remapping state. This doesn't protect against malicious clients, but it should protect from issue #2 described above.
So the whole stack of diffs is the following:
1) take a method from megarepo api
2) implement a diff that makes bookmark moves conditional
3) Fix the problem #2 by checking if a previous request was successful or not
# This diff
We already have it for change_target_config, and it's useful for preventing races
and inconsistencies. That's especially important given that our async request
worker might run a few identical sync_changeset methods at the same time, and
target_location can help process this situation correctly.
Let's add target_location to sync_changeset; while there, I also updated the
comment for these fields in the other methods. The comment said
```
// This operation will succeed only if the
// `target`'s bookmark is still at the same location
// when this operation tries to advance it
```
This is not always the case - the operation might also succeed if the same operation has been
re-sent twice; see the previous diffs for more explanation and motivation.
Reviewed By: krallin
Differential Revision: D29875242
fbshipit-source-id: c14b2148548abde984c3cb5cc62d04f920240657
Summary:
# Goal of the stack
The goal of this stack is to make the megarepo api safer to use. In particular, we want to achieve:
1) If the same request is executed a few times, it won't corrupt the repo in any way (i.e. it won't create commits that the client didn't intend to create, and it won't move bookmarks to unpredictable places).
2) If a request finished successfully but we failed to send the success to the client, then repeating the same request will finish successfully.
Achieving #1 is necessary because async_requests_worker might execute a few requests at the same time (though this should be rare). Achieving #2 is necessary because if we fail to send a successful response to the client (e.g. because of network issues), we want the client to retry and get the successful response back, so that the client can continue with their next request.
In order to achieve #1, we make all bookmark moves conditional, i.e. we move a bookmark only if its current location is where the client expects it. This helps because even if we have two requests executing at the same time, only one of them will successfully move the bookmark.
However, once we achieve #1 we have a problem with #2 - if a request was successful but we failed to send a successful reply back to the client, then the client will retry the request, and it will fail, because the bookmark is already at the new location (the previous request was successful) but the client expects it at the old location (the client doesn't know that the request was successful). To fix this issue, before executing the request we check whether it was already successful, heuristically, by checking the request parameters and verifying the commit remapping state. This doesn't protect against malicious clients, but it should protect from issue #2 described above.
So the whole stack of diffs is the following:
1) take a method from megarepo api
2) implement a diff that makes bookmark moves conditional
3) Fix the problem #2 by checking if a previous request was successful or not
# This diff
Same as with D29848377 - if the result was already computed and the client retries the
same request, then return it.
Differential Revision: D29874802
fbshipit-source-id: ebc2f709bc8280305473d6333d0725530c131872
Summary:
# Goal of the stack
The goal of this stack is to make the megarepo api safer to use. In particular, we want to achieve:
1) If the same request is executed a few times, it won't corrupt the repo in any way (i.e. it won't create commits that the client didn't intend to create, and it won't move bookmarks to unpredictable places).
2) If a request finished successfully but we failed to send the success to the client, then repeating the same request will finish successfully.
Achieving #1 is necessary because async_requests_worker might execute a few requests at the same time (though this should be rare). Achieving #2 is necessary because if we fail to send a successful response to the client (e.g. because of network issues), we want the client to retry and get the successful response back, so that the client can continue with their next request.
In order to achieve #1, we make all bookmark moves conditional, i.e. we move a bookmark only if its current location is where the client expects it. This helps because even if we have two requests executing at the same time, only one of them will successfully move the bookmark.
However, once we achieve #1 we have a problem with #2 - if a request was successful but we failed to send a successful reply back to the client, then the client will retry the request, and it will fail, because the bookmark is already at the new location (the previous request was successful) but the client expects it at the old location (the client doesn't know that the request was successful). To fix this issue, before executing the request we check whether it was already successful, heuristically, by checking the request parameters and verifying the commit remapping state. This doesn't protect against malicious clients, but it should protect from issue #2 described above.
So the whole stack of diffs is the following:
1) take a method from megarepo api
2) implement a diff that makes bookmark moves conditional
3) Fix the problem #2 by checking if a previous request was successful or not
# This diff
If a previous add_sync_target() call was successful on the Mononoke side, but we
failed to deliver this result to the client (e.g. due to network issues), then the client
would just retry the call. Before this diff that wouldn't work (we would
just fail to create the bookmark because it's already created). This diff fixes
it by checking the commit the bookmark points to and determining whether it looks like
it was created by a previous add_sync_target call. In particular, it checks
that the remapping state file matches the request parameters, and that the config
version is the same.
Differential Revision: D29848377
fbshipit-source-id: 16687d975748929e5eea8dfdbc9e206232ec9ca6
Summary:
When uploading exactly the same file more than once, let's only upload each blob once.
This is already done in the function that also uploads filenodes, but not here.
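The dedup amounts to keying uploads by content id; an illustrative sketch (the `upload` callback and hashing scheme are hypothetical):

```
import hashlib

def upload_unique_blobs(contents, upload):
    """Upload each distinct blob once, keyed by content hash; repeated
    blobs reuse the token from the first upload."""
    uploaded = {}
    tokens = []
    for data in contents:
        cid = hashlib.sha1(data).hexdigest()
        if cid not in uploaded:
            uploaded[cid] = upload(data)  # only first occurrence hits the server
        tokens.append(uploaded[cid])
    return tokens

calls = []
def fake_upload(data):
    calls.append(data)
    return f"token-{len(calls)}"

tokens = upload_unique_blobs([b"same", b"same", b"other"], fake_upload)
assert calls == [b"same", b"other"]  # each distinct blob uploaded once
assert tokens == ["token-1", "token-1", "token-2"]
```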
Reviewed By: liubov-dmitrieva
Differential Revision: D29941483
fbshipit-source-id: ef8509223a11816c1b6f1e7f376d05b96f074340
Summary: Return a non-zero error code if we failed to upload commits or back up an ephemeral commit.
Reviewed By: andll
Differential Revision: D29933694
fbshipit-source-id: dd7fdb020c1d0c5bbd04cb22edb41a33470e0ebd
Summary:
There were 3 places that use the same type of response:
```
Response {
index: usize,
token: UploadToken,
}
```
This diff merges all of them by using a single `UploadTokensResponse`. I'm still using aliases (`use as`) for all of them; if desired, I can rename everywhere to use the actual type `UploadTokensResponse`.
Reviewed By: liubov-dmitrieva
Differential Revision: D29878626
fbshipit-source-id: 92af2d4c40eae42edd0a8594642ef0b816df4feb
Summary:
## High level goal
This stack aims to add a way to upload commits directly in the bonsai format via edenapi, instead of using the hg format and converting on the server.
This is necessary because snapshots will be uploaded in bonsai format directly, as the hg format doesn't support them. So this is a stepping stone towards that; it's first being implemented in the commit cloud upload path, as that code already uses edenapi, and later it will be used by the snapshotting commands.
## This diff
This diff ties everything together from the stack and makes it work end to end, with the following client-side changes:
- Add a config to use the bonsai format when uploading via EdenApi. The config is disabled by default.
- Add a wrapper around the new uploadfileblobs method (from D29799484 (8586ae1077)).
- Get the correct data to call the bonsai changeset upload endpoint created in D29849963 (b6548a10cb):
  - Some fields are String and not bytes
  - Some fields are renamed
  - File size and type can be acquired from the file context; the file content id, which is also required, is obtained from the response of the uploadfileblobs method (behaviour added in D29879617 (9aae11a5ab))
Reviewed By: liubov-dmitrieva
Differential Revision: D29849964
fbshipit-source-id: a039159f927f49bbc45d4e0160ec1d3a01334eca
Summary: This project has custom stubs that contain type errors. We'll need to fix them so the switch to new Pyre backend won't create any regressions.
Reviewed By: dkgi
Differential Revision: D29953374
fbshipit-source-id: f54d25682d6b01eed4867eab6823e29ddb95e754
Summary:
We need to set the infinitepush path too, so that the Mercurial autopull code
won't try to pull from Mononoke.
Reviewed By: genevievehelsel
Differential Revision: D29943227
fbshipit-source-id: a67dbfe97e7ab46dee885d9ec91a4d194dc2bd37
Summary:
High-level goal of this diff:
We have a problem in long_running_request_queue - if a tw job dies in the
middle of processing a request then this request will never be picked up by any
other job, and will never be completed.
The idea of the fix is fairly simple - while a job is executing a request, it
needs to constantly update the inprogress_last_updated_at field with the current
timestamp. If a job dies, other jobs will notice that the timestamp
hasn't been updated for a while and mark the request as "new" again, so that
somebody else can pick it up.
Note that this obviously doesn't prevent all possible race conditions - a worker
might just be too slow and not update the inprogress timestamp in time, but
that race condition we handle on other layers; i.e. our worker guarantees that
every request will be executed at least once, but it doesn't guarantee that it will
be executed exactly once.
Now a few notes about implementation:
1) I intentionally separated the methods for finding abandoned requests and for marking them new again. I did so to make it easier to log which requests were abandoned (logging will come in the next diffs).
2) My original idea (D29821091) had an additional field called execution_uuid, which would be changed each time a new worker claims a request. In the end I decided it's not worth it - while execution_uuid can reduce the likelihood of two workers running at the same time, it doesn't eliminate it completely. So I decided that execution_uuid doesn't really give us much.
3) It's possible that two workers will be executing the same request and updating the same inprogress_last_updated_at field. As I mentioned above, this is expected, and the request implementation needs to handle it gracefully.
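The two separated steps can be sketched over a toy in-memory queue (the threshold and field layout are illustrative, not the real table schema):

```
import time

ABANDONED_AFTER = 30.0  # seconds without a heartbeat (illustrative value)

def find_abandoned(requests, now):
    """Return in-progress requests whose worker stopped heartbeating."""
    return [r for r in requests
            if r["status"] == "inprogress"
            and now - r["inprogress_last_updated_at"] > ABANDONED_AFTER]

def mark_new_again(requests, abandoned):
    """Kept separate from find_abandoned so the abandoned requests can
    be logged before being handed back to the queue."""
    for r in abandoned:
        r["status"] = "new"

now = time.time()
reqs = [
    {"id": 1, "status": "inprogress", "inprogress_last_updated_at": now - 5},
    {"id": 2, "status": "inprogress", "inprogress_last_updated_at": now - 120},
]
dead = find_abandoned(reqs, now)
assert [r["id"] for r in dead] == [2]  # only the stale one is reclaimed
mark_new_again(reqs, dead)
assert reqs[1]["status"] == "new"
```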
Reviewed By: krallin
Differential Revision: D29845826
fbshipit-source-id: 9285805c163b57d22a1936f85783154f6f41df2f
Summary:
Currently they get zeros by default, but having NULL here seems like a nicer
option.
Reviewed By: krallin
Differential Revision: D29846254
fbshipit-source-id: 981d979055eca91594ef81f0d6dc4ba571a2e8be
Summary:
This option lets us declare that a given bookmark (or bookmarks, if they are
specified via a regex) is allowed to move only if it stays an ancestor of a
given bookmark.
Note - this is a SEV followup, and we intend to use it for */stable bookmarks
(e.g. fbcode/stable, fbsource/stable, etc.). They are always intended to be an
ancestor of master.
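The check amounts to an ancestry test before allowing the move; a toy sketch over a parent map (hypothetical names, not the real hook code):

```
def is_ancestor(anc, desc, parents):
    """True if `anc` is reachable from `desc` via parent links."""
    stack, seen = [desc], set()
    while stack:
        node = stack.pop()
        if node == anc:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(parents.get(node, []))
    return False

def allow_move(new_target, master_head, parents):
    """The option's rule: e.g. fbcode/stable may move only while it
    stays an ancestor of master."""
    return is_ancestor(new_target, master_head, parents)

# master line: a -> b -> c, with a side branch a -> x
parents = {"b": ["a"], "c": ["b"], "x": ["a"]}
assert allow_move("b", "c", parents) is True   # b is an ancestor of master
assert allow_move("x", "c", parents) is False  # x is off the master line
```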
Reviewed By: krallin
Differential Revision: D29878144
fbshipit-source-id: a5ce08a09328e6a19af4d233c1a273a5e620b9ce
Summary:
When rebasing during "amend --to", we were mixing up file contents when there was more than a single merged file in a three-way merge. The cause was a lambda within a loop closing over the loop variable.
Also suppress the "merging some/file.c" message when the result of the three-way merge is identical to the "local" version of the file. It is confusing to see unexpected "merging" messages.
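This class of bug is easy to reproduce: Python lambdas capture variables late, so every closure created in a loop sees the final value unless the value is bound at creation time:

```
# The classic late-binding pitfall: every lambda created in the loop
# sees the *final* value of `path`, so each merge would have used the
# contents of the last file.
resolvers = []
for path in ["a.c", "b.c", "c.c"]:
    resolvers.append(lambda: path)            # buggy: late binding
assert [f() for f in resolvers] == ["c.c", "c.c", "c.c"]

# Fix: bind the loop variable at lambda-creation time.
resolvers = []
for path in ["a.c", "b.c", "c.c"]:
    resolvers.append(lambda path=path: path)  # captures the current value
assert [f() for f in resolvers] == ["a.c", "b.c", "c.c"]
```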
Reviewed By: DurhamG
Differential Revision: D29940097
fbshipit-source-id: 5a4c19279c14209268359939fbf91f164c791b2e
Summary:
TSAN complains that pipes_ is read and written from different threads without
proper synchronization. To avoid this, we can simply close the FileDescriptor
without removing it from the pipes map, as this achieves the same result: it
notifies the reader that the endpoint is closed.
Differential Revision: D29924043
fbshipit-source-id: be92630799bb5c78089dbe85f9c2f02f937300bc
Summary:
## High level goal
This stack aims to add a way to upload commits directly in the bonsai format via edenapi, instead of using the hg format and converting on the server.
This is necessary because snapshots will be uploaded in bonsai format directly, as the hg format doesn't support them. So this is a stepping stone towards that; it's first being implemented in the commit cloud upload path, as that code already uses edenapi, and later it will be used by the snapshotting commands.
## This diff
This diff fixes the bonsai changeset upload endpoint by making it resolve the parent changesets from their hgids via a query to blobrepo. The inner map is not enough, as the bottom of the stack always has a parent outside of the stack.
Reviewed By: liubov-dmitrieva
Differential Revision: D29880356
fbshipit-source-id: b6b5428159e8c74f5a910f39dadb98aa10c78542
Summary:
## High level goal
This stack aims to add a way to upload commits directly in the bonsai format via edenapi, instead of using the hg format and converting on the server.
This is necessary because snapshots will be uploaded in bonsai format directly, as the hg format doesn't support them. So this is a stepping stone towards that; it's first being implemented in the commit cloud upload path, as that code already uses edenapi, and later it will be used by the snapshotting commands.
## This diff
Modifies the `uploadfileblobs_py` API method so that it returns the indexed UploadTokens. This will be used in the client code to get the content ids of the uploaded files, which must be specified when storing the commit in bonsai format. They are calculated in Rust when uploading the files, and we reuse the result so they don't need to be calculated again.
It reuses the `UploadHgFilenodeResponse` struct, which has the exact same format it needs. That is a bit ugly, and it's refactored in D29878626.
Reviewed By: liubov-dmitrieva
Differential Revision: D29879617
fbshipit-source-id: 4e7bda1e1160da11c83f43002530fd1aba08d46d
Summary:
## High level goal
This stack aims to add a way to upload commits directly in the bonsai format via edenapi, instead of using the hg format and converting on the server.
This is necessary because snapshots will be uploaded in bonsai format directly, as the hg format doesn't support them. So this is a stepping stone towards that; it's first being implemented in the commit cloud upload path, as that code already uses edenapi, and later it will be used by the snapshotting commands.
## This diff
This diff adds the new endpoint to the API so it can be accessed from Python code; it in turn calls the api implemented in the previous diff.
Reviewed By: liubov-dmitrieva
Differential Revision: D29849962
fbshipit-source-id: 5a2a674aef1edd3b0d95cb2b45b02ef9c20aca48
Summary:
## High level goal
This stack aims to add a way to upload commits directly in the bonsai format via edenapi, instead of using the hg format and converting on the server.
This is necessary because snapshots will be uploaded in bonsai format directly, as the hg format doesn't support them. So this is a stepping stone towards that; it's first being implemented in the commit cloud upload path, as that code already uses edenapi, and later it will be used by the snapshotting commands.
## This diff
This diff adds a trait method to call the endpoint added in D29849963.
It's mostly boilerplate: calling the correct endpoint and converting types from/to wire.
Reviewed By: liubov-dmitrieva
Differential Revision: D29849965
fbshipit-source-id: 4a821e965fe4319fddd8e13b13ed4de5b7f86e93
Summary:
## High level goal
This stack aims to add a way to upload commits directly in the bonsai format via edenapi, instead of using the hg format and converting on the server.
This is necessary because snapshots will be uploaded in bonsai format directly, as the hg format doesn't support them. So this is a stepping stone towards that; it's first being implemented in the commit cloud upload path, as that code already uses edenapi, and later it will be used by the snapshotting commands.
## This diff
This diff creates an endpoint on edenapi which uploads a commit using the bonsai format.
It also adds all the necessary types to represent a bonsai commit (basically the same as an hg commit, but with no manifests and a bit more detail on how each file changed) over the wire, plus related boilerplate.
Reviewed By: liubov-dmitrieva
Differential Revision: D29849963
fbshipit-source-id: 2ff44d53874449ae4373a0135a60ead40c541309
Summary:
## High level goal
This stack aims to add a way to upload commits directly in the bonsai format via edenapi, instead of using the hg format and converting on the server.
This is necessary because snapshots will be uploaded in bonsai format directly, as the hg format doesn't support them. So this is a stepping stone towards that; it's first being implemented in the commit cloud upload path, as that code already uses edenapi, and later it will be used by the snapshotting commands.
## This diff
Uploads file contents without uploading filenodes. Will be used for uploading a
commit in bonsai format instead of mercurial format via EdenAPI.
This is basically the same method as before D29549091 (d327996144).
Reviewed By: liubov-dmitrieva
Differential Revision: D29799484
fbshipit-source-id: 136c058ebcd814f39c5b903f5d8bfef7ff6005dc
Summary: It makes it easier to understand what went wrong
Reviewed By: krallin
Differential Revision: D29894836
fbshipit-source-id: 1bc759067350b823d388fcab9a8cee41da4423af
Summary:
If for some reason EdenFS cannot be started, we shouldn't attempt to run the
fsck tests as these would always fail.
Reviewed By: genevievehelsel
Differential Revision: D29918436
fbshipit-source-id: 6e4a01a747157427e5c1028084e32cef8066c96a
Summary: This affects all platforms but is more noticeable on Mac, where tons of 100% lines get printed (e.g. P409794954), probably due to some weirdness with the cursor.
Reviewed By: fanzeyi
Differential Revision: D29922276
fbshipit-source-id: 987f6b9ef5a8a4ab738aa6edbd617184bbcb2d1c
Summary: As title. `RequestContext` allows us to track metrics such as latency and count.
Reviewed By: genevievehelsel
Differential Revision: D29835813
fbshipit-source-id: 6b85fc8f11923f530fce6d871fa2253db21bfa98
Summary:
Previously the missing vertex cache was ignored by vertex_id_batch.
Respecting it can help reduce remote lookups.
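The idea can be sketched in Python (the function and argument names here are illustrative, not the actual dag crate API):

```python
def vertex_id_batch(names, local_ids, missing_cache, remote_lookup):
    """Sketch: consult a negative ("missing") cache before hitting the server."""
    results, to_fetch = {}, []
    for name in names:
        if name in local_ids:
            results[name] = local_ids[name]
        elif name in missing_cache:
            results[name] = None  # known missing: skip the remote round trip
        else:
            to_fetch.append(name)
    # only names not covered by either cache go to the remote service
    for name, vid in remote_lookup(to_fetch).items():
        if vid is None:
            missing_cache.add(name)  # remember misses for next time
        results[name] = vid
    return results
```

A repeated lookup of a known-missing name then performs no remote call at all.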
Reviewed By: andll
Differential Revision: D29889457
fbshipit-source-id: 0469b1e61c42ad31e0dd486ab7c752bf4aeeba5c
Summary:
This will help remove some unnecessary cache invalidations, and help avoid
remote lookups.
Reviewed By: andll
Differential Revision: D29889458
fbshipit-source-id: e9a36b227c3b2c7f6b9830a8b27f5a16e363c94e
Summary:
This will be used to detect if the NameDag was changed between reloads,
and decide whether we need to invalidate caches or not.
Reviewed By: andll
Differential Revision: D29888938
fbshipit-source-id: 377879bd8d28c92feca80c025613a65139ccb866
Summary:
The version gets bumped on writing to disk.
This makes it easier for callsites to detect whether there are changes to the
MultiLog. It will be used by the upcoming changes.
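The change-detection pattern can be sketched like this (a toy model; the real MultiLog lives in the Rust indexedlog crate, and the names below are illustrative):

```python
class MultiLog:
    """Sketch of a version counter that is bumped on every write to disk."""
    def __init__(self):
        self.entries = []
        self.version = 0

    def write(self, entry):
        self.entries.append(entry)
        self.version += 1  # bumped on write, so callers can detect changes


def reload_if_changed(log, last_seen_version, reload):
    # callsites compare versions instead of diffing the log contents
    if log.version != last_seen_version:
        reload()
    return log.version
```

Comparing a cheap integer avoids re-reading the log when nothing changed.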
Reviewed By: andll
Differential Revision: D29888939
fbshipit-source-id: 278887cd59c85e49f606334529a27557a4bc1dc5
Summary:
It turns out that the namedag was opened multiple times. Add a fail point to
help figure out the callsite.
The `fail` crate allows something like:
FAILPOINTS="dag-namedag-open=1*sleep(1)->return"
FAILPOINTS="dag-namedag-open=1*sleep(1)->panic"
Meaning that the first spec makes the open cause a 1ms sleep and then an error
(which turns into a Python backtrace), and the second a panic (which turns into a Rust
backtrace with RUST_BACKTRACE=1).
Reviewed By: andll
Differential Revision: D29888937
fbshipit-source-id: b1644d7196f68262523ab9a5fc4fb110a4cc0062
Summary:
Previously it checked whether the new hash exists remotely, which made offline
commits impossible.
`tip` is not that important. Just do a local check instead.
Reviewed By: andll
Differential Revision: D29834904
fbshipit-source-id: 94924591a5827942f428b74231b4494999856361
Summary: Show that lazy changelog makes it impossible to commit or amend offline.
Reviewed By: andll
Differential Revision: D29834907
fbshipit-source-id: a268be05947cbf215cff1471a25dba72447bafec
Summary:
Similar to D29440143 (38f3ceafbc), add a way to disable resolving names by setting
a limit using `EDENSCM_REMOTE_NAME_THRESHOLD`.
This is useful to figure out callsites that try to resolve names that
were previously unknown, e.g. newly generated commit hashes.
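The threshold check can be sketched as follows (a hypothetical helper; the real logic is in Rust, and only the env variable name comes from this diff):

```python
import os

def check_remote_name_threshold(resolved_so_far, env=os.environ):
    """Sketch: abort remote name resolution once a configured limit is hit."""
    threshold = env.get("EDENSCM_REMOTE_NAME_THRESHOLD")
    if threshold is not None and resolved_so_far >= int(threshold):
        raise RuntimeError("remote name resolution threshold exceeded")
```

With the variable unset nothing changes; setting it to `0` effectively disables remote name resolution, surfacing the offending callsite as an error.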
Reviewed By: andll
Differential Revision: D29834906
fbshipit-source-id: 9b6161bd62a026fa5a37e1cda9912bcb8bca6971
Summary:
The patches to these crates have been upstreamed.
allow-large-files
Reviewed By: jsgf
Differential Revision: D29891894
fbshipit-source-id: a9f2ee0744752b689992b770fc66b6e66b3eda2b
Summary:
On Windows, there is a commonly occurring issue where a checkout operation would
crash EdenFS because a conflict is being added for an unlinked inode, thus
triggering the XCHECK in the addConflict method.
From looking at the code, the comment claiming that inodes cannot be
unlinked during checkout isn't entirely accurate: EdenFS will unlink inodes
during checkout when their content has changed. The code itself should properly
remove the unlinked inode from its parent TreeInode, and thus I haven't fully
figured out the exact series of events that leads to a conflict being added for
an unlinked inode. Since the assumption from the comment is invalid, it should
be safe to not assert that the inode isn't unlinked, and to use
InodeBase::getUnsafePath instead of InodeBase::getPath.
Reviewed By: kmancini
Differential Revision: D29241901
fbshipit-source-id: 4239df576b3cbf716fb336fd4d6542939337a297
Summary:
In some cases, the code needs to have access to the path for an inode even if
that inode is unlinked. In such situation, neither getPath nor getLogPath are
suitable, thus let's introduce a getUnsafePath, which is intended for these
handful of places.
The only known use case for such method is when adding conflicts during checkouts.
Reviewed By: genevievehelsel
Differential Revision: D29241902
fbshipit-source-id: 7756a95813d6fd5e471538cf82d29604dd5b8e5e
Summary:
Implement batch derivation of blame V2.
Blame derivations are independent so long as the two commits do not change or
delete any of the same files. We can re-use the existing batching code so long
as we change it to split the stacks on *any* change (not just a
change-vs-delete conflict).
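The splitting rule can be sketched in Python (a simplified model: commits are dicts with a `files` list, whereas the real batching code operates on bonsai changesets):

```python
def split_stack_on_file_overlap(commits):
    """Split a linear stack into batches where no two commits in a batch
    touch the same file, so their blames can be derived independently."""
    batches, current, touched = [], [], set()
    for commit in commits:
        files = set(commit["files"])
        if files & touched:  # any overlap forces a new batch
            batches.append(current)
            current, touched = [], set()
        current.append(commit)
        touched |= files
    if current:
        batches.append(current)
    return batches
```

Note the split happens on *any* shared file, not just a change-vs-delete conflict, matching the relaxed rule described above.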
Reviewed By: StanislavGlebik
Differential Revision: D29776514
fbshipit-source-id: b06289467c9ec502170c2f851b07569214b6ff0a
Summary:
I noticed that reading one of the mononoke configs was failing with
```
invalid type: string \"YnrbN4fJXYGlR1EzoxLRvVbibyUiRM/HZThRJnKBThA\", expected
a sequence at line 2587 column 61)\x18ninvalid type: string
\"YnrbN4fJXYGlR1EzoxLRvVbibyUiRM/HZThRJnKBThA\", expected a sequence at line
2587 column 61
```
The problem is coming from the fact that configerator configs use thrift simple
json encoding, which is different from normal json encoding. At the very least
the difference is in how binary fields are encoded - thrift simple json
encoding uses base64 to encode them. [1]
Because of this encoding difference reading the configs with binary fields in
them fails.
This diff fixes it by using simple_json deserialization for
get_config_handle(). However, the existing callers used the old broken
`get_config_handle()`, which is incompatible with the new one: the old
`get_config_handle()` relied on serde::Deserializer being usable to
deserialize the config, while thrift simple json doesn't implement
serde::Deserializer.
As a first step, I migrated the existing callers to an old deprecated method;
we can migrate them to the new one as needed.
[1] It was a bit hard to figure out for sure what kind of encoding is used, but
discussion in
https://fb.workplace.com/groups/configerator.users/posts/3062233117342191
suggests that it's thrift simple json encoding after all
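The encoding difference can be illustrated in Python (a toy model of the mismatch, not the actual thrift codec):

```python
import base64
import json

secret = b"\x00\x01\xfe"  # a binary field in the config

# Thrift SimpleJSON encodes binary fields as base64 strings:
simple_json = json.dumps({"secret": base64.b64encode(secret).decode("ascii")})

# A plain-JSON deserializer expecting a byte sequence sees a string instead,
# which is exactly the "invalid type: string ..., expected a sequence" error:
parsed = json.loads(simple_json)
assert isinstance(parsed["secret"], str)

# Only a SimpleJSON-aware decoder recovers the original bytes:
assert base64.b64decode(parsed["secret"]) == secret
```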
Reviewed By: farnz
Differential Revision: D29815932
fbshipit-source-id: 6a823d0e01abe641e0e924a1b2a4dc174687c0b4
Summary:
Do a similar change to change_target_config as we've done for add_sync_target
in D29848378. Move the bookmark only if it points to an expected commit. That
makes it safer to deal with cases where the same change_target_config
was executed twice.
Reviewed By: mojsarn
Differential Revision: D29874803
fbshipit-source-id: d21a3029ee58e2a8acc41e37284d0dd03d2803a3
Summary:
This is the first diff that tries to make megarepo asynchronous methods
idempotent - replaying the same request twice shouldn't cause corruption on the
server. At the moment this is not the case - if we have a runaway
add_sync_target call, then in the end it moves a bookmark to a random place,
even if there was another same successful add_sync_target call and a few others on
top.
add_sync_target should create a new bookmark, and if a bookmark already exists
it's better to not move it to a random place.
This diff does that. However, it creates another problem: if a request was successful on the Mononoke side but we failed to deliver the successful result to the client (e.g. network issues), then retrying the request would fail because the bookmark already exists. This problem will be addressed in the next diff.
Reviewed By: mojsarn
Differential Revision: D29848378
fbshipit-source-id: 8a58e35c26b989a7cbd4d4ac4cbae1691f6e9246
Summary: As discussed, extends Mononoke service to support commits w/o parents for the AI Infra usecase.
Reviewed By: markbt
Differential Revision: D29810303
fbshipit-source-id: f07fd7f1521ffe1cea85f1f54e71fe37fc39bb62
Summary: Ensure checking is covered by the pyre config. Unblocks deprecation of pyre-targets-based checking.
Reviewed By: pradeep90
Differential Revision: D29878832
fbshipit-source-id: 1bbeca3b61ae5b0362b768bbbe53057a1d72ee7f
Summary:
This adds debug commands for ActivityRecorder:
```
eden debug start_recording --output-dir <DIR>
* stdout: the id of the profile
eden debug stop_recording --unique <ID>
* stdout: the output file path
```
Users can record multiple profiles concurrently. Each profile is identified by the timestamp when it started.
Reviewed By: genevievehelsel
Differential Revision: D29666359
fbshipit-source-id: 487ca67de77378a8141bc4ac46b9abd1375ffd23
Summary:
We want to introduce two debug commands to record perf profiles such as files read. This can later be integrated into CI so that we have this data for troubleshooting perf issues.
* `eden debug start_recording` starts recording perf metrics such as files read/written and fetch counts/latency for a given mount.
* `eden debug end_recording` stops recording and dumps the recorded profile to a local file.
This diff adds the boilerplate `ActivityRecorder` (borrowed heavily from `HiveLogger`'s implementation). The start command would create an instance of the recorder; the end command would destroy the recorder. The recording and dumping are handled by the implementing class.
Reviewed By: genevievehelsel
Differential Revision: D29506895
fbshipit-source-id: a927a363942a041d5ae54186a265576325dfeed5
Summary: These are needed for Mercurial in the test cases; we set this in the testharness C++ repo as well.
Differential Revision: D29868460
fbshipit-source-id: e11cf41823ee073e3863fb5a38ecbf1146073ff5
Summary: This test reads files one by one and measures latency of reads
Reviewed By: kulshrax
Differential Revision: D29824745
fbshipit-source-id: abf73d4c279c184c2e76f2052304ea13c40e86b4
Summary:
Clean Up: remove old snapshots extension.
This extension is unused and its name clashes with the new Snapshots project which offers a different implementation.
As discussed with markbt and yancouto, we should unfortunately delete this code.
Reviewed By: markbt
Differential Revision: D29875074
fbshipit-source-id: 2bcd835e58fc50c5aad94a184a4d2ecb6be79c9c
Summary:
Add support for issuing CAT commit cloud tokens on corp Macs.
This is needed to support enabling Scm Daemon for all Mac users **automatically**.
The tool `corp_clicat` is probably not yet deployed everywhere but it will work once deployed.
The current manual way will remain working as well.
The current way is manual:
* obtaining OAuth token from https://our.intern.facebook.com/intern/oauth/184975892288525
* running `hg cloud auth -t <token>` (the command stores the token into Mac keychain, Scm Daemon fetches from it).
The problem with the manual way is that users are not aware of the Scm Daemon and the features it provides. The manual way worked fine when the tokens were also used by the hg commit cloud extension, but now they are not.
Reviewed By: markbt
Differential Revision: D29791737
fbshipit-source-id: 2391a196167e1f17d53a3231dbf58f7cb2bcd39a
Summary: This utility will be used to measure latency and throughput of EdenFS
Reviewed By: kulshrax
Differential Revision: D29824747
fbshipit-source-id: f5298125bdaa16ccd52cb00a6bc3cd544c0967b7
Summary: It's nice to be able to keep track of what's going on
Reviewed By: mwdevine
Differential Revision: D29790543
fbshipit-source-id: b855d72efe8826a99b3a6a562722e299e9cbfece
Summary:
Added an optional argument to `/upload/file`, that allows specifying a bubble id, which will be used to upload the file into the ephemeral blobstore instead of the main one.
This is necessary in order to create a snapshot, as all files must be in the ephemeral blobstore.
Reviewed By: liubov-dmitrieva
Differential Revision: D29734333
fbshipit-source-id: c1dcf8d5a78819925f8defbfbd7d06b0f6a9e973
Summary: Insert IDs are always positive, so let's use `NonZeroU64` instead of `u64`. This is more restrictive, which is good, but it also has the added benefit that `Option<NonZeroU64>` doesn't use any additional space, thanks to the compiler's niche optimization.
Reviewed By: StanislavGlebik
Differential Revision: D29733877
fbshipit-source-id: 8a0e1a1bd84bcedbba51840f1da8f8cac79bca42
Summary: Ephemeral handle is a blobstore that's built from a bubble and a "main blobstore", which first attempts to read from the ephemeral blobstore, but falls back to the main one. Will be used to read/write stuff in snapshots.
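The fallback behavior can be sketched in Python, using dicts as stand-ins for blobstores (illustrative only; the real type wraps `dyn Blobstore`s in Rust):

```python
class EphemeralHandle:
    """Sketch: a blobstore pair where reads try the ephemeral bubble first,
    falling back to the main store, and writes land in the bubble only."""
    def __init__(self, bubble, main):
        self.bubble = bubble
        self.main = main

    def get(self, key):
        # read-through: bubble first, then the main blobstore
        if key in self.bubble:
            return self.bubble[key]
        return self.main.get(key)

    def put(self, key, value):
        # ephemeral writes never touch the main blobstore
        self.bubble[key] = value
```

Once the bubble expires, only the main blobstore's contents remain, which is the point of an ephemeral store.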
Reviewed By: liubov-dmitrieva
Differential Revision: D29733408
fbshipit-source-id: f15ae9d3009632cd71fafa88eac09986e0b958e7
Summary:
Move the EdenApi uploads code from the commit cloud extension to core
so it can later be used for pushes as well. The code is not commit cloud specific.
The function takes revs and returns uploaded and failed lists, which are also revs.
Reviewed By: yancouto
Differential Revision: D29846299
fbshipit-source-id: e3a7fbc56f0b651c738dc06da7fdb7cde4feedf7
Summary: This test is overly reliant on exact logging output, and the output has changed. Update the test for the new output, and make it a bit more lenient in the process.
Reviewed By: StanislavGlebik
Differential Revision: D29787827
fbshipit-source-id: 3e8aa77d2edcf3d0ca95c0d17d0b4e3845b78ae3
Summary:
Improve integration tests coverage for `hg cloud upload` and `hg cloud sync` with enabled upload.
This includes end2end tests for uploading mutation information, pulling commits from another repo,
and generally how uploads behave after a rebase, after file moves, and after editing a commit message, as well as how copy_from data is preserved.
Reviewed By: markbt
Differential Revision: D29816436
fbshipit-source-id: 2aa421c8479683721984e13d537c34df8b1ca2d1
Summary: update test certificates for another 10 years rather than the default 1 year
Reviewed By: markbt
Differential Revision: D29846930
fbshipit-source-id: 98bc139c21e4d9e4cb5bab46485d849345bcc43d
Summary:
Move commitcloud to use the TLS auth settings already provided for the default path.
I think it is the right approach to clean up the separate configuration, given that certs for the default path are now supported on all platforms and we have fully migrated to http/TLS.
Tested the token mode as well; it remains fully supported for now.
This change will also allow On Demand to remove its custom commit cloud configuration.
Reviewed By: mzr
Differential Revision: D29764745
fbshipit-source-id: bc5681a919dfa6ec79ea3b832c9f4b98551278de
Summary:
It's useful to be able to overwrite or inject HTTP headers without changing the client's source code. I needed that when I was proxying client requests through ncat running locally on my machine - I had to set the Host header.
Moreover, if it's part of the config, we'll be able to distribute it quickly with dynamicconfig to do some things.
Also, we have http.verbose, which is used by the Rust HTTP client but not respected by mononokepeer in Python. I'm adding that here as well. It's not a full-blown verbose mode because that's not necessary - we're only upgrading to websocket.
Reviewed By: farnz
Differential Revision: D29792386
fbshipit-source-id: 51b7f3cf07a870636ac6aa126f0efb45e979ef30
Summary:
Add methods to easily determine whether a tree exists, or whether anything
(either a file or a tree) exists at a particular path.
Reviewed By: StanislavGlebik
Differential Revision: D29815982
fbshipit-source-id: f3fb1919545bdcb46ed663a0a514338dc137abee
Summary: This adds counters for memory and disk in addition to the import count so that we can understand cache hit rates during local investigation or output them in ActivityRecorder.
Reviewed By: genevievehelsel
Differential Revision: D29805637
fbshipit-source-id: 34261f91c33d6bd4bcb4b85b17d2e68360410896
Summary: Combine `eden debug log` and `eden logs`. The logic from `eden logs` is moved to `eden debug log upload`.
Reviewed By: genevievehelsel
Differential Revision: D29801785
fbshipit-source-id: 6283a33a3180fec6934ac52fc8d5eed4a0a483b0
Summary:
This is not used. Even though this method has the "right intention" (i.e. we
need to start marking long running requests as new), I'm not sure we can use it
as is. So let's just delete it for now.
Reviewed By: farnz
Differential Revision: D29817068
fbshipit-source-id: 84d392fea01dfb5fb7bc56f0072baf2cf70b39f4
Summary:
Support the --date option in commit cloud history.
This could significantly simplify lookups for users if they know the date range they are interested in.
The date logic has been implemented to match the current `hg cloud sl --date` logic
(the first timestamp >= date).
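The matching rule can be sketched as (a hypothetical helper, not the hg implementation):

```python
def first_at_or_after(timestamps, date):
    """Sketch of the `--date` rule: the first timestamp that is >= date,
    or None when everything predates the requested date."""
    return next((ts for ts in sorted(timestamps) if ts >= date), None)
```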
Reviewed By: markbt
Differential Revision: D29818966
fbshipit-source-id: eee5e8e7709d4ead1e9df35300177fc9abb63028
Summary: The new UI has proven to be great, so I think there is no value in preserving the old one in our codebase.
Reviewed By: markbt
Differential Revision: D29817354
fbshipit-source-id: 152c7eaeb8c669fe71322c822f4e7f39c6f9062c
Summary: Currently we only print what's in the `error` annotation.
Reviewed By: krallin
Differential Revision: D29794843
fbshipit-source-id: a2c411208d7be8fd856dd9b3f82fd96a4ed37aee
Summary:
Bugfix: fix an uninitialized state variable and add a test.
In rare cases it is used further down in the code.
Reviewed By: StanislavGlebik
Differential Revision: D29815203
fbshipit-source-id: e117df5575f025787d94f0a8ed4a171408e361d0
Summary:
The gitignore matcher assumes everything is a directory. This causes problems
if a directory pattern matches a file that we don't want to be ignored.
Add a test demonstrating the issue.
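A toy model of the problem (matching is simplified with `fnmatch`; the real matcher lives in Rust):

```python
import fnmatch

def naive_ignored(pattern, path):
    # buggy behavior: treats every path as if it could be a directory
    return fnmatch.fnmatch(path, pattern.rstrip("/"))

def fixed_ignored(pattern, path, is_dir):
    # a pattern ending in "/" should only ever ignore directories
    if pattern.endswith("/"):
        return is_dir and fnmatch.fnmatch(path, pattern.rstrip("/"))
    return fnmatch.fnmatch(path, pattern)
```

With the naive matcher, a pattern like `build/` wrongly ignores a regular file named `build`.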
Reviewed By: quark-zju
Differential Revision: D29788782
fbshipit-source-id: 4cd41c7c0985a8729443d6c0507ba98fa212049e
Summary:
commitcloud: remove dead code.
This bit of code is unused; it was left over from the obsmarkers code.
Reviewed By: quark-zju
Differential Revision: D29762009
fbshipit-source-id: 5f70ebe8a6b0b0261bba2d96c94eabac93f2fe4c
Summary:
We seem to be reloading it every minute, even though we are supposed to reload
only when it's changed. That's probably not a huge deal, but we just get a
spammy stderr message. Let's remove it.
Reviewed By: yancouto
Differential Revision: D29789760
fbshipit-source-id: 65a39cca67636ae71befb963c78b6473b5b9f3fc
Summary:
mysql tests were failing because of invalid config with
```
+ E0719 14:48:27.582197 1846476 [main] eden/mononoke/cmdlib/src/helpers.rs:318] Execution error: unknown keys in config parsing: `{"blobstore.ephemeral_blobstore.?.metadata.?.filenodes", "blobstore.ephemeral_blobstore.?.metadata.?.mutation", "blobstore.ephemeral_blobstore.?.metadata.?.primary"}`
```
See example - https://www.internalfb.com/intern/testinfra/diagnostics/6473924511735259.562949979040542.1626706163/
This diff fixes it
Reviewed By: akushner
Differential Revision: D29812804
fbshipit-source-id: c71f7f38103194137523ca947e4b23819da37c35
Summary: This change lets the Eden CLI run `clone` and `info` on an RE Digest backed store.
Reviewed By: chadaustin
Differential Revision: D28855458
fbshipit-source-id: 5582992acc5b3b3acb05b0b53d59a6768cc02491
Summary:
For norepo use-cases the config remains unset because dynamicconfig (still)
requires a repo. This means lazy changelog clone won't work. To workaround
it, we set `http.convert-cert` to `true` directly on Windows.
Reviewed By: andll
Differential Revision: D29804599
fbshipit-source-id: c0fc819711ec8d4f9f77cbce7b9d08b193ee9e6d
Summary: streampager has been upgraded to 0.10. 0.9 no longer exists.
Reviewed By: andll
Differential Revision: D29794320
fbshipit-source-id: 415d4bc8649236a49dd41bb922a636096ab67be7
Summary: Currently the tests manually run the fsck command (Python) on the snapshot. This change makes the tests cover not only the manual fsck command but also the auto fsck (C++), so that we cover both the Eden CLI fsck (Python) and the Eden mount auto fsck (C++).
Reviewed By: chadaustin
Differential Revision: D29690188
fbshipit-source-id: 593db1db587c3294aad1314ea8da1d8e778df8ee
Summary:
Rename to avoid confusion. The function filters errors from the underlying stream.
The first error and number of errors are logged to scuba but the errors are not passed to the client.
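The filtering behavior can be sketched as a generator (illustrative Python; the real code operates on a Rust stream and logs to scuba):

```python
def filter_stream_errors(items, log):
    """Sketch: drop error items from a result stream, logging the first error
    and the total error count instead of forwarding them to the client."""
    first_error, error_count = None, 0
    for item in items:
        if isinstance(item, Exception):
            error_count += 1
            if first_error is None:
                first_error = item
        else:
            yield item  # only successful items reach the client
    if error_count:
        log(f"{error_count} error(s); first: {first_error}")
```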
Reviewed By: kulshrax
Differential Revision: D29734930
fbshipit-source-id: 503adaa9e618d931a354011ef83c3ab22eb3b9bf
Summary: We already have AccessType for FUSE, this adds the same categorization for NFS. This allows us to easily filter events in trace stream and ActivityRecorder.
Reviewed By: chadaustin
Differential Revision: D29771074
fbshipit-source-id: a437f0693f9062fb2df3b6f618a9d8860a05df12
Summary:
Using all the preparations added in the stack, this diff adds the `/:repo/ephemeral/prepare` endpoint to eden api.
It simply creates an ephemeral bubble and returns its id via the call.
Reviewed By: markbt
Differential Revision: D29698714
fbshipit-source-id: 5bc289cad97657db850b151849784e50a17a9da6
Summary: This allows ephemeral blobstore to be used in places that have a Repo context, like in the eden api, which will be used on the next diff to implement a new endpoint on eden api to create a bubble.
Reviewed By: markbt
Differential Revision: D29697657
fbshipit-source-id: b7e83c5c7c5e77243f0dba29c024d9f66ca4b2f9
Summary:
Config for the ephemeral blobstore and some code for creating ephemeral blobstores were already added; this diff ties them together by making the ephemeral blobstore be built using the default config in RepoFactory, so it can be used as a Repo attribute easily in other places.
I was able to do this easily because I stopped using `BackingBlobstore` and started simply using `dyn Blobstore` in the ephemeral blobstore. Using BackingBlobstore would require some significant changes, because:
1. Building of blobstores is not ergonomic, it is quite hard and requires a bunch of manual code to be able to build some subtrait of Blobstore.
2. A lot of the blobstores "wrappers" do not implement things like BlobstoreKeySource, which would need to be implemented individually (example: D29678881 (817948ca75) would be just the start).
Reviewed By: markbt
Differential Revision: D29677545
fbshipit-source-id: 0f5cffe6bdfece1aaa74339ef40376d1ff27e6c2
Summary:
Use the class added in the previous diff in the segmented changelog periodic reloader as well.
To do this, I needed to make some changes to the reloader:
- Add an auto implementation of the `Loader` trait for functions
- Add a tokio Notify, as that was used in tests in segmented changelog
Reviewed By: markbt
Differential Revision: D29524220
fbshipit-source-id: 957f21db91f410fcdabb0d1c16d5c4f615892ab6
Summary: If we ever want to start using things like BlobstoreKeySource more extensively, we'll need to implement it for a lot of blobstores. This starts that, though it's not used for now.
Reviewed By: ahornby
Differential Revision: D29678881
fbshipit-source-id: 918a169b8b934c6f5e1eefaba7d11dc220eb7c59
Summary: This is needed to disable sync-queue lookups and second blobstores lookups later on.
Reviewed By: StanislavGlebik
Differential Revision: D29663435
fbshipit-source-id: abb5109de6063158a7ff0a116a5c1d336bfdb43f
Summary: This just helps to understand where we definitely have to fail in case of "ProbablyNotPresent" and work on those in the future.
Reviewed By: StanislavGlebik
Differential Revision: D29663436
fbshipit-source-id: c8428115f3c9637114e3964c948123d473207d53
Summary:
The memcache client has a 2+s setup time, so let's avoid it for short
commands.
Reviewed By: quark-zju
Differential Revision: D29627225
fbshipit-source-id: c755fbaadd35e423b6dafe772ffbed82fe41abce
Summary:
Previously we could add a MemcacheStore to the store hierarchy in two
ways: directly as MemcacheStore and through the MemcacheHgIdData/HistoryStore
wrappers.
In a future diff we'll want to track how long since the store was created and
not call the inner MemcacheStore in certain situations. To do this, we want all
accesses to go through the MemcacheHgId*Store abstraction.
Reviewed By: quark-zju
Differential Revision: D29627226
fbshipit-source-id: 9e979281fbb2eec123577d99a8879bcf80578136
Summary:
The memcache client takes 2 seconds to set up, so let's not use it for
short commands.
Reviewed By: quark-zju
Differential Revision: D29627229
fbshipit-source-id: bbefcd362e215a3b0f8a0f07c39b7ef00c71379e
Summary: We want to support different batch sizes for blob or tree. This diff moves the batch size read logic into `HgImportRequestQueue`, adds a new config `import-batch-size-tree` (added in D29714772), and updates tests accordingly.
Reviewed By: kmancini
Differential Revision: D29703450
fbshipit-source-id: b85666838a0a8c1857b9d1de4f6c47128063633a
Summary:
My previous attempt at fixing it chose the wrong hash. Let's just
delete these lines entirely.
Reviewed By: singhsrb
Differential Revision: D29742490
fbshipit-source-id: 72a174e0e2d59aec4de35d7eb3fcc43939be8ea1
Summary:
Segmented changelog is initialized in every BlobRepo, and that's quite annoying
- there's a lot of spam going to stderr in jobs like the hg sync job, which don't use
segmented changelog at all.
At the same time, segmented changelog is only used in the mononoke api, so we can
just initialize it in InnerRepo and remove it from BlobRepo
completely.
Reviewed By: markbt
Differential Revision: D29735623
fbshipit-source-id: 9137c9266169b7ef16b1c6c0b80cae896214203b
Summary:
Undo revlog corruption should not affect other commands.
There were multiple user reports. This should not need a reclone.
Reviewed By: markbt
Differential Revision: D29723869
fbshipit-source-id: 8f48f45c1e25478ac9ff713fbcf69eaa08f464a8
Summary: This can be used on Windows since it uses `shutil.rmtree` instead of `rm -rf`.
Differential Revision: D29723916
fbshipit-source-id: 7d12ce8d265661698c1f5ecd17271d1c2e950a55
Summary:
Add an option for using EdenApi uploads for the `cloud sync` command.
This option could be used for early testing of the new protocol within the team.
The only difference is that the upload function doesn't use the local backup state to filter initially.
TBD if we need it.
Reviewed By: quark-zju
Differential Revision: D29696175
fbshipit-source-id: 2261930f1f01bf2957b418cc24f31ef61d536e77
Summary:
These tests are broken and they are always skipped.
They are related to the old pullbackup/restorebackup.
Let's remove them.
Reviewed By: quark-zju
Differential Revision: D29695847
fbshipit-source-id: 9fc6babe70710ea1ba5f143a2203997945f1ccff
Summary:
Remove readheadsfromallpaths from the commit cloud backup state.
This was used for migration and can be removed now.
Reviewed By: quark-zju
Differential Revision: D29711512
fbshipit-source-id: 567071be255b12eba35f5a5d4b84a37ee4944ad8
Summary:
My recent treemanifest test hash update didn't update this Windows case
correctly, which broke hgbuild.
Reviewed By: akushner
Differential Revision: D29727165
fbshipit-source-id: 7310c49929411816b1929c52d4be3a74e8177c45
Summary:
The previous diff only added these metrics to the scmstore path. Let's
add it to the legacy path as well, so we can start getting some insights while
scmstore rolls out.
Reviewed By: quark-zju
Differential Revision: D29601063
fbshipit-source-id: cb1935d02bc0b758c63abdf5d59f7a00d05ff4eb
Summary:
EspanId and tracing::span::Id had From/Into implementations that allowed them to be treated as
equivalent when they shouldn't be. This allowed for a subtle bug where TracingData::record was
receiving an EspanId whose id was some huge u64 from a tracing::span::Id, when instead that
huge u64 should have been mapped to the real EspanId via the id_map.
Ideally we'd treat EspanId and the tracing Id separately, but the TracingCollector Subscriber
implementation relies on this. Since it controls span creation, it can guarantee that all Ids are safely
convertible to EspanId.
Reviewed By: quark-zju
Differential Revision: D29599901
fbshipit-source-id: 79d24d41d86c6098888b1747cc0b9bc2838493fa
Summary: Add a new helper function 'print_env_variables' that reads the environment variables and prints them at the bottom of the rage report.
Reviewed By: genevievehelsel
Differential Revision: D29713709
fbshipit-source-id: 04e10597c93d11b58420f184048d9b55ad4e5166
Summary:
A recent diff, D29167397 (ccc95d7f5a), refactored treemanifest fetching such that
1) all http fetching happened inside the store, instead of routing up through
the treemanifest extension, and 2) all tree fetches use ondemand fetching
instead of full tree fetching.
The eden python logic broke because it wasn't updated. With the two changes
mentioned above, the eden logic becomes simpler.
Reviewed By: singhsrb
Differential Revision: D29717646
fbshipit-source-id: 9c29ab38b27ada15424d54d8c251a730730b8352
Summary:
This currently works well with ContentStore with an EdenApi backend; I did not test other combinations, so some other tweaks will probably be needed.
Injection is controlled by the MISSING_FILES env variable, which is interpreted as a list of comma-separated repo path prefixes.
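The env variable handling can be sketched as (hypothetical helper names and example paths; only the MISSING_FILES variable itself comes from this diff):

```python
def missing_file_prefixes(env):
    """Sketch: parse MISSING_FILES as comma-separated repo path prefixes."""
    return [p for p in env.get("MISSING_FILES", "").split(",") if p]

def should_inject_missing(path, prefixes):
    # any file whose repo path starts with one of the prefixes is treated
    # as missing by the fault-injection layer
    return any(path.startswith(prefix) for prefix in prefixes)
```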
Reviewed By: DurhamG
Differential Revision: D29640090
fbshipit-source-id: 4925eabd63dc3a28a1133d332a072c0e224ea74b
Summary:
This landed at the same time as my treemanifest flatcompat removal, so
it didn't receive the test update.
Reviewed By: singhsrb
Differential Revision: D29715926
fbshipit-source-id: bdc962da64da71e0cd3d8f834edb914b3266fdec
Summary: This adds some types that will be used by the `/ephemeral/prepare` call in eden api. The call will actually be added on D29698714.
Reviewed By: liubov-dmitrieva
Differential Revision: D29696921
fbshipit-source-id: 9516661c1f41fcf87d37181fbab3eda5b6131179
Summary:
Remove everything about the infinitepush write path.
The infinitepush path was split in two for migration purposes. It is now time to clean up.
Reviewed By: StanislavGlebik
Differential Revision: D29711414
fbshipit-source-id: c61799fe124e2def4254cdd45e550c82c501e514
Summary:
Clean up usage of callback or progress in the new code.
There is no need to use a callback or progress. They were used for legacy progress support. The new progress bar automatically applies to every HTTP request.
Reviewed By: yancouto
Differential Revision: D29710912
fbshipit-source-id: 33889d89680c90e63f4520626a166d0b39b67afb
Summary:
Now that the new `rate_limiting` crate is being used by LFS server we
can remove the throttle limits code and config.
Differential Revision: D29396505
fbshipit-source-id: 19638bd93ad9dea2638e8501837c6c13e4dd48ff
Summary:
This test is broken due to a Rust panic from hg (likely there was some change in hg, and the original snapshot was from 2018). It's hard to know exactly what's causing this issue because 1) it's not from eden, and 2) this has been broken for a while, so it's tricky to bisect.
To bring back the coverage and run on a modern repo, this diff:
* generates a new snapshot (`buck run //eden/integration/snapshot:gen_snapshot -- basic`).
* adds a step to translate hgrc because the path at snapshot generation time is in it.
* migrates this test to the new snapshot
Reviewed By: kmancini
Differential Revision: D29670241
fbshipit-source-id: 1c5dccc673d516334de83582b32e2e3c9dc308f1
Summary:
Integration tests rely on specific debug output. This changed for `String`
in Rust 1.53, so update accordingly.
Reviewed By: yancouto
Differential Revision: D29696713
fbshipit-source-id: 751d72660f1d8772d754ab404192281857b32b2f
Summary: Code cleanup. Remove everything about the infinitepush-other path. This is a legacy of the migration support.
Reviewed By: markbt
Differential Revision: D29677983
fbshipit-source-id: e9972117119d5e6005c3ec0b07809cf9d1fdc4a4
Summary:
Remove the panicking APIs in D29647203 and instead propagate "missing content attribute" errors.
It was difficult to make these changes at the bottom of my diff stack, so I've added them here instead.
Reviewed By: DurhamG
Differential Revision: D29670495
fbshipit-source-id: 952ed4913a413c39ac3dff14a22f56e4766512ff
Summary: Add an option, `scmstore.prefercomputingauxdata`, which enables computing aux data from locally available content instead of requesting aux data from EdenApi when possible.
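As a rough Python sketch of the preference logic (the names `local_contents`, `fetch_remote`, and the aux-data fields here are illustrative, not the real scmstore API):

```python
import hashlib

def get_aux_data(key, local_contents, fetch_remote, prefer_computing=False):
    # Illustrative sketch: with the option enabled, derive aux data from
    # locally available content instead of making a remote EdenApi request.
    if prefer_computing and key in local_contents:
        content = local_contents[key]
        return {"sha256": hashlib.sha256(content).hexdigest(),
                "size": len(content)}
    # fall back to the remote fetch when content isn't available locally
    return fetch_remote(key)
```

With the option off (the default), the remote fetch is still preferred.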
Reviewed By: DurhamG
Differential Revision: D29659777
fbshipit-source-id: e381c8beac359dc1735d76378c602fbf2bb0b668
Summary: Update `FileStore` to fetch aux data from EdenApi. As written, `FileStore` will prefer remotely fetching aux data to computing it from locally available content.
Reviewed By: DurhamG
Differential Revision: D29659721
fbshipit-source-id: 13e33830ed84fbba31e19b00aaf748dcc4f67727
Summary:
Adds a new method, `files_attrs`, to the `EdenApi` trait, which allows the caller to specify per-key attributes.
This method is intended to be temporary, and should later be unified with `files`.
Implement `files_attrs` in `FakeEdenApi`, and implement a placeholder method in EagerRepo.
Reviewed By: DurhamG
Differential Revision: D29635233
fbshipit-source-id: d0773927939527d799918139e4abba5ea3b5efca
Summary: Update `FileStore` to use new `indexedlogauxstore` rather than JSON + `indexedlogdatastore`.
Reviewed By: DurhamG
Differential Revision: D29635152
fbshipit-source-id: 3f73b7f7ee1ebc4aa1a0e804973d98d342cbc6ba
Summary: Add utility methods to ContentHash for use in computing aux data
Reviewed By: DurhamG
Differential Revision: D29659855
fbshipit-source-id: fb5c9749899147ea03dbb9e0e19b492c62bde2dd
Summary: Add a new indexedlog wrapper, `indexedlogauxstore`, for storing file aux data for scmstore.
Reviewed By: DurhamG
Differential Revision: D29597641
fbshipit-source-id: 34a9d9095ee580b4d210c82760691496358e0c6d
Summary:
Implement the `content` attribute.
Introduce a new `FileContent` type which stores the hg file blob and metadata, and modify `FileEntry` to allow constructing `FileEntry` with optional `FileContent` builder-style.
Reviewed By: DurhamG
Differential Revision: D29647203
fbshipit-source-id: b956c294d03dc81affc90d7274b2e430a3556e96
Summary:
Add support for optional file attributes to EdenApi, with `content` there as a placeholder.
Modifies the `FileRequest` type, adding a vec of `FileSpec`, which allows the client to specify desired attributes per-key. The existing `keys` field will be treated as a request for the content attribute and may be used in combination with the new per-key attributes.
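A toy model of the request shape (illustrative Python; the real types are Thrift/serde definitions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FileAttributes:
    content: bool = False
    aux_data: bool = False

@dataclass(frozen=True)
class FileSpec:
    key: str
    attrs: FileAttributes

def build_request(keys, specs):
    # Plain `keys` are treated as content-only requests and combined with
    # the explicit per-key specs.
    return [FileSpec(k, FileAttributes(content=True)) for k in keys] + list(specs)
```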
Reviewed By: DurhamG
Differential Revision: D29634709
fbshipit-source-id: 6571837f87d1635e8529490e10dbe4ba054b7348
Summary: `parse_key` will be re-used in my incoming EdenApi aux data change.
Reviewed By: kulshrax
Differential Revision: D29601071
fbshipit-source-id: 2039c8478e717ff58af2030588dd31ec0b418b19
Summary:
We have a variety of enums in revisionstore that all serve the same purpose. With this change, I've consolidated them to a single type.
Not 100% sure if `StoreType` is the best name; in scmstore I named it `LocalStoreType` to be clearer - open to suggestions. I also decided to keep the variants named `Local` and `Shared` instead of adopting the `Local` and `Cache` terminology I used in scmstore - I'd rather not change that unless we decide to change the terminology everywhere, to avoid confusion.
Reviewed By: kulshrax
Differential Revision: D29598025
fbshipit-source-id: 76d5a02230a8c1e5327cc5d90bbcae70049f251d
Summary:
Introduce a basic implementation of per-backend metrics collection for `FileStore`.
Currently, only indexedlog, lfs, and contentstore backends are instrumented, and only with basic metrics. Additional metrics (size, elapsed time, lfs pointer hits) and additional backends (aux, edenapi, lfs server) will be added in a later change.
Reviewed By: kulshrax
Differential Revision: D29552888
fbshipit-source-id: 54267f4de6f14db046cfae271b5c25a6bb494d7c
Summary: Introduce a new config, `devel.skip-metrics`, intended to be used along with `devel.print-metrics` for filtering which metrics are printed in tests.
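The intended filtering can be sketched as follows (prefix matching is an assumption for illustration; the real config semantics may differ):

```python
def metrics_to_print(metrics, print_prefixes, skip_prefixes):
    # devel.print-metrics selects metrics; devel.skip-metrics filters the
    # selection back down, so a test can pin only the counters it cares about.
    return sorted(m for m in metrics
                  if any(m.startswith(p) for p in print_prefixes)
                  and not any(m.startswith(s) for s in skip_prefixes))
```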
Reviewed By: DurhamG
Differential Revision: D29641812
fbshipit-source-id: 507099c5ad44a95e906d5ee4235a8a7eca64bd28
Summary: The tests no longer require it, so let's remove all the logic for it.
Reviewed By: quark-zju
Differential Revision: D29643267
fbshipit-source-id: 86b44e3b7b4a4eb19d7f89b54b40957a7648573a
Summary:
This was a hack to allow the tests to produce the same hashes as
before. Let's disable this and fix the remaining test failures. A future diff
will remove the feature entirely.
Where possible I changed input hashes to desc() and output hashes to globs so
hopefully future hash changes are a little easier.
Differential Revision: D29567762
fbshipit-source-id: cf5150c112c56b08f583feba80e5a636cc07db0a
Summary: sendtrees is always on now, and server is always False.
Differential Revision: D29567763
fbshipit-source-id: f58aa7d84e97b7d69959e0796014a7ff09eb81e9
Summary:
This was accidentally committed in an earlier diff. It's unused, so
let's delete it.
Reviewed By: krallin
Differential Revision: D29668138
fbshipit-source-id: 105bf466665c447c37c73462e102d8771d0368ee
Summary:
The amend "--to" flag amends the specified commit rather than ".". Previously it made a temporary commit and used histedit to scoot it back. This is not optimal due to unnecessary disk operations and fragile conflict handling.
Instead, "--to" now does its work in memory, checking for conflicts as it goes. If it finds any conflicts, it aborts the operation. It works by generating a patch based on the working context and applying it to the specified commit. Then it does a mini-rebase of the stack tail onto the amended commit.
I tweaked patch.py to unlink the "from" of a rename _after_ creating the "to", which seems like the natural order to me. Other than the repobackend which defers unlinking, I don't see how other patch backends would have worked when renaming a file.
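The in-memory flow can be sketched with a toy model where commits are plain `{path: content}` snapshots (none of these names are the real hg API):

```python
def diff(a, b):
    # Paths whose content differs between snapshots a and b (None = removed).
    paths = set(a) | set(b)
    return {p: b.get(p) for p in paths if a.get(p) != b.get(p)}

def amend_to(stack, target, wc_patch):
    """stack: list of {path: content} snapshots, oldest first.
    Apply wc_patch to stack[target] in memory, then replay each descendant's
    own delta on top, aborting on the first overlapping change ('conflict')."""
    deltas = [diff(stack[i - 1], stack[i]) for i in range(target + 1, len(stack))]
    base = dict(stack[target])
    for path, content in wc_patch.items():
        if content is None:
            base.pop(path, None)
        else:
            base[path] = content
    rebuilt = [base]
    for delta in deltas:
        if set(delta) & set(wc_patch):
            # a descendant touches a file the amend also changed: abort
            raise RuntimeError("conflict: aborting amend --to")
        nxt = dict(rebuilt[-1])
        for path, content in delta.items():
            if content is None:
                nxt.pop(path, None)
            else:
                nxt[path] = content
        rebuilt.append(nxt)
    return stack[:target] + rebuilt
```

This is only a sketch of the shape of the algorithm; the real implementation works on manifests and a generated patch, not full snapshots.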
Reviewed By: DurhamG
Differential Revision: D29471052
fbshipit-source-id: 83406ec16b724b27d9a23473b630cafbb75da4d2
Summary:
Support exchange of mutation information during changeset uploads:
* Add a new API for mutation information.
* Add an implementation for this new API.
* Add client-side support.
Reviewed By: markbt
Differential Revision: D29661255
fbshipit-source-id: 1d8cfa356599c215460aee49dd0c78b11af987b8
Summary:
Rename function entriesforbundle => entriesfornodes.
This function will be used for edenapi uploads and is not related to bundles.
Reviewed By: markbt
Differential Revision: D29661256
fbshipit-source-id: b101c31d53f0f535db0b90804472c70b2f3b2c9e
Summary: Extended eden doctor to check if the PrivHelper is accessible and report when it is not.
Reviewed By: genevievehelsel
Differential Revision: D29593250
fbshipit-source-id: 2390e75b91c9d6f713db4b6084868af91a0b6623
Summary:
Allow SCS server to use blame V2 to serve blame requests, if it is enabled.
This uses `CompatBlame` so that it can use either blame V1 or blame V2.
Reviewed By: liubov-dmitrieva
Differential Revision: D29645410
fbshipit-source-id: 8d02e295995439c3b64e0128bdb5e6f5f6153159
Summary:
Allow access to blame V2 in `mononoke_admin` by using `fetch_blame_compat`.
This uses `CompatBlame` to provide blame support using either blame V1 or blame V2.
Reviewed By: liubov-dmitrieva
Differential Revision: D29492859
fbshipit-source-id: 38c73690d36b57be73cec98ae2a013f16b3e0f7a
Summary:
Implement `fetch_blame_compat`, which will fetch either blame V1 or blame V2,
depending on the repo config, and return a compatibility adapter that can be
used by code to use both kinds.
Reviewed By: StanislavGlebik
Differential Revision: D29492857
fbshipit-source-id: 88d68ef2988e316642a5ebd9aa38b541c02c5da4
Summary:
Add `blame_version` to `BlameDeriveOptions`, and if this is set to `V2`, derive
a V2 blame root for the changeset.
Blame V2 and its roots are in a separate blobstore key space, so this derivation
is entirely independent of blame V1.
The key prefix for blame V2 roots is `derived_root_blame_v2`, even though this
is slightly different to the prefix for blame V1. This is so that it matches
other derived data roots (e.g. unode V2). Similarly, `BlameRoot` becomes
`RootBlameV2` so that it matches the other root types.
Blame V2 uses a separate mapping for blame roots, which contain the root
unode manifest id as additional data.
Differential Revision: D29492858
fbshipit-source-id: de2799040129e1ab90cc6bd8f775a6d47c607db7
Summary:
Split the `derived` module into `derive_v1`, which handles derivation of blame
V1, and `mapping_v1`, which handles the derived data mapping.
This is in preparation for introducing derivation of blame V2.
Reviewed By: StanislavGlebik
Differential Revision: D29463127
fbshipit-source-id: ae3add600ca62141e7f25713367680b667507da3
Summary:
Extract `fetch_content_for_blame` to a separate module so we can re-use it in
blame V2.
The method previously returned nested `Result`s, which can be confusing, as in
most contexts the blame being rejected is not actually an error. Switch to an
explicit enum to make it clearer what the inner result represents.
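In Python terms, the shape of the change is roughly this (illustrative only; the real code is Rust and the rejection reasons here are made up):

```python
from enum import Enum

class Rejected(Enum):
    BINARY = "binary file"
    TOO_BIG = "file too big"

def fetch_content_for_blame(content, max_size=1000):
    # Return an explicit accepted/rejected value instead of a Result nested
    # inside a Result: a rejected file is an expected outcome, not an error.
    if b"\0" in content:
        return ("rejected", Rejected.BINARY)
    if len(content) > max_size:
        return ("rejected", Rejected.TOO_BIG)
    return ("accepted", content)
```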
Reviewed By: yancouto
Differential Revision: D29462095
fbshipit-source-id: 52ffcb4173a3b36f4b6cdafe4f42a4cafd993f49
Summary:
Implement using of uploading changesets in `hg cloud upload` command.
This is the last part for `hg cloud upload` - uploading changesets via Edenapi
Test:
```
# machine #2
liubovd {emoji:1f352} ~/fbsource
[15] → hg pull -r 0b6075b4bda143d5212c1525323fb285d96a1afb
pulling from mononoke://mononoke.c2p.facebook.net/fbsource
connected to twshared27150.03.cln3.facebook.com session RaIPDgvF6l8rmXkA
abort: 0b6075b4bda143d5212c1525323fb285d96a1afb not found!
```
```
# machine #1
devvm1006.cln0 {emoji:1f440} ~/fbsource/fbcode/eden/scm
[6] → EDENSCM_LOG="edenapi::client=info" ./hg cloud upload
Jul 11 13:26:26.322 INFO edenapi::client: Requesting lookup for 1 item(s)
commitcloud: head '0b6075b4bda1' hasn't been uploaded yet
Jul 11 13:26:26.472 INFO edenapi::client: Requesting lookup for 6 item(s)
commitcloud: queue 1 commit for upload
Jul 11 13:26:26.648 INFO edenapi::client: Requesting lookup for 1 item(s)
commitcloud: queue 0 files for upload
Jul 11 13:26:26.698 INFO edenapi::client: Requesting lookup for 4 item(s)
commitcloud: queue 4 trees for upload
Jul 11 13:26:27.393 INFO edenapi::client: Requesting trees upload for 4 item(s)
commitcloud: uploaded 4 trees
commitcloud: uploading commit '0b6075b4bda143d5212c1525323fb285d96a1afb'...
Jul 11 13:26:28.426 INFO edenapi::client: Requesting changesets upload for 1 item(s)
commitcloud: uploaded 1 commit
```
```
# machine #2
liubovd {emoji:1f352} ~/fbsource
[16] → hg pull -r 0b6075b4bda143d5212c1525323fb285d96a1afb
pulling from mononoke://mononoke.c2p.facebook.net/fbsource
connected to twshared16001.08.cln2.facebook.com session QCpy1x9yrflRF6xF
searching for changes
adding commits
adding manifests
adding file changes
added 895 commits with 0 changes to 0 files
(running background incremental repack)
prefetching trees for 4 commits
liubovd {emoji:1f352} ~/fbsource
[17] → hg up 0b6075b4bda143d5212c1525323fb285d96a1afb
warning: watchman has recently started (pid 93231) - operation will be slower than usual
connected to twshared32054.08.cln2.facebook.com session Hw91G8kRYzt4c5BV
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
liubovd {emoji:1f352} ~/fbsource
[18] → hg diff -c .
connected to twshared0965.07.cln2.facebook.com session rrYSvRM6pnBYZ2Fn
diff --git a/fbcode/eden/scm/test b/fbcode/eden/scm/test
new file mode 100644
--- /dev/null
+++ b/fbcode/eden/scm/test
@@ -0,0 +1,1 @@
+test
```
Initial perf wins:
Having a large stack of 6 commits (24 files changed in total), I tested *adding a single line to a file at the top commit*. We can see at least a 2X win, but it should be more, because I tested with a local instance of the edenapi service running on my devserver.
```
╷
╷ @ 5582fc8ee 6 minutes ago liubovd
╷ │ test
╷ │
╷ o d55f9bb65 86 minutes ago liubovd D29644738
╷ │ [hg] edenapi: Implement using of uploading changesets in `hg cloud upload` command
╷ │
╷ o 561149783 Friday at 15:10 liubovd D29644797
╷ │ [hg] edenapi: Add request handler for uploading hg changesets
╷ │
╷ o c3dda964a Friday at 15:10 liubovd D29644800
╷ │ [edenapi_service] Add new /:repo/upload/changesets endpoint
╷ │
╷ o 28ce2fa0c Friday at 15:10 liubovd D29644799
╷ │ [hg] edenapi/edenapi_service: Add new API for uploading Hg Changesets
╷ │
╷ o 13325b361 Yesterday at 15:23 liubovd D29644798
╭─╯ [edenapi_service] Implement uploading of hg changesets
```
```
# adding new line to a file test in the test commit, and then run:
devvm1006.cln0 {emoji:1f440} ~/fbsource/fbcode/eden/scm
[8] → time hg cloud upload
commitcloud: head '4e4f947d73e6' hasn't been uploaded yet
commitcloud: queue 1 commit for upload
commitcloud: queue 0 files for upload
commitcloud: queue 4 trees for upload
commitcloud: uploaded 4 trees
commitcloud: uploading commit '4e4f947d73e676b63df7c90c4e707d38e6d0a93b'...
commitcloud: uploaded 1 commit
real 0m3.778s
user 0m0.017s
sys 0m0.027s
```
```
# adding another new line to a file test in the test commit, and then run:
devvm1006.cln0 {emoji:1f440} ~/fbsource/fbcode/eden/scm
[11] → time hg cloud backup
connected to twshared30574.02.cln2.facebook.com session uvOvhxtBfeM7pMgl
backing up stack rooted at 13325b3612d2
commitcloud: backed up 1 commit
real 0m7.507s
user 0m0.013s
sys 0m0.030s
```
Test the force mode of the new command, which re-uploads everything:
```
devvm1006.cln0 {emoji:1f440} ~/fbsource/fbcode/eden/scm
[13] → time hg cloud upload --force
commitcloud: head '5582fc8ee382' hasn't been uploaded yet
commitcloud: queue 6 commits for upload
commitcloud: queue 24 files for upload
commitcloud: uploaded 24 files
commitcloud: queue 61 trees for upload
commitcloud: uploaded 61 trees
commitcloud: uploading commit '13325b3612d20c176923d1aab8a28383cea2ba9a'...
commitcloud: uploading commit '28ce2fa0c6a02de57cdc732db742fd5c8f2611ad'...
commitcloud: uploading commit 'c3dda964a71b65f01fc4ccadc9429ee887ea982c'...
commitcloud: uploading commit '561149783e2fb5916378fe27757dcc2077049f8c'...
commitcloud: uploading commit 'd55f9bb65a0829b1731baa686cb8a6e0c5500cc2'...
commitcloud: uploading commit '5582fc8ee382c4c367a057db2a1781377bf55ba4'...
commitcloud: uploaded 6 commits
real 0m7.830s
user 0m0.011s
sys 0m0.032s
```
We can see the time is similar to the current `hg cloud backup` command.
Reviewed By: markbt
Differential Revision: D29644738
fbshipit-source-id: cbbfcb2e8018f83f323f447848b3b6045baf47c5
Summary:
API for uploading HgChangeset
This is the last part of uploading commit cloud commits via edenapi.
The structures are generic and could be reused for pull, etc.
Reviewed By: markbt
Differential Revision: D29644799
fbshipit-source-id: 53347992cf399d99eaee4b5d2ad5f6ea30417022
Summary:
Implement uploading of hg changesets.
For now, reuse the upload code path from unbundle, but call it with empty filenodes and manifests.
Those are used for parent validation, which we don't need, because we load trees and filenodes and their parents to construct the bonsai cs.
We might want to rewrite this as cleaner code later, separate from unbundle, but for now reusing the function is the easiest way, because we know the implementation is correct and also has logging.
Reviewed By: markbt
Differential Revision: D29644798
fbshipit-source-id: 27217d3061ab8d9712417facdbfbbc7e3aebfc5b
Summary: This allows using the EDENSCM_LOG env variable to control trace logging from linked Rust libraries.
Reviewed By: fanzeyi
Differential Revision: D29641903
fbshipit-source-id: cc7395a2a5740c557e3166b786343ec0fefd6039
Summary:
If the old and new nodes are the same, there is no need to call the pull fast
path.
Reviewed By: andll
Differential Revision: D29646089
fbshipit-source-id: 752c9aec7479961ff0fb4cfa5b30cd611dcc170e
Summary: There is a code path where `heads` contains duplicated items. Be compatible with it.
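The compatible handling amounts to an order-preserving dedup, e.g.:

```python
def dedup_heads(heads):
    # Keep the first occurrence of each head, preserving order.
    seen = set()
    out = []
    for head in heads:
        if head not in seen:
            seen.add(head)
            out.append(head)
    return out
```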
Reviewed By: andll
Differential Revision: D29645743
fbshipit-source-id: ff73bc51e877c2d02fcfff28cd1001e70478f212
Summary: `-s` requires a session id, so get the `sid` from the `pid`
Reviewed By: genevievehelsel
Differential Revision: D29627171
fbshipit-source-id: b2812a150fe56b6fd6cfa246298247164861fc9d
Summary:
In the previous diff we started creating deletion commits on the megarepo mainline.
This is not great since it breaks bisects, and this diff avoids that.
The way it does it is the following:
1) First do the same thing we did before - create deletion commit, and then
create a merge commit with p1 as deletion commit and p2 as an addition commit.
Let's call it "fake merge", since this commit won't be used for our mainline
2) Generate manifest for our "fake merge", and then use this manifest to
generate a bonsai diff. But this time make p1 the old target commit (i.e. remove the
deletion commit as if it never existed).
3) Use generated bonsai diff to create a commit.
So in short we split the procedure in two - first generate and validate the
resulting manifest (this is what we use the "fake merge" commit for), and then
generate a bonsai changeset using this manifest. It's unfortunate that in order
to generate the resulting manifest we actually need to create a commit and save it to
the blobstore. If we had in-memory manifests we could have avoided that, but alas
we don't have them yet.
This way of creating bonsai changesets is a bit unconventional, but I think it has the benefit of relying on tools we are confident work (i.e. bonsai_diff), so we don't need to reimplement all the bonsai logic again.
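A toy model of the two-phase procedure, with manifests as plain `{path: content}` dicts (illustrative only; the real code operates on stored manifests and bonsai changesets):

```python
def create_clean_merge(old_target, deletion, addition):
    """1) Build the 'fake merge' manifest (p1 = deletion commit, p2 = addition
    commit); 2) diff it against the old target as if the deletion commit never
    existed; 3) the diff becomes the single mainline commit, so bisect never
    sees the intermediate deletions."""
    fake_merge = {**deletion, **addition}   # validated resulting manifest
    paths = set(old_target) | set(fake_merge)
    # bonsai-style diff vs the old target; None means "deleted"
    return {p: fake_merge.get(p)
            for p in paths if old_target.get(p) != fake_merge.get(p)}
```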
Reviewed By: mitrandir77
Differential Revision: D29633340
fbshipit-source-id: eebdb0e4db5abbab9346c575b662b7bb467497c4
Summary:
Initially I just wanted to address comments from D29515737 (fa8796ae19) about unnecessary
manifest retraversals, but there were a few more problems:
1) We didn't detect file conflicts in the final merge commit correctly. For
example, if the additions_merge commit added a file "dir/1.txt" but there's
already a file "dir" in the target changeset, then we wouldn't detect this problem.
2) What's worse is that we might produce an invalid bonsai merge changeset
ourselves. Say, if we delete "source_1/dir/file.txt" and then add a file
"source_1/dir" in the additions merge commit, then the resulting bonsai
changeset should have a "source_1/dir" entry.
This diff does the following:
1) Adds more tests to cover different corner cases - some of them were failing
before this diff.
2) Improves logic to verify file conflicts
3) Instead of trying to generate correct merge bonsai changeset it simplifies
the task and creates a separate deletion commit.
Note that creating a deletion commit on the mainline is something we want to
avoid to not break bisects. This will be addressed in the next diff.
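The file-conflict check in item 2 can be sketched like this (illustrative Python, not the real implementation):

```python
def find_path_conflicts(existing_paths, added_paths):
    """Adding 'dir/1.txt' conflicts with an existing file 'dir', and adding
    'dir' conflicts with existing files under 'dir/'."""
    existing = set(existing_paths)
    conflicts = []
    for path in added_paths:
        parts = path.split("/")
        # any proper ancestor of `path` that is an existing file is a conflict
        for i in range(1, len(parts)):
            if "/".join(parts[:i]) in existing:
                conflicts.append(path)
                break
        else:
            # `path` itself shadowing an existing directory is also a conflict
            if any(e.startswith(path + "/") for e in existing):
                conflicts.append(path)
    return conflicts
```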
Reviewed By: mitrandir77
Differential Revision: D29633341
fbshipit-source-id: 8f755d852212fbce8f9331049bf836c1d0a4ef42
Summary: This new method will allow the megarepo customers to create a sync target that's branching off the existing target. This feature is meant to be used for release branches.
Reviewed By: StanislavGlebik
Differential Revision: D29275281
fbshipit-source-id: 7b58d5cc49c99bbc5f7e01814178376aa3abfcdf
Summary:
First integration test for the `hg cloud upload` command.
We will be able to cover more cases once the last part (uploading of changesets) is implemented.
Reviewed By: markbt
Differential Revision: D29612725
fbshipit-source-id: cb8fedfc4e8c2408bccaa4195dc1e5c0758d742a
Summary:
Original commit changeset: 53d067bc2bb9
Same as https://www.internalfb.com/diff/D29586388 (0be0a68cca).
D29586388 (0be0a68cca) was disconnected from its parent and landed too early by mistake and reverted (because it wouldn't even compile without the parent commit).
This is just another version for the D29586388 (0be0a68cca) as it's not possible to reopen the original one.
Reviewed By: markbt
Differential Revision: D29598758
fbshipit-source-id: 96b6137366e196f9cd3c35c178eaa3d2fce1e071
Summary: The hybrid changelog relies on edenapi which isn't currently supported by tests. Disable the migration for integration tests, until test repos are able to use edenapi.
Reviewed By: singhsrb
Differential Revision: D29602284
fbshipit-source-id: 8a2b4395fc5717c3880d7b74c45a0aef571cdc17
Summary: When the hg client is using x2pagentd, it uses plain HTTP rather than TLS. We shouldn't respect the use-lfs-certs config option in this case.
Reviewed By: johansglock
Differential Revision: D29613166
fbshipit-source-id: 3a7c9c5add974dd927f4c76f1da2d5b8b67e864b
Summary: Log the key, metadata, and size of contentstore fallback hits in order to assist in debugging them.
Reviewed By: kulshrax
Differential Revision: D29552730
fbshipit-source-id: c10ed9dd50c48a28c2a256b9175e8555ea0862b2
Summary: Previously, filescmstore was flushed and logged twice, once via the contentstore shim and once via the filescmstore object directly. This change addresses that issue.
Reviewed By: kulshrax
Differential Revision: D29552720
fbshipit-source-id: d44003a016f735f528b560f259f64a5e76ce1865
Summary: Added a --deep-clean option to eden du that removes .eden/clients/x/fsck directories.
Reviewed By: genevievehelsel
Differential Revision: D29501641
fbshipit-source-id: 9c01dc76b54e151ada977c0ee0c28baafe761824
Summary: There were a bunch of warnings when compiling locally with debug_assertions
Differential Revision: D29594303
fbshipit-source-id: 7d257ff3d2450bfe8a089246b18511eb091ca361
Summary:
We had an issue in native checkout when an update needed to remove a symlink and then create a directory with the same files in place of the symlink.
This used to fail, because the update plans to write new files, but the files already 'existed' as part of a symlink, so the unknown-files check was failing.
This code makes sure that when listing untracked files we do not go inside symlinks, and treats audit errors from the VFS as if the file did not exist.
Reviewed By: DurhamG
Differential Revision: D29567562
fbshipit-source-id: 1b6751cc00c3c628e2cab8c081540dba200209fa
Summary:
Add new client side API for upload trees.
Before uploading them, check what is already present on the server, similar to how we check for filenodes.
I also added a --force flag for the `hg cloud upload` command. It should be useful in general and for testing.
Reviewed By: markbt
Differential Revision: D29586388
fbshipit-source-id: 73c549f1a0d4328a64a133ab508fb4d253a4c33d
Summary:
upload filenodes (client side)
On the client side I implemented file upload and filenodes upload in the same API, repo.edenapi.uploadfiles.
This is because we should use the tokens from the file upload part to feed them into the filenodes upload request.
Reviewed By: markbt
Differential Revision: D29549091
fbshipit-source-id: 436de187c8dce9a603c0c0a182e88b582a2d8001
Summary:
subprocess.run doesn't capture the output of a command by default, thus the
buckversion was populated with a CompletedProcess object, which cannot fit in the
environment.
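For illustration, the usual fix in Python (the command below is a stand-in for the real buck invocation):

```python
import subprocess
import sys

# Without capture_output=True, proc.stdout is None and the CompletedProcess
# object itself is what ends up being used as the value.
proc = subprocess.run([sys.executable, "-c", "print('buck-version')"],
                      capture_output=True, text=True)
buckversion = proc.stdout.strip()  # a plain str, safe to put in the environment
```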
Differential Revision: D29576149
fbshipit-source-id: 9d0e13477ac2ffc479e093ea7231eb552c31a5ec
Summary: update bundle to use byteorder::BigEndian in preparation for Bytes upgrade. New versions of Bytes no longer reexport it.
Differential Revision: D29561928
fbshipit-source-id: ce44d9c27f9786a4bcec8f7166763c95828847e8
Summary: Use the class added in the previous diff for the redacted config as well
Reviewed By: mitrandir77
Differential Revision: D29521423
fbshipit-source-id: 70f5a1cbce80a0068a0f438b7d217bfffb6a1592
Summary:
I've seen periodic reloading of stuff in at least 3 places in mononoke (2 of which I added: skiplists and redaction config; there is also segmented changelog, and there might be more).
This stack extracts that logic to a common place, so we don't need to reinvent that logic all the time, and it's easier to do it the next time.
Reviewed By: mitrandir77
Differential Revision: D29520651
fbshipit-source-id: 59820c03f168cb25e2c6345e36746121451f34e2
Summary: We don't need it anymore, and we recently had a SEV caused by the globalrev sql syncer. Let's remove it.
Reviewed By: mitrandir77
Differential Revision: D29557246
fbshipit-source-id: c7d0232203b098dff3d750d34093877240d961c4
Summary: needed to set up tw health check
Reviewed By: StanislavGlebik
Differential Revision: D29580808
fbshipit-source-id: 6a3833d652979915fd44dc6d89511192397d8b96
Summary: There is no point in keeping an empty buffer around.
Reviewed By: DurhamG
Differential Revision: D29565105
fbshipit-source-id: 1b8ea5e0158d89e119b01b1bbedd25dc280b44f3
Summary: The keepfiles arg for the purge method is unused. Delete it and save a repo recrawl.
Reviewed By: DurhamG
Differential Revision: D29567714
fbshipit-source-id: 47d6b1d13aab3b740685528bffda4e2f77c97b2a
Summary:
If heads exist in the repo, there is no need to pull them.
Practically we configured selectivepull to include master and stable.
While the master head is excluded by the pull fast path, the stable bookmark
previously triggered the heavyweight pull. This diff makes it so we can
skip the heavyweight pull and avoid other issues, like a devel-warn about
importing an empty changegroup.
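The skip amounts to a simple filter over the configured heads (names are illustrative):

```python
def heads_to_pull(local_commits, selectivepull_heads):
    # Only heads missing locally need a pull at all; a bookmark that already
    # exists in the repo (e.g. "stable") no longer triggers the heavyweight
    # pull path.
    return [h for h in selectivepull_heads if h not in local_commits]
```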
Reviewed By: DurhamG
Differential Revision: D29525476
fbshipit-source-id: 9d1ff28d1194cac22ae66e669a5bd9dbe3f750c4
Summary:
The recent treemanifest refactors broke this. The behavior seems
different on Mac vs other platforms, so let's just make the output optional with (?).
Reviewed By: singhsrb
Differential Revision: D29565879
fbshipit-source-id: 35457a6d38b02b802cc0f98d31dcab38711ff1fe
Summary:
Reenables dynamicconfig loading with eden backingstore. Previously it
broke edenfs-windows-release, but we believe the
opensource/fbcode_builder/manifests/eden tweak has fixed it.
Reviewed By: xavierd
Differential Revision: D29561192
fbshipit-source-id: 775dd21d177f3baa09b0192e7d3f7231008c3766
Summary: The `repo` weakref might be invalid after `__del__`. Check before using it.
Reviewed By: DurhamG, singhsrb
Differential Revision: D29565718
fbshipit-source-id: 54c86414cd80db0d10a3966ed4e677b31ddcd906
Summary:
cpython-ext is not part of hg business APIs. It does not need to be consistent
with lower-case class names (and the hg codebase is okay with non-lowercase class names;
it's just that most classes there are lower-case).
This resolves a rustc warning about the struct name.
Reviewed By: kulshrax
Differential Revision: D29526579
fbshipit-source-id: a4bc8e788d55c65aae9eaa87e2c684c2fded7ae2
Summary: There are a lot of places in user-visible text, such as command line help, where EdenFS is mentioned as eden/Eden/edenfs/EdenFS. This will make it consistently 'EdenFS' in most cases. In the places where it refers to the process/daemon, 'edenfs' will be used.
Reviewed By: chadaustin
Differential Revision: D29419151
fbshipit-source-id: 7b8296f0a0c84fdcb566ff811f7fcedbe7079189
Summary:
I got frustrated with the fact that half of the functions in
megarepo_api required the source name to be wrapped into a newtype and the
other half didn't. This refactor unifies it everywhere except the thrift
datastructure itself - not sure if we can affect thrift codegen in this way.
Reviewed By: StanislavGlebik
Differential Revision: D29515474
fbshipit-source-id: 2d55a03cf396b174b0228c3fcc627b2296600400
Summary:
The merge commit in the case of change_target_sync_config won't represent any
consistent state of the target, so we don't want to write the remapping state
file there.
Reviewed By: StanislavGlebik
Differential Revision: D29515476
fbshipit-source-id: b0703be1127af6582785510fde51ff8501fb4f17
Summary:
In the case of change_target_sync_config we'll be creating move commits for only a subset
of sources, so let's change the function signature to make it possible to
specify such a subset.
Reviewed By: StanislavGlebik
Differential Revision: D29515475
fbshipit-source-id: 31002ec56dad872948bcbc79b0ed5fdb794e1f10
Summary:
The `change_target_config` method's responsibilities have a huge intersection
with `add_target_config`: the change method needs to know how to merge new
sources into the target and the whole "create move commits, then create merge
commits" flow can be reused.
Reviewed By: StanislavGlebik
Differential Revision: D29515301
fbshipit-source-id: c15f95875cbcbf5aad00e5047f6a8ffb55c4da31
Summary:
With segmented changelog, this `if head not in repo` check goes and queries
the server to know if this exists. That's slow:
https://fb.workplace.com/groups/corehg/permalink/880425025886062/
This should hopefully fix it.
Reviewed By: quark-zju
Differential Revision: D29550877
fbshipit-source-id: f874fea3f42e1bde0acd4146bcfede4854b585f1
Summary:
Currently `is_present` makes a blobstore lookup, and if it couldn't determine whether the key exists or not, it checks the sync queue (in case the key was written recently) and then might check the multiplex stores again, failing if still unsure. This brings unnecessary complications and makes the multiplex blobstore less reliable.
More details in: https://fb.quip.com/wOCeAhGx6Oa1
This diff allows us to get rid of the queue and second store lookups and move the decision-making to the callers. The new logic is under the tunable for the safer rollout.
*This diff is safe to land.*
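A minimal sketch of the caller-facing shape (illustrative Python; the real types are Rust):

```python
from enum import Enum

class Presence(Enum):
    PRESENT = 1
    ABSENT = 2
    UNSURE = 3   # some inner lookups failed; the caller decides what to do

def is_present(results):
    # Aggregate per-blobstore lookup results (True / False / None-for-error)
    # without consulting the sync queue or retrying: ambiguity is surfaced
    # to the caller instead of being resolved internally.
    if any(r is True for r in results):
        return Presence.PRESENT
    if all(r is False for r in results):
        return Presence.ABSENT
    return Presence.UNSURE
```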
Reviewed By: StanislavGlebik
Differential Revision: D29428268
fbshipit-source-id: 9fc286ed4290defe16d58b2b9983e3baaf1a3fe4
Summary:
Now that Mononoke uses the `rate_limiting` library we can shed load if
a server is overloaded. Add load shedding checks to the entry points for
wireproto and EdenAPI HTTP traffic.
At the time of writing, there aren't any load shedding limits configured, so this
change won't have any effect.
Differential Revision: D29396504
fbshipit-source-id: c90cc40fc2609bdae1a267be3a1aecfe7fd33b7b
Summary:
Update Mononoke server to use the new `rate_limiting` crate. This diff
also removes the old rate limiting library.
Differential Revision: D29396507
fbshipit-source-id: 05adb9322705b771a739c8bcaf2816c95218a42d
Summary:
Replace the LFS server's load shedding logic with that provided by the
`rate_limiting` crate.
Differential Revision: D29396503
fbshipit-source-id: a71812a55b9c9f111ee2861dc1b131ad20ca82d2
Summary:
Add a new rate limiting library that also supports load shedding when
an individual server is overloaded. This library provides a few benefits:
- The code can be shared between the LFS server and Mononoke server.
- The library supports more complex expressions of which clients to apply a
rate limit to (e.g. 10% of sandcastle and mactest machines).
- The rate limiting `Target` can be expanded in the future as the client
provides more information (e.g. client region).
- Mononoke server will be able to loadshed if an individual host is overloaded,
as we can currently do with the LFS server.
I've added this library as a separate crate rather than rewriting
`load_limiter` to make it easier to review. The next diff will make use of the
new library and remove the old one.
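A toy sketch of the "percentage of a client type" targeting (field names and the hashing scheme are made up for illustration):

```python
import hashlib

def limit_applies(limit, client_type, client_id):
    # A limit targets a set of client types and a stable percentage of
    # clients within them (e.g. 10% of sandcastle): hash the client id into
    # a bucket so the same client is consistently in or out of the limit.
    if client_type not in limit["client_types"]:
        return False
    bucket = int(hashlib.sha1(client_id.encode()).hexdigest(), 16) % 100
    return bucket < limit["percentage"]
```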
Reviewed By: StanislavGlebik
Differential Revision: D29396509
fbshipit-source-id: 2fbc04e266b18392062e6f952075efd5e24e89ba