Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally, we generally should not be using these singletons
in our production code.
Some Thrift methods that watchman implicitly uses to get metadata do not have
the fetch context threaded through. These can cause fetches, so let's thread
the fetch context here too.
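The pattern being applied can be sketched as follows. This is a minimal illustration of threading an explicit fetch context through a call chain instead of relying on a process-wide singleton; all names (`FetchContext`, `get_metadata`) are hypothetical, not the real EdenFS API.

```python
# Sketch: pass a per-request fetch context down the call chain so any
# data fetch it triggers is logged against the caller, not a singleton.
from dataclasses import dataclass, field

@dataclass
class FetchContext:
    cause: str                          # e.g. "thrift", "fuse"
    fetches: list = field(default_factory=list)

    def log_fetch(self, path: str) -> None:
        self.fetches.append((self.cause, path))

def get_metadata(path: str, ctx: FetchContext) -> dict:
    # A metadata lookup may trigger a data fetch; record it on the
    # caller's context rather than on global state.
    ctx.log_fetch(path)
    return {"path": path}

ctx = FetchContext(cause="thrift")
get_metadata("a/b.txt", ctx)
```

With the context threaded explicitly, each entry point (thrift, fuse, prefetch) can attribute fetches to its own cause.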
Reviewed By: genevievehelsel
Differential Revision: D28842300
fbshipit-source-id: b1e4b3aea879d6ed7b92afa26184616dedad5935
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally, we generally should not be using these singletons
in our production code.
This change is for symlink
Reviewed By: genevievehelsel
Differential Revision: D28841453
fbshipit-source-id: 080eb62f0b562f8e0995c34e9a8302238fc59ed8
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally we generally should not be using these singletons
in our production code.
Most of rmdir is already threaded. It's not clear whether this case can
actually cause fetches in production, but we might as well thread it.
Reviewed By: genevievehelsel
Differential Revision: D28840211
fbshipit-source-id: 8dea08e775be470dd1730e2d32750a6912650ee0
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally, we generally should not be using these singletons
in our production code.
This change is for rename.
Reviewed By: genevievehelsel
Differential Revision: D23467437
fbshipit-source-id: e9d79c65fb5c4d686f0597550e43a0e87c4792cb
Summary:
We now want macFUSE to be installed instead of osxfuse. `eden doctor` spews
errors if osxfuse is not installed or loaded, so it perpetually complains.
Update these checks for macFUSE.
Reviewed By: xavierd
Differential Revision: D28751766
fbshipit-source-id: 4bc61349e33492aebe888a4e869ef7620c74768e
Summary:
Implement filenode and tree lookup in EdenAPI. This is a simple lookup in the
blobstore without any additional validation of blob content; we assume all
validation happens inside the upload logic, so if a key is known, the blob
content is valid.
Reviewed By: markbt
Differential Revision: D28868028
fbshipit-source-id: 590cc404f33adbec69f8adafd33365a0249d3241
Summary:
Create an end-to-end integration for the lookup API on the client.
Start prototyping the `hg cloud upload` command.
Currently, it just performs a lookup for existing heads.
This way we can test the new APIs end to end.
Reviewed By: markbt
Differential Revision: D28848205
fbshipit-source-id: 730c1ed4a21c1559d5d9b54d533b0cf551c41b9c
Summary:
Files upload will be executed in 2 stages:
* check if content is already present
* upload missing files
The check API is generic and can be used for any id type; it is called the 'lookup' API.
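The two stages can be sketched like this. This is an illustrative model, not the real API surface: `lookup` reports which ids the server already has, and `upload` sends only the missing content.

```python
# Sketch of the two-stage upload: a generic "lookup" over ids, then an
# upload of only the missing files. Names are hypothetical.
def lookup(server_store: set, ids: list) -> dict:
    """Return, for any id type, which ids are already present."""
    return {i: (i in server_store) for i in ids}

def upload(server_store: set, contents: dict) -> list:
    present = lookup(server_store, list(contents))
    missing = [i for i, found in present.items() if not found]
    for i in missing:                 # stage 2: upload only the gaps
        server_store.add(i)
    return missing

store = {"id1"}
uploaded = upload(store, {"id1": b"a", "id2": b"b"})
```

Because `lookup` only sees opaque ids, the same call works for filenodes, trees, or changesets.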
Reviewed By: markbt
Differential Revision: D28708934
fbshipit-source-id: 654c73b054790d5a4c6e76f7dac6c97091a4311f
Summary:
Previously we set this in the rpm spec, but we need to set it in make
local as well since sometimes hgbuild invokes make local directly.
Ideally we'd put this in setup.py, since make and rpmspecs go through that, but
we need this environment also set for the dulwich build, which we don't really
control the setup.py for.
Reviewed By: singhsrb
Differential Revision: D28902015
fbshipit-source-id: bfc170c3027cc43b24c6a517512a63a71f433d23
Summary:
Recently we had an issue with the `connectivity-lab` repo where 3 keys (P416141335) had different values because of parent ordering (P416094337).
The walker can detect differences between keys in the multiplex inner blobstores and repair them, but it has no notion of copying keys (there is no concept of a source and a target). We have a copy_blobstore_keys tool, which is used for restoring keys from backup; with a small modification it can handle copying between inner blobstores.
Reviewed By: StanislavGlebik
Differential Revision: D28707364
fbshipit-source-id: 3d5a4f39999623023539b9159fa7310d430f0ee4
Summary:
All megarepo methods write to a repository, so we need to check whether writing
to a given repo is allowed, and previously we weren't checking that.
Let's fix that by trying to get a RepoWriteContext
for the target repo. If writes are not allowed, obtaining the RepoWriteContext fails.
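The fail-fast shape of that check can be sketched as below. The types and fields are hypothetical stand-ins, not Mononoke's real ones; the point is that acquiring the write context is itself the permission check.

```python
# Sketch: obtaining a write context up front fails fast when writes
# are not allowed, so every method that holds one is already checked.
class WriteNotAllowed(Exception):
    pass

def repo_write_context(repo: dict) -> dict:
    if not repo.get("writes_allowed", False):
        raise WriteNotAllowed(repo["name"])
    return {"repo": repo["name"]}
```

A method that needs to write simply requests the context first and lets the error propagate if the repo is read-only.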
Reviewed By: farnz
Differential Revision: D28838994
fbshipit-source-id: e45d4fe72603e7fe2755141874fc4125998bfed8
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally, we generally should not be using these singletons
in our production code.
This change is for mkdir
Reviewed By: genevievehelsel
Differential Revision: D23458622
fbshipit-source-id: f3914a4f692490434882143664a5d5f1701e93ba
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally we generally should not be using these singletons
in our production code.
This change is for create
Reviewed By: genevievehelsel
Differential Revision: D23457862
fbshipit-source-id: d4c9cc658c26b3119b2b2a1da061e299eaf510c9
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally we generally should not be using these singletons
in our production code.
Most of lookup is already threaded. This finishes the threading for lookup.
Reviewed By: xavierd
Differential Revision: D23456910
fbshipit-source-id: fab7397caeee19f921d8fba1fb6528baa5cf2960
Summary:
The recent change to make run-tests work with Python 3 broke the
allow/deny list functionality because it started testing the full test name
instead of the base name. This fixes that.
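The fix amounts to comparing the base name rather than the full path; a minimal sketch (function and list names are illustrative):

```python
# Sketch: match allow/deny entries against the test's base name, not
# the full path that the Python 3 run-tests produces.
import os

def is_denied(test_path: str, denylist: set) -> bool:
    base = os.path.basename(test_path)   # e.g. "test-copy.t"
    return base in denylist

deny = {"test-copy.t"}
```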
Reviewed By: quark-zju
Differential Revision: D28885125
fbshipit-source-id: 586a71e66e0f094b79e6a3e07e27813db6f662d3
Summary: create `uncopy` command to unmark files that were copied using `copy`.
Reviewed By: quark-zju
Differential Revision: D28821574
fbshipit-source-id: c1c15f6fb2837cec529860aba70b516ddd794f10
Summary:
Time 0.2 is current, and 0.1 is long obsolete. Unfortunately there's a
large 0.1 -> 0.2 API change, so I preserved 0.1 and updated the targets of its
users. It's also unfortunate that `chrono` has `oldtime` as a default feature,
which makes it use `time-0.1`'s `Duration` type. Excluding it from our features
doesn't help, because every other user specifies it by default.
Reviewed By: dtolnay
Differential Revision: D28854148
fbshipit-source-id: 0c41ac6b998dfbdcddc85a22178aadb05e2b2f2b
Summary:
The worker should be able to process requests from the queue no matter
which repo they are for and what its ACLs are. The identity of the entity
scheduling a request should instead be checked at scheduling time.
Reviewed By: StanislavGlebik
Differential Revision: D28866807
fbshipit-source-id: 5d57eb9ba86e10d477be5cfc51dfb8f62ea16b9e
Summary:
BackingStore and LocalStore are no longer tied at the hip, so decouple
FakeBackingStore from LocalStore.
Reviewed By: kmancini
Differential Revision: D28615431
fbshipit-source-id: ee6bc807da6de4ed8fba8ab6d52ff5aeff34e8ae
Summary:
The meaning of the root ID is defined by the BackingStore, so move
parsing and rendering into the BackingStore interface.
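The shape of that interface change can be sketched as follows. The class and method names are hypothetical; the point is that each BackingStore defines its own root-id format behind a common interface.

```python
# Sketch: the BackingStore interface owns parsing/rendering of root
# ids, so each concrete store can define its own format.
from abc import ABC, abstractmethod

class BackingStore(ABC):
    @abstractmethod
    def parse_root_id(self, text: str) -> bytes: ...

    @abstractmethod
    def render_root_id(self, root_id: bytes) -> str: ...

class HexBackingStore(BackingStore):
    # One possible store: root ids are hex-encoded hashes.
    def parse_root_id(self, text: str) -> bytes:
        return bytes.fromhex(text)

    def render_root_id(self, root_id: bytes) -> str:
        return root_id.hex()

store = HexBackingStore()
```

Callers round-trip root ids through the store instead of assuming a global format.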
Reviewed By: xavierd
Differential Revision: D28560426
fbshipit-source-id: 7cfed4870d48016811b604348742754f6cdbd842
Summary:
Currently bookmark warmers receive a `BlobRepo`. This diff makes them receive an `InnerRepo` instead, with no logic changes.
This will be useful as fields are moved from `BlobRepo` to `InnerRepo`, and will also be useful for accessing skiplists from warmers, as I plan to do on the next diff.
Reviewed By: StanislavGlebik
Differential Revision: D28796543
fbshipit-source-id: dbe5bec9fc34da3ae51e645ea09b03e2bb620445
Summary:
This diff simply extracts `InnerRepo` object type to a separate target.
This will be used on the next diff where we need to access `InnerRepo` from `BookmarkWarmer`, which needed to be split to a different target in order to not create a circular dependency in buck.
Reviewed By: StanislavGlebik
Differential Revision: D28796406
fbshipit-source-id: 4fdadbbde31719b809abb6b8a9ba8fa24b426299
Summary:
This diff creates a new `InnerRepo` container, that contains `BlobRepo` as well as the skiplist index.
The plan here is:
- As for code organisation, `InnerRepo` will eventually contain most of the fields currently in `Repo`, as well as the fields of `BlobRepo` that are only used in binaries that use `Repo`. This way each binary will only build the "attribute fields" it needs, but the code to build them can still be neatly shared.
- As for `SkiplistIndex`, the plan is to be able to modify it inside `WarmBookmarksCache`, that's why I'm moving it to `InnerRepo` as well. I'll make bookmark warmers receive `InnerRepo` instead of `BlobRepo`, so they can access the skiplist index if wanted, and then modify it (this is an attempt to try to make skiplists faster on bookmarks).
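The container described above can be sketched like this. These are hypothetical stand-in types, not Mononoke's real ones; they only illustrate that a warmer receiving `InnerRepo` can reach both the blob repo and the skiplist index.

```python
# Sketch of the InnerRepo container: BlobRepo plus the skiplist index,
# so bookmark warmers can reach (and later mutate) the skiplist.
from dataclasses import dataclass, field

@dataclass
class BlobRepo:
    name: str

@dataclass
class SkiplistIndex:
    edges: dict = field(default_factory=dict)

@dataclass
class InnerRepo:
    blob_repo: BlobRepo
    skiplist_index: SkiplistIndex

def warm_bookmarks(repo: InnerRepo) -> str:
    # A warmer now receives InnerRepo and can access the skiplist too.
    return repo.blob_repo.name

inner = InnerRepo(BlobRepo("repo1"), SkiplistIndex())
```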
Reviewed By: StanislavGlebik
Differential Revision: D28748221
fbshipit-source-id: bca31c14a6789a715a215cc69ad0a69b5e73404c
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally we generally should not be using these singletons
in our production code.
This change is for mknod.
Reviewed By: chadaustin
Differential Revision: D23452153
fbshipit-source-id: 7b9bc6b624fbe81b91770bc65a0d27bc9d397032
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally we generally should not be using these singletons
in our production code.
This change is for getxattr.
Reviewed By: chadaustin
Differential Revision: D23451954
fbshipit-source-id: bae73878754d59661cddf7c0b001e506bbc88d13
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally we generally should not be using these singletons
in our production code.
This change is for readlink.
Reviewed By: chadaustin
Differential Revision: D23451821
fbshipit-source-id: 1f8ee369a992ab3489a9366f9a972f67461970de
Summary: When checkpointing, the run_start in the pack info logging is the run start for a given checkpoint name, so let's include it to provide context.
Reviewed By: farnz
Differential Revision: D28867962
fbshipit-source-id: 113b9e10f5b8e1869702b3ea83374d0d08a8792e
Summary: This makes it easier to understand what each move and merge commit is for.
Reviewed By: mitrandir77
Differential Revision: D28839677
fbshipit-source-id: 1a42205c164224b64c773cff80b690b251a48381
Summary:
When deriving `hgchangesets` for merge commits, we try to get filenodes right.
Speed this up a little by recognising that history checks are unnecessary if both parents have the same filenode.
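The fast path can be sketched as below; the function names are illustrative, not the real derivation code.

```python
# Sketch: if both parents already carry the same filenode for a path,
# the history check can be skipped entirely.
def filenode_for_merge(p1_filenode, p2_filenode, check_history):
    if p1_filenode == p2_filenode:
        return p1_filenode              # identical: skip the history walk
    return check_history(p1_filenode, p2_filenode)

calls = []
def slow_history_check(a, b):
    # Stand-in for the expensive history comparison.
    calls.append((a, b))
    return a
```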
Reviewed By: StanislavGlebik
Differential Revision: D28866024
fbshipit-source-id: 6da4c162abce5b426269630f82e9e0b84eea2b33
Summary: Sometimes it is useful to search/group by commit hash.
Reviewed By: krallin
Differential Revision: D28834644
fbshipit-source-id: 93f650e19ae512450e33542cf74b8aa3333c6c35
Summary:
Fetching all history of both filenodes to see if there's common history either side of a merge is wasteful, and in some megarepo work is causing long delays deriving merge changesets.
Where we have already derived filenodes for a given merge's ancestors, we can go faster; we can use the linknodes to determine the older of the two filenodes, and fetch only history for the newer of the two.
This is imperfect for the import use case, since filenodes depend on hgchangesets, and the batching in use at the moment prefers to generate all derived data of a given type before moving onto another type, but it's an improvement for cases where some filenodes are already derived (e.g. due to import of a repo with a similar history).
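The linknode trick can be sketched as follows. Generation numbers stand in for the ordering of the commits that introduced each filenode; all names are hypothetical.

```python
# Sketch: use linknodes (here, generation numbers of the introducing
# commits) to pick the newer of two filenodes, and fetch history only
# for that one instead of fetching both full histories.
def newer_filenode(filenodes, linknode_gen):
    f1, f2 = filenodes
    # The filenode introduced by the later commit is the "newer" one;
    # only its history needs fetching to reach the common ancestor.
    return f1 if linknode_gen[f1] >= linknode_gen[f2] else f2

gens = {"fa": 3, "fb": 7}
```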
Reviewed By: StanislavGlebik
Differential Revision: D28796253
fbshipit-source-id: 5384b5d2841844794a518c321dbf995891374d3a
Summary:
This is a precaution. add_sync_target can create a very branchy repository, and
I'm not sure how well Mononoke will handle deriving these commits on
all Mononoke hosts at once (in particular, I'm worried about the traversal that
has to be done by all hosts in parallel). In theory it should work fine, but
deriving data during the add_sync_target call should be a reasonable thing to do
anyway.
While working on that I noticed that "git" derived data is not supported by derived data utils, and it was causing test failures. I don't think there's any reason to not have TreeHandle in derived data utils, so I'm adding it now.
Reviewed By: farnz
Differential Revision: D28831052
fbshipit-source-id: 5f60ac5b93d2c38b4afa0f725c6908edc8b98b18
Summary:
Previously, even if two commits were unrelated (e.g. in two branches not
related to each other), we'd still derive them sequentially. This
diff makes it possible to derive them in parallel to speed up derivation.
A few notes:
1) There's a killswitch to disable parallel derivation
2) By default we derive at most 10 commits at the same time, but it can be
configured.
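The two notes above can be sketched with a bounded-concurrency loop. This is an illustrative model (asyncio stands in for the real executor; the function names are hypothetical): a semaphore caps concurrency at the configured limit, and the killswitch falls back to sequential derivation.

```python
# Sketch: derive unrelated commits concurrently, at most `limit` at a
# time, with a killswitch that restores sequential behaviour.
import asyncio

async def derive(commit: str, derived: list) -> None:
    await asyncio.sleep(0)              # stand-in for real derivation work
    derived.append(commit)

async def derive_batch(commits, parallel=True, limit=10):
    derived = []
    if not parallel:                    # killswitch: sequential fallback
        for c in commits:
            await derive(c, derived)
        return derived
    sem = asyncio.Semaphore(limit)      # at most `limit` in flight
    async def bounded(c):
        async with sem:
            await derive(c, derived)
    await asyncio.gather(*(bounded(c) for c in commits))
    return derived

result = asyncio.run(derive_batch(["c1", "c2", "c3"]))
```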
Reviewed By: farnz
Differential Revision: D28830311
fbshipit-source-id: b5499ad5ac179f73dc94ca09927ec9c906592460
Summary:
They are breaking, and hgsql is no longer relevant (the hg server repo was
forked), so let's just remove the tests.
Reviewed By: andll
Differential Revision: D28852159
fbshipit-source-id: 04a47ea489b3f190cffe7f714a9f4161847a2c86
Summary:
Fix remaining issues like encoding and the `bname` vs `name` difference
(`bname` was deleted by a previous change, but it differs from `name` by more
than encoding: `bname` does not have the " (case x)" suffix).
Differential Revision: D28852092
fbshipit-source-id: df013b284414600deb6f20a5c0883f09906bf976
Summary:
Make source_name unique even if the same git repo and project name is mapped several times in the same manifest.
source_name needs to match between all the different places we use it (`add_target_config`, `sync_changeset`, `changesets_to_merge`); we were not consistent before.
Reviewed By: StanislavGlebik
Differential Revision: D28845869
fbshipit-source-id: 54e96dcdeaf22ec68f626e9c30e5e60c54ec149b
Summary:
Instrument file scmstore with tracing logging. There's more we should add here, but this will be a good starting place - I've already discovered some issues from looking at the log output. (Why does drop run twice? How does it run twice?)
It'd also probably be nice to support formatting the output like https://crates.io/crates/tracing-tree, which will be a lot less cluttered by the logged fields (like `attrs` on `fetch`).
Reviewed By: DurhamG
Differential Revision: D28750954
fbshipit-source-id: 63baa602f7147d24ac3e34defa969a70a92f96a4
Summary:
We are going to create quite a lot of move commits at the same time, and it can
be slow. Let's instead create them in parallel, and then call
`save_bonsai_changesets` for all the commits in one go.
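The restructuring can be sketched like this. The commit shape and function names are illustrative; the point is that the commits are built independently (so the real code can parallelize) and persisted in one batched call.

```python
# Sketch: build all move commits independently, then persist them with
# a single batched save call instead of one write per commit.
def create_move_commit(source: str) -> dict:
    return {"kind": "move", "source": source}

def save_bonsai_changesets(saved: list, commits: list) -> None:
    saved.extend(commits)               # one batched write for all commits

saved = []
moves = [create_move_commit(s) for s in ["s1", "s2", "s3"]]
save_bonsai_changesets(saved, moves)
```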
Reviewed By: mitrandir77
Differential Revision: D28795525
fbshipit-source-id: f6b6420c2fe30bb98680ac7e25412c55c99883e0
Summary:
Previously the add sync target method created a single merge commit. That
means we might create a bonsai commit with hundreds of parents. This is
not ideal, because mercurial can only work correctly with 2 parents; for a
bonsai changeset with 3 or more parents, mercurial file histories might be lost.
So instead of creating a single giant merge commit let's create a stack of
merge commits.
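The stacking can be sketched as a pairwise fold; the data shapes are illustrative.

```python
# Sketch: replace one N-parent merge with a stack of two-parent merges,
# since mercurial only handles two parents well.
def merge_stack(parents):
    # Fold parents pairwise: each merge commit has the previous merge
    # head and the next parent as its (at most two) parents.
    head = parents[0]
    merges = []
    for p in parents[1:]:
        head = {"parents": [head, p]}
        merges.append(head)
    return merges

stack = merge_stack(["p1", "p2", "p3", "p4"])
```

N parents produce N-1 stacked merges, each with exactly two parents.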
Reviewed By: mitrandir77
Differential Revision: D28792581
fbshipit-source-id: 2f8ff6b49db29c4692b7385f1d1ab57986075d57
Summary:
This fixes a potential race condition in megarepo methods where the bookmark
could be updated between executing the actual method and returning the current
bookmark value.
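The essence of the fix can be sketched like this (hypothetical shapes): the method returns the value it wrote itself, rather than re-reading the bookmark afterwards, so a concurrent move cannot leak into the response.

```python
# Sketch of the race fix: return the bookmark value the method itself
# wrote, instead of re-reading the bookmark after the fact.
def run_method(bookmarks: dict, name: str, new_value: str) -> str:
    bookmarks[name] = new_value
    return new_value                    # not a later bookmarks[name] read

bms = {"main": "old"}
result = run_method(bms, "main", "new1")
bms["main"] = "new2"                    # a concurrent move after the method
```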
Reviewed By: StanislavGlebik
Differential Revision: D28796099
fbshipit-source-id: 472f9bbcfafe1c2eb78c62ccf88d149b5157b646
Summary: Let's stop executing requests synchronously when we can do it asynchronously.
Reviewed By: farnz
Differential Revision: D28759935
fbshipit-source-id: ae3db711d3cd466cc142fd9c7f4c3a988150042c