Summary: Client portion for the commit/revlog_data endpoint that was added to the server.
Reviewed By: kulshrax
Differential Revision: D23065989
fbshipit-source-id: 3115ad2b426daca22472e2106fcd293f3ccd70f3
Summary:
When doing large clones or checkouts, the amount of data we add to an
indexedlog can be many GB. On a laptop we don't have much memory, so let's set
a max memory threshold for the file data/history indexedlogs.
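A minimal sketch of the idea (the names here are illustrative, not the actual indexedlog API): track how many bytes are buffered in memory and flush to disk once a configured threshold is crossed.

```rust
/// Hypothetical sketch of a max-memory threshold on a write buffer;
/// not the real indexedlog API.
struct ThresholdLog {
    pending: Vec<Vec<u8>>,
    pending_bytes: usize,
    max_pending_bytes: usize,
    flush_count: usize,
}

impl ThresholdLog {
    fn new(max_pending_bytes: usize) -> Self {
        ThresholdLog {
            pending: Vec::new(),
            pending_bytes: 0,
            max_pending_bytes,
            flush_count: 0,
        }
    }

    fn append(&mut self, data: &[u8]) {
        self.pending_bytes += data.len();
        self.pending.push(data.to_vec());
        // Flush once buffered data exceeds the threshold, bounding memory.
        if self.pending_bytes >= self.max_pending_bytes {
            self.flush();
        }
    }

    fn flush(&mut self) {
        // A real implementation would write `self.pending` to disk here.
        self.pending.clear();
        self.pending_bytes = 0;
        self.flush_count += 1;
    }
}
```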
Reviewed By: xavierd
Differential Revision: D23046489
fbshipit-source-id: 43b7686b11fe05e4c074bcb02c475ebf8cf14ab1
Summary:
We were experiencing hangs during LFS HTTP fetches. We've seen similar
issues before when using Hyper, on which Reqwest is based. Let's switch to
the new Curl-based http_client instead.
Note, this does get rid of the ability to obey hg config http proxy settings.
That said, they shouldn't be used much, and the http proxy environment variables
are respected by libcurl.
Reviewed By: xavierd
Differential Revision: D22935348
fbshipit-source-id: 1c61c04bbb4043e3bde592251f12bf846ab3afd4
Summary:
One repack code path would return an error if the pack was already
deleted before the repack started. This is a fine situation, so let's just eat
the error and repack what can be repacked.
Reviewed By: xavierd
Differential Revision: D22873219
fbshipit-source-id: c716a5f0cd6106fd3464702753fb79df0bc7d13f
Summary:
During large prefetches (say, a clone), it is possible that 2 different
filenodes actually refer to the same file content, and thus share the same LFS
blob. The code would wrongly prefetch this blob twice, which would then fail due
to the `obj_set` only containing one instance of this object.
Instead of using a Vec for the objects to prefetch, we can simply use a
`HashSet` which will take care of de-duplicating the objects.
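The de-duplication described above can be sketched as follows (`LfsObject` is a stand-in for the real oid/size pair used by the LFS batch API):

```rust
use std::collections::HashSet;

/// Hypothetical LFS object identifier (the real code keys on an oid/size pair).
#[derive(Clone, PartialEq, Eq, Hash)]
struct LfsObject {
    oid: String,
    size: u64,
}

/// Collect the set of blobs to prefetch: two filenodes pointing at the same
/// content yield a single entry, instead of the duplicates a Vec would allow.
fn objects_to_prefetch(pointers: &[LfsObject]) -> HashSet<LfsObject> {
    pointers.iter().cloned().collect()
}
```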
Reviewed By: DurhamG
Differential Revision: D22903606
fbshipit-source-id: 4983555d2b16639051acbbb591ebb752d55acc2d
Summary:
There was a small but easy-to-miss mistake when prefetch was changed to return
the keys that couldn't be prefetched. For LFS pointers, the code would wrongly
report that the blob was fetched, which is misleading as the LFS blob isn't
actually downloaded; we need to translate the pointers to their LFS blob
content hashes.
Reviewed By: DurhamG
Differential Revision: D22903607
fbshipit-source-id: e86592cd986498d9f4a574585eb92da695de2e27
Summary:
An earlier diff, D21772132 (713fbeec24), added an option to default hgcache
data store writes to indexedlog, but it only did so for data, not history.
Let's also do it for history.
Reviewed By: quark-zju
Differential Revision: D22870952
fbshipit-source-id: 649361b2d946359b9fbdd038867e1058077bd101
Summary:
We're seeing users report lfs fetching hanging for 24+ hours. Stack
traces seem to show it hanging on the lfs fetch. Let's read bytes off the wire
in smaller chunks and add a timeout to each read (default timeout is 10s).
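The chunked-read half of this can be sketched with std only (the per-read timeout, 10s by default in the real code, is noted in comments since it depends on the transport):

```rust
use std::io::{self, Read};

/// Read a body in small fixed-size chunks rather than one large read.
/// In the real code each chunk read is wrapped in a timeout (default 10s),
/// so a stalled connection surfaces as an error instead of hanging forever.
fn read_chunked<R: Read>(mut reader: R, chunk_size: usize) -> io::Result<Vec<u8>> {
    let mut out = Vec::new();
    let mut chunk = vec![0u8; chunk_size];
    loop {
        // The per-read timeout would apply around this call.
        let n = reader.read(&mut chunk)?;
        if n == 0 {
            break; // EOF
        }
        out.extend_from_slice(&chunk[..n]);
    }
    Ok(out)
}
```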
Reviewed By: xavierd
Differential Revision: D22853074
fbshipit-source-id: 3cd9152c472acb1f643ba8c65473268e67d59505
Summary:
If the LFS server is down, we retry fetching the filenode from the Mercurial
server directly, which is expected to not return a pointer.
Previously, this was achieved by adding a hack into `get_missing`, but since
the function is no longer called during prefetch calls, we cannot rely on it
anymore. Instead, we can wrap the regular remote store and translate all the
StoreKey::Content onto their corresponding hgid keys.
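The translation can be sketched like this (`StoreKey` here is a simplified stand-in for revisionstore's type, and the pointer map is a hypothetical lookup from content hash back to hgid):

```rust
use std::collections::HashMap;

/// Simplified stand-in for revisionstore's StoreKey: either an hgid-based
/// key or a content-hash-based key.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
enum StoreKey {
    HgId(String),
    Content(String),
}

/// Wrapper logic for the Mercurial remote store: content keys are translated
/// back to the hgid keys they came from, since the server only understands
/// hgids and will return the full filenode rather than a pointer.
fn translate_keys(
    keys: &[StoreKey],
    content_to_hgid: &HashMap<String, String>,
) -> Vec<StoreKey> {
    keys.iter()
        .map(|k| match k {
            StoreKey::Content(hash) => match content_to_hgid.get(hash) {
                Some(hgid) => StoreKey::HgId(hgid.clone()),
                None => k.clone(),
            },
            other => other.clone(),
        })
        .collect()
}
```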
Reviewed By: DurhamG
Differential Revision: D22565604
fbshipit-source-id: 2532a1fc3dfd9ba5600957ed5cf905255cb5b3fd
Summary:
The ContentStore code has a strong assumption that all the data fetched ends up
in the hgcache. Unfortunately, this assumption breaks somewhat if an LFS
pointer is in the local store but the blob isn't alongside it. This can happen,
for instance, when unbundling a bundle that contains a pointer: the pointer will
be written to the local store, but the blob would be fetched into the shared
store.
We can break this assumption a bit in the LFS store code by writing the fetched
blob alongside the pointer; this allows the `get` operation to find both the
pointer and the blob in the same store.
Reviewed By: DurhamG
Differential Revision: D22714708
fbshipit-source-id: 01aedf04d692c787b7cddb0f7a76828ea37dcf29
Summary:
The pointer for a blob might very well be in the local store, so let's search
in it.
Reviewed By: DurhamG
Differential Revision: D22565608
fbshipit-source-id: 925dd5718fc19e11a1ccaa0887bf5c477e85b2e5
Summary: Similarly to the changes made for `get`, the same can be applied to prefetch.
Reviewed By: DurhamG
Differential Revision: D22565609
fbshipit-source-id: 0fbc1a0086fa44593a6aaffb746ed36b3261040c
Summary: It was once used by EdenFS, but is now dead code, no need to keep it around.
Reviewed By: singhsrb
Differential Revision: D22784582
fbshipit-source-id: d01cf5a99a010530166cabb0de55a5ea3c51c9c7
Summary:
When using LFS, it's possible that a pointer may be present in the local
LfsStore, but the blob would only be in the shared one. Such a scenario can
happen after an upload, when the blob is moved to the shared store, for
instance. In this case, during a `get` call, the local LFS store won't be able
to find the blob and thus would return Ok(None); the shared LFS store would not
be able to find the pointer itself and would thus return Ok(None) too. If the
server is not aware of the file node itself, the `ContentStore::get` would also
return Ok(None), even though all the information is present locally.
The main reason this happens is that the `get` call operates primarily on
file-node-based keys; for content-based stores (like LFS), this means that the
translation layer needs to be present in the same store, which isn't always the
case. By allowing stores to return a `StoreKey` when progress was made in
finding the key, we can effectively solve the problem described above: the
local store translates the file node key into a content key, and the shared
store reads the blob properly.
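A toy sketch of that chaining (the names and the string keys are illustrative, not the real revisionstore types): the local store reports the translated key on a miss, and the shared store is then queried with the translated key instead of the original hgid.

```rust
/// Instead of a plain "not found", a store can report the translated key
/// it made progress on, so the next store in the chain looks up the right thing.
#[derive(Debug, PartialEq)]
enum FetchResult {
    Found(Vec<u8>),
    /// Not found here; the key may have been translated (e.g. hgid -> content hash).
    NotFound(String),
}

/// Local store: has the pointer (hgid -> content hash) but not the blob.
fn local_get(hgid: &str) -> FetchResult {
    match hgid {
        "f1" => FetchResult::NotFound("sha256:aa".to_string()),
        other => FetchResult::NotFound(other.to_string()),
    }
}

/// Shared store: indexed by content hash, holds the blob.
fn shared_get(key: &str) -> FetchResult {
    match key {
        "sha256:aa" => FetchResult::Found(b"blob".to_vec()),
        other => FetchResult::NotFound(other.to_string()),
    }
}

/// Chain the stores, threading the translated key through.
fn get(hgid: &str) -> FetchResult {
    match local_get(hgid) {
        FetchResult::Found(data) => FetchResult::Found(data),
        FetchResult::NotFound(translated) => shared_get(&translated),
    }
}
```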
Reviewed By: DurhamG
Differential Revision: D22565607
fbshipit-source-id: 94dd74a462526778f7a7e232a97b21211f95239f
Summary: Move the `tokio::Runtime` into `EdenApiRemoteStore` so that if initialization fails, we can propagate the error instead of panicking.
Reviewed By: xavierd
Differential Revision: D22564210
fbshipit-source-id: 9db1be99f2f77c6bb0f6e9dc445d624dc5990afe
Summary:
Add a metadata field to `read_res` containing a `revisionstore::Metadata` struct (which contains the object size and flags). The main purpose of this is to support LFS, which is indicated via a metadata flag.
Although this change affects the `DataEntry` struct which is serialized over the wire, version skew between the client and server should not break things since the field will automatically be populated with a default value if it is missing in the serialized response, and ignored if the client was built with an earlier version of the code without this field.
In practice, version skew isn't really a concern since this isn't used in production yet.
Reviewed By: quark-zju
Differential Revision: D22544195
fbshipit-source-id: 0af5c0565c17bdd61be5d346df008c92c5854e08
Summary: Instead of restricting the allowed characters in a repo name, allow any UTF-8 string. The string will be percent-encoded before being used in URLs.
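A std-only sketch of the encoding (a stand-in for what a crate like `percent-encoding` provides): keep unreserved URL characters, and encode every other byte, so any UTF-8 repo name can appear in a URL path.

```rust
/// Minimal percent-encoding sketch: unreserved characters (RFC 3986)
/// pass through; everything else is encoded byte-by-byte as %XX.
fn percent_encode(name: &str) -> String {
    let mut out = String::new();
    for byte in name.bytes() {
        match byte {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                out.push(byte as char)
            }
            other => out.push_str(&format!("%{:02X}", other)),
        }
    }
    out
}
```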
Reviewed By: quark-zju
Differential Revision: D22559830
fbshipit-source-id: f9caa51d263e06d424531e0947766f4fd37b035f
Summary:
Previously, the EdenAPI stores would not report errors returned from the remote store. The intention behind this pattern in other stores is to prevent `KeyError`s from aborting the operation since the local store might still have the key.
However, in the case of the EdenAPI store, EdenAPI will simply omit missing keys in its response rather than returning an error. Instead, any error returned by the EdenAPI store indicates a more fundamental problem (e.g., unable to reach the server, connection reset, etc) which should cause an abort and return the error.
Reviewed By: quark-zju
Differential Revision: D22544031
fbshipit-source-id: e01e8d88b75e46dcebd2eef5203e3a0edde69fc7
Summary: Add an EdenAPI-backed history store. Notably, thanks to the strongly-typed remote store design from the previous diff, it is not possible to construct an `EdenApiHistoryStore` for trees, even when the underlying remote store is behind a trait object. (This is because EdenAPI does not support fetching history for trees.)
Reviewed By: quark-zju
Differential Revision: D22492162
fbshipit-source-id: 23f1393919c4e8ac0918d2009a16f482d90df15c
Summary: Reimplement `EdenApiHgIdRemoteStore` as `EdenApiRemoteStore<T>`, where `T` is a marker type indicating whether this store fetches files or trees. This allows working with the stores in a more strongly-typed way, and avoid having to check what kind of store this is at runtime when fetching data.
Reviewed By: quark-zju
Differential Revision: D22492160
fbshipit-source-id: e17556093fa9b81d2301f281da36d75a03e33c5e
Summary: Move `src/edenapi.rs` to `src/edenapi/mod.rs` in anticipation of adding more files to this module.
Reviewed By: quark-zju
Differential Revision: D22492161
fbshipit-source-id: f6252ea9a9e32d94029b8e6e76be5d9d1754f63d
Summary:
We're seeing cases where cloning can take tens of GB of memory because
we keep all the history information in memory. Let's flush the history info
every 10 million adds to bound the memory usage.
10 million was chosen somewhat arbitrarily, but it results in pack files that
are 800MB, which corresponds roughly to 8GB of memory usage.
This requires updating repack to be aware that a single flush could produce
multiple packs. Note, since repack writes via this same path, it may result in
repack producing multiple pack files. In the degenerate case, repack could
produce the same number of pack files as were input, or more. If we set the
threshold high enough, I think we'll be fine though; 800MB is probably
sufficient.
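The count-based flushing can be sketched like this (a tiny threshold is used so the behavior is visible; the real code uses 10 million):

```rust
/// Sketch of bounding memory by flushing every N adds.
struct HistoryPacker {
    pending: usize,
    flush_every: usize,
    packs_written: Vec<usize>, // entry count of each pack produced so far
}

impl HistoryPacker {
    fn new(flush_every: usize) -> Self {
        HistoryPacker { pending: 0, flush_every, packs_written: Vec::new() }
    }

    fn add(&mut self) {
        self.pending += 1;
        if self.pending >= self.flush_every {
            self.flush();
        }
    }

    /// Because `add` may trigger intermediate flushes, a single logical
    /// flush can produce multiple packs; repack must tolerate that.
    fn flush(&mut self) {
        if self.pending > 0 {
            self.packs_written.push(self.pending);
            self.pending = 0;
        }
    }
}
```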
Reviewed By: xavierd
Differential Revision: D22438569
fbshipit-source-id: 425d5d3b7999b81e44d1dbe1f2a4ea453ab6ca4f
Summary: Add an `add_entry` convenience method to `HgIdMutableDeltaStore`, similar to the one present in `HgIdMutableHistoryStore`.
Reviewed By: quark-zju
Differential Revision: D22443031
fbshipit-source-id: 84fdaae9fbd51e6f2df466b0441ec5f7ce6715f7
Summary: Update the `revisionstore` and `backingstore` crates to use the new EdenAPI crate.
Reviewed By: quark-zju
Differential Revision: D22378330
fbshipit-source-id: 989f34827b744ff4b4ac0aa10d004f03dbe9058f
Summary:
The key is "hit_count", not "hits". This typo caused the trace to always claim
that no data was fetched from memcache, which is obviously not true, as the
getpack trace that follows listed significantly fewer requested keys.
Reviewed By: kulshrax
Differential Revision: D22401592
fbshipit-source-id: ab2ea3e7f8ff3a9c7322678afc8a174e09d6dc09
Summary: Move the old EdenAPI crate to `scm/lib/edenapi/old` to make room for the new crate. This old code will eventually be deleted once all references to it are removed from the codebase.
Reviewed By: quark-zju
Differential Revision: D22305173
fbshipit-source-id: 45d211340900192d0488543ba13d9bf84909ce53
Summary:
We've had a case where a client sent a bundle that contained an LFS pointer to
Sandcastle. On that original repo, the LFS blob and pointer were in the
hgcache, which signified that the server has it. When that bundle gets applied
on Sandcastle, the pointer then makes its way onto the local lfs pointer store
and the blob will be fetched from the LFS server itself properly.
Where it gets tricky is that Sandcastle then tried to re-bundle the change and
thus tried to re-upload it. The expectation from the client's perspective is
that, since the blob was fetched from the server in the first place, the server
will just not ask for the blob to be re-uploaded. That assumption proved to be
semi-inaccurate in the case where the LFS server mirrors all the writes to a
secondary LFS server. When that secondary LFS server is standalone and not
Mononoke, it may not have the blob in question, and thus the LFS server may
request the blob to be re-uploaded so it can write it to the secondary LFS
server.
Unfortunately, things start breaking down at this point. Since the blob isn't
present in the local store, the client can't rely on being able to read it, and
fetching it from the server just to re-upload it is silly and a bit too complex.
Instead, what we can do is teach the server of this specific scenario by
manually setting the Content-Length to 0 in the case of an upload where the
blob wasn't found in the local store. The server recognizes this and will
manually copy the blob to the secondary LFS server.
Reviewed By: krallin
Differential Revision: D22192718
fbshipit-source-id: 67c5ba1a751cc07d5d5d51e07703282d8175b010
Summary: This will help in debugging potential performance issues.
Reviewed By: DurhamG
Differential Revision: D22140035
fbshipit-source-id: d7403897fbb843a4eca874c1da3788190d181bc0
Summary:
We've seen a handful of hangs in the LFS code, adding a timeout on the
connection request would help in surfacing issues when the client for some
reason doesn't get a reply from the server.
We unfortunately can't add a general timeout, as a user's connection speed
would need to be taken into account.
Reviewed By: krallin
Differential Revision: D22139560
fbshipit-source-id: 0118fb8a38af488e2f40bed0a09677bc68d666a8
Summary:
At least one Mac user had a confirmed hang in the new LFS code that caused us
to roll back LFS for fbsource. One of the things that was weird is that both
Sandcastle and devservers send several orders of magnitude more requests to the
LFS server, and thus we would expect this issue to be widespread there. We
haven't seen this.
One of the differences between the package on the devservers/Sandcastle and on
Macs is how they are compiled. The former is built with Buck, while the latter
is built with Cargo, and the package built with Buck has all the Tokio features
enabled, while the Cargo one has no features enabled. What interests us here is
the kind of scheduler in use: on Mac, the basic scheduler; on
devserver/Sandcastle, the threaded scheduler.
While I'm not entirely convinced that the difference in the scheduler is what
causes the hang, it's a good idea to make sure that both of our packages are
compiled the same way.
Reviewed By: krallin
Differential Revision: D22138593
fbshipit-source-id: ce9e64a6a930d2b0cccb634a1af9c3b5b8816a21
Summary:
Several of the structs used by EdenAPI were previously defined in Mercurial's
`types` crate. This is not ideal since these structs are used for data interchange
between the client and server, and changes to them can break Mononoke, Mercurial,
or both. As such, they are more fragile than other types and their use should be
limited to the EdenAPI server and client to avoid the need for extensive breaking
changes or costly refactors down the line.
I'm about to make a series of breaking changes to these types as part of the
migration to the new EdenAPI server, so this seemed like an ideal time to split
these types into their own crate.
Reviewed By: krallin
Differential Revision: D21857425
fbshipit-source-id: 82dedc4a2d597532e58072267d8a3368c3d5c9e7
Summary:
Adds a remotefilelog.write-hgcache-to-indexedlog config which directs
all hgcache writes to the indexedlog.
This change also removes remotefilelog.indexedlogdatastore as it's on by default
now.
Reviewed By: xavierd
Differential Revision: D21772132
fbshipit-source-id: a71e188df2958fb4f8c4776da23ba87deccb3b79
Summary:
We can skip the round trip to the server if we have nothing to ask or tell the
server.
Reviewed By: xavierd
Differential Revision: D21763025
fbshipit-source-id: 7f199c12ccaa0f66ce765dd976723e4002c5dd4e
Summary:
If either the config, or the environment variable is empty, the URL parsing
would raise an error, while the intent is to not have proxy. Let's handle it
properly.
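The fix amounts to treating an unset or empty setting the same way, rather than handing the empty string to the URL parser. A sketch:

```rust
/// Treat an unset *or empty* proxy setting as "no proxy", instead of
/// feeding the empty string to the URL parser and erroring out.
fn proxy_from_setting(setting: Option<&str>) -> Option<String> {
    match setting {
        Some(s) if !s.trim().is_empty() => Some(s.to_string()),
        _ => None,
    }
}
```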
Reviewed By: DurhamG
Differential Revision: D21669922
fbshipit-source-id: 84c8d09242ac8d5571f7592d874f4954692110f5
Summary:
We didn't have a high level overview of the types and traits and their
purposes, add that.
Reviewed By: DurhamG
Differential Revision: D21599759
fbshipit-source-id: 18e8d23ed00134f9d662778eddaee4a9451f7c2c
Summary:
As developers work on large blobs and iterate on them, the local LFS store
will grow significantly over time, and that growth is unfortunately unbounded
and will never be cleaned up. Thankfully, one guarantee that the server makes
is that an uploaded LFS blob will never be removed[0]. By using this property,
we can simply move blobs from the local store to the shared store once the
upload of the blob is complete.
[0]: As long as it is not censored.
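The move itself can be sketched like this (the maps stand in for the local and shared stores, and `promote_uploaded` is a hypothetical name):

```rust
use std::collections::HashMap;

/// Sketch of promoting blobs after upload: once the server has a blob,
/// it is safe to move it from the unbounded local store to the shared
/// (cache-managed) store, keeping the local store from growing forever.
fn promote_uploaded(
    local: &mut HashMap<String, Vec<u8>>,
    shared: &mut HashMap<String, Vec<u8>>,
    uploaded: &[String],
) {
    for oid in uploaded {
        if let Some(blob) = local.remove(oid) {
            shared.insert(oid.clone(), blob);
        }
    }
}
```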
Reviewed By: DurhamG
Differential Revision: D21134191
fbshipit-source-id: ca43ddeb2322a953aca023b49589baa0237bbbc5
Summary: The compiler is warning about it.
Reviewed By: singhsrb
Differential Revision: D21550266
fbshipit-source-id: 4e66b0dda0e443ed63aeccd888d38a8fcb5e4066
Summary:
Pass `configparser::config::ConfigSet` to `repack` in
`revisionstore/src/repack.rs` so that we can use various config values in
`filter_incrementalpacks`:
* `repack.maxdatapacksize`, `repack.maxhistpacksize`: the overall max pack size
* `repack.sizelimit`: the size limit for any individual pack
* `repack.maxpacks`: the maximum number of packs we want to have after repack
  (overrides sizelimit)
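An illustrative sketch of how these knobs could interact in pack selection (this is not the actual `filter_incrementalpacks` implementation, just the stated semantics: packs under the size limit are candidates, and maxpacks overrides the limit when too many packs would remain):

```rust
/// Pick pack sizes to repack: packs at or below `sizelimit` are selected,
/// but if more than `maxpacks` would remain, also take the smallest packs
/// regardless of the limit until at most `maxpacks` remain.
fn filter_incrementalpacks(mut sizes: Vec<u64>, sizelimit: u64, maxpacks: usize) -> Vec<u64> {
    sizes.sort();
    let under_limit = sizes.iter().filter(|&&s| s <= sizelimit).count();
    // maxpacks overrides sizelimit: ensure enough packs are repacked.
    let needed = sizes.len().saturating_sub(maxpacks);
    let take = under_limit.max(needed);
    sizes.into_iter().take(take).collect()
}
```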
Reviewed By: xavierd
Differential Revision: D21484836
fbshipit-source-id: 0407d50dfd69f23694fb736e729819b7285f480f
Summary:
If http_proxy.no is set, we should respect it and avoid sending traffic
through the proxy for the excluded hosts.
Reviewed By: wez
Differential Revision: D21383138
fbshipit-source-id: 4c8286aaaf51cbe19402bcf8e4ed03e0d167228b
Summary:
When Qing implemented all the get methods, the translate_lfs_missing function
didn't exist, and I forgot to add calls to it in the right places when landing
the diff that added it. Fix this.
Reviewed By: sfilipco
Differential Revision: D21418043
fbshipit-source-id: baf67b0fe60ed20aeb2c1acd50a209d04dc91c5e
Summary: If no LFS blobs need uploading, then don't try to connect to the LFS server in the first place.
Reviewed By: DurhamG
Differential Revision: D21478243
fbshipit-source-id: 81fa960d899b14f47aadf2fc90485747889041e1
Summary:
Remove HgIdDataStore::get_delta and all implementations. Remove HgIdDataStore::get_delta_chain from the trait, remove all unnecessary implementations, and remove all implementations from the public Rust API. Leave the Python API and introduce "delta-wrapping".
MutableDataPack::get_delta_chain must remain in some form, as it is necessary to implement get using a sequence of Deltas. It has been moved to a private inherent impl.
DataPack::get_delta_chain must remain in some form for the same reasons, and in fact both implementations can probably be merged, but it is also used in repack.rs for the free function repack_datapack. There are a few ways to address this without making DataPack::get_delta_chain part of the public API. I've currently chosen to make the method pub(crate), i.e. visible only within the revisionstore crate. Alternatively, we could move the repack_datapack function to a method on DataPack, or use a trait in a private module, or some other technique to restrict visibility to only where necessary.
UnionDataStore::get has been modified to call get on its sub-stores and return the first result that matches the given key.
MultiplexDeltaStore has been modified to implement get similarly to UnionDataStore.
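The union-store lookup pattern can be sketched as follows (the maps stand in for the real sub-stores):

```rust
/// Sketch of the union-store lookup: ask each sub-store in order and
/// return the first result that matches the key.
struct UnionStore {
    stores: Vec<std::collections::HashMap<String, Vec<u8>>>,
}

impl UnionStore {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        // The first sub-store holding the key wins; later ones are not consulted.
        self.stores.iter().find_map(|s| s.get(key).cloned())
    }
}
```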
Reviewed By: xavierd
Differential Revision: D21356420
fbshipit-source-id: d04e18a0781374a138395d1c21c3687897223d15
Summary: When we create directories in certain places, we want them to be shared between different users on the same machine. This diff uses the previously added `create_shared_dir` function to create these directories.
Reviewed By: xavierd
Differential Revision: D21322776
fbshipit-source-id: 5af01d0fc79c8d2bc5f946c105d74935ff92daf2
Summary:
While the change looks fairly mechanical and simple, the why is a bit tricky.
If we follow the calls of `ContentStore::get`, we can see that it first goes
through every on-disk store, and then switches to the remote ones. Thanks to
that, when we reach the remote stores, there is no reason to believe that the
local store attached to them contains the data we're fetching. Thus the code
used to always prefetch the data before reading from the store what was just
written.
While this is true for regular stores (packstore, indexedlog, etc), it starts
to break down for the LFS store. The reason being that the LFS store is
internally represented as 2 halves: a pointer store, and a blob store. It is
entirely possible that the LFS store contains a pointer, but not the actual
blob. In that case, the `get` executed on the LFS store will simply return
`Ok(None)` as the blob just isn't present, which will cause us to fall back to
the remote stores. Since we do have the pointer locally, we shouldn't try to
refetch it from the remote store, and that is why a `get_missing` needs to be
run before fetching from the remote store.
As I was writing this, I realized that all of this subtle behavior is basically
the same between all the stores, but unfortunately, doing a:
    impl<T: RemoteDataStore + ?Sized> HgIdDataStore for T
conflicts with the one for `Deref<Target = HgIdDataStore>`. Macros could be
used to avoid code duplication, but for now let's not stray into them.
Reviewed By: DurhamG
Differential Revision: D21132667
fbshipit-source-id: 67a2544c36c2979dbac70dac5c1d055845509746
Summary: Implement the get() functions on the various LocalDataStore interface implementations.
Reviewed By: quark-zju
Differential Revision: D21220723
fbshipit-source-id: d69e805c40fb47db6970934e53a7cc8ac057b62b
Summary:
Memcache isn't available on Mac, but we can build revisionstore with Buck
on macOS when building EdenFS. Let's only use Memcache for fbcode builds on
Linux for now.
Reviewed By: chadaustin
Differential Revision: D21235247
fbshipit-source-id: 5943ad84f6442e4dabbd2a44ae105457f5bb9d21
Summary:
On repack, when the Rust stores are in use, the repack code relies on
ContentStore::commit_pending to return the path of a newly created packfile, so
it won't delete it when going over the repacked ones. When LFS is enabled, both
the shared and the local stores are behind the LfsMultiplexer store that
unfortunately would always return `Ok(None)`. In this situation, the repack
code would delete all the repacked packfiles, which usually is the expected
behavior, unless only one packfile is being repacked, in which case the repack
code effectively re-creates the same packfile, which is then subsequently
deleted.
The solution is for the multiplex stores to properly return a path if one was
returned from the underlying stores.
Reviewed By: DurhamG
Differential Revision: D21211981
fbshipit-source-id: 74e4b9e5d2f5d9409ce732935552a02bdde85b93
Summary: This will help us debug slow commands.
Reviewed By: xavierd
Differential Revision: D21075895
fbshipit-source-id: 3e7667bb0e4426d743841d8fda00fa4a315f0120
Summary:
The Memcache store is voluntarily added to the ContentStore read store, first
as a regular store, and then as a remote one. The regular store is added to
enable the slower remote store to write to it so that blobs are uploaded to
Memcache as we read them from the network. The subtle part of this is that the
HgIdDataStore methods should not do anything, or the data fetched won't be
written to any on-disk store, forcing a refetch next time the blob is needed.
Reviewed By: DurhamG
Differential Revision: D21132669
fbshipit-source-id: 96e963c7bb4209add5a51a5fc48bc38f6bcd2cd9
Summary:
Ideally, either the ContentStore or the upper layer should verify that we
haven't missed uploading a blob, which could lead to weird behavior down the
line. For now, all the stores will return the keys of the blobs that weren't
uploaded, which allows us to return these keys to Python.
Reviewed By: DurhamG
Differential Revision: D21103998
fbshipit-source-id: 5bab0bbec32244291c65a07aa2a13aec344e715e