Summary:
Several of the structs used by EdenAPI were previously defined in Mercurial's
`types` crate. This is not ideal since these structs are used for data interchange
between the client and server, and changes to them can break Mononoke, Mercurial,
or both. As such, they are more fragile than other types and their use should be
limited to the EdenAPI server and client to avoid the need for extensive breaking
changes or costly refactors down the line.
I'm about to make a series of breaking changes to these types as part of the
migration to the new EdenAPI server, so this seemed like an ideal time to split
these types into their own crate.
Reviewed By: krallin
Differential Revision: D21857425
fbshipit-source-id: 82dedc4a2d597532e58072267d8a3368c3d5c9e7
Summary:
Adds a remotefilelog.write-hgcache-to-indexedlog config which directs
all hgcache writes to the indexedlog.
This change also removes remotefilelog.indexedlogdatastore as it's on by default
now.
Reviewed By: xavierd
Differential Revision: D21772132
fbshipit-source-id: a71e188df2958fb4f8c4776da23ba87deccb3b79
Summary:
We can skip the round trip to the server if we have nothing to ask or tell the
server.
Reviewed By: xavierd
Differential Revision: D21763025
fbshipit-source-id: 7f199c12ccaa0f66ce765dd976723e4002c5dd4e
Summary:
If either the config or the environment variable is empty, the URL parsing
would raise an error, while the intent is to not use a proxy at all. Let's
handle this properly.
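A minimal sketch of the guard this describes, treating an empty or whitespace-only setting as "no proxy" instead of handing it to the URL parser. The function name and types are illustrative, not the real client API:

```rust
// Hypothetical sketch: an unset or empty proxy setting means "no proxy",
// so we never feed an empty string to the URL parser.
fn proxy_from_setting(setting: Option<&str>) -> Option<String> {
    match setting {
        None => None,
        // Empty value: intent is to disable the proxy, not to error out.
        Some(s) if s.trim().is_empty() => None,
        Some(s) => Some(s.to_string()),
    }
}

fn main() {
    assert_eq!(proxy_from_setting(None), None);
    assert_eq!(proxy_from_setting(Some("")), None);
    assert_eq!(
        proxy_from_setting(Some("http://proxy:8080")),
        Some("http://proxy:8080".to_string())
    );
}
```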
Reviewed By: DurhamG
Differential Revision: D21669922
fbshipit-source-id: 84c8d09242ac8d5571f7592d874f4954692110f5
Summary:
We didn't have a high-level overview of the types and traits and their
purposes; add that.
Reviewed By: DurhamG
Differential Revision: D21599759
fbshipit-source-id: 18e8d23ed00134f9d662778eddaee4a9451f7c2c
Summary:
As developers work on large blobs and iterate on them, the local LFS store
grows significantly over time, and that growth is unfortunately unbounded: it
is never cleaned up. Thankfully, one guarantee the server makes is that an
uploaded LFS blob will never be removed[0]. Using this property, we can simply
move blobs from the local store to the shared store once the upload is
complete.
[0]: As long as it is not censored.
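The move described above can be sketched as follows, modeling the two stores as plain maps for illustration (the real local and shared stores are revisionstore types, not `HashMap`s):

```rust
use std::collections::HashMap;

// After a successful upload, move blobs from the local store (unbounded)
// to the shared store (subject to cache eviction). Since the server keeps
// a copy, evicting from the shared store later is safe.
fn move_to_shared(
    local: &mut HashMap<String, Vec<u8>>,
    shared: &mut HashMap<String, Vec<u8>>,
    uploaded: &[String],
) {
    for hash in uploaded {
        if let Some(blob) = local.remove(hash) {
            shared.insert(hash.clone(), blob);
        }
    }
}

fn main() {
    let mut local = HashMap::new();
    local.insert("sha256:aa".to_string(), vec![1, 2, 3]);
    let mut shared = HashMap::new();

    move_to_shared(&mut local, &mut shared, &["sha256:aa".to_string()]);

    // The local store no longer grows without bound.
    assert!(local.is_empty());
    assert_eq!(shared.get("sha256:aa"), Some(&vec![1, 2, 3]));
}
```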
Reviewed By: DurhamG
Differential Revision: D21134191
fbshipit-source-id: ca43ddeb2322a953aca023b49589baa0237bbbc5
Summary: The compiler is warning about it.
Reviewed By: singhsrb
Differential Revision: D21550266
fbshipit-source-id: 4e66b0dda0e443ed63aeccd888d38a8fcb5e4066
Summary:
Pass `configparser::config::ConfigSet` to `repack` in
`revisionstore/src/repack.rs` so that we can use various config values in `filter_incrementalpacks`.
* `repack.maxdatapacksize`, `repack.maxhistpacksize`
* The overall max pack size
* `repack.sizelimit`
* The size limit for any individual pack
* `repack.maxpacks`
* The maximum number of packs we want to have after repack (overrides sizelimit)
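A sketch of how `filter_incrementalpacks` might read these values, using a plain map as a stand-in for `configparser::config::ConfigSet` (the real `ConfigSet` lookup API differs; the config names are the ones listed above):

```rust
use std::collections::HashMap;

// Hypothetical helper: parse a numeric config value, falling back to a
// default when the key is absent or unparsable.
fn get_u64(config: &HashMap<&str, &str>, name: &str, default: u64) -> u64 {
    config
        .get(name)
        .and_then(|v| v.parse().ok())
        .unwrap_or(default)
}

fn main() {
    let mut config = HashMap::new();
    config.insert("repack.maxdatapacksize", "4294967296"); // 4 GiB
    config.insert("repack.maxpacks", "50");

    let max_data = get_u64(&config, "repack.maxdatapacksize", u64::MAX);
    let max_packs = get_u64(&config, "repack.maxpacks", 100);
    assert_eq!(max_data, 4 * 1024 * 1024 * 1024);
    assert_eq!(max_packs, 50);
}
```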
Reviewed By: xavierd
Differential Revision: D21484836
fbshipit-source-id: 0407d50dfd69f23694fb736e729819b7285f480f
Summary:
If http_proxy.no is set, we should respect it and bypass the proxy for the
hosts it excludes.
Reviewed By: wez
Differential Revision: D21383138
fbshipit-source-id: 4c8286aaaf51cbe19402bcf8e4ed03e0d167228b
Summary:
When Qing implemented all the get methods, the translate_lfs_missing function
didn't exist, and I forgot to add calls to it in the right places when landing
the diff that added it. Fix this.
Reviewed By: sfilipco
Differential Revision: D21418043
fbshipit-source-id: baf67b0fe60ed20aeb2c1acd50a209d04dc91c5e
Summary: If no LFS blobs need uploading, then don't try to connect to the LFS server in the first place.
Reviewed By: DurhamG
Differential Revision: D21478243
fbshipit-source-id: 81fa960d899b14f47aadf2fc90485747889041e1
Summary:
Summary:
Remove HgIdDataStore::get_delta and all implementations. Remove HgIdDataStore::get_delta_chain from the trait, remove all unnecessary implementations, and remove all implementations from the public Rust API. Leave the Python API and introduce "delta-wrapping".
MutableDataPack::get_delta_chain must remain in some form, as it is necessary to implement get using a sequence of Deltas. It has been moved to a private inherent impl.
DataPack::get_delta_chain must remain in some form for the same reasons, and in fact both implementations could probably be merged, but it is also used in repack.rs for the free function repack_datapack. There are a few ways to address this without making DataPack::get_delta_chain part of the public API. I've currently chosen to make the method pub(crate), i.e. visible only within the revisionstore crate. Alternatively, we could move the repack_datapack function to a method on DataPack, use a trait in a private module, or some other technique to restrict visibility to only where necessary.
UnionDataStore::get has been modified to call get on its sub-stores and return the first result that matches the given key.
MultiplexDeltaStore has been modified to implement get similarly to UnionDataStore.
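The union-store lookup described above can be sketched like this; the trait and types are simplified stand-ins for the real revisionstore API:

```rust
use std::collections::HashMap;

// Simplified stand-in for a data store that may or may not hold a key.
trait Get {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
}

// Union of sub-stores: ask each in order, first hit wins.
struct Union<T>(Vec<T>);

impl<T: Get> Union<T> {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.0.iter().find_map(|s| s.get(key))
    }
}

struct MapStore(HashMap<String, Vec<u8>>);
impl Get for MapStore {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
}

fn main() {
    let mut a = HashMap::new();
    a.insert("x".to_string(), b"from-a".to_vec());
    let mut b = HashMap::new();
    b.insert("x".to_string(), b"from-b".to_vec());
    b.insert("y".to_string(), b"only-b".to_vec());

    let union = Union(vec![MapStore(a), MapStore(b)]);
    assert_eq!(union.get("x"), Some(b"from-a".to_vec())); // first store wins
    assert_eq!(union.get("y"), Some(b"only-b".to_vec()));
    assert_eq!(union.get("z"), None);
}
```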
Reviewed By: xavierd
Differential Revision: D21356420
fbshipit-source-id: d04e18a0781374a138395d1c21c3687897223d15
Summary: When we create directory at certain places, we want these directories to be shared between different users on the same machine. This Diff uses the previously added `create_shared_dir` function to create these directories.
Reviewed By: xavierd
Differential Revision: D21322776
fbshipit-source-id: 5af01d0fc79c8d2bc5f946c105d74935ff92daf2
Summary:
While the change looks fairly mechanical and simple, the why is a bit tricky.
If we follow the calls of `ContentStore::get`, we can see that it first goes
through every on-disk store, and then switches to the remote ones. Thanks to
that ordering, when we reach the remote stores there is no reason to believe
that the local store attached to them contains the data we're fetching. Thus
the code used to always prefetch the data before reading from the store what
was just written.
While this is true for regular stores (packstore, indexedlog, etc), it starts
to break down for the LFS store. The reason is that the LFS store is
internally represented as 2 halves: a pointer store, and a blob store. It is
entirely possible for the LFS store to contain a pointer, but not the actual
blob. In that case, the `get` executed on the LFS store will simply return
`Ok(None)` as the blob just isn't present, which will cause us to fall back to
the remote stores. Since we do have the pointer locally, we shouldn't try to
refetch it from the remote store, which is why a `get_missing` needs to be run
before fetching from the remote store.
As I was writing this, I realized that all of this subtle behavior is basically
the same between all the stores, but unfortunately a blanket
impl<T: RemoteDataStore + ?Sized> HgIdDataStore for T
conflicts with the one for `Deref<Target=HgIdDataStore>`. Macros could be used
to avoid code duplication, but for now let's not stray into them.
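The `get_missing`-before-remote-fetch ordering can be sketched as a simple filter; names are illustrative, and the local "store" is modeled as a set of keys:

```rust
use std::collections::HashSet;

// Check the local store first, then only ask the remote store for keys
// that are genuinely absent locally.
fn keys_to_fetch<'a>(wanted: &'a [&'a str], local: &HashSet<&str>) -> Vec<&'a str> {
    wanted
        .iter()
        .copied()
        .filter(|k| !local.contains(k))
        .collect()
}

fn main() {
    // We hold this key locally (e.g. as an LFS pointer), so the remote
    // store must not be asked to refetch it.
    let local: HashSet<&str> = ["held-locally"].into_iter().collect();
    let wanted = ["held-locally", "truly-missing"];
    assert_eq!(keys_to_fetch(&wanted, &local), vec!["truly-missing"]);
}
```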
Reviewed By: DurhamG
Differential Revision: D21132667
fbshipit-source-id: 67a2544c36c2979dbac70dac5c1d055845509746
Summary: Implement the get() functions on the various LocalDataStore interface implementations.
Reviewed By: quark-zju
Differential Revision: D21220723
fbshipit-source-id: d69e805c40fb47db6970934e53a7cc8ac057b62b
Summary:
Memcache isn't available for Mac, but we can build the revisionstore with Buck
on macOS when building EdenFS. Let's only use Memcache for fbcode builds on
Linux for now.
Reviewed By: chadaustin
Differential Revision: D21235247
fbshipit-source-id: 5943ad84f6442e4dabbd2a44ae105457f5bb9d21
Summary:
On repack, when the Rust stores are in use, the repack code relies on
ContentStore::commit_pending to return the path of a newly created packfile, so
it won't delete it when going over the repacked ones. When LFS is enabled, both
the shared and the local stores are behind the LfsMultiplexer store that
unfortunately would always return `Ok(None)`. In this situation, the repack
code would delete all the repacked packfiles, which usually is the expected
behvior, unless only one packfile is being repacked, in which case the repack
code effectively re-creates the same packfile, and is then subsequently
deleted.
The solution is for the multiplex stores to properly return a path if one was
returned from the underlying stores.
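A minimal sketch of the fix: the multiplexer forwards the packfile paths reported by its underlying stores instead of swallowing them. The types are illustrative stand-ins for the real LfsMultiplexer and its sub-stores:

```rust
use std::path::PathBuf;

// Stand-in for an underlying store that may report a freshly written packfile.
struct Inner(Option<PathBuf>);
impl Inner {
    fn commit_pending(&self) -> Option<Vec<PathBuf>> {
        self.0.clone().map(|p| vec![p])
    }
}

struct Multiplexer {
    stores: Vec<Inner>,
}

impl Multiplexer {
    fn commit_pending(&self) -> Option<Vec<PathBuf>> {
        // Collect every path reported by the underlying stores; only
        // return None when no store produced one.
        let paths: Vec<PathBuf> = self
            .stores
            .iter()
            .filter_map(|s| s.commit_pending())
            .flatten()
            .collect();
        if paths.is_empty() {
            None
        } else {
            Some(paths)
        }
    }
}

fn main() {
    let m = Multiplexer {
        stores: vec![Inner(None), Inner(Some(PathBuf::from("pack-123.datapack")))],
    };
    // The new packfile's path survives the multiplexer, so repack
    // won't delete it.
    assert_eq!(
        m.commit_pending(),
        Some(vec![PathBuf::from("pack-123.datapack")])
    );
}
```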
Reviewed By: DurhamG
Differential Revision: D21211981
fbshipit-source-id: 74e4b9e5d2f5d9409ce732935552a02bdde85b93
Summary: This will help us debug slow commands
Reviewed By: xavierd
Differential Revision: D21075895
fbshipit-source-id: 3e7667bb0e4426d743841d8fda00fa4a315f0120
Summary:
The Memcache store is voluntarily added to the ContentStore read store, first
as a regular store, and then as a remote one. The regular store is added to
enable the slower remote store to write to it so that blobs are uploaded to
Memcache as we read them from the network. The subtle part of this is that the
HgIdDataStore methods should not do anything, or the data fetched won't be
written to any on-disk store, forcing a refetch next time the blob is needed.
Reviewed By: DurhamG
Differential Revision: D21132669
fbshipit-source-id: 96e963c7bb4209add5a51a5fc48bc38f6bcd2cd9
Summary:
Ideally, either the ContentStore, or the upper layer should verify that we
haven't missed uploading a blob, which could lead to weird behavior down the
line. For now, all the stores will return the keys of the blobs that weren't
uploaded, which allows us to return these keys to Python.
Reviewed By: DurhamG
Differential Revision: D21103998
fbshipit-source-id: 5bab0bbec32244291c65a07aa2a13aec344e715e
Summary:
The revisionstore is a large crate with many dependencies; split out the types part, which is the most likely to be shared between different pieces of eden/mononoke infrastructure.
With this split it was easy to get eden/mononoke/mercurial/bundles
Reviewed By: farnz
Differential Revision: D20869220
fbshipit-source-id: e9ee4144e7f6250af44802e43221a5b6521d965d
Summary:
On upload, read all the local blobs and upload them to the LFS server. This is
necessary to support `hg push` or `hg cloud sync` for local LFS blobs.
One of the changes made here is to switch the batch method from returning an
Iterator to taking a callback. This made it easier to write the guts of the
batch implementation in a more generic way.
A future change will also take care of moving local blobs to the shared store
after upload.
Reviewed By: DurhamG
Differential Revision: D20843136
fbshipit-source-id: 92d34a0971263829ff58e137e9905b527e18358d
Summary: This method will be used to upload local LFS blobs to the LFS server.
Reviewed By: DurhamG
Differential Revision: D20843137
fbshipit-source-id: 33a331c42687c47442189ee329da33cb5ce4d376
Summary:
Loose files make it easier to interact with a Mercurial server in tests; use
them instead of an IndexedLog.
Reviewed By: DurhamG
Differential Revision: D20786432
fbshipit-source-id: 61c1fc601d9a6ed157c5add9748e40840b081870
Summary:
Instead of using Sha256::from_slice, just use Sha256::from with a correctly
sized array.
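The point of the change can be illustrated as follows; `Sha256` here is a stand-in newtype for the real hash type, which exposes a similar `From<[u8; 32]>` conversion:

```rust
// Building the hash wrapper from a fixed-size array gives a compile-time
// length check, where a slice-based constructor needs a runtime check
// (and a fallible API).
#[derive(Debug, PartialEq)]
struct Sha256([u8; 32]);

impl From<[u8; 32]> for Sha256 {
    fn from(bytes: [u8; 32]) -> Self {
        Sha256(bytes)
    }
}

fn main() {
    let bytes = [0u8; 32];
    // No fallible slice conversion needed: the length is enforced by the type.
    let hash = Sha256::from(bytes);
    assert_eq!(hash, Sha256([0u8; 32]));
}
```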
Reviewed By: quark-zju
Differential Revision: D20756181
fbshipit-source-id: 17c869325025078e4c91a564fc57ac1d9345dd15
Summary:
All of the callers are already using an Arc, so instead of forcing the remote
store to be cloneable (and thus wrapping an inner self in an Arc), let's just
pass self as an Arc.
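This is the `self: Arc<Self>` receiver pattern; a minimal sketch with an illustrative store type:

```rust
use std::sync::Arc;

struct RemoteStore;

impl RemoteStore {
    // Callers already hold an Arc, so take it directly: no inner Arc,
    // no Clone bound on the store itself.
    fn datastore(self: Arc<Self>) -> Arc<RemoteStore> {
        self
    }
}

fn main() {
    let store = Arc::new(RemoteStore);
    let shared = Arc::clone(&store).datastore();
    // Both `store` and `shared` point at the same allocation.
    assert_eq!(Arc::strong_count(&shared), 2);
}
```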
Reviewed By: DurhamG
Differential Revision: D20715580
fbshipit-source-id: 1bef23ae7da7b314d99cb3436a94d04134f1c0e4
Summary:
When LFS is enabled on fbsource, the enablement will be rolled out
server-side, with the server serving pointers (or not). In the catastrophic
scenario where Mononoke has to be rolled back, the Mononoke LFS server will be
unable to serve blobs, but some clients may still have LFS pointers locally
without the corresponding blob. For this, we need to be able to fall back to
fetching the blob via the getpackv2 protocol.
Reviewed By: DurhamG
Differential Revision: D20662667
fbshipit-source-id: 4ac45558f6d205cbd1db33c21c6fb137a81bdbd5
Summary:
The LFS server might be temporarily having issues, let's retry a bit before
giving up.
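A sketch of retry-with-backoff for this flaky-server case; the retry count and backoff values are illustrative, not the actual client configuration:

```rust
use std::time::Duration;

// Retry a fallible operation a bounded number of times before giving up.
fn with_retries<T, E>(max_tries: u32, mut op: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    let mut tries = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => {
                tries += 1;
                if tries >= max_tries {
                    return Err(e);
                }
                // A real client would sleep here, typically with
                // exponential backoff.
                let _backoff = Duration::from_millis(100 * u64::from(tries));
            }
        }
    }
}

fn main() {
    let mut attempts = 0;
    let result: Result<&str, &str> = with_retries(3, || {
        attempts += 1;
        if attempts < 3 { Err("transient") } else { Ok("ok") }
    });
    assert_eq!(result, Ok("ok"));
    assert_eq!(attempts, 3);
}
```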
Reviewed By: DurhamG
Differential Revision: D20686659
fbshipit-source-id: 90dabd19e45a681d6eae5cd50c72b635d44c0517
Summary:
When repacking for the purpose of file format changes, a single packfile may
contain data that needs to be moved out of it, so we still need to repack in
that case.
Reviewed By: DurhamG
Differential Revision: D20677442
fbshipit-source-id: c621dd2e657f5a4565b37d4b029731415b899117
Summary:
Remote stores can implement get_missing properly by simply querying the
underlying store that they will be writing to. This may prevent double-fetching
some blobs in `hg prefetch` that we already have.
Reviewed By: DurhamG
Differential Revision: D20662668
fbshipit-source-id: 22140b5b7200c687e0ec723dd8879dc8fbea6fb9
Summary:
There are cases where the user of the abstraction needs to know if this is a
local store, this will simplify the caller code.
Reviewed By: DurhamG
Differential Revision: D20662666
fbshipit-source-id: e0bde7eb0dc3484979732a7c4cdf888fedc70e13
Summary:
By regularly flushing the blob store, we avoid keeping too many LFS blobs in
memory, which could cause OOM issues.
The default size is chosen to be 1GB, but is configurable for more control.
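The size-triggered flush can be sketched like this; the types and threshold handling are illustrative, with only the 1 GB default taken from the description above:

```rust
// Default flush threshold: 1 GB, per the description above.
const DEFAULT_FLUSH_THRESHOLD: usize = 1024 * 1024 * 1024;

struct BlobBuffer {
    pending_bytes: usize,
    threshold: usize,
    flushes: usize,
}

impl BlobBuffer {
    fn new(threshold: usize) -> Self {
        BlobBuffer { pending_bytes: 0, threshold, flushes: 0 }
    }

    fn add(&mut self, blob_len: usize) {
        self.pending_bytes += blob_len;
        if self.pending_bytes >= self.threshold {
            // In the real store this writes the pending blobs to disk,
            // bounding memory usage.
            self.flushes += 1;
            self.pending_bytes = 0;
        }
    }
}

fn main() {
    assert_eq!(DEFAULT_FLUSH_THRESHOLD, 1 << 30);
    let mut buf = BlobBuffer::new(100);
    buf.add(60);
    assert_eq!(buf.flushes, 0);
    buf.add(50); // crosses the threshold
    assert_eq!(buf.flushes, 1);
    assert_eq!(buf.pending_bytes, 0);
}
```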
Reviewed By: DurhamG
Differential Revision: D20646213
fbshipit-source-id: 12c06fd0212ef3974bea10c82026b6e74fb5bf21
Summary:
In the legacy lfs extension, LFS blobs were stored as loose files on disk, and
as we saw with loose files for remotefilelog, they can incur a significant
overhead to maintain. Since LFS blobs are large by definition, the number of
loose LFS blobs should be small enough for repack to walk over all of them to
choose which ones to throw away.
A different approach is to simply store the blobs in an on-disk format that
allows automatic size management and simple indexing. That format is an
IndexedLog. This of course doesn't come without drawbacks, the main one being
that the IndexedLog API mandates that the full blob be present at insertion
time, preventing streaming writes. The solution is to simply chunk the blobs
before writing them. While proper streaming is not done just yet, the storage
format no longer prevents it from being implemented.
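The chunking step can be sketched as follows; the chunk size is illustrative, not the store's actual value:

```rust
// Since the log requires whole entries at insertion time, large blobs are
// split into fixed-size chunks before being written, and reassembled on read.
fn chunk_blob(blob: &[u8], chunk_size: usize) -> Vec<&[u8]> {
    blob.chunks(chunk_size).collect()
}

fn main() {
    let blob = vec![0u8; 10];
    let chunks = chunk_blob(&blob, 4);
    assert_eq!(chunks.len(), 3); // 4 + 4 + 2 bytes
    assert_eq!(chunks[2].len(), 2);
    // Concatenating the chunks yields the original blob.
    let rebuilt: Vec<u8> = chunks.concat();
    assert_eq!(rebuilt, blob);
}
```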
Reviewed By: DurhamG
Differential Revision: D20633783
fbshipit-source-id: 37a88331e747cf22511aa348da2d30edfa481a60
Summary:
Due to the Mononoke LFS server only being available on FB's network, the tests
using it cannot run outside of FB, including in the GitHub workflows.
Reviewed By: quark-zju
Differential Revision: D20698062
fbshipit-source-id: f780c35665cf8dc314d1f20a637ed615705fd6cf
Summary:
One of the main drawbacks of the current version of repack is that it writes
the data back to a packfile, making it hard to change file formats. Currently,
2 file format changes are ongoing: moving away from packfiles entirely, and
moving LFS pointers out of the packfiles into separate storage.
While an ad-hoc solution could be designed for this purpose, repack can
fulfill this goal easily by simply writing to the ContentStore; the
configuration of the ContentStore then decides where the data is written.
The main drawback of this change is the unfortunate added code duplication.
I'm sure there is a way to avoid it by introducing new traits, but I decided
against that for now for code readability reasons.
Reviewed By: DurhamG
Differential Revision: D20567118
fbshipit-source-id: d67282dae31db93739e50f8cc64f9ecce92d2d30
Summary:
While the primary (for now) way of addressing an LFS blob is via its sha256,
being able to address them via different hash schemes (sha1 for Eden/Buck,
blake2, etc) will be helpful down the line. Thus, let's store a HashMap of
ContentHash in the pointer store.
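A sketch of the multi-scheme pointer entry described above, with illustrative stand-in types (the real `ContentHash` and pointer types live in revisionstore):

```rust
use std::collections::HashMap;

// Hash schemes the pointer may carry; the commit mentions sha256 today,
// with sha1 (Eden/Buck) and blake2 as future additions.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum HashScheme {
    Sha256,
    Sha1,
    Blake2,
}

struct LfsPointer {
    size: u64,
    // One blob, addressable by several content hashes.
    hashes: HashMap<HashScheme, Vec<u8>>,
}

fn main() {
    let mut hashes = HashMap::new();
    hashes.insert(HashScheme::Sha256, vec![0xab; 32]);
    hashes.insert(HashScheme::Sha1, vec![0xcd; 20]);
    let ptr = LfsPointer { size: 1234, hashes };

    // The blob can now be looked up under more than one hash scheme.
    assert!(ptr.hashes.contains_key(&HashScheme::Sha256));
    assert!(ptr.hashes.contains_key(&HashScheme::Sha1));
    assert_eq!(ptr.size, 1234);
}
```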
Reviewed By: DurhamG
Differential Revision: D20560197
fbshipit-source-id: 8bdc4fc4cd7fc19c7eed6a27d11953c4eedf9195
Summary: No locking is required for this one since it uses loose files on disk.
Reviewed By: DurhamG
Differential Revision: D20522890
fbshipit-source-id: 72b7ebc063060a89f54976a1128977a3b7501053
Summary:
Instead of having the magic number 0x2000 all over the place, let's move the
logic to this method.
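A sketch of the centralization: a named method documents the intent of the flag check instead of scattering the raw constant. This assumes, as in Mercurial, that 0x2000 marks externally stored (LFS) data; the `Metadata` shape is illustrative:

```rust
// Named constant and method replace the bare 0x2000 checks.
const LFS_FLAG: u16 = 0x2000;

struct Metadata {
    flags: Option<u16>,
}

impl Metadata {
    fn is_lfs(&self) -> bool {
        match self.flags {
            Some(flags) => flags & LFS_FLAG == LFS_FLAG,
            None => false,
        }
    }
}

fn main() {
    assert!(Metadata { flags: Some(0x2000) }.is_lfs());
    assert!(!Metadata { flags: Some(0) }.is_lfs());
    assert!(!Metadata { flags: None }.is_lfs());
}
```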
Reviewed By: DurhamG
Differential Revision: D20637749
fbshipit-source-id: bf666f8787e37e6d6c58ad8982a5679b7e3e717b
Summary:
Replace `rust-crypto` with `hex`, `sha-1`, and `sha2`:
- `crypto::sha1::Sha1` with `sha1::Sha1`
- `crypto::sha2::Sha256` with `sha2::Sha256`
- `crypto::digest::Digest` with `sha1::Digest` and `sha2::Digest`
- `.result_str()` with `hex::encode` applied to `.result()`
Reviewed By: jsgf
Differential Revision: D20588313
fbshipit-source-id: 75c4342e8b6285f0f960f864c21457a1a0808f64
Summary:
This is only intended for Mercurial .t tests and not in any production
environment.
Reviewed By: DurhamG
Differential Revision: D20504236
fbshipit-source-id: 618e17631b73afa650875cb7217ba7c55fb9f737
Summary:
For now, this is only used for LFS, as this is the only store that can
correctly answer both.
This API will be exposed to Python to be able to have cheap filectx comparison,
and other use cases.
Reviewed By: DurhamG
Differential Revision: D20504234
fbshipit-source-id: 0edb912ce479eb469d679b7df39ba80fceef05f2
Summary:
This enables fetching blobs from the LFS server. For now, this is limited to
fetching them, but the protocol also specifies ways to upload. That second part
will matter for commit cloud and when pushing code to the server.
One caveat of this code is that the LFS server is not mocked in tests, and thus
requests are made directly to the server. I chose very small blobs to limit the
disruption to the server; by setting a test-specific user-agent, we should be
able to monitor traffic from tests and potentially rate limit it.
Reviewed By: DurhamG
Differential Revision: D20445628
fbshipit-source-id: beb3acb3f69dd27b54f8df7ccb95b04192deca30
Summary:
The latter is what is now recommended, and it no longer requires a macro to
initialize a lazy value, leading to nicer code.
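The commit doesn't name the crates, but this is the usual macro-to-type lazy-initialization migration (e.g. `lazy_static!` to `once_cell`'s `Lazy`). The same macro-free pattern using only the standard library's `OnceLock`:

```rust
use std::collections::HashMap;
use std::sync::OnceLock;

// Macro-free lazy initialization: the map is built on first access and
// cached for the program's lifetime. No macro invocation needed.
fn defaults() -> &'static HashMap<&'static str, &'static str> {
    static DEFAULTS: OnceLock<HashMap<&'static str, &'static str>> = OnceLock::new();
    DEFAULTS.get_or_init(|| {
        let mut m = HashMap::new();
        m.insert("repack.maxpacks", "50");
        m
    })
}

fn main() {
    assert_eq!(defaults().get("repack.maxpacks"), Some(&"50"));
    // Subsequent calls return the same cached map.
    assert!(defaults().get("nonexistent").is_none());
}
```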
Reviewed By: DurhamG
Differential Revision: D20491488
fbshipit-source-id: 2e0126c9c61d0885e5deee9dbf112a3cd64376d6
Summary:
Lots of different warnings on this one. The main ones were:
- One bug where .write was used instead of .write_all
- Using .next() instead of .nth(0) for iterators
- Using .cloned() instead of .map(|x| x.clone())
- Using conditions as expressions instead of mut variables
- Using .to_vec() on slices instead of .iter().cloned().collect()
- Using .is_empty() instead of comparing .len() against 0
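The fixes above, shown as the preferred idioms (the "after" side of each change):

```rust
use std::io::Write;

fn main() -> std::io::Result<()> {
    // .write_all instead of .write: guarantees the whole buffer is written.
    let mut buf: Vec<u8> = Vec::new();
    buf.write_all(b"abc")?;
    assert_eq!(buf, b"abc".to_vec());

    // .next() instead of .nth(0).
    let mut it = [1, 2, 3].iter();
    assert_eq!(it.next(), Some(&1));

    // .cloned() instead of .map(|x| x.clone()).
    let owned: Vec<i32> = [4, 5].iter().cloned().collect();
    assert_eq!(owned, vec![4, 5]);

    // A condition as an expression instead of a mut variable.
    let n = 3;
    let label = if n > 2 { "big" } else { "small" };
    assert_eq!(label, "big");

    // .to_vec() on slices instead of .iter().cloned().collect().
    let slice: &[i32] = &[6, 7];
    assert_eq!(slice.to_vec(), vec![6, 7]);

    // .is_empty() instead of comparing .len() against 0.
    assert!(!slice.is_empty());
    Ok(())
}
```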
Reviewed By: DurhamG
Differential Revision: D20469894
fbshipit-source-id: 3666a44ad05e0fbfa68d490595703c022073af63
Summary:
Clippy had 3 sources of warnings in this crate:
- A from_str method not in an impl of FromStr. We still have 2 of them in
path.rs, but this is documented as not supported by the FromStr trait because
they return a reference. Maybe we can find a different name?
- Use of mem::transmute where casts are sufficient. I find the casts ugly,
but they are simply safer, as the compiler can do some type checking on
them.
- Unnecessary lifetime parameters
Reviewed By: quark-zju
Differential Revision: D20452257
fbshipit-source-id: 94abd8d8cd76ff7af5e0bbfc97c1e106cdd142b0
Summary:
While the sha256 of a blob gives access to its content, it doesn't allow
accessing its metadata. By adding a sha256 index, we can easily get the
metadata of a blob via its content hash.
Reviewed By: quark-zju
Differential Revision: D20445624
fbshipit-source-id: 42c04bd69d3c7380706c6237c5b4f4061c016cca
Summary: This is necessary to properly test LFS stores.
Reviewed By: quark-zju
Differential Revision: D20445625
fbshipit-source-id: 530ddf87249e8d721957806f2d8edef3262f303c
Summary:
The OpenOptions allow multiple indices to be added, but lookup had no way
to query these multiple indices.
Reviewed By: quark-zju
Differential Revision: D20445627
fbshipit-source-id: 0cb754ba17b452d892b7bcb56d502d5753ef963a
Summary:
This type can either be a Mercurial key, or a content-hash-based key. Both
prefetch and get_missing can now handle these properly. This is essential
for stores where data can be fetched in both ways, or where the data is
split in 2. For LFS, for instance, it is possible to have the LFS pointer (via
getpackv2) but not the actual blob, in which case get_missing will simply
return the content hash version of the StoreKey to signify what is actually
missing.
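A sketch of the two-variant key and the `get_missing` behavior described above. The types, field names, and hash values are simplified illustrations, not the real revisionstore definitions:

```rust
// A key is either a Mercurial (path, node) key or a content hash.
#[derive(Debug, Clone, PartialEq)]
enum StoreKey {
    HgId { path: String, node: String },
    Content { sha256: String },
}

// If we hold the LFS pointer (so we know the content hash) but not the
// blob, report the content-hash form of the key as missing.
fn missing_form(key: StoreKey, have_pointer: bool, have_blob: bool) -> Option<StoreKey> {
    match (have_pointer, have_blob) {
        (_, true) => None, // nothing missing
        (true, false) => match key {
            StoreKey::HgId { .. } => Some(StoreKey::Content {
                // Hash read from the pointer (illustrative value).
                sha256: "deadbeef".to_string(),
            }),
            content @ StoreKey::Content { .. } => Some(content),
        },
        (false, false) => Some(key),
    }
}

fn main() {
    let key = StoreKey::HgId { path: "a.bin".into(), node: "abc123".into() };
    // Pointer present, blob absent: report the content-hash key as missing.
    assert_eq!(
        missing_form(key, true, false),
        Some(StoreKey::Content { sha256: "deadbeef".into() })
    );
}
```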
Reviewed By: quark-zju
Differential Revision: D20445631
fbshipit-source-id: 06282f70214966cc96e805e9891f220b438c91a7