Summary: This has been superseded by `test_repo_factory::TestRepoFactory`.
Reviewed By: StanislavGlebik
Differential Revision: D27169434
fbshipit-source-id: 97fcd400c5e3c6e8f86c9acfc5e979909e2eda31
Summary: Use the test factory for the remaining existing tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169443
fbshipit-source-id: 00d62d7794b66f5d3b053e8079f09f2532d757e7
Summary: Use the test factory for existing bookmarks and pushrebase tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169430
fbshipit-source-id: 65a7c87bc37cfa2b3b42873bc733cec177d8c1b0
Summary: Use the test factory for test fixtures that are used in many existing tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169432
fbshipit-source-id: bb3cbfa95b330cf6572d1009c507271a7b008dec
Summary: Use the test factory for existing lfs_server tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169427
fbshipit-source-id: 004867780ad6a41c3b17963006a7d3b0e5b82113
Summary: Use the test factory for existing commit_rewriting tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169442
fbshipit-source-id: df2447b2b6423d172e684d7e702752ad717a2a4b
Summary: Use the test factory for existing repo_client and repo_import tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169425
fbshipit-source-id: 2d0c34f129447232cec8faee42056d83613de179
Summary: Use the test factory for existing hooks and mapping tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169436
fbshipit-source-id: f814e462bb2244e117c83bd355042016f5bfde8e
Summary: Use the test factory for existing mononoke_api tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169444
fbshipit-source-id: d940bfa494dfe89fdb891d5248b73ba764f74faf
Summary: Use the test factory for existing derived data tests.
Reviewed By: farnz
Differential Revision: D27169433
fbshipit-source-id: b9b57a97f73aeb639162c359ab60c6fd92da14a8
Summary:
Create a factory that can be used to build repositories in tests.
The test repo factory will be kept in a separate crate from the production repo factory, so that it can depend on a smaller set of dependencies: just those needed for in-memory test repos. This should eventually help speed up compilation for tests.
A notable difference between the test repos produced by this factory and those produced by `blobrepo_factory` is that the new repos share an in-memory metadata database. This is closer to what we use in production; a couple of places rely on this behaviour, and existing tests must use `dangerous_override` to make it happen.
Reviewed By: ahornby
Differential Revision: D27169441
fbshipit-source-id: 82541a2ae71746f5e3b1a2a8a19c46bf29dd261c
Summary:
Convert `BlobRepo` to a `facet::container`. This will allow it to be built
from an appropriate facet factory.
This only changes the definition of the structure: we still use
`blobrepo_factory` to construct it. The main difference is in the types
of the attributes, which change from `Arc<dyn Trait>` to
`Arc<dyn Trait + Send + Sync + 'static>`, specified by the `ArcTrait` alias
generated by the `#[facet::facet]` macro.
Reviewed By: StanislavGlebik
Differential Revision: D27169437
fbshipit-source-id: 3496b6ee2f0d1e72a36c9e9eb9bd3d0bb7beba8b
Summary: To prepare for making `RepoBlobstore` a facet, convert it to a newtype wrapper.
Reviewed By: ahornby
Differential Revision: D27169439
fbshipit-source-id: ceefe307e962c03c3b89be660b5b6c18d79acf3e
Summary: We have support for backup-repo-id, but the tw blobimport job doesn't have a repo id available; it has the source repo name to use instead. Let's add support for that, similar to the other repo-id/source-repo-id options.
Reviewed By: StanislavGlebik
Differential Revision: D27325583
fbshipit-source-id: 44b5ec7f99005355b8eaa4c066cb7168ec858049
Summary:
I'm trying to track down an issue in SQLBlob construction, where it takes over 3 wall-clock seconds to construct.
This is indicative of a missing `blocking` annotation somewhere; to make it easier to add the correct annotation, switch construction to modern futures and `async`
Reviewed By: krallin
Differential Revision: D27275801
fbshipit-source-id: 2b516b4eca7143e4be17c50c6542a9da601d6ac6
Summary:
The pack key needs to be removed using unlink() after the target keys are pointed at it, so that old unused packs can be GCed. e.g.
packer run 1: keys A and B are packed to P1
packer run 2: keys A and C are packed to P2
packer run 3: keys B and D are packed to P3
If we don't unlink P1 during run 1, then GC can't collect the old pack after run 3, as the key will keep it live.
Reviewed By: farnz
Differential Revision: D27188108
fbshipit-source-id: bbe22a011abbf85370a8c548413401ffb54b16b6
Summary:
Refactor `scmstore::types` into separate `file` and `tree` modules and introduce a new `StoreTree` type to represent trees in the scmstore API.
Introduce a minimal `StoreTree` type. Importantly, for now this type does not provide any methods for deserializing the tree manifest and inspecting its contents. This functionality won't be too hard to implement, though - it'll require some revisions to the `manifest-tree` crate and/or moving the `StoreTree` and `StoreFile` types to `revisionstore_types`.
Reviewed By: kulshrax
Differential Revision: D27310878
fbshipit-source-id: 712330fba87f33c49587fa895efea3601ce377af
Summary:
The revset is not optimized for the revlog backend (which is to be migrated to segments).
Optimize it.
Reviewed By: jteosw
Differential Revision: D27317708
fbshipit-source-id: cec9d6aad0f6c30c69a931898f8e1cc7c904b3f8
Summary:
We've seen occasional index timeouts on inserts in MySQL. This is very
reminiscent of D19158550 (fef360b284).
aida points out this seems to have started to happen (occasionally) recently.
That would make sense: we used to insert one-by-one, so we wouldn't have ordering
issues (because of a bug), but we also recently started inserting in batches again (because of a bugfix).
There's little reason to expect we couldn't run into the same bug there as well.
So let's give filenodes & copydata the same sorting treatment we give paths.
Worst case, this does nothing. Best case, it fixes the issue.
Reviewed By: StanislavGlebik
Differential Revision: D27301588
fbshipit-source-id: 2b24ddd68e1a1c4e31fe33e03efcef47dad3657d
Summary: Previously, accessing both repo.filelog and repo.manifestlog in a test debug command, under buck builds only, would cause a "memcache singleton leaked" error to be printed after the command completes (see https://www.internalfb.com/intern/qa/87181/runtime-warnings-about-memcacheconnectionmanagerim ). There are still some unanswered questions about this issue, as noted in the QA post, but this fixes the main issue at hand.
Reviewed By: sfilipco
Differential Revision: D27297498
fbshipit-source-id: e19665333bae9f91e1c3c6db370962a3aea2727d
Summary:
Rename "newstore" to "scmstore"
The three main options I'm considering are "edenstore", "scmstore", and "hgstore". They all describe the project sufficiently well, in my opinion. "edenstore" might be a stretch, as I have no real plans for Mononoke to use it, while "hgstore" might be a bit limited, as integration with EdenFS is a core goal, and it'll be used by EdenFS to fetch remote data via EdenApi, not just Mercurial's local storage. I feel like "scmstore" rolls off the tongue the easiest, particularly vs. "hgstore" (both "H.G. Store" and "Mercurial Store"), and is a bit easier to type than "edenstore" (which has "ede" all with the same finger). I don't have a strong opinion on this matter, so if you'd like a different name I'm open to suggestions.
Speak now or forever hold your peace.
Reviewed By: andll
Differential Revision: D27180332
fbshipit-source-id: 19e6972ea0f6527e671792845dcfd339cf1ab767
Summary:
Introduce a new `StoreFile` type for the `revisionstore` crate. This is an enum of `File`, `LfsPointer`, and `RedactedFile`, which represent the different cases the `Entry` type's `content` might actually represent. The `File` variant also handles stripping the copy header, if present, and stores the `copied_from` information, the raw unstripped `content`, and the stripped `content`, though only the last of these can be accessed through the public API right now. Ideally, we'll move copy information into the history API and never actually need to make it public here.
Conversions are provided from `Entry` and EdenApi's `FileEntry` type (which also supports redaction and LFS pointers, but as errors instead of first-class values).
Modify the output type used in the Python bindings to use this new `StoreFile` type.
Currently, the `LfsPointer` variant is unused. This will be used later when we add first-class LFS support.
Reviewed By: kulshrax
Differential Revision: D26862710
fbshipit-source-id: 8326921f3ee43bf2e253847d5735c61f5a50bfa6
Summary: Adds `ExtStoredPolicy` support to the EdenApi and IndexedLog ReadStore implementations. This flag controls how LfsPointers found during a file query are handled (either returned as a result, or treated as "not found").
Reviewed By: kulshrax
Differential Revision: D27171814
fbshipit-source-id: 14dda47f32184c3ee703fbc77106885ca4d3ea27
Summary:
Add an unlink operation to BlobstoreWithLink for use from packblob in the next diff.
The reason an unlink operation is needed is so that pack keys don't keep old packed data live longer than they should. e.g.
Before the packer runs, keys A, B, C, D each point to content A, B, C, D
packer run 1: keys A and B are packed to P1. Keys A, B, P1 now all point to pack P1
GC run here can remove content A and B
packer run 2: keys A and C are packed to P2. Keys A, C, P2 now all point to P2.
GC run here can remove content C
packer run 3: keys B and D are packed to P3. Keys B, D, P3 now all point to P3.
GC run here can remove content B and D.
At this point, keys A, C point to pack P2, keys B, D point to pack P3.
If we don't unlink pack key P1 during packer run 1, then GC can't collect the old pack after run 3, as the key P1 will keep it live. If instead we unlink key P1 once keys A and B point at pack P1, then when keys A and B stop pointing to it, there will be no further references.
The unlink method is on the BlobstoreWithLink trait which most Mononoke code does not reference and therefore can't easily accidentally call.
It's modelled on POSIX unlink(2): it will error if the key being unlinked does not exist.
Reviewed By: farnz
Differential Revision: D27188109
fbshipit-source-id: 628698dd5894895e2aff5cf7db979d32e1bc2b3e
Summary: For batch derivation of filenodes we can first derive the non-root filenodes and write them into the db, and after that write all root filenodes for that batch of commits.
Reviewed By: StanislavGlebik
Differential Revision: D27044302
fbshipit-source-id: 2a2eb8e336c787c4210192500b4d4a2a0663a29c
Summary:
This used to be very useful in the early stages as a way to manually test the
code. Now that the NFS protocol is pretty well defined and tests are actually
running, it has outlived its usefulness, so let's simply remove the code.
Reviewed By: kmancini
Differential Revision: D27194039
fbshipit-source-id: af86edd9f438448209a7d14ba66c9b54d90a9594
Summary:
When I wrote the NFS code, I used `std::move` a bit too much, on data
structures where moving them is equivalent to copying them. Instead, we can
simply use references, which makes the code shorter, and thus more efficient.
Reviewed By: kmancini
Differential Revision: D27172574
fbshipit-source-id: d9f06bf3f519e3539cf5cd0a0c4e4a49ef8009a8
Summary:
This should have been added in D27243075 (5a150e125a) but I forgot to run `hg add` and it
was thus not added...
Reviewed By: fanzeyi
Differential Revision: D27279169
fbshipit-source-id: 69807cc05fef33f51b2a491b66c2e8aeb7136deb
Summary:
Direct reads mean we cut the Manifold server out of the loop for bigger reads, and are an enabler for Hedwig and other ways to distribute a big blob between tasks in a region.
Weak consistency means that we don't always check the master server for metadata for a blob, which should speed up reads, especially in remote regions. It depends on direct reads to function, as it allows us to grab the read plan from Memcache and then go directly to the backend storage (ZippyDB, Everstore) instead of via the Manifold servers.
Reviewed By: StanislavGlebik
Differential Revision: D27264690
fbshipit-source-id: 78fe77dfa4f85b5bf657b64e56577f4e00af0b94
Summary:
Make the `RequestCreationEventListeners::new_request` event listener take an `&mut Request` instead of an `&mut RequestContext` as a parameter.
In the existing code (particularly in the `hg-http` crate), this event listener is used to configure the `RequestContext` for reporting progress. This diff just generalizes this idea, allowing the listener to modify the entire `Request`.
This is useful when we need to hijack request creation in `hg-http` to do Mercurial-specific configuration. The specific use case here is to disable TLS certificate checking when the global `--insecure` flag is set.
(Note that `http-client` itself is application-agnostic, so Mercurial specific configuration should not happen in this crate. This is why `hg-http` exists at all.)
Reviewed By: quark-zju
Differential Revision: D27242947
fbshipit-source-id: 019e19037642fe24acaa8c2917d446b91e7bcb26
Summary: Add `verify_tls_cert` and `verify_tls_host` settings to `http-client::Request` that are equivalent to [`CURLOPT_SSL_VERIFYPEER`](https://curl.se/libcurl/c/CURLOPT_SSL_VERIFYPEER.html) and [`CURLOPT_SSL_VERIFYHOST`](https://curl.se/libcurl/c/CURLOPT_SSL_VERIFYHOST.html). These will be used to allow skipping cert validation with the `--insecure` flag.
Reviewed By: quark-zju, sfilipco
Differential Revision: D27242946
fbshipit-source-id: cfa0fe800d0d132ca10ec0203bfd20b53c68b814