Summary:
Some tests run `hg init` right inside the test directory, turning the
entire $TESTTMP into a repo. In future diffs we'll start to rely more on hgcache
being present during tests, which creates a directory in $TESTTMP. Let's make
sure all repos are created as sub-directories of $TESTTMP.
Reviewed By: kulshrax
Differential Revision: D23662077
fbshipit-source-id: 2b2b974ebfd1bd19ad6acd1ebe3e68dd03a09869
Summary:
Adds the initial condition and creation logic for creating a Rust
treemanifest store. Fetching and some other code paths don't work just yet, but
subsequent diffs enable more and more functionality.
Reviewed By: quark-zju
Differential Revision: D23662052
fbshipit-source-id: a0e7090c9a3bf27a7738bf093f2d4eb6098b1ed6
Summary: The old logic would just double pack some bits. Let's prevent that.
Reviewed By: xavierd
Differential Revision: D23661933
fbshipit-source-id: 155291fa08ec2c060619329bd1cb6040769feb63
Summary:
The rust pack stores currently have logic to refresh their list of
packs if there's a key miss and if it's been a while since we last loaded the
list of packs. In some cases we want to manually trigger this refresh, for
example when we're in the middle of a histedit and it invokes an external
command that produces pack files the histedit should later consume (such as an
external amend whose result histedit then needs to work on top of).
Python pack stores solve this by allowing callers to mark the store for a
refresh. Let's add the same logic for rust stores. Once pack files are gone we
can delete this.
This will be useful for the upcoming migration of treemanifest to Rust
contentstore. Filelog usage of the Rust contentstore avoided this issue by
recreating the entire contentstore object in certain situations, but refresh
seems useful and less expensive.
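A minimal sketch of the caller-facing shape this could take (names hypothetical;
not necessarily how the stores implement it internally):
```
use std::sync::atomic::{AtomicBool, Ordering};

// Callers mark the store; the next key miss rescans the pack directory
// instead of waiting for the periodic timer.
struct PackStore {
    refresh_requested: AtomicBool,
    // ... pack list and last-scan bookkeeping elided ...
}

impl PackStore {
    /// Called e.g. by histedit after an external command may have written
    /// new pack files that we need to see.
    fn mark_force_refresh(&self) {
        self.refresh_requested.store(true, Ordering::Relaxed);
    }

    /// Consulted on a key miss before reporting the key as missing.
    fn take_refresh_request(&self) -> bool {
        self.refresh_requested.swap(false, Ordering::Relaxed)
    }
}
```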
Reviewed By: quark-zju
Differential Revision: D23657036
fbshipit-source-id: 7c6438024c3d642bd22256a8e58961a6ee4bc867
Summary:
Instants do not represent actual time and can only be compared against
each other. When we subtract arbitrary Durations from them, we run the risk of
overflowing the underlying storage, since the Instant may be represented by a
low number (such as the age of the process).
This caused crashes in test_refresh (in the next diff) on Windows.
Let's instead represent the "must rescan" state as a None last_scanned time, and avoid any arbitrary subtraction. It's generally much cleaner too.
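A minimal sketch of that representation (field names hypothetical):
```
use std::time::{Duration, Instant};

// The old approach forced a rescan by pushing last_scanned into the past
// ("Instant::now() - rescan_interval"), which can overflow and panic when the
// Instant is backed by a small value such as the process uptime. Using Option
// avoids the arithmetic entirely.
struct ScanState {
    last_scanned: Option<Instant>,
    rescan_interval: Duration,
}

impl ScanState {
    fn force_rescan(&mut self) {
        // "Must rescan" is simply the absence of a last scan time.
        self.last_scanned = None;
    }

    fn needs_rescan(&self) -> bool {
        match self.last_scanned {
            None => true,
            Some(last) => last.elapsed() >= self.rescan_interval,
        }
    }
}
```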
Reviewed By: quark-zju
Differential Revision: D23752511
fbshipit-source-id: db89b14a701f238e1c549e497a5d751447115fb2
Summary:
When sending trees and files we try to avoid sending trees that are
available from the main server. To do so, we currently check to see if the
tree/file is from the local store (i.e. .hg/store instead of $HGCACHE).
In a future diff we'll be moving trees to use the Rust store, which doesn't
expose the difference between shared and local stores. So we need to stop
depending on logic that tests the local store.
Instead we can test whether the commit is public, and only send the tree/file
if the commit is not public. This is technically a revert of the 2018 D7992502 (5e95b0e32e)
diff, which stopped depending on phases because we'd receive public commits from
svn that were not public on the server yet. Since svn is gone, I think it's
safe to go back to that way.
This code was mostly there to help when the client was further ahead than another
client and in some commit cloud edge cases, but 1) we don't do much/any p2p
exchange anymore, and 2) we did some work this year to ensure clients have more
up-to-date remote bookmarks during exchange (as a way of making phases and
discovery more reliable), so hopefully we can rely on phases more now.
Reviewed By: quark-zju
Differential Revision: D23639017
fbshipit-source-id: 34c13aa2b5ef728ea53ffe692081ef443e7e57b8
Summary:
Previously the MetadataStore would always construct a mutable pack, even
if the operation was readonly. This meant all read commands required write
access. It also meant that random .tmp files got scattered all over the place
when the Rust structures were not properly destructed (for example, if Python
doesn't bother doing the final gc that calls the destructors for the Rust types).
Let's only create mutable packs when we actually need them.
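A minimal sketch of the lazy-construction pattern (types heavily simplified):
```
use std::cell::RefCell;
use std::io;
use std::path::{Path, PathBuf};

struct MutablePack;

impl MutablePack {
    fn new(_dir: &Path) -> io::Result<Self> {
        // The real implementation would create a temporary pack file on disk.
        Ok(MutablePack)
    }

    fn add(&mut self, _key: &[u8], _data: &[u8]) -> io::Result<()> {
        Ok(())
    }
}

struct Store {
    pack_dir: PathBuf,
    mutable: RefCell<Option<MutablePack>>,
}

impl Store {
    fn add(&self, key: &[u8], data: &[u8]) -> io::Result<()> {
        let mut mutable = self.mutable.borrow_mut();
        if mutable.is_none() {
            // Only a write path reaches this point, so requiring write access
            // (and creating a .tmp file) is now justified.
            *mutable = Some(MutablePack::new(&self.pack_dir)?);
        }
        mutable.as_mut().unwrap().add(key, data)
    }
}
```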
Reviewed By: quark-zju
Differential Revision: D23219961
fbshipit-source-id: a47f3d94f70adac1f2ee763f3170ed582ef01a14
Summary:
Previously the ContentStore would always construct a mutable pack, even
if the operation was readonly. This meant all read commands required write
access. It also meant that random .tmp files got scattered all over the place
when the Rust structures were not properly destructed (for example, if Python
doesn't bother doing the final gc that calls the destructors for the Rust types).
Let's only create mutable packs when we actually need them.
Reviewed By: quark-zju
Differential Revision: D23219962
fbshipit-source-id: 573844f81966d36ad324df03eecec3711c14eafe
Summary:
Some tools, like ShipIt, close stdin before they launch the subprocess.
This causes sys.stdin to be None, which breaks our pycompat buffer read. Let's
handle that.
Reviewed By: quark-zju
Differential Revision: D23734233
fbshipit-source-id: 0adc23cd5a8040716321f6ede0157bc8362d56e0
Summary:
Turns out crecord had a help screen. It was broken in Python 3. This
fixes it.
Reviewed By: singhsrb
Differential Revision: D23720798
fbshipit-source-id: 4aade9abb88355c19ee4445de116fdb40d5366bd
Summary: filter returns a lazy iterator in Python 3, but we need a list.
Reviewed By: singhsrb
Differential Revision: D23720661
fbshipit-source-id: 8de3f5844bfe8b85b37c44423733fd2a09967397
Summary: This was horribly broken, and we have no tests.
Reviewed By: singhsrb
Differential Revision: D23720984
fbshipit-source-id: 4ad47c767b0d18f700c855a7bb43f38f5c5ef317
Summary:
When I added the surrogateescape patch for the email parser decoder
used during patches, I incorrectly added a corresponding encoder on the other
end when we get the data out of the parser. It turns out the parser is
smart/dumb. When using get_payload() it attempts a few different decodings of
the data and ends up replacing all the non-ascii characters with replacement
characters (question marks). Instead we should use get_payload(decode=True), which
bizarrely actually encodes the data into bytes, correctly detecting the presence
of surrogates and using the correct ascii+surrogateescape encoding.
Reviewed By: singhsrb
Differential Revision: D23720111
fbshipit-source-id: ed40a15056c39730c91067b830f194fbe41e5788
Summary:
This test is flaky because `hg up` does not always read data from the stores,
and thus does not always fail to read the LFS blob. A better way to force a
read from the stores is to simply use `hg log -p`.
Reviewed By: DurhamG, singhsrb
Differential Revision: D23718823
fbshipit-source-id: 98bc37a76e93a67d031ba7bfa124b1db816983a1
Summary: The files use Python 3-only syntax and are not really used. Skip them so the Python 2 build won't hit invalid syntax issues.
Reviewed By: chadaustin
Differential Revision: D23717662
fbshipit-source-id: f911a83937be9ccc40194f321e3b41625a68e703
Summary:
Running `setup.py` with Python 3 for a Python 2 build will cause issues, as
`setup.py` writes `.pyc` files in Python 3 format.
Reviewed By: chadaustin
Differential Revision: D23717661
fbshipit-source-id: 38cfabdfdf20424a21f8a5bdaf826e74da2304ac
Summary:
In preparation for moving away from SSH as an intermediate entry point for
Mononoke, let Mononoke work with newly introduced Metadata. This removes any
assumptions we now make about how certain data is presented to us, making the
current "ssh preamble" no longer central.
Metadata is primarily based around identities and provides some
backwards-compatible entry points to make sure we can satisfy downstream
consumers of commits like hooks and logs.
Similarly, we now do our own reverse DNS resolution instead of relying on what's
been provided by the client. This is done asynchronously and we don't rely
on the result, so Mononoke can keep functioning in case DNS is offline.
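A sketch of the DNS fallback idea, assuming tokio (resolve_ptr is a hypothetical
stand-in for the real resolver):
```
use std::io;
use std::net::IpAddr;
use std::time::Duration;

// Run the reverse DNS lookup with a short timeout and treat any failure as
// "hostname unknown" rather than an error, so connections keep working when
// DNS is slow or offline.
async fn client_hostname(addr: IpAddr) -> Option<String> {
    match tokio::time::timeout(Duration::from_secs(2), resolve_ptr(addr)).await {
        Ok(Ok(hostname)) => Some(hostname),
        _ => None,
    }
}

// Hypothetical stub; a real implementation would issue a PTR query.
async fn resolve_ptr(_addr: IpAddr) -> io::Result<String> {
    Err(io::Error::new(io::ErrorKind::Other, "unimplemented"))
}
```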
Reviewed By: farnz
Differential Revision: D23596262
fbshipit-source-id: 3a4e97a429b13bae76ae1cdf428de0246e684a27
Summary:
As it says in the title, this adds support for receiving compressed responses
in the revisionstore LFS client. This is controlled by a flag, which I'll
roll out through dynamicconfig.
The hope is that this should greatly improve our throughput to corp, where
our bandwidth is fairly scarce.
Reviewed By: StanislavGlebik
Differential Revision: D23652306
fbshipit-source-id: 53bf86d194657564bc3bd532e1a62208d39666df
Summary:
This imports the async-compression crate. We have an equivalent-ish in
common/rust, but it targets Tokio 0.1, whereas this community-supported crate
targets Tokio 0.2 (it offers a richer API, notably in the sense that we
can use it for Streams, whereas the async-compression crate we have is only for
AsyncWrite).
In the immediate term, I'd like to use this for transfer compression in
Mononoke's LFS Server. In the future, we might also use it in Mononoke where we
currently use our own async compression crate when all that stuff moves to
Tokio 0.2.
Finally, this also updates zstd: the version we link to from tp2 is actually
zstd 1.4.5, so it's a good idea to just get the same version of the zstd crate.
The zstd crate doesn't keep a great changelog, so it's hard to tell what has changed.
At a glance, it looks like the answer is not much, but I'm going to look to Sandcastle
to root out potential issues here.
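As a rough sketch of the intended usage (assuming the crate's "futures-io" and
"zstd" features), a compressed response body can be wrapped directly in a
decoder:
```
use std::io;

use async_compression::futures::bufread::ZstdDecoder;
use futures::io::{AsyncBufRead, AsyncReadExt};

// Wrap a compressed AsyncBufRead body in a zstd decoder and read the
// decompressed bytes.
async fn read_decompressed(body: impl AsyncBufRead + Unpin) -> io::Result<Vec<u8>> {
    let mut decoder = ZstdDecoder::new(body);
    let mut out = Vec::new();
    decoder.read_to_end(&mut out).await?;
    Ok(out)
}
```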
Reviewed By: StanislavGlebik
Differential Revision: D23652335
fbshipit-source-id: e250cef7a52d640bbbcccd72448fd2d4f548a48a
Summary: That might be used to pass more data to the server
Reviewed By: markbt
Differential Revision: D23704722
fbshipit-source-id: a6e41d615f6548f2f8fd036814c59573a45f93bc
Summary:
EdenFS is adding a Python 3 Thrift client intended for use by other
projects, and the Mercurial Python 2 build doesn't understand Python 3
syntax files, so switch the default getdeps build to Python 3.
Reviewed By: quark-zju
Differential Revision: D23587932
fbshipit-source-id: 6f47f1605987f9b37f888d29b49a848370d2eb0e
Summary:
We've often had cases where we need to nuke people's caches for various
reasons. It's a huge pain since we haven't had a way to communicate with all hg
clients. Now that we have configerator dynamicconfigs, we can use that to reach
all clients.
This diff adds support for configs like:
```
[hgcache-purge]
foo=2020-08-20
```
The key, 'foo' in this case, is an identifier used to run this purge only once.
The value is a date after which this purge will no longer run. This is useful
for bounding the damage of forgetting about a purge and having it delete caches
over and over in the future for new repos or repos where the run-once marker
file is deleted for some reason.
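A small sketch of the run-once and expiry check (paths and config plumbing
hypothetical); ISO-8601 dates compare correctly as plain strings, so no date
library is needed:
```
use std::path::Path;

fn should_run_purge(key: &str, expiry_date: &str, today: &str, marker_dir: &Path) -> bool {
    // Each purge key leaves a marker file behind so it only ever runs once.
    let marker = marker_dir.join(format!("purged-{}", key));
    // Stop running the purge entirely once the configured date has passed.
    today <= expiry_date && !marker.exists()
}
```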
Reviewed By: quark-zju
Differential Revision: D23044205
fbshipit-source-id: 8394fcf9ba6df09f391b5317bad134f369e9b416
Summary:
`hg cloud rejoin` is used in fbclone.
By providing a bit more information about the available workspaces, we can
improve the user experience and try to eliminate the confusion that multiple
workspaces cause.
Reviewed By: mitrandir77
Differential Revision: D23623063
fbshipit-source-id: 7598c1b58597032c9cfcef0b44b0ec1b00510ffa
Summary:
The corpus rev that biggrep has indexed may not be available in the
local client. Later on in the function it will pull that revision, but earlier
in the function the new logic I added a few weeks ago is just crashing.
That logic was trying to diff against the earlier revision, but that's pretty
arbitrary. Let's just diff against one of the revs at random
(deterministically) and get rid of the need for the hash to exist in the repo
early in the command.
Reviewed By: sfilipco
Differential Revision: D23635801
fbshipit-source-id: 1c284d710b8df9539a696e900183bc10d5d71869
Summary:
Fixes a few issues with Mononoke tests in Python 3.
1. We need to use different APIs to account for the unicode vs bytes difference
for path hash encoding.
2. We need to set the language environment for tests that create utf8 file
paths.
3. We need the redaction message and marker to be bytes. Oddly this test still
fails with jq CLI errors, but it makes it past the original error.
Reviewed By: quark-zju
Differential Revision: D23582976
fbshipit-source-id: 44959903aedc5dc9c492ec09a17b9c8e3bdf9457
Summary:
For repositories that have the old-style LFS extension enabled, the pointers
are stored in packfiles/indexedlog along with a flag that signifies to the
upper layers that the blob is externally stored. With the new way of doing LFS,
pointers are stored separately.
When both are enabled, we are observing some interesting behavior where
different get and get_meta calls may return different blobs/metadata for the
same filenode. This may happen if a filenode is stored both in a packfile as an
LFS pointer, and in the LFS store. Guaranteeing that the revisionstore code is
deterministic in this situation is unfortunately way too costly (a get_meta
call would for instance have to fully validate the sha256 of the blob, and this
wouldn't guarantee that it wouldn't become corrupted on disk before calling
get).
The solution taken here is to simply ignore all the LFS pointers from
packfiles/indexedlog when remotefilelog.lfs is enabled. This way, there is no
risk of reading the metadata from the packfiles, and the blob from the
LFSStore. However, this brings another complication for user-created blobs:
these are stored in packfiles and would thus become unreadable. The solution is
to simply perform a one-time full repack of the local store to make sure that
all the pointers are moved from the packfiles to the LFSStore.
In the code, the Python bindings use ExtStoredPolicy::Ignore directly, as these
are only used in the treemanifest code where no LFS pointers should be present.
The repack code uses ExtStoredPolicy::Use so it can still read the pointers; it
wouldn't be able to otherwise.
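A sketch of the behavior (Entry is a simplified stand-in): a store configured
with ExtStoredPolicy::Ignore treats LFS-pointer entries as missing, so the
LFSStore is the only possible source for those blobs, while repack keeps
ExtStoredPolicy::Use so it can still read and migrate the pointers.
```
enum ExtStoredPolicy {
    Use,
    Ignore,
}

struct Entry {
    data: Vec<u8>,
    is_lfs_pointer: bool,
}

fn lookup(entry: Option<Entry>, policy: ExtStoredPolicy) -> Option<Vec<u8>> {
    match (entry, policy) {
        // Pretend pointer entries are a miss so callers fall through to LFS.
        (Some(e), ExtStoredPolicy::Ignore) if e.is_lfs_pointer => None,
        (Some(e), _) => Some(e.data),
        (None, _) => None,
    }
}
```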
Reviewed By: DurhamG
Differential Revision: D22951598
fbshipit-source-id: 0e929708ba5a3bb2a02c0891fd62dae1ccf18204
Summary:
The client built by hg-http should provide integration with Mercurial's stats
collection mechanisms.
Reviewed By: kulshrax
Differential Revision: D23577867
fbshipit-source-id: 93c777021bc347511322269d678d6879710eed3e
Summary:
Add `with_stats_reporting` to HttpClient. It takes a closure that will be
called with all `Stats` objects generated. We then use this function in
the hg-http crate to integrate with the metrics backend used in Mercurial.
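A minimal sketch of the shape of the hook (the Stats fields shown are
illustrative):
```
pub struct Stats {
    pub downloaded: usize,
    pub uploaded: usize,
    pub requests: usize,
}

type StatsCallback = Box<dyn Fn(&Stats) + Send + Sync + 'static>;

pub struct HttpClient {
    stats_callback: Option<StatsCallback>,
}

impl HttpClient {
    /// Register a closure that gets called with every Stats object generated.
    pub fn with_stats_reporting<F>(mut self, callback: F) -> Self
    where
        F: Fn(&Stats) + Send + Sync + 'static,
    {
        self.stats_callback = Some(Box::new(callback));
        self
    }

    fn report(&self, stats: &Stats) {
        if let Some(callback) = &self.stats_callback {
            callback(stats);
        }
    }
}
```
hg-http would then register a callback that forwards these counts into
Mercurial's metrics.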
Reviewed By: kulshrax
Differential Revision: D23577869
fbshipit-source-id: 5ac23f00183f3c3d956627a869393cd4b27610d4
Summary: Rust-based metrics so that even Rust libraries can write metrics.
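A small sketch of what such a counter API could look like, assuming the
once_cell crate (names hypothetical):
```
use std::collections::HashMap;
use std::sync::Mutex;

use once_cell::sync::Lazy;

// A process-wide counter registry: Rust code bumps counters directly, and the
// bindings can read the same map when reporting.
static COUNTERS: Lazy<Mutex<HashMap<String, u64>>> =
    Lazy::new(|| Mutex::new(HashMap::new()));

pub fn increment(name: &str, value: u64) {
    let mut counters = COUNTERS.lock().unwrap();
    *counters.entry(name.to_string()).or_insert(0) += value;
}

pub fn summarize() -> HashMap<String, u64> {
    COUNTERS.lock().unwrap().clone()
}
```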
Reviewed By: quark-zju
Differential Revision: D23577870
fbshipit-source-id: b19904968d9372c8ce19775fb37c7af53a370ea5
Summary:
We start off simple here. Python only really has counters so we only implement
counters. There are a lot of options on how to improve this and things get
slightly complicated when we look at the whole ecosystem and fb303. Anyway,
simple start.
Reviewed By: quark-zju
Differential Revision: D23577874
fbshipit-source-id: d50f5b2ba302d900b254200308bff7446121ae1d
Summary:
Slash is probably the standard metric delimiter nowadays. Since we don't have
that many metrics I think that it makes sense to look at slash as the
standard metric delimiter going forward.
This diff updates parsing of metric names to treat both '_' and '/' as
delimiters.
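A small sketch of the parsing change:
```
// Split metric names on both the legacy '_' delimiter and the preferred '/'
// delimiter, so "exchange_pull_size" and "exchange/pull/size" name the same
// metric hierarchy.
fn metric_segments(name: &str) -> Vec<&str> {
    name.split(|c: char| c == '_' || c == '/')
        .filter(|segment| !segment.is_empty())
        .collect()
}
```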
Reviewed By: quark-zju
Differential Revision: D23577876
fbshipit-source-id: 03997b1285df9c52d6e2837b5af5372deb69b133
Summary:
The command is easier to use than `hg cloud join --switch`.
Also highlight the workspace name in the output of `hg cloud status`
Reviewed By: mitrandir77
Differential Revision: D23601507
fbshipit-source-id: 74eb17c9366a9dbe96881c8e3e0705619fadb3d6
Summary:
The streaming clone implementation did not check that received files have the correct size. This change addresses it.
Before this change, if the connection was interrupted for whatever reason, the client would treat the fetch of changesets as successful and proceed with cloning operations, but later checks would report corruption of the internal state of hg data. This is based on a user [report](https://fb.workplace.com/groups/scm/permalink/3177150312334567/)
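A sketch of the kind of check this adds (the stream framing is simplified):
```
use std::io::{self, Read};

// Read exactly the announced number of bytes for each streamed file and fail
// the clone if the connection ends early, instead of silently truncating.
fn read_streamed_file(reader: &mut impl Read, expected_size: u64) -> io::Result<Vec<u8>> {
    let mut buf = Vec::with_capacity(expected_size as usize);
    reader.take(expected_size).read_to_end(&mut buf)?;
    if (buf.len() as u64) != expected_size {
        return Err(io::Error::new(
            io::ErrorKind::UnexpectedEof,
            format!("expected {} bytes, received {}", expected_size, buf.len()),
        ));
    }
    Ok(buf)
}
```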
Reviewed By: quark-zju, krallin
Differential Revision: D23572058
fbshipit-source-id: d740b45ca217cd6db0a65e01aabc2ba9a4835221
Summary: The Mercurial codebase uses hyphens in crate names rather than underscores. This is similar to the convention favored by the larger Rust community, though it is different from Mononoke, which uses underscores. While we'll probably need to eventually settle on a consistent convention for all of the projects in the Eden SCM repo, for now, `http_client` should be made consistent with the adjacent crates.
Reviewed By: sfilipco
Differential Revision: D23585721
fbshipit-source-id: d2e690d86815be02d7b8d645198bcd28e8cbd6e0
Summary: No more tokio-core! More `async/await`.
Reviewed By: kulshrax
Differential Revision: D23586509
fbshipit-source-id: b2e766ddb7575bc96963432f0c8582b4370b19aa
Summary:
This diff adds a `SocketTransport` implementation that no longer uses legacy `tokio-core` based futures but `tokio-tower` and `tower-service` for processing Thrift requests.
The old implementation is renamed to `SocketTransportLegacy` for better transitioning.
Reviewed By: dtolnay
Differential Revision: D20019196
fbshipit-source-id: 3bee684e9254bf1a81669ef0d2c2262a55e75daa
Summary:
In order to keep the hgcache size bounded we need to keep track of pack
file size even during normal operations and delete excess packs.
This has the negative side effect of deleting necessary data if the operation is
legitimately huge, but we'd rather have extra downloading time than fill up the
entire disk.
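A sketch of the trimming idea (the pack listing is simplified to (path, size,
mtime) tuples):
```
use std::fs;
use std::io;
use std::path::PathBuf;
use std::time::SystemTime;

// Keep the newest packs up to the configured cache budget and delete whatever
// falls past it.
fn trim_packs(mut packs: Vec<(PathBuf, u64, SystemTime)>, budget_bytes: u64) -> io::Result<()> {
    // Newest first, so recently written data survives.
    packs.sort_by(|a, b| b.2.cmp(&a.2));
    let mut kept = 0u64;
    for (path, size, _mtime) in packs {
        kept += size;
        if kept > budget_bytes {
            // The corresponding index file would be removed as well.
            fs::remove_file(&path)?;
        }
    }
    Ok(())
}
```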
Reviewed By: quark-zju
Differential Revision: D23486922
fbshipit-source-id: d21be095a8671d2bfc794c85918f796358dc4834
Summary:
In a future diff we'll add logic to delete old pack files. We'll want
to use this pack iteration code, so let's move it to a function.
Reviewed By: quark-zju
Differential Revision: D23486920
fbshipit-source-id: 5f872e946ffe816289c925dd2e03c292e29da5af
Summary:
As the repository grows the opportunity for large downloads increases.
Today all writes to data packs get sent straight to disk, but we have no way to
prevent this from eating all the disk.
Let's automatically flush datapacks when they reach a certain size (default
4GB). In a future diff this will let us automatically garbage collect data packs
to bound the maximum size of packs.
Rotatelog already has this behavior.
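A minimal sketch of the auto-flush behavior (fields hypothetical):
```
use std::io;

const DEFAULT_MAX_PENDING_BYTES: u64 = 4 * 1024 * 1024 * 1024; // 4GB

struct MutableDataPack {
    pending_bytes: u64,
    max_pending_bytes: u64,
}

impl MutableDataPack {
    fn new() -> Self {
        MutableDataPack {
            pending_bytes: 0,
            max_pending_bytes: DEFAULT_MAX_PENDING_BYTES,
        }
    }

    fn add(&mut self, data: &[u8]) -> io::Result<()> {
        // ... append the entry to the pending pack ...
        self.pending_bytes += data.len() as u64;
        if self.pending_bytes >= self.max_pending_bytes {
            self.flush()?;
        }
        Ok(())
    }

    fn flush(&mut self) -> io::Result<()> {
        // ... finalize the pack and its index on disk, then start fresh ...
        self.pending_bytes = 0;
        Ok(())
    }
}
```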
Reviewed By: quark-zju
Differential Revision: D23478780
fbshipit-source-id: 14f9f707e8bffc59260c2d04c18b1e4f6bdb2f90