Summary:
Practically, our client-side revlog changelogs have been non-inline for a
long time, so there is no need to keep the migration logic.
The revlog is being deprecated too, so its implementation details (inline) are
going to be irrelevant. The related test is removed.
Reviewed By: DurhamG
Differential Revision: D28974551
fbshipit-source-id: ea456c46dac11d6a8b225c269b49598ab34c2548
Summary: It was only useful server-side and will be incompatible with upcoming changes.
Reviewed By: DurhamG
Differential Revision: D28974549
fbshipit-source-id: 70a715ce170baa78adb8b1fcf7d29e3d1479c05e
Summary: hgsql is irrelevant. Remove more tests that will be broken by upcoming changes.
Reviewed By: DurhamG
Differential Revision: D29019287
fbshipit-source-id: 6fd04d2eb088a0ca9c975b25a4f28a5772f0e088
Summary:
This test uses bundle2 details that are hard to maintain.
Let's just remove the test.
Reviewed By: DurhamG
Differential Revision: D29019286
fbshipit-source-id: a64918736039331bf2fc3cd23e9c67dd77510c22
Summary:
This test is too tricky to maintain with modern setups. Namely, we don't
support strip and are dropping revlog usage. Let's just remove the test.
Reviewed By: DurhamG
Differential Revision: D28974550
fbshipit-source-id: e8d30e726735432820ceaf4ef27d1b83753122a4
Summary:
We no longer use this code base for hg server logic.
The test requires the non-Rust commit backend, which will be removed.
Reviewed By: DurhamG
Differential Revision: D28974547
fbshipit-source-id: 433a6697f6cbf08450c43ce810490fcdb53cf718
Summary: It tests revlog details that are going to be irrelevant.
Reviewed By: DurhamG
Differential Revision: D28974552
fbshipit-source-id: 3ff08473be236849442c3c30d5cf1e1c2a1b628d
Summary: It tests revlog details that are going to be irrelevant.
Reviewed By: DurhamG
Differential Revision: D28974548
fbshipit-source-id: a44e97daa24aece446d899e7711a59cb4a133398
Summary: Revise the help text so it matches the latest implementation.
Reviewed By: DurhamG
Differential Revision: D28971683
fbshipit-source-id: 3f8fb7ccc42a71fdb65b87e2b99d06fb347983f3
Summary: This option allows excluding ignored files; this way, if the user specifies wildcard patterns in --include, ignored files will still not be included.
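A minimal sketch of the intended filtering (hypothetical names, not the real hg matcher API):

```python
import fnmatch

def filter_status_files(files, include_patterns, ignored, exclude_ignored=True):
    """Return files matching any --include pattern, optionally dropping
    ignored files even when a wildcard pattern would match them.
    Illustrative sketch only."""
    matched = [
        f for f in files
        if any(fnmatch.fnmatch(f, pat) for pat in include_patterns)
    ]
    if exclude_ignored:
        # Wildcards like "*" would otherwise pull in ignored files too.
        matched = [f for f in matched if f not in ignored]
    return matched
```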
Reviewed By: mrkmndz
Differential Revision: D29007062
fbshipit-source-id: a8458811b4c16e11a91abdc31967b53c3cdf2ed7
Summary:
`df` doesn't exist on Windows, so this part of `rage` isn't populated.
The closest equivalent on Windows is `wmic LogicalDisk`. Use this to query the
free space and size of the disks.
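A sketch of how the `wmic` output could be queried and parsed (`parse_wmic_table` and its assumed table shape are illustrative, not the actual rage implementation):

```python
import subprocess

def parse_wmic_table(text):
    """Parse wmic's whitespace-padded table output into
    {device: (free_bytes, size_bytes)}."""
    disks = {}
    lines = [l for l in text.splitlines() if l.strip()]
    for line in lines[1:]:          # skip the header row
        fields = line.split()
        if len(fields) == 3:        # drives with no media report blanks
            device, free, size = fields
            disks[device] = (int(free), int(size))
    return disks

def disk_free_windows():
    """Query free space and size per logical disk via `wmic LogicalDisk`,
    the closest Windows analogue of `df`."""
    out = subprocess.check_output(
        ["wmic", "LogicalDisk", "get", "DeviceID,FreeSpace,Size"],
        text=True,
    )
    return parse_wmic_table(out)
```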
Reviewed By: quark-zju
Differential Revision: D28997337
fbshipit-source-id: 08b3b74d70928f2e9801061f049359a58108f4bf
Summary: Update to the latest version. This includes a patch to the async-compression crate from [my PR updating it](https://github.com/Nemo157/async-compression/pull/125), which I will remove once the crate is released.
Reviewed By: mitrandir77
Differential Revision: D28897019
fbshipit-source-id: 07c72f2880e7f8b85097837d084178c6625e77be
Summary:
Now that we don't publish any Python 2 packages, let's drop `make local`.
Once we've confirmed that nothing was using `make local`, we can rename
`make local3` to `make local`.
Reviewed By: kulshrax
Differential Revision: D28647154
fbshipit-source-id: de277887e93a6dbc0324a30f592198ef7c83f818
Summary:
More things that need fixing to be Python 3 compatible. Caught while
trying to remove the Python 2 build.
Reviewed By: quark-zju
Differential Revision: D28880028
fbshipit-source-id: d162c78237f330f1f931c3581b25ead24e3ea375
Summary:
The combination of metalog and the new clone pattern of first creating
the repo causes local copy clones to fail on Windows because the initial metalog
files are held open and the copy can't overwrite them.
Let's drop the destrepo before we do the local copy.
Reviewed By: quark-zju
Differential Revision: D28880029
fbshipit-source-id: 2a4ef52675eebf16afa528e645acd927a6110cb4
Summary: This will be used for rate limiting decisions. It could also be logged to Scuba tables to get more info about clients.
Reviewed By: quark-zju
Differential Revision: D28750197
fbshipit-source-id: 83f54e38f998c9dd824ef2d3834c777a44d0ffed
Summary: Let clients connect to LFS over HTTP through a Unix socket so we don't have to worry about certificate presence.
Reviewed By: johansglock
Differential Revision: D28683392
fbshipit-source-id: f6228b4099ef04fe584e320cb1892e6cb513e355
Summary:
Create an end-to-end integration test for the lookup API on the client.
Start prototyping the `hg cloud upload` command.
Currently, it just performs a lookup for existing heads.
This way we can test the new APIs end to end.
Reviewed By: markbt
Differential Revision: D28848205
fbshipit-source-id: 730c1ed4a21c1559d5d9b54d533b0cf551c41b9c
Summary:
File upload will be executed in two stages:
* check if the content is already present
* upload the missing files
The check API is generic and can be used for any id type; it is called the 'lookup' API.
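The two-stage flow can be sketched generically (`lookup` and `upload` are stand-ins for the real API calls):

```python
def upload_missing(ids, lookup, upload):
    """Two-stage upload: first 'lookup' which ids the server already has,
    then upload only the missing ones. Illustrative sketch; `lookup` and
    `upload` stand in for the corresponding API calls."""
    present = set(lookup(ids))
    missing = [i for i in ids if i not in present]
    if missing:
        upload(missing)
    return missing
```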
Reviewed By: markbt
Differential Revision: D28708934
fbshipit-source-id: 654c73b054790d5a4c6e76f7dac6c97091a4311f
Summary:
Previously we set this in the rpm spec, but we need to set it in make
local as well since sometimes hgbuild invokes make local directly.
Ideally we'd put this in setup.py, since make and rpmspecs go through that, but
we need this environment also set for the dulwich build, which we don't really
control the setup.py for.
Reviewed By: singhsrb
Differential Revision: D28902015
fbshipit-source-id: bfc170c3027cc43b24c6a517512a63a71f433d23
Summary:
The recent change to make run-tests work with Python 3 broke the
allow/deny list functionality because it started testing the full test name
instead of the base. This fixes that.
Reviewed By: quark-zju
Differential Revision: D28885125
fbshipit-source-id: 586a71e66e0f094b79e6a3e07e27813db6f662d3
Summary: create `uncopy` command to unmark files that were copied using `copy`.
Reviewed By: quark-zju
Differential Revision: D28821574
fbshipit-source-id: c1c15f6fb2837cec529860aba70b516ddd794f10
Summary:
Time 0.2 is current, and 0.1 is long obsolete. Unfortunately there's a
large 0.1 -> 0.2 API change, so I preserved 0.1 and updated the targets of its
users. Also unfortunate that `chrono` has `oldtime` as a default feature, which
makes it use `time-0.1`'s `Duration` type. Excluding it from the features
doesn't help because every other user is specifying it by default.
Reviewed By: dtolnay
Differential Revision: D28854148
fbshipit-source-id: 0c41ac6b998dfbdcddc85a22178aadb05e2b2f2b
Summary:
They are breaking, and hgsql is not relevant (the hg server repo was forked). So
let's just remove the tests.
Reviewed By: andll
Differential Revision: D28852159
fbshipit-source-id: 04a47ea489b3f190cffe7f714a9f4161847a2c86
Summary:
Fix remaining issues, like encoding and the `bname` vs `name` difference
(`bname` was deleted by a previous change, but it differs from `name` by more
than just encoding: `bname` does not have the " (case x)" suffix).
Differential Revision: D28852092
fbshipit-source-id: df013b284414600deb6f20a5c0883f09906bf976
Summary:
Instrument file scmstore with tracing logging. There's more we should add here, but this will be a good starting place - I've already discovered some issues from looking at the log output. (Why does drop run twice? How does it run twice?)
It'd also probably be nice to support formatting the output like https://crates.io/crates/tracing-tree, which will be a lot less cluttered by the logged fields (like `attrs` on `fetch`).
Reviewed By: DurhamG
Differential Revision: D28750954
fbshipit-source-id: 63baa602f7147d24ac3e34defa969a70a92f96a4
Summary:
Now that EdenFS is using EdenAPI more, let's let it take advantage of
EdenAPI's better batching. We already have a batch API for files, so let's copy
the pattern for trees as well. This adds the C++ bindings. The next diff
consumes this from EdenFS.
This is largely just a copy of how batch blob fetching does this. But I'm a C++
noob, so feel free to tear this apart with nits.
Reviewed By: chadaustin
Differential Revision: D28426789
fbshipit-source-id: 88d359985e849018fb3c2b4ef9e52d07c96bf31a
Summary:
Now that EdenFS is using EdenAPI more, let's let it take advantage of
EdenAPI's better batching. We already have a batch API for files, so let's copy
the pattern for trees as well. This first diff just produces the Rust code.
Future diffs will add the C++ bindings, then integrate it into EdenFS.
This is largely just a copy of how batch blob fetching does it.
Reviewed By: chadaustin
Differential Revision: D28426790
fbshipit-source-id: 822ef6e7b3458df5dba7a007657e85351162b9ff
Summary: Windows has an issue where subprocesses don't inherit stdout/stderr correctly. util.system() has a workaround for this, so let's use it instead of subprocess when executing shell aliases. This fixes 'hg pull --rebase', which is a shell alias.
Reviewed By: kulshrax
Differential Revision: D28815381
fbshipit-source-id: 7521c17166a2b2c0e4ee872dacfd09d2d97e00ce
Summary: These are breaking buck test runs
Reviewed By: quark-zju
Differential Revision: D28802741
fbshipit-source-id: a30c7b64d72356df05676ffab87291a246033d49
Summary:
Previously, this required migrating to doublewrite first. There is no reason
the doublewrite migration cannot be done automatically, so let's do it.
Reviewed By: DurhamG
Differential Revision: D28757734
fbshipit-source-id: ba2533b5506309610b87865a838d7efe22bccfac
Summary:
Add the `fetch_contentsha256` python method to `filescmstore`, which accepts a list of keys and returns a list of (key, sha256).
This is intended to be used by the modified `status` command implementation, which will prefer comparing content hashes to directly comparing file content.
Reviewed By: DurhamG
Differential Revision: D28696618
fbshipit-source-id: a0304319b0a19d4f09d07bec02dc41964aec7255
Summary:
Merge `found_file` and `found_aux_indexedlog` into a new `found_attributes` method, which simply "or"s the newly found attributes into the `found` map.
Replace the `satisfies` concept with a new `pending` check, used the same way by each `pending_*` method, which considers a key pending if fetching from a store that returns a given set of attributes would allow us to resolve any requested-but-missing attributes, optionally taking into account attributes that can be computed from those already found. This will still need to be adjusted to support preferring remote fetching of attributes over local computation, but it is no longer as brittle as the previous implementation: there is no requirement that aux data be computed as content is fetched in order to avoid redundantly fetching content.
Move attribute computation to a separate phase, and filter out un-requested attributes in the `finish` function.
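The `pending` check described above can be sketched with plain bit masks (names illustrative, not the real scmstore types):

```python
def is_pending(requested, found, obtainable, computable=0):
    """A key is still pending with respect to a store if fetching the
    attributes the store can return (plus any attributes computable from
    them) would resolve some requested-but-missing attribute.
    Attribute sets are bit masks; all names here are illustrative."""
    missing = requested & ~found
    return bool((obtainable | computable) & missing)
```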
Reviewed By: DurhamG
Differential Revision: D28694192
fbshipit-source-id: 9b096c056736cadc0f97ff09243ed09d5266504d
Summary: Use associated constants instead of methods for `FileAttributes` bit masks.
Reviewed By: DurhamG
Differential Revision: D28724729
fbshipit-source-id: 441c0d2361166824c4ee7cfd5ad0b6f21ee1ac26
Summary:
Previously, the `found_error` required `&mut self`, even though it only ever interacted with the error fields. This prevents Rust's type checker from validating the safety of logging errors while iterating over the `found` map, for instance.
Replacing the `&mut self` method call with a field access into an existing `&mut self` resolves this problem, and allows logging errors while mutating other fetch state.
Reviewed By: DurhamG
Differential Revision: D28722547
fbshipit-source-id: 59c6a530cbf331282d6f654a56e492d47cafcd2f
Summary:
Don't try to fetch from a store if we don't have any pending keys.
Handle missing content when writing to the cache after fetching from remote stores. Currently, `found_in_*` will be populated even if we don't store the content, having just used it for aux data computation. After the next change, which only prunes overfetching in the `finish` method (allowing remote blobs to be written to the local cache even if we only fetched them to compute their attributes), this change won't be necessary, but it won't cause any problems either. I might revert this portion of the change, or warn if content is unexpectedly unavailable.
Reviewed By: DurhamG
Differential Revision: D28694964
fbshipit-source-id: 465211c9257cbf49b1cb68856473323fc940f10b
Summary: Extends the previous change to add support for computing aux data (currently only Content Sha256) and caching it locally. Introduces a `FetchState` config option, `compute_aux_data`, which controls if content will be fetched in order to compute aux data, or if unavailable aux data will be treated as "not found".
Reviewed By: DurhamG
Differential Revision: D28528456
fbshipit-source-id: 26189d18c8e453040f3c1f6e22a34d623a5aa40d
Summary:
The migration to Python 3 broke the unified diff code because difflib
expected the paths to also be bytes.
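For reference, Python 3's `difflib.diff_bytes` is the standard way to diff byte content with byte file names:

```python
import difflib

a = [b"hello\n"]
b = [b"world\n"]

# diff_bytes adapts a str-based diff function (unified_diff here) to
# bytes content; fromfile/tofile must also be bytes, matching the lines.
diff = list(difflib.diff_bytes(
    difflib.unified_diff, a, b, fromfile=b"a.txt", tofile=b"b.txt"))
```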
Reviewed By: quark-zju
Differential Revision: D28758876
fbshipit-source-id: 367ef237594d2908377cd8b81def364b77ee02e2
Summary:
`rm -A` means removing files that are "deleted" (`rm`-ed but not `hg rm`-ed).
It does not need to list clean files. Listing clean files can be very slow
in a large repo.
Avoid listing clean files so `rm -A` can be faster.
This has a side effect: we no longer maintain the exit value (0: repo
becomes empty, 1: repo is not empty) like before. But I guess nobody really
cares about the 1 exit value (and it does not really make sense in the `rm -A`
case).
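A toy model of the `-A` behavior (hypothetical status shape, not the real hg API):

```python
def files_to_unmark(status, after=False):
    """With -A/--after, only files already deleted from disk need handling,
    so clean files need not be listed at all (listing them can be very slow
    in a large repo). Illustrative sketch, not the real implementation."""
    if after:
        return list(status["deleted"])
    # Without --after, named clean files are removed as well.
    return list(status["deleted"]) + list(status["clean"])
```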
Reviewed By: DurhamG
Differential Revision: D28622558
fbshipit-source-id: 2087d6508932905564a8307e9438895538ecede9
Summary:
The usage of bytes for paths and environment variables makes this entire file hacky and makes it not work on Windows. Let's remove all of that.
We still use bytes for test output and other file content type cases.
Reviewed By: andll
Differential Revision: D28227825
fbshipit-source-id: b15993601db501160c9fa4eb2463678cde1fa554
Summary:
Previously, migrating to lazy only meant repo requirement changes. This diff uses
the new API to actually make the changelog lazy.
Reviewed By: DurhamG
Differential Revision: D28700896
fbshipit-source-id: 82cfd70645230cd67223195e25ef07ae5abe7df6
Summary:
Switch debugrebuildchangelog from using revlog stream clone to lazy segment clone.
This removes the revlog techdebt and can be used as a way to repair
repos with broken segmented changelog. As we migrate off double-write backend we
can no longer migrate down to revlog then migrate up, and a full reclone can be
slow. So a partial reclone command that just recreates the segmented changelog
seems useful.
This command is one of the two commands that handle emergency situations
when segmented changelog related logic goes wrong. The other command
is the emergency clone mode, added by D27897892 (d1413bbbad), which assumes everything
related to segmented changelog is broken on Mononoke side and we still
need to commit and push. This command relies on segmented changelog
related features, such as hash<->location lookup, and clone on Mononoke
to work properly and the server having a compatible IdMap. So it might
not be able to address all issues if Mononoke goes wrong.
Reviewed By: DurhamG
Differential Revision: D28430885
fbshipit-source-id: 17357a33f6fda4a67d46e2c7e7be6653b530f499
Summary:
Use the interruptible block_on API so the Python methods can be interrupted by Ctrl+C.
This is especially useful if some operation triggers lots of expensive network fetches.
Reviewed By: DurhamG
Differential Revision: D28723008
fbshipit-source-id: b6c692de6290a49852eabcd960ebd9b2fb68685a
Summary:
This will be used by the next change to test that migrating from a non-lazy
changelog to a lazy changelog actually makes commits lazy.
More commits were added to the graph to test laziness. The old graph
does not have commits that will be made lazy by the current standard
(parents of merges are not lazy).
Reviewed By: DurhamG
Differential Revision: D28700897
fbshipit-source-id: 527c3ce672327ed5e2398c0d89a8e9e92e5b244f
Summary:
This will be used by the next change to migrate from a non-lazy changelog to a
lazy changelog.
Reviewed By: DurhamG
Differential Revision: D28700898
fbshipit-source-id: ff12754f224586b9d0d62f73b41bbb07fc7a6eea
Summary:
If a patch declared the length of its last hunk as N lines, but it
only contained N-1 lines, the Rust code would enter an infinite loop. This
could happen if a text editor removed the trailing spaces from a patch file.
Let's fix it and add a test.
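The fix amounts to bounding the hunk-reading loop by the remaining input; a Python sketch of the idea (not the actual Rust parser):

```python
def read_hunk_lines(lines, start, declared_len):
    """Collect `declared_len` hunk lines starting at `start`. A malformed
    patch may declare more lines than remain (e.g. an editor stripped a
    trailing ' ' context line); stop at end-of-input instead of looping
    forever. Sketch of the fix, not the real parser."""
    hunk = []
    i = start
    while len(hunk) < declared_len:
        if i >= len(lines):          # truncated hunk: bail out, don't spin
            break
        hunk.append(lines[i])
        i += 1
    return hunk, i
```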
Reviewed By: kulshrax
Differential Revision: D28683977
fbshipit-source-id: 0a999ae108676531a2cf18e77a3b426ba4647164
Summary: Sometimes things take longer; make sure we are able to distinguish whether that's due to networking, TLS handshake, HTTP parsing, or Mononoke wireproto handling.
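A minimal sketch of per-stage timing instrumentation (illustrative, not the actual wireproto code):

```python
import time

class StageTimer:
    """Record wall-clock durations of named stages (networking, TLS
    handshake, HTTP parsing, server handling, ...) so a slow request can
    be attributed to a specific stage. Illustrative sketch."""
    def __init__(self):
        self.durations = {}

    def record(self, name, fn, *args, **kwargs):
        start = time.monotonic()
        try:
            return fn(*args, **kwargs)
        finally:
            # Recorded even if the stage raises, so failures are timed too.
            self.durations[name] = time.monotonic() - start
```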
Reviewed By: markbt
Differential Revision: D28705508
fbshipit-source-id: 1bafda7fc447f2e429690f47fe7ab81cec511494
Summary:
Extends the `FileScmStoreBuilder` to construct two new indexedlog stores for caching aux data. The stores will be created in a directory adjacent to the normal non-LFS indexedlog stores.
Currently, aux data stores will not be constructed for production users; a configuration option will be introduced to gate this before `.store_aux_data()` is called in the `filescmstore` constructor bindings.
Reviewed By: DurhamG
Differential Revision: D28689693
fbshipit-source-id: e3ad1594e5beee00b1a8b9fe489e3b6af3a2e93e
Summary:
Modify `FileStore` to introduce basic aux data fetching. Aux data is currently read from a separate IndexedLog store, serialized with `serde_json` (chosen for expediency / ease of debugging, I intend to optimize the storage format before releasing this, at the very least to avoid unnecessarily serializing the key path).
Currently aux data fetching will never succeed, as aux data fetching is not supported in the EdenApi "files" API and nothing else exists to populate the local aux data stores. Later in this stack, computing aux data (currently only content sha256) to populate the aux data storage is implemented.
Reviewed By: DurhamG
Differential Revision: D28526788
fbshipit-source-id: c8e21a1377689d7913a68426a3a480d53148da66
Summary:
Simplify tracking of incomplete fetches in preparation for attributes support in the next change.
Now, all keys which have not been completely and successfully fetched are recorded in `pending`, and are removed only when the complete fetch is recorded in `found`. Keys are now removed from `lfs_pointers` and `pointer_origin` as they are completed, as they aren't needed for anything other than fetching from local LFS and remote LFS respectively.
Reviewed By: DurhamG
Differential Revision: D28546515
fbshipit-source-id: c657e5c6350cadc8da970f57bb7694ed71022efb
Summary:
Now metalog can no longer be `None`. Let's just remove logic handling its
`None` case.
This changes the commitcloud-sync-race test because the metalog itself has
internal locking and changes are atomic.
Reviewed By: DurhamG
Differential Revision: D28595292
fbshipit-source-id: bd9851f5f3bb25f28f15d673f608af2863953c46
Summary:
fncache and store have been default on for years. Enable them unconditionally.
This also makes sure that metalog is always available.
Practically, the only places that do not use fncache are hgsql server repos,
and those are irrelevant now.
Reviewed By: DurhamG
Differential Revision: D28595289
fbshipit-source-id: 32b9906c179518acdb17a206b54f98a3dc994921
Summary: I have modified the places where most of the errors were raised that users reported and were resolved by renewal of certificates.
Reviewed By: krallin
Differential Revision: D28568561
fbshipit-source-id: 44fb127a49bde83efee1c934e0435b31f8602a8d
Summary: Upcoming changes will force enable metalog so there will be no way to migrate down.
Reviewed By: DurhamG
Differential Revision: D28595290
fbshipit-source-id: a130b3c60c5b553d024868f28a28e48c50d44783
Summary:
It was added by D8527475 (72c3d8afc1) to workaround hgsql with no-fncache and long file
names synced from svn. Upcoming changes will force fncache to simplify
configuration and the hgsql server code was forked. So let's just delete
the test.
Reviewed By: DurhamG
Differential Revision: D28595291
fbshipit-source-id: 60d2449cca7af46b8b5b3c3b557a36507ff1576e
Summary: This will be used by fbclone to ship lazy commit hash backend.
Reviewed By: DurhamG
Differential Revision: D28554445
fbshipit-source-id: a263ae7683124b3b86f4025b02c7de20dcb9813e
Summary: This makes it possible to compact the metalog without using debugshell.
Reviewed By: DurhamG
Differential Revision: D28550902
fbshipit-source-id: 789830ba35243d248397e6a52ee343584c1e01a9
Summary:
The "compact" API rebuilds the metalog by removing older history. It could be
useful to reduce the size overhead of the metalog.
This is also useful if we're doing other "rebuild" work, such as rebuilding the
changelog.
Reviewed By: DurhamG
Differential Revision: D28550903
fbshipit-source-id: 56f875bd955247181236a976dcce6163d126a4b6
Summary:
The zipimport logic requires the pyc mtime to match its source. However, the
Windows system time zone can invalidate it and cause slow startups.
Workaround it by making the zipimport mtime function return a fallback value so
the mtime check is then bypassed.
  # zipimport.py, _unmarshal_code
  source_mtime, source_size = \
      _get_mtime_and_size_of_source(self, fullpath)

  if source_mtime:  # if source_mtime is false, then the check is bypassed.
      # We don't use _bootstrap_external._validate_timestamp_pyc
      # to allow for a more lenient timestamp check.
      if (not _eq_mtime(_unpack_uint32(data[8:12]), source_mtime) or
              _unpack_uint32(data[12:16]) != source_size):
          _bootstrap._verbose_message(
              f'bytecode is stale for {fullname!r}')
          return None
To verify: I changed my Windows time zone from GMT-7 to GMT-4, set PYTHONVERBOSE
and PYTHONDEBUG to 1, ran `hg init -h`, and checked its stderr. It printed lines
like:
  # bytecode is stale for 'edenscm.traceimport'
before this change, and no longer did after replacing the `__init__.py`
in the zip with the new version.
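The bypass can be simulated with a simplified model of the check (not the real `_unmarshal_code`):

```python
def unmarshal_check(source_mtime, source_size, pyc_mtime, pyc_size):
    """Simplified model of zipimport's staleness check: a falsy
    source_mtime (the fallback value the workaround returns) bypasses
    the mtime/size comparison entirely."""
    if source_mtime:
        if pyc_mtime != source_mtime or pyc_size != source_size:
            return None          # "bytecode is stale"
    return "use cached bytecode"
```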
Reviewed By: DurhamG
Differential Revision: D28622287
fbshipit-source-id: bb3e8e378ea168e4f83f4b6aa9713103b2c90ef8
Summary:
Don't apply an old public bookmark if the commit is older than max_sync_age.
The logic is complicated because we need to make sure that if we later run
with a different commitcloud.max_sync_age value, or with `hg cloud sync --full`,
the bookmarks will appear again.
So changes are required in both:
* checkomission
* _mergebookmarks
Both cases are covered in the tests.
Also, if you run with max_sync_age=1000 and later max_sync_age=0, the bookmarks
will not disappear, which is expected.
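A toy model of the age filter (treating max_sync_age=0 as "limit disabled" is an assumption based on the description above; all shapes are hypothetical):

```python
def bookmarks_to_apply(bookmarks, commit_age, max_sync_age):
    """Skip public bookmarks whose commit is older than max_sync_age.
    With max_sync_age falsy the limit is treated as disabled, so
    previously omitted bookmarks reappear. Illustrative sketch."""
    if not max_sync_age:
        return dict(bookmarks)
    return {name: node for name, node in bookmarks.items()
            if commit_age[node] <= max_sync_age}
```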
Reviewed By: markbt
Differential Revision: D28572875
fbshipit-source-id: 317e897a2b81c3371dbea7eb39b8925570c1d40a
Summary:
This output is too noisy for big workspaces.
If a head is omitted, don't warn about its bookmark, because the omission is expected.
Reviewed By: markbt
Differential Revision: D28568919
fbshipit-source-id: eb19e1d155f65de411c1dd41a8be6d83ca71c264
Summary:
The output is a bit too noisy for large workspaces.
We can skip older commits, since we know the list comes ordered from the Commit Cloud service.
All hashes are available via `hg cloud sl` anyway.
I also fixed several look-ups in the list; omittedheads is heavily used to check whether something is present there.
Reviewed By: markbt
Differential Revision: D28568421
fbshipit-source-id: bcf62522798fed92df7ca546c73aa14da95f1567
Summary: Add config pull.httpbookmarks to use the edenapi http protocol to fetch bookmarks in the central local repo pull method. This impacts the pull command, as well as other commands that pull bookmarks.
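Assuming standard hgrc boolean syntax, enabling the new config might look like:

```
[pull]
httpbookmarks = true
```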
Reviewed By: quark-zju
Differential Revision: D27479112
fbshipit-source-id: 2b9821f458ec0af2579143fb2c2ed7d3ff41878a
Summary:
Support decompression for Mononoke connections. When we request it, Mononoke
can compress our stream, saving bandwidth on low-throughput
connections.
Reviewed By: StanislavGlebik
Differential Revision: D28535058
fbshipit-source-id: 7594f72978093a474efd168bb87b41c415310d6c
Summary: It can be used by `cloud ssl` template after D28000088 (b506eeea0c).
Reviewed By: liubov-dmitrieva
Differential Revision: D28561180
fbshipit-source-id: fb4bf3de85f7c320c13a2a53c6a103e85ebb5425
Summary:
Like it says in the title. The API between Bytes 1.x has changed a little bit,
but the concepts are basically the same, so we just need to change the
callsites that were calling `bytes()` and have them ask for `chunk()` instead.
This diff attempts to be as small as it can (and it's already quite big). I
didn't attempt to update *everything*: I only updated whatever was needed to
keep `common/rust/tools/scripts/check_all.sh` passing.
However, there are a few changes that fall out of this. I'll outline them here:
## `BufExt`
One little caveat is the `copy_to_bytes` we had on `BufExt`. This was
introduced into Bytes 1.x (under that name), but we can't use it here directly.
The reason we can't is because the instance we have is a `Cursor<Bytes>`, which
receives an implementation of `copy_from_bytes` via:
```
impl<T: AsRef<[u8]>> Buf for std::io::Cursor<T>
```
This means that implementation isn't capable of using the optimized
`Bytes::copy_from_bytes` which doesn't do a copy at all. So, instead, we need
to use a dedicated method on `Cursor<Bytes>`: `copy_or_reuse_bytes`.
## Calls to `Buf::to_bytes()`
This method is gone in Bytes 1.x, and replaced by the idiom
`x.copy_to_bytes(x.remaining())`, so I updated callsites of `to_bytes()`
accordingly.
## `fbthrift_ext`
This set of crates provides transports for Thrift calls that rely on Tokio 0.2
for I/O. Unfortunately, Tokio 0.2 uses Bytes 0.5, so that doesn't work well.
For now, I included a copy here (there was only one required, when reading from
the socket). This can be removed if we update the whole `fbthrift_ext` stack to
Bytes 1.x. fanzeyi had been wanting to update this to Tokio 1.x, but was blocked on `thrift/lib/rust` using Bytes 0.5, and confirmed that the overhead of a copy here is fine (besides, this code can now be updated to Tokio 1.x to remove the copy).
## Crates using both Bytes 0.5 & Bytes 1.x
This was mostly the case in Mononoke. That's no coincidence: this is why I'm
working on this. There, I had to make changes that consist of removing Bytes
0.5 to Bytes 1.x copies.
## Misuse of `Buf::bytes()`
Some places use `bytes()` when they probably mean to use `copy_to_bytes()`. For
now, I updated those to use `chunk()`, which keeps the behavior the same but
keeps the code buggy. I filed T91156115 to track fixing those (in all
likelihood I will file tasks for the relevant teams).
Reviewed By: dtolnay
Differential Revision: D28537964
fbshipit-source-id: ca42a614036bc3cb08b21a572166c4add72520ad
Summary:
This allows us to do staged rollout where some users are using "lazy" backend
and they won't be migrating down to "doublewrite" backend.
Reviewed By: liubov-dmitrieva
Differential Revision: D28554381
fbshipit-source-id: ebe2e25c96fd3b086a451c3909643d19c64a186c
Summary: Migrating from the lazy backend to the lazy backend should be a no-op.
Reviewed By: liubov-dmitrieva
Differential Revision: D28554382
fbshipit-source-id: 71c06584f6f7a89096ce4a94843c88cbea542475
Summary: Modifies `treescmstore` and `filescmstore` to also construct `TreeStore` and `FileStore` respectively. Currently these newly constructed stores are not used anywhere, no application code behavior should change as a result of this.
Reviewed By: DurhamG
Differential Revision: D28237680
fbshipit-source-id: 2bf3fd4b96be8c26e5c1e55cfd2e865f98e6ba91
Summary:
Implement `HgIdDataStore`, `RemoteDataStore`, `LocalStore`, `HgIdMutableDeltaStore`, and `ContentDataStore` for `FileStore`.
Currently I've left `RemoteDataStore::upload` unimplemented, as it's a little more complicated than the other functionality (with lots of private field accesses), and is probably worth building a good API for first. As a temporary workaround, I can store an `LfsRemote` (which requires an associated `LfsStore` for cache) and just call upload on that for now, but that's pretty ugly with the current design. I could also construct one on the fly, but it currently stores a bare `LfsRemoteInner`, not an `Arc<LfsRemoteInner>`. I'll take one of these three approaches after getting the integration tests running with the new `TreeStore` and `FileStore`.
Reviewed By: DurhamG
Differential Revision: D28235602
fbshipit-source-id: 13c72cd9379cba70a2ca7038dad419346fe0b14a
Summary:
Implement `HgIdDataStore`, `RemoteDataStore`, `LocalStore`, `HgIdMutableDeltaStore`, and `ContentDataStore` for `TreeStore`.
Also add a `Drop` impl that flushes the local stores, which matches the behavior of `ContentStore` (such an impl does not exist for the underlying stores, though it might be more appropriate there).
Reviewed By: DurhamG
Differential Revision: D28235060
fbshipit-source-id: 5a12d8c2ecff9fcc204cf437bf6f2a98f08645b4
Summary:
Introduce a new, flat, FileStore implementation. This `FileStore`, like the previously submitted `TreeStore`, directly handles all the fallback, local caching, etc, necessary to implement our storage system.
The API supports fetching batches of `Key`s, writing batches of entries (currently only in the "hg file blob" format, with copy header embedded), and querying only the local subset of underlying stores (to allow implementing `get_missing`). Other store subsets and write features will be added in the future.
Reviewed By: DurhamG
Differential Revision: D28138800
fbshipit-source-id: ca5bb91c66fa078019a19180235dd632ea73a0b3
Summary:
Introduce `from_hg_file_blob` and `from_content` LfsPointersEntry constructors, which are used for creating the correct `LfsPointersEntry` for a `Delta` (HgId + file content).
Add `sha256` accessor to `LfsPointersEntry`. Comments on `LfsPointersEntry` and looking at the construction logic suggest there should always be an associated Sha256 content hash. We use it often, so an accessor is useful to avoid the cumbersome HashMap access + match.
Add `fetch_available` to `LfsStore`, which is used by scmstore for handling cases where either only the pointer, or both the pointer and data are available. Existing LFS code directly accesses the underlying blob and pointer store.
Reviewed By: kulshrax
Differential Revision: D28231747
fbshipit-source-id: e6b1f210605d821f542fcb8e87aea366a0864d44
Summary:
Convert client certificates (which are expected to be supplied as PEM files) into an in-memory PKCS#12 archive to pass into libcurl. This is necessary on certain platforms (such as Windows) whose native crypto APIs do not support loading PEM files.
This was previously landed as D27637069 (5b759a2b52), which unconditionally converted the certificates under the assumption that all major TLS backends support PKCS#12. That assumption is still true, but it did not account for the fact that libcurl itself is dynamically linked on some platforms (such as macOS), and the system libcurl may be too old to support in-memory certs (via `CURLOPT_SSLCERT_BLOB`, added in libcurl version 7.71.0). This diff gates this feature behind the `http.convert-cert` config option, which we can selectively set on platforms where it is needed.
Reviewed By: mzr
Differential Revision: D28524444
fbshipit-source-id: 4af9cdd60b8ef3977ad81abdb8e406c63795e628
Summary:
I forgot to add the fbclone build rule for the Python 3 build, and that's blocking the Mercurial release. This diff fixes that.
(Note: this ignores all push blocking failures!)
Reviewed By: DurhamG
Differential Revision: D28541340
fbshipit-source-id: 2c12583b97ccd18e3a4717b63a4680e8a5c3de46
Summary:
Add a config option to show all bookmarks in the output of `hg cloud sl`.
By default, local bookmarks pointing to public commits are not shown unless the commit is the public root of some draft stack.
Reviewed By: markbt
Differential Revision: D28537657
fbshipit-source-id: 0287c18b1b6c79b271f8a67f604024086a37ffcf
Summary: If you have checked out a shared workspace or another user's workspace, this part of hg doctor could hide commits incorrectly, so it should be skipped.
Reviewed By: markbt
Differential Revision: D28505928
fbshipit-source-id: 65e1b3978a916fad2a33bb4f81ff1b75cd657567
Summary: Fetch bookmarks via the http edenapi protocol in the bookmark command with the --list-remote option when all bookmark patterns are full bookmark names (not prefixes).
Reviewed By: kulshrax
Differential Revision: D27331526
fbshipit-source-id: 4f4eda255c551c9b55c6966569755f493335b458
Summary:
The --workspace-version option is currently ignored by interactive history.
Allow it to be used to specify the initial version. This makes jumping back to
a much older version easier.
Reviewed By: liubov-dmitrieva
Differential Revision: D28478194
fbshipit-source-id: f4f121d919e89c298677256f227f2e96d63ef644
Summary: If this option is enabled, the server will be asked to add them.
Reviewed By: markbt
Differential Revision: D28412810
fbshipit-source-id: d1531ecf97615cdb5e32d72c8c31598e6a406956
Summary:
This was broken by my recent change to have mergetools respect HGPLAIN
instead of ui.formatted.
Reviewed By: andll
Differential Revision: D28423783
fbshipit-source-id: 00831a6cc47acc11574fcf67462a1dccdde21fda
Summary:
Mercurial has gotten stricter about respecting interactive vs
non-interactive commands lately, and now is failing to automatically open the
editor for conflicts during arc pull. Let's force Mercurial to treat the
invocation as an interactive one.
Reviewed By: skotchvail
Differential Revision: D28358999
fbshipit-source-id: 551713a78abfe170f04e8e55318af6e157bae7da
Summary:
getdeps builds are failing on certain versions of Mac because they
choose a system python, which causes setup.py to use a hard coded library
location which isn't correct in our environment. Earlier I changed
pick_python.py to prefer the homebrew python, but it turns out getdeps doesn't
actually use pick_python. This diff fixes that and also instructs python3-sys to
use the correct version, by setting the PYTHON_SYS_EXECUTABLE environment
variable.
Reviewed By: quark-zju
Differential Revision: D28388150
fbshipit-source-id: 9b09e7472733f7a779c6212ae012116cad657b5d
Summary: I use tags extensively and would love for them to be supported as well.
Reviewed By: asm89
Differential Revision: D28348565
fbshipit-source-id: 7d94d048b734c91e7d74a1c3efeefc87943066ad
Summary: Instead of passing a client certificate path to libcurl, load the certificate into memory and pass it to libcurl as a blob using `CURLOPT_SSLCERT_BLOB`. This allows us to convert the certificate format in-memory from PEM to PKCS#12, the latter of which is supported by the TLS engines on all platforms (notably SChannel on Windows, which does not support PEM certificates).
Reviewed By: quark-zju
Differential Revision: D27637069
fbshipit-source-id: f7f8eaafcd1498fabf2ee91c172e896a97ceba7e
Summary:
The Rust `openssl` crate will use dynamic linking by default when built with `cargo`. This is a problem on Windows, since we only support cargo-based builds on that platform, but OpenSSL is not present in the system's shared library search paths.
Since we already have a copy of OpenSSL uploaded to LFS, the simplest solution is to just copy the required DLLs right next to the Mercurial executable so that they will be found at launch.
A better solution would probably be to use static linking here. From reading the crate's documentation (and build script), it seems like setting `OPENSSL_STATIC=1` during the build should force static linking, but in practice I have not been able to get this to work.
Reviewed By: DurhamG
Differential Revision: D28368579
fbshipit-source-id: 3fceaa8d081650d60356bc45ebee9c91ef474319