Summary: This will be used for rate limiting decisions. It could also be logged to Scuba tables to get more info about clients.
Reviewed By: quark-zju
Differential Revision: D28750197
fbshipit-source-id: 83f54e38f998c9dd824ef2d3834c777a44d0ffed
Summary: Let clients connect to LFS with HTTP through a Unix socket so we don't have to worry about certificate presence.
Reviewed By: johansglock
Differential Revision: D28683392
fbshipit-source-id: f6228b4099ef04fe584e320cb1892e6cb513e355
Summary:
Create end-to-end integration tests for the lookup API on the client.
Start prototyping the `hg cloud upload` command.
Currently, it just performs a lookup for existing heads.
This way we can end-to-end test the new APIs.
Reviewed By: markbt
Differential Revision: D28848205
fbshipit-source-id: 730c1ed4a21c1559d5d9b54d533b0cf551c41b9c
Summary:
File upload will be executed in two stages:
* check if content is already present
* upload missing files
The check API is generic and can be used for any id type; it is called the 'lookup' API.
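A minimal sketch of this two-stage flow; `server_has` and `upload_blob` are hypothetical stand-ins for the real lookup and upload APIs:

```python
# Hypothetical sketch of the two-stage upload described above.

def upload_files(blobs, server_has, upload_blob):
    """Upload only the blobs the server does not already have.

    blobs: dict mapping content id -> content bytes.
    server_has: callable(id) -> bool (the generic 'lookup' check).
    upload_blob: callable(id, data) -> None.
    Returns the list of ids that were actually uploaded.
    """
    uploaded = []
    for blob_id, data in blobs.items():
        # Stage 1: check if content is already present.
        if server_has(blob_id):
            continue
        # Stage 2: upload missing files.
        upload_blob(blob_id, data)
        uploaded.append(blob_id)
    return uploaded
```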
Reviewed By: markbt
Differential Revision: D28708934
fbshipit-source-id: 654c73b054790d5a4c6e76f7dac6c97091a4311f
Summary:
Previously we set this in the rpm spec, but we need to set it in make
local as well, since sometimes hgbuild invokes make local directly.
Ideally we'd put this in setup.py, since make and rpmspecs go through that, but
we also need this environment set for the dulwich build, whose setup.py we
don't really control.
Reviewed By: singhsrb
Differential Revision: D28902015
fbshipit-source-id: bfc170c3027cc43b24c6a517512a63a71f433d23
Summary:
The recent change to make run-tests work with Python 3 broke the
allow/deny list functionality because it started testing the full test name
instead of the base. This fixes that.
Reviewed By: quark-zju
Differential Revision: D28885125
fbshipit-source-id: 586a71e66e0f094b79e6a3e07e27813db6f662d3
Summary: Create an `uncopy` command to unmark files that were copied using `copy`.
Reviewed By: quark-zju
Differential Revision: D28821574
fbshipit-source-id: c1c15f6fb2837cec529860aba70b516ddd794f10
Summary:
Time 0.2 is current, and 0.1 is long obsolete. Unfortunately there's a
large 0.1 -> 0.2 API change, so I preserved 0.1 and updated the targets of its
users. It's also unfortunate that `chrono` has `oldtime` as a default feature, which
makes it use `time-0.1`'s `Duration` type. Excluding it from the features
doesn't help because every other user specifies it by default.
Reviewed By: dtolnay
Differential Revision: D28854148
fbshipit-source-id: 0c41ac6b998dfbdcddc85a22178aadb05e2b2f2b
Summary:
They are breaking, and hgsql is no longer relevant (the hg server repo was forked). So
let's just remove the tests.
Reviewed By: andll
Differential Revision: D28852159
fbshipit-source-id: 04a47ea489b3f190cffe7f714a9f4161847a2c86
Summary:
Fix remaining issues like encoding and the `bname` vs `name` difference
(`bname` was deleted by a previous change, but it differs from `name` by more
than encoding: `bname` does not have the " (case x)" suffix).
Differential Revision: D28852092
fbshipit-source-id: df013b284414600deb6f20a5c0883f09906bf976
Summary:
Instrument file scmstore with tracing logging. There's more we should add here, but this will be a good starting place - I've already discovered some issues from looking at the log output. (Why does drop run twice? How does it run twice?)
It'd also probably be nice to support formatting the output like https://crates.io/crates/tracing-tree, which will be a lot less cluttered by the logged fields (like `attrs` on `fetch`).
Reviewed By: DurhamG
Differential Revision: D28750954
fbshipit-source-id: 63baa602f7147d24ac3e34defa969a70a92f96a4
Summary:
Now that EdenFS is using EdenAPI more, let's let it take advantage of
EdenAPI's better batching. We already have a batch API for files; let's copy the
pattern for trees as well. This adds the C++ bindings. The next diff consumes
this from EdenFS.
This is largely just a copy of how batch blob fetching does this. But I'm a C++
noob, so feel free to tear this apart with nits.
Reviewed By: chadaustin
Differential Revision: D28426789
fbshipit-source-id: 88d359985e849018fb3c2b4ef9e52d07c96bf31a
Summary:
Now that EdenFS is using EdenAPI more, let's let it take advantage of
EdenAPI's better batching. We already have a batch API for files; let's copy the
pattern for trees as well. This first diff just produces the Rust code. Future
diffs will add the C++ bindings and then integrate it into EdenFS.
This is largely just a copy of how batch blob fetching does it.
Reviewed By: chadaustin
Differential Revision: D28426790
fbshipit-source-id: 822ef6e7b3458df5dba7a007657e85351162b9ff
Summary: Windows has an issue where subprocesses don't inherit stdout/stderr correctly. util.system() has a workaround for this, so let's use that instead of subprocess when executing shell aliases. This fixes 'hg pull --rebase', which is a shell alias.
Reviewed By: kulshrax
Differential Revision: D28815381
fbshipit-source-id: 7521c17166a2b2c0e4ee872dacfd09d2d97e00ce
Summary: These are breaking buck test runs.
Reviewed By: quark-zju
Differential Revision: D28802741
fbshipit-source-id: a30c7b64d72356df05676ffab87291a246033d49
Summary:
Previously this required migrating to doublewrite first. There is no reason
the doublewrite migration cannot be done automatically, so let's do it.
Reviewed By: DurhamG
Differential Revision: D28757734
fbshipit-source-id: ba2533b5506309610b87865a838d7efe22bccfac
Summary:
Add the `fetch_contentsha256` python method to `filescmstore`, which accepts a list of keys and returns a list of (key, sha256).
This is intended to be used by the modified `status` command implementation, which will prefer comparing content hashes to directly comparing file content.
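A rough sketch of the batch shape described above; the dict-backed store and the function signature here are illustrative, not the real `filescmstore` bindings:

```python
import hashlib

# Illustrative sketch: batch (key, sha256) results, as described above.
# The `store` dict stands in for the real file content store.

def fetch_contentsha256(store, keys):
    """Return a list of (key, sha256-hex) pairs for the given keys."""
    out = []
    for key in keys:
        content = store[key]
        out.append((key, hashlib.sha256(content).hexdigest()))
    return out
```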
Reviewed By: DurhamG
Differential Revision: D28696618
fbshipit-source-id: a0304319b0a19d4f09d07bec02dc41964aec7255
Summary:
Merge `found_file` and `found_aux_indexedlog` into a new `found_attributes` method, which simply "or"s the newly found attributes into the `found` map.
Replace the `satisfies` concept with a new `pending` check, used the same way by each `pending_*` method. It considers a key pending if fetching from a store that returns a given set of attributes would allow us to resolve any requested but missing attributes, optionally taking into account attributes that can be computed from those already found. This will still need to be adjusted to support preferring remote fetching of attributes over local computation, but it is no longer as brittle as the previous implementation: there's no requirement that aux data be computed as content is fetched in order to avoid redundantly fetching content.
Move attribute computation to a separate phase, and filter out un-requested attributes in the `finish` function.
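The `pending` check can be sketched as set logic; the attribute names and parameters below are illustrative assumptions, not the real scmstore types:

```python
# Sketch of the `pending` check described above. A key is still pending
# for a store if that store's attributes (plus, optionally, attributes
# derivable from them) could satisfy anything requested but not yet found.

def pending(requested, found, store_provides, computable=frozenset(),
            compute_aux_data=False):
    missing = requested - found
    available = set(store_provides)
    if compute_aux_data:
        # e.g. a content hash can be computed once content is available.
        available |= computable
    return bool(missing & available)
```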
Reviewed By: DurhamG
Differential Revision: D28694192
fbshipit-source-id: 9b096c056736cadc0f97ff09243ed09d5266504d
Summary: Use associated constants instead of methods for `FileAttributes` bit masks.
Reviewed By: DurhamG
Differential Revision: D28724729
fbshipit-source-id: 441c0d2361166824c4ee7cfd5ad0b6f21ee1ac26
Summary:
Previously, `found_error` required `&mut self`, even though it only ever interacted with the error fields. This prevented Rust's type checker from validating the safety of logging errors while iterating over the `found` map, for instance.
Replacing the `&mut self` method call with a field access into an existing `&mut self` resolves this problem, and allows logging errors while mutating other fetch state.
Reviewed By: DurhamG
Differential Revision: D28722547
fbshipit-source-id: 59c6a530cbf331282d6f654a56e492d47cafcd2f
Summary:
Don't try to fetch from a store if we don't have any pending keys.
Handle missing content when writing to the cache after fetching from remote stores. Currently, `found_in_*` will be populated even if we don't store the content, having just used it for aux data computation. After the next change, which only prunes overfetching in the `finish` method (allowing remote blobs to be written to the local cache even if we only fetched them to compute their attributes), this change won't be necessary, but it won't cause any problems either. I might revert this portion of the change, or warn if content is unexpectedly unavailable.
Reviewed By: DurhamG
Differential Revision: D28694964
fbshipit-source-id: 465211c9257cbf49b1cb68856473323fc940f10b
Summary: Extends the previous change to add support for computing aux data (currently only Content Sha256) and caching it locally. Introduces a `FetchState` config option, `compute_aux_data`, which controls whether content will be fetched in order to compute aux data, or unavailable aux data will be treated as "not found".
Reviewed By: DurhamG
Differential Revision: D28528456
fbshipit-source-id: 26189d18c8e453040f3c1f6e22a34d623a5aa40d
Summary:
The migration to Python 3 broke the unified diff code because difflib
expected the paths to also be bytes.
Reviewed By: quark-zju
Differential Revision: D28758876
fbshipit-source-id: 367ef237594d2908377cd8b81def364b77ee02e2
Summary:
`rm -A` means removing files that are "deleted" (`rm`-ed but not `hg rm`-ed).
It does not need to list clean files. Listing clean files can be very slow
in a large repo.
Avoid listing clean files so `rm -A` can be faster.
This has a side effect that we no longer maintain the exit value (0: repo
becomes empty, 1: repo is not empty) like before. But I guess nobody really
cares about the 1 exit value (and it does not really make sense in the `rm -A`
case).
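A minimal sketch of the fast path, assuming the tracked-file list comes from the dirstate; the helper name is hypothetical:

```python
import os

# Sketch of the `rm -A` fast path described above: find "deleted" files
# (removed from disk but still tracked) without listing clean files.

def find_deleted(tracked, root):
    """Return tracked files that no longer exist on disk."""
    return [f for f in tracked
            if not os.path.lexists(os.path.join(root, f))]
```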
Reviewed By: DurhamG
Differential Revision: D28622558
fbshipit-source-id: 2087d6508932905564a8307e9438895538ecede9
Summary:
The usage of bytes for paths and environment variables makes this entire file hacky and prevents it from working on Windows. Let's remove all of that.
We still use bytes for test output and other file content type cases.
Reviewed By: andll
Differential Revision: D28227825
fbshipit-source-id: b15993601db501160c9fa4eb2463678cde1fa554
Summary:
Previously, migrating to lazy meant repo requirement changes. This diff uses
the new API to actually make the changelog lazy.
Reviewed By: DurhamG
Differential Revision: D28700896
fbshipit-source-id: 82cfd70645230cd67223195e25ef07ae5abe7df6
Summary:
Switch debugrebuildchangelog from using revlog stream clone to lazy segment clone.
This removes the revlog tech debt and can be used as a way to repair
repos with a broken segmented changelog. As we migrate off the double-write backend we
can no longer migrate down to revlog and then migrate up, and a full reclone can be
slow. So a partial reclone command that just recreates the segmented changelog
seems useful.
This command is one of the two commands that handle emergency situations
when segmented changelog related logic goes wrong. The other command
is the emergency clone mode, added by D27897892 (d1413bbbad), which assumes everything
related to segmented changelog is broken on Mononoke side and we still
need to commit and push. This command relies on segmented changelog
related features, such as hash<->location lookup, and clone on Mononoke
to work properly and the server having a compatible IdMap. So it might
not be able to address all issues if Mononoke goes wrong.
Reviewed By: DurhamG
Differential Revision: D28430885
fbshipit-source-id: 17357a33f6fda4a67d46e2c7e7be6653b530f499
Summary:
Use the interruptible block_on API so the Python methods can be interrupted by Ctrl+C.
This is especially useful if some operation triggers lots of expensive network fetches.
Reviewed By: DurhamG
Differential Revision: D28723008
fbshipit-source-id: b6c692de6290a49852eabcd960ebd9b2fb68685a
Summary:
This will be used by the next change to test that migrating from a non-lazy
changelog to a lazy changelog actually makes commits lazy.
More commits were added to the graph to test laziness. The old graph
does not have commits that will be made lazy by the current standard
(parents of merges are not lazy).
Reviewed By: DurhamG
Differential Revision: D28700897
fbshipit-source-id: 527c3ce672327ed5e2398c0d89a8e9e92e5b244f
Summary:
This will be used by the next change to migrate from a non-lazy changelog to a
lazy changelog.
Reviewed By: DurhamG
Differential Revision: D28700898
fbshipit-source-id: ff12754f224586b9d0d62f73b41bbb07fc7a6eea
Summary:
If a patch declared the length of its last hunk as N lines, but it
only contained N-1 lines, the Rust code would enter an infinite loop. This
could happen if a text editor removes the trailing spaces from a patch file.
Let's fix it and add a test.
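A sketch of the kind of bounds check that avoids such a loop, in Python for illustration (the actual fix is in the Rust patch parser):

```python
# Sketch: when a hunk declares N lines but the patch body ends early,
# stop at end of input instead of waiting forever for more lines.

def read_hunk_lines(lines, start, declared_len):
    """Read up to declared_len hunk lines; stop at end of input."""
    hunk = []
    i = start
    while len(hunk) < declared_len:
        if i >= len(lines):  # truncated hunk: bail out, don't loop
            break
        hunk.append(lines[i])
        i += 1
    return hunk, i
```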
Reviewed By: kulshrax
Differential Revision: D28683977
fbshipit-source-id: 0a999ae108676531a2cf18e77a3b426ba4647164
Summary: Sometimes things take longer; make sure we are able to distinguish whether that's due to networking, the TLS handshake, HTTP parsing, or Mononoke wireproto handling.
Reviewed By: markbt
Differential Revision: D28705508
fbshipit-source-id: 1bafda7fc447f2e429690f47fe7ab81cec511494
Summary:
Extends the `FileScmStoreBuilder` to construct two new indexedlog stores for caching aux data. The stores will be created in a directory adjacent to the normal non-LFS indexedlog stores.
Currently, aux data stores will not be constructed for production users; a configuration option will be introduced to gate this before `.store_aux_data()` is called in the `filescmstore` constructor bindings.
Reviewed By: DurhamG
Differential Revision: D28689693
fbshipit-source-id: e3ad1594e5beee00b1a8b9fe489e3b6af3a2e93e
Summary:
Modify `FileStore` to introduce basic aux data fetching. Aux data is currently read from a separate IndexedLog store, serialized with `serde_json` (chosen for expediency / ease of debugging; I intend to optimize the storage format before releasing this, at the very least to avoid unnecessarily serializing the key path).
Currently, aux data fetching will never succeed: it is not supported by the EdenApi "files" API, and nothing else exists yet to populate the local aux data stores. Later in this stack, computing aux data (currently only content sha256) to populate the aux data storage is implemented.
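A sketch of what a JSON-serialized aux-data entry might look like; the exact schema here is an assumption (the text notes the format still embeds the key path and will be optimized later):

```python
import hashlib
import json

# Hypothetical aux-data entry shape, JSON-serialized for ease of debugging,
# standing in for the serde_json format described above.

def aux_entry(path, node, content):
    return json.dumps({
        "path": path,   # key path, which the text says may be dropped later
        "node": node,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "total_size": len(content),
    }, sort_keys=True)
```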
Reviewed By: DurhamG
Differential Revision: D28526788
fbshipit-source-id: c8e21a1377689d7913a68426a3a480d53148da66
Summary:
Simplify tracking of incomplete fetches in preparation for attributes support in the next change.
Now, all keys which have not been completely and successfully fetched are recorded in `pending`, and are removed only when the complete fetch is recorded in `found`. Keys are now removed from `lfs_pointers` and `pointer_origin` as they are completed, as they aren't needed for anything other than fetching from local LFS and remote LFS respectively.
Reviewed By: DurhamG
Differential Revision: D28546515
fbshipit-source-id: c657e5c6350cadc8da970f57bb7694ed71022efb
Summary:
Now metalog can no longer be `None`. Let's just remove logic handling its
`None` case.
This changes the commitcloud-sync-race test because the metalog itself has
internal locking and changes are atomic.
Reviewed By: DurhamG
Differential Revision: D28595292
fbshipit-source-id: bd9851f5f3bb25f28f15d673f608af2863953c46
Summary:
fncache and store have been default on for years. Enable them unconditionally.
This also makes sure that metalog is always available.
Practically, the only place that does not use fncache is hgsql server repos,
and they are irrelevant now.
Reviewed By: DurhamG
Differential Revision: D28595289
fbshipit-source-id: 32b9906c179518acdb17a206b54f98a3dc994921
Summary: I have modified the places that raised most of the errors users reported, the ones that were resolved by renewing certificates.
Reviewed By: krallin
Differential Revision: D28568561
fbshipit-source-id: 44fb127a49bde83efee1c934e0435b31f8602a8d
Summary: Upcoming changes will force enable metalog so there will be no way to migrate down.
Reviewed By: DurhamG
Differential Revision: D28595290
fbshipit-source-id: a130b3c60c5b553d024868f28a28e48c50d44783
Summary:
It was added by D8527475 (72c3d8afc1) to work around hgsql with no-fncache and long file
names synced from svn. Upcoming changes will force fncache to simplify
configuration, and the hgsql server code was forked. So let's just delete
the test.
Reviewed By: DurhamG
Differential Revision: D28595291
fbshipit-source-id: 60d2449cca7af46b8b5b3c3b557a36507ff1576e
Summary: This will be used by fbclone to ship lazy commit hash backend.
Reviewed By: DurhamG
Differential Revision: D28554445
fbshipit-source-id: a263ae7683124b3b86f4025b02c7de20dcb9813e
Summary: This makes it possible to use non-debugshell to compact the metalog.
Reviewed By: DurhamG
Differential Revision: D28550902
fbshipit-source-id: 789830ba35243d248397e6a52ee343584c1e01a9
Summary:
The "compact" API rebuilds the metalog by removing older history. It could be
useful to reduce the size overhead of the metalog.
This is also useful if we're doing other "rebuild" work, such as rebuilding the
changelog.
Reviewed By: DurhamG
Differential Revision: D28550903
fbshipit-source-id: 56f875bd955247181236a976dcce6163d126a4b6
Summary:
The zipimport logic requires the pyc mtime to match its source. However, the
Windows system time zone setting can invalidate it and cause slow startups.
Work around it by making the zipimport mtime function return a fallback value,
so the mtime check is bypassed.
    # zipimport.py, _unmarshal_code
    source_mtime, source_size = \
        _get_mtime_and_size_of_source(self, fullpath)
    if source_mtime:  # if source_mtime is false, then the check is bypassed.
        # We don't use _bootstrap_external._validate_timestamp_pyc
        # to allow for a more lenient timestamp check.
        if (not _eq_mtime(_unpack_uint32(data[8:12]), source_mtime) or
                _unpack_uint32(data[12:16]) != source_size):
            _bootstrap._verbose_message(
                f'bytecode is stale for {fullname!r}')
            return None
To verify: I changed my Windows time zone from GMT-7 to GMT-4, set PYTHONVERBOSE
and PYTHONDEBUG to 1, ran `hg init -h`, and checked its stderr. It printed lines like:
    # bytecode is stale for 'edenscm.traceimport'
before this change, and no longer does after replacing the `__init__.py`
in the zip with the new version.
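A minimal sketch of the fallback idea, with a hypothetical helper name standing in for the real zipimport function:

```python
# Sketch of the workaround described above: make the source-mtime lookup
# return a falsy fallback so zipimport's pyc staleness check is bypassed.
# The function name here is a stand-in for the real zipimport helper.

def bypassed_get_mtime_and_size_of_source(loader, fullpath):
    # A falsy mtime means "unknown"; the `if source_mtime:` branch in
    # _unmarshal_code is then skipped and cached bytecode is accepted.
    return 0, 0
```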
Reviewed By: DurhamG
Differential Revision: D28622287
fbshipit-source-id: bb3e8e378ea168e4f83f4b6aa9713103b2c90ef8
Summary:
Don't apply an old public bookmark if the commit is older than max_sync_age.
The logic is complicated because we need to make sure that if we later run with a different commitcloud.max_sync_age value, or with `hg cloud sync --full`, the bookmarks will appear again.
So changes are required in both:
* checkomission
* _mergebookmarks
Both cases are covered in the tests.
Also, if you run with max_sync_age=1000 and later max_sync_age=0, the bookmarks will not disappear, which is expected.
Reviewed By: markbt
Differential Revision: D28572875
fbshipit-source-id: 317e897a2b81c3371dbea7eb39b8925570c1d40a
Summary:
This output is noisy for big workspaces.
If a head is omitted, don't warn about its bookmark, because that is expected.
Reviewed By: markbt
Differential Revision: D28568919
fbshipit-source-id: eb19e1d155f65de411c1dd41a8be6d83ca71c264
Summary:
The output is a bit too noisy for large workspaces.
We can skip older commits, since we know the list comes ordered from the commit cloud service.
All hashes are available via `hg cloud sl` anyway.
I also fixed several look-ups in the list: omitted heads are heavily used to check if something is present there.
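The look-up fix can be sketched as replacing repeated list scans with set membership; names here are illustrative:

```python
# Sketch: membership checks against omitted heads are hot, so build a set
# once (O(1) lookups) instead of scanning a list repeatedly.

def partition_heads(heads, omittedheads):
    omitted = set(omittedheads)  # O(1) membership instead of O(n) scans
    present = [h for h in heads if h not in omitted]
    missing = [h for h in heads if h in omitted]
    return present, missing
```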
Reviewed By: markbt
Differential Revision: D28568421
fbshipit-source-id: bcf62522798fed92df7ca546c73aa14da95f1567
Summary: Add the config pull.httpbookmarks to use the edenapi HTTP protocol to fetch bookmarks in the central local repo pull method. This impacts the pull command, as well as other commands that pull bookmarks.
Reviewed By: quark-zju
Differential Revision: D27479112
fbshipit-source-id: 2b9821f458ec0af2579143fb2c2ed7d3ff41878a
Summary:
Support decompression for Mononoke connections. When we request it, Mononoke
can compress our stream, saving bandwidth on low-throughput
connections.
Reviewed By: StanislavGlebik
Differential Revision: D28535058
fbshipit-source-id: 7594f72978093a474efd168bb87b41c415310d6c
Summary: It can be used by the `cloud sl` template after D28000088 (b506eeea0c).
Reviewed By: liubov-dmitrieva
Differential Revision: D28561180
fbshipit-source-id: fb4bf3de85f7c320c13a2a53c6a103e85ebb5425
Summary:
Like it says in the title. The API has changed a little bit between Bytes 0.5
and 1.x, but the concepts are basically the same, so we just need to change the
callsites that were calling `bytes()` and have them ask for `chunk()` instead.
This diff attempts to be as small as it can (and it's already quite big). I
didn't attempt to update *everything*: I only updated whatever was needed to
keep `common/rust/tools/scripts/check_all.sh` passing.
However, there are a few changes that fall out of this. I'll outline them here:
## `BufExt`
One little caveat is the `copy_to_bytes` we had on `BufExt`. This method was
introduced into Bytes 1.x (under that name), but we can't use it here directly.
The reason we can't is that the instance we have is a `Cursor<Bytes>`, which
receives an implementation of `copy_to_bytes` via:
```
impl<T: AsRef<[u8]>> Buf for std::io::Cursor<T>
```
This means that implementation isn't capable of using the optimized
`Bytes::copy_to_bytes`, which doesn't do a copy at all. So, instead, we need
to use a dedicated method on `Cursor<Bytes>`: `copy_or_reuse_bytes`.
## Calls to `Buf::to_bytes()`
This method is gone in Bytes 1.x, and replaced by the idiom
`x.copy_to_bytes(x.remaining())`, so I updated callsites of `to_bytes()`
accordingly.
## `fbthrift_ext`
This set of crates provides transports for Thrift calls that rely on Tokio 0.2
for I/O. Unfortunately, Tokio 0.2 uses Bytes 0.5, so that doesn't work well.
For now, I included a copy here (there was only one required, when reading from
the socket). This can be removed if we update the whole `fbthrift_ext` stack to
Bytes 1.x. fanzeyi had been wanting to update this to Tokio 1.x, but was blocked on `thrift/lib/rust` using Bytes 0.5, and confirmed that the overhead of a copy here is fine (besides, this code can now be updated to Tokio 1.x to remove the copy).
## Crates using both Bytes 0.5 & Bytes 1.x
This was mostly the case in Mononoke. That's no coincidence: this is why I'm
working on this. There, I had to make changes that consist of removing Bytes
0.5 to Bytes 1.x copies.
## Misuse of `Buf::bytes()`
Some places use `bytes()` when they probably mean to use `copy_to_bytes()`. For
now, I updated those to use `chunk()`, which keeps the behavior the same but
keeps the code buggy. I filed T91156115 to track fixing those (in all
likelihood I will file tasks for the relevant teams).
Reviewed By: dtolnay
Differential Revision: D28537964
fbshipit-source-id: ca42a614036bc3cb08b21a572166c4add72520ad
Summary:
This allows us to do staged rollout where some users are using "lazy" backend
and they won't be migrating down to "doublewrite" backend.
Reviewed By: liubov-dmitrieva
Differential Revision: D28554381
fbshipit-source-id: ebe2e25c96fd3b086a451c3909643d19c64a186c
Summary: Migrating from the lazy backend to the lazy backend should be a no-op.
Reviewed By: liubov-dmitrieva
Differential Revision: D28554382
fbshipit-source-id: 71c06584f6f7a89096ce4a94843c88cbea542475
Summary: Modifies `treescmstore` and `filescmstore` to also construct `TreeStore` and `FileStore`, respectively. Currently these newly constructed stores are not used anywhere, so no application code behavior should change as a result of this.
Reviewed By: DurhamG
Differential Revision: D28237680
fbshipit-source-id: 2bf3fd4b96be8c26e5c1e55cfd2e865f98e6ba91
Summary:
Implement `HgIdDataStore`, `RemoteDataStore`, `LocalStore`, `HgIdMutableDeltaStore`, and `ContentDataStore` for `FileStore`.
Currently I've left `RemoteDataStore::upload` unimplemented, as it's a little more complicated than the other functionality (with lots of private field accesses), and is probably worth building a good API for first. As a temporary workaround, I can store an `LfsRemote` (which requires an associated `LfsStore` for cache) and just call upload on that for now, but that's pretty ugly with the current design. I could also construct one on the fly, but it currently stores a bare `LfsRemoteInner`, not an `Arc<LfsRemoteInner>`. I'll take one of these three approaches after getting the integration tests running with the new `TreeStore` and `FileStore`.
Reviewed By: DurhamG
Differential Revision: D28235602
fbshipit-source-id: 13c72cd9379cba70a2ca7038dad419346fe0b14a
Summary:
Implement `HgIdDataStore`, `RemoteDataStore`, `LocalStore`, `HgIdMutableDeltaStore`, and `ContentDataStore` for `TreeStore`.
Also add a `Drop` impl that flushes the local stores, which matches the behavior of `ContentStore` (such an impl does not exist for the underlying stores, though it might be more appropriate there).
Reviewed By: DurhamG
Differential Revision: D28235060
fbshipit-source-id: 5a12d8c2ecff9fcc204cf437bf6f2a98f08645b4
Summary:
Introduce a new, flat, FileStore implementation. This `FileStore`, like the previously submitted `TreeStore`, directly handles all the fallback, local caching, etc, necessary to implement our storage system.
The API supports fetching batches of `Key`s, writing batches of entries (currently only in the "hg file blob" format, with copy header embedded), and querying only the local subset of underlying stores (to allow implementing `get_missing`). Other store subsets and write features will be added in the future.
Reviewed By: DurhamG
Differential Revision: D28138800
fbshipit-source-id: ca5bb91c66fa078019a19180235dd632ea73a0b3
Summary:
Introduce `from_hg_file_blob` and `from_content` LfsPointersEntry constructors, which are used for creating the correct `LfsPointersEntry` for a `Delta` (HgId + file content).
Add `sha256` accessor to `LfsPointersEntry`. Comments on `LfsPointersEntry` and looking at the construction logic suggest there should always be an associated Sha256 content hash. We use it often, so an accessor is useful to avoid the cumbersome HashMap access + match.
Add `fetch_available` to `LfsStore`, which is used by scmstore for handling cases where either only the pointer, or both the pointer and data are available. Existing LFS code directly accesses the underlying blob and pointer store.
Reviewed By: kulshrax
Differential Revision: D28231747
fbshipit-source-id: e6b1f210605d821f542fcb8e87aea366a0864d44
Summary:
Convert client certificates (which are expected to be supplied as PEM files) into an in-memory PKCS#12 archive to pass into libcurl. This is necessary on certain platforms (such as Windows) whose native crypto APIs do not support loading PEM files.
This was previously landed as D27637069 (5b759a2b52), which unconditionally converted the certificates under the assumption that all major TLS backends support PKCS#12. That assumption is still true, but it did not account for the fact that libcurl itself is dynamically linked on some platforms (such as macOS), and the system libcurl may be too old to support in-memory certs (via `CURLOPT_SSLCERT_BLOB`, added in libcurl 7.71.0). This diff gates this feature behind the `http.convert-cert` config option, which we can selectively set on platforms where it is needed.
Reviewed By: mzr
Differential Revision: D28524444
fbshipit-source-id: 4af9cdd60b8ef3977ad81abdb8e406c63795e628
Summary:
I forgot to add the fbclone build rule for the Python 3 build, and that's blocking the Mercurial release. This diff fixes that.
(Note: this ignores all push blocking failures!)
Reviewed By: DurhamG
Differential Revision: D28541340
fbshipit-source-id: 2c12583b97ccd18e3a4717b63a4680e8a5c3de46
Summary:
Add a config option to show all bookmarks in the output of `hg cloud sl`.
By default, local bookmarks pointing to public commits are not returned unless the commit is a public root of some draft stack.
Reviewed By: markbt
Differential Revision: D28537657
fbshipit-source-id: 0287c18b1b6c79b271f8a67f604024086a37ffcf
Summary: If you have checked out a shared workspace or another user's workspace, this part of hg doctor could behave incorrectly, so it should be skipped.
Reviewed By: markbt
Differential Revision: D28505928
fbshipit-source-id: 65e1b3978a916fad2a33bb4f81ff1b75cd657567
Summary: Fetch bookmarks via the http edenapi protocol in the bookmark command with the --list-remote option when all bookmark patterns are full bookmark names (not prefixes).
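A sketch of the kind of gating check this implies, assuming "full bookmark name" means a pattern with no glob characters (an assumption for illustration):

```python
# Sketch: take the HTTP path only when every pattern is a literal bookmark
# name, not a prefix/glob. The special-character set is an assumption.

def all_literal_patterns(patterns):
    special = set("*?[]")
    return all(not (set(p) & special) for p in patterns)
```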
Reviewed By: kulshrax
Differential Revision: D27331526
fbshipit-source-id: 4f4eda255c551c9b55c6966569755f493335b458
Summary:
The --workspace-version option is currently ignored by interactive history.
Allow it to be used to specify the initial version. This makes jumping back to
a much older version easier.
Reviewed By: liubov-dmitrieva
Differential Revision: D28478194
fbshipit-source-id: f4f121d919e89c298677256f227f2e96d63ef644
Summary: If this option is enabled, the server will be asked to add them.
Reviewed By: markbt
Differential Revision: D28412810
fbshipit-source-id: d1531ecf97615cdb5e32d72c8c31598e6a406956
Summary:
This was broken by my recent change to have mergetools respect HGPLAIN
instead of ui.formatted.
Reviewed By: andll
Differential Revision: D28423783
fbshipit-source-id: 00831a6cc47acc11574fcf67462a1dccdde21fda
Summary:
Mercurial has gotten stricter about respecting interactive vs
non-interactive commands lately, and now is failing to automatically open the
editor for conflicts during arc pull. Let's force Mercurial to treat the
invocation as an interactive one.
Reviewed By: skotchvail
Differential Revision: D28358999
fbshipit-source-id: 551713a78abfe170f04e8e55318af6e157bae7da
Summary:
getdeps builds are failing on certain versions of macOS because they
choose a system Python, which causes setup.py to use a hard-coded library
location that isn't correct in our environment. Earlier I changed
pick_python.py to prefer the Homebrew Python, but it turns out getdeps doesn't
actually use pick_python. This diff fixes that and also instructs python3-sys to
use the correct version, by setting the PYTHON_SYS_EXECUTABLE environment
variable.
Reviewed By: quark-zju
Differential Revision: D28388150
fbshipit-source-id: 9b09e7472733f7a779c6212ae012116cad657b5d
Summary: I use tags extensively and I would love for them to be supported as well.
Reviewed By: asm89
Differential Revision: D28348565
fbshipit-source-id: 7d94d048b734c91e7d74a1c3efeefc87943066ad
Summary: Instead of passing a client certificate path to libcurl, load the certificate into memory and pass it to libcurl as a blob using `CURLOPT_SSLCERT_BLOB`. This allows us to convert the certificate format in memory from PEM to PKCS#12, the latter of which is supported by the TLS engines on all platforms (notably SChannel on Windows, which does not support PEM certificates).
Reviewed By: quark-zju
Differential Revision: D27637069
fbshipit-source-id: f7f8eaafcd1498fabf2ee91c172e896a97ceba7e
Summary:
The Rust `openssl` crate will use dynamic linking by default when built with `cargo`. This is a problem on Windows, since we only support cargo-based builds on that platform, but OpenSSL is not present in the system's shared library search paths.
Since we already have a copy of OpenSSL uploaded to LFS, the simplest solution is to just copy the required DLLs right next to the Mercurial executable so that they will be found at launch.
A better solution would probably be to use static linking here. From reading the crate's documentation (and build script), it seems like setting `OPENSSL_STATIC=1` during the build should force static linking, but in practice I have not been able to get this to work.
Reviewed By: DurhamG
Differential Revision: D28368579
fbshipit-source-id: 3fceaa8d081650d60356bc45ebee9c91ef474319
Summary:
Split full sync into three steps.
Commit cloud by default pulls only 30 days of commits.
Users often see some of their commits missing from their smartlog.
I discovered that most users know the `--full` option (`hg cloud sync --full`) but not the `max_sync_age` config option.
So they try the `--full` option, but it can fail due to very, very old commits we haven't migrated to Mononoke.
Users often don't really need those commits, but it's not nice that the whole sync run fails.
We know that at least the latest 2 years of commits are present in Mononoke.
So if we split how sync with the `--full` option works, it would at least result in a partially successful sync for the latest 2-3 years of commits.
Reviewed By: mitrandir77
Differential Revision: D28352355
fbshipit-source-id: b5bacd7d5256191528613e3c0bcbb21b0104ac3c
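The staged sync described above can be sketched roughly as follows. This is an illustrative model only: `sync_window`, `OldCommitsMissing`, and the exact window sizes are hypothetical names standing in for the real commit cloud sync internals.

```python
class OldCommitsMissing(Exception):
    """Raised when the server no longer has some very old commits."""

def full_sync(sync_window, windows_days=(2 * 365, 3 * 365, None)):
    """Sync in successively wider age windows; None means 'everything'.

    Newer commits are synced first, so a failure on very old commits
    (e.g. ones never migrated to Mononoke) still leaves recent history
    synced. Returns the list of windows that synced successfully.
    """
    synced = []
    for max_age in windows_days:
        try:
            sync_window(max_age)
            synced.append(max_age)
        except OldCommitsMissing:
            # Older commits are missing server-side; keep the partial result.
            break
    return synced
```

With this shape, a failure on the unbounded final step no longer invalidates the earlier, successful windows.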
Summary:
Remove the 4-commits-at-a-time limitation for unhydrated pulls.
This can significantly speed up cloud join commands (by many times) as well as `hg cloud sync --full`.
Reviewed By: farnz
Differential Revision: D28351849
fbshipit-source-id: f9f3d7a5c07d61cb51a5bb6284afaad963662c94
Summary:
Adding a mapping to keep track of two things:
1) the latest source commit that was synced into a given target - this will be used by the sync_changeset() method to validate that a parent changeset of a given changeset was already synced
2) which source commit maps to which target commit
Reviewed By: ikostia
Differential Revision: D28319908
fbshipit-source-id: f776d294d779695e99d644bf5f0a5a331272cc14
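A minimal in-memory sketch of the two mappings described above; the real implementation is presumably backed by a database table, and the class and method names here are hypothetical.

```python
class SyncedCommitMapping:
    """Tracks (1) the latest source commit synced into each target and
    (2) which source commit maps to which target commit."""

    def __init__(self):
        self._latest_synced = {}      # target repo -> latest synced source commit
        self._source_to_target = {}   # (target repo, source commit) -> target commit

    def record(self, target_repo, source_cs, target_cs):
        self._latest_synced[target_repo] = source_cs
        self._source_to_target[(target_repo, source_cs)] = target_cs

    def latest_synced(self, target_repo):
        # Used by sync_changeset() to validate that a parent changeset
        # of a given changeset was already synced.
        return self._latest_synced.get(target_repo)

    def get_target(self, target_repo, source_cs):
        return self._source_to_target.get((target_repo, source_cs))
```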
Summary: Right now this is not very useful. Let's make it more useful.
Reviewed By: DurhamG
Differential Revision: D28281653
fbshipit-source-id: ef3d7acb61522549cca397048c841d1afb089b9b
Summary:
This makes it easier to see what builder functions were registered:
% EDENSCM_LOG=edenapi=debug lhg log -r .
May 06 16:40:29.355 DEBUG edenapi::builder: registered eagerepo::api::edenapi_from_config to edenapi Builder
Reviewed By: DurhamG
Differential Revision: D28271366
fbshipit-source-id: f6c7c3aa9f29c3e47c2449e3d5fc16474aa338b0
Summary:
Adding support for the stables template keyword in the stablerev extension.
This keyword calls out to a script specified in the stablerev.stables_cmd config option to get a list of stable aliases for a given revision.
Reviewed By: quark-zju
Differential Revision: D28204529
fbshipit-source-id: 3c5b21846ce6f686afddd00d3326a54b85be87dd
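A minimal sketch of how such a keyword might shell out to the configured command. The `{rev}` placeholder convention and the helper name `stables_for` are assumptions for illustration; the real extension's invocation details may differ.

```python
import subprocess

def stables_for(rev, stables_cmd):
    """Run the configured command (stablerev.stables_cmd), substituting the
    revision, and return the whitespace-separated aliases it prints."""
    cmd = stables_cmd.replace("{rev}", rev)
    result = subprocess.run(
        cmd, shell=True, capture_output=True, text=True, check=True
    )
    return result.stdout.split()
```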
Summary:
server1 has not been used since D27629318 (ba7e1c6952), while the test intentionally wants to
exercise graph isomorphism. So let's revive server1 in the test.
Reviewed By: andll
Differential Revision: D28269926
fbshipit-source-id: 0a04031415f559f8a6eb81f1e2f2530329a2a3bc
Summary:
We were only incrementing this on `readline`, which resulted in very low
numbers. While in there, I also removed `self._totalbytes` as that was unused.
Reviewed By: johansglock
Differential Revision: D28260141
fbshipit-source-id: 6d9008f9342adaf75eecc8ed8c872f64212cd1f7
Summary:
Add a subtree to exercise treemanifest logic. Blobs in EagerRepo are verified
so we need to disable flatcompat.
Reviewed By: DurhamG
Differential Revision: D28006550
fbshipit-source-id: ac7157a9c01ed99f703601613fb3cf06add69003
Summary: This makes it easier to use it in tests.
Reviewed By: DurhamG
Differential Revision: D28006549
fbshipit-source-id: 90e29b220453a3d7a260d0a62d697d64363d9a6c
Summary:
With remotefilelog force enabled, it's now possible to read the file content after
clone. Add a test for it.
Reviewed By: DurhamG
Differential Revision: D28006547
fbshipit-source-id: 5be93e162f352b1264a6c52852c2230726652f9d
Summary:
This makes it easier to get rid of revlog stores.
`debugindexdot` is no longer supported since it reads revlogs.
Two tests use flat manifest bundles. They are no longer supported
because remotefilelog today assumes the treemanifest extension is
also in use.
Reviewed By: DurhamG
Differential Revision: D27971126
fbshipit-source-id: fdb992a8d72bbcf562b5cb95b3f29051dd1c9464
Summary:
Disabling treemanifest is a tech debt that causes problems, especially when
enabling remotefilelog.
Reviewed By: andll
Differential Revision: D27971120
fbshipit-source-id: 1a50acc23564c2d6bad79a2e99469850b5a7d1f9
Summary:
This makes it easier to filter logs related to remote fetching.
The `DEBUG dag::protocol: resolve ids [0] remotely` means the lazy hash resolution is working.
Reviewed By: kulshrax
Differential Revision: D27971117
fbshipit-source-id: f2492204c70d793997d0c3865e500bbad56b1953
Summary:
Write commit to master group. This provides proper "CloneData" and allows us to
actually test lazy commit hash backend (since only commits in the master group
can have lazy hashes).
Reviewed By: DurhamG
Differential Revision: D27971123
fbshipit-source-id: 4e19486007ddc89de7468be65445559f34d796f5
Summary:
Add clone endpoint so we can clone from an eager test repo.
Note: the master group is empty and "clonedata" does not quite work yet because
EagerRepo does not write to the master group. This will be fixed later.
Reviewed By: DurhamG
Differential Revision: D27971121
fbshipit-source-id: 0cc35136c6987673c2c4fbbd892c344c3586fcb7
Summary:
The trees endpoint is another example where we try to send errors to the
client. As previously implemented, we would fail to log any errors on the
server side. This diff corrects that by using custom_cbor_stream.
Reviewed By: kulshrax
Differential Revision: D28111102
fbshipit-source-id: 468095d024647f472b8ad9a9e17ca8364605ff98
Summary:
Add debug output to rage to make sure we have the relevant information in case
we need to debug issues with schemes.
Reviewed By: quark-zju
Differential Revision: D28222910
fbshipit-source-id: 9499c736d61b2c0e4568e05a3afc0ae9730acedf
Summary:
eagerepo -> metalog -> git2 -> libgit2-sys -> libgit2 conflicts with edenfs'
non-Rust libgit2 dependency. The Rust git2 crate does not seem to provide a
way to depend on a specified libgit2.
Quote https://github.com/rust-lang/git2-rs/issues/263#issuecomment-450934287:
> It's expected that git2-rs builds its own copy of libgit2 and doesn't use the
> system version, as the system version is likely incompatible
It also seems non-trivial to make buck C++ use the libgit2 from the `libgit2-sys` crate.
Let's just avoid depending on eagerepo from edenapi directly for now to solve the
issue. This basically revives D27948369 and D27951632.
Reviewed By: xavierd
Differential Revision: D28243784
fbshipit-source-id: 0c38c20c2d3a80c550732129da572fe26a229799
Summary:
This makes it easier to use `--keep` to investigate tests by using
`--configfile`.
Reviewed By: kulshrax
Differential Revision: D27971122
fbshipit-source-id: 8adcbeab825155858499c24ca74c2979049adeda
Summary:
We have a linker issue on Windows when building EdenFS with CMake:
```
backingstore.lib(winhttp.o) : error LNK2019: unresolved external symbol __imp_WinHttpSetStatusCallback referenced in function winhttp_connect
backingstore.lib(winhttp.o) : error LNK2019: unresolved external symbol __imp_WinHttpOpen referenced in function winhttp_connect
backingstore.lib(winhttp.o) : error LNK2019: unresolved external symbol __imp_WinHttpCloseHandle referenced in function winhttp_close_connection
backingstore.lib(winhttp.o) : error LNK2019: unresolved external symbol __imp_WinHttpConnect referenced in function winhttp_connect
backingstore.lib(winhttp.o) : error LNK2019: unresolved external symbol __imp_WinHttpReadData referenced in function winhttp_stream_read
backingstore.lib(winhttp.o) : error LNK2019: unresolved external symbol __imp_WinHttpWriteData referenced in function winhttp_stream_read
backingstore.lib(winhttp.o) : error LNK2019: unresolved external symbol __imp_WinHttpQueryOption referenced in function certificate_check
```
This fixes that.
Reviewed By: xavierd
Differential Revision: D28230163
fbshipit-source-id: f74e42ee30ec8f3b81c1f80b7cf63a21ea97732c
Summary: The syntax is not supported by Python 2.
Reviewed By: DurhamG
Differential Revision: D28233280
fbshipit-source-id: 9f882827b1357cb339e60180acadb38842c3cf8d
Summary: The syntax is not supported by Python 2.
Reviewed By: DurhamG
Differential Revision: D28232995
fbshipit-source-id: 62058751b4f00b78a2bd56908100a7bb7a3adfde
Summary: Windows path like `eagerepo:///C:\foo\bar` needs special handling.
Reviewed By: kulshrax
Differential Revision: D27971119
fbshipit-source-id: 9d4b87782eca2734b708565f0ee22a7495253cff
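The special handling mentioned above amounts to stripping the slash that the URL scheme puts in front of the Windows drive letter. A hedged sketch, with the function name `local_path_from_url` purely illustrative:

```python
def local_path_from_url(url):
    """Extract a filesystem path from an eagerepo URL.

    On Windows the path after the scheme looks like /C:\\foo\\bar, so the
    leading slash before the drive letter must be stripped. Unix paths
    like /var/repo are left untouched.
    """
    prefix = "eagerepo://"
    assert url.startswith(prefix)
    path = url[len(prefix):]
    if len(path) >= 3 and path[0] == "/" and path[2] == ":":
        path = path[1:]  # "/C:\foo\bar" -> "C:\foo\bar"
    return path
```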
Summary: `hg gc` does not do anything anymore, so in order to reduce confusion, let's just print a message that says it is no longer supported and provide a manual remediation.
Reviewed By: xavierd
Differential Revision: D28164614
fbshipit-source-id: 7ed2392cdb0091cd604a15b4c2382632706981f2
Summary:
This avoids issues where the tree is stored without p1, p2. It is similar to
what we do for public commits (in createtreepackpart):
    if sendtrees == shallowbundle.AllTrees or ctx.phase() != phases.public:
        ...
Note: the trees API actually provides p1, p2, but p1, p2 are dropped when
writing to the current data store implementation.
Reviewed By: liubov-dmitrieva
Differential Revision: D28200388
fbshipit-source-id: e1fe93d8ae8576e691077d6db432d55f7b9d498d
Summary: Add a way to fetch tree content without going through store.
Reviewed By: liubov-dmitrieva
Differential Revision: D28200387
fbshipit-source-id: 8f5b2214aafba39c7674f0f6b27af0c985f0ea72
Summary:
The `trees` API is coupled with a store. We're going to add another API that is
not coupled with a store so let's rename `trees` to `storetrees`.
Reviewed By: liubov-dmitrieva
Differential Revision: D28200389
fbshipit-source-id: 826116f0b461873b2f5df07e7fd35e6d1018f929
Summary:
This output is non-deterministic, and it does not seem to be important in this test.
We could replace HashMap with BTreeMap to make it deterministic as an alternative, but that is probably not justified for this test.
Reviewed By: quark-zju
Differential Revision: D28204050
fbshipit-source-id: 50000671520e3bbf41849dc53c420ccab496dca0
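The BTreeMap alternative mentioned above amounts to iterating keys in sorted order so the rendered output is stable across runs. A quick Python analogy (names are illustrative, not from the actual test):

```python
def deterministic_dump(mapping):
    # Sorting the keys makes the output stable across runs, the same
    # way a BTreeMap iterates in key order while a HashMap does not.
    return ", ".join(f"{k}={mapping[k]}" for k in sorted(mapping))
```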
Summary: The option has been deprecated and is not used anywhere.
Reviewed By: krallin
Differential Revision: D28191314
fbshipit-source-id: f5f092b93a9644c8249628520d8d707b60854aac
Summary:
This applies the formatting changes from black v21.4b2 to all covered
projects in fbsource. Most changes are to single line docstrings, as black
will now remove leading and trailing whitespace to match PEP8. Any other
formatting changes are likely due to files that landed without formatting,
or files that previously triggered errors in black.
Any changes to code should be AST identical. Any test failures are likely
due to bad tests, or testing against the output of pyfmt.
Reviewed By: thatch
Differential Revision: D28204910
fbshipit-source-id: 804725bcd14f763e90c5ddff1d0418117c15809a
Summary:
This will consume `CloneData` from EdenApi and write to the graph.
Note `CloneData<Vertex>` and `CloneData<HgId>` have the same mincode
serialization result, so there is no need to do extra type conversion.
This can be used like:
In [1]: v=api.clonedata('fbsource');
In [6]: d=bindings.dag.commits.openhybrid(None, '/tmp/seg', '/tmp/msg', repo
...: .edenapi, repo.name, lazyhash=True)
In [7]: d.importclonedata(v)
Reviewed By: kulshrax
Differential Revision: D27971125
fbshipit-source-id: 4d420c6ff0495dc184e7c9618b866a69f0a00002
Summary:
Expose NameDag's `import_clone_data` API so this can be then exposed via
`pydag`.
Reviewed By: kulshrax
Differential Revision: D27971118
fbshipit-source-id: c9d869ffbbc8ba5a7a6ae98d17a2b7ea713bc675
Summary: The `CloneData` is currently only exposed in Rust. Expose it in Python too.
Reviewed By: kulshrax
Differential Revision: D27971124
fbshipit-source-id: 1a9c63274b6e2d78a176869b6810acbc191ba314
Summary: We skip it in other places but missed this one. Skip it too.
Reviewed By: kulshrax
Differential Revision: D27957853
fbshipit-source-id: 429d25e8b692218c9bae6c10ad76d08495a4bc66
Summary: If ui.ssh is "false", then ssh cannot be used at all. Force using edenapi.
Reviewed By: kulshrax
Differential Revision: D27957312
fbshipit-source-id: 9860344779e6a6bab557d3f953ee38e40fadb78b
Summary: Make it easier to check whether APIs in EagerRepo are called or not.
Reviewed By: andll
Differential Revision: D27955426
fbshipit-source-id: 27ca505c63596368cff98642de010b5b5717454c
Summary: It has been enabled for a long time in our production config.
Reviewed By: kulshrax
Differential Revision: D27953636
fbshipit-source-id: 428f6e8a3e7eae6d44c61970624a75d7d1ab3e36
Summary: It has been enabled for a long time in our production config.
Reviewed By: kulshrax
Differential Revision: D27953635
fbshipit-source-id: a351342fbc8cffccd16967bd0e7032ac3e4f35cf
Summary:
Add "getbundle" alternative "commitgraph" for pulling from a EagerRepo.
This avoids tech-debt like bundle2 or linkrev. It depends on a lazy
(text) changelog backend.
Reviewed By: kulshrax
Differential Revision: D27951620
fbshipit-source-id: f21119d37da6505e68c6c5f3b33b9bd1f65e4d9a
Summary: It's not an error case. It just means all nodes are unknown to the repo.
Reviewed By: kulshrax
Differential Revision: D27951619
fbshipit-source-id: 672932af3a54ffa5adfa5cccbfff7edbf4f24022
Summary: It's okay to migrate to any backend if the repo is empty.
Reviewed By: kulshrax
Differential Revision: D27951626
fbshipit-source-id: 27c00c853bf73fa3c696d74f3c05eb620f35db0e
Summary:
Add "unbundle" alternative "addblobs" for pushing to a EagerRepo.
This avoids tech-debt like bundle2 and linkrev.
Reviewed By: kulshrax
Differential Revision: D27951628
fbshipit-source-id: 3315e0653ee12928993e4e9325fbe8e2c369307b
Summary:
Now that the EdenApi trait has moved to a separate crate, we can inline the
EdenApi backed by EagerRepo without using dynamic registration functions.
Reviewed By: andll
Differential Revision: D28006553
fbshipit-source-id: 427513da94db228745b1a7e90af0e62296056128
Summary: So that we don't duplicate the URL handling in Python.
Reviewed By: andll
Differential Revision: D28006552
fbshipit-source-id: 2efda622fe86787373fa4ec5978537588defec28
Summary:
`peer` is the interface in hg to support push/pull. Implement it for EagerRepo.
Note: `getbundle` and `unbundle` are not implemented yet, so push and pull
do not work yet. They will be made to work later.
Reviewed By: kulshrax
Differential Revision: D27951621
fbshipit-source-id: 71f9c26713a532a0712460fa2aa34125b2b67e35
Summary:
Our BigSur mactest machines have python3 defaulting to some internal
fbprojects python install. This is breaking our OSS builds. Let's change
pick_python to avoid that install.
Note, sys.stdout was changed to print because during my manual testing on Mac,
sys.stdout did not actually print the value, despite the flush. Using print()
did work.
Reviewed By: quark-zju
Differential Revision: D28101632
fbshipit-source-id: 2907d644b2baa8a53a9a2d7da176d33cd83dfbd5
Summary:
There have been a bunch of problems with the previous approach to scmstore, so I'm going to try to start simple, make it feature complete, and then add async integration and factor out generic functionality as appropriate.
This change contains a `TreeStore` implementation with a single, synchronous, batch read method (supporting local storage, memcache, and legacy fallback, with writing missing to cache).
Add `TreeStoreBuilder`, which duplicates the existing `TreeScmStoreBuilder` with some changes that make it easier to use for this case. I intend to unify these in the future.
Add an inherent impl for `EdenApiTreeStore` that provides a subset of the `BlockingEdenApi` trait, which eliminates the need to unpack this type into a different adapter as the old `scmstore` code does. This might not be the right approach: in reality we only need a `(client: Arc<dyn EdenApi>, repo: String)` here for trees, and that plus `ExtStoredPolicy` for files, so we could take the `EdenApiAdapter` approach here too. The only reason we have to do any of this is that when `pyrevisionstore` is called to construct `scmstore` / `contentstore`, all we have is `Arc<EdenApiTreeStore>`. We could also just make the `EdenApiRemoteStore` fields public and access them through the `Arc`.
Add `add_mcdata` method to `MemcacheStore`, `impl TryFrom<Entry> for McData`, and `impl From<McData> for Entry` for convenience when working with `MemcacheStore` (so we don't need to manually unpack the type and build `Entry`, or manually build a fake `Delta` from `Entry` to write).
Reviewed By: DurhamG
Differential Revision: D28076900
fbshipit-source-id: 7fdb5e8a42d052879eff449f60d40a83cfa7145d
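The single synchronous batch read described above can be sketched roughly as follows. The store shapes (plain dicts) and the name `batch_get_trees` are illustrative stand-ins for the real Rust `TreeStore` types, not the actual scmstore API.

```python
def batch_get_trees(keys, local, memcache, fallback_fetch):
    """Batch read: check local storage, then memcache, then a fallback
    fetch for whatever is still missing, writing fetched entries back
    to the cache."""
    found = {}
    missing = []
    for key in keys:
        if key in local:
            found[key] = local[key]
        elif key in memcache:
            found[key] = memcache[key]
        else:
            missing.append(key)
    if missing:
        for key, entry in fallback_fetch(missing).items():
            memcache[key] = entry  # write missing entries back to cache
            found[key] = entry
    return found
```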
Summary:
Both `get_local_path` and `get_cache_path` take the suffix as a `PathBuf`, even though they only ever use it as a reference. `get_local_path` also takes a reference to a `PathBuf`, even though it always clones it internally, and takes an `Option`, even though it just maps across the contents of the option.
I modified `get_local_path` to accept a `PathBuf` by move, which it uses directly, and to not take an `Option` (instead just calling `map` externally, removing some unnecessary unwraps), and both functions to accept `impl AsRef<Path>` for the suffix.
Reviewed By: DurhamG
Differential Revision: D28100527
fbshipit-source-id: df28b51c8005f3d95acc8e082b40adaab18e31c9
Summary: Add a Read/Write Guard API to IndexedLogHgIdDataStore which allows client code outside the module to perform a series of reads and writes without locking for each operation individually.
Reviewed By: kulshrax
Differential Revision: D28075788
fbshipit-source-id: 2a65a426f443e1a421198ad8b4c610e4822574f7
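The guard pattern above (acquire once, perform a batch of reads and writes, release on drop) can be sketched with a Python context manager. All names here are hypothetical; the real store is the Rust IndexedLogHgIdDataStore.

```python
import threading

class DataStore:
    def __init__(self):
        self._lock = threading.Lock()
        self._entries = {}

    def write_guard(self):
        return _WriteGuard(self)

class _WriteGuard:
    """Holds the store lock across many reads/writes instead of
    locking per operation."""

    def __init__(self, store):
        self._store = store

    def __enter__(self):
        self._store._lock.acquire()
        return self

    def __exit__(self, *exc):
        self._store._lock.release()

    def get_entry(self, key):
        return self._store._entries.get(key)

    def put_entry(self, key, value):
        self._store._entries[key] = value
```

In Rust the same effect falls out of RAII: the guard releases the lock when it goes out of scope.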
Summary:
Add get_entry, put_entry, and flush_log inherent methods to IndexedLogHgIdDataStore. Refactor callers to use them in cases where they don't lock across multiple reads / writes (to avoid performance regressions).
This should allow `ReadStore` and `WriteStore` to be moved out of the module.
Reviewed By: DurhamG
Differential Revision: D27979828
fbshipit-source-id: c9fb8c4ac68f67b285c72396509aa17928aa54ed
Summary: It has been wrong since 2014 (tweakdefault).
Reviewed By: kulshrax
Differential Revision: D28122703
fbshipit-source-id: c83ddbac2c6162e36672649c60c2e7916dc7cbd2
Summary: This is a step towards unifying native merge/rebase structs with native checkout - we now construct the native checkout plan from the action map, instead of building it directly from the diff.
Reviewed By: quark-zju
Differential Revision: D28078156
fbshipit-source-id: 318d7e419ca9fef15a4aebf7494451f69a3bbbe5
Summary:
This diff makes the concurrency of native checkout configurable.
This config can be used to reduce concurrency on platforms where too many concurrent checkout operations are known to cause issues with watchman.
Reviewed By: quark-zju
Differential Revision: D28074993
fbshipit-source-id: 0a09fcf3ae48d08cead36da56c06b546aecd16b4
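A rough sketch of what a configurable concurrency cap looks like, using a thread pool whose size comes from the config value. The function names are illustrative; the real checkout code is Rust.

```python
from concurrent.futures import ThreadPoolExecutor

def apply_checkout(files, write_file, concurrency=4):
    """Write out checkout files with at most `concurrency` workers.

    A lower setting can be chosen on platforms where watchman struggles
    with many simultaneous file operations.
    """
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        # list() forces completion and propagates any worker exception.
        list(pool.map(write_file, files))
```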
Summary: This diff refactors the `Checkout` component out of the checkout plan and allows configuring parallelism in checkout.
Reviewed By: quark-zju
Differential Revision: D28074994
fbshipit-source-id: 72933c757d6e27615d1ef2bb4652bc67c9c3253d
Summary:
From what I can see, this was added when EdenFS had a Mononoke store, which is
now long gone, thus we should be able to remove the Curl dependency altogether.
Reviewed By: fanzeyi
Differential Revision: D28037816
fbshipit-source-id: 834f7db64bab5dda1748ad2f033c27a2854b0ba4
Summary:
This updates hg to use a different number of retries for backoffs requested by
the server than for errors.
The rationale is that backoffs are fairly well understood and usually caused by
a surge in traffic where everybody wants the same data (in which case we should
be willing to wait to get it because there is literally no alternative),
whereas general errors aren't predictable in the same way.
We're now effectively at a point on the server side where _all_ our instances
have the exact same load, so if any server is telling you to backoff, that
pretty much guarantees that the whole tier has too much traffic to deal with.
This leaves us with two options:
- Tell clients to wait longer and smooth out the traffic surge.
- Add enough capacity that even our biggest surges don't result in _any_
throttling.
The latter is a bit unrealistic given we routinely get egress
variations in excess of 5x (here's an example: https://fburl.com/ods/pidsrqnl),
so this does the former.
This also updates the client to tell the server how many attempts it has left
in addition to how many it used up so far. How many are left is more meaningful
for alerting!
Finally, it adds a bit of logging so that in debug mode you can see this
happening.
Reviewed By: quark-zju
Differential Revision: D28092797
fbshipit-source-id: f61410e39c4a3e3356371a3c7bd7892de4beacc8
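The policy above (a separate, more generous budget for server-requested backoffs than for generic errors, plus reporting attempts remaining to the server) can be sketched like this. The `send` protocol returning `("backoff", delay)`, `("error", _)`, or `("ok", value)` is an illustrative stand-in for the real HTTP transport.

```python
def request_with_retries(send, backoff_budget=5, error_budget=2,
                         sleep=lambda s: None):
    """Retry with separate budgets for server backoffs and errors."""
    while True:
        # Tell the server how many attempts remain (more meaningful
        # for alerting than how many were used so far).
        kind, payload = send(backoff_budget + error_budget)
        if kind == "ok":
            return payload
        if kind == "backoff" and backoff_budget > 0:
            backoff_budget -= 1
            sleep(payload)  # wait as long as the server asked
        elif kind == "error" and error_budget > 0:
            error_budget -= 1
        else:
            raise RuntimeError("retries exhausted")
```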