Summary: Update the remaining tests for scmstore. In each of these cases we're just disabling scmstore for various reasons. I think `test-lfs-bundle.t`'s and `test-lfs.t`'s failures represent a legitimate issue with scmstore's contentstore fallback, but I don't think they should block the rollout
Reviewed By: kulshrax
Differential Revision: D29289515
fbshipit-source-id: 10d055bf679db8efdeb16ac96b7ed597d7b6d82c
Summary:
Prevent `FileStore` from deadlocking when a write falls back to contentstore and attempts to write to the same indexedlog_local whose lock is already held for the batch.
Note: this shouldn't need to block the release; we currently expect writing raw LFS pointers to only happen with non-remotefilelog LFS.
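The fixed shape can be sketched in Python (a toy analogue of the deadlock fix; the real code is Rust, and the names `write_batch` and the `"lfs:"` prefix are hypothetical):

```python
import threading

lock = threading.Lock()  # non-reentrant, like the indexedlog_local lock
log = []

def write(item):
    # An indexedlog-style write takes the store lock for each append.
    with lock:
        log.append(item)

def write_batch(items, fallback_write):
    # Fixed shape: while holding the batch lock, defer any item that must
    # go through the contentstore fallback; run the fallback writes only
    # after the lock is released, so fallback_write() can reacquire the
    # same lock without deadlocking the thread against itself.
    deferred = []
    with lock:
        for item in items:
            if item.startswith("lfs:"):   # stand-in for a raw LFS pointer
                deferred.append(item)
            else:
                log.append(item)
    for item in deferred:
        fallback_write(item)
```

Writing the deferred items inside the `with lock:` block is exactly the self-deadlock the diff avoids.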
Reviewed By: kulshrax
Differential Revision: D29299050
fbshipit-source-id: bf39f87b9956165a558f3a19960d3d055685db9a
Summary:
When a user runs hg diff between a revision and the working copy, and the diff contains a file that is not present in the working copy because of the sparse profile, hg diff fails.
The failure happens because hg diff tries to compare file sizes, and calculating the file size on a workingfilectx fails because the file does not exist.
This diff overrides the size function for workingfilectx in the sparse extension to protect against that, similar to how it is already done for the data function.
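The spirit of the override, sketched with toy stand-ins (class and helper names here are illustrative, not Mercurial's actual API):

```python
import os

class WorkingFileCtx:
    """Minimal stand-in for Mercurial's workingfilectx (hypothetical)."""
    def __init__(self, path, base_data=b""):
        self._path = path
        self._base_data = base_data  # content recorded in the base revision

    def size(self):
        # Original behavior: stat() the working-copy file, which raises
        # when the sparse profile has excluded the file from disk.
        return os.stat(self._path).st_size

def sparse_size(fctx):
    # The fix, in spirit: when the file is absent from the working copy,
    # fall back to the base revision's data, mirroring what the sparse
    # extension already does for data().
    try:
        return fctx.size()
    except OSError:
        return len(fctx._base_data)
```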
Differential Revision: D29279691
fbshipit-source-id: 55d7843d23370c31693a32a0e1df8b882db0d89d
Summary: Only four tests remaining after this.
Reviewed By: kulshrax
Differential Revision: D29229656
fbshipit-source-id: 56c0a17f6585263e983ce8bc3c345b1f266422e0
Summary: Update more tests to avoid relying on pack files and legacy LFS, and override configs in `test-inconsistent-hash.t` to continue using pack files even after the scmstore rollout, to test Mononoke's response to corruption, which is not currently as easy with indexedlog.
Reviewed By: quark-zju
Differential Revision: D29229650
fbshipit-source-id: 11fe677fcecbb19acbefc9182b17062b8e1644d8
Summary:
Looks like D29145340 (d0e16f1a25) introduced a regression: "hg pull" fails with
"TypeError: Population must be a sequence or set. For dicts, use list(d).", stack trace: P424458181.
This diff fixes it by converting the dag.nameset to a list first.
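The quoted message is what `random.sample()` raises for a non-sequence population, so the fix is the conversion it suggests (here `nameset` stands in for a dag.nameset, and `pick_commits` is a hypothetical helper):

```python
import random

def pick_commits(nameset, k):
    # random.sample() requires a sequence; passing a dict-like or lazy
    # collection raises TypeError("Population must be a sequence or set.
    # For dicts, use list(d)."). Converting to a list first sidesteps
    # that, which is the shape of the fix.
    return random.sample(list(nameset), k)
```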
Reviewed By: mzr
Differential Revision: D29258771
fbshipit-source-id: 9ffcc756f9931d6d24b69dadf1cd2d08faccb443
Summary:
Introduce the `LegacyStore` trait, which contains the ContentStore methods not covered by the other datastore traits.
Implement this trait for both contentstore and scmstore, and modify Rust code that consumes `contentstore` directly to use `PyObject` and `LegacyStore` to abstract over both contentstore and scmstore instead.
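A Python analogue of the abstraction (the real `LegacyStore` is a Rust trait with different methods; `get_missing` and the class shapes here are hypothetical):

```python
from abc import ABC, abstractmethod

class LegacyStore(ABC):
    """Abstracts the ContentStore-only surface so callers can accept
    either backend without caring which one they got."""
    @abstractmethod
    def get_missing(self, keys):
        ...

class ContentStore(LegacyStore):
    def __init__(self, known):
        self.known = set(known)
    def get_missing(self, keys):
        return [k for k in keys if k not in self.known]

class ScmStore(LegacyStore):
    def __init__(self, known):
        self.known = set(known)
    def get_missing(self, keys):
        return [k for k in keys if k not in self.known]

def prefetch_missing(store: LegacyStore, keys):
    # Consumers depend on the trait, not a concrete store, so swapping
    # contentstore for scmstore requires no caller changes.
    return store.get_missing(keys)
```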
Reviewed By: DurhamG
Differential Revision: D29043162
fbshipit-source-id: 26e10b23efc423265d47a8a13b25f223dbaef25c
Summary: Introduce a new config option, `scmstore.enableshim`, which replaces instances of contentstore in Python with scmstore objects instead. Currently, this config is not safe to enable. Additional fixes are incoming.
Reviewed By: DurhamG
Differential Revision: D29213190
fbshipit-source-id: 7fd4db77d55cd25cc08c40bee28798d6a6d2555c
Summary: Previously, we just fetched "best effort", and logged any encountered errors using `tracing`, leaving it up to the client to inspect errors if necessary. Python relies on catching these fetch errors as exceptions, though, so this change introduces some utility methods to help propagate them correctly.
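The "collect and raise" shape can be sketched as follows (the helper name and exception type are illustrative, not the actual scmstore API):

```python
def fetch_all(keys, fetch_one):
    # Best-effort pass that records per-key failures instead of only
    # logging them, then surfaces the failures as one exception so
    # Python callers can catch it as they did with contentstore.
    results, errors = {}, {}
    for key in keys:
        try:
            results[key] = fetch_one(key)
        except Exception as exc:
            errors[key] = exc
    if errors:
        raise LookupError(f"failed to fetch: {sorted(errors)}")
    return results
```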
Reviewed By: DurhamG
Differential Revision: D29211683
fbshipit-source-id: 5e9dee942c2b60e0f77a051624d7f393a811fc4e
Summary:
Remove packfile-specific parts of tests and modify them to test without depending on packfiles where possible.
Currently `debugindexedlogdatastore` and `debugindexedloghistorystore` appear to be broken, and `debugdumpindexedlog` just dumps the raw indexedlog contents without any semantic information, so for the time being I've simply removed most packfile inspection.
Reviewed By: DurhamG
Differential Revision: D29099241
fbshipit-source-id: 86c4f9c83520374560587b8bec5c569d9c5c6510
Summary: My previous fix was actually incorrect; we now log actual remote requests, but join that with the logs from the contentstore fallback.
Reviewed By: DurhamG
Differential Revision: D29206878
fbshipit-source-id: d22e58792bf380c274e8086ce08aebe20dd9b848
Summary:
Previously, when fetching data using several concurrent requests, the EdenAPI client would wait for the headers for every request to finish coming in before starting to deserialize and yield entries from the bodies of any of the requests.
Normally, this isn't a huge deal since the response headers on all of the requests are usually roughly the same size, so they all finish downloading at roughly the same time when the requests are run concurrently. However, this does become an issue when `edenapi.maxrequests` is set. This option makes EdenAPI configure libcurl to queue outgoing connections once the configured limit is hit.
This means that although from EdenAPI's perspective all of the requests are running concurrently, they are not actually running in parallel. The result is that the EdenAPI client ends up waiting for all of the queued requests to be sent before yielding any data to the caller, which forces it to buffer all of the received data, resulting in massive memory consumption.
This diff fixes the problem by rearranging the structure of the Futures/Streams involved such that the client immediately begins yielding entries when they are received from any of the underlying transfers.
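An asyncio analogue of the before/after shapes (the real client is Rust futures/streams and libcurl; this is only a coarse sketch with invented helper names):

```python
import asyncio

async def fetch_buffered(requests):
    # Old shape: wait for every response to complete before yielding
    # anything, which forces all received data to be buffered when
    # some requests are stuck in the outgoing queue.
    for response in await asyncio.gather(*requests):
        for entry in response:
            yield entry

async def fetch_streaming(requests):
    # New shape: yield entries from whichever transfer completes first,
    # so data that has already arrived isn't held back by queued requests.
    for finished in asyncio.as_completed(requests):
        for entry in await finished:
            yield entry
```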
Reviewed By: quark-zju
Differential Revision: D29204196
fbshipit-source-id: b6b56bb7d60457de3c4046a07a5965749e9dd371
Summary:
When the `send_async` method is used to dispatch multiple concurrent requests, the method needs to return an `AsyncResponse` for each request. Since `AsyncResponse`'s constructor is itself `async` (it waits for all of the headers to be received), internally the method ends up with a collection of `AsyncResponse` futures.
Previously, in an attempt to simplify the API, the method would insert all of these futures into a `FuturesUnordered`, thereby conceptually returning a `Stream` of `AsyncResponses`. Unfortunately, this API ends up making it harder to consume the resulting `AsyncResponses` concurrently, as one might want to do when streaming lots of data over several concurrent requests.
This diff changes the API to just insert the `AsyncResponse` futures into a `Vec` to allow the caller to use them as desired. To maintain compatibility with the old behavior for the sake of this diff, the one current callsite has been updated to just dump the returned `Vec` into a `FuturesUnordered`. This will be changed later in the stack.
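The API change and its compatibility shim, sketched in asyncio terms (the real code is Rust; `send_async` here is a loose analogue, not the actual signature):

```python
import asyncio

def send_async(requests):
    # New API shape: return the in-flight responses as a plain list so
    # the caller chooses how to drive them (gather, as_completed, ...),
    # instead of baking one completion order into the API.
    # Must be called while an event loop is running.
    return [asyncio.ensure_future(r) for r in requests]

async def old_behavior(requests):
    # Roughly the compatibility shim from the diff: dump the returned
    # list back into a completion-order driver.
    results = []
    for fut in asyncio.as_completed(send_async(requests)):
        results.append(await fut)
    return results
```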
Reviewed By: quark-zju
Differential Revision: D29204195
fbshipit-source-id: ecee8cff430badd8213c2efef62fc68fbd91fde9
Summary: Nothing was using this metadata, and removing it simplifies the subsequent diffs in this stack.
Reviewed By: quark-zju
Differential Revision: D29147228
fbshipit-source-id: aa4828b710c3ef719f4d66adec5f66cd5b7d05d1
Summary:
This dep got updated in D29165283 (b82c5672fc) across a major version, but the code depending
on it wasn't, so now it's broken.
Reviewed By: mitrandir77
Differential Revision: D29229087
fbshipit-source-id: 5f2a14dd9f0447dd4578e8321991dfb3df32dcc2
Summary: Update versions for several of the crates we depend on.
Reviewed By: danobi
Differential Revision: D29165283
fbshipit-source-id: baaa9fa106b7dad000f93d2eefa95867ac46e5a1
Summary: It was rolled out weeks ago and there seem to be no complaints so far.
Reviewed By: DurhamG
Differential Revision: D29172396
fbshipit-source-id: 4f3310597ef1bb1abd92ee7207700a3d0039c598
Summary:
Add an option to pass some metadata in the token.
This will be used for content tokens, for example. We would like to guarantee that the specific content has been uploaded and that it had the specific length. This will be used for hg filenode uploads.
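One possible shape for such a token (the field names and structure here are hypothetical, not the actual wire format):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UploadToken:
    # A token vouching for an uploaded object, with optional metadata
    # attached, e.g. the content's length, so the server can attest both
    # that the content exists and what size it had.
    content_id: str
    content_size: Optional[int] = None
```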
Reviewed By: markbt
Differential Revision: D29136295
fbshipit-source-id: 2fbd3917ee0a55f43216351fdbc1a6686eb80176
Summary:
Use the `commitknown` EdenAPI endpoint for checking which commits already exist.
It uses the same `lookup_commits` under the hood but is a bit shorter to use.
We won't need the tokens for existing changesets, so we can use a simpler API.
Also, make the `lookupfilenodes` function a bit shorter.
Reviewed By: markbt
Differential Revision: D29134677
fbshipit-source-id: 257624d64480102c34761560b2bd768049cbfa83
Summary: Obsolescence markers have been deprecated in favour of mutation and visibility for some time. Remove syncing of obsmarkers via the commit cloud service.
Reviewed By: liubov-dmitrieva
Differential Revision: D29159443
fbshipit-source-id: 33e1d526a9df5c02059931c880529d695036c616
Summary:
A non-Buck build cannot succeed if Python.h is missing.
Let's check for it explicitly so we don't pick a bad Python.
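A sketch of what such a check might look like (this is the idea only, not the actual build-script code):

```python
import os
import sysconfig

def has_python_h():
    # Explicitly verify that the selected interpreter ships its C headers,
    # so the build doesn't pick a Python that cannot compile extensions.
    include = sysconfig.get_paths()["include"]
    return os.path.exists(os.path.join(include, "Python.h"))
```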
Reviewed By: DurhamG
Differential Revision: D29179295
fbshipit-source-id: 421c824053d066914a6611f05815527768f257ee
Summary:
It turns out that `python3` might be a symlink to an fbcode Python that cannot
perform the build in some (not all) environments. Let's try more names like
`python3.8`, `python3.7`, etc.
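The candidate-probing idea, sketched with `shutil.which` (the candidate list and its ordering are assumptions, not the build script's actual list):

```python
import shutil

def find_build_python(candidates=("python3.8", "python3.7", "python3.6", "python3")):
    # Probe concrete versioned names before falling back to plain
    # `python3`, which may be a symlink to an interpreter that cannot
    # perform the build in some environments.
    for name in candidates:
        path = shutil.which(name)
        if path:
            return path
    return None
```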
Reviewed By: DurhamG
Differential Revision: D29178933
fbshipit-source-id: da6cae351f25a90ab8a9da85282d09f79505c5e7
Summary:
IPython 7 has compatibility issues when bundling into zip:
- The dependency parso does not work when embedded in zip. It expects
`python/grammar38.txt` to exist on the filesystem (not in a zip).
- It stacktraces after exiting the IPython shell.
- TAB completion does not really work.
Downgrade to IPython 5 and avoid bundling the parso dep to resolve these
issues.
This should fix `test-autofix.t` and `test-argspans.py` failures.
Reviewed By: DurhamG
Differential Revision: D29170653
fbshipit-source-id: 14a3d16deaca72fbfb7b3acc0a4246a771c4d0aa
Summary: They will be reused in import_pull_data
Reviewed By: quark-zju
Differential Revision: D29147950
fbshipit-source-id: 192bf33c30067f43c4fcaaf3054741b39efb4e25
Summary: This is an interface for importing pull data into dag
Reviewed By: quark-zju
Differential Revision: D29142979
fbshipit-source-id: b40b94403a044c0b74d1574528aa374ec309a0cf
Summary: This will be used to import pull data into segmented changelog
Reviewed By: quark-zju
Differential Revision: D29142981
fbshipit-source-id: 2d19a035ee0b6cefef8fc0547a5dfb79f284a1de
Summary:
`cl.userust()` is always True after D29020191 (3765f8bd76) so `cl.userust() is False` code
is now dead.
Reviewed By: andll
Differential Revision: D29142464
fbshipit-source-id: f9a7e5c56641218758f12bad3de43d1cd1a71716
Summary: Pack files are no longer supported, yet we still have many tests which exercise them. In preparation for landing `scmstore` as a drop-in replacement for ContentStore, I'm removing our tests which only exist to test datapack-specific functionality.
Reviewed By: DurhamG
Differential Revision: D29099012
fbshipit-source-id: 635a913ee0d93ed8d536e71f8fa6a600b823b343
Summary:
The only real change here is: https://github.com/BurntSushi/ripgrep/pull/1756
This is a patch release but fixes a very glaring bug that others have
depended on. This diff fixes the uses to match the old behavior.
Although it's billed as a "fix", it's actually a huge perf improvement
for Linttool, which uses predominantly recursive suffix globs. The fact
that we don't have to compile ~5,000 regexps at Linttool startup anymore
makes such a huge difference that I am going to do a write-up soon.
Reviewed By: ndmitchell
Differential Revision: D29085977
fbshipit-source-id: 304470e5fa8cb986738aa0d9dd941641684a9194
Summary:
Python 2 is no longer built anywhere. Let's make the various py3
options the default, e.g. renaming 'make local3' to 'make local', and get
rid of the dead setup.py and rpm spec.
Reviewed By: quark-zju
Differential Revision: D29077093
fbshipit-source-id: 0c50c2296fe10ff1db9ac8f9b0df2a4836c0ea5b
Summary:
For historical reasons, we have had two methods in async runtime for a while: block_on_future and block_on_exclusive.
Recently those methods were calling the same code, so now it is time to replace both of them with a single block_on.
Reviewed By: quark-zju
Differential Revision: D29121107
fbshipit-source-id: 5faa76ae181e491b55d799c23c9de1b4e80298f3
Summary:
If this opens a transaction and fails, then we start a never-ending cycle of
starting hg cloud backup and failing.
This command normally doesn't open transactions, but it might via other
extensions; here is one example: P423007962.
This ends up killing people's machines:
https://fb.workplace.com/groups/scm/permalink/3957513017631622/
This fixes that by not kicking off background backups in this case.
Reviewed By: liubov-dmitrieva
Differential Revision: D29136077
fbshipit-source-id: 1c4e6de6147571dd6d728f761324506b1804f303
Summary:
The added command gives access to read (execute) dynamicconfig without
using an on-disk repo.
It can be used by the clone script to stage the lazy changelog rollout, or just to
verify dynamicconfig changes without using a repo.
Reviewed By: DurhamG
Differential Revision: D29123072
fbshipit-source-id: e8856d816a636fa860bfcc9694306a4a37552523
Summary:
Implement uploading file content via the Eden API.
* In this diff I aim to upload file content for the given set of filenodes.
* Also, the code checks with Mononoke, using lookup requests, for what is already there, and skips those.
* Also, this diff introduces calculation of the blake2 hash (called ContentId) for file contents (we would probably need to store/cache those and the mapping from hg filenode id to the canonical Mononoke content_id).
* For every uploaded content, EdenApi returns a token that we will also need to store later.
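The blake2 hashing step can be sketched as follows. Note this shows only the raw blake2b-256 primitive; Mononoke's actual ContentId derivation may use keyed or prefixed hashing, so treat this as illustrative rather than wire-compatible:

```python
import hashlib

def content_id(data: bytes) -> str:
    # Plain blake2b with a 32-byte (256-bit) digest over the file content.
    return hashlib.blake2b(data, digest_size=32).hexdigest()
```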
Reviewed By: markbt
Differential Revision: D29063229
fbshipit-source-id: 739a44bc3ff904cb04a39514ba5efd01c80ba6d0
Summary: This will be used in eager repo integration tests
Reviewed By: quark-zju
Differential Revision: D29113218
fbshipit-source-id: a24232bd6c19010d8ac90d1305f57f1094b06323
Summary:
Similar to D29111710. Let's avoid asking servers for unknown nodes.
In theory this might affect correctness. Practically, this should only affect "landed as" markers,
because all drafts should be non-lazy. If the "landed as" correctness is an issue, we can fix
forward "landed as" later (ex. by writing down the public commit hash explicitly in
debugmarklanded).
Reviewed By: andll
Differential Revision: D29114049
fbshipit-source-id: db8dc34244feb66919cdff9433b6f18967c9ea9b