Summary: fix the problem of printing 0% fsck progress bars when fsck is actually not happening.
Reviewed By: genevievehelsel
Differential Revision: D22927401
fbshipit-source-id: 868fa188e3c2671b0f6e18e842ec6a23281dc337
Summary: Move it from `'static` BoxFutures to async_trait and lifetimes
Reviewed By: markbt
Differential Revision: D22927171
fbshipit-source-id: 637a983fa6fa91d4cd1e73d822340cb08647c57d
Summary:
Eden runs without a sparse checkout, so "hg prefetch" can be a very expensive
operation on eden. Let's disallow "hg prefetch" on eden when no patterns are
specified - this will force users to specify exactly what they want to
prefetch. They can obviously still request everything, but this diff makes
it harder to do so.
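A minimal sketch of such a guard (the function name and arguments are illustrative, not the actual hg extension code):

```python
# Hypothetical guard, not the real hg prefetch implementation.
def check_prefetch_args(is_eden, pats):
    """Refuse a pattern-less prefetch on an EdenFS checkout."""
    if is_eden and not pats:
        raise ValueError(
            "hg prefetch on EdenFS requires explicit patterns; "
            "pass a pattern (even 'glob:**') to prefetch files"
        )

check_prefetch_args(is_eden=False, pats=[])         # ok: not an eden checkout
check_prefetch_args(is_eden=True, pats=["src/**"])  # ok: patterns were given
```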
Reviewed By: DurhamG
Differential Revision: D22946092
fbshipit-source-id: 895505bb90980a74b31ded4a75d102c527801652
Summary:
For EdenFS Redirection on Windows, we can simply use symlinks in place of bind mounts on other platforms. This works fine from what I can tell.
NOTE: at this commit EdenFS on Windows is still not able to create these redirections at start up time since that code is located in `EdenMount` and it uses `folly::subprocess`: https://www.internalfb.com/intern/diffusion/FBS/browse/master/fbcode/eden/fs/inodes/EdenMount.cpp. For the same reason we can't enable the EdenFS redirection integration tests.
Reviewed By: xavierd
Differential Revision: D22544200
fbshipit-source-id: b1a33da128c932544660c3e895583574d7891be9
Summary:
The mix use of `os.path` and `pathlib` in `redirect.py` is a little messy, which is making adding Windows support trickier since `os.path` functions do not accept `Path` until 3.8. So before I make the change I think it's better to clean it up.
Since we are targeting Python 3 nowadays, replacing `os.path` with `pathlib` seems better. Basically this diff does the following:
* Replace uses of `os.path` functions with their counterparts in `pathlib`.
* Reduce unnecessary conversions between `Path` and `str` / `bytes`.
* Only convert `Path` to `str` or `bytes` when interacting with other APIs (Thrift or os)
* Cross-platform APIs: `os.fspath`
* API expecting `str`: `os.fsdecode`
* API expecting `bytes`: `os.fsencode`
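A small stand-alone illustration of those three conversions (POSIX-style path shown; separators differ on Windows):

```python
import os
from pathlib import Path

p = Path("/tmp") / "eden" / "config"

as_any = os.fspath(p)      # cross-platform: the str the Path wraps
as_str = os.fsdecode(p)    # for APIs expecting str (e.g. Thrift)
as_bytes = os.fsencode(p)  # for APIs expecting bytes (low-level os calls)

assert as_any == as_str == "/tmp/eden/config"
assert as_bytes == b"/tmp/eden/config"
```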
Reviewed By: chadaustin
Differential Revision: D22879004
fbshipit-source-id: a247973dc9919c8805daa4046472124310725516
Summary:
This is a backout of D22912569 (34760b5164), which is breaking opt-clang-thinlto builds on platform007 (S206790).
Original commit changeset: 5ffdc48adb1f
Reviewed By: aaronabramov
Differential Revision: D22956288
fbshipit-source-id: 45940c288d6f10dfe5457d295c405b84314e6b21
Summary: Often users run into kerberos certificate issues, which in turn cause permission issues with mercurial and eden. This issue is not obvious while users are running hg commands, so `eden doctor` should identify it and tell the user about it. `hg doctor` runs `eden doctor`, so this will also surface if a user runs `hg doctor`.
Reviewed By: wez
Differential Revision: D22825882
fbshipit-source-id: 5f51901934862336b0ebc2996da6e1168ea8d8a3
Summary:
Added more logs when running the binary to be able to track the progress more easily.
Saved bonsai hashes into a file. In case we fail at deriving data types, we can still try to derive them manually with the saved hashes and avoid running the whole tool again.
Reviewed By: StanislavGlebik
Differential Revision: D22943309
fbshipit-source-id: e03a74207d76823f6a2a3d92a1e31929a39f39a5
Summary:
Large commits and many hooks can mean that checking 100 commits at a time
overloads the system. Reduce the default concurrency to something more reasonable.
While we're here, let's use the proper mechanism for default values in clap.
Reviewed By: ikostia
Differential Revision: D22945597
fbshipit-source-id: 0f0a086c3b74bec614ada44a66409c8d2b91fe69
Summary:
Argument names should be `snake_case`. Long options should be `--kebab-case`.
Retain the old long options as aliases for compatibility.
Reviewed By: HarveyHunt
Differential Revision: D22945600
fbshipit-source-id: a290b3dc4d9908eb61b2f597f101b4abaf3a1c13
Summary:
Add `--log-interval` to log every N commits, so that it can be seen to be
making progress in the logs.
The default is set to 500, which logs about once every 10 seconds on my devserver.
Reviewed By: HarveyHunt
Differential Revision: D22945599
fbshipit-source-id: 7fc09b907793ea637289c9018958013d979d6809
Summary: In cases where `stopWorkersOnStopListening` is disabled, `ThriftServer` continues to process inflight and even new requests after `ThriftServer::serve()` has returned. This is unintuitive and error-prone. This diff ensures that when `serve()` returns, there are no outstanding requests and new requests will not be processed.
Reviewed By: yfeldblum, genevievehelsel
Differential Revision: D22827447
fbshipit-source-id: cda35843ee6be084042e1a7c806c77fb472dd114
Summary: Commitcloud fillers use wishlist priority because we want them to wait their turn behind other users; let's also stop them from flooding the blobstore healer queue by making them background priority.
Reviewed By: ahornby
Differential Revision: D22867338
fbshipit-source-id: 5d16438ea185b580f3537e3c4895a545483eca7a
Summary:
Backfillers and other housekeeping processes can run so far ahead of the blobstore sync queue that we can't empty it from the healer task as fast as the backfillers can fill it.
Work around this by providing a new mode that background tasks can use to avoid filling the queue if all the blobstores are writing successfully. This has a side-effect of slowing background tasks to the speed of the slowest blobstore, instead of allowing them to run ahead at the speed of the fastest blobstore and relying on the healer ensuring that all blobs are present.
Future diffs will add this mode to appropriate tasks.
Reviewed By: ikostia
Differential Revision: D22866818
fbshipit-source-id: a8762528bb3f6f11c0ec63e4a3c8dac08d0b4d8e
Summary:
This operation is useful immediately after a small repo is merged into a large repo.
See example below
```
B' <- manually synced commit from small repo (in small repo it is commit B)
|
BM <- "big merge"
/ \
... O <- big move commit i.e. commit that moves small repo files in correct location
|
A <- commit that was copied from small repo. It is identical between small and large repos.
```
Immediately after a small repo is merged into a large one, we need to record that commit B and all of
its ancestors from the small repo need to be based on top of the "big merge" commit in the large repo rather than on top of
commit A.
The function below can be used to achieve exactly that.
Reviewed By: ikostia
Differential Revision: D22943294
fbshipit-source-id: 33638a6e2ebae13a71abd0469363ce63fb6b014f
Summary:
We were experiencing hangs during lfs http fetches. We've seen similar
issues before when using Hyper, which Reqwest is based off of. Let's switch to
the new Curl-based http_client instead.
Note, this does get rid of the ability to obey hg config http proxy settings.
That said, they shouldn't be used much, and the http proxy environment variables
are respected by libcurl.
Reviewed By: xavierd
Differential Revision: D22935348
fbshipit-source-id: 1c61c04bbb4043e3bde592251f12bf846ab3afd4
Summary:
A future diff does LFS fetches via http_client. Curl has some default
behavior of adding the "Expect: 100-continue" header which causes the server to
send a 100 status code response after the headers have been received but before
the payload has been received. Since the http_client model only expects a
single response, this breaks the model and we're unable to read the second
response. Let's disable this behavior by manually setting the header to an empty
string, which appears to be the official way to handle this.
Add it early so callers can overwrite it.
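The "added early so callers can overwrite it" behavior can be sketched like this (plain Python dicts standing in for the real Rust header API; illustrative only):

```python
# Illustrative only: dicts standing in for the http_client header handling.
def default_headers():
    # Seeding "Expect" with an empty value suppresses curl's automatic
    # "Expect: 100-continue"; because it is added first, callers can
    # still override it later.
    return {"Expect": ""}

def with_caller_headers(caller_headers):
    headers = default_headers()
    headers.update(caller_headers)
    return headers

h = with_caller_headers({"Content-Type": "application/json"})
assert h["Expect"] == ""  # 100-continue stays disabled by default
```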
Reviewed By: quark-zju
Differential Revision: D22935349
fbshipit-source-id: 3009a5eb72f40584b846510f34f121e0e821a2bc
Summary:
hg rage had a couple string issues with Python 3 and subprocess output.
This fixes them.
Reviewed By: quark-zju
Differential Revision: D22937629
fbshipit-source-id: 2b90ada536f152afb2d2662eb7f8e12d787f9544
Summary:
In Python 3, curses.unctrl(...) expects a single-character string. If
we pass a longer string it will throw an exception. Let's handle that.
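A rough Python equivalent of the per-character handling (the pure-Python `curses.ascii.unctrl` is used here as a stand-in for the curses function the diff actually touches):

```python
import curses.ascii

def safe_unctrl(s):
    # Apply unctrl one character at a time, so strings longer than one
    # character no longer raise.
    return "".join(curses.ascii.unctrl(ch) for ch in s)

assert safe_unctrl("\x01") == "^A"
assert safe_unctrl("abc\x01") == "abc^A"  # a bare unctrl("abc...") would raise
```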
Reviewed By: singhsrb
Differential Revision: D22937223
fbshipit-source-id: 00d1a38e48ee7d6bc0aa43eb771619027dc3d802
Summary:
On Windows, this environment variable is required to be set for less to
properly recognize that the input encoding is utf-8.
Reviewed By: kulshrax
Differential Revision: D22925498
fbshipit-source-id: 7969441792e0c1d9e2b93a690f658431f0acf59c
Summary:
Reading the entire file to compute its sha1 can cause EdenFS to OOM on large
files. Instead, let's simply read 8k at a time, like what we do on Linux.
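The incremental approach can be sketched in a few lines of Python (the real change is in EdenFS's C++ code; the 8 KiB chunk size is from the summary):

```python
import hashlib
import io

CHUNK_SIZE = 8 * 1024  # read 8 KiB at a time instead of the whole file

def sha1_of_file(f):
    h = hashlib.sha1()
    while True:
        chunk = f.read(CHUNK_SIZE)
        if not chunk:
            break
        h.update(chunk)
    return h.hexdigest()

data = b"x" * 100_000  # larger than a single chunk
assert sha1_of_file(io.BytesIO(data)) == hashlib.sha1(data).hexdigest()
```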
Reviewed By: chadaustin
Differential Revision: D22924710
fbshipit-source-id: cd472f396a4385049994a1976c7d046cb901337a
Summary: We were using a git snapshot of auto_impl from somewhere between 0.3 and 0.4; 0.4 fixes a bug around Self: 'lifetime constraints on methods that blocks work I'm doing in Mononoke, so update.
Reviewed By: dtolnay
Differential Revision: D22922790
fbshipit-source-id: 7bb68589a1d187393e7de52635096acaf6e48b7e
Summary:
The non-Windows part of it returns the full path for the mount, so do the same on
Windows. Since this is used in the remove path to unmount the repo, and the
paths expected by unmount are full paths, this meant remove would always fail to
unmount the repo.
Reviewed By: chadaustin
Differential Revision: D22915352
fbshipit-source-id: 6739267188c46d5ca1d6cb212a7155e70056f0f7
Summary:
One repack code path would return an error if the pack was already
deleted before the repack started. This is a fine situation, so let's just eat
the error and repack what can be repacked.
Reviewed By: xavierd
Differential Revision: D22873219
fbshipit-source-id: c716a5f0cd6106fd3464702753fb79df0bc7d13f
Summary:
The initialized_ field was never set, which meant that Overlay::close would
never close the sqlite overlay. This had the negative effect of allowing the
unmount thrift call to return before the sqlite overlay was properly closed, which
on Windows meant that the overlay couldn't be deleted.
Reviewed By: chadaustin
Differential Revision: D22915353
fbshipit-source-id: e501b2d0268449d109cc37dfdc62faebf1953e09
Summary:
On Windows, this has the benefit of automatically converting \r\n into \n,
which allows more tests to pass.
Reviewed By: chadaustin
Differential Revision: D22871408
fbshipit-source-id: 02ec1d21dc236175c3b0f3176db9b8c91dee21a4
Summary:
Eden API endpoint for segmented changelog. It translates a path in the
graph to the hash of the commit that the path lands on.
It is expected that paths point to unique commits.
This change goes through the plumbing of getting the request from
the edenapi side through Mononoke internals and to the segmented changelog
crate. The request used is an example. Follow-up changes will look more at
what shape the request and response should have.
Reviewed By: kulshrax
Differential Revision: D22702016
fbshipit-source-id: 9615a0571f31a8819acd2b4dc548f49e36f44ab2
Summary:
This functionality is going to be used in EdenApi. The translation is required
to unblock removing the changelog from the local copy of the repositories.
However the functionality is not going to be turned on in production just yet.
Reviewed By: kulshrax
Differential Revision: D22869062
fbshipit-source-id: 03a5a4ccc01dddf06ef3fb3a4266d2bfeaaa8bd2
Summary:
To start the only configuration available is whether the functionality provided
by this component is available in any shape or form. By default the component
is going to be disabled to all repositories. We will enable it first to
bootstrapped repositories and after additional tooling is added to production
repositories.
Reviewed By: kulshrax
Differential Revision: D22869061
fbshipit-source-id: fbaed88f2f45e064c0ae1bc7762931bd780c8038
Summary: The original function is changing behavior on non-UNIX platforms. Let's honor it in tests too.
Reviewed By: wez
Differential Revision: D22901850
fbshipit-source-id: 7c4e7dbfa0526d482f86794758e7ad39feacce10
Summary:
Add a way to actually try out segmented changelog by setting
`format.use-segmented-changelog`.
The main compatibility issues are assumptions that `0 .. len(repo)` are valid
revision numbers. Segmented changelog uses 2 spans `0 .. len(master_group)`
and `<a large 64-bit int> ..` for the non-master group. Some assumptions
about `len(repo)` were removed.
Segmented changelog also does not support linkrevs since revs can be
re-assigned. Some logic (ex. in treemanifest) that relies on linkrev being
`len(repo)` is changed.
There might be other issues, which will be fixed once we discover them.
Reviewed By: sfilipco
Differential Revision: D22657187
fbshipit-source-id: 688ad718741de0ca15e6aeaf84aced24a20c6b09
Summary:
For example, `master~1000000` should not trigger auto pull of the name
`1000000`. `x~y` is parsed as `(ancestor x y)`.
Reviewed By: singhsrb
Differential Revision: D22905438
fbshipit-source-id: 757ae8856f28126bc6e988d9989a894f83d83eb4
Summary:
- Enumerate API now provided via trait BlobstoreKeySource
- Implementation for Fileblob and ManifoldBlob
- Modified populate_healer to use new api
- Modified fixrepocontents to use new api
Reviewed By: ahornby
Differential Revision: D22763274
fbshipit-source-id: 8ee4503912bf40d4ac525114289a75d409ef3790
Summary: Since multiple mount points can share the same BackingStore object, it is better to start recording a profile for each unique backing store instead of getting backing stores by mount.
Reviewed By: chadaustin
Differential Revision: D22904499
fbshipit-source-id: f95508044912503509c4ac0f84b4a061e41ca0f3
Summary:
added `record-profile` and `finish-profile` subcommands to `eden prefetch`
`eden prefetch record-profile` triggers `startRecordingFetch()` in `HgQueuedBackingStore`. `eden prefetch finish-profile` triggers `stopRecordingFetch()` in `HgQueuedBackingStore` and writes the returned fetched file names to a text file. We can use the output file to predict what files are fetched by frequently used commands.
Reviewed By: chadaustin
Differential Revision: D22797896
fbshipit-source-id: ff240829b4fb3c47093279a1f2bd453009ff5f73
Summary:
After `startRecordingFetch()` is called on `HgQueuedBackingStore`, record the path of each fetched file. When `stopRecordingFetch()` is called, stop recording and return the collected file paths.
`startRecordingFetch()` and `stopRecordingFetch()` will be used in the next diff.
Reviewed By: chadaustin
Differential Revision: D22797037
fbshipit-source-id: a1fe30424d3c2884ffe139a0062b1e36328fd4fe
Summary: Update all the remaining steps in the walker to use the new early checks, so as to prune unnecessary edges earlier in the walk.
Reviewed By: farnz
Differential Revision: D22847412
fbshipit-source-id: 78c499a1870f97df7b641ee828fb8ec58303ebef
Summary:
This feature should be similar to `hg cloud deletebackup`, but for cloud sync.
It is needed to deprecate old backups.
Reviewed By: markbt
Differential Revision: D22876332
fbshipit-source-id: 90527bac4f352dc14fadf8b04f6c2df01045f5ce
Summary:
Check whether to emit an edge from the walker earlier, to reduce vec allocation of unnecessary edges that would immediately be dropped in WalkVisitor::visit.
The VisitOne trait is introduced as a simpler api to the Visitor that can be used to check if one edge needs to be visited, and the Checker struct in walk.rs is a helper around it that will only call the VisitOne api if necessary. Checker also takes on responsibility for respecting keep_edge_paths when returning paths, so that parameter has been removed for migrated steps.
To keep the diff size reasonable, this change has all the necessary Checker/VisitOne changes but only converts hg_manifest_step, with the remainder of the steps converted next in the stack. The todos labelling unmigrated types as always-emit types will be removed as part of converting the remaining steps.
Reviewed By: farnz
Differential Revision: D22864136
fbshipit-source-id: 431c3637634c6a02ab08662261b10815ea6ce293
Summary:
This tool can be used in tandem with pre_merge_delete tool to merge a one large
repository into another in a controlled manner - the size of the working copy
will be increased gradually.
Reviewed By: ikostia
Differential Revision: D22894575
fbshipit-source-id: 0055d3e080c05f870cfd0026174365813b0eb253
Summary:
There are two reasons to want a write quorum:
1. One or more blobstores in the multiplex are experimental, and we don't want to accept a write unless the write is in a stable blobstore.
2. To reduce the risk of data loss if one blobstore loses data at a bad time.
Make it possible to require a write quorum.
Reviewed By: krallin
Differential Revision: D22850261
fbshipit-source-id: ed87d71c909053867ea8b1e3a5467f3224663f6a
Summary:
During large prefetches, (say a clone), it is possible that 2 different
filenode actually refer to the same file content, which thus share the same LFS
blob. The code would wrongly prefetch this blob twice which would then fail due
to the `obj_set` only containing one instance of this object.
Instead of using a Vec for the objects to prefetch, we can simply use a
`HashSet` which will take care of de-duplicating the objects.
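In Python terms (the actual change is in Rust), the de-duplication amounts to collecting the pointers into a set instead of a list:

```python
def objects_to_prefetch(pointers):
    # Two filenodes can point at the same LFS blob; a set ensures each
    # (oid, size) pair is requested only once.
    return {(oid, size) for (oid, size) in pointers}

# Hypothetical oids for illustration.
ptrs = [("abc123", 10), ("def456", 20), ("abc123", 10)]
assert objects_to_prefetch(ptrs) == {("abc123", 10), ("def456", 20)}
```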
Reviewed By: DurhamG
Differential Revision: D22903606
fbshipit-source-id: 4983555d2b16639051acbbb591ebb752d55acc2d
Summary:
There was a small but easy to miss mistake when prefetch was changed to return
the keys that couldn't be prefetched. For LFS pointers, the code would wrongly
return that the blob was fetched, which is misleading as the LFS blob isn't
actually downloaded. For LFS pointers, we need to translate them to their LFS
blob content hashes.
Reviewed By: DurhamG
Differential Revision: D22903607
fbshipit-source-id: e86592cd986498d9f4a574585eb92da695de2e27
Summary: A couple of features stabilized, so drop their `#![feature(...)]` lines.
Reviewed By: eugeneoden, dtolnay
Differential Revision: D22912569
fbshipit-source-id: 5ffdc48adb1f57a1b845b1b611f34b8a7ceff216
Summary:
Our error handling looked pretty, but allocating all of these futures
is expensive. Each future is an allocation and some atomics. This diff
buys back some performance which I will soon spend on a new async
event queue.
Reviewed By: xavierd
Differential Revision: D22799737
fbshipit-source-id: 91dcfe974cf8f461109dfaa9dbf75c054ed84f59
Summary:
In several places in `library.sh` we had `--mononoke-config-path
mononoke-config`. This ensured that we could not run such commands from
non-`$TESTTMP` directories. Let's fix that.
Reviewed By: StanislavGlebik
Differential Revision: D22901668
fbshipit-source-id: 657bce27ce6aee8a88efb550adc2ee5169d103fa
Summary: The more contexts the better. Makes debugging errors much more pleasant.
Reviewed By: StanislavGlebik
Differential Revision: D22890940
fbshipit-source-id: 48f89031b4b5f9b15f69734d784969e2986b926d
Summary:
I've seen the error a couple of times when messing with my clones. Not
having the path makes it a bit difficult to fully understand what's going on,
so make sure we log it.
Reviewed By: fanzeyi
Differential Revision: D22899098
fbshipit-source-id: c9a60b71ea20514158e62fe8fa9c409d6f0f37ff
Summary:
An extremely thin wrapper around existing APIs: just a way to create merge commits from the command line.
This is needed to make the merge strategy work:
```
C
|
M3
| \
. \
| \
M2 \
| \ \
. \ \
| \ \
M1 \ \
| \ \ \
. TM3 \ \
. / | |
. D3 (e7a8605e0d) TM2 |
. | / /
. D2 (33140b117c) TM1
. | /
. D1 (733961456f)
| |
| \
| DAG to merge
|
main DAG
```
When we're creating `M2` as a result of merge of `TM2` into the main DAG, some files are deleted in the `TM3` branch, but not deleted in the `TM2` branch. Executing merge by running `hg merge` causes these files to be absent in `M2`. To make Mercurial work, we would need to execute `hg revert` for each such file prior to `hg merge`. Bonsai merge semantics however just creates correct behavior for us. Let's therefore just expose a way to create bonsai merges via the `megarepotool`.
Reviewed By: StanislavGlebik
Differential Revision: D22890787
fbshipit-source-id: 1508b3ede36f9b7414dc4d9fe9730c37456e2ef9
Summary:
This adds a CLI for the functionality, added in the previous diff. In addition, this adds an integration test, which tests this deletion functionality.
The output of this tool is meant to be stored in a file. It simulates a simple DAG, and it should be fairly easy to automatically parse the "to-merge" commits out of this output. In theory, it could have been enough to just print the "to-merge" commits alone, but it may sometimes be convenient to quickly examine the delete commits.
Reviewed By: StanislavGlebik
Differential Revision: D22866930
fbshipit-source-id: 572b754225218d2889a3859bcb07900089b34e1c
Summary:
This implements a new strategy of creating pre-merge delete commits.
As a reminder, the higher-level goal is to gradually merge two independent DAGs together. One of them is the main repo DAG, the other is an "import". It is assumed that the import DAG is already "moved", meaning that all files are at the right paths to be merged.
The strategy is as follows: create a stack of delete commits with gradually decreasing working copy size. Merge them into `master` in reverse order.
Reviewed By: StanislavGlebik
Differential Revision: D22864996
fbshipit-source-id: bfc60836553c656b52ca04fe5f88cdb1f15b2c18
Summary:
On Windows, paths are separated by \, but the test was comparing them against
/. We can simply ask Mercurial to return / with the slashpath template filter.
Reviewed By: chadaustin
Differential Revision: D22871407
fbshipit-source-id: 421bd14f752f29265b12eb25609d4f65e593dda8
Summary:
Cache invalidation is hard, and on Windows we avoided doing a lot of them. It
turns out, this was the wrong decision as it's fairly easy to find cases where
the filesystem view is different from the manifest state.
Since the Linux code is most likely correct in where the invalidation is done,
let's also do the same on Windows, removing a whole lot of #ifdef. It is very
likely that as a result of this diff we end up invalidating more than needed,
thus slowing down EdenFS, but at this point I'd prefer to err on the side of
correctness, performance will come later.
While invalidating files should use PrjDeleteFile, for directories we simply
need to mark them as placeholders, as directories created by a user won't have a
placeholder, and thus ProjectedFS would bypass EdenFS when listing them.
Reviewed By: chadaustin
Differential Revision: D22833202
fbshipit-source-id: d807557f5e44279c49ab701b7a797253ef1f0717
Summary: While testing something for another change, I came across this overlooked typo.
Reviewed By: wez
Differential Revision: D22894060
fbshipit-source-id: 8aa48ef5da714650c974adcf8a34a542fdd4ed9e
Summary:
Avoid some overhead and complexity by storing BufVec as a
unique_ptr<IOBuf>. The complexity can be reintroduced if we ever find
FUSE splice support to be a performance win for us.
Reviewed By: kmancini
Differential Revision: D22710795
fbshipit-source-id: e58eedc0fb5cea9e9743ccd20d3e4e2b7cc5d198
Summary:
Previously we logged a process to Scuba once it performed 2000 (fetchThreshold_) fetches, but then in Scuba all processes have fetch_count = 2000. In order to see approximately how many fetches a process really did, we now log the same process to Scuba every time it does 2000 more fetches.
Note: this change could make the total count of fetch-heavy events in Scuba inaccurate, as we log the same process more than once. So when users want to see how many fetch-heavy events happened, instead of setting "type = fetch_heavy", they should set exactly "fetch_count = 2000".
Reviewed By: chadaustin
Differential Revision: D22867679
fbshipit-source-id: ae3c768a8d3b03628db6a77263e715303a814e3d
Summary:
With upcoming write quorum work, it'll be interesting to know all the failures that prevent a put from succeeding, not just the most recent, as the most recent may be from a blobstore whose reliability is not yet established.
Store and return all errors, so that we can see exactly why a put failed
Reviewed By: ahornby
Differential Revision: D22896745
fbshipit-source-id: a3627a04a46052357066d64135f9bf806b27b974
Summary:
"Chunking hint" is a string (expected to be in a file) of the following format:
```
prefix1, prefix2, prefix3
prefix4,
prefix5, prefix6
```
Each line represents a single chunk: if a path starts with any of the prefixes in the line, it belongs to the corresponding chunk. Prefixes are comma-separated. Any path that does not start with any prefix in the hint goes to an extra chunk.
This hint will be used in a new pre-merge-delete approach, to be introduced further in the stack.
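A minimal parser for this format might look as follows (illustrative sketch; the real tool is written in Rust):

```python
def parse_chunking_hint(text):
    # Each non-empty line is one chunk: a comma-separated list of prefixes.
    return [
        [p.strip() for p in line.split(",") if p.strip()]
        for line in text.splitlines()
        if line.strip()
    ]

def chunk_for_path(chunks, path):
    for i, prefixes in enumerate(chunks):
        if any(path.startswith(prefix) for prefix in prefixes):
            return i
    return len(chunks)  # the extra chunk for unmatched paths

hint = "prefix1, prefix2, prefix3\nprefix4,\nprefix5, prefix6\n"
chunks = parse_chunking_hint(hint)
assert chunks[1] == ["prefix4"]
assert chunk_for_path(chunks, "prefix4/file.txt") == 1
assert chunk_for_path(chunks, "other/file.txt") == 3  # extra chunk
```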
Reviewed By: StanislavGlebik
Differential Revision: D22864999
fbshipit-source-id: bbc87dc14618c603205510dd40ee5c80fa81f4c3
Summary:
We need to use a different type of pre-merge deletes, it seems, as the one proposed requires a huge number of commits. Namely, if we have `T` files in total in the working copy and we're happy to delete at most `D` files per commit, while merging at most `S` files per deletion stack:
```
#stacks = T/S
#delete_commits_in_stack = (T-S)/D
#delete_commits_total = T/S * (T-S)/D = (T^2 - TS)/SD ~ T^2/SD
T ~= 3*10^6
If D~=10^4 and S~=10^4:
#delete_commits_total ~= 9*10^12 / 10^8 = 9*10^4
If D~=10^5 and S~=10^5:
#delete_commits_total ~= 9*10^12 / 10^10 = 9*10^2
```
So either 90K or 900 delete commits. 90K is clearly too big. 900 may be tolerable, but it's still hard to manage and make sense of. What's more, there seems to be a way to produce fewer of these, see further in the stack.
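As a sanity check, the estimate can be reproduced numerically with the approximation T^2/(S*D), which drops the smaller correction term:

```python
def total_delete_commits(t, s, d):
    # #stacks * #delete_commits_per_stack ~= (T/S) * (T/D) = T^2/(S*D)
    return (t // s) * (t // d)

T = 3_000_000
assert total_delete_commits(T, 10_000, 10_000) == 90_000   # ~9*10^4
assert total_delete_commits(T, 100_000, 100_000) == 900    # ~9*10^2
```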
Reviewed By: StanislavGlebik
Differential Revision: D22864998
fbshipit-source-id: e615613a34e0dc0d598f3178dde751e9d8cde4da
Summary: Since local store compaction is not a hard requirement for graceful restart, make this issue non-blocking. We've seen some users fail restarts because they had compaction issues due to lack of space on their device. If we fail during the compaction stage, we should continue the restart anyway. This is also because there is a chance that the local store will clear columns that are no longer in use.
Reviewed By: chadaustin
Differential Revision: D22828433
fbshipit-source-id: 9a2aaec64e77c2d00089834fda8f8cffda472735
Summary:
We're going to add an SQL blobstore to our existing multiplex, which won't have all the blobs initially.
In order to populate it safely, we want to have normal operations filling it with the latest data, and then backfill from Manifold; once we're confident all the data is in here, we can switch to normal mode, and never have an excessive number of reads of blobs that we know aren't in the new blobstore.
Reviewed By: krallin
Differential Revision: D22820501
fbshipit-source-id: 5f1c78ad94136b97ae3ac273a83792ab9ac591a9
Summary:
Related diff: D22816538 (3abc4312af)
In the repo_import tool, once we move a bookmark to reveal commits to users, we want to check that hg_sync has received the commits. To do this, we extract the largest log id from bookmarks_update_log to compare it with the mutable_counters value related to hg_sync. If the counter value is larger than or equal to the log id, we can move the bookmark to the next batch of commits. Otherwise, we sleep, retry fetching the mutable_counters value, and compare the two again.
mutable_counters is an SQL table that can track bookmark update log entries with a counter.
This diff adds the functionality to extract the mutable_counters value for hg_sync.
======================
SQL query fix:
In the previous diff (D22816538 (3abc4312af)) we didn't cover the case where we might not get an ID, which should return None. This diff fixes that.
Reviewed By: StanislavGlebik
Differential Revision: D22864223
fbshipit-source-id: f3690263b4eebfe151e50b01a13b0193009e3bfa
Summary: The walker had a couple of unused stats fields in state.rs. Remove them.
Reviewed By: farnz
Differential Revision: D22863812
fbshipit-source-id: effc37abe29fafb51cb1421ff4962c5414b69be1
Summary:
Prefetch had some legacy logic that tried to look at the server to
determine what it needed to fetch. That's expensive, so let's just replace it
with looking at draft() commits. It also had some naive logic that looped over
every file in the manifest and tried to match a pattern. Let's instead use
mf.matches which efficiently avoids traversing unnecessary directories.
This makes prefetch much faster.
Reviewed By: kulshrax
Differential Revision: D22853075
fbshipit-source-id: cf98aa147203c2d0e811b98998b8dc89173943a6
Summary:
An earlier diff, D21772132 (713fbeec24), added an option to default hgcache data store
writes to indexedlog, but it only did so for data, not history. Let's also do it
for history.
Reviewed By: quark-zju
Differential Revision: D22870952
fbshipit-source-id: 649361b2d946359b9fbdd038867e1058077bd101
Summary: It is used in lower case in all other places
Reviewed By: farnz
Differential Revision: D22867435
fbshipit-source-id: 50c78027eeacd341144d190f36cc5570d64f92c3
Summary: This makes it a little bit easier to use.
Reviewed By: sfilipco
Differential Revision: D22853717
fbshipit-source-id: aa3c1ed2a9a2d1020a48a4493a644093d8b07e67
Summary:
TL;DR:
A codemod did something a bit unclean, so a lint was added. It will keep bugging us if we make changes here, so let's satisfy the linter.
More info:
`x.y_ref() = ...` and `*x.y_ref() = ...` are pretty much the same, except `*x.y_ref() = ...` can throw for optional fields.
A codemod added a bunch of `*x.y_ref() = ...`, but since they didn't want people to copy-paste this for optional fields, they added a lint that pops up on non-optional fields too :(
https://fb.workplace.com/groups/thriftusers/permalink/509303206445763/
Reviewed By: chadaustin
Differential Revision: D22823686
fbshipit-source-id: b3b1b8a3b6b1f1245176be19c961476e4554a8e5
Summary:
Previously, a fetch-heavy event's cmdline was delimited by '\x00' when logged to Scuba (for example: `grep--color=auto-rtest.`).
Now we replace \x00 with a space, so the command name and args are separated by spaces ( `grep --color=auto -r test .` ).
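The transformation itself is a one-liner; in Python terms:

```python
# /proc/<pid>/cmdline-style data: argv entries delimited by NUL bytes.
raw_cmdline = "grep\x00--color=auto\x00-r\x00test\x00.\x00"
readable = raw_cmdline.replace("\x00", " ").strip()
assert readable == "grep --color=auto -r test ."
```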
Reviewed By: kmancini
Differential Revision: D22772868
fbshipit-source-id: 4ab42e78c7bc786767eee3413b9586739a12e8ac
Summary:
This helps in understanding what's going on when some files disappear and/or
aren't flushed properly.
Reviewed By: fanzeyi
Differential Revision: D22833201
fbshipit-source-id: 09beb5796cb40c0a93107ee6a3a3497abb2578f0
Summary:
This is expected to fix flakiness in test-walker-corpus.t.
The problem was that if a FileContent node was reached via an Fsnode it did not have an associated path. This is a race condition that I've not managed to reproduce locally, but I think it is highly likely to be the reason for the flaky failure on CI.
Reviewed By: ikostia
Differential Revision: D22866956
fbshipit-source-id: ef10d92a8a93f57c3bf94b3ba16a954bf255e907
Summary:
There have been lots of issues with the user experience around authentication
and its help messages.
Just one of them:
when certs are configured for authentication but are invalid, the `hg cloud auth`
command will provide a help message about the certs, but then ask to copy and
paste a token from the interactive token-obtaining flow.
Another thing: if certs are configured to be used, it was not easy to
set up a token for Scm Daemon, which can still be on tokens even if cloud
sync uses certs. Now it is possible with the `hg auth -t <token>` command.
Now everything should be cleaner, and all the messages should be clearer as well.
Also, the certs-related help message has been improved.
Also, all tests were cleaned up from authentication except for the main
test. This is to simplify the tests.
Reviewed By: mitrandir77
Differential Revision: D22866731
fbshipit-source-id: 61dd4bffa6fcba39107be743fb155be0970c4266
Summary:
We shouldn't add any TLS-related configs to the default configuration.
TLS is not used by default; tokens are currently the default, and TLS is another
option. It is cleaner to cover the defaults in the code itself, rather than add
complexity to the configuration here.
Reviewed By: mitrandir77
Differential Revision: D22864541
fbshipit-source-id: 0c0723c77c2a961a0915617d636b83bc65ac8541
Summary:
We're seeing users report LFS fetching hanging for 24+ hours. Stack
traces seem to show it hanging on the LFS fetch. Let's read bytes off the wire
in smaller chunks and add a timeout to each read (the default timeout is 10s).
Reviewed By: xavierd
Differential Revision: D22853074
fbshipit-source-id: 3cd9152c472acb1f643ba8c65473268e67d59505
Summary:
We encountered an issue where gc kicked in after forking the Python
process. This caused it to trigger some Rust drop logic, which hung because some
cross-thread locks were not in a good state. Let's just disable gc during the
fork and only re-enable it in the parent process.
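The fork guard can be sketched like this (a minimal illustration; the actual change lives in Mercurial's fork handling):

```python
import gc
import os

def safe_fork():
    """Fork with the cyclic GC paused.

    A collection firing in the forked child could run drop/finalizer
    logic that touches locks snapshotted from the parent's threads,
    deadlocking the child. Disable gc before fork and re-enable it
    only in the parent; the child stays gc-free until exec/exit."""
    was_enabled = gc.isenabled()
    gc.disable()
    pid = os.fork()
    if pid != 0 and was_enabled:
        gc.enable()  # parent only
    return pid
```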
Reviewed By: quark-zju
Differential Revision: D22855986
fbshipit-source-id: c3e99fb000bcd4cc141848e6362bb7773d0aad3d
Summary:
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/40
Those tools are used in some integration tests; make them public so that the tests can pass.
Reviewed By: ikostia
Differential Revision: D22844813
fbshipit-source-id: 7b7f379c31a5b630c6ed48215e2791319e1c48d9
Summary:
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/41
As of D22098359 (7f1588131b) the default locale used by integration tests is en_US.UTF-8, but as the comment in the code mentions:
```
The en_US.UTF-8 locale doesn't behave the same on all systems and trying to run
commands like "sed" or "tr" on non-utf8 data will result in "Illegal byte
sequence" error.
That is why we are forcing the "C" locale.
```
Additionally, I've changed the test-walker-throttle.t test to use "/bin/date" directly. Previously it was using "/usr/bin/date", but "/bin/date" is a more standard path, as it also works on macOS.
Reviewed By: krallin
Differential Revision: D22865007
fbshipit-source-id: afd1346e1753df84bcfc4cf88651813c06933f79
Summary: It fails now for an unknown reason; will work on it later.
Reviewed By: mitrandir77, ikostia
Differential Revision: D22865324
fbshipit-source-id: c0513bfa2ce9f6baffebff472053e8a5d889c9ba
Summary:
Follow-up from D22819791.
We want to use the bookmark update delay only in scs, so let's configure it
this way.
Reviewed By: krallin
Differential Revision: D22847143
fbshipit-source-id: b863d7fa4bf861ffe5d53a6a2d5ec44e7f60eb1a
Summary:
This is the (almost) final diff to introduce WarmBookmarksCache in repo_client.
A lot of this code is to pass through the config value, but a few things I'd
like to point out:
1) Warm bookmark cache is enabled from config, but it can be killswitched using
a tunable.
2) WarmBookmarksCache in scs derives all derived data, but for repo_client I
decided to derive just hg changeset. The main motivation is to not change the
current behaviour, and to make mononoke server more resilient to failures in
other derived data types.
3) Note that WarmBookmarksCache doesn't obsolete SessionBookmarksCache that was
introduced earlier, but rather it complements it. If WarmBookmarksCache is
enabled, then SessionBookmarksCache reads the bookmarks from it and not from
db.
4) There's one exception in point #3 - if we just did a push then we read
bookmarks from db rather than from bookmarks cache (see
update_publishing_bookmarks_after_push() method). This is done intentionally -
after push is finished we want to return the latest updated bookmarks to the
client (because the client has just moved a bookmark after all!).
I'd argue that the current code is a bit sketchy already - it doesn't read from
master but from replica, which means we could still see outdated bookmarks.
Reviewed By: krallin
Differential Revision: D22820879
fbshipit-source-id: 64a0aa0311edf17ad4cb548993d1d841aa320958
Summary:
Add a cmdlib argument to control cachelib zstd compression. The default behaviour is unchanged, in that the CachelibBlobstore will attempt compression when putting to the cache if the object is larger than the cachelib max size.
To make the cache behaviour more testable, this change also adds an option to do an eager put to cache without the spawn. The default remains to do a lazy fire and forget put into the cache with tokio::spawn.
The motivation for the change is that when running the walker the compression putting to cachelib can dominate CPU usage for part of the walk, so it's best to turn it off and let those items be uncached as the walker is unlikely to visit them again (it only revisits items that were not fully derived).
Reviewed By: StanislavGlebik
Differential Revision: D22797872
fbshipit-source-id: d05f63811e78597bf3874d7fd0e139b9268cf35d
Summary: populate_healer would panic on launch because there were two arguments assigned to -d: debug and destination-blobstore-id
Reviewed By: StanislavGlebik
Differential Revision: D22843091
fbshipit-source-id: e300af85b4e9d4f757b4311f2b7d776f59c7527d
Summary:
Although new changelog revlogs have not used deltas for years, early
revisions in our production changelog still use the mpatch delta format
because they were stream-cloned.
Teach revlogindex to support them.
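For context, the mpatch hunk layout can be sketched in a few lines of Python. This is a simplified illustration of the format (a 12-byte big-endian header of start, end, data length, followed by the data, with base[start:end] replaced), not Mercurial's optimized implementation:

```python
import struct

def apply_mpatch(base: bytes, delta: bytes) -> bytes:
    """Apply a single mpatch-style delta to `base`.

    Each hunk: 12-byte header (start, end, length as big-endian u32s),
    then `length` bytes of data replacing base[start:end]."""
    out = []
    pos = 0  # cursor into base
    i = 0    # cursor into delta
    while i < len(delta):
        start, end, length = struct.unpack(">III", delta[i : i + 12])
        i += 12
        out.append(base[pos:start])        # copy the unchanged region
        out.append(delta[i : i + length])  # insert the replacement data
        i += length
        pos = end                          # skip the replaced region
    out.append(base[pos:])
    return b"".join(out)
```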
Reviewed By: sfilipco
Differential Revision: D22657204
fbshipit-source-id: 7aa3b76a9a6b184294432962d36e6a862c4fe371
Summary:
Now the rust-commits features are moved to changelog2, and changelog is no
longer used for rust-commits features. Let's just remove all rust-commits
features from changelog, and collapse related configs into just rust-commits.
Reviewed By: DurhamG
Differential Revision: D22657194
fbshipit-source-id: d74ae40a24fb365981679feab7c2403f84df2b3e
Summary:
Restore the behavior to before D22368827 (da42f2c17e). This also significantly speeds up
graph log like `smartlog` because the fast native path of `reachableroots`
can be used.
Reviewed By: DurhamG
Differential Revision: D22657197
fbshipit-source-id: e3236938d8acfd0935ec45e761763bf0477f2152
Summary: So reachableroots can be called from Python.
Reviewed By: sfilipco
Differential Revision: D22657186
fbshipit-source-id: 36b1b5ed1e32c88bb07e6c7c7e0a7ca89e0751a3
Summary:
The default reachable_roots implementation is good enough for segmented
changelog, but not efficient for revlogindex use-case.
Reviewed By: sfilipco
Differential Revision: D22657193
fbshipit-source-id: a81bc255d42d46c50e61fe954f027f1160dacb6c
Summary:
I thought it was just `roots & (::heads)`. It is actually more complex than
that.
Reviewed By: sfilipco
Differential Revision: D22657201
fbshipit-source-id: bd0b49fc4cdd2c516384cf70c1c5f79af4da1342
Summary:
The `changelog2.changelog` type does not inherit from `revlog`.
It basically takes the implementation from `changelog`, with the `userust` branches
returning true.
Reviewed By: DurhamG
Differential Revision: D22657195
fbshipit-source-id: dc718d180c7ef3d64f822c3a8c968ef6027047d5
Summary: This will help us verify that the C index is no longer necessary.
Reviewed By: DurhamG
Differential Revision: D22657196
fbshipit-source-id: 16ed74acc5400661572880adf3d8d3267c8b53e2
Summary:
This makes the Rust code path take care of commit writing.
The feature cannot be enabled yet because the `nodemap` backed by the C index
is no longer aware of new in-memory commits. The next diff migrates nodemap to
be backed by Rust and can turn on this feature altogether.
Reviewed By: DurhamG
Differential Revision: D22657191
fbshipit-source-id: 5f1a60f0b391b06fcd61d10676e2e095f8b7c9d6
Summary:
The debug print abuses the `linkmapper`. The Rust commit add logic does not
use `linkmapper`. So let's remove the debug message to be consistent with
the Rust logic.
Reviewed By: DurhamG
Differential Revision: D22657189
fbshipit-source-id: 2e92087dbb5bfce2f00711dcd62881aba64b0279
Summary:
Those tests are going to break with the latest changelog. We're moving away
from revlog so let's just remove the tests.
Reviewed By: DurhamG
Differential Revision: D22657198
fbshipit-source-id: 6d1540050d70c58636577fa3325daca511273a2b
Summary:
`tr.writepending()` removes callbacks saying "temp files are already written".
However, `tr.writepending()` might be called multiple times and the content
being written can be changed.
For example, `test-hook.t` has a test case that uses both `prechangegroup` and
`pretxnchangegroup` external process hooks. The `prechangegroup` hook runs
before the changelog gets changed, and the `pretxnchangegroup` runs after the
changelog gets changed.
Without this diff, the latter will not see the changelog change after migrating
to Rust (which buffers pending commits in memory).
The revlog changelog's "addpending" keeps the original behavior - it is only called
once, to avoid a potential performance regression.
Reviewed By: DurhamG
Differential Revision: D22657199
fbshipit-source-id: 8f96a0beaeebd45e73de3973e3ee8dd1426295fb
Summary:
In the future it will be harder to provide changed "revs". Let's use commit hashes
instead.
Reviewed By: DurhamG
Differential Revision: D22657203
fbshipit-source-id: b46055fe31d174a6eae47570ebec4a73c7d603f6
Summary:
Without this a few tests will fail with upcoming changes.
For example, test-clone-uncompressed.t will say "requesting all changes"
instead of "no changes found" for the "hg clone --stream" command.
Reviewed By: DurhamG
Differential Revision: D22657190
fbshipit-source-id: 349caf58e5bfdb5310b6b5585e4727e208197573
Summary:
Commands like `debugindex` rely on this function to return a revlog object
with low-level APIs. Do not return the changelog as-is.
Reviewed By: DurhamG
Differential Revision: D22657202
fbshipit-source-id: b6ae84a157d3411cef6f67ee842f44134fe9b35e
Summary:
This replaces the RustError that might happen during `addcommits`, and allows us to
handle it without having a stacktrace.
Reviewed By: DurhamG
Differential Revision: D22539564
fbshipit-source-id: 356814b9baf0b31528dfc92d62b0dcf352bc1e24
Summary:
The zstore-commit-data code paths are in Python. We want to move them to behind
the Rust HgCommits abstractions. So stop making Python interact with the
low-level details.
Reviewed By: DurhamG
Differential Revision: D22638457
fbshipit-source-id: 435db8425a29ce4eae24a6202ad928f85a5f5ee2
Summary: It's the same as `__add__`. It's consistent with the revset language.
Reviewed By: sfilipco
Differential Revision: D22638456
fbshipit-source-id: 928177d553220461192650f4792ac39cadd57dc2
Summary:
Follow up of D22638454.
This makes revlogindex mark its compatible DAG so that "all()" fast paths can be used properly.
Reviewed By: sfilipco
Differential Revision: D22638459
fbshipit-source-id: 074e95b9fccbc486b69a947fec5172662e7dd3b7
Summary:
No need to exhaust the entire IdLazySet if there are hints.
This is important to make `small & lazy` fast.
Reviewed By: sfilipco
Differential Revision: D22638462
fbshipit-source-id: 63a71986e6e254769c42eb6250c042ea6aa5808b
Summary:
When multiple DAGs (ex. a local DAG and a commit-cloud DAG) are involved,
certain fast paths become unsound. Namely, the fast paths of the FULL hint
should check DAG compatibility. For example:
localrepodag.all() & remotedag.all()
should not simply return `localrepodag.all()` or `remotedag.all()`.
Fix it by checking DAG pointers.
A StaticSet might be created without using a DAG, add an optimization
to change `all & static` to `static & all`. So StaticSet without DAG
wouldn't require full DAG scans when intersecting with other sets.
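The soundness check can be sketched like this (hypothetical names; a minimal Python model of the Rust set/hint machinery):

```python
FULL = 1 << 0  # hint: the set covers every commit of the DAG it came from

class DagSet:
    """A tiny stand-in for a revset backed by a specific DAG instance."""
    def __init__(self, dag, items, hints=0):
        self.dag = dag
        self.items = frozenset(items)
        self.hints = hints

def intersect(lhs, rhs):
    # The `x & all()` fast path is only sound when both operands were
    # built from the *same* DAG object, so compare DAG identity, not
    # just the FULL hint.
    if (rhs.hints & FULL) and lhs.dag is rhs.dag:
        return lhs
    if (lhs.hints & FULL) and lhs.dag is rhs.dag:
        return rhs
    return DagSet(lhs.dag, lhs.items & rhs.items)  # generic slow path
```

With two different DAGs, `localrepodag.all() & remotedag.all()` falls through to the generic path instead of incorrectly returning one side.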
Reviewed By: sfilipco
Differential Revision: D22638454
fbshipit-source-id: 72396417e9c1238d5411829da8f16f2c6d4c2f3a
Summary:
Improve `fmt::Debug` so it fits better in the Rust and Python eco-system:
- Support Rust formatter flags. For example `{:#5.3?}`. `5` defines limit of a
large set to show, `3` defines hex commit hash length. `#` specifies the
alternate form.
- Show commit hashes together with integer Ids for IdStaticSet.
- Use HG rev range syntax (`a:b`) to represent ranges for IdStaticSet.
- Limit spans to show for IdStaticSet, similar to StaticSet.
- Show only 8 chars of a long hex commit hash by default.
- Minor renames like `dag` -> `spans`, `difference` -> `diff`.
Python bindings use `fmt::Debug` as `__repr__` and will be affected.
Reviewed By: sfilipco
Differential Revision: D22638455
fbshipit-source-id: 957784fec9c99c8fc5600b040d964ce5918e1bb4
Summary:
Hard links add complexity for revlog writes, and they're not that useful in a production
setup. The Rust revlog `flush` API does not break hardlinked files. So let's
just avoid using hard links during local repo clone.
Reviewed By: DurhamG
Differential Revision: D22638460
fbshipit-source-id: 038f4d5c48e9972b14c9e59a9d7ef72b6bc5308d
Summary:
This makes intersection set stop early. It's useful to stop iteration on some
lazy sets. For example, the below `ancestors(tip) & span` or
`descendants(1) & span` sets can take seconds to calculate without this
optimization.
```
In [1]: cl.dag.ancestors([cl.tip()]) & cl.tonodes(bindings.dag.spans.unsaferange(len(cl)-10,len(cl)))
Out[1]: <and <lazy-id> <dag [...]>>
In [3]: %time len(cl.dag.ancestors([cl.tip()]) & cl.tonodes(bindings.dag.spans.unsaferange(len(cl)-10,len(cl))))
CPU times: user 364 µs, sys: 0 ns, total: 364 µs
Wall time: 362 µs
In [7]: %time len(cl.dag.descendants([repo[1].node()]) & cl.tonodes(bindings.dag.spans.unsaferange(0,100)))
CPU times: user 0 ns, sys: 574 µs, total: 574 µs
Wall time: 583 µs
```
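A rough Python sketch of the early-stop idea (hypothetical names, not the actual Rust implementation): iterate the expensive lazy side, but stop as soon as every element of the small finite side has been accounted for.

```python
def lazy_intersection(lazy_iter, finite_set):
    """Yield items of `lazy_iter` that are in `finite_set`, stopping as
    soon as every member of the finite side has been seen, so an
    expensive lazy set (e.g. ancestors(tip)) is not fully exhausted."""
    remaining = set(finite_set)
    for item in lazy_iter:
        if item in finite_set:
            yield item
            remaining.discard(item)
            if not remaining:
                return  # early stop: nothing left to find
```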
Reviewed By: sfilipco
Differential Revision: D22638458
fbshipit-source-id: b9064ce2ff1aecc2d7d00025928dfcb3c0d78e0c
Summary:
Similar to the segmented changelog version using `ANCESTORS`. This makes
`heads(all())` calculate `heads_ancestors(all())` automatically and gets
the speed-up.
Reviewed By: sfilipco
Differential Revision: D22638464
fbshipit-source-id: 014412f1c226925e50387f18c1282b3cb96d434b
Summary:
Optimize it to not convert revs to `Vec<u32>`, and have a fast path to
initialize `states` with `Unspecified`. This makes it about 2x faster and matches
the C revlog `headrevs` performance when calculating `headsancestors(all())`:
```
In [2]: %timeit cl.index.clearcaches(); len(cl.index.headrevs())
10 loops, best of 3: 66.9 ms per loop
In [3]: %timeit len(cl.dageval(lambda: headsancestors(all())))
10 loops, best of 3: 64.9 ms per loop
```
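The general shape of the computation can be sketched in Python (a simplified model, not the optimized Rust version described above): start from one cheap bulk fill, the analogue of the `Unspecified` fast-path init, then demote every rev that some other rev names as a parent.

```python
def head_revs(parents):
    """Compute head revisions. `parents[r]` lists the parent revs of
    rev r (parents are always smaller than the child; -1 is the null
    parent, following revlog convention)."""
    is_head = [True] * len(parents)  # cheap bulk initialization
    for ps in parents:
        for p in ps:
            if p >= 0:
                is_head[p] = False   # p has a child, so it's not a head
    return [r for r, h in enumerate(is_head) if h]
```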
Reviewed By: sfilipco
Differential Revision: D22638461
fbshipit-source-id: 965eb16e3a78ae02a65a8a44559f3a64c16f6884
Summary:
Change `parents` from using the default implementation that returns `StaticSet`
of commit hashes, to a customized implementation that returns `IdStaticSet`.
This avoids unnecessary commit hash lookups, and makes `heads(all())` 30x
faster, matching `headsancestors(all())` (but is still 2x slower than the C
revlog index `headsrevs` implementation).
Reviewed By: sfilipco
Differential Revision: D22638453
fbshipit-source-id: 4fef78080b990046b91fee110c48e36301d83b4f
Summary:
The hint indicates a set `X` is equivalent to `ancestors(X)`.
This allows us to make `heads` use `heads_ancestors` (which is faster in
segmented changelog) automatically without affecting correctness. It also
makes special queries like `ancestors(all())` super cheap because it'll just
return `all()` as-is.
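A sketch of how such a hint short-circuits `ancestors` (a hypothetical Python model of the hint system, not the Rust code):

```python
ANCESTORS = 1 << 0  # hint flag: set X is ancestor-closed, X == ancestors(X)

class Set:
    def __init__(self, items, hints=0):
        self.items = frozenset(items)
        self.hints = hints

class Dag:
    def __init__(self, parents):
        self.parents = parents  # rev -> list of parent revs

    def ancestors(self, s):
        if s.hints & ANCESTORS:
            return s  # free: the set is already ancestor-closed
        out = set()
        stack = list(s.items)
        while stack:
            r = stack.pop()
            if r not in out:
                out.add(r)
                stack.extend(self.parents[r])
        # the result is itself ancestor-closed, so propagate the hint
        return Set(out, hints=ANCESTORS)
```

Because `all()` carries the hint, `ancestors(all())` returns the same object with no traversal at all.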
Reviewed By: sfilipco
Differential Revision: D22638463
fbshipit-source-id: 44d9bbcbb0d7e2975a0c8322181c88daa1ba4e37
Summary:
Re-implement the `findcommonheads` logic using `changelog` APIs that are going
to have native support from Rust.
This decouples from revlog-based Python DAG logic, namely `dagutil.revlogdag`,
and `ancestor.incrementalmissingancestors`, unblocking Rust DAG progress, and
cleans up the algorithm to not use revision numbers.
The core algorithm is unchanged. The sampling logic is simplified and tweaked
a bit (ex. no 'initial' / 'quick initial' special cases). The debug and
progress messages are more verbose, and variable names are chosen to match
the docstrings.
I improved the doc a bit, and added some TODO notes about where I think can be
improved.
Reviewed By: sfilipco
Differential Revision: D22519582
fbshipit-source-id: ac8cc8bebad91b4045d69f402e69b7ca28146414
Summary:
It has been long replaced by setdiscovery. This removes another dependency on
`dagutil.revlogdag`.
Reviewed By: DurhamG
Differential Revision: D22519585
fbshipit-source-id: ee261173ba584ffcb3371ec640b233609aafcf77
Summary:
`changegroup` uses `dagutil.revlogdag` just to "linearize" commits to optimize
file revision deltas. This is less relevant in production setup because:
- The file delta calculation with remotefilelog is quite different.
- We don't have lots of branches that make the optimization useful.
- In the future segmented changelog makes commits more linearized.
Ideally, the Python `dagutil.revlogdag` will be removed entirely; this is a step towards that.
Reviewed By: DurhamG
Differential Revision: D22519589
fbshipit-source-id: ac44873893df8658da0617e06cae1805d72417aa
Summary:
The changegroup logic uses those APIs, which uses low-level revlog details like
the C index. Bypass them if the Rust DAG is used.
Reviewed By: DurhamG
Differential Revision: D22519583
fbshipit-source-id: 228c7ba0a8ea77c0cf85db39d1194274d6331416
Summary:
Those methods are less fancy. Use the Rust path to avoid depending on revlog
internals.
Reviewed By: DurhamG
Differential Revision: D22519588
fbshipit-source-id: 0fede55ee04373c069ae7a6dd727f4d7208ee321
Summary: This avoids depending on the C index if the Rust DAG is available.
Reviewed By: DurhamG
Differential Revision: D22519587
fbshipit-source-id: a89d91184feaeef6641d2b04353601297bf5d4d5
Summary:
The check does not practically work because the client sends `common=[null]`
if the common set is empty.
D22519582 changes the client-side logic to send `common=[]` instead of
`common=[null]` in such cases. Therefore remove the constraint to keep
tests passing. 13 tests depend on this change.
Reviewed By: StanislavGlebik
Differential Revision: D22612285
fbshipit-source-id: 48fbc94c6ab8112f0d7bae1e276f40c2edd47364
Summary:
Replace the Python spanset with the Rust-backed idset.
The idset can represent multiple ranges and works better with Rust code.
The `idset` fast paths do not preserve order for the `or` operation, as
demonstrated in the test changes.
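The span-based representation can be sketched like so (a simplified Python model of the Rust-backed idset, with hypothetical names):

```python
import bisect

class IdSet:
    """A set of commit ids stored as sorted, disjoint inclusive spans,
    e.g. [(0, 99), (200, 205)] - compact when revs are mostly contiguous."""
    def __init__(self, spans):
        self.spans = sorted(spans)

    def __contains__(self, id):
        # Find the last span starting at or before `id`.
        i = bisect.bisect_right(self.spans, (id, float("inf"))) - 1
        return i >= 0 and self.spans[i][0] <= id <= self.spans[i][1]

    def __len__(self):
        return sum(hi - lo + 1 for lo, hi in self.spans)
```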
Reviewed By: DurhamG, kulshrax
Differential Revision: D22519584
fbshipit-source-id: 5d976a937e372a87e7f087d862e4b56d673f81d6
Summary:
Now that packfiles are marked with FILE_DELETE_ON_CLOSE, they can no longer be
opened on Windows, and thus trying to stat them will fail with a permission
denied error, failing repack.
This really should only happen when using EdenFS on Windows, which only a
handful (though growing) of people are using.
Reviewed By: quark-zju
Differential Revision: D22801408
fbshipit-source-id: f4229e90ce076a65994fb9d193d00c309377323a
Summary:
The tilde got dropped as part of the changes in D22672240 (be3683b1d4)
(an easy mistake to make!), and that renders this function less
useful.
Thankfully the caps display isn't a critical function; it's just used for
some diagnostic printing.
Reviewed By: chadaustin
Differential Revision: D22847590
fbshipit-source-id: 716d7c7bd674260687fbc09e3dc94538359f98b3
Summary: Move client hostname reverse DNS lookup from inside of the LFS server's `RequestContext` to an async method on `ClientIdentity`, allowing it to be used elsewhere. The behavior of `RequestContext::dispatch_post_request` should remain unchanged.
Reviewed By: krallin
Differential Revision: D22835610
fbshipit-source-id: 15c1183f64324f216bd639630396c9c6f19bcaaa
Summary: When a TLS connection fails due to a missing client certificate, the `curl` command may fail with either code 35 or 56 depending on the TLS version used. With TLS v1.3, the error is explicitly reported as a missing client certificate, whereas in TLS v1.2, it is reported as a generic handshake failure. This is because TLS v1.3 defines an explicit [`certificate_required`](https://tools.ietf.org/html/rfc8446#section-4.4.2.4) alert, which is [not present](https://github.com/openssl/openssl/issues/6804) in earlier TLS versions.
Reviewed By: krallin
Differential Revision: D22834527
fbshipit-source-id: a15d6a169d35ece6ed5a54b37b8ca9bbc506b3da
Summary:
`log()` passes fsck bars to standard output, but it will also print the same message to the log with level DBG2. (example below)
```
V0713 07:05:45.971511 3510654 StartupLogger.cpp:96] [====================>] 100%: fsck on /home/ailinzhang/eden-state/clients/dev-fbsource6/local
```
Since we don't want the log file to be messed up with fsck bars, we use `logVerbose()` with level DBG7.
Reviewed By: kmancini
Differential Revision: D22727965
fbshipit-source-id: 0700503af511030df2abbca4ad2fa1540995e919
Summary:
We have some users issuing 10k+ diff queries to phabricator, which is
causing problems with their db. Since we usually only care about the latest
draft commits, let's bound the size of the requests we send.
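The bounding idea can be sketched as follows (the constant and helper names are hypothetical, not the actual config):

```python
MAX_DIFF_QUERY_SIZE = 100  # hypothetical bound on diffs per request

def bounded_diff_query(diff_ids, fetch):
    """Query at most MAX_DIFF_QUERY_SIZE of the newest diffs instead of
    sending one unbounded request for everything; since we usually only
    care about the latest draft commits, the newest ids are kept."""
    newest_first = sorted(diff_ids, reverse=True)
    return fetch(newest_first[:MAX_DIFF_QUERY_SIZE])
```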
Reviewed By: quark-zju
Differential Revision: D22834195
fbshipit-source-id: d41b449a89d6dfb2d6d33e0be6ed0ff31893ab5e
Summary:
The overall goal of this stack is to add WarmBookmarksCache support to
repo_client to make Mononoke more resilient to lands of very large
commits.
We'd like to use WarmBookmarkCache in repo client, and to do that we need to be
able to tell Publishing and PullDefault bookmarks apart. Let's teach
WarmBookmarksCache about it.
Reviewed By: krallin
Differential Revision: D22812478
fbshipit-source-id: 2642be5c06155f0d896eeb47867534e600bbc535
Summary:
This method will be used in the next diff to add a test, but it might be more
useful later as well.
Note that `update()` method in BookmarkTransaction already handles publishing bookmarks correctly
Reviewed By: farnz
Differential Revision: D22817143
fbshipit-source-id: 11cd7ba993c83b3c8bca778560af4a360f892b03
Summary:
The overall goal of this stack is to add WarmBookmarksCache support to
repo_client to make Mononoke more resilient to lands of very large
commits.
The code for managing cached_publishing_bookmarks_maybe_stale was already a bit
tricky, and with WarmBookmarksCache introduction it would've gotten even worse.
Let's move this logic to a separate SessionBookmarkCache struct.
Reviewed By: krallin
Differential Revision: D22816708
fbshipit-source-id: 02a7e127ebc68504b8f1a7401beb063a031bc0f4
Summary:
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/38
The tool is used in some integration tests; make it public so that the tests can pass.
Reviewed By: ikostia
Differential Revision: D22815283
fbshipit-source-id: 76da92afb8f26f61ea4f3fb949044620a57cf5ed
Summary:
Better Engineering: remove dead code about the secret tool.
The secret tool is an FB-specific tool (keychain-like) that has been used to transfer OAuth tokens between
different devservers without the user's involvement. We have migrated to certs on devservers, so it is not needed anymore.
Also, it is FB-specific and doesn't make sense for open source either.
Reviewed By: mitrandir77
Differential Revision: D22827264
fbshipit-source-id: cd89168ad75ca041d2a0f18d63474dd1eaad483d
Summary: It looks nicer if we highlight the current workspace in the list.
Reviewed By: mitrandir77
Differential Revision: D22826619
fbshipit-source-id: 416b77fb57d8dfe19057e248e12d411dfc5f9412
Summary:
Some users on Macs and dev servers are connected to their default workspace; other
users, after D22802064, will be connected to their machine workspace. Assume a
user decides to reclone the repo. Currently, as a result of rejoin, the second group of users will
automatically see all their commits from the default workspace and will be a bit
surprised. It makes sense to adapt the rejoin logic to choose the default
workspace if a workspace name is not given.
Reviewed By: markbt
Differential Revision: D22817941
fbshipit-source-id: 764034c9f2d774051c5523cb2db093af525f27d7
Summary:
The overall goal of this stack is to add WarmBookmarksCache support to
repo_client to make Mononoke more resilient to lands of very large
commits.
The problem with large changesets is deriving hg changesets for them. It might take
a significant amount of time, and that means that all the clients are stuck on a
listkeys() or heads() call waiting for derivation. WarmBookmarksCache can help here by returning bookmarks
for which hg changesets were already derived.
This is the second refactoring to introduce WarmBookmarksCache.
Now let's cache not only pull default, but also publishing bookmarks. There are two reasons to do it:
1) (Less important) It simplifies the code slightly
2) (More important) Without this change 'heads()' fetches all bookmarks directly from BlobRepo thus
bypassing any caches that we might have. So in order to make WarmBookmarksCache useful we need to avoid
doing that.
Reviewed By: farnz
Differential Revision: D22816707
fbshipit-source-id: 9593426796b5263344bd29fe5a92451770dabdc6
Summary:
The overall goal of this stack is to add WarmBookmarksCache support to
repo_client to make Mononoke more resilient to lands of very large commits.
This diff just does a small refactoring that makes introducing
WarmBookmarksCache easier. In particular, later in cached_pull_default_bookmarks_maybe_stale cache I'd like to store
not only PullDefault bookmarks, but also Publishing bookmarks so that both
listkeys() and heads() method could be served from this cache. In order to do
that we need to store not only bookmark name, but also bookmark kind (i.e. is
it Publishing or PullDefault).
To do that let's store the actual Bookmarks and hg changeset objects instead of
raw bytes.
Reviewed By: farnz
Differential Revision: D22816710
fbshipit-source-id: 6ec3af8fe365d767689e8f6552f9af24cbcd0cb9
Summary:
Most of our APIs throw an error when the path doesn't exist. I would like to
argue that's not the right choice for list_file_history.
Errors should only be returned in abnormal situations, and with the
`history_across_deletions` param there's no other easy way to check if the file
ever existed other than calling this API - so it's not abnormal to call
it with a path that doesn't exist in the repo.
Reviewed By: StanislavGlebik
Differential Revision: D22820263
fbshipit-source-id: 002bda2ef5ee9d6632259a333b7f3652cfb7aa6b
Summary:
Added a new query function to get the largest log id from bookmarks_update_log.
In the repo_import tool, once we move a bookmark to reveal commits to users, we want to check if hg_sync has received the commits. To do this, we extract the largest log id from bookmarks_update_log and compare it with the mutable_counter value related to hg_sync. If the counter value is larger than or equal to the log id, we can move the bookmark to the next batch of commits.
Since this query wasn't implemented before, this diff adds this functionality.
Next step: add query for mutable_counter
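The catch-up check itself reduces to a simple comparison (hypothetical helper name, sketching the logic described above):

```python
def hg_sync_caught_up(largest_log_id, hg_sync_counter):
    # hg_sync bumps its mutable counter as it processes log entries, so
    # once the counter reaches the newest bookmarks_update_log id, all
    # revealed commits have been synced and the next batch may proceed.
    return hg_sync_counter >= largest_log_id
```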
Reviewed By: krallin
Differential Revision: D22816538
fbshipit-source-id: daaa4e5159d561e698c6e1874dd8822546c699c7