Summary:
Most of the repacks are background repacks, and most of the complaints are
coming from laptop users. Thanks to the Rust repack, most of the time during
repack is now spent repacking loose files. While repacking them is expensive,
testing whether data is in a loose file and obtaining it is actually pretty
fast. Packfiles have the opposite issue: repacking them is fast, but finding
data in them may be expensive if a lot of them are present.
Based on this, it makes sense to repack packfiles more frequently than
loose files. At first, the newly added option will be used to turn off
loose-file repack for laptop users. A less frequent loose-file repack will be
implemented in a later patch.
Reviewed By: DurhamG
Differential Revision: D14586986
fbshipit-source-id: 5bc5c839cf8d2d78bcc4ffa016bbe3cf1b2ef3f7
Summary:
```
building 'indexes' library extension
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package: /data/users/sfilip/fbsource/fbcode/scm/hg/edenscm/hgext/extlib/indexes/Cargo.toml
workspace: /data/users/sfilip/fbsource/fbcode/scm/hg/edenscm/hgext/extlib/Cargo.toml
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package: /data/users/sfilip/fbsource/fbcode/scm/hg/edenscm/hgext/extlib/pyrevisionstore/Cargo.toml
workspace: /data/users/sfilip/fbsource/fbcode/scm/hg/edenscm/hgext/extlib/Cargo.toml
```
`profile` settings are ignored for non-root packages. I introduced this issue
when I added a workspace for extlib: D14543989.
Reviewed By: kulshrax
Differential Revision: D14606019
fbshipit-source-id: 7ec4743d0913e443c378ae83f392817f6e6d3aab
Summary:
If the cloud sync or pushbackup commands themselves start in the background, they will trigger a new background command afterwards to back up to the secondary store.
Note that for cloud sync, we trigger the pushbackup command for the secondary store, not cloud sync. It should be fine. We don't back up duplicates, as we always first check with the server what has already been backed up. Calling pushbackup is simpler, as it doesn't involve all the complexity of cloud sync, and basically we just need to back up to the secondary.
Reviewed By: markbt
Differential Revision: D14578770
fbshipit-source-id: f81a50fd76e64f2d77d8d7004b201fcfe8a090d2
Summary:
This particular situation happens in the wild when the amend doesn't rebase
because of conflicts and users work on their stack using `hg prev` and `hg next
--rebase`. In this case, when there's a non-obsolete child, that's always the
child we want to choose.
We're verbose about what we're doing so it's not confusing to users.
Reviewed By: quark-zju
Differential Revision: D14560584
fbshipit-source-id: a453c0301a5156eea0d19ceb40d9a64e80b7fca7
Summary:
Move the logic of adding the common ancestor to make the DAG connected to the
smartlog revset. This makes it handy for power users to just use `log` and
`smartlog` revset to get interesting graphs. `sl` is now a very thin wrapper
around the `smartlog` revset function.
Reviewed By: DurhamG
Differential Revision: D14461520
fbshipit-source-id: 78e3991059c9da7ef4410e252a2b69b1e54918cb
Summary:
Wrap user-provided revs with `smartlog` revset function. This makes more sense
together with the next change.
The test change is because "parents" of drafts are selected.
Reviewed By: DurhamG
Differential Revision: D14461519
fbshipit-source-id: 2a48931680f0dc50b80b87cea827152c21cf4791
Summary:
With the last change, the benefit of ancestorcache is limited. Therefore just
remove it to reduce complexity. This also makes `smartlog` closer to `log`.
Reviewed By: DurhamG
Differential Revision: D14461523
fbshipit-source-id: eb108a09e12b07e5012f70aef0b2940b07d746fb
Summary:
Use the `ancestor` revset to replace the ad-hoc ancestor calculation. This
makes the code much shorter.
It's in theory slightly different from the old logic. But there are no test changes.
The new code can no longer take advantage of ancestorcache. Fortunately, with
optimizations, it is pretty close to a fully warmed up ancestorcache. Of course,
it's much faster than a cold ancestorcache.
Before (ancestorcache disabled):
```
quark@devvm33994 ~/fbcode/scm/hg % ./hg sl -T '.' --time --pager=off --all --config smartlog.useancestorcache=0 >/dev/null
time: real 75.050 secs (user 52.540+0.000 sys 22.520+0.000)
```
Before (ancestorcache warmed up):
```
quark@devvm33994 ~/fbcode/scm/hg % ./hg sl -T '.' --time --pager=off --all --config smartlog.useancestorcache=1 >/dev/null
time: real 2.670 secs (user 2.550+0.000 sys 0.100+0.000)
```
After:
```
quark@devvm33994 ~/fbcode/scm/hg % ./hg sl -T '.' --time --pager=off --all --config smartlog.useancestorcache=0 >/dev/null
time: real 2.970 secs (user 2.760+0.000 sys 0.160+0.000)
```
There are 5110 commits in the above smartlog graph.
Reviewed By: DurhamG
Differential Revision: D14461524
fbshipit-source-id: 68bee3c4397be833e381c582c20a849b768b144d
Summary:
Previously, the default master is `.^` when `--rev` is passed. Change it to
null so we're not adding an unexpected "master" if `--rev` is used.
Reviewed By: DurhamG, sfilipco
Differential Revision: D14516266
fbshipit-source-id: ce93f5e905d674c21cc07bb5a2957d0fad302722
Summary: Deprecate due to complexity of the code.
Reviewed By: mitrandir77
Differential Revision: D14561405
fbshipit-source-id: 6184317f549c0ab84335b09c4b48efccdf31f7fc
Summary: Added new metrics to log loose file size and count during repack. We need this to understand how much better the pack files work in terms of disk and memory usage.
Reviewed By: markbt
Differential Revision: D14544811
fbshipit-source-id: 5a4d894bd5a3358c7e0f93ecc9db5e9f2c2f2372
Summary:
Basically we should check that the commits have been backed up.
If this is not true and the commits are local, we can just back them up.
If they are not known by this repo, pull from the old one and then back them
up.
Reviewed By: markbt
Differential Revision: D14508239
fbshipit-source-id: 3fdd83335cb937b153510ec3c7510ecd1167d0ca
Summary: Log data about round-trip count, and object count for files, trees, and SSH calls.
Differential Revision: D14515776
fbshipit-source-id: cce416fd7dccdad3c73a9f1751a04ddac0d2c507
Summary:
Make it possible to use `ui.metrics.gauge` to collect metrics for a single
command, via the sampling extension.
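As a rough illustration of the idea (a hypothetical minimal sketch, not the real edenscm `ui.metrics` implementation), per-command metrics accumulate locally and are read out once when the command finishes:

```python
# Hypothetical minimal sketch (not the real edenscm `ui.metrics` API): a
# command-local metrics object that a sampling hook reads at command exit.
class Metrics(object):
    def __init__(self):
        self._counters = {}

    def gauge(self, name, value=1):
        # Accumulate into a per-command counter rather than a global one.
        self._counters[name] = self._counters.get(name, 0) + value

    def snapshot(self):
        # What a sampling extension would log when the command finishes.
        return dict(self._counters)

metrics = Metrics()
metrics.gauge("ssh_roundtrips")        # count one round-trip
metrics.gauge("files_fetched", 42)     # count 42 fetched files
```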
Differential Revision: D14515775
fbshipit-source-id: e8a53549b00c1bc7b6509a5990a51d955d767d7e
Summary:
Before this patch, metrics were designed to send stats to a global counter. I'd
like to use them for stats local to the current command (ex. count of file
fetches, count of round-trips, etc.).
Change the API so that "entity=None" prevents stats from being sent to a global counter.
Differential Revision: D14515779
fbshipit-source-id: b5b3b040d674c71f467153c308b56aa6f506eb0c
Summary: This simplifies testing setup for all crates in CI.
Reviewed By: quark-zju
Differential Revision: D14543989
fbshipit-source-id: 83693fada6e64b7c21fd89a880d6452d811ea90d
Summary:
Part way through our hg-git repo's history we started adding the git
hash to the extras so it was easier to generate the map files. Let's make
git-updatemeta stop when it passes this boundary.
This is useful for speeding up internal infrastructure that needs to generate
the mapfile from scratch.
Reviewed By: kulshrax
Differential Revision: D14542354
fbshipit-source-id: 7c17fb1b1439f9b4c0c0acf8b5a85790d02a0861
Summary:
git-updatemeta performance starts to matter when we have to run
it from scratch in new containers. Let's optimize this a bit. In a large repo
this improves the speed by 30%.
Reviewed By: quark-zju
Differential Revision: D14542339
fbshipit-source-id: 34f06369543b8d4d22838fd4e3878c6bec9a597c
Summary:
Improve the performance of the revsets that calculate which commits to back up
by limiting them to consider only the non-obsolete commits that are also draft.
Reviewed By: quark-zju
Differential Revision: D14544883
fbshipit-source-id: db9ed4a7abd81956762f56140321242dbccf2df0
Summary:
Not every command requires a valid repo, but when one is used, it is always
properly closed. Hence, let's simply wrap the repo.close method instead of
wrapping around the runcommand function.
Reviewed By: kulshrax
Differential Revision: D14531515
fbshipit-source-id: bcdbe6530c94041c1185b18570846ba609b5f605
Summary:
Attempt to detect oscillation of commit cloud workspaces by comparing the new
state after the current sync with the state before the previous application of
cloud changes. If the version number is incremented by 1, the workspace is
brought back to the same state, and less than a minute has passed since the
last time that commit cloud sync ran, abort the current sync step.
This happens after the commits have been backed up, but before the new cloud
workspace is synced to the commit cloud service. This prevents further
oscillation whilst ensuring the user's commits are still backed up.
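The check described above can be sketched as follows (hypothetical function and parameter names; the one-minute window matches the description):

```python
import time

# Hypothetical sketch of the oscillation check described above; the function
# and parameter names are illustrative, not the real commit cloud code.
def looks_like_oscillation(prev_version, new_version, prev_state, new_state,
                           last_sync_time, now=None, window=60):
    if now is None:
        now = time.time()
    return (new_version == prev_version + 1      # version bumped by exactly 1
            and new_state == prev_state          # workspace back to prior state
            and now - last_sync_time < window)   # within a minute of last sync
```

When all three conditions hold, the sync step aborts; backup of commits has already happened at that point.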
Reviewed By: quark-zju
Differential Revision: D14540355
fbshipit-source-id: 20e4b0333f5a7e34b512967a03099625f62ff9d5
Summary:
Change how infinitepush determines what to back up.
Commits to back up are all draft ancestors of non-obsolete commits, *plus* all
draft ancestors of bookmarked commits, which may be obsolete.
Previously, in a stack of the form:
```
| x B (obsolete, bookmarked)
| |
| o A
|/
o
```
neither A nor B would be backed up, despite normally being visible to the user.
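The new rule can be sketched as a set computation (hypothetical helper names, not the actual infinitepush code):

```python
# Hypothetical sketch (not the actual infinitepush code) of the new selection:
# draft ancestors of non-obsolete commits, plus draft ancestors of bookmarked
# commits (which may themselves be obsolete).
def ancestors(node, parents):
    # All ancestors of `node`, including itself.
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(parents.get(n, []))
    return seen

def commits_to_backup(drafts, obsolete, bookmarked, parents):
    heads = (drafts - obsolete) | (bookmarked & drafts)
    backup = set()
    for h in heads:
        backup |= ancestors(h, parents) & drafts
    return backup
```

With the stack from the diagram (B obsolete but bookmarked, A its draft parent), both A and B are now selected; without the bookmark term, neither would be.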
Reviewed By: liubov-dmitrieva
Differential Revision: D14540356
fbshipit-source-id: 0d6ad330c53c818b08f736a9af64704cf0be7cd5
Summary:
Change how commit cloud determines what to sync and what has been successfully
synced when some commits fail to push.
Commits to sync are all draft ancestors of non-obsolete commits, *plus* all
draft ancestors of bookmarked commits.
Commits to sync when only some commits have successfully been pushed are
ancestors of the newly pushed heads, *plus* ancestors of the commits to be
synced that were successfully synced last time.
Reviewed By: liubov-dmitrieva
Differential Revision: D14540357
fbshipit-source-id: c082a2f2822f8bce4cd2bbac93a70e27e2ffaa59
Summary:
Now, at the beginning of pushbackup, we always check what the server already
has; this triggered changes in the output of the tests.
Previously, the check was limited to 3 heads. I changed the default to 0 earlier and broke tests. Now we have a fast way to check on the server side, so the limit of 3 was not good. There is no reason to re-back up those commits.
Basically, before this change we could back up the same thing again.
Reviewed By: markbt
Differential Revision: D14539582
fbshipit-source-id: 569b354580128c944a95fa64c0c964304a2cca8b
Summary:
The test has been broken:
```
buck test mode/dev scm/hg/fb/tests:fb_run_tests -- 'test_sl_output_t \(scm\.hg\.fb\.tests\.unittestify\.hgtests\)'
```
In the test, the remote is not defined, so the test failed to check with the
server what has been backed up.
Reviewed By: ikostia
Differential Revision: D14521930
fbshipit-source-id: 3363bb3055941decdfc65165860e1ef25a7a7891
Summary:
This is particularly useful for `hg cloud sync` when it calls pushbackup in the
background to the secondary storage at the end of cloud sync.
Pushbackup is not smart enough, so it will back up again what we just pulled via cloud sync.
However, all those commits were probably already backed up to the secondary
storage by another machine during cloud sync.
Reviewed By: singhsrb
Differential Revision: D14386616
fbshipit-source-id: e62ed0afb89c28fe6880346077c279e6705da602
Summary:
Using the lookup command is slow because if something is not backed up, it
throws an exception and we have to re-establish the ssh connection.
Differential Revision: D14386150
fbshipit-source-id: 8d5caea93516571ff36c80adb6406b0347d90384
Summary:
Reading a line over a pipe involves reading every character of the line
individually. This is extremely inefficient and slow, which causes prefetch to
be overly slow when most of the data isn't in memcache.
Using buffered reads tries to read 4096 bytes at a time, drastically reducing
the cost of reading a missing path/node pair from memcache.
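For illustration, wrapping a raw pipe file descriptor in a buffered reader in Python lets each underlying read() fetch up to 4096 bytes, instead of one byte per character when lines are read unbuffered:

```python
import io
import os

# Illustration: a buffered reader on top of a raw pipe fd fetches up to
# `buffering` bytes per underlying read() syscall, so readline() no longer
# costs one syscall per character.
r, w = os.pipe()
os.write(w, b"path1 node1\npath2 node2\n")
os.close(w)

buffered = io.open(r, "rb", buffering=4096)
lines = [buffered.readline().rstrip(b"\n"), buffered.readline().rstrip(b"\n")]
buffered.close()
```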
Reviewed By: ikostia
Differential Revision: D14507063
fbshipit-source-id: e0910d7a303e15fe2d3c61fe2739e6c13370058f
Summary:
As part of the mononoke write path rollout we want to be
able to dynamically block writes to a repo. This is implemented as
a table in hgsql (repo_lock) that both hg and mononoke can read, but
will only be updated by scmadmin.
Expose a function that can be used by in-process hooks to check if a repo is
locked for mercurial.
Reviewed By: quark-zju
Differential Revision: D14279169
fbshipit-source-id: f8bb4afeeeda67796cf806ab7f3fe42f4089818f
Summary: This is removal of unused code left over from D14455360.
Reviewed By: ikostia
Differential Revision: D14502837
fbshipit-source-id: df1912c7997847b0628b492b3fe735d5e3d7f201
Summary:
Use "interestingmaster()" to make it easier to see how "master" gets decided.
Change the type of "master" argument taken by "smartlog" revset from string to
revset. This is more consistent with other revset functions.
Reviewed By: DurhamG
Differential Revision: D14436003
fbshipit-source-id: 5aa166b523f36672f77dc4f161ae8d64c2b50579
Summary:
Similar to `interestingbookmarks()`, this exposes more smartlog logic to the
user.
Reviewed By: DurhamG
Differential Revision: D14436004
fbshipit-source-id: bd4ef1dcee8e7b29c43ce43fe6c1a3e7b5286774
Summary:
Make `heads` in `smartlog` customizable. This makes smartlog more flexible.
Instead of using the default selection, the user can choose draft branches, and
potentially pass in `interestingbookmarks()` to include bookmarks and remote
bookmarks. For example, `smartlog(.::)` shows the current branch and the public
commit it is based on.
Drop `recentdays` as it can now be expressed using `heads` directly. See the
test change.
This would also hopefully make test-fb-hgext-smartlog-hide-before.t stable,
as it no longer uses `time.time()`.
Reviewed By: DurhamG, sfilipco
Differential Revision: D14436007
fbshipit-source-id: 5e0a76e4424b01312fef02fae23a3abd74e863c6
Summary:
The old code basically selects ancestors of heads.
Rewrite the logic using revsets. Assuming we're only interested in ancestors
that are drafts, we can take advantage of `draft() & ::x` optimization.
The new logic also assumes master rev is public. Otherwise it can be slightly
different from the old logic.
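Assuming public commits never have draft ancestors (which phases guarantee), the `draft() & ::x` optimization amounts to an ancestor walk that prunes at the first non-draft commit. A minimal sketch (hypothetical names, not the real revset code):

```python
# Hypothetical sketch of the `draft() & ::heads` evaluation: since ancestors
# of a public commit are public, the walk can prune as soon as it leaves the
# draft phase, touching only the thin layer of drafts on top of public history.
def draft_ancestors(heads, parents, drafts):
    seen, stack = set(), [h for h in heads if h in drafts]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(p for p in parents.get(n, []) if p in drafts)
    return seen
```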
The new code is much faster on my repo:
New code:
```
quark@devvm33994 ~/fbcode/scm/hg % ./hg log -r 'smartlog()' --hidden -T . --time | wc -c
time: real 0.630 secs (user 0.550+0.000 sys 0.030+0.000)
6716
```
Old code:
```
quark@devvm33994 ~/fbcode/scm/hg % hg.real log -r 'smartlog()' --hidden -T . --time | wc -c
time: real 5.470 secs (user 3.920+0.000 sys 1.550+0.000)
6716
```
This might make the ancestorcache hack (D5135746) unnecessary.
Reviewed By: DurhamG, sfilipco
Differential Revision: D14436008
fbshipit-source-id: 3c3bf47ccb67ea0e238542995009da9b9250b43b
Summary:
The `smartlog()` revset does a lot of things.
Add a new revset `interestingbookmarks()` to expose part of the smartlog features.
Reviewed By: DurhamG, sfilipco
Differential Revision: D14436006
fbshipit-source-id: 15b8d203b6547e63f8d062660ad27bdbc25b2c1c
Summary:
The code falls back to head() if there are no remotenames. We don't need that
behavior. Therefore just simplify it by always using `heads(draft())`.
Reviewed By: DurhamG
Differential Revision: D14436009
fbshipit-source-id: 25c2d245ed64a29e3e1677ededb4c2ba7b4a3ceb
Summary: D14387734 added 2 new arguments to the `httprequestpacks` function, but didn't update the callsite to pass those arguments. This diff fixes the problem.
Differential Revision: D14468592
fbshipit-source-id: 7e573838916067fc2cc12204ea1da460eb3955c8
Summary:
Currently archive is almost useless because it fetches each file
one-by-one. Let's add prefetching.
Differential Revision: D14460880
fbshipit-source-id: 1f06e1ac9d03aae3ab27d3064f9fe6141051be06
Summary:
In preparation for supporting Mononoke, clean up the features that are
Mercurial-specific and specific to the Mercurial infinitepush implementation.
For the Mononoke migration we will need to write a whole new set of logic for
what to do if the "infinitepush" path has been changed. So, cleaning up is
useful before writing this logic.
Reviewed By: singhsrb
Differential Revision: D14455360
fbshipit-source-id: d15c3a9032b4888a1aa391da34ad5e499aba9a15
Summary: See the linked task for details.
Reviewed By: quark-zju
Differential Revision: D14448505
fbshipit-source-id: fc2efa71510b718c25a2cea3acf39663d280f19a
Summary:
After going over the code review for D14332967, I have decided to keep
things simple for now and only allow making commits to the same target parent
as the original parent. This was already the intention with the existing code.
Therefore, this commit just further enforces the requirement.
Reviewed By: quark-zju
Differential Revision: D14422351
fbshipit-source-id: 2f786fc3596b17c5020de9906adf8f22b50be4dd
Summary:
These were probably introduced by the move to black.
The changes in this diff were generated by a script.
Reviewed By: mitrandir77, singhsrb
Differential Revision: D14439667
fbshipit-source-id: 54f6e0bdcc59c1c6deb4eea46dc6f865bcd48cf8
Summary:
The code currently assumes that the target parent is the same as the
original parent by totally ignoring the original parent which can seem
surprising and more importantly, hinders supporting behaviors such as allowing
commits to a new parent. Therefore, this commit fixes the code to identify the
original and target parent separately.
Reviewed By: quark-zju
Differential Revision: D14422352
fbshipit-source-id: bc175f2fe636f9bf47a68f64c8efd52660e3b1b7
Summary:
D14183009 made commit cloud accept cloud bookmarks for hidden commits, rather
than omitting them. However, this only works for future bookmarks. If the
bookmark was already omitted, then `_checkomissions` would not recover the
situation for the same reason.
Update `_checkomissions` to also allow cloud bookmarks on hidden commits.
Reviewed By: liubov-dmitrieva
Differential Revision: D14437656
fbshipit-source-id: 2372323022a59bfd4210bc76f39b9a74872d5efe
Summary:
Now that hg_memcache_client will voluntarily exit when hg terminates, we no
longer need to wait for hg_memcache_client to finish before exiting hg.
Reviewed By: DurhamG
Differential Revision: D14396510
fbshipit-source-id: 7e73d9b70d481e58a0c47cd0f408580e6d548fd9
Summary:
This is useful for testing the `memcommit` command as the clients can
specify different destination parents than the original parents of the commit.
Differential Revision: D14410213
fbshipit-source-id: 846e0d764b9606f00aed95995c694f379457eec7
Summary:
The earlier message suggested the feature is not supported when in fact it
is allowed via a configuration.
Differential Revision: D14410214
fbshipit-source-id: 0ec2a22920417c378cf3ac596565f9d2fa5f6d5c
Summary:
Operations like `hg log -p` will inherently cause many requests to
hg_memcache_client. This will force many small packfiles to be created, which
will significantly slow down future invocations of hg.
Now that `hg repack --incremental --packsonly` is fast, we can afford to run it
when Mercurial exits after a prefetch operation was run.
Differential Revision: D14387735
fbshipit-source-id: 45f89f1120458c8b2471a1c55cafb6bc87263dd0
Summary:
When using packfiles, history and data are in different files, and thus it's
possible to only fetch one.
For now, besides the requests coming from contentstore and metadatastore, both
will be fetched, as the code hasn't been audited to know whether we only want
history or data.
Differential Revision: D14387734
fbshipit-source-id: 6aafd477ff486b9316458ce0e80636152db45b89
Summary:
For pushbackup, it is needed to make `hg rage` more informative, because we
store states for different paths separately anyway.
For cloud sync we will have to write some migration logic: if the remotepath
has been changed, we have to check what the server has to make sure everything
is backed up as cloud sync would expect.
Differential Revision: D14420713
fbshipit-source-id: 2046e9d7b16291a49d1bc40da5285de58017f4f2
Summary:
Corrupted packfiles, or their background removal, could cause repack to fail;
let's simply ignore these transient errors and continue repacking.
Reviewed By: DurhamG
Differential Revision: D14373901
fbshipit-source-id: afe88e89a3bd0d010459975abecb2fef7f8dff6f
Summary:
Fixes the following type error when pulling from a git repo using hg-git on my laptop:
```
  File "edenscm/hgext/remotenames.py", line 225, in expull
    pullremotenames(repo, remote, bookmarks)
  File "edenscm/hgext/remotenames.py", line 314, in pullremotenames
    path = activepath(repo.ui, remote)
  File "edenscm/hgext/remotenames.py", line 1464, in activepath
    rpath = _normalizeremote(remote.url)
  File "edenscm/hgext/remotenames.py", line 1439, in _normalizeremote
    u = util.url(remote)
  File "edenscm/hgext/hggit/__init__.py", line 164, in _url
    if not (path.startswith(pycompat.ossep) and ":" in path):
AttributeError: 'function' object has no attribute 'startswith'
```
Basically, `peer.url()` is the API; `peer._url` is a private field that does
not always exist.
Besides, further remove the named-branches handling that can crash hg-git with
NotImplementedError:
```
  File "edenscm/hgext/remotenames.py", line 225, in expull
    pullremotenames(repo, remote, bookmarks)
  File "edenscm/hgext/remotenames.py", line 322, in pullremotenames
    for branch, nodes in remote.branchmap().iteritems():
  File "edenscm/hgext/hggit/gitrepo.py", line 73, in branchmap
    raise NotImplementedError
```
Reviewed By: DurhamG
Differential Revision: D14144462
fbshipit-source-id: 2e886c639cf6689480f84626eaf0d5ec25512ea0
Summary:
Since the state files have been changed (there is now one per path), we should
dump them in the correct way.
Reviewed By: markbt
Differential Revision: D14406311
fbshipit-source-id: 8d74a51e63028ec81bcf5e55ad117d3c960b4651
Summary:
To make "draft()" bounded and train users to hide unused commits manually,
change smartlog to "hide commits before <a static date>", instead of
"hide commits that are older than <a static time span>". Then we can
incrementally decrease the static date, and eventually show everything, and
force the user to think about what commits to keep or hide.
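The difference between the two policies can be sketched with a pair of predicates (illustrative values only; the actual cutoff is configured elsewhere):

```python
# Illustrative sketch of the two hiding policies; the cutoff value here
# (2019-01-01 UTC) is just an example, not the configured date.
CUTOFF_DATE = 1546300800  # unix timestamp of the static cutoff date

def hidden_by_static_date(commit_date):
    # New behavior: a fixed cutoff, so the hidden set is stable across runs
    # and can be tightened deliberately over time.
    return commit_date < CUTOFF_DATE

def hidden_by_age(commit_date, now, days=14):
    # Old behavior: a moving window relative to `now`; the hidden set keeps
    # changing as time passes.
    return now - commit_date > days * 86400
```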
Reviewed By: singhsrb
Differential Revision: D13993584
fbshipit-source-id: 1a2b56f50d7f014a589f798cd2feaa6931e64fe3
Summary:
Subrepo is another unloved feature that we don't want to support.
Aggressively remove it everywhere, instead of just turning off configs.
I didn't spend much time splitting this commit to make it smaller and more
friendly to review, but it seems tests are passing.
Reviewed By: sfilipco
Differential Revision: D14220099
fbshipit-source-id: adc512a047d99cd4bafd0362e3e9b24e71defe13
Summary:
If an infinitepush bundle contains flat manifests and is served from a
treemanifest repository, it can potentially fail to send all the needed data to
the client.
Understanding the bug requires two bits of context:
1. When sending commits from a tree server to a tree client, we generally don't
send any trees because they can be fetched by the client ondemand later. The one
exception to this is for infinitepush bundles, where the trees inside the bundle
cannot be served ondemand, and therefore must be served at pull time. To do this
we check if a given manifest node exists in the repository's permanent storage,
and if it doesn't, we assume it came from an infinitepush bundle and serve it
with the pull.
2. When we lookup a manifest and fail to find a tree, our last resort is the
ondemandstore which knows how to convert a flat manifest into a tree manifest.
On the server, this is responsible for converting each of the flat bundle's
manifests into treemanifests before we serve the bundle to the client. As part
of converting the flat manifests into treemanifests, it writes the new tree
data into a pack file.
The bug is then, when serving a stack of commits, if we try to package up the
top tree first (i.e. the most recent tree), we end up converting the entire
stack from flat into trees, which inserts the bottom most trees into the
temporary pack file. Because they exist in the temporary pack file, when we
later check if they are part of the repository's store, we end up finding them,
which causes us to treat them as not-infinitepush-trees which means we don't
serve them to the client.
The fix is to change the infinitepush tree-serving code to not consider the
mutable packs when checking if it should send trees.
Reviewed By: mitrandir77
Differential Revision: D14403925
fbshipit-source-id: 38043dfc49df5ff9ea2fae1d3cac341c4936509c
Summary:
When calculating bundleroots (the commits that are the common ancestors for the
infinitepush bundle), we currently include all the `nullid` parents (i.e. p2 for
most commits). This bloats the size of the list, and is unnecessary.
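A sketch of the intended computation (hypothetical helper, not the actual infinitepush code): bundle roots are the parents of bundle commits that fall outside the bundle, skipping nullid parents:

```python
NULLID = b"\0" * 20  # Mercurial's null node id: 20 zero bytes

# Hypothetical sketch (not the actual infinitepush code): collect the parents
# of bundle commits that live outside the bundle, skipping nullid parents
# (p2 is nullid for most commits), so the roots list stays small.
def bundleroots(bundle_nodes, parents):
    roots = set()
    for node in bundle_nodes:
        for p in parents[node]:
            if p != NULLID and p not in bundle_nodes:
                roots.add(p)
    return roots
```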
Reviewed By: quark-zju
Differential Revision: D14385912
fbshipit-source-id: c518b8b1aa27cff8562c2358a024b8a08ced8cba
Summary:
Some dependent libraries may only care about the serialization logic. As an
example, see D14332987, where `hgrepo.py` only needs to depend on the
serialization. Therefore, it's cleaner to extract the serialization from the
`memcommit` data.
Reviewed By: quark-zju
Differential Revision: D14388474
fbshipit-source-id: 6f049dcc596b66b9ad72905f133529bdc9092382
Summary:
The shared code can be potentially called by clients using python 3. Therefore,
let's be compatible with python 3.
Reviewed By: quark-zju
Differential Revision: D14387005
fbshipit-source-id: 2ffb359d4d2762ffcba4a26a3ae5a7b45e89572b
Summary:
Introduce a new command for Mercurial only, to support checking for existing commits in both the repo and the infinitepush storage.
For the Mononoke case, just the standard 'known' works fine.
We cannot overload the standard 'known' in the Mercurial case, because it is used in discovery, and having infinitepush commits there breaks some commands.
The next diff replaces isbackedupnodes with isbackedupnodes2.
Reviewed By: markbt
Differential Revision: D14370603
fbshipit-source-id: 8d7b64b4d556c0f1caa7f797dba360501571daad
Summary: Include the output from edenfs rage in the hg rage output.
Reviewed By: chadaustin
Differential Revision: D14007061
fbshipit-source-id: fe0baf6c19dd4f2afd470ba70304c78582dfe879
Summary:
Support updating of commit visibility when using pushrebase. Since obsmarkers
may not be available, this also involves looking at the commit mutation data
returned from the server.
Reviewed By: DurhamG
Differential Revision: D12980783
fbshipit-source-id: 837e356e500e7bf9710a3619a31094cae21d36c9
Summary:
When commits are added or modified, update the set of visible heads if
visibility tracking is enabled.
Reviewed By: DurhamG
Differential Revision: D12980779
fbshipit-source-id: 8f44045159c86a374ae530fa4327ee0807b4320d
Summary:
Disable the various templates that attempt to determine the fate of a
particular commit based on obsmarkers when mutation is enabled. The old
templates were either insufficiently generic (e.g. `amendsuccessors`), or
leaked internal implementation (e.g. `succsandmarkers`).
With mutation enabled, callers should use the `mutations` template to get a
list of a commit's mutations.
Reviewed By: DurhamG
Differential Revision: D12980772
fbshipit-source-id: 920d47f7c61ad52f562cd90f1cb405250c14bc25
Summary:
Include mutation records for all predecessors of the pushed commits in
infinitepush bundles.
When received from infinitepush, store these additional records in the local
store. This allows us to bridge any gaps in mutation that are omitted from the
local commits when they are received from infinitepush.
Reviewed By: DurhamG, quark-zju
Differential Revision: D12980777
fbshipit-source-id: b1535ca29c0fca3e6cb0f563d78c4c60d4aef58e
Summary:
Histedit needs to adjust its records of what commits are replaced. Currently
it does this by examining obsmarkers. If mutation is enabled, use the mutation
information instead.
Reviewed By: quark-zju
Differential Revision: D13279987
fbshipit-source-id: e9622a67635afe2023088fdf0e0b43b0bcd9223f
Summary:
Implement successorssets and foreground in terms of mutation records and
replace them when mutation metadata usage is enabled.
Reviewed By: quark-zju
Differential Revision: D10149263
fbshipit-source-id: bbf6d1fc44a9787660147e15936a9ee1951373ca
Summary:
When enabled, use mutation metadata for the `obsolete`, `extinct`, `orphan`,
`phasedivergent` and `contentdivergent` revsets.
Reviewed By: quark-zju
Differential Revision: D10149265
fbshipit-source-id: 5559fa22a6025e1d341538f3eb2257d8efee15e5
Summary:
Unfortunately, the Mononoke team needs to rename paths to add the markers everywhere.
They deprecated the Mononoke URL:
```
ssh://hg.vip.facebook.com//mononoke/fbsource
```
And they are asking us not to use the ssh://hg.vip.facebook.com//data/scm/fbsource URL without markers.
We finally agreed to have:
```
infinitepush = ssh://hg.vip.facebook.com//data/scm/fbsource?force_mercurial
infinitepush-other = ssh://hg.vip.facebook.com/data/scm/fbsource?force_mononoke
```
So, we would like the path rename not to cause `hg sl` to show that nothing is backed up.
We use the path as part of our filename, so we will go to the server to check the commits. It might slow down the very first `hg sl` or `pushbackup` after the path change a bit, but that should be acceptable.
Reviewed By: quark-zju
Differential Revision: D14366820
fbshipit-source-id: a0fd7bad530dd6690926fe02d38b93c2a72df00b
Summary: These 2 commands were broken when treemanifest.treeonly=True was set
Reviewed By: DurhamG
Differential Revision: D14316779
fbshipit-source-id: e626df41c92036fa3bd61c072f09b0d6c99c6f9f
Summary:
When searching for data, Mercurial searches the datastores by first looking
into the local cache, then trying to find the data over the network. When
remotefilelog.fetchpacks is enabled, all the data fetched over the network will
be stored into packfiles, but those fetches are done via the loose-files remote
datastore. Due to this, even if memcache successfully finds the requested data,
the datastore won't find it, because it expects loose files.
Fixing this simply requires the fetches to be done via a packfile store when
remotefilelog.fetchpacks is enabled.
Reviewed By: DurhamG
Differential Revision: D14216815
fbshipit-source-id: ed97c64651a733b36e0f2b4e209ce8ccdbb7911e
Summary:
When using memcache in its packfile mode, the key no longer contains the name
of the repo, and therefore, memcache will store the downloaded packfiles under
/var/cache/hgcache instead of /var/cache/hgcache/fbsource/packs.
Reviewed By: quark-zju
Differential Revision: D14217056
fbshipit-source-id: f78ce1021985dbb71a1db21d8821e8b8fcda8179
Summary:
Currently when checking if a commit is backed up on the server, we assume that
if the local backup state says it is not backed up then the server can't have
it. However, it's possible for the local repo to have received the commit
from elsewhere even if the backup state says it is not backed up, when in fact
it is. Make the --remote flag always check with the server, even if the local
state says the commit is not backed up.
Reviewed By: ikostia
Differential Revision: D14279495
fbshipit-source-id: cbd8253c6bfd0ee4cc3f573fabe5b632af7ad569
Summary: Apparently we need to add newlines between config items in the docstring to ensure that it gets formatted correctly in the help output. A similar change was made upstream: https://www.mercurial-scm.org/repo/hg/rev/040447dc3c62
Reviewed By: quark-zju, singhsrb
Differential Revision: D14267738
fbshipit-source-id: c61f67f2c119fd9d71326eb42c2a4aa2106573da
Summary: Allow file data/history packs to be fetched via HTTP when the Eden API is enabled.
Reviewed By: quark-zju
Differential Revision: D14257368
fbshipit-source-id: 8b6823a57a6fdef546a596df20387b3fc1ccdd4a
Summary: This diff adds a new `hg debuggethistory` command that takes filenode/path pairs from stdin, fetches the history of the files from the API server, and writes the results to a historypack in the hg cache.
Reviewed By: quark-zju
Differential Revision: D14248082
fbshipit-source-id: 8014a758abd3a578ea213d8d3177812629b2fd51
Summary: The Eden API client in Mercurial should be a singleton. This diff assigns the client to `repo.edenapi` so that it is accessible throughout the code.
Reviewed By: quark-zju
Differential Revision: D14233314
fbshipit-source-id: 8e0ed22c32611e8f6e7d4461c3e31870d47a0e95
Summary: Add a `--long` option, similar to the one available on `hg debugdatapack`, which prints full node hashes.
Differential Revision: D14256168
fbshipit-source-id: 342932aa4dd96197daf6bbba7b5bc8623ebbf9bd
Summary: Just provide the important information.
Reviewed By: quark-zju
Differential Revision: D14230573
fbshipit-source-id: 945bb0be48ed38ba4511d0cef605ef0b7baa2b5d
Summary:
When the pack directory is missing, os.listdir will throw an OSError. Instead
of failing repack, let's just ignore the error.
Reviewed By: singhsrb
Differential Revision: D14234830
fbshipit-source-id: 14e683b7d850ab316d9821031e91a19e5f2f4c1e
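The fix above can be sketched as a small helper that treats a missing pack directory as empty rather than letting `os.listdir` abort the repack. This is a hypothetical illustration of the approach, not the actual repack code:

```python
import errno
import os

def list_pack_files(packdir):
    """List entries in the pack directory, treating a missing directory
    as empty instead of failing the whole repack.

    Hypothetical helper illustrating the fix; the real logic lives in
    the repack code.
    """
    try:
        return os.listdir(packdir)
    except OSError as ex:
        if ex.errno == errno.ENOENT:
            return []  # missing directory: nothing to repack
        raise  # any other error is still fatal
```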
Summary:
Instead of falling back to python, we should just skip the current repack. The
python code already does this, but the rust one will report the error to the
user (and scuba).
Reviewed By: singhsrb
Differential Revision: D14234831
fbshipit-source-id: d285499ae85205d6ccee3c22eb50352d77673488
Summary:
When pushing an empty commit, the server receives a pack part with no
data, which ends up not producing any pack files. Some newly added logic tries
to access the pack paths, which then crashes.
Let's fix it so we get None for the paths in this situation, and update the only
consumers of those paths to handle the None case.
Reviewed By: quark-zju
Differential Revision: D14237452
fbshipit-source-id: 418bd30179fdb76b9de3bc2c2509079502edfef8
Summary:
The Rust entry point has an incorrect `sys.executable`. Work around it with a
hard-coded `python2` for now.
Differential Revision: D14236437
fbshipit-source-id: 97d99d59365c2d5c70bfdeebc66b51f870073ded
Summary:
We need to ensure that memcommit is executed with the hgsql lock if
the `hgsql` extension is enabled.
Reviewed By: DurhamG
Differential Revision: D14177416
fbshipit-source-id: dcabf08003b618579461c608f924fe7f5b796c37
Summary:
The `memcommit` command output will be processed by the calling
process and therefore, let's just output JSON for easy consumption.
Reviewed By: quark-zju
Differential Revision: D14177417
fbshipit-source-id: 541cf73fa2bef20512164b43f1c4224415fba596
Summary:
This commit introduces the `memcommit` command to allow creation of
commits without a working copy.
Reviewed By: quark-zju
Differential Revision: D14177415
fbshipit-source-id: 518d29e2fe8fcc7e74d10ec22ebfcd22e136da06
Summary:
We will be relying on `pushrequest` to create commits to the
repository without a working copy using the `memcommit` command that will be
introduced in D14177415. Therefore, let's introduce a class method for creating
a pushrequest based on memcommit parameters.
Reviewed By: quark-zju
Differential Revision: D14177413
fbshipit-source-id: fe326e1e2908724b81a95fbf13a05163fb435ada
Summary:
Calculating the file conditions will be a common operation for any
class method which creates the pushrequest object as in D14177413. Therefore,
it makes sense to segregate this functionality.
Reviewed By: quark-zju
Differential Revision: D14177414
fbshipit-source-id: d57919098f372a9cbed13f59e3d3c4e3cc7a0b55
Summary:
This is certainly not required while creating new commits using
stackpush. Therefore, let's change the code to make this optional. See
D14177415 for an example of when specifying the date is not required.
Reviewed By: quark-zju
Differential Revision: D14177422
fbshipit-source-id: 6a8c5bcf8a01d79c46bc4fe1b4cca8ec16f7f0c2
Summary:
Change the message about limiting the number of backup heads to only print
when actually performing a backup. Previously it was printed by anything that
used the `notbackedup()` revset predicate, which could cause it to be printed
in `hg log` commands in the middle of normal log template output.
e.g.:
$ hg log -r. --graph -T'{node} {sl_backup}\n'
@ backing up only recent 50 heads
| ffc89f60162956497cd9e8e33798dd1d63ddd1da
~
This diff also changes the behavior to print the message to stderr rather than
stdout (using `ui.warn()` instead of `ui.status()`).
Reviewed By: quark-zju
Differential Revision: D14212701
fbshipit-source-id: ef3636850d8149cb0c1931b84b9a5b45e60f89c7
Summary:
This commit introduces the `memcommit` extension along with the
`debugmemcommit` command. The `debugmemcommit` command serializes a commit into
a format consumable by the commit-creation command, i.e. `memcommit`,
introduced in D14177415. `debugmemcommit` is mainly for testing purposes.
Reviewed By: quark-zju
Differential Revision: D14177419
fbshipit-source-id: 3a05a210986402f661d7d08902f28fd53f4bdb2d
Summary:
`push --new-branch` is very rarely used and it does not make much sense with
checkheads removed (D14179861). Remove it everywhere.
There is still [one user](https://fburl.com/t5hmcxrp) of the
`push --new-branch` flag. Do not remove it just yet.
Reviewed By: singhsrb
Differential Revision: D14212180
fbshipit-source-id: 18f80576ab6464fc36b047a8a35b339231ee9d8e
Summary:
Previously one couldn't use `sendunbundlereplay` to replay a bundle that just
deletes a bookmark, i.e. one that sends only a pushkey part. The problem was
that the `bundleoperation.gettransaction` method wasn't called, and so a few
hook arguments weren't set.
To fix it, this diff just calls this method before calling pushkey. The
solution is not clean, but I don't see much better alternatives.
Another, smaller change in this diff is that the sendunbundlereplay command
now requires the `--deleted` flag. This is just for convenience.
Reviewed By: quark-zju
Differential Revision: D14185380
fbshipit-source-id: f511dc0b9906520b7877501b37639d89ada6fc45
Summary:
We didn't process parts like `error:abort` and so we might have easily missed
an error. This diff fixes it.
Reviewed By: quark-zju
Differential Revision: D14185378
fbshipit-source-id: e68e365fd939a4bd6a0c2835a513ebc94530aa87
Summary:
This solves an issue vipannalla saw where the heuristics logic behaves incorrectly
when running `hg up -C c4a88583; hg graft 23001ead`. The file `great_persons_on_ex_civilization-inl.h`
would be marked as "unresolved" and removed from the working copy, potentially
due to other mergedriver actions, while it should merge cleanly and not
appear in mergestate at all.
After debugging, the file was only renamed on one side, and not changed on the
other side. In the heuristics code path, the file was reported as copied, which
confused the callsite.
Reviewed By: singhsrb
Differential Revision: D14195031
fbshipit-source-id: 0602fd56b75219f851c0175debfe72c4d49d652d
Summary:
This is probably not the proper fix but we're getting rid of svn too.
Remove tests coupled with named branches. Modified some critical tests (ex.
"push") so they can still pass.
Reviewed By: DurhamG
Differential Revision: D14210968
fbshipit-source-id: 62e02179b0e4745db8f90f718613afad42d7376a
Summary:
In some cases, a user has a badly configured ssh config which leads to
unexpected errors when running hg pull. Collecting the ssh config should help
us catch what is wrong.
Reviewed By: sfilipco
Differential Revision: D14189702
fbshipit-source-id: 73fff933987bcc95f23795c5cb6beee54ae2f141
Summary:
We are working on a `memcommit` extension to provide a command for
making commits without a working copy. This is required for implementing the
ScmWrite Commit API i.e. https://fb.quip.com/u5WJAx6i59Kl. The plan is that
ScmWrite service will use this command to create the commits in the repository.
This commit just introduces the data model for the memcommit.
Reviewed By: DurhamG
Differential Revision: D14177420
fbshipit-source-id: 5c5e63bfecedd71a56d9e0b27e308e1803a4dafe
Summary: I think when we moved to process commits stack by stack we didn't change the timeout.
Differential Revision: D14188545
fbshipit-source-id: a01432e6aef29f7e603742f854c323996856fdda
Summary:
Move the strip extension to core. Rename the command to `hg debugstrip` as it
is not intended for use by users. Users should use `hg hide` instead.
Reviewed By: quark-zju
Differential Revision: D14185822
fbshipit-source-id: ef096488cb94b72a7bb79f5bf153c064e0555b34
Summary:
When receiving cloud bookmarks, if they point to a hidden/obsolete commit,
don't omit them. The bookmark will make the commit visible again.
Reviewed By: quark-zju
Differential Revision: D14183009
fbshipit-source-id: ddcb8cce6aaa1eefae93490f76c3dffeaffda21c
Summary:
`fetch` is a deprecated extension. It uses non-trivial branch-related logic.
Drop it to unblock further branch related cleanups.
Differential Revision: D14180593
fbshipit-source-id: 8288a7f0ac1ba72cf476cb78f109c71aa11c92c0
Summary:
They're no longer used. Drop them. The `branchcache` is still somehow used,
although it's basically equivalent to `{"default": heads}`.
We can probably clean it up further after detaching from Subversion.
Reviewed By: singhsrb
Differential Revision: D14180592
fbshipit-source-id: 45230d486f203bf3f55e89ce9eb89e6855e14e54
Summary:
The CVS branch support will break with the upcoming branchcache removal.
CVS repos are rare. Let's just remove the code.
Differential Revision: D14179863
fbshipit-source-id: 890ce34958d2efe4f2ef02b1d72cf92f6269378c
Summary:
We had `ui.checkheads=false` set for a long time. Let's remove the feature
entirely and update the tests.
This is necessary before killing the branch cache, as some tests still use
different branches and there would suddenly be more "heads" that cause
test issues.
Reviewed By: sfilipco
Differential Revision: D14179861
fbshipit-source-id: 0de76566799a9560b45e823cc5f49cfda9e3dd30
Summary: Mononoke uses the markers but we shouldn't look at them for Commit Cloud.
Differential Revision: D14188356
fbshipit-source-id: e5dee581728a9bc83d2f7a17575b3ae6b3183d39
Summary:
Add this check to avoid overhead: we don't need to back up to the secondary
place if it is the same as the first.
Reviewed By: singhsrb
Differential Revision: D14187754
fbshipit-source-id: 6ee59ae2f0846716ca99253958af7088d0538df9
Summary: Just renaming, and renaming another variable as well so they look similar.
Reviewed By: DurhamG, quark-zju
Differential Revision: D14185033
fbshipit-source-id: e34de690274afd2f2c6e51db21c7b158f6c3452a
Summary:
Treemanifest uses its own fallbackpath for reads in all cases, but
particularly in the case of remotenames it should stay on the path that is
actively being used by the push. It is possible to have remotenames which are
mirrored and selected by query strings in the repo path. In this case it is
possible that the mirror is still out of date when reading back data from our
push. Ensure that when doing a push, the remote server is considered
'sticky', so that we read back from the remote we pushed to, rather than
determining the path ourselves.
To disable, please use:
treemanifest.stickypushpath=False
Reviewed By: DurhamG, markbt, quark-zju
Differential Revision: D14165444
fbshipit-source-id: 75a53ffab895d87a4c52814f7887145c134868b5
Summary:
Context: in the pushrebase replay job we run on Mononoke, we also want to test
our hooks, i.e. replay pushrebase requests that were denied on mercurial because
of a hook failure. The problem is that at the moment the only error message we have
is `prepushrebase.driver hook failed` (see xdb table `xdb.mononoke_production`,
query `select * from pushrebaserecording where pushrebase_errmsg like '%hook%'
limit 10;`). This error message tells us nothing, not even what hook failed.
To change that, I suggest making it possible for python hooks to raise HookAbort
and provide more information in the `hook_failure_reason` field.
Reviewed By: quark-zju
Differential Revision: D14131359
fbshipit-source-id: 69a2b51b30c78efadf3ba0d3332f46a777022568
Summary: In chef there is currently an override to 50, so if someone ends up with more for some reason, it is nice to add a message.
Reviewed By: quark-zju
Differential Revision: D14170749
fbshipit-source-id: 3f3c79e4e6103523c14c8d9c1600f104e3d5d3d8
Summary: `pastry` is the modern replacement for `arc paste`. Let's use it in `hg rage` instead of `arc paste`.
Reviewed By: singhsrb
Differential Revision: D14172108
fbshipit-source-id: 6586b9a8d8b90bac55d91baed852edbc7c1d9db9
Summary:
Currently, stackpush requires specifying the original node for each
new node to be created. This inhibits the creation of new commits which are not
replacing any commit via stackpush. This commit changes the behaviour such that
specifying the original node is optional.
Reviewed By: quark-zju
Differential Revision: D14153674
fbshipit-source-id: b3d150a8a8044ac1740937f2dc058ce542ee13e4
Summary:
First stab at a job that will keep hg in sync with Mononoke when Mononoke
becomes a source of truth.
Reviewed By: ikostia
Differential Revision: D14018269
fbshipit-source-id: f88c5eba8bf5482f2f162b7807ca8e41a3b4291d
Summary: There was a typo: the timezone is at `[1]`, not `[0]`.
Reviewed By: ikostia
Differential Revision: D14147329
fbshipit-source-id: 8ad4bff810ed949a9f8e86d03ef62bc63aaf11bd
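For context, Mercurial represents a commit date as a `(unixtime, offset)` pair, where the offset is the timezone in seconds west of UTC. A minimal sketch of the bug class the fix addresses, reading the timezone from the wrong index (the accessor name is hypothetical):

```python
# Mercurial stores a commit date as (unixtime, offset), where offset is
# the timezone in seconds west of UTC.
def tz_offset(hgdate):
    _unixtime, offset = hgdate  # the timezone is hgdate[1], not hgdate[0]
    return offset

# e.g. 2019-01-01 00:00:00 UTC with a UTC-8 (PST) offset:
assert tz_offset((1546300800, 28800)) == 28800
```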
Summary: The push-path will be used for the Mononoke write path roll out.
Reviewed By: markbt
Differential Revision: D14153207
fbshipit-source-id: 158e6e08dfa017c98e668fdfc37138dd7b7d76c7
Summary:
We can't run in parallel at the moment as the log file and the lock file are
shared.
Every path maintains independent backup state (the previous diff).
The secondary backup state doesn't affect smartlog (only the main one)
The issue with this approach is that we maintain backup lock a bit longer.
Unfortunately, the progress in smartlog doesn't show anything about the second backup.
I added 'finished', it makes it easier to compare in the logs.
Reviewed By: markbt
Differential Revision: D14149399
fbshipit-source-id: f90e8aac6cb8dee53d5c7468bd6adba067e13362
Summary:
We are going to support 2 different backends of Commit Cloud: Mercurial and
Mononoke.
Each of them should maintain local backup state separately.
The output of some tests has changed slightly; because of the separate backup
state, the same error appears earlier when we try to back up stacks.
The idea is to have separate backup states for different remote paths, but
there will be only one cloud sync state for the current source of
truth. We could include the remote path there and then validate that the cloud
sync state is correct if the remote path has been changed.
However, for backup states it is much easier to have them separately (and we
will back up to 2 places).
Reviewed By: markbt
Differential Revision: D14138496
fbshipit-source-id: 0a7a763a395be5456cbd724bff7ebc069f03fb0e
Summary: The `hg update` command has short flags for `--merge` and `--clean`, namely `-m` and `-C`. Let's add the short versions to `hg next` and `hg prev` as well.
Reviewed By: quark-zju
Differential Revision: D14155218
fbshipit-source-id: d51f5f658b525809e4c512ffa300dea4b4cabcdf
Summary:
We're encountering an issue that I think is caused by invalid data
coming back from memcache, possibly due to our recently introduced connection
reusing. Let's add some checksums to verify that the data we put in is identical
to the data we get out for a given key.
Reviewed By: kulshrax
Differential Revision: D14141683
fbshipit-source-id: 206b51b862db7d54def02f5310b90f473d5a0d03
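The checksum scheme above can be sketched as a pair of wrap/unwrap helpers around memcache values. This is a hypothetical illustration (function names and the SHA-1 choice are assumptions), not the actual memcache client code:

```python
import hashlib

def wrap_value(data):
    """Prefix the payload with its SHA-1 digest so corruption can be
    detected when reading it back. Hypothetical sketch of the checksum
    scheme, not the real client code."""
    return hashlib.sha1(data).digest() + data

def unwrap_value(blob):
    """Return the payload, or None if the checksum doesn't match."""
    digest, data = blob[:20], blob[20:]  # SHA-1 digests are 20 bytes
    if hashlib.sha1(data).digest() != digest:
        return None  # corrupted or truncated value: treat as a cache miss
    return data
```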
Summary: We weren't passing the `--merge` flag from `hg prev` and `hg next` to the underlying invocation of `hg update`. This diff fixes the problem so that `--merge` now works as expected.
Reviewed By: DurhamG
Differential Revision: D14154244
fbshipit-source-id: 4a2412cca714ec8f269eb5f2989e39821642fbb3
Summary:
Backs out D13944829, which added preloading the manifest to pushrebase. In
a situation where you receive tree commits, but the repo is hybrid, this causes
it to try to lookup the bundled tree manifests in the flat manifest revlog.
Let's just back this out for now until we can come up with an appropriate fix,
or move all our repos to treeonly.
Reviewed By: quark-zju
Differential Revision: D14151876
fbshipit-source-id: 0f7419d5b9bcd9d7ce363f4349e9e2e4a86cf713
Summary:
If treestate is in use, fsmonitor state is stored in the treestate, rather than
in a separate fsmonitor.state file. Update rage to understand this.
Reviewed By: quark-zju
Differential Revision: D14131131
fbshipit-source-id: d80d766625d7915b6a76f66f33e763eed23677e9
Summary:
This is a perf optimization. `unbundlereplay.respondlightly` instructs the
server not to produce the pushback parts regardless of what the `replycaps` part
of the incoming bundle says. This is important, since the mononoke-hg sync will
send all the bundles in a serialized way, so we want to optimize time where
possible.
Reviewed By: StanislavGlebik
Differential Revision: D14131575
fbshipit-source-id: afec15347d43fa52b1ec64b4ac8ece5b227ccf7d
Summary:
A lot of code is duplicated between data stores and history stores, and one
reason for this is the absence of a common trait between these 2. Adding a new
Store trait will make it easier to write generic code that works across
data and history stores.
Reviewed By: quark-zju
Differential Revision: D14091899
fbshipit-source-id: deef1d43a7d300cb3607c67554ad54f20c870e23
Summary:
Prepare the code to use the 'known' wireprotocol command to check what is backed
up on the server instead of the slower 'lookup'. We don't need to reestablish
the ssh connection, and we can test all the nodes in one go.
Also test that the 'known' request works with Mononoke (all commands have been
tested with isbackedupnodes => isbackedupnodes2).
Reviewed By: markbt
Differential Revision: D14131383
fbshipit-source-id: 5a150b64d0e84a02357c2367879b2da8493d9167
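The advantage of 'known' over 'lookup' can be sketched as a single batched wire call returning one boolean per node, instead of one round trip per node. The helper name is hypothetical; only the `peer.known(nodes) -> list of booleans` shape follows Mercurial's wireprotocol:

```python
def backed_up_nodes(peer, nodes):
    """Return the subset of nodes the server already has, using a single
    'known' wire call instead of one 'lookup' round trip per node.
    Sketch only; `peer` is assumed to expose Mercurial's 'known'
    wireprotocol command (list of nodes -> list of booleans)."""
    flags = peer.known(nodes)  # one request for all nodes
    return [node for node, present in zip(nodes, flags) if present]
```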
Summary:
Use the same way to check whether commits have been backed up in Mononoke and Mercurial.
The only common way to check is the 'lookup' command, because Mercurial doesn't support discovery for commit cloud commits, and the 'known' command is also not supported.
Also, we have to go one by one because lookup doesn't have any better API.
It is still much faster than backing up commits that are already there.
Introduce this check for pushbackup as well.
Remove the hacky way of checking it from cloud sync.
For commit cloud in Mononoke we will have backfill, so the server side check will be heavily used when you go to Mononoke for the first time.
Unfortunately, the connection pool module in Mercurial is not good at detecting closed connections and can easily return a broken connection on the next call.
Reviewed By: markbt
Differential Revision: D14085849
fbshipit-source-id: d76d9a71f9efdbdfec4de3198cd428b6b693418d
Summary:
This is to be used from Mononoke->hg sync.
Currently expects only `pushrebase` bundles, e.g. one head and one book to
move.
Reviewed By: StanislavGlebik
Differential Revision: D14116130
fbshipit-source-id: 959a6e51f51e21da5592c84188e294a33057ffaa
Summary: When comparing the number of files changed, fsmonitor was incorrectly using the length of the result dict (always 4), rather than the number of files watchman returned. Use the right list of files instead.
Reviewed By: markbt
Differential Revision: D14123604
fbshipit-source-id: 94684f1f189d045b2f6a880180b15e52ba9bba8c
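The bug above comes from watchman's query result being a dict with a fixed set of keys, so `len(result)` is constant no matter how many files changed. A minimal sketch (the result fields shown are an assumed shape of a watchman response):

```python
# Assumed shape of a watchman query result: a dict with a fixed set of
# keys, one of which ('files') holds the actual list of changed files.
result = {
    "version": "4.9.0",
    "clock": "c:123:45",
    "is_fresh_instance": False,
    "files": ["a.txt", "b.txt", "c.txt"],
}

wrong = len(result)           # always the number of dict keys (here 4)
right = len(result["files"])  # the actual number of changed files
assert (wrong, right) == (4, 3)
```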
Summary:
This will be used to determine whether the server supports 'listkeyspatterns'
(Mononoke doesn't support this).
Reviewed By: DurhamG
Differential Revision: D14107742
fbshipit-source-id: c9a42e8516eb5660ab2f498996b211db7086bcb1
Summary:
In order to move the types in `edenapi-types` (containing types shared between Mercurial and Mononoke) to the `types` crate, we need to move a few types from the `revisionstore` crate into this crate first, because `revisionstore` depends on `types`, which would create a circular dependency since `edenapi-types` uses types from `revisionstore`.
In particular, this diff moves the `Key` and `NodeInfo` types into their own modules in the `types` crate.
Reviewed By: quark-zju
Differential Revision: D14114166
fbshipit-source-id: 8f9e78d610425faec9dc89ecc9e450651d24177a
Summary:
The early-exit logic was incorrect: if there was at least 1 bookmark in the
repo and the backup state was not empty, it didn't catch that nothing had
changed, so whenever there were any bookmarks in the repo, every `hg pushbackup`
established a connection to Mercurial. Bookmarks are dicts, so it is fine to
compare them directly.
Reviewed By: quark-zju
Differential Revision: D14103938
fbshipit-source-id: 0edc4d9e186199670765fd2362236330e831c7d9
Summary:
Migrate away from using the "hg branch" command to "hg commit --extra branch="
instead. In the future the branch namespace will be removed, so we create
local tags instead.
`test-revset2` was split from `test-revset` and has the same header. Make the
same change to it. Besides, de-duplicate tests about the `tag()` revset.
Reviewed By: ikostia
Differential Revision: D14058435
fbshipit-source-id: b59fadc43939d85d14bbf9f81227c523b65557a0
Summary:
During repack, the repacked files are deleted without any verification. Since
Adam saw some data loss, it's possible that somehow repack didn't fully repack
a packfile but deleted it anyway. Let's verify that the entire packfile was
repacked before deleting it.
Since repack is mostly a background operation, we don't have a way to notify
the user, but we can log the error to a scuba table to analyse further.
Reviewed By: DurhamG
Differential Revision: D14069766
fbshipit-source-id: 4358a87deeb9732eec1afdfb742e8d81db41cd87
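The verification can be sketched as a set-containment check before deletion: only delete the old packfile if every key it held is present in the repacked output. The function name is hypothetical; the real check inspects actual pack entries and logs failures to scuba rather than notifying the user:

```python
def safe_to_delete(original_keys, repacked_keys):
    """Return True only if every entry from the original packfile made
    it into the repacked output. Hypothetical sketch of the
    verification step described above."""
    missing = set(original_keys) - set(repacked_keys)
    return not missing  # any missing entry means we must keep the file
```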
Summary:
Removing files on Windows is hard. It can fail for many reasons, many of which
involve another process having the file open in some way. One way around this
is to rename the file instead, since renaming isn't as restrictive as removing.
Since hg repack attempts to remove any temporary files, it will also try to
remove the packfiles that we failed to remove earlier.
Reviewed By: DurhamG
Differential Revision: D14030445
fbshipit-source-id: 1f3799e021c2e0451943a1d5bd4cd25ed608ffb6
Summary:
Preloading all the pack files on initialization ties the lifetime of the
packfiles to the repo. For normal operations, this is fine, as packfiles are
mostly read. During a repack however, we need to be able to remove them, and
while having an open file handle allows deletion on unix OSes, it prevents it
on Windows.
The Rust repack now succeeds on Windows.
Reviewed By: DurhamG
Differential Revision: D14013786
fbshipit-source-id: 99279d4af67a0dfe8679159e9409186f56a09296
Summary:
When using query strings or fragments in a URL, we should treat the repository
paths as the same repo. This allows servers to use query strings for metadata
without treating the urls as different servers. By introducing normalization in
our grouping, we consider the normalized result to be the qualifier for what is
the same repo, rather than the full absolute path, which includes query strings
and fragments.
Reviewed By: DurhamG
Differential Revision: D14051479
fbshipit-source-id: c82fe041467f6bd6af210688c0be873e2da93781
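A minimal sketch of the normalization, assuming it amounts to dropping the query string and fragment before grouping. (The hg code of this era is Python 2, where these functions live in the `urlparse` module; the sketch uses the Python 3 name.)

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_repo_url(url):
    """Drop the query string and fragment so mirrored paths that only
    differ in metadata group as the same repo. Hypothetical helper
    illustrating the grouping described above."""
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path, "", ""))
```

For example, `ssh://hg.example.com/repo?mirror=east` and `ssh://hg.example.com/repo` normalize to the same URL and are grouped together.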
Summary: We were reading the `edenapi.url` config item without explicitly setting it up with the `configitem()` function. Not sure what negative impact this would have, but it's probably a good idea to have the explicit call in place.
Reviewed By: quark-zju
Differential Revision: D14075080
fbshipit-source-id: bb4e25de273341768f850f1d5aab6ac21e7f2fc5
Summary: Now that the `edenapi` module in bindings is always available for all platforms, we no longer need a try block around the import.
Reviewed By: quark-zju
Differential Revision: D14075082
fbshipit-source-id: e3f45e67ef4572e58f85875af12390ea5d697d43
Summary: Move the edenapi Python bindings into the common `bindings` crate.
Reviewed By: quark-zju
Differential Revision: D13963179
fbshipit-source-id: 76dead82af992615a9e452ee6fbb9f66639c822c