Summary: Update the tests to expect the new output added by D15173846.
Reviewed By: quark-zju
Differential Revision: D15212315
fbshipit-source-id: 82bb3b4e67a28eb8905d35fcaa8251947163f521
Summary: Update the test code to return a 3-tuple as required after D15173846.
Reviewed By: singhsrb
Differential Revision: D15212317
fbshipit-source-id: 5c5ecaae858a3eaab23f624c11f0dda3ac74a870
Summary:
I wanted this feature in multiple cases. For example, I have renamed
`segdag::SegDag` to `segment::Dag`, and want to edit commit messages for
`D15055347::D15055347`, a 9-diff stack. Editing them one by one is
painful.
Reviewed By: singhsrb
Differential Revision: D15188668
fbshipit-source-id: c7cc11aca0a5e16992b5246a74346a35bec00770
Summary:
This introduces `repo.sqlreporeadonlystate()`, which works similarly to `repo.sqlisreporeadonly()`, but also returns the reason why the read-only state is the way it is.
The underlying goal is to use this in a repo hook to report the lock state.
I retained the `sqlisreporeadonly` function so that we don't need to synchronize deployment of this code and the underlying hook.
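A minimal sketch of the shape of the two APIs, assuming a hypothetical repo-state dict (the real hgsql implementation differs):

```python
# Hypothetical sketch only: the repo-state dict and the reason strings are
# illustrative, not the actual hgsql implementation.
def sqlreporeadonlystate(state):
    """Return (readonly, reason) for the repo's SQL read-only state."""
    if state.get("locked_by"):
        return True, "locked by %s" % state["locked_by"]
    return False, ""

def sqlisreporeadonly(state):
    """Backwards-compatible boolean wrapper around the richer API."""
    return sqlreporeadonlystate(state)[0]
```

Keeping the boolean wrapper lets old callers and the new hook coexist during the rollout.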
Reviewed By: quark-zju
Differential Revision: D15165946
fbshipit-source-id: 0a62147167fa826b575178dd261990a956b5f317
Summary:
Add support for the `unamend` command. It should use the mutation predecessors
if mutation is enabled, and update the visibility of the commits.
Reviewed By: quark-zju
Differential Revision: D15146976
fbshipit-source-id: e9ee4d26f45ba9e5c3c05a7bca80c8ac326adb9c
Summary:
We have an existing method which can capture the desired functionality
without dropping all the data in the `revision_references` table. This
primarily solves these problems:
- Reuse of code.
Reviewed By: farnz
Differential Revision: D15107057
fbshipit-source-id: 5f9970ffd13536808c1b201481b6d2015fbe8295
Summary: Added logging of a number of the currently tracked accessed remote bookmarks.
Reviewed By: mitrandir77
Differential Revision: D15080683
fbshipit-source-id: c03c417afcd24683998689365c893d9e16f265f8
Summary:
Track remote names that are used as destination for hg update.
Resolving a name in the repo actually happens twice: once as validation during parsing and tokenization of the given specs, and once for the actual resolution. It's still fine to update the file with used bookmarks twice, because the update/pull operations are much heavier, so updating the file won't make things noticeably slower. Implementing a kind of cache for used remote names so we could update the file only once isn't worth it, as the feature will only be temporarily enabled and won't be needed after the selective pull rollout.
Reviewed By: markbt
Differential Revision: D15048105
fbshipit-source-id: 5b03443a6ab349e3bd88613d21e7b1efdc1ff6cf
Summary:
Tracking remote bookmarks that were pulled with
```
hg pull -B <remote name>
```
All these remotenames, if they exist, will be stored in `.hg/selectivepullusedbookmarks` file.
It will allow us to estimate how much memory we need to keep remote names in sync in Commit Cloud, and to automatically mark the collected remote bookmarks as "interesting" when selective pull is enabled.
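A rough sketch of the bookmark-tracking file described above; the filename comes from this summary, while the append/dedup logic is an assumption for illustration:

```python
import os

def record_used_bookmarks(state_dir, names, filename="selectivepullusedbookmarks"):
    # Append newly seen remote bookmark names, skipping ones already recorded.
    path = os.path.join(state_dir, filename)
    existing = set()
    if os.path.exists(path):
        with open(path) as f:
            existing = {line.strip() for line in f if line.strip()}
    with open(path, "a") as f:
        for name in names:
            if name not in existing:
                f.write(name + "\n")
                existing.add(name)
```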
Reviewed By: markbt
Differential Revision: D14912903
fbshipit-source-id: 3001869175553327c0840e2cfb829724dfd82893
Summary:
This diff adds support for extension source code inlined into configuration.
The main intended use case is to use Chef to quickly deploy config changes.
The config change can be inlined extensions that patch broken logic.
Since extensions are more powerful than pure config changes, we gain the
ability to hotfix more complicated issues.
In theory, Chef could also write extension files to the filesystem directly, but
that's less convenient: different OSes might require different paths, and
cleaning the files up might be extra work.
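A sketch of how inlined extension source might be turned into a module; the config plumbing is omitted, and the loader shape is an assumption, not the actual implementation:

```python
import types

def load_inline_extension(name, source):
    # Compile the inlined source as if it were a file named <inline:name>,
    # then execute it inside a fresh module object.
    mod = types.ModuleType(name)
    exec(compile(source, "<inline:%s>" % name, "exec"), mod.__dict__)
    return mod
```

Usage: `load_inline_extension("hotfix", "def fixed():\n    return True\n").fixed()`.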
Reviewed By: ikostia
Differential Revision: D15073429
fbshipit-source-id: 7696bdce72bf7222debc72002173feb7de95198f
Summary:
To be able to investigate problems with dangling lock files "infinitepushbackup" and "prefetch".
- "infinitepushbackup", file "infinitepushbackup.lock" in repo.sharedvfs
- "prefetchlock", file "prefetchlock" in repo.svfs
Also a few small fixes nearby:
- Create the "undolog" directory if it doesn't exist.
- The `report` function was restructured. Now, if no lock checking method is
  passed, the lock will be checked with `lockmod.lock`.
Reviewed By: markbt
Differential Revision: D14949807
fbshipit-source-id: 02143ff923145e67e88c5627cf3355a834823b6d
Summary:
D14185380 made it so pushrebase will always force pushkey to get a
transaction. This has the side effect of making infinitepush bundles take the
lock and fire transactions. They should do neither.
Let's instead request the transaction at bookmark edit time.
Reviewed By: singhsrb
Differential Revision: D15094917
fbshipit-source-id: 6573447a7ba61b1853a37eacb1b3e767abb3f27f
Summary:
D14185380 made a change that caused infinitepush to take a lock when it
shouldn't. Let's add a test demonstrating that. In a future diff we'll fix the
bug and update the test.
Reviewed By: singhsrb
Differential Revision: D15106388
fbshipit-source-id: 5a37688647ccf646f61e66bb33283c91d06c8761
Summary:
When `remotefilelog.fetchpacks` is enabled, an automatic repack will be
triggered on refresh. Let's add a test to verify this behavior.
Reviewed By: singhsrb
Differential Revision: D15095200
fbshipit-source-id: 89c0a98925e4e53413cf9ea1b1862859c370e12a
Summary:
Undo removes visibility of commits one by one, starting at the bottom of the
stack. This doesn't work, as the bottom of the stack is kept visible by the
commits above it.
Remove them all in one go.
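The batched removal can be sketched as follows, assuming a linear stack where `parents` maps each commit to its single parent (all names here are illustrative, not the actual visibility code):

```python
def remove_visible(heads, parents, to_remove):
    # For each visible head, walk down past removed commits to the first
    # surviving ancestor, so the whole stack disappears in one step instead
    # of the bottom being kept visible by the commits above it.
    remove = set(to_remove)
    new_heads = []
    for head in heads:
        node = head
        while node is not None and node in remove:
            node = parents.get(node)
        if node is not None and node not in new_heads:
            new_heads.append(node)
    return new_heads
```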
Reviewed By: mitrandir77
Differential Revision: D15079353
fbshipit-source-id: 52335b6dd1bd91c7f87d1a8ee2dbefa8aff7d24b
Summary:
Shelve uses hidden commits to store shelved changes. These need to be made
invisible, too.
Differential Revision: D15079352
fbshipit-source-id: 6063270d18df81d9b4af7823542a38c5feb45e3a
Summary:
This test is failing on OSX due to case collisions. Let's just avoid
the case collisions by using the directory name as `x` instead of `a`.
Reviewed By: quark-zju
Differential Revision: D15083754
fbshipit-source-id: 0752f06c71c315e349a8eea8dbe7da14e564f1b2
Summary:
It's been failing in continuous runs. Somehow one line of the traceback is
missing in the opt run; not sure why, but it's easy enough to skip it.
Reviewed By: quark-zju
Differential Revision: D15080872
fbshipit-source-id: 55eff2d471da05b109faa04b6801db1e6245d7a6
Summary:
The `path` argument passed to the follow revset is absolute. So `path` instead
of `relpath` should be used.
Reviewed By: DurhamG
Differential Revision: D15071189
fbshipit-source-id: 6aec76fa1a8cabd545a375aa40448cc75dbd1d6d
Summary:
Ordinarily loops are prevented in the mutation graph as the predecessors must exist
at the point that the successor is created. However, backfilling from a complex
obsolescence graph may inadvertently introduce cycles.
Since loops are invalid, we can safely ignore any mutation edges that may
introduce them. The `allpredecessors` and `allsuccessors` functions already
do this.
Add the same loop detection and skipping to the `predecessorsset` and
`successorssets` functions.
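The edge-skipping idea can be sketched like this, with the mutation graph modeled as a dict from node to its predecessor list (a simplification of the real store):

```python
def predecessors_no_loops(preds, node):
    """Walk predecessor edges, ignoring any edge that would close a cycle."""
    seen = {node}
    order = []
    stack = [node]
    while stack:
        current = stack.pop()
        for pred in preds.get(current, ()):
            if pred in seen:  # this edge would introduce a loop: ignore it
                continue
            seen.add(pred)
            order.append(pred)
            stack.append(pred)
    return order
```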
Reviewed By: mitrandir77
Differential Revision: D15062399
fbshipit-source-id: fe892d9236c8d8dc4e1322b82618ab4bca35d30a
Summary:
Use the graphnode `-` for all invisible commits, even obsolete ones.
Users will only see them in their logs if:
- they run log with `--hidden`.
- they have invisible commits that are temporarily unhidden (e.g. they've checked it out).
Reviewed By: mitrandir77
Differential Revision: D15061894
fbshipit-source-id: 86873bd86cb15cef72dae248b8e2a636378cc547
Summary:
Once `remotefilelog.fetchpacks` is enabled, `hg gc` will no longer be able to
limit the size of the hgcache. This will be particularly challenging for
Sandcastle/Quicksand as they already see hgcache over 100GB.
The long-term plan is switching to IndexedLog based stores with a log rotate
functionality to control the cache size. In the meantime, we can implement
a basic logic to enforce the size of the hgcache that simply remove packfiles
once the cache is over the configured size.
One complication of this method is that several concurrent Mercurial processes
could be running and accessing the packfiles being removed. In this case, we
can split the packfiles into two categories: ones created a while back, and new
ones. When packfiles from the first category are removed, lookups will simply
raise a KeyError and the data will be re-fetched from Memcache/Mononoke, i.e.
failure is acceptable. The second category consists of packfiles that were just
created by downloading them from Memcache/Mononoke, and the code strongly
assumes that they will stick around. A failure at this point will not be
recovered from.
One way of fixing this would be to handle these failures properly and simply
retry; the other is to not remove new packfiles. A cutoff of 10 minutes was
chosen to categorize the packfiles.
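The enforcement logic described above can be sketched as follows; the function name, the flat-directory layout, and the exact cutoff handling are assumptions for illustration:

```python
import os
import time

def enforce_cache_size(cache_dir, limit_bytes, min_age_seconds=600):
    # Remove the oldest cache files until the total size fits under the
    # limit, but never touch files newer than the cutoff: recently
    # downloaded packs are assumed to still be in use.
    now = time.time()
    entries = []
    total = 0
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        st = os.stat(path)
        entries.append((st.st_mtime, st.st_size, path))
        total += st.st_size
    for mtime, size, path in sorted(entries):  # oldest first
        if total <= limit_bytes:
            break
        if now - mtime < min_age_seconds:
            continue  # too new: concurrent processes may assume it exists
        os.unlink(path)
        total -= size
    return total
```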
Reviewed By: quark-zju
Differential Revision: D15014076
fbshipit-source-id: 014eea0251ea3a630aaaa75759cd492271a5c5cd
Summary:
Clean up some of the calls to `ui.log` and how they appear in blackbox logging.
* Make the names of the events consistently use `snake_case`.
* For watchman, only log once for each watchman command. Include whether or not it failed.
* Unify `fsmonitor` logging under the `fsmonitor` event.
* Omit the second argument when it is empty - it is optional and does nothing when empty.
* Increase the number of blackbox lines included in rage to 100.
Reviewed By: quark-zju
Differential Revision: D14949868
fbshipit-source-id: a9aa8251e71ae7ca556c08116f8f7c61ff472218
Summary:
On Windows, all the tests that are expecting to find some files in $CACHEDIR
would fail due to the directory not existing. Interestingly enough, printing
$CACHEDIR would print a reasonable path, which is the same as $TESTTMP.
Trying to understand this better, I passed --keep-tmpdir to run-tests and
realized that the "real" $TESTTMP was somewhere in my home directory, while
the real $CACHEDIR was in fact C:\tmp.
I haven't fully understood why, but it looks like $PWD expands to C:\tmp,
while $TESTTMP expands to something else.
Reviewed By: quark-zju
Differential Revision: D15041274
fbshipit-source-id: 0d167183d74df5f6ab84360c5699e96808fceb9b
Summary:
The prechangegroup hook didn't have throw=True set, so if the hook
failed, we ignored it. This seems to have been the case for a long time, but we
only recently hit it.
Reviewed By: kulshrax
Differential Revision: D15038494
fbshipit-source-id: 4fa9ed4924c02732e3e4070e747a80fbe63564c9
Summary:
The warning isn't that useful, and can actually cause more harm than good, as running `hg prefetch -r .`
can download gigabytes of unnecessary data to the hgcache.
Reviewed By: quark-zju
Differential Revision: D14999458
fbshipit-source-id: b0ff2c2ad0e441622066fac10a5efafe8de588db
Summary:
This test was broken by D14971701 on OSX because it has a case
insensitive filesystem.
Reviewed By: kulshrax
Differential Revision: D14986692
fbshipit-source-id: a2a924d7aae4f3b96e7691e824a82087c1ff8513
Summary:
The `if dest` test fails when infinitepush changes `dest` to non-None.
Fix it by also checking if `dest` matches `default-push` (or `default`).
Reported by: fryfrog
Reviewed By: markbt
Differential Revision: D14965995
fbshipit-source-id: 91e68368eda4457d06059387307a9572bc6d2906
Summary:
The way this command was implemented before this change would collapse
existing case collisions into a single value of the `lowerdirlist` dict.
The value that was chosen would be dependent on the traversal ordering.
If this value would be equivalent to the newly-introduced file, we would
miss the collision.
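The fixed check can be sketched like this (the function name and call shape are hypothetical; the point is keeping *all* paths per lowercased key instead of collapsing them to one):

```python
def find_case_collisions(existing, new_file):
    # Map each lowercased name to every existing path with that
    # case-insensitive name, so no collision is hidden by traversal order.
    lowerdirlist = {}
    for path in existing:
        lowerdirlist.setdefault(path.lower(), []).append(path)
    matches = lowerdirlist.get(new_file.lower(), [])
    # Any existing path differing only by case collides with the new file,
    # even when one of the matches is spelled exactly like it.
    return [p for p in matches if p != new_file]
```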
Reviewed By: markbt
Differential Revision: D14971701
fbshipit-source-id: d352e96e512154d92fe6bc49dd76aec63b954fef
Summary:
Change the fastlog approach to patch the "follow" revset function instead of
"getlogrevs". This makes it more general-purpose, so it can speed up commands
like `hg log -r 'reverse(follow("dir"))'`.
Note: This will regress `log dir` performance, which will be fixed in the next
diff.
Reviewed By: sfilipco
Differential Revision: D14764074
fbshipit-source-id: c2a4c8e91359d971e6ea668e5ff1f0ab6eb0534c
Summary:
The "hg log" implementation chooses to not use the "follow" revset when logging
a directory, but use "_matchfiles" instead. In an upcoming change, we'd like
"follow" to handle directories so fastlog only needs to patch the "follow"
revset.
The "follow" revset can take a directory pattern just fine. The problem is
"follow" will follow *every* file inside the specified directory, which is
quite expensive.
For now, I just moved the "_matchfiles" fallback path to "follow" so when it
detects there are too many files to follow, it will switch to "_matchfiles"
directly.
In theory, tree manifest repos would have "tree manifest" information that
speeds up "follow" on a directory. But that's a bigger change, and it would
probably be very slow in our setup because our trees are lazy.
This changes some behaviors subtly, as reflected in tests:
- `-f path` can use DAG range instead of rev range, which is a good thing as
rev range does not make much sense to end-users. This removes a "BUG" in
test-commit-amend.t
- `-f dir` can follow renames if the directory contains just a few files.
This looks like a good thing, and is reflected in `test-log.t`.
Reviewed By: sfilipco
Differential Revision: D14863134
fbshipit-source-id: 99ddff46d43f63ce03dc7bf005e3ac1cb9b39d03
Summary:
With upcoming changes, I noticed `limit(reverse(:.) & follow("path:fbcode/scm/hg"), 10)`
is much slower than `limit(reverse(::.) & follow("path:fbcode/scm/hg"), 10)`. I tracked
it down to the fact that spanset introduces a lot of unnecessary checks. Optimize it
by avoiding spanset in this case.
The revset pattern is used by `hg log`, the `reverse(:.)` part is to "define" the
order. Perhaps we should replace it with `reverse(::.)`. But that's a BC change that
might have some unwanted side effects.
Reviewed By: sfilipco
Differential Revision: D14863135
fbshipit-source-id: 6ba8a02b58e1109bdf8370f03965a3b302cba6c0
Summary:
`ui.log` expects to be called with valid format arguments. If the arguments
are not a valid format string, or the number of arguments doesn't match the
number of format placeholders, formatting will fail.
In this case, catch the exception and fail gracefully. Don't even bother
formatting if there is exactly one argument.
The `blackbox` extension already does this, so extend to the `sampling`
extension.
Also fix the place where `perftrace` calls `ui.log` with a string that might
contain formatting placeholders.
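The defensive formatting can be sketched as below; the helper name is hypothetical, but the two rules match the summary: never treat a lone message as a format string, and fall back gracefully on a mismatch:

```python
def safe_format(msg, *args):
    # A single argument is never treated as a format string.
    if not args:
        return msg
    try:
        return msg % args
    except (TypeError, ValueError):
        # Bad format string or wrong number of placeholders: degrade to a
        # plain join instead of raising.
        return " ".join([msg] + [repr(a) for a in args])
```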
Reviewed By: quark-zju
Differential Revision: D14938952
fbshipit-source-id: 1d9802308dba925109c018124d51273c348526b4
Summary:
Looking at the Hgerrors scuba table, I noticed that a lot of the sandcastle
machines had repack failures due to "No such file or directory". I'm suspecting
that's due to not having a local store to repack, and therefore listing of
files to repack would fail. Let's verify that the directory is present before
repacking to avoid this issue.
Reviewed By: quark-zju
Differential Revision: D14906503
fbshipit-source-id: 98fbe57310511df4fc9856bf71f836adefb3d855
Summary:
Looking at the Hgerror scuba table, I see a lot of failures due to ENOENT on
Sandcastle. I'm suspecting this has to do with Sandcastle not having a local
manifest.
Reviewed By: quark-zju
Differential Revision: D14906506
fbshipit-source-id: a5d3eec824168e78ce3146dbde2d2bbbed1702f9
Summary:
This diff adds escaping of arguments with single quotes before sending them to hooks.
Previously, all arguments were joined with a space character, which produced incorrect results when an argument itself contained a space.
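The quoting can be sketched as below; this mirrors standard POSIX single-quote escaping (as in `shlex.quote`) rather than the exact implementation in this diff:

```python
def quote_args(args):
    quoted = []
    for arg in args:
        # Wrap in single quotes; an embedded single quote is handled by
        # closing the quote, emitting an escaped quote, and reopening.
        quoted.append("'" + arg.replace("'", "'\\''") + "'")
    return " ".join(quoted)
```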
Reviewed By: markbt
Differential Revision: D14799188
fbshipit-source-id: df5a4324d138515a4b881df96f2991de03df7a5b
Summary:
Add `shelvename` template keyword, which expands to the name of the shelve for
commits that contain shelves.
Reviewed By: farnz
Differential Revision: D14932985
fbshipit-source-id: cddebd2dbc6454f7c61ed296f37822179da8a2de
Summary:
It makes this method 25-30% faster (shaves off 250-300 ms).
It also counts the number of fetched rows correctly: the `fetchall` method was
overridden, but it looks like the `__iter__` method wasn't.
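The undercounting bug can be illustrated with a toy cursor wrapper (not the actual class): counting only in `fetchall` misses every row consumed through iteration, so both paths must be instrumented:

```python
class CountingCursor:
    def __init__(self, rows):
        self._rows = list(rows)
        self.fetched = 0

    def fetchall(self):
        # The originally instrumented path.
        self.fetched += len(self._rows)
        return self._rows

    def __iter__(self):
        # The path that was missing instrumentation: count each row as it
        # is yielded.
        for row in self._rows:
            self.fetched += 1
            yield row
```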
Reviewed By: ikostia
Differential Revision: D14915472
fbshipit-source-id: 313695c1a83d05dac2fc801792226b6b64539cb5
Summary: This test was failing because mercurial wants the file paths to be valid utf-8.
Reviewed By: singhsrb
Differential Revision: D14924604
fbshipit-source-id: be2db5c437df77ad3ad70f6956451e4a03835378
Summary:
Autorels attempts to detect the scenario where `P -> Q` and `X -> Y` are
being added, and there already exists a `P -> ... -> X` relationship.
In this case it will create a `Q -> Y` "copy" marker to express the fact
that `Q` should be copied.
However, this also triggers in the case where `Q == Y`, creating a revive
marker for `Q`.
Normally this is benign, as Q is probably visible anyway. However, when there
are two commits associated with a diff that has been landed, pullcreatemarkers
can create two markers: `P -> L` and `X -> L`. Since P and X are for the same
diff, there probably exists a `P -> ... -> X` relationship, and so autorels
attempts to make an `L -> L` marker. This fails because L is public.
Differential Revision: D14891063
fbshipit-source-id: 3f076a003508dd7b7d17e3eb7cdaeb8ac09e6b15
Summary:
Demonstrate that a combination of autorels and pullcreatemarkers causes an
attempt to obsolete a public commit.
Differential Revision: D14891064
fbshipit-source-id: 29f5cea9c843cc87aef18f74bad11eaabfa7b311
Summary:
The `tglogm` test function displays a graph log with mutation information.
Use this common function in all tests.
Differential Revision: D14876688
fbshipit-source-id: 2eb29a45b6267d448d292ac13dbfb0135d6fc8e4
Summary:
Add support for explicit visibility tracking in commit cloud sync.
This means commit cloud reads the visibleheads and syncs these with the commit
cloud heads directly, removing the source of problems where obsmarkers disagree
on different hosts.
Commit cloud requires that the ordering of heads is maintained to get stable
ordering of new commits. Update the visibleheads tracking to maintain
ordering, rather than using sets.
Finally, the calculation of the replacement node was slightly off. This was
revealed in the new test case that is being added, so it is also fixed.
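Maintaining head ordering while still deduplicating can be sketched with a hypothetical helper (not the actual visibility code):

```python
def add_heads(heads, new_heads):
    # Keep insertion order (unlike a set) so commit cloud sees a stable
    # ordering of heads, while still dropping duplicates.
    seen = set(heads)
    out = list(heads)
    for h in new_heads:
        if h not in seen:
            seen.add(h)
            out.append(h)
    return out
```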
Differential Revision: D14876266
fbshipit-source-id: fe5b6bffd196d3bd74e7582e29484969495eac8e
Summary:
The computation of whether a commit is obsolete or not can be improved.
We can cache which commits are known to not be obsolete.
We can also have a cache for each filter type so that we only need to compute
obsolete nodes that match the filter.
Finally, when we need to compute all obsolete commits, we can start by looking
for commits which are made obsolete by only their closest successors, and then
filling back obsolescence to the predecessors of these obsolete commits.
Reviewed By: DurhamG
Differential Revision: D14858655
fbshipit-source-id: 1d03e214ad878ecb6ae548f80373702e2a184146
Summary: The `absorb` command should also record mutation information when it modifies commits.
Reviewed By: DurhamG
Differential Revision: D14871232
fbshipit-source-id: 46bc95b7f5781f0b5f5e057a34c755fcfe653f7e