Because the original dumbhttp.py exited without waiting for listen(), several
tests could fail with "abort: error: Connection refused" if the subsequent hg
command is fast enough.
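For illustration, a minimal sketch of the race being avoided (era-appropriate Python 2 stdlib, not the actual dumbhttp.py): HTTPServer's constructor already binds and listens, so readiness should only be reported after the server object exists.

  import BaseHTTPServer
  import SimpleHTTPServer

  httpd = BaseHTTPServer.HTTPServer(
      ('localhost', 0), SimpleHTTPServer.SimpleHTTPRequestHandler)
  # The socket is bound and listening as soon as the constructor returns,
  # so only now do we report readiness; a fast follow-up hg command will
  # connect instead of getting "Connection refused".
  print('listening on port %d' % httpd.server_address[1])
  httpd.serve_forever()
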
It seems like the last known misbehavior that the comment was referring
to was dealt with in 708ed580cbea (revert: properly back up added
files with local modification, 2014-08-31).
Further digging into this issue shows that the limit on the sample size used in
discovery never works for heads. Here is a quote from the code itself:
  desiredlen = size - len(always)
  if desiredlen <= 0:
      # This could be bad if there are very many heads, all unknown to the
      # server. We're counting on long request support here.
The long request support never landed, and evolution makes the "very many heads,
all unknown to the server" case quite common.
We implement a simple and stupid hard limit on the sample size for all queries.
This should prevent HTTP 414 errors with the current state of the code.
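One way to implement such a cap (the helper name and the random subsetting are illustrative, not necessarily the shipped code):

  import random

  def _limitsample(sample, desiredlen):
      # hard cap: never put more than desiredlen revisions into one query
      if len(sample) > desiredlen:
          sample = set(random.sample(list(sample), desiredlen))
      return sample
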
History rewriting commands like histedit tend to use temporary
commits. They may schedule hook execution on these temporary commits
for after the lock has been released. But temporary commits are likely
to have been stripped before the lock is released (and the hook run).
Hooks executed for missing revisions lead to various crashes. We disable hook
execution for revisions missing from the repo. This provides a dirty but simple
fix for these user issues.
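For illustration, a standalone toy of the guard with made-up names (the real code works on repo objects and scheduled hook calls):

  def run_pending_hooks(existing_nodes, pending):
      # pending: (hookname, node) pairs scheduled before the lock release
      for hookname, node in pending:
          if node is not None and node not in existing_nodes:
              # the temporary commit was stripped: skip its hook instead
              # of crashing on a missing revision
              continue
          print('running %s hook for %s' % (hookname, node))

  run_pending_hooks({'abc123'}, [('commit', 'abc123'), ('commit', 'tmp999')])
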
On streaming clone, we were priming the local branch cache with the
remote branchmap, without checking which heads were closed.
This fixes an issue introduced in:
  changeset:   17740:f8d7aaf86507
  user:        Tomasz Kleczek <tomasz.kleczek@fb.com>
  date:        Wed Oct 03 13:19:53 2012 -0700
  summary:     branchcache: fetch source branchcache during clone (issue3378)
that was exposed in 2.9 by:
  changeset:   20192:6c385e85aa05
  user:        Brodie Rao <brodie@sf.io>
  date:        Mon Sep 16 01:08:29 2013 -0700
  summary:     branches: simplify with repo.branchmap().iterbranches()
8a92e6790099 broke bookmarks getting copied during uncompressed clones. Since
most of the pull logic has been moved into exchange.py, let's just call
exchange.pull to fix up the repo with the latest bits after the streaming clone
has bootstrapped the repo. This keeps us from having to duplicate the bookmark
logic.
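Roughly the following, where destrepo and srcpeer stand for the freshly cloned local repo and the remote peer (a sketch of the shape of the call, not the exact code):

  from mercurial import exchange

  # after the stream clone has populated destrepo, run a normal pull so
  # bookmarks (and any other late additions) come through the usual path
  exchange.pull(destrepo, srcpeer)
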
The matcher variable 'm' in checkstatus() is reset to None on each
call, so the caching of the matcher no longer happens as it was
intended. This seems to be a regression in 6b9fbae54476 (revset: added
lazyset implementation to checkstatus, 2014-01-03).
Fix by moving the cached matcher into the enclosing function so it's
actually cached across calls. This speeds up
hg log -r 'modifies(mercurial/context.py)' >/dev/null
from 7.5s to 4s.
Also see similar fix in 5ff5c5c9e69f (revset: avoid recalculating
filesets, 2014-10-22).
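The pattern, reduced to a standalone toy (the real code caches a match object built from the pattern, not a string):

  def checkstatus(pattern):
      cache = []                      # lives in the enclosing function

      def matches(rev):
          # build the expensive matcher once and reuse it on later calls
          if not cache:
              cache.append('matcher for ' + pattern)
          return cache[0]

      return matches

  m = checkstatus('mercurial/context.py')
  print(m(1) is m(2))                 # True: the matcher is reused
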
0cc5c10d5dc7 was not the final version of that patch. It was really slow
because `l not in repo.changelog` iterates revisions up to `l`. Instead,
rev() should utilize spanset.__contains__().
revset #0: rev(210000)
0) wall 0.000039 comb 0.000000 user 0.000000 sys 0.000000 (best of 67978)
1) wall 0.002721 comb 0.000000 user 0.000000 sys 0.000000 (best of 1055)
2) wall 0.000059 comb 0.000000 user 0.000000 sys 0.000000 (best of 45599)
(0: 3.2-rc, 1: 0cc5c10d5dc7, 2: this patch)
Note that the benchmark result described in 0cc5c10d5dc7 is wrong because
it is that of the initial version.
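A standalone toy of the difference (these classes only mimic the relevant behavior of spanset and the changelog; they are not the real revset code):

  class toyspanset(object):
      def __init__(self, start, end):
          self.start, self.end = start, end
      def __contains__(self, rev):
          return self.start <= rev < self.end      # constant-time check

  class toychangelog(object):
      def __init__(self, length):
          self.length = length
      def __iter__(self):                          # no __contains__, so
          return iter(range(self.length))          # `in` scans from zero

  print(210000 in toyspanset(0, 300000))           # fast range comparison
  print(210000 in toychangelog(300000))            # walks 210000 entries
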
'n' was introduced into Mercurial in 5d1adb6683fa and broke Python 2.4 support
in mysterious ways; the only visible failure was in test-glog.t. Py_BuildValue
failed because of the unknown format character and a TypeError was thrown ...
but it never showed up on the Python side, and processing happily continued
with wrong data.
Quoting https://docs.python.org/2/c-api/arg.html :
  n (integer) [Py_ssize_t]
      Convert a Python integer or long integer to a C Py_ssize_t.
      New in version 2.5.

  k (integer) [unsigned long]
      Convert a Python integer or long integer to a C unsigned long without
      overflow checking.
This will use unsigned long instead of Py_ssize_t. That is not a good solution,
but good is not an option when we have to support Python 2.4.
The old revset had pretty terrible performance on large repositories (12+
seconds). This new revset achieves the same result in only 0.7s. As we improve
the underlying revset APIs we can probably get this revset down to 'only(base,
dest)::', but at the moment that version still takes 2s.
This addresses the bug described in issue4405: when obsolescence markers are
enabled, amending a commit with a file move can lead to the copy information
being lost.
However, the bug is more general and can be reproduced without obsmarkers as
well, as demonstrated by Pierre-Yves and put into the updated test.
Specifically, graph topology divergences between the filelogs and the changelog
can cause copy information to be lost during amends.
In light of the POODLE[0] attack on SSLv3, let's just drop the ability to
use anything older than TLSv1 entirely.
This only fixes the client side. Another commit will fix the server
side. There are still a few SSLv[23] constants hiding in httpclient,
but I'll fix those separately upstream and import them when we're not
in a code freeze.
0: http://googleonlinesecurity.blogspot.com/2014/10/this-poodle-bites-exploiting-ssl-30.html
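On the client this boils down to something like the following sketch using the stdlib ssl module of that era (not the exact sslutil.py code; host and options are illustrative):

  import socket
  import ssl

  raw = socket.create_connection(('example.org', 443))
  # PROTOCOL_TLSv1 refuses to negotiate SSLv2/SSLv3 with the peer
  sock = ssl.wrap_socket(raw, ssl_version=ssl.PROTOCOL_TLSv1)
  print(sock.cipher())
  sock.close()
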
If an exception is raised during a bundle2 part payload generation, it is now
recorded in the bundle. When such an exception occurs, we capture it, transmit
an abort exception through the bundle, cleanly close the current part payload,
and raise it again. This allows generating a valid bundle even in the case of an
exception, so that the consumer does not wait forever for a dead producer. It
also allows raising the exception during unbundling at the exact point it
happened during bundling, making debugging easier.
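The mechanism, reduced to a standalone toy (this is not the real bundle2 wire format, just the shape of the idea):

  def generatepayload(chunks):
      # producer: turn a crash mid-stream into an explicit abort record
      try:
          for chunk in chunks:
              yield ('data', chunk)
      except Exception as exc:
          yield ('abort', str(exc))        # the stream stays well formed

  def consumepayload(stream):
      for kind, payload in stream:
          if kind == 'abort':
              # re-raised at the exact point it happened while bundling
              raise RuntimeError('remote error: %s' % payload)
          print('got %s' % payload)

  def producer():
      yield 'one'
      raise ValueError('disk on fire')

  try:
      consumepayload(generatepayload(producer()))
  except RuntimeError as err:
      print(err)
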
It is now possible to emit a single part in the middle of a payload production.
This part will be processed with limitations (only access to a `ui` object). The
goal is to let the server raise exceptions and emit output while a part is being
processed. The original motivation is to transmit exceptions that occur while
generating a part.
This change was the motivation for bumping the bundle2 format from HG2X to HG2Y.
Somehow, the format bump made it into 3.2 without it, so this change goes on
stable. It is low risk as bundle2 is still disabled by default.
Previously the journal.backupfiles file was delimited by \0. Now we delimit it
using \n (same as the journal file). This allows us to change the number of
values in each line more easily, rather than relying on the count of \0's.
The transaction format will be changing a bit over the next releases, so let's
go ahead and add a version number to make backwards compatibility easier. This
whole file format was broken prior to 3.2 (see previous patch), so changing it
now is pretty low risk.
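Together, the two changes above give a layout along these lines (the version value, field names, and in-line separators here are illustrative, not the exact format):

  data = ('2\n'                      # leading format version line
          'a.txt\0backup-a.txt\n'    # one record per \n-terminated line
          'b.txt\0backup-b.txt\n')
  lines = data.splitlines()
  version, records = int(lines[0]), [l.split('\0') for l in lines[1:]]
  print(version, records)
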
The transaction backupfiles logic was broken for 'hg recover'. The file format
is XXX\0XXX\0YYY\0YYY\0, but the parser did a couple of things wrong: 1) it went
one step beyond the final \0 and tried to read past the end of the array, and
2) array[i:i+1] returns a single item, instead of the two items intended.
Added a test to catch it, which turns out to be the first actual 'hg recover'
test.
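Reduced to a standalone example, the off-by-one and the intended two-item reads look like this (the field values are placeholders):

  data = 'XXX\0XXX\0YYY\0YYY\0'
  parts = data.split('\0')       # ['XXX', 'XXX', 'YYY', 'YYY', '']
  parts.pop()                    # drop the element past the final \0
  # broken: parts[i:i + 1] yields a single item
  # intended: consume the entries two at a time
  pairs = [(parts[i], parts[i + 1]) for i in range(0, len(parts), 2)]
  print(pairs)                   # [('XXX', 'XXX'), ('YYY', 'YYY')]
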
The recent optimization of "and" operation relies on the assumption that
the rhs set does not contain invalid revisions. So rev() has to remove
invalid revisions.
This is still faster than using `.filter(lambda r: r == l)`.
revset #0: rev(25)
0) wall 0.026341 comb 0.020000 user 0.020000 sys 0.000000 (best of 113)
1) wall 0.000038 comb 0.000000 user 0.000000 sys 0.000000 (best of 66567)
2) wall 0.000062 comb 0.000000 user 0.000000 sys 0.000000 (best of 43699)
(0: 428fa22fb2d1^, 1: 3.2-rc, 2: this patch)