When converting a particularly large Perforce changelist (especially one with
big files), it is quite likely that an intermittent network issue (e.g.
WSAECONNRESET or WSAETIMEDOUT) will occur while getting one of the files,
aborting the conversion of the entire changelist. That can be quite
unfortunate, since you might have waited hours getting all the other files. To
mitigate this, let's attempt to get the file one more time, re-raising the
original exception if that attempt fails.
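For illustration, a minimal retry wrapper in the spirit of this change (the
function name and the use of IOError are hypothetical, not the convert
extension's actual API):

  def getfilewithretry(getfile, name, rev):
      # retry once; if the retry also fails, re-raise the original error
      try:
          return getfile(name, rev)
      except IOError as firsterr:
          try:
              return getfile(name, rev)
          except IOError:
              raise firsterr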
Before this patch, the backups to discard at 'hg unshelve' and the like were
decided by the steps below:
1. list a '(st_mtime, filename)' tuple for each backup,
2. sort the list of these tuples, and
3. discard all backups other than the 'maxbackups' ones at the end of the list
This doesn't work well when:
- the "sort by name" order differs from the actual backup order, and
- some backups have the same timestamp
For example, 'test-shelve.t' satisfies the former condition:
- 'default-01' < 'default-1' in "sort by name" order
- 'default-1' < 'default-01' in actual backup order
Then 'default-01' is unexpectedly discarded instead of 'default-1' if they
have the same timestamp. This failure appears only occasionally, because the
crucial "same timestamp" condition is timing critical.
To avoid such unexpected discarding, this patch keeps old backups whose
timestamps can't decide their exact order.
The timestamp of the border backup (= the oldest of the most recent
'maxbackups' ones), taken as 'bordermtime', is used to examine whether
timestamps can decide the exact order of backups.
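Roughly, the new pruning policy looks like the following sketch (simplified,
not the exact shelve code):

  def prunebackups(backups, maxbackups):
      # backups is a list of (st_mtime, filename) tuples
      files = sorted(backups)
      bordermtime = -1
      if 0 < maxbackups < len(files):
          # timestamp of the oldest backup that will be kept
          bordermtime = files[-maxbackups][0]
      todelete = []
      for mtime, name in files[:max(0, len(files) - maxbackups)]:
          if mtime == bordermtime:
              # timestamps cannot prove this one is older than the
              # kept backups, so keep it too
              continue
          todelete.append(name)
      return todelete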
Many 3rd party consumers of Mercurial have created wrappers to
essentially perform clone+share as a single operation. This is
especially popular in automated processes like continuous integration
systems. The Jenkins CI software and Mozilla's Firefox release
automation infrastructure have both implemented custom code that
effectively performs clone+share. The common use case here is that
clients want to obtain N>1 checkouts while minimizing disk space and
network requirements. Furthermore, they often don't care that a clone
is an exact mirror of a remote: they are simply looking to obtain
checkouts of specific revisions.
When multiple third parties implement a similar feature, it's a good
sign that the feature is worth adding to the core product. This patch
adds support for an easy-to-use clone+share feature.
The internal "clone" function now accepts options to control auto
sharing during clone. When the auto share mode is active, a store will
be created/updated under the base directory specified and a new
repository pointing to the shared store will be created at the path
specified by the user.
The share extension has grown the ability to pass these options into
the clone command/function.
No command line options for this feature are added because we don't
feel the feature will be popular enough to warrant their existence.
There are two modes of operation for auto sharing. In the default mode, the shared
repo is derived from the first changeset (rev 0) in the remote
repository. This enables related repositories existing at different URLs
to automatically use the same storage. In environments that operate
several repositories (separate repo for branch/head/bookmark or separate
repo per user), this has the potential to drastically reduce storage
and network requirements. In the other mode, the name is derived from the
remote's path/URL.
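With the share extension enabled, the feature is driven purely by
configuration. Assuming the option names end up as share.pool and
share.poolnaming (a sketch, not necessarily the final names), a setup could
look like:

  [extensions]
  share =

  [share]
  # base directory under which shared stores are created/updated
  pool = /path/to/pool
  # "identity": derive the store name from the first changeset (rev 0)
  # "remote": derive it from the remote's path/URL
  poolnaming = identity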
This creates the convert.hg.sourcename config option, which embeds a
user-defined name into each commit created by the conversion. This is useful
when using the convert extension to merge several repositories together and we
want to record where each commit came from.
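For example, a merge-style conversion could look like this (repository names
are illustrative):

  hg convert --config convert.hg.sourcename=projectA projectA merged-repo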
Previously convert could only take one '--rev'. This change allows the user to
specify multiple --rev entries. For instance, this could allow converting
multiple branches (but not all branches) at once from git.
In this first patch, we disable support for this for all sources. Future
patches will enable it for select sources (like git).
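Once support is enabled for a source such as git, usage would look something
like this (branch names are illustrative):

  hg convert --rev branch1 --rev branch2 git-repo hg-repo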
Mercurial uses tags pointing to the null revision to mark deletions, but
convert was silently discarding these because it had no mapping for them.
Thus, it was resurrecting deleted tags.
The check in the inrebase function was not correct: it failed to
consider the situation in which nothing had been rebased yet *and*
the working directory had been updated away from the initial revision.
But this is easy to fix. Given the rebase state, we know exactly where
we should be standing: on the first unrebased commit. We check that
instead. I also took the liberty to rename the function, as "inrebase"
doesn't really describe the situation: we could still be in a rebase
state yet the user somehow forcibly updated to a different revision.
We also check that we're in a merge state, since an interrupted merge
is the only "safe" way to interrupt a rebase. If the rebase got
interrupted by power loss or whatever (so there's no merge state),
it's still safer to not blow away the working directory.
If histedit failed after all the rules were complete (for instance, if there was
an exception in the cleanup phase), you couldn't --continue because it was
unable to pop a rule. Let's just skip the rule execution phase of --continue if
there are no more rules.
If the histedit backupfile was None (like if evolve is enabled) it would get
serialized as 'None' into the state file. Later if the histedit was aborted and
the topmost commit was unreachable (e.g. if it was obsolete or stripped),
histedit would try to unbundle the backupfile and try to read .hg/None.
This fixes it to not serialize None. Since it only happens with evolve, I'm not
sure how to add test coverage here.
--edit-plan was completely broken from the command line because it used an old
api that was not updated (it would crash with a stack trace). Let's update it
and add tests to catch this.
According to the recent "filectx.data()" implementation, "censor.policy"
should be set to "ignore" to ignore errors for censored files:
"censor.allow" appears to be an outdated name.
While fixing issue4304: "record: allow editing new files" we introduced
changes in record/crecord. These changes need to be matched with changes in any
command using record. Shelve is one of these commands and the changes have
not been made for this release. Therefore, shelve -i should be an experimental
feature for this release.
`hg pull` takes an optional "source" argument to define the path/url to
pull from. Under some circumstances, this option could get proxied to
rebase and interpreted as the --source argument to rebase, leading to
unexpected behavior.
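The idea of the fix, as a rough sketch (not the rebase extension's actual
wrapper code; assume 'orig' is the wrapped pull command):

  from hgext.rebase import rebase

  def pullrebase(orig, ui, repo, *args, **opts):
      # make sure pull's "source" argument is not forwarded to rebase,
      # where it would be read as rebase's --source
      if opts.get('rebase'):
          rebaseopts = dict(opts)
          rebaseopts.pop('source', None)
          ret = orig(ui, repo, *args, **opts)
          rebase(ui, repo, **rebaseopts)
          return ret
      return orig(ui, repo, *args, **opts)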
In my local environment, "source" always appears in "opts" in
pullrebase. However, when attempting to write a test, I couldn't reproduce
this. Instead, the source is being captured as a positional argument in
"args." I suspect an interaction between **kwargs and an extension is to
blame for the differences in behavior. This is why no test has been
written.
I have tested behavior locally and the patch has the intended
side-effect of making `hg pull --rebase` work again.
We've had a couple reports of subversion tracebacks that trigger when
the parents list is empty, and thus block showing what the commit
failure was on the next two lines.
We want to capture hooks output during bundle2 processing. For this purpose we
introduce a new 'subproc' argument to 'ui.pushbuffer'. When set, the output of
subprocesses created through 'ui.system' will be captured in the buffer too.
This will be used in the next changeset.
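Given a ui object, usage looks roughly like this (only the new 'subproc'
keyword comes from this change):

  ui.pushbuffer(subproc=True)
  try:
      # output written by the spawned command is captured in the buffer
      ui.system('some-hook-command')
  finally:
      output = ui.popbuffer()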
Commit 960f8ca79ab1 broke histedit's rollup by causing it to open the editor.
Turns out I missed a spot where the rollup option was read.
This fixes that and adjusts the test to catch this case.
The error handling here is quite byzantine. self._apply raises an
AbortNoCleanup, but self.apply was swallowing the exception and
returning 2. In self.push, we catch all exceptions and clean up. We
try to print a message while cleaning up, but that relies on having a
top-of-stack.
Instead, we re-raise the abort in self.apply, and avoid cleanup on
AbortNoCleanup in self.push by adding a trivial new except clause. We
also modernize the now-visible abort message.
The fileset-generated.t test previously failed with this:
+ hg: parse error: unknown identifier: .hglf/modified
+ (did you mean 'modified'?)
+ [255]
Filesets will find the standins on their own, without any help. While that's
useful for some things like modified(), clean(), etc., it is wrong for things
like size(). Proper fileset support for largefiles is not trivial, but this was
failing with just the extension enabled on a normal repo.
The immediate crash was when checking for requirements right after this,
but lfcommands.downloadlfiles() will also crash if --all-largefiles is
specified. That has been in place since at least e01343f7da6f (2.3-rc) without
anyone noticing.
I can't tell from the peer classes if there's a way to make the custom
largefile functionality work in this case, but at least it no longer crashes.
The existing state serialization format assumed the rule line consisted of an
action and a hash. In our external extension that adds 'exec' this is not the
case (there is no hash, just the shell command). So let's change the format to
be more generic with just an action and a remainder, and the various commands
can handle it as they wish.
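Concretely, parsing a rule line becomes something like this (a sketch, not the
exact code):

  def parserule(line):
      # "<action> <remainder>"; the remainder may be a hash, a shell
      # command, or anything else the action knows how to handle
      if ' ' in line:
          action, remainder = line.split(' ', 1)
      else:
          action, remainder = line, ''
      return action, remainder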
Flagged for stable since we want to get this format tweak in before the new
format goes live in the release.
mergeupdate already set the flag to update all files. This will thus only
change overriderevert and scmutilmarktouched, where the flag effectively was
already true. The test coverage thus shows no change.
As the flag is always set, it is removed.
This is mainly a change for keeping the code simple and consistent and correct,
but it should also make it faster in many cases.
Before, a --clean update with largefiles would use the "optimization" that it
didn't read hashes from standin files before and after the update. Instead of
trusting the content of the standin files, it would rehash all the actual
largefiles that lfdirstate reported clean and update the standins that didn't
have the expected content. It could thus, in some "impossible" situations,
automatically recover from some "largefile got out of sync with its standin"
issues (even though there apparently still were weird corner cases where it
could fail). This extra checking is similar to what core --clean intentionally
does not do, and it made update --clean unbearably slow.
Usually in core Mercurial, --clean will rely on the dirstate to find the files
it should update. (It is thus intentionally possible (when trying to trick the
system or if there should be bugs) to end up in situations where --clean will
not restore the working directory content correctly.) Checking every file when
we "know" it is ok is however not an option - that would be too slow.
Instead, trust the content of the standin files. Use the same logic for --clean
as for linear updates and trust the dirstate and that our "logic" will keep
them in sync. It is much cheaper to just rehash the largefiles reported dirty
by a status walk and read all standins than to hash all the largefiles.
Most of the changes are just a change of indentation now when the different
kinds of updates no longer are handled that differently. Standins for added
files are, however, only written when doing a normal update, while deleted and
removed files will only be updated for --clean updates.
This allows passing a matcher down the pathcopies() stack to _forwardcopies().
This will let us add logic in a later patch to avoid tracing copies when not
necessary (like when doing hg diff -r 1 -r 2 foo.txt).
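Given a repo object, a caller could then do something like this (a sketch;
assumes the new optional match argument):

  from mercurial import copies, scmutil

  # only trace copies relevant to foo.txt, as for `hg diff -r 1 -r 2 foo.txt`
  m = scmutil.match(repo[2], ['foo.txt'])
  renames = copies.pathcopies(repo[1], repo[2], match=m)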
Previously the fold action would inspect its class to figure out whether it
was a rollup or not. This was hacky. Now that finishfold is inside the fold
class, let's modify it to check a function (which roll can override) to
determine whether it should prompt for a commit message.
This converts the fold/roll actions into a histeditclass instance, as part of an
ongoing effort to refactor histedit for maintainability and robustness.
The tests changed for two reasons:
1) We get a new 'empty changeset' warning because we now warn more consistently
between normal histedit and --continue about commits disappearing.
2) Previously we were not putting the histedit-source extra field on the
temporary fold commit during normal runs, but we were on --continue runs. By
unifying these code paths we now consistently put histedit-source on the
temporary fold commit, which changes some of the hashes in the backup bundles.
This converts the pick action into a histeditclass instance, as part of an
ongoing effort to refactor histedit for maintainability and robustness.
The test output changed because previously pick would only report the commit
disappearing if there were no merge conflicts. Now that we've unified the normal
and the --continue flows, they act the same and we get the warning even after
--continue.
This takes the newly added histeditaction class and integrates it into the
histedit run and bootstrapcontinue logic. No actions implement the class yet,
but they will be added in upcoming patches.
This adds a new class called histeditaction. It represents a single action in a
histedit plan. Future patches will integrate it into the histedit flow, then
convert each existing action into the class form one by one.
This is part of a larger refactor aimed at increasing histedit robustness,
maintainability, and extensibility.
Previously we would not allow --continue if the current working copy parent was
not a descendant of the commit produced by the previous histedit step. There's
nothing really blocking us from continuing the histedit in this situation, so
let's stop aborting in this case.
This is technically a BC change, but it is making things more forgiving so I
think it's ok.
In the future we will likely add an 'exec' action to histedit, which means the
user can do whatever they want during the middle of a histedit (like perhaps
calling 'hg update'), which means we'll need to support this case eventually
anyway.
It's possible for the user to delete some of the commits they started with
during a histedit, and aborting the histedit doesn't bring them back. Let's
store a backup bundle so we can always recover the stack of commits from before
they began.
Normal updates automatically clean up the resolve state, but strip
--keep does a "manual" update that bypasses the normal machinery. This
adds a mergestate reset.
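Given a repo object, a rough sketch of the addition (assuming the mergestate
API of this era):

  from mercurial import merge as mergemod

  # after the manual working-directory update done by strip --keep,
  # discard any leftover resolve/merge state
  ms = mergemod.mergestate(repo)
  ms.reset()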