# localrepo.py - read/write repository class for mercurial
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from node import bin, hex, nullid, nullrev, short
from i18n import _
import repo, changegroup, subrepo
import changelog, dirstate, filelog, manifest, context
import lock, transaction, store, encoding
import util, extensions, hook, error
import match as match_
import merge as merge_
import tags as tags_
from lock import release
import weakref, stat, errno, os, time, inspect
propertycache = util.propertycache

class localrepository(repo.repository):
    capabilities = set(('lookup', 'changegroupsubset', 'branchmap'))
    supported = set('revlogv1 store fncache shared'.split())

    def __init__(self, baseui, path=None, create=0):
        repo.repository.__init__(self)
        self.root = os.path.realpath(path)
        self.path = os.path.join(self.root, ".hg")
        self.origroot = path
        self.opener = util.opener(self.path)
        self.wopener = util.opener(self.root)
        self.baseui = baseui
        self.ui = baseui.copy()

        try:
            self.ui.readconfig(self.join("hgrc"), self.root)
            extensions.loadall(self.ui)
        except IOError:
            pass

        if not os.path.isdir(self.path):
            if create:
                if not os.path.exists(path):
                    os.mkdir(path)
                os.mkdir(self.path)
                requirements = ["revlogv1"]
                if self.ui.configbool('format', 'usestore', True):
                    os.mkdir(os.path.join(self.path, "store"))
                    requirements.append("store")
                    if self.ui.configbool('format', 'usefncache', True):
                        requirements.append("fncache")
                    # create an invalid changelog
                    self.opener("00changelog.i", "a").write(
                        '\0\0\0\2' # represents revlogv2
                        ' dummy changelog to prevent using the old repo layout'
                    )
                reqfile = self.opener("requires", "w")
                for r in requirements:
                    reqfile.write("%s\n" % r)
                reqfile.close()
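                # Illustrative: for a repository created with default
                # options, .hg/requires then lists one feature name per
                # line:
                #     revlogv1
                #     store
                #     fncache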
            else:
                raise error.RepoError(_("repository %s not found") % path)
        elif create:
            raise error.RepoError(_("repository %s already exists") % path)
        else:
            # find requirements
            requirements = set()
            try:
                requirements = set(self.opener("requires").read().splitlines())
            except IOError, inst:
                if inst.errno != errno.ENOENT:
                    raise
            for r in requirements - self.supported:
                raise error.RepoError(_("requirement '%s' not supported") % r)

        self.sharedpath = self.path
        try:
            s = os.path.realpath(self.opener("sharedpath").read())
            if not os.path.exists(s):
                raise error.RepoError(
                    _('.hg/sharedpath points to nonexistent directory %s') % s)
            self.sharedpath = s
        except IOError, inst:
            if inst.errno != errno.ENOENT:
                raise

        self.store = store.store(requirements, self.sharedpath, util.opener)
        self.spath = self.store.path
        self.sopener = self.store.opener
        self.sjoin = self.store.join
        self.opener.createmode = self.store.createmode
        self.sopener.options = {}

        # These two define the set of tags for this repository.  _tags
        # maps tag name to node; _tagtypes maps tag name to 'global' or
        # 'local'.  (Global tags are defined by .hgtags across all
        # heads, and local tags are defined in .hg/localtags.)  They
        # constitute the in-memory cache of tags.
        self._tags = None
        self._tagtypes = None

        self._branchcache = None # in UTF-8
        self._branchcachetip = None
        self.nodetagscache = None
        self.filterpats = {}
        self._datafilters = {}
        self._transref = self._lockref = self._wlockref = None

    @propertycache
    def changelog(self):
        c = changelog.changelog(self.sopener)
        if 'HG_PENDING' in os.environ:
            p = os.environ['HG_PENDING']
            if p.startswith(self.root):
                c.readpending('00changelog.i.a')
        self.sopener.options['defversion'] = c.version
        return c

    @propertycache
    def manifest(self):
        return manifest.manifest(self.sopener)

    @propertycache
    def dirstate(self):
        return dirstate.dirstate(self.opener, self.ui, self.root)

    def __getitem__(self, changeid):
        if changeid is None:
            return context.workingctx(self)
        return context.changectx(self, changeid)

    def __contains__(self, changeid):
        try:
            return bool(self.lookup(changeid))
        except error.RepoLookupError:
            return False

    def __nonzero__(self):
        return True

    def __len__(self):
        return len(self.changelog)

    def __iter__(self):
        for i in xrange(len(self)):
            yield i

    def url(self):
        return 'file:' + self.root
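
    # The dict protocol above makes the repository act as a read-only
    # mapping from change identifiers to contexts; illustrative uses:
    #     repo[None]    working directory context
    #     repo['tip']   changectx of the tip revision
    #     len(repo)     number of revisions
    #     rev in repo   True if lookup(rev) succeeds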

    def hook(self, name, throw=False, **args):
        return hook.hook(self.ui, self, name, throw, **args)

    tag_disallowed = ':\r\n'

    def _tag(self, names, node, message, local, user, date, extra={}):
        if isinstance(names, str):
            allchars = names
            names = (names,)
        else:
            allchars = ''.join(names)
        for c in self.tag_disallowed:
            if c in allchars:
                raise util.Abort(_('%r cannot be used in a tag name') % c)

        for name in names:
            self.hook('pretag', throw=True, node=hex(node), tag=name,
                      local=local)

        def writetags(fp, names, munge, prevtags):
            fp.seek(0, 2)
            if prevtags and prevtags[-1] != '\n':
                fp.write('\n')
            for name in names:
                m = munge and munge(name) or name
                if self._tagtypes and name in self._tagtypes:
                    old = self._tags.get(name, nullid)
                    fp.write('%s %s\n' % (hex(old), m))
                fp.write('%s %s\n' % (hex(node), m))
            fp.close()
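
        # Each line written by writetags has the form
        #     <40-digit hex node> <tag name>
        # and re-recording the old node ahead of the new one preserves
        # the tag's history in .hgtags.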

        prevtags = ''
        if local:
            try:
                fp = self.opener('localtags', 'r+')
            except IOError:
                fp = self.opener('localtags', 'a')
            else:
                prevtags = fp.read()

            # local tags are stored in the current charset
            writetags(fp, names, None, prevtags)
            for name in names:
                self.hook('tag', node=hex(node), tag=name, local=local)
            return

        try:
            fp = self.wfile('.hgtags', 'rb+')
        except IOError:
            fp = self.wfile('.hgtags', 'ab')
        else:
            prevtags = fp.read()

        # committed tags are stored in UTF-8
        writetags(fp, names, encoding.fromlocal, prevtags)

        if '.hgtags' not in self.dirstate:
            self.add(['.hgtags'])

        m = match_.exact(self.root, '', ['.hgtags'])
        tagnode = self.commit(message, user, date, extra=extra, match=m)

        for name in names:
            self.hook('tag', node=hex(node), tag=name, local=local)

        return tagnode

    def tag(self, names, node, message, local, user, date):
        '''tag a revision with one or more symbolic names.

        names is a list of strings or, when adding a single tag, names may be a
        string.

        if local is True, the tags are stored in a per-repository file.
        otherwise, they are stored in the .hgtags file, and a new
        changeset is committed with the change.

        keyword arguments:

        local: whether to store tags in non-version-controlled file
        (default False)

        message: commit message to use if committing

        user: name of user to use if committing

        date: date tuple to use if committing'''

        for x in self.status()[:5]:
            if '.hgtags' in x:
                raise util.Abort(_('working copy of .hgtags is changed '
                                   '(please commit .hgtags manually)'))

        self.tags() # instantiate the cache
        self._tag(names, node, message, local, user, date)

    def tags(self):
        '''return a mapping of tag to node'''
        if self._tags is None:
            (self._tags, self._tagtypes) = self._findtags()

        return self._tags

    def _findtags(self):
        '''Do the hard work of finding tags.  Return a pair of dicts
        (tags, tagtypes) where tags maps tag name to node, and tagtypes
        maps tag name to a string like \'global\' or \'local\'.
        Subclasses or extensions are free to add their own tags, but
        should be aware that the returned dicts will be retained for the
        duration of the localrepo object.'''

        # XXX what tagtype should subclasses/extensions use?  Currently
        # mq and bookmarks add tags, but do not set the tagtype at all.
        # Should each extension invent its own tag type?  Should there
        # be one tagtype for all such "virtual" tags?  Or is the status
        # quo fine?

        alltags = {}                    # map tag name to (node, hist)
        tagtypes = {}

        tags_.findglobaltags(self.ui, self, alltags, tagtypes)
        tags_.readlocaltags(self.ui, self, alltags, tagtypes)

        # Build the return dicts.  Have to re-encode tag names because
        # the tags module always uses UTF-8 (in order not to lose info
        # writing to the cache), but the rest of Mercurial wants them in
        # local encoding.
        tags = {}
        for (name, (node, hist)) in alltags.iteritems():
            if node != nullid:
                tags[encoding.tolocal(name)] = node
        tags['tip'] = self.changelog.tip()
        tagtypes = dict([(encoding.tolocal(name), value)
                         for (name, value) in tagtypes.iteritems()])
        return (tags, tagtypes)

    def tagtype(self, tagname):
        '''
        return the type of the given tag. result can be:

        'local'  : a local tag
        'global' : a global tag
        None     : tag does not exist
        '''

        self.tags()

        return self._tagtypes.get(tagname)

    def tagslist(self):
        '''return a list of tags ordered by revision'''
        l = []
        for t, n in self.tags().iteritems():
            try:
                r = self.changelog.rev(n)
            except:
                r = -2 # sort to the beginning of the list if unknown
            l.append((r, t, n))
        return [(t, n) for r, t, n in sorted(l)]

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self.nodetagscache:
            self.nodetagscache = {}
            for t, n in self.tags().iteritems():
                self.nodetagscache.setdefault(n, []).append(t)
        return self.nodetagscache.get(node, [])

    def _branchtags(self, partial, lrev):
        # TODO: rename this function?
        tiprev = len(self) - 1
        if lrev != tiprev:
            self._updatebranchcache(partial, lrev + 1, tiprev + 1)
            self._writebranchcache(partial, self.changelog.tip(), tiprev)

        return partial

    def branchmap(self):
        '''returns a dictionary {branch: [branchheads]}'''
        tip = self.changelog.tip()
        if self._branchcache is not None and self._branchcachetip == tip:
            return self._branchcache

        oldtip = self._branchcachetip
        self._branchcachetip = tip
        if oldtip is None or oldtip not in self.changelog.nodemap:
            partial, last, lrev = self._readbranchcache()
        else:
            lrev = self.changelog.rev(oldtip)
            partial = self._branchcache

        self._branchtags(partial, lrev)
        # this private cache holds all heads (not just tips)
        self._branchcache = partial

        return self._branchcache
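
    # Illustratively, branchmap() returns something like
    #     {'default': [node1, node2], 'stable': [node3]}
    # where each list holds the binary node ids of that branch's heads,
    # tip-most last.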

    def branchtags(self):
        '''return a dict where branch names map to the tipmost head of
        the branch, open heads come before closed'''
        bt = {}
        for bn, heads in self.branchmap().iteritems():
            tip = heads[-1]
            for h in reversed(heads):
                if 'close' not in self.changelog.read(h)[5]:
                    tip = h
                    break
            bt[bn] = tip
        return bt

    def _readbranchcache(self):
        partial = {}
        try:
            f = self.opener("branchheads.cache")
            lines = f.read().split('\n')
            f.close()
        except (IOError, OSError):
            return {}, nullid, nullrev

        try:
            last, lrev = lines.pop(0).split(" ", 1)
            last, lrev = bin(last), int(lrev)
            if lrev >= len(self) or self[lrev].node() != last:
                # invalidate the cache
                raise ValueError('invalidating branch cache (tip differs)')
            for l in lines:
                if not l:
                    continue
                node, label = l.split(" ", 1)
                partial.setdefault(label.strip(), []).append(bin(node))
        except KeyboardInterrupt:
            raise
        except Exception, inst:
            if self.ui.debugflag:
                self.ui.warn(str(inst), '\n')
            partial, last, lrev = {}, nullid, nullrev
        return partial, last, lrev
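
    # branchheads.cache layout, as parsed above (first line is a header):
    #     <hex tip node> <tip rev>
    #     <hex head node> <branch label>
    #     <hex head node> <branch label>
    #     ...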

    def _writebranchcache(self, branches, tip, tiprev):
        try:
            f = self.opener("branchheads.cache", "w", atomictemp=True)
            f.write("%s %s\n" % (hex(tip), tiprev))
            for label, nodes in branches.iteritems():
                for node in nodes:
                    f.write("%s %s\n" % (hex(node), label))
            f.rename()
        except (IOError, OSError):
            pass

    def _updatebranchcache(self, partial, start, end):
        # collect new branch entries
        newbranches = {}
        for r in xrange(start, end):
            c = self[r]
            newbranches.setdefault(c.branch(), []).append(c.node())
        # if older branchheads are reachable from new ones, they aren't
        # really branchheads. Note checking parents is insufficient:
        # 1 (branch a) -> 2 (branch b) -> 3 (branch a)
        for branch, newnodes in newbranches.iteritems():
            bheads = partial.setdefault(branch, [])
            bheads.extend(newnodes)
            if len(bheads) < 2:
                continue
            newbheads = []
            # starting from tip means fewer passes over reachable
            while newnodes:
                latest = newnodes.pop()
                if latest not in bheads:
                    continue
                minbhrev = self[min([self[bh].rev() for bh in bheads])].node()
                reachable = self.changelog.reachable(latest, minbhrev)
                bheads = [b for b in bheads if b not in reachable]
                newbheads.insert(0, latest)
            bheads.extend(newbheads)
            partial[branch] = bheads

    def lookup(self, key):
        if isinstance(key, int):
            return self.changelog.node(key)
        elif key == '.':
            return self.dirstate.parents()[0]
        elif key == 'null':
            return nullid
        elif key == 'tip':
            return self.changelog.tip()
        n = self.changelog._match(key)
        if n:
            return n
        if key in self.tags():
            return self.tags()[key]
        if key in self.branchtags():
            return self.branchtags()[key]
        n = self.changelog._partialmatch(key)
        if n:
            return n

        # can't find key, check if it might have come from damaged dirstate
        if key in self.dirstate.parents():
            raise error.Abort(_("working directory has unknown parent '%s'!")
                              % short(key))
        try:
            if len(key) == 20:
                key = hex(key)
        except:
            pass
        raise error.RepoLookupError(_("unknown revision '%s'") % key)
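
    # Summary of the resolution order in lookup(): integer revision, the
    # special names '.', 'null' and 'tip', an exact changelog match, tag
    # names, branch names, and finally an unambiguous node-id prefix.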

    def local(self):
        return True

    def join(self, f):
        return os.path.join(self.path, f)

    def wjoin(self, f):
        return os.path.join(self.root, f)

    def rjoin(self, f):
        return os.path.join(self.root, util.pconvert(f))

    def file(self, f):
        if f[0] == '/':
            f = f[1:]
        return filelog.filelog(self.sopener, f)

    def changectx(self, changeid):
        return self[changeid]

    def parents(self, changeid=None):
        '''get list of changectxs for parents of changeid'''
        return self[changeid].parents()

    def filectx(self, path, changeid=None, fileid=None):
        """changeid can be a changeset revision, node, or tag.
           fileid can be a file revision or node."""
        return context.filectx(self, path, changeid, fileid)

    def getcwd(self):
        return self.dirstate.getcwd()

    def pathto(self, f, cwd=None):
        return self.dirstate.pathto(f, cwd)

    def wfile(self, f, mode='r'):
        return self.wopener(f, mode)

    def _link(self, f):
        return os.path.islink(self.wjoin(f))

    def _filter(self, filter, filename, data):
        if filter not in self.filterpats:
            l = []
            for pat, cmd in self.ui.configitems(filter):
                if cmd == '!':
                    continue
                mf = match_.match(self.root, '', [pat])
                fn = None
                params = cmd
                for name, filterfn in self._datafilters.iteritems():
                    if cmd.startswith(name):
                        fn = filterfn
                        params = cmd[len(name):].lstrip()
                        break
                if not fn:
                    fn = lambda s, c, **kwargs: util.filter(s, c)
                # Wrap old filters not supporting keyword arguments
                if not inspect.getargspec(fn)[2]:
                    oldfn = fn
                    fn = lambda s, c, **kwargs: oldfn(s, c)
                l.append((mf, fn, params))
            self.filterpats[filter] = l

        for mf, fn, cmd in self.filterpats[filter]:
            if mf(filename):
                self.ui.debug("filtering %s through %s\n" % (filename, cmd))
                data = fn(data, cmd, ui=self.ui, repo=self, filename=filename)
                break

        return data
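
    # The (pattern, command) pairs come from the [encode] and [decode]
    # hgrc sections; a command with no registered data filter is run as
    # a shell filter via util.filter, e.g. (illustrative):
    #     [encode]
    #     **.txt = tr -d '\r'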

    def adddatafilter(self, name, filter):
        self._datafilters[name] = filter

    def wread(self, filename):
        if self._link(filename):
            data = os.readlink(self.wjoin(filename))
        else:
            data = self.wopener(filename, 'r').read()
        return self._filter("encode", filename, data)

    def wwrite(self, filename, data, flags):
        data = self._filter("decode", filename, data)
        try:
            os.unlink(self.wjoin(filename))
        except OSError:
            pass
        if 'l' in flags:
            self.wopener.symlink(data, filename)
        else:
            self.wopener(filename, 'w').write(data)
            if 'x' in flags:
                util.set_flags(self.wjoin(filename), False, True)
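
    # In wwrite() above, flags is a short string: 'l' marks a symlink
    # (data is the link target), 'x' marks the file executable.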

    def wwritedata(self, filename, data):
        return self._filter("decode", filename, data)

    def transaction(self):
        tr = self._transref and self._transref() or None
        if tr and tr.running():
            return tr.nest()

        # abort here if the journal already exists
        if os.path.exists(self.sjoin("journal")):
            raise error.RepoError(
                _("abandoned transaction found - run hg recover"))

        # save dirstate for rollback
        try:
            ds = self.opener("dirstate").read()
        except IOError:
            ds = ""
        self.opener("journal.dirstate", "w").write(ds)
        self.opener("journal.branch", "w").write(self.dirstate.branch())

        renames = [(self.sjoin("journal"), self.sjoin("undo")),
                   (self.join("journal.dirstate"), self.join("undo.dirstate")),
                   (self.join("journal.branch"), self.join("undo.branch"))]
        tr = transaction.transaction(self.ui.warn, self.sopener,
                                     self.sjoin("journal"),
                                     aftertrans(renames),
                                     self.store.createmode)
        self._transref = weakref.ref(tr)
        return tr
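
    # When the transaction closes successfully, aftertrans() performs
    # the renames listed above, turning each journal.* file into its
    # undo.* counterpart, which is the state rollback() later restores.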

    def recover(self):
        lock = self.lock()
        try:
            if os.path.exists(self.sjoin("journal")):
                self.ui.status(_("rolling back interrupted transaction\n"))
                transaction.rollback(self.sopener, self.sjoin("journal"),
                                     self.ui.warn)
                self.invalidate()
                return True
            else:
                self.ui.warn(_("no interrupted transaction available\n"))
                return False
        finally:
            lock.release()

    def rollback(self):
        wlock = lock = None
        try:
            wlock = self.wlock()
            lock = self.lock()
            if os.path.exists(self.sjoin("undo")):
                self.ui.status(_("rolling back last transaction\n"))
                transaction.rollback(self.sopener, self.sjoin("undo"),
                                     self.ui.warn)
                util.rename(self.join("undo.dirstate"), self.join("dirstate"))
                try:
                    branch = self.opener("undo.branch").read()
                    self.dirstate.setbranch(branch)
                except IOError:
                    self.ui.warn(_("Named branch could not be reset, "
                                   "current branch still is: %s\n")
                                 % encoding.tolocal(self.dirstate.branch()))
                self.invalidate()
                self.dirstate.invalidate()
                self.destroyed()
            else:
                self.ui.warn(_("no rollback information available\n"))
        finally:
            release(lock, wlock)

    def invalidate(self):
        for a in "changelog manifest".split():
            if a in self.__dict__:
                delattr(self, a)
        self._tags = None
        self._tagtypes = None
        self.nodetagscache = None
        self._branchcache = None # in UTF-8
        self._branchcachetip = None

    def _lock(self, lockname, wait, releasefn, acquirefn, desc):
        try:
            l = lock.lock(lockname, 0, releasefn, desc=desc)
        except error.LockHeld, inst:
            if not wait:
                raise
            self.ui.warn(_("waiting for lock on %s held by %r\n") %
                         (desc, inst.locker))
            # default to 600 seconds timeout
            l = lock.lock(lockname, int(self.ui.config("ui", "timeout", "600")),
                          releasefn, desc=desc)
        if acquirefn:
            acquirefn()
        return l

    def lock(self, wait=True):
        '''Lock the repository store (.hg/store) and return a weak reference
        to the lock. Use this before modifying the store (e.g. committing or
        stripping). If you are opening a transaction, get a lock as well.'''
        l = self._lockref and self._lockref()
        if l is not None and l.held:
            l.lock()
            return l

        l = self._lock(self.sjoin("lock"), wait, None, self.invalidate,
                       _('repository %s') % self.origroot)
        self._lockref = weakref.ref(l)
        return l

    def wlock(self, wait=True):
        '''Lock the non-store parts of the repository (everything under
        .hg except .hg/store) and return a weak reference to the lock.
        Use this before modifying files in .hg.'''
        l = self._wlockref and self._wlockref()
        if l is not None and l.held:
            l.lock()
            return l

        l = self._lock(self.join("wlock"), wait, self.dirstate.write,
                       self.dirstate.invalidate, _('working directory of %s') %
                       self.origroot)
        self._wlockref = weakref.ref(l)
        return l
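
    # Lock ordering: callers needing both locks take wlock() before
    # lock(), as rollback() above and commit() below do; taking them in
    # the opposite order can deadlock against another process.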

    def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist):
        """
        commit an individual file as part of a larger transaction
        """

        fname = fctx.path()
        text = fctx.data()
        flog = self.file(fname)
        fparent1 = manifest1.get(fname, nullid)
        fparent2 = fparent2o = manifest2.get(fname, nullid)

        meta = {}
        copy = fctx.renamed()
        if copy and copy[0] != fname:
            # Mark the new revision of this file as a copy of another
            # file.  This copy data will effectively act as a parent
            # of this new revision.  If this is a merge, the first
            # parent will be the nullid (meaning "look up the copy data")
            # and the second one will be the other parent.  For example:
            #
            # 0 --- 1 --- 3   rev1 changes file foo
            #   \       /     rev2 renames foo to bar and changes it
            #    \- 2 -/      rev3 should have bar with all changes and
            #                      should record that bar descends from
            #                      bar in rev2 and foo in rev1
            #
            # this allows this merge to succeed:
            #
            # 0 --- 1 --- 3   rev4 reverts the content change from rev2
            #   \       /     merging rev3 and rev4 should use bar@rev2
            #    \- 2 --- 4        as the merge base
            #

            cfname = copy[0]
            crev = manifest1.get(cfname)
            newfparent = fparent2

            if manifest2: # branch merge
                if fparent2 == nullid or crev is None: # copied on remote side
                    if cfname in manifest2:
                        crev = manifest2[cfname]
                        newfparent = fparent1

            # find source in nearest ancestor if we've lost track
            if not crev:
                self.ui.debug(" %s: searching for copy revision for %s\n" %
                              (fname, cfname))
                for ancestor in self['.'].ancestors():
                    if cfname in ancestor:
                        crev = ancestor[cfname].filenode()
                        break

            self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(crev)))
            meta["copy"] = cfname
            meta["copyrev"] = hex(crev)
            fparent1, fparent2 = nullid, newfparent
        elif fparent2 != nullid:
            # is one parent an ancestor of the other?
            fparentancestor = flog.ancestor(fparent1, fparent2)
            if fparentancestor == fparent1:
                fparent1, fparent2 = fparent2, nullid
            elif fparentancestor == fparent2:
                fparent2 = nullid

        # is the file changed?
        if fparent2 != nullid or flog.cmp(fparent1, text) or meta:
            changelist.append(fname)
            return flog.add(text, meta, tr, linkrev, fparent1, fparent2)

        # are just the flags changed during merge?
        if fparent1 != fparent2o and manifest1.flags(fname) != fctx.flags():
            changelist.append(fname)

        return fparent1

    def commit(self, text="", user=None, date=None, match=None, force=False,
               editor=False, extra={}):
        """Add a new revision to current repository.

        Revision information is gathered from the working directory,
        match can be used to filter the committed files. If editor is
        supplied, it is called to get a commit message.
        """

        def fail(f, msg):
            raise util.Abort('%s: %s' % (f, msg))

        if not match:
            match = match_.always(self.root, '')

        if not force:
            vdirs = []
            match.dir = vdirs.append
            match.bad = fail

        wlock = self.wlock()
        try:
            p1, p2 = self.dirstate.parents()
            wctx = self[None]

            if (not force and p2 != nullid and match and
                (match.files() or match.anypats())):
                raise util.Abort(_('cannot partially commit a merge '
                                   '(do not specify files or patterns)'))

            changes = self.status(match=match, clean=force)
            if force:
                changes[0].extend(changes[6]) # mq may commit unchanged files

            # check subrepos
            subs = []
            for s in wctx.substate:
                if match(s) and wctx.sub(s).dirty():
                    subs.append(s)
            if subs and '.hgsubstate' not in changes[0]:
                changes[0].insert(0, '.hgsubstate')

            # make sure all explicit patterns are matched
            if not force and match.files():
                matched = set(changes[0] + changes[1] + changes[2])

                for f in match.files():
                    if f == '.' or f in matched or f in wctx.substate:
                        continue
                    if f in changes[3]: # missing
                        fail(f, _('file not found!'))
                    if f in vdirs: # visited directory
                        d = f + '/'
                        for mf in matched:
                            if mf.startswith(d):
                                break
                        else:
                            fail(f, _("no match under directory!"))
                    elif f not in self.dirstate:
                        fail(f, _("file not tracked!"))

            if (not force and not extra.get("close") and p2 == nullid
                and not (changes[0] or changes[1] or changes[2])
                and self[None].branch() == self['.'].branch()):
                return None

            ms = merge_.mergestate(self)
            for f in changes[0]:
                if f in ms and ms[f] == 'u':
                    raise util.Abort(_("unresolved merge conflicts "
                                       "(see hg resolve)"))

            cctx = context.workingctx(self, (p1, p2), text, user, date,
                                      extra, changes)
            if editor:
                cctx._text = editor(self, cctx, subs)
            edited = (text != cctx._text)

            # commit subs
            if subs:
                state = wctx.substate.copy()
                for s in subs:
                    self.ui.status(_('committing subrepository %s\n') % s)
                    sr = wctx.sub(s).commit(cctx._text, user, date)
                    state[s] = (state[s][0], sr)
                subrepo.writestate(self, state)

            # Save commit message in case this transaction gets rolled back
            # (e.g. by a pretxncommit hook).  Leave the content alone on
            # the assumption that the user will use the same editor again.
            msgfile = self.opener('last-message.txt', 'wb')
            msgfile.write(cctx._text)
            msgfile.close()

            try:
                hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or '')
                self.hook("precommit", throw=True, parent1=hookp1,
                          parent2=hookp2)
                ret = self.commitctx(cctx, True)
            except:
                if edited:
                    msgfn = self.pathto(msgfile.name[len(self.root)+1:])
                    self.ui.write(
                        _('note: commit message saved in %s\n') % msgfn)
                raise

            # update dirstate and mergestate
            for f in changes[0] + changes[1]:
                self.dirstate.normal(f)
            for f in changes[2]:
                self.dirstate.forget(f)
            self.dirstate.setparents(ret)
            ms.reset()
        finally:
            wlock.release()

        self.hook("commit", node=hex(ret), parent1=hookp1, parent2=hookp2)
        return ret
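
    # Hook sequence for a successful commit, as wired above and in
    # commitctx(): 'precommit', then 'pretxncommit' (inside the open
    # transaction, either may veto by raising), then 'commit' once the
    # transaction has closed.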

    def commitctx(self, ctx, error=False):
        """Add a new revision to current repository.
        Revision information is passed via the context argument.
        """

        tr = lock = None
        removed = ctx.removed()
        p1, p2 = ctx.p1(), ctx.p2()
        m1 = p1.manifest().copy()
        m2 = p2.manifest()
        user = ctx.user()

        lock = self.lock()
        try:
            tr = self.transaction()
            trp = weakref.proxy(tr)

            # check in files
            new = {}
            changed = []
            linkrev = len(self)
            for f in sorted(ctx.modified() + ctx.added()):
                self.ui.note(f + "\n")
                try:
                    fctx = ctx[f]
                    new[f] = self._filecommit(fctx, m1, m2, linkrev, trp,
                                              changed)
                    m1.set(f, fctx.flags())
                except OSError, inst:
                    self.ui.warn(_("trouble committing %s!\n") % f)
                    raise
                except IOError, inst:
                    errcode = getattr(inst, 'errno', errno.ENOENT)
                    if error or errcode and errcode != errno.ENOENT:
                        self.ui.warn(_("trouble committing %s!\n") % f)
                        raise
                    else:
                        removed.append(f)

            # update manifest
            m1.update(new)
            removed = [f for f in sorted(removed) if f in m1 or f in m2]
            drop = [f for f in removed if f in m1]
            for f in drop:
                del m1[f]
            mn = self.manifest.add(m1, trp, linkrev, p1.manifestnode(),
                                   p2.manifestnode(), (new, drop))

            # update changelog
            self.changelog.delayupdate()
            n = self.changelog.add(mn, changed + removed, ctx.description(),
                                   trp, p1.node(), p2.node(),
                                   user, ctx.date(), ctx.extra().copy())
            p = lambda: self.changelog.writepending() and self.root or ""
            xp1, xp2 = p1.hex(), p2 and p2.hex() or ''
            self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                      parent2=xp2, pending=p)
            self.changelog.finalize(trp)
            tr.close()

            if self._branchcache:
                self.branchtags()
            return n
        finally:
            del tr
            lock.release()

    def destroyed(self):
        '''Inform the repository that nodes have been destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done after destroying history.'''
        # XXX it might be nice if we could take the list of destroyed
        # nodes, but I don't see an easy way for rollback() to do that

        # Ensure the persistent tag cache is updated.  Doing it now
        # means that the tag cache only has to worry about destroyed
        # heads immediately after a strip/rollback.  That in turn
        # guarantees that "cachetip == currenttip" (comparing both rev
        # and node) always means no nodes have been added or destroyed.

        # XXX this is suboptimal when qrefresh'ing: we strip the current
        # head, refresh the tag cache, then immediately add a new head.
        # But I think doing it this way is necessary for the "instant
        # tag cache retrieval" case to work.
        tags_.findglobaltags(self.ui, self, {}, {})

    def walk(self, match, node=None):
        '''
        walk recursively through the directory tree or a given
        changeset, finding all files matched by the match
        function
        '''
        return self[node].walk(match)
|
2006-10-27 20:24:10 +04:00
|
|
|
|
2008-07-12 03:46:02 +04:00
|
|
|
def status(self, node1='.', node2=None, match=None,
|
2008-06-27 22:43:29 +04:00
|
|
|
ignored=False, clean=False, unknown=False):
|
2006-07-21 03:21:07 +04:00
|
|
|
"""return status of files between two nodes or node and working directory
|
2006-01-12 13:32:07 +03:00
|
|
|
|
|
|
|
If node1 is None, use the first dirstate parent instead.
|
|
|
|
If node2 is None, compare node1 with working directory.
|
|
|
|
"""
|
2005-08-28 01:21:25 +04:00
|
|
|
|
2008-07-12 03:46:02 +04:00
|
|
|
def mfmatches(ctx):
|
|
|
|
mf = ctx.manifest().copy()
|
2005-08-28 01:21:25 +04:00
|
|
|
for fn in mf.keys():
|
|
|
|
if not match(fn):
|
|
|
|
del mf[fn]
|
|
|
|
return mf
|
|
|
|
|
2008-10-13 00:21:08 +04:00
|
|
|
if isinstance(node1, context.changectx):
|
|
|
|
ctx1 = node1
|
|
|
|
else:
|
|
|
|
ctx1 = self[node1]
|
|
|
|
if isinstance(node2, context.changectx):
|
|
|
|
ctx2 = node2
|
|
|
|
else:
|
|
|
|
ctx2 = self[node2]
|
|
|
|
|
        working = ctx2.rev() is None
        parentworking = working and ctx1 == self['.']
        match = match or match_.always(self.root, self.getcwd())
        listignored, listclean, listunknown = ignored, clean, unknown

        # load earliest manifest first for caching reasons
        if not working and ctx2.rev() < ctx1.rev():
            ctx2.manifest()

        if not parentworking:
            def bad(f, msg):
                if f not in ctx1:
                    self.ui.warn('%s: %s\n' % (self.dirstate.pathto(f), msg))
            match.bad = bad

        if working: # we need to scan the working dir
            subrepos = ctx1.substate.keys()
            s = self.dirstate.status(match, subrepos, listignored,
                                     listclean, listunknown)
            cmp, modified, added, removed, deleted, unknown, ignored, clean = s

            # check for any possibly clean files
            if parentworking and cmp:
                fixup = []
                # do a full compare of any files that might have changed
                for f in sorted(cmp):
                    if (f not in ctx1 or ctx2.flags(f) != ctx1.flags(f)
                        or ctx1[f].cmp(ctx2[f].data())):
                        modified.append(f)
                    else:
                        fixup.append(f)

                if listclean:
                    clean += fixup

                # update dirstate for files that are actually clean
                if fixup:
                    try:
                        # updating the dirstate is optional
                        # so we don't wait on the lock
                        wlock = self.wlock(False)
                        try:
                            for f in fixup:
                                self.dirstate.normal(f)
                        finally:
                            wlock.release()
                    except error.LockError:
                        pass

        if not parentworking:
            mf1 = mfmatches(ctx1)
            if working:
                # we are comparing working dir against non-parent
                # generate a pseudo-manifest for the working dir
                mf2 = mfmatches(self['.'])
                for f in cmp + modified + added:
                    mf2[f] = None
                    mf2.set(f, ctx2.flags(f))
                for f in removed:
                    if f in mf2:
                        del mf2[f]
            else:
                # we are comparing two revisions
                deleted, unknown, ignored = [], [], []
                mf2 = mfmatches(ctx2)

            modified, added, clean = [], [], []
            for fn in mf2:
                if fn in mf1:
                    if (mf1.flags(fn) != mf2.flags(fn) or
                        (mf1[fn] != mf2[fn] and
                         (mf2[fn] or ctx1[fn].cmp(ctx2[fn].data())))):
                        modified.append(fn)
                    elif listclean:
                        clean.append(fn)
                    del mf1[fn]
                else:
                    added.append(fn)
            removed = mf1.keys()

        r = modified, added, removed, deleted, unknown, ignored, clean
        [l.sort() for l in r]
        return r
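
    # Illustrative sketch, not part of the original module: how a caller
    # typically unpacks status()'s seven lists (the order above is fixed).
    # 'repo' is an assumed localrepository instance:
    #
    #   st = repo.status(ignored=True, clean=True, unknown=True)
    #   modified, added, removed, deleted, unknown, ignored, clean = st
    #   for f in modified:
    #       print 'M %s' % f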

    def add(self, list):
        wlock = self.wlock()
        try:
            rejected = []
            for f in list:
                p = self.wjoin(f)
                try:
                    st = os.lstat(p)
                except OSError:
                    self.ui.warn(_("%s does not exist!\n") % f)
                    rejected.append(f)
                    continue
                if st.st_size > 10000000:
                    self.ui.warn(_("%s: files over 10MB may cause memory and"
                                   " performance problems\n"
                                   "(use 'hg revert %s' to unadd the file)\n")
                                   % (f, f))
                if not (stat.S_ISREG(st.st_mode) or stat.S_ISLNK(st.st_mode)):
                    self.ui.warn(_("%s not added: only files and symlinks "
                                   "supported currently\n") % f)
                    rejected.append(p)
                elif self.dirstate[f] in 'amn':
                    self.ui.warn(_("%s already tracked!\n") % f)
                elif self.dirstate[f] == 'r':
                    # the file was previously removed; mark it for a full
                    # content comparison so it shows up as dirty again
                    # (see issue522)
                    self.dirstate.normallookup(f)
                else:
                    self.dirstate.add(f)
            return rejected
        finally:
            wlock.release()

    def forget(self, list):
        wlock = self.wlock()
        try:
            for f in list:
                if self.dirstate[f] != 'a':
                    self.ui.warn(_("%s not added!\n") % f)
                else:
                    self.dirstate.forget(f)
        finally:
            wlock.release()

    def remove(self, list, unlink=False):
        if unlink:
            for f in list:
                try:
                    util.unlink(self.wjoin(f))
                except OSError, inst:
                    if inst.errno != errno.ENOENT:
                        raise
        wlock = self.wlock()
        try:
            for f in list:
                if unlink and os.path.exists(self.wjoin(f)):
                    self.ui.warn(_("%s still exists!\n") % f)
                elif self.dirstate[f] == 'a':
                    self.dirstate.forget(f)
                elif f not in self.dirstate:
                    self.ui.warn(_("%s not tracked!\n") % f)
                else:
                    self.dirstate.remove(f)
        finally:
            wlock.release()

    def undelete(self, list):
        manifests = [self.manifest.read(self.changelog.read(p)[0])
                     for p in self.dirstate.parents() if p != nullid]
        wlock = self.wlock()
        try:
            for f in list:
                if self.dirstate[f] != 'r':
                    self.ui.warn(_("%s not removed!\n") % f)
                else:
                    m = f in manifests[0] and manifests[0] or manifests[1]
                    t = self.file(f).read(m[f])
                    self.wwrite(f, t, m.flags(f))
                    self.dirstate.normal(f)
        finally:
            wlock.release()

    def copy(self, source, dest):
        p = self.wjoin(dest)
        if not (os.path.exists(p) or os.path.islink(p)):
            self.ui.warn(_("%s does not exist!\n") % dest)
        elif not (os.path.isfile(p) or os.path.islink(p)):
            self.ui.warn(_("copy failed: %s is not a file or a "
                           "symbolic link\n") % dest)
        else:
            wlock = self.wlock()
            try:
                if self.dirstate[dest] in '?r':
                    self.dirstate.add(dest)
                self.dirstate.copy(source, dest)
            finally:
                wlock.release()

    def heads(self, start=None):
        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        heads = [(-self.changelog.rev(h), h) for h in heads]
        return [n for (r, n) in sorted(heads)]

    def branchheads(self, branch=None, start=None, closed=False):
        '''return a (possibly filtered) list of heads for the given branch

        Heads are returned in topological order, from newest to oldest.
        If branch is None, use the dirstate branch.
        If start is not None, return only heads reachable from start.
        If closed is True, return heads that are marked as closed as well.
        '''
        if branch is None:
            branch = self[None].branch()
        branches = self.branchmap()
        if branch not in branches:
            return []
        # the cache returns heads ordered lowest to highest
        bheads = list(reversed(branches[branch]))
        if start is not None:
            # filter out the heads that cannot be reached from startrev
            fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
            bheads = [h for h in bheads if h in fbheads]
        if not closed:
            bheads = [h for h in bheads if
                      ('close' not in self.changelog.read(h)[5])]
        return bheads
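
    # Illustrative sketch, not part of the original module: combining
    # branchmap() and branchheads() to print the open heads of every
    # named branch.  'repo' is an assumed localrepository instance:
    #
    #   for branch in repo.branchmap():
    #       heads = repo.branchheads(branch, closed=False)
    #       print '%s: %s' % (branch, ' '.join(short(h) for h in heads))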

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while True:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom and n != nullid:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r
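
    # Worked example, not part of the original module: between() samples
    # each top..bottom run at exponentially growing distances below top,
    # which is what lets the discovery code binary-search a linear branch
    # segment in O(log n) round trips.  For a run of length 20:
    #
    #   dists, f = [], 1
    #   while f < 20:
    #       dists.append(f)
    #       f *= 2
    #   # dists == [1, 2, 4, 8, 16]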

    def findincoming(self, remote, base=None, heads=None, force=False):
        """Return list of roots of the subsets of missing nodes from remote

        If base dict is specified, assume that these nodes and their parents
        exist on the remote side and that no child of a node of base exists
        in both remote and self.
        Furthermore, base will be updated to include the nodes that exist
        in self and remote but whose children do not exist in both.
        If a list of heads is specified, return only nodes which are heads
        or ancestors of these heads.

        All the ancestors of base are in self and in remote.
        All the descendants of the list returned are missing in self.
        (and so we know that the rest of the nodes are missing in remote, see
        outgoing)
        """
        return self.findcommonincoming(remote, base, heads, force)[1]

    def findcommonincoming(self, remote, base=None, heads=None, force=False):
        """Return a tuple (common, missing roots, heads) used to identify
        missing nodes from remote.

        If base dict is specified, assume that these nodes and their parents
        exist on the remote side and that no child of a node of base exists
        in both remote and self.
        Furthermore, base will be updated to include the nodes that exist
        in self and remote but whose children do not exist in both.
        If a list of heads is specified, return only nodes which are heads
        or ancestors of these heads.

        All the ancestors of base are in self and in remote.
        """
        m = self.changelog.nodemap
        search = []
        fetch = set()
        seen = set()
        seenbranch = set()
        if base is None:
            base = {}

        if not heads:
            heads = remote.heads()

        if self.changelog.tip() == nullid:
            base[nullid] = 1
            if heads != [nullid]:
                return [nullid], [nullid], list(heads)
            return [nullid], [], []

        # assume we're closer to the tip than the root
        # and start by examining the heads
        self.ui.status(_("searching for changes\n"))

        unknown = []
        for h in heads:
            if h not in m:
                unknown.append(h)
            else:
                base[h] = 1

        heads = unknown
        if not unknown:
            return base.keys(), [], []

        req = set(unknown)
        reqcnt = 0

        # search through remote branches
        # a 'branch' here is a linear segment of history, with four parts:
        # head, root, first parent, second parent
        # (a branch always has two parents (or none) by definition)
        unknown = remote.branches(unknown)
        while unknown:
            r = []
            while unknown:
                n = unknown.pop(0)
                if n[0] in seen:
                    continue

                self.ui.debug("examining %s:%s\n"
                              % (short(n[0]), short(n[1])))
                if n[0] == nullid: # found the end of the branch
                    pass
                elif n in seenbranch:
                    self.ui.debug("branch already found\n")
                    continue
                elif n[1] and n[1] in m: # do we know the base?
                    self.ui.debug("found incomplete branch %s:%s\n"
                                  % (short(n[0]), short(n[1])))
                    search.append(n[0:2]) # schedule branch range for scanning
                    seenbranch.add(n)
                else:
                    if n[1] not in seen and n[1] not in fetch:
                        if n[2] in m and n[3] in m:
                            self.ui.debug("found new changeset %s\n" %
                                          short(n[1]))
                            fetch.add(n[1]) # earliest unknown
                        for p in n[2:4]:
                            if p in m:
                                base[p] = 1 # latest known

                    for p in n[2:4]:
                        if p not in req and p not in m:
                            r.append(p)
                            req.add(p)
                seen.add(n[0])

            if r:
                reqcnt += 1
                self.ui.progress(_('searching'), reqcnt, unit='queries')
                self.ui.debug("request %d: %s\n" %
                              (reqcnt, " ".join(map(short, r))))
                for p in xrange(0, len(r), 10):
                    for b in remote.branches(r[p:p + 10]):
                        self.ui.debug("received %s:%s\n" %
                                      (short(b[0]), short(b[1])))
                        unknown.append(b)

        # do binary search on the branches we found
        while search:
            newsearch = []
            reqcnt += 1
            self.ui.progress(_('searching'), reqcnt, unit='queries')
            for n, l in zip(search, remote.between(search)):
                l.append(n[1])
                p = n[0]
                f = 1
                for i in l:
                    self.ui.debug("narrowing %d:%d %s\n" % (f, len(l), short(i)))
                    if i in m:
                        if f <= 2:
                            self.ui.debug("found new branch changeset %s\n" %
                                          short(p))
                            fetch.add(p)
                            base[i] = 1
                        else:
                            self.ui.debug("narrowed branch search to %s:%s\n"
                                          % (short(p), short(i)))
                            newsearch.append((p, i))
                        break
                    p, f = i, f * 2
            search = newsearch

        # sanity check our fetch list
        for f in fetch:
            if f in m:
                raise error.RepoError(_("already have changeset ")
                                      + short(f[:4]))

        if base.keys() == [nullid]:
            if force:
                self.ui.warn(_("warning: repository is unrelated\n"))
            else:
                raise util.Abort(_("repository is unrelated"))

        self.ui.debug("found new changesets starting at " +
                      " ".join([short(f) for f in fetch]) + "\n")

        self.ui.progress(_('searching'), None, unit='queries')
        self.ui.debug("%d total queries\n" % reqcnt)

        return base.keys(), list(fetch), heads
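
    # Illustrative sketch, not part of the original module: the shape of
    # the discovery handshake driven by findcommonincoming(), assuming
    # 'repo' and 'remote' are local and remote repository objects:
    #
    #   common, fetch, rheads = repo.findcommonincoming(remote)
    #   # common: nodes known to exist on both sides
    #   # fetch:  roots of the subsets missing locally
    #   # rheads: the heads reported by remote.heads()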

    def findoutgoing(self, remote, base=None, heads=None, force=False):
        """Return list of nodes that are roots of subsets not in remote

        If base dict is specified, assume that these nodes and their parents
        exist on the remote side.
        If a list of heads is specified, return only nodes which are heads
        or ancestors of these heads, and return a second element which
        contains all remote heads which get new children.
        """
        if base is None:
            base = {}
            self.findincoming(remote, base, heads, force=force)

        self.ui.debug("common changesets up to "
                      + " ".join(map(short, base.keys())) + "\n")

        remain = set(self.changelog.nodemap)

        # prune everything remote has from the tree
        remain.remove(nullid)
        remove = base.keys()
        while remove:
            n = remove.pop(0)
            if n in remain:
                remain.remove(n)
                for p in self.changelog.parents(n):
                    remove.append(p)

        # find every node whose parents have been pruned
        subset = []
        # find every remote head that will get new children
        updated_heads = set()
        for n in remain:
            p1, p2 = self.changelog.parents(n)
            if p1 not in remain and p2 not in remain:
                subset.append(n)
            if heads:
                if p1 in heads:
                    updated_heads.add(p1)
                if p2 in heads:
                    updated_heads.add(p2)

        # this is the set of all roots we have to push
        if heads:
            return subset, list(updated_heads)
        else:
            return subset
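
    # Illustrative sketch, not part of the original module: the pruning
    # above is a plain reverse breadth-first walk from the common nodes.
    # A minimal standalone rendition, with 'parents' as an assumed
    # node -> (p1, p2) mapping and 'base' the nodes the remote has:
    #
    #   remain = set(parents)          # every node we have
    #   queue = list(base)
    #   while queue:
    #       n = queue.pop(0)
    #       if n in remain:
    #           remain.remove(n)
    #           queue.extend(parents[n])
    #   # whatever is left in 'remain' is missing on the remote side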

    def pull(self, remote, heads=None, force=False):
        lock = self.lock()
        try:
            common, fetch, rheads = self.findcommonincoming(remote, heads=heads,
                                                            force=force)
            if fetch == [nullid]:
                self.ui.status(_("requesting all changes\n"))

            if not fetch:
                self.ui.status(_("no changes found\n"))
                return 0

            if heads is None and remote.capable('changegroupsubset'):
                heads = rheads

            if heads is None:
                cg = remote.changegroup(fetch, 'pull')
            else:
                if not remote.capable('changegroupsubset'):
                    raise util.Abort(_("Partial pull cannot be done because "
                                       "other repository doesn't support "
                                       "changegroupsubset."))
                cg = remote.changegroupsubset(fetch, heads, 'pull')
            return self.addchangegroup(cg, 'pull', remote.url())
        finally:
            lock.release()

    def push(self, remote, force=False, revs=None):
        # there are two ways to push to remote repo:
        #
        # addchangegroup assumes local user can lock remote
        # repo (local filesystem, old ssh servers).
        #
        # unbundle assumes local user cannot lock remote repo (new ssh
        # servers, http servers).

        if remote.capable('unbundle'):
            return self.push_unbundle(remote, force, revs)
        return self.push_addchangegroup(remote, force, revs)

    def prepush(self, remote, force, revs):
        '''Analyze the local and remote repositories and determine which
        changesets need to be pushed to the remote. Return a tuple
        (changegroup, remoteheads). changegroup is a readable file-like
        object whose read() returns successive changegroup chunks ready to
        be sent over the wire. remoteheads is the list of remote heads.
        '''
        common = {}
        remote_heads = remote.heads()
        inc = self.findincoming(remote, common, remote_heads, force=force)

        update, updated_heads = self.findoutgoing(remote, common, remote_heads)
        msng_cl, bases, heads = self.changelog.nodesbetween(update, revs)

        def checkbranch(lheads, rheads, updatelb, branchname=None):
            '''
            check whether there are more local heads than remote heads on
            a specific branch.

            lheads: local branch heads
            rheads: remote branch heads
            updatelb: outgoing local branch bases
            '''

            warn = 0

            if not revs and len(lheads) > len(rheads):
                warn = 1
            else:
                # add local heads involved in the push
                updatelheads = [self.changelog.heads(x, lheads)
                                for x in updatelb]
                newheads = set(sum(updatelheads, [])) & set(lheads)

                if not newheads:
                    return True

                # add heads we don't have or that are not involved in the push
                for r in rheads:
                    if r in self.changelog.nodemap:
                        desc = self.changelog.heads(r, heads)
                        l = [h for h in heads if h in desc]
                        if not l:
                            newheads.add(r)
                    else:
                        newheads.add(r)
                if len(newheads) > len(rheads):
                    warn = 1

            if warn:
                if branchname is not None:
                    msg = _("abort: push creates new remote heads"
                            " on branch '%s'!\n") % branchname
                else:
                    msg = _("abort: push creates new remote heads!\n")
                self.ui.warn(msg)
                if len(lheads) > len(rheads):
                    self.ui.status(_("(did you forget to merge?"
                                     " use push -f to force)\n"))
                else:
                    self.ui.status(_("(you should pull and merge or"
                                     " use push -f to force)\n"))
                return False
            return True

        if not bases:
            self.ui.status(_("no changes found\n"))
            return None, 1
        elif not force:
            # Check for each named branch if we're creating new remote heads.
            # To be a remote head after push, node must be either:
            # - unknown locally
            # - a local outgoing head descended from update
            # - a remote head that's known locally and not
            #   ancestral to an outgoing head
            #
            # New named branches cannot be created without --force.

            if remote_heads != [nullid]:
                if remote.capable('branchmap'):
                    remotebrheads = remote.branchmap()

                    if not revs:
                        localbrheads = self.branchmap()
                    else:
                        localbrheads = {}
                        for n in heads:
                            branch = self[n].branch()
                            localbrheads.setdefault(branch, []).append(n)

                    newbranches = list(set(localbrheads) - set(remotebrheads))
                    if newbranches: # new branch requires --force
                        branchnames = ', '.join("%s" % b for b in newbranches)
                        self.ui.warn(_("abort: push creates "
                                       "new remote branches: %s!\n")
                                     % branchnames)
                        # propose 'push -b .' in the msg too?
                        self.ui.status(_("(use 'hg push -f' to force)\n"))
                        return None, 0
                    for branch, lheads in localbrheads.iteritems():
                        if branch in remotebrheads:
                            rheads = remotebrheads[branch]
                            if not checkbranch(lheads, rheads, update, branch):
                                return None, 0
                else:
                    if not checkbranch(heads, remote_heads, update):
                        return None, 0

        if inc:
            self.ui.warn(_("note: unsynced remote changes!\n"))

        if revs is None:
            # use the fast path, no race possible on push
            nodes = self.changelog.findmissing(common.keys())
            cg = self._changegroup(nodes, 'push')
        else:
            cg = self.changegroupsubset(update, revs, 'push')
        return cg, remote_heads
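
    # Illustrative sketch, not part of the original module: the essence
    # of the per-branch safety check above, on plain dicts mapping branch
    # name -> list of heads (assumed local and remote views):
    #
    #   for branch, lheads in localbrheads.iteritems():
    #       rheads = remotebrheads.get(branch, [])
    #       if not rheads:
    #           pass  # brand new remote branch: refused without --force
    #       elif len(lheads) > len(rheads):
    #           pass  # push would create a new remote head: refused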

    def push_addchangegroup(self, remote, force, revs):
        lock = remote.lock()
        try:
            ret = self.prepush(remote, force, revs)
            if ret[0] is not None:
                cg, remote_heads = ret
                return remote.addchangegroup(cg, 'push', self.url())
            return ret[1]
        finally:
            lock.release()

    def push_unbundle(self, remote, force, revs):
        # local repo finds heads on server, finds out what revs it
        # must push. once revs transferred, if server finds it has
        # different heads (someone else won commit/push race), server
        # aborts.

        ret = self.prepush(remote, force, revs)
        if ret[0] is not None:
            cg, remote_heads = ret
            if force:
                remote_heads = ['force']
            return remote.unbundle(cg, remote_heads, 'push')
        return ret[1]

    def changegroupinfo(self, nodes, source):
        if self.ui.verbose or source == 'bundle':
            self.ui.status(_("%d changesets found\n") % len(nodes))
        if self.ui.debugflag:
            self.ui.debug("list of changesets:\n")
            for node in nodes:
                self.ui.debug("%s\n" % hex(node))

    def changegroupsubset(self, bases, heads, source, extranodes=None):
        """Compute a changegroup consisting of all the nodes that are
        descendants of any of the bases and ancestors of any of the heads.
        Return a chunkbuffer object whose read() method will return
        successive changegroup chunks.

        It is fairly complex as determining which filenodes and which
        manifest nodes need to be included for the changeset to be complete
        is non-trivial.

        Another wrinkle is doing the reverse, figuring out which changeset in
        the changegroup a particular filenode or manifestnode belongs to.

        The caller can specify some nodes that must be included in the
        changegroup using the extranodes argument. It should be a dict
        where the keys are the filenames (or 1 for the manifest), and the
        values are lists of (node, linknode) tuples, where node is a wanted
        node and linknode is the changelog node that should be transmitted as
        the linkrev.
        """

        # Set up some initial variables
        # Make it easy to refer to self.changelog
        cl = self.changelog
        # msng is short for missing - compute the list of changesets in this
        # changegroup.
        if not bases:
            bases = [nullid]
        msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)

        if extranodes is None:
            # can we go through the fast path ?
            heads.sort()
            allheads = self.heads()
            allheads.sort()
            if heads == allheads:
                return self._changegroup(msng_cl_lst, source)

        # slow path
        self.hook('preoutgoing', throw=True, source=source)

        self.changegroupinfo(msng_cl_lst, source)
        # Some bases may turn out to be superfluous, and some heads may be
        # too. nodesbetween will return the minimal set of bases and heads
        # necessary to re-create the changegroup.

        # Known heads are the list of heads that it is assumed the recipient
        # of this changegroup will know about.
        knownheads = set()
        # We assume that all parents of bases are known heads.
        for n in bases:
            knownheads.update(cl.parents(n))
        knownheads.discard(nullid)
        knownheads = list(knownheads)
        if knownheads:
            # Now that we know what heads are known, we can compute which
            # changesets are known. The recipient must know about all
            # changesets required to reach the known heads from the null
            # changeset.
            has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
            junk = None
            # Transform the list into a set.
            has_cl_set = set(has_cl_set)
        else:
            # If there were no known heads, the recipient cannot be assumed to
            # know about any changesets.
            has_cl_set = set()

        # Make it easy to refer to self.manifest
        mnfst = self.manifest
        # We don't know which manifests are missing yet
        msng_mnfst_set = {}
        # Nor do we know which filenodes are missing.
        msng_filenode_set = {}

        junk = mnfst.index[len(mnfst) - 1] # Get around a bug in lazyindex
        junk = None

        # A changeset always belongs to itself, so the changenode lookup
        # function for a changenode is identity.
        def identity(x):
            return x

        # If we determine that a particular file or manifest node must be a
        # node that the recipient of the changegroup will already have, we can
        # also assume the recipient will have all the parents. This function
        # prunes them from the set of missing nodes.
        def prune_parents(revlog, hasset, msngset):
            for r in revlog.ancestors(*[revlog.rev(n) for n in hasset]):
                msngset.pop(revlog.node(r), None)

        # Use the information collected in collect_manifests_and_files to say
        # which changenode any manifestnode belongs to.
        def lookup_manifest_link(mnfstnode):
            return msng_mnfst_set[mnfstnode]

        # A function-generating function that sets up the initial environment
        # for the inner function.
        def filenode_collector(changedfiles):
            # This gathers information from each manifestnode included in the
            # changegroup about which filenodes the manifest node references
            # so we can include those in the changegroup too.
            #
            # It also remembers which changenode each filenode belongs to. It
            # does this by assuming a filenode belongs to the changenode
            # the first manifest that references it belongs to.
            def collect_msng_filenodes(mnfstnode):
                r = mnfst.rev(mnfstnode)
                if r - 1 in mnfst.parentrevs(r):
                    # If the previous rev is one of the parents,
                    # we only need to see a diff.
                    deltamf = mnfst.readdelta(mnfstnode)
                    # For each line in the delta
                    for f, fnode in deltamf.iteritems():
                        f = changedfiles.get(f, None)
                        # And if the file is in the list of files we care
                        # about.
                        if f is not None:
                            # Get the changenode this manifest belongs to
                            clnode = msng_mnfst_set[mnfstnode]
                            # Create the set of filenodes for the file if
                            # there isn't one already.
                            ndset = msng_filenode_set.setdefault(f, {})
                            # And set the filenode's changelog node to the
                            # manifest's if it hasn't been set already.
                            ndset.setdefault(fnode, clnode)
                else:
                    # Otherwise we need a full manifest.
                    m = mnfst.read(mnfstnode)
                    # For every file we care about.
                    for f in changedfiles:
                        fnode = m.get(f, None)
                        # If it's in the manifest
                        if fnode is not None:
                            # See comments above.
                            clnode = msng_mnfst_set[mnfstnode]
                            ndset = msng_filenode_set.setdefault(f, {})
                            ndset.setdefault(fnode, clnode)
            return collect_msng_filenodes

        # We have a list of filenodes we think we need for a file, let's remove
        # all those we know the recipient must have.
        def prune_filenodes(f, filerevlog):
            msngset = msng_filenode_set[f]
            hasset = set()
            # If a 'missing' filenode thinks it belongs to a changenode we
            # assume the recipient must have, then the recipient must have
            # that filenode.
            for n in msngset:
                clnode = cl.node(filerevlog.linkrev(filerevlog.rev(n)))
                if clnode in has_cl_set:
                    hasset.add(n)
            prune_parents(filerevlog, hasset, msngset)

        # A function-generating function that sets up a context for the
        # inner function.
        def lookup_filenode_link_func(fname):
            msngset = msng_filenode_set[fname]
            # Lookup the changenode the filenode belongs to.
            def lookup_filenode_link(fnode):
                return msngset[fnode]
            return lookup_filenode_link

        # Add the nodes that were explicitly requested.
        def add_extra_nodes(name, nodes):
            if not extranodes or name not in extranodes:
                return

            for node, linknode in extranodes[name]:
                if node not in nodes:
                    nodes[node] = linknode

        # Now that we have all these utility functions to help out and
        # logically divide up the task, generate the group.
        def gengroup():
            # The set of changed files starts empty.
            changedfiles = {}
            collect = changegroup.collector(cl, msng_mnfst_set, changedfiles)

            # Create a changenode group generator that will call our functions
            # back to lookup the owning changenode and collect information.
            group = cl.group(msng_cl_lst, identity, collect)
            cnt = 0
            for chnk in group:
                yield chnk
                self.ui.progress(_('bundle changes'), cnt, unit='chunks')
                cnt += 1
            self.ui.progress(_('bundle changes'), None, unit='chunks')

            # Figure out which manifest nodes (of the ones we think might be
            # part of the changegroup) the recipient must know about and
            # remove them from the changegroup.
            has_mnfst_set = set()
            for n in msng_mnfst_set:
                # If a 'missing' manifest thinks it belongs to a changenode
                # the recipient is assumed to have, obviously the recipient
                # must have that manifest.
                linknode = cl.node(mnfst.linkrev(mnfst.rev(n)))
                if linknode in has_cl_set:
                    has_mnfst_set.add(n)
            prune_parents(mnfst, has_mnfst_set, msng_mnfst_set)
            add_extra_nodes(1, msng_mnfst_set)
            msng_mnfst_lst = msng_mnfst_set.keys()
            # Sort the manifestnodes by revision number.
            msng_mnfst_lst.sort(key=mnfst.rev)
            # Create a generator for the manifestnodes that calls our lookup
            # and data collection functions back.
            group = mnfst.group(msng_mnfst_lst, lookup_manifest_link,
                                filenode_collector(changedfiles))
            cnt = 0
            for chnk in group:
                yield chnk
                self.ui.progress(_('bundle manifests'), cnt, unit='chunks')
                cnt += 1
            self.ui.progress(_('bundle manifests'), None, unit='chunks')

            # These are no longer needed, dereference and toss the memory for
            # them.
            msng_mnfst_lst = None
            msng_mnfst_set.clear()

            if extranodes:
                for fname in extranodes:
                    if isinstance(fname, int):
                        continue
                    msng_filenode_set.setdefault(fname, {})
                    changedfiles[fname] = 1
            # Go through all our files in order sorted by name.
            cnt = 0
            for fname in sorted(changedfiles):
                filerevlog = self.file(fname)
                if not len(filerevlog):
                    raise util.Abort(_("empty or missing revlog for %s") % fname)
                # Toss out the filenodes that the recipient isn't really
                # missing.
                if fname in msng_filenode_set:
                    prune_filenodes(fname, filerevlog)
                    add_extra_nodes(fname, msng_filenode_set[fname])
                    msng_filenode_lst = msng_filenode_set[fname].keys()
                else:
                    msng_filenode_lst = []
                # If any filenodes are left, generate the group for them,
                # otherwise don't bother.
                if len(msng_filenode_lst) > 0:
                    yield changegroup.chunkheader(len(fname))
                    yield fname
                    # Sort the filenodes by their revision #
                    msng_filenode_lst.sort(key=filerevlog.rev)
                    # Create a group generator and only pass in a changenode
                    # lookup function as we need to collect no information
                    # from filenodes.
                    group = filerevlog.group(msng_filenode_lst,
                                             lookup_filenode_link_func(fname))
                    for chnk in group:
                        self.ui.progress(
                            _('bundle files'), cnt, item=fname, unit='chunks')
                        cnt += 1
                        yield chnk
                if fname in msng_filenode_set:
                    # Don't need this anymore, toss it to free memory.
                    del msng_filenode_set[fname]
            # Signal that no more groups are left.
            yield changegroup.closechunk()
            self.ui.progress(_('bundle files'), None, unit='chunks')

        if msng_cl_lst:
            self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)

        return util.chunkbuffer(gengroup())
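
    # Illustrative sketch, not part of the original module: the shape of
    # the extranodes argument, per the docstring above.  Keys are file
    # names, or the integer 1 for the manifest; values are lists of
    # (node, linknode) pairs (both hypothetical binary node ids here):
    #
    #   extranodes = {
    #       1: [(mnode, clnode)],             # extra manifest node
    #       'foo/bar.txt': [(fnode, clnode)], # extra filelog node
    #   }
    #   cg = repo.changegroupsubset(bases, heads, 'push', extranodes)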

    def changegroup(self, basenodes, source):
        # to avoid a race we use changegroupsubset() (issue1320)
        return self.changegroupsubset(basenodes, self.heads(), source)

    def _changegroup(self, nodes, source):
        """Compute the changegroup of all nodes that we have that a recipient
        doesn't. Return a chunkbuffer object whose read() method will return
        successive changegroup chunks.

        This is much easier than the previous function as we can assume that
        the recipient has any changenode we aren't sending them.

        nodes is the set of nodes to send"""

        self.hook('preoutgoing', throw=True, source=source)

        cl = self.changelog
        revset = set([cl.rev(n) for n in nodes])
        self.changegroupinfo(nodes, source)

        def identity(x):
            return x

        def gennodelst(log):
            for r in log:
                if log.linkrev(r) in revset:
                    yield log.node(r)

        def lookuprevlink_func(revlog):
            def lookuprevlink(n):
                return cl.node(revlog.linkrev(revlog.rev(n)))
            return lookuprevlink

        def gengroup():
            '''yield a sequence of changegroup chunks (strings)'''
            # construct a list of all changed files
            changedfiles = {}
            mmfs = {}
            collect = changegroup.collector(cl, mmfs, changedfiles)

            cnt = 0
            for chnk in cl.group(nodes, identity, collect):
                self.ui.progress(_('bundle changes'), cnt, unit='chunks')
                cnt += 1
                yield chnk
            self.ui.progress(_('bundle changes'), None, unit='chunks')

            mnfst = self.manifest
            nodeiter = gennodelst(mnfst)
            cnt = 0
            for chnk in mnfst.group(nodeiter, lookuprevlink_func(mnfst)):
                self.ui.progress(_('bundle manifests'), cnt, unit='chunks')
                cnt += 1
                yield chnk
            self.ui.progress(_('bundle manifests'), None, unit='chunks')

            cnt = 0
            for fname in sorted(changedfiles):
                filerevlog = self.file(fname)
                if not len(filerevlog):
                    raise util.Abort(_("empty or missing revlog for %s") % fname)
                nodeiter = gennodelst(filerevlog)
                nodeiter = list(nodeiter)
                if nodeiter:
                    yield changegroup.chunkheader(len(fname))
                    yield fname
                    lookup = lookuprevlink_func(filerevlog)
                    for chnk in filerevlog.group(nodeiter, lookup):
                        self.ui.progress(
                            _('bundle files'), cnt, item=fname, unit='chunks')
                        cnt += 1
                        yield chnk
            self.ui.progress(_('bundle files'), None, unit='chunks')

            yield changegroup.closechunk()

        if nodes:
            self.hook('outgoing', node=hex(nodes[0]), source=source)

        return util.chunkbuffer(gengroup())
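
    # Illustrative sketch, not part of the original module: both
    # changegroup generators return a util.chunkbuffer, consumed like an
    # ordinary file.  Assuming 'cg' came from _changegroup():
    #
    #   while True:
    #       data = cg.read(4096)
    #       if not data:
    #           break
    #       # ship 'data' over the wire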

    def addchangegroup(self, source, srctype, url, emptyok=False):
        """add changegroup to repo.

        return values:
        - nothing changed or no source: 0
        - more heads than before: 1+added heads (2..n)
        - fewer heads than before: -1-removed heads (-2..-n)
        - number of heads stays the same: 1
        """
        def csmap(x):
            self.ui.debug("add changeset %s\n" % short(x))
            return len(cl)

        def revmap(x):
            return cl.rev(x)

        if not source:
            return 0

        self.hook('prechangegroup', throw=True, source=srctype, url=url)

        changesets = files = revisions = 0

        # write changelog data to temp files so concurrent readers will not see
        # inconsistent view
        cl = self.changelog
        cl.delayupdate()
        oldheads = len(cl.heads())

        tr = self.transaction()
        try:
            trp = weakref.proxy(tr)
            # pull off the changeset group
            self.ui.status(_("adding changesets\n"))
            clstart = len(cl)
            class prog(object):
                step = _('changesets')
                count = 1
                ui = self.ui
                def __call__(self):
                    self.ui.progress(self.step, self.count, unit='chunks')
                    self.count += 1
            pr = prog()
            chunkiter = changegroup.chunkiter(source, progress=pr)
            if cl.addgroup(chunkiter, csmap, trp) is None and not emptyok:
                raise util.Abort(_("received changelog group is empty"))
            clend = len(cl)
            changesets = clend - clstart
            self.ui.progress(_('changesets'), None)

            # pull off the manifest group
            self.ui.status(_("adding manifests\n"))
            pr.step = _('manifests')
            pr.count = 1
            chunkiter = changegroup.chunkiter(source, progress=pr)
            # no need to check for empty manifest group here:
            # if the result of the merge of 1 and 2 is the same in 3 and 4,
            # no new manifest will be created and the manifest group will
            # be empty during the pull
            self.manifest.addgroup(chunkiter, revmap, trp)
            self.ui.progress(_('manifests'), None)

            needfiles = {}
            if self.ui.configbool('server', 'validate', default=False):
                # validate incoming csets have their manifests
                for cset in xrange(clstart, clend):
                    mfest = self.changelog.read(self.changelog.node(cset))[0]
                    mfest = self.manifest.readdelta(mfest)
                    # store file nodes we must see
                    for f, n in mfest.iteritems():
                        needfiles.setdefault(f, set()).add(n)

            # process the files
            self.ui.status(_("adding file changes\n"))
            pr.step = 'files'
            pr.count = 1
            while True:
                f = changegroup.getchunk(source)
                if not f:
                    break
                self.ui.debug("adding %s revisions\n" % f)
                fl = self.file(f)
                o = len(fl)
                chunkiter = changegroup.chunkiter(source, progress=pr)
                if fl.addgroup(chunkiter, revmap, trp) is None:
                    raise util.Abort(_("received file revlog group is empty"))
                revisions += len(fl) - o
                files += 1
                if f in needfiles:
                    needs = needfiles[f]
                    for new in xrange(o, len(fl)):
                        n = fl.node(new)
                        if n in needs:
                            needs.remove(n)
                    if not needs:
                        del needfiles[f]
            self.ui.progress(_('files'), None)

            for f, needs in needfiles.iteritems():
                fl = self.file(f)
                for n in needs:
                    try:
                        fl.rev(n)
                    except error.LookupError:
                        raise util.Abort(
                            _('missing file data for %s:%s - run hg verify') %
                            (f, hex(n)))

            newheads = len(cl.heads())
            heads = ""
            if oldheads and newheads != oldheads:
                heads = _(" (%+d heads)") % (newheads - oldheads)

            self.ui.status(_("added %d changesets"
                             " with %d changes to %d files%s\n")
                             % (changesets, revisions, files, heads))

            if changesets > 0:
                p = lambda: cl.writepending() and self.root or ""
                self.hook('pretxnchangegroup', throw=True,
                          node=hex(cl.node(clstart)), source=srctype,
                          url=url, pending=p)

                # make changelog see real files again
                cl.finalize(trp)

            tr.close()
        finally:
            del tr

        if changesets > 0:
            # forcefully update the on-disk branch cache
            self.ui.debug("updating the branch cache\n")
            self.branchtags()
            self.hook("changegroup", node=hex(cl.node(clstart)),
                      source=srctype, url=url)

            for i in xrange(clstart, clend):
                self.hook("incoming", node=hex(cl.node(i)),
                          source=srctype, url=url)

        # never return 0 here:
        if newheads < oldheads:
            return newheads - oldheads - 1
        else:
            return newheads - oldheads + 1
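
    # Illustrative sketch, not part of the original module: decoding
    # addchangegroup()'s return value on the pulling side, per the
    # docstring above.  'repo', 'cg' and 'remote' are assumed objects:
    #
    #   ret = repo.addchangegroup(cg, 'pull', remote.url())
    #   if ret == 0:
    #       print 'nothing changed'
    #   elif ret > 1:
    #       print '%d new heads - consider merging' % (ret - 1)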

    def stream_in(self, remote):
        fp = remote.stream_out()
        l = fp.readline()
        try:
            resp = int(l)
        except ValueError:
            raise error.ResponseError(
                _('Unexpected response from remote server:'), l)
        if resp == 1:
            raise util.Abort(_('operation forbidden by server'))
        elif resp == 2:
            raise util.Abort(_('locking the remote repository failed'))
        elif resp != 0:
            raise util.Abort(_('the server sent an unknown error code'))
        self.ui.status(_('streaming all changes\n'))
        l = fp.readline()
        try:
            total_files, total_bytes = map(int, l.split(' ', 1))
        except (ValueError, TypeError):
            raise error.ResponseError(
                _('Unexpected response from remote server:'), l)
        self.ui.status(_('%d files to transfer, %s of data\n') %
                       (total_files, util.bytecount(total_bytes)))
        start = time.time()
        for i in xrange(total_files):
            # XXX doesn't support '\n' or '\r' in filenames
            l = fp.readline()
            try:
                name, size = l.split('\0', 1)
                size = int(size)
            except (ValueError, TypeError):
                raise error.ResponseError(
                    _('Unexpected response from remote server:'), l)
            self.ui.debug('adding %s (%s)\n' % (name, util.bytecount(size)))
            # for backwards compat, name was partially encoded
            ofp = self.sopener(store.decodedir(name), 'w')
            for chunk in util.filechunkiter(fp, limit=size):
                ofp.write(chunk)
            ofp.close()
        elapsed = time.time() - start
        if elapsed <= 0:
            elapsed = 0.001
        self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
                       (util.bytecount(total_bytes), elapsed,
                        util.bytecount(total_bytes / elapsed)))
        self.invalidate()
        return len(self.heads()) + 1
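
    # Illustrative sketch, not part of the original module: the stream
    # format parsed above, line by line (all counts are ASCII decimals):
    #
    #   <status code>\n              # 0 ok, 1 forbidden, 2 lock failed
    #   <total files> <total bytes>\n
    #   then, for each file:
    #   <store path>\0<size>\n       # followed by exactly <size> raw bytes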

    def clone(self, remote, heads=[], stream=False):
        '''clone remote repository.

        keyword arguments:
        heads: list of revs to clone (forces use of pull)
        stream: use streaming clone if possible'''

        # now, all clients that can request uncompressed clones can
        # read repo formats supported by all servers that can serve
        # them.

        # if revlog format changes, client will have to check version
        # and format flags on "stream" capability, and use
        # uncompressed only if compatible.

        if stream and not heads and remote.capable('stream'):
            return self.stream_in(remote)
        return self.pull(remote, heads)

# used to avoid circular references so destructors work
def aftertrans(files):
    renamefiles = [tuple(t) for t in files]
    def a():
        for src, dest in renamefiles:
            util.rename(src, dest)
    return a

def instance(ui, path, create):
    return localrepository(ui, util.drop_scheme('file', path), create)

def islocal(path):
    return True
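
# Illustrative sketch, not part of the original module: the usual
# read-only entry point into this class goes through mercurial.hg rather
# than instance() directly.  Assumes the current directory is a local
# repository:
#
#   from mercurial import ui as uimod, hg
#   from mercurial.node import short
#   repo = hg.repository(uimod.ui(), '.')
#   for h in repo.heads():
#       print short(h)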