# peer.py - repository base classes for mercurial
#
# Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

from . import (
    error,
    util,
)

# abstract batching support

class future(object):
    '''placeholder for a value to be set later'''
    def set(self, value):
        if util.safehasattr(self, 'value'):
            raise error.RepoError("future is already set")
        self.value = value

class batcher(object):
    '''base class for batches of commands submittable in a single request

    All methods invoked on instances of this class are simply queued and
    return a future for the result. Once you call submit(), all the queued
    calls are performed and the results set in their respective futures.
    '''
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def call(*args, **opts):
            resref = future()
            self.calls.append((name, args, opts, resref,))
            return resref
        return call
    def submit(self):
        raise NotImplementedError()
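The queue-then-resolve mechanics of ``future`` and ``batcher`` can be shown with a minimal self-contained sketch. This is an illustration only, not the module's own classes: ``Future``/``Batcher`` are stand-alone re-creations of the pattern above, and ``heads`` is just a placeholder method name.

```python
class Future(object):
    '''placeholder for a value to be set later'''
    def set(self, value):
        if hasattr(self, 'value'):
            raise ValueError('future is already set')
        self.value = value

class Batcher(object):
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        # any attribute lookup produces a "method" that only queues the
        # call and hands back a future for its eventual result
        def call(*args, **opts):
            resref = Future()
            self.calls.append((name, args, opts, resref))
            return resref
        return call

b = Batcher()
f = b.heads()        # nothing runs yet; ('heads', (), {}) is merely queued
# len(b.calls) is now 1; f.value does not exist until someone calls f.set()
f.set(['tip'])
```

The point of the indirection is that callers can write straight-line code against the futures while an implementation of submit() decides how (or whether) to batch the queued calls.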

class iterbatcher(batcher):

    def submit(self):
        raise NotImplementedError()

    def results(self):
        raise NotImplementedError()

class localiterbatcher(iterbatcher):
    def __init__(self, local):
        super(localiterbatcher, self).__init__()
        self.local = local

    def submit(self):
        # submit is a no-op for a local iter batcher: the queued calls
        # are performed directly as results() is iterated
        pass

    def results(self):
        for name, args, opts, resref in self.calls:
            resref.set(getattr(self.local, name)(*args, **opts))
            yield resref.value
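A sketch of the local execution strategy above: submit() does nothing, and each queued call runs lazily against the wrapped object while results() is iterated. ``Echo`` is a hypothetical stand-in for a real peer, and an explicit ``queue`` method replaces the ``__getattr__`` magic for clarity.

```python
class Future(object):
    def set(self, value):
        self.value = value

class LocalIterBatcher(object):
    def __init__(self, local):
        self.local = local
        self.calls = []
    def queue(self, name, *args, **opts):
        # explicit stand-in for batcher.__getattr__: record the call,
        # return a future for its result
        resref = Future()
        self.calls.append((name, args, opts, resref))
        return resref
    def results(self):
        # perform each call directly on the wrapped object, filling the
        # future and yielding the value in queue order
        for name, args, opts, resref in self.calls:
            resref.set(getattr(self.local, name)(*args, **opts))
            yield resref.value

class Echo(object):
    def greet(self, who):
        return 'hi ' + who

b = LocalIterBatcher(Echo())
f = b.queue('greet', 'world')
out = list(b.results())      # executes the call; also fills f.value
```

Because execution happens inside the generator, a caller that never iterates results() never runs the calls at all, which mirrors the lazy behavior of the class above.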

def batchable(f):
    '''annotation for batchable methods

    Such methods must implement a coroutine as follows:

    @batchable
    def sample(self, one, two=None):
        # Build list of encoded arguments suitable for your wire protocol:
        encargs = [('one', encode(one),), ('two', encode(two),)]
        # Create future for injection of encoded result:
        encresref = future()
        # Return encoded arguments and future:
        yield encargs, encresref
        # Assuming the future to be filled with the result from the batched
        # request now. Decode it:
        yield decode(encresref.value)

    The decorator returns a function which wraps this coroutine as a plain
    method, but adds the original method as an attribute called "batchable",
    which is used by remotebatch to split the call into separate encoding and
    decoding phases.
    '''
    def plain(*args, **opts):
        batchable = f(*args, **opts)
        encargsorres, encresref = next(batchable)
        if not encresref:
            return encargsorres # a local result in this case
        self = args[0]
        encresref.set(self._submitone(f.__name__, encargsorres))
        return next(batchable)
    setattr(plain, 'batchable', f)
    return plain
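The two-phase coroutine protocol that @batchable expects can be driven by hand, the way plain() drives it for an unbatched call. In this sketch ``FakePeer`` is hypothetical; its ``_submitone`` matches only the name and call shape used by plain() above, and the upper/lower transforms stand in for real wire encoding and decoding.

```python
class Future(object):
    def set(self, value):
        self.value = value

class FakePeer(object):
    def _submitone(self, name, encargs):
        # pretend the server answered a single encoded command
        return 'result-for-%s(%s)' % (name, dict(encargs)['one'])

    def sample(self, one):
        # phase 1: encode arguments, hand back a future for the reply
        encargs = [('one', one.upper())]
        encresref = Future()
        yield encargs, encresref
        # phase 2: decode the reply once the future has been filled
        yield encresref.value.lower()

peer = FakePeer()
gen = peer.sample('abc')            # a coroutine; nothing sent yet
encargs, encresref = next(gen)      # phase 1: collect encoded arguments
encresref.set(peer._submitone('sample', encargs))
result = next(gen)                  # phase 2: decoded result
```

Splitting encoding from decoding at the yield is exactly what lets a batching submitter run phase 1 of many such coroutines, send one combined request, and then resume each coroutine for phase 2.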