In 7a1ccfe03f74 (treemanifests: set bundle2 part parameter indicating
treemanifest, 2016-01-08), I didn't realize I had to set the parameter
separately for getbundle and unbundle. Having the parameter there on
push allows us to push to an empty repo and have the requirements
updated correctly.
We don't currently have a mechanism for inferring bundle spec strings
from bundle files. This patch adds one.
This will eventually be used to make producing clone bundles
manifests easier.
The clone bundles feature was introduced in Mercurial 3.6 behind an
experimental, disabled-by-default flag. The feature has been enabled
on hg.mozilla.org for a few months and has served many terabytes of
clones. Users have been encouraged to use the feature and reception
has been very positive (mainly due to faster clones as a result of
connecting to a CDN). I have heard no feedback about changing the
feature other than inquiries about when it will be enabled by default.
So, I think the feature is ready to be enabled by default.
This patch renames experimental.clonebundles to ui.clonebundles,
documents the option, and enables it by default. References to the
experimental state of clone bundles have been removed. The remaining
config option docs in clonebundles.py have been removed because they
are redundant with `hg help config`.
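For reference, opting out after this change is a one-line config (sketch,
using the renamed option from this patch):

    [ui]
    # clone bundles are now enabled by default; disable like this
    clonebundles = false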
There are some oddities with behavior of clone bundles. Because clones
with clone bundles are effectively 2 `hg pull` operations, there may be
2 transactions. This could result in hooks running twice. If the
subsequent pull is aborted, it could result in partial rollback and an
incomplete clone. This behavior is a bit wonky and should probably
be documented. If this patch is accepted, I'll send a follow-up to
document it. I don't think this behavior should prevent the feature
being enabled by default. Reworking the clone mechanism to support
interrupted or multi-part clones feels like a major new feature and
something that when implemented can change the hook and rollback
semantics of clone bundles. Besides, partial clone is better than
full rollback and hooks running on initial clone are likely rare, so I
think the impact is minimal.
By adding a mandatory 'treemanifest' parameter in the bundle2 part, we
make it possible for the recipient to set repo requirements before the
manifest revlog is accessed.
Prior to this, a pull of 90k markers (already known locally!) was
making about 2000 calls to obsstore.add, which was repeatedly building
a full set of known markers (in addition to other transaction
overhead). This quadratic behavior accounted for about 50 seconds of a
70 second no-op pull. After this change, we're down to 20 seconds.
While it would seem simplest to just cache the known set for
obsstore.add, this would also introduce issues of correct cache invalidation.
The extra pointless transaction overhead would also remain.
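The shape of the fix is roughly the following (an illustrative sketch, not
the actual obsstore code):

    # build the set of known markers once per batch instead of once per
    # obsstore.add() call
    def addmarkers(obsstore, transaction, markers):
        known = set(obsstore._all)          # computed a single time
        new = [m for m in markers if m not in known]
        if new:
            obsstore.add(transaction, new)  # a single add() call
        return len(new)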
Previously, we passed a bunch of parameters to discovery.checkheads, but all
of the arguments can be fetched out of pushop, which may contain a lot more
useful information for extensions now that pushop is extensible.
Before bundle2, hook output from hook failures was prefixed with
"remote: ". Up to this point with bundle2, the output was converted to
the message to print in an Abort exception. This had 2 implications:
1) It was unclear whether an error message came from the local repo
or the remote
2) The exit code changed from 1 to 255
This patch changes the handling of error:abort bundle2 parts during push
to prefix the error message with "remote: ". This restores the old
behavior.
We still preserve the behavior of raising an Abort during bundle2
application failure. This is a regression from pre-bundle2 because the
exit code changed.
Because we no longer raise an Abort with the remote's message, we needed
to insert a message for the new Abort. So, I invented a new error
message for that. This is another change from pre-bundle2. However, I
like the new error message because it states unambiguously that the
remote aborted the push, which I think is important for users so they
can decide what's next.
Now that we have support for detecting compatible stream clone bundles
in bundle specifications, we can safely add support for applying stream
clone bundles to the clone bundles feature.
Stream clone bundles can only be consumed if the consumer supports the
exact format requirements that were present on the producer.
This patch adds support for encoding format requirements in the bundle
specification string and for verifying that the requirements of a stream
clone bundle are supported by the local repository. If they aren't, we raise
an UnsupportedBundleSpecification, just like we do when an unknown
compression or bundle type is encountered.
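The verification amounts to a set comparison, roughly (a sketch; names
follow the text above but the exact code differs):

    requirements = set(params['requirements'].split(','))
    missing = requirements - repo.supportedformats
    if missing:
        raise error.UnsupportedBundleSpecification(
            'missing support for repository features: %s'
            % ', '.join(sorted(missing)))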
The impetus for this patch is so the clone bundles manifest can
advertise stream clone bundles and so clients can filter out stream
clones with unsupported format requirements. e.g. a stream clone
produced with the not-yet-invented "revlogv2" format will be ignored by
clients that only support "revlogv1."
Sometimes a basic type string is not sufficient for representing the
contents of a bundle. Take bundle2 for example: future bundle2 files may
contain parts that today's bundle2 parser can't read. Another example is
stream clone data. These require clients to support specific
repository formats or they won't be able to read the written files. In
both scenarios, we need to describe additional metadata beyond the outer
container type. Furthermore, this metadata behaves more like an
unordered set, so an order-based declaration format (such as static
strings) is not sufficient.
We introduce support for "parameters" into the bundle specification
string. These are essentially key-value pairs that can be used to encode
additional metadata about the bundle.
Semicolons are used as the delimiter partially to increase similarity to
MIME parameter values (see RFC 2231) and because they are relatively
safe from the command line (although values will need quotes to avoid
interpretation as multiple shell commands). Alternatives considered were
spaces (a bit annoying to encode) and '&' (similar to URL query strings,
but liable to do bad things in a shell if unquoted).
The parsing function now returns a dict of parsed parameters and
consumers have been updated accordingly.
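For illustration, a stripped-down parser for the resulting syntax could look
like this (a toy sketch; the real parsing function performs more validation):

    # toy parser for "<compression>-<type>;key=value;key=value" strings
    def parsespec(spec):
        params = {}
        if ';' in spec:
            spec, rawparams = spec.split(';', 1)
            for rawparam in rawparams.split(';'):
                key, value = rawparam.split('=', 1)
                params[key] = value
        compression, version = spec.split('-', 1)
        return compression, version, params

    # parsespec('gzip-v2;requirements=generaldelta')
    # -> ('gzip', 'v2', {'requirements': 'generaldelta'})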
Now that we have a mechanism to produce and consume streaming clone
bundles, we need to teach the human-facing bundle specification parser
and the internal bundle file header reading code to be aware of this new
format. This patch does so.
For the human-facing bundle specification, we choose the name "packed"
to describe "streaming clone bundles" because the bundle is essentially
a "pack" of raw revlog files that are "packed" together. There should
probably be a bikeshed over the name, especially since it is human
facing.
We don't appear to print error codes elsewhere. The error codes are
inconsistent between at least Linux and OS X and are more trouble than
they are worth. Humans care about the error string more than the code
anyway.
A glob was also added to pave over differences in error strings between
Linux and OS X.
The client now sends a "cbattempted" boolean flag to the "getbundle"
wire protocol command to tell the server whether a clone bundle was
attempted.
The presence of this flag will enable the server to conditionally emit a
bundle2 "output" part advertising the availability of clone bundles to
compatible clients that don't have it enabled.
This is needed so a subsequent patch can conditionally add a bundle2
part to the "getbundle" wire protocol command depending on whether a
clone bundle was attempted.
If a clone bundle persistently fails to apply, users need a way to
disable it so they have a hope of the clone working. Change the hint for
the abort scenario to advertise the config option to disable clone
bundles.
Not all bundles are appropriate for all clients. For example, someone
with a slow Internet connection may want to prefer bz2 bundles over gzip
bundles because they are smaller and don't take as long to transfer.
This is information that a server cannot know on its own. So, we invent
a mechanism for "preferring" server-advertised URLs based on their
attributes.
We could invent a negotiation between client and server where the client
sends its preferences and the sorting/filtering is done server-side.
However, this feels complex. We can avoid complicating the wire protocol
and exposing ourselves to backwards compatibility concerns by performing
the sorting locally.
This patch defines a new config option for expressing preferred
attributes in server-advertised bundles.
At Mozilla, we leverage this feature so clients in fast data centers
prefer uncompressed bundles. (We advertise gzip bundles first because
that is a reasonable default.)
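Our configuration for that setup looks roughly like the following (a sketch;
the attribute values are illustrative and the section placement assumes the
option lands alongside the other experimental client-side knobs):

    [experimental]
    # prefer uncompressed bundles, then gzip, in that order
    clonebundleprefers = COMPRESSION=none, COMPRESSION=gzip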
I consider this an advanced feature. I'm on the fence as to whether it
should be documented in `hg help config`.
An upcoming patch will enable clients to prefer certain bundles over
others. The idea is that we define values of attributes from manifests
that are desirable.
The BUNDLESPEC attribute is a complex value consisting of multiple
parts. Clients may wish to only prefer one of these parts. Having to
specify every combination of BUNDLESPEC would be annoying. So, we
extract the components of BUNDLESPEC into their own attributes so
clients can easily filter on a sub-component.
Server Name Indication (SNI) is commonly used in CDNs and other hosted
environments. Unfortunately, Python <2.7.9 does not support SNI and when
these older Python versions attempt to negotiate TLS to an SNI server,
they raise an opaque error like
"_ssl.c:507: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert
handshake failure."
We introduce a manifest attribute to denote the URL requires SNI and
have clients without SNI support filter these entries.
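A sketch of the client-side filtering (the REQUIRESNI attribute name follows
the text above; the filtering code is illustrative):

    import ssl

    # drop entries requiring SNI when this Python cannot negotiate SNI
    havesni = getattr(ssl, 'HAS_SNI', False)
    entries = [e for e in entries
               if not (e.get('REQUIRESNI') == 'true' and not havesni)]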
Not all clients are capable of reading every bundle. Currently, content
negotiation to ensure a server sends a client a compatible bundle
format is performed at request time. The response bundle is dynamically
generated at request time, so this works fine.
Clone bundles are statically generated *before* the request. This means
that a modern server could produce bundles that a legacy client isn't
capable of reading. Without some kind of "type hint" in the clone
bundles manifest, a client may attempt to download an incompatible
bundle. Furthermore, a client may not realize a bundle is incompatible
until it has processed part of the bundle (imagine consuming a 1 GB
changegroup bundle2 part only to discover the bundle2 part afterwards is
incompatible). This would waste time and resources. And it isn't very
user friendly.
Clone bundle manifests thus need to advertise the *exact* format of the
hosted bundles so clients may filter out entries that they don't know
how to read. This patch introduces that mechanism.
We introduce the BUNDLESPEC attribute to declare the "bundle
specification" of the entry. Bundle specifications are parsed using
exchange.parsebundlespecification, which uses the same strings as the
"--type" argument to `hg bundle`. The supported bundle specifications
are well defined and backwards compatible.
When a client encounters a BUNDLESPEC that is invalid or unsupported, it
silently ignores the entry.
exchange.readbundle() can return 2 different types. We weren't handling
the bundle2 case. Handle it.
At some point we'll likely want a generic API for applying a bundle from
a file handle. For now, create another one-off until we figure out what
the unified bundle API should look like (addressing this is a can of
worms I don't want to open right now).
The old code was tailored to `hg bundle` usage and not appropriate for
use as a general API, which clone bundles will require. The code has
been rewritten to make it more generally suitable.
We introduce dedicated error types to represent invalid and unsupported
bundle specifications. The reason we need dedicated error types (rather
than error.Abort) is because clone bundles will want to catch these
exceptions as part of filtering entries. We don't want to swallow
error.Abort on principle.
Clone bundles require a well-defined string to specify the type of
bundle that is listed so clients can filter compatible file types. The
`hg bundle` command and cmdutil.parsebundletype() already establish the
beginnings of a bundle specification format.
As part of formalizing this format specification so it can be used by
clone bundles, we move the specification parsing bits verbatim to
exchange.py, which is a more suitable place than cmdutil.py. A
subsequent patch will refactor this code to make it more appropriate as
a general API.
Cloning can be an expensive operation for servers because the server
generates a bundle from existing repository data at request time. For
a large repository like mozilla-central, this consumes 4+ minutes
of CPU time on the server. It also results in significant network
utilization. Multiplied by hundreds or even thousands of clients, the
ensuing load can result in difficulties scaling the Mercurial server.
Despite generation of bundles being deterministic until the next
changeset is added, the generation of bundles to service a clone request
is not cached. Each clone thus performs redundant work. This is
wasteful.
This patch introduces the "clonebundles" extension and related
client-side functionality to help alleviate this deficiency. The
client-side feature is behind an experimental flag and is not enabled by
default.
It works as follows:
1) Server operator generates a bundle and makes it available on a
server (likely HTTP).
2) Server operator defines the URL of a bundle file in a
.hg/clonebundles.manifest file (see the example after this list).
3) Client `hg clone`ing sees the server is advertising bundle URLs.
4) Client fetches and applies the advertised bundle.
5) Client performs equivalent of `hg pull` to fetch changes made since
the bundle was created.
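For illustration, a minimal .hg/clonebundles.manifest is one line per bundle:
a URL, optionally followed by key=value attributes (the URL and the attribute
shown here are hypothetical):

    https://hg.example.com/bundles/repo.hg BUNDLESPEC=gzip-v1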
Essentially, the server performs the expensive work of generating a
bundle once and all subsequent clones fetch a static file from
somewhere. Scaling static file serving is a much more manageable
problem than scaling a Python application like Mercurial. Assuming your
repository grows less than 1% per day, the end result is 99+% of CPU
and network load from clones is eliminated, allowing Mercurial servers
to scale more easily. Serving static files also means data can be
transferred to clients as fast as they can consume it, rather than as
fast as servers can generate it. This makes clones faster.
Mozilla has implemented functionality similar to this patch on
hg.mozilla.org using a custom extension. We are hosting bundle files in
Amazon S3 and CloudFront (a CDN) and have successfully offloaded
>1 TB/day in data transfer from hg.mozilla.org, freeing up significant
bandwidth and CPU resources. The positive impact has been stellar and
I believe it has proven its value and deserves inclusion in Mercurial core. I
feel it is important for the client-side support to be enabled in core
by default because it means that clients will get faster, more reliable
clones and will enable server operators to reduce load without
requiring any client-side configuration changes (assuming clients are
up to date, of course).
The scope of this feature is narrowly and specifically tailored to
cloning, despite "serve pulls from pre-generated bundles" being a valid
and useful feature. I would eventually like for Mercurial servers to
support transferring *all* repository data via statically hosted files.
You could imagine a server that siphons all pushed data to bundle files
and instructs clients to apply a stream of bundles to reconstruct all
repository data. This feature, while useful and powerful, is
significantly more work to implement because it requires the server
component have awareness of discovery and a mapping of which changesets
are in which files. Full clone bundles, by contrast, are much simpler.
The wire protocol command is named "clonebundles" instead of something
more generic like "staticbundles" to leave the door open for a new, more
powerful and more generic server-side component with minimal backwards
compatibility implications. The name "bundleclone" is used by Mozilla's
extension and reusing it would cause problems, since there are subtle
behavioral differences in Mozilla's extension.
Mozilla's experience with this idea has taught us that some form of
"content negotiation" is required. Not all clients will support all
bundle formats or even URLs (advanced TLS requirements, etc). To ensure
the highest uptake possible, a server needs to advertise multiple
versions of bundles and clients need to be able to choose the most
appropriate one from that list. The "attributes" in each
server-advertised entry facilitate this filtering and sorting. Their
use will become apparent in subsequent patches.
Initial inspiration and credit for the idea of cloning from static files
belongs to Augie Fackler and his "lookaside clone" extension proof of
concept.
The home of 'Abort' is 'error', not 'util'. However, a lot of code seems to be
confused about that and gives all the credit to 'util' instead of the
hardworking 'error'. In a spirit of equity, we break the cycle of injustice and
give back to 'error' the respect it deserves. And screw that 'util' poser.
For great justice.
In the external pushrebase extension, it is valuable to be able to do some work
without taking the lock (like running expensive hooks). This enables
significantly higher commit throughput.
This patch adds an option to lazily acquire the lock. It means that, in this
mode, all bundle2 part handlers that require writing to the repo must first
call op.gettransaction().
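A sketch of what a part handler looks like in this mode (the part name and
handler are hypothetical):

    # hypothetical bundle2 part handler relying on lazy lock acquisition
    @bundle2.parthandler('myext:rebase')
    def handlerebase(op, part):
        # no lock is held yet; expensive read-only work (e.g. hooks)
        # can happen here without blocking other writers
        runexpensivehooks(op.repo, part)  # hypothetical helper
        # the first gettransaction() call acquires the lock and opens
        # the transaction; repository writes happen after it
        tr = op.gettransaction()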
This is the beginning of client-side support for performing a stream
clone using bundle2. The main bundle2 pull function checks whether to
perform a streaming clone and outputs a message if so.
While we have a duplicate message, it seems easier to have all the
bundle2 console writing in one location and in an easy-to-read
conditional block.
This adds a cache and makes accessing the capabilities slightly simpler,
as you don't need to directly go through the bundle2 module. This will
also help prevent a function-level import in streamclone.py.
This patch arguably isn't necessary. But I think it makes things
slightly nicer.
Upcoming patches will introduce bundle2 based streaming clones. Add
"legacy" to the function name and add a docstring clarifying the intent of
the function.
Just like all the other pull steps. Consistency is good.
This seems a little excessive right now since maybeperformstreamclone is
such a short function. This will be addressed in a subsequent patch.
Stream clones are a special case of clones. Clones are a special case of
pull. Most of the logic for deciding what to do at pull time is in
exchange.py. It makes sense for the stream clone determination to live
there as well.
This patch moves the calling of the stream clone code into pull(). The
checks in streamclone.canperformstreamclone() ensure that we don't
perform a stream clone unless it is possible.
A future patch will convert maybeperformstreamclone() to accept a
pullop to make it consistent with everything else in pull(). It will
also grow some functionality (in case you doubted the necessity of a 4
line function).
An upcoming patch will move the invocation of stream cloning logic to
the normal pull code path (from localrepository.clone). In preparation
for this, we teach pull() and pulloperation about whether a streaming
clone is requested.
The return logic in localrepository.clone() has been reformatted
slightly because of line length issues.
We bulk move functions from exchange.py related to streaming clones.
Function names were renamed slightly to drop a component redundant with
the module name. Docstrings and comments referencing old names and
locations were updated accordingly.
The common ancestor set implementation was made lazy a couple years ago, but
this piece of code still required processing the entire repo by putting set()
around the lazy set. The code was introduced in 984b6b21bf13, a year before the
lazy ancestor set was added.
Dropping the set() shaves 3.5 seconds off of 'push -r' in repos with hundreds of
thousands of commits.
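The change amounts to the following (illustrative, using the changelog's
lazy ancestors API):

    # before: set() forces full materialization of the lazy ancestor set
    ancestors = set(cl.ancestors([cl.rev(n) for n in heads]))
    # after: keep the lazy object; 'rev in ancestors' stays incremental
    ancestors = cl.ancestors([cl.rev(n) for n in heads])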
The assignment of the value from bundle2.processbundle() to 'r' is
unused. It is currently the same as its third argument (if given), and
since that argument may eventually go away (according to the method's
docstring), let's reassign the return value to 'op' instead to better
prepare for that.
Python 2.6 introduced the "except type as instance" syntax, replacing
the "except type, instance" syntax that came before. Python 3 dropped
support for the latter syntax. Since we no longer support Python 2.4 or
2.5, we have no need to continue supporting the "except type, instance"
syntax.
This patch mass rewrites the exception syntax to be Python 2.6+ and
Python 3 compatible.
This patch was produced by running `2to3 -f except -w -n .`.
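The transformation looks like this:

    # old syntax, rejected by Python 3
    try:
        fp = open(path)
    except IOError, inst:
        pass

    # new syntax, valid on Python 2.6+ and Python 3
    try:
        fp = open(path)
    except IOError as inst:
        pass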
The 'getchangegroupraw' function is very simple (two lines) so we inline it in
its only caller. This exposes the 'outgoing' object of the part generator
function, allowing us to add information on the number of changesets contained
in the part in a later changeset. Such information is useful for progress bars.
When using bundle2, the phase pushkey parts are now made mandatory. As a
result, failure to update phases server side will result in the transaction
being aborted.
When using bundle2, the bookmark pushkey parts are now made mandatory. As a
result, failure to update the bookmark server side will result in the transaction
being aborted.
We add a way to register "pushkey failure callback" that will be used if the
push is aborted by a pushkey failure. A part generator adding mandatory pushkey
parts should register a failure callback for all of them. The callback will be
in charge of generating a meaningful abort if this part fails.
If no callback is registered, the error is propagated.
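A sketch of the registration, with illustrative names (the real part
generators differ in detail):

    # mark the pushkey part mandatory and register a failure callback
    part = bundler.newpart('pushkey', mandatory=True)
    part.addparam('namespace', 'bookmarks')

    def failcb():
        raise error.Abort('updating bookmark %s failed' % bookmark)
    pushop.pkfailcb[part.id] = failcb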
Catch PushkeyFailed error in exchange.
The current behavior (with bundle1) is to let the rest of the push succeed if
the pushkey call (phases, bookmarks) failed (this comes from the fact that each
item is sent in its own command).
We kept this behavior with bundle2, which is highly debatable, but let us keep
things as they are now as a start. We are about to enforce 'mandatory' pushkey
parts as 'mandatory' successful, so we need to mark parts as advisory to
preserve the current (debatable) behavior.
All known server implementations have listkeys support with bundle2, but people
in the process of implementing new servers may not. Let's be nice to them.
We are already fetching remote bookmarks to honor the -B option; we
now pass that data to the pull process so it can reuse it. This
prevents a race condition between the initial lookup and the actual
pulling of changesets and bookmarks. Tests are updated to handle this
fact.
We have been feeling the need for this in extensions for quite some time. This
will be used to pass remote bookmark information around in the next changesets.
For efficiency and consistency purposes, remote bookmarks retrieved at the time
the pull command code is doing its lookup will be reused during the core pull
operation.
A second step toward this is to avoid requesting bookmark information in
the bundle2 if we already have it locally.
For efficiency and consistency purposes, remote bookmarks retrieved at the time
the pull command code is doing its lookup will be reused during the core pull
operation.
A first step toward this is to set up the logic avoiding pulling the data again
during the discovery phase if some have already been provided.
All the test changes have been isolated and validated. We are free to turn on
bundle2 as the default exchange protocol.
"To reach a port we must set sail –
Sail, not tie at anchor
Sail, not drift."
On Mozilla's mozilla-beta repository .hgtags fnodes resolution takes
~18s from a clean cache on my machine. This means that the first time
a user runs `hg tags`, `hg log`, or any other command that displays or
accesses tags data, a ~18s pause will occur. There is no output during
this pause. This results in a poor user experience and perception
that Mercurial is slow.
The .hgtags changeset to filenode mapping is deterministic. This
patch takes advantage of that property by implementing support
for transferring .hgtags filenodes mappings in a dedicated bundle2
part. When a client advertising support for the "hgtagsfnodes"
capability requests a bundle, a mapping of changesets to .hgtags
filenodes will be sent to the client.
Only mappings of head changesets included in the bundle will be sent. The
transfer of this mapping effectively eliminates one time tags cache related
pauses after initial clone.
The mappings are sent as binary data. So, 40 bytes per pair of
SHA-1s. On the aforementioned mozilla-beta repository,
659 * 40 = 26,360 raw bytes of mappings are sent over the wire
(in addition to the bundle part headers). Assuming 18s to populate
the cache, we only need to transfer this extra data faster than
1.5 KB/s for overall clone + tags cache population time to be shorter.
Put into perspective, the mozilla-beta repository is ~1 GB in size.
So, this additional data constitutes <0.01% of the cloned data.
The marginal overhead for a multi-second performance win on clones
in my opinion justifies an on-by-default behavior.
All bundle2 servers now support the 'listkeys' part(1), so we'll
always be able to fetch bookmark data at the same time as the
changesets. This should be enough to avoid the one race condition that
this bookmark prefetching is trying to work around. It even allows
future servers to make sure everything is generated from the same
"transaction" if they become capable of such. The current code was
already overwriting the prefetched value with the one in bundle2
anyway. Note that this does not prevent all race conditions related
to bookmarks in 'hg pull'; it makes nothing better and nothing
worse.
Reducing the number of listkeys calls will reduce the latency on pull.
The pre-fetch is also moved into a discovery step because it seems to belong
there.
(1) Because all servers not speaking 'pushkey' parts are compatible with the
'HG2X' protocol only.
We are doing some strange special casing of phase push when:
- the source is a subrepo
- the destination is publishing
- some changesets are still draft on the destination
In that case we do not push phase information (to publish the draft changesets)
because it could break the simple 'clone/pull/push' cycle of subrepos. We have
to detect this case earlier to have bundle2 respect it.
We change the test to check the behavior for both bundle1 and bundle2.
For reasons outlined in the previous commit, we want to make the code
for consuming "stream bundles" reusable. This patch extracts the code
into a standalone function.
Streaming clones are fast because they are essentially tar files.
On mozilla-central, a streaming clone only consumes ~55s CPU time
on clients as opposed to ~340s CPU time for a regular clone or gzip
bundle unbundle.
Mozilla is deploying static file "lookaside" support to our Mercurial
server. Static bundles are pre-generated and uploaded to S3. When a
clone is performed, the static file is fetched, applied, and then an
incremental pull is performed. Unfortunately, on an ideal network
connection this still takes as much wall and CPU time as a regular
clone (although it does save significant server resources).
We like the client-side wall time wins of streaming clones. But we want
to leverage S3-based pre-generated files for serving the bulk of clone
data.
This patch moves the code for producing a "stream bundle" into its
own standalone function, away from the wire protocol. This will enable
stream bundle files to be produced outside the context of the wire
protocol.
A bikeshed on whether exchange is the best module for this function
might be warranted. I selected exchange instead of changegroup because
"stream bundles" aren't changegroups (yet).
I just discovered that we are not displaying ssh server output in real time
anymore. So we can just fall back to the bundle2 output capture for now. This
fixes the race condition issue we were seeing in tests. Re-instating real time
output for ssh would fix the issue too, but let's get the tests to pass first.
The current bundle2 processing was capturing all output. This is nice as it
provides better metadata about what produced which output, but it was changing
two things:
1) adding a prefix "remote: " to "other" output during local push (issue4613)
2) local and ssh push no longer provide real time output (issue4615)
As we are unsure about what form should be used in (1) and how to solve (2), we
disable output capture in these two cases. Output capture can be forced using an
experimental option.
Because bundle2 allows a more precise exchange of obsmarkers during pull, it
sends them in a different order (previously unstable because of the use of
sets). As a result, they are added to the repository in a different order. To
stabilize the order and ensure tests are unchanged when moving from bundle1 to
bundle2, we sort markers when exchanging them.
In the long run, the obsstore will probably not use a linear storage.
Until this changeset, we were only able to save output if an error happened
during the 'transaction.close()' phase. If the 'processbundle' call raised an
exception, the 'bundleoperation' object was never returned, so the reply bundle
was never accessible and no output could be salvaged. We introduce a quick (but
not very elegant) fix to gain access to any reply created during the processing.
This concludes the output-related series. We should hopefully be able,
client-side, to see the whole server output, in the proper order.
The code is now complex enough that a refactoring of it would make sense on
default.
We were capturing all output issued during bundle2 processing, and all output
issued during transaction rollback in case of failure. However, the output
issued during transaction commit was still roaming the land freely. It is now
put back in line.
This lets the user see output from 'pretxnclose' and 'txnclose' (and related) in
the right order.
External hooks used to write directly to stdout and stderr. As a result, their
output was not captured by the bundle2 processing. This resulted in confusing
out-of-order output on the client side. We are now capturing hook output in
this context.
The output from the transaction rollback was not included into the reply bundle.
It was eventually caught by the usual 'unbundle' output capture and sent to the
client but the result was out of order on the client side. We now capture the
output for the transaction release and transmit it the same way as all other
output.
We should probably rethink the whole output capture things but this would not be
appropriate for stable.
There are still multiple cases where output fails to be properly captured;
they will be fixed in later changesets.
The re-handling of output is happening in some 'unbundle' callers. We have to
transmit the output information to this place so we stick it on the exception.
This is the third step in our quest for preserving the server output on error
(issue4594). We want to be able to copy the output part from the aborted reply
into the exception bundle.
If the client allows "pushback", the bundle2 served back by the server may
contain parts that will write to the repository. Such parts may require the
'wlock' (eg: bookmarks), so we acquire it in advance to make sure it gets
acquired before the 'lock'.
A bundle2 may contain bookmark updates (or other extension content) that
require the 'wlock' to be written. As 'wlock' must be acquired before 'lock',
we err on the side of caution and use both in all cases to ensure their
ordering.
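The acquisition therefore follows the standard ordering, roughly:

    from mercurial import lock as lockmod

    wlock = lock = None
    try:
        wlock = repo.wlock()  # working-copy lock first...
        lock = repo.lock()    # ...then the store lock
        # ...process the bundle2 reply...
    finally:
        lockmod.release(lock, wlock)  # release in reverse order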
This argument lets extensions control the order in which bundle2 parts are
generated server side during a pull. This is useful to ensure the transaction
is in a proper state before some actions or hooks happen.
This argument lets extensions control the order in which bundle2 parts are
generated client side during a push. This is useful to ensure the transaction
is in a proper state before some actions or hooks happen.
The series at a17556fc1521::77b112363d48 introduced generic transaction-level
hooking. This makes the experimental bundle2-specific hooks redundant, so we
drop them.
It is finally time to freeze the bundle2 format! To do so we:
- rename HG2Y to HG20,
- drop "b2x:" prefix from all part names,
- rename the capability from "bundle2-exp" to "bundle2",
- rename the hook flag from 'bundle2-exp' to 'bundle2'
This function refactors the logic that decides to use 'bundle2' during an
exchange (pull/push). This will help us stay consistent while transitioning
from the experimental protocol to the final frozen version.
I do not expect this function to survive in the long run, once using 'bundle2'
becomes a simple capability check.
This is also necessary to allow HG2Y support in an extension, to ease the
transition of companies using the experimental protocol in production
(yeah...). Such an extension will be able to wrap this function to use the
experimental protocol in some cases.
To support more bundle2 formats, we need a wider detection of bundle2-family
streams. The various places that were explicitly detecting the full magic
string now match on the first three characters of it.
To support multiple bundle2 formats, we will need a function returning
the proper unbundler according to the header. We introduce such a
function and change the usage in the code base. The function will get
smarter in later changesets.
This is somewhat similar to the dispatching we do for 'HG10' and 'HG11'.
The main target is to allow HG2Y support in an extension, to ease the
transition of companies using the experimental protocol in production
(yeah...). But I have no doubt this will be useful when playing with a
future HG21.
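A sketch of the dispatch (close in spirit, not in detail, to the real
function):

    # illustrative: pick an unbundler class from the 4-byte magic string
    formatmap = {'20': unbundle20}  # version -> unbundler class

    def getunbundler(ui, fp, magicstring=None):
        magicstring = magicstring or changegroup.readexactly(fp, 4)
        magic, version = magicstring[:2], magicstring[2:]
        if magic != 'HG':
            raise error.Abort('not a Mercurial bundle')
        unbundlerclass = formatmap.get(version)
        if unbundlerclass is None:
            raise error.Abort('unknown bundle version %s' % version)
        return unbundlerclass(ui, fp)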
When the 'kwargs' variable was added in 038ae1f12393 (bundle2: allow
pulling changegroups using bundle2, 2014-04-01), it could contain only
'bundlecaps', 'common' and 'heads', so the check for 'format' would
always be false. Since then, _pullbundle2extraprepare() has been added
for hooks, but it seems unlikely that they would pass a 'format' key.
The conditional was a bit too narrow and produced a buggy result when a node
was present in both common and heads (because it pleased the discovery) and
was locally known but filtered.
This resulted in a buggy getbundle request and a server-side crash.
We have been running discovery on the unfiltered repository for quite some
time. This was aimed at two things:
- save some bandwidth by preventing the repushing of common but hidden
  changesets
- allow phase changes on secret/hidden changesets on bare push.
The cost of this unfiltered discovery combined with evolution is actually
really high. Evolution easily creates thousands of hidden heads, and the
discovery is going to try to find out whether each of them is common or not.
For example, pushing from my development mercurial repository implies 17
discovery round-trips.
The benefits are rare corner cases while the drawbacks are massive. So we run
the discovery on a filtered repository again.
We add a hack to detect remote heads that are known locally and add them to
the common set anyway, so the good behavior in most of the corner cases should
remain. But this will not work in all cases.
This brings my discovery phase back from 17 round-trips to 1 or 2.
This patch series is intended to allow bundle2 push reply part handlers to
make changes to the local repository; it has been developed in parallel with
an extension that allows the server to rebase incoming changesets while applying
them.
This diff adds an experimental config option "bundle2.pushback" which provides
a transaction to the reply unbundler during a push operation. This behavior is
opt-in because of potential security issues: the response can contain any part
type that has a handler defined, allowing the server to make arbitrary changes
to the local repository.
This patch series is intended to allow bundle2 push reply part handlers to
make changes to the local repository; it has been developed in parallel with
an extension that allows the server to rebase incoming changesets while applying
them.
Most pushes already open a transaction in order to sync phase information.
This diff replaces that transaction with one that spans the entire push
operation.
This transaction will be used in a later patch to guard repository changes
made during the reply handler.
This patch series is intended to allow bundle2 push reply part handlers to
make changes to the local repository; it has been developed in parallel with
an extension that allows the server to rebase incoming changesets while applying
them.
Aside from the transaction logic, the pulloperation class is used primarily as
a logic-free data structure for storing state information. This diff extracts
the transaction logic into its own class that can be shared with push
operations.
The phase-syncing code was using bundle2 if the remote supported it. It was
doing so without regard to bundle2 activation on the client. Moreover, the
phase push is now properly included in the unified bundle2 push, so the extra
code in syncphase should be useless. If the remote is bundle2-enabled, the
phases should already be synced.
The buggy verification code was leading to a crash when a 3.2 client was pushing
to a 3.1 server. The real bundle2 path detected that their versions were
incompatible, but the syncphase code failed to, sending an incompatible bundle2
to the server.
We drop the useless and buggy code as a result. The "else" clause is
de-indented in the process.
This mirrors the API for 'pending' and 'finalize' callbacks. I do not have
immediate usage planned for it, but I'm sure some callback will be happy to
access transaction related data.
Changeset d79feb65f3ee added advertising of the supported changegroup version
through the new 'b2x:changegroup' capability. However, this capability is not
new and has been around since 3.1 with an empty value. This makes new clients
unable to push to 3.2 servers through bundle2, as they cannot find a common
changegroup version to use from an empty list.
Treating an empty 'b2x:changegroup' value as an old client fixes it.
The 'delayupdate' method now takes a transaction object and registers its
'_writepending' method for execution in 'transaction.writepending()'. The hook
can then use 'transaction.writepending()' directly.
At some point this will allow other files to be created during
writepending.
When using bundle2, we find the common subset of supported changegroup packers
and we pick the max of them. This allows the use of generaldelta-aware
changegroups through bundle2.
When using bundle2, we find the common subset of supported changegroup packers
and we pick the max of them. This allows the use of generaldelta-aware
changegroups through bundle2.
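The negotiation is essentially the following (an illustrative sketch; the
capability and map names may differ from the real code):

    # intersect local and remote version sets, then pick the highest
    localversions = set(changegroup.packermap)           # e.g. {'01', '02'}
    remoteversions = set(b2caps.get('changegroup', ()))
    common = localversions & remoteversions
    if not common:
        raise error.Abort('no common changegroup version')
    version = max(common)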
The functions getchangegroup, getlocalchangegroup and getsubset now each
have a version ending in -raw. The raw versions return the chunk generator
from the changegroup packer directly, without wrapping it in a chunkbuffer
and unpacker. This avoids extra chunkbuffers in the bundle2 code path.
Also, the raw versions can be extended to support alternative packers
in the future, to be used from bundle2.
48062b2d0f30 regressed the behavior of pushing an unchanged bookmark to
a remote. Before that commit, pushing an unchanged bookmark would result
in "exporting bookmark @" being printed. After that commit, we now see
an incorrect message "bookmark %s does not exist on the local or remote
repository!"
This patch fixes the regression introduced by 48062b2d0f30 by having
the bookmark error reporting code filter identical bookmarks and adds
a test for the behavior.
bookmarks.compare() previously lumped identical bookmarks in the
"invalid" bucket. This patch adds a "same" bucket.
An 8-tuple for holding this state is pretty gnarly. The return value
should probably be converted into a class to increase readability. But
that is beyond the scope of a patch intended to be a late arrival to
stable.
Hooks that run after the transaction need to be able to touch the
repository. So we need to run them after the lock release. This is
similar to what the "changegroup" hook is doing in the
`addchangegroup` function.
We are changing all integers that denote the size of a chunk to read to int32.
There are two main motivations for that.
First, we change everything to the same width (32 bits) to make it possible for
a reasonably agnostic actor to forward a bundle2 without any extra processing.
With this change, this could be achieved by just reading int32s and forwarding
chunks of the size read. A bit of smartness would be needed to detect the end
of the stream, but nothing too complicated.
Second, we need some capacity to transmit special information during the bundle
processing. For example, we would like to be able to raise an exception while a
part is being read if this exception happened while the part was generated.
Having signed integers lets us use negative numbers to trigger special events
during the parsing of the bundle.
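In terms of the stream, a parser (or agnostic forwarder) then just does
something like this (a sketch; the special-event handler is hypothetical):

    import struct

    def readchunk(fp):
        # all sizes are signed big-endian int32
        size = struct.unpack('>i', fp.read(4))[0]
        if size < 0:
            # negative sizes are reserved for special events, e.g. an
            # exception raised while the part was being generated
            return handlespecial(size)   # hypothetical
        return fp.read(size)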
The format is renamed from B2X to B2Y because this breaks binary
compatibility. The B2X format support is dropped. It was experimental to
allow this kind of thing. All elements not directly related to the binary
format remain flagged "b2x" because they are still compatible.
We need a wider set of hooks to process all the changes that happened during
the pull transaction. We reuse the experimental `b2x-transactionclose` hook set
from the server's unbundle for consistency. This hook is experimental and will
not remain as-is forever, but this will open the door for experimentation in
3.2.
The source information can, and should, be applied once when opening the
transaction for the pull. This will let elements processed within a bundle2 be
aware of it and opens the door to running a set of hooks when closing this
pull transaction. This is similar to what is done in the server's unbundle
call.
We store the source and url of the current data into `transaction.hookargs`;
this lets us inherit it from upper layers that may have created a much wider
transaction. We have to modify bundle2 at the same time to register the source
and url in the transaction. We have to do it in the same patch; otherwise, the
`addchangegroup` call would fill these values and the hook calling would crash
because of the duplicated 'source' and 'url' arguments passed to the hook call.
A bundle2 may contain multiple parts adding changegroups, in which case there
are multiple operation records for changegroups, each with its own return
value. Those multiple return values are aggregated in a single cgresult value
for the whole operation.
As can be seen in the associated test case, the situation with hooks is not
really the best, but without deeper thoughts and changes, we can't do much
better. Hopefully, things will be improved before bundle2 is enabled by default.
In the meanwhile, multiple changegroups are not expected to be in widespread
use, and even less expected to be used for pushes. Also, clients cloning or
pulling a bundle2 with multiple changegroups are not expected to have
changegroup hooks anyway.
The push process uses a `stepsdone` attribute instead of a `todosteps` one
(with the logic swapped). We unify the two processes by picking the
`stepsdone` version.
I feel like `stepsdone` better fits extensions that would want to extend the
push exchange process.
We apply the same approach as for push and make the discovery extensible. There
is only one user in core right now, but we already know we'll need something
smarter for obsmarkers. In fact the evolve extension could use this to cleanly
extend discovery.
The main motivation for this change is consistency between push and pull.
We mimic what was done for `push` for similar reasons. We are about to drop
`localrepo.pull` (for consistency with dropping `localrepo.push`) and we had
better have an API as extensible as `push`'s.
Find explanations about the localrepo.push removal in 88d9d4ec499e.
Retrieving bookmarks before obsmarkers will avoid turning some changesets hidden
right before making them visible again if a bookmark keeps them visible.
The discovery phases for bookmarks now use the list of explicitly pushed bookmarks
to do addition, removal and overwriting.
Tests are impacted because this reduces the number of listkeys calls issued,
removes some duplicated messages and improves the accuracy of some messages.
To gather all the bookmark pushing actions together, we need code performing
those actions to be ready for them. We need to be able to produce different
messages for different actions.
There is no reason for bookmarks to get special treatment. As a first step we
move the code as-is into the `exchange.pull` function. Integration with the
rest of the flow will come later.
Adding bookmarks to pull means that most clone paths are now pulling bookmarks
through pull. We ensure that bookmark-update messages are properly suppressed in
that case.
In test-pull-http.t the 'requesting all changes' message disappears because we
now get the authentication error on the `listkeys` command before such a
message is printed.
This part is responsible for adding new bookmarks on the remote. Before that,
it was done on its own in `commands.push`. The export is still not integrated
with the rest of the push process, but at least it now dwells in the right
function.
Returning the pushop object gives access to more information (upcoming bookmark
push result for example). `localrepo.push` currently extracts the `cgresult` for
callers.
We are about to introduce more results-related attributes on pushop (for
bookmarks) so we need a more distinctive name. We now use `cgresult` as
`pulloperation` does.
The primary goal is to make it easier for extensions to alter how bundle2
parts are laid out. They now can use the getbundle2partsgenerator decorator
to add new parts, or directly act on getbundle2partsmapping to wrap existing
part functions.
Note the 'request for bundle10 must include changegroup' error was kept
under the same conditions as before, although the logic changes don't make
it obvious.