Summary:
Using `testedwith = 'internal'` is not a good habit [1]. Having it
auto-updated in batch would also introduce a lot of churn. This diff makes
them "ships-with-fb-hgext". If we do want to fill in the ideal "testedwith"
information, we could put it in a central place, like a "fbtestedwith"
extension rewriting those "ships-with-fb-hgext" values on the fly.
Maybe having in-repo tags for tested Mercurial releases is also a good idea.
[1]: www.mercurial-scm.org/repo/hg/rev/2af1014c2534
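The convention is just a module-level string at the top of each extension; a
minimal sketch:

```python
# Module-level marker each fb-hgext extension carries instead of
# 'internal'. A hypothetical "fbtestedwith" extension could rewrite
# this value to real tested Mercurial versions on the fly.
testedwith = "ships-with-fb-hgext"
```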
Test Plan: `arc lint`
Reviewers: #sourcecontrol, rmcelroy
Reviewed By: rmcelroy
Subscribers: rmcelroy, mjpieters
Differential Revision: https://phabricator.intern.facebook.com/D4244689
Signature: t1:4244689:1480440027:3dc18d017b48beba1176fbfd120351889259eb4b
Summary:
The debugindex and debugindexdot commands have moved and are not registered
unless you import the new mercurial.debugcommands module.
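A hedged sketch of the compatibility import (the mercurial.debugcommands
module name is real for newer Mercurial; the fallback branch is an assumption
for older versions where the commands still live elsewhere):

```python
def registerdebugcommands():
    # In newer Mercurial, debugindex/debugindexdot live in
    # mercurial.debugcommands and only get registered once that module
    # is imported; on older versions the import simply fails.
    try:
        from mercurial import debugcommands  # noqa: F401 (import has side effects)
        return True
    except ImportError:
        return False
```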
Test Plan:
Run
  hg --config=extensions.remotefilelog=fb-hgext/remotefilelog help
and observe that you get help info, rather than the error
  hg: unknown command 'debugindex'
then run the fb-hgext test suite.
Reviewers: rmcelroy, quark, simonfar
Reviewed By: simonfar
Subscribers: mjpieters, #mercurial
Differential Revision: https://phabricator.intern.facebook.com/D4244047
Signature: t1:4244047:1480427216:dcaa1ca441ea189bdf68f1f619b4078d8c1d09dc
Summary:
We've gotten reports of hg gc failing on some service machines because
`peer._repo.name` complains that repo has no attribute 'name'. I'm not sure how
this could happen, but it makes sense to make the hg gc loop more robust to the
possibility that the repos in the 'repos' file have changed their configuration
since they were added to the file.
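A minimal standalone sketch of such a defensive loop, with hypothetical names
(the 'repos' file parsing and the repo API here are stand-ins, not the real
hg gc code):

```python
def gcrepos(repopaths, openrepo):
    # Sketch: skip entries whose configuration changed since they were
    # recorded in the 'repos' file, instead of crashing the whole gc run.
    names = []
    for path in repopaths:
        try:
            repo = openrepo(path)
            name = repo.name  # may raise AttributeError on odd configs
        except (IOError, OSError, AttributeError):
            continue  # repo moved, or is no longer a remotefilelog repo
        names.append(name)
    return names
```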
Test Plan: Ran the tests
Reviewers: #mercurial, simonfar
Reviewed By: simonfar
Subscribers: mjpieters
Differential Revision: https://phabricator.intern.facebook.com/D4072719
Signature: t1:4072719:1477385020:24d532b9442292ce8234cc91bc7de503d3b0c88f
Summary:
Upstream changed the signature of computenonoverlap. Let's change our wrapping
of it to be more robust to signature changes.
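A sketch of the signature-agnostic wrapper: the point is forwarding unknown
arguments rather than hardcoding the current signature (the function name
mirrors the real one, but the body is illustrative):

```python
def computenonoverlap(orig, repo, c1, c2, *args, **kwargs):
    # Forward any positional or keyword arguments upstream may add in
    # the future, instead of pinning ourselves to today's signature.
    u1, u2 = orig(repo, c1, c2, *args, **kwargs)
    # ... remotefilelog-specific filtering of u1/u2 would happen here ...
    return u1, u2
```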
Test Plan: ./run-tests.py test-remotefilelog* test-check*
Reviewers: #mercurial, quark
Reviewed By: quark
Subscribers: mjpieters
Differential Revision: https://phabricator.intern.facebook.com/D4062705
Tasks: 14037455
Signature: t1:4062705:1477096049:5a011a7a5edf9bb01475694777c738cdb02453f5
Summary:
This fixes all the pyflakes and module errors for the main remotefilelog
code base.
Test Plan: ./run-tests.py test-check* test-remotefilelog*
Reviewers: #mercurial, quark
Reviewed By: quark
Subscribers: mjpieters
Differential Revision: https://phabricator.intern.facebook.com/D4055537
Signature: t1:4055537:1477049663:ee904d311d17d3659e055e2c109c68c9023cfd1f
Summary:
764cd9916c94 recently introduced code that was unconditionally checking the
repo.includepattern and repo.excludepattern attributes on a local repository
without first checking if this is a shallow repository. These attributes only
exist on shallow repositories, causing "hg pull" to crash on non-shallow
repositories. This crash wouldn't happen in simple circumstances, since the
remotefilelog extension only gets fully set up once a shallow repository object
has been created. However, when using chg you can end up with scenarios where a
non-shallow repository is used in the same hg process after a shallow one.
This refactors the code to now store the local repository object on the remote
peer rather than trying to store the individual shallow, includepattern, and
excludepattern attributes.
Overall this code does still feel a bit janky to me -- the rest of the peer API
is independent of the local repository, but the _callstream() wrapper cares
about the local repository being referenced. It seems like we should ideally
redesign the APIs so that _callstream() receives the local repository data as
an argument (or we should make the peer <--> local repository association more
formal and explicit if we think it's better to force an association here).
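A standalone sketch of the refactor (the attribute name is hypothetical; the
real code stores whatever the _callstream wrapper needs):

```python
def exchangepull(orig, repo, remote, *args, **kwargs):
    # Sketch: hang the local repo off the peer so the peer's
    # _callstream can read shallow, includepattern and excludepattern
    # from it, instead of copying those three attributes onto the peer
    # individually (which crashed for non-shallow repos).
    remote._localrepo = repo
    return orig(repo, remote, *args, **kwargs)
```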
Test Plan: Added a new test which triggered the crash, but passes with these changes.
Reviewers: ttung, mitrandir, durham
Reviewed By: durham
Subscribers: net-systems-diffs@, yogeshwer
Differential Revision: https://phabricator.intern.facebook.com/D3756493
Tasks: 12823586
Signature: t1:3756493:1471971600:9666e9c31bf59070c3ace0821d47d322671eb5b1
Previously we were kicking off background repacks even for non remotefilelog
repos. Moving the repack to be inside the remotefilelog requirement check will
prevent this.
Calling wrapfunction on the remotefilepeer (sshpeer) object in the exchangepull
function introduces a reference cycle. Hence, this object will not be deleted
until the process dies. This is not a big issue for processes with a short
lifetime (e.g. those launched from the command line).
However, for persistent processes (e.g. TortoiseHg), this can lead to multiple
lingering ssh connections to the server (actually one per pull operation).
The fix is to not wrap remotefilepeer._callstream. This method is defined
directly on the remotefilepeer object. The required repo data is made available
on the remotefilepeer object by monkeypatching that object in the exchangepull
function.
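The cycle itself is plain Python, nothing sshpeer-specific. A standalone sketch
(class and function names are illustrative): wrapping a bound method and
storing the wrapper back on the same instance keeps the instance alive until a
cycle-collection pass runs.

```python
import gc
import weakref

class Peer(object):
    def _callstream(self, cmd):
        return "stream:" + cmd

def wrap_instance(peer):
    orig = peer._callstream            # bound method: holds a ref to peer
    def wrapped(cmd):
        return orig(cmd)               # closure: holds a ref to orig
    peer._callstream = wrapped         # peer -> wrapped -> orig -> peer

peer = Peer()
ref = weakref.ref(peer)
wrap_instance(peer)
del peer
lingering = ref() is not None          # the cycle keeps the peer alive...
gc.collect()
collected = ref() is None              # ...until the cycle collector runs
```

Attaching plain data to the peer (as this fix does) creates no such cycle,
since the repo does not reference the peer back.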
In some situations the remotefilelog setup logic could be called, which will
wrap certain functions, and then later a call will happen to a repo that wasn't
remotefilelog which will run some remotefilelog code because of the wrapping.
Normally we take care of this by checking for the remotefilelog requirement. We
missed it in this one spot though.
Before this patch, debugremotefilelog and verifyremotefilelog would
crash if not given a path. Also, many commands would accept arguments
they then ignored.
Summary:
Previously a bunch of different places accessed the cachepath through ui.config
directly. This is a problem because we need to resolve any environment variables
in the path, and some spots didn't do this. So let's unify all accesses through
a helper function that takes care of the environment variables.
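A minimal sketch of such a helper, under an assumed name (the real one wraps
ui.config; here only the expansion step is shown):

```python
import os

def getcachepath(configured):
    # Hypothetical helper unifying cachepath access: every caller gets
    # environment variables (and ~) in the configured path expanded,
    # instead of each call site remembering to do it itself.
    if configured is None:
        return None
    return os.path.expandvars(os.path.expanduser(configured))
```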
Test Plan: Added a test
Reviewers: mitrandir, lcharignon, #sourcecontrol, ttung, simonfar
Reviewed By: simonfar
Subscribers: simonfar
Differential Revision: https://phabricator.intern.facebook.com/D3385583
Signature: t1:3385583:1464971813:5b9ee5ed3d6ff9f1a78cb9e0269e433844758c9d
Previously, background repacks would only repack pack files, which meant there
was no automated way to repack loose remotefilelog files without manually
running 'hg repack'. This allows incremental repacks to also pack the loose
files.
It also changes the config knob for background repacks, so we can enable pack
file usage without the server having to support it just yet.
Summary:
This runs the incremental background repacking logic after hg pull.
As part of adding tests, I also added a 'hg debugwaitonrepack' function that
will wait until any pending repack is done before returning, so the tests can
wait on repacks without so many sleeps.
Test Plan: Adds a test
Reviewers: mitrandir, #mercurial, ttung, rmcelroy
Reviewed By: rmcelroy
Subscribers: rmcelroy
Differential Revision: https://phabricator.intern.facebook.com/D3306526
Signature: t1:3306526:1463696933:9e27daf0c08076468e8f365a3c372fa7d4f56bde
Summary:
This adds a --incremental flag to the hg repack command. This flag causes repack
to look at the distribution of pack files in the repo and performs the most
minimal repack to keep the repo in good shape. Currently it's only implemented
for datapacks.
The new remotefilelog.datagenerations config contains a list of the sizes for
the different generations of pack files. For instance:
  [remotefilelog]
  datagenerations=1GB
    100MB
    1MB

designates four generations: packs over 1GB, packs over 100MB, packs over 1MB,
and implicitly packs under 1MB. The incremental algorithm will try to keep each
generation to less than 3 pack files (prioritizing the larger generations
first). When performing a repack it will grab at least 2 packs, and will grab
more if the total pack size is less than 100MB (since repacking at that level is
pretty cheap).
I have no idea if this is a good algorithm. We'll have to see and iterate.
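A rough standalone sketch of the selection logic described above, under
assumed names and the thresholds from the example (the real generation
handling lives in the repack module):

```python
def chooseincrementalpacks(sizes, generations=(1 << 30, 100 << 20, 1 << 20),
                           maxpacks=3, cheaplimit=100 << 20):
    # Bucket pack sizes into generations, largest threshold first; the
    # final bucket implicitly holds packs under the smallest threshold.
    buckets = [[] for _ in range(len(generations) + 1)]
    for size in sizes:
        for i, limit in enumerate(generations):
            if size >= limit:
                buckets[i].append(size)
                break
        else:
            buckets[-1].append(size)
    # Repack the largest generation that exceeds maxpacks files: take at
    # least two packs, and keep taking more while the total stays cheap.
    for bucket in buckets:
        if len(bucket) <= maxpacks:
            continue
        bucket.sort()
        chosen = bucket[:2]
        for size in bucket[2:]:
            if sum(chosen) + size <= cheaplimit:
                chosen.append(size)
        return chosen
    return []
```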
Test Plan: Adds a test
Reviewers: mitrandir, #mercurial, ttung, rmcelroy
Reviewed By: rmcelroy
Subscribers: rmcelroy
Differential Revision: https://phabricator.intern.facebook.com/D3306523
Signature: t1:3306523:1463697129:c87f4a397ef357b5ca4a80d01e9a6ca4d61f9d3d
Summary:
A future patch will be adding incremental repack, so let's move our repack logic
to the repack module so it's easier to refactor and extend.
Also adds a message for when a background repack kicks off (since we'll be
calling that from other places eventually).
Test Plan: Adds a test
Reviewers: mitrandir, #mercurial, ttung, rmcelroy
Reviewed By: rmcelroy
Subscribers: rmcelroy
Differential Revision: https://phabricator.intern.facebook.com/D3306521
Signature: t1:3306521:1463602886:cece3d517f0672b829702866482c902812f9ae27
Summary: Some simple debug commands to print the contents of each pack.
Test Plan: Ran it manually, and added a simple test
Reviewers: #mercurial, ttung, mitrandir
Reviewed By: mitrandir
Differential Revision: https://phabricator.intern.facebook.com/D3277233
Signature: t1:3277233:1463085196:c54fc875d536a96150bb1461b77247a5d7a9402c
Summary:
This allows triggering a repack that can be run in the background. In the future
we will trigger this automatically under certain circumstances (like too many
pack files).
Test Plan: Added a test
Reviewers: #mercurial, ttung, quark
Reviewed By: quark
Subscribers: quark
Differential Revision: https://phabricator.intern.facebook.com/D3261161
Signature: t1:3261161:1462398568:5ae25f3e5a9acd0f4b34490b34a62be33cc69e3c
Summary:
This adds a lock that limits us to running only one repack at a time. We also
add a simple prerepack hook to allow the tests to insert a sleep to test this
functionality.
Test Plan: Added a test
Reviewers: #mercurial, ttung, mitrandir
Reviewed By: mitrandir
Differential Revision: https://phabricator.intern.facebook.com/D3260428
Signature: t1:3260428:1462393311:3e1bf5dd047e7f3521679ca7640b448f5e784913
Summary:
Since pack files should never change after they are created, let's create them
with read-only permissions. It turns out that the Mercurial vfs doesn't apply
the correct permissions to files created by mkstemp (and we have to use mkstemp
since we don't know the name of the file until after we've written all the data
to it), so we have to manually call the permission fixing code.
We also need to fix our mmap calls to be readonly now, otherwise we get a
runtime permission denied exception.
Test Plan: Added a test
Reviewers: #mercurial, ttung, mitrandir
Reviewed By: mitrandir
Differential Revision: https://phabricator.intern.facebook.com/D3255816
Signature: t1:3255816:1462321201:dff4fb4c9301d67a77043ecc1d96262bb5d6a54a
Summary:
Instead of passing in a path and performing joins ourselves, let's use an
opener. This will help handle all the file permission edge cases.
Test Plan: Ran the tests
Reviewers: #mercurial, ttung, mitrandir
Reviewed By: mitrandir
Differential Revision: https://phabricator.intern.facebook.com/D3255165
Signature: t1:3255165:1462393836:38a28c850a0dc06838d9c17672d3dffd9903bbd7
Summary:
Previously, hg repack would repack all the objects in all the stores and dump the
new packs in .hg/store/packs. Initially we only want to repack the shared cache
though, so let's change repack to only operate on shared stores, and to write
out the new packs to the hgcache, under the appropriate repo name.
In a future patch I'm going to go through all this store stuff and replace all
uses of os.path and direct file reads/writes with a mercurial vfs.
Test Plan:
Ran repack in a large repo and verified packs were produced in
$HGCACHE/$REPONAME/packs
Ran hg cat on a file to verify that it read the data from the pack and did not do any remotefilelog network calls.
Reviewers: lcharignon, rmcelroy, ttung, quark, mitrandir
Reviewed By: mitrandir
Differential Revision: https://phabricator.intern.facebook.com/D3250213
Signature: t1:3250213:1462315927:694661795141e2c869ba661a54cea8f4b90823df
Summary:
Previously, if a repack failed, it would leave temporary pack files laying
around. By adding enter/exit functions to mutable packs, we can guarantee
cleanup happens.
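A standalone sketch of the idea, with illustrative names (the real mutable
pack classes carry more state, but the enter/exit cleanup contract is the
point):

```python
import os
import tempfile

class mutablepack(object):
    # Using the pack as a context manager guarantees the temporary file
    # is deleted if the repack fails before the pack is finalized.
    def __init__(self, destdir):
        fd, self.tmppath = tempfile.mkstemp(dir=destdir, suffix=".tmppack")
        self.fileobj = os.fdopen(fd, "wb")
        self.finalized = False

    def __enter__(self):
        return self

    def __exit__(self, exctype, exc, tb):
        if not self.fileobj.closed:
            self.fileobj.close()
        if not self.finalized and os.path.exists(self.tmppath):
            os.unlink(self.tmppath)  # repack failed: clean up the temp pack
        return False  # never swallow the original exception
```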
Test Plan: Ran repack, verified that a failure did not leave tmp files
Reviewers: rmcelroy, quark, ttung, lcharignon, mitrandir
Reviewed By: mitrandir
Differential Revision: https://phabricator.intern.facebook.com/D3250201
Signature: t1:3250201:1462234552:7f20260a193ed1dd858bf6e9f489ac902d859218
Summary:
Now that all the repack logic is in place, let's switch the repack
command to use the new version. This also means the repack command will now
clean up the old remotefilelog blobs once it's finished.
Test Plan:
Ran hg repack in a large repo. Verified it deleted the old
remotefilelog blobs, and verified that I could still update around the
repository without making any remotefilelog network requests.
A future diff will add standard .t mercurial tests for the repack command.
Reviewers: rmcelroy, ttung, lcharignon, quark, mitrandir
Reviewed By: mitrandir
Differential Revision: https://phabricator.intern.facebook.com/D3249601
Signature: t1:3249601:1462235506:03c0d95f6a82cfc04b340b139f39c02853941a17
Summary: Fix check code for various store related files
Test Plan: Ran the tests
Reviewers: #sourcecontrol, mitrandir, ttung
Reviewed By: mitrandir
Differential Revision: https://phabricator.intern.facebook.com/D3222465
Signature: t1:3222465:1461701300:34560288be4dc921f0252d4ad8fdc9c8d9357e23
Summary:
This is an initial implementation of a history pack file creator and a repacker
class that can produce it. A history pack is a pack file that contains no file
content, just history information (parents and linknodes).
A histpack is two files:
- a .histpack file consisting of a series of file sections, each of which
contains a series of revision entries (node, p1, p2, linknode)
- a .histidx file containing a filename based index to the various file sections
in the histpack.
See the code for documentation of the exact format.
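As a rough illustration of what a revision entry could look like, here is a
sketch assuming a fixed layout of four 20-byte binary sha1s back to back
(node, p1, p2, linknode); the actual on-disk format is the one documented in
the code:

```python
import struct

# Assumed layout for one revision entry inside a file section: four
# 20-byte binary sha1 hashes, back to back.
ENTRYSTRUCT = struct.Struct(">20s20s20s20s")

def packentry(node, p1, p2, linknode):
    return ENTRYSTRUCT.pack(node, p1, p2, linknode)

def unpackentry(data, offset=0):
    # Returns the (node, p1, p2, linknode) tuple at the given offset.
    return ENTRYSTRUCT.unpack_from(data, offset)
```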
Test Plan:
Ran the tests. A future diff will add unit tests for all the new pack
structures.
Ran `hg repack` on a large repo. Verified pack files were produced in
.hg/store/packs. In a future diff, I verified that the data could be read
correctly.
Reviewers: #sourcecontrol, mitrandir, ttung, rmcelroy
Reviewed By: rmcelroy
Subscribers: mitrandir, rmcelroy, mjpieters
Differential Revision: https://phabricator.intern.facebook.com/D3219762
Signature: t1:3219762:1461751982:e7bbc65e8f01c812fc1eb566d2d48208b0913766
Summary:
This is an initial implementation of a repack algorithm that can read data from
an arbitrary store (in this case the remotefilelog content store), and repack it
into a datapack.
A datapack is two files:
- a .datapack file consisting of a series of deltas (a delta may be a full text if the delta base is the nullid)
- a .dataidx file consisting of delta information and an index into the deltas
See the code for documentation of the exact format.
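A sketch of how a reader would consume such a pack: follow deltabase pointers
until a fulltext is reached (its delta base is the nullid), then let the
caller apply the deltas. The dict-backed store here is a stand-in for the
real index lookup:

```python
NULLID = b"\0" * 20

def getdeltachain(store, node):
    # Walk deltabase links until the stored fulltext (deltabase ==
    # nullid), returning the chain fulltext-first for the caller.
    chain = []
    while node != NULLID:
        delta, deltabase = store[node]
        chain.append(delta)
        node = deltabase
    chain.reverse()
    return chain
```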
Test Plan:
Ran the tests.
Ran `hg repack` in a large repo. Verified that a datapack and a dataidx file
were created in .hg/store/packs. The datapack used 148MB instead of the 439MB the
old remotefilelog storage used.
Reviewers: #sourcecontrol, ttung, rmcelroy
Reviewed By: rmcelroy
Subscribers: rmcelroy
Differential Revision: https://phabricator.intern.facebook.com/D3205334
Signature: t1:3205334:1461751366:ee4bf6a580ffb667071a8046fda6f0858b7f25ae
Summary:
Instead of hard coding the list of stores in each union store, let's make it a
list and just test each store in order. This will allow easily adding new stores
and reordering the priority of the existing ones.
Also fix the remote store's contains function. 'contains' is the old name, and
it now needs to be getmissing in order to fit the store contract.
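A minimal standalone sketch of the list-based union store (method names follow
the store contract described above; the store internals are stand-ins):

```python
class unionstore(object):
    # Stores are consulted in list order, so adding a new store or
    # changing priorities means editing one list, not hardcoded lookups.
    def __init__(self, *stores):
        self.stores = list(stores)

    def get(self, name, node):
        for store in self.stores:
            try:
                return store.get(name, node)
            except KeyError:
                continue
        raise KeyError((name, node))

    def getmissing(self, keys):
        # Store contract: getmissing narrows the missing set store by
        # store (this is the behavior 'contains' was renamed to provide).
        missing = keys
        for store in self.stores:
            if not missing:
                break
            missing = store.getmissing(missing)
        return missing
```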
Test Plan: Ran the tests
Reviewers: #sourcecontrol, ttung, rmcelroy
Reviewed By: rmcelroy
Differential Revision: https://phabricator.fb.com/D3205314
Signature: t1:3205314:1461606028:3a513ac82c5de668a7e40bbf7cc88d8754e2f0bb
Summary: Fix failures found by check-code.
Test Plan: Ran the tests
Reviewers: #sourcecontrol, ttung
Reviewed By: ttung
Differential Revision: https://phabricator.fb.com/D3221365
Signature: t1:3221365:1461646159:efeb0478c66cbd49d4a0a6c02a79d530b42f8248
The last major piece of functionality that needs to be moved into the new store
is the gc algorithm. This is just a copy paste of the one that exists in
localcache.
Summary:
When running inside chg, `reposetup` will be called once since `serve` is not
a `norepo` command. Then if the user runs a `norepo` command like `help`,
`runcommand` will receive `repo = None` and error out. Fix it by checking
`repo` explicitly.
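A sketch of the guard (the wrapper shape is illustrative; the real code wraps
dispatch's runcommand):

```python
def runcommand(orig, lui, repo, *args, **kwargs):
    # Under chg, `serve` already triggered reposetup once; a later
    # norepo command such as `help` then arrives with repo=None, so any
    # repo-dependent setup must be guarded explicitly.
    if repo is not None:
        pass  # repo-dependent remotefilelog setup would run here
    return orig(lui, repo, *args, **kwargs)
```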
Test Plan: Run `chg help` and no exception is thrown.
Reviewers: #sourcecontrol, ttung, durham
Reviewed By: durham
Differential Revision: https://phabricator.fb.com/D3136328
Signature: t1:3136328:1459811387:3b86df9765aa5e20677031d6e9fc4bc3d524efa6
Summary:
Discovered by `hg log filename` in the hg-committed repo. It seems we missed
a check here.
Test Plan:
Run `hg log filename` in a non-remotefilelog repo with remotefilelog enabled
and make sure "warning: file log can be slow on large repos" is not printed.
Reviewers: #sourcecontrol, ttung, durham
Reviewed By: durham
Differential Revision: https://phabricator.fb.com/D3132523
Signature: t1:3132523:1459801676:bcba3bbcaf1c358ad11e8ad25c0a1d3cc2637a76
Summary:
I somehow got a stacktrace with IPython on a non-remotefilelog repo that ran
this code and complained that fileservice didn't exit. I am not sure how it
happened but let's make the call safer to match the pattern used elsewhere in
the file.
Test Plan: No stacktrace seen after that, one line change
Reviewers: durham
Differential Revision: https://phabricator.fb.com/D2819402
Summary:
In 4fb35d8c2105 in core @durham removed _verify and replaced it with
verify, this patch makes remotefilelog compatible with those changes.
Test Plan: The tests are still failing afterwards but don't fail on this anymore
Reviewers: ericsumner
Subscribers: durham
Differential Revision: https://phabricator.fb.com/D2791847
The newly added checkunknown prefetching apparently gets handed the full list of
files that are not present on disk right now, which includes all the files
outside of the sparse checkout. So we need to filter those out here.
Summary:
When running addremove, it needs to see the contents of the removed files so it
can determine if they are a rename. So we need to add bulk prefetching in this
situation.
Test Plan: Added a test
Reviewers: #sourcecontrol, ttung, rmcelroy
Reviewed By: rmcelroy
Subscribers: dcapra
Differential Revision: https://phabricator.fb.com/D2756979
Signature: t1:2756979:1450132279:668b8b160d792cad1ac37e2069716e20ea304f57
Summary:
During hg status Mercurial sometimes needs to look at the size of contents of
the file and compare it to what's in history, which requires the file blob.
This patch causes those files to be batch downloaded before they are compared.
There was a previous attempt at this (see the deleted code), but it only wrapped
the dirstate once at the beginning, so it was lost if the dirstate object was
replaced at any point.
Test Plan: Added a test to verify unknown files require only one fetch.
Reviewers: #sourcecontrol, ttung
Reviewed By: ttung
Subscribers: dcapra
Differential Revision: https://phabricator.fb.com/D2756768
Signature: t1:2756768:1450130997:7c7101efe66c998e3182dfbd848aa6b1a57d509f
Summary:
When doing an update, Mercurial checks if unknown files on disk match
what's in memory, otherwise it stops the checkout so it doesn't cause data loss.
We need to batch fetch the necessary files from the remotefilelog server for
this operation.
Test Plan: Added a test
Reviewers: #sourcecontrol, ttung, rmcelroy
Reviewed By: rmcelroy
Subscribers: dcapra
Differential Revision: https://phabricator.fb.com/D2756837
Signature: t1:2756837:1450132288:bc0530a07ea40aaeb2af1a93e4da82778cc11369