Prior to revision 149be6a0072e, largefiles were saved in the local repository,
even when the repository was using the share extension. Since that change, all
largefiles are stored in the shared repository. However, backward compatibility
for existing largefiles already placed in the local repository was never
tested, and it has been broken ever since.
The default of pushing all largefiles referenced in outgoing revisions is
safe, but it is also expensive and sometimes not what is needed. We thus
introduce a --lfrev option, similar to the one pull already has.
By specifying an empty set of revisions (or null), it is possible to get lazy
(and insecure!) pushes of revisions without their referenced largefiles,
similar to how pull works.
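For example (the destination name and revsets are illustrative):

$ hg push --lfrev 'head()' dest   # push largefiles referenced in head revisions
$ hg push --lfrev 'null' dest     # push no largefiles at all (lazy, insecure)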
Bare 'hg update' now brings you to the tipmost descendant (on the same
branch), leaving the user on the same topological branch. The previous
behavior, updating to the tipmost changeset on the same named branch, could
jump from one topological branch to another. This was confusing and
impractical. As the only conceivable reason for the old behavior has been
addressed by the recently introduced message about other heads, we can
"safely" change this behavior. All test changes have been reviewed and are
valid consequences of the change.
A concern around the user experience of Mercurial is users getting stuck on
their own topological branch forever: for example, someone pulling another
topological branch, missing the message from pull asking them to merge, and
getting stuck on their own local branch.
The current way to "address" this concern was for bare 'hg update' to target
the tipmost (also latest pulled) changeset and complain when the update was
not linear. That way, failure to merge newly pulled changesets would result
in some kind of failure.
Yet the failure was quite obscure, did not cover all cases (e.g. a commit
right after a pull), and the behavior was very impractical in the common case
(e.g. issue4673).
To be able to change that behavior, we need to provide other ways to alert a
user stuck on one of many topological heads. We do so with an extra message
after bare update:
1 other heads for branch "default"
Bookmarks get their own special version:
1 other divergent bookmarks for "foobar"
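For instance, after pulling a second head on the same named branch (transcript
abbreviated, for illustration only):

$ hg update
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
1 other heads for branch "default"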
There is significant room to improve the message itself, and we should
augment it with hints about how to see these other heads or handle the
situation (see in-line comment). But having "a" message is already a
significant improvement over the existing situation. Once we have it, we can
iterate on a better version. As having such a message is an important step
toward changing the default destination for update and other niceties, I
would like to move forward quickly on getting it in.
This was discussed during the London Sprint (October 2015).
Previously, if the largefile was deleted at the time of a commit, the standin
was silently not updated and its current state (possibly garbage) was recorded.
The test makes it look like this is somewhat of an edge case, but the same thing
happens when an `hg revert` followed by `rm` changes the standin.
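For illustration, assuming a tracked largefile named 'large':

$ hg revert large    # restores 'large' and rewrites its standin
$ rm large           # 'large' is gone again at commit time
$ hg ci -m change    # previously recorded the stale standin silently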
Aside from the second invocation of this in lfutil.updatestandinsbymatch()
(which is what triggers this test case), the three other uses are guarded by
dirstate checks for added or modified files, or by an existence check in the
filesystem. So aborting in lfutil.updatestandins() should be safe, and will
avoid silent skips in the future if it is used elsewhere.
The change in 6fce9a02f069 to handle a normal -> largefile switch was too
aggressive in preserving the original matcher names. If a largefile is
explicitly provided by the user, but only the standin exists in dirstate, then
only the standin can be committed.
There may still be an issue when the largefile is deleted outside of Mercurial:
$ rm large
$ hg ci -m "oops" large
large: The system cannot find the file specified
nothing changed
[1]
Reporting 'unexpected putlfile response: None' when an HTTP error occurs is
not very helpful.
Instead, leave the handling of urllib2.HTTPError exceptions to other layers.
If the store somehow got corrupted, users could end up in weird situations
that were very hard to recover from, or that led to propagation of the
corruption.
Instead, spend the extra time checking the hash when copying to the working
directory. If it doesn't match, emit a warning, and don't put wrong content in
the working directory.
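As an illustration (the hash placeholder is not a literal path, and the
warning behavior is paraphrased rather than quoted):

$ echo corrupted > .hg/largefiles/<expected-hash>   # simulate store corruption
$ hg update -C    # now warns about the bad hash instead of writing the content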
The benefit of retargeting the local store to the share source is that all
shares will always have access to the largefiles that any one of them commits,
even if the user cache is deleted (which is documented to be OK to do).
Further, any push into the source (and now into any share) will likewise make
the largefile(s) visible to all related repositories.
In order to maintain compatibility with existing repos, where the largefiles
would be cached only in the local share, fall back to searching the local
share if a largefile isn't found at the share source.
The unshare command should probably be taught to copy the source store into the
store for the repo being unshared to complete the loop.
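A sketch of the intended behavior, reusing the test's setup style (repo and
file names hypothetical):

$ hg share -q src share1 --config extensions.share=
$ hg -R share1 commit -m 'add largefile'    # stored in src/.hg/largefiles
$ hg share -q src share2 --config extensions.share=
$ hg -R share2 update    # the largefile is found via the share source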
This patch changes the test like this:
@@ -159,6 +159,5 @@
$ hg share -q src share_dst --config extensions.share=
$ hg -R share_dst update -r0
getting changed largefiles
- large: largefile $HASH not available from file:///$TESTTMP\share_dst
- 0 largefiles updated, 0 removed
+ 1 largefiles updated, 0 removed
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
The issue writeup mentions pushing a largefile from a remote repo to the main
local repo, and the largefile is then not available in any shares. Since the
push doesn't cache the largefile in $USERCACHE, the trashed $USERCACHE in this
test is equivalent.
If an uncommitted and deleted file was forgotten, a warning would be emitted,
even though the operation was successful. See the previous patch for
'remove -A' for the exact circumstances, and details about the cause.
The bug report doesn't mention largefiles, but the given recipe doesn't fail
unless the largefiles extension is loaded. The problem only affected normal
files, whether or not any largefiles are committed, and only files that have
not been committed yet. (Files with an 'a' state are dropped from dirstate,
not marked removed.) Further, if the named normal file never existed, the
warning would be printed out twice.
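An illustrative reproduction with the largefiles extension loaded (file name
hypothetical):

$ echo x > newfile
$ hg add newfile
$ rm newfile
$ hg remove -A newfile   # succeeded, but previously emitted a spurious warning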
The problem is that the core implementation of remove() calls repo.status(),
which eventually triggers a dirstate.walk(). When the file isn't seen in the
filesystem during the walk, the exception handling finds the file in
dirstate, so it doesn't complain. However, the largefiles implementation
called status() again with all of the original files (including the normal
ones, just dropped). This time, the exception handler doesn't find the file
in dirstate and does complain. This fix simply excludes the normal files from
the second repo.status() call, which the largefiles extension has no interest
in processing anyway.
On Windows, largefile paths are written as "file:///C:/temp/...",
corresponding to "file:///$TESTTMP/..." (all three slashes shown). But on
POSIX systems they are written as "file:///tmp/...", corresponding to
"file://$TESTTMP/..." (only two slashes shown).
Write the glob "file:/*/" to match both versions.
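For example, a test output line like the following (command and path
hypothetical) matches on both platforms:

pulling from file:/*/$TESTTMP/repo (glob)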
This avoids a lot of expensive roundtrips to remote repositories ... but might
be slightly slower for local operations.
This will also change some aborts on missing files to warnings. That will in
some situations make it possible to continue working on a repository with
missing largefiles.
Looking for a (potentially empty) directory was not reliable - both because
it is a reasonable assumption that empty directories can be removed and
because it wasn't created in all cases ... such as when pulling into an
existing repository.
The test relied on the bug that 'pull largefiles from branchheads' didn't
pull any largefiles from the tip revision when it seemed like no largefiles
had been checked out before.
After discussion, we've agreed that largefiles for newly pulled heads should
not be cached by default. The use case for this is using largefiles repos
with multiple remote servers (and therefore multiple remote largefiles caches),
where users will be pulling from non-default locations on a regular basis. We
think this use case will be significantly less common than the use case where
all largefiles are stored on the same central server, so the default should be
no caching.
The old behavior can be obtained by passing the --cache-largefiles flag to
pull.
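For example (remote name hypothetical):

$ hg pull other-server                      # new default: no largefile caching
$ hg pull --cache-largefiles other-server   # old behavior: cache them eagerly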
Override updaterepo() instead of individual methods that may not be called for
each subrepo. Add test.
Based on patch from Matt Harbison.
Changes the order of update-related messages (now largefiles comes before the
global status).
Especially the "no default or default-push path set in hgrc" was often very
misleading and didn't give any hint where it actually was looking.
A long error messages is better than several multi-line messages.
Many tests didn't change back from subdirectories at the end of the tests ...
and they don't have to. The missing 'cd ..' could always be added when another
test case is added to the test file.
This change makes tests (99.5% of them) consistently end up in $TESTDIR where
they started, thus making it simpler to extend them or move them around.
Before, tempfile was used to create a temp file with 600 permissions, and the
uploaded data was written into it. This file was then *copied* to
.hg/largefiles/<hash>.
We now simply use atomictempfile to write the data to a temp file with
the right permissions and then rename that into place.
This replaces another use of tempfile with atomictempfile. The problem
with tempfile is that it creates files with 600 permissions instead of
respecting repo.store.createmode.
Before, the mode was copied from the file in the working copy. This is
inconsistent with how Mercurial normally creates files inside the .hg
folder. This can lead to a situation where you can read the files
under .hg/store but not under .hg/largefiles and so a clone will fail
if it needs to access a largefile.
When the user pulls from a remote repository that is not their default repo,
it is quite likely that they will pull a new head. This means that if they
try to merge or rebase with the other head, they will run into a problem
because largefiles has no way of tracking where the remote repository for
this other head is, so it cannot download the largefiles from that other
remote repository. It will attempt to download them from its default remote
repository, which will not yet contain the largefiles.
This patch solves this problem by caching any new largefiles for all heads
directly into the system cache at the time of the pull, so they are available
later.
This behavior is actually more in line with Mercurial's distributed nature,
because pulling already implies we have a connection to the remote server, but
merging or rebasing does not.
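An illustrative flow (the remote URL is hypothetical):

$ hg pull http://other-server/repo   # largefiles for the new head are cached
$ hg merge                           # they are already available locally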
"hg status" may treat cache missed largefiles as "removed" incorrectly.
Assumptions for the problem case:
- there is no cache for largefile "L"
- at first, the working directory is updated to a revision in which "L" is
  not yet added
- then, the working directory is updated to a revision in which "L" is
  already added
Now, "hg status" treats "L" as "removed".
The current implementation does not allocate an entry in
".hg/largefiles/dirstate" for cache-missed largefiles, but files without a
".hg/largefiles/dirstate" entry are treated as "removed" by the largefiles
extension.
"hg revert" can not recover from this situation, but "rm -rf
.hg/largefiles", because it causes dirstate rebuilding.
This patch invokes normallookup() for cache-missed largefiles to allocate an
entry in ".hg/largefiles/dirstate", so "hg status" can treat them as
"missing" correctly.