Now that the 'vfs' classes have moved into their own module, let's use the new
module directly. We update the code iteratively to help with possible bisect
needs in the future.
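For illustration, a minimal sketch of the kind of change in this series,
assuming the classes now live in a 'vfs' module (exact call sites vary):

    from mercurial import vfs as vfsmod

    repopath = b'/path/to/repo/.hg'        # illustrative path
    # before: opener = scmutil.vfs(repopath)
    opener = vfsmod.vfs(repopath)          # a vfs rooted at the given path
    data = opener.tryread(b'requires')     # read a file relative to that root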
The decoders were already run by default for the main repo, so this seemed like
an oversight.
The extdiff extension has been using 'archive' since a80ec1ea2694 to support -S,
and a colleague noticed that after diffing, making changes, and closing the
diff tool, the line endings were wrong for the files it had modified in the
subrepository.
(Files in the parent repo were correct, with the same .hgeol settings.) The
editor (Visual Studio in this case) reloads the file, but doesn't notice the EOL
change. It still adds new lines with the original EOL setting, and the file
ends up inconsistent.
Without this change, the first file `cat`'d in the test prints '\r (esc)' line
endings, but the second doesn't, on either Windows or Linux.
pycompat.getenv returns os.getenvb on Python 3, which is not available on
Windows. This patch replaces these calls with encoding.environ.get and adds a
check to ensure no new instances of os.getenv or os.setenv are introduced.
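For illustration, the replacement pattern looks roughly like this (byte-string
keys and values, which is what Mercurial uses internally):

    from mercurial import encoding

    # before: editor = os.getenv('EDITOR', 'vi')
    editor = encoding.environ.get(b'EDITOR', b'vi')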
os.getenv deals with unicode strings on Python 3, so we have pycompat.osgetenv
to deal with bytes. This patch replaces occurrences of os.getenv with
pycompat.osgetenv.
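A typical call site would change along these lines (sketch only):

    from mercurial import pycompat

    # before: pager = os.getenv('PAGER')
    pager = pycompat.osgetenv(b'PAGER')    # bytes in, bytes (or None) out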
Currently, the "streamres" response type is populated with a generator
of chunks with compression possibly already applied. This puts the onus
on commands to perform chunking and compression. Architecturally, I
think this is the wrong place to perform this work. I think commands
should say "here is the data" and the protocol layer should take care
of encoding the final bytes to put on the wire.
Additionally, upcoming commits will improve wire protocol support for
compression. Having a central place for performing compression in the
protocol transport layer will be easier than having to deal with
compression at the commands layer.
This commit refactors the "streamres" response type to accept either
a generator or an object with "read." Additionally, the type now
accepts a flag indicating whether the response is a "version 1
compressible" response. This basically identifies all commands
currently performing compression. I could have used a special type
for this, but a flag works just as well. The argument name
foreshadows the introduction of wire protocol changes, hence the "v1."
The code for chunking and compressing has been moved to the output
generation function for each protocol transport. Some code has been
inlined, resulting in the deletion of now unused methods.
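As a rough sketch (illustrative, not the exact code), the refactored response
type and the transport-side handling look something like this:

    class streamres(object):
        """Wrap either a generator of chunks or an object with read(),
        plus a flag saying whether "version 1" compression may apply."""
        def __init__(self, gen=None, reader=None, v1compressible=False):
            self.gen = gen
            self.reader = reader
            self.v1compressible = v1compressible

    def getchunks(res, chunksize=32768):
        # Chunking (and, for compressible responses, compression) now
        # happens here in the protocol transport, not in each command.
        if res.gen is not None:
            for chunk in res.gen:
                yield chunk
        else:
            while True:
                chunk = res.reader.read(chunksize)
                if not chunk:
                    break
                yield chunk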
I somehow ended up in a situation where hg crashed on an unlink I introduced in
8fd3fc1ef4c6.
I don't know how it happened and can't reproduce it. It seems like it can only
happen when the file is removed between the time of check (a working directory
context walk that finds a standin file) and the time of use, when we try to
remove it because the corresponding largefile doesn't exist.
But better safe than sorry: replace the plain unlink with unlinkpath with
ignoremissing=True. That will also remove remaining empty directories, which
is arguably more correct.
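The call shape, as a sketch (the path is illustrative):

    from mercurial import util

    standinpath = b'.hglf/some/large.bin'              # illustrative path
    # before: os.unlink(standinpath)
    util.unlinkpath(standinpath, ignoremissing=True)   # no error if already
                                                       # gone; prunes empty dirs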
A code snippet that has been around since largefiles was introduced was wrong:
standins no longer found in lfdirstate have *not* been removed -
they have probably just been deleted ... or not created.
This wrong reporting meant that 'up -C' didn't undo the change and didn't sync
the two dirstates.
Instead of reporting such files as removed, propagate the deletion to the
standin file and report the file as deleted.
Largefiles are fragile with the design where dirstate and lfdirstate must be
kept in sync.
To be less fragile, mark all clean largefiles as unsure ("normallookup") before
updating standins. After standins have been updated and we know exactly which
largefile standins actually were changed, mark the unchanged largefiles back to
clean ("normal").
This will make the failure mode safer. If interrupted, the next command
will continue to perform extra hashing of all largefiles. That ensures that all
largefiles that are out of sync with their standin are marked dirty, so they
show up in status and can be cleaned with update --clean.
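Roughly, the ordering looks like this (illustrative names, not the exact
largefiles code; writestandins stands in for the real standin update):

    def updatestandins(repo, lfdirstate, lfiles):
        # 1) pessimistically mark every clean largefile as "unsure"
        for lfile in lfiles:
            lfdirstate.normallookup(lfile)
        # 2) rewrite the standins; if interrupted here, the next command
        #    just rehashes all largefiles instead of trusting stale state
        changed = writestandins(repo, lfiles)
        # 3) mark largefiles whose standins did not change back to clean
        for lfile in lfiles:
            if lfile not in changed:
                lfdirstate.normal(lfile)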
util.filechunkiter has been using a chunk size of 64k for more than 10 years,
including in the years when Moore's law still was a law. It is probably ok to
bump it now and perhaps get a slight win in some cases.
Also, largefiles have been using 128k for a long time. Specifying that size
multiple times (or forgetting to do it) seems a bit stupid. Decreasing it to
64k also seems unfortunate.
Thus, we will set the default chunksize to 128k and use the default everywhere.
Before, we would sometimes use the default iterator over large files. That
iterator is line-based and would add extra buffering and use odd chunk sizes,
which could give some overhead.
copyandhash can't just apply a filechunkiter, as it is sometimes passed a
genuine generator when downloading remotely.
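The intended usage, sketched (hashfile here is just an example consumer):

    import hashlib

    from mercurial import util

    def hashfile(path):
        h = hashlib.sha1()
        with open(path, 'rb') as fd:
            # before: for chunk in util.filechunkiter(fd, 128 * 1024):
            for chunk in util.filechunkiter(fd):   # rely on the 128k default
                h.update(chunk)
        return h.hexdigest()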
It is only the X bit that matters to copy from the standin to the largefile
in the working directory. While it generally doesn't do any harm to copy the
whole mode, it is also "wrong" to copy more than the X bit we care about. It
can make a difference if someone should try to handle largefiles differently,
such as marking them read-only.
Thus, do something similar to what util.setflags does: set the X bit where
there are R bits, and obey the umask.
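A minimal sketch of that permission handling (mirroring the spirit of
util.setflags, not the exact largefiles code):

    import os

    def copyexecbit(standin, largefile):
        mode = os.stat(largefile).st_mode
        if os.stat(standin).st_mode & 0o100:    # standin is executable
            umask = os.umask(0)
            os.umask(umask)                     # read the umask without changing it
            # set an X bit wherever an R bit is set, then honour the umask
            os.chmod(largefile, (mode | ((mode & 0o444) >> 2)) & ~umask)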
The problem was that the files to check were gathered in the repository where
the verify was launched, but verification was done on the remote
store. It was observed when a user committed in a cloned repository
and ran verify before pushing - the committed files were reported
as non-existent.
This commit fixes this by checking in the remote store only the files
that do not exist in the repository store where verify was launched.
The solution is similar to 909b9d8f9ae7.
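The shape of the fix, as an illustrative sketch (assuming filestocheck yields
(cset, filename, expectedhash) tuples; the real code lives in the largefiles
verify path):

    import os

    from hgext.largefiles import lfutil

    def filestocheckremotely(repo, filestocheck):
        remote = []
        for cset, filename, expectedhash in filestocheck:
            # keep only files absent from the local .hg/largefiles store
            if not os.path.exists(lfutil.storepath(repo, expectedhash)):
                remote.append((cset, filename, expectedhash))
        return remote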
The problem in both cases is that "cache" in largefiles has an assigned
meaning - the user cache, which is an additional place to get/put
files. These two functions work on the store - the main place
to store largefiles in the repository, .hg/largefiles - and
using "cache" to describe it is misleading.
All versions of Python we support or hope to support make the hash
functions available in the same way under the same name, so we may as
well drop the util forwards.
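The resulting call sites simply use hashlib directly, for example:

    import hashlib

    # before: h = util.sha1()
    h = hashlib.sha1()
    h.update(b'some data')
    digest = h.hexdigest()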
Prior to revision 149be6a0072e, largefiles were saved in the local repository,
even if it was using the share extension. After that change, all largefiles are
now stored in the shared repository. However, the backward compatibility for
existing largefiles already placed in the local repository was never tested,
and has been broken since.
Files that are already in the local store should be checked locally. The
problem with this implementation is how the difference in messages between
local and remote checks should look. For now, local errors for a missing file
and corrupted content look like this:
'changeset cset: filename references missing storepath\n'
'changeset cset: filename references corrupted storepath\n'
and for remote checks they look like:
'changeset cset: filename missing\n'
'changeset cset: filename: contents differ\n'
The "contents differ" error is currently never raised for remote calls - for
now the statlfile implementation lacks file content checking.
I've caught multiple extensions in the wild lying about being
'internal', so it's time to move the goalposts on people. Goalpost
moving will continue until third party extensions stop trying to
defeat the system.