Commit Graph

19 Commits

Yuya Nishihara
d9d64e114f bdiff: proxy through mdiff module
See the previous commit for why.

mdiff seems a good place to host bdiff functions. bdiff.bdiff was already
aliased as textdiff, so we use it.
2017-04-26 22:03:37 +09:00
Yuya Nishihara
8a64b55504 similar: use cheaper hash() function to test exact matches
We just need a hash table {fctx.data(): fctx} that doesn't keep fctx.data()
in memory. Let's simply use hash(fctx.data()) as the key to keep the data
out of memory, and manage colliding fctx objects in a list.

This isn't significantly faster than using sha1, but it is more correct now
that SHA-1 collision attacks are becoming practical.

Benchmark with 50k added/removed files, on tmpfs:

  $ hg addremove --dry-run --time -q

  previous:   real 12.420 secs (user 11.120+0.000 sys 1.280+0.000)
  this patch: real 12.350 secs (user 11.210+0.000 sys 1.140+0.000)
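The scheme above can be sketched as follows. This is a minimal illustration of the technique, not the actual similar.py code; the objects with a .data() method only loosely mimic the fctx interface (an assumption for the sketch):

```python
# Sketch: exact-match detection keyed on the builtin hash(), so the
# table holds small integers rather than whole file contents.
# hash() is not collision-free, so colliding entries are kept in a
# list and each candidate is confirmed by a real content comparison.

def find_exact_matches(removed, added):
    """Yield (removed, added) pairs whose contents are identical."""
    buckets = {}  # hash(data) -> list of removed items
    for r in removed:
        buckets.setdefault(hash(r.data()), []).append(r)
    for a in added:
        adata = a.data()
        for r in buckets.get(hash(adata), []):
            if r.data() == adata:  # verify; hash() may collide
                yield r, a
                break
```

Since only integer keys live in the dict, the contents themselves can be dropped from memory between lookups, which is the point of the change.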
2017-03-23 20:57:27 +09:00
Yuya Nishihara
8c9b9d4020 similar: take the first match instead of the last
It seems more natural. This makes the next patch slightly cleaner.
2017-03-23 20:52:41 +09:00
Yuya Nishihara
9290e50162 similar: do not look up and create filectx more than once
Benchmark with 50k added/removed files, on tmpfs:

  $ hg addremove --dry-run --time -q

  previous:   real 16.070 secs (user 14.470+0.000 sys 1.580+0.000)
  this patch: real 12.420 secs (user 11.120+0.000 sys 1.280+0.000)
2017-03-23 21:17:08 +09:00
Yuya Nishihara
0087b0c7b8 similar: use common names for changectx variables
We generally use 'wctx' and 'pctx' for working context and its parent
respectively.
2017-03-23 21:10:45 +09:00
Yuya Nishihara
6a4b18deca similar: get rid of quadratic addedfiles.remove()
Instead, build a set of files to be removed and recreate addedfiles
only if necessary.

Benchmark with 50k added/removed files, on tmpfs:

  $ hg addremove --dry-run --time -q

  original:   real 16.550 secs (user 15.000+0.000 sys 1.540+0.000)
  previous:   real 16.730 secs (user 15.280+0.000 sys 1.440+0.000)
  this patch: real 16.070 secs (user 14.470+0.000 sys 1.580+0.000)
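The general idea can be illustrated with plain lists (a sketch of the technique, not the actual addremove code):

```python
# Removing matched entries one at a time with list.remove() costs O(n)
# per call, O(n^2) overall. Building a set of entries to drop and
# filtering the list once keeps the whole cleanup linear.

def drop_matched(addedfiles, matched):
    """Return addedfiles without the matched entries, preserving order."""
    if not matched:
        return addedfiles  # nothing matched; avoid rebuilding the list
    matchedset = set(matched)
    return [f for f in addedfiles if f not in matchedset]
```

Returning the original list unchanged when nothing matched mirrors the "recreate addedfiles only if necessary" part of the commit message.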
2017-03-23 20:50:33 +09:00
Yuya Nishihara
f7e0bc01eb similar: sort files not by object id but by path for stable result
The original implementation presumably intended to sort added/removed files
alphabetically, but it actually sorted fctx objects by memory address.

This patch removes the use of set()s in order to preserve the order of
added/removed files. addedfiles.remove() becomes quadratic, but its cost
does not appear to be dominant. The quadratic behavior will be eliminated
by the next patch anyway.
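The determinism problem can be shown with a toy stand-in for fctx (the class below is hypothetical, only mimicking the path() accessor):

```python
# Iterating a set of objects yields them in an order derived from their
# hashes/identities, which is not stable across runs. Sorting by an
# explicit, meaningful key (the file path) makes the result deterministic.

class Fctx:
    """Toy file context exposing only a path() accessor."""
    def __init__(self, path):
        self._path = path
    def path(self):
        return self._path

def stable_order(fctxs):
    """Return file contexts sorted by path, not by object identity."""
    return sorted(fctxs, key=lambda f: f.path())
```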

Benchmark with 50k added/removed files, on tmpfs:

  $ mkdir src
  $ for n in `seq 0 49`; do
  >     mkdir `printf src/%02d $n`
  > done

  $ for n in `seq 0 49999`; do
  >     f=`printf src/%02d/%05d $(($n/1000)) $n`
  >     dd if=/dev/urandom of=$f bs=8k count=1 status=none
  > done

  $ hg ci -qAm 'add 50k files of random content'
  $ mv src dest

  $ hg addremove --dry-run --time -q

  original:   real 16.550 secs (user 15.000+0.000 sys 1.540+0.000)
  this patch: real 16.730 secs (user 15.280+0.000 sys 1.440+0.000)
2015-03-15 18:58:56 +09:00
FUJIWARA Katsunori
c23eb09a4f similar: compare between actual file contents for exact identity
Before this patch, the similarity detection logic (for addremove and
automv) depended entirely on SHA-1 digests. This causes incorrect
rename detection if:

  - file A is removed and file B is added in the same commit, and
  - files A and B have the same SHA-1 hash

For example, this could prevent security researchers from managing
sample files for the SHAttered issue in a Mercurial repository.

  https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html
  https://shattered.it/

Hash collisions themselves are not a serious problem for Mercurial's
core repository functionality, though, as described by mpm here:

  https://www.mercurial-scm.org/wiki/mpm/SHA1

This patch compares the actual file contents after the hash comparison
to confirm exact identity.

Even after this patch, SHA-1 is still used, because it remains a cheap
way to quickly detect candidate "(almost) same" files:

  - replacing SHA-1 would hurt performance, and
  - there is no unambiguous replacement for it yet

Reading the content of a removed file (= rfctx.data()) for each exact
comparison should be cheap enough, even though reading the content of
an added one is costly:

  ======= ============== =====================
  file    fctx           data() reads from
  ======= ============== =====================
  removed filectx        in-memory revlog data
  added   workingfilectx storage
  ======= ============== =====================
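The two-stage check described above can be sketched roughly like this. The helper and the .data() interface are assumptions for illustration, not the actual similar.py implementation:

```python
import hashlib

def exact_rename_candidates(removed, added):
    """Pair removed/added items whose contents are byte-for-byte equal.

    The SHA-1 digest serves only as a cheap prefilter; identity is
    confirmed by comparing the actual contents, so a SHA-1 collision
    (as in the SHAttered PDFs) can no longer cause a false rename match.
    """
    bydigest = {}  # sha1 digest -> list of removed items
    for r in removed:
        bydigest.setdefault(hashlib.sha1(r.data()).digest(), []).append(r)
    for a in added:
        adata = a.data()
        for r in bydigest.get(hashlib.sha1(adata).digest(), []):
            if r.data() == adata:  # the decisive content comparison
                yield r, a
                break
```

Note that r.data() is re-read inside the loop; per the table above, that read comes from in-memory revlog data and is cheap, while a.data() is read once per added file.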
2017-03-03 02:57:06 +09:00
Pierre-Yves David
b3ce804dcd similar: remove caching from the module level
To prevent Bad Things™ from happening, let's rework the logic to not use
util.cachefunc.
2017-01-13 11:42:36 -08:00
Sean Farley
8fc2b48eb5 similar: move score function to module level
Future patches will use this to report the similarity of a rename / copy
in the patch output.
2017-01-07 20:47:57 -08:00
Sean Farley
3c1cbd7c9b similar: rename local variable to not collide with previous
Future patches will move the score function to the module level, so
let's not shadow that.
2017-01-07 20:43:49 -08:00
Augie Fackler
ad67b99d20 cleanup: replace uses of util.(md5|sha1|sha256|sha512) with hashlib.\1
All versions of Python we support or hope to support make the hash
functions available in the same way under the same name, so we may as
well drop the util forwards.
2016-06-10 00:12:33 -04:00
Augie Fackler
e34f1062d8 similar: delete extra newline at EOF
Spotted by my emacs config that cleans up extra whitespace.
2016-06-10 00:14:43 -04:00
Anton Shestakov
e850090773 similar: specify unit for ui.progress when operating on files 2016-03-11 22:29:20 +08:00
Gregory Szorc
5c29ba6835 similar: use absolute_import 2015-12-12 23:17:22 -08:00
Brodie Rao
d6a6abf2b0 cleanup: eradicate long lines 2012-05-12 15:54:54 +02:00
Benoit Boissinot
38455dfaea fix coding style 2010-05-02 00:48:33 +02:00
David Greenaway
ae788f807e findrenames: Optimise "addremove -s100" by matching files by their SHA1 hashes.
We speed up 'findrenames' for the use case where a user specifies they
want a similarity of 100%, by matching files on their exact SHA1 hash
value. This reduces the number of comparisons required to find exact
matches from O(n^2) to O(n).

While it would be nice if we could just use Mercurial's pre-calculated
SHA1 hash for existing files, that hash includes the file's ancestor
information, making it unsuitable for our purposes. Instead, we
calculate the hash of the old content from scratch.

The following benchmarks were taken on the current head of crew:

addremove 100% similarity:
  rm -rf *; hg up -C; mv tests tests.new
  hg --time addremove -s100 --dry-run

  before:  real 176.350 secs (user 128.890+0.000 sys 47.430+0.000)
  after:   real   2.130 secs (user   1.890+0.000 sys  0.240+0.000)

addremove 75% similarity:
  rm -rf *; hg up -C; mv tests tests.new; \
      for i in tests.new/*; do echo x >> $i; done
  hg --time addremove -s75  --dry-run

  before: real 264.560 secs (user 215.130+0.000 sys 49.410+0.000)
  after:  real 218.710 secs (user 172.790+0.000 sys 45.870+0.000)
2010-04-03 11:58:16 +11:00
David Greenaway
70b803a04d Move 'findrenames' code into its own file.
The next few patches will increase the size of the "findrenames"
functionality. This patch simply moves the function into its own
file to avoid clutter building up in 'cmdutil.py'.
2010-04-03 11:58:16 +11:00