mirror of
https://github.com/facebook/sapling.git
synced 2024-10-10 16:57:49 +03:00
7110077dec
For highly structured files like JSON or XML dumps with large numbers of duplicate lines (e.g. braces) and isolated matching lines, bdiff could find large numbers of equally good spans. Because it preferred earlier matches, this would result in pathologically unbalanced recursion and thus quadratic performance. This patch makes it prefer matches closer to the middle, which tend to balance the recursion. This change improves the speed of a pathological test case from 1100s to 9s. Included is a smaller test with a roughly 50x safety margin on the performance it accepts. It is likely to fail on pure builds because difflib also has a recursion-balancing problem.
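The recursion-balancing idea above can be sketched in Python. This is a toy model, not bdiff's actual C implementation; the names `find_longest_match` and `diff` are illustrative. The key point is the tie-break: among equally long matches, prefer the one whose midpoint lies closest to the middle of the input, so each recursive split is roughly balanced.

```python
def find_longest_match(a, b):
    """Return (i, j, size) of the longest run a[i:i+size] == b[j:j+size].

    Among equally long matches, prefer the one centered closest to the
    middle of `a`, so the recursion in diff() stays balanced instead of
    degenerating into depth O(n) on inputs full of duplicate lines.
    """
    best = (0, 0, 0)
    j2len = {}  # j2len[j] = length of the match ending at a[i-1], b[j]
    for i, line in enumerate(a):
        new = {}
        for j, other in enumerate(b):
            if line == other:
                k = j2len.get(j - 1, 0) + 1
                new[j] = k
                cand = (i - k + 1, j - k + 1, k)
                if k > best[2]:
                    best = cand
                elif k == best[2] and k:
                    # Tie-break: pick the match nearest the middle of a.
                    mid = len(a) / 2
                    if abs(cand[0] + k / 2 - mid) < abs(best[0] + best[2] / 2 - mid):
                        best = cand
        j2len = new
    return best

def diff(a, b, ao=0, bo=0):
    """Recursively emit (a_offset, b_offset, size) matching blocks.

    With balanced splits the recursion depth stays near O(log n) even on
    inputs dominated by duplicate lines.
    """
    i, j, size = find_longest_match(a, b)
    if size == 0:
        return []
    return (diff(a[:i], b[:j], ao, bo)
            + [(ao + i, bo + j, size)]
            + diff(a[i + size:], b[j + size:], ao + i + size, bo + j + size))
```

With the earlier-match preference, `["x", "a", "x", "b"]` vs `["x"]` would pair the first `"x"`; with the middle preference it pairs the second, splitting the remainder more evenly.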
30 lines
506 B
Perl
#require no-pure

A script to generate nasty diff worst-case scenarios:

  $ cat > s.py <<EOF
  > import random
  > for x in xrange(100000):
  >     print
  >     if random.randint(0, 100) >= 50:
  >         x += 1
  >     print hex(x)
  > EOF

  $ hg init a
  $ cd a

Check in a big file:

  $ python ../s.py > a
  $ hg ci -qAm0

Modify it:

  $ python ../s.py > a

Time a check-in, should never take more than 10 seconds user time:

  $ hg ci --time -m1
  time: real .* secs .user [0-9][.].* sys .* (re)
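The `s.py` generator in the test targets Python 2 (`xrange`, `print` statements). A Python 3 equivalent can be sketched as follows; `nasty_diff_input` is an illustrative name, not part of the test suite. It produces the shape the commit message describes: many duplicate (blank) lines interleaved with isolated, mostly matching hex lines.

```python
import random

def nasty_diff_input(n=100000):
    """Yield a diff worst-case input: a blank line, then a hex value that
    usually equals the loop counter but is randomly bumped by one, so two
    runs of the generator produce files that mostly match line-by-line
    with scattered single-line differences."""
    for x in range(n):
        yield ""
        if random.randint(0, 100) >= 50:
            x += 1
        yield hex(x)
```

Writing two such outputs to files and diffing them reproduces the pathological case the bdiff change addresses.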