sapling/tests/test-contrib-perf.t

#require test-repo
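
#require test-repo means this test is skipped unless it runs from a full
source checkout: it loads contrib/perf.py directly and, at the end, queries
the source repository's own history.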

Set vars:

  $ . "$TESTDIR/helpers-testrepo.sh"
  $ CONTRIBDIR="$TESTDIR/../contrib"
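
helpers-testrepo.sh is assumed to provide the testrepohg helper used at the
end of this test to run hg commands against the source repository itself.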

Prepare repo:

  $ hg init
  $ echo this is file a > a
  $ hg add a
  $ hg commit -m first
  $ echo adding to file a >> a
  $ hg commit -m second
  $ echo adding more to file a >> a
  $ hg commit -m third
  $ hg up -r 0
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo merge-this >> a
  $ hg commit -m merge-able
  created new head
  $ hg up -r 2
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
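
The repo now has two heads: the chain first-second-third, plus merge-able
branching off first. The working copy is at third, giving the ancestor- and
merge-related benchmarks below a non-trivial DAG to work with.
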
perfstatus

  $ cat >> $HGRCPATH << EOF
  > [extensions]
  > perfstatusext=$CONTRIBDIR/perf.py
  > [perf]
  > presleep=0
  > stub=on
  > parentscount=1
  > EOF
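
A note on these [perf] settings: presleep=0 skips the sleep perf.py normally
takes before timing, and stub=on makes each perf command run its benchmark
body only once and print nothing, keeping the test fast and deterministic
(parentscount=1 keeps perfparents workable on this tiny repo). Without
stub=on, each command would print timing lines shaped roughly like this
(illustrative only, not expected output of this test):

! wall 0.000100 comb 0.000000 user 0.000000 sys 0.000000 (best of 25000)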

  $ hg help perfstatusext
  perfstatusext extension - helper extension to measure performance

  list of commands:

   perfaddremove
                 (no help text available)
   perfancestors
                 (no help text available)
   perfancestorset
                 (no help text available)
   perfannotate  (no help text available)
   perfbdiff     benchmark a bdiff between revisions
   perfbookmarks
                 benchmark parsing bookmarks from disk to memory
   perfbranchmap
                 benchmark the update of a branchmap
   perfbundleread
                 Benchmark reading of bundle files.
   perfcca       (no help text available)
   perfchangegroupchangelog
                 Benchmark producing a changelog group for a changegroup.
   perfchangeset
                 (no help text available)
   perfctxfiles  (no help text available)
   perfdiffwd    Profile diff of working directory changes
   perfdirfoldmap
                 (no help text available)
   perfdirs      (no help text available)
   perfdirstate  (no help text available)
   perfdirstatedirs
                 (no help text available)
   perfdirstatefoldmap
                 (no help text available)
   perfdirstatewrite
                 (no help text available)
   perffncacheencode
                 (no help text available)
   perffncacheload
                 (no help text available)
   perffncachewrite
                 (no help text available)
   perfheads     (no help text available)
   perfindex     (no help text available)
   perfloadmarkers
                 benchmark the time to parse the on-disk markers for a repo
   perflog       (no help text available)
   perflookup    (no help text available)
   perflrucachedict
                 (no help text available)
   perfmanifest  (no help text available)
   perfmergecalculate
                 (no help text available)
   perfmoonwalk  benchmark walking the changelog backwards
   perfnodelookup
                 (no help text available)
   perfparents   (no help text available)
   perfpathcopies
                 (no help text available)
   perfphases    benchmark phasesets computation
   perfrawfiles  (no help text available)
   perfrevlogchunks
                 Benchmark operations on revlog chunks.
   perfrevlogindex
                 Benchmark operations against a revlog index.
   perfrevlogrevision
                 Benchmark obtaining a revlog revision.
   perfrevlogrevisions
                 Benchmark reading a series of revisions from a revlog.
   perfrevrange  (no help text available)
   perfrevset    benchmark the execution time of a revset
   perfstartup   (no help text available)
   perfstatus    (no help text available)
   perftags      (no help text available)
   perftemplating
                 (no help text available)
   perfvolatilesets
                 benchmark the computation of various volatile set
   perfwalk      (no help text available)
   perfwrite     microbenchmark ui.write

  (use 'hg help -v perfstatusext' to show built-in aliases and global options)
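
Each command is now invoked once as a smoke test. With stub=on they all run
silently, so any output below would be a failure.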

  $ hg perfaddremove
  $ hg perfancestors
  $ hg perfancestorset 2
  $ hg perfannotate a
  $ hg perfbdiff -c 1
  $ hg perfbdiff --alldata 1
  $ hg perfbookmarks
  $ hg perfbranchmap
  $ hg perfcca
  $ hg perfchangegroupchangelog
  $ hg perfchangeset 2
  $ hg perfctxfiles 2
  $ hg perfdiffwd
  $ hg perfdirfoldmap
  $ hg perfdirs
  $ hg perfdirstate
  $ hg perfdirstatedirs
  $ hg perfdirstatefoldmap
  $ hg perfdirstatewrite
  $ hg perffncacheencode
  $ hg perffncacheload
  $ hg perffncachewrite
  $ hg perfheads
  $ hg perfindex
  $ hg perfloadmarkers
  $ hg perflog
  $ hg perflookup 2
  $ hg perflrucache
  $ hg perfmanifest 2
  $ hg perfmergecalculate -r 3
  $ hg perfmoonwalk
  $ hg perfnodelookup 2
  $ hg perfpathcopies 1 2
  $ hg perfrawfiles 2
  $ hg perfrevlogindex -c
  $ hg perfrevlogrevisions .hg/store/data/a.i
  $ hg perfrevlogrevision -m 0
  $ hg perfrevlogchunks -c
  $ hg perfrevrange
  $ hg perfrevset 'all()'
  $ hg perfstartup
  $ hg perfstatus
  $ hg perftags
  $ hg perftemplating
  $ hg perfvolatilesets
  $ hg perfwalk
  $ hg perfparents

Check perf.py for historical portability

  $ cd "$TESTDIR/.."

  $ (testrepohg files -r 1.2 glob:mercurial/*.c glob:mercurial/*.py;
  >  testrepohg files -r tip glob:mercurial/*.c glob:mercurial/*.py) |
  > "$TESTDIR"/check-perf-code.py contrib/perf.py
  contrib/perf.py:498:
   > from mercurial import (
   import newer module separately in try clause for early Mercurial
  [1]
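
The failure above is the point of the check: check-perf-code.py insists that
contrib/perf.py remain importable under historical Mercurial versions, so any
module newer than the oldest supported release must be imported in its own
try clause. A minimal sketch of that guarded-import pattern (the module name
here is only an example, not necessarily the one flagged at line 498):

try:
    # Modules that may not exist in early Mercurial are imported separately,
    # so a failed import degrades gracefully instead of breaking perf.py.
    from mercurial import registrar
except ImportError:
    registrar = None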