Summary:
A subsequent diff will need access to the node's diff and meta at iteration time. It seems like a
natural part of the API, so let's add it.
Note: it's possible to call `.getdelta(name, node)` to get this data if we don't read it here.
But I ran into some weird occasional OSErrors from the mmap API when I did that. So let's just
do this.
Reviewed By: DurhamG
Differential Revision: D7369225
fbshipit-source-id: 252839a549242909153c74287db8f36d6c63bd9c
Summary:
This makes hg pull use the connectionpool. This means prefetches can
reuse the existing ssh connection when appropriate. This both speeds up
prefetches and means they will speak to the same server that served the
pull.
Reviewed By: ryanmce
Differential Revision: D7481107
fbshipit-source-id: f9a3670527cb7e8956029c86d50d8e030dd3cc01
Summary:
Previously the connectionpool was a remotefilelog specific concept. We
want to start sharing connections between pull and prefetches, so let's move it
to core Mercurial.
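The pooling idea can be sketched in a few lines. This is an illustrative standalone sketch, not the actual Mercurial connectionpool API; the class and method names here are assumptions.

```python
import contextlib

# Minimal sketch of connection pooling (illustrative, not hg's real API):
# opened connections are keyed by peer path and handed back out on the
# next request instead of dialing a fresh ssh connection each time.
class ConnectionPool(object):
    def __init__(self):
        self._pools = {}

    @contextlib.contextmanager
    def get(self, path, open_connection):
        pool = self._pools.setdefault(path, [])
        # Reuse an idle connection if one exists, otherwise open a new one.
        conn = pool.pop() if pool else open_connection(path)
        try:
            yield conn
        finally:
            pool.append(conn)  # keep the connection for later reuse
```

With this shape, a pull and a subsequent prefetch asking for the same peer path get the same underlying connection.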
Reviewed By: ryanmce, phillco
Differential Revision: D7480670
fbshipit-source-id: 1b2eff3b0e61a815709ffaec35df802eeda0c24b
Summary:
`hg debugcolor --style` shows the component parts of each style individually;
however, this doesn't work if the styles are defined as the new fallback styles
(separated by colons). This is because the fallback is only implemented for
actual style names - it doesn't work for `ui.label('brightblue:blue', 'text')`.
It's useful to see what the fallbacks are (even if they're not necessary on
your own system), so change debugcolor to split the elements of the fallback
style and show them separately.
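As a hedged sketch (the helper name is illustrative, not the actual hg internals), splitting a colon-separated fallback style into its candidate styles might look like:

```python
# Split a fallback style such as "brightblue:blue" into its individual
# candidate styles so each element can be shown separately.
def splitfallbacks(style):
    return [s for s in style.split(":") if s]

print(splitfallbacks("brightblue:blue"))
```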
Reviewed By: quark-zju
Differential Revision: D7485545
fbshipit-source-id: dce7204c9f0a98bb730b3ba864db28a9ec52a339
Summary:
`len()` on a hybrid manifest wrapping a treemanifest would raise an AttributeError, but if there is no treemanifest, or there is *only* a treemanifest, a TypeError is raised instead. Using `len()` on an object that doesn't support length should always raise `TypeError`, consistently.
Instead of looking up the `__len__` attribute, use the built-in `len()` function, which will raise `TypeError` if the wrapped manifest in a hybrid doesn't have a `__len__` method. This ensures that we get a consistent exception.
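The Python behavior relied on here can be shown in isolation (the classes below are stand-ins, not the real manifest types):

```python
class NoLen(object):
    """Stand-in for a wrapped manifest without __len__."""

class WithLen(object):
    """Stand-in for a wrapped manifest with __len__."""
    def __len__(self):
        return 3

# len() raises TypeError when __len__ is missing, giving one consistent
# exception type, whereas obj.__len__() would raise AttributeError.
try:
    len(NoLen())
except TypeError:
    print("TypeError")

print(len(WithLen()))  # 3
```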
Reviewed By: farnz
Differential Revision: D7485510
fbshipit-source-id: 4132d6b383171cde8dd99dd60098716d4aedc527
Summary:
The full command line needs to come from the `dispatch.runcommand` function, as
`sys.argv` contains `serve ...` for chg invocations.
Also make sure the correlator remains the same for commands that make multiple
connections to the server.
Reviewed By: quark-zju
Differential Revision: D7443727
fbshipit-source-id: a785e372b7b67fbd0b4ab4d73e7ff914aa5db9c3
Summary:
`remotefilelog.fileserverclient.peersetup.remotefilepeer` overrides the
`_callstream` method; however, it uses `command` rather than `cmd` for the first
parameter name. This doesn't match the method it's overriding, and clashes
with clienttelemetry's use of this parameter for the original command that the
user ran.
Make this method match all the others.
Reviewed By: quark-zju
Differential Revision: D7443726
fbshipit-source-id: 1170feb21056c3e044bffaf55d95f7c48ff972fb
Summary:
gitignore support could have performance issues, stating .gitignore files everywhere.
That happens if watchman returns O(working copy) files. Add a config to
disable it while we work out a solution.
Reviewed By: DurhamG
Differential Revision: D7482499
fbshipit-source-id: 4c9247b0318bf034c8e9af4b74c21110cc598714
Summary:
Turns out I incorrectly assessed this situation before. We do use content from
perforce servers a lot. This change makes p4seqimport read from local disk
directly where possible rather than relying solely on `p4 print` to obtain file content.
```name=Checking file content src on master-importer task 0 (running for 15h+)
[15:40:23 twsvcscm@priv_global/independent_devinfra/ovrsource-master-importer/0 ~]$ egrep -o 'src: (gzip|rcs|p4)' /logs/stdout | sort | uniq -c
2567 src: gzip
24 src: p4
```
Differential Revision: D7388797
fbshipit-source-id: 5fe1a525bc211d64a75954d529edc152d22970a7
Summary:
Subsequent commits will need the new path of a mutable{data, hist}pack -- this makes
that data accessible.
Reviewed By: DurhamG
Differential Revision: D7369226
fbshipit-source-id: f6849aaed747fbd9afee7191e6a0e5e1357ca618
Summary: fastmanifest used the statvfs function to be smart about how much disk space it used. That function isn't available on Windows though. This optimization is optional, and we probably won't end up using the fastmanifest cache on Windows anyway, so let's just skip it if it's not available.
Reviewed By: quark-zju
Differential Revision: D7478478
fbshipit-source-id: e9595f3fef397d66d76f3ecfa54f8e4328ce0921
Summary:
dsp had a look at the whole stack and suggested some changes:
* Only write the bookmark once at the end of the import - we are doing a single transaction anyway, so updating the bookmark after every changelist import is moot
* Remove unused function seqimporter.ChangelistImporter._safe_open
* Require fncache to preserve behavior from p4fastimport
Differential Revision: D7375481
fbshipit-source-id: f4407d5d0276f96d72bf67544091640fe1c46044
Summary: Updates the importer wrapper to use the new p4seqimport, replacing p4fastimport.
Differential Revision: D7326764
fbshipit-source-id: 588486bfd747086396f47e678da05c6eafd30565
Summary:
When testing p4seqimport with remotefilelog, it would fail on the call to `.tip()`,
because remotefilelog doesn't have that.
This change uses the change context from the repo instead to get the
tip node.
Differential Revision: D7294979
fbshipit-source-id: 18b4a5107f4cbf676016d44d5134bf0d252eeff3
Summary:
Testing that p4seqimport works properly for branching
Based on a comment on D7172867
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7203765
fbshipit-source-id: 2e328f5b43fc47a60bfe2c41f9454f8471dda814
Summary:
Perforce supports RCS keyworded files, more info here:
http://answers.perforce.com/articles/KB/3482
p4fastimport replaces these keywords back; this change replicates that behavior
in p4seqimport (the unit test should clarify what this means)
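As a rough illustration only (the keyword set and helper below are assumptions, not the importer's actual code), collapsing expanded RCS keywords back to their bare form looks like:

```python
import re

# Collapse expanded RCS keywords such as "$Id: //depot/f.c#3 $" back to
# "$Id$" so imported file content is stable. The keyword list here is
# illustrative, not exhaustive.
KEYWORD_RE = re.compile(
    r"\$(Id|Header|Date|DateTime|Change|File|Revision|Author):[^$\n]*\$"
)

def collapse_rcs_keywords(text):
    return KEYWORD_RE.sub(lambda m: "$%s$" % m.group(1), text)

print(collapse_rcs_keywords("$Id: //depot/main/f.c#3 $"))
```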
Differential Revision: D7188163
fbshipit-source-id: 594f71d6114c73001753ae36c4973c2db3310e62
Summary:
Respect the executable bit on files based on perforce type.
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7185388
fbshipit-source-id: 59afec7bd857572b8347ebe546d131017a79928c
Summary:
p4seqimport has used very high level mercurial abstractions so far (almost
equivalent to running hg add / mv / rm / commit on command line). This is very
easy to grasp as we use it day to day. It is not performant enough for our
importer:
- It does the work twice (write to working copy, then commit changing hg metadata)
- It requires the working copy (this would force us to update between revs,
materializing a prohibitively large number of files)
This change makes use of memctx, which is basically an in-memory commit. This way
we don't need a working copy and we save time + a lot of space.
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7176903
fbshipit-source-id: 2773d7c001b615837496ea9db3229d9afc020124
Summary:
p4seqimport has a bookmark option, but it was completely ignored before this change.
This change makes use of the option, moving the bookmark as we import changes.
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7172867
fbshipit-source-id: be63765088b0583df2e1c9e0ccec869c5278d782
Summary:
Properly create files as symlinks if they are symlinks in P4
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Reviewed By: wlis
Differential Revision: D7157772
fbshipit-source-id: ac3e5010f3d15460592a449c817824c0b28a8435
Summary:
Similar to #10 (D7113181), we need to track large files.
This change adds the bits to do so, reusing the logic from p4fastimport which was
moved to lfs.py
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7115654
fbshipit-source-id: 56ccfadf6fa14dcfb8005cc5ef03fb175835bcda
Summary:
This change makes seqimport write revision info (i.e. (CL, hghash) pairs) to a
sqlite file. This is used by the importer TW job wrapper to write the info
into `xdb.p4sync` table `revmap`
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7113181
fbshipit-source-id: e55a8cf0b794216a4855ae7486885c3d956cd7fb
Summary:
Adds p4changelist to commit extra info
With p4changelist info, make p4seqimport incremental
Add debug message to have more accurate info on what is actually being imported
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7090090
fbshipit-source-id: 17529aa57452453cfe29c3c3dc9d9e7daa8cffb2
Summary:
Adds copy tracing to `p4seqimport` by:
- Leveraging `fromFile` from `p4 -ztag describe` to record the source of moved
  files in `P4Changelist.load`'s output
- Utilizing that info from P4 CL when creating hg commit
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
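A hedged sketch of the idea: field names follow `p4 -ztag describe` output, but the helper itself is hypothetical, not the extension's actual code.

```python
# Build an hg-style copy map (destination -> source) from describe
# entries, using "fromFile" where Perforce recorded a move/branch source.
def buildcopymap(describe_entries):
    return {
        e["depotFile"]: e["fromFile"]
        for e in describe_entries
        if e.get("fromFile")
    }
```

When creating the hg commit, each destination file can then carry its recorded copy source.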
Differential Revision: D7074892
fbshipit-source-id: e105a608bb953a8137ec6c9afc7e0571a902c868
Summary:
Consolidates manipulation of p4 CL info into the p4 module, pulling the relevant code
out of `ChangeManifestImporter.creategen` so it can be easily shared by
p4fastimport and p4seqimport
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7064179
fbshipit-source-id: 72c5bcad209eebf40ec8152a07f98f7f7fa544fb
Summary:
Adds logic to create the commit, using info from p4 CL + the list of added and
removed files.
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7063983
fbshipit-source-id: c64e44c19d06e54fe35121a8d6128de050f93823
Summary:
Read file from perforce, write into the hg repo.
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7050157
fbshipit-source-id: 4389ba11f62c8ed825d6a6ef3c001095339eb551
Summary:
Creates ChangelistImporter, which will be responsible for translating a p4 CL to
a hg commit
For now it only goes through files touched by the CL and lists what was added or
removed. Next diffs will evolve it to the point where it effectively performs the
translation.
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7049961
fbshipit-source-id: 6a9f3bd57cadc2b9ea8a81373cc10dfda76311e7
Summary:
Pulls the logic to define changelists from p4fastimport into separate function
and re-uses that in p4seqimport
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7035674
fbshipit-source-id: 699e9148d35e437f306062f290c8ec2a857df480
Summary:
This change:
* Moves some opts sanitizing logic into a new `sanitizeopts` function
* Adds checks for `limit` being a positive integer
* Uses the new `sanitizeopts` function in p4seqimport
* Adds a test covering `sanitizeopts`
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
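The `limit` check can be sketched like this. This is a hypothetical standalone version under assumed option names, not the extension's exact code:

```python
def sanitizeopts(opts):
    """Validate p4seqimport options (sketch): limit, if given, must be
    a positive integer."""
    limit = opts.get("limit")
    if limit is not None and limit != "":
        try:
            limit = int(limit)
        except ValueError:
            raise ValueError("limit must be an integer")
        if limit <= 0:
            raise ValueError("limit must be a positive integer")
        opts["limit"] = limit
    return opts
```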
Differential Revision: D7035217
fbshipit-source-id: cd677fb254ff83d123673d51a1c682639de08a30
Summary:
p4seqimport will be the new command to import from p4 to hg changelist by
changelist. This should provide us with a more robust importer that doesn't rely
on fiddling with hg's data structures directly. p4fastimport was important to
create ovrsource from scratch and import thousands of changelists, but moving
forward it is probably safer and easier to understand/maintain something that is
based on higher-level Mercurial APIs.
All that said, this is the first change in the series; it:
1. Creates the p4seqimport command as part of the p4fastimport extension
2. Refactors the p4 client checking logic into `enforce_p4_client_exists`
3. Adds a test that checks the new function works by invoking `p4seqimport`.
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7015941
fbshipit-source-id: cb5c59b2f104f336a078025544a44028bf01fa85
Summary:
After testing locally, I couldn't conclusively prove if rebasing a single change with IMM was any faster or slower than on disk.
Using IMM on the working copy will definitely be better for rebasing stacks, and it's just nicer to not have the working copy thrash around as much. It might also let you work while the rebase is running.* So I've added the code that will let us enable this more widely (as a subset of IMM) to experiment.
*I've made it so that if you make any changes during the rebase (causing the last update to fail), we just print a nice message telling you to check out the new rebased working copy commit, instead of failing/aborting. TBD whether this is something we want to encourage people to do, however. I've kept the existing up-front check for uncommitted changes when rebasing the WCP with IMM for now.
Reviewed By: DurhamG
Differential Revision: D7051282
fbshipit-source-id: c04302539021f481c17e47c23d3f4d8b3ed59db6
Summary:
There was an issue where, if the prefetch inside cansendtrees failed, the
actual operation was never attempted. This is undesirable, since
prefetch only talks to the server, while the actual tree fetch will also attempt
to generate a tree from an old flat manifest.
Ideally we'd have a more unified flow here, where the server could let
us know which nodes it couldn't find, then the client could try other options for
the remaining nodes, but that requires significantly more refactoring.
Reviewed By: quark-zju
Differential Revision: D7450662
fbshipit-source-id: a023f27ee4b74786633e4dce7e62f3d9604c2b7f
Summary:
It further slows down lookups, even when checksum is disabled, since even a
`is_none()` check is not free:
```
index insertion                 4.697 ms
index flush                     3.764 ms
index lookup (memory)           2.878 ms
index lookup (disk, no verify)  3.564 ms
index lookup (disk, verified)   7.788 ms
```
The "verified" version basically needs 2x time due to more memory lookups.
Unfortunately this means eventual lookup performance will be slower than
gdbm, but insertion is still much faster. And the index still has better
locking properties (lock-free reads) that gdbm does not have.
With correct time complexity (no O(len(changelog)) index-only operations for
example), I'd expect it's rare for the overall performance to be bounded by
index performance. Data integrity is more important.
With a larger number of nodes, ex. 2M 20-byte strings: inserting to memory
takes 1.4 seconds, flushing to disk takes 0.9 seconds, looking up without
checksum takes 0.9 seconds, looking up with checksum takes 1.7 seconds.
Reviewed By: DurhamG
Differential Revision: D7440248
fbshipit-source-id: 020e5204606f9f0a4f68843a491009a6a6f75751
Summary:
This is in the critical path for lookup, and has very visible performance
penalty:
```
index insertion                 3.923 ms
index flush                     3.921 ms
index lookup (memory)           1.070 ms
index lookup (disk, no verify)  1.980 ms
index lookup (disk, verified)   5.206 ms
```
Reviewed By: DurhamG
Differential Revision: D7440252
fbshipit-source-id: 49540f974faff1cdd0603a72328f141ccd054ee2
Summary:
Previously the checksum was only for `MemRoot`; now it's for all `Mem` structs.
Since `Mem*` structs are not frequently used in the normal lookup code path,
there is no visible performance change.
Reviewed By: DurhamG
Differential Revision: D7440253
fbshipit-source-id: 945f5a8c38d228f59190a487b0cf6dbc5daac4f7
Summary:
The type will be used all over the place and may make `rustfmt` wrap lines.
Use a shorter type to make it slightly cleaner.
Reviewed By: DurhamG
Differential Revision: D7436338
fbshipit-source-id: ecaada23916a22658f65669b748632a077e60df2
Summary:
This only affects `Index::open` right now. So it's a one time check and does
not affect performance.
Reviewed By: DurhamG
Differential Revision: D7436341
fbshipit-source-id: 30313064bf2ea50320ac744fc18c03bff4b12c89
Summary:
Add `ChecksumTable` to the `Index` struct. But it's not functional yet.
The checksum will mainly affect "index lookup (disk)" case. Add another
benchmark for showing the difference with checksum on and off. They do not
have much difference right now:
```
index insertion                 3.756 ms
index flush                     3.469 ms
index lookup (memory)           0.990 ms
index lookup (disk, no verify)  1.768 ms
index lookup (disk, verified)   1.766 ms
```
Reviewed By: DurhamG
Differential Revision: D7436339
fbshipit-source-id: 60a6554a2c96067a53ce9e1753cd51d0d61c0bea
Summary:
The minibench framework does not provide benchmark filtering. So let's
separate benchmarks using different entry points.
Reviewed By: DurhamG
Differential Revision: D7440250
fbshipit-source-id: 11e7790a5074ebf4c08e33c312a490a66a921926
Summary:
The "clone" benchmarks were added to be subtracted from "lookup" to
work around the test framework limitation.
The new minibench framework makes it easier to exclude preparation cost.
Therefore the clone benchmarks are no longer needed.
```
index insertion        3.881 ms
index flush            3.286 ms
index lookup (memory)  0.928 ms
index lookup (disk)    1.685 ms
```
"index lookup (memory)" here is basically the previous "index lookup (memory)"
minus "index clone (memory)".
Reviewed By: DurhamG
Differential Revision: D7440251
fbshipit-source-id: 0e6a1fb7ee64f9a393ee9ada4db6e6eb052e20bf
Summary:
See the previous minibench diff for the motivation.
"failure" was removed from the build dependencies since it's not used yet.
Run the benchmark a few times. The first several items seem less stable,
possibly due to warm-up effects. Otherwise the result looks good enough.
The test also compiles and runs much faster.
```
base16 iterating 1M bytes 0.921 ms
index insertion 4.804 ms
index flush 5.104 ms
index lookup (memory) 2.929 ms
index lookup (disk) 1.767 ms
index clone (memory) 2.036 ms
index clone (disk) 0.010 ms
base16 iterating 1M bytes 0.853 ms
index insertion 4.512 ms
index flush 4.717 ms
index lookup (memory) 2.907 ms
index lookup (disk) 1.755 ms
index clone (memory) 1.856 ms
index clone (disk) 0.010 ms
base16 iterating 1M bytes 1.525 ms
index insertion 4.577 ms
index flush 4.901 ms
index lookup (memory) 2.800 ms
index lookup (disk) 1.790 ms
index clone (memory) 1.794 ms
index clone (disk) 0.010 ms
base16 iterating 1M bytes 0.768 ms
index insertion 4.486 ms
index flush 4.918 ms
index lookup (memory) 2.658 ms
index lookup (disk) 1.721 ms
index clone (memory) 1.763 ms
index clone (disk) 0.010 ms
base16 iterating 1M bytes 0.732 ms
index insertion 4.489 ms
index flush 4.792 ms
index lookup (memory) 2.689 ms
index lookup (disk) 1.739 ms
index clone (memory) 1.850 ms
index clone (disk) 0.009 ms
base16 iterating 1M bytes 1.124 ms
index insertion 7.188 ms
index flush 4.888 ms
index lookup (memory) 2.829 ms
index lookup (disk) 1.609 ms
index clone (memory) 2.642 ms
index clone (disk) 0.010 ms
base16 iterating 1M bytes 1.055 ms
index insertion 4.683 ms
index flush 4.996 ms
index lookup (memory) 2.782 ms
index lookup (disk) 1.710 ms
index clone (memory) 1.802 ms
index clone (disk) 0.009 ms
```
Reviewed By: DurhamG
Differential Revision: D7440249
fbshipit-source-id: 0f946ab184455acd40c5a38cf46ff94d9e3755c8
Summary:
The dirty -> non-dirty offset mapping can be optimized using a dedicated
"map" type that is backed by `vec`s, because dirty offsets are contiguous
per type.
This makes "flush" significantly faster:
```
index flush time: [5.8808 ms 6.1800 ms 6.4813 ms]
change: [-62.250% -59.481% -56.325%] (p = 0.00 < 0.05)
Performance has improved.
```
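The idea can be sketched as follows. The real implementation is in Rust; this is an illustrative Python sketch with made-up names, showing why contiguous keys let a plain list replace a hash map.

```python
# Because dirty offsets are contiguous per entry type, a list indexed by
# (dirty_offset - base) can serve as the dirty -> final offset map,
# avoiding hashing entirely.
class ContiguousOffsetMap(object):
    def __init__(self, base):
        self.base = base       # first dirty offset of this entry type
        self.resolved = []     # resolved[i] = final offset for base + i

    def push(self, final_offset):
        # Entries are resolved in order, so appending preserves the mapping.
        self.resolved.append(final_offset)

    def get(self, dirty_offset):
        return self.resolved[dirty_offset - self.base]
```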
Reviewed By: DurhamG
Differential Revision: D7422832
fbshipit-source-id: 9ab8a70d1663155941dae5b4f02f7452f5e3cadf
Summary:
It seems to improve the performance a bit:
```
index insertion time: [5.4643 ms 5.6818 ms 5.9188 ms]
change: [-24.526% -17.384% -10.315%] (p = 0.00 < 0.05)
Performance has improved.
```
Reviewed By: DurhamG
Differential Revision: D7422831
fbshipit-source-id: fc1c72f402258db7e189cd8724583757d48affb7
Summary:
For key entries, the key is immutable once stored, so just use `Box<[u8]>`.
It saves a `usize` per entry; on 64-bit platforms, that's a lot.
Performance is slightly improved, and it now catches up with D7404532 (before the
typed offset refactoring):
```
index insertion       time: [6.1852 ms 6.6598 ms 7.2433 ms]
index flush           time: [15.814 ms 16.538 ms 17.235 ms]
index lookup (memory) time: [3.7636 ms 3.9403 ms 4.1424 ms]
index lookup (disk)   time: [1.9413 ms 2.0366 ms 2.1325 ms]
index clone (memory)  time: [2.6952 ms 2.9221 ms 3.0968 ms]
index clone (disk)    time: [5.0296 us 5.2862 us 5.5629 us]
```
Reviewed By: DurhamG
Differential Revision: D7422837
fbshipit-source-id: 4aabfdc028aefb8e796803e103f0b2e4965f84e6
Summary:
Previously, both `value` and `link` were optional in `insert_advanced`.
This diff makes `value` required.
`maybe_create_link_entry` became unused and was removed.
No visible performance change.
Reviewed By: DurhamG
Differential Revision: D7422838
fbshipit-source-id: 8d7d3cc1cc325f6fea7e8ce996d0a43d3ee49839
Summary:
Also add an IMM test to tease out working-copy vs. non-working-copy issues.
Also add some newlines to code stolen from fbcode.
Reviewed By: DurhamG
Differential Revision: D7432333
fbshipit-source-id: 029ccd8aeec7f0e2c380da41e7d78b433a275af3