Summary:
We were ignoring the actual profile data, and all profiles were given the same .hg/sparse-based hash signature instead.
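The intended behavior can be sketched in a few lines (illustrative helper name, not the extension's actual code): each profile's signature should be derived from that profile's own content, so distinct profiles get distinct hashes.

```python
import hashlib

def profile_signature(profile_content):
    # Hash the profile's own bytes so distinct profiles
    # get distinct signatures (the bug hashed a shared
    # .hg/sparse file for every profile instead).
    return hashlib.sha1(profile_content).hexdigest()

# Two different profiles must not share a signature.
a = profile_signature(b"[include]\nfoo/\n")
b = profile_signature(b"[include]\nbar/\n")
```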
Error introduced in D7415720
Differential Revision: D7514087
fbshipit-source-id: 56288aaaa2065b031318e7c065ec6310f6cecd37
Summary:
Our tests use features provided by the bash shell to function
properly. I ran into issues with our tests on Ubuntu because the default shell
was `dash`, which does not even support the `source` command. Therefore, this
commit changes the default hg shell for the tests to `bash`. Clients can always
override the default test shell with another one.
Reviewed By: quark-zju
Differential Revision: D7563733
fbshipit-source-id: e9c16c002793a919a49292a9aa0a876ba232c293
Summary:
`run-tests.py` has an option to specify the shell for running the
tests. However, when run with the `--shell bash` configuration, several tests
fail with a `testrepohg: command not found` error. I could not find the
specific cause, but changing the alias to a function resolved the error. Also
fixed another related error by using double quotes during variable
expansion.
Reviewed By: quark-zju
Differential Revision: D7563731
fbshipit-source-id: 58e1b5b996ccdc20b8375dcd5f4f8e071bd9cdc1
Summary:
This allows us to turn on and off hgignore support directly without changing
files in the working copy (which could be hard to revert cleanly).
Reviewed By: mjpieters
Differential Revision: D7544529
fbshipit-source-id: 14cc41e2ae361070f91bf3b8aa28dd5808e7fe99
Summary: When I run this test (test-fb-hgext-p4fastimport-seqimport.t) on my dev server, which has the same `p4d` as we have in prod (P59384966), it fails because it reads the file from a different source (P59381563). Since both `p4` and `rcs` are valid sources for a read, this diff updates the test to not restrict it to `p4` only.
Differential Revision: D7556357
fbshipit-source-id: b82d254841a31fe447452ee408bdb8e157854aab
Summary:
Previously `hg graft -r 'ancestor::descendant'` printed no error message at all.
This diff fixes it.
Reviewed By: mjpieters
Differential Revision: D7533498
fbshipit-source-id: 5c4e41ecc3178495ad2f41ef53ef65f7fbb70212
Summary:
A side effect is, the hint won't be printed out if fbamend is not enabled,
which is more "correct".
Reviewed By: markbt
Differential Revision: D7392130
fbshipit-source-id: 5b7aa4cc3083b03546c54965ce51040fab958b87
Summary: This allows people to silence the hint.
Reviewed By: markbt
Differential Revision: D7392127
fbshipit-source-id: ac16f952a178d567ce13e22946127456972ebe85
Summary:
This allows users to silence the "hide" advice.
In the future, we might want to change "hide/unhide" to only affect visibility
without changing obsolescence. So "strip" is not fully deprecated yet.
Reviewed By: markbt
Differential Revision: D7392131
fbshipit-source-id: 2448d4c91dffce31d29e2dd99078cb555c9a8f8c
Summary:
This allows people to silence hints as they like. It's done by modifying
user hgrc.
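As a hypothetical sketch of what silencing a hint could look like in the user hgrc (the section and key names here are illustrative assumptions, not a confirmed config schema):

```ini
[hint]
# acknowledge (silence) individual hints by name
ack = strip-hide
```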
Reviewed By: markbt
Differential Revision: D7392133
fbshipit-source-id: 1365294217db92dfb3a0c81332a9fefd164795d4
Summary:
This allows us to have a unified way to print hint messages at the end of a
command. It would be helpful for feature discovery in general.
Reviewed By: mjpieters
Differential Revision: D7392132
fbshipit-source-id: 8b4e94cc2176266652459ecca3428bd86d95bfe2
Summary:
We call bailifchanged() later on, but the abort does an `up -C originalwc` before that. Let's call bailifchanged() immediately.
A better version: reset the transaction and clear the statefile, instead of calling `abort()`.
Reviewed By: quark-zju
Differential Revision: D7538017
fbshipit-source-id: 8c490b366e495bb269c4d8c75b6144c535c8d54f
Summary:
Minor fixes to how the `previous locations of %s` line is printed:
- Start the pager before printing this line, so it gets included in the pager
output correctly.
- Avoid printing this line when using a custom output template. Previously it
was only skipped when using the `json` template. This now matches the logic
used to skip the `no recorded locations` line that was recommended in
D7512030.
Reviewed By: ryanmce
Differential Revision: D7537661
fbshipit-source-id: eb695dd98c06149701cf96acf5ec2eb277ea9cf3
Summary:
Avoid printing "no recorded locations" directly to stdout when a format
template was specified. In particular this avoids printing non-JSON data
when using `-Tjson`.
We potentially could change this to print to stderr instead. However for now
I just followed the same pattern of checking the template as was done above for
the "previous locations" message.
Reviewed By: ryanmce
Differential Revision: D7512030
fbshipit-source-id: 2c32f07962fac4ca3d6bfd8f2ca3c4840b2a8a9b
Summary: When no cache has been set (`simplecache.caches=`) then there's no `name` set either. This exposed a logic error in this section of the code.
Reviewed By: ryanmce
Differential Revision: D7513976
fbshipit-source-id: ecbc7a8ac8c6eb23010d64ab8cbf9f9fb7d8f497
Summary:
Move the logic that updates the current rev to its new location (this update is
optional): we only attempt it when the new location is unambiguous.
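The "only if unambiguous" rule can be sketched like this (hypothetical helper, not the actual extension code): after a rewrite, move to the successor only when there is exactly one.

```python
# Pick the rewritten location of the current rev, but only when
# it is unambiguous: exactly one successor.
def newlocation(current, successors):
    """successors: {oldrev: [newrev, ...]}; return the unique
    successor of `current`, or None if absent or ambiguous."""
    candidates = successors.get(current, [])
    if len(candidates) == 1:
        return candidates[0]
    return None  # split or no successor: stay where we are

succ = {"a1": ["b1"], "a2": ["b2", "b3"]}
```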
Reviewed By: DurhamG
Differential Revision: D7431940
fbshipit-source-id: 72e7fea7365a231c4d98ceb4cf4872a4db02d9ca
Summary:
[commitcloud] commit cloud recover state
Adds an `hg cloudrecover` command. It might be helpful to have a command like
this in case something goes wrong with the local state.
Reviewed By: DurhamG
Differential Revision: D7417147
fbshipit-source-id: 4b236f2753b1f212ff4881a649032e53e032c66c
Summary:
When pushing a backup bundle to the server, check if the response contains an
error, and fail the backup accordingly.
Differential Revision: D7498324
fbshipit-source-id: a08807ac54e9d3044ff1450e93d2a8ea9d6f767f
Summary:
Add a server-side config option `infinitepush.maxbundlesize` to control the
maximum bundle size (currently 100MB).
Add a test that shows bad behaviour when pushing backups that exceed this size.
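A sketch of the server-side check (the config key and the 100MB figure come from this change; the helper itself is illustrative, not the actual server code):

```python
# Reject bundles above the configured infinitepush.maxbundlesize.
DEFAULT_MAXBUNDLESIZE = 100 * 1024 * 1024  # 100MB

def checkbundlesize(bundlebytes, config):
    limit = config.get("infinitepush.maxbundlesize",
                       DEFAULT_MAXBUNDLESIZE)
    if len(bundlebytes) > limit:
        raise ValueError("bundle is too big: %d bytes (max %d)"
                         % (len(bundlebytes), limit))

# A bundle under the limit passes silently.
checkbundlesize(b"x" * 10, {"infinitepush.maxbundlesize": 100})
```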
Differential Revision: D7498323
fbshipit-source-id: 640478e7a58cb3c39408fe2a24d8d581f14d891c
Summary:
Previously the connectionpool was a remotefilelog specific concept. We
want to start sharing connections between pull and prefetches, so let's move it
to core Mercurial.
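The idea can be sketched as a minimal pool (illustrative, not the real core-Mercurial code): connections are keyed by peer, so a pull and a later prefetch can reuse the same one instead of reconnecting.

```python
# Minimal connection-pool sketch: idle connections are kept per
# key and handed back out before a new one is created.
class ConnectionPool(object):
    def __init__(self):
        self._idle = {}  # key -> list of idle connections

    def get(self, key, factory):
        """Reuse an idle connection for `key`, or create one."""
        idle = self._idle.setdefault(key, [])
        return idle.pop() if idle else factory()

    def release(self, key, conn):
        self._idle.setdefault(key, []).append(conn)

pool = ConnectionPool()
made = []
conn1 = pool.get("server", lambda: made.append(1) or object())
pool.release("server", conn1)
# Second get reuses conn1; the factory is not called again.
conn2 = pool.get("server", lambda: made.append(1) or object())
```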
Reviewed By: ryanmce, phillco
Differential Revision: D7480670
fbshipit-source-id: 1b2eff3b0e61a815709ffaec35df802eeda0c24b
Summary:
The full command line needs to come from the `dispatch.runcommand` function, as
`sys.argv` contains `serve ...` for chg invocations.
Also make sure the correlator remains the same for commands that make multiple
connections to the server.
Reviewed By: quark-zju
Differential Revision: D7443727
fbshipit-source-id: a785e372b7b67fbd0b4ab4d73e7ff914aa5db9c3
Summary:
gitignore could have performance issues stat()-ing .gitignore files everywhere.
That happens if watchman returns O(working copy) files. Add a config to
disable it while we work out a solution.
Reviewed By: DurhamG
Differential Revision: D7482499
fbshipit-source-id: 4c9247b0318bf034c8e9af4b74c21110cc598714
Summary:
Turns out I incorrectly assessed this situation before. We do use content from
perforce servers a lot. This change makes p4seqimport read from local disk
directly when possible rather than relying solely on `p4 print` to obtain file content.
```name=Checking file content src on master-importer task 0 (running for 15h+)
[15:40:23 twsvcscm@priv_global/independent_devinfra/ovrsource-master-importer/0 ~]$ egrep -o 'src: (gzip|rcs|p4)' /logs/stdout | sort | uniq -c
2567 src: gzip
24 src: p4
```
Differential Revision: D7388797
fbshipit-source-id: 5fe1a525bc211d64a75954d529edc152d22970a7
Summary:
dsp had a look at the whole stack and suggested some changes:
* Only write the bookmark once at the end of the import - we are doing a single transaction anyway, so updating the bookmark after every changelist import is moot
* Remove unused function seqimporter.ChangelistImporter._safe_open
* Require fncache to preserve behavior from p4fastimport
Differential Revision: D7375481
fbshipit-source-id: f4407d5d0276f96d72bf67544091640fe1c46044
Summary:
Testing that p4seqimport works properly for branching
Based on comment on D7172867
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7203765
fbshipit-source-id: 2e328f5b43fc47a60bfe2c41f9454f8471dda814
Summary:
Perforce supports RCS keyworded files, more info here:
http://answers.perforce.com/articles/KB/3482
We replace things back in p4fastimport; this change replicates that behavior in
p4seqimport (the unit test should clarify what this means).
Differential Revision: D7188163
fbshipit-source-id: 594f71d6114c73001753ae36c4973c2db3310e62
Summary:
Respect the executable bit on files based on perforce type.
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7185388
fbshipit-source-id: 59afec7bd857572b8347ebe546d131017a79928c
Summary:
p4seqimport has used very high level mercurial abstractions so far (almost
equivalent to running hg add / mv / rm / commit on command line). This is very
easy to grasp as we use it day to day. It is not performant enough for our
importer:
- It does the work twice (write to working copy, then commit changing hg metadata)
- It requires the working copy (this would force us to update between revs,
materializing a prohibitively large number of files)
This change makes use of memctx, which is basically an in-memory commit. This way
we don't need a working copy and we save time + a lot of space.
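The idea can be sketched without Mercurial's actual `memctx` API (whose signature varies between versions): a commit is built from the parent's manifest plus an in-memory {path: data} overlay, so nothing is written to a working copy.

```python
# In-memory commit sketch: derive the child manifest from the
# parent's plus an overlay of changed/removed files, never
# materializing anything on disk. (Illustrative only; the real
# mechanism is Mercurial's context.memctx.)
def commit_in_memory(parent_manifest, changes, removed=()):
    manifest = dict(parent_manifest)  # copy; parent stays intact
    manifest.update(changes)
    for path in removed:
        manifest.pop(path, None)
    return manifest

parent = {"a.txt": b"old", "b.txt": b"keep"}
child = commit_in_memory(parent, {"a.txt": b"new"}, removed=["b.txt"])
```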
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7176903
fbshipit-source-id: 2773d7c001b615837496ea9db3229d9afc020124
Summary:
p4seqimport has a bookmark option that was completely ignored before this change.
This change makes use of the option, moving the bookmark as we import changes.
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7172867
fbshipit-source-id: be63765088b0583df2e1c9e0ccec869c5278d782
Summary:
Properly create files as symlinks if they are symlinks in P4
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Reviewed By: wlis
Differential Revision: D7157772
fbshipit-source-id: ac3e5010f3d15460592a449c817824c0b28a8435
Summary:
Similar to #10 (D7113181), we need to track large files.
This change adds the bits to do so, reusing the logic from p4fastimport which was
moved to lfs.py
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7115654
fbshipit-source-id: 56ccfadf6fa14dcfb8005cc5ef03fb175835bcda
Summary:
This change makes seqimport write revision info (i.e. (CL, hghash) pairs) to a
sqlite file. This is used by the importer TW job wrapper to write the info
into `xdb.p4sync` table `revmap`
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
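Writing the (CL, hghash) pairs can be sketched with the sqlite3 stdlib module; the table and column names below are illustrative, not necessarily the importer's actual schema.

```python
import sqlite3

def write_revmap(path, pairs):
    """Write (changelist, hg hash) pairs to a sqlite file."""
    conn = sqlite3.connect(path)
    with conn:  # commit on success
        conn.execute("CREATE TABLE IF NOT EXISTS revmap "
                     "(cl INTEGER PRIMARY KEY, hghash TEXT)")
        conn.executemany("INSERT OR REPLACE INTO revmap VALUES (?, ?)",
                         pairs)
    return conn

conn = write_revmap(":memory:", [(101, "abc123"), (102, "def456")])
rows = conn.execute("SELECT cl, hghash FROM revmap ORDER BY cl").fetchall()
```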
Differential Revision: D7113181
fbshipit-source-id: e55a8cf0b794216a4855ae7486885c3d956cd7fb
Summary:
- Adds p4changelist to the commit extra info
- With the p4changelist info, makes p4seqimport incremental
- Adds a debug message to give more accurate info on what is actually being imported
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7090090
fbshipit-source-id: 17529aa57452453cfe29c3c3dc9d9e7daa8cffb2
Summary:
Adds copy tracing to `p4seqimport` by:
- Leveraging `fromFile` from `p4 -ztag describe` to introduce source for moved
files into P4Changelist.load's
- Utilizing that info from P4 CL when creating hg commit
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7074892
fbshipit-source-id: e105a608bb953a8137ec6c9afc7e0571a902c868
Summary:
Adds logic to create the commit, using info from p4 CL + the list of added and
removed files.
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7063983
fbshipit-source-id: c64e44c19d06e54fe35121a8d6128de050f93823
Summary:
Read file from perforce, write into the hg repo.
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7050157
fbshipit-source-id: 4389ba11f62c8ed825d6a6ef3c001095339eb551
Summary:
Creates ChangelistImporter, which will be responsible for translating a p4 CL to
a hg commit
For now it only goes through files touched by the CL and lists what was added or
removed. Next diffs will evolve it to the point where it effectively performs the
translation.
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
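This first step can be sketched as classifying the files a changelist touched by their p4 action (the action names mirror common Perforce actions; the helper itself is illustrative):

```python
# Given the files a p4 changelist touched and their actions,
# list what was added and what was removed.
def classify(files):
    """files: {path: p4action}; return (added, removed) lists."""
    added = sorted(p for p, a in files.items()
                   if a in ("add", "branch", "move/add"))
    removed = sorted(p for p, a in files.items()
                     if a in ("delete", "move/delete"))
    return added, removed

added, removed = classify({"foo.c": "add", "bar.c": "delete",
                           "baz.c": "edit"})
```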
Differential Revision: D7049961
fbshipit-source-id: 6a9f3bd57cadc2b9ea8a81373cc10dfda76311e7
Summary:
Pulls the logic to define changelists from p4fastimport into separate function
and re-uses that in p4seqimport
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7035674
fbshipit-source-id: 699e9148d35e437f306062f290c8ec2a857df480
Summary:
This change:
- Moves some opts-sanitizing logic into a new `sanitizeopts` function
- Adds a check that `limit` is a positive integer
- Uses the new `sanitizeopts` function in p4seqimport
- Adds a test covering `sanitizeopts`
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
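The positive-integer check can be sketched like this (the `sanitizeopts` name comes from the commit; the body is illustrative):

```python
# Validate command opts; reject a non-positive or non-integer
# `limit` before the import starts.
def sanitizeopts(opts):
    limit = opts.get("limit")
    if limit is not None:
        try:
            limit = int(limit)
        except (TypeError, ValueError):
            raise ValueError("limit must be an integer")
        if limit <= 0:
            raise ValueError("limit must be a positive integer")
        opts["limit"] = limit
    return opts

opts = sanitizeopts({"limit": "5"})
```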
Differential Revision: D7035217
fbshipit-source-id: cd677fb254ff83d123673d51a1c682639de08a30
Summary:
p4seqimport will be the new command to import from p4 to hg changelist by
changelist. This should provide us with a more robust importer that doesn't rely
on fiddling with hg's data structures directly. p4fastimport was important to
create ovrsource from scratch and import thousands of changelists, but moving
forward it is probably safer and easier to understand/maintain something that is
based on higher-level Mercurial APIs.
All that said, this is the first change in the series. It:
1. Creates p4seqimport command as part of the p4fastimport extension
2. Refactors the p4 client checking logic into `enforce_p4_client_exists`
3. Adds a test that checks the new function works, exercised via `p4seqimport`.
For a high-level overview of p4seqimport, please check https://our.intern.facebook.com/intern/wiki/IDI/p4seqimport/
Differential Revision: D7015941
fbshipit-source-id: cb5c59b2f104f336a078025544a44028bf01fa85
Summary:
After testing locally, I couldn't conclusively determine whether rebasing a single change with IMM was any faster or slower than on disk.
Using IMM on the working copy will definitely be better for rebasing stacks, and it's just nicer to not have the working copy thrash around as much. It might also be interesting to let you work while the rebase is running.* So I've added the code that will let us enable this more widely (as a subset of IMM) to experiment.
*I've made it so that if you make any changes during the rebase (causing the last update to fail), we just print a nice message telling you to check out the new rebased working-copy commit, instead of failing/aborting. TBD whether this is something we want to encourage people to do, however. I've kept the existing up-front check for uncommitted changes when rebasing the WCP with IMM for now.
Reviewed By: DurhamG
Differential Revision: D7051282
fbshipit-source-id: c04302539021f481c17e47c23d3f4d8b3ed59db6
Summary:
There was an issue where, if the prefetch inside cansendtrees failed, we never
went on to actually try the operation. This is undesirable, since
prefetch only talks to the server while the actual tree fetch will also attempt
to generate a tree from an old flat manifest.
Ideally we'd have a more unified flow here, where we could have the server let
us know what nodes it couldn't find, then the client could try other options for
the remaining nodes, but that requires significantly more refactoring.
Reviewed By: quark-zju
Differential Revision: D7450662
fbshipit-source-id: a023f27ee4b74786633e4dce7e62f3d9604c2b7f
Summary:
Also add an IMM test to tease out working-copy vs. non-working-copy issues.
Also add some newlines to code stolen from fbcode.
Reviewed By: DurhamG
Differential Revision: D7432333
fbshipit-source-id: 029ccd8aeec7f0e2c380da41e7d78b433a275af3
Summary:
In an in-memory merge, if a commit only changed the flags of a file, and that file also never got written to during the merge, the IMM could fail and cause it to restart.
The reason is pretty simple: `setflags()` sets `cache[flags]` but not `cache[data]`, as it doesn't have any new data to store. In that case, calls to read the data should fall through to the underlying `p1` context.
Indeed, proper logic to do that already exists in `overlayworkingctx.data(path)` and `flags(path)`. The problem is that `tomemctx()` was reading from the cache directly, which is problematic and unhygienic. So let's just change it to call the proper functions, which also fixes the bug.
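The failure mode can be sketched with a toy overlay context (illustrative, modeled loosely on overlayworkingctx): the cache can hold flags for a path without holding data, so reads must fall through to the parent rather than hitting the cache directly.

```python
# Toy overlay context: setflags() stores only flags, so data()
# must fall through to the parent when the cache has no data.
class OverlayCtx(object):
    def __init__(self, parent_data, parent_flags):
        self._parent_data = parent_data
        self._parent_flags = parent_flags
        self._cache = {}  # path -> {"data": ..., "flags": ...}

    def setflags(self, path, flags):
        # No new data exists for this path -- only the flags change.
        self._cache.setdefault(path, {})["flags"] = flags

    def data(self, path):
        entry = self._cache.get(path, {})
        if "data" in entry:
            return entry["data"]
        return self._parent_data[path]  # fall through to p1

    def flags(self, path):
        entry = self._cache.get(path, {})
        if "flags" in entry:
            return entry["flags"]
        return self._parent_flags.get(path, "")

ctx = OverlayCtx({"f": b"contents"}, {"f": ""})
ctx.setflags("f", "x")
```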
Reviewed By: DurhamG
Differential Revision: D7447640
fbshipit-source-id: 1625ef82ad2683c6a72059a0944fd5e336d3ec3a
Summary:
Use the new gitignore matcher powered by Rust.
The hgignore matcher has some laziness, but is not tree-aware - with N
"hgignore" files loaded, it needs O(N) time to match. The gitignore matcher
is tree-aware and backed by native code with decent time complexity.
We have been maintaining a translation script that collects all gitignores and
generates hgignore files with very long regexps for them. That script has had
issues with sparse recently. This diff allows us to remove those generated
hgignore files from the repo.
Note: fsmonitor state does not contain ignored files, and ignore
invalidation is generally broken in fsmonitor (it only checks the top-level
.hgignore). That means that once a file is ignored, it cannot be "unignored" by
just removing the matched pattern from .gitignore; the file has to be
"touched" or similar.
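The tree-awareness can be sketched like this (illustrative; the real matcher is native code): instead of testing a path against every loaded ignore file, only the rules on the path's own directory chain are consulted.

```python
# Tree-aware ignore lookup sketch: walk the path's directory
# chain and consult only the ignore rules attached to each level.
def ignored(path, dir_rules):
    """dir_rules: {dirprefix: set of ignored basenames}."""
    parts = path.split("/")
    for i in range(len(parts)):
        prefix = "/".join(parts[:i])
        rules = dir_rules.get(prefix, set())
        if parts[i] in rules:
            return True
    return False

rules = {"": {"build"}, "src": {"generated.c"}}
```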
Reviewed By: markbt
Differential Revision: D7319608
fbshipit-source-id: 1763544aedb44676413efb6d14ffd3917ed3b1cd