Summary:
Add a config option to use indexedlog as the node map storage instead
of a flat text file.
Reviewed By: quark-zju
Differential Revision: D13062573
fbshipit-source-id: ae14df24a4e36c59fbd9ec82d785aac52a2f8b5f
Summary:
In order to move our hg-git mirroring off of the main hg servers, we
need to make it possible for the hg servers to compute the hg-git mapping
without having the entire git repository available. To do so, let's store the
git hash as an extra in the hg commit.
This breaks bidirectionality, but we've long since not needed that.
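The mechanism above can be sketched as a tiny helper (illustrative only: the key name `"git_commit_hash"` is hypothetical, not necessarily the key hg-git actually uses):

```python
# Illustrative sketch only: record the source git hash in an hg commit's
# "extras" dict. The key name "git_commit_hash" is hypothetical; the real
# hg-git key may differ.
def add_git_extra(extras, git_sha):
    """Return a copy of the commit extras that also records the git hash."""
    new_extras = dict(extras)
    new_extras["git_commit_hash"] = git_sha  # hypothetical key name
    return new_extras

extras = add_git_extra({"branch": "default"}, "51df709bc5e7")
```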
Reviewed By: phillco
Differential Revision: D13362980
fbshipit-source-id: 51df709bc5e77d78bb963abf90d0c35bb743d966
Summary:
A future diff will store the GitMap data in a rust storage structure.
Let's start by refactoring the python code to meet the same API as the rust
code.
Reviewed By: quark-zju
Differential Revision: D13062574
fbshipit-source-id: 3a1573afb98b73dacfc6e9e9efc5504a8b5ccbfb
Summary:
This bug has been here for 2+ years. Basically, when gathering the
ancestors for a given file node, if it traversed a rename it would lose track of
the new name and instead look up the new hash by the old name, which would fail.
We didn't hit this often, because it only causes a problem when you have partial
history and have to go fetch more history. In most remotefilelog repos we
download all of history, so you always have everything and therefore never hit
this.
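The bug class can be shown with a toy traversal (hypothetical data shapes, not remotefilelog's real code): when history crosses a rename, the lookup must switch to the pre-rename name.

```python
# Toy sketch of the bug described above: when walking file ancestors across a
# rename, subsequent lookups must use the *renamed-from* name, not the
# original one. The history dict is hypothetical test data:
# (name, node) -> (parent_node_or_None, copied_from_name_or_None)
def ancestors(history, name, node):
    """Yield (name, node) pairs, tracking renames as we walk back."""
    while node is not None:
        yield name, node
        parent, copied_from = history[(name, node)]
        if copied_from is not None:
            name = copied_from  # the fix: carry the pre-rename name forward
        node = parent

history = {
    ("new.txt", "n2"): ("n1", None),
    ("new.txt", "n1"): ("n0", "old.txt"),  # n1 renamed old.txt -> new.txt
    ("old.txt", "n0"): (None, None),
}
chain = list(ancestors(history, "new.txt", "n2"))
```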
Reviewed By: kulshrax, singhsrb
Differential Revision: D13332459
fbshipit-source-id: 120bfe9ac618a4979e1685f24dc6462fc7415b1b
Summary:
hgsubversion was doing one transaction per commit, which is slow and also
causes a lot of packs to be created when operating on a remotefilelog client.
Let's make it use a single transaction instead. Unfortunately, the svn metadata
is not integrated with Mercurial transactions. With a normal flat-text revmap,
if an exception occurs while running pull, the metadata will need to be rebuilt
(via svn rebuildmeta) to remove the bad data before continuing.
If we're using a sqlite revmap, then we're integrated with the sqlite
transaction, so it should work as expected.
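The sqlite-revmap behavior can be sketched with stdlib `sqlite3` (hypothetical schema, not hgsubversion's real one): all rows for a pull go into one transaction, so a mid-pull failure rolls the metadata back instead of leaving bad entries behind.

```python
import sqlite3

# Sketch of the single-transaction behavior described above (hypothetical
# revmap schema): an exception during the pull rolls back every row written
# so far, so no rebuild is needed afterwards.
def pull_commits(conn, commits, fail_after=None):
    with conn:  # one transaction for the whole pull; rollback on exception
        for i, (svn_rev, hg_node) in enumerate(commits):
            if fail_after is not None and i >= fail_after:
                raise RuntimeError("simulated mid-pull failure")
            conn.execute("INSERT INTO revmap VALUES (?, ?)", (svn_rev, hg_node))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revmap (svn_rev INTEGER, hg_node TEXT)")
try:
    pull_commits(conn, [(1, "a"), (2, "b"), (3, "c")], fail_after=2)
except RuntimeError:
    pass
rows = conn.execute("SELECT COUNT(*) FROM revmap").fetchone()[0]  # rolled back

pull_commits(conn, [(1, "a"), (2, "b")])  # clean pull commits normally
ok_rows = conn.execute("SELECT COUNT(*) FROM revmap").fetchone()[0]
```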
Reviewed By: phillco
Differential Revision: D13347708
fbshipit-source-id: 502c7e020ba2a2df1ec245a845b6d8eb8b9c3456
Summary: This needs to be `patterns`, not `pattern`.
Reviewed By: quark-zju
Differential Revision: D13345626
fbshipit-source-id: fd998610d3a6840905f82e4b37fd240e8ac7f8d9
Summary:
Match patterns don't work the way I expected: they require the path not to
end with '/', and they don't match when a file or subdirectory path is
appended. Fix it by setting the default to 'path'.
```
> matchmod.match("", "/", patterns=['foo/'])('foo/bar')
False
> matchmod.match("", "/", default='path', patterns=['foo/'])('foo/bar')
True
```
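For readers outside hg internals, what `default='path'` means can be sketched in plain Python (a toy matcher, not `matchmod`):

```python
# Toy sketch of 'path'-style matching (not hg's real matcher): a pattern
# matches the path itself or anything beneath it, trailing '/' or not.
def path_match(pattern, path):
    pattern = pattern.rstrip("/")
    return path == pattern or path.startswith(pattern + "/")

hit = path_match("foo/", "foo/bar")
```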
Reviewed By: quark-zju
Differential Revision: D13332653
fbshipit-source-id: e0f3fa9a51d36a40ac8a9c54f73296f431536d3c
Summary: Fix test failures (`test-check-code.t` and `test-check-config.t`) introduced by my Mononoke API diff stack. I thought Sandcastle would have surfaced these, but I guess not.
Reviewed By: quark-zju
Differential Revision: D13325133
fbshipit-source-id: fd14f4c7e7280155d6f677f18b20691ef54ceca3
Summary:
When garbage collecting lots of repositories, a single corrupted
repository will stop garbage collection for all the others. In the worst case,
the first repository examined is bad, and no collection is done for any of the
remaining repositories.
Let's log the faulty repository and continue with the other repositories.
Reviewed By: DurhamG
Differential Revision: D13316936
fbshipit-source-id: 9760710ff64d0173ae1c369991f438bbace11cf4
Summary: Allow MononokeClient to support both HTTP and HTTPS. The protocol used is determined by the scheme of the server base URI passed in. For example, specifying `https://mononoke-api.internal.tfbnw.net` would use HTTPS, whereas specifying `http://localhost:12345` would use HTTP. This is useful for local testing.
Reviewed By: DurhamG
Differential Revision: D13089197
fbshipit-source-id: 2da72ac98c60746200334e4bcc0e2568abe3073b
Summary: This diff adds the `hg debugmononokeapi` command to remotefilelog, demonstrating that the Mononoke API server can be accessed from within the remotefilelog extension. This paves the way for fetching data from the API server from remotefilelog.
Reviewed By: DurhamG
Differential Revision: D13055686
fbshipit-source-id: c826f4d79389f7364b45483b3d83afd0caf65063
Summary:
This diff adds Python FFI bindings for the `mononokeapi` crate introduced in the previous diff in this stack. It exposes a `PyMononokeClient` Python class which wraps the Rust `MononokeClient` class internally.
This class currently only has one method, `health_check()`, which hits the API server's health check endpoint, returning `None` on success or raising an exception otherwise.
Reviewed By: DurhamG
Differential Revision: D13055688
fbshipit-source-id: a3703617c919f5317c90029fb40d275f3d63d56f
Summary:
Let's stop depending on call-conduit here. This will make it work on Windows
and remove Mercurial's dependency on conduit.
Technically, we're replacing a subprocess call to `arc call-conduit` with
a thread doing a pure GraphQL call (we already have a GraphQL client
implemented as an hg extension).
Reviewed By: markbt
Differential Revision: D13280279
fbshipit-source-id: 56ee9027826f2dd2f71cbe0faa28cd2310660aaf
Summary:
In a future diff we want pushrebase to be able to rewrite commits
pulled from svn. In that situation, we need to be able to rewrite the svn
metadata so that the post-pushrebase commits are mapped to the svn rev numbers.
Let's add a truncatemeta command that will walk the latest commits in the
changelog and redo any that aren't mapped correctly.
Reviewed By: markbt
Differential Revision: D13074960
fbshipit-source-id: fe98879dd16cc0806d20ef6eab5f9d77f0e0c877
Summary:
Make repack flush the newly created pack files to disk before proceeding to
delete the old datapacks. This ensures the data is committed to disk before we
delete the old pack files; there had been user complaints of data corruption.
The change affects repack only, to avoid slowing down other operations that
manipulate datapacks and historypacks.
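The ordering can be sketched with stdlib calls (hypothetical file layout, not the real repack code): fsync the new pack before unlinking the old ones, so a crash between the two steps cannot leave neither copy durable on disk.

```python
import os
import tempfile

# Sketch of the flush-before-delete ordering described above (illustrative
# paths, not the real datapack code).
def replace_packs(newpath, data, oldpaths):
    with open(newpath, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # commit the new pack to disk first
    for old in oldpaths:
        os.unlink(old)        # only now is it safe to drop the old packs

d = tempfile.mkdtemp()
old = os.path.join(d, "old.datapack")
with open(old, "wb") as f:
    f.write(b"stale")
new = os.path.join(d, "new.datapack")
replace_packs(new, b"packdata", [old])
```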
Reviewed By: DurhamG
Differential Revision: D12920147
fbshipit-source-id: 907b64d7763a6212fb49487cfc3bc403f8e3dce2
Summary:
If full is specified, or the maxage config option changes from one sync to
another, we need to actually pull in the new commits. Since the version number
won't necessarily have changed, we need to request a full copy of the cloud
workspace to work from.
Reviewed By: liubov-dmitrieva
Differential Revision: D13214274
fbshipit-source-id: b21594c04c05f065caf9f9dc494e6274debbde5c
Summary:
Stop comparing the local heads and bookmarks with the full cloud heads and
bookmarks when some heads or bookmarks have been omitted. Since they're
omitted, they won't be there.
Reviewed By: liubov-dmitrieva
Differential Revision: D13214275
fbshipit-source-id: 35a897f053f58d0793d384ff60b8202e80aec0c7
Summary: Sometimes, the script could provide useful context to the user, so allow its stderr to be printed.
Reviewed By: DurhamG
Differential Revision: D13202155
fbshipit-source-id: cb4c5c4ff0c696fa2959f0c038d391bf701949e1
Summary: In the VIP case, TLS errors come in as socket errors.
Reviewed By: DurhamG
Differential Revision: D13159635
fbshipit-source-id: 80204718ea117e3694199a8686a151ff8aae2634
Summary:
This had a nasty bug where it manually computed the cached base text
using the full manifest text. When operating in a treeonly repository, this
resulted in corrupt deltas. The only reason we didn't hit it earlier is that
the secondary stackpush path was added and doesn't have this issue. But
stackpush couldn't handle merge commits, which is why we hit this when it came
time to add a merge commit.
Now that the main repo that needed this optimization is treeonly, we don't need
this precaching anymore.
Reviewed By: mitrandir77
Differential Revision: D13154342
fbshipit-source-id: 5cb74167665a60a36e4a6d926f3f9f1c5c7bbef1
Summary:
Mercurial extensions are expected to be side-effect free at import time.
Move the setup code to `uisetup` so it only gets executed if the extension is enabled.
This helps chg's case where `lz4revlog` is always pre-imported.
Reviewed By: quark-zju
Differential Revision: D13118788
fbshipit-source-id: a7801057c01b6918bb4902a326b92f1c17f2707a
Summary:
When we have omitted syncing commits from the cloud workspace, give the user
advice on what to do to fetch them if they run `hg sl --all`.
Reviewed By: markbt
Differential Revision: D13085505
fbshipit-source-id: 624ce6cbf4cb2194ab5ffbc09c1ac3e073932249
Summary:
During the sequence of events that led up to S169085 we tried an hg
push that resulted in an error that looked like this: "RuntimeError:
std::exception". I believe that error was caused by an uncaught
pyexception, replacing Python's current thread exception with a
useless one.
Catch the pyexception and deal with it appropriately.
Reviewed By: quark-zju
Differential Revision: D13112735
fbshipit-source-id: 301d899543ae95084b890c19b00322e69ded07b2
Summary:
In backuplockcheck we try to take the lock to see if it's currently in use. If
it isn't, then we end up taking the lock, but don't release it. We should
release it.
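The fix amounts to this pattern (a sketch using `threading.Lock` as a stand-in for hg's repo lock): if the availability probe succeeds in taking the lock, release it immediately.

```python
import threading

# Sketch of the fix described above: a probe that takes the lock to check
# availability must also release it. threading.Lock stands in for hg's lock.
def lock_is_free(lock):
    if lock.acquire(blocking=False):
        lock.release()  # the fix: don't keep holding a lock we only probed
        return True
    return False

probe = threading.Lock()
first = lock_is_free(probe)
second = lock_is_free(probe)  # still free, because the first probe released it
```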
Reviewed By: liubov-dmitrieva
Differential Revision: D13085506
fbshipit-source-id: 4b38ec11385d754d07ad2d15ff14b43d37325263
Summary:
hgsubversion has some logic that tries to read the list of files in a
given directory. In a tree-only world, it turns the entire tree into a text
file, then splits it by line and bisects over it.
Let's instead use the walk function, which is optimized for this kind of thing
on trees.
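The old approach being replaced can be sketched with stdlib `bisect` (toy data, not hgsubversion's real code): given the sorted flat list of all manifest paths, find one directory's entries by binary search over the prefix range. A tree walk avoids materializing that flat list at all.

```python
import bisect

# Sketch of the old flatten-and-bisect approach described above: list the
# files under one directory by bisecting a sorted list of all paths.
def listdir(sorted_paths, directory):
    prefix = directory.rstrip("/") + "/"
    lo = bisect.bisect_left(sorted_paths, prefix)
    hi = bisect.bisect_left(sorted_paths, prefix + "\xff")
    return sorted_paths[lo:hi]

paths = sorted(["a/x", "b/y", "b/z", "c/w"])
under_b = listdir(paths, "b")
```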
Reviewed By: quark-zju
Differential Revision: D12969497
fbshipit-source-id: ef3af9e0022978d6a4922cbb464bfd14248f5501
Summary:
Add "--dry-run" for fix-code.py and use it in test-check.
This avoids license header and version = "*" issues.
Reviewed By: ikostia
Differential Revision: D10213070
fbshipit-source-id: 9fdd49ead3dfcecf292d5f42c028f20e5dde65d3
Summary:
This is done by running `fix-code.py`. Note that those strings are
semvers so they do not pin down the exact version. An API-compatiable upgrade
is still possible.
Reviewed By: ikostia
Differential Revision: D10213073
fbshipit-source-id: 82f90766fb7e02cdeb6615ae3cb7212d928ed48d
Summary:
Add a new config option: `commitcloud.max_sync_age`. When set, commit cloud
will not pull in any commits that are older than this when it is joining or
syncing. The commits are still nominally in the cloud workspace; we just
save join or sync time by not including them.
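The cutoff can be sketched as a simple filter (hypothetical data shapes, not the real commit cloud code): drop commits whose date is older than `max_sync_age` days.

```python
import time

# Sketch of the commitcloud.max_sync_age cutoff described above (hypothetical
# head dicts): skip commits older than the configured number of days.
def filter_by_age(heads, max_sync_age_days, now=None):
    if not max_sync_age_days:
        return heads  # option unset: no age limit
    now = time.time() if now is None else now
    cutoff = now - max_sync_age_days * 86400
    return [h for h in heads if h["date"] >= cutoff]

heads = [{"node": "a", "date": 500_000}, {"node": "b", "date": 100_000}]
recent = filter_by_age(heads, max_sync_age_days=7, now=1_000_000)
```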
Reviewed By: liubov-dmitrieva
Differential Revision: D13062470
fbshipit-source-id: 17a4bdb4095766a83a4bf6d4151ae86b39edf59c
Summary: This commit reverts an earlier change in D12950765 that removed options from `hg smartlog`.
Reviewed By: kulshrax
Differential Revision: D13029915
fbshipit-source-id: f514dca841cf9a48a46255c4eb0b376d8f0d2761
Summary: Log the upload time spent and the download size when hg is using LFS.
Reviewed By: ikostia
Differential Revision: D12993962
fbshipit-source-id: c53b189a12c60eece47dbbab0852fcfea9471363
Summary:
tweakdefaults used to do this if configured.
After discussion with the team, we decided that returning 1 was not a helpful design decision, so we're going to fold this in here and everywhere.
Reviewed By: singhsrb
Differential Revision: D13050548
fbshipit-source-id: 66e834ea503e4b1339e369495a9729b951024a6d
Summary:
These commands should be able to use the full workspace name as well.
A fix for the commit cloud test file is also included.
Reviewed By: markbt
Differential Revision: D13042297
fbshipit-source-id: ac8c907292e2d15b72f56ef1cc831add5523b990
Summary:
Add a debug command for receiving several LFS objects from the server.
It is analogous to `debuglfsreceive`, but handles several objects, and
will be used to replay traffic for Mononoke LFS.
It requires a URL and an iterable of oids and sizes, in the format:
url, (oid1, size1, oid2, size2, ... )
It writes all the files to the console.
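Pairing up the flat alternating sequence of oids and sizes can be sketched like this (parsing only; the actual hg command wiring is not shown, and the function name is illustrative):

```python
# Sketch of parsing the argument format described above: a flat alternating
# sequence of oid and size values, paired up before fetching.
def pair_oids(args):
    if len(args) % 2:
        raise ValueError("expected alternating oid, size arguments")
    return [(args[i], int(args[i + 1])) for i in range(0, len(args), 2)]

pairs = pair_oids(["abc", "10", "def", "20"])
```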
Reviewed By: quark-zju
Differential Revision: D12956274
fbshipit-source-id: f83dd0636b2ad197cace9633222d0f1ed8191dab
Summary:
This allows us to expose `arc pull`'s functionality directly from hg.
There are three parts:
- Create a revset that runs a program, reads the stdout and looks up that commit if it exists
- Auto-pulling if the commit doesn't exist
- Supporting an optional argument (target) that's passed to the script
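The first part can be sketched with `subprocess` (hypothetical wiring, not the real revset code): run an external program, take the first line of its stdout as the commit hash to look up, and forward the optional target argument.

```python
import subprocess
import sys

# Sketch of the run-a-program revset described above (hypothetical helper):
# the script's first stdout line is treated as a commit hash.
def resolve_external(cmd, target=None):
    argv = list(cmd)
    if target is not None:
        argv.append(target)  # optional (target) argument forwarded to the script
    out = subprocess.run(argv, capture_output=True, text=True, check=True)
    return out.stdout.strip().splitlines()[0]

# use the Python interpreter itself as a stand-in "script"
node = resolve_external([sys.executable, "-c", "print('f514dca841cf')"])
```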
Reviewed By: DurhamG
Differential Revision: D10524541
fbshipit-source-id: 7493c5592e272f9e8a87f109cec1426d44935ecc
Summary:
We need to record the updated 'onto' rev instead of guessing it in the
pushrebase replayer.
I will not land this diff until the mononoke db schema is updated (D12923144, D12997125).
Reviewed By: quark-zju
Differential Revision: D12922833
fbshipit-source-id: 11c6411c392ca9092be53ffba8baa074faf3a996
Summary:
Previously, if we were writing local data to pack files, we weren't able
to read that data until the pack file had been flushed. Let's add those mutable
packs to the union store so we can see their data.
This is important for unblocking use of hggit and hgsubversion since they may
import many dependent commits in a single transaction.
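The union-store behavior can be shown with toy stores (plain dicts, not the real remotefilelog classes): reads consult the pending mutable pack first, so data written earlier in the same transaction is immediately visible.

```python
# Toy sketch of the union store described above: reads fall through a list
# of stores in order, including the pending (unflushed) mutable pack.
class UnionStore:
    def __init__(self, *stores):
        self.stores = stores

    def get(self, key):
        for store in self.stores:
            if key in store:
                return store[key]
        raise KeyError(key)

flushed = {"a": b"old"}
mutable = {"b": b"pending"}  # written this transaction, not yet on disk
union = UnionStore(mutable, flushed)
```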
Reviewed By: quark-zju
Differential Revision: D12959497
fbshipit-source-id: 405c0c5c1e8fc84bc8ffef827a84e91d57eb95d8
Summary:
Previously these were stored on the remotefilelogcontentstore, which is
a weird place since it's generally responsible for loose files. Now that we have
the fileslog abstraction let's move the mutable packs on to it.
This mirrors the manifestlog pattern.
Reviewed By: quark-zju
Differential Revision: D12959496
fbshipit-source-id: 25649570a44b50e9baa558b85ba00605883fd403
Summary:
A future diff will use the same mutable*store pattern to allow pending
file mutable packs to be read from the store. The mutablemanifeststore is
generic and can be reused, so let's move it and rename it.
Reviewed By: quark-zju
Differential Revision: D12959493
fbshipit-source-id: 82710b4d157eb3194440ea630dd458b382896a39
Summary:
Now that all the file store creation logic is inside the fileslog
abstraction, let's also move the storage of all the stores onto this instance,
instead of having them hanging off the repo object.
This matches the manifestlog pattern and gives us better control over the
lifetime of all the filelog stores.
Reviewed By: quark-zju
Differential Revision: D12959500
fbshipit-source-id: f07f8b52bb83a7837e6dc02664bec6111df7a421
Summary:
As part of unifying file storage into fileslog, let's move the store
creation logic into fileslog. It still puts the stores on the repo directly, but
a future diff will come back and fix that so the stores are kept on the fileslog
object.
Reviewed By: quark-zju
Differential Revision: D12959498
fbshipit-source-id: f3defc88b34c74c95bf1604f194b1b5883bad24d
Summary:
We had remotefilelog specific logic in core mercurial code. Now that
we have the fileslog abstraction we can hide remotefilelog specific invalidation
in a remotefilelog specific fileslog implementation.
Reviewed By: quark-zju
Differential Revision: D12959495
fbshipit-source-id: ccf224bf9799eb1af74f0dff6021fcbc2eb20d68
Summary:
In upcoming diffs I want to introduce lifetime management for
structures receiving writes for file content. We need to be able commit these at
the end of a transaction, and roll them back in the event of an abort.
We already have this, but it's adhoc and inconsistent. Let's introduce the
concept of a fileslog on the repository that is responsible for all filelog
reads and writes. At the moment it doesn't manage reads and only governs
remotefilelog writes, but we can extend this later.
This pattern of a top level repo.*log property that manages the reading/writing
of a type of object is already used in the manifestlog and has proven to be
relatively clean.
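The `repo.*log` pattern can be sketched with a cached property (toy classes with illustrative names, not the real manifestlog/fileslog code): a lazily created, cached top-level object that owns all reads and writes for one kind of data.

```python
import functools

# Toy sketch of the repo.*log pattern described above: one cached top-level
# object per data kind, created on first access and reused thereafter.
class FilesLog:
    def __init__(self, repo):
        self.repo = repo  # would hold the content/metadata stores

class Repo:
    @functools.cached_property
    def fileslog(self):
        return FilesLog(self)

repo = Repo()
```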
Reviewed By: quark-zju
Differential Revision: D12959494
fbshipit-source-id: 676aa86c313cb7e48512091a9c19b9452e8f114a
Summary:
Previously, remotefilelog tried to hide which store should receive
writes behind the union store abstraction. This is starting to make things a
little complicated though. For instance, it means that right now both local data
and local history are written via one API on the remotefilelogcontentstore, even
though the remotefilelogcontentstore should only be about data. It also means
that remotefilelogcontentstore had to become aware of pack files, which it can't
read, so it no longer upholds the guarantee that anything written to it can be
immediately read. Overall it's just confusing.
This diff rips out the writestore concept, and instead has the store setup logic
store the writable stores directly on the repo, and changes the write path to
write to those stores directly. Thus removing the notion of pack files from
remotefilelogcontentstore.
A future diff will clean this up even further, and fix the bug where you can't
read data that was just written to a local pack.
Reviewed By: quark-zju
Differential Revision: D12959502
fbshipit-source-id: 85c39c0696febd0972a21f22f3640fd6954901c1
Summary:
This will help us in identifying commonly conflicted files/artifacts.
Alas, I think SVN only includes one path in the error message. But better than nothing.
Reviewed By: quark-zju
Differential Revision: D13009975
fbshipit-source-id: 220bcaa679222718c58e42174f28fd0bbeb618d2