Summary:
Adds the initial condition and creation logic for a Rust
treemanifest store. Fetching and some other code paths don't work just yet, but
subsequent diffs enable more and more functionality.
Reviewed By: quark-zju
Differential Revision: D23662052
fbshipit-source-id: a0e7090c9a3bf27a7738bf093f2d4eb6098b1ed6
Summary: The old logic would just double pack some bits. Let's prevent that.
Reviewed By: xavierd
Differential Revision: D23661933
fbshipit-source-id: 155291fa08ec2c060619329bd1cb6040769feb63
Summary:
The Rust pack stores currently have logic to refresh their list of
packs if there's a key miss and if it's been a while since we last loaded the
list of packs. In some cases we want to manually trigger this refresh, like if
we're in the middle of a histedit and it invokes an external command that
produces pack files that the histedit should later consume (like an external
amend, that histedit then needs to work on top of).
Python pack stores solve this by allowing callers to mark the store for a
refresh. Let's add the same logic for rust stores. Once pack files are gone we
can delete this.
This will be useful for the upcoming migration of treemanifest to Rust
contentstore. Filelog usage of the Rust contentstore avoided this issue by
recreating the entire contentstore object in certain situations, but refresh
seems useful and less expensive.
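The mark-for-refresh behaviour described above can be sketched in Python (the `PackStore` class, `markforrefresh` name, and the 60-second interval are illustrative assumptions, not the actual store API):

```python
import time

class PackStore:
    # Illustrative sketch only: rescan the pack list on a key miss, either
    # when enough time has passed or when a caller marked us for refresh.
    REFRESH_INTERVAL = 60.0  # hypothetical seconds between automatic rescans

    def __init__(self, scan_packs):
        self._scan_packs = scan_packs          # callable returning {key: value}
        self._packs = scan_packs()
        self._last_scanned = time.monotonic()
        self._force_refresh = False

    def markforrefresh(self):
        # Called when an external command may have produced new pack files
        # (e.g. an amend invoked in the middle of a histedit).
        self._force_refresh = True

    def get(self, key):
        value = self._packs.get(key)
        if value is None and self._should_rescan():
            self._packs = self._scan_packs()
            self._last_scanned = time.monotonic()
            self._force_refresh = False
            value = self._packs.get(key)
        return value

    def _should_rescan(self):
        if self._force_refresh:
            return True
        return time.monotonic() - self._last_scanned > self.REFRESH_INTERVAL
```

A marked store rescans on the very next miss instead of waiting out the interval.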
Reviewed By: quark-zju
Differential Revision: D23657036
fbshipit-source-id: 7c6438024c3d642bd22256a8e58961a6ee4bc867
Summary:
Instants do not represent actual time and can only be compared against
each other. When we subtracted arbitrary Durations from them, we ran the risk of
overflowing the underlying storage, since the Instant may be represented by a
low number (such as the age of the process).
This caused crashes in test_refresh (in the next diff) on Windows.
Let's instead represent the "must rescan" state as a None last_scanned time, and avoid any arbitrary subtraction. It's generally much cleaner too.
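The same idea, sketched in Python rather than Rust: represent "must rescan" as a missing timestamp instead of subtracting a duration from a clock value (the `ScanState` name and its methods are hypothetical):

```python
import time
from typing import Optional

class ScanState:
    # Illustrative sketch: "must rescan" is last_scanned == None, so we
    # never subtract an arbitrary duration from a monotonic clock value
    # (which, in Rust's Instant terms, could underflow).

    def __init__(self, interval: float):
        self._interval = interval
        self._last_scanned: Optional[float] = None  # None => rescan required

    def force_rescan(self):
        self._last_scanned = None

    def mark_scanned(self):
        self._last_scanned = time.monotonic()

    def needs_rescan(self) -> bool:
        if self._last_scanned is None:
            return True
        return time.monotonic() - self._last_scanned > self._interval
```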
Reviewed By: quark-zju
Differential Revision: D23752511
fbshipit-source-id: db89b14a701f238e1c549e497a5d751447115fb2
Summary:
Previously, we used the call sign of the repo we import when checking whether
any of the commits are parsed by Phabricator. However, we also used this
callsign for other repos when checking Phabricator, which is incorrect. E.g.
if fbsource back-syncs to ovrsource, we would have used the FBS callsign when
checking Phabricator for both fbsource and ovrsource, but we should use the
OVRSOURCE callsign for the ovrsource repo. This diff corrects this by saving
the callsigns of the small repos in their SmallRepoBackSyncVars.
Reviewed By: StanislavGlebik
Differential Revision: D23758355
fbshipit-source-id: b322acb2ec589eabed5362bfd6b963e2dd1d6ea9
Summary:
I had originally made the logic around ECHILD very strict,
thinking it impossible for such a situation to arise,
but it turns out that our daemonization makes this happen all
the time.
This commit treats an ECHILD return from waitpid as equivalent
to a successful waitpid result and successful child process termination.
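The diff itself is C++, but the semantics map cleanly to Python, where ECHILD surfaces as `ChildProcessError`. A rough sketch (the `wait_for_child` wrapper and its injectable `waitpid` parameter are hypothetical, for illustration):

```python
import os

def wait_for_child(pid, waitpid=os.waitpid):
    """Treat ECHILD (no such child: someone else already reaped it,
    e.g. after daemonization) as a successful, clean exit.

    `waitpid` is injectable purely for illustration; it defaults to
    os.waitpid. Returns the raw wait status (0 == clean exit).
    """
    try:
        _pid, status = waitpid(pid, 0)
        return status
    except ChildProcessError:  # errno ECHILD
        return 0
```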
Reviewed By: chadaustin
Differential Revision: D23683107
fbshipit-source-id: 7867d636afd8ee79b9f100454f84e7ef480109d8
Summary:
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/58
This makes the test-bookmarks-filler.t pass. Additionally, remove a few tests from the exclusion lists as they started to pass.
Reviewed By: ikostia
Differential Revision: D23757401
fbshipit-source-id: eddcda5fd1806d77d0046b6ced3695df6b3d775d
Summary:
We are running out of space on integration test runs on Linux. In order to avoid that, this change adds some cleanups.
1. Adding `docker rmi $(docker image ls -aq)` frees up 4 GB.
2. Cleaning up `eden_scm` build directory frees up 3 GB.
3. Cleaning up `mononoke` build directory frees up 1 GB.
This diff also includes a fix for run_tests_getdeps.py, where we ran all the "PASSING" tests when the --rerun flag was passed, instead of only the failed ones.
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/57
Reviewed By: krallin
Differential Revision: D23742159
Pulled By: lukaspiatkowski
fbshipit-source-id: 3b5e89ad29c753d585c1a6f01a9a1d6c1e616fbf
Summary: fixes build and test errors on OSS introduced by D23596262 (deb57a25ed)
Reviewed By: ikostia
Differential Revision: D23757086
fbshipit-source-id: 7973ce36b3589cbe21590bd7e19a9828be72128f
Summary:
Since the repo_import tool is automated, we need a way to recover the process
when the tool breaks, without restarting the whole process. To do this, I
defined a new struct (RecoveryFields) that allows us to keep track of the
state. The most important fields are the import_stage (ImportStage), which we
need to keep track of the process and to indicate the first stage of recovery,
and the cs_ids we use throughout the process.
For each stage of the import, we save the state after we have finished that
part. This way we can also recover from interrupts. To recover a process, we
only need to use the `recover-process <file_path>` subcommand, where file_path
stores the saved state of the import. For a normal run we use the
`import [<args>]` subcommand.
Reviewed By: krallin
Differential Revision: D23678367
fbshipit-source-id: c0e0b270ea2ccc499368e54f37550cfa58c03970
Summary:
This change allows us to use warm bookmark cache for all clients except
for external sync job (i.e. the job we use to keep configerator-hg in sync with
configerator-git).
This is useful because we'd like to use the warm bookmark cache for
configerator, but it doesn't work with the external sync job. We'd like to use
it because the warm bookmark cache doesn't advance a bookmark until the
revision has shown up in configerator-hg - this proved useful when rolling out
configerator for devservers, since there were tools that talked to hg, and
they were failing if hg was behind.
Currently the hg external sync job doesn't work with the warm bookmark cache
because it tries to move master incorrectly. What I mean by that is that the
hg external sync job sends an unbundle request which contains a pushkey part
saying "move master from commit A to commit B". If commit A is outdated
because of the warm bookmark cache, then this update will just fail, because
the master bookmark actually points to commit C.
Let's just never use the warm bookmark cache for the external sync job.
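The failing pushkey part behaves like a compare-and-swap on the bookmark; a minimal sketch of why a stale "old" value makes the move fail (the `pushkey_move` helper is hypothetical):

```python
def pushkey_move(bookmarks, name, old, new):
    # Sketch of the pushkey semantics: move `name` from `old` to `new`,
    # failing if the bookmark no longer points at `old` (for example,
    # because a warm bookmark cache served a stale value).
    if bookmarks.get(name) != old:
        return False  # bookmark already moved on; the update is rejected
    bookmarks[name] = new
    return True
```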
Reviewed By: aslpavel
Differential Revision: D23754603
fbshipit-source-id: c8eec54bca2224688d4a829ded372c6fc4d7930f
Summary:
Pass the elements to the hasher to avoid needing to alloc a vec from them.
This saves building the vec inside MPathElement, and when used on top of the smallvec-based MPathElement it also saves allocating a Vec from the SmallVec for each element.
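The pattern can be illustrated in Python with an incremental hasher (the actual Mononoke hashing scheme differs; `sha1` and the `/` separator here are stand-ins):

```python
import hashlib

def hash_elements_joined(elements):
    # Old-style approach (sketch): build one joined buffer, then hash it.
    # The join allocates an intermediate byte string.
    return hashlib.sha1(b"/".join(elements)).hexdigest()

def hash_elements_streaming(elements):
    # New-style approach (sketch): feed each element to the hasher
    # directly, avoiding the intermediate allocation. The digest is
    # identical because the hasher sees the same byte sequence.
    h = hashlib.sha1()
    for i, el in enumerate(elements):
        if i:
            h.update(b"/")
        h.update(el)
    return h.hexdigest()
```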
Reviewed By: aslpavel
Differential Revision: D23703342
fbshipit-source-id: dd81c6d69b90f128d697ba847dde34058ad1ea6e
Summary:
Use smallvec for the internal storage of MPathElement.
The previous Bytes had a stack size of 32 bytes plus the text it pointed to.
Using SmallVec we can store up to 24 bytes without allocation, keeping the
same size as the previous Bytes object.
Given that most path elements are directory names, and directory names are
usually short, it is expected that this will save both space and allocations.
Reviewed By: farnz
Differential Revision: D23703344
fbshipit-source-id: 39ffc3bd3bb765bd1dbb757b4b1a7782382db909
Summary:
When sending trees and files we try to avoid sending trees that are
available from the main server. To do so, we currently check to see if the
tree/file is from the local store (i.e. .hg/store instead of $HGCACHE).
In a future diff we'll be moving trees to use the Rust store, which doesn't
expose the difference between shared and local stores. So we need to stop
depending on checking whether something came from the local store.
Instead we can test whether the commit is public, and only send the tree/file
if the commit is not public. This is technically a revert of the 2018 D7992502 (5e95b0e32e)
diff, which stopped depending on phases because we'd receive public commits
from svn that were not public on the server yet. Since svn is gone, I think
it's safe to go back to that way.
This code mostly served to help when the client was further ahead than another
client, and in some commit cloud edge cases, but 1) we don't do much/any p2p
exchange anymore, and 2) we did some work this year to ensure clients have more
up-to-date remote bookmarks during exchange (as a way of making phases and
discovery more reliable), so hopefully we can rely on phases more now.
Reviewed By: quark-zju
Differential Revision: D23639017
fbshipit-source-id: 34c13aa2b5ef728ea53ffe692081ef443e7e57b8
Summary:
Previously the MetadataStore would always construct a mutable pack, even
if the operation was readonly. This meant all read commands required write
access. It also meant that random .tmp files got scattered all over the place
when the Rust structures were not properly destructed (like if Python doesn't
bother doing the final gc to call destructors for the Rust types).
Let's just only create mutable packs when we actually need them.
Reviewed By: quark-zju
Differential Revision: D23219961
fbshipit-source-id: a47f3d94f70adac1f2ee763f3170ed582ef01a14
Summary:
Previously the ContentStore would always construct a mutable pack, even
if the operation was readonly. This meant all read commands required write
access. It also meant that random .tmp files got scattered all over the place
when the Rust structures were not properly destructed (like if Python doesn't
bother doing the final gc to call destructors for the Rust types).
Let's just only create mutable packs when we actually need them.
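A minimal Python sketch of the lazy construction (the `Store` class and the `open_mutable_pack` callable are hypothetical, not the actual ContentStore API):

```python
class Store:
    # Sketch: the mutable pack is created lazily, on the first write, so
    # read-only operations never need write access and never leave .tmp
    # files behind if destructors don't run.

    def __init__(self, pack_dir, open_mutable_pack):
        self._pack_dir = pack_dir
        self._open_mutable_pack = open_mutable_pack
        self._mutable_pack = None  # not created until a write happens

    def read(self, key, shared_packs):
        # Reads only consult existing (shared) packs.
        return shared_packs.get(key)

    def write(self, key, value):
        if self._mutable_pack is None:
            self._mutable_pack = self._open_mutable_pack(self._pack_dir)
        self._mutable_pack[key] = value
```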
Reviewed By: quark-zju
Differential Revision: D23219962
fbshipit-source-id: 573844f81966d36ad324df03eecec3711c14eafe
Summary:
Some tools, like ShipIt, close stdin before they launch the subprocess.
This causes sys.stdin to be None, which breaks our pycompat buffer read. Let's
handle that.
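A minimal sketch of the guard, assuming the read goes through `sys.stdin.buffer` (the `read_stdin_bytes` helper is hypothetical):

```python
import sys

def read_stdin_bytes():
    # Tolerate sys.stdin being None, which happens when the parent
    # process closed stdin before spawning us (as ShipIt does).
    if sys.stdin is None:
        return b""
    return sys.stdin.buffer.read()
```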
Reviewed By: quark-zju
Differential Revision: D23734233
fbshipit-source-id: 0adc23cd5a8040716321f6ede0157bc8362d56e0
Summary: This moves the Windows specific bits outside of the dispatcher code.
Reviewed By: chadaustin
Differential Revision: D23655613
fbshipit-source-id: 05b5bb9ed124ae37b6ae43c2a646967724337962
Summary: Similarly to the other one, this will make it possible to interrupt.
Reviewed By: fanzeyi
Differential Revision: D23643100
fbshipit-source-id: 0daab1cec94d0e177bb707d97bf928b05d5d24a3
Summary: Similarly to the other callback, this will make it possible to interrupt.
Reviewed By: fanzeyi
Differential Revision: D23643101
fbshipit-source-id: 9f9a48e752a850c63255b8867b980163cb6a92c9
Summary:
The opendir callback tends to be the most expensive of all due to having to
fetch the content of all the files. This leads to some frustrating UX, as the
`ls` operation cannot be interrupted. By making this asynchronous, the slow
operation can be interrupted. The future isn't cancelled, and thus it will
continue to fetch in the background; this will be tackled in a future diff.
Reviewed By: fanzeyi
Differential Revision: D23630462
fbshipit-source-id: f1c4a9fbd9daa18ca4b8f4837c5241a37ccfbcf9
Summary:
Previously, the notification callback code was pretty ad-hoc in how it dealt
with the request context and handling asynchronous callbacks. In order to share
more code with FUSE, let's add a catchErrors method to the PrjfsRequestContext,
similarly to what is done in the FUSE code. Once timeouts and notifications
are added, the catchErrors code will be moved into the parent class and all
of this code will be common between ProjectedFS and FUSE.
Reviewed By: fanzeyi
Differential Revision: D23626748
fbshipit-source-id: 70fae3d4a276be374f58559cc1fb05c8e56e5c2d
Summary:
Completing callbacks asynchronously is as simple as using PrjCompleteCommand
instead of returning the result from the callback. This allows callbacks to be
interrupted/cancelled, which will lead to a better user experience.
For now, the code in the notification callback is very ad-hoc, but most of it
will be refactored to be re-used by the other callbacks.
Reviewed By: fanzeyi
Differential Revision: D23611372
fbshipit-source-id: 17d1b8a4cd05706141abbf1e861d74f471537fba
Summary:
This removes a bunch of Windows specific code from the dispatcher which brings
us closer to moving it to a non-Windows specific directory. It also makes the
code more ready to use async notification handling, as the dispatcher no longer
waits on the future to complete before returning.
Reviewed By: fanzeyi
Differential Revision: D23611371
fbshipit-source-id: b1a2a6ce0a0be4747423ed75bc8a7aa4b5fa99f4
Summary:
Now that all the pieces are in place, we can plumb the request context in. For
now, this adds it to only one callback as I figure out more about it and tweak
it until I have something satisfactory. There are some rough edges with it that
I'm not entirely happy about, but as I change the notification callback to be
more async, I'm hoping to make it more convenient to use and less clunky.
Reviewed By: fanzeyi
Differential Revision: D23505508
fbshipit-source-id: d5f12e22a8f67dfa061b8ad82ea718582c323b45
Summary:
There was a time when getRegexExportedValues was not in open source
fb303, but that time has long passed.
Reviewed By: kmancini
Differential Revision: D23745295
fbshipit-source-id: 4702068f0bb7350467e42439444b3f4d75aeec76
Summary:
Since the Stub.h now only contains NOT_IMPLEMENTED, let's move it to its own
header outside of the win directory.
Reviewed By: genevievehelsel
Differential Revision: D23696244
fbshipit-source-id: 2dfc3204707e043ee6c89595668c484e0fa8c0d0
Summary:
With this gone, we will be able to rename and move Stub.h outside of the win
directory.
Reviewed By: genevievehelsel
Differential Revision: D23696243
fbshipit-source-id: ea05b10951fa38a77ce38cd6a09a293364dbeec9
Summary:
While the code isn't compiled, this makes the thrift definition available to
the rest of the code, eliminating the need for having a stub for
SerializedInodeMap on Windows.
Reviewed By: genevievehelsel
Differential Revision: D23696242
fbshipit-source-id: 8a42dd2ed16887f3b7d161511e07aaa35fd1b968
Summary:
The getuid and getgid functions are defined as returning uid_t and gid_t.
Defining these types here will prevent downstream consumers from having to
redefine them for Windows.
(Note: this ignores all push blocking failures!)
Reviewed By: yfeldblum, Orvid
Differential Revision: D23693492
fbshipit-source-id: 1ec9221509bffdd5f6d241c4bc08d7809cdb6162
Summary:
There was a bug: if an entry was skipped, we didn't update the counter.
That means we might skip the same entry over and over again.
Let's fix it.
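A minimal sketch of the fixed loop (the `process_entries` helper and the `should_skip` predicate are hypothetical):

```python
def process_entries(entries, should_skip):
    # Fixed version: advance the counter whether or not the entry is
    # skipped, so a skipped entry is never revisited. The buggy version
    # only advanced the counter for non-skipped entries, revisiting the
    # same skipped entry forever.
    processed = []
    i = 0
    while i < len(entries):
        entry = entries[i]
        i += 1
        if should_skip(entry):
            continue
        processed.append(entry)
    return processed
```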
Reviewed By: ikostia
Differential Revision: D23728790
fbshipit-source-id: f323d14c4deba5736ceb8ada7cb7ee48a69c1272
Summary:
Turns out crecord had a help screen. It was broken in Python 3. This
fixes it.
Reviewed By: singhsrb
Differential Revision: D23720798
fbshipit-source-id: 4aade9abb88355c19ee4445de116fdb40d5366bd
Summary: filter returns a generator in Python 3, but we need a list.
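A minimal illustration of the Python 3 behaviour:

```python
items = ["a", "", "b"]

lazy = filter(None, items)   # Python 3: a one-shot iterator, not a list
filtered = list(lazy)        # materialize so the result can be indexed/re-used

# Draining the iterator once leaves it empty: the bug's symptom when code
# written for Python 2 iterated the result a second time.
assert list(lazy) == []
assert filtered == ["a", "b"]
```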
Reviewed By: singhsrb
Differential Revision: D23720661
fbshipit-source-id: 8de3f5844bfe8b85b37c44423733fd2a09967397
Summary: This was horribly broken, and we have no tests.
Reviewed By: singhsrb
Differential Revision: D23720984
fbshipit-source-id: 4ad47c767b0d18f700c855a7bb43f38f5c5ef317