Summary:
Adding automatically generated derived_xxxx node groups so that less typing is
needed; we also check that the nodes are mapped correctly to derived data types.
Reviewed By: mitrandir77
Differential Revision: D24838738
fbshipit-source-id: 2bc8ff03a82c5d18f749affba2e67d214fb7ace7
Summary: This allows us to use -i bonsai instead of -i Bookmark -i BonsaiChangeset, which is a bit shorter
Reviewed By: mitrandir77
Differential Revision: D24838454
fbshipit-source-id: a758ad069af36fb1d1301e162bee822988cab07b
Summary: All the node types support FromStr, so we can generate NodeType::parse_node() rather than hand-implement it.
Reviewed By: mitrandir77
Differential Revision: D24711372
fbshipit-source-id: 24e27e9cdda078c6dc66ac839cb3cfed6e93f269
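The commits above describe generating a single parser once every node type implements FromStr. As a language-agnostic illustration of that pattern, here is a hedged Python sketch; all names (BookmarkKey, ChangesetKey, parse_node) are hypothetical, not Mononoke's actual types:

```python
# Hypothetical sketch: once each node type can parse itself from a string,
# a single table-driven dispatcher replaces hand-written per-type parsing.

class BookmarkKey:
    def __init__(self, name):
        self.name = name

    @classmethod
    def from_str(cls, s):
        return cls(s)

class ChangesetKey:
    def __init__(self, hex_id):
        if len(hex_id) != 64:
            raise ValueError("expected a 64-char hex hash")
        self.hex_id = hex_id

    @classmethod
    def from_str(cls, s):
        return cls(s)

# The "generated" parser: one table lookup instead of a match arm per type.
NODE_TYPES = {
    "Bookmark": BookmarkKey,
    "BonsaiChangeset": ChangesetKey,
}

def parse_node(type_name, key):
    return NODE_TYPES[type_name].from_str(key)
```

Adding a new node type then only requires registering it in the table, which mirrors the boilerplate reduction the commits aim for.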
Summary:
Implement FromStr for BookmarkName so we can use it to handle bookmarks
more uniformly with other types in the walker.
Reviewed By: mitrandir77
Differential Revision: D24725786
fbshipit-source-id: e7eb7ece4a4bdc5dfd91f253f0383829c4ecc73b
Summary: Refactor from non-FromStr node parsing to FromStr, make it consistent with other node keys.
Reviewed By: mitrandir77
Differential Revision: D24711374
fbshipit-source-id: 84200b781bfad0f860acd8aecb95ff238490b92d
Summary: use PathKey for parsing of Node::HgFileNode in walker.
Reviewed By: ikostia
Differential Revision: D24711375
fbshipit-source-id: 4fe5887ba44ca9fca1dde54eaa75b30114b3b4b8
Summary: Add a PathKey newtype to Node so it can implement FromStr, and use it in parsing for HgManifest
Reviewed By: mitrandir77
Differential Revision: D24711371
fbshipit-source-id: a9879f6d2e16eb54b2ca7af4e812a4f031c9e584
Summary: Add a UnitKey newtype to the walker so that it can implement FromStr. This is leading up to all node keys supporting from_str, at which point I can generate NodeType::parse_node.
Reviewed By: mitrandir77, ikostia
Differential Revision: D24711376
fbshipit-source-id: aa4e26eb8e9206673298b632a079d2cc66d152ee
Summary: This is mostly a slight refactoring to help code reuse. However, there's a small behavior change as well (which I think is acceptable): before we compared `count` vs `max_value`, and now we compare `count + bump` vs `max_value`.
Reviewed By: krallin
Differential Revision: D24871175
fbshipit-source-id: 94e53ff2c05b4f9b236473c7e4b6d78229b64d53
Summary: Now that `derive03` is the only version available, rename it to `derive`.
Reviewed By: krallin
Differential Revision: D24900106
fbshipit-source-id: c7fbf9a00baca7d52da64f2b5c17e3fe1ddc179e
Summary:
Now that all code is using `BonsaiDerived::derive03`, we can remove the old
futures version `BonsaiDerived::derive`.
Reviewed By: krallin
Differential Revision: D24900108
fbshipit-source-id: 885d903d4a45e639e4d44e19b5d70fac26bce279
Summary:
The wirepack sending code builds up the entire history blob in memory
before sending it. Previously we did this by appending to a string. In Python
2 this was fast; in Python 3 it is O(n^2), and n can be 100k+ in cases of long
history.
Let's switch to list+join.
Reviewed By: xavierd
Differential Revision: D24933183
fbshipit-source-id: 5c36d7868e7c64a2292bd68ec2ffb584d85dd98f
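The complexity difference above can be sketched with two illustrative helpers (not the actual wirepack code): repeated bytes concatenation copies the accumulated buffer on each append, while collecting chunks in a list and joining once is linear.

```python
# Quadratic pattern: each += copies everything accumulated so far.
def build_history_blob_slow(chunks):
    blob = b""
    for chunk in chunks:
        blob += chunk
    return blob

# Linear pattern: O(1) appends, then a single O(n) join at the end.
def build_history_blob_fast(chunks):
    parts = []
    for chunk in chunks:
        parts.append(chunk)
    return b"".join(parts)
```

Both produce byte-identical output; only the time complexity differs, which is why the switch is safe.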
Summary:
osxfuse is rebranding as macfuse in 4.x.
That has ripple effects through how the filesystem is mounted and shows up in
the system.
This commit adjusts for the new opaque and undocumented mount procedure and
speculatively updates a couple of other code locations that were sensitive to
looking for "osxfuse" when examining filesystems.
Reviewed By: genevievehelsel
Differential Revision: D24769826
fbshipit-source-id: dab81256a31702587b0683079806558e891bd1d2
Summary:
We got a [report](https://fb.workplace.com/groups/scm/permalink/3379140858802177/) that a new hg build fails with an error because it can't xor None types.
```
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: PyErr {
ptype: <type 'exceptions.TypeError'>, pvalue: Some("unsupported operand type(s)
for ^: 'NoneType' and 'NoneType'"), ptraceback: Some(<traceback object at
0x00000249BB158248>) }',
```
Full stack trace is here
{P149395441}
This seems likely to be related to the diff I landed recently - D24725902 (7b1798be37).
However it's unclear why it was affecting only windows because I couldn't repro
it on linux.
It turned out that we have the experimental.treematcher option disabled on Windows,
which causes it to use includematcher instead of treematcher. And includematcher
returns either None or a BytesMatchObject, which are impossible to xor.
This diff fixes it by converting the values to bool first, and it also adds a
test for it.
Reviewed By: singhsrb
Differential Revision: D24918192
fbshipit-source-id: 1359e8b97d26d3b1a4795b7b3d4cfa3d6d4ae843
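A minimal sketch of the fix described above, assuming two matcher results that may each be None or a match object (the function name is illustrative):

```python
# Two optional matcher results can't be xor'ed directly (None ^ None is a
# TypeError), but their truthiness can be.
def state_changed(before, after):
    # bool() maps both None and a match object to a value ^ understands.
    return bool(before) ^ bool(after)
```

Without the bool conversion, the `None ^ None` case reproduces exactly the TypeError shown in the panic message.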
Summary:
It would be nice to see if there was an fsck on startup, the duration of the fsck, and whether it was able to repair all of the problems. This diff adds external logging for fsck runs on daemon start.
duration: how long the fsck took
success: false if it was not able to repair errors; true if it repaired all errors or didn't have to repair at all
attempted_repair: true if we found problems, false otherwise
Reviewed By: chadaustin
Differential Revision: D24774065
fbshipit-source-id: 2fa911652abec889299c74aaa2d53718ed3b4f92
Summary:
To ensure other parts of Mononoke can fully read new blobs as soon as they've
been written, ensure their buffers are flushed and they've been synced to disk
before returning from the blob put.
Reviewed By: krallin
Differential Revision: D24921657
fbshipit-source-id: df401470aaeeebcdc9d237271b40a399115ba25f
Summary:
We've seen http 2 potentially causing hangs for users. Let's make this
configurable for lfs, so we can disable it and see if things get fixed.
Reviewed By: krallin
Differential Revision: D24898322
fbshipit-source-id: dc7842c0247dc6b9590a1f160076b17788aab1b9
Summary:
As discussed in a group thread (see link below), HTTP 2 may be causing
hangs for users. Let's start by making the http-client configurable. In
subsequent diffs we'll make edenapi and lfs configurable as well.
Reviewed By: krallin
Differential Revision: D24898323
fbshipit-source-id: f0035a1b8df3cee626ebe519e9e99358c1b3f043
Summary:
This isn't code that compiles, but the convention in Rust is that code blocks
in documentation are doctests unless annotated otherwise, so if tested with
Cargo, those fail.
This fixes that.
Reviewed By: farnz
Differential Revision: D24917364
fbshipit-source-id: 62fe11700ce561c13dc5498e01d15894b17b5b22
Summary:
Thread Pool fails with py3 hg build. Let's replace with a loop.
Most of the usage for the command will be for a single head anyway.
Reviewed By: krallin
Differential Revision: D24902167
fbshipit-source-id: c7af46d0d63ddd074c98788bf55520ae3f2550b8
Summary: As we are making a directory structure inside the bucket anyway, it would be useful to combine keys per repo.
Reviewed By: ahornby
Differential Revision: D24884248
fbshipit-source-id: 85efeb7009a9d211381319caa4e72aa3687c51ee
Summary:
Transfers iddag flat segments along with the head_id that should be used to
rebuild a full-fledged IdDag. It also transfers idmap details. In the current
version it only transfers universal commit mappings.
Reviewed By: krallin
Differential Revision: D24808329
fbshipit-source-id: 4de9edcab56b54b901df1ca4be5985af2539ae05
Summary:
Under this configuration, SegmentedChangelog Dags (IdDag + IdMap) are always
downloaded from saves. There is no real state kept in memory.
It's a simple configuration and somewhat flexible with tweaks to blobstore
caching.
Reviewed By: krallin
Differential Revision: D24808330
fbshipit-source-id: 450011657c4d384b5b42e881af8a1bd008d2e005
Summary:
Constructs and returns `CloneData<ChangesetId>`. This object can then be used
to bootstrap a client dag that speaks bonsai commits.
Short term we are going to be using this data in the Mercurial client which
doesn't use bonsai. Hg MononokeRepo will convert it.
Long term we may decide that we want to download cached artifacts for
CloneData. I don't see an issue getting there, I see this as a valid path
forward that cuts down on the configuration required to get to the cached
artifacts. All that said, I think that using whatever dag is available in
memory would be a viable production option.
Reviewed By: krallin
Differential Revision: D24717915
fbshipit-source-id: 656924abb4bbfa1a11431000b6ca6ed2491cdc74
Summary: The SegmentedChangelogManager abstracts saving and loading Dags. This is currently used in the tailer and seeder processes. It will also be used to load dags while the server is running.
Reviewed By: krallin
Differential Revision: D24717925
fbshipit-source-id: 30dff7dfc957f455be6cf733b20449c804511b43
Summary:
The XLOG_EVERY_MS macro doesn't use the 3rd argument as a format string; it just
prints it verbatim. To format it, we need to use fmt::format.
Reviewed By: genevievehelsel
Differential Revision: D24906819
fbshipit-source-id: 7d45787301086fb87dd8f5d478af8007df82c0b6
Summary:
The move constructor needs to be noexcept and should also initialize the
members in the right order.
Reviewed By: genevievehelsel
Differential Revision: D24874304
fbshipit-source-id: a3db5dcdab1397b872b8f13ec5c7fd45baad5e6f
Summary:
The components iterator returns pieces of the original path, so using a reference
makes little sense and the compiler complains.
Reviewed By: genevievehelsel
Differential Revision: D24873851
fbshipit-source-id: 40d414dcb4a0539167ab4760dfc0095af8245b3a
Summary:
The documentation for PrjFillDirEntryBuffer states that if no entries could be
added, then the ERROR_INSUFFICIENT_BUFFER error needs to be returned as is; the
code didn't do that.
Reviewed By: chadaustin
Differential Revision: D24764566
fbshipit-source-id: d6411822eac71b2f9aa7cf07858d09115767cc59
Summary:
This is the plumbing to allow us to skip metadata prefetching during eden
prefetches. Metadata prefetches can trigger a bunch of wasted network requests
when we are fetching files anyway (wasted because we fetch the file contents
regardless, and most of them are throttled on sandcastle anyway).
We won't necessarily want to skip metadata prefetching always: we still want it
for the watchman queries, but for `eden prefetch` we will probably want to skip
it. This is why we are making it an option in the GlobParams.
Reviewed By: chadaustin
Differential Revision: D24640754
fbshipit-source-id: 20db62d4c0e59fe17cb6535c86ac8f1e3877879c
Summary:
We will start opting-in and rolling prefetch profiles mvp out to users soon.
This is a switch to allow users to opt-in, us to gradually rollout, and to
quickly turn prefetch profiles off if this causes issues for users.
Reviewed By: genevievehelsel
Differential Revision: D24803728
fbshipit-source-id: 0456f2a733958b495e5d84f7177c99f3ef481f57
Summary:
Allow users of `tests_utils` to create paths that are not `String`, by supporting any type
that can be converted into `MPath`.
Reviewed By: StanislavGlebik
Differential Revision: D24887002
fbshipit-source-id: 47ad567507185863c1cfa3c6738f30aa9266901a
Summary:
Add type definitions for skeleton manifests.
Skeleton manifests are manifests that correspond to the shape of the repository (which directories and files exist), but do not include anything relating to the content. This means they only change when files are added or deleted.
They are used for two purposes:
* To record the count of descendant directories for each directory. This will be useful for limiting parallelism when doing an ordered traversal of a manifest. The descendant directory count is a good estimate of the amount of work required to process a directory.
* To record whether a directory, or any of its subdirectories, contains a case conflict. This will be used to enforce case-conflict requirements in repos.
Differential Revision: D24787535
fbshipit-source-id: 7cb92546ed80687d5b98a6c00f9cd10896359b8d
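The two recorded properties above (descendant directory counts and case-conflict flags) can be illustrated with a toy Python sketch over a nested-dict directory tree; this is not Mononoke's skeleton manifest data structure, just the idea:

```python
# Toy tree: directories are dicts, files are None.
def summarize(tree):
    """Return (descendant_dir_count, has_case_conflict) for a directory."""
    # A case conflict exists if two sibling names collide case-insensitively.
    names_lower = [name.lower() for name in tree]
    conflict = len(names_lower) != len(set(names_lower))
    dir_count = 0
    for child in tree.values():
        if isinstance(child, dict):  # a subdirectory
            sub_dirs, sub_conflict = summarize(child)
            dir_count += 1 + sub_dirs  # count the child and its descendants
            conflict = conflict or sub_conflict
    return dir_count, conflict
```

Because neither value depends on file contents, both only change when files are added, deleted, or renamed, matching the description above.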
Summary:
On Windows, /bin/sh doesn't exist. To spawn a command in a shell, we need to
use Powershell.
Reviewed By: genevievehelsel
Differential Revision: D24864355
fbshipit-source-id: 3bcf630a90e644a31ff9db8fea9891476cad641d
Summary:
While doing notifications, I struggled a bit to get them working and thought
the special quoting on Windows didn't work as expected. It turns out the error
was cmd related and using a modern shell (PowerShell) fixed it.
Having a test for the quoting is a good idea nonetheless, so let's have one.
Reviewed By: genevievehelsel
Differential Revision: D24864357
fbshipit-source-id: 6b1ac50f3b7b1ef469378d5de21f56c24c0945f9
Summary:
BE: remove old subscriptions to save resources in IceBreaker. The client code will recreate them anyway if missing, but cleaning up will help us reduce the number of unused subscriptions.
Classic example: a repo's opsfiles or configerator may be needed once and then never used again.
Another example: switching workspaces failed, which could result in subscriptions not being cleaned up properly.
Reviewed By: markbt
Differential Revision: D24859931
fbshipit-source-id: 6df6c7e5f95859946726e04bce8bc8f3ac2d03df
Summary:
Those are the tweaks I've made to make `--config devel.bundle2.debug` more
verbose to aid with my investigation. This might help somebody else in the
future so let's commit it:
* added "params" decoding to debugsendbundle
* added "message" to `error:unsupportedcontent` part (we already send it with
some other error parts)
Reviewed By: sfilipco
Differential Revision: D24840405
fbshipit-source-id: b25d5823d05f3d50230c078e8db459dc66256707
Summary:
Generate walker EdgeType::outgoing_type() to reduce boilerplate.
When defining edges, no extra parameters are needed if the edge variant and destination node type are the same. If they differ, the destination node type is passed in parens, e.g. BonsaiParent(BonsaiChangeset)
Reviewed By: StanislavGlebik
Differential Revision: D24687828
fbshipit-source-id: 1616c786d78242c2b3a8c7a1ba58cc1433ea0a26
Summary:
This function is useful in Mononoke for computing the universal commit idmap
that is required for clone.
Reviewed By: quark-zju
Differential Revision: D24808327
fbshipit-source-id: 0cccd59bd7982dd0bc024d5fc85fb5aa5eafb831
Summary:
`flat_segments` are going to be used to generate CloneData. These segments will
be sent to a client repository and are going to bootstrap the iddag.
Reviewed By: quark-zju
Differential Revision: D24808331
fbshipit-source-id: 00bf9723a43bb159cd98304c2c4c6583988d75aa
Summary: This is the object that will be used to bootstrap a Dag after a clone.
Reviewed By: quark-zju
Differential Revision: D24808328
fbshipit-source-id: 2c7e97c027c84a11e8716f2e288500474990169b
Summary:
The goal is to reuse the functionality provided by AssignHeadOutcome for clone
purposes.
Reviewed By: quark-zju
Differential Revision: D24717924
fbshipit-source-id: e88f21ee0d8210e805e9d6896bc8992009bd7975
Summary:
The EdenFS codebase uses folly/logging/xlog to log, but we were still relying
on glog for the various CHECK macros. Since xlog also contains equivalent CHECK
macros, let's just rely on them instead.
This is mostly codemodded + arc lint + various fixes to get it to compile.
Reviewed By: chadaustin
Differential Revision: D24871174
fbshipit-source-id: 4d2a691df235d6dbd0fbd8f7c19d5a956e86b31c
Summary:
There were `eden top` issues on MacOS that I thought had been fixed a while ago,
but it doesn't look like we caught them all. This should catch the remaining bug
in `eden top`.
Reviewed By: genevievehelsel
Differential Revision: D23743199
fbshipit-source-id: ca66748c7a8a26062caf934c8f2c1fd13d9ae69e
Summary:
In order to allow requests to time out and display notifications to the user, the
`within` future method will need to be called on the various callback futures.
Unfortunately, once the timeout expires, the underlying future isn't cancelled
and stopped, but the unique pointer holding the context will be reclaimed.
Whenever the future actually completes, it will try to use an invalid pointer,
crashing EdenFS.
To solve this, switch to using a shared_ptr and copy it in the right places so
it will only be freed once all futures holding a reference to it are gone.
I also took the opportunity to reduce the nesting a bit to make the code more
readable.
Reviewed By: kmancini
Differential Revision: D24809647
fbshipit-source-id: 987d6e5763106fabc6bed3ea00d28b129b5285a1
Summary: These errors are Win32 errors, we need to wrap them into a HRESULT.
Reviewed By: chadaustin
Differential Revision: D24809646
fbshipit-source-id: 9f42b9d0c43474967dc26cb2c14cbee463768b79
Summary: It is possible that the hash of a newly created bonsai_changeset will be different from what is in the prod repo. In this case let's fetch the bonsai from prod, to make the backup repo consistent with prod.
Reviewed By: StanislavGlebik
Differential Revision: D24593003
fbshipit-source-id: 70496c59927dae190a8508d67f0e3d5bf8d32e5c
Summary: Use create_graph to generate the EdgeType enum in the walker to reduce the boilerplate needed when adding new derived node and edge types to the walker
Differential Revision: D24687827
fbshipit-source-id: 63337f4136c649948e0d3039529965c296c6d67e
Summary: Also use the 0.3 compatible .return_remainder in unbundle.
Reviewed By: ikostia
Differential Revision: D24729464
fbshipit-source-id: ede5cc60f4b872a3b968cf14bb0e2c5d9b85c242
Summary:
When finishing a hash computation for a blob, we currently call `format!` to allocate
and format the error string before calling `.expect` on the `write_all` result.
In practice this will never fail, so this is wasted work. From experimentation on
the playground, the Rust compiler does not appear to be smart enough to optimize this
away, either.
A small optimization, but let's get rid of this by calling panic directly, and
only in the failure path.
Reviewed By: farnz
Differential Revision: D24857833
fbshipit-source-id: e3e35b402ca3a9f6c9d8fbbd758cc486ef1c5566
Summary:
Adds `--reversefill` mode to bookmarks filler that fetches bookmark updates
from the queue and syncs them to infinitepush database.
Reviewed By: farnz
Differential Revision: D24538317
fbshipit-source-id: 5ac7ef601f2ff120c4efd8df08a416e00df0ceb9
Summary:
This is the first part of syncing new scratch bookmark pushes from Mononoke to
Mercurial: on each bookmark movement we log this bookmark movement to filler's
queue.
Reviewed By: liubov-dmitrieva
Differential Revision: D24480546
fbshipit-source-id: 27103b4b4f8c4600aaf485826db2936eaffcc4a9
Summary: Make the naming of bonsai fsnode edge variants consistent with the other edges in preparation for building them programmatically from a macro
Reviewed By: krallin
Differential Revision: D24687833
fbshipit-source-id: 8d46a53c023a4b8f95c0edc42df86e467c054ebb
Summary: Make the naming of linknode edge variants consistent with the other edges in preparation for building them programmatically from a macro
Reviewed By: krallin
Differential Revision: D24687832
fbshipit-source-id: 46525d7bebd17723a130a70f566b24104cc39656
Summary:
Use macro to implement Node::get_type() in walker.
Reduces the boilerplate when adding new types to the graph.
Reviewed By: farnz
Differential Revision: D24687826
fbshipit-source-id: 5f89c6fb69fd9df3fff25a2425a4d2035dbf5ed9
Summary: Generate NodeType::root_edge_type() so there is less boilerplate when adding new types to the walk.
Reviewed By: farnz
Differential Revision: D24687825
fbshipit-source-id: 083fc57aee8fe01b29ad4a6f9ebe660cc057dfab
Summary:
Define the walker graph with a macro to reduce repetition, which should make adding new derived data types simpler.
Specifically, this removes the duplication between NodeType and Node
Reviewed By: farnz
Differential Revision: D24687831
fbshipit-source-id: 97d67faf02b2a88bb871dc0388d75d3dd3e8528d
Summary: Use strum's EnumCount instead of our own macros so we can remove some code. This needed a strum upgrade to 0.19.
Reviewed By: krallin
Differential Revision: D24680441
fbshipit-source-id: 56e5b66f75c3d8ff949685c26f503571873c0cde
Summary: Update to strum 0.19 as it has improved EnumCount derivation
Reviewed By: mohanz
Differential Revision: D24680442
fbshipit-source-id: 2d3d2a84e994f09bf3b1c7ea748a80a67d100c13
Summary:
`manifest/test_utils` contains test utilities that are only used by derived data, and only one
of which relates to manifests. Its name (`test_utils`) is also confusing with `tests/utils`.
Move it to `derived_data_test_utils`, and update it to new futures.
Reviewed By: mitrandir77
Differential Revision: D24787536
fbshipit-source-id: 7a4a735132ccf81e3f75683c7f44c9ada11bc9d7
Summary:
Reduce visibility of the add_* functions that MononokeApp controls; no need for them to be public.
Updated a couple of binaries to use MononokeApp.with_fb303_args() instead of calling the add_fb303 function directly.
Reviewed By: krallin
Differential Revision: D24757202
fbshipit-source-id: a068ca4fd976429e7c02c4049429553cc8acf3d4
Summary: benchmark is the last remaining user of args::add_cachelib_args outside of MononokeApp; switch it to use MononokeApp instead.
Reviewed By: krallin
Differential Revision: D24755785
fbshipit-source-id: c105b4443394c88b6effdac382089e7eaca65bfe
Summary: make MononokeApp arguments more configurable so binaries can opt out of them if a section does not apply, making the --help more relevant.
Reviewed By: krallin
Differential Revision: D24757007
fbshipit-source-id: eed2f321bdbd04208567ef9a45cf861e56cdd07e
Summary:
Previously it was a config knob, but they are rather hard to change because
they require a restart of the service. Let's make it a tunable instead.
Reviewed By: farnz
Differential Revision: D24682129
fbshipit-source-id: 9832927a97b1ff9da49c69755c3fbdc1871b3c5d
Summary:
The original problem was a fastlog bug, solved by D24513444 (c3bcc1ab88).
Restores prefetching for phabricator status so `hg ssl` and `hg fssl` become fast again.
Original commit changeset: b10c4caf8fda
Reviewed By: sfilipco
Differential Revision: D24749774
fbshipit-source-id: fa14f7dde9c922733525a7ff014efc32875426fa
Summary:
The original issue was a rust-cpython bug, solved by D24698226, or https://github.com/dgrunwald/rust-cpython/pull/244.
Original commit changeset: 08f598df0892
Reviewed By: sfilipco
Differential Revision: D24759765
fbshipit-source-id: f9a1359cfce68c8754ddd1bcb8bfc54bf75af7ff
Summary:
This updates the last bit of the Filestore that was using 0.1 futures to 0.3.
This used to use a weighted buffered stream (which we don't have for 0.3
futures at this point), but as I started working on one I realized we don't
even need it here, so I took this out.
Reviewed By: StanislavGlebik
Differential Revision: D24735907
fbshipit-source-id: 00a55c14864b09f9c353f95f2f8cbb895cf52791
Summary:
This updates the external facing API of the filestore to use 0.3 streams.
Internally, there is still a bit of 0.3 streams, but as of this change, it's
all 0.3 outside.
This required a few changes here and there in places where it was simpler to
just update them to use 0.3 futures instead of `compat()`-ing everything.
Reviewed By: ikostia
Differential Revision: D24731298
fbshipit-source-id: 18a1dc58b27d129970a6aa2d0d23994d5c5de6aa
Summary: Like it says in the title.
Reviewed By: StanislavGlebik
Differential Revision: D24731300
fbshipit-source-id: b9c44fc1e4bd4cfe8655e1024a0547e40fb99424
Summary:
Like it says in the title. This required quite a lot of changes at callsites,
as you'd expect.
Reviewed By: StanislavGlebik
Differential Revision: D24731299
fbshipit-source-id: e58447e88dcc3ba1ab3c951f87f7042e2b03eb2c
Summary: Like it says in the title. This updates `store()` and its (many) callsites.
Reviewed By: ahornby
Differential Revision: D24728658
fbshipit-source-id: 5fccf76d25e58eaf069f3f0cf5a31d2c397687ea
Summary: Like it says in the title. Not much to be said here.
Reviewed By: ahornby
Differential Revision: D24727256
fbshipit-source-id: 1645339edf287ac7e59612589b308f08b708ae00
Summary:
This updates the metadata APIs in the filestore to futures 0.3 & async / await.
This changes the external API of the filestore, so there's quite a bit of churn
outside of that module.
Reviewed By: markbt
Differential Revision: D24727255
fbshipit-source-id: 59833f185abd6ab9c609c6bcc22ca88ada6f1b42
Summary:
Like it says in the title. This also lets us get rid of some macros we no
longer need.
Reviewed By: markbt
Differential Revision: D24727259
fbshipit-source-id: 5e3211bc08fa5376b4cfce4bea0428ab7bf3dc0f
Summary:
Like it says in the title. This also lets us remove the spawn module entirely.
Note that there is one little annoyance here: I ran into the good old "not
general enough" compiler issue, so I had to add a bit more boxing to make this
go away :(
Reviewed By: markbt
Differential Revision: D24727253
fbshipit-source-id: 73435305d39cade2f32b151734adf0969311c243
Summary:
This will simplify a bunch of refactoring, and we no longer need them not to
be. This lets us convert parts of `chunk` to new futures as well.
Reviewed By: markbt
Differential Revision: D24727254
fbshipit-source-id: de643effe2d1d42ff9bf85a48d09301e929e66de
Summary:
Like it says in the title. This updates the Filestore's multiplexer to new
futures. The change is pretty mechanical, save for:
- We don't use filestore::spawn anymore, since that's provided by
`tokio::task::spawn` now.
- We no longer need to use futures or streams with `Error = !` since in Futures
0.3 you can have an `Output` that isn't a `Result`.
- We need to make the `Stream` we accept `Send` because we can't use
`boxed().compat()` otherwise. I'd like to remove that constraint once the
conversion is complete, but considering all callsites do have a `Send` stream
(the only one that didn't was API Server but that's long gone), just adding
the bound is easiest.
Reviewed By: farnz
Differential Revision: D24708596
fbshipit-source-id: 8b278b5ae49029b7f0d0d9d4fe96c467e1343f60
Summary:
This makes some refactoring later easier. I'd like to not require this, but for
now it's a bit simpler to just do this. Those are the only callsites that
send non-static streams.
Reviewed By: markbt
Differential Revision: D24727258
fbshipit-source-id: c0e4dc86e249a08c2194a20de5a2dfd5a5933d0b
Summary: We don't need to declare a fake empty version number
Reviewed By: farnz
Differential Revision: D24757981
fbshipit-source-id: 594c97e225704d783bea723efcbb9dfc4d5d800b
Summary:
The root cause for S199754 was a file named "con.rs" checked into the
repo. Since this is a reserved filename on Windows, this broke all Windows
users having it in their sparse profiles.
The rules for reserved names are defined as such by Microsoft:
"CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9,
LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9. Also avoid these
names followed immediately by an extension; for example, NUL.txt is not
recommended. For more information, see Namespaces."
Of course, since the filesystem is case insensitive, these can have any casing.
Reviewed By: krallin
Differential Revision: D24453528
fbshipit-source-id: 389f15e2b1a88e3c1e8721fb7868616acabebc64
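The rule quoted above can be sketched in a few lines of Python (this is an illustration of the Microsoft rule, not the actual sparse-profile check): the device names are reserved in any casing, and also when immediately followed by an extension.

```python
# Reserved Windows device names: CON, PRN, AUX, NUL, COM1-COM9, LPT1-LPT9.
RESERVED = {"CON", "PRN", "AUX", "NUL"} | {
    f"{dev}{n}" for dev in ("COM", "LPT") for n in range(1, 10)
}

def is_reserved_windows_name(filename):
    # The name is reserved regardless of casing, and also with any
    # extension appended, e.g. "NUL.txt" or "con.rs".
    stem = filename.split(".", 1)[0]
    return stem.upper() in RESERVED
```

Note that "com10" is fine: only COM1 through COM9 (and LPT1 through LPT9) are reserved.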
Summary:
On Windows terminal, with light color schemes, crecord text was barely visible
(sometimes invisible) due to low contrast on either the background, or the
foreground. Making the text bold makes it brighter and thus more readable.
As a bonus, I've also made the hunk lines magenta to mimic what `hg diff` does.
Reviewed By: DurhamG
Differential Revision: D24718598
fbshipit-source-id: 18c2ff03fc2a46ca45808d5061db21e1f1b501ae
Summary: This makes it clean up stale files more aggressively.
Reviewed By: DurhamG
Differential Revision: D24744461
fbshipit-source-id: 76d163c9f16d8f8d1bf628e9197a3086d7cd48aa
Summary:
The goal of this code was to divide the cache limit by the number of
logs. Instead it divided the cache limit by the default per-log size (2GB). That
results in a very small max-bytes-per-log so data was being thrown out
constantly. This fixes it and updates tests to actually demonstrate the issue.
Reviewed By: kulshrax
Differential Revision: D24712842
fbshipit-source-id: 8062758b5bfa40493e2003d5a9028d601b1522b1
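The arithmetic behind the bug above, with illustrative numbers: dividing the total cache limit by the default per-log size instead of by the number of logs yields a tiny per-log budget.

```python
DEFAULT_PER_LOG = 2 * 1024**3  # the 2GB default per-log size

def per_log_limit_buggy(total_limit):
    # Wrong divisor: total divided by the default per-log size.
    return total_limit // DEFAULT_PER_LOG

def per_log_limit_fixed(total_limit, num_logs):
    # Intended behaviour: split the total limit across the logs.
    return total_limit // num_logs
```

With a 10 GiB total limit and 5 logs, the buggy version allows 5 bytes per log (hence data being thrown out constantly), while the fixed version allows 2 GiB per log.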
Summary:
Python 3 doesn't support line buffering for binary file descriptors.
Let's stop setting it in chg.
This was causing warnings to pop up during prompts for users.
```
.../python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
return io.open(fd, *args, **kwargs)
```
Reviewed By: singhsrb
Differential Revision: D24747777
fbshipit-source-id: 0b881b4067e8c7086fe73380f81d526a2ecc364a
Summary:
Downloading and applying mercurial bundles directly for a list of given heads.
The backup store stores commits as mercurial bundles that can be fetched directly from the store and applied (everstore).
The command could be useful when we migrate our server from one backend to another (Mononoke) and some commits may be missing in Mononoke.
The command could probably be deleted after a while once we migrate completely...
Reviewed By: mitrandir77
Differential Revision: D24756583
fbshipit-source-id: 1629c3756f244621efb965dfe15b75c7509a1cd1
Summary: As part of the effort to deprecate futures 0.1 in favor of 0.3, I want to create a new futures_ext crate that will contain some of the extensions from futures_01_ext that are still applicable. But first I need to reclaim this crate name by renaming the old futures_ext crate. This will also make it easier to track which parts of the codebase still use the old futures.
Reviewed By: farnz
Differential Revision: D24725776
fbshipit-source-id: 3574d2a0790f8212f6fad4106655cd41836ff74d
Summary:
In Mononoke, for a sharded DB we historically used a connection pool of size 1 per shard. With the Mysql FFI client this no longer makes sense, as the client's conn pool is smart enough and designed to work with sharded DBs, so currently we don't even benefit from having a pool.
In this diff I added an API to create sharded connections: a single pool is shared between all the shards.
Reviewed By: farnz
Differential Revision: D24475317
fbshipit-source-id: b7142c030a10ccfde1d5a44943b38cfa70332c6a
Summary:
This diff makes "Calculating additional actions for sparse profile update" more
efficient by using xormatcher instead of unionmatcher. Indeed, we are
interested only in files that changed their "state" after a sparse profile change,
e.g. a file that was included in the sparse profile and then became excluded.
Reviewed By: sfilipco
Differential Revision: D24725902
fbshipit-source-id: ee611e7c123b95937652ced828b5bea6d75a3daf
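A set-level sketch of why an xor matcher is the right shape here (illustrative only, not hg's matcher code): the files needing action after a sparse profile change are exactly those whose inclusion state flipped, i.e. the symmetric difference of the two profiles.

```python
def files_changing_state(old_profile, new_profile):
    # old_profile/new_profile: sets of paths matched by each profile version.
    # ^ keeps only paths included in exactly one of the two profiles.
    return old_profile ^ new_profile
```

A union matcher would instead visit everything in either profile, including the (usually much larger) set of files whose state didn't change.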
Summary:
At the moment differencematcher.visitdir never returns "all".
This diff changes it to return "all" in the case where self._m2 doesn't visit the directory at all and
self.m1.visitdir(dir) returns "all". This makes sense: if m1 visits all files
in the directory and m2 doesn't exclude any file, then it's safe to return "all"
in this case.
This optimization will be used in the next diff.
Reviewed By: sfilipco
Differential Revision: D24725903
fbshipit-source-id: 2a049cfb1ea4878331e8640cbb20af74da86a1a1
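The condition above can be modeled with a toy difference matcher (m1 - m2). The "all"/True/False return values mimic Mercurial's visitdir protocol, but the class and parameter names here are illustrative, not hg's actual implementation:

```python
class DifferenceMatcher:
    def __init__(self, m1_visitdir, m2_visitdir):
        # Each argument is a function: dir -> "all" | True | False.
        self.m1_visitdir = m1_visitdir
        self.m2_visitdir = m2_visitdir

    def visitdir(self, d):
        if not self.m2_visitdir(d) and self.m1_visitdir(d) == "all":
            # m2 doesn't visit d at all and m1 matches everything under
            # it, so the difference matches everything too: the fast path.
            return "all"
        # Conservative pre-existing behaviour: visit whenever m1 would.
        return bool(self.m1_visitdir(d))
```

Returning "all" lets callers skip per-file matching for the whole subtree, which is what the next diff's optimization builds on.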
Summary:
Whenever a sparse profile changes (e.g. we include or exclude a directory or a file) we do a full prefetch for all trees in the revision and then for each file in a revision we check if this file has changed its state after sparse profile change (i.e. whether it was included before the change and became excluded after the change and vice versa). It can be quite expensive for large repos and looks like checking all the files is unnecessary.
For example, there might be top-level directories that are excluded in sparse profile before and after the change. In that case there's no reason to check every file in this directory, and there's no reason to prefetch manifests for this directory.
More importantly, the `mf.walk()` method is already smart enough to do manifest prefetches if treemanifest.ondemandfetch is set to True, so it looks like there's no reason to do any additional prefetching at all (at least in theory).
So this diff does a few things:
1) The default mode is to use the mf.walk() method with a union matcher to find all the files that are included in either the old or new sparse profile. In order for it to prefetch efficiently we force-enable the treemanifest.ondemandfetch config option.
2) It also adds a fallback option to full prefetch (i.e. the same thing we do right now). Hopefully this fallback option won't be necessary and we'll delete it soon. I've added it only to be able to fall back to the current behaviour in case there are problems with the new one.
I think we can do an even more efficient fetch by using xor matcher instead of union matcher. I'll try to implement it in the next diffs
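The union-matcher idea can be sketched as follows (a hypothetical helper, not the real hg API): a union matcher matches a file if either profile matches it, so walking the manifest with it visits every file whose state could possibly have changed.

```python
# Hedged sketch: a union matcher over plain predicate functions.
# Real hg matchers are objects with match()/visitdir() methods.
def unionmatcher(matchers):
    def match(path):
        # A file is visited if any of the constituent matchers matches it.
        return any(m(path) for m in matchers)
    return match

# Hypothetical old and new sparse profiles.
oldprofile = lambda p: p.startswith("a/")
newprofile = lambda p: p.startswith("b/")
union = unionmatcher([oldprofile, newprofile])
```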
Reviewed By: sfilipco
Differential Revision: D24705823
fbshipit-source-id: 2c232a66cc74ee95bdaa84201df46448412f087f
Summary:
This seems to trip up Cargo builds
```
error: expected one of `!`, `.`, `::`, `;`, `?`, `{`, `}`, or an operator, found `with`
--> src/lib.rs:365:3
|
7 | S with version V1
| ^^^^ expected one of 8 possible tokens
error: aborting due to previous error
```
Reviewed By: StanislavGlebik
Differential Revision: D24754708
fbshipit-source-id: 0dc5539acf340ac409bf7b6158313c8fec16a275
Summary: force-unmount-all.sh is a convenience script for edenfs, so move it into eden/fs/.
Reviewed By: fanzeyi
Differential Revision: D24745361
fbshipit-source-id: 661a6f09b73911411fbb8a00bc016757ad19eb2a
Summary: This is unnecessary, remove it.
Reviewed By: chadaustin
Differential Revision: D24743519
fbshipit-source-id: 5e10eafcd3f84d9ad053be35798df86b21f97d4f
Summary:
One of the issues that EdenFS on Windows is currently facing is around
invalidation during an update. In effect, EdenFS is over-invalidating, which
causes update to be slower than it should be, and also leads to EdenFS
recursively triggering ProjectedFS callbacks during invalidation. Both of
these make for a sub-par UX.
The reason this issue exists is multi-faceted. First, the update code follows
the "kPreciseInodeNumberMemory" path, which enforces that a directory that is
present in the overlay needs to be invalidated, even if it isn't materialized.
The second reason is that no reclamation is done for the overlay; combine the
two and you get an update that gets slower over time and issues significantly
more invalidations than needed.
Solving this is a bit involved. We could for instance start by reclaiming
inodes from the overlay, but this wouldn't be effective as we use the fact that
an inode is present in the overlay as a way to know that the file is cached in
the overlay. If we reclaim from the overlay we simply won't be invalidating
enough and some files will be out of date.
It turns out that we already have a mechanism to track what is cached by the
kernel: the fuse refcount. On Linux/macOS, every time an inode is returned to
the kernel, this refcount is incremented, and the kernel then notifies us when
it has forgotten about it, at which point the refcount can be decremented. On
Windows, the rules are a bit different, and a simple flag is sufficient: set
when we write a placeholder on disk (either during a directory listing, or
when ProjectedFS asks for it), and unset at invalidation time during update.
There is however a small snag in this plan. On Linux, the refcount starts at 0
when EdenFS starts, as a mount/unmount will clear all the kernel references on
the inodes. On Windows, the placeholders don't disappear when EdenFS dies or
is stopped, so we need a way to scan the working copy when EdenFS starts to
know which inodes should be loaded (an UnloadedInode, really).
The astute reader will have noticed that this last part is effectively an
O(materialized) operation that needs to happen at startup, which would be
fairly expensive in itself. It turns out that we really don't have a choice
and need to do it regardless: Windows does not disallow writes to the working
copy while EdenFS is stopped, so for EdenFS to be aware of the actual state of
the working copy, it needs to scan it at startup...
The first step in doing all of this is to simply rename the various places
that use "fuse refcount" to "fs refcount", which is what this diff does.
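The Windows flag semantics described above can be sketched like this (purely illustrative; EdenFS itself is C++ and the names here are hypothetical):

```python
# Hedged sketch of the Windows "fs refcount" as a simple flag: set when a
# placeholder is written to disk, cleared when the placeholder is invalidated
# during an update. On Linux/macOS the same field is a true kernel refcount.
class InodeFsState:
    def __init__(self):
        self.fs_refcount = 0

    def placeholder_written(self):
        # A directory listing or a ProjectedFS fetch wrote a placeholder.
        self.fs_refcount = 1

    def invalidated(self):
        # Checkout/update invalidated the placeholder.
        self.fs_refcount = 0

state = InodeFsState()
state.placeholder_written()
```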
Reviewed By: chadaustin
Differential Revision: D24716801
fbshipit-source-id: e9e6ccff14c454e9f2626fab23daeb3930554b1a
Summary:
The revlog changelog has incompatible rev numbers with changelog2 backends. Do
not construct it. Instead, just use the current changelog.
Reviewed By: DurhamG
Differential Revision: D24513444
fbshipit-source-id: 35d9326cd9fde4af8b98d628f6df66bd80883f92
Summary:
Previously we were choosing the current version, and just as with the
backsyncer this is not always correct. Let's instead choose not the current
version but the version of the bookmark you are importing to.
This diff also introduces an integration test for a repo import into a pushredirected repo, and it turned out there were a few bugs in the repo_import code (open_source_sql was used instead of open_sql). This diff fixes them as well.
Reviewed By: ikostia
Differential Revision: D24651849
fbshipit-source-id: bfe36e005170ae2f49fa3a6cb208bf6d2c351298
Summary:
This diff changes the semantics of the `sync_commit()` function to return an error when
trying to sync a commit with no parents. This is a small code change with a big change
in semantics, and because of that I had to change how the backsyncer and the
mononoke_x_repo_sync job work.
Instead of using `unsafe_sync_commit()/sync_commit()` functions both backsyncer and
`x_repo_sync_job` now use `unsafe_sync_commit_with_expected_version()`
which forces them to specify which version to use for commit with no parents.
And in order to find this version I changed find_toposorted_unsynced_ancestors
to not only return the unsynced ancestors but also the mapping versions of
their parents (i.e. of the unsynced ancestors' parents). Given this mapping we
can figure out which version is supposed to be used in
`unsafe_sync_commit_with_expected_version()`.
The question arises of what to do when a commit doesn't have any synced ancestor and hence we can't decide
which version to use to remap it. At the moment we use the current version (i.e. preserving the existing behaviour).
However, this behaviour is incorrect, and so it will be changed in the next diffs.
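The version-selection logic can be sketched as follows (a hypothetical helper in Python, not the real Mononoke Rust code; names are illustrative):

```python
# Hedged sketch: pick the mapping version to use for a commit, given the
# mapping versions collected from the unsynced ancestors' parents.
def expected_mapping_version(parent_versions, current_version):
    versions = set(parent_versions)
    if not versions:
        # No synced ancestor at all: fall back to the current version
        # (the existing behaviour, noted above as incorrect and to be changed).
        return current_version
    if len(versions) > 1:
        # Parents disagree on the mapping version: refuse to guess.
        raise ValueError("ambiguous mapping versions: %r" % sorted(versions))
    return versions.pop()
```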
Reviewed By: ikostia
Differential Revision: D24617936
fbshipit-source-id: 6de26c50e4dde0d054ed2cba3508e6e8568f9222
Summary:
Previously we were always choosing the current version for remapping via
pushrebase, but this is incorrect. Let's instead select the version based on
the version the parent commits were remapped with.
Reviewed By: ikostia
Differential Revision: D24621128
fbshipit-source-id: 2fedc34b706f090266cd43eaf3439f8fb0360d0d
Summary: Let the strum crate do this for us
Reviewed By: krallin
Differential Revision: D24680444
fbshipit-source-id: dbde0077c105d6cc572a0c863bcb4d043714d441
Summary:
Now that fsnodes is async, convert more functions to use references, and tidy
up imports and type names.
Reviewed By: krallin
Differential Revision: D24726145
fbshipit-source-id: 75a619777f19754daf494a3743d26fa2e77aef54
Summary:
Update `fsnodes::derive_fsnode` and its immediate utility functions to use new style
futures and `async`/`.await` syntax.
Reviewed By: krallin
Differential Revision: D24726146
fbshipit-source-id: 0b0d5b1162a73568ef5c47db6e8252267e760e7f
Summary:
The goal of this diff is to provide more visibility into how long the client
takes to create/upload an infinitepush bundle. This is done in two ways:
- by adding more `perftrace` calls (useful when investigating individual slow
pushes)
- by adding `ui.timesection` scopes (useful for aggregation purposes)
Two main things that are measured:
- creation of the bundle purely on the client
- sending of the bundle over the wire
In addition, in the perftrace recording, this measures how long it takes to
process the reply handlers, how many bytes are sent over the wire, and what
the part names and sizes are (when available). These changes mostly do not distinguish
whether the code is infinitepush push or not, but they are always related to
some sort of a wireproto scenario, which means that the performance impact is
negligible (writing things to thread-local storage is *much* cheaper than
sending them over the network).
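The `ui.timesection` scope described above works roughly like this (a self-contained sketch; the real hg API may differ in names and reporting):

```python
# Hedged sketch of a timesection-style scope: a context manager that
# accumulates wall-clock time per named section, so timings can later be
# aggregated (e.g. "bundle creation" vs "bundle send over the wire").
import contextlib
import time

@contextlib.contextmanager
def timesection(name, stats):
    start = time.time()
    try:
        yield
    finally:
        stats[name] = stats.get(name, 0.0) + (time.time() - start)

stats = {}
with timesection("infinitepush_bundle_create", stats):
    pass  # e.g. build the bundle here
```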
Reviewed By: DurhamG
Differential Revision: D24683484
fbshipit-source-id: 53fdfb63dcdfcf38924237c59a1e8f5e24ff96c0
Summary: We're getting rid of old futures - remove them as a dep here
Reviewed By: StanislavGlebik
Differential Revision: D24705787
fbshipit-source-id: 83ae938be0c9f7f485c74d3e26d041e844e94a43
Summary:
We can have different bonsai changeset hashes for the same hg changeset. Consider a situation where we have this hg repo:
```
o B (Add file "b")
│
o A (Add file "a")
```
The correct bonsai changeset for B will have only the entry `(<Path_to_b>,Some(<hash_b>))` in `file_changes`. But we can also have a bonsai changeset for B with 2 entries: `(<Path_to_b>,Some(<hash_b>)), (<Path_to_a>,Some(<hash_a>))`. This diff provides the functionality to manually create such a situation; later it will be used to verify blobimport backups.
Reviewed By: StanislavGlebik
Differential Revision: D24589387
fbshipit-source-id: 89c56fca935dffe3cbfb282995efb287726a3ca9
Summary: We were incorrectly marking reverts as landed during pullcreatemarkers.
Reviewed By: quark-zju
Differential Revision: D24608217
fbshipit-source-id: f919f49469d6933c17894b3b0926ba2430a5947a
Summary:
As part of getting the buck build to work on OSX, we need procinfo to
include its OSX-specific library.
Reviewed By: sfilipco
Differential Revision: D24513234
fbshipit-source-id: 69d8dd546e28b4403718351ff7984ee6b2ed3d1d