Summary: convert `BlobRepo::get_bonsai_bookmark` to new type futures
Reviewed By: StanislavGlebik
Differential Revision: D25188577
fbshipit-source-id: fb6f2b592b9e9f76736bc1af5fa5a08d12744b5f
Summary: Let's use fbinit::compat_test, it makes the tests a bit shorter
Reviewed By: ikostia
Differential Revision: D25196759
fbshipit-source-id: bd8aca4b676a71158269195fd35153a0934b0cbc
Summary: Prefix old futures with `Old` so it would be possible to start conversion of BlobRepo to new type futures
Reviewed By: StanislavGlebik
Differential Revision: D25187882
fbshipit-source-id: d66debd2981564b289d50292888f3f6bd3343e94
Summary:
The expected walker validation failures can appear in any order. Sort
them so the comparison is deterministic.
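A minimal sketch of the idea (the failure strings here are made up, not real walker output):

```python
# Hypothetical validation failures, arriving in nondeterministic order.
expected = ["failure_a", "failure_b"]
actual = ["failure_b", "failure_a"]

# Comparing as-is is flaky because ordering varies between runs...
assert expected != actual
# ...but comparing sorted copies is deterministic.
assert sorted(expected) == sorted(actual)
```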
Reviewed By: StanislavGlebik
Differential Revision: D25195052
fbshipit-source-id: aeded6675c85186e12d27023fb862d7c52dd764f
Summary: editmergeps.bat was separating a filename with spaces into args[0] and args[1] when calling editmergeps.ps1. Use proper quoting to send files with spaces as a single argument.
Reviewed By: ikostia
Differential Revision: D25194324
fbshipit-source-id: 065f677c9015681b310e1cfc46f52ff563a35f99
Summary: This will make it easier to choose how to migrate this code to futures 0.3
Reviewed By: ahornby
Differential Revision: D25185646
fbshipit-source-id: b39f7540275115fff4e8b6f2e740d544c2c876ef
Summary:
While trying to repro a user report (https://fburl.com/jqvm320o), I ran into a
new hg error: P151186623, i.e.:
```
KeyError: 'Key not found HgId(Key { path: RepoPathBuf("fbcode/thrift/facebook/test/py/TARGETS"), hgid: HgId("55713728544d5955703d604299d77bb1ed50c62d") })'
```
After some investigation (and adding a lot of prints), I noticed that this
was trying to query the EdenAPI server for this filenode. That request should
succeed, given Mononoke knows about this filenode:
```
[torozco@devbig051]~/fbcode % mononoke_exec mononoke_admin fbsource --use-mysql-client filenodes by-id fbcode/thrift/facebook/test/py/TARGETS 55713728544d5955703d604299d77bb1ed50c62d
mononoke_exec: Using config stage prod (set MONONOKE_EXEC_STAGE= to override)
I1126 08:10:02.089614 3697890 [main] eden/mononoke/cmdlib/src/args/mod.rs:1097] using repo "fbsource" repoid RepositoryId(2100)
I1126 08:10:02.995172 3697890 [main] eden/mononoke/cmds/admin/filenodes.rs:137] Filenode: HgFileNodeId(HgNodeHash(Sha1(55713728544d5955703d604299d77bb1ed50c62d)))
I1126 08:10:02.995282 3697890 [main] eden/mononoke/cmds/admin/filenodes.rs:138] -- path: FilePath(MPath("fbcode/thrift/facebook/test/py/TARGETS"))
I1126 08:10:02.995341 3697890 [main] eden/mononoke/cmds/admin/filenodes.rs:139] -- p1: Some(HgFileNodeId(HgNodeHash(Sha1(ccb76adc7db0fc4a395be066fe5464873cdf57e7))))
I1126 08:10:02.995405 3697890 [main] eden/mononoke/cmds/admin/filenodes.rs:140] -- p2: None
I1126 08:10:02.995449 3697890 [main] eden/mononoke/cmds/admin/filenodes.rs:141] -- copyfrom: None
I1126 08:10:02.995486 3697890 [main] eden/mononoke/cmds/admin/filenodes.rs:142] -- linknode: HgChangesetId(HgNodeHash(Sha1(dfe46f7d6cd8bc9b03af8870aca521b1801126f0)))
```
Turns out, the success rate for this method is actually 0% — out of thousands of requests,
not a single one succeeded :(
https://fburl.com/scuba/edenapi_server/cma3c3j0
The root cause here is that the server side is not properly deserializing
requests (actually finding that was a problem of its own, I filed T80406893 for this).
If you manage to get it to print the errors, it says:
```
{"message":"Deserialization failed: missing field `path`","request_id":"f97e2c7c-a432-4696-9a4e-538ed0db0418"}
```
The reason for this is that the server side tries to deserialize the request
as if it was a `WireHistoryRequest`, but actually it's a `HistoryRequest`, so
all the fields have different names (we use numbers in `WireHistoryRequest`).
This diff fixes that. I also introduced a helper method to make this a little
less footgun-y and double-checked the other callsites. There is one callsite
right now that looks like it might be broken (the commit one), but I couldn't
find the client side interface for this (I'm guessing it's not implemented
yet), so for now I left it as-is.
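A rough Python sketch of the mismatch (the field names, JSON encoding, and helper names are illustrative stand-ins, not the real wire format):

```python
import json

# The wire type uses compact numeric field names, while the plain type uses
# descriptive ones. Serializing one form and deserializing it as the other
# makes required fields look missing.
def serialize_wire_history_request(path, hgid):
    # numbers stand in for the numeric field names used by WireHistoryRequest
    return json.dumps({"1": path, "2": hgid})

def deserialize_plain_history_request(payload):
    data = json.loads(payload)
    if "path" not in data:
        raise ValueError("Deserialization failed: missing field `path`")
    return data["path"], data["hgid"]

payload = serialize_wire_history_request("fbcode/TARGETS", "55713728544d")
try:
    deserialize_plain_history_request(payload)
    ok = True
except ValueError as e:
    ok = False
    error = str(e)
assert not ok
assert "missing field `path`" in error
```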
Reviewed By: StanislavGlebik
Differential Revision: D25187639
fbshipit-source-id: fa993579666dda762c0d71ccb56a646c20aee606
Summary:
In the next diff I'd like to allow hg sync job to combine a few bookmark update log entries and send a single bundle for all of them. The goal is to reduce the lag between mononoke and hg servers.
We've already made an attempt to implement bundle combining some time ago, which is why we have things like CombinedBookmarkUpdateLogEntry. However it was never really used in prod - back then the only way to sync a bundle from mononoke to hg servers was to replay a bundle that the client had pushed to us, and combining bundles like that was tricky.
Now it's different, because the hg sync job has the logic to generate the bundles itself rather than reuse the bundle that the client pushed to us. This makes implementing bundle combining easier.
This diff doesn't add the actual bundle combining, but it does the refactoring that makes it easier. In particular:
1) Old logic for combining bundles was removed - it was never really used anyway.
2) A prepare_bundles() function was added - it takes a vector of BookmarkUpdateLogEntry and returns a vector of CombinedBookmarkUpdateLogEntry. The idea is to move the bundle combining logic from the main function to the BundlePreparer class, since it has better knowledge of how to do bundle combining (in particular, it knows whether the sync job reuses an existing bundle or generates one).
3) prepare_single_bundle() was changed - instead of taking the bookmark name and from/to changeset ids from a BookmarkUpdateLogEntry, it now requires passing them explicitly. This makes adding bundle combining easier in the next diff.
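The shape of the refactoring can be sketched like this (hypothetical Python stand-ins for the Rust types; combining itself is left for the later diff, so each combined entry wraps exactly one source entry):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BookmarkUpdateLogEntry:
    bookmark: str
    from_cs: str
    to_cs: str

@dataclass
class CombinedBookmarkUpdateLogEntry:
    components: List[BookmarkUpdateLogEntry]

def prepare_bundles(entries):
    # One combined entry per log entry for now; a later diff can merge
    # consecutive entries for the same bookmark here.
    return [CombinedBookmarkUpdateLogEntry([e]) for e in entries]

entries = [
    BookmarkUpdateLogEntry("master", "a", "b"),
    BookmarkUpdateLogEntry("master", "b", "c"),
]
combined = prepare_bundles(entries)
assert len(combined) == 2
assert combined[0].components[0].bookmark == "master"
```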
Reviewed By: mitrandir77
Differential Revision: D25168877
fbshipit-source-id: 2935d1795925b4bf0456b9221e2f76ce3987cbd0
Summary: Only the id is used when loading the manifest, not the path. Path is only needed for route information
Reviewed By: mitrandir77
Differential Revision: D25185264
fbshipit-source-id: 1af453fa2716d53ea4fb18a81d59867e98ea07f6
Summary: The path is not needed to load the fsnode, only as route info for things like compression
Reviewed By: mitrandir77
Differential Revision: D25184485
fbshipit-source-id: afd84dcd9dbd82c2c9018e86fb676bc57a5d009b
Summary:
We have `HgBlobChangeset`, `HgFileEnvelope`, `HgManifestEnvelope` ... but we
also have `BlobManifest`. Let's be a little consistent.
Reviewed By: markbt
Differential Revision: D25122288
fbshipit-source-id: 9ae0be49986dbcc31cee9a46bd30093b07795c62
Summary: Add them to the walker so we can scrub and inspect them
Reviewed By: StanislavGlebik
Differential Revision: D25124144
fbshipit-source-id: 5df1ca6e48d18d3d55f68905e93fd75fbae92adb
Summary:
We have 4 different ways of awaiting futures in there: sometimes we create a
runtime, sometimes we use async-unit, sometimes we use `fbinit::test` and
sometimes we use `fbinit::compat_test`. Let's be consistent.
While in there, let's also get rid of `should_panic`, since that's not a very
good way of running tests.
Reviewed By: HarveyHunt
Differential Revision: D25186195
fbshipit-source-id: b64bb61935fb2132d2e5d8dff66fd4efdae1bf64
Summary:
HgBlobEntry is kind of a problematic type right now:
- It has no typing to indicate if it's a file or a manifest
- It always has a file type, but we only sometimes care about it
This results in us using `HgBlobEntry` to sometimes represent
`Entry<HgManifestId, (FileType, HgFileNodeId)>`, and some other times to
represent `Entry<HgManifestId, HgFileNodeId>`.
This makes the code a) confusing, b) harder to refactor, because you may have
to change unrelated code that cares about the use case you don't (i.e.
with or without the FileType), and c) prone to breaking things, since we
sometimes just throw in a `FileType::Normal` when we don't actually care about
the type.
So, this diff just removes it, and replaces it with the appropriate types.
Reviewed By: farnz
Differential Revision: D25122291
fbshipit-source-id: e9c060c509357321a8059d95daf22399177140f1
Summary:
I'd like to get rid of HgEntry altogether, as it's not a very useful
abstraction (`Entry` is better). This is one step towards this, by moving the
HgEntry -> Leaf conversion close to where the `HgEntry` is created (which is
where we upload filenodes).
Reviewed By: markbt
Differential Revision: D25122290
fbshipit-source-id: 0e3049392791153b9547091516e702fb04ad7094
Summary:
I'd like to reduce the number of places that use HgEntryId, and make its usage
a little more consistent. Indeed, right now, HgEntryId is used in some places
to upload things (where we just give it a fake entry type), and sometimes to
represent an actual entry in a manifest; this isn't great.
Let's remove it from here.
Reviewed By: markbt
Differential Revision: D25122289
fbshipit-source-id: 5075383e037e4e890af203d133f0a25118c19823
Summary:
This contains a path, which it turns out is actually never used throughout the
codebase. Remove it.
Reviewed By: markbt
Differential Revision: D25122292
fbshipit-source-id: eab528bfb1e3893bca4d92d62c30499cf9eead6c
Summary: So we can scrub and inspect them
Reviewed By: StanislavGlebik
Differential Revision: D25120995
fbshipit-source-id: d150e55f0d72f584c15dbaf2bd017d19130312b2
Summary: This diff asyncifies build_reporting_handler and, while there, also simplifies the function a bit by ignoring cases where log_entries is empty or not specified
Reviewed By: farnz
Differential Revision: D25184396
fbshipit-source-id: 46b5d2f9fb5571e502bcdf5a0fe964fb62426313
Summary:
This diff asyncifies SYNC_LOOP similar to how SYNC_ONCE was asyncified.
The biggest part of SYNC_LOOP is a stream that starts with loop_over_log_entries. Previously it was a very long chain of combinators. This diff makes this chain ~2 times smaller, but unfortunately it can't make it even smaller because of the use of "buffered(...)" method.
Reviewed By: ahornby
Differential Revision: D25123487
fbshipit-source-id: e913bbd1369e4375b5b1d0e6ba462e463a5a44fc
Summary:
The write & read path use different back up state files, which means they can
go out of sync. If that happens, it's a bit annoying because:
- The output of `hg cloud check` is out of sync with that of `hg sl`
- `hg cloud backup -r` says the commit is backedup, and `hg cloud check -r`
says it isn't.
This diff fixes this by just using the `backedup()` revset, which under the
hood reads all state files.
Reviewed By: liubov-dmitrieva
Differential Revision: D25186071
fbshipit-source-id: 0ae2d43227d65a3564b2f796218b55982d5cc467
Summary:
Right now we just get a "deadline exceeded" error, which isn't very amenable to
helping us understand why we timed out. Let's add more logging. Notably,
I'd like to understand what we've actually received at this point, if anything,
and how long we waited, as I'm starting to suspect this issue doesn't have much
to do with HTTP.
See https://fb.workplace.com/groups/scm/permalink/3361972140519049/ for
more context.
Reviewed By: quark-zju
Differential Revision: D25128159
fbshipit-source-id: b45d3415526fdf21aa80b7eeed98ee9206fbfd12
Summary:
`eden du --clean` currently fails with
```
Fatal Python error: initfsencoding: unable to load the file system codec
ModuleNotFoundError: No module named 'encodings'
```
Full error: P149352812
It looks like this is because Buck expects to run with a different python, so
here I clear out the PYTHONHOME variable before we start buck.
This reuses the logic we use elsewhere to clean up the environment before
calling buck.
Reviewed By: wez
Differential Revision: D24904105
fbshipit-source-id: 73587c52aff3ea00f68339eb44e7042329de2e44
Summary: `lru-disk-cache` depends on an old version of `linked-hash-map` which contains UB in 1.48 (see https://github.com/mozilla/sccache/issues/813). They updated the deps in their repo months ago, but haven't pushed a new version. This diff makes us get `lru-disk-cache` directly from their GitHub instead.
Reviewed By: dtolnay
Differential Revision: D25134582
fbshipit-source-id: 05fd63a76b7095ebeea458645b92a83bbd8c4614
Summary:
It was mistakenly mapping a deleted manifest id to a unode id.
I'm surprised the walk.rs code that constructs the DeletedManifestToDeletedManifestChild edge built with this bug; presumably the DeletedManifestId and ManifestUnodeId types coerce to each other.
Reviewed By: markbt
Differential Revision: D25120994
fbshipit-source-id: 1b53037808779c345c163ef32324961938078fc7
Summary:
Soon we are going to use hg sync job for configerator repos, and they might use
Push bookmark move. Let's allow it in sync job
Reviewed By: ikostia
Differential Revision: D25121176
fbshipit-source-id: f6000617b42be8392730ffc56be1947fbdbfff76
Summary:
This adds support for optionally not uploading commits we already have when
they arrive via infinitepush. This can happen if we're replaying bundles.
This works by filtering the commits we have. We still get some futures created
for the underlying uploads, but we never poll them because when we visit
what futures are needed for what commits, we don't select uploads that are
only reachable from commits we filtered out.
Obviously, this isn't super efficient, since the client still has to send us
all this data, but it's better than having them send us all that data then
having us take hours overwriting it all.
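The filtering step can be sketched as follows (a simplified, hypothetical version: the real change also avoids polling upload futures only reachable from filtered commits):

```python
# Skip uploading commits the server already knows about, so replayed
# infinitepush bundles don't spend hours rewriting existing data.
def commits_to_upload(incoming, already_known):
    return [c for c in incoming if c not in already_known]

already_known = {"c1", "c2"}
incoming = ["c1", "c2", "c3"]

# Only the genuinely new commit is uploaded.
assert commits_to_upload(incoming, already_known) == ["c3"]
```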
Reviewed By: mitrandir77
Differential Revision: D25120844
fbshipit-source-id: f96881d0d98ec622259bedb624fd94b600a2bf1d
Summary: Remove 'static requirement for async methods of Blobstore, propagate this change and fixup low hanging fruits where the code can become 'static free easily.
Reviewed By: ahornby, farnz
Differential Revision: D24839054
fbshipit-source-id: 5d5daa04c23c4c9ae902b669b0a71fe41ee6dee6
Summary:
It feels like invalidating the entry before the directory makes slightly more
sense, so do it in that order.
Reviewed By: chadaustin
Differential Revision: D24800817
fbshipit-source-id: ed053d07bbae6954c276d1ad7a1ff247e5c055d9
Summary:
It turns out the hggit tests weren't passing in Python 3, despite us
having removed them from our py3-broken list. Woops. This fixes them and enables
the tests.
Reviewed By: sfilipco
Differential Revision: D25095189
fbshipit-source-id: acffca34b0d5defa7575ede60621ca2ce0a2afe4
Summary:
Back in March we forced all extras to be strings. It turns out hggit
needs to write binary extras since it stores some binary patches in extras.
To support that, let's encode commit extras using surrogateescape. That will
allow hggit to put binary data in extras in a later diff.
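The key property of surrogateescape is that arbitrary bytes round-trip losslessly through a str, which is what lets binary extras survive the "extras are strings" rule:

```python
# Bytes that are not valid UTF-8, as a binary patch stored in extras might be.
raw = b"\xff\xfebinary-patch"

# surrogateescape maps undecodable bytes to lone surrogates on decode...
s = raw.decode("utf-8", "surrogateescape")

# ...and maps them back to the original bytes on encode: lossless round-trip.
assert s.encode("utf-8", "surrogateescape") == raw
```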
Reviewed By: sfilipco
Differential Revision: D25095190
fbshipit-source-id: 330bf06b15fc435f70119490557b8c22341b6ed9
Summary:
Apparently shards for some DBs start from 1 (production_filenodes) and for some from 0 (production_blobstore).
This diff fixes the issue for mysql connections.
Long term we might want to query SMC for the list of shards instead of hardcoding different values in the different places.
Reviewed By: farnz
Differential Revision: D25057136
fbshipit-source-id: 9201a2ec8afe0b66a246a2ee91cc9389630f5ddf
Summary:
Add a TraceBus to HgQueuedBackingStore and allow tracing import events over Thrift.
This powers a new `eden trace hg` command that allows a live view of
tree and blob imports.
Reviewed By: genevievehelsel
Differential Revision: D24732059
fbshipit-source-id: 525152fe39047160a68c1706217a06a00a6dbef1
Summary:
We got a few ubns because one bookmark validator in a single region wasn't able
to connect to mysql and was reporting errors.
This diff fixes by separating logical and infra errors
Reviewed By: ikostia
Differential Revision: D25092364
fbshipit-source-id: 93f4be1a7e0467051b7b8d927eef9b4f5cd6a983
Summary:
This isn't actually being consulted anywhere save for a single test, so let's
just remove it (it's not like the test checks anything important — that field
might as well not exist given we never read it).
Reviewed By: farnz
Differential Revision: D25093494
fbshipit-source-id: 5f4a53f8666fc0e8a89ceade44baa96e71fb813f
Summary:
This is a bit unnecessary as it stands — we roundtrip the path through
execute() just to return it back. This path was used for trace logging, but
given we literally never look at this log, let's just simplify this logging a
little bit.
Reviewed By: StanislavGlebik
Differential Revision: D25089344
fbshipit-source-id: 15b0f1cce8c9b2938429de19ff063e5677794912
Summary:
With segmented changelog, rev can exceed f64 safe range
(Number.MAX_SAFE_INTEGER in Javascript, 9007199254740991, 0x1fffffffffffff).
If rev is used in JSON, then the JSON cannot be parsed with precise rev
information.
This diff adds a compatibility mode so template will map the out-of-range revs
to safe range, and the mapped revs can be detected, and mapped back to be
resolved correctly.
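The precision cliff is easy to demonstrate: beyond 2**53 - 1 (Number.MAX_SAFE_INTEGER), consecutive integers collapse when stored in an IEEE-754 double, so revs in that range cannot survive a JSON consumer that parses numbers as doubles:

```python
MAX_SAFE_INTEGER = 2**53 - 1  # 9007199254740991, same bound as in JavaScript

# Within the safe range, neighboring integers stay distinct as doubles...
assert float(MAX_SAFE_INTEGER) != float(MAX_SAFE_INTEGER - 1)

# ...but just past it, two different revs map to the same double.
assert float(MAX_SAFE_INTEGER + 1) == float(MAX_SAFE_INTEGER + 2)
```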
Reviewed By: sfilipco
Differential Revision: D25079001
fbshipit-source-id: 52a5a2c8345a76355b8bf81435034d90c7c31610
Summary: Tests the behaviour of collecting the raw bundles.
Reviewed By: krallin
Differential Revision: D25025255
fbshipit-source-id: 114da273a28d131f5dd24047ed28ea23d076f235
Summary:
The `getdesignatednodes` function returns a boolean indicating whether the requested nodes were actually fetched.
In the case of SSH, this is needed because the server may not support the `designatednodes` capability (only supported by Mononoke). If the fetch fails, Mercurial will fall back to an expensive complete tree fetch.
The HTTP version of this function accidentally omitted `return True`, which meant it implicitly returned `None`, which triggered the fallback path.
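A sketch of the bug (`fetch` is a hypothetical stand-in; only the `getdesignatednodes` name comes from the summary): a Python function without a `return` implicitly returns `None`, which is falsy, so the caller takes the expensive fallback even though the fetch succeeded.

```python
def fetch(keys):
    pass  # pretend this fetched the designated nodes successfully

def getdesignatednodes_broken(keys):
    fetch(keys)
    # bug: missing `return True`

def getdesignatednodes_fixed(keys):
    fetch(keys)
    return True

assert getdesignatednodes_broken(["abc"]) is None  # falsy -> fallback taken
assert getdesignatednodes_fixed(["abc"]) is True   # fallback avoided
```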
Reviewed By: sfilipco
Differential Revision: D25074067
fbshipit-source-id: 089d5382dd566db89ee732cdcb82762c8d43e21a
Summary: The test doesn't compile on Windows, let's just ifdef it.
Reviewed By: genevievehelsel
Differential Revision: D25033804
fbshipit-source-id: 4f312f010f9d0db42cc9ae19df3f668e8e1c4665
Summary:
Converting back and forth between folly::fs::path and AbsolutePath appears to
be problematic on Windows, as NUL bytes appear in the paths, causing the tests
to fail. Instead of doing this conversion, let's simply use AbsolutePath everywhere.
Reviewed By: chadaustin
Differential Revision: D25033803
fbshipit-source-id: 6c45c2a20fc4bf18cecc838b219faacfeb8386d8
Summary:
Querying bookmarks in all repos at the exact same time results in us making a
bunch of concurrent queries to MySQL, which in turn results in MyRouter
allocating a bunch of connections.
This is for reporting the age of bookmarks: that stuff can wait... It doesn't
need to be super fast. Let's make it run on repos one at a time to avoid
allocating dozens of connections every 10 seconds (which is more often than
MyRouter seems to close them).
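The change amounts to awaiting the per-repo queries sequentially instead of spawning them all at once. A hypothetical asyncio sketch (names are illustrative):

```python
import asyncio

# Query repos one at a time, so at most one DB query is in flight for this
# low-priority bookmark-age reporting, instead of one connection per repo.
async def report_bookmark_ages(repos, query):
    ages = {}
    for repo in repos:      # sequential: the next query starts only after
        ages[repo] = await query(repo)  # the previous one completes
    return ages

async def fake_query(repo):
    return len(repo)  # stand-in for a real bookmark-age query

ages = asyncio.run(report_bookmark_ages(["fbsource", "www"], fake_query))
assert ages == {"fbsource": 8, "www": 3}
```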
Reviewed By: ikostia
Differential Revision: D25057432
fbshipit-source-id: 8b65ef65752fc9762a26d835ac80f61573003dd7
Summary:
Skip the need to go through SSH/Mercurial servers. Directly connect to Mononoke
via TLS with X509 certificates.
Reviewed By: markbt
Differential Revision: D24756947
fbshipit-source-id: a782c894956d6a6be055eaa78287c040ceb57da7
Summary: Add a test case for walking hg data from non-public changeset
Reviewed By: ikostia
Differential Revision: D25023130
fbshipit-source-id: 34295f77926b32c77095f7c10d6daa8ef59d9550
Summary: Add command line option to exclude derived data types from blobimport so we can use it to create non-public commits without filenode data for tests
Differential Revision: D25023116
fbshipit-source-id: d8e5d6955f11cebec0de2075c22981bf6c6f4af3
Summary: Add DeletedManifest to walker types with edges to linked changeset and its sub-manifests so that they can be scrubbed.
Differential Revision: D25000319
fbshipit-source-id: f146e6132fde0fb13e630d315484cc2c0a964bdc
Summary: Add the mapping from bonsai to deleted manifest to the walker
Reviewed By: ikostia
Differential Revision: D24989424
fbshipit-source-id: 53d622f661629b9b3de91910f4560b641a95a7bf
Summary: Sometimes the pretty debug format is too verbose, and the one-per-line regular debug format is preferable.
Reviewed By: ikostia
Differential Revision: D24996432
fbshipit-source-id: 1acda3985658e4c17b57e36734c77b7579e7e28a
Summary: add an output option to walker scrub so we can dump out debug representation of any type the walker can walk
Reviewed By: farnz
Differential Revision: D24996433
fbshipit-source-id: a332d89d65e4d928159155a34bd39b0e2e1131de
Summary: Add gathering of statistics to the three main futures in create_changeset.
Reviewed By: StanislavGlebik
Differential Revision: D25022231
fbshipit-source-id: 26c7cd4a05483e694bdff24229e61a63249f98b5
Summary:
Add a trait that lets us log data from the response of a thrift request.
Use this to log the commit that was created by `repo_create_commit`.
Reviewed By: StanislavGlebik
Differential Revision: D25022232
fbshipit-source-id: c6526b29b1d2072bf7d4c46d80cb1a5bf522d227
Summary:
We don't currently log any information about RepoCreateCommit parameters.
Start logging the parents, date, author and number of changes and deletes.
Reviewed By: farnz
Differential Revision: D25021423
fbshipit-source-id: 2723c208643e074861732a21e149c06ad47879f2
Summary:
Move the check for commits not having case conflicts from upload time to when
the commit is being landed to a public bookmark.
This allows draft commits in commit cloud to contain erroneously-introduced
case conflicts, whilst still protecting public commits from these case conflicts.
Note that the checks we are moving are the checks for whether this commit
contains case conflicts internally (two files added by this commit conflict in
case), or against the content of ancestor commits (the commit adds a file which
conflicts with an existing file). The check for whether this commit and the
commits it is being pushrebased over conflict still happens during pushrebase.
This behaviour is controlled by a pair of tunable flags:
* `check_case_conflicts_on_bookmark_movement` enables the new check at land time
* `skip_case_conflict_check_on_changeset_upload` disables the old check at upload-time
The `check_case_conflicts_on_bookmark_movement` should always be set if the
`skip_case_conflict_check_on_changeset_upload` is set, otherwise users will
be able to land case conflicts. Setting `check_case_conflicts_on_bookmark_movement`
without setting `skip_case_conflict_check_on_changeset_upload` allows both
checks to be enabled if that is necessary.
To handle bookmark-only moves, the `run_hooks_on_additional_changesets` tunable
flag must also be enabled.
Reviewed By: ahornby
Differential Revision: D24990174
fbshipit-source-id: 34e40e389f2c2139ba24ecee75473c362f365864
Summary:
Add a method to `SkeletonManifest` which finds the paths to the first case
conflict, if there is one.
First means lexicographically first within directories, and with the shortest path.
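A simplified flat sketch of the search (hypothetical; the real method walks the manifest tree and also prefers the shortest path, which a flat list doesn't capture):

```python
# Find the lexicographically-first pair of names that differ only in case.
def first_case_conflict(names):
    seen = {}  # lowercased name -> first original spelling
    for name in sorted(names):
        key = name.lower()
        if key in seen and seen[key] != name:
            return (seen[key], name)
        seen[key] = name
    return None

assert first_case_conflict(["README", "readme", "src"]) == ("README", "readme")
assert first_case_conflict(["a", "b"]) is None
```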
Reviewed By: ahornby, StanislavGlebik
Differential Revision: D24990175
fbshipit-source-id: ec10f66582b81c40740823e32362ca489a6ebb4d
Summary:
Refactor loading of additional changesets so that it is cached. This will
allow us to access them multiple times but only load them once.
Differential Revision: D24990176
fbshipit-source-id: c21cd1a811ede8fe2c2b03444de0f071ecf5a38c
Summary:
We used to produce a confusing error message during glob evaluation
when . or .. was specified as a glob component. Instead, fail early,
with an error message that more directly explains the problem.
Reviewed By: genevievehelsel
Differential Revision: D24969096
fbshipit-source-id: fe70a8f4db1fdce8eec13890d20913b63a516518
Summary:
Be more specific about which PathComponent string failed to validate
in order to help diagnose downstream issues like glob syntax errors.
Reviewed By: genevievehelsel
Differential Revision: D24966004
fbshipit-source-id: cd3bc0aeaeb389caa13c86b91149d48c5afdb306
Summary:
Previous code used flat_map on the result which discarded the errors.
Fix it and add a test
Reviewed By: StanislavGlebik
Differential Revision: D24989246
fbshipit-source-id: 075c8e40dceebc480ad722fb467a8a9e9bef5905
Summary:
There seem to be legitimate cases when we want to have both bypass mechanisms available: via the commit message and via the pushvar. The need to be able to use a commit message arises from the fact that engineers rarely have access to push CLI, so they cannot use pushvars. Still, pushvars are useful if we want to push an existing commit without any changes to its message.
NB. Since I wanted to make `HookBypass` a multi-option, I had to change it from `enum` to `struct`. If I left fields `pub` in that struct (the way it's commonly done in `metaconfig`), I would've allowed `HookBypass` instances with both options `None`. This is not a big deal, but is a bit ugly, as we already store `Option<HookBypass>`, so it's better if `Some(hook_bypass)` means that there indeed is some way to bypass the hook. So I opted for constructors, getters and private fields.
Reviewed By: StanislavGlebik
Differential Revision: D25000578
fbshipit-source-id: ad13076c9690ee8d45fc4a93e32a6026ff5bdd09
Summary:
Dechunker has a feature of saving the entire dechunked bundle contents in memory;
this is used to save the raw bundles to manifold. Until now this feature worked
properly only when accessed via `Read` trait methods. When accessed via the `BufRead`
trait, the logic that collects the read contents was skipped.
This manifested in the saved infinitepush bundles always being truncated to 4kb.
Reviewed By: markbt
Differential Revision: D25020371
fbshipit-source-id: c606c9fb116a1cd00ae7f4558a7249364faa9c13
Summary:
Add Blame to walker so that we can scrub it.
For public commits, if blame is requested we wait for blame to be derived before traversing unodes. This saves traversing the unode tree twice.
For non-public commits they are not expected to have blame, so for those we don't wait for blame to be derived before traversing unodes.
Reviewed By: farnz
Differential Revision: D24896243
fbshipit-source-id: 66226a8e47f115bcda62269ade63874e0fff4ba0
Summary:
This change allows the walk steps to check the underlying WalkState's phase information to see if a changeset is public. It's exposed to the steps via Checker::is_public().
This saves repeated phase checks from bonsai_to_hg_mapping_step if the walk state already knows it's public, and makes sure we are passing the get_phase ephemeral_derive flag from just one place (it should always be the opposite of enable_derive).
Reviewed By: farnz
Differential Revision: D24954837
fbshipit-source-id: b911d69837db8ef34fbe2c27f642d6819ea46908
Summary:
delegate FromStr, AsRef and sampling_fingerprint from BlameId to the
inner FileUnodeId
Reviewed By: markbt
Differential Revision: D24917140
fbshipit-source-id: 1f39b15c91c1f90f371baf03d97f03d00d8798ea
Summary:
The NodeTypes variants and the EdgeType variants generated from them were getting rather long which makes them hard to work with.
This change shortens them by:
- changing BonsaiChangeset* to Changeset (this is also more consistent with the changeset blobstore keys)
- updating Bonsai*Mapping to *Mapping (except for BonsaiHgMapping, where it disambiguates from HgBonsaiMapping)
Due to the way walker output is sorted it made more sense to do these in one diff rather than two.
Reviewed By: markbt
Differential Revision: D24864061
fbshipit-source-id: dbd395a89be828ac97cf056f03787097d8f1491d
Summary: The test wasn't compiling due to these 3 missing headers, add them.
Reviewed By: genevievehelsel
Differential Revision: D24997597
fbshipit-source-id: 6e3be0763362e41be138c670dc88a63bd9e88024
Summary:
We've seen some hangs with http 2 in lfs. Switching to http 1.1 seems
to fix it. Let's make this configurable so we can tweak this if we see it in
edenapi. For now we continue to default to http 2.
Reviewed By: krallin
Differential Revision: D24901201
fbshipit-source-id: 9806e2c37fa299e4bd381ebdcb17d00800408de3
Summary:
To support the ability for Mercurial to talk to Mononoke directly while going
through an HTTP proxy, we need to be able to accept HTTP prefixed requests.
This adds the support for Mononoke side. Mercurial client will be added in
subsequent diff.
Reviewed By: krallin
Differential Revision: D24989335
fbshipit-source-id: 597eaa974cea661332967e34abc80b2b609b94ff
Summary:
This is a preparatory phase to make the refcount usable on Windows. For more
details, see D24716801 (e50725d2cb)
Reviewed By: chadaustin
Differential Revision: D24764568
fbshipit-source-id: 1e8c6ab00d4c1ec79c347fd5ae7167b2ce1dff68
Summary:
The skip_on_mode_mac argument to cpp_unittest doesn't exist on Windows, and to
be consistent with the rest of the code, we can simply ifdef the code that
either doesn't compile, or doesn't run.
Note: For some reason the accessIncrementsAccessCount test fails when running
on Windows, but only if running after accessAddsProcessToProcessNameCache...
Reviewed By: chadaustin
Differential Revision: D24496450
fbshipit-source-id: fe18fe1d791a27fbe4bd03bd3e8c811feeb23f5f
Summary:
This is what `eden debug` shows:
```
modified (m, a, t, e, r, i, a, l, i, z, e, d)
    Enumerate all potentially-modified inode paths
```
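A plausible sketch of how the garbled output arises (hypothetical helper; the point is that a bare str where a list of aliases is expected is iterated character by character):

```python
def format_aliases(aliases):
    # joins whatever iterable of alias strings it is given
    return "(" + ", ".join(aliases) + ")"

# intended: a list containing one alias
assert format_aliases(["materialized"]) == "(materialized)"

# bug: a str is itself an iterable of 1-character strings
assert format_aliases("materialized") == "(m, a, t, e, r, i, a, l, i, z, e, d)"
```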
Reviewed By: chadaustin
Differential Revision: D24908570
fbshipit-source-id: 62e91b6f0212c70de4bb1705539d3674e6e000d9
Summary:
On Windows, `eden debug file_stats` would crash due to the fileSize not being
filled. Since computing the file size is just a matter of calling stat(2) on
the file, let's simply do that.
Reviewed By: chadaustin
Differential Revision: D24908430
fbshipit-source-id: 07ffb97ada15a07565bea397b436fd21d09b5565
Summary:
We had a bug: if two files were reverted and then we try to amend one of them,
mercurial will actually amend both of them.
Looks like the problem was caused by the "Prune files reverted by amend" block.
Previously this block considered every file that was changed in the commit we
are about to amend and compared it with the working copy. If a file is the same
in the commit we are about to amend and in the working copy, then it will be
amended as well.
This diff attempts to fix it by considering only files that were selected for
amending.
Reviewed By: DurhamG
Differential Revision: D24949727
fbshipit-source-id: cf6cb95af3f67ec769e8a58db3b829945133b830
Summary:
We have an edge case - if we reverted two files and then we try to amend only a
single one, then both of them will be amended.
This diff adds a test for this and other edge cases. The next diff will fix it.
Reviewed By: DurhamG
Differential Revision: D24949726
fbshipit-source-id: c5c53de1d67f161efa8564f89127e61ac2f28ac9
Summary:
`BonsaiChangesets` are immutable. Rather than recomputing their changeset ids
every time (which involves cloning, serializing, and hashing the changeset),
instead store it alongside the changeset data, computing it once at the point
the changeset is frozen.
When loading the changeset, we already know the id, so there's no need to
compute it at all.
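The freeze-time caching pattern, sketched in Python (sha256 stands in for the real bonsai hashing; names are illustrative):

```python
import hashlib

class BonsaiChangeset:
    """Immutable once constructed; the id is computed exactly once."""

    def __init__(self, data: bytes):
        self._data = data
        # computed at "freeze" time, not on every access
        self._id = hashlib.sha256(data).hexdigest()

    def get_changeset_id(self) -> str:
        return self._id  # no re-serialization, no re-hashing

cs = BonsaiChangeset(b"commit payload")
assert cs.get_changeset_id() == hashlib.sha256(b"commit payload").hexdigest()
# repeated calls return the cached value
assert cs.get_changeset_id() == cs.get_changeset_id()
```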
Reviewed By: farnz
Differential Revision: D24951230
fbshipit-source-id: 5350e94eb6ea799a89ced2a211baa657a06b83d0
Summary:
When we streamclone, we snapshot the revlogs under a lock, then we start
sending. That works fine, unless we have a file whose size changes during
the sending phase. This can happen if it's promoted from a single `.i` to a
`.i` and a `.d`.
When that happens, the clone fails (the client reports it received a bunch of
corrupted data because it starts interpreting parts of files as inputs). Since
the breakage is also confusing client side, I updated the server side to also
assert that it's sending what it thinks it's sending.
Reviewed By: DurhamG
Differential Revision: D24958885
fbshipit-source-id: a0349c651b7cb63ab27546adf9944e7fba63a95d
Summary: This is unused, except in tests, so let's just remove it.
Reviewed By: chadaustin
Differential Revision: D24930011
fbshipit-source-id: cb132962e1dff9d12ce12e7eb75bd34a026c58b7
Summary:
As of right now, opendir is the most expensive callback, due to fetching the sizes
for all the files in a directory. This strategy however breaks down when
timeouts are added as very large directories would trigger the timeout, not
because EdenFS is having a hard time reaching Mononoke, but because of
bandwidth limitation.
To avoid this issue, we need to have a per-file timeout and thus makes opendir
just trigger the futures, but not wait on them. The waiting bit will happen
at readdir time which will also enable having a timeout per file.
The one drawback to this is that the ObjectFetchContext that is passed in by
opendir cannot live long enough to be used in the size future. For now, we can
use a null context, but a proper context will need to be passed in, in the
future.
Reviewed By: wez
Differential Revision: D24895089
fbshipit-source-id: e10ceae2f7c49b4a006b15a34f85d06a2863ae3a
Summary:
The rest of the non-debug commands use - instead of _.
Let's flip the prefetch profiles CLI to be consistent.
Reviewed By: genevievehelsel
Differential Revision: D24910172
fbshipit-source-id: a5f18a9c9d5fb4ef9417f14ef9d053cdc1599d76
Summary:
The second line of the summary got cut off due to a bad multiline comment
signals_oops
Reviewed By: wez
Differential Revision: D24909886
fbshipit-source-id: 844778e7d47a2b7b413fdd0c4fa0ef71dec9dadb
Summary:
`eden prefetch` can trigger a bunch of wasted network requests to fetch metadata
when we are fetching files anyway. (These network requests are wasted since we
fetch the file contents, and most of them are throttled on sandcastle anyway.)
So let's skip metadata prefetching during eden prefetches.
Reviewed By: genevievehelsel
Differential Revision: D24658066
fbshipit-source-id: f8a32807a4e238222158f100cdd5ffa1b92fd833
Summary: Fix the walker build. Had a land race on non-conflicting lines.
Reviewed By: farnz
Differential Revision: D24953152
fbshipit-source-id: b3d43d745242542ba1a6df9ba730460112bee6dc
Summary:
This diff makes it possible to rate limit public or infinitepush pushes for individual repos based on the total number of file changes across all commits accepted in the given sliding window.
## Why support infinitepush rate limiting?
We want to hit the same servers with infinitepush and public traffic, yet we want to avoid the situation when a spike in infinitepush pushes (like after a rebase of a huge codemod stack) makes our servers incapable of serving public writes.
## Why support public rate limiting?
I don't think there's any reason to do it, but it's essentially free, so why not.
## Why per repo rather than cumulative?
If we only rate-limit cumulative traffic, a single misbehaving repo can fill in the quota and prevent other repos from accepting any traffic even with small limits.
Reviewed By: mitrandir77
Differential Revision: D24871837
fbshipit-source-id: 6da36794b2fdddd70f1a54e0afeaa4c49065a98d
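A minimal Python sketch of the per-repo sliding-window idea (hypothetical names; the real implementation lives in Mononoke's Rust push handling). Note it compares `count + bump` against the limit, matching the behavior-change note a few commits below:

```python
import time
from collections import deque

class PushRateLimiter:
    """Per-repo sliding-window limit on total accepted file changes."""

    def __init__(self, window_secs, max_file_changes):
        self.window = window_secs
        self.limit = max_file_changes
        self.events = {}  # repo -> deque of (timestamp, file_change_count)

    def allow(self, repo, file_changes, now=None):
        now = time.monotonic() if now is None else now
        q = self.events.setdefault(repo, deque())
        # Drop events that fell out of the sliding window.
        while q and q[0][0] <= now - self.window:
            q.popleft()
        count = sum(n for _, n in q)
        # Compare count + bump against the limit.
        if count + file_changes > self.limit:
            return False
        q.append((now, file_changes))
        return True
```

Because the deques are keyed per repo, a spike in one repo (e.g. a huge codemod stack) cannot exhaust the quota of another.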
Summary: Step into history via UnodeFile parents to check they are valid
Reviewed By: markbt
Differential Revision: D24838456
fbshipit-source-id: 3b19973f9f7da6e502595b4ffa33a031e1df03ab
Summary: Check that the UnodeFile's link is good by stepping to it.
Reviewed By: markbt
Differential Revision: D24838458
fbshipit-source-id: c6c8ef5dc580bb591f484df659a76f0ba43a68e6
Summary: add UnodeFile to the walker graph to complement UnodeManifest, along with edges from UnodeFile to FileContent
Reviewed By: ikostia
Differential Revision: D24838460
fbshipit-source-id: 4bc860a65c5489ba9b2bff4582967d9db6181f25
Summary: When doing a deep walk step into history via the unode's linked changeset.
Reviewed By: ikostia
Differential Revision: D24838459
fbshipit-source-id: cf2196f95e7b6bf3c3ba2cd1c44fef748a11597f
Summary: add UnodeManifest support to walker so they can be scrubbed
Reviewed By: mitrandir77
Differential Revision: D24604174
fbshipit-source-id: c60a020802fae109005263393ea5618f22f21888
Summary: Add bonsai to unode manifest mapping to the walker so we can scrub from bonsais
Reviewed By: mitrandir77
Differential Revision: D24604188
fbshipit-source-id: d98938c5ec640c479007b59163899c16d0e3fb9d
Summary:
Track ChangesetInfo derivation in the same way as Fsnode. This fixes repeated visits to ChangesetInfo.
As part of this, the matching in WalkState::needs_visit is made exhaustive, as the default match had previously been returning true for ChangesetInfo nodes.
Reviewed By: mitrandir77
Differential Revision: D24838455
fbshipit-source-id: 33b8201984b294a1560da104b2cc9c65849b9297
Summary: This is in preparation to prevent marking visited for ChangesetInfo if it is not derived yet.
Reviewed By: mitrandir77
Differential Revision: D24838453
fbshipit-source-id: 0262b7f28754724bbfcc2407b4e649a90175cbcc
Summary: The old test counted all types; make the test case check just the changeset_info derived node types for clarity.
Reviewed By: mitrandir77
Differential Revision: D24838457
fbshipit-source-id: 0ba96fd640e700f605521cf066318a89570ee71c
Summary:
Adding automatically generated derived_xxxx node groups so that less typing is
needed and we're also checking the nodes are mapped correctly to derived data types.
Reviewed By: mitrandir77
Differential Revision: D24838738
fbshipit-source-id: 2bc8ff03a82c5d18f749affba2e67d214fb7ace7
Summary: This allows us to use -i bonsai instead of -i Bookmark -i BonsaiChangeset, which is a bit shorter
Reviewed By: mitrandir77
Differential Revision: D24838454
fbshipit-source-id: a758ad069af36fb1d1301e162bee822988cab07b
Summary: All the node types support FromStr, so we can generate NodeType::parse_node() rather than hand-implementing it.
Reviewed By: mitrandir77
Differential Revision: D24711372
fbshipit-source-id: 24e27e9cdda078c6dc66ac839cb3cfed6e93f269
Summary:
Implement FromStr for BookmarkName so we can use it to handle bookmarks
more uniformly with the other types in the walker.
Reviewed By: mitrandir77
Differential Revision: D24725786
fbshipit-source-id: e7eb7ece4a4bdc5dfd91f253f0383829c4ecc73b
Summary: Refactor from non-FromStr node parsing to FromStr, make it consistent with other node keys.
Reviewed By: mitrandir77
Differential Revision: D24711374
fbshipit-source-id: 84200b781bfad0f860acd8aecb95ff238490b92d
Summary: use PathKey for parsing of Node::HgFileNode in walker.
Reviewed By: ikostia
Differential Revision: D24711375
fbshipit-source-id: 4fe5887ba44ca9fca1dde54eaa75b30114b3b4b8
Summary: add PathKey newtype to Node so we can implement FromStr and use it in parsing for HgManifest
Reviewed By: mitrandir77
Differential Revision: D24711371
fbshipit-source-id: a9879f6d2e16eb54b2ca7af4e812a4f031c9e584
Summary: Add UnitKey newtype to the walker so that we can implement FromStr; this is leading up to all node keys supporting from_str, at which point I can generate NodeType::parse_node.
Reviewed By: mitrandir77, ikostia
Differential Revision: D24711376
fbshipit-source-id: aa4e26eb8e9206673298b632a079d2cc66d152ee
Summary: This is mostly a slight refactoring to help code reuse. However, there's a small behavior change as well (which I think is acceptable): before we compared `count` vs `max_value`, and now we compare `count + bump` vs `max_value`.
Reviewed By: krallin
Differential Revision: D24871175
fbshipit-source-id: 94e53ff2c05b4f9b236473c7e4b6d78229b64d53
Summary: Now that `derive03` is the only version available, rename it to `derive`.
Reviewed By: krallin
Differential Revision: D24900106
fbshipit-source-id: c7fbf9a00baca7d52da64f2b5c17e3fe1ddc179e
Summary:
Now that all code is using `BonsaiDerived::derive03`, we can remove the old
futures version `BonsaiDerived::derive`.
Reviewed By: krallin
Differential Revision: D24900108
fbshipit-source-id: 885d903d4a45e639e4d44e19b5d70fac26bce279
Summary:
The wirepack sending code builds up the entire history blob in memory
before sending it. Previously we did this by appending to the string. In Python
2 this was fast, in Python 3 this is n^2 and n can be 100k+ in cases of long
history.
Let's switch to list+join.
Reviewed By: xavierd
Differential Revision: D24933183
fbshipit-source-id: 5c36d7868e7c64a2292bd68ec2ffb584d85dd98f
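The append-vs-join change above can be sketched as follows (hypothetical function names; the real code is in the hg wirepack sender). Under Python 3, repeated `bytes +=` may copy the whole accumulated blob each time, which is quadratic when history has 100k+ entries; accumulating parts in a list and joining once is linear:

```python
def build_history_blob_slow(entries):
    # O(n^2) in the worst case under Python 3: each += may copy the
    # entire accumulated blob so far.
    blob = b""
    for entry in entries:
        blob += entry
    return blob

def build_history_blob(entries):
    # O(n): accumulate parts in a list and join once at the end.
    parts = []
    for entry in entries:
        parts.append(entry)
    return b"".join(parts)
```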
Summary:
osxfuse is rebranding as macfuse in 4.x.
That has ripple effects through how the filesystem is mounted and shows up in
the system.
This commit adjusts for the new opaque and undocumented mount procedure and
speculatively updates a couple of other code locations that were sensitive to
looking for "osxfuse" when examining filesystems.
Reviewed By: genevievehelsel
Differential Revision: D24769826
fbshipit-source-id: dab81256a31702587b0683079806558e891bd1d2
Summary:
We got a [report](https://fb.workplace.com/groups/scm/permalink/3379140858802177/) that a new hg build fails with an error because it can't xor None types.
```
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: PyErr {
ptype: <type 'exceptions.TypeError'>, pvalue: Some("unsupported operand type(s)
for ^: 'NoneType' and 'NoneType'"), ptraceback: Some(<traceback object at
0x00000249BB158248>) }',
```
Full stack trace is here
{P149395441}
This seems likely to be related to the diff I landed recently - D24725902 (7b1798be37).
However, it's unclear why it was affecting only Windows, because I couldn't
repro it on Linux.
It turned out that we have the experimental.treematcher option disabled on
Windows, which causes hg to use includematcher instead of treematcher. And
includematcher returns either None or a BytesMatchObject, which are impossible
to xor.
This diff fixes it by converting the values to bool first, and it also adds a
test for it.
Reviewed By: singhsrb
Differential Revision: D24918192
fbshipit-source-id: 1359e8b97d26d3b1a4795b7b3d4cfa3d6d4ae843
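The fix can be illustrated in plain Python (hypothetical helper name): `None ^ None` raises a TypeError, so normalizing both operands to bool before xor-ing makes the operation total:

```python
def xor_matchers(a, b):
    # includematcher may yield None (or a match object) for either side;
    # `None ^ None` raises TypeError, so convert to bool before xor-ing.
    return bool(a) ^ bool(b)
```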
Summary:
It would be nice to see if there was a fsck on startup, the duration of the fsck, and if it was able to repair all of the problems or not. This diff adds external logging for fsck runs on daemon start.
duration: how long the fsck took
success: false if it was not able to repair errors; true if it repaired all errors or didn't have to repair at all
attempted_repair: true if we found problems, false otherwise
Reviewed By: chadaustin
Differential Revision: D24774065
fbshipit-source-id: 2fa911652abec889299c74aaa2d53718ed3b4f92
Summary:
To ensure other parts of Mononoke can fully read new blobs as soon as they've
been written, ensure their buffers are flushed and they've been synced to disk
before returning from the blob put.
Reviewed By: krallin
Differential Revision: D24921657
fbshipit-source-id: df401470aaeeebcdc9d237271b40a399115ba25f
Summary:
We've seen http 2 potentially causing hangs for users. Let's make this
configurable for lfs, so we can disable it and see if things get fixed.
Reviewed By: krallin
Differential Revision: D24898322
fbshipit-source-id: dc7842c0247dc6b9590a1f160076b17788aab1b9
Summary:
As discussed in a group thread (see link below), HTTP 2 may be causing
hangs for users. Let's start by making the http-client configurable. In
subsequent diffs we'll make edenapi and lfs configurable as well.
Reviewed By: krallin
Differential Revision: D24898323
fbshipit-source-id: f0035a1b8df3cee626ebe519e9e99358c1b3f043
Summary:
This isn't code that compiles, but the convention in Rust is that code blocks
in documentation are doctests unless annotated otherwise, so when tested with
Cargo, those fail.
This fixes that.
Reviewed By: farnz
Differential Revision: D24917364
fbshipit-source-id: 62fe11700ce561c13dc5498e01d15894b17b5b22
Summary:
Thread Pool fails with py3 hg build. Let's replace with a loop.
Most of the usage for the command will be for a single head anyway.
Reviewed By: krallin
Differential Revision: D24902167
fbshipit-source-id: c7af46d0d63ddd074c98788bf55520ae3f2550b8
Summary: As we are making a directory structure inside the bucket anyway, it would be useful to combine keys per repo.
Reviewed By: ahornby
Differential Revision: D24884248
fbshipit-source-id: 85efeb7009a9d211381319caa4e72aa3687c51ee
Summary:
Transfers the iddag flat segments along with the head_id that should be used to
rebuild a full-fledged IdDag. It also transfers idmap details. In the current
version it only transfers universal commit mappings.
Reviewed By: krallin
Differential Revision: D24808329
fbshipit-source-id: 4de9edcab56b54b901df1ca4be5985af2539ae05
Summary:
Under this configuration, SegmentedChangelog Dags (IdDag + IdMap) are always
downloaded from saves. There is no real state kept in memory.
It's a simple configuration, and somewhat flexible with tweaks to blobstore
caching.
Reviewed By: krallin
Differential Revision: D24808330
fbshipit-source-id: 450011657c4d384b5b42e881af8a1bd008d2e005
Summary:
Constructs and returns `CloneData<ChangesetId>`. This object can then be used
to bootstrap a client dag that speaks bonsai commits.
Short term we are going to be using this data in the Mercurial client which
doesn't use bonsai. Hg MononokeRepo will convert it.
Long term we may decide that we want to download cached artifacts for
CloneData. I don't see an issue getting there, I see this as a valid path
forward that cuts down on the configuration required to get to the cached
artifacts. All that said, I think that using whatever dag is available in
memory would be a viable production option.
Reviewed By: krallin
Differential Revision: D24717915
fbshipit-source-id: 656924abb4bbfa1a11431000b6ca6ed2491cdc74
Summary: The SegmentedChangelogManager abstracts saving and loading Dags. This is currently used in the tailer and seeder processes. It will also be used to load dags while the server is running.
Reviewed By: krallin
Differential Revision: D24717925
fbshipit-source-id: 30dff7dfc957f455be6cf733b20449c804511b43
Summary:
XLOG_EVERY_MS doesn't use the 3rd argument as a format string; it just
prints it verbatim. To format it, we need to use fmt::format.
Reviewed By: genevievehelsel
Differential Revision: D24906819
fbshipit-source-id: 7d45787301086fb87dd8f5d478af8007df82c0b6
Summary:
The move constructor needs to be noexcept and should also initialize the
members in the right order.
Reviewed By: genevievehelsel
Differential Revision: D24874304
fbshipit-source-id: a3db5dcdab1397b872b8f13ec5c7fd45baad5e6f
Summary:
The components iterator returns pieces of the original path, so using a
reference makes little sense and the compiler complains.
Reviewed By: genevievehelsel
Differential Revision: D24873851
fbshipit-source-id: 40d414dcb4a0539167ab4760dfc0095af8245b3a
Summary:
The documentation for PrjFillDirEntryBuffer states that if no entries could be
added, the ERROR_INSUFFICIENT_BUFFER error needs to be returned as-is; the
code didn't do that.
Reviewed By: chadaustin
Differential Revision: D24764566
fbshipit-source-id: d6411822eac71b2f9aa7cf07858d09115767cc59
Summary:
This is the plumbing to allow us to skip Metadata prefetching during eden
prefetches. These can trigger a bunch of wasted network requests
when we are fetching files anyway. (These network requests are wasted since we
fetch the file contents, and most of them are throttled on sandcastle anyway.)
We won't necessarily always want to skip metadata prefetching; we will still want it
for the watchman queries, but for `eden prefetch` we will probably want to skip it. This
is why we are making it an option in the GlobParams.
Reviewed By: chadaustin
Differential Revision: D24640754
fbshipit-source-id: 20db62d4c0e59fe17cb6535c86ac8f1e3877879c
Summary:
We will start opting-in and rolling prefetch profiles mvp out to users soon.
This is a switch to allow users to opt-in, us to gradually rollout, and to
quickly turn prefetch profiles off if this causes issues for users.
Reviewed By: genevievehelsel
Differential Revision: D24803728
fbshipit-source-id: 0456f2a733958b495e5d84f7177c99f3ef481f57
Summary:
Allow users of `tests_utils` to create paths that are not `String`, by supporting any type
that can be converted into `MPath`.
Reviewed By: StanislavGlebik
Differential Revision: D24887002
fbshipit-source-id: 47ad567507185863c1cfa3c6738f30aa9266901a
Summary:
Add type definitions for skeleton manifests.
Skeleton manifests are manifests that correspond to the shape of the repository (which directories and files exist), but do not include anything relating to the content. This means they only change when files are added or deleted.
They are used for two purposes:
* To record the count of descendant directories for each directory. This will be useful for limiting parallelism when doing an ordered traversal of a manifest. The descendant directory count is a good estimate of the amount of work required to process a directory.
* To record whether a directory, or any of its subdirectories, contains a case conflict. This will be used to enforce case-conflict requirements in repos.
Differential Revision: D24787535
fbshipit-source-id: 7cb92546ed80687d5b98a6c00f9cd10896359b8d
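The two purposes above can be sketched with a toy skeleton-manifest builder in Python (hypothetical representation, not Mononoke's actual types): each directory records its descendant directory count and whether it or any subdirectory contains a case conflict.

```python
def skeleton(tree):
    """Compute (descendant_dir_count, has_case_conflict) for a directory.

    `tree` maps entry names to a sub-dict (directory) or None (file).
    Directory counts estimate traversal work; case-conflict flags let a
    server reject conflicting paths without looking at file contents.
    """
    lowered = [name.lower() for name in tree]
    conflict = len(lowered) != len(set(lowered))
    dir_count = 0
    for sub in tree.values():
        if isinstance(sub, dict):
            sub_dirs, sub_conflict = skeleton(sub)
            dir_count += 1 + sub_dirs
            conflict = conflict or sub_conflict
    return dir_count, conflict
```

Since the skeleton ignores content, it only changes when entries are added or removed, exactly as the summary notes.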
Summary:
On Windows, /bin/sh doesn't exist. To spawn a command in a shell, we need to
use Powershell.
Reviewed By: genevievehelsel
Differential Revision: D24864355
fbshipit-source-id: 3bcf630a90e644a31ff9db8fea9891476cad641d
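A Python sketch of the platform switch described above (an analogy with assumed PowerShell flags; the actual change is in EdenFS's C++ code):

```python
import subprocess
import sys

def run_in_shell(command):
    # On Windows there is no /bin/sh, so spawn the command via PowerShell;
    # elsewhere, fall back to the usual POSIX shell.
    if sys.platform == "win32":
        argv = ["powershell.exe", "-NoProfile", "-Command", command]
    else:
        argv = ["/bin/sh", "-c", command]
    return subprocess.run(argv, capture_output=True, text=True)
```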
Summary:
While working on notifications, I struggled a bit to get them working and thought
the special quoting on Windows didn't work as expected. It turns out the error
was cmd-related, and using a modern shell (PowerShell) fixed it.
Having a test for the quoting is a good idea nonetheless, so let's have one.
Reviewed By: genevievehelsel
Differential Revision: D24864357
fbshipit-source-id: 6b1ac50f3b7b1ef469378d5de21f56c24c0945f9
Summary:
BE: remove old subscriptions to save resources in IceBreaker. The client code will recreate them anyway if missing, but cleaning up will help us reduce the number of unused subscriptions.
A classic example: the repo opsfiles or configerator may be needed once and then the user doesn't use them again.
Another example: switching workspaces failed, which could result in subscriptions not being cleaned up properly.
Reviewed By: markbt
Differential Revision: D24859931
fbshipit-source-id: 6df6c7e5f95859946726e04bce8bc8f3ac2d03df
Summary:
Those are the tweaks I've made to make `--config devel.bundle2.debug` more
verbose to aid my investigation. This might help somebody else in the
future, so let's commit it:
* added "params" decoding to debugsendbundle
* added "message" to `error:unsupportedcontent` part (we already send it with
some other error parts)
Reviewed By: sfilipco
Differential Revision: D24840405
fbshipit-source-id: b25d5823d05f3d50230c078e8db459dc66256707
Summary:
Generate walker EdgeType::outgoing_type() to reduce boilerplate.
When defining edges, if the edge variant and destination node are the same, no extra parameters are needed. If they are different, the node type of the destination is passed in parens, e.g. BonsaiParent(BonsaiChangeset).
Reviewed By: StanislavGlebik
Differential Revision: D24687828
fbshipit-source-id: 1616c786d78242c2b3a8c7a1ba58cc1433ea0a26
Summary:
This function is useful in Mononoke to compute the universal commit idmap
that is required for clone.
Reviewed By: quark-zju
Differential Revision: D24808327
fbshipit-source-id: 0cccd59bd7982dd0bc024d5fc85fb5aa5eafb831
Summary:
`flat_segments` are going to be used to generate CloneData. These segments will
be sent to a client repository and are going to bootstrap the iddag.
Reviewed By: quark-zju
Differential Revision: D24808331
fbshipit-source-id: 00bf9723a43bb159cd98304c2c4c6583988d75aa
Summary: This is the object that will be used to bootstrap a Dag after a clone.
Reviewed By: quark-zju
Differential Revision: D24808328
fbshipit-source-id: 2c7e97c027c84a11e8716f2e288500474990169b
Summary:
The goal is to reuse the functionality provided by AssignHeadOutcome for clone
purposes.
Reviewed By: quark-zju
Differential Revision: D24717924
fbshipit-source-id: e88f21ee0d8210e805e9d6896bc8992009bd7975
Summary:
The EdenFS codebase uses folly/logging/xlog to log, but we were still relying
on glog for the various CHECK macros. Since xlog also contains equivalent CHECK
macros, let's just rely on them instead.
This is mostly codemodded + arc lint + various fixes to get it compile.
Reviewed By: chadaustin
Differential Revision: D24871174
fbshipit-source-id: 4d2a691df235d6dbd0fbd8f7c19d5a956e86b31c
Summary:
There were `eden top` issues on MacOS that I thought had been fixed a while ago,
but it doesn't look like we caught them all. This should catch the remaining bug
in `eden top`.
Reviewed By: genevievehelsel
Differential Revision: D23743199
fbshipit-source-id: ca66748c7a8a26062caf934c8f2c1fd13d9ae69e
Summary:
In order to allow request to timeout to display notifications to the user, the
`within` future method will need to be called on the various callback futures.
Unfortunately, once the timeout expires, the underlying future isn't cancelled
and stopped, but the unique pointer holding the context will be reclaimed.
Whenever the future actually completes, it will try to use an invalid pointer,
crashing EdenFS.
To solve this, switch to using a shared_ptr and copy it in the right places so
it will only be freed once all futures holding a reference to it will be gone.
I also took the opportunity to reduce the nesting a bit to make the code more
readable.
Reviewed By: kmancini
Differential Revision: D24809647
fbshipit-source-id: 987d6e5763106fabc6bed3ea00d28b129b5285a1
Summary: These errors are Win32 errors, we need to wrap them into a HRESULT.
Reviewed By: chadaustin
Differential Revision: D24809646
fbshipit-source-id: 9f42b9d0c43474967dc26cb2c14cbee463768b79
Summary: It is possible that the hash of a newly created bonsai_changeset will be different from what is in the prod repo. In this case, let's fetch the bonsai from prod, to make the backup repo consistent with prod.
Reviewed By: StanislavGlebik
Differential Revision: D24593003
fbshipit-source-id: 70496c59927dae190a8508d67f0e3d5bf8d32e5c
Summary: Use create_graph to generate the EdgeType enum in the walker, to reduce the boilerplate needed when adding new derived node and edge types to the walker
Differential Revision: D24687827
fbshipit-source-id: 63337f4136c649948e0d3039529965c296c6d67e
Summary: Also use the 0.3 compatible .return_remainder in unbundle.
Reviewed By: ikostia
Differential Revision: D24729464
fbshipit-source-id: ede5cc60f4b872a3b968cf14bb0e2c5d9b85c242
Summary:
When finishing a hash computation for a blob, we currently call `format!` to allocate
and format the error string before calling `.expect` on the `write_all` result.
In practice this will never fail, so this is wasted work. From experimentation on
the playground, the Rust compiler does not appear to be smart enough to optimize this
away, either.
A small optimization, but let's get rid of this by calling panic directly, and
only in the failure path.
Reviewed By: farnz
Differential Revision: D24857833
fbshipit-source-id: e3e35b402ca3a9f6c9d8fbbd758cc486ef1c5566
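The eager-vs-lazy formatting point translates directly to Python (hypothetical names; the original change swaps Rust's `format!` + `.expect` for a panic in the failure path). In the eager version, the message expression is evaluated before the checking function is even called, so the formatting work is paid on every success:

```python
import hashlib

def check(cond, msg):
    if not cond:
        raise ValueError(msg)

def finish_hash_eager(data, expected):
    # The message is built before check() runs, so the allocation and
    # formatting happen even on the (overwhelmingly common) success path.
    msg = "hash mismatch for blob of {} bytes".format(len(data))
    check(hashlib.sha256(data).hexdigest() == expected, msg)

def finish_hash_lazy(data, expected):
    # Only pay for formatting in the failure path.
    if hashlib.sha256(data).hexdigest() != expected:
        raise ValueError("hash mismatch for blob of {} bytes".format(len(data)))
```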
Summary:
Adds `--reversefill` mode to bookmarks filler that fetches bookmark updates
from the queue and syncs them to infinitepush database.
Reviewed By: farnz
Differential Revision: D24538317
fbshipit-source-id: 5ac7ef601f2ff120c4efd8df08a416e00df0ceb9