Commit Graph

1164 Commits

Author SHA1 Message Date
Stanislau Hlebik
664c824764 mononoke: thrift serialization/deserialization of ChangesetEntry
Summary:
We have plans to cache changeset entries and store that cache in the
blobstore. The main reason is to speed up Mononoke's revsets and, in turn, the
getbundle wireproto request.

To cache the entries we first need to serialize them. Let's use thrift
serialization for that.
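
As a hedged illustration of the intended cache flow - using a simplified stand-in struct and serde_json in place of the actual thrift-generated types and compact serialization:
```
use serde::{Deserialize, Serialize};

// Hypothetical, simplified stand-in for ChangesetEntry; the real type comes
// from thrift-generated code and holds changeset ids, parents, etc.
#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct ChangesetEntrySketch {
    repo_id: i32,
    cs_id: String,
    parents: Vec<String>,
    generation: u64,
}

fn main() -> Result<(), serde_json::Error> {
    let entry = ChangesetEntrySketch {
        repo_id: 0,
        cs_id: "deadbeef".to_string(),
        parents: vec!["cafebabe".to_string()],
        generation: 42,
    };

    // Serialize before writing to the blobstore cache...
    let bytes = serde_json::to_vec(&entry)?;
    // ...and deserialize on a cache hit.
    let roundtripped: ChangesetEntrySketch = serde_json::from_slice(&bytes)?;
    assert_eq!(entry, roundtripped);
    Ok(())
}
```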

Reviewed By: lukaspiatkowski

Differential Revision: D9738637

fbshipit-source-id: ba771545de9a955956acb6d169ee7bc424ef271b
2018-09-13 06:08:27 -07:00
Stanislau Hlebik
786e58c2e2 mononoke: move test utils to a separate library
Summary: It will be useful outside of the pushrebase library as well.

Reviewed By: farnz

Differential Revision: D9789811

fbshipit-source-id: c851df8a8cce8b1c26daa09b7fe2ffa40f290160
2018-09-13 06:08:27 -07:00
Stanislau Hlebik
99ecff8586 mononoke: handle pushvars during pushrebase
Summary:
There were quite a lot of pushes that use pushvars.
This diff adds parsing for them.

After I added pushvars parsing, the push started to fail, but that seems to be
because it is a flat manifest push. Having pushvars parsing probably won't hurt
regardless.

Reviewed By: farnz

Differential Revision: D9751962

fbshipit-source-id: 49796e91edfad76fb022a2e0fc049a79859de1b7
2018-09-13 06:08:27 -07:00
Stanislau Hlebik
831e52a98c mononoke: do not generate Blob<Id> unnecessarily
Summary:
In `fetch_file_contents()` `blobstore_bytes.into()` converted the bytes to
`Blob<Id>`. That conversion calls `MononokeId::from_data()`, which does blake2
hashing. It turns out this causes big problems for the many large files that
getfiles can return.

Since this hash is not used at all, let's avoid generating it.
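
A minimal sketch of the idea with hypothetical stand-in types (not the real `BlobstoreBytes`/`FileContents` API); the point is simply to stop hashing the payload when the resulting id is never used:
```
// Hypothetical stand-ins for the real Mononoke types.
struct BlobstoreBytes(Vec<u8>);
struct FileContents(Vec<u8>);

// Toy hash so the sketch is self-contained; the real code used blake2.
fn content_hash(data: &[u8]) -> u64 {
    data.iter().fold(0u64, |h, &b| h.wrapping_mul(31).wrapping_add(b as u64))
}

// Before (sketch): converting through Blob<Id> hashed the whole payload,
// even though the id was never used afterwards.
fn contents_via_blob(bytes: BlobstoreBytes) -> FileContents {
    let _unused_id = content_hash(&bytes.0);
    FileContents(bytes.0)
}

// After (sketch): hand the bytes through directly.
fn contents_direct(bytes: BlobstoreBytes) -> FileContents {
    FileContents(bytes.0)
}

fn main() {
    // Same result either way, but the direct path skips hashing the payload.
    let before = contents_via_blob(BlobstoreBytes(vec![0u8; 1024]));
    let after = contents_direct(BlobstoreBytes(vec![0u8; 1024]));
    assert_eq!(before.0.len(), after.0.len());
}
```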

Reviewed By: jsgf

Differential Revision: D9786549

fbshipit-source-id: 65de6f82c1671ed64bdd74b3a2a3b239f27c9f17
2018-09-13 05:53:10 -07:00
Jeremy Fitzhardinge
75b83935f8 rust/netstring: move encode and decode into separate modules.
Summary: Pure code motion

Reviewed By: farnz

Differential Revision: D9780345

fbshipit-source-id: dd743f4cf4a16712114af9a098b78aea02a2179d
2018-09-12 20:37:41 -07:00
Jeremy Fitzhardinge
48773d7c76 rust/netstring: convert from error-chain to failure
Summary: Use failure rather than error-chain for errors.

Reviewed By: StanislavGlebik

Differential Revision: D9780341

fbshipit-source-id: 4d41855093cf812e83b6c348a7499e85d9472daf
2018-09-12 20:37:41 -07:00
Jeremy Fitzhardinge
908adb73fa rust/netstring: refactor decoder into its own function
Summary:
Split the decoder out into its own function. This can handle partial results,
but the Decoder trait API cannot, so make sure the Decoder still only returns
complete results.
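
A self-contained sketch of that decode shape (illustrative only, not the crate's actual code): the inner function can report "need more input" with `None`, and a `Decoder`-style caller acts only on complete results:
```
// Decode one netstring ("<len>:<payload>,") from the front of `buf`.
// Returns Some((payload, bytes_consumed)) for a complete netstring,
// None if more input is needed, and Err on malformed input.
fn decode_netstring(buf: &[u8]) -> Result<Option<(Vec<u8>, usize)>, String> {
    // Find the ':' that terminates the length prefix.
    let colon = match buf.iter().position(|&b| b == b':') {
        Some(i) => i,
        None => return Ok(None), // length prefix not complete yet
    };
    let len: usize = std::str::from_utf8(&buf[..colon])
        .map_err(|_| "non-utf8 length".to_string())?
        .parse()
        .map_err(|_| "bad length".to_string())?;
    let total = colon + 1 + len + 1; // prefix + ':' + payload + ','
    if buf.len() < total {
        return Ok(None); // partial payload - wait for more bytes
    }
    if buf[total - 1] != b',' {
        return Err("missing trailing ','".to_string());
    }
    Ok(Some((buf[colon + 1..colon + 1 + len].to_vec(), total)))
}

fn main() {
    // Partial input: the decoder asks for more bytes instead of failing.
    assert_eq!(decode_netstring(b"5:hel").unwrap(), None);
    // Complete input: payload plus the number of bytes consumed.
    let (payload, used) = decode_netstring(b"5:hello,rest").unwrap().unwrap();
    assert_eq!(payload, b"hello".to_vec());
    assert_eq!(used, 8);
}
```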

Reviewed By: farnz

Differential Revision: D9780342

fbshipit-source-id: b2439cba95b1e42444adbf2ee4b6e3792703a188
2018-09-12 20:37:41 -07:00
Stanislau Hlebik
91dceba40a mononoke: add logic to do batch creation of bonsai changesets
Summary:
Profiling showed that since we insert objects into the blobstore sequentially,
it takes a lot of time for long stacks of commits. Let's do it in parallel.

Note that we are still inserting sequentially into the changesets table.
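
A rough sketch of the change's shape using futures' `join_all`, written with modern async syntax rather than the futures-0.1 code this diff actually touches; the blobstore put is a hypothetical stand-in:
```
use futures::future::join_all;

// Hypothetical stand-in for a blobstore put; the real one does network I/O.
async fn put_blob(key: String, value: Vec<u8>) -> Result<(), String> {
    println!("stored {} ({} bytes)", key, value.len());
    Ok(())
}

// Upload all blobs for a stack of commits concurrently instead of one by one.
async fn put_blobs_in_parallel(blobs: Vec<(String, Vec<u8>)>) -> Result<(), String> {
    let puts = blobs.into_iter().map(|(k, v)| put_blob(k, v));
    // join_all drives every put at once; the sequential version awaited each in turn.
    join_all(puts).await.into_iter().collect()
}

#[tokio::main]
async fn main() -> Result<(), String> {
    let blobs = (0..4)
        .map(|i| (format!("changeset{}", i), vec![0u8; 16]))
        .collect();
    put_blobs_in_parallel(blobs).await
}
```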

Reviewed By: farnz

Differential Revision: D9683037

fbshipit-source-id: 8f9496b97eaf265d9991b94243f0f14133f463da
2018-09-11 09:53:22 -07:00
Lukas Piatkowski
b68675f549 mononoke config: stop using "path" in manifold based repos
Summary:
The "path" in manifold blobrepo is used for logging, but it has been quite confusing with "fbsource" and "fbsource-pushrebase" to be logged in an identical way - both are "fbsource", because of the "path" config. Lets not use the "path" for logging, instead use the "reponame" from metaconfig repo.

In case we ever want to have two repos that are named the same (please don't) or have logging under a different name than "reponame" from config then we can add a proper optional "name" parameter, but for now we don't require this confusing feature.

Reviewed By: StanislavGlebik

Differential Revision: D9769514

fbshipit-source-id: 89f2291df90a7396749e127d8985dc12e61f4af4
2018-09-11 08:06:31 -07:00
Stanislau Hlebik
60ad77a0d4 mononoke: add pushrebase logging
Summary:
Let's log how long pushrebase takes, how many retry attempts it has made,
and how long it takes to generate the response to the client.

Reviewed By: farnz

Differential Revision: D9683036

fbshipit-source-id: 3ad57c2925bdceb3839cae1ff4215c3dd8cd0cc2
2018-09-11 06:06:22 -07:00
Stanislau Hlebik
5ec580570f mononoke: fix getfiles timeouts
Summary:
We had a lot of requests that took > 15 mins on Mononoke while taking a few
seconds on Mercurial. It turned out that hgcli doesn't play well with big chunks:
AsyncRead tries to allocate memory very inefficiently, and that causes huge
slowness (see T33775046 for more details).

As a short-term fix let's chunk the data on the server. Note that now we have
to make the getfiles request streamable and manually insert the size of the
request.
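
A self-contained sketch of the server-side chunking idea (the chunk size and framing here are hypothetical, not the real hgcli/getfiles wire format):
```
// Hypothetical chunk size; the real value is a server-side tuning knob.
const CHUNK_SIZE: usize = 64 * 1024;

// Split one large getfiles payload into a length prefix plus bounded chunks.
fn chunk_response(payload: &[u8]) -> Vec<Vec<u8>> {
    let mut frames = Vec::new();
    // The first frame carries the total size, since the data is now streamed.
    frames.push((payload.len() as u64).to_be_bytes().to_vec());
    for chunk in payload.chunks(CHUNK_SIZE) {
        frames.push(chunk.to_vec());
    }
    frames
}

fn main() {
    let file = vec![7u8; 200 * 1024]; // a 200 KiB file
    let frames = chunk_response(&file);
    // 1 size frame + ceil(200 KiB / 64 KiB) = 4 data frames
    assert_eq!(frames.len(), 1 + 4);
    assert!(frames.iter().skip(1).all(|f| f.len() <= CHUNK_SIZE));
}
```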

Reviewed By: lukaspiatkowski

Differential Revision: D9738591

fbshipit-source-id: f504cf540bc7d90e2cbebba9808455b6e89c92c6
2018-09-11 02:06:48 -07:00
Stanislau Hlebik
1ca1bc0d81 mononoke: fix buffer size in compression
Summary:
We were using an incorrect buffer size. It's *very* surprising that our servers
weren't continuously crashing. However, see the test plan - it really looks
like `LZ4_compressBound()` is the correct option here.
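
For reference, LZ4 documents the worst-case output size as `LZ4_compressBound(srcSize) = srcSize + srcSize/255 + 16`; sizing the destination buffer with that bound rather than the input size is what keeps incompressible input from overrunning the buffer. A tiny sketch:
```
// Worst-case compressed size for LZ4, mirroring the documented
// LZ4_compressBound() formula: srcSize + srcSize/255 + 16.
fn lz4_compress_bound(src_size: usize) -> usize {
    src_size + src_size / 255 + 16
}

fn main() {
    let input_len = 1 << 20; // 1 MiB of (possibly incompressible) data
    // Allocate the destination with the bound, not with input_len:
    // incompressible input can expand slightly, so input_len alone is too small.
    let dst = vec![0u8; lz4_compress_bound(input_len)];
    println!("input {} bytes -> destination buffer {} bytes", input_len, dst.len());
}
```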

Reviewed By: farnz

Differential Revision: D9738590

fbshipit-source-id: d531f32e79ab900f40d46b7cb6dac01dff8e9cdc
2018-09-10 09:23:50 -07:00
Lukas Piatkowski
47210da18f async-compression: re-add support for zstd decompression with warnings
Summary:
See the comment near "DecompressionType::OverreadingZstd" to see what it does.

Why does OverreadingZstd work for Mononoke's use case? Answer:

Because we use it in bundle2 parsing, which is already chunked by the outer Reader. This means that when we have a stream of bytes:
```
uncompressed -> compressed bundle2 -> uncompressed
```
thanks to chunking we extract the compressed part:
```
do_stuff(uncompressed)
ZstdDecoder(compressed bundle2)
do_stuff(uncompressed)
```
rather than
```
do_stuff(uncompressed)
ZstdDecoder(compressed bundle2 -> uncompressed)
```
So overreading doesn't hurt us here.

Reviewed By: StanislavGlebik

Differential Revision: D9700778

fbshipit-source-id: 70dd6f405ffa00fb981791aff25c60f60831ea6b
2018-09-07 09:53:25 -07:00
Pavel Aslanov
af69be4b3b case-conflict checking functions
Summary:
Adds case-conflict checking functions:
- `manifest + path` case
- `[path]` case

Reviewed By: StanislavGlebik

Differential Revision: D9700760

fbshipit-source-id: 582430f61bed1ae279dafbe7804a562d5b2ddf59
2018-09-07 09:06:17 -07:00
Jeremy Fitzhardinge
c4ece89763 mononoke: use Chain for errors
Summary:
Use .chain_err() where appropriate to give context to errors coming up from
below. This requires the outer errors to be proper Fail-implementing errors (or
failure::Error), so leave the string wrappers as Context.

Reviewed By: lukaspiatkowski

Differential Revision: D9439058

fbshipit-source-id: 58e08e6b046268332079905cb456ab3e43f5bfcd
2018-09-06 14:24:08 -07:00
Jeremy Fitzhardinge
2cfa682b33 mononoke: use err_downcast generally in mononoke
Summary: Cleans things up a bit, especially when matching Context/Chain.

Reviewed By: lukaspiatkowski

Differential Revision: D9439062

fbshipit-source-id: cde8727437f58b288bed9dfacb864bdcd7dea45c
2018-09-06 14:24:08 -07:00
Jeremy Fitzhardinge
82386b46f7 mononoke/apiserver: use err_downcast macros
Summary:
Use the err_downcast macros instead of manual downcasting. It doesn't make
a huge code-size difference in this case, but it's a little neater.

Reviewed By: kulshrax, fanzeyi

Differential Revision: D9405014

fbshipit-source-id: 170665f3ec3e78819c5c8a78d458636de253bb6f
2018-09-06 14:24:08 -07:00
Jeremy Fitzhardinge
4f7f38c1a0 rust/failure_ext: add .chain()/Chain
Summary:
Add a type to explicitly model a causal chain of errors, akin to
error_chain. This looks a lot like Context, but is intended to show the entire
stack of errors rather than deciding that only the top-level one is
interesting.

This adds a `ChainExt` trait, which adds a `.chain_err(OuterError)` method to
add another step to the causal chain. It is implemented for:
- `F` where `F: Fail`
- `Error`
- `Result<_, F>` where `F: Fail`
- `Result<_, Error>`
- `Future`/`Stream<Error=F>` where `F: Fail`
- `Future`/`Stream<Error=Error>`
- `Chain`

Using it is simple:
```
let res = something_faily().chain_err(LocalError::new("Something amiss"))?;
```
where `something_faily()` returns any of the above types.

(This is done by adding an extra dummy marker type parameter to the `ChainExt`
trait so that it can avoid problems with the coherence rules - thanks for the idea @[100000771202578:kulshrax]!)

Reviewed By: lukaspiatkowski

Differential Revision: D9394192

fbshipit-source-id: 0817844d283b3900d2555f526c2683231ca7fe12
2018-09-06 14:24:08 -07:00
Jeremy Fitzhardinge
b44055a6b3 rust/failure_ext: add err_downcast/err_downcast_ref macros
Summary:
Add a pair of macros to make downcasting errors less tedious:
```
let res = err_downcast! {
    err, // failure::Error
    foo: FooError => { println!("err is an FooError! {:?}", foo) },
    bar: BarError => { println!("err is a BarError! {:?}", bar) },
};
```

`err_downcast` takes a `failure::Error`, deconstructs it into one of the
desired types, and returns `Ok(match action)`, or returns it as `Err(Error)`
if nothing matches.

`err_downcast_ref` takes `&failure::Error` and gives a reference type. It
returns `Some(match action)` or `None` if nothing matches.

The error types are required to implement `failure::Fail`.

`err_downcast_ref` also matches each error type `E` as `Context<E>`.

Reviewed By: lukaspiatkowski

Differential Revision: D9394193

fbshipit-source-id: c56d91362d5bed8ab3e254bc44bb6f8a0eb376a2
2018-09-06 14:24:06 -07:00
Anastasiya Zhyrkevich
efb795b14b mononoke: add json flag to mononoke admin tool
Summary: bookmarks get <BOOKMARKNAME> --json

Reviewed By: StanislavGlebik

Differential Revision: D9677271

fbshipit-source-id: e57f1ab324dcedfbb18c1f07f01b2cad9db0c1e3
2018-09-06 09:23:20 -07:00
Stanislau Hlebik
cab68edc75 mononoke: return bundle to the client in pushrebase
Summary:
Pushrebase should send back the newly created commits. This diff adds this
functionality.

Note that it fetches both the pushrebased commit and the current "onto" bookmark.
Normally they should be the same, however they may be different if the bookmark
suddenly moved before the current pushrebase finished.

Reviewed By: lukaspiatkowski

Differential Revision: D9635433

fbshipit-source-id: 12a076cc95f55b1af49690d236cee567429aef93
2018-09-06 06:53:57 -07:00
Stanislau Hlebik
045623e7c7 mononoke: move getbundle response creation in bundle2resolver
Summary: We are going to use it in pushrebase as well

Reviewed By: lukaspiatkowski

Differential Revision: D9635432

fbshipit-source-id: 5cbe0879d002d9b6c21431b0938562357347a67f
2018-09-06 06:53:57 -07:00
Stanislau Hlebik
7a5b393f88 mononoke: replace Arc<BlobRepo> with BlobRepo
Summary: Arc<BlobRepo> is useless, because BlobRepo is cloneable.

Reviewed By: farnz

Differential Revision: D9654256

fbshipit-source-id: ec54d7669c17732112bee2ba4202b6eafd31bfae
2018-09-06 05:23:35 -07:00
Simon Farnsworth
0ad8dcc0da Split asynchronize into useful components
Summary:
`asynchronize` does two conceptually separate things:

1. Given a closure that can do blocking I/O or is CPU heavy, create a future
that runs that closure inside a Tokio task.
2. Given a future, run it on a new Tokio task and shuffle the result back to
the caller via a channel.

Split these two things out into their own functions - one to make the future,
one to spawn it and recover the result. For now, this is no net change - but
`spawn_future` is likely to come in useful once we need more parallelism than
we get from I/O alone, and `closure_to_blocking_future` at least signals intent
when we allow a long-running function to take over a Tokio task.
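
A rough sketch of the two pieces using today's tokio APIs (the 2018 code used futures 0.1 and different spawning primitives, so the names and signatures below are illustrative only):
```
use tokio::sync::oneshot;

// Piece 1 (sketch): wrap a blocking/CPU-heavy closure so it runs off the main
// reactor; modern tokio exposes this directly as spawn_blocking.
async fn closure_to_blocking_future<T, F>(f: F) -> T
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,
{
    tokio::task::spawn_blocking(f).await.expect("blocking task panicked")
}

// Piece 2 (sketch): run a future on its own task and shuffle the result back
// to the caller through a oneshot channel.
async fn spawn_future<T, Fut>(fut: Fut) -> T
where
    Fut: std::future::Future<Output = T> + Send + 'static,
    T: Send + 'static,
{
    let (tx, rx) = oneshot::channel();
    tokio::spawn(async move {
        let _ = tx.send(fut.await);
    });
    rx.await.expect("spawned task dropped the sender")
}

#[tokio::main]
async fn main() {
    let sum = closure_to_blocking_future(|| (1..=100u64).sum::<u64>()).await;
    let doubled = spawn_future(async move { sum * 2 }).await;
    assert_eq!(doubled, 10100);
}
```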

Reviewed By: jsgf

Differential Revision: D9635812

fbshipit-source-id: e15aeeb305c8499219b89a542962cb7c4b740354
2018-09-05 12:23:49 -07:00
Simon Farnsworth
7fd5851f1e Use blocking in asynchronize as well as spawning a task
Summary:
`asynchronize` currently does not warn the event loop that it's
running blocking code, so we can end up starving the thread pool of threads.

We can't use `blocking` directly, because it won't spawn a synchronous task
onto a fresh Tokio task, so your "parallel" futures end up running in series.
Instead, use it inside `asynchronize` so that we can pick up extra threads in
the thread pool as and when we need them due to heavy load.

While in here, fix up `asynchronize` to only work on synchronous tasks and
push the boxing out one layer. Filenodes needs a specific change that's
worth extra eyes.

Reviewed By: jsgf

Differential Revision: D9631141

fbshipit-source-id: 06f79c4cb697288d3fadc96448a9173e38df425f
2018-09-05 12:23:49 -07:00
Simon Farnsworth
6eb6e4543d Add a test for asynchronize
Summary:
We have suspect timings in Mononoke where `asynchronize` is used to
turn a blocking function into a future. Add a test case to ensure that
`asynchronize` itself cannot be causing accidental serialization.
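
The shape such a test can take, sketched here with `spawn_blocking` standing in for `asynchronize` (the real test exercises the Mononoke helper itself): run two blocking sleeps "in parallel" and assert the wall-clock time is closer to one sleep than two:
```
use std::time::{Duration, Instant};

#[tokio::main]
async fn main() {
    let start = Instant::now();

    // Two blocking sleeps wrapped as futures; if the wrapper accidentally
    // serializes them, this takes ~400ms instead of ~200ms.
    let a = tokio::task::spawn_blocking(|| std::thread::sleep(Duration::from_millis(200)));
    let b = tokio::task::spawn_blocking(|| std::thread::sleep(Duration::from_millis(200)));
    let _ = tokio::join!(a, b);

    let elapsed = start.elapsed();
    assert!(
        elapsed < Duration::from_millis(350),
        "blocking tasks ran in series: {:?}",
        elapsed
    );
}
```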

Reviewed By: jsgf

Differential Revision: D9561367

fbshipit-source-id: 14f03e3f003f258450bb897498001050dee0b40d
2018-09-05 12:23:49 -07:00
Sebastian Lund
ee92a7f421 mononoke: return root asap in get_change_manifests_stream if max_depth=1
Summary: If `max_depth=1` we should only return the topmost entry, which in this case is always the root entry. This fixes it so that we always return fast when `max_depth=1`.

Reviewed By: StanislavGlebik

Differential Revision: D9614259

fbshipit-source-id: a6b82bd5aac74d004f61a07bc24f5d26e5c56412
2018-09-03 11:36:57 -07:00
Stanislau Hlebik
0403dba05c mononoke: remove unused options
Reviewed By: lukaspiatkowski

Differential Revision: D9627883

fbshipit-source-id: b235cb272f93178c942ebf662d77ca73c3790a40
2018-09-03 04:06:14 -07:00
Arun Kulshreshtha
d9a491b1d8 Use tokio::timer::Timeout
Summary: The latest release of `tokio` updates `tokio::timer` to include a new `Timeout` type and a `.timeout()` method on `Future`s. As such, our internal implementation of `.timeout()` in `FutureExt` is no longer needed.
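
A rough sketch with present-day tokio, where `tokio::time::timeout` plays the role the 0.1-era `tokio::timer::Timeout` / `.timeout()` combinator did (the 2018 signatures differ):
```
use std::time::Duration;
use tokio::time::{sleep, timeout};

#[tokio::main]
async fn main() {
    // A slow operation wrapped in a timer-based timeout.
    let slow = async {
        sleep(Duration::from_secs(5)).await;
        42
    };
    match timeout(Duration::from_millis(100), slow).await {
        Ok(value) => println!("finished in time: {}", value),
        Err(_elapsed) => println!("timed out"),
    }
}
```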

Reviewed By: jsgf

Differential Revision: D9617519

fbshipit-source-id: b84fd47a3ee4fc1f7c0a52e308317b93f28f04da
2018-08-31 15:37:30 -07:00
Arun Kulshreshtha
2dc93d6a5f refactor actors to simple struct
Summary: While I was working on `actix-srserver`, I realized the current design of the API server is more complicated than necessary. The "MononokeActor" and "MononokeRepoActor" only return futures, without much CPU computation cost, so they don't need to be placed on a separate thread.

Reviewed By: jsgf

Differential Revision: D9472848

fbshipit-source-id: 618ec39c42d90717fa6985fee7d6308420962d3f
2018-08-31 14:07:17 -07:00
Jeremy Fitzhardinge
ed34b17e1a mononoke: hgproto: make encoding response take ownership of the response
Summary: It avoids a heap of copies

Reviewed By: StanislavGlebik

Differential Revision: D9595689

fbshipit-source-id: a64f0a383acd517830d08cf0be9fc0a1b6903382
2018-08-31 10:23:49 -07:00
Jeremy Fitzhardinge
d8ad00442d mononoke: hgproto: little cleanups
Reviewed By: StanislavGlebik

Differential Revision: D9595691

fbshipit-source-id: eaf8223253ebdc6828758041b1126745aa58d462
2018-08-31 10:23:48 -07:00
Pavel Aslanov
57d5ddcaf8 added pushrebase configuration options
Summary:
- added `PushrebaseParams` to `RepoConfig`
- configurable `recursion_depth` and `rewritedates`

Reviewed By: StanislavGlebik

Differential Revision: D9578661

fbshipit-source-id: df26be4f0f54a54ab6a82fc89d6733099469ce98
2018-08-31 08:55:19 -07:00
Stanislau Hlebik
904e4ee900 mononoke: decrease getfiles buffer
Summary:
Looks like we shouldn't have raised it in the first place. A big getfiles buffer
causes OOMs on the servers. Also, memory profiling shows that quite often most
of the Mononoke server's memory is used for serving remotefilelog requests.

Reviewed By: purplefox

Differential Revision: D9601990

fbshipit-source-id: 356a65d0749b064486436fb737bd5a47b3beecfa
2018-08-31 01:36:23 -07:00
Jeremy Fitzhardinge
4021018efc tp2: rust: update rust-crates-io
Summary: Need new version of tokio.

Reviewed By: kulshrax

Differential Revision: D9598352

fbshipit-source-id: e2e217e6b7d18354cf9725cb59e9e32ed153a124
2018-08-30 17:37:32 -07:00
Sebastian Lund
8d3b5bfb19 mononoke: add bookmark get support to admin tool
Summary:
Add the ability to get bookmarks using the mononoke admin tool.

Usage: `mononoke_admin --repo-id <repo-id> bookmarks get --changeset-type <hg|bonsai> <BOOKMARK_NAME>`

The changeset-type defaults to HG.

Reviewed By: StanislavGlebik

Differential Revision: D9556742

fbshipit-source-id: c5e64981947aabb9059295622501bc359ed57cc6
2018-08-30 04:22:20 -07:00
Sebastian Lund
2ac12dbe08 mononoke: add bookmark set support to admin tool
Summary:
Add the ability to set bookmarks using the mononoke admin tool.

Usage: `mononoke_admin --repo-id <repo-id> bookmarks set <BOOKMARK_NAME> <HG_CHANGESET_ID>`

Reviewed By: StanislavGlebik

Differential Revision: D9539550

fbshipit-source-id: 7114a6a51711eae6784eb30d820c2ce11672679c
2018-08-30 04:22:20 -07:00
Stanislau Hlebik
14e0804798 mononoke: handle pushkeys in pushrebase
Summary:
Pushkey parts can be sent as part of pushrebase. Phases pushkeys are ignored
because we don't yet support phases in Mononoke.
It's not really clear what to do with bookmark pushkey parts, since we are
already moving the `onto` bookmark. For now I suggest ignoring moves of the
`onto` bookmark, and erroring if there is a pushkey for another bookmark.

Reviewed By: farnz

Differential Revision: D9554385

fbshipit-source-id: 07aff1bd9034c0f2d56a2a5a66ea33c91835ef98
2018-08-29 07:07:36 -07:00
Stanislau Hlebik
181388e584 mononoke: add context to the errors
Summary:
I was debugging pushrebase bugs, and the only error I got was
'oneshot::Cancelled'. Let's add a bit more context around it.

Reviewed By: farnz

Differential Revision: D9554384

fbshipit-source-id: b3111ef1b5c743d65740f7fa3fd1a92eef9ab784
2018-08-29 07:07:36 -07:00
Pavel Aslanov
cf9cd619c1 compute changed files and find conflicts
Summary:
This diff fills in the missing parts of the push-rebase implementation:
- `find_closest_root` - find the closest root to the specified bookmark
- `find_changed_files` - find the files affected by changesets between the provided `ancestor` and `descendant`
- `intersect_changed_files` - reject the pushrebase if any conflicts are found (see the sketch below)
- `create_rebased_changes` - support for merges
- `do_pushrebase` - return the updated bookmark value
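
A self-contained sketch of the conflict check `intersect_changed_files` performs (simplified; the real code works on `MPath` values and failure errors): reject the pushrebase when a file touched by the pushed commits was also touched by the commits being rebased over:
```
use std::collections::HashSet;

// Hypothetical simplified signature; the real code uses MPath and failure errors.
fn intersect_changed_files(pushed: &[&str], rebased_over: &[&str]) -> Result<(), Vec<String>> {
    let onto: HashSet<&str> = rebased_over.iter().copied().collect();
    let conflicts: Vec<String> = pushed
        .iter()
        .filter(|path| onto.contains(*path))
        .map(|path| path.to_string())
        .collect();
    if conflicts.is_empty() {
        Ok(())
    } else {
        Err(conflicts) // reject the pushrebase: same files changed on both sides
    }
}

fn main() {
    assert!(intersect_changed_files(&["a/b.rs"], &["c/d.rs"]).is_ok());
    assert_eq!(
        intersect_changed_files(&["a/b.rs"], &["a/b.rs", "c/d.rs"]),
        Err(vec!["a/b.rs".to_string()])
    );
}
```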

Reviewed By: StanislavGlebik

Differential Revision: D9458416

fbshipit-source-id: c0cb53773eba6e966f1a5928c43ebdec761a78d3
2018-08-29 06:52:11 -07:00
Harvey Hunt
b582d26357 Add lookup support for bookmarks
Summary:
Modify the lookup() RPC function to accept either a bookmark or a
commit hash. A commit hash lookup is attempted first, falling
back to a bookmark lookup if it fails.
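
A minimal sketch of the fallback order, with hypothetical helpers in place of the real changeset and bookmark lookups:
```
// Hypothetical stand-ins for the real changeset/bookmark lookups.
fn lookup_commit_hash(key: &str) -> Option<String> {
    // Treat only full 40-char hex strings as commit hashes in this sketch.
    (key.len() == 40 && key.chars().all(|c| c.is_ascii_hexdigit())).then(|| key.to_string())
}

fn lookup_bookmark(key: &str) -> Option<String> {
    (key == "master").then(|| "a".repeat(40))
}

// lookup(): try the key as a commit hash first, then fall back to a bookmark.
fn lookup(key: &str) -> Option<String> {
    lookup_commit_hash(key).or_else(|| lookup_bookmark(key))
}

fn main() {
    assert!(lookup(&"b".repeat(40)).is_some()); // commit hash path
    assert!(lookup("master").is_some());        // bookmark fallback
    assert!(lookup("no-such-key").is_none());
}
```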

Reviewed By: StanislavGlebik

Differential Revision: D9457349

fbshipit-source-id: 78db21c01c498b045f5781097cb12f7220a40999
2018-08-28 09:53:24 -07:00
Jeremy Fitzhardinge
916d8b6813 rust/failure_ext: move slogkv into submodule
Summary: Avoid cluttering top-level module.

Reviewed By: farnz

Differential Revision: D9394195

fbshipit-source-id: ae7b6e7c182eaf50cfad075cee4b0775c1df0e68
2018-08-20 12:37:04 -07:00
Zeyi Fan
300047c2fa add thrift client
Summary: Added a thrift client library and binary for Mononoke API Server that allows us to play with the API Server's thrift port.

Reviewed By: farnz

Differential Revision: D9110899

fbshipit-source-id: 603cc5e2b5e0419a73c9eccb35f8c95455ada9ce
2018-08-19 16:07:12 -07:00
Zeyi Fan
dcf0665484 add cat_file in thrift
Summary: Add `get_raw` to the thrift part.

Reviewed By: StanislavGlebik

Differential Revision: D9094301

fbshipit-source-id: 23bbfa6fb653e07ca687ff8e21da8ae5fca3333e
2018-08-19 16:07:12 -07:00
Zeyi Fan
0fcfbda8b1 add fb303 thrift server
Summary: This commit adds a basic thrift server that responds to fb303 status check queries to Mononoke API Server.

Reviewed By: farnz

Differential Revision: D9092291

fbshipit-source-id: d1e4ddb280c252f549d40a0bb03d05afccbf73b8
2018-08-19 16:07:12 -07:00
Zeyi Fan
c1b1005d91 clean up HgBlob and HgBlobHash
Summary: This commit changes `HgBlob` from an enum into a struct that only contains one Bytes field, completely removes `HgBlobHash`, and changes the methods of `HgBlob` from returning `Option`s to directly returning results.

Reviewed By: farnz

Differential Revision: D9317851

fbshipit-source-id: 48030a621874d628602b1c5d3327e635d721facf
2018-08-19 15:52:34 -07:00
Alex Maloney
1496846903 Futures split Stats into FutureStats and TimedStats
Summary: Since this data is specific to TimedStream and not TimedFuture, I split the Stats struct into FutureStats and StreamStats.

Reviewed By: StanislavGlebik

Differential Revision: D9355421

fbshipit-source-id: cc2055706574756e2e53f3ccc57abfc50c3a02ba
2018-08-17 13:07:24 -07:00
Lukas Piatkowski
09a8d9430f manifest utils: use structure rather than closure to represent pruner and implement DeletedPruner
Summary: gettreepack doesn't care about deleted entries, only about added or modified ones.
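
A hedged sketch of the structure-over-closure idea with a simplified entry type and trait (not the real manifest-utils API): the pruner is a nameable struct, and `DeletedPruner` drops deleted entries so gettreepack only walks added or modified ones:
```
// Simplified stand-in for a manifest diff entry.
#[derive(Debug, PartialEq, Clone, Copy)]
enum EntryStatus {
    Added,
    Modified,
    Deleted,
}

// Representing the pruner as a struct + trait (instead of a closure) lets
// callers name it, reuse it, and compose several pruners.
trait Pruner {
    fn keep(&self, status: EntryStatus) -> bool;
}

struct DeletedPruner;

impl Pruner for DeletedPruner {
    fn keep(&self, status: EntryStatus) -> bool {
        // gettreepack only needs added or modified entries.
        status != EntryStatus::Deleted
    }
}

fn main() {
    let entries = [EntryStatus::Added, EntryStatus::Deleted, EntryStatus::Modified];
    let pruner = DeletedPruner;
    let kept: Vec<_> = entries.iter().copied().filter(|s| pruner.keep(*s)).collect();
    assert_eq!(kept, vec![EntryStatus::Added, EntryStatus::Modified]);
}
```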

Reviewed By: StanislavGlebik

Differential Revision: D9378909

fbshipit-source-id: 2935e6b74fbb0208f7cf89ab4b1e761bb9c6000b
2018-08-17 09:07:27 -07:00
Lukas Piatkowski
2d92742e38 gettreepack: handle the depth parameter being sent by the client
Reviewed By: StanislavGlebik

Differential Revision: D9378908

fbshipit-source-id: 980e625765803c7cac9a272f3e701a3b2fa8da28
2018-08-17 09:07:26 -07:00
Stanislau Hlebik
60150b9488 mononoke: stack pushrebase
Summary: Now pushrebase handles stacks as well. Again, still no conflict checks.

Reviewed By: aslpavel

Differential Revision: D9359807

fbshipit-source-id: 9f6e7a05b45fb80b40faaaaa4fe2434b7a591a7c
2018-08-17 07:21:31 -07:00