Summary:
Using multiple Runtimes might cause problems in the future, and even if it doesn't, it will keep prompting investigations into whether it is a problem.
The issue I have in mind: if someone runs a future on one runtime that calls `tokio::spawn` (e.g. scheduling a job that runs forever), but then uses a different runtime to drive another future to completion, one might not suspect that the earlier spawn was already lost together with the previous Runtime.
Reviewed By: farnz
Differential Revision: D10446122
fbshipit-source-id: 4bfd2a04487a70355a26f821e6348f5223901c0d
Summary:
getfiles implementation for lfs
The implementation is the following:
- get file size from file envelope (retrieve from manifold by HgNodeId)
- if file size > threshold from lfs config
- fetch the file into memory and compute its sha256; this will be fixed later, as the approach consumes a lot of memory, but we don't yet have any mapping from sha256 to blake2 [T35239107](https://our.intern.facebook.com/intern/tasks/?t=35239107)
- generate lfs metadata file according to [LfsPlan](https://www.mercurial-scm.org/wiki/LfsPlan)
- set metakeyflag (REVID_STORED_EXT) in the file header
- if file size < threshold, process it the usual way
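The LFS metadata file generated in the third step follows the pointer format from the Git LFS spec that [LfsPlan](https://www.mercurial-scm.org/wiki/LfsPlan) is based on. A minimal sketch of building that content (the function name is illustrative, not the actual Mononoke API):

```rust
// Build the LFS pointer ("metadata file") content per the Git LFS v1 spec:
// a version line, the sha256 oid of the real file, and its size in bytes.
fn lfs_pointer(sha256_hex: &str, size: u64) -> String {
    format!(
        "version https://git-lfs.github.com/spec/v1\noid sha256:{}\nsize {}\n",
        sha256_hex, size
    )
}
```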
Reviewed By: StanislavGlebik
Differential Revision: D10335988
fbshipit-source-id: 6a1ba671bae46159bcc16613f99a0e21cf3b5e3a
Summary:
As per the comments added - MyRouter setup is such that it starts inside a Tupperware container together with the binary that will be using it. This means that by the time the binary wants to use the MyRouter connection, the MyRouter instance might not be ready yet. To mitigate this, the myrouter::Builder will attempt a "Select 1" query and retry it with a backoff for a maximum of 2 minutes or until the connection is actually established.
Unfortunately the `queries!` macro had to be moved inside the `macro` module in order to make it usable from inside `myrouter` module, see this: https://stackoverflow.com/questions/31103213/import-macro-from-parent-module
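The wait-for-readiness strategy can be sketched as a retry loop with backoff and a hard 2-minute deadline. This is a simplified, synchronous illustration (the probe closure stands in for the "Select 1" query; the real myrouter::Builder API differs):

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

// Retry `probe` with exponential backoff until it succeeds or a 2-minute
// deadline would be exceeded. Returns whether the probe ever succeeded.
fn wait_until_ready<F: FnMut() -> bool>(mut probe: F) -> bool {
    let deadline = Instant::now() + Duration::from_secs(120);
    let mut backoff = Duration::from_millis(100);
    loop {
        if probe() {
            return true;
        }
        // Give up rather than sleep past the deadline.
        if Instant::now() + backoff > deadline {
            return false;
        }
        sleep(backoff);
        // Double the backoff, capped so retries stay reasonably frequent.
        backoff = (backoff * 2).min(Duration::from_secs(5));
    }
}
```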
Reviewed By: farnz
Differential Revision: D10270464
fbshipit-source-id: 9cf6ad936a0cabd72967fb96796d4af3bab25822
Summary:
The idea for rollout is to:
- first make sure that Mononoke doesn't crash when a --myrouter-port is provided
- then tupperware configs will be modified to include myrouter as a co-located process on every host, and the port of that myrouter instance will be provided via the command line
- lastly land the change that actually talks to myrouter
Reviewed By: StanislavGlebik
Differential Revision: D10258251
fbshipit-source-id: ea9d461b401d41ef624304084014c2227968d33f
Summary:
We now have a way for a MySQL database to tell us how to send
streaming clones to the client. Hook it all up, so that (with any luck), once
we have data in MySQL and the blobstore, we'll see working streaming clones.
Reviewed By: StanislavGlebik
Differential Revision: D10130774
fbshipit-source-id: b22ffb642d0a54b09545889779f79e7a0f81acd7
Summary:
Previously cachelib cmdline args were added only to cmdline binaries, but not
to Mononoke; this diff fixes that.
Reviewed By: farnz
Differential Revision: D10083899
fbshipit-source-id: 8febba96561c5ab9a61f60fafc7a7e56985dc038
Summary:
The "path" in manifold blobrepo is used for logging, but it has been quite confusing for "fbsource" and "fbsource-pushrebase" to be logged in an identical way - both as "fbsource", because of the "path" config. Let's not use the "path" for logging; instead use the "reponame" from the metaconfig repo.
In case we ever want two repos that are named the same (please don't), or logging under a different name than the config's "reponame", we can add a proper optional "name" parameter, but for now we don't need this confusing feature.
Reviewed By: StanislavGlebik
Differential Revision: D9769514
fbshipit-source-id: 89f2291df90a7396749e127d8985dc12e61f4af4
Summary:
We had a lot of requests that took > 15 mins on Mononoke, while taking a few
seconds on mercurial. It turned out that hgcli doesn't play well with big chunks.
Looks like AsyncRead very inefficiently tries to allocate memory, and that
causes huge slowness (T33775046 for more details).
As a short-term fix, let's chunk the data on the server. Note that now we have
to make the getfiles request streamable and manually insert the size of the
request.
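The chunking step can be sketched as a simple split of the large payload into fixed-size pieces before they are written to the wire (the chunk size and function name here are illustrative; the real server code streams rather than collecting into a Vec):

```rust
// Split one large payload into fixed-size chunks so the client never has
// to allocate for a single huge buffer. Panics if chunk_size is zero.
fn chunk_payload(data: &[u8], chunk_size: usize) -> Vec<Vec<u8>> {
    data.chunks(chunk_size).map(|c| c.to_vec()).collect()
}
```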
Reviewed By: lukaspiatkowski
Differential Revision: D9738591
fbshipit-source-id: f504cf540bc7d90e2cbebba9808455b6e89c92c6
Summary: The latest release of `tokio` updates `tokio::timer` to include a new `Timeout` type and a `.timeout()` method on `Future`s. As such, our internal implementation of `.timeout()` in `FutureExt` is no longer needed.
Reviewed By: jsgf
Differential Revision: D9617519
fbshipit-source-id: b84fd47a3ee4fc1f7c0a52e308317b93f28f04da
Summary: Since this data is specific to TimedStream and not TimedFuture I split the Stats struct into FutureStats and StreamStats
Reviewed By: StanislavGlebik
Differential Revision: D9355421
fbshipit-source-id: cc2055706574756e2e53f3ccc57abfc50c3a02ba
Summary:
Sometimes the scribe writes can fail due to backpressure, so just drop
them while still logging to stdout.
Reviewed By: StanislavGlebik
Differential Revision: D9355416
fbshipit-source-id: 8cebe61b1ccfe802fcff686102096d1c9291aa1a
Summary:
Added some comments and fixed a couple of little style issues.
Log when warmup prefetching starts and ends.
Reviewed By: StanislavGlebik
Differential Revision: D9355414
fbshipit-source-id: b16ac267cc0abda01ab445ca3e5de34c17f680a7
Summary: No need for a whole file for a single use statement.
Reviewed By: StanislavGlebik
Differential Revision: D9349613
fbshipit-source-id: 511985201e0799a0c4f0847d14a7c439fa249687
Summary: Remove `Arc<BlobRepo>` from more places since `BlobRepo` will do its own internal `Arc`ing.
Reviewed By: StanislavGlebik
Differential Revision: D9317987
fbshipit-source-id: 899e8b2ede278e62a83e64c144eb18c8cc7e57c6
Summary: Beginnings of a container with various essential bits.
Reviewed By: StanislavGlebik
Differential Revision: D9322148
fbshipit-source-id: b69bd17aa88acd69e81b90e9a1efb672247dc887
Summary:
It's really part of the server, and isn't likely to be useful
elsewhere.
Reviewed By: StanislavGlebik
Differential Revision: D9322149
fbshipit-source-id: 0dc3ca41f2779b3cc9e1c32f8e09e369038c3d53
Summary: Those futures started calling tokio_threadpool::blocking once aslpavel added the bonsai to hg mapping for bookmarks; this caused the server to fail to start up, because fetching the config repo was happening outside of tokio.
Reviewed By: farnz
Differential Revision: D9316711
fbshipit-source-id: 64188028537881baf1b1c713adc39b22c09a78cc
Summary:
Set a panichandler by default in cmdlib::get_logger to make sure
everyone gets one set. It configures itself to exit the process so that we
don't leave it in a half-broken state.
The Mononoke server was already using a panic hook, but this replaces it with
one that prints more detail about what was going on at the time.
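A minimal sketch of an exit-on-panic hook of this kind, using only the standard library (the exit code and function name are illustrative; the real Mononoke handler prints more detail about what was going on):

```rust
use std::panic;
use std::process;

// Install a panic hook that prints the usual panic report and then exits,
// so the process is never left running in a half-broken state.
fn set_exit_on_panic_hook() {
    let default_hook = panic::take_hook();
    panic::set_hook(Box::new(move |info| {
        // Print the standard panic message (payload and location) first...
        default_hook(info);
        // ...then terminate instead of unwinding.
        process::exit(101);
    }));
}
```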
Reviewed By: StanislavGlebik
Differential Revision: D9234587
fbshipit-source-id: bb51790a60b1ee545a364b4b92e09ec950788684
Summary:
We'll be running in Tupperware, and want to shrink when we get too
large to avoid OOM due to caches. Configure cachelib appropriately.
Reviewed By: StanislavGlebik
Differential Revision: D8900371
fbshipit-source-id: 4f1f64c2508c64e4ce2d201e0a0e86446f84ffef
Summary:
Back out "[mononoke] Switch to cachelib for blob caching"
Original commit changeset: 2549d85dfcba
Back out "[mononoke] Remove unused asyncmemo imports"
Original commit changeset: e34f8c34a3f6
Back out "mononoke: fix blobimport"
Original commit changeset: b540201b93f1
Reviewed By: StanislavGlebik
Differential Revision: D8989404
fbshipit-source-id: e4e7c629cb4dcf196aa56eb07a53a45f6008eb4e
Summary:
cachelib can shrink caches to avoid running into OOM, while still
giving as close to full-sized cache performance as possible. Make it possible
for Rust users of cachelib to request cache shrinking on low memory, while
choosing a strategy that suits them.
Reviewed By: Imxset21
Differential Revision: D8895814
fbshipit-source-id: ca4eb5b002c9ed922e7f7f56de002313a0d2303b
Summary:
Start deprecating AsyncMemo for caching purposes, by removing its use
as a blobstore cache.
Reviewed By: StanislavGlebik
Differential Revision: D8840496
fbshipit-source-id: 2549d85dfcba6647e9b0824ab55ab76165a17564
Summary: Let it be a normal future-style design: an infinite loop that spawns futures that handle the requests
Reviewed By: jsgf
Differential Revision: D8866166
fbshipit-source-id: 5d90e5c987c419351a7a15013133b47522c345f9
Summary: As a bonus there is no need for spawning threads for separate tokio cores since tokio::runtime has built-in thread pool
Reviewed By: farnz
Differential Revision: D8863343
fbshipit-source-id: 8adf696640aec78e767574e8bf2925699a580ca0
Summary: Iterating over the code on the server is a bit painful and it has grown a lot; splitting it should speed up future refactorings and make it more maintainable
Reviewed By: jsgf, StanislavGlebik
Differential Revision: D8859811
fbshipit-source-id: 7c56f9f835f45eca322955cb3b9eadd87fbb30a1
Summary: Moving to the crate allows apiserver to reuse the function.
Reviewed By: jsgf
Differential Revision: D8843178
fbshipit-source-id: 9d110c7f2683ff58654187222e7820240bfda98e
Summary:
Deleted the RepoGenCache structure, associated file, and public exports.
Also deleted the containing repoinfo crate, as nothing else was using it now.
Deleted some existing references to it in experimental code which weren't caught in the test plan but were blocking this from landing.
Reviewed By: StanislavGlebik
Differential Revision: D8787103
fbshipit-source-id: 0b90c758ea8175cb0f3ec74c371592b9ca5b192e
Summary:
Removed all references to RepoGenCache from publicly callable functions in the revset package. This involved:
- Modifying blobrepo so that its get_generation_number method returned a Generation wrapper instead of a raw usize, to allow it to be used in a cleaner manner in the revset code.
- Simultaneously changing the constructors of all the structures in revset. This seems like a big change, but many of them call each other, passing a RepoGenCache object down the line, so eliminating them all at once made for the cleanest update.
- Modifying helper functions in the revset structures which would create streams of nodes by taking ownership of a RepoGenCache object within a closure. Instead they now take ownership of a clone of the repo. This strategy was already done earlier in the same helper functions, so I am assuming the cost of cloning a repo into a closure is small.
- Modifying the only external usage of revset within the mononoke server code.
This is part of a several step process to completely remove RepoGenCache from the code base. The next steps should be:
- Remove all references to RepoGenCache in the testing macros for revset.
- Delete RepoGenCache and clean up any dangling references to it.
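The `Generation` wrapper described in the first bullet can be sketched as a newtype over the raw number (the real type in the Mononoke codebase may differ):

```rust
// Newtype around the raw generation number so it can't be confused with
// arbitrary usize values; ordering is derived so revset code can compare
// generations directly.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct Generation(usize);

impl Generation {
    fn new(g: usize) -> Self {
        Generation(g)
    }

    fn value(self) -> usize {
        self.0
    }
}
```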
Reviewed By: StanislavGlebik
Differential Revision: D8743560
fbshipit-source-id: 125f851075d836d40224d339e1daee912a39f7e4
Summary:
Those are the changes made to crates in tp2: P59806629
Several changes that were hard to split are here:
- update manifold client to use hyper 0.12.x
- update scribe client to use hyper 0.12.x
- update mononoke manifoldblob to use the updated manifold client
- update mononoke hgcli to use openssl 0.10.x and tokio-openssl
- update mononoke server to use openssl 0.10.x and tokio-openssl
- remove sendwrapper
Reviewed By: jsgf
Differential Revision: D8806931
fbshipit-source-id: 65412d483f77d8c0a0d5692c41c6516bb8f86046
Summary:
This is a series of patches which adds Cargo.toml files to all the crates and tries to build them. There is an individual patch for each crate which tells whether that crate currently builds successfully using cargo or not, and if not, the reason behind that.
Following are the reasons why the crates don't build:
* failure_ext and netstring crates which are internal
* error related to tokio_io; there might be a patched version of tokio_io internally
* actix-web depends on httparse which uses nightly features
All builds were done using `rustc 1.27.0-dev`.
Pull Request resolved: https://github.com/facebookexperimental/mononoke/pull/7
Differential Revision: D8778746
Pulled By: jsgf
fbshipit-source-id: 927a7a20b1d5c9643869b26c0eab09e90048443e
Summary:
hgcli will start logging stuff as well and it will use the same session_uuid as the server.
This also includes logging the user and source hostname.
Reviewed By: farnz
Differential Revision: D8750663
fbshipit-source-id: 7ebc8b6c10b7560d985fd23e9e3f2645f3bd0a1c
Summary: Those structures are sshrelay-specific; move them there so it's easier to share them in future diffs
Reviewed By: farnz
Differential Revision: D8750666
fbshipit-source-id: b58596e63787d221a3970d5f1648e11d81949925
Summary: This is a preparation to using Preamble more heavily
Reviewed By: StanislavGlebik
Differential Revision: D8750665
fbshipit-source-id: 44d4bcedbe95fe05679faeedf4479ebad4d9359c
Summary: Session UUID will help identify the issues on Mononoke side whenever the client encounters problems
Reviewed By: StanislavGlebik
Differential Revision: D8732396
fbshipit-source-id: 35d04b0d56be0cfc2c608f08287a2b1d236a96e3
Summary:
This diff refactors the server config repository to support storing and loading of hooks. In the new structure each repo lives in its own directory and the config file for the server is called "server.toml".
Hooks can be referenced by relative or absolute paths allowing either local or common hooks to be loaded.
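The directory structure described above might look like this (all repo and hook names are made up for illustration):

```text
config/
  fbsource/
    server.toml        # per-repo server config
    hooks/
      check_size       # local hook, referenced by a relative path
  common/
    hooks/
      verify_author    # shared hook, referenced by an absolute path
```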
Reviewed By: StanislavGlebik
Differential Revision: D8625178
fbshipit-source-id: 62c8c515a0fbbf7a38cfc68317300d8f42eb4d7a
Summary:
Add a per-repo config flag allowing repos to be configured without being
enabled. Setting "enabled = false" will make Mononoke completely ignore the
repo config. If not present, "enabled" is assumed to be true.
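A minimal sketch of such a per-repo config (only the `enabled` flag is the field added in this diff):

```toml
# "enabled" defaults to true when absent; setting it to false makes
# Mononoke completely ignore this repo config.
enabled = false
```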
Reviewed By: farnz
Differential Revision: D8647161
fbshipit-source-id: 2646d41a64917d3e50f662b0b4b628ccfdbb05a8
Summary:
Always use TLS for the connection between hgcli and the Mononoke server, even for
localhost connections [1].
The setup is similar to tls setup of Eden server.
[1] This is not strictly necessary, of course, but adding an option to bypass the TLS
connection may result in accidental use of it in prod. However, if it turns out
to be too cumbersome, we can add such an option in the future.
Reviewed By: jsgf
Differential Revision: D8644299
fbshipit-source-id: 0898e30e33b718e13a766763479f3adf9323ffe7
Summary: This commit upgraded openssl, enabled ALPN for actix-web, and added tokio-codec, with fixes due to the upgrade.
Reviewed By: StanislavGlebik
Differential Revision: D8682673
fbshipit-source-id: 8c7cadfd6c0c7b016202f6cb038eb4951d0f9333
Summary:
repo.rs file is getting too big. Let's move remotefilelog logic to a separate
file. In the next diff I'm going to modify it to add memcache caching for
getfiles
Reviewed By: farnz
Differential Revision: D8678229
fbshipit-source-id: c12ed23b4b044528d551f9c0f0266114a575d6d6
Summary:
There are so many individual arguments here that it's honestly hard to keep
track.
It is unfortunate that a bunch of string copies have to be done, but not really
a big deal.
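The refactoring pattern described above can be sketched as bundling the many individual parameters into one struct, so call sites stay self-describing (the struct and field names here are hypothetical, not the ones in this diff):

```rust
// Group what used to be many positional String/int arguments into a single
// struct; a few string copies at construction time, but much easier to
// keep track of at call sites.
#[derive(Clone)]
struct RepoArgs {
    reponame: String,
    myrouter_port: Option<u16>,
    enable_tracing: bool,
}

// Stand-in for a function that previously took each field as a separate
// positional parameter.
fn describe(args: &RepoArgs) -> String {
    format!("{} (tracing: {})", args.reponame, args.enable_tracing)
}
```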
Reviewed By: jsgf
Differential Revision: D8675237
fbshipit-source-id: 6a333d01579532a0a88c3e26b2db86b46cf45955
Summary:
It's easier to use scuba when you separate the part of the log that identifies it from the part that can hold arbitrary data.
It's also easier to find the relevant code after a sample has been found.
Reviewed By: jsgf
Differential Revision: D8625612
fbshipit-source-id: 7d7e382530dd5d7e5d69c6d34caccda4b6d2921b
Summary: Make it possible to enable and disable tracing in Mononoke via fb303 thrift calls.
Reviewed By: StanislavGlebik
Differential Revision: D8553236
fbshipit-source-id: 6c962fcf7f753f200bf865c403da12b0f9619221