1c6ca01a25
Summary: This updates the Filestore to make writes faster by farming out all hashing to separate Tokio tasks. This lets us increase the throughput of the Filestore substantially, since we're no longer limited by a single core's ability to hash data.

On my dev server, when running on a 1MB file, this improves write throughput of the Filestore from 36.50 MB/s (0.29 Gb/s) to 152.61 MB/s (1.19 Gb/s) when using a chunk size of 1MB and a concurrency level of 10 (i.e. 10 concurrent chunk uploads). Note that the chunk size has a fairly limited impact on performance (e.g. making it 10KB instead has a <10% impact on performance). Of course, this doesn't reflect performance when uploading to a remote blobstore, but note that we can tune that by tweaking our upload concurrency (making uploads faster at the expense of more memory).

---

Note that as part of this change, I moved the implementation away from stream splitting and toward an implementation that fans out to Sinks. I actually had a higher-performance filestore implementation for both approaches, but went with this one because it doesn't require the incoming Stream to be Send (and I have a forthcoming diff to make the whole Filestore not require a Send input), which will be useful when integrating with the API Server, which unfortunately does not provide us with a Send input.

Reviewed By: aslpavel

Differential Revision: D16560769

fbshipit-source-id: b2e414ea3b47cc4db17f82d982618bbd837f93a9
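To illustrate the idea behind the change, here is a minimal sketch of fanning chunk hashing out to concurrent workers. This is hypothetical code, not Mononoke's implementation: the real Filestore uses Tokio tasks and fans out to Sinks, while this sketch uses `std::thread` and the standard library's `DefaultHasher` so it stays self-contained; the function name `hash_chunks_concurrent` is invented for illustration.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;
use std::thread;

/// Hash each chunk on its own worker thread and return the digests in order.
/// This removes the single-core bottleneck: while one worker hashes one
/// chunk, other workers make progress on the remaining chunks.
fn hash_chunks_concurrent(chunks: Vec<Vec<u8>>) -> Vec<u64> {
    // Spawn one worker per chunk (a real implementation would bound
    // concurrency, e.g. at 10 in-flight chunks as in the benchmark above).
    let handles: Vec<_> = chunks
        .into_iter()
        .map(|chunk| {
            thread::spawn(move || {
                let mut hasher = DefaultHasher::new();
                hasher.write(&chunk);
                hasher.finish()
            })
        })
        .collect();
    // Joining in spawn order preserves the original chunk order.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    // Four 1 MiB chunks with distinct contents, hashed concurrently.
    let chunks: Vec<Vec<u8>> = (0..4u8).map(|i| vec![i; 1 << 20]).collect();
    let digests = hash_chunks_concurrent(chunks);
    println!("hashed {} chunks concurrently", digests.len());
}
```

The same shape carries over to async code: replace `thread::spawn` with `tokio::spawn` (or a blocking-task pool for CPU-bound hashing) and join the resulting futures instead of thread handles.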
Top-level contents of the repository:

- apiserver
- async-compression
- asyncmemo
- benchmark
- blobimport_lib/src
- blobrepo
- blobrepo_utils
- blobstore
- blobstore_sync_queue
- bonsai_hg_mapping
- bonsai_utils
- bookmarks
- bundle2_resolver
- bytes-ext
- cache_warmup/src
- changesets
- cmdlib/src
- cmds
- common
- derived_data/src
- failure_ext
- filenodes
- filestore/src
- futures-ext
- hgcli
- hgproto
- hook_tailer
- hooks
- manifest
- mercurial
- mercurial_bundles
- mercurial_types
- metaconfig
- mononoke_api/src
- mononoke_types
- netstring
- phases
- py_tar_utils
- reachabilityindex
- ready_state/src
- repo_client
- revset
- server
- sshrelay
- tests
- .gitignore
- .rlsconfig
- .travis.yml
- Cargo.toml
- CONTRIBUTING.md
- LICENSE
- packman.yml
- README.md
- rustfmt.toml
# Mononoke
Mononoke is a next-generation server for the Mercurial source control system, meant to scale up to accepting thousands of commits every hour across millions of files. It is primarily written in the Rust programming language.
## Caveat Emptor
Mononoke is still in early stages of development. We are making it available now because we plan to start making references to it from our other open source projects such as Eden.
The version that we provide on GitHub does not build yet.
This is because the code is exported verbatim from an internal repository at Facebook, and not all of the scaffolding from our internal repository can be easily extracted. The key areas where we need to shore things up are:
- Full support for a standard `cargo build`.
- Open source replacements for Facebook-internal services (blob store, logging etc).
The current goal is to get Mononoke working on Linux. Other Unix-like OSes may be supported in the future.