f93426a8c8
Summary: Compressed responses from LFS are currently slower than they should be. Normally we'd expect roughly normal response time plus compression time, but right now it's much more than that.

The reason is that our compressed streams are eager, i.e. they consume and compress as much of the underlying stream as possible before sending off any data. This is a problem for LFS, because we try very hard to serve everything directly out of RAM (and very often succeed), which means we compress the whole stream before sending any of it. So we might spend e.g. 500ms compressing (that's how long zstd takes on the object I was testing with, a ~80 MiB binary that compresses down to 33% of its size), and only _then_ spend time transferring the compressed data, when we could have started transferring immediately while we were still compressing.

To achieve that, let's simply tell our compressed stream to stop waiting for more data once in a while (every 4 MiB, which sounds very frequent but actually really isn't).

Reviewed By: StanislavGlebik

Differential Revision: D23782756

fbshipit-source-id: a0d523d84f92e215eb366f551063383fc835fdd6
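The idea in the commit can be sketched as a `Write` adapter that forces the inner (compressing) writer to flush every N bytes, so compressed output reaches the network incrementally instead of only after the whole payload has been consumed. This is a minimal std-only illustration of the principle, not Mononoke's actual code (which operates on async streams with zstd); `PeriodicFlush` and `CountingSink` are names invented for this example.

```rust
use std::io::{self, Write};

// Flushes the wrapped writer every `chunk` bytes so downstream consumers
// start receiving data before the whole input has been compressed.
struct PeriodicFlush<W: Write> {
    inner: W,
    chunk: usize,
    since_flush: usize,
}

impl<W: Write> PeriodicFlush<W> {
    fn new(inner: W, chunk: usize) -> Self {
        PeriodicFlush { inner, chunk, since_flush: 0 }
    }
}

impl<W: Write> Write for PeriodicFlush<W> {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        let n = self.inner.write(buf)?;
        self.since_flush += n;
        if self.since_flush >= self.chunk {
            // Stop waiting for more data: push what we have downstream now.
            self.inner.flush()?;
            self.since_flush = 0;
        }
        Ok(n)
    }

    fn flush(&mut self) -> io::Result<()> {
        self.inner.flush()
    }
}

// Stand-in for the network side: records bytes written and flushes seen.
struct CountingSink {
    bytes: usize,
    flushes: usize,
}

impl Write for CountingSink {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.bytes += buf.len();
        Ok(buf.len())
    }
    fn flush(&mut self) -> io::Result<()> {
        self.flushes += 1;
        Ok(())
    }
}

// Write 10 MiB through a 4 MiB flush interval; returns (flushes, bytes).
fn demo() -> (usize, usize) {
    let sink = CountingSink { bytes: 0, flushes: 0 };
    let mut w = PeriodicFlush::new(sink, 4 * 1024 * 1024);
    let piece = vec![0u8; 1024 * 1024];
    for _ in 0..10 {
        w.write_all(&piece).unwrap();
    }
    (w.inner.flushes, w.inner.bytes)
}

fn main() {
    let (flushes, bytes) = demo();
    // After 10 MiB with a 4 MiB interval, the sink has been flushed twice.
    assert_eq!((flushes, bytes), (2, 10 * 1024 * 1024));
    println!("flushes={} bytes={}", flushes, bytes);
}
```

With a 4 MiB interval, the periodic flush costs little in compression ratio but lets transfer overlap with compression, which is exactly the latency win described above.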
Mononoke
Mononoke is a next-generation server for the Mercurial source control system, meant to scale up to accepting thousands of commits every hour across millions of files. It is primarily written in the Rust programming language.
Caveat Emptor
Mononoke is still in early stages of development. We are making it available now because we plan to start making references to it from our other open source projects.
The version that we provide on GitHub does not build yet.
This is because the code is exported verbatim from an internal repository at Facebook, and not all of the scaffolding from our internal repository can be easily extracted. The key areas where we need to shore things up are:
- Full support for a standard `cargo build`.
- Open source replacements for Facebook-internal services (blob store, logging, etc.).
The current goal is to get Mononoke working on Linux. Other Unix-like OSes may be supported in the future.