Summary: Scrub options are useful to the walker, manual_scrub, and the admin tool, so they should be cmdlib blobstore options rather than per-tool options.
Reviewed By: krallin
Differential Revision: D25928062
fbshipit-source-id: a5bbf518c4e5d97275fb3d8effd923fcca691891
Summary:
Previously, for configs that are only read once, EdenFS would have to be
stopped, the config written, and then EdenFS would be restarted. For Mercurial,
this increases the test time significantly as starting EdenFS takes ~20s.
Reviewed By: fanzeyi
Differential Revision: D26258174
fbshipit-source-id: a74d1e5be35044e95e5a7403f1bf28d557b613d2
Summary:
Since mounting EdenFS via NFS requires the same privilege as mounting EdenFS
with FUSE, let's re-use the PrivHelper infrastructure that FUSE already uses.
The macOS bit isn't yet implemented, as developing on Linux is easier for now;
the mount args are also overly complicated on macOS and not well documented.
Reviewed By: fanzeyi
Differential Revision: D26255629
fbshipit-source-id: 295261dd40442fe7e0f9439c4f4c25e0d50211a3
Summary: This enables the repository to be mounted via NFS rather than FUSE.
Reviewed By: chadaustin
Differential Revision: D26229827
fbshipit-source-id: 5af5a47ebe5f1dd54df7707bf57d9b7476921f29
Summary:
Accumulate every commit transition in the journal, rather than
treating every merged range of journal deltas as having a "from"
commit and a "to" commit. This makes it possible for Watchman to
accurately report the set of changed files across an ABA situation or
rapid `hg next` and `hg prev` operations.
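The difference can be sketched with a toy model (hypothetical types, not EdenFS's actual journal code): merged deltas retain every (from, to) commit transition instead of collapsing the range to a single from/to pair, which is what makes an ABA sequence observable.

```rust
#[derive(Clone, Debug, PartialEq)]
struct CommitTransition {
    from: &'static str,
    to: &'static str,
}

#[derive(Default)]
struct MergedDelta {
    // Every transition is accumulated, not just the endpoints.
    transitions: Vec<CommitTransition>,
}

impl MergedDelta {
    fn merge(&mut self, t: CommitTransition) {
        self.transitions.push(t);
    }

    /// An ABA situation (A -> B -> A) is only detectable because every
    /// transition is retained; a collapsed (first_from, last_to) pair
    /// would report "A -> A", i.e. no commit change at all.
    fn is_aba(&self) -> bool {
        self.transitions.len() > 1
            && self.transitions.first().map(|t| t.from)
                == self.transitions.last().map(|t| t.to)
    }
}

fn main() {
    let mut d = MergedDelta::default();
    d.merge(CommitTransition { from: "A", to: "B" }); // hg next
    d.merge(CommitTransition { from: "B", to: "A" }); // hg prev
    assert!(d.is_aba());
    println!("transitions kept: {}", d.transitions.len());
}
```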
Reviewed By: genevievehelsel
Differential Revision: D26193478
fbshipit-source-id: 8b54b9d5bcefa1811008a3b6e9c3aa25a69471ca
Summary:
`gettreepack` accounts for ~6B logged Scuba rows a day (https://fburl.com/scuba/mononoke_test_perf/vpnsn1ny) out of ~10B total logged rows (https://fburl.com/scuba/mononoke_test_perf/qw78ecxe), so 60% of rows. For the vast majority of `gettreepack` instances we log 3 log tags: "Start processing", "Gettreepack params" and "Command processed". Similarly, the vast majority of requests request just 1 mfnode: https://fburl.com/scuba/mononoke_test_perf/3xwotsgq. If we sample logging for these commands by a factor of 100, we'll be able to save almost all of this 60% of rows (it's not entirely clear how that will actually influence our retention, but likely pretty significantly).
What do we lose if we do this sampling?
There are a few perf counters, like GettreepackResponseSize, GettreepackNumTreepacks, GettreepackDirectories, and GettreepackDesignatedNodes, that will lose their aggregation accuracy. Given that we're only sampling single-mfnode gettreepacks, these values are not likely to be very interesting. However, we still leave the option of turning verbose logging back on to get the full amount of logging.
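A minimal sketch of the idea (a hypothetical helper, not the actual Mononoke Scuba logging API): keep 1 row out of every N, with a verbose switch as the escape hatch for turning full logging back on.

```rust
struct SampledLogger {
    sample_rate: u64, // log 1 out of every `sample_rate` rows
    verbose: bool,    // escape hatch: log everything again
    seen: u64,
    logged: u64,
}

impl SampledLogger {
    fn new(sample_rate: u64) -> Self {
        Self { sample_rate, verbose: false, seen: 0, logged: 0 }
    }

    fn log(&mut self, _row: &str) {
        self.seen += 1;
        if self.verbose || self.seen % self.sample_rate == 0 {
            self.logged += 1; // in real code: write the row to Scuba
        }
    }
}

fn main() {
    let mut logger = SampledLogger::new(100);
    for i in 0..10_000 {
        logger.log(&format!("gettreepack row {}", i));
    }
    // A factor-of-100 sample keeps ~1% of rows.
    assert_eq!(logger.logged, 100);
    println!("logged {} of {} rows", logger.logged, logger.seen);
}
```

Aggregated perf counters derived from the sampled rows lose accuracy by the same factor, which is the trade-off the paragraph above describes.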
Reviewed By: mitrandir77, krallin
Differential Revision: D26148453
fbshipit-source-id: a8521364bb5323d41c6c0c7d82d50508c0eda068
Summary:
This allows us to log sampled messages, but reserves the option of falling back to full verbose logging in critical situations.
Note that while this might be a desired behavior in most cases, it's certainly not always the right thing to do: sometimes sampled data needs to remain sampled, even for verbose logging.
Reviewed By: ahornby
Differential Revision: D26148454
fbshipit-source-id: c6ff9d1b05c9cec4895181e008ef6483884bb483
Summary:
For now, they all pretend not to be available, except the null one, which does
nothing, as per the RFC.
Reviewed By: genevievehelsel
Differential Revision: D26159846
fbshipit-source-id: 8d0f43f6bacc5c5a93e883e527769cb7a3b6e22b
Summary: That way the NFS procedure won't clash with it.
Reviewed By: genevievehelsel
Differential Revision: D26159845
fbshipit-source-id: 22ce07326f9ec42aa9d44352ae5bb71368337c03
Summary:
For now, this only registers itself against rpcbind and always replies that the
procedure is unavailable. In the future, this will service all the procedures
and forward them to a Dispatcher.
Reviewed By: genevievehelsel
Differential Revision: D26159844
fbshipit-source-id: 21908f1333ed41b3eea3fb5ce19c8e68391df103
Summary:
We do have logs for what we receive; let's do the same for what we send back.
Reviewed By: genevievehelsel
Differential Revision: D26152811
fbshipit-source-id: 8e605f78a8c849f3bd65b70be51617fc058330ff
Summary:
In the NFS spec, fhandle3 is defined as an opaque byte array, and thus its
size must precede its content. Let's also move it to NfsdRpc.h, as this type
will be predominantly used by the Nfsd RPC program.
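For reference, this is how XDR variable-length opaque data (RFC 4506) is laid out on the wire, which is the encoding fhandle3 uses: a 4-byte big-endian length precedes the bytes, padded to a 4-byte boundary. A small illustrative sketch, not EdenFS's actual serializer:

```rust
fn xdr_encode_opaque(data: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(4 + data.len() + 3);
    // XDR: the length comes first, as a big-endian u32.
    out.extend_from_slice(&(data.len() as u32).to_be_bytes());
    out.extend_from_slice(data);
    // XDR pads opaque data with zeros to a multiple of 4 bytes.
    while out.len() % 4 != 0 {
        out.push(0);
    }
    out
}

fn main() {
    let handle = [0xde, 0xad, 0xbe, 0xef, 0x01];
    let encoded = xdr_encode_opaque(&handle);
    // 4 bytes length + 5 bytes data + 3 bytes padding = 12 bytes.
    assert_eq!(encoded.len(), 12);
    assert_eq!(&encoded[..4], &[0, 0, 0, 5]);
    println!("{:02x?}", encoded);
}
```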
Reviewed By: chadaustin
Differential Revision: D26152812
fbshipit-source-id: 0cc37325078a2c7b58551eaa5177436b21e03838
Summary:
governor supersedes ratelimit_meter and provides async APIs, which means we don't need to use our own async_limiter version.
This is in preparation for the next diff D26021464 which uses governor's update_n_ready() api for byte rate limiting, rather than adding it to AsyncLimiter in D26021464.
Reviewed By: krallin
Differential Revision: D26153156
fbshipit-source-id: c0b79baee3b71c770353152c6d7c63f616171c86
Summary:
You can't start Mononoke in mode/dev right now: the startup stalls because
creating Memcache takes ~15 seconds, and if it overlaps between acquiring a SQL
connection and dispatching a query (highly likely when instantiating repos),
you get a connection that sits unused for too long.
Reviewed By: farnz
Differential Revision: D26250069
fbshipit-source-id: fec67cd98895db0358e3f47a6e7d1d6b1cef61a1
Summary:
Like it says in the title. This adds a crate that provides a combinator that
lets us easily find stalls caused by futures that stay in `poll()` for too
long.
The goal is to make this minimal overhead for whoever is using it: all you need
is to import it + give it a logger. It automatically looks up the line where
it's called and gives it back to you in logs. This uses the `track_caller`
functionality to make this work.
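The `track_caller` trick can be shown with a stripped-down sketch (hypothetical types, not the crate's real API): because the constructor is annotated with `#[track_caller]`, `Location::caller()` resolves to the *call site*, so stall reports point at the code that created the future without the caller passing any location argument.

```rust
use std::panic::Location;
use std::time::{Duration, Instant};

struct StallGuard {
    location: &'static Location<'static>,
    threshold: Duration,
    started: Instant,
}

impl StallGuard {
    #[track_caller]
    fn new(threshold: Duration) -> Self {
        Self {
            // Resolves to the file/line where `new` was invoked.
            location: Location::caller(),
            threshold,
            started: Instant::now(),
        }
    }

    /// Report a stall if more than `threshold` has elapsed, naming the
    /// call site that created this guard.
    fn check(&self) -> Option<String> {
        let elapsed = self.started.elapsed();
        if elapsed > self.threshold {
            Some(format!(
                "stall at {}:{} ({:?} elapsed)",
                self.location.file(),
                self.location.line(),
                elapsed
            ))
        } else {
            None
        }
    }
}

fn main() {
    let guard = StallGuard::new(Duration::from_millis(0));
    std::thread::sleep(Duration::from_millis(5));
    let report = guard.check().expect("threshold exceeded");
    assert!(report.contains("stall at"));
    println!("{}", report);
}
```

The real combinator would do this timing inside `poll()` rather than across the future's whole lifetime, but the location-capture mechanism is the same.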
Reviewed By: farnz
Differential Revision: D26250068
fbshipit-source-id: a1458e5adebac7eab6c2de458f679c7215147937
Summary:
Like it says in the title, this adds support for exposing EdenAPI in Mononoke
Server. That's it!
Differential Revision: D26131777
fbshipit-source-id: 15ed2d6d80b1ea06763adc0b7312d1cab2df5b76
Summary:
We always log to the same destination, but as it stands this is a little
annoying for, e.g., hooking in EdenAPI, because that wants 1 Scuba dataset for
the whole thing instead of one per repo.
This also means that logging to Scuba for things that aren't tied to a repo
isn't possible, which might explain why our logging in connection_acceptor
is a) very sparse and b) only logs to stderr.
To make the transition smooth here, I added support for a default Scuba
table and used this in Mononoke Server. I think we can remove this after
updating our tasks to explicitly receive the dataset as an argument.
Reviewed By: StanislavGlebik
Differential Revision: D26126012
fbshipit-source-id: 0fa92dc8c5d5ddeed99dd7d9dd5a2288b8300bf3
Summary:
This isn't something we've been enjoying, so before adding another arg, let's
clean up.
Reviewed By: ahornby
Differential Revision: D26126011
fbshipit-source-id: a7d25cb664b5410b0d9c8fbfc70cf879db395e4e
Summary:
Like it says in the title, this splits edenapi_server into 2 crates. Most of
the logic here is pretty straightforward:
- edenapi_service creates a Gotham handler
- edenapi_server instantiates the handler and sets up a socket, tls, etc.
Reviewed By: ahornby
Differential Revision: D26108439
fbshipit-source-id: 6a79e9767ba891265bca11f78eb1a6d3a61ee21f
Summary:
I'd like to support bridging to EdenAPI Server in Mononoke Server. Mononoke
Server already performs an ACL check for trusted proxies when the client
connects, I'd like to pass this information to EdenAPI to avoid re-doing
a check we've already done.
This allows that.
Reviewed By: ahornby
Differential Revision: D26108441
fbshipit-source-id: f0a294e340f38d039b3ba30a4c262c4a8ccbb318
Summary:
Like it says in the title. Explicitly changing behavior between tests and prod
seems a bit random, and it had two callsites only:
- In commit sync config to avoid having to setup a Configerator config, which
in turn means we take a totally new codepath in tests. That seems a bit
misguided: let's just set an empty config.
- In config source setup, which is a lot more legit, but also somewhat
unnecessary: instead of passing `--test-instance`, we can just pass the
config source itself.
Reviewed By: StanislavGlebik
Differential Revision: D26108442
fbshipit-source-id: a2c112d175031708646efacd5c02dd36be0c3eac
Summary:
Mononoke API already has an instance of this (though it's created per-repo,
which is a little bit awkward — I'll try to change that later), so we might
as well use it.
Reviewed By: StanislavGlebik
Differential Revision: D26108438
fbshipit-source-id: 3b5e7d5d3427304cc788930cbe9a51a6a6d214b9
Summary:
This is something we do in our other web services and which we should continue
to do here. This ensures proper draining from Proxygen.
Reviewed By: StanislavGlebik
Differential Revision: D26108440
fbshipit-source-id: 16f3941cce4a6b29c5091d10f1c887d099cfb69f
Summary:
Like it says in the title, this updates our repo construction to rely on
Mononoke API. My underlying goal here is to have a Mononoke instance around so
that I can start EdenAPI on it, but it also allows for a bunch of cleanup &
code deduplication.
There is still some stuff that isn't initialized in Mononoke API and probably
does not belong there, but at least the shared pieces now come from there. I
also did keep the `Arc<Repo>` around in Mononoke Server's `MononokeRepo`, so
this way we can start to migrate things to Mononoke API (instead of
de-constructing my `Repo` and getting the parts I need to stuff them into
`MononokeRepo`).
One part of this that might be a bit controversial is that I exposed some of
the internals of `Repo` via accessor methods. I know we've historically
wanted access via Mononoke API to not use the fields but instead use the
RepoContext, and I think that's a good goal, but (IMO) realistically the only
way we get there is by first making Mononoke API *available* to use in
repo_client (which is what this ends up doing), and then we can port things to
call Mononoke API instead of using blobrepo and such directly.
To make this work properly I also updated our tests to default to always
set up Configerator configs when starting Mononoke, since we need them to start
MononokeApi (for the CfgrLiveCommitSyncConfig, which right now has an ad-hoc
"ignore the failures in test mode" branch in Mononoke Server).
Reviewed By: markbt
Differential Revision: D26108443
fbshipit-source-id: b7cf5452e044828e73a0aa3ca3ddbc78e466fe57
Summary:
Some tests access sqlite DBs while Mononoke is also accessing them. Make this
less aggressive.
Here's an example: https://fburl.com/sandcastle/jpia9lzz
Reviewed By: StanislavGlebik
Differential Revision: D26252044
fbshipit-source-id: 333f519ca211c2b5d06bfc8a35be9c1af6a15b0a
Summary:
Those tests tweak the bookmarks from outside Mononoke, so they need a little
bit of flushing to make sure the bookmarks are visible when we try to pull
them in Mononoke.
Reviewed By: StanislavGlebik
Differential Revision: D26200830
fbshipit-source-id: 2e84e06fdbd47e08103ee6a74147f3b505140c0d
Summary: Add an option to allow remaining deferred edges at the end of a walker run so that any repos with unresolved edges can still be tailed.
Reviewed By: StanislavGlebik
Differential Revision: D26230927
fbshipit-source-id: 19eed6a616f722d522c7bca30bbe3bc4dae08655
Summary: Add support for opening the checkpoint database from metadata db config
Differential Revision: D26100513
fbshipit-source-id: 094fab028395ed0324421488bf83b3762c43799a
Summary: If a checkpoint gets too old, we don't want to rely on it; instead, start a new walk from scratch and then update the checkpoint.
Differential Revision: D25995107
fbshipit-source-id: 1e05030926123e1066c9b5a42330028d7786c1f3
Summary:
Walker checkpoints allow a scrub or other walk to continue where it left off. This is useful as we can release new code without making the scrub start from scratch again.
This change adds checkpoint loading and recording to tail.rs along with a new test for it.
When restarting from a checkpoint, the code considers the unfinished checkpoint itself as the main_bounds and new commits since the checkpoint as the catchup_bounds.
If there is no checkpoint at all, the repo bounds are used as the main_bounds.
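The bounds logic above can be sketched as follows (hypothetical id ranges and names, not the walker's real types): an unfinished checkpoint becomes the main bounds, and anything committed after it becomes the catchup bounds.

```rust
#[derive(Debug, PartialEq)]
struct Bounds {
    low: u64,
    high: u64,
}

fn resolve_bounds(
    repo_bounds: Bounds,
    checkpoint: Option<Bounds>,
) -> (Bounds, Option<Bounds>) {
    match checkpoint {
        // No checkpoint: walk the whole repo, nothing to catch up on.
        None => (repo_bounds, None),
        // Checkpoint present: finish it as main_bounds, then catch up
        // on commits that landed after the checkpoint's high bound.
        Some(cp) => {
            let catchup = if repo_bounds.high > cp.high {
                Some(Bounds { low: cp.high, high: repo_bounds.high })
            } else {
                None
            };
            (cp, catchup)
        }
    }
}

fn main() {
    let (main_bounds, catchup) = resolve_bounds(
        Bounds { low: 0, high: 1000 },
        Some(Bounds { low: 0, high: 800 }),
    );
    assert_eq!(main_bounds, Bounds { low: 0, high: 800 });
    assert_eq!(catchup, Some(Bounds { low: 800, high: 1000 }));
    println!("main={:?} catchup={:?}", main_bounds, catchup);
}
```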
Differential Revision: D25995106
fbshipit-source-id: e1663091e4b1157541b256f36b354bbf316a92c9
Summary:
Fix some warnings in the Mononoke build:
- URLs in doc comments should be delimited with `<` and `>`.
- Permission checker `try_from_ssh_encoded` parameter is unused.
Reviewed By: krallin
Differential Revision: D26224590
fbshipit-source-id: 49ce62655189a7045b78538642dbf638519f71de
Summary: Use normal ? style rather than panicking with expect.
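The style change looks like this in miniature (a hypothetical function, not the actual walker code): propagate errors with `?` and let the caller decide, instead of panicking with `.expect(...)`.

```rust
use std::num::ParseIntError;

// Before: fn parse_port(s: &str) -> u16 { s.parse().expect("bad port") }
// After: the error is returned, not turned into a panic.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    let port: u16 = s.parse()?;
    Ok(port)
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("not-a-port").is_err());
    println!("ok");
}
```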
Reviewed By: krallin
Differential Revision: D26176912
fbshipit-source-id: d04ebc4b6c04dd1f8f34b49bee350b52feb11ec1
Summary:
We have the code that checks the value of the argument, but we don't have the
actual entry in the argument list.
Reviewed By: quark-zju
Differential Revision: D26236489
fbshipit-source-id: a418f309a73430915ee8b130adb3b9a92ceecc23
Summary:
Reduce local heads from unfiltered raw heads to visible heads. Reduce remote
heads from all heads to selected heads, plus those explicitly specified via
`-r`, `-B`, or via `repo.pull`.
This should speed up both pull and push for repos with lots of heads (ex.
fbsource), and make fastdiscovery less necessary.
Reviewed By: DurhamG
Differential Revision: D26207588
fbshipit-source-id: b64485566e0651ad47a5d1ee47e68301ba371e57