Summary:
Consider test-blobstore_healer.t. We write to 3 blobstores, and expect 1 to
fail all the time and the others to have all the blobs. Except this isn't
really the guarantee the multiplex makes when you configure it with just 1
store to write to (it guarantees the blob is in ONE store, not 2).
The relevant failure is: P172423216.
So let's have a config that reflects our expectations. There are other tests
that use the Multiplex. I'll let CI tell me if they need changing.
Hopefully this should eliminate one more source of test flakiness...
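The guarantee described above can be sketched as a write quorum. This is a minimal illustration (hypothetical types, not Mononoke's actual MultiplexedBlobstore): a put succeeds once at least `min_successful` underlying stores accept the blob, so with a quorum of 1 an always-failing store does not imply the remaining stores hold every blob.

```rust
use std::collections::HashMap;

trait Blobstore {
    fn put(&mut self, key: &str, value: Vec<u8>) -> Result<(), String>;
}

// An in-memory store that always succeeds.
struct MemBlobstore(HashMap<String, Vec<u8>>);

impl Blobstore for MemBlobstore {
    fn put(&mut self, key: &str, value: Vec<u8>) -> Result<(), String> {
        self.0.insert(key.to_string(), value);
        Ok(())
    }
}

// A store that fails every write, like the failing store in the test.
struct FailingBlobstore;

impl Blobstore for FailingBlobstore {
    fn put(&mut self, _key: &str, _value: Vec<u8>) -> Result<(), String> {
        Err("store unavailable".to_string())
    }
}

struct Multiplex {
    stores: Vec<Box<dyn Blobstore>>,
    min_successful: usize,
}

impl Multiplex {
    // Write to every store; succeed if at least `min_successful` acks,
    // returning how many stores actually hold the blob.
    fn put(&mut self, key: &str, value: &[u8]) -> Result<usize, String> {
        let mut ok = 0;
        for store in &mut self.stores {
            if store.put(key, value.to_vec()).is_ok() {
                ok += 1;
            }
        }
        if ok >= self.min_successful {
            Ok(ok)
        } else {
            Err("not enough successful writes".to_string())
        }
    }
}
```

With `min_successful: 1` and one failing store out of three, a put reports success with only two copies written, which is exactly why expecting all healthy stores to have all blobs is too strong.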
Reviewed By: StanislavGlebik
Differential Revision: D26275304
fbshipit-source-id: 9184420b2ce0990d91e4ec6d38d28ec5ae87775d
Summary: Noticed this test was flaky due to indeterminate output order
Reviewed By: ikostia
Differential Revision: D26274078
fbshipit-source-id: 29b23072f849d98b73c39046af500b57b91477d0
Summary:
The revset `cloudremote` is no longer used anywhere in the codebase.
It is also related to obsmarkers, which are no longer used either.
Reviewed By: quark-zju
Differential Revision: D26250792
fbshipit-source-id: b55b8d52c44869f50d5b5f5d8ef2e6c2fac07597
Summary: The function is used in many places, and I noticed there are some issues with commit cloud due to a bug where visible heads can contain public commits.
Reviewed By: quark-zju
Differential Revision: D26250556
fbshipit-source-id: e57e447dee803719fcf38cf376ad5af569d8020d
Summary: I have noticed that server-side status code returned by speedtest's upload changed to 200. It's on master but it's not yet out in the wild.
Reviewed By: johansglock
Differential Revision: D26249530
fbshipit-source-id: 0f6e77ad4f9daf7f7a3bbc216f20435252101078
Summary: Update walker to use new common cmdlib scrub options if present. These are common across admin, walker and scrub so they were moved up to cmdlib.
Reviewed By: krallin
Differential Revision: D25976408
fbshipit-source-id: 430bb0c6e8b78470afdfc7cebc44c6645492c6fe
Summary: Update to use the new common argument from the cmdlib version
Reviewed By: krallin
Differential Revision: D25976404
fbshipit-source-id: a1089b82e6455254fed32317e76764498dcfa130
Summary: Scrub options are of use to the walker, manual_scrub and admin tools, so they should be cmdlib blobstore options rather than per-tool.
Reviewed By: krallin
Differential Revision: D25928062
fbshipit-source-id: a5bbf518c4e5d97275fb3d8effd923fcca691891
Summary:
Previously, for configs that are only read once, EdenFS would have to be
stopped, the config written, and then EdenFS would be restarted. For Mercurial,
this increases the test time significantly as starting EdenFS takes ~20s.
Reviewed By: fanzeyi
Differential Revision: D26258174
fbshipit-source-id: a74d1e5be35044e95e5a7403f1bf28d557b613d2
Summary:
Since mounting EdenFS via NFS requires the same privilege as mounting EdenFS
with FUSE, let's re-use the PrivHelper infrastructure that FUSE already uses.
The macOS bit isn't yet implemented as developing on Linux is easier for now;
the mount args are also overly complicated on macOS and not well documented.
Reviewed By: fanzeyi
Differential Revision: D26255629
fbshipit-source-id: 295261dd40442fe7e0f9439c4f4c25e0d50211a3
Summary: This enables the repository to be mounted via NFS, and not FUSE.
Reviewed By: chadaustin
Differential Revision: D26229827
fbshipit-source-id: 5af5a47ebe5f1dd54df7707bf57d9b7476921f29
Summary:
Accumulate every commit transition in the journal, rather than
treating every merged range of journal deltas as having a "from"
commit and a "to" commit. This makes it possible for Watchman to
accurately report the set of changed files across an ABA situation or
rapid `hg next` and `hg prev` operations.
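The difference described above can be sketched as follows (hypothetical types, not EdenFS's actual journal code): merging a range of deltas down to endpoints loses an A → B → A round trip, while accumulating every transition preserves it.

```rust
#[derive(Debug, Clone, PartialEq)]
struct Delta {
    from_commit: String,
    to_commit: String,
}

/// Old behavior: a merged range only remembers a "from" and a "to" commit,
/// so an ABA sequence collapses into "nothing changed".
fn merge_endpoints(deltas: &[Delta]) -> Option<Delta> {
    Some(Delta {
        from_commit: deltas.first()?.from_commit.clone(),
        to_commit: deltas.last()?.to_commit.clone(),
    })
}

/// New behavior: every commit transition is kept, so rapid
/// `hg next` / `hg prev` operations remain visible to Watchman.
fn accumulate_transitions(deltas: &[Delta]) -> Vec<(String, String)> {
    deltas
        .iter()
        .map(|d| (d.from_commit.clone(), d.to_commit.clone()))
        .collect()
}
```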
Reviewed By: genevievehelsel
Differential Revision: D26193478
fbshipit-source-id: 8b54b9d5bcefa1811008a3b6e9c3aa25a69471ca
Summary:
`gettreepack` accounts for ~6B logged scuba rows a day (https://fburl.com/scuba/mononoke_test_perf/vpnsn1ny) out of ~10B total logged rows (https://fburl.com/scuba/mononoke_test_perf/qw78ecxe), so 60% of rows. For the vast majority of `gettreepack` instances we log 3 log tags: "Start processing", "Gettreepack params" and "Command processed". Similarly, the vast majority of requests include just 1 mfnode: https://fburl.com/scuba/mononoke_test_perf/3xwotsgq. If we sample logging for these commands by a factor of 100, we'll be able to save almost all of these 60% of rows (it's not entirely clear how that will actually influence our retention, but likely pretty significantly).
What do we lose if we do this sampling?
There are a few perf counters, like GettreepackResponseSize, GettreepackNumTreepacks, GettreepackDirectories, GettreepackDesignatedNodes, that will lose their aggregation accuracy. Given that we're only sampling single-mfnode gettreepacks, these values are not likely to be very interesting. However, we are still leaving the possibility to turn verbose logging back on and get the full amount of logging.
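A minimal sketch of the scheme (hypothetical names, not Mononoke's actual Scuba client): with sampling on, only 1 in `sample_rate` rows is emitted and each carries the rate so aggregations can be scaled back up, while a `verbose` switch bypasses sampling entirely for critical situations.

```rust
struct SampledLogger {
    sample_rate: u64,
    verbose: bool,
    counter: u64,
    emitted: Vec<String>,
}

impl SampledLogger {
    fn new(sample_rate: u64) -> Self {
        SampledLogger {
            sample_rate,
            verbose: false,
            counter: 0,
            emitted: Vec::new(),
        }
    }

    fn log(&mut self, row: &str) {
        self.counter += 1;
        // Verbose mode bypasses sampling entirely; otherwise keep 1 in N
        // and record the rate so counts can be scaled back up downstream.
        if self.verbose || self.counter % self.sample_rate == 0 {
            self.emitted
                .push(format!("{} (sample_rate={})", row, self.sample_rate));
        }
    }
}
```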
Reviewed By: mitrandir77, krallin
Differential Revision: D26148453
fbshipit-source-id: a8521364bb5323d41c6c0c7d82d50508c0eda068
Summary:
This allows us to log sampled messages, but reserves an option of falling back to full verbose logging in critical situations.
Note that while this might be a desired behavior in most cases, it's certainly not always the right thing to do: sometimes sampled data needs to remain sampled, even for verbose logging.
Reviewed By: ahornby
Differential Revision: D26148454
fbshipit-source-id: c6ff9d1b05c9cec4895181e008ef6483884bb483
Summary:
For now, they all pretend to not be available, except the null one, which does
nothing as per the RFC.
Reviewed By: genevievehelsel
Differential Revision: D26159846
fbshipit-source-id: 8d0f43f6bacc5c5a93e883e527769cb7a3b6e22b
Summary: That way the NFS procedure won't clash with it.
Reviewed By: genevievehelsel
Differential Revision: D26159845
fbshipit-source-id: 22ce07326f9ec42aa9d44352ae5bb71368337c03
Summary:
For now, this only registers itself against rpcbind and always replies that the
procedure is unavailable. In the future, this will service all the procedures
and forward them to a Dispatcher.
Reviewed By: genevievehelsel
Differential Revision: D26159844
fbshipit-source-id: 21908f1333ed41b3eea3fb5ce19c8e68391df103
Summary:
We do have logs for what we receive, let's do the same for what we are sending
back.
Reviewed By: genevievehelsel
Differential Revision: D26152811
fbshipit-source-id: 8e605f78a8c849f3bd65b70be51617fc058330ff
Summary:
In the NFS spec, the fhandle3 is defined as an opaque byte array, and thus its
size must precede its content. Let's also move it to NfsdRpc.h, as this type
will be predominantly used by the Nfsd RPC program.
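The encoding described above is XDR's variable-length opaque (RFC 4506): a big-endian 4-byte length precedes the bytes, and the payload is zero-padded to a 4-byte boundary. A minimal illustrative sketch (not EdenFS's actual C++ serializer):

```rust
// Encode a byte array as XDR variable-length opaque data:
// 4-byte big-endian length, then the bytes, zero-padded to a
// multiple of 4 bytes.
fn encode_opaque(data: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(4 + data.len() + 3);
    out.extend_from_slice(&(data.len() as u32).to_be_bytes());
    out.extend_from_slice(data);
    while out.len() % 4 != 0 {
        out.push(0);
    }
    out
}

// Decode, returning the payload and how many bytes were consumed
// (length word + payload + padding). Returns None on a short buffer.
fn decode_opaque(buf: &[u8]) -> Option<(Vec<u8>, usize)> {
    let len = u32::from_be_bytes(buf.get(..4)?.try_into().ok()?) as usize;
    let data = buf.get(4..4 + len)?.to_vec();
    let consumed = 4 + (len + 3) / 4 * 4;
    Some((data, consumed))
}
```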
Reviewed By: chadaustin
Differential Revision: D26152812
fbshipit-source-id: 0cc37325078a2c7b58551eaa5177436b21e03838
Summary:
governor supersedes ratelimit_meter and provides async APIs, which means we don't need to use our own async_limiter version.
This is in preparation for the next diff D26021464 which uses governor's update_n_ready() api for byte rate limiting, rather than adding it to AsyncLimiter in D26021464.
Reviewed By: krallin
Differential Revision: D26153156
fbshipit-source-id: c0b79baee3b71c770353152c6d7c63f616171c86
Summary:
You can't start Mononoke in mode/dev right now: the startup stalls because
creating Memcache takes ~15 seconds, and if it overlaps between acquiring a SQL
connection and dispatching a query (highly likely when instantiating repos),
you get a connection that sits unused for too long.
Reviewed By: farnz
Differential Revision: D26250069
fbshipit-source-id: fec67cd98895db0358e3f47a6e7d1d6b1cef61a1
Summary:
Like it says in the title. This adds a crate that provides a combinator that
lets us easily find stalls caused by futures that stay in `poll()` for too
long.
The goal is to make this minimal overhead for whoever is using it: all you need
is to import it + give it a logger. It automatically looks up the line where
it's called and gives it back to you in logs. This uses the `track_caller`
functionality to make this work.
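The `track_caller` mechanism mentioned above can be sketched with standard-library pieces only (hypothetical names, and timing a closure rather than a real `poll()`): the constructor records the caller's file and line, and stall reports point back at that location.

```rust
use std::panic::Location;
use std::time::{Duration, Instant};

struct StallDetector {
    created_at: &'static Location<'static>,
    max_poll: Duration,
}

impl StallDetector {
    // `#[track_caller]` makes `Location::caller()` resolve to the line
    // where `StallDetector::new` was invoked, not this function body,
    // so the user only needs to call `new` to get useful logs.
    #[track_caller]
    fn new(max_poll: Duration) -> Self {
        StallDetector {
            created_at: Location::caller(),
            max_poll,
        }
    }

    // Time one unit of work (standing in for a single `poll()` call) and
    // report a stall that names the original callsite.
    fn check<F: FnOnce()>(&self, work: F) -> Option<String> {
        let start = Instant::now();
        work();
        let elapsed = start.elapsed();
        if elapsed > self.max_poll {
            Some(format!(
                "stall at {}:{} ({:?} > {:?})",
                self.created_at.file(),
                self.created_at.line(),
                elapsed,
                self.max_poll
            ))
        } else {
            None
        }
    }
}
```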
Reviewed By: farnz
Differential Revision: D26250068
fbshipit-source-id: a1458e5adebac7eab6c2de458f679c7215147937
Summary:
Like it says in the title, this adds support for exposing EdenAPI in Mononoke
Server. That's it!
Differential Revision: D26131777
fbshipit-source-id: 15ed2d6d80b1ea06763adc0b7312d1cab2df5b76
Summary:
We always log to the same destination, but this is a little annoying as it
stands for e.g. hooking in EdenAPI because that wants 1 Scuba dataset for the
whole thing instead of per-repo.
This also means that logging to Scuba for things that aren't tied to a repo
isn't possible, which might explain why our logging in connection_acceptor
is a) very sparse and b) only logs to stderr.
To make the transition smooth here, I added support for a default Scuba
table and used this in Mononoke Server. I think we can remove this after
updating our tasks to explicitly receive the dataset as an argument.
Reviewed By: StanislavGlebik
Differential Revision: D26126012
fbshipit-source-id: 0fa92dc8c5d5ddeed99dd7d9dd5a2288b8300bf3
Summary:
This isn't something we've been enjoying, so let's clean up before adding
another arg.
Reviewed By: ahornby
Differential Revision: D26126011
fbshipit-source-id: a7d25cb664b5410b0d9c8fbfc70cf879db395e4e
Summary:
Like it says in the title, this splits edenapi_server into 2 crates. Most of
the logic here is pretty straightforward:
- edenapi_service creates a Gotham handler
- edenapi_server instantiates the handler and sets up a socket, tls, etc.
Reviewed By: ahornby
Differential Revision: D26108439
fbshipit-source-id: 6a79e9767ba891265bca11f78eb1a6d3a61ee21f
Summary:
I'd like to support bridging to EdenAPI Server in Mononoke Server. Mononoke
Server already performs an ACL check for trusted proxies when the client
connects, I'd like to pass this information to EdenAPI to avoid re-doing
a check we've already done.
This allows that.
Reviewed By: ahornby
Differential Revision: D26108441
fbshipit-source-id: f0a294e340f38d039b3ba30a4c262c4a8ccbb318
Summary:
Like it says in the title. Explicitly changing behavior between tests and prod
seems a bit random, and it had two callsites only:
- In commit sync config to avoid having to setup a Configerator config, which
in turn means we take a totally new codepath in tests. That seems a bit
misguided: let's just set an empty config.
- In config source setup, which is a lot more legit, but also somewhat
unnecessary: instead of passing `--test-instance`, we can just pass the
config source itself.
Reviewed By: StanislavGlebik
Differential Revision: D26108442
fbshipit-source-id: a2c112d175031708646efacd5c02dd36be0c3eac
Summary:
Mononoke API already has an instance of this (though it's created per-repo,
which is a little bit awkward — I'll try to change that later), so we might
as well use it.
Reviewed By: StanislavGlebik
Differential Revision: D26108438
fbshipit-source-id: 3b5e7d5d3427304cc788930cbe9a51a6a6d214b9
Summary:
This is something we do in our other web services and which we should continue
to do here. This ensures proper draining from Proxygen.
Reviewed By: StanislavGlebik
Differential Revision: D26108440
fbshipit-source-id: 16f3941cce4a6b29c5091d10f1c887d099cfb69f
Summary:
Like it says in the title, this updates our repo construction to rely on
Mononoke API. My underlying goal here is to have a Mononoke instance around so
that I can start EdenAPI on it, but it also allows for a bunch of cleanup &
code deduplication.
There is still some stuff that isn't initialized in Mononoke API and probably
does not belong there, but at least the shared pieces now come from there. I
also did keep the `Arc<Repo>` around in Mononoke Server's `MononokeRepo`, so
this way we can start to migrate things to Mononoke API (instead of
de-constructing my `Repo` and getting the parts I need to stuff them into
`MononokeRepo`).
One part of this that might be a bit controversial is that I exposed some of
the internals of `Repo` via accessor methods. I know we've historically
wanted access via Mononoke API to not use the fields but instead use the
RepoContext, and I think that's a good goal, but (IMO) realistically the only
way we get there is by first making Mononoke API *available* to use in
repo_client (which is what this ends up doing), and then we can port things to
call Mononoke API instead of using blobrepo and such directly.
To make this work properly I also updated our tests to default to always
set up Configerator configs when starting Mononoke, since we need them to start
MononokeApi (for the CfgrLiveCommitSyncConfig, which right now has an ad-hoc
"ignore the failures in test mode" branch in Mononoke Server).
Reviewed By: markbt
Differential Revision: D26108443
fbshipit-source-id: b7cf5452e044828e73a0aa3ca3ddbc78e466fe57
Summary:
Some tests access sqlite DBs while Mononoke is also accessing them. Make this
less aggressive.
Here's an example: https://fburl.com/sandcastle/jpia9lzz
Reviewed By: StanislavGlebik
Differential Revision: D26252044
fbshipit-source-id: 333f519ca211c2b5d06bfc8a35be9c1af6a15b0a
Summary:
Those tests tweak the bookmarks from outside Mononoke, so they need a little
bit of flushing to make sure the bookmarks are visible when we try to pull
them in Mononoke.
Reviewed By: StanislavGlebik
Differential Revision: D26200830
fbshipit-source-id: 2e84e06fdbd47e08103ee6a74147f3b505140c0d
Summary: Add option to allow remaining deferred edges at the end of a walker run so that any repos with unresolved edges can still be tailed.
Reviewed By: StanislavGlebik
Differential Revision: D26230927
fbshipit-source-id: 19eed6a616f722d522c7bca30bbe3bc4dae08655
Summary: Add support for opening the checkpoint database from metadata db config
Differential Revision: D26100513
fbshipit-source-id: 094fab028395ed0324421488bf83b3762c43799a