Summary: Improvements aim to minimize the number of DB queries.
Differential Revision: D20280711
fbshipit-source-id: 6cc06f1ac4ed8db9978e0eee956550fcd16bbe8a
Summary:
Implementation of derivation logic for the changeset info.
BonsaiDerived is implemented for ChangesetInfo. `derive_from_parents` just derives the info, and BonsaiDerivedMapping then puts it into the blobstore.
```
ChangesetInfo::derive(..) -> ChangesetInfo
```
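A minimal, self-contained sketch of the derive-from-parents shape, with stand-in types; Mononoke's real `BonsaiDerived` trait and signatures differ:
```
// Stand-in types; the real BonsaiChangeset/ChangesetInfo carry more fields.
struct BonsaiChangeset {
    message: String,
    author: String,
}

struct ChangesetInfo {
    message: String,
    author: String,
}

impl ChangesetInfo {
    // Derivation ignores the parents' infos: the result depends only on the
    // changeset's own metadata, never on its file changes.
    fn derive_from_parents(bcs: &BonsaiChangeset, _parents: Vec<ChangesetInfo>) -> ChangesetInfo {
        ChangesetInfo {
            message: bcs.message.clone(),
            author: bcs.author.clone(),
        }
    }
}
```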
Reviewed By: krallin
Differential Revision: D20185954
fbshipit-source-id: afe609d1b2711aed7f2740714df6b9417c6fe716
Summary:
Introducing data structures for derived Bonsai changeset info, which is supposed to store all commit metadata except for the file changes.
Bonsai changeset consists of the commit metadata and a set of all the file changes associated with the commit.
Some changesets, usually merge commits, include thousands of file changes. That is not a problem by itself; however, whenever we need some information about a commit apart from its hash, we have to fetch the whole changeset, which can take up to 15-20 seconds.
Changeset info as a separate data structure is needed to speed up changeset fetching when we need the commit metadata but not the file changes.
Reviewed By: markbt
Differential Revision: D20139434
fbshipit-source-id: 4faab267304d987b44d56994af9e36b6efabe02a
Summary:
The new API is required for migrating Commit Cloud off hg servers and the infinitepush database.
This can also fix phases issues with `hg cloud sl`.
Reviewed By: markbt
Differential Revision: D20221913
fbshipit-source-id: 67ddceb273b8c6156c67ce5bc7e71d679e8999b6
Summary:
Fix the tail interval delay; it wasn't triggering.
Took the opportunity to structure the code as a loop as well, which simplified it a bit.
Reviewed By: markbt
Differential Revision: D20247077
fbshipit-source-id: 1786ef1528a4b0493f5e454d28450d7198af8ad4
Summary: The `rust-crypto` crate has not been maintained; replacing it with the `sha-1` crate, since SHA-1 is the only algorithm used in this library.
Reviewed By: dtolnay
Differential Revision: D20236029
fbshipit-source-id: 9c4ff25f393b099ec9570a7badbe4b378fbd98af
Summary:
Previously, the warm bookmark cache tried to derive all bookmarks on startup. That slows down startup, and in some cases it might prevent the scs server from starting up at all.
Let's change how the warm bookmark cache initializes bookmarks: instead of trying to derive all of them, let's move underived bookmarks back in history, as sketched below.
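A hedged sketch of the "move back in history" idea (stand-in types and helper closures, not the warm bookmark cache's actual code): walk parents from the bookmark tip until a changeset with already-derived data is found.
```
type ChangesetId = String; // stand-in for the real changeset id type

// Walk back from the bookmark tip until we find a changeset whose derived
// data already exists; the bookmark cache can then start from there instead
// of deriving everything at startup.
fn find_derived_ancestor(
    mut cs: ChangesetId,
    is_derived: impl Fn(&ChangesetId) -> bool,
    parent: impl Fn(&ChangesetId) -> Option<ChangesetId>,
) -> Option<ChangesetId> {
    loop {
        if is_derived(&cs) {
            return Some(cs);
        }
        // Reached a root without finding derived data.
        cs = parent(&cs)?;
    }
}
```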
Reviewed By: krallin
Differential Revision: D20195211
fbshipit-source-id: 5cb5d8599d3035973175d3063186a7c01536889a
Summary:
We didn't use DelayBlob at all; however, we use DelayedBlobstore in the benchmark
lib. DelayedBlobstore seems to have more useful options, so let's remove
DelayBlob and use DelayedBlobstore instead.
Reviewed By: farnz
Differential Revision: D20245865
fbshipit-source-id: bd694a0e178367014adc2776185450693f87475d
Summary:
Context: https://fb.workplace.com/groups/rust.language/permalink/3338940432821215/
In targets that depend on both 0.1 and 0.2 tokio, this codemod renames the 0.1 dependency to be exposed as tokio_old::. This is in preparation for flipping the 0.2 dependencies from tokio_preview:: to plain tokio::.
This is the tokio version of what D20168958 did for futures.
Codemod performed by:
```
rg \
--files-with-matches \
--type-add buck:TARGETS \
--type buck \
--glob '!/experimental' \
--regexp '(_|\b)rust(_|\b)' \
| sed 's,TARGETS$,:,' \
| xargs \
-x \
buck query "labels(srcs,
rdeps(%Ss, fbsource//third-party/rust:tokio-old, 1)
intersect
rdeps(%Ss, //common/rust/renamed:tokio-preview, 1)
)" \
| xargs sed -i 's,\btokio::,tokio_old::,'
```
Reviewed By: k21
Differential Revision: D20235404
fbshipit-source-id: cfb2689a584ad0d73f16d98d8587fb9c44661465
Summary: clippy was failing; this diff should hopefully fix it.
Reviewed By: krallin
Differential Revision: D20250585
fbshipit-source-id: 6a9becdb84ec293659433fa9078e456d40210b6c
Summary:
This removes the Extend implementation for FileBytes, which was incorrect (it
discarded existing data!). I had introduced this as a backwards compatibility
shim when doing the Bytes 0.4 to Bytes 0.5 migration :/
We don't really need this shim, considering:
- The only place that really matters that uses this is the remotefilelog crate,
where we have a content id, and where we should use `filestore::fetch_concat`
instead.
- The other places are tests (or close to abandonware...), which can do their
own folding.
Longer term, I'd like to remove the whole `Content` stream in hg entries, so
those callsites can use the filestore methods, which a) have test coverage
(unlike ad-hoc folds, which often don't), and b) are more efficient since
they know how large the destination buffer needs to be ahead of time, and don't
need to re-allocate.
To make sure this fixes the bug, I also introduced tests for the remotefilelog
crate. As expected, the chunked variant fails without this fix.
Reviewed By: mitrandir77
Differential Revision: D20248978
fbshipit-source-id: 1b554d3e595eb867b6b6cf4204d31f27dd90a111
Summary:
Not sampling errors will make it easier to use Fastreplay as an early alarm
system for errors.
Reviewed By: ahornby
Differential Revision: D20249202
fbshipit-source-id: 92da53d5703b58bcef49cfcdc251f008ae6f25bc
Summary:
The git mappings are normally populated during blobimport of the repo but we
need something for the repos we've already imported.
Reviewed By: markbt
Differential Revision: D20160768
fbshipit-source-id: 9e37c7d0f12682e73ca9990e56e4d827e9861a9f
Summary:
We don't use it, and this tries to write to Manifold from tests, which is
undesirable. Let's remove it.
Reviewed By: farnz
Differential Revision: D20219902
fbshipit-source-id: 2e983bee54cadad257648cc9633695be825a1ef3
Summary:
This introduces a new binary and library (microwave: it makes warmup
faster..!) that can be used to accelerate cache warmup. The idea is that the
microwave binary will run cache warmup, capture things that are loaded
during cache warmup, and commit those to a file.
We can then use that file when starting up a host to get a head start on cache
warmup by injecting all those entries into our local cache before actually
starting cache warmup.
Currently, this only supports filenodes, but that's already a pretty good
improvement. Changesets should be easy to add as well. Blobs might require a
bit more work.
Reviewed By: StanislavGlebik
Differential Revision: D20219905
fbshipit-source-id: 82bb13ca487f82ca53b4a68a90ac5893895a96e9
Summary:
The walker has been hitting the filenodes-enforced 5 second SQL timeout when
querying filenodes from MySQL.
It's not clear why that is, but looking at previous run history shows that we
occasionally have queries that take > 30 seconds to complete (none of those
show up in MySQL slow queries, though, and there's no particular load on the
hosts around that time, so it's not clear whether this is happening in MySQL or
on our end).
Anyhow, those queries would have worked in the old implementation (after a long
time), but they fail in the new one, since it enforces a 5-second timeout.
We should investigate why this is happening (and Alex has landed diffs to add
more reporting in the walker to that end), but in the meantime, there's no
reason to break the walker.
Reviewed By: farnz
Differential Revision: D20227842
fbshipit-source-id: 5ee5c8225b6474b66c1f48a10b4a2d671ebc79c6
Summary: When it fails, it's better to know which repo failed.
Reviewed By: farnz
Differential Revision: D20245375
fbshipit-source-id: 9794911308dbdd67b20673857ac8b7b54f06a217
Summary: Makes it easier to understand which repo is failing
Reviewed By: krallin
Differential Revision: D20244630
fbshipit-source-id: ca32f7831c5ed4e701103020e9878c459ba6d573
Summary: Make these functions generic so that callers don't need to construct a trait object whenever they want to call them. Passing in a trait object should still work so existing callsites should not be affected.
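A sketch of the pattern with a stand-in trait (the actual functions and traits here are Mononoke-specific):
```
// Stand-in trait for illustration.
trait Blobstore {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
}

// Before: callers had to construct a trait object to call this.
fn fetch_old(store: &dyn Blobstore, key: &str) -> Option<Vec<u8>> {
    store.get(key)
}

// After: generic over the store type. A trait object still satisfies the
// bound (`?Sized` permits `&dyn Blobstore`), so existing callsites keep
// working unchanged.
fn fetch_new<B: Blobstore + ?Sized>(store: &B, key: &str) -> Option<Vec<u8>> {
    store.get(key)
}
```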
Reviewed By: krallin
Differential Revision: D20225830
fbshipit-source-id: df0389b0f19aa44aaa89682198f43cb9f1d84b25
Summary: Add a method to `HgFileContext` to stream the history of the file. Will be used to support EdenAPI history requests.
Reviewed By: krallin
Differential Revision: D20211779
fbshipit-source-id: 49e8c235468d18b23976e64a9205cbcc86a7a1b4
Summary: Add an `HgTreeContext` struct to the `hg` module to allow querying for tree data in Mercurial-specific formats. This initial implementation's primary purpose is to enable getting the content of tree nodes in a format that can be written directly to Mercurial's storage.
Reviewed By: krallin
Differential Revision: D20159958
fbshipit-source-id: d229aee4d6c7d9ef45297c18de6e393d2a2dc83f
Summary:
Context: https://fb.workplace.com/groups/rust.language/permalink/3338940432821215/
This codemod replaces *all* dependencies on `//common/rust/renamed:futures-preview` with `fbsource//third-party/rust:futures-preview` and their uses in Rust code from `futures_preview::` to `futures::`.
This does not introduce any collisions with `futures::` meaning 0.1 futures because D20168958 previously renamed all of those to `futures_old::` in crates that depend on *both* 0.1 and 0.3 futures.
Codemod performed by:
```
rg \
--files-with-matches \
--type-add buck:TARGETS \
--type buck \
--glob '!/experimental' \
--regexp '(_|\b)rust(_|\b)' \
| sed 's,TARGETS$,:,' \
| xargs \
-x \
buck query "labels(srcs, rdeps(%Ss, //common/rust/renamed:futures-preview, 1))" \
| xargs sed -i 's,\bfutures_preview::,futures::,'
rg \
--files-with-matches \
--type-add buck:TARGETS \
--type buck \
--glob '!/experimental' \
--regexp '(_|\b)rust(_|\b)' \
| xargs sed -i 's,//common/rust/renamed:futures-preview,fbsource//third-party/rust:futures-preview,'
```
Reviewed By: k21
Differential Revision: D20213432
fbshipit-source-id: 07ee643d350c5817cda1f43684d55084f8ac68a6
Summary:
While we are transitioning from tokio 0.1 to tokio 0.2, we might need to use
the [tokio_compat](https://docs.rs/tokio-compat/0.1.4/tokio_compat/) crate.
Let's add a helper macro similar to fbinit::test that uses tokio_compat
runtime.
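A hedged sketch of what such a helper could do under the hood, using the tokio-compat 0.1 `Runtime`; the actual macro expansion is an assumption here:
```
use tokio_compat::runtime::Runtime;

// Run an async test body on a compat runtime, which can drive both
// tokio 0.1 and tokio 0.2 futures.
fn run_compat_test<F>(test_body: F) -> F::Output
where
    F: std::future::Future,
{
    let mut rt = Runtime::new().expect("failed to create tokio-compat runtime");
    rt.block_on_std(test_body)
}
```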
Reviewed By: farnz
Differential Revision: D20213814
fbshipit-source-id: 18976e953011c8ada1fa915686e2dcb76ea288d5
Summary:
Well, we don't have a Tokio Compat runtime in Actix. This means Tokio 0.2 code
(e.g. Tokio 0.2 timers) blows up when executed in the API Server.
How do we fix this? By not running Mononoke code on Actix's runtime, and
instead running it on a Mononoke runtime we instantiated ourselves.
How do we do that? By passing a Tokio Compat Executor all the way down to the
place where Actix is about to consume our stream ... and at that point, we
spawn the stream on our runtime, and give Actix a dumb receiver that does work
when polled on a Tokio 0.1 runtime.
This feels like the end of the road for the API Server. Nothing about this is
even remotely sane, but it should take us through the API Server's eventual
demise and replacement with the Gotham-based EdenAPI Server, which runs on the
runtime of our choice (i.e. Tokio 0.2).
Reviewed By: farnz
Differential Revision: D20222294
fbshipit-source-id: 1646e35fe05b131b030e4962c8a7f68f72995035
Summary:
* Added intermediate (de)serializers for config types, so that we generate full Identity objects at config load time
* Implement FromStr for Identity (see the sketch after this list)
* Compare configured identities to presented identities in ratelimit middleware in order to decide whether or not to apply the limit
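A hedged sketch of the `FromStr` bullet; the real `Identity` type and its string format are internal, so the "TYPE:data" form below is an assumption:
```
use std::str::FromStr;

#[derive(Debug, PartialEq)]
struct Identity {
    id_type: String,
    id_data: String,
}

impl FromStr for Identity {
    type Err = String;

    // Parse "TYPE:data" (assumed format) into an Identity.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let mut parts = s.splitn(2, ':');
        match (parts.next(), parts.next()) {
            (Some(t), Some(d)) if !t.is_empty() && !d.is_empty() => Ok(Identity {
                id_type: t.to_string(),
                id_data: d.to_string(),
            }),
            _ => Err(format!("invalid identity: {}", s)),
        }
    }
}
```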
Reviewed By: krallin
Differential Revision: D20139308
fbshipit-source-id: 340c300db549575eb6d06efcbe437c0b1db4927b
Summary:
Usually we have only one repo, but in the case of xrepo_commit_lookup we actually
have two. It's nice to know which permission failed.
Reviewed By: krallin
Differential Revision: D20221509
fbshipit-source-id: ee98845767e72f99027ba18a8c5b374cb6f9f3ab
Summary: Publish per-node-type progress stats so we can correlate storage access/load to the type of node traversed.
Reviewed By: farnz
Differential Revision: D20181064
fbshipit-source-id: c741b526c50e86a3eee105fab57fd7bc3ecc063b
Summary: Add tests for the existing default casefolding_check blocking behaviour, plus a test demonstrating the problem with casefolding_check=false.
Reviewed By: farnz
Differential Revision: D20192412
fbshipit-source-id: 1aea0fc5581e0c44388a4224ca693698731d3cd5
Summary:
In targets that depend on *both* 0.1 and 0.3 futures, this codemod renames the 0.1 dependency to be exposed as futures_old::. This is in preparation for flipping the 0.3 dependencies from futures_preview:: to plain futures::.
rs changes performed by:
```
rg \
--files-with-matches \
--type-add buck:TARGETS \
--type buck \
--glob '!/experimental' \
--regexp '(_|\b)rust(_|\b)' \
| sed 's,TARGETS$,:,' \
| xargs \
-x \
buck query "labels(srcs,
rdeps(%Ss, fbsource//third-party/rust:futures-old, 1)
intersect
rdeps(%Ss, //common/rust/renamed:futures-preview, 1)
)" \
| xargs sed -i 's/\bfutures::/futures_old::/'
```
Reviewed By: jsgf
Differential Revision: D20168958
fbshipit-source-id: d2c099f9170c427e542975bc22fd96138a7725b0
Summary:
Small cleanup that removes a bunch of duplicate code.
That should make it easier to add other types of derived data to the warmer.
Reviewed By: krallin
Differential Revision: D20193169
fbshipit-source-id: 437fe7981d8a71164dc9edfcc423e8c41cbe0967
Summary:
Add a new `hg` module to the `mononoke_api` crate that provides a `HgRepoContext` type, which can be used to query the repo for data in Mercurial-specific formats. This will be used in the EdenAPI server.
Initially, the `HgRepoContext`'s functionality is limited to just getting the content of individual files. It will be expanded to support querying more things in later diffs.
Reviewed By: markbt
Differential Revision: D20117038
fbshipit-source-id: 23dd0c727b9e3d80bd6dc873804e41c7772f3146
Summary:
This updates our middleware stack and introduces two new pieces of functionality:
- Middleware can now be async.
- Middleware can now preempt requests and dispatch a response.
The underlying motivation for this is to allow implementing Mononoke LFS's rate
limiting middleware in our existing middleware stack.
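A sketch of the middleware contract this enables (stand-in types, not the LFS server's actual trait): handlers are async and can either pass the request along or preempt it with a response.
```
struct Request {
    path: String,
}

struct Response {
    status: u16,
    body: String,
}

// A middleware either lets the request continue or short-circuits it.
enum Outcome {
    Continue(Request),
    Respond(Response),
}

// Async middleware: rate limiting can await its counters/config, and
// preempt over-limit requests before they ever reach a handler.
async fn rate_limit(req: Request, over_limit: bool) -> Outcome {
    if over_limit {
        Outcome::Respond(Response {
            status: 429,
            body: format!("rate limited: {}", req.path),
        })
    } else {
        Outcome::Continue(req)
    }
}
```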
Reviewed By: kulshrax
Differential Revision: D20191213
fbshipit-source-id: fc1df7a14eb0bbefd965e32c1fca5557124076b5
Summary: D20121350 changed the methods for accessing file content on `FileContext` to no longer return `Stream`s. We should update the comments accordingly.
Reviewed By: ahornby
Differential Revision: D20160128
fbshipit-source-id: f5bfd7e31bc7e6db63f56b8f4fc238893aa09a90
Summary:
This updates the hg_sync_job to update Globalrevs in hgsql before attempting to
sync bundles. This means that if we're syncing successfully, hg is in sync with
Mononoke, and if we fail (which should be very uncommon to begin with!), hg
might skip a little bit ahead, but that's OK.
This only makes sense when generating bundles — when doing pushrebase, hg would
be updating its own globalrevs.
Reviewed By: StanislavGlebik
Differential Revision: D20159262
fbshipit-source-id: 6736f8592682da1001c7c9c4c9444462b71913c2
Summary:
Our previous implementation of unodes had a problem with diamond merges -
essentially, because p1 and p2 might have the same file but with different
content, unode will always create a merge unode, which can be unexpected
(the code comment in unodes/derive.rs has more info about it).
This diff fixes the problem by introducing unodes v2. This allows us to import
new repos with new unode implementation while keeping the old repos with unode
v1.
This implementation uses a heuristic which should be fast and should do the
correct thing most of the time. In some cases it might exclude some parts of
the history completely. For example:
```
    O     <- merge commit, doesn't change anything
   / \
  P1  |   <- modified "file.txt" to "B"
  |  P2   <- modified "file.txt" to "B"
   \ /
   ROOT   <- created "file.txt" with content "A"
```
In that case, the history of "file.txt" starting from the merge commit will contain only (P1, ROOT),
but it won't contain P2.
We also considered other options:
1) Move this heuristic to fastlog batch derived data. See D19973553 for more
details about why we decided not to do it.
2) Filter out parent unodes that are ancestors of other parent unodes. This should
always be correct, but it would be hard to implement, and it would be even harder to make
sure it always has good performance.
Reviewed By: krallin
Differential Revision: D19978157
fbshipit-source-id: 445ddd5629669d987e7aa88c35fecf0b34a40da0
Summary: I'd like to log all derivations to a single place so that it's easier to understand what was derived and where.
Reviewed By: aslpavel
Differential Revision: D20140004
fbshipit-source-id: 305ea533031a04ff95995a6fe2a6e57e95a87026
Summary: Log the source node when validating so that we can more quickly reproduce any issues in a single step via the --walk-root option, rather than needing to run the entire walk again.
Differential Revision: D20098200
fbshipit-source-id: 6b0d7d151c97f25080953d6c0fbf431dc2cec6a8
Summary:
I noticed in my earlier Bytes 0.5 diff that this doesn't have local test
coverage (there might be things somewhere else in the test suite that look for
it). Let's add some.
Reviewed By: ahornby
Differential Revision: D20139437
fbshipit-source-id: c17e4516574d674bb0b009cd1f322008fb3c1a79
Summary: Add the ability to track the route to a node, so that one can report the node from which a failing step started.
Reviewed By: ikostia
Differential Revision: D20097615
fbshipit-source-id: 4f2c000f54bd212225533e7f3570178020f34a9d
Summary:
In case this starts to cause problems, let's have a way to correlate those
problems with some exported metrics.
Reviewed By: StanislavGlebik
Differential Revision: D20158822
fbshipit-source-id: 6ac9e25861dbedaecdf04fd92bda835ae66535eb
Summary:
## Wider goal
See D20068839
## This diff
This diff actually implements the conditional hydration of `getbundle`
responses, as described in D20068839.
Note that as well as implementing support for hydrated `getbundle` responses, this diff also implements support for changegroup v3 and LFS in such responses, which is needed if we are to do this kind of stuff in an LFS-enabled repository.
Reviewed By: StanislavGlebik
Differential Revision: D20068838
fbshipit-source-id: fbdd3f8f5fb7cd2cb60473a94094553a1d4b4d2f
Summary:
Extend the session id logging to the validate command by adding the ability to set
the progress reporter's scuba builder.
Reviewed By: ikostia
Differential Revision: D20074153
fbshipit-source-id: ceaeebdb7eb976080061ad3b76b22d7a0f7bd891
Summary: I canaried with this but I forgot to fold it in -_-
Reviewed By: HarveyHunt
Differential Revision: D20158157
fbshipit-source-id: 4a570bbca421d8c3e1e66605f164f2b8e2a433f6
Summary:
Most binaries don't need hooks. Let's not require them. This might not be very
long lived since Simon is working on removing lua hooks, but this was a trivial
fix.
Reviewed By: johansglock
Differential Revision: D20140026
fbshipit-source-id: cc74b37459f63c5dd550c5779b72aa1d6531202c
Summary:
(this doesn't remove ad-hoc leases, like derived data)
Let's see if this has any impact on performance. We no longer fail Manifold
writes on conflicts.
Reviewed By: StanislavGlebik
Differential Revision: D20038572
fbshipit-source-id: 4a972ff09ceb65e69a1d22a643a8f2d9b2ab1b17
Summary: The Thrift generated code depends only on futures 0.3, not 0.1. Thus it isn't necessary to depend on renamed:futures-preview and we can depend on futures-preview directly, which is exposed to Rust code as `futures::`.
Reviewed By: jsgf
Differential Revision: D20145921
fbshipit-source-id: 5cae94ec6747a374c2bf05f124ab237c798de005
Summary:
The last uses of futures 0.1 were removed in D18411564 and D18392252.
A later diff will switch thrift from using renamed:futures-preview to plain futures-preview to prepare for eliminating the -preview suffix.
Reviewed By: jsgf
Differential Revision: D20143832
fbshipit-source-id: b7fd79f18368ade59eeba6ed0ac09613000c046b
Summary: Continue to push `compat()` deeper into subcommands. This enables us to refactor each file one at a time and ultimately remove the old futures from our code base.
Reviewed By: farnz
Differential Revision: D20132126
fbshipit-source-id: cc10dde6eda7ddcbf911dbe8d3ebe1713f8ec2ab
Summary: Makes the code a little nicer to work with.
Reviewed By: HarveyHunt
Differential Revision: D20138720
fbshipit-source-id: 19f228782ab3582739e35fddcb2b0bf952110641
Summary:
Paths are in a different replica, so they can be missing even if copy info is
present. Let's fall back to master in this case.
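A minimal sketch of the fallback (names and types are stand-ins for the real filenodes queries):
```
// Try the replica first; if the path row hasn't replicated yet, retry the
// lookup on master rather than treating it as missing.
async fn path_with_fallback<F, M>(read_replica: F, read_master: M) -> Option<String>
where
    F: std::future::Future<Output = Option<String>>,
    M: std::future::Future<Output = Option<String>>,
{
    match read_replica.await {
        Some(path) => Some(path),
        None => read_master.await,
    }
}
```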
Differential Revision: D20098902
fbshipit-source-id: 838ab1c70a74420c431a2f442f1504c8edd29a2e
Summary:
Locking by physical shard worked earlier in this stack as indicated in the
benchmarks, but after Ondemand restored their fetching for www, it proved
insufficient in terms of parallelism, and resulted in substantially slower
gettreepacks.
Besides, with the "physical sharding" approach, we found ourselves between a rock and a hard place in terms of what to do with paths:
- We could keep holding the semaphore for a filenode while fetching paths. This is undesirable because it further limits our level of concurrency (because fetching a filenode + paths is going to be at least 2x as slow as fetching a filenode alone).
- We could fetch them without holding a lease at all. This is even more undesirable, because it means that when we release the semaphore for a given shard, we haven't filled the cache yet. This means that if we have a queue of 2 requests for the same bit of data, we're going to fetch twice (task A acquires the lock, goes to MySQL for the filenode, releases the lock and starts going to paths, at which point task B acquires the lock and goes to MySQL again since the filenode hasn't been filled yet).
To fix this, I had to add a dedicated cache for paths, and put it behind semaphores as well. In the example above, this would ensure task B finds a "partial filenode" in the cache and doesn't go to MySQL (instead, it goes straight up to queuing for access to paths, where it will wait behind task A and also won't hit MySQL).
There are a few problems with this:
- It's a lot of extra complexity (because we need to handle half misses where we have the filenode but not the path).
- It ties together our level of concurrency a second time to that of the underlying number of physical shards, which is kinda meaningless when some of this data can be provided by Memcache to begin with.
This diff fixes both problems.
The root cause of our problem is that we're tying our level of concurrency to physical
MySQL shards, whereas what we actually want is a tunable level of concurrency
that matches our work load, yet effectively deduplicates queries.
In this diff, I'm updating our exclusive locking to be purely virtual. This
means that we're still not over-fetching, but we are no longer constrained by
the parallelism of the underlying DB (this does mean we might queue up requests
there, but they won't be duplicate requests).
This also results in simpler code, and opens up the way for further
improvements in the future, such as using Memcache lease-get operations to
further deduplicate calls, if we'd like.
As part of that, I've also updated our remote_cache to use the same CacheKey
entity as the local cache, to avoid spending time producing new keys when we
have perfectly good ones available.
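A minimal sketch of the "virtual" exclusive locking idea (tokio 0.2's `Semaphore`; the map handling is simplified relative to the real implementation):
```
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use tokio::sync::Semaphore;

// One semaphore per cache key: tasks fetching the same key queue behind
// each other (no duplicate queries), while distinct keys proceed in
// parallel regardless of which physical MySQL shard they live on.
#[derive(Default)]
struct KeyedLocks {
    locks: Mutex<HashMap<String, Arc<Semaphore>>>,
}

impl KeyedLocks {
    fn for_key(&self, key: &str) -> Arc<Semaphore> {
        let mut locks = self.locks.lock().unwrap();
        locks
            .entry(key.to_string())
            .or_insert_with(|| Arc::new(Semaphore::new(1)))
            .clone()
    }
}
```
A caller would `acquire().await` the semaphore for its key, fetch, fill the cache, and release; tuning concurrency then means changing how keys map to semaphores, not how many DB shards exist.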
Reviewed By: StanislavGlebik
Differential Revision: D20097821
fbshipit-source-id: 03d7be9082982fc1c6ef365d541c1ed8ae3e6e8d
Summary:
This adds a test for our cache fill behavior, which is to fill the remote cache
if we miss in the local cache. I hadn't added this earlier, and it's a little easier
to add now that the refactor for FilenodeInfo is through.
Reviewed By: ahornby
Differential Revision: D19905396
fbshipit-source-id: 88b5fd83f5d2213e91efc3c5dfb91dfe4e395136
Summary:
This updates our filenodes implementation to use different types for writing
(`PreparedFilenode`) and reading (`FilenodeInfo`).
The bottom line is that this avoids a bunch of cloning of paths on the read
path, which doesn't need to return the path to the caller, since the caller
already knows it! We can also take it out of Memcache, since we don't need
Memcache to tell us the path for a blob we could only possibly have found by
having the path to begin with.
This does update our filenodes serialization format. I bumped MC_CODEVER
accordingly.
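A sketch of the split (field sets are illustrative, not the exact schema):
```
type RepoPath = String; // stand-ins for the real path and id types
type HgFileNodeId = [u8; 20];
type HgChangesetId = [u8; 20];

// Write side: the path must travel with the filenode so we know where to
// store it.
struct PreparedFilenode {
    path: RepoPath,
    info: FilenodeInfo,
}

// Read side: no path, because the caller necessarily had the path in hand
// to perform the lookup.
struct FilenodeInfo {
    filenode: HgFileNodeId,
    p1: Option<HgFileNodeId>,
    p2: Option<HgFileNodeId>,
    copyfrom: Option<(RepoPath, HgFileNodeId)>,
    linknode: HgChangesetId,
}
```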
Reviewed By: StanislavGlebik
Differential Revision: D19905400
fbshipit-source-id: 6037802c1773de564cade8e264d36087382ee15a
Summary:
This removes the old sqlfilenodes implementation, since we're now using the new
one. There's also a bit of cruft here and there we can get rid of.
Reviewed By: StanislavGlebik
Differential Revision: D19905395
fbshipit-source-id: 2526b6d65eeb981f5aedda9951b44b389ecec29d
Summary:
The former implementation would eagerly query Memcache when fetching history
for files in getpack (due to how old futures work), but the new one does not.
This means the new one loses out on a lot of buffering that the old one used to do.
to do.
This diff emulates the old behavior by eagerly querying filenodes in getpack,
which improves performance on a very big getpack (32K files) by about 3x, and
makes it 30% faster than the old code, instead of > 2x slower.
Note that I'm not certain we really want to do this kind of aggressive
buffering in getpack long term, but for now, I'd like to keep this unchanged.
Reviewed By: StanislavGlebik
Differential Revision: D19905398
fbshipit-source-id: 49f9a2cd505a98123fd1dabb835e8e378d45c930
Summary:
This updates Mononoke to use the new filenodes implementation introduced
earlier in this stack.
See the test plan for detailed performance results supporting why I'm making
this change.
Reviewed By: StanislavGlebik
Differential Revision: D19905394
fbshipit-source-id: 8370fd30c9cfd075c3527b9220e4cf4f604705ae
Summary:
Since we have one connection per shard, it's a good idea to make sure we don't
keep those locked for too long. This diff adds generous timeouts to protect
against this, as well as ODS reporting to track errors.
Reviewed By: StanislavGlebik
Differential Revision: D19905393
fbshipit-source-id: ee4f4d3e33cf48a9002b016e31d37a401c6578f2
Summary:
This introduces caching of filenodes to Memcache as in the old filenodes
implementation. The code was mostly ported over from the existing filenodes
implementation, and converted to async / await. However, one key difference is
that the lookups happen once we hold the semaphore to talk to the underlying
MySQL shard.
The reason for this is:
- Reads to Memcache are really fast. They're often under 1ms. If you're going
to miss in Memcache and have to go to SQL, it won't make you much slower.
- Reads to Memcache are kinda expensive CPU-wise. Data in Memcache is
compressed, and we often see a lot of our CPU cycles spent talking to Memcache
when we're under load.
- Memcache isn't an infinite resource. If we're reading the exact same
key a hundred times, that's going to hit the same Memcache box. A bit of
deduplication on our end is a nice thing to strive for. Besides, our own
thread pool we use to talk to Memcache is limited in size.
From a performance perspective, this doesn't make things any slower, but
reduces CPU usage when we'd otherwise have a lot of duplicate fetching.
Finally, note that this update also includes support for dirty-tracking in our
local cache. We use this to know if we should fill the remote cache (if we 100%
hit in local cache, we don't fill the remote cache).
Reviewed By: StanislavGlebik
Differential Revision: D19905390
fbshipit-source-id: 363f638bb24cf488c7cd3a8ecea43e93f8391d3f
Summary:
This is the meat of the change I'm trying to make here. This updates
newfilenodes to check their cache before dispatching queries to MySQL once they
acquire the connection.
Since we only get one connection per shard, this ensures that we don't query
several times for the same piece of data.
Note that the caching structure is a little different from the old one, which
cached entire filenode info. Instead, this now caches the exact data we'd get
out of MySQL, since we want to map MySQL queries 1-1 to cache lookups.
With this change, we also now have a local cache for file history queries.
Historically, we hadn't cached those at all, but with this change, we can get a
lot of value out of caching them even for a small period of time, in order to
de-amplify reads to MySQL and Memcache.
However, they are in separate cache pools to make sure they don't evict point
filenodes, which we use for gettreepack (and have a good hit rate, unlike
history blocks, which have a pretty poor hit rate).
Note that having those semaphored connections might feel a little scary, but
it's worth noting that the exact same bottleneck is implicitly present in the
existing filenodes implementation, since we can only have one active query to
any given shard a given time. That said, this approach also gives us a little
more future flexibility, if we'd like, since we could map multiple semaphores to
"sub shards" that map N-to-1 to real, physical shards.
Reviewed By: HarveyHunt
Differential Revision: D19905391
fbshipit-source-id: 02b5efaa44789e6afcccdeb9ee2b4791f7c3c824
Summary:
This introduces a new implementation of filenodes that maintains its own
queuing on top of the queuing enforced by the SQL crate.
Later in this stack, the goal is for this implementation to avoid dispatching
duplicate queries when there is a lot of contention talking to MySQL, which
happens when large changes land and suddenly everyone wants the updated code.
The underlying goal is to avoid dispatching a lot of duplicate queries when
there is contention. Indeed, if there is contention, then the latency between
query and response increases. As a result, without visibility in the queue, the
following can happen:
- Task 1 looks for A in the cache. It misses
- Task 1 dispatches a SQL query
- Task 2 looks for A in the cache. It misses
- Task 2 dispatches a SQL query
- Task 3 looks for A in the cache. It misses
- Task 3 dispatches a SQL query
- ...
- Task 1's SQL query finally executes and fills the cache.
- All other queries execute anyway.
The longer the dispatch queue, the longer it takes to run those queries.
Looking at Mononoke's stats in prod, this happens pretty often:
https://pxl.cl/10xxmo (the spike at 3pm was a 10K-files change in fbsource, for
example).
The goal of this stack is to avoid this effect, by checking the cache only once
we know we're ready to go to SQL.
In this particular diff, what's added is:
- The SQL read and write implementation. This is all implemented using new
futures, but the logic should be largely unchanged from before (i.e. we store
filenodes and their associated copy info in shards keyed by the filenode's path,
not the source path if there is copy info, and paths in their own shard).
The queries themselves are largely unchanged from the existing filenodes, with
only a few tweaks:
- Filenodes and copy info are now selected in one go.
- There are types to distinguish path hashes and paths.
- The structs to support this implementation.
Reviewed By: StanislavGlebik
Differential Revision: D19905397
fbshipit-source-id: bec981e7bfb396d62eb06e5ce249c21555afc64b
Summary:
The API expects a stream of filenodes to insert, but we actually never used
that ability. Instead, every single callsite has a `Vec`, which it converts to
a stream and passes that in.
I'd like to change this for two reasons:
- It's unnecessary
- It makes the code more complex on the Filenodes implementation side, and less
efficient, since we need to `chunk()` there in small chunks, which might not
all be in the same shard. If we get the entire `Vec` at once, we can chunk on a
per-shard basis (this happens later in this stack).
Besides, if we end up having a stream and wanting the old behavior, we can
always call `chunk()` on the stream and call `add_filenodes` on each batch (which
is actually nicer because if you have a futures 0.2 stream that isn't static,
you can do this, but you can't turn it into a `BoxStream`!).
Reviewed By: StanislavGlebik
Differential Revision: D19902537
fbshipit-source-id: a4c030c4a51afbb6e9db133b32464009eed197af
Summary:
Log a per-run session id to distinguish runs more easily.
This diff adds the session for scrub logging; a following one extends this to validate/progress logging.
So that each tail has a separate session logged, setup is delayed until the start of each tail by passing it in as a function.
Differential Revision: D19907398
fbshipit-source-id: 8e5470918112321866c67c9f94e703fd46e6a16b
Summary:
The Bytes 0.5 update left us in a somewhat undesirable position where every
access to our blobstore incurs an extra copy whenever we fetch data out of our
cache (by turning it from Bytes 0.5 into Bytes 0.4) — we also have quite a few
places where we convert in one direction and then immediately back into the other.
Internally, we can start using Bytes 0.5 now. For example, this is useful when
pulling data out of our blobstore and deserializing as Thrift (or conversely,
when serializing and putting it into our blobstore).
However, when we interface with Tokio (i.e. decoders & encoders), we still have
to use Bytes 0.4. So, when needed, we convert our Bytes 0.5 to 0.4 there.
The tradeoff idea is that we deal with more bytes internally than we end up
sending to clients, so doing the Bytes conversion closer to the point of
sending data to clients means less copies.
We can also start removing those once we migrate to Tokio 0.2 (and newer
versions of Hyper for HTTP services).
Changes that were required:
- You can't extend `Bytes` (because that implicitly copies). You need to use
`BytesMut` instead, which I did where that was necessary (I also added calls in
the Filestore to do that efficiently).
- You can't create `Bytes` from a `&'a [u8]` unless `'a` is `'static`. You need
to use `copy_from_slice` instead.
- `slice_to` and `slice_from` have been replaced by a `slice()` function that
takes ranges.
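For reference, a quick illustration of the three changes above against `bytes = "0.5"`:
```
use bytes::{Bytes, BytesMut};

fn bytes_05_examples() {
    // Extending: accumulate in BytesMut, then freeze into Bytes.
    let mut buf = BytesMut::new();
    buf.extend_from_slice(b"hello ");
    buf.extend_from_slice(b"world");
    let bytes: Bytes = buf.freeze();

    // Non-'static slices must be copied in explicitly.
    let local = vec![1u8, 2, 3];
    let copied = Bytes::copy_from_slice(&local);

    // slice_to(5) / slice_from(6) become range-based slice() calls.
    let head = bytes.slice(..5);
    let tail = bytes.slice(6..);
    let _ = (copied, head, tail);
}
```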
Reviewed By: StanislavGlebik
Differential Revision: D20121350
fbshipit-source-id: eb31af2051fd8c9d31c69b502e2f6f1ce2190cb1
Summary:
Nearly all of the Mononoke SQL stores are instantiated once per repo, but they don't store the `RepositoryId` anywhere, so every method takes it as an argument. And because providing the repo_id on every call is not ergonomic, we tend to add methods to blob_repo that just call the right method with the right repo_id on one of the underlying stores (see `get_bonsai_from_globalrev` on blobrepo for example).
Because my reviewers [pushed back](https://our.intern.facebook.com/intern/diff/D19972871/?transaction_id=196961774880671&dest_fbid=1282141621983439) when I've tried to do the same for bonsai_git_mapping I've decided to make it right by adding the repo_id to the BonsaiGitMapping.
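A sketch of the ergonomics change (simplified, stand-in signatures):
```
struct RepositoryId(i32); // stand-in types
struct GitSha1([u8; 20]);
struct ChangesetId([u8; 32]);

struct BonsaiGitMapping {
    repo_id: RepositoryId,
}

impl BonsaiGitMapping {
    // Before: fn get(&self, repo_id: RepositoryId, git: GitSha1) -> ...
    // After: the mapping was built for one repo, so repo_id comes from self.
    fn get(&self, git: &GitSha1) -> Option<ChangesetId> {
        let _ = (&self.repo_id, git);
        None // stand-in for the SQL lookup
    }
}
```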
Reviewed By: krallin
Differential Revision: D20029485
fbshipit-source-id: 7585c3bf9cc8fa3cbe59ab1e87938f567c09278a
Summary:
## Wider goal
See D20068839
## This diff
Asyncifying only the signatures allows us to work on function bodies independently, without touching the callsites later in the diff.
Reviewed By: StanislavGlebik
Differential Revision: D20097804
fbshipit-source-id: f1391a055947c7802f719bc99b9eae71a4ac39cd
Summary:
## Wider goal
See D20068839
## This diff
This file contains a mix of old and new-style futures. It even has futures
whose items are themselves futures. To be able to convert on one of the
levels and not the other, we need to deal with the confusion.
Let's have old things have `Old` in the name.
Reviewed By: StanislavGlebik
Differential Revision: D20097803
fbshipit-source-id: fedb3669ef34a8328ec389a30ff2c512ab363818
Summary:
## Wider goal
We want the flexibility to return hydrated responses for `getbundle` wireproto
requests for draft commits. This means that the responses will contain not
only the commit data (as they do now), but also trees and files.
For context, when an "unhydrated" response is returned for the `getbundle`
request for a draft commit, we expect one of two things to happen later
in the e2e scenario:
- either `hg` client would immediately make another wireproto request
(`gettreepack`, `getpackv1`) within the same client `hg` command execution
- or a subsequent `hg update` call will cause another wireproto request
In any case, another request is needed before the pulled commit can be used.
This request can hit a different server; sometimes it can even be Mercurial
instead of Mononoke. Specifically, it can be Mercurial instead of Mononoke if the
`fallback` path markers are configured incorrectly. In that case we have a
problem, as Mercurial is incapable of serving `gettreepack` or `getpackv1` for
infinitepush commits.
One way to deal with this is to always have correct path markers, which is
prone to human mistakes. Another way is to guarantee that Mononoke returns
everything in the original `getbundle` request. We don't want to do this for
public commits, as `pull`s of public commits typically fetch thousands of those
commits and never care about tree or file data for all but one of them. Draft
commits are different however, as they are usually exactly what the client
intends to use, so hydrating those is fine. Still, we want this behavior to
be gated behind a config flag.
## This diff
A lot of the needed code is already implemented in the hg-sync job, bundle
generating variant. So prior to implementing the actual behavior described
above, let's move the relevant bits to `getbundle_response`. Later we can comb
them up a bit (asyncify) and use them to implement the needed behavior.
Reviewed By: StanislavGlebik
Differential Revision: D20068839
fbshipit-source-id: 0ab63d57b2d167401b7ee8864fe7760f5f65f8ec
Summary:
This is the moral equivalent of D20115877 in fbcode. See that diff for
motivation.
Reviewed By: StanislavGlebik
Differential Revision: D20118575
fbshipit-source-id: 8f77f572068e611003b1344be3434f2d04ec56ca
Summary:
Previously it was hard to tell whether the process was actually responsible
for generating derived data or was just waiting for it to be generated.
Let's make this distinction clearer.
Reviewed By: johansglock
Differential Revision: D20138284
fbshipit-source-id: 52ae12679db2f61869f048baf2a603b456710a71
Summary:
These comments end up being a source of churn as we roll out D20125635, and anyway are not particularly meaningful after the transformations performed by autocargo. For example:
```
bytes = { version = "0.4", features = ["serde"] } # todo: remove
```
^ This doesn't mean the generated Cargo.toml intends to drop its bytes dependency altogether, but just that it will be migrated to a different version that is present in the third-party/rust/Cargo.toml but not visible in the generated Cargo.toml.
Reviewed By: jsgf
Differential Revision: D20128612
fbshipit-source-id: a9e7b29ddc4b26bc47a626dd73bdaa4771ee7b18
Summary:
Since Mononoke's filenodes were migrated to the derived data framework, the
hg_linknode_populated alarm has been firing. The main reason was that there's
now a delay between an hg changeset being generated and its filenodes being generated.
This diff fixes it by making sure the walker won't visit hg changesets without
generated filenodes (note that the walker will visit these changesets later, after
filenodes have been generated).
Reviewed By: ahornby
Differential Revision: D20067615
fbshipit-source-id: 285e9a3d8c89b85441491c889a8458c86ca0e3a8
Summary:
There is no need to generate an expensive file history stream if only one node is requested.
I refactored the code that generates the stream of history commits so that it first yields the nodes and only then prefetches their parents. That helps to solve the latency problem for a history request for only a single commit.
I removed the BFS queue and added two state variables: ready nodes and already-processed nodes:
* The latter are the nodes that were returned as part of the history stream on the last iteration and can now be used to construct the next BFS layer: prefetch fastlog batches, fill the commit graph, and take parents in BFS order to form the next bunch of nodes.
* The former are used on the first iteration - there are no processed nodes yet, but there are some that are ready to be returned.
I believe that by removing the queue I simplified the code and logic a little bit.
Reviewed By: StanislavGlebik
Differential Revision: D19818100
fbshipit-source-id: c30d28c623464ba3552a00e8542552f7655076ef
Summary:
During our tests we noticed that we can send too many blobstore read requests to the
mapping. Let's add exponential backoff to prevent that.
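A minimal sketch of such a backoff (tokio 0.2's `delay_for`; the initial delay, growth factor, and retry count are illustrative):
```
use std::time::Duration;
use tokio::time::delay_for;

// Retry `attempt` with exponentially growing pauses between tries.
// Assumes max_tries >= 1.
async fn with_backoff<T, E, F, Fut>(mut attempt: F, max_tries: u32) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, E>>,
{
    let mut delay = Duration::from_millis(100);
    for _ in 1..max_tries {
        match attempt().await {
            Ok(v) => return Ok(v),
            Err(_) => {
                delay_for(delay).await;
                delay *= 2;
            }
        }
    }
    attempt().await // last try: propagate the error if it still fails
}
```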
Reviewed By: ikostia
Differential Revision: D20116043
fbshipit-source-id: 6fecbda4c36a5065b77ba9df561c6d9c6a969089
Summary:
Memcache doesn't care (because both old and new Bytes implement `Into<IOBuf>`), but
Thrift is Bytes 0.5. We have our caching ext layer in the middle, which wants
Bytes 0.4. This means we end up copying things we don't need to copy.
Let's update to fewer copies. I didn't update apiserver, because a) it's going
away, and b) those bytes go into Actix, and Actix isn't upgrading to Bytes 0.5
any time soon! Besides, this doesn't actually need updating besides tests anyway.
Reviewed By: dtolnay
Differential Revision: D20006062
fbshipit-source-id: 42766363a0ff8494f18349bcc822b5238e1ec0cd
Summary: Moving `compat` one level down to the call sites of subcommand functions.
Reviewed By: farnz
Differential Revision: D20085398
fbshipit-source-id: 461e147d2ae6e560b3a75fb92fa6b23f9f54d13e
Summary:
During S196197, the lease expired and we were rederiving the same derived data over and over again for a big commit.
This diff adds lease renewal, which should help with this problem.
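A hedged sketch of the renewal loop (the real lease API differs): keep extending the lease while derivation is still running.
```
use std::time::Duration;
use tokio::time::delay_for;

// Periodically extend the lease so it can't expire while a long derivation
// (e.g. for a big commit) is still in flight.
async fn renew_while_deriving(renew_lease: impl Fn(), lease_ttl: Duration) {
    loop {
        // Renew at half the TTL to leave a safety margin.
        delay_for(lease_ttl / 2).await;
        renew_lease();
    }
}
```
The derivation code would race this loop against the derivation future (e.g. with `futures::future::select`) and drop it once derivation completes.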
Reviewed By: HarveyHunt
Differential Revision: D20093323
fbshipit-source-id: d139abf6659722f47ea40d9b2f279daa03623ff4
Summary:
fetch_root_filenode is called by FilenodesOnlyPublicMapping to figure out if
filenodes were already derived. Previously it first derived the hg changeset and
then looked up the root manifest in the DB. However, if the hg changeset is not
derived then filenodes couldn't possibly be derived either, and we can return an
answer faster.
This is useful in the next diff, where I change the walker.
Reviewed By: ahornby
Differential Revision: D20068819
fbshipit-source-id: 17f066c437e0b1f7bbeb8f6e247eadc9afe94f90
Summary:
The blobstore_healer has never waited for MyRouter before querying for slave
status, but it ended up implicitly working because creating a blobstore
required a SQL factory, and creating a SQL factory would result in waiting for
MyRouter.
Now that creating a blobstore doesn't require SQL factory unless you're going
to actually use it (which the healer isn't: it doesn't use a multiplexblob, it
uses the underlying blobstores instead), we no longer wait properly for
MyRouter, so if MyRouter isn't there when we boot, we crash.
This fixes that.
Reviewed By: ahornby
Differential Revision: D20094829
fbshipit-source-id: 82b7e8d893a01049d1f434ee8dff36a877a0d2f4
Summary:
Add support for loading by GitSha1 aliases. This relies on the change to
Alias::GitSha1 earlier in the stack.
Reviewed By: ikostia
Differential Revision: D19903577
fbshipit-source-id: 73cdccc04af61fa524c3683851d8af9ae90d31dc
Summary:
This should give us a slightly better idea of what hosts are doing to
troubleshoot duplicate derivation.
Also, let's make the logging a bit less confusing.
Reviewed By: StanislavGlebik
Differential Revision: D20070619
fbshipit-source-id: 91cc264b7043b8fc8c21c007832fba328ef0017d
Summary:
This updates our multiplexed blobstore configuration to carry its own DB
config. The upshot of this change is that we can move the blobstore sync queue
(a fairly unruly table) to its own DB.
Another nice side effect of this is that it cleans up a bunch of other code, by
finally decoupling the blobstore config from the DB config. For examples,
places that need to instantiate a blobstore can now to do even without a DB
config (such as wireproto logging).
Obviously, this cannot land until we update the configs to include this. I'll
do so in Configerator prior to landing the diff.
Reviewed By: HarveyHunt
Differential Revision: D19973905
fbshipit-source-id: 79e4ff92cdb989aab4532decd3fe4fd6c55e2bb2
Summary:
I'd like to refactor our multiplex blob to store its DB using a different
shard. In preparation of doing so, let's:
- Extract parsing DB configs from storage configs
- Tidy up some related places that take a reference when they actually need
ownership (which is sort of wasteful).
Reviewed By: StanislavGlebik
Differential Revision: D19973906
fbshipit-source-id: 82baceb892e9e24e5fd0349ffa5503884c177a7a