Summary:
In repositories whose files have large histories we run into a lot of SqlTimeout
errors while fetching file history to serve getpack calls. However, fetching the
whole file history is not really necessary - the client knows how to work with
partial history, i.e. if the client misses some portion of history then it will
just fetch it on demand.
This diff adds a way to limit how many entries are fetched; if the limit is
exceeded then we return FilenodeRangeResult::TooBig. The downside of this diff
is that we'd have to do more sequential database queries.
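As a sketch, the limit-and-bail behaviour could look something like this (a simplified, self-contained stand-in: `FilenodeRangeResult` mirrors the name from the diff, but the u64 entries and the slice stand in for real filenode history rows and SQL queries):

```rust
// Simplified sketch of the limited fetch described above; types are
// stand-ins for the real Mononoke filenode history and SQL layer.

#[derive(Debug, PartialEq)]
enum FilenodeRangeResult<T> {
    Present(T),
    TooBig,
}

/// Fetch at most `limit` history entries; if the full history is larger,
/// report `TooBig` so the client falls back to fetching on demand.
fn fetch_history_limited(history: &[u64], limit: usize) -> FilenodeRangeResult<Vec<u64>> {
    if history.len() > limit {
        FilenodeRangeResult::TooBig
    } else {
        FilenodeRangeResult::Present(history.to_vec())
    }
}

fn main() {
    let history: Vec<u64> = (0..10).collect();
    assert_eq!(fetch_history_limited(&history, 5), FilenodeRangeResult::TooBig);
    assert_eq!(
        fetch_history_limited(&history, 100),
        FilenodeRangeResult::Present(history.clone())
    );
    println!("ok");
}
```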
Reviewed By: krallin
Differential Revision: D23025249
fbshipit-source-id: ebed9d6df6f8f40e658bc4b83123c75f78e70d93
Summary: The more contexts the better. Makes debugging errors much more pleasant.
Reviewed By: StanislavGlebik
Differential Revision: D22890940
fbshipit-source-id: 48f89031b4b5f9b15f69734d784969e2986b926d
Summary:
This is the (almost) final diff to introduce WarmBookmarksCache in repo_client.
A lot of this code is to pass through the config value, but a few things I'd
like to point out:
1) Warm bookmark cache is enabled from config, but it can be killswitched using
a tunable.
2) WarmBookmarksCache in scs derives all derived data, but for repo_client I
decided to derive just hg changeset. The main motivation is to not change the
current behaviour, and to make mononoke server more resilient to failures in
other derived data types.
3) Note that WarmBookmarksCache doesn't obsolete SessionBookmarksCache that was
introduced earlier, but rather it complements it. If WarmBookmarksCache is
enabled, then SessionBookmarksCache reads the bookmarks from it and not from
db.
4) There's one exception in point #3 - if we just did a push then we read
bookmarks from db rather than from bookmarks cache (see
update_publishing_bookmarks_after_push() method). This is done intentionally -
after push is finished we want to return the latest updated bookmarks to the
client (because the client has just moved a bookmark after all!).
I'd argue that the current code is a bit sketchy already - it doesn't read from
master but from replica, which means we could still see outdated bookmarks.
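Points 3 and 4 amount to a small decision on the read path, sketched here with invented names (the real SessionBookmarksCache logic is more involved):

```rust
// Illustrative stand-in for the read-path decision described in points
// #3 and #4: reads go to the WarmBookmarksCache when it is enabled,
// except right after a push, when we must return the freshest bookmarks.
#[derive(Debug, PartialEq)]
enum BookmarksSource {
    WarmBookmarksCache,
    Db,
}

fn bookmarks_source(warm_cache_enabled: bool, just_pushed: bool) -> BookmarksSource {
    if just_pushed || !warm_cache_enabled {
        // After a push the client has just moved a bookmark, so serve the
        // latest state from the database rather than the warm cache.
        BookmarksSource::Db
    } else {
        BookmarksSource::WarmBookmarksCache
    }
}

fn main() {
    assert_eq!(bookmarks_source(true, false), BookmarksSource::WarmBookmarksCache);
    assert_eq!(bookmarks_source(true, true), BookmarksSource::Db);
    assert_eq!(bookmarks_source(false, false), BookmarksSource::Db);
    println!("ok");
}
```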
Reviewed By: krallin
Differential Revision: D22820879
fbshipit-source-id: 64a0aa0311edf17ad4cb548993d1d841aa320958
Summary:
The check does not practically work because the client sends `common=[null]`
if the common set is empty.
D22519582 changes the client-side logic to send `common=[]` instead of
`common=[null]` in such cases. Therefore remove the constraint to keep
tests passing. 13 tests depend on this change.
Reviewed By: StanislavGlebik
Differential Revision: D22612285
fbshipit-source-id: 48fbc94c6ab8112f0d7bae1e276f40c2edd47364
Summary:
The overall goal of this stack is to add WarmBookmarksCache support to
repo_client to make Mononoke more resilient to lands of very large
commits.
The code for managing cached_publishing_bookmarks_maybe_stale was already a bit
tricky, and with WarmBookmarksCache introduction it would've gotten even worse.
Let's move this logic to a separate SessionBookmarkCache struct.
Reviewed By: krallin
Differential Revision: D22816708
fbshipit-source-id: 02a7e127ebc68504b8f1a7401beb063a031bc0f4
Summary:
The overall goal of this stack is to add WarmBookmarksCache support to
repo_client to make Mononoke more resilient to lands of very large
commits.
The problem with large changesets is deriving hg changesets for them. It might take
a significant amount of time, which means that all clients are stuck on
listkeys() or heads() calls waiting for derivation. WarmBookmarksCache can help here by returning only bookmarks
for which hg changesets were already derived.
This is the second refactoring to introduce WarmBookmarksCache.
Now let's cache not only pull default, but also publishing bookmarks. There are two reasons to do it:
1) (Less important) It simplifies the code slightly
2) (More important) Without this change 'heads()' fetches all bookmarks directly from BlobRepo thus
bypassing any caches that we might have. So in order to make WarmBookmarksCache useful we need to avoid
doing that.
Reviewed By: farnz
Differential Revision: D22816707
fbshipit-source-id: 9593426796b5263344bd29fe5a92451770dabdc6
Summary:
The overall goal of this stack is to add WarmBookmarksCache support to
repo_client to make Mononoke more resilient to lands of very large commits.
This diff just does a small refactoring that makes introducing
WarmBookmarksCache easier. In particular, later I'd like the cached_pull_default_bookmarks_maybe_stale cache to store
not only PullDefault bookmarks, but also Publishing bookmarks, so that both the
listkeys() and heads() methods can be served from this cache. In order to do
that we need to store not only the bookmark name, but also the bookmark kind (i.e. is
it Publishing or PullDefault).
To do that let's store the actual Bookmark and hg changeset objects instead of
raw bytes.
Reviewed By: farnz
Differential Revision: D22816710
fbshipit-source-id: 6ec3af8fe365d767689e8f6552f9af24cbcd0cb9
Summary: Same as a previous diff. Let's keep the top-level dir tidy.
Reviewed By: krallin
Differential Revision: D22691638
fbshipit-source-id: 7f9a21f307efd9bbe37f515f475409c89b99cd31
Summary:
Currently the repo lock is checked only once, at the beginning of the unbundle future. The unbundle process takes some time, and during that time the repo can be locked by someone.
We can reduce that window by creating an additional future which checks the repo lock in a loop, and polling both futures to see which finishes first.
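A minimal illustration of racing the two sides, sketched here with threads and a channel instead of the real future-based implementation (all names are invented for the example):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc};
use std::thread;
use std::time::Duration;

// Thread-based stand-in for "poll both futures, first one wins": one
// worker performs the long-running unbundle, another periodically checks
// whether the repo was locked mid-flight.

#[derive(Debug, PartialEq)]
enum Outcome {
    Done,
    RepoLocked,
}

fn unbundle_with_lock_check(repo_locked: Arc<AtomicBool>, work: Duration) -> Outcome {
    let (tx, rx) = mpsc::channel();

    // The "unbundle" side: finishes after `work` elapses.
    let tx_work = tx.clone();
    thread::spawn(move || {
        thread::sleep(work);
        let _ = tx_work.send(Outcome::Done);
    });

    // The lock-checking side: polls the lock state in a loop.
    thread::spawn(move || loop {
        if repo_locked.load(Ordering::SeqCst) {
            let _ = tx.send(Outcome::RepoLocked);
            return;
        }
        thread::sleep(Duration::from_millis(1));
    });

    // Whichever side reports first determines the outcome.
    rx.recv().unwrap()
}

fn main() {
    let unlocked = Arc::new(AtomicBool::new(false));
    assert_eq!(
        unbundle_with_lock_check(unlocked, Duration::from_millis(5)),
        Outcome::Done
    );

    let locked = Arc::new(AtomicBool::new(true));
    assert_eq!(
        unbundle_with_lock_check(locked, Duration::from_secs(5)),
        Outcome::RepoLocked
    );
    println!("ok");
}
```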
Reviewed By: StanislavGlebik
Differential Revision: D22560907
fbshipit-source-id: 1cba492fa101dba988e07361e4048c6e9b778197
Summary:
This is the next step in exposing version, used to sync commits in read
queries. The previous step was to query this from DB, now let's also put it
into an enum payload. Further, I will add consumers of this in admin and
validation.
Note that ideally, `RewrittenAs` should always have a version associated with
it, but:
- right now this is not true in the DB (needs backfilling)
- even when I backfill everything, I would not like to error out just at
reading time; I would prefer the consumers to deal with the absence of a
version in rewritten commits.
Therefore, I decided to use `Option` here.
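Sketching the enum payload described above (hypothetical, simplified types: the real outcome carries a ChangesetId and a CommitSyncConfigVersion, modelled here as plain strings):

```rust
// Hypothetical, simplified shape of the enum payload; strings stand in
// for ChangesetId and CommitSyncConfigVersion.
#[derive(Debug, PartialEq)]
enum CommitSyncOutcome {
    NotSyncCandidate,
    // `None` covers DB rows that predate the version column and still
    // need backfilling; consumers have to handle the absence themselves.
    RewrittenAs(String, Option<String>),
}

fn describe(outcome: &CommitSyncOutcome) -> String {
    match outcome {
        CommitSyncOutcome::NotSyncCandidate => "not a sync candidate".to_string(),
        CommitSyncOutcome::RewrittenAs(cs, Some(version)) => {
            format!("rewritten as {} (version {})", cs, version)
        }
        CommitSyncOutcome::RewrittenAs(cs, None) => {
            format!("rewritten as {} (version unknown)", cs)
        }
    }
}

fn main() {
    let o = CommitSyncOutcome::RewrittenAs("abc123".to_string(), None);
    assert_eq!(describe(&o), "rewritten as abc123 (version unknown)");
    println!("ok");
}
```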
Reviewed By: StanislavGlebik
Differential Revision: D22476166
fbshipit-source-id: 5bc27bb21b7e59c604755ef35aa5d3d2c3df527e
Summary:
Separate out the `BundleReplayData` from the `BookmarkUpdateReason` enum. There's
no real need for this to be part of the reason, and removing it means we can
abstract away the remaining dependency on Mercurial changeset IDs from
the main bookmarks traits.
Reviewed By: mitrandir77, ikostia
Differential Revision: D22417659
fbshipit-source-id: c8e5af7ba57d10a90c86437b59c0d48e587e730e
Summary:
The goal is to make it easier to implement unit tests, which depend on `LiveCommitSyncConfig`. Specifically, `scs` has a piece of code, which instantiates `mononoke_api::Repo` with a test version of `CommitSyncConfig`. To migrate it to `LiveCommitSyncConfig`, I need to be able to create a test version of that. It **is** possible now, but would require me to turn a supplied instance of `CommitSyncConfig` back into `json`, which is cumbersome. Using a `dyn LiveCommitSyncConfig` there, instead of a concrete struct seems like a good idea.
Note also that we are using this technique in many places: most (all?) of our DB tables are traits, which we then implement for SQL-specific structs.
Finally, this diff does not actually migrate all of the current users of `LiveCommitSyncConfig` (the struct) to be users of `LiveCommitSyncConfig` (the trait), and instead makes them use `CfgrLiveCommitSyncConfig` (the trait impl). The idea is that we can migrate bits to use traits when needed (for example, in an upcoming `scs` diff). When not needed, it's fine to use concrete structs. Again, this is already the case in a few places: we sometimes use the `SqlSyncedCommitMapping` struct directly, instead of `T: SyncedCommitMapping` or `dyn SyncedCommitMapping`.
Reviewed By: StanislavGlebik
Differential Revision: D22383859
fbshipit-source-id: 8657fa39b11101684c1baae9f26becad6f890302
Summary:
Just knowing the number of fetched undesired files doesn't give the full
picture, e.g. fetching lots of small files is better than fetching a single
multi-GB file.
So knowing the size of the files is helpful.
Reviewed By: krallin
Differential Revision: D22408400
fbshipit-source-id: 7653c1cdceccf50aeda9ce8a4880ee5178d4b107
Summary: D22381744 updated the version of `futures` in third-party/rust to 0.3.5, but did not regenerate the autocargo-managed Cargo.toml files in the repo. Although this is a semver-compatible change (and therefore should not break anything), it means that affected projects would see changes to all of their Cargo.toml files the next time they ran `cargo autocargo`.
Reviewed By: dtolnay
Differential Revision: D22403809
fbshipit-source-id: eb1fdbaf69c99549309da0f67c9bebcb69c1131b
Summary:
Rework the bookmarks traits:
* Split out log functions into a separate `BookmarkUpdateLog` trait. The cache doesn't care about these methods.
* Simplify `list` down to a single method with appropriate filtering parameters. We want to add more filtering types, and adding more methods for each possible combination will be messier.
* The `Bookmarks` and `BookmarkUpdateLog` traits become `attributes` on `BlobRepo`, rather than being a named member.
Reorganise the bookmarks crate to separate out the bookmarks log and transactions into their own modules.
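Roughly, the split looks like this (all signatures here are illustrative sketches; the real traits are async and considerably richer):

```rust
// Rough sketch of the split described above: log methods live on a
// separate trait so cache implementations of `Bookmarks` don't have to
// care about them, and one `list` with filtering parameters replaces a
// family of methods. All signatures are invented for illustration.
trait Bookmarks {
    fn list(&self, kinds: &[&str], prefix: &str) -> Vec<String>;
}

trait BookmarkUpdateLog {
    fn read_next_entries(&self, limit: usize) -> Vec<String>;
}

struct InMemoryBookmarks {
    /// (name, kind) pairs, e.g. ("master", "publishing")
    entries: Vec<(String, String)>,
    log: Vec<String>,
}

impl Bookmarks for InMemoryBookmarks {
    fn list(&self, kinds: &[&str], prefix: &str) -> Vec<String> {
        self.entries
            .iter()
            .filter(|(name, kind)| name.starts_with(prefix) && kinds.contains(&kind.as_str()))
            .map(|(name, _)| name.clone())
            .collect()
    }
}

impl BookmarkUpdateLog for InMemoryBookmarks {
    fn read_next_entries(&self, limit: usize) -> Vec<String> {
        self.log.iter().take(limit).cloned().collect()
    }
}

fn main() {
    let bm = InMemoryBookmarks {
        entries: vec![
            ("master".to_string(), "publishing".to_string()),
            ("scratch/x".to_string(), "scratch".to_string()),
        ],
        log: vec!["moved master".to_string()],
    };
    assert_eq!(bm.list(&["publishing"], ""), vec!["master".to_string()]);
    assert_eq!(bm.read_next_entries(10), vec!["moved master".to_string()]);
    println!("ok");
}
```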
Reviewed By: krallin
Differential Revision: D22307781
fbshipit-source-id: 4fe514df8b7ef92ed3def80b21a16e196d916c64
Summary:
The `bulk_add()` method was checking for conflicts correctly, i.e. it wouldn't fail
if we tried to insert the same mapping twice.
`bulk_add_git_mapping_in_transaction` wasn't doing this check, i.e. it would
fail.
This caused us a few problems, and this diff fixes them - now
`bulk_add_git_mapping_in_transaction` does the same checks as `bulk_add` was
doing previously.
There is another important change in behaviour: if we try to insert two entries,
where one of them has a conflict and the other doesn't, previously we'd insert
the second entry. Now we insert neither; arguably that's the preferred behaviour.
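The intended all-or-nothing semantics can be sketched with an in-memory map standing in for the SQL table (illustrative only, not the actual implementation):

```rust
use std::collections::HashMap;

// In-memory stand-in for the mapping table: the batch insert ignores
// exact duplicates but rejects conflicting values, and on any conflict
// inserts nothing at all (the "insert neither" behaviour above).
fn bulk_add(
    table: &mut HashMap<String, String>,
    entries: &[(String, String)],
) -> Result<(), String> {
    // First pass: detect conflicts without mutating the table.
    for (k, v) in entries {
        if let Some(existing) = table.get(k) {
            if existing != v {
                return Err(format!("conflict on {}", k));
            }
        }
    }
    // Second pass: apply the whole batch.
    for (k, v) in entries {
        table.insert(k.clone(), v.clone());
    }
    Ok(())
}

fn main() {
    let mut table = HashMap::new();
    table.insert("g1".to_string(), "bonsai1".to_string());

    // Re-inserting the same mapping is fine.
    assert!(bulk_add(&mut table, &[("g1".into(), "bonsai1".into())]).is_ok());

    // One conflicting and one clean entry: nothing is inserted.
    let batch = vec![("g1".into(), "other".into()), ("g2".into(), "bonsai2".into())];
    assert!(bulk_add(&mut table, &batch).is_err());
    assert!(!table.contains_key("g2"));
    println!("ok");
}
```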
Reviewed By: krallin
Differential Revision: D22332001
fbshipit-source-id: 86fff8c23c43eeca0fb36b01b10cdaa73b3ce4ab
Summary: Convert the bookmarks traits to use new-style `BoxFuture<'static>` and `BoxStream<'static>`. This is a step along the path to full `async`/`await`.
Reviewed By: farnz
Differential Revision: D22244489
fbshipit-source-id: b1bcb65a6d9e63bc963d9faf106db61cd507e452
Summary: This is the final diff of the stack - it starts logging pushed commits to scribe
Reviewed By: farnz
Differential Revision: D22212755
fbshipit-source-id: ec09728408468acaeb1c214d43f930faac30899b
Summary:
Failing the push if we failed to log to scribe doesn't make a lot of sense. By that
time the ship has sailed - the commit has already been pushed, and by failing the
request we can't undo that. It would just create an annoyance for whoever is
pushing.
Instead let's log it to scuba.
Reviewed By: farnz
Differential Revision: D22256687
fbshipit-source-id: 2428bbf1db4cef6fa80777ad65184fab1804fa9c
Summary:
At the moment we can't test logging to scribe easily - we don't have a way to
mock it. Scribe is supposed to help with that.
It will let us configure all scribe logs to go to a directory on a
filesystem, similar to the way we configure scuba. The Scribe itself will
be stored in CoreContext.
Reviewed By: farnz
Differential Revision: D22237730
fbshipit-source-id: 144340bcfb1babc3577026191428df48e30a0bb6
Summary:
Eventually, we want everything to be `async`/`await`; as a stepping stone in that direction, switch the remaining blobstore traits to new-style futures.
This just pushes the `.compat()` out to old-style futures, but it makes the move to non-'static lifetimes easier, as all the compile errors will relate to lifetime issues.
Reviewed By: krallin
Differential Revision: D22183228
fbshipit-source-id: 3fe3977f4469626f55cbf5636d17fff905039827
Summary:
Eventually, we want everything to be `async`/`await`; as a stepping stone in that direction, switch some of the blobstore interfaces to new-style `BoxFuture` with a `'static` lifetime.
This does not enable any fixes at this point, but does mean that `.compat()` moves to the places that need old-style futures instead of new. It also means that the work needed to make the transition fully complete is changed from a full conversion to new futures, to simply changing the lifetimes involved and fixing the resulting compile failures.
Reviewed By: krallin
Differential Revision: D22164315
fbshipit-source-id: dc655c36db4711d84d42d1e81b76e5dddd16f59d
Summary:
Push supported multiple bookmarks in theory, but in practice we never used that.
Since we want to start logging pushed commits in the next diffs, we need to decide what to do with
bookmarks. Since at the moment we can log only a single bookmark to scribe,
let's just allow single-bookmark pushes.
Reviewed By: farnz
Differential Revision: D22212674
fbshipit-source-id: 8191ee26337445ce2ef43adf1a6ded3e3832cc97
Summary:
In the next diffs it will be passed to unbundle processing so that we can use
scribe category to log pushed commits
Reviewed By: krallin
Differential Revision: D22212616
fbshipit-source-id: 17552bda11f102041a043f810125dc381e478611
Summary: That was like 50% of the point of this change, and somehow I forgot to do it.
Reviewed By: farnz
Differential Revision: D22231923
fbshipit-source-id: 4a4daaeaa844acd219680907c0b5a5fdacdf535c
Summary:
This fn is not used anywhere except tests, and its only difference from
`backsync_all_latest` is in the fact that it accepts a limit. So let's rename
`backsync_all_latest` into `backsync_latest` and make it accept a limit arg.
I decided to use a custom enum instead of `Option` so that people don't have to
open fn definition to understand what `BacksyncLimit::Limit(2)` or
`BacksyncLimit::NoLimit` mean.
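A minimal sketch of the self-documenting enum argument (the consuming function is invented for illustration):

```rust
// Sketch of the limit argument described above; the enum name mirrors
// the diff, the consuming function is illustrative only.
#[derive(Debug, Clone, Copy)]
enum BacksyncLimit {
    NoLimit,
    Limit(u64),
}

fn entries_to_sync(total_pending: u64, limit: BacksyncLimit) -> u64 {
    match limit {
        BacksyncLimit::NoLimit => total_pending,
        BacksyncLimit::Limit(n) => total_pending.min(n),
    }
}

fn main() {
    // At a call site this reads much better than a bare `Some(2)` / `None`:
    assert_eq!(entries_to_sync(10, BacksyncLimit::Limit(2)), 2);
    assert_eq!(entries_to_sync(10, BacksyncLimit::NoLimit), 10);
    println!("ok");
}
```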
Reviewed By: StanislavGlebik
Differential Revision: D22187118
fbshipit-source-id: 6bd97bd6e6f3776e46c6031f775739ca6788ec8c
Summary:
This diff enables `unbundle` flow to start creating `push_redirector` structs from hot-reloaded `CommitSyncConfig` (by using the `LiveCommitSyncConfig` struct).
Using `LiveCommitSyncConfig` unfortunately means that we need to make sure those tests, which don't use standard fixtures, need to have both the `.toml` and the `.json` commit sync configs present, which is a little verbose. But it's not too horrible.
Reviewed By: StanislavGlebik
Differential Revision: D21962960
fbshipit-source-id: d355210b5dac50d1b3ad277f99af5bab56c9b62e
Summary:
Due to the Thrift design of "include" statements in fbcode, the thrift structures have to be contained in folders that mirror the folder layout inside fbcode.
This diff changes the folder layout in Cargo.toml files and in fbcode_builder; there will be a next diff that changes this for ShipIt as well.
Reviewed By: ikostia
Differential Revision: D22208707
fbshipit-source-id: 65f9cafed2f0fcc8398a3887dfa622de9e139f68
Summary:
If a commit changes the modes (i.e. executable, symlink or regular) of a lot of files but
doesn't change their content, then we don't need to put these filenodes into the
generated bundle. Mercurial stores the mode in the manifest, so changing the mode
doesn't change the filenode.
Reviewed By: ikostia
Differential Revision: D22206736
fbshipit-source-id: f64ee8a34281cd207c92653b927bf9109ccbe1b4
Summary: `HgPhase` type is redundant and was adding dependency on mercurial in phases crate.
Reviewed By: farnz
Differential Revision: D22162716
fbshipit-source-id: 1c21841d34897d0072ff6fe5e4ac89adddeb3c68
Summary: DangerousOverride is moved into a separate crate. Not only is it usually not needed, it was also introducing dependencies on the mercurial crate.
Reviewed By: StanislavGlebik
Differential Revision: D22115015
fbshipit-source-id: c9646896f906ea54d11aa83a8fbd8490a5b115ea
Summary: Move all mercurial changeset generation logic to `blobrepo_hg`. This preliminary step is required to decouple BlobRepo from mercurial, and in later stages it will be moved to the derived data infra once blobrepo is free of mercurial.
Reviewed By: StanislavGlebik
Differential Revision: D22089677
fbshipit-source-id: bca28dedda499f80899e729e4142e373d8bec0b8
Summary: Move HgMutationStore to attributes, and all related methods to BlobRepoHg.
Reviewed By: StanislavGlebik
Differential Revision: D22089657
fbshipit-source-id: 8fe87418ccb8a7ad43828758844bdbd73dc0573d
Summary: This diff introduces the `BlobRepoHg` extension trait for the `BlobRepo` object, which contains mercurial-specific methods that were previously part of `BlobRepo`. This diff also starts moving some of the methods from BlobRepo to BlobRepoHg.
Reviewed By: ikostia
Differential Revision: D21659867
fbshipit-source-id: 1af992915a776f6f6e49b03e4156151741b2fca2
Summary: Megarepo is simplified if we can avoid copying hooks everywhere - run megarepo hooks as well as small repo hooks during pushredirection.
Reviewed By: StanislavGlebik
Differential Revision: D20652331
fbshipit-source-id: f42216797b9061db10b50c1440253de1f56d6b85
Summary:
Remove unused dependencies for Rust targets.
This failed to remove the dependencies in eden/scm/edenscmnative/bindings
because of the extra macro layer.
Manual edits (named_deps) and misc output in P133451794
Reviewed By: dtolnay
Differential Revision: D22083498
fbshipit-source-id: 170bbaf3c6d767e52e86152d0f34bf6daa198283
Summary:
The goal of the stack is to support hot reloading of `CommitSyncConfig`s everywhere: in `push_redirector`, `backsyncer`, `x-repo sync job` and so forth.
This diff in particular is a refactoring of how we instantiate the `PushRedirector` struct for the `unbundle` flow. Previously the struct would be instantiated when `RepoHandler` struct was built and would later be reused by `RepoClient`. Now we want to instantiate `PushRedirector` before we start processing the `unbundle` request, so that we can request the newest `CommitSyncConfig`. Note that this diff does not introduce the hot reload itself, it just lays the groundwork: instantiation of `PushRedirector` at request start.
To achieve this goal, `RepoClient` now contains a somewhat modified `PushRedirectorArgs` struct, whose goal is to own the unchanging stuff, needed to create a full `PushRedirector`.
Here are a few explicit non-goals for this hot reloading:
- the overall decision whether the repo is part of any `CommitSyncConfig` or not is still made at `RepoHandler` creation time. What this means is that if `CommitSyncConfig` is changed to have another small repo and a Mononoke server happens to know about that repo, it would not automatically pick up the fact that the repo should be a part of `CommitSyncConfig`
- same for removal (stopping push redirector is already possible via a different hot-reloaded config)
- changing anything about a large/small relationship is likely to be very complicated even under the best circumstances of everything being down, let alone during a hot reload. This means that the target repo cannot be changed via this mechanism.
Essentially, the goal is only to be able to do a live change of how paths change between repos.
Reviewed By: StanislavGlebik
Differential Revision: D21904799
fbshipit-source-id: e40e6a9c39f4f03a436bd974f3cba26c690c5f27
Summary:
Add logging of infinitepush (draft) commits to a separate scribe category.
The logging will also include the username and hostname of the pusher. Since
this code is shared with the public commits scribe logging, that logging will
also gain this information.
Reviewed By: farnz
Differential Revision: D21742656
fbshipit-source-id: bdbfd14db9e8aae190c634ac4bfff35b3f62bbe4
Summary:
Add the `move_bookmark` method to `mononoke_api`.
This attempts to re-use code from `repo_client`. This code isn't really designed to be called from this API, so the fit is very poor. In a future diff we will fix this up.
The `repo_client` code for force-moving a bookmark does not support running hooks for that bookmark. For now we will prevent API users from moving bookmarks that have hooks configured. This will also be addressed in a future diff.
Reviewed By: krallin
Differential Revision: D21904979
fbshipit-source-id: 42bf840489e5b04f463c69c752bcaa5174630c21
Summary:
Some of the hg changeset structures use `comments` as the name for the commit
message. But this is confusing - let's rename it.
Reviewed By: HarveyHunt
Differential Revision: D21974571
fbshipit-source-id: e7c5c5ad8db9b2f1343abe9101fc56a6d4287548
Summary: Let's log the name as well - it will help with investigation.
Reviewed By: farnz
Differential Revision: D21906595
fbshipit-source-id: 51eb49354017c17ba3304f0a66c95dfc3c695e6a
Summary:
Let's return FilenodeResult from get_all_filenodes_maybe_stale and change
callers to deal with that.
The change is straightforward with the exception of `file_history.rs`.
get_all_filenodes_maybe_stale() is used here to prefetch a lot filenodes in one
go. This diff changes it to return an empty vec in case filenodes are disabled.
Unfortunately this is not a great solution - since prefetched files are empty
get_file_history_using_prefetched() falls back to fetching filenodes
sequentially from the blobstore. that might be too slow, and the next diffs in
the stack will address this problem.
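A simplified model of the call-site change (stand-in types and data; the real functions are async and operate on real filenodes):

```rust
// Simplified model of the prefetch call-site change: when filenodes are
// disabled we return an empty prefetch set instead of failing, and the
// caller degrades to sequential fetching. Types are stand-ins.
#[derive(Debug, PartialEq)]
enum FilenodeResult<T> {
    Present(T),
    Disabled,
}

fn get_all_filenodes_maybe_stale(enabled: bool) -> FilenodeResult<Vec<&'static str>> {
    if enabled {
        FilenodeResult::Present(vec!["node1", "node2"])
    } else {
        FilenodeResult::Disabled
    }
}

fn prefetched_filenodes(enabled: bool) -> Vec<&'static str> {
    match get_all_filenodes_maybe_stale(enabled) {
        FilenodeResult::Present(filenodes) => filenodes,
        // Empty prefetch: the history walk then falls back to fetching
        // filenodes one by one from the blobstore.
        FilenodeResult::Disabled => vec![],
    }
}

fn main() {
    assert_eq!(prefetched_filenodes(true), vec!["node1", "node2"]);
    assert!(prefetched_filenodes(false).is_empty());
    println!("ok");
}
```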
Reviewed By: krallin
Differential Revision: D21881082
fbshipit-source-id: a86dfd48a92182381ab56994f6b0f4b14651ea14
Summary:
I observed that for whatever reason our setting of `use_try_shorthand = true` in rustfmt.toml was causing entire functions not to get processed by rustfmt, even in files that contain neither `try` nor `?`. Remove it and reformat fbsource.
Documentation of that config:
- https://github.com/rust-lang/rustfmt/blob/master/Configurations.md#use_try_shorthand
We don't particularly care about the value anymore because nobody writes `r#try!(...)` in 2018 edition code.
Minimized:
```
fn f() {
g(
)
// ...
.h
}
```
This function gets formatted only if use_try_shorthand is not set.
The bug is fixed in the rustfmt 2.0 release candidate.
Reviewed By: jsgf
Differential Revision: D21878162
fbshipit-source-id: b028673c0eb703984d24bf0d2983453fc2a8c212
Summary:
See D21765065 for more context. TL;DR is that we want to control
lfs rollout from client side to make sure we don't put lfs pointers in the
shared memcache
Reviewed By: xavierd
Differential Revision: D21822159
fbshipit-source-id: daea6078d95eb4e9c040d353a20bcdf1b6ae07b1
Summary:
The motivation for the whole stack:
At the moment if mysql is down then Mononoke is down as well, both for writes
and for reads. However we can relatively easily improve the situation.
During hg update the client sends getpack() requests to fetch files, and
currently for each file fetch we also fetch the file's linknode. However the hg
client knows how to deal with null linknodes [1], so when mysql is unavailable
we can disable filenode fetching completely and just return null linknodes. So
the goal of this stack is to add a knob (i.e. a tunable) that can turn filenode
fetches on and off, and make sure the rest of the code deals nicely with this
situation.
Now, about this diff. In order to force callers to deal with the fact that
filenodes might be unavailable, I suggest adding a special type of result which
(in later diffs) will be returned by every filenodes method.
This diff just introduces FilenodeResult and converts the BlobRepo filenode
methods to return it. The reason why I converted the BlobRepo methods first is
to limit the scope of changes but at the same time show what the callers' code
will look like after FilenodeResult is introduced, and get people's thoughts on
whether it's reasonable or not.
Another important change I'd like to introduce in the next diffs is modifying FilenodesOnlyPublic
derived data to return success if filenodes knob is off. If we don't do that
then any attempt to derive filenodes might fail which in turn would lead to the
same problem we have right now - people won't be able to do hg update/hg
pull/etc if mysql is down.
[1] null linknodes might make some client-side operations slower (e.g. hg rebase/log/blame),
so we should use them only in sev-like situations
Reviewed By: krallin
Differential Revision: D21787848
fbshipit-source-id: ad48d5556e995af09295fa43118ff8e3c2b0e53e