Commit Graph

323 Commits

Author SHA1 Message Date
Lukas Piatkowski
f317302b0f autocargo v1: reformatting of oss-dependencies, workspace and patch sections and thrift files to match v2
Summary:
For dependencies, V2 puts "version" as the first attribute of a dependency, or just after "package" if present.
The workspace section comes after the patch section in V2, and since V2 autoformats the patch section, the manual entries in third-party/rust/Cargo.toml had to be formatted by hand, since V1 takes them as-is.
Thrift files should have "generated by autocargo" rather than just "generated" on their first line. This diff also removes some previously generated thrift files that were incorrectly left behind when the corresponding Cargo.toml was removed.

Reviewed By: ikostia

Differential Revision: D26618363

fbshipit-source-id: c45d296074f5b0319bba975f3cb0240119729c92
2021-02-25 15:10:56 -08:00
Thomas Orozco
d71fa2882c common/rust/futures_ext: update to tokio_shim
Summary:
Like it says in the title, this updates futures_ext to use tokio_shim, which
makes it compatible with Tokio 0.2 and 1.0.

There is one small difference in behavior here, which is that in Tokio 1.0,
sleep isn't Unpin anymore, so callers will need to call `boxed()` or use Tokio's `pin!` macro if they need
Unpin.
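
For illustration, a minimal sketch of those two workarounds (assuming a Tokio
1.0-style `tokio::time::sleep` and the `FutureExt::boxed` helper from the
futures crate; this snippet is not part of the diff):

    use std::time::Duration;
    use futures::future::FutureExt; // for .boxed()

    async fn sleep_then_continue() {
        // Boxing restores Unpin, so the future can be handed to combinators
        // (e.g. select) that require it.
        let boxed_sleep = tokio::time::sleep(Duration::from_secs(1)).boxed();
        boxed_sleep.await;

        // Alternatively, pin the future on the stack with Tokio's pin! macro.
        let sleep = tokio::time::sleep(Duration::from_secs(1));
        tokio::pin!(sleep);
        sleep.await;
    }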

I do want to get as close to what upstream is doing in Tokio 1.0, so I think
it's good to keep that behavior.

Reviewed By: farnz

Differential Revision: D26610036

fbshipit-source-id: ff72275da55558fdf8fe3ad009d25cf84e108a5a
2021-02-25 02:11:30 -08:00
Stefan Filip
84017abe21 segmented_changelog: update OnDemandUpdateDag to have smaller critical sections
Summary:
The on-demand update code we have is the most basic logic that we could have.
The main problem is that it holds long and redundant write locks. This change
reduces the write lock strictly to the section that has to update the in-memory
IdDag.

Updating the Dag has 3 phases:
* loading the data that is required for the update;
* updating the IdMap;
* updating the IdDag;

The Dag can serve requests well as long as the commits involved have already
been built, so we want easy read access to both the IdMap and the IdDag. The
IdMap is a very simple structure, and because it's described as an
Arc<dyn IdMap> we push the update locking logic down to the storage. The IdDag
is a complicated structure that we ask to update itself; those functions take
mutable references. Updating the storage of the IdDag to hide the complexities
of locking is more difficult, so we deal with the IdDag directly by wrapping it
in a RwLock. The RwLock allows for easy read access, which we expect to be the
predominant access pattern.

Updates to the dag are not completely stable, so racing updates can have
conflicting results. In case of conflicts, one of the update processes would
have to restart. It's easier to reason about the process if we just allow one
"thread" to start an update process. The update process is guarded by a sync
mutex. The "threads" that lose the race to update are asked to wait until the
ongoing update is complete. The waiters will poll on a shared future that
tracks the ongoing dag update. After the update is complete, the waiters will
go back to checking whether the data they need is available in the dag. It is
possible that the dag is updated between determining that an update is needed
and acquiring the ongoing_update lock. This is fine because the update-building
process checks the state of the dag before building and updates only what is
necessary, if anything.
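
As a rough sketch of the scheme described above (type and method names here are
illustrative, not the actual segmented_changelog code):

    use std::sync::{Arc, Mutex, RwLock};
    use futures::future::{BoxFuture, FutureExt, Shared};

    struct IdDag; // stand-in for the real in-memory IdDag

    struct OnDemandUpdateDag {
        iddag: Arc<RwLock<IdDag>>,
        // Guards who gets to run the update; losers clone the shared future.
        ongoing_update: Mutex<Option<Shared<BoxFuture<'static, ()>>>>,
    }

    impl OnDemandUpdateDag {
        async fn ensure_updated(&self) {
            // Fast path (elided): a read lock on self.iddag is enough when the
            // requested commits have already been built.
            let update = {
                let mut ongoing = self.ongoing_update.lock().unwrap();
                match ongoing.as_ref() {
                    // Lost the race: wait for the ongoing update to finish.
                    Some(fut) => fut.clone(),
                    // Won the race: start the update and publish it for waiters.
                    None => {
                        let iddag = self.iddag.clone();
                        let fut = async move {
                            // Load data and update the IdMap outside any lock,
                            // then take the write lock only to mutate the IdDag.
                            let _guard = iddag.write().unwrap();
                            // ... apply the prepared changes here ...
                        }
                        .boxed()
                        .shared();
                        *ongoing = Some(fut.clone());
                        fut
                    }
                }
            };
            update.await;
            // Clearing ongoing_update after completion and re-checking the dag
            // are elided for brevity.
        }
    }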

Reviewed By: krallin

Differential Revision: D26508430

fbshipit-source-id: cd3bceed7e0ffb00aee64433816b5a23c0508d3c
2021-02-22 18:17:21 -08:00
Thomas Orozco
097e4ad00c mononoke: remove tokio-compat (i.e. use tokio 0.2 exclusively)
Summary:
The earlier diffs in this stack have removed all our dependencies on the Tokio
0.1 runtime environment (so, basically, `tokio-executor` and `tokio-timer`), so
we don't need this anymore.

We do still have some deps on `tokio-io`, but this is just traits + helpers,
so this doesn't actually prevent us from removing the 0.1 runtime!

Note that we still have a few transitive dependencies on Tokio 0.1:

- async-unit uses tokio-compat
- hg depends on tokio-compat too, and we depend on it in tests

This isn't the end of the world though, we can live with that :)

Reviewed By: ahornby

Differential Revision: D26544410

fbshipit-source-id: 24789be2402c3f48220dcaad110e8246ef02ecd8
2021-02-22 09:22:42 -08:00
Thomas Orozco
0734a61cb1 common/rust: remove tracing
Summary:
This was only ever used in Mononoke; we don't think it's usable and we haven't
been using it. Let's get rid of it. As-is, it won't even work for most people
due to its (indirect) dependency on Tokio 0.1.

Reviewed By: StanislavGlebik

Differential Revision: D26512243

fbshipit-source-id: faa16683f2adb20dfba43c4768486b982bc02ff9
2021-02-22 09:22:41 -08:00
Stanislau Hlebik
dff228e967 mononoke: log how many nodes were requested in known call
Summary: We don't log this information if a request failed. Let's start doing that.

Reviewed By: farnz

Differential Revision: D26577832

fbshipit-source-id: acfac1c57364eeb457a81ff4bbeddc5407f3a985
2021-02-22 04:21:34 -08:00
Lukas Piatkowski
cd0b6d50e2 autocargo v1: changes to match autocargo v2 generation results.
Summary:
The changes (and fixes) needed were:
- Ignore rules that are not rust_library or thrift_library (previously only rust_bindgen_library was ignored, so binary and test dependencies were incorrectly added to Cargo.toml)
- Thrift package name to match escaping logic of `tools/build_defs/fbcode_macros/build_defs/lib/thrift/rust.bzl`
- Rearrange some attributes, like features, authors, edition etc.
- Authors to use " instead of '
- Features to be sorted
- Sort all dependencies as one instead of grouping third party and fbcode dependencies together
- Manually format certain entries from third-party/rust/Cargo.toml, since V2 formats third party dependency entries and V1 just takes them as is.

Reviewed By: zertosh

Differential Revision: D26544150

fbshipit-source-id: 19d98985bd6c3ac901ad40cff38ee1ced547e8eb
2021-02-19 11:03:55 -08:00
Thomas Orozco
72ed8767e0 mononoke/unbundle: remove tokio 0.1
Summary: Here again, easy, it's not used.

Reviewed By: StanislavGlebik

Differential Revision: D26485714

fbshipit-source-id: 1f96e05934c4b649d862ce992ca90b031ea241a7
2021-02-19 07:00:54 -08:00
Thomas Orozco
3073987faf mononoke/repo_client: remove 0.1 tokio timeout from getcommitdata
Summary: This lets us ensure we only depend on Tokio 0.2 for timers.

Reviewed By: StanislavGlebik

Differential Revision: D26485590

fbshipit-source-id: e39dbc21dc51070113b3c9df497c2e0bbaa12450
2021-02-19 07:00:52 -08:00
Thomas Orozco
ae83446c36 mononoke/repo_client: remove 0.1 tokio timeout from streaming clone
Summary:
Like it says in the title. This one was pretty easy to just convert to 0.3
futures so I did so.

Reviewed By: StanislavGlebik

Differential Revision: D26485577

fbshipit-source-id: 76c751c1004288dda1d7b62866979c9228e0ef34
2021-02-19 07:00:52 -08:00
Thomas Orozco
c8854bf5c3 mononoke/repo_client: remove 0.1 tokio timeout in getbundle / gettreepack
Summary:
Like it says in the title. Porting anything to Futures 0.3 isn't practical
here so I didn't touch it.

Reviewed By: StanislavGlebik

Differential Revision: D26485585

fbshipit-source-id: 291bf63a6f31d502ac14492151f14bae20009094
2021-02-19 07:00:51 -08:00
Thomas Orozco
bfe01641ad mononoke/repo_client: remove 0.1 tokio timeout from getpack
Summary: Like it says in the title.

Reviewed By: StanislavGlebik

Differential Revision: D26485573

fbshipit-source-id: f0604760ff73352c7e3601103a84619186cac0d7
2021-02-19 07:00:51 -08:00
Thomas Orozco
d047316a8e mononoke/repo_client: rename 0.1 stream:: to stream_old::
Summary: This will make the next diff easier to read.

Reviewed By: StanislavGlebik

Differential Revision: D26485586

fbshipit-source-id: c52d5355c8d9ed742b4de0b1faab460ef6664c69
2021-02-19 07:00:51 -08:00
Thomas Orozco
6583143fec mononoke/repo_client: remove direct usage of Tokio 0.1
Summary:
Like it says in the title. This removes places where we use Tokio 0.1 directly
in repo client. We use it for timeouts, so this updates us to Tokio 0.2
timeouts.

Where possible, I've made a few improvements to the code as well:

- I removed timeouts on `future::ok()` because a future that is immediately
  ready isn't going to time out.
- I updated some code to async / await where it made sense to do so to avoid
  round-tripping through futures 0.1 and 0.2 several times.

One thing that changes here is that we'll show Tokio's error on timeouts (which
says timeout has elapsed) instead of ours, which used to say just "timeout". I
think this doesn't make a big difference.
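
For reference, the Tokio 0.2-style timeout pattern looks roughly like this (a
sketch with made-up names, not the actual repo_client code):

    use std::time::Duration;
    use anyhow::Error;

    async fn fetch_with_timeout<T>(
        fut: impl std::future::Future<Output = Result<T, Error>>,
    ) -> Result<T, Error> {
        // On expiry, tokio::time::timeout yields an Elapsed error
        // ("deadline has elapsed") instead of a hand-rolled "timeout" message.
        tokio::time::timeout(Duration::from_secs(30), fut).await?
    }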

Reviewed By: StanislavGlebik

Differential Revision: D26485575

fbshipit-source-id: 8158f709bcc52d123a95df541aaeb1ec0fc9c281
2021-02-19 07:00:50 -08:00
Thomas Orozco
fc48f40f4a mononoke: update futures_ext name in repo_client
Summary: I'd like to use the 0.3 version here so let's get this cleaned up.

Reviewed By: StanislavGlebik

Differential Revision: D26485583

fbshipit-source-id: 1d1ff8e75888e6d874d21195cae7600f171321ac
2021-02-19 07:00:50 -08:00
Thomas Orozco
737e98580e common/rust/shed/futures_ext: split FbFutureExt
Summary:
For streams we have `FbStreamExt` and `FbTryStreamExt`. Let's be a little more
consistent and do the same with the future extension trait.
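
Roughly, the split mirrors the stream-side pair, along these lines (the trait
names and contents here are illustrative; the real futures_ext traits differ):

    use futures::future::{Future, TryFuture};

    /// Combinators that apply to any future.
    trait FbFutureExt: Future {}
    impl<T: Future + ?Sized> FbFutureExt for T {}

    /// Combinators that only make sense for futures returning Result.
    trait FbTryFutureExt: TryFuture {}
    impl<T: TryFuture + ?Sized> FbTryFutureExt for T {}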

Reviewed By: StanislavGlebik

Differential Revision: D26485589

fbshipit-source-id: 5ebbda11d02e16709958a99a806aa70a8354672e
2021-02-19 07:00:49 -08:00
Jan Mazur
f9376fce90 load_limiter: static, sliced rate limiting
Summary: We would like to consistently rate limit a percentage of hosts from a specific tier, expressed as a subset of identities.
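
One way to slice hosts consistently is to hash each client identity and compare
against the configured percentage; a sketch (not the actual load_limiter code):

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    /// Returns true if this identity falls into the rate-limited slice.
    /// The same identity always gets the same answer for a given percentage.
    fn in_limited_slice(client_identity: &str, percentage: u64) -> bool {
        let mut hasher = DefaultHasher::new();
        client_identity.hash(&mut hasher);
        hasher.finish() % 100 < percentage
    }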

Reviewed By: krallin

Differential Revision: D26312370

fbshipit-source-id: d3fc9e892a8c9f62e22b079fa947a85078831687
2021-02-15 09:56:56 -08:00
Lukas Piatkowski
87ddbe2f74 autocargo v1: update autocargo field format to allow transition to autocargo v2
Summary:
Autocargo V2 will use a more structured format for the autocargo field;
with the help of the `cargo_toml` crate it will be easy to deserialize and
handle it.

Also the "include" field is apparently obsolete as it is used for cargo-publish (see https://doc.rust-lang.org/cargo/reference/manifest.html#the-exclude-and-include-fields). From what I know this might be often wrong, especially if someone tries to publish a package from fbcode, then the private facebook folders might be shipped. Lets just not set it and in the new system one will be able to set it explicitly via autocargo parameter on a rule.

Reviewed By: ahornby

Differential Revision: D26339606

fbshipit-source-id: 510a01a4dd80b3efe58a14553b752009d516d651
2021-02-12 23:28:25 -08:00
Thomas Orozco
2a21e4fb17 third-party/rust: update Tokio to 0.2.25 + add a patch to disable coop scheduling
Summary:
See the patch & motivation here:

818f943db3

Reviewed By: StanislavGlebik

Differential Revision: D26399890

fbshipit-source-id: e184a3f6c1dd03cb4cdb7ea18073c3392d7ce355
2021-02-12 04:56:23 -08:00
Kostia Balytskyi
6b67fe8602 rate_limits: add total file-changes rate limit
Summary: Instead of doing per-repo rate-limiting checks, let's do total ones. All of the business logic stays the same, with the exception of the counter used.

Reviewed By: farnz

Differential Revision: D26374353

fbshipit-source-id: 92006cd3e5dd194ac9e6531cbb19289fa73a63d2
2021-02-12 01:31:55 -08:00
Stanislau Hlebik
af2ab0cf10 mononoke: store hydrated tree manifests in .hg
Reviewed By: krallin

Differential Revision: D26401093

fbshipit-source-id: e5050883b0e6f370a7cfbb5f46721aca7469dce1
2021-02-11 10:12:27 -08:00
Stefan Filip
0a308f9f84 update Cargo.toml after assert_matches update
Summary: cargo autocargo

Reviewed By: fanzeyi

Differential Revision: D26316542

fbshipit-source-id: f9e12a9d7b3b4e03a6f7b074ea2873ad6dcc82ad
2021-02-08 10:23:00 -08:00
Kostia Balytskyi
29f1b16154 live_commit_sync_config: asyncify commit sync config accessors
Summary:
This is preparation for the potential need for IO to be done by this trait and its implementors.

We think the IO might be needed if we move commit sync config storage from `Configerator` into xdb, or somewhere else. To be clear, I personally am not certain we'll *need* this, but in any case, asyncifying the trait does not seem like a risky thing here (because we usually have only 0-2 sync functions in the stack above the `LiveCommitSyncConfig` accessors, so it does not require large-scale code flow changes or anything).

I intentionally did not touch the push-redirection accessors, as I don't think those will ever move away from Configerator.
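
The shape of the change is roughly the following (illustrative signatures only;
the real LiveCommitSyncConfig accessors take different types):

    use anyhow::Result;
    use async_trait::async_trait;

    // Before: a synchronous accessor.
    trait CommitSyncConfigSourceSync {
        fn get_current_commit_sync_config(&self, repo_id: u32) -> Result<String>;
    }

    // After: the same accessor, asyncified so implementors may do IO.
    #[async_trait]
    trait CommitSyncConfigSource {
        async fn get_current_commit_sync_config(&self, repo_id: u32) -> Result<String>;
    }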

Reviewed By: StanislavGlebik

Differential Revision: D26275905

fbshipit-source-id: 1bfdca087434d475d50032dd47dd94f16be051f9
2021-02-08 00:42:31 -08:00
Thomas Orozco
497544a63e mononoke/edenapi_service: check & bump load counters for trees & files
Summary:
Like it says in the title. Let's add some basic throttling control here, in
line with what we have in Mononoke Server. The numbers don't quite match up
since fetches in EdenAPI don't include linknodes or history, but this should be
better than nothing and sufficient for now, and makes sense to have with
EdenAPI & Mononoke Server running in the same process.

Reviewed By: HarveyHunt

Differential Revision: D26250746

fbshipit-source-id: 338eda4341a163d0d915f10bf45fc7f40c74fc69
2021-02-05 15:16:09 -08:00
Ilia Medianikov
ba7e78c567 mononoke: expose PerfCounters at request level
Summary: One thing we have right now in Repo Client is logging of perf counters (this happens in the call to finalize_command). However, we don't have this at the request level. This is a little annoying because there is some context about the client calling us that only exists at the request level, so we can't aggregate on PerfCounters there.

Reviewed By: krallin

Differential Revision: D26257247

fbshipit-source-id: 220c1adafd583420e64599befbd165152c3a8c6f
2021-02-05 10:58:27 -08:00
Kostia Balytskyi
cd9479ae54 repo_client: sample small gettreepack logging
Summary:
`gettreepack` accounts for ~6B logged scuba rows a day (https://fburl.com/scuba/mononoke_test_perf/vpnsn1ny) out of ~10B total logged rows (https://fburl.com/scuba/mononoke_test_perf/qw78ecxe), so 60% of rows. For the vast majority of `gettreepack` instances we log 3 log tags: "Start processing", "Gettreepack params" and "Command processed". Similarly, the vast majority of requests include just 1 mfnode: https://fburl.com/scuba/mononoke_test_perf/3xwotsgq. If we sample logging for these commands by a factor of 100, we'll be able to save almost all of those 60% of rows (it's not entirely clear how that will actually influence our retention, but likely pretty significantly).

What do we lose if we do this sampling?
There are a few perf counters, like GettreepackResponseSize, GettreepackNumTreepacks, GettreepackDirectories, GettreepackDesignatedNodes, that will lose their aggregation accuracy. Given that we're only sampling single-mfnode gettreepacks, these values are not likely to be very interesting. However, we still leave the possibility of turning verbose logging back on and getting the full amount of logging.
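
The sampling decision itself is simple; an illustrative sketch (not the actual
scuba logging API):

    use rand::Rng;

    /// Log roughly 1 in `sample_rate` single-mfnode gettreepack requests,
    /// unless verbose logging has been turned back on.
    fn should_log(sample_rate: u64, verbose: bool) -> bool {
        verbose || rand::thread_rng().gen_range(0..sample_rate) == 0
    }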

Reviewed By: mitrandir77, krallin

Differential Revision: D26148453

fbshipit-source-id: a8521364bb5323d41c6c0c7d82d50508c0eda068
2021-02-04 13:51:26 -08:00
Thomas Orozco
42f2751873 mononoke/repo_client: use commit sync config from Mononoke API
Summary:
Mononoke API already has an instance of this (though it's created per-repo,
which is a little bit awkward — I'll try to change that later), so we might
as well use it.

Reviewed By: StanislavGlebik

Differential Revision: D26108438

fbshipit-source-id: 3b5e7d5d3427304cc788930cbe9a51a6a6d214b9
2021-02-04 10:40:02 -08:00
Thomas Orozco
ea1689d949 mononoke: update mononoke server to use mononoke_api for repo construction
Summary:
Like it says in the title, this updates our repo construction to rely on
Mononoke API. My underlying goal here is to have a Mononoke instance around so
that I can start EdenAPI on it, but it also allows for a bunch of cleanup &
code deduplication.

There is still some stuff that isn't initialized in Mononoke API and probably
does not belong there, but at least the shared pieces now come from there. I
also did keep the `Arc<Repo>` around in Mononoke Server's `MononokeRepo`, so
this way we can start to migrate things to Mononoke API (instead of
de-constructing my `Repo` and getting the parts I need to stuff them into
`MononokeRepo`).

One part of this that might be a bit controversial is that I exposed some of
the internals of `Repo` via accessor methods. I know we've historically
wanted access via Mononoke API to not use the fields but instead use the
RepoContext, and I think that's a good goal, but (IMO) realistically the only
way we get there is by first making Mononoke API *available* to use in
repo_client (which is what this ends up doing), and then we can port things to
call Mononoke API instead of using blobrepo and such directly.

To make this work properly I also updated our tests to default to always
set up Configerator configs when starting Mononoke, since we need them to start
MononokeApi (for the CfgrLiveCommitSyncConfig, which right now has an ad-hoc
"ignore the failures in test mode" branch in Mononoke Server).

Reviewed By: markbt

Differential Revision: D26108443

fbshipit-source-id: b7cf5452e044828e73a0aa3ca3ddbc78e466fe57
2021-02-04 10:40:01 -08:00
Stanislau Hlebik
7115cf31d2 mononoke: getbundle optimization for many heads with low gen number
Reviewed By: markbt

Differential Revision: D26221250

fbshipit-source-id: dbc2dd4f181d22c30c6061f5b5de95b0be1ea19f
2021-02-03 03:55:46 -08:00
Thomas Orozco
287348866b mononoke/getbundle_response: log args to call_difference_of_union_of_ancestors_revset
Summary:
Like it says in the title, this logs arguments that we pass to
call_difference_of_union_of_ancestors_revset.

The underlying goal is to see if we could benefit from caching here by seeing
how unique the args are.

Facebook

Getbundle accounts for a small portion of our traffic, so Scuba-wise I think
this should be fine.

Reviewed By: StanislavGlebik

Differential Revision: D26202463

fbshipit-source-id: 93a82662764d0b114291d72ffc79d977c9721d63
2021-02-02 13:14:06 -08:00
Thomas Orozco
d907878221 mononoke/repo_client: bring back mod tests
Summary:
This test module accidentally got lost when I added a `mod tests { ... }` in
the containing module. This brings it back and modernizes the tests that could
be modernized. The push redirection test has way too much boilerplate to be
manageable, so for now I removed it. I'll see if I can bring it back after some
refactoring I'm doing.

I'll try to see if there's a way we can lint / warn against inline modules
shadowing other files.

Reviewed By: ahornby

Differential Revision: D26124354

fbshipit-source-id: 7b24c4fe635bf8197142ab9ee087631ed49f10be
2021-02-01 07:53:17 -08:00
Thomas Orozco
2f47e9263e mononoke: allow pushes in globalrev repos to ancestors of globalrev bookmark
Summary:
Like it says in the title, this updates our implementation of Globalrevs to
be a little more relaxed, and allows you to create and move bookmarks as long as
they are ancestors of the "main" Globalrevs bookmark (but NOT to pushrebase to
them later, because we only want to allow ONE globalrevs-publishing bookmark
per repo).

While in there, I also deduplicated how we instantiate pushrebase hooks a
little bit. If anything, this could be better in the pushrebase crate, but
that'd be a circular dependency between pushrebase & bookmarks movement.
Eventually, the callsites should probably be using bookmarks movement anyway,
so leaving pushrebase as the low-level crate and bookmarks movement as the high
level one seems reasonable.

Reviewed By: StanislavGlebik

Differential Revision: D26020274

fbshipit-source-id: 5ff6c1a852800b491a16d16f632462ce9453c89a
2021-02-01 05:30:57 -08:00
Stanislau Hlebik
21963bbc1b mononoke: make listkeyspatterns use warm bookmark cache
Summary:
krallin noticed that we aren't using the warm bookmark cache anymore. It turned
out the reason was that the client uses the `listkeyspatterns` call to fetch
bookmarks rather than `listkeys`. This diff makes `listkeyspatterns` use the
warm bookmark cache as well.

Reviewed By: markbt

Differential Revision: D26124605

fbshipit-source-id: 637db8d66934cabc1793f9f615fefddd07c3af62
2021-01-29 00:20:14 -08:00
Stanislau Hlebik
da6664a9b5 mononoke: use background session class for blobstore sync queue
Summary:
Yesterday we had an alarm when the blobstore sync queue got overloaded again.
This time it was caused by a large commit cloud commit landing and writing lots
of content and alias blobs.

As we discussed before, let's add an option that would allow us not to write to
the blobstore sync queue for commit cloud pushes of content and aliases.
It would slightly increase the latency, but will protect the blobstore sync
queue from overload.

Reviewed By: farnz

Differential Revision: D26129038

fbshipit-source-id: 0e96887e3aa3cf26880899c820f556bb16c437cb
2021-01-28 11:38:30 -08:00
Thomas Orozco
2ca2e8b123 mononoke: read globalrevs enabled from globalrevs_publishing_bookmark
Summary:
Like it says in the title. This is prep work for allowing extra
bookmarks in a Globalrevs repo later in this stack.

Reviewed By: ahornby

Differential Revision: D26076566

fbshipit-source-id: c775d50dfaa51e0f0f64e861b6c5b7ee16d62074
2021-01-27 08:32:38 -08:00
Daniel Xu
5715e58fce Add version specification to internal dependencies
Summary:
Lots of generated code in this diff. Only code change was in
`common/rust/cargo_from_buck/lib/cargo_generator.py`.

Path/git-only dependencies (i.e. `mydep = { path = "../foo/bar" }`) are not
publishable to crates.io. However, we are allowed to specify both a path/git
_and_ a version. When building locally, the path/git is chosen. When publishing,
the version on crates.io is chosen.

See https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html#multiple-locations .

Note that I understand that not all autocargo projects are published on crates.io (yet).
The point of this diff is to allow projects to slowly start getting uploaded.
The end goal is autocargo generated `Cargo.toml`s that can be `cargo publish`ed
without further modification.

Reviewed By: lukaspiatkowski

Differential Revision: D26028982

fbshipit-source-id: f7b4c9d4f4dd004727202bd98ab10e201a21e88c
2021-01-25 22:10:24 -08:00
Thomas Orozco
4dd3461824 third-party/rust: update Tokio 0.2.x to 0.2.24 & futures 1.x to 1.30
Summary:
When we tried to update to Tokio 0.2.14, we hit lots of hangs. Those were due
to incompatibilities between Tokio 0.2.14 and Futures 1.29. We fixed some of
the bugs (and others had been fixed and were pending a release), and Futures
1.30 has now been released, which unblocks our update.

This diff updates Tokio accordingly (the previous diff in the stack fixes an
incompatibility).

The underlying motivation here is to ease the transition to Tokio 1.0.
Ultimately we'll be pulling in those changes one way or another, so let's
get started on this incremental first step.

Reviewed By: farnz

Differential Revision: D25952428

fbshipit-source-id: b753195a1ffb404e0b0975eb7002d6d67ba100c2
2021-01-25 08:06:55 -08:00
Stanislau Hlebik
960c9943ba mononoke: tweak how we decide if a generation number is low or not
Summary:
A bit of background: some time ago I landed an optimization to getbundle, which
split the commits the user wants to fetch into "high generation numbers"
(which means commits are close to the main bookmark) and "low generation numbers"
(commits that are far away from the main bookmark). A user can fetch "low
generation numbers" if e.g. a commit to an old release branch was landed.
Processing them separately can yield significant speedups.

Previously we chose the threshold statically, e.g. if a commit has a generation
number lower than X then it's considered to have a low generation number. The
threshold was chosen arbitrarily and was relatively small [1], and it didn't work
that well for commits that were above this threshold but still not close
enough to the main bookmark - we still saw cpu spikes.

Instead, let's define a commit as having a low generation number if it's >X
commits away from the commit with the highest generation number.
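
In other words, the check moves from an absolute cutoff to one relative to the
highest generation number; schematically (threshold names made up):

    // Old heuristic: a static, absolute cutoff.
    fn is_low_gen_static(gen: u64, static_threshold: u64) -> bool {
        gen < static_threshold
    }

    // New heuristic: relative distance from the highest generation number.
    fn is_low_gen_relative(gen: u64, highest_gen: u64, max_distance: u64) -> bool {
        highest_gen.saturating_sub(gen) > max_distance
    }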

Reviewed By: ahornby

Differential Revision: D25995936

fbshipit-source-id: 57eba4ba288114f430722266a8326183b3b1f0bd
2021-01-25 03:37:31 -08:00
Stanislau Hlebik
51aaf8eade mononoke: encapsulate logic of checking low gen number
Summary:
In the next diff I'm going to tweak how we decide whether a generation number is
low or not [1]. To make it easier to add this tweak, let's put the logic of
figuring out low generation numbers into a struct.

[1] If the generation number is low then we run a few heuristics to speed up the
getbundle request.

Reviewed By: ahornby

Differential Revision: D25995938

fbshipit-source-id: dbb95b4321d5a4caa13c4183882e90b23020503c
2021-01-25 03:37:31 -08:00
Stanislau Hlebik
7770133dbe mononoke: add more getbundle logging
Summary:
We have a few tricky optimizations, so it's better to have more logging than
less.

Reviewed By: HarveyHunt

Differential Revision: D25995937

fbshipit-source-id: b5502708125b70f3d656be3dc1120176f5c76ce8
2021-01-22 05:40:08 -08:00
Radu Szasz
5fb5d23ec8 Make tokio-0.2 include test-util feature
Summary:
This feature is useful for testing time-dependent stuff (e.g. it
allows you to stop or fast-forward time). It's already included in the Buck build.
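
For example, with test-util enabled a test can control the clock explicitly (a
sketch using the tokio 0.2 APIs, assuming the macros feature for #[tokio::test]):

    use std::time::Duration;

    #[tokio::test]
    async fn wait_without_real_time() {
        tokio::time::pause(); // stop the clock
        let delay = tokio::time::delay_for(Duration::from_secs(60));
        tokio::time::advance(Duration::from_secs(60)).await; // fast-forward
        delay.await; // completes immediately, no real 60s wait
    }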

Reviewed By: SkyterX

Differential Revision: D25946732

fbshipit-source-id: 5e7b69967a45e6deaddaac34ba78b42d2f2ad90e
2021-01-18 10:38:08 -08:00
Stanislau Hlebik
9f44f99c13 mononoke: log lowest generation number of the heads from getbundle
Summary:
We seem to get cpu spikes. The theory is that this happens because a commit with a
low generation number lands, which triggers a slow path in the getbundle code. Note that I've landed two
optimizations (D23824204 (609c2ac257) and D23599866 (54d43b7f95)) which *should* help, however at the
moment the threshold for what counts as a low generation number is too low, so the
optimization doesn't kick in.

I'd like to verify this theory, hence adding this logging.

Reviewed By: ahornby

Differential Revision: D25884345

fbshipit-source-id: 9686933726ff0a3ae11b541b3738eb08d011abe0
2021-01-13 09:23:42 -08:00
Stanislau Hlebik
5a7087f66f mononoke: add link to reclone instruction
Reviewed By: krallin

Differential Revision: D25846570

fbshipit-source-id: 7e8f0d103659dba4c1ab70cae0c172878b967fb6
2021-01-08 10:33:40 -08:00
Egor Tkachenko
11dd72d6c5 Add unbundlereplay command
Summary:
The unbundlereplay command was not implemented in Mononoke, but it is used by the sync job, so let's add this command here
together with an additional integration test for syncing between 2 Mononoke repos. In addition, I'm adding non-fast-forward bookmark movements by specifying the key to the sync job.

Reviewed By: StanislavGlebik

Differential Revision: D25803375

fbshipit-source-id: 6be9e8bfed8976d47045bc425c8c796fb0dff064
2021-01-07 20:36:26 -08:00
Stanislau Hlebik
faf88d25b2 mononoke: use get_common_pushrebase_bookmarks from CommitSyncer
Summary:
After the refactoring in the previous diff, let's stop using CommitSyncConfig in
PushRedirectorArgs and start using the get_common_pushrebase_bookmarks() method.

Reviewed By: mitrandir77

Differential Revision: D25636577

fbshipit-source-id: 126b38860b011c5a9506a38d4568e5d51b2af648
2021-01-04 23:29:46 -08:00
Daniel Xu
1e78d023e7 Update regex to v1.4.2
Summary: Update so libbpf-cargo doesn't need to downgrade the regex version.

Reviewed By: kevin-vigor

Differential Revision: D25719327

fbshipit-source-id: 5781871a359f744e2701a34df1931f0c37958c27
2020-12-29 22:59:52 -08:00
Lukas Piatkowski
67b05b6c24 mononoke/integration: make integration tests work under @mode/dev-rust-oss
Summary: This is the last step in covering Mononoke with the @mode/dev-rust-oss Buck build mode.

Reviewed By: markbt

Differential Revision: D25461223

fbshipit-source-id: 3fa0fa05e8c96476e069990e8f5cc6d56acf38c0
2020-12-18 06:13:32 -08:00
Aida Getoeva
e9f3284b5b mononoke/mysql: make mysql options not copyable
Summary:
In the next diff I'm going to add a Mysql connection object to `MysqlOptions` in order to pass it down from `MononokeAppData` to the code that works with SQL.
This change will make MysqlOptions un-copyable.

This diff fixes all the issues produced by that change.

Reviewed By: ahornby

Differential Revision: D25590772

fbshipit-source-id: 440ae5cba3d49ee6ccd2ff39a93829bcd14bb3f1
2020-12-17 15:46:30 -08:00
Thomas Orozco
db4c8fa924 mononoke/bonsai_hg_mapping: get rid of futures 0.1
Summary:
Like it says in the title. This is nice to do because we had old futures
wrapping new futures here, so this lets us get rid of a lot of cruft.

Reviewed By: ahornby

Differential Revision: D25502648

fbshipit-source-id: a34973b32880d859b25dcb6dc455c42eec4c2f94
2020-12-17 14:30:57 -08:00
Pavel Aslanov
0fc5c3aca7 convert BlobRepoHg to new type futures
Summary: Convert all BlobRepoHg methods to new type futures

Reviewed By: StanislavGlebik

Differential Revision: D25471540

fbshipit-source-id: c8e99509d39d0e081d082097cbd9dbfca431637e
2020-12-17 07:45:26 -08:00