Commit Graph

98 Commits

Thomas Orozco
db4c509b9e mononoke: use MononokeEnvironment in RepoFactory
Summary:
There is a very frustrating operation that happens often when working on the
Mononoke code base:

- You want to add a flag
- You want to consume it in the repo somewhere

Unfortunately, when we need to do this, we end up having to thread the flag
through a million places and parse it out in every single main() we have.

This is a mess, and it results in every single Mononoke binary starting with
heaps of useless boilerplate:

```
    let matches = app.get_matches();

    let (caching, logger, mut runtime) = matches.init_mononoke(fb)?;

    let config_store = args::init_config_store(fb, &logger, &matches)?;

    let mysql_options = args::parse_mysql_options(&matches);
    let blobstore_options = args::parse_blobstore_options(&matches)?;
    let readonly_storage = args::parse_readonly_storage(&matches);
```

So, this diff updates us to just use MononokeEnvironment directly in
RepoFactory, which means none of that has to happen: we can now add a flag,
parse it into MononokeEnvironment, and get going.

While we're at it, we can also remove blobstore options and all that jazz from
MononokeApiEnvironment since now it's there in the underlying RepoFactory.

Reviewed By: HarveyHunt

Differential Revision: D27767700

fbshipit-source-id: e1e359bf403b4d3d7b36e5f670aa1a7dd4f1d209
2021-04-16 10:27:43 -07:00
Thomas Orozco
c2c904f933 mononoke: initialize loggers, config, caching, tunables & runtime in MononokeMatches
Summary:
Basically every single Mononoke binary starts with the same preamble:

- Init mononoke
- Init caching
- Init logging
- Init tunables

Some of them forget to do it, some don't, etc. This is a mess.

To make things messier, our initialization consists of a bunch of lazy statics
interacting with each other (init logging & init configerator are somewhat
intertwined, because configerator wants a logger, but dynamic observability
also wants a logger), and methods you must only call once.

This diff attempts to clean this up by moving all this initialization into the
construction of MononokeMatches. I didn't change all the accessor methods
(though I did update those that would otherwise return things instantiated at
startup).

I'm planning to do a bit more on top of this, as my actual goal here is to make
it easier to thread arguments from MononokeMatches to RepoFactory, and to do so
I'd like to just pass my MononokeEnvironment as an input to RepoFactory.

Reviewed By: HarveyHunt

Differential Revision: D27767698

fbshipit-source-id: 00d66b07b8c69f072b92d3d3919393300dd7a392
2021-04-16 10:27:43 -07:00
Aida Getoeva
442775f79f mononoke/mysql: tokio spawn queries
Summary:
Sometimes we can hit an idle timeout error while talking to MySQL, because we open a connection and then go idle for a long time. When we finally send a query, the server returns an error: the connection has expired. This is the issue we found and fixed in D27503062 (a856799489) that blocked the MySQL client release.

## Future starvation
Imagine you have a stream in which you're connecting to a server, fetching and preparing some values:
```
let v = vec![1u32, 2, 3, 4, 5];
let mut s = stream::iter(v)
    .map(|key| async move {
        let conn = connect(..).await?;
        conn.fetch(..).await
    })
    .buffered(2)
    .map(|item| async move { prepare(..) })
    .buffered(2);
```
Now you want to asynchronously process those prepared values one by one:
```
while let Some(result) = s.next().await {
    let value = result?;
    process(value).await?;
}
```
This async `process(..)` call might be talking to some service to hand these values off, or doing something else that doesn't require much CPU time, although the operation can still take a long time.

**Now what happens when we do s.next().await?**

Because the stream is `buffered(2)`, we wait on the first 2 futures. When the first item is ready, the stream returns its result and polls the next item, the one with key 3. That third future only makes a `connect(..)` call before it is switched out.

Once we've got the next value from the stream, we wait on the `process(value)` call and don't poll the underlying stream until the processing is done.

**As I mentioned earlier, it is not expensive...**
But what if it takes > 10s to complete anyway?

The third future from the stream, the one polled earlier, **will wait all of those >10s until it is polled again**.

More details [in this post](https://fb.workplace.com/groups/learningrust/permalink/2890621307875402/).

## Solution

In this case spawning a future with connection and query steps is a way to fix the issue.
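
As a rough illustration of the fix (a sketch, not the actual Mononoke code: `connect_and_fetch`, `process`, and the values are made-up stand-ins), spawning each query future hands it to the runtime, so it keeps making progress even while the consumer loop is stuck awaiting `process(..)`:

```
use anyhow::Result;
use futures::stream::{self, StreamExt};

// Made-up stand-ins for the connect/fetch/process calls discussed above.
async fn connect_and_fetch(key: u32) -> Result<u32> {
    Ok(key * 10) // imagine: open a MySQL connection and run the query here
}

async fn process(value: u32) -> Result<()> {
    println!("processed {}", value); // imagine: a slow (>10s) consumer
    Ok(())
}

#[tokio::main]
async fn main() -> Result<()> {
    let keys = vec![1u32, 2, 3, 4, 5];

    // tokio::spawn returns a JoinHandle, which is itself a future, so it can
    // still be driven through `buffered`. The spawned task runs to completion
    // on the runtime even while the loop below is blocked in `process`, so
    // the connection never sits idle waiting for the stream to be polled.
    let mut s = stream::iter(keys)
        .map(|key| tokio::spawn(connect_and_fetch(key)))
        .buffered(2);

    while let Some(joined) = s.next().await {
        let value = joined??; // first `?` for the JoinError, second for the query error
        process(value).await?;
    }
    Ok(())
}
```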

This diff spawns queries in `shed::sql::sql_common::mysql_wrapper` - this covers all the places in Mononoke where we talk to MySQL. I also removed the spawn from the hg sync code, because it is not needed anymore, and to illustrate that this approach works.

Reviewed By: StanislavGlebik

Differential Revision: D27639629

fbshipit-source-id: edaa2ce8f5948bf44e1899a19b443935920e33ef
2021-04-09 07:37:40 -07:00
Aida Getoeva
498a90659c mononoke: remove debug output from hg sync
Summary: This was added in D27503062 (a856799489) as debug info and is very spammy; let's remove it.

Reviewed By: StanislavGlebik

Differential Revision: D27647927

fbshipit-source-id: 12c6b2d4cb8b1bae2d987fd8ff461bd480b7dc18
2021-04-08 05:15:06 -07:00
Thomas Orozco
c934b67e5b mononoke: remove all trivial usage of async-unit
Summary:
I'd like to just get rid of that library since it's one more place where we
specify the Tokio version and that's a little annoying with the Tokio 1.x
update. Besides, this library is largely obsoleted by `#[fbinit::test]` and
`#[tokio::test]`.

Reviewed By: farnz

Differential Revision: D27619147

fbshipit-source-id: 4a316b81d882ea83c43bed05e873cabd2100b758
2021-04-07 07:26:57 -07:00
Aida Getoeva
a856799489 mononoke/mysql: spawn tasks and add futures watchdog in hg sync
Summary:
Hg sync jobs were frequently failing because the task performing the MySQL query was being starved.
It acquired a connection but then waited many seconds until it could finally send a query. By that time the server returned an error: the connection had been open and idle for >12s and had timed out:
```
I0401 11:08:32.085223   390 [main] eden/mononoke/mononoke_hg_sync_job/src/main.rs:355] error without entry
E0401 11:08:32.086126   390 [main] eden/mononoke/cmdlib/src/helpers.rs:336] Execution error: While executing ReadNextBookmarkLogEntries query
Caused by:
    0: While making query 'SELECT id, repo_id, name, to_changeset_id, from_changeset_id, reason, timestamp,
                     replay.bundle_handle, replay.commit_hashes_json
                FROM bookmarks_update_log log
                LEFT JOIN bundle_replay_data replay ON log.id = replay.bookmark_update_log_id
                WHERE log.id > 19896395 AND log.repo_id = 2106
                ORDER BY id asc
                LIMIT 20'
    1: Mysql Query failed: Failed (MyRouter) Idle timeout after 12 seconds see https://fburl.com/wait_timeout
I0401 11:08:32.172088   390 ClientSingletonManager.cpp:95] Shutting down Manifold ClientSingletonManager
remote: pushkey hooks finished (after 0.00s)
Error: Execution failed

```

Link to the full logs in a timeframe: https://fburl.com/tupperware/16th1yk7 (I added a debug output when `ReadNextBookmarkLogEntries` query runs).

The hg sync job runs an infinite loop looking for new commits to synchronize. In the async stream it runs the `ReadNextBookmarkLogEntries` query, then prepares a bundle and synchronizes it. The stream is [buffered](https://fburl.com/diffusion/z1r7648f) by [5 (link)](https://fburl.com/diffusion/surn37hx).

My guess is that the `ReadNextBookmarkLogEntries` query starts executing while the previously discovered bundles are being prepared. The query opens a connection and then gets switched out; now the bundles are being synced. But sometimes those bundles take too long to sync while the query task is waiting to be polled again.
The sync finishes and the query task finally tries to send the MySQL query, but hits an idle timeout error on the server.

This diff:
* Spawns the MySQL query and the `apply_bundle` call.
* Adds a watchdog on futures to help debug issues if they occur later, although I couldn't see any slow polls in the logs (a generic sketch of the watchdog idea follows below).
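
For illustration only, a watchdog in that spirit can be sketched with plain Tokio primitives; this is a generic sketch under assumed names, not the actual futures watchdog API used in the diff:

```
use std::future::Future;
use std::time::Duration;

// Await `fut`, but log if it has not completed within `limit`, then keep
// waiting. A rough approximation of a "watchdog on futures" for debugging.
async fn watched<F: Future>(name: &str, limit: Duration, fut: F) -> F::Output {
    futures::pin_mut!(fut);
    match tokio::time::timeout(limit, &mut fut).await {
        Ok(output) => output,
        Err(_) => {
            eprintln!("{}: still pending after {:?}, waiting...", name, limit);
            fut.await
        }
    }
}
```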

Reviewed By: StanislavGlebik

Differential Revision: D27503062

fbshipit-source-id: 6d1d9166b99487c056f3fb217502f8a9d3d46228
2021-04-06 08:55:00 -07:00
Stanislau Hlebik
c9e1ef391b mononoke: send bonsai to hg mapping while running hg sync
Differential Revision: D27361216

fbshipit-source-id: 794e4da332cfdc2902eecea137bef8a2480d8f2c
2021-03-29 12:39:19 -07:00
Stanislau Hlebik
4a5ca24a3b mononoke: remove unused code from sendunbundlereplay
Summary: We don't use it, so let's remove it

Reviewed By: farnz

Differential Revision: D27359959

fbshipit-source-id: 42ce7da16fd0359bbceeab9d1f99712f45a80314
2021-03-26 09:35:44 -07:00
Stanislau Hlebik
b5d9e79c9c mononoke: start syncing globalrevs to darkstorm repos via hg sync job
Reviewed By: krallin

Differential Revision: D27268740

fbshipit-source-id: d6688d3655b43d4a276c030bc9b0efa851273b7e
2021-03-26 02:12:58 -07:00
Stanislau Hlebik
e6fae1b836 mononoke: record which bonsai commits were pushed in hg sync bundle
Summary:
In the next diff I'd like to add support for syncing globalrevs to our darkstorm
repos. Doing it the same way we do it for hgsql isn't going to work, because
darkstorm repos store globalrevs the same way Mononoke does (i.e. a per-commit
entry in MySQL) and not the way hgsql does (i.e. one row per repo).

In this diff I do a small refactoring that remembers which bonsai commits were pushed
in a bundle, so that in the next diff we can start writing them to darkstorm
db.

Reviewed By: krallin

Differential Revision: D27268778

fbshipit-source-id: bbb39de233719c8435d11d00980f6eaf5b755ba6
2021-03-26 02:12:58 -07:00
Mark Juggurnauth-Thomas
64461bb361 test_repo_factory: use test factory for remaining tests
Summary: Use the test factory for the remaining existing tests.

Reviewed By: StanislavGlebik

Differential Revision: D27169443

fbshipit-source-id: 00d62d7794b66f5d3b053e8079f09f2532d757e7
2021-03-25 07:34:51 -07:00
Egor Tkachenko
8b7dc976e6 Add support for backup-repo-name argument
Summary: We have support for backup-repo-id, but tw blobimport doesn't have the id available, only the source repo name. Let's add support similar to the other repo-id/source-repo-id arguments, etc.

Reviewed By: StanislavGlebik

Differential Revision: D27325583

fbshipit-source-id: 44b5ec7f99005355b8eaa4c066cb7168ec858049
2021-03-25 06:45:25 -07:00
Stanislau Hlebik
971fd68b85 mononoke: remove unnecessary async wrapper
Summary: Small cleanup

Reviewed By: krallin

Differential Revision: D27268779

fbshipit-source-id: 533fb9122bbefc425b1b9198efb582ebbccd8efa
2021-03-23 12:39:18 -07:00
Mark Juggurnauth-Thomas
db324150a1 blobrepo: make attributes real members again
Summary:
In preparation for making `BlobRepo` buildable by facet factories, restore
`BlobRepo` members that had been converted to `TypeMap` attributes back into
real members.

This re-introduces some dependencies that were previously removed, but this
will be cleaned up when crates no longer have to depend on BlobRepo directly,
just the traits they are interested in.

Reviewed By: ahornby

Differential Revision: D27169422

fbshipit-source-id: 14354e6d984dfdd2be5c169f527e5f998f00db1e
2021-03-22 07:26:47 -07:00
Stanislau Hlebik
ac6c609e01 mononoke: do not change repo_id when logging noop hg sync job iteration
Summary:
In D26945466 (7a3539b9c6) I started to use the correct repo name for backup repos whenever we
sync an entry. However, most of the time the sync job is idle, and while idle it
also logs a heartbeat to a scuba table. But it was using the wrong repo_id for that
(i.e. for instagram-server_backup it was using the instagram-server repo_id). This
diff fixes that.

Reviewed By: krallin

Differential Revision: D27123193

fbshipit-source-id: 80425a56ad0a432180f420f5c7957105407e0fc9
2021-03-17 11:13:03 -07:00
Stanislau Hlebik
7a3539b9c6 mononoke: use correct repo name in darkstorm sync job
Summary:
For darkstorm sync job we shouldn't use the source mononoke repo name, because
it breaks our logging and alarms.

This diff fixes it

Reviewed By: farnz

Differential Revision: D26945466

fbshipit-source-id: d90abd0cf2e1c480d529d70f825a14f1460d2e29
2021-03-10 06:02:44 -08:00
generatedunixname89002005325677
ef0e758bd4 Daily arc lint --take RUSTFMT
Reviewed By: farnz

Differential Revision: D26841465

fbshipit-source-id: 37d21b771bfc80b00915997754a3130b01bc3857
2021-03-05 04:07:28 -08:00
Stanislau Hlebik
82fa3ad118 mononoke: change how mononoke_hg_sync_job recovers in case it failed to update mutable_counter
Summary:
The Mononoke hg sync job does the following operations in a loop:
1) Check if we got new bookmark update log entries in the db
2) Sync these log entries to the hg server
3) Update the counter that records the latest synced entry

It's possible that step 2 was successful but step 3 failed. After that the sync job restarts and tries to replay an already-replayed bundle again, and this fails.
We had protection against this situation - after every failed bundle we sent a request to the hg server to check what the location of the bookmark is - however, it stopped working after we added the `--combine-bundles` option, which syncs more than 1 bookmark update log entry at once.

To be precise, this is how the situation could happen:
1) The sync job syncs log entries [1, 2] successfully
2) The sync job tries to update the `latest-replayed-request` counter to 2, and fails
3) New log entry 3 comes into the queue
4) The sync job restarts and tries to sync [1, 2, 3]. This fails because [1, 2] were already synced. Because of that, the sync job tries to verify that the bookmark on the server is at position 3, but actually it's at 2. So the sync job fails again and keeps crashlooping.

This diff attempts to fix it by trying to detect whether any of the first entries in the first bundle were already synced. It does so by taking the value of the bookmark from BookmarkOverlay (which is equivalent to the value of the bookmark on the hg server), comparing it with the to_changeset_id/from_changeset_id values from the batch, and skipping the first few entries. See the code comments for more details on how it does this.
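
A minimal sketch of that adjustment idea (types and names here are illustrative, not the actual Mononoke code):

```
#[derive(Clone, Copy, PartialEq, Eq)]
struct ChangesetId(u64);

struct LogEntry {
    from_changeset_id: Option<ChangesetId>,
    to_changeset_id: Option<ChangesetId>,
}

// Drop the leading log entries that the hg server has evidently already
// applied, based on the bookmark value taken from the BookmarkOverlay.
fn adjust_batch(server_bookmark: Option<ChangesetId>, entries: &[LogEntry]) -> &[LogEntry] {
    // Everything in the batch has already been applied.
    if entries.last().map(|e| e.to_changeset_id) == Some(server_bookmark) {
        return &[];
    }
    // Otherwise resume from the first entry that starts at the server's position.
    match entries.iter().position(|e| e.from_changeset_id == server_bookmark) {
        Some(idx) => &entries[idx..],
        // No match: leave the batch untouched and let normal failure handling kick in.
        None => entries,
    }
}
```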

A few notes:
1) BookmarkOverlay might actually contain outdated values, i.e. not the latest snapshot of hgsql. This happens because there's a delay between a commit appearing in hgsql and propagating to the hg server repositories. In a case like that it's possible that adjusting the first batch won't kick in, and the sync job will fail to sync the bundle. However, my assumption is that in practice it should be ok - the sync job will restart and try to repeat the bundle adjustment, and eventually it should succeed. Also, the current `verify-server-bookmark-on-failure` is likely to have the same problem.
2) Note that this still might not work if we try to decrease the combine-bundles value when the sync job counter is pointing to the wrong log entry. I expect this to be rare enough that we don't need to worry about it.

Note that it is still worth keeping the `verify-server-bookmark-on-failure` option, since it can detect other kinds of failure (e.g. the hg command returning a non-zero exit but actually successfully syncing the entry).

Reviewed By: krallin

Differential Revision: D26753763

fbshipit-source-id: bea9da9ab1ceede19666c99e28553e74edb0ed2a
2021-03-04 12:37:57 -08:00
Thomas Orozco
2a803fc10d third-party/rust: update futures
Summary:
Those newer versions of Futures have compatibility improvements with Tokio,
notably:

- https://github.com/rust-lang/futures-rs/pull/2333
- https://github.com/rust-lang/futures-rs/pull/2358

Reviewed By: farnz

Differential Revision: D26778794

fbshipit-source-id: 5a9dc002083e5edfa5c614d8d2242e586a93fcf6
2021-03-04 06:42:55 -08:00
Alex Hornby
2ff9ad0fea rust: async sql queries macros
Summary:
Async the query macros. This change also migrates most callsites, with a few more complicated ones handled as separate diffs; those temporarily use sql01::queries in this diff.

With this change the query string is computed lazily (async fns/blocks being lazy), so we're not holding the extra memory of the query string as well as the query params for quite as long. This is of most interest for queries doing writes, where the query string can be large when large values are passed (e.g. the Mononoke sqlblob blobstore).
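
The laziness point can be shown with a tiny standalone example (nothing Mononoke-specific; the query and value are made up):

```
use std::future::Future;

// An async block does not run until the returned future is polled, so the
// (potentially large) query string below is only built when the caller
// actually awaits the query, not when the future is constructed.
fn insert_query(big_value: String) -> impl Future<Output = String> {
    async move { format!("INSERT INTO blobs (data) VALUES ('{}')", big_value) }
}
```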

Reviewed By: krallin

Differential Revision: D26586715

fbshipit-source-id: e299932457682b0678734f44bb4bfb0b966edeec
2021-03-04 01:52:41 -08:00
Stanislau Hlebik
f1772e601f mononoke: use darkstorm repo id as a counter repo id if specified
Summary:
When we are using the hg sync job to back up a darkstorm repository, we need to read
the latest commits from the source mononoke repo but use the darkstorm repo id for
counters - otherwise there would be two sync jobs using the same counter (i.e.
mononoke -> hg and mononoke -> darkstorm), and that wouldn't end well.

This diff does that. I also changed our tests a bit to always set the
--darkstorm-repo-id option, since we are going to use it in prod anyway.

Differential Revision: D26782326

fbshipit-source-id: 0f6188047fe3d01dfa7bf7b3eb407e4f2c9a5d09
2021-03-03 12:03:53 -08:00
Thomas Orozco
ef7045e818 common/rust: use fbinit-tokio
Summary:
This diff adds a layer of indirection between fbinit and tokio, thus allowing
us to use fbinit with tokio 0.2 or tokio 1.x.

The way this works is that you specify the Tokio you want by adding it as an
extra dependency alongside `fbinit` in your `TARGETS` (before this, you had to
always include `tokio-02`).

If you use `fbinit-tokio`, then `#[fbinit::main]` and `#[fbinit::test]` get you
a Tokio 1.x runtime, whereas if you use `fbinit-tokio-02`, you get a Tokio 0.2
runtime.
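
For reference, the usage side looks the same either way; only the dependency decides which runtime backs the attribute. A small sketch (the body is illustrative):

```
use fbinit::FacebookInit;

// Whether this async main runs on Tokio 1.x or Tokio 0.2 is decided purely
// by depending on fbinit-tokio vs fbinit-tokio-02; the code doesn't change.
#[fbinit::main]
async fn main(fb: FacebookInit) {
    let _ = fb; // fbinit has run; `fb` proves it to APIs that require it
}
```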

This diff is big, because it needs to change all the TARGETS that reference
this in the same diff that introduces the mechanism. I also didn't produce it
by hand.

Instead, I scripted the transformation using this script: P242773846

I then ran it using:

```
{ hg grep -l "fbinit::test"; hg grep -l "fbinit::main"; } | \
  sort | \
  uniq | \
  xargs ~/codemod/codemod.py \
&&  yes | arc lint \
&& common/rust/cargo_from_buck/bin/autocargo
```

Finally, I grabbed the files returned by `hg grep`, then fed them to:

```
arc lint-rust --paths-from ~/files2 --apply-patches --take RUSTFIXDEPS
```

(I had to modify the file list a bit: notably I removed stuff from scripts/ because
some of that causes Buck to crash when running lint-rust, and I also had to add
fbcode/ as a prefix everywhere).

Reviewed By: mitrandir77

Differential Revision: D26754757

fbshipit-source-id: 326b1c4efc9a57ea89db9b1d390677bcd2ab985e
2021-03-03 04:09:15 -08:00
Ilia Medianikov
58b9ac23ef mononoke: Don't create separate ConfigStore's in tests
Summary:
At the moment, in tests a separate ConfigStore with a file source is created for some configs, and then the reference to it is dropped immediately ([see get_config_handle function in mod.rs](https://fburl.com/diffusion/fpkj7ekv)). This is inconvenient, as we may need a reference to it, e.g. to force-update configs in tests.

Instead of creating separate stores we can reuse the static configerator, which already uses local files (in tests).

Reviewed By: krallin

Differential Revision: D26725515

fbshipit-source-id: 24269cd93b7d35216c025807c3f3eb527688b72b
2021-03-03 03:52:41 -08:00
Egor Tkachenko
e7cfc155d3 Sync to readonly repo
Summary: Since we don't have a repo-lock db for backup repos, I'm making them readonly in D26693725 and adding a bypass for the sync job.

Reviewed By: krallin

Differential Revision: D26693675

fbshipit-source-id: 2eaa9419850c3e7a5df45871424283ee280f5ec1
2021-03-01 09:34:16 -08:00
Lukas Piatkowski
f317302b0f autocargo v1: reformating of oss-dependencies, workspace and patch sections and thrift files to match v2
Summary:
For dependencies, V2 puts "version" as the first attribute of a dependency, or just after "package" if present.
The workspace section comes after the patch section in V2, and since V2 autoformats the patch section, the manual entries in third-party/rust/Cargo.toml had to be formatted by hand, since V1 takes them as they are.
The thrift files are to have "generated by autocargo" and not only "generated" on their first line. This diff also removes some previously generated thrift files that had been incorrectly left behind when the corresponding Cargo.toml was removed.

Reviewed By: ikostia

Differential Revision: D26618363

fbshipit-source-id: c45d296074f5b0319bba975f3cb0240119729c92
2021-02-25 15:10:56 -08:00
Thomas Orozco
d71fa2882c common/rust/futures_ext: update to tokio_shim
Summary:
Like it says in the title, this updates futures_ext to use tokio_shim, which
makes it compatible with Tokio 0.2 and 1.0.

There is one small difference in behavior here, which is that in Tokio 1.0,
sleep isn't Unpin anymore, so callers will need to call `boxed()` or use Tokio's `pin!` macro if they need
Unpin.
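
A quick sketch of the caller-side consequence (illustrative only; durations and names are made up):

```
use futures::FutureExt;
use std::time::Duration;

async fn examples() {
    // Boxing yields a Pin<Box<...>>, which is Unpin.
    let boxed = tokio::time::sleep(Duration::from_secs(1)).boxed();
    boxed.await;

    // Or pin the non-Unpin sleep to the stack with Tokio's macro.
    let sleep = tokio::time::sleep(Duration::from_secs(1));
    tokio::pin!(sleep);
    sleep.await;
}
```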

I do want to get as close to what upstream is doing in Tokio 1.0, so I think
it's good to keep that behavior.

Reviewed By: farnz

Differential Revision: D26610036

fbshipit-source-id: ff72275da55558fdf8fe3ad009d25cf84e108a5a
2021-02-25 02:11:30 -08:00
Alex Hornby
aa8f84ad4c mononoke: async myrouter_ready()
Summary: Small clean up. Allows us to pass Logger by reference, removing the FIXME in blobrepo factory

Reviewed By: farnz

Differential Revision: D26551592

fbshipit-source-id: d6bb04b8bb3034ad056f071b67b5ae0ce3c6f224
2021-02-23 10:55:42 -08:00
Thomas Orozco
097e4ad00c mononoke: remove tokio-compat (i.e. use tokio 0.2 exclusively)
Summary:
The earlier diffs in this stack have removed all our dependencies on the Tokio
0.1 runtime environment (so, basically, `tokio-executor` and `tokio-timer`), so
we don't need this anymore.

We do still have some deps on `tokio-io`, but this is just traits + helpers,
so this doesn't actually prevent us from removing the 0.1 runtime!

Note that we still have a few transitive dependencies on Tokio 0.1:

- async-unit uses tokio-compat
- hg depends on tokio-compat too, and we depend on it in tests

This isn't the end of the world though, we can live with that :)

Reviewed By: ahornby

Differential Revision: D26544410

fbshipit-source-id: 24789be2402c3f48220dcaad110e8246ef02ecd8
2021-02-22 09:22:42 -08:00
Lukas Piatkowski
cd0b6d50e2 autocargo v1: changes to match autocargo v2 generation results.
Summary:
The changes (and fixes) needed were:
- Ignore rules that are not rust_library or thrift_library (previously only rust_bindgen_library was ignored, so binary and test dependencies were incorrectly added to Cargo.toml)
- Thrift package name to match escaping logic of `tools/build_defs/fbcode_macros/build_defs/lib/thrift/rust.bzl`
- Rearrange some attributes, like features, authors, edition etc.
- Authors to use " instead of '
- Features to be sorted
- Sort all dependencies as one instead of grouping third party and fbcode dependencies together
- Manually format certain entries from third-party/rust/Cargo.toml, since V2 formats third party dependency entries and V1 just takes them as is.

Reviewed By: zertosh

Differential Revision: D26544150

fbshipit-source-id: 19d98985bd6c3ac901ad40cff38ee1ced547e8eb
2021-02-19 11:03:55 -08:00
Thomas Orozco
767c961fa4 mononoke/mononoke_hg_sync_job: update to Tokio 0.2 Hyper
Summary:
Like it says in the title, this updates us to a newer version of Hyper to avoid
being on one that depends on Tokio 0.1.

Reviewed By: StanislavGlebik

Differential Revision: D26511979

fbshipit-source-id: 325d24f9b4e17fc2e801a2ba79863c0e656870d4
2021-02-19 07:00:55 -08:00
Thomas Orozco
0490fd9622 mononoke_hg_sync_job: update hg peer to tokio 0.2
Summary:
This code was a bit hard to convert because just using the 0.2 variants really
doesn't work very well. So, I went ahead and actually refactored it. Here's
what I changed:

- Rather than use a `!Sync` bound to ensure we don't have concurrent access to
  the `HgPeer`, I updated this to actually use a `&mut` reference in an `async
  fn`. Note that the `!Sync` bound doesn't really do anything here because
  it prevented you from instantiating a future concurrently in 2 threads but
  nothing prevents you from creating 2 futures and awaiting them concurrently.
  The `&mut` reference does (and means we have to wrap this in a Tokio mutex); a generic sketch of this shape follows after the list.
- I moved the management of `invalidate()` to the `HgPeer` and not
  `AsyncProcess`, given it's always the `HgPeer` that makes the decision to
  invalidate.
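
A generic sketch of that shape (assumed names, not the real HgPeer code):

```
use std::sync::Arc;
use tokio::sync::Mutex;

struct PeerInner {
    // stdin/stdout pipes to the hg process would live here
}

impl PeerInner {
    // Taking `&mut self` in an async fn means two operations cannot be
    // awaited concurrently on the same PeerInner, unlike a `!Sync` marker,
    // which doesn't stop one thread from creating two futures and awaiting
    // them at the same time.
    async fn send_command(&mut self, _cmd: &str) {}
}

#[derive(Clone)]
struct Peer {
    inner: Arc<Mutex<PeerInner>>,
}

impl Peer {
    async fn send_command(&self, cmd: &str) {
        // The Tokio mutex serializes access across clones of the handle.
        let mut inner = self.inner.lock().await;
        inner.send_command(cmd).await;
    }
}
```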

Reviewed By: StanislavGlebik

Differential Revision: D26511980

fbshipit-source-id: 6deb5be76effbb65cef29c789b9b3e4429326c5f
2021-02-19 07:00:55 -08:00
Thomas Orozco
f18d98939b mononoke/mononoke_hg_sync_job: rename 0.1 futures_ext
Summary:
I want to use both futures_ext libraries later in this stack, and this will
make it easier.

Reviewed By: StanislavGlebik

Differential Revision: D26510582

fbshipit-source-id: c90fca327697e0e966e8d6ac262115ee69c99112
2021-02-19 07:00:54 -08:00
Lukas Piatkowski
87ddbe2f74 autocargo v1: update autocargo field format to allow transition to autocargo v2
Summary:
Autocargo V2 will use a more structured format for the autocargo field; with the help of the `cargo_toml` crate it will be easy to deserialize and handle.

Also, the "include" field is apparently obsolete, as it is used for cargo-publish (see https://doc.rust-lang.org/cargo/reference/manifest.html#the-exclude-and-include-fields). From what I know this is often wrong, especially if someone tries to publish a package from fbcode, in which case the private facebook folders might be shipped. Let's just not set it; in the new system one will be able to set it explicitly via an autocargo parameter on a rule.

Reviewed By: ahornby

Differential Revision: D26339606

fbshipit-source-id: 510a01a4dd80b3efe58a14553b752009d516d651
2021-02-12 23:28:25 -08:00
David Tolnay
1ae53c794f Update symlinks for Rust 1.50.0
Summary: Release notes: https://blog.rust-lang.org/2021/02/11/Rust-1.50.0.html

Reviewed By: jsgf, Imxset21

Differential Revision: D26363768

fbshipit-source-id: 25188531cf0a5647128cbeb469225d8dd756d0af
2021-02-12 13:54:50 -08:00
Thomas Orozco
2a21e4fb17 third-party/rust: update Tokio to 0.2.25 + add a patch to disable coop scheduling
Summary:
See the patch & motivation here:

818f943db3

Reviewed By: StanislavGlebik

Differential Revision: D26399890

fbshipit-source-id: e184a3f6c1dd03cb4cdb7ea18073c3392d7ce355
2021-02-12 04:56:23 -08:00
Stanislau Hlebik
af2ab0cf10 mononoke: store hydrated tree manifests in .hg
Reviewed By: krallin

Differential Revision: D26401093

fbshipit-source-id: e5050883b0e6f370a7cfbb5f46721aca7469dce1
2021-02-11 10:12:27 -08:00
Stefan Filip
0a308f9f84 update Cargo.toml after assert_matches update
Summary: cargo autocargo

Reviewed By: fanzeyi

Differential Revision: D26316542

fbshipit-source-id: f9e12a9d7b3b4e03a6f7b074ea2873ad6dcc82ad
2021-02-08 10:23:00 -08:00
Alex Hornby
fdb9ab5278 mononoke: fix reference to wrong tokio from mononoke_hg_sync_job test
Summary: Was accidentally picking up tokio 1.0 rather than tokio 0.2

Reviewed By: krallin

Differential Revision: D26201021

fbshipit-source-id: 177f42a9b7862510cfe996f6e46d155d8dab6123
2021-02-02 12:43:47 -08:00
Egor Tkachenko
1b473dcd9c Synchronise lfs entries during darkstorm backups
Summary: The sync job doesn't synchronize large files. For darkstorm backup sync, let's make a special lfs verifier, which will upload files from the origin blobstore into the backup.

Reviewed By: StanislavGlebik

Differential Revision: D24991705

fbshipit-source-id: de668b7ad33ace3445f50cd9c92a678201ffb6f6
2021-02-01 11:23:47 -08:00
Iván Yesid Castellanos
e58c8e819c Removed static lifetime constants
Summary: Removed the static lifetime constants in the Mononoke source code base.

Reviewed By: krallin

Differential Revision: D26123507

fbshipit-source-id: 9e1689c42603bd17d44924f92219378340ab082b
2021-01-29 04:40:27 -08:00
Thomas Orozco
2768bb08d2 mononoke: hg sync job: only sync globalrevs for the publishing bookmark
Summary:
We want multiple bookmarks, but only one of them should assign new globalrevs,
so it follows that we shouldn't sync the counter when other bookmarks are being
moved.

This does that.

Reviewed By: ahornby

Differential Revision: D26076567

fbshipit-source-id: 0ccc311984d3379cb44ccf10cbcb90ac31b82ee3
2021-01-27 08:32:39 -08:00
Daniel Xu
5715e58fce Add version specification to internal dependencies
Summary:
Lots of generated code in this diff. Only code change was in
`common/rust/cargo_from_buck/lib/cargo_generator.py`.

Path/git-only dependencies (ie `mydep = { path = "../foo/bar" }`) are not
publishable to crates.io. However, we are allowed to specify both a path/git
_and_ a version. When building locally, the path/git is chosen. When publishing,
the version on crates.io is chosen.

See https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html#multiple-locations .

Note that I understand that not all autocargo projects are published on crates.io (yet).
The point of this diff is to allow projects to slowly start getting uploaded.
The end goal is autocargo generated `Cargo.toml`s that can be `cargo publish`ed
without further modification.

Reviewed By: lukaspiatkowski

Differential Revision: D26028982

fbshipit-source-id: f7b4c9d4f4dd004727202bd98ab10e201a21e88c
2021-01-25 22:10:24 -08:00
Thomas Orozco
4dd3461824 third-party/rust: update Tokio 0.2.x to 0.2.24 & futures 1.x to 1.30
Summary:
When we tried to update to Tokio 0.2.14, we hit lots of hangs. Those were due
to incompatibilities between Tokio 0.2.14 and Futures 1.29. We fixed some of
the bugs (and others had been fixed and were pending a release), and Futures
1.30 have now been released, which unblocks our update.

This diff updates Tokio accordingly (the previous diff in the stack fixes an
incompatibility).

The underlying motivation here is to ease the transition to Tokio 1.0.
Ultimately we'll be pulling in those changes one way or another, so let's
get started on this incremental first step.

Reviewed By: farnz

Differential Revision: D25952428

fbshipit-source-id: b753195a1ffb404e0b0975eb7002d6d67ba100c2
2021-01-25 08:06:55 -08:00
Radu Szasz
5fb5d23ec8 Make tokio-0.2 include test-util feature
Summary:
This feature is useful for testing time-dependent stuff (e.g. it
allows you to stop/forward time). It's already included in the buck build.
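
For illustration, here's the kind of test this enables (Tokio 1.x names shown; in Tokio 0.2 the sleep is `tokio::time::delay_for`):

```
use std::time::Duration;

// With the test-util feature, the runtime clock can be paused and advanced
// manually, so a "one hour" timer can be tested without really waiting.
#[tokio::test]
async fn fires_after_virtual_hour() {
    tokio::time::pause();
    let timer = tokio::time::sleep(Duration::from_secs(3600));
    tokio::time::advance(Duration::from_secs(3600)).await;
    timer.await; // completes immediately: virtual time has already passed
}
```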

Reviewed By: SkyterX

Differential Revision: D25946732

fbshipit-source-id: 5e7b69967a45e6deaddaac34ba78b42d2f2ad90e
2021-01-18 10:38:08 -08:00
Alex Hornby
2d0b7db627 mononoke: allow cmdlib init_logging to return a Result
Summary: Allow us to return arg parsing errors rather than panicking.

Reviewed By: krallin

Differential Revision: D25837626

fbshipit-source-id: 87e39de140b1dcd3b13a529602fdafc31233175d
2021-01-14 09:52:40 -08:00
Egor Tkachenko
11dd72d6c5 Add unbundlereplay command
Summary:
The unbundlereplay command was not implemented in Mononoke, but it is used by the sync job, so let's add this command here,
together with an additional integration test for syncing between 2 Mononoke repos. In addition, I'm adding non-fast-forward bookmark movements by specifying a key to the sync job.

Reviewed By: StanislavGlebik

Differential Revision: D25803375

fbshipit-source-id: 6be9e8bfed8976d47045bc425c8c796fb0dff064
2021-01-07 20:36:26 -08:00
Daniel Xu
1e78d023e7 Update regex to v1.4.2
Summary: Update so libbpf-cargo doesn't need to downgrade its regex version.

Reviewed By: kevin-vigor

Differential Revision: D25719327

fbshipit-source-id: 5781871a359f744e2701a34df1931f0c37958c27
2020-12-29 22:59:52 -08:00
Aida Getoeva
8b93f52b71 mononoke/mysql: use single static shared connection pool
Summary:
The correct workflow for using a multi-threaded connection pool with multiple DBs is to have a single shared pool for all the use-cases. The pool is smart enough to maintain separate "pools" for each DB locator and limit them to a maximum of 100 connections per key.

In this diff I create a `OnceCell` connection pool that is initialized once and reused for every attempt to connect to the DB.
The pool is stored in `MononokeAppData` in order to bind its lifetime to the lifetime of Mononoke app. Then it is passed down as a part of `MysqlOptions`.  Unfortunately this makes `MysqlOptions` not copyable, so the diff also contains lots of "clones".
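
The "initialize once, share everywhere" shape can be sketched generically like this (names are stand-ins, not the real Mononoke types):

```
use once_cell::sync::OnceCell;

struct ConnectionPool;

struct AppData {
    pool: OnceCell<ConnectionPool>,
}

impl AppData {
    fn pool(&self) -> &ConnectionPool {
        // Constructed exactly once, however many DB locators end up sharing it.
        self.pool.get_or_init(|| ConnectionPool)
    }
}
```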

Reviewed By: ahornby

Differential Revision: D25055819

fbshipit-source-id: 21f7d4a89e657fc9f91bf22c56c6a7172fb76ee8
2020-12-17 15:46:30 -08:00
Aida Getoeva
e9f3284b5b mononoke/mysql: make mysql options not copyable
Summary:
In the next diff I'm going to add a MySQL connection object to `MysqlOptions` in order to pass it down from `MononokeAppData` to the code that works with sql.
This change will make MysqlOptions un-copyable.

This diff fixes all the issues produced by that change.

Reviewed By: ahornby

Differential Revision: D25590772

fbshipit-source-id: 440ae5cba3d49ee6ccd2ff39a93829bcd14bb3f1
2020-12-17 15:46:30 -08:00
Pavel Aslanov
0fc5c3aca7 convert BlobRepoHg to new type futures
Summary: Convert all BlobRepoHg methods to new type futures

Reviewed By: StanislavGlebik

Differential Revision: D25471540

fbshipit-source-id: c8e99509d39d0e081d082097cbd9dbfca431637e
2020-12-17 07:45:26 -08:00