Commit Graph

136 Commits

Author SHA1 Message Date
Ilia Medianikov
f394b86528 mononoke/scs: add method to look up commit origin over pushrebase mutations
Reviewed By: StanislavGlebik

Differential Revision: D30495086

fbshipit-source-id: 5fe659033b5e0e8a557f173d677caa0dfd531b05
2021-09-10 16:27:16 -07:00
Callum Ryan
65bd8a9bc9 Support thrift_library::rust_include_srcs
Summary:
`rust_include_srcs` is supported on `thrift_library` as a way of including other Rust code in the generated crate, generally used to implement other traits on the generated types.

This adds support for it in autocargo by copying these files into the output directory and making sure the corresponding option is passed to the thrift compiler.
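
As a hedged illustration (not from this diff - `Point` is a hypothetical stand-in for a thrift-generated type, defined inline to keep the sketch self-contained), a `rust_include_srcs` file typically just adds trait impls on the generated types:

```
// Stand-in for a struct that would normally come from thrift codegen.
#[derive(Clone, Copy, Debug, Default, PartialEq)]
pub struct Point {
    pub x: i64,
    pub y: i64,
}

// Typical contents of an included source file: extra trait impls that
// the thrift compiler does not generate itself.
impl std::ops::Add for Point {
    type Output = Point;

    fn add(self, rhs: Point) -> Point {
        Point {
            x: self.x + rhs.x,
            y: self.y + rhs.y,
        }
    }
}
```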

Reviewed By: ahornby

Differential Revision: D30789835

fbshipit-source-id: 325cb59fdf85324dccfff20a559802c11816769f
2021-09-10 00:12:44 -07:00
Jeremy Fitzhardinge
cbceb08640 third-party/rust: local patch to tracing-subscriber
Summary:
Add impls of Layer for Box/Arc<L: Layer> and <dyn Layer>. Also pulls in a pile of other
updates from git which haven't been published to crates.io yet, including proper
level filtering of trace events being fed into log.
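
A minimal sketch of what the Box impl enables, assuming the patched tracing-subscriber (the env-var switch is illustrative, not part of this diff): a layer can be picked at runtime and still composed into a registry, because both branches erase to the same boxed type.

```
use tracing_subscriber::{layer::SubscriberExt, Layer, Registry};

fn main() {
    // The two branches build different concrete layer types; boxing
    // unifies them, which only works given `impl Layer for Box<dyn Layer>`.
    let fmt_layer: Box<dyn Layer<Registry> + Send + Sync> =
        if std::env::var("LOG_COMPACT").is_ok() {
            Box::new(tracing_subscriber::fmt::layer().compact())
        } else {
            Box::new(tracing_subscriber::fmt::layer())
        };
    let subscriber = Registry::default().with(fmt_layer);
    tracing::subscriber::set_global_default(subscriber)
        .expect("failed to install subscriber");
    tracing::info!("subscriber installed");
}
```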

Reviewed By: dtolnay

Differential Revision: D30829927

fbshipit-source-id: c01c9369222df2af663e8f8bf59ea78ee12f7866
2021-09-09 22:38:25 -07:00
Jeremy Fitzhardinge
fd03bff2e2 third-party/rust: bump tracing versions in preparation for patching
Summary:
Bump all the tracing crates to their highest crates.io versions so that the
migration to the github versions in the next diff works.

Reviewed By: dtolnay

Differential Revision: D30829928

fbshipit-source-id: 09567c26f275b3b1806bf8fd05417e91f04ba2ef
2021-09-09 22:38:25 -07:00
David Tolnay
5e9b8cd4b2 third-party/rust: Update thiserror from 1.0.23 to 1.0.29
Summary:
Release notes:

- https://github.com/dtolnay/thiserror/releases/tag/1.0.24
- https://github.com/dtolnay/thiserror/releases/tag/1.0.25
- https://github.com/dtolnay/thiserror/releases/tag/1.0.26
- https://github.com/dtolnay/thiserror/releases/tag/1.0.27
- https://github.com/dtolnay/thiserror/releases/tag/1.0.28
- https://github.com/dtolnay/thiserror/releases/tag/1.0.29

The pertinent feature is 1.0.29 adding support for inferred trait bounds on error types that contain generic type parameters. I remember someone asking for this in fbcode but I forget what project it was for.

```
use thiserror::Error;

#[derive(Error, Debug)]
pub enum MyError<E, F, G> {
    #[error("thing {0} ({0:?})")]
    Variant(E),
    #[error("some error")]
    Delegate(#[source] SomeError<F>),
    #[error("err 0o{val:o}")]
    Octal { val: G },
}
```

```
// generated

impl<E, F, G> std::error::Error for MyError<E, F, G>
where
    SomeError<F>: std::error::Error + 'static,
    Self: std::fmt::Debug + std::fmt::Display;

impl<E, F, G> std::fmt::Display for MyError<E, F, G>
where
    E: std::fmt::Debug + std::fmt::Display,
    G: std::fmt::Octal;
```

Reviewed By: zertosh

Differential Revision: D30758449

fbshipit-source-id: b3afe08fe8c8affa26693df9cbb63e04632ea1d3
2021-09-08 20:49:35 -07:00
Mateusz Kwapich
27830555a1 allow recreating targets without removing the configs
Summary:
Currently there are two things preventing us from running add_sync_target
on an existing target:
 * an already existing bookmark
 * an already existing config

Both need to be deleted to create a new target. This diff removes the second
requirement to simplify the code and make it easier to recreate the target (it's
easy to forget about manually removing the config, as configs otherwise don't
need human intervention).

Reviewed By: StanislavGlebik

Differential Revision: D30767613

fbshipit-source-id: f951c0e1ef9bde69d805dc911331fcdb220123f2
2021-09-07 11:33:18 -07:00
Thomas Orozco
35e3466031 third-party/rust: update daemonize to 0.5
Summary:
Like it says in the title, this updates us to use Daemonize 0.5, though from
Github and not Crates.io, because it hasn't been released to the latter yet.

The main motivation here is to pull in
https://github.com/knsd/daemonize/pull/39 to avoid leaking PID files to
children of the daemon.

This required some changes in `hphp/hack/src/facebook/hh_decl` and `xplat/rust/mobium` since the way to
run code after daemonization has changed (and become more flexible).

Reviewed By: ndmitchell

Differential Revision: D30694946

fbshipit-source-id: d99768febe449d7a079feec78ab8826d0e29f1ef
2021-09-02 06:27:03 -07:00
Thomas Orozco
0d2bfbeccd Update autocargo component on FBS:master
Summary:
Manual component version update
Bump Schedule: https://www.internalfb.com/intern/msdk/bump/?schedule_fbid=342556550408072
Package: https://www.internalfb.com/intern/msdk/package/181247287328949/
Oncall Team: rust_foundation
NOTE: This build is expected to expire at 2022/09/01 09:14AM PDT
---------
New project source changes since last bump based on D30663071 (08e362a355e0a64a503f5073f57f927394696b8c at 2021/08/31 03:47AM -05):
| 2021/08/31 04:41AM -05 | generatedunixname89002005294178 | D30665384 | [MSDK] Update autocargo component on FBS:master |
| 2021/08/31 07:14PM PDT | kavoor | D30681642 | [autocargo] Make cxx-build match version of cxx |
| 2021/09/01 04:05PM BST | krallin | D30698095 | autocargo: include generated comment in OSS manifests |
---------

build-break (bot commits are not reviewed by a human)

Reviewed By: farnz

Differential Revision: D30717040

fbshipit-source-id: 2c1d09f0d51b6ff2e2636496cf22bcf781f22889
2021-09-02 02:33:56 -07:00
David Tolnay
ba87c55127 third-party/rust: Patch mockall_derive to fix nondeterminism failures in Conveyor
Summary:
The mockall crate's `automock` attribute previously created nondeterministic output, which led to frequent random "Found possibly newer version of crate" failures in Buck builds that involve caching.

The affected trait in Conveyor is:

https://www.internalfb.com/code/fbsource/[4753807291f7275a061d67cead04ea12e7b38ae2]/fbcode/conveyor/common/just_knobs/src/lib.rs?lines=13-23

which has a method with two lifetime parameters. Mockall's generated code shuffled them in random order due to emitting the lifetimes in HashSet order. The generated code would randomly contain one of these two types:

`Box<dyn for<'b, 'a> FnMut(&str, Option<&'a str>, Option<&'b str>) -> Result<bool> + Send>`

`Box<dyn for<'a, 'b> FnMut(&str, Option<&'a str>, Option<&'b str>) -> Result<bool> + Send>`
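
A hedged sketch of the trait shape that triggers this (names hypothetical, not the actual Conveyor trait): a mocked method with two lifetime parameters, whose matcher type is the higher-ranked closure shown above.

```
use mockall::automock;

// A method carrying two lifetime parameters, similar in shape to the
// affected trait. Mockall stores matchers for it as closures bound by
// `for<'a, 'b>`, which is where the lifetime order was nondeterministic.
#[automock]
pub trait Knobs {
    fn eval<'a, 'b>(
        &self,
        name: &str,
        hash: Option<&'a str>,
        switch: Option<&'b str>,
    ) -> bool;
}
```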

Reviewed By: jsgf

Differential Revision: D30656936

fbshipit-source-id: c1a251774333d7a4001a7492c1995efd84ff22e5
2021-08-30 21:12:18 -07:00
Gus Wynn
87a09132dc tokio -> 1.10
Reviewed By: dtolnay

Differential Revision: D30647831

fbshipit-source-id: 7094873ec5cfbf80cd7c3564fdd011268053b0d3
2021-08-30 15:55:16 -07:00
CodemodService Bot
0a375b8e5d Daily common/rust/cargo_from_buck/bin/autocargo
Reviewed By: StanislavGlebik

Differential Revision: D30535840

fbshipit-source-id: a941161547246c1e9aac0735a1994f20389ce1ae
2021-08-25 03:07:04 -07:00
Stanislau Hlebik
139cedb239 mononoke: add retries in add_sync_target derived data derivations
Summary:
add_sync_target might need to derive a lot of data, and it takes a long time to
do it. We don't have any resumability, so if it fails for any reason, then we'd
need to start over.

For now let's just retry a few times so that we don't have to start over
because of flakiness.
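
The retry itself is presumably just a bounded loop; a rough sketch of the shape (not the actual Mononoke helper):

```
use std::future::Future;
use std::time::Duration;

// Run an async operation up to `max_attempts` times, sleeping between
// attempts, and return the last error if every attempt fails.
async fn with_retries<T, E, Fut, F>(mut attempt: F, max_attempts: usize) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: Future<Output = Result<T, E>>,
{
    let mut tries = 0;
    loop {
        match attempt().await {
            Ok(value) => return Ok(value),
            Err(err) if tries + 1 >= max_attempts => return Err(err),
            Err(_) => {
                tries += 1;
                tokio::time::sleep(Duration::from_secs(1)).await;
            }
        }
    }
}
```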

Reviewed By: mitrandir77

Differential Revision: D30511785

fbshipit-source-id: 1a9c5e62db366022ad487ed108dd41b1dea4caa2
2021-08-24 09:19:16 -07:00
David Tolnay
cf16f0b157 Add nested-values feature to slog
Reviewed By: zertosh

Differential Revision: D30387633

fbshipit-source-id: 27b1d601a73abf522d835c2f857d5a621c2b693b
2021-08-18 10:47:58 -07:00
Yan Soares Couto
6ed51f5514 Remove old futures from cmdlib/futures.rs
Summary:
These functions were used heavily, and they used old futures.

That meant a lot of needless copying, and harder interop.

I also took the opportunity to pass `CoreContext` around as a reference, and did some BlobRepo refactoring to use attribute traits where possible.

Reviewed By: StanislavGlebik

Differential Revision: D30368261

fbshipit-source-id: 2e63677601fafa3c2e3d9d3340df0a5f31a19a11
2021-08-18 09:31:43 -07:00
Stanislau Hlebik
38bcc731be mononoke: fix build
Summary: There was a land race.

Differential Revision: D30390470

fbshipit-source-id: 70cb73c25aea7dd04c08e2af9609ea1fa33d6988
2021-08-18 00:38:36 -07:00
Robbin Xu
0af6c0031f Revert D30351289: Add nested-values feature to slog
Differential Revision:
D30351289 (637bd00002)

Original commit changeset: b6c1c896b06c

fbshipit-source-id: c226f283a744170bb6bc2ed0b00e59249f9392c3
2021-08-17 16:33:26 -07:00
Matt Smith
637bd00002 Add nested-values feature to slog
Summary:
The diff is giant, but it's just a one-line change to add the
nested-values feature to slog; it's only this big because a whole bunch of
projects depend on slog.

Reviewed By: dtolnay

Differential Revision: D30351289

fbshipit-source-id: b6c1c896b06cbdf23b1f92c0aac9a97aa116085d
2021-08-17 15:28:16 -07:00
Yan Soares Couto
be8daaa23c derived_data: make mapping not depend on BlobRepo
Summary:
In preparation for the derived data manager, ensure that derived data
mappings do not require a `BlobRepo` reference.

The main use for this was to log to scuba.  This functionality is extracted out
to the new `BonsaiDerivedMappingContainer`, which now contains just enough
information to be able to log to scuba.

Reviewed By: mitrandir77

Differential Revision: D30135447

fbshipit-source-id: 1daa468a87f297adc531cb214dda3fa7fe9b15da
2021-08-17 10:30:07 -07:00
Stanislau Hlebik
dc8bf342da mononoke: set mutable renames while creating move commits
Reviewed By: mitrandir77

Differential Revision: D30338443

fbshipit-source-id: de5e39aad224c29cfe0bbdce011624037811aa36
2021-08-17 08:01:28 -07:00
Stanislau Hlebik
995a0a1bd5 mononoke: introduce DirectoryMultiMover
Summary:
We have mover only for files, and it doesn't quite work for directories - at
the very least directory can be None (i.e. root of the repo).

In the next diffs we'll start recording files and directories renames during
megarepo operations, so let's DirectoryMultiMover as a preparation for that.

Reviewed By: mitrandir77

Differential Revision: D30338444

fbshipit-source-id: 4fed5f50397a7d3d8b77f23552921d515a684604
2021-08-17 08:01:28 -07:00
Simon Farnsworth
bd66f8a79b Add megarepo API to create a release branch point
Summary:
AOSP megarepo wants to create release branches from existing branches, and then update configs to follow only release-ready code.

Provide the primitive they need to do this, which takes an existing commit and config, and creates a new config that tracks the same sources. The `change_target_config` method can then be used to shift from the mainline to the release branch.

Reviewed By: StanislavGlebik

Differential Revision: D30280537

fbshipit-source-id: 43dac24451cf66daa1cd825ada8f685957cc33c1
2021-08-17 06:56:29 -07:00
Yan Soares Couto
f761a291a7 Add snapshot_state to bonsai changeset
Summary:
This diff adds some data to BonsaiChangeset that tells whether it is a snapshot or not.

For now, this marks every changeset as not being a snapshot. The next diff will add validation to snapshots, some tests, and mark the current `snapshot createremote` command as uploading snapshots.
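
A minimal sketch of the idea (type and field names assumed here, not copied from the diff):

```
// Whether a changeset represents a working-copy snapshot or a normal
// commit; for now everything is marked NotSnapshot.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum SnapshotState {
    Snapshot,
    NotSnapshot,
}

pub struct BonsaiChangeset {
    pub snapshot_state: SnapshotState,
    // ... the existing changeset fields (parents, file changes, etc.)
}

impl BonsaiChangeset {
    pub fn is_snapshot(&self) -> bool {
        self.snapshot_state == SnapshotState::Snapshot
    }
}
```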

Reviewed By: markbt

Differential Revision: D30158530

fbshipit-source-id: 9835450ac44e39ce8d653938f3a629f081247d2f
2021-08-16 09:19:05 -07:00
Thomas Orozco
de5b8e2dcb rust: ignore metadata-sys rules in Autocargo
Summary:
Autocargo only allows 1 rust-library per Cargo.toml, but right now we have 3
per Thrift library so that doesn't work:

https://www.internalfb.com/intern/sandcastle/log/?instance_id=27021598231105145&step_id=27021602582167211&step_index=13&name=Run%20config

There's little benefit in Autocargo-ifying those rules anyway since they're of
use to Thrift servers and this doesn't work at all in our OSS builds, so let's
just see if we can noop them. That'll make the crate not exist at all as a
dep, but even considering that it exists only to link to a C++ library that
Autocargo doesn't know how to build anyway, that seems OK?

drop-conflicts

Reviewed By: markbt

Differential Revision: D30304720

fbshipit-source-id: 047524985b2dadab8610267c05e3a1b3770e84e6
2021-08-13 10:43:40 -07:00
Yan Soares Couto
07d66e6df1 Add more types to FileChange struct
Summary: This adds types to the FileChange thrift and rust structs to deal with additional possible snapshot states, that is, untracked and missing files. Conflicted states are not added yet.

Reviewed By: StanislavGlebik

Differential Revision: D30103162

fbshipit-source-id: 59faa9e4af8dca907b1ec410b8af74985d85b837
2021-08-12 12:40:48 -07:00
Yan Soares Couto
4d52344fee Use FileChange enum instead of Option<FileChange>
Summary:
For now this changes:
```
struct FileChange {
  ...stuff
}
fn f(x: Option<FileChange>)
```
to
```
struct TrackedFileChange {
  ...stuff
}
enum FileChange {
  TrackedChange(TrackedFileChange),
  Deleted,
}
fn f(x: FileChange)
```

This makes it much clearer that `None` actually means the file was deleted. It will also be useful as in the next diff I will add more stuff inside FileChange (for untracked changes), and this refactor will make it easy.

(The refactor from using `Option` to putting it all inside the enum isn't really necessary, but IMO it looks much clearer, so I did it.)
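
One consequence, shown with an illustrative caller (fields elided as above): pattern matching now forces deletions to be handled explicitly instead of hiding behind `None`.

```
pub struct TrackedFileChange; // fields elided

pub enum FileChange {
    TrackedChange(TrackedFileChange),
    Deleted,
}

// A caller can no longer forget what `None` used to mean:
pub fn describe(change: &FileChange) -> &'static str {
    match change {
        FileChange::TrackedChange(_) => "added or modified",
        FileChange::Deleted => "deleted",
    }
}
```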

Reviewed By: StanislavGlebik

Differential Revision: D30103454

fbshipit-source-id: afd2f29dc96baf9f3d069ad69bb3555387cff604
2021-08-11 08:56:40 -07:00
Alex Hornby
2f28c4121c rust: remove chashmap from cargo vendoring
Summary: Previous diffs switched all our usage from chashmap to dashmap as dashmap upstream is more responsive. Now remove chashmap from the cargo vendoring.

Reviewed By: dtolnay

Differential Revision: D30046522

fbshipit-source-id: 111ef9375bd8095f8b7c95752ecbc1988fb0438d
2021-08-04 07:31:08 -07:00
Simon Farnsworth
370a536f4a Provide a way to write Megarepo configs to disk for testing
Summary:
In integration tests, we want to be able to run through the megarepo processing, and then check that configs have persisted correctly, so that we can start async workers after sending a config change down, and see the change be picked up.

Make it possible.

Reviewed By: StanislavGlebik

Differential Revision: D30012106

fbshipit-source-id: f944165e7b93451180a78d8287db8a59d71bbe13
2021-08-02 13:53:04 -07:00
David Tolnay
aa8152f1dd Make thrift-generated dyn async traits future compatible
Summary:
The use of the Thrift-generated server traits as dyn trait objects was emitting future compatibility warnings with recent versions of rustc, due to a soundness hole in the trait object system that has since been fixed:

```
error: the trait `x_account_aggregator_if::server::XAccountAggregator` cannot be made into an object
     |
     = this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
note: for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
```

This diff pulls in https://github.com/dtolnay/async-trait/releases/tag/0.1.51 which results in the Thrift-generated server traits no longer hitting the problematic pattern.
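
For context, a minimal sketch of the pattern involved (hypothetical trait, not the Thrift-generated one): `#[async_trait]` desugars async methods into boxed futures, which is what lets a server hold the trait behind a `dyn` pointer.

```
use async_trait::async_trait;

#[async_trait]
pub trait Handler {
    async fn handle(&self, request: String) -> String;
}

pub struct Echo;

#[async_trait]
impl Handler for Echo {
    async fn handle(&self, request: String) -> String {
        request
    }
}

// The trait stays object safe, so a server can store any handler
// without knowing the concrete type.
pub fn make_handler() -> Box<dyn Handler + Send + Sync> {
    Box::new(Echo)
}
```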

Reviewed By: zertosh

Differential Revision: D29979939

fbshipit-source-id: 3e6e976181bfcf35ed453ae681baeb76a634ddda
2021-07-29 16:25:33 -07:00
Stanislau Hlebik
03f5a60109 mononoke: log resulting cs_id from megarepo calls
Summary: It's useful for debugging

Reviewed By: mojsarn

Differential Revision: D29960133

fbshipit-source-id: e026b473b4a9fecebe41f2fff22dd57d514e51ab
2021-07-29 02:33:58 -07:00
Stanislau Hlebik
b9ce9c0933 mononoke: make sync_changeset return result immediately if it was computed
Summary:
Just as with D29874802 and D29848377, let's make sure that if the same
sync_changeset request is sent again, we return the same result.
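
Conceptually this is request-level memoization; a toy sketch of the shape (an in-memory map standing in for the async request table):

```
use std::collections::HashMap;

// If an identical request already completed, hand back the stored
// result instead of re-executing the whole sync.
fn handle_request(
    completed: &mut HashMap<String, String>,
    request_key: String,
    run: impl FnOnce() -> String,
) -> String {
    if let Some(previous) = completed.get(&request_key) {
        return previous.clone();
    }
    let result = run();
    completed.insert(request_key, result.clone());
    result
}
```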

Reviewed By: mojsarn

Differential Revision: D29876414

fbshipit-source-id: 91c3bd38983809da8ce246f44066204df667bb12
2021-07-28 10:03:26 -07:00
Stanislau Hlebik
34f0396fa0 mononoke: move bookmark in sync_changeset conditionally
Summary:
# Goal of the stack

The goal of this stack is to make the megarepo api safer to use. In particular, we want to achieve:
1) If the same request is executed a few times, then it won't cause a corrupt repo in any way (i.e. it won't create commits that the client didn't intend to create and it won't move bookmarks to unpredictable places)
2) If a request finished successfully, but we failed to send the success to the client, then repeating the same request would finish successfully.

Achieving #1 is necessary because async_requests_worker might execute a few requests at the same time (though this should be rare). Achieving #2 is necessary because if we fail to send a successful response to the client (e.g. because of network issues), we want the client to retry and get this successful response back, so that the client can continue with their next request.

In order to achieve #1 we make all bookmark moves conditional, i.e. we move a bookmark only if its current location is at the place where the client expects it. This should help achieve goal #1, because even if we have two requests executing at the same time, only one of them will successfully move the bookmark.

However once we achieve #1 we have a problem with #2 - if a request was successful, but we failed to send a successful reply back to the client, then the client will retry the request, and it will fail, because the bookmark is already at the new location (because the previous request was successful), but the client expects it to be at the old location (because the client doesn't know that the request was successful). To fix this issue, before executing the request we check if it was already successful, and we do this heuristically by checking the request parameters and verifying the commit remapping state. This doesn't protect against malicious clients, but it should protect from issue #2 described above.

So the whole stack of diffs is the following:
1) Take a method from the megarepo api
2) Implement a diff that makes bookmark moves conditional
3) Fix problem #2 by checking whether a previous request was successful or not

# This diff

Now that we have target_location in the sync_changeset() method,
let's move the bookmark in sync_changeset conditionally, just as in D29874803 (5afc48a292).

This would prevent race conditions from happening when the same sync_changeset
method is executing twice.
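
The conditional move is essentially a compare-and-swap on the bookmark; a toy sketch (a HashMap standing in for the bookmark store, names hypothetical):

```
use std::collections::HashMap;

// Move `bookmark` to `new_target` only if it still points at
// `expected_old`; a concurrent mover fails here instead of clobbering
// the other request's result.
fn move_bookmark_conditionally(
    bookmarks: &mut HashMap<String, String>,
    bookmark: &str,
    expected_old: &str,
    new_target: &str,
) -> Result<(), String> {
    match bookmarks.get_mut(bookmark) {
        Some(current) if current.as_str() == expected_old => {
            *current = new_target.to_string();
            Ok(())
        }
        Some(current) => Err(format!(
            "{} is at {}, expected {}",
            bookmark, current, expected_old
        )),
        None => Err(format!("bookmark {} does not exist", bookmark)),
    }
}
```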

Reviewed By: krallin

Differential Revision: D29876413

fbshipit-source-id: c076e14171c6615fba2cedf4524d442bd25f83ab
2021-07-28 10:03:26 -07:00
Stanislau Hlebik
c5162598f0 mononoke: add target_location to sync_changeset method
Summary:
# Goal of the stack

The goal of this stack is to make the megarepo api safer to use. In particular, we want to achieve:
1) If the same request is executed a few times, then it won't cause a corrupt repo in any way (i.e. it won't create commits that the client didn't intend to create and it won't move bookmarks to unpredictable places)
2) If a request finished successfully, but we failed to send the success to the client, then repeating the same request would finish successfully.

Achieving #1 is necessary because async_requests_worker might execute a few requests at the same time (though this should be rare). Achieving #2 is necessary because if we fail to send a successful response to the client (e.g. because of network issues), we want the client to retry and get this successful response back, so that the client can continue with their next request.

In order to achieve #1 we make all bookmark moves conditional, i.e. we move a bookmark only if its current location is at the place where the client expects it. This should help achieve goal #1, because even if we have two requests executing at the same time, only one of them will successfully move the bookmark.

However once we achieve #1 we have a problem with #2 - if a request was successful, but we failed to send a successful reply back to the client, then the client will retry the request, and it will fail, because the bookmark is already at the new location (because the previous request was successful), but the client expects it to be at the old location (because the client doesn't know that the request was successful). To fix this issue, before executing the request we check if it was already successful, and we do this heuristically by checking the request parameters and verifying the commit remapping state. This doesn't protect against malicious clients, but it should protect from issue #2 described above.

So the whole stack of diffs is the following:
1) Take a method from the megarepo api
2) Implement a diff that makes bookmark moves conditional
3) Fix problem #2 by checking whether a previous request was successful or not

# This diff

We already have it for change_target_config, and it's useful to prevent races
and inconsistencies. That's especially important given that our async request
worker might run a few identical sync_changeset methods at the same time, and
target_location can help process this situation correctly.

Let's add target_location to sync_changeset, and while at it I also updated the
comment for these fields in the other methods. The comment said:

```
// This operation will succeed only if the
// `target`'s bookmark is still at the same location
// when this operation tries to advance it
```

This is not always the case - the operation might also succeed if the same operation has been
re-sent twice; see previous diffs for more explanation and motivation.

Reviewed By: krallin

Differential Revision: D29875242

fbshipit-source-id: c14b2148548abde984c3cb5cc62d04f920240657
2021-07-28 10:03:26 -07:00
Stanislau Hlebik
c9473f74f6 mononoke: make change_target_config return result immediately if it was computed
Summary:
# Goal of the stack

The goal of this stack is to make the megarepo api safer to use. In particular, we want to achieve:
1) If the same request is executed a few times, then it won't cause a corrupt repo in any way (i.e. it won't create commits that the client didn't intend to create and it won't move bookmarks to unpredictable places)
2) If a request finished successfully, but we failed to send the success to the client, then repeating the same request would finish successfully.

Achieving #1 is necessary because async_requests_worker might execute a few requests at the same time (though this should be rare). Achieving #2 is necessary because if we fail to send a successful response to the client (e.g. because of network issues), we want the client to retry and get this successful response back, so that the client can continue with their next request.

In order to achieve #1 we make all bookmark moves conditional, i.e. we move a bookmark only if its current location is at the place where the client expects it. This should help achieve goal #1, because even if we have two requests executing at the same time, only one of them will successfully move the bookmark.

However once we achieve #1 we have a problem with #2 - if a request was successful, but we failed to send a successful reply back to the client, then the client will retry the request, and it will fail, because the bookmark is already at the new location (because the previous request was successful), but the client expects it to be at the old location (because the client doesn't know that the request was successful). To fix this issue, before executing the request we check if it was already successful, and we do this heuristically by checking the request parameters and verifying the commit remapping state. This doesn't protect against malicious clients, but it should protect from issue #2 described above.

So the whole stack of diffs is the following:
1) Take a method from the megarepo api
2) Implement a diff that makes bookmark moves conditional
3) Fix problem #2 by checking whether a previous request was successful or not

# This diff

Same as with D29848377 - if the result was already computed and the client retries the
same request, then return it.

Differential Revision: D29874802

fbshipit-source-id: ebc2f709bc8280305473d6333d0725530c131872
2021-07-28 10:03:26 -07:00
Stanislau Hlebik
47e92203dc mononoke: make add_sync_target return result immediately if it was computed
Summary:
# Goal of the stack

The goal of this stack is to make the megarepo api safer to use. In particular, we want to achieve:
1) If the same request is executed a few times, then it won't cause a corrupt repo in any way (i.e. it won't create commits that the client didn't intend to create and it won't move bookmarks to unpredictable places)
2) If a request finished successfully, but we failed to send the success to the client, then repeating the same request would finish successfully.

Achieving #1 is necessary because async_requests_worker might execute a few requests at the same time (though this should be rare). Achieving #2 is necessary because if we fail to send a successful response to the client (e.g. because of network issues), we want the client to retry and get this successful response back, so that the client can continue with their next request.

In order to achieve #1 we make all bookmark moves conditional, i.e. we move a bookmark only if its current location is at the place where the client expects it. This should help achieve goal #1, because even if we have two requests executing at the same time, only one of them will successfully move the bookmark.

However once we achieve #1 we have a problem with #2 - if a request was successful, but we failed to send a successful reply back to the client, then the client will retry the request, and it will fail, because the bookmark is already at the new location (because the previous request was successful), but the client expects it to be at the old location (because the client doesn't know that the request was successful). To fix this issue, before executing the request we check if it was already successful, and we do this heuristically by checking the request parameters and verifying the commit remapping state. This doesn't protect against malicious clients, but it should protect from issue #2 described above.

So the whole stack of diffs is the following:
1) Take a method from the megarepo api
2) Implement a diff that makes bookmark moves conditional
3) Fix problem #2 by checking whether a previous request was successful or not

# This diff

If a previous add_sync_target() call was successful on the mononoke side, but we
failed to deliver this result to the client (e.g. because of network issues), then the
client would just retry the call. Before this diff that wouldn't work (i.e. we'd
just fail to create the bookmark because it already exists). This diff fixes
it by looking at the commit the bookmark points to and checking whether it looks like
it was created by a previous add_sync_target call. In particular, it checks
that the remapping state file matches the request parameters, and that the config
version is the same.
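
A rough sketch of that heuristic (struct and field names assumed, not the real remapping state format):

```
// State recorded in the commit created by add_sync_target; a retried
// request counts as already-done if the commit the bookmark points to
// carries matching state.
struct RemappingState {
    config_version: String,
    source_commits: Vec<String>,
}

fn looks_like_previous_success(
    state_in_bookmark_commit: &RemappingState,
    requested_version: &str,
    requested_sources: &[String],
) -> bool {
    state_in_bookmark_commit.config_version == requested_version
        && state_in_bookmark_commit.source_commits == requested_sources
}
```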

Differential Revision: D29848377

fbshipit-source-id: 16687d975748929e5eea8dfdbc9e206232ec9ca6
2021-07-28 10:03:26 -07:00
Stanislau Hlebik
e17e77eea3 mononoke: add repo_id parameter when finding abandoned requests
Summary:
Addressing a comment from
https://www.internalfb.com/diff/D29845826 (f4a078e257)?transaction_fbid=1017293239127849

Reviewed By: krallin

Differential Revision: D29955591

fbshipit-source-id: a99bdd9dd8181e5cba54944d4957ce56b8ecb4f3
2021-07-28 06:23:31 -07:00
Stanislau Hlebik
ad0c9b7e2c mononoke: add more scuba logging to async request worker
Summary: It's nice to understand what's going on

Reviewed By: liubov-dmitrieva

Differential Revision: D29846694

fbshipit-source-id: 7551199ef4529e45c0eb23f79c0cc4a71ba54d0f
2021-07-27 14:12:54 -07:00
Stanislau Hlebik
f4a078e257 mononoke: make sure async megarepo requests are picked up by another worker if current worker dies
Summary:
High-level goal of this diff:
We have a problem in long_running_request_queue - if a tw job dies in the
middle of processing a request then this request will never be picked up by any
other job, and will never be completed.
The idea of the fix is fairly simple - while a job is executing a request it
needs to constantly update the inprogress_last_updated_at field with the current
timestamp. If a job dies, other jobs will notice that the timestamp
hasn't been updated for a while and mark the request as "new" again, so that
somebody else can pick it up.
Note that this obviously doesn't prevent all possible race conditions - the worker
might just be too slow and not update the inprogress timestamp in time, but
we'd handle that race condition on other layers, i.e. our worker guarantees that
every request will be executed at least once, but it doesn't guarantee that it will
be executed exactly once.

Now a few notes about implementation:
1) I intentionally separated the methods for finding abandoned requests and marking them new again. I did so to make it easier to log which requests were abandoned (logging will come in the next diffs).

2) My original idea (D29821091) had an additional field called execution_uuid, which would be changed each time a new worker claims a request. In the end I decided it's not worth it - while execution_uuid can reduce the likelihood of two workers running at the same time, it doesn't eliminate it completely. So I decided that execution_uuid doesn't really give us much.

3) It's possible that two workers will be executing the same request and updating the same inprogress_last_updated_at field. As I mentioned above, this is expected, and the request implementation needs to handle it gracefully.
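
A small sketch of the abandonment check described above (field and threshold names are illustrative):

```
use std::time::{Duration, SystemTime};

struct QueuedRequest {
    inprogress_last_updated_at: SystemTime,
}

// A request counts as abandoned when its heartbeat timestamp is older
// than the threshold; such requests get marked "new" and re-claimed.
fn is_abandoned(request: &QueuedRequest, threshold: Duration) -> bool {
    request
        .inprogress_last_updated_at
        .elapsed()
        .map(|age| age > threshold)
        .unwrap_or(false)
}
```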

Reviewed By: krallin

Differential Revision: D29845826

fbshipit-source-id: 9285805c163b57d22a1936f85783154f6f41df2f
2021-07-27 14:12:53 -07:00
Stanislau Hlebik
9271300067 mononoke: mark some fields as nullable
Summary:
Currently they get zeros by default, but having NULL here seems like a nicer
option.

Reviewed By: krallin

Differential Revision: D29846254

fbshipit-source-id: 981d979055eca91594ef81f0d6dc4ba571a2e8be
2021-07-27 14:12:53 -07:00
Stanislau Hlebik
3b7d6bdfae mononoke: bring long_running_request_queue in sync with what we have in prod
Reviewed By: krallin

Differential Revision: D29817070

fbshipit-source-id: 37b029e74c54df7ff5a7bd4a1c8ef3f85fff127c
2021-07-27 14:12:53 -07:00
Stanislau Hlebik
5dcc30a4b1 mononoke: fix megarepo logging to use correct method name
Reviewed By: krallin

Differential Revision: D29894709

fbshipit-source-id: 3f33df57cd0c32b40eb55dc02ef3820138a423d0
2021-07-27 02:13:09 -07:00
Arun Kulshreshtha
14d8c051c1 third-party/rust: remove patch from curl and curl-sys
Summary:
The patches to these crates have been upstreamed.

allow-large-files

Reviewed By: jsgf

Differential Revision: D29891894

fbshipit-source-id: a9f2ee0744752b689992b770fc66b6e66b3eda2b
2021-07-26 15:00:16 -07:00
Stanislau Hlebik
5afc48a292 mononoke: move bookmark in change_target_config conditionally
Summary:
Do a similar change to change_target_config as we've done for add_sync_target
in D29848378: move the bookmark only if it points to the expected commit. That
makes it safer to deal with cases where the same change_target_config
is executed twice.

Reviewed By: mojsarn

Differential Revision: D29874803

fbshipit-source-id: d21a3029ee58e2a8acc41e37284d0dd03d2803a3
2021-07-24 03:55:08 -07:00
Stanislau Hlebik
4f632c4e8b mononoke: create bookmark in add_sync_target
Summary:
This is the first diff that tries to make megarepo asynchronous methods
idempotent - replaying the same request twice shouldn't cause corruption on the
server. At the moment this is not the case - if we have a runaway
add_sync_target call, then in the end it moves a bookmark to a random place,
even if there was another identical successful add_sync_target call and a few
others on top.

add_sync_target should create a new bookmark, and if a bookmark already exists
it's better to not move it to a random place.

This diff does it, however it creates another problem - if a request was successful on the mononoke side, but we failed to deliver the successful result to the client (e.g. because of network issues), then retrying the same request would fail because the bookmark already exists. This problem will be addressed in the next diff.

Reviewed By: mojsarn

Differential Revision: D29848378

fbshipit-source-id: 8a58e35c26b989a7cbd4d4ac4cbae1691f6e9246
2021-07-24 03:55:08 -07:00
Stanislau Hlebik
d1e86ab457 mononoke: add more logging to add sync target call
Summary: It's nice to be able to keep track of what's going on

Reviewed By: mwdevine

Differential Revision: D29790543

fbshipit-source-id: b855d72efe8826a99b3a6a562722e299e9cbfece
2021-07-22 14:52:03 -07:00
CodemodService Bot
0a402ce760 Daily common/rust/cargo_from_buck/bin/autocargo
Reviewed By: krallin

Differential Revision: D29841733

fbshipit-source-id: c9da8e0324f402f3b9726f2733b51de56abde8f6
2021-07-22 09:22:41 -07:00
Stanislau Hlebik
46827b3756 mononoke: remove unused method
Summary:
This is not used. Even though this method has the "right intention" (i.e. we
need to start marking long running requests as new), I'm not sure we can use it
as is. So let's just delete it for now.

Reviewed By: farnz

Differential Revision: D29817068

fbshipit-source-id: 84d392fea01dfb5fb7bc56f0072baf2cf70b39f4
2021-07-21 12:02:59 -07:00
Stanislau Hlebik
d74cc69de4 mononoke: avoid creating deletion commit on megarepo mainline
Summary:
In previous diff we started creating deletion commits on megarepo mainline.
This is not great since it breaks bisects, and this diff avoids that.

The way it does it is the following:
1) First do the same thing we did before - create a deletion commit, and then
create a merge commit with p1 as the deletion commit and p2 as the addition commit.
Let's call it a "fake merge", since this commit won't be used for our mainline.
2) Generate the manifest for our "fake merge", and then use this manifest to
generate a bonsai diff. But this time make p1 the old target commit (i.e. remove
the deletion commit as if it never existed).
3) Use the generated bonsai diff to create a commit.

So in short we split the procedure in two - first generate and validate the
resulting manifest (this is what the "fake merge" commit is for), and then
generate a bonsai changeset using this manifest. It's unfortunate that in order
to generate the resulting manifest we actually need to create a commit and save
it to the blobstore. If we had in-memory manifests we could have avoided that,
but alas we don't have them yet.

This way of creating bonsai changesets is a bit unconventional, but it has the benefit of relying on tools that we're confident work (i.e. bonsai_diff), and we don't need to reimplement all the bonsai logic again.

Reviewed By: mitrandir77

Differential Revision: D29633340

fbshipit-source-id: eebdb0e4db5abbab9346c575b662b7bb467497c4
2021-07-09 05:23:43 -07:00
Stanislau Hlebik
58ffbc5cec mononoke: redo the way we create merge bonsai changesets in change_target_config
Summary:
Initially I just wanted to address comments from D29515737 (fa8796ae19) about unnecessary
manifest retraversals, but there were a few more problems:
1) We didn't detect file conflicts in the final merge commit correctly. For
example, if the additions_merge commit added a file "dir/1.txt" while a file
"dir" already existed in the target changeset, we wouldn't detect the conflict
(see the conflict-check sketch below).
2) What's worse is that we might produce an invalid bonsai merge changeset
ourselves. Say, if we delete "source_1/dir/file.txt" and then add a file
"source_1/dir" in the additions merge commit, then the resulting bonsai
changeset should have a "source_1/dir" entry.

This diff does the following:
1) Adds more tests to cover different corner cases - some of them were failing
before this diff.
2) Improves the logic that verifies file conflicts.
3) Instead of trying to generate a correct merge bonsai changeset, it simplifies
the task and creates a separate deletion commit.

Note that creating a deletion commit on the mainline is something we want to
avoid so as not to break bisects. This will be addressed in the next diff.
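
A toy version of the file/directory conflict check from problem 1 above (not the Mononoke implementation, which works on manifests):

```
// Two paths conflict if they are equal or one is a directory prefix of
// the other, e.g. an existing file "dir" conflicts with a new "dir/1.txt".
fn paths_conflict(a: &str, b: &str) -> bool {
    a == b
        || b.starts_with(&format!("{}/", a))
        || a.starts_with(&format!("{}/", b))
}

fn main() {
    assert!(paths_conflict("dir", "dir/1.txt"));
    assert!(paths_conflict("source_1/dir", "source_1/dir/file.txt"));
    assert!(!paths_conflict("dir/1.txt", "dir/2.txt"));
}
```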

Reviewed By: mitrandir77

Differential Revision: D29633341

fbshipit-source-id: 8f755d852212fbce8f9331049bf836c1d0a4ef42
2021-07-09 05:23:43 -07:00
Mateusz Kwapich
3a41e7fbc3 megarepo_add_branching_sync_target method
Summary: This new method will allow the megarepo customers to create a sync target that branches off an existing target. This feature is meant to be used for release branches.

Reviewed By: StanislavGlebik

Differential Revision: D29275281

fbshipit-source-id: 7b58d5cc49c99bbc5f7e01814178376aa3abfcdf
2021-07-09 05:23:43 -07:00
Mateusz Kwapich
051894b81d add fb303 flags to async request worker
Summary: needed to set up tw health check

Reviewed By: StanislavGlebik

Differential Revision: D29580808

fbshipit-source-id: 6a3833d652979915fd44dc6d89511192397d8b96
2021-07-07 03:47:07 -07:00