Commit Graph

103 Commits

Author SHA1 Message Date
Stanislau Hlebik
47e92203dc mononoke: make add_sync_target return result immediately if it was computed
Summary:
# Goal of the stack

The goal of this stack is to make the megarepo API safer to use. In particular, we want to achieve:
1) If the same request is executed a few times, it won't corrupt the repo in any way (i.e. it won't create commits that the client didn't intend to create, and it won't move bookmarks to unpredictable places).
2) If a request finished successfully but we failed to deliver the success to the client, then repeating the same request will finish successfully.

Achieving #1 is necessary because async_requests_worker might execute a few requests at the same time (though this should be rare). Achieving #2 is necessary because if we fail to send a successful response to the client (e.g. because of network issues), we want the client to retry and get this successful response back, so that the client can continue with their next request.

In order to achieve #1 we make all bookmark moves conditional, i.e. we move a bookmark only if its current location is where the client expects it to be. This helps achieve goal #1 because even if we have two requests executing at the same time, only one of them will successfully move the bookmark.

However, once we achieve #1 we have a problem with #2 - if a request was successful but we failed to send the successful reply back to the client, then the client will retry the request and it will fail, because the bookmark is already at the new location (the previous request succeeded) while the client expects it to be at the old location (the client doesn't know the request was successful). To fix this, before executing the request we check whether it was already completed successfully. We do this heuristically, by checking the request parameters and verifying the commit remapping state. This doesn't protect against malicious clients, but it should protect against issue #2 described above.

So the whole stack of diffs is the following:
1) Take a method from the megarepo API.
2) Implement a diff that makes its bookmark moves conditional.
3) Fix problem #2 by checking whether a previous request was successful or not.

# This diff

If a previous add_sync_target() call was successful on the mononoke side, but we
failed to deliver this result to the client (e.g. network issues), then the
client would just retry this call. Before this diff that wouldn't work (i.e. we
would just fail to create the bookmark because it's already created). This diff
fixes it by looking at the commit the bookmark points to and checking whether
it looks like it was created by a previous add_sync_target call. In particular,
it checks that the remapping state file matches the request parameters, and
that the config version is the same.
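As a rough sketch of that idempotency check (all types and names below are invented stand-ins for the real Mononoke structures, which live in megarepo_api):

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the commit remapping state stored alongside the
// target bookmark; the real Mononoke types differ.
#[derive(Debug, PartialEq)]
struct RemappingState {
    source_commits: HashMap<String, String>, // source name -> synced commit
    config_version: String,
}

/// If the target bookmark already points at a commit whose remapping state
/// matches the incoming request, a previous identical add_sync_target call
/// must have succeeded, so its result can be returned instead of failing.
fn previous_result(
    bookmark: Option<(&str, &RemappingState)>,
    request: &RemappingState,
) -> Option<String> {
    match bookmark {
        Some((cs_id, state)) if state == request => Some(cs_id.to_string()),
        _ => None,
    }
}

fn main() {
    let state = RemappingState {
        source_commits: HashMap::from([("source_1".to_string(), "abc".to_string())]),
        config_version: "v1".to_string(),
    };
    // Retry after a lost response: the state matches, so the old result is reused.
    assert_eq!(
        previous_result(Some(("deadbeef", &state)), &state),
        Some("deadbeef".to_string())
    );
    // No bookmark yet: nothing to reuse, execute the request normally.
    assert_eq!(previous_result(None, &state), None);
}
```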

Differential Revision: D29848377

fbshipit-source-id: 16687d975748929e5eea8dfdbc9e206232ec9ca6
2021-07-28 10:03:26 -07:00
Stanislau Hlebik
e17e77eea3 mononoke: add repo_id parameter when finding abandoned requests
Summary:
Addressing comment from
https://www.internalfb.com/diff/D29845826 (f4a078e257)?transaction_fbid=1017293239127849

Reviewed By: krallin

Differential Revision: D29955591

fbshipit-source-id: a99bdd9dd8181e5cba54944d4957ce56b8ecb4f3
2021-07-28 06:23:31 -07:00
Stanislau Hlebik
ad0c9b7e2c mononoke: add more scuba logging to async request worker
Summary: It's nice to understand what's going on

Reviewed By: liubov-dmitrieva

Differential Revision: D29846694

fbshipit-source-id: 7551199ef4529e45c0eb23f79c0cc4a71ba54d0f
2021-07-27 14:12:54 -07:00
Stanislau Hlebik
f4a078e257 mononoke: make sure async megarepo requests are picked up by another worker if current worker dies
Summary:
High-level goal of this diff:
We have a problem in long_running_request_queue - if a tw job dies in the
middle of processing a request, then this request will never be picked up by any
other job, and will never be completed.

The idea of the fix is fairly simple - while a job is executing a request it
needs to constantly update the inprogress_last_updated_at field with the current
timestamp. If a job dies, other jobs will notice that the timestamp
hasn't been updated for a while and mark the request as "new" again, so that
somebody else can pick it up.

Note that this obviously doesn't prevent all possible race conditions - a worker
might just be too slow and fail to update the inprogress timestamp in time, but
that race condition we handle on other layers, i.e. our worker guarantees that
every request will be executed at least once, but it doesn't guarantee that it will
be executed exactly once.

Now a few notes about implementation:
1) I intentionally separated the methods for finding abandoned requests and for marking them new again. I did so to make it easier to log which requests were abandoned (logging will come in the next diffs).

2) My original idea (D29821091) had an additional field called execution_uuid, which would be changed each time a new worker claims a request. In the end I decided it's not worth it - while execution_uuid can reduce the likelihood of two workers running at the same time, it doesn't eliminate it completely. So I decided that execution_uuid doesn't really give us much.

3) It's possible that two workers will be executing the same request and updating the same inprogress_last_updated_at field. As I mentioned above, this is expected, and the request implementation needs to handle it gracefully.
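A toy model of the heartbeat scheme described above (field and function names are invented; the real logic lives in long_running_request_queue):

```rust
// Status of a queued request; timestamps are plain integers for illustration.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Status {
    New,
    InProgress,
}

struct Request {
    status: Status,
    inprogress_last_updated_at: u64, // heartbeat, refreshed by the executing worker
}

/// Step 1: find in-progress requests whose heartbeat is older than `timeout`.
fn find_abandoned(requests: &[Request], now: u64, timeout: u64) -> Vec<usize> {
    requests
        .iter()
        .enumerate()
        .filter(|(_, r)| {
            r.status == Status::InProgress
                && now.saturating_sub(r.inprogress_last_updated_at) > timeout
        })
        .map(|(i, _)| i)
        .collect()
}

/// Step 2: mark them new again (kept separate so the abandoned set can be logged).
fn mark_new(requests: &mut [Request], abandoned: &[usize]) {
    for &i in abandoned {
        requests[i].status = Status::New;
    }
}

fn main() {
    let mut reqs = vec![
        Request { status: Status::InProgress, inprogress_last_updated_at: 100 }, // stale
        Request { status: Status::InProgress, inprogress_last_updated_at: 195 }, // healthy
    ];
    let abandoned = find_abandoned(&reqs, 200, 30);
    assert_eq!(abandoned, vec![0]);
    mark_new(&mut reqs, &abandoned);
    assert_eq!(reqs[0].status, Status::New);
    assert_eq!(reqs[1].status, Status::InProgress);
}
```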

Reviewed By: krallin

Differential Revision: D29845826

fbshipit-source-id: 9285805c163b57d22a1936f85783154f6f41df2f
2021-07-27 14:12:53 -07:00
Stanislau Hlebik
9271300067 mononoke: mark some fields as nullable
Summary:
Currently they get zeros by default, but having NULL here seems like a nicer
option.

Reviewed By: krallin

Differential Revision: D29846254

fbshipit-source-id: 981d979055eca91594ef81f0d6dc4ba571a2e8be
2021-07-27 14:12:53 -07:00
Stanislau Hlebik
3b7d6bdfae mononoke: bring long_running_request_queue in sync with what we have in prod
Reviewed By: krallin

Differential Revision: D29817070

fbshipit-source-id: 37b029e74c54df7ff5a7bd4a1c8ef3f85fff127c
2021-07-27 14:12:53 -07:00
Stanislau Hlebik
5dcc30a4b1 mononoke: fix megarepo logging to use correct method name
Reviewed By: krallin

Differential Revision: D29894709

fbshipit-source-id: 3f33df57cd0c32b40eb55dc02ef3820138a423d0
2021-07-27 02:13:09 -07:00
Arun Kulshreshtha
14d8c051c1 third-party/rust: remove patch from curl and curl-sys
Summary:
The patches to these crates have been upstreamed.

allow-large-files

Reviewed By: jsgf

Differential Revision: D29891894

fbshipit-source-id: a9f2ee0744752b689992b770fc66b6e66b3eda2b
2021-07-26 15:00:16 -07:00
Stanislau Hlebik
5afc48a292 mononoke: move bookmark in change_target_config conditionally
Summary:
Make a change to change_target_config similar to what we've done for add_sync_target
in D29848378: move the bookmark only if it points to an expected commit. That
makes it safer to deal with cases where the same change_target_config request
is executed twice.

Reviewed By: mojsarn

Differential Revision: D29874803

fbshipit-source-id: d21a3029ee58e2a8acc41e37284d0dd03d2803a3
2021-07-24 03:55:08 -07:00
Stanislau Hlebik
4f632c4e8b mononoke: create bookmark in add_sync_target
Summary:
This is the first diff that tries to make megarepo asynchronous methods
idempotent - replaying the same request twice shouldn't cause corruption on the
server. At the moment this is not the case - if we have a runaway
add_sync_target call, then in the end it moves the bookmark to a random place,
even if there was another identical successful add_sync_target call and a few
others on top.

add_sync_target should create a new bookmark, and if the bookmark already exists
it's better not to move it to a random place.

This diff does that. However, it creates another problem - if a request was successful on the mononoke side but we failed to deliver the successful result to the client (e.g. network issues), then retrying the request would fail because the bookmark already exists. This problem will be addressed in the next diff.

Reviewed By: mojsarn

Differential Revision: D29848378

fbshipit-source-id: 8a58e35c26b989a7cbd4d4ac4cbae1691f6e9246
2021-07-24 03:55:08 -07:00
Stanislau Hlebik
d1e86ab457 mononoke: add more logging to add sync target call
Summary: It's nice to be able to keep track of what's going on

Reviewed By: mwdevine

Differential Revision: D29790543

fbshipit-source-id: b855d72efe8826a99b3a6a562722e299e9cbfece
2021-07-22 14:52:03 -07:00
CodemodService Bot
0a402ce760 Daily common/rust/cargo_from_buck/bin/autocargo
Reviewed By: krallin

Differential Revision: D29841733

fbshipit-source-id: c9da8e0324f402f3b9726f2733b51de56abde8f6
2021-07-22 09:22:41 -07:00
Stanislau Hlebik
46827b3756 mononoke: remove unused method
Summary:
This is not used. Even though this method has the "right intention" (i.e. we
need to start marking long running requests as new), I'm not sure we can use it
as is. So let's just delete it for now.

Reviewed By: farnz

Differential Revision: D29817068

fbshipit-source-id: 84d392fea01dfb5fb7bc56f0072baf2cf70b39f4
2021-07-21 12:02:59 -07:00
Stanislau Hlebik
d74cc69de4 mononoke: avoid creating deletion commit on megarepo mainline
Summary:
In previous diff we started creating deletion commits on megarepo mainline.
This is not great since it breaks bisects, and this diff avoids that.

The way it does it is the following:
1) First do the same thing we did before - create a deletion commit, and then
create a merge commit with p1 as the deletion commit and p2 as the addition commit.
Let's call it a "fake merge", since this commit won't be used for our mainline.
2) Generate the manifest for our "fake merge", and then use this manifest to
generate a bonsai diff. But this time make p1 the old target commit (i.e. remove
the deletion commit as if it never existed).
3) Use the generated bonsai diff to create a commit.

So in short we split the procedure in two - first generate and validate the
resulting manifest (this is what the "fake merge" commit is for), and then
generate the bonsai changeset using this manifest. It's unfortunate that in order
to generate the resulting manifest we actually need to create a commit and save
it to the blobstore. If we had in-memory manifests we could have avoided that,
but alas we don't have them yet.

This way of creating bonsai changesets is a bit unconventional, but I think it has the benefit of relying on tools we're confident work (i.e. bonsai_diff), and we don't need to reimplement all the bonsai logic again.

Reviewed By: mitrandir77

Differential Revision: D29633340

fbshipit-source-id: eebdb0e4db5abbab9346c575b662b7bb467497c4
2021-07-09 05:23:43 -07:00
Stanislau Hlebik
58ffbc5cec mononoke: redo the way we create merge bonsai changesets in change_target_config
Summary:
Initially I just wanted to address comments from D29515737 (fa8796ae19) about unnecessary
manifest retraversals, but there were a few more problems:
1) We didn't correctly detect file conflicts in the final merge commit. For
example, if the additions_merge commit added a file "dir/1.txt" but there's
already a file "dir" in the target changeset, then we wouldn't detect this problem.
2) What's worse is that we might produce an invalid bonsai merge changeset
ourselves. Say, if we delete "source_1/dir/file.txt" and then add a file
"source_1/dir" in the additions merge commit, then the resulting bonsai changeset
should have a "source_1/dir" entry.

This diff does the following:
1) Adds more tests to cover different corner cases - some of them were failing
before this diff.
2) Improves the logic that verifies file conflicts.
3) Instead of trying to generate a correct merge bonsai changeset directly,
simplifies the task by creating a separate deletion commit.

Note that creating a deletion commit on the mainline is something we want to
avoid so as not to break bisects. This will be addressed in the next diff.
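The first failure mode above (a new file under a path that already exists as a file) can be sketched as a plain path check; this is an illustration, not the actual Mononoke logic:

```rust
/// A new file conflicts with the target changeset if some existing file is
/// an ancestor "directory" of it, or if the new path would itself have to
/// be a directory containing an existing file.
fn conflicts(existing_files: &[&str], new_file: &str) -> bool {
    existing_files.iter().any(|existing| {
        *existing == new_file
            || new_file.starts_with(&format!("{}/", existing))
            || existing.starts_with(&format!("{}/", new_file))
    })
}

fn main() {
    // The example from the summary: "dir" already exists as a file, so the
    // additions merge cannot add "dir/1.txt".
    assert!(conflicts(&["dir"], "dir/1.txt"));
    // The second failure mode: "source_1/dir" cannot be added as a file while
    // "source_1/dir/file.txt" exists.
    assert!(conflicts(&["source_1/dir/file.txt"], "source_1/dir"));
    // Unrelated paths do not conflict.
    assert!(!conflicts(&["other/2.txt"], "dir/1.txt"));
}
```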

Reviewed By: mitrandir77

Differential Revision: D29633341

fbshipit-source-id: 8f755d852212fbce8f9331049bf836c1d0a4ef42
2021-07-09 05:23:43 -07:00
Mateusz Kwapich
3a41e7fbc3 megarepo_add_branching_sync_target method
Summary: This new method will allow megarepo customers to create a sync target that branches off an existing target. This feature is meant to be used for release branches.

Reviewed By: StanislavGlebik

Differential Revision: D29275281

fbshipit-source-id: 7b58d5cc49c99bbc5f7e01814178376aa3abfcdf
2021-07-09 05:23:43 -07:00
Mateusz Kwapich
051894b81d add fb303 flags to async request worker
Summary: needed to set up tw health check

Reviewed By: StanislavGlebik

Differential Revision: D29580808

fbshipit-source-id: 6a3833d652979915fd44dc6d89511192397d8b96
2021-07-07 03:47:07 -07:00
Mateusz Kwapich
fa8796ae19 change_target_config implementation
Summary: The implementation of the change_sync_target_config method.

Reviewed By: StanislavGlebik

Differential Revision: D29515737

fbshipit-source-id: 748278e73b1ed727550f3f05451b508a70be07db
2021-07-06 08:32:48 -07:00
Mateusz Kwapich
28d69d60c8 use the SourceName newtype where possible
Summary:
I got frustrated with the fact that half of the functions in
megarepo_api required the source name to be wrapped into the newtype and the
other half didn't. This refactor unifies it everywhere except the thrift
datastructure itself - not sure if we can affect thrift codegen in this way.

Reviewed By: StanislavGlebik

Differential Revision: D29515474

fbshipit-source-id: 2d55a03cf396b174b0228c3fcc627b2296600400
2021-07-06 08:32:48 -07:00
Mateusz Kwapich
ae57ff3ccc make writing state optional in create_merge_commits
Summary:
The merge commit in the case of change_target_sync_config won't represent any
consistent state of the target, so we don't want to write the remapping state
file there.

Reviewed By: StanislavGlebik

Differential Revision: D29515476

fbshipit-source-id: b0703be1127af6582785510fde51ff8501fb4f17
2021-07-06 08:32:48 -07:00
Mateusz Kwapich
15f3eadc49 make create_move_commits take just sources
Summary:
In the case of change_target_sync_config we'll be creating move commits only for
a subset of sources, so let's change the function signature to make it possible
to specify such a subset.

Reviewed By: StanislavGlebik

Differential Revision: D29515475

fbshipit-source-id: 31002ec56dad872948bcbc79b0ed5fdb794e1f10
2021-07-06 08:32:48 -07:00
Mateusz Kwapich
85f31f3f85 move reusable functions to common
Summary:
The `change_target_config` method's responsibilities have a huge intersection
with `add_target_config`: the change method needs to know how to merge in new
sources into the target, and the whole "create move commits, then create merge
commits" flow can be reused.

Reviewed By: StanislavGlebik

Differential Revision: D29515301

fbshipit-source-id: c15f95875cbcbf5aad00e5047f6a8ffb55c4da31
2021-07-06 08:32:48 -07:00
Yan Soares Couto
1bcae1ae65 Add redaction config to common config, don't use it yet
Summary:
This reads the config added in D29305462. It populates it into the `CommonConfig` struct, and also adds it to `RepoFactory`, but doesn't use it anywhere yet; that will be done in the next diff.

There is a single behaviour change in this diff, which I believe should be harmless but is noted in the comments in case it isn't.

Reviewed By: markbt

Differential Revision: D29272581

fbshipit-source-id: 62cd7dc78478c1d8cb212eafdd789527ead50ef6
2021-06-30 08:57:30 -07:00
Stanislau Hlebik
39e915d8d9 mononoke: allow creation of multiple symlinks that point to the same directory
Summary:
Previously it wasn't possible because the symlink target was the key in the map
that mega_grepo_sync was sending to scs, so we couldn't have two different
symlinks for the same symlink target. However, we actually need that - some
aosp repos have different symlink sources that point to the same symlink target.

This diff fixes it by swapping the key and value in the `linkfiles` map.
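A minimal illustration of why the map had to be flipped (the paths are made up):

```rust
use std::collections::HashMap;

fn main() {
    // Old shape: symlink target as the key. Two symlinks pointing at the
    // same target collide, and one of them is silently lost.
    let mut by_target: HashMap<&str, &str> = HashMap::new();
    by_target.insert("shared/dir", "a/link");
    by_target.insert("shared/dir", "b/link");
    assert_eq!(by_target.len(), 1);

    // New shape: symlink source as the key. Both entries survive, since
    // sources are unique even when they share a target.
    let mut by_source: HashMap<&str, &str> = HashMap::new();
    by_source.insert("a/link", "shared/dir");
    by_source.insert("b/link", "shared/dir");
    assert_eq!(by_source.len(), 2);
}
```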

Differential Revision: D29359634

fbshipit-source-id: da74d6e934350822d82d2135ab06c754824525c9
2021-06-28 04:04:46 -07:00
Xavier Deguillard
41897e3acc third-party: patch os_info to properly support Centos Stream
Summary:
This is just updating the os_info crate to my fork with a fix for Centos
Stream: https://github.com/stanislav-tkach/os_info/pull/267

Reviewed By: quark-zju

Differential Revision: D29410043

fbshipit-source-id: 3642e704f5a056e75fee4421dc59020fde13ed5e
2021-06-25 21:07:33 -07:00
Daniel Xu
431a4ed16b Fix autocargo skew
Summary: I think someone landed a dependency change or something and forgot to update autocargo

Reviewed By: dtolnay

Differential Revision: D29402335

fbshipit-source-id: e9a4906bf249470351c2984ef64dfba9daac8891
2021-06-25 17:23:33 -07:00
Stanislau Hlebik
5ef7ba764b mononoke: do not check non-prefix free paths and verify_config earlier
Summary:
1) It turned out it's possible to have non-prefix-free paths in aosp manifests,
so we have to remove this check for now.
2) Also, let's verify the config earlier so that we can return an error to the
user faster.

Differential Revision: D29335602

fbshipit-source-id: 3dd72d63a370515eca5d356b3b98bb2ac2245aee
2021-06-25 09:26:33 -07:00
Thomas Orozco
8c83bd9a1c third-party/rust: update Tokio to 1.7.1
Summary: There is a regression in 1.7.0 (which we're on at the moment) so we might as well update.

Reviewed By: zertosh, farnz

Differential Revision: D29358047

fbshipit-source-id: 226393d79c165455d27f7a09b14b40c6a30d96d3
2021-06-25 06:17:41 -07:00
Stanislau Hlebik
3c14f3c20b mononoke: fix symlink handling in megarepo_api
Summary:
The path should be relative to the symlink path, not to the repo root. This diff
fixes that.

Reviewed By: farnz

Differential Revision: D29327682

fbshipit-source-id: a51161a8039a88263fe941562f2c2134aa5d4fef
2021-06-23 04:20:33 -07:00
Andres Suarez
fc37fea20c Update itertools 0.8.2 to 0.10.1
Reviewed By: dtolnay

Differential Revision: D29286012

fbshipit-source-id: 6923c0b750692e6932e85fd539b076b172ff43b7
2021-06-22 04:09:00 -07:00
Andrew Gallagher
05cf7acd77 object-0.25.3: patch SHT_GNU_versym entsize fix
Summary:
Pull in a patch which fixes writing out an incorrect entsize for the
`SHT_GNU_versym` section:
ddbae72082

Reviewed By: igorsugak

Differential Revision: D29248208

fbshipit-source-id: 90bbaa179df79e817e3eaa846ecfef5c1236073a
2021-06-21 09:31:49 -07:00
Andres Suarez
845128485c Update bytecount
Reviewed By: dtolnay

Differential Revision: D29213998

fbshipit-source-id: 92e7a9de9e3d03f04b92a77e16fa0e37428fe2fb
2021-06-17 19:50:32 -07:00
Davide Cavalca
b82c5672fc Update several rust crate versions
Summary: Update versions for several of the crates we depend on.

Reviewed By: danobi

Differential Revision: D29165283

fbshipit-source-id: baaa9fa106b7dad000f93d2eefa95867ac46e5a1
2021-06-17 16:38:19 -07:00
CodemodService Bot
4c4dfd45ad Daily common/rust/cargo_from_buck/bin/autocargo
Reviewed By: krallin

Differential Revision: D29158387

fbshipit-source-id: 48a0b590e01083d762bbed2b7e272cbefc72641f
2021-06-16 04:50:15 -07:00
Alex Hornby
4457092322 rust: revert zstd crates
Summary: revert the zstd crates back to previous version

Reviewed By: johansglock

Differential Revision: D29038514

fbshipit-source-id: 3cbc31203052034bca428441d5514557311b86ae
2021-06-11 04:39:54 -07:00
Stanislau Hlebik
2029d295e9 mononoke: fix validation failure when copyfile is used
Summary:
Turned out we still don't process the copyfile attribute correctly. copyfile
creates two overrides like:
{
  "copyfile source" -> default prefix + "/copyfile source"
  "copyfile source" -> "destination"
}

However, we also have a check that validates that no paths (that includes the
default path, overrides, linkfiles etc.) are prefixes of each other, and this
check fails in the copyfile case, even though the path maps to exactly the same
place as if the default prefix had been applied.

Let's fix it. Also note that we hadn't seen this failure in the integration
test because we didn't run verify_paths in tests; this diff fixes that as well.

Reviewed By: mitrandir77

Differential Revision: D28992456

fbshipit-source-id: 5fd993914b189cf768ba03010194b1c26026f7a8
2021-06-09 08:15:09 -07:00
Alex Hornby
f89dbebae8 rust: update zstd bindings to 1.5.0
Summary: Update to the latest version. This includes a patch to the async-compression crate from [my PR updating it](https://github.com/Nemo157/async-compression/pull/125); I will remove it once the crate is released.

Reviewed By: mitrandir77

Differential Revision: D28897019

fbshipit-source-id: 07c72f2880e7f8b85097837d084178c6625e77be
2021-06-08 07:57:29 -07:00
Mateusz Kwapich
98f21e7cb2 add support for syncing merge commits
Reviewed By: StanislavGlebik

Differential Revision: D28887334

fbshipit-source-id: 909a3948df75312767dda8d2f184c0a885a56962
2021-06-08 05:49:01 -07:00
Mateusz Kwapich
59fc317881 add support for copyfiles in test util
Summary: I need it for a test in the next diff

Reviewed By: StanislavGlebik

Differential Revision: D28959433

fbshipit-source-id: c67ca7eec03f94425332e446f6f97038edff598d
2021-06-08 05:49:01 -07:00
Mateusz Kwapich
eb0290d82a extract last_synced_commit to method
Summary:
I'll be using it more in the next diff, so why not have it in its own method.

Reviewed By: StanislavGlebik

Differential Revision: D28887333

fbshipit-source-id: 35accb495a577e1c01ec8114fc60acf38ed11fee
2021-06-08 05:49:01 -07:00
Mateusz Kwapich
f29792433d add a way to reorder parents during rewrite
Summary:
When syncing merge commits with two parents, it would be nice if the first
parent was the one that comes from the unified branch. In **case of octopus
merges** we really don't want the parent in the unified branch to be third
(that would turn the sync into a non-forward move!). Let's add a way to tell
the commit rewriter which parent needs to be first.
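The reordering amounts to something like the following sketch (the function is a hypothetical stand-in for the rewriter knob this diff adds):

```rust
/// Put the parent that comes from the unified branch first, preserving the
/// relative order of the remaining parents.
fn reorder_parents(parents: Vec<String>, first: &str) -> Vec<String> {
    let mut out = Vec::with_capacity(parents.len());
    let mut rest = Vec::new();
    for p in parents {
        if p == first && out.is_empty() {
            out.push(p);
        } else {
            rest.push(p);
        }
    }
    out.extend(rest);
    out
}

fn main() {
    let parents = vec!["p1".to_string(), "p2".to_string(), "unified".to_string()];
    // Without reordering, "unified" would be the third (octopus) parent.
    assert_eq!(reorder_parents(parents, "unified"), vec!["unified", "p1", "p2"]);
}
```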

Reviewed By: farnz

Differential Revision: D28885488

fbshipit-source-id: 57a081ce2d285ba2b6d6d98110cd1c64a241548e
2021-06-08 05:49:01 -07:00
Mateusz Kwapich
fed6a478a8 move create_single_move_commit to common
Summary: I'm planning to reuse it in syncing merge commits.

Reviewed By: farnz

Differential Revision: D28885489

fbshipit-source-id: 6035c0e7290f137b723b73e656b73d4f78e2da9d
2021-06-08 05:49:01 -07:00
CodemodService Bot
254d2a37ad Daily common/rust/cargo_from_buck/bin/autocargo
Reviewed By: krallin

Differential Revision: D28928316

fbshipit-source-id: 6da6c9a5321d722a3dfd816b49f3994df98c7471
2021-06-07 02:19:59 -07:00
Mateusz Kwapich
d2530263b3 obtain repo without doing ACL checks
Summary:
The worker should be able to process requests from the queue no matter
which repo it is and what its ACLs are. It's during request scheduling that
we should check the identity of the entity scheduling the request.

Reviewed By: StanislavGlebik

Differential Revision: D28866807

fbshipit-source-id: 5d57eb9ba86e10d477be5cfc51dfb8f62ea16b9e
2021-06-03 12:32:54 -07:00
Stanislau Hlebik
530f8279b8 mononoke: put the name of the source in the move and merge commits
Summary: This makes it easier to understand what each move and merge commits are for.

Reviewed By: mitrandir77

Differential Revision: D28839677

fbshipit-source-id: 1a42205c164224b64c773cff80b690b251a48381
2021-06-03 05:24:40 -07:00
Stanislau Hlebik
12969d5738 mononoke: derive data in add_sync_target
Summary:
This is a precaution. add_sync_target can create a very branchy repository, and
I'm not sure how well Mononoke is going to handle deriving these commits from
all mononoke hosts at once (in particular, I'm worried about the traversal that
has to be done by all hosts in parallel). In theory it should work fine, but
deriving data during the add_sync_target call should be a reasonable thing to do
anyway.

While working on this I noticed that the "git" derived data type is not supported by derived data utils, and it was causing test failures. I don't think there's any reason not to have TreeHandle in derived data utils, so I'm adding it now.

Reviewed By: farnz

Differential Revision: D28831052

fbshipit-source-id: 5f60ac5b93d2c38b4afa0f725c6908edc8b98b18
2021-06-03 02:23:15 -07:00
Robin Håkanson
31fb330fc7 Consistent and unique source_name
Summary:
source_name should be unique even if the same git repo and project name is mapped several times in the same manifest.

source_name needs to match between all the different places we use it (`add_target_config`, `sync_changeset`, `changesets_to_merge`); we were not consistent.

Reviewed By: StanislavGlebik

Differential Revision: D28845869

fbshipit-source-id: 54e96dcdeaf22ec68f626e9c30e5e60c54ec149b
2021-06-02 15:03:59 -07:00
Stanislau Hlebik
56107af712 mononoke: create move commits in parallel
Summary:
We are going to create quite a lot of move commits at the same time, and it can
be slow. Let's instead create them in parallel, and then call
`save_bonsai_changesets` for all the commits in one go.
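The pattern sketched with OS threads standing in for Mononoke's async tasks (names invented): build every move commit concurrently, then persist the whole batch in one write.

```rust
use std::thread;

/// Create one "move commit" per source in parallel, then return the whole
/// batch so the caller can save it in a single save_bonsai_changesets-style
/// write instead of N separate ones.
fn create_move_commits(sources: Vec<String>) -> Vec<String> {
    let handles: Vec<_> = sources
        .into_iter()
        .map(|src| thread::spawn(move || format!("move-commit({})", src)))
        .collect();
    // Joining in spawn order keeps the result order deterministic.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let commits = create_move_commits(vec!["source_1".to_string(), "source_2".to_string()]);
    assert_eq!(commits, vec!["move-commit(source_1)", "move-commit(source_2)"]);
}
```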

Reviewed By: mitrandir77

Differential Revision: D28795525

fbshipit-source-id: f6b6420c2fe30bb98680ac7e25412c55c99883e0
2021-06-02 07:59:55 -07:00
Stanislau Hlebik
631d21ec95 mononoke: create a stack of merge commits
Summary:
Previously the add_sync_target method was creating a single merge commit, which
means we might create a bonsai commit with hundreds of parents. This is not
ideal, because mercurial can only work correctly with 2 parents - for a bonsai
changeset with 3 or more parents, mercurial file histories might be lost.

So instead of creating a single giant merge commit, let's create a stack of
merge commits.
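The stacking itself can be sketched as a fold over the parents (commit names here are invented placeholders):

```rust
/// Fold N parents into a chain of two-parent merges:
/// m0 = merge(p0, p1), m1 = merge(m0, p2), ... so no commit in the stack
/// ends up with more than two parents.
fn merge_stack(parents: Vec<String>) -> Vec<(String, String, String)> {
    let mut merges = Vec::new();
    let mut iter = parents.into_iter();
    let mut current = match iter.next() {
        Some(p) => p,
        None => return merges,
    };
    for (i, p) in iter.enumerate() {
        let name = format!("m{}", i);
        merges.push((name.clone(), current, p)); // (merge commit, p1, p2)
        current = name;
    }
    merges
}

fn main() {
    let merges = merge_stack(vec!["p0".to_string(), "p1".to_string(), "p2".to_string()]);
    assert_eq!(merges.len(), 2);
    assert_eq!(merges[0], ("m0".to_string(), "p0".to_string(), "p1".to_string()));
    assert_eq!(merges[1], ("m1".to_string(), "m0".to_string(), "p2".to_string()));
}
```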

Reviewed By: mitrandir77

Differential Revision: D28792581

fbshipit-source-id: 2f8ff6b49db29c4692b7385f1d1ab57986075d57
2021-06-02 07:59:55 -07:00
Stanislau Hlebik
2c63981029 aosp megarepo: validate commits that are passed to add_sync_target
Summary:
Basically, do the validations described in the [source_control.thrift
file](https://www.internalfb.com/code/fbsource/[9f147c38ea74f1c6482b99cfd1edd6103d5bd3db]/fbcode/eden/mononoke/scs/if/source_control.thrift?lines=965-975).

Reviewed By: mitrandir77

Differential Revision: D28712729

fbshipit-source-id: 7c55116e4ac875961ded82d6708af56d14a1bf79
2021-06-02 07:59:55 -07:00