Summary:
`rust_include_srcs` is supported on `thrift_library` as a way of including other Rust code in the generated crate, generally used to implement other traits on the generated types.
Add support for this in autocargo by copying these files into the output dir and making sure the corresponding option is passed to the thrift compiler.
Reviewed By: ahornby
Differential Revision: D30789835
fbshipit-source-id: 325cb59fdf85324dccfff20a559802c11816769f
Summary:
Add impls of `Layer` for `Box<L: Layer>`/`Arc<L: Layer>` and for `dyn Layer`. Also pull
in a pile of other updates from git which haven't been published to crates.io yet,
including proper level filtering of trace events being fed into `log`.
Reviewed By: dtolnay
Differential Revision: D30829927
fbshipit-source-id: c01c9369222df2af663e8f8bf59ea78ee12f7866
Summary:
Bump all the crates.io versions to the highest available, to make the migration to github
versions in the next diff work.
Reviewed By: dtolnay
Differential Revision: D30829928
fbshipit-source-id: 09567c26f275b3b1806bf8fd05417e91f04ba2ef
Summary:
Currently there are two things preventing us from running add_sync_target
on an existing target:
* an already existing bookmark
* an already existing config
Both need to be deleted to create a new target. This diff removes the second
requirement, to simplify the code and make it easier to recreate the target (it's
easy to forget to manually remove the config, since configs otherwise don't need
human intervention).
Reviewed By: StanislavGlebik
Differential Revision: D30767613
fbshipit-source-id: f951c0e1ef9bde69d805dc911331fcdb220123f2
Summary:
Like it says in the title, this updates us to use Daemonize 0.5, though from
Github and not Crates.io, because it hasn't been released to the latter yet.
The main motivation here is to pull in
https://github.com/knsd/daemonize/pull/39 to avoid leaking PID files to
children of the daemon.
This required some changes in `hphp/hack/src/facebook/hh_decl` and `xplat/rust/mobium`, since the way to
run code after daemonization has changed (and become more flexible).
Reviewed By: ndmitchell
Differential Revision: D30694946
fbshipit-source-id: d99768febe449d7a079feec78ab8826d0e29f1ef
Summary:
Manual component version update
Bump Schedule: https://www.internalfb.com/intern/msdk/bump/?schedule_fbid=342556550408072
Package: https://www.internalfb.com/intern/msdk/package/181247287328949/
Oncall Team: rust_foundation
NOTE: This build is expected to expire at 2022/09/01 09:14AM PDT
---------
New project source changes since last bump based on D30663071 (08e362a355e0a64a503f5073f57f927394696b8c at 2021/08/31 03:47AM -05):
| 2021/08/31 04:41AM -05 | generatedunixname89002005294178 | D30665384 | [MSDK] Update autocargo component on FBS:master |
| 2021/08/31 07:14PM PDT | kavoor | D30681642 | [autocargo] Make cxx-build match version of cxx |
| 2021/09/01 04:05PM BST | krallin | D30698095 | autocargo: include generated comment in OSS manifests |
---------
build-break (bot commits are not reviewed by a human)
Reviewed By: farnz
Differential Revision: D30717040
fbshipit-source-id: 2c1d09f0d51b6ff2e2636496cf22bcf781f22889
Summary:
The mockall crate's `automock` attribute previously created nondeterministic output, which leads to frequent random "Found possibly newer version of crate" failures in Buck builds that involve cache.
The affected trait in Conveyor is:
https://www.internalfb.com/code/fbsource/[4753807291f7275a061d67cead04ea12e7b38ae2]/fbcode/conveyor/common/just_knobs/src/lib.rs?lines=13-23
which has a method with two lifetime parameters. Mockall's generated code shuffled them in random order due to emitting the lifetimes in HashSet order. The generated code would randomly contain one of these two types:
`Box<dyn for<'b, 'a> FnMut(&str, Option<&'a str>, Option<&'b str>) -> Result<bool> + Send>`
`Box<dyn for<'a, 'b> FnMut(&str, Option<&'a str>, Option<&'b str>) -> Result<bool> + Send>`
Reviewed By: jsgf
Differential Revision: D30656936
fbshipit-source-id: c1a251774333d7a4001a7492c1995efd84ff22e5
Summary:
add_sync_target might need to derive a lot of data, and it takes a long time to
do it. We don't have any resumability, so if it fails for any reason, then we'd
need to start over.
For now, let's just retry a few times so that we don't have to start over
because of flakiness.
Reviewed By: mitrandir77
Differential Revision: D30511785
fbshipit-source-id: 1a9c5e62db366022ad487ed108dd41b1dea4caa2
Summary:
These functions were used heavily, and used old futures.
That meant a lot of uselessly copied stuff, and harder interop.
I also took the opportunity to pass `CoreContext` around as a reference, and did some BlobRepo refactoring to use attribute traits where possible.
Reviewed By: StanislavGlebik
Differential Revision: D30368261
fbshipit-source-id: 2e63677601fafa3c2e3d9d3340df0a5f31a19a11
Summary:
The diff is giant, but it's just a one-line change to add the
nested-values feature to slog; it's just that a whole bunch of projects depend
on slog.
Reviewed By: dtolnay
Differential Revision: D30351289
fbshipit-source-id: b6c1c896b06cbdf23b1f92c0aac9a97aa116085d
Summary:
In preparation for the derived data manager, ensure that derived data
mappings do not require a `BlobRepo` reference.
The main use for this was to log to scuba. This functionality is extracted out
to the new `BonsaiDerivedMappingContainer`, which now contains just enough
information to be able to log to scuba.
Reviewed By: mitrandir77
Differential Revision: D30135447
fbshipit-source-id: 1daa468a87f297adc531cb214dda3fa7fe9b15da
Summary:
We have a mover only for files, and it doesn't quite work for directories - at
the very least, a directory can be None (i.e. the root of the repo).
In the next diffs we'll start recording file and directory renames during
megarepo operations, so let's add a DirectoryMultiMover as preparation for that.
Reviewed By: mitrandir77
Differential Revision: D30338444
fbshipit-source-id: 4fed5f50397a7d3d8b77f23552921d515a684604
Summary:
AOSP megarepo wants to create release branches from existing branches, and then update configs to follow only release-ready code.
Provide the primitive they need to do this: it takes an existing commit and config, and creates a new config that tracks the same sources. The `change_target_config` method can then be used to shift from mainline to the release branch.
Reviewed By: StanislavGlebik
Differential Revision: D30280537
fbshipit-source-id: 43dac24451cf66daa1cd825ada8f685957cc33c1
Summary:
This diff adds some data to BonsaiChangeset that tells whether it is a snapshot or not.
For now, this marks every changeset as not being a snapshot. The next diff will add validation to snapshots, some tests, and mark the current `snapshot createremote` command as uploading snapshots.
Reviewed By: markbt
Differential Revision: D30158530
fbshipit-source-id: 9835450ac44e39ce8d653938f3a629f081247d2f
Summary:
Autocargo only allows 1 rust-library per Cargo.toml, but right now we have 3
per Thrift library so that doesn't work:
https://www.internalfb.com/intern/sandcastle/log/?instance_id=27021598231105145&step_id=27021602582167211&step_index=13&name=Run%20config
There's little benefit in Autocargo-ifying those rules anyway, since they're of
use to Thrift servers and that doesn't work at all in our OSS builds, so let's
just see if we can noop them. That'll make the crate not exist at all as a
dep, but considering that it exists only to link to a C++ library that
Autocargo doesn't know how to build anyway, that seems OK?
drop-conflicts
Reviewed By: markbt
Differential Revision: D30304720
fbshipit-source-id: 047524985b2dadab8610267c05e3a1b3770e84e6
Summary: This adds types to FileChange thrift and rust structs to deal with additional possible snapshot states, that is, untracked and missing files. Conflicted stuff not added yet.
Reviewed By: StanislavGlebik
Differential Revision: D30103162
fbshipit-source-id: 59faa9e4af8dca907b1ec410b8af74985d85b837
Summary:
for now this changes:
```
struct FileChange {
...stuff
}
fn f(x: Option<FileChange>)
```
to
```
struct TrackedFileChange {
...stuff
}
enum FileChange {
TrackedChange(TrackedFileChange),
Deleted,
}
fn f(x: FileChange)
```
This makes it much clearer that `None` actually means the file was deleted. It will also be useful because in the next diff I will add more variants to FileChange (for untracked changes), and this refactor will make that easy.
(The refactor from using `Option` to putting it all inside the enum isn't really necessary, but IMO it looks much clearer, so I did it.)
Reviewed By: StanislavGlebik
Differential Revision: D30103454
fbshipit-source-id: afd2f29dc96baf9f3d069ad69bb3555387cff604
Summary: Previous diffs switched all our usage from chashmap to dashmap as dashmap upstream is more responsive. Now remove chashmap from the cargo vendoring.
Reviewed By: dtolnay
Differential Revision: D30046522
fbshipit-source-id: 111ef9375bd8095f8b7c95752ecbc1988fb0438d
Summary:
In integration tests, we want to be able to run through the megarepo processing, and then check that configs have persisted correctly, so that we can start async workers after sending a config change down, and see the change be picked up.
Make it possible
Reviewed By: StanislavGlebik
Differential Revision: D30012106
fbshipit-source-id: f944165e7b93451180a78d8287db8a59d71bbe13
Summary:
The use of dyn traits of the Thrift-generated server traits was emitting future compatibility warnings with recent versions of rustc, due to a fixed soundness hole in the trait object system:
```
error: the trait `x_account_aggregator_if::server::XAccountAggregator` cannot be made into an object
|
= this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
note: for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
```
This diff pulls in https://github.com/dtolnay/async-trait/releases/tag/0.1.51 which results in the Thrift-generated server traits no longer hitting the problematic pattern.
Reviewed By: zertosh
Differential Revision: D29979939
fbshipit-source-id: 3e6e976181bfcf35ed453ae681baeb76a634ddda
Summary:
Just as with D29874802 and D29848377, let's make sure if the same
sync_changeset request was sent again then we would return the same result.
Reviewed By: mojsarn
Differential Revision: D29876414
fbshipit-source-id: 91c3bd38983809da8ce246f44066204df667bb12
Summary:
# Goal of the stack
The goal of this stack is to make megarepo api safer to use. In particular, we want to achieve:
1) If the same request is executed a few times, then it won't corrupt the repo in any way (i.e. it won't create commits that the client didn't intend to create and it won't move bookmarks to unpredictable places)
2) If a request finished successfully, but we failed to send the success to the client, then repeating the same request would finish successfully.
Achieving #1 is necessary because async_requests_worker might execute a few requests at the same time (though this should be rare). Achieving #2 is necessary because if we fail to send a successful response to the client (e.g. because of network issues), we want the client to retry and get this successful response back, so that the client can continue with their next request.
In order to achieve #1 we make all bookmark moves conditional, i.e. we move a bookmark only if the current location of the bookmark is at the place where the client expects it. This should help achieve goal #1, because even if we have two requests executing at the same time, only one of them will successfully move a bookmark.
However once we achieve #1 we have a problem with #2 - if a request was successful, but we failed to send a successful reply back to the client, then the client will retry the request, and it will fail, because the bookmark is already at the new location (because the previous request was successful), but the client expects it to be at the old location (because the client doesn't know that the request was successful). To fix this issue, before executing the request we check if this request was already successful, and we do it heuristically by checking request parameters and verifying the commit remapping state. This doesn't protect against malicious clients, but it should protect from issue #2 described above.
So the whole stack of diffs is the following:
1) take a method from megarepo api
2) implement a diff that makes bookmark moves conditional
3) Fix the problem #2 by checking if a previous request was successful or not
# This diff
Now that we have target_location in the sync_changeset() method,
let's move the bookmark in sync_changeset conditionally, just as in D29874803 (5afc48a292).
This prevents race conditions when the same sync_changeset
method is executing twice.
Reviewed By: krallin
Differential Revision: D29876413
fbshipit-source-id: c076e14171c6615fba2cedf4524d442bd25f83ab
Summary:
# Goal of the stack
The goal of this stack is to make megarepo api safer to use. In particular, we want to achieve:
1) If the same request is executed a few times, then it won't corrupt the repo in any way (i.e. it won't create commits that the client didn't intend to create and it won't move bookmarks to unpredictable places)
2) If a request finished successfully, but we failed to send the success to the client, then repeating the same request would finish successfully.
Achieving #1 is necessary because async_requests_worker might execute a few requests at the same time (though this should be rare). Achieving #2 is necessary because if we fail to send a successful response to the client (e.g. because of network issues), we want the client to retry and get this successful response back, so that the client can continue with their next request.
In order to achieve #1 we make all bookmark moves conditional, i.e. we move a bookmark only if the current location of the bookmark is at the place where the client expects it. This should help achieve goal #1, because even if we have two requests executing at the same time, only one of them will successfully move a bookmark.
However once we achieve #1 we have a problem with #2 - if a request was successful, but we failed to send a successful reply back to the client, then the client will retry the request, and it will fail, because the bookmark is already at the new location (because the previous request was successful), but the client expects it to be at the old location (because the client doesn't know that the request was successful). To fix this issue, before executing the request we check if this request was already successful, and we do it heuristically by checking request parameters and verifying the commit remapping state. This doesn't protect against malicious clients, but it should protect from issue #2 described above.
So the whole stack of diffs is the following:
1) take a method from megarepo api
2) implement a diff that makes bookmark moves conditional
3) Fix the problem #2 by checking if a previous request was successful or not
# This diff
We already have it for change_target_config, and it's useful to prevent races
and inconsistencies. That's especially important given that our async request
worker might run a few identical sync_changeset methods at the same time, and
target_location can help process this situation correctly.
Let's add target_location to sync_changeset, and while there I also updated the
comment for these fields in other methods. The comment said
```
// This operation will succeed only if the
// `target`'s bookmark is still at the same location
// when this operation tries to advance it
```
This is not always the case - the operation might succeed if the same operation has been
re-sent twice; see previous diffs for more explanation and motivation.
Reviewed By: krallin
Differential Revision: D29875242
fbshipit-source-id: c14b2148548abde984c3cb5cc62d04f920240657
Summary:
# Goal of the stack
The goal of this stack is to make megarepo api safer to use. In particular, we want to achieve:
1) If the same request is executed a few times, then it won't corrupt the repo in any way (i.e. it won't create commits that the client didn't intend to create and it won't move bookmarks to unpredictable places)
2) If a request finished successfully, but we failed to send the success to the client, then repeating the same request would finish successfully.
Achieving #1 is necessary because async_requests_worker might execute a few requests at the same time (though this should be rare). Achieving #2 is necessary because if we fail to send a successful response to the client (e.g. because of network issues), we want the client to retry and get this successful response back, so that the client can continue with their next request.
In order to achieve #1 we make all bookmark moves conditional, i.e. we move a bookmark only if the current location of the bookmark is at the place where the client expects it. This should help achieve goal #1, because even if we have two requests executing at the same time, only one of them will successfully move a bookmark.
However once we achieve #1 we have a problem with #2 - if a request was successful, but we failed to send a successful reply back to the client, then the client will retry the request, and it will fail, because the bookmark is already at the new location (because the previous request was successful), but the client expects it to be at the old location (because the client doesn't know that the request was successful). To fix this issue, before executing the request we check if this request was already successful, and we do it heuristically by checking request parameters and verifying the commit remapping state. This doesn't protect against malicious clients, but it should protect from issue #2 described above.
So the whole stack of diffs is the following:
1) take a method from megarepo api
2) implement a diff that makes bookmark moves conditional
3) Fix the problem #2 by checking if a previous request was successful or not
# This diff
Same as with D29848377 - if the result was already computed and the client retries the
same request, then return it.
Differential Revision: D29874802
fbshipit-source-id: ebc2f709bc8280305473d6333d0725530c131872
Summary:
# Goal of the stack
The goal of this stack is to make megarepo api safer to use. In particular, we want to achieve:
1) If the same request is executed a few times, then it won't corrupt the repo in any way (i.e. it won't create commits that the client didn't intend to create and it won't move bookmarks to unpredictable places)
2) If a request finished successfully, but we failed to send the success to the client, then repeating the same request would finish successfully.
Achieving #1 is necessary because async_requests_worker might execute a few requests at the same time (though this should be rare). Achieving #2 is necessary because if we fail to send a successful response to the client (e.g. because of network issues), we want the client to retry and get this successful response back, so that the client can continue with their next request.
In order to achieve #1 we make all bookmark moves conditional, i.e. we move a bookmark only if the current location of the bookmark is at the place where the client expects it. This should help achieve goal #1, because even if we have two requests executing at the same time, only one of them will successfully move a bookmark.
However once we achieve #1 we have a problem with #2 - if a request was successful, but we failed to send a successful reply back to the client, then the client will retry the request, and it will fail, because the bookmark is already at the new location (because the previous request was successful), but the client expects it to be at the old location (because the client doesn't know that the request was successful). To fix this issue, before executing the request we check if this request was already successful, and we do it heuristically by checking request parameters and verifying the commit remapping state. This doesn't protect against malicious clients, but it should protect from issue #2 described above.
So the whole stack of diffs is the following:
1) take a method from megarepo api
2) implement a diff that makes bookmark moves conditional
3) Fix the problem #2 by checking if a previous request was successful or not
# This diff
If a previous add_sync_target() call was successful on the mononoke side, but we
failed to deliver this result to the client (e.g. network issues), then the client
would just retry the call. Before this diff that wouldn't work (i.e. we would
just fail to create the bookmark because it's already created). This diff fixes
it by looking at the commit the bookmark points to and checking whether it looks
like it was created by a previous add_sync_target call. In particular, it checks
that the remapping state file matches the request parameters, and that the config
version is the same.
Differential Revision: D29848377
fbshipit-source-id: 16687d975748929e5eea8dfdbc9e206232ec9ca6
Summary:
High-level goal of this diff:
We have a problem in long_running_request_queue - if a tw job dies in the
middle of processing a request then this request will never be picked up by any
other job, and will never be completed.
The idea of the fix is fairly simple - while a job is executing a request, it
needs to constantly update the inprogress_last_updated_at field with the current
timestamp. If a job dies, other jobs would notice that the timestamp
hasn't been updated for a while and mark the request as "new" again, so that
somebody else can pick it up.
Note that it obviously doesn't prevent all possible race conditions - the worker
might just be too slow and not update the inprogress timestamp in time, but
that race condition we'd handle on other layers i.e. our worker guarantees that
every request will be executed at least once, but it doesn't guarantee that it will
be executed exactly once.
Now a few notes about implementation:
1) I intentionally separated the methods for finding abandoned requests and marking them new again. I did so to make it easier to log which requests were abandoned (logging will come in the next diffs).
2) My original idea (D29821091) had an additional field called execution_uuid, which would be changed each time a new worker claims a request. In the end I decided it's not worth it - while execution_uuid can reduce the likelihood of two workers running at the same time, it doesn't eliminate it completely. So I decided that execution_uuid doesn't really give us much.
3) It's possible that two workers will be executing the same request and updating the same inprogress_last_updated_at field. As I mentioned above, this is expected, and the request implementation needs to handle it gracefully.
Reviewed By: krallin
Differential Revision: D29845826
fbshipit-source-id: 9285805c163b57d22a1936f85783154f6f41df2f
Summary:
Currently they get zeros by default, but having NULL here seems like a nicer
option.
Reviewed By: krallin
Differential Revision: D29846254
fbshipit-source-id: 981d979055eca91594ef81f0d6dc4ba571a2e8be
Summary:
The patches to these crates have been upstreamed.
allow-large-files
Reviewed By: jsgf
Differential Revision: D29891894
fbshipit-source-id: a9f2ee0744752b689992b770fc66b6e66b3eda2b
Summary:
Make a similar change to change_target_config as we've done for add_sync_target
in D29848378: move the bookmark only if it points to the expected commit. That
makes it safer to deal with cases where the same change_target_config
was executed twice.
Reviewed By: mojsarn
Differential Revision: D29874803
fbshipit-source-id: d21a3029ee58e2a8acc41e37284d0dd03d2803a3
Summary:
This is the first diff that tries to make megarepo asynchronous methods
idempotent - replaying the same request twice shouldn't cause corruption on the
server. At the moment this is not the case - if we have a runaway
add_sync_target call, then in the end it moves a bookmark to a random place,
even if there was an identical successful add_sync_target call and a few others on
top.
add_sync_target should create a new bookmark, and if a bookmark already exists
it's better to not move it to a random place.
This diff does it, however it creates another problem - if a request was successful on mononoke side, but we failed to deliver the successful result to the client (e.g. network issues), then retrying this request would fail because bookmark already exists. This problem will be addressed in the next diff.
Reviewed By: mojsarn
Differential Revision: D29848378
fbshipit-source-id: 8a58e35c26b989a7cbd4d4ac4cbae1691f6e9246
Summary: It's nice to be able to keep track of what's going on
Reviewed By: mwdevine
Differential Revision: D29790543
fbshipit-source-id: b855d72efe8826a99b3a6a562722e299e9cbfece
Summary:
This is not used. Even though this method has the "right intention" (i.e. we
need to start marking long running requests as new), I'm not sure we can use it
as is. So let's just delete it for now.
Reviewed By: farnz
Differential Revision: D29817068
fbshipit-source-id: 84d392fea01dfb5fb7bc56f0072baf2cf70b39f4
Summary:
In previous diff we started creating deletion commits on megarepo mainline.
This is not great since it breaks bisects, and this diff avoids that.
The way it does it is the following:
1) First do the same thing we did before - create a deletion commit, and then
create a merge commit with p1 as the deletion commit and p2 as an addition commit.
Let's call it a "fake merge", since this commit won't be used for our mainline.
2) Generate the manifest for our "fake merge", and then use this manifest to
generate a bonsai diff. But this time make p1 the old target commit (i.e. remove
the deletion commit as if it never existed).
3) Use generated bonsai diff to create a commit.
So in short we split the procedure in two - first generate and validate the
resulting manifest (this is what we use the "fake merge" commit for), and then
generate the bonsai changeset using this manifest. It's unfortunate that in order
to generate the resulting manifest we actually need to create a commit and save it to
the blobstore. If we had in-memory manifests we could have avoided that, but alas
we don't have them yet.
This way of creating bonsai changesets is a bit unconventional, but I think it has the benefit of relying on tools that we have confidence in (i.e. bonsai_diff), and we don't need to reimplement all the bonsai logic again.
Reviewed By: mitrandir77
Differential Revision: D29633340
fbshipit-source-id: eebdb0e4db5abbab9346c575b662b7bb467497c4
Summary:
Initially I just wanted to address comments from D29515737 (fa8796ae19) about unnecessary
manifest retraversals, but there were a few more problems:
1) We didn't detect file conflicts in the final merge commit correctly. For
example, if the additions_merge commit added a file "dir/1.txt", but there's
already a file "dir" in the target changeset, then we wouldn't detect this problem.
2) What's worse, we might produce an invalid bonsai merge changeset
ourselves. Say, if we delete "source_1/dir/file.txt" and then add a file
"source_1/dir" in the additions merge commit, then the resulting bonsai changeset
should have a "source_1/dir" entry.
This diff does the following:
1) Adds more tests to cover different corner cases - some of them were failing
before this diff.
2) Improves logic to verify file conflicts
3) Instead of trying to generate correct merge bonsai changeset it simplifies
the task and creates a separate deletion commit.
Note that creating a deletion commit on the mainline is something we want to
avoid to not break bisects. This will be addressed in the next diff.
Reviewed By: mitrandir77
Differential Revision: D29633341
fbshipit-source-id: 8f755d852212fbce8f9331049bf836c1d0a4ef42
Summary: This new method will allow the megarepo customers to create a sync target that's branching off the existing target. This feature is meant to be used for release branches.
Reviewed By: StanislavGlebik
Differential Revision: D29275281
fbshipit-source-id: 7b58d5cc49c99bbc5f7e01814178376aa3abfcdf
Summary: needed to set up tw health check
Reviewed By: StanislavGlebik
Differential Revision: D29580808
fbshipit-source-id: 6a3833d652979915fd44dc6d89511192397d8b96