sapling/eden/mononoke/cmds/rechunker.rs
Yan Soares Couto 302131bd5f Allow cli commands to build any "repo object"
Summary:
The important change in this diff is in this file: `eden/mononoke/cmdlib/src/args/mod.rs`

In this diff I change that file's repo-building functions so that they can build both `BlobRepo` and `InnerRepo` (added in D28748221 (e4b6fd3751)). In fact, they can now build any facet container that the `RepoFactory` factory knows how to build, so each binary can specify its own subset of needed "attributes" and build only those.
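
As a rough sketch of what that enables (the struct and field names below are hypothetical; the attribute syntax follows the facet crate used to define `InnerRepo`), a binary could declare its own repo object and `RepoFactory` would build only the listed facets:

```rust
use filestore::FilestoreConfig;
use repo_blobstore::RepoBlobstore;

/// Hypothetical per-binary repo object: the factory builds exactly the
/// facets declared here instead of a full BlobRepo.
#[facet::container]
#[derive(Clone)]
pub struct RechunkerRepo {
    #[facet]
    repo_blobstore: RepoBlobstore,

    #[facet]
    filestore_config: FilestoreConfig,
}
```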

For now, they all still use `BlobRepo`; this diff is only a refactor that makes it easy to change the repo attributes a binary needs.

The rest of the diff is mostly type hints for the compiler: in several places it could no longer infer that `BlobRepo` is the type being built, so the annotations had to be spelled out.
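
Concretely, the pattern looks like the call in this very file: the repo future is built generically and the binary annotates the repo type it wants when awaiting it.

```rust
// open_repo is now generic over the repo object it builds, so the caller
// states which one it wants; rechunker keeps using BlobRepo.
let blobrepo = args::open_repo(fb, logger, &matches);
let blobrepo: BlobRepo = blobrepo.await?;
```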

## High level goal

This is part of the blobrepo refactoring effort.

I am also doing this in order to:
1. Make sure every place that builds `SkiplistIndex` uses `RepoFactory` for that.
2. Then add a `BlobstoreGetOps` trait for blobstores, and use the factory to feed it to the skiplist index so it can query the blobstore while skipping the cache (a rough sketch follows this list; see [this thread](https://www.internalfb.com/diff/D28681737?dst_version_fbid=283910610084973&transaction_fbid=106742464866346) (850a1a41b7)).
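
As promised above, a very rough sketch of what such a trait could look like (the trait is only planned; the method name and signature here are made up and the real API may differ):

```rust
use anyhow::Result;
use async_trait::async_trait;
use blobstore::{Blobstore, BlobstoreGetData};
use context::CoreContext;

/// Hypothetical shape of the planned trait: a get() variant that bypasses
/// caching blobstore layers, so the skiplist index can read straight from
/// the underlying storage.
#[async_trait]
pub trait BlobstoreGetOps: Blobstore {
    async fn get_skip_cache<'a>(
        &'a self,
        ctx: &'a CoreContext,
        key: &'a str,
    ) -> Result<Option<BlobstoreGetData>>;
}
```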

Reviewed By: StanislavGlebik

Differential Revision: D28877887

fbshipit-source-id: b5e0093449aac734591a19d915b6459b1779360a
2021-06-09 05:16:13 -07:00


/*
 * Copyright (c) Facebook, Inc. and its affiliates.
 *
 * This software may be used and distributed according to the terms of the
 * GNU General Public License version 2.
 */

#![deny(warnings)]

use anyhow::{format_err, Error};
use blobrepo::BlobRepo;
use blobstore::{Loadable, PutBehaviour};
use clap::Arg;
use cloned::cloned;
use context::CoreContext;
use fbinit::FacebookInit;
use futures::stream::{self, TryStreamExt};
use mercurial_types::{HgFileNodeId, HgNodeHash};
use std::str::FromStr;

use cmdlib::{args, helpers::block_execute};

const NAME: &str = "rechunker";
const DEFAULT_NUM_JOBS: usize = 10;

#[fbinit::main]
fn main(fb: FacebookInit) -> Result<(), Error> {
    let matches = args::MononokeAppBuilder::new(NAME)
        .with_advanced_args_hidden()
        .with_special_put_behaviour(PutBehaviour::Overwrite)
        .build()
        .about("Rechunk blobs using the filestore")
        .arg(
            Arg::with_name("filenodes")
                .value_name("FILENODES")
                .takes_value(true)
                .required(true)
                .min_values(1)
                .help("filenode IDs for blobs to be rechunked"),
        )
        .arg(
            Arg::with_name("jobs")
                .short("j")
                .long("jobs")
                .value_name("JOBS")
                .takes_value(true)
                .help("The number of filenodes to rechunk in parallel"),
        )
        .get_matches(fb)?;

    let logger = matches.logger();
    let ctx = CoreContext::new_with_logger(fb, logger.clone());

    // Number of filenodes to rechunk in parallel; defaults to DEFAULT_NUM_JOBS.
    let jobs: usize = matches
        .value_of("jobs")
        .map_or(Ok(DEFAULT_NUM_JOBS), |j| j.parse())
        .map_err(Error::from)?;

    // Parse each positional argument into a filenode ID; invalid hashes become
    // errors that the stream below surfaces.
    let filenode_ids: Vec<_> = matches
        .values_of("filenodes")
        .unwrap()
        .into_iter()
        .map(|f| {
            HgNodeHash::from_str(f)
                .map(HgFileNodeId::new)
                .map_err(|e| format_err!("Invalid Sha1: {}", e))
        })
        .collect();

    let blobrepo = args::open_repo(fb, logger, &matches);
    let rechunk = async move {
        // Explicit type hint: open_repo can now build any repo object, so spell
        // out that this binary wants a BlobRepo.
        let blobrepo: BlobRepo = blobrepo.await?;
        stream::iter(filenode_ids)
            .try_for_each_concurrent(jobs, |fid| {
                cloned!(blobrepo, ctx);
                async move {
                    // Load the filenode envelope to find its content id, then
                    // rechunk that content through the filestore.
                    let env = fid.load(&ctx, blobrepo.blobstore()).await?;
                    let content_id = env.content_id();
                    filestore::force_rechunk(
                        &blobrepo.get_blobstore(),
                        blobrepo.filestore_config().clone(),
                        &ctx,
                        content_id,
                    )
                    .await
                    .map(|_| ())
                }
            })
            .await
    };

    block_execute(
        rechunk,
        fb,
        "rechunker",
        logger,
        &matches,
        cmdlib::monitoring::AliveService,
    )
}