Summary:
This diff moves initFacebook calls that used to happen just before FFI calls to instead happen at the beginning of main.
The basic assumption of initFacebook is that it happens at the beginning of main before there are additional threads. It must be allowed to modify process-global state like env vars or gflags without the possibility of a data race from other code concurrently reading those things. As such, the previous approach of calling initFacebook through `*fbinit::FACEBOOK` near FFI calls was prone to race conditions.
The new approach is based on attribute macros added in D17245802.
---
The primary remaining situations that still require `*fbinit::FACEBOOK` are when we don't directly control the function arguments surrounding the call to C++, such as in lazy_static:
```
lazy_static! {
    static ref S: Ty = {
        let _ = *fbinit::FACEBOOK;
        /* call C++ */
    };
}
```
and quickcheck:
```
quickcheck! {
    fn f(/* args that impl Arbitrary */) {
        let _ = *fbinit::FACEBOOK;
        /* call C++ */
    }
}
```
I will revisit these in a separate diff. They are a small fraction of total uses of fbinit.
Reviewed By: Imxset21
Differential Revision: D17328504
fbshipit-source-id: f80edb763e7f42b3216552dd32f1ea0e6cc8fd12
Summary: I think these are left over from pre-2018 code where they may have been necessary. In 2018 edition, import paths in `use` always begin with a crate name or `crate`/`super`/`self`, so `use $ident;` always refers to a crate. Since extern crates are always in scope in every module, `use $ident` does nothing.
Reviewed By: Imxset21
Differential Revision: D17290473
fbshipit-source-id: 23d86e5d0dcd5c2d4e53c7a36b4267101dd4b45c
Summary:
Commit sync will operate based on the following idea:
- there's one "large" and potentially many "small" repos
- there are two possible sync directions: large-to-small and small-to-large
- when syncing a small repo into a large repo, it is allowed to change paths
of each individual file, but not to drop files
- the large repo prepends a predefined prefix to every bookmark name from the small repo, except for the bookmarks listed in `common_pushrebase_bookmarks`, which can be advanced by any small repo
Reviewed By: krallin
Differential Revision: D17258451
fbshipit-source-id: 6cdaccd0374250f6bbdcbc9a280da89ccd7dff97
Summary:
This is the mechanical part of the rename; it does not change any commit messages in
tests, and does not change the scuba table name/config setting. Those are more
complex.
Reviewed By: krallin
Differential Revision: D16890120
fbshipit-source-id: 966c0066f5e959631995a1abcc7123549f7495b6
Summary: This updates our repo config to allow passing through Filestore params. This will be useful to conditionally enable Filestore chunking for new repos.
Reviewed By: HarveyHunt
Differential Revision: D16580700
fbshipit-source-id: b624bb524f0a939f9ce11f9c2983d49f91df855a
Summary:
NOTE: This isn't 100% complete yet. I have a little more work to do around the aliasverify binary, but I think it'll make sense to rework this a little bit with the Filestore anyway.
This patch incorporates the Filestore throughout Mononoke. At this time, what this means is:
- Blobrepo methods return streams of `FileBytes`.
- Various callsites that need access to `FileBytes` call `concat2` on those streams.
This also eliminates the Sha256 aliasing code that we had written for LFS and replaces it with a Filestore-based implementation.
However, note that this does _not_ change how files submitted through `unbundle` are written to blobstores right now. Indeed, those contents are passed into the Filestore through `store_bytes`, which doesn't do chunking. This is intentional since it lets us use LFS uploads as a testbed for chunked storage before turning it on for everything else (also, chunking those requires further refactoring of content uploads, since right now they don't expect the `ContentId` to come back through a Future).
The goal of doing it this way is to make the transition simpler. In other words, this diff doesn't change anything functionally: it just updates the underlying API we use to access files. This is also important for a smooth release: if we had new servers that started chunking things while old servers tried to read them, things would break. Doing it this way ensures that doesn't happen.
This means that streaming is there, but it's not being leveraged just yet. I'm planning to do so in a separate diff, starting with the LFS read and write endpoints in
Reviewed By: farnz
Differential Revision: D16440671
fbshipit-source-id: 02ae23783f38da895ee3052252fa6023b4a51979
Summary:
This adds support for chunking in the Filestore. To do so, this reworks writes to be a 2 stage process:
- First, you prepare your write. This puts everything into place, but doesn't upload the aliases, backmappings, and the logical FileContents blob representing your file. If you're uploading a file that fits in a single chunk, preparing your writes effectively makes no blobstore changes. If you're uploading chunks, then you upload the individual chunks (those are uploaded essentially as if they were small files).
- Then, you commit your write. At that point, the prepare step has given you the `FileContents` representing your write, plus a set of aliases. To commit the write, you write the aliases, then the file contents, and then a backmapping.
Note that we never create hierarchies when writing to the Filestore (i.e. chunked files always reference concrete chunks that contain file data), but that's not a guarantee we can rely on when reading. Indeed, since chunks and files are identical as far as blobstore storage is concerned, we have to handle the case where a chunked file references chunks that are themselves chunked (as I mentioned, we won't write it that way, but it could happen if we uploaded a file, then later on reduced the chunk size and wrote an identical file).
Note that this diff also updates the FileContents enum and adds unimplemented methods there. These are all going away later in this stack (in the diff where I incorporate the Filestore into the rest of Mononoke).
Reviewed By: StanislavGlebik
Differential Revision: D16440678
fbshipit-source-id: 07187aa154f4fdcbd3b3faab7c0cbcb1f8a91217
Summary:
It's used in only a few places, and most likely that's by accident. We
pass the logger via CoreContext now.
Reviewed By: krallin
Differential Revision: D16336953
fbshipit-source-id: 36ea4678b3c3df448591c606628b93ff834fae45
Summary:
This diff sets two Rust lints to warn in fbcode:
```
[rust]
warn_lints = bare_trait_objects, ellipsis_inclusive_range_patterns
```
and fixes occurrences of those warnings within common/rust, hg, and mononoke.
Both of these lints are set to warn by default starting with rustc 1.37. Enabling them early avoids writing even more new code that needs to be fixed when we pull in 1.37 in six weeks.
Upstream tracking issue: https://github.com/rust-lang/rust/issues/54910
Reviewed By: Imxset21
Differential Revision: D16200291
fbshipit-source-id: aca11a7a944e9fa95f94e226b52f6f053b97ec74
Summary:
This adds support for disabling arbitrary hooks by name using a `--disable-hook` flag. This serves as a replacement for our current approach, which is to muck with configuration when we need to disable a hook on an ad-hoc basis, with mixed results.
There are a couple use cases for which this is useful:
- Running the Hook Tailer.
- Running Mononoke Server locally.
In both cases, you probably don't want the verify integrity and check_unittests hooks enabled: the former might not be installed on your server (and it would do the wrong thing for hook tailing anyway), and the latter requires a token file you're not likely to have.
Reviewed By: ahornby
Differential Revision: D16201500
fbshipit-source-id: 76404eab72b9335d82235c63003c2425f0ecc480
Summary:
This adds the ability to provide an infinitepush namespace configuration without actually allowing infinite pushes server side. This is useful while Mercurial is the write master for Infinite Push commits, for two reasons:
- It lets us enable the infinitepush namespace, which will allow the sync to proceed between Mercurial and Mononoke, and also prevents users from making regular pushes into the infinitepush namespace.
- It lets us prevent users from sending commit cloud backups to Mononoke (we had an instance of this reported in the Source Control @ FB group).
Note that since we are routing backfills through the shadow tier, I've left infinitepush enabled there.
Reviewed By: StanislavGlebik
Differential Revision: D16071684
fbshipit-source-id: 21e26f892214e40d94358074a9166a8541b43e88
Summary:
Added an option to control per repository whether censoring should be
enabled or disabled. The option is added in `server.toml` as `censoring` and is
set to true or false. If `censoring` is not specified, it defaults to true
(censoring is enabled).
With `censoring` disabled, the check for whether a key is blacklisted is
skipped, therefore all files are fetchable.
Reviewed By: ikostia
Differential Revision: D16029509
fbshipit-source-id: e9822c917fbcec3b3683d0e3619d0ef340a44926
Summary:
- more tracing for potentially large pieces of work
- removed some unnecessary tracing
Reviewed By: StanislavGlebik
Differential Revision: D15851576
fbshipit-source-id: 6686c00da56176cad43f72d1671e08eb8141110f
Summary: This is the first step towards per-repo control of whether pushes are allowed.
Reviewed By: StanislavGlebik
Differential Revision: D15519959
fbshipit-source-id: a0bb96bd995af7df0cef225c73d559f309cfe592
Summary:
This adds a sanity check that limits the count of matches in `list_all_bookmarks_with_prefix`.
If we find more matches than the limit, then an error will be returned (right now, we don't have support for e.g. offsets in this functionality, so the only alternative approach is for the caller to retry with a more specific pattern).
The underlying goal is to ensure that we don't trivially expose Mononoke to accidental denial of service when a listing uses `*` and we end up querying literally all bookmarks.
I picked a fairly conservative limit here (500,000), which is more than 5 times the number of bookmarks we currently have (we can load what we have right now successfully... but it's pretty slow).
Note that listing pull default bookmarks is not affected by this limit: this limit is only used when our query includes scratch bookmarks.
Reviewed By: StanislavGlebik
Differential Revision: D15413620
fbshipit-source-id: 1030204010d78a53372049ff282470cdc8187820
Summary:
This updates our receive path for B2xInfinitepush to create new scratch bookmarks.
Those scratch bookmarks will:
- Be non-publishing.
- Be non-pull-default.
- Not be replicated to Mercurial (there is no entry in the update log).
I added a sanity check on infinite pushes to validate that bookmarks fall within a given namespace (which is represented as a Regexp in configuration). We'll want to determine whether this is a good mechanism and what the regexp for this should be prior to landing (I'm also considering adding a soft-block mode that would just ignore the push instead of blocking it).
This ensures that someone cannot accidentally perform an infinitepush onto master by tweaking their client-side configuration.
---
Note that, as of this diff, we do not support the B2xInfinitepushBookmarks part (i.e. backup bookmarks). We might do that separately later, but if we do, it won't be through scratch bookmarks (we have too many backup bookmarks for this to work).
Reviewed By: StanislavGlebik
Differential Revision: D15364677
fbshipit-source-id: 23e67d4c3138716c791bb8050459698f8b721277
Summary:
The idea was that the short description is used for mechanical summaries of the hook failures, and the long description is used for human-readable "how to handle this" guidance.
Instead, we had a mixture of styles, and only ever returned the short description. Change this to only ever return the long description, and fix hooks so that the long description is meaningful.
Reviewed By: StanislavGlebik
Differential Revision: D15537580
fbshipit-source-id: 6289c1c9786862db8190b4464a3133c0620eb09c
Summary:
Previously, the file hook would look up a file by name from the root of the repo on
every hook run. So if we add or modify a lot of files in the same directory,
the manifest for that directory will be parsed over and over again.
There is no need to do this, since we can just use the file node id of the file.
This diff does exactly that: now HookFile will use FileNodeId instead of
(HgChangesetId + MPath) to look up file content.
Reviewed By: krallin
Differential Revision: D15349196
fbshipit-source-id: 503109fc87ccf2659d481eeca165044baa440463
Summary:
As part of adding support for infinitepush in Mononoke, we'll include additional server-side metadata on Bookmarks (specifically, whether they are publishing and pull-default).
However, we currently use the name `Bookmark` to refer to just a bookmark name. This patch updates all references to `Bookmark` to `BookmarkName` in order to free up `Bookmark`.
Reviewed By: StanislavGlebik
Differential Revision: D15364674
fbshipit-source-id: 126142e24e4361c19d1a6e20daa28bc793fb8686
Summary: I'm going to be doing some work on this file, but it's not up-to-date with rustfmt. To minimize merge conflicts and simplify diff reviews, I ran that earlier.
Reviewed By: StanislavGlebik
Differential Revision: D15364675
fbshipit-source-id: 66e3d2287ffcaf93bc6ad17f0217517f0ddb9b2f
Summary:
There was a request about importing a GitHub repo into fbsource. While pushing
it to Mononoke with pushrebase disabled, the sync job broke because it can only
handle pushrebase pushes.
Before this diff, pushrebase had a repo-level config for whether dates need
to be rewritten. We definitely want "master" to have date rewriting turned on,
but not the imported commits. This diff adds logic to turn off date rewriting
for particular bookmarks by using the `rewrite_dates` config, to address the
repo import requirement.
Reviewed By: StanislavGlebik
Differential Revision: D15291030
fbshipit-source-id: 8dcf8359d7de9ac33f0af6f9ab3bcbac424323e4
Summary:
This migrates the internal structures representing the repo and storage config,
while retaining the existing config file format.
The `RepoType` type has been replaced by `BlobConfig`, an enum containing all
the config information for all the supported blobstores. In addition there's
the `StorageConfig` type which includes `BlobConfig`, and also
`MetadataDBConfig` for the local or remote SQL database for metadata.
Reviewed By: StanislavGlebik
Differential Revision: D15065421
fbshipit-source-id: 47636074fceb6a7e35524f667376a5bb05bd8612
Summary:
At the moment, the verify integrity script blocks master commits that do not pass
the check. Non-master commits are not blocked even if they fail the check
(for example, if they don't have a valid reviewer), but the verify_integrity hook
still logs them to scuba.
Mononoke's verify_integrity was enabled only on master, meaning the hook
wouldn't run on non-master commits at all, and we wouldn't log non-master commits to scuba.
This diff fixes that.
Reviewed By: farnz
Differential Revision: D15146725
fbshipit-source-id: ab432fb2eccae0fbcc10755f5c8447964c490285
Summary: Now it is possible to configure and enable/disable bookmark cache from configs
Reviewed By: StanislavGlebik
Differential Revision: D14952840
fbshipit-source-id: 3080f7ca4639da00d413a949547705ad480772f7
Summary:
We need this hook to replicate functionality of [disable-nonff.py](diffusion/OPSFILES/browse/master/chef/cookbooks/other/fb_mercurial_server/files/default/scripts/hooks/bin/disable-nonff.py;84ce14ceebcb6ef47b0c32feb0db742439e0ffb5$46-49)
It is now possible to restrict users from moving a specified bookmark.
Reviewed By: StanislavGlebik
Differential Revision: D14871808
fbshipit-source-id: e2f7e8b97f789cfaff2d76abc405e8bd1c6abdd8
Summary:
Previously, the hook_tailer would only print hook rejections if
the --debug flag was passed. Modify the tailer so that it always prints
out rejections.
Reviewed By: farnz
Differential Revision: D14799244
fbshipit-source-id: 3b2c5b00b6cfa54f6fa93c58406b1720876fd9d4
Summary: Update Rust toolchain to 1.33.0 with fixes to make our code compatible with 1.33.0.
Reviewed By: Imxset21, kulshrax
Differential Revision: D14608312
fbshipit-source-id: 2d9cf7d01692abaed32f9adffa0e5eb51cfacb4f
Summary:
This is a hook in Mercurial; in Mononoke it will be part of the implementation. By default, all non-fast-forward pushes are blocked, except when using the NON_FAST_FORWARD pushvar (--non-forward-move is also needed to circumvent client-side restrictions). Additionally, certain bookmarks (e.g. master) shouldn't be movable in a non-fast-forward manner at all. This can be done by setting the block_non_fast_forward field in the config.
Pushrebase can only move the bookmark that is actually being pushrebased, so we do not need to check whether it is a fast-forward move (it always is).
Reviewed By: StanislavGlebik
Differential Revision: D14405696
fbshipit-source-id: 782b49c26a753918418e02c06dcfab76e3394dc1
Summary: There is no need to have an Option<Vec<XYZ>> as None can simply be represented by an empty vector. This makes these fields easier to use.
Reviewed By: StanislavGlebik
Differential Revision: D14405687
fbshipit-source-id: e4c5ba12a1e3c6a18130026af6814d54952da4d2
Summary: See D14279065, this diff is simply to clean up the deprecated code
Reviewed By: StanislavGlebik
Differential Revision: D14279210
fbshipit-source-id: 10801fb04ad533a80bb7a2f9dcdf3ee5906aa68d
Summary: This endpoint is also used in Mercurial now.
Reviewed By: StanislavGlebik
Differential Revision: D14303557
fbshipit-source-id: fe38b62d010de2846dcf800f93ba050d9c396873
Summary: The phabricator message parsing capability will also be used by the check_unittests Rust hook, so it had to be extracted and adjusted to Rust standards.
Reviewed By: StanislavGlebik
Differential Revision: D14301561
fbshipit-source-id: 47b59527dfadd7b761f750825da52ffce14fdf21
Summary:
In this first diff, check_unittests doesn't do much except call the interngraph endpoint with the proper app id and auth token.
Mocking and proper logic of this hook will be added in following diffs.
Reviewed By: StanislavGlebik
Differential Revision: D14089265
fbshipit-source-id: 17c4330691ba3ffde33668e59f8b1cad4a9242a1
Summary: New commits should be logged to scribe, these will be used to trigger the update for the hg clone streamfile.
Reviewed By: lukaspiatkowski
Differential Revision: D14022599
fbshipit-source-id: a8a68f12a8dc1e65663d1ccf1a5eafa54ca2daf0
Summary: This is to tell Mononoke whether to preserve bundle2 raw content or not.
Reviewed By: StanislavGlebik
Differential Revision: D14038588
fbshipit-source-id: f1781ed8b4aca7e37925e7f40aeb506af60d071a
Summary: This should be a big improvement for hooks that only care about file type or size and not the content.
Reviewed By: StanislavGlebik
Differential Revision: D14023307
fbshipit-source-id: 28856512b3092af9517079a9f379d23a9c4f3864