Summary: The upstream crate has landed my PR for zstd 1.4.9 support and made a release, so we can remove this patch now.
Reviewed By: ikostia
Differential Revision: D28221163
fbshipit-source-id: b95a6bee4f0c8d11f495dc17b2737c9ac9142b36
Summary: It was requesting a slice but always converted it to an iterator anyway. Receiving an iterator saves constructing a temporary Vec both here and later in the stack.
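As an illustration (with hypothetical function names, not the actual API touched by this diff), accepting `impl IntoIterator` instead of a slice lets callers feed values straight in without collecting them into a temporary `Vec` first:

```rust
/// Old shape: taking a slice forces callers to materialize a Vec first.
fn total_len_slice(items: &[String]) -> usize {
    items.iter().map(|s| s.len()).sum()
}

/// New shape: any iterator of Strings works, so callers can stream values
/// in directly with no intermediate allocation.
fn total_len_iter<I>(items: I) -> usize
where
    I: IntoIterator<Item = String>,
{
    items.into_iter().map(|s| s.len()).sum()
}

fn main() {
    let words = vec!["ab".to_string(), "c".to_string()];
    // Old API: a Vec (or slice) has to exist up front.
    assert_eq!(total_len_slice(&words), 3);
    // New API: pass a mapped iterator directly; no temporary Vec.
    assert_eq!(total_len_iter(words.into_iter().map(|w| w + "!")), 5);
    println!("ok");
}
```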
Reviewed By: krallin
Differential Revision: D28127582
fbshipit-source-id: 625c1f17f1ded973f8b2aa13566928af0df83aec
Summary:
We used to carry patches for Tokio 0.2 to add support for disabling Tokio coop
(which was necessary to make Mononoke work with it), but this was upstreamed
in Tokio 1.x (as a different implementation), so that's no longer needed. Nobody
else besides Mononoke was using this.
For Hyper we used to carry a patch with a bugfix. This was also fixed in Tokio
1.x-compatible versions of Hyper. There are still users of hyper-02 in fbcode.
However, the patched code path is only relevant for servers, and only when
accepting websocket connections; the remaining hyper-02 users are just using Hyper as an HTTP client.
Reviewed By: farnz
Differential Revision: D28091331
fbshipit-source-id: de13b2452b654be6f3fa829404385e80a85c4420
Summary:
This used to be used by Mononoke, but we're now on Tokio 1.x and on
corresponding versions of Gotham so it's not needed anymore.
Reviewed By: farnz
Differential Revision: D28091091
fbshipit-source-id: a58bcb4ba52f3f5d2eeb77b68ee4055d80fbfce2
Summary:
NOTE: there is one final pre-requisite here, which is that we should default all Mononoke binaries to `--use-mysql-client` because the other SQL client implementations will break once this lands. That said, this is probably the right time to start reviewing.
There's a lot going on here, but Tokio updates being what they are, it has to happen as just one diff (though I did try to minimize churn by modernizing a bunch of stuff in earlier diffs).
Here's a detailed list of what is going on:
- I had to add a number of `cargo_toml_dir` entries for binaries in `eden/mononoke/TARGETS`, because we have to use 2 versions of Bytes concurrently at this time, and the two cannot co-exist in the same Cargo workspace.
- Lots of little Tokio changes:
- Stream abstractions moving to `tokio-stream`
- `tokio::time::delay_for` became `tokio::time::sleep`
- `tokio::sync::watch::Sender::broadcast` became `tokio::sync::watch::Sender::send`
- `tokio::sync::Semaphore::acquire` returns a `Result` now.
- `tokio::runtime::Runtime::block_on` no longer takes a `&mut self` (just a `&self`).
- `Notify` grew a few more methods with different semantics. We only use this in tests, I used what seemed logical given the use case.
- Runtime builders have changed quite a bit:
- My `no_coop` patch is gone in Tokio 1.x, but it has a new `tokio::task::unconstrained` wrapper (also from me), which I included on `MononokeApi::new`.
- Tokio now detects your logical CPUs, not physical CPUs, so we no longer need to use `num_cpus::get()` to figure it out.
- Tokio 1.x now uses Bytes 1.x:
- At the edges (i.e. streams returned to Hyper or emitted by RepoClient), we need to return Bytes 1.x. However, internally we still use Bytes 0.5 in some places (notably: Filestore).
- In LFS, this means we make a copy. We used to do that a while ago anyway (in the other direction) and it was never a meaningful CPU cost, so I think this is fine.
- In Mononoke Server it doesn't really matter, because that still generates ... Bytes 0.1 anyway, so there was a copy before (from 0.1 to 0.5), and now it's from 0.1 to 1.x.
- In the very few places where we read stuff using Tokio from the outside world (historical import tools for LFS), we copy.
- tokio-tls changed a lot; they removed all the convenience methods around connecting. This resulted in updates to:
- How we listen in Mononoke Server & LFS
- How we connect in hgcli.
- Note: all this stuff has test coverage.
- The child process API changed a little bit. We used to have a ChildWrapper around the hg sync job to make a Tokio 0.2.x child look more like a Tokio 1.x Child, so now we can just remove this.
- Hyper changed their Websocket upgrade mechanism (you now need the whole `Request` to upgrade, whereas before you needed just the `Body`), so I changed up our code a little bit in Mononoke's HTTP acceptor to defer splitting up the `Request` into parts until after we know whether we plan to upgrade it.
- I removed the MySQL tests that didn't use mysql client, because we're leaving that behind and don't intend to support it on Tokio 1.x.
Reviewed By: mitrandir77
Differential Revision: D26669620
fbshipit-source-id: acb6aff92e7f70a7a43f32cf758f252f330e60c9
Summary:
Update the zstd crates.
This also patches the async-compression crate to point at my fork until upstream PR https://github.com/Nemo157/async-compression/pull/117 to update to zstd 1.4.9 can land.
Reviewed By: jsgf, dtolnay
Differential Revision: D27942174
fbshipit-source-id: 26e604d71417e6910a02ec27142c3a16ea516c2b
Summary:
I'd like to be able to track the proportion of traffic coming to bookmarks from
warm bookmarks cache vs. from elsewhere. We don't have a great abstraction to
pass this via the CoreContext at this time, but the SessionClass seems like a
pretty good fit.
Indeed, it's always available in the CoreContext, and it can be freely
mutated without having to rebuild the whole session. Besides, it aligns pretty
well with the existing use cases we have for SessionClass, which is to give you
a different level of service depending on who you are.
Reviewed By: StanislavGlebik
Differential Revision: D27938413
fbshipit-source-id: a9dcc5a10c8d1459ee9586324a727c668e2e4e40
Summary:
Instead of using the get_public method, which queries bookmarks, let's call
get_public_raw instead, which just does a phases fetch from a local db.
See the previous diff for more motivation.
Reviewed By: krallin
Differential Revision: D27821547
fbshipit-source-id: a71c8c9ad283259e9be98e63c9c72428e35c6142
Summary:
We didn't log it to the perf counters log, and that makes it hard to aggregate,
show distributions, etc.
Let's start doing that.
Reviewed By: krallin
Differential Revision: D27856968
fbshipit-source-id: 82fbba70154ee011073f3122256bd296bbb938ae
Summary:
Prevent bonsai changeset divergence between the prod and backup repos by copying
bonsais from the prod repo directly during the hg sync job push.
See more details about motivation in D27824210
Reviewed By: ikostia
Differential Revision: D27852341
fbshipit-source-id: 93e0b1891008858eb99d5e692e4dd60c2e23f446
Summary:
This will allow us to have greater visibility into what's going on when there are production issues.
Note: for getpack, the params data model is `[MPath, [Node]]`. In practice there seems to always just be 1 node per mpath. However, to preserve the mapping, I log every mpath in a separate sample.
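A minimal sketch of the per-mpath flattening described in the note above, using plain strings as stand-ins for the real MPath/Node types (the function name here is hypothetical):

```rust
// Stand-ins for the real Mononoke types.
type MPath = String;
type Node = String;

/// Flatten getpack params (`[MPath, [Node]]`) into one log sample per
/// mpath, preserving the mpath -> nodes mapping.
fn getpack_samples(params: &[(MPath, Vec<Node>)]) -> Vec<String> {
    params
        .iter()
        .map(|(path, nodes)| format!("{}: {}", path, nodes.join(",")))
        .collect()
}

fn main() {
    let params = vec![
        ("dir/a.txt".to_string(), vec!["n1".to_string()]),
        ("dir/b.txt".to_string(), vec!["n2".to_string(), "n3".to_string()]),
    ];
    let samples = getpack_samples(&params);
    assert_eq!(samples.len(), 2); // one sample per mpath
    assert_eq!(samples[1], "dir/b.txt: n2,n3");
    println!("ok");
}
```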
Reviewed By: ahornby
Differential Revision: D26690685
fbshipit-source-id: 36616256747b61390b0435467892daeff2b4dd07
Summary:
Previously this query failed because it tried to convert bytes to int, and our
mysql wrapper doesn't support that.
Let's cast it instead.
Reviewed By: krallin
Differential Revision: D27736863
fbshipit-source-id: 66a7cb33c0f623614f292511e18eb62e31ea582f
Summary:
Previously we ran into an issue where a client sent us a too-large `known`
request, and we passed it all the way through to MySQL.
The MySQL slow log shows that we have quite a few slow queries
(https://fburl.com/scuba/mysql_slow_log/w0ugmc1i), so it might be that these
requests are still coming, but because of the problems in the logging (see
previous diff), we can't know for sure.
In any case, adding a knob like this can be useful.
Reviewed By: farnz
Differential Revision: D27650806
fbshipit-source-id: c4c82b7b5781a85c349abb4e5fa534b5e8f125a0
Summary:
The code is almost the same, so it would be good to deduplicate it. The
duplication led to annoying differences in logging - i.e. we logged how
many nodes were sent to us in the `known` call but not in the `knownnodes` call.
Reviewed By: farnz
Differential Revision: D27650583
fbshipit-source-id: 5e2e3be3b9fd66631364d23f34d241c27e370340
Summary:
Now that the `hg_external_sync` jobs are gone we can delete the code
in Mononoke that behaves differently when a sync job connects.
Reviewed By: StanislavGlebik
Differential Revision: D27500506
fbshipit-source-id: 443fb54577833dbf44ece6ae90a5f25ffed38cd5
Summary:
Currently, if we fail to fetch the repo lock status, we only see a "Repo is marked as read-only: Failed to fetch repo lock status" error, which is not very informative. Example of the error in production: P385612782.
Let's log the error.
Reviewed By: krallin
Differential Revision: D27621996
fbshipit-source-id: 85d9f0fe39397759da1b51e197f9188761678715
Summary:
Add support for returning unhydrated draft commits if requested by the client via the config option 'wantsunhydratedcommits'.
This is needed to support slowly enabling it for some clients like OnDemand.
Reviewed By: StanislavGlebik
Differential Revision: D27621442
fbshipit-source-id: 672129c8bfcbcdb4cee3ba1b092dac16c0b1877d
Summary:
We already log file count, but file sizes is another useful piece of
information.
I evaluated two options - either do as I did in this diff, or change the ScribeToScuba
logging Python script to query scs to get file sizes. I opted for option #1
mainly because scs doesn't have a method to query file sizes for many files at once, and
querying one by one might be too expensive. We can add a method like that, but
that's a bit of a bigger change than I'd like.
Reviewed By: andll
Differential Revision: D27620507
fbshipit-source-id: 2618e60845bc293535b190d4da85df5667a7ab60
Summary:
This import is only used for the `ReadOnlyStorage` type, which is canonically
defined in `blobstore_factory`.
Reviewed By: krallin
Differential Revision: D27363466
fbshipit-source-id: 7cb1effcee6d39de92b471fecfde56724d24a6a4
Summary:
This is ... a stopgap :( There is probably some slow polling happening in
unbundle_future, and this causes us to fail to use our connection in time in
check_lock_repo...
Reviewed By: ahornby, StanislavGlebik
Differential Revision: D27620728
fbshipit-source-id: b747011405328b60419a99f0e5dbbaf64d53196a
Summary:
I'd like to just get rid of that library since it's one more place where we
specify the Tokio version and that's a little annoying with the Tokio 1.x
update. Besides, this library is largely obsoleted by `#[fbinit::test]` and
`#[tokio::test]`.
Reviewed By: farnz
Differential Revision: D27619147
fbshipit-source-id: 4a316b81d882ea83c43bed05e873cabd2100b758
Summary:
Remove use of dangerous_override from the repo client tests.
Previously this was used to override filestore config, so instead use the existing
config override mechanism to set the filestore params it is generated from.
Reviewed By: ahornby
Differential Revision: D27169424
fbshipit-source-id: 7d17437f0e218d1cf19cc64d48e1efdd7012e927
Summary: Use the test factory for existing repo_client and repo_import tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169425
fbshipit-source-id: 2d0c34f129447232cec8faee42056d83613de179
Summary:
While Mononoke should support pushing large commits, it's not clear if we need
(or want) to support pushing a lot of commits. At the very least, pushing lots of commits at the same
time can create problems with derivation, but there could be more places that might break.
Besides, there's usually an easy way to work around this, i.e. squash commits or push in small batches.
If we ever need to import a lot of commits, we use tools like blobimport/repo_import,
so blocking pushes of lots of commits should be fine.
Reviewed By: farnz
Differential Revision: D27237798
fbshipit-source-id: 435e61110f145b06a177d6e492905fccd38d83da
Summary:
In preparation for making `BlobRepo` buildable by facet factories, restore
`BlobRepo` members that had been converted to `TypeMap` attributes back into
real members.
This re-introduces some dependencies that were previously removed, but this
will be cleaned up when crates no longer have to depend on BlobRepo directly,
just the traits they are interested in.
Reviewed By: ahornby
Differential Revision: D27169422
fbshipit-source-id: 14354e6d984dfdd2be5c169f527e5f998f00db1e
Summary:
Currently we can only limit which users are allowed to move a bookmark by a
regex. We also want to allow specifying a hipster group.
Reviewed By: krallin
Differential Revision: D27156690
fbshipit-source-id: 99a5678a82f4c34ed2e57625361ba7cdb08ed839
Summary:
The reclone option code has landed for fbclone, so now we can direct
users there first, so they don't have to go through all these steps.
(This won't land until I check that the option has actually made it to production.)
I also updated the wiki this points to, telling users to use `eden list` to detect
EdenFS checkouts instead of looking for .eden, as these steps also apply when
an EdenFS checkout is borked and needs a reclone, and `eden list` works more
reliably in that situation.
Reviewed By: StanislavGlebik
Differential Revision: D26435380
fbshipit-source-id: 9153e730e1be949d130af85d604623d2bfbd3990
Summary:
This command can be used to update already existing streaming changelog.
It takes a newly cloned changelog and updates the new streaming changelog
chunks in the database.
The biggest difference from the "create" command is that we first need to figure
out what's already uploaded to the streaming changelog. For that, two new methods
were added to SqlStreamingChunksFetcher.
Reviewed By: farnz
Differential Revision: D27045386
fbshipit-source-id: 36fc9387f621e1ec8ad3eb4fbb767ab431a9d0bb
Summary:
Our current streaming changelog updater logic is written in Python, and it has a
few downsides:
1) It writes directly to manifold, which means it bypasses all the multiplexed
blobstore logic...
2) ...more importantly, we can't write to non-manifold blobstores at all.
3) There are no tests for the streaming changelogs
This diff moves the logic for the initial creation of a streaming changelog entry to
Rust, which should fix the issues mentioned above. I want to highlight that
this implementation only works for the initial creation case, i.e. when there are no
entries in the database. Next diffs will add incremental update functionality.
Reviewed By: krallin
Differential Revision: D27008485
fbshipit-source-id: d9583bb1b98e5c4abea11c0a43c42bc673f8ed48
Summary:
Previously it was possible to use streaming clone only with an xdb table. This
diff changes that.
Reviewed By: farnz
Differential Revision: D27008486
fbshipit-source-id: b8d51832dd62b4343b36c3a7a96b83a327056025
Summary:
This tunable is not used anymore, we use
getbundle_high_low_gen_num_difference_threshold instead. Let's remove it.
Differential Revision: D26984966
fbshipit-source-id: 4e8ded5982f7e90c90476ff758b766df55644273
Summary:
Remove case conflict checking on upload. Disallowing case conflicts will
always be done during bookmark movement by checking the skeleton manifests.
A side-effect of this change is that configuring a repository with case
conflict checks, but not enabling skeleton manifests, will mean that commits
can't be landed in that repository, as there are no skeleton manifests to
check.
Reviewed By: ahornby
Differential Revision: D26781269
fbshipit-source-id: b4030bea5d92fa87f182a70d31f7e9d9e74989a9
Summary:
It was added for the initial rollout only so that we can fallback quickly if
needed (see D26221250 (7115cf31d2)). We can remove it now since the feature has been enabled
for a few weeks already with no big issues.
Reviewed By: krallin
Differential Revision: D26909490
fbshipit-source-id: 849fac4838c272e92a04a971869842156e88a1cf
Summary:
I've been investigating getbundle on mononoke darkstorm, and it was hard to
understand what was going on. Adding more logs should hopefully make that easier.
Also fix how we log `nodes_to_send` - previously `partial_result.partial`
wasn't counted. Now it should be fixed.
Reviewed By: krallin
Differential Revision: D26909296
fbshipit-source-id: 0af6f0b8d6af0350b5c87a20146ef8c7c64b3dc8
Summary:
We ran into an issue while uploading too many blobs at once to darkstorm repo.
We were able to work around this issue by spawning fewer blobstore writes at
once.
It's still a bit unclear why this issue happens exactly, but I'd like to make
the number of concurrent uploaded blobs configurable so that we can tweak it if
necessary.
Differential Revision: D26883061
fbshipit-source-id: 57c0d6fc51548b3c7404ebd55b5e07deba9e0601
Summary:
AsyncVfs provides async vfs interface.
It will be used in the native checkout, replacing the current approach that spawns blocking tokio tasks for VFS actions.
Reviewed By: quark-zju
Differential Revision: D26801250
fbshipit-source-id: bb26c4fc8acac82f4b55bb3f2f3964a6d0b64014
Summary:
Async the query macros. This change also migrates most callsites, with a few more complicated ones handled as separate diffs; those temporarily use sql01::queries in this diff.
With this change the query string is computed lazily (async fns/blocks being lazy), so we're not holding the extra memory of the query string as well as the query params for quite as long. This matters most for queries doing writes, where the query string can be large when large values are passed (e.g. the Mononoke sqlblob blobstore).
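The laziness being relied on here can be shown with plain std (this is a minimal sketch of the concept, not the actual query macros): the body of an async block, where the query string would be formatted, does not run until the future is polled, so an unpolled future holds no query string.

```rust
use std::cell::Cell;

fn main() {
    let query_built = Cell::new(false);

    // The async block stands in for a query macro body that formats the
    // query string; nothing inside runs until the future is polled.
    let fut = async {
        query_built.set(true); // the big query string would be built here
    };

    // The future exists, but the "query string" has not been built yet.
    assert!(!query_built.get());

    // Dropped without ever being polled: the work never happened at all.
    drop(fut);
    assert!(!query_built.get());
    println!("ok");
}
```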
Reviewed By: krallin
Differential Revision: D26586715
fbshipit-source-id: e299932457682b0678734f44bb4bfb0b966edeec
Summary:
This diff adds a layer of indirection between fbinit and tokio, allowing
us to use fbinit with tokio 0.2 or tokio 1.x.
The way this works is that you specify the Tokio you want by adding it as an
extra dependency alongside `fbinit` in your `TARGETS` (before this, you had to
always include `tokio-02`).
If you use `fbinit-tokio`, then `#[fbinit::main]` and `#[fbinit::test]` get you
a Tokio 1.x runtime, whereas if you use `fbinit-tokio-02`, you get a Tokio 0.2
runtime.
This diff is big, because it needs to change all the TARGETS that reference
this in the same diff that introduces the mechanism. I also didn't produce it
by hand.
Instead, I scripted the transformation using this script: P242773846
I then ran it using:
```
{ hg grep -l "fbinit::test"; hg grep -l "fbinit::main" } | \
sort | \
uniq | \
xargs ~/codemod/codemod.py \
&& yes | arc lint \
&& common/rust/cargo_from_buck/bin/autocargo
```
Finally, I grabbed the files returned by `hg grep`, then fed them to:
```
arc lint-rust --paths-from ~/files2 --apply-patches --take RUSTFIXDEPS
```
(I had to modify the file list a bit: notably I removed stuff from scripts/ because
some of that causes Buck to crash when running lint-rust, and I also had to add
fbcode/ as a prefix everywhere).
Reviewed By: mitrandir77
Differential Revision: D26754757
fbshipit-source-id: 326b1c4efc9a57ea89db9b1d390677bcd2ab985e
Summary:
For dependencies, V2 puts "version" as the first attribute of a dependency, or just after "package" if present.
The workspace section comes after the patch section in V2, and since V2 autoformats the patch section, the manual entries in third-party/rust/Cargo.toml had to be formatted manually, since V1 takes them as-is.
The thrift files now have "generated by autocargo", and not just "generated", on their first line. This diff also removes some previously generated thrift files that had been incorrectly left behind when the corresponding Cargo.toml was removed.
Reviewed By: ikostia
Differential Revision: D26618363
fbshipit-source-id: c45d296074f5b0319bba975f3cb0240119729c92
Summary:
Like it says in the title, this updates futures_ext to use tokio_shim, which
makes it compatible with Tokio 0.2 and 1.0.
There is one small difference in behavior here: in Tokio 1.0, sleep isn't Unpin
anymore, so callers will need to call `boxed()` or use Tokio's `pin!` macro if
they need Unpin.
I do want to get as close to what upstream is doing in Tokio 1.0, so I think
it's good to keep that behavior.
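The Unpin point can be illustrated with std alone (no Tokio needed; the type and function names below are hypothetical stand-ins): a `!Unpin` value can't be used where `Unpin` is required, but boxing it with `Box::pin` (which is what `boxed()` does for futures) yields a `Pin<Box<T>>` that is always `Unpin`:

```rust
use std::marker::PhantomPinned;
use std::pin::Pin;

/// Stand-in for a !Unpin future such as Tokio 1.0's sleep.
struct NotUnpin {
    _marker: PhantomPinned,
}

/// Stand-in for an API (e.g. a combinator) that requires Unpin.
fn requires_unpin<T: Unpin>(_value: &T) -> &'static str {
    "accepted"
}

fn main() {
    let fut = NotUnpin { _marker: PhantomPinned };
    // `requires_unpin(&fut)` would fail to compile: NotUnpin is !Unpin.

    // Boxing pins it on the heap; Pin<Box<T>> is Unpin regardless of T.
    let boxed: Pin<Box<NotUnpin>> = Box::pin(fut);
    assert_eq!(requires_unpin(&boxed), "accepted");
    println!("ok");
}
```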
Reviewed By: farnz
Differential Revision: D26610036
fbshipit-source-id: ff72275da55558fdf8fe3ad009d25cf84e108a5a