Summary:
Update from 1.44.0. Updates have been blocked because of a bad interaction
between platform007, mode/opt-clang-thinlto, and the gold linker. Now that the
default linker is lld, this may no longer be an issue.
Various tweaks due to updates:
- `atomic_min_max`, `const_transmute`, `inner_deref`, `ptr_offset_from` and `str_strip` are now stable
- `asm` renamed to `llvm_asm`
- `intra_doc_link_resolution_failure` lint renamed to `broken_intra_doc_links` (ndmitchell I didn't fix Gazebo because you'd explicitly suppressed the warning - I'll let you work out what to do with that)
(This is caused by incompatibility between the llvm used by rustc in
platform007 and llvm-fb. In platform009, rustc uses llvm-fb directly so there's
no scope for incompatibility.)
Reviewed By: dtolnay
Differential Revision: D24288638
fbshipit-source-id: 5155d85c186fd79d3cc86cb0bb554ab77d76c12c
Summary: Adds a version of `bounded_traversal_stream` in which `unfold` returns a stream over children instead of an iterator. This function also applies back pressure on children iteration when there are too many unscheduled items.
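The back-pressure idea can be sketched roughly like this (a loose, synchronous Python analogy; the real function is async Rust, and the names and limit here are hypothetical):

```python
from collections import deque

SCHEDULED_MAX = 3  # hypothetical cap on unscheduled items

def bounded_traversal(roots, unfold, scheduled_max=SCHEDULED_MAX):
    """Yield nodes, pulling children lazily from `unfold`'s iterators.

    Back pressure: we only advance child iterators while the queue of
    not-yet-processed items stays under `scheduled_max`.
    """
    queue = deque(roots)   # unscheduled items
    pending = deque()      # partially-consumed child iterators
    while queue or pending:
        while queue:
            node = queue.popleft()
            yield node
            pending.append(unfold(node))
        # Pull more children only while we have capacity.
        while pending and len(queue) < scheduled_max:
            child = next(pending[0], None)
            if child is None:
                pending.popleft()   # iterator exhausted
            else:
                queue.append(child)

tree = {1: [2, 3], 2: [4], 3: [], 4: []}
order = list(bounded_traversal([1], lambda n: iter(tree[n])))
```

With a smaller `scheduled_max`, the child iterators are simply advanced less eagerly; the traversal order is unchanged for this small tree.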
Reviewed By: krallin
Differential Revision: D23931035
fbshipit-source-id: 2e2806653782d4e646dcdf4b2d4e624fd6543da8
Summary:
This just adds a single fn. I couldn't come up with a better place or name for
it; suggestions are welcome. It seems generic enough to belong in the top-level
common location.
I've already needed this twice, so I decided to extract it. The second call site
will come further up the stack.
Reviewed By: StanislavGlebik
Differential Revision: D24080193
fbshipit-source-id: c3e0646f263562f3eed93f1fdbab9a076729f33c
Summary:
This diff introduces the Mysql client for Rust to Mononoke as one more backend, alongside raw xdb connections and myrouter. Mononoke can now use new Mysql client connections instead of Myrouter.
To run Mononoke with the new backend, pass the `--use-mysql-client` option (it conflicts with `--myrouter-port`).
I also added a new target for integration tests, which runs mysql tests using the mysql client.
To run mysql tests using raw xdb connections, use `mononoke/tests/integration:integration-mysql-raw-xdb`; to run them using the mysql client, use `mononoke/tests/integration:integration-mysql`.
Reviewed By: ahornby
Differential Revision: D23213228
fbshipit-source-id: c124ccb15747edb17ed94cdad2c6f7703d3bf1a2
Summary: This diff adds an API to create a set of connections: read, read master, and write.
Reviewed By: ahornby
Differential Revision: D23568561
fbshipit-source-id: b3ee954604557497ed56c6b369256b6f76a1e042
Summary:
This diff makes the blobstore healer use MyAdmin to get the replication lag for a DB shard and removes the "laggable" interface for connections.
The old "laggable" API worked this way: we maintained potential connections to each possible region, then tried to query replica status on all of them. If there were no replica hosts in some region, we just ignored it by handling a specific error type.
This is legacy behavior that complicates the logic; the new code uses MyAdmin instead.
Reviewed By: krallin
Differential Revision: D23767442
fbshipit-source-id: 9f85f07bd318ad020d203d2bcd1c8898061f7572
Summary:
In preparation for moving away from SSH as an intermediate entry point for
Mononoke, let Mononoke work with newly introduced Metadata. This removes any
assumptions we now make about how certain data is presented to us, making the
current "ssh preamble" no longer central.
Metadata is primarily based around identities and provides some
backwards-compatible entry points to make sure we can satisfy downstream
consumers of commits like hooks and logs.
Similarly, we now do our own reverse DNS resolving instead of relying on what's
been provided by the client. This is done in an async manner and we don't rely
on the result, so Mononoke can keep functioning in case DNS is offline.
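The best-effort reverse DNS idea can be sketched like this (a Python sketch under the stated assumptions; the timeout value and function name are hypothetical):

```python
import asyncio
import socket

async def reverse_dns(ip, timeout=1.0):
    """Best-effort reverse DNS: resolve without blocking request handling,
    and tolerate a slow or offline resolver."""
    loop = asyncio.get_running_loop()
    try:
        host, _, _ = await asyncio.wait_for(
            loop.run_in_executor(None, socket.gethostbyaddr, ip), timeout)
        return host
    except (OSError, asyncio.TimeoutError):
        return None  # DNS unavailable: carry on without a hostname

hostname = asyncio.run(reverse_dns("127.0.0.1"))
```

Callers treat `None` as "no hostname available" rather than an error, which is the "don't rely on the result" property described above.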
Reviewed By: farnz
Differential Revision: D23596262
fbshipit-source-id: 3a4e97a429b13bae76ae1cdf428de0246e684a27
Summary:
Generated by formatting with rustfmt 2.0.0-rc.2 and then a second time with fbsource's current rustfmt (1.4.14).
This results in formatting for which rustfmt 1.4 is idempotent but is closer to the style of rustfmt 2.0, reducing the amount of code that will need to change atomically in that upgrade.
---
*Why now?* **:** The 1.x branch is no longer being developed and fixes like https://github.com/rust-lang/rustfmt/issues/4159 (which we need in fbcode) only land to the 2.0 branch.
---
Reviewed By: StanislavGlebik
Differential Revision: D23568780
fbshipit-source-id: b4b4a0aa683d236e2fdeb5b96d723ac2d84b9faf
Summary:
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/37
mononoke_hg_sync_job is used in integration tests, make it public
Reviewed By: krallin
Differential Revision: D22795881
fbshipit-source-id: 7a32c8e8adf723a49922dbb9e7723ab01c011e60
Summary: D22381744 updated the version of `futures` in third-party/rust to 0.3.5, but did not regenerate the autocargo-managed Cargo.toml files in the repo. Although this is a semver-compatible change (and therefore should not break anything), it means that affected projects would see changes to all of their Cargo.toml files the next time they ran `cargo autocargo`.
Reviewed By: dtolnay
Differential Revision: D22403809
fbshipit-source-id: eb1fdbaf69c99549309da0f67c9bebcb69c1131b
Summary:
Sometimes we take a token and then realize we don't want it. In this case, giving it back is convenient.
This adds this!
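A minimal sketch of the take/give-back pattern (all names here are hypothetical; the real lease type lives in Mononoke's code):

```python
import threading

class Token:
    """A held permit that can be returned if it turns out to be unneeded."""
    def __init__(self, bucket):
        self._bucket = bucket
        self._returned = False

    def give_back(self):
        # We took a token and then realized we don't want it;
        # returning it frees the slot for other callers.
        if not self._returned:
            self._returned = True
            self._bucket._sem.release()

class TokenBucket:
    """Minimal lease-style limiter backed by a semaphore."""
    def __init__(self, capacity):
        self._sem = threading.Semaphore(capacity)

    def try_take(self):
        if self._sem.acquire(blocking=False):
            return Token(self)
        return None  # no tokens available right now

bucket = TokenBucket(1)
first = bucket.try_take()    # takes the only token
blocked = bucket.try_take()  # None: no tokens left
first.give_back()            # changed our mind: return it
second = bucket.try_take()   # succeeds again
```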
Reviewed By: farnz
Differential Revision: D22374607
fbshipit-source-id: ccf47e6c75c37d154704645c9e826f514d6f49f6
Summary:
Simple implementation that queries the MyAdmin service to fetch replication
lag.
Caching like in sqlblob::facebook::myadmin will probably come in a follow
up change.
Reviewed By: StanislavGlebik
Differential Revision: D22104350
fbshipit-source-id: fbd90174d528ddae4045e957c343e6c213f70d26
Summary:
ReplicaLagMonitor aims to generalize over different strategies for fetching
the replication lag in a SQL database. Querying a set of connections is one
such strategy.
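A sketch of the generalization (a Python analogy of the Rust trait; names are hypothetical, and the "connections" here are stand-in callables):

```python
from abc import ABC, abstractmethod

class ReplicaLagMonitor(ABC):
    """Interface generalizing lag-fetching strategies."""
    @abstractmethod
    def get_max_lag_secs(self) -> float:
        ...

class ConnectionSetLagMonitor(ReplicaLagMonitor):
    """One strategy: query every replica connection, report the worst lag."""
    def __init__(self, connections):
        # Each "connection" is modeled as a callable returning lag in seconds.
        self.connections = connections

    def get_max_lag_secs(self):
        return max(conn() for conn in self.connections)

monitor = ConnectionSetLagMonitor([lambda: 0.5, lambda: 2.0, lambda: 1.0])
lag = monitor.get_max_lag_secs()
```

Other strategies (e.g. asking MyAdmin, as done elsewhere in this stack) would implement the same interface, so callers don't care where the number comes from.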
Reviewed By: ikostia
Differential Revision: D22104348
fbshipit-source-id: bbbeccb55a664e60b3c14ee17f404982d09f2b25
Summary:
SQLite's `LIKE` operator is case insensitive by default. This doesn't match MySQL, and
also seems like a surprising default. Set the pragma on every connection to make it
case sensitive.
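The behavior difference is easy to demonstrate with Python's built-in `sqlite3` module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE repos (name TEXT)")
conn.execute("INSERT INTO repos VALUES ('Mononoke')")

# Default SQLite behavior: LIKE ignores ASCII case.
insensitive = conn.execute(
    "SELECT COUNT(*) FROM repos WHERE name LIKE 'mononoke'").fetchone()[0]

# The pragma is per-connection, so it must be set on every new connection.
conn.execute("PRAGMA case_sensitive_like = ON")
sensitive = conn.execute(
    "SELECT COUNT(*) FROM repos WHERE name LIKE 'mononoke'").fetchone()[0]
```

Before the pragma the lowercase pattern matches `'Mononoke'`; after it, it does not.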
Reviewed By: farnz
Differential Revision: D22332419
fbshipit-source-id: 4f503eeaa874e110c03c27300467ddc02dc9b365
Summary:
At the moment we can't test logging to Scribe easily - we don't have a way to
mock it. A Scribe client abstraction is supposed to help with that.
It will let us configure all scribe logs to go to a directory on a
filesystem, similar to the way we configure scuba. The Scribe client itself
will be stored in CoreContext.
Reviewed By: farnz
Differential Revision: D22237730
fbshipit-source-id: 144340bcfb1babc3577026191428df48e30a0bb6
Summary:
Eventually, we want everything to be `async`/`await`; as a stepping stone in that direction, switch the remaining blobstore traits to new-style futures.
This just pushes the `.compat()` out to old-style futures, but it makes the move to non-'static lifetimes easier, as all the compile errors will relate to lifetime issues.
Reviewed By: krallin
Differential Revision: D22183228
fbshipit-source-id: 3fe3977f4469626f55cbf5636d17fff905039827
Summary:
This diff adds an additional field, `BlobRepo::attributes`, which can store attributes of arbitrary type. This will help store opaque types inside blobrepo without creating a dependency on the crate that contains the type definition for the attribute. This diff also moves `BonsaiHgMapping` inside the attributes set.
- This work will allow moving the mercurial changeset generation logic to the derived data infrastructure
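The type-keyed attribute idea is roughly analogous to this Python sketch (the real implementation is Rust; `AttributeSet` and the stand-in class here are hypothetical):

```python
class AttributeSet:
    """Store one value per type, retrieved by the type itself, so this
    layer needs no compile-time dependency on each attribute's module."""
    def __init__(self):
        self._attrs = {}

    def insert(self, value):
        self._attrs[type(value)] = value

    def get(self, ty):
        return self._attrs.get(ty)

class BonsaiHgMapping:  # hypothetical stand-in for the real mapping type
    pass

attrs = AttributeSet()
attrs.insert(BonsaiHgMapping())
found = attrs.get(BonsaiHgMapping)
missing = attrs.get(int)  # nothing of that type was stored
```

In Rust this kind of map is typically keyed by `TypeId` with trait-object values; the point is the same: the container is opaque to the types it holds.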
Reviewed By: ikostia
Differential Revision: D21640438
fbshipit-source-id: 3abd912e7227738a73ea9b17aabdda72a33059aa
Summary:
Remove unused dependencies for Rust targets.
This failed to remove the dependencies in eden/scm/edenscmnative/bindings
because of the extra macro layer.
Manual edits (named_deps) and misc output in P133451794
Reviewed By: dtolnay
Differential Revision: D22083498
fbshipit-source-id: 170bbaf3c6d767e52e86152d0f34bf6daa198283
Summary: I would like to use these utilities when building segmented changelog.
Reviewed By: krallin
Differential Revision: D21876432
fbshipit-source-id: 9022627e224bfcb155b47d696371d24e538e6f39
Summary:
Right now, we debug-print the root cause and pretty-print everything else. This
is pretty bad because the root cause is usually the one thing we would want to
pretty print so we can add instructions there (such as "your hooks failed, fix
it").
This fixes that: we now pretty-print the root cause instead of debug-printing
it, and we also debug-print the whole error, which gives us more
developer-friendly context and is easier for automation to match on.
This is actually in common/rust ... but we're the only people using it AFAICT.
Reviewed By: StanislavGlebik
Differential Revision: D21522518
fbshipit-source-id: 10158811574b56024e14852229e4541da19d5609
Summary:
Limits on concurrent calls are a bit hard to reason about, and it's not super
obvious what a good limit is when all our underlying limits are expressed in QPS
(and when our data sets don't have peak concurrency - instead they have
completion time + # blob accesses).
Considering our past experience with ThrottledBlob has been quite positive
overall, I'd like to just use the same approach in ContextConcurrencyBlobstore.
To be safe, I've also updated this to be driven by tunables, which makes it
easier to roll out and roll back.
Note that I removed `Debug` on `CoreContext` as part of this because it wasn't
used anywhere. We can bring back a meaningful implementation of `Debug` there
in the future if we want to. That triggered some warnings about unused fields,
which for now I just silenced.
Reviewed By: farnz
Differential Revision: D21449405
fbshipit-source-id: 5ca843694607888653a75067a4396b36e572f070
Summary:
The motivation for making this function async is that it needs to spawn things,
so it should only ever execute while polled by an executor. If we don't do
this, then it can panic if there is no executor, which is annoying.
I've been wanting to do this for a while but hadn't done it because it required
refactoring a lot of things (see the rest of this stack). But, now, it's done.
Reviewed By: mitrandir77
Differential Revision: D21427348
fbshipit-source-id: bad077b90bcf893f38b90e5c470538d2781c51e9
Summary: To make it easier to navigate the codebase, the oss-only code will from now on be stored in a separate module, similar to how the fbcode-only code is stored.
Reviewed By: markbt
Differential Revision: D21429060
fbshipit-source-id: aa7e80961de2897dae31bd0ec83488c683633b7a
Summary: Cover as much of the remaining code as possible with `Cargo.toml`s; for the rest, create an exclusion list in the autocargo config.
Reviewed By: krallin
Differential Revision: D21383620
fbshipit-source-id: 64cc78a38ce0ec482966f32a2963ab4939e20eba
Summary: Covering repo_listener and microwave, plus some final touches, gives us a buildable Mononoke binary.
Reviewed By: krallin
Differential Revision: D21379008
fbshipit-source-id: cca3fbb53b90ce6d2c3f3ced7717404d6b04dd51
Summary:
There are a few related changes included in this diff:
- backsyncer is made public
- stubs for SessionContext::is_quicksand and scuba_ext::ScribeClientImplementation
- mononoke/hgproto is made buildable
Reviewed By: krallin
Differential Revision: D21330608
fbshipit-source-id: bf0a3c6f930cbbab28508e680a8ed7a0f10031e5
Summary:
When we log to Scuba, we need to truncate the `msg` field if it's too long, or
we might be missing log entries in Scuba. I put this behind a tunable so we can, well,
tune it.
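A sketch of the truncation (the tunable name and default here are made up):

```python
MSG_MAX_LEN = 1000  # hypothetical tunable default

def truncate_msg(msg, max_len=MSG_MAX_LEN):
    """Clamp the Scuba `msg` field, leaving a marker so truncation is visible."""
    if len(msg) <= max_len:
        return msg
    return msg[:max_len] + " (truncated)"

kept = truncate_msg("short message", max_len=50)
clipped = truncate_msg("x" * 100, max_len=20)
```

Keeping a visible marker matters: a silently clipped message looks like a complete log line, while a marked one tells the reader to go look at the full error elsewhere.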
Reviewed By: farnz
Differential Revision: D21405959
fbshipit-source-id: 08f0d3491d1a9728b0ca9221436dee2e8f1a17eb
Summary:
This is needed because the tonic crate (see the diff stack) relies on tokio ^0.2.13.
We can't go to a newer version because a bug that affects mononoke was introduced in 0.2.14 (discussion started on T65261126). The issue was reported upstream: https://github.com/tokio-rs/tokio/issues/2390
This diff simply changed the version number on `fbsource/third-party/rust/Cargo.toml` and ran `fbsource/third-party/rust/reindeer/vendor`.
Also ran `buck run //common/rust/cargo_from_buck:cargo_from_buck` to fix the tokio version on generated cargo files
Reviewed By: krallin
Differential Revision: D21043344
fbshipit-source-id: e61797317a581aa87a8a54e9e2ae22655f22fb97
Summary:
We had accumulated lots of unused dependencies, and had several test_deps in deps instead. Clean this all up to reduce build times and speed up autocargo processing.
Net removal is of around 500 unneeded dependency lines, which represented false dependencies; by removing them, we should get more parallelism in dev builds, and less overbuilding in CI.
Reviewed By: krallin, StanislavGlebik
Differential Revision: D20999762
fbshipit-source-id: 4db3772cbc3fb2af09a16601bc075ae8ed6f0c75
Summary: Asyncified the main functions of fastlog/ops so it will be easier to modify them and proceed with the new features.
Reviewed By: StanislavGlebik
Differential Revision: D20801028
fbshipit-source-id: 2a03eedca776c6e1048a72c7bd613a6ef38c5c17
Summary:
This diff turns off the support_old_nightly feature of async-trait (https://github.com/dtolnay/async-trait/blob/0.1.24/Cargo.toml#L28-L32) everywhere in fbcode. I am getting ready to remove the feature upstream. It was an alternative implementation of async-trait that produces worse error messages but supports some older toolchains dating back to before stabilization of async/await that the default implementation does not support.
This diff includes updating async-trait from 0.1.24 to 0.1.29 to pull in fixes for some patterns that used to work in the support_old_nightly implementation but not the default implementation.
Differential Revision: D20805832
fbshipit-source-id: cd34ce55b419b5408f4f7efb4377c777209e4a6d
Summary: Now that everything is using `sql_construct`, we can remove the old `SqlConstructors` trait.
Reviewed By: StanislavGlebik
Differential Revision: D20734881
fbshipit-source-id: af46b41d17b40f6eb0839cdb9e85b00067360fe9
Summary:
Migrate the configuration of sql data managers from the old configuration using `sql_ext::SqlConstructors` to the new configuration using `sql_construct::SqlConstruct`.
In the old configuration, sharded filenodes were included in the configuration of remote databases, even when that made no sense:
```
[storage.db.remote]
db_address = "main_database"
sharded_filenodes = { shard_map = "sharded_database", shard_num = 100 }
[storage.blobstore.multiplexed]
queue_db = { remote = {
db_address = "queue_database",
sharded_filenodes = { shard_map = "valid_config_but_meaningless", shard_num = 100 }
} }
```
This change separates out:
* **DatabaseConfig**, which describes a single local or remote connection to a database, used in configuration like the queue database.
* **MetadataDatabaseConfig**, which describes the multiple databases used for repo metadata.
**MetadataDatabaseConfig** is either:
* **Local**, which is a local sqlite database, the same as for **DatabaseConfig**; or
* **Remote**, which contains:
* `primary`, the database used for main metadata.
* `filenodes`, the database used for filenodes, which may be sharded or unsharded.
More fields can be added to **RemoteMetadataDatabaseConfig** when we want to add new databases.
New configuration looks like:
```
[storage.metadata.remote]
primary = { db_address = "main_database" }
filenodes = { sharded = { shard_map = "sharded_database", shard_num = 100 } }
[storage.blobstore.multiplexed]
queue_db = { remote = { db_address = "queue_database" } }
```
The `sql_construct` crate facilitates this by providing the following traits:
* **SqlConstruct** defines the basic rules for construction, and allows construction based on a local sqlite database.
* **SqlShardedConstruct** defines the basic rules for construction based on sharded databases.
* **FbSqlConstruct** and **FbShardedSqlConstruct** allow construction based on unsharded and sharded remote databases on Facebook infra.
* **SqlConstructFromDatabaseConfig** allows construction based on the database defined in **DatabaseConfig**.
* **SqlConstructFromMetadataDatabaseConfig** allows construction based on the appropriate database defined in **MetadataDatabaseConfig**.
* **SqlShardableConstructFromMetadataDatabaseConfig** allows construction based on the appropriate shardable databases defined in **MetadataDatabaseConfig**.
Sql database managers should implement:
* **SqlConstruct** in order to define how to construct an unsharded instance from a single set of `SqlConnections`.
* **SqlShardedConstruct**, if they are shardable, in order to define how to construct a sharded instance.
* If the database is part of the repository metadata database config, either of:
* **SqlConstructFromMetadataDatabaseConfig** if they are not shardable. By default they will use the primary metadata database, but this can be overridden by implementing `remote_database_config`.
* **SqlShardableConstructFromMetadataDatabaseConfig** if they are shardable. They must implement `remote_database_config` to specify where to get the sharded or unsharded configuration from.
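The construction pattern might be sketched like this Python analogy (the real traits are Rust; the class and method names mirror the description above but the code itself is hypothetical):

```python
from abc import ABC, abstractmethod

class SqlConstruct(ABC):
    """Implementors define how to build themselves from a set of
    connections; shared helpers (like sqlite construction) come for free."""
    @classmethod
    @abstractmethod
    def from_sql_connections(cls, connections):
        ...

    @classmethod
    def with_sqlite_in_memory(cls):
        # Shared rule: any SqlConstruct can be built on a local sqlite DB.
        return cls.from_sql_connections({"primary": "sqlite::memory:"})

class Filenodes(SqlConstruct):  # hypothetical sql data manager
    def __init__(self, connections):
        self.connections = connections

    @classmethod
    def from_sql_connections(cls, connections):
        return cls(connections)

filenodes = Filenodes.with_sqlite_in_memory()
```

The design point is that each manager only implements the one constructor it actually needs, and the base trait derives the rest (local sqlite, remote, sharded) from configuration.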
Reviewed By: StanislavGlebik
Differential Revision: D20734883
fbshipit-source-id: bb2f4cb3806edad2bbd54a47558a164e3190c5d1
Summary:
Refactor `sql_ext::SqlConstructors` and its related traits into a separate
crate. The new `SqlConstruct` trait is joined by `SqlShardedConstruct` which
allows construction of sharded databases.
The new crate will support a new configuration model where the distinction
between the database configuration for different repository metadata types
will be made clear.
Reviewed By: StanislavGlebik
Differential Revision: D20734882
fbshipit-source-id: b44cf9d1efd014c29df88e2ad933025e440119dc
Summary: In the process, blobstore/factory/lib.rs was split into submodules; this made it easier to untangle the dependencies and refactor it, so I've left the split in this diff.
Reviewed By: markbt
Differential Revision: D20302068
fbshipit-source-id: caa3a2b5487c30198c62f7e4f4e9cb7c488dc8de
Summary: This diff makes it possible to rely on the thrift structures from configerator in OSS.
Reviewed By: ahornby
Differential Revision: D20279457
fbshipit-source-id: 39df1c7a6f78b10a0a5bdeeebe476249ab674cc8
Summary:
This shifts the responsibility of mocking all facebook-specific code
to mononoke's sql_ext crate. If OSS code calls into any of that code it will
most likely result in a panic.
Reviewed By: ahornby
Differential Revision: D20247580
fbshipit-source-id: 43f158d91aa32adaa5df6e3786243fb89c9ce961
Summary: separate out the Facebook-specific pieces of the sql_ext crate
Reviewed By: ahornby
Differential Revision: D20218219
fbshipit-source-id: e933c7402b31fcd5c4af78d5e70adafd67e91ecd
Summary:
Context: https://fb.workplace.com/groups/rust.language/permalink/3338940432821215/
This codemod replaces all dependencies on `//common/rust/renamed:tokio-preview` with `fbsource//third-party/rust:tokio-preview` and their uses in Rust code from `tokio_preview::` to `tokio::`.
This does not introduce any collisions with `tokio::` meaning 0.1 tokio because D20235404 previously renamed all of those to `tokio_old::` in crates that depend on both 0.1 and 0.2 tokio.
This is the tokio version of what D20213432 did for futures.
Codemod performed by:
```
rg \
--files-with-matches \
--type-add buck:TARGETS \
--type buck \
--glob '!/experimental' \
--regexp '(_|\b)rust(_|\b)' \
| sed 's,TARGETS$,:,' \
| xargs \
-x \
buck query "labels(srcs, rdeps(%Ss, //common/rust/renamed:tokio-preview, 1))" \
| xargs sed -i 's,\btokio_preview::,tokio::,'
rg \
--files-with-matches \
--type-add buck:TARGETS \
--type buck \
--glob '!/experimental' \
--regexp '(_|\b)rust(_|\b)' \
| xargs sed -i 's,//common/rust/renamed:tokio-preview,fbsource//third-party/rust:tokio-preview,'
```
Reviewed By: k21
Differential Revision: D20236557
fbshipit-source-id: 15068b93a0a944d6249a1d9f63840a4c61c9c1ba
Summary:
Context: https://fb.workplace.com/groups/rust.language/permalink/3338940432821215/
In targets that depend on both 0.1 and 0.2 tokio, this codemod renames the 0.1 dependency to be exposed as tokio_old::. This is in preparation for flipping the 0.2 dependencies from tokio_preview:: to plain tokio::.
This is the tokio version of what D20168958 did for futures.
Codemod performed by:
```
rg \
--files-with-matches \
--type-add buck:TARGETS \
--type buck \
--glob '!/experimental' \
--regexp '(_|\b)rust(_|\b)' \
| sed 's,TARGETS$,:,' \
| xargs \
-x \
buck query "labels(srcs,
rdeps(%Ss, fbsource//third-party/rust:tokio-old, 1)
intersect
rdeps(%Ss, //common/rust/renamed:tokio-preview, 1)
)" \
| xargs sed -i 's,\btokio::,tokio_old::,'
```
Reviewed By: k21
Differential Revision: D20235404
fbshipit-source-id: cfb2689a584ad0d73f16d98d8587fb9c44661465
Summary:
In targets that depend on *both* 0.1 and 0.3 futures, this codemod renames the 0.1 dependency to be exposed as futures_old::. This is in preparation for flipping the 0.3 dependencies from futures_preview:: to plain futures::.
rs changes performed by:
```
rg \
--files-with-matches \
--type-add buck:TARGETS \
--type buck \
--glob '!/experimental' \
--regexp '(_|\b)rust(_|\b)' \
| sed 's,TARGETS$,:,' \
| xargs \
-x \
buck query "labels(srcs,
rdeps(%Ss, fbsource//third-party/rust:futures-old, 1)
intersect
rdeps(%Ss, //common/rust/renamed:futures-preview, 1)
)" \
| xargs sed -i 's/\bfutures::/futures_old::/'
```
Reviewed By: jsgf
Differential Revision: D20168958
fbshipit-source-id: d2c099f9170c427e542975bc22fd96138a7725b0
Summary:
This introduces caching of filenodes to Memcache as in the old filenodes
implementation. The code was mostly ported over from the existing filenodes
implementation and converted to async / await. However, one key difference is
that the lookups happen once we hold the semaphore to talk to the underlying
MySQL shard.
The reason for this is:
- Reads to Memcache are really fast. They're often under 1ms. If you're going
to miss in Memcache and have to go to SQL, it won't make you much slower.
- Reads to Memcache are kinda expensive CPU-wise. Data in Memcache is
compressed, and we often see a lot of our CPU cycles spent talking to Memcache
when we're under load.
- Memcache isn't an infinite resource. If we're reading the exact same
key a hundred times, that's going to hit the same Memcache box. A bit of
deduplication on our end is a nice thing to strive for. Besides, our own
thread pool we use to talk to Memcache is limited in size.
From a performance perspective, this doesn't make things any slower, but
reduces CPU usage when we'd otherwise have a lot of duplicate fetching.
Finally, note that this update also includes support for dirty-tracking in our
local cache. We use this to know if we should fill the remote cache (if we 100%
hit in local cache, we don't fill the remote cache).
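A rough Python analogy of the lookup order and dirty tracking (the real code is async Rust over MySQL and Memcache; everything here is a simplified stand-in, with dicts for the stores and a semaphore bounding concurrent remote lookups):

```python
import asyncio

class CachedFilenodes:
    """Check the local cache first; only consult the remote cache once we
    hold the per-shard semaphore, so duplicate lookups are bounded instead
    of all hammering the remote cache at once."""
    def __init__(self, db, remote_cache, max_concurrent=2):
        self.db = db              # backing store (stand-in for MySQL)
        self.remote = remote_cache  # stand-in for Memcache
        self.local = {}           # key -> (value, dirty)
        self.sem = asyncio.Semaphore(max_concurrent)

    async def get(self, key):
        if key in self.local:
            return self.local[key][0]   # 100% local hit: remote untouched
        async with self.sem:            # remote lookup only under the semaphore
            if key in self.remote:
                self.local[key] = (self.remote[key], False)  # clean: from remote
                return self.remote[key]
            value = self.db[key]
            self.local[key] = (value, True)  # dirty: remote didn't have it
            self.remote[key] = value         # fill remote since entry was dirty
            return value

async def demo():
    store = CachedFilenodes({"a": 1}, {})
    v1 = await store.get("a")  # misses both caches, fills both
    v2 = await store.get("a")  # served from the local cache
    return v1, v2, store.remote

v1, v2, remote = asyncio.run(demo())
```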
Reviewed By: StanislavGlebik
Differential Revision: D19905390
fbshipit-source-id: 363f638bb24cf488c7cd3a8ecea43e93f8391d3f