Summary:
As part of modernising MultiplexedBlobstore, I want to fully asyncify the blobstore_sync_queue; that means I need this component fully asyncified first.
This fully asyncifies everything except the bits that interact with blobstore_sync_queue; those have to wait until MultiplexedBlobstore itself is asyncified.
The end goal is to reduce the number of healer overloads by adding a mode of operation in which writes (e.g. from backfills or derived data) can skip the sync queue write when all blobstores are working.
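A minimal sketch of that mode, using hypothetical names (`multiplexed_put`, `enqueue`) rather than the real MultiplexedBlobstore API:
```
/// Hypothetical sketch: only write a sync-queue entry when at least one
/// underlying blobstore failed the put. All names here are illustrative.
fn multiplexed_put(
    put_results: Vec<Result<(), anyhow::Error>>, // one result per blobstore
    enqueue: impl FnOnce() -> Result<(), anyhow::Error>, // sync-queue write
) -> Result<(), anyhow::Error> {
    if put_results.iter().all(|r| r.is_ok()) {
        // Every blobstore accepted the write: nothing for the healer to do,
        // so skip the sync-queue write entirely.
        Ok(())
    } else if put_results.iter().any(|r| r.is_ok()) {
        // Partial success: the healer must copy the blob to the failed stores.
        enqueue()
    } else {
        Err(anyhow::anyhow!("all blobstores failed the put"))
    }
}
```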
Reviewed By: StanislavGlebik
Differential Revision: D22460059
fbshipit-source-id: 5792c4a8daf17ffe99a04d792792f568c40fde37
Summary: I'm about to asyncify the healer, so first move two thirds of the file's content (the tests) into their own file.
Reviewed By: ikostia
Differential Revision: D22460166
fbshipit-source-id: 18c0dde5f582c4c7006e3f023816ac457d38234b
Summary: Stage 1 of a migration. The next step is to make all users of this trait use new futures; then I can come back, add lifetimes and references, and leave it modernised.
Reviewed By: StanislavGlebik
Differential Revision: D22460164
fbshipit-source-id: 94591183912c0b006b7bcd7388a3d7c296e60577
Summary: This allowed me to compare two alternative approaches to queue draining, and it generally seems like a useful thing to do.
Reviewed By: krallin
Differential Revision: D22364733
fbshipit-source-id: b6c76295c85b4dec6f0bfd7107c30bb4e4a28942
Summary: Until now, we were monitoring the wrong lag.
Reviewed By: farnz
Differential Revision: D22356455
fbshipit-source-id: abe41a4154c2a8d53befed4760e2e9544797c845
Summary:
ReplicaLagMonitor aims to generalize over different strategies for fetching
the replication lag of a SQL database. Querying a set of connections is one
such strategy.
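A rough sketch of the shape this takes, with simplified stand-ins rather than the real `sql_ext` types:
```
use std::time::Duration;

/// Simplified stand-in for the generalization: one trait, many strategies.
trait ReplicaLagMonitor {
    /// The highest replication lag across the replicas this monitor knows of.
    fn max_lag(&self) -> Result<Duration, String>;
}

/// One strategy: query each connection in a set and take the worst lag.
struct ConnectionSetMonitor {
    lags: Vec<Duration>, // stand-in for live lag queries on SQL connections
}

impl ReplicaLagMonitor for ConnectionSetMonitor {
    fn max_lag(&self) -> Result<Duration, String> {
        self.lags
            .iter()
            .max()
            .copied()
            .ok_or_else(|| "no replicas to query".to_string())
    }
}
```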
Reviewed By: ikostia
Differential Revision: D22104348
fbshipit-source-id: bbbeccb55a664e60b3c14ee17f404982d09f2b25
Summary:
The blobstore healer has logic that prevents it from doing busy work when the
queue is empty. This is implemented by checking whether the DB query
fetched the full `LIMIT` of values. Or at least, that is the idea. In
practice, here's what happens:
1. The DB query is nested: first it gets at most `LIMIT` distinct
`operation_key` entries, then it gets all rows with those entries. In practice
this almost always means `# of blobstores * LIMIT` rows, as we almost always
succeed in writing to every blobstore.
2. Once this query is done, the rows are grouped by `blobstore_key`, and a
future is created for each such group (for simplicity, ignore that a future
may not be created).
3. We then compare the number of created futures with `LIMIT` and report an
incomplete batch if the numbers are different.
This logic has a flaw: the same `blobstore_key` may be written multiple times with
different `operation_key` values. One example of this: `GitSha1` keys for
identical contents. When this happens, grouping from step 2 above will produce
fewer than `LIMIT` groups, and we'll end up sleeping for nothing.
This is not a huge deal, but let's fix it anyway.
My fix also adds some strictly speaking unnecessary logging, but I found it
helpful during this investigation, so let's keep it.
The price of this change is collecting two `unique_by` calls, each of which
allocates a temporary hash set [1] of size `LIMIT * len(blobstore_key) * #
blobstores` (and another one keyed by `operation_key`). For `LIMIT=100_000`,
`len(blobstore_key)=255`, and `# blobstores = 3`, that's roughly 70 MB for the
larger one, which should be ok.
[1] https://docs.rs/itertools/0.9.0/itertools/trait.Itertools.html#method.unique
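As a sketch of the counting idea with itertools (simplified row type, and `unique` over the mapped key; the real change collects two `unique_by` calls as described above):
```
use itertools::Itertools;

struct QueueRow {
    blobstore_key: String,
    operation_key: String,
}

/// Report a full batch based on distinct `operation_key`s rather than on
/// `blobstore_key` groups, so duplicate blob keys (e.g. `GitSha1` keys for
/// identical contents) no longer make a full batch look incomplete.
fn batch_was_full(rows: &[QueueRow], limit: usize) -> bool {
    let distinct_operations = rows
        .iter()
        .map(|row| &row.operation_key)
        .unique() // this is where the temporary hash set gets allocated
        .count();
    distinct_operations >= limit
}
```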
Reviewed By: ahornby
Differential Revision: D22293204
fbshipit-source-id: bafb7817359e2c867cf33c319a886653b974d43f
Summary:
Eventually, we want everything to be `async`/`await`; as a stepping stone in that direction, switch the remaining blobstore traits to new-style futures.
This just pushes the `.compat()` out to old-style futures, but it makes the move to non-'static lifetimes easier, as all the compile errors will relate to lifetime issues.
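For illustration, pushing `.compat()` down onto an old-style future at the trait boundary looks roughly like this (hypothetical method, not the actual Blobstore signature; assumes futures 0.1 is renamed to `futures_old`, as elsewhere in this codebase):
```
use anyhow::Error;
use futures::compat::Future01CompatExt;
use futures::future::{BoxFuture, FutureExt};

/// Hypothetical old-style (futures 0.1) fetch.
fn fetch_old(key: String) -> impl futures_old::Future<Item = Vec<u8>, Error = Error> + Send {
    futures_old::future::ok(key.into_bytes())
}

/// New-style trait method: wrap the old future with `.compat()` and box it.
fn fetch_new(key: String) -> BoxFuture<'static, Result<Vec<u8>, Error>> {
    fetch_old(key).compat().boxed()
}
```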
Reviewed By: krallin
Differential Revision: D22183228
fbshipit-source-id: 3fe3977f4469626f55cbf5636d17fff905039827
Summary: Let's not heal 10000 blobs in parallel; that's a little too much data.
Reviewed By: farnz
Differential Revision: D22186543
fbshipit-source-id: 939fb5bc83b283090e979ac5fe3efc96191826d3
Summary: I would like to use these utilities when building segmented changelog.
Reviewed By: krallin
Differential Revision: D21876432
fbshipit-source-id: 9022627e224bfcb155b47d696371d24e538e6f39
Summary:
Replace the use of `RepoConfigs::read*` associated functions with free
functions. These didn't really need to be associated functions (and in the
case of the common and storage configs, really didn't belong there either).
Reviewed By: krallin
Differential Revision: D21837270
fbshipit-source-id: 2dc73a880ed66e11ea484b88b749582ebdf8a73f
Summary:
This updates our blobrepo factory code to async / await. The underlying
motivation is to make this easier to modify. I've run into this a few times
now, and I'm sure others have too, so I think it's time.
In doing so, I've simplified the code a little bit to stop passing futures
around when values will do. This makes the code a bit more sequential, but
considering none of those futures were eager in any way, it shouldn't really
make any difference.
Reviewed By: markbt
Differential Revision: D21427290
fbshipit-source-id: e70500b6421a95895247109cec75ca7fde317169
Summary:
- Change the get return value for `Blobstore` from `BlobstoreBytes` to `BlobstoreGetData`, which includes `ctime` metadata
- Update the call sites and tests broken due to this change
- Change `ScrubHandler::on_repair` to accept metadata and log ctime
- `Fileblob` and `Manifoldblob` attach the ctime metadata
- Tests for fileblob in `mononoke:blobstore-test` and integration test `test-walker-scrub-blobstore.t`
- Make cachelib based caching use `BlobstoreGetData`
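Conceptually, the new return type bundles the bytes with their metadata, along these lines (a simplified sketch, not the exact definitions):
```
/// Simplified sketch of the idea behind `BlobstoreGetData`:
/// the payload plus metadata such as ctime.
struct BlobstoreMetadata {
    ctime: Option<i64>, // creation time, when the store can provide it
}

struct BlobstoreGetData {
    metadata: BlobstoreMetadata,
    bytes: Vec<u8>, // stands in for BlobstoreBytes
}

impl BlobstoreGetData {
    /// Callers that only care about the payload can still get plain bytes.
    fn into_bytes(self) -> Vec<u8> {
        self.bytes
    }
}
```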
Reviewed By: ahornby
Differential Revision: D21094023
fbshipit-source-id: dc597e888eac2098c0e50d06e80ee180b4f3e069
Summary:
This removes our own (Mononoke's) implementation of failure chains and
replaces them with Anyhow. Failure chains don't appear to be used anywhere
besides Mononoke.
The historical motivation for failure chains was to make context introspectable
back when we were using Failure. However, we're not using Failure anymore, and
Anyhow does that out of the box with its `context` method, which you can
downcast to the original error or any of the context instances:
https://docs.rs/anyhow/1.0.28/anyhow/trait.Context.html#effect-on-downcasting
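For instance (illustrative error type and messages, using only the anyhow API described above):
```
use anyhow::{Context, Result};
use std::fmt;

#[derive(Debug)]
struct BlobstoreUnreachable;

impl fmt::Display for BlobstoreUnreachable {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "blobstore is unreachable")
    }
}

impl std::error::Error for BlobstoreUnreachable {}

fn fetch() -> Result<()> {
    Err(BlobstoreUnreachable).context("While healing a blob")
}

fn main() {
    let err = fetch().unwrap_err();
    // The context is introspectable: downcast to the original error...
    assert!(err.downcast_ref::<BlobstoreUnreachable>().is_some());
    // ...or walk the chain, which includes each attached context.
    assert!(err.chain().any(|e| e.to_string().contains("While healing")));
}
```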
Reviewed By: StanislavGlebik
Differential Revision: D21384015
fbshipit-source-id: 1dc08b4b38edf8f9a2c69a1e1572d385c7063dbe
Summary:
We used to do this implicitly when creating the sync queue (though it wasn't
needed there: if we don't wait, we crash later when checking for replication
lag), but we no longer do so after the SqlConstruct refactor.
This fixes that, so now we can start the healer again.
Reviewed By: farnz
Differential Revision: D21063118
fbshipit-source-id: 24f236d10b411bc9a5694b42c19bf2afa352a54c
Summary:
Being told `Input/output error: Connection refused (os error 111)` isn't very
helpful when things are broken. However, being told:
```
Execution error: While waiting for replication
Caused by:
0: While fetching repliction lag for altoona
1: Input/output error: Connection refused (os error 111)
```
is nicer.
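That kind of report falls out of chained anyhow `context` calls; a sketch with illustrative names:
```
use anyhow::{anyhow, Context, Result};

fn fetch_lag(replica: &str) -> Result<u64> {
    Err(anyhow!("Input/output error: Connection refused (os error 111)"))
        .with_context(|| format!("While fetching replication lag for {}", replica))
}

fn main() {
    let err = fetch_lag("altoona")
        .context("While waiting for replication")
        .unwrap_err();
    // The Debug form of an anyhow error prints the numbered "Caused by" list.
    println!("Execution error: {:?}", err);
}
```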
Reviewed By: farnz
Differential Revision: D21063120
fbshipit-source-id: 1408b9eca025b120790a95d336895d2f50be3d5d
Summary:
This turns out quite nicely because we had some futures there that were always
`Ok`, and now we can use `Output` instead of `Item` and `Error`.
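The simplification for an always-`Ok` future, as a toy example (again assuming futures 0.1 renamed to `futures_old`):
```
// Old style: even an infallible computation must name an Error type.
fn answer_old() -> impl futures_old::Future<Item = u64, Error = std::convert::Infallible> {
    futures_old::future::ok(42)
}

// New style: `Output` is just the value; no phantom error anywhere.
fn answer_new() -> impl std::future::Future<Output = u64> {
    futures::future::ready(42)
}
```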
Reviewed By: ahornby
Differential Revision: D21063119
fbshipit-source-id: ab5dc67589f79c898d742a276a9872f82ee7e3f9
Summary:
I'd like to do a bit of work on this, so might as well convert it to async /
await first.
Reviewed By: ahornby
Differential Revision: D21063121
fbshipit-source-id: e388d59cecf5ba68d9bdf551868cea79765606f7
Summary:
Migrate the configuration of sql data managers from the old configuration using `sql_ext::SqlConstructors` to the new configuration using `sql_construct::SqlConstruct`.
In the old configuration, sharded filenodes were included in the configuration of remote databases, even when that made no sense:
```
[storage.db.remote]
db_address = "main_database"
sharded_filenodes = { shard_map = "sharded_database", shard_num = 100 }
[storage.blobstore.multiplexed]
queue_db = { remote = {
db_address = "queue_database",
sharded_filenodes = { shard_map = "valid_config_but_meaningless", shard_num = 100 }
} }
```
This change separates out:
* **DatabaseConfig**, which describes a single local or remote connection to a database, used in configuration like the queue database.
* **MetadataDatabaseConfig**, which describes the multiple databases used for repo metadata.
**MetadataDatabaseConfig** is either:
* **Local**, which is a local sqlite database, the same as for **DatabaseConfig**; or
* **Remote**, which contains:
* `primary`, the database used for main metadata.
* `filenodes`, the database used for filenodes, which may be sharded or unsharded.
More fields can be added to **RemoteMetadataDatabaseConfig** when we want to add new databases.
New configuration looks like:
```
[storage.metadata.remote]
primary = { db_address = "main_database" }
filenodes = { sharded = { shard_map = "sharded_database", shard_num = 100 } }
[storage.blobstore.multiplexed]
queue_db = { remote = { db_address = "queue_database" } }
```
The `sql_construct` crate facilitates this by providing the following traits:
* **SqlConstruct** defines the basic rules for construction, and allows construction based on a local sqlite database.
* **SqlShardedConstruct** defines the basic rules for construction based on sharded databases.
* **FbSqlConstruct** and **FbShardedSqlConstruct** allow construction based on unsharded and sharded remote databases on Facebook infra.
* **SqlConstructFromDatabaseConfig** allows construction based on the database defined in **DatabaseConfig**.
* **SqlConstructFromMetadataDatabaseConfig** allows construction based on the appropriate database defined in **MetadataDatabaseConfig**.
* **SqlShardableConstructFromMetadataDatabaseConfig** allows construction based on the appropriate shardable databases defined in **MetadataDatabaseConfig**.
Sql database managers should implement:
* **SqlConstruct** in order to define how to construct an unsharded instance from a single set of `SqlConnections`.
* **SqlShardedConstruct**, if they are shardable, in order to define how to construct a sharded instance.
* If the database is part of the repository metadata database config, either of:
* **SqlConstructFromMetadataDatabaseConfig** if they are not shardable. By default they will use the primary metadata database, but this can be overridden by implementing `remote_database_config`.
* **SqlShardableConstructFromMetadataDatabaseConfig** if they are shardable. They must implement `remote_database_config` to specify where to get the sharded or unsharded configuration from.
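As a rough illustration of the simplest case, with stand-ins for the real `sql_construct` definitions:
```
/// Stand-in for `sql_ext::SqlConnections` (real one holds read/write handles).
struct SqlConnections {
    write: String, // stand-in for an actual connection
}

/// Simplified stand-in for the `SqlConstruct` trait described above.
trait SqlConstruct: Sized {
    const LABEL: &'static str;
    fn from_sql_connections(connections: SqlConnections) -> Self;
}

/// An unsharded manager only has to say how it wraps one set of connections;
/// local sqlite and remote construction then come for free.
struct SqlMyDataManager {
    connections: SqlConnections,
}

impl SqlConstruct for SqlMyDataManager {
    const LABEL: &'static str = "my_data_manager";

    fn from_sql_connections(connections: SqlConnections) -> Self {
        Self { connections }
    }
}
```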
Reviewed By: StanislavGlebik
Differential Revision: D20734883
fbshipit-source-id: bb2f4cb3806edad2bbd54a47558a164e3190c5d1
Summary:
It is preferable to use the higher-level API of cached_config instead of ConfigeratorAPI whenever possible since the higher-level API supports OSS builds.
For `ConfigStore`, let `poll_interval` be `None` so that one-off reads of configs don't needlessly spawn an updating thread.
This update also complies with the discussion in D19026190.
Reviewed By: ahornby
Differential Revision: D20670224
fbshipit-source-id: 24fc124d440fd458a9fa88a906fc3a1cfdbd827e
Summary:
From time to time the blobstore healer crashes because its SQL queries time
out. The root cause of the problem is that the same blob_key may show up on
the queue many times, and the query tries to select all occurrences.
However, the original intention of the blobstore healer is to act on a single
put operation across all blobstores. To identify which puts in the healer
queue are part of the same operation, we need a unique id to use per such
operation; let's call it OperationKey.
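A sketch of the key itself, assuming a uuid-backed type (illustrative; requires the `uuid` crate with the `v4` feature):
```
use uuid::Uuid;

/// One OperationKey is minted per multiplexed put and shared by the queue row
/// written for every blobstore, so the healer can group them back together.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct OperationKey(Uuid);

impl OperationKey {
    fn gen() -> Self {
        OperationKey(Uuid::new_v4())
    }
}

fn main() {
    let op = OperationKey::gen();
    // Every per-blobstore queue row for this put carries the same key.
    let rows: Vec<(&str, OperationKey)> = ["store-a", "store-b", "store-c"]
        .iter()
        .map(|store| (*store, op.clone()))
        .collect();
    assert!(rows.iter().all(|(_, key)| *key == op));
}
```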
Corresponding configerator change to create the db column: D20557659
NOTE: This diff has to be landed and rolled out first, before D20557700 is rolled out. I'm assuming that after some time since rolling out this diff all the rows in the production db will have proper `operation_key` value set.
Reviewed By: krallin
Differential Revision: D20557702
fbshipit-source-id: 404d9fdea6796b38193292d1bbd4b8cd4b5b3eb8
Summary: Separate out the Facebook-specific pieces of the sql_ext crate
Reviewed By: ahornby
Differential Revision: D20218219
fbshipit-source-id: e933c7402b31fcd5c4af78d5e70adafd67e91ecd
Summary:
Context: https://fb.workplace.com/groups/rust.language/permalink/3338940432821215/
This codemod replaces all dependencies on `//common/rust/renamed:tokio-preview` with `fbsource//third-party/rust:tokio-preview` and their uses in Rust code from `tokio_preview::` to `tokio::`.
This does not introduce any collisions with `tokio::` meaning 0.1 tokio because D20235404 previously renamed all of those to `tokio_old::` in crates that depend on both 0.1 and 0.2 tokio.
This is the tokio version of what D20213432 did for futures.
Codemod performed by:
```
rg \
--files-with-matches \
--type-add buck:TARGETS \
--type buck \
--glob '!/experimental' \
--regexp '(_|\b)rust(_|\b)' \
| sed 's,TARGETS$,:,' \
| xargs \
-x \
buck query "labels(srcs, rdeps(%Ss, //common/rust/renamed:tokio-preview, 1))" \
| xargs sed -i 's,\btokio_preview::,tokio::,'
rg \
--files-with-matches \
--type-add buck:TARGETS \
--type buck \
--glob '!/experimental' \
--regexp '(_|\b)rust(_|\b)' \
| xargs sed -i 's,//common/rust/renamed:tokio-preview,fbsource//third-party/rust:tokio-preview,'
```
Reviewed By: k21
Differential Revision: D20236557
fbshipit-source-id: 15068b93a0a944d6249a1d9f63840a4c61c9c1ba
Summary:
Context: https://fb.workplace.com/groups/rust.language/permalink/3338940432821215/
This codemod replaces *all* dependencies on `//common/rust/renamed:futures-preview` with `fbsource//third-party/rust:futures-preview` and their uses in Rust code from `futures_preview::` to `futures::`.
This does not introduce any collisions with `futures::` meaning 0.1 futures because D20168958 previously renamed all of those to `futures_old::` in crates that depend on *both* 0.1 and 0.3 futures.
Codemod performed by:
```
rg \
--files-with-matches \
--type-add buck:TARGETS \
--type buck \
--glob '!/experimental' \
--regexp '(_|\b)rust(_|\b)' \
| sed 's,TARGETS$,:,' \
| xargs \
-x \
buck query "labels(srcs, rdeps(%Ss, //common/rust/renamed:futures-preview, 1))" \
| xargs sed -i 's,\bfutures_preview::,futures::,'
rg \
--files-with-matches \
--type-add buck:TARGETS \
--type buck \
--glob '!/experimental' \
--regexp '(_|\b)rust(_|\b)' \
| xargs sed -i 's,//common/rust/renamed:futures-preview,fbsource//third-party/rust:futures-preview,'
```
Reviewed By: k21
Differential Revision: D20213432
fbshipit-source-id: 07ee643d350c5817cda1f43684d55084f8ac68a6
Summary:
While we are transitioning from tokio 0.1 to tokio 0.2, we might need to use
the [tokio_compat](https://docs.rs/tokio-compat/0.1.4/tokio_compat/) crate.
Let's add a helper macro, similar to fbinit::test, that uses the tokio_compat
runtime.
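Expanded by hand, such a test would run its async body on a compat runtime, roughly like this (assuming the tokio-compat 0.1 API):
```
/// Roughly what the attribute macro wraps a test in: a tokio-compat runtime
/// that can drive both tokio 0.1 and tokio 0.2 style code.
#[test]
fn my_compat_test() {
    let mut runtime = tokio_compat::runtime::Runtime::new().unwrap();
    runtime.block_on_std(async {
        // async/await test body goes here; tokio 0.1 futures spawned from
        // within are driven by the same runtime.
        assert_eq!(1 + 1, 2);
    });
}
```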
Reviewed By: farnz
Differential Revision: D20213814
fbshipit-source-id: 18976e953011c8ada1fa915686e2dcb76ea288d5
Summary:
In targets that depend on *both* 0.1 and 0.3 futures, this codemod renames the 0.1 dependency to be exposed as futures_old::. This is in preparation for flipping the 0.3 dependencies from futures_preview:: to plain futures::.
rs changes performed by:
```
rg \
--files-with-matches \
--type-add buck:TARGETS \
--type buck \
--glob '!/experimental' \
--regexp '(_|\b)rust(_|\b)' \
| sed 's,TARGETS$,:,' \
| xargs \
-x \
buck query "labels(srcs,
rdeps(%Ss, fbsource//third-party/rust:futures-old, 1)
intersect
rdeps(%Ss, //common/rust/renamed:futures-preview, 1)
)" \
| xargs sed -i 's/\bfutures::/futures_old::/'
```
Reviewed By: jsgf
Differential Revision: D20168958
fbshipit-source-id: d2c099f9170c427e542975bc22fd96138a7725b0
Summary:
The Bytes 0.5 update left us in a somewhat undesirable position where every
access to our blobstore incurs an extra copy whenever we fetch data out of our
cache (by turning it from Bytes 0.5 into Bytes 0.4). We also have quite a few
places where we convert in one direction and then immediately back in the other.
Internally, we can start using Bytes 0.5 now. For example, this is useful when
pulling data out of our blobstore and deserializing as Thrift (or conversely,
when serializing and putting it into our blobstore).
However, when we interface with Tokio (i.e. decoders & encoders), we still have
to use Bytes 0.4. So, when needed, we convert our Bytes 0.5 to 0.4 there.
The tradeoff is that we deal with more bytes internally than we end up
sending to clients, so doing the Bytes conversion closer to the point of
sending data to clients means fewer copies.
We can also start removing those once we migrate to Tokio 0.2 (and newer
versions of Hyper for HTTP services).
Changes that were required:
- You can't extend the new `Bytes` (because that implicitly copies). You need
to use `BytesMut` instead, which I did where that was necessary (I also added
calls in the Filestore to do that efficiently).
- You can't create bytes from a `&'a [u8]`, unless `'a` is `'static`. You need
to use `copy_from_slice` instead.
- `slice_to` and `slice_from` have been replaced by a `slice()` function that
takes ranges.
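Concretely, the three changes look roughly like this with toy values:
```
use bytes::{Bytes, BytesMut};

fn main() {
    // 1. Extending: build up in BytesMut, then freeze into Bytes.
    let mut buf = BytesMut::with_capacity(16);
    buf.extend_from_slice(b"hello ");
    buf.extend_from_slice(b"world");
    let bytes: Bytes = buf.freeze();

    // 2. Creating from a non-'static slice: copy explicitly.
    let local = vec![1u8, 2, 3];
    let copied = Bytes::copy_from_slice(&local);

    // 3. Slicing: one range-taking slice() replaces slice_to/slice_from.
    let head = bytes.slice(..5); // was: slice_to(5)
    let tail = bytes.slice(6..); // was: slice_from(6)

    assert_eq!(&head[..], b"hello");
    assert_eq!(&tail[..], b"world");
    assert_eq!(&copied[..], &[1, 2, 3]);
}
```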
Reviewed By: StanislavGlebik
Differential Revision: D20121350
fbshipit-source-id: eb31af2051fd8c9d31c69b502e2f6f1ce2190cb1
Summary:
The blobstore_healer has never waited for MyRouter before querying for slave
status, but it ended up implicitly working because creating a blobstore
required a SQL factory, and creating a SQL factory would result in waiting for
MyRouter.
Now that creating a blobstore doesn't require SQL factory unless you're going
to actually use it (which the healer isn't: it doesn't use a multiplexblob, it
uses the underlying blobstores instead), we no longer wait properly for
MyRouter, so if MyRouter isn't there when we boot, we crash.
This fixes that.
Reviewed By: ahornby
Differential Revision: D20094829
fbshipit-source-id: 82b7e8d893a01049d1f434ee8dff36a877a0d2f4
Summary:
This updates our multiplexed blobstore configuration to carry its own DB
config. The upshot of this change is that we can move the blobstore sync queue
(a fairly unruly table) to its own DB.
Another nice side effect of this is that it cleans up a bunch of other code by
finally decoupling the blobstore config from the DB config. For example,
places that need to instantiate a blobstore can now do so even without a DB
config (such as wireproto logging).
Obviously, this cannot land until we update the configs to include this. I'll
do so in Configerator prior to landing the diff.
Reviewed By: HarveyHunt
Differential Revision: D19973905
fbshipit-source-id: 79e4ff92cdb989aab4532decd3fe4fd6c55e2bb2
Summary:
The former implementation here was a little difficult to work with, and
resulted in a whole lot of cloning of closures, etc.
This updates the implementation to be a little simpler on the whole (async /
await is nicer for while loops, since you can use, well, loops)
It does slightly change a few parts of the behavior:
- The old implementation would wait for the replication lag duration. That's
not really correct. As we've observed several times this week, replication
lag usually drops quickly once it starts dropping. I.e. if the replication
lag is 10 seconds, it doesn't take 10 seconds to catch up. This gets more
important with big lag durations.
- I updated replication lag to be u64 instead of usize. usize doesn't really
make sense for something that has absolutely nothing to do with our pointer
size.
I also split out the logic for calculating how long we wait into a part that
cares about whether we are busy and one that cares about replication lag
(whereas the older one kinda mixed the two together). We wait for our own
throttling (i.e. sleep for a sec if we didn't do anything) before we wait for
replication lag, so the new code should have the desired behavior of:
- If we don't have much work to do, we sleep 1 second between each iteration
(but if we do have work, we don't).
- No matter what, if we have replication lag, we wait until that passes before
doing any work.
The old one did that too, but it mixed the two calculations together, and was
(at least in my opinion) kinda hard to reason about as a result.
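A sketch of the split, with the two concerns computed separately (illustrative, not the exact healer code):
```
use std::time::Duration;

/// Our own throttling: if the last iteration had no work, sleep a second
/// so an empty queue doesn't busy-spin.
fn own_delay(did_work: bool) -> Duration {
    if did_work {
        Duration::from_secs(0)
    } else {
        Duration::from_secs(1)
    }
}

/// Replication lag is handled separately: poll until the lag has actually
/// dropped, rather than sleeping for the reported lag duration.
fn wait_for_lag(mut current_lag: impl FnMut() -> u64, max_lag: u64) {
    while current_lag() > max_lag {
        std::thread::sleep(Duration::from_secs(1));
    }
}
```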
Reviewed By: StanislavGlebik
Differential Revision: D19997587
fbshipit-source-id: 1de6a9f9c1ecb56e26c304d32b907103b47b4728
Summary:
We had crashloops on this (which I'm fixing earlier in this stack), which
resulted in overloading our queue as we tried to repeatedly clear out 100K
entries at a time, rebooted, and tried again.
We can fix the root cause that caused us to die, but we should also make sure
crashloops don't result in ignoring lag altogether.
Also, while in there, convert some of this code to async / await to make it
easier to work on.
Reviewed By: HarveyHunt
Differential Revision: D19997589
fbshipit-source-id: 20747e5a37758aee68b8af2e95786430de55f7b1
Summary:
This commit manually synchronizes the internal move of
fbcode/scm/mononoke under fbcode/eden/mononoke which couldn't be
performed by ShipIt automatically.
Reviewed By: StanislavGlebik
Differential Revision: D19722832
fbshipit-source-id: 52fbc8bc42a8940b39872dfb8b00ce9c0f6b0800
Summary:
Modify the multiplexed blobstore implementation so that the
multiplex_id is written to the healer queue after a put. Further, update the
blobstore healer to only look at entries with the same multiplex ID as it's
configured to run with.
Reviewed By: ahornby
Differential Revision: D19770057
fbshipit-source-id: 41db19f0b0f84c048d49ab9e6258cccc89cf4195