Commit Graph

136 Commits

Thomas Orozco
21290702e1 third-party/rust: import async-compression + update zstd
Summary:
This imports the async-compression crate. We have an equivalent-ish crate in
common/rust, but it targets Tokio 0.1, whereas this community-supported crate
targets Tokio 0.2. It also offers a richer API: notably, it works on Streams,
whereas the async-compression crate we already have only supports AsyncWrite.

In the immediate term, I'd like to use this for transfer compression in
Mononoke's LFS Server. In the future, once all that stuff moves to Tokio 0.2,
we might also use it in the places in Mononoke where we currently use our own
async compression crate.

Finally, this also updates zstd: the version we link to from tp2 is actually
zstd 1.4.5, so it's a good idea to just get the same version of the zstd crate.

The zstd crate doesn't keep a great changelog, so it's hard to tell what has changed.
At a glance, it looks like the answer is not much, but I'm going to look to Sandcastle
to root out potential issues here.

Reviewed By: StanislavGlebik

Differential Revision: D23652335

fbshipit-source-id: e250cef7a52d640bbbcccd72448fd2d4f548a48a
2020-09-15 07:59:53 -07:00
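For illustration, here is a minimal sketch of the Stream-oriented use that motivated the import. It assumes async-compression around 0.3 with its `stream` and `zstd` features enabled; the exact constructor names may differ between releases:

```rust
use async_compression::stream::ZstdEncoder;
use bytes::Bytes;
use futures::stream::{self, TryStreamExt};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // A stream of uncompressed chunks, e.g. an LFS download body.
    let chunks = stream::iter(vec![
        Ok::<_, std::io::Error>(Bytes::from_static(b"hello ")),
        Ok(Bytes::from_static(b"world")),
    ]);
    // The encoder wraps the whole Stream (not just an AsyncWrite) and
    // yields zstd-compressed chunks.
    let compressed: Vec<Bytes> = ZstdEncoder::new(chunks).try_collect().await?;
    assert!(!compressed.is_empty());
    Ok(())
}
```
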
Aida Getoeva
b92c64af7d shed/sql: make queries! macros work with new Rust mysql client
Summary:
The shed/sql library is used mainly to communicate with MySQL and to provide a nice abstraction layer over mysql (which is used in production) and sqlite (used in integration tests). The library provides an interface that, on the MySQL side, was backed by raw connections and by MyRouter.
This diff introduces a new backend: the new MySQL client for Rust.

The new backend is exposed as a third variant alongside the current ones: sqlite, mysql (raw connections and MyRouter) and mysql2 (the new client). The main reason is that the current shed/sql interface for MySQL
(1) depends heavily on the mysql_async crate, (2) introduces much more complexity than the new client needs, and (3) will likely be refactored and cleaned up later, with the old paths deprecated.
So rather than overcomplicate things by implementing the existing interface for the new MySQL client, I simplified matters by adding it as a third backend option.

Reviewed By: farnz

Differential Revision: D22458189

fbshipit-source-id: 4a484b5201a38cc017023c4086e9f57544de68b8
2020-09-11 06:33:37 -07:00
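For context, a rough sketch of the `queries!` shape this diff extends, based on how call sites in this codebase typically look (the table, query names and exact generated signatures here are illustrative):

```rust
use sql::queries;

queries! {
    // Generates GetValue::query(&connection, &id), returning matching rows.
    read GetValue(id: u64) -> (String) {
        "SELECT value FROM kv WHERE id = {id}"
    }
    // Generates SetValue::query(&connection, &[(&id, &value)]) for inserts.
    write SetValue(values: (id: u64, value: String)) {
        none,
        "INSERT INTO kv (id, value) VALUES {values}"
    }
}
```
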
Simon Farnsworth
89e30973ff Report write errors when scrubbing
Summary: When we're scrubbing blobstores, it's not actually a success state if a scrub fails to write. Report this back to the caller; no one will usually be scrubbing unless they expect repair writes to succeed, and a failure is a sign that we need to investigate further.

Reviewed By: mitrandir77

Differential Revision: D23601541

fbshipit-source-id: d328935af9999c944719a6b863d0c86b28c54f59
2020-09-10 02:29:47 -07:00
David Tolnay
0cb8a052f5 Update formatter to rustfmt 2.0
Reviewed By: zertosh

Differential Revision: D23591021

fbshipit-source-id: e664aa2fdd3aaa457796a59080be6b94f604a112
2020-09-09 07:52:33 -07:00
Simon Farnsworth
aa2df38491 Improve errors on scrub failure
Summary:
With three blobstores in play, we have issues working out exactly what's wrong during a manual scrub. Make the error handling better:

1. Manual scrub adds the key as context for the failure.
2. Scrub error groups blobstores by content, so that you can see which blobstore is most likely to be wrong.

Reviewed By: ahornby, krallin

Differential Revision: D23565906

fbshipit-source-id: a199e9f08c41b8e967d418bc4bc09cb586bbb94b
2020-09-09 07:25:13 -07:00
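Point 2 is essentially a grouping step. A small illustrative sketch (not the actual scrub code):

```rust
use std::collections::HashMap;

// Group blobstore ids by the exact bytes each returned for a key: the
// blobstore(s) in the smallest group are the most likely to be wrong.
fn group_by_content(results: Vec<(String, Vec<u8>)>) -> HashMap<Vec<u8>, Vec<String>> {
    let mut groups: HashMap<Vec<u8>, Vec<String>> = HashMap::new();
    for (store_id, bytes) in results {
        groups.entry(bytes).or_default().push(store_id);
    }
    groups
}
```
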
Stanislau Hlebik
66fbdf72c7 mononoke: add sampling for redacted accesses
Summary:
Previously we did not log a redacted access if the previous access had been
logged less than MIN_REPORT_TIME_DIFFERENCE_NS ago. That doesn't work well with
our tests.

Let's instead add a sampling tunable.

Reviewed By: krallin

Differential Revision: D23595067

fbshipit-source-id: 47f6152945d9fdc2796fd1e74804e8bcf7f34940
2020-09-09 02:51:41 -07:00
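A sketch of what a sampling tunable can look like in place of the time-based threshold (hypothetical helper; the real tunables machinery is richer):

```rust
use rand::Rng;

// Log roughly one in `sample_rate` redacted accesses, instead of suppressing
// anything seen within MIN_REPORT_TIME_DIFFERENCE_NS of the last report.
fn should_log_redacted_access(sample_rate: u64) -> bool {
    if sample_rate <= 1 {
        return true; // 0 or 1 means "log every access"
    }
    rand::thread_rng().gen_range(0..sample_rate) == 0
}
```
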
David Tolnay
be0786f14b Prepare for rustfmt 2.0
Summary:
Generated by formatting with rustfmt 2.0.0-rc.2 and then a second time with fbsource's current rustfmt (1.4.14).

This results in formatting for which rustfmt 1.4 is idempotent but is closer to the style of rustfmt 2.0, reducing the amount of code that will need to change atomically in that upgrade.

 ---

*Why now?*: The 1.x branch is no longer being developed, and fixes like https://github.com/rust-lang/rustfmt/issues/4159 (which we need in fbcode) only land on the 2.0 branch.

 ---

Reviewed By: StanislavGlebik

Differential Revision: D23568780

fbshipit-source-id: b4b4a0aa683d236e2fdeb5b96d723ac2d84b9faf
2020-09-08 07:33:16 -07:00
Stanislau Hlebik
7b323a4fd9 mononoke: add log-only mode in redaction
Summary:
Before redacting something it would be good to check that this file is not
accessed by anything. Having log-only mode would help with that.

Reviewed By: ikostia

Differential Revision: D23503666

fbshipit-source-id: ae492d4e0e6f2da792d36ee42a73f591e632dfa4
2020-09-04 07:37:15 -07:00
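A sketch of the log-only idea (illustrative shape; the actual redaction configuration differs):

```rust
enum RedactionMode {
    Enforce,
    LogOnly,
}

// In log-only mode, record the access and let the read proceed, so we can
// check nothing legitimate touches a file before actually redacting it.
fn check_redacted(key: &str, is_redacted: bool, mode: &RedactionMode) -> Result<(), String> {
    if !is_redacted {
        return Ok(());
    }
    match mode {
        RedactionMode::Enforce => Err(format!("{}: access to redacted blob denied", key)),
        RedactionMode::LogOnly => {
            eprintln!("{}: redacted blob accessed (log-only)", key);
            Ok(())
        }
    }
}
```
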
David Tolnay
75c2118e01 Remove crate_root from Rust dependency info
Reviewed By: danobi

Differential Revision: D23430948

fbshipit-source-id: c4b374021325fc247121ceecd0e82a0291aa75d6
2020-08-31 14:43:24 -07:00
Alex Hornby
213edc10ce mononoke: limit queue peeking from scrub blobstore when one store has missing value
Summary: When running a manual scrub for a large repo with one empty store, we are doing one peek per key. For keys that have existed for some time this is unnecessary, as we know the key should exist, and it slows down the scrub.

Reviewed By: farnz

Differential Revision: D23054582

fbshipit-source-id: d2222350157ca37aa31b7792214af4446129c692
2020-08-14 02:37:45 -07:00
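The heuristic, sketched (the grace-period framing is an assumption; the real scrub options may differ):

```rust
// Skip the sync-queue peek for keys old enough that every store should
// already have them; only "young" keys may legitimately still be in flight.
fn needs_queue_peek(key_ctime: Option<i64>, now: i64, grace_secs: i64) -> bool {
    match key_ctime {
        Some(ctime) => now - ctime < grace_secs,
        None => true, // unknown age: peek to be safe
    }
}
```
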
Alex Hornby
d59dd787c5 mononoke: make blobstore ctime a bit easier to use
Summary: Ctime is an Option<i64>; rather than as_ctime()/into_ctime(), use the fact that it's fairly small and Copy to expose just .ctime()

Reviewed By: krallin

Differential Revision: D23081739

fbshipit-source-id: be62912eca02e5c29d7473d6f386d98df11000dd
2020-08-14 02:09:46 -07:00
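The change in miniature (struct and field names illustrative):

```rust
#[derive(Copy, Clone)]
struct BlobstoreMetadata {
    ctime: Option<i64>,
}

impl BlobstoreMetadata {
    // Option<i64> is small and Copy, so return it by value: no need for
    // separate borrowing/consuming as_ctime()/into_ctime() variants.
    fn ctime(&self) -> Option<i64> {
        self.ctime
    }
}
```
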
Alex Hornby
e0c6e249fe mononoke: add a non-thrift header to packblob so we can vary thrift protocol in future
Summary: Add a non-thrift header to packblob so we can vary thrift protocol in future.

Reviewed By: farnz

Differential Revision: D22953758

fbshipit-source-id: a114a350105e75cbe57f6c824295d863c723f32f
2020-08-07 03:43:56 -07:00
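A sketch of the envelope idea: a small fixed header outside the thrift payload (magic bytes and layout invented for illustration):

```rust
const MAGIC: &[u8; 4] = b"PKB\0";

// Prefix the serialized thrift with a magic plus a protocol-version byte, so
// the thrift protocol can be swapped later without ambiguity on read.
fn encode(version: u8, thrift_bytes: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(MAGIC.len() + 1 + thrift_bytes.len());
    out.extend_from_slice(MAGIC);
    out.push(version);
    out.extend_from_slice(thrift_bytes);
    out
}

fn decode(buf: &[u8]) -> Result<(u8, &[u8]), &'static str> {
    if buf.len() < 5 || &buf[..4] != &MAGIC[..] {
        return Err("not a packblob envelope");
    }
    Ok((buf[4], &buf[5..]))
}
```
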
Simon Farnsworth
0c3fe9b20f Fully asyncify blobstore sync queue
Summary: Move it from `'static` BoxFutures to async_trait and lifetimes

Reviewed By: markbt

Differential Revision: D22927171

fbshipit-source-id: 637a983fa6fa91d4cd1e73d822340cb08647c57d
2020-08-05 15:41:15 -07:00
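Roughly, the shape of the migration (trait and method names illustrative):

```rust
use async_trait::async_trait;

// Before: fn peek(&self, key: String) -> BoxFuture<'static, Result<...>>,
// which forces owned arguments and 'static captures. After: borrowed
// arguments, with the lifetimes handled by async_trait.
#[async_trait]
trait BlobstoreSyncQueue {
    async fn peek(&self, key: &str) -> anyhow::Result<Vec<String>>;
}
```
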
Simon Farnsworth
aa94fb9581 Add a multiplex mode that doesn't update the sync queue
Summary:
Backfillers and other housekeeping processes can run so far ahead of the blobstore sync queue that we can't empty it from the healer task as fast as the backfillers can fill it.

Work around this by providing a new mode that background tasks can use to avoid filling the queue if all the blobstores are writing successfully. This has a side-effect of slowing background tasks to the speed of the slowest blobstore, instead of allowing them to run ahead at the speed of the fastest blobstore and relying on the healer ensuring that all blobs are present.

Future diffs will add this mode to appropriate tasks.

Reviewed By: ikostia

Differential Revision: D22866818

fbshipit-source-id: a8762528bb3f6f11c0ec63e4a3c8dac08d0b4d8e
2020-08-05 06:35:46 -07:00
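The two modes, sketched as an enum (names invented):

```rust
enum SyncQueueMode {
    // Serving path: ack after the first successful write and enqueue the key,
    // letting the healer propagate it to the remaining blobstores.
    EnqueueAlways,
    // Background/backfill path: wait for every blobstore, and only enqueue if
    // one of them failed, so housekeeping jobs can't flood the queue.
    EnqueueOnlyOnFailure,
}
```
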
Simon Farnsworth
33c2a0c846 Update auto_impl to 0.4
Summary: We were using a git snapshot of auto_impl from somewhere between 0.3 and 0.4; 0.4 fixes a bug around `Self: 'lifetime` constraints on methods that was blocking work I'm doing in Mononoke, so update.

Reviewed By: dtolnay

Differential Revision: D22922790

fbshipit-source-id: 7bb68589a1d187393e7de52635096acaf6e48b7e
2020-08-04 18:12:45 -07:00
Santiago Alfonso Muñoz Rodriguez
007dc93916 Enumeration API for BlobStore keys
Summary:
- Enumeration API now provided via the BlobstoreKeySource trait
- Implementation for Fileblob and ManifoldBlob
- Modified populate_healer to use new api
- Modified fixrepocontents to use new api

Reviewed By: ahornby

Differential Revision: D22763274

fbshipit-source-id: 8ee4503912bf40d4ac525114289a75d409ef3790
2020-08-04 06:54:18 -07:00
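A rough sketch of what such an enumeration trait provides (signatures assumed, not the exact trait):

```rust
use async_trait::async_trait;

// Walk a blobstore's key space in pages, so jobs like populate_healer can
// enumerate existing keys instead of needing an external key list.
#[async_trait]
trait BlobstoreKeySource {
    async fn enumerate(
        &self,
        start_after: Option<String>,
        limit: usize,
    ) -> anyhow::Result<Vec<String>>;
}
```
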
Simon Farnsworth
f7e8931a56 Add a minimum successful writes count for MultiplexedBlobstore
Summary:
There are two reasons to want a write quorum:

1. One or more blobstores in the multiplex are experimental, and we don't want to accept a write unless the write is in a stable blobstore.
2. To reduce the risk of data loss if one blobstore loses data at a bad time.

Make it possible.

Reviewed By: krallin

Differential Revision: D22850261

fbshipit-source-id: ed87d71c909053867ea8b1e3a5467f3224663f6a
2020-08-04 02:45:38 -07:00
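The quorum rule in miniature (helper invented for illustration):

```rust
// A put is decided once `quorum` stores have succeeded, or once enough stores
// have failed that `quorum` successes are no longer reachable.
fn put_outcome(successes: usize, failures: usize, total: usize, quorum: usize) -> Option<bool> {
    if successes >= quorum {
        Some(true)
    } else if failures > total - quorum {
        Some(false)
    } else {
        None // still waiting on in-flight writes
    }
}
```
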
Simon Farnsworth
a5e9b79d7d Return all errors in the event of a multiplexed put failure
Summary:
With upcoming write quorum work, it'll be interesting to know all the failures that prevent a put from succeeding, not just the most recent, as the most recent may be from a blobstore whose reliability is not yet established.

Store and return all errors, so that we can see exactly why a put failed.

Reviewed By: ahornby

Differential Revision: D22896745

fbshipit-source-id: a3627a04a46052357066d64135f9bf806b27b974
2020-08-03 09:30:05 -07:00
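A sketch of the resulting error shape (illustrative; the real type differs):

```rust
// Keep every underlying failure keyed by blobstore id, rather than only the
// most recent one, so a failed put shows the full picture.
struct MultiplexPutFailure {
    errors: Vec<(String, anyhow::Error)>,
}
```
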
Simon Farnsworth
a9b8793d2d Add a write-mostly blobstore mode for populating blobstores
Summary:
We're going to add an SQL blobstore to our existing multiplex, which won't have all the blobs initially.

In order to populate it safely, we want to have normal operations filling it with the latest data, and then backfill from Manifold; once we're confident all the data is in here, we can switch to normal mode, and never have an excessive number of reads of blobs that we know aren't in the new blobstore.

Reviewed By: krallin

Differential Revision: D22820501

fbshipit-source-id: 5f1c78ad94136b97ae3ac273a83792ab9ac591a9
2020-08-03 04:36:19 -07:00
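A sketch of the read policy a write-mostly store implies (enum and helper invented):

```rust
enum StoreMode {
    ReadWrite,
    // Being backfilled: takes all writes, but normal reads skip it so we
    // don't pay for misses on blobs it doesn't hold yet.
    WriteMostly,
}

fn eligible_for_normal_read(mode: &StoreMode) -> bool {
    matches!(mode, StoreMode::ReadWrite)
}
```
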
Alex Hornby
ecb58ff8d7 mononoke: add cmdlib argument to control cachelib zstd compression
Summary:
Add a cmdlib argument to control cachelib zstd compression. The default behaviour is unchanged: the CachelibBlobstore will attempt compression when putting to the cache if the object is larger than the cachelib max size.

To make the cache behaviour more testable, this change also adds an option to do an eager put to the cache without the spawn. The default remains a lazy fire-and-forget put into the cache with tokio::spawn.

The motivation for the change is that when running the walker the compression putting to cachelib can dominate CPU usage for part of the walk, so it's best to turn it off and let those items be uncached as the walker is unlikely to visit them again (it only revisits items that were not fully derived).

Reviewed By: StanislavGlebik

Differential Revision: D22797872

fbshipit-source-id: d05f63811e78597bf3874d7fd0e139b9268cf35d
2020-07-31 01:12:02 -07:00
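The put-side policy, sketched with the zstd crate (simplified; the real cachelib integration differs):

```rust
// Compress on the way into cachelib only when enabled and the blob exceeds
// the cachelib item limit; otherwise store the bytes as-is.
fn cache_put_payload(
    blob: &[u8],
    max_item_size: usize,
    compress_enabled: bool,
) -> std::io::Result<Vec<u8>> {
    if compress_enabled && blob.len() > max_item_size {
        zstd::encode_all(blob, 0) // 0 selects zstd's default level
    } else {
        Ok(blob.to_vec())
    }
}
```
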
Simon Farnsworth
a40a8f36b7 Asyncify MultiplexedBlobstore
Summary:
We have two deficiencies to correct here; first, modernise the code without changing behaviour, to make them easier to fix later.

Deficiency 1 is that we always call the `on_put` handler; we need a mode that doesn't do that unless a blobstore returns an error, for jobs not waiting on a human.

Deficiency 2 is that we accept a write after one blobstore accepts it; there's a risk of that being the only copy if a certain set of race conditions is met.

Reviewed By: StanislavGlebik

Differential Revision: D22701961

fbshipit-source-id: 0990d3229153cec403717fcd4383abcdf7a52e58
2020-07-27 06:09:47 -07:00
Lukasz Piatkowski
0dd3c4e4bb add Mononoke integration tests CI (#26)
Summary:
This diff adds a minimal workflow for running integrations tests for Mononoke. Currently only one test is run and it fails.

This also splits the regular Mononoke CI into separate files for Linux and Mac to match the current style in Eden repo.
There are also "scopeguard::defer" fixes here that somehow escaped the CI tests.
Some tweaks have been made to "integration_runner_real.py" to make it runnable outside the FB context.
Lastly, "library.sh" was changed from using "[[ -v ... ]]" to "[[ -n "${...:-}" ]]" because the former is not supported by the default Bash version preinstalled on modern macOS.

Pull Request resolved: https://github.com/facebookexperimental/eden/pull/26

Reviewed By: krallin

Differential Revision: D22541344

Pulled By: lukaspiatkowski

fbshipit-source-id: 5023d147823166a8754be852c29b1e7b0e6d9f5f
2020-07-16 12:16:10 -07:00
Simon Farnsworth
78847ff88c Make BlobstoreSyncQueue use new futures
Summary: Stage 1 of a migration. The next step is to make all users of this trait use new futures; then I can come back, add lifetimes and references, and leave it modernised.

Reviewed By: StanislavGlebik

Differential Revision: D22460164

fbshipit-source-id: 94591183912c0b006b7bcd7388a3d7c296e60577
2020-07-10 06:43:13 -07:00
Thomas Orozco
ae917ba227 mononoke/virtually_sharded_blobstore: make sampling rate tunable
Summary: As it says in the title.

Reviewed By: farnz

Differential Revision: D22432526

fbshipit-source-id: 42726584689cbc2f5c9138b42b7bf77939921bdd
2020-07-08 09:07:19 -07:00
Arun Kulshreshtha
5f0181f48c Regenerate all Cargo.tomls after upgrade to futures 0.3.5
Summary: D22381744 updated the version of `futures` in third-party/rust to 0.3.5, but did not regenerate the autocargo-managed Cargo.toml files in the repo. Although this is a semver-compatible change (and therefore should not break anything), it means that affected projects would see changes to all of their Cargo.toml files the next time they ran `cargo autocargo`.

Reviewed By: dtolnay

Differential Revision: D22403809

fbshipit-source-id: eb1fdbaf69c99549309da0f67c9bebcb69c1131b
2020-07-06 20:49:43 -07:00
Thomas Orozco
ce0af2d591 mononoke/virtually_sharded_blobstore: deduplicate puts based on data being put
Summary:
This updates the virtually_sharded_blobstore to deduplicate puts only if the
data being put is actually the data we have put in the past. This is done by
keeping track of the hash of things we've put in the presence cache.

This has 2 benefits:

- This is safer. We only dedupe puts we 100% know succeeded (because this
  particular instance was the one to attempt the put).
- This creates fewer surprises; notably, it lets us overwrite data in the
  backing store (if we are writing something different).

Reviewed By: StanislavGlebik

Differential Revision: D22392809

fbshipit-source-id: d76a49baa9a5749b0fb4865ee1fc1aa5016791bc
2020-07-06 12:10:46 -07:00
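The dedupe rule in miniature (structure invented; the real presence cache lives in cachelib):

```rust
use std::collections::HashMap;

// Remember a hash of what *we* wrote for each key; a later put may only be
// skipped if it would write those same bytes again.
struct PresenceCache {
    put_hashes: HashMap<String, u64>,
}

impl PresenceCache {
    fn can_skip_put(&self, key: &str, data_hash: u64) -> bool {
        self.put_hashes.get(key) == Some(&data_hash)
    }

    fn record_put(&mut self, key: String, data_hash: u64) {
        self.put_hashes.insert(key, data_hash);
    }
}
```
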
Thomas Orozco
19b31ead9d mononoke/virtually_sharded_blobstore: make race tests a little more forgiving
Summary:
Running those on my devserver, I noticed they can be a bit flaky. They're
racy on purpose, but let's relax them a bit.

We have a lot of margin here — our blobstore is rate limited at one request
every 10ms, and we need to do 100 requests (the goal is to show that they don't
all wait), so 100ms is fine to prove that they're not rate limited when sharing
the same data.

Reviewed By: StanislavGlebik

Differential Revision: D22392810

fbshipit-source-id: 2e3c9cdf19b0e4ab979dfc000fbfa8da864c4fd6
2020-07-06 12:10:46 -07:00
Thomas Orozco
07907b2b26 mononoke/virtually_sharded_blobstore: merge in the context_concurrency_blobstore
Summary:
There is inevitably interaction between caching, deduplication and rate
limiting:

- You don't want the rate limiting to be above caching (in the blobstore stack,
  that is), because you shouldn't rate limit cache hits (this is where we are
  today).
- You don't want the rate limiting to be below deduplication, because then you get
  priority inversion where a low-priority rate-limited request might hold the
  semaphore while a higher-priority, non rate limited request wants to do the
  same fetch (we could have moved rate limiting here prior to introducing
  deduplication, but I didn't do it earlier because I wanted to eventually
  introduce deduplication).

So, now that we have caching and deduplication in the same blobstore, let's
also incorporate rate limiting there!

Note that this also brings a potential motivation for moving Memcache into this
blobstore, in case we don't want rate limiting to apply to requests before they
go to the _actual_ blobstore (I did not do this in this diff).

The design here when accessing the blobstore is as follows:

- Get the semaphore
- Check if the data is in cache, if so release the semaphore and return the
  data.
- Otherwise, check if we are rate limited.

Then, if we are rate limited:

- Release the semaphore
- Wait for our turn
- Acquire the semaphore again
- Check the cache again (someone might have put the data we want while we were
  waiting).
    - If the data is there, then return our rate limit token.
    - If the data isn't there, then proceed to query the blobstore.

If we aren't rate limited, then we just proceed to query the blobstore.

There are a couple subtle aspects of this:

- If we have a "late" cache hit (i.e. after we waited for rate limiting), then
  we'll have waited but we won't need to query the blobstore.
    - This is important when a large number of requests for the same key
      arrive at the same time and get rate limited. If we don't do this second
      cache check or if we don't return the token, then we'll consume a rate
      limiting token for each request (instead of 1 for the first request).
- If a piece of data isn't cacheable, we should treat it like a cache hit with
  regard to semaphores (i.e. release early), but like a miss with regard to
  rate limits (i.e. wait).

Both of those are captured in the code by returning the `Ticket` on a cache
hit: we can then choose either to give the ticket back (on a cache hit) or to
wait on it (on a cache miss).

(all of this logic is covered by unit tests; remove any of the blocks in
`Shards::acquire` and a test will fail)

Reviewed By: farnz

Differential Revision: D22374606

fbshipit-source-id: c3a48805d3cdfed2a885bec8c47c173ee7ebfe2d
2020-07-06 04:38:31 -07:00
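A condensed sketch of that flow, using a tokio 1.x semaphore (stubs stand in for the cache, rate limiter and blobstore; the real `Shards::acquire` threads a `Ticket` rather than these ad-hoc drops):

```rust
use tokio::sync::Semaphore;

fn cache_get(_key: &str) -> Option<Vec<u8>> { None }             // stub
fn rate_limited() -> bool { true }                               // stub
async fn wait_for_turn() {}                                      // stub
async fn query_blobstore(_key: &str) -> Option<Vec<u8>> { None } // stub

async fn get(shard: &Semaphore, key: &str) -> Option<Vec<u8>> {
    let mut permit = shard.acquire().await.ok()?;
    if let Some(hit) = cache_get(key) {
        return Some(hit); // cache hit: the permit drops (releases) right here
    }
    if rate_limited() {
        drop(permit);          // don't hold the shard while queueing
        wait_for_turn().await; // wait for our rate-limit turn
        permit = shard.acquire().await.ok()?;
        if let Some(hit) = cache_get(key) {
            // Late hit: someone cached the data while we waited. The real
            // code hands its unused rate-limit token back via the Ticket.
            return Some(hit);
        }
    }
    // The shard is held across the actual fetch, deduplicating callers.
    let value = query_blobstore(key).await;
    drop(permit);
    value
}
```
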
Thomas Orozco
de731a89fc mononoke/virtually_sharded_blobstore: log deduplicated puts
Summary:
If anything were to go wrong, we'd be happy to know which puts we ignored. So,
let's log them.

Reviewed By: farnz

Differential Revision: D22356714

fbshipit-source-id: 5687bf0fc426421c5f28b99a9004d87c97106695
2020-07-03 05:53:11 -07:00
Thomas Orozco
c68100f46e mononoke/virtually_sharded_blobstore: spawn before taking semaphores
Summary:
I canaried this on Fastreplay, but unfortunately that showed that sometimes we
just deadlock, or get so slow we might as well be deadlocked (and it happens
pretty quickly, after ~20 minutes). I tried spawning all the `get()` futures,
and that fixes the problem (but it makes gettreepack noticeably slower), so
that suggests something somewhere is creating futures, polling them a little
bit, then never driving them to completion.

For better or worse, I'd experienced the exact same problem with the
ContextConcurrencyBlobstore (my initial attempt at QOS, which also used a
semaphore), so I was kinda expecting this to happen.

In a sense, this is nice, because we've suspected there were things like that
in the codebase for a while (e.g. the occasional SQL timeout we see, where it
looks like MySQL responds fast but we don't actually poll the future until past
the timeout), and it gives us a somewhat convenient repro.

In another sense, it's annoying because it blocks this work :)

So, to work around the problem, for now, let's spawn futures to force the work
to complete when a semaphore is held. I originally had an unconditional spawn
here, but that is too expensive for the cache-hit code path and slows things
down (by about ~2x).

However, spawning only when we will actually query the blobstore isn't as
expensive, and that seems to be fine (in fact it is a ~20% p99 perf improvement,
though the exact number depends on the number of shards we use for this, which I've had to tweak a bit).

https://pxl.cl/1c18H

I did find what I think is one potential instance of this problem in
`bounded_traversal_stream`, which is that we never try to poll `scheduled` to
completion. Instead, we just poll for the next ready future in our
FuturesUnordered, and if that turns out to be synchronous work then we'll just
re-enqueue more stuff (and sort of starve async work in this FuturesUnordered).

I tried updating bounded traversal to try a fairer implementation (which polls
everything), but that wasn't sufficient to make the problem go away, so I think
this is something we have to just accept for now (note that this actually has
some interesting perf impact in isolation: it's a free ~20% perf improvement on
p95+: https://pxl.cl/1c192)

see 976b6b92293a0912147c09aa222b2957873ef0df if you're curious

Reviewed By: farnz

Differential Revision: D22332478

fbshipit-source-id: 885b84cda1abc15c51fbc5dd34473e49338e13f4
2020-07-03 05:53:11 -07:00
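The workaround in miniature (stub function invented): spawning hands the fetch to the runtime, which drives it to completion even if the original caller stops polling, and we only pay that cost on the cache-miss path:

```rust
async fn query_blobstore(_key: &str) -> Option<Vec<u8>> { None } // stub

// Spawned work keeps running even if the future awaiting it is parked, so a
// held semaphore can't be pinned by a caller that never polls again.
async fn fetch_on_miss(key: String) -> Option<Vec<u8>> {
    tokio::spawn(async move { query_blobstore(&key).await })
        .await
        .ok()
        .flatten()
}
```
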
Thomas Orozco
2082621d51 mononoke/virtually_sharded_blobstore: add ODS metrics
Summary: Those are useful to track.

Reviewed By: farnz

Differential Revision: D22332480

fbshipit-source-id: 43f5cd7121c4aa497d961015e7c16973615798d1
2020-07-03 05:53:10 -07:00
Thomas Orozco
1db62473f2 mononoke/virtually_sharded_blobstore: track perf counters
Summary: Like it says in the title. Those are useful!

Reviewed By: farnz

Differential Revision: D22332479

fbshipit-source-id: f9bddad75fcbed2593c675f9ba45965bd87f1575
2020-07-03 05:53:10 -07:00
Thomas Orozco
c297024a52 mononoke/virtually_sharded_blobstore: do not delay reads for uncacheable data
Summary:
The goal of this blobstore is to dedupe reads by waiting for them to finish and
hit cache instead (and also to dedupe writes, but that's not relevant here).

However, this is not a desirable feature if a blob cannot be stored in cache,
because then we're serializing accesses for no good reason. So, when that
happens, we store "this cannot be stored in cache", and we release reads
immediately.

Reviewed By: farnz

Differential Revision: D22285269

fbshipit-source-id: be7f1c73dc36b6d58c5075172e5e3c5764eed894
2020-07-03 05:53:10 -07:00
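The presence-cache state this implies, sketched (names invented):

```rust
enum PresenceState {
    // The blob is (or will shortly be) in cache: later readers should wait
    // and take the cache hit instead of fetching again.
    Cached,
    // The blob can't be stored in cache (e.g. too large): release readers
    // immediately, since serializing them buys nothing.
    NotCacheable,
}
```
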
Thomas Orozco
b9319a4d32 mononoke/virtually_sharded_blobstore: add a newtype for cache keys + a prefix
Summary:
I'm going to store things that aren't quite the exact blobs in here, so on the
off chance that we somehow have two caching blobstores (the old one and this
one) that use the same pools, we should avoid collisions by using a prefix.

And, since I'm going to use a prefix, I'm adding a newtype wrapper to not use
the prefixed key as the blobstore key by accident.

Differential Revision: D22285271

fbshipit-source-id: e352ba107f205958fa33af829c8a46896c24027e
2020-07-03 05:53:10 -07:00
Thomas Orozco
bf3c2e19f0 mononoke/virtually_sharded_blobstore: a caching blobstore that deduplicates
Summary:
This introduces a caching blobstore that deduplicates reads and writes. The
underlying motivation is to improve performance for processes that might find
themselves inadvertently reading the same data concurrently from a bunch of
independent callsites (most of Mononoke), or writing the same bit of data over
and over again.

The latter is particularly useful for things like commit cloud backfilling in
WWW, where some logger commits include the same blob being written hundreds or
thousands of times, and cause us to overload the underlying Zippy shard in
Manifold. This is however a problem we've also encountered in the past in e.g.
the deleted files manifest and had to solve there. This blobstore is a little
different in the sense that it solves that problem for all writers.

This comes at the cost of writes being dropped if they're known to be
redundant, which prevents updates through this blobstore. This is desirable for
most of Mononoke, but not all (notably, for skiplist updates it's not great).

For now, I'm going to add this behind an opt-in flag, and later on I'm planning
to make it opt-out and turn it off there (I'm thinking of using the CoreContext
for this).

Reviewed By: farnz

Differential Revision: D22285270

fbshipit-source-id: 4e3502ab2da52a3a0e0e471cd9bc4c10b84a3cc5
2020-07-03 05:53:10 -07:00
Stanislau Hlebik
f55fa975a5 mononoke: fix unused imports
Reviewed By: krallin

Differential Revision: D22281331

fbshipit-source-id: 656ba6500193bd179d1e6cd1443de3e85d37c597
2020-06-29 03:22:11 -07:00
Simon Farnsworth
7938a1957a Support BlobstoreWithLink in Sqlblob
Summary: We designed the schema to make this simple to implement - it's literally a metadata read and a metadata write.

Reviewed By: ikostia

Differential Revision: D22233922

fbshipit-source-id: b392b4a3a23859c6106934f73ef60084cc4de62c
2020-06-26 03:54:42 -07:00
Simon Farnsworth
b1c85aaf4b Switch Blobstore to new-style futures
Summary:
Eventually, we want everything to be `async`/`await`; as a stepping stone in that direction, switch the remaining blobstore traits to new-style futures.

This just pushes the `.compat()` out to old-style futures, but it makes the move to non-'static lifetimes easier, as all the compile errors will relate to lifetime issues.

Reviewed By: krallin

Differential Revision: D22183228

fbshipit-source-id: 3fe3977f4469626f55cbf5636d17fff905039827
2020-06-26 03:54:42 -07:00
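The stepping-stone shape, roughly (trait heavily simplified): new-style `BoxFuture` returns that keep `'static` for now, with `.compat()` pushed out to old-futures callers:

```rust
use futures::future::BoxFuture;

// Still 'static (arguments are owned), but already new-style: the follow-up
// can shrink these lifetimes in place and fix the resulting borrow errors.
trait Blobstore: Send + Sync {
    fn get(&self, key: String) -> BoxFuture<'static, anyhow::Result<Option<Vec<u8>>>>;
    fn put(&self, key: String, value: Vec<u8>) -> BoxFuture<'static, anyhow::Result<()>>;
}
```
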
Simon Farnsworth
454de31134 Switch Loadable and Storable interfaces to new-style futures
Summary:
Eventually, we want everything to be `async`/`await`; as a stepping stone in that direction, switch some of the blobstore interfaces to new-style `BoxFuture` with a `'static` lifetime.

This does not enable any fixes at this point, but does mean that `.compat()` moves to the places that need old-style futures instead of new. It also means that the work needed to make the transition fully complete is changed from a full conversion to new futures, to simply changing the lifetimes involved and fixing the resulting compile failures.

Reviewed By: krallin

Differential Revision: D22164315

fbshipit-source-id: dc655c36db4711d84d42d1e81b76e5dddd16f59d
2020-06-25 08:45:37 -07:00
Alex Hornby
5e9223f633 mononoke: add link support to CountedBlobstore
Summary: Add link support to CountedBlobstore

Reviewed By: StanislavGlebik

Differential Revision: D22090644

fbshipit-source-id: 36dc5454f1ca12c91d0eac6e5059f554ac5cb352
2020-06-22 03:15:53 -07:00
Alex Hornby
1458abb967 mononoke: fix cacheblob test build
Summary: Fix cacheblob test build

Differential Revision: D22158585

fbshipit-source-id: 2b702203b52e8dbf04c6afce1b8b3795101f5043
2020-06-22 02:31:40 -07:00
Stanislau Hlebik
dc84f9741d mononoke: try to compress values if they are above cachelib limit
Summary: If a value is above cachelib limit let's try to compress it.

Reviewed By: krallin

Differential Revision: D22139644

fbshipit-source-id: 9eb366e8ec94fe66529d27892a988b035989332a
2020-06-20 01:05:54 -07:00
Jeremy Fitzhardinge
c97d050994 eden: fix up unused Rust dependencies
Reviewed By: StanislavGlebik

Differential Revision: D22132460

fbshipit-source-id: 4a86bdf31254172aa33ff286127429b956e606ec
2020-06-19 16:11:09 -07:00
Kostia Balytskyi
76471a8505 fix unused dependencies breakages
Summary: A bunch of our dependencies weren't really used, and this fact has recently become a source of hard failures. This diff is an attempt to fix it.

Reviewed By: StanislavGlebik

Differential Revision: D22136288

fbshipit-source-id: 4ae265a93e155312ae086647a27ecf1cbbd57e9c
2020-06-19 06:49:04 -07:00
Jeremy Fitzhardinge
35b292ce9d eden: manual dependency fixes
Summary:
Tooling can't handle named_deps yet, but it can warn about them

P133451794

Reviewed By: StanislavGlebik

Differential Revision: D22083499

fbshipit-source-id: 46de533c19b13b2469e912165c1577ddb63d15cd
2020-06-17 17:55:04 -07:00
Jeremy Fitzhardinge
1b4edb5567 eden: remove unused Rust dependencies
Summary:
Remove unused dependencies for Rust targets.

This failed to remove the dependencies in eden/scm/edenscmnative/bindings
because of the extra macro layer.

Manual edits (named_deps) and misc output in P133451794

Reviewed By: dtolnay

Differential Revision: D22083498

fbshipit-source-id: 170bbaf3c6d767e52e86152d0f34bf6daa198283
2020-06-17 17:55:03 -07:00
Alex Hornby
50ed8dd53f mononoke: add ZstdFromDictValue decoding to packblob
Summary: Add ZstdFromDictValue decoding to packblob

Reviewed By: farnz

Differential Revision: D22038110

fbshipit-source-id: 1f3a6c1c511b10f97d0a68885352d6d5c4725ceb
2020-06-17 03:49:56 -07:00
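For reference, a sketch of dictionary decoding with the zstd crate (usage assumed from the crate's public API; packblob's plumbing differs):

```rust
use std::io::Read;

// Blobs packed as zstd-with-dictionary need the dictionary bytes (e.g. the
// decompressed base blob) before they can be decoded.
fn decode_with_dict(compressed: &[u8], dict: &[u8]) -> std::io::Result<Vec<u8>> {
    let mut decoder = zstd::Decoder::with_dictionary(compressed, dict)?;
    let mut out = Vec::new();
    decoder.read_to_end(&mut out)?;
    Ok(out)
}
```
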
Alex Hornby
9c53e07e46 mononoke: add optional compress to packblob put
Summary:
Add optional compression on put, controlled by a command line option.

Other than costing some CPU time, this may be a good option when populating repos from existing uncompressed stores to new stores.

Reviewed By: farnz

Differential Revision: D22037756

fbshipit-source-id: e75190ddf9cfd4ed3ea9a18a0ec6d9342a90707b
2020-06-17 02:35:04 -07:00
Alex Hornby
1a3968376c mononoke: add zstd decoding to packblob
Summary: Add zstd decoding support to packblob so that if a store contains individually zstd-compressed blobs, we can load them on get().

Reviewed By: farnz

Differential Revision: D22037755

fbshipit-source-id: 41d85be6fcccf14fb198f6ea33a7ca26c4527a46
2020-06-17 02:35:03 -07:00
Alex Hornby
18586c5ece mononoke: add repo prefix detection and removal for embedded keys
Summary:
Add a regex for repo prefix detection, and use it in prefixblob for removal of repo prefix from blob-embedded keys.

This is important to keep blobs copyable between repos, and to allow dedupe of the same blob across two repos.

I looked at alternative approaches, such as passing the prefix down from PrefixBlob::new(), but that was fragile and didn't cover use cases like blobstore_healer, where no prefixblob is configured at all but packblob will be in the blobstore stack. Using a pattern is the only real option in such "all repo" cases.

The aim of binding the pattern and its tests closely to the prefix generation is to make it hard for someone to get them out of sync, and provide a clear local test failure if they do.

Reviewed By: farnz

Differential Revision: D22034638

fbshipit-source-id: 95a1c2e1ef81432cba606c22afa16b026f59fd5f
2020-06-17 02:35:03 -07:00
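A sketch of the prefix-stripping idea (this regex is illustrative; as described above, the real pattern lives next to the prefix generation and is tested against it):

```rust
use regex::Regex;

// Strip a leading "repo<digits>." from an embedded key so the blob remains
// byte-identical, and therefore dedupable/copyable, across repos.
fn strip_repo_prefix(embedded_key: &str) -> String {
    let re = Regex::new(r"^repo[0-9]+\.").expect("static pattern compiles");
    re.replace(embedded_key, "").into_owned()
}
```
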