Summary:
This is the mechanical part of the rename: it does not change any commit
messages in tests, nor the Scuba table name/config setting. Those are more
complex.
Reviewed By: krallin
Differential Revision: D16890120
fbshipit-source-id: 966c0066f5e959631995a1abcc7123549f7495b6
Summary: Clean up non-test usage of CoreContext::test_mock left over from T37478150
Reviewed By: farnz
Differential Revision: D16804838
fbshipit-source-id: f420b8186557a42e9b6c78437c0fb76c9a343b31
Summary: This updates our repo config to allow passing through Filestore params. This will be useful to conditionally enable Filestore chunking for new repos.
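As a hypothetical sketch of what such a pass-through might look like in a repo config file (the key names below are assumptions for illustration, not taken from the commit):

```toml
# Hypothetical shape only: the commit does not name the actual keys.
[filestore]
chunk_size = 1048576   # bytes per chunk; enables chunking when set
concurrency = 16       # parallel chunk operations
```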
Reviewed By: HarveyHunt
Differential Revision: D16580700
fbshipit-source-id: b624bb524f0a939f9ce11f9c2983d49f91df855a
Summary:
NOTE: This isn't 100% complete yet. I have a little more work to do around the aliasverify binary, but I think it'll make sense to rework this a little bit with the Filestore anyway.
This patch incorporates the Filestore throughout Mononoke. At this time, what this means is:
- Blobrepo methods return streams of `FileBytes`.
- Various callsites that need access to `FileBytes` call `concat2` on those streams.
This also eliminates the Sha256 aliasing code that we had written for LFS and replaces it with a Filestore-based implementation.
However, note that this does _not_ change how files submitted through `unbundle` are written to blobstores right now. Indeed, those contents are passed into the Filestore through `store_bytes`, which doesn't do chunking. This is intentional since it lets us use LFS uploads as a testbed for chunked storage before turning it on for everything else (also, chunking those requires further refactoring of content uploads, since right now they don't expect the `ContentId` to come back through a Future).
The goal of doing it this way is to make the transition simpler. In other words, this diff doesn't change anything functionally; it just updates the underlying API we use to access files. This is also important for a smooth release: if we had new servers that started chunking things while old servers tried to read them, things would be bad. Doing it this way ensures that doesn't happen.
This means that streaming is there, but it's not being leveraged just yet. I'm planning to do so in a separate diff, starting with the LFS read and write endpoints in
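To illustrate the chunking model described above, here is a minimal synchronous sketch (the real Filestore is async and content-addressed; these function names are hypothetical, not the actual API):

```rust
/// Split file contents into fixed-size chunks, the way a chunked
/// Filestore stores a large blob as several smaller ones.
fn chunk(data: &[u8], chunk_size: usize) -> Vec<Vec<u8>> {
    data.chunks(chunk_size).map(|c| c.to_vec()).collect()
}

/// Reading reverses the split: concatenate the chunk stream back into
/// the full bytes, analogous to calling `concat2` on a stream of chunks.
fn concat(chunks: &[Vec<u8>]) -> Vec<u8> {
    chunks.iter().flatten().copied().collect()
}

fn main() {
    let data = b"hello mononoke".to_vec();
    let chunks = chunk(&data, 4);
    // 14 bytes in chunks of 4 -> 3 full chunks plus 1 partial chunk
    assert_eq!(chunks.len(), 4);
    // Concatenating the chunks round-trips the original content.
    assert_eq!(concat(&chunks), data);
}
```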
Reviewed By: farnz
Differential Revision: D16440671
fbshipit-source-id: 02ae23783f38da895ee3052252fa6023b4a51979
Summary:
It's used in only a very few places, and most likely by accident. We pass
the logger via CoreContext now.
Reviewed By: krallin
Differential Revision: D16336953
fbshipit-source-id: 36ea4678b3c3df448591c606628b93ff834fae45
Summary:
Before this diff, the `RepoBlobstore` type alias (and the newly-added
`RepoBlobstoreArgs` struct) lived in the `blobrepo/blob_changeset` crate,
which is not an obvious place for them. That would be fine had these things
been used only locally, but `RepoBlobstore` is a reasonably widely used type
alias across our codebase, and importing it from `blob_changeset` seems
weird. Let's move it into a dedicated crate.
Reviewed By: StanislavGlebik
Differential Revision: D16174126
fbshipit-source-id: b83e345adcfe567e4a67c8a1621f3a789fab63c6
Summary:
This diff does two things:
- resolves a problem with dropping censorship information when calling
`in_memory_writes_READ_DOC_COMMENT`
- prevents someone from accidentally creating a `BlobRepo` where the internal blobstore's prefix differs from the `repoid`. While the prefix is conceptually unrelated to a blobstore, we do care that existing blobstores continue to work, so we need this safeguard.
Reviewed By: farnz
Differential Revision: D16163225
fbshipit-source-id: fc1c9d4dc32f6958b4b0e2e61026c1f3fe5f3b17
Summary:
Report to Scuba whenever someone tries to access a blacklisted key in a blobstore. Scuba reporting is done for any `get` or `put` method call.
Because of possible overload (given the high number of requests Mononoke receives, and that CensoredBlobstore performs the verification before the blobstore caching layer), I chose to report at most one bad request per second. If multiple requests for blacklisted keys are made within one second, only the first is reported. Not reporting all of them is not ideal, but performance-wise it is the best solution.
NOTE: I also wrote an implementation using `RwLock` (instead of the current `AtomicI64`), but atomic variables should be faster than locks, so I gave up on that idea.
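A minimal sketch of that once-per-second idea using an atomic timestamp rather than a lock (illustrative only; the names are hypothetical, and this is not the actual CensoredBlobstore code):

```rust
use std::sync::atomic::{AtomicI64, Ordering};

/// Lets at most one event through per second, using a single atomic
/// timestamp instead of a lock.
pub struct OncePerSecond {
    // Unix timestamp (in seconds) of the last reported event.
    last_report: AtomicI64,
}

impl OncePerSecond {
    pub fn new() -> Self {
        OncePerSecond { last_report: AtomicI64::new(0) }
    }

    /// Returns true if the caller should report this event.
    pub fn should_report(&self, now_secs: i64) -> bool {
        let last = self.last_report.load(Ordering::Relaxed);
        if now_secs > last {
            // Only one thread wins the CAS for a given second; the rest skip.
            self.last_report
                .compare_exchange(last, now_secs, Ordering::Relaxed, Ordering::Relaxed)
                .is_ok()
        } else {
            false
        }
    }
}

fn main() {
    let limiter = OncePerSecond::new();
    assert!(limiter.should_report(100));  // first event in second 100: reported
    assert!(!limiter.should_report(100)); // same second: suppressed
    assert!(limiter.should_report(101));  // next second: reported again
}
```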
Reviewed By: ikostia, StanislavGlebik
Differential Revision: D16108456
fbshipit-source-id: 9e5338c50a1c7d15f823a2b8af177ffdb99e399f
Summary:
Seems cleaner this way. Also allows the `admin` tool to initialize
a censored blobstore.
Differential Revision: D16154919
fbshipit-source-id: f5edacc8b8332c67f1f5dfaf9bf49b4aeaecb33a
Summary:
Added an option to control, per repository, whether censoring is enabled or
disabled. The option is added in `server.toml` as `censoring` and is set to
true or false. If `censoring` is not specified, it defaults to true
(censoring is enabled).
Disabling `censoring` skips the check for whether a key is blacklisted,
therefore all files are fetchable.
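Based on the description above, the setting might look like this in `server.toml` (only the `censoring` key is confirmed by the text; the surrounding layout is illustrative):

```toml
# server.toml (illustrative layout)
# When omitted, censoring defaults to true.
censoring = false
```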
Reviewed By: ikostia
Differential Revision: D16029509
fbshipit-source-id: e9822c917fbcec3b3683d0e3619d0ef340a44926
Summary:
CensoredBlob was placed between Blobstore and PrefixBlobstore. I moved CensoredBlob so that it is now a wrapper around PrefixBlobstore. This means the key is checked before the `repoid` prefix is appended to it.
Moving CensoredBlob on top of PrefixBlobstore provides better isolation from the existing blobstores: CensoredBlob no longer interacts with the underlying layers, and future changes to those layers will most probably not impact CensoredBlob's implementation.
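The layering described above can be sketched roughly like this (a simplified synchronous model; the real traits are async, and the names here are only approximations of the actual code):

```rust
use std::collections::{HashMap, HashSet};

// Simplified, synchronous stand-in for the real async Blobstore trait.
trait Blobstore {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
}

// In-memory leaf store.
struct MemBlob(HashMap<String, Vec<u8>>);
impl Blobstore for MemBlob {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
}

/// Prepends a repo-specific prefix to every key before hitting the inner store.
struct PrefixBlobstore<T> { inner: T, prefix: String }
impl<T: Blobstore> Blobstore for PrefixBlobstore<T> {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.inner.get(&format!("{}{}", self.prefix, key))
    }
}

/// Checks the blacklist against the *unprefixed* key, then delegates.
struct CensoredBlob<T> { inner: T, blacklist: HashSet<String> }
impl<T: Blobstore> Blobstore for CensoredBlob<T> {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        if self.blacklist.contains(key) {
            return None; // the real code returns a censorship error instead
        }
        self.inner.get(key)
    }
}

fn main() {
    let mut raw = HashMap::new();
    raw.insert("repo0.ok".to_string(), b"data".to_vec());
    raw.insert("repo0.bad".to_string(), b"secret".to_vec());
    let prefixed = PrefixBlobstore { inner: MemBlob(raw), prefix: "repo0.".to_string() };
    let mut blacklist = HashSet::new();
    blacklist.insert("bad".to_string());
    let censored = CensoredBlob { inner: prefixed, blacklist };
    assert_eq!(censored.get("ok"), Some(b"data".to_vec()));
    assert_eq!(censored.get("bad"), None); // blocked before the prefix is applied
}
```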
Reviewed By: ikostia
Differential Revision: D15900610
fbshipit-source-id: 391594355d766f43638f3152b56d4e9acf49af32
Summary: Add type safety to `abomonation_future_cache` by requiring usage of `VolatileLruCachePool`, and make that change for all usages of `LruCachePool`.
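The type-safety idea can be illustrated with the newtype pattern (the names below only mirror the commit; this is not the actual cache API):

```rust
/// A shared underlying pool type.
struct LruCachePool { name: String }

/// Newtype wrapper: a distinct type for the volatile flavour, so the
/// compiler rejects passing a plain pool where a volatile one is required.
struct VolatileLruCachePool(LruCachePool);

/// Only accepts the volatile flavour; a bare `LruCachePool` won't compile here.
fn cache_name(pool: &VolatileLruCachePool) -> &str {
    &pool.0.name
}

fn main() {
    let pool = VolatileLruCachePool(LruCachePool { name: "blobstore".to_string() });
    assert_eq!(cache_name(&pool), "blobstore");
    // cache_name(&LruCachePool { .. }) would be a type error, which is the point.
}
```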
Reviewed By: farnz
Differential Revision: D15882275
fbshipit-source-id: 3f192142af254d7b6b8ea7f9cc586c2034c97b93
Summary:
The new name better reflects what this function does: it may return a cached
version of filenodes that might be out of date.
Reviewed By: aslpavel
Differential Revision: D15896734
fbshipit-source-id: caf4f1d3a9a29889327c3373ac886687ec916903
Summary:
Looks like D15166925 conflicted with D15199637, which has broken our builds. This fixes that.
#quickstamp
Reviewed By: StanislavGlebik
Differential Revision: D15321997
fbshipit-source-id: 35c39a51c183e6153e6214f950262f83050b4bf5
Summary:
This synthetic benchmark/simulation:
- creates a `BlobRepo` with delayed implementations of the main components, but with all caches enabled, since most of our code depends heavily on caching
- includes a stack generator which can produce stacks of changesets
- in this particular benchmark, exercises the bonsai->hg generation path
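The "delayed components" idea can be sketched like this (illustrative only; the real benchmark wraps async `BlobRepo` components, and these names are hypothetical):

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Wraps a component so every call pays a fixed latency, simulating the
/// round-trip cost of a real backend while keeping everything in-process.
struct Delayed<T> { inner: T, latency: Duration }

impl<T> Delayed<T> {
    fn call<R>(&self, f: impl FnOnce(&T) -> R) -> R {
        sleep(self.latency); // simulate network/storage round-trip
        f(&self.inner)
    }
}

fn main() {
    let store = Delayed { inner: vec![1u8, 2, 3], latency: Duration::from_millis(5) };
    let start = Instant::now();
    let len = store.call(|v| v.len());
    assert_eq!(len, 3);
    assert!(start.elapsed() >= Duration::from_millis(5)); // the delay was applied
}
```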
Reviewed By: StanislavGlebik
Differential Revision: D15166925
fbshipit-source-id: 8ca7fcf1df1400af6c61616218a84eac655c276f