Summary:
It would be suboptimal if memcache could be abused to examine objects
you should not have access to. Change the key used so that (a) we use a regional
cluster for wider caching, and (b) the key includes the blobstore details, so that
you cannot extract objects that are not from your blobstore.
Reviewed By: StanislavGlebik
Differential Revision: D8424298
fbshipit-source-id: 78f9a1a7302b4a60575f257bda665f719dc1a7b6
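A minimal sketch of the keying idea above, with a hypothetical key format (the names and layout here are illustrative, not the actual scheme):

```rust
// Hypothetical sketch: scope a memcache key to a specific blobstore so
// that a lookup against a different blobstore cannot hit entries cached
// for this one.
fn memcache_key(blobstore_name: &str, blob_key: &str) -> String {
    // Including the blobstore identity in the key means the same blob
    // key, asked of a different store, simply misses the cache.
    format!("{}.{}", blobstore_name, blob_key)
}

fn main() {
    assert_ne!(
        memcache_key("repo_a", "changeset.sha1.abc"),
        memcache_key("repo_b", "changeset.sha1.abc"),
    );
    println!("{}", memcache_key("repo_a", "changeset.sha1.abc"));
}
```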
Summary: Added logic to save `FileChange` as a Mercurial format `HgBlobEntry`
Reviewed By: sunshowers
Differential Revision: D8187792
fbshipit-source-id: 4714c81ab23ebac528cfec15c4a9e66083d4fb6c
Summary:
Update to Rust 1.26.2 toolchain, which has a fix for a soundness
problem in match ergonomics. There was one instance of code affected by this.
Reviewed By: farnz
Differential Revision: D8401773
fbshipit-source-id: 9dfdd933b1e0cf92cdc179a84ea1b67064585ba1
Summary:
Note that no prefix is actually prepended at the moment -- there's an
XXX marking the spots we'll need to update. We'll probably add a prefix once Thrift serialization is turned on.
Reviewed By: farnz
Differential Revision: D8387761
fbshipit-source-id: 0fe2005692183fa91f9787b4c80f600df21d1d93
Summary:
Implementation of generic `store|fetch` for bonsai types.
- each bonsai type has a unique typed hash associated with it; I'm leveraging this fact to implement generic `store|fetch` methods on `BlobRepo`
Reviewed By: farnz
Differential Revision: D8254810
fbshipit-source-id: 5f798fade4cb8d1ac851f94c7ad7e64636bbca65
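The generic `store|fetch` shape can be sketched as follows. This is a toy, synchronous, std-only sketch: the trait, the key scheme, and the `FileContents` stand-in are all hypothetical, and the real code stores serialized Thrift blobs and returns futures.

```rust
use std::collections::HashMap;

// Hypothetical trait: each bonsai type knows its own typed key and how
// to (de)serialize itself, so one generic method pair serves them all.
trait MononokeEntity: Sized {
    fn blobstore_key(&self) -> String;
    fn serialize(&self) -> Vec<u8>;
    fn deserialize(bytes: &[u8]) -> Self;
}

struct BlobRepo {
    blobstore: HashMap<String, Vec<u8>>,
}

impl BlobRepo {
    // Generic store: works for any bonsai type via its typed key.
    fn store<T: MononokeEntity>(&mut self, value: &T) -> String {
        let key = value.blobstore_key();
        self.blobstore.insert(key.clone(), value.serialize());
        key
    }
    // Generic fetch: the target type drives the deserialization.
    fn fetch<T: MononokeEntity>(&self, key: &str) -> Option<T> {
        self.blobstore.get(key).map(|bytes| T::deserialize(bytes))
    }
}

// A toy "bonsai" type to exercise the generic methods.
#[derive(Debug, PartialEq)]
struct FileContents(Vec<u8>);

impl MononokeEntity for FileContents {
    fn blobstore_key(&self) -> String {
        // Real code derives this from a content hash; a simple tag is
        // enough for the sketch.
        format!("content.{}", self.0.len())
    }
    fn serialize(&self) -> Vec<u8> {
        self.0.clone()
    }
    fn deserialize(bytes: &[u8]) -> Self {
        FileContents(bytes.to_vec())
    }
}

fn main() {
    let mut repo = BlobRepo { blobstore: HashMap::new() };
    let key = repo.store(&FileContents(b"hello".to_vec()));
    let back: FileContents = repo.fetch(&key).unwrap();
    assert_eq!(back, FileContents(b"hello".to_vec()));
}
```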
Summary:
Unfortunately `HgParents` can't represent all valid parents, because
it can't represent the semantically important case where `p1` is `None` and
`p2` is not. (For incoming changesets we'd like to keep full fidelity with
Mercurial.)
All the Thrift definitions store `p1` and `p2` separately anyway, so just make
that change throughout `RevlogChangeset` and `BlobChangeset`.
Reviewed By: StanislavGlebik
Differential Revision: D8374125
fbshipit-source-id: 63674daaad05d4d4cae3778744dbf1c14b3c2e3b
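The representational gap can be shown in miniature (type names here are stand-ins, not the real definitions): two independent `Option`s, matching the Thrift layout, can express `p1 == None` with `p2` set, while a One/Two/None-style enum cannot.

```rust
// Stand-in for a real hash type.
#[derive(Debug, Clone, Copy, PartialEq)]
struct NodeId(u64);

// Parents stored separately, as the Thrift definitions already do.
#[derive(Debug, PartialEq)]
struct Parents {
    p1: Option<NodeId>,
    p2: Option<NodeId>,
}

fn main() {
    // Representable here, but not by a None/One/Two-style HgParents:
    // p1 absent while p2 is present.
    let odd = Parents { p1: None, p2: Some(NodeId(42)) };
    assert!(odd.p1.is_none() && odd.p2.is_some());
}
```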
Summary:
The old blobimport tool will not be able to import commits with the new Thrift serialization we'll be switching to.
`blobrepo::utils::RawNodeBlob` is also used by the admin tool, and it will go away once we start using Thrift serialization.
Reviewed By: farnz
Differential Revision: D8372455
fbshipit-source-id: d02a37e33e1ccd4dd1f695e38dbb40851dd51cd6
Summary:
Mostly this was about adding support for file stores to
`new_blobimport`.
Reviewed By: StanislavGlebik
Differential Revision: D8372063
fbshipit-source-id: 2e3791c6222ec430015008f038e1df0464d3f0ba
Summary: There shouldn't be more than one thread writing to the database, because multiple writers cause lag on slaves and race for database locks among themselves. One write connection should be sufficient.
Reviewed By: StanislavGlebik
Differential Revision: D8348604
fbshipit-source-id: ceef081ed89611978accfa55969883078d65a58f
Summary: Lack of proper error context makes it hard to understand errors returned from Mononoke
Reviewed By: StanislavGlebik
Differential Revision: D8298154
fbshipit-source-id: 57fe5df7d891b5215fba783255178f14122199cf
Summary: The newly added context should add much more visibility into why an error was returned
Reviewed By: jsgf
Differential Revision: D8286343
fbshipit-source-id: d65387f40da2c14964e85552ae1ec7e75a135c47
Summary:
Now it is as it should be: mercurial_types has the types, mercurial has the revlog-related structures
burnbridge
Reviewed By: farnz
Differential Revision: D8319906
fbshipit-source-id: 256e73cdd1b1a304c957b812b227abfc142fd725
Summary: Factor out common stuff for local state.
Reviewed By: StanislavGlebik
Differential Revision: D8310823
fbshipit-source-id: e1ce7eebd76d37688e830a5df0486bdfd1e3361c
Summary: This log is by far the most common one, and it makes reading the logs much harder. It should probably be changed to ODS counters, but for now let's just make it trace!
Reviewed By: farnz
Differential Revision: D8235663
fbshipit-source-id: 3685b260f1c6c43c1fde8501731583debc8d063b
Summary:
The new_blobimport job has difficulties when the pool is too large, because the write transactions take too long. If its pool size is configured to 1, everything seems fine and fast enough.
On the other hand, the Mononoke server should have a bigger connection pool size to be able to respond quickly to read requests.
Reviewed By: farnz
Differential Revision: D8235413
fbshipit-source-id: 84e0013ce569c3f103a2096001605aab828d178c
Summary: It only needs to borrow them.
Reviewed By: kulshrax
Differential Revision: D8244267
fbshipit-source-id: 2a24a3b7c6eb65177e4e26c57650dd7e096b4202
Summary:
I got confused myself, so I decided to rename it.
dirname here means "all of the path except the last component". So for the directory "a/b/c", "a/b" is the dirname. For the file "a/b.txt", "a" is the dirname. For the root, the dirname is empty.
Reviewed By: jsgf
Differential Revision: D8207274
fbshipit-source-id: 0744ee020d3cf1ecc185efee445f6d40183d6366
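The convention described above can be sketched in a few lines (a std-only illustration, not the actual implementation):

```rust
// dirname: everything before the last path component, and the empty
// string for anything living directly in the root.
fn dirname(path: &str) -> &str {
    match path.rsplit_once('/') {
        Some((dir, _last)) => dir,
        None => "", // a bare component has an empty dirname
    }
}

fn main() {
    assert_eq!(dirname("a/b/c"), "a/b");
    assert_eq!(dirname("a/b.txt"), "a");
    assert_eq!(dirname("a"), "");
}
```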
Summary:
This is a follow-on from D8014913 (Make in-memory manifests load lazily), addressing jsgf's review comments.
I've made the following stylistic changes:
1. Use `impl Future` for all private APIs instead of `BoxFuture` (making use of `Either` instead of boxing where possible).
2. Use Rust 1.26 style `match` statements with inferred `ref` and `&` whenever possible.
3. Rework the logic in `MemoryManifestEntry::save` to not need `filter_map`.
4. Remove `future::result` and exploit `impl IntoFuture for Result` where possible to reduce the amount of code.
5. Improve a couple of comments.
Reviewed By: jsgf
Differential Revision: D8209186
fbshipit-source-id: c759ad8894fc25616dc6a291d46c487191f96382
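The trade-off behind change 1 can be illustrated with a std-only analog, using `Iterator` rather than futures (which need external crates): `impl Trait` in return position avoids the allocation and dynamic dispatch that a boxed trait object incurs, at the cost of naming no concrete type.

```rust
// Boxed form: heap allocation plus dynamic dispatch on every call.
fn evens_boxed(limit: u32) -> Box<dyn Iterator<Item = u32>> {
    Box::new((0..limit).filter(|n| n % 2 == 0))
}

// impl Trait form: the concrete iterator type is returned directly,
// so the compiler can inline through it.
fn evens_impl(limit: u32) -> impl Iterator<Item = u32> {
    (0..limit).filter(|n| n % 2 == 0)
}

fn main() {
    let boxed: Vec<u32> = evens_boxed(6).collect();
    let unboxed: Vec<u32> = evens_impl(6).collect();
    assert_eq!(boxed, unboxed);
    assert_eq!(unboxed, vec![0, 2, 4]);
}
```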
Summary: Since the purpose of signal_parent_ready is to ensure that the child Changesets can perform their own checks, it is good enough to send signal_parent_ready after all blobs have been written out, but before the parent has done its own checks
Reviewed By: farnz
Differential Revision: D8202108
fbshipit-source-id: 15ac85bd18bcf9ded61363a6380ad05462c189d6
Summary:
Memcache provides us with a low-latency out-of-process cache, shared
between multiple machines. If we use this to provide a write-through cache,
then we get a couple of benefits:
1. Mononoke servers accessing the same memcache backend get low latency access
to commonly requested blobs.
2. Recently written blobs are available at low latency to the server that just
wrote them, hopefully speeding up new_blobimport.
Reviewed By: StanislavGlebik
Differential Revision: D8124623
fbshipit-source-id: 5af085aa8bb63c1366740edfda9020d72b8a9015
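The write-through policy can be sketched synchronously with std-only types (the real implementation is asynchronous and backed by memcache; the trait and struct names here are simplified stand-ins): writes populate both layers, reads try the cache first.

```rust
use std::collections::HashMap;

// Simplified, synchronous stand-in for the real async Blobstore trait.
trait Blobstore {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
    fn put(&mut self, key: &str, value: Vec<u8>);
}

struct MapBlobstore(HashMap<String, Vec<u8>>);

impl Blobstore for MapBlobstore {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
    fn put(&mut self, key: &str, value: Vec<u8>) {
        self.0.insert(key.to_string(), value);
    }
}

// Generic over the inner store, so the compiler can inline through the
// wrapper when the concrete type is known.
struct WriteThroughCache<T: Blobstore> {
    cache: MapBlobstore,
    inner: T,
}

impl<T: Blobstore> Blobstore for WriteThroughCache<T> {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        // A cache hit avoids touching the slower inner store.
        self.cache.get(key).or_else(|| self.inner.get(key))
    }
    fn put(&mut self, key: &str, value: Vec<u8>) {
        // Write-through: both layers get the blob, so it is immediately
        // available at low latency to the writer.
        self.cache.put(key, value.clone());
        self.inner.put(key, value);
    }
}

fn main() {
    let mut store = WriteThroughCache {
        cache: MapBlobstore(HashMap::new()),
        inner: MapBlobstore(HashMap::new()),
    };
    store.put("blob1", b"data".to_vec());
    assert_eq!(store.get("blob1"), Some(b"data".to_vec()));
    // The write went through to the inner store as well.
    assert_eq!(store.inner.get("blob1"), Some(b"data".to_vec()));
}
```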
Summary:
Instead of doing for_each, which causes the entries to be processed one by one, use the map + buffer_unordered technique.
This sped up a new_blobimport of 2 commits from >2m to <1m30s.
Reviewed By: farnz
Differential Revision: D8184176
fbshipit-source-id: 4a0c3124d2398ed41f8b93785bc3c890a23a88aa
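The effect of map + buffer_unordered can be approximated with a std-thread analog (the real code uses the futures crate; this sketch substitutes scoped threads, and `process_concurrently` is a made-up helper): up to `width` entries are in flight at once instead of strictly one at a time.

```rust
use std::thread;

// Process `items` with up to `width` running concurrently, batch by
// batch, preserving the input order of the results.
fn process_concurrently<T, R, F>(items: Vec<T>, width: usize, f: F) -> Vec<R>
where
    T: Send,
    R: Send,
    F: Fn(T) -> R + Sync,
{
    let f = &f;
    let mut results = Vec::new();
    let mut iter = items.into_iter();
    loop {
        let batch: Vec<T> = iter.by_ref().take(width).collect();
        if batch.is_empty() {
            break;
        }
        // One scoped thread per entry in the batch.
        thread::scope(|s| {
            let handles: Vec<_> = batch
                .into_iter()
                .map(|item| s.spawn(move || f(item)))
                .collect();
            for handle in handles {
                results.push(handle.join().unwrap());
            }
        });
    }
    results
}

fn main() {
    let out = process_concurrently(vec![1, 2, 3, 4, 5], 2, |n: i32| n * n);
    assert_eq!(out, vec![1, 4, 9, 16, 25]);
}
```

Note that, unlike buffer_unordered, this batched version waits for the whole batch before starting the next one; it only illustrates the one-at-a-time vs. many-in-flight distinction.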
Summary:
In previous diffs we've added a function that does a bulk prefetch of
filenodes. Now we can use it in the getfiles request handling code.
Before constructing file history we do a bulk prefetch. Then we use
results of the bulk prefetch as a cache. It means that we first check
whether we have file history node in the prefetched data, and if not it will be
fetched again. It allows us to later plugin a memcache or some other kind of
cache.
The repo.rs file is getting bigger and bigger. I'm planning to split it, but
only after I land this diff, in order not to deal with merges
Reviewed By: farnz
Differential Revision: D8185701
fbshipit-source-id: 6ca37aeb029236db51d2b5a03cb7053f969cf47e
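The lookup policy described above can be sketched with std-only types (names and value types here are invented for illustration): consult the bulk-prefetched map first, and fall back to an individual fetch on a miss. The fallback slot is where a memcache or other cache layer could later be plugged in.

```rust
use std::collections::HashMap;

// Prefetched data used as a cache: hit serves from the map, miss falls
// back to fetching the filenode individually.
fn get_filenode(
    prefetched: &HashMap<String, String>,
    path: &str,
    fetch_again: impl Fn(&str) -> String,
) -> String {
    match prefetched.get(path) {
        Some(node) => node.clone(), // served from the bulk prefetch
        None => fetch_again(path),  // miss: fetch individually
    }
}

fn main() {
    let mut prefetched = HashMap::new();
    prefetched.insert("a.txt".to_string(), "node1".to_string());
    let fetch = |path: &str| format!("fetched:{}", path);
    assert_eq!(get_filenode(&prefetched, "a.txt", fetch), "node1");
    assert_eq!(get_filenode(&prefetched, "b.txt", fetch), "fetched:b.txt");
}
```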
Summary:
I'm about to introduce a MemoizedBlobstore - let's ensure that
ambiguity is kept to a minimum by renaming CachingBlobstore
Reviewed By: jsgf
Differential Revision: D8124625
fbshipit-source-id: 301fbb31c5a772c11e84566889b8a6ac86cdae19
Summary:
In practice, caching blobstores are always wrapped around a known
concrete blob store. Make it generic in the type of the Blobstore, so that
rustc can reach in and inline appropriately if it sees an optimization
opportunity.
This is a micro-optimization, hence nice to have
Reviewed By: StanislavGlebik
Differential Revision: D8124627
fbshipit-source-id: 0d5d9b11fdede062ceaa949ace574b3d1c75e6b5
Summary:
We want to be able to go to and from Bonsai Changesets in Mononoke, so
that we can operate with Mercurial clients that haven't yet been updated to
support Bonsai form. This RFC commit shows a shell of doing so, as an
illustration for the hackamonth task
Reviewed By: jsgf
Differential Revision: D8014909
fbshipit-source-id: 28adf18ecf80e0116290662c117731b4c1632ff9
Summary:
Loading the entire manifest tree just for a limited set of conflicts
and changes is wasteful. Lazily load the referenced parts of the tree instead
Reviewed By: jsgf
Differential Revision: D8014913
fbshipit-source-id: 07678bee39de02414fdc062cf680fcd049a28415
Summary:
Those flags will let us control the subset of commits that the new_blobimport is supposed to import.
It's a temporary solution, to be used instead of the old blobimport until "hg push" works
Reviewed By: farnz
Differential Revision: D8122318
fbshipit-source-id: a1ac0824020341cd4bb18ec46d91caee50d9606e
Summary:
This could never fail - so don't allow for failure, but use the type
system to deal with the error case.
Reviewed By: jsgf
Differential Revision: D8014910
fbshipit-source-id: 8f5f0a3ff55a96b57cd4c246d072793c986724d5
Summary:
We want to handle conflicts here - do so, but with asserts, by merging
two manifests together.
The asserts will be removed once we make this all lazy
Reviewed By: jsgf
Differential Revision: D8014912
fbshipit-source-id: 7a09186f7e24a3af93d15a859c877e3a319fb110
Summary: Once we have a lazy loading version, we'll want a way to cope with multiple accessors all trying to update the same `children` array with identical data; make it immutable in use, so that overwriting it multiple times is harmless
Reviewed By: jsgf
Differential Revision: D8014908
fbshipit-source-id: 9a2750fb1fca54601051fede1d9a37de8cfc2a74
Summary:
We're going to need to be able to edit memory manifests. Provide
remove and set operations, to match the Bonsai Changeset data structures
Reviewed By: StanislavGlebik
Differential Revision: D7620527
fbshipit-source-id: e85459c5dbfa8855267fd2cf6578c9fc39f223f8
Summary:
Removing entries can leave us with empty manifests. Rather than clean
that up mid-flow, simply skip empty manifests when saving
Reviewed By: StanislavGlebik
Differential Revision: D7620526
fbshipit-source-id: 2f0799eba305a5295eaad28cd4ad90c9de04306f
Summary:
To do Bonsai Changesets, we're going to need to perform surgery on
manifests. Provide a mechanism to get the Tree nodes only into memory, and
write them back out, so that we can do the surgery the easy way.
Reviewed By: StanislavGlebik
Differential Revision: D7557271
fbshipit-source-id: 7afdc3ef464fc042eb758af863ade8938c4e9fc5
Summary:
Previously, we assumed that all content hashes came from Mercurial;
this is not going to remain true, as we will want to be able to upload manifests
that have been synthesised from Bonsai Changesets. Turn the previous boolean
into a tri-state, and fix up all callers to get the behaviour they expect.
Reviewed By: StanislavGlebik
Differential Revision: D8014911
fbshipit-source-id: 9156b9fab4542ceb269626ad005e1b28392b5329
Summary:
Rust 1.26 adds many new language features. In particular `impl Trait` is now
stable, so we no longer need `conservative_impl_trait`.
There also seem to have been changes in the (unstable) TryFrom with respect to
usize, and in the behaviour of the never type `!`.
There are still a few deprecation warnings, but they don't cause the build to
fail.
Path remapping is now stable, so the buck config needs to change to use it
rather than the unstable command line option.
TODO:
- get aarch64 rust-crates-io build (can defer to a later update)
Reviewed By: Imxset21
Differential Revision: D7966091
fbshipit-source-id: 2e61e262c21eb01c852a36f49c6a6369cdaddcdb
Summary:
This is a (hopefully) short term hack to overcome the problem of overloading
Manifold.
Ideally the Manifold client would adjust dynamically to the load. However,
implementing that is not trivial, so for now let's configure it via a config option.
Reviewed By: jsgf
Differential Revision: D7910979
fbshipit-source-id: c2dc32b592747732e7e6574e0fecf2d0aaef447e
Summary:
Simple precaching. Reads all the manifests for a bookmark and up to
`commit_warmup_limit` of ancestors.
Warming up file content can be slow, so we don't do it now.
Reviewed By: jsgf
Differential Revision: D7863728
fbshipit-source-id: bed1508b01e4e002a399d00ea45faf8a8e228d0a
Summary:
This will make it easier to change the "real" bookmark type from AsciiString to
String if we decide to do that.
BookmarkPrefix is a separate type because we may want to change it from
AsciiString to String. Also we don't want to confuse a bookmark prefix with a
bookmark name.
Reviewed By: jsgf
Differential Revision: D7909992
fbshipit-source-id: 3d4d075c204ed5ef1114a743430982c2836bac04
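The newtype approach above can be sketched with plain `String` wrappers (the actual types wrap AsciiString; `matches` is an invented helper for illustration): distinct wrapper types keep a bookmark name and a bookmark prefix from being confused, and confine any future AsciiString-to-String change to one place.

```rust
// Newtype wrappers: same underlying representation, distinct types.
#[derive(Debug, Clone, PartialEq)]
struct Bookmark(String);

#[derive(Debug, Clone, PartialEq)]
struct BookmarkPrefix(String);

impl BookmarkPrefix {
    fn matches(&self, bookmark: &Bookmark) -> bool {
        bookmark.0.starts_with(&self.0)
    }
}

fn main() {
    let bm = Bookmark("releases/v1".to_string());
    let prefix = BookmarkPrefix("releases/".to_string());
    assert!(prefix.matches(&bm));
    // Passing a Bookmark where a BookmarkPrefix is expected (or vice
    // versa) is a compile error, unlike with raw strings.
}
```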
Summary:
We don't need to explicitly create timers since the environment has
one set up by default.
Reviewed By: StanislavGlebik
Differential Revision: D7873576
fbshipit-source-id: bfcdc27a46397bff0730f64ad4f3de3865c7cfa1
Summary:
Let's use the new feature in SendWrapper to run many IO threads. That will help
us mitigate the high CPU usage issues we were having with blobstore requests.
Manifold blobstore now creates the io threads itself.
Reviewed By: kulshrax
Differential Revision: D7831420
fbshipit-source-id: ec9f3327347ca6bfbd23c482e69a6fee663b1da5
Summary: As with changesets and blobs, let's cache filenodes data
Reviewed By: jsgf
Differential Revision: D7831105
fbshipit-source-id: 334cb474f5cc3ef8dba0945d11273b2b3875e8ad
Summary:
Specialized revsets to make pull faster.
Previous Union + Intersect combination was extremely slow because it fetched a
lot of stuff that wasn't used.
Reviewed By: farnz
Differential Revision: D7829394
fbshipit-source-id: c038f184c305e48e18b6fcb0f83bab9e9a42b098
Summary:
The commits that are blobimported have out-of-order or simply incorrect lists of changed files.
Because we have to persist Changesets as-is, we pass the untouched list of files here to be used by Changeset.
Reviewed By: farnz
Differential Revision: D7830310
fbshipit-source-id: 56adec2c317896decaa9176b3a6bfb0cab187ed0