Summary: Let's pass just one field instead of 4
Reviewed By: farnz
Differential Revision: D8889899
fbshipit-source-id: 8b30496a86950ed534439f5469f8740ee32345b8
Summary: tokio::runtime can handle multithreading, no need for all this boilerplate
Reviewed By: StanislavGlebik
Differential Revision: D8861170
fbshipit-source-id: 2c489068a55f8cba1854f8a748df1e6efe8b47b7
Summary:
There seems to be a deadlock in the internals of the now-outdated tokio_core.
After switching to the modern tokio::run, the deadlock is no longer triggered.
Reviewed By: farnz
Differential Revision: D8783183
fbshipit-source-id: 47a7d1d8e2756ea4d40812d0b8a6c850d7f7e9f8
Summary: This will be useful for the bonsai verification tool.
Reviewed By: StanislavGlebik
Differential Revision: D8792562
fbshipit-source-id: f409d0fa042528b04462a1539fd3c2a8064a4f6e
Summary: This code can easily be shared.
Reviewed By: StanislavGlebik
Differential Revision: D8777307
fbshipit-source-id: f11314f6a63bb191dc38d07cec181a4b05b158d9
Summary:
When we add cachelib bindings to Rust, we're going to want to implement a
cachelib blobstore that's more or less the same as the memcache version, but
backed by a cachelib pool instead of a memcache instance.
Split this code up so that we don't duplicate functionality
Reviewed By: StanislavGlebik
Differential Revision: D8523713
fbshipit-source-id: 882298abab8c208103f6d8c74fee60a768c877f6
Summary: This diff refactors the hook code to use the Bookmark struct instead of strings
Reviewed By: farnz
Differential Revision: D8724197
fbshipit-source-id: 920aa1266ca94b2bd8683a995e4fd781159bd5b1
Summary:
This diff extends lua hooks to support per file hooks.
We also now call the hook function via a Lua wrapper function, which allows us to have a better API in the hook: we can construct a Lua table to pass into the hook, and we can return multiple values from the hook, which lets us support rich hook failure reasons properly. Both of these are hard to do when calling hooks directly using hlua.
Reviewed By: StanislavGlebik
Differential Revision: D8711280
fbshipit-source-id: b91f9e47a1f8eab302775a5bbfd61590a8635282
Summary:
This diff implements hooks which act upon individual files. The results of the hooks are cached using Asyncmemo.
The cache is currently keyed on (changeset_id, hook_name, file path) but this will change to file content hash once we move to Bonsai changesets.
Reviewed By: StanislavGlebik
Differential Revision: D8707726
fbshipit-source-id: ceaf94abd09e1dd7f6b2d8f9c87a9a221439a252
Summary: This diff implements tracking of hooks by bookmark in the hook manager and functionality to run hooks per bookmark.
Reviewed By: StanislavGlebik
Differential Revision: D8525598
fbshipit-source-id: 7987d1f8d90a77667f120f4940f12aa3cb5aa86e
Summary:
In case the bookmarks have been updated while we are reading the revlogs, we should read the bookmarks before reading changesets.
In this diff we also read bookmarks after importing, so that we ensure the newest possible version of the bookmarks is persisted.
Reviewed By: farnz
Differential Revision: D8723522
fbshipit-source-id: 278382dae10a0554abc8edc398e7a15a37569676
Summary:
We need to be able to distinguish logical transaction failure from
infrastructure failure, so change the `Transaction::commit` future to
`Future<Item=bool, Error=Error>`, where `false` indicates a logical transaction
failure. This allows the caller to determine whether a retry or other recovery
logic is needed.
Reviewed By: lukaspiatkowski
Differential Revision: D8555727
fbshipit-source-id: 8ab64f3019f2644e7eaabc8d699d99aa8eb08fbb
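The distinction above can be sketched in plain Rust. This is a minimal, hypothetical sketch (the real code uses a futures-0.1 `Future`, and `commit`, `InfraError`, and `commit_with_retry` are invented names): `Ok(false)` signals a logical transaction failure the caller may retry, while `Err(_)` is reserved for infrastructure failure.

```rust
// Hypothetical error type standing in for the real infrastructure Error.
#[derive(Debug)]
struct InfraError(String);

// commit() resolves to Ok(true) on success, Ok(false) on a logical
// transaction failure (e.g. losing a race for the row), and Err(_)
// only when the infrastructure itself failed.
fn commit(contended: bool, db_up: bool) -> Result<bool, InfraError> {
    if !db_up {
        return Err(InfraError("connection dropped".to_string()));
    }
    Ok(!contended)
}

// A caller can now retry logical failures but surface infra errors.
fn commit_with_retry(attempts: &[(bool, bool)]) -> Result<bool, InfraError> {
    for &(contended, db_up) in attempts {
        if commit(contended, db_up)? {
            return Ok(true);
        }
    }
    Ok(false)
}

fn main() {
    assert_eq!(commit(false, true).unwrap(), true);
    assert_eq!(commit(true, true).unwrap(), false);
    assert!(commit(false, false).is_err());
    // First attempt loses the race, second succeeds.
    assert_eq!(commit_with_retry(&[(true, true), (false, true)]).unwrap(), true);
}
```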
Summary:
I want to use this code for the bonsai verification tool without
copying it around. It's also a nice simplification of the code.
Reviewed By: jsgf
Differential Revision: D8675825
fbshipit-source-id: cf9432069f154dafeb81a5e93a2fb5cbd717b981
Summary:
There are so many individual arguments here that it's honestly hard to keep
track.
It is unfortunate that a bunch of string copies have to be done, but not really
a big deal.
Reviewed By: jsgf
Differential Revision: D8675237
fbshipit-source-id: 6a333d01579532a0a88c3e26b2db86b46cf45955
Summary: This will be pretty useful while debugging.
Reviewed By: jsgf
Differential Revision: D8667770
fbshipit-source-id: 1c776741844d74529415124ab826b013414678c7
Summary: It's no longer a test; we use it in prod
Reviewed By: farnz
Differential Revision: D8611639
fbshipit-source-id: dc52e0bcdc26c704c0d9cf820d7aa5deae3e06e4
Summary: more logging more fun
Reviewed By: StanislavGlebik
Differential Revision: D8577655
fbshipit-source-id: 92a160ea8f8c0b8e012a1461fbd3f5d71b4bd171
Summary:
Manifests are always able to return entries immediately, and never
fail.
Reviewed By: lukaspiatkowski, farnz
Differential Revision: D8556499
fbshipit-source-id: e21a2522f1219e47db9b55b24b6ac6c0c463933e
Summary:
Fetching the blob is still required to compute the node hash, but we don't have
to reupload it.
Reviewed By: farnz
Differential Revision: D8508462
fbshipit-source-id: 341a1a2f82d8f8b939ebf3990b3467ed7ad9244c
Summary:
This will also allow file blob sharing between the Mercurial and Mononoke
data models.
Reviewed By: farnz
Differential Revision: D8440330
fbshipit-source-id: a29cd07dcecf0959dffb74b7428f3cb11fbd3db6
Summary:
Store manifests as Thrift blobs instead. Required fixing up a lot of
different places, but they should all be pretty clear now.
Reviewed By: farnz
Differential Revision: D8416238
fbshipit-source-id: 523e3054e467e54d180df5ba78445c9b1ccc3b5c
Summary:
Pretty straightforward. Also using this opportunity to add per-repo
prefixes, since all the hashes are going to break anyway.
Note for reviewers: almost all the change is regenerated test fixtures (unfortunately necessary to make atomic). The actual substantive changes are all in the first few files.
Reviewed By: farnz
Differential Revision: D8392234
fbshipit-source-id: c93fc8c6388cb00fe5cff95646ad8c853581cb8c
Summary:
This diff introduces the ChangesetStore trait so the HookManager does not directly depend on the BlobRepo. The manager can then obtain changesets from a simple in-memory implementation of ChangesetStore, which can be populated with changesets before they are committed to the server. As a result, the exact same hook manager logic can be used irrespective of whether the changesets are committed or not.
The diff includes two implementations of ChangesetStore: a simple in-memory implementation backed by a HashMap, and one that backs onto BlobStore.
Reviewed By: StanislavGlebik
Differential Revision: D8446430
fbshipit-source-id: 8d14e48cb562fcd10a17370e34f13a662af827df
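The shape described above can be sketched like this (a hypothetical simplification: the real trait is async and the types are richer; `ChangesetId`, `Changeset`, and the method names here are stand-ins):

```rust
use std::collections::HashMap;

// Simplified stand-ins for the real Mononoke types.
type ChangesetId = String;

#[derive(Clone, Debug, PartialEq)]
struct Changeset {
    message: String,
}

// The trait lets the hook manager fetch changesets without
// depending on BlobRepo directly.
trait ChangesetStore {
    fn get_changeset(&self, id: &ChangesetId) -> Option<Changeset>;
}

// In-memory implementation backed by a HashMap; useful for running
// hooks on changesets before they are committed to the server.
struct InMemoryChangesetStore {
    changesets: HashMap<ChangesetId, Changeset>,
}

impl InMemoryChangesetStore {
    fn new() -> Self {
        InMemoryChangesetStore { changesets: HashMap::new() }
    }
    fn insert(&mut self, id: ChangesetId, cs: Changeset) {
        self.changesets.insert(id, cs);
    }
}

impl ChangesetStore for InMemoryChangesetStore {
    fn get_changeset(&self, id: &ChangesetId) -> Option<Changeset> {
        self.changesets.get(id).cloned()
    }
}

fn main() {
    let mut store = InMemoryChangesetStore::new();
    store.insert("abc".to_string(), Changeset { message: "initial".to_string() });
    // Hook-manager code written against the trait works the same
    // whether the changeset is committed or not.
    assert!(store.get_changeset(&"abc".to_string()).is_some());
    assert!(store.get_changeset(&"def".to_string()).is_none());
}
```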
Summary: This diff implements memoization of hooks. If a hook has already been run against the same changeset it is not run again, instead the cached result is used.
Reviewed By: StanislavGlebik
Differential Revision: D8431102
fbshipit-source-id: b4080ba48a3214e767392cbcb46425aa05bc2b64
Summary: This diff refactors the hook manager to run hooks based on a changeset_id not the actual changeset. The actual changeset is looked up by the hook manager lazily when running a hook. This is needed to make it work well for AsyncMemo as we don't want the changeset cached in the key.
Reviewed By: StanislavGlebik
Differential Revision: D8422442
fbshipit-source-id: 40bc89124942e05c5aaeb2b4ee00215afd816642
Summary: This diff refactors the runhook tests to use the linear fake blobrepo to supply changesets
Reviewed By: farnz
Differential Revision: D8418096
fbshipit-source-id: 74fd2578095dbae86ed9e96eac3ca4b344c036da
Summary:
It would be suboptimal if memcache could be abused to examine objects
you should not have access to. Change the key used so that (a) it's a regional
cluster for wider caching, and (b) it's got the blobstore details in it so that
you cannot extract objects not from your blob store.
Reviewed By: StanislavGlebik
Differential Revision: D8424298
fbshipit-source-id: 78f9a1a7302b4a60575f257bda665f719dc1a7b6
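The keying idea from (b) can be illustrated with a tiny sketch (the actual key format is not shown in this message; the function name and the dot-separated layout here are assumptions):

```rust
// Hypothetical sketch: the cache key embeds the blobstore's identity,
// so a key cached for one blobstore can never resolve to an object
// belonging to another.
fn memcache_key(blobstore_table: &str, blobstore_prefix: &str, blob_key: &str) -> String {
    format!("{}.{}.{}", blobstore_table, blobstore_prefix, blob_key)
}

fn main() {
    let a = memcache_key("mononoke_blobs", "repo1", "content/abc");
    let b = memcache_key("mononoke_blobs", "repo2", "content/abc");
    // Same blob key, different blobstores -> different cache entries.
    assert_ne!(a, b);
    assert_eq!(a, "mononoke_blobs.repo1.content/abc");
}
```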
Summary:
Note that no prefix is actually prepended at the moment -- there's an
XXX marking the spots we'll need to update. We'll probably add a prefix once Thrift serialization is turned on.
Reviewed By: farnz
Differential Revision: D8387761
fbshipit-source-id: 0fe2005692183fa91f9787b4c80f600df21d1d93
Summary:
Centralize parsing for manifold args, and actually pay attention to
repo-id. Also use dashes uniformly because I like them more.
Reviewed By: farnz
Differential Revision: D8402467
fbshipit-source-id: 2c281c7ddb33c4d0fd2d8edf219f4512c0ba0003
Summary:
So that both prefixed and non-prefixed blobstores can be used with the
same code.
Reviewed By: farnz
Differential Revision: D8402466
fbshipit-source-id: 3a4f7882ce697d0582eb7f7908b74716322806cf
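The "same code for both" property comes from having the prefixing wrapper implement the same trait as the plain store. A minimal sketch, with a synchronous `Blobstore` trait standing in for the real async one (all names hypothetical):

```rust
use std::collections::HashMap;

// Hypothetical simplified blobstore interface.
trait Blobstore {
    fn put(&mut self, key: &str, value: Vec<u8>);
    fn get(&self, key: &str) -> Option<&Vec<u8>>;
}

struct MemBlobstore {
    data: HashMap<String, Vec<u8>>,
}

impl Blobstore for MemBlobstore {
    fn put(&mut self, key: &str, value: Vec<u8>) {
        self.data.insert(key.to_string(), value);
    }
    fn get(&self, key: &str) -> Option<&Vec<u8>> {
        self.data.get(key)
    }
}

// Wraps any blobstore and prepends a per-repo prefix to every key,
// while still implementing the same trait.
struct PrefixBlobstore<B: Blobstore> {
    prefix: String,
    inner: B,
}

impl<B: Blobstore> Blobstore for PrefixBlobstore<B> {
    fn put(&mut self, key: &str, value: Vec<u8>) {
        self.inner.put(&format!("{}{}", self.prefix, key), value);
    }
    fn get(&self, key: &str) -> Option<&Vec<u8>> {
        self.inner.get(&format!("{}{}", self.prefix, key))
    }
}

fn main() {
    let mut store = PrefixBlobstore {
        prefix: "repo42.".to_string(),
        inner: MemBlobstore { data: HashMap::new() },
    };
    store.put("blob", vec![1]);
    // Callers see unprefixed keys; the inner store sees prefixed ones.
    assert_eq!(store.get("blob"), Some(&vec![1]));
    assert_eq!(store.inner.get("repo42.blob"), Some(&vec![1]));
}
```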
Summary:
The old blobimport tool will not be able to import commits with the new Thrift serialization we'll be switching to.
`blobrepo::utils::RawNodeBlob` is also used by the admin tool, and it will go away once we start using Thrift serialization.
Reviewed By: farnz
Differential Revision: D8372455
fbshipit-source-id: d02a37e33e1ccd4dd1f695e38dbb40851dd51cd6
Summary:
Mostly this was about adding support for file stores to
`new_blobimport`.
Reviewed By: StanislavGlebik
Differential Revision: D8372063
fbshipit-source-id: 2e3791c6222ec430015008f038e1df0464d3f0ba
Summary: There shouldn't be more than one thread writing to the database, because that causes lag on the slaves and the threads race each other for database locks. One write connection should be sufficient.
Reviewed By: StanislavGlebik
Differential Revision: D8348604
fbshipit-source-id: ceef081ed89611978accfa55969883078d65a58f
Summary: This will make it easier to identify stats exported from Mononoke vs external libraries
Reviewed By: StanislavGlebik
Differential Revision: D8331418
fbshipit-source-id: c151e76aa386fb13759fced7cc07b03ac67fe051
Summary: This diff implements the passing of all current changeset fields to hooks
Reviewed By: StanislavGlebik
Differential Revision: D8298019
fbshipit-source-id: 0e6be3c83b1e4d4c3eab95c76c9041ea2a57f0d3
Summary:
Now it is as it should be: mercurial_types has the types, mercurial has revlog-related structures
burnbridge
Reviewed By: farnz
Differential Revision: D8319906
fbshipit-source-id: 256e73cdd1b1a304c957b812b227abfc142fd725
Summary:
* `.hg` is where mononoke sockets used to reside, but they no longer do.
* `heads` is no longer a thing
* `BlobRepo::new_rocksdb` already takes care of creating `SqliteChangesets` if it doesn't exist.
Reviewed By: StanislavGlebik
Differential Revision: D8289145
fbshipit-source-id: c8b0ad3b9bee5c22f79861474fd08256dbd0fb8f
Summary: This diff adds unit tests for the runhook command line utility
Reviewed By: jsgf
Differential Revision: D8257571
fbshipit-source-id: 5c390d2a45d895080fce28dcd7943da5d803ff92
Summary:
This diff adds more robust testing for various errors in Lua hooks.
It also contains a little bit of cleanup in the runHook command
Reviewed By: StanislavGlebik
Differential Revision: D8253525
fbshipit-source-id: de9d298e70ec647f2c13e27c9937605ac5b57485
Summary: Changed the about section of new_blobimport to something descriptive.
Reviewed By: kulshrax
Differential Revision: D8307334
fbshipit-source-id: 2198d43cdfaf566b57001be5230a74206306dac1
Summary:
Replaced the help and about messages to summarize what the admin tool
is useful for
Reviewed By: jsgf
Differential Revision: D8302356
fbshipit-source-id: 77d1d4bb50825b0cc2d1ac2ee8d47ee906e04a22
Summary:
The new_blobimport job is having difficulties when the pool is too large, because the write transactions take too long. If the pool is configured to be 1 for it, then everything seems fine and fast enough.
On the other hand, the Mononoke server should have a bigger connection pool size to be able to quickly respond to read requests.
Reviewed By: farnz
Differential Revision: D8235413
fbshipit-source-id: 84e0013ce569c3f103a2096001605aab828d178c
Summary: It only needs to borrow them.
Reviewed By: kulshrax
Differential Revision: D8244267
fbshipit-source-id: 2a24a3b7c6eb65177e4e26c57650dd7e096b4202
Summary:
The hook manager now returns the result of running hooks as BoxFuture<HashMap<String, HookExecution>>, where HookExecution is a new, richer type representing the result of running a hook.
This provides more info to the user about why a hook rejected the changeset, and a map makes looking up a particular hook failure simpler than a Vec.
Reviewed By: StanislavGlebik
Differential Revision: D8235970
fbshipit-source-id: 9a617b6d459f105aa9dad9782e784459dd716c45
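The result shape described above can be sketched synchronously (a hypothetical simplification: the real return value is a BoxFuture, and the variant names and hook names here are invented):

```rust
use std::collections::HashMap;

// Hypothetical sketch of the richer HookExecution type: a rejection
// carries a human-readable reason instead of a bare boolean.
#[derive(Debug, PartialEq)]
enum HookExecution {
    Accepted,
    Rejected(String),
}

// Running all hooks yields a map from hook name to execution result,
// so a particular hook's outcome can be looked up directly.
fn run_hooks() -> HashMap<String, HookExecution> {
    let mut results = HashMap::new();
    results.insert("check_size".to_string(), HookExecution::Accepted);
    results.insert(
        "check_message".to_string(),
        HookExecution::Rejected("commit message is empty".to_string()),
    );
    results
}

fn main() {
    let results = run_hooks();
    assert_eq!(results["check_size"], HookExecution::Accepted);
    match &results["check_message"] {
        HookExecution::Rejected(reason) => assert!(reason.contains("empty")),
        _ => panic!("expected rejection"),
    }
}
```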
Summary:
There's a bit of Rust/C FFI, but hopefully the comments make it clear
how ownership works.
As part of the lifecycle I also had to handle shutdown safely. This means that `run_service_framework` actually returns now.
Reviewed By: jsgf
Differential Revision: D8133252
fbshipit-source-id: 1e394a60c6e62f5c3d56933f3e7be7a2bfd1e2c0
Summary:
This diff introduces HookManager which knows how to install, uninstall and run hooks.
Hooks are now run in parallel on a cpu_pool.
Reviewed By: lukaspiatkowski
Differential Revision: D8208538
fbshipit-source-id: f18687c14a15cadf4d832318cd66fa400586c29f
Summary: If we're going to use memcache to cache things, we need to be able to confirm that the blobstore contains what we expect. Add an admin option that lets us check what's in memcache and/or in the blobstore when using memcache
Reviewed By: StanislavGlebik
Differential Revision: D8124624
fbshipit-source-id: 763786930cf9f4be6d2d8e346ae1ef94dc8e492c
Summary: This diff introduces a simple command line utility which allows a hook to be run against a specific changeset. The idea is to allow hooks to be easily tested without having to run a Mononoke server.
Reviewed By: StanislavGlebik
Differential Revision: D8183908
fbshipit-source-id: 2ebadf026a23ac69bc14db6794fdf760728f1d3b
Summary: This should prevent new_blobimport from updating bookmarks to non-existing changesets
Reviewed By: farnz
Differential Revision: D8122315
fbshipit-source-id: 20bcbb6887b54a88b39f9ba375884e8b9c0143f7
Summary:
Those flags will let us control the subset of commits that the new_blobimport is supposed to import.
It's a temporary solution to be used instead of the old blobimport but before "hg push" works
Reviewed By: farnz
Differential Revision: D8122318
fbshipit-source-id: a1ac0824020341cd4bb18ec46d91caee50d9606e
Summary:
Instead of writing changesets one-by-one, write multiple of them at once.
The size `100` for the buffer is arbitrary, but it shouldn't matter much since we already have backpressure on the database writes.
Reviewed By: farnz
Differential Revision: D8057268
fbshipit-source-id: ca3766505395dcb6be6684323462f1bb23222435
Summary:
This tool will be installed on all devservers in our team. It should have
useful debugging commands. Currently it has a command to fetch an entry from
the blobstore, and a command to get content by path/commit hash.
Reviewed By: farnz
Differential Revision: D8028933
fbshipit-source-id: e0c37660a24e40dd9dc8f19d1789b2f25d99bfe6
Summary:
Previously, we assumed that all content hashes came from Mercurial;
this is not going to remain true, as we will want to be able to upload manifests
that have been synthesised from Bonsai Changesets. Turn the previous boolean
into a tri-state, and fix up all callers to get the behaviour they expect.
Reviewed By: StanislavGlebik
Differential Revision: D8014911
fbshipit-source-id: 9156b9fab4542ceb269626ad005e1b28392b5329
Summary: Parsing and reading revlogs is cpu intensive, thus let's use cpupool for it
Reviewed By: StanislavGlebik
Differential Revision: D7926174
fbshipit-source-id: 7f023088941e1ad118a683da972f87607e0bfec4
Summary: printing every CS is too verbose, but we still want to see progress in non-debug mode
Reviewed By: kulshrax
Differential Revision: D7925747
fbshipit-source-id: c3ed92ef8c8fbf7714779a2bf011d31c94aefa37
Summary:
Rust 1.26 adds many new language features. In particular `impl Trait` is now
stable, so we no longer need `conservative_impl_trait`.
There also seem to have been changes in the (unstable) TryFrom with respect to
usize, and in the behaviour of the never type `!`.
There are still a few deprecation warnings, but they don't cause the build to
fail.
Path remapping is now stable, so the buck config needs to change to use it
rather than the unstable command line option.
TODO:
- get aarch64 rust-crates-io build (can defer to a later update)
Reviewed By: Imxset21
Differential Revision: D7966091
fbshipit-source-id: 2e61e262c21eb01c852a36f49c6a6369cdaddcdb
Summary:
This is a (hopefully) short term hack to overcome the problem of overloading
Manifold.
Ideally, the manifold client has to adjust dynamically to the load. However,
implementing that is not trivial, so for now let's configure it via a config option.
Reviewed By: jsgf
Differential Revision: D7910979
fbshipit-source-id: c2dc32b592747732e7e6574e0fecf2d0aaef447e
Summary:
This will make it easier to change the "real" bookmark type from AsciiString to
String if we decide to do that.
BookmarkPrefix is a separate type because we may want to change it from
AsciiString to String. Also we don't want to confuse a bookmark prefix with a
bookmark name.
Reviewed By: jsgf
Differential Revision: D7909992
fbshipit-source-id: 3d4d075c204ed5ef1114a743430982c2836bac04
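The newtype approach described above can be sketched briefly. This is a hypothetical simplification (the real types wrap AsciiString and carry more methods); the point is that distinct wrappers keep a bookmark name and a bookmark prefix from being confused, and confine the underlying string representation to one place:

```rust
// Separate newtypes: a Bookmark is not interchangeable with a
// BookmarkPrefix even though both currently wrap a string.
#[derive(Debug, Clone, PartialEq)]
struct Bookmark(String);

#[derive(Debug, Clone, PartialEq)]
struct BookmarkPrefix(String);

impl Bookmark {
    fn new(name: &str) -> Self {
        Bookmark(name.to_string())
    }
}

impl BookmarkPrefix {
    fn new(prefix: &str) -> Self {
        BookmarkPrefix(prefix.to_string())
    }
    // Prefix-specific operations live on the prefix type only.
    fn matches(&self, bookmark: &Bookmark) -> bool {
        bookmark.0.starts_with(&self.0)
    }
}

fn main() {
    let main_bm = Bookmark::new("main");
    let releases = BookmarkPrefix::new("release/");
    assert!(!releases.matches(&main_bm));
    assert!(releases.matches(&Bookmark::new("release/1.0")));
}
```

Changing the inner representation from AsciiString to String later would then touch only these two definitions, not every caller.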
Summary:
Let's use the new feature in SendWrapper to use many io threads. That will help
us mitigate the high cpu usage issues we were having with blobstore requests.
Manifold blobstore now creates the io threads itself.
Reviewed By: kulshrax
Differential Revision: D7831420
fbshipit-source-id: ec9f3327347ca6bfbd23c482e69a6fee663b1da5
Summary: As with changesets and blobs, let's cache filenodes data
Reviewed By: jsgf
Differential Revision: D7831105
fbshipit-source-id: 334cb474f5cc3ef8dba0945d11273b2b3875e8ad
Summary: The goal is to be able to read revlogs using Rust code and also parse and serialize them in Rust formats for debugging purposes
Reviewed By: farnz
Differential Revision: D7830358
fbshipit-source-id: 95e257a4482eca22b328b174bce3fceec1b47245
Summary:
The commits that are blobimported have out-of-order or simply incorrect lists of changed files.
Because we have to persist Changesets as-is, we are passing the untouched list of files here to be used by Changeset.
Reviewed By: farnz
Differential Revision: D7830310
fbshipit-source-id: 56adec2c317896decaa9176b3a6bfb0cab187ed0
Summary: A useful utility that lets you, for example, fetch a blob of data from Manifold, decode it, and display it
Reviewed By: jsgf
Differential Revision: D7779154
fbshipit-source-id: aaa4ae1d09b64f7f52c7942a51e8bb4ccc0cb700
Summary: the idea of Mercurial heads in Mononoke will be represented by bookmarks, so there is no need to have them around
Reviewed By: StanislavGlebik
Differential Revision: D7775032
fbshipit-source-id: 1618a1e51862d7c115b2955082f40ee890a045f1
Summary: For on-disk-rocksdb use cases we should persist bookmarks like any other table we use
Reviewed By: farnz
Differential Revision: D7728717
fbshipit-source-id: f63a6410f5ed254a719a16a7504d1b31da5a20a8
Summary:
This change isn't doing much on its own, since rocksdb's BlobRepo is using in-memory Bookmarks ATM and they disappear when the import is finished.
Later, bookmarks will be used for head discovery; then this will be properly tested
Reviewed By: farnz
Differential Revision: D7728716
fbshipit-source-id: ad50f35b18d93aa1e38951408092e46e67fde0c7
Summary: I am planning to add importing bookmarks, doing it on current main.rs would make it unreadable, so I am splitting this file now
Reviewed By: farnz
Differential Revision: D7728185
fbshipit-source-id: fdfb4f60eecd9c8af7626bd0e892bb1bfbf7f081
Summary: The eden integration test contains a commit with no content, which new_blobimport couldn't import. With this change, the commit API is capable of handling such commits.
Reviewed By: jsgf
Differential Revision: D7709243
fbshipit-source-id: 7d55eb2ec421820d189ab05b0f8cb4411f850a7b
Summary:
Let's fail only if inconsistent data was inserted - for example, same commit
hash but different parents.
This matches core hg behavior, and also it's completely normal for commit cloud
to send more parent commits than necessary.
Reviewed By: lukaspiatkowski
Differential Revision: D7722649
fbshipit-source-id: 172a0985fb3fda27d55e9dce8916ec3793de5db9
Summary:
There are a few separate steps during blobimport. One of them inserts the
blobs and another inserts the changesets. That worked fine when the RevlogRepo was
static. However, if the RevlogRepo changed between the first and second steps, we
could have inserted blobs for N commits but inserted (N + X) changesets into the
changesets table. That means a few commits would have no blobs at all.
This diff fixes the problem by determining the set of changesets to blobimport
beforehand.
Reviewed By: farnz
Differential Revision: D7615133
fbshipit-source-id: 1a66907e34a65588b101199c8f59abda53f7bc20
Summary: As with filenodes, we also want to write changesets in a real store.
Reviewed By: lukaspiatkowski
Differential Revision: D7615101
fbshipit-source-id: 269deb8fc3c1f58afb82f453a68ea4d8a3f1f63d
Summary:
Use asyncmemo to cache Changesets.
Unfortunately, we are currently using a separate asyncmemo cache, so we have to
specify the size for each cache separately. Later we'll have a single cache for
everything, and the number of config knobs will go down.
Reviewed By: lukaspiatkowski
Differential Revision: D7685376
fbshipit-source-id: efe8a3a95fcc72fab4f4af93564e706cd1540c2f
Summary:
Let's use it! Pass a config option that sets the cache's max memory usage (don't
put a limit on the number of entries; it's useless in that case).
Currently we'll set a separate size for each of the caches that we use
(blobstore, changesets, filenodes, etc). Later we'll have just one single option that
sets the cache size for all of them.
Reviewed By: lukaspiatkowski
Differential Revision: D7671814
fbshipit-source-id: f9571078e6faaa80ea4c31c76a9eebcc24d8a68a
Summary:
We know that the hashes for non-root-tree-manifests and filenodes
should always be consistent. Verify that.
Reviewed By: farnz
Differential Revision: D7704087
fbshipit-source-id: 7f6207878c5cd372b272aa6970506dd63b5a3c7c
Summary:
As the comment explains, sometimes the hashes don't match the
contents. Accept such pushes.
Reviewed By: farnz
Differential Revision: D7699930
fbshipit-source-id: 376f01b6cf03f6cad84c2c878d192d55f8d81812
Summary:
We're going to keep this around for now as part of double-writing.
All the hashes here are definitely Mercurial hashes, so use them that way.
Reviewed By: lukaspiatkowski
Differential Revision: D7683890
fbshipit-source-id: 270091cd11f3cec7ef4cf565de5ef913fcf7adea
Summary:
file::File works entirely in the Mercurial domain, so these
conversions are good.
Reviewed By: StanislavGlebik
Differential Revision: D7665973
fbshipit-source-id: 8a192c5d1886492ad21593693b080c8e5ddf8f7e
Summary:
This is because these Mercurial entries are (at least currently) going
to be stored as they come in, and this data structure is entirely in the
Mercurial domain.
Reviewed By: lukaspiatkowski
Differential Revision: D7664972
fbshipit-source-id: 9de5475eed0d7ab7085c29fd0282f205043cfe5a
Summary:
I was trying to debug something with the new blobimport, and this was
getting in the way.
Reviewed By: StanislavGlebik
Differential Revision: D7664660
fbshipit-source-id: 2ec4ee79fbe13584f35e7dcd9e8df2b8bdf181c0
Summary:
The comment doesn't quite look right, and `HgBlob` will always have
content available now.
Reviewed By: StanislavGlebik
Differential Revision: D7663200
fbshipit-source-id: 614b8bc97ece99aaefdc8fa6eaf36fe66779be13
Summary:
The list of arguments is becoming too long, and I need to add even
more here.
Reviewed By: StanislavGlebik, farnz
Differential Revision: D7652096
fbshipit-source-id: 62a4631e163e95cf5c950a949e72facab629ea54
Summary:
Currently, any sort of `Bytes` can be stored in the blobstore. That
caused me to make several mistakes while writing the code to store bonsai
changesets, because I'd just end up storing the wrong set of `Bytes`.
Introduce stronger typing so that only types that explicitly implement
`BlobStorable` can be stored in the blobstore.
Currently, these sorts of blobs can be stored in the blob store:
* `ChangesetBlob` and `ContentBlob` in `mononoke-types` (these are Thrift-serialized structures)
* The envelope `RawNodeBlob` and `RawCSBlob` types in `blobrepo`, once converted to `EnvelopeBlob` instances
* `HgBlob`, which contains revlog data (manifests or files) exactly as serialized by Mercurial
Reviewed By: StanislavGlebik
Differential Revision: D7627290
fbshipit-source-id: d1bcbde8881e365dec99618556e7b054985bccf7
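The stronger typing described above can be sketched with a trait bound (a hypothetical simplification: the real `BlobStorable` and blobstore APIs are richer and async; the method names here are invented). A raw `Bytes`-like buffer simply no longer satisfies the bound, so the class of mistakes described above fails to compile:

```rust
use std::collections::HashMap;

// Only types that opt in via BlobStorable can be written to the
// blobstore; arbitrary byte buffers cannot be stored by accident.
trait BlobStorable {
    fn serialize(&self) -> Vec<u8>;
}

// Stand-in for one of the storable blob types (e.g. a ContentBlob).
struct ContentBlob(Vec<u8>);

impl BlobStorable for ContentBlob {
    fn serialize(&self) -> Vec<u8> {
        self.0.clone()
    }
}

struct Blobstore {
    data: HashMap<String, Vec<u8>>,
}

impl Blobstore {
    fn new() -> Self {
        Blobstore { data: HashMap::new() }
    }
    // The trait bound rejects non-storable types at compile time.
    fn put<B: BlobStorable>(&mut self, key: &str, blob: &B) {
        self.data.insert(key.to_string(), blob.serialize());
    }
}

fn main() {
    let mut store = Blobstore::new();
    store.put("content-1", &ContentBlob(vec![1, 2, 3]));
    assert_eq!(store.data["content-1"], vec![1, 2, 3]);
    // store.put("raw", &vec![1u8]); // would not compile: Vec<u8> is not BlobStorable
}
```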
Summary:
Pass mysql tier name to the BlobRepo, so that we can use it to connect to mysql
based storages like mysql changeset storage, filenodes storage etc.
Note that currently Filenodes storage only connects to master region. This will
be fixed in the later diffs
Reviewed By: lukaspiatkowski
Differential Revision: D7585191
fbshipit-source-id: 168082abfeb7ccba549c7a49e6269cc01c490c14
Summary:
Now that `BlobNode` no longer returns `None`:
* don't expose the `BlobNode` API outside the crate because it turns out to not be very useful (it should probably go away eventually?)
* make the `File` API not return `Option` types
* Add a new `file_contents` that returns a brand-new `FileContents` (this is the first time we're tying together Mercurial and Mononoke data structures!)
Also remove a `Symlink` API that isn't really correct honestly.
Reviewed By: StanislavGlebik
Differential Revision: D7624729
fbshipit-source-id: 38443093b8bfea91384c959f3425cf355fac9f65
Summary: Having an implicit `From` conversion makes it hard to track the exceptional places where RevlogChangeset can be directly translated to BlobChangeset. Make it explicit for better tracking of this behavior
Reviewed By: StanislavGlebik
Differential Revision: D7637247
fbshipit-source-id: 781341315102ea6b2265c33bb09a89aae3d0c329
Summary:
I'm going to put in some stronger typing around what can be stored in
the blob store. Centralizing the management here makes that much easier.
Reviewed By: StanislavGlebik
Differential Revision: D7619519
fbshipit-source-id: a428679018f0a1571e54bc01bb5483ba9fdf1cb5
Summary: The Hg prefix is unique now so let's not use verbose mercurial::
Reviewed By: sid0
Differential Revision: D7620112
fbshipit-source-id: 0aece310ed817445fef4c94b32f78fda3a3b1c49
Summary: mercurial_types::DBlobNode should be replaced by types from mononoke_types or mercurial in most cases. This rename should help with tracking this
Reviewed By: sid0
Differential Revision: D7619793
fbshipit-source-id: 261fd92acae825dc4bc8011c3716c5585eb0413c
Summary: mercurial_types::DParent should be replaced by types from mononoke_types or mercurial in most cases. This rename should help with tracking this
Reviewed By: sid0
Differential Revision: D7619686
fbshipit-source-id: 5ad105113779387f1408c806860483e06ed5fb3d
Summary: mercurial_types::DFileNodeId should be replaced by types from mononoke_types in most cases. This rename should help with tracking this
Reviewed By: sid0
Differential Revision: D7619290
fbshipit-source-id: aa6a8e55ae3810c4531028c3b3db2e5730fe7846
Summary: mercurial_types::DChangesetId should be replaced by types from mononoke_types in most cases and by mercurial::HgChangesetId in others. This rename should help with tracking this
Reviewed By: sid0
Differential Revision: D7618897
fbshipit-source-id: 78904f57376606be99b56662164e0c110e632c64
Summary: Migrates some uses of `.map_err()` and `.context(format!())` usage in Mononoke to `.with_context()`
Reviewed By: lukaspiatkowski
Differential Revision: D7607935
fbshipit-source-id: 551538c78a1755f7aa0716532ab437a1baf6dd89
Summary:
This is a cleanup of the NodeHash API. There were a few unused methods and a few ways to convert between mercurial and mercurial_types hashes. With this diff it is very easy to identify the places where this conversion happens.
A followup to this diff will use this new API to easily replace the NodeHash conversions in places where remapping is required.
Reviewed By: sid0
Differential Revision: D7592876
fbshipit-source-id: 6875aa6df1a3708ce54ca5724f6eb960d179192b
Summary:
Let's fill the cs table even if we import only part of the repo. This lets us
import new changesets incrementally.
That can be dangerous since we don't check if parent commits are present.
However this blobimport is a temporary measure until we get a full-fidelity
blobimport that uses a commit API.
Reviewed By: jsgf
Differential Revision: D7485495
fbshipit-source-id: 63ba91bad4eb1c1662db73293c76a506f48a4753
Summary: They are replaced by filenodes
Reviewed By: farnz
Differential Revision: D7443320
fbshipit-source-id: 13c7d07bc00dcbaa991663c8da8a07fcb0de1332
Summary:
This will probably go away soon, but for now I want to be able to
disambiguate the new Thrift-encoded blobs in Mononoke from these.
Reviewed By: StanislavGlebik
Differential Revision: D7565808
fbshipit-source-id: d61f3096fa368b934a923dee54a0ea1e3469ae0d
Summary:
Since `FileType` now exists, the `Type` enum can use it instead of
defining its own stuff.
Reviewed By: farnz
Differential Revision: D7526046
fbshipit-source-id: 3b8eb5502bee9bc410ced811dc019c1ce757633f
Summary:
Previously we were only able to create sqlite filenodes; now let's make it
possible to create mysql filenodes.
Reviewed By: jsgf
Differential Revision: D7485098
fbshipit-source-id: b9156e51d41a570f9e6aaf9eaa9e476222257bca
Summary:
We'll need filenodes in blobimport when we add filenodes to the BlobRepo.
The implementation is not great - it creates a separate thread for the
filenodes (see "filenodeinserts"). New filenodes are sent via UnboundedSender
from the parsing cpupool.
However, it isn't worth the effort to clean up code that we are going to
deprecate in a couple of weeks.
Reviewed By: farnz
Differential Revision: D7429440
fbshipit-source-id: 4a9220915bd27f5c1c2028ec604afd700bb8a509
Summary:
The diff adds an extension of streams to the `failures_ext` crate, allowing streams with error type `failure::Error` or `failure::Fail` to store a context.
As a proof-of-concept, the resulting `context()` function is applied to a stream in use in mononoke.
Reviewed By: lukaspiatkowski
Differential Revision: D7336012
fbshipit-source-id: 822c9dcd5b6c0a60470e8fd98fecd569928be7d1
Summary:
There's no point passing it by reference since callers don't need to
retain it, and the async implementation needs to move it into another context.
Reviewed By: farnz
Differential Revision: D7350001
fbshipit-source-id: 5947557a84621afae801dc20e3994496244e3a10
Summary:
This codemod tries not to change the existing behavior of the system, only to introduce new types specific to Mercurial Revlogs.
It intentionally introduces a lot of copypasta, which will be cleaned up in following diffs.
Reviewed By: farnz
Differential Revision: D7367191
fbshipit-source-id: 0a915f427dff431065e903b5f6fbd3cba6bc22a7
Summary:
We're going to get rid of empty MPaths very soon, so stop using them
here.
Reviewed By: StanislavGlebik, farnz
Differential Revision: D7358023
fbshipit-source-id: 2d5fff40eae03cc63ef5514faee1a8505b3b2bd0
Summary:
Lots of places specifically want a non-empty `MPath`, and earlier those cases
weren't type-checked. Now `Option<MPath>` stands for an empty `MPath`. In the
next diff `MPath::empty()` will go away and the only way to represent an
`MPath` that doesn't exist will be with `Option<MPath>`.
Reviewed By: farnz
Differential Revision: D7350970
fbshipit-source-id: 1612aec67134e7a0ebad15dbaa93b5ea972f8ddf
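The type-level rule described above can be sketched like this (a hypothetical simplification: `MPath` really wraps path components, and the constructor and helper names here are invented). The constructor refuses to build an empty path, and absence is expressed as `Option<MPath>`:

```rust
// MPath can only be constructed non-empty; "no path" is Option::None,
// not an empty MPath value.
#[derive(Debug, Clone, PartialEq)]
struct MPath(String);

impl MPath {
    fn new(path: &str) -> Option<MPath> {
        if path.is_empty() {
            None // an empty path is not a valid MPath
        } else {
            Some(MPath(path.to_string()))
        }
    }
}

// Functions that may legitimately have no path take Option<&MPath>,
// so the "non-empty" invariant is type-checked at every call site.
fn describe(path: Option<&MPath>) -> String {
    match path {
        Some(p) => format!("path: {}", p.0),
        None => "no path (repo root)".to_string(),
    }
}

fn main() {
    assert!(MPath::new("").is_none());
    let p = MPath::new("a/b").unwrap();
    assert_eq!(describe(Some(&p)), "path: a/b");
    assert_eq!(describe(None), "no path (repo root)");
}
```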
Summary:
The `Option<&MPathElement>` type is more general -- it's easy to
convert from `&Option<MPathElement>` to it, but the other way around can
require a clone.
Reviewed By: farnz
Differential Revision: D7339161
fbshipit-source-id: 0c8ab57a19bc330245c612e3e0e3651e368ab8cb
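The asymmetry described above is easy to demonstrate: `&Option<T>` converts to `Option<&T>` for free via `Option::as_ref`, while going the other way would require cloning the element. A small sketch with a hypothetical `MPathElement` stand-in:

```rust
#[derive(Debug, Clone, PartialEq)]
struct MPathElement(String);

// Taking the more general Option<&MPathElement> lets both owning and
// borrowing callers pass their value without a clone.
fn element_len(elem: Option<&MPathElement>) -> usize {
    elem.map_or(0, |e| e.0.len())
}

fn main() {
    let owned: Option<MPathElement> = Some(MPathElement("dir".to_string()));
    // &Option<T> -> Option<&T> is a free conversion:
    assert_eq!(element_len(owned.as_ref()), 3);
    assert_eq!(element_len(None), 0);
}
```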
Summary:
The new_blobimport, as opposed to the old one, does two things differently:
1. It uses the better-structured API of RevlogRepo. The old one reads the revlogs directly; it does not verify that the data it has read is correct, nor does it let us easily fix the data into a canonical form (once we have a canonical form different from Revlog's).
2. It uses BlobRepo's commit API instead of writing directly to storage. This ensures consistency in our code and lets us leverage the validation that is incorporated into the commit API.
Reviewed By: farnz
Differential Revision: D7041976
fbshipit-source-id: fe592524533955f364f1b037109b3b5b5bab6b02
Summary:
RevlogRepo exposes a ton of methods that are almost equivalent to taking the Revlog directly and ignoring the RevlogRepo abstraction above it.
This diff cleans this up a bit; there are still some methods that the "old" blobimport uses, but the "new" one shouldn't need them.
Reviewed By: StanislavGlebik
Differential Revision: D7289445
fbshipit-source-id: ac7130fe41c4e4484d6986fe5b19d5adc751369a
Summary:
Mononoke will introduce its own ChangesetId, ManifestId and BlobHash, and it
would be good to rename these before that lands.
Reviewed By: farnz
Differential Revision: D7293334
fbshipit-source-id: 7d9d5ddf1f1f45ad45f04194e4811b0f6decb3b0
Summary: Replace the generic types of `Blob` and `BlobNode` with `Bytes`.
Reviewed By: lukaspiatkowski
Differential Revision: D7115361
fbshipit-source-id: 924d347377569c6d1b3b4aed14d584510598da7b
Summary: Update to include num_cpu and blake2. Also update bincode to 1.0.0
Reviewed By: StanislavGlebik
Differential Revision: D7098292
fbshipit-source-id: 67793a6f458d50fc049781f34abaf313c8ff7a79
Summary:
Provide an API to ask BlobRepo to create changesets for you from
pieces that you either have to hand, or have created via upload_entry().
Parallelism is maintained as far as possible - if you commit N changesets,
they should all upload blobs in parallel, but the final completion future
depends on the parents, so that completion order can be maintained.
The ultimate goal of this API is to ensure that only valid commits are added to the `BlobRepo` - this means that, once the future returned by `create_changeset` resolves, you have a repo with commits and blobs in place. Until then, all the pieces can be uploaded, but are not guaranteed to be accessible to clients.
Still TODO is teaching this to use the complete changesets infra so that we
simply know which changesets are fully uploaded.
Reviewed By: StanislavGlebik
Differential Revision: D6743004
fbshipit-source-id: 813329058d85c022d75388890181b48b78d2acf3
Summary:
If all the inserts finished successfully, then it's safe to mark changesets as complete.
This diff fills up the changesets store after blobimport successfully finishes.
For simplicity, if --commit-limit or --skip is set, then we skip filling up the changeset store.
Reviewed By: sid0
Differential Revision: D7043831
fbshipit-source-id: 8ae864b45222d52281c885a49c2dca44ba577137
Summary:
As we discussed before, let's add a get_name() method that returns MPathElement,
and remove get_path() and get_mpath().
Besides the renaming, this diff also makes repoconfig work with tree manifests, and
fixes linknode creation in blobimport - previously the basename was used instead
of the whole path.
Reviewed By: jsgf
Differential Revision: D6857097
fbshipit-source-id: c09f3ff40d38643bd44aee8b4488277d658cf4f6
Summary: Change BlobChangeset and callers to use ChangesetId instead of NodeId
Reviewed By: lukaspiatkowski
Differential Revision: D6835450
fbshipit-source-id: 7b20359837632aef4803e40965380c38f54c9d0a
Summary:
Adds an option that sets the number of filelogs and revlogs that will be loaded
into memory. That lets us use blobimport in memory-constrained
environments.
Reviewed By: jsgf
Differential Revision: D6532734
fbshipit-source-id: b748478ec80e75f56a8e07ae1532b0d69c4a5e16
Summary:
Like the other BlobState components, Linknode was too generic -
reduce it down to a practical set for live implementations.
Error handling is not great here or in Bookmarks, but I'm going to await the
decision on moving to Failure before I improve it.
Reviewed By: jsgf
Differential Revision: D6459012
fbshipit-source-id: 00314309f62ba070b5908a28f5174a31b6dd0d84
Summary:
Remove the last associated types from BlobStore - this means that
BlobStore now has an associated trait object type.
Reviewed By: jsgf
Differential Revision: D6425414
fbshipit-source-id: 7186dab9b56593dd1d70be732d4ad56d1e7b3c63
Summary:
Don't use failure's bail!() and ensure!() macros.
Instead, failure_ext provides:
- bail_err!(err) - converts its single parameter to the expected error and returns, i.e. `return Err(From::from(err));`
- bail_msg!(fmt, ...) - takes format string parameters and returns a `failure::err_msg()` error
- ensure_err!(), ensure_msg!() - corresponding changes
Also:
- remove all stray references to error-chain
- remove direct references to failure_derive (it's reexported via failure and failure_ext)
- replace uses of `Err(foo)?;` with `bail_err!()` (since `bail_err` unconditionally returns, but `Err(x)?` does not in principle, which can affect type inference)
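A hedged sketch of why the unconditional return matters for type inference; this toy macro and error type are illustrative, not the real failure_ext definitions:

```rust
// Minimal version of the bail_err! idea: it expands to an
// unconditional `return`, so the surrounding expression never needs
// a type, whereas Err(x)? is an expression whose success type must
// still be inferred.
macro_rules! bail_err {
    ($e:expr) => {
        return Err(From::from($e))
    };
}

#[derive(Debug)]
struct MyError(String); // invented error type for illustration

impl From<&str> for MyError {
    fn from(s: &str) -> MyError {
        MyError(s.to_string())
    }
}

fn check(flag: bool) -> Result<u32, MyError> {
    if !flag {
        bail_err!("flag was false"); // unconditionally returns
    }
    Ok(42)
}

fn main() {
    assert!(check(false).is_err());
    assert_eq!(check(true).unwrap(), 42);
}
```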
Reviewed By: kulshrax
Differential Revision: D6507717
fbshipit-source-id: 635fb6f8c96d185b195dff171ea9c8db9e83af10
Summary:
Make it possible to skip a number of commits.
Also change the type from usize to u64, to make sure it works the same on 32-bit
platforms (although that shouldn't matter much).
Reviewed By: jsgf
Differential Revision: D6395743
fbshipit-source-id: 88a12583de2b23d4f55115d696c5398f6814c2da
Summary:
Convert scm/mononoke to use failure, and update common/rust crates it depends on as well.
What it looks like is a lot of deleted code...
General strategy:
- common/rust/failure_ext adds some things that are in the git version of failure but aren't yet on crates.io (`bail!` and `ensure!`, `Result<T, Error>`)
- everything returns `Result<T, failure::Error>`
- crates with real errors get an error type, with a derived Fail implementation
- replicate error-chain by defining an `enum ErrorKind` where the fields match the declared errors in the error! macro
- crates with dummy error-chain (no local errors) lose it
- `.chain_err()` -> `.context()` or `.with_context()`
So far the only place I've needed to extract an error is in a unit test.
Having a single unified error type has simplified a lot of things, and removed a lot of error type parameters, error conversion, etc, etc.
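The pattern can be sketched with the standard library standing in for the failure crate; the `ErrorKind` variants below are invented for illustration:

```rust
// Stdlib analogue of the migration: a crate-local ErrorKind enum whose
// variants mirror the old error-chain declarations, convertible into a
// single unified error type (Box<dyn Error> plays the role of
// failure::Error here).
use std::fmt;

#[derive(Debug)]
enum ErrorKind {
    RepoNotFound(String),
    InvalidHash(String),
}

impl fmt::Display for ErrorKind {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            ErrorKind::RepoNotFound(name) => write!(f, "repo not found: {}", name),
            ErrorKind::InvalidHash(h) => write!(f, "invalid hash: {}", h),
        }
    }
}

impl std::error::Error for ErrorKind {}

// Everything returns one unified error type; any concrete error
// converts in via `?`, so no per-crate error conversion is needed.
fn lookup(name: &str) -> Result<u32, Box<dyn std::error::Error>> {
    if name.is_empty() {
        Err(ErrorKind::RepoNotFound(name.to_string()))?;
    }
    Ok(7)
}

fn main() {
    assert!(lookup("").is_err());
    assert_eq!(lookup("repo").unwrap(), 7);
}
```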
Reviewed By: sid0
Differential Revision: D6446584
fbshipit-source-id: 744640ca2997d4a85513c4519017f2e2e78a73f5
Summary:
BlobStore is entirely generic, and puts no limits on its
implementations. Remove ValueIn and ValueOut type parameters, and insist that
all blobs are Bytes (as per production setups)
Reviewed By: StanislavGlebik
Differential Revision: D6425413
fbshipit-source-id: 455e526d8baebd0d0f1906941648acca89be4881
Summary:
BlobStore is entirely generic, and puts no limits on its
implementations. Remove the "Key" type parameter, and insist that all keys are
String (as per production setups)
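A rough sketch of the de-generified shape, with `Vec<u8>` standing in for `Bytes`; the trait and method names below are assumptions, not the real Mononoke API:

```rust
// With keys fixed to String and values fixed to bytes, the blobstore
// trait has no type parameters left and can be held as a trait object.
use std::collections::HashMap;

trait Blobstore {
    fn put(&mut self, key: String, value: Vec<u8>);
    fn get(&self, key: &str) -> Option<Vec<u8>>;
}

struct MemBlobstore {
    data: HashMap<String, Vec<u8>>,
}

impl Blobstore for MemBlobstore {
    fn put(&mut self, key: String, value: Vec<u8>) {
        self.data.insert(key, value);
    }
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.data.get(key).cloned()
    }
}

fn main() {
    // Callers can now hold any implementation as Box<dyn Blobstore>.
    let mut store: Box<dyn Blobstore> =
        Box::new(MemBlobstore { data: HashMap::new() });
    store.put("key".to_string(), b"blob".to_vec());
    assert_eq!(store.get("key"), Some(b"blob".to_vec()));
}
```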
Reviewed By: StanislavGlebik
Differential Revision: D6425412
fbshipit-source-id: 1f1229bf8e001bf780964e883c6beb071e9ef1d8
Summary:
In some cases an output path is not necessary at all - for example, if we put
blobs into remote storage and we don't care about heads.
Let's make the OUTPUT parameter optional for these cases.
Reviewed By: jsgf
Differential Revision: D6397168
fbshipit-source-id: 06ee3b2bba038ff5076040a01c9d73c2b6e2b5fc
Summary:
As part of removing excess genericism, make Heads a trait with no
associated types or type parameters.
Reviewed By: StanislavGlebik
Differential Revision: D6352727
fbshipit-source-id: df9ef87e0e0abe43c30e7318da38d7f930c37c6e
Summary:
This makes it quite easy to write out linknodes.
Also regenerate linknodes for our test fixtures -- the next commit will bring
them in.
Reviewed By: jsgf
Differential Revision: D6214033
fbshipit-source-id: 3b930fe9eda45a1b7bc6f0b3f81dd8af102061fc
Summary:
It's an interesting prototype, but awkward to keep running and we
only need it for reference.
Reviewed By: StanislavGlebik
Differential Revision: D6306296
fbshipit-source-id: 10b5bf3631debcb9de258d4d68089ff709dc1329
Summary:
Putting retries at this layer is not very good, because it requires every
client to add a RetryingBlobstore.
Reviewed By: kulshrax
Differential Revision: D6298254
fbshipit-source-id: dbdce7fe141f9e1511322e74a1258d3819a68eb5
Summary:
We need ownership of the buffer in all of these cases, and
`AsRef<Path>` could potentially create unnecessary copies.
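The ownership point can be sketched like this; the `Fileblob` struct below is hypothetical:

```rust
// A constructor that takes Into<PathBuf> can move an owned PathBuf in
// with no copy, whereas AsRef<Path> would force a to_path_buf() copy
// whenever the callee needs ownership of the buffer.
use std::path::PathBuf;

struct Fileblob {
    base: PathBuf,
}

impl Fileblob {
    fn open<P: Into<PathBuf>>(base: P) -> Fileblob {
        // The caller's buffer is moved, not re-allocated.
        Fileblob { base: base.into() }
    }
}

fn main() {
    let owned = PathBuf::from("/tmp/blobs");
    let store = Fileblob::open(owned); // ownership transferred, zero copies
    assert_eq!(store.base, PathBuf::from("/tmp/blobs"));
}
```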
Reviewed By: jsgf
Differential Revision: D6214034
fbshipit-source-id: 806a87bfe3b125febaaaaf26c8b8dcac407de145
Summary:
This option can be used with non-production-ready blobstores that can't yet handle
big blobs.
Reviewed By: farnz
Differential Revision: D6189922
fbshipit-source-id: fa4df5b49c6d1126d3b3114e9ebe376931947917
Summary:
It's quite a useful option for testing, and I had to reimplement it a
couple of times. It's time to land it.
Reviewed By: farnz
Differential Revision: D6172230
fbshipit-source-id: ec1b7c0453a3a612a173aec87978a4917568cd7b
Summary:
`RepoPath` represents any absolute path -- root, directory or file. There's a
lot of code that manually switches between directory and file entries --
abstract all of that away.
Reviewed By: farnz
Differential Revision: D6201383
fbshipit-source-id: 0047023a67a5484ddbdd00bb57bca3bfb7d4dd3f
Summary:
copy_changeset was getting a bit too long, and linknode stuff would
have made it even longer.
Reviewed By: StanislavGlebik
Differential Revision: D6097840
fbshipit-source-id: 00800cf9516adf69f2ca19244d3e14268f148ae4
Summary:
Going to add more params here, and this is becoming quite hard to
read.
Reviewed By: StanislavGlebik
Differential Revision: D6096419
fbshipit-source-id: 50f0b99bb6b1804fc01f6a99fc0297c1695dbaa5
Summary:
I'm adding linknode support to this store, and without this refactoring
the code would become quite unmanageable.
I've recorded this as copies to preserve blame info for the bits that will
remain untouched. This seems to work pretty well.
Reviewed By: StanislavGlebik
Differential Revision: D6094812
fbshipit-source-id: f7a7a1d3546d4ef2dbfa33a0a8e97d47b44f51a5
Summary: Will factor this out into several files in upcoming patches.
Reviewed By: StanislavGlebik
Differential Revision: D6094811
fbshipit-source-id: cd354888882aff2552e61dea788aeb5426e08f4d
Summary:
There is no need to insert the same entries twice. Let's filter them out.
Note that while it's possible to have identical manifest entries (for example,
files or dirs with the same content), all changeset entries should be unique,
because each changeset in the repo is unique and is processed exactly once.
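A minimal sketch of the filtering step, using string hashes as stand-ins for the real entry identifiers:

```rust
// Keep only the first occurrence of each entry hash; HashSet::insert
// returns false when the value was already present, so repeats are
// filtered out in one pass.
use std::collections::HashSet;

fn dedup_entries(entries: Vec<&str>) -> Vec<&str> {
    let mut seen = HashSet::new();
    entries
        .into_iter()
        .filter(|hash| seen.insert(*hash))
        .collect()
}

fn main() {
    // Two files with identical content produce the same entry hash.
    let entries = vec!["abc1", "def2", "abc1"];
    assert_eq!(dedup_entries(entries), vec!["abc1", "def2"]);
}
```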
Reviewed By: farnz
Differential Revision: D6076667
fbshipit-source-id: 64bdf25a21884eb2faf43f32590f7cbb8f8dd300
Summary:
Let's move all IO to a separate thread. This helps quite a lot when used with a
slow blobstore, because parser threads are not blocked on IO -
importing the upstream mercurial repo went from 20 mins to 9 mins.
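A sketch of the split using a channel, assuming the shape described above; the names are illustrative:

```rust
// Parser threads send finished blobs over a channel and a dedicated
// thread performs the (slow) writes, so parsing never blocks on
// storage latency.
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<(String, Vec<u8>)>();

    // Dedicated IO thread: drains the channel and performs the writes.
    let io_thread = thread::spawn(move || {
        let mut written = 0;
        for (_key, _blob) in rx {
            // real code would call blobstore.put(key, blob) here
            written += 1;
        }
        written
    });

    // Parser side: hand off blobs without waiting on the store.
    for i in 0..3 {
        tx.send((format!("key{}", i), vec![0u8; 8])).unwrap();
    }
    drop(tx); // close the channel so the IO thread finishes

    assert_eq!(io_thread.join().unwrap(), 3);
}
```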
Reviewed By: lukaspiatkowski
Differential Revision: D6050992
fbshipit-source-id: c3877b123bad993d819495247135544a141eab10
Summary: Change the default bucket for blobimport to be mononoke_prod, a higher capacity bucket than the previous mononoke bucket. Also make it possible to specify the bucket via the CLI rather than hardcoding it.
Reviewed By: jsgf
Differential Revision: D6073745
fbshipit-source-id: 11dcf0c8bbef0b7c3f5971cf0676cf6325f276a6
Summary: the glog drain does not swallow e.g. the backtrace of error_chain errors, so it is a bit easier to debug the tool
Reviewed By: farnz
Differential Revision: D6021671
fbshipit-source-id: 32bfe01bfd77d85c37a2a446cb3e5d000763c689
Summary:
Realized that we were missing a few crates from the Tokio cleanup because those crates
didn't have `#![deny(warnings)]`.
This also caused a bunch of files to be rustfmted, which is fine.
Reviewed By: kulshrax
Differential Revision: D6024628
fbshipit-source-id: 55032d20f3676c92ef124d861e1edcd34126ab55
Summary: Compaction can slow down blobimporting a lot. Let's add an option to postpone it till the end
Reviewed By: farnz
Differential Revision: D5882003
fbshipit-source-id: 0611a8e94b3d7331bdacf909d820526f547414a0
Summary: Also ensure that `blobimport` doesn't use its own copy.
Reviewed By: jsgf
Differential Revision: D5847604
fbshipit-source-id: 5390848cd5fab8abd967ef9701720491d703c0f1
Summary: Use `impl Future` rather than a boxed future.
Reviewed By: sid0
Differential Revision: D5829773
fbshipit-source-id: 40c4339e96f7194544f416534952b78a23d93fa6
Summary: Add the `--blobstore manifold` option to blobimport to make it write blobs to Manifold.
Reviewed By: jsgf
Differential Revision: D5758930
fbshipit-source-id: a14a3c155b5d8d7b171ed7a4e53f8569539cb2e9
Summary:
`:` is a reserved character for Windows paths, so Mercurial rejects
them from being committed. Use `-` instead, so that we can commit file blob
repo test fixtures.
Reviewed By: kulshrax
Differential Revision: D5731525
fbshipit-source-id: 8d14fc03f1b135cbc4d42aeaf2f3a0ae6d13f956
Summary: This gets us `Display` support as well.
Reviewed By: lukaspiatkowski
Differential Revision: D5734383
fbshipit-source-id: 1485cf80bb310cdd282b4546bed56c60082be8ec
Summary: Just a few minor changes that make our lives easier overall.
Reviewed By: lukaspiatkowski
Differential Revision: D5737854
fbshipit-source-id: da951d7872433bffa8fc64d15cd0e917f77144b5
Summary:
We want to avoid putting the same entries into the blobstore twice. Even better, we want to avoid generating the list of these entries in the first place.
The first approach was to add a `Mutex<HashSet>` that worker threads would use to filter out entries that were already imported. It turned out that this Mutex kills almost all the speedup from concurrency.
But since we have linkrevs, for each entry we know in which commit that entry was created [1]. That means all of the entries are already nicely split between the threads, so no synchronization is needed.
This gives a good speedup - importing the hg upstream treemanifest repo with a file blobstore went from ~7 min to 2 min.
Note: there is still lock contention - the tree revlog and file revlog maps are protected by a mutex. We can optimize that later if needed.
[1] There is a well-known linkrev issue in mercurial. It shouldn't affect our case at all.
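A rough sketch of the lock-free partitioning, with invented entry names: each entry carries the linkrev of the commit that created it, so bucketing by linkrev assigns every entry to exactly one worker without a shared `Mutex<HashSet>`:

```rust
// Each worker owns one bucket, so duplicates created by the same
// commit are handled by a single thread and no cross-thread
// synchronization is needed for dedup.
fn partition(entries: Vec<(u64, String)>, workers: usize) -> Vec<Vec<String>> {
    let mut buckets: Vec<Vec<String>> = vec![Vec::new(); workers];
    for (linkrev, entry) in entries {
        buckets[(linkrev as usize) % workers].push(entry);
    }
    buckets
}

fn main() {
    let entries = vec![
        (0, "root-manifest".to_string()),
        (1, "file-a".to_string()),
        (2, "file-b".to_string()),
    ];
    let buckets = partition(entries, 2);
    // linkrevs 0 and 2 land on worker 0, linkrev 1 on worker 1
    assert_eq!(buckets[0], vec!["root-manifest".to_string(), "file-b".to_string()]);
    assert_eq!(buckets[1], vec!["file-a".to_string()]);
}
```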
Reviewed By: jsgf
Differential Revision: D5650074
fbshipit-source-id: c4f9e2763127ffe4402417dd3963f1f450d7b325
Summary: The main part is `get_stream_of_manifest_entries`, which creates a stream of all tree manifest entries by recursively walking through them.
Reviewed By: jsgf
Differential Revision: D5622490
fbshipit-source-id: 4a8b2707df0300a37931c465bafb1ed54d6d4d25
Summary:
A preparation step before blobimporting tree manifest repos into blobrepo.
The `get_parents()` method of BlobEntry reads parents from the blobstore. This works fine for file entries because file entries can store their parents in the blobstore. With tree manifests, a BlobEntry can also contain tree manifest entries, which means that the parents of tree manifest entries should also be stored somewhere in the blobstore.
I suggest using the same logic for tree manifest entries as for file entries. File and manifest entries each have two blobstore entries - one stores the hash of the content and the parents, the other stores the actual content.
To do this I moved `RawNodeBlob` and `get_node()` to the separate module and made fields public.
Reviewed By: jsgf
Differential Revision: D5622342
fbshipit-source-id: c9f0c446107d4697b042544ff8b37a159064f061
Summary:
Instead of storing `Vec<u8>`, let's store `Vec<PathComponent>`, where PathComponent is a `Vec<u8>` that contains no `b'/'`.
To make sure len() is still `O(1)` let's store it too.
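A sketch of the representation, assuming a `/` separator; the field and type names below are illustrative:

```rust
// A path is a list of separator-free components, with the total byte
// length (including the joining separators) cached so len() stays O(1).
struct PathComponent(Vec<u8>); // invariant: contains no b'/'

struct MPath {
    components: Vec<PathComponent>,
    len: usize, // total byte length when joined with b'/'
}

impl MPath {
    fn new(components: Vec<PathComponent>) -> MPath {
        // component bytes + (n - 1) separators
        let len = components.iter().map(|c| c.0.len()).sum::<usize>()
            + components.len().saturating_sub(1);
        MPath { components, len }
    }

    fn len(&self) -> usize {
        self.len // O(1): no re-walk of the components
    }
}

fn main() {
    let p = MPath::new(vec![
        PathComponent(b"dir".to_vec()),
        PathComponent(b"file".to_vec()),
    ]);
    assert_eq!(p.len(), 8); // "dir/file"
}
```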
Reviewed By: sid0
Differential Revision: D5573721
fbshipit-source-id: 91967809284d79bf0fcdcabcae9fd787a37c318b