Commit Graph

1504 Commits

Author SHA1 Message Date
Liubov Dmitrieva
5ff6db64ef add special pushvar to push Commit Cloud commits to Mononoke
Summary: This allows us to run pushbackup and cloud sync commands for Read Only Mononoke repos.

Reviewed By: ikostia

Differential Revision: D13804545

fbshipit-source-id: 8026fc4668afc8bb5c2c0a9587ca024e3c6920da
2019-01-25 05:26:58 -08:00
Jeremy Fitzhardinge
55c76466f3 mononoke: glusterblob: add super-simple round-trip test
Summary:
The test is ignored by default because it would probably be
flaky/slow.

Reviewed By: aslpavel

Differential Revision: D13795070

fbshipit-source-id: 8459d34126917e0fce3151ec2f4e68e639d2e7e6
2019-01-24 11:54:43 -08:00
Liubov Dmitrieva
610671e486 implement bypass readonly option (to be used for Commit Cloud in
Summary:
Next step is to change infinitepush / cloud sync to pass this pushvar to
Mononoke

Reviewed By: StanislavGlebik

Differential Revision: D13802402

fbshipit-source-id: 25e3d699bd934e3e015b9784040bd2dc4b43188d
2019-01-24 08:50:27 -08:00
Liubov Dmitrieva
abe70d4324 move Send + Sync to the trait for LeastCommonAncestorsHint
Summary: This hint is passed to many places, so moving Send + Sync into the trait itself reduces boilerplate at the call sites.

Reviewed By: StanislavGlebik

Differential Revision: D13802159

fbshipit-source-id: 891eef00c236b2241571e24c50dc82b9862872cc
2019-01-24 07:59:46 -08:00
Lukas Piatkowski
6466a8806e blobstore healer: join related data in GetRangeOfEntries
Reviewed By: StanislavGlebik

Differential Revision: D13671621

fbshipit-source-id: 7e4e643e6bff83ed88c616d4bf659a3dce69f3a4
2019-01-24 04:32:37 -08:00
Pavel Aslanov
8232ba03da added: tw_handle to logged scuba samples
Summary: Added tw_handle to be able to distinguish source of samples

Reviewed By: ikostia

Differential Revision: D13800968

fbshipit-source-id: 6c8528fc69302b2d5c5fbd40ccdf729c9379a101
2019-01-24 04:03:39 -08:00
Jeremy Fitzhardinge
23c154fcd4 mononoke: add stats to glusterblob
Summary: Simple stats for put/get/is_present so we can monitor basic functionality.

Reviewed By: StanislavGlebik

Differential Revision: D13736062

fbshipit-source-id: 1fd6def0d98d9d29dfbbf11430006b91936d35dc
2019-01-23 22:28:57 -08:00
Jeremy Fitzhardinge
91d83f2b19 mononoke: convert glusterblob to Rust 2018
Summary:
Use rustfix and some hand-editing to convert to Rust 2018 - the main
benefit is the elimination of a lot of `extern crate` lines.

Reviewed By: HarveyHunt

Differential Revision: D13736087

fbshipit-source-id: 50e4edfdff3e8ceea94b2beed36399681871a436
2019-01-23 22:28:57 -08:00
Andrey Malevich
0c8edd40f5 Revert D13575719: [tp2] Update zstd to 1.3.8 as 1.3.x
Differential Revision: D13575719

Original commit changeset: eb7961078ad1

fbshipit-source-id: 844414e83f8a05df89a21dc1c2a6b9e60bad5dcc
2019-01-23 18:18:55 -08:00
Stanislau Hlebik
a83bc8fee3 mononoke: decrease the number of connections for sharded db
Reviewed By: HarveyHunt

Differential Revision: D13784695

fbshipit-source-id: 0afaac5357776119f88fc3f466e4cd799a63c9c9
2019-01-23 12:15:38 -08:00
Nick Terrell
422784684e Update zstd to 1.3.8 as 1.3.x
Summary: Update zstd in TP2 to zstd-1.3.8.

Reviewed By: pixelb

Differential Revision: D13575719

fbshipit-source-id: eb7961078ad161eb633b08b7e80e87f1c63ccca5
2019-01-23 11:15:45 -08:00
Johan Schuijt-Li
25df607aed do not init cachelib for aliasverify during integration tests
Reviewed By: StanislavGlebik

Differential Revision: D13781478

fbshipit-source-id: 03396ca0987e84c78583b3c5445429efa15be64e
2019-01-23 08:00:55 -08:00
Johan Schuijt-Li
3d1463231a disable authorization verification hook for import
Reviewed By: StanislavGlebik

Differential Revision: D13770727

fbshipit-source-id: 8830c63124bc12a40ffe624110d23a3713cad680
2019-01-23 08:00:55 -08:00
Liubov Dmitrieva
dbade135c8 add initial simple integration test for hgmn cloud sync
Summary: This is the cloud sync test with local (on disk) commit cloud backend and mononoke backend.

Reviewed By: markbt

Differential Revision: D13762527

fbshipit-source-id: c454dbf67999333e37a54e5e6ac84c3cebcf8c3b
2019-01-22 08:53:43 -08:00
Liubov Dmitrieva
7e377f952a extra test for phases (more sophisticated)
Summary: This test covers a corner case (partially public stacks)

Reviewed By: StanislavGlebik

Differential Revision: D13750852

fbshipit-source-id: cd1a5a84dfb62951cb37f1fbdd6c510d825adb41
2019-01-22 05:58:46 -08:00
Liubov Dmitrieva
23b3931529 add phases calculation for public roots
Summary:
This is required to cover corner cases when the client has some stacks and part of them has become public.

Calculation for public roots happens for draft heads only, so it doesn't change the performance of hg pull.

Reviewed By: StanislavGlebik

Differential Revision: D13742685

fbshipit-source-id: d8c8bc357628b9b513bbfad4a82a7220d143f364
2019-01-22 05:58:46 -08:00
David Budischek
bb4284e428 Add thrift support for get_commit_v2
Summary: Create the corresponding endpoint for thrift calls to get_commit_v2. For now this only supports the non optional fields.

Reviewed By: StanislavGlebik

Differential Revision: D13730602

fbshipit-source-id: fd9d845620c864bf7dade13d810e98270425ea00
2019-01-22 03:32:48 -08:00
Stanislau Hlebik
bacf9f2d64 mononoke: update skiplist building
Summary:
Previously we relied on CachingChangesetFetcher to quickly fetch all commits into
memory, but CachingChangesetFetcher was deleted in D13695201. Instead, let's use
the `get_many()` method from the Changesets trait to quickly fetch many changesets
into memory at once.

Reviewed By: lukaspiatkowski

Differential Revision: D13712783

fbshipit-source-id: 12e8fa148f7989028547ac8d374438e23b44b6d1
2019-01-21 12:58:44 -08:00
Stanislau Hlebik
3072d0ea6a changesets: add get_many method
Summary:
The primary motivation for this method is to quickly build skiplist indexes
(see next diff). Building skiplists is much faster if all the data is in
memory.
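
The batched-fetch idea can be sketched as follows. This is an illustrative mock, not the real Changesets trait; `Store`, its fields, and the return type are hypothetical names chosen for the example:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for a changesets store: one get_many call replaces
// N individual gets, so a caller (e.g. skiplist building) can pull all the
// rows it needs into memory at once.
struct Store {
    rows: HashMap<u64, String>,
}

impl Store {
    // Returns the (id, value) pairs for every requested id that exists,
    // preserving the order of the requested ids.
    fn get_many(&self, ids: &[u64]) -> Vec<(u64, String)> {
        ids.iter()
            .filter_map(|id| self.rows.get(id).map(|v| (*id, v.clone())))
            .collect()
    }
}

fn main() {
    let mut rows = HashMap::new();
    rows.insert(1u64, "one".to_string());
    rows.insert(2, "two".to_string());
    let store = Store { rows };
    // id 3 is absent and is simply skipped.
    let got = store.get_many(&[1, 3, 2]);
    assert_eq!(got, vec![(1, "one".to_string()), (2, "two".to_string())]);
}
```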

Reviewed By: lukaspiatkowski

Differential Revision: D13712784

fbshipit-source-id: 716dc020a49cbffac273eb466e474ed86887927d
2019-01-21 12:58:44 -08:00
Pavel Aslanov
3a44162690 added: multiplexed blobstore scuba logging
Summary:
- Added scuba logging for put/get operation of `MultiplexedBlobstore`
- Added `blobstore_scuba_table` configuration field

Reviewed By: StanislavGlebik

Differential Revision: D13732064

fbshipit-source-id: 9ac0e31f9e1773321b2a7a4d8d561cce9289944b
2019-01-21 10:04:41 -08:00
Arthur Kushka
aa4a1deb0a Back out "Improved internal representation of GetbundleArgs.bundlecaps"
Summary: mononoke: Original commit changeset: 6cb5124c7893

Reviewed By: StanislavGlebik

Differential Revision: D13751021

fbshipit-source-id: b80da7ebbaaca3324078efda15704c185050b35f
2019-01-21 07:43:26 -08:00
Liubov Dmitrieva
234c33a241 populate phases table in blobimport
Summary:
We decided to populate the phases table in 2 places: blobimport and push-rebase
(already done).

This diff is for blobimport. We know the commits are public.

Reviewed By: lukaspiatkowski

Differential Revision: D13731900

fbshipit-source-id: b64e5643e7cffd9e8fb842e9929f4c1ee7a66197
2019-01-21 07:02:15 -08:00
Arthur Kushka
21048bd300 Improved internal representation of GetbundleArgs.bundlecaps
Summary: Implemented an advanced parser that parses the bundlecaps JSON object into a data structure that is more suitable to work with.

Reviewed By: aslpavel

Differential Revision: D13602738

fbshipit-source-id: 9a2f8e78d55a21e80229aae23e5a38f6cc14c7e8
2019-01-21 02:39:04 -08:00
David Budischek
58d812ec0f Enable logging for thrift calls to apiserver
Summary: Log all thrift calls to scuba. Since the params are in thrift format, we log them both in JSON and in a human-readable format (e.g. path is binary and thus not really readable in JSON). In addition, both http and thrift now log the request type.

Reviewed By: StanislavGlebik

Differential Revision: D13730245

fbshipit-source-id: 3419ec067a9066e181210d184195fbd02980c1e0
2019-01-21 01:09:18 -08:00
Arun Kulshreshtha
6810d7d835 Change Arc<BlobRepo> to BlobRepo
Summary: `BlobRepo` is cheaply clonable and doesn't need to be `Arc`'d.
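
A minimal sketch of why such a handle clones cheaply (illustrative names only, not the real BlobRepo): a type whose fields are already `Arc`s makes `Clone` just a refcount bump, so callers never need to wrap it in another `Arc`:

```rust
use std::sync::Arc;

// Hypothetical stand-in for a repo handle: the shared state lives behind
// an Arc inside the struct, so deriving Clone gives a cheap handle copy.
#[derive(Clone)]
struct Repo {
    blobstore: Arc<Vec<u8>>, // stand-in for the real inner state
}

fn main() {
    let repo = Repo { blobstore: Arc::new(vec![1, 2, 3]) };
    let copy = repo.clone();
    // Cloning bumped the refcount instead of copying the data.
    assert_eq!(Arc::strong_count(&repo.blobstore), 2);
    assert_eq!(copy.blobstore.len(), 3);
}
```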

Reviewed By: StanislavGlebik

Differential Revision: D13726727

fbshipit-source-id: 0b983c7c4625f47df8a11da5781b7777cc38d72f
2019-01-18 11:37:37 -08:00
Arun Kulshreshtha
32de06a0be Convert to Rust 2018
Summary: Convert this crate to Rust 2018 Edition.

Reviewed By: StanislavGlebik

Differential Revision: D13726726

fbshipit-source-id: 28670ef543ee18635b3cfca7e6fb8e4ed4f0832f
2019-01-18 11:37:37 -08:00
Stanislau Hlebik
b909f2bc9c mononoke: per wireproto command timeout
Summary:
Previously we had a timeout per session, i.e. multiple wireproto commands would
share the same timeout. That had a few disadvantages:

1) The main disadvantage was that if a connection timed out we didn't log
stats such as number of files, response size etc., and we didn't log parameters
to scribe. The latter is an even bigger problem, because we usually want to
replay requests that were slow and timed out, not the requests that finished
quickly.

2) The less important disadvantage is that we have clients that make a small
request to the server and then keep the connection open for a long time.
Eventually we kill the connection and log it as an error. With this change
the connection will stay open until the client closes it. That might potentially
be a problem, and if that's the case then we can reintroduce the per-connection
timeout.

Initially I was planning to use tokio::util::timer to implement all the
timeouts, but it has different behaviour for streams - it only allows setting a
per-item timeout, while we want a timeout for the whole stream.
(https://docs.rs/tokio/0.1/tokio/timer/struct.Timeout.html#futures-and-streams)
To overcome this I implemented a simple combinator, StreamWithTimeout, which does
exactly what I want.
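
The whole-stream-deadline idea can be sketched with a plain iterator instead of a futures stream (the names `WithDeadline` and `with_deadline` are illustrative, not the actual StreamWithTimeout): one deadline is fixed up front and checked on every item, unlike a per-item timeout that resets after each element:

```rust
use std::time::{Duration, Instant};

// Hypothetical sketch: wrap an iterator with a single deadline that covers
// the entire sequence, yielding an error once the deadline passes.
struct WithDeadline<I> {
    inner: I,
    deadline: Instant,
}

impl<I: Iterator> Iterator for WithDeadline<I> {
    type Item = Result<I::Item, &'static str>;
    fn next(&mut self) -> Option<Self::Item> {
        if Instant::now() > self.deadline {
            // The whole stream shares this one deadline.
            return Some(Err("stream timed out"));
        }
        self.inner.next().map(Ok)
    }
}

fn with_deadline<I: Iterator>(inner: I, timeout: Duration) -> WithDeadline<I> {
    WithDeadline { inner, deadline: Instant::now() + timeout }
}

fn main() {
    // A short sequence finishes well before the one-second deadline.
    let collected: Vec<_> = with_deadline(0..3, Duration::from_secs(1)).collect();
    assert_eq!(collected, vec![Ok(0), Ok(1), Ok(2)]);
}
```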

Reviewed By: HarveyHunt

Differential Revision: D13731966

fbshipit-source-id: 211240267c7568cedd18af08155d94bf9246ecc3
2019-01-18 08:35:52 -08:00
Stanislau Hlebik
cf3b9b55eb mononoke: rustfmt
Reviewed By: HarveyHunt

Differential Revision: D13731965

fbshipit-source-id: 670f633baebed1d508a55d57e46f3ae4cd42b7d2
2019-01-18 08:35:52 -08:00
Stanislau Hlebik
b595edc7b7 mononoke: record Mononoke session UUID to wireproto
Reviewed By: HarveyHunt

Differential Revision: D13716732

fbshipit-source-id: d629ae09be0f708586f2a576e1fc11db9f276f93
2019-01-18 08:35:52 -08:00
Stanislau Hlebik
ca189b4865 mononoke: rustfmt
Reviewed By: HarveyHunt

Differential Revision: D13730321

fbshipit-source-id: 706f2723f1156b730f10ba819c41a983a41655d6
2019-01-18 03:15:58 -08:00
Liubov Dmitrieva
f1f34171f2 infinitepush test: enable extension for both repo push and repo pull
Summary:
The extension was not enabled for the pull repo (the second repo).

hg pull -r was still working, but other things like hg up <commit cloud hash> were not.

This caused a bit of confusion.

It is cleaner to enable the extension on both sides.

Reviewed By: StanislavGlebik

Differential Revision: D13710518

fbshipit-source-id: 231aec1a71a5c13d707c2b361ce77158573b93f0
2019-01-17 10:49:02 -08:00
Stanislau Hlebik
936a31a0e0 mononoke: fix warnings and enable deny(warnings) for revsets
Reviewed By: lukaspiatkowski

Differential Revision: D13710072

fbshipit-source-id: cdc0a4abd1133b1510158fdf8f3d99e4bd7d969d
2019-01-17 04:33:49 -08:00
Stanislau Hlebik
67d0e81000 mononoke: fix typo in the test name
Reviewed By: lukaspiatkowski

Differential Revision: D13710070

fbshipit-source-id: 3af3a6ac1cdfb80d0b7866164693f0bda131296b
2019-01-17 02:43:19 -08:00
Stanislau Hlebik
432138ac93 mononoke: remove CachingChangesets
Summary:
There is not much point in keeping it since we have skiplists, which should
solve the same problems in a better way.

The only case where CachingChangesets may be useful is when many users fetch a
lot of commits simultaneously. That may happen when we merge in a new big repository.
However, the current implementation of CachingChangesets won't help with that since we
do not update its indexes.

Reviewed By: lukaspiatkowski

Differential Revision: D13695201

fbshipit-source-id: 2a4600eccf8224453ca13047e5a2ef3a0af650e3
2019-01-17 02:33:35 -08:00
Stanislau Hlebik
5dbdffdfe7 mononoke: fix sharded filenodes
Summary:
Previously, to get the copy/move source we had to join the `paths` and
`fixedcopyinfo` tables. That worked fine when we had just one shard. However,
now we have many shards and the join no longer works. The reason is that the
move source path may be in a different shard than the move destination path,
so the join returns no data.

Consider this situation: shardA contains all the data for pathA, and shardB
contains all the data for pathB. That means the sharded `paths` table has
pathA in shardA and pathB in shardB. Then if file pathA was copied from
pathB, the `fixedcopyinfo` table in shardA contains the path_hash of pathB.
However, joining shardA's `fixedcopyinfo` with shardA's `paths` to convert
path_hash to path fails, because pathB is in shardB.

The only possible fix is to split fetching path_hash from `fixedcopyinfo` and
converting path_hash to path.

I don't think we'll be able to keep the join-based logic we have at the
moment. It would require having all paths on all shards, which is
unfeasible because it would make writes much slower.
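
The two-step lookup can be sketched like this (all names hypothetical, chosen for the example): first read the source's path_hash from the destination's shard, then resolve that hash to a path on whichever shard owns it, instead of a same-shard join:

```rust
use std::collections::HashMap;

// Hypothetical sharding function: a path_hash lives on hash % nshards.
fn shard_for(path_hash: u64, nshards: usize) -> usize {
    (path_hash as usize) % nshards
}

// Step 1: look up the copy source's path_hash from the destination's copy
// info. Step 2: resolve that hash on the shard that owns it.
fn resolve_copy_source(
    copyinfo: &HashMap<&str, u64>,               // dest path -> source path_hash
    paths_shards: &[HashMap<u64, &'static str>], // per-shard path_hash -> path
    dest: &str,
) -> Option<&'static str> {
    let hash = *copyinfo.get(dest)?;
    paths_shards[shard_for(hash, paths_shards.len())].get(&hash).copied()
}

fn main() {
    let mut copyinfo = HashMap::new();
    copyinfo.insert("pathA", 7u64); // pathA was copied from the path with hash 7
    let mut shard0 = HashMap::new();
    shard0.insert(4u64, "pathA");
    let mut shard1 = HashMap::new();
    shard1.insert(7u64, "pathB"); // 7 % 2 == 1, so pathB lives on shard 1
    let shards = vec![shard0, shard1];
    // A join within shard 0 would find nothing; the two-step lookup works.
    assert_eq!(resolve_copy_source(&copyinfo, &shards, "pathA"), Some("pathB"));
}
```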

Reviewed By: aslpavel

Differential Revision: D13690141

fbshipit-source-id: 16b5cae6f23c162bb502b65c208f3ca9e443fb04
2019-01-17 02:33:35 -08:00
Stanislau Hlebik
712bab10f9 mononoke: rustfmt
Summary:
Going to change these files in the next diff. To make next diff smaller
splitting format changes to this diff.

Reviewed By: aslpavel

Differential Revision: D13690143

fbshipit-source-id: 124232b832d8c67ee7fe931ef174230cb09ff564
2019-01-17 02:33:35 -08:00
Stanislau Hlebik
8ef5d4ba64 mononoke: change the way file content blobs are hashed
Summary:
File content blobs are thrift-encoded in Mononoke. This is done so
that we can change the encoding of content blobs easily. For example, we can
add compression or split the blobs into chunks.

However, there is a problem. At the moment the file content blob key is a hash of
the actual data written to the blobstore, i.e. of the thrift-encoded data. That
means that if we add compression or change the thrift encoding in any way, the
file content blob key changes, and with it the commit hashes.
This is wrong. To fix it let's use hash of the actual file content as the key.
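
A sketch of the idea (illustrative only: `DefaultHasher` stands in for the real content hash, and `encode` is a toy stand-in for the thrift encoding): the key is derived from the raw content before encoding, so changing the stored representation never changes the key:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Key is a hash of the raw file content, not of the encoded bytes.
fn content_key(raw: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    raw.hash(&mut h);
    h.finish()
}

// Toy stand-in for the thrift encoding; the representation may change
// (e.g. a compression flag) without affecting the key above.
fn encode(raw: &[u8], compressed: bool) -> Vec<u8> {
    let mut out = vec![if compressed { 1 } else { 0 }];
    out.extend_from_slice(raw);
    out
}

fn main() {
    let raw = b"file contents";
    let key = content_key(raw);
    let plain = encode(raw, false);
    let compressed = encode(raw, true);
    assert_ne!(plain, compressed);     // the stored bytes differ...
    assert_eq!(key, content_key(raw)); // ...but the key stays stable
}
```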

Reviewed By: farnz

Differential Revision: D12884898

fbshipit-source-id: e60a7b326c39dad86e2b26c6f637defcb0acc8e8
2019-01-17 02:33:35 -08:00
Stanislau Hlebik
b64f2a7136 mononoke: change how copy information is found
Summary:
Mercurial has a hack to determine if a file was renamed: if p1 is None, then
copy metadata is checked. Note that this hack exists purely to make finding renames
faster and we don't need it in Mononoke, so let's just read the copy metadata.

This diff also removes the `maybe_copied()` method and unused code like `Symlink`.

Reviewed By: farnz

Differential Revision: D12826409

fbshipit-source-id: 53792218cb61fcba96144765790278d17eecdbb1
2019-01-17 02:33:35 -08:00
Liubov Dmitrieva
022b9164ab make bulk select query code safer
Summary:
As you can test, a query like this:

```
select * from demo WHERE `name` IN ()
```

is fine for SQLite but **invalid syntax** in MySQL (an empty list of values)

the error will be similar to this:

```
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ') LIMIT 10000' at line 1; 'select * from phases WHERE repo_id IN () LIMIT 10000'
```

So such errors usually only show up in production.

It is better to check for an empty list right before running queries with lists.
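
The guard can be sketched like this (hypothetical function and query, not the actual Mononoke code): return early for an empty id list instead of emitting `... WHERE x IN ()`, which MySQL rejects even though SQLite accepts it:

```rust
// Builds a parameterized SELECT for the given ids, or returns None when the
// list is empty so the caller skips the query entirely.
fn select_by_repo_ids(repo_ids: &[u64]) -> Option<String> {
    if repo_ids.is_empty() {
        return None; // never emit `IN ()` - invalid syntax on MySQL
    }
    let placeholders = vec!["?"; repo_ids.len()].join(", ");
    Some(format!("SELECT * FROM phases WHERE repo_id IN ({})", placeholders))
}

fn main() {
    assert_eq!(select_by_repo_ids(&[]), None);
    assert_eq!(
        select_by_repo_ids(&[1, 2]).unwrap(),
        "SELECT * FROM phases WHERE repo_id IN (?, ?)"
    );
}
```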

Reviewed By: lukaspiatkowski

Differential Revision: D13704726

fbshipit-source-id: a9fb3a2e21e88b3af14f57917c2004454eb42531
2019-01-17 02:28:10 -08:00
Adam Simpkins
d7eaf4e8d1 reformat several files
Summary:
Reformat several rust files with the current rustfmt, to make the linter
happy.

Reviewed By: yfeldblum

Differential Revision: D13683205

fbshipit-source-id: f7a02dae0fbe095b6acde4de380aca2acfedf39d
2019-01-16 11:47:37 -08:00
Fatih Aydin
0c8adaa4c4 Replacing HgNodeHash with Changesetid - Without formatting changes
Differential Revision: D13692998

Reviewed By: lukaspiatkowski

fbshipit-source-id: 0ba0d30a96b0a4d4d84f64b410036e9e58cf64b9
2019-01-16 11:08:41 -08:00
Fatih Aydin
b43b908a4b Lint Formatting Changes for Revsets
Summary: This diff is created to separate the lint formatting work from the rest of the code changes in D13632296

Reviewed By: lukaspiatkowski

Differential Revision: D13691680

fbshipit-source-id: 8e12016534d2e6066d803b51b5f12cbf6e89a822
2019-01-16 11:08:41 -08:00
Liubov Dmitrieva
21a03d27ec rustfmt (arc lint) for the file
Summary: running arc lint for this file

Reviewed By: StanislavGlebik

Differential Revision: D13695410

fbshipit-source-id: ea1af839f409501c5b599b2cea294cd7fdc7caab
2019-01-16 10:43:14 -08:00
Stanislau Hlebik
1d1e08b267 mononoke: avoid copies in encode_single
Summary:
It showed up in our profiles because it does unnecessary copies. And for big
request like getfiles it matters a bit. So let's fix it.

Reviewed By: aslpavel

Differential Revision: D13634952

fbshipit-source-id: 98be8bf7236eb12a4009b4b174ffac258f46e0f4
2019-01-16 09:21:44 -08:00
Harvey Hunt
1a89f74d8d mononoke: Add extra wireproto logging for getbundle() and unbundle()
Summary:
Add data to the extra_context scuba field that includes the number of commits in the bundle
as well as certain stats from the changesetfetcher (such as cache misses).

Reviewed By: aslpavel

Differential Revision: D13528646

fbshipit-source-id: 4603d7e95182f4e36b5ef325651ec80997742ea0
2019-01-16 09:11:18 -08:00
Harvey Hunt
1f6a4a0b10 mononoke: Add more scuba logging for gettreepack wireproto command
Summary:
Update the wireproto command gettreepack to log the total size of the returned
treepacks, as well as the number that are returned.

Reviewed By: StanislavGlebik

Differential Revision: D13278254

fbshipit-source-id: aab9b6f42b11240a7b84bfda07bf99f15508043d
2019-01-16 09:11:18 -08:00
Harvey Hunt
c49184a026 mononoke: Log summary for getfiles wireproto method
Summary:
Update the wireproto logging to log a summary of the getfile requests, rather than
logging every individual request. This should reduce our logging to scuba.

This diff includes logging of:

- Number of returned files
- Maximum file size
- Total size of files
- Maximum file request latency
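
The aggregation above can be sketched as follows (illustrative types, not the actual wireproto logger): per-file stats are folded into one summary record instead of one log line per file:

```rust
// Hypothetical per-file record and the summary logged once per request.
struct FileStat {
    size: u64,
    latency_ms: u64,
}

struct Summary {
    count: usize,
    max_size: u64,
    total_size: u64,
    max_latency_ms: u64,
}

// Fold all per-file stats into a single summary.
fn summarize(files: &[FileStat]) -> Summary {
    Summary {
        count: files.len(),
        max_size: files.iter().map(|f| f.size).max().unwrap_or(0),
        total_size: files.iter().map(|f| f.size).sum(),
        max_latency_ms: files.iter().map(|f| f.latency_ms).max().unwrap_or(0),
    }
}

fn main() {
    let files = vec![
        FileStat { size: 10, latency_ms: 5 },
        FileStat { size: 40, latency_ms: 2 },
    ];
    let s = summarize(&files);
    assert_eq!(s.count, 2);
    assert_eq!(s.max_size, 40);
    assert_eq!(s.total_size, 50);
    assert_eq!(s.max_latency_ms, 5);
}
```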

Reviewed By: aslpavel

Differential Revision: D13278256

fbshipit-source-id: 069318a718fe915995c7bbe25aa8ccb02c2372f8
2019-01-16 09:11:18 -08:00
Harvey Hunt
d68f13c037 mononoke: Add perf counters to wireproto logger
Summary:
Some wireproto commands use WireProtoLogger to record information to
both scuba and scribe (for replay). Modify this struct to also allow
a PerfCounter struct to be logged to scuba but _not_ scribe.

This allows for logging of command specific information to scuba, such as
number of files requested.

Reviewed By: StanislavGlebik

Differential Revision: D13278255

fbshipit-source-id: 0ed364c8264ba3ae439746387126a7778712b860
2019-01-16 09:11:18 -08:00
Harvey Hunt
9e12a9b943 mononoke: Add PerfCounters struct to CoreContext
Summary:
PerfCounters is a small wrapper around a concurrent hashmap
that can be used to store performance metrics in. Include it in CoreContext
so that it can be used throughout the codebase.
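
A minimal sketch of such a counters wrapper (illustrative: a `Mutex<HashMap>` stands in for the concurrent hashmap, and the names are hypothetical):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Hypothetical PerfCounters-style wrapper: named integer counters behind a
// lock, shareable across the code that handles one request.
struct PerfCounters {
    counters: Mutex<HashMap<&'static str, i64>>,
}

impl PerfCounters {
    fn new() -> Self {
        PerfCounters { counters: Mutex::new(HashMap::new()) }
    }

    // Add delta to a named counter, creating it at zero if absent.
    fn add(&self, name: &'static str, delta: i64) {
        *self.counters.lock().unwrap().entry(name).or_insert(0) += delta;
    }

    // Read a counter; missing counters read as zero.
    fn get(&self, name: &'static str) -> i64 {
        *self.counters.lock().unwrap().get(name).unwrap_or(&0)
    }
}

fn main() {
    let counters = PerfCounters::new();
    counters.add("getfiles_num_files", 3);
    counters.add("getfiles_num_files", 2);
    assert_eq!(counters.get("getfiles_num_files"), 5);
    assert_eq!(counters.get("missing"), 0);
}
```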

Reviewed By: aslpavel

Differential Revision: D13528647

fbshipit-source-id: 7c3f26ab8c0c7ba5ee619e85a069af7e7721037f
2019-01-16 09:11:17 -08:00
Liubov Dmitrieva
fd7345bc58 calculate phases using bulk API
Summary:
The bulk API makes fewer queries to MySQL and is therefore more efficient.

This is especially important for `hg pull` requests where the list of heads is very large.

Reviewed By: lukaspiatkowski

Differential Revision: D13677298

fbshipit-source-id: 3dec1b3462c520c11481325e82523ef7a6ae6516
2019-01-16 08:22:54 -08:00