Commit Graph

2801 Commits

Stanislau Hlebik
0450243694 mononoke: fix sql query for streaming changelog
Summary:
Previously this query failed because it tried to convert bytes to an int, and
our mysql wrapper doesn't support that. Let's cast it instead.
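
A hedged illustration of the idea (table and column names are made up, not from the diff): instead of relying on the client wrapper to convert bytes to an int, cast in the query so MySQL returns an integer directly.

```sql
-- Hypothetical schema: if `id` is stored as VARBINARY, the wrapper would
-- otherwise have to convert bytes to an int on the client side.
-- CAST makes MySQL return an integer directly:
SELECT CAST(MAX(id) AS UNSIGNED) FROM changelog;
```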

Reviewed By: krallin

Differential Revision: D27736863

fbshipit-source-id: 66a7cb33c0f623614f292511e18eb62e31ea582f
2021-04-13 11:15:00 -07:00
Simon Farnsworth
0f817c72fb Provide an admin command for blobstore unlink
Summary: Currently just does XDB Blobstore, because the work to do other types and/or go via Packblob is significant.

Reviewed By: markbt

Differential Revision: D27735093

fbshipit-source-id: d3797017a2e0ff7c60525d1f4d4ee3e63b519d49
2021-04-13 08:38:29 -07:00
Jan Mazur
4cb7732dae remove deprecated --readonly-storage cmdline arg
Summary: We have deprecated it in favor of an argument that takes a boolean value.

Reviewed By: farnz

Differential Revision: D27709429

fbshipit-source-id: 45e9569188f2e9d017f1c5bf61f7c61bc0e5318a
2021-04-13 07:09:35 -07:00
Thomas Orozco
3c88bd8832 mononoke/timeseries: track count of valid buckets
Summary:
It's useful when operating with timeseries to know what range of data has been
populated. This diff adds support for this in mononoke/timeseries, by tracking
the number of buckets that fall within intervals where data was provided.
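
A minimal sketch of the idea (not the real mononoke/timeseries API; names are hypothetical): track, alongside each bucket's value, whether the bucket ever received data, so "no data" is distinguishable from "data that happens to be zero".

```rust
// Sketch only: a fixed set of buckets plus a parallel "populated" flag.
pub struct TimeSeries {
    buckets: Vec<u64>,
    populated: Vec<bool>,
}

impl TimeSeries {
    pub fn new(num_buckets: usize) -> Self {
        TimeSeries {
            buckets: vec![0; num_buckets],
            populated: vec![false; num_buckets],
        }
    }

    pub fn record(&mut self, bucket: usize, value: u64) {
        self.buckets[bucket] += value;
        // The bucket now falls within an interval where data was provided.
        self.populated[bucket] = true;
    }

    /// Number of buckets that fall within intervals where data was provided.
    pub fn valid_buckets(&self) -> usize {
        self.populated.iter().filter(|&&p| p).count()
    }
}

fn main() {
    let mut ts = TimeSeries::new(4);
    ts.record(0, 10);
    ts.record(2, 0); // a zero value still marks the bucket as populated
    println!("valid buckets: {}", ts.valid_buckets());
}
```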

Reviewed By: mitrandir77

Differential Revision: D27734229

fbshipit-source-id: 3058a7ce4da67666e8ce8a46e34e277b69153ea4
2021-04-13 06:24:37 -07:00
Mark Juggurnauth-Thomas
a9b1b36721 admin: set background session class for skiplist build
Summary:
When building skiplists, set the session class to `Background`.  This ensures
that the blobstore writes for the new skiplist have completed fully.

Reviewed By: StanislavGlebik

Differential Revision: D27735411

fbshipit-source-id: 4ba8e8b91dafbb1aa258d15b26e7d773f63b5812
2021-04-13 06:12:20 -07:00
Thomas Orozco
a0a7091517 mononoke/lfs_server: reject Range requests that are out of bounds
Summary:
If the caller asks us for a range that extends past the end of our file, we'd
rather give them an error instead of silently returning the file.

This actually revealed that one of the tests needed work :)

Note that right now we'll just end up categorizing this as 500. I'd like to
rework the errors we emit in the Filestore, but that's a somewhat bigger
undertaking so I'd like to do it separately.
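
The bounds check can be sketched as a pure function (function and error names are hypothetical, and the real code lives in the LFS server / Filestore): a range extending past the end of the file is rejected rather than silently truncated.

```rust
#[derive(Debug, PartialEq)]
pub enum RangeError {
    OutOfBounds,
}

/// Validate an inclusive `start..=end` byte range against a blob of `size` bytes.
pub fn check_range(start: u64, end: u64, size: u64) -> Result<(u64, u64), RangeError> {
    // HTTP byte ranges are inclusive, so `end` must be strictly below `size`.
    if start > end || end >= size {
        return Err(RangeError::OutOfBounds);
    }
    Ok((start, end))
}

fn main() {
    // A 10-byte file supports ranges up to 0..=9; 0..=10 is out of bounds.
    println!("{:?}", check_range(0, 9, 10));
    println!("{:?}", check_range(0, 10, 10));
}
```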

Reviewed By: quark-zju

Differential Revision: D27193353

fbshipit-source-id: 922d68859401eb343cffd201057ad06e4b653aad
2021-04-13 05:10:00 -07:00
Mark Juggurnauth-Thomas
876f812e4b commitcloud: do not send backup bookmarks part
Summary:
The backupbookmarks part was used for infinitepush backup bookmarks, which have
been deprecated. Now we stop sending the part entirely unless
`commitcloud.pushbackupbookmarks` is set.

Reviewed By: StanislavGlebik

Differential Revision: D27710099

fbshipit-source-id: 1eb404f106f5a8d9df6d73e11f60f89c1fa10400
2021-04-13 03:07:50 -07:00
Thomas Orozco
87aed04d37 mononoke/sql_ext: publish SQL max open connections stat
Summary:
Like it says in the title, this adds support for publishing our max open
connections to ODS. Note that this is a little more involved than I would like
for it to be, but there is no way to get direct access to this information.

This means we need to:

- Expose how many open connections we have in flight (this is done earlier in
  this stack in the Rust MySQL bindings).
- Periodically get this information out for MySQL, put it in a timeseries.
- Get the max out of said timeseries and publish it to a counter so that it can
  be fetched in ODS.

This is what this diff does. Note that I've only done this for read pools,
largely because I think they're the ones we tend to exhaust the most and I'd
like to see if there is value in exposing those counters before I use them.

We do the aggregation on a dedicated thread here. I contemplated making this a
Tokio task, but I figured making it a thread would make it easier to see if
it's misbehaving in any way (also: note that the SQL client allocates a bunch
of threads already anyway).
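
The aggregation described above can be sketched in plain Rust (all names are assumptions; the real code uses the mononoke/timeseries crate and a dedicated thread): sample the in-flight connection count into a bounded window, and publish the max over that window as the counter ODS fetches.

```rust
// Sketch only: a bounded window of open-connection samples.
pub struct PoolStats {
    samples: Vec<u64>, // recent samples, newest last
    window: usize,
}

impl PoolStats {
    pub fn new(window: usize) -> Self {
        PoolStats { samples: Vec::new(), window }
    }

    /// Called periodically with the current number of open connections.
    pub fn sample(&mut self, open_connections: u64) {
        self.samples.push(open_connections);
        if self.samples.len() > self.window {
            self.samples.remove(0); // drop the oldest sample
        }
    }

    /// Max over the window - the value to publish as the counter.
    pub fn max_open(&self) -> u64 {
        self.samples.iter().copied().max().unwrap_or(0)
    }
}

fn main() {
    let mut stats = PoolStats::new(3);
    for n in vec![1u64, 5, 2, 1] {
        stats.sample(n);
    }
    println!("max open connections over window: {}", stats.max_open());
}
```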

Reviewed By: HarveyHunt

Differential Revision: D27678955

fbshipit-source-id: c7b386f3a182bae787d77e997d108d8a74a6402b
2021-04-13 03:05:23 -07:00
Stanislau Hlebik
85469374e9 mononoke: add leases to x-repo-sync
Reviewed By: ikostia

Differential Revision: D27677897

fbshipit-source-id: e86220f23f5950130f2f4ead2743f9a9b86abed7
2021-04-12 12:08:34 -07:00
Simon Farnsworth
802c038553 Slow down sqlblob_gc retries to let bad connections leave the pool
Summary:
We hammer MySQL during GC - slow down so that bad connections to
servers that are no longer current are dropped from the pool.

https://fb.workplace.com/groups/scm.mononoke/permalink/1407064449656126/?comment_id=1408378062858098 justifies setting the MySQL max ages to 1 second - it works around a MyRouter issue where it *should* reconnect us to a different host, but doesn't.

Reviewed By: krallin

Differential Revision: D27500583

fbshipit-source-id: e900925e1f0d65828613fe3e3d7f4128dc7cde82
2021-04-12 05:55:58 -07:00
Simon Farnsworth
b38e7d3a40 Separate MySQL pool for SQLBlob
Summary: SQLBlob doesn't benefit from sharing a pool with other MySQL users, but does benefit from more aggressive connection timeouts. Give it its own pool, which we can tweak later.

Reviewed By: krallin

Differential Revision: D27651133

fbshipit-source-id: 8f5216ec0506b217f9365babfe1ebac00f68a9a9
2021-04-12 05:25:48 -07:00
Thomas Orozco
d677947066 metagit/hosts-down-tailer: use mononoke/common/timeseries
Summary:
Like it says in the title. This is a place where we use timeseries, so we might
as well use the shared crate.

Reviewed By: mzr

Differential Revision: D27678389

fbshipit-source-id: 9b5d4980a1ddb5ce2a01c8ef417c78b1c3da80b7
2021-04-12 05:22:33 -07:00
Thomas Orozco
e64012ad9e mononoke/timeseries: introduce a basic crate for tracking time series
Summary:
I'd like to be able to track time series for access within Mononoke. The
underlying use case here is that I want to be able to track the max count of
connections in our SQL connection pools over time (and possibly other things in
the future).

Now, the obvious question is: why am I rolling my own? Well, as it turns out,
there isn't really an implementation of this that I can reuse:

- You might expect to be able to track the max of a value via fb303, but you
  can't:

https://www.internalfb.com/intern/diffusion/FBS/browse/master/fbcode/fb303/ExportType.h?commit=0405521ec858e012c0692063209f3e13a2671043&lines=26-29

- You might go look in Folly, but you'll find that the time series there only
  supports tracking Sum & Average, but I want my timeseries to track Max (and
  in fact I'd like it to be sufficiently flexible to track anything I want):

https://www.internalfb.com/intern/diffusion/FBS/browse/master/fbcode/folly/stats/BucketedTimeSeries.h

It's not the first time I've run into a need for something like this. I needed
it in RendezVous to track connections over the last 2 N millisecond intervals,
and we needed it in metagit for host draining as well (note that the
implementation here is somewhat inspired by the implementation there).
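
The "track anything I want" flexibility can be sketched as an accumulator trait (illustrative only, not the crate's real API): unlike folly's BucketedTimeSeries, which is limited to Sum/Average, Max is just another impl.

```rust
// Sketch only: buckets are parameterized over an arbitrary accumulator.
pub trait Accumulate: Default {
    fn add(&mut self, value: u64);
    fn value(&self) -> u64;
}

#[derive(Default)]
pub struct Max(u64);

impl Accumulate for Max {
    fn add(&mut self, value: u64) {
        self.0 = self.0.max(value);
    }
    fn value(&self) -> u64 {
        self.0
    }
}

#[derive(Default)]
pub struct Sum(u64);

impl Accumulate for Sum {
    fn add(&mut self, value: u64) {
        self.0 += value;
    }
    fn value(&self) -> u64 {
        self.0
    }
}

/// Fold a bucket's samples through whichever accumulator the caller picks.
pub fn aggregate<A: Accumulate>(values: &[u64]) -> u64 {
    let mut acc = A::default();
    for &v in values {
        acc.add(v);
    }
    acc.value()
}

fn main() {
    let samples = vec![3u64, 9, 4];
    println!("max: {}", aggregate::<Max>(&samples));
    println!("sum: {}", aggregate::<Sum>(&samples));
}
```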

Reviewed By: mzr

Differential Revision: D27678388

fbshipit-source-id: ba6d244b8bb848d4e1a12f9c6f54e3aa729f6c9c
2021-04-12 05:22:33 -07:00
Thomas Orozco
5186e6b92f mononoke: fix the build
Summary:
This is breaking with a warning because there's a method called `intersperse`
that might be introduced in the std lib:

```
stderr: error: a method with this name may be added to the standard library in the future
  --> eden/mononoke/hgproto/src/sshproto/response.rs:48:53
   |
48 |             let separated_results = escaped_results.intersperse(separator);
   |                                                     ^^^^^^^^^^^
   |
note: the lint level is defined here
  --> eden/mononoke/hgproto/src/lib.rs:14:9
```

This should fix it.
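
For context, this is what `intersperse` computes, sketched as a standalone helper rather than a method, so it can't collide with a future std method (illustration only, not the actual change in the diff):

```rust
// Place `separator` between consecutive items, but not at the ends.
pub fn intersperse_vec<T: Clone>(items: &[T], separator: T) -> Vec<T> {
    let mut out = Vec::new();
    for (i, item) in items.iter().enumerate() {
        if i > 0 {
            out.push(separator.clone()); // separator goes between items only
        }
        out.push(item.clone());
    }
    out
}

fn main() {
    let escaped_results = vec!["a", "b", "c"];
    println!("{:?}", intersperse_vec(&escaped_results, ";"));
}
```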

Reviewed By: ikostia

Differential Revision: D27705212

fbshipit-source-id: 5f2f641ea6561c838288c8b158c6d9e134ec0724
2021-04-12 05:07:48 -07:00
Stefan Filip
5f78ccb284 edenapi: update hash-to-location to discard unknown hashes
Summary:
The hashes that are passed in as parameters to the hash-to-location function
may not be hashes that actually exist. This change updates the code so that
we don't return an error when an unknown hash is passed in. The unknown
hash will be skipped in the list of results.
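
The new behavior can be sketched as a filter (names and types are hypothetical; the real code works on commit hashes and locations): unknown hashes are dropped from the results instead of failing the whole request.

```rust
use std::collections::HashMap;

/// Resolve each hash to its location, silently skipping unknown hashes.
pub fn hash_to_location<'a>(
    known: &HashMap<&'a str, u64>,
    hashes: &[&'a str],
) -> Vec<(&'a str, u64)> {
    hashes
        .iter()
        .filter_map(|h| known.get(h).map(|loc| (*h, *loc))) // drop unknowns
        .collect()
}

fn main() {
    let mut known = HashMap::new();
    known.insert("aa11", 1u64);
    known.insert("bb22", 2);
    // "ffff" is unknown, so it is simply absent from the results.
    println!("{:?}", hash_to_location(&known, &["aa11", "ffff", "bb22"]));
}
```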

Reviewed By: quark-zju

Differential Revision: D27526758

fbshipit-source-id: 8bf9b7a134a6a8a4f78fa0df276f847d922472f5
2021-04-09 17:10:57 -07:00
Stefan Filip
f890348720 edenapi/types: add master_heads to HashToLocationRequestBatch
Summary:
We want to handle the case where the client has multiple heads for master. For
example when master is moved backwards (or when it gets moved on the client by
force). Updating the request object for HashToLocation to send over all the
master heads.

When the server builds non-master commits, we will want to send over non-master
heads too. We may consider having one big list of heads but I think that we
would still want to distinguish the non-master commit case in order to optimize
that use-case.

Reviewed By: quark-zju

Differential Revision: D27521778

fbshipit-source-id: cc83119b47ee90f902c186528186ad57bf023804
2021-04-09 17:10:57 -07:00
Stefan Filip
243f858524 mononoke_api: update changeset_ids_to_locations to take multiple master heads
Summary:
This scenario appears when master moves backwards. Since the master group in
segmented changelog is append-only, a non-fast-forward master move will cause
multiple heads in the master group.

Since Segmented Changelog was updated to handle multiple master heads, we can
propagate the full list that we get from the client.

This diff makes the assumption that Mononoke will know how to convert all client
"master head" hashes from HgChangesetId (Sha1) form to ChangesetId (Blake2). If
any of the master heads cannot be converted, then the server might not be able to
reliably answer the client's question (in "ancestors(master_heads)", translate
"this hash" to a path, or tell me confidently that the "hash" is outside
"ancestors(master_heads)"). That's an error case.
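
The all-or-nothing conversion can be sketched like this (types simplified to strings; names are hypothetical): every client master head must resolve, otherwise the whole call errors out.

```rust
use std::collections::HashMap;

/// Convert every client head via the mapping, or fail on the first unknown one.
pub fn convert_heads(
    mapping: &HashMap<String, String>,
    heads: &[String],
) -> Result<Vec<String>, String> {
    heads
        .iter()
        .map(|h| {
            mapping
                .get(h)
                .cloned()
                .ok_or_else(|| format!("unknown master head: {}", h))
        })
        // Collecting into Result makes the first missing head abort the call.
        .collect()
}

fn main() {
    let mut mapping = HashMap::new();
    mapping.insert("hg1".to_string(), "bonsai1".to_string());
    println!("{:?}", convert_heads(&mapping, &["hg1".to_string()]));
    println!("{:?}", convert_heads(&mapping, &["hg2".to_string()]));
}
```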

Reviewed By: quark-zju

Differential Revision: D27521779

fbshipit-source-id: 219e08a66aac17ac06d2cf02676a43c7f37e8e26
2021-04-09 17:10:57 -07:00
Stefan Filip
ab4425b7ee segmented_changelog: update changeset_id_to_location to use a list of master heads
Summary:
This scenario appears when master moves backwards.
Since the IdDag can handle multiple master heads, the server can piggy-back on that
functionality and support multiple master heads when translating location to hash.

Reviewed By: quark-zju

Differential Revision: D27521780

fbshipit-source-id: c27541890d4fda13648857f010c11a25bf96ef67
2021-04-09 17:10:57 -07:00
Jeremy Fitzhardinge
b496783adb rust: fix non-literal panic fmt strings
Summary:
`panic!()`, and things which use `panic!()` like `assert!()`, take a literal format
string followed by the parameters to format. There's no need to use `format!()`
with it, and it is incorrect to pass a non-literal string.

Mostly it's harmless, but there are some genuinely confusing asserts which
trigger this warning.
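
A minimal illustration of the pattern being fixed (hypothetical example, not code from the diff): pass a literal format string and the parameters to the macro, instead of a pre-formatted non-literal string.

```rust
fn check(value: i32) {
    // Incorrect (non-literal string, the lint warns):
    //   let msg = format!("bad value: {}", value);
    //   assert!(value >= 0, msg);
    // Correct: literal format string, parameters passed to the macro.
    assert!(value >= 0, "bad value: {}", value);
}

fn main() {
    check(3);
    println!("assertions with literal format strings compile without the lint");
}
```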

Reviewed By: dtolnay

Differential Revision: D27672891

fbshipit-source-id: 73929cc77c4b3d354bd315d8d4b91ed093d2946b
2021-04-09 16:24:33 -07:00
Aida Getoeva
442775f79f mononoke/mysql: tokio spawn queries
Summary:
Sometimes we can hit an Idle timeout error while talking to MySQL, because we open a connection and go idle for a long time. Then, when we finally send a query, the server returns an error: the connection has expired. This is the issue we found and fixed in D27503062 (a856799489) that blocked the MySQL client release.

## Future starvation
Imagine you have a stream in which you're connecting to a server, fetching and preparing some values:
```
let v = vec![1u32, 2, 3, 4, 5];
let mut s = stream::iter(v)
   .map(|key| async move {
        let conn = connect(..).await?;
        conn.fetch(..).await
   })
   .buffered(2)
   .map(|item| async move { prepare(..) })
   .buffered(2);
```
Now you want to asynchronously process those prepared values one by one:
```
while let Some(result) = s.next().await {
   let value = result?;
   process(value).await?;
}
```
This async `process(..)` call can be talking to some service to take these values, or something else that doesn't require much CPU time, although the operation can be long.

**Now what happens when we do s.next().await?**

Because the stream is `buffered(2)` we wait for the first 2 futures. When the first item is ready, it returns the result and polls the next stream item, with key 3. The third future only makes a `connect(..)` call and gets switched.

Once we've got the next value from the stream, we wait on the `process(value)` call and don't poll the underlying stream till the processing is done.

**As I mentioned earlier, it is not expensive...**
But what if it takes > 10s to complete anyway?

The third future from the stream, that was polled earlier, **will wait for all these > 10s till it is polled again**.

More details [in this post](https://fb.workplace.com/groups/learningrust/permalink/2890621307875402/).

## Solution

In this case spawning a future with connection and query steps is a way to fix the issue.
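
In the same pseudocode style as the blocks above, the fix looks roughly like this (a sketch assuming a tokio runtime, not the actual diff): the connect-and-fetch future is moved onto the executor, so it keeps making progress even while the consumer is stuck inside `process(..)`.

```
let mut s = stream::iter(v)
   .map(|key| async move {
        // spawned onto the runtime: polled by the executor,
        // not by whoever holds the stream
        tokio::spawn(async move {
            let conn = connect(..).await?;
            conn.fetch(..).await
        })
        .await
   })
   .buffered(2);
```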

This diff spawns queries in `shed::sql::sql_common::mysql_wrapper` - this covers all the places in Mononoke where we talk to MySQL. I also removed the spawn from the hg sync code, because it is no longer needed, and to illustrate that this approach works.

Reviewed By: StanislavGlebik

Differential Revision: D27639629

fbshipit-source-id: edaa2ce8f5948bf44e1899a19b443935920e33ef
2021-04-09 07:37:40 -07:00
Simon Farnsworth
89688db6d7 Force high compression for the pack size test
Summary:
`getdeps` builds sometimes fail - try a higher compression level to
see whether the default differs internally and in open source

Reviewed By: ahornby

Differential Revision: D27659420

fbshipit-source-id: 702341e6b288ab79584bfa8de5b1ccd5ed6bc57a
2021-04-09 03:02:52 -07:00
Arun Kulshreshtha
e6e2e61084 third-party/rust: patch curl and curl-sys
Summary: Update the `curl` and `curl-sys` crates to use a patched version that supports `CURLOPT_SSLCERT_BLOB` and similar config options that allow the use of in-memory TLS credentials. These options were added last year in libcurl version `7.71.0`, but the Rust bindings have not yet been updated to support them. I intend to upstream this patch, but until then, this will allow us to use these options in fbcode.

Reviewed By: quark-zju

Differential Revision: D27633208

fbshipit-source-id: 911e0b8809bc0144ad8b32749e71208bd08458fd
2021-04-08 11:50:38 -07:00
Jan Mazur
0069cc83fe mononoke/lfs_server: return the same urls to client as the Host: they're connecting to
Summary: Currently, no matter which VIP clients connect through, they get a response with `"href": "https://monooke-lfs.internal.tfbnw.net"`, which prevents us from enabling the c2p VIP in corp.

Reviewed By: krallin

Differential Revision: D27331945

fbshipit-source-id: f215cce2f64a2a38accd6d55d5100d8d364ce77b
2021-04-08 10:20:13 -07:00
Stanislau Hlebik
1876314c01 mononoke: allow blocking too large known calls
Summary:
Previously we ran into an issue where a client sent us a too-large `known`
request, and we passed it all the way to MySQL.

The MySQL slow log shows that we have quite a few slow queries
(https://fburl.com/scuba/mysql_slow_log/w0ugmc1i), so it might be that these
requests are still coming, but because of the problems in the logging (see the
previous diff), we can't know for sure.

In any case, adding a knob like that can be useful.
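
Such a knob can be sketched as a pre-flight check (names are hypothetical): reject the request before it reaches MySQL once it exceeds the configured limit, with `None` meaning unlimited.

```rust
/// Block a `known` request that exceeds the configured node limit.
pub fn check_known_request_size(
    num_nodes: usize,
    max_nodes: Option<usize>,
) -> Result<(), String> {
    match max_nodes {
        // Knob set and exceeded: block the call before it hits the database.
        Some(limit) if num_nodes > limit => Err(format!(
            "known request too large: {} nodes, limit is {}",
            num_nodes, limit
        )),
        // Knob unset, or request within the limit: allow.
        _ => Ok(()),
    }
}

fn main() {
    println!("{:?}", check_known_request_size(10, Some(100)));
    println!("{:?}", check_known_request_size(1000, Some(100)));
}
```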

Reviewed By: farnz

Differential Revision: D27650806

fbshipit-source-id: c4c82b7b5781a85c349abb4e5fa534b5e8f125a0
2021-04-08 10:12:26 -07:00
Stanislau Hlebik
9b02233ed2 mononoke: deduplicate known* methods code
Summary:
The code is almost the same, so it would be good to deduplicate it. The
duplication led to annoying differences in logging - e.g. we logged how many
nodes were sent to us in the `known` call but not in the `knownnodes` call.

Reviewed By: farnz

Differential Revision: D27650583

fbshipit-source-id: 5e2e3be3b9fd66631364d23f34d241c27e370340
2021-04-08 09:37:49 -07:00
Harvey Hunt
7358178f4a mononoke: Remove external sync logic
Summary:
Now that the `hg_external_sync` jobs are gone we can delete the code
in Mononoke that behaves differently when a sync job connects.

Reviewed By: StanislavGlebik

Differential Revision: D27500506

fbshipit-source-id: 443fb54577833dbf44ece6ae90a5f25ffed38cd5
2021-04-08 09:17:11 -07:00
Aida Getoeva
498a90659c mononoke: remove debug output from hg sync
Summary: This was added in D27503062 (a856799489) as debug info and is very spammy; let's remove it.

Reviewed By: StanislavGlebik

Differential Revision: D27647927

fbshipit-source-id: 12c6b2d4cb8b1bae2d987fd8ff461bd480b7dc18
2021-04-08 05:15:06 -07:00
Aida Getoeva
1f0a3fb467 mononoke: log error if couldn't fetch repo lock status
Summary:
Currently, if we fail to fetch the repo lock status, we only see a "Repo is marked as read-only: Failed to fetch repo lock status" error, which is not very informative. Example of the error in production: P385612782.

Let's log the error.

Reviewed By: krallin

Differential Revision: D27621996

fbshipit-source-id: 85d9f0fe39397759da1b51e197f9188761678715
2021-04-08 04:03:18 -07:00
Liubov Dmitrieva
781cd19f2d Add support for wantsunhydratedcommits in Mononoke
Summary:
Add support for returning unhydrated draft commits if requested by the client using the config option 'wantsunhydratedcommits'.

This is needed to support slowly enabling it for some clients, like OnDemand.

Reviewed By: StanislavGlebik

Differential Revision: D27621442

fbshipit-source-id: 672129c8bfcbcdb4cee3ba1b092dac16c0b1877d
2021-04-08 03:48:07 -07:00
Stanislau Hlebik
47eee63dc2 mononoke: log file size to the post push scribe logging
Summary:
We already log the file count, but file sizes are another useful piece of
information.

I evaluated two options - either do as I did in this diff, or change the
ScribeToScuba logging Python script to query scs to get file sizes. I opted for
option #1 mainly because scs doesn't have a method to query file sizes for many
files at once, and querying one by one might be too expensive. We can add a
method like that, but that's a bit of a bigger change than I'd like.

Reviewed By: andll

Differential Revision: D27620507

fbshipit-source-id: 2618e60845bc293535b190d4da85df5667a7ab60
2021-04-07 23:34:40 -07:00
Stefan Filip
0517c58e93 segmented_changelog: update SqlIdMap to avoid query on empty request
Summary:
Fixes MySQL syntax errors we've seen in some cases. No reason to call the
database if we have no items to query for. It seems that empty queries can come
from the caching layer under certain configurations.
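
The guard can be sketched like this (table and column names are assumptions, and the real code goes through Mononoke's SQL query macros): with no ids to look up, return early instead of emitting `... WHERE id IN ()`, which is a MySQL syntax error.

```rust
/// Build the lookup query, or return None when there is nothing to fetch.
pub fn select_by_ids(ids: &[u64]) -> Option<String> {
    if ids.is_empty() {
        return None; // nothing to fetch; don't touch the database at all
    }
    let list: Vec<String> = ids.iter().map(|id| id.to_string()).collect();
    Some(format!(
        "SELECT id, vertex FROM idmap WHERE id IN ({})",
        list.join(", ")
    ))
}

fn main() {
    println!("{:?}", select_by_ids(&[1, 2, 3]));
    println!("{:?}", select_by_ids(&[]));
}
```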

Reviewed By: krallin

Differential Revision: D27624798

fbshipit-source-id: 2febeff127f2fbdc739368ff1d1065c8f64723f8
2021-04-07 16:27:43 -07:00
Mark Juggurnauth-Thomas
ef11de4fed blobrepo_override: remove overrides for bookmarks and config
Summary:
Remove the `DangerousOverride` mechanism for bookmarks traits and config.

Bookmarks overrides were only used to make unit test bookmarks use the same
metadata database as `SqlCounters`, which can now be done safely by getting the
metadata database from the factory.

Config overrides can be performed by the factory at repo construction.

Reviewed By: krallin

Differential Revision: D27424695

fbshipit-source-id: 0259da3abedb7ed4944fe945ba89800eea76ebff
2021-04-07 14:01:49 -07:00
Mark Juggurnauth-Thomas
c80d5c9149 repo_import: remove dangerous_override
Summary:
This dangerous override was being used to override
derived data config.  Replace it with customizing the
config in the factory.

Reviewed By: krallin

Differential Revision: D27424696

fbshipit-source-id: 6dcf0c1397e217f09c0b82cf4700743c943f506f
2021-04-07 14:01:49 -07:00
Mark Juggurnauth-Thomas
53550b9f10 blobrepo_factory: remove blobrepo_factory
Summary: This has been superseded by `RepoFactory`.

Reviewed By: krallin

Differential Revision: D27400617

fbshipit-source-id: e029df8be6cd2b7f3a5917050520b83bce5630e9
2021-04-07 14:01:49 -07:00
Mark Juggurnauth-Thomas
af31abce47 walker: use RepoFactory to construct repositories
Summary:
Use `RepoFactory` to construct repositories in the walker.

The walker previously had special handling to allow repositories to
share metadata database and blobstore connections.  This is now
implemented in `RepoFactory` itself.

Reviewed By: krallin

Differential Revision: D27400616

fbshipit-source-id: e16b6bdba624727977f4e58be64f8741b91500da
2021-04-07 14:01:49 -07:00
Mark Juggurnauth-Thomas
6677ea9c14 repo_factory: add blobstore_override
Summary: Add a way for users of `RepoFactory` to customize the blobstore that repos use.

Reviewed By: krallin

Differential Revision: D27400615

fbshipit-source-id: e3e515756c56dc78b8de8cf7b929109d05cec243
2021-04-07 14:01:48 -07:00
Mark Juggurnauth-Thomas
a5f7cccc38 benchmark: remove dependency on blobrepo_factory
Summary:
Remove the dependency on blobrepo factory by defining a custom facet factory
for benchmark repositories.

Reviewed By: krallin

Differential Revision: D27400618

fbshipit-source-id: 626e19f09914545fb72053d91635635b2bfb6e51
2021-04-07 14:01:48 -07:00
Mark Juggurnauth-Thomas
8f8d92dec1 lfs_server: use RepoFactory to construct repositories
Summary: Use `RepoFactory` to construct repositories in the LFS server.

Reviewed By: krallin

Differential Revision: D27363465

fbshipit-source-id: 09d5d32a133f166c6f308d56b2fb02f00031a179
2021-04-07 14:01:48 -07:00
Mark Juggurnauth-Thomas
cb96e2c7e0 microwave: use RepoFactory to construct repositories
Summary: Use `RepoFactory` to construct repositories in the microwave builder.

Reviewed By: krallin

Differential Revision: D27363468

fbshipit-source-id: 25bf2f7ee1ac0e52e1c6d4bda0c50ba67bc03110
2021-04-07 14:01:48 -07:00
Mark Juggurnauth-Thomas
9b8007e2f7 unbundle_replay: use RepoFactory to construct repositories
Summary: Use `RepoFactory` to construct repositories in unbundle_replay.

Reviewed By: krallin

Differential Revision: D27363469

fbshipit-source-id: d735e5e0bfd4522c25b6748234e107c2184b5bcf
2021-04-07 14:01:48 -07:00
Mark Juggurnauth-Thomas
3b9817b5d8 benchmark_storage_config: remove dependency on blobrepo_factory
Summary: Use the equivalent function from `repo_factory`.

Reviewed By: krallin

Differential Revision: D27363470

fbshipit-source-id: dce3cf843174caa2f9ef7083409e7935749be4cd
2021-04-07 14:01:47 -07:00
Mark Juggurnauth-Thomas
e709f4fc73 commit_rewriting: remove dependency on blobrepo_factory
Summary:
This import is only used for the `ReadOnlyStorage` type, which is canonically
defined in `blobstore_factory`.

Reviewed By: krallin

Differential Revision: D27363474

fbshipit-source-id: 78fb1866d8a1223564357eea27ec0cdbe54fb5db
2021-04-07 14:01:47 -07:00
Mark Juggurnauth-Thomas
94ea52cdd4 repo_client/mononoke_repo: remove dependency on blobrepo_factory
Summary:
This import is only used for the `ReadOnlyStorage` type, which is canonically
defined in `blobstore_factory`.

Reviewed By: krallin

Differential Revision: D27363466

fbshipit-source-id: 7cb1effcee6d39de92b471fecfde56724d24a6a4
2021-04-07 14:01:47 -07:00
Mark Juggurnauth-Thomas
20bd1ac245 hook_tailer: use RepoFactory to construct repositories
Summary: Use `RepoFactory` to construct repositories for the hook tailer.

Reviewed By: krallin

Differential Revision: D27363472

fbshipit-source-id: 337664d7be317d2cfc35c7cd0f1f1230e39b6b43
2021-04-07 14:01:47 -07:00
Mark Juggurnauth-Thomas
23e065fe0e repo_listener: remove dependency on blobrepo_factory
Summary:
This import is only used for the `ReadOnlyStorage` type, which is canonically
defined in `blobstore_factory`.

Reviewed By: krallin

Differential Revision: D27363467

fbshipit-source-id: ed1388e661453e1b434c83af63c76da1eea1bce1
2021-04-07 14:01:47 -07:00
Mark Juggurnauth-Thomas
ceb9497d00 cmdlib: use RepoFactory to construct repositories
Summary: Use `RepoFactory` to construct repositories for all users of `cmdlib`.

Reviewed By: krallin

Differential Revision: D27363471

fbshipit-source-id: c9a483b41709fd90406c6600936671bf9ba61625
2021-04-07 14:01:47 -07:00
Mark Juggurnauth-Thomas
117398d820 mononoke_api: use RepoFactory to construct repositories
Summary:
Switch from `blobrepo_factory` to the new `RepoFactory` to construct `BlobRepo`
in `mononoke_api`.

The factory is part of the `MononokeEnvironment`, and is used to build all of the repos.

Reviewed By: krallin

Differential Revision: D27363473

fbshipit-source-id: 81345969e5899467f01d285c232a510b8edddb17
2021-04-07 14:01:47 -07:00
Mark Juggurnauth-Thomas
3ef58dda72 blobrepo_factory: re-export common types from repo_factory
Summary:
To facilitate migration from `blobrepo_factory` to `repo_factory`, make common
types the same by re-exporting them from `repo_factory` in `blobrepo_factory`.

Reviewed By: ahornby

Differential Revision: D27323371

fbshipit-source-id: 9b0d98fe067de7905fc923d173ba8ae24eaa0d75
2021-04-07 14:01:46 -07:00
Mark Juggurnauth-Thomas
f902acfcd1 repo_factory: add main repo factory
Summary:
Add a factory for building development and production repositories.

This factory can be re-used to build many repositories, and they will share
metadata database factories and blobstores if their configs match.

Similarly, the factory will only load redacted blobs once per metadata
database config, rather than once per repo.
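
The sharing behavior can be sketched as config-keyed memoization (types radically simplified; illustration only): two repos whose configs match get the same underlying handle, so per-config work happens once.

```rust
use std::collections::HashMap;
use std::rc::Rc;

/// Sketch of a factory that caches shared resources by config key.
pub struct Factory {
    cache: HashMap<String, Rc<String>>, // config key -> shared "connection"
}

impl Factory {
    pub fn new() -> Self {
        Factory { cache: HashMap::new() }
    }

    /// Open (or reuse) the resource for a given config key.
    pub fn open(&mut self, config_key: &str) -> Rc<String> {
        self.cache
            .entry(config_key.to_string())
            .or_insert_with(|| Rc::new(format!("connection to {}", config_key)))
            .clone()
    }
}

fn main() {
    let mut factory = Factory::new();
    let a = factory.open("xdb.main");
    let b = factory.open("xdb.main");
    // Matching configs share one handle.
    println!("shared: {}", Rc::ptr_eq(&a, &b));
}
```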

Reviewed By: krallin

Differential Revision: D27323369

fbshipit-source-id: 951f7343af97f5e507b76fb822ad2e66f4d8f3bd
2021-04-07 14:01:46 -07:00
Ilia Medianikov
ffade38f3d mononoke: tests: filter pycrypto SyntaxWarning when starting Mononoke in integration tests
Reviewed By: krallin

Differential Revision: D27591736

fbshipit-source-id: 311f84847c365916b76a7b718d9e347bd151d8b2
2021-04-07 12:40:28 -07:00