Summary:
This dangerous override was being used to override the
derived data config. Replace it by customizing the
config in the factory.
Reviewed By: krallin
Differential Revision: D27424696
fbshipit-source-id: 6dcf0c1397e217f09c0b82cf4700743c943f506f
Summary: This has been superseded by `RepoFactory`.
Reviewed By: krallin
Differential Revision: D27400617
fbshipit-source-id: e029df8be6cd2b7f3a5917050520b83bce5630e9
Summary:
Use `RepoFactory` to construct repositories in the walker.
The walker previously had special handling to allow repositories to
share metadata database and blobstore connections. This is now
implemented in `RepoFactory` itself.
Reviewed By: krallin
Differential Revision: D27400616
fbshipit-source-id: e16b6bdba624727977f4e58be64f8741b91500da
Summary: Add a way for users of `RepoFactory` to customize the blobstore that repos use.
Reviewed By: krallin
Differential Revision: D27400615
fbshipit-source-id: e3e515756c56dc78b8de8cf7b929109d05cec243
Summary:
Remove the dependency on `blobrepo_factory` by defining a custom facet factory
for benchmark repositories.
Reviewed By: krallin
Differential Revision: D27400618
fbshipit-source-id: 626e19f09914545fb72053d91635635b2bfb6e51
Summary: Use `RepoFactory` to construct repositories in the LFS server.
Reviewed By: krallin
Differential Revision: D27363465
fbshipit-source-id: 09d5d32a133f166c6f308d56b2fb02f00031a179
Summary: Use `RepoFactory` to construct repositories in the microwave builder.
Reviewed By: krallin
Differential Revision: D27363468
fbshipit-source-id: 25bf2f7ee1ac0e52e1c6d4bda0c50ba67bc03110
Summary: Use the equivalent function from `repo_factory`.
Reviewed By: krallin
Differential Revision: D27363470
fbshipit-source-id: dce3cf843174caa2f9ef7083409e7935749be4cd
Summary:
This import is only used for the `ReadOnlyStorage` type, which is canonically
defined in `blobstore_factory`.
Reviewed By: krallin
Differential Revision: D27363474
fbshipit-source-id: 78fb1866d8a1223564357eea27ec0cdbe54fb5db
Summary:
This import is only used for the `ReadOnlyStorage` type, which is canonically
defined in `blobstore_factory`.
Reviewed By: krallin
Differential Revision: D27363466
fbshipit-source-id: 7cb1effcee6d39de92b471fecfde56724d24a6a4
Summary: Use `RepoFactory` to construct repositories for the hook tailer.
Reviewed By: krallin
Differential Revision: D27363472
fbshipit-source-id: 337664d7be317d2cfc35c7cd0f1f1230e39b6b43
Summary:
This import is only used for the `ReadOnlyStorage` type, which is canonically
defined in `blobstore_factory`.
Reviewed By: krallin
Differential Revision: D27363467
fbshipit-source-id: ed1388e661453e1b434c83af63c76da1eea1bce1
Summary: Use `RepoFactory` to construct repositories for all users of `cmdlib`.
Reviewed By: krallin
Differential Revision: D27363471
fbshipit-source-id: c9a483b41709fd90406c6600936671bf9ba61625
Summary:
Switch from `blobrepo_factory` to the new `RepoFactory` to construct `BlobRepo`
in `mononoke_api`.
The factory is part of the `MononokeEnvironment`, and is used to build all of the repos.
Reviewed By: krallin
Differential Revision: D27363473
fbshipit-source-id: 81345969e5899467f01d285c232a510b8edddb17
Summary:
To facilitate migration from `blobrepo_factory` to `repo_factory`, make common
types the same by re-exporting them from `repo_factory` in `blobrepo_factory`.
Reviewed By: ahornby
Differential Revision: D27323371
fbshipit-source-id: 9b0d98fe067de7905fc923d173ba8ae24eaa0d75
Summary:
Add a factory for building development and production repositories.
This factory can be re-used to build many repositories, and they will share
metadata database factories and blobstores if their configs match.
Similarly, the factory will only load redacted blobs once per metadata
database config, rather than once per repo.
Reviewed By: krallin
Differential Revision: D27323369
fbshipit-source-id: 951f7343af97f5e507b76fb822ad2e66f4d8f3bd
Summary: I found the Writer-based zstd::Encoder API was doing a lot more allocations than the buffer-based Compressor API, so I switched, as it's both faster and a better fit for the use case.
Differential Revision: D27588448
fbshipit-source-id: ee8f72180045308a2e16709b9b5aa7bcf3b5cafd
Summary: Reduce the number of manual steps needed to restart a manual scrub by checkpointing its progress to a file.
Differential Revision: D27588450
fbshipit-source-id: cb0eda7d6ff57f3bb18a6669d38f5114ca9196b0
Summary: Here we further add prefetch-metadata support to prefetching profiles
Reviewed By: genevievehelsel
Differential Revision: D27568542
fbshipit-source-id: 64507125f47cf093c0133c82fcab941ed6495f32
Summary:
This is ... a stopgap :( There is probably some slow polling happening in
unbundle_future, and this causes us to fail to use our connection in time in
check_lock_repo...
Reviewed By: ahornby, StanislavGlebik
Differential Revision: D27620728
fbshipit-source-id: b747011405328b60419a99f0e5dbbaf64d53196a
Summary:
This one is a little bit trickier since we want to use Tokio inside a
Quickcheck function. That said, this is basically the expansion `tokio::main`
does, so we can simply use it.
Reviewed By: farnz
Differential Revision: D27619146
fbshipit-source-id: 1e3ea2d119913900d9b55c0a6d33de8a6ed5781c
Summary:
I'd like to just get rid of that library since it's one more place where we
specify the Tokio version and that's a little annoying with the Tokio 1.x
update. Besides, this library is largely obsoleted by `#[fbinit::test]` and
`#[tokio::test]`.
Reviewed By: farnz
Differential Revision: D27619147
fbshipit-source-id: 4a316b81d882ea83c43bed05e873cabd2100b758
Summary:
The intention is that the packer decides what to pack and in what order, while PackBlob provides the methods needed to do the packing as requested by a packer.
Change the API so that a packer cannot make mistakes.
Reviewed By: ahornby
Differential Revision: D27476427
fbshipit-source-id: 7dd534302c62b2432a2aca474f49da8ab9cbef1a
Summary:
It is useful to have latency stats grouped by the shardmap and label to easily identify where the problem comes from if something is broken.
This diff switches the single histogram used for all the MySQL use-cases into a set of histograms: one per `shardmap:label`. It also makes the histograms a bit smaller, as we don't actually see numbers as big as 10s per conn/query.
There is only one case where a histogram is created per shard instead of per shardmap: the `xdb.hgsql` DB with 9 shards. This happens because we connect to each shard as to an individual tier: https://fburl.com/diffusion/um8lt7cr.
{F582699426}
Reviewed By: farnz
Differential Revision: D27503833
fbshipit-source-id: 40c7eb64df7ae0694f63d3644231f240df8212ec
Summary: introduce a way of requesting unhydrated commits using client telemetry
Reviewed By: StanislavGlebik
Differential Revision: D27591868
fbshipit-source-id: 7616f9d3b222c2c7f43c2ba930086eb0ec9e15bc
Summary:
Only used by one test that can define the constraint itself.
The problem with having it on the trait is that it's a bit noisy when
things operate on ToApi at the generic level. It adds to the list of
constraints that users of the ToApi trait need to add.
Reviewed By: kulshrax
Differential Revision: D27549922
fbshipit-source-id: fff9e513eb4c06862111ce6eecc84ab981eea893
Summary:
This is only used in one utility, which can define the constraint itself.
I am looking to simplify the requirements for ToWire so that we can more
easily iterate on them. Debug as a requirement produces too much noise.
There is the risk of ending up in a scenario where we want to print the Wire
type, but it's more practical to annotate structs with derive Debug when that
happens than to add the constraint in the trait.
Reviewed By: kulshrax
Differential Revision: D27549925
fbshipit-source-id: aacf7c1c465c94414be02aa143187897c7084980
Summary:
There is no use for it outside of one test which can describe that constraint
itself.
I think that requiring ToWire and ToApi to use the same objects is too much
for the general use case. We regularly convert between different object types
that are the same logical thing but have different representations. A good
example for that is the HgId. It makes sense to implement ToWire for all HgId
variations.
Reviewed By: kulshrax
Differential Revision: D27549924
fbshipit-source-id: d76d7a4beb528634bed46ae93dbd634d850547e5
Summary:
For async requests, we perform a blocking request in a separate thread, and stream the results back through a channel. However, if the curl handle for the request is dropped before starting the request (for example, because of a configuration error), this function would return a `oneshot::Canceled` error (from the channel consumer) instead of the real error message from the IO thread.
This diff fixes the issue by ensuring that the function waits for and returns the error message from the IO thread in the event that the IO thread returns before starting the request.
Reviewed By: quark-zju
Differential Revision: D27584502
fbshipit-source-id: 8447c158d253c3f28f03fcc4c36a28698fe6e83d
Summary:
This adds a command line argument `-I` that supplies a \0-separated list of files to add to the commit.
Added files can be ignored/untracked.
No limit on total size for now; still waiting to hear from the Mononoke team on the max file size.
Reviewed By: quark-zju
Differential Revision: D27547822
fbshipit-source-id: 8bb755db5dd6e557e2752381dbeb5f1035073725
Summary: This will be used in ephemeral commits, since by default they do not need to include untracked files.
Reviewed By: quark-zju
Differential Revision: D27580975
fbshipit-source-id: 16c4faa92e9afe472ff1677e5b92507bebaee247
Summary:
On macOS, the mount syscall for NFS expects the arguments to be XDR encoded.
This set of arguments roughly matches its Linux counterpart and appears to
start the mount process. It fails early when trying to access the .hg
directory, but this is probably an issue with the NFS server code, not the
mounting code.
Reviewed By: kmancini
Differential Revision: D27306769
fbshipit-source-id: 697fadfddc4048ef56c3a23f75dd5bdbcc92af1b
Summary:
* use `std::nullopt`
* TODO about sandcastle_instance_id in opensource version
Reviewed By: chadaustin
Differential Revision: D27575732
fbshipit-source-id: bf76970a15fee5a3dc1e4e411ea70f5af7248496
Summary:
When creating a commit via scs api we need to validate a few things (e.g. that
the file that a commit is trying to delete existed in the parent), and in order
to do that we need to use a manifest. Previously we were using fsnodes
manifests; however, fsnodes is the slowest manifest to derive, and that makes
the whole create_commit operation much slower. Let's try to use skeleton
manifests, which are the fastest to derive.
Reviewed By: markbt
Differential Revision: D27587664
fbshipit-source-id: a60cab4956063bf26c0f1ec8c9cfa05233bb3cc0
Summary:
Previously ChangesetPathContext was holding both fsnode_id and unode_id, however this made it easy to misuse the API and trigger an expensive fsnodes or unodes path traversal (see more info in the comments for D27587664).
This diff splits it in two separate types.
Also I noticed that ChangesetPathContext wasn't using the `unode_id` future that it stored, so I just deleted it.
Reviewed By: markbt
Differential Revision: D27590997
fbshipit-source-id: 08fc14d33c82357275413c4cf2698f97620503ea
Summary: The default limit for commit cloud interactive history should be two weeks, not two days.
Reviewed By: farnz
Differential Revision: D27589697
fbshipit-source-id: 4314621fa7f06dac9243eb9b826acc1c7b0c0b10
Summary:
Hg sync jobs were frequently failing due to the task performing the MySQL query being starved.
It acquired a connection but then waited many seconds until it could finally send the query. By that time the server returned an error: the connection had been idle for >12s and timed out:
```
I0401 11:08:32.085223 390 [main] eden/mononoke/mononoke_hg_sync_job/src/main.rs:355] error without entry
E0401 11:08:32.086126 390 [main] eden/mononoke/cmdlib/src/helpers.rs:336] Execution error: While executing ReadNextBookmarkLogEntries query
Caused by:
0: While making query 'SELECT id, repo_id, name, to_changeset_id, from_changeset_id, reason, timestamp,
replay.bundle_handle, replay.commit_hashes_json
FROM bookmarks_update_log log
LEFT JOIN bundle_replay_data replay ON log.id = replay.bookmark_update_log_id
WHERE log.id > 19896395 AND log.repo_id = 2106
ORDER BY id asc
LIMIT 20'
1: Mysql Query failed: Failed (MyRouter) Idle timeout after 12 seconds see https://fburl.com/wait_timeout
I0401 11:08:32.172088 390 ClientSingletonManager.cpp:95] Shutting down Manifold ClientSingletonManager
remote: pushkey hooks finished (after 0.00s)
Error: Execution failed
```
Link to the full logs in a timeframe: https://fburl.com/tupperware/16th1yk7 (I added a debug output when `ReadNextBookmarkLogEntries` query runs).
The hg sync job runs an infinite loop looking for new commits to synchronize. In an async stream it runs the `ReadNextBookmarkLogEntries` query, then prepares a bundle and synchronizes it. The stream is [buffered](https://fburl.com/diffusion/z1r7648f) by [5 (link)](https://fburl.com/diffusion/surn37hx).
My guess is that the `ReadNextBookmarkLogEntries` query starts executing while the previously discovered bundles are being prepared. The query opens a connection and then gets switched out while the bundles are being synced. But sometimes those bundles take too long to sync, and the query task waits too long to be polled again.
The sync finishes and the query task finally tries to send a MySQL query, but hits an idle timeout error on the server.
This diff:
* Spawns the MySQL query and `apply_bundle` call.
* Adds watchdog on futures to help debug issues if they occur later, although I couldn't see any slow polls in the logs.
Reviewed By: StanislavGlebik
Differential Revision: D27503062
fbshipit-source-id: 6d1d9166b99487c056f3fb217502f8a9d3d46228
Summary:
Some manual scrub runs can take a long time. Provide progress feedback logging.
Includes a --quiet option for when progress reporting is not required.
Reviewed By: farnz
Differential Revision: D27588449
fbshipit-source-id: 00840cdf2022358bc10398f08b3bbf3eeec2b299
Summary: D27591073 (a1e2833377) made the histogram smaller, so this is sufficiently fast to call directly.
Reviewed By: krallin
Differential Revision: D27592432
fbshipit-source-id: 50d3d594b237b87cc9d0a90910a6f022b7c40f2a
Summary:
There was no reason for this to be this large, and it's causing issues with
repo construction since it's pretty expensive to construct as a result
(D27501915 (69896e90b5)).
Let's just make it much smaller.
Reviewed By: StanislavGlebik
Differential Revision: D27591073
fbshipit-source-id: 1c986cb922d70b10c39711c57ac9f5899ed7496c