Summary: This line just collects a vector into a vector. Probably a remnant of a refactor. Delete it.
Reviewed By: quark-zju
Differential Revision: D27091643
fbshipit-source-id: fb611aabea375b8495476401b2d9cdf7ba12fde1
Summary:
These appear to have been made effectively dead by cleanup in D25313325 (23daa7f90f).
This is part of unblocking the hashed buck-out rollout effort
(https://fb.prod.workplace.com/groups/fbcode.fyi/permalink/3694589863910121/),
as `get_build_rule_output_path()` relies on hard-coded buck-out paths.
Reviewed By: mzlee
Differential Revision: D27072131
fbshipit-source-id: 4fccee06a73c4afbf89cb737b25e1713a1afc55a
Summary: This makes the new software respect writes made by older software.
Differential Revision: D27093942
fbshipit-source-id: 097b57c61b5ee1f0264babb88737306113fe356a
Summary:
When requests are cancelled, their futures are dropped without completion.
Currently this means no logs or statistics are recorded, as normally that
would happen after the request implementation completes.
Add logging for cancelled requests. Include the statistics gathered so far,
so that we know how much time was spent on the cancelled request.
Reviewed By: StanislavGlebik
Differential Revision: D27084866
fbshipit-source-id: d4c5c276d496478f0c7caa700627b92d8f9e80a2
Summary:
Pretty big bug here with the "Overlay" when we are updating both stores. It
turns out that we don't really want a standard Overlay. We want the loaded
iddag to operate with the Ids in the shared IdMap, and we want whatever is
updated to use the in-process IdMap. The problem we have with the overlay is
that the shared IdMap may have more data than the in-process IdMap; the shared
IdMap is always updated by the tailer, after all. This means that when we query
the overlay, we may get data from the shared store even if this is the first
time we are trying to update a changeset in the current process.
The solution here is to specify which vertexes are fetched from which store.
Reviewed By: quark-zju
Differential Revision: D27028367
fbshipit-source-id: e09f003d94100778eabd990724579c84b0f86541
Summary:
Use the generic load function from SegmentedChangelogManager, so the loaded
SegmentedChangelog is consistent with the specified configuration.
I wanted to take another look at ArcSwap to understand whether
`Arc<ArcSwap<Arc<dyn SegmentedChangelog>>>` was the type it recommends
for our situation, and indeed it is.
Reviewed By: quark-zju
Differential Revision: D27028369
fbshipit-source-id: 7c601d0c664f2be0eef782700ef4dcefa9b5822d
Summary:
Keep SegmentedChangelog up to date by triggering an update to the master
bookmark every minute.
Updating SegmentedChangelog in process has the side effect of adding some
in-process-only bookkeeping. Over long periods of time this can result in
increased memory usage. To mitigate any potential issues, we reload Segmented
Changelog every hour. This will make its parameters more predictable.
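The schedule can be sketched as a pure decision per tick (the helper is hypothetical; the one-minute update and roughly hourly reload come from the summary, so `reload_every = 60`):

```rust
#[derive(Debug, PartialEq)]
enum Action {
    Update, // update to the master bookmark
    Reload, // reload from scratch, dropping in-process bookkeeping
}

/// Every tick is an update, except every `reload_every`-th tick,
/// which is a full reload.
fn action_for_tick(tick: u64, reload_every: u64) -> Action {
    if tick > 0 && tick % reload_every == 0 {
        Action::Reload
    } else {
        Action::Update
    }
}

fn main() {
    // Ticks fire every minute; a reload every 60 ticks is roughly hourly.
    assert_eq!(action_for_tick(1, 60), Action::Update);
    assert_eq!(action_for_tick(60, 60), Action::Reload);
    assert_eq!(action_for_tick(61, 60), Action::Update);
}
```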
Reviewed By: quark-zju
Differential Revision: D27028368
fbshipit-source-id: dae581b9a067c6eae7975b4517203085b168e2f0
Summary:
Several methods (`commit_compare`, `commit_is_ancestor_of`, `commit_file_diffs`
and `commit_common_base_with`) operate on a pair of commits. Currently these
all resolve the other commit manually and in different ways. Commonize the
code, and add contextual information so the caller can see which of the two
commits failed to resolve.
Reviewed By: StanislavGlebik
Differential Revision: D27079920
fbshipit-source-id: a2b735801ed75232dd302061aaff2da23448d812
Summary:
Add a `.context` method for `ServiceError`, which allows the addition of
context information in errors.
Since these are wrapped Thrift errors, we can't use the usual error-chain
mechanism of `std::error::Error`. Instead, we just prepend the message that
the Thrift client will see with the context.
Add an extension to `Result`, for results containing an error that can be
converted into a `ServiceError`, to allow the addition of context when
processing a chain of `Result`s.
Reviewed By: StanislavGlebik
Differential Revision: D27079921
fbshipit-source-id: a1200f44346530c91bd559f4be0ca2b04f7d4480
Summary:
Initializing twice causes it to fail. Let's not do that, and also let's use
the init_mononoke function instead of our ad-hoc logger and runtime creation
(at the very least it also initializes tunables and sets the correct tokio
runtime parameters).
Also, let's add more logging to see the progress of uploading.
Reviewed By: ahornby
Differential Revision: D27079673
fbshipit-source-id: 940135a9aed62f7139835b2450a1964b879e814b
Summary:
The way I plan to use the new streaming_changelog in prod is by running it
periodically (say, every 15 mins or so). However, some repos won't get many
commits in the last 15 mins (in fact, they might get just 1 or 2).
And even for high-commit-rate repos, most of the time the last chunk
will not be a full chunk (i.e. it will be smaller than --max-data-chunk-size).
If we just uploaded the last chunk regardless of its size, the streaming
changelog database table would keep growing by one entry every 15 mins even
when it's completely unnecessary. Instead, I suggest adding an option to not
upload the last chunk when it's not necessary.
Reviewed By: farnz
Differential Revision: D27045681
fbshipit-source-id: 2d0fed3094944c4ed921f36943b881af394d9c17
Summary:
This command can be used to update an already existing streaming changelog.
It takes a newly cloned changelog and updates the streaming changelog
chunks in the database.
The biggest difference from the "create" command is that we first need to
figure out what's already been uploaded to the streaming changelog. For that,
two new methods were added to SqlStreamingChunksFetcher.
Reviewed By: farnz
Differential Revision: D27045386
fbshipit-source-id: 36fc9387f621e1ec8ad3eb4fbb767ab431a9d0bb
Summary:
Small refactoring that will be used in the next diff. In the next diff we'll
add an "update" command, and that command will specify the chunk numbers
itself. So let's move setting chunk numbers out of the
upload_chunks_to_blobstore function.
Differential Revision: D27045387
fbshipit-source-id: c5387a60841fe184c6db5edc4812ddd409eb2215
Summary:
Small refactoring that makes a few things easier to do in later diffs:
1) Adds a verification that checks the data offset
2) We now read the first chunk's offset from the revlog, instead of hardcoding
it to 0, 0. This will be useful for the "update" command, which needs to skip
revlog entries that already exist in the database
Differential Revision: D27045388
fbshipit-source-id: 4ee80c96d9307c77b1108889e457f10e83c8beb7
Summary: Duplicate name caused the getdeps build to fail. This diff fixes it.
Reviewed By: krallin
Differential Revision: D27049661
fbshipit-source-id: b23fe52ad89cbe764e656dfe960921ff1ac92b32
Summary:
`hg status` can be non-deterministic because of the last-second mtime fix
special rule (see pytreestate/src/lib.rs:invalidatemtime).
The test sometimes fails like:
test-sparse-fetch-t.py:140: [] != ['x', 'x/x']
Update it to support both the `[]` and `['x', 'x/x']` cases.
Reviewed By: sfilipco
Differential Revision: D27071225
fbshipit-source-id: c413906897b408c1e85912852afed1717a87ffc9
Summary:
The error was triggered but it's unclear what's wrong. Make the error
more detailed.
Reviewed By: xavierd
Differential Revision: D27058212
fbshipit-source-id: 3f6220e2d100d9118c05a8b4c75c5ba19c9181db
Summary: This will be used by `doctor` command.
Reviewed By: sfilipco
Differential Revision: D27053349
fbshipit-source-id: bc33e25997f30107f919a090ff68693bfdd7199d
Summary:
By implementing DefaultOpenOptions, indexedlog provides `repair()` for free.
Re-export the `Repair` trait so other crates can use `repair()` without
importing indexedlog.
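The pattern can be sketched with simplified signatures (the real indexedlog traits take proper option and path types; strings stand in here): a blanket impl means any type providing `DefaultOpenOptions` gets `repair()` for free via the re-exported `Repair` trait.

```rust
/// Simplified stand-in for indexedlog's open-options trait.
trait DefaultOpenOptions {
    fn default_open_options() -> String; // stands in for the options type
}

/// Simplified stand-in for the re-exported Repair trait.
trait Repair {
    fn repair(path: &str) -> String;
}

// Blanket impl: anything with default open options can be repaired.
impl<T: DefaultOpenOptions> Repair for T {
    fn repair(path: &str) -> String {
        format!("repairing {} with {}", path, T::default_open_options())
    }
}

struct MetaLog;
impl DefaultOpenOptions for MetaLog {
    fn default_open_options() -> String {
        "metalog-options".to_string()
    }
}

fn main() {
    // The crate defining MetaLog never names indexedlog's internals;
    // it only implements DefaultOpenOptions and gets repair() for free.
    assert_eq!(
        MetaLog::repair("/tmp/log"),
        "repairing /tmp/log with metalog-options"
    );
}
```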
Reviewed By: sfilipco
Differential Revision: D27053352
fbshipit-source-id: 8fa952f0e51e007b9d348bc12699ef1d65000c6b
Summary:
With the new log for MultiMeta, it's now possible to repair a MultiLog by:
- Repairing all Logs.
- Scanning through the MultiMeta Log to find a valid MultiMeta.
- Setting the current MultiMeta to the picked MultiMeta.
Reviewed By: sfilipco
Differential Revision: D27053346
fbshipit-source-id: d60596fb00323b3bcadd5ade2e34cad29a37d64a
Summary:
We recently saw a few reports about "multimeta" being 0-sized. MultiLog cannot
be repaired like other logs because the logs (ex. IdDag and IdMap) have to be
in sync. To implement Repair for MultiLog, let's track MultiMeta in a Log so
we can check its previous entries and fix the multimeta.
Reviewed By: sfilipco
Differential Revision: D27053347
fbshipit-source-id: af99b13d658ee62bfe63973ab9d37338d14a7d4a
Summary:
The test failed sometimes on Linux:
--- test-doctor.t
+++ test-doctor.t.err
@@ -204,11 +204,11 @@
M A2
A A0
A
- A X
R A
R A1
? B
? C
+ ? X
? Y
? Z
The treestate fix appears to roll back to an even earlier version, which is
also a valid fix. Let's accept that state too.
Reviewed By: DurhamG
Differential Revision: D27064825
fbshipit-source-id: 6aab04e66ad14ad651f93805c9652c7423178665
Summary:
The test failed sometimes on OSX:
--- test-fb-hgext-fastlog.t
+++ test-fb-hgext-fastlog.t.err
@@ -34,6 +34,7 @@
$ hg log dir -T '{desc}\n'
b
a
+ Exception in thread Thread-3 (most likely raised during interpreter shutdown): (no-eol)
$ hg log dir -T '{desc}\n' --all
b
a2
The threading usage in fastlog does seem kind of risky (especially with async
Rust involved). Race condition in Py_Finalize is not at all fun. Let's just
make the test more robust for now. In the future we probably want to avoid
threading in fastlog.
Reviewed By: DurhamG
Differential Revision: D27064618
fbshipit-source-id: a6c2ee5eda0fbd5120c8b5e5cfcc7af0f158f9b9
Summary:
The test is failing:
--- test-fb-hgext-remotefilelog-repack-remove-old.t
+++ test-fb-hgext-remotefilelog-repack-remove-old.t.err
@@ -63,7 +63,7 @@
-r--r--r-- 80 *.datapack (glob)
-r--r--r-- 80 *.datapack (glob)
-r--r--r-- 80 *.datapack (glob)
- -r--r--r-- 144 *.datapack (glob)
+ -r--r--r-- 80 ef52660a201e447b43868610b08c72e22067b8b2.datapack
We are migrating away from repack, so I just made the test pass without
investigating exactly what's going on.
Reviewed By: sfilipco
Differential Revision: D27064249
fbshipit-source-id: 6bcd583b6ecbe0b373d9fec2b23269b0da6a27f3
Summary: Now that EdenAPI requests are being logged to the same dataset as regular requests (`mononoke_test_perf`), let's prefix the EdenAPI-specific columns with `edenapi_` to avoid confusion.
Reviewed By: krallin
Differential Revision: D26896670
fbshipit-source-id: 92a0710ff1a7297c9cf46ff9bd9576c9bc155e26
Summary:
Suppress the deprecated-declarations errors.
I found something similar here: https://stackoverflow.com/questions/1902021/suppressing-is-deprecated-when-using-respondstoselector
example failure:
https://www.internalfb.com/intern/buck/build/a3b550b8-4099-4f27-8975-5bfffd6447e5/
```
eden/fs/inodes/test/OverlayTest.cpp:730:1: error:
'InstantiateTestCase_P_IsDeprecated' is deprecated: INSTANTIATE_TEST_CASE_P is deprecated, please use INSTANTIATE_TEST_SUITE_P [-Werror,-Wdeprecated-declarations]
INSTANTIATE_TEST_CASE_P(
/Users/kuki/fbsource/third-party/googletest/googletest/include/gtest/gtest-param-test.h:507:38:
note: expanded from macro 'INSTANTIATE_TEST_CASE_P'
static_assert(::testing::internal::InstantiateTestCase_P_IsDeprecated(), \
/Users/kuki/fbsource/third-party/googletest/googletest/include/gtest/internal/gtest-internal.h:1209:1: note: 'InstantiateTestCase_P_IsDeprecated' has been explicitly marked deprecated here
GTEST_INTERNAL_DEPRECATED(
/Users/kuki/fbsource/third-party/googletest/googletest/include/gtest/internal/gtest-port.h:2215:59:
note: expanded from macro 'GTEST_INTERNAL_DEPRECATED'
#define GTEST_INTERNAL_DEPRECATED(message) __attribute__((deprecated(message)))
```
Reviewed By: mzlee
Differential Revision: D27037957
fbshipit-source-id: b12cc500441c9ed4ed72825475c57047fb0c2076