Summary: Based on comments on D20382825, we need to make sure that `_gettrees()` knows for sure whether on-demand tree fetching is in use in order to properly identify missing nodes in the response.
Reviewed By: quark-zju
Differential Revision: D20520439
fbshipit-source-id: ffa6d62dbe8b6f641b1dacebcb6f94ceae714c1b
Summary: 'new' does not make it explicit that the returned value is not refreshed.
Reviewed By: dtolnay
Differential Revision: D20356129
fbshipit-source-id: ff4a8c6fe4c34e93729c902e4b41afbe3c9deca1
Summary:
Now that Segment has no lifetime parameter, we can create it directly and return ownership.
Performance of "building segments" does not seem to change:
# before
building segments 750.129 ms
# after
building segments 712.177 ms
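A minimal sketch of the pattern this enables, using an invented, simplified `Segment` (the real struct's fields differ): without a lifetime parameter, a constructor can build the value and hand back ownership directly.

```rust
// Hypothetical, simplified illustration: once `Segment` no longer borrows
// from a backing byte buffer (no lifetime parameter), it can be created
// and returned by value instead of handed out as a lifetime-bound view.

#[derive(Debug, Clone, PartialEq)]
struct Segment {
    level: u8,
    low: u64,
    high: u64,
}

// Ownership of the freshly built Segment moves to the caller.
fn build_segment(level: u8, low: u64, high: u64) -> Segment {
    Segment { level, low, high }
}

fn main() {
    let seg = build_segment(0, 10, 20);
    assert_eq!(seg.high - seg.low, 10);
    println!("{:?}", seg);
}
```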
Reviewed By: sfilipco
Differential Revision: D20505200
fbshipit-source-id: 2448814751ad1a754b90267e43262da072bf4a16
Summary:
This allows structures like BTreeMap to own and store Segment.
That was not possible until D19818714, which added the minibytes::Bytes
interface for indexedlog.
In theory this hurts performance a little, but the difference is not visible
in `cargo bench --bench dag_ops`:
# before
building segments 714.420 ms
ancestors 54.045 ms
children 490.386 ms
common_ancestors (spans) 2.579 s
descendants (small subset) 406.374 ms
gca_one (2 ids) 161.260 ms
gca_one (spans) 2.731 s
gca_all (2 ids) 287.857 ms
gca_all (spans) 2.799 s
heads 234.130 ms
heads_ancestors 39.383 ms
is_ancestor 113.847 ms
parents 251.604 ms
parent_ids 11.412 ms
range (2 ids) 117.037 ms
range (spans) 241.156 ms
roots 507.328 ms
# after
building segments 750.129 ms
ancestors 53.341 ms
children 515.607 ms
common_ancestors (spans) 2.664 s
descendants (small subset) 411.556 ms
gca_one (2 ids) 164.466 ms
gca_one (spans) 2.701 s
gca_all (2 ids) 290.516 ms
gca_all (spans) 2.801 s
heads 240.548 ms
heads_ancestors 39.625 ms
is_ancestor 115.735 ms
parents 239.353 ms
parent_ids 11.172 ms
range (2 ids) 115.483 ms
range (spans) 235.694 ms
roots 506.861 ms
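A hedged sketch of what owning the type buys, again with an invented, simplified `Segment`: an ordered map can now hold segments without any borrow tying the map to a buffer's lifetime.

```rust
// Hypothetical illustration: with an owned `Segment` (no lifetime
// parameter), a BTreeMap can store segments keyed by their low id.

use std::collections::BTreeMap;

#[derive(Debug, Clone)]
struct Segment {
    low: u64,
    high: u64,
}

fn main() {
    let mut map: BTreeMap<u64, Segment> = BTreeMap::new();
    // The map takes ownership of each Segment; nothing borrows from a
    // backing buffer, so the map can outlive whatever produced them.
    map.insert(10, Segment { low: 10, high: 20 });
    map.insert(0, Segment { low: 0, high: 9 });

    // BTreeMap iterates in key order.
    let lows: Vec<u64> = map.values().map(|s| s.low).collect();
    assert_eq!(lows, vec![0, 10]);
}
```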
Reviewed By: sfilipco
Differential Revision: D20505201
fbshipit-source-id: c34d48f0216fc5b20a1d348a75ace89ace7c080b
Summary: Now that we sort the errors, we don't need this condition anymore.
Reviewed By: xavierd
Differential Revision: D20517578
fbshipit-source-id: 7012de387ee8acee1c1b630991f3a289a3fa48d1
Summary:
EdenFS is reported as `osxfuse_eden` on OSX after D20313385.
Update the fscap table to avoid slow code paths that probe fs capabilities.
Without this diff, churn on EdenFS on OSX would trigger undesirable watchman
events.
Reported by: fanzeyi
Reviewed By: fanzeyi
Differential Revision: D20518902
fbshipit-source-id: 2e8e472df16d08b17834b2c966c065bbaad052fe
Summary:
Now that Arun is about to roll this out to the team, we should get some more
logging in place server side. This updates the designated nodes handling code
to report whether it was enabled (and log prior to the request as well).
Reviewed By: HarveyHunt
Differential Revision: D20514429
fbshipit-source-id: 76ce62a296fe27310af75c884a3efebc5f210a8a
Summary:
The latter is what is now recommended, and it no longer requires a macro to
initialize a lazy value, leading to nicer code.
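The diff presumably moved from `lazy_static!` to `once_cell::sync::Lazy`. This dependency-free sketch uses std's `OnceLock` instead, which follows the same macro-free pattern: a plain static plus an init closure that runs at most once.

```rust
// Macro-free lazy initialization. `once_cell::sync::Lazy` works the same
// way; std's `OnceLock` is used here so the example needs no dependencies.

use std::collections::HashMap;
use std::sync::OnceLock;

fn keywords() -> &'static HashMap<&'static str, u32> {
    static KEYWORDS: OnceLock<HashMap<&'static str, u32>> = OnceLock::new();
    KEYWORDS.get_or_init(|| {
        // Runs at most once, on first access; no macro needed.
        let mut m = HashMap::new();
        m.insert("fn", 1);
        m.insert("let", 2);
        m
    })
}

fn main() {
    assert_eq!(keywords().get("fn"), Some(&1));
    assert_eq!(keywords().len(), 2);
}
```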
Reviewed By: DurhamG
Differential Revision: D20491488
fbshipit-source-id: 2e0126c9c61d0885e5deee9dbf112a3cd64376d6
Summary:
Lots of different warnings on this one. The main ones were:
- One bug where `.write` was used instead of `.write_all`
- Using `.next()` instead of `.nth(0)` for iterators
- Using `.cloned()` instead of `.map(|x| x.clone())`
- Using conditions as expressions instead of `mut` variables
- Using `.to_vec()` on slices instead of `.iter().cloned().collect()`
- Using `.is_empty()` instead of comparing `.len()` against 0
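Illustrative before/after snippets for several of these lints, as a hedged sketch (the identifiers are invented, not from the actual diff):

```rust
// Each line shows the preferred, clippy-clean form; the comment names the
// pattern it replaces.

use std::io::Write;

fn main() -> std::io::Result<()> {
    let data = [1u8, 2, 3];

    // .write may write only part of the buffer; .write_all loops until done.
    let mut out: Vec<u8> = Vec::new();
    out.write_all(&data)?;

    // .nth(0) -> .next()
    let first = data.iter().next();
    assert_eq!(first, Some(&1));

    // .map(|x| x.clone()) -> .cloned()
    let copied: Vec<u8> = data.iter().cloned().collect();

    // slice.iter().cloned().collect() -> slice.to_vec()
    let vec2 = data.to_vec();
    assert_eq!(copied, vec2);

    // `let mut label; if ... { label = ... }` -> condition as an expression
    let label = if data.len() > 2 { "long" } else { "short" };
    assert_eq!(label, "long");

    // .len() == 0 -> .is_empty()
    assert!(!data.is_empty());
    Ok(())
}
```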
Reviewed By: DurhamG
Differential Revision: D20469894
fbshipit-source-id: 3666a44ad05e0fbfa68d490595703c022073af63
Summary:
These were from a wide variety of warnings. The only one I haven't addressed is
clippy's complaint that `Pin<Box<Vec<u8>>>` can be replaced by `Pin<Vec<u8>>`. I
haven't investigated it much; someone more familiar with this code can
probably figure out whether that change is buggy or not :)
Reviewed By: DurhamG
Differential Revision: D20469647
fbshipit-source-id: d42891d95c1d21b625230234994ab49bbc45b961
Summary: The rust-crypto library is being deprecated, as it has been unmaintained for almost four years. It is therefore being replaced with the RustCrypto (https://github.com/RustCrypto) libraries. In this case, the hashing function being replaced is SHA-256.
Reviewed By: krallin
Differential Revision: D20227885
fbshipit-source-id: 15aff5f5e6a1df8a46b2be0b334155659cbc2ea4
Summary:
This belongs to D20149376. However, `buck test` does not include benchmarks,
so it was not noticed.
Reviewed By: DurhamG
Differential Revision: D20505097
fbshipit-source-id: 24daeb17b68808f8e69e18452ab2cf26c7aa10a7
Summary: We had hooks logic scattered all over the place - move it all into the hooks crate, so that it's easier to refactor to use Bonsai changesets instead of hg.
Reviewed By: StanislavGlebik
Differential Revision: D20198725
fbshipit-source-id: fb8bdc2cdbd1714c7181a5a0562c1dacce9fcc7d
Summary: Migrate hooks to new futures and thus modern tokio. In the process, replace Lua hooks with Rust hooks, and add fixes for the few cases where Lua was too restrictive about what could be done.
Reviewed By: StanislavGlebik
Differential Revision: D20165425
fbshipit-source-id: 7bdc6820144f2fdaed653a34ff7c998913007ca2
Summary:
Let's log the current max staleness in the warm bookmark cache. The staleness is reported by bookmark updaters using the timestamp from the bookmark update log.
To do that I extended `live_updaters` to also include the state of each updater; the bookmark coordinator collects this state, extracts the staleness, and logs it to ODS.
Reviewed By: krallin
Differential Revision: D20468423
fbshipit-source-id: 7f9aacc2ab5bc62c2aed123b8a58d9fc6d49c63c
Summary:
We are planning to expand our usage of warm bookmark cache beyond scs server.
In particular we'd like to use it in mononoke server. But before we do that let's
make warm bookmark cache a bit more reliable and predictable.
In particular, let's make sure that slow update of a single bookmark doesn't
block progress of all other bookmarks - for example, we don't want to block
updating master if we have problems with a single release bookmark. See more
discussion about it in D20161458. [1]
In order to do that we now update every bookmark separately - bookmarks updater
job spawns updater for a single bookmark and makes sure there's no more than a
single updater for a given bookmark.
A few caveats:
1) After this diff we no longer preserve the order of bookmark updates, i.e.
even if bookmark A was updated before bookmark B, the warm bookmark cache might
see the updates in a different order. We expect this not to matter much, with
the only caveat being the stable and master bookmarks - it'd be weird to have
stable be a descendant of master. This glitch should only happen for a very
brief period of time, so hopefully it shouldn't matter in practice.
2) The current implementation doesn't stop single-bookmark updaters if the main
updater is cancelled. TBH, I don't think that's necessary.
In the next diff I'll add ODS counters to track the delay between the warm
bookmark cache and the actual values of bookmarks.
[1] Previously we wanted to update bookmarks in bookmark update log order, but
we decided it's not a great idea. See D20161458 for more details.
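The "at most one in-flight updater per bookmark" idea can be sketched as below. This is a hedged, invented illustration: `Coordinator` and `maybe_spawn` are made-up names, and std threads stand in for whatever async spawning the real job uses.

```rust
// Toy coordinator that keeps at most one running updater per bookmark.

use std::collections::HashMap;
use std::thread::{self, JoinHandle};

struct Coordinator {
    updaters: HashMap<String, JoinHandle<()>>,
}

impl Coordinator {
    fn new() -> Self {
        Coordinator { updaters: HashMap::new() }
    }

    /// Spawn an updater for `bookmark` unless one is still running.
    fn maybe_spawn(&mut self, bookmark: &str) {
        if let Some(h) = self.updaters.get(bookmark) {
            if !h.is_finished() {
                return; // one updater per bookmark at a time
            }
        }
        let name = bookmark.to_string();
        let handle = thread::spawn(move || {
            // ... warm up the cache entry for `name` here ...
            let _ = name;
        });
        // Inserting under the same key replaces the finished handle, so a
        // slow bookmark never blocks the others and never piles up updaters.
        self.updaters.insert(bookmark.to_string(), handle);
    }
}

fn main() {
    let mut c = Coordinator::new();
    c.maybe_spawn("master");
    c.maybe_spawn("master"); // ignored while the first is still running
    assert_eq!(c.updaters.len(), 1);
}
```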
Reviewed By: krallin
Differential Revision: D20335195
fbshipit-source-id: 0b1242faa1a9ef286929132c2350c299a2594467
Summary: Push `compat` one level down in `main` in upload_globalrevs.
Reviewed By: krallin
Differential Revision: D20469436
fbshipit-source-id: 50abf7bb401f4c1d534080e8dc6e341b06e936a9
Summary:
We see some hgbuild jobs failing because the order of errors is
different from what I see on my devserver. Let's sort them to make them stable.
This is presumably because we're operating in the order returned by readdir,
which is not guaranteed to be sorted.
Reviewed By: xavierd
Differential Revision: D20500566
fbshipit-source-id: bd4d3db1b77cd4bd7259f9bcc10bc65649fae7c6
Summary: We don't really need the Rust workers for this, as we do not expect thousands of files to be changed during an in-memory merge.
Reviewed By: DurhamG
Differential Revision: D20495141
fbshipit-source-id: e72f8c4b01deee46ee72364dcd6716692c4103ab
Summary: Basic test to validate that updating files works as expected.
Reviewed By: DurhamG
Differential Revision: D20450123
fbshipit-source-id: 3ce09e1f72fe00052b86eec07668f3aa45824725
Summary:
python2 doesn't exist on CentOS 8. Rather than guess which interpreter to
use, run using the built-in interpreter from our Rust binary, which is
accessible via `sys.executable` under the `debugpython` subcommand.
Reviewed By: quark-zju
Differential Revision: D20379035
fbshipit-source-id: 57863427241ef01e5c0e8debbb5baf056fd41d65
Summary: Add a unit test for the new `trees_under_path` method.
Reviewed By: StanislavGlebik
Differential Revision: D20460242
fbshipit-source-id: 92ec983bc8307fbfa7862ba17b603b4150c42709
Summary:
Update a test that was attempting to call `os.fdopen()` in binary mode with
line buffering. Line buffering is only valid on files opened in text mode.
Python 3.8 now emits a warning about this invalid call, which causes the test
to fail. Switch the test to use unbuffered mode.
Reviewed By: pixelb
Differential Revision: D20484844
fbshipit-source-id: 3bedfa3f0fb7926ad3ab3b6ea85716d0e1b603c3
Summary:
Remove the custom `GzipFileWithTime` class from `mercurial/archival.py`
This code was added in 2007. Presumably back then Python's standard
`gzip.GzipFile` class did not support the `mtime` argument. However, this
argument is present in Python 2.7+, and we don't care about older versions.
The custom `GzipFileWithTime` class breaks in Python 3.8 since Python 3.8
added an extra `compresslevel` argument to the internal `_write_gzip_header()`
method.
Reviewed By: pixelb
Differential Revision: D20484845
fbshipit-source-id: 4e381799d8537c97cd1993273c8efd02743531df
Summary: Let's enable it for tests. We'll slow roll it in production.
Reviewed By: quark-zju
Differential Revision: D19543790
fbshipit-source-id: be7d18dd8ffe51615a27c39ebf4247ec405b4097
Summary:
When restarting EdenFS processes I've seen a small number of cases where it
took longer than 5 seconds for the kernel to clean up the old EdenFS process
after sending SIGKILL to it on heavily loaded systems.
If we hit this timeout during `edenfsctl restart` then the command exits
unsuccessfully and does not start a new process. It's therefore desirable to
wait longer if we can.
Reviewed By: wez
Differential Revision: D20484393
fbshipit-source-id: 98a8e5703e7b58c6d809875583347ddb302b9e02
Summary:
s/CURLE_SSL_CONNECT_ERROR/CURLE_RECV_ERROR/.
Note CURLE_SSL_CONNECT_ERROR is not explicitly checked in
code in the eden hierarchy (or anywhere else significant in fbcode).
Reviewed By: meyering
Differential Revision: D20490844
fbshipit-source-id: be4a66f49eb4a9eabaf73785f9a203f0aa6905a4
Summary:
The mutation store stores entries with a floating-point timestamp. This
pattern was copied from obsmarkers.
However, Mercurial uses integer timestamps in the commit metadata (the
parser supports floats for historical reasons, but only stores integer
timestamps). Mononoke also uses integer timestamps in its `DateTime`
type.
To keep things simple, switch to using integer timestamps for mutation
entries. Existing entries with floating point timestamps are truncated.
Add a new entry format version that encodes the timestamp as an integer.
For now, continue to generate the old version so that old clients can
read entries created by new clients.
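The float-in, integer-out behavior can be shown in a few lines. This is an illustrative sketch, not the store's real API: `store_timestamp` is an invented name.

```rust
// The parser still accepts floating-point timestamps for historical
// reasons, but storage keeps only the integer part.

fn store_timestamp(ts: f64) -> i64 {
    // `as i64` truncates toward zero, matching "existing entries with
    // floating point timestamps are truncated".
    ts as i64
}

fn main() {
    assert_eq!(store_timestamp(1584466783.75), 1584466783);
    assert_eq!(store_timestamp(1584466783.0), 1584466783);
}
```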
Reviewed By: quark-zju
Differential Revision: D20444366
fbshipit-source-id: 4d6d9851aacb314abea19b87c9d0130c47fdf512