Summary:
This ensures indexes are complete even if the index format or definition has
changed.
Reviewed By: DurhamG
Differential Revision: D20286509
fbshipit-source-id: fcc4ebc616a4501e4b6fd2f1a9826f54f40b99b8
Summary:
This avoids loading all blackbox logs when `init()` gets called multiple times
(for example, once in Rust and once in Python).
Reviewed By: DurhamG
Differential Revision: D20286511
fbshipit-source-id: ef985e454782b787feac90a6249651a882b6552e
Summary: This API has the benefit that it does not trigger loading older logs.
Reviewed By: DurhamG
Differential Revision: D20286512
fbshipit-source-id: 426421691ad1130cdbb2305612d76f18c9f8798c
Summary:
This updates microwave to also support changesets, in addition to filenodes.
Those create a non-trivial amount of SQL load when we warm up the cache (due to
sequential reads), which we can eliminate by loading them through microwave.
They're also a bottleneck when manifests are loaded already.
Note: as part of this, I've updated the Microwave wrapper methods to panic if
we try to access a method that isn't instrumented. Since we'd be running
the Microwave builder in the background, this feels OK (because then we'd find
out if we call them during cache warmup unexpectedly).
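A minimal Rust sketch of the panic-on-uninstrumented pattern described above (the type and method names here are hypothetical, not the actual Microwave API):

```rust
// Hypothetical wrapper: instrumented methods work; anything that isn't
// instrumented panics so we notice unexpected calls during cache warmup.
struct MicrowaveFilenodes;

impl MicrowaveFilenodes {
    // Instrumented: in the real wrapper this would record/replay the call.
    fn get_filenode(&self, id: u64) -> Option<u64> {
        Some(id)
    }

    // Not instrumented: fail loudly rather than silently miss coverage.
    fn get_all_filenodes(&self) -> Vec<u64> {
        panic!("get_all_filenodes is not instrumented for microwave")
    }
}

fn main() {
    let wrapper = MicrowaveFilenodes;
    assert_eq!(wrapper.get_filenode(42), Some(42));
    // Calling an uninstrumented method panics, which catch_unwind observes.
    let result = std::panic::catch_unwind(|| MicrowaveFilenodes.get_all_filenodes());
    assert!(result.is_err());
    println!("ok");
}
```

Since the builder runs in the background, a loud panic here surfaces coverage gaps without affecting the serving path.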
Reviewed By: farnz
Differential Revision: D20221463
fbshipit-source-id: 317023677af4180007001fcaccc203681b7c95b7
Summary:
This incorporates microwave into the cache warmup process. See earlier in this
stack for a description of what this does, how it works, and why it's useful.
Reviewed By: ahornby
Differential Revision: D20219904
fbshipit-source-id: 52db74dc83635c5673ffe97cd5ff3e06faba7621
Summary:
With the new crate-public interfaces and Debug implementations it's possible to
write tests for DagSet. So let's do it.
Reviewed By: sfilipco
Differential Revision: D20242561
fbshipit-source-id: 180e04d9535f79471c79c4307f6ab6e8e8815067
Summary: The compiler was complaining about these on Windows.
Reviewed By: quark-zju
Differential Revision: D20250719
fbshipit-source-id: 89405e155875a4a549b243e93ce63cf3f53b1fab
Summary:
Don't restrict constructing a c_api datapack store to only Unix; we can
construct it on Windows too by assuming that the path will be valid UTF-8.
Reviewed By: quark-zju
Differential Revision: D20250718
fbshipit-source-id: 07234b6a71b50c803cfe3b962fa727f57037c919
Summary: This returns the ancestors in the same reversed order as the `parents` method.
Reviewed By: sfilipco
Differential Revision: D20265277
fbshipit-source-id: 83277cee3d8e9070fc56d20d4c1877e6782c22f7
Summary:
Add counters to track queued imports, to enable more statistics exposure:
- Add counters to track the number of blob, tree, and prefetch imports that are pending.
- Increment a counter (in the constructor of a wrapper struct) when an import is about to be queued.
- Decrement the counter (in the destructor of the wrapper struct) once the load has completed.
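The EdenFS code is C++, but the constructor/destructor counter pattern can be sketched in Rust with `Drop` (the counter name is hypothetical):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical pending-import counter; the real EdenFS counters live elsewhere.
static PENDING_BLOB_IMPORTS: AtomicUsize = AtomicUsize::new(0);

struct PendingImportGuard;

impl PendingImportGuard {
    // Increment when the import is about to be queued.
    fn new() -> Self {
        PENDING_BLOB_IMPORTS.fetch_add(1, Ordering::SeqCst);
        PendingImportGuard
    }
}

impl Drop for PendingImportGuard {
    // Decrement once the load has completed (guard goes out of scope).
    fn drop(&mut self) {
        PENDING_BLOB_IMPORTS.fetch_sub(1, Ordering::SeqCst);
    }
}

fn main() {
    {
        let _guard = PendingImportGuard::new();
        assert_eq!(PENDING_BLOB_IMPORTS.load(Ordering::SeqCst), 1);
    } // guard dropped here, counter decremented
    assert_eq!(PENDING_BLOB_IMPORTS.load(Ordering::SeqCst), 0);
    println!("ok");
}
```

Tying the decrement to the destructor means the count stays accurate even if the import path returns early or unwinds.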
Reviewed By: chadaustin
Differential Revision: D20256410
fbshipit-source-id: 5536b46307b30fc19dc5747414727a86961c78e1
Summary: Improvements aim to minimize the number of DB queries.
Differential Revision: D20280711
fbshipit-source-id: 6cc06f1ac4ed8db9978e0eee956550fcd16bbe8a
Summary:
Implementation of the derivation logic for the changeset info.
`BonsaiDerived` is implemented for `ChangesetInfo`: `derive_from_parents` derives the info, and `BonsaiDerivedMapping` then puts it into the blobstore.
```
ChangesetInfo::derive(..) -> ChangesetInfo
```
Reviewed By: krallin
Differential Revision: D20185954
fbshipit-source-id: afe609d1b2711aed7f2740714df6b9417c6fe716
Summary:
Introducing data structures for the derived Bonsai changeset info, which is meant to store all commit metadata except for the file changes.
A Bonsai changeset consists of the commit metadata and the set of all file changes associated with the commit.
Some changesets, usually merge commits, include thousands of file changes. That is not a problem by itself; however, when we need some information about a commit other than its hash, we have to fetch the whole changeset, which can take up to 15-20 seconds.
Changeset info as a separate data structure speeds up fetching in the cases where we need the commit metadata but not the file changes.
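A hedged sketch of the shape such a structure might take (the field names are illustrative assumptions, not the actual Mononoke definition):

```rust
// Illustrative only: commit metadata without the file changes, so fetching it
// stays cheap even for merge commits with thousands of file changes.
#[derive(Debug)]
struct ChangesetInfo {
    changeset_id: String,
    author: String,
    message: String,
    parents: Vec<String>,
    // Note: no file-change list here; that stays in the full Bonsai changeset.
}

fn main() {
    let info = ChangesetInfo {
        changeset_id: "abc123".to_string(),
        author: "someone".to_string(),
        message: "example commit".to_string(),
        parents: vec!["def456".to_string()],
    };
    assert_eq!(info.parents.len(), 1);
    println!("ok");
}
```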
Reviewed By: markbt
Differential Revision: D20139434
fbshipit-source-id: 4faab267304d987b44d56994af9e36b6efabe02a
Summary:
The new API is required for migrating Commit Cloud off hg servers and the infinitepush database.
This can also fix phases issues with `hg cloud sl`.
Reviewed By: markbt
Differential Revision: D20221913
fbshipit-source-id: 67ddceb273b8c6156c67ce5bc7e71d679e8999b6
Summary:
Fix the tail interval delay; it wasn't triggering.
Took the opportunity to restructure the code as a loop as well, which simplified it a bit.
Reviewed By: markbt
Differential Revision: D20247077
fbshipit-source-id: 1786ef1528a4b0493f5e454d28450d7198af8ad4
Summary:
Remove a failing integration test that was testing behavior we don't really
care about.
My changes in D20210708 made this test start failing. This integration test
was initially added to exercise the code I reverted in D20210708.
This test fails when EdenFS is invoked in the foreground and under sudo. If
you send SIGSTOP to the EdenFS process sudo happens to notice this and send
the same signal to itself too. This results in a state where the `sudo`
command is stopped and is never resumed so it never wakes up to reap its child
EdenFS process when EdenFS exits. The behavior I reverted in D20210708 caused
the edenfsctl CLI code to simply ignore the fact that EdenFS was stuck in a
zombie state, and proceed anyway. This allowed EdenFS to at least restart,
but it left old zombies stuck forever on the system.
This problem is arguably an issue with how sudo operates, and it's sort of
hard for us to work around. To solve the problem you need to send SIGCONT to
the sudo process, but since it is running with root privileges you don't
normally have permission to send a signal to it. It is understandable why
sudo behaves this way, since normally it is desirable for sudo to background
itself when the child is stopped.
In practice this isn't really ever a situation that we care much about
handling. Normal users shouldn't ever get into this situation (they don't run
EdenFS in the foreground, and they generally don't run it under sudo either).
Reviewed By: genevievehelsel
Differential Revision: D20268924
fbshipit-source-id: d61d0a10ee1e132f00dbd2e4dc135808b7c79345
Summary:
D18538145 introduced a transaction that spans the entire infinitepush
pull. This has a couple of unfortunate consequences:
1. hg pull --rebase now aborts the entire pull if the rebase hits a conflict,
since it's unable to commit the transaction.
2. If tree prefetching fails, it aborts the entire pull as well.
Tests seem to work fine if we scope down this lock.
Reviewed By: xavierd
Differential Revision: D20260480
fbshipit-source-id: d84228ababdb5572401645f74e78df035bf1461b
Summary: Those will be reused by nameset::DagSet.
Reviewed By: sfilipco
Differential Revision: D20242563
fbshipit-source-id: 944e9a04aeb15439256ecea64355b67e326e5c89
Summary:
This is useful for `assert_eq!(format!("{:?}", set), "...")` tests.
It will be eventually exposed to Python as `__repr__`, similar to Python's
smartsets.
Reviewed By: sfilipco
Differential Revision: D20242562
fbshipit-source-id: 5373bb180db7cafebf273ace7cf2cb80fbfb8038
Summary:
In the Python world all smartsets have some kind of "debug" information. Let's
do something similar in Rust.
Related code is updated so the test is more readable.
Reviewed By: sfilipco
Differential Revision: D20242564
fbshipit-source-id: 7439c93d82d5d037c7167818f4e1125c5a1e513e
Summary:
Replace the methods used to get CPU and memory usage statistics:
- For memory: use `VmRSS` from `/proc/[pid]/status`: http://man7.org/linux/man-pages/man5/proc.5.html
- For CPU%: calculate what percentage of CPU time the process occupied over the sampling period, using `getrusage()`: http://man7.org/linux/man-pages/man2/getrusage.2.html
- Implemented like sigar does: https://our.intern.facebook.com/intern/diffusion/FBS/browse/master/third-party/sigar/src/sigar.c?commit=4f945812675131ea64cb3d143350b1414f34a351&lines=111-169
- Formula:
- CPU% = `process used time` during the period / `time period` * 100
- `time period` = current query timestamp - last query timestamp
- `process used time` = current `process total time` - last query `process total time`
- `process total time` = CPU time used in user mode + CPU time used in system mode (from `ru_utime` and `ru_stime`)
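The formula above can be sketched as a pure function (all names and sample values here are hypothetical; the real implementation reads `getrusage()`):

```rust
// A minimal sketch of the CPU% formula above; timestamps and CPU times are in
// seconds, and all values are hypothetical sample data.
fn cpu_percent(
    prev_total_cpu: f64, // last query: ru_utime + ru_stime
    curr_total_cpu: f64, // current query: ru_utime + ru_stime
    prev_ts: f64,        // last query timestamp
    curr_ts: f64,        // current query timestamp
) -> f64 {
    let used = curr_total_cpu - prev_total_cpu; // process used time
    let period = curr_ts - prev_ts;             // time period
    used / period * 100.0
}

fn main() {
    // 0.5s of CPU time used over a 2s window => 25% CPU.
    assert_eq!(cpu_percent(1.0, 1.5, 10.0, 12.0), 25.0);
    println!("ok");
}
```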
Remove the `fbzmq::ResourceMonitor` and `sigar`:
- Change and rename the UT
- `ResourceMonitorTest.cpp` -> `SystemMetricsTest.cpp`
- `ResourceMonitor` -> `SystemMetricsTest` in `openr/tests/OpenrSystemTest.cpp`
- Remove `ResourceMonitor` code and dependency for `Watchdog` and `ZmqMonitor`
- Remove `sigar` dependency used in building
Reviewed By: saifhhasan
Differential Revision: D20049944
fbshipit-source-id: 00b90c8558dc5f0fb18cc31a09b9666a47b096fe
Summary:
Previously, `flush()` would skip writing the file if there were only metadata
changes. Fix it by detecting metadata changes.
This can potentially fix an issue that certain blackbox indexes are empty,
lagging and require scanning the whole log again and again. In that case,
the index itself is not changed (the root radix entry is not changed), but
only the metadata tracking how many bytes in Log the index covered
changed.
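A minimal sketch of the fix, with hypothetical field names: the flush decision must compare metadata as well as data.

```rust
// Hypothetical index state: the root radix entry (data) may be unchanged while
// the metadata tracking how many Log bytes are covered has advanced.
struct IndexState {
    root_entry: Vec<u8>,
    meta_bytes_covered: u64,
    flushed_root_entry: Vec<u8>,
    flushed_meta_bytes: u64,
}

impl IndexState {
    // The buggy version compared only root_entry; the fix also compares metadata.
    fn needs_flush(&self) -> bool {
        self.root_entry != self.flushed_root_entry
            || self.meta_bytes_covered != self.flushed_meta_bytes
    }
}

fn main() {
    let state = IndexState {
        root_entry: vec![1, 2, 3],
        meta_bytes_covered: 200,           // metadata advanced...
        flushed_root_entry: vec![1, 2, 3], // ...but the data did not change
        flushed_meta_bytes: 100,
    };
    assert!(state.needs_flush()); // a metadata-only change still flushes
    println!("ok");
}
```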
Reviewed By: sfilipco
Differential Revision: D20264627
fbshipit-source-id: 7ee48454a92b5786b847d8b1d738cc38183f7a32
Summary:
On filesystems without symlinks, the test fails because ln prints errors.
Fix the test by using `#if symlink`.
Reviewed By: DurhamG
Differential Revision: D20260904
fbshipit-source-id: 1d0ffcc7e95d2718087fb01297369ca276b59013
Summary: The `rust-crypto` crate has not been maintained; replace it with the `sha-1` crate, since SHA-1 is the only algorithm used in this library.
Reviewed By: dtolnay
Differential Revision: D20236029
fbshipit-source-id: 9c4ff25f393b099ec9570a7badbe4b378fbd98af
Summary:
Previously the warm bookmark cache tried to derive all bookmarks on startup. This slows down startup, and in some cases it might prevent the scs server from starting up at all.
Let's change how the warm bookmark cache initializes bookmarks: instead of trying to derive all of them, move underived bookmarks back in history.
Reviewed By: krallin
Differential Revision: D20195211
fbshipit-source-id: 5cb5d8599d3035973175d3063186a7c01536889a
Summary:
We didn't use DelayBlob at all; however, we use DelayedBlobstore in the benchmark
lib. DelayedBlobstore seems to have more useful options, so let's remove
DelayBlob and use DelayedBlobstore instead.
Reviewed By: farnz
Differential Revision: D20245865
fbshipit-source-id: bd694a0e178367014adc2776185450693f87475d
Summary:
Context: https://fb.workplace.com/groups/rust.language/permalink/3338940432821215/
In targets that depend on both 0.1 and 0.2 tokio, this codemod renames the 0.1 dependency to be exposed as tokio_old::. This is in preparation for flipping the 0.2 dependencies from tokio_preview:: to plain tokio::.
This is the tokio version of what D20168958 did for futures.
Codemod performed by:
```
rg \
--files-with-matches \
--type-add buck:TARGETS \
--type buck \
--glob '!/experimental' \
--regexp '(_|\b)rust(_|\b)' \
| sed 's,TARGETS$,:,' \
| xargs \
-x \
buck query "labels(srcs,
rdeps(%Ss, fbsource//third-party/rust:tokio-old, 1)
intersect
rdeps(%Ss, //common/rust/renamed:tokio-preview, 1)
)" \
| xargs sed -i 's,\btokio::,tokio_old::,'
```
Reviewed By: k21
Differential Revision: D20235404
fbshipit-source-id: cfb2689a584ad0d73f16d98d8587fb9c44661465
Summary:
The `lines` renderer doesn't work if the output encoding doesn't support the
curved line drawing characters. In this case we should fall back to
`lines-square`.
Rename `lines` to `lines-curved`, and change `lines` to pick the best renderer
to use based on what is possible with the current output encoding.
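The fallback choice can be sketched as follows (the renderer names come from the summary above; the encoding probe is a hypothetical stand-in for the real output-encoding check):

```rust
// Hypothetical sketch: pick the best line renderer based on whether the output
// encoding can represent the curved drawing characters.
fn pick_renderer(encoding_supports: impl Fn(char) -> bool) -> &'static str {
    // '╭' is one of the curved box-drawing characters the curved renderer needs.
    if encoding_supports('╭') {
        "lines-curved"
    } else {
        "lines-square"
    }
}

fn main() {
    let utf8 = |_c: char| true;         // UTF-8 can encode everything
    let ascii = |c: char| c.is_ascii(); // plain ASCII cannot
    assert_eq!(pick_renderer(utf8), "lines-curved");
    assert_eq!(pick_renderer(ascii), "lines-square");
    println!("ok");
}
```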
Reviewed By: quark-zju
Differential Revision: D20248022
fbshipit-source-id: dfaf359426528a9cb515fb3e1d366fbfb15162ff
Summary:
The pager may accept a different encoding than either the process encoding or
the output encoding.
For example, on Windows:
* the process encoding may be cp1252 (which is used for all `...A` system calls).
* the output encoding may be cp437 (which is used for writing directly to the console).
* the pager encoding may be utf-8 (which is written to the console using more modern system calls).
To fix this, add a `pager.encoding` config option, which, when set, overrides
the output encoding when writing to the pager.
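The override logic amounts to a simple fallback, sketched here with hypothetical names (the real config plumbing lives in the hg config code):

```rust
// Hypothetical config lookup: pager.encoding, when set, overrides the output
// encoding for pager writes.
fn pager_write_encoding<'a>(
    pager_encoding: Option<&'a str>, // pager.encoding config value, if set
    output_encoding: &'a str,        // default output encoding
) -> &'a str {
    pager_encoding.unwrap_or(output_encoding)
}

fn main() {
    assert_eq!(pager_write_encoding(Some("utf-8"), "cp437"), "utf-8");
    assert_eq!(pager_write_encoding(None, "cp437"), "cp437");
    println!("ok");
}
```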
Reviewed By: quark-zju
Differential Revision: D20247650
fbshipit-source-id: 1e4d1246c95f2102763d879f9783d02acc193a73
Summary:
Update `edenfsctl restart` so that it does not treat zombie processes as
stopped. This effectively reverts the changes added in D9980225.
This behavior was causing `edenfsctl restart` to spuriously fail, as it would
try to start the new EdenFS process too early, before the kernel had fully
cleaned up the old EdenFS process and released all of its locks. In
particular, the new process would often fail to acquire the RocksDB lock.
Older versions of EdenFS did not always explicitly release this lock during
shutdown, and so it would end up being cleaned up by the kernel after the
process had exited.
I wrote a simple test program to verify this behavior, where one process
would acquire a file lock with an `F_SETLK` `fcntl()` call and then exit
without releasing it. Another process polled for the first process to enter
the zombie state and then tried to acquire the lock. It would very reliably receive
`EAGAIN` failures if it attempted to acquire the lock immediately after it saw
the first process enter a zombie state.
In practice we shouldn't normally run into issues with EdenFS being stuck in a
zombie state. The situation described in D9980225 sounds like a corner case
encountered during development while running EdenFS under sudo.
Reviewed By: chadaustin
Differential Revision: D20210708
fbshipit-source-id: cd62b47405d7f3e53bd4a1fb4ff2964596ca3536
Summary:
Update some of the systemd tests that were using
`eden.cli.daemon.wait_for_process_exit()` and were relying on it to return for
zombie processes that had not been reaped. These tests would spawn a subprocess
and then wait for it using `wait_for_process_exit()` instead of simply
using `subprocess.Popen.wait()`.
The `wait_for_process_exit()` function is only intended to be used for
non-child processes. For immediate child processes it is always better to
simply use `wait()`.
This refactors the code so that it uses `subprocess.Popen.wait()` where
appropriate. This is needed to make these tests work even after D20210708
lands.
Reviewed By: wez
Differential Revision: D20242891
fbshipit-source-id: 0afd3d3d7ee1d733099ea74f7b9b19cbe48b22d4
Summary: Clippy was failing; this diff should hopefully fix it.
Reviewed By: krallin
Differential Revision: D20250585
fbshipit-source-id: 6a9becdb84ec293659433fa9078e456d40210b6c
Summary:
Using `if cfg!` instead of `#[cfg]` allows for the compiler to understand
that the arguments aren't unused, and silence the warnings.
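A short illustration of why this works (the function and its parameter are hypothetical examples, not code from this diff):

```rust
// Sketch: with `#[cfg(windows)]`, the non-Windows build never sees the code
// that uses the parameter and warns that it is unused; with `if cfg!`, both
// branches are type-checked and the parameter is always "used".
fn line_ending(use_crlf_on_windows: bool) -> &'static str {
    if cfg!(windows) && use_crlf_on_windows {
        "\r\n"
    } else {
        "\n"
    }
}

fn main() {
    // On non-Windows builds cfg!(windows) is false, so this returns "\n";
    // either way, the result is one of the two endings.
    let ending = line_ending(true);
    assert!(ending == "\r\n" || ending == "\n");
    println!("ok");
}
```

The dead branch is still compiled and then optimized away, so the compiler never considers the argument unused.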
Reviewed By: quark-zju
Differential Revision: D20242280
fbshipit-source-id: 332dfe17b3a80a1096d15c91c9fb6644bd10e0cd
Summary:
Compiling it on Windows produced a bunch of warnings due to
`hgrc_configset_load_path` not being compiled there. Fixed it so it no longer
depends on Unix-specific imports.
Reviewed By: quark-zju
Differential Revision: D20241102
fbshipit-source-id: 3002f961191fbb9bc51aa9ac1154d6d50bd7fe23
Summary:
The `.into_iter()` for this object is being deprecated and won't compile in
the future; fix it now.
Reviewed By: quark-zju
Differential Revision: D20241103
fbshipit-source-id: fdee463ed81cd07a65f3cc4c70a96c88928b3b87
Summary:
While compiling on Windows, this file issued a bunch of warnings; use `if
cfg!` instead of `#[cfg]` to silence them. The behavior is the same, but the
former allows the compiler to recognize that some code is not unused.
Reviewed By: quark-zju
Differential Revision: D20241104
fbshipit-source-id: 2cd7f171c7a2f7220cc73bea9be3359260de19b2
Summary:
This removes the Extend implementation for FileBytes, which was incorrect (it
discarded existing data!). I had introduced this as a backwards compatibility
shim when doing the Bytes 0.4 to Bytes 0.5 migration :/
We don't really need this shim, considering:
- The only place that uses this and really matters is the remotefilelog crate,
where we have a content id, and where we should use `filestore::fetch_concat`
instead.
- The other places are tests (or close to abandonware...), which can do their
own folding.
Longer term, I'd like to remove the whole `Content` stream in hg entries, so
those callsites can use the filestore methods, which a) have test coverage
(unlike ad-hoc folds, which often don't), and b) are more efficient, since
they know how large the destination buffer needs to be ahead of time and don't
need to re-allocate.
To make sure this fixes the bug, I also introduced tests for the remotefilelog
crate. As expected, the chunked variant fails without this fix.
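A minimal sketch of what a correct concatenating fold looks like (hypothetical helper, not the actual `filestore::fetch_concat`): the key property is that each chunk is appended, never replacing existing data, and the buffer is sized up front.

```rust
// Sketch of the bug class: a correct extend appends chunks, preserving
// existing data, whereas the buggy impl discarded what was already there.
fn concat_chunks(chunks: &[&[u8]]) -> Vec<u8> {
    // Knowing the total size up front avoids re-allocation, which is the
    // efficiency point made above about the filestore methods.
    let total: usize = chunks.iter().map(|c| c.len()).sum();
    let mut buf = Vec::with_capacity(total);
    for chunk in chunks {
        buf.extend_from_slice(chunk); // appends; never discards existing data
    }
    buf
}

fn main() {
    assert_eq!(concat_chunks(&[&b"foo"[..], &b"bar"[..]]), b"foobar".to_vec());
    println!("ok");
}
```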
Reviewed By: mitrandir77
Differential Revision: D20248978
fbshipit-source-id: 1b554d3e595eb867b6b6cf4204d31f27dd90a111