Summary:
Let's log the current max staleness in the warm bookmark cache. The staleness is reported by bookmark updaters using the timestamp from the bookmark update log.
To do that I extended `live_updaters` to also include the state of each updater, which the bookmark coordinator collects; it extracts the staleness and logs it to ODS.
Reviewed By: krallin
Differential Revision: D20468423
fbshipit-source-id: 7f9aacc2ab5bc62c2aed123b8a58d9fc6d49c63c
Summary:
We are planning to expand our usage of warm bookmark cache beyond scs server.
In particular we'd like to use it in mononoke server. But before we do that let's
make warm bookmark cache a bit more reliable and predictable.
In particular, let's make sure that slow update of a single bookmark doesn't
block progress of all other bookmarks - for example, we don't want to block
updating master if we have problems with a single release bookmark. See more
discussion about it in D20161458. [1]
In order to do that we now update every bookmark separately - the bookmark updater
job spawns an updater for each bookmark and makes sure there's no more than a
single updater for a given bookmark.
A few caveats:
1) After this diff we no longer preserve the order of bookmark updates, i.e.
even if bookmark A was updated before bookmark B, the warm bookmark cache might
see the updates in a different order. We expect this not to matter much, with the
only caveat being the stable and master bookmarks - it'd be weird to have stable
be a descendant of master. This glitch should only last for a very brief period
of time, so hopefully it shouldn't matter in practice.
2) Current implementation doesn't stop single bookmark updaters if the main
updater was cancelled. TBH, I don't think it's necessary.
In the next diff I'll add ODS counters to track the delay between the warm bookmark
cache and the actual values of the bookmarks.
[1] Previously we wanted to update bookmarks in bookmark update log order, but
we decided it's not a great idea. See D20161458 for more details.
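A minimal sketch of the one-updater-per-bookmark bookkeeping described above (names like `UpdaterRegistry` are illustrative, not Mononoke's actual code):

```rust
use std::collections::HashSet;

/// Bookkeeping that guarantees at most one in-flight updater per bookmark.
/// (Hypothetical sketch; the real updater job spawns asynchronous tasks.)
struct UpdaterRegistry {
    in_flight: HashSet<String>,
}

impl UpdaterRegistry {
    fn new() -> Self {
        UpdaterRegistry {
            in_flight: HashSet::new(),
        }
    }

    /// Returns true if the caller should spawn an updater for `bookmark`;
    /// false means one is already running for that bookmark.
    fn try_begin(&mut self, bookmark: &str) -> bool {
        self.in_flight.insert(bookmark.to_string())
    }

    /// Called when an updater for `bookmark` finishes, allowing a new one.
    fn finish(&mut self, bookmark: &str) {
        self.in_flight.remove(bookmark);
    }
}
```

With this bookkeeping, a slow updater for one bookmark only blocks re-spawning an updater for that same bookmark; all other bookmarks proceed independently.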
Reviewed By: krallin
Differential Revision: D20335195
fbshipit-source-id: 0b1242faa1a9ef286929132c2350c299a2594467
Summary: Pushing compat one level down in main in upload_globalrevs
Reviewed By: krallin
Differential Revision: D20469436
fbshipit-source-id: 50abf7bb401f4c1d534080e8dc6e341b06e936a9
Summary:
We see some hgbuild jobs failing because the order of errors is
different from what I see on my devserver. Let's sort them to make them stable.
This is presumably because we're operating in the order returned by readdir,
which is not guaranteed to be sorted.
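A minimal sketch of the fix, assuming the errors come from files on disk (function name hypothetical):

```python
import os


def list_errors(path):
    """Collect error files in a stable order.

    os.listdir (like readdir) makes no ordering guarantee, so sort the
    names before emitting them to keep output deterministic across hosts.
    """
    return sorted(os.listdir(path))
```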
Reviewed By: xavierd
Differential Revision: D20500566
fbshipit-source-id: bd4d3db1b77cd4bd7259f9bcc10bc65649fae7c6
Summary: We don't really need the Rust workers for this, as we do not expect thousands of files to be changed during an in-memory merge.
Reviewed By: DurhamG
Differential Revision: D20495141
fbshipit-source-id: e72f8c4b01deee46ee72364dcd6716692c4103ab
Summary: Basic test to validate that updating files works as expected.
Reviewed By: DurhamG
Differential Revision: D20450123
fbshipit-source-id: 3ce09e1f72fe00052b86eec07668f3aa45824725
Summary:
python2 doesn't exist on CentOS 8, and rather than guess
at which interpreter to use, run using the built-in interpreter
from our Rust binary, which is accessible via `sys.executable`
under the `debugpython` subcommand.
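A minimal illustration of the `sys.executable` approach (helper name hypothetical):

```python
import subprocess
import sys


def run_python_script(script_path, *args):
    """Run a helper script under the same interpreter we are running in,
    instead of guessing at a name like 'python2' that may not exist on the
    system. sys.executable always points at the current interpreter.
    """
    return subprocess.run(
        [sys.executable, script_path, *args],
        capture_output=True,
        text=True,
    )
```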
Reviewed By: quark-zju
Differential Revision: D20379035
fbshipit-source-id: 57863427241ef01e5c0e8debbb5baf056fd41d65
Summary: Add a unit test for the new `trees_under_path` method.
Reviewed By: StanislavGlebik
Differential Revision: D20460242
fbshipit-source-id: 92ec983bc8307fbfa7862ba17b603b4150c42709
Summary:
Update a test that was attempting to call `os.fdopen()` in binary mode with
line buffering. Line buffering is only valid on files opened in text mode.
Python 3.8 now emits a warning about this invalid call, which causes the test
to fail. Switch the test to use unbuffered mode.
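A minimal sketch of the corrected call (helper name hypothetical):

```python
import os


def open_binary_unbuffered(fd):
    """Open a file descriptor in binary mode without buffering.

    buffering=1 (line buffering) is only meaningful in text mode; Python 3.8
    warns when it is combined with 'b'. Use buffering=0 (unbuffered) instead.
    """
    return os.fdopen(fd, "wb", 0)
```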
Reviewed By: pixelb
Differential Revision: D20484844
fbshipit-source-id: 3bedfa3f0fb7926ad3ab3b6ea85716d0e1b603c3
Summary:
Remove the custom `GzipFileWithTime` class from `mercurial/archival.py`
This code was added in 2007. Presumably back then Python's standard
`gzip.GzipFile` class did not support the `mtime` argument. However, this
argument is present in Python 2.7+, and we don't care about older versions.
The custom `GzipFileWithTime` class breaks in Python 3.8 since Python 3.8
added an extra `compresslevel` argument to the internal `_write_gzip_header()`
method.
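A minimal sketch of what the standard class already supports (helper name hypothetical):

```python
import gzip
import io


def gzip_bytes(data, mtime):
    """Compress data with a fixed mtime in the gzip header.

    gzip.GzipFile has accepted an mtime argument since Python 2.7, so a
    custom subclass to control the header timestamp is no longer needed.
    """
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb", mtime=mtime) as gz:
        gz.write(data)
    return buf.getvalue()
```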
Reviewed By: pixelb
Differential Revision: D20484845
fbshipit-source-id: 4e381799d8537c97cd1993273c8efd02743531df
Summary: Let's enable it for tests. We'll slow roll it in production.
Reviewed By: quark-zju
Differential Revision: D19543790
fbshipit-source-id: be7d18dd8ffe51615a27c39ebf4247ec405b4097
Summary:
When restarting EdenFS processes I've seen a small number of cases where it
took longer than 5 seconds for the kernel to clean up the old EdenFS process
after sending SIGKILL to it on heavily loaded systems.
If we hit this timeout during `edenfsctl restart` then the command exits
unsuccessfully and does not start a new process. It's therefore desirable to
wait longer if we can.
Reviewed By: wez
Differential Revision: D20484393
fbshipit-source-id: 98a8e5703e7b58c6d809875583347ddb302b9e02
Summary:
s/CURLE_SSL_CONNECT_ERROR/CURLE_RECV_ERROR/.
Note CURLE_SSL_CONNECT_ERROR is not explicitly checked in
code in the eden hierarchy (or anywhere else significant in fbcode).
Reviewed By: meyering
Differential Revision: D20490844
fbshipit-source-id: be4a66f49eb4a9eabaf73785f9a203f0aa6905a4
Summary:
The mutation store stores entries with a floating-point timestamp. This
pattern was copied from obsmarkers.
However, Mercurial uses integer timestamps in the commit metadata (the
parser supports floats for historical reasons, but only stores integer
timestamps). Mononoke also uses integer timestamps in its `DateTime`
type.
To keep things simple, switch to using integer timestamps for mutation
entries. Existing entries with floating point timestamps are truncated.
Add a new entry format version that encodes the timestamp as an integer.
For now, continue to generate the old version so that old clients can
read entries created by new clients.
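A minimal sketch of the truncation applied to old floating-point entries (function name hypothetical):

```python
def normalize_mutation_time(timestamp):
    """Truncate a possibly floating-point timestamp (as stored by old-format
    mutation entries) to the integer form used by the new entry format."""
    return int(timestamp)
```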
Reviewed By: quark-zju
Differential Revision: D20444366
fbshipit-source-id: 4d6d9851aacb314abea19b87c9d0130c47fdf512
Summary:
Tracking the origin of mutation entries did not prove useful, and just creates
unnecessary overhead. Remove the tracking and repurpose the field as a
version field.
Reviewed By: quark-zju
Differential Revision: D20444365
fbshipit-source-id: 65ff11ee8cfe77d5e67a83d03a510541d58ef69b
Summary: This diff bumps the open call from FUSE to High priority (which is higher than any other blob open request at the moment). This has shown improvements in the user experience of EdenFS when it is importing many other things from other channels (Thrift, etc.)
Reviewed By: chadaustin
Differential Revision: D20287389
fbshipit-source-id: 319bc44ef8be5c904d7cf0db7cc2f8be28b4760a
Summary:
Replacing `HgBackingStore` with `HgQueuedBackingStore`.
This does not really improve anything by itself, except that every caller of `getBlob`/`getTree` on this store now generates a `HgImportRequest` and gets back a `SemiFuture` from the Promise associated with the request.
Reviewed By: chadaustin
Differential Revision: D19184822
fbshipit-source-id: a8aef6b0d7392a6c407d311e8e1982754e736e9f
Summary: This diff implements `HgQueuedBackingStore` that uses `HgImportRequestQueue` to provide SCM data importing with priorities. This can allow us to customize how we import things and batch importing.
Reviewed By: chadaustin
Differential Revision: D19184826
fbshipit-source-id: da579b5bbff0b1449e9689e2c0159d4a3a475a83
Summary: This diff adds `HgImportRequestQueue`, which is responsible for managing incoming requests by their priorities. This queue is later used in the `HgQueuedBackingStore` to prioritize work for the workers of the backing store.
Reviewed By: chadaustin
Differential Revision: D20197069
fbshipit-source-id: 246bbc086054a8021226e9ba6ab26d3bf0cfb7a3
Summary:
This class is used to represent an import request that will be used later in the queue implementation.
When EdenFS needs to import a blob, it creates an instance of this request and sends it to the worker, then waits for the promise associated with the request.
In the future, we should be able to change the owned `Promise` into a non-owned `SemiFuture` tied to a `Promise` stored elsewhere, for merging repetitive import requests.
Reviewed By: chadaustin
Differential Revision: D19184824
fbshipit-source-id: 823aabbed1156acf6306b7aefc76580a540d310d
Summary: This diff adds the `Priority` type introduced in the previous diff to the `BackingStore` interface, with the default value set to `Priority::Normal`.
Reviewed By: chadaustin
Differential Revision: D20197071
fbshipit-source-id: a92f1b49bb82e3478042e5e3b79b047d834755ea
Summary:
This diff introduces a `Priority` type for EdenFS. This type is used to pass along the priority of a request.
The priority class itself contains two parts, `kind` and `offset`. `kind` uses the first 4 bytes and the remaining 12 bytes are used to store the offset. The idea is that we can roughly assign a priority kind to most requests, and the offset is used to dynamically tweak the priority of particular requests. For example, when we see a process generating millions of requests, we can use the offset to express "normal priority, but less important than other processes' normal-priority requests".
Reviewed By: chadaustin
Differential Revision: D20287652
fbshipit-source-id: 9a849fb6cc6ba5e443fea978d5b4dc3ab8ca906a
Summary:
D20444137 added a new use of `lfs_threshold`, and D20441264 removed this
variable. These two diffs landed close to the same time without ever being
tested with both diffs together.
Reviewed By: StanislavGlebik
Differential Revision: D20484843
fbshipit-source-id: fd0f0837142cdb641892005a64fd14272da7d2b7
Summary:
We want to delete all the non-treestate dirstate implementations. Let's
start throwing an exception if treestate is not enabled. We temporarily have a
bypass in case we break an important use case in the process.
This also sets new repos to be created in treestate mode by default, by adding
treestate to newreporequirements.
This was landed once as D19204621 but was backed out because eden backing repos
were using the old formats and hadn't been upgraded. We fixed that, and now the
data shows ~10 people still using repos in this condition
(https://fburl.com/scuba/dev_command_timers/zxb5hsg2). Some of them are broken
repos, some are ancient eden repos that a simple eden rm and eden clone should
fix, and some are simply old non-eden repos in which no one has run commands in a while.
Reviewed By: xavierd
Differential Revision: D20472234
fbshipit-source-id: 509b4f22b6ac4741b205ef69decfb26e56aebaf8
Summary: Using ptr.add is shorter and preferred to ptr.offset.
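A minimal illustration of the preferred spelling (the example function is hypothetical):

```rust
// `ptr.add(n)` is the preferred spelling of `ptr.offset(n as isize)` for
// forward offsets: it is shorter and takes a `usize` directly.
fn third_element(slice: &[u32]) -> u32 {
    assert!(slice.len() > 2);
    let ptr = slice.as_ptr();
    // Safety: bounds checked above, so ptr.add(2) stays within the slice.
    unsafe { *ptr.add(2) }
}
```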
Reviewed By: quark-zju
Differential Revision: D20452752
fbshipit-source-id: 1dc2fdbc392267d2d690673c10dcc161ecd00dfa
Summary:
These warnings are fairly trivial: Clippy recommends using a single-quoted char
for single-character searches instead of a double-quoted str.
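A hypothetical example of the lint's recommendation:

```rust
// Clippy's single_char_pattern lint: searching for one character with a
// char pattern ('=') avoids the substring machinery used for &str patterns.
fn split_key_value(line: &str) -> Option<(&str, &str)> {
    // Preferred: '=' (char) rather than "=" (str) for a single character.
    let idx = line.find('=')?;
    Some((&line[..idx], &line[idx + 1..]))
}
```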
Reviewed By: quark-zju
Differential Revision: D20452408
fbshipit-source-id: b2951e133e57633a8e766536e22969fa9ac0ecee
Summary:
Clippy had 3 sources of warnings in this crate:
- from_str method not in impl FromStr. We still have 2 of them in path.rs, but
this is documented as not supported by the FromStr trait due to returning a
reference. Maybe we can find a different name?
- Use of mem::transmute while casts are sufficient. I find the cast to be
ugly, but they are simply safer as the compiler can do some type checking on
them.
- Unnecessary lifetime parameters
Reviewed By: quark-zju
Differential Revision: D20452257
fbshipit-source-id: 94abd8d8cd76ff7af5e0bbfc97c1e106cdd142b0
Summary:
Clippy complains about 3 things:
- Using raw pointers in a public function that is not declared as unsafe. This
happens for the C-exported ones; it feels like a spurious warning, so I haven't
changed it.
- Using .map(...).unwrap_or(<default value constructed>). The recommendation
is to use .unwrap_or_default().
- Single match instead of if let, the latter makes code much shorter.
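Hypothetical examples of the last two recommendations:

```rust
// .map(...).unwrap_or(<default value>) -> .map(...).unwrap_or_default()
fn name_len(name: Option<&str>) -> usize {
    name.map(|n| n.len()).unwrap_or_default()
}

// A single-arm match reads more clearly as `if let`.
fn first_or_zero(values: &[i32]) -> i32 {
    if let Some(&first) = values.first() {
        first
    } else {
        0
    }
}
```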
Reviewed By: quark-zju
Differential Revision: D20452751
fbshipit-source-id: 8eeff7581c119c651ca41d8117f1f70f15774833
Summary:
Right now the module has one implementation, IndexedLogStore. The name could
be more specific in the context of the crate.
The goal will be to add a trait for storage requirements of IdDag and
make IndexedLogStorage one implementation of that trait.
Reviewed By: quark-zju
Differential Revision: D20446042
fbshipit-source-id: 7576e1cc4ad757c1a2c00322936cc884838ff710
Summary:
Update the `getpack` code to calculate how many files (and their total
size) would be served over LFS.
NOTE: The columns have `Possible` in their names as we might not have LFS
enabled, in which case we aren't actually fetching this many blobs from an LFS
server.
Reviewed By: farnz
Differential Revision: D20444137
fbshipit-source-id: 85506d8c468cfdc470684dd216567f1848c43d08
Summary:
In dev mode, the glob benchmark failed inside of
folly::Range::operator[] because asserting null termination
technically violates the bounds check.
Reviewed By: simpkins
Differential Revision: D20268416
fbshipit-source-id: ee9b16a6eb9882e850631aa9d83fffe7b6fb67c3