Summary: `%s` with a rev number is not accepted in the current codebase. Use `%d` instead.
Reviewed By: ikostia
Differential Revision: D27873899
fbshipit-source-id: b34eb0b80f0789c9e06af366bfdaa884c5c69357
Summary: `%s` with `revid` is not accepted in the current codebase.
Reviewed By: ikostia
Differential Revision: D27873898
fbshipit-source-id: e3790855892d3b07e1e5ea6bd92a14738bf6c100
Summary:
We didn't log it to the perf counters log, which makes it hard to aggregate,
show distributions, etc.
Let's start doing that.
Reviewed By: krallin
Differential Revision: D27856968
fbshipit-source-id: 82fbba70154ee011073f3122256bd296bbb938ae
Summary: It's more efficient to bulk load the mappings for a chunk than to do the queries one by one
Differential Revision: D27801830
fbshipit-source-id: 9c38ddfb1c1d827fc3028cd09f9ad51e3cbee5dc
Summary: Add an accessor so that we keep a reverse mapping of the WalkState::bcs_to_hg member as a cache of bonsai to hg mappings and also populate it on derivations.
Differential Revision: D27800533
fbshipit-source-id: f9b1c279a78ce3791013c3c83a32251fdc3ad77f
Summary: Add an accessor so that we can use the WalkState::hg_to_bcs member as a cache of hg to bonsai mappings
Reviewed By: farnz
Differential Revision: D27797638
fbshipit-source-id: 44322e93849ea78b255b2e3cb05feb8db6b4c7a7
Summary: This diff makes treeoverlay the default overlay type for Windows users.
Reviewed By: kmancini
Differential Revision: D27247658
fbshipit-source-id: 866eafc794eff1c262eab3061f14eb597bea0a66
Summary: This diff allows EdenFS to create tree overlay based on checkout configuration.
Reviewed By: kmancini
Differential Revision: D27242580
fbshipit-source-id: d0ebe33017c16517c117c1886f62bc9c6fe9f466
Summary:
`debugresetheads` is expected to remove all non-essential heads. That
includes bookmarks.
Reviewed By: kulshrax
Differential Revision: D27861548
fbshipit-source-id: 045976a5a9e27e7eee7ee48448c44552da439983
Summary:
Now that lastCheckoutTime is a single uint64_t, we no longer need a lock to
protect it; simple atomics are sufficient. Since reading an atomic usually
doesn't require any special atomic instruction, this will save a handful of
atomic operations when loading an inode, where the last checkout time is read.
Reviewed By: chadaustin
Differential Revision: D27860653
fbshipit-source-id: 464e950c949ca243664d213da99d96ff5d0fcbb8
Summary:
The lastCheckoutTime is mostly used to initialize the timestamps of newly
loaded inodes, and since these store an EdenTimestamp, we incur a conversion for
every inode load. Instead of doing this conversion, we can simply make it an
EdenTimestamp directly.
Similarly, the result of the getNow function was always converted to an
EdenTimestamp (sometimes more than once); we can also make it return an
EdenTimestamp directly.
Reviewed By: chadaustin
Differential Revision: D27860652
fbshipit-source-id: 9ea8fe9a312e6c3d8667b93130bb32a46ab92961
Summary:
Some test runners don't properly redirect stdout/stderr of nested processes, or
even direct writes to file descriptors. On these, debugging a test failure is
almost impossible for EdenFS, as we rely on the test output being interleaved
with the EdenFS logs to understand what the daemon is doing.
To solve this, we can simply create a thread that redirects the output of
EdenFS to sys.std{out,err}.
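A minimal sketch of such a redirect thread (Python, with hypothetical names; the real wiring in the test harness may differ):

```python
import sys
import threading


def start_output_redirector(pipe, sink=None):
    """Copy everything written to `pipe` onto `sink` (sys.stdout by default).

    Running this in a daemon thread lets the daemon's log lines show up in
    the test runner's captured output, interleaved with the test's own output.
    """
    sink = sink if sink is not None else sys.stdout

    def pump():
        # readline returns "" at EOF, which terminates the iterator.
        for line in iter(pipe.readline, ""):
            sink.write(line)
            sink.flush()

    t = threading.Thread(target=pump, daemon=True)
    t.start()
    return t
```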
Reviewed By: kmancini
Differential Revision: D27570966
fbshipit-source-id: 6a8d5229d8d5d141e6ab423f7658744b42af46e3
Summary: The Python `[auth]` matching code does not take cert validity into account when performing certificate matching, whereas the Rust version of the code does. In practice, the existing call sites for the Rust code disable match-time validation, and instead validate the certificate at time-of-use. This diff makes the Rust code's behavior match Python so we can remove the latter entirely.
Reviewed By: DurhamG
Differential Revision: D27837343
fbshipit-source-id: 0bfb5ebc3a36c8fa748cb289e2d8d1495ba8b296
Summary:
The svfs might have a different permission setup (ex. g+s, on ext4) that cannot
be applied to other vfs (ex. on edenfs). Do not inherit it. Instead, calculate
the proper mode from the vfs root (ex. `.hg`).
In practice, `createmode` is `None` in most of our repos. However,
`debugsegmentclone` might create svfs with `g+s` mode because indexedlog's
`mkdir -p` recursively chmods the parents it creates.
The original logic was added in 6590bef21 (FBS), 80e51429cb9a (HG) in 2008 with
little review comments: https://www.mercurial-scm.org/pipermail/mercurial-devel/2008-July/007269.html
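The "calculate proper mode from the vfs root" step could look roughly like this (a hypothetical helper, not the actual hg implementation):

```python
import os
import stat


def mode_from_root(root):
    """Derive the directory create mode from the vfs root (ex. `.hg`)
    instead of inheriting whatever mode the svfs happens to carry
    (which may include g+s picked up on ext4)."""
    # Keep rwx permission bits only; drop setuid/setgid/sticky bits.
    return stat.S_IMODE(os.stat(root).st_mode) & 0o777
```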
Reviewed By: DurhamG
Differential Revision: D27860581
fbshipit-source-id: 43f93080621aaef168d2cecae46fd6dfb879fa1c
Summary: Enables the `Serialize` and `Deserialize` impls on the `Uuid` type.
Reviewed By: dtolnay
Differential Revision: D27799952
fbshipit-source-id: 4b0e2f8ab4ede20a2113fc1dda42c2ba8b3d0b35
Summary:
Previously we were always caching bookmarks, but D27323369 (f902acfcd1) accidentally removed
that. Let's add it back.
Reviewed By: krallin
Differential Revision: D27859523
fbshipit-source-id: 8137c838fc56ecbbc64ba139d4a590dccd011bbc
Summary:
I am debugging why some people get vim to pop up during a merge conflict and some do not.
This also fixes a few lint issues.
Reviewed By: DurhamG
Differential Revision: D27684419
fbshipit-source-id: f636d71c18353a3816d1e442c05790cf4fd7b90b
Summary: I am removing this change because we've decided to store prepushrebase changeset id server-side.
Reviewed By: ikostia
Differential Revision: D27853518
fbshipit-source-id: 888897bc48c67477309b09af5f8c1825ce20cbca
Summary:
To prevent bonsai changeset divergence between prod and backup repo by copying
bonsais from prod repo directly during hg sync job push.
See more details about motivation in D27824210
Reviewed By: ikostia
Differential Revision: D27852341
fbshipit-source-id: 93e0b1891008858eb99d5e692e4dd60c2e23f446
Summary:
In the next diff it's going to be used to copy bonsais from the prod repo
during the hg sync job. In this diff I move this code to a common place so
that we can use it there.
Differential Revision: D27852340
fbshipit-source-id: 9744571430e15a9d7f1e569d9b6690bc45787bd2
Summary:
This is not used on its own, but in subsequent diffs I will add a use-case in
the megarepo configs crate.
When built in non-fbcode mode, this crate does not export anything. I chose this approach, as opposed to exporting no-op stubs, to force clients to pay attention and implement gating on their side too. This seems reasonable for a rather generic configo client.
Reviewed By: StanislavGlebik
Differential Revision: D27790753
fbshipit-source-id: d6dcec884ed7aa88abe5796ef0e58be8525893e2
Summary:
This diff does the following:
1. makes `megarepo_add_sync_target` into an async call
2. wraps all the async call responses into an additional struct to convey the
idea of pending requests
3. fixes code which imports/implements these interfaces
1 is needed because adding a new target will need to create an initial state of
the megarepo (so not just write some configs), and that is an expensive
operation.
2 is needed because we don't want to express the idea of "this request is not
yet processed" through a thrift exception. Instead, let's make `_poll` calls
return a struct with a single optional field. When present, that field will
contain the payload of the response to the underlying request. When absent, it
will indicate that the request is still pending.
Of course, this is a compatibility-breaking change, that's why I want to get it in as early as possible (while there are no real clients calling changed methods).
Reviewed By: StanislavGlebik
Differential Revision: D27823377
fbshipit-source-id: dc2a5ed327b38d1cacd575af9d7edf5768f9c377
Summary: Now we're on rustc 1.51 the fork is no longer needed.
Reviewed By: dtolnay
Differential Revision: D27827632
fbshipit-source-id: 131841590d3987d53f5f8afb5ebc205cd36937fb
Summary:
If you ask for a range starting at byte 3, and the chunks are of size 3, then
you don't actually need the first chunk, but right now we'll fetch it
and then extract zero bytes from it.
This is quite wasteful, and for LFS range fetches will be problematic since it
basically doubles the volume of stuff we need to keep in cache (we need both
chunks).
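The chunk selection described above boils down to simple arithmetic; a hypothetical helper (not the actual Mononoke code) might look like:

```python
def chunks_for_range(start, length, chunk_size):
    """Return (first_chunk, last_chunk, offset_in_first) for a byte range.

    A range starting exactly on a chunk boundary must not pull in the
    preceding chunk: bytes [3, 6) with chunk_size=3 needs only chunk 1.
    """
    if length <= 0:
        raise ValueError("empty range")
    first = start // chunk_size
    last = (start + length - 1) // chunk_size
    return first, last, start - first * chunk_size
```

With chunk size 3, a range starting at byte 3 maps to chunk 1 with offset 0, so chunk 0 is never fetched.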
Reviewed By: farnz
Differential Revision: D27824411
fbshipit-source-id: 7103f5b4d5bb78f023245f3e8a1bcb0c2f28faab
Summary:
Like it says in the title. We'd like to ask for the exact size that was
configured, because this way we can set the chunk size to the LFS threshold and
it avoids overlapping any file chunks server side.
Reviewed By: DurhamG
Differential Revision: D27824418
fbshipit-source-id: 43f40eb87080ec58e813ba1f1dda5b6a5e9f98ee
Summary:
While soft mount are nice as they allow the server (edenfs) to die and the
client applications to not end up in D state, this also force a maximum
(non-configuerable) 60s timeout for all IOs, after which application receive a
ETIMEDOUT. Thus, we need to not make the mount hard, thankfully, since the
mount is INTR, applications should not stay in D state if EdenFS dies.
Reviewed By: genevievehelsel
Differential Revision: D27808311
fbshipit-source-id: 17c30e88e5b236418064d8c309d85fdc6f1ca3e9
Summary:
Like it says in the title, this includes rendezvous into changesets. This is
our busiest connection by far, and it is often hitting the limits of our
connection pool: https://fburl.com/ods/kuq5x1vw
Reviewed By: markbt
Differential Revision: D27794574
fbshipit-source-id: e2574ce003f12f6c9ecafd0079fe5194cc63c24b
Summary:
I'd like to add RendezVous here (because this is our busiest connection:
https://fburl.com/ods/6d4a9qb5), and it'll be easier to do so if I just have
one code path to change instead of two.
Reviewed By: farnz
Differential Revision: D27794575
fbshipit-source-id: 350e3f8e3f3a74cb7c675cef1264c8083c516480
Summary:
After doing some local benchmarking (using MononokeApi instantiation as the
benchmark), one thing that's apparent is that we have quite a few parameters
here and that tuning them is likely to be a challenge.
One parameter in particular is the batch "objective", which controls how many
requests we want to see in the last batching interval before we choose to
batch (this is `rendezvous_dispatch_min_threshold`).
The problem with this is that there is no good real-world metric to base it
on. This is in contrast to the other parameters we have, which do
have some reasonable metric to compare to:
- rendezvous_dispatch_delay_ms: this is overhead we add to queries, so it
should be small & on the order of query execution latency (i.e. a few ms).
- rendezvous_dispatch_max_threshold: this controls how big our batches get, so
it should be on the order of what makes a SQL query too big (i.e. less than
a hundred records).
In contrast, we want to set `rendezvous_dispatch_min_threshold` such that
batching kicks in before we start using too many concurrent connections (which
is what query batching seeks to reduce), but the problem is that those two
numbers aren't directly connected. One clear problem, for example, is that if
our DB is in-region vs. out of-region, then for a given query execution time,
and a desired concurrency level before batching kicks in, we'd need different
values of `rendezvous_dispatch_min_threshold` (it would have to kick in faster
for the out-of-region workload).
So, this diff updates RendezVous to actually track the concurrent connection
count before we force batching. This is the actual metric we care about here,
and it has a pretty natural "real world" value we can look at to decide where
to set it (our connection pool, which is limited to 100 concurrent connections,
and our open connection baseline).
Note: I set this at 5 because that's more or less what servers look like
outside of spikes for Bonsai hg mapping, and of Changesets where I'm planning to
introduce this in the future:
- bonsai: https://fburl.com/ods/6d4a9qb5
- changesets: https://fburl.com/ods/kuq5x1vw (note: to make sense of this,
focus on just one server, otherwise the constant spikes we get sort of hide
the big picture).
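A toy model of the dispatch rule described above, with made-up names (the real RendezVous implementation is async Rust; this only models the "when do we start batching" decision):

```python
class ConnectionGatedBatcher:
    """Requests go out individually until the number of in-flight
    connections reaches a threshold; after that, new requests are
    queued and later flushed together as one batch."""

    def __init__(self, max_connections=5):
        self.max_connections = max_connections
        self.in_flight = 0
        self.queued = []

    def submit(self, request):
        if self.in_flight < self.max_connections:
            self.in_flight += 1
            return ("dispatched", request)
        self.queued.append(request)
        return ("queued", request)

    def complete_one(self):
        # A connection freed up: send the whole queue as a single batch.
        self.in_flight -= 1
        batch, self.queued = self.queued, []
        if batch:
            self.in_flight += 1
        return batch
```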
Reviewed By: farnz
Differential Revision: D27792603
fbshipit-source-id: 1a9189f6b50d48444b3373bd1cb14dc51b85a6d2
Summary:
Like it says in the title. There's no reason for this to be an ad-hoc "throw in
an arg" when everything else is done by adding arg types.
Reviewed By: HarveyHunt
Differential Revision: D27791333
fbshipit-source-id: 38e5a479800179b249ace5cc599340cb84eb53e2
Summary:
Like it says in the title. Let's remove ad-hoc "add an arg, then look up the arg"
mechanisms like this one.
Reviewed By: HarveyHunt
Differential Revision: D27791334
fbshipit-source-id: 257cea7763ab5130525ad739fe4ebdda4e8bfeb6
Summary:
This module is way too big and bundles many different functions:
- Our app builder
- Our matches object and environment initialization
- A bunch of utility functions
Let's split it up
Reviewed By: HarveyHunt
Differential Revision: D27790730
fbshipit-source-id: 8353b18a28fde5267d03ba0342c8cb98ad855e37
Summary:
This isn't useful anymore. Let's ask our MononokeMatches what is set up for
caching instead of parsing the args one more time.
Reviewed By: HarveyHunt
Differential Revision: D27767697
fbshipit-source-id: 9da83769284a4aed4a96cd0eb212f42dd01ade87
Summary:
There is a very frustrating operation that happens often when working on the
Mononoke code base:
- You want to add a flag
- You want to consume it in the repo somewhere
Unfortunately, when we need to do this, we end up having to thread this from a
million places and parse it out in every single main() we have.
This is a mess, and it results in every single Mononoke binary starting with
heaps of useless boilerplate:
```
let matches = app.get_matches();
let (caching, logger, mut runtime) = matches.init_mononoke(fb)?;
let config_store = args::init_config_store(fb, &logger, &matches)?;
let mysql_options = args::parse_mysql_options(&matches);
let blobstore_options = args::parse_blobstore_options(&matches)?;
let readonly_storage = args::parse_readonly_storage(&matches);
```
So, this diff updates us to just use MononokeEnvironment directly in
RepoFactory, which means none of that has to happen: we can now add a flag,
parse it into MononokeEnvironment, and get going.
While we're at it, we can also remove blobstore options and all that jazz from
MononokeApiEnvironment since now it's there in the underlying RepoFactory.
Reviewed By: HarveyHunt
Differential Revision: D27767700
fbshipit-source-id: e1e359bf403b4d3d7b36e5f670aa1a7dd4f1d209
Summary:
ScrubOptions normally represents options we parsed from the CLI, but right now
we abuse this a little bit to throw a ScrubHandler into them, which we
sometimes mutate before using this config.
In this stack, I'm unifying how we pass configs to RepoFactory, and this little
exception doesn't really fit. So, let's change this up, and make ScrubHandler
something you may give the RepoFactory if you're so inclined.
Reviewed By: HarveyHunt
Differential Revision: D27767699
fbshipit-source-id: fd38bf47eeb723ec7d62f8d34e706d8581a38c43
Summary:
Basically every single Mononoke binary starts with the same preamble:
- Init mononoke
- Init caching
- Init logging
- Init tunables
Some of them forget to do it, some don't, etc. This is a mess.
To make things messier, our initialization consists of a bunch of lazy statics
interacting with each other (init logging & init configerator are kinda
intertwined due to the fact that configerator wants a logger but dynamic
observability wants a logger), and methods you must only call once.
This diff attempts to clean this up by moving all this initialization into the
construction of MononokeMatches. I didn't change all the accessor methods
(though I did update those that would otherwise return things instantiated at
startup).
I'm planning to do a bit more on top of this, as my actual goal here is to make
it easier to thread arguments from MononokeMatches to RepoFactory, and to do so
I'd like to just pass my MononokeEnvironment as an input to RepoFactory.
Reviewed By: HarveyHunt
Differential Revision: D27767698
fbshipit-source-id: 00d66b07b8c69f072b92d3d3919393300dd7a392
Summary:
We actually require tunables in our binaries, but some of our tests have
historically not initialized them, because the underlying binaries don't
load tunables (so they get defaults).
I'd like to remove the footgun of binaries not initializing tunables, but to do
this I need tunables to be everywhere, which is what this does.
Reviewed By: StanislavGlebik
Differential Revision: D27791723
fbshipit-source-id: 13551a999ecebb8e35aef55c0e2c0df0dac20d43
Summary:
I want to call something else MononokeEnvironment (the environment the whole
binary is running in), so let's rename this one.
Reviewed By: StanislavGlebik
Differential Revision: D27767696
fbshipit-source-id: bd6f2f282a7fc1bc09926a0286ecb8a5777a0a24
Summary: This test failed on CI for unknown reasons, log the sizes in the failure as a clue.
Reviewed By: farnz
Differential Revision: D27822287
fbshipit-source-id: d15c8165c1d5a5a588b48d7b8469e5cd9cba1a35
Summary: Changing generic anyhow::Error to ErrorKind so there is no need to downcast when we want to match on errors.
Reviewed By: krallin
Differential Revision: D27742374
fbshipit-source-id: ba4c1779d5919eb989dadf5f457d893a3618fffc
Summary:
In the next diff, the packer will need to create PackBlobs with access to link and unlink operations on the underlying data store.
Rearrange blobstore factory so that this is guaranteed by design, noting that we will want to manually create just a PackBlob later.
Reviewed By: ahornby
Differential Revision: D27795485
fbshipit-source-id: e16c7baea4f2402a4f8f95d722adb5c422c5b8e3
Summary:
This replicates the behaviour of the Python code: if unknown file content matches the content of the file to be checked out, do not abort the checkout.
This is useful for resuming interrupted checkouts / clones
Reviewed By: DurhamG
Differential Revision: D27799147
fbshipit-source-id: 7d2582583525da18fba08dfcd8bced2b619304de
Summary:
`activate` recently got broken when we added the prefetch-metadata flag;
this needs to be on activate as well as fetch.
Reviewed By: xavierd
Differential Revision: D27778771
fbshipit-source-id: 052710c2f206e66d8042314773b6b408cff4915c
Summary: Currently native checkout aborts on unknown files even with the --clean flag. It should not abort with --clean.
Reviewed By: DurhamG
Differential Revision: D27779554
fbshipit-source-id: 2badc84c10eab28d2b1fc8840142ef883ac48c26
Summary: It's been showing up while building mononoke. Let's fix it
Reviewed By: sfilipco
Differential Revision: D27789928
fbshipit-source-id: a15912f66b9ad3370545aed88405dbeb800e63de
Summary: This seems to have broken the EdenFS HgPrefetch test.
Reviewed By: xavierd
Differential Revision: D27795192
fbshipit-source-id: 80a748036961aa6a5750182bf65637fb76825341
Summary: This will show proper checkout progress when using native checkout
Reviewed By: quark-zju
Differential Revision: D27775423
fbshipit-source-id: 79f2afa02bd1fab7d5f747da1c714d4d1126ce7c
Summary:
EdenAPI makes heavy use of streaming HTTP responses consisting of a series of serialized CBOR values. In order to process the data in a streaming manner, we use the `CborStream` combinator, which attempts to deserialize the CBOR values as they are received.
`CborStream` hits a pathological case when it receives a very large CBOR value. Previously, it would always buffer the input stream into 1 MB chunks, and attempt to deserialize whenever a new chunk was received. In the case of downloading values that are >1GB in size, this meant potentially thousands of wasted deserialization attempts. In practice, this meant that EdenAPI would hang when receiving the content of large files.
To address this problem, this diff adds a simple heuristic: If a partial CBOR value exceeds the current buffer size, double the size threshold before attempting to deserialize again. This reduces the pathological case from `O(n^2)` to `O(log(n))` (with some caveats, described in the comment in the code).
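The heuristic can be sketched generically (Python, with a made-up length-prefixed `try_decode` standing in for the CBOR decoder):

```python
def decode_stream(chunks, try_decode, initial_threshold=1024 * 1024):
    """Sketch of the buffering heuristic described above (not the real
    CborStream code). `try_decode(buf)` returns (value, bytes_consumed),
    or None if `buf` holds only a partial value.

    Rather than retrying a decode on every incoming chunk, we only try
    once the buffer reaches `threshold` bytes, doubling the threshold
    each time an attempt finds a partial value. A single n-byte value
    therefore costs O(log n) failed attempts instead of O(n)."""
    buf = b""
    threshold = initial_threshold
    attempts = 0
    for chunk in chunks:
        buf += chunk
        while len(buf) >= threshold:
            attempts += 1
            decoded = try_decode(buf)
            if decoded is None:
                threshold *= 2  # partial value: wait for more data
                break
            value, consumed = decoded
            yield value, attempts
            buf = buf[consumed:]
            threshold = initial_threshold
    while buf:  # drain whatever is left at end of stream
        attempts += 1
        decoded = try_decode(buf)
        if decoded is None:
            break
        value, consumed = decoded
        yield value, attempts
        buf = buf[consumed:]
```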
Reviewed By: krallin
Differential Revision: D27759698
fbshipit-source-id: 67882c31ce95a934b96c61f1c72bd97cad942d2e
Summary:
Previously we'd skip dynamicconfigs when there wasn't a repo available.
Now that dynamicconfig can represent the NoRepo state, let's load dynamicconfigs
in that situation.
This also supports the case where there is no user name.
Reviewed By: kulshrax
Differential Revision: D26801059
fbshipit-source-id: 377cfffb6695a7fbe31303868a88862259ebf8c4
Summary: Add a new `edenapi.maxrequests` config option to allow controlling the number of parallel in-flight requests. This can be used to bound resource usage of the client when requesting large amounts of data.
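The effect of such a cap can be sketched with a semaphore (Python asyncio; names are hypothetical, the real client is Rust):

```python
import asyncio


async def fetch_all(keys, fetch_one, max_requests=8):
    """Run `fetch_one` for every key, but keep at most `max_requests`
    fetches in flight at once; the rest wait for a free slot."""
    sem = asyncio.Semaphore(max_requests)

    async def bounded(key):
        async with sem:
            return await fetch_one(key)

    # gather preserves input order in its results.
    return await asyncio.gather(*(bounded(k) for k in keys))
```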
Reviewed By: sfilipco
Differential Revision: D27724817
fbshipit-source-id: 8d607efa83d8b0b94074d1a6e06f6c536cc0c791
Summary: Add a method to allow setting `CURLMOPT_MAX_TOTAL_CONNECTIONS`, which limits the number of concurrent requests within a curl multi session. If the number of requests in the session exceeds this number, they will be queued and sent once earlier requests have completed.
Reviewed By: sfilipco
Differential Revision: D27724818
fbshipit-source-id: 436384aed9d6d29f426e5e45aebb7a72c24ba667
Summary:
Without this, `make local` will build `hostcaps` without fb-specific logic and
cause wrong configs being used. `hg up master` will error out like:
File "treemanifest/__init__.py", line 690, in _httpgetdesignatednodes
self.edenapi.trees(dpack, self.name, keys)
RustError: Server reported an error (403 Forbidden)
Reviewed By: quark-zju
Differential Revision: D27759821
fbshipit-source-id: d42895f44bc53003f2578b65640ebe4ee05d52e6
Summary:
Right now, if prefetch fails, we just give the client back an error saying
"content not found".
This isn't super helpful, because usually the reason the content is not found
is because we cannot talk to the server that has the content, so showing the
user why we cannot talk to said server is more useful.
I'd like to ship this gradually, so I also added a config flag to turn it off.
Initially I'll have the flag set, but I did default it to not-set in the code
so that our tests run with the desired configuration.
Note: I initially contemplated adding logging for this here, but after
discussion with xavierd it looks like just failing instead of eating the error
is probably a better approach (and it's much simpler). This is also consistent
with what EdenAPI does.
Reviewed By: mzr
Differential Revision: D27761572
fbshipit-source-id: 3506d9c97a00e3f076bd346883e07f49194b0b06
Summary:
Right now, if the server ever tells us a file is missing, we fail the entire
batch download. This is a bit unfortunate because other objects could still be
downloaded, but also because we lose the distinction between "server crashed"
and "server told us the data we want does not exist".
Besides, it's actually a bit unnecessary. Indeed, when this fails, we just
ignore the error up the stack anyway, so it never actually gets accessed.
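A sketch of per-key outcomes instead of whole-batch failure (hypothetical names, not the actual client code):

```python
def download_batch(keys, fetch_one):
    """Record a missing object as a per-key miss instead of failing the
    whole batch. This preserves the distinction between "server crashed"
    (fetch_one raises) and "server says the object does not exist"
    (fetch_one returns None)."""
    found, missing = {}, []
    for key in keys:
        blob = fetch_one(key)
        if blob is None:
            missing.append(key)
        else:
            found[key] = blob
    return found, missing
```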
Reviewed By: mzr
Differential Revision: D27761574
fbshipit-source-id: cb4fb0526a3bf19c04ecb81c05d44d4d8afb81ad
Summary: We can just return the actual error here now.
Reviewed By: sfilipco
Differential Revision: D27761573
fbshipit-source-id: 0866f976b4ed434deffd96be6820ad05d27b7b93
Summary:
If an operation can't proceed because we are in an interrupted update state,
indicate in the hint that `hg update` needs a destination.
Reviewed By: sfilipco
Differential Revision: D27764182
fbshipit-source-id: f0734a4929e34833c4bf84e148db04d57b779246
Summary:
This will allow us to have greater visibility into what's going on when there are production issues.
Note: for getpack, the params data model is `[MPath, [Node]]`. In practice there seems to always just be 1 node per mpath. However, to preserve the mapping, I log every mpath in a separate sample.
Reviewed By: ahornby
Differential Revision: D26690685
fbshipit-source-id: 36616256747b61390b0435467892daeff2b4dd07
Summary:
NOTE: The revisionstore LFS tests talk to prod Mononoke LFS, so the test here
will fail until that is deployed.
Like it says in the title, this adds support for downloading content in chunks
instead of all in one go. The underlying goal here is to let us do better load
balancing server side. This earlier diff in this stack explains that in more
detail: D27188610 (820b538001).
Reviewed By: quark-zju
Differential Revision: D27191594
fbshipit-source-id: 19b1be9bddc6b32b1fabfe4ab5738dfcf71b1a85
Summary:
Historically, we haven't really cared about the sizes provided by clients in
LFS, and we went as far as just echoing them back to the client.
However, with the support I'm adding for range requests, this will start to
matter, because clients will ask for content in this range, so if the client
has the wrong size, we should correct it for them rather than just let them
proceed, lest they fail to download the file properly.
That being said, I am pretty sure there are places relying on us not caring
about the size, so I'm not throwing errors there.
Reviewed By: mitrandir77
Differential Revision: D27710438
fbshipit-source-id: ab670b44364604c07c449e500e379ca40b8c5ec1
Summary:
Like it says in the title. This is helpful for the next diff here, and it's
generally convenient to have nice conversions between our frontend and backend
types.
Reviewed By: HarveyHunt
Differential Revision: D27710439
fbshipit-source-id: f7d4279d750715866844ee0b32418825fd325499
Summary: Since HeaderClientChannel now accepts a transport unique_ptr there's no need to have this deleter exposed outside of HeaderClientChannel.
Reviewed By: iahs
Differential Revision: D27729209
fbshipit-source-id: 064b03afdfe567b6df6437348596f0f6f97f6aaf
Summary: Introduce `FetchKey` and `FetchValue` traits to simplify repeated trait bounds in many `ReadStore` implementations. We also newly require `Clone` for both keys and values, which was already required by the fallback combinator.
Reviewed By: DurhamG
Differential Revision: D27652499
fbshipit-source-id: 6a3d5eb18a904b982fdb9946b80fcc9025d391ea
Summary:
Extend debugscmstore command to fetch arbitrary files / trees by key.
Replace debugpyscmstore with a python fallback for debugscmstore (allowing you to test with the store as it is constructed for Python, with legacy fallback).
Refactor some functionality so it is shared between the rust and python versions of debugscmstore.
Currently the output is pretty ugly. It uses the `{:#?}` format for everything. In the next change, I propose modifying the `Debug` implementation for `minibytes::Bytes` to use ascii-escaped bytestrings rather than the default slice formatter to make things much nicer.
This new `debugscmstore` functionality should be useful in integration tests for testing scmstore under different repo configurations, and for test harnesses and performance testing (fetch a specific set of things easily, simulate delays in the key stream by delaying the input pipe, etc).
Reviewed By: andll
Differential Revision: D27351321
fbshipit-source-id: 8650480e3f5b045b279472643570309c48d7fe6b
Summary: Like `FileScmStoreBuilder`, but for trees. As LFS is not used for trees, `TreeScmStoreBuilder` defaults to `ExtStoredPolicy::Use` (pass along anything you find without LFS-specific checks).
Reviewed By: DurhamG
Differential Revision: D27641290
fbshipit-source-id: 637340a23cef058e7e37a41ae7f5b4fcc9481190
Summary: Introduce a new `FileScmStoreBuilder` structured much like `ContentStoreBuilder`, but supporting the features needed for the intermingling of `contentstore` and `filescmstore` construction (shared indexedlog, scmstore fallback to contentstore).
Reviewed By: DurhamG
Differential Revision: D27640702
fbshipit-source-id: e9771e6f61d80698a9dd761a0db66407b565c010
Summary: The previous change here wasn't sufficient. We need to wrap the handle fd in a Handle now as well.
Reviewed By: quark-zju, sfilipco
Differential Revision: D27753458
fbshipit-source-id: bd8ebbdcdc9acb411362795263b1419360f15e1a
Summary: This test fails without other diffs in stack because previously native checkout was overwriting untracked files
Reviewed By: DurhamG
Differential Revision: D27667151
fbshipit-source-id: 9b3aea37ba5c2d07fe4fc975dd40b4d7bea9d810
Summary: These tests were broken in D27710099 (876f812e4b), but they show as passing unless run in a particular environment, so it went unnoticed. This change reverts the tests to use the pre- D27710099 (876f812e4b) behavior, which should unbreak them until they can be updated correctly.
Reviewed By: quark-zju
Differential Revision: D27756348
fbshipit-source-id: cfa6c12871b6ac0d22b8c70400e72b3ec5dd83a3
Summary:
The `add_heads_and_flush` method might add new nodes in the master group,
and it should update `overlay_map_next_id` accordingly. Without it, it
might error out like:
RustError: ProgrammingError: Server returned x~n (x = 9ebc9ebc49f1819767b40f9ceb22c37547a10c37 8459584, n = 1411).
But x exceeds the head in the local master group {}. This is not expected and indicates some logic error on the server side.
Full error: P387088806
Reviewed By: sfilipco
Differential Revision: D27637278
fbshipit-source-id: b45370db0561dec52cd513a12e2fd0110f18e0e5
Summary:
`filternodes` is an API that can batch `hasnode` checks. It is more efficient
if the commit hashes are lazy.
Reviewed By: sfilipco
Differential Revision: D27636338
fbshipit-source-id: 4eb2dcd20b939faa38611b82e32ed97cf0c8f175
Summary:
`filternodes` is an API that can batch `hasnode` checks. It is more efficient
if the commit hashes are lazy.
Reviewed By: sfilipco
Differential Revision: D27636341
fbshipit-source-id: 69cd708a46c719624d654c53de3d92968392d572
Summary:
If a vertex was resolved via remote protocol, respect it and
avoid requesting the same thing twice.
Reviewed By: sfilipco
Differential Revision: D27636340
fbshipit-source-id: 43cf86010745a85cf622c67be4261ea47a33c802
Summary:
Many places use the `[n for n in nodes if hasnode(n)]` pattern, which
can be slow with a lazy graph due to N+1 problem. Add an API so the
Python world can use a more efficient way for the same query.
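The batched shape of such an API can be sketched as follows (the `known` callback is hypothetical and stands in for one backend round-trip):

```python
def filternodes(nodes, known):
    """Batched membership check: one call answers "which of these nodes
    exist?" instead of one `hasnode` round-trip per node, avoiding the
    N+1 cost of `[n for n in nodes if hasnode(n)]` on a lazy graph."""
    present = known(nodes)  # one round-trip for the whole list
    return [n for n in nodes if n in present]
```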
Reviewed By: sfilipco
Differential Revision: D27636339
fbshipit-source-id: 99ccb85b2266aed22f1cf87a5e2528106edb3783
Summary:
That could cause a slow loop testing node.__contains__ remotely.
This changes the behavior subtly: nodes added via addgroup will change the `tip`
position regardless of whether the nodes already exist. This might be more
desirable, since add or addgroup explicitly adding nodes should probably update
the tip position.
The offending test `test-globalrevs-requires.t` was removed since we have
forked the server-side codebase and do not need to maintain hg server
features here.
Reviewed By: sfilipco
Differential Revision: D27630090
fbshipit-source-id: cf7ecc44bf08ed756f0f1aece6655bf674171249
Summary:
The idset is not fully backed by Rust and does not batch-resolve vertexes.
The nameset is backed by the Rust NameSet and has proper batch prefetching.
Use the nameset if possible, but fall back to the idset if the backend is not
in Rust (rare; only used by hgsql repos now).
Reviewed By: sfilipco
Differential Revision: D27630092
fbshipit-source-id: cf847dd1a78bd5273a8928ecb6616fe11f2c7026
Summary:
This will be useful to avoid suboptimal code paths that are very slow
with lazy vertexes.
Reviewed By: sfilipco
Differential Revision: D27630089
fbshipit-source-id: 35ee4ba79b551453de78fd22aecccf10bc43b08b
Summary:
While it is in theory "correct" to go through remote resolution even when the
protocol is "local", the overhead turns out to be nontrivial, and the tracing
message "resolve .. remotely" can be quite noisy. Let's just skip remote
resolution early in the IdConvert implementations.
Reviewed By: sfilipco
Differential Revision: D27630094
fbshipit-source-id: 7d87079876f040cf8f764f7362021dddba0d4723
Summary:
Currently the "contains vertex" check can trigger excessive
fetches for add_heads (and add_heads_and_flush used by flush).
Add a test to demonstrate the problem.
Reviewed By: sfilipco
Differential Revision: D27630091
fbshipit-source-id: ce3639c2a13226ba5681b4e8696edd7acbcb57f9
Summary:
Otherwise it can cause a lazy dag to treat vertexes as "missing", insert
vertexes unnecessarily, and potentially break key graph properties (a
vertex should only have one Id).
Reviewed By: sfilipco
Differential Revision: D27629315
fbshipit-source-id: 1688d13cb94015bbc675613ecf9225556ff48372
Summary:
Also move related functions.
Similar to D27547584 (af3c3b3fd0), this allows `add_heads_and_flush` to use `IdConvert`
on the `NameDag`, instead of the `IdMap` to trigger remote fetches properly.
This diff is easier to view with whitespace changes ignored.
Reviewed By: sfilipco
Differential Revision: D27629314
fbshipit-source-id: 8f79223c5d324aabfc5ab9813a9f65400fc533ec
Summary:
See the previous diff for context. Drop Locked and related APIs (prepare_filesystem_sync).
This makes it easier to operate on a mut NameDag on flush because it does not need
to use separate types (Locked) for writing, which has the issue of not involving
the remote protocol.
Reviewed By: sfilipco
Differential Revision: D27629306
fbshipit-source-id: 301445b242321ad5f424741ea91ebf6c075bff1c
Summary:
See the previous diff for context. Drop SyncableIdMap so we are one step
closer to using mut NameDag directly on add_heads, which knows when and how to
do remote fetching properly.
Reviewed By: sfilipco
Differential Revision: D27629310
fbshipit-source-id: 883606e40bb83907dfa6142ddd2c3030de61698f
Summary:
By using SyncableIdDag and SyncableIdMap, it's harder to use extra features
around them (ex. remote fetching). Drop SyncableIdDag so we are one step
closer to using mut NameDag directly on add_heads, which knows when and how to
do remote fetching properly.
Reviewed By: sfilipco
Differential Revision: D27629307
fbshipit-source-id: 8e9a5a4348a42b9751752b82feb3f3d2d7c4ba45
Summary:
The `Parents` trait is used for input of adding (non-lazy) vertexes to the
graph. The API will be used to provide extra hints to optimize network
fetches.
With the current logic, `assign_head` will ask the server to resolve the heads
first, to check if they are already assigned, then to resolve the parents, etc. to
the roots (in the "to assign" set). Ideally the `assign_head` logic can ask
the server to resolve the roots first, and if that's unassigned, then just mark
all descendants of the roots as unassigned, do not send more requests.
Note: the current pull logic has all the hashes ready (hashes are known).
But whether the hashes have Ids assigned is unknown. It is more tricky
taking the "lock" and "reload" into consideration - hashes without Ids might
turn out to have Ids assigned after we obtain the lock to write data to disk.
Practically, `pull` using the current logic would take a very long time because
it tries to resolve things remotely for every "to assign" commit.
Reviewed By: sfilipco
Differential Revision: D27629317
fbshipit-source-id: e02f54f43ef65228ce6e3a8a8723dd9ae0a07008
Summary: This just simplifies the test code a bit.
Reviewed By: sfilipco
Differential Revision: D27629308
fbshipit-source-id: 04eac5cd045c71123e7fc410af74609dbadb8fb7
Summary: This avoids triggering remote lookups if an unknown name was looked up multiple times.
Reviewed By: sfilipco
Differential Revision: D27629316
fbshipit-source-id: 64c1ce5e872a5ce4f14c650a946ae8396f4cc74c
Summary:
When translating RequestNameToLocation to ResponseIdNamePair, if the "heads" are
known but some "names" aren't, do not treat it as an error. This will be
used by the client-side to properly handle the "contains" check.
Reviewed By: sfilipco
Differential Revision: D27629309
fbshipit-source-id: 206ec5df956b33f4e816ab8d6a67ce776fd7bd74
Summary: This will make it easier to test client / server dags in upcoming changes.
Reviewed By: sfilipco
Differential Revision: D27629318
fbshipit-source-id: e3137654613aa3208a8f2e4b9f4ddfe73871f2c5
Summary: This will be used in upcoming changes. It just delegates to the Arc inner.
Reviewed By: sfilipco
Differential Revision: D27629313
fbshipit-source-id: ba6cd7cac2b9f5c1a2898c439c53768995a2dc42
Summary: This will be used by upcoming changes.
Reviewed By: sfilipco
Differential Revision: D27629312
fbshipit-source-id: 6c56e73caf4e1a398ac3a8b4294bd9f380af3764
Summary: This will be used by upcoming changes.
Reviewed By: sfilipco
Differential Revision: D27629319
fbshipit-source-id: d19e490268561f3154642e5bb1e415da4c5d03ee
Summary:
See the previous diff for context. A concrete HashMap can provide
"hint_pending_subdag". But a parent function cannot.
Reviewed By: sfilipco
Differential Revision: D27629311
fbshipit-source-id: 65168a8d00d9a672396312200016d6749f416d02
Summary: The lazy backend can now (partially) support the non-full IdMap segment clone.
Reviewed By: sfilipco
Differential Revision: D27581488
fbshipit-source-id: 51eded6acdbe82d22f5bb73eb4a715e2c22f4d75
Summary:
Make mutationstore more friendly to async.
This resolves an issue with smartlog with the lazy commit hash backend.
Reviewed By: sfilipco
Differential Revision: D27583844
fbshipit-source-id: 5b0b0b9b8ab82399f80eb2b410a0c4b84bd6a444
Summary:
Otherwise it might panic (ex. calling into tokio without entering a tokio
runtime). This can happen in, for example, `Debug::fmt(&IdStaticSet, ...)`.
Reviewed By: sfilipco
Differential Revision: D27581487
fbshipit-source-id: feec53e088706adcc6710afcf24fa70598f886cf
Summary: `SyncNameSetQuery` will stop working with lazy commit hashes. Change them to async.
Reviewed By: sfilipco
Differential Revision: D27581486
fbshipit-source-id: bfac1d0676f1fe102c74cc4fc2b83d4c9aed970e
Summary:
This will be used by "add_heads" logic to detect what vertexes to insert
and might reduce remote fetches.
Reviewed By: sfilipco
Differential Revision: D27572359
fbshipit-source-id: d0bf027a69d180663a1587dfde613cb76b05072a
Summary: The API returns entries buffered in memory, not yet persisted.
Reviewed By: sfilipco
Differential Revision: D27572360
fbshipit-source-id: 555988f7c891f2d928bfa1e486a0fc1d089b4ad5
Summary: This will be used to select "dirty" (not written to disk) set in the IdDag.
Reviewed By: sfilipco
Differential Revision: D27572361
fbshipit-source-id: 0b4d2e092ece835e3d4b6aa831d32ffffc7087ca
Summary:
Before this change, the overlap IdMap was not considered for prefix lookup. That
resulted in the "shortest" template not working and smartlog printing full hashes
for remote/stable etc.
Reviewed By: sfilipco
Differential Revision: D27547582
fbshipit-source-id: 7a56590775eed9d509f2212f8e5009c04aaf4e9d
Summary: It will be reused in NameDag.
Reviewed By: sfilipco
Differential Revision: D27547583
fbshipit-source-id: da85fc7504d20877210e8ed1a97cbec18d06eede
Summary:
Now that NameSet iteration can be blocking, SyncNameSetQuery is no longer sound.
Remove SyncNameSetQuery in key logic (namedag and ops) and replace them with
async logic.
Reviewed By: sfilipco
Differential Revision: D27547581
fbshipit-source-id: af69b1a8219e97d10278939407ee79f9b333a77f
Summary: Dag algorithms like `parent_names` need to fetch vertexes via remote automatically.
Reviewed By: sfilipco
Differential Revision: D27547584
fbshipit-source-id: 8106931d6f642c9a4bf0f3c546ba881c2ca73fea
Summary: Similar to "lazytext" but IdMap is also lazy.
Reviewed By: sfilipco
Differential Revision: D27547579
fbshipit-source-id: 70452f1a8e7f00d6a216a2aaec2d55442130d3ce
Summary: This can be used to create lazy commit hash backends.
Reviewed By: sfilipco
Differential Revision: D27547580
fbshipit-source-id: 3329854f1173b8f15fd6b51f4e595d2226c8bbb1
Summary:
This allows unix sockets to be created in the mount. This will allow Buck to
run properly as it tries to create sockets in the repository.
Reviewed By: kmancini
Differential Revision: D27690406
fbshipit-source-id: 5725d68bdda12f3a5882ce48b6bdd02b14cdece4
Summary: This merely adds the types for the procedure
Reviewed By: kmancini
Differential Revision: D27690405
fbshipit-source-id: b94fb03658cabaece4166c29135c5fdf9a613d3c
Summary:
This is roughly the same logic as the UNLINK one with the only difference being
in the handling of "." and "..".
Reviewed By: kmancini
Differential Revision: D27684716
fbshipit-source-id: 86a95c38e6c783bc3a45c0a8b000d0210b6dd0b8
Summary:
This merely adds the types needed for the RMDIR procedure. Implementation will
follow.
Reviewed By: genevievehelsel
Differential Revision: D27684736
fbshipit-source-id: 84f5a4f3dc805e7893853b0de1dc19cb01c1319f
Summary:
To get the size in bytes, we need to multiply the block count by the block size,
not by itself.
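The fix is a one-line change in spirit; a minimal sketch (struct and field names here are hypothetical, the real values come from the NFS FSSTAT reply):

```rust
// Hypothetical struct; the real code reads these fields from the FSSTAT reply.
struct FsStat {
    blocks: u64,     // number of blocks on the filesystem
    block_size: u64, // size of one block, in bytes
}

fn total_bytes(s: &FsStat) -> u64 {
    // Correct: blocks * block size.
    // The bug multiplied the block count by itself instead.
    s.blocks * s.block_size
}
```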
Reviewed By: genevievehelsel
Differential Revision: D27690857
fbshipit-source-id: 7d7ca767881b1118fc24befed230a63f342bc911
Summary:
In revset limit(a, x, y), both x and y are numbers, not commit identities.
The issue is that the revset AST uses different ways to represent functions
with one argument or multiple arguments. For example:
(func (symbol parents) (symbol master))
(func (symbol limit) (list (x) (symbol 1) (symbol 2)))
Fix it by special-casing the `list` AST.
Reviewed By: DurhamG
Differential Revision: D27632395
fbshipit-source-id: 081506bdd4b10e197a2685f4ab4d6448fbd79957
Summary: This crate does not panic on Windows.
Reviewed By: DurhamG
Differential Revision: D27640362
fbshipit-source-id: f50f6b8e0bd31e5f80fa939bcfb6846bc8fd4a63
Summary:
Recently we saw some progress rendering issues. Add a command to attempt to
reproduce them.
Reviewed By: DurhamG
Differential Revision: D27669184
fbshipit-source-id: 62fcf82d8261fd27e91ba5a116c61f4df1919007
Summary: This will be used later.
Reviewed By: skotchvail
Differential Revision: D27744058
fbshipit-source-id: 411ab66ccc38b306c6bffb190e936ba1e455f07a
Summary:
`os._exit` bypasses all clean-up logic, including `IO::drop` in `hgmain` which
cleans up the progress bars. So let's explicitly clean up the progress bars
before `os._exit`.
Reviewed By: kulshrax
Differential Revision: D27744944
fbshipit-source-id: 5cd50b283728fd4e3b559142f7f61fc6672492e9
Summary: This will make RotateLog achieve zero-copy reading more easily.
Reviewed By: kulshrax
Differential Revision: D27724331
fbshipit-source-id: 57915516dc6bd1935838bd099a60c104f0bdef3d
Summary: This makes it more flexible.
Reviewed By: kulshrax
Differential Revision: D27724332
fbshipit-source-id: 43ad670519f0617a97e0b7d38b374f497e9c01af
Summary:
This allows setting the wrapping mode. For example:

  lhg log -pv
  # copy-paste of long lines works.

  lhg log -pv --config pager.wrapping-mode=unwrapped
  # lines are not wrapped; ">" is shown for long lines.

  lhg log -pv --config pager.wrapping-mode=word
  # long lines are wrapped at word level.

The default value matches "less" behavior.
Reviewed By: DurhamG
Differential Revision: D27720767
fbshipit-source-id: e29d6b13656407c0a1e63287fb96e2f8d914cfc8
Summary: This is needed to prevent a situation where we try writing into a file that is untracked by hg
Reviewed By: DurhamG
Differential Revision: D27667152
fbshipit-source-id: 31bb9e30bd6b58e80ba96d280ff6ca1842c8caf6
Summary: This method checks whether any of the files that checkout writes is untracked in hg and exists on disk
Reviewed By: DurhamG
Differential Revision: D27667153
fbshipit-source-id: 4ad8bc08520678ea0b51008ed14fb51ca4a98f76
Summary:
Previously this query failed because it tried to convert bytes to int, and our
mysql wrapper doesn't support that.
Let's cast it instead
Reviewed By: krallin
Differential Revision: D27736863
fbshipit-source-id: 66a7cb33c0f623614f292511e18eb62e31ea582f
Summary:
`hostcaps` abstracts the logic for determining whether we have a prod or corp
environment.
Reviewed By: DurhamG
Differential Revision: D27684641
fbshipit-source-id: 50df9a60b6a613b4cb5c9aed6cad2844aae85a6f
Summary:
We want to use it in Mercurial and the directory structure was playing bad
with Mononoke's OSS build.
Reviewed By: xavierd
Differential Revision: D27684642
fbshipit-source-id: 8827645eee58fa671f9c9e1964a34c34e3a8eeb6
Summary:
macOS does not have a device field like Linux's that we can use to mark edenfs
nfs mounts. But there is the `f_mntfromname` field. This field typically
holds the path which this nfs mount is mirrored from, but it should be fine
to hijack it as the edenfs indicator field.
Reviewed By: xavierd
Differential Revision: D27717945
fbshipit-source-id: 056fb39dc3273b68d79c26487fd045f4e7f93b7b
Summary:
With fuse we report "edenfs:" as the device, let's do the same thing with nfs
so watchman can recognize edenfs nfs mounts similarly.
I think it's fine to use the standard "edenfs" as the server name in the mount
call rather than the address, from looking at:
https://www.systutorials.com/docs/linux/man/8-mount.nfs/
Reviewed By: xavierd
Differential Revision: D27630764
fbshipit-source-id: 9e476c90ece90e758b98d140c6bf4067dbca3661
Summary: Currently just does XDB Blobstore, because the work to do other types and/or go via Packblob is significant.
Reviewed By: markbt
Differential Revision: D27735093
fbshipit-source-id: d3797017a2e0ff7c60525d1f4d4ee3e63b519d49
Summary: We have deprecated it in favor of an argument that takes a boolean value.
Reviewed By: farnz
Differential Revision: D27709429
fbshipit-source-id: 45e9569188f2e9d017f1c5bf61f7c61bc0e5318a
Summary:
It's useful when operating with timeseries to know what range of data has been
populated. This diff adds support for this in mononoke/timeseries, by tracking
the number of buckets that fall within intervals where data was provided.
Reviewed By: mitrandir77
Differential Revision: D27734229
fbshipit-source-id: 3058a7ce4da67666e8ce8a46e34e277b69153ea4
Summary:
When building skiplists, set the session class to `Background`. This ensures
that the blobstore writes for the new skiplist have completed fully.
Reviewed By: StanislavGlebik
Differential Revision: D27735411
fbshipit-source-id: 4ba8e8b91dafbb1aa258d15b26e7d773f63b5812
Summary:
If the caller asks us for a range that extends past the end of our file, we'd
rather give them an error instead of silently returning the file.
This actually revealed one of the tests needed work :)
Note that right now we'll just end up categorizing this as 500. I'd like to
rework the errors we emit in the Filestore, but that's a somewhat bigger
undertaking so I'd like to do it separately.
Reviewed By: quark-zju
Differential Revision: D27193353
fbshipit-source-id: 922d68859401eb343cffd201057ad06e4b653aad
Summary:
The backupbookmarks part was used for infinitepush backup bookmarks, which were
deprecated. Now stop sending the part entirely unless
`commitcloud.pushbackupbookmarks` is set.
Reviewed By: StanislavGlebik
Differential Revision: D27710099
fbshipit-source-id: 1eb404f106f5a8d9df6d73e11f60f89c1fa10400
Summary:
Like it says in the title, this adds support for publishing our max open
connections to ODS. Note that this is a little more involved than I would like
for it to be, but there is no way to get direct access to this information.
This means, we need to:
- Expose how many open connections we have in flight (this is done earlier in
this stack in the Rust MySQL bindings).
- Periodically get this information out of MySQL and put it in a timeseries.
- Get the max out of said timeseries and publish it to a counter so that it can
be fetched in ODS.
This is what this diff does. Note that I've only done this for read pools,
largely because I think they're the ones we tend to exhaust the most and I'd
like to see if there is value in exposing those counters before I use them.
We do the aggregation on a dedicated thread here. I contemplated making this a
Tokio task, but I figured making it a thread would make it easier to see if
it's misbehaving in any way (also: note that the SQL client allocates a bunch
of threads already anyway).
Reviewed By: HarveyHunt
Differential Revision: D27678955
fbshipit-source-id: c7b386f3a182bae787d77e997d108d8a74a6402b
Summary: This name is more reasonable, since this commit is not actually ephemeral
Reviewed By: quark-zju
Differential Revision: D27722921
fbshipit-source-id: e2c0243d41a73341f9d0afdc79696ea37b34b8c7
Summary: Running these with tsan appears to run properly, let's try to re-enable them.
Reviewed By: genevievehelsel
Differential Revision: D27723525
fbshipit-source-id: 42e61d26cf478cbe808698a6a0615015832180fa
Summary:
We have `experimental.findcommonheadsnew` set to True in all tests, and
Rust commit backends force the `findcommonheadsnew` paths, which is
pretty much everywhere except hgsql repos. Remove `_findcommonheadsold`.
The fast discovery is also unnecessary. Remove them too.
Reviewed By: DurhamG
Differential Revision: D27630496
fbshipit-source-id: ab1948f03a8c84e75e3b5c9ff4769e17533447d2
Summary: Many users have asked what the scratch directory is when they look to free some disk space. Placing a README.txt under the scratch root could be helpful to explain what is in there.
Reviewed By: fanzeyi
Differential Revision: D27710277
fbshipit-source-id: e3ccd92fa1920ac4c791026b8d98aa05a1c8b268
Summary: This diff uses filescmstore for native checkout when nativecheckout.usescmstore config is set
Reviewed By: DurhamG
Differential Revision: D27658844
fbshipit-source-id: ec3442d677ccb25e8b08cc194e4c8c18c0e01fa1
Summary:
This diff introduces CheckoutPlan::apply_read_store to apply a checkout plan using ReadStore as a data source.
This requires some minor changes in the apply_stream flow, as ReadStore does not guarantee ordering of returned files.
Reviewed By: DurhamG
Differential Revision: D27658346
fbshipit-source-id: 5a289554d8dd7b6bb4b5a996659cd0661779ad5f
Summary: The latter is more lightweight.
Reviewed By: DurhamG
Differential Revision: D27641665
fbshipit-source-id: d46f62f9067eb9cb4c8517a62efa6f663d4b6732
Summary: The latter is more lightweight.
Reviewed By: DurhamG
Differential Revision: D27641669
fbshipit-source-id: d907407f5a6e868862fe37f1f67fbe99ee378156
Summary: The latter is more lightweight.
Reviewed By: DurhamG
Differential Revision: D27641667
fbshipit-source-id: adce5a39fcb5d8e8d5d989fed46991e20ab3710d
Summary: Provides a way to read config with lighter dependencies.
Reviewed By: DurhamG
Differential Revision: D27641668
fbshipit-source-id: fc99a78f5f51e63f61d1b049af74f61f5d1916a3
Summary:
The `configparser` is now too heavyweight. Some other crates (ex. io, auth,
revisionstore) just want to extract config values without complicated parsing /
remote hotfix logic.
Add a configmodel crate to satisfy the need.
Reviewed By: DurhamG
Differential Revision: D27641666
fbshipit-source-id: 26bd0b606ae3d286b3ec218927aef726d6802c63
Summary:
The CMake documentation states:
>By default, the value index of value-parameterized tests is replaced by the
>actual value in the CTest test name. If this behavior is undesirable (e.g.
>because the value strings are unwieldy), this option will suppress this
>behavior.
Which appears to be a decent default, but not when the parameter is a pointer,
in which case the test name will contain some hex values.
Reviewed By: genevievehelsel
Differential Revision: D27713222
fbshipit-source-id: 0f15b24d04817384ff975ad7b07e16b744e1eb2e
Summary: Add a new `http.verbose` config option that turns on verbose output for libcurl (similar to the output printed by `curl -v`). This can be very useful for debugging HTTP issues.
Reviewed By: DurhamG
Differential Revision: D27693304
fbshipit-source-id: 2ad7a08889f40ffbcd2f14ac9c21d70433629da4
Summary: Make this method match the behavior of `remotefileslog.memcachestore` and cache the `edenapistore` instead of constructing a new one each time. Right now this doesn't matter too much because we currently only call this once when setting up the Rust `revisionstore`, but it would be good to avoid creating multiple instances if we do start using this elsewhere.
Reviewed By: DurhamG
Differential Revision: D27684210
fbshipit-source-id: 7987f603c79758902b4740dd8b46d26a25baec93
Summary:
This diff causes memcache to be disabled when `remotefilelog.cachekey` is set to the empty string, thereby allowing memcache to be disabled on the command line with `--config remotefilelog.cachekey=''`. This is useful when testing data-layer changes.
Previously, the only way to do this was to add `%unset cachekey` to the `remotefilelog` section of your `hgrc`, which was a bit tedious compared to just using a `--config` flag.
Reviewed By: DurhamG
Differential Revision: D27683782
fbshipit-source-id: 3e0434e98a32db916a07935e8b26f70317f50286
Summary: The error is not going to have a message in many cases leading to crashes.
Reviewed By: quark-zju
Differential Revision: D27637119
fbshipit-source-id: 135b133371916dddf0c47a84f00957a8b8fdfe92
Summary:
It looks like nfs_test is tripping the nightly pyre infer job, causing it to not
publish diffs that increase our typing coverage. Let's manually type it to
unblock the nightly job.
Reviewed By: genevievehelsel
Differential Revision: D27682093
fbshipit-source-id: a32df9f5b8eeaef2006de7d64f5adadb763402e8
Summary:
We hammer MySQL during GC - slow down so that bad connections to
servers that are no longer current are dropped from the pool
https://fb.workplace.com/groups/scm.mononoke/permalink/1407064449656126/?comment_id=1408378062858098 justifies setting the MySQL max ages to 1 second - it works around a MyRouter issue where it *should* reconnect us to a different host, but doesn't.
Reviewed By: krallin
Differential Revision: D27500583
fbshipit-source-id: e900925e1f0d65828613fe3e3d7f4128dc7cde82
Summary: SQLBlob doesn't benefit from sharing a pool with other MySQL users, but does benefit from more aggressive connection timeouts. Give it its own pool, which we can tweak later.
Reviewed By: krallin
Differential Revision: D27651133
fbshipit-source-id: 8f5216ec0506b217f9365babfe1ebac00f68a9a9
Summary:
Like it says in the title. This is a place where we use timeseries so we might
as well use that shared crate.
Reviewed By: mzr
Differential Revision: D27678389
fbshipit-source-id: 9b5d4980a1ddb5ce2a01c8ef417c78b1c3da80b7
Summary:
I'd like to be able to track time series for access within Mononoke. The
underlying use case here is that I want to be able to track the max count of
connections in our SQL connection pools over time (and possibly other things in
the future).
Now, the obvious question is: why am I rolling my own? Well, as it turns out,
there isn't really an implementation of this that I can reuse:
- You might expect to be able to track the max of a value via fb303, but you
can't:
https://www.internalfb.com/intern/diffusion/FBS/browse/master/fbcode/fb303/ExportType.h?commit=0405521ec858e012c0692063209f3e13a2671043&lines=26-29
- You might go look in Folly, but you'll find that the time series there only
supports tracking Sum & Average, but I want my timeseries to track Max (and
in fact I'd like it to be sufficiently flexible to track anything I want):
https://www.internalfb.com/intern/diffusion/FBS/browse/master/fbcode/folly/stats/BucketedTimeSeries.h
It's not the first time I've run into a need for something like this. I needed
it in RendezVous to track connections over the last 2 N millisecond intervals,
and we needed it in metagit for host draining as well (note that the
implementation here is somewhat inspired by the implementation there).
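The idea can be sketched in a few lines (names and API here are illustrative, not the actual mononoke/timeseries interface): fixed-width buckets, each tracking the max of the samples recorded during its interval.

```rust
use std::time::{Duration, Instant};

/// Illustrative sketch only -- not the real mononoke/timeseries API.
/// Fixed-width buckets; each bucket keeps the max value seen in its window.
struct MaxTimeSeries {
    bucket_width: Duration,
    start: Instant,
    buckets: Vec<Option<u64>>,
}

impl MaxTimeSeries {
    fn new(bucket_width: Duration, nbuckets: usize, now: Instant) -> Self {
        Self {
            bucket_width,
            start: now,
            buckets: vec![None; nbuckets],
        }
    }

    /// Record a sample; the bucket keeps the max of all samples in its window.
    /// (A real implementation would also clear stale buckets on wrap-around.)
    fn record(&mut self, now: Instant, value: u64) {
        let elapsed = now.duration_since(self.start).as_nanos();
        let idx = (elapsed / self.bucket_width.as_nanos()) as usize % self.buckets.len();
        let slot = &mut self.buckets[idx];
        *slot = Some(slot.map_or(value, |m| m.max(value)));
    }

    /// Max across all populated buckets, e.g. for publishing to a counter.
    fn max(&self) -> Option<u64> {
        self.buckets.iter().flatten().copied().max()
    }
}
```

The flexibility comes from the aggregation living in `record`: swapping `max` for any other fold gives a timeseries tracking whatever you want.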
Reviewed By: mzr
Differential Revision: D27678388
fbshipit-source-id: ba6d244b8bb848d4e1a12f9c6f54e3aa729f6c9c
Summary:
This is breaking with a warning because there's a method called `intersperse`
that might be introduced in the std lib:
```
stderr: error: a method with this name may be added to the standard library in the future
--> eden/mononoke/hgproto/src/sshproto/response.rs:48:53
|
48 | let separated_results = escaped_results.intersperse(separator);
| ^^^^^^^^^^^
|
note: the lint level is defined here
--> eden/mononoke/hgproto/src/lib.rs:14:9
```
This should fix it.
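The hazard can be shown stand-alone (a generic sketch, not the actual hgproto change): an extension-trait method whose name a future std method may claim trips the `unstable_name_collisions` lint; giving the helper a name std is unlikely to take, or calling it via fully qualified syntax, sidesteps it.

```rust
// Illustrative extension trait, not the real code. Naming the method
// something std is unlikely to claim avoids the future name collision.
trait InterleaveExt: Iterator + Sized {
    fn interleave_sep(self, sep: Self::Item) -> Vec<Self::Item>
    where
        Self::Item: Clone,
    {
        let mut out = Vec::new();
        for item in self {
            if !out.is_empty() {
                out.push(sep.clone());
            }
            out.push(item);
        }
        out
    }
}

impl<I: Iterator> InterleaveExt for I {}
```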
Reviewed By: ikostia
Differential Revision: D27705212
fbshipit-source-id: 5f2f641ea6561c838288c8b158c6d9e134ec0724
Summary:
`const_cstr::ConstCStr` is represented internally as a fat pointer with fixed size: `&'static str`. See https://docs.rs/const-cstr/0.3.0/const_cstr/struct.ConstCStr.html. Notably this is **different** from the representation of `std::ffi::CStr`, which is a dynamically sized type and normally passed around behind a reference, as `&CStr`. Using `&ConstCStr` in signatures, which is effectively like `&'a &'static CStr`, is confusing due to the discrepancy between the two relatedly named types. Additionally having two different lifetimes involved -- the static lifetime of the underlying bytes, and the short lifetime of the fat pointer -- is unnecessarily confusing when async code and a language boundary are involved.
The utf8-cstr crate uses what seems like a better representation to me than the const-cstr crate. See https://docs.rs/utf8-cstr/0.1.6/utf8_cstr/struct.Utf8CStr.html. `Utf8CStr` is the dynamically sized type, just like `CStr`. Then `&'static Utf8CStr` is how it would commonly be passed around, just like `&CStr`.
Reviewed By: krallin
Differential Revision: D27698169
fbshipit-source-id: ffe172c2c2fc77aeab6b0a0a8aed3e3c196098cc
Summary: Migrate the codebase away from the ad-hoc `folly::uint64ToBufferUnsafe` and to `folly::to_ascii_decimal` which is intended for these cases.
Reviewed By: WillerZ
Differential Revision: D27281875
fbshipit-source-id: 0c98749e4aed9c873853eed2221cf54a89279ff4
Summary:
The hashes that are passed in as parameters to the hash-to-location function
may not be hashes that actually exist. This change updates the code so that
we don't return an error when an unknown hash is passed in. The unknown
hash will be skipped in the list of results.
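The behavior change can be sketched roughly (hypothetical function and types; the real server resolves hashes against the commit graph): unknown hashes are filtered out of the result list rather than failing the whole request.

```rust
// Hypothetical sketch: map each requested hash to its position among the
// known hashes, silently skipping hashes the server does not know about
// instead of returning an error for the whole batch.
fn hash_to_location<'a>(known: &[&str], requested: &[&'a str]) -> Vec<(&'a str, usize)> {
    requested
        .iter()
        .filter_map(|h| known.iter().position(|k| k == h).map(|idx| (*h, idx)))
        .collect()
}
```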
Reviewed By: quark-zju
Differential Revision: D27526758
fbshipit-source-id: 8bf9b7a134a6a8a4f78fa0df276f847d922472f5
Summary:
We want to handle the case where the client has multiple heads for master. For
example when master is moved backwards (or when it gets moved on the client by
force). Updating the client code to thread the list of master commits to the
EdenApi client.
Reviewed By: quark-zju
Differential Revision: D27523868
fbshipit-source-id: db4148a3f1d0e8b0b162e0ecc934e07f041c5511
Summary:
We want to handle the case where the client has multiple heads for master. For
example when master is moved backwards (or when it gets moved on the client by
force). Updating the request object for HashToLocation to send over all the
master heads.
When the server builds non-master commits, we will want to send over non-master
heads too. We may consider having one big list of heads but I think that we
would still want to distinguish the non-master commit case in order to optimize
that use-case.
Reviewed By: quark-zju
Differential Revision: D27521778
fbshipit-source-id: cc83119b47ee90f902c186528186ad57bf023804
Summary:
This scenario appears when master moves backwards. Since the master group in
segmented changelog is append-only, a non-fast-forward master move will cause
multiple heads in the master group.
Since Segmented Changelog was updated to handle multiple master heads, we can
propagate the full list that we get from the client.
This diff makes the assumption that Mononoke will know to convert all client "master head"
hashes from HgChangesetId (Sha1) form to ChangesetId (Blake2). If any of the master
heads cannot be converted then it means the server might not be able to reliably answer the
client's question (in "ancestors(master_heads)", translate "this hash" to a path, or tell me
confidently that the "hash" is outside "ancestors(master_heads)"). That's an error case.
Reviewed By: quark-zju
Differential Revision: D27521779
fbshipit-source-id: 219e08a66aac17ac06d2cf02676a43c7f37e8e26
Summary:
This scenario appears when master moves backwards.
Since the IdDag can handle multiple master heads, the server can piggy-back on that
functionality and support multiple master heads when translating location to hash.
Reviewed By: quark-zju
Differential Revision: D27521780
fbshipit-source-id: c27541890d4fda13648857f010c11a25bf96ef67
Summary:
`panic!()`, and things which use `panic!()` like `assert!()`, take a literal format
string, and whatever parameters to format. There's no need to use `format!()`
with it, and it is incorrect to pass a non-literal string.
Mostly it's harmless, but there are some genuinely confusing asserts which
trigger this warning.
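The pattern being cleaned up looks like this (a generic example, not a call site from this diff):

```rust
fn check_len(name: &str, len: usize) {
    // Wrong (warned on here, and a hard error since Rust 2021): passing a
    // computed String where a literal format string is expected.
    // assert!(len > 0, format!("{} is empty", name));

    // Right: a literal format string, with the arguments passed separately.
    assert!(len > 0, "{} is empty", name);
}
```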
Reviewed By: dtolnay
Differential Revision: D27672891
fbshipit-source-id: 73929cc77c4b3d354bd315d8d4b91ed093d2946b
Summary:
Modify the `Debug` implementation for `minibytes::Bytes` to use `std::ascii::escape_default` to debug print a `Bytes` as an ascii-escaped bytestring.
For comparison, the `bytes` crate `Bytes` type provides the same functionality, though it doesn't use the standard library `escape_default` function: https://docs.rs/bytes/1.0.1/src/bytes/fmt/debug.rs.html#39-43
This change greatly improves the output of the `debugscmstore` command. If we don't want to make this the default behavior, we can provide a formatting wrapper type or I can specialize the output in `debugscmstore`, but I can't see any real downsides, especially given the `bytes` crate does the same thing, and we have a similar specialization for `HgId` (hex format in that case).
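The shape of the change can be sketched on a stand-in type (illustrative only; `minibytes::Bytes` wraps its buffer differently):

```rust
use std::fmt;

/// Illustrative wrapper standing in for `minibytes::Bytes`.
struct Bytes(Vec<u8>);

impl fmt::Debug for Bytes {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "b\"")?;
        for &byte in &self.0 {
            // escape_default keeps printable ASCII as-is and escapes the
            // rest (e.g. 0x00 -> \x00), byte-string-literal style.
            for c in std::ascii::escape_default(byte) {
                write!(f, "{}", c as char)?;
            }
        }
        write!(f, "\"")
    }
}
```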
Reviewed By: quark-zju
Differential Revision: D27642721
fbshipit-source-id: 8faba421fa5082a2098b13ef7d286e05eccb6400
Summary: Add the `with_key` function to `Entry`, which replaces its key with a provided key. Currently, scmstore returns incorrect results when multiple entries exist with different paths but the same HgId (as scmstore directly returns the path found on disk locally). This isn't a problem in the legacy API, which returns a bare `Vec<u8>` content, which is implicitly associated with the requesting key because it is the result of a single `get` call, or is irrelevant because the `prefetch` method doesn't directly return the results.
Reviewed By: andll
Differential Revision: D27664025
fbshipit-source-id: 014d44ca9a1dc2721685622fd2b077ed3483838f
Summary:
We are moving our skeleton for eden top a bit further, by creating the view
structure for eden top. This creates the widget for the banner of eden top,
the help page and the main page, as well as a couple sections of the main
page.
The main page is displayed by default and the pages are toggled with `h` for the
help page and `esc` to return to the main page. Visible and hidden widgets are not
implemented in termwiz yet so we have to do a bit of hacking to hide and
display widgets ourselves.
Each of the sections is stubbed with placeholder text for testing.
Reviewed By: xavierd
Differential Revision: D26892620
fbshipit-source-id: a7bb4d0e11f3a8968ef071e7f585d07a9c286880
Summary:
D27659634 (8e8aaa61d6) removed these files, so let's drop their exclusions from
test-check-code.t
Reviewed By: sfilipco
Differential Revision: D27682136
fbshipit-source-id: f8e10fac37ea90fb2782b960faf4536f1ff9133b
Summary:
fctx is not guaranteed to have the _path and _filenode attributes. Those are
specific to implementations, e.g. `absentfilectx` does not have them.
`basefilectx` instead defines the `path()` and `filenode()` for general fctx
use.
Reviewed By: quark-zju
Differential Revision: D27667176
fbshipit-source-id: 1d7889d264b597665ef05f84a752323f078cb455
Summary:
Create a fork of the Mercurial code that we can use to build server
rpms. The hg servers will continue to exist for a few more months while we move
the darkstorm and ediscovery use cases off them. In the mean time, we want to
start making breaking changes to the client, so let's create a stable copy of
the hg code to produce rpms for the hg servers.
The fork is based off c7770c78d, the latest hg release.
This copies the files as is, then adds some minor tweaks to get it to build:
- Disables some lint checks that appear to be bypassed by path
- sed replace eden/scm with eden/hg-server
- Removed a dependency on scm/telemetry from the edenfs-client tests since
scm/telemetry pulls in the original eden/scm/lib/configparser which conflicts
with the hg-server configparser.
allow-large-files
Reviewed By: quark-zju
Differential Revision: D27632557
fbshipit-source-id: b2f442f4ec000ea08e4d62de068750832198e1f4