Summary:
## Wider goal
We want the flexibility to return hydrated responses for `getbundle` wireproto
requests for draft commits. This means that the responses will contain not
only the commit data (as they do now), but also trees and files.
For context, when an "unhydrated" response is returned for the `getbundle`
request for a draft commit, we expect one of two things to happen later
in the e2e scenario:
- either `hg` client would immediately make another wireproto request
(`gettreepack`, `getpackv1`) within the same client `hg` command execution
- or a subsequent `hg update` call will cause another wireproto request
In any case, another request is needed before the pulled commit can be used.
This request can hit a different server; sometimes it can even be Mercurial
instead of Mononoke. Specifically, it can be Mercurial instead of Mononoke if
the `fallback` path markers are configured incorrectly. In that case we have a
problem, as Mercurial is incapable of serving `gettreepack` or `getpackv1` for
infinitepush commits.
One way to deal with this is to always have correct path markers, but that is
prone to human mistakes. Another way is to guarantee that Mononoke returns
everything in the original `getbundle` request. We don't want to do this for
public commits, as `pull`s of public commits typically fetch thousands of those
commits and never care about tree or file data for all but one of them. Draft
commits are different however, as they are usually exactly what the client
intends to use, so hydrating those is fine. Still, we want this behavior to
be gated behind a config flag.
## This diff
A lot of the needed code is already implemented in the hg-sync job, bundle
generating variant. So prior to implementing the actual behavior described
above, let's move the relevant bits to `getbundle_response`. Later we can clean
them up a bit (asyncify) and use them to implement the needed behavior.
Reviewed By: StanislavGlebik
Differential Revision: D20068839
fbshipit-source-id: 0ab63d57b2d167401b7ee8864fe7760f5f65f8ec
Summary:
This is the moral equivalent of D20115877 in fbcode. See that diff for
motivation.
Reviewed By: StanislavGlebik
Differential Revision: D20118575
fbshipit-source-id: 8f77f572068e611003b1344be3434f2d04ec56ca
Summary:
Previously it was hard to tell whether the process was actually responsible
for generating derived data or was just waiting for it to be generated.
Let's make this distinction clearer.
Reviewed By: johansglock
Differential Revision: D20138284
fbshipit-source-id: 52ae12679db2f61869f048baf2a603b456710a71
Summary:
Add `__enter__()` and `__exit__()` methods to `TelemetrySample` so it can be
used in `with` statements. It will automatically track the runtime for the
body of the `with` context, and will record this in the `duration` field of
the sample. It will also set the `success` field to True if the context exits
normally and False if it exits due to an exception. On an exception the
`error` field will also be populated with the exception message.
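The behavior described above can be sketched as follows (a minimal Python sketch, not the real `TelemetrySample` class; the `fields` dict here is a stand-in for however the real class stores sample columns):

```python
import time


class TelemetrySampleSketch:
    """Minimal sketch of a telemetry sample usable in `with` statements."""

    def __init__(self):
        self.fields = {}

    def __enter__(self):
        self._start = time.monotonic()
        return self

    def __exit__(self, exc_type, exc_value, tb):
        # Track the runtime of the `with` body in the `duration` field.
        self.fields["duration"] = time.monotonic() - self._start
        # `success` is True on a normal exit, False if an exception escapes.
        self.fields["success"] = exc_type is None
        if exc_type is not None:
            # On an exception, also populate `error` with the message.
            self.fields["error"] = str(exc_value)
        return False  # do not swallow the exception
```

Usage mirrors the description: `with sample: ...` records duration, success, and (on failure) the error message automatically.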
Reviewed By: genevievehelsel
Differential Revision: D20112723
fbshipit-source-id: d55ac3f1b53c23dc001f92a4f8eae431db8954e1
Summary:
Add a TelemetryLogger class that logs directly to scuba, and use that if we
are building in a Facebook environment.
Reviewed By: genevievehelsel
Differential Revision: D20112727
fbshipit-source-id: 284ca45d1902d51b753ff9a90debf3dfa8282f82
Summary:
Add a `TelemetryLogger` class that abstracts the mechanism we use to log
telemetry samples. This makes it possible to plug in alternative
implementations.
This includes 3 initial implementations of this class:
* `ExternalTelemetryLogger` logs samples by calling an external command
* `LocalTelemetryLogger` logs JSON samples to a local file
* `NullTelemetryLogger` simply discards all samples
This also moves some of the helper code for constructing telemetry samples
from the `EdenInstance` class and into `TelemetryLogger`.
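A rough shape of the abstraction (a hedged sketch: the three class names follow the summary, but the `log` method name and the constructor arguments are assumptions):

```python
import json
import subprocess
from abc import ABC, abstractmethod
from typing import Dict, List


class TelemetryLoggerSketch(ABC):
    """Abstracts the mechanism used to write out telemetry samples."""

    @abstractmethod
    def log(self, sample: Dict[str, object]) -> None:
        ...


class ExternalTelemetryLoggerSketch(TelemetryLoggerSketch):
    """Logs samples by piping JSON to an external command."""

    def __init__(self, cmd: List[str]) -> None:
        self.cmd = cmd

    def log(self, sample):
        subprocess.run(self.cmd, input=json.dumps(sample).encode(), check=False)


class LocalTelemetryLoggerSketch(TelemetryLoggerSketch):
    """Appends JSON samples to a local file."""

    def __init__(self, path: str) -> None:
        self.path = path

    def log(self, sample):
        with open(self.path, "a") as f:
            f.write(json.dumps(sample) + "\n")


class NullTelemetryLoggerSketch(TelemetryLoggerSketch):
    """Simply discards all samples."""

    def log(self, sample):
        pass
```

Plugging in an alternative implementation is then just a matter of constructing a different subclass.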
Reviewed By: genevievehelsel
Differential Revision: D20112725
fbshipit-source-id: dbe24952a92fe548631fc169f146cc14008a7bb6
Summary:
Update the thrift `getDaemonInfo()` call to also return the fb303 status.
This allows the CLI to make a single thrift call instead of 2 when checking if
the EdenFS daemon is healthy.
Reviewed By: genevievehelsel
Differential Revision: D20130406
fbshipit-source-id: 9d25341e1d5f82fb1a921e1d7b1ebd34bcf19dc8
Summary:
Fix the `check_health()` function to always set a timeout when querying for
EdenFS's health. Originally we used to always set a default timeout of 60
seconds when creating thrift connections to EdenFS, but this was removed in
D5942205. In practice we ideally really want a handful of specific thrift
calls (e.g., 'checkOutRevision()`, `getScmStatusV2()`) to have extremely high
timeouts, but most other calls should have fairly short timeouts.
For now this ensures that we apply a 3 second timeout by default when checking
for EdenFS health. The `edenfsctl status` call did explicitly set a 15 second
timeout, but other commands like `edenfsctl clone` and `edenfsctl restart`
would also check for health and were not applying their own timeout.
Also add a thrift timeout for the `initiateShutdown()` call when doing a full
restart in `edenfsctl restart`.
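The timeout policy can be illustrated with a tiny helper (a hypothetical sketch; the real code passes the timeout through when building the thrift client, and `effective_timeout` is not a real function name):

```python
from typing import Optional

DEFAULT_HEALTH_CHECK_TIMEOUT = 3.0  # seconds


def effective_timeout(timeout: Optional[float]) -> float:
    """Apply the short default health-check timeout unless the caller set one.

    Callers like `edenfsctl status` can still pass an explicit, longer
    timeout; callers that pass None get the 3 second default, so a hung
    daemon cannot stall commands like `clone` or `restart` indefinitely.
    """
    return DEFAULT_HEALTH_CHECK_TIMEOUT if timeout is None else timeout
```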
Reviewed By: chadaustin
Differential Revision: D20130405
fbshipit-source-id: c59118dbcafc2ed0d29206e33891f1a58da8c05f
Summary:
Right now, all of our manifest parsing and evaluation is in the repo() class, but this is a design mistake. Over a repo's convert lifetime, a single repo will have many different manifests, based on branch and location in the commit history. What's worse is that the current design makes it hard to build unit tests and new features like include evaluation.
This commit creates a whole new class called repomanifest that represents a specific manifest (and its included files). It also has unit tests covering the various operations that the manifest performs, such as path and revision mapping. This commit does not modify the existing converter code outside of the class to use this new implementation.
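The split might look something like this (a heavily hedged Python sketch; the real `repomanifest` class, its constructor, and its mapping rules live in the converter code and certainly differ in detail):

```python
class RepoManifestSketch(object):
    """Represents one parsed manifest: path mappings plus included manifests."""

    def __init__(self, mappings, includes=()):
        # mappings: source-path prefix -> destination-path prefix
        self.mappings = dict(mappings)
        for included in includes:
            # Included manifests contribute their mappings too.
            self.mappings.update(included.mappings)

    def map_path(self, path):
        """Return the destination path for `path`, or None if unmapped."""
        # Longest prefix wins, so more specific rules shadow general ones.
        for src in sorted(self.mappings, key=len, reverse=True):
            if path == src or path.startswith(src + "/"):
                return self.mappings[src] + path[len(src):]
        return None
```

Having the mapping logic on its own class is what makes operations like path mapping unit-testable without a full repo() instance.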
Reviewed By: tchebb
Differential Revision: D19402995
fbshipit-source-id: b97dadcc595c6332f4495460618317194873a780
Summary:
In the past I saw test breakages where the stderr from the remote ssh process
becomes incomplete. It's hard to reproduce by running the tests directly.
But inserting a sleep in the background stderr thread exposes it trivially:
```
# sshpeer.py: class threadedstderr
def run(self):
    # type: () -> None
    while not self._stop:
        buf = self._stderr.readline()
+       import time
+       time.sleep(5)
        if len(buf) == 0:
            break
```
Example test breakage:
```
--- a/test-commitcloud-sync.t
+++ b/test-commitcloud-sync.t.err
@@ -167,8 +167,7 @@ Make a commit in the first client, and sync it
   $ hg cloud sync
   commitcloud: synchronizing 'server' with 'user/test/default'
   backing up stack rooted at fa5d62c46fd7
   remote: pushing 1 commit:
-  remote:     fa5d62c46fd7  commit1
   commitcloud: commits synchronized
   finished in * (glob)
....
```
Upon investigation it's caused by 2 factors:
- The connection pool calls pipee.close() before pipeo.close(), to work around
an issue that I suspect was solved by D19794281.
- The new threaded stderr (pipee)'s close() method does not actually close the
pipe immediately. Instead, it limits the text to read to one more line at
most, which causes those incomplete messages.
This diff made the following changes:
- Remove the `pipee.close` workaround in connectionpool.
- Remove `pipee.close`. Embed it in `pipee.join` to prevent misuses.
- Add detailed comments in sshpeer.py for the subtle behaviors.
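The "embed close in join" fix can be illustrated in miniature (a Python sketch, not the actual `sshpeer.py` code; it reads from an `os.pipe` rather than a real ssh process):

```python
import os
import threading


class ThreadedStderrSketch(threading.Thread):
    """Drains a stderr pipe on a background thread.

    There is deliberately no public close(): closing the pipe is embedded
    in join(), so callers cannot close it while unread lines remain, which
    is the kind of misuse that produced the truncated "remote: ..." output.
    """

    def __init__(self, read_fd):
        super().__init__(daemon=True)
        self._file = os.fdopen(read_fd, "rb")
        self.lines = []

    def run(self):
        # Read until EOF; every line is recorded (the real code forwards
        # them to the ui as "remote: " output).
        for line in self._file:
            self.lines.append(line)

    def join(self, timeout=None):
        # Wait for the reader to drain everything, then close the pipe.
        super().join(timeout)
        self._file.close()


read_fd, write_fd = os.pipe()
stderr_thread = ThreadedStderrSketch(read_fd)
stderr_thread.start()
os.write(write_fd, b"remote: pushing 1 commit:\n")
os.close(write_fd)  # EOF lets run() finish
stderr_thread.join()
```

Because join() waits for EOF before closing, no trailing stderr line can be lost the way it was in the flaky test above.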
Reviewed By: xavierd
Differential Revision: D19872610
fbshipit-source-id: 4b61ef8f9db81c6c347ac4a634e41dec544c05d0
Summary:
This makes `peer.close()` actually close the ssh connection if it's an
sshpeer. This affects the `clone` path to actually clean up the ssh connection
so we don't depend on (fragile) `__del__`.
I traced the code back to peerrepository.close in 2011 [1]. At that time it
seems the codebase depended on `__del__`. Nowadays the codebase calls `close()`
properly, so I think it's reasonable to make the change.
[1]: https://www.mercurial-scm.org/repo/hg/rev/d747774ca9da.
Reviewed By: ikostia
Differential Revision: D19911393
fbshipit-source-id: ea640d1cd82ffcb786e22f47da8116c7f50a4690
Summary:
The added function can be used by extensions to run extra logic before the
"clone" function closes the repos or peers.
This is needed to make the next diff work. Otherwise extensions like remotenames will try to write to a closed sshpeer and cause errors.
Reviewed By: DurhamG
Differential Revision: D19911390
fbshipit-source-id: ca1364e808cebb632e051fbbdcfe4bf0dca721bc
Summary:
These comments end up being a source of churn as we roll out D20125635, and anyway are not particularly meaningful after the transformations performed by autocargo. For example:
```
bytes = { version = "0.4", features = ["serde"] } # todo: remove
```
^ This doesn't mean the generated Cargo.toml intends to drop its bytes dependency altogether, but just that it will be migrated to a different version that is present in the third-party/rust/Cargo.toml but not visible in the generated Cargo.toml.
Reviewed By: jsgf
Differential Revision: D20128612
fbshipit-source-id: a9e7b29ddc4b26bc47a626dd73bdaa4771ee7b18
Summary:
Since Mononoke's filenodes were migrated to the derived data framework, the
hg_linknode_populated alarm has been firing. The main reason is that there's
now a delay between the hg changeset being generated and the filenodes being
generated.
This diff fixes it by making sure the walker won't visit hg changesets without
generated filenodes (note that the walker will visit these changesets later,
after the filenodes are generated).
Reviewed By: ahornby
Differential Revision: D20067615
fbshipit-source-id: 285e9a3d8c89b85441491c889a8458c86ca0e3a8
Summary:
Update the `print_status()` function to take a `clidispatch::io::IO` object as
a parameter, instead of a simple output object. This will allow us to also
print error messages from this function in a future diff.
Reviewed By: quark-zju
Differential Revision: D19958504
fbshipit-source-id: bf482fdc4420e1350363a730c6a539cd760aef25
Summary: Updates the C code to support unicode filenames and states.
Reviewed By: simpkins
Differential Revision: D19786275
fbshipit-source-id: e7aeb029b792818b1b1a9c5d3028640b56522235
Summary: There is no need to open a transaction otherwise.
Reviewed By: DurhamG
Differential Revision: D20109840
fbshipit-source-id: e47adaaeea2d7565f3629701d8de4a67d4b55182
Summary:
Verifying the changelog is quite slow and we've had more users needing
to run hg recover these days. Let's finally get rid of the verify step.
Reviewed By: simpkins
Differential Revision: D20109706
fbshipit-source-id: a512d9e11716514bce986b0e3a26347fe6afd955
Summary: Most of the fixes related to encoding in `patch.py`
Reviewed By: DurhamG
Differential Revision: D19713378
fbshipit-source-id: 66ccbd0fc7826ab2d4c05173c7e9edb96700d106
Summary:
There is no need to generate an expensive file history stream if only one node is requested.
I refactored the code that generates the stream of history commits so that it first yields the nodes and only then prefetches their parents. That helps solve the latency problem for a history request for a single commit.
I removed the BFS queue and added two state variables: ready nodes and already-processed nodes:
* The processed nodes are the ones that were returned as part of the history stream on the last iteration; they can now be used to construct the next BFS layer: prefetch fastlog batches, fill the commit graph, and take parents in BFS order to form the next batch of nodes.
* The ready nodes are used on the first iteration: there are no processed nodes yet, but there are some nodes that are ready to be returned.
I believe that removing the queue simplified the code and logic a little bit.
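The two-state-variable iteration described above can be sketched as a generator (a plain Python sketch of the idea; the real implementation is an async stream in Mononoke and prefetches fastlog batches rather than reading a dict):

```python
from typing import Dict, Iterator, List


def history_stream(start: str, parents: Dict[str, List[str]]) -> Iterator[str]:
    """Yield history nodes layer by layer, fetching parents lazily.

    `ready` holds nodes that can be returned right away (only non-empty on
    the first iteration); `processed` holds the nodes returned on the last
    iteration, whose parents form the next BFS layer.  A caller that only
    wants the first node never triggers the parent prefetch at all.
    """
    seen = {start}
    ready: List[str] = [start]
    processed: List[str] = []
    while True:
        if not ready:
            # Build the next BFS layer from the nodes we already returned
            # (this is where the real code prefetches fastlog batches).
            for node in processed:
                for parent in parents.get(node, []):
                    if parent not in seen:
                        seen.add(parent)
                        ready.append(parent)
            if not ready:
                return
        for node in ready:
            yield node
        processed, ready = ready, []
```

Asking for a single node (`next(...)` once) returns immediately, while iterating fully still walks the graph in BFS order.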
Reviewed By: StanislavGlebik
Differential Revision: D19818100
fbshipit-source-id: c30d28c623464ba3552a00e8542552f7655076ef
Summary:
During our tests we noticed that we can send too many blobstore read requests to the
mapping. Let's add exponential backoff to prevent that.
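Exponential backoff of the kind described is commonly implemented along these lines (a generic Python sketch, not Mononoke's Rust code; the function and parameter names are made up):

```python
import random
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def with_backoff(
    fn: Callable[[], T],
    attempts: int = 5,
    base_delay: float = 0.1,
    sleep: Callable[[float], None] = time.sleep,
) -> T:
    """Retry fn, roughly doubling the delay after each failure."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error
            # 2**attempt growth (plus jitter) keeps retry pressure off the
            # backing store instead of hammering it in a tight loop.
            sleep(base_delay * (2 ** attempt) * (1 + random.random()))
    raise AssertionError("unreachable")
```

The injectable `sleep` is just for testability; callers would normally use the default.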
Reviewed By: ikostia
Differential Revision: D20116043
fbshipit-source-id: 6fecbda4c36a5065b77ba9df561c6d9c6a969089
Summary:
Memcache doesn't care (because both old and new Bytes implement `Into<IOBuf>`),
but Thrift is on Bytes 0.5. We have our caching ext layer in the middle, which
wants Bytes 0.4. This means we end up copying things we don't need to copy.
Let's update to fewer copies. I didn't update apiserver, because a) it's going
away, and b) those bytes go into Actix, and Actix isn't upgrading to Bytes 0.5
any time soon! Besides, it doesn't actually need updating outside of tests anyway.
Reviewed By: dtolnay
Differential Revision: D20006062
fbshipit-source-id: 42766363a0ff8494f18349bcc822b5238e1ec0cd
Summary:
Add methods to `version.py` to get the version of the current running Eden CLI
code, rather than looking for the current installed RPM version. This means
that we no longer have to execute a separate subprocess that examines the RPM
database. This also makes sure we log the correct version information in
cases where developers are testing local development code even though they
have a different RPM version currently installed.
Reviewed By: genevievehelsel
Differential Revision: D20102259
fbshipit-source-id: ba9eb0c563c7f7c929170b130566946a67f679a5
Summary:
Update `get_installed_eden_rpm_version_parts()` to simplify the return type
from `Tuple[Optional[str], Optional[str]]` to `Optional[Tuple[str, str]]`
This also improves the output of `get_installed_eden_rpm_version()` when the
RPM is not installed so that it returns `<Not Installed>` rather than
`<Not Installed>-` with a trailing dash.
Additionally this updates the telemetry logging to include the full
version+release string. With our current version number scheme there can be
multiple packages with the same version but different release numbers if we
release multiple packages within a single day.
Reviewed By: genevievehelsel
Differential Revision: D20102263
fbshipit-source-id: 24d2df4cdca6ac576267be66b85422c3e50f1229
Summary:
Move the `get_running_eden_version()` functions from the `version.py` module
into the `EdenInstance` class in `config.py`. This helps eliminate some
circular dependency cycles in the code, so I can start breaking a few modules
out of the main CLI `lib` library.
I also changed the return type of `get_running_version_parts()` from
`Tuple[Optional[str], Optional[str]]` to just `Tuple[str, str]`. A dev build
of EdenFS already returns empty strings (rather than `None`) for the version
and release fields. There shouldn't really be any cases where `None` is
returned here, and even if there were I don't think we would ever care to
distinguish this from the empty string case.
Reviewed By: genevievehelsel
Differential Revision: D20102262
fbshipit-source-id: 564ec5ee820026a0c86c70ad0d7cfd3750ad94f5
Summary: Log when a user runs a normal (full) restart, including success or not. Success is determined by the return code of `start_daemon()` (which calls `subprocess.call()`), similar to the success criteria for graceful restart logging.
Reviewed By: fanzeyi
Differential Revision: D20098949
fbshipit-source-id: 0c6f4927571f686ed6b678d5c814f76c78322274
Summary: log when a user runs eden doctor, and log how many errors they encounter
Reviewed By: fanzeyi
Differential Revision: D20084617
fbshipit-source-id: 122a062c538931eb906cbfcd515ec1e8093efc38
Summary: This is required for eden doctor cli tests when adding logging to the eden doctor code path. This can just be a stub since we don't consume these scuba log statements during testing
Reviewed By: fanzeyi
Differential Revision: D20087861
fbshipit-source-id: 6805ae8d9c51e33a118cbda76461483962e876f3
Summary: The TypeCheck test cases were yelling at me about this missing annotation when running locally, so add it to fix those tests.
Reviewed By: fanzeyi
Differential Revision: D20098619
fbshipit-source-id: 630e7bca2b63033b34d72d1c739184819d3d86a3
Summary: Moving `compat` one level down to the call sites of subcommand functions.
Reviewed By: farnz
Differential Revision: D20085398
fbshipit-source-id: 461e147d2ae6e560b3a75fb92fa6b23f9f54d13e
Summary:
The problem is that the datapack files are not flushed to disk when a prefetch happens. Putting a pair of braces around the `HgBackingStore` ensures the `HgImporter` is closed by the time we verify the prefetch with `hg cat`: its destructor terminates the `debugedenimporthelper` process, which flushes the datapack files.
The real cause of the test failure is still unclear but I believe this is the correct way of doing this test.
Reviewed By: xavierd
Differential Revision: D20090249
fbshipit-source-id: 8e3966936a402c92311919433282027846d065e8
Summary: The Windows SDK doesn't define dirent. Define it here for adding inode support to EdenFS on Windows.
Reviewed By: simpkins
Differential Revision: D19956272
fbshipit-source-id: 1bdf9a7563c194fe38008741b09668242ffa64ee
Summary:
Logging on Windows doesn't work when the async flag is set. We haven't debugged it yet, so remove the async mode flag until we fix that.
Also bumping up the log level to 4. This would help to get more info while we are running in beta.
Reviewed By: simpkins
Differential Revision: D19776609
fbshipit-source-id: ccd6a6ed4d81f4a2edd550c6bb7195ac8b8b4d16
Summary:
During S196197, the lease expired and we were rederiving the same derived data
over and over again for a big commit.
This diff adds lease renewal, which should help with this problem.
Reviewed By: HarveyHunt
Differential Revision: D20093323
fbshipit-source-id: d139abf6659722f47ea40d9b2f279daa03623ff4
Summary: log when a user runs eden rage
Reviewed By: simpkins
Differential Revision: D20084529
fbshipit-source-id: a92c5472554cd541c9a7d340edcf6845c1c9c0c0
Summary:
fetch_root_filenode is called by FilenodesOnlyPublicMapping to figure out
whether filenodes were already derived. Previously it first derived the hg
changeset and then looked up the root manifest in the db. However, if the hg
changeset is not derived, then the filenodes couldn't possibly be derived
either, and we can return an answer faster.
This is useful in the next diff, where I change the walker.
Reviewed By: ahornby
Differential Revision: D20068819
fbshipit-source-id: 17f066c437e0b1f7bbeb8f6e247eadc9afe94f90
Summary:
The blobstore_healer has never waited for MyRouter before querying for slave
status, but it ended up implicitly working because creating a blobstore
required a SQL factory, and creating a SQL factory would result in waiting for
MyRouter.
Now that creating a blobstore doesn't require SQL factory unless you're going
to actually use it (which the healer isn't: it doesn't use a multiplexblob, it
uses the underlying blobstores instead), we no longer wait properly for
MyRouter, so if MyRouter isn't there when we boot, we crash.
This fixes that.
Reviewed By: ahornby
Differential Revision: D20094829
fbshipit-source-id: 82b7e8d893a01049d1f434ee8dff36a877a0d2f4
Summary:
Add support for loading by GitSha1 Aliases. This relies on the change to
Alias::GitSha1 earlier in the stack.
Reviewed By: ikostia
Differential Revision: D19903577
fbshipit-source-id: 73cdccc04af61fa524c3683851d8af9ae90d31dc
Summary:
D17135557 added a bunch of `pyre-fixme` comments to the EdenFS integration
tests for cases where Pyre cannot detect that some attributes are initialized
by the test case `setUp()` method.
It looks like Pyre's handling of `setUp()` is somewhat incorrect: if a class
has a `setUp()` method, this currently suppresses all uninitialized attribute
errors (even if some attributes really are never initialized). However, Pyre
does not detect `setUp()` methods inherited from parent classes, and always
warns about uninitialized attributes in that case, even when they are
initialized.
Let's change these comments from `pyre-fixme` to `pyre-ignore` since this
appears to be an issue with Pyre rather than with this code. T62487924 is
open to track adding support for annotating custom constructor methods, which
might help here. I've also posted in Pyre Q&A about incorrect handling of
`setUp()` in derived classes.
Reviewed By: grievejia
Differential Revision: D19963118
fbshipit-source-id: 9fd13fc8665367e0780f871a5a0d9a8fe50cc687
Summary: As I work, it's getting harder and harder to keep my multiple changes from introducing merge conflicts between different branches. We need to break out the repo_source's implementation into a bunch of different files to make it easier to keep things separate.
Reviewed By: zhonglowu, tchebb
Differential Revision: D20015946
fbshipit-source-id: bf954ac581e5ca9e43c091b6b1b4c539c14471f2
Summary:
Fix the PathRelativizer APIs to accept `Path` and even `str` arguments instead
of just `PathBuf`. The old code required a `PathBuf`, which often forced
callers to make a copy of the path data.
Reviewed By: quark-zju
Differential Revision: D19958505
fbshipit-source-id: 6fa40dd4b75df4e3faf9ad2ae4f0e4e6595669f6
Summary:
This should give us a slightly better idea of what hosts are doing to
troubleshoot duplicate derivation.
Also, let's make the logging a bit less confusing.
Reviewed By: StanislavGlebik
Differential Revision: D20070619
fbshipit-source-id: 91cc264b7043b8fc8c21c007832fba328ef0017d
Summary:
This updates our multiplexed blobstore configuration to carry its own DB
config. The upshot of this change is that we can move the blobstore sync queue
(a fairly unruly table) to its own DB.
Another nice side effect of this is that it cleans up a bunch of other code, by
finally decoupling the blobstore config from the DB config. For example,
places that need to instantiate a blobstore can now do so even without a DB
config (such as wireproto logging).
Obviously, this cannot land until we update the configs to include this. I'll
do so in Configerator prior to landing the diff.
Reviewed By: HarveyHunt
Differential Revision: D19973905
fbshipit-source-id: 79e4ff92cdb989aab4532decd3fe4fd6c55e2bb2
Summary:
I'd like to refactor our multiplex blob to store its DB using a different
shard. In preparation for doing so, let's:
- Extract parsing DB configs from storage configs
- Tidy up some related places that take a reference when they actually need
ownership (which is sort of wasteful).
Reviewed By: StanislavGlebik
Differential Revision: D19973906
fbshipit-source-id: 82baceb892e9e24e5fd0349ffa5503884c177a7a
Summary:
Most of EdenFS's main logging is done through folly::logging, however a number
of libraries that we use do logging through glog. Previously we set glog's
`--minloglevel` setting to `0`, and we use the default `--v=0` setting.
This enabled glog `VLOG` messages, but only at VLOG level `0`.
Now that the Rust backing store code can fetch directly from memcache this now
links in some additional memcache library code that has some `VLOG(0)`
messages that are logged fairly frequently. These aren't useful for us to
have in our logs, so reduce the `minloglevel` to `1` for now, which disables
all `VLOG` messages.
Reviewed By: genevievehelsel
Differential Revision: D20050589
fbshipit-source-id: 167e301d61e46ae3c19975e0c9233eda371495c0
Summary: Now that it no longer depends on mononoke_types, we can build it with cargo.
Reviewed By: krallin
Differential Revision: D20070438
fbshipit-source-id: 1b2f9cc3640c58fd38e962c7c738d08cbb22a71d
Summary:
Bytes 0.5 is a dependency of newer tokio; it's also newer, and thus better.
Staying on 0.4 means that copies between Bytes 0.4 and 0.5 need to be made;
this will be especially bad in the LFS code, since 10+MB buffers would have to
be copied...
One main API change is for the configparser. The code used to take Into<Bytes>
for the keys, I switched it to AsRef<[u8]>.
For hg_memcache_client, an extra copy is performed to build a Delta. Since this
code uses an old tokio and is being replaced right now, the effort of switching
to a new tokio and new bytes was not deemed worth it; the copy will do for now.
Reviewed By: dtolnay
Differential Revision: D20043137
fbshipit-source-id: 395bfc3749a3b1bdfea652262019ac6a086e61e0
Summary:
This is the second (and last) step on removing RocksDB as a blobstore.
Check the task for more description.
Context for OSS:
> The issue with rocksblob (and to some extent sqlite) is that unless we
> introduce a blobstore tier/thift api (which is something I'm hoping to avoid
> for xdb blobstore) we'd have to combine all the mononoke function like hg,
> scs, LFS etc into one binary for it to have access to rocksdb, which would be
> quite a big difference to how we deploy internally
(Note: this ignores all push blocking failures!)
Reviewed By: farnz
Differential Revision: D20001261
fbshipit-source-id: c4b2b2a393b918d17680ad483aa1d77356f1d07c
Summary:
Add option to start the roots of the walk from any graph node, rather than just bookmarks.
This is useful when reproducing issues loading a key, validating a changeset/filenode etc, or to get consistent results on things like sizing where specifying root by bookmark would result in changes between runs.
Reviewed By: farnz
Differential Revision: D19886707
fbshipit-source-id: b7361cbec894aba08b6f702ff0731b9b201224d3
Summary:
Add `scsc export`. Analogous to `svn export`, this exports the contents of a
directory within a commit to files on disk, without a local checkout.
Reviewed By: mitrandir77
Differential Revision: D20006307
fbshipit-source-id: 5870712172cd8a030e85dbff75273c28ab0c332c
Summary: so that it can be shared more easily with AsyncUDPSocket
Reviewed By: yangchi
Differential Revision: D19851480
fbshipit-source-id: ec8cdb852519724db6f89cf70c4a4169de5028b6
Summary:
`treedirstatemap._repacked` is sometimes set in write(), but does not appear
to be used anywhere. Remove it. (I noticed this since Pyre complains about
it if you enable type checking for `write()`)
Reviewed By: xavierd
Differential Revision: D19958219
fbshipit-source-id: a55e237865160191d814ed950f69c3113bec4f64
Summary:
Add type annotations for the propertycache type.
Unfortunately at the moment Pyre still can't properly type check code that
uses this class, as it does not understand the special `__get__()` method.
It looks like support for this is hopefully coming in D19206575.
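For context, `propertycache` is a non-data descriptor whose `__get__()` computes the value once and caches it on the instance under the same name, so later lookups bypass the descriptor entirely; that self-replacing behavior is what Pyre would need to model. A minimal sketch (not the exact Mercurial implementation):

```python
class propertycache_sketch(object):
    """Computes a value once, then caches it on the instance.

    Because the result is stored in the instance __dict__ under the same
    attribute name, instance-dict lookup wins on later accesses and
    __get__ never runs again for that instance.
    """

    def __init__(self, func):
        self.func = func
        self.name = func.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self  # accessed on the class, not an instance
        result = self.func(obj)
        obj.__dict__[self.name] = result  # cache: shadows the descriptor
        return result


class Repo(object):
    computed = 0

    @propertycache_sketch
    def expensive(self):
        Repo.computed += 1
        return 42
```

A static checker only sees the descriptor object on the class, which is why annotating code that uses it is awkward without special support for `__get__()`.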
Reviewed By: xavierd
Differential Revision: D19958223
fbshipit-source-id: 0f8f15fc6935ec3feaef41d3be373a85225276fe
Summary:
Add type annotations for `dirstate.status()` and
`filesystem.pendingchanges()`
Unfortunately Pyre appears to choke when processing the `dirstate.status()`
function, and currently does not actually report type errors inside it. I've
let the Pyre team know about this.
(If Pyre did work correctly it would report one issue, since it doesn't really
understand the `rootcache` decorator applied to `dirstate._ignore`.)
Reviewed By: xavierd
Differential Revision: D19958226
fbshipit-source-id: a1cd4b9402a0a449481035cee819533c56b9b336
Summary:
This module previously used to handle deciding how a particular module should
be imported if it had multiple versions (e.g., pure Python or native).
However, as of D18819680 it was changed to always import the native C version.
Let's go ahead and remove it entirely now. Using `policy.importmod` simply
makes it harder for type checkers to figure out the actual module that will be
used.
The only functionality that `policy.importmod()` still provided was verifying
that the module contained a "version" field that looked like what was
expected. In practice these version numbers are not bumped often, so this
doesn't really seem to provide much value in checking that we imported the
correct version that we expected to be shipped with this release.
Reviewed By: xavierd
Differential Revision: D19958227
fbshipit-source-id: 05f1d027d0a41cf99c4aa93cb84a51e830305077
Summary:
Add *.pyi type stub files for most of the native C extensions.
This allows Pyre to type check functions that use these extensions.
These type annotations likely aren't complete, but contain enough information
to allow Pyre to pass cleanly on the existing type-checked locations in the
code using these modules.
Reviewed By: xavierd
Differential Revision: D19958220
fbshipit-source-id: 85dc39a16e595595a174a8e59e419c418d3531be
Summary:
This moves the build rules for the extensions in mercurial/cext into a TARGETS
file in this directory.
This will allow us to start writing `*.pyi` files that contain type
information for these modules, and store them alongside the corresponding `.c`
files. By having the build rules in the top-level `eden/scm` directory we
would have needed to keep the `.pyi` files for these modules directly in the
`eden/scm` directory instead, as the namespace for the `pyi` files is assumed
to be the basemodule plus their path relative to the TARGETS file.
Reviewed By: xavierd
Differential Revision: D19958222
fbshipit-source-id: fdc26ead16663036ffa2562a96eb1649f91cba81
Summary:
Mercurial wishes to use this crate, but pulling in mononoke_types brings way
too many dependencies. Since the only reason mononoke_types is brought in is
for the Sha256 type, let's just hardcode it to [u8; 32].
Reviewed By: krallin
Differential Revision: D20003596
fbshipit-source-id: 53434143c61cd1a1275027200e1149040d30beae
Summary:
The last diff fixed this for fsmonitor. Let's skip these same paths for
non-fsmonitor.
Reviewed By: quark-zju
Differential Revision: D20014808
fbshipit-source-id: 02e3cd9aa29d9c024ba3e8e42a46e21a7c8dfc30
Summary:
This hook was implemented to prevent incorrect users from moving a
bookmark. However, it doesn't work and the functionality is now implemented by
`is_allowed_user` in the pushrebase pipeline.
Remove the unused hook.
Reviewed By: johansglock
Differential Revision: D20030479
fbshipit-source-id: bcbc9508eebe77cffbc7936382ba4d345b76f74f
Summary:
Watchman may report invalid utf-8 filenames, even after they've been
deleted. Let's skip them, and print a warning.
Reviewed By: sfilipco
Differential Revision: D20012187
fbshipit-source-id: b13550918a8330ef3eb5c546105d1e054dcb7724
Summary:
Error strings were being converted to unicode if they contained certain
characters. This caused python 2 Mercurial to throw various errors when it tried
to turn them into strings to report errors.
Let's return cpython_ext::Str instead of String.
Reviewed By: sfilipco
Differential Revision: D20012188
fbshipit-source-id: af6fa7d98d68e3c188292e4972cfc1bdb758dbdf
Summary:
We're working towards sharding Bonsais. Let's make them easier to cache by also
not allowing arbitrarily large commit messages.
Reviewed By: StanislavGlebik
Differential Revision: D20002994
fbshipit-source-id: b2319ac9d5709e968121d4299396e03a90df4a06
Summary:
Let's populate the bonsai<->git mapping on pushrebase of the commits that are
coming from git. Because this is a pushrebase hook, the accurate mappings
are available as soon as the bonsai commit is available.
Corresponding configerator change: D19951607
Reviewed By: krallin
Differential Revision: D19949472
fbshipit-source-id: b957cbcdd0f14450ceb090539814952db9872576
Summary: During the pushrebase hook phase we'll need to reuse the existing transaction.
Reviewed By: krallin
Differential Revision: D19949473
fbshipit-source-id: 7c53308724bec6df6d40933405f703c86be15a7a
Summary:
By having it in blobrepo we can ensure that all parts of mononoke can access it
easily
Reviewed By: StanislavGlebik
Differential Revision: D19949474
fbshipit-source-id: ac3831d61177c4ef0ad7db248f2a0cc5edb933b1
Summary:
We need a table to store git<->bonsai mappings and a crate that abstracts operations on it:
* it's going to be useful immediately to store git hashes for configerator
commits and doing the hash translations via SCS.
* it's going to be useful further down the line for real git support.
NOTE: I'm explicitly using the name `SHA1` all over the place to minimize
confusion in case we ever want to support other hashing schemes for git
commits. (The Git community is working on SHA-256 support nowadays.)
The corresponding AOSC diff: D19835975
Reviewed By: krallin
Differential Revision: D19835974
fbshipit-source-id: 113640f4db9681b060892a8cedd93092799ab732
Summary:
Whenever remotefilelog.cacheprocess2 is set, remotefilelog.cachekey is also
set, but the latter is not necessarily present when remotefilelog.cacheprocess
is. Since remotefilelog.cacheprocess already includes the cachekey, let's not
add it twice.
This also fixes the issue where hg_memcache_client would die early due to being
passed too many arguments.
Reviewed By: DurhamG
Differential Revision: D20014792
fbshipit-source-id: 8ed6775f70cf967d1c069f8acdb5a782ee819090
Summary:
This error handling can be extremely slow: calling `self.node()` can end up
triggering a linkrev scan of the changelog, which can take over 5 minutes.
If we did want to add this back in the future, we would need some sort of API
on `filectx` to get the node ID only when it is cheap, failing fast when
remotefilelog is in use and computing the node ID would require scanning the
changelog.
Note that KeyError can occur fairly regularly when invoked in long-lived
commands like `hg debugedenimporthelper`. If we are asked about data in a new
commit that was added since this repository was originally opened a KeyError
will be thrown here (in which case `debugedenimporthelper` will call
`repo.invalidate()` and then retry).
Reviewed By: quark-zju
Differential Revision: D20010279
fbshipit-source-id: 0e9b4c163cb9256de57daa91eed70a3736cb1075
Summary: It seems to be stable and not causing issues. Let's make it default everywhere.
Reviewed By: wez
Differential Revision: D19896738
fbshipit-source-id: cf6abe8f536e570017742b3a0674213a932a6a4d
Summary: There are two copies of pywatchman in fbcode (!) and some changes didn't make it into the edenscm copy.
Reviewed By: quark-zju
Differential Revision: D19794480
fbshipit-source-id: bcc85e0d3efc225d94b8bfa1e433f6e9cc024643
Summary:
Mercurial filenode hash is computed by including the copy information in the
blob header. Before computing the blob content hash, or returning it to the
upper layers, we need to either strip or reconstruct this header appropriately.
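A minimal sketch of the header framing involved (generic Python, not the actual Mercurial/Mononoke code): when copy metadata is present, it sits between two "\x01\n" markers at the front of the blob, so the header must be stripped before hashing the content or returning it to callers.

```python
def split_copy_header(raw: bytes):
    """Split a filelog blob into (metadata dict, file content).

    Copy metadata, when present, is framed as key/value lines between
    two \\x01\\n markers at the start of the blob.
    """
    if not raw.startswith(b"\x01\n"):
        return {}, raw
    end = raw.index(b"\x01\n", 2)
    meta = {}
    for line in raw[2:end].split(b"\n"):
        if line:
            key, value = line.split(b": ", 1)
            meta[key] = value
    return meta, raw[end + 2:]

blob = b"\x01\ncopy: old/path\ncopyrev: " + b"ab" * 20 + b"\n\x01\nfile contents"
meta, content = split_copy_header(blob)
```

Hashing `content` directly would give a different digest than Mercurial's filenode, which is computed over the header-prefixed blob; that asymmetry is exactly why the header has to be stripped or reconstructed at the boundary.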
Reviewed By: DurhamG
Differential Revision: D19975887
fbshipit-source-id: 7555e7219e50f4d18ec677fdecc216ee705d7af4
Summary: This will make it easier to support more hash schemes in the future.
Reviewed By: DurhamG
Differential Revision: D19975888
fbshipit-source-id: 8b8ce3b20d72199bac3cd20a48475b5ab56bfc52
Summary:
With the Arc embedded into the store themselves, this forces a second
allocation in order to use them as trait objects. Since in most cases, we do
not want the stores themselves to be cloneable, we can move the Arc outside and
thus reduce the number of pointer indirection.
Reviewed By: DurhamG
Differential Revision: D19867568
fbshipit-source-id: 9cd126831fe2b9ee715472ac3299b7a09df95fce
Summary:
The ContentStore now can read LFS blobs from both the shared cache, and the
local store.
Reviewed By: DurhamG
Differential Revision: D19866249
fbshipit-source-id: a6fb3523495e9d3832613b56438f631cfa552b91
Summary:
With the LFS store being added, and the indexedlog being soon used for trees,
this simplification should help in formalizing the hierarchy of files/folders.
It will look like the following:
<root dir>/lfs: for the lfs store
<root dir>/indexedlog*: for the indexedlog
<root dir>/foobar: for a hypothetical foobar store
For manifests, <root dir> will therefore be: <store dir>/manifests. The
unfortunate part is that the current tree data lives under
<store dir>/packs/manifests. As packfiles will be replaced, this small
discrepancy is acceptable.
Reviewed By: DurhamG
Differential Revision: D19866248
fbshipit-source-id: 7ef59ef7df19149b19a529b4f4a45a479cc9d23b
Summary:
This is the first step in having a stronger integration between LFS blobs and
the ContentStore abstraction. The two main differences between the Python-based
LFS implementation and this one are:
- pointers are not stored alongside plain data,
- blobs are split between local and shared blobs
As of now, no reclamation is being performed for shared blobs, blobs aren't
fetched or uploaded. This will come in future diffs.
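A sketch of the local/shared split described above (dict-backed stand-ins, not the real indexedlog-backed stores): locally created blobs go in the local store and are never reclaimed, fetched blobs go in the shared cache, and reads consult both.

```python
class LfsStoreSketch:
    """Toy model of the two-tier blob lookup."""

    def __init__(self):
        self.local = {}   # blobs created locally, must never be reclaimed
        self.shared = {}  # cache of fetched blobs, reclaimable

    def get(self, oid):
        # Prefer the shared cache, fall back to the local store.
        if oid in self.shared:
            return self.shared[oid]
        return self.local.get(oid)

store = LfsStoreSketch()
store.local["abc"] = b"local blob"
```

Keeping pointers out of the plain-data path (the first difference listed above) means callers of `get` only ever see resolved blob contents, never LFS pointer files.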
Reviewed By: DurhamG
Differential Revision: D19859291
fbshipit-source-id: 45000fc574e6fbd6d3487f4966cad4f49dab731c
Summary:
Add the `--parent` flag to `scsc blame`. This runs blame against the first
parent of the specified commit, rather than the commit itself. This allows
users to copy and paste commit hashes from previous blame output in order to
skip the commit, rather than having to look up the parent commit hash
themselves.
Reviewed By: StanislavGlebik
Differential Revision: D20006308
fbshipit-source-id: d1c25aad8f236fe27e467e29f6a96c957b6c8c8f
Summary:
The former implementation here was a little difficult to work with, and
resulted in a whole lot of cloning of closures, etc.
This updates the implementation to be a little simpler on the whole (async /
await is nicer for while loops, since you can use, well, loops)
It does slightly change a few parts of the behavior:
- The old implementation would wait for the replication lag duration. That's
not really correct. As we've observed several times this week, replication
lag usually drops quickly once it starts dropping. I.e. if the replication
lag is 10 seconds, it doesn't take 10 seconds to catch up. This gets more
important with big lag durations.
- I updated replication lag to be u64 instead of usize. usize doesn't really
make sense for something that has absolutely nothing to do with our pointer
size.
I also split out the logic for calculating how long we wait in a part that
cares about whether we are busy and one that cares about replication lag
(whereas the older one kinda mixed the two together). We wait for our own
throttling (i.e. sleep for a sec if we didn't do anything) before we wait for
replication lag, so the new behavior should have the desired behavior of:
- If we don't have much work to do, we sleep 1 second between each iteration
(but if we do have work, we don't).
- No matter what, if we have replication lag, we wait until that passes before
doing any work.
The old one did that too, but it mixed the two calculations together, and was
(at least in my opinion) kinda hard to reason about as a result.
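The split can be sketched roughly as follows (function and parameter names invented here, not the sync job's real API): one delay that only depends on whether the last iteration did work, and a separate loop that polls replication lag rather than sleeping for the full lag duration.

```python
import asyncio

async def wait_before_next_iteration(did_work, get_replication_lag,
                                     max_lag=5.0, idle_sleep=1.0, poll=1.0):
    # Our own throttling: only sleep if the last iteration was idle.
    if not did_work:
        await asyncio.sleep(idle_sleep)
    # Independently, never start work while replicas are behind. Poll
    # instead of sleeping for the full lag: 10 seconds of lag usually
    # clears in much less than 10 seconds once it starts dropping.
    while await get_replication_lag() > max_lag:
        await asyncio.sleep(poll)
```

The two concerns never mix: being busy affects only the first sleep, and replication lag affects only the second loop, which matches the desired behavior listed above.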
Reviewed By: StanislavGlebik
Differential Revision: D19997587
fbshipit-source-id: 1de6a9f9c1ecb56e26c304d32b907103b47b4728
Summary:
We had crashloops on this (which I'm fixing earlier in this stack), which
resulted in overloading our queue as we tried to repeatedly clear out 100K
entries at a time, rebooted, and tried again.
We can fix the root cause that caused us to die, but we should also make sure
crashloops don't result in ignoring lag altogether.
Also, while in there, convert some of this code to async / await to make it
easier to work on.
Reviewed By: HarveyHunt
Differential Revision: D19997589
fbshipit-source-id: 20747e5a37758aee68b8af2e95786430de55f7b1
Summary:
Our blobstore_sync_queue selects entries with a limit on the number of unique
keys it's going to load. Then, it tries to delete them. However, the number of
entries might be (much) bigger than the number of keys. When we try to delete
them, we time out waiting for MySQL because deleting 100K entries at once isn't
OK.
This results in crashlooping in the healer, where we start, delete 100K
entries, then time out.
This is actually double bad, because when we come back up we just go without
checking replication lag first, so if we're crashlooping, we disregard the
damage we're doing in MySQL (I'm fixing this later in this stack).
So, let's be a bit more disciplined, and delete keys 10K at a time, at most.
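The chunking can be sketched generically (this is illustrative Python, not the healer's actual MySQL layer): issue deletes in bounded batches instead of one statement covering every entry selected.

```python
def delete_in_chunks(entries, delete_batch, chunk_size=10_000):
    """Call delete_batch() on slices of at most chunk_size entries."""
    for start in range(0, len(entries), chunk_size):
        delete_batch(entries[start:start + chunk_size])

batches = []
delete_in_chunks(list(range(25_000)), batches.append, chunk_size=10_000)
```

Each batch is small enough to complete within the database timeout, so a crash mid-way loses at most one chunk of progress rather than timing out on the whole set.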
Reviewed By: HarveyHunt
Differential Revision: D19997588
fbshipit-source-id: 2262f9ba3f7d3493d0845796ad8f841855510180
Summary:
Some of our upcoming repo merges will make it infeasible for someone to
use a full checkout. Let's add a config that will warn users of this. It has a
few levels, starting with a suppressable hint, then a non-suppressable warning,
then a suppressable exception, then a non-suppressable exception.
Reviewed By: ikostia
Differential Revision: D19974408
fbshipit-source-id: bad35a477ad8626dbc0977465368f5d71007e2d5
Summary:
MyRouter needs to be told which shards to watch. Since I'm adding a new shard,
it'll be easier for everyone to know that they need to update their MyRouter
configuration if we start logging the shard name we're trying to hit.
Reviewed By: ikostia
Differential Revision: D20001704
fbshipit-source-id: 8a9ff3521bc7e3c9b7ed39c6ae33d0ddc1d467b7
Summary:
On Windows, there are *two* 8-bit encodings for each process.
* The ANSI code page is used for all `...A` system calls, and this is what
Mercurial uses internally. It can be overridden using the `--encoding`
command line option.
* The OEM code page is used when outputting to the console. Mercurial has no
concept of this, and instead renders to the console using the ANSI code page,
which results in mojibake like "Θ" instead of "é".
Add the concept of an `outputencoding`. If this differs from `encoding`, we
convert from the local encoding to the output encoding before writing to the
console.
On non-Windows platforms, this defaults to the same encoding as the local encoding,
so this is a no-op unless `--outputencoding` is manually specified.
On Windows, this defaults to the codepage given by `GetOEMCP`, causing output
to be converted to the OEM codepage before being printed.
For ordinary strings, the local encoded version is wrapped by `localstr` if the
encoding does not round-trip cleanly. This means the output encoding works
even if the character is not represented in the local encoding.
Unfortunately, the templater is not localstr-clean, which means strings can get
flattened down to the local encoding and the original code points are lost. In
this case we can only output characters which are in the intersection of the
encoding and the output encoding.
Most US English Windows systems use cp1252 for the ANSI code page and cp437 for
the OEM code page. These both contain many accented characters, so users with
accented characters in their names will now see them correctly rendered.
All of this only applies to Python 2.7. In Python 3, everything is Unicode,
the `--encoding` and `--outputencoding` options do nothing, and it just works.
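The core conversion can be illustrated in a few lines (cp1252/cp437 are the common US defaults mentioned above; the function name is a stand-in, not Mercurial's actual helper): bytes held in the ANSI code page are decoded and re-encoded for the OEM console.

```python
def to_output(local_bytes, encoding="cp1252", outputencoding="cp437"):
    """Re-encode ANSI-code-page bytes for the OEM console."""
    return local_bytes.decode(encoding).encode(outputencoding, "replace")

# "e-acute" is byte 0xE9 in cp1252 but 0x82 in cp437; writing the cp1252
# byte straight to an OEM console is what produced the mojibake.
assert to_output(b"\xe9") == b"\x82"
```

Characters present in one code page but not the other are where the `localstr` round-trip tracking matters: keeping the original code points around lets the output conversion succeed even when the local encoding alone could not represent them.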
Reviewed By: quark-zju, ikostia
Differential Revision: D19951381
fbshipit-source-id: d5cb8b5bfe2bc131b2e6c3b892137a48b2139ca9
Summary:
`hg rage` generates the rage in the user's encoding. Since pastes are expected
to be in UTF-8, non-UTF-8 encodings result in garbled pastes.
Similarly, the lines-dec graph renderer uses escape sequences that won't work
on web pages, and the lines graph renderer uses curved lines which don't
render very well either. Force the use of the lines-square graph renderer,
which renders well.
Reviewed By: quark-zju
Differential Revision: D19951382
fbshipit-source-id: d1a5fd2ef195658f9bf10210088031474355f168
Summary:
The Rust graph renderer expects the message to be a unicode string, so ensure
we convert it from the local encoding before passing it to Rust.
Reviewed By: quark-zju
Differential Revision: D19951383
fbshipit-source-id: 644862c63873079364cb9902bd1bb49de8aa1ab9
Summary:
This adds a file hook to limit the file length we are willing to allow in
commits. This is necessary for now since Mercurial does have a limit on its
end, and we shouldn't allow commits that we cannot sync to Mercurial.
Reviewed By: HarveyHunt
Differential Revision: D19969689
fbshipit-source-id: 1da8a62d54e98b047d381a9d073ac148c9af84b0
Summary:
See later in this stack for motivation. This seems to work fine, and it allows
characters that don't fit latin1 when rendering diffs.
Reviewed By: markbt
Differential Revision: D19969743
fbshipit-source-id: 79c4afce5a19822d9b075d23ff4c88aa76ce2f42
Summary:
This adds some basic logging for input size for Gettreepack and Getpack. This
might make it easier to understand "poison pill" requests that take out the
host before it has a chance to finish the request.
Reviewed By: StanislavGlebik
Differential Revision: D19974661
fbshipit-source-id: deae13428ae2d1857872185de2b6c0a8bcaf3334
Summary:
I'm going to modify it in the next diff, so let's make it async.
Note that we used `spawn_future()` before, which I replaced with tokio::spawn()
here. It's not really clear if we need it at all; I'll experiment with that
later. Removing it would make the code cleaner.
Reviewed By: krallin
Differential Revision: D19973315
fbshipit-source-id: cbbb9a88f4424e6e717caf1face6807ab6c32438
Summary:
As of 63c471ad8a4ba0bebd1acf70569bcdcefc3fffbf in upstream Dulwich, it
now turns commands into unicode. Unfortunately, _ssh.py in hggit sees that the
type is no longer str or bytes and thinks it's an array and puts spaces between
every letter, causing it to break.
Let's allow unicode. This broke because dulwich was recently upgraded.
Reviewed By: sfilipco
Differential Revision: D19983215
fbshipit-source-id: 059756905bf4b2c73009001b078c8723ae378246
Summary: Not very valuable if it just prints the constant name.
Reviewed By: StanislavGlebik
Differential Revision: D19978690
fbshipit-source-id: ae2b648f50098b479cb3719fd9b9d4b82bac3d3c
Summary: This should get rid of the extraneous uninitialized attribute errors related to `setUp` and abstract classes.
Reviewed By: simpkins
Differential Revision: D19964487
fbshipit-source-id: 52d5a6496e372d99d4398473f9ed7672228a76f5
Summary:
This is a revised version of D19887220.
D19887220 has 2 problems:
- It can silently ignore the mt.exe error after failures of all retries.
- There is another place that `mt.exe` runs that is not covered by retry.
This diff fixes them by wrapping the `set_long_paths_manifest` function
directly, so it covers both `mt.exe` call sites and makes sure that failing
all retries is still reported as a failure.
Reviewed By: sfilipco
Differential Revision: D19977802
fbshipit-source-id: 774d0c42b247a7e111841cd69f71760a5544d685
Summary:
Update includes to the third-party xdiff.h file to use absolute includes
from the repository root. This allows many parts of our internal build
tooling to work better, including automatic dependency processing.
Reviewed By: xavierd
Differential Revision: D19958228
fbshipit-source-id: 341dd8c94f1138cf4a387b92e1817b2a286d6aa1
Summary:
Update the C files under edenscm/mercurial/cext to use absolute includes from
the repository root. Also update a few of the libraries in edenscm/mercurial
that the cext code depends on.
This makes these files easier to build with Buck in fbsource, and reduces the
number of places where we have to use deprecated Buck functionality to help
find these headers. This also allows autodeps to work with the build targets
for these rules.
Reviewed By: xavierd
Differential Revision: D19958221
fbshipit-source-id: e6e471583a795ba5773bae5f16ed582c9c5fd57e
Summary:
Remove `thirdparty/pyre2/__init__.py` from the `libhg` sources list.
We don't compile the `thirdparty/pyre2/_re2.cc` file in the fbcode build, so
importing the `__init__.py` module from this package just triggers an
ImportError when the code tries to use it. The code then always falls back to
using the version of pyre2 included from the `fb-re2` wheel.
Dropping the `__init__.py` module from our library should simply trigger an
ImportError earlier when we can't even find this file, and the code will still
fall back to using `fb-re2`.
Including this `__init__.py` file just causes issues for type checking, since
it causes us to try and type check this file even though its dependencies are
not present.
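The fallback behavior described above amounts to a try-in-order import (module names illustrative; the `importer` callable here is a test seam, not a real API):

```python
def load_re2_impl(importer):
    """Return (name, module) for the first importable re2 implementation.

    importer: callable mapping a module name to a module, or raising
    ImportError -- a stand-in for __import__.
    """
    for name in ("thirdparty.pyre2", "re2"):  # bundled copy, then wheel
        try:
            return name, importer(name)
        except ImportError:
            continue
    raise ImportError("no re2 implementation available")

available = {"re2": object()}  # simulate: bundled copy absent, wheel present

def fake_importer(name):
    if name not in available:
        raise ImportError(name)
    return available[name]
```

Dropping the bundled `__init__.py` just moves the ImportError from "module imports but `_re2` is missing" to "module not found at all"; either way the loop lands on the `fb-re2` wheel.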
Reviewed By: xavierd
Differential Revision: D19958224
fbshipit-source-id: 34ea8806b6ee9377f17a9318c64c91ec242225df
Summary:
Some of the methods in eden_dirstate_map.py had comments that were close to
type annotations that were added a couple years ago. Update them to proper
type comments that can be recognized by Pyre and mypy.
Also remove the unused create_clone_of_internal_map() method.
Reviewed By: chadaustin, xavierd
Differential Revision: D19958225
fbshipit-source-id: b753c030acb15cf4f8d8c536614e657ee1bcba52
Summary:
Update the `eden_dirstate_map` class to store `dirstatetuple` objects instead
of plain tuples in its `_map` member variable. Without this the `filefoldmap`
code that is used on Windows fails, as it directly accesses `self._map` and
expects it to contain `dirstatetuple` objects.
Reviewed By: DurhamG, pkaush
Differential Revision: D19841881
fbshipit-source-id: ddb7523b598cfd8ec8719a8a74446cefcb411358
Summary:
Eden SCM expects that DRY_RUN reports the same conflicts as a normal
checkout, but EdenFS would skip traversing deleted trees in dry run
mode. Fix that and add a test.
Reviewed By: genevievehelsel
Differential Revision: D19782543
fbshipit-source-id: 7a269e67a41b7ad6ce6c54fde37e8f74fcc1ef51
Summary:
bonsai_verify occasionally visits the same commit twice (I found this out by
adding logging). Let's allow that here.
Reviewed By: StanislavGlebik
Differential Revision: D19951390
fbshipit-source-id: 3e470476c6bc43ffd62cf24c3486dfcc7133de6c
Summary: We're about to start adding more handlers to the server. Rather than putting them all in the same file, let's create a submodule for them.
Reviewed By: krallin
Differential Revision: D19957012
fbshipit-source-id: 38192664371f0b0ef5eadb4969739f7cb6e5c54c
Summary: Add a `RequestContext` type that stores per-request state, along with a `Middleware` implementation that injects a `RequestContext` into Gotham's `State` object for each request. This is essentially a stripped-down version of the `RequestContextMiddleware` used in the LFS server. Given that the RequestContext contains application-specific functionality, this Middleware lives alongside the rest of the EdenAPI server code rather than in the `gotham_ext` crate (where all of the generic Middleware lives).
Reviewed By: krallin
Differential Revision: D19957013
fbshipit-source-id: 6fad2b92aea0b3662403a69e6a6598e4cd26f083
Summary:
Currently if derivation of a particular derived data type is disabled, but a
client makes a request that requires that derived data type, we will fail with
an internal error.
This is not ideal, as internal errors should indicate something is wrong, but
in this case Mononoke is behaving correctly as configured.
Convert these errors to a new `DeriveError` type, and plumb this back up to
the SCS server. The SCS server converts these to a new `RequestError`
variant: `NOT_AVAILABLE`.
Reviewed By: krallin
Differential Revision: D19943548
fbshipit-source-id: 964ad0aec3ab294e4bce789e6f38de224bed54fa
Summary: Fork, exec, and wait in `daemon.dameon_exec` so we can get the exit code of the child process in order to log it.
Reviewed By: simpkins
Differential Revision: D19861810
fbshipit-source-id: 85fce52b2e2d252bb4dec779f5f975e3712b6bb5
Summary:
Prepare configs locally that can be passed to any Mononoke binary where things
/just work/.
Reviewed By: HarveyHunt
Differential Revision: D19952512
fbshipit-source-id: 14a3b520972b0bdf4fa7810805066ba746bbef1a
Summary: Adds the Cargo.toml files for blobstore. This is a step towards covering mononoke-types, so only the blobstore traits are covered by this diff.
Reviewed By: aslpavel
Differential Revision: D19948739
fbshipit-source-id: c945a9ca16ccceb0e50a50d941dec65ea74fe78f
Summary: Generate the Cargo.toml files inside xdiff with autocargo. This will enable Mononoke to depend on this code easily without sacrificing anything on eden/scm side.
Reviewed By: aslpavel
Differential Revision: D19948741
fbshipit-source-id: 905ff3d64b90830e5f075e4c6ed2b3de959e3f00
Summary:
Not being able to prefetch draft parent trees should not be considered as a
fatal error.
This code path is causing trouble with narrow-heads clone:
1. Streaming clone. The client gets a changelog.
2. The client runs "pull" to get new commits. The prefetchdraftparents code path runs.
3. The client has stale remote names, and public() is lagging. `prefetchdraftparents`
will try to fetch trees at the old master, but the repo is not configured properly.
That causes a stacktrace like:
$ /usr/bin/hg --config 'extensions.fsmonitor=!' clone --shallow -U --config 'ui.ssh=ssh -oControlMaster=no' --configfile /etc/mercurial/repo-specific/www.rc ssh://hg.fb.com/repo repo
connected to hg.fb.com
streaming all changes
searching for changes
adding commits
adding manifests
adding file changes
added 1 commits with 0 changes to 0 files # <<<< No traceback if this says "0 commit".
Traceback (most recent call last):
File "edenscm/hgext/remotenames.py", line 1464, in exclonecmd
orig(ui, *args, **opts)
File "edenscm/hgext/remotefilelog/__init__.py", line 433, in cloneshallow
orig(ui, repo, *args, **opts)
File "edenscm/mercurial/commands/__init__.py", line 1615, in clone
shareopts=shareopts,
# shareopts = {'mode': 'identity'}
File "edenscm/mercurial/hg.py", line 741, in clone
exchange.pull(local, srcpeer, revs, streamclonerequested=stream)
File "edenscm/mercurial/util.py", line 621, in __exit__
self.close()
File "edenscm/mercurial/transaction.py", line 46, in _active
return func(self, *args, **kwds)
File "edenscm/mercurial/transaction.py", line 543, in close
self._postclosecallback[cat](self)
# cat = bin('6472616674706172656e74747265656665746368')
File "edenscm/hgext/treemanifest/__init__.py", line 490, in _parenttreefetch
self.prefetchtrees([c.manifestnode() for c in draftparents])
# c = <changectx b5ad643b3009>
# draftparents = [<changectx b5ad643b3009>]
File "edenscm/hgext/treemanifest/__init__.py", line 522, in prefetchtrees
self._prefetchtrees("", mfnodes, basemfnodes, [], depth)
# basemfnodes = [bin('a25f17018d7cd07f1f6bc3076f95c5980ba087a9')]
# mfnodes = [bin('ad717aac7700e783a1d84f3330d13a7731a4726a')]
File "edenscm/hgext/treemanifest/__init__.py", line 529, in _prefetchtrees
fallbackpath = getfallbackpath(self)
File "edenscm/hgext/treemanifest/__init__.py", line 2173, in getfallbackpath
if util.safehasattr(repo, "fallbackpath"):
File "edenscm/mercurial/util.py", line 190, in safehasattr
return getattr(thing, attr, _notset) is not _notset
# attr = 'fallbackpath'
File "edenscm/mercurial/util.py", line 904, in __get__
result = self.func(obj)
File "edenscm/hgext/remotefilelog/shallowrepo.py", line 42, in fallbackpath
"no remotefilelog server " "configured - is your .hg/hgrc trusted?"
Abort: no remotefilelog server configured - is your .hg/hgrc trusted?
abort: no remotefilelog server configured - is your .hg/hgrc trusted?
Fix it by making prefetchdraftparents non-fatal. This would hopefully unblock
narrow-heads rollout.
Reviewed By: xavierd
Differential Revision: D19957251
fbshipit-source-id: e65bbe6bf422776effe49055f7332ec538177a41
Summary: Notifications is using folly Subprocess which doesn't work on Windows.
Reviewed By: genevievehelsel
Differential Revision: D19863375
fbshipit-source-id: 63b047253c0f8a48b1b0ccc767f5820e77a28d80
Summary:
This will allow us to improve our dashboards filtering out errors we are
responsible for, like missing certs on the machines.
Reviewed By: mitrandir77
Differential Revision: D19950614
fbshipit-source-id: 73503e984dfe8513a700fdcb2fc36b1618c20a4f
Summary:
Commit messages and extras can be unbounded in size. This can cause problems if users create commits with exceptionally large messages or extras. Mercurial will commit these to the changelog, increasing its size. On Mononoke, large commit messages may go over the caching threshold, resulting in poor performance for requests involving these commits, as Mononoke will need to reload them on every access.
Commit messages should not usually be that large. Most likely it will happen by accident, e.g. through use of `hg commit -l some-large-file`. Prevent this from happening by accident by adding configuration for soft limits when creating commits.
If a user really does need to create a commit with a very large message or extras, they can override using the config option.
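The soft-limit check might look roughly like this (function name, limit value, and the override mechanism are invented for illustration; the real option names live in the Mercurial config):

```python
def check_commit_message(message, limit=32_768, override=False):
    """Reject oversized commit messages unless explicitly overridden."""
    if len(message) > limit and not override:
        raise ValueError(
            "commit message is %d bytes (limit %d); "
            "set the override config to allow it" % (len(message), limit)
        )

check_commit_message("a normal message")           # passes silently
check_commit_message("x" * 50_000, override=True)  # explicitly allowed
```

Making the limit a config-driven soft limit (rather than a hard server-side rejection) keeps the accidental `hg commit -l some-large-file` case from landing while leaving a deliberate path open.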
Reviewed By: xavierd
Differential Revision: D19942522
fbshipit-source-id: 09b9fe1f470467237acc1b20286d2b1d2ab25613
Summary:
This parameter was originally removed in D12811551, but re-added in D12855935
due to the fact that at the time the `eden_dirstate.py` and `dirstate.py`
files were deployed in separate RPMs and could not be updated together
atomically. We now deploy these files together, so we can drop this extra
unnecessary argument.
Reviewed By: chadaustin
Differential Revision: D19913057
fbshipit-source-id: 0f0b4fde4b3124a8fc5bb568551b4e67de14d410
Summary:
- Pushing .compat down from main into the run function and switching to the 0.3 timed function.
Note: a possible next level of pushing down: push .compat into derive_fn and get rid of BoxFuture in run's signature.
Reviewed By: ikostia
Differential Revision: D19943392
fbshipit-source-id: 65bd84492855d3e2e560299a586af6dd4fe9c3ea
Summary:
Sometimes the treestate points to an unknown commit (ex. aborted transaction
might strip commits). While `debugrebuilddirstate -r HASH --hidden` is able to
fix it, it is too slow.
This diff adds treestate repair logic to the `doctor` command. It scans through
the treestate files, and finds the most recent `Root` entry with `p1` pointing to a
known commit.
This can be much faster than `debugrebuilddirstate` in some cases, because the
watchman clock might still be valid, and the NEED_CHECK file list might still
be small. In that case, `status` can still be fast.
Since treestate atomically updates all information needed for `status`
calculation (parents, need-check files (or "non-normal files"), the watchman
clock (only with fsmonitor), and stat data for clean files), reverting to a
previous state is also atomic. Correctness-wise, this is equivalent to aborting
a "large" transaction and restoring treestate data to the state before the
transaction. It should be consistent, and the next `status` call won't
mis-report files, unlike the dangerous `debugsetparents` command.
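The scan itself is conceptually simple (data shapes invented here; real Root entries carry more fields than an offset and a parent): walk Root entries newest-first and revert to the first one whose `p1` is still known.

```python
def find_usable_root(roots, known_commits):
    """Return the offset of the newest Root entry whose p1 is known.

    roots: list of (offset, p1_hex) pairs in write order, oldest first.
    """
    for offset, p1 in reversed(roots):
        if p1 in known_commits:
            return offset
    return None

# The newest entry points at a stripped commit; repair rolls back one entry.
roots = [(0, "a" * 40), (100, "b" * 40), (200, "c" * 40)]
known = {"a" * 40, "b" * 40}
```

This is why the changelog repair earlier in the stack has to run first: "known commit" is only meaningful once the changelog itself is trustworthy.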
Reviewed By: DurhamG
Differential Revision: D19864422
fbshipit-source-id: d5d2f8b43a0c15ea2ac0e3c164edec7deeb8451f
Summary:
See the test change. Without this change repairing the changelog won't give the
user back a working repo.
Reviewed By: markbt
Differential Revision: D19864421
fbshipit-source-id: b84582c5302469828c8cfcb3db362ea82f2eea63
Summary:
Reuse utilities in the fixcorrupt extension to repair changelog.
This is better than fixcorrupt because `hg doctor` does not require a repo
object. Some messages are updated so they become more consistent with the
rest of `hg doctor`.
The main motivation is to get changelog fixed early, so other repair logic can
check if a commit hash is known by changelog or not.
Reviewed By: markbt
Differential Revision: D19864418
fbshipit-source-id: 6f95c6c6191d7db2a474a07a5278a857cf41d8e2
Summary:
Run 'edenfsctl doctor' on an edenfs repo. If there is no current repo, it might
be because the edenfs daemon has stopped running, so let's also run edenfsctl
doctor in that case.
Reviewed By: markbt
Differential Revision: D19864419
fbshipit-source-id: d2a49a126a040845b88b4883d214162326d08d8d
Summary:
We're seeing a user have issues because their username contains unicode
characters and sampling's use of json doesn't handle it well. I've not been able
to repro it unfortunately, but let's go ahead and switch sampling to use
mercurial.json.
Differential Revision: D19895419
fbshipit-source-id: a1f087d1e2c7568488c2b8d54f267bd5c8266202
Summary: This will be used in the LFS store.
Reviewed By: DurhamG
Differential Revision: D19895803
fbshipit-source-id: 4cf447987c10fed0b5c98904f20c841428965d89
Summary:
In some cases, higher level stores may want to store data in either a plain
IndexedLog, or in a RotateLog, for local and shared data. Due to slight
differences between the two, they can't easily be adapted into a common trait.
Instead let's just wrap both into an enum and implement the main functions that
the higher level stores need.
The first use of this will be the LfsStore, future use will include the
IndexedLogDataStore and the IndexedLogHistoryStores.
Reviewed By: DurhamG
Differential Revision: D19859292
fbshipit-source-id: 920572e0cf5f69bda4901a727a6b0dc0f08fc8d0
Summary: Records whether a start was successful or not.
Reviewed By: simpkins
Differential Revision: D19817810
fbshipit-source-id: b67253099781bb534b7e2fb26a09ba41c1f0bd69
Summary: Since we cannot log this case from the daemon because we can't catch sigkill, log failed stop from CLI layer.
Reviewed By: simpkins
Differential Revision: D19826140
fbshipit-source-id: eb3aa27802db0206a13e552c4cb1384f856905d2
Summary:
This is used up the stack. It introduces generic scuba logging for the CLI layer. In the open source build, `log` will be a no-op, as suggested in `cli/telemetry.py`.
It is used like so:
```
from .telemetry import build_base_sample, log
# for example, I am adding the field "status" to know that this is a status call.
sample = instance.build_sample("status").add_string("something", "another")
instance.log(sample)
```
Reviewed By: simpkins
Differential Revision: D19816913
fbshipit-source-id: b055d4d1e29456e3549292e6f5047b935f11e4e2
Summary: Add the max_jitter_ms field to the rate limiting config struct, and to the integration test.
Reviewed By: HarveyHunt
Differential Revision: D19905068
fbshipit-source-id: b44251c456a45bc494d1080e405f2d009becc0d2
Summary:
This is required for 0.2 timers or runtime reliant code to work within the sync
job. To achieve this, we need to get rid of Tokio 0.1 fs code, which is
incompatible with Tokio 0.2 because it uses `blocking()`.
Reviewed By: ikostia
Differential Revision: D19909434
fbshipit-source-id: 58781e858dd55a9a5fc10a004e8ebdace1a533a4
Summary:
This updates the warm_bookmarks_cache's constructor to use the passed-in
blobrepo's derived data configuration (instead of whatever the caller is
passing in), since we now have that information.
Reviewed By: HarveyHunt
Differential Revision: D19949725
fbshipit-source-id: 575a1b9ff48f06003dbf9e0230b7cca723ad68f5
Summary: Add hash::GitSha1 as a pure hash-only key for git Aliases, so one no longer needs to know the size or type to load by Alias::GitSha1.
Reviewed By: krallin
Differential Revision: D19903578
fbshipit-source-id: bf919b197da2976bf31073ef04d12e0edfce0f9b
Summary:
Rename GitSha1 to RichGitSha1 in preparation for introducing hash::GitSha1 as a pure sha1 without extra fields in next in stack.
Motivation for this is that currently one can't load content aliased by Alias::GitSha1 given just the hash; one has to know the type and size as well.
Once the next couple of diffs in the stack are done, we will be able to load via just the git hash.
Reviewed By: krallin
Differential Revision: D19903280
fbshipit-source-id: ab2b8b841206a550c45b1e7f16ad83bfef0c2094
Summary:
When max concurrency is 1, we should process at most one request concurrently,
not 2! This had resulted in a flaky test since we're processing traffic out of
order there.
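The bug class here is a classic boundary error in an admission check (this toy limiter is illustrative, not the server's actual code): gating on `active > limit` admits `limit + 1` concurrent requests, while `active >= limit` admits exactly `limit`.

```python
class Limiter:
    """Toy concurrency limiter demonstrating the boundary condition."""

    def __init__(self, limit):
        self.limit = limit
        self.active = 0

    def try_acquire(self):
        # The buggy version effectively checked `self.active > self.limit`,
        # which lets one extra request through.
        if self.active >= self.limit:
            return False
        self.active += 1
        return True

    def release(self):
        self.active -= 1

lim = Limiter(1)
assert lim.try_acquire() is True    # first request admitted
assert lim.try_acquire() is False   # second request correctly rejected
```

With the off-by-one, a limit of 1 gave no ordering guarantee between two in-flight requests, which is exactly what made the test flaky.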
Reviewed By: HarveyHunt
Differential Revision: D19948594
fbshipit-source-id: 00268926095fdbbfdfd5a23366aafcfb763580f4