Summary:
If for some reason EdenFS cannot be started, we shouldn't attempt to run the
fsck tests as these would always fail.
Reviewed By: genevievehelsel
Differential Revision: D29918436
fbshipit-source-id: 6e4a01a747157427e5c1028084e32cef8066c96a
Summary: This affects all platforms but is more noticeable on Mac, where tons of "100%" lines get printed (e.g. P409794954), probably due to some weirdness with the cursor.
Reviewed By: fanzeyi
Differential Revision: D29922276
fbshipit-source-id: 987f6b9ef5a8a4ab738aa6edbd617184bbcb2d1c
Summary: As title. `RequestContext` allows us to track metrics such as latency and count.
Reviewed By: genevievehelsel
Differential Revision: D29835813
fbshipit-source-id: 6b85fc8f11923f530fce6d871fa2253db21bfa98
Summary:
Previously, the missing vertex cache was ignored by `vertex_id_batch`.
Respecting it can help reduce remote lookups.
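The effect of respecting a missing-name cache can be sketched like this (a hypothetical illustration, not the actual dag crate code; `VertexResolver` and its fields are invented):

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical sketch: remember names known to be absent so that batch
// lookups skip the remote round-trip for them on subsequent calls.
struct VertexResolver {
    local: HashMap<String, u64>,
    known_missing: HashSet<String>,
    remote_lookups: u64,
}

impl VertexResolver {
    fn vertex_id_batch(&mut self, names: &[&str]) -> Vec<Option<u64>> {
        names
            .iter()
            .map(|name| {
                if let Some(id) = self.local.get(*name) {
                    return Some(*id);
                }
                // Respect the missing cache: skip the remote lookup entirely.
                if self.known_missing.contains(*name) {
                    return None;
                }
                self.remote_lookups += 1;
                // (a real implementation would query the server here)
                self.known_missing.insert((*name).to_string());
                None
            })
            .collect()
    }
}

fn main() {
    let mut r = VertexResolver {
        local: HashMap::from([("a".to_string(), 1)]),
        known_missing: HashSet::new(),
        remote_lookups: 0,
    };
    assert_eq!(r.vertex_id_batch(&["a", "x"]), vec![Some(1), None]);
    // Second lookup of "x" hits the missing cache: no extra remote lookup.
    assert_eq!(r.vertex_id_batch(&["x"]), vec![None]);
    assert_eq!(r.remote_lookups, 1);
}
```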
Reviewed By: andll
Differential Revision: D29889457
fbshipit-source-id: 0469b1e61c42ad31e0dd486ab7c752bf4aeeba5c
Summary:
This will help remove some unnecessary cache invalidations, and help avoid
remote lookups.
Reviewed By: andll
Differential Revision: D29889458
fbshipit-source-id: e9a36b227c3b2c7f6b9830a8b27f5a16e363c94e
Summary:
This will be used to detect if the NameDag was changed between reloads,
and decide whether we need to invalidate caches or not.
Reviewed By: andll
Differential Revision: D29888938
fbshipit-source-id: 377879bd8d28c92feca80c025613a65139ccb866
Summary:
The version gets bumped on writing to disk.
This makes it easier for callsites to detect whether there are changes to the
MultiLog. It will be used by the upcoming changes.
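The version-bump pattern can be sketched like this (a hypothetical illustration, not the actual MultiLog code):

```rust
// Hypothetical sketch (names invented, not the real MultiLog API) of a
// store that bumps a version counter every time it is written to disk, so
// callsites can detect "did anything change?" without diffing contents.
struct MultiLog {
    version: u64,
    entries: Vec<String>,
}

impl MultiLog {
    fn new() -> Self {
        MultiLog { version: 0, entries: Vec::new() }
    }

    // Writing to disk bumps the version (real persistence elided).
    fn flush(&mut self) {
        self.version += 1;
    }

    fn version(&self) -> u64 {
        self.version
    }
}

fn main() {
    let mut log = MultiLog::new();
    let seen = log.version();
    log.entries.push("entry".to_string());
    log.flush();
    // A callsite compares versions instead of re-reading the whole log.
    assert!(log.version() > seen);
}
```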
Reviewed By: andll
Differential Revision: D29888939
fbshipit-source-id: 278887cd59c85e49f606334529a27557a4bc1dc5
Summary:
It turns out that the namedag was opened multiple times. Add a fail point to
help figure out the callsite.
The `fail` crate allows something like:
FAILPOINTS="dag-namedag-open=1*sleep(1)->return"
FAILPOINTS="dag-namedag-open=1*sleep(1)->panic"
Meaning that the first open causes a 1ms sleep, and the second causes an error
(which turns into a Python backtrace) or a panic (which turns into a Rust
backtrace with RUST_BACKTRACE=1), depending on which spec is used.
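The action spec format can be sketched with a tiny parser (a simplified illustration of the `count*sleep(ms)->then` shape shown above, not the real `fail` crate):

```rust
// Sketch (not the real `fail` crate): parse an action spec of the form
// "<count>*sleep(<ms>)-><then>", e.g. "1*sleep(1)->return", into its parts:
// how many hits run the sleep, the sleep duration, and the follow-up action.
fn parse(spec: &str) -> Option<(u64, u64, String)> {
    let (head, then) = spec.split_once("->")?;
    let (count, action) = head.split_once('*')?;
    let ms = action.strip_prefix("sleep(")?.strip_suffix(')')?;
    Some((count.parse().ok()?, ms.parse().ok()?, then.to_string()))
}

fn main() {
    // "first 1 hit sleeps 1ms, then return an error"
    let (count, ms, then) = parse("1*sleep(1)->return").unwrap();
    assert_eq!((count, ms, then.as_str()), (1, 1, "return"));
    // "first 1 hit sleeps 1ms, then panic"
    assert_eq!(
        parse("1*sleep(1)->panic"),
        Some((1, 1, "panic".to_string()))
    );
}
```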
Reviewed By: andll
Differential Revision: D29888937
fbshipit-source-id: b1644d7196f68262523ab9a5fc4fb110a4cc0062
Summary:
Previously it checked whether the new hash exists remotely, which made offline
commits impossible.
`tip` is not that important. Just do a local check instead.
Reviewed By: andll
Differential Revision: D29834904
fbshipit-source-id: 94924591a5827942f428b74231b4494999856361
Summary: Show that lazy changelog makes it impossible to commit or amend offline.
Reviewed By: andll
Differential Revision: D29834907
fbshipit-source-id: a268be05947cbf215cff1471a25dba72447bafec
Summary:
Similar to D29440143 (38f3ceafbc), add a way to disable resolving names by setting
a limit using `EDENSCM_REMOTE_NAME_THRESHOLD`.
This is useful to figure out callsites that try to resolve previously unknown
names, e.g. newly generated commit hashes.
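A hypothetical sketch of how such a threshold might gate lookups (the `RemoteResolver` type is invented for illustration; only the environment variable name comes from this diff):

```rust
use std::env;

// Hypothetical sketch (not the real implementation): a counter capped by
// EDENSCM_REMOTE_NAME_THRESHOLD, so that unexpected remote name
// resolutions surface as errors instead of silent network calls.
struct RemoteResolver {
    threshold: Option<u64>,
    used: u64,
}

impl RemoteResolver {
    // Read the threshold from the environment variable, if set.
    fn from_env() -> Self {
        let threshold = env::var("EDENSCM_REMOTE_NAME_THRESHOLD")
            .ok()
            .and_then(|v| v.parse().ok());
        Self::with_threshold(threshold)
    }

    fn with_threshold(threshold: Option<u64>) -> Self {
        RemoteResolver { threshold, used: 0 }
    }

    fn resolve(&mut self, name: &str) -> Result<String, String> {
        if let Some(limit) = self.threshold {
            if self.used >= limit {
                return Err(format!("remote name lookup threshold hit for {}", name));
            }
        }
        self.used += 1;
        Ok(format!("resolved:{}", name)) // placeholder for a real remote lookup
    }
}

fn main() {
    // Simulate EDENSCM_REMOTE_NAME_THRESHOLD=1.
    let mut r = RemoteResolver::with_threshold(Some(1));
    assert!(r.resolve("abc123").is_ok());
    assert!(r.resolve("def456").is_err());
    let _ = RemoteResolver::from_env(); // honors the real env var if present
}
```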
Reviewed By: andll
Differential Revision: D29834906
fbshipit-source-id: 9b6161bd62a026fa5a37e1cda9912bcb8bca6971
Summary:
The patches to these crates have been upstreamed.
allow-large-files
Reviewed By: jsgf
Differential Revision: D29891894
fbshipit-source-id: a9f2ee0744752b689992b770fc66b6e66b3eda2b
Summary:
On Windows, there is a commonly occurring issue where a checkout operation
crashes EdenFS because a conflict is added for an unlinked inode, which
triggers the XCHECK in the addConflict method.
From looking at the code, the comment claiming that inodes cannot be unlinked
during checkout isn't entirely accurate: EdenFS will unlink inodes during
checkout when their content has changed. The code itself should properly
remove the unlinked inode from its parent TreeInode, and thus I haven't fully
figured out the exact series of events that leads to a conflict being added for
an unlinked inode. Since the assumption from the comment is invalid, it should
be safe to not assert that the inode isn't unlinked and to use
InodeBase::getUnsafePath instead of InodeBase::getPath.
Reviewed By: kmancini
Differential Revision: D29241901
fbshipit-source-id: 4239df576b3cbf716fb336fd4d6542939337a297
Summary:
In some cases, the code needs access to the path of an inode even if that
inode is unlinked. In such situations, neither getPath nor getLogPath is
suitable, so let's introduce getUnsafePath, which is intended for this handful
of places.
The only known use case for such method is when adding conflicts during checkouts.
Reviewed By: genevievehelsel
Differential Revision: D29241902
fbshipit-source-id: 7756a95813d6fd5e471538cf82d29604dd5b8e5e
Summary:
Implement batch derivation of blame V2.
Blame derivations are independent so long as the two commits do not change or
delete any of the same files. We can re-use the existing batching code so long
as we change it to split the stacks on *any* change (not just a
change-vs-delete conflict).
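The splitting rule can be sketched as follows (a simplified illustration, with each commit reduced to the set of paths it touches; not the actual Mononoke batching code):

```rust
use std::collections::HashSet;

// Sketch of the batching rule described above: walk a stack of commits and
// start a new batch whenever a commit touches any path already touched in
// the current batch, so commits within a batch are independent for blame.
fn split_batches(stack: &[Vec<&str>]) -> Vec<Vec<usize>> {
    let mut batches = Vec::new();
    let mut current: Vec<usize> = Vec::new();
    let mut touched: HashSet<&str> = HashSet::new();
    for (i, paths) in stack.iter().enumerate() {
        if paths.iter().any(|p| touched.contains(p)) {
            // Conflict with the current batch: close it and start fresh.
            batches.push(std::mem::take(&mut current));
            touched.clear();
        }
        current.push(i);
        touched.extend(paths.iter().copied());
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}

fn main() {
    // Commits 0 and 1 touch disjoint files; commit 2 touches "a" again,
    // so it starts a new batch.
    let stack = vec![vec!["a"], vec!["b"], vec!["a", "c"]];
    assert_eq!(split_batches(&stack), vec![vec![0, 1], vec![2]]);
}
```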
Reviewed By: StanislavGlebik
Differential Revision: D29776514
fbshipit-source-id: b06289467c9ec502170c2f851b07569214b6ff0a
Summary:
I noticed that reading one of the mononoke configs was failing with
```
invalid type: string \"YnrbN4fJXYGlR1EzoxLRvVbibyUiRM/HZThRJnKBThA\", expected
a sequence at line 2587 column 61
```
The problem is coming from the fact that configerator configs use thrift simple
json encoding, which is different from normal json encoding. At the very least
the difference is in how binary fields are encoded - thrift simple json
encoding uses base64 to encode them. [1]
Because of this encoding difference reading the configs with binary fields in
them fails.
This diff fixes it by using simple_json deserialization in
get_config_handle()... but the existing callers of the old broken
`get_config_handle()` are incompatible with the new one: the old
`get_config_handle()` relied on the fact that serde::Deserializer can be used
to deserialize the config, while thrift simple json doesn't implement
serde::Deserializer.
As a first step I migrated existing callers to use the old (now deprecated)
method, and we can migrate them to the new one as needed.
[1] It was a bit hard to figure out for sure what kind of encoding is used,
but the discussion in
https://fb.workplace.com/groups/configerator.users/posts/3062233117342191
suggests that it's thrift simple json encoding after all.
Reviewed By: farnz
Differential Revision: D29815932
fbshipit-source-id: 6a823d0e01abe641e0e924a1b2a4dc174687c0b4
Summary:
Do a similar change to change_target_config as we've done for add_sync_target
in D29848378: move the bookmark only if it points to an expected commit. That
makes it safer to deal with cases where the same change_target_config is
executed twice.
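The "move only if it points to the expected commit" rule is essentially a compare-and-set. A minimal sketch (the `Bookmarks` type and its in-memory map are invented for illustration, not Mononoke's actual bookmark store):

```rust
use std::collections::HashMap;

// Sketch of a compare-and-set bookmark move: the move only succeeds if the
// bookmark currently points at the expected commit, so a replayed request
// becomes an error instead of silently moving the bookmark again.
struct Bookmarks {
    map: HashMap<String, String>,
}

impl Bookmarks {
    fn move_if_expected(
        &mut self,
        name: &str,
        expected: &str,
        target: &str,
    ) -> Result<(), String> {
        let cur = self.map.get(name).cloned();
        match cur {
            Some(ref c) if c.as_str() == expected => {
                self.map.insert(name.to_string(), target.to_string());
                Ok(())
            }
            Some(c) => Err(format!("{} points to {}, expected {}", name, c, expected)),
            None => Err(format!("bookmark {} does not exist", name)),
        }
    }
}

fn main() {
    let mut bm = Bookmarks {
        map: HashMap::from([("target".to_string(), "c1".to_string())]),
    };
    assert!(bm.move_if_expected("target", "c1", "c2").is_ok());
    // Replaying the same request fails instead of moving the bookmark again.
    assert!(bm.move_if_expected("target", "c1", "c2").is_err());
}
```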
Reviewed By: mojsarn
Differential Revision: D29874803
fbshipit-source-id: d21a3029ee58e2a8acc41e37284d0dd03d2803a3
Summary:
This is the first diff that tries to make megarepo asynchronous methods
idempotent - replaying the same request twice shouldn't cause corruption on the
server. At the moment this is not the case: if we have a runaway
add_sync_target call, then in the end it moves a bookmark to a random place,
even if another identical add_sync_target call already succeeded and a few
other calls landed on top.
add_sync_target should create a new bookmark, and if a bookmark already exists
it's better to not move it to a random place.
This diff does that. However, it creates another problem: if a request was
successful on the Mononoke side but we failed to deliver the successful result
to the client (e.g. due to network issues), then retrying the request would
fail because the bookmark already exists. This problem will be addressed in the
next diff.
Reviewed By: mojsarn
Differential Revision: D29848378
fbshipit-source-id: 8a58e35c26b989a7cbd4d4ac4cbae1691f6e9246