Summary:
Copy from Watchman.
This allows us to show a stack trace when EdenFS terminates on Windows.
Reviewed By: chadaustin
Differential Revision: D27896966
fbshipit-source-id: f3238a37a1176f879d5e6bc051ec97031c9a7096
Summary:
Add a way to extend the graph with concrete commit hashes, without specifying
exact commit messages.
Reviewed By: sfilipco
Differential Revision: D27897894
fbshipit-source-id: fccd64b2fef1386d79cddd841208da6a938a5217
Summary:
scrub blobstore logging was missing the common server logging fields that LogBlob and MultiplexedBlobstore add.
Also moved the LogBlob scuba construction closer to use site for clarity.
Reviewed By: StanislavGlebik
Differential Revision: D27966453
fbshipit-source-id: 77fe70606602753301a2503691a490c0b11c755a
Summary:
Currently when we call with_mutated_scuba() we create a new LoggingContainer.
That means that all the data from the previous LoggingContainer like PerfCounters
is lost. I suspect this is the reason we don't log any BlobGets/Puts for
repo_create_commit methods (see
[scuba](https://fburl.com/scuba/mononoke_scs_server/fautos3s)) - we call
[with_mutated_scuba method](https://fburl.com/code/srd1c4xu) right before
calling repo_create_commit(), and I suspect this loses the counters.
Let's instead copy all the Logging fields when calling `with_mutated_scuba`.
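As a rough sketch of that fix (the real code is Rust inside Mononoke; the container and field names here are hypothetical), copying the existing logging fields and swapping only the scuba sample is what preserves the PerfCounters:

```python
from dataclasses import dataclass, field, replace
from typing import Callable


@dataclass(frozen=True)
class LoggingContainer:
    # Hypothetical mirror of the Rust struct: the scuba sample plus the
    # other logging state that must survive a scuba mutation.
    scuba: dict = field(default_factory=dict)
    perf_counters: dict = field(default_factory=dict)


def with_mutated_scuba(
    logging: LoggingContainer, mutator: Callable[[dict], dict]
) -> LoggingContainer:
    # Copy every existing field and only swap the scuba sample, instead of
    # building a fresh container (which is what dropped the PerfCounters).
    return replace(logging, scuba=mutator(dict(logging.scuba)))
```

With this, counters accumulated before the mutation are still visible afterwards.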
Reviewed By: krallin
Differential Revision: D27964719
fbshipit-source-id: 881c11bb5fb1927dbf55d0d625ea8bfbf11be131
Summary:
In test-cross-repo-commit-validator.t and test-cross-repo-commit-sync.t, we
modify bookmarks outside of Mononoke, so we need to flush them before pulling.
In test-megarepo-invisible-merge.t, things are actually a little more subtle
and I wonder if there might be another issue lying around there. If we don't
flush bookmarks, then we attempt to upload one more hg commit, and that
blows up: P410235472. However, if we flush bookmarks, then we don't attempt to
upload, and all is fine. Here, flushing is just a workaround, but for now
that'll do. There was also another bug here, where we change configs but
don't force them to take effect.
Reviewed By: StanislavGlebik
Differential Revision: D27964959
fbshipit-source-id: 9c4304b38513177e402ee64f309e019e227ed2a7
Summary:
When we're packing, we pay an overhead price for keeping the key in the pack. As we're only bothered about reducing size, let's limit that price to when the savings from packing are worth it.
There are two cases where it's not worth it:
1. When the compressed pack is larger than the sum of single compressed files sizes.
2. When compressing a single file on its own.
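Those two cases can be sketched as follows (a hypothetical Python model, with zlib standing in for the real compressor; the actual packer is in Mononoke's Rust blobstore code):

```python
import zlib


def should_pack(blobs: dict[str, bytes]) -> bool:
    """Sketch of the sizing decision above: packing pays a per-key
    overhead, so only pack when it actually shrinks the total size."""
    # Case 2: a single blob gains nothing from being packed on its own.
    if len(blobs) <= 1:
        return False
    # Case 1: compare the compressed pack (keys included, hence the
    # overhead) against the sum of individually compressed blobs.
    packed = zlib.compress(
        b"".join(key.encode() + b"\0" + value for key, value in blobs.items())
    )
    singles = sum(len(zlib.compress(value)) for value in blobs.values())
    return len(packed) < singles
```

Packing wins when blobs share redundancy the compressor can exploit across entries, and loses for lone or mutually incompressible blobs.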
Reviewed By: ahornby
Differential Revision: D27913258
fbshipit-source-id: 36cdc2a2b30aa508281ac3bbd70da41322533edb
Summary:
Like it says in the title. This test wants to see consistent bookmarks so it
should flush them.
Note that this used to just opt out of bookmarks caching entirely, but I'd like
us to try and avoid having so many snowflakes in our tests (because it makes
their maintenance harder), so instead of changing the environment, let's change
the test to do what it needs to do.
Reviewed By: mzr, HarveyHunt
Differential Revision: D27964099
fbshipit-source-id: 72e00bad07dec15f18faaf4aa2e32e78cb333ab0
Summary: Was getting orphan autocargo lints on these so add a config for them.
Reviewed By: krallin
Differential Revision: D27947231
fbshipit-source-id: 925fb78889d8f80f51145536a157fa0e63cc68d7
Summary:
The current implementation had a bug (demonstrated in the test case) in handling unknown files on a case-insensitive fs.
When a file is replaced with another file whose name differs only in case, we get two distinct update operations: remove the old file, and create the new one.
The create operation checks against unknown files, sees that the file "exists", and aborts the operation.
However, we should proceed in this case, and this diff fixes that.
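A minimal model of the fixed check (pure Python with hypothetical names; the real code consults the working copy and the update plan) looks like this:

```python
def should_abort_create(
    target: str,
    on_disk: set[str],
    scheduled_removals: set[str],
    case_sensitive: bool,
) -> bool:
    """Abort a create on an unknown file only if it's a real conflict."""

    def exists(name: str) -> bool:
        if case_sensitive:
            return name in on_disk
        # On a case-insensitive fs, "readme.TXT" "exists" even if only
        # "README.txt" is on disk.
        return any(f.lower() == name.lower() for f in on_disk)

    if not exists(target):
        return False
    # The unknown file "exists" -- but on a case-insensitive fs it may just
    # be the differently-cased file being removed by this same update, in
    # which case we should proceed rather than abort.
    if not case_sensitive and any(
        r.lower() == target.lower() for r in scheduled_removals
    ):
        return False
    return True
```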
Reviewed By: quark-zju
Differential Revision: D27926953
fbshipit-source-id: 48c8824322d6e5dd9ae57fee1f849b57dc11a4df
Summary: Will be useful on a case-insensitive fs
Reviewed By: quark-zju
Differential Revision: D27946982
fbshipit-source-id: e7a2fd0ee503c4a580531e6f52225fe2316e5b76
Summary: This diff adds a flag to VFS to detect whether the FS is case sensitive. The logic in this code loosely follows similar logic in Python
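The usual probe for this, which the mirrored Python logic roughly amounts to, is to create a file and test whether a case-swapped name resolves to it (a sketch; a real implementation has to handle more edge cases):

```python
import os
import tempfile


def fs_is_case_sensitive(path: str) -> bool:
    # The "CaseProbe" prefix guarantees letters in the name, so the
    # swapped-case name is always a genuinely different string.
    fd, probe = tempfile.mkstemp(prefix="CaseProbe", dir=path)
    os.close(fd)
    try:
        name = os.path.basename(probe)
        swapped = os.path.join(path, name.swapcase())
        # On a case-insensitive fs the swapped name resolves to the probe
        # file; on a case-sensitive one it doesn't exist.
        return not os.path.lexists(swapped)
    finally:
        os.unlink(probe)
```

Note the probe must run against the mount being checked, since case sensitivity is a per-filesystem property, not a per-OS one.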
Reviewed By: quark-zju
Differential Revision: D27926952
fbshipit-source-id: 36fdf4187ae513b25346f704050c64f9a1a4ec74
Summary: This way the fallback server knows which traffic is coming from Mononoke
Reviewed By: krallin
Differential Revision: D27946019
fbshipit-source-id: 8c13ae641ba340ba55322871ca30fb6accb3f007
Summary:
Update the zstd crates.
This also patches async-compression crate to point at my fork until upstream PR https://github.com/Nemo157/async-compression/pull/117 to update to zstd 1.4.9 can land.
Reviewed By: jsgf, dtolnay
Differential Revision: D27942174
fbshipit-source-id: 26e604d71417e6910a02ec27142c3a16ea516c2b
Summary:
When EdenFS is killed, either due to `eden stop` timing out, or when simply
rebooting the host, the edenfs.log becomes filled with fsck errors, which also
slows down the fsck process.
Since we already print the number of errors per mount, limiting ourselves to the
first 50 errors is probably good enough.
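A sketch of the capping behavior (hypothetical function and cap constant; the real code is C++ inside EdenFS):

```python
MAX_LOGGED_ERRORS = 50  # hypothetical cap, per the summary above


def log_fsck_errors(errors: list, log=print) -> None:
    """Log at most the first MAX_LOGGED_ERRORS errors, then a summary,
    so a hard-killed EdenFS doesn't flood edenfs.log on remount."""
    for err in errors[:MAX_LOGGED_ERRORS]:
        log(f"fsck error: {err}")
    if len(errors) > MAX_LOGGED_ERRORS:
        log(f"... suppressed {len(errors) - MAX_LOGGED_ERRORS} more errors")
    log(f"{len(errors)} total errors on this mount")
```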
Reviewed By: fanzeyi
Differential Revision: D27943618
fbshipit-source-id: 2b3e6e3ae4df648d4b1dccf73c71f8dbbded3892
Summary:
We get pretty frequent query errors from MySQL on this, but it's hard to debug
without knowing what is being queried.
Reviewed By: StanislavGlebik
Differential Revision: D27941603
fbshipit-source-id: 62e0f0fe9c3af36ed829c401e957ecf7683a4000
Summary:
The migration to tpx broke Watchman's tests because test_bser relies on YARN_YARN_OFFLINE_MIRROR being set in the environment and tpx doesn't forward environment variables into the test.
Explicitly forward them ourselves.
Reviewed By: fanzeyi
Differential Revision: D27897172
fbshipit-source-id: 16c8017a89979802bd9d443825ed4e22cb6ff6c9
Summary: Mercurial still needs this to work in Python 2 for a few more weeks.
Reviewed By: quark-zju, xavierd
Differential Revision: D27943521
fbshipit-source-id: 2b5106496fbb523cdc97a3dce3ad0cbfab5c17b7
Summary:
This handles a large chunk of cases where the tree merge returns a conflict, but the conflict can be trivially resolved by textual merge.
No markers are left in the file; if the merge yields conflicts we simply fall back to on-disk merge, same as with the existing code.
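The trivial resolutions can be illustrated at whole-file granularity (the real textual merge works per-region; this sketch only shows the degenerate cases):

```python
def try_trivial_merge(base: bytes, ours: bytes, theirs: bytes):
    """Return merged content when one side is trivially right, else None
    (None meaning: fall back to on-disk merge, as described above)."""
    if ours == theirs:
        return ours    # both sides made the same change
    if ours == base:
        return theirs  # only their side changed
    if theirs == base:
        return ours    # only our side changed
    return None        # real conflict: no markers left, just fall back
```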
Reviewed By: quark-zju
Differential Revision: D27752771
fbshipit-source-id: ff8d4bbc88b48812150327cae6e31991a30236c9
Summary: Those conflicts can be resolved in Python using textual 3-way merge
Reviewed By: DurhamG
Differential Revision: D27752770
fbshipit-source-id: 816a601112ee2e747d780f8b17473049df46b469
Summary:
This diff modifies the rebase flow (based on config) and attempts to create the commit without using merge.py:update.
This currently passes some test cases, but not all.
This implementation currently does not attempt to resolve conflicts and falls back to on-disk merge if they are encountered.
This fails some test cases, because they expect some trivial conflicts to be resolved by the in-memory merge.
There are also certain rebase flags that are not handled yet.
Reviewed By: DurhamG
Differential Revision: D27639394
fbshipit-source-id: d8f71e955930e3a8a64d7d95a0cf184d9b4ccadc
Summary: This diff exposes a manifestbuilder that can be used to construct a memctx from Python
Reviewed By: DurhamG
Differential Revision: D27639395
fbshipit-source-id: ed047d3d7533f9d2bc592a5d948dc01e429692a7
Summary:
This diff makes MySQL FFI client the default option for a MySQL connection. It means that if no arguments are provided, the MySQL FFI client is used. `--use-mysql-client` option is still accepted, as it is used in the configs, and will be removed a bit later.
I also removed raw connections as a way to connect to MySQL from Mononoke, as it is no longer used. Although I had to keep some `sql_ext` API for now because other projects rely on it.
(I talked to the teams and they are willing to switch to the new client as well. I'm helping where it's possible to replace these raw xdb conns.)
Reviewed By: krallin
Differential Revision: D27925435
fbshipit-source-id: 4f08eef07df676a4e6be58b6e351be3e3d3e8ab7
Summary:
Right now, we can't have defaults in our tunables, because some tests clobber
them. Let's start updating tunables instead of replacing them.
NOTE: I was planning to use this in my next diff, but I ended up not needing
it. That said, it feels like a generally positive improvement, so I figured
I'd keep it.
Reviewed By: StanislavGlebik
Differential Revision: D27915402
fbshipit-source-id: feeb868d99565a375e4e9352520f05493be94a63
Summary:
This updates the bookmarks cache TTL to be something we configure using
tunables instead of repo configs. There's a few goals here:
- Letting us tune the pressure we put on SQL via a tunable
- Letting us experiment more easily with disabling this cache and tuning the
WBC poll interval.
Right now, this defaults to what we had set up in all our repo configs, which
is 2000ms.
Reviewed By: farnz
Differential Revision: D27915403
fbshipit-source-id: 4361d38939e5b2a0fe37442ed8c1dd0611af5098
Summary:
One of our plans for this half is to replace the warm bookmarks cache with a
service, and we suspect this will effectively eliminate bookmarks queries from
our hosts, because we think they all come from the WBC.
But, before we invest our time into this, let's make sure that this assumption
is actually correct, by tracking who's querying bookmarks.
Reviewed By: StanislavGlebik
Differential Revision: D27938407
fbshipit-source-id: d9a9298e7409c9518a4b9bf8ac0a6cef53750473
Summary:
I'd like to be able to track the proportion of traffic coming to bookmarks from
warm bookmarks cache vs. from elsewhere. We don't have a great abstraction to
pass this via the CoreContext at this time, but the SessionClass seems like a
pretty good fit.
Indeed, it's always available in the CoreContext, and can be freely mutated
without having to rebuild the whole session. Besides, it aligns pretty well
with the existing use cases we have for SessionClass, which is to give you a
different level of service depending on who you are.
Reviewed By: StanislavGlebik
Differential Revision: D27938413
fbshipit-source-id: a9dcc5a10c8d1459ee9586324a727c668e2e4e40
Summary:
Phases calculation can be expensive on the server, so it should be a perf win to disable it when it's not needed.
It shouldn't be needed if narrow heads are enabled.
Reviewed By: quark-zju
Differential Revision: D27908691
fbshipit-source-id: 7000fb23f9332d58c2c488ffbef14d73af4ac532
Summary:
`MononokeMegarepoConfig` is going to be a single point of access to the
config storage system, providing both writes and reads. It is also a trait, to
allow for unit-test implementations later.
This diff introduces a trait, as well as implements the write side of the
configerator-based implementor. The read side/oss impl/test impl
is left `unimplemented`. Read side and test impl will be implemented in the future.
Things I had to consider while implementing this:
- I wanted to store each version of `SyncTargetConfig` in an individual
`.cconf` in configerator
- at the same time, I did not want all of them to live in the same dir, to
avoid having dirs with thousands of files in it
- dir sharding uses sha1 of the target repo + target bookmark + version name,
then separates it into a dir name and a file name, like git does
- this means that these `.cconf` files are not "human-addressable" in the
configerator repo
- to help this, each new config creation also creates an entry in one of the
"index" files: human-readable maps from target + version name to a
corresponding `.cconf`
- using a single index file is also impractical, so these are separated by an
  ASCII-fication of the repo_id + bookmark name
Note: this design means that there's no automatic way to fetch the list of all
targets in use. This can be bypassed by maintaining an extra index layer, which
will list all the targets. I don't think this is very important atm.
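The git-style sharding described above might look roughly like this (the key derivation and path layout here are assumptions for illustration, not the actual Mononoke code):

```python
import hashlib


def config_path(repo_id: int, bookmark: str, version: str) -> str:
    """Shard a SyncTargetConfig into dirs the way git shards objects:
    sha1 the target identity, use the first 2 hex chars as the dir name
    and the remaining 38 as the file name."""
    digest = hashlib.sha1(
        f"{repo_id}:{bookmark}:{version}".encode()
    ).hexdigest()
    return f"{digest[:2]}/{digest[2:]}.cconf"
```

This caps any one directory at 256 subdirs, at the cost of the human-readable index files the summary describes.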
Reviewed By: StanislavGlebik
Differential Revision: D27795663
fbshipit-source-id: 4d824ee4320c8be5187915b23e9c9d261c198fe1
Summary:
We started getting the message:
```
stderr: eden/fs/utils/SpawnedProcess.cpp:798:21: error: 'getIOExecutor' is deprecated: getIOExecutor is deprecated. To use the global mutable executor use getUnsafeMutableGlobalIOExecutor. For a better solution use getGlobalIOExecutor. [-Werror,-Wdeprecated-declarations]
```
I don't see why we would need a mutable executor here so I chose `getGlobalIOExecutor` over `getUnsafeMutableGlobalIOExecutor`.
Reviewed By: kmancini
Differential Revision: D27912276
fbshipit-source-id: 95b1053f72c2b4eb2746e3c40c0cf76b69d90d6e
Summary:
In case the Mononoke server cannot provide the commit graph, and we need to
checkout and push changes. Let's add an emergency mode where the commit graph
only contains a single commit: master.
This can be enabled with `--config unsafe.emergency-clone=1`:
```
~/hg % lhg clone --shallow -U mononoke://mononoke.internal.tfbnw.net/fbsource ~/tmp/c1 --config unsafe.emergency-clone=1 --configfile /data/users/quark/.eden-backing-repos/fbs-lazy/.hg/hgrc.dynamic
connected to <remote host> session yyvXqQlHnMYQMEfw
warning: cloning as emergency commit+push use-case only! accessing older commits is broken!
resolving master
connected to <remote host> session ODc4PPiJ21L6r4Sn
added master: 248bd246f4467a2d4d0cacc09c5e55131ada9919
warning: this repo was cloned for emergency commit+push use-case only! accessing older commits is broken!
```
Smartlog:
```
~/hg % cd ~/tmp/c1
~/tmp/c1 % lhg sl
warning: this repo was cloned for emergency commit+push use-case only! accessing older commits is broken!
o 248bd246f 25 seconds ago remote/master
```
Pull:
```
~/tmp/c1 % lhg pull
warning: this repo was cloned for emergency commit+push use-case only! accessing older commits is broken!
pulling from ssh://hg.vip.facebook.com//data/scm/fbsource?stage1_read
connected to twshared1103.03.prn6.facebook.com session L4sDKzLm093aLUbo
searching for changes
adding commits
adding manifests
adding file changes
added 8 commits with 0 changes to 0 files
```
Checkout:
```
~/tmp/c1 % lhg sparse include .gitignore
warning: this repo was cloned for emergency commit+push use-case only! accessing older commits is broken!
~/tmp/c1 % lhg up master
warning: this repo was cloned for emergency commit+push use-case only! accessing older commits is broken!
19 files updated, 0 files merged, 0 files removed, 0 files unresolved
```
Commit:
```
~/tmp/c1 % vim .gitignore
~/tmp/c1 % lhg c -m gitignore
warning: this repo was cloned for emergency commit+push use-case only! accessing older commits is broken!
```
Smartlog:
```
~/tmp/c1 % lhg sl
warning: this repo was cloned for emergency commit+push use-case only! accessing older commits is broken!
@ cc43f0e5b (Backup pending) 4 seconds ago quark
╭─╯ gitignore
│
o 10ef2879e 5 minutes ago remote/master
│
~
```
Reviewed By: andll
Differential Revision: D27897892
fbshipit-source-id: f1770482455968dac217c9c6ee34ec0a20e5f432
Summary:
I found that there are still lots of (automation) users using the legacy clone
code path, but it's unclear why (not having selectivepull?). Let's log the
reasons why the legacy path is used.
Reviewed By: sfilipco
Differential Revision: D27913616
fbshipit-source-id: b83f15e42a4afa94164b68bc9a91b4f0c022260c