Summary:
See the previous diff for context. Disable the Rust progress output when an
external pager is in use.
Reviewed By: kulshrax
Differential Revision: D27149241
fbshipit-source-id: 4260a8be55bbfa648d8910f021195e9d11bdab73
Summary:
When testing "disable_progress" with chg (next diff) I found it was not
effective because there are 2 separate IO structs. The one we disable
from Python is different from the one the Rust progress thread uses.
I traced it down to here. Since the Python IOs are just wrappers of the Rust
IOs in the chg use-case, there is no need to recreate an IO struct.
Creating a fresh IO struct is still useful for things like "-t.py" testing,
where the output needs to be captured into different Python variables for
different commands.
Reviewed By: DurhamG
Differential Revision: D27149243
fbshipit-source-id: 6e27adcc9f48b21fc24fba120be8c4a8fef1f909
Summary:
In some cases (e.g. using an external pager), the IO state is changed outside
the IO struct's control. Ideally we should implement the external pager logic
on IO too but for now let's just add an API so the Python external pager logic
can disable progress output after starting an external pager.
Reviewed By: kulshrax
Differential Revision: D27149242
fbshipit-source-id: ff51fc153d3cc211cfa8ef697923d36f7c0f0d9b
Summary:
Detecting prod on Windows wasn't working because we used a posix path.
Let's add the Windows equivalent.
Also moves us to use the new hostcaps crate.
Reviewed By: chadaustin
Differential Revision: D27126497
fbshipit-source-id: 4035012fb7701378fb6e2e902c0efcd54ef42ea9
Summary:
We've been seeing issues where repositories end up with incorrect
dynamic configuration since there's a window of time after they're cloned where
they don't have %include /etc/mercurial/.../repo.rc and therefore generate an
incorrect dynamicconfig which gets used for 15 minutes until we regen the
dynamicconfig.
Let's change hg clone to write the %include as part of the initial hgrc, so we
remove that window of time and the repo will always be correctly configured.
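The change can be pictured as a minimal repo-level hgrc written at clone time; the include path is abbreviated exactly as in the summary above, and the remote path is purely illustrative:

```
# .hg/hgrc, now written as part of the initial clone
%include /etc/mercurial/.../repo.rc

[paths]
default = ssh://hg.example.com/repo
```

With the `%include` present from the start, the first dynamicconfig generation already sees the repo-level config.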
Reviewed By: quark-zju
Differential Revision: D27093772
fbshipit-source-id: a9ca0ec54e06549546d532d1c49a80d49981decf
Summary:
I'd like to add support for event handlers in the Rust ServiceRouter client (I
need this in order to inject CATs in calls made by the SMC client). To do so, I
need to be able to instantiate a `ContextStack`, and to do so, I need a static
c-string representing the service name & function name, which is what this diff
does.
Note that we actually do the same thing for Rust servers already.
#forcetdhashing
Reviewed By: farnz
Differential Revision: D27088187
fbshipit-source-id: be2ad541d5123b31f0dab73da16b35dbfd308d6f
Summary:
The NFS readdir turns out to be pretty similar to the FUSE one, with a couple
of differences. For one, it only populates the directory entry name; it also
puts a limit on the total size of the serialized result, including all the
NFS/XDR overhead.
It is not specified if the . and .. entries need to be returned, but since the
NFS spec is usually pretty explicit about these and makes it clear that this is
for the most part a client burden, I didn't add these. I may have to revisit
this later when I get to manually browse a repository.
Since the READDIR RPC doesn't populate any filehandle, the client will have to
issue a LOOKUP RPC for each entry, potentially leading to some inefficiencies.
A future diff will implement READDIRPLUS to fix these.
Reviewed By: chadaustin
Differential Revision: D26802310
fbshipit-source-id: b821b57021d0c2dca33427975b1acd665173bc5c
Summary:
This simplifies a handful of tests and will make writing the READDIR RPC a bit
less magical when computing the amount of memory needed per entry.
Reviewed By: chadaustin
Differential Revision: D26802312
fbshipit-source-id: fc66cb68f721ed34c8f9879cdda2cd8db6ed8daa
Summary: This merely adds the types for the READDIR RPC.
Reviewed By: chadaustin
Differential Revision: D26802313
fbshipit-source-id: 634ff9b3f97dc4dba56d225c1fb9eae0a94c02d5
Summary:
Looking at the spec, READDIRPLUS appears to be more complex to implement than
READDIR, so for now let's force the use of READDIR. Future changes will have to
implement READDIRPLUS, as that will likely be a perf improvement.
Reviewed By: chadaustin
Differential Revision: D26802311
fbshipit-source-id: cb784d74507e6c2c2ba4dc0aebe69cfcd69db40b
Summary:
This type is very specific to Fuse, so let's make that obvious. The readdir
method has also been renamed, as it is also very specific to Fuse.
Reviewed By: chadaustin
Differential Revision: D26802309
fbshipit-source-id: c2acdfd1c0006935c59b685fcda729e1bef88928
Summary:
This test verifies that the issue we had previously with assign_ids does not
creep up again.
Reviewed By: quark-zju
Differential Revision: D27105741
fbshipit-source-id: 49b385b2026b599c92c406331a2299931a2eae46
Summary: Update the logs so that it's clearer what is going on.
Reviewed By: quark-zju
Differential Revision: D27145099
fbshipit-source-id: 11ec7b467157d07dd41893dc82f251a1c555365f
Summary:
We are also going to update the IdMapVersionStore before we start writing the
IdMap. This means that if we crash while writing the IdMap, future runs won't
try to reuse the IdMapVersion that we used previously.
Reviewed By: quark-zju
Differential Revision: D27145097
fbshipit-source-id: b911e2dca32d0fe8ae0aead3de75373dd2f936c4
Summary:
We are going to build the iddag before starting to write the idmap.
This means that if the iddag fails to build for whatever reason, we won't have
written a potentially useless idmap.
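The ordering can be sketched as below. All names here are illustrative, not the actual seeder API; the point is only that nothing is persisted until the in-memory build succeeds.

```python
def seed(changesets, build_iddag, write_idmap):
    """Build first, persist second, so a failed build leaves nothing behind."""
    iddag = build_iddag(changesets)  # may raise; nothing has been written yet
    write_idmap(changesets)          # only reached if the build succeeded
    return iddag
```

If `build_iddag` raises, `write_idmap` is never called, so no useless idmap is left on disk.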
Reviewed By: quark-zju
Differential Revision: D27145098
fbshipit-source-id: c9045abea2a1f5a8b96c524d546776fdc693b56a
Summary:
`update::build` is only used by the seeder. The steps in this function are not
isolated enough from the seeder to warrant a separate function. The seeder has
the role of building its own type of StartState. It is also the only process
that deals with the IdMapVersionStore. The seeder is particular enough that it
makes sense to inline its build order.
Reviewed By: quark-zju
Differential Revision: D27099265
fbshipit-source-id: f86b8d7d4637a5f2582e70fc58b60c2041b93548
Summary:
The most important invariant for IdDag is that parent nodes have ids that are
smaller than those of their child nodes. We had a couple of issues that broke
this invariant, so we are adding these extra checks. They will help us diagnose
issues faster and protect production data against faulty updates.
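The invariant can be stated as a small check like the one below. This is a hypothetical sketch, not the dag crate's API: `check_assignment` and its input shape are made up for illustration.

```python
def check_assignment(parents):
    """parents maps each commit's assigned id to the ids of its parents.

    Raises if any parent was assigned an id >= its child's id, which is
    the ordering invariant the extra checks above are guarding.
    """
    for child_id, parent_ids in parents.items():
        for parent_id in parent_ids:
            if parent_id >= child_id:
                raise ValueError(
                    f"invariant violated: parent {parent_id} >= child {child_id}"
                )
```

Any topological assignment (parents numbered before children) passes; a faulty update that numbers a child before its parent is caught immediately.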
Reviewed By: quark-zju
Differential Revision: D27092204
fbshipit-source-id: 1f052b290a494e267fac2f551ba51582baa67973
Summary: Shadowing can end up being confusing.
Reviewed By: quark-zju
Differential Revision: D27143481
fbshipit-source-id: 0a1913d8952fe913cc7596b9aea84df2d62cc3fe
Summary:
Check that the head has a dag id assignment after finishing the process. This
was done at a later point, but it is better to group it with the assignment
process so that we have a clear source for the error.
Reviewed By: quark-zju
Differential Revision: D27143482
fbshipit-source-id: 2a94cee70142967b4f8d57df43dfcc339a0b4f2e
Summary: Move around code, similar to other data types.
Reviewed By: StanislavGlebik
Differential Revision: D27044301
fbshipit-source-id: 6c1c104533592733e95c3976717c5ac484218c6f
Summary:
Like it says in the title. We do the same thing (with the sampling rates) in
repo client.
Reviewed By: mitrandir77
Differential Revision: D27156569
fbshipit-source-id: ffaec7e27b454650263e82fd6d18f25c1bbf88eb
Summary:
We use srselect on hg hosts to figure out which Mononoke servers hgcli can connect to.
We were getting BrokenPipe errors most likely from srselect.
`E0318 03:51:02.558708 679 [tk] eden/mononoke/server/repo_listener/src/connection_acceptor.rs:201] connection_acceptor error: Failed to handle connection to [2401:db00:12:90c3:face:0:361:0]:57594: Failed to handle_connection: Failed to handle_http: Failed to serve_connection: error shutting down connection: Broken pipe (os error 32): Broken pipe (os error 32)
`
Tbh it isn't much different from `ErrorKind::NotConnected`, because we're shutting the connection down anyway and it doesn't matter whether the client hung up or is already dead.
Reviewed By: krallin
Differential Revision: D27155123
fbshipit-source-id: 96bb2b268f116a20f16605eb04c867c9ad047b1f
Summary: We have new config fields available that can specify default compression level, let's read and use them.
Reviewed By: StanislavGlebik
Differential Revision: D27127455
fbshipit-source-id: 27935fd58da5f1150c9caf56d9601c37f2ae3581
Summary: Bring across the thrift changes so we can code against them.
Reviewed By: krallin
Differential Revision: D27116899
fbshipit-source-id: 27bf6f23bebbc43d6c4d6c668ff905b72b0eb0f9
Summary:
In regular xcode this was warning and being ignores. Not the working is handled as an error.
This diff is only a workaround so we wont get those errors .
```
eden/scm/edenscm/mercurial/cext/osutil.c:745:49: error: passing 'const char *' to parameter of type 'char *' discards qualifiers [-Werror,-Wincompatible-pointer-types-discards-qualifiers]
ret = _listdir_batch(path, pathlen, keepstat, skip, &fallback);
^~~~
eden/scm/edenscm/mercurial/cext/osutil.c:586:11: note: passing argument to parameter 'skip' here
char* skip,
```
Reviewed By: mzlee
Differential Revision: D27136440
fbshipit-source-id: 00d61fd00e3ed8e23643ea69b5a82dbeb5e742ce
Summary:
With `sl -r OBSOLETED` the intention is to see the obsoleted stack instead of
just a single commit. So filter the "::heads" with "- public()", not "& draft()".
The goal is to deprecate `--hidden`. See the linked post for more context.
Reviewed By: DurhamG
Differential Revision: D27093425
fbshipit-source-id: 76e9650a809c1d94da2341e2aca31d349487610d
Summary:
When creating the .hg directory, Mercurial issues a SYMLINK RPC, thus let's
support it.
Reviewed By: kmancini
Differential Revision: D26785005
fbshipit-source-id: a760d55e6117cc3725444c604e3e4036f4a317b2
Summary:
For now, this simply clones a repo with NFS and nothing else; more of the
protocol needs implementing to support reading directories, files, etc.
Reviewed By: kmancini
Differential Revision: D26266144
fbshipit-source-id: e379e12126162f41d8d166bb53652e1e501de2e9
Summary:
Better Git<->Bonsai conflict reporting.
It now prints the conflicting mapping, so one can see whether there is a 2x bcs_id -> same git_sha1 or a 2x git_sha1 -> same bcs_id conflict. This is useful when trying to sort out why the issue appeared in the first place.
Reviewed By: ahornby, krallin
Differential Revision: D27058044
fbshipit-source-id: 9db7a210c1b0be3e0e90aa78b561293d6cf29c26
Summary:
Allow using a custom accumulator inside gitimport so we can selectively decide what to save.
This was triggered mainly because we ran out of memory due to large BonsaiChangesets being collected even when they were not needed.
Reviewed By: krallin
Differential Revision: D27117686
fbshipit-source-id: 99ce33562e76470f91ff8c0c46391bd513801afa
Summary:
This is mostly just copying /usr/include/linux/fuse.h from my devserver and
updating some flags in FuseChannel to display the new flags.
Reviewed By: chadaustin
Differential Revision: D27144667
fbshipit-source-id: 4854c6edd4c793ca707db26fecd11e2a3e9d7b75
Summary:
Segmented Changelog divides commits into two groups: MASTER and
NON_MASTER. The MASTER group is assumed to be big and special attention is paid
to it. Algorithms optimize for efficiency on MASTER.
The current state for the segmented_changelog crate in Mononoke is that it does
not assign NON_MASTER commits. It doesn't need to right now. We want to
establish a baseline with the MASTER group. It was however possible for the
on demand update dag to assign commits that were no in the master group to the
master group because no explicit checks were performed. That could lead to
surprising behavior.
At a high level, the update logic that we want is: 1. assign the master
bookmark changeset to the MASTER group, 2. assign other commits that we need to
operate on to the NON_MASTER group. For now we need 1, we will implement 2
later.
Reviewed By: krallin
Differential Revision: D27070083
fbshipit-source-id: 922bcde3641ca25512000cd1a912c5b399bdff4b
Summary:
Pull in SegmentedChangelogConfig and build a SegmentedChangelog instance.
This ties the config with the object that we build on the servers.
This separates the instantiation of the sql connections from building any kind
of segmented changelog structure. The primary reason is that there may be multiple
objects that get instantiated and for that it is useful to be able to pass
this object around.
Reviewed By: krallin
Differential Revision: D26708175
fbshipit-source-id: 90bc22eb9046703556381399442117d13b832392
Summary:
This was lost somehow. I probably incorrectly resolved some conflict when
rebasing a previous change.
Reviewed By: quark-zju
Differential Revision: D27146022
fbshipit-source-id: 13bb0bb3df565689532b2ab5299cd757f278f26e
Summary:
The reclone option code has landed for fbclone, so now we can direct users
there first, so they don't have to go through all these steps.
(This won't land until I check that the option has actually made it to
production.)
I also updated the wiki this points to so it tells users to use `eden list` to
detect EdenFS checkouts instead of looking for .eden, as these steps also apply
when an EdenFS checkout is borked and needs a reclone, and `eden list` works
more reliably in that situation.
Reviewed By: StanislavGlebik
Differential Revision: D26435380
fbshipit-source-id: 9153e730e1be949d130af85d604623d2bfbd3990
Summary:
Some of our subprocess calls are running into dylib errors. The cause looks to
be related to our environment variables. We already have environment hygiene
logic for buck, so let's borrow it for use elsewhere.
This is to fix prefetch profile fetching on Mac, but I ran into another error
when testing `eden du --clean`.
Reviewed By: genevievehelsel
Differential Revision: D27135268
fbshipit-source-id: 3955ddefc5e9ff60e966f63f7dc65ef737186464
Summary:
This creates .hg directories when there is no repository at all, which
breaks jf submit in git repos. D26801059 is the real fix, but it has some other
complications. Let's drop the creation here, since it really isn't necessary.
Reviewed By: quark-zju
Differential Revision: D27134087
fbshipit-source-id: d15048b2d1022d38393b62cc02ebf022e617ed4f
Summary:
In D26945466 (7a3539b9c6) I started to use correct repo name for backup repos whenever we
sync an entry. However most of the time sync job is idle, and while doing so it
also logs a heartbeat to scuba table. But it was using wrong repo_id for that
(i.e. for instagram-server_backup it was using instagram-server repo_id). This
diff fixes that.
Reviewed By: krallin
Differential Revision: D27123193
fbshipit-source-id: 80425a56ad0a432180f420f5c7957105407e0fc9
Summary:
Previously the code would result in an exception being raised while handling
another exception, which is a little confusing. This diff fixes that.
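The pattern being fixed can be illustrated in isolation; this sketch is not the actual EdenFS code, just the generic Python behavior. Raising a new exception inside an `except` block implicitly chains the original one, printing "During handling of the above exception, another exception occurred" and two tracebacks; explicit chaining (or `from None`) makes the intent clear.

```python
def parse_config_bad(text):
    try:
        return int(text)
    except ValueError:
        # Implicit chaining: both tracebacks print, obscuring which matters.
        raise RuntimeError(f"bad config value: {text!r}")

def parse_config_good(text):
    try:
        return int(text)
    except ValueError as e:
        # Explicit chaining keeps the original error as the stated cause.
        raise RuntimeError(f"bad config value: {text!r}") from e
```

With explicit chaining, `RuntimeError.__cause__` carries the original `ValueError`, so the traceback reads as one deliberate translation rather than an accident.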
Reviewed By: chadaustin
Differential Revision: D27100659
fbshipit-source-id: 8c6be4df62214c8e8d778478de66f271f7b84d3c
Summary:
When using `log -r REVS` with filtering flags like `-u`, `-d`, preserve the
prefetch information by using the `revs(subset=subset)` API.
Reviewed By: sfilipco
Differential Revision: D27119174
fbshipit-source-id: 8483d7113cfc819c6053d1429221588c3a917c12
Summary:
This allows specifying subsets. So we can rewrite:
revs & repo.revs(expr)
to:
repo.revs(expr, subset=revs)
The latter will apply prefetch tweaks to the subset when evaluating the revset
expr, while the former loses the prefetch information.
There might be a way to make `revs & repo.revs(expr)` do the right thing
for prefetching, too. But that could be more complicated w.r.t. `&` fast
paths, over-fetching, etc. For now I just took the fix that looks more
obvious to me.
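The difference can be modeled with a toy evaluator. This is an illustrative mini-model, not the actual edenscm revset API: the point is that when `subset=` reaches evaluation, the evaluator sees the candidate set up front and can batch-prefetch just those revs, while an after-the-fact `&` intersection has already prefetched everything.

```python
class Repo:
    """Toy stand-in for a repo whose revset evaluation prefetches candidates."""

    def __init__(self, all_revs):
        self.all_revs = set(all_revs)
        self.prefetched = set()

    def revs(self, predicate, subset=None):
        candidates = self.all_revs if subset is None else set(subset)
        # With an explicit subset we know exactly which revs will be
        # inspected and can prefetch only those, in one batch.
        self.prefetched |= candidates
        return {r for r in candidates if predicate(r)}
```

Both `revs & repo.revs(pred)` and `repo.revs(pred, subset=revs)` return the same set; only the prefetch footprint differs.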
Reviewed By: sfilipco
Differential Revision: D27119175
fbshipit-source-id: 2629d21594cf97d7c0f63cf085a2c427d8782e58
Summary:
The filteredset can often be expensive when filtering commits, as in `hg log -u
foo` or `hg log -d '2010-1-1'`. Add a progress bar to show what's going on.
Reviewed By: sfilipco
Differential Revision: D27119176
fbshipit-source-id: 458fbf331978b26e78e6a85fb194ae8b12b949d6