Summary:
The RPC simply queries various filesystem attributes; we merely forward what
statfs on the overlayfs gives us.
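Roughly, the forwarding amounts to the following (a Python sketch using os.statvfs as a stand-in for the statfs call on the overlayfs; the reply field names follow the NFSv3 FSSTAT result, but the mapping is illustrative rather than EdenFS's actual code):

```python
import os

def fsstat_reply(overlayfs_path):
    # Forward whatever statvfs reports for the overlay directory;
    # the reply field names mirror the NFSv3 FSSTAT3resok members.
    st = os.statvfs(overlayfs_path)
    return {
        "tbytes": st.f_blocks * st.f_frsize,  # total bytes on the filesystem
        "fbytes": st.f_bfree * st.f_frsize,   # free bytes
        "abytes": st.f_bavail * st.f_frsize,  # bytes available to the caller
        "tfiles": st.f_files,                 # total file slots (inodes)
        "ffiles": st.f_ffree,                 # free file slots
    }
```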
Reviewed By: kmancini
Differential Revision: D26681613
fbshipit-source-id: 5b94d05cafff8d77390fe60a3b5cf1dc3e022f42
Summary: This merely adds the types for the RPC.
Reviewed By: kmancini
Differential Revision: D26681615
fbshipit-source-id: d092cf0b6b5bb7435702d125b5c6ea7ee68356dc
Summary:
This simply writes the passed-in data to the inode. Note that the current
implementation has a protocol violation: it doesn't sync the written data to
disk but advertises to the client that it did. This is completely wrong from a
data consistency standpoint, but is probably fine for now. Once the code
becomes closer to production ready, this will be changed to honor what the
client asks.
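For reference, honoring the client's stability request would look roughly like this (a minimal POSIX sketch, not EdenFS's implementation; the stable_how constants mirror the NFSv3 values):

```python
import os

UNSTABLE, DATA_SYNC, FILE_SYNC = 0, 1, 2  # NFSv3 stable_how values

def nfs_write(fd, offset, data, stable_how):
    """Write at offset and honor the requested stability level."""
    written = os.pwrite(fd, data, offset)
    if stable_how == DATA_SYNC:
        os.fdatasync(fd)  # data (not necessarily metadata) on disk
    elif stable_how == FILE_SYNC:
        os.fsync(fd)      # data and metadata on disk
    return written
```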
Reviewed By: kmancini
Differential Revision: D26681614
fbshipit-source-id: 82ad7a141be3bbe365363b1f6692ae62f253423f
Summary:
The Appender API doesn't allow us to simply append an IOBuf chain to it,
forcing the entire chain to be copied instead. For the most part, this isn't an
issue, but avoiding that copy will be a nice benefit in reducing the overhead
of the READ NFS procedure.
For the most part, this diff is a codemod, with a couple of places that needed
to be fixed manually: rpc/Server.{cpp,h} and rpc/StreamClient.{cpp,h}.
Reviewed By: genevievehelsel
Differential Revision: D26675988
fbshipit-source-id: 04feef8623fcddd02ff7aea0b68a17598ab1d0f8
Summary:
The ser, de and roundtrip helpers are duplicated three times across the various
tests; let's move them into a common library.
Reviewed By: chadaustin
Differential Revision: D26675989
fbshipit-source-id: 1da0bc33429795a889b72c76fa18e5fa3cb7df6f
Summary: This merely adds the various types for the WRITE RPC.
Reviewed By: chadaustin
Differential Revision: D26671895
fbshipit-source-id: 8409c8a1f90e97478aed7c9f19b881c46234b539
Summary:
For the READ/WRITE RPC calls, copying data in and out of an IOBuf chain can be
fairly expensive. To avoid this overhead, we can simply clone the data out of
the IOBuf chain directly, saving the cost of a copy.
Since the code uses a folly::io::Appender, which doesn't support adding an
IOBuf to it, we still pay the cost of copying data into it; switching to
folly::io::QueueAppender may solve this.
Reviewed By: chadaustin
Differential Revision: D26671896
fbshipit-source-id: 0161f04cb820bf27ef66fdef6b4a1ce4eb778b96
Summary:
The CREATE RPC is intended to create regular files on the server. It supports 3
ways of creating them: unchecked, guarded and exclusive. The exclusive mode
allows a client to re-transmit the creation without it failing when that
happens. Implementing this requires the server to store a cookie passed in by
the client and, on retransmission, to compare it in order to not fail the file
creation. This complexity may not be warranted or needed in the case of EdenFS,
and the spec allows a server to not support this operation, thus EdenFS claims
to not support it.
The 2 other modes are more traditional: guarded will simply fail if the file
already exists, while unchecked succeeds in that same situation. It's not
entirely clear in the RFC what the behavior is supposed to be in the unchecked
case, and whether the attributes need to be applied. For now, the error is
just ignored, a warning message is logged and success is returned in this
case. In the future, setting the file attributes (in particular the mode and
size) may be necessary.
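The guarded/unchecked semantics map fairly directly onto POSIX open flags. A hedged Python sketch (the function name and shape are made up for illustration; EdenFS's actual implementation is C++):

```python
import os

def nfs_create(path, mode, guarded):
    """Create a regular file with NFSv3 GUARDED or UNCHECKED semantics.

    With O_EXCL, open fails if the file already exists (guarded); without
    it, an existing file is simply opened (unchecked), mirroring the
    ignore-the-error behavior described above. EXCLUSIVE mode (create
    verifier cookies) is intentionally unsupported.
    """
    flags = os.O_CREAT | os.O_WRONLY
    if guarded:
        flags |= os.O_EXCL  # raises FileExistsError if the path exists
    return os.open(path, flags, mode)
```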
Reviewed By: kmancini
Differential Revision: D26654766
fbshipit-source-id: c2e8142b8a4ff756a868e5859fdda4b07e53bddf
Summary:
I moved the source of the sphinx project to newdoc for newdoc/dev.
I updated the sphinx config for markdown to something that works for recent
versions.
PNG images rendered better than SVG for me.
I moved the TARGETS to newdoc.
Reviewed By: quark-zju
Differential Revision: D26801426
fbshipit-source-id: 3ae51e886d27f848f0f6a48c96056da607d8da45
Summary:
Mercurial uses Makefile here to build stuff. We want to track Makefile under
scm.
Reviewed By: quark-zju
Differential Revision: D26802165
fbshipit-source-id: 1fe8db13d50c07a6a0681180959eba22eaf8d486
Summary:
This won't be very long lived, as I'll remove the 0.2 version once Mononoke is
updated (we are the only users), and stop using a fork as soon as Gotham
releases. However, for now, I need both versions in order to avoid doing it
all in 1 diff.
A couple things worth noting here:
- See the previous diff for why I am using 2 Git repos for this.
- I can't easily use 2 forks for this because there are some extra crates they
vendor, notably `borrow_bag`. I removed it from both forks and pulled it from
crates.io instead.
Reviewed By: mitrandir77
Differential Revision: D26781291
fbshipit-source-id: 0b9824c07b880dbd5b94cd6d62d2cb2a68e348e9
Summary:
There is a bug in Hyper with 101 Switching Protocols responses: it sends a
Content-Length. This makes Proxygen unhappy with websocket upgrades. We used to
have this patched in hyper-02, but since Mononoke is about to update to Tokio
1.x, we also need it in the matching Hyper.
One thing that's a bit awkward: you might notice I changed the fork the patch
comes from. This is because `cargo vendor` cannot deal with 2 overriding
sources coming from the same upstream. I posted about my adventures in this
here: https://fb.workplace.com/groups/rust.language/permalink/5278514098863829/
Reviewed By: HarveyHunt
Differential Revision: D26780899
fbshipit-source-id: e775b7151427898d63d8767acaa53f5f68229db6
Summary: This is used in Metagit and I'd like to decouple those 2 Tokio 1.x migrations.
Reviewed By: HarveyHunt
Differential Revision: D26813352
fbshipit-source-id: 7bc34e1cad00c83bf66edce559b07104d44a7357
Summary: Now that the query macros are async, let's do the same with the Transaction API exposed from them.
Reviewed By: krallin
Differential Revision: D26730195
fbshipit-source-id: 278753a5d0401f602ce50519138164bb5e49d550
Summary: Migrate to the std futures version of sql::queries!
Reviewed By: krallin
Differential Revision: D26700359
fbshipit-source-id: 39c75d7896a5975e53dd3af53860ce486683b4ed
Summary: Migrate to the std futures version of sql::queries!
Reviewed By: krallin
Differential Revision: D26700357
fbshipit-source-id: ea9382028b2e5abfa1946e1c5de344e32ac60d04
Summary: Migrate to the std futures version of sql::queries!
Reviewed By: krallin
Differential Revision: D26700360
fbshipit-source-id: 9ed2664d522bde8d0e923142357ca876a7de2613
Summary: Migrate to the std futures version of sql::queries!
Reviewed By: krallin
Differential Revision: D26700358
fbshipit-source-id: 4a100705c43d77d67fb784afbb6b44b57904cba0
Summary:
Async the query macros. This change also migrates most callsites, with a few more complicated ones handled as separate diffs; those temporarily use sql01::queries in this diff.
With this change the query string is computed lazily (async fns/blocks being lazy), so we're not holding both the query string and the query params in memory for quite as long. This matters most for queries doing writes, where the query string can be large when large values are passed (e.g. the Mononoke sqlblob blobstore).
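The laziness being relied on here is the usual async-fn property. The macros are Rust, but the same behavior can be demonstrated in Python terms (names here are purely illustrative):

```python
import asyncio

built = []

async def run_query(values):
    # The body does not execute until the coroutine is awaited, so the
    # (potentially large) query string is only materialized at that point.
    query = "INSERT INTO blobs VALUES " + ",".join(repr(v) for v in values)
    built.append(query)
    return query

coro = run_query([b"large-blob"])
assert built == []            # nothing built yet: async fns are lazy
result = asyncio.run(coro)    # building happens only on execution
assert built == [result]
```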
Reviewed By: krallin
Differential Revision: D26586715
fbshipit-source-id: e299932457682b0678734f44bb4bfb0b966edeec
Summary:
The previous fix D23357655 (d60e80796a) actually only fixes py2 absorb -i. On Python 3,
`b"-"[0]` is `45`, not `b"-"` like Python 2. Fix it again using `b"-"[0:1]`.
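The difference is easy to reproduce on Python 3:

```python
# On Python 3, indexing bytes yields an int, while slicing preserves bytes.
line = b"-removed line"
assert line[0] == 45        # ord("-"): an int, never equal to b"-"
assert line[0] != b"-"      # the py3 bug: this comparison silently fails
assert line[0:1] == b"-"    # the fix: works on both py2 and py3
```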
Reviewed By: singhsrb
Differential Revision: D26805315
fbshipit-source-id: 07ca850373a6bc49b561466ead478024631ce051
Summary:
Memcache is dog-slow to initialize, taking >30s on a debug build. As a
consequence, this slows down every single test by that amount of time, with the
guarantee that no blobs will be found in Memcache, ie: a total waste of time.
On release builds, Memcache is significantly faster to initialize, so let's
disable initializing Memcache for debug builds only.
Reviewed By: fanzeyi
Differential Revision: D26800265
fbshipit-source-id: 8b393c603414de68268fdadb385de177e214a328
Summary:
Adds a new hg py3 windows package to hgbuild for publishing. Currently
the tests don't run. I'll do that next.
Reviewed By: quark-zju
Differential Revision: D26768336
fbshipit-source-id: bd4533abbbc1e1c861aa9995b7a3868a7f6a1a22
Summary: Trigger the cleanup logic automatically if there are too many remote bookmarks.
Reviewed By: sfilipco
Differential Revision: D26802251
fbshipit-source-id: 1ab40c7be98c507865ea17001dc9775b6edf4446
Summary: This is handy to make the `sl` output cleaner.
Reviewed By: sfilipco
Differential Revision: D26802250
fbshipit-source-id: 1b74f3d9a42ab6b0780f07bec59b03a5cd0ea6a9
Summary:
Previously, remotenames pointing to unknown commits were just ignored.
If a key remotename like master is ignored, it can cause very slow operations
in pull, etc. Let's just raise an exception in this case.
Reviewed By: DurhamG
Differential Revision: D26800236
fbshipit-source-id: 13be4af5499da1b9098b4ff1a6ef41c54092824a
Summary:
Remove public heads when using Rust changelog backends. This should address
some issues seen in commit cloud sync.
This is done at the metalog commit time so we get the latest "remotenames" data for
accurate "public()" set calculation.
Reviewed By: singhsrb
Differential Revision: D26792731
fbshipit-source-id: 00b894fee9804740d664dad0ac47be564820da33
Summary:
The output encoding is used to render the graph log edges. With D26612487 (62ba7447f6), we
switched to Rust IO. The Rust IO requires UTF-8 data. So let's set
outputencoding to UTF-8.
Reviewed By: sfilipco
Differential Revision: D26799551
fbshipit-source-id: aa3e6420067d7c75bef47448e12e48f4cef56a84
Summary:
We have configs that affect peer connections, like help.tlsauthhelp,
that are considered "repo-specific" configs now that they come from
dynamicconfig. Unfortunately repo-specific configs are removed from the ui when
copying it for use in a new remotepeer.
Let's add a few config sections to the allow list for what can go in a remote
peer ui. I have a task for making even repo-less commands load the standard
config, so in the future we can have these new peer objects use the standard
repo-less config, which will remove much of the need for maintaining this
allow-list.
Reviewed By: singhsrb
Differential Revision: D26784364
fbshipit-source-id: 30d9292e48b0f27ce7f4d4904ff6d5ff8dcaf069
Summary: For the OWNERS files in `secure` and `core` we need to check that all the groups and users mentioned in the OWNERS are in the secure owners group.
Reviewed By: StanislavGlebik
Differential Revision: D26761772
fbshipit-source-id: 02ad2bc45c82792e51702cd4d8d092557c76c015
Summary:
When we are using the hg sync job to back up a darkstorm repository, we need to
read the latest commits from the source Mononoke repo, but use the darkstorm
repo id for counters - otherwise there would be two sync jobs using the same
counter (i.e. mononoke -> hg and mononoke -> darkstorm), and that wouldn't end
well.
This diff does that. I also changed our tests a bit to always set
--darkstorm-repo-id option, since we are going to use it in prod anyway.
Differential Revision: D26782326
fbshipit-source-id: 0f6188047fe3d01dfa7bf7b3eb407e4f2c9a5d09
Summary:
Provide a method that does pure serialization from (name, node) pairs to the
remotenames blob. This makes it reusable outside `vfs` or `repo` context.
Reviewed By: DurhamG
Differential Revision: D26707454
fbshipit-source-id: c45662922d337e31d17070e5f5828d47e23773b1
Summary:
Sometimes the hashes of the remote bookmarks in visible heads are not the latest, so the filtering as initially implemented didn't work.
If visible heads are polluted, commit cloud operations get slow, but at least this corrected check won't allow public heads to enter workspaces.
Moreover, after roll-out it will automatically fix already-existing workspaces with public heads by removing them from those workspaces.
It will also allow our magic script for fixing remote bookmarks in workspaces (debugfixcommitcloud) to work properly; unfortunately, right now it can break a workspace if we remove bookmarks but not heads.
cc quark-zju - could we fix the initial issue of how public heads can enter visible heads in some cases in pull logic? I have a repro if you are interested.
Reviewed By: singhsrb
Differential Revision: D26778632
fbshipit-source-id: 05dbd4cd415911283ea66ae17772b8d3e458bbd7
Summary: Previously they were returned in random order. This diff fixes it.
Reviewed By: krallin
Differential Revision: D26778558
fbshipit-source-id: bb8eef4f6dadb6b09227d7140c2a462a471550b3
Summary:
It doesn't make a whole lot of sense to use the "tupperware" permission here.
Let's make a distinction between who is allowed to provision Mononoke vs. who is allowed to forward identities to it.
I have created a separate action with the same identities allowed as in the "tupperware" permission.
https://www.internalfb.com/intern/hipster/acls/view/?type=TIER&consumer=mononoke
Reviewed By: StanislavGlebik
Differential Revision: D26777489
fbshipit-source-id: d3999dc4da7a57ac721572610f65eba664e595e9
Summary:
This diff adds a layer of indirection between fbinit and tokio, thus allowing
us to use fbinit with Tokio 0.2 or Tokio 1.x.
The way this works is that you specify the Tokio you want by adding it as an
extra dependency alongside `fbinit` in your `TARGETS` (before this, you had to
always include `tokio-02`).
If you use `fbinit-tokio`, then `#[fbinit::main]` and `#[fbinit::test]` get you
a Tokio 1.x runtime, whereas if you use `fbinit-tokio-02`, you get a Tokio 0.2
runtime.
This diff is big, because it needs to change all the TARGETS that reference
this in the same diff that introduces the mechanism. I also didn't produce it
by hand.
Instead, I scripted the transformation using this script: P242773846
I then ran it using:
```
{ hg grep -l "fbinit::test"; hg grep -l "fbinit::main" } | \
sort | \
uniq | \
xargs ~/codemod/codemod.py \
&& yes | arc lint \
&& common/rust/cargo_from_buck/bin/autocargo
```
Finally, I grabbed the files returned by `hg grep`, then fed them to:
```
arc lint-rust --paths-from ~/files2 --apply-patches --take RUSTFIXDEPS
```
(I had to modify the file list a bit: notably I removed stuff from scripts/ because
some of that causes Buck to crash when running lint-rust, and I also had to add
fbcode/ as a prefix everywhere).
Reviewed By: mitrandir77
Differential Revision: D26754757
fbshipit-source-id: 326b1c4efc9a57ea89db9b1d390677bcd2ab985e
Summary:
This introduces a couple "trampoline" crates to use alongside fbinit in order
to let callers choose which version of Tokio they want by selecting one or the
other and adding it to their `TARGETS` alongside `fbinit` (right now, they
need to include `tokio` there).
The entrypoints here will be called by the expansion of the `#[fbinit::main]`
and `#[fbinit::test]` macros for `async fn`-s.
Right now, this isn't wired up: that happens in the next diff. In this diff,
I'm just adding the two entrypoints.
Reviewed By: mitrandir77
Differential Revision: D26754751
fbshipit-source-id: 1966dadf8bbe427ce4a1e90559a81790d8e56e7a
Summary:
At the moment, tests create a separate ConfigStore with a file source for some configs and then immediately drop the reference to it ([see get_config_handle function in mod.rs](https://fburl.com/diffusion/fpkj7ekv)). This is awkward, as we may need the reference, e.g. to force-update configs in tests.
Instead of creating separate stores, we can reuse the static Configerator, which already uses local files (in tests).
Reviewed By: krallin
Differential Revision: D26725515
fbshipit-source-id: 24269cd93b7d35216c025807c3f3eb527688b72b
Summary:
For public bookmarks, we can avoid querying the database and instead serve
`list_bookmarks` from the warm bookmarks cache. The listed bookmarks might
be slightly old, but only because derived data is still in progress, and so
listing the older bookmark value is a better choice.
The tests now need a way to make sure that the warm bookmark cache is
up-to-date, so add a `sync` method that waits for a complete cycle of the warm
bookmark cache update thread.
Reviewed By: StanislavGlebik
Differential Revision: D26693444
fbshipit-source-id: 7145964bdb8c22d98ab6a2bb8c5091c19addd03e
Summary: EdenFS doesn't need the history, so let's not spend time prefetching it.
Reviewed By: fanzeyi
Differential Revision: D26767634
fbshipit-source-id: 7113f4ce79fdef5455a2bb238ab9d51b7339d8b6
Summary:
Parsing the remotenames (blob) into a list of name -> node pairs.
This makes it reusable outside `vfs` or `repo` context.
Reviewed By: DurhamG
Differential Revision: D26707457
fbshipit-source-id: e6c8bd9ff14d0fea9209c25b89fe733675da747e
Summary:
Use the new memctx.mirror and memctx.__setitem__ APIs. This simplifies the
code.
Reviewed By: DurhamG
Differential Revision: D26726474
fbshipit-source-id: 044616137b883ca250e6d84c0ecfcc70458ec07a
Summary:
Use the Rust tree matcher to quickly rule out files that do not need dirsync.
This should make codemods faster to commit.
I created 5000 files outside dirsync config in fbsource (with 494 lines of
dirsync config), and `hg add` them:
```
$ mkdir unused
$ cd unused
$ for f in `seq 5000`; do touch $f; done
$ hg add .
```
Baseline "status":
```
In [3]: %time repo.status();1
CPU times: user 111 ms, sys: 10.2 ms, total: 122 ms
Wall time: 148 ms
```
Before, dirsync overhead is ~8x "status":
```
In [1]: %time x.dirsync.dirsyncctx(repo[None])
CPU times: user 1.37 s, sys: 28.8 ms, total: 1.4 s
Wall time: 1.79 s
Out[1]: (<workingctx f23d7c84c5a7+>, set())

In [2]: %time x.dirsync.dirsyncctx(repo[None])
CPU times: user 1.07 s, sys: 8.41 ms, total: 1.08 s
Wall time: 1.11 s
Out[2]: (<workingctx f23d7c84c5a7+>, set())
```
After, dirsync overhead is ~1/2 of "status":
```
In [1]: %time x.dirsync.dirsyncctx(repo[None])
CPU times: user 203 ms, sys: 18.9 ms, total: 222 ms
Wall time: 245 ms
Out[1]: (<workingctx 8ff14e46c9d8+>, set())

In [2]: %time x.dirsync.dirsyncctx(repo[None])
CPU times: user 154 ms, sys: 24.1 ms, total: 178 ms
Wall time: 202 ms
Out[2]: (<workingctx 8ff14e46c9d8+>, set())
```
Reviewed By: DurhamG
Differential Revision: D26726476
fbshipit-source-id: e34218769c779c9a4ee64c654c75298b7c79f213
Summary: Now dirsync works with IMM rebase. Add a test for it.
Reviewed By: DurhamG
Differential Revision: D26726478
fbshipit-source-id: 6712538d7e903ddb0e3c3df44f7dde638276e99d
Summary: Now dirsync works with absorb. Add a test for it.
Reviewed By: DurhamG
Differential Revision: D26726477
fbshipit-source-id: 4505ad6c1e1fd03bfb2cf12b46bd07c98f2bcc2b
Summary:
Previously, dirsync wrapped `repo.commit` and required an on-disk working copy
and dirstate to work properly. This diff updates dirsync to wrap
`repo.commitctx` instead, do commit edits purely in memory, then sync the
commit back to disk. It makes dirsync compatible with absorb and in-memory
rebase (and potentially other things like drawdag, if drawdag's context APIs
are improved).
To sync the changes made in-memory back to the filesystem, a dirstate callback
is added to write back mirrored files from commit to disk. This works for both
amend and absorb, so the special wrapper around amend is dropped. It is
also optimal for absorb, because it only writes the mirrored files once for
the "final" commit, instead of writing the files for each commit in the stack.
Some `O(N^2)` (N: len(status)) complexities were avoided:
- `applytomirrors` was called N times.
- `allchanges = set(status.modified + status.removed + status.added)` in
`applytomirrors` was O(N).
- `sourcepath in status.removed` in `applytomirrors` was O(N).
- `mirrorpath in status.removed` in `applytomirrors` was O(N).
Note there is still a suboptimal complexity in `getmirrors`, called per changed
path: `O(N*M)` (N: len(status), M: len(dirsync_config)). That will be addressed
in a later diff.
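The general shape of those fixes: hoist set construction out of the per-path loop so each membership check is O(1). A simplified sketch (function and key names are illustrative, not dirsync's actual code):

```python
def mirrored_removed_slow(status, paths):
    # O(N^2): a linear list-membership check inside the per-path loop
    return [p for p in paths if p in status["removed"]]

def mirrored_removed_fast(status, paths):
    # O(N): build the set once; each membership check is then O(1)
    removed = set(status["removed"])
    return [p for p in paths if p in removed]
```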
Reviewed By: DurhamG
Differential Revision: D26726479
fbshipit-source-id: 482c6c830ab65cc0d9cd569a51ec610a1dac49cc
Summary: This is unused, and it has a hard dependency on Fuse, so let's remove it.
Reviewed By: chadaustin
Differential Revision: D26742212
fbshipit-source-id: 091556be39e599512d34920503083d03d4c5a0c2
Summary:
## Why this diff
We want a hostname prefix to support targeting configs at clients in corp ("corp" means laptops, labs, and other machines that are not in "prod" datacenters), like FRL machines, that don't support our existing tier mechanism.
## Changes
* Extract hostname prefix in `dynamicconfig.rs` and add a getter function `hostname_prefix()` for it.
*A hostname prefix only consists of alphabetical letters and dashes, which is followed by one or more digits in the hostname. If no valid match, the prefix is set to the empty string.*
* Use `gen.hostname_prefix()` in the `evaluate()` fn inside `mod.rs` to check the generator's prefix against a list of given prefixes.
* Copy changes from `configerator/source/scm/hg/hgclientconf/hgclient.thrift` to `fbsource/fbcode/configerator/structs/scm/hg/hgclientconf/hgclient.thrift`.
* Rebuild in `eden/scm/`.
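The stated prefix rule can be sketched with a regex. The real extraction lives in Rust in `dynamicconfig.rs`; this Python sketch only mirrors the rule as described (alphabetical letters and dashes followed by one or more digits, otherwise the empty string):

```python
import re

def hostname_prefix(hostname):
    # Letters and dashes, followed by at least one digit; otherwise "".
    m = re.match(r"([A-Za-z-]+)[0-9]", hostname)
    return m.group(1) if m else ""
```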
Reviewed By: DurhamG
Differential Revision: D26706686
fbshipit-source-id: 725506a1c1f0983e981b0b3f3993c7c14510b1db