Summary:
Errors can occur for various reasons, such as permissions.
Not having a periodic sigtrace does not affect correctness, so let's just
ignore errors.
Reviewed By: singhsrb
Differential Revision: D26192199
fbshipit-source-id: 8d2dba56979efd121c9bdc757895e86aa23d6694
Summary:
Add the `--ordered` flag to `mononoke_admin skeleton-manifests tree`. This
uses `bounded_traversal_ordered_stream` to list the manifest in order.
Reviewed By: mitrandir77
Differential Revision: D26197359
fbshipit-source-id: 2f95471abfccd514d713b2092844d271bc732498
Summary:
Add `bounded_traversal_ordered_stream`. This function operates much like
`bounded_traversal_stream`, in that it traverses a tree producing a stream of
visited leaves. The difference is that the order of produced items is
maintained.
Key differences (see the sketch after this list) are:
* The `unfold` method produces a sequence of `OrderedTraversal` nodes, rather
than separate output and recursion sequences. The order between `Output`
variants and the result of recursively expanding `Recurse` variants is
what is maintained.
* The `unfold` method, as well as the initial values, must provide estimates
of the number of output items that the recursive result expands to. This is
used to delay expansion of later items while earlier items are being expanded.
* There is an additional dimension to bound. The `queue_max` parameter bounds
the size of the queue of unyielded output elements. Recursive steps will not
be scheduled for unfolding until there is sufficient capacity in the queue
for the items they will produce. The bound is soft: to ensure that progress
can always be made even if a single `unfold` call produces more than
`queue_max` elements, the queue is permitted to grow beyond `queue_max` by the
output of one additional `unfold` call.
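To make the shape concrete, here is a minimal, self-contained sketch of the
unfold contract described above. The `OrderedTraversal` enum mirrors the one
this diff introduces; the sequential driver is only there to show that output
order is preserved, whereas the real `bounded_traversal_ordered_stream`
unfolds nodes concurrently while enforcing the queue bound.

```
// Sketch only: mirrors the described API shape, not the real crate code.
enum OrderedTraversal<Out, In> {
    // A ready output item, yielded in place.
    Output(Out),
    // A subtree to expand, with an estimate of how many output items
    // it will eventually produce (used for queue accounting).
    Recurse(usize, In),
}

// Toy tree for the example.
enum Node {
    Leaf(u32),
    Dir(Vec<Node>),
}

// The unfold step: one node expands into an ordered mix of outputs and
// recursions; the relative order of the two variants is preserved.
fn unfold(node: Node) -> Vec<OrderedTraversal<u32, Node>> {
    match node {
        Node::Leaf(v) => vec![OrderedTraversal::Output(v)],
        Node::Dir(children) => children
            .into_iter()
            .map(|child| match child {
                Node::Leaf(v) => OrderedTraversal::Output(v),
                dir => OrderedTraversal::Recurse(1, dir), // estimate: >= 1 item
            })
            .collect(),
    }
}

// Sequential driver: depth-first, left-to-right, so outputs appear in
// traversal order.
fn traverse(root: Node) -> Vec<u32> {
    let mut out = Vec::new();
    let mut stack = vec![unfold(root)];
    while let Some(mut level) = stack.pop() {
        if level.is_empty() {
            continue;
        }
        match level.remove(0) {
            OrderedTraversal::Output(v) => {
                out.push(v);
                stack.push(level);
            }
            OrderedTraversal::Recurse(_estimate, node) => {
                stack.push(level);
                stack.push(unfold(node));
            }
        }
    }
    out
}

fn main() {
    let tree = Node::Dir(vec![
        Node::Leaf(1),
        Node::Dir(vec![Node::Leaf(2), Node::Leaf(3)]),
        Node::Leaf(4),
    ]);
    assert_eq!(traverse(tree), vec![1, 2, 3, 4]);
}
```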
Reviewed By: StanislavGlebik
Differential Revision: D25867667
fbshipit-source-id: 884bffbeee3862cce56df78084d57ca62089814c
Summary:
Walker checkpoints will allow a scrub or other walk to continue where it left off. This is useful for releasing new code and restarting jobs without making them start from scratch.
This change adds the sqlite schema and SqlCheckpoint implementation for storing checkpoints, exercised by new tests. It's connected up to walker tailing in the following diff.
Differential Revision: D25995108
fbshipit-source-id: 18a27c95aa7c38f8aa3d432d74de2831213c4ba2
Summary: Vendor update to smallvec 1.6.1, which is a prerequisite for the governor crate.
Reviewed By: jsgf
Differential Revision: D26172450
fbshipit-source-id: 9e87c1ada75b75ac54afd85435cd5422a1c385fa
Summary:
Address yarn's `node_modules` problem on Windows.
The design is debatable and probably not the best. Another approach could be to simply special-case subdirectory paths ending with `node_modules`. That wouldn't require any user configuration, but it would be a special case.
Reviewed By: chadaustin
Differential Revision: D26149393
fbshipit-source-id: b3e66cb2d4b70078bb25e7329988cd5ff8fdeadd
Summary:
ui.fout and ui.ferr are now refcells. If you do ui.fout = ui.ferr (as is
done in hg grep's fallback path to redirect stdout) and then do
ui.fout.swap(ui.ferr) (as is done in the pullpaths hotfix), you can end up
with a refcell owning itself.
Let's avoid that by making hg grep use ui.fout.swap appropriately.
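For illustration, here is a toy Rust sketch of the self-ownership hazard (not
the actual ui code): once a shared cell ends up holding a strong reference
back to itself, its reference count can never reach zero and it is leaked.

```
use std::cell::RefCell;
use std::rc::Rc;

// Toy stand-in for a redirectable output handle: a slot that may
// forward to another slot.
struct Slot {
    target: Option<Rc<RefCell<Slot>>>,
}

fn main() {
    let ferr = Rc::new(RefCell::new(Slot { target: None }));
    // The fallback path makes fout an alias of ferr...
    let fout = Rc::clone(&ferr);
    // ...and a later swap-style redirection stores ferr into the very
    // slot that *is* ferr: the cell now indirectly owns itself, so the
    // cycle is never freed.
    fout.borrow_mut().target = Some(Rc::clone(&ferr));
    assert_eq!(Rc::strong_count(&ferr), 3); // leaked cycle
}
```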
Reviewed By: quark-zju
Differential Revision: D26150647
fbshipit-source-id: eede8f00f9f9566157b0c17fe7049ca860ce715b
Summary:
The folly::async bits are a bit complicated to get right, in particular around
lifetime management. The solution used in our codebase is to make use of
folly::DelayedDestruction and its DestructorGuard. As long as a
DestructorGuard is alive, the object cannot be destroyed, and thus we can
simply hold a guard in the Reader and make sure that a guard is held during
the future callbacks.
The only thing to be aware of is that the DestructorGuard is not thread safe,
but since all the callbacks and futures run in the context of the
single-threaded EventBase, this is not an issue.
Reviewed By: chadaustin
Differential Revision: D26105288
fbshipit-source-id: 4c7b91eca2bc1cae66d8d1724931b20340740574
Summary:
In the public API at least. A public method will consume the builder. If some
code wants to call multiple methods using the same configuration, they can
safely clone the builder to get a second instance.
SegmentedChangelogBuilder needs to pass references internally to build
individual components otherwise it would have to clone itself excessively.
This pattern leaked towards public methods too. Some tests use this builder too
and they use some crate public methods that need to be defined using
references. I don't know if we should remove that dependency. Anyway, the
Builder is hopefully easier to use now.
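A toy sketch of the resulting shape (names here are illustrative, not the real
SegmentedChangelogBuilder API): public entry points consume the builder,
crate-internal helpers borrow it, and callers clone when they need more than
one build from the same configuration.

```
// Illustrative only; field and method names are made up.
#[derive(Clone)]
struct Builder {
    repo_name: String,
}

impl Builder {
    // Public entry point: consumes the builder.
    fn build(self) -> String {
        // Internal helpers only need references, so components can be
        // built without cloning the whole builder.
        self.build_component()
    }

    // Crate-internal helper: borrows instead of consuming.
    fn build_component(&self) -> String {
        format!("segmented changelog for {}", self.repo_name)
    }
}

fn main() {
    let builder = Builder { repo_name: "repo".to_string() };
    // Two builds from one configuration: clone first.
    let first = builder.clone().build();
    let second = builder.build();
    assert_eq!(first, second);
}
```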
Reviewed By: quark-zju
Differential Revision: D26152066
fbshipit-source-id: 63285e200d8e9fde06fede03773b7d4c02e9cea7
Summary:
Moving to 1.7.0 for the open source GitHub sai.
Also build fake with 1.7.0.
Differential Revision: D26152792
fbshipit-source-id: 02e7bfb218d200666e506e3115ee8f5dba5ece0a
Summary: The sync job doesn't synchronize large files. For the darkstorm backup sync, let's make a special LFS verifier, which will upload files from the origin blobstore into the backup.
Reviewed By: StanislavGlebik
Differential Revision: D24991705
fbshipit-source-id: de668b7ad33ace3445f50cd9c92a678201ffb6f6
Summary:
When the RPC server starts, it registers itself against the global rpcbind
daemon to allow RPC clients to reach it. However, it never unregisters itself
when it is destroyed, leaving the rpcbind daemon with outdated information. It
also prevents a future RPC server for the same program and version from
registering itself.
To fix this, we can simply unregister on destruction. The MountdUtil binary is
also fixed so that it can be interrupted with ctrl+c.
Reviewed By: chadaustin
Differential Revision: D26105289
fbshipit-source-id: c9c3d552ec04a78025d6ff715a425f9ec3aa1328
Summary:
In NFSv3, mounting is not part of the NFS program but is done in a separate
program. This adds the basic scaffolding to reply to the NULL procedure for the
mount protocol. Later, this will be enhanced by having `eden mount` first
register a mount point with Mountd, then trigger a `mount.nfs` against
localhost, where Mountd will be able to reply properly.
Similarly to PortMapUtil, I've added a small binary that allows a "mountd"
process to be spawned. Note that since the code doesn't yet properly answer
the mount procedure, nfusr closes the connection and retries, causing mountd
to crash due to RpcServer being unsafe on error; this will be fixed separately.
Reviewed By: chadaustin
Differential Revision: D26033504
fbshipit-source-id: 27f90d9072a93460a3a383aadde38e50801e3e87
Summary:
The NFS protocol is composed of several different RPC "programs": the mount,
nlm, and nfs programs. Since all three of these need to register independently
with rpcbind, let's have some common scaffolding to read/write from the socket
and to simplify the implementation of these programs.
This code was written by wez.
Reviewed By: chadaustin
Differential Revision: D25986691
fbshipit-source-id: 15c5fdc68323fd964ed79aa06392a83bf964ab4a
Summary:
The portmap protocol allows for service discovery and registration against the
per-host rpcbind daemon. An NFS server will need to register against it to be
mountable.
The portmap_util binary is here for testing purposes and will not be used in
EdenFS.
This code was written by wez.
Reviewed By: kmancini
Differential Revision: D25986694
fbshipit-source-id: 1eee7238fdf70c8c4937e685da91ad08d46befe4
Summary:
Built on top of XDR, the Remote Procedure Call (RPC) protocol allows for
structured client/server communication. Since NFS is built on top of this
protocol, this adds some basic infrastructure and the types defined in the RPC
RFC. A basic client is also added.
This code was written by wez.
Reviewed By: chadaustin
Differential Revision: D25986693
fbshipit-source-id: a5feffcb22607bcd2c7fa2cede1b70dd8aa48caf
Summary:
It's possible that a bookmark has no history in the bookmark update log. It
shouldn't happen normally, but it might happen in a few cases:
1) if bookmarks were imported before the bookmark update log was added to
Mononoke (unusual, but it happens);
2) if the bookmark update log was cleaned for some reason.
So if a bookmark wasn't warm at startup, then single_bookmark_updater would
never warm it unless the bookmark got a new entry in the bookmark update log.
This diff fixes that.
Reviewed By: krallin
Differential Revision: D26149435
fbshipit-source-id: 5bba8764050349adf106c0e68488981cf21055c4
Summary:
We have some tests that are a bit racy because they write bookmarks from
one process and then look at them from another process, but that can fail
because we have a cache on bookmarks that holds them for 2 seconds.
This is normally not a huge issue because we don't access said bookmarks, but
now that, as of my earlier diff, we run a warm bookmarks cache, it *is* a
problem. This fixes that. We can expand it later to do things like reload
tunables, but for now this satisfies one basic use case.
Reviewed By: ahornby
Differential Revision: D26149371
fbshipit-source-id: 11c7f64b1ae45f6a0de142be25ab367cb25df567
Summary:
Right now, if you enable Mononoke API, you always get a WBC, for all derived
data kinds, and with a delay. This isn't great for a few reasons:
- Some places don't really care about bookmarks being warm to begin with. This
is the case in the bookmarks filler (note that right now, it still does
polling to satisfy the WBC API; it's just not "warm").
- Some places don't want a delay, or don't want all kinds. This is the case for
Mononoke Server (which doesn't use Mononoke API right now, but that's what
I'm working towards), or EdenAPI, which uses a WBC sort of de facto but
doesn't really care (but likely will in the future, and will want to follow
Mononoke Server's behavior).
As of this diff, we can now configure this when initializing `Mononoke`. I also
split out all those args into a `MononokeEnvironment` because the args list
was getting out of hand. One thing I did not do is make a way to instantiate
`MononokeEnvironment` from args (but we should do it at some point in the
future).
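As a rough sketch of the kind of knob this introduces (the names and variants
below are illustrative guesses, not the actual `MononokeEnvironment` fields):

```
use std::time::Duration;

// Illustrative only: one way to model the choices described above.
pub enum WarmBookmarksCacheMode {
    // No warming at all (e.g. the bookmarks filler).
    Disabled,
    // Warm with no delay and/or a restricted set of derived data kinds
    // (e.g. Mononoke Server, EdenAPI).
    Immediate,
    // The previous behaviour: all derived data kinds, with a delay.
    Delayed(Duration),
}

pub struct MononokeEnvironment {
    pub warm_bookmarks_cache: WarmBookmarksCacheMode,
    // ...plus the other arguments that used to be passed individually.
}
```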
Reviewed By: StanislavGlebik
Differential Revision: D26100706
fbshipit-source-id: 1daa6335f3ce2b297929a84788bc5b4d9ad6432f
Summary:
This test module accidentally got lost when I added a `mod tests { ... }` in
the containing module. This brings it back and modernizes the tests that
could be modernized. The push redirection test has way too much boilerplate to
be manageable, so for now I removed it. I'll see if I can bring it back after
some refactoring I'm doing.
I'll also see if there's a way we can lint / warn against inline modules
shadowing other files.
Reviewed By: ahornby
Differential Revision: D26124354
fbshipit-source-id: 7b24c4fe635bf8197142ab9ee087631ed49f10be
Summary:
I'd like to be able to use mononoke_api from repo_client, but the hg parts
depend on repo_client, so that's a circular dependency.
This splits out the hg parts of Mononoke API so that places that don't want
them don't have to import them. This is similar to what we did with blobrepo.
Reviewed By: StanislavGlebik
Differential Revision: D26099495
fbshipit-source-id: 73a9c7b5dc95feceb35b5eabccf697e9aa0a27de
Summary:
Right now, if we have no Hipster ACL in a repo listener, we default to denying
all access.
This is kind of annoying when using Mononoke locally but behind a trusted HTTP
proxy, because that means you cannot access the repo at all (if you're not
behind a proxy, then your TLS identities are used instead and everything is
fine).
If we trust a given proxy to impersonate literally anyone, we probably trust
it to access the repo to begin with (since the set of people with access is
not empty, there is always at least someone with access whom the proxy could
impersonate), so that is what this diff implements.
Reviewed By: johansglock
Differential Revision: D26073274
fbshipit-source-id: 0ef06cb6283d7f69072b712d3cb5a8383a493998
Summary:
If the socket is already shut down for writes, then not being able to shut it
down again is fine.
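A minimal sketch of the idea in Rust (the actual change lives in the server's
connection teardown; this just shows the error being treated as success):

```
use std::io;
use std::net::{Shutdown, TcpStream};

// Shutting down the write half of a socket that is already shut down
// reports NotConnected; per the reasoning above, treat that as success.
fn shutdown_writes(sock: &TcpStream) -> io::Result<()> {
    match sock.shutdown(Shutdown::Write) {
        Err(e) if e.kind() == io::ErrorKind::NotConnected => Ok(()),
        res => res,
    }
}
```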
Reviewed By: ahornby
Differential Revision: D26052499
fbshipit-source-id: 2da6c34f657317419df00a0b7ba615e0eb351e0d
Summary:
Like it says in the title, this updates our HTTP stack to Hyper. There are a
few reasons to do this here:
- We can remove all the manual parsing & generation of HTTP, and instead let
Hyper (i.e. an HTTP framework) handle HTTP for us.
- We can use / reuse more pre-existing abstractions for things where we have to
implement HTTP handling (rather than just try to update to a websocket ASAP),
like the net speed test.
- And finally, my main motivation for this is that this will make it much
easier to load EdenAPI into Mononoke Server as a service. At this point, we
have a `Request` to pass into a function that returns a `Response`, which is
exactly what EdenAPI is, so hooking it in will be trivial.
There's a lot going on in this diff, so here is an overview. Overall, our
connection handling process is:
- Accept connection.
- Perform TLS handshake.
- Check if the remote is trusted.
- Check ALPN:
  - If hgcli, then read the preamble, then run wireproto over the socket.
  - If http, hand off the socket to Hyper. Hyper will call into our
    MononokeHttpService (which is a Hyper Service) once per request:
    - If websocket upgrade, accept the upgrade, then run wireproto over the
      resulting I/O (i.e. the upgraded connection). An upgrade takes over the
      connection, so implicitly this means there won't be further requests.
    - If health check or net speed test, handle it. There might be multiple
      requests here via connection reuse.
    - This is where hooking EdenAPI will happen. We can instantiate Gotham
      here: it also is a Hyper Service, so we just need to call it.
While in there, I've modelled those various states using structs instead of
passing a handful of arguments here and there.
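For reference, this is the request-in/response-out shape a Hyper Service
boils down to; a minimal standalone sketch using hyper's `service_fn` (hyper
0.13/0.14-era API). The real MononokeHttpService implements the Service trait
directly and is handed the already-accepted TLS connection rather than binding
its own listener.

```
use std::convert::Infallible;

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};

// One async function per request: this is the shape EdenAPI (via Gotham,
// itself a Hyper Service) can slot into.
async fn handle(_req: Request<Body>) -> Result<Response<Body>, Infallible> {
    Ok(Response::new(Body::from("ok")))
}

#[tokio::main]
async fn main() -> Result<(), hyper::Error> {
    let make_svc =
        make_service_fn(|_conn| async { Ok::<_, Infallible>(service_fn(handle)) });
    Server::bind(&([127, 0, 0, 1], 8080).into())
        .serve(make_svc)
        .await
}
```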
Reviewed By: johansglock
Differential Revision: D26018641
fbshipit-source-id: dd757d72fe0f17f7c98c1948a6aa34d0c4d95cbf
Summary:
Like it says in the title, this updates our implementation of Globalrevs to
be a little more relaxed: it allows you to create and move bookmarks as long
as they point to ancestors of the "main" Globalrevs bookmark (but NOT to
pushrebase to them later, because we only want to allow ONE
globalrevs-publishing bookmark per repo).
While in there, I also deduplicated how we instantiate pushrebase hooks a
little bit. If anything, this could be better in the pushrebase crate, but
that'd be a circular dependency between pushrebase & bookmarks movement.
Eventually, the callsites should probably be using bookmarks movement anyway,
so leaving pushrebase as the low-level crate and bookmarks movement as the high
level one seems reasonable.
Reviewed By: StanislavGlebik
Differential Revision: D26020274
fbshipit-source-id: 5ff6c1a852800b491a16d16f632462ce9453c89a
Summary:
Prior to this diff, only the Mononoke server initialized
`MononokeScubaSampleBuilder` in a way that used the observability context, and
therefore respected verbosity settings.
Let's make the generic sample-initializing function use this config too.
Reviewed By: ahornby
Differential Revision: D26156986
fbshipit-source-id: 632bda279e7f3905367b82db5b36f81264156de3
Summary: This is more flexible.
Reviewed By: StanislavGlebik
Differential Revision: D26168559
fbshipit-source-id: 5946b8b06b3a577f1a8398a228467925f748acf7
Summary: Just as we have global strings/ints, let's have per-repo ones.
Reviewed By: StanislavGlebik
Differential Revision: D26168541
fbshipit-source-id: f31cb4d556231d8f13f7d7dd521086497d52288b
Summary:
Please see the added test. Without this diff, the test does not even compile,
as `new_values_by_repo` is moved out by `self.#names.swap(Arc::new(new_values_by_repo));` after the first tunable is processed (line 202).
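A toy reproduction of the bug (using a `Mutex<Arc<_>>` stand-in rather than
the macro-generated tunables code): storing the map by value inside the
per-tunable loop moves it on the first iteration, so each iteration needs its
own clone.

```
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Stand-in for the per-tunable swap targets the macro generates.
fn update_tunables(
    slots: &[Mutex<Arc<HashMap<String, i64>>>],
    new_values_by_repo: HashMap<String, i64>,
) {
    for slot in slots {
        // Before the fix, the equivalent of `swap(Arc::new(new_values_by_repo))`
        // moved the map on the first iteration, so any later tunable could not
        // use it. Cloning per iteration keeps it available for every tunable.
        *slot.lock().unwrap() = Arc::new(new_values_by_repo.clone());
    }
}

fn main() {
    let slots = vec![
        Mutex::new(Arc::new(HashMap::new())),
        Mutex::new(Arc::new(HashMap::new())),
    ];
    update_tunables(&slots, HashMap::from([("repo".to_string(), 1)]));
    assert_eq!(slots[1].lock().unwrap().get("repo"), Some(&1));
}
```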
Reviewed By: StanislavGlebik
Differential Revision: D26168371
fbshipit-source-id: 3cd9d77b72554eb97927662bc631611fa91eaecb
Summary:
If your disk1 is an external HFS-formatted disk, then
eden_apfs_mount_helper will fail to create APFS subvolumes on
it. Instead, use the disk backing the mount.
Reviewed By: fanzeyi
Differential Revision: D26096296
fbshipit-source-id: baa45181afb6610a095c864eb3183e5af76ec4e0
Summary:
It is already broken with segmented changelog (it assumes 0..len(repo) are
valid revs). It is super slow and cannot be optimized efficiently. The _only_
non-zero-exit-code usage in the past month looks like:
`hg log -r 'reverse(children(ancestors(remote/master) and branchpoint()) and draft() and age("<4d"))'`
which takes 40 to 100 seconds and can be rewritten using more efficient
queries such as `parents(roots(draft()))`.
Reviewed By: singhsrb
Differential Revision: D26158011
fbshipit-source-id: 7957710f27af8a83920021a228e4fa00439b6f3d