Summary:
Finally support merges. Whenever we encounter a merge, a completely new batch is
created.
The one thing that surprised me was that ParentOffsets actually can be
negative. For example
```
o
/ \
o o
| |
o |
| |
o <--- | --- ParentOffset of this commit can be negative!
\ /
o
```
It happens because of the BFS traversal order - a parent can be visited before
the child. Note that negative offsets shouldn't cause problems.
Reviewed By: aslpavel
Differential Revision: D17183355
fbshipit-source-id: b5165ffef7212ce220dd338079db9e26a3030f58
Summary:
Implement the `commit_is_ancestor_of` call, which returns whether this commit
is an ancestor of some other commit.
Reviewed By: krallin
Differential Revision: D17183595
fbshipit-source-id: be7826e778c48dd86f116d7fbcaabe18bdffab44
Summary: Implement the `commit_info` call, which returns the metadata for the commit.
Reviewed By: krallin
Differential Revision: D17183596
fbshipit-source-id: 500029b7c4b4705fd937a894d15c14d911129b3c
Summary:
Implement the `commit_lookup` call, which looks up commits to see if they exist,
and maps between commit identity schemes.
Reviewed By: krallin
Differential Revision: D17183597
fbshipit-source-id: 3d21c9b0804ce3bbd576543716ce9647d7d1d7e2
Summary:
Implement the `repo_list_bookmarks` call, which lists bookmarks.
Listing scratch bookmarks requires the user to provide a prefix to match and a
limit for the number of bookmarks to fetch. There is currently no provision
for paging.
Reviewed By: krallin
Differential Revision: D17157497
fbshipit-source-id: 247f02299f40a9e9142c6ca838fca1d1de874382
Summary:
Implement the `repo_resolve_bookmark` call, which resolves a bookmark in a repo
to the commits that the bookmark points to.
Reviewed By: krallin
Differential Revision: D17157498
fbshipit-source-id: e7e53df2cb9e3efaddcd22a8d0db344a8d279f08
Summary:
Implement the `list_repos` call, which returns a list of the repos that
Mononoke knows about.
Reviewed By: krallin
Differential Revision: D16984064
fbshipit-source-id: 872ef5b53453ae7f9eb664044f03ec15c83d2152
Summary:
Instantiate the new Mononoke internal API's `Mononoke` object alongside the
existing apiserver `Mononoke` abstraction. We will migrate all uses of the
existing one to the new one and then remove the apiserver's `Mononoke`.
For now, construct the new one by assembling it from the parts of the existing
`Mononoke` abstraction. When that is removed, we will create the new one
directly.
Provide the new Mononoke object to the HTTP server and the Source Control
Service Thrift server, as both of these will need it.
Reviewed By: krallin
Differential Revision: D16984061
fbshipit-source-id: eb8c237dfa6d82a96d4cb0baf29e4bfa39119bc9
Summary:
Make the Mononoke apiserver implement the new source_control service. None of
the methods are defined yet.
Reviewed By: krallin
Differential Revision: D16984062
fbshipit-source-id: 997e3aad6fc0b31eb546260f912c046c6803e176
Summary:
Implement `From<MononokeError>` for the source control service thrift exceptions.
This isn't really the right place, as the `mononoke_api` crate shouldn't need
to know anything about thrift; however, Rust's trait coherence rules mean we
have to declare the implementation in the same crate as `MononokeError`.
Reviewed By: krallin
Differential Revision: D17208631
fbshipit-source-id: f74f16bbde1da9c6dc0c9e1aede769dca6e9a114
Summary:
Make it so that full Mercurial hashes can be used to specify changesets, as
well as full bonsai changeset hashes.
Reviewed By: krallin
Differential Revision: D16803204
fbshipit-source-id: 14db530bf77eb6b76e83697e00d6e2cf46586a0c
Summary:
Start the creation of a new Mononoke internal API within the `mononoke_api` crate.
This abstraction will be used by the API server, as well as any future server types.
To create a new Mononoke instance the caller instantiates a `Mononoke` object,
passing it the configuration to load.
Queries begin by calling a method on the `Mononoke` instance, passing a
`CoreContext` for that request. For example, calling the `repo` method returns
a `RepoContext` object which is bound to the repository and the request
`CoreContext`. Further methods on these context objects can be called, which
will return either new data or new, more specialized, context objects.
For example, to find the author of a commit, the caller can use:
```rust
let ctx = CoreContext::new(...);
let author = mononoke.repo(ctx, "reponame").changeset(changeset_specifier).author();
```
Reviewed By: StanislavGlebik
Differential Revision: D16803206
fbshipit-source-id: 20a8e08f0075476844df4b8c5015ac3faa22d423
Summary: Admin subcommand to force unode regeneration for a specified commit. This is useful for debugging unode generation performance.
Reviewed By: farnz
Differential Revision: D17225913
fbshipit-source-id: 0ec700f670edd01e4c7659fe437e91c2f2c43497
Summary: This commit tries to improve performance by batching multiple updates of `DerivedDataMapping` into one
Reviewed By: StanislavGlebik
Differential Revision: D17156437
fbshipit-source-id: dd80169b39b43c58cb90b0347cfe95e24e341d0b
Summary: This is helpful for Tupperware.
Reviewed By: HarveyHunt
Differential Revision: D17227429
fbshipit-source-id: 5b30833eb5b7f4618ee3360daca486d13b8142b3
Summary: Not much to see here. This is just a POC of some logging being set up.
Reviewed By: StanislavGlebik
Differential Revision: D17203576
fbshipit-source-id: 14e0d8d3d91b9142029e4198fa7be688e666952f
Summary: This updates the lfs_server to support TLS.
Reviewed By: StanislavGlebik
Differential Revision: D17203298
fbshipit-source-id: aa91a1b6304c7203018cd5f9feb90645e085cb31
Summary:
This introduces a LFS server for Mononoke, which is designed to also act as a
reverse proxy to a fallback LFS server. The goal of this LFS server is to
uphold the following guarantees:
- If a client uploads successfully, then the LFS content will be present in
Mononoke as well as the upstream server.
- If a piece of content is present in either Mononoke or the upstream server,
then downloading this content will succeed. If the content is available in
Mononoke, it'll be served there. Otherwise, the client will be redirected to
the upstream server.
Note that while the LFS server for Mononoke exposes a route per repo, that
isn't the case for the upstream server.
Also, note that this only does proxying for the batch endpoint and uploads.
Clients must have access to the upstream server to download blobs that don't
exist in Mononoke.
Implementation-wise, this uses async Rust and the Gotham web framework (Gotham
is based on Hyper, which we also use to talk to the upstream LFS server). Note
that neither Gotham nor Mononoke's filestore uses Tokio 0.2 yet, so we still
have to run on Tokio 0.1 as the executor.
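The download guarantee above amounts to a simple routing rule, sketched here with illustrative types (not the actual server code):
```rust
// Where a download request for a piece of LFS content is served from:
// Mononoke if it has the content, otherwise redirect to upstream.
#[derive(Debug, PartialEq)]
enum DownloadAction {
    ServeFromMononoke,
    RedirectToUpstream,
    NotFound,
}

fn route_download(in_mononoke: bool, in_upstream: bool) -> DownloadAction {
    if in_mononoke {
        DownloadAction::ServeFromMononoke
    } else if in_upstream {
        DownloadAction::RedirectToUpstream
    } else {
        DownloadAction::NotFound
    }
}

fn main() {
    // Content in both places is served from Mononoke.
    assert_eq!(route_download(true, true), DownloadAction::ServeFromMononoke);
    // Content only upstream triggers a redirect.
    assert_eq!(route_download(false, true), DownloadAction::RedirectToUpstream);
    assert_eq!(route_download(false, false), DownloadAction::NotFound);
}
```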
Reviewed By: StanislavGlebik
Differential Revision: D17182753
fbshipit-source-id: 24c9c9c7183506c38d7534665a19d42acf4b3442
Summary:
I noticed while testing the LFS proxy that we could occasionally poll the
stream underlying a `ChunkStream` when it had already indicated it had no more
data. This is bad because it might panic on some streams.
I initially thought this might have to do with the `poll()` shenanigans we do
in `prepare()`, but that doesn't appear to be the case (getting rid of those
doesn't fix anything).
Regardless, we can fix this by implicitly fusing things in `ChunkStream`, which
seems to be the problem here (in the LFS server, uploads that don't get chunked
don't crash, which makes sense since we only use stdlib combinators to
concatenate the incoming stream).
Reviewed By: StanislavGlebik
Differential Revision: D17225985
fbshipit-source-id: 76ad21744461cdbd1376c6ce810e0a7a10b37ef0
Summary:
I had a diff removing `safe_writes` that landed at about the same time as
ikostia's diff adding a new command that uses `safe_writes`, so the
`mononoke-buck` build is broken as a result. This fixes that.
I also noticed a build warning (unused future) while checking that the build
was fixed, so I also fixed that.
Reviewed By: ikostia, StanislavGlebik
Differential Revision: D17257765
fbshipit-source-id: 47ac7e46b5263878f8aa7fd3e3326e356fecbbe6
Summary:
Let's make the function that creates FastlogBatch smarter - it now makes sure
that the sizes of the `latest` and `previous_batches` fields are bounded.
It will be useful in the next diffs where I add support for handling merge
commits.
Reviewed By: krallin
Differential Revision: D17181156
fbshipit-source-id: 8c058cd39abf51f3bc1ebdbe76a5bb3e1b001f98
Summary:
This is the first step in supporting merges in apiserver's fastlog.
To generate FastlogBatches we need to find unodes that were modified.
For a commit with a single parent it's just a diff between this commit and its
parent.
That would work for merges as well; however, it might do unnecessary work.
For example, let's say the first parent introduced file A and the second parent
introduced file B. A diff between the merge commit and the first parent would
return file B, and we'd try to generate a FastlogBatch for it even though this
FastlogBatch is already generated. That might be a problem in the case of big
merges.
To avoid this problem, let's filter out unodes that exist in any of the parents.
Reviewed By: krallin
Differential Revision: D17180324
fbshipit-source-id: ecfa6748937159fdcdab9c7f08d964230f87739a
Summary: None of these do anything.
Reviewed By: StanislavGlebik
Differential Revision: D16963565
fbshipit-source-id: bb0dd344daf151dd1d5c45392c43b014d2b17a07
Summary:
This makes it easier to benchmark things, in particular Manifold CDN
performance.
Reviewed By: StanislavGlebik
Differential Revision: D16963422
fbshipit-source-id: c110bd620fec6cfb25aa45cffdd059024495ff32
Summary:
This avoids typos when accessing arguments, and thus avoids panics when we
unwrap arguments that don't exist.
Reviewed By: StanislavGlebik
Differential Revision: D16963424
fbshipit-source-id: cf35dfaf026be8842902af3c817301672a8929d2
Summary: This makes it easier to measure performance differences across canary runs.
Reviewed By: StanislavGlebik
Differential Revision: D17225082
fbshipit-source-id: bfabfddaaaca711ed4f973bfbcdd93d618f90b33
Summary:
Update the debugmutation format to collapse long chains. Add
`hg debugmutation -s` to follow the successor relationship (printing what the
commit became) instead of the predecessor relationship.
Reviewed By: mitrandir77
Differential Revision: D17156463
fbshipit-source-id: 44a68692e3bdea6a2fe2ef9f53b533199136eab1
Summary: Currently in the mononoke_admin tool we have to use repo-id to identify a repository. Sometimes that can be inconvenient. Changed it so we can use either repo-id or reponame.
Reviewed By: StanislavGlebik
Differential Revision: D17202962
fbshipit-source-id: d33ad55f53c839afc70e42c26501ecd4421e32c0
Summary:
ManifestOps::diff() returns an empty stream if the entries are the same, and it
returns the root directory if they are not.
There is no need to repeat this logic here.
Reviewed By: krallin
Differential Revision: D17179427
fbshipit-source-id: 54a6c83bf35883cf57ba385ea17bc2e51c0cc71e
Summary:
Apiserver responses can be slow, and we've noticed that this is often caused by
derived data generation (e.g. unodes and hg changesets). Usually that happens
when a request for the "master" bookmark is sent.
This diff trades off bookmark staleness against read-path latency. Instead of
reading bookmarks from the repo object directly, let's keep an in-memory cache
which is updated only after all derived data is generated.
Reviewed By: aslpavel
Differential Revision: D17201325
fbshipit-source-id: 4e9efab4e42b9eddc22fd37fa2122c6615ad086f
Summary: Update bounded_traversal_stream to take IntoIterator of initial values. This allows simultaneous navigation of a graph from multiple roots.
Reviewed By: farnz
Differential Revision: D17163609
fbshipit-source-id: c999e7653cb620c215331ecc46f5a800ced8ef37
Summary:
I ran into a tricky case while deriving FastlogBatch for empty commits:
* If a commit is empty, then a new unode manifest is not created [1]
* But FastlogBatch's derived data logic was still creating a new FastlogBatch.
This batch would point not to the commit that created the unode, but to the
latest empty commit.
This is obviously not a huge deal because it affects only the root entry, and
only for empty commits. But it can be confusing and tricky to investigate, so I
think it's better to fix it.
[1] There is one exception - an empty commit with no parents
Reviewed By: krallin
Differential Revision: D17160654
fbshipit-source-id: 949d35860efe2f3df6ec1c11b4c6c4d077982c95
Summary:
This diff adds handling of "overflows" i.e. making sure that each FastlogBatch
doesn't store more than 60 entries.
The idea is that FastlogBatch has "latest" entries and "previous_batches". A new
entry is prepended to "latest", but if "latest" has more than 10 entries then a
new FastlogBatch is uploaded and prepended to "previous_batches". If
"previous_batches" has more than 5 entries, then the oldest entry is removed
from the batch.
Note that merges are still not handled in this diff; this functionality will be
added in the next diffs.
Reviewed By: krallin
Differential Revision: D17160506
fbshipit-source-id: 956bc5320667c6c5511d2a567740a4f6ebd8ac1b
Summary:
This diff adds batching of linear file or directory history (i.e. no merges).
Note that it still doesn't handle "overflows", i.e. a FastlogBatch is not yet
compressed if it has too many entries. This will be added in the next diffs.
Reviewed By: krallin
Differential Revision: D17152698
fbshipit-source-id: 05706444bcdcdb5ed0fd453c5c0684f8911603c6
Summary:
In order to answer file history requests we need to avoid doing serial
fetches from a blobstore.
This diff is a first attempt at implementing it. The general idea is to store precalculated history in the blobstore.
Note that at the moment it covers only the very basic case - file history for a commit with no parents, so there are quite a few "TODOs" and "unimplemented!()" in the codebase. This functionality will be extended in the next diffs.
Reviewed By: krallin
Differential Revision: D17070901
fbshipit-source-id: 8150afb2509fcd2a428d2369bab58468ab774d72
Summary:
We were generating hg changesets in the list_directory_unodes() method; however,
this is not always necessary - if a client sends a bookmark, then we can fetch
bonsai changesets and generate unodes directly, bypassing the hg changeset
generation stage.
The same goes for the is_ancestor method.
Reviewed By: krallin
Differential Revision: D17146609
fbshipit-source-id: 9613e28f8bacbce5d8de94a6ab88b152d19b0a08