Summary:
This augments `/tree` to yield the size and content sha1 hash for each entry.
This is important for Eden and avoids additional round trips to the server.
The content hashing is the portion that I expect some pushback on,
because it doesn't appear to be cached today and the implementation
here does a simplistic fetch and hash. By doing this we hope to
squash out a potential later fetch of the entire contents when
buck build is run.
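A minimal sketch of the fetch-and-hash idea (hypothetical types and names; std's DefaultHasher stands in for SHA-1 here, since a real SHA-1 implementation would come from a crate):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Hypothetical blobstore: blobstore key -> file contents.
type Blobstore = HashMap<String, Vec<u8>>;

// Augmented tree entry: size and content hash alongside the name.
#[derive(Debug, PartialEq)]
struct TreeEntry {
    name: String,
    size: u64,
    content_hash: String, // sha1 hex in the real implementation
}

// The simplistic approach: fetch the full contents, then hash them.
fn entry_with_metadata(store: &Blobstore, name: &str, key: &str) -> Option<TreeEntry> {
    let contents = store.get(key)?;
    let mut hasher = DefaultHasher::new();
    contents.hash(&mut hasher);
    Some(TreeEntry {
        name: name.to_string(),
        size: contents.len() as u64,
        content_hash: format!("{:016x}", hasher.finish()),
    })
}

fn main() {
    let mut store = Blobstore::new();
    store.insert("key1".to_string(), b"hello eden".to_vec());
    let entry = entry_with_metadata(&store, "README", "key1").unwrap();
    assert_eq!(entry.size, 10);
    println!("{} {}", entry.size, entry.content_hash);
}
```

With the metadata computed server-side, Eden can answer size and hash queries from the tree response alone instead of fetching each file's contents.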
Differential Revision: D10865588
fbshipit-source-id: c020ef07b99d8a5e8b2f8f7b699bf15e750d60a5
Summary:
Troubleshooting startup problems is overly difficult without
more context, so print it.
Reviewed By: Anastasiya-Zhyrkevich
Differential Revision: D12814794
fbshipit-source-id: e815a6a93b4d1d3d03370b158f6fdc93edbc4ef5
Summary:
Background:
According to the [git lfs protocol](https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md), the HTTP POST "batch" request should return a link to
the look-aside server.
In our case the Mononoke API server is the look-aside server and processes both the "batch" request and the "upload/download" requests,
so it needs to return a link to itself.
The new approach takes a separate lfs-url for the "batch" request.
The previous approach used the --http-host and --http-port attributes to build a link to the running API server instance.
Reviewed By: StanislavGlebik
Differential Revision: D10488586
fbshipit-source-id: ed9d78ee9bc78bdcec5eea813bd9aaa6e4590a5c
Summary:
For unification with the commit cloud VIP configuration, the apiserver should support the same health check API.
This is needed for corp2prod.
The same as : D10488369
Reviewed By: liubov-dmitrieva
Differential Revision: D10488494
fbshipit-source-id: 50b4024295c596342a8080474383de850bb7754a
Summary:
As per the comments added - MyRouter setup is such that it starts inside a tupperware container together with the binary that will be using it. This means that by the time the binary wants to use the MyRouter connection the MyRouter instance might not be ready yet. In order to mitigate this effect the myrouter::Builder will attempt to make a "Select 1" query and retry it with a backoff for a max of 2 min or until the connection is actually established.
Unfortunately the `queries!` macro had to be moved inside the `macro` module in order to make it usable from inside `myrouter` module, see this: https://stackoverflow.com/questions/31103213/import-macro-from-parent-module
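The retry loop can be sketched roughly like this (hypothetical, std-only; the real builder issues an actual "SELECT 1" query through MyRouter and uses longer delays):

```rust
use std::time::Duration;

// Keep issuing a cheap probe until it succeeds or the deadline passes,
// doubling the delay between attempts (exponential backoff).
fn retry_with_backoff<F>(mut probe: F, max_total: Duration) -> Result<(), String>
where
    F: FnMut() -> Result<(), String>,
{
    let mut waited = Duration::from_millis(0);
    let mut delay = Duration::from_millis(1); // real code would start higher
    loop {
        match probe() {
            Ok(()) => return Ok(()),
            Err(e) if waited >= max_total => return Err(e),
            Err(_) => {
                std::thread::sleep(delay);
                waited += delay;
                delay *= 2;
            }
        }
    }
}

fn main() {
    // Simulated MyRouter that only becomes ready after a few probes,
    // mimicking the colocated instance starting up alongside the binary.
    let mut attempts = 0;
    let result = retry_with_backoff(
        || {
            attempts += 1;
            if attempts < 4 { Err("not ready".to_string()) } else { Ok(()) }
        },
        Duration::from_millis(100),
    );
    assert!(result.is_ok());
    assert_eq!(attempts, 4);
}
```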
Reviewed By: farnz
Differential Revision: D10270464
fbshipit-source-id: 9cf6ad936a0cabd72967fb96796d4af3bab25822
Summary: Make get_manifest_by_nodeid accept HgManifestId and correct all calls to get_manifest_by_nodeid.
Reviewed By: StanislavGlebik
Differential Revision: D10298425
fbshipit-source-id: 932e2a896657575c8998e5151ae34a96c164e2b2
Summary:
The idea for rollout is to:
- first make sure that Mononoke doesn't crash when a --myrouter-port is provided
- then tupperware configs will be modified to include myrouter as a colocated process on every host, and the port of that myrouter instance will be provided via the command line
- lastly land the change that actually talks to myrouter
Reviewed By: StanislavGlebik
Differential Revision: D10258251
fbshipit-source-id: ea9d461b401d41ef624304084014c2227968d33f
Summary:
The test is failing, as Mononoke server LFS support is not implemented yet.
This adds an integration test for commands sent from an hg client to the Mononoke server.
The \s (re) lines are added because the test script is reformatted on auto-save, which deletes the spaces on otherwise-empty lines.
In order to keep such lines, \s (re) can be appended to them.
When such a line is compared, the \s (re) pattern is stripped rather than compared literally.
See mononoke/tests/integration/third_party/hg_run_tests.py for more information about how output lines are compared.
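A hypothetical excerpt of what this looks like in a .t test (the trailing `(re)` marks the expected-output line as a pattern, letting a whitespace-only line survive editor auto-formatting):

```
  $ hg status
  expected output line
  \s (re)
  more expected output
```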
Reviewed By: StanislavGlebik
Differential Revision: D10089289
fbshipit-source-id: 2962e80d919c21801d08990be190f2574c48646d
Summary:
Support PUT request uploads to the Mononoke API server.
The hg client sends a PUT request to store a file into the blobstore during a push with LFS support.
Uploading a file by alias is divided into 2 parts:
- Put alias: blobstore key
- Put blobstore_key: contents
Keep in mind that the file content is thrift encoded.
The host_address for the batch request comes from the command line flags: -H for host, -p for port.
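A minimal sketch of the two-part upload (hypothetical types; the real blobstore is remote and the contents are thrift encoded):

```rust
use std::collections::HashMap;

// Hypothetical in-memory blobstore with the two-level layout:
// the alias maps to a blobstore key, and the key maps to contents.
#[derive(Default)]
struct Blobstore {
    aliases: HashMap<String, String>, // alias -> blobstore key
    blobs: HashMap<String, Vec<u8>>,  // blobstore key -> contents
}

impl Blobstore {
    fn upload(&mut self, alias: &str, key: &str, contents: Vec<u8>) {
        // Part 1: put alias -> blobstore key.
        self.aliases.insert(alias.to_string(), key.to_string());
        // Part 2: put blobstore key -> contents.
        self.blobs.insert(key.to_string(), contents);
    }

    // Download resolves the alias to a key, then the key to contents.
    fn get_by_alias(&self, alias: &str) -> Option<&Vec<u8>> {
        let key = self.aliases.get(alias)?;
        self.blobs.get(key)
    }
}

fn main() {
    let mut store = Blobstore::default();
    store.upload("sha256:abcd", "content.key.1234", b"file body".to_vec());
    assert_eq!(store.get_by_alias("sha256:abcd").unwrap(), &b"file body".to_vec());
    assert!(store.get_by_alias("sha256:ffff").is_none());
}
```

The indirection means the same contents can be reached through multiple aliases (e.g. different hash schemes) without duplicating the blob.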
Reviewed By: StanislavGlebik
Differential Revision: D10026683
fbshipit-source-id: 6c2726c7fee2fb171582bdcf7ce86b22b0130660
Summary:
Previously cachelib cmdline args were added only to the cmdline binaries, but not
to Mononoke; this diff fixes that.
Reviewed By: farnz
Differential Revision: D10083899
fbshipit-source-id: 8febba96561c5ab9a61f60fafc7a7e56985dc038
Summary:
JSON blobs let other users of Mononoke learn what they need to know
about commits. When we get a commit, log a JSON blob to Scribe that other users can pick up to learn what they want to know.
Because Scribe does not guarantee ordering, and can sometimes lose messages, each message includes enough data to allow a tailer that wants to know about all commits to follow backwards and detect lost messages (and thus fix them up locally). It's expected that tailers will either sample this data, or have their own state that they can use to detect missing commits.
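A hypothetical example of such a Scribe message (the field names here are illustrative, not the actual schema); including the parent hashes is what lets a tailer walk backwards and detect lost messages:

```json
{
  "repo": "fbsource",
  "bookmark": "master",
  "changeset_id": "a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2",
  "parents": ["0f1e2d3c4b5a0f1e2d3c4b5a0f1e2d3c4b5a0f1e"],
  "generation": 142
}
```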
Reviewed By: StanislavGlebik
Differential Revision: D9995985
fbshipit-source-id: 527b6b8e1ea7f5268ce4ce4490738e085eeeac72
Summary:
Handle the POST request to mononoke_api/objects/batch from the hg client, per the git-lfs protocol:
https://github.com/git-lfs/git-lfs/tree/master/docs/api
https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md
In order to get the URLs for uploading/downloading files, the hg client sends a POST request to mononoke_api/objects/batch.
This diff implements support for this POST request.
As an answer it returns JSON in the format required by the git-lfs protocol (see links for more info).
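For reference, the shapes defined by the linked spec look roughly like this (oids, sizes, and URLs are illustrative). The client's batch request:

```json
{
  "operation": "download",
  "transfers": ["basic"],
  "objects": [
    { "oid": "1111111", "size": 123 }
  ]
}
```

and the server's JSON answer, pointing the client at the upload/download endpoints:

```json
{
  "transfer": "basic",
  "objects": [
    {
      "oid": "1111111",
      "size": 123,
      "actions": {
        "download": { "href": "https://some-download.com" }
      }
    }
  ]
}
```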
Reviewed By: StanislavGlebik
Differential Revision: D9966691
fbshipit-source-id: 53bcbb4b455e61d9d344bfd9b5b6fb00bc201084
Summary:
WIP
Mononoke API download support for LFS.
Supports GET requests of the form:
curl http://127.0.0.1:8000/{repo_name}/lfs/download/{sha256}
Reviewed By: StanislavGlebik
Differential Revision: D9850413
fbshipit-source-id: 4d756679716893b2b9c8ee877433cd443df52285
Summary:
The "path" in manifold blobrepo is used for logging, but it has been quite confusing for "fbsource" and "fbsource-pushrebase" to be logged in an identical way - both as "fbsource", because of the "path" config. Let's not use the "path" for logging; instead use the "reponame" from the metaconfig repo.
In case we ever want to have two repos that are named the same (please don't) or have logging under a different name than "reponame" from config then we can add a proper optional "name" parameter, but for now we don't require this confusing feature.
Reviewed By: StanislavGlebik
Differential Revision: D9769514
fbshipit-source-id: 89f2291df90a7396749e127d8985dc12e61f4af4
Summary:
Use the err_downcast macros instead of manual downcasting. Doesn't make
a huge code-size difference in this case, but a little neater?
Reviewed By: kulshrax, fanzeyi
Differential Revision: D9405014
fbshipit-source-id: 170665f3ec3e78819c5c8a78d458636de253bb6f
Summary: While I was working on `actix-srserver`, I realized the current design of the API server is quite unnecessary. The "MononokeActor" and "MononokeRepoActor" only return futures without much CPU computation cost, so they don't need to be placed in a separate thread.
Reviewed By: jsgf
Differential Revision: D9472848
fbshipit-source-id: 618ec39c42d90717fa6985fee7d6308420962d3f
Summary: Added a thrift client library and binary for Mononoke API Server that allows us to play with the API Server's thrift port.
Reviewed By: farnz
Differential Revision: D9110899
fbshipit-source-id: 603cc5e2b5e0419a73c9eccb35f8c95455ada9ce
Summary: This commit adds a basic thrift server that responds to fb303 status check queries to Mononoke API Server.
Reviewed By: farnz
Differential Revision: D9092291
fbshipit-source-id: d1e4ddb280c252f549d40a0bb03d05afccbf73b8
Summary: Adds proper url decoding for is_ancestor, so that special characters can be encoded in the url.
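The idea can be sketched as a plain percent-decoder (hypothetical, std-only; the real server presumably uses an existing decoding routine from its URL library):

```rust
// Decode %XX escapes in a URL path segment, so a query argument
// containing special characters (e.g. a slash) can be encoded as
// "%2F" in the request URL.
fn percent_decode(s: &str) -> Option<String> {
    let bytes = s.as_bytes();
    let mut out = Vec::with_capacity(bytes.len());
    let mut i = 0;
    while i < bytes.len() {
        if bytes[i] == b'%' {
            // Take the two hex digits after '%'; bail out on malformed input.
            let hex = s.get(i + 1..i + 3)?;
            out.push(u8::from_str_radix(hex, 16).ok()?);
            i += 3;
        } else {
            out.push(bytes[i]);
            i += 1;
        }
    }
    String::from_utf8(out).ok()
}

fn main() {
    assert_eq!(percent_decode("releases%2Fv1.0").unwrap(), "releases/v1.0");
    assert_eq!(percent_decode("plain").unwrap(), "plain");
    assert!(percent_decode("%zz").is_none());
}
```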
Reviewed By: kulshrax
Differential Revision: D9325467
fbshipit-source-id: d3ff60e004be8d254ea6f7288188adf54ab7ff5f
Summary:
We'll be running in Tupperware, and want to shrink when we get too
large to avoid OOM due to caches. Configure cachelib appropriately
Reviewed By: StanislavGlebik
Differential Revision: D8900371
fbshipit-source-id: 4f1f64c2508c64e4ce2d201e0a0e86446f84ffef
Summary: This fixes the `blocking` cannot run without threadpool error.
Reviewed By: farnz
Differential Revision: D9017757
fbshipit-source-id: 037fd6f30598f56a83c1dd91c9b8c4f3c8e413b3
Summary: The 'mut' requirement wasn't required for structs implementing `ReachabilityIndex`, and will get in the way when incorporating this work into the Mononoke server / API server.
Reviewed By: StanislavGlebik
Differential Revision: D9142238
fbshipit-source-id: 4853b468bf04493289fb017bf56b3a1753f29dcd
Summary: Clean up main.rs to move all usage of `matches` together. So we don't need to deal with the lifetime of `matches` in my next diff.
Reviewed By: farnz
Differential Revision: D9017723
fbshipit-source-id: ae60bc9bb0a78983b1db91da39499024dc5af2ad
Summary: Someone imported `slog-async` so API Server can get rid of the `Mutex`.
Reviewed By: farnz
Differential Revision: D9031672
fbshipit-source-id: 1525707899f29826c363496459b2a9bb246f3e99
Summary: This commit changes MononokeBackingStore to use new APIs provided by Mononoke API Server.
Reviewed By: chadaustin
Differential Revision: D8882789
fbshipit-source-id: 0f06ca5f850af9fb52f1d593b9abd715a541488a
Summary: This commit adds support for retrieving changeset information at `/<repo>/changeset/<commit hash>`.
Reviewed By: StanislavGlebik
Differential Revision: D8880547
fbshipit-source-id: ed68c577316693e0c685c347405b5d344d1bc87e
Summary: This commit adds support for `/<repo>/tree/<treehash>` (retrieving tree content by hash).
Reviewed By: StanislavGlebik
Differential Revision: D8870870
fbshipit-source-id: 8b3271c819e47d112a8b44097f626360a05540d1
Summary: This commit implements the `<repo>/blob/<blobhash>` API that Eden needs.
Reviewed By: StanislavGlebik
Differential Revision: D8870300
fbshipit-source-id: eca9dc434c8fb584dfba1542c5242fbee18e6619
Summary: This commit adds support for an ls operation that lists the files in a folder at a given commit.
Reviewed By: StanislavGlebik
Differential Revision: D8729389
fbshipit-source-id: cad6d02da075e94b5269cc18052a5a3916ddac86
Summary:
Back out "[mononoke] Switch to cachelib for blob caching"
Original commit changeset: 2549d85dfcba
Back out "[mononoke] Remove unused asyncmemo imports"
Original commit changeset: e34f8c34a3f6
Back out "mononoke: fix blobimport"
Original commit changeset: b540201b93f1
Reviewed By: StanislavGlebik
Differential Revision: D8989404
fbshipit-source-id: e4e7c629cb4dcf196aa56eb07a53a45f6008eb4e
Summary:
Added support for queries which use bookmark names in place of node hashes. This involved:
* Creating a method `string_to_bookmark_changeset_id`, which takes a string, treats it as a bookmark, and tries to find the corresponding changeset id in the repo.
* Modifying the `is_ancestor` call in `MononokeRepoActor` to try to interpret the query strings as bookmarks if they can't be interpreted as node hashes.
* Introducing the `cloned` crate from `//common/rust` into the API server to make the above methods cleaner.
* Modifying the integration test to add a bookmark to the test repo and attempt querying using the bookmark name.
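The fallback can be sketched like this (hypothetical names and a simplified hash check; the real code works with HgNodeHash parsing and BlobRepo bookmark lookups):

```rust
use std::collections::HashMap;

// Try the string as a node hash first; only then look it up as a
// bookmark name and resolve it to a changeset id.
fn resolve(
    s: &str,
    bookmarks: &HashMap<String, String>, // bookmark name -> changeset id
) -> Result<String, String> {
    // Simplified stand-in for node-hash parsing: 40 hex characters.
    let looks_like_hash = s.len() == 40 && s.chars().all(|c| c.is_ascii_hexdigit());
    if looks_like_hash {
        return Ok(s.to_string());
    }
    bookmarks
        .get(s)
        .cloned()
        .ok_or_else(|| format!("{} is not a node hash or a known bookmark", s))
}

fn main() {
    let mut bookmarks = HashMap::new();
    bookmarks.insert("master".to_string(), "aa".repeat(20));
    assert_eq!(resolve("master", &bookmarks).unwrap(), "aa".repeat(20));
    assert_eq!(resolve(&"bb".repeat(20), &bookmarks).unwrap(), "bb".repeat(20));
    assert!(resolve("unknown", &bookmarks).is_err());
}
```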
Reviewed By: fanzeyi
Differential Revision: D8976793
fbshipit-source-id: 3a2b58cac0fb80ee18fad8529bd58af5b54f85ef
Summary: Update actix and actix-web to the latest version
Reviewed By: sunshowers
Differential Revision: D8965698
fbshipit-source-id: 18324161c832ccce4d908799a703368cd615996c
Summary:
Moved two types of functionality to a shared 'helpers' file so that they can be used by other indexes:
* Getting the Generation number of a changeset. The BlobRepo method currently returns an Option<Generation> as the success type, so putting the combinator calls that get the underlying Generation or map to an Error into a separate method will help keep the code more readable, and allow this logic to be reused in other parts.
* Checking if a node exists in the repo. This wraps the changeset_exists method from BlobRepo and returns an error if it was false or an error itself, else success with a void item. This just helps with code readability, so it will be obvious if the result of a future is being used, or if its success is just a prereq for the rest of the operations.
* Convert a collection of HgChangesetId to a collection of (HgNodeHash, Generation). Again, will help with code readability in more complicated functions, since the combinators of this method are, in my opinion, cluttering up the other methods using this functionality and making them more difficult to follow.
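Stripped of the futures plumbing, the first two helpers amount to something like this (hypothetical, simplified signatures; generation numbers shown as plain u64 and everything synchronous):

```rust
// Unwrap the Option<Generation> success value into a hard error
// when the changeset has no known generation number.
fn get_generation(opt: Option<u64>, hash: &str) -> Result<u64, String> {
    opt.ok_or_else(|| format!("generation number of {} not found", hash))
}

// Turn a boolean "does this changeset exist" answer into
// success-or-error, so callers can chain it as a prerequisite
// for the rest of their operations.
fn check_node_exists(exists: bool, hash: &str) -> Result<(), String> {
    if exists {
        Ok(())
    } else {
        Err(format!("changeset {} not found in repo", hash))
    }
}

fn main() {
    assert_eq!(get_generation(Some(3), "abc"), Ok(3));
    assert!(get_generation(None, "abc").is_err());
    assert!(check_node_exists(true, "abc").is_ok());
    assert!(check_node_exists(false, "abc").is_err());
}
```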
Reviewed By: StanislavGlebik
Differential Revision: D8919874
fbshipit-source-id: fc6cdf6e3a1f0dfa73c74ec94f0abac4a7860794
Summary:
Adding support to the Mononoke API server for naive is_ancestor queries using a BFS.
The API server now supports queries of the form:
Request URL: "{repo}/is_ancestor/{proposed_ancestor}/{proposed_descendent}"
where the arguments in the URL are:
- repo: the name of the repo to query reachability in
- proposed_ancestor: a 20 byte hex encoded string representing a node hash
- proposed_descendent: a 20 byte hex encoded string representing a node hash
Response: One of:
- the string, "true", if 'proposed_ancestor' is an ancestor of 'proposed_descendent' in 'repo'.
- the string, "false", if the above condition isn't satisfied.
- an error if the query couldn't be performed
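A naive BFS of this sort can be sketched like so (hypothetical, std-only; the real index works with HgNodeHash values and generation numbers):

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Walk backwards from the proposed descendant through parent links,
// looking for the proposed ancestor.
fn is_ancestor(
    parents: &HashMap<&str, Vec<&str>>, // commit -> its parents
    ancestor: &str,
    descendant: &str,
) -> bool {
    let mut seen = HashSet::new();
    let mut queue = VecDeque::from([descendant]);
    while let Some(node) = queue.pop_front() {
        if node == ancestor {
            return true;
        }
        // Only expand each node once, so merges don't blow up the search.
        if seen.insert(node) {
            if let Some(ps) = parents.get(node) {
                queue.extend(ps.iter().copied());
            }
        }
    }
    false
}

fn main() {
    // a -> b -> c, plus a second branch a -> d
    let parents = HashMap::from([
        ("b", vec!["a"]),
        ("c", vec!["b"]),
        ("d", vec!["a"]),
    ]);
    assert!(is_ancestor(&parents, "a", "c"));
    assert!(!is_ancestor(&parents, "c", "d"));
}
```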
This involved adding:
- new enum values for the MononokeRepoQuery and MononokeRepoResponse structs for 'IsAncestor' queries and responses.
- a dependency on the 'mononoke/reachabilityindex' crate.
- a 'is_ancestor' function to the MononokeRepoActor struct, which delegates queries to a GenerationNumberBFS index.
- appropriate url handling to the server object in main
- New enums to the API server ErrorKind, and appropriate down casting from the ReachabilityIndex ErrorKind.
- Integration tests which make the test repo in test-apiserver.t have a few branches, and query the API server for reachability of pairs of commits.
Reviewed By: fanzeyi
Differential Revision: D8844221
fbshipit-source-id: 1ba102fede378688243827850ff67aabc587a748
Summary:
cachelib can shrink caches to avoid running into OOM, while still
giving as close to full-sized cache performance as possible. Make it possible
for Rust users of cachelib to request cache shrinking on low memory, while
choosing a strategy that suits them.
Reviewed By: Imxset21
Differential Revision: D8895814
fbshipit-source-id: ca4eb5b002c9ed922e7f7f56de002313a0d2303b
Summary:
Start deprecating AsyncMemo for caching purposes, by removing its use
as a blobstore cache.
Reviewed By: StanislavGlebik
Differential Revision: D8840496
fbshipit-source-id: 2549d85dfcba6647e9b0824ab55ab76165a17564
Summary: It really bugs me to have a file with many different things. It is time to split these into individual files. Hope this is not an overdesign.
Reviewed By: kulshrax
Differential Revision: D8862772
fbshipit-source-id: 91f211764e0ffe8a9fb127c3d2f7c9890e1ce0f4
Summary: This commit cleans up the code in the API Server and moves the logic of Mononoke-related operations to the `api` crate.
Reviewed By: kulshrax
Differential Revision: D8849200
fbshipit-source-id: 5a53c8db1f76661efebce8ebb79a327350059837
Summary: This commit renames `/blob/` to `/raw/`. This helps users to distinguish the proposed "get file content by hash" API from the original "get file content by path".
Reviewed By: kulshrax
Differential Revision: D8869635
fbshipit-source-id: 79d9cdaeb7e4e55b3d804d4530fb17835104cc32
Summary:
This commit adds three options to specify the locations of SSL certificates so the API server will accept encrypted traffic.
Currently this only works with HTTP/1.1 due to a bug in HTTP/2 parsing in actix-web. Once the bug is fixed upstream, we will be able to serve HTTP/2 traffic as well.
Reviewed By: jsgf
Differential Revision: D8703861
fbshipit-source-id: 0d4e68276013a8aeb6ee006e5175b8caeba767cb
Summary:
This commit:
* add `--with-scuba` to apiserver.tw so we collect scuba metrics in tw deployed jobs.
* disable scuba for `/status` as it adds too much noise in apiserver's scuba table.
Reviewed By: kulshrax
Differential Revision: D8848271
fbshipit-source-id: 2f9638bd7ccb44f8ae02172097ccc38b3818b0e4
Summary:
This is a series of patches which adds Cargo.toml files to all the crates and tries to build them. There is an individual patch for each crate which tells whether that crate currently builds successfully using cargo or not, and if not, the reason why.
The reasons why some crates don't build:
* failure_ext and netstring are internal crates
* errors related to tokio_io; there might be a patched version of tokio_io internally
* actix-web depends on httparse, which uses nightly features
All builds were done using rustc version `rustc 1.27.0-dev`.
Pull Request resolved: https://github.com/facebookexperimental/mononoke/pull/7
Differential Revision: D8778746
Pulled By: jsgf
fbshipit-source-id: 927a7a20b1d5c9643869b26c0eab09e90048443e
Summary: This commit bridges messages logged via `log` crate to be forwarded to `slog` using `slog-stdlog` crate. This allows us to see debug messages printed from third party crates that are using `log` instead of `slog`.
Reviewed By: StanislavGlebik
Differential Revision: D8703996
fbshipit-source-id: f847b76a52c262eded4f576a5d9065574ff7e4dd