Summary:
We've already had a few diffs that added or removed the
```
Ignoring setSSLLockTypes after initialization
```
line.
I'm not sure why we have them, but we don't want to see them anyway, so let's disable
it via the minloglevel glog option (level 2 means only ERROR and FATAL are shown).
Reviewed By: HarveyHunt
Differential Revision: D13416156
fbshipit-source-id: 362153385b77e133e404b21faa1735a9544fe13e
Summary:
Currently it creates too many commits and the stress run times out. Besides, it also
creates random filenames, so even if we find a failure we won't be able to
repro it.
Let's create just two commits instead of 200.
Reviewed By: HarveyHunt
Differential Revision: D13415238
fbshipit-source-id: 927bd69b55761e4dd4ea6e5f459df58a95918955
Summary:
We've moved to the new config structure, but I forgot to make the same changes to
the apiserver.
Reviewed By: lukaspiatkowski
Differential Revision: D13358407
fbshipit-source-id: cec81a21518cdb3c91dabb93e220d1ba3e25d02c
Summary: Set the proper context for sessions.
Reviewed By: liubov-dmitrieva
Differential Revision: D13258641
fbshipit-source-id: edd18d4abc8f5475e0d2ac8395dfc877b2dd5958
Summary:
Previously we manually specified the blobstore type and all the necessary
parameters. That was error-prone, but it worked because we had only one blobstore.
Since we are going to add secondary blobstores soon, configuring binaries like
blobimport will be harder because we'll need to specify parameters for all
blobstores. Let's make blobimport, mononoke admin and other binaries read the
configuration the same way Mononoke does, i.e. via toml files.
Reviewed By: lukaspiatkowski
Differential Revision: D13183244
fbshipit-source-id: 99caa6348133acec11dd04ae44e1f9f0a8ebb197
Summary:
Config repo proved to be tricky to understand and hard to use. Let's just use
toml files.
Reviewed By: farnz
Differential Revision: D13179926
fbshipit-source-id: 3a44ee08c37284cc4c189c74b5c369ce82651cc6
Summary:
Add an integration test that talks to the API server from remotefilelog.
Since this involves running a local instance of Mononoke and the API server, I've added it to Mononoke's integration tests rather than Mercurial's, in order to re-use the setup code from Mononoke's tests. Since the server is running locally, and would use a different SSL setup than the VIP that hg would normally access it through, the local API server is run without SSL enabled.
Reviewed By: Anastasiya-Zhyrkevich, farnz
Differential Revision: D13089196
fbshipit-source-id: 01f415d8ee7173f7f2ab3a234565fd79d618126e
Summary: test-init.t and test-lfs-to-mononoke.t were failing. This diff fixes them
Differential Revision: D13301143
fbshipit-source-id: 1f8060d4c6b641c555ba8a5cdcfe4cb14ac89d0a
Summary:
According to the [Git-LFS Plan](https://www.mercurial-scm.org/wiki/LfsPlan), `getfiles` should return the file in the [following format](https://www.mercurial-scm.org/wiki/LfsPlan#Metadata_format) instead of the file content:
```
oid: sha256.SHA256HASH
size: size_int
```
The hg client requests files using the sha1 hgfilenode hash. To calculate the sha256 of the content, Mononoke fetches the file from the blobstore into memory and calculates the sha256 there.
This gives no benefit in time or memory consumption compared to a non-LFS transfer in Mononoke.
*Solution:*
Put a `key-value` entry in the blobstore after the first request for the file. That is, after the hg client requests the sha256 of a file for the first time, calculate it and put it in the blobstore.
Subsequent requests for the sha256 of that file content then avoid recalculating it in Mononoke; the sha256 saved in the blob is returned instead.
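A minimal sketch of the pointer format and the caching idea (the cache store and the `fetch_content` callback are hypothetical stand-ins for the blobstore):

```python
import hashlib

def lfs_pointer(content: bytes) -> str:
    """Build the getfiles LFS metadata described above."""
    oid = hashlib.sha256(content).hexdigest()
    return f"oid: sha256.{oid}\nsize: {len(content)}\n"

# Hypothetical cache: sha256 keyed by the hg filenode, so the next request
# skips refetching the content and recomputing the hash.
_sha256_cache: dict = {}

def sha256_for_filenode(filenode: str, fetch_content) -> str:
    if filenode in _sha256_cache:        # cache hit: no recalculation
        return _sha256_cache[filenode]
    digest = hashlib.sha256(fetch_content(filenode)).hexdigest()
    _sha256_cache[filenode] = digest     # save for subsequent requests
    return digest
```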
Reviewed By: StanislavGlebik
Differential Revision: D13021826
fbshipit-source-id: 692e01e212e7d716bd822fa968e87abed5103aa7
Summary:
The file at some revision is the initial file content with deltas applied.
A delta is a vector of Fragments.
A Fragment is a sequential change to the file (old part of the content -> new content).
This diff implements an optimization of the process of getting the file content at some revision.
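A minimal sketch of the model described above (the Fragment fields are assumptions based on the summary):

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    start: int      # byte offset where old content is replaced
    end: int        # end of the replaced range (exclusive)
    content: bytes  # new content for that range

def apply_delta(base: bytes, delta: list) -> bytes:
    """Apply fragments back-to-front so earlier offsets stay valid."""
    out = base
    for frag in sorted(delta, key=lambda f: f.start, reverse=True):
        out = out[:frag.start] + frag.content + out[frag.end:]
    return out
```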
Reviewed By: lukaspiatkowski
Differential Revision: D12928138
fbshipit-source-id: fcc28e2d0e0acf83e17887092f6593e155431c1b
Summary:
Update the test certificates, because they expired. Set the next expiration date
10 years from now.
```
stash@devvm3292
/data/users/stash/fbsource/fbcode/scm/mononoke/tests/integration
(eab3173|remote/master) $ openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout testcert.key -out testcert.crt
Generating a 2048 bit RSA private key
.............+++
.....................................................................................................................+++
writing new private key to 'testcert.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:uk
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:localhost
Email Address []:
```
Also update the warnings in some tests.
Reviewed By: farnz
Differential Revision: D13082351
fbshipit-source-id: 8636e711e82113e692ecd146154cbe4796b33a59
Summary:
A new debug line started to show up. It's hard to track where it comes from, but
let's add it to the test.
Reviewed By: farnz
Differential Revision: D13042463
fbshipit-source-id: 0901920d127099b4f27c242df4f6c3814d701a08
Summary:
As part of our read path rollout, we want to block user errors from
creating new commits that would confuse blobimport. Make it possible to
configure a read-only repo.
Reviewed By: StanislavGlebik
Differential Revision: D12945024
fbshipit-source-id: 4265bf57f8adac7965117b710b8285bac483b8ee
Summary:
Purpose:
- A Sha256 alias link to file_content is required for LFS getfiles to work correctly.
The LFS protocol uses SHA-256 to refer to the file content; Mononoke uses Blake2.
To support LFS in Mononoke we need to set up a link from the SHA-256 hash of the content to the Blake2 hash of the content.
These links are called aliases.
- Aliases are uploaded together with file content blobs,
but only for new push operations.
- If a repo is blobimported from somewhere, we need to make sure that all the links are in the blobstore.
If a repo was blobimported before aliases were added, it may be missing aliases for some blobs.
- This tool can be used to
  - find whether any aliases are missing
  - fill in missing aliases
Implementation:
- Run on a repo.
- Iterate through all changesets.
- Go through all the file_content blobs in the changesets.
- Verify/generate alias256 links to the file_content blobs.
Modes supported:
- verify: count the number of errors and print it to the console
- generate: if an alias blob is missing, add it to the blobstore
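A rough sketch of the verify/generate loop (the dict-based blobstore and the "alias.sha256.<hash>" key format are illustrations only, not the real store API):

```python
import hashlib

def check_aliases(blobstore: dict, file_contents: list, generate: bool) -> int:
    """Count missing sha256 aliases; in generate mode, fill them in.

    `blobstore` is a plain dict standing in for the real blobstore.
    """
    missing = 0
    for content in file_contents:
        alias_key = "alias.sha256." + hashlib.sha256(content).hexdigest()
        if alias_key not in blobstore:
            missing += 1
            if generate:  # generate mode: add the missing alias blob
                blobstore[alias_key] = content
    return missing
```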
Reviewed By: StanislavGlebik
Differential Revision: D10461827
fbshipit-source-id: c2673c139e2f2991081c4024db7b85953d2c5e35
Summary: Now that Tupperware is using `/health_check`, there are no users of `/status`, so remove this endpoint.
Reviewed By: StanislavGlebik
Differential Revision: D12908785
fbshipit-source-id: be9bfae9453143a6f4b26f7fc6cbc68a3f1adc5c
Summary:
The Mononoke apiserver now parses the HTTP header in order to get the "host" URL.
Based on that host URL, a batch request will return links to the same host.
Requirement:
The LFS protocol consists of two parts:
- "batch" request (an HTTP POST)
  - the hg client sends the ids of the files it wants to download/upload
  - the API server should return valid URLs for downloading and uploading objects
- "upload/download" requests
  - the actual upload/download of the files
This diff allows the API server to return links to itself in the "batch" response, so the hg client will go back to the API server to upload and download files.
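As a sketch, a batch response built from the request's Host header might look like this (the JSON shape follows the public git-lfs batch API; the endpoint path and handler are illustrative):

```python
def batch_response(host: str, oids: list) -> dict:
    """Point download/upload links back at the host that served the batch request."""
    base = f"https://{host}/lfs/objects"  # hypothetical endpoint path
    return {
        "transfer": "basic",
        "objects": [
            {
                "oid": oid,
                "actions": {
                    "download": {"href": f"{base}/{oid}"},
                    "upload": {"href": f"{base}/{oid}"},
                },
            }
            for oid in oids
        ],
    }
```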
Reviewed By: kulshrax
Differential Revision: D12941635
fbshipit-source-id: e56453b0d6239daa3848c285a1df09a4a869f2c8
Summary:
We are a tree-only server; advertising this in our caps should ensure
that when we're the source of truth at Facebook, anyone with a pre-treemanifest
version of Mercurial gets an insta-fail on connect, rather than a weird error
when they start exchanging data with us.
Reviewed By: HarveyHunt
Differential Revision: D12927232
fbshipit-source-id: 52c15c8a0f1842b6ca023f97228277d0fd9e8e38
Summary:
Panicking is useless here. It produces a huge stack trace which just contains the
main function and makes it harder to debug the actual problem.
Let's just exit in case of errors.
Reviewed By: farnz
Differential Revision: D12912198
fbshipit-source-id: 1faeacfb96765ce047a801f6b072112f10b50b7b
Summary:
This augments `/tree` to yield the size and content sha1 hash for the entries.
This is important for Eden and avoids additional round trips to the server.
The content hashing is the portion that I expect some push back on,
because it doesn't appear to be cached today and the implementation
here does a simplistic fetch and hash. By doing this we hope to
squash out a potential later fetch of the entire contents when
buck build is run.
Differential Revision: D10865588
fbshipit-source-id: c020ef07b99d8a5e8b2f8f7b699bf15e750d60a5
Summary:
Recently there was a change in core hg that changed the way we encode filenames - D9967059. However, it wasn't reflected in Mononoke's blobimport code, so the job constantly fails.
This diff changes the filename encoding process to match Mercurial's.
The encoding process has 3 steps:
1. (Capital -> _lowercaseletter) + ( _ -> __ ).
If the new file name is > 255 chars, go to step 2; otherwise stop.
2. (Capital -> Capital) + ( _ -> __ ).
If the new filename is > 255 chars, go to step 3; otherwise stop.
3. (Capitals -> Capitals) + ( _ -> : )
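Step 1 of the scheme can be sketched as follows (steps 2 and 3 only apply to over-long names and are omitted here):

```python
def encode_step1(name: str) -> str:
    """Capital -> '_' + lowercase letter, '_' -> '__' (step 1 above)."""
    out = []
    for ch in name:
        if ch == "_":
            out.append("__")        # escape literal underscores
        elif ch.isupper():
            out.append("_" + ch.lower())  # mark capitals with a leading '_'
        else:
            out.append(ch)
    return "".join(out)
```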
Reviewed By: StanislavGlebik
Differential Revision: D10851634
fbshipit-source-id: 28b7503b2601729113326a18ede3e93c04572c6d
Summary:
On push from an hg client, require a sha1 check on any type of upload.
If data is provided, the sha1 is calculated from the bytes: [sha1 calculation place](https://fburl.com/noa99y37)
If LFSMetaData is provided, the sha1 is calculated by fetching the file from the blobstore: [fetching place](https://fburl.com/boj4s74f)
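As context for the sha1 check, Mercurial's filenode hash is the sha1 of the two (sorted) parent nodes followed by the content; a minimal sketch of the verification (function names here are illustrative):

```python
import hashlib

NULL_ID = b"\0" * 20  # null parent node

def filenode_sha1(content: bytes, p1: bytes = NULL_ID, p2: bytes = NULL_ID) -> bytes:
    """Mercurial-style node hash: sha1(min(p1,p2) + max(p1,p2) + content)."""
    a, b = sorted((p1, p2))
    return hashlib.sha1(a + b + content).digest()

def verify_upload(expected_node: bytes, content: bytes,
                  p1: bytes = NULL_ID, p2: bytes = NULL_ID) -> bool:
    # Reject the upload when the computed sha1 doesn't match the claimed node.
    return filenode_sha1(content, p1, p2) == expected_node
```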
Reviewed By: StanislavGlebik
Differential Revision: D10509331
fbshipit-source-id: 216f59541b8adf8ab87026612e735ac1527e7cc2
Summary:
Display the hash of the commit that didn't pass a hook,
which is commonly needed by fbsource hooks that use $HG_NODE.
I fixed up the tests, but test-hooks.t is broken from
the hg amend/fbamend fallout and also has some other issues. I tried to add only
the changes relevant to this commit.
Reviewed By: StanislavGlebik
Differential Revision: D10466395
fbshipit-source-id: dd1cdc994171a014c3d4806804ace14e85e726d4
Summary:
Background:
According to the [git lfs protocol](https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md), an HTTP POST "batch" request should return a link to
the look-aside server.
In our case the Mononoke API server is the look-aside server, and it processes both the "batch" request and the "upload/download" requests.
So it needs to return a link to itself.
The new approach takes a separate lfs-url for the "batch" request.
The previous approach required the --http-host and --http-port attributes to build a link to the running API server instance.
Reviewed By: StanislavGlebik
Differential Revision: D10488586
fbshipit-source-id: ed9d78ee9bc78bdcec5eea813bd9aaa6e4590a5c
Summary:
For unification with the commit cloud VIP configuration, the apiserver should support the same health check API.
This is needed for corp2prod.
The same as: D10488369
Reviewed By: liubov-dmitrieva
Differential Revision: D10488494
fbshipit-source-id: 50b4024295c596342a8080474383de850bb7754a
Summary:
It was broken because it only matched conflict markers that were on the first
line. This diff fixes it by splitting the file content on \n first.
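The fix can be sketched as: split the content on \n and test every line against the standard conflict-marker prefixes (the function name is illustrative):

```python
def has_conflict_markers(content: str) -> bool:
    """Split on '\\n' first so markers on any line are caught, not just line 1."""
    markers = ("<<<<<<< ", ">>>>>>> ", "=======")
    return any(line.startswith(markers) for line in content.split("\n"))
```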
Reviewed By: farnz
Differential Revision: D10447393
fbshipit-source-id: a2091f6bc43e8bb9a77c63536e749432d524bbff
Summary:
This hook is designed to prevent text directives in .gitattributes
from making it into the repo.
As noted in the integration test, our regex may be too loose,
but it's probably OK in practice.
For better or worse, for now we're just trying to maintain the
behavior of the existing hook (though perhaps the existing hook
would have been a bit stricter if it weren't written in Bash).
For easy reference, here are the Git docs on gitattributes:
https://git-scm.com/docs/gitattributes/
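A sketch of the kind of loose check described above (the exact regex here is an assumption, not the hook's actual pattern):

```python
import re

# Deliberately loose, as the summary notes: flags any "text" or "-text"
# attribute anywhere in a .gitattributes line.
TEXT_DIRECTIVE = re.compile(r"(^|\s)-?text(=|\s|$)")

def has_text_directive(line: str) -> bool:
    """True if a .gitattributes line appears to set the text attribute."""
    return TEXT_DIRECTIVE.search(line) is not None
```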
Reviewed By: StanislavGlebik
Differential Revision: D10387336
fbshipit-source-id: c58f689ecc0648c2cc359a818c92d701258e8f46
Summary:
We want to deny landing files whose paths contain magic strings. Add a
hook to do this, with some predefined examples of how to write patterns.
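A sketch of the hook's core check (the pattern list here is purely illustrative):

```python
import re

# Illustrative deny patterns; the real hook's list is repo configuration.
DENY_PATTERNS = [re.compile(p) for p in (r"(^|/)\.svn(/|$)", r"do-not-land")]

def path_denied(path: str) -> bool:
    """True if the file path matches any deny pattern."""
    return any(p.search(path) for p in DENY_PATTERNS)
```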
Reviewed By: StanislavGlebik
Differential Revision: D10446531
fbshipit-source-id: 67f1a712d923345288c8d0a4f3e5da1e8f4e29f8
Summary:
A getfiles implementation for lfs.
The implementation is the following:
- get the file size from the file envelope (retrieved from manifold by HgNodeId)
- if the file size > the threshold from the lfs config:
  - fetch the file into memory and take the sha256 of it (this will be fixed later, as this approach consumes a lot of memory, but we don't have any mapping from sha256 to blake2 yet: [T35239107](https://our.intern.facebook.com/intern/tasks/?t=35239107))
  - generate the lfs metadata file according to the [LfsPlan](https://www.mercurial-scm.org/wiki/LfsPlan)
  - set the metakeyflag (REVID_STORED_EXT) in the file header
- if the file size < the threshold, process the usual way
Reviewed By: StanislavGlebik
Differential Revision: D10335988
fbshipit-source-id: 6a1ba671bae46159bcc16613f99a0e21cf3b5e3a
Summary:
According to the [Mercurial Lfs Plan](https://www.mercurial-scm.org/wiki/LfsPlan), on push, for files whose size is above the threshold (the lfs.threshold config) the hg client sends LFS metadata instead of the actual file contents. The main part of the LFS metadata is the SHA-256 of the file content (the oid).
The format requires the following mandatory fields: version, oid, size.
When lfs metadata is sent instead of the real file content, the lfs_ext_stored flag is set in the request's revflags.
If this flag is set, we ignore the sha-1 hash verification inconsistency.
Later we check that the content is actually loaded into the blobstore, create a filenode envelope from it, and load the envelope into the blobstore.
The filenode envelope requires the following info:
- size - retrieved when fetching the actual data from the blobstore.
- copy_from - retrieved from the file sent by the hg client.
Mononoke still does the same checks for an LFS push as for a non-lfs push (i.e. it checks that all the necessary manifests/filelogs were uploaded by the client).
Reviewed By: StanislavGlebik
Differential Revision: D10255314
fbshipit-source-id: efc8dac4c9f6d6f9eb3275d21b7b0cbfd354a736
Summary:
One brainless idiot decided to prune all trees from the changed files calculation.
Since it also prunes subtrees, that leaves us with just the files in the root
directory.
Reviewed By: lukaspiatkowski
Differential Revision: D10302299
fbshipit-source-id: 8fe2c4ad8de998dfd4083d97cd816d85b5fec604
Summary:
A hook that makes sure there are no conflict markers in file contents.
This hook is bypassable.
Reviewed By: purplefox
Differential Revision: D10260230
fbshipit-source-id: b9d69e757f18ed3f4f889a01032ef7360cba6867
Summary: Not a final version for sure, just a small improvement
Reviewed By: lukaspiatkowski
Differential Revision: D10260231
fbshipit-source-id: 9f9f61f23da5ac9a5d1abc9ad2f50900ca434326
Summary: Pushvars are one more way to bypass hooks. This diff implements it.
Reviewed By: purplefox
Differential Revision: D10257602
fbshipit-source-id: 1bd188239878ff917ded7db995ea2453da9f64c4
Summary:
Let's add logic to allow users to bypass hooks.
We'll have two ways to bypass hooks: one is via a string in the commit message,
the other is via pushvars.
This diff implements the first one.
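A minimal sketch of the commit-message variant (the `@bypass-hooks` marker string is hypothetical; the real bypass string would be per-hook configuration):

```python
def hook_bypassed(commit_message: str, bypass_string: str = "@bypass-hooks") -> bool:
    """A hook is skipped when its configured bypass string appears in the message."""
    return bypass_string in commit_message
```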
Reviewed By: purplefox
Differential Revision: D10255378
fbshipit-source-id: 31e803a58e2f4798294f7c807933c8e26de3cfaf