Summary: This will help traffic replay to distinguish between different repos
Reviewed By: farnz
Differential Revision: D12922287
fbshipit-source-id: 6eed2a0eebceca0636512baa3ee885f5d9c95ccb
Summary:
Sharding filenodes by path should stop us from knocking over databases -
make it configurable.
Reviewed By: StanislavGlebik
Differential Revision: D12894523
fbshipit-source-id: e27452f9b436842e1cb5e9e0968c1822f422b4c9
Summary:
We can already flatten a single XDB server with filenodes traffic, and
do so if we start up a server instance without a warm memcache. This is only going
to get worse in the future.
Start the process of sharding across multiple servers. For now, we can only
deal with shard size == 1, but this code should be ready to handle shard sizes
greater than 1.
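A minimal sketch of how path-based shard selection could look. The function and parameter names here are hypothetical, not the actual Mononoke code; the point is only that hashing the path keeps all filenodes for one path on one shard, and that `shard_count == 1` reproduces the current single-server behaviour.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical sketch: pick a shard for a filenode by hashing its path.
// With shard_count == 1 every path maps to shard 0 (the current setup).
fn shard_for_path(path: &str, shard_count: usize) -> usize {
    let mut hasher = DefaultHasher::new();
    path.hash(&mut hasher);
    (hasher.finish() as usize) % shard_count
}
```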
Reviewed By: StanislavGlebik
Differential Revision: D12888927
fbshipit-source-id: 8e01694357c390837487fdb3710685fd09feaec0
Summary:
Panicking is useless here. It produces a huge stack trace that contains only the
main function and makes it harder to debug the actual problem.
Let's just exit in case of errors.
Reviewed By: farnz
Differential Revision: D12912198
fbshipit-source-id: 1faeacfb96765ce047a801f6b072112f10b50b7b
Summary:
This augments `/tree` to yield the size and content sha1 hash for each entry.
This is important for Eden and avoids additional round trips to the server.
The content hashing is the portion that I expect some push back on,
because it doesn't appear to be cached today and the implementation
here does a simplistic fetch and hash. By doing this we hope to
squash out a potential later fetch of the entire contents when
buck build is run.
Differential Revision: D10865588
fbshipit-source-id: c020ef07b99d8a5e8b2f8f7b699bf15e750d60a5
Summary:
This diff adds a signal handler for the SIGTERM signal. When it's received, a
terminate-process flag is set to true. While this flag is set, no new client
connections are accepted, and the server waits until open connections finish.
The connections can take a long time, so ideally an external process should
send SIGKILL after a timeout.
Note that this change also makes the thrift server thread detached. The reason
is that making it stop gracefully is non-trivial, so making it detached
should be fine.
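The flag-based drain described above can be sketched roughly as below. This is a simplified illustration with made-up names, not the server's actual code: in the real server the store would be invoked from the SIGTERM handler, and the accept loop would consult the flag before taking each new connection.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical terminate-process flag, set once SIGTERM is received.
static TERMINATING: AtomicBool = AtomicBool::new(false);

// Invoked from the SIGTERM handler in the real server.
fn request_shutdown() {
    TERMINATING.store(true, Ordering::SeqCst);
}

// The accept loop checks this before accepting a new client connection;
// once it returns false, the server only drains existing connections.
fn should_accept_new_connection() -> bool {
    !TERMINATING.load(Ordering::SeqCst)
}
```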
Reviewed By: farnz
Differential Revision: D12857453
fbshipit-source-id: 6a8f890ff529d74c21fc0c62e16951dd95a3f101
Summary:
According to [blobimport logic](diffusion/FBS/browse/master/fbcode/scm/mononoke/cmdlib/src/blobimport_lib/changeset.rs;fd16808edd6e51c1d0b82f4812fe843e797025e0$163-164), blobimport requests
both the parents and the node content.
The previous implementation reconstructed the filenode for both cases: for
getting raw_content and for getting parents.
The new implementation avoids reconstructing the file content when retrieving parents.
Reviewed By: quark-zju
Differential Revision: D12857440
fbshipit-source-id: e1118affe85647931dd551b9ca7be5297afe56ce
Summary:
Add one more restriction to the config repo to make sure we don't forget to
move the PROD bookmark.
Reviewed By: HarveyHunt
Differential Revision: D12857619
fbshipit-source-id: c4b5e65f2d0b437aad77d8ccc4b4971b60020af4
Summary:
Let's have separate config bookmarks for release candidate and prod.
That will let us customize shadow tier behaviour.
This diff also adds a consistency check for the config repo. It requires that the RC
bookmark is a descendant of the PROD bookmark. This topology makes it easy to see
what the changes between PROD and RC are, and the verification prevents divergence of
configs, i.e. situations where somebody updated a prod config but forgot to
rebase the rc config.
Reviewed By: HarveyHunt
Differential Revision: D12857131
fbshipit-source-id: b60d8f7af16e3d530e5edeb22145ec0bd473ffe4
Summary:
Let's add an option to validate the getfiles content that we return to users on
some percentage of requests.
It'll increase latency, so let's not enable it by default.
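One simple way to sample "some percentage of requests" is a counter-modulo check, sketched below. The names and the counter-based strategy are assumptions for illustration; the real change may well sample randomly instead.

```rust
// Hypothetical sketch: validate roughly `validate_percent` percent of
// requests by bucketing a monotonically increasing request id mod 100.
fn should_validate(request_id: u64, validate_percent: u64) -> bool {
    request_id % 100 < validate_percent
}
```

With `validate_percent` set to 0 (the proposed default), no request pays the extra validation latency.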
Reviewed By: HarveyHunt
Differential Revision: D10558180
fbshipit-source-id: 2d7ec4dfe7b37b7b5541013278006278d1df68fa
Summary: This will enable doing queries like DELETE, UPDATE or REPLACE without listing all possibilities in the macros
Reviewed By: StanislavGlebik
Differential Revision: D10499501
fbshipit-source-id: 3e2ba433722bd34ffb5960840c509dc27cc9eb5d
Summary:
Troubleshooting startup problems is overly difficult without
printing more context, so print it.
Reviewed By: Anastasiya-Zhyrkevich
Differential Revision: D12814794
fbshipit-source-id: e815a6a93b4d1d3d03370b158f6fdc93edbc4ef5
Summary:
Recently there was a change in core hg that changed the way we encode filenames - D9967059. However, it wasn't reflected in the Mononoke blobimport code, so the job constantly fails.
This diff changes the filename encoding process to match Mercurial's.
The encoding process has 3 steps:
1. (Capital -> _lowercaseletter) + ( _ -> __ ).
If the new file name is > 255, then go to step 2, otherwise exit.
2. (Capital -> Capital) + ( _ -> __ ).
If the new filename is > 255, then go to step 3, otherwise exit.
3. (Capitals -> Capitals) + ( _ -> : )
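Step 1 above can be sketched as below. This is an illustrative implementation of just that first step (hypothetical function name, not the actual blobimport code): each uppercase ASCII letter becomes an underscore plus its lowercase form, and each literal underscore is doubled so the mapping stays reversible.

```rust
// Hypothetical sketch of encoding step 1:
// 'F' -> "_f", '_' -> "__", everything else unchanged.
fn encode_step1(name: &str) -> String {
    let mut out = String::new();
    for c in name.chars() {
        if c == '_' {
            out.push_str("__");
        } else if c.is_ascii_uppercase() {
            out.push('_');
            out.push(c.to_ascii_lowercase());
        } else {
            out.push(c);
        }
    }
    out
}
```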
Reviewed By: StanislavGlebik
Differential Revision: D10851634
fbshipit-source-id: 28b7503b2601729113326a18ede3e93c04572c6d
Summary:
On push from the hg client, require a sha1 check on any type of upload.
If data is provided, sha1 will be calculated from the bytes [sha1 calculation place](https://fburl.com/noa99y37).
If LFSMetaData is provided, sha1 will be calculated by fetching the file from the blobstore [fetching place](https://fburl.com/boj4s74f).
Reviewed By: StanislavGlebik
Differential Revision: D10509331
fbshipit-source-id: 216f59541b8adf8ab87026612e735ac1527e7cc2
Summary:
We have a problem with service upgrades/restarts because many servers start
sending too many requests to the mysql db.
Let's add a memcache that will prevent that.
Reviewed By: jsgf
Differential Revision: D10488624
fbshipit-source-id: 4575d359bc269e29fe72b47d7f47cda22bf4acd7
Summary:
Display the hash of the commit that didn't pass a hook,
which is a common occurrence in fbsource hooks using $HG_NODE.
I fixed up the tests, but test-hooks.t is broken from
the hg amend/fbamend fallout and also has some other issues. I tried to add
only the changes relevant to this commit.
Reviewed By: StanislavGlebik
Differential Revision: D10466395
fbshipit-source-id: dd1cdc994171a014c3d4806804ace14e85e726d4
Summary:
Background:
According to the [git lfs protocol](https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md), the HTTP POST "batch" request should return a link to
the look-aside server.
In our case, the Mononoke API server is the look-aside server, and it processes both the "batch" request and the "upload/download" requests.
So it needs to return a link to itself.
The new approach requires a separate lfs-url for the "batch" request.
The previous approach required the --http-host and --http-port attributes to construct a link to the running API server instance.
Reviewed By: StanislavGlebik
Differential Revision: D10488586
fbshipit-source-id: ed9d78ee9bc78bdcec5eea813bd9aaa6e4590a5c
Summary:
For unification with the commit cloud VIP configuration, the apiserver should support the same health check API.
This request is needed for corp2prod.
The same as : D10488369
Reviewed By: liubov-dmitrieva
Differential Revision: D10488494
fbshipit-source-id: 50b4024295c596342a8080474383de850bb7754a
Summary: Let's allow setting the number of commits processed by the hook tailer.
Reviewed By: lukaspiatkowski
Differential Revision: D10361239
fbshipit-source-id: ced118d5dfca3c8aea65cb8a21f5b487f47628cd
Summary:
Bookmarks point to Bonsai changesets. So previously we were fetching bonsai
changeset for a bookmark then converting it to hg changeset in `get_bookmark`
method, then converting it back to bonsai in `pushrebase.rs`.
This diff adds method `get_bonsai_bookmark()` that removes these useless
conversions.
Reviewed By: farnz
Differential Revision: D10427433
fbshipit-source-id: 1b15911fc5d77483b5a135a8d4484fccff23c774
Summary:
This is based on my reading of the source code, so I may have gotten things
wrong. Feel free to editorialize!
By putting this in a doc, hopefully it makes it easier for us to reason about
the API at a high level. Obviously it would be great if we could keep this up
to date going forward.
Reviewed By: StanislavGlebik
Differential Revision: D10340604
fbshipit-source-id: 9f3e82d234842e06c52f8a8b4440f8e06c487c0b
Summary:
Correctly handles case conflicting renames (only change in casing).
- path can now be removed from `CaseConlictingTrie`
- `check_case_conflicts` operates on `BonsaiChangeset` in pushrebase logic
Reviewed By: StanislavGlebik
Differential Revision: D10447522
fbshipit-source-id: d5342e7aa48154debee123b38bf3168e3371baa6
Summary:
It was broken because it only matched conflict markers that appeared on the first
line. This diff fixes it by splitting the file content on \n first.
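The fixed check is essentially a per-line scan, sketched below. The function name and the exact marker strings checked are assumptions for illustration, not the hook's actual code; the point is that splitting on \n lets markers on any line match, not just the first.

```rust
// Hypothetical sketch: detect merge-conflict markers on any line,
// not only the first, by splitting the content on '\n' first.
fn contains_conflict_marker(content: &str) -> bool {
    content.split('\n').any(|line| {
        line.starts_with("<<<<<<< ")
            || line.starts_with(">>>>>>> ")
            || line == "======="
    })
}
```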
Reviewed By: farnz
Differential Revision: D10447393
fbshipit-source-id: a2091f6bc43e8bb9a77c63536e749432d524bbff
Summary:
This hook is designed to prevent text directives in .gitattributes
from making it into the repo.
As noted in the integration test, our regex may be too loose,
but it's probably OK, in practice.
For better or worse, for now, we're just trying to maintain the
behavior of the existing hook (though perhaps the existing hook
would have been a bit stricter if it weren't written in Bash).
For easy reference, here are the Git docs on gitattributes:
https://git-scm.com/docs/gitattributes/
Reviewed By: StanislavGlebik
Differential Revision: D10387336
fbshipit-source-id: c58f689ecc0648c2cc359a818c92d701258e8f46
Summary:
We want to deny landing files whose path contains magic strings. Add a
hook to do this, with some predefined examples of how to write patterns
Reviewed By: StanislavGlebik
Differential Revision: D10446531
fbshipit-source-id: 67f1a712d923345288c8d0a4f3e5da1e8f4e29f8
Summary:
Using multiple Runtimes might be a cause of problems in the future, and even if it isn't, it will be a cause for investigating whether it is a problem or not.
The issue I have in mind: if someone runs a future on one runtime that calls `tokio::spawn` (e.g. to schedule a job that works forever), but then uses a different runtime to drive another future to completion, they might not suspect that the previous spawn was already lost with the previous Runtime.
Reviewed By: farnz
Differential Revision: D10446122
fbshipit-source-id: 4bfd2a04487a70355a26f821e6348f5223901c0d
Summary:
Previously buffered() wasn't particularly useful because it buffered only the
mapping from ChangesetId to HgChangesetId. The actual running of hooks was done in
`.and_then()`, which means that each future in the stream had to finish
before the next one started.
Let's put the running of hooks inside the buffer; that helps with perf a lot.
Reviewed By: jsgf
Differential Revision: D10359546
fbshipit-source-id: 48b8b200d7397eef8622c32cad9cec889b96f9d0