2017-07-28 04:00:19 +03:00
#!/bin/bash
2019-10-11 23:51:17 +03:00
# Copyright (c) Facebook, Inc. and its affiliates.
2019-05-30 17:10:49 +03:00
#
# This software may be used and distributed according to the terms of the
2019-10-11 23:51:17 +03:00
# GNU General Public License found in the LICENSE file in the root
# directory of this source tree.
2019-05-30 17:10:49 +03:00
2017-07-28 04:00:19 +03:00
# Library routines and initial setup for Mononoke-related tests.
2019-06-18 17:49:07 +03:00
if [ [ -n " $DB_SHARD_NAME " ] ] ; then
mononoke: don't require cachelib to talk to a remote DB
Summary:
Currently, we implicitly expect that caching is enabled if we're dealing with a remote repository, but that means cachelib must be enabled when running with a remote repository, and that is ... slow.
This can be problematic in two cases:
In tests. It makes MySQL tests unbearably slow, and a little more flaky because we end up using so much CPU. With this patch, MySQL tests remain slower than SQLite tests, but by a factor of < 2, which is a pretty substantial improvement.
Running trivial administrative commands (e.g. a `mononoke_admin`), notably using a dev build (which right now unbearably slow). With this patch, such a trivial command is about 6x faster:
```
[torozco@devvm4998.lla1 ~/fbcode] time buck-out/gen/scm/mononoke/admin#binary/admin --repo-id 2102 --mononoke-config-path /home/torozco/local/.mononoke_exec/config/PROD --skip-caching bookmarks list --kind publishing
Jun 21 08:57:36.658 INFO using repo "instagram-server" repoid RepositoryId(2102)
master c96ac4654e4d2da45a9597af859adeac9dba3d7ca964cb42e5c96bc153f185e3 2c5713ad27262b91bf1dfaf21b3cf34fe3926c8d
real 0m5.299s
user 0m5.097s
sys 0m0.699s
[torozco@devvm4998.lla1 ~/fbcode] time buck-out/gen/scm/mononoke/admin#binary/admin --repo-id 2102 --mononoke-config-path /home/torozco/local/.mononoke_exec/config/PROD bookmarks list --kind publishing
I0621 08:57:59.299988 1181997 CacheAllocator-inl.h:3123] Started worker 'PoolRebalancer'
Jun 21 08:57:59.328 INFO using repo "instagram-server" repoid RepositoryId(2102)
master c96ac4654e4d2da45a9597af859adeac9dba3d7ca964cb42e5c96bc153f185e3 2c5713ad27262b91bf1dfaf21b3cf34fe3926c8d
real 0m28.620s
user 0m27.680s
sys 0m2.466s
```
This is also nice because it means the MySQL tests won't talk to Memcache anymore.
---
Note: in this refactor, I made `Caching` an enum so it can't accidentally be swapped with some other boolean.
---
Finally, it also uses up quite a bit less RAM (we no longer need 2GB of RAM to output one line of bookmarks — although we're still using quite a bit!):
```
[torozco@devvm4998.lla1 ~/fbcode] env time buck-out/gen/scm/mononoke/admin#binary/admin --skip-caching --repo-id 2102 --mononoke-config-path /home/torozco/local/.mononoke_exec/config/PROD bookmarks list --kind publishing
Jun 21 09:18:36.074 INFO using repo "instagram-server" repoid RepositoryId(2102)
master abdd2f78dafeaa8d4b96897955a63844b31324f9d89176b3a62088d0e2ae2b22 1702392d14bf7a332bf081518cb1ea3c83a13c39
5.08user 0.68system 0:05.28elapsed 109%CPU (0avgtext+0avgdata 728024maxresident)k
6776inputs+0outputs (8major+115477minor)pagefaults 0swaps
[torozco@devvm4998.lla1 ~/fbcode] env time buck-out/gen/scm/mononoke/admin#binary/admin --repo-id 2102 --mononoke-config-path /home/torozco/local/.mononoke_exec/config/PROD bookmarks list --kind publishing
I0621 09:19:01.385933 1244489 CacheAllocator-inl.h:3123] Started worker 'PoolRebalancer'
Jun 21 09:19:01.412 INFO using repo "instagram-server" repoid RepositoryId(2102)
master abdd2f78dafeaa8d4b96897955a63844b31324f9d89176b3a62088d0e2ae2b22 1702392d14bf7a332bf081518cb1ea3c83a13c39
26.96user 2.27system 0:27.93elapsed 104%CPU (0avgtext+0avgdata 2317716maxresident)k
11416inputs+5384outputs (17major+605118minor)pagefaults 0swaps
```
Reviewed By: farnz
Differential Revision: D15941644
fbshipit-source-id: 0df4a74ccd0220a786ebf0e883e1a9b8aab0a647
2019-06-24 16:03:31 +03:00
MONONOKE_DEFAULT_START_TIMEOUT = 60
2019-06-18 17:49:07 +03:00
else
mononoke: rebuild test framework
Summary:
Our test framework as it stands right now is a light passthrough to the hg `run-tests.py` test framework, which attempts to place all the files it needs to run (including tests) into a `python_binary`, then runs the hg test runner from that directory.
It heavily relies on how Buck works to offer functionality:
- It expects that all the sources it registers for its master binary will all be in the same directory when it builds
- It expects that the sources will be symlinks to the real files so that `--interactive` can work.
This has a few problems:
- It doesn't work in `mode/opt`. The archive that gets built in `mode/opt` doesn't actually have all the sources we registered, so it's impossible to run tests.
- To add a new test, you must rebuild everything. We don't do that very often, but it'd be nice if we didn't have to.
- Iterating on the runner itself is painful, because as far as Buck is concerned, it depends on the entire world. This means that every change to the runner has to scan a lot more stuff than necessary. There's some functionality I'd like to get into the runner (like reporting test timings) that hasn't been easy to add as a result.
This diff attempts to solve these problems by separating concerns a little more:
- The runner is now just a simple `python_binary`, so it's easier to make changes to it.
- The runner now provides the logic of working from local files when needed (this means you can add a new test and it'll work immediately),
- All the binaries we need are dependencies of the integration test target, not the runner's. However, to make it possible to run the runner incrementally while iterating on something, there's a manifest target that points at all the various paths the runner needs to work. This will also help integrate the test runner with other build frameworks if necessary (e.g. for open-sourcing).
- We have separate targets for various assets we need to run the tests (e.g. the hg test framework).
- The runner now controls whether to use the network blackhole. This was necessary because the network blackhole breaks PAR archives (because tmp is no longer owned by the right owner, because we use a user namespace). We should be able to bring this back at some point if we want to by using a proper chroot for opt tests.
I included a README to explain this new design as well.
There are some things that could yet stand to be improved here (notably, I think we should put assets and tests in different directories for the sake of clarity), but so far I've been aiming at providing a 1-1 translation of the old system into the new one. I am planning to make further improvements in followup diffs.
Reviewed By: farnz
Differential Revision: D15921732
fbshipit-source-id: 09052591c419acf97f7e360b1e88ef1f412da6e5
2019-06-25 18:30:05 +03:00
MONONOKE_DEFAULT_START_TIMEOUT = 15
2019-06-18 17:49:07 +03:00
fi
2019-09-23 17:14:55 +03:00
REPOID = 0
REPONAME = repo
2019-12-17 14:55:40 +03:00
COMMON_ARGS = ( --skip-caching --mysql-master-only)
mononoke: rebuild test framework
Summary:
Our test framework as it stands right now is a light passthrough to the hg `run-tests.py` test framework, which attempts to place all the files it needs to run (including tests) into a `python_binary`, then runs the hg test runner from that directory.
It heavily relies on how Buck works to offer functionality:
- It expects that all the sources it registers for its master binary will all be in the same directory when it builds
- It expects that the sources will be symlinks to the real files so that `--interactive` can work.
This has a few problems:
- It doesn't work in `mode/opt`. The archive that gets built in `mode/opt` doesn't actually have all the sources we registered, so it's impossible to run tests.
- To add a new test, you must rebuild everything. We don't do that very often, but it'd be nice if we didn't have to.
- Iterating on the runner itself is painful, because as far as Buck is concerned, it depends on the entire world. This means that every change to the runner has to scan a lot more stuff than necessary. There's some functionality I'd like to get into the runner (like reporting test timings) that hasn't been easy to add as a result.
This diff attempts to solve these problems by separating concerns a little more:
- The runner is now just a simple `python_binary`, so it's easier to make changes to it.
- The runner now provides the logic of working from local files when needed (this means you can add a new test and it'll work immediately),
- All the binaries we need are dependencies of the integration test target, not the runner's. However, to make it possible to run the runner incrementally while iterating on something, there's a manifest target that points at all the various paths the runner needs to work. This will also help integrate the test runner with other build frameworks if necessary (e.g. for open-sourcing).
- We have separate targets for various assets we need to run the tests (e.g. the hg test framework).
- The runner now controls whether to use the network blackhole. This was necessary because the network blackhole breaks PAR archives (because tmp is no longer owned by the right owner, because we use a user namespace). We should be able to bring this back at some point if we want to by using a proper chroot for opt tests.
I included a README to explain this new design as well.
There are some things that could yet stand to be improved here (notably, I think we should put assets and tests in different directories for the sake of clarity), but so far I've been aiming at providing a 1-1 translation of the old system into the new one. I am planning to make further improvements in followup diffs.
Reviewed By: farnz
Differential Revision: D15921732
fbshipit-source-id: 09052591c419acf97f7e360b1e88ef1f412da6e5
2019-06-25 18:30:05 +03:00
TEST_CERTDIR = " ${ HGTEST_CERTDIR :- " $TEST_CERTS " } "
2019-05-28 13:12:27 +03:00
2018-05-10 11:20:26 +03:00
function get_free_socket {
# From https://unix.stackexchange.com/questions/55913/whats-the-easiest-way-to-find-an-unused-local-port
2018-10-09 02:00:11 +03:00
cat > " $TESTTMP /get_free_socket.py " <<EOF
2018-05-10 11:20:26 +03:00
import socket
s = socket.socket( socket.AF_INET, socket.SOCK_STREAM)
s.bind( ( '' , 0) )
addr = s.getsockname( )
print( addr[ 1] )
s.close( )
EOF
python " $TESTTMP /get_free_socket.py "
}
2018-11-20 16:13:59 +03:00
# return random value from [1, max_value]
function random_int( ) {
max_value = $1
VAL = $RANDOM
( ( VAL %= $max_value ) )
( ( VAL += 1 ) )
echo $VAL
}
2018-07-17 02:56:46 +03:00
function sslcurl {
2019-05-28 13:12:27 +03:00
curl --cert " $TEST_CERTDIR /localhost.crt " --cacert " $TEST_CERTDIR /root-ca.crt " --key " $TEST_CERTDIR /localhost.key " " $@ "
2018-07-17 02:56:46 +03:00
}
2019-11-19 19:29:09 +03:00
function ssldebuglfssend {
hg --config extensions.lfs= --config hostsecurity.localhost:verifycertsfile= " $TEST_CERTDIR /root-ca.crt " \
--config auth.lfs.cert= " $TEST_CERTDIR /localhost.crt " \
--config auth.lfs.key= " $TEST_CERTDIR /localhost.key " \
--config auth.lfs.schemes= https \
--config auth.lfs.prefix= localhost debuglfssend " $@ "
}
2017-07-28 04:00:19 +03:00
function mononoke {
2019-05-20 23:28:48 +03:00
# Ignore specific Python warnings to make tests predictable.
2018-05-10 11:20:26 +03:00
export MONONOKE_SOCKET
MONONOKE_SOCKET = $( get_free_socket)
2019-11-14 19:34:45 +03:00
2019-05-20 23:28:48 +03:00
PYTHONWARNINGS = "ignore:::requests" \
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MONONOKE_SERVER " " $@ " \
--ca-pem " $TEST_CERTDIR /root-ca.crt " \
2019-05-28 13:12:27 +03:00
--private-key " $TEST_CERTDIR /localhost.key " \
--cert " $TEST_CERTDIR /localhost.crt " \
--ssl-ticket-seeds " $TEST_CERTDIR /server.pem.seeds " \
2018-06-30 00:36:49 +03:00
--debug \
2019-07-12 17:19:06 +03:00
--test-instance \
2019-01-03 19:43:45 +03:00
--listening-host-port " [::1]: $MONONOKE_SOCKET " \
2018-12-04 14:50:22 +03:00
-P " $TESTTMP /mononoke-config " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " >> " $TESTTMP /mononoke.out " 2>& 1 &
2019-01-03 19:43:45 +03:00
export MONONOKE_PID = $!
echo " $MONONOKE_PID " >> " $DAEMON_PIDS "
2017-07-28 04:00:19 +03:00
}
2019-02-21 16:43:59 +03:00
function mononoke_hg_sync {
2019-09-24 15:28:28 +03:00
HG_REPO = " $1 "
shift
START_ID = " $1 "
shift
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MONONOKE_HG_SYNC " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2019-04-05 12:54:10 +03:00
--retry-num 1 \
2019-09-23 17:14:55 +03:00
--repo-id $REPOID \
2019-04-05 12:54:10 +03:00
--mononoke-config-path mononoke-config \
2019-05-25 02:19:14 +03:00
--verify-server-bookmark-on-failure \
2019-09-24 15:28:28 +03:00
ssh://user@dummy/" $HG_REPO " " $@ " sync-once --start-id " $START_ID "
2019-04-05 12:54:10 +03:00
}
2019-09-06 15:27:05 +03:00
function megarepo_tool {
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MEGAREPO_TOOL " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2019-09-23 17:14:55 +03:00
--repo-id $REPOID \
2019-09-06 15:27:05 +03:00
--mononoke-config-path mononoke-config \
" $@ "
}
2019-11-11 22:12:21 +03:00
function megarepo_tool_multirepo {
GLOG_minloglevel = 5 " $MEGAREPO_TOOL " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2019-11-11 22:12:21 +03:00
--mononoke-config-path mononoke-config \
" $@ "
2019-12-10 14:59:21 +03:00
}
function mononoke_walker {
GLOG_minloglevel = 5 " $MONONOKE_WALKER " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2019-12-10 14:59:21 +03:00
--repo-id $REPOID \
--mononoke-config-path mononoke-config \
" $@ "
2019-11-11 22:12:21 +03:00
}
2019-09-27 00:45:16 +03:00
function mononoke_x_repo_sync_once( ) {
source_repo_id = $1
target_repo_id = $2
target_bookmark = $3
shift
shift
shift
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MONONOKE_X_REPO_SYNC " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2020-01-13 16:25:26 +03:00
--mononoke-config-path " $TESTTMP /mononoke-config " \
2019-09-27 00:45:16 +03:00
--source-repo-id " $source_repo_id " \
--target-repo-id " $target_repo_id " \
--target-bookmark " $target_bookmark " \
" $@ "
}
2020-01-17 15:56:33 +03:00
function new_mononoke_x_repo_sync_once( ) {
source_repo_id = $1
target_repo_id = $2
shift
shift
GLOG_minloglevel = 5 " $NEW_MONONOKE_X_REPO_SYNC " \
" ${ COMMON_ARGS [@] } " \
--mononoke-config-path " $TESTTMP /mononoke-config " \
--source-repo-id " $source_repo_id " \
--target-repo-id " $target_repo_id " \
" $@ "
}
2019-08-16 19:07:36 +03:00
function mononoke_rechunker {
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MONONOKE_RECHUNKER " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2019-09-23 17:14:55 +03:00
--repo-id $REPOID \
2019-08-16 19:07:36 +03:00
--mononoke-config-path mononoke-config \
" $@ "
}
2019-04-05 12:54:10 +03:00
function mononoke_hg_sync_with_retry {
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MONONOKE_HG_SYNC " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2019-04-05 12:54:10 +03:00
--base-retry-delay-ms 1 \
2019-09-23 17:14:55 +03:00
--repo-id $REPOID \
2019-02-26 21:50:12 +03:00
--mononoke-config-path mononoke-config \
2019-05-25 02:19:14 +03:00
--verify-server-bookmark-on-failure \
2019-02-26 21:50:12 +03:00
ssh://user@dummy/" $1 " sync-once --start-id " $2 "
2019-02-21 16:43:59 +03:00
}
2019-03-18 01:04:13 +03:00
function mononoke_hg_sync_with_failure_handler {
2019-07-09 12:42:47 +03:00
sql_name = " ${ TESTTMP } /hgrepos/repo_lock "
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MONONOKE_HG_SYNC " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2019-04-05 12:54:10 +03:00
--retry-num 1 \
2019-09-23 17:14:55 +03:00
--repo-id $REPOID \
2019-03-18 01:04:13 +03:00
--mononoke-config-path mononoke-config \
2019-05-25 02:19:14 +03:00
--verify-server-bookmark-on-failure \
2019-07-09 12:42:47 +03:00
--lock-on-failure \
--repo-lock-sqlite \
--repo-lock-db-address " $sql_name " \
2019-03-18 01:04:13 +03:00
ssh://user@dummy/" $1 " sync-once --start-id " $2 "
}
2019-02-27 17:59:02 +03:00
2019-07-09 12:42:47 +03:00
function create_repo_lock_sqlite3_db {
cat >> " $TESTTMP " /repo_lock.sql <<SQL
CREATE TABLE repo_lock (
repo VARCHAR( 255) PRIMARY KEY,
state INTEGER NOT NULL,
reason VARCHAR( 255)
) ;
SQL
mkdir -p " $TESTTMP " /hgrepos
2019-07-09 20:30:49 +03:00
sqlite3 " $TESTTMP /hgrepos/repo_lock " < " $TESTTMP " /repo_lock.sql
2019-07-09 12:42:47 +03:00
}
function init_repo_lock_sqlite3_db {
# State 2 is mononoke write
sqlite3 " $TESTTMP /hgrepos/repo_lock " \
"insert into repo_lock (repo, state, reason) values(CAST('repo' AS BLOB), 2, null)" ;
}
2019-06-24 19:02:15 +03:00
function mononoke_bookmarks_filler {
local sql_source sql_name
if [ [ -n " $DB_SHARD_NAME " ] ] ; then
sql_source = "xdb"
sql_name = " $DB_SHARD_NAME "
else
sql_source = "sqlite"
sql_name = " ${ TESTTMP } /replaybookmarksqueue "
fi
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MONONOKE_BOOKMARKS_FILLER " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2019-09-23 17:14:55 +03:00
--repo-id $REPOID \
2019-06-24 19:02:15 +03:00
--mononoke-config-path mononoke-config \
" $@ " " $sql_source " " $sql_name "
}
2019-02-27 17:59:02 +03:00
function create_mutable_counters_sqlite3_db {
cat >> " $TESTTMP " /mutable_counters.sql <<SQL
CREATE TABLE mutable_counters (
repo_id INT UNSIGNED NOT NULL,
name VARCHAR( 512) NOT NULL,
value BIGINT NOT NULL,
PRIMARY KEY ( repo_id, name)
) ;
SQL
2019-11-07 19:13:38 +03:00
sqlite3 " $TESTTMP /monsql/sqlite_dbs " < " $TESTTMP " /mutable_counters.sql
2019-02-27 17:59:02 +03:00
}
function init_mutable_counters_sqlite3_db {
2019-11-07 19:13:38 +03:00
sqlite3 " $TESTTMP /monsql/sqlite_dbs " \
2019-02-27 17:59:02 +03:00
"insert into mutable_counters (repo_id, name, value) values(0, 'latest-replayed-request', 0)" ;
}
2019-04-29 17:22:37 +03:00
function create_books_sqlite3_db {
2019-05-09 19:55:04 +03:00
cat >> " $TESTTMP " /bookmarks.sql <<SQL
2019-04-29 17:22:37 +03:00
CREATE TABLE bookmarks_update_log (
id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
repo_id INT UNSIGNED NOT NULL,
name VARCHAR( 512) NOT NULL,
from_changeset_id VARBINARY( 32) ,
to_changeset_id VARBINARY( 32) ,
reason VARCHAR( 32) NOT NULL, -- enum is used in mysql
timestamp BIGINT NOT NULL
) ;
SQL
2019-11-07 19:13:38 +03:00
sqlite3 " $TESTTMP /monsql/sqlite_dbs " < " $TESTTMP " /bookmarks.sql
2019-04-29 17:22:37 +03:00
}
2019-02-27 17:59:02 +03:00
function mononoke_hg_sync_loop {
2019-04-30 18:20:34 +03:00
local repo = " $1 "
local start_id = " $2 "
shift
shift
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MONONOKE_HG_SYNC " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2019-04-05 12:54:10 +03:00
--retry-num 1 \
2019-09-23 17:14:55 +03:00
--repo-id $REPOID \
2019-02-27 17:59:02 +03:00
--mononoke-config-path mononoke-config \
2019-04-30 18:20:34 +03:00
ssh://user@dummy/" $repo " sync-loop --start-id " $start_id " " $@ "
2019-02-27 17:59:02 +03:00
}
2019-09-25 20:26:54 +03:00
function mononoke_hg_sync_loop_regenerate {
local repo = " $1 "
local start_id = " $2 "
shift
shift
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MONONOKE_HG_SYNC " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2019-09-25 20:26:54 +03:00
--retry-num 1 \
--repo-id 0 \
--mononoke-config-path mononoke-config \
ssh://user@dummy/" $repo " --generate-bundles sync-loop --start-id " $start_id " " $@ "
}
2019-04-29 17:22:37 +03:00
function mononoke_admin {
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MONONOKE_ADMIN " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2019-09-23 17:14:55 +03:00
--repo-id $REPOID \
2019-04-29 17:22:37 +03:00
--mononoke-config-path " $TESTTMP " /mononoke-config " $@ "
}
2019-12-13 00:07:34 +03:00
function mononoke_admin_source_target {
local source_repo_id = $1
shift
local target_repo_id = $1
shift
GLOG_minloglevel = 5 " $MONONOKE_ADMIN " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2019-12-13 00:07:34 +03:00
--source-repo-id " $source_repo_id " \
--target-repo-id " $target_repo_id " \
--mononoke-config-path " $TESTTMP " /mononoke-config " $@ "
}
2019-11-25 16:09:58 +03:00
function mononoke_admin_sourcerepo {
GLOG_minloglevel = 5 " $MONONOKE_ADMIN " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2019-11-25 16:09:58 +03:00
--source-repo-id $REPOID \
--mononoke-config-path " $TESTTMP " /mononoke-config " $@ "
}
2019-04-29 17:22:37 +03:00
function write_stub_log_entry {
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $WRITE_STUB_LOG_ENTRY " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2019-09-23 17:14:55 +03:00
--repo-id $REPOID \
2019-04-29 17:22:37 +03:00
--mononoke-config-path " $TESTTMP " /mononoke-config --bookmark master_bookmark " $@ "
}
2019-12-11 16:55:12 +03:00
# Remove the glog prefix
function strip_glog {
# based on https://our.internmc.facebook.com/intern/wiki/LogKnock/Log_formats/#regex-for-glog
sed -E -e 's%^[VDIWECF][[:digit:]]{4} [[:digit:]]{2}:?[[:digit:]]{2}:?[[:digit:]]{2}(\.[[:digit:]]+)?\s+(([0-9a-f]+)\s+)?(\[([^]]+)\]\s+)?(\(([^\)]+)\)\s+)?(([a-zA-Z0-9_./-]+):([[:digit:]]+))\]\s+%%'
}
2019-12-13 00:46:40 +03:00
function wait_for_json_record_count {
# We ask jq to count records for us, so that we're a little more robust ot
# newlines and such.
local file count
file = " $1 "
count = " $2 "
for _ in $( seq 1 50) ; do
if [ [ " $( jq 'true' < " $file " | wc -l) " -eq " $count " ] ] ; then
return 0
fi
2019-11-25 20:11:55 +03:00
2019-12-13 00:46:40 +03:00
sleep 0.1
done
2019-11-25 20:11:55 +03:00
2019-12-13 00:46:40 +03:00
echo " File $file did not contain $count records " >& 2
jq -S . < " $file " >& 2
return 1
2019-11-25 20:11:55 +03:00
}
2018-01-02 04:38:31 +03:00
# Wait until a Mononoke server is available for this repo.
function wait_for_mononoke {
2019-02-01 14:48:43 +03:00
# MONONOKE_START_TIMEOUT is set in seconds
# Number of attempts is timeout multiplied by 10, since we
# sleep every 0.1 seconds.
2019-06-18 17:49:07 +03:00
local attempts timeout
timeout = " ${ MONONOKE_START_TIMEOUT :- " $MONONOKE_DEFAULT_START_TIMEOUT " } "
attempts = " $(( timeout * 10 )) "
2018-06-30 00:36:49 +03:00
2018-07-17 02:56:46 +03:00
SSLCURL = " sslcurl --noproxy localhost \
2018-06-30 00:36:49 +03:00
https://localhost:$MONONOKE_SOCKET "
2018-05-10 11:20:26 +03:00
for _ in $( seq 1 $attempts ) ; do
2018-06-30 00:36:49 +03:00
$SSLCURL 2>& 1 | grep -q 'Empty reply' && break
2018-01-02 04:38:31 +03:00
sleep 0.1
done
2018-05-10 11:20:26 +03:00
2018-06-30 00:36:49 +03:00
if ! $SSLCURL 2>& 1 | grep -q 'Empty reply' ; then
2018-03-21 19:55:19 +03:00
echo "Mononoke did not start" >& 2
2018-05-10 11:20:26 +03:00
cat " $TESTTMP /mononoke.out "
2018-03-21 19:55:19 +03:00
exit 1
fi
2018-01-02 04:38:31 +03:00
}
2018-06-25 17:25:53 +03:00
# Wait until cache warmup finishes
function wait_for_mononoke_cache_warmup {
local attempts = 150
for _ in $( seq 1 $attempts ) ; do
grep -q "finished initial warmup" " $TESTTMP /mononoke.out " && break
sleep 0.1
done
if ! grep -q "finished initial warmup" " $TESTTMP /mononoke.out " ; then
echo "Mononoke warmup did not finished" >& 2
cat " $TESTTMP /mononoke.out "
exit 1
fi
}
2018-10-09 12:05:20 +03:00
function setup_common_hg_configs {
2018-02-14 17:34:10 +03:00
cat >> " $HGRCPATH " <<EOF
[ ui]
ssh = " $DUMMYSSH "
[ extensions]
remotefilelog =
[ remotefilelog]
cachepath = $TESTTMP /cachepath
EOF
}
2018-10-09 12:05:20 +03:00
function setup_common_config {
2018-12-04 14:50:22 +03:00
setup_mononoke_config " $@ "
2018-10-09 12:05:20 +03:00
setup_common_hg_configs
}
2019-02-06 01:52:00 +03:00
function create_pushrebaserecording_sqlite3_db {
cat >> " $TESTTMP " /pushrebaserecording.sql <<SQL
CREATE TABLE pushrebaserecording (
id bigint( 20) NOT NULL,
repo_id int( 10) NOT NULL,
ontorev binary( 40) NOT NULL,
onto varchar( 512) NOT NULL,
onto_rebased_rev binary( 40) ,
conflicts longtext,
pushrebase_errmsg varchar( 1024) DEFAULT NULL,
upload_errmsg varchar( 1024) DEFAULT NULL,
bundlehandle varchar( 1024) DEFAULT NULL,
timestamps longtext NOT NULL,
recorded_manifest_hashes longtext NOT NULL,
real_manifest_hashes longtext NOT NULL,
duration_ms int( 10) DEFAULT NULL,
replacements_revs varchar( 1024) DEFAULT NULL,
ordered_added_revs varchar( 1024) DEFAULT NULL,
PRIMARY KEY ( id)
) ;
SQL
2019-07-09 20:30:49 +03:00
sqlite3 " $TESTTMP " /pushrebaserecording < " $TESTTMP " /pushrebaserecording.sql
2019-02-06 01:52:00 +03:00
}
2019-02-21 16:43:59 +03:00
function init_pushrebaserecording_sqlite3_db {
sqlite3 " $TESTTMP /pushrebaserecording " \
" insert into pushrebaserecording \
( id, repo_id, bundlehandle, ontorev, onto, timestamps, recorded_manifest_hashes, real_manifest_hashes) \
values( 1, 0, 'handle' , 'add0c792bfce89610d277fd5b1e32f5287994d1d' , 'master_bookmark' , '' , '' , '' ) " ;
}
function init_bookmark_log_sqlite3_db {
2019-11-07 19:13:38 +03:00
sqlite3 " $TESTTMP /monsql/sqlite_dbs " \
2019-02-21 16:43:59 +03:00
" insert into bookmarks_update_log \
( repo_id, name, from_changeset_id, to_changeset_id, reason, timestamp) \
values( 0, 'master_bookmark' , NULL, X'04C1EA537B01FFF207445E043E310807F9059572DD3087A0FCE458DEC005E4BD' , 'pushrebase' , 0) " ;
2019-11-07 19:13:38 +03:00
sqlite3 " $TESTTMP /monsql/sqlite_dbs " "select * from bookmarks_update_log" ;
2019-02-21 16:43:59 +03:00
}
2019-10-30 20:42:39 +03:00
function get_bonsai_globalrev_mapping {
2019-11-07 19:13:38 +03:00
sqlite3 " $TESTTMP /monsql/sqlite_dbs " "select hex(bcs_id), globalrev from bonsai_globalrev_mapping order by globalrev" ;
2019-10-30 20:42:39 +03:00
}
2018-12-04 14:50:22 +03:00
function setup_mononoke_config {
cd " $TESTTMP " || exit
2019-09-23 17:14:55 +03:00
mkdir -p mononoke-config
2019-12-09 15:01:53 +03:00
REPOTYPE = "blob_rocks"
2018-10-09 02:00:11 +03:00
if [ [ $# -gt 0 ] ] ; then
REPOTYPE = " $1 "
2019-12-10 14:59:21 +03:00
shift
fi
local blobstorename = blobstore
if [ [ $# -gt 0 ] ] ; then
blobstorename = " $1 "
shift
2018-10-09 02:00:11 +03:00
fi
2019-05-23 23:19:31 +03:00
if [ [ ! -e " $TESTTMP /mononoke_hgcli " ] ] ; then
cat >> " $TESTTMP /mononoke_hgcli " <<EOF
2019-06-05 16:15:38 +03:00
#! /bin/sh
2019-05-23 23:19:31 +03:00
" $MONONOKE_HGCLI " --no-session-output "\$@"
EOF
chmod a+x " $TESTTMP /mononoke_hgcli "
MONONOKE_HGCLI = " $TESTTMP /mononoke_hgcli "
fi
2019-05-02 15:17:07 +03:00
ALLOWED_USERNAME = " ${ ALLOWED_USERNAME :- myusername0 } "
2019-11-14 19:34:45 +03:00
cd mononoke-config || exit 1
2019-05-02 15:17:07 +03:00
mkdir -p common
2019-09-10 17:21:56 +03:00
touch common/commitsyncmap.toml
2019-05-02 15:17:07 +03:00
cat > common/common.toml <<CONFIG
[ [ whitelist_entry] ]
identity_type = "USER"
identity_data = " $ALLOWED_USERNAME "
CONFIG
2019-12-10 14:59:21 +03:00
echo "# Start new config" > common/storage.toml
setup_mononoke_storage_config " $REPOTYPE " " $blobstorename "
setup_mononoke_repo_config " $REPONAME " " $blobstorename "
}
function db_config( ) {
local blobstorename = " $1 "
if [ [ -n " $DB_SHARD_NAME " ] ] ; then
echo " [ $blobstorename .db.remote] "
echo " db_address=\" $DB_SHARD_NAME \" "
else
echo " [ $blobstorename .db.local] "
echo " local_db_path=\" $TESTTMP /monsql\" "
fi
}
function setup_mononoke_storage_config {
local underlyingstorage = " $1 "
local blobstorename = " $2 "
local blobstorepath = " $TESTTMP / $blobstorename "
if [ [ -v MULTIPLEXED ] ] ; then
mkdir -p " $blobstorepath /0 " " $blobstorepath /1 "
cat >> common/storage.toml <<CONFIG
$( db_config " $blobstorename " )
[ $blobstorename .blobstore.multiplexed]
components = [
{ blobstore_id = 0, blobstore = { $underlyingstorage = { path = " $blobstorepath /0 " } } } ,
{ blobstore_id = 1, blobstore = { $underlyingstorage = { path = " $blobstorepath /1 " } } } ,
]
CONFIG
else
mkdir -p " $blobstorepath "
cat >> common/storage.toml <<CONFIG
$( db_config " $blobstorename " )
[ $blobstorename .blobstore.$underlyingstorage ]
path = " $blobstorepath "
CONFIG
fi
2019-05-28 13:12:27 +03:00
}
2019-09-19 13:19:48 +03:00
function setup_commitsyncmap {
cp " $TEST_FIXTURES /commitsyncmap.toml " " $TESTTMP /mononoke-config/common/commitsyncmap.toml "
}
2019-11-14 19:34:45 +03:00
function setup_configerator_configs {
export LOADSHED_CONF
LOADSHED_CONF = " $TESTTMP /configerator/scm/mononoke/loadshedding "
mkdir -p " $LOADSHED_CONF "
cat >> " $LOADSHED_CONF /limits " <<EOF
{
"defaults" : {
"egress_bytes" : 1000000000000,
"ingress_blobstore_bytes" : 1000000000000,
"total_manifests" : 1000000000000,
"quicksand_manifests" : 1000000000,
"getfiles_files" : 1000000000,
"getpack_files" : 1000000000,
"commits" : 1000000000
} ,
"datacenter_prefix_capacity" : {
} ,
"hostprefixes" : {
} ,
"quicksand_multiplier" : 1.0,
"rate_limits" : {
}
}
EOF
export PUSHREDIRECT_CONF
PUSHREDIRECT_CONF = " $TESTTMP /configerator/scm/mononoke/pushredirect "
mkdir -p " $PUSHREDIRECT_CONF "
cat >> " $PUSHREDIRECT_CONF /enable " <<EOF
{
"per_repo" : { }
}
EOF
}
2019-05-28 13:12:27 +03:00
function setup_mononoke_repo_config {
cd " $TESTTMP /mononoke-config " || exit
local reponame = " $1 "
2019-12-10 14:59:21 +03:00
local storageconfig = " $2 "
2019-05-28 13:12:27 +03:00
mkdir -p " repos/ $reponame "
2019-09-23 17:14:55 +03:00
mkdir -p " $TESTTMP /monsql "
2019-05-29 11:27:22 +03:00
mkdir -p " $TESTTMP / $reponame "
2019-12-09 21:36:00 +03:00
mkdir -p " $TESTTMP /traffic-replay-blobstore "
2019-05-28 13:12:27 +03:00
cat > " repos/ $reponame /server.toml " <<CONFIG
2019-09-23 17:14:55 +03:00
repoid = $REPOID
enabled = ${ ENABLED :- true }
2019-01-17 13:26:01 +03:00
hash_validation_percentage = 100
mononoke: move storage configuration to a common file
Summary:
This change has two goals:
- Put storage configuration that's common to multiple repos in a common place,
rather than replicating it in each server.toml
- Allow tools that only operate on the blobstore level - like blobstore healing
- to be configured directly in terms of the blobstore, rather than indirectly
by using a representative repo config.
This change makes several changes to repo configuration:
1. There's a separate common/storage.toml which defines named storage
configurations (ie, a combination of a blobstore and metadata DB)
2. server.toml files can also define local storage configurations (mostly
useful for testing)
3. server.toml files now reference which storage they're using with
`storage_config = "name"`.
4. Configuration of multiplex blobstores is now explicit. Previously if a
server.toml defined multiple blobstores, it was assumed that it was a
multiplex. Now storage configuration only accepts a single blobstore config,
but that config can be explicitly a multiplexed blobstore, which has the
sub-blobstores defined within it, in the `components` field. (This is
recursive, so it could be nested, but I'm not sure if this has much value in
practice.)
5. Makes configuration parsing more strict - unknown fields will be treated as
an error rather than ignored. This helps flag problems in refactoring/updating
configs.
I've updated all the configs to the new format, both production and in
integration tests. Please review to make sure I haven't broken anything.
Reviewed By: StanislavGlebik
Differential Revision: D15065423
fbshipit-source-id: b7ce58e46e91877f4e15518c014496fb826fe03c
2019-05-09 19:55:04 +03:00
CONFIG
2019-11-14 11:23:44 +03:00
if [ [ ! -v NO_BOOKMARKS_CACHE ] ] ; then
cat >> " repos/ $reponame /server.toml " <<CONFIG
bookmarks_cache_ttl = 2000
CONFIG
fi
mononoke: move storage configuration to a common file
Summary:
This change has two goals:
- Put storage configuration that's common to multiple repos in a common place,
rather than replicating it in each server.toml
- Allow tools that only operate on the blobstore level - like blobstore healing
- to be configured directly in terms of the blobstore, rather than indirectly
by using a representative repo config.
This change makes several changes to repo configuration:
1. There's a separate common/storage.toml which defines named storage
configurations (ie, a combination of a blobstore and metadata DB)
2. server.toml files can also define local storage configurations (mostly
useful for testing)
3. server.toml files now reference which storage they're using with
`storage_config = "name"`.
4. Configuration of multiplex blobstores is now explicit. Previously if a
server.toml defined multiple blobstores, it was assumed that it was a
multiplex. Now storage configuration only accepts a single blobstore config,
but that config can be explicitly a multiplexed blobstore, which has the
sub-blobstores defined within it, in the `components` field. (This is
recursive, so it could be nested, but I'm not sure if this has much value in
practice.)
5. Makes configuration parsing more strict - unknown fields will be treated as
an error rather than ignored. This helps flag problems in refactoring/updating
configs.
I've updated all the configs to the new format, both production and in
integration tests. Please review to make sure I haven't broken anything.
Reviewed By: StanislavGlebik
Differential Revision: D15065423
fbshipit-source-id: b7ce58e46e91877f4e15518c014496fb826fe03c
2019-05-09 19:55:04 +03:00
if [ [ -v READ_ONLY_REPO ] ] ; then
2019-05-28 13:12:27 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
mononoke: move storage configuration to a common file
Summary:
This change has two goals:
- Put storage configuration that's common to multiple repos in a common place,
rather than replicating it in each server.toml
- Allow tools that only operate on the blobstore level - like blobstore healing
- to be configured directly in terms of the blobstore, rather than indirectly
by using a representative repo config.
This change makes several changes to repo configuration:
1. There's a separate common/storage.toml which defines named storage
configurations (ie, a combination of a blobstore and metadata DB)
2. server.toml files can also define local storage configurations (mostly
useful for testing)
3. server.toml files now reference which storage they're using with
`storage_config = "name"`.
4. Configuration of multiplex blobstores is now explicit. Previously if a
server.toml defined multiple blobstores, it was assumed that it was a
multiplex. Now storage configuration only accepts a single blobstore config,
but that config can be explicitly a multiplexed blobstore, which has the
sub-blobstores defined within it, in the `components` field. (This is
recursive, so it could be nested, but I'm not sure if this has much value in
practice.)
5. Makes configuration parsing more strict - unknown fields will be treated as
an error rather than ignored. This helps flag problems in refactoring/updating
configs.
I've updated all the configs to the new format, both production and in
integration tests. Please review to make sure I haven't broken anything.
Reviewed By: StanislavGlebik
Differential Revision: D15065423
fbshipit-source-id: b7ce58e46e91877f4e15518c014496fb826fe03c
2019-05-09 19:55:04 +03:00
readonly = true
CONFIG
fi
2019-12-10 14:59:21 +03:00
# Normally point to common storageconfig, but if none passed, create per-repo
if [ [ -z " $storageconfig " ] ] ; then
storageconfig = " blobstore_ $reponame "
setup_mononoke_storage_config " $REPOTYPE " " $storageconfig "
fi
cat >> " repos/ $reponame /server.toml " <<CONFIG
storage_config = " $storageconfig "
CONFIG
2019-08-16 19:07:36 +03:00
if [ [ -v FILESTORE ] ] ; then
cat >> " repos/ $reponame /server.toml " <<CONFIG
[ filestore]
chunk_size = ${ FILESTORE_CHUNK_SIZE :- 10 }
concurrency = 24
CONFIG
fi
2019-08-20 13:58:46 +03:00
if [ [ -v REDACTION_DISABLED ] ] ; then
2019-07-02 15:01:35 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
2019-08-20 13:58:46 +03:00
redaction = false
2019-07-02 15:01:35 +03:00
CONFIG
fi
2019-05-30 17:10:49 +03:00
if [ [ -v LIST_KEYS_PATTERNS_MAX ] ] ; then
2019-09-23 17:14:55 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
2019-05-30 17:10:49 +03:00
list_keys_patterns_max = $LIST_KEYS_PATTERNS_MAX
2018-11-08 15:07:24 +03:00
CONFIG
2019-05-10 06:50:27 +03:00
fi
2018-11-08 15:07:24 +03:00
2019-12-09 21:36:00 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
[ wireproto_logging]
storage_config = "traffic_replay_blobstore"
remote_arg_size_threshold = 0
[ storage.traffic_replay_blobstore.db.local]
local_db_path = " $TESTTMP /monsql "
[ storage.traffic_replay_blobstore.blobstore.blob_files]
path = " $TESTTMP /traffic-replay-blobstore "
CONFIG
2019-03-18 14:09:28 +03:00
if [ [ -v ONLY_FAST_FORWARD_BOOKMARK ] ] ; then
2019-05-28 13:12:27 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
2019-03-18 14:09:28 +03:00
[ [ bookmarks] ]
name = " $ONLY_FAST_FORWARD_BOOKMARK "
only_fast_forward = true
CONFIG
fi
if [ [ -v ONLY_FAST_FORWARD_BOOKMARK_REGEX ] ] ; then
2019-05-28 13:12:27 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
2019-03-18 14:09:28 +03:00
[ [ bookmarks] ]
regex = " $ONLY_FAST_FORWARD_BOOKMARK_REGEX "
only_fast_forward = true
CONFIG
2018-11-08 15:07:24 +03:00
fi
2018-08-31 18:41:24 +03:00
2019-05-28 13:12:27 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
2019-02-25 20:24:37 +03:00
[ pushrebase]
2019-04-04 17:13:54 +03:00
forbid_p2_root_rebases = false
2019-03-18 21:16:57 +03:00
CONFIG
if [ [ -v BLOCK_MERGES ] ] ; then
2019-05-28 13:12:27 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
2019-03-18 21:16:57 +03:00
block_merges = true
CONFIG
fi
if [ [ -v PUSHREBASE_REWRITE_DATES ] ] ; then
2019-05-28 13:12:27 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
2019-02-25 20:24:37 +03:00
rewritedates = true
CONFIG
else
2019-05-28 13:12:27 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
2018-08-31 18:41:24 +03:00
rewritedates = false
2018-06-30 06:19:49 +03:00
CONFIG
2019-04-23 13:48:08 +03:00
fi
if [ [ -v EMIT_OBSMARKERS ] ] ; then
2019-05-28 13:12:27 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
2019-04-23 13:48:08 +03:00
emit_obsmarkers = true
CONFIG
2019-02-25 20:24:37 +03:00
fi
2018-06-30 06:19:49 +03:00
2019-02-01 14:48:43 +03:00
if [ [ ! -v ENABLE_ACL_CHECKER ] ] ; then
2019-05-28 13:12:27 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
2019-01-04 17:14:03 +03:00
[ hook_manager_params]
disable_acl_checker = true
CONFIG
2019-02-01 14:48:43 +03:00
fi
2019-01-04 17:14:03 +03:00
2019-02-12 23:36:51 +03:00
if [ [ -v ENABLE_PRESERVE_BUNDLE2 ] ] ; then
2019-05-28 13:12:27 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
2019-02-12 23:36:51 +03:00
[ bundle2_replay_params]
preserve_raw_bundle2 = true
CONFIG
fi
2019-05-31 20:46:10 +03:00
if [ [ -v DISALLOW_NON_PUSHREBASE ] ] ; then
cat >> " repos/ $reponame /server.toml " <<CONFIG
[ push]
pure_push_allowed = false
CONFIG
fi
2018-07-04 17:03:34 +03:00
if [ [ -v CACHE_WARMUP_BOOKMARK ] ] ; then
2019-05-28 13:12:27 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
2018-07-04 17:03:34 +03:00
[ cache_warmup]
bookmark = " $CACHE_WARMUP_BOOKMARK "
CONFIG
2018-10-17 12:17:11 +03:00
fi
if [ [ -v LFS_THRESHOLD ] ] ; then
2019-05-28 13:12:27 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
2018-10-17 12:17:11 +03:00
[ lfs]
threshold = $LFS_THRESHOLD
CONFIG
2018-07-04 17:03:34 +03:00
fi
2019-05-30 17:10:49 +03:00
2019-07-02 20:31:16 +03:00
if [ [ -v INFINITEPUSH_ALLOW_WRITES ] ] || [ [ -v INFINITEPUSH_NAMESPACE_REGEX ] ] ; then
namespace = ""
if [ [ -v INFINITEPUSH_NAMESPACE_REGEX ] ] ; then
2019-11-28 17:32:44 +03:00
namespace = " namespace_pattern=\" $INFINITEPUSH_NAMESPACE_REGEX \" "
2019-07-02 20:31:16 +03:00
fi
2019-09-23 17:14:55 +03:00
cat >> " repos/ $reponame /server.toml " <<CONFIG
2019-05-30 17:10:49 +03:00
[ infinitepush]
2019-07-02 20:31:16 +03:00
allow_writes = ${ INFINITEPUSH_ALLOW_WRITES :- true }
${ namespace }
2019-05-30 17:10:49 +03:00
CONFIG
fi
2018-10-09 12:05:20 +03:00
}
2018-06-25 17:25:53 +03:00
2018-10-09 12:05:20 +03:00
function register_hook {
2018-11-13 20:34:53 +03:00
hook_name = " $1 "
path = " $2 "
hook_type = " $3 "
2018-10-11 15:41:02 +03:00
2018-11-13 20:34:53 +03:00
shift 3
2019-02-08 18:24:03 +03:00
EXTRA_CONFIG_DESCRIPTOR = ""
2018-10-11 15:41:02 +03:00
if [ [ $# -gt 0 ] ] ; then
2019-02-08 18:24:03 +03:00
EXTRA_CONFIG_DESCRIPTOR = " $1 "
2018-10-11 15:41:02 +03:00
fi
2019-02-08 18:24:03 +03:00
(
cat <<CONFIG
2018-10-09 12:05:20 +03:00
[ [ bookmarks.hooks] ]
hook_name = " $hook_name "
[ [ hooks] ]
name = " $hook_name "
path = " $path "
hook_type = " $hook_type "
CONFIG
2019-02-08 18:24:03 +03:00
[ -n " $EXTRA_CONFIG_DESCRIPTOR " ] && cat " $EXTRA_CONFIG_DESCRIPTOR "
2019-09-23 17:14:55 +03:00
) >> " repos/ $REPONAME /server.toml "
2018-10-09 12:05:20 +03:00
}
function blobimport {
2019-12-10 14:59:21 +03:00
local always_log =
if [ [ " $1 " = = "--log" ] ] ; then
always_log = 1
shift
fi
2018-12-11 20:45:43 +03:00
input = " $1 "
output = " $2 "
shift 2
2018-07-12 04:25:23 +03:00
mkdir -p " $output "
2019-09-23 17:14:55 +03:00
$MONONOKE_BLOBIMPORT --repo-id $REPOID \
2018-12-04 14:50:23 +03:00
--mononoke-config-path " $TESTTMP /mononoke-config " \
2019-12-17 14:55:40 +03:00
" $input " " ${ COMMON_ARGS [@] } " " $@ " > " $TESTTMP /blobimport.out " 2>& 1
2018-10-01 13:28:09 +03:00
BLOBIMPORT_RC = " $? "
if [ [ $BLOBIMPORT_RC -ne 0 ] ] ; then
cat " $TESTTMP /blobimport.out "
# set exit code, otherwise previous cat sets it to 0
return " $BLOBIMPORT_RC "
2019-12-10 14:59:21 +03:00
elif [ [ -n " $always_log " ] ] ; then
cat " $TESTTMP /blobimport.out "
2018-10-01 13:28:09 +03:00
fi
2018-03-20 21:42:47 +03:00
}
2018-08-16 09:31:22 +03:00
function bonsai_verify {
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MONONOKE_BONSAI_VERIFY " --repo-id " $REPOID " \
2019-12-17 14:55:40 +03:00
--mononoke-config-path " $TESTTMP /mononoke-config " " ${ COMMON_ARGS [@] } " " $@ "
2018-08-16 09:31:22 +03:00
}
2019-08-21 12:22:32 +03:00
function lfs_import {
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MONONOKE_LFS_IMPORT " --repo-id " $REPOID " \
2019-12-17 14:55:40 +03:00
--mononoke-config-path " $TESTTMP /mononoke-config " " ${ COMMON_ARGS [@] } " " $@ "
2019-08-21 12:22:32 +03:00
}
2018-11-22 21:25:53 +03:00
function setup_no_ssl_apiserver {
APISERVER_PORT = $( get_free_socket)
no_ssl_apiserver --http-host "127.0.0.1" --http-port " $APISERVER_PORT "
wait_for_apiserver --no-ssl
}
2020-01-09 17:18:18 +03:00
function s_client {
/usr/local/fbcode/platform007/bin/openssl s_client \
-connect localhost:$MONONOKE_SOCKET \
-CAfile " ${ TEST_CERTDIR } /root-ca.crt " \
-cert " ${ TEST_CERTDIR } /localhost.crt " \
-key " ${ TEST_CERTDIR } /localhost.key " \
-ign_eof " $@ "
}
2018-11-22 21:25:53 +03:00
2018-06-18 20:07:08 +03:00
function apiserver {
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MONONOKE_APISERVER " " $@ " \
--mononoke-config-path " $TESTTMP /mononoke-config " \
--without-skiplist \
2019-05-28 13:12:27 +03:00
--ssl-ca " $TEST_CERTDIR /root-ca.crt " \
--ssl-private-key " $TEST_CERTDIR /localhost.key " \
--ssl-certificate " $TEST_CERTDIR /localhost.crt " \
--ssl-ticket-seeds " $TEST_CERTDIR /server.pem.seeds " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " >> " $TESTTMP /apiserver.out " 2>& 1 &
2018-12-06 05:10:52 +03:00
export APISERVER_PID = $!
echo " $APISERVER_PID " >> " $DAEMON_PIDS "
2018-06-18 20:07:08 +03:00
}
2018-09-27 21:16:18 +03:00
function no_ssl_apiserver {
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MONONOKE_APISERVER " " $@ " \
mononoke/integration: disable blackhole for apiserver tests
Summary:
The network blackhole is causing the API server to occasionally hang while serving requests, which has broken some LFS tests. This appears to be have happened in the last month or so, but unfortunately, I haven't been able to root cause why this is happening.
From what I can tell, we have an hg client that tries an upload to the API Server, and uploads everything... and then the API server just hangs. If I kill the hg client, then the API server responds with a 400 (so it's not completely stuck), but otherwise it seems like the API server is waiting for something to happen on the client-side, but the client isn't sending that.
As far as I can tell, the API Server isn't actualy trying to make outbound requests (strace does report that it has a Scribe client that's trying to connect, but Scuba logging isn't enabled, and this is just trying to connect but not send anything), but something with the blackhole is causing this hg - API server interaciton to fail.
In the meantime, this diff disables the blackhole for those tests that definitely don't work when it's enabled ...
Reviewed By: HarveyHunt
Differential Revision: D16599929
fbshipit-source-id: c6d77c5428e206cd41d5466e20405264622158ab
2019-08-01 17:32:27 +03:00
--without-skiplist \
mononoke: don't require cachelib to talk to a remote DB
Summary:
Currently, we implicitly expect that caching is enabled if we're dealing with a remote repository, but that means cachelib must be enabled when running with a remote repository, and that is ... slow.
This can be problematic in two cases:
In tests. It makes MySQL tests unbearably slow, and a little more flaky because we end up using so much CPU. With this patch, MySQL tests remain slower than SQLite tests, but by a factor of < 2, which is a pretty substantial improvement.
Running trivial administrative commands (e.g. a `mononoke_admin`), notably using a dev build (which right now unbearably slow). With this patch, such a trivial command is about 6x faster:
```
[torozco@devvm4998.lla1 ~/fbcode] time buck-out/gen/scm/mononoke/admin#binary/admin --repo-id 2102 --mononoke-config-path /home/torozco/local/.mononoke_exec/config/PROD --skip-caching bookmarks list --kind publishing
Jun 21 08:57:36.658 INFO using repo "instagram-server" repoid RepositoryId(2102)
master c96ac4654e4d2da45a9597af859adeac9dba3d7ca964cb42e5c96bc153f185e3 2c5713ad27262b91bf1dfaf21b3cf34fe3926c8d
real 0m5.299s
user 0m5.097s
sys 0m0.699s
[torozco@devvm4998.lla1 ~/fbcode] time buck-out/gen/scm/mononoke/admin#binary/admin --repo-id 2102 --mononoke-config-path /home/torozco/local/.mononoke_exec/config/PROD bookmarks list --kind publishing
I0621 08:57:59.299988 1181997 CacheAllocator-inl.h:3123] Started worker 'PoolRebalancer'
Jun 21 08:57:59.328 INFO using repo "instagram-server" repoid RepositoryId(2102)
master c96ac4654e4d2da45a9597af859adeac9dba3d7ca964cb42e5c96bc153f185e3 2c5713ad27262b91bf1dfaf21b3cf34fe3926c8d
real 0m28.620s
user 0m27.680s
sys 0m2.466s
```
This is also nice because it means the MySQL tests won't talk to Memcache anymore.
---
Note: in this refactor, I made `Caching` an enum so it can't accidentally be swapped with some other boolean.
---
Finally, it also uses up quite a bit less RAM (we no longer need 2GB of RAM to output one line of bookmarks — although we're still using quite a bit!):
```
[torozco@devvm4998.lla1 ~/fbcode] env time buck-out/gen/scm/mononoke/admin#binary/admin --skip-caching --repo-id 2102 --mononoke-config-path /home/torozco/local/.mononoke_exec/config/PROD bookmarks list --kind publishing
Jun 21 09:18:36.074 INFO using repo "instagram-server" repoid RepositoryId(2102)
master abdd2f78dafeaa8d4b96897955a63844b31324f9d89176b3a62088d0e2ae2b22 1702392d14bf7a332bf081518cb1ea3c83a13c39
5.08user 0.68system 0:05.28elapsed 109%CPU (0avgtext+0avgdata 728024maxresident)k
6776inputs+0outputs (8major+115477minor)pagefaults 0swaps
[torozco@devvm4998.lla1 ~/fbcode] env time buck-out/gen/scm/mononoke/admin#binary/admin --repo-id 2102 --mononoke-config-path /home/torozco/local/.mononoke_exec/config/PROD bookmarks list --kind publishing
I0621 09:19:01.385933 1244489 CacheAllocator-inl.h:3123] Started worker 'PoolRebalancer'
Jun 21 09:19:01.412 INFO using repo "instagram-server" repoid RepositoryId(2102)
master abdd2f78dafeaa8d4b96897955a63844b31324f9d89176b3a62088d0e2ae2b22 1702392d14bf7a332bf081518cb1ea3c83a13c39
26.96user 2.27system 0:27.93elapsed 104%CPU (0avgtext+0avgdata 2317716maxresident)k
11416inputs+5384outputs (17major+605118minor)pagefaults 0swaps
```
Reviewed By: farnz
Differential Revision: D15941644
fbshipit-source-id: 0df4a74ccd0220a786ebf0e883e1a9b8aab0a647
2019-06-24 16:03:31 +03:00
--mononoke-config-path " $TESTTMP /mononoke-config " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " >> " $TESTTMP /apiserver.out " 2>& 1 &
2018-09-27 21:16:18 +03:00
echo $! >> " $DAEMON_PIDS "
}
2018-09-24 20:59:02 +03:00
function wait_for_apiserver {
for _ in $( seq 1 200) ; do
2018-12-03 13:12:11 +03:00
if [ [ -a " $TESTTMP /apiserver.out " ] ] ; then
2020-01-09 17:18:18 +03:00
PORT = $( grep "Listening to" < " $TESTTMP /apiserver.out " | grep -Po "(\\d+)\$" ) && break
2018-12-03 13:12:11 +03:00
fi
2018-09-24 20:59:02 +03:00
sleep 0.1
done
if [ [ -z " $PORT " ] ] ; then
echo "error: Mononoke API Server is not started"
cat " $TESTTMP /apiserver.out "
exit 1
fi
2018-09-27 21:16:18 +03:00
2018-12-04 19:27:47 +03:00
export APIHOST = " localhost: $PORT "
2018-09-24 20:59:02 +03:00
export APISERVER
2018-09-27 21:16:18 +03:00
APISERVER = " https://localhost: $PORT "
if [ [ ( $# -eq 1 && " $1 " = = "--no-ssl" ) ] ] ; then
APISERVER = " http://localhost: $PORT "
fi
2018-09-24 20:59:02 +03:00
}
2019-11-07 18:50:46 +03:00
function start_and_wait_for_scs_server {
export SCS_PORT
SCS_PORT = $( get_free_socket)
GLOG_minloglevel = 5 " $SCS_SERVER " " $@ " \
-p " $SCS_PORT " \
--mononoke-config-path " $TESTTMP /mononoke-config " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " >> " $TESTTMP /scs_server.out " 2>& 1 &
2019-11-07 18:50:46 +03:00
export SCS_SERVER_PID = $!
2019-12-02 19:10:25 +03:00
echo " $SCS_SERVER_PID " >> " $DAEMON_PIDS "
2019-11-07 18:50:46 +03:00
# Wait until a SCS server is available
# MONONOKE_START_TIMEOUT is set in seconds
# Number of attempts is timeout multiplied by 10, since we
# sleep every 0.1 seconds.
local attempts timeout
timeout = " ${ MONONOKE_START_TIMEOUT :- " $MONONOKE_DEFAULT_START_TIMEOUT " } "
attempts = " $(( timeout * 10 )) "
2020-01-09 17:18:18 +03:00
CHECK_SSL = " /usr/local/fbcode/platform007/bin/openssl s_client -connect localhost: $SCS_PORT "
2019-11-07 18:50:46 +03:00
for _ in $( seq 1 $attempts ) ; do
$CHECK_SSL 2>& 1 </dev/null | grep -q 'DONE' && break
sleep 0.1
done
if ! $CHECK_SSL 2>& 1 </dev/null | grep -q 'DONE' ; then
echo "SCS server did not start" >& 2
cat " $TESTTMP /scs_server.out "
exit 1
fi
}
function scsc {
GLOG_minloglevel = 5 " $SCS_CLIENT " --host " localhost: $SCS_PORT " " $@ "
}
2019-09-10 16:37:56 +03:00
function lfs_server {
2019-11-05 17:27:49 +03:00
local port uri log opts args proto poll lfs_server_pid
2019-09-10 16:37:56 +03:00
port = " $( get_free_socket) "
log = " ${ TESTTMP } /lfs_server. ${ port } "
proto = "http"
poll = "curl"
opts = (
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } "
2019-09-10 16:37:56 +03:00
--mononoke-config-path " $TESTTMP /mononoke-config "
--listen-host 127.0.0.1
--listen-port " $port "
2019-11-19 20:22:13 +03:00
--test-friendly-logging
2019-09-10 16:37:56 +03:00
)
args = ( )
while [ [ " $# " -gt 0 ] ] ; do
if [ [ " $1 " = "--upstream" ] ] ; then
shift
args = ( " ${ args [@] } " " $1 " )
shift
2019-11-05 22:42:45 +03:00
elif [ [ " $1 " = "--live-config" ] ] ; then
2019-11-06 17:04:31 +03:00
opts = ( " ${ opts [@] } " " $1 " " $2 " "--live-config-fetch-interval" "1" )
2019-11-05 22:42:45 +03:00
shift
shift
2019-09-10 16:37:56 +03:00
elif [ [ " $1 " = "--tls" ] ] ; then
proto = "https"
poll = "sslcurl"
opts = (
" ${ opts [@] } "
--tls-ca " $TEST_CERTDIR /root-ca.crt "
--tls-private-key " $TEST_CERTDIR /localhost.key "
--tls-certificate " $TEST_CERTDIR /localhost.crt "
--tls-ticket-seeds " $TEST_CERTDIR /server.pem.seeds "
)
shift
2019-09-24 16:43:11 +03:00
elif [ [ " $1 " = "--always-wait-for-upstream" ] ] ; then
opts = ( " ${ opts [@] } " " $1 " )
shift
2019-12-03 17:23:21 +03:00
elif
[ [ " $1 " = "--allowed-test-identity" ] ] ||
[ [ " $1 " = "--scuba-log-file" ] ] ||
[ [ " $1 " = "--trusted-proxy-identity" ] ] ||
[ [ " $1 " = "--max-upload-size" ] ]
then
2019-11-19 19:29:09 +03:00
opts = ( " ${ opts [@] } " " $1 " " $2 " )
shift
shift
2019-09-10 16:37:56 +03:00
elif [ [ " $1 " = "--log" ] ] ; then
shift
log = " $1 "
shift
else
echo " invalid argument: $1 " >& 2
return 1
fi
done
uri = " ${ proto } ://localhost: ${ port } "
echo " $uri "
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $LFS_SERVER " \
" ${ opts [@] } " " $uri " " ${ args [@] } " >> " $log " 2>& 1 &
2019-09-10 16:37:56 +03:00
2019-11-05 17:27:49 +03:00
lfs_server_pid = " $! "
echo " $lfs_server_pid " >> " $DAEMON_PIDS "
2019-09-10 16:37:56 +03:00
for _ in $( seq 1 200) ; do
2019-09-25 16:21:30 +03:00
if " $poll " " ${ uri } /health_check " >/dev/null 2>& 1; then
2019-09-25 20:47:26 +03:00
truncate -s 0 " $log "
2019-09-10 16:37:56 +03:00
return 0
fi
sleep 0.1
done
echo "lfs_server did not start:" >& 2
cat " $log " >& 2
return 1
}
2018-07-26 07:36:05 +03:00
function extract_json_error {
2019-07-09 20:30:49 +03:00
input = $( < /dev/stdin)
2018-07-26 07:36:05 +03:00
echo " $input " | head -1 | jq -r '.message'
echo " $input " | tail -n +2
}
2017-07-28 04:00:19 +03:00
# Run an hg binary configured with the settings required to talk to Mononoke.
function hgmn {
2019-09-27 00:45:16 +03:00
hg --config ui.ssh= " $DUMMYSSH " --config paths.default= ssh://user@dummy/$REPONAME --config ui.remotecmd= " $MONONOKE_HGCLI " " $@ "
2017-07-28 04:00:19 +03:00
}
2018-01-21 18:21:53 +03:00
2018-03-01 01:00:26 +03:00
function hgmn_show {
echo " LOG $* "
hgmn log --template 'node:\t{node}\np1node:\t{p1node}\np2node:\t{p2node}\nauthor:\t{author}\ndate:\t{date}\ndesc:\t{desc}\n\n{diff()}' -r " $@ "
hgmn update " $@ "
echo
echo " CONTENT $* "
find . -type f -not -path "./.hg/*" -print -exec cat { } \;
}
2018-02-13 17:13:47 +03:00
function hginit_treemanifest( ) {
hg init " $@ "
cat >> " $1 " /.hg/hgrc <<EOF
[ extensions]
treemanifest =
remotefilelog =
2018-02-23 23:50:07 +03:00
smartlog =
2019-05-07 03:04:23 +03:00
clienttelemetry =
2018-02-13 17:13:47 +03:00
[ treemanifest]
2019-05-10 19:56:31 +03:00
flatcompat = False
2018-02-13 17:13:47 +03:00
server = True
sendtrees = True
2019-04-29 23:47:21 +03:00
treeonly = True
2018-02-13 17:13:47 +03:00
[ remotefilelog]
reponame = $1
2018-02-16 13:35:20 +03:00
cachepath = $TESTTMP /cachepath
2018-02-13 17:13:47 +03:00
server = True
shallowtrees = True
EOF
}
function hgclone_treemanifest( ) {
2019-04-29 23:47:21 +03:00
hg clone -q --shallow --config remotefilelog.reponame= master " $@ " --config extensions.treemanifest= --config treemanifest.treeonly= True
2018-02-13 17:13:47 +03:00
cat >> " $2 " /.hg/hgrc <<EOF
[ extensions]
treemanifest =
remotefilelog =
2018-02-23 23:50:07 +03:00
smartlog =
2019-05-07 03:04:23 +03:00
clienttelemetry =
2018-02-13 17:13:47 +03:00
[ treemanifest]
2019-05-10 19:56:31 +03:00
flatcompat = False
2018-02-13 17:13:47 +03:00
sendtrees = True
2018-02-16 13:35:18 +03:00
treeonly = True
2018-02-13 17:13:47 +03:00
[ remotefilelog]
reponame = $2
2018-02-16 13:35:20 +03:00
cachepath = $TESTTMP /cachepath
2018-02-13 17:13:47 +03:00
shallowtrees = True
EOF
}
2018-02-14 17:34:10 +03:00
2019-10-29 21:45:15 +03:00
function hgmn_init( ) {
hg init " $@ "
cat >> " $1 " /.hg/hgrc <<EOF
[ extensions]
treemanifest =
remotefilelog =
remotenames =
smartlog =
clienttelemetry =
lz4revlog =
[ treemanifest]
flatcompat = False
sendtrees = True
treeonly = True
[ remotefilelog]
reponame = $1
cachepath = $TESTTMP /cachepath
shallowtrees = True
EOF
}
function hgmn_clone( ) {
hgmn clone -q --shallow --config remotefilelog.reponame= master " $@ " --config extensions.treemanifest= --config treemanifest.treeonly= True --config extensions.lz4revlog=
cat >> " $2 " /.hg/hgrc <<EOF
[ extensions]
treemanifest =
remotefilelog =
remotenames =
smartlog =
clienttelemetry =
lz4revlog =
[ treemanifest]
flatcompat = False
sendtrees = True
treeonly = True
[ remotefilelog]
reponame = $2
cachepath = $TESTTMP /cachepath
shallowtrees = True
EOF
}
2018-03-22 00:18:21 +03:00
function enableextension( ) {
cat >> .hg/hgrc <<EOF
[ extensions]
$1 =
EOF
}
2018-02-14 17:34:10 +03:00
function setup_hg_server( ) {
cat >> .hg/hgrc <<EOF
[ extensions]
2019-11-13 19:46:59 +03:00
commitextras =
2018-02-14 17:34:10 +03:00
treemanifest =
remotefilelog =
2019-05-07 03:04:23 +03:00
clienttelemetry =
2018-02-14 17:34:10 +03:00
[ treemanifest]
server = True
[ remotefilelog]
server = True
shallowtrees = True
EOF
}
function setup_hg_client( ) {
cat >> .hg/hgrc <<EOF
[ extensions]
treemanifest =
remotefilelog =
2019-05-07 03:04:23 +03:00
clienttelemetry =
2018-02-14 17:34:10 +03:00
[ treemanifest]
2019-05-10 19:56:31 +03:00
flatcompat = False
2018-02-14 17:34:10 +03:00
server = False
treeonly = True
[ remotefilelog]
server = False
reponame = repo
EOF
}
2018-10-11 15:41:09 +03:00
# Does all the setup necessary for hook tests
function hook_test_setup( ) {
mononoke: rebuild test framework
Summary:
Our test framework as it stands right now is a light passthrough to the hg `run-tests.py` test framework, which attempts to place all the files it needs to run (including tests) into a `python_binary`, then runs the hg test runner from that directory.
It heavily relies on how Buck works to offer functionality:
- It expects that all the sources it registers for its master binary will all be in the same directory when it builds
- It expects that the sources will be symlinks to the real files so that `--interactive` can work.
This has a few problems:
- It doesn't work in `mode/opt`. The archive that gets built in `mode/opt` doesn't actually have all the sources we registered, so it's impossible to run tests.
- To add a new test, you must rebuild everything. We don't do that very often, but it'd be nice if we didn't have to.
- Iterating on the runner itself is painful, because as far as Buck is concerned, it depends on the entire world. This means that every change to the runner has to scan a lot more stuff than necessary. There's some functionality I'd like to get into the runner (like reporting test timings) that hasn't been easy to add as a result.
This diff attempts to solve these problems by separating concerns a little more:
- The runner is now just a simple `python_binary`, so it's easier to make changes to it.
- The runner now provides the logic of working from local files when needed (this means you can add a new test and it'll work immediately),
- All the binaries we need are dependencies of the integration test target, not the runner's. However, to make it possible to run the runner incrementally while iterating on something, there's a manifest target that points at all the various paths the runner needs to work. This will also help integrate the test runner with other build frameworks if necessary (e.g. for open-sourcing).
- We have separate targets for various assets we need to run the tests (e.g. the hg test framework).
- The runner now controls whether to use the network blackhole. This was necessary because the network blackhole breaks PAR archives (because tmp is no longer owned by the right owner, because we use a user namespace). We should be able to bring this back at some point if we want to by using a proper chroot for opt tests.
I included a README to explain this new design as well.
There are some things that could yet stand to be improved here (notably, I think we should put assets and tests in different directories for the sake of clarity), but so far I've been aiming at providing a 1-1 translation of the old system into the new one. I am planning to make further improvements in followup diffs.
Reviewed By: farnz
Differential Revision: D15921732
fbshipit-source-id: 09052591c419acf97f7e360b1e88ef1f412da6e5
2019-06-25 18:30:05 +03:00
# shellcheck source=fbcode/scm/mononoke/tests/integration/library.sh
. " ${ TEST_FIXTURES } /library.sh "
2018-10-11 15:41:09 +03:00
2018-12-04 14:50:22 +03:00
setup_mononoke_config
2018-10-11 15:41:09 +03:00
cd " $TESTTMP /mononoke-config " || exit 1
2019-04-30 19:02:50 +03:00
HOOKBOOKMARK = " ${ HOOKBOOKMARK :- master_bookmark } "
2019-09-23 17:14:55 +03:00
cat >> " repos/ $REPONAME /server.toml " <<CONFIG
2018-10-11 15:41:09 +03:00
[ [ bookmarks] ]
2019-04-30 19:02:50 +03:00
name = " $HOOKBOOKMARK "
2018-10-11 15:41:09 +03:00
CONFIG
HOOK_FILE = " $1 "
HOOK_NAME = " $2 "
HOOK_TYPE = " $3 "
shift 3
2019-02-08 18:24:03 +03:00
EXTRA_CONFIG_DESCRIPTOR = ""
2018-10-11 15:41:09 +03:00
if [ [ $# -gt 0 ] ] ; then
2019-02-08 18:24:03 +03:00
EXTRA_CONFIG_DESCRIPTOR = " $1 "
2018-10-11 15:41:09 +03:00
fi
2019-04-30 19:02:50 +03:00
2018-11-13 20:34:53 +03:00
if [ [ ! -z " $HOOK_FILE " ] ] ; then
mkdir -p common/hooks
cp " $HOOK_FILE " common/hooks/" $HOOK_NAME " .lua
2019-02-08 18:24:03 +03:00
register_hook " $HOOK_NAME " common/hooks/" $HOOK_NAME " .lua " $HOOK_TYPE " " $EXTRA_CONFIG_DESCRIPTOR "
2018-11-13 20:34:53 +03:00
else
2019-02-08 18:24:03 +03:00
register_hook " $HOOK_NAME " "" " $HOOK_TYPE " " $EXTRA_CONFIG_DESCRIPTOR "
2018-11-13 20:34:53 +03:00
fi
2018-10-11 15:41:09 +03:00
setup_common_hg_configs
cd " $TESTTMP " || exit 1
cat >> " $HGRCPATH " <<EOF
[ ui]
ssh = " $DUMMYSSH "
EOF
hg init repo-hg
cd repo-hg || exit 1
setup_hg_server
2019-12-17 05:10:38 +03:00
drawdag <<EOF
2018-10-11 15:41:09 +03:00
C
|
B
|
A
EOF
2019-04-30 19:02:50 +03:00
hg bookmark " $HOOKBOOKMARK " -r tip
2018-10-11 15:41:09 +03:00
cd ..
2018-12-11 20:45:43 +03:00
blobimport repo-hg/.hg repo
2018-10-11 15:41:09 +03:00
mononoke
wait_for_mononoke " $TESTTMP " /repo
hgclone_treemanifest ssh://user@dummy/repo-hg repo2 --noupdate --config extensions.remotenames= -q
cd repo2 || exit 1
setup_hg_client
cat >> .hg/hgrc <<EOF
[ extensions]
pushrebase =
remotenames =
EOF
2018-10-17 12:17:11 +03:00
}
2018-10-11 15:41:09 +03:00
2018-10-17 12:17:11 +03:00
function setup_hg_lfs( ) {
cat >> .hg/hgrc <<EOF
[ extensions]
lfs =
[ lfs]
url = $1
threshold = $2
usercache = $3
EOF
2018-10-11 15:41:09 +03:00
}
2018-11-08 14:39:41 +03:00
function aliasverify( ) {
2018-12-04 14:50:23 +03:00
mode = $1
shift 1
2019-10-03 18:43:58 +03:00
GLOG_minloglevel = 5 " $MONONOKE_ALIAS_VERIFY " --repo-id $REPOID \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2018-12-04 14:50:23 +03:00
--mononoke-config-path " $TESTTMP /mononoke-config " \
--mode " $mode " " $@ "
2018-11-08 14:39:41 +03:00
}
2019-01-09 20:20:53 +03:00
2019-01-22 16:56:17 +03:00
# Without rev
function tglogpnr( ) {
hg log -G -T "{node|short} {phase} '{desc}' {bookmarks} {branches}" " $@ "
}
function mkcommit( ) {
echo " $1 " > " $1 "
hg add " $1 "
hg ci -m " $1 "
}
2019-02-21 16:43:59 +03:00
2019-12-09 21:36:00 +03:00
function call_with_certs( ) {
REPLAY_CA_PEM = " $TEST_CERTDIR /root-ca.crt " \
THRIFT_TLS_CL_CERT_PATH = " $TEST_CERTDIR /localhost.crt " \
THRIFT_TLS_CL_KEY_PATH = " $TEST_CERTDIR /localhost.key " \
GLOG_minloglevel = 5 " $@ "
}
function traffic_replay( ) {
call_with_certs " $TRAFFIC_REPLAY " \
--loglevel warn \
--testrun \
--hgcli " $MONONOKE_HGCLI " \
--mononoke-address " [::1]: $MONONOKE_SOCKET " \
--mononoke-server-common-name localhost < " $1 "
}
2019-06-10 12:59:19 +03:00
function enable_replay_verification_hook {
cat >> " $TESTTMP " /replayverification.py <<EOF
def verify_replay( ui, repo, *args, **kwargs) :
EXP_ONTO = "EXPECTED_ONTOBOOK"
EXP_HEAD = "EXPECTED_REBASEDHEAD"
expected_book = kwargs.get( EXP_ONTO)
expected_head = kwargs.get( EXP_HEAD)
actual_book = kwargs.get( "key" )
actual_head = kwargs.get( "new" )
allowed_replay_books = ui.configlist( "facebook" , "hooks.unbundlereplaybooks" , [ ] )
# If there is a problem with the mononoke -> hg sync job we need a way to
# quickly disable the replay verification to let unsynced bundles
# through.
# Disable this hook by placing a file in the .hg directory.
if repo.localvfs.exists( 'REPLAY_BYPASS' ) :
ui.note( "[ReplayVerification] Bypassing check as override file is present\n" )
return 0
if expected_book is None and expected_head is None:
# We are allowing non-unbundle-replay pushes to go through
return 0
if allowed_replay_books and actual_book not in allowed_replay_books:
ui.warn( "[ReplayVerification] only allowed to unbundlereplay on %r\n" % ( allowed_replay_books, ) )
return 1
expected_head = expected_head or None
actual_head = actual_head or None
if expected_book = = actual_book and expected_head = = actual_head:
ui.note( "[ReplayVerification] Everything seems in order\n" )
return 0
ui.warn( "[ReplayVerification] Expected: (%s, %s). Actual: (%s, %s)\n" % ( expected_book, expected_head, actual_book, actual_head) )
return 1
EOF
cat >> " $TESTTMP " /repo_lock.py << EOF
def run( *args, **kwargs) :
"" " Repo is locked for everything except replays
In-process style hook."" "
if kwargs.get( "EXPECTED_ONTOBOOK" ) :
return 0
print "[RepoLock] Repo locked for non-unbundlereplay pushes"
return 1
EOF
[ [ -f .hg/hgrc ] ] || echo ".hg/hgrc does not exists!"
cat >>.hg/hgrc <<CONFIG
[ hooks]
prepushkey = python:$TESTTMP /replayverification.py:verify_replay
prepushkey.lock = python:$TESTTMP /repo_lock.py:run
CONFIG
}
2019-06-24 19:02:15 +03:00
2019-12-19 15:27:42 +03:00
function get_bonsai_bookmark( ) {
local bookmark repoid_backup
repoid_backup = $REPOID
export REPOID = " $1 "
bookmark = " $2 "
mononoke_admin bookmarks get -c bonsai " $bookmark " 2>/dev/null | cut -d' ' -f2
export REPOID = " $repoid_backup "
}
2019-06-24 19:02:15 +03:00
function create_replaybookmarks_table( ) {
if [ [ -n " $DB_SHARD_NAME " ] ] ; then
2019-12-09 21:55:42 +03:00
# We don't need to do anything: the MySQL setup creates this for us.
true
else
# We don't actually create any DB here, replaybookmarks will create it for it
# when it opens a SQLite DB in this directory.
mkdir " $TESTTMP /replaybookmarksqueue "
fi
2019-06-24 19:02:15 +03:00
}
function insert_replaybookmarks_entry( ) {
local repo bookmark
repo = " $1 "
bookmark = " $2 "
node = " $3 "
if [ [ -n " $DB_SHARD_NAME " ] ] ; then
# See above for why we have to redirect this output to /dev/null
db -w " $DB_SHARD_NAME " 2>/dev/null <<EOF
INSERT INTO replaybookmarksqueue ( reponame, bookmark, node, bookmark_hash)
VALUES ( '$repo' , '$bookmark' , '$node' , '$bookmark' ) ;
EOF
else
sqlite3 " $TESTTMP /replaybookmarksqueue/replaybookmarksqueue " <<EOF
INSERT INTO replaybookmarksqueue ( reponame, bookmark, node, bookmark_hash)
VALUES ( CAST( '$repo' AS BLOB) , '$bookmark' , '$node' , '$bookmark' ) ;
EOF
fi
}
2019-09-27 00:45:16 +03:00
2019-11-11 22:12:21 +03:00
function get_bonsai_bookmark( ) {
local bookmark repoid_backup
repoid_backup = $REPOID
export REPOID = " $1 "
bookmark = " $2 "
mononoke_admin bookmarks get -c bonsai " $bookmark " 2>/dev/null | cut -d' ' -f2
export REPOID = $repoid_backup
}
2019-09-27 00:45:16 +03:00
function add_synced_commit_mapping_entry( ) {
local small_repo_id large_repo_id small_bcs_id large_bcs_id
small_repo_id = " $1 "
small_bcs_id = " $2 "
large_repo_id = " $3 "
large_bcs_id = " $4 "
2019-11-07 19:13:38 +03:00
sqlite3 " $TESTTMP /monsql/sqlite_dbs " <<EOF
2019-09-27 00:45:16 +03:00
INSERT INTO synced_commit_mapping ( small_repo_id, small_bcs_id, large_repo_id, large_bcs_id)
VALUES ( '$small_repo_id' , X'$small_bcs_id' , '$large_repo_id' , X'$large_bcs_id' ) ;
EOF
}
2019-10-30 21:30:12 +03:00
function read_blobstore_sync_queue_size( ) {
local attempts timeout ret
timeout = "100"
attempts = " $(( timeout * 10 )) "
for _ in $( seq 1 $attempts ) ; do
2019-11-07 19:13:38 +03:00
ret = " $( sqlite3 " $TESTTMP /monsql/sqlite_dbs " "select count(*) from blobstore_sync_queue" 2>/dev/null) "
2019-10-30 21:30:12 +03:00
if [ [ -n " $ret " ] ] ; then
echo " $ret "
return 0
fi
sleep 0.1
done
return 1
}
2019-11-11 16:47:35 +03:00
function get_bonsai_bookmark( ) {
local bookmark repoid_backup
repoid_backup = $REPOID
export REPOID = " $1 "
bookmark = " $2 "
mononoke_admin bookmarks get -c bonsai " $bookmark " 2>/dev/null | cut -d' ' -f2
export REPOID = $repoid_backup
}
2019-12-02 14:35:52 +03:00
function log( ) {
# Prepend "$" to the end of the log output to prevent having trailing whitespaces
hg log -G -T "{desc} [{phase};rev={rev};{node|short}] {remotenames}" " $@ " | sed 's/^[ \t]*$/$/'
}
# Default setup that many of the test use
2020-01-16 17:11:09 +03:00
function default_setup_blobimport( ) {
setup_common_config " $@ "
2019-12-02 14:35:52 +03:00
cd " $TESTTMP " || exit 1
cat >> " $HGRCPATH " <<EOF
[ ui]
ssh = " $DUMMYSSH "
[ extensions]
amend =
EOF
hg init repo-hg
cd repo-hg || exit 1
setup_hg_server
2019-12-17 05:10:38 +03:00
drawdag <<EOF
2019-12-02 14:35:52 +03:00
C
|
B
|
A
EOF
2020-01-16 17:11:09 +03:00
hg bookmark master_bookmark -r tip
2019-12-02 14:35:52 +03:00
2020-01-16 17:11:09 +03:00
echo "hg repo"
log -r ":"
2019-12-02 14:35:52 +03:00
2020-01-16 17:11:09 +03:00
cd .. || exit 1
echo "blobimporting"
blobimport repo-hg/.hg repo
}
2019-12-02 14:35:52 +03:00
2020-01-16 17:11:09 +03:00
function default_setup( ) {
default_setup_blobimport " $BLOB_TYPE "
echo "starting Mononoke"
mononoke " $@ "
wait_for_mononoke " $TESTTMP /repo "
2019-12-02 14:35:52 +03:00
2020-01-16 17:11:09 +03:00
echo "cloning repo in hg client 'repo2'"
hgclone_treemanifest ssh://user@dummy/repo-hg repo2 --noupdate --config extensions.remotenames= -q
cd repo2 || exit 1
setup_hg_client
cat >> .hg/hgrc <<EOF
2019-12-02 14:35:52 +03:00
[ extensions]
pushrebase =
remotenames =
EOF
2019-12-05 15:07:22 +03:00
}
function gitimport( ) {
log = " $TESTTMP /gitimport.out "
" $MONONOKE_GITIMPORT " \
2019-12-17 14:55:40 +03:00
" ${ COMMON_ARGS [@] } " \
2019-12-05 15:07:22 +03:00
--repo-id " $REPOID " \
--mononoke-config-path " ${ TESTTMP } /mononoke-config " \
" $@ "
}
2019-12-02 14:35:52 +03:00
2019-12-05 15:07:22 +03:00
function git( ) {
2019-12-11 14:48:36 +03:00
local date name email
2019-12-05 15:07:22 +03:00
date = "01/01/0000 00:00 +0000"
2019-12-11 14:48:36 +03:00
name = "mononoke"
email = "mononoke@mononoke"
GIT_COMMITTER_DATE = " $date " \
GIT_COMMITTER_NAME = " $name " \
GIT_COMMITTER_EMAIL = " $email " \
GIT_AUTHOR_DATE = " $date " \
GIT_AUTHOR_NAME = " $name " \
GIT_AUTHOR_EMAIL = " $email " \
command git " $@ "
2019-12-02 14:35:52 +03:00
}
2020-01-13 18:16:03 +03:00
function summarize_scuba_json( ) {
local interesting_tags
local key_spec
interesting_tags = " $1 "
shift
key_spec = ""
for key in " $@ "
do
key_spec = " $key_spec + (if ( ${ key } != null) then { ${ key ##*. } : ${ key } } else {} end) "
done
jq -S " if (.normal.log_tag | match(\"^( $interesting_tags )\$\")) then ${ key_spec : 3 } else empty end "
}