graphql-engine/.circleci/test-server.sh

#!/usr/bin/env bash
set -euo pipefail
echo "Running tests on node $CIRCLE_NODE_INDEX of $CIRCLE_NODE_TOTAL"
if [ -z "$SERVER_TEST_TO_RUN" ]; then
# shellcheck disable=SC2016
echo 'Please specify $SERVER_TEST_TO_RUN'
exit 1
else
echo "Running test $SERVER_TEST_TO_RUN"
fi
### Functions
# Disable the following warning:
# > Note that A && B || C is not if-then-else. C may run when A is true.
# We want this behavior, as we want to continue even if `kill` fails.
# shellcheck disable=SC2015
stop_services() {
echo "killing and waiting for spawned services"
[[ -n "${HGE_PIDS[*]}" ]] && kill -s INT "${HGE_PIDS[@]}" || true
[[ -n "$WH_PID" ]] && kill "$WH_PID" || true
[[ -n "$GQL_SERVER_PID" ]] && kill "$GQL_SERVER_PID" || true
[[ -n "${HGE_PIDS[*]}" ]] && wait "${HGE_PIDS[@]}" || true
[[ -n "$WH_PID" ]] && wait "$WH_PID" || true
[[ -n "$GQL_SERVER_PID" ]] && wait "$GQL_SERVER_PID" || true
}
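# Illustrative note (not executed): `A && B || C` runs C whenever B fails, even if A
# succeeded, e.g.
#   true && false || echo "still runs"
# That is exactly what we rely on above: cleanup continues even if a `kill` fails.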
time_elapsed() {
printf "(%02d:%02d)" $((SECONDS / 60)) $((SECONDS % 60))
}
fail_if_port_busy() {
local PORT="$1"
if nc -z localhost "$PORT"; then
echo "Port $PORT is busy. Exiting"
exit 1
fi
}
wait_for_port() {
local PORT="$1"
echo "waiting for $PORT"
for _ in $(seq 1 60); do
nc -z localhost "$PORT" && echo "port $PORT is ready" && return
echo -n .
sleep 0.25
done
echo "Failed waiting for $PORT" && exit 1
}
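# Example usage (illustrative; the script calls these below):
#   fail_if_port_busy 8080   # abort early if something is already listening on 8080
#   wait_for_port 8080       # poll for roughly 15s (60 * 0.25s) until the port answers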
init_jwt() {
CUR_DIR="$PWD"
mkdir -p "$OUTPUT_FOLDER/ssl"
cd "$OUTPUT_FOLDER/ssl"
openssl genrsa -out jwt_private.key 2048
openssl rsa -pubout -in jwt_private.key -out jwt_public.key
openssl genpkey -algorithm ed25519 -outform PEM -out ed25519_jwt_private.key
openssl pkey -pubout -in ed25519_jwt_private.key -out ed25519_jwt_public.key
cd "$CUR_DIR"
}
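# Sanity check (illustrative, not run by CI): the generated key pairs can be inspected with
#   openssl rsa -in "$OUTPUT_FOLDER/ssl/jwt_private.key" -check -noout
#   openssl pkey -in "$OUTPUT_FOLDER/ssl/ed25519_jwt_private.key" -text -noout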
# init_hge_and_test_jwt runs the hge server using the current environment variables and then runs the pytest target passed as argument.
# The first argument is the path of the jwt-key-file relative to $OUTPUT_FOLDER; the jwt-key-file can be RSA or EdDSA.
# The second argument is the test to run, e.g. test_jwt_claims_map.py::TestJWTClaimsMapBasic, test_jwt.py, etc.
init_hge_and_test_jwt() {
  local key_file
  key_file="${OUTPUT_FOLDER}/${1}"
  shift
  run_hge_with_args serve
  wait_for_port 8080
  pytest "${PYTEST_COMMON_ARGS[@]}" \
    --hge-jwt-key-file="$key_file" --hge-jwt-conf="$HASURA_GRAPHQL_JWT_SECRET" \
    "$@"
  kill_hge_servers
}
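# Example (illustrative, mirroring the jwt-* cases below):
#   HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/jwt_public.key")" '{ type: "RS512", key: $key }')"
#   export HASURA_GRAPHQL_JWT_SECRET
#   init_hge_and_test_jwt "ssl/jwt_private.key" test_jwt.py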
init_ssl() {
CUR_DIR="$PWD"
mkdir -p "$OUTPUT_FOLDER/ssl"
cd "$OUTPUT_FOLDER/ssl"
CNF_TEMPLATE='[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = localhost
IP.1 = 127.0.0.1'
echo "$CNF_TEMPLATE" >webhook-req.cnf
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10 -out ca.pem -subj "/CN=webhook-ca"
openssl genrsa -out webhook-key.pem 2048
openssl req -new -key webhook-key.pem -out webhook.csr -subj "/CN=hge-webhook" -config webhook-req.cnf
openssl x509 -req -in webhook.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out webhook.pem -days 10 -extensions v3_req -extfile webhook-req.cnf
cp ca.pem /etc/ssl/certs/webhook.crt
update-ca-certificates
cd "$CUR_DIR"
}
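# Sanity check (illustrative, not run by CI): the webhook certificate should chain back
# to the generated CA, i.e.
#   openssl verify -CAfile "$OUTPUT_FOLDER/ssl/ca.pem" "$OUTPUT_FOLDER/ssl/webhook.pem"
# is expected to report "OK".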
webhook_tests_check_root() {
  if [ $EUID != 0 ]; then
    echo -e "webhook tests require root (in order to trust certificate authority)."
    exit 1
  fi
}
kill_hge_servers() {
kill -s INT "${HGE_PIDS[@]}" || true
wait "${HGE_PIDS[@]}" || true
HGE_PIDS=()
}
HGE_INDEX=1
run_hge_with_args() {
  i=$((HGE_INDEX++))
  set -x
  "$GRAPHQL_ENGINE" "$@" >"$OUTPUT_FOLDER/graphql-engine-${i}.log" 2>&1 &
  HGE_PIDS=("${HGE_PIDS[@]}" $!)
  set +x
}
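# Example (illustrative): any arguments are passed straight to graphql-engine and the
# process is backgrounded, with logs in "$OUTPUT_FOLDER/graphql-engine-<n>.log", e.g.
#   run_hge_with_args --database-url "$HASURA_GRAPHQL_DATABASE_URL_2" serve --server-port 8081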
start_multiple_hge_servers() {
  run_hge_with_args --database-url "$HASURA_GRAPHQL_DATABASE_URL" serve
  if [ -n "${HASURA_GRAPHQL_DATABASE_URL_2:-}" ]; then
    run_hge_with_args --database-url "$HASURA_GRAPHQL_DATABASE_URL_2" serve --server-port 8081
    wait_for_port 8081
  fi
  wait_for_port 8080
}
source_data_sources_utils() {
  # Only source this file in the $SERVER_TEST_TO_RUN case branch where it's used,
  # to avoid sourcing it for every server job & test.
  # https://github.com/hasura/graphql-engine-mono/pull/1526#discussion_r661411538
  SCRIPTS_SOURCE=$CIRCLECI_FOLDER/../scripts
  # shellcheck source=../scripts/data-sources-util.sh
  source "$SCRIPTS_SOURCE/data-sources-util.sh"
}
if [ -z "${HASURA_GRAPHQL_DATABASE_URL:-}" ]; then
echo "Env var HASURA_GRAPHQL_DATABASE_URL is not set"
exit 1
fi
if [ -z "${HASURA_GRAPHQL_DATABASE_URL_2:-}" ]; then
echo "Env var HASURA_GRAPHQL_DATABASE_URL_2 is not set"
exit 1
fi
CIRCLECI_FOLDER="${BASH_SOURCE[0]%/*}"
cd "$CIRCLECI_FOLDER"
CIRCLECI_FOLDER="$PWD"
PYTEST_ROOT="$CIRCLECI_FOLDER/../server/tests-py"
OUTPUT_FOLDER=${OUTPUT_FOLDER:-"$CIRCLECI_FOLDER/test-server-output"}
mkdir -p "$OUTPUT_FOLDER"
cd "$PYTEST_ROOT"
for port in 8080 8081 9876 5592 5000 5001 5593 5594; do
  fail_if_port_busy $port
done
echo -e "\nINFO: GraphQL Executable : $GRAPHQL_ENGINE"
echo -e "INFO: Logs Folder : $OUTPUT_FOLDER\n"
# Copy the node_modules directory installed in the Docker image here.
cp -R /deps/node_modules .
export EVENT_WEBHOOK_HEADER="MyEnvValue"
export HASURA_GRAPHQL_STRINGIFY_NUMERIC_TYPES=true
export DEFAULT_HASURA_EXPERIMENTAL_FEATURES=streaming_subscriptions
export HASURA_GRAPHQL_EXPERIMENTAL_FEATURES=$DEFAULT_HASURA_EXPERIMENTAL_FEATURES
export HGE_URL="http://localhost:8080"
export HGE_URL_2=""
if [ -n "${HASURA_GRAPHQL_DATABASE_URL_2:-}" ]; then
HGE_URL_2="http://localhost:8081"
fi
export EVENT_WEBHOOK_HEADER="MyEnvValue"
export EVENT_WEBHOOK_HANDLER="http://localhost:5592"
export ACTION_WEBHOOK_HANDLER="http://localhost:5593"
export SCHEDULED_TRIGGERS_WEBHOOK_DOMAIN="http://localhost:5594"
export REMOTE_SCHEMAS_WEBHOOK_DOMAIN="http://localhost:5000"
export GRAPHQL_SERVICE_HANDLER="http://localhost:4001"
export GRAPHQL_SERVICE_1="http://localhost:4020"
export GRAPHQL_SERVICE_2="http://localhost:4021"
export GRAPHQL_SERVICE_3="http://localhost:4022"
PYTEST_REPORTING_ARGS=(
  '-vv' # show all passes and the full assertion text
  '-r' 'a' # show extra test summary info for every test except passes
)
PYTEST_COMMON_ARGS=(
"${PYTEST_REPORTING_ARGS[@]}"
--hge-urls="$HGE_URL"
--pg-urls="$HASURA_GRAPHQL_DATABASE_URL"
)
PYTEST_PARALLEL_ARGS=(
"${PYTEST_REPORTING_ARGS[@]}"
-n 2
--hge-urls "$HGE_URL" "${HGE_URL_2:-}"
--pg-urls "$HASURA_GRAPHQL_DATABASE_URL" "${HASURA_GRAPHQL_DATABASE_URL_2}"
)
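# Roughly, the parallel invocation below expands to (illustrative):
#   pytest -vv -r a -n 2 \
#     --hge-urls "$HGE_URL" "$HGE_URL_2" \
#     --pg-urls "$HASURA_GRAPHQL_DATABASE_URL" "$HASURA_GRAPHQL_DATABASE_URL_2"
# where -n 2 asks pytest-xdist to spread the test files across two workers.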
HGE_PIDS=()
WH_PID=""
GQL_SERVER_PID=""
trap stop_services ERR
trap stop_services INT
trap stop_services EXIT
run_pytest_parallel() {
  trap stop_services ERR
  if [ -n "${HASURA_GRAPHQL_DATABASE_URL_2:-}" ]; then
    set -x
    pytest "${PYTEST_PARALLEL_ARGS[@]}" "$@"
    set +x
  else
    set -x
    pytest "${PYTEST_COMMON_ARGS[@]}" "$@"
    set +x
  fi
}
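# Example (illustrative): extra arguments are forwarded to pytest unchanged, e.g.
#   run_pytest_parallel --hge-jwt-key-file="$OUTPUT_FOLDER/ssl/jwt_private.key" --hge-jwt-conf="$HASURA_GRAPHQL_JWT_SECRET"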
case "$SERVER_TEST_TO_RUN" in
haskell-tests)
echo -e "\n$(time_elapsed): <########## RUN GRAPHQL-ENGINE HASKELL TESTS ###########################################>\n"
SKIP_TEST_LIST=""
if [ "${PG_VERSION}" -eq "15" ]; then
# https://packages.ubuntu.com/search?suite=jammy&section=all&arch=any&keywords=postgresql-client-15&searchon=names
echo -e "Skipping Hasura.Server.Migrate tests until postgresql-client version 15 is available to fix pg_dump errors"
SKIP_TEST_LIST="Hasura.Server.Migrate"
fi
HSPEC_SKIP=${SKIP_TEST_LIST} "${GRAPHQL_ENGINE_TESTS:?}" postgres
;;
no-auth)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITHOUT ADMIN SECRET ###########################################>\n"
start_multiple_hge_servers
run_pytest_parallel
kill_hge_servers
;;
admin-secret)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET #####################################>\n"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
  start_multiple_hge_servers
  run_pytest_parallel
  kill_hge_servers
  ;;
admin-secret-unauthorized-role)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET AND UNAUTHORIZED ROLE #####################################>\n"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
export HASURA_GRAPHQL_UNAUTHORIZED_ROLE="anonymous"
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
--test-unauthorized-role \
test_graphql_queries.py::TestUnauthorizedRolePermission
kill_hge_servers
#unset HASURA_GRAPHQL_UNAUTHORIZED_ROLE
;;
jwt-rs512)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET AND JWT (RS512) #####################################>\n"
init_jwt
HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/jwt_public.key")" '{ type: "RS512", key: $key }')"
export HASURA_GRAPHQL_ADMIN_SECRET HASURA_GRAPHQL_JWT_SECRET
start_multiple_hge_servers
run_pytest_parallel --hge-jwt-key-file="$OUTPUT_FOLDER/ssl/jwt_private.key" --hge-jwt-conf="$HASURA_GRAPHQL_JWT_SECRET"
kill_hge_servers
#unset HASURA_GRAPHQL_JWT_SECRET
;;
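# For reference (illustrative): the jq call above inlines the PEM public key into a
# JWT config of roughly this shape:
#   {"type":"RS512","key":"-----BEGIN PUBLIC KEY-----\n..."}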
jwt-ed25519)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET AND JWT (Ed25519) #####################################>\n"
init_jwt
HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/ed25519_jwt_public.key")" '{ type: "Ed25519", key: $key }')"
export HASURA_GRAPHQL_ADMIN_SECRET HASURA_GRAPHQL_JWT_SECRET
start_multiple_hge_servers
run_pytest_parallel --hge-jwt-key-file="$OUTPUT_FOLDER/ssl/ed25519_jwt_private.key" --hge-jwt-conf="$HASURA_GRAPHQL_JWT_SECRET"
kill_hge_servers
#unset HASURA_GRAPHQL_JWT_SECRET
;;
jwt-stringified)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET AND JWT (in stringified mode) #####################################>\n"
init_jwt
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/jwt_public.key")" '{ type: "RS512", key: $key , claims_format: "stringified_json"}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/jwt_private.key" test_jwt.py
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/ed25519_jwt_public.key")" '{ type: "Ed25519", key: $key , claims_format: "stringified_json"}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/ed25519_jwt_private.key" test_jwt.py
# unset HASURA_GRAPHQL_JWT_SECRET
;;
jwt-audience-check-single-string)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET AND JWT (with audience check - string) #####################################>\n"
init_jwt
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/jwt_public.key")" '{ type: "RS512", key: $key , audience: "myapp-1234"}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/jwt_private.key" test_jwt.py
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/ed25519_jwt_public.key")" '{ type: "Ed25519", key: $key , audience: "myapp-1234"}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/ed25519_jwt_private.key" test_jwt.py
#unset HASURA_GRAPHQL_JWT_SECRET
;;
jwt-audience-check-list-string)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET AND JWT (with audience check - list of strings) #################################>\n"
init_jwt
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/jwt_public.key")" '{ type: "RS512", key: $key , audience: ["myapp-1234", "myapp-9876"]}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/jwt_private.key" test_jwt.py
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/ed25519_jwt_public.key")" '{ type: "Ed25519", key: $key , audience: ["myapp-1234", "myapp-9876"]}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/ed25519_jwt_private.key" test_jwt.py
unset HASURA_GRAPHQL_JWT_SECRET
;;
jwt-issuer-check)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET AND JWT (with issuer check) #####################################>\n"
init_jwt
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/jwt_public.key")" '{ type: "RS512", key: $key , issuer: "https://hasura.com"}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/jwt_private.key" test_jwt.py
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/ed25519_jwt_public.key")" '{ type: "Ed25519", key: $key , issuer: "https://hasura.com"}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/ed25519_jwt_private.key" test_jwt.py
unset HASURA_GRAPHQL_JWT_SECRET
;;
jwt-with-claims-namespace-path)
  ##########
  # TODO(swann): should these not be run in parallel?
  echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET AND JWT (with claims_namespace_path) #####################################>\n"
  init_jwt
  # hasura claims at one level of nesting
  export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
  HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/jwt_public.key")" '{ type: "RS512", key: $key , claims_namespace_path: "$.hasura_claims"}')"
  export HASURA_GRAPHQL_JWT_SECRET
  init_hge_and_test_jwt "ssl/jwt_private.key" test_jwt.py
  HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/ed25519_jwt_public.key")" '{ type: "Ed25519", key: $key , claims_namespace_path: "$.hasura_claims"}')"
  export HASURA_GRAPHQL_JWT_SECRET
  init_hge_and_test_jwt "ssl/ed25519_jwt_private.key" test_jwt.py
  unset HASURA_GRAPHQL_JWT_SECRET
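  # Illustrative token payload matching claims_namespace_path "$.hasura_claims" above:
  #   { "sub": "1234", "hasura_claims": { "x-hasura-allowed-roles": ["user"], "x-hasura-default-role": "user", "x-hasura-user-id": "1" } }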
  # hasura claims at two levels of nesting with claims_namespace_path containing special character
  export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
  HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/jwt_public.key")" '{ type: "RS512", key: $key , claims_namespace_path: "$.hasura['\''claims%'\'']"}')"
  export HASURA_GRAPHQL_JWT_SECRET
  init_hge_and_test_jwt "ssl/jwt_private.key" test_jwt.py
  HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/ed25519_jwt_public.key")" '{ type: "Ed25519", key: $key , claims_namespace_path: "$.hasura['\''claims%'\'']"}')"
  export HASURA_GRAPHQL_JWT_SECRET
  init_hge_and_test_jwt "ssl/ed25519_jwt_private.key" test_jwt.py
  unset HASURA_GRAPHQL_JWT_SECRET
  # hasura claims at the root of the JWT token
  export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
  HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/jwt_public.key")" '{ type: "RS512", key: $key , claims_namespace_path: "$"}')"
  export HASURA_GRAPHQL_JWT_SECRET
  init_hge_and_test_jwt "ssl/jwt_private.key" test_jwt.py
  HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/ed25519_jwt_public.key")" '{ type: "Ed25519", key: $key , claims_namespace_path: "$"}')"
  export HASURA_GRAPHQL_JWT_SECRET
  init_hge_and_test_jwt "ssl/ed25519_jwt_private.key" test_jwt.py
  unset HASURA_GRAPHQL_JWT_SECRET
  ;;
jwt-claims-map-with-json-path-values)
  # test JWT with Claims map
  echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET AND JWT (with claims_map and values are json path) #####################################>\n"
  init_jwt
  export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
  HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/jwt_public.key")" '{ type: "RS512", key: $key , claims_map: {"x-hasura-user-id": {"path":"$.['"'"'https://myapp.com/jwt/claims'"'"'].user.id"}, "x-hasura-allowed-roles": {"path":"$.['"'"'https://myapp.com/jwt/claims'"'"'].role.allowed"}, "x-hasura-default-role": {"path":"$.['"'"'https://myapp.com/jwt/claims'"'"'].role.default"}}}')"
  export HASURA_GRAPHQL_JWT_SECRET
  init_hge_and_test_jwt "ssl/jwt_private.key" test_jwt_claims_map.py::TestJWTClaimsMapBasic
  HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/ed25519_jwt_public.key")" '{ type: "Ed25519", key: $key , claims_map: {"x-hasura-user-id": {"path":"$.['"'"'https://myapp.com/jwt/claims'"'"'].user.id"}, "x-hasura-allowed-roles": {"path":"$.['"'"'https://myapp.com/jwt/claims'"'"'].role.allowed"}, "x-hasura-default-role": {"path":"$.['"'"'https://myapp.com/jwt/claims'"'"'].role.default"}}}')"
  export HASURA_GRAPHQL_JWT_SECRET
  init_hge_and_test_jwt "ssl/ed25519_jwt_private.key" test_jwt_claims_map.py::TestJWTClaimsMapBasic
  unset HASURA_GRAPHQL_JWT_SECRET
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET AND JWT (with claims_map and values are json path with default values set) #####################################>\n"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/jwt_public.key")" '{ type: "RS512", key: $key , claims_map: {"x-hasura-user-id": {"path":"$.['"'"'https://myapp.com/jwt/claims'"'"'].user.id", "default":"1"}, "x-hasura-allowed-roles": {"path":"$.['"'"'https://myapp.com/jwt/claims'"'"'].role.allowed", "default":["user","editor"]}, "x-hasura-default-role": {"path":"$.['"'"'https://myapp.com/jwt/claims'"'"'].role.default","default":"user"}}}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/jwt_private.key" test_jwt_claims_map.py::TestJWTClaimsMapBasic
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/ed25519_jwt_public.key")" '{ type: "Ed25519", key: $key , claims_map: {"x-hasura-user-id": {"path":"$.['"'"'https://myapp.com/jwt/claims'"'"'].user.id", "default":"1"}, "x-hasura-allowed-roles": {"path":"$.['"'"'https://myapp.com/jwt/claims'"'"'].role.allowed", "default":["user","editor"]}, "x-hasura-default-role": {"path":"$.['"'"'https://myapp.com/jwt/claims'"'"'].role.default","default":"user"}}}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/ed25519_jwt_private.key" test_jwt_claims_map.py::TestJWTClaimsMapBasic
unset HASURA_GRAPHQL_JWT_SECRET
;;
jwt-with-expiry-time-leeway)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET AND JWT (with JWT config allowing for leeway) #####################################>\n"
init_jwt
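# allowed_skew: 60 (set below) tells the server to tolerate up to 60 seconds of
# clock skew when validating "exp", so a token that expired a few seconds ago is
# still accepted; that leeway is what TestJWTExpirySkew exercises.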
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/jwt_public.key")" '{ type: "RS512", key: $key , allowed_skew: 60}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/jwt_private.key" test_jwt.py::TestJWTExpirySkew
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/ed25519_jwt_public.key")" '{ type: "Ed25519", key: $key , allowed_skew: 60}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/ed25519_jwt_private.key" test_jwt.py::TestJWTExpirySkew
unset HASURA_GRAPHQL_JWT_SECRET
;;
jwt-claims-map-with-literal-values)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET AND JWT (with claims_map and values are literal values) #####################################>\n"
init_jwt
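# Illustrative only (not executed): in the claims_map below, only x-hasura-user-id
# is read from the token via a JSON path; x-hasura-allowed-roles,
# x-hasura-default-role and x-hasura-custom-header are literal values fixed in the
# JWT config itself rather than taken from the token.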
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/jwt_public.key")" '{ type: "RS512", key: $key , claims_map: {"x-hasura-user-id": {"path":"$.['"'"'https://myapp.com/jwt/claims'"'"'].user.id"}, "x-hasura-allowed-roles": ["user","editor"], "x-hasura-default-role": "user","x-hasura-custom-header":"custom-value"}}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/jwt_private.key" test_jwt_claims_map.py::TestJWTClaimsMapWithStaticHasuraClaimsMapValues
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/ed25519_jwt_public.key")" '{ type: "Ed25519", key: $key , claims_map: {"x-hasura-user-id": {"path":"$.['"'"'https://myapp.com/jwt/claims'"'"'].user.id"}, "x-hasura-allowed-roles": ["user","editor"], "x-hasura-default-role": "user","x-hasura-custom-header":"custom-value"}}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/ed25519_jwt_private.key" test_jwt_claims_map.py::TestJWTClaimsMapWithStaticHasuraClaimsMapValues
unset HASURA_GRAPHQL_JWT_SECRET
;;
jwt-cookie)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET AND JWT (in cookie mode) #####################################>\n"
init_jwt
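# Illustrative only (not executed): with header {"type": "Cookie", "name": "hasura_user"},
# clients send the JWT in a cookie instead of an Authorization header, e.g.
#   Cookie: hasura_user=<jwt>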
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/jwt_public.key")" '{ type: "RS512", key: $key , header: {"type": "Cookie", "name": "hasura_user"}}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/jwt_private.key" test_jwt.py
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/ed25519_jwt_public.key")" '{ type: "Ed25519", key: $key , header: {"type": "Cookie", "name": "hasura_user"}}')"
export HASURA_GRAPHQL_JWT_SECRET
init_hge_and_test_jwt "ssl/ed25519_jwt_private.key" test_jwt.py
unset HASURA_GRAPHQL_JWT_SECRET
;;
jwt-cookie-unauthorized-role)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH JWT (in cookie mode) AND UNAUTHORIZED ROLE #####################################>\n"
init_jwt
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
export HASURA_GRAPHQL_UNAUTHORIZED_ROLE="anonymous"
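# With the unauthorized role set, requests carrying no JWT cookie should fall back
# to the "anonymous" role (first pytest run below); the role is then unset to check
# that such requests are rejected instead (second run).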
HASURA_GRAPHQL_JWT_SECRET="$(jq -n --arg key "$(cat "$OUTPUT_FOLDER/ssl/jwt_public.key")" '{ type: "RS512", key: $key , header: {"type": "Cookie", "name": "hasura_user"}}')"
export HASURA_GRAPHQL_JWT_SECRET
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
--hge-jwt-key-file="$OUTPUT_FOLDER/ssl/jwt_private.key" --hge-jwt-conf="$HASURA_GRAPHQL_JWT_SECRET" \
--test-unauthorized-role \
test_graphql_queries.py::TestFallbackUnauthorizedRoleCookie
kill_hge_servers
unset HASURA_GRAPHQL_UNAUTHORIZED_ROLE
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
--hge-jwt-key-file="$OUTPUT_FOLDER/ssl/jwt_private.key" --hge-jwt-conf="$HASURA_GRAPHQL_JWT_SECRET" \
--test-no-cookie-and-unauth-role \
test_graphql_queries.py::TestMissingUnauthorizedRoleAndCookie
kill_hge_servers
unset HASURA_GRAPHQL_JWT_SECRET
;;
# test with CORS modes
cors-domains)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH CORS DOMAINS ########>\n"
export HASURA_GRAPHQL_CORS_DOMAIN="http://*.localhost, http://localhost:3000, https://*.foo.bar.com"
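# Illustrative only (not executed): with the list above, origins such as
# http://app.localhost, http://localhost:3000 or https://api.foo.bar.com should be
# accepted, while e.g. https://example.com should be rejected by the CORS policy.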
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
test_cors.py
kill_hge_servers
unset HASURA_GRAPHQL_CORS_DOMAIN
;;
auth-webhook-cookie)
# test auth webhook set-cookie forwarding on response
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH SET-COOKIE HEADER IN AUTH WEBHOOK ########>\n"
export HASURA_GRAPHQL_AUTH_HOOK="http://localhost:9876/auth"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
--hge-webhook="$HASURA_GRAPHQL_AUTH_HOOK" \
--test-auth-webhook-header \
test_auth_webhook_cookie.py
kill_hge_servers
;;
ws-init-cookie-read-cors-enabled)
# test websocket transport with initial cookie header
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH COOKIE IN WEBSOCKET INIT ########>\n"
export HASURA_GRAPHQL_AUTH_HOOK="http://localhost:9876/auth"
export HASURA_GRAPHQL_AUTH_HOOK_MODE="POST"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
run_hge_with_args serve
wait_for_port 8080
echo "$(time_elapsed): testcase 1: read cookie, cors enabled"
pytest "${PYTEST_COMMON_ARGS[@]}" \
--test-ws-init-cookie=read \
test_websocket_init_cookie.py
kill_hge_servers
;;
ws-init-cookie-noread)
echo "$(time_elapsed): testcase 2: no read cookie, cors disabled"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
export HASURA_GRAPHQL_AUTH_HOOK="http://localhost:9876/auth"
export HASURA_GRAPHQL_AUTH_HOOK_MODE="POST"
run_hge_with_args serve --disable-cors
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
--test-ws-init-cookie=noread \
test_websocket_init_cookie.py
kill_hge_servers
;;
ws-init-cookie-read-cors-disabled)
echo "$(time_elapsed): testcase 3: read cookie, cors disabled and ws-read-cookie"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
export HASURA_GRAPHQL_AUTH_HOOK="http://localhost:9876/auth"
export HASURA_GRAPHQL_AUTH_HOOK_MODE="POST"
export HASURA_GRAPHQL_WS_READ_COOKIE="true"
run_hge_with_args serve --disable-cors
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
--test-ws-init-cookie=read \
test_websocket_init_cookie.py
kill_hge_servers
;;
ws-graphql-api-disabled)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH GRAPHQL DISABLED ########>\n"
export HASURA_GRAPHQL_ENABLED_APIS="metadata"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
export HASURA_GRAPHQL_AUTH_HOOK="http://localhost:9876/auth"
export HASURA_GRAPHQL_AUTH_HOOK_MODE="POST"
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
test_apis_disabled.py
kill_hge_servers
;;
ws-metadata-api-disabled)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH METADATA DISABLED ########>\n"
export HASURA_GRAPHQL_ENABLED_APIS="graphql"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
export HASURA_GRAPHQL_AUTH_HOOK="http://localhost:9876/auth"
export HASURA_GRAPHQL_AUTH_HOOK_MODE="POST"
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
test_apis_disabled.py
kill_hge_servers
;;
remote-schema-permissions)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH REMOTE SCHEMA PERMISSIONS ENABLED ########>\n"
export HASURA_GRAPHQL_ENABLE_REMOTE_SCHEMA_PERMISSIONS=true
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
test_remote_schema_permissions.py
unset HASURA_GRAPHQL_ENABLE_REMOTE_SCHEMA_PERMISSIONS
kill_hge_servers
;;
function-permissions)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH FUNCTION PERMISSIONS ENABLED ########>\n"
export HASURA_GRAPHQL_INFER_FUNCTION_PERMISSIONS=false
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
test_graphql_queries.py::TestGraphQLQueryFunctionPermissions
pytest "${PYTEST_COMMON_ARGS[@]}" \
test_graphql_mutations.py::TestGraphQLMutationFunctions
unset HASURA_GRAPHQL_INFER_FUNCTION_PERMISSIONS
unset HASURA_GRAPHQL_ADMIN_SECRET
kill_hge_servers
;;
roles-inheritance)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH EXPERIMENTAL FEATURE: ROLES INHERITANCE ########>\n"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
export HASURA_GRAPHQL_ENABLE_REMOTE_SCHEMA_PERMISSIONS="true"
export HASURA_GRAPHQL_INFER_FUNCTION_PERMISSIONS=false
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
test_roles_inheritance.py
unset HASURA_GRAPHQL_ADMIN_SECRET
unset HASURA_GRAPHQL_ENABLE_REMOTE_SCHEMA_PERMISSIONS
unset HASURA_GRAPHQL_INFER_FUNCTION_PERMISSIONS
kill_hge_servers
;;
naming-conventions)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH EXPERIMENTAL FEATURE: NAMING CONVENTIONS ########>\n"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM"
run_hge_with_args serve
wait_for_port 8080
unset HASURA_GRAPHQL_EXPERIMENTAL_FEATURES
pytest "${PYTEST_COMMON_ARGS[@]}" \
test_naming_conventions.py
kill_hge_servers
export HASURA_GRAPHQL_EXPERIMENTAL_FEATURES=naming_convention
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
test_naming_conventions.py
kill_hge_servers
# Now test with the default naming convention set to graphql-default, so the
# naming convention no longer needs to be set in the source customisation.
export HASURA_GRAPHQL_DEFAULT_NAMING_CONVENTION="graphql-default"
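# Illustrative only (assumption about the convention, not executed): under
# graphql-default, generated field names are camelCased and type names PascalCased
# (a table like user_address is exposed roughly as userAddress / UserAddress),
# whereas the hasura-default convention keeps the snake_case database names.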
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
test_naming_conventions.py
unset HASURA_GRAPHQL_ADMIN_SECRET
export HASURA_GRAPHQL_EXPERIMENTAL_FEATURES=$DEFAULT_HASURA_EXPERIMENTAL_FEATURES
unset HASURA_GRAPHQL_DEFAULT_NAMING_CONVENTION
kill_hge_servers
;;
streaming-subscriptions)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH STREAMING SUBSCRIPTIONS #########################>\n"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
run_hge_with_args serve
wait_for_port 8080
# run all the subscriptions tests with streaming subscriptions enabled
pytest "${PYTEST_COMMON_ARGS[@]}" \
test_subscriptions.py
unset HASURA_GRAPHQL_ADMIN_SECRET
unset HASURA_GRAPHQL_EXPERIMENTAL_FEATURES
kill_hge_servers
;;
query-caching)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE QUERY CACHING #####################################>\n"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
# use only one capability to disable cache striping
run_hge_with_args +RTS -N1 -RTS serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
test_graphql_queries.py::TestGraphQLQueryCaching
kill_hge_servers
;;
query-logs)
# verbose logging tests
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH QUERY LOG ########>\n"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
export HASURA_GRAPHQL_ENABLED_LOG_TYPES=" startup,http-log,webhook-log,websocket-log,query-log"
export HASURA_GRAPHQL_LOG_LEVEL="debug"
# Start the server directly instead of calling run_hge_with_args, because we want
# to capture its output in a custom log file.
set -x
export LOGGING_TEST_LOGFILE_PATH="$OUTPUT_FOLDER/graphql-engine-verbose-logging.log"
"$GRAPHQL_ENGINE" serve >"$LOGGING_TEST_LOGFILE_PATH" 2>&1 &
HGE_PIDS=("${HGE_PIDS[@]}" $!)
set +x
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
--test-logging \
test_logging.py
unset HASURA_GRAPHQL_ENABLED_LOG_TYPES
kill_hge_servers
# end verbose logging tests
;;
startup-db-calls)
# verbose logging tests
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE STARTUP DB CALLS ########>\n"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
export HASURA_GRAPHQL_ENABLED_LOG_TYPES=" startup,http-log,webhook-log,websocket-log,query-log"
export HASURA_GRAPHQL_LOG_LEVEL="debug"
# Start the server directly instead of calling run_hge_with_args, because we want
# to capture its output in a custom log file.
set -x
export LOGGING_TEST_LOGFILE_PATH="$OUTPUT_FOLDER/graphql-engine-verbose-logging-db.log"
"$GRAPHQL_ENGINE" serve >"$LOGGING_TEST_LOGFILE_PATH" 2>&1 &
HGE_PIDS=("${HGE_PIDS[@]}" $!)
set +x
wait_for_port 8080
kill_hge_servers
# end verbose logging
# running HGE server again for pytest, the test will use the log generated from the previous run
# see https://github.com/hasura/graphql-engine-mono/pull/3813 for more information
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
--test-startup-db-calls \
test_startup_db_calls.py
kill_hge_servers
unset HASURA_GRAPHQL_ENABLED_LOG_TYPES
unset HASURA_GRAPHQL_LOG_LEVEL
unset HASURA_GRAPHQL_ADMIN_SECRET
unset LOGGING_TEST_LOGFILE_PATH
;;
read-only-db)
## read-only DB tests; Hasura should start and run read queries against a read-only DB
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH READ-ONLY DATABASE ########>\n"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
export HASURA_GRAPHQL_ENABLED_LOG_TYPES="startup,http-log,webhook-log,websocket-log,query-log"
export HASURA_GRAPHQL_LOG_LEVEL="debug"
export HASURA_GRAPHQL_DEV_MODE="false"
export HASURA_GRAPHQL_ADMIN_INTERNAL_ERRORS="false"
# setup the database for read-only access
# 'test_graphql_read_only_source.py' assumes 'HASURA_READONLY_DB_URL' is set
# Note: setting default_transaction_mode to read-only etc. doesn't work for
# DDL statements. To replicate read-only access even for DDLs, we need to
# create a read-only user
readonly_sql=$(cat <<EOF
CREATE USER hasuraro WITH PASSWORD 'passme';
GRANT CONNECT ON DATABASE pg_source_1 TO hasuraro;
GRANT USAGE ON SCHEMA public TO hasuraro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO hasuraro;
GRANT SELECT ON ALL TABLES IN SCHEMA pg_catalog TO hasuraro;
GRANT SELECT ON ALL TABLES IN SCHEMA information_schema TO hasuraro;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO hasuraro;
EOF
)
psql "$HASURA_GRAPHQL_PG_SOURCE_URL_1" -c "$readonly_sql"
export HASURA_READONLY_DB_URL="postgresql://hasuraro:passme@localhost:5432/pg_source_1"
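# Sanity check (assumption, not run here): a write through the read-only role is
# expected to fail, e.g.
#   psql "$HASURA_READONLY_DB_URL" -c 'CREATE TABLE should_fail (id int);'
# should error with a permission-denied message.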
run_hge_with_args serve
wait_for_port 8080
# and then test graphql queries work
pytest "${PYTEST_REPORTING_ARGS[@]}" \
--hge-urls "$HGE_URL" \
--pg-urls "$HASURA_GRAPHQL_PG_SOURCE_URL_1" \
--test-read-only-source \
test_graphql_read_only_source.py
unset HASURA_GRAPHQL_ENABLED_LOG_TYPES
kill_hge_servers
# end read-only DB tests
;;
remote-schema-https)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH SECURE REMOTE SCHEMA #########################>\n"
OLD_REMOTE_SCHEMAS_WEBHOOK_DOMAIN="${REMOTE_SCHEMAS_WEBHOOK_DOMAIN}"
export REMOTE_SCHEMAS_WEBHOOK_DOMAIN="https://localhost:5001"
init_ssl
run_hge_with_args serve
wait_for_port 8080
python3 graphql_server.py 5001 "$OUTPUT_FOLDER/ssl/webhook.pem" "$OUTPUT_FOLDER/ssl/webhook-key.pem" >"$OUTPUT_FOLDER/remote_gql_server.log" 2>&1 &
GQL_SERVER_PID=$!
wait_for_port 5001
pytest "${PYTEST_COMMON_ARGS[@]}" test_schema_stitching.py::TestRemoteSchemaBasic
export REMOTE_SCHEMAS_WEBHOOK_DOMAIN="${OLD_REMOTE_SCHEMAS_WEBHOOK_DOMAIN}"
kill_hge_servers
kill "$GQL_SERVER_PID"
;;
post-webhook)
webhook_tests_check_root
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET & WEBHOOK (POST) #########################>\n"
export HASURA_GRAPHQL_AUTH_HOOK="https://localhost:9090/"
export HASURA_GRAPHQL_AUTH_HOOK_MODE="POST"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
init_ssl
start_multiple_hge_servers
python3 webhook.py 9090 "$OUTPUT_FOLDER/ssl/webhook-key.pem" "$OUTPUT_FOLDER/ssl/webhook.pem" >"$OUTPUT_FOLDER/webhook.log" 2>&1 &
WH_PID=$!
wait_for_port 9090
run_pytest_parallel --hge-webhook="$HASURA_GRAPHQL_AUTH_HOOK"
kill_hge_servers
;;
webhook-request-context)
webhook_tests_check_root
echo -e "\n$(time_elapsed): <########## TEST WEBHOOK RECEIVES REQUEST DATA AS CONTEXT #########################>\n"
export HASURA_GRAPHQL_AUTH_HOOK="http://localhost:5594/"
export HASURA_GRAPHQL_AUTH_HOOK_MODE="POST"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
--hge-webhook="$HASURA_GRAPHQL_AUTH_HOOK" \
--test-webhook-request-context \
test_webhook_request_context.py
kill_hge_servers
;;
get-webhook)
webhook_tests_check_root
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET & WEBHOOK (GET) #########################>\n"
export HASURA_GRAPHQL_AUTH_HOOK="https://localhost:9090/"
export HASURA_GRAPHQL_AUTH_HOOK_MODE="GET"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
init_ssl
start_multiple_hge_servers
python3 webhook.py 9090 "$OUTPUT_FOLDER/ssl/webhook-key.pem" "$OUTPUT_FOLDER/ssl/webhook.pem" >"$OUTPUT_FOLDER/webhook.log" 2>&1 &
WH_PID=$!
wait_for_port 9090
run_pytest_parallel --hge-webhook="$HASURA_GRAPHQL_AUTH_HOOK"
kill_hge_servers
;;
insecure-webhook)
webhook_tests_check_root
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN SECRET & HTTPS INSECURE WEBHOOK (GET) ########>\n"
export HASURA_GRAPHQL_AUTH_HOOK="https://localhost:9090/"
export HASURA_GRAPHQL_AUTH_HOOK_MODE="GET"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
init_ssl
rm /etc/ssl/certs/webhook.crt
update-ca-certificates
run_hge_with_args serve
wait_for_port 8080
echo -e "running webhook"
python3 webhook.py 9090 "$OUTPUT_FOLDER/ssl/webhook-key.pem" "$OUTPUT_FOLDER/ssl/webhook.pem" &
WH_PID=$!
echo -e "webhook pid $WH_PID"
wait_for_port 9090
pytest "${PYTEST_COMMON_ARGS[@]}" \
--hge-webhook="$HASURA_GRAPHQL_AUTH_HOOK" \
--test-webhook-insecure \
test_webhook_insecure.py
kill_hge_servers
;;
insecure-webhook-with-admin-secret)
webhook_tests_check_root
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ADMIN_SECRET & HTTPS INSECURE WEBHOOK WITH ADMIN SECRET (POST) ########>\n"
export HASURA_GRAPHQL_AUTH_HOOK="https://localhost:9090/"
export HASURA_GRAPHQL_AUTH_HOOK_MODE="POST"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
init_ssl
rm /etc/ssl/certs/webhook.crt
update-ca-certificates
run_hge_with_args serve
wait_for_port 8080
python3 webhook.py 9090 "$OUTPUT_FOLDER/ssl/webhook-key.pem" "$OUTPUT_FOLDER/ssl/webhook.pem" >"$OUTPUT_FOLDER/webhook.log" 2>&1 &
WH_PID=$!
echo -e "webhook pid $WH_PID"
wait_for_port 9090
pytest "${PYTEST_COMMON_ARGS[@]}" \
--hge-webhook="$HASURA_GRAPHQL_AUTH_HOOK" \
--test-webhook-insecure \
test_webhook_insecure.py
kill_hge_servers
kill $WH_PID
;;
apollo-federation)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH APOLLO FEDERATION ########>\n"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM"
export HASURA_GRAPHQL_EXPERIMENTAL_FEATURES="apollo_federation"
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
test_apollo_federation.py
unset HASURA_GRAPHQL_EXPERIMENTAL_FEATURES
unset HASURA_GRAPHQL_ADMIN_SECRET
kill_hge_servers
;;
allowlist-queries)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH ALLOWLIST QUERIES ########> \n"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
export HASURA_GRAPHQL_ENABLE_ALLOWLIST=true
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
test_allowlist_queries.py
kill_hge_servers
;;
developer-api-tests)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH DEVELOPER API ENABLED ########>\n"
export HASURA_GRAPHQL_ENABLED_APIS="metadata,graphql,developer,config,pgdump"
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
run_hge_with_args serve --enabled-apis "$HASURA_GRAPHQL_ENABLED_APIS"
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
test_dev_endpoints.py
unset HASURA_GRAPHQL_ENABLED_APIS
kill_hge_servers
;;
jwk-url)
# TODO(swann): ditto, these have to be parallelised
# jwk test
unset HASURA_GRAPHQL_AUTH_HOOK
unset HASURA_GRAPHQL_AUTH_HOOK_MODE
unset HASURA_GRAPHQL_JWT_SECRET
export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH JWK URL ########> \n"
# start the JWK server
python3 jwk_server.py >"$OUTPUT_FOLDER/jwk_server.log" 2>&1 &
JWKS_PID=$!
wait_for_port 5001
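# jwk_server.py appears to serve a JWK set whose Cache-Control/Expires response
# headers are driven by the query parameters used below; each run then checks
# that graphql-engine re-fetches (or declines to cache) the JWK set accordingly.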
echo "Test: Cache-Control with max-age=3"
export HASURA_GRAPHQL_JWT_SECRET='{"jwk_url": "http://localhost:5001/jwk-cache-control?max-age=3"}'
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
--test-jwk-url \
-k 'test_cache_control_header_max_age'
kill_hge_servers
unset HASURA_GRAPHQL_JWT_SECRET
echo "Test: Cache-Control with must-revalidate, max-age=3"
export HASURA_GRAPHQL_JWT_SECRET='{"jwk_url": "http://localhost:5001/jwk-cache-control?must-revalidate=true&must-revalidate=true"}'
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
--test-jwk-url \
-k 'test_cache_control_header_max_age'
kill_hge_servers
unset HASURA_GRAPHQL_JWT_SECRET
echo "Test: Cache-Control with must-revalidate"
export HASURA_GRAPHQL_JWT_SECRET='{"jwk_url": "http://localhost:5001/jwk-cache-control?must-revalidate=true"}'
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
--test-jwk-url \
-k 'test_cache_control_header_no_caching'
kill_hge_servers
unset HASURA_GRAPHQL_JWT_SECRET
echo "Test: Cache-Control with no-cache, public"
export HASURA_GRAPHQL_JWT_SECRET='{"jwk_url": "http://localhost:5001/jwk-cache-control?no-cache=true&public=true"}'
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
--test-jwk-url \
-k 'test_cache_control_header_no_caching'
kill_hge_servers
unset HASURA_GRAPHQL_JWT_SECRET
echo "Test: Cache-Control with no-store, max-age=3"
export HASURA_GRAPHQL_JWT_SECRET='{"jwk_url": "http://localhost:5001/jwk-cache-control?no-store=true&max-age=3"}'
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
--test-jwk-url \
-k 'test_cache_control_header_no_caching'
kill_hge_servers
unset HASURA_GRAPHQL_JWT_SECRET
echo "Test: Expires with three second expiry"
export HASURA_GRAPHQL_JWT_SECRET='{"jwk_url": "http://localhost:5001/jwk-expires?seconds=3"}'
run_hge_with_args serve
wait_for_port 8080
pytest "${PYTEST_COMMON_ARGS[@]}" \
--test-jwk-url \
-k 'test_expires_header'
kill_hge_servers
unset HASURA_GRAPHQL_JWT_SECRET
kill $JWKS_PID
# end jwk url test
;;
horizontal-scaling)
# horizontal scale test
unset HASURA_GRAPHQL_AUTH_HOOK
unset HASURA_GRAPHQL_AUTH_HOOK_MODE
unset HASURA_GRAPHQL_ADMIN_SECRET
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH HORIZONTAL SCALING ########>\n"
HASURA_HS_TEST_DB='postgresql://postgres:postgres@localhost:6543/hs_hge_test'
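# Two graphql-engine instances (ports 8080 and 8081) share this database through
# pgbouncer on port 6543; pgbouncer is shut down and restarted mid-test to
# exercise behaviour across a connection reset.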
if ! psql "$HASURA_GRAPHQL_DATABASE_URL" -c "SELECT 1 FROM pg_database WHERE datname = 'hs_hge_test'" | grep -q -F '(1 row)'; then
psql "$HASURA_GRAPHQL_DATABASE_URL" -c 'CREATE DATABASE hs_hge_test;'
fi
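# Build the pgbouncer [databases] entry: pgUserInfo extracts the user (and
# password, if any) from HASURA_GRAPHQL_DATABASE_URL, while pgDbInfo asks the
# server for its host, port and current database name.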
pgUserInfo=$(python3 -c '
import os
from urllib.parse import urlparse
uri = urlparse( os.environ["HASURA_GRAPHQL_DATABASE_URL"] )
if uri.password:
    print("password="+uri.password+" user="+uri.username)
else:
    print("user="+uri.username)')
pgDbInfo=$(psql "$HASURA_GRAPHQL_DATABASE_URL" -c "SELECT concat(' host=',inet_server_addr(),' port=', inet_server_port(),' dbname=',current_database())" | sed -n '3 p')
# create pgbouncer user
id pgbouncer || useradd pgbouncer
cd "$CIRCLECI_FOLDER"
mkdir -p pgbouncer
chown -R pgbouncer:pgbouncer pgbouncer
echo '[databases]
hs_hge_test = '"$pgDbInfo" "$pgUserInfo"'
[pgbouncer]
listen_port = 6543
listen_addr = 127.0.0.1
logfile = pgbouncer/pgbouncer.log
pidfile = pgbouncer/pgbouncer.pid
auth_type = md5
auth_file = pgbouncer/users.txt
admin_users = postgres' >pgbouncer/pgbouncer.ini
# start pgbouncer
pgbouncer -u pgbouncer -d pgbouncer/pgbouncer.ini
cd "$PYTEST_ROOT"
sleep 2
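# (illustrative only, not executed) at this point pgbouncer should accept
# connections on port 6543, e.g.:
#   psql 'postgresql://postgres:postgres@localhost:6543/hs_hge_test' -c 'SELECT 1;'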
# start 1st server
run_hge_with_args --database-url "$HASURA_HS_TEST_DB" serve
wait_for_port 8080
# start 2nd server
run_hge_with_args --database-url "$HASURA_HS_TEST_DB" serve \
--server-port 8081
wait_for_port 8081
# run test
pytest "${PYTEST_COMMON_ARGS[@]}" \
--test-hge-scale-url="http://localhost:8081" \
test_horizontal_scale.py
# Shutdown pgbouncer
psql "postgresql://postgres:postgres@localhost:6543/pgbouncer" -c "SHUTDOWN;" || true
cd "$CIRCLECI_FOLDER"
# start pgbouncer again
pgbouncer -u pgbouncer -d pgbouncer/pgbouncer.ini
cd "$PYTEST_ROOT"
# give pgbouncer and the graphql-engine servers time to re-establish connections
sleep 20
# run test
pytest "${PYTEST_COMMON_ARGS[@]}" \
--test-hge-scale-url="http://localhost:8081" \
test_horizontal_scale.py
# Shutdown pgbouncer
psql "postgresql://postgres:postgres@localhost:6543/pgbouncer" -c "SHUTDOWN;" || true
kill_hge_servers
psql "$HASURA_GRAPHQL_DATABASE_URL" -c "drop database hs_hge_test;"
sleep 4
unset HASURA_HS_TEST_DB
# end horizontal scale test
;;
#
# ###########################################
# the following backend-* tests are excluded from `server-test-names.txt`
# and are run via their respective `test_oss_server_*` jobs
#
# [Specifying Pytests with -k flag]
# tests are run with the -k flag to filter on common and
# backend-specific test classes using keyword expressions.
#
# this reduces the number of unrelated tests skipped, which
# avoids an increasingly negative impact on our test run
# time as we add more backends and tests.
#
#   https://docs.pytest.org/en/6.2.x/usage.html#specifying-tests-selecting-tests
#   https://github.com/hasura/graphql-engine/blob/master/server/tests-py/README.md#running-bigquery-tests
#
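# For illustration (not run here), -k keyword expressions can be combined, e.g.
#   pytest -k "MSSQL and not NamingConventions"
# selects the MSSQL-specific test classes while leaving out the
# naming-convention ones.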
backend-mssql)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH SQL SERVER BACKEND ###########################################>\n"
export HASURA_GRAPHQL_EXPERIMENTAL_FEATURES=naming_convention
run_hge_with_args serve
wait_for_port 8080
source_data_sources_utils
add_mssql_source 8080 "$HASURA_GRAPHQL_MSSQL_SOURCE_URL"
# See note [Specifying Pytests with -k flag]
pytest "${PYTEST_COMMON_ARGS[@]}" \
--backend=mssql \
-k "MSSQL"
# start inherited roles test
echo -e "\n$(time_elapsed): <########## TEST INHERITED-ROLES WITH SQL SERVER BACKEND ###########################################>\n"
pytest "${PYTEST_COMMON_ARGS[@]}" \
--backend=mssql \
-k TestGraphQLInheritedRolesMSSQL
# end inherited roles test
# start naming conventions test (expected to fail on backends other than Postgres)
echo -e "\n$(time_elapsed): <########## TEST NAMING CONVENTIONS WITH SQL SERVER BACKEND ###########################################>\n"
pytest "${PYTEST_COMMON_ARGS[@]}" \
--backend=mssql \
-k TestNamingConventionsFailure
export HASURA_GRAPHQL_EXPERIMENTAL_FEATURES="$DEFAULT_HASURA_EXPERIMENTAL_FEATURES"
# end naming conventions test
kill_hge_servers
;;
backend-citus)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH CITUS BACKEND ###########################################>\n"
run_hge_with_args serve
wait_for_port 8080
source_data_sources_utils
add_citus_source 8080 "$HASURA_GRAPHQL_CITUS_SOURCE_URL"
# See note [Specifying Pytests with -k flag]
pytest "${PYTEST_COMMON_ARGS[@]}" \
--backend=citus \
-k "Citus"
kill_hge_servers
;;
backend-bigquery)
echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH BIGQUERY BACKEND ###########################################>\n"
# shellcheck source=../scripts/bigquery.sh
source "$CIRCLECI_FOLDER/../scripts/bigquery.sh" && verify_bigquery_pytest_env
run_hge_with_args serve
wait_for_port 8080
source_data_sources_utils
add_bigquery_source 8080
# See note [Specifying Pytests with -k flag]
pytest "${PYTEST_COMMON_ARGS[@]}" \
--backend=bigquery \
-k "Bigquery"
kill_hge_servers
;;
esac
echo "Finished running tests on node $CIRCLE_NODE_INDEX of $CIRCLE_NODE_TOTAL"
echo -e "\n$(time_elapsed): <########## DONE ########>\n"