Jberryman/1993 skip just report mode

https://github.com/hasura/graphql-engine-mono/pull/2028

Co-authored-by: David Overton <7734777+dmoverton@users.noreply.github.com>
GitOrigin-RevId: afbdb8d74bede1e78b054f5582cbf144bba20762
Brandon Simmons 2021-08-06 14:40:35 -04:00 committed by hasura-bot
parent d483109443
commit 681113f1d8
3 changed files with 22 additions and 2 deletions


@@ -8,7 +8,14 @@ Here is an overview of this directory:
 Each sub-directory here contains a different schema and accompanying latency
 benchmarks (see below for adding your own). Each benchmark set runs in parallel
-on CI.
+on CI. See `benchmark_sets/chinook/` for reference.
+
+In a benchmark set directory, the presence of an empty file with one of the
+following names has the corresponding effect:
+
+- `SKIP_CI`: don't run the benchmark in CI at all
+- `SKIP_PR_REPORT`: don't post the regression results directly in the PR
+  comment body (useful if the results are noisy for now)
 
 ### bench.sh
@@ -83,3 +90,8 @@ follow the pattern from `chinook`. The process looks like:
 Make sure the benchmark set takes **less than 20 minutes**, otherwise it will
 be killed. You can always split benchmarks up into two different sets to be run
 in parallel.
+
+If a benchmark set is not fairly stable, or you're not sure whether it is, add an
+empty file named `SKIP_PR_REPORT` to the benchmark set's directory; the benchmarks
+will still run and be recorded, but their regression numbers won't be displayed in
+the PR comment body.
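For illustration, a minimal sketch of dropping such a marker file from Python (the `my_benchmark_set` directory name is hypothetical; an equivalent `touch` from the shell works just as well):

```python
import pathlib

# Hypothetical benchmark set directory; assumed to already exist.
benchmark_set_dir = pathlib.Path("benchmark_sets/my_benchmark_set")

# Only the file's existence matters to the CI script, so an empty file suffices.
(benchmark_set_dir / "SKIP_PR_REPORT").touch()
```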


@@ -77,9 +77,15 @@ REGRESSION_REPORT_COMMENT_FILENAME = "/tmp/hasura_regression_report_comment.md"
 def main():
     try:
         benchmark_sets_basepath = abs_path("benchmark_sets")
+        # Collect the benchmarks we'll run in CI:
+        benchmark_sets = [ dir for dir in os.listdir(benchmark_sets_basepath)
+                           if not pathlib.Path(benchmark_sets_basepath, dir, 'SKIP_CI').is_file()]
+        # These benchmark sets won't have raw regression numbers displayed
+        # (likely because they are unstable for now)
+        skip_pr_report_names = [ dir for dir in os.listdir(benchmark_sets_basepath)
+                                 if pathlib.Path(benchmark_sets_basepath, dir, 'SKIP_PR_REPORT').is_file()]
 
         # Start all the instances we need and run benchmarks in parallel
         # NOTE: ProcessPoolExecutor doesn't work with shared queue
         with concurrent.futures.ThreadPoolExecutor() as executor:
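As a rough, standalone illustration of what the two new comprehensions compute (the directory names other than `chinook` are made up):

```python
import os
import pathlib
import tempfile

# Build a made-up benchmark_sets layout in a temporary directory.
base = pathlib.Path(tempfile.mkdtemp()) / "benchmark_sets"
for name, markers in [("chinook", []),
                      ("flaky_set", ["SKIP_PR_REPORT"]),
                      ("old_set", ["SKIP_CI"])]:
    (base / name).mkdir(parents=True)
    for marker in markers:
        (base / name / marker).touch()

# Same shape as the comprehensions in the diff above:
benchmark_sets = [d for d in os.listdir(base)
                  if not pathlib.Path(base, d, "SKIP_CI").is_file()]
skip_pr_report_names = [d for d in os.listdir(base)
                        if pathlib.Path(base, d, "SKIP_PR_REPORT").is_file()]

print(sorted(benchmark_sets))        # ['chinook', 'flaky_set'] -- 'old_set' is not run at all
print(sorted(skip_pr_report_names))  # ['flaky_set'] -- runs, but hidden from the PR comment
```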
@@ -93,6 +99,7 @@ def main():
         report, merge_base_pr = generate_regression_report()
         pretty_print_regression_report_github_comment(
             report,
+            skip_pr_report_names,
             merge_base_pr,
             REGRESSION_REPORT_COMMENT_FILENAME
         )
@@ -453,7 +460,7 @@ def generate_regression_report():
 # We (ab)use github's syntax highlighting for displaying the regression report
 # table as a github comment, that can be easily scanned
-def pretty_print_regression_report_github_comment(results, merge_base_pr, output_filename):
+def pretty_print_regression_report_github_comment(results, skip_pr_report_names, merge_base_pr, output_filename):
     f = open(output_filename, "w")
     def out(s): f.write(s+"\n")
@@ -494,6 +501,7 @@ def pretty_print_regression_report_github_comment(results, merge_base_pr, output
     out( f"``` diff ") # START DIFF SYNTAX
     for benchmark_set_name, (mem_in_use_before_diff, live_bytes_before_diff, mem_in_use_after_diff, live_bytes_after_diff, benchmarks) in results.items():
+        if benchmark_set_name[:-5] in skip_pr_report_names: continue
         l0 = live_bytes_before_diff
         l1 = live_bytes_after_diff
         u0 = mem_in_use_before_diff
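The `[:-5]` slice drops the last five characters of each result key before comparing it against the directory names collected in `main()`, presumably to strip a `.json` suffix (an assumption here, not stated in the diff). A minimal sketch of the filter in isolation:

```python
# Sketch only: assumes result keys look like "<benchmark set dir>.json";
# the names below are hypothetical.
skip_pr_report_names = ["flaky_set"]
results = {"chinook.json": "...", "flaky_set.json": "..."}

for benchmark_set_name in results:
    if benchmark_set_name[:-5] in skip_pr_report_names:
        continue  # still benchmarked and recorded, just omitted from the PR comment body
    print(f"would render regression rows for {benchmark_set_name}")
# -> rows are rendered only for "chinook.json"
```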