* Fix report downloading from Azure (reports are now zipped)
* Extracted upload logic into an action
* Extracted PR number file generation into its own job
Fixes:
```
Notice: Report url: https://mspwblobreport.z1.web.core.windows.net/run-5533005176-1-a0b0752662f8af5f841ff7a65b04d02066474ff2/index.html
ReferenceError: fs is not defined
at eval (eval at callAsyncFunction (/home/runner/work/_actions/actions/github-script/v6/dist/index.js:15143:16), <anonymous>:30:18)
at callAsyncFunction (/home/runner/work/_actions/actions/github-script/v6/dist/index.js:15144:12)
at main (/home/runner/work/_actions/actions/github-script/v6/dist/index.js:15236:26)
at /home/runner/work/_actions/actions/github-script/v6/dist/index.js:15217:1
at /home/runner/work/_actions/actions/github-script/v6/dist/index.js:15268:3
at Object.<anonymous> (/home/runner/work/_actions/actions/github-script/v6/dist/index.js:15271:12)
at Module._compile (node:internal/modules/cjs/loader:1105:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1159:10)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Function.Module._load (node:internal/modules/cjs/loader:822:12)
```
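The failure comes from an inline `actions/github-script` step that references `fs` without importing it. A minimal sketch of the likely fix, assuming the code runs inside a github-script step (the file name and the surrounding logic are hypothetical):
```ts
// Inside an actions/github-script step: `github`, `context` and `core` are
// injected into the script, but `fs` is not — it has to be required explicitly.
const fs = require('fs');

// Hypothetical use: read a value produced by a previous step from a file.
const reportUrl = fs.readFileSync('report-url.txt', 'utf8').trim();
core.notice(`Report url: ${reportUrl}`);
```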
This reverts commit a1cdae6bff.
The problem with this approach is that each job overwrites the zip artifact, whereas previously all reports were merged into the same directory. We are going to zip the .jsonl files instead.
The compressed size of the `tests 1` blob report is 19 MB while the uncompressed one is 211 MB. Also, according to [GitHub policy](https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts), it is the uncompressed size that is used for billing:
"Artifacts are uploaded during a workflow run, and you can view an
artifact's name and size in the UI. When an artifact is downloaded using
the GitHub UI, all files that were individually uploaded as part of the
artifact get zipped together into a single file. This means that billing
is calculated based on the size of the uploaded artifact and not the
size of the zip file."
The check summary has a link to the report and a link to the merge workflow run. Otherwise it's very hard to tell which merge workflow run corresponds to a given PR.
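A hedged sketch of how such a check summary could be published from the merge workflow via a github-script step; the check name and the environment variables holding the report URL and head SHA are placeholders, not the actual names used in the workflow:
```ts
// Placeholders for values the merge workflow computed earlier in the job.
const reportUrl = process.env.HTML_REPORT_URL;  // hypothetical env variable
const prHeadSha = process.env.PR_HEAD_SHA;      // hypothetical env variable

// Link both the HTML report and this (merge) workflow run in the check summary.
const runUrl = `${context.serverUrl}/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}`;
await github.rest.checks.create({
  ...context.repo,
  name: 'Merge report (tests)',  // assumed check name
  head_sha: prHeadSha,
  status: 'completed',
  conclusion: 'success',
  output: {
    title: 'Test results',
    summary: `[HTML report](${reportUrl})\n\n[Merge workflow run](${runUrl})`,
  },
});
```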
Downloading 457 MB of reports with traces (for tracing tests) takes >3 minutes, and uploading them to Azure takes >5 minutes, which easily exceeds the 10-minute budget.
For some reason, the `pull_requests` field on the `workflow_run` event is empty for pull requests created from branches in forked repositories; see https://github.com/orgs/community/discussions/25220. As a workaround, we store the triggering pull request number in a file.
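A sketch of that workaround, assuming the file travels with the workflow's uploaded artifacts (the file name is a placeholder):
```ts
// In the pull_request-triggered workflow (actions/github-script step):
// persist the PR number, since workflow_run.pull_requests is empty for forks.
const fs = require('fs');
fs.writeFileSync('pull_request_number.txt', String(context.payload.pull_request.number));

// In the workflow_run-triggered merge workflow, after downloading that artifact:
const prNumber = Number(fs.readFileSync('pull_request_number.txt', 'utf8').trim());
```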
* Moved report merging and publishing logic into `create_test_report.yml`, shared between all workflows
* Merged reports are now published for try jobs on pull requests too. To achieve that, the logic had to be extracted into a separate workflow triggered by [workflow_run](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#workflow_run); this way it can access secrets even if the original workflow could not.
* The blob report data flow is different depending on whether the workflow is triggered by a pull request or a push:
  - For `pull_request`, the workflow doesn't have access to the secrets, so it uploads the blob report to the GitHub artifact storage. Later on, the merge workflow uploads that blob report to Azure blob storage.
  - Workflows triggered by the `push` event can read secrets. They upload the blob report directly to Azure blob storage, and the merge workflow downloads the report from there rather than from GitHub artifacts.
`az storage blob download-batch` has been timing out over the last few days, see the upstream issue https://github.com/Azure/azure-cli/issues/26567. Replacing it with a simple bash script that discovers blobs with a given prefix and then downloads them one by one.
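The actual replacement is described as a bash script; for illustration only, here is the same list-then-download-one-by-one idea sketched with the `@azure/storage-blob` SDK. The connection string, container name and prefix are placeholders:
```ts
import * as fs from 'fs';
import * as path from 'path';
import { ContainerClient } from '@azure/storage-blob';

// Sketch only: enumerate blobs under a prefix and download them one by one,
// instead of relying on `az storage blob download-batch`.
async function downloadBlobReports(connectionString: string, prefix: string, outDir: string) {
  const container = new ContainerClient(connectionString, 'blob-reports'); // placeholder container
  for await (const blob of container.listBlobsFlat({ prefix })) {
    const dest = path.join(outDir, blob.name);
    fs.mkdirSync(path.dirname(dest), { recursive: true });
    await container.getBlobClient(blob.name).downloadToFile(dest);
    console.log(`Downloaded ${blob.name} -> ${dest}`);
  }
}
```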
This removes everything related to the docker integration experiments that we conducted over the last 6 months.
I'll send a follow-up with an alternative suggestion that was demoed at a team meeting at the end of December.
- Use `snapshotPathTemplate` for docker screenshots in html-reporter (see the config sketch after this list)
- Mark the snapshot path template test as slow since it re-spawns a worker for each project.
- Fix docker smoke tests
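A minimal illustration of the `snapshotPathTemplate` option; the concrete template used for the docker screenshots in the html-reporter tests may differ:
```ts
// playwright.config.ts — illustrative only.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Keep screenshots in a per-project folder so that expectations taken inside
  // the docker container and on the host resolve to the same paths.
  snapshotPathTemplate: '{testDir}/__screenshots__/{projectName}/{testFilePath}/{arg}{ext}',
});
```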
This patch implements a new mode of network tethering for the Playwright server & its clients.
With this patch:
- the Playwright server can be launched with the `--browser-proxy-mode=tether` flag to enable the new mode
- a new type of client, the "Network Tethering Client", can connect to the server to provide network traffic to the browsers
- all clients that connect to the server with the `x-playwright-proxy: *` header will get traffic from the "Network Tethering Client"
This patch also adds an environment variable `PW_OWNED_BY_TETHER_CLIENT`. With this variable set, the Playwright server will auto-close when the network tethering client disconnects. It will also auto-close if the network client does not connect to the server within the first 10 seconds of the server's existence. This way we can ensure that `npx playwright docker start` blocks the terminal & controls the lifetime of the started container.
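As a client-side sketch under the assumptions above (the flag and header are experimental, and the endpoint is a placeholder), a regular client could opt into tethered traffic via the standard `connect` headers option:
```ts
import { chromium } from 'playwright';

(async () => {
  // Assumes a server started with the experimental `--browser-proxy-mode=tether`
  // flag and an already-connected network tethering client.
  const browser = await chromium.connect('ws://127.0.0.1:3000/', {
    headers: { 'x-playwright-proxy': '*' }, // request traffic from the tethering client
  });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await browser.close();
})();
```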
* Added macos-12 bots to the secondary workflow
* Moved Ubuntu 20.04 from the primary to the secondary workflow
* For the bots where we don't care about the macOS version (Chrome Stable, Edge Dev, etc.), switched to macos-latest
References #16180