⚠️ Add initial Maintainers Annotation parsing (#3905)

* feat: Get maintainers annotation from repo

This commit adds functionality to read a scorecard.yml file from a repository and parse it to get the maintainers annotation. It introduces the concepts of exemptions, annotations, annotated checks, and annotation reasons; a sketch of the resulting types follows below.

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

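The concepts introduced here are reworked over the rest of this PR; a minimal sketch of the final shape of the types, mirroring the `config/annotations.go` and `config/config.go` files added further down and assuming only the `gopkg.in/yaml.v3` dependency those files use:

```go
// Sketch of the maintainers-annotation types, mirroring the config package
// added later in this PR.
package config

import "gopkg.in/yaml.v3"

// Reason is the reason behind an annotation, e.g. "test-data".
type Reason string

// ReasonGroup wraps a single reason attached to a group of checks.
type ReasonGroup struct {
	Reason Reason `yaml:"reason"`
}

// Annotation ties a group of checks to one or more reasons.
type Annotation struct {
	Checks  []string      `yaml:"checks"`
	Reasons []ReasonGroup `yaml:"reasons"`
}

// Config holds the maintainer-provided annotations read from scorecard.yml.
type Config struct {
	Annotations []Annotation `yaml:"annotations"`
}

// parse unmarshals raw scorecard.yml content into a Config.
func parse(content []byte) (Config, error) {
	var c Config
	if err := yaml.Unmarshal(content, &c); err != nil {
		return Config{}, err
	}
	return c, nil
}
```
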
* feat: Hand off maintainers annotation for SARIF

Hand off the maintainers annotation to SARIF formatting so it can decide whether to skip checks when creating the output.

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* feat: If check is annotated, skip in SARIF output

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

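A hedged sketch of the skip described in the two commits above; the `filterExempted` helper is illustrative (the real change lives inside `AsSARIF`, shown in the diff below), while `CheckResult.IsExempted` is the method this PR adds to the `checker` package:

```go
package sarif

import (
	"github.com/ossf/scorecard/v5/checker"
	"github.com/ossf/scorecard/v5/config"
)

// filterExempted drops annotated (exempted) checks before SARIF output so
// they do not trigger GitHub security alerts for the repository.
func filterExempted(checks []checker.CheckResult, cfg config.Config) []checker.CheckResult {
	kept := make([]checker.CheckResult, 0, len(checks))
	for _, check := range checks {
		if exempted, _ := check.IsExempted(cfg); exempted {
			continue // annotated: leave this check out of the SARIF runs
		}
		kept = append(kept, check)
	}
	return kept
}
```
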
* feat: Add other annotation reasons

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* feat: Add options to show maintainers annotations in output

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* feat: Output maintainers annotations in JSON

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: Remove unnecessary maintainers annotation param in SARIF

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* feat: Output maintainers annotations in string default result

This commit changes how data is added to the table rows. Previously, we defined the table column count up front and wrote information into each index. To avoid complicating the index calculation now that another optional column is being added, the data is appended to the row as needed (sketched below).

Also, the maintainers annotation is displayed as the last column so that Scorecard's official reasoning and documentation appear first.

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

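A hedged sketch of the append-based row construction described above; `buildRow` is an illustrative helper (the optional details column is omitted here), while the field names, score formatting, and trailing annotation column follow the `AsString` diff later in this PR:

```go
package pkg

import (
	"fmt"
	"strings"

	"github.com/ossf/scorecard/v5/checker"
)

// buildRow assembles one table row by appending cells in display order, so
// the optional annotation column never shifts any index arithmetic.
func buildRow(res checker.CheckResult, docURL string, reasons []string, showAnnotations bool) []string {
	var row []string
	if res.Score == checker.InconclusiveResultScore {
		row = append(row, "?")
	} else {
		row = append(row, fmt.Sprintf("%d / %d", res.Score, checker.MaxResultScore))
	}
	row = append(row, res.Name, res.Reason, docURL)
	if showAnnotations {
		// Annotations come last so the official reason and documentation read first.
		row = append(row, strings.Join(reasons, "\n"))
	}
	return row
}
```
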
* feat: Ignore annotation if check has max score

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* doc: Add documentation for maintainers annotation

Introduce the flag used to show maintainers annotations and explain how to configure them for your repository.

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* refactor: A maintainers annotation obj can verify if a check is exempted

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* refactor: Get annotations function can be private

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* refactor: Find scorecard.yml file in the repository's root

Change to "GetFileContent" method since we're looking for a specific file instead of using "OnMatchingFileContentDo" method that looks files with a specific content.
This also removes the dependency from "checks/fileparser". This is necessary to move "IsCheckExempted" to checker.

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: A check should know if it's exempted or not

Move the "IsCheckExempted" verification from the maintainers_annotation package to the checker package. This way a check result decides, by consulting the maintainers annotation, whether it is exempted (usage sketched below).

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

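An illustrative caller (not part of this PR) for the relocated query, assuming only the `IsExempted` signature shown in the `checker` diff below:

```go
package example

import (
	"fmt"
	"strings"

	"github.com/ossf/scorecard/v5/checker"
	"github.com/ossf/scorecard/v5/config"
)

// printAnnotated reports which check results are exempted according to the
// repository's parsed scorecard.yml configuration.
func printAnnotated(results []checker.CheckResult, cfg config.Config) {
	for i := range results {
		if exempted, reasons := results[i].IsExempted(cfg); exempted {
			fmt.Printf("%s is annotated: %s\n", results[i].Name, strings.Join(reasons, "; "))
		}
	}
}
```
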
* fix: Maintainers annotation can only be used in experimental mode

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

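A hedged sketch of that gate, matching the environment variable used in the options and cmd diffs below; both the `--show-annotations` flag registration and the `scorecard.yml` parsing are wired up only when it is set:

```go
package options

import "os"

// isExperimentalEnabled reports whether experimental features (and therefore
// the maintainers-annotations support) were enabled via environment variable.
func isExperimentalEnabled() bool {
	return os.Getenv("SCORECARD_EXPERIMENTAL") == "1"
}
```
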
* fix: Ignore if scorecard.yml does not exist

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: Remove unnecessary maintainers annotation param

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* docs: Move complete maintainers annotation doc to feature folder

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: Error logs

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* refactor: Rename AnnotationReason to Reason

Avoid repetition in variable references.

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* refactor: Reason documentation

Redo the reason documentation as a switch statement that is called when necessary, instead of defining a global map. A switch should also perform better than instantiating a local map.

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* refactor: Rename ScorecardYml to ScorecardConfig

This is a better, more generic name for referencing the Scorecard configuration file; it leaves the file format to the implementation.

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: Check name comparison

The EqualFold comparison is already case-insensitive (see the example below).

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

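For reference, a tiny runnable example of the standard-library behavior being relied on here (not code from the PR):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// EqualFold folds case on both sides, so check names from scorecard.yml
	// match their canonical spelling without any extra ToLower calls.
	fmt.Println(strings.EqualFold("Binary-Artifacts", "binary-artifacts")) // true
}
```
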
* refactor: Rename maintainers annotation folder/file to config

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* refactor: Rename and simplify parsing the config

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* refactor: Check parses its reasons

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: Is check exempted

Fix the config structure renaming and collect all annotation reasons for a check. Don't stop at the first annotation in which the check is exempted.

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* refactor: Rename maintainers annotation to annotations

Rename flags, function params, and docs, and fix config renamings.

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* refactor: Separate annotations content from config parsing

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: Omit empty annotations in JSON results

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* refactor: Read config file content

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* refactor: JSON2 result options

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

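A hedged sketch of a call site for the reworked API: details and annotations are now toggled through an options struct instead of positional arguments. The `renderJSON` helper is illustrative; the option and method names come from the `AsJSON2ResultOption` diff below:

```go
package example

import (
	"bytes"

	docChecks "github.com/ossf/scorecard/v5/docs/checks"
	"github.com/ossf/scorecard/v5/log"
	"github.com/ossf/scorecard/v5/pkg"
)

// renderJSON renders a result as JSON2, optionally including per-check
// details and maintainers annotations.
func renderJSON(r *pkg.ScorecardResult, docs docChecks.Doc, showAnnotations bool) (string, error) {
	var buf bytes.Buffer
	o := pkg.AsJSON2ResultOption{
		Details:     true,
		LogLevel:    log.DefaultLevel,
		Annotations: showAnnotations,
	}
	if err := r.AsJSON2(&buf, docs, o); err != nil {
		return "", err
	}
	return buf.String(), nil
}
```
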
* refactor: String result options

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Mock GetFileReader

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Annotation on Binary-Artifacts check

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* feat: Validate annotated checks

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Annotating all checks

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* feat: Validate annotated reasons

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

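A hedged sketch of how a caller can distinguish the two validation failures, using the sentinel errors and `Parse` signature from the new `config/config.go` below (the `loadAnnotations` helper itself is illustrative):

```go
package example

import (
	"errors"
	"fmt"
	"os"

	"github.com/ossf/scorecard/v5/config"
)

// loadAnnotations parses and validates a scorecard.yml against the list of
// known check names, reporting which kind of entry was invalid.
func loadAnnotations(path string, knownChecks []string) (config.Config, error) {
	f, err := os.Open(path)
	if err != nil {
		return config.Config{}, err
	}
	defer f.Close()

	c, err := config.Parse(f, knownChecks)
	switch {
	case errors.Is(err, config.ErrInvalidCheck):
		return config.Config{}, fmt.Errorf("scorecard.yml names an unknown check: %w", err)
	case errors.Is(err, config.ErrInvalidReason):
		return config.Config{}, fmt.Errorf("scorecard.yml uses an unknown reason: %w", err)
	case err != nil:
		return config.Config{}, err
	}
	return c, nil
}
```
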
* test: Annotating all reasons

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Multiple annotations

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Binary-Artifacts exempted for testing

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Binary-Artifacts not exempted

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: No checks exempted

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Exemption is outdated

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Improve reasons error comparison

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Multiple exemption reasons in a single annotation

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Multiple exemption reasons across annotations

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: cmd show annotations flag doc

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Add show annotations flag

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: Remove unnecessary function

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Annotations string format

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Annotations json format

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: Linter fallthrough

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: Linter imports

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: Linter unnecessary struct type declaration

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: Linter append combine

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: Linter struct memory

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: Linter improve error msg in run scorecard

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: Linter dynamic errors

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* docs: Disable security alerts on SARIF output

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* docs: Redirect to configuration doc on main README

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Invalid check in annotations

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Invalid reason in annotations

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Exempt check on SARIF output clears runs

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* test: Add check1 annotations json

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: On parse error return an empty config, not a "dirty" one

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: On parse config error continue execution

We log the error to the user but continue execution with an empty config (sketched below).

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

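A hedged sketch of that fallback (the helper name and the `*sclog.Logger` type are assumptions); it mirrors the `runScorecard` diff below, where a parse failure is logged and the run continues with the zero-value config, so no checks end up annotated:

```go
package example

import (
	"io"

	"github.com/ossf/scorecard/v5/config"
	sclog "github.com/ossf/scorecard/v5/log"
)

// loadConfigOrEmpty never aborts the run: on a parse error it logs the
// problem and falls back to an empty config.
func loadConfigOrEmpty(r io.Reader, checks []string, logger *sclog.Logger) config.Config {
	c, err := config.Parse(r, checks)
	if err != nil {
		logger.Error(err, "parsing configuration file")
		return config.Config{} // Parse already returns an empty Config on error
	}
	return c
}
```
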
* fix: Merge conflicts in import rules

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix: Re-add isExperimentalEnabled method

This method is necessary to check whether experimental features are enabled so that the show-annotations feature can be activated.

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* feat: Wrap config parse under experimental flag

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>

* fix unit test by removing unused mock call

Signed-off-by: Spencer Schrock <sschrock@google.com>

---------

Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>
Gabriela Gutierrez 2024-04-23 20:15:12 +00:00 committed by GitHub
parent 39b56e809b
commit 8789bbbbfc
GPG Key ID: B5690EEEBB952194
28 changed files with 1066 additions and 63 deletions


@ -423,6 +423,13 @@ RESULTS
|---------|------------------------|--------------------------------|--------------------------------|---------------------------------------------------------------------------|
```
##### Showing Maintainers Annotations (Experimental)
To see the maintainers annotations for each check, use the `--show-annotations` option.
For more information on how to configure annotations or which annotations are available, see [the configuration doc](config/README.md).
##### Using a GitLab Repository
To run Scorecard on a GitLab repository, you must create a [GitLab Access Token](https://gitlab.com/-/profile/personal_access_tokens) with the following permissions:


@ -19,7 +19,9 @@ import (
"errors"
"fmt"
"math"
"strings"
"github.com/ossf/scorecard/v5/config"
sce "github.com/ossf/scorecard/v5/errors"
"github.com/ossf/scorecard/v5/finding"
)
@ -254,3 +256,32 @@ func LogFinding(dl DetailLogger, f *finding.Finding, level DetailType) {
dl.Warn(&lm)
}
}
// IsExempted verifies if a given check in the results is exempted in annotations.
func (check *CheckResult) IsExempted(c config.Config) (bool, []string) {
// If the check has the maximum score, it no longer makes sense to annotate it.
// This may happen if the check score was once low but the problem was later fixed
// on the Scorecard side or on the maintainers' side.
if check.Score == MaxResultScore {
return false, nil
}
// Collect all annotation reasons for this check
var reasons []string
// For all annotations
for _, annotation := range c.Annotations {
for _, checkName := range annotation.Checks {
// If check is in this annotation
if strings.EqualFold(checkName, check.Name) {
// Get all the reasons for this annotation
for _, reasonGroup := range annotation.Reasons {
reasons = append(reasons, reasonGroup.Reason.Doc())
}
}
}
}
// A check is considered exempted if it has annotation reasons
return (len(reasons) > 0), reasons
}


@ -21,6 +21,7 @@ import (
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/ossf/scorecard/v5/config"
sce "github.com/ossf/scorecard/v5/errors"
)
@ -809,3 +810,195 @@ func TestCreateRuntimeErrorResult(t *testing.T) {
})
}
}
func TestIsExempted(t *testing.T) {
t.Parallel()
type args struct {
check CheckResult
config config.Config
}
type want struct {
reasons []config.Reason
isExempted bool
}
tests := []struct {
name string
args args
want want
}{
{
name: "Binary-Artifacts exempted for testing",
args: args{
check: CheckResult{
Name: "Binary-Artifacts",
Score: 0,
},
config: config.Config{
Annotations: []config.Annotation{
{
Checks: []string{"binary-artifacts"},
Reasons: []config.ReasonGroup{
{Reason: "test-data"},
},
},
},
},
},
want: want{
isExempted: true,
reasons: []config.Reason{
config.TestData,
},
},
},
{
name: "Binary-Artifacts not exempted",
args: args{
check: CheckResult{
Name: "Binary-Artifacts",
Score: 0,
},
config: config.Config{
Annotations: []config.Annotation{
{
Checks: []string{"pinned-dependencies"},
Reasons: []config.ReasonGroup{
{Reason: "test-data"},
},
},
{
Checks: []string{"branch-protection"},
Reasons: []config.ReasonGroup{
{Reason: "not-applicable"},
},
},
},
},
},
want: want{
isExempted: false,
},
},
{
name: "No checks exempted",
args: args{
check: CheckResult{
Name: "Binary-Artifacts",
Score: 0,
},
config: config.Config{},
},
want: want{
isExempted: false,
},
},
{
name: "Exemption is outdated",
args: args{
check: CheckResult{
Name: "Binary-Artifacts",
Score: 10,
},
config: config.Config{
Annotations: []config.Annotation{
{
Checks: []string{"binary-artifacts"},
Reasons: []config.ReasonGroup{
{Reason: "test-data"},
},
},
},
},
},
want: want{
isExempted: false,
},
},
{
name: "Multiple exemption reasons in a single annotation",
args: args{
check: CheckResult{
Name: "Binary-Artifacts",
Score: 0,
},
config: config.Config{
Annotations: []config.Annotation{
{
Checks: []string{"binary-artifacts"},
Reasons: []config.ReasonGroup{
{Reason: "test-data"},
{Reason: "remediated"},
},
},
},
},
},
want: want{
isExempted: true,
reasons: []config.Reason{
config.TestData,
config.Remediated,
},
},
},
{
name: "Multiple exemption reasons across annotations",
args: args{
check: CheckResult{
Name: "Binary-Artifacts",
Score: 0,
},
config: config.Config{
Annotations: []config.Annotation{
{
Checks: []string{
"binary-artifacts",
"pinned-dependencies",
},
Reasons: []config.ReasonGroup{
{Reason: "test-data"},
},
},
{
Checks: []string{
"binary-artifacts",
"dangerous-workflow",
},
Reasons: []config.ReasonGroup{
{Reason: "remediated"},
},
},
},
},
},
want: want{
isExempted: true,
reasons: []config.Reason{
config.TestData,
config.Remediated,
},
},
},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
isExempted, reasons := tt.args.check.IsExempted(tt.args.config)
if isExempted != tt.want.isExempted {
t.Fatalf("IsExempted() = %v, want %v", isExempted, tt.want.isExempted)
}
wantReasons := []string{}
if tt.want.reasons != nil {
for _, r := range tt.want.reasons {
wantReasons = append(wantReasons, r.Doc())
}
} else {
wantReasons = nil
}
if cmp.Equal(reasons, wantReasons) == false {
t.Fatalf("Reasons for IsExempted() = %v, want %v", reasons, wantReasons)
}
})
}
}


@ -56,5 +56,9 @@ func JSON(r *pkg.ScorecardResult, w io.Writer) error {
return err
}
Normalize(r)
return r.AsJSON2(details, logLevel, docs, w)
o := pkg.AsJSON2ResultOption{
Details: details,
LogLevel: logLevel,
}
return r.AsJSON2(w, docs, o)
}


@ -39,7 +39,7 @@ import (
const (
scorecardLong = "A program that shows the OpenSSF scorecard for an open source software."
scorecardUse = `./scorecard (--repo=<repo> | --local=<folder> | --{npm,pypi,rubygems,nuget}=<package_name>)
[--checks=check1,...] [--show-details]`
[--checks=check1,...] [--show-details] [--show-annotations]`
scorecardShort = "OpenSSF Scorecard"
)

config/README.md (new file, 63 lines)

@ -0,0 +1,63 @@
# Maintainers Annotations
Maintainers Annotations is an experimental feature to let maintainers add annotations to Scorecard checks.
## Showing Maintainers Annotations
To see the maintainers annotations for each check on Scorecard results, use the `--show-annotations` option.
## Adding Annotations
To add annotations to your repository, create a `scorecard.yml` file in the root of your repository.
The file structure is as follows:
```yml
exemptions:
- checks:
- binary-artifacts
annotations:
# the binary files are only used for testing
- annotation: test-data
- checks:
- dangerous-workflow
annotations:
# the workflow is dangerous but only run under maintainers verification and approval
- annotation: remediated
```
You can annotate multiple checks at a time:
```yml
exemptions:
- checks:
- binary-artifacts
- pinned-dependencies
annotations:
# the binary files and files with unpinned dependencies are only used for testing
- annotation: test-data
```
And also provide multiple annotations for checks:
```yml
exemptions:
- checks:
- binary-artifacts
annotations:
# test.exe is only used for testing
- annotation: test-data
# dependency.exe is needed and it's used but the binary signature is verified
- annotation: remediated
```
The available checks are the Scorecard checks in lower case, e.g. Binary-Artifacts is `binary-artifacts`.
The annotations are predefined as shown in the table below:
| Annotation | Description | Example |
|------------|-------------|---------|
| test-data | To annotate when a check or probe is targeting a danger in files or code snippets only used for test or example purposes. | The binary files are only used for testing. |
| remediated | To annotate when a check or probe correctly identified a danger and, even though the danger is necessary, a remediation was already applied. | A workflow is dangerous but only run under maintainers verification and approval, or a binary is needed but it is signed or has provenance. |
| not-applicable | To annotate when a check or probe is not applicable for the case. | The dependencies should not be pinned because the project is a library. |
| not-supported | To annotate when the maintainer fulfills a check or probe in a way that is not supported by Scorecard. | Clang-Tidy is used as a SAST tool but is not identified because it's not supported. |
| not-detected | To annotate when the maintainer fulfills a check or probe in a way that is supported by Scorecard but not identified. | Dependabot is configured in the repository settings and not in a file. |
These annotations, when displayed in Scorecard results, are rendered in a human-readable format similar to the descriptions in the table above.

config/annotations.go (new file, 82 lines)

@ -0,0 +1,82 @@
// Copyright 2024 OpenSSF Scorecard Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
// Reason is the reason behind an annotation.
type Reason string
const (
// TestData is to annotate when a check or probe is targeting a danger
// in files or code snippets only used for test or example purposes.
TestData Reason = "test-data"
// Remediated is to annotate when a check or probe correctly identified a
// danger and, even though the danger is necessary, a remediation was already applied.
// E.g. a workflow is dangerous but only run under maintainers verification and approval,
// or a binary is needed but it is signed or has provenance.
Remediated Reason = "remediated"
// NotApplicable is to annotate when a check or probe is not applicable for the case.
// E.g. the dependencies should not be pinned because the project is a library.
NotApplicable Reason = "not-applicable"
// NotSupported is to annotate when the maintainer fulfills a check or probe in a way
// that is not supported by Scorecard. E.g. Clang-Tidy is used as a SAST tool but not identified
// because it's not supported.
NotSupported Reason = "not-supported"
// NotDetected is to annotate when the maintainer fulfills a check or probe in a way
// that is supported by Scorecard but not identified. E.g. Dependabot is configured in the
// repository settings and not in a file.
NotDetected Reason = "not-detected"
)
// ReasonGroup groups the annotation reason and, in the future, the related probe.
// If there is a probe, the reason applies to the probe.
// If there is not a probe, the reason applies to the check or checks in
// the group.
type ReasonGroup struct {
Reason Reason `yaml:"reason"`
}
// Annotation defines a group of checks that are being annotated for various reasons.
type Annotation struct {
Checks []string `yaml:"checks"`
Reasons []ReasonGroup `yaml:"reasons"`
}
// Doc maps a reason to its human-readable explanation.
func (r *Reason) Doc() string {
switch *r {
case TestData:
return "The files or code snippets are only used for test or example purposes."
case Remediated:
return "The dangerous files or code snippets are necessary but remediations were already applied."
case NotApplicable:
return "The check or probe is not applicable in this case."
case NotSupported:
return "The check or probe is fulfilled but in a way that is not supported by Scorecard."
case NotDetected:
return "The check or probe is fulfilled but in a way that is supported by Scorecard but it was not detected."
default:
return string(*r)
}
}
// IsValidReason validates if a reason exists.
func IsValidReason(r Reason) bool {
switch r {
case TestData, Remediated, NotApplicable, NotSupported, NotDetected:
return true
default:
return false
}
}

config/config.go (new file, 94 lines)

@ -0,0 +1,94 @@
// Copyright 2024 OpenSSF Scorecard Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
import (
"errors"
"fmt"
"io"
"strings"
"gopkg.in/yaml.v3"
sce "github.com/ossf/scorecard/v5/errors"
)
var (
ErrInvalidCheck = errors.New("check is not valid")
ErrInvalidReason = errors.New("reason is not valid")
)
// Config contains configurations defined by maintainers.
type Config struct {
Annotations []Annotation `yaml:"annotations"`
}
// parseFile takes the scorecard.yml file content and returns a `Config`.
func parseFile(c *Config, content []byte) error {
unmarshalErr := yaml.Unmarshal(content, c)
if unmarshalErr != nil {
return sce.WithMessage(sce.ErrScorecardInternal, unmarshalErr.Error())
}
return nil
}
func isValidCheck(check string, checks []string) bool {
for _, validCheck := range checks {
if strings.EqualFold(check, validCheck) {
return true
}
}
return false
}
func validate(c Config, checks []string) error {
for _, annotation := range c.Annotations {
for _, check := range annotation.Checks {
if !isValidCheck(check, checks) {
return fmt.Errorf("%w: %s", ErrInvalidCheck, check)
}
}
for _, reasonGroup := range annotation.Reasons {
if !IsValidReason(reasonGroup.Reason) {
return fmt.Errorf("%w: %s", ErrInvalidReason, reasonGroup.Reason)
}
}
}
return nil
}
// Parse reads the configuration file from the repo, stored in scorecard.yml, and returns a `Config`.
func Parse(r io.Reader, checks []string) (Config, error) {
c := Config{}
// Find scorecard.yml file in the repository's root
content, err := io.ReadAll(r)
if err != nil {
return Config{}, fmt.Errorf("fail to read configuration file: %w", err)
}
err = parseFile(&c, content)
if err != nil {
return Config{}, fmt.Errorf("fail to parse configuration file: %w", err)
}
err = validate(c, checks)
if err != nil {
return Config{}, fmt.Errorf("configuration file is not valid: %w", err)
}
// Return configuration
return c, nil
}

config/config_test.go (new file, 150 lines)

@ -0,0 +1,150 @@
// Copyright 2024 OpenSSF Scorecard Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Warning: config cannot import checks. This is why we declare a different package here
// and import both config and checks to test config.
package config_test
import (
"errors"
"os"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/ossf/scorecard/v5/checks"
"github.com/ossf/scorecard/v5/config"
)
func Test_Parse_Checks(t *testing.T) {
t.Parallel()
tests := []struct {
name string
configPath string
wantErr error
want config.Config
}{
{
name: "Annotation on a single check",
configPath: "testdata/single_check.yml",
want: config.Config{
Annotations: []config.Annotation{
{
Checks: []string{"binary-artifacts"},
Reasons: []config.ReasonGroup{{Reason: "test-data"}},
},
},
},
},
{
name: "Annotation on all checks",
configPath: "testdata/all_checks.yml",
want: config.Config{
Annotations: []config.Annotation{
{
Checks: []string{
"binary-artifacts",
"branch-protection",
"cii-best-practices",
"ci-tests",
"code-review",
"contributors",
"dangerous-workflow",
"dependency-update-tool",
"fuzzing",
"license",
"maintained",
"packaging",
"pinned-dependencies",
"sast",
"security-policy",
"signed-releases",
"token-permissions",
"vulnerabilities",
},
Reasons: []config.ReasonGroup{{Reason: "test-data"}},
},
},
},
},
{
name: "Annotating all reasons",
configPath: "testdata/all_reasons.yml",
want: config.Config{
Annotations: []config.Annotation{
{
Checks: []string{"binary-artifacts"},
Reasons: []config.ReasonGroup{
{Reason: "test-data"},
{Reason: "remediated"},
{Reason: "not-applicable"},
{Reason: "not-supported"},
{Reason: "not-detected"},
},
},
},
},
},
{
name: "Multiple annotations",
configPath: "testdata/multiple_annotations.yml",
want: config.Config{
Annotations: []config.Annotation{
{
Checks: []string{"binary-artifacts"},
Reasons: []config.ReasonGroup{{Reason: "test-data"}},
},
{
Checks: []string{"pinned-dependencies"},
Reasons: []config.ReasonGroup{{Reason: "not-applicable"}},
},
},
},
},
{
name: "Invalid check",
configPath: "testdata/invalid_check.yml",
wantErr: config.ErrInvalidCheck,
},
{
name: "Invalid reason",
configPath: "testdata/invalid_reason.yml",
wantErr: config.ErrInvalidReason,
},
}
for _, tt := range tests {
tt := tt // Re-initializing variable so it is not changed while executing the closure below
allChecks := []string{}
for check := range checks.GetAll() {
allChecks = append(allChecks, check)
}
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
r, err := os.Open(tt.configPath)
if err != nil {
t.Fatalf("Could not open config test file: %s", tt.configPath)
}
result, err := config.Parse(r, allChecks)
if tt.wantErr != nil {
if !errors.Is(err, tt.wantErr) {
t.Fatalf("Unexpected error during Parse: got %v, wantErr %v", err, tt.wantErr)
}
return
}
if diff := cmp.Diff(tt.want, result); diff != "" {
t.Errorf("Config mismatch (-want +got):\n%s", diff)
}
})
}
}

config/testdata/all_checks.yml (new vendored file, 22 lines)

@ -0,0 +1,22 @@
annotations:
- checks:
- binary-artifacts
- branch-protection
- cii-best-practices
- ci-tests
- code-review
- contributors
- dangerous-workflow
- dependency-update-tool
- fuzzing
- license
- maintained
- packaging
- pinned-dependencies
- sast
- security-policy
- signed-releases
- token-permissions
- vulnerabilities
reasons:
- reason: test-data

config/testdata/all_reasons.yml (new vendored file, 9 lines)

@ -0,0 +1,9 @@
annotations:
- checks:
- binary-artifacts
reasons:
- reason: test-data
- reason: remediated
- reason: not-applicable
- reason: not-supported
- reason: not-detected

config/testdata/invalid_check.yml (new vendored file, 5 lines)

@ -0,0 +1,5 @@
annotations:
- checks:
- credentials
reasons:
- reason: test-data

config/testdata/invalid_reason.yml (new vendored file, 5 lines)

@ -0,0 +1,5 @@
annotations:
- checks:
- binary-artifacts
reasons:
- reason: trust-data

config/testdata/multiple_annotations.yml (new vendored file, 9 lines)

@ -0,0 +1,9 @@
annotations:
- checks:
- binary-artifacts
reasons:
- reason: test-data
- checks:
- pinned-dependencies
reasons:
- reason: not-applicable

config/testdata/single_check.yml (new vendored file, 5 lines)

@ -0,0 +1,5 @@
annotations:
- checks:
- binary-artifacts
reasons:
- reason: test-data


@ -54,6 +54,9 @@ const (
// FlagShowDetails is the flag name for outputting additional check info.
FlagShowDetails = "show-details"
// FlagShowAnnotations is the flag name for outputting annotations on checks.
FlagShowAnnotations = "show-annotations"
// FlagChecks is the flag name for specifying which checks to run.
FlagChecks = "checks"
@ -152,6 +155,15 @@ func (o *Options) AddFlags(cmd *cobra.Command) {
"show extra details about each check",
)
if o.isExperimentalEnabled() {
cmd.Flags().BoolVar(
&o.ShowAnnotations,
FlagShowAnnotations,
o.ShowAnnotations,
"show maintainers annotations for checks",
)
}
cmd.Flags().IntVar(
&o.CommitDepth,
FlagCommitDepth,


@ -180,3 +180,54 @@ func TestOptions_AddFlags_Format(t *testing.T) {
})
}
}
func TestOptions_AddFlags_Annotations(t *testing.T) {
tests := []struct {
opts *Options
name string
experimental bool
}{
{
name: "Cannot show annotations if experimental is disabled",
opts: &Options{
ShowAnnotations: true,
},
},
{
name: "Show annotations with experimental enabled",
opts: &Options{
ShowAnnotations: true,
},
experimental: true,
},
{
name: "Don't show annotations with experimental enabled",
opts: &Options{
ShowAnnotations: false,
},
experimental: true,
},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
cmd := &cobra.Command{}
if tt.experimental {
t.Setenv("SCORECARD_EXPERIMENTAL", "1")
}
tt.opts.AddFlags(cmd)
if tt.experimental == false && cmd.Flag(FlagShowAnnotations) != nil {
t.Fatal("expected FlagShowAnnotations to be nil since experimental is disabled")
}
if tt.experimental == true {
value, err := cmd.Flags().GetBool(FlagShowAnnotations)
if err != nil {
t.Fatal("expected FlagShowAnnotations to be set but got error %w", err)
}
if tt.opts.ShowAnnotations != value {
t.Fatalf("expected FlagShowAnnotations to be %t, got %t", tt.opts.ShowAnnotations, value)
}
}
})
}
}


@ -30,22 +30,23 @@ import (
// Options define common options for configuring scorecard.
type Options struct {
Repo string
Local string
Commit string
LogLevel string
Format string
NPM string
PyPI string
RubyGems string
Nuget string
PolicyFile string
ResultsFile string
ChecksToRun []string
ProbesToRun []string
Metadata []string
CommitDepth int
ShowDetails bool
Repo string
Local string
Commit string
LogLevel string
Format string
NPM string
PyPI string
RubyGems string
Nuget string
PolicyFile string
ResultsFile string
ChecksToRun []string
ProbesToRun []string
Metadata []string
CommitDepth int
ShowDetails bool
ShowAnnotations bool
// Feature flags.
EnableSarif bool `env:"ENABLE_SARIF"`
EnableScorecardV6 bool `env:"SCORECARD_V6"`
@ -228,6 +229,13 @@ func (o *Options) Probes() []string {
return o.ProbesToRun
}
// isExperimentalEnabled returns true if experimental features were enabled via
// environment variable.
func (o *Options) isExperimentalEnabled() bool {
value, _ := os.LookupEnv(EnvVarScorecardExperimental)
return value == "1"
}
// isSarifEnabled returns true if SARIF format was specified in options or via
// environment variable.
func (o *Options) isSarifEnabled() bool {


@ -49,11 +49,12 @@ type jsonCheckDocumentationV2 struct {
//nolint:govet
type jsonCheckResultV2 struct {
Details []string `json:"details"`
Score int `json:"score"`
Reason string `json:"reason"`
Name string `json:"name"`
Doc jsonCheckDocumentationV2 `json:"documentation"`
Details []string `json:"details"`
Score int `json:"score"`
Reason string `json:"reason"`
Name string `json:"name"`
Doc jsonCheckDocumentationV2 `json:"documentation"`
Annotations []string `json:"annotations,omitempty"`
}
type jsonRepoV2 struct {
@ -87,6 +88,13 @@ type JSONScorecardResultV2 struct {
Metadata []string `json:"metadata"`
}
// AsJSON2ResultOption provides configuration options for JSON2 Scorecard results.
type AsJSON2ResultOption struct {
LogLevel log.Level
Details bool
Annotations bool
}
// AsJSON exports results as JSON for new detail format.
func (r *ScorecardResult) AsJSON(showDetails bool, logLevel log.Level, writer io.Writer) error {
encoder := json.NewEncoder(writer)
@ -121,8 +129,8 @@ func (r *ScorecardResult) AsJSON(showDetails bool, logLevel log.Level, writer io
}
// AsJSON2 exports results as JSON for new detail format.
func (r *ScorecardResult) AsJSON2(showDetails bool,
logLevel log.Level, checkDocs docs.Doc, writer io.Writer,
func (r *ScorecardResult) AsJSON2(writer io.Writer,
checkDocs docs.Doc, o AsJSON2ResultOption,
) error {
score, err := r.GetAggregateScore(checkDocs)
if err != nil {
@ -162,16 +170,22 @@ func (r *ScorecardResult) AsJSON2(showDetails bool,
Reason: checkResult.Reason,
Score: checkResult.Score,
}
if showDetails {
if o.Details {
for i := range checkResult.Details {
d := checkResult.Details[i]
m := DetailToString(&d, logLevel)
m := DetailToString(&d, o.LogLevel)
if m == "" {
continue
}
tmpResult.Details = append(tmpResult.Details, m)
}
}
if o.Annotations {
exempted, reasons := checkResult.IsExempted(r.Config)
if exempted {
tmpResult.Annotations = reasons
}
}
out.Checks = append(out.Checks, tmpResult)
}


@ -27,6 +27,7 @@ import (
"github.com/xeipuuv/gojsonschema"
"github.com/ossf/scorecard/v5/checker"
"github.com/ossf/scorecard/v5/config"
"github.com/ossf/scorecard/v5/finding"
"github.com/ossf/scorecard/v5/log"
)
@ -84,11 +85,12 @@ func TestJSONOutput(t *testing.T) {
//nolint:govet
tests := []struct {
name string
expected string
showDetails bool
logLevel log.Level
result ScorecardResult
name string
expected string
showDetails bool
showAnnotations bool
logLevel log.Level
result ScorecardResult
}{
{
name: "check-1",
@ -127,6 +129,55 @@ func TestJSONOutput(t *testing.T) {
Metadata: []string{},
},
},
{
name: "check-1 annotations",
showDetails: true,
showAnnotations: true,
expected: "./testdata/check1_annotations.json",
logLevel: log.DebugLevel,
result: ScorecardResult{
Repo: RepoInfo{
Name: repoName,
CommitSHA: repoCommit,
},
Scorecard: ScorecardInfo{
Version: scorecardVersion,
CommitSHA: scorecardCommit,
},
Date: date,
Checks: []checker.CheckResult{
{
Details: []checker.CheckDetail{
{
Type: checker.DetailWarn,
Msg: checker.LogMessage{
Text: "warn message",
Path: "src/file1.cpp",
Type: finding.FileTypeSource,
Offset: 5,
Snippet: "if (bad) {BUG();}",
},
},
},
Score: 5,
Reason: "half score reason",
Name: "Check-Name",
},
},
Config: config.Config{
Annotations: []config.Annotation{
{
Checks: []string{"Check-Name"},
Reasons: []config.ReasonGroup{
{Reason: "test-data"},
{Reason: "remediated"},
},
},
},
},
Metadata: []string{},
},
},
{
name: "check-2",
showDetails: true,
@ -448,7 +499,12 @@ func TestJSONOutput(t *testing.T) {
}
var result bytes.Buffer
err = tt.result.AsJSON2(tt.showDetails, tt.logLevel, checkDocs, &result)
o := AsJSON2ResultOption{
Details: tt.showDetails,
LogLevel: tt.logLevel,
Annotations: tt.showAnnotations,
}
err = tt.result.AsJSON2(&result, checkDocs, o)
if err != nil {
t.Fatalf("%s: AsJSON2: %v", tt.name, err)
}


@ -626,6 +626,14 @@ func (r *ScorecardResult) AsSARIF(showDetails bool, logLevel log.Level,
for _, check := range r.Checks {
check := check
// SARIF output triggers GitHub security alerts for a repository.
// For annotated checks, we don't want to send alerts.
exempted, _ := check.IsExempted(r.Config)
if exempted {
continue
}
doc, err := checkDocs.GetCheck(check.Name)
if err != nil {
return sce.WithMessage(sce.ErrScorecardInternal, fmt.Sprintf("GetCheck: %v: %s", err, check.Name))


@ -25,6 +25,7 @@ import (
"github.com/google/go-cmp/cmp"
"github.com/ossf/scorecard/v5/checker"
"github.com/ossf/scorecard/v5/config"
"github.com/ossf/scorecard/v5/finding"
"github.com/ossf/scorecard/v5/log"
"github.com/ossf/scorecard/v5/options"
@ -115,12 +116,13 @@ func TestSARIFOutput(t *testing.T) {
//nolint:govet
tests := []struct {
name string
expected string
showDetails bool
logLevel log.Level
result ScorecardResult
policy spol.ScorecardPolicy
name string
expected string
showDetails bool
showAnotations bool
logLevel log.Level
result ScorecardResult
policy spol.ScorecardPolicy
}{
{
name: "check with detail remediation",
@ -226,6 +228,63 @@ func TestSARIFOutput(t *testing.T) {
Metadata: []string{},
},
},
{
name: "check-1 annotations",
showDetails: true,
showAnotations: true,
expected: "./testdata/check1_annotations.sarif",
logLevel: log.DebugLevel,
policy: spol.ScorecardPolicy{
Version: 1,
Policies: map[string]*spol.CheckPolicy{
"Check-Name": {
Score: checker.MaxResultScore,
Mode: spol.CheckPolicy_ENFORCED,
},
},
},
result: ScorecardResult{
Repo: RepoInfo{
Name: repoName,
CommitSHA: repoCommit,
},
Scorecard: ScorecardInfo{
Version: scorecardVersion,
CommitSHA: scorecardCommit,
},
Date: date,
Checks: []checker.CheckResult{
{
Details: []checker.CheckDetail{
{
Type: checker.DetailWarn,
Msg: checker.LogMessage{
Text: "warn message",
Path: "src/file1.cpp",
Type: finding.FileTypeSource,
Offset: 5,
Snippet: "if (bad) {BUG();}",
},
},
},
Score: 5,
Reason: "half score reason",
Name: "Check-Name",
},
},
Config: config.Config{
Annotations: []config.Annotation{
{
Checks: []string{"Check-Name"},
Reasons: []config.ReasonGroup{
{Reason: "test-data"},
},
},
},
},
Metadata: []string{},
},
},
{
name: "check-2",
showDetails: true,


@ -19,6 +19,7 @@ import (
"context"
"errors"
"fmt"
"os"
"strings"
"sync"
"time"
@ -27,9 +28,12 @@ import (
"github.com/ossf/scorecard/v5/checker"
"github.com/ossf/scorecard/v5/clients"
"github.com/ossf/scorecard/v5/config"
sce "github.com/ossf/scorecard/v5/errors"
"github.com/ossf/scorecard/v5/finding"
proberegistration "github.com/ossf/scorecard/v5/internal/probes"
sclog "github.com/ossf/scorecard/v5/log"
"github.com/ossf/scorecard/v5/options"
)
// errEmptyRepository indicates the repository is empty.
@ -160,6 +164,25 @@ func runScorecard(ctx context.Context,
// If the user runs checks
go runEnabledChecks(ctx, repo, request, checksToRun, resultsCh)
if os.Getenv(options.EnvVarScorecardExperimental) == "1" {
// Get configuration
rc, err := repoClient.GetFileReader("scorecard.yml")
// If configuration file exists, continue. Otherwise, ignore
if err == nil {
defer rc.Close()
checks := []string{}
for check := range checksToRun {
checks = append(checks, check)
}
c, err := config.Parse(rc, checks)
if err != nil {
logger := sclog.NewLogger(sclog.DefaultLevel)
logger.Error(err, "parsing configuration file")
}
ret.Config = c
}
}
for result := range resultsCh {
ret.Checks = append(ret.Checks, result)
ret.Findings = append(ret.Findings, result.Findings...)


@ -18,6 +18,7 @@ import (
"fmt"
"io"
"os"
"strings"
"time"
"github.com/olekukonko/tablewriter"
@ -29,6 +30,7 @@ import (
"github.com/ossf/scorecard/v5/checks/raw/gitlab"
"github.com/ossf/scorecard/v5/clients/githubrepo"
"github.com/ossf/scorecard/v5/clients/gitlabrepo"
"github.com/ossf/scorecard/v5/config"
docChecks "github.com/ossf/scorecard/v5/docs/checks"
sce "github.com/ossf/scorecard/v5/errors"
"github.com/ossf/scorecard/v5/finding"
@ -59,6 +61,14 @@ type ScorecardResult struct {
RawResults checker.RawResults
Findings []finding.Finding
Metadata []string
Config config.Config
}
// AsStringResultOption provides configuration options for string Scorecard results.
type AsStringResultOption struct {
LogLevel log.Level
Details bool
Annotations bool
}
func scoreToString(s float64) string {
@ -129,12 +139,22 @@ func FormatResults(
switch opts.Format {
case options.FormatDefault:
err = results.AsString(opts.ShowDetails, log.ParseLevel(opts.LogLevel), doc, output)
o := AsStringResultOption{
Details: opts.ShowDetails,
Annotations: opts.ShowAnnotations,
LogLevel: log.ParseLevel(opts.LogLevel),
}
err = results.AsString(output, doc, o)
case options.FormatSarif:
// TODO: support config files and update checker.MaxResultScore.
err = results.AsSARIF(opts.ShowDetails, log.ParseLevel(opts.LogLevel), output, doc, policy, opts)
case options.FormatJSON:
err = results.AsJSON2(opts.ShowDetails, log.ParseLevel(opts.LogLevel), doc, output)
o := AsJSON2ResultOption{
Details: opts.ShowDetails,
Annotations: opts.ShowAnnotations,
LogLevel: log.ParseLevel(opts.LogLevel),
}
err = results.AsJSON2(output, doc, o)
case options.FormatProbe:
var opts *ProbeResultOption
err = results.AsProbe(output, opts)
@ -158,27 +178,19 @@ func FormatResults(
}
// AsString returns ScorecardResult in string format.
func (r *ScorecardResult) AsString(showDetails bool, logLevel log.Level,
checkDocs docChecks.Doc, writer io.Writer,
func (r *ScorecardResult) AsString(writer io.Writer,
checkDocs docChecks.Doc, o AsStringResultOption,
) error {
data := make([][]string, len(r.Checks))
for i, row := range r.Checks {
const withdetails = 5
const withoutdetails = 4
var x []string
if showDetails {
x = make([]string, withdetails)
} else {
x = make([]string, withoutdetails)
}
// UPGRADEv2: rename variable.
if row.Score == checker.InconclusiveResultScore {
x[0] = "?"
x = append(x, "?")
} else {
x[0] = fmt.Sprintf("%d / %d", row.Score, checker.MaxResultScore)
x = append(x, fmt.Sprintf("%d / %d", row.Score, checker.MaxResultScore))
}
cdoc, e := checkDocs.GetCheck(row.Name)
@ -187,18 +199,18 @@ func (r *ScorecardResult) AsString(showDetails bool, logLevel log.Level,
}
doc := cdoc.GetDocumentationURL(r.Scorecard.CommitSHA)
x[1] = row.Name
x[2] = row.Reason
if showDetails {
details, show := detailsToString(row.Details, logLevel)
x = append(x, row.Name, row.Reason)
if o.Details {
details, show := detailsToString(row.Details, o.LogLevel)
if show {
x[3] = details
x = append(x, details)
}
x[4] = doc
} else {
x[3] = doc
}
x = append(x, doc)
if o.Annotations {
_, reasons := row.IsExempted(r.Config)
x = append(x, strings.Join(reasons, "\n"))
}
data[i] = x
}
@ -215,10 +227,13 @@ func (r *ScorecardResult) AsString(showDetails bool, logLevel log.Level,
table := tablewriter.NewWriter(writer)
header := []string{"Score", "Name", "Reason"}
if showDetails {
if o.Details {
header = append(header, "Details")
}
header = append(header, "Documentation/Remediation")
if o.Annotations {
header = append(header, "Annotation")
}
table.SetHeader(header)
table.SetBorders(tablewriter.Border{Left: true, Top: true, Right: true, Bottom: true})
table.SetRowSeparator("-")


@ -22,6 +22,7 @@ import (
"time"
"github.com/ossf/scorecard/v5/checker"
"github.com/ossf/scorecard/v5/config"
"github.com/ossf/scorecard/v5/docs/checks"
"github.com/ossf/scorecard/v5/finding"
"github.com/ossf/scorecard/v5/log"
@ -67,6 +68,17 @@ func mockScorecardResultCheck1(t *testing.T) *ScorecardResult {
Name: "Check-Name",
},
},
Config: config.Config{
Annotations: []config.Annotation{
{
Checks: []string{"Check-Name"},
Reasons: []config.ReasonGroup{
{Reason: "test-data"},
{Reason: "not-applicable"},
},
},
},
},
Metadata: []string{},
}
}
@ -125,6 +137,23 @@ func Test_formatResults_outputToFile(t *testing.T) {
err: false,
},
},
{
name: "output file with format default and annotations",
args: args{
opts: &options.Options{
Format: options.FormatDefault,
ShowDetails: true,
LogLevel: log.DebugLevel.String(),
ShowAnnotations: true,
},
results: scorecardResults,
doc: checkDocs,
},
want: want{
path: "check1_annotations.log",
err: false,
},
},
}
for _, tt := range tests {
tt := tt

pkg/testdata/check1_annotations.json (new vendored file, 32 lines)

@ -0,0 +1,32 @@
{
"date": "2023-03-02T10:30:43-06:00",
"repo": {
"name": "org/name",
"commit": "68bc59901773ab4c051dfcea0cc4201a1567ab32"
},
"scorecard": {
"version": "1.2.3",
"commit": "ccbc59901773ab4c051dfcea0cc4201a1567abdd"
},
"score": 5,
"checks": [
{
"details": [
"Warn: warn message: src/file1.cpp:5"
],
"score": 5,
"reason": "half score reason",
"name": "Check-Name",
"documentation": {
"url": "https://github.com/ossf/scorecard/blob/main/docs/checks.md#check-name",
"short": "short description for Check-Name"
},
"annotations": [
"The files or code snippets are only used for test or example purposes.",
"The dangerous files or code snippets are necessary but remediations were already applied."
]
}
],
"metadata": []
}

pkg/testdata/check1_annotations.log (new vendored file, 12 lines)

@ -0,0 +1,12 @@
Aggregate score: 5.0 / 10
Check scores:
|--------|------------|-------------------|--------------------------------|-----------------------------------------------------------------------|--------------------------------|
| SCORE | NAME | REASON | DETAILS | DOCUMENTATION/REMEDIATION | ANNOTATION |
|--------|------------|-------------------|--------------------------------|-----------------------------------------------------------------------|--------------------------------|
| 5 / 10 | Check-Name | half score reason | Warn: warn message: | https://github.com/ossf/scorecard/blob/main/docs/checks.md#check-name | The files or code snippets are |
| | | | src/file1.cpp:5 | | only used for test or example |
| | | | | | purposes. The check or probe |
| | | | | | is not applicable in this |
| | | | | | case. |
|--------|------------|-------------------|--------------------------------|-----------------------------------------------------------------------|--------------------------------|

pkg/testdata/check1_annotations.sarif (new vendored file, 5 lines)

@ -0,0 +1,5 @@
{
"$schema": "https://raw.githubusercontent.com/oasis-tcs/sarif-spec/master/Schemata/sarif-schema-2.1.0.json",
"version": "2.1.0",
"runs": []
}