scorecard/probes/README.md
AdamKorcz 1a1d9b175c
📖 Add documentation about probes and contributing (#3762)
2024-01-23 11:32:59 -08:00


# Scorecard probes

This directory contains all the Scorecard probes.

A probe is an assessment of a focused, specific heuristic, typically isolated to a particular ecosystem. For example, Scorecard's fuzzing check consists of many different probes that each assess a particular ecosystem or aspect of fuzzing.

Each probe has its own directory in `scorecard/probes`. Probes follow a camelCase naming convention that describes the exact heuristic a particular probe assesses.

A probe can return a single finding or multiple findings, where a finding is a piece of data with an outcome, a message, and optionally a location. Probes should be designed so that a `finding.OutcomePositive` reflects a positive result and a `finding.OutcomeNegative` reflects a negative result. Scorecard has other `finding.Outcome` types available for other results; for example, `finding.OutcomeNotAvailable` is often used for scenarios where Scorecard cannot assess a project with a given probe.

In addition, probes should be named so that they answer "yes" or "no", where "yes" answers positively to the heuristic and "no" answers negatively. For example, probes that check for SAST tools in the CI are called `toolXXXInstalled`, so that `finding.OutcomePositive` reflects that using the given tool is positive, and "yes" reflects what Scorecard considers the positive outcome. For some probes this can be trickier: the `notArchived` probe checks whether a project is archived, but Scorecard considers archived projects negative, so the probe cannot be called `isArchived`. These naming conventions are not hard rules but merely guidelines.

Note that probes do not do any formal evaluation such as scoring; this is left to the evaluation step once the probes have produced their outcomes.
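To make the outcome and naming convention concrete, here is a minimal, self-contained sketch. The `Outcome` and `Finding` types below are simplified stand-ins for Scorecard's `finding` package, and `toolCodeQLInstalled` is a hypothetical probe used only for illustration:

```go
package main

import "fmt"

// Outcome is a simplified stand-in for finding.Outcome.
type Outcome int

const (
	OutcomeNegative Outcome = iota
	OutcomePositive
	OutcomeNotAvailable
)

// Finding is a simplified stand-in for a Scorecard finding:
// an outcome plus a message (location omitted here).
type Finding struct {
	Probe   string
	Outcome Outcome
	Message string
}

// toolCodeQLInstalled answers "yes"/"no" to the heuristic its name states:
// OutcomePositive means the tool was found in CI, which Scorecard
// considers the positive result.
func toolCodeQLInstalled(toolsInCI []string) Finding {
	for _, t := range toolsInCI {
		if t == "CodeQL" {
			return Finding{"toolCodeQLInstalled", OutcomePositive, "CodeQL is installed"}
		}
	}
	return Finding{"toolCodeQLInstalled", OutcomeNegative, "CodeQL is not installed"}
}

func main() {
	f := toolCodeQLInstalled([]string{"CodeQL"})
	fmt.Println(f.Outcome == OutcomePositive)
}
```

Note how the probe name is phrased so that "yes" (the tool is installed) maps to the positive outcome; the probe itself does no scoring.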

A probe consists of three files:

- `def.yml`: The documentation of the probe.
- `impl.go`: The implementation of the probe.
- `impl_test.go`: The probe's tests.
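For illustration, a probe's directory would be laid out as follows (the probe name `fuzzed` is used here only as an example):

```
probes/fuzzed/
├── def.yml
├── impl.go
└── impl_test.go
```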

## Reusing code in probes

When multiple probes use the same code, the shared code can be placed in `scorecard/probes/internal/utils`.

## How do I know which probes to add?

In general, browsing through the Scorecard GitHub issues is the best way to find new probes to add. Requests for support for new tools, fuzzing engines, or other heuristics can often be converted into specific probes.

## Probe definition formatting

Probe definitions can display links following standard markdown format.

Probe definitions can display dynamic content. This requires modifications in both `def.yml` and `impl.go`, as well as in the evaluation steps.

The following snippet in `def.yml` will display dynamic data provided by `impl.go`:

```
${{ metadata.dataToDisplay }}
```

And then in `impl.go`, add the following metadata:

```go
f, err := finding.NewWith(fs, Probe,
	"Message", nil,
	finding.OutcomePositive)
f = f.WithRemediationMetadata(map[string]string{
	"dataToDisplay": "this is the text we will display",
})
```

### Example

Consider a probe with the following line in its `def.yml`:

```
The project ${{ metadata.oss-fuzz-integration-status }} integrated into OSS-Fuzz.
```

and the probe sets the following metadata:

```go
f, err := finding.NewWith(fs, Probe,
	"Message", nil,
	finding.OutcomePositive)
f = f.WithRemediationMetadata(map[string]string{
	"oss-fuzz-integration-status": "is",
})
```

The probe will then output the following text:

```
The project is integrated into OSS-Fuzz.
```
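The substitution in the example above can be sketched as a small, self-contained Go program. Note that `expandMetadata` is a hypothetical helper written for illustration, not Scorecard's actual templating code:

```go
package main

import (
	"fmt"
	"regexp"
)

// expandMetadata replaces ${{ metadata.key }} placeholders in a probe's
// def.yml text with values from the finding's metadata map.
// Unknown keys are left untouched.
func expandMetadata(text string, metadata map[string]string) string {
	re := regexp.MustCompile(`\$\{\{\s*metadata\.([A-Za-z0-9-]+)\s*\}\}`)
	return re.ReplaceAllStringFunc(text, func(m string) string {
		key := re.FindStringSubmatch(m)[1]
		if v, ok := metadata[key]; ok {
			return v
		}
		return m
	})
}

func main() {
	text := "The project ${{ metadata.oss-fuzz-integration-status }} integrated into OSS-Fuzz."
	meta := map[string]string{"oss-fuzz-integration-status": "is"}
	fmt.Println(expandMetadata(text, meta))
	// → The project is integrated into OSS-Fuzz.
}
```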

## Should the changes be in the probe or the evaluation?

The remediation data must be set in the probe.