OpenSSF Scorecard - Security health metrics for Open Source

Security Scorecards

Motivation

A short motivational video clip to inspire us: https://youtu.be/rDMMYT3vkTk "You passed! All D's ... and an A!"

Goals

  1. Automate analysis and trust decisions on the security posture of open source projects.

  2. Use this data to proactively improve the security posture of the critical projects the world depends on.

Usage

The program requires only one argument to run: the name of the repo.

$ go build
$ ./scorecard --repo=github.com/kubernetes/kubernetes
Starting [Active]
Starting [CI-Tests]
Starting [CII-Best-Practices]
Starting [Code-Review]
Starting [Contributors]
Starting [Frozen-Deps]
Starting [Fuzzing]
Starting [Pull-Requests]
Starting [SAST]
Starting [Security-Policy]
Starting [Signed-Releases]
Starting [Signed-Tags]
Finished [Fuzzing]
Finished [CII-Best-Practices]
Finished [Frozen-Deps]
Finished [Security-Policy]
Finished [Contributors]
Finished [Signed-Releases]
Finished [Signed-Tags]
Finished [CI-Tests]
Finished [SAST]
Finished [Code-Review]
Finished [Pull-Requests]
Finished [Active]

RESULTS
-------
Active: Pass 10
CI-Tests: Pass 10
CII-Best-Practices: Pass 10
Code-Review: Pass 10
Contributors: Pass 10
Frozen-Deps: Pass 10
Fuzzing: Pass 10
Pull-Requests: Pass 10
SAST: Fail 0
Security-Policy: Pass 10
Signed-Releases: Fail 10
Signed-Tags: Fail 5
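The interleaved Starting/Finished lines above show that checks run concurrently, so the finish order varies between runs. A minimal sketch of that fan-out pattern (the function and check names here are illustrative, not the project's actual runner):

```go
package main

import (
	"fmt"
	"sync"
)

// runAll launches one goroutine per check and waits for all of them.
// Each goroutine writes only to its own index, so no mutex is needed
// for the results slice.
func runAll(names []string, run func(string) string) []string {
	results := make([]string, len(names))
	var wg sync.WaitGroup
	for i, name := range names {
		wg.Add(1)
		go func(i int, name string) {
			defer wg.Done()
			fmt.Println("Starting [" + name + "]")
			results[i] = run(name)
			fmt.Println("Finished [" + name + "]")
		}(i, name)
	}
	wg.Wait()
	return results
}

func main() {
	res := runAll([]string{"Active", "CI-Tests", "Fuzzing"}, func(n string) string {
		return n + ": Pass 10"
	})
	for _, r := range res {
		fmt.Println(r)
	}
}
```

Because each goroutine writes to a distinct slice index, the final results are printed in a stable order even though the Starting/Finished log lines interleave.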

It is recommended to use an OAuth token to avoid rate limits. You can create one by following the instructions here. Set the access token as an environment variable:

export GITHUB_AUTH_TOKEN=<your access token>

Checks

The following checks are all run against the target project:

Name                Description
----                -----------
Security-Policy     Does the project contain a security policy?
Contributors        Does the project have contributors from at least two different organizations?
Frozen-Deps         Does the project declare and freeze dependencies?
Signed-Releases     Does the project cryptographically sign releases?
Signed-Tags         Does the project cryptographically sign release tags?
CI-Tests            Does the project run tests in CI?
Code-Review         Does the project require code review before code is merged?
CII-Best-Practices  Does the project have a CII Best Practices Badge?
Pull-Requests       Does the project use Pull Requests for all code changes?
Fuzzing             Does the project use OSS-Fuzz?
SAST                Does the project use static code analysis tools, e.g. CodeQL?
Active              Did the project get any commits and releases in the last 90 days?

To see detailed information on how each check works, see the check-specific documentation page.

If you'd like to add a check, make sure it meets the following criteria:

  • automate-able
  • objective
  • actionable

and then create a new GitHub Issue.
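To make those criteria concrete, here is a hypothetical check in the spirit of Security-Policy: a heuristic that is automatable (file lookup), objective (the file exists or it doesn't), and actionable (add the file). The function name and logic are invented for illustration and are not the project's implementation:

```go
package main

import "fmt"

// hasSecurityPolicy is an illustrative heuristic, not the project's
// code: it scans a repo's top-level file names for a security policy.
// It returns a Pass/Fail decision plus a 0-10 confidence.
func hasSecurityPolicy(files []string) (pass bool, confidence int) {
	for _, f := range files {
		if f == "SECURITY.md" || f == "SECURITY" {
			// Direct evidence the policy exists: full confidence.
			return true, 10
		}
	}
	// Absence of the file is weaker evidence; the policy could live
	// elsewhere (e.g. an org-level repo), so confidence is lower.
	return false, 5
}

func main() {
	pass, conf := hasSecurityPolicy([]string{"README.md", "SECURITY.md"})
	fmt.Println(pass, conf) // prints "true 10"
}
```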

Results

Each check returns a Pass / Fail decision, as well as a confidence score between 0 and 10. A confidence of 0 should indicate the check was unable to achieve any real signal, and the result should be ignored. A confidence of 10 indicates the check is completely sure of the result.

Many of the checks are based on heuristics; contributions to improve the detection are welcome!
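As a sketch of how a consumer of these results might apply the confidence scale described above (the struct and field names are assumptions, not the real API; note that the CLI itself prints the raw result, e.g. "SAST: Fail 0", rather than suppressing low-confidence findings):

```go
package main

import "fmt"

// CheckResult mirrors the Pass/Fail decision plus 0-10 confidence
// described above. Field names are hypothetical.
type CheckResult struct {
	Name       string
	Pass       bool
	Confidence int // 0 = no real signal (ignore), 10 = certain
}

// Interpret applies the "confidence 0 means ignore the result" rule.
func Interpret(r CheckResult) string {
	if r.Confidence == 0 {
		return r.Name + ": inconclusive"
	}
	verdict := "Fail"
	if r.Pass {
		verdict = "Pass"
	}
	return fmt.Sprintf("%s: %s %d", r.Name, verdict, r.Confidence)
}

func main() {
	fmt.Println(Interpret(CheckResult{Name: "Active", Pass: true, Confidence: 10}))
	fmt.Println(Interpret(CheckResult{Name: "SAST", Pass: false, Confidence: 0}))
}
```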

Running specific checks

To run one or more specific checks, add the --checks argument with a comma-separated list of check names.

For example, --checks=CI-Tests,Code-Review.

Formatting Results

Two output formats are currently supported: default and csv. Others may be added in the future.

These may be specified with the --format flag.
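A minimal sketch of what producing the csv format might involve, using Go's standard encoding/csv package; the project's actual csv schema and column names are not shown here and may differ:

```go
package main

import (
	"bytes"
	"encoding/csv"
	"fmt"
)

// toCSV renders result rows as RFC 4180 csv. The column layout
// (check, result, confidence) is an assumption for illustration.
func toCSV(rows [][]string) (string, error) {
	var b bytes.Buffer
	w := csv.NewWriter(&b)
	if err := w.WriteAll(rows); err != nil { // WriteAll also flushes
		return "", err
	}
	return b.String(), nil
}

func main() {
	out, _ := toCSV([][]string{
		{"check", "result", "confidence"},
		{"Active", "Pass", "10"},
		{"SAST", "Fail", "0"},
	})
	fmt.Print(out)
}
```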

Requirements

  • The scorecard must only be composed of automate-able, objective data. For example, a project having 10 contributors doesn't necessarily mean it's more secure than a project with, say, 50 contributors. But having two maintainers might be preferable to only having one - the larger bus factor and ability to provide code reviews are objectively better.
  • The scorecard criteria should be as specific as possible, not limited to general recommendations. For example, for Go, we can recommend/require specific linters and analyzers to be run on the codebase.
  • The scorecard can be populated for any open source project without any work or interaction from maintainers.
  • Maintainers must be provided with a mechanism to correct any automated scorecard findings they feel were made in error, provide "hints" for anything we can't detect automatically, and even dispute the applicability of a given scorecard finding for that repository.
  • Any criteria in the scorecard must be actionable. It should be possible, with help, for any project to "check all the boxes".
  • Any solution to compile a scorecard should be usable by the greater open source community to monitor upstream security.

Contributing

If you want to get involved or have ideas you'd like to chat about, we discuss this project in the OSSF Best Practices Working Group meetings.

See the Community Calendar for the schedule and meeting invitations.

See the Contributing documentation for guidance on how to contribute.