
How Do We Rank Project Risk?

2022-06-22

Authors: Jacques Chester


Summary

The presentation discusses the challenges of identifying and reducing cybersecurity risk in software projects, and argues that doing so requires combining objective data with expert input.
  • The speaker emphasizes expressing risk in honest probabilities and dollar amounts (illustrated with a worked example after the summary).
  • There are far more software projects than there are experts to assess them, creating a sparsity problem for expert opinions.
  • Automated tools like the Criticality Score and the Harvard Census can help identify high-risk projects, but they have limitations (see the Criticality Score sketch below).
  • Human input is necessary to fill in gaps in data and provide context, but experts may have biases and limited knowledge.
  • Prediction markets can be a useful tool for eliciting expert opinions, but they require high liquidity to be effective (see the market-maker sketch below).
The speaker gives an example of how download counts can be misleading in assessing risk, since the ratio of downloads to actual usage varies widely between projects. They also note that some projects may be high-risk yet little-known, making it difficult to gather expert opinions about them.
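None of the following code appears in the talk; these are sketches added for context. First, "honest probabilities and dollars" typically points at an expected-loss calculation: multiply the estimated probability of a bad event by its estimated dollar impact. A minimal illustration, with all figures hypothetical:

```python
# Hypothetical figures for illustration only.
p_compromise = 0.03         # estimated P(project is compromised this year)
impact_dollars = 2_000_000  # estimated cost to us if that happens
expected_loss = p_compromise * impact_dollars
print(f"expected annual loss: ${expected_loss:,.0f}")  # -> $60,000
```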
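Second, the OpenSSF Criticality Score mentioned above combines per-project signals into a single score between 0 and 1 using the published formula C = (1/Σ αᵢ) · Σ αᵢ · log(1 + Sᵢ) / log(1 + max(Sᵢ, Tᵢ)), where each signal Sᵢ has a weight αᵢ and a threshold Tᵢ. A minimal Python sketch; the signal values are hypothetical, and the weights and thresholds are assumptions only loosely modeled on the ossf/criticality_score defaults:

```python
import math

def criticality_score(signals):
    # signals maps name -> (weight, value, threshold).
    # C = (1/sum(a_i)) * sum(a_i * log(1 + S_i) / log(1 + max(S_i, T_i)))
    total_weight = sum(w for w, _, _ in signals.values())
    score = sum(
        w * math.log(1 + value) / math.log(1 + max(value, threshold))
        for w, value, threshold in signals.values()
    )
    return score / total_weight

# Hypothetical signals for an imaginary project; weights and
# thresholds are assumptions, not the tool's exact defaults.
signals = {
    "contributor_count": (2.0, 40, 5_000),
    "commit_frequency":  (1.0, 3.5, 1_000),
    "dependents_count":  (2.0, 150_000, 500_000),
    "created_since_mo":  (1.0, 80, 120),
}
print(f"criticality = {criticality_score(signals):.3f}")
```

Note that the log ratio caps each signal's contribution at 1, so no single inflated signal (such as a raw download count) can dominate the score.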
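Third, a sketch of one common prediction-market mechanism, Hanson's logarithmic market scoring rule (LMSR); the two-outcome market and all parameter values here are hypothetical. The liquidity parameter b makes the summary's liquidity caveat concrete: the smaller b is, the further a single trade moves the implied probability.

```python
import math

B = 100.0  # liquidity parameter: larger -> prices move less per trade

def cost(q):
    # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
    return B * math.log(sum(math.exp(x / B) for x in q))

def prices(q):
    # Instantaneous prices, readable as the market's probabilities.
    exps = [math.exp(x / B) for x in q]
    return [e / sum(exps) for e in exps]

def buy(q, outcome, shares):
    # What a trader pays to buy `shares` of `outcome`; returns (cost, new q).
    new_q = list(q)
    new_q[outcome] += shares
    return cost(new_q) - cost(q), new_q

# Hypothetical market: "will this project have a critical CVE this year?"
q = [0.0, 0.0]           # outcomes [yes, no], no trades yet -> P(yes) = 0.5
paid, q = buy(q, 0, 50)  # an expert buys 50 "yes" shares
print(f"paid {paid:.2f}; implied P(yes) = {prices(q)[0]:.3f}")  # ~0.62
```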

Abstract

Somewhere, right now, out there in the world, lurks libnebraska*. To most of us it seems innocuous and maybe even irrelevant. But it turns out to be on the critical dependency path for massive swathes of software worldwide. If a vulnerability affects libnebraska, or someone malicious takes control of it, we're all in a world of hurt.

How can we identify libnebraska? How can we estimate its risks? How can we classify projects into Alpha and Omega categories? Who should make these identifications and estimates? How should they do it?

In this talk, Jacques will discuss various methods for integrating the information that can be found in expert opinions. As an adjunct to data-driven methods, aggregation of expert opinions may be vital to identifying and protecting the next libnebraska.

* https://xkcd.com/2347/

Materials: