
Risk? What Do You Mean?

The products that my employer develops are all about buzzwords like Vulnerability, Compliance, and Risk. You will find these words and phrases all over the computer security field along with others, like Buffer Overflow, Malware, PUPs (potentially unwanted programs, meaning stuff you probably don't want, but if someone called it Malware someone could get sued), Data Loss Prevention, Host Intrusion Prevention System, Spam, and Antivirus. Most folks are only really aware of one, Antivirus, and may think it means all of the above. :) [For a longer discussion of this topic see Chapter 12 of The Myths of Security.]

There is a growing field of research in computer security that involves two buzzwords: Risk and Metrics. This article (and others to follow) is driven by these buzzwords. I'll stick with the topic of Risk for this article and leave Metrics as a follow-up topic.

What is Risk?

If you do your research on Risk by Googling, Wikipediaing, or looking into your rarely-used hard-copy dictionary, you will find a number of definitions that vary by the area of interest.

In the world of computer security we frequently feel the pull (or push) to rely on the same definitions and measures of Risk as in financial and mathematical arenas where Risk is defined quantitatively. It helps that the things in which they are interested can be quantified -- money, in the financial world, and everything in the mathematics world (math without quantities is just philosophy ;).

The world of computer security is, however, much more resistant to usable quantification. How can I measure, for example, the Risk that my cable provider will accidentally leave my cable modem in a state that is vulnerable to exploit, thereby making my home network a playground for intruders?  We can talk about the issues underlying that problem but this is not the time and place, if only because it would take a long time and prevent me from getting to the things I really want to say today. :)

Get to something useful already!

Let's start with Risk and provide an operational definition that makes the topic easier to talk about. Getting there will require us to define a few other terms for which I will provide practical examples. Here we go:

Risk is the possibility that some Bad Thing™ will happen

That's not so mysterious, now is it? Note that I say "some Bad Thing™" because in the computer security world it is not always easy to clearly define which sort of bad things we mean. Some examples:

It would be a Bad Thing™ for the network-connected coffee maker in the geek lounge to get denial-of-serviced. Why is this bad? Geeks need coffee! [Here we see that $ might be quantified, if we stretch things, as lost productivity. But how do you measure that without keeping track of how much sleep the geeks are getting?]

It would be a Bad Thing™ for the credit card processing system to get hacked and customer data extracted. (see PCI DSS) Why is this bad? The business could get fined or lose PCI certification. [Here we see $ rearing its head in a direct way.]

It would be a Bad Thing™ for the IT staff if the wireless access point in the office got hacked, giving free network access to other businesses in the building. Why is this bad? Well, from the IT staff perspective they could get fired! [Here the $ cost is not to the business, really, but to the employee. Yes, I'm assuming that the network connection is a fixed cost, not a function of megabytes per time period.]

As you can see, the types of Bad Thing™ that can happen are diverse: they come in many shapes and sizes, and yes, values. How bad each Bad Thing™ is depends on your point of view.

Contrast this with the financial world (insurance, investment, etc.), where things are straightforward: the Bad Thing™ is that someone will lose money (or fail to make money, e.g., opportunity cost or loss of future income). I'm not saying that calculating how much money is lost is not difficult, only that everyone agrees on what the common Bad Thing™ is that needs calculating.


Let’s talk about a framework for discussing Risk. To get there I want to define some terms derived from what computer security is normally concerned with:

Actor is an agent (person, malware, virus, mother nature, etc.) that can take action

Action is any input, adjustment, or similar operation that can be applied to an Exploit

Exploit is a condition, like a software defect or misconfiguration, that we would rather not have exercised because it can lead to a Bad Thing™, like privilege escalation

Risk requires the presence of all three factors: Actor, Action, and Exploit. If the required Action cannot be taken, you are good. If the Exploit is resolved, you are good. If the Actor is held at arm's length, you are good. If any one factor is out of the picture, then your Risk is zero because the Bad Thing™ cannot happen. [Yes, the presence of all three also does not mean that your Risk is nonzero; that depends on the good will of the Actor.]
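The all-three-factors rule above can be sketched in a few lines of code. This is just an illustrative toy (the function and parameter names are mine, not part of any product): Risk is possible only when Actor, Action, and Exploit are all present at once.

```python
# A minimal sketch of the rule that Risk requires an Actor, an Action,
# and an Exploit simultaneously. All names here are illustrative.

def risk_possible(actor_present: bool,
                  action_available: bool,
                  exploit_exists: bool) -> bool:
    """The Bad Thing(TM) is possible only when all three factors line up."""
    return actor_present and action_available and exploit_exists

# Remove any one factor and the Risk drops to zero:
assert risk_possible(True, True, True) is True
assert risk_possible(True, True, False) is False   # Exploit resolved
assert risk_possible(True, False, True) is False   # Action blocked
assert risk_possible(False, True, True) is False   # Actor held at arm's length
```

Each `False` case corresponds to one of the "you are good" sentences above: block the Action, resolve the Exploit, or keep the Actor at arm's length, and the conjunction fails.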

I visualize the four terms -- Actor, Action, Exploit, and Risk -- in a relationship as shown in the figure. Stated simply:

Risk is the possibility that an Actor will perform an Action using an Exploit resulting in some Bad Thing™.


In pseudo-math:

Risk(Actor, Action, Exploit) = possibility of Bad Thing™

Note that, although it is not my intention to create a generalized modeling framework, this simple model for Risk can be augmented through composition or extension: add more Actors, Actions, or Exploits; chain together multiple diagrams (compose Risk functions); or any combination thereof.
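To make the composition idea concrete, here is one hedged way it could look in code. Assumptions that are mine, not the article's: each (Actor, Action, Exploit) triple carries a numeric "possibility" in [0, 1], and composing Risk functions means taking the largest possibility across the chained scenarios (the Bad Thing™ happens if any one of them succeeds). The scenario data is invented for illustration.

```python
# A sketch of composing the Risk(Actor, Action, Exploit) model.
# The possibility scores and the max-based composition rule are
# assumptions for illustration, not the author's definitions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    actor: str
    action: str
    exploit: str
    possibility: float  # 0.0 = cannot happen, 1.0 = certain

def risk(scenario: Scenario) -> float:
    """Risk(Actor, Action, Exploit) = possibility of the Bad Thing(TM)."""
    return scenario.possibility

def composed_risk(scenarios: list[Scenario]) -> float:
    """Compose Risk functions: the Bad Thing(TM) can happen if any
    chained scenario succeeds, so take the largest possibility."""
    return max((risk(s) for s in scenarios), default=0.0)

scenarios = [
    Scenario("script kiddie", "port scan", "open telnet service", 0.6),
    Scenario("malware", "phishing email", "unpatched mail client", 0.9),
]
print(composed_risk(scenarios))  # -> 0.9
```

Adding more Actors, Actions, or Exploits is just adding more `Scenario` entries; chaining diagrams is calling `composed_risk` over several lists. Whether max is the right combining rule is exactly the kind of question the Metrics follow-up has to answer.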

So talk about Risk already!

Risk is the thing that we all want to get our mitts on, to measure, manage, and take control of. Unfortunately, Risk is the slipperiest of customers and does not want to be corralled. :) If doing so were easy it would already have been done.

Fortunately we can discuss various Risk types in computer security in these terms before we worry about how to create and apply a Metric to Risk. This will lead us directly into Metrics and the important question: How do we estimate Risk in a way that we can use?

Ok, I have talked long enough for today but I have set the stage for talking about Risk and Metrics. I will leave you with the following lead-in to the next article(s). :)

What is the difference between a Vulnerability Metric and a Risk Metric?

Can we create useful Metrics without quantifying dollar cost?


The Myths of Security by John Viega, O'Reilly Media, 2009. ISBN-13: 978-0596523022
