Layer 8

Security is fundamentally about people, and everything we know about people is relevant to security. -- B. Schneier

The meaning of metrics.

I started writing about metrics at the very beginning of this blog, and hadn’t really seen anything I could add to the ongoing blogo-discussion since then, but the planets aligned:  I had a thought-provoking discussion with a super-sharp colleague, and it was followed by a Saturday on which I finally had a little time to write.

This provocateur kept referring to “security practitioners” as opposed to—well, everyone else in the security space, I suppose—and I began wondering what the cultural differences are between the two spheres.  The implication is that “real practitioners” know what the real risks are, because they’re living them every day.  (What is a practitioner, anyway?  Let’s just say it’s someone who’s directly responsible for protecting an organization’s data, as opposed to someone who is developing problems or solutions for OTHER organizations’ data.  You can argue with that definition if you like, but I’ll bet if you do, you’re not someone who meets the first description.  *wink*)  But I don’t find that necessarily to be the case.  One of the great frustrations that I see with vendors, researchers, analysts and regulators is that they don’t believe most practitioners actually understand the risk landscape.  The only group that arguably does is the one that’s getting attacked the most—the military—and they can’t talk about it.  The only other group that kinda understands is the financial institutions, and they won’t talk about it even if they get caught with their cyber-pants down.  I remember hearing a great talk by Aaron Turner that was eye-opening in its level of disclosure about things that were really happening, and that was only because of a confidentiality agreement; you’d never get that level of discourse in the public blogosphere.

The reason I think that most practitioners don’t appreciate the security risks in the same way that other security professionals do is this:  [donning my asbestos undies] for the most part, security breaches don’t have observable impact in the real world.

Wait, wait, before you click on the comment field—read the rest of this entry.

Riddle me this:  when has anyone died as a result of a “cyber” security breach?  When has anyone been injured?  And can you prove it?

The only group that can even get CLOSE to answering that question is the military, because the loss of military information can arguably lead down the path to that very real scenario.  For them, the two are so closely linked that a “hit” on a system is really appreciated as being a “hit” in the very core of their business. 

But I’m going to argue that for ALL the rest of us practitioners—including the financial institutions—there is no real, consistent, demonstrable impact.  The only effect is in currency numbers, and those are, at the end of the day, just numbers.  Money is a way of keeping score.  Sure, if you’re homeless and on the street, the lack of money at your level is going to have a very concrete effect on your life.  And if you’re a poor nation, the lack of money at that mega-scale is going to have a concrete effect on the quality of everyone’s life.  But in the vast realm in between, where we’re talking about security breaches, I’m going to argue that it’s just score-keeping.  (“They stole MILLIONS of our numbers and siphoned them off to Russia!”)

Stock price?  Nobody can prove that stock price goes down because of a revealed security breach.

Regulatory fines?  Have you ever seen an institution fail solely due to regulatory fines?  Have you ever seen people lose jobs because their employers had to pay fines?

Downtime and recovery costs?  Okay, now you’re getting somewhere, BUT. 

BUT.

I submit that organizations don’t make a distinction between downtime due to security breaches and downtime due to any other reason.  Nick Selby hit the hammer on the thumb right here in his excellent post pointing this out.  For the purposes of risk planning, most organizations see cyberattacks as a force of nature just as they do ice storms, tornadoes, backhoes, and misplaced dolphins.

So if attacks on computer systems aren’t seen as different or more dangerous than simple availability problems—because by and large the only concrete, real-world impacts look exactly the same—then why are we trying so hard to convince practitioners (and ourselves) that there IS a difference?

And why are we trying to describe the risk of losing numbers in terms of MORE NUMBERS?

Folks, simply driving to more precise numbers isn’t going to make anyone appreciate this kind of risk any more.  Or let’s put it another way:  the only people who will appreciate those numbers are people who already appreciate numbers.  The score-keepers can relate to the metrics folks and they can sit together and entertain each other. 

But for the average Joe Practitioner, I believe that security breaches simply don’t have real-world importance.  This is why nobody but security wonks gets exercised about the existence of botnets, even if they’re in one (“MILLIONS of BOTS!”).  This is why organizations just don’t get freaked out over scan reports that show that they have EIGHT HUNDRED VULNERABILITIES!!!1 This is why only Homeland Security bureaucrats care about the number of hits seen on an IDS (“MILLIONS OF ATTACKS FOILED!”).  And this is why only the media gets excited about OMGWTFHEARTLAND!!

This is why even the frantic, hopeless pursuit of security ROI isn’t going to make any difference.  It’s still numbers.  Sorry, guys.

Until people start losing concrete things that they really care about—homes, food, health, loved ones—as a consistent, demonstrable, direct result of cyberattacks, they’re just not going to bestir themselves to divert funding from defending against risks that actually do have an impact.

So what does that mean for security metrics?

Keep applying the “so what?” criterion to your metrics.  Make sure the metrics can be linked to evidence-based estimates of both probability and impact.  Impact that results in lowering the number of products that ship per hour.  Impact that results in unavoidable personnel costs that affect the organization’s bottom line more than once in a decade.  If you can’t make your management get excited about your metrics, it’s because your metrics aren’t exciting.
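
To make the “so what?” test concrete, here’s a minimal sketch in Python.  Every number in it is a hypothetical placeholder; the point is the shape of the metric, not the figures, so plug in your own incident history and throughput:

```python
# A minimal sketch of the "so what?" test: restate a security metric as
# expected business impact instead of a raw count. All numbers below are
# hypothetical placeholders; substitute your own evidence.

incidents_per_year = 2      # breach-driven outages per year, from your own records
avg_downtime_hours = 6      # mean hours of lost production per incident
units_per_hour = 1200       # products shipped per hour when the line is up
margin_per_unit = 35.0      # contribution margin per product, in dollars

expected_units_lost = incidents_per_year * avg_downtime_hours * units_per_hour
expected_annual_loss = expected_units_lost * margin_per_unit

print(f"Expected units not shipped per year: {expected_units_lost:,}")
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
```

“Eight hundred vulnerabilities” fails the test; “we expect roughly $500,000 a year in unshipped product at current incident rates” at least speaks the score-keepers’ language.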

Make sure you’re not just score-keeping in a game that the rest of the world doesn’t care about.  Don’t be a metrics wanker.


Posted by shrdlu on Saturday, November 21, 2009

Comments

Chris Hayes (United States) on 11/21 at 12:28 PM:

Hi @shrdlu. Nice post. Here are a few thoughts.

I am not convinced that most organizations see cyber attacks as mere accidents.  While developing a risk management strategy in the context of “accident theory” would be an interesting approach, it would not work in companies or industries where the value proposition of the company is impacted when an incident occurs – especially incidents where malicious intent is a factor.

In the financial industry, the one metric that information risk management groups don’t often influence is required economic capital (REC) – a topic worthy of its own blog post.  Most financial services companies treat information security risk as operational risk (Basel II), which gets accounted for in REC.  REC usually gets allocated back to the organization’s business units, but that allocation is often a percentage of revenue rather than a reflection of the percentage of risk a single business unit is actually responsible for, regardless of the risk type (operational, legal, investment, product, regulatory…).
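
To make that allocation mismatch concrete, here is a small hypothetical sketch in Python (the unit names and every number are invented for illustration, not drawn from any real institution):

```python
# Hypothetical illustration of REC allocated by revenue share versus
# operational-risk share. Unit names and all numbers are invented.

total_rec = 100_000_000  # total required economic capital, in dollars

# business unit -> (share of revenue, share of operational risk)
units = {
    "retail":   (0.50, 0.20),
    "trading":  (0.30, 0.70),
    "advisory": (0.20, 0.10),
}

for name, (revenue_share, risk_share) in units.items():
    by_revenue = total_rec * revenue_share  # how it is usually charged back
    by_risk = total_rec * risk_share        # what the unit's risk would justify
    print(f"{name:9s}  by revenue: ${by_revenue:>12,.0f}   by risk: ${by_risk:>12,.0f}")
```

Under those made-up numbers, trading carries 70% of the risk but is charged for only 30% of the capital; the metric itself is sound, but the charge-back hides who actually owns the risk.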

I only mention all of this to suggest that in some industries there is no need to invent the perfect metric – a suitable metric may already exist.  The challenge becomes: a. contributing to that metric; and b. doing so in a credible manner.

shrdlu (United States) on 11/21 at 12:47 PM:

Chris, awesome comments, thank you.  Yes, I agree with you that information security risks belong in operational risk (I was lucky enough to miss Basel II during my financial institution days), and when you stack those up against all the other operational risks that have to be considered, the other ones look a lot more pressing (credit default swaps?  typos in the trading app?).

However, you could argue that since the banking core business IS numbers, an impact on numbers comes closer to a “real impact” there than in an organization where finance isn’t core.  But it still hasn’t been enough to cause real FAIL.

I would contend that most organizations (outside of finance & military) don’t see themselves as targets, and therefore consider themselves only at risk for opportunistic attacks, which to them look the same as the infamous “backhoe upgrade” and occur just as seldom.

[email hidden] (United States) on 11/23 at 02:51 PM:

Working at a company that provides services to financial institutions, I can say that our reputation as a stable and secure provider is a very important corporate-wide goal.  That said, we tend to fight a lot with non-practitioners (auditors, consultants, vendors) who try to push solutions on us that achieve “best practice” or compliance requirements, but don’t give us much in terms of security.

