I started writing about metrics at the very beginning of this blog, and hadn’t really seen anything I could add to the ongoing blogo-discussion since then, but the planets aligned: I had a thought-provoking discussion with a super-sharp colleague, and it was followed by a Saturday in which I finally had a little time to write.
This provocateur kept referring to “security practitioners” as opposed to—well, everyone else in the security space, I suppose—and I began wondering what the cultural differences are between the two spheres. The implication is that “real practitioners” know what the real risks are, because they’re living them every day. (What is a practitioner, anyway? Let’s just say it’s someone who’s directly responsible for protecting an organization’s data, as opposed to someone who is developing problems or solutions for OTHER organizations’ data. You can argue with that definition if you like, but I’ll bet if you do, you’re not someone who meets the first description.) But I don’t find that necessarily to be the case. One of the great frustrations that I see with vendors, researchers, analysts and regulators is that they don’t believe most practitioners actually understand the risk landscape. The only group that arguably does is the one that’s getting attacked the most—the military—and they can’t talk about it. The only other group that kinda understands is the financial institutions, and they won’t talk about it even if they get caught with their cyber-pants down. I remember hearing a great talk by Aaron Turner that was eye-opening in its level of disclosure about things that were really happening, and that was only possible because of a confidentiality agreement; you’d never get that level of discourse in the public blogosphere.
The reason I think that most practitioners don’t appreciate the security risks in the same way that other security professionals do is this: [donning my asbestos undies] for the most part, security breaches don’t have observable impact in the real world.
Wait, wait, before you click on the comment field—read the rest of this entry.
Riddle me this: when has anyone died as a result of a “cyber” security breach? When has anyone been injured? And can you prove it?
The only group that can even get CLOSE to answering that question is the military, because the loss of military information can arguably lead down the path to that very real scenario. For them, the two are so closely linked that a “hit” on a system is really appreciated as being a “hit” in the very core of their business.
But I’m going to argue that for ALL the rest of us practitioners—including the financial institutions—there is no real, consistent, demonstrable impact. The only effect is in currency numbers, and those are, at the end of the day, just numbers. Money is a way of keeping score. Sure, if you’re homeless and on the street, the lack of money at your level is going to have a very concrete effect on your life. And if you’re a poor nation, the lack of money at that mega-scale is going to have a concrete effect on the quality of everyone’s life. But in the vast realm in between, where we’re talking about security breaches, I’m going to argue that it’s just score-keeping. (“They stole MILLIONS of our numbers and siphoned them off to Russia!”)
Stock price? Nobody can prove that stock price goes down because of a revealed security breach.
Regulatory fines? Have you ever seen an institution fail solely due to regulatory fines? Have you ever seen people lose jobs because their employers had to pay fines?
Downtime and recovery costs? Okay, now you’re getting somewhere, BUT.
I submit that organizations don’t make a distinction between downtime due to security breaches and downtime due to any other cause. Nick Selby hit the nail on the head in his excellent post pointing this out. For the purposes of risk planning, most organizations treat cyberattacks as a force of nature, just as they do ice storms, tornadoes, backhoes, and misplaced dolphins.
So if attacks on computer systems aren’t seen as different or more dangerous than simple availability problems—because by and large the only concrete, real-world impacts look exactly the same—then why are we trying so hard to convince practitioners (and ourselves) that there IS a difference?
And why are we trying to describe the risk of losing numbers in terms of MORE NUMBERS?
Folks, simply driving to more precise numbers isn’t going to make anyone appreciate this kind of risk any more. Or let’s put it another way: the only people who will appreciate those numbers are people who already appreciate numbers. The score-keepers can relate to the metrics folks and they can sit together and entertain each other.
But for the average Joe Practitioner, I believe that security breaches simply don’t have real-world importance. This is why nobody but security wonks gets exercised about the existence of botnets, even if they’re in one (“MILLIONS of BOTS!”). This is why organizations just don’t get freaked out over scan reports that show that they have EIGHT HUNDRED VULNERABILITIES!!!1 This is why only Homeland Security bureaucrats care about the number of hits seen on an IDS (“MILLIONS OF ATTACKS FOILED!”). And this is why only the media gets excited about OMGWTFHEARTLAND!!
This is why even the frantic, hopeless pursuit of security ROI isn’t going to make any difference. It’s still numbers. Sorry, guys.
Until people start losing concrete things that they really care about—homes, food, health, loved ones—as a consistent, demonstrable, direct result of cyberattacks, they’re just not going to bestir themselves to divert funding away from defending against the risks that actually do have that kind of impact.
So what does that mean for security metrics?
Keep applying the “so what?” criterion to your metrics. Make sure each metric can be linked to both evidence-based probability and evidence-based impact: impact that lowers the number of products that ship per hour, or impact that creates unavoidable personnel costs hitting the organization’s bottom line more than once in a decade. If you can’t make your management get excited about your metrics, it’s because your metrics aren’t exciting.
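One mechanical way to apply that “so what?” test is the classic annualized loss expectancy formula (ALE = SLE × ARO), screened against a materiality threshold. Here’s a minimal sketch of that idea; the scenario names, dollar figures, and the 1%-of-revenue threshold are all invented for illustration, not drawn from any real data:

```python
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Expected annual loss for one risk scenario:
    SLE (dollars per incident) times ARO (incidents per year)."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical scenarios: (name, SLE in dollars, ARO per year)
scenarios = [
    ("laptop theft", 5_000, 4.0),
    ("web app breach", 2_000_000, 0.1),
    ("phishing-led wire fraud", 40_000, 0.5),
]

# The "so what?" screen: only surface metrics whose expected annual
# impact is material against, say, 1% of annual revenue.
annual_revenue = 10_000_000
materiality_threshold = 0.01 * annual_revenue

for name, sle, aro in scenarios:
    expected = ale(sle, aro)
    verdict = "material" if expected >= materiality_threshold else "so what?"
    print(f"{name}: ${expected:,.0f}/yr -> {verdict}")
```

The point isn’t the arithmetic, which any score-keeper can do; it’s the threshold. A metric that never crosses a line management already cares about is exactly the kind of number this post is complaining about.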
Make sure you’re not just score-keeping in a game that the rest of the world doesn’t care about. Don’t be a metrics wanker.