A dirty little secret of cybersecurity is that no one really knows how to measure it. 

And that means there’s no perfect way for companies to decide how much they need to invest in security, and no straightforward, objective method for comparing two products and saying which is more secure. 

“There’s no generally accepted, widely usable, scalable, transparent way to measure cybersecurity,” said Paul Rosenzweig, a former Department of Homeland Security deputy assistant secretary for policy and a senior fellow at the R Street Institute. 

That’s due to many factors, he said, including the enormous complexity of computer software and the multiple ways that skillful hackers can attack it.

Without measurement, he noted, “cybersecurity remains much more of an art than a science,” bereft of the kind of objective parameters that are generally regarded as necessary to inform rational decision-making, especially in business. 

To start filling that void, the U.S. government standards agency, the National Institute of Standards and Technology, is leading a big push to catalogue existing measurement systems and research new ones. NIST asked in September for public comments about how organizations measure their cybersecurity performance.

The aim, explained Kevin Stine, chief of NIST’s Applied Cybersecurity Division, is to provide organizations with the data they need to drive risk management, “to help them gauge… the effectiveness and the impact of the information security decisions that they are making.”

“Organizations have finite resources. When they’re making investment decisions, when they’re managing their risk, when they’re trying to align their security decisions with their mission and business objectives… [they] need data to support those decisions,” Stine said.

Some technical controls, like the bit-strength of encryption keys, can be expressed as a numeric value, experts say, but that falls far short of measuring how secure a given piece of software is. And even such a measure, if it existed, would not give a complete picture of how secure the whole enterprise is.

Stine said the first step would be to rewrite NIST’s cybersecurity measurement guide, SP 800-55, which hasn’t been updated in more than a decade. Since then, “We’ve learned a lot more—not just at NIST, but as a broader cybersecurity community—about some of the opportunities and challenges for measuring cybersecurity,” he said.

A CAT-AND-MOUSE GAME

One major challenge, said Rosenzweig, can be summed up in a single word: complexity.

“Computer code is amazingly complex,” he noted. A single device might run multiple software programs, each containing many millions of lines of code—much of it in open source libraries or programs the developers didn’t even write themselves. “Any single error could create a vulnerability,” Rosenzweig said. “The attack surface is impossibly large.”

And that makes it almost impossible to determine how secure a piece of computer equipment is. A mechanical device, like an aircraft engine, can be tested by running it at unusual speed, or for very long periods of time, but that approach doesn’t work with code.

“They tried it with Huawei,” said Rosenzweig, referring to the center the U.K. government set up to test the source code of the Chinese tech giant, after security concerns emerged about the use of its equipment in the British telecoms backbone. But two years ago, the center’s oversight board concluded it could offer only “limited assurance” the code was secure.

“There’s just no good way to test something that is orders of magnitude more complex than the most sophisticated avionics engine,” he said.

And it gets harder still when you consider what it is being tested for. When you test a machine, you are testing it against the unvarying limits of physics—gravity, speed, mass. Testing the cybersecurity of a piece of software or hardware against an adaptive, learning adversary is a very different business.

“Steel doesn’t have an enemy,” Rosenzweig said. “Imagine trying to build a bridge if gravity kept changing the way it behaved to beat you.”

That insight is fundamental, noted Don Snyder, a senior physical scientist at the RAND Corp., a think tank with historical links to the U.S. military. And, he argues, it means you cannot effectively measure the cybersecurity of an organization by reducing it to a score or a checklist. Snyder is part of a team at RAND working on the measurement issue.

Cybersecurity “more resembles this cat-and-mouse game where, as soon as we set up some kind of defense, an attacker is examining that for any kind of holes and thinking: ‘Now what can I do [to defeat that]?’ And of course, they change their tactics. And then the defenders respond… So it’s this dynamic back-and-forth and you can’t have some static measure of how well you’re doing,” said Snyder.

Moreover, he points out, even sophisticated, maturity-based measures capable of reflecting that dynamic back-and-forth can be thrown off by third-party dependencies—relying on another company’s products.

“We outsource a lot of our security because we’re constantly buying equipment, software, even software security elements from third parties. And, of course, exactly how secure they are, whether or not they themselves have been infiltrated, is difficult to know,” he said, noting the recent use of network management software provided by SolarWinds as an attack vector.

The Defense Department is seeking to deal with the dependency issue by requiring that not just its suppliers, but their suppliers, get certified under the new Cybersecurity Maturity Model Certification system.

THE ROLE OF INSURANCE

Outside of the defense industrial base, it’s less clear what the engine might be to move the measurement issue forward.

It certainly won’t be consumer demand, according to Rosenzweig. “Unfortunately, I see no evidence that consumer demand drives anything but price and usability,” he said of the technology marketplace. 

Insurance has long been a candidate, said Jeremy Turner, threat intelligence chief for cyber insurer Coalition, Inc. In other insurance lines, like property or car insurance, policyholders are credited with lower premiums if they follow best practices. Being able to get lower-cost insurance might be a powerful incentive for companies to boost their cybersecurity—and find a way to document it.

Cyber insurance is a growing market, but one where the paucity of data is a big issue, said Turner. “In all the other insurance markets, there are lots of data available. Auto thefts, for example, you can track by reported location, by vehicle model—there’s so many different metrics.” But not in cybersecurity.

Part of the issue is that, in the highly litigious post-breach environment—with potential lawsuits flying—it’s hard to be up front about how hackers got in. “This is data no one wants to share,” said Turner. 

But if they want to file a claim, they have to tell their insurance company. And that makes the industry a great vector for data collection, noted Mark Montgomery, a fellow at the Foundation for Defense of Democracies and the former executive director of the Cyberspace Solarium Commission. The commission recommended the establishment of a Bureau of Cyber Statistics to collect data about cybersecurity that would help in establishing benchmarks and metrics.

“There will be some pushback” on the proposal, he acknowledged. “If you build something in the government, you’ll get friction,” he said. But the bureau will collect data that “underwriters and other companies need… Honestly in five years, I think people will be asking why it took so long.”


Shaun Waterman is an award-winning journalist and communicator who has worked for the BBC, United Press International and POLITICO, and an expert on cybersecurity and counter-terrorism who has presented at the Aspen Security Forum, the Industrial College of the Armed Forces and the Canadian Security and Intelligence Service.