A Verizon security expert on why 5G is raising the bar for cyber defenders
Much has been written about how 5G and the proliferation of internet-connected devices might make us more secure or more vulnerable in the coming years, depending on how you look at the next-generation wireless standard.
For people like Alexander Schlager, executive director of security services at Verizon, 5G isn’t so much about tallying the risks and benefits as it is about adopting a new approach to cybersecurity. Defenders will need to accelerate their detection and response capabilities, he said, but will also need to prioritize and devote more attention to worst-case scenarios.
“When you think about an autonomous car, a smart city appliance, or a mobile health care installation, these use cases could actually harm people or kill people [if breached],” Schlager said. “So you move from the hack being a nuisance—creating financial damage, embarrassment, or reputational damage—to it actually causing damage to human life.”
Schlager sat down for multiple interviews with The Record to talk about securing emerging technology, quantifying cyber risk, and more. The conversations have been edited and condensed below:
TR: You deal a lot with future threats, like defending smart cities from cyberattacks that could affect everything from connected traffic lights to wastewater treatment plants that rely on IoT sensors. It’s alarming to think just how bad those attacks could be—how do we keep this infrastructure safe going forward?
Alexander Schlager: Let’s be honest, how long have we been talking about the protection of critical infrastructure, and how much has happened? This whole thing is accelerating. On the optimistic side, one of the very rare positive side effects of COVID-19 was a significant acceleration of digital transformation, so across many companies we saw real improvements in cybersecurity investment. But to your point, if we look forward in time, to things like 5G and smart communities, the first thing I think about is risk and the worst possible outcome. What changes in this new reality is that the worst possible outcome is not limited to damage to reputation or finances; it will involve human lives, injury or even death. This is where the bar moves significantly up.
I think we have the technologies to protect such use cases, whether it’s advancements in detection and response, technologies we use to verify the integrity of devices, or technologies in the endpoint protection space. So I think we have all the tools, more or less, that are required. But the question is how do you deploy, integrate, and orchestrate these capabilities? I don’t want to sound like a broken record, but in a lot of the conversations we have with enterprises, we frame this as an opportunity to start looking at cybersecurity from a risk-centric point of view: quantify and qualify the worst possible scenario a breach could cause, then work backward to the technologies and posture that can best prevent it.
TR: Cyber risk quantification sometimes feels like more of an art than a science—but you feel like we’re getting better at it?
AS: That’s probably an hour-and-a-half talk, but I semi-agree with your point. Here’s how I look at it: We can argue about whether an open port should be penalized with X or Y points, and we can argue about whether being one week behind on a particular patch is good or bad, or has an impact of X or Y. That’s all fair, and it’s good to be having these debates and discussions.
I do think it’s a science, because from a pure data handling and data modeling perspective it has become quite sophisticated. Where I agree with you is that, to some degree, the interpretation is a bit of an art. But I look at it very practically. It doesn’t matter whether an open port, to go back to a simple example, should be penalized with X or Y points. What it gives me is insight into how seriously you take cybersecurity. If we agree in principle that it’s bad to have open ports, or bad to go unpatched for three weeks, the debate over whether that should be 10 or 20 points is secondary: it gives me an impression of whether you’re taking cybersecurity seriously or not.

I love to go back to the Equifax example, because it was the first very public and prominent case. The point wasn’t whether Equifax should have a score of 400 or 500; the facts spoke for themselves that the company was negligent in maintaining a mature and effective cyber posture. This is what I mostly take from quantification: whether you take cybersecurity seriously, whether you apply the necessary investment and diligence, and whether that investment is proportional to the potential cost. That’s what we should be using scoring for. It’s not dissimilar to a credit score: you don’t care about the exact number so much as whether it’s a positive score showing that you take care of your finances.
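The credit-score analogy can be made concrete with a toy model. The sketch below is a hypothetical illustration, not any real scoring scheme: the weights, base value, and threshold are invented, and the point it demonstrates is Schlager’s, that the binary read (diligent or not) matters more than the exact penalty assigned to an open port or a missed patch.

```python
# Hypothetical posture-scoring sketch. All weights and thresholds are
# invented for illustration; no real scoring model is implied.

def posture_score(open_ports: int, patch_lag_days: int, base: int = 800) -> int:
    """Return a credit-score-style posture number (higher is better)."""
    score = base
    score -= open_ports * 20              # illustrative penalty per exposed port
    score -= (patch_lag_days // 7) * 10   # illustrative penalty per week unpatched
    return max(score, 0)

def takes_security_seriously(score: int, threshold: int = 600) -> bool:
    # Like a credit check: the pass/fail read matters more than the digits.
    return score >= threshold

print(posture_score(open_ports=3, patch_lag_days=21))        # 710
print(takes_security_seriously(posture_score(3, 21)))        # True
```

Whether an open port costs 20 points or 40, a company weeks behind on patches with dozens of exposed services lands on the wrong side of the threshold either way, which is the signal the scoring is meant to surface.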
Let me ask you this: I ask you to get into a fully autonomous car and ride 50 miles from point A to point B through a smart-enabled community. What would you expect the government, the industry, and the regulators to put in place so that you can trust no one will be able to hijack the car and drive it into a tree? As all these new use cases are enabled, a lot of what companies will have to rely on is consumer trust, which they will have to earn by putting the right measures in place. But there will also need to be some degree of government regulation and support to enforce certain minimum standards. Quantification scoring could be one such measure, but it doesn’t have to be.
TR: In your opinion, what are the main benefits and challenges that come with 5G?
AS: I think the biggest challenge is the near real-time nature of the use cases and the requirement to detect incidents, compromises, and breaches in a far shorter time window. Going back to the Data Breach Investigations Report that we put out in 2011, the average time for an enterprise to detect a breach was 10 to 11 months. So imagine you’re sitting in your autonomous car saying, “Great! It’ll be 10 months until someone recognizes that the smart city has been breached!” The point is that we talk about 5G enabling the real-time enterprise, which means detection and response need to get much faster.
The good news is that we’ve come a long way from 10 months; that number is down to days and weeks now. But the next step means getting down to the range of minutes, if not seconds, and this is where you’ll see a lot of technological innovation, where things like integrity verification will help us determine very fast that something has happened without even immediately knowing what happened. We need to start working backwards from the worst possible outcome, because it tells you how much time you have. If I enable mobile gaming and GPU processing on the edge, the worst case is a DDoS and a disgruntled gamer; you can survive that. Compare that with the autonomous car example.
The benefits are basically everything that was incorporated into the 3GPP standard, which built on what we learned from 4G and previous generations. There are a number of vulnerabilities, exploits, and attack vectors that were present in 4G, for example, which have been accounted for. The challenge is that they are not mandatory: while they’re all covered in 3GPP from a standards point of view, you’re not obligated to implement them, so it will be interesting as we move forward to see how the different carriers deal with them. I think zero trust is an overarching theme that you’ll find in 5G, and I think it’s a big step forward in terms of carriers protecting their own infrastructure and building a strong base layer of security into the transport layer by default. I’m quite positive on that.
TR: When you talk about securing things like autonomous vehicles, what’s the worst-case scenario in your mind?
AS: Taking it over and driving it into a tree, a wall, another car… Even worse, there’s an expectation that autonomous vehicles in urban areas will get a lot of telemetry from smart city appliances, so they can look around corners and see things two or three blocks away, being fully aware of their surroundings. Imagine you hack that and start influencing traffic lights. It sounds like science fiction, but you could create massive numbers of crashes and collisions across an area of a city.
There’s a cost asymmetry in cybersecurity: it’s extremely costly to protect against every potential attack, versus the effort it takes to actually attack. As an attacker, you only have to be successful once. A case like SolarWinds is a reminder that there’s no guarantee against a breach, and that security should really be about risk management, looking at the proportionality between what we’re protecting and the effort we’re deploying.
TR: Is it harder to protect emerging technologies from cyberattacks—because we don’t fully understand how cybercriminals will exploit them?
AS: Yes. There are so many things you have to take care of. Adversaries will focus their energy and resources on areas that have been neglected. You can’t focus on everything, but the hacker can; that will always be a problem. The big challenge is how we deploy detection mechanisms and highly automated response mechanisms that help us protect things in real time.
TR: What technologies do you think can help with these challenges?
AS: AI and machine learning are obviously important. Contextual enrichment is the term I like to use: the more context you give an analytics stack, the more reliable the result will be. It’s not necessarily new, but the more the machine understands the context when looking at a threat, the more reliable the detection and response will be.

Another area is being able to not rely on massive amounts of data. If the only way I can detect is to collect all these log files and packets and sift through the data, you’ll always have that processing lag. We can’t put all that storage and computing power at the edge, because then it’s a mini data center, and you can’t run a massive analytics stack on each of these edges. One thing we’re working on is called a smart sensor, meaning some decisions can be made at the edge while others have to be made at the core, where we have the big storage and computing capacities.

We’re also looking at things like machine state integrity. What it does is run a small agent at the endpoint, let’s say a car, that captures all the data required to assess the machine’s integrity: all the data that will tell me this car’s integrity is intact. We hash that and write it into a blockchain, and later we go back and probe that machine. It will reproduce the hash, and the moment it doesn’t match what we have in the blockchain, we know that someone has tampered with the machine. It’s very dumb, because it’s binary, integrity intact or not, but it’s very fast.
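The machine-state-integrity idea Schlager outlines (hash the state that attests a device’s integrity, record the hash in an append-only ledger, later re-probe the device and compare) can be sketched in a few lines. This is a hedged illustration, not Verizon’s implementation: the plain list stands in for a blockchain, and the state fields are invented for the example.

```python
# Minimal sketch of hash-based machine-state integrity checking.
# The list-as-ledger and the state fields are illustrative assumptions.
import hashlib
import json

ledger: list[str] = []  # stand-in for an append-only blockchain


def state_hash(state: dict) -> str:
    # Canonical JSON so the same state always yields the same digest.
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()


def enroll(state: dict) -> int:
    """Record the device's known-good state hash; return its ledger index."""
    ledger.append(state_hash(state))
    return len(ledger) - 1


def probe(state: dict, index: int) -> bool:
    """Binary check: does the device still reproduce the enrolled hash?"""
    return state_hash(state) == ledger[index]


good = {"firmware": "1.4.2", "bootloader": "a91f", "config_rev": 17}
idx = enroll(good)
print(probe(good, idx))                                 # True: integrity intact
print(probe({**good, "firmware": "1.4.2-evil"}, idx))   # False: tampered
```

The check is “dumb” in exactly the sense described: it says nothing about what changed, only that the reproduced hash no longer matches the ledger, which is what makes it cheap enough to run fast and often at the edge.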