Google’s Heather Adkins on infostealers, two-factor authentication and fixing the security ‘mess’ for future generations
Heather Adkins, the vice president of security engineering at Google, has spent more than two decades at the company. As head of its Office of Cybersecurity Resilience, she is responsible for safeguarding the tech giant’s networks, systems and applications.
Adkins’ work isn’t confined to the private sector, however. She sits on the federal government’s Cyber Safety Review Board (CSRB), which has taken on a leading role in investigating significant cybersecurity incidents.
Since it was created in 2021, the board has conducted three investigations: one on the widespread Log4j vulnerability, another on the Lapsus$ hacker group and most recently an investigation of how a “cascade” of avoidable security failures at Microsoft allowed Chinese spies to break into the unclassified email inboxes of senior U.S. officials at the State and Commerce departments.
Recorded Future News sat down with Adkins recently to discuss where the industry is succeeding and failing and how it can leave behind a more secure framework for the generations to come.
This conversation has been edited for length and clarity.
Recorded Future News: What has it been like being on the Cyber Safety Review Board (CSRB) and what has the process been for you?
Heather Adkins: Taking my Google hat off, I serve on the CSRB in my personal capacity. For me, it has been a privilege to be able to give back in some way. I've been in this industry for so long working for a commercial company, and you always wonder: what would it be like to work a low-paid job, or in my case for free, and give back to the country in some way?
The other thing I've really loved about the CSRB is that the Cybersecurity and Infrastructure Security Agency (CISA), especially the leadership under Director Jen Easterly and Homeland Security Secretary Alejandro Mayorkas, has been very forward-leaning in how we take the wisdom that we've had for a long time and actually get something done.
And so the CSRB has been a way for us to say, here are major incidents. Here’s our chance as an industry to say we're doing an official review and here are the findings.
And for me, it's felt like for the first time we really have that. The board has done three reviews and I participated in two of them. They did a review of Microsoft starting last year, which I recused myself from because of my conflict of interest. So I got to sit on the sidelines and watch and I'm just waiting for them to put me back.
RFN: What do you hope the CSRB covers next?
HA: For me personally, the things I find most interesting intersect with national security — so protecting the country, which is our remit. But also where we can make a substantial contribution in a technical area or policy area that doesn't just address that one issue, but actually addresses other kinds of incidents as well.
So for example, when we did the review of Lapsus$ [a hacking group that carried out a string of high-profile cyberattacks in 2021 and 2022], we talked a lot about multi-factor authentication. We've been doing this a whole lot in the industry: “MFA everywhere.” But what we found during that review is that the attackers were able to SIM swap and steal victims' SMS codes, and because of that the MFA was undermined.
That's something we've seen, not just in the Lapsus$ series of attacks, but in the subsequent kind of Scattered Spider arena [another hacker group that has targeted large companies], and as an industry we've seen that in state-sponsored attacks, especially outside the U.S.
So that's an example that if you fix the things that were in that incident — even though it wasn't a fancy-pants Chinese nation-state incident — you're still going to get a lot of the same technical recommendations out of there. So you get the leverage of scale with that.
RFN: Where is the cybersecurity industry making the most progress and where is it falling short?
HA: Putting my Google hat back on, I'm really pleased with the push for multi-factor authentication. We're not where we need to be yet, but it is a milestone on a very long timeline for deprecating passwords [phasing out passwords in favor of other types of authentication].
We probably still have another decade to go, but I'm really pleased with that progress. I am also really pleased about the conversation on memory safety [bugs that let programs access, write, allocate or deallocate memory in unintended ways].
We've known about memory safety issues since the late ‘60s, and Aleph One's seminal “Smashing the Stack” paper, I think, will be [30 years] old in a couple of years. So we've known about these problems for a really long time.
When you look at reports — like the one Google's Threat Analysis Group released on the zero-days it has seen exploited in the wild — a lot of those are memory safety issues. Not all of them, but a lot of them. And I think if you can imagine the legacy we're going to leave cybersecurity professionals in two decades, I'd like to leave them a legacy that we got started on this problem. And so I'm excited by that. It's in CISA’s Secure by Design pledge, which we signed at the RSA Conference in April.
But it's going to be a multifaceted approach. You're going to have to move to memory safe languages. It's going to be improving the languages we have. But there's also a story there for AI and the ability to find vulnerabilities in code faster than the bad guys is going to be how we win.
And I see a world where we actually push that earlier in the development cycle, as early as we can. A phrase we talk about a lot is “shift left”: don't do the security at the end of the life cycle. So you can imagine you’re a developer and you’re coding in maybe C++ because you have to, and you make a mistake. Ideally, we catch that before that code goes public in any kind of way, and if you can catch it there, then the bad guy never has an opportunity.
So we close the window of opportunity for threat actors to exploit memory safety bugs. And it will take time for that technology to build and to get it right, to make it cost effective, to get engineers comfortable. Trust is such an important component of all of this. But I think if we can make that investment and push towards that, we will not leave the same mess that the generations before us left us.
I'm also really enthusiastic about DARPA's [the Defense Advanced Research Projects Agency] AI Cyber Challenge that they have going on. The whole purpose of that is to build cyber reasoning systems that can sort of look through and find these kinds of vulnerabilities, and you're going to need a lot of that kind of experimentation in order to reason about how to do this.
So you're gonna see more and more of that come. But the idea is to build that momentum so that we're making higher quality code, safer code, and that the generations that come in the next couple decades don't have to deal with this stuff.
RFN: Google frequently releases reports on nation-state attacks. How have nation-state attacks evolved over the last two years?
HA: We're certainly seeing an increased tempo. Some of that may be elements of scaling.
Some of that could be automation assisted, maybe large language models. My personal theory is that if we are using large language models, so are they.
So it could be that they are advancing a little bit with vulnerability discovery. We certainly know that they would be using it the way they do Search or Google Translate. These are obvious uses for large language models.
I think we continue to see where there is traditional geopolitical conflict — whether that's the war in Ukraine or the military build up and uncertainty around Taiwan, as well as the conflict with Israel and Gaza and now Hezbollah and Iran — cyber is playing a role in all of that. Those geopolitical conflicts are increasing in cadence and tempo and seriousness. So you're seeing cyber go along with it. And then, of course, this year, pretty much everyone on the planet in a democracy is voting, and we are continuing to see where you have conflict in politics, there is disinformation.
I haven't seen anything sort of remarkably innovative. There’s just a lot of it. And one thing for us as defenders that we have to think about is if they can continue to increase that scale, how will we survive? And here is where the automation comes in, right? We really need to close those windows of opportunity.
HA: I will call out one talk I saw at Black Hat about a Chinese organized crime gang spinning up illegal gambling sites. It's really fascinating. They have these shell companies, which look like official companies, and they get actors to play the CEO. And they do licensing and sponsorship deals with sports teams.
So you could buy a [sports team] shirt with this company name on it and a website in China, and then when you go to the website, it's an illegal gambling site. Now, if you're in the U.K. and you visit that site, you'd get a forbidden error. So it's really targeted at these communities, and it turns out the organization behind them is also responsible for pig butchering scams.
And now we think the Polyfill compromise [a recent supply-chain incident in which a popular website tool was hijacked to distribute malware] was linked to the same kind of nerve center. It's really professional. They have compounds where they physically entrap people. I think that sort of thing we should be paying really close attention to, because it's right now focused on sort of underground money-making in a very traditional, organized-crime-mafia kind of way.
But if you are operating in China, you are not state-supported but certainly state tolerated. As things change, do they then get leveraged for the state? I worry about things like that, where there's just so much uncertainty in the threat actor space about these intersections of things. They will run into geopolitics at some point. Every business does, whether a good business or a bad business.
RFN: Since the Snowflake incident, there has been increased talk about the rise of infostealers and how passwords and usernames can’t be the only login used anymore. [In that incident, hackers used malware to steal credentials that Snowflake’s customers were using to get into their cloud accounts with the company.] Is multi-factor authentication the only answer to the increased use of infostealers?
HA: I think two-factor is really important, and certainly if you have malware on your computer and it takes your username and password, that's a problem. But we also know that infostealers steal web cookies that are stored in the cookie jar locally.
So what happens when you log into a website is you enter your username, password and your two-factor code, and then you get a cookie placed on your device, and that cookie can actually act as an authentication token over the web.
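The point Adkins is making, that the cookie alone authenticates every later request, can be sketched in a few lines. This is an illustrative toy, not any real site's implementation; all names are hypothetical.

```python
import secrets

# Hypothetical sketch: the password and 2FA code are checked once, at login.
# After that, whoever presents the session cookie is treated as the user --
# the server cannot tell the real browser from an infostealer replaying it.
SESSIONS = {}  # session_id -> username

def login(username, password, otp_ok=True):
    """Credentials and 2FA are verified only here (stand-in check)."""
    assert password and otp_ok
    session_id = secrets.token_hex(16)
    SESSIONS[session_id] = username
    return session_id  # sent to the browser, e.g. via a Set-Cookie header

def handle_request(cookie):
    """Every subsequent request is authenticated by the cookie alone."""
    return SESSIONS.get(cookie)  # no password or 2FA involved

cookie = login("alice", "correct-password")
print(handle_request(cookie))          # the real browser is "alice"
print(handle_request("wrong-cookie"))  # a guessed cookie gets None
# But an attacker who steals the actual cookie value is "alice" too.
```

This is why stolen cookies are as valuable to infostealers as stolen passwords: they skip the login step entirely.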
So when we talk about infostealers, we have to make sure we're talking about that as well. And actually last week, Chrome put in some new technology that's rolling out now to protect those cookies on the Windows platform. We'll see what the impact of that is, but it's an issue that Apple dealt with a long time ago.
On Apple's platforms, it's actually very difficult for other processes on the system to steal your cookies; they put a barrier in there, and so we're now trying to replicate that somehow on Windows, where it just naturally hasn't existed. So I think it's good, but not yet enough.
And then there's some other stuff coming down: a new web standard that makes cookies bound to the device, so that even if you stole one, you couldn't reuse it, because that cookie is assigned to this machine and this machine only. It’s called Device Bound Session Credentials (DBSC), and that's an industry coalition effort happening with Microsoft and Google. We’re gonna need those kinds of solutions to deal with infostealers and put them out of business.
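The device-binding idea can be sketched roughly as follows. This is a toy illustration of the concept, not the actual DBSC protocol: real DBSC uses an asymmetric key pair held in hardware such as a TPM, whereas this stdlib-only sketch stands in for sign/verify with an HMAC over a secret that never leaves the "device."

```python
import hmac
import hashlib
import secrets

class Device:
    """Holds a per-device secret that never leaves the machine."""
    def __init__(self):
        self._key = secrets.token_bytes(32)  # in real DBSC: a hardware-backed private key
    def registration_key(self):
        # Real DBSC registers a public key; we hand over the HMAC key
        # to keep this sketch symmetric and stdlib-only.
        return self._key
    def prove(self, challenge):
        """Answer the server's challenge using the local key."""
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class Server:
    """Issues a session cookie and only honors it with a device proof."""
    def __init__(self, device_key):
        self.device_key = device_key
        self.cookie = secrets.token_hex(16)  # issued at login
    def check(self, cookie, challenge, proof):
        expected = hmac.new(self.device_key, challenge, hashlib.sha256).digest()
        return cookie == self.cookie and hmac.compare_digest(proof, expected)

device = Device()
server = Server(device.registration_key())
challenge = secrets.token_bytes(16)

# Legitimate session: cookie plus a fresh device-bound proof.
print(server.check(server.cookie, challenge, device.prove(challenge)))  # True
# Infostealer: has the cookie, but cannot produce the proof.
print(server.check(server.cookie, challenge, b"\x00" * 32))             # False
```

The stolen cookie by itself no longer authenticates anything, which is the property Adkins describes.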
RFN: If you were made queen for a day, what kind of cybersecurity rules would you pass to make things safer in a broad sense?
HA: Realistically, we need modern computing platforms. We need them everywhere, and I love that by the time we all got iPhones and Android phones, we had designed new architectures for them. We didn't just take what we'd been using for 30 years, whether it was Windows-based or Unix-based, and throw it on a phone. We put some thought into this, and this is why you don't have infostealers on iPhones and Android phones, and why we are not talking about CrowdStrike running on your phone.
We need that for the rest of the computing platforms. At Google we use Chrome OS, we give that to the workforce as much as possible. But we also need it for critical infrastructure. I'm excited with what's happening in the IoT [internet of things] space, where they now have the Matter Standard, where they're starting to standardize what some of this interoperability functionality looks like, but also the safety story there. But we need that more ubiquitously.
Most of the problems we are seeing, most of the hacks, are an architecture problem. We blame it on the users, but when you go one level deeper, it's ‘we didn't have memory safety,’ ‘we didn't have protection for session credentials.’ We made the user have to get a PhD in computers to understand what we're trying to get them to do.
So if we can modernize platforms, that's what I would do if I were queen for a day. My first love was always deprecating the password. We should never have had them in the first place. But it's a huge problem, and two-factor is only going to survive for so long under attacker scrutiny.
RFN: Google has focused more than most companies on the proliferation of spyware. Is it inevitable that more and more governments will continue to use this technology?
HA: There is great academic work on this by a researcher out of Switzerland. He did his PhD on this and he views it in the context of naval safety on the high seas in the European colonial period.
Once upon a time, Britain did not have a huge navy to protect itself, so it contracted that out to companies. This is how you end up with the East India Company. The Crown would declare that certain people or ships were allowed to shoot their enemies without getting in trouble for it. Letters of marque, they called them.
And actually, what happened is the relationship between the Crown and these companies became so strained and unbalanced at some point as history went on over a couple hundred years that eventually Britain just said ‘we're just gonna make a navy.’
But even today, the military has private contractors. Russia very famously has Wagner, and we have them here in the U.S. as well. So I think it's unrealistic to think we're ever going to find a situation where nation-states do not contract on cyber. And this was the thesis of his PhD: there's a spectrum timeline here, because we're competing for talent. And they did it back then, too. How many people really want to live on a boat and sail around the world?
So there was a competition for talent. Private companies could pay more and there's a lot more freedom if you're a private company than if you're in the government apparatus. They can move faster. They can serve multiple clients. They can innovate. They can pull the researchers out of universities, et cetera.
And so the prevailing thought is that for the foreseeable future, there will be private companies serving governments.
I do think one thing you might see again on this timeline is better norms and the international policymaking community might get together and decide, ‘here's what we're okay with and here's what we're not.’
‘Here's what we're going to prosecute through various legal avenues or target with our own capabilities.’ So I think you will see that. And again, this is my personal opinion, not Google's opinion. I think you're already seeing this a little bit with the lawsuits against NSO Group and the sanctions.
Think of them as policy experiments. If they're successful, you'll see more and more of it, and at some point you do it enough that it's just the norm, and then you end up with the whole system shifting around that. I don't think feasibly it's going to go away entirely.
RFN: What other topics are top of mind right now?
HA: I think this work around the Coalition for Secure AI is one. I'm enthusiastic about it because we've brought together so many industry partners that are otherwise competing in this space. But we are all getting together at the same table to help people figure out how to use AI safely.
One of the policy initiatives is, if I'm an enterprise and I want to use AI in my enterprise, what do I need to be thinking about? Should I allow you to log my prompts? What happens if somebody puts sensitive data in there? How do I think about that? We've all been kind of making this up as we go talking to smart lawyers, but actually, if we had playbooks, if we had frameworks for thinking about things, it would be easier for us to just pick up the playbook and go do it.
I love that it's cross industry and collaborative.
Jonathan Greig
is a Breaking News Reporter at Recorded Future News. Jonathan has worked across the globe as a journalist since 2014. Before moving back to New York City, he worked for news outlets in South Africa, Jordan and Cambodia. He previously covered cybersecurity at ZDNet and TechRepublic.