Jacquelyn Schneider, Hoover Institution, Stanford University

Wargames director Jackie Schneider on why cyber is one of 'the most interesting scholarly puzzles'


The Hoover Institution’s director of war gaming and crisis simulation, Jacquelyn Schneider, had certain assumptions when she started organizing war simulations more than a decade ago.

Wargames test how military strategies might work in a real crisis situation: Are leaders responding as expected? Are there certain scenarios that uncover problems the planners hadn’t anticipated?

Schneider’s area of specialty is studying how humans interact with technology, and she found that certain types of disruptions created interesting and unexpected reactions. 

Cyberattacks — like a hack of the electrical grid or a wiper malware infection in military communications systems — elicit a completely different response from wargame participants than traditional physical ones, she said.

“This has been one of the most interesting scholarly puzzles that I have ever worked with,” Schneider told the Click Here podcast.

This interview has been edited for clarity and length.

CLICK HERE: When I think of wargames, I think of tabletop exercises like you see in the movies with miniature tanks and destroyers moving around on a map. Are wargames like that anymore?

JACQUELYN SCHNEIDER: Those games still exist, and there's a whole community that deals with miniatures and wargames. But a lot of what I do is look at decision making and how individuals make decisions in uncertain conditions. So it’s more like a National Security Council sitting around a table and having those difficult conversations about what to do in a crisis.

CH: How is the gaming scenario set up?

JS: Generally, I organize people in groups of four to six, and I tell them to take a role. They're relatively small groups, so you get that really rich discussion about the crisis and the uncertainty. 

Often I'll have a Head of State, a Minister of Finance, a Minister of War, or other similar roles. If we're doing a U.S.-specific war game, we’ll assign a Deputy Secretary of Defense, a Deputy Secretary of State, or a National Security Council advisor. Then, I’ll have them play those roles in the context of the game.

CH: When you look at the players’ decision making process, what kind of information are you drawing from these conversations?

JS: We actually don't get a lot of good data from the conversations they're having at the table, so we rely on a few different data mechanisms. We give them response plans to fill out. The prompts are along the lines of: What are you doing? Why are you doing it? What means do you want to use? What are the risks of the plan that you're coming up with? That represents a kind of group consensus. 

Then we use surveys and rely on asking questions like: Why did you do that? How did you feel about the other players? Did you feel like somebody was very dominant? If we had told you that you had a different type of capability, how would that have affected the way you played the game?

These questions and their answers give us an individual-level behavioral understanding of the decisions made in the game. So both of those data sets are used to understand not only the games’ outcomes, but why and how we got to those outcomes.

CH: And when you conduct these wargames now, is there a role representing the cyber world, like someone from Cyber Command or the Cyber National Mission Force or NSA? 

JS: In some games I’ve run, we had former foreign ministers or former heads of state. And in other games, we had people from Silicon Valley who were leading AI companies or cyber companies. Sometimes we had military cyber officers. Sometimes they were American or from other parts of the world like Latin America or Europe. We had a wide range of expertise which allowed us to go back and ask, “If somebody's a cyber expert, do they play games differently than people who are nuclear experts?”

CH: What was the answer to that question?

JS: They play the crises very similarly. The difference is in the nuance. 

What we found is that those who had expertise in cyber operations were more likely to be nuanced about how they used the cyber capability. So, for example, they'd say, “Hey, I'm going to activate the Reserves or the National Guard in order to augment homeland defense.”

That's something that somebody who's not working the cyber mission set all the time doesn't understand. So that's where we saw big changes in the details. But on a larger level about how we as humans respond to cyber technologies, or how we as humans respond to crisis situations, we actually see very generalizable patterns that extend beyond individuals’ expertise. 

CH: What have we learned since we realized that cyber is actually more in the adversarial mix than it was, say, just when there was Stuxnet?

JS: This has been one of the most interesting scholarly puzzles that I have ever worked with. When I first started working with cyber operations and thinking about their impact on crises, it was in the early 2010s. People were really concerned that the characteristics of the technology, like the uncertainty about attribution, the speed with which an offensive cyber operation can be executed, and the relative ease of doing so, would act as an incendiary that would lead to conflict where there wouldn't otherwise be conflict.

But then I started running wargames. And I realized people react in very unusual ways to cyber operations. I would run experiments and wargames, and I would find that individuals don't respond to cyber operations like they would when faced with a physical threat. 

Instead, they treat cyber operations in this kind of anxiety-inducing way, where the uncertainty about cyber operations actually creates this kind of buffer area where they don't feel an impetus to respond violently to cyber. 

My hunch here is that it's because of the way in which cyber operations create effects. It's virtual. It's hard for us to grasp when it has a significant effect. You don't drop a cyberattack and then something explodes. Even in the worst case scenario, somebody needs to piece back, how did this happen? Was it an accident? Did someone mean to do this, or did something just fail? That act of tracing things back and processing through why something occurs actually creates the emotional distance from the attack.

And that allows us to have this emotional space to make more rational choices.

We know that cyber operations are proliferating; this is occurring all the time. But one of the most fundamentally interesting things is how they have stayed below this threshold of violent conflict, and how they tend to follow violent conflict in the world rather than lead to it.

CH: Can you talk a little bit about the authorities that are needed to launch cyberattacks? There seems to be a misconception that, to deploy an attack, all you need to do is push a button.

JS: Talking about it from an American perspective, there are authorities that we have created that we believe are important to make sure that the use of cyber operations is appropriate and effective. So, any use of force under the U.S. president is tied to some level of authority.

During the Obama administration, those authorities sat at the top: the president had to approve the use of offensive cyber by executive agencies. During the Trump administration, these authorities were delegated down to the combatant command level — in this case, Cyber Command.

Cyber Command had more direct authorities to execute offensive cyber. They still have internal requirements in order to conduct those operations; just because you say that Cyber Command might have an authority, that does not mean that the airman or the lieutenant that's on the keyboard can now conduct an operation. 

They still have to go through a tasking process within the Department of Defense and within Cyber Command that includes lawyers to make sure that what is occurring is appropriate. So, these are all part of the bureaucratic process of making cyber decisions.

CH: Could you make a lot of decisions in advance and decide not to push the button until the appropriate time?

JS: That has been the constant tension with cyber operations. Let's say I develop a GPS-guided bomb. I can put a series of different targeting plans on the shelf. I can say, based on my bomb and based on a target that I've already identified, exactly how I think we could destroy that target, along with my assessment of the potential risks and collateral damage.

Now, when you go into war, you then apply context to that targeting solution. That should give you more information about who might be near the facility, what the stakes are in the conflict, right? And then you make the choice. 

But you can have a target on the shelf, and you still have to make a decision the day of — whether the context dictates that that targeting plan will be effective and appropriate. We thought we could apply those same analogies to cyber operations. At the beginning of cyber operations in the U.S., we thought about building cyberattacks like we would build those kinds of bombing plans. The problem is the network changes all the time.

And that's the big difference between a bomb and a cyberattack. If you're deploying a bomb against a building, for example, it's going to take a while for the parameters of that building to change. That's not true in cyber operations. Something that worked two hours ago might not work when you go to execute it.

So you can't actually put a cyberattack on the shelf. You can try to find vulnerabilities or exploits that are more pervasive or longer lasting, but for the actual decision to execute, you're still going to need that final sign-off about context.

CH: You wrote an article for Foreign Affairs magazine about why the military can't trust AI. What was the message you were trying to send? 

JS: The hope was that we’d give them more nuance. My co-author Max Lamparth is a computer scientist who works closely with artificial intelligence, and we've been working together to use artificial intelligence in the context of war games. What we found in our research, and what Max finds in his broader research with artificial intelligence, is that there's a limit to the technology: it may mimic our conversations and our language, but it is unable to internalize the way we make decisions.

And because of that lack of internalization, you can't ever be sure what the AI is going to do. When it comes to war, that means even if a majority of its responses emulate or mimic the way humans would respond, there's still a chance it will do something very different, like going nuclear early in a conflict, which we actually find in some of this work.

So the concern is that if you are relying on AI, and more specifically large language models, to make strategic decisions, you may find that they escalate in ways that are dangerous or that humans wouldn't. 

This is actually something we've seen in war gaming for a long time — the use of computer models. The computer model wants to win the game. For a human during wartime, winning the war is very complicated. But for the computer model, especially in the early 1960s when we started doing this in war gaming, winning just meant going violent and using nuclear weapons early. The computer didn't understand the moral, the ethical, the human implications.

These new models are better at emulating our human understandings or concerns about the use of violence, but they still don't internalize them. 

CH: Do you get the sense that this may be one of the places where the U.S. and China agree — that AI needs to stay out of weapons launch decisions?

JS: This is where the military is a really varied group. It's organizations and people. I think if you look at what's coming out of the Office of the Secretary of Defense for Policy, they are really in tune with the dangers of broad use of artificial intelligence without thinking about the risks.

So you can see policies about test and evaluation and thinking about safety, and I think those are really responsible and a great model going forward. But some of this technology is already in what we consider smart weapons and how we use them. 

As we increase the automation and the artificial intelligence resident within these weapons, you need to get down to the operator level: Does the operator understand the limitations of the artificial intelligence that's enabling the weapon system? And I think we're not quite there yet.

Having the operators, warfighters, and commanders understand the limits of the technology to the same extent that those policy offices do, I think that's where we need to be concerned.

Dina Temple-Raston

is the Host and Managing Editor of the Click Here podcast as well as a senior correspondent at Recorded Future News. She previously served on NPR’s Investigations team, focusing on breaking news, national security, technology, and social justice, and she created and hosted the award-winning Audible podcast “What Were You Thinking.”

Jade Abdul-Malik

is a producer for the Click Here podcast. She has worked on podcasts with Gimlet Media and Sony Music Entertainment and was a reporter for Georgia Public Broadcasting in Atlanta.

Sean Powers

is a Senior Supervising Producer for the Click Here podcast. He came to Recorded Future News from the Scripps Washington Bureau, where he was the lead producer of "Verified," an investigative podcast. Previously, he was in charge of podcasting at Georgia Public Broadcasting in Atlanta, where he helped launch and produce about a dozen shows.